\begin{document}
\title{Existence of a solution of the $p(x)$-Laplacian problem involving a critical exponent and a Radon measure} \author{Amita Soni and D. Choudhuri}
\date{} \maketitle
\begin{abstract} \noindent In this paper we prove the existence of a nontrivial solution of the $p(x)$-Laplacian equation with a Dirichlet boundary condition. We use the variational method and the concentration-compactness principle involving a positive Radon measure $\mu$. \begin{align*} \begin{split}
-\Delta_{p(x)}u & = |u|^{q(x)-2}u+f(x,u)+\mu\,\,\mbox{in}\,\,\Omega,\\ u & = 0\,\, \mbox{on}\,\, \partial\Omega, \end{split} \end{align*} where $\Omega \subset \mathbb{R}^N$ is a smooth bounded domain, $\mu > 0$ and $1 < p^{-}:=\underset{x\in \Omega}{\text{inf}}\;p(x) \leq p^{+}:= \underset{x\in \Omega}{\text{sup}}\;p(x) < q^{-}:=\underset{x\in \Omega}{\text{inf}}\;q(x)\leq q(x) \leq p^{\ast}(x) < N$. The function $f$ satisfies certain conditions. Here, $q^{\prime}(x)=\frac{q(x)}{q(x)-1}$ is the conjugate of $q(x)$ and $p^{\ast}(x)=\frac{Np(x)}{N-p(x)}$ is the Sobolev conjugate of $p(x)$. \end{abstract} \begin{flushleft}
{\bf Keywords}:~ Radon measure, concentration compactness principle, truncation function. \end{flushleft}
\section{Introduction} Existence results for problems involving the critical exponent have been studied by many researchers; for example, readers may refer to \cite{Napoli}, \cite{Azorero}, \cite{Bonder1}, \cite{Zhang}, \cite{Silva} and the references therein. In \cite{Napoli} the authors proved the existence of multiple solutions in the critical case for the $p$-Laplacian operator by using a manifold technique. In \cite{Azorero} the authors proved the existence and non-existence of solutions of a problem involving the critical exponent, using the concentration-compactness principle for different values of $\lambda$. In \cite{Bonder1} the authors dealt with the $p(x)$-Laplacian operator with a critical exponent and applied the concentration-compactness principle to prove the existence of a solution. Many problems with a measure term have also been solved in variable exponent spaces; see for example \cite{Bend}, \cite{Azroul} and other references therein. In \cite{Bend} the authors showed the existence of a distributional solution, and in \cite{Azroul} the authors showed the existence of an entropy solution. In \cite{Choudhuri} we proved the existence of multiple solutions of a $p$-Laplacian problem with a measure term and without the Ambrosetti-Rabinowitz condition. Motivated by that paper, we consider a similar type of problem here. In this paper we extend the result by proving the existence of a nontrivial solution of a $p(x)$-Laplacian problem involving an exponent $q(x)$ which is allowed to be critical, in a bounded domain with a positive Radon measure. We mainly use the variational method and the concentration-compactness principle. The problem addressed in this article is as follows. \begin{align*} \begin{split}
(P):~~-\Delta_{p(x)}u & = |u|^{q(x)-2}u+f(x,u)+\mu\,\,\mbox{in}\,\,\Omega,\\ u & = 0\,\, \mbox{on}\,\, \partial\Omega,\label{main_prob} \end{split} \end{align*} where $\mu > 0$ is a Radon measure and $1 < p(x) \leq \underset{x\in \Omega}{\text{sup}\;}p(x):= p^{+} < q^{-}:=\underset{x\in \Omega}{\text{inf}}\;q(x)\leq q(x) \leq p^{\ast}(x) < N$.\\ The problem $(P)$ is new in the sense that we tackle the presence of a Radon measure and a variable critical exponent together. The conditions assumed on the function $f$ are as follows.\\ $(f_{1})\; f(x,0)=0$ and $f$ is measurable with respect to the first variable and continuous with respect to the second variable.\\ $(f_{2})$\; there exists $c_{1}\in [p^{+},q^{-})$ such that $0 < c_{1}F(x,t)\leq f(x,t)t$ for a.e. $x \in \Omega$ and all $t\neq 0$, where $F(x,t) := \int_0^{t}f(x,s)ds$ is the primitive of $f(x,\cdot)$.\\ $(f_{3})\; \underset{\mid t\mid \rightarrow \infty}{\text{lim}}\frac{f(x,t)}{{\mid t\mid}^{q(x)-1}}= 0$ uniformly for a.e. $x \in \Omega$.\\ An example of a function satisfying the above conditions is
$f(x,t)=|t|^{r(x)-1}$, where $c_{1} < r(x) < q(x)$.\\
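For $t\geq 0$ the conditions can be checked directly for this example (a sketch, under the stated restriction $c_{1}\leq r(x)$):

```latex
% Sketch verification for f(x,t)=t^{r(x)-1}, t \ge 0:
F(x,t)=\int_{0}^{t}s^{r(x)-1}\,ds=\frac{t^{r(x)}}{r(x)},
\qquad
f(x,t)\,t=t^{r(x)}=r(x)\,F(x,t)\ \ge\ c_{1}\,F(x,t),
```

so $(f_{2})$ holds with the given $c_{1}$, while $f(x,t)/t^{q(x)-1}=t^{r(x)-q(x)}\rightarrow 0$ as $t\rightarrow\infty$ because $r(x)<q(x)$, which is $(f_{3})$.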
Throughout this article, we denote the Lebesgue measure of a measurable subset $E$ of $\Omega$ by $|E|$ and the absolute value of a real number $a$ by $|a|$. We write $\|\cdot\|$ for $\|\cdot\|_{W_{0}^{1,p(x)}(\Omega)}$.
\begin{theorem} Suppose that $(f_{1})-(f_{3})$ hold. Then problem $(P)$ possesses a nontrivial weak solution. \end{theorem}
\section{Preliminaries} \subsection{Definitions} \begin{definition}\label{defn1}
Let $(\mu_n)$ be a bounded sequence of measures in
$\mathfrak{M}(\Omega)$. We say that $(\mu_n)$ converges to a measure $\mu \in \mathfrak{M}(\Omega)$ in the sense of measure if
$$\displaystyle{\int_{\Omega}\phi d\mu_n} \rightarrow
\int_{\Omega}\phi d\mu\,\, \,\,\,\forall\,\,\phi\in
C_0(\bar{\Omega}).$$ We denote this convergence by $\mu_n
\xrightharpoonup{} \mu$. The topology defined via this weak
convergence is metrizable and a bounded sequence with respect to
this topology is pre-compact. \end{definition} \begin{definition} The Marcinkiewicz space ${M}^q(\Omega)$ \cite{Benilan} (or weak $L^q(\Omega)$ space) is defined, for every $0 < q <\infty$, as the space of all measurable functions $f:\Omega\rightarrow \mathbb{R}$ whose distribution function satisfies an estimate of the form
$$|\{x\in \Omega:|f(x)|>t\}|\leq \frac{C}{t^q},\hspace{0.4cm}t>0,C<\infty.$$ \end{definition} For bounded $\Omega$ we have ${M}^q\subset {M}^{\bar{q}}$ if $q\geq \bar{q}$, for some fixed positive $\bar{q}$. We recall here the following useful continuous embeddings \begin{equation}\label{mar} L^q(\Omega)\hookrightarrow {M}^q(\Omega)\hookrightarrow L^{q-\epsilon}(\Omega), \end{equation} for every $1<q<\infty$ and $0<\epsilon<q-1$.\\
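As a concrete numerical illustration (not from the paper): on $\Omega=(0,1)$ the function $u(x)=x^{-1/q}$ satisfies the weak-$L^{q}$ distribution estimate with $C=1$ while failing to lie in $L^{q}(\Omega)$. The helper name and the choice $q=2$ below are ours.

```python
# Numerical illustration (not from the paper): on Omega = (0,1) the
# function u(x) = x^(-1/q) lies in the Marcinkiewicz space M^q(Omega)
# but not in L^q(Omega).  Helper name and q = 2 are our own choices.
import math

def dist_function(t, q):
    """Exact Lebesgue measure of {x in (0,1) : x^(-1/q) > t}.
    Since x^(-1/q) > t  <=>  x < t^(-q), the measure is min(1, t^(-q))."""
    return min(1.0, t ** (-q)) if t > 0 else 1.0

q = 2.0
# weak-L^q estimate |{|u| > t}| <= C / t^q holds with C = 1:
for t in [0.5, 1.0, 2.0, 10.0]:
    assert dist_function(t, q) <= 1.0 / t ** q + 1e-12

# ...whereas int_0^1 u^q dx = int_0^1 dx/x diverges: the truncated
# integrals int_eps^1 dx/x = -log(eps) grow without bound.
truncated = [-math.log(eps) for eps in (1e-2, 1e-4, 1e-8)]
assert truncated[0] < truncated[1] < truncated[2]
```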
\subsection{Variable exponent Sobolev space}
For each open subset $\Omega \subset \mathbb{R}^{N}(N \geq 2)$, we define $C_{+}(\overline{\Omega})=\lbrace p\;|\;p\in C(\overline{\Omega}), p(x) > 1\; {\text{for any}}\; x\in \overline{\Omega}\rbrace$ and $1 < p^{-}:=\underset{x\in \Omega}{\text{inf}}\;p(x)\leq \underset{x\in\Omega}{\text{sup}}\;p(x)=: p^{+} < N.$ The variable exponent Lebesgue space $L^{p(x)}(\Omega)$ is defined by\\
$$L^{p(x)}(\Omega)=\left\lbrace u:\Omega\rightarrow\mathbb{R}\;|\;u \;{\text{is measurable and}} \int_{\Omega}|u|^{p(x)}dx < \infty\right\rbrace$$ endowed with the norm (the Luxemburg norm)
$|u|_{p(x)}={\text{inf}}\lbrace \lambda > 0\;|\;\int_{\Omega}|\frac{u}{\lambda}|^{p(x)}dx\leq 1\rbrace$.\\
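Numerically, the infimum defining the Luxemburg norm can be approximated by bisection, since $\lambda\mapsto\int_{\Omega}|u/\lambda|^{p(x)}dx$ is decreasing in $\lambda$. The following sketch is illustrative only; the helper names, grid discretization and sample data are our own choices.

```python
# Illustrative sketch (not from the paper): approximate the Luxemburg norm
#   |u|_{p(x)} = inf{ lam > 0 : int_Omega |u/lam|^{p(x)} dx <= 1 }
# by bisection on lam, on Omega = (0,1) discretized into N uniform cells.
# All helper names and sample data below are our own choices.

def modular(u, p, h):
    """rho(u) = int_Omega |u(x)|^{p(x)} dx on a uniform grid of width h."""
    return sum((abs(ui) ** pi) * h for ui, pi in zip(u, p))

def luxemburg_norm(u, p, h, tol=1e-10):
    """Bisection works because lam -> rho(u/lam) is strictly decreasing."""
    lo, hi = tol, 1.0
    while modular([ui / hi for ui in u], p, h) > 1.0:
        hi *= 2.0                       # bracket the norm from above
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if modular([ui / mid for ui in u], p, h) > 1.0:
            lo = mid                    # norm is larger than mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N = 1000
h = 1.0 / N
p = [1.5 if i < N // 2 else 3.0 for i in range(N)]   # p^- = 1.5, p^+ = 3
u = [0.4] * N                                        # a small constant u

lam = luxemburg_norm(u, p, h)
rho = modular(u, p, h)
# unit-ball characterization: rho(u / |u|_{p(x)}) = 1
assert abs(modular([ui / lam for ui in u], p, h) - 1.0) < 1e-6
# and, since |u|_{p(x)} < 1:  |u|^{p^+} <= rho(u) <= |u|^{p^-}
assert lam < 1.0 and lam ** 3.0 <= rho <= lam ** 1.5
```

The final assertions preview the modular-norm inequalities recorded in the proposition below.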
We define the variable exponent Sobolev space as $$W^{1,p(x)}(\Omega)=\lbrace u\in L^{p(x)}(\Omega)\;|\;|\nabla u|\in L^{p(x)}(\Omega)\rbrace$$ with the norm $\|u\|_{1,p(x)}=|u|_{p(x)}+|\nabla u|_{p(x)}$. With these norms, $L^{p(x)}(\Omega)$ and $W^{1,p(x)}(\Omega)$ are separable reflexive Banach spaces (\cite{Kovacik}). For $p(x)\equiv p$, the $p(x)$-Laplacian reduces to the $p$-Laplacian. \begin{proposition}
Set $\rho(u):=\int_{\Omega}|u(x)|^{p(x)}dx$. For $u\in L^{p(x)}(\Omega)$ and $(u_{n})_{n\in\mathbb{N}}\subset L^{p(x)}(\Omega)$, we have \begin{itemize}
\item $u\neq 0\Rightarrow \big(\|u\|_{L^{p(x)}(\Omega)}=\lambda \Leftrightarrow \rho(\frac{u}{\lambda})=1\big)$,
\item $\|u\|_{L^{p(x)}(\Omega)} < 1(=1;> 1)\Leftrightarrow \rho(u) < 1(=1; >1)$,
\item $\|u\|_{L^{p(x)}(\Omega)} < 1\Rightarrow \|u\|_{L^{p(x)}(\Omega)}^{p^{+}}\leq \rho(u)\leq \|u\|_{L^{p(x)}(\Omega)}^{p^{-}},$
\item $\|u\|_{L^{p(x)}(\Omega)} > 1\Rightarrow \|u\|_{L^{p(x)}(\Omega)}^{p^{-}}\leq \rho(u)\leq \|u\|_{L^{p(x)}(\Omega)}^{p^{+}},$
\item $\underset{n\rightarrow\infty}{\text{lim}}||u_{n}||_{L^{p(x)}(\Omega)}=0(\infty)\Leftrightarrow \underset{n\rightarrow\infty}{\text{lim}}\rho(u_{n})=0(\infty)$. \end{itemize} \end{proposition} We state the generalized H\"{o}lder inequality and embedding results in the following propositions (\cite{Fan}, \cite{Kovacik}, \cite{Zhao}, \cite{Diening}). \begin{proposition} For any $u\in L^{p(x)}(\Omega)$ and $v\in L^{p^{\prime}(x)}(\Omega)$, where $L^{p^{\prime}(x)}(\Omega)$ is the conjugate space of $L^{p(x)}(\Omega)$ such that $\frac{1}{p(x)}+\frac{1}{p^{\prime}(x)}=1$,\\
$\big|\int_{\Omega}uv\;dx\big|\leq \left(\frac{1}{p^{-}}+\frac{1}{(p^{\prime})^{-}}\right) \|u\|_{p(x)}\|v\|_{p^{\prime}(x)}$. \end{proposition}
\begin{proposition} (i) If $q\in C_{+}(\overline{\Omega})$ and $q(x) < p^{\ast}(x)$ for all $x\in\overline{\Omega}$, then the embedding $W^{1,p(x)}(\Omega)\hookrightarrow L^{q(x)}(\Omega)$ is continuous and compact.\\
(ii) There exists a constant $c > 0$ such that $\|u\|_{p(x)} \leq c\|\nabla u\|_{p(x)}\;\forall\; u\in W_{0}^{1,p(x)}(\Omega)$. \end{proposition}
\subsection{Functional analytic setup} We first consider a sequence of problems $(P_{n})$ which are as follows. \begin{align*} \begin{split}
-\Delta_{p(x)}u & = |u|^{q(x)-2}u+f(x,u)+\mu_{n}\,\,\mbox{in}\,\,\Omega,\\ u & = 0\,\, \mbox{on}\,\, \partial\Omega, \end{split} \end{align*} where the $\mu_{n}$ are smooth functions such that $\mu_{n} \rightharpoonup \mu$ in the sense of Definition \ref{defn1}.\\ The energy functional corresponding to the sequence of problems $(P_{n})$ is given by \begin{equation*}
I_{n}(u)= \int_{\Omega}\frac{{|\nabla u|}^{p(x)}}{p(x)}dx -\int_{\Omega} \frac{{|u|}^{q(x)}}{q(x)}dx - \int_{\Omega}F(x,u) dx - \int_{\Omega}u d\mu_{n}. \end{equation*} The Fr\'{e}chet derivative of $I_{n}$ is defined as \begin{equation*}
\langle I_{n}^{\prime}(u),v\rangle = \int_{\Omega}{|\nabla u|}^{p(x)-2}\nabla u\cdot\nabla v\, dx - \int_{\Omega}{|u|}^{q(x)-2}uv\, dx - \int_{\Omega}f(x,u)v\, dx - \int_{\Omega}\mu_{n}v\, dx \end{equation*}
$\forall u,v \in T$, where $T=W^{1, p(x)}(\Omega)\cap C_0(\bar{\Omega})$, $C_0(\bar{\Omega})=\{\varphi\in C(\bar{\Omega}):\varphi|_{\partial\Omega}=0\}$ and $C(\bar{\Omega})$ will denote the space of continuous functions over $\bar{\Omega}$. We now define the corresponding energy functional of the problem $(P)$ as \begin{equation*}
I(u)=\int_{\Omega}\frac{{|\nabla u|}^{p(x)}}{p(x)}dx -\int_{\Omega} \frac{{|u|}^{q(x)}}{q(x)}dx - \int_{\Omega}F(x,u) dx - \int_{\Omega}u d\mu \end{equation*} and its Fr\'{e}chet derivative as \begin{equation*}
\langle I^{\prime}(u),v\rangle = \int_{\Omega}{|\nabla u|}^{p(x)-2}\nabla u\cdot\nabla v\, dx -\int_{\Omega}{|u|}^{q(x)-2}uv\, dx - \int_{\Omega}f(x,u)v\, dx - \int_{\Omega}v\, d\mu \end{equation*} for every $u,v\in T^{\prime}$, where $T^{\prime}=W^{1, s(x)}(\Omega)\cap C_0(\bar{\Omega})$ and $1\leq s(x) < s={\text{min}}\left\lbrace 1-\frac{1}{\gamma},{\frac{(\gamma -1)p^{-}}{2\gamma-1}}\right\rbrace$, the exponent $\gamma$ being the one introduced in Section 3. \begin{definition} $u\in W_{0}^{1,s(x)}(\Omega)$ is said to be a weak solution of the problem $(P)$ if \begin{equation*}
\int_{\Omega}|\nabla u|^{p(x)-2}\nabla u\cdot\nabla\varphi\, dx - \int_\Omega |u|^{q(x)-2}u\varphi\, dx - \int_\Omega f(x,u)\varphi\, dx - \int_{\Omega}\varphi\, d\mu =0 \end{equation*} for all $\varphi \in T^{\prime}$. \end{definition}
\section{Existence Results} To prove the main result of this paper, stated as Theorem 1.1, we first need to prove a few lemmas related to the mountain pass theorem and the Palais-Smale condition. It is clear that $I_{n}$ is a $C^{1}$ functional for all $n \geq 1$.
\begin{lemma}\label{lem1} The functional $I_{n}$ satisfies mountain pass geometry in the sense that: \begin{itemize} \item $I_{n}(0)=0$
\item $\exists\; r, \eta > 0$ such that $I_{n}(u)\geq\eta$ if $\|u\| = r$.
\item $\exists\; u$ with $\|u\| > r$ such that $I_{n}(u)\leq 0$. \end{itemize} \end{lemma}
\begin{proof} $I_{n}(0)=0$ is obvious. To prove 2., we need the assumptions $(f_{2})$ and $(f_{3})$, from which we obtain \begin{align*} \begin{split} c_{1}\int_{\Omega}F(x,u)dx&\leq \int_{\Omega}f(x,u)u dx\\
&\leq \int_{\Omega}|f(x,u)u|dx\\
&\leq \epsilon\int_{\Omega}|u|^{q(x)}dx+m(\epsilon)\\
&\leq(\epsilon+m(\epsilon))\int_{\Omega}|u|^{q(x)}dx \end{split} \end{align*}
This implies $\int_{\Omega}F(x,u)\leq \left(\frac{\epsilon+m(\epsilon)}{c_{1}}\right)\int_{\Omega}|u|^{q(x)}dx.$
Choose $\|u\|=r$ sufficiently small so that $\int_{\Omega}|u|^{q(x)}dx\leq \|u\|_{q(x)}^{q^-}$, since $\|u\|=r < 1$. Now, using the Poincar\'{e} inequality, the H\"{o}lder inequality and the continuous embedding of $W_{0}^{1,p(x)}(\Omega)$ into $L^{q(x)}(\Omega)$, we have \begin{align*} \begin{split}
I_{n}(u)&=\int_{\Omega}\frac{{|\nabla u|}^{p(x)}}{p(x)}dx-\int_{\Omega}\frac{|u|^{q(x)}}{q(x)}dx-\int_{\Omega}F(x,u)dx-\int_{\Omega}\mu_{n}udx\\
&\geq \frac{c}{p^{+}}|r|^{p^{+}}-\frac{1}{q^{-}}|r|_{q(x)}^{q^{-}}-\left(\frac{\epsilon+m(\epsilon)}{c_{1}}\right)|r|_{q(x)}^{q^{-}}-\|\mu_{n}\|_{q^{\prime}(x)}\|u\|_{q(x)}\\
&\geq \frac{c}{p^{+}}|r|^{p^{+}}-\frac{1}{q^{-}}|r|^{q^{-}}-\left(\frac{\epsilon+m(\epsilon)}{c_{1}}\right)|r|^{q^{-}}-\|\mu_{n}\|_{q^{\prime}(x)}|r|^{q^{-}}\\
&=\frac{cr^{p^{+}}}{p^{+}}-r^{q^{-}}\left\lbrace\frac{1}{q^{-}}+\left(\frac{\epsilon+m(\epsilon)}{c_{1}}\right)+\|\mu_{n}\|_{q^{\prime}(x)}\right\rbrace \end{split} \end{align*} Since $q^{-} > p^{+}$, for $r$ small enough we obtain $I_{n}(u)\geq\eta$ for some $\eta > 0$. We can prove 3. using the assumption $(f_{2})$: for $t > 0$ and $u\neq 0$ consider \begin{align*} \begin{split}
I_{n}(tu)&= \int_{\Omega}\frac{1}{p(x)}{|\nabla tu|}^{p(x)}dx - \int_{\Omega}\frac{1}{q(x)}{|tu|}^{q(x)}dx - \int_{\Omega}F(x,tu) dx - \int_{\Omega}tu d\mu_{n}\\
&= \int_{\Omega}\frac{t^{p(x)}}{p(x)}{|\nabla u|}^{p(x)}dx - \int_{\Omega}\frac{t^{q(x)}}{q(x)}{|u|}^{q(x)}dx - \int_{\Omega}F(x,tu) dx - \int_{\Omega}tu d\mu_{n}\\
&\leq \int_{\Omega}\frac{t^{p(x)}}{p(x)}{|\nabla u|}^{p(x)}dx - \int_{\Omega}\frac{t^{q(x)}}{q(x)}{|u|}^{q(x)}dx - \int_{\Omega}tu d\mu_{n}, \end{split} \end{align*} since $F\geq 0$ by $(f_{2})$. This implies, for $t\geq 1$, \begin{align}
I_{n}(tu)\leq t^{p^{+}}\int_{\Omega}\frac{1}{p(x)}{|\nabla u|}^{p(x)}dx-t^{q^{-}}\int_{\Omega}\frac{1}{q(x)}{|u|}^{q(x)}dx-t\int_{\Omega}\mu_{n}u dx. \end{align} Dividing (3.1) by $t^{p^+}$ and letting $t\rightarrow \infty$, we get $I_{n}(tu)\rightarrow -\infty$, since $q^{-} > p^{+}$.\\ Hence, $I_{n}$ satisfies the hypotheses of the mountain pass theorem. \end{proof}
\begin{lemma}\label{lem2} The functional $I_{n}$ satisfies the Palais-Smale condition. \end{lemma}
\begin{proof}
Let $(u_{m,n})$ be a Palais-Smale sequence, i.e., $I_{n}(u_{m,n})\rightarrow c$ and $I_{n}^{\prime}(u_{m,n})\rightarrow 0$ in $(W_{0}^{1,p(x)}(\Omega))^{\prime}$ as $m\rightarrow\infty$. We first show that $(u_{m,n})$ is bounded in $W_{0}^{1,p(x)}(\Omega)$, arguing by contradiction. Suppose $\|u_{m,n}\|\rightarrow \infty$ as $m\rightarrow\infty$. Then we have \begin{align*} \begin{split}
I_{n}(u_{m,n})-\frac{1}{c_{1}}\langle I^{\prime}(u_{m,n}),u_{m,n}\rangle&=\int_{\Omega}\frac{|\nabla u_{m,n}|^{p(x)}}{p(x)}dx-\frac{1}{c_{1}} \int_{\Omega}|\nabla u_{m,n}|^{p(x)}dx-\int_{\Omega}\frac{|u_{m,n}|^{q(x)}}{q(x)}dx\\
&+\frac{1}{c_{1}}\int_{\Omega}|u_{m,n}|^{q(x)}dx-\int_{\Omega}F(x,u_{m,n})dx -\int_{\Omega}u_{m,n}\mu_{n}dx\\ &+\frac{1}{c_{1}}\int_{\Omega}f(x,u_{m,n})u_{m,n}dx+\frac{1}{c_{1}}\int_{\Omega}u_{m,n}\mu_{n}dx\\
\geq& \frac{1}{p^{+}}\int_{\Omega}{|\nabla u_{m,n}|^{p(x)}}-\frac{1}{c_{1}} \int_{\Omega}|\nabla u_{m,n}|^{p(x)}dx-\frac{1}{q^{-}}\int_{\Omega}|u_{m,n}|^{q(x)}dx\\
&+\frac{1}{c_{1}}\int_{\Omega}|u_{m,n}|^{q(x)}dx-\int_{\Omega}F(x,u_{m,n})dx -\int_{\Omega}u_{m,n}\mu_{n}dx\\ &+\frac{1}{c_{1}}\int_{\Omega}f(x,u_{m,n})u_{m,n}dx+\frac{1}{c_{1}}\int_{\Omega}u_{m,n}\mu_{n}dx\\
=&\left(\frac{1}{p^{+}}-\frac{1}{c_{1}}\right)\int_{\Omega}{|\nabla u_{m,n}|^{p(x)}}dx+\left(\frac{1}{c_{1}}-\frac{1}{q^{-}}\right)\int_{\Omega}|u_{m,n}|^{q(x)}dx\\ &+\frac{1}{c_{1}}\left(\int_{\Omega}f(x,u_{m,n})u_{m,n}dx-c_{1}\int_{\Omega}F(x,u_{m,n})dx\right)\\ &-\left(1-\frac{1}{c_{1}}\right)\int_{\Omega}u_{m,n}\mu_{n}dx \end{split} \end{align*} Using the assumption $(f_{2})$, we get \begin{align*} \begin{split}
I_{n}(u_{m,n})-\frac{1}{c_{1}}\langle I^{\prime}(u_{m,n}),u_{m,n}\rangle\geq\left(\frac{1}{p^{+}}-\frac{1}{c_{1}}\right)\int_{\Omega}{|\nabla u_{m,n}|^{p(x)}}dx-A\int_{\Omega}u_{m,n}\mu_{n}dx \end{split} \end{align*}
where $A=\left(1-\frac{1}{c_{1}}\right) > 0$. Furthermore, on applying the Poincar\'{e} inequality, the H\"{o}lder inequality, the embedding of $W_{0}^{1,p(x)}(\Omega)$ into $L^{p(x)}(\Omega)$ and the fact that $\|\nabla u\|_{p(x)}$ and $\|u\|_{1,p(x)}$ are equivalent norms on $W_{0}^{1,p(x)}(\Omega)$, we get \begin{align} \begin{split}
I_{n}(u_{m,n})-\frac{1}{c_{1}}\langle I^{\prime}(u_{m,n}),u_{m,n}\rangle\geq c^{\prime}\left(\frac{1}{p^{+}}-\frac{1}{c_{1}}\right)\|u_{m,n}\|^{p^{-}}-A\|\mu_{n}\|_{p^{\prime}(x)}\|u_{m,n}\| \end{split} \end{align}
Now, dividing both sides of (3.2) by $\|u_{m,n}\|$ and letting $m\rightarrow\infty$, we get $0\geq\infty$ (as $p^{-} > 1$), which is absurd. Hence $(u_{m,n})$ is bounded in $W_{0}^{1,p(x)}(\Omega)$. Since this is a reflexive space, there exists a subsequence, still denoted $(u_{m,n})$, which converges weakly to some $u_{n}$ in $W_{0}^{1,p(x)}(\Omega)$. To upgrade this weak convergence to strong convergence, we use the concentration-compactness principle for variable exponents (see \cite{Bonder2}, Theorem 1.1), from which we have\\
$|u_{m,n}|^{q(x)}\rightharpoonup\nu=|u_{n}|^{q(x)}+\sum_{i\in J}\nu_{i}\delta_{x_{i}}$\\
$|\nabla u_{m,n}|^{p(x)}\rightharpoonup \mu\geq |\nabla u_{n}|^{p(x)}+\sum_{i\in J}\mu_{i}\delta_{x_{i}}$\\
$ S\nu_{i}^{\frac{1}{p^{\ast}(x_{i})}}\leq\mu_{i}^{\frac{1}{p(x_{i})}}\;\forall\; i\in J$, where $S :=\underset{\phi\in C_{0}^{\infty}(\Omega)}{\text {inf}}\frac{{\parallel |\nabla\phi| \parallel}_{L^{p(x)}(\Omega)}}{\|\phi\|_{L^{q(x)}(\Omega)}}$ and $J$ is a finite set. Here $(\nu_{i})_{i\in J}$ and $(\mu_{i})_{i\in J}$ are positive numbers and the points $(x_{i})_{i\in J}$ belong to the critical set\\
A=\lbrace x\in \Omega: q(x)=p^{\ast}(x)\rbrace$. \begin{claim*} $J$ is empty. \end{claim*} \begin{proof} Suppose $J\neq \emptyset$. Define $\psi_{i,\epsilon}(x)=\psi\left(\frac{x-x_{i}}{\epsilon}\right)$, where $\psi$ is a smooth cut-off function with $\psi\equiv 1$ on $B_{1}(0)$ and support in $B_{2}(0)$. \begin{align*} \begin{split} 0=&\langle I^{\prime}(u_{m,n}),u_{m,n}\psi_{i,\epsilon}\rangle\\
=&\int_{\Omega}{|\nabla u_{m,n}|}^{p(x)-2}\nabla u_{m,n}\nabla(u_{m,n}\psi_{i,\epsilon})dx-\int_{\Omega}|u_{m,n}|^{q(x)-2}u_{m,n} u_{m,n}\psi_{i,\epsilon}dx\\ &-\int_{\Omega}f(x,u_{m,n})u_{m,n}\psi_{i,\epsilon}dx-\int_{\Omega}\mu_{n}u_{m,n}\psi_{i,\epsilon}dx\\
=& \int_{\Omega}{|\nabla u_{m,n}|}^{p(x)}\psi_{i,\epsilon}dx+\int_{\Omega}({|\nabla u_{m,n}|}^{p(x)-2}\nabla u_{m,n}\cdot\nabla\psi_{i,\epsilon})u_{m,n}dx\\
&-\int_{\Omega}|u_{m,n}|^{q(x)}\psi_{i,\epsilon}dx-\int_{\Omega}f(x,u_{m,n})u_{m,n}\psi_{i,\epsilon}dx-\int_{\Omega}\mu_{n}u_{m,n}\psi_{i,\epsilon}dx \end{split} \end{align*} In addition to this we have \begin{align} \begin{split} 0=\underset{m\rightarrow\infty}{\text{lim}}&\langle I^{\prime}(u_{m,n}),\phi\rangle\\
=\underset{m\rightarrow\infty}{\text{lim}}&\left[\int_{\Omega}{|\nabla u_{m,n}|}^{p(x)-2}\nabla u_{m,n}\nabla\phi\;dx-\int_{\Omega}|u_{m,n}|^{q(x)-2}u_{m,n}\phi\;dx\right.\\ &\left.-\int_{\Omega}f(x,u_{m,n})\phi\;dx-\int_{\Omega}\mu_{n}\phi\;dx\right] \end{split} \end{align} We also have \begin{align} \begin{split}
|\nabla u_{m,n}|^{p(x)-2}\nabla u_{m,n}&\rightharpoonup |\nabla u_{n}|^{p(x)-2}\nabla u_{n}\; \text{in}\; L^{p^{\prime}(x)}(\Omega)\\
|u_{m,n}|^{q(x)-2}u_{m,n}&\rightharpoonup |u_{n}|^{q(x)-2}u_{n}\;\; \;\;\;\;\;\text{in}\; L^{q^{\prime}(x)}(\Omega) \end{split} \end{align} Since $u_{m,n}\rightharpoonup u_{n}$ in $W_{0}^{1,p(x)}(\Omega)$ and $W_{0}^{1,p(x)}(\Omega)$ is compactly embedded in $L^{p(x)}(\Omega)$, we have $u_{m,n}\rightarrow u_{n}$ in $L^{p(x)}(\Omega)$, and hence, up to a subsequence, $u_{m,n}\rightarrow u_{n}$ a.e. in $\Omega$. Also, by the continuity of $f$ with respect to the second variable, we conclude
$f(x,u_{m,n})\rightarrow f(x,u_{n})$ a.e. in $\Omega$, which implies $\int_{\Omega}f(x,u_{m,n})\phi\, dx\rightarrow\int_{\Omega}f(x,u_{n})\phi\, dx$ by the dominated convergence theorem.\\ Hence from (3.3), \begin{align*} \begin{split}
\int_{\Omega}{|\nabla u_{n}|}^{p(x)-2}\nabla u_{n}\nabla\phi\;dx-\int_{\Omega}|u_{n}|^{q(x)-2}u_{n}\phi\;dx-\int_{\Omega}f(x,u_{n})\phi\;dx-\int_{\Omega}\mu_{n}\phi\;dx=0 \end{split} \end{align*} implying that $u_{n}$ is a weak solution to the sequence of problems $(P_{n})$. Thus, \begin{align} \begin{split}
0=\langle I^{\prime}(u_{n}),u_{n}\psi_{i,\epsilon}\rangle=&\int_{\Omega}|\nabla u_{n}|^{p(x)}\psi_{i,\epsilon}dx+\int_{\Omega}({|\nabla u_{n}|}^{p(x)-2}\nabla u_{n}\cdot\nabla\psi_{i,\epsilon})u_{n}dx\\
&-\int_{\Omega}|u_{n}|^{q(x)}\psi_{i,\epsilon}dx-\int_{\Omega}f(x,u_{n})u_{n}\psi_{i,\epsilon}dx-\int_{\Omega}\mu_{n}u_{n}\psi_{i,\epsilon}dx \end{split} \end{align} Substituting $\phi=u_{m,n}\psi_{i,\epsilon}$ in (3.3) and then subtracting (3.5) from it, we get \begin{equation} \begin{split} 0=\underset{m\rightarrow\infty}{\text{lim}}&\langle I^{\prime}(u_{m,n}),u_{m,n}\psi_{i,\epsilon}\rangle-\langle I^{\prime}(u_{n}),u_{n}\psi_{i,\epsilon}\rangle\\
=\underset{m\rightarrow\infty}{\text{lim}}&\left[\int_{\Omega}|\nabla u_{m,n}|^{p(x)}\psi_{i,\epsilon}dx+\int_{\Omega}({|\nabla u_{m,n}|}^{p(x)-2}\nabla u_{m,n}\cdot\nabla\psi_{i,\epsilon})u_{m,n}\;dx\right.\\
&-\int_{\Omega}|u_{m,n}|^{q(x)}\psi_{i,\epsilon}\;dx-\int_{\Omega}f(x,u_{m,n})u_{m,n}\psi_{i,\epsilon}\;dx\left.-\int_{\Omega}\mu_{n}u_{m,n}\psi_{i,\epsilon}\;dx\right]\\
&-\left[\int_{\Omega}|\nabla u_{n}|^{p(x)}\psi_{i,\epsilon}dx-\int_{\Omega}\mu_{n}u_{n}\psi_{i,\epsilon}dx-\int_{\Omega}|u_{n}|^{q(x)}\psi_{i,\epsilon}dx\right.\\
&\left.-\int_{\Omega}f(x,u_{n})u_{n}\psi_{i,\epsilon}dx+\int_{\Omega}({|\nabla u_{n}|}^{p(x)-2}\nabla u_{n}\cdot\nabla\psi_{i,\epsilon})u_{n}dx\right] \end{split} \end{equation}
From $(f_{3})$ we have $|f(x,t)t| \leq \frac{\epsilon}{2\tilde{c}}|t|^{q(x)}+m(\epsilon)$ for all $t \in\mathbb{R}$ and a.e. $x\in\Omega$. Let $\tilde{c}$ be a uniform bound for $\|u_{m,n}\|_{q(x)}^{q^{+}}$. Choose $\delta=\frac{\epsilon}{2m(\epsilon)} > 0$ and $F\subseteq\Omega$ such that $|F| < \delta$. Then \begin{align*} \begin{split}
\bigg|\int_{F}f(x,u_{m,n})u_{m,n}dx\bigg| &\leq \int_{F}\big|f(x,u_{m,n})u_{m,n}\big|dx\\
&\leq \int_{F}m(\epsilon)dx+\frac{\epsilon}{2\tilde{c}}\int_{F}|u_{m,n}|^{q(x)}dx\\
&\leq m(\epsilon)|F|+\frac{\epsilon}{2\tilde{c}}\|u_{m,n}\|_{q(x)}^{q^{+}}\\ & < m(\epsilon)\frac{\epsilon}{2m(\epsilon)}+\frac{\epsilon}{2\tilde{c}}\tilde{c}\\ &=\epsilon \end{split} \end{align*} Hence, the family $\lbrace f(x,u_{m,n})u_{m,n}:m\in\mathbb{N}\rbrace$ is equi-absolutely integrable, and therefore by the Vitali convergence theorem $\int_{\Omega}f(x,u_{m,n})u_{m,n}dx\rightarrow \int_{\Omega}f(x,u_{n})u_{n}dx$ as $ m\rightarrow\infty$. This implies $\int_{\Omega}f(x,u_{m,n})u_{m,n}\psi_{i,\epsilon}dx\rightarrow \int_{\Omega}f(x,u_{n})u_{n}\psi_{i,\epsilon}dx$ as $ m\rightarrow\infty$. We further have, from (3.4) and the weak convergence of $(u_{m,n})$, that \begin{align*} \begin{split}
\int_{\Omega}|\nabla u_{m,n}|^{p(x)-2}\nabla u_{m,n}\cdot\nabla\psi_{i,\epsilon}dx&\rightarrow \int_{\Omega}|\nabla u_{n}|^{p(x)-2}\nabla u_{n}\cdot\nabla\psi_{i,\epsilon}dx\\ \int_{\Omega}\mu_{n}\psi_{i,\epsilon}u_{m,n}dx&\rightarrow \int_{\Omega}\mu_{n}\psi_{i,\epsilon}u_{n}dx. \end{split} \end{align*} Using these results in (3.6), \begin{align} \begin{split}
0=&\underset{m\rightarrow\infty}{\text{lim}}\left(\int_{\Omega}|\nabla u_{m,n}|^{p(x)}\psi_{i,\epsilon}dx-\int_{\Omega}|u_{m,n}|^{q(x)}\psi_{i,\epsilon}dx\right)\\
&-\left( \int_{\Omega}|\nabla u_{n}|^{p(x)}\psi_{i,\epsilon}dx-\int_{\Omega}|u_{n}|^{q(x)}\psi_{i,\epsilon}dx\right) \end{split} \end{align} Now, applying the concentration-compactness principle in (3.7), we have \\ $\int_{\Omega}\psi_{i,\epsilon}d\mu-\int_{\Omega}\psi_{i,\epsilon}d\nu=0$. Letting $\epsilon\rightarrow 0$, we get $\mu_{i}=\nu_{i}$.\\ Again, using $(f_{2})$, the H\"{o}lder inequality and the embedding theorem (Proposition 2.3), \begin{align*} \begin{split} c=&\underset{m\rightarrow\infty}{\text{lim}}\left(I_{n}(u_{m,n})-\frac{1}{p^{+}}\langle I^{\prime}(u_{m,n}),u_{m,n}\rangle\right)\\
=&\underset{m\rightarrow\infty}{\text{lim}}\left[\int_{\Omega}\frac{|\nabla u_{m,n}|^{p(x)}}{p(x)}dx-\int_{\Omega}\frac{|u_{m,n}|^{q(x)}}{q(x)}dx-\int_{\Omega}F(x,u_{m,n})dx-\int_{\Omega}\mu_{n}u_{m,n}dx\right.\\
&\left.-\frac{1}{p^{+}}\left\lbrace \int_{\Omega}|\nabla u_{m,n}|^{p(x)}dx-\int_{\Omega}|u_{m,n}|^{q(x)}dx-\int_{\Omega}f(x,u_{m,n})u_{m,n}dx-\int_{\Omega}\mu_{n}u_{m,n}dx\right\rbrace\right] \\
\geq& \underset{m\rightarrow\infty}{\text{lim}}\int_{\Omega}\left(\frac{1}{p^{+}}-\frac{1}{q(x)}\right)|u_{m,n}|^{q(x)}dx-\int_{\Omega}\mu_{n}u_{m,n}dx+\frac{1}{p^{+}}\int_{\Omega}\mu_{n}u_{m,n}dx\\
\geq& \underset{m\rightarrow\infty}{\text{lim}}\int_{\Omega}\left(\frac{1}{p^{+}}-\frac{1}{q(x)}\right)|u_{m,n}|^{q(x)}dx-A^{\prime}\|\mu_{n}\|_{p^{\prime}(x)}\|u_{m,n}\| \end{split} \end{align*} where, $A^{\prime}=(1-\frac{1}{p^{+}}) > 0$. This implies \begin{equation*}
c+A^{\prime}\|\mu_{n}\|_{p^{\prime}(x)}\|u_{m,n}\|\geq \underset{m\rightarrow\infty}{\text{lim}}\int_{\Omega}\left(\frac{1}{p^{+}}-\frac{1}{q(x)}\right)|u_{m,n}|^{q(x)}dx \end{equation*}
We have already proved that $(u_{m,n})$ is bounded, so let $M > 0$ be an upper bound of $\left(\|u_{m,n}\|\right)$ for fixed $n$. For $\delta > 0$, let $A_{\delta}$ denote the $\delta$-neighbourhood of the critical set $A$ in $\Omega$ and define $q_{A_{\delta}}^{-}:=\underset{A_{\delta}}{\text{inf}}\;q(x)$, so \begin{align*} \begin{split}
c+A^{\prime}\|\mu_{n}\|_{p^{\prime}(x)}M&\geq \underset{m\rightarrow\infty}{\text{lim}}\int_{\Omega}\left(\frac{1}{p^{+}}-\frac{1}{q(x)}\right)|u_{m,n}|^{q(x)}dx\\
&\geq \underset{m\rightarrow\infty}{\text{lim}}\int_{A_{\delta}}\left(\frac{1}{p^{+}}-\frac{1}{q(x)}\right)|u_{m,n}|^{q(x)}dx\\
&\geq \underset{m\rightarrow\infty}{\text{lim}}\int_{A_{\delta}}\left(\frac{1}{p^{+}}-\frac{1}{q_{A_{\delta}}^{-}}\right)|u_{m,n}|^{q(x)}dx\\
&=\left(\frac{1}{p^{+}}-\frac{1}{q_{A_{\delta}}^{-}}\right)\left(\int_{A_{\delta}}|u_{n}|^{q(x)}dx+\sum_{i\in J}\nu_{i}\right) \\ &\geq \left(\frac{1}{p^{+}}-\frac{1}{q_{A_{\delta}}^{-}}\right)\nu_{i}\\ &\geq \left(\frac{1}{p^{+}}-\frac{1}{q_{A_{\delta}}^{-}}\right)S^{N} \end{split} \end{align*} In the last step we used $\mu_{i}=\nu_{i}$ together with $S\nu_{i}^{1/p^{\ast}(x_{i})}\leq\mu_{i}^{1/p(x_{i})}$ and $\frac{1}{p(x_{i})}-\frac{1}{p^{\ast}(x_{i})}=\frac{1}{N}$, which give $\nu_{i}\geq S^{N}$.
Let us denote $M^{\prime}:=A^{\prime}\|\mu_{n}\|_{p^{\prime}(x)}M$. Since $\delta > 0$ is arbitrary and $q(x)$ is continuous, we conclude $c+M^{\prime}\geq \left(\frac{1}{p^{+}}-\frac{1}{q_{A}^{-}}\right)S^{N}$, i.e., $c\geq \left(\frac{1}{p^{+}}-\frac{1}{q_{A}^{-}}\right)S^{N}-M^{\prime}$.\\ Hence, for $c < \left(\frac{1}{p^{+}}-\frac{1}{q_{A}^{-}}\right)S^{N}-M^{\prime}$, the index set $J$ is empty. \end{proof} \noindent We have thus obtained $u_{m,n}\rightarrow u_{n}$ in $L^{q(x)}(\Omega)$ and $\nabla u_{m,n}\rightarrow \nabla u_{n}$ in $L^{p(x)}(\Omega)$ from the concentration-compactness principle. Since $p(x) < q(x)$, $L^{q(x)}(\Omega)$ embeds into $L^{p(x)}(\Omega)$; thus $u_{m,n}\rightarrow u_{n}$ in $L^{p(x)}(\Omega)$ and $\nabla u_{m,n}\rightarrow \nabla u_{n}$ in $L^{p(x)}(\Omega)$. Hence $u_{m,n}\rightarrow u_{n}$ in $W_{0}^{1,p(x)}(\Omega)$, and therefore the functional $I_{n}$ satisfies the Palais-Smale condition. \end{proof} \noindent Therefore, from Lemmas \ref{lem1} and \ref{lem2}, we conclude that for each $\mu_{n}$ there exists a critical point $u_{n} \in W_{0}^{1,p(x)}(\Omega)$ of $I_{n}$, i.e., a solution of the problem $(P_{n})$.\\ Now, choose a test function $v=T_k(u_n)$, where $T_k$ is the truncation operator defined as \[T_k(t)=\begin{cases}
t, & |t| < k \\
k\,{\text{sign}}(t), & |t|\geq k.
\end{cases}\]
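As an aside, the truncation operator is elementary to implement; the sketch below (illustrative only, with our own helper name) checks the two properties used repeatedly in what follows: $T_k(t)=t$ on $\{|t|<k\}$ and $|T_k(t)|\leq\min\{|t|,k\}$.

```python
# Minimal sketch of the truncation operator T_k (helper name is ours):
# T_k(t) = t if |t| < k, and k*sign(t) otherwise.

def truncate(t, k):
    if abs(t) < k:
        return t
    return k if t > 0 else (-k if t < 0 else 0.0)

# the two properties used in the sequel:
# T_k(t) = t on {|t| < k}, and |T_k(t)| <= min(|t|, k)
for t in [-5.0, -1.0, -0.5, 0.0, 0.5, 1.0, 5.0]:
    v = truncate(t, 1.0)
    assert abs(v) <= min(abs(t), 1.0)
    if abs(t) < 1.0:
        assert v == t
```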
Clearly $T_k(u_{n})\in W_0^{1,p(x)}(\Omega)$. Now
\begin{align*}
\{|\nabla u_n|> t\} & = \{|\nabla u_n|> t,|u_n|\leq k\} \cup \{|\nabla u_n| > t,|u_n| > k\}
\\& \subset \{|\nabla u_n|> t,|u_n|\leq k\} \cup \{|u_n| > k\}\subset \Omega.
\end{align*}
Hence, by the subadditivity of Lebesgue measure, we have
\begin{equation}\label{eq5}
|\{|\nabla u_n|> t\}| \leq |\{|\nabla u_n|> t,|u_n|\leq k\}| + |\{|u_n| > k\}|.
\end{equation} Hence we have
\begin{align*}
\int_\Omega |\nabla T_k(u_n)|^{p(x)}dx & \leq \int_\Omega |u_n|^{q(x)-2}u_n T_k(u_n)dx+\int_{\Omega}f(x,u_n)T_k(u_n)dx+\int_{\Omega}\mu_nT_k(u_n)dx
\\& \leq k|\Omega|^{1/q(x)}\|u_n\|_{q(x)}^{q(x)/q'(x)}+\epsilon\int _{(|u_n| > T)}|u_n|^{q(x)-1}T_k(u_n)dx+\int _{\Omega\times[-T,T]}f(x,u_n)T_k(u_n)dx
\\& ~~~+\int_{\Omega}\mu_nT_k(u_n)dx
\\& \leq C_1(q(x),\Omega)k+C_2(\epsilon,\Omega)k+k\int_{\Omega}\mu_n\;dx
\\& \leq Ck,
\end{align*}
where we have used the condition $(f_3)$ to bound the second integral and the $L^1$-bound of the sequence $(\mu_n)$ to bound the third integral. Thus, $\|\nabla T_{k}(u_{n})\|^{\gamma}_{p(x)}\leq Ck\;\forall\;k > 1,$ where \[\gamma=\begin{cases}
p^{+}, & \|\nabla T_{k}(u_{n})\|_{p(x)} < 1\\
p^{-}, & \|\nabla T_{k}(u_{n})\|_{p(x)} > 1.
\end{cases}\]
Define $A_{1}=\{x\in\Omega:|u_n(x)| > k\}$. On using the Poincar\'{e} and the generalized H\"{o}lder inequality, we get \begin{align*} \begin{split}
k|\left\lbrace|u_{n}|>k\right\rbrace|&=\int_{\left\lbrace |u_n| > k \right\rbrace} |T_k(u_n)|dx\\
&\leq \int_{\Omega}|T_{k}(u_{n})|dx\leq \left(\frac{1}{p^{-}}+\frac{1}{(p^{\prime})^{-}}\right)(|\Omega|+1)^{\frac{1}{(p^{\prime})^{-}}}\|T_{k}(u_{n})\|_{p(x)}\leq C_{3}k^{\frac{1}{\gamma}} \end{split} \end{align*}
From this we get $|\lbrace|u_{n}| > k\rbrace|\leq \frac{C_{3}}{k^{1-\frac{1}{\gamma}}}\;\forall\; k > 1$. Hence, $(u_{n})$ is bounded in $M^{1-\frac{1}{\gamma}}(\Omega)$.\\
We now restrict the integral to the set $A_{2}=\{x\in\Omega:|u_n(x)|\leq k\}$.\\ \begin{align*} \begin{split}
\int_{\lbrace|u_{n}|\leq k\rbrace}|\nabla T_{k}(u_{n})|^{p(x)}dx&=\int_{\lbrace|u_{n}|\leq k\rbrace}|\nabla u_{n}|^{p(x)}dx\\
&\geq \int_{\lbrace|\nabla u_{n}| > t, |u_{n}|\leq k\rbrace}|\nabla u_{n}|^{p(x)}dx\\
&\geq \int_{\lbrace|\nabla u_{n}| > t, |u_{n}|\leq k\rbrace}|t|^{p(x)}dx\\
&\geq t^{p^{-}}|\lbrace|\nabla u_{n}| > t, |u_{n}|\leq k\rbrace| \end{split} \end{align*}
Thus $Ck\geq t^{p^{-}}|\lbrace|\nabla u_{n}| > t, |u_{n}|\leq k\rbrace|$, which implies $|\lbrace|\nabla u_{n}| > t, |u_{n}|\leq k\rbrace| \leq \frac{Ck}{t^{p^{-}}}$. Hence, from (3.8) we have $|\lbrace|\nabla u_{n}| > t\rbrace|\leq \frac{Ck}{t^{p^{-}}}+\frac{C_{3}}{k^{1-\frac{1}{\gamma}}}\;\forall\; k > 1$.\\
On choosing $k=t^{\frac{\gamma p^{-}}{2\gamma-1}}$ we obtain $|\lbrace|\nabla u_{n}| > t\rbrace|\leq \frac{C_{4}}{t^{\frac{(\gamma -1)p^{-}}{2\gamma-1}}}\;\forall\;t\geq 1$, where $C_{4}=C+C_{3}$. This implies that $(\nabla u_{n})$ is bounded in $M^{{\frac{(\gamma -1)p^{-}}{2\gamma-1}}}(\Omega)$. Consequently, $(u_{n})$ is bounded in $W_{0}^{1,s(x)}(\Omega)$ for $s(x) < s$, where $s={\text{min}}\left\lbrace 1-\frac{1}{\gamma},{\frac{(\gamma -1)p^{-}}{2\gamma-1}}\right\rbrace$. Since $W_{0}^{1,s(x)}(\Omega)$ is again a reflexive space, on repeating the arguments used to prove $u_{m,n}\rightarrow u_{n}$ in $W_{0}^{1,p(x)}(\Omega)$ we find that $u_{n}\rightarrow u$ in $W_{0}^{1,s(x)}(\Omega)$. This limit $u$ is a nontrivial weak solution of problem $(P)$ in $W_{0}^{1,s(x)}(\Omega)$.
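The choice $k=t^{\gamma p^{-}/(2\gamma-1)}$ is precisely the one that balances the two terms of the preceding bound; assuming $\gamma>1$, a short check of the exponents:

```latex
% With k = t^{\gamma p^{-}/(2\gamma-1)} the two terms decay at the same rate:
\frac{Ck}{t^{p^{-}}}=C\,t^{\frac{\gamma p^{-}}{2\gamma-1}-p^{-}}
   =C\,t^{-\frac{(\gamma-1)p^{-}}{2\gamma-1}},
\qquad
\frac{C_{3}}{k^{1-\frac{1}{\gamma}}}
   =C_{3}\,t^{-\frac{\gamma p^{-}}{2\gamma-1}\cdot\frac{\gamma-1}{\gamma}}
   =C_{3}\,t^{-\frac{(\gamma-1)p^{-}}{2\gamma-1}}.
```

Both terms are then $O\big(t^{-(\gamma-1)p^{-}/(2\gamma-1)}\big)$, which is exactly the Marcinkiewicz bound used for $(\nabla u_{n})$.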
{\sc Amita Soni} and {\sc D. Choudhuri}\\ Department of Mathematics,\\ National Institute of Technology Rourkela, Rourkela - 769008, India\\ e-mails: soniamita72@gmail.com and dc.iit12@gmail.com.
\end{document}
\begin{document}
\setlength{\textheight}{8.0truein}
\normalsize\textlineskip \thispagestyle{empty} \setcounter{page}{1}
\vspace*{0.88truein}
\alphfootnote
\fpage{1}
\centerline{\bf
TIME-SHIFT ATTACK IN PRACTICAL QUANTUM CRYPTOSYSTEMS } \vspace*{0.37truein} \centerline{\footnotesize
BING QI} \vspace*{10pt} \centerline{\footnotesize CHI-HANG FRED FUNG} \vspace*{10pt} \centerline{\footnotesize HOI-KWONG LO} \vspace*{10pt} \centerline{\footnotesize XIONGFENG MA}
\vspace*{10pt} \centerline{\footnotesize\it Center for Quantum Information and Quantum Control (CQIQC), } \baselineskip=10pt \centerline{\footnotesize\it Dept. of Physics and Dept. of Electrical and Computer Engineering, } \baselineskip=10pt \centerline{\footnotesize\it University of Toronto, Toronto, Ontario M5S 3G4, Canada } \vspace*{0.225truein}
\vspace*{0.21truein}
\abstracts{ Recently, a new type of attack, which exploits the efficiency mismatch of two single photon detectors (SPDs) in a quantum key distribution (QKD) system, has been proposed. In this paper, we propose another ``time-shift'' attack that exploits the same imperfection. In our attack, Eve shifts the arrival time of either the signal pulse or the synchronization pulse or both between Alice and Bob. In particular, in a QKD system where Bob employs a time-multiplexing technique to detect both bit ``0'' and bit ``1'' with the same SPD, Eve could, in some circumstances, acquire full information on the final key without introducing any error. In addition, we prove that if Alice and Bob are unaware of our attack, the final key they share is insecure. We emphasize that our attack is simple and feasible with current technology. Finally, we discuss some countermeasures against our attack and earlier ones. }{}{}
\vspace*{10pt}
\vspace*{3pt}
\vspace*{1pt}\textlineskip
\section{Introduction\label{introduction}}
One of the most important practical applications of quantum information is quantum key distribution (QKD), whose unconditional security is based on the fundamental laws of quantum mechanics \cite{Bennett1984,Ekert1991,Gisin2002,Mayers2001,Lo1999,Shor2000}. In principle, any eavesdropping attempt by a third party (Eve) will unavoidably introduce disturbance. So, it is possible for the legitimate users (Alice and Bob) to upper-bound the amount of information acquired by an eavesdropper from the measured quantum bit error rate (QBER). Alice and Bob can then distill a final secure key by performing error correction and privacy amplification. Unfortunately, a practical QKD system has imperfections, and Eve may try to exploit these imperfections and launch specific attacks. One example is the photon number splitting (PNS) attack \cite{PNS1,PNS2,PNS3,PNS4}, where Eve takes advantage of the nonzero multi-photon emission probability of Alice's laser source and the lossy nature of a quantum channel. Security proofs for practical QKD systems exist \cite{Inamori2005,Gottesman2004}. Recently, decoy state QKD \cite{Hwang2003,Lo2004,Lo2005,Wang2005a,Ma2005b,Wang2005b,Harrington,Yi2005} has also been proposed
to substantially increase the distance and key generation rate of a practical QKD system when weak coherent-state sources are used.
Another critical device, which limits the performance of a long distance QKD system, is the single photon detector (SPD). Currently, many practical QKD systems use optical communication fibers as their quantum channels and work at the telecom wavelengths of around either 1550 nm or 1310 nm. Single photon detection at these wavelengths is often performed by (thermoelectrically) cooled InGaAs avalanche photodiodes (APDs) \cite{Ribordy2004}. To minimize the dark count rate, this type of detector usually works in gated mode: it is activated by a suitable electrical gate signal for a narrow window (a few ns) only when a signal pulse (a few hundred ps) is expected to arrive. Outside of this time window, the detector does not respond to input photons. Obviously, in this design, the synchronization between Alice's source and Bob's detectors is critical.
Recently, an eavesdropping attack that exploits the efficiency mismatch of two single photon detectors in a practical QKD system has been proposed in \cite{Makarov2005b}. In this attack, Eve intercepts and performs a complete von Neumann measurement on each quantum state sent out by Alice. She then generates a new time-shifted signal based on her measurement result. In the extreme case where there is complete detector efficiency mismatch (that is to say, there is a time window where the detector for the bit ``0'' is active while the detector for the bit ``1'' is completely inactive, and vice versa), Eve can acquire full information on the final key without introducing any error.
In this paper, we propose a simple attack that exploits the same imperfection. More specifically, in our attack, Eve does not measure the quantum state sent by Alice. Instead, Eve simply time-shifts each of Alice's signal forward or backward as she wishes. It turns out that the security of some practical QKD systems, particularly those with time-multiplexed SPDs, could be totally compromised by this simple attack. We emphasize that our attack can be implemented with current technology.
This paper is organized as follows: In Section $2$, we summarize the basic results in \cite{Makarov2005b} and then propose our new eavesdropping strategy. This is followed by a comparison of these two attacks. In Section $3$, we discuss the security loophole of QKD systems with a time-multiplexed SPD. Finally, in Section $4$, we discuss some counter measures against our and earlier attacks.
\section{Eavesdropping strategies exploiting detector efficiency mismatch\label{strategy}}
In a BB84 QKD system \cite{Bennett1984}, typically, Bob uses two separate SPDs, labeled SPD0 and SPD1, to detect bit ``0'' and bit ``1'' respectively. As we discussed in Section $1$, to minimize the dark count rate, each of these two SPDs is only activated for a narrow time window when the signal is expected. Owing to the different responses of the two APDs and other imperfections in the electronics, the time-dependent efficiencies of these two SPDs are not identical. Because the width of the SPD's open window (a few ns) is often substantially larger than, or at least not shorter than, the laser pulse duration (a few hundred ps), Alice and Bob can synchronize the laser pulse with the center of the SPD's open window. This ensures that the small detector efficiency mismatch will not affect the normal operation of the QKD system.
\begin{figure}\label{fig:SPD-a}
\end{figure} Ref.~\cite{Makarov2005b} considers the time-dependent detector efficiencies of the two detectors. Fig.~\ref{fig:SPD-a} shows a diagram of the time-dependent efficiency of the two SPDs: at time $t_0$, the efficiency of SPD0, $\eta_0(t_0)$, is much higher than that of SPD1, $\eta_1(t_0)$, while at time $t_1$, $\eta_0(t_1) \ll \eta_1(t_1)$. Adopting the symmetry assumption of \cite{Makarov2005b}, we can define \begin{equation} r = \frac{\eta_1(t_0)}{\eta_0(t_0)} = \frac{\eta_0(t_1)}{\eta_1(t_1)}, \end{equation} where $r \in [0,1]$. In the extreme case favorable to Eve, $r=0$, which means only SPD0 is active at time $t_0$, and only SPD1 is active at time $t_1$.
The intercept-resend attack described in \cite{Makarov2005b} goes as follows: \begin{enumerate} \item[(1)] Eve intercepts quantum states from Alice and measures each of them in a randomly chosen basis; \item[(2)] According to her measurement result, Eve prepares a new quantum state (faked state \cite{Makarov2005a}) in a different basis with a different bit value. For example, if she measures in Z basis and gets bit ``0'' (labeled as $Z_0$), then she prepares bit ``1'' in X basis ($X_1$); \item[(3)] According to her measurement result, Eve sends out her faked state at different time so that it arrives at Bob's SPD at either time $t_0$ (corresponding to her measurement result ``0'') or $t_1$ (corresponding to her measurement result ``1''). \end{enumerate}
Assuming the intrinsic QBER in Alice and Bob's system is zero, the QBER caused by this attack can be derived as \cite{Makarov2005b}: \begin{equation} QBER = \frac{2 \eta_0(t_1) + 2 \eta_1(t_0)}{\eta_0(t_0)+3\eta_0(t_1)+3\eta_1(t_0)+\eta_1(t_1)} \end{equation}
In one extreme case, when $r=0$ (which corresponds to $\eta_0(t_1) = \eta_1(t_0) =0$), QBER=0. So Eve can acquire full information on the sifted key without introducing any errors. In the other extreme case, when $r=1$ (the two SPDs are perfectly matched), Eve causes QBER=$1/2$. So Eve is effectively making a random guess. This is worse than the standard intercept-resend attack (which causes QBER=$1/4$ and requires exactly the same equipment capabilities from Eve). The attack of \cite{Makarov2005b} becomes more efficient than the standard intercept-resend attack for $r<0.2$. In the intermediate case, when $0<r<0.2$, the QBER calculated from (2) could be lower than the proven security threshold of the standard BB84 system. In other words, a direct application of standard security proofs, e.g., GLLP \cite{Gottesman2004}, without taking into account the detector efficiency mismatch, is invalid.
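As a numerical check, under the symmetry assumption above the QBER formula reduces to $2r/(1+3r)$; the following sketch (our own illustration, with a hypothetical common peak efficiency that cancels in the ratio) reproduces the limiting cases and the $r<0.2$ crossover:

```python
def qber(r, eta=1.0):
    """QBER of the intercept-resend attack under the symmetry assumption
    eta_1(t0) = r * eta_0(t0) and eta_0(t1) = r * eta_1(t1), taking
    eta_0(t0) = eta_1(t1) = eta (a hypothetical common peak efficiency,
    which cancels in the ratio)."""
    e00 = e11 = eta       # eta_0(t0), eta_1(t1)
    e01 = e10 = r * eta   # eta_0(t1), eta_1(t0)
    return (2 * e01 + 2 * e10) / (e00 + 3 * e01 + 3 * e10 + e11)

# Limiting cases: r = 0 gives QBER = 0; r = 1 gives QBER = 1/2.
# The attack beats the standard intercept-resend QBER of 1/4 exactly
# when r < 0.2, since 2r/(1+3r) = 1/4 at r = 1/5.
```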
Note that, to implement the attack in \cite{Makarov2005b}, Eve needs a complicated detection unit (similar to Bob's system) and a resend unit (similar to Alice's system). If we assume that Eve builds her ``practical'' eavesdropping device based on today's technology, she will also experience the problem of low detection efficiency and will introduce additional errors (even in the case of $r=0$) due to the imperfections in her setup.
Here we propose a simple and practical attack that exploits the same imperfection. Fig.~\ref{fig:attack1} shows the schematic diagram of the experimental system that implements our attack. Instead of measuring Alice's quantum state, Eve simply shifts the time of Alice's quantum state at random so that it arrives at Bob's detector at either time $t_0$ or time $t_1$. If Eve chooses time $t_0$ ($t_1$), then whenever Bob detects a signal, the bit value will be ``0'' (``1'') with probability $1/(r+1)$. [Here, we assume equal prior probabilities for the bits ``0'' and ``1'' emitted by Alice.] In the extreme case $r=0$, this probability is equal to one. Because the probability that Eve guesses Bob's bit value wrong is $r/(r+1)$, Eve's knowledge about the final key is given by \begin{equation} I(B:E) = 1 - h(r/(r+1)), \end{equation} where $h(x)=-x \log_2(x)-(1-x) \log_2(1-x)$ is the binary Shannon entropy function. Note that in this attack, Eve does not measure Alice's state. Therefore, Eve never introduces any errors.
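Eve's information gain $I(B:E)$ above is straightforward to evaluate; a minimal sketch (our own illustration, not code from the paper):

```python
from math import log2

def h(x):
    """Binary Shannon entropy h(x) = -x log2(x) - (1-x) log2(1-x),
    with the usual convention h(0) = h(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def eve_information(r):
    """I(B:E) = 1 - h(r/(r+1)): Eve's knowledge of Bob's bit under the
    time-shift attack, as a function of the efficiency-mismatch ratio r."""
    return 1.0 - h(r / (r + 1))

# r = 0 (complete mismatch): Eve learns the full key, I(B:E) = 1.
# r = 1 (perfectly matched SPDs): Eve learns nothing, I(B:E) = 0.
```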
We now show that whenever $I(B:E)>0$,
the final key shared between Alice and Bob is insecure if they are not aware of Eve's attack. In order to show this, we compute an upper bound on the key generation rate under Eve's attack. One upper bound is the conditional mutual information \cite{Maurer1999} given by \begin{equation}
I(A:B|E) = h(r/(r+1)) . \end{equation} Thus, if Alice and Bob are not aware of our attack, then they would generate keys at a rate equal to $1$ since there are no bit errors\footnote{In reality, due to the finiteness of the number of signals, Alice and Bob may choose the key generation rate to be $1-h(\epsilon)$, where $\epsilon$ is some security parameter.}. Therefore, whenever $h(r/(r+1))<1$, or in other words, when Alice and Bob generate keys at a rate higher than the upper bound,
the final key shared between them is insecure. To quantify this key rate upper bound, we may substitute in an experimental value of $r=2$ observed in our experiment. This gives us an upper bound of $I(A:B|E) = h(2/3) = 0.9183$. Thus, our attack does cause a moderate decrease in the key rate upper bound in this case.
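The numerical value of the key-rate bound quoted above can be checked directly; a quick sketch:

```python
from math import log2

def h(x):
    """Binary Shannon entropy."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

# Key-rate upper bound I(A:B|E) = h(r/(r+1)) at the experimentally
# observed value r = 2 quoted in the text: h(2/3) is approximately 0.9183.
bound = h(2 / 3)
```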
Comparing with the attack in \cite{Makarov2005b}, our attack is simpler and can be easily realized with today's technology: Eve can use high speed optical switches to re-route Alice's signal through either a long optical path or a short optical path to achieve the desired time shift. Another advantage of our attack is that Eve never introduces any errors. So, it is hard for Alice and Bob to detect Eve's presence.
\begin{figure}\label{fig:attack1}
\end{figure}
One drawback of our attack (from the point of view of Eve) is that Eve cannot get full information (unless $r=0$) on the sifted key. So, it might still be possible for Alice and Bob to work out a secure key, provided that they can correctly upper bound Eve's information.
In both the attack proposed in \cite{Makarov2005b} and the one proposed here, Bob's detection efficiency is lower than normal. In the former case, Eve can compensate for this decrease by using a stronger faked state, by using a low-loss quantum channel, or by placing her intercept unit near Alice and her resend unit near Bob to bypass the quantum channel \cite{Makarov2005b}, whereas in our attack, Eve has to use a low-loss channel. An alternative to using a low-loss channel is for Eve to mask herself by introducing additional loss during the calibration phase. This can be done with current technology.
In the next section, we will show that for some QKD designs, in which Bob uses the same SPD to detect both bit ``0'' and bit ``1'', Eve can time-shift both the synchronization signal and the quantum signal to acquire full information without introducing any errors. In other words, our attack is fatal there.
\section{Security loophole of QKD system with a time-multiplexed SPD\label{time-mux}}
In a standard design of a BB84 QKD system, Alice sends out her quantum bit in a randomly chosen basis, while Bob also randomly chooses his measurement basis and uses two separate SPDs to detect bit ``0'' or bit ``1''. In a modified system design \cite{Marand1995,Makarov2004}, Bob detects both bit ``0'' and bit ``1'' with the same SPD by employing a time-multiplexing technique.
\begin{figure}\label{fig:setup1}
\end{figure} In this design, the expected arrival time $t_0$ of bit ``0'' (at the SPD) is different from the expected arrival time $t_1$ of bit ``1'', with the time difference \begin{equation} \Delta t = t_1-t_0 \end{equation} determined by the optical length difference $n(L_1-L_0)$, as shown in Fig.~\ref{fig:setup1}. Here, $n$ is the effective refractive index of the fibers and we assume $t_1 > t_0$. So it is possible for Bob to distinguish bit ``0'' from bit ``1'' using the detection time information.
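The arrival-time difference can be estimated from the path-length difference via $\Delta t = n(L_1-L_0)/c$. In the sketch below, the index $n=1.47$ and the $1$ m length difference are illustrative values, not taken from the paper:

```python
C_VACUUM = 2.998e8  # speed of light in vacuum, m/s

def time_shift(n, delta_L):
    """Delta t = n * (L1 - L0) / c for an effective fiber index n and a
    geometric length difference delta_L (both hypothetical inputs)."""
    return n * delta_L / C_VACUUM

# A 1 m length difference at n = 1.47 gives Delta t of roughly 4.9 ns.
```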
Besides the advantage of saving one SPD (which is among the most expensive components in a QKD system), this specific design also helps to minimize the asymmetry between the detection efficiencies of bit ``0'' and bit ``1''. Unfortunately, as we will demonstrate below, this design may also open a loophole that allows Eve to launch the ``time-shift'' attack that we discussed in Section $2$.
Note that if Bob's SPD works in gated mode (as is the case for most telecom-wavelength QKD systems), it has to be activated twice for each incoming pulse (at times $t_0$ and $t_1$). Since there is no overlap between these two open time windows, this is equivalent to the case $r=0$ in Section $2$. Therefore, Eve can acquire full information on the sifted key without introducing any error. Before we go into the details of this attack, we will first discuss the synchronization of Alice's laser pulse with Bob's detection window in a practical QKD system. This synchronization issue is crucial for our discussion.
\begin{figure}\label{fig:sync}
\end{figure} A widely used synchronization method in a practical QKD system is by multiplexing a strong synchronizing laser pulse with the weak signal pulse by employing wavelength division multiplexing (WDM) technique \cite{Hughes2000,Yuan2005}. In \cite{Yuan2005}, Alice multiplexes each weak signal pulse (1.55 $\mu$m) with a strong timing pulse (1.3$\mu$m) and sends them to Bob through the same optical fiber. On the receiver's side, Bob first separates the timing pulse from the signal pulse with a WDM component, and then detects the timing pulse with an APD, which in turn produces two electrical gating signals to control the two SPDs (or in the case of time-multiplexed SPD, these two electrical gating signals activate the SPD twice for each signal pulse). The time relations of various signals are depicted in Fig.~\ref{fig:sync}.
Eve's eavesdropping strategy is quite similar to the one we proposed in Section $2$: Eve doesn't measure Alice's quantum state. Instead, she flips the bit value first (for phase encoding BB84 protocol, this can easily be done by using a phase modulator to introduce an additional $\pi$ phase shift between the signal and reference pulses sent by Alice) and then randomly time-shifts the pulse (relative to the strong sync pulse) by an amount of either $-\Delta t$ or $+\Delta t$. Fig.~\ref{fig:sync} ($S_4$) shows the situation when Eve does the $+\Delta t$ shift. Obviously, in this case, Bob can only detect bit ``1'' or nothing; the bit ``0'' is blinded. Similarly, when Eve does the ``$-\Delta t$'' shift, Bob can only detect bit ``0'' or nothing. So, whenever Bob has a detection event, Eve has full information about his bit value.
\begin{table} \tcaption{\label{table-attack1}Alice sends out $Z_0$ and Bob measures in Z basis. }
\centerline{
\begin{tabular}{|c|c|c|c|c|c|} \hline
{\footnotesize Quantum state } & {\footnotesize Eve flips the} & \multicolumn{2}{c|}{\footnotesize Eve does random time shift} & \multicolumn{2}{c|}{\footnotesize Bob's measurement results} \\
{\footnotesize from Alice} & {\footnotesize bit value} & \multicolumn{2}{c|}{}& \multicolumn{2}{c|}{\footnotesize in Z basis} \\ & & {\footnotesize Time shift} & {\footnotesize Possibility} & {\footnotesize Bit value} & {\footnotesize Possibility} \\ \hline \multirow{4}{*}{$Z_0$} & \multirow{4}{*}{$Z_1$} & \multirow{2}{*}{$-\Delta t$} & \multirow{2}{*}{$\frac{1}{2}$} & 0 & $\frac{1}{2}$ \\ & & & & 1 & 0 \\ & & \multirow{2}{*}{$+\Delta t$} & \multirow{2}{*}{$\frac{1}{2}$} & 0 & 0 \\ & & & & 1 & 0 \\ \hline \end{tabular} }
\end{table} Table~\ref{table-attack1} summarizes all possible events when Alice sends out $Z_0$ and Bob measures in Z basis. The other three cases can be easily worked out by symmetry. Again, as we explained in Section $2$, Eve's attack does not introduce any error.
Note that if the time interval $\Delta t$ between the two open windows of Bob's SPD is exactly equal to half of the laser pulse period $\Delta T$, as is the case in \cite{Makarov2004}, this attack cannot be applied \cite{error}. The reason is the following: owing to Eve's shifting of the pulse, the arrival time of the ``blinded'' bit will overlap with one of the SPD's two open windows for the successive signal. For example, suppose Eve applies a ``$+\Delta t$'' shift to the $k^{th}$ laser pulse; then with $50\%$ chance, the incoming photon will be detected at the time corresponding to bit ``0'' of the $(k+1)^{th}$ pulse. It is easy to see that the QBER in this case is $25\%$.
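The $25\%$ figure can be reproduced with a small Monte Carlo sketch. The model below is our own simplification: a shifted photon is assumed to land either in the intended window of pulse $k$ (yielding the correct bit) or in a window of the successive, independent pulse (yielding an uncorrelated bit), with equal probability:

```python
import random

def simulate_wraparound_qber(trials=200_000, seed=7):
    """Monte Carlo sketch of the wrap-around case Delta t = Delta T / 2.

    Modeling assumption: each detected photon lands in the intended
    window of pulse k with probability 1/2 (correct bit), or in a window
    belonging to pulse k+1 with probability 1/2, where the outcome is
    uncorrelated with pulse k's bit value."""
    rng = random.Random(seed)
    errors = detections = 0
    for _ in range(trials):
        sent = rng.randint(0, 1)
        if rng.random() < 0.5:            # lands in the intended window
            measured = sent
        else:                             # leaks into the next pulse's window
            measured = rng.randint(0, 1)  # uncorrelated outcome
        detections += 1
        errors += (measured != sent)
    return errors / detections
```

With the assumptions above, the expected error fraction is $\tfrac12\cdot 0 + \tfrac12\cdot\tfrac12 = 25\%$, matching the text.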
Note that QKD systems with a long SPD gating period (larger than $3\Delta t$) may be immune to this kind of attack. In this case, Bob may monitor counts that fall outside the predetermined ``0'' and ``1'' windows. For example, in \cite{Townsend}, the SPD is gated with a $\sim 100$ ns pulse, while the time interval $\Delta t$ between bit ``0'' and bit ``1'' is $6.2$ ns. In this case, the SPD is only activated once for each incoming signal, and Bob can distinguish bit ``0'' from bit ``1'' by measuring the time difference between the sync pulse and the detected photon.
\section{Discussion\label{discussion}}
The recently proposed eavesdropping strategy of \cite{Makarov2005b} exploits the detector efficiency mismatch of SPDs. In this paper, we propose a simple time-shift attack that exploits the same imperfection. We emphasize that our attack is simple and feasible with current technology. In particular, we demonstrate that a QKD system in which Bob employs a time-multiplexing technique to detect both bit ``0'' and bit ``1'' with the same SPD is especially vulnerable to this kind of attack.
One may ask how Eve can know about the efficiency mismatch between the two SPDs in the first place. We remark that in quantum cryptography, if we cannot prove that Eve is incapable of getting this information, then we have to assume that this information is available to Eve. In fact, in this specific case, Eve, in principle, does have a way to acquire this information during the normal quantum key distribution process: she can randomly block a small fraction of the pulses from Alice and resend {\it faked pulses} to Bob (she only changes a {\it small} fraction, so the QBER does not change significantly). Each faked pulse is randomly prepared in one of the four states of the BB84 protocol, and its delay time can be tuned by Eve. After the classical communication stage, Eve knows Bob's basis choice for each detected faked pulse. In the case when they (Eve and Bob) use the same basis, Eve knows for sure which SPD clicks.
Let us assume that the properties of the two SPDs stay constant over a long period of time during an experiment. In this case, given a long enough time, Eve can simply repeat her attacks and thus gather enough statistical data to determine the efficiencies of both SPDs at various delay times. With this information, she can launch the time-shift attack we discussed in this paper. Another scenario where Eve can obtain the efficiency information is that Eve is actually
the producer of the QKD system Alice and Bob use. Thus, in principle, Eve knows the efficiency mismatch between the two SPDs.
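The statistical estimation described above amounts to binomial sampling of each detector's click probability at a given delay; a minimal sketch, in which `true_eta` and the pulse count are hypothetical parameters:

```python
import random

def estimate_efficiency(true_eta, n_pulses=100_000, seed=3):
    """Eve sends n_pulses faked pulses at a fixed delay and uses the
    observed click frequency as an estimate of the detector efficiency
    at that delay (true_eta plays the role of the unknown efficiency)."""
    rng = random.Random(seed)
    clicks = sum(rng.random() < true_eta for _ in range(n_pulses))
    return clicks / n_pulses
```

Repeating this at a grid of delay times would map out the time-dependent efficiency curve of each SPD, which is the input Eve needs for the time-shift attack.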
To counter Eve's attack, Alice and Bob could develop various countermeasures, such as those discussed in \cite{Makarov2005b}. Other countermeasures include: strict checks of the detection rate and the loss in the quantum channel; applying the phase-shift setting to Bob's phase modulator for a narrower time period than [$-\Delta t$, $+\Delta t$] relative to the normal pulse position; randomly shifting the gating window; and using a non-gated SPD.
We note that a recently proposed single-SPD QKD system is also immune to this attack \cite{LaGasse2005}. In a phase-encoding BB84 version of this design, instead of randomly selecting from a set of two values, Bob's phase modulation is randomly selected from a set of four values, which is identical to the set for Alice's phase modulation. In this case, Bob not only randomly chooses his measurement basis for each incoming pulse, he also randomly determines which SPD (or which time window in the case of a time-multiplexed SPD) is used for detecting bit ``0'' or bit ``1''. Bob broadcasts his basis choice, but keeps his choice of detector (for the bit ``0'' or ``1'') secret. In such a set-up, even if Eve has the information about which detector clicks, Eve still cannot work out Bob's bit value because she does not know which detector corresponds to bit ``0''. On the other hand, the set-up in \cite{LaGasse2005} may open up an unrelated security loophole. More concretely, Eve may try to read out Bob's detector assignments by employing a strong external laser pulse, as described in \cite{Largepulse}.
We conclude with some general comments on the security of QKD. First, a security proof is only as good as its underlying assumptions. Once a security loophole has been discovered, it is often not too difficult to develop countermeasures that will plug the loophole and regain proofs of unconditional security of QKD. One example is the PNS attack that we mentioned in Section $1$. However, the hard part is to find those security loopholes in the first place. A QKD system is a complicated system with many intrinsic imperfections. It is, thus, very important to conduct extensive research into those imperfections carefully to see whether they are innocuous or fatal for security. We need more quantum hackers in the field. The investigation of loopholes and countermeasures in practical QKD systems plays a complementary role to security proofs.
Second, implementing a countermeasure for some security loophole may open up new loopholes (such as the one mentioned above). Also, regarding countermeasures, in our opinion, there is a big difference between what Bob could do in principle and what Bob actually does in practice.
Third, it is important to construct a list of potential countermeasures, and to compare and contrast them. This task is specified in the ARDA's quantum cryptography roadmap \cite{arda} as one of the major milestones. Such a study will help us to better understand the pros and cons of the various countermeasures and eventually lead to the identification of the best countermeasure in practice.
Fourth, given that a practical QKD system will always have imperfections, one might wonder if QKD systems offer any real advantage over conventional systems. Our answer is three-fold. First of all, implementation loopholes are a fact of life. Even conventional security systems such as smart cards suffer implementation loopholes. For instance, Eve may attempt to read off a private key from a smart card by using various techniques (including X-ray) to reverse-engineer the circuit embedded in a smart card. Second, QKD can be used in concatenation with a conventional system to ensure security \cite{concatenation}. By defending in depth, QKD can only increase security, not reduce it. Third, QKD has the important advantage of being future-proof: the signals are quantum. Once the transmission is done, there is no transcript of the transmission. For an eavesdropper to launch a quantum attack, she has to possess much of the quantum technology during the quantum transmission. In contrast, in a standard Diffie-Hellman public-key key exchange scheme, Eve has a complete transcript of the transmission and can save such a transcript for decades to wait for unexpected future advances in hardware and algorithms. Given that public-key cryptosystems were themselves an unexpected discovery made only three decades ago, our view is that it would be complacent to believe that our standard public-key cryptosystems will be safe forever. Therefore, it pays to diversify one's risk by defending in depth with a QKD system in concatenation with a conventional cryptosystem.
\paragraph{Acknowledgments} The authors are grateful to the anonymous reviewers for pointing out one error in the first version and making numerous important comments. Financial support from NSERC, CIAR, CRC Program, CFI, OIT, PREA and CIPI are gratefully acknowledged.
\paragraph{References}
\end{document}
\begin{document}
\title[] {Scalar curvature and singular metrics}
\author{Yuguang Shi$^1$} \address[Yuguang Shi]{Key Laboratory of Pure and Applied Mathematics, School of Mathematical Sciences, Peking University, Beijing, 100871, P.\ R.\ China} \email{ygshi@math.pku.edu.cn} \thanks{$^1$Research partially supported by NSFC 11671015}
\author{Luen-Fai Tam$^2$} \address[Luen-Fai Tam]{The Institute of Mathematical Sciences and Department of
Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong, China.}
\email{lftam@math.cuhk.edu.hk} \thanks{$^2$Research partially supported by Hong Kong RGC General Research Fund \#CUHK 14305114}
\renewcommand{\subjclassname}{
\textup{2010} Mathematics Subject Classification} \subjclass[2010]{Primary 53C20; Secondary 83C99}
\date{November, 2016}
\begin{abstract} Let $M^n$, $n\ge3$, be a compact differentiable manifold with nonpositive Yamabe invariant $\sigma(M)$. Suppose $g_0$ is a continuous metric with volume $V(M,g_0)=1$ which is smooth outside a compact set $\Sigma$ and lies in $W^{1,p}_{loc}$ for some $p>n$. Suppose the scalar curvature of $g_0$ is at least $\sigma(M)$ outside $\Sigma$. We prove that $g_0$ is Einstein outside $\Sigma$ if the codimension of $\Sigma$ is at least $2$. If, in addition, $g_0$ is Lipschitz, then $g_0$ is smooth and Einstein after a change of the smooth structure. If $\Sigma$ is a compact embedded hypersurface, $g_0$ is smooth up to $\Sigma$ from both sides of $\Sigma$, and the difference of the mean curvatures along $\Sigma$ on the two sides of $\Sigma$ has a fixed appropriate sign, then $g_0$ is also Einstein outside $\Sigma$. For manifolds with dimension between $3$ and $7$, without a spin assumption, we obtain a positive mass theorem on an asymptotically flat manifold for metrics with a compact singular set of codimension at least $2$. \end{abstract}
\keywords{Yamabe invariants, positive mass theorems, singular metrics}
\maketitle
\markboth{Yuguang Shi and Luen-Fai Tam}{Scalar curvature and singular metrics }
\section{introduction}\label{s-intro}
There are two celebrated results on manifolds with nonnegative scalar curvature. The first result is on compact manifolds. It was proved by Schoen and Yau \cite{SchoenYau1979,SchoenYau1979-1} that any smooth metric on a torus $T^n$, $n\le 7$, with nonnegative scalar curvature must be flat. Later, the result was proved to be true for all $n$ by Gromov and Lawson \cite{GromovLawson}. The second result is the positive mass theorem on noncompact manifolds. Schoen and Yau \cite{SY1979,SY1981,Schoen1989} proved that the Arnowitt-Deser-Misner (ADM) mass of each end of an $n$-dimensional asymptotically flat (AF) manifold with $3\le n\le 7$ and nonnegative scalar curvature is nonnegative, and if the ADM mass of an end is zero, then the manifold is isometric to the Euclidean space. Under the additional assumption that the manifold is spin, the same result is still true and was proved by Witten \cite{Witten}; see also \cite{ParkerTaubes,BTK86}. In both results the metrics are assumed to be smooth.
There are many results on positive mass theorems for nonsmooth metrics. Miao \cite{Miao2002} and the authors \cite{ShiTam2002} studied and proved positive mass theorems for metrics with corners. The metrics are smooth away from a compact hypersurface, are Lipschitz, and satisfy certain conditions on the mean curvatures of the hypersurface. The result was used to prove the positivity of the Brown-York quasilocal mass \cite{ShiTam2002}. In \cite{Lee2013}, Lee considered the positive mass theorem for metrics with bounded $C^2$ norm which are smooth away from a singular set with codimension greater than $n/2$, where $n$ is the dimension of the manifold. On the other hand, McFeron and Sz\'ekelyhidi \cite{McFeronSzekelyhidi2012} were able to prove Miao's result using the Ricci flow and the Ricci-DeTurck flow, which was studied in detail by Simon \cite{Simon2002}. More recently, Lee and LeFloch \cite{LeeFeloch2015} were able to prove, for spin manifolds and under rather general conditions, a positive mass theorem for metrics which may be singular. Their theorem can be applied to all previous results for nonsmooth metrics under the additional assumption that the manifold is spin.
Motivated by these studies of singular metrics on AF manifolds, we want to understand singular metrics on compact manifolds. One of the questions is whether there are nonflat metrics with nonnegative scalar curvature on $T^n$ which may be singular somewhere. Another question can be described as follows. It is now well known that in every conformal class of smooth metrics on a compact manifold without boundary there is a metric with constant scalar curvature, by the works of Yamabe, Trudinger, Aubin and Schoen; see \cite{Yamabe1960,Trudinger1968,Aubin1976-1,Aubin1976,Schoen1984}. One motivation for this result is to obtain Einstein metrics. It is well known that if a smooth metric on a compact manifold attains the Yamabe invariant and if the invariant is nonpositive, then the metric is Einstein. See \cite[p.126-127]{Schoen1989}. In this work, we will study whether this last result is still true for nonsmooth metrics.
Let us recall the definition of the Yamabe invariant, which is called the $\sigma$-invariant in \cite{Schoen1989}. Let $\mathcal{C}$ be a conformal class of smooth Riemannian metrics $g$; then the {\it Yamabe constant of $\mathcal{C}$} is defined as $$ Y(\mathcal{C})=\inf_{g\in\mathcal{C}}\displaystyle{\frac{\int_M\mathcal{S}_g\,dv_g}{(V(M,g))^{1-\frac2n}}}, $$ where $\mathcal{S}_g$ is the scalar curvature and $V(M,g)$ is the volume of $M$ with respect to $g$. The {\it Yamabe invariant} is defined as $$ \sigma(M)=\sup_{\mathcal{C}}Y(\mathcal{C}), $$ where the supremum is taken over all conformal classes of smooth metrics. It is finite; see \cite{Aubin1976}. If $\sigma(M)>0$, then in general it is still unclear whether a metric attaining $\sigma(M)$ is Einstein.
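The normalization exponent $1-\frac2n$ makes the Yamabe quotient scale-invariant, which is why the infimum is well defined within a conformal class: under the scaling $g\mapsto\lambda g$, $\lambda>0$, one has $\mathcal{S}_{\lambda g}=\lambda^{-1}\mathcal{S}_g$, $dv_{\lambda g}=\lambda^{\frac n2}\,dv_g$ and $V(M,\lambda g)=\lambda^{\frac n2}V(M,g)$, so that $$ \frac{\int_M\mathcal{S}_{\lambda g}\,dv_{\lambda g}}{(V(M,\lambda g))^{1-\frac2n}}=\frac{\lambda^{\frac n2-1}\int_M\mathcal{S}_g\,dv_g}{\lambda^{\frac n2\left(1-\frac2n\right)}(V(M,g))^{1-\frac2n}}=\frac{\int_M\mathcal{S}_g\,dv_g}{(V(M,g))^{1-\frac2n}}, $$ since $\frac n2\left(1-\frac2n\right)=\frac n2-1$.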
To answer the question on Einstein metrics, let $M^n$ be a compact smooth manifold without boundary and let $g_0$ be a continuous Riemannian metric on $M$ with $V(M,g_0)=1$ such that it is smooth outside a compact set $\Sigma$. The first case is that $\Sigma$ has codimension at least $2$ and $g_0$ is in $W_{\text{loc}}^{1,p}$ for some $p>n$ (see sections \ref{s-gradient} and \ref{s-Einstein-1} for more precise definitions).
\begin{thm}\label{t-intro-1} Let $(M^n,g_0)$ be as above. Suppose $\sigma(M)\le0$ and suppose the scalar curvature of $g_0$ outside $\Sigma$ is at least $\sigma(M)$. Then $g_0$ is Einstein outside $\Sigma$. If in addition that $g_0$ is Lipschitz, then after changing the smooth structure, $g_0$ is smooth and Einstein. \end{thm}
In case $\Sigma$ is a compact embedded hypersurface, as in \cite{Miao2002} we assume that near $\Sigma$, $g_0=dt^2+g_{\pm}(z,t)$, $z\in \Sigma$, so that $(t,z)$ are smooth coordinates and $g_-(\cdot,0)=g_+(\cdot,0)$, where $g_+$, $g_-$ are smooth up to $\Sigma$. Moreover, with respect to the unit normal $\frac{\partial}{\partial t}$, the mean curvature $H_+$ of $\Sigma$ with respect to $g_+$ and the mean curvature $H_-$ of $\Sigma$ with respect to $g_-$ satisfy $H_-\ge H_+$. Under these assumptions, we have:
\begin{thm}\label{t-intro-2} Let $(M^n,g_0)$ be as above with $V(M,g_0)=1$. Suppose $\sigma(M)\le0$ and suppose the scalar curvature of $g_0$ outside $\Sigma$ is at least $\sigma(M)$. Then $g_0$ is Einstein outside $\Sigma$. Moreover, $H_+=H_-$. \end{thm}
Note that it is easy to construct examples so that the theorem is not true if the assumption $H_-\ge H_+$ is removed.
In the course of the proofs, one also obtains the following: in case $M^n$ is $T^n$, under the regularity assumptions in Theorem \ref{t-intro-1} or Theorem \ref{t-intro-2}, if $g_0$ has nonnegative scalar curvature outside $\Sigma$, then $g_0$ is flat outside $\Sigma$.
The method of proofs of the above results can also be adapted to asymptotically flat (AF) manifolds. We want to discuss the positive mass theorem for singular metrics on AF manifolds of dimension $3\le n\le 7$, without assuming that the manifold is spin. We will prove the following:
\begin{thm}\label{t-intro-3} Let $(M^n,g_0)$ be an AF manifold with $3\leq n\leq 7$, where $g_0$ is a continuous metric on $M$ satisfying the regularity assumptions in Theorem \ref{t-intro-1}. Suppose $g_0$ has nonnegative scalar curvature outside $\Sigma$. Then the ADM mass of each end is nonnegative. Moreover, if the ADM mass of one of the ends is zero, then $M$ is diffeomorphic to $\mathbb R^n$ and is flat outside $\Sigma.$ \end{thm}
We should mention that in all the results above for nonsmooth metrics, the metrics are assumed to be continuous. On the other hand, one can construct an example of an AF metric with a cone singularity and nonnegative scalar curvature but with negative ADM mass; see section \ref{s-examples}. One can also construct examples of metrics on compact manifolds with a cone singularity so that Theorem \ref{t-intro-1} fails. In these examples, the metrics are not continuous.
The structure of the paper is as follows: in section \ref{s-examples}, we construct examples which are related to results in later sections; in section \ref{s-gradient} we obtain some estimates for the Ricci-DeTurck flow; in section \ref{s-approx-1} we use the Ricci-DeTurck flow to approximate singular metrics; in sections \ref{s-Einstein-1} and \ref{s-harmonic-1} we will prove Theorems \ref{t-intro-1} and \ref{t-intro-2}; in section \ref{s-pmt} we will prove Theorem \ref{t-intro-3}. In this work, the dimension of any manifold is assumed to be at least three. We will also use the Einstein summation convention.
The authors would like to thank Xue Hu and Richard Schoen for some useful discussions.
\section{examples of metrics with cone singularities}\label{s-examples}
In previous results on positive mass theorems on AF manifolds with singular metrics mentioned in section \ref{s-intro}, the metrics are all assumed to be continuous. To understand this condition on continuity and to motivate our study, in this section, we construct some examples with cone singularities which are related to the study in the later sections.
The following lemma is standard. See \cite{Petersen1998}.
\begin{lma}\label{l-warped-1} Consider the metric $g=dr^2+\phi^2(r) h_0$ on $(0,r_0)\times \mathbb{S}^{n-1}$, where $h_0$ is the standard metric of $\mathbb{S}^{n-1}$, $n\ge 3$ and $\phi$ is a smooth positive function on $(0,r_0)$. Then the scalar curvature of $g$ is given by $$ \mathcal{S}=(n-1)\left[- \frac{2\phi''}{\phi}+ (n-2)\frac{1-(\phi')^2}{\phi^2}\right]. $$
Suppose $\phi=\alpha r^\beta$, with $\alpha, \beta>0$. Then $\mathcal{S}>0$ if $\alpha<1$, $\beta=1$ or if $0<\beta\le \frac2n$. In both cases, the metric is not continuous up to $r=0$. If $\alpha>1, \beta=1$, then $\mathcal{S}<0$ for $r$ small enough. \end{lma}
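For the reader's convenience, the sign assertions can be checked by substituting $\phi=\alpha r^\beta$ into the formula for $\mathcal{S}$: since $\phi'=\alpha\beta r^{\beta-1}$ and $\phi''=\alpha\beta(\beta-1)r^{\beta-2}$,
\begin{align*}
\mathcal{S}&=(n-1)\left[-\frac{2\phi''}{\phi}+(n-2)\frac{1-(\phi')^2}{\phi^2}\right]\\
&=(n-1)\left[\beta(2-n\beta)\,r^{-2}+\frac{n-2}{\alpha^2}\,r^{-2\beta}\right].
\end{align*}
The coefficient $\beta(2-n\beta)$ is nonnegative precisely when $0<\beta\le\frac2n$, while the second term is positive. For $\beta=1$ the two terms combine to $\mathcal{S}=(n-1)(n-2)(\alpha^{-2}-1)r^{-2}$, which is positive exactly when $\alpha<1$ and negative when $\alpha>1$.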
We can construct an asymptotically flat metric with nonnegative scalar curvature on $\mathbb R^3\setminus\{0\}$ which behaves like $dr^2+(\alpha r)^2 h_0$ for some $0<\alpha<1$ and has positive mass. \begin{prop}\label{p-cone-pm-1}
Let $0<\epsilon<\frac12$ and let $\eta(x)=\eta(r)$, with $r=|x|$, be a smooth function on $\mathbb R^3\setminus \{0\}$ such that \begin{equation*} \left\{
\begin{array}{ll}
\eta(r)=-\epsilon(1-\epsilon)r^{-\epsilon-2}, & \hbox{if $0<r\le 1$;} \\
\eta(r)<0, & \hbox{if $1\le r\le 2$;}\\ \eta(r)=0, &\hbox{if $r\ge 2$.}
\end{array} \right. \end{equation*} Let $\phi$ be the function defined on $\mathbb R^3\setminus\{0\}$ with $$ \phi(r)=\int_1^r\frac1{s^2}\left(\int_0^st^2\eta(t)dt\right)ds. $$ Then there are constants $a, b>0$ such that if $$ u=\phi+b+\frac a2+1 $$
then $u>0$. Moreover, if $g=u^4g_e$, where $g_e$ is the standard Euclidean metric, then near infinity, $$ g=(1+\frac a r)^4 g_e, $$ and near $r=0$, $$ g=d\rho^2+\left((1-2\epsilon)^2\rho^2+O(\rho^{2+\delta})\right)h_0 $$ for some $\delta>0$, where $$ \rho=\int_0^{r} u^2(t) dt. $$ $g$ has nonnegative scalar curvature and has zero scalar curvature outside a compact set. Moreover, the end near infinity is asymptotically flat in the sense of Definition \ref{defaf} in section \ref{s-pmt}, and has positive mass $2a$. \end{prop} \begin{proof} Let $\Delta_0$ be the Euclidean Laplacian. Then one can check that $$ \Delta_0\phi=\eta\le0. $$ For $0< r\le 1$, $$ \phi(r)=r^{-\epsilon}-1. $$ Let $$ a=-\int_0^2 s^2\eta(s)ds>0, $$ and $$ b=-\int_1^2\frac1{s^2}\left(\int_0^s \tau^2\eta(\tau)d\tau\right)ds>0. $$ Then for $r\ge 2$, \begin{equation*} \begin{split} \phi(r)=&-b+\int_2^r\frac1{s^2}\left(\int_0^st^2\eta(t)dt\right)ds\\ =&-b-a\int_2^r\frac1{s^2}ds\\ =&-b-\frac a2+\frac ar. \end{split} \end{equation*} Hence if $u=\phi+b+\frac a2+1$, then $\Delta_0 u=\eta\le0$. Since $u\to\infty$ as $r\to 0$ and $u\to 1$ as $r\to \infty$, $u>0$ by the strong maximum principle. The metric $$ g=u^4g_e $$ is defined on $\mathbb R^3\setminus\{0\}$, has nonnegative scalar curvature and has zero scalar curvature near infinity. $g$ is also asymptotically flat. Near $r=0$, $$ u=b+\frac a2+r^{-\epsilon}. $$
Since $0<\epsilon<\frac12$, we let $$ \rho=\int_0^r u^2(t) dt=\frac1{(1-2\epsilon)}r^{1-2\epsilon}+O(r^{1-\epsilon}). $$ So $$ \rho^2= \frac1{(1-2\epsilon)^2}r^{2-4\epsilon}+O(r^{2-3\epsilon}). $$ Hence near $r=0$, \begin{equation*} \begin{split} g=&d\rho^2+u^4r^2h_0 \\ =& d\rho^2+(r^{2-4\epsilon}+O(r^{2-3\epsilon}))h_0\\ =&d\rho^2+((1-2\epsilon)^2\rho^2+O(r^{2-3\epsilon}))h_0\\ =&d\rho^2+(\alpha^2\rho^2+O(r^{2-3\epsilon}))h_0 \end{split} \end{equation*} where $\alpha=1-2\epsilon$. Note that $r^{2-3\epsilon}=O(\rho^{2+\delta})$ for some $\delta>0$.
\end{proof}
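As a consistency check for the first step of the proof, the identity $\Delta_0\phi=\eta$ can be seen directly for $0<r\le1$: there,
$$ \int_0^s t^2\eta(t)\,dt=-\epsilon(1-\epsilon)\int_0^s t^{-\epsilon}\,dt=-\epsilon s^{1-\epsilon}, $$
so that
$$ \phi(r)=\int_1^r\frac{-\epsilon s^{1-\epsilon}}{s^2}\,ds=\int_1^r\left(-\epsilon s^{-1-\epsilon}\right)ds=r^{-\epsilon}-1, $$
and indeed $\Delta_0(r^{-\epsilon})=r^{-2}\left(r^2(r^{-\epsilon})'\right)'=-\epsilon(1-\epsilon)r^{-\epsilon-2}=\eta(r)$.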
The following example exhibits the type of singularity which is called a zero area singularity in \cite{BrayJauregui2013}.
\begin{prop}\label{p-cone-pm-2} Let $m>0$ and let $\phi=1-\frac {2m}r$. Then the metric $$ g=\phi^4g_e $$ is an asymptotically flat metric defined on $r>2m$ in $\mathbb R^3$, with zero scalar curvature and with negative mass $-4m$. Moreover, near $r=2m$, $$ g=d\rho^2+c\rho^\frac43(1+O(\rho^\frac23))h_0 $$ for some $c>0$, where $$ \rho=\int_0^{r-2m}\phi^2(t+2m)dt. $$ Hence near $\rho=0$ the metric is asymptotically of the form in Lemma \ref{l-warped-1} with $\beta=\frac23$. \end{prop}
\begin{proof} We only need to consider $g$ near $r=2m$. The rest is well-known. Let $t=r-2m$, $r>2m$. Then $$ \widetilde \phi(t)=\phi(t+2m)=\frac{t}{t+2m}=\frac{t}{2m}\left(1-\frac t{2m}+\frac{t^2}{4m^2}+O(t^3)\right). $$ $$ \rho=\int_0^t\widetilde \phi^2(s)ds=\int_0^t\frac{s^2}{ (s+2m)^2}ds. $$ Note that as $r\to 2m$, $\rho\to0$. In terms of $\rho$, near $\rho=0$, $$ g=d\rho^2+\phi^4r^2h_0. $$ Near $\rho=0$, \begin{equation*} \begin{split} \phi^4r^2=&\frac{t^4}{(t+2m)^4}(t+2m)^2\\ =&c\rho^\frac43(1+O(\rho^\frac23)) \end{split} \end{equation*} for some $c>0$. \end{proof}
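To leading order, the constant $c$ above can be made explicit: near $t=0$,
$$ \rho=\int_0^t\frac{s^2}{(s+2m)^2}\,ds=\frac{t^3}{12m^2}\big(1+O(t)\big), \qquad \phi^4r^2=\frac{t^4}{(t+2m)^2}=\frac{t^4}{4m^2}\big(1+O(t)\big), $$
so inverting the first relation gives $t=(12m^2\rho)^{\frac13}\big(1+o(1)\big)$ and hence
$$ c=\frac{(12m^2)^{\frac43}}{4m^2}=\frac{12^{\frac43}}{4}\,m^{\frac23}. $$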
We can also construct a conical metric on $T^3\setminus \{ \text {a point}\}$ with nonnegative scalar curvature and with positive scalar curvature somewhere.
First, we have \begin{prop}\label{p-gluing1} Let $m>0$. There is a metric $g$ on $\mathbb{R}^3\setminus B(2m)$ satisfying:
\begin{enumerate}
\item[(i)] The scalar curvature $R\geq 0$ and $R>0$ somewhere;
\item[(ii)] There exist $r_0$ and $r_1$ with $r_1>r_0>2m$ so that $g=(1-\frac{2m}r)^4 g_e$ for any $r\in (2m, r_0)$ and $g=g_e$ for any $r\geq r_1$, where $g_e$ is the Euclidean metric.
\end{enumerate}
\end{prop}
\begin{proof} Let $r_1>r_0>2m$ be constants to be chosen later. Let $\eta(r)$ be a smooth nonincreasing function with \begin{equation} \eta(r)=\left\{
\begin{array}{ll}
2m, & \hbox{$2m\leq r\leq r_0$;} \\
0, & \hbox{$r\geq r_1$.}
\end{array}
\right. \end{equation} For any $\rho\geq 2m$, let
$$ y(\rho)=\int^\rho _{2m} \frac{\eta(r)}{r^2} dr. $$ By choosing suitable $r_0, r_1$, we may arrange that $y(\rho)=1$ for all $\rho\geq r_1$; then we see that
\begin{equation} y(r)=\left\{
\begin{array}{ll}
1-\frac{2m}{r}, & \hbox{$2m\leq r\leq r_0$;} \\
1, & \hbox{$r\geq r_1$.}
\end{array}
\right. \end{equation} We claim that
$$ \Delta_0 y \leq 0 \quad \text{on $\mathbb R^3\setminus B(2m)$}, $$ where $\Delta_0$ is the standard Laplace operator on $\mathbb R^3$. By a direct computation, we see that
\begin{equation} \begin{split} \Delta_0 y =y''+\frac2r y'=r^{-2}(r^2 y')'=r^{-2}\eta'\leq 0, \end{split} \end{equation} since $\eta$ is nonincreasing.
For any $x\in\mathbb R^3\setminus B(2m)$, let $u(x)=y( |x|)$; then $g=u^4 (dr^2 +r^2 h_0)$ is the required metric. \end{proof}
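Property (i) can be read off from the conformal transformation law of the scalar curvature in dimension three: for $g=u^4g_e$ with $g_e$ flat,
$$ \mathcal{S}_g=-8u^{-5}\Delta_0 u=-8u^{-5}r^{-2}\eta'(r)\ge0, $$
with strict inequality wherever $\eta'<0$, which happens somewhere in $(r_0,r_1)$ since $\eta$ decreases from $2m$ to $0$ there.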
Suppose $T^3 (r)$ is a flat torus; by taking $r$ large enough we may glue $(B_r\setminus B_{2m},g)$ with $T^3 (r) \setminus B_r$ directly. As in Proposition \ref{p-cone-pm-2}, near $r=2m$ the metric can be considered as a metric with a cone singularity. The question is whether there is a metric on the $n$-torus which has a cone singularity of the form $dr^2+\alpha^2 r^2h_0$ with $0<\alpha<1$ and with nonnegative scalar curvature. This will be answered in section \ref{s-approx-1}. The problem can be reduced to the study of singular metrics on $T^n$ with nonnegative scalar curvature.
\section{gradient estimates for solution to the $h$-flow}\label{s-gradient}
We want to use the Ricci-DeTurck flow to deform a singular metric to a smooth one. We need some basic facts about the flow.
Let $(M^n, h)$ be a complete manifold without boundary. We assume that the curvature of $h$ and its covariant derivatives are bounded: \begin{equation}\label{e-h-bounds-1}
|\wt\nabla^{(i)} \widetilde{\text{Rm}} |\le k_i
\end{equation}
for all $0\le i\le 3$. Here $\wt\nabla$ is the covariant derivative with respect to $h$ and $\widetilde{\text{Rm}}$ is the curvature tensor of $h$. A smooth family of metrics $g(t)$ on $M\times(0,T]$, $T>0$, is said to be a solution to the $h$-flow if $g(t)$ satisfies: \begin{equation}\label{e-hflow}\begin{split}
\frac{\partial}{\partial t} g_{ij}=&g^{\alpha\beta}\wt\nabla_\alpha\wt\nabla_\beta g_{ij}-g^{\alpha\beta}g_{ip}h^{pq}\widetilde{\text{Rm}}_{j\alpha q\beta}-g^{\alpha\beta}g_{jp}h^{pq}\widetilde{\text{Rm}}_{i\alpha q\beta}\\
&+\frac{1}{2}g^{\alpha\beta}g^{pq}(\wt\nabla_i g_{p\alpha}\cdot\wt\nabla_j g_{q\beta}+2\wt\nabla_\alpha g_{jp}\cdot\wt\nabla_q g_{i\beta}-2\wt\nabla_\alpha g_{jp}\cdot\wt\nabla_\beta g_{iq}\\
&-2\wt\nabla_j g_{\alpha p}\cdot\wt\nabla_\beta g_{iq}-2\wt\nabla_i g_{\alpha p}\cdot\wt\nabla_\beta g_{jq}). \end{split} \end{equation} Let \begin{equation}\label{e-box} \Box=\frac{\partial}{\partial t}-g^{ij}\wt\nabla_i\wt\nabla_j. \end{equation}
For a constant $\delta>1$, $h$ is said to be $\delta$ close to a metric $g$ if
$$
\delta^{-1}h\le g\le \delta h.
$$
In \cite{Simon2002}, Simon obtained the following:
\begin{thm}\label{t-Simon} {\rm (Simon)}
There exists $\epsilon=\epsilon(n)>0$ depending only on $n$ such that if $(M^n,g_0)$ is an $n$-dimensional compact or noncompact manifold without boundary with continuous Riemannian metric $g_0$ which is $(1+\epsilon(n))$ close to a smooth complete Riemannian metric $h$ with curvature bounded by $k_0$, then the $h$-flow \eqref{e-hflow} has a smooth solution on $M\times(0,T]$ for some $T>0$ with $T$ depending only on $n, k_0$ such that $g(t)\to g_0$ as $t\to 0$ uniformly on compact sets and such that
$$
\sup_{x\in M}|\wt\nabla^i g(t)|^2\le \frac{C_i}{t^i}
$$
for all $i$, where $C_i$ depends only on $n, k_0,\dots,k_i$, with $k_j$ the bound of $|\wt\nabla^j \text{Rm}(h)|$. Moreover, $h$ is $(1+2\epsilon)$ close to $g(t)$ for all $t$. Here and in the following $|\cdot|$ is the norm with respect to $h$.
\end{thm}
In case $g_0$ is smooth, and if $|\wt\nabla g_0|$ is bounded, then it is also proved in \cite{Simon2002} that $$
|\wt\nabla g(t)|\le C;\ \ |\wt\nabla^2 g(t)|\le Ct^{-\frac12}. $$
We want to obtain estimates in case $g_0\in W_{\text{loc}}^{1,p}$ in the sense that $|\wt\nabla g_0|$ is in $L^p_{\text{loc}}$, for $p>n$. We have the following:
\begin{lma}\label{l-gradient-1} Fix $p\ge 2$. There is $b=b(n,p)>0$ depending only on $n, p$, with $e^b\le 1+\epsilon(n)$ where $\epsilon(n)$ is the constant in Theorem \ref{t-Simon}, such that if $g_0$ is a smooth metric which is $e^b$ close to $ h$, where $h$ is smooth and satisfies \eqref{e-h-bounds-1} for $0\le i\le 2$, then the solution $g(t)$ of the $h$-flow with initial metric $g_0$ on $M\times[0,T]$ described in Theorem \ref{t-Simon} satisfies the following estimates: there is a constant $C>0$ depending only on $n, p, h$ such that for any $x_0\in M$ with injectivity radius $\iota(x_0)$ with respect to $h$, the following estimate is true: $$
|\wt\nabla g(t,x_0)|^2\le \frac{CD}{ t^{\frac n{2 p}}}
$$
for $T>t>0$, where $D$ depends only on $n$, the lower bound of $\iota(x_0)$ and the $L^{2p}$ norm of $|\wt\nabla g_0|$ in $B(x_0,\iota(x_0))$, the geodesic ball with respect to $h$. \end{lma}
\begin{proof} Suppose $g_0$ is $e^b$ close to $h$, with $e^b<1+\epsilon(n)$. Then for any $\lambda>0$, $\lambda g_0$ is also $e^b$ close to $\lambda h$. Moreover, if $g(t)$ is the solution to the $h$-flow, then $\lambda g(\frac1\lambda t)$ is a solution to the $\lambda h$-flow. Hence by scaling, we may assume that $k_0+k_1+k_2\le 1$. The solution $g(t)$ constructed in \cite{Simon2002} is $e^{2b}$ close to $h$. Moreover, we may assume that $T\le 1$.
Denote $\iota(x_0)$ by $\iota_0$ and we may assume that $\iota_0\le 1$. In the following $c_i$ will denote a constant depending only on $n$. Let $m\ge 2$ be an integer, which will be chosen depending only on $n, p$. Let $b=\frac1{2m}$. First choose $m$ so that $e^b\le 1+\epsilon(n)$. Let $f_1=|\wt\nabla g|$ and $\psi=\left(a+\sum_{i=1}^n\lambda_i^m\right)f_1^2$ with $a>0$, where $\lambda_i$ are the eigenvalues of $g(t)$ with respect to $h$. By choosing $a$ depending only on $n$ and $m$ large enough depending only on $n$, as in \cite{Shi1989,Simon2002}, see also \cite[(5.8)]{HuangTam2015}, we have
\begin{equation}\label{e-psi-1}
\Box \psi\le c_1-c_2m^2 f_1^4.
\end{equation}
Let $x^i$ be normal coordinates in $B(x_0,\iota_0)$. Since $k_0+k_1+k_2\le 1$, by \cite[Corollary 4.11]{Hamilton1995}, on $B(x_0,\iota_0)$ we have
\begin{equation}\label{e-Hamilton-1}
\left\{
\begin{array}{ll}
\frac12 |\xi|^2\le h_{ij}\xi^i\xi^j\le 2|\xi|^2,\ \ \text{\rm for $\xi\in\mathbb R^n$}; \\
\left|D^\beta_x h_{ij}\right|\le c_3, \ \ \text{\rm for all $i, j$ },
\end{array}
\right.
\end{equation} where $h_{ij}=h(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j})$ and $\beta=(\beta_1,\dots,\beta_n)$ is a multi-index with $|\beta|\le 2$ and $D_{x^k}=\frac{\partial}{\partial x^k}$. Let $\eta$ be a smooth function on $[0,1]$ such that $0\le \eta\le 1$, $\eta(s)=0$ for $s\ge \frac34$, $\eta(s)=1$ for $0\le s\le \frac12$. Still denote $\eta(|x|/\iota_0)$ by $\eta(x)$. Then $|\wt\nabla \eta|\le c_4\iota_0^{-1}$. We have \begin{equation*} \begin{split} \frac{d}{dt}\int_{B(x_0,\iota_0)}& \eta^2\psi^p dv_h\\ =&p\int_{B(x_0,\iota_0)} \eta^2\psi^{p-1}\psi_t dv_h\\ \le & p\int_{B(x_0,\iota_0)} \eta^2\psi^{p-1}g^{ij}\wt\nabla_i\wt\nabla_j \psi dv_h+ p\int_{B(x_0,\iota_0)} \eta^2\psi^{p-1} (c_1-c_2m^2f_1^4) dv_h\\
\le & -pc_5\int_{B(x_0,\iota_0)}(p-1)\eta^2\psi^{p-2}|\wt\nabla\psi|^2 dv_h
+pc_6\int_{B(x_0,\iota_0)} \eta^2\psi^{p-1}f_1|\wt\nabla\psi| dv_h\\
&+pc_7\iota_0^{-1}\int_{B(x_0,\iota_0)} \eta\eta'\psi^{p-1}|\wt\nabla\psi| dv_h + p\int_{B(x_0,\iota_0)}\eta^2\psi^{p-1} (c_1-c_2m^2f_1^4) dv_h\\ \le & \frac{4c_6p}{p-1}\int_{B(x_0,\iota_0)} f_1^2\eta^2\psi^p dv_h+ \frac{4c_7p}{(p-1)\iota_0^2}\int_{B(x_0,\iota_0)} (\eta')^2 \psi^pdv_h \\& + p\int_{B(x_0,\iota_0)}\eta^2\psi^{p-1} (c_1-c_2m^2f_1^4) dv_h\\ \le &\frac{c_8p}{p-1}\int_{B(x_0,\iota_0)} f_1^4\eta^2\psi^{p-1} dv_h+ \frac{4c_7p}{(p-1)\iota_0^2}\int_{B(x_0,\iota_0)} (\eta')^2 \psi^pdv_h \\& + p\int_{B(x_0,\iota_0)}\eta^2\psi^{p-1} (c_1-c_2m^2f_1^4) dv_h \end{split} \end{equation*} where we have used the fact that $\psi\le cf_1^2$ for some constant $c$ depending only on $n$, since $2bm=1$ implies $\lambda_i^m\le e^{2bm}=e$ for all $i$. Hence by choosing $m$ large enough depending only on $n, p$ and if $b=\frac1{2m}$, we have
\begin{equation*} \begin{split} \frac{d}{dt}\int_{B(x_0,\iota_0)} \eta^2\psi^p dv_h\le c_9p\iota_0^{-2}\left(\int_{B(x_0,\iota_0)} (\eta')^2 \psi^pdv_h + \int_{B(x_0,\iota_0)}\eta^2\psi^{p-1} dv_h\right).\\ \end{split}
\end{equation*} By replacing $\eta$ by $\eta^q$ for $q\ge 1$, we may assume that $|\eta'|\le C\eta^{1-\frac1q}$, where $C$ depends only on $q$. Let $q=2p$, say, then we have \begin{equation*} \begin{split} &\frac{d}{dt} \int_{B(x_0,\iota_0)} \eta^2\psi^p dv_h\\ \le& C_1 \iota_0^{-2}\left(\int_{B(x_0,\iota_0)} (\eta^2)^{ 1-\frac1{2p} } \psi^pdv_h+\int_{B(x_0,\iota_0)}\eta^2\psi^{p-1} dv_h\right) \\ \le &C_1\iota_0^{-2}\left[\left(\int_{B(x_0,\iota_0)}\eta^2\psi^p dv_h\right)^{1-\frac1{2p}}\left(\int_{B(x_0,\iota_0)} \psi^p dv_h\right)^{\frac1{2p}}+\left(\int_{B(x_0,\iota_0)}\eta^2\psi^p dv_h\right)^{1-\frac1p} \right]\\ \le &C_2\iota_0^{-2}\left[\left(\int_{B(x_0,\iota_0)}\eta^2\psi^p dv_h\right)^{1-\frac1{2p}}t^{-\frac12}+\left(\int_{B(x_0,\iota_0)}\eta^2\psi^p dv_h\right)^{1-\frac1p} \right] \end{split} \end{equation*} here and below upper case $C_i$ denote a positive constant depending only on $n, p$ and $h$. Here we have used the estimates in Theorem \ref{t-Simon}. Let $$ F=\int_{B(x_0,\iota_0)} \eta^2\psi^p dv_h+1. $$ Then we have $$\frac{d}{dt}F\le C_3\iota_0^{-2}F^{1-\frac1{2p}}t^{-\frac12}. $$
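The bound on $F$ used in the next step follows by integrating this differential inequality (with constants not optimized): since
$$ \frac{d}{dt}\left(F^{\frac1{2p}}\right)=\frac1{2p}F^{\frac1{2p}-1}\frac{dF}{dt}\le \frac{C_3}{2p}\,\iota_0^{-2}t^{-\frac12}, $$
we get, for $0<t\le T\le1$,
$$ F^{\frac1{2p}}(t)\le F^{\frac1{2p}}(0)+\frac{C_3}{p}\,\iota_0^{-2}t^{\frac12}, $$
while $F(0)\le 1+c(n)^p\int_{B(x_0,\iota_0)}|\wt\nabla g_0|^{2p}dv_h$ because $\psi(\cdot,0)\le c(n)|\wt\nabla g_0|^2$.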
Let $I=\int_{B(x_0,\iota_0)}|\wt\nabla g_0|^{ 2p} dv_h$. We conclude that \begin{equation*} F(t)\le C_4\left(I+\iota_0^{-2p}\right), \end{equation*} or \begin{equation*} \int_{B(x_0,\frac12\iota_0)}\psi^p dv_h\le C_5 \left(I+\iota_0^{-2p}\right). \end{equation*} Hence for $0<t_0<T$, by the mean value inequality \cite[Theorem 7.21]{Lieberman} applied to \eqref{e-psi-1} on $B(x_0,r)\times (t_0-r^2,t_0)$ with $r=\frac12\sqrt {t_0}$, we have
\begin{equation*}
\psi^p(x_0,t_0)\le C_6 r^{-n}\left(I+\iota_0^{-2p}+1\right).
\end{equation*}
From this the result follows.
\end{proof}
Assume $2p>n$ and let $\delta=n/(2p)$. Let $b$ be as in Lemma \ref{l-gradient-1}. Assume $h$ satisfies \eqref{e-h-bounds-1}, for $0\le i\le 2$. Then we have the following:
\begin{lma}\label{l-gradient-2} Let $x_0\in M$ and let $r_0>0$. Let $$
I:=\int_{B(x_0,r_0)}|\wt\nabla g_0|^{2p}dv_h. $$ Let $\iota$ be the infimum of the injectivity radii $\iota(x)$, $x\in B(x_0,r_0)$. Then there is a constant $C$ depending only on $n, p, h, r_0$, the lower bound of $\iota$ and the upper bound of $I$ such that $$
|\wt\nabla^2g(x_0,t)|^2\le C t^{-1-\delta}. $$ \end{lma} \begin{proof} In the following, $C_i$ will denote a constant depending only on the quantities mentioned in the lemma. By Lemma \ref{l-gradient-1}, we have \begin{equation}\label{e-gradient-1}
\sup_{x\in B(x_0,\frac {r_0}2)}|\wt\nabla g(x, t)|^2\le C_1 t^{-\delta}.
\end{equation} Let $f_i=|\wt\nabla^i g|$. As in \cite{Shi1989,Simon2002}, see also \cite[(5.11)]{HuangTam2015}, one can find $a>0$ depending only on the quantities mentioned in the lemma such that if $\psi=(at^{-\delta}+f_1^2)f_2^2$, then
\begin{equation}\label{e-ddg-1} \begin{split} \square\psi\le -\frac18 f_2^4+C_2 t^{-4\delta}
\end{split}
\end{equation} on $B(x_0,\frac{r_0}2)\times(0,T]$. We may assume that $\iota(x_0)\le \frac {r_0}2$. Let $\eta$ be a cutoff function so that $(\eta')^2+|\eta''|\le c\eta$ for some absolute constant $c$, as in the proof of Lemma \ref{l-gradient-1}, and let $F=t^{1+2\delta}\eta\psi.$ Since $g$ is smooth up to $t=0$, and $f_1^2\le C_1 t^{-\delta}$, we have $F(\cdot,0)=0$. If $F$ has a positive maximum, then there is $x_1\in B(x_0,\iota)$ and $T\ge t_1>0$ such that $$ F(x_1,t_1)=\sup_{B(x_0,\iota)\times[0,T]}F. $$ Hence at $(x_1,t_1)$, we have \begin{equation*} \eta\wt\nabla_i\psi+\psi\wt\nabla_i\eta=0 \end{equation*} and \begin{equation*} \begin{split} 0\le &\Box F\\ =& t_1^{1+2\delta} \left(\eta\Box\psi+\psi \Box \eta-2g^{ij}\wt\nabla_i \psi\wt\nabla_j\eta\right)+(1+2\delta) t_1^{-1}F \\ \le &t_1^{1+2\delta}\left[\eta\left(-\frac18 f_2^4+C_2 t^{-4\delta}\right)-\psi g^{ij}\wt\nabla_i\wt\nabla_j \eta+2g^{ij}\eta^{-1}\psi\wt\nabla_i\eta\wt\nabla_j\eta \right]+(1+2\delta) t_1^{-1}F\\ \le &t_1^{1+2\delta}\left[\eta\left(-\frac18 f_2^4+C_2 t^{-4\delta}\right)+C_3\psi \right]+(1+2\delta) t_1^{-1}F \end{split} \end{equation*} Multiplying the inequality by $t_1^{1+2\delta}\eta(at^{-\delta}+f_1^2)^2=F\psi^{-1}(at^{-\delta}+f_1^2)^2$, we have \begin{equation*} \begin{split} 0\le &-\frac18 F^2+C_3t_1^{1+\delta}(at^{-\delta}+f_1^2)F+(1+2\delta)t^{2\delta}(at^{-\delta}+f_1^2)^2F\\ \le& -\frac18 F^2+C_4F. \end{split} \end{equation*} Hence $F\le 8C_4.$ From this it is easy to see that the result follows. \end{proof}
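To spell out the last step of the proof: $F\le 8C_4$ holds on all of $B(x_0,\iota)\times[0,T]$, and since $\eta=1$ near $x_0$ and $\psi\ge at^{-\delta}f_2^2$,
$$ f_2^2(x_0,t)\le \frac{t^{\delta}\,\psi(x_0,t)}{a}= \frac{t^{\delta}\,F(x_0,t)}{a\,t^{1+2\delta}}\le \frac{8C_4}{a}\,t^{-1-\delta}, $$
which is the claimed estimate for $|\wt\nabla^2 g(x_0,t)|^2$.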
\section{approximation of singular metrics}\label{s-approx-1}
Let $(M^n,\mathfrak{b})$ be a smooth complete manifold of dimension $n$ without boundary. Let $g_0$ be a continuous Riemannian metric on $M$ satisfying the following:
\begin{enumerate}
\item [({\bf a1})] There is a compact subset $\Sigma$ such that $g_0$ is smooth on $M\setminus \Sigma$.
\item [({\bf a2})] $g_0$ is in $W_{\text{loc}}^{1,p}$ for some $p\ge1$ in the sense that $g_0$ has weak derivative and $|g_0|_\mathfrak{b}$, $|^\mathfrak{b}\nabla g_0|_\mathfrak{b}\in L^p_{\text{loc}}$ with respect to the metric $\mathfrak{b}$. \end{enumerate}
We want to approximate $g_0$ by smooth metrics with uniform bound on the $W^{1,p}$ norm locally. As in \cite{Lee2013}, cover $\Sigma$ by finitely many precompact coordinate patches $U_1,\dots,U_N$ and cover $M$ with
$U_1,\dots,U_N$ and $U_{N+1}$ so that $U_{N+1}$ is an open set with $U_{N+1}\cap \Sigma=\emptyset$. We may assume that there is a partition of unity $\psi_k$ with $ \text{\rm supp}(\psi_k)\subset U_k$. Since $g_0$ is continuous, we may assume that $g_0$, $\mathfrak{b}$ and the Euclidean metric are equivalent in each $U_k$, $1\le k\le N$. For any $a>0$, let $\Sigma(a)=\{x\in M| d_{\mathfrak{b}}(x,\Sigma)<a\}$. By \cite[Lemma 3.1]{Lee2013}, for each $1\le k\le N$, there is a smooth function $\epsilon\ge\rho_k\ge0$ in $U_k$ such that for $\epsilon>0$ small enough: \begin{equation}\label{e-cutoff-1} \left\{
\begin{array}{ll}
\rho_k= \epsilon &\hbox{in $\Sigma(\epsilon)\cap U_k$;} \\
\rho_k= 0 &\hbox{in $U_k\setminus \Sigma(2\epsilon)$;} \\
|\partial\rho_k|\le C; & \\
|\partial^2\rho_k|\le C\epsilon^{-1};& \\
\end{array} \right. \end{equation} for some $C$ independent of $\epsilon$ and $k$. Here $\partial\rho_k$ and $\partial^2\rho_k$ are derivatives with respect to the Euclidean metric. Let $g^{k}_{0}=\psi_kg_0$ and for $1\le k\le N$, let \begin{equation}\label{e-approx-1} (g^{k}_{\epsilon,0})_{ij}(x)=\int_{\mathbb R^n}g_{0,ij}^k(x-\lambda\rho_k(x)y)\varphi(y)dy. \end{equation} Here $\varphi$ is a nonnegative smooth function in $\mathbb R^n$ with support in $B(1)$ and integral equal to 1, and $\lambda>0$ is a constant independent of $\epsilon$ and $k$, to be determined. Finally, define \begin{equation}\label{e-approx-2} g_{\epsilon,0}=\sum_{k=1}^Ng^{k}_{\epsilon,0}+\psi_{N+1}g_0. \end{equation}
\begin{lma}\label{l-approx-1} For $\epsilon>0$ small enough, $g_{\epsilon,0}$ is a smooth metric such that $g_{\epsilon,0}$ converges to $g_0$ in the $C^0$ norm and $g_{\epsilon,0}=g_0$ outside $\Sigma(2\epsilon)$. Moreover, there is a constant $C$ independent of $\epsilon$ such that $$
\int_{\Sigma(1)}|^\mathfrak{b}\nabla g_{\epsilon,0}|_\mathfrak{b}^pdv_{\mathfrak{b}}\le C. $$ \end{lma} \begin{proof} It is easy to see that $g_{\epsilon,0}$ is smooth and converges to $g_0$ uniformly as $\epsilon\to0$. In order to estimate the $W^{1,p}_{\text{loc}}$ norm of $g_{\epsilon,0}$, it is sufficient to estimate the norm in each $U_k$, $1\le k\le N$. Moreover, we may assume that $\mathfrak{b}$ is the Euclidean metric. So it is sufficient to prove the following: for fixed $k$, $1\le k\le N$, and for any $u\in W^{1,p}_{\text{loc}}$, if $$ v(x)=\int_{\mathbb R^n}u(x-\lambda\rho_k(x)y)\varphi(y)dy, $$
then the $W^{1,p}$ norm of $v$ in $\Sigma(1)$ can be estimated in terms of the $W^{1,p}$ norm of $u$ in $\Sigma(2)$, say. For fixed $y$ with $|y|\le 1$, let $z=x-\lambda\rho_k(x)y$. Then $$ \frac{\partial z^i}{\partial x^j}=\delta_{ij}-\lambda y^i\frac{\partial\rho_k}{\partial x^j}. $$ By \eqref{e-cutoff-1}, we can choose $\lambda>0$ small enough independent of $\epsilon$ and $k$ so that $$2\ge \det\left(\delta_{ij}-\lambda y^i\frac{\partial\rho_k}{\partial x^j} \right)\ge \frac12,
$$
and so that $z=z(x)$ is a diffeomorphism with the Jacobian being bounded above and below by some constants independent of $\epsilon, k$. Hence \begin{equation*} \begin{split}
\left(\int_{\Sigma(1)\cap U_k}|v|^p(x)dx\right)^\frac1p\le &\left[\int_{\Sigma(1)\cap U_k}\left(\int_{\mathbb R^n}|u(x-\lambda\rho_k(x)y)|\varphi(y)dy\right)^pdx\right]^\frac1p\\
\le &\int_{B(1)}\varphi(y)\left(\int_{\Sigma(1)\cap U_k}|u(x-\lambda\rho_k(x)y)|^pdx\right)^\frac1p dy\\
\le &C_1\left(\int_{\Sigma(2)}|u(z)|^p dz\right)^\frac1p \end{split} \end{equation*} for some constant $C_1$ independent of $\epsilon, k$, provided $\epsilon$ is small enough. Now, if $x\notin \Sigma(2\epsilon)$, then $v(x)=u(x)$, and if $x\in \Sigma(\epsilon)$, then $v(x)$ is the standard mollification. If $x\in \Sigma(2\epsilon)\setminus \Sigma(\epsilon)$, then \begin{equation*}
|\partial v|(x)\le \int_{\mathbb R^n}|\partial u|(x-\lambda\rho_k(x)y)\left(1+\lambda|\partial \rho_k(x)|\right)\varphi(y)dy.
\end{equation*} Since $|\partial\rho_k|$ is bounded by \eqref{e-cutoff-1}, we can prove as before that $$
\left(\int_{\Sigma(1)\cap U_k}|\partial v|^p(x)dx\right)^\frac1p\le C_2\left(\int_{\Sigma(2)}|\partial u|^p (z) dz\right)^\frac1p $$ for some constant $C_2$ independent of $\epsilon, k$ provided $\epsilon$ is small enough. This completes the proof of the lemma.
\end{proof}
In addition to ({\bf a1}) and ({\bf a2}), assume \begin{enumerate}
\item [({\bf a3})] The scalar curvature $\mathcal{S}_{g_0}$ of $g_0$ satisfies $\mathcal{S}_{g_0}\ge\sigma$ in $M\setminus\Sigma$, where $\sigma$ is a constant. \end{enumerate}
We want to modify $g_{\epsilon,0}$ to obtain a smooth metric with scalar curvature bounded below by $\sigma$. We first consider the case that $M$ is compact. Let $\epsilon_0>0$ be small enough so that for all $\epsilon_0\ge\epsilon>0$, $$ (1+\epsilon(n))^{-1}g_{\epsilon_0,0}\le g_{\epsilon,0}\le (1+\epsilon(n)) g_{\epsilon_0,0}, $$ where $\epsilon(n)>0$ is the constant depending only on $n$ in Theorem \ref{t-Simon}. Hence if we let $h=g_{\epsilon_0,0}$, then the $h$-flow has a solution $g_\epsilon(t)$ on $M\times[0,T]$ for some $T>0$ independent of $\epsilon$, with initial data $g_{\epsilon,0}$ in the sense that $\lim_{t\to0}g_{\epsilon}(x,t)=g_{\epsilon,0}(x)$ uniformly in $M$; see Theorem \ref{t-Simon}. The curvature and all the covariant derivatives of the curvature of $h$ are bounded because $M$ is compact.
By \cite{Simon2002} and Lemmas \ref{l-gradient-1}, \ref{l-gradient-2}, \ref{l-approx-1} we have the following: \begin{lma}\label{l-approx-2} Let $M$ be compact and let $g_0$ satisfy ({\bf a1})--({\bf a3}). Suppose $p>n$. Let $\delta=\frac np<1$. Then
$$|^h\nabla g_\epsilon(t)|_h^2\le Ct^{-\delta}, \ \ |^h\nabla^2g_\epsilon(t)|_h^2\le Ct^{-1-\delta} $$ for some constant $C$ independent of $\epsilon$, $t$. Moreover, $g_\epsilon(t)$ subconverges to the solution $g(t)$ of the $h$-flow with initial data $g_0$ in the $C^\infty$ norm on compact sets of $M\times(0,T]$ and on compact sets of $(M\setminus \Sigma)\times[0,T]$. \end{lma}
For $\epsilon>0$ small enough, let \begin{equation}\label{e-DeTurck-1} W^k=(g_\epsilon(t))^{pq}\left(\Gamma_{pq}^k(g_\epsilon(t))-\Gamma_{pq}^k(h)\right), \end{equation} and let $\Phi_t$ be the diffeomorphism given by \begin{equation}\label{e-DeTurck-2} \frac{\partial}{\partial t}\Phi_t(x)=-W(\Phi_t(x),t); \ \ \Phi_0(x)=x. \end{equation} Let $\widetilde g_\epsilon(t)=\Phi_t^*g_\epsilon(t)$. Then $\widetilde g_\epsilon(t)$ satisfies the Ricci flow equation with initial data $g_{\epsilon,0}$. Note that $W$ and $\Phi_t$ depend also on $\epsilon$. Recall the Ricci flow equation is: \begin{equation}\label{e-Ricciflow} \frac{\partial}{\partial t}g_{ij}=-2R_{ij}. \end{equation}
\begin{lma}\label{l-hflow-2} Same assumptions and notation as in Lemma \ref{l-approx-2}. For $\epsilon$ small enough, $|W|_h\le Ct^{-\frac12\delta}$, $|\text{Rm}(\widetilde g_\epsilon(t))|\le Ct^{-\frac12(1+\delta)}$ and
$$ C^{-1}h\le g_\epsilon(t)\le Ch $$ for some $C$, independent of $\epsilon, t$. \end{lma}
\begin{proof} The bound of $W$ is given by Lemma \ref{l-approx-2}. Since the norm of the curvature is unchanged under pullback by a diffeomorphism, $|\text{Rm}(\widetilde g_\epsilon(t))|\le Ct^{-\frac12(1+\delta)}$ by Lemma \ref{l-approx-2}. From this and the Ricci flow equation, integrating in time (note that $t^{-\frac12(1+\delta)}$ is integrable since $\delta<1$), we conclude that $\widetilde g_\epsilon(t)$ is uniformly equivalent to $g_{\epsilon,0}$, which is uniformly equivalent to $h$. \end{proof} \begin{lma}\label{l-codimone-1} Let $\mathcal{S}(t)$ be the scalar curvature of $g(t)$. Then there is $C>0$ independent of $t, \epsilon$ such that $$ \exp(-Ct^{\frac12(1-\delta)})\int_M (\mathcal{S}(t)-\sigma)_-dv_{g(t)} $$ is nonincreasing in $(0,T]$, where $f_-=\max\{-f,0\}$ is the negative part of $f$. \end{lma} \begin{proof} As in \cite{McFeronSzekelyhidi2012}, fix $\theta>0$,
for $\epsilon>0$, let
$$
v=\left((\mathcal{S}_\epsilon(t)-\sigma)^2+\theta\right)^\frac12-\left(\mathcal{S}_\epsilon(t)-\sigma\right)
$$
where $\mathcal{S}_\epsilon(t)$ is the scalar curvature of $\widetilde g_\epsilon(t)$. Let $\Delta$ and $\nabla$ be the Laplacian and covariant derivative with respect to $\widetilde g_\epsilon(t)$. Using the evolution equation of the scalar curvature under the Ricci flow, we have
\begin{equation*} \begin{split} \left(\frac{\partial}{\partial t}-\Delta\right)v=&\left(\frac{\mathcal{S}_\epsilon(t)-\sigma}
{\left((\mathcal{S}_\epsilon(t)-\sigma)^2+\theta\right)^\frac12}-1\right)\left(\frac{\partial}{\partial t}-\Delta\right)\mathcal{S}_\epsilon(t)-\frac{\theta|\nabla \mathcal{S}_\epsilon|^2}{\left((\mathcal{S}_\epsilon(t)-\sigma)^2+\theta\right)^\frac32}\\ =&\left(\frac{\mathcal{S}_\epsilon(t)-\sigma}
{\left((\mathcal{S}_\epsilon(t)-\sigma)^2+\theta\right)^\frac12}-1\right)\cdot 2|\mbox{Ric}(t)|^2-\frac{\theta|\nabla \mathcal{S}_\epsilon(t)|^2}{\left((\mathcal{S}_\epsilon(t)-\sigma)^2+\theta\right)^\frac32}\\ \le &0 \end{split} \end{equation*} where $\mbox{Ric}(t)$ is the Ricci tensor of $\widetilde g_\epsilon(t)$. Using Lemma \ref{l-hflow-2} we have
\begin{equation}\label{e-neg-part-2} \begin{split} \frac{d}{dt}\int_M vdv_{\widetilde g_\epsilon(t)}=&\int_M \frac{\partial}{\partial t}vdv_{\widetilde g_\epsilon(t)}-\int_M\mathcal{S}_\epsilon(t) vdv_{\widetilde g_\epsilon(t)}\\ \le &\int_M \Delta vdv_{\widetilde g_\epsilon(t)}+C_1t^{-\frac12(1+\delta)}\int_M vdv_{\widetilde g_\epsilon(t)}\\ =& C_1t^{-\frac12(1+\delta)}\int_M vdv_{\widetilde g_\epsilon(t)}\\ \end{split} \end{equation} for some constant $C_1$ independent of $t, \epsilon$. From this, letting $\theta\to0$, we conclude that for some constant $C$ independent of $t$ and $\epsilon$, \begin{equation*} \exp(-Ct^{\frac12(1-\delta)})\int_M (\mathcal{S}_\epsilon(t)-\sigma)_-dv_{\widetilde g_\epsilon(t)} \end{equation*} is nonincreasing in $(0,T]$. Noting that $\widetilde g_\epsilon(t)=\Phi_t^*(g_\epsilon(t))$, letting $\epsilon\to0$ and using Lemma \ref{l-approx-2}, the result follows. \end{proof}
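To make the monotonicity step explicit: \eqref{e-neg-part-2} gives $\frac{d}{dt}G\le C_1t^{-\frac12(1+\delta)}G$ for $G(t)=\int_M v\,dv_{\widetilde g_\epsilon(t)}$, and with $C=\frac{2C_1}{1-\delta}$,
$$ \frac{d}{dt}\left(e^{-Ct^{\frac12(1-\delta)}}G(t)\right)=e^{-Ct^{\frac12(1-\delta)}}\left(G'(t)-C_1t^{-\frac12(1+\delta)}G(t)\right)\le0, $$
so $e^{-Ct^{\frac12(1-\delta)}}G(t)$ is nonincreasing on $(0,T]$, with $C$ independent of $\theta$ and $\epsilon$.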
We first consider the case that the codimension of $\Sigma$ is at least 2 in the following sense.
\begin{enumerate}
\item [({\bf a4})] The volume $V(\Sigma(\epsilon),g_0)$ with respect to $g_0$ of the $\epsilon$-neighborhood $\Sigma(\epsilon)$ of $\Sigma$ is bounded by $C\epsilon^2$ for some constant $C$ independent of $\epsilon$. Here
$$
\Sigma(\epsilon)=\{x\in M|\ d_{g_0}(x,\Sigma)<\epsilon\}.
$$ \end{enumerate}
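Condition ({\bf a4}) says that $\Sigma$ has codimension at least $2$ in a measure-theoretic sense. For instance, if $\Sigma$ is a smooth compact submanifold of codimension $k\ge 2$, then by the tubular neighborhood theorem,
\begin{equation*}
V(\Sigma(\epsilon),g_0)\le C\epsilon^{k}\le C\epsilon^2
\end{equation*}
for all sufficiently small $\epsilon>0$, so ({\bf a4}) holds.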
\begin{lma}\label{l-codimtwo-2} Assume the same assumptions and notation as in Lemma \ref{l-approx-2}, and suppose ({\bf a4}) is true. Then $\mathcal{S}(t)\ge\sigma$ for all $t>0$. \end{lma} \begin{proof} By Lemma \ref{l-codimone-1}, it is sufficient to prove that \begin{equation}\label{e-approx-3} \lim_{t\to 0}\int_M (\mathcal{S}(t)-\sigma)_-dv_{g(t)}=0. \end{equation}
For any $\epsilon>0$, let $\Phi_t$ be the diffeomorphisms as before so that $\widetilde g_\epsilon(t)=\Phi_t^*(g_\epsilon(t))$ is the solution to the Ricci flow. For any $\theta>0$, let $v$ be as in the proof of Lemma \ref{l-codimone-1}. Let
$$\beta=\displaystyle{\frac1\epsilon(\epsilon-\sum_{k=1}^{N}\psi_k\rho_k)}.$$
We may modify $\rho_k$ so that if $\epsilon$ is small enough then $\beta$ is a smooth function on $M$ so that $\beta=0$ in $\Sigma(2\epsilon)$, $ \beta=1$ outside $\Sigma(4\epsilon)$, $0\le \beta\le 1 $, $|^h\nabla \beta|\le C\epsilon^{-1}$, and $|^h\nabla^2\beta|\le C \epsilon^{-2} $ for some constant $C$ independent of $\epsilon, t$. Let $$ \widetilde \beta(t,x)=\beta(\Phi_t(x)). $$ Then \begin{equation*} \begin{split} \frac{d}{dt}\int_M \widetilde \beta^2 vdv_{\widetilde g_\epsilon(t)}=& \int_M v \frac{\partial}{\partial t}(\widetilde \beta^2) dv_{\widetilde g_\epsilon (t)}+\int_M \widetilde \beta^2 \frac{\partial}{\partial t}vdv_{\widetilde g_\epsilon(t)}-\int_M\mathcal{S}_\epsilon(t)\widetilde \beta^2 vdv_{\widetilde g_\epsilon(t)}\\ \le & \int_M v \frac{\partial}{\partial t}(\widetilde \beta)^2 dv_{\widetilde g_\epsilon (t)}+\int_M \widetilde \beta^2 \Delta_{\widetilde g_\epsilon(t)} vdv_{\widetilde g_\epsilon(t)}\\ &+C_1t^{-\frac12(1+\delta)}\int_M \widetilde \beta^2 vdv_{\widetilde g_\epsilon(t)}\\ =&I+II+C_1t^{-\frac12(1+\delta)}\int_M \widetilde \beta^2 vdv_{\widetilde g_\epsilon(t)}. \end{split} \end{equation*} for some constant $C_1>0$ independent of $t,\epsilon, \theta$ by Lemma \ref{l-hflow-2}. Let $w(y)=v(\Phi_t^{-1}(y))$.
Since in local coordinates, $$ \Delta_{g_\epsilon(t)}f=g_\epsilon^{ij}\left(\partial_i\partial_j f-\Gamma_{ij}^k \partial_k f\right) $$
with $|\Gamma_{ij}^k|\le Ct^{-\frac\delta2}$ for some constant $C$ independent of $\epsilon, t$ by Lemma \ref{l-approx-2}, we have \begin{equation*} \begin{split} II=&\int_M \beta^2 \Delta_{ g_\epsilon(t)} w dv_{ g_\epsilon(t)}\\ =&\int_M w\Delta_{g_\epsilon(t)}(\beta^2) dv_{g_\epsilon(t)}\\
\le& C_2\int_{\Sigma(4\epsilon)} w|\epsilon^{-2}+\epsilon^{-1}t^{-\frac\delta2}\beta| dv_{g_\epsilon(t)} \\ \le& C_3\left(t^{-\frac12(1+\delta)}+\epsilon^{-1}t^{-\frac\delta2-\frac14(1+\delta)}\int_{\Sigma(4\epsilon)}\beta w^\frac12dv_{g_\epsilon(t)}\right)\\ \le&C_4\left[t^{-\frac12(1+\delta)}+ t^{-\frac14(1+3\delta)} \left( \int_{M}\widetilde \beta^2 v dv_{\widetilde g_\epsilon(t)}\right)^\frac12\right] \end{split} \end{equation*} for some constants $C_2,\dots,C_4$ independent of $\epsilon, t, \theta$, where we have used Lemma \ref{l-approx-2}, the fact that $\beta=1$ outside $\Sigma(4\epsilon)$, the H\"older inequality and the fact that $V(\Sigma(4\epsilon))=O(\epsilon^2)$. To estimate $I$, we have
\begin{equation*}
\begin{split}
\frac{\partial}{\partial t} \widetilde \beta =&(d\widetilde \beta)(\frac{\partial}{\partial t})\\
=&d\beta\circ d\Phi_t(\frac{\partial}{\partial t})\\
=&d\beta (W).
\end{split} \end{equation*} Hence by Lemma \ref{l-approx-2}, we have \begin{equation*}
\left|\frac{\partial}{\partial t}\widetilde \beta\right|(x)\le C_5|^h\nabla \beta|(\Phi_t(x))\le C_6\epsilon^{-1}t^{-\frac\delta2} \end{equation*} for some constants $C_5, C_6$ independent of $\epsilon, t, \theta$. Hence if $w$ is as above, then \begin{equation*} \begin{split} I\le &C_6\epsilon^{-1}t^{-\frac\delta2}\int_{\Sigma(4\epsilon)} \beta w(y)dv_{g_\epsilon(t)}\\ \le &C_7 t^{-\frac14(1+3\delta)}\left(\int_{\Sigma(4\epsilon)}\widetilde \beta^2 v dv_{\widetilde g_\epsilon(t)}\right)^\frac12 \end{split} \end{equation*} for some constant $C_7$ independent of $\epsilon, t, \theta$.
To summarize, if we let
$$
F=\int_M \widetilde \beta^2 vdv_{\widetilde g_\epsilon(t)},
$$
then
\begin{equation*}
\begin{split}
\frac{d F}{dt}\le &C_8\left (t^{-\frac12(1+\delta)}+ t^{-\frac12(1+\delta)}F+t^{-\frac14(1+3\delta)}F^\frac12\right)\\
\le&C_8\left(t^{-\frac12(1+\delta)}+t^{-\delta}+2t^{-\frac12(1+\delta)}F\right)
\end{split}
\end{equation*}
for some constant $C_8$ independent of $\epsilon, t, \theta$. Integrating from $0$ to $t$ and letting $\theta\to0$, since $g_{\epsilon,0}=g_0$ outside $\Sigma(2\epsilon)$, $\Phi_0=\mathrm{id}$, $\beta=0$ on $\Sigma(2\epsilon)$, and $\mathcal{S}_{g_0} \ge\sigma$ outside $\Sigma$, there exist constants $C_9, C_{10}$ independent of $\epsilon, t$ such that
$$
\exp(-C_9t^{\frac12(1-\delta)})\int_M \widetilde \beta^2(\mathcal{S}_\epsilon(t)-\sigma)_-dv_{\widetilde g_\epsilon(t)}\le C_{10}\left(t^{\frac12(1-\delta)}+t^{1-\delta}\right)
$$
because $0<\delta<1$. Letting $\epsilon\to0$, we see that \eqref{e-approx-3} is true and the proof of the lemma is complete.
\end{proof} By Lemmas \ref{l-approx-2} and \ref{l-codimtwo-2}, using $g(t)$ we have: \begin{cor}\label{c-approx-1} Let $M^n$ be a smooth compact manifold with a smooth background metric $\mathfrak{b}$ and let $g_0$ be a continuous Riemannian metric satisfying the following:
\begin{enumerate}
\item [(a)] There is a compact set $\Sigma$ such that $g_0$ is smooth on $M\setminus \Sigma$ with scalar curvature bounded below by $\sigma$.
\item [(b)] $g_0$ is in $W_{\text{loc}}^{1,p}$ for some $p>n.$
\item [(c)] $V(\Sigma(\epsilon),g_0)=O(\epsilon^2)$ as $\epsilon\to0$, where $\Sigma(\epsilon)=\{x\in M| d_{\mathfrak{b}}(x,\Sigma)<\epsilon\}$.
\end{enumerate}
Then there exists a sequence of smooth metrics $g_i$ satisfying the following: (i) as $i$ tends to infinity $g_i$ converges to $g_0$ uniformly in $M$, and converges to $g_0$ in $C^\infty$ norm on any compact subset of $M\setminus \Sigma $; (ii) the scalar curvature $\mathcal{S}_i$ of $g_i$ satisfies $\mathcal{S}_i\ge\sigma$. \end{cor}
\begin{remark}\label{r-codim} If the codimension of $\Sigma$ is only assumed to be larger than 1, then the conclusions of Lemma \ref{l-codimtwo-2} and Corollary \ref{c-approx-1} are still true under some additional assumptions on the second derivatives of $g_0$.
\end{remark}
Next let us consider the case that $\Sigma$ is an embedded hypersurface. Let $(M^n,g_0)$ be a Riemannian manifold satisfying the following:
\begin{enumerate} \item[({\bf b1})] $\Sigma$ is a compact embedded orientable hypersurface, and $g_0$ is smooth on $M\setminus \Sigma$ with scalar curvature $\mathcal{S}_{g_0}\ge \sigma$.
\item [({\bf b2})] There is a neighborhood $U$ of $\Sigma$ and a smooth function $t$ defined near $U$ so that $U$ is diffeomorphic to $\{ -a<t<a \}\times \Sigma$ for some $a>0$ with $\Sigma=\{t=0\}$. Moreover, $g_0=dt^2+g_{\pm}(z,t)$, $z\in \Sigma$, so that $(t,z)$ are smooth coordinates and $g_-(\cdot,0)=g_+(\cdot,0)$, where $g_+$ is defined and smooth on $t\ge0$ and $g_-$ is defined and smooth on $t\le 0$.
\item [({\bf b3})] Let $U_+=\{t>0\}$, $U_-=\{t<0\}$. With respect to the unit normal $\frac{\partial}{\partial t}$, the mean curvature $H_+$ of $\Sigma$ with respect to $g_+$ and the mean curvature $H_-$ of $\Sigma$ with respect to $g_-$ satisfy $H_-\ge H_+$. \end{enumerate}
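Heuristically, the condition $H_-\ge H_+$ in ({\bf b3}) says that the distributional scalar curvature of $g_0$ across the corner $\Sigma$ is nonnegative: as in \cite{Miao2002}, the metric $g_0=dt^2+g_\pm(z,t)$ carries a singular contribution to the scalar curvature proportional to
\begin{equation*}
(H_-(z)-H_+(z))\,\delta_{\{t=0\}},
\end{equation*}
and the smoothing construction below spreads this contribution over a region of width comparable to $\epsilon^2$.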
By \cite[Prop. 3.1]{Miao2002}, for $\epsilon>0$ small enough one can find a smooth metric $g_{\epsilon,0}$ such that (i) $g_{\epsilon,0}=g_0$ outside $U(\epsilon)=\{-\epsilon <t<\epsilon\}$; (ii) $g_{\epsilon,0}$ converges uniformly to $g_0$; (iii) $|^h\nabla g_{\epsilon,0}|_h\le C$ for some fixed smooth background metric $h$; (iv) there exists a $c>0$ independent of $\epsilon$ such that the scalar curvature $\mathcal{S}_{g_{\epsilon,0}}$ satisfies:
\begin{equation}\label{e-scalar-1} \left\{
\begin{array}{ll}
\mathcal{S}_{g_{\epsilon,0}}=\mathcal{S}_{g_0}, & \hbox{outside $U(\epsilon)$;} \\
|\mathcal{S}_{g_{\epsilon,0}}|\le c, & \hbox{in $\frac{\epsilon^2}{100}<|t|\le \epsilon$};\\
\mathcal{S}_{g_{\epsilon,0}}(z,t)\ge -c +(H_-(z)-H_+(z))\epsilon^{-2}\phi(\frac{100t}{\epsilon^2}), & \hbox{in $|t|\le \frac{\epsilon^2}{100}$};\\
|\mathcal{S}_{g_{\epsilon,0}}|\le c\epsilon^{-2}, & \hbox{in $|t|\le \frac{\epsilon^2}{100}$};
\end{array} \right. \end{equation} for $z\in \Sigma$. Here $\phi\ge0$ is a smooth function on $\mathbb R$ with compact support in $[-1/2,1/2]$ so that $$ \int_\mathbb R \phi(s)ds=1. $$
By similar arguments as before using the $h$-flow, we can conclude: \begin{cor}\label{c-approx-2} Let $M^n$ be a compact smooth manifold and let $g_0$ be a Riemannian metric satisfying {\bf (b1)--(b3)} such that the scalar curvature of $g_0$ on $M\setminus\Sigma$ is at least $\sigma$. Then there exists a sequence of smooth metrics $g_i$ such that as $i$ tends to infinity $g_i$ converges to $g_0$ uniformly in $M$, and converges to $g_0$ in $C^\infty$ norm on any compact subset of $M\setminus \Sigma.$ Moreover, $\mathcal{S}_{g_i}\ge \sigma$. \end{cor} \begin{proof} As before, choosing $h=g_{\epsilon_0,0}$ for $\epsilon_0$ small enough, one can solve the $h$-flow with initial data $g_{\epsilon,0}$. Let $g_{\epsilon}(t)$ be the solution and let $\mathcal{S}_\epsilon(t)$ be its scalar curvature. From the proof of Lemma \ref{l-codimone-1}, one can conclude that \begin{equation*} \begin{split} \exp(-C_3t^\frac12)\int_M(\mathcal{S}_\epsilon(t)-\sigma)_-dv_{g_\epsilon(t)}\le& \int_M (\mathcal{S}_{g_{\epsilon,0}}-\sigma)_-dv_{g_{\epsilon,0}}\\ =&\int_{U(\epsilon)}(\mathcal{S}_{g_{\epsilon,0}}-\sigma)_-dv_{g_{\epsilon,0}}\\
\le& C_1\epsilon \end{split} \end{equation*} for some constants $C_1, C_3>0$ independent of $\epsilon, t$. Here we have used \eqref{e-scalar-1} and the fact that $H_--H_+\ge0$. Letting $\epsilon\to0$, we conclude that the solution $g(t)$ of the $h$-flow with initial value $g_0$ has scalar curvature no less than $\sigma$. The result follows as before. \end{proof}
\begin{remark}\label{r-Miao} By \cite{Miao2002}, suppose $\Sigma$ is a compact orientable hypersurface such that a neighborhood of $\Sigma$ is the disjoint union of $U_1$, $U_2$ and $\Sigma$. Assume $g_0$ is smooth up to $\Sigma$ from each side $U_i$ of $\Sigma$ and the mean curvatures $H_1, H_2$ with respect to the unit normals on the two sides of $\Sigma$ satisfy $H_1+H_2\ge0$, where the unit normals are chosen to be outward pointing on each side. Then one can find a smooth structure so that {\bf (b2), (b3)} are true. \end{remark}
We give some applications:
\begin{cor}\label{c-cone-1} Let $(M^n,g)$ be a compact manifold such that $M^n$ is a topological $n$-torus and $g$ is smooth except at a point where $g$ has a cone singularity of the form $$ g=dr^2+\alpha^2 r^2h_0 $$ with $0<\alpha\le 1$, where $h_0$ is the standard metric on $\mathbb{S}^{n-1}$. Suppose the scalar curvature of $g$ is nonnegative. Then $g$ must be flat and $\alpha=1$.
\end{cor} \begin{proof} For $r$ small, the mean curvature of the level set $\{r\}\times \mathbb{S}^{n-1}$ with respect to the normal $\partial_r$ is $H=\frac{n-1}r$. Consider the Euclidean ball $B(\alpha r)$ of radius $\alpha r$ with center at the origin. Then the metric of the boundary is $(\alpha r)^2h_0$. Moreover, the mean curvature is $H_0=\frac{n-1}{\alpha r}$. Since $\alpha\le 1$, $H_0\ge H$. By gluing $B(\alpha r)$ to $M$ along $\{r\}\times \mathbb{S}^{n-1}$, we obtain a metric with corner so that {\bf(b1)--(b3)} are true after changing the smooth structure if necessary. Still denote this metric by $g$. By Corollary \ref{c-approx-2}, there exist smooth metrics $g_i$ on the new manifold with nonnegative scalar curvature so that $g_i\to g$ in $C^\infty$ away from the singular part. By \cite{SchoenYau1979,SchoenYau1979-1,GromovLawson}, $g_i$ is flat. Hence $g$ must be flat away from the singular part. Letting $r\to0$, we conclude that the original metric $g$ is flat, and we must have $\alpha=1$. \end{proof}
Similarly, one can prove the following:
\begin{cor}\label{c-cone-2} Let $(M^n,g)$ be a compact manifold such that $M^n$ is a topological $n$-torus and $g$ is smooth away from some compact set of codimension at least 2. Moreover, assume $g$ is in $W^{1,p}_{\text{loc}}$ for some $p>n$. Suppose the scalar curvature of $g$ is nonnegative. Then $g$ must be flat. \end{cor}
\begin{remark}\label{r-cone-1} Suppose $M$ is asymptotically flat with nonnegative scalar curvature and with some cone singularities as in Corollary \ref{c-cone-1}, then we still have positive mass for each end by the result in \cite{Miao2002}. The proof is similar. Compare this result with the example in Proposition \ref{p-cone-pm-2}.
\end{remark}
Let us consider the case that $M^n$ is noncompact. Let $g_0$ be a continuous Riemannian metric on $M$ which is smooth outside a compact set $\Sigma$. Suppose there is a family of smooth complete metrics $g_{\epsilon,0}$ on $M$ such that $g_{\epsilon,0}$ converges uniformly to $g_0$ and converges smoothly on compact sets of $M\setminus \Sigma$. Assume $g_{\epsilon,0}$ has bounded curvature for all $\epsilon$. As before, we can find $\epsilon_0>0$ such that if $h=g_{\epsilon_0,0}$ then there are solutions $g_\epsilon(t)$ to the $h$-flow with initial data $g_{\epsilon,0}$, and a solution $g(t)$ to the $h$-flow with initial data $g_0$, on some fixed interval $[0,T]$, $T>0$. As in \cite{Simon2002}, using \cite{Shi1989}, we may assume that all the derivatives of the curvature of $h$ are bounded. Moreover, $g_\epsilon(t)$ converge to $g(t)$ uniformly on compact sets of $M\times(0,T]$ and of $(M\setminus\Sigma)\times[0,T]$. Suppose the scalar curvature of $g_0$ satisfies $\mathcal{S}_{g_0}\ge \sigma$. We want to find conditions so that the scalar curvature of $g(t)$ is also bounded below by $\sigma$.
\begin{lma}\label{l-noncompact-1} With the above assumptions and notation, suppose \begin{enumerate}
\item [(i)] $g_{\epsilon,0}=g_0$ outside $\Sigma(2\epsilon)$.
\item [(ii)] $|^h\nabla g_{\epsilon}(t)|\le Ct^{-\frac\delta 2}$, $|^h\nabla^2 g_{\epsilon}(t)|\le Ct^{-\frac12(1+\delta)}$ for some $C$ independent of $\epsilon, t$.
\item [(iii)] There is $R_0>0$ and $C>0$ independent of $\epsilon, t$ such that
$$
\int_{M\setminus B(o,R_0)}|\mathcal{S}_\epsilon(t)-\sigma|dv_h\le C,
$$
where $B(o,R_0)$ is the geodesic ball with respect to $h$ and $\mathcal{S}_\epsilon(t)$ is the scalar curvature of $g_\epsilon(t)$.
\item[(iv)] $V(\Sigma(2\epsilon), g_0)=O(\epsilon^2)$.
\end{enumerate} Then the scalar curvature $\mathcal{S}(t)$ of $g(t)$ satisfies $\mathcal{S}(t)\ge \sigma$ for all $t>0$. \end{lma} \begin{proof} By \cite{Shi1989,Tam2010}, we can find a smooth function $\rho$ such that $$ C_1^{-1}(r(x)+1)\le \rho(x)\le C_1(1+r(x)) $$ for some constant $C_1>0$ where $r(x)$ is the distance function to a fixed point $o$ with respect to $h$. Moreover, the gradient and Hessian of $\rho$ with respect to $h$ are uniformly bounded.
Let $0\le \eta\le1$ be a smooth function on $\mathbb R$ so that $\eta=1$ on $[0,1]$ and $\eta=0$ on $[2,\infty)$. We proceed as in the proofs of Lemmas \ref{l-codimone-1}, \ref{l-codimtwo-2}. For $R\gg1$, denote $\eta(\rho(x)/R)$ still by $\eta(x)$. Let $\widetilde g_\epsilon$ be the Ricci flow corresponding to $g_\epsilon(t)$ and let $\mathcal{S}_\epsilon(t)$ be its scalar curvature. Let $\theta>0$ and let $v$ be as in the proof of Lemma \ref{l-codimone-1}; we have \begin{equation*} \begin{split} &\frac{d}{dt}\int_M \eta vdv_{\widetilde g_\epsilon(t)}\le\\
&C_2\left(t^{-\frac12(1+\delta)}\int_M \eta v dv_{\widetilde g_\epsilon(t)}+\int_M v|\Delta \eta| dv_{\widetilde g_\epsilon(t)}\right)\\
\le &C_3\left(t^{-\frac12(1+\delta)}\int_M \eta v dv_{\widetilde g_\epsilon(t)}+t^{-\frac \delta2}R^{-1}\int_{M\setminus B(o, 2C_1R)} (|\mathcal{S}_\epsilon(t)-\sigma|+\theta) dv_{\widetilde g_\epsilon(t)}\right) \end{split} \end{equation*} for some positive constants $C_2, C_3$ independent of $t,\epsilon, \theta$. Hence
$$
\frac{d}{dt}\left(\exp(-C_4t^{\frac12(1-\delta)})\int_M \eta vdv_{\widetilde g_\epsilon(t)}\right)\le C_5t^{-\frac \delta2}R^{-1}\int_{M\setminus B(o, 2C_1R)} (|\mathcal{S}_\epsilon(t)-\sigma|+\theta) dv_{\widetilde g_\epsilon(t)} $$ for some positive constants $C_4, C_5$ independent of $t,\epsilon, \theta$. Integrating from $t_1$ to $t_2$ with $0<t_1<t_2$, letting $\theta\to0$ and then $R\to\infty$, using condition (iii), we conclude that $$ \exp(-C_4t^{\frac12(1-\delta)})\int_M \left(\mathcal{S}_\epsilon(t)-\sigma\right)_-dv_{\widetilde g_\epsilon(t)} $$ is nonincreasing in $t$. Letting $\epsilon\to0$, we conclude that
$$ \exp(-C_4t^{\frac12(1-\delta)})\int_M \left(\mathcal{S}(t)-\sigma\right)_-dv_{g(t)} $$ is nonincreasing in $t$.
Next we proceed as in the proof of Lemma \ref{l-codimtwo-2}. But we need the cutoff function $\eta$. For $\epsilon>0$ and $\theta>0$ as in the proof of Lemma \ref{l-codimtwo-2}, let $\beta, \widetilde \beta$ be as in that proof; then for $R\gg1$, \begin{equation}\label{e-approx-4} \begin{split}
\frac{d}{dt}F \le& C_6 \left(t^{-\frac12(1+\delta)}+t^{-\delta}+t^{-\frac12(1+\delta)}F+\int_M|\Delta\eta|v\widetilde \beta^2dv_{\widetilde g_\epsilon(t)}\right)\\
\le &C_7\left(t^{-\frac12(1+\delta)}+t^{-\delta}+t^{-\frac12(1+\delta)}F+\frac1R\int_{M\setminus B(o,2C_1R)} (|\mathcal{S}_\epsilon(t)-\sigma|+\theta)dv_{\widetilde g_\epsilon(t)}\right) \end{split} \end{equation} for some constants $C_6, C_7$ independent of $\epsilon, t,\theta$ where $$ F=\int_M\eta\widetilde \beta^2vdv_{\widetilde g_\epsilon(t)}. $$ Integrating from 0 to $t$ and let $\theta\to0$, we have $$
\int_M\eta\widetilde \beta^2(\mathcal{S}_\epsilon(t)-\sigma)_- dv_{\widetilde g_\epsilon(t)} \le C_8\left(t^{1-\delta}+t^{\frac12(1-\delta)}+\frac1R\int_0^t\int_{M\setminus B(o,2C_1R)} |\mathcal{S}_\epsilon(s)-\sigma|dv_{\widetilde g_\epsilon(s)}ds\right) $$ for some constant $C_8$ independent of $\epsilon, t$. Here we have used the fact that $g_{\epsilon,0}=g_0$ outside $\Sigma(2\epsilon)$ and the fact that $\mathcal{S}_{g_0}\ge \sigma$. Letting $R\to\infty$, using (iii), and finally letting $\epsilon\to0$, we conclude that $$ \int_M(\mathcal{S}(t)-\sigma)_-dv_{g(t)}\le C_8(t^{1-\delta}+t^{\frac12(1-\delta)}). $$ Since $$ \exp(-C_4t^{\frac12(1-\delta)})\int_M \left(\mathcal{S}(t)-\sigma\right)_-dv_{g(t)} $$ is nonincreasing in $t$, we conclude that the lemma is true. \end{proof}
\section{Singular metrics realizing the nonpositive Yamabe invariant}\label{s-Einstein-1}
In this section, we will apply the results of the previous sections to study singular metrics on compact manifolds. Let $M^n$ be a compact smooth manifold without boundary. Then as in the Introduction, we may define the {\it Yamabe invariant} $\sigma(M)$. It is well-known that if $\sigma(M)\le0$ and if $g$ is a smooth metric which realizes $\sigma(M)$, then $g$ is Einstein; see \cite[p.126--127]{Schoen1989} for example. If $\sigma(M)>0$, the situation is more complicated; for some recent results see \cite{Macbeth2014}.
In this section we want to discuss the following question:
{\it Suppose $g$ is a continuous Riemannian metric on $M$ which is smooth outside some compact set $\Sigma$ so that the volume of $g$ is normalized to be 1. Suppose the scalar curvature of $g$ satisfies $\mathcal{S}_g\ge \sigma(M)$ away from $\Sigma$. What can we say about $g$?}
In the case that $\Sigma$ has codimension at least 2, we have the following: \begin{thm}\label{t-Yamabe-1} Let $M^n$ be a smooth compact manifold such that $\sigma(M)\le 0$. Suppose $g_0$ is a Riemannian metric with $V(M,g_0)=1$ satisfying the following: \begin{enumerate}
\item [(i)] There is a compact subset $\Sigma$ such that $g_0$ is smooth on $M\setminus \Sigma$ with scalar curvature $\mathcal{S}_{g_0}\ge\sigma(M)$ away from $\Sigma$.
\item [(ii)] $g_0$ is in $W_{\text{loc}}^{1,q}$ for some $q>n$ in the sense that $g_0$ has weak derivative and $|g_0|_\mathfrak{b}$, $|^\mathfrak{b}\nabla g_0|_\mathfrak{b}\in L^q_{\text{loc}}$ with respect to a smooth background metric $\mathfrak{b}$. \item [(iii)] The volume $V(\Sigma(\epsilon),g_0)$ with respect to $g_0$ of the $\epsilon$-neighborhood $\Sigma(\epsilon)$ of $\Sigma$ is bounded by $C\epsilon^2$ for some constant $C$ independent of $\epsilon$. Here
$$
\Sigma(\epsilon)=\{x\in M|\ d_{g_0}(x,\Sigma)<\epsilon\}.
$$ \end{enumerate}
Then $g_0$ is Einstein on $M\setminus \Sigma$. \end{thm}
Let $(M^n,g_0)$ be as in the theorem. Let $$\accentset{\circ}{\mbox{Ric}}(g_0)=\displaystyle{\mbox{Ric}(g_0)-\frac{\mathcal{S}_0}n g_0}$$
be the traceless part of $\mbox{Ric}(g_0)$, where $\mathcal{S}_0=\mathcal{S}_{g_0}$ is the scalar curvature of $g_0$. Let $x_0\in M\setminus \Sigma$. We want to prove that $\accentset{\circ}{\mbox{Ric}}(g_0)(x_0)=0$. Suppose $\accentset{\circ}{\mbox{Ric}}(g_0)(x_0)\neq0$. Then there is $r>0$ such that $B_{x_0}(4r;g_0)\cap \Sigma=\emptyset$ and there is $c>0$ such that $|\accentset{\circ}{\mbox{Ric}}(g_0)|\ge2c$ in $B_{x_0}(3r)$. By Corollary \ref{c-approx-1}, we can find smooth metrics $g_i$ such that (i) $g_i$ converges uniformly to $g_0$ and converges in $C^\infty$ norm on any compact set in $M\setminus \Sigma$; (ii) $V(M,g_i)=1$; (iii) the scalar curvature $\mathcal{S}_i$ of $g_i$ satisfies $\mathcal{S}_i\ge\sigma-\delta_i$ for all $i$ with $\delta_i\downarrow0$. Hence we may assume that
\begin{equation}\label{e-Einstein-1}
|\accentset{\circ}{\mbox{Ric}}(g_i)|(x)\ge c \end{equation} in $B_{x_0}(2r;g_i)$ for all $i$, and $B_{x_0}(r;g_i)\subset B_{x_0}(2r;g_0)$, $B_{x_0}(2r;g_i)\subset B_{x_0}(3r;g_0)$. We may also assume that the distance function $r_i(x)$ from $x_0$ with respect to $g_i$ is smooth in $B_{x_0}(3r;g_0)$, provided $r>0$ is small enough, independent of $i$.
Let $\phi$ be a smooth function on $[0,\infty)$ with $\phi\ge 0$, $\phi=1$ on $[0,1]$ and $\phi=0$ on $[2,\infty)$ and such that $|\phi'|^2\le C\phi$, with $C$ being an absolute constant. Let $$h_i(x)=\phi\left(\frac{r_i(x)} r\right)\accentset{\circ}{\mbox{Ric}}(g_i)(x).$$
For $\tau\neq 0$, let $G_{i;\tau}=g_i+\tau h_i$. Then there is $\tau_0>0$ such that $G_{i;\tau}$ are smooth metrics for all $i$ and for all $0<|\tau|\le\tau_0$.
In the following, $E_k=E_k(x,\tau)$ ($k=1, 2$) will denote a quantity such that $|E_k|\le C|\tau|^k$ for some $C$ independent of $i$ and $\tau$.
\begin{lma}\label{l-vol-1} $dv_{G_{i;\tau}}=dv_{g_i}(1+E_2)$ and $V(M, G_{i;\tau})=1+E_2$, where $dv_g$ denotes the volume element of the metric $g$.
\end{lma} \begin{proof} Since $g_i\to g_0$ uniformly on compact sets of $M\setminus\Sigma$ in $C^\infty$ norm and since $h_i$ is traceless, the results follow.
\end{proof}
We have the following general fact, see \cite[Prop. 4]{BrendleMarques2011}: \begin{lma}\label{l-scalar-curvature-1}
Let $(\Omega^n, g)$ be a smooth Riemannian manifold. Let $\bar g= g+h$ with $|h|_{g}\le \frac12$. Then the scalar curvatures are related as:
\begin{equation*}
\mathcal{S}_{\bar g}-\mathcal{S}_g= \mbox{div}_g(\mbox{div}_g(h))- \Delta_g\mbox{tr}_gh-\langle h,\mbox{Ric}(g)\rangle_g+F
\end{equation*}
where $$|F|\le C\left( |\nabla h|_g^2+|h|_g|\nabla^2h|_g+|\mbox{Ric}(g)|_g|h|^2_g\right)$$ for some constant $C$ depending only on $n$. Here $\nabla$ is the covariant derivative with respect to $ g$. \end{lma}
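The first three terms on the right hand side are precisely the linearization of the scalar curvature at $g$: for the variation $g(s)=g+sh$ one has
\begin{equation*}
\frac{d}{ds}\Big|_{s=0}\mathcal{S}_{g(s)}=\mbox{div}_g(\mbox{div}_g(h))-\Delta_g\mbox{tr}_gh-\langle h,\mbox{Ric}(g)\rangle_g,
\end{equation*}
so the term $F$ collects all contributions which are at least quadratic in $h$.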
\begin{lma}\label{l-scalar-curvature-2} Let $\mathcal{S}_i$ be the scalar curvature of $g_i$ and $\mathcal{S}_{i;\tau}$ be the scalar curvature of $G_{i;\tau}$. Then \begin{equation*} \begin{split}
{\mathcal{S}}_{i;\tau}=\mathcal{S}_i+\tau\mbox{div}_{g_i}(\mbox{div}_{g_i} h_i)-\tau\langle h_i,\mbox{Ric}(g_i)\rangle_{g_i}+E_2(\tau). \end{split} \end{equation*}
Moreover, ${\mathcal{S}}_{i;\tau}=\mathcal{S}_i$ outside $B_{x_0}(2r,g_i)$, and ${\mathcal{S}}_{i;\tau}$ is bounded below by a constant independent of $i, \tau$. \end{lma} \begin{proof} The lemma follows from Lemma \ref{l-scalar-curvature-1}, the fact that $h_i$ is traceless, the fact that $h_i=0$ outside $B_{x_0}(2r,g_i)$, the fact that $g_i\to g_0$ in $C^\infty$ outside $\Sigma$, and the fact that $\mathcal{S}_i\ge\sigma-\delta_i$. \end{proof}
In the following, let \begin{equation}\label{e-constant} a=\frac{4(n-1)}{n-2};\ \ p=\frac{2n}{n-2}. \end{equation} By the resolution of the Yamabe conjecture \cite{Yamabe1960,Trudinger1968,Aubin1976-1,Schoen1984}, for each $i, \tau$, we can find a smooth positive solution $u_{i;\tau}$ of:
\begin{equation}\label{e-Yamabe-1} -a\Delta_{G_{i;\tau}}u_{i;\tau}+{\mathcal{S}}_{i;\tau}u_{i;\tau}= \lambda_{i;\tau}V_{i;\tau}^{-\frac2n}u_{i;\tau}^{p-1} \end{equation} with $ \lambda_{i;\tau}=Y(\mathcal{C}_{i,\tau})$, which is less than or equal to $\sigma$; in particular, it is nonpositive. Here $\mathcal{C}_{i,\tau}$ is the class of smooth metrics conformal to $G_{i;\tau}$. Moreover, $u_{i;\tau}$ is normalized by $$
\int_M u_{i;\tau}^pdv_{G_{i;\tau}}=1,
$$
and $V_{i,\tau}=V(M,G_{i;\tau})$.
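Equation \eqref{e-Yamabe-1} is the prescribed scalar curvature equation for a conformal change of metric: with $a$ and $p$ as in \eqref{e-constant}, the metric $\bar G=u_{i;\tau}^{p-2}G_{i;\tau}$ has scalar curvature
\begin{equation*}
\mathcal{S}_{\bar G}=u_{i;\tau}^{1-p}\left(-a\Delta_{G_{i;\tau}}u_{i;\tau}+{\mathcal{S}}_{i;\tau}u_{i;\tau}\right)=\lambda_{i;\tau}V_{i;\tau}^{-\frac2n},
\end{equation*}
so $u_{i;\tau}$ produces a metric in $\mathcal{C}_{i,\tau}$ of constant scalar curvature.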
\begin{lma}\label{l-Einstein-1} There is $0<\tau_1\le \tau_0$ independent of $i$ such that if $0>\tau\ge -\tau_1$, then \begin{equation*} \begin{split}
\frac a2\int_M|^{(i;\tau)}\nabla u_{i;\tau}|^2_{G_{i;\tau}}dv_{G_{i;\tau}}&-\lambda_{i;\tau}V_{i;\tau}^{-\frac2n}+\sigma\\
\le& -C|\tau|\int_{B_{x_0}(2r,g_i)}\phi u_{i;\tau}^2 dv_{g_i}+C'\delta_i+E_2(\tau) \end{split} \end{equation*} for some positive constants $C, C'$ independent of $i$ and $\tau$. Here $^{(i;\tau)}\nabla$ is the covariant derivative with respect to $G_{i;\tau}$. \end{lma} \begin{proof} For simplicity of notations, in the following we denote $^{(i;\tau)}\nabla$ by $\nabla$, $G_{i;\tau}$ by $G$; $g_i$ by $g$; $u_{i;\tau}$ by $u$; $\lambda_{i;\tau}$ by $\lambda$; $\mathcal{S}_{i;\tau}$ by $\mathcal{S}_G$; $\mathcal{S}_{i}$ by $\mathcal{S}_g$; and $V_{i;\tau}$ by $V$.
Multiplying \eqref{e-Yamabe-1} by $u$ and integrating by parts, using the fact that $\int_M u^p dv_G=1$, we have
\begin{equation}\label{e-E-1}
\begin{split}
a\int_M|\nabla u|^2_{G}dv_{G}- \lambda V^{-\frac2n} =&- \int_M \mathcal{S}_{G}u^2 dv_{G}\\ \le&-\int_M {\mathcal{S}}_{G}u^2dv_g+E_2(\tau)\int_M u^2 dv_g\\ \end{split} \end{equation} by Lemmas \ref{l-vol-1}, \ref{l-scalar-curvature-2}, and the fact that $g_i$ converges in $C^\infty$ norm in $B_{x_0}(3r,g_0)\supset B_{x_0}(2r;g_i)$. On the other hand, by Lemma \ref{l-scalar-curvature-2}, for any $0<\epsilon<1$, \begin{equation}\label{e-E-2}
\begin{split}
-\int_M {\mathcal{S}}_{G}u^2dv_g \le &-\int_M {\mathcal{S}}_{g}u^2dv_g-\tau\int_M\left(\mbox{div}_g(\mbox{div}_gh)-\langle h,\mbox{Ric}(g)\rangle_g \right)u^2dv_g\\ &+E_2(\tau)\int_{B_{x_0}(2r;g)}u^2 dv_g\\
\le & -\int_M {\mathcal{S}}_{g}u^2dv_g+C_1|\tau|\int_Mu|^g\nabla u|_g\left(|\phi'||\accentset{\circ}{\mbox{Ric}}(g)|_g+\phi|^g\nabla \mathcal{S}_0|_g\right)dv_g\\
&-|\tau|\int_M\phi|\accentset{\circ}{\mbox{Ric}}(g)|^2 u^2dv_g
+E_2(\tau)\int_{B_{x_0}(2r;g)}u^2 dv_g\\
\le&(-\sigma+\delta)\int_Mu^2dv_g+\left(C_2+\epsilon^{-1}\right)|\tau|\int_M|^g\nabla u|_g^2dv_g\\&- C_3|\tau|\int_M\phi|\accentset{\circ}{\mbox{Ric}}(g)|^2 u^2dv_g
+\left(E_2(\tau)+C_2\epsilon|\tau|\right)\int_{B_{x_0}(2r;g)}\phi u^2 dv_g\\
\le &(-\sigma+\delta)\int_Mu^2dv_g+\left(C_2+\epsilon^{-1}\right)|\tau|\int_M|^g\nabla u|_g^2dv_g\\&
+\left(E_1(\tau)+C_2\epsilon-C_3 c\right)|\tau|\int_{B_{x_0}(2r;g)}\phi u^2 dv_g \end{split}
\end{equation} for some constants $C_1, C_2, C_3>0$ independent of $i,\tau$. Here we have used the fact that $|\phi'|^2\le C\phi$ and the fact that $\mathcal{S}_g\ge \sigma-\delta_i$, which is negative, where we denote $\delta_i$ by $\delta$. Choosing $\epsilon>0$ so that $C_2\epsilon=\frac 12 C_3c$, the result follows if $\tau_1>0$ is small enough independent of $i$, by \eqref{e-E-1}, \eqref{e-E-2}, the H\"older inequality, the fact that $g, G$ are uniformly equivalent, the fact that $\int_M u^pdv_G=1$, $V(M,g)=1$, and $V(M,G)=1+E_2(\tau)$.
\end{proof}
Let $0>\tau_k>-\tau_1$, $\tau_k\to0$. Since $\delta_i\to0$, for each $k$ we can find $i_k$ such that $ \delta_{i_k}\le \tau_k^2 $, $i_k\to\infty$. Let us denote $G_{i_k;\tau_k}$ by $G_k$, and $u_{i_k;\tau_k}$ by $u_k$. We want to prove the following: \begin{lma}\label{l-uconvergence-1} There is a constant $C>0$ such that for all $k$, $$\inf_{B_{x_0}(3r,g_0)}u_k\ge C.$$ \end{lma}
Suppose the lemma is true; then we will have a contradiction. In fact, if we denote $\delta_{i_k}$ by $\delta_k$, since $V(M,G_k)=1+E_2(\tau_k)$ and $\lambda_k\le\sigma$, by Lemma \ref{l-Einstein-1} we have
\begin{equation*} \begin{split}
\frac a2\int_M|^{G_k}\nabla u_k|^2_{G_{k}}dv_{G_{k}}\le& -C_1|\tau_k|\int_{B_{x_0}(2r,g_{i_k})}\phi u_k^2 dv_{g_{i_k}}+C_2\delta_k+C_2\tau_k^2\\
\le& -C_1|\tau_k|\int_{B_{x_0}(2r,g_{i_k})}\phi u_k^2 dv_{g_{i_k}}+(C_2+1) \tau_k^2 \end{split} \end{equation*}
for some positive constants $C_1, C_2$ independent of $k$. By Lemma \ref{l-uconvergence-1}, this is impossible if $k$ is large enough. Hence $\accentset{\circ}{\mbox{Ric}}(g_0)(x_0)$ must be zero. Theorem \ref{t-Yamabe-1} then follows.
It remains to prove Lemma \ref{l-uconvergence-1}. Consider the equation: \begin{equation}\label{e-conformal-1} -a\Delta u+\mathcal{S}u=\lambda u^{p-1}. \end{equation}
\begin{lma}\label{l-Lq-1} Let $(M^n,g)$ be a smooth Riemannian manifold with scalar curvature $\mathcal{S}\ge -s_0$, where $s_0\ge0$. Let $u>0$ be a solution of \eqref{e-conformal-1} with $||u||_p=1$ and with $\lambda\le 0$. Then for any $q>p$, $$
||u||_q\le C(s_0, V(M ;g), n, q). $$
\end{lma} \begin{proof} This is from \cite{Trudinger1968}; see also \cite[Prop. 4.4]{LeeParker1987}. We sketch the proof here. Let $\theta>0$. Multiplying \eqref{e-conformal-1} by $u^{1+2\theta}$ and integrating by parts, we have \begin{equation*} \begin{split} 0\ge& \int_M \lambda u^{1+2\theta+p-1}dv_g\\ =& \int_M \left(-a u^{1+2\theta}\Delta u+\mathcal{S}u^{2+2\theta}\right)dv_g\\
=&\int_M\left(a(1+2\theta)u^{2\theta}|\nabla u|^2+\mathcal{S}u^{2+2\theta}\right)dv_g. \end{split} \end{equation*} Letting $w=u^{1+\theta}$, we have \begin{equation}\label{e-Lq} \begin{split}
\int_M|\nabla w|^2 dv_g\le& -\frac{(1+\theta)^2}{a(1+2\theta)}\int_M\mathcal{S}w^2 dv_g\\ \le &\frac{s_0(1+\theta)^2}{a (1+2\theta)}\int_M w^2 dv_g. \end{split} \end{equation} Combining this with \cite[Th. 2.3]{LeeParker1987} (take $\epsilon=1$ there), we have \begin{equation*} \begin{split}
||w||_p^2\le& C(n)\int_M(|\nabla w|^2+w^2)dv_g \\ \le& C(n, \theta, s_0)\int_M w^2dv_g\\ =&C(n, \theta, s_0)\int_M w^{2-\epsilon}w^\epsilon dv_g\text{\ \ ($0<\epsilon<2 $ to be chosen)}\\ \le &C(n, \theta, s_0)\left(\int_M w^p dv_g\right)^{\frac{2-\epsilon}p}\left(\int_M w^{\epsilon\cdot\frac p{p-2+\epsilon} } dv_g\right)^{\frac{p-2+\epsilon}p} \end{split} \end{equation*} So $$ \left(\int_M w^pdv_g\right)^\frac \epsilon p\le C(n, \theta, s_0) \left(\int_M w^{\epsilon\cdot\frac p{p-2+\epsilon} } dv_g\right)^{\frac{p-2+\epsilon}p}. $$ Let $\epsilon=\frac{p-2}\theta$ so that $$ (1+\theta) \epsilon\cdot\frac p{p-2+\epsilon}=p. $$ If $2\theta>p-2$, then $0<\epsilon<2$, we have $$ \int_M u^{p(1+\theta)}dv_g\le C(n,s_0,\theta). $$ This proves the lemma for $q=p(1+\theta)$ with $2\theta>p-2$. If $q=p(1+\theta)$, with $2\theta \le p-2$, the lemma follows from H\"older inequality. \end{proof}
\begin{lma}\label{l-u-1} As in Lemma \ref{l-uconvergence-1}, \begin{enumerate}
\item [(i)] For any $q>p$, there is a constant $C$ independent of $k$ such that $$
||u_k||_{q,g_0}\le C. $$
\item [(ii)] $u_k$ subconverge in $C^{2}$ norm with respect to $g_0$ in any compact set $K\subset M\setminus \Sigma$.
\item [(iii)] $$
\lim_{k\to\infty}\int_M|^{g_0}\nabla u_k|_{g_0}^2dv_{g_0}=0. $$
\item[(iv)] $\lim_{k\to\infty}\lambda_k=\sigma$. \end{enumerate} where $\lambda_k=\lambda_{i_k;\tau_k}$ as in \eqref{e-Yamabe-1}. \end{lma} \begin{proof} Since $\mathcal{S}_{i_k;\tau_k}\ge \sigma-\delta_k$ and $\delta_k\to0$, (i) follows from Lemma \ref{l-Lq-1} and the fact that $C^{-1}g_0\le G_k\le Cg_0$ for some $C>0$ for all $k$.
To prove (ii), note that for any compact set $K\subset M\setminus \Sigma$ there is an open set $K\Subset U\subset M\setminus \Sigma$ so that $G_k$ converges in $C^\infty$ norm to $g_0$ on $U$. By Lemma \ref{l-Einstein-1}, we conclude that $0\le -\lambda_k\le C$ for some constant independent of $k$. Then by (i) and \cite[Th. 2.4]{LeeParker1987}, we conclude that for any $U'\Subset U$, $$
||u_k||_{L_{2}^q(U')}\le C_1 $$ for some constant $C_1$ independent of $k$. We then use the Sobolev embedding theorem to conclude that the $C^\alpha$ norms of $u_k$ are uniformly bounded in $U'\Subset U$. From this the result follows by Schauder estimates.
(iii) and (iv) follow from Lemma \ref{l-Einstein-1}.
\end{proof}
\begin{cor}\label{c-u-1} After passing to a subsequence, $u_k$ converge in $C^2$ norm locally in $M\setminus \Sigma$ to a function $\mathfrak{u}$. Moreover, $\mathfrak{u}=1$ in $M\setminus\Sigma$ and $$
\mathcal{S}_{g_0} =\sigma. $$ In particular Lemma \ref{l-uconvergence-1} is true. \end{cor} \begin{proof} By Lemma \ref{l-u-1}, after passing to a subsequence, $u_k$ converge in $C^2$ norm locally in $M\setminus \Sigma$ to a function $\mathfrak{u}$. Moreover, $\mathfrak{u}$ is constant in each component of $M\setminus\Sigma$. We claim that there is $C_1>0$ such that $0\le u_k\le C_1$ for all $k$.
Since the scalar curvature $\mathcal{S}_{G_k}\ge -s_0$ for some $s_0>0$ independent of $k$ and since $\lambda_k\le0$, we have $$ -a\Delta_{G_k}u_k-s_0 u_k\le -a\Delta_{G_k}u_k+\mathcal{S}_{G_k}u_k\le 0. $$ Moreover, since $\int_Mu_k^pdv_{G_k}=1$ and $G_k$ is equivalent to $g_0$ uniformly in $k$, the claim follows from the mean value inequality \cite[Theorem 8.17]{GT}.
Since $u_k\to \mathfrak{u}$ almost everywhere, and $G_k$ converge uniformly to $g_0$, we have $$ \int_M \mathfrak{u}^pdv_{g_0}=1. $$ In particular, $\mathfrak{u}>0$ somewhere.
Next we want to prove that $\mathfrak{u}$ is constant on $M$. By Lemma \ref{l-u-1}, there is a constant $C_2$ independent of $k$ so that $$
\int_M(|^{g_0}\nabla u_k|^2_{g_0}+u_k^2)dv_{g_0}\le C_2. $$ Passing to a subsequence, we may assume that $u_k$ converge weakly in $W^{1,2}(M,g_0)$ to $v$, say. We claim that $v$ is constant. In fact, for any $\ell\ge1$, the sequence $u_{\ell+k}$, $k\ge 1$, also converges weakly to $v$. Then we can find convex combinations of the $u_{\ell+k}$ which converge to $v$ strongly in $W^{1,2}(M,g_0)$. Namely, for any $\epsilon>0$, there exist $\alpha_1,\dots,\alpha_{m}$ with $\alpha_k\ge0$, $\sum_{k=1}^m\alpha_k=1$ such that if $w=\sum_{k=1}^m\alpha_ku_{\ell+k}$, then $$
||w-v||_{W^{1,2}(M, g_0)}\le\epsilon. $$ On the other hand, by Lemma \ref{l-u-1}, if $\ell$ is large enough, then \begin{equation*} \begin{split}
(\int_M|^{g_0}\nabla w|^2_{g_0}dv_{g_0})^\frac12\le &(\int_M(\sum_k\alpha_k|^{g_0}\nabla u_{\ell+k}|_{g_0} )^2dv_{g_0})^\frac12\\
\le& \sum_k\alpha_k(\int_M|^{g_0}\nabla u_{\ell+k}|^2_{g_0}dv_{g_0})^\frac12\\
\le &\epsilon. \end{split} \end{equation*} Hence $$
\int_M|^{g_0}\nabla v|^2 dv_{g_0}\le (2\epsilon)^2. $$ This implies $^{g_0}\nabla v=0$, a.e. Since $v\in W^{1,2}(M,g_0)$, we conclude that $v=c$ is a constant as claimed.
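Here the bound $(2\epsilon)^2$ follows from the triangle inequality: since $||w-v||_{W^{1,2}(M,g_0)}\le \epsilon$ and $\left(\int_M|^{g_0}\nabla w|^2_{g_0}dv_{g_0}\right)^{\frac12}\le\epsilon$, we have
\begin{equation*}
\left(\int_M|^{g_0}\nabla v|^2_{g_0} dv_{g_0}\right)^{\frac12}\le \left(\int_M|^{g_0}\nabla (v-w)|^2_{g_0} dv_{g_0}\right)^{\frac12}+\left(\int_M|^{g_0}\nabla w|^2_{g_0} dv_{g_0}\right)^{\frac12}\le 2\epsilon.
\end{equation*}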
On the other hand, for any smooth function $\phi$ on $M$, \begin{equation*} \begin{split} \lim_{k\to\infty}\int_M(\langle ^{g_0}\nabla \phi, ^{g_0}\nabla u_k\rangle_{g_0} +\phi u_k)dv_{g_0}=&\int_M(\langle ^{g_0}\nabla \phi, ^{g_0}\nabla v\rangle_{g_0} +\phi v)dv_{g_0}\\ =&\int_M \phi v dv_{g_0}. \end{split} \end{equation*} Also, by Lemma \ref{l-u-1} again, and the fact that $u_k$ are uniformly bounded and $u_k\to \mathfrak{u}$ a.e., we have
\begin{equation*} \lim_{k\to\infty}\int_M(\langle ^{g_0}\nabla \phi, ^{g_0}\nabla u_k\rangle_{g_0} +\phi u_k)dv_{g_0}=\int_M \phi \mathfrak{u} dv_{g_0}. \end{equation*} So \begin{equation*} \int_M \phi \mathfrak{u} dv_{g_0}=\int_M \phi v dv_{g_0}. \end{equation*} Hence $\mathfrak{u}=v$ is a constant. Since $\int_M\mathfrak{u}^pdv_{g_0}=1$, we conclude that $\mathfrak{u}=1$. Since $\mathfrak{u}$ satisfies $$ -a\Delta_{g_0}\mathfrak{u}+\mathcal{S}_{g_0} \mathfrak{u}=\sigma \mathfrak{u}^{p-1}, $$ the last assertion follows.
\end{proof}
This completes the proof of Theorem \ref{t-Yamabe-1}. Next we want to discuss the case that $\Sigma$ has codimension one. We have the following: \begin{thm}\label{t-Yamabe-2} Let $M^n$ be a smooth compact manifold such that $\sigma(M)\le 0$. Suppose $g_0$ is a Riemannian metric with $V(M,g_0)=1$ satisfying {\bf (b1)--(b3)} in section \ref{s-approx-1}. Then $g_0$ is Einstein on $M\setminus \Sigma$ and $\mathcal{S}_{g_0}=\sigma(M)$. Moreover, $H_-=H_+$. \end{thm} \begin{proof} Let $g_i=g_{\epsilon_i,0}$ be the smooth approximation of $g_0$ by \cite{Miao2002} as given in section \ref{s-approx-1}. The fact that $g_0$ is Einstein outside $\Sigma$ can be proved similarly as above using Corollary \ref{c-approx-2}. It remains to prove that $H_-=H_+$. Let $\epsilon_i\to0$ and let $u_i$ be the positive solution of $$ -a\Delta_i u_i+\mathcal{S}_iu_i=\lambda_i u_i^{p-1} $$ normalized so that $$ \int_M u_i^pdv_i=1. $$ Here $\Delta_i$ is the Laplacian of $g_i$ etc. Also $\lambda_i\le \sigma$, where $\sigma:=\sigma(M)$. Suppose $H_-(z)>H_+(z)$ somewhere; then one can easily check that there is a positive constant $b$ such that for $i$ large enough, \begin{equation}\label{e-meancurvature-1} \int_M \mathcal{S}_idv_i\ge \sigma+b. \end{equation} As before, passing to a subsequence if necessary, $u_i\to 1$ outside $\Sigma$, uniformly in $C^\infty$ norm on any compact set of $M\setminus \Sigma$. Moreover, the $u_i$ are uniformly bounded, and $\lambda_i\to \sigma$. Since $\mathcal{S}_i$ is bounded below by $-s_0$ for some $s_0\ge0$ and the $u_i$ are uniformly bounded, we have \begin{equation*} \begin{split} \sigma=&\lim_{i\to\infty}\lambda_i\int_Mu_i^{p-1}dv_i \\ =&\lim_{i\to\infty}\int_M \mathcal{S}_iu_idv_i\\ \ge&\lim_{i\to\infty}\int_M\mathcal{S}_i(u_i-1) dv_i+\sigma+b, \end{split} \end{equation*} where we have used the fact that $V(M,g_{0,\epsilon_i})\to V(M,g_0)=1$ and \eqref{e-meancurvature-1}. We claim that $$ \lim_{i\to\infty}\int_M\mathcal{S}_i(u_i-1) dv_i=0. $$
If the claim is true, then we have a contradiction because $b>0$. To prove the claim, note that on $|t|\le a$, the original metric $g_0$ is of the form:
$$ g_0(z,t)=dt^2+g_{ij}(z,t)dz^idz^j. $$ We assume that $g_{ij}(z,t)$ (which will be denoted by $h^t_{ij }(z)$) is uniformly equivalent to $g_{ij}(z,0)$ (which will be denoted by $h_{ij}(z)$). For any $z\in \Sigma$ and for any $1\ge t\ge 0$, \begin{equation*}
|u_i(z,1)-u_i(z,t)|\le \int_t^1\left|\frac{\partial u_i(z,s)}{\partial s}\right|ds\le \int_0^1|^{g_0}\nabla u_i|(z,s)ds. \end{equation*}
By the properties of $g_{0,\epsilon}$, \begin{equation}\label{e-scalar-4}
\int_{\frac{\epsilon^2_i}{100}\le |t|\le \epsilon_i}|\mathcal{S}_i(u_i-1)|dv_i=o(1) \end{equation} because $u_i$ are uniformly bounded. \begin{equation}\label{e-scalar-5} \begin{split}
&\int_{|t|\le \frac{\epsilon^2_i}{100} } \mathcal{S}_i(z,t)(u_i(z,t)-1) dv_{g_i}\\ =&
\int_{|t|\le \frac{\epsilon^2_i}{100} } \mathcal{S}_i(z,t)(u_i(z,1)-1) dv_{g_i}+
\int_{|t|\le \frac{\epsilon^2_i}{100} } \mathcal{S}_i(z,t)(u_i(z,t)-u_i(z,1)) dv_{g_i}\\ =&I+II. \end{split}
\end{equation} Since $u_i(z,1)\to 1$ uniformly on $z\in \Sigma$, and $\int_M |\mathcal{S}_i|dv_{g_i}$ is bounded, we conclude that \begin{equation}\label{e-scalar-5a} I=o(1) \end{equation} as $i\to\infty$. On the other hand, \begin{equation}\label{e-scalar-6} \begin{split}
|II|\le &\int_{|t|\le \frac{\epsilon^2_i}{100} } |\mathcal{S}_i(z,t)(u_i(z,t)-u_i(z,1))| dv_{g_i}\\
\le &c\int_{z\in \Sigma}\int_{-\frac{\epsilon^2_i}{100} }^{ \frac{\epsilon^2_i}{100} }\epsilon_i^{-2}\left(\int_0^1|\nabla u_i(z,s)|ds\right) dt\,dv_{h}\\
\le& c\int_{z\in \Sigma}\left( \int_0^1|\nabla u_i(z,s)|ds\right) dv_{h}\\
\le &c\int_M|\nabla u_i|dv_{g_i}\\ =&o(1) \end{split} \end{equation} by the Cauchy--Schwarz inequality and Lemma \ref{l-u-1}. The claim follows from \eqref{e-scalar-4}--\eqref{e-scalar-6}. \end{proof}
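\begin{remark} In the last estimate, the fact that $\int_M|\nabla u_i|dv_{g_i}=o(1)$ follows from the Cauchy--Schwarz inequality,
\begin{equation*}
\int_M|\nabla u_i|dv_{g_i}\le \left(\int_M|\nabla u_i|^2dv_{g_i}\right)^{\frac12}V(M,g_i)^{\frac12},
\end{equation*}
together with Lemma \ref{l-u-1}(iii), the uniform equivalence of $g_i$ and $g_0$, and the uniform boundedness of $V(M,g_i)$.
\end{remark}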
\section{singular Einstein metrics}\label{s-harmonic-1}
In the conclusions of Theorem \ref{t-Yamabe-1}, one obtains metrics which are smooth and Einstein outside some singular sets. In this section, we want to prove that under certain conditions, one may introduce a smooth structure so that the Einstein metric is actually smooth. More precisely, we have the following:
\begin{thm}\label{t-harmonic-1} Let $ M^n $, $n\ge 3$ be a smooth manifold and $g$ is a Riemannian metric on $M$ satisfying the following conditions: There is a compact set $\Sigma$ in $M$ such that \begin{enumerate}
\item [(i)] $g$ is Lipschitz and $g$ is smooth on $M\setminus \Sigma$;
\item [(ii)] $\mbox{Ric}=\lambda g$ on $M\setminus\Sigma$ for some constant $\lambda$;
\item [(iii)] codimension of $\Sigma$ is larger than 1 in the sense that
$V(\Sigma(\epsilon),g) =O(\epsilon^{1+\theta})$ for some $\theta>0$, where $\Sigma(\epsilon)=\{x\in M|\ d(x,\Sigma)<\epsilon\}$. \end{enumerate} Then for any open set $U$ containing $\Sigma$, there is a smooth structure on $M$ which is the same as the original smooth structure on $M\setminus U$ so that $g$ is a smooth Einstein metric on $M$. \end{thm}
We want to construct the required smooth structure using harmonic coordinates. First recall the following.
\begin{lma}\label{l-harmonic-1} Let $B(1)$ be the unit ball in $\mathbb R^n$ with center at the origin. Let $(a^{ij})$ be a symmetric matrix so that $$
\lambda|\xi|^2\le a^{ij}\xi^i\xi^j\le \Lambda|\xi|^2, $$ for some $\Lambda>\lambda>0$ for all $\xi\in \mathbb R^n$ and $a^{ij}$ is Lipschitz with Lipschitz constant $L$. Let $f\in L^\infty(B(1))$. Then the following boundary value problem: \begin{equation*} \left\{
\begin{array}{ll}
\displaystyle{ \frac{\partial}{\partial x^i}\left(a^{ij}\frac{\partial u}{\partial x^j}\right)}&= f \hbox{\ in $B(1)$;} \\
u&= 0\hbox{\ on $\partial B(1)$,}
\end{array} \right. \end{equation*} has a unique solution $u\in W^{2,p}(B(1))\cap W_0^{1,p}(B(1))$ for any $p>1$. Moreover, $$
||u||_{2,p}\le C\left(||u||_p+ ||f||_p\right) $$
for some constant $C$ depending only on $p, n, \lambda,\Lambda, L$. Here $||u||_{2,p}$ is the $W^{2,p}$ norm on $B(1)$ and $||u||_p$ is the $L^p$ norm in $B(1)$. \end{lma} \begin{proof} The results follow from \cite[Theorem 9.15, Corollary 9.13]{GT}. Taking $p>n$, by the Sobolev embedding theorem, $u$ is continuous up to the boundary and $u=0$ on the boundary.
\end{proof}
With the same assumptions and notation as in Theorem \ref{t-harmonic-1}, let $q\in \Sigma$. Let $U_\delta=\{(x^1,\dots,x^n)|\ |x|<\delta\}$ be a smooth local coordinate neighborhood with $q$ at the origin such that $g_{ij}$ is equivalent to the Euclidean metric and $g_{ij}$ is Lipschitz with Lipschitz constant $L$.
\begin{lma}\label{l-harmonic-2} With the above assumptions and notation, there is $\delta>\epsilon>0$ and functions $u^1,\dots, u^n$ on $U_\epsilon=\{(x^1,\dots,x^n)| |x|<\epsilon\}$ such that the mapping $(x^1,\dots,x^n)\to (u^1,\dots,u^n)$ is a local $C^{1,\alpha}$ diffeomorphism at the origin for some $0<\alpha<1$, $u^i\in W^{2,p}(U_\epsilon)$ for all $p>1$ and $u^i$ is harmonic with respect to $g$ for $1\le i\le n$. Moreover, $u^i$ is smooth outside $\Sigma$.
\end{lma}
\begin{proof} Let $\delta>\epsilon>0$ be constants to be chosen later. Fix $\ell$ and let $f=\Delta_g x^\ell$, which is bounded by the assumption on $g_{ij}$. Let $\lambda, \Lambda>0$ be such that
\begin{equation}\label{e-elliptic}
\lambda|\xi|^2\le g^{ij}\xi^i\xi^j\le \Lambda|\xi|^2, \end{equation} in $U_\delta$.
Let $y=\epsilon^{-1}x$. Consider the following boundary value problem on $B(1)$ in the $y$-space
\begin{equation}\label{e-v-1} \left\{
\begin{array}{ll}
\displaystyle{\frac{\partial}{\partial y^i}\left(\sqrt gg^{ij}\frac{\partial v}{\partial y^j}\right)}&= \epsilon^2 \sqrt gf \hbox{\ in $B(1)$;} \\
v&= 0\hbox{\ on $\partial B(1)$,}
\end{array} \right. \end{equation}
By Lemma \ref{l-harmonic-1}, the boundary value problem has a solution $v$ satisfying the conclusions in that lemma. Here we have used the fact that $g_{ij}$ has Lipschitz constant bounded by $\epsilon L$ and still satisfies \eqref{e-elliptic} as functions of $y$. In particular, we have
$$
||v||_{2,p;y}\le C_1\left(||v||_{p;y}+\epsilon^2\right).
$$
Here and below, $C_i$ will denote positive constants independent of $\epsilon$. Let $p>n$ be fixed, then one can see that there is $1>\alpha>0$ such that $v\in C^{1,\alpha}(B(1))$ in the $y$-space and
\begin{equation}\label{e-harmonic-1}
||v||_{C^{1,\alpha}(B(1))}\le C_2\left(||v||_{p;y}+\epsilon^2\right). \end{equation}
Let $w(x)=v(\epsilon^{-1} x )$ with $x\in B(\epsilon)$ in the $x$-space. Then $w$ satisfies
\begin{equation*} \left\{
\begin{array}{ll}
\displaystyle{ \frac{\partial}{\partial x^i}\left(\sqrt gg^{ij}\frac{\partial w}{\partial x^j}\right)}&= \sqrt gf \hbox{\ in $B(\epsilon)$;} \\
w&= 0\hbox{\ on $\partial B(\epsilon)$,}
\end{array} \right. \end{equation*} in the $x$-space. Moreover, $w\in W^{2,p}(B(\epsilon))$. Let $u^\ell=x^\ell-w$. Then $u^\ell$ is harmonic, namely, $u^\ell$ satisfies
\begin{equation*} \left\{
\begin{array}{ll}
\displaystyle{\frac1{\sqrt g} \frac{\partial}{\partial x^i}\left(\sqrt gg^{ij}\frac{\partial u^\ell}{\partial x^j}\right)}&= 0 \hbox{\ in $B(\epsilon)$;} \\
u^\ell&= x^\ell \hbox{\ on $\partial B(\epsilon)$,}
\end{array} \right.
\end{equation*} By the maximum principle, we conclude that $|u^\ell|\le \epsilon$ and so $|w|\le 2\epsilon$. Moreover, \begin{equation}\label{e-localdiff-1}
\sup_{B(\epsilon)}|\partial_x w|=\epsilon^{-1}\sup_{B(1)}|\partial_yv|\le C_2\epsilon^{-1}\left(||v||_{p;y}+\epsilon^2\right). \end{equation} To estimate the right-hand side, multiply \eqref{e-v-1} by $v$ and integrate by parts; using the Poincar\'e inequality, we have \begin{equation*}
\int_{B(1)}v^2dy\le C_3\epsilon^2\int_{B(1)}|v|dy \end{equation*} and so \begin{equation*} \begin{split}
||v||_{p;y}\le&\left(\sup_{B(1)}|v|\right)^{1-\frac2p}\left(\int_{B(1)}v^2\right)^\frac1p\\ \le &C_4\epsilon^{1-\frac2p}\cdot\epsilon^\frac4p\\ =&C_4\epsilon^{1+\frac2p} \end{split}
\end{equation*} where we have used the H\"older inequality and the fact that $|v|=|w|\le 2\epsilon$. By \eqref{e-localdiff-1} we conclude that \begin{equation*}
\sup_{B(\epsilon)}|\partial_x w|\le C_5\epsilon^{\frac2p}. \end{equation*} Hence \begin{equation*} \frac{\partial u^\ell}{\partial x^i}=\delta_i^\ell+O(\epsilon^{\frac2p}). \end{equation*} From this and the fact that $g$ is smooth outside $\Sigma$ it is easy to see that the lemma is true, provided $\epsilon$ is small enough.
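In fact, since $u^\ell\in W^{2,p}(B(\epsilon))\subset C^{1,\alpha}(B(\epsilon))$ for $p>n$, the map $(x^1,\dots,x^n)\to(u^1,\dots,u^n)$ is $C^{1,\alpha}$, and its Jacobian satisfies
\begin{equation*}
\det\left(\frac{\partial u^\ell}{\partial x^i}\right)=1+O(\epsilon^{\frac2p})>0
\end{equation*}
for $\epsilon$ small enough; by the inverse function theorem, the map is therefore a local $C^{1,\alpha}$ diffeomorphism at the origin.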
\end{proof}
\begin{proof}[Proof of Theorem \ref{t-harmonic-1}] Let $U$ be any open set containing $\Sigma$. For any $q\in \Sigma$, by Lemma \ref{l-harmonic-2}, we can find a smooth coordinate neighborhood $V_q\Subset U$ around $q$ and $C^{1,\alpha}$ functions $u^1,\dots,u^n$ on $V_q$ which are in $ W^{2,p}(V_q)$ as functions of $x$. Moreover, $(x^1,\dots,x^n)\to (u^1,\dots,u^n)$ is a $C^1$ diffeomorphism from $V_q$ to its image $\widetilde V_q$ in the $u$-space. Let
\begin{equation}\label{e-h-1}h_{ab}=g(\frac{\partial}{\partial u^a},\frac{\partial}{\partial u^b})=\frac{\partial x^i}{\partial u^a}\frac{\partial x^j}{\partial u^b}g_{ij}, \end{equation}
where $g_{ij}=g(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j})$. Let $R_{ab}=\mbox{Ric}(\frac{\partial}{\partial u^a},\frac{\partial}{\partial u^b})$. Since each $u^a$ is harmonic, and $R_{ab}=\lambda h_{ab}$ by assumption, away from $\Sigma$ for all $a, b$ we have
\begin{equation}\label{e-Ricci-harmonic-1}
h^{cd}h_{ab,cd}=-2\lambda h_{ab}+\partial h^{-1}*\partial h+ h^{-1}*h^{-1}*\partial h*\partial h:=Q(h,\partial h).
\end{equation}
where $(h^{cd})=(h_{cd})^{-1}$, $h_{ab,c}=\frac{\partial}{\partial u^c}h_{ab}$ etc., and $\partial h^{-1}*\partial h$ denotes a sum of finitely many terms of the form $(\frac{\partial}{\partial u^c}h^{ab})(\frac{\partial}{\partial u^f}h_{de})$ etc. By \eqref{e-h-1},
\begin{equation}\label{e-h-2}
h_{ab,c}=\frac{\partial^2x^i}{\partial u^a\partial u^c}\frac{\partial x^j}{\partial u^b}g_{ij}+\frac{\partial x^i}{\partial u^a}\frac{\partial^2 x^j}{\partial u^b\partial u^c}g_{ij}+\frac{\partial x^i}{\partial u^a }\frac{\partial x^j}{\partial u^b}\frac{\partial x^k}{\partial u^c}\frac{\partial }{\partial x^k}g_{ij}.
\end{equation} We may assume that $\widetilde V_q$ contains the origin, which is the coordinate of $q$. Then, shrinking $\widetilde V_q$ if necessary, by Lemma \ref{l-harmonic-2}, $h_{ab}$ is bounded and $h_{ab,c}$ is in $L^p$ for all $p>1$ and all $a, b, c$ as functions of $u$. In particular, $h_{ab}$ is in $W^{1,p}(\widetilde V_q)$ for all $p>1$. Moreover, $(h^{ab})$ is uniformly elliptic. Since $h^{ab}$ is only in $C^\alpha$ with $0<\alpha<1$, we cannot apply the standard $L^p$ estimate as in \cite[Theorem 9.19]{GT} directly. Hence, we want to prove that $h_{ab}$ is in $W^{2,p}(B(\delta))$ for all $a, b$, for all $p>n$ and for some $\delta>0$ in the $u$-space, where $B(\delta)=\{u|\ |u|<\delta\}$. Suppose this is true; then $h_{ab}\in C^{0,1}_{{\rm loc}}(B(\delta))$ and $\partial h\in W^{1,p}_{{\rm loc}}(B(\delta))$. This implies that $Q(h,\partial h)$ in \eqref{e-Ricci-harmonic-1} is in $W^{1,\frac p2}_{{\rm loc}}(B(\delta))$. Since this is true for all $p>n$, by \cite[Theorem 9.19]{GT}, we conclude that $h_{ab}$ is in $W^{3,p}_{{\rm loc}}(B(\delta))$. Continuing in this way, we conclude that $h_{ab}\in W^{k,p}_{{\rm loc}}(B(\delta))$ for all $k\ge 1$ and $p>n$ by a bootstrap argument. Hence $h_{ab}$ is smooth near the origin.
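Schematically, the bootstrap runs as
\begin{equation*}
h\in W^{2,p}_{{\rm loc}}\ \Longrightarrow\ \partial h\in W^{1,p}_{{\rm loc}}\cap L^\infty\ \Longrightarrow\ Q(h,\partial h)\in W^{1,\frac p2}_{{\rm loc}}\ \Longrightarrow\ h\in W^{3,\frac p2}_{{\rm loc}},
\end{equation*}
and since $p>n$ is arbitrary, each iteration of \eqref{e-Ricci-harmonic-1} raises the regularity of $h$ by one order.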
It remains to prove that $h_{ab}\in W^{2,p}(B(\delta))$ for all $p>n$ for all $a, b$ for some $\delta>0$. Fix $a, b$ and let $w=\phi h_{ab}$ where $\phi$ is a smooth cutoff function in $B(2\delta)$ so that $\phi=1$ in $B(\delta)$, $\phi=0$ outside $B(\frac32\delta)$, where $\delta>0$ is small enough so that $B(2\delta)\Subset \widetilde V_q$. Then away from $\Sigma$, $w$ satisfies:
\begin{equation}\label{e-Ricci-harmonic-2}
h^{cd}w_{cd}=Q_1(h,\partial h, \phi, \partial \phi, \partial^2\phi).
\end{equation} Since $Q_1$ is in $L^p(B(2\delta))$ by Lemma \ref{l-harmonic-2} and $(h^{cd})$ is continuous and uniformly elliptic, by \cite[Theorem 9.15]{GT} for any $p>n$ there is $v\in W^{2,p}(B(2\delta))\cap W_0^{1,p}(B(2\delta))$ such that \begin{equation*}
h^{cd}v_{cd}=Q_1(h,\partial h, \phi, \partial \phi, \partial^2\phi). \end{equation*} Since $h^{cd}\in W^{1,p}(B(2\delta))$ for all $p$, for any smooth function $\eta$ with compact support in $B(2\delta)$, we have \begin{equation}\label{e-Ricci-harmonic-3} \int_{B(2\delta)}\left(h^{cd}\frac{\partial\eta}{\partial u^c} \frac{\partial v}{\partial u^d}+\eta s^d\frac{\partial v}{\partial u^d}\right)du=-\int_{B(2\delta)}\eta Q_1 du, \end{equation} where $s^d=\frac{\partial}{\partial u^c}h^{cd}$. We want to prove that $w$ also satisfies this relation.
To prove the claim, note that if we consider $\Sigma\cap \widetilde V_q$, then the codimension of $\Sigma$ in the $u$-space is at least $1+\theta$ for some $\theta>0$, because $h_{ab}$ and the Euclidean metric are uniformly equivalent. As in \cite{Lee2013}, for $\epsilon>0$ small enough, we can find a smooth function $0\le \xi_\epsilon\le 1$ in $\widetilde V_q$ such that $\xi_\epsilon=1$ outside $\Sigma_{2\epsilon} $ and $\xi_\epsilon$ vanishes in $\Sigma_\epsilon\cap \widetilde V_q$, where $\Sigma_\epsilon=\{u\in \widetilde V_q|\ d(u,\Sigma)<\epsilon\}$ and the distance is the Euclidean distance. Moreover, $|\partial \xi_\epsilon|\le C_1\epsilon^{-1}$. Here and below $C_i$ denotes a positive constant independent of $\epsilon$. Now let $\eta$ be a smooth function with compact support in $B(2\delta)$. Multiplying \eqref{e-Ricci-harmonic-2} by $\eta\xi_\epsilon$ and integrating by parts, we have
\begin{equation*} \begin{split} -\int_{B(2\delta)}\eta \xi_\epsilon Q_1du=& \int_{B(2\delta)}\left[h^{cd}\left(\xi_\epsilon\frac{\partial \eta}{\partial u^c} + \eta\frac{\partial \xi_\epsilon}{\partial u^c}\right) \frac{\partial w}{\partial u^d}+\eta \xi_\epsilon s^d\frac{\partial w}{\partial u^d}\right]du. \end{split} \end{equation*} Since $w, h^{cd}\in W^{1,p}(B(2\delta))$ for all $p>1$, we have
$$
\int_{B(2\delta)}|\eta (\xi_\epsilon-1) Q_1|du\le \left(\int_{B(2\delta)} |\eta (\xi_\epsilon-1) Q_1|^2\right)^\frac12 V(\Sigma_{2\epsilon})^\frac12\to 0 $$ as $\epsilon\to0$. Similarly, one can prove that
$$
\int_{B(2\delta)}\left|h^{cd} (\xi_\epsilon-1)\frac{\partial \eta}{\partial u^c} \frac{\partial w}{\partial u^d} + \eta (\xi_\epsilon-1) s^d\frac{\partial w}{\partial u^d}\right| du\to0 $$ as $\epsilon\to0$. On the other hand, \begin{equation*} \begin{split}
\int_{B(2\delta)}\left|h^{cd} \eta\frac{\partial \xi_\epsilon}{\partial u^c} \frac{\partial w}{\partial u^d}\right|du
\le&C_2\epsilon^{-1}\int_{\Sigma_{2\epsilon}}|\partial w|du\\
\le &C_3 \epsilon^{-1}\left(\int_{\Sigma_{2\epsilon}}|\partial w|^pdu\right)^\frac1p \left(V(\Sigma_{2\epsilon})\right)^{1-\frac1p}\\
\le &C_4\epsilon^{-1+(1+\theta)(1-\frac1p)}\left(\int_{\Sigma_{2\epsilon}}|\partial w|^pdu\right)^\frac1p\\ \to&0 \end{split} \end{equation*} as $\epsilon\to0$, provided $p$ is large enough. Hence we have \begin{equation}\label{e-Ricci-harmonic-4} \int_{B(2\delta)}\left(h^{cd}\frac{\partial\eta}{\partial u^c} \frac{\partial w}{\partial u^d}+\eta s^d\frac{\partial w}{\partial u^d}\right)du=-\int_{B(2\delta)}\eta Q_1 du \end{equation} for all smooth functions $\eta$ with compact support in $B(2\delta)$.
Let $\zeta=v-w$; then $\zeta\in W_0^{1,p}$ for all $p>1$ and \begin{equation}\label{e-Ricci-harmonic-5} \int_{B(2\delta)}\left(h^{cd}\frac{\partial\eta}{\partial u^c} \frac{\partial \zeta}{\partial u^d}+\eta s^d\frac{\partial \zeta}{\partial u^d}\right)du=0 \end{equation}
for all smooth functions $\eta$ with compact support in $B(2\delta)$. Using the fact that $s^d\in L^p(B(2\delta))$, we can proceed as in the proof of \cite[Theorem 8.1]{GT} to conclude that $\zeta\equiv0$. Since it is assumed that $s^d$ is bounded in that theorem and we only have $s^d\in L^p(B(2\delta))$ for all $p>1$ in our case, we sketch the proof as follows. Suppose $\sup_{B(2\delta)}\zeta=m>0$, which is finite because $\zeta$ is continuous. For any $m>\tau>0$, let $\zeta_\tau=\max\{\zeta-\tau,0\}$. Taking $\eta=\zeta_\tau$ in \eqref{e-Ricci-harmonic-5} (by approximation) and using the uniform ellipticity of $h^{cd}$ and the Sobolev inequality, we have
\begin{equation*}
\begin{split}
||\zeta_\tau||_{\frac{2n}{n-2}}\le& D_1||\partial \zeta_\tau||_2\\
\le &D_2\left(\int_{\Gamma_\tau} \zeta_\tau^2 \mathbf{s}^2du\right)^\frac12\\
\le &D_2||\zeta_\tau||_{\frac{2n}{n-2}}\left(\int_{\Gamma_\tau}\mathbf{s}^{n}\right)^\frac1n\\
\le &D_3||\zeta_\tau||_{\frac{2n}{n-2}}|\Gamma_\tau|^\frac1{2n}
\end{split}
\end{equation*} here and below $D_i$ denotes a positive constant independent of $\tau$, $\mathbf{s}=\sqrt{\sum_{d}(s^d)^2}$ and $|\Gamma_\tau|$ is the measure of the support of $\partial \zeta_\tau$. Hence $$
|\Gamma_\tau|^\frac1{2n}\ge D_3^{-1} $$ for all $m>\tau>0$. Since the support of $\partial \zeta_\tau$ is a subset of the support of $\zeta_\tau$, and $\bigcap_{\tau}\text{supp}(\zeta_\tau)=\{\zeta=m\}$, the set
$$\bigcap_\tau (\Gamma_\tau\cap \{\zeta=m\})$$ has positive measure. But for almost all $x\in \{\zeta=m\}$, $\partial\zeta(x)=0$. Hence for almost all $x$ in $ \Gamma_\tau\cap \{\zeta=m\}$, $\partial \zeta_\tau(x)=0$. So $|\Gamma_\tau\cap \{\zeta=m\}|=0$. This is impossible.
To summarize, we have proved that $h_{ab}\in W^{2,p}(B(\delta))$ for all $p>n$ and that $h_{ab}$ is smooth in $u$ for all $a, b$.
We can cover $\Sigma$ by such harmonic coordinate neighborhoods $V_q$ so that the components of $g$ are smooth with respect to these coordinates. By \cite[Theorem 2.1]{Taylor2006} one can conclude that the theorem is true.
\end{proof}
\begin{cor}\label{c-Einstein-1} Suppose $(M^n,g_0)$ is as in Theorem \ref{t-Yamabe-1}. If, in addition, $g_0$ is Lipschitz, then there is a smooth structure on $M$ so that $g_0$ is smooth and Einstein.
\end{cor} \section{a positive mass theorem with singular set}\label{s-pmt}
In this section, we will use the results in sections \ref{s-gradient} and \ref{s-approx-1} to study positive mass theorems on asymptotically flat manifolds with singular metrics. We want to discuss the theorem without assuming that the manifold is spin. There are different definitions for asymptotically flat manifold. For our purpose, we use the following:
\begin{definition}\label{defaf} An $n$ dimensional Riemannian manifold $(M^n,g)$, where $g$ is continuous, is said to be asymptotically flat (AF) if there is a compact subset $K$ such that $g$ is smooth on $M\setminus K$, and $M\setminus K$ has finitely many components $E_k$, $1\le k\le l$, each $E_k$ is called an end of $M$, such that each $E_k$ is diffeomorphic to $\mathbb R^n\setminus B(R_k)$ for some Euclidean ball $B(R_k)$, and the following are true: \begin{enumerate}
\item [(i)] In the standard coordinates $x^i$ of $\mathbb R^n$,
\begin{equation*} g_{ij}=\delta_{ij}+\sigma_{ij} \end{equation*} with \begin{equation*} \label{daf2}
\sup_{E_k} \left\{\sum_{s=0}^2|x|^{\tau+s}|\partial^s\sigma_{ij}| +[|x|^{\alpha+2+\tau}\partial^2 \sigma_{ij}]_\alpha \right\}<\infty, \end{equation*} for some $0<\alpha\le 1$, $\tau>\frac{n-2}2$, where $\partial f$ and $\partial^2f$ are the gradient and Hessian of $f$ with respect to the Euclidean metric, and $[f]_\alpha$ is the $\alpha$-H\"older norm of $f$.
\item [(ii)] The scalar curvature $\mathcal{S}$ satisfies the decay condition:
$$
|\mathcal{S}|(x)\le C(1+d(x))^{-q}
$$
for some $n+2\ge q>n$. Here $d(x)$ is the distance function from a fixed point in $M$.
\end{enumerate} The coordinate chart satisfying (i) is said to be {\it admissible}. \end{definition}
In the following, for a function $f$ defined near infinity or $\mathbb R^n$, and for $k\geq 0$, $f=O_k(r^{-\tau})$ refers to $\sum^k_{i=0} r^i |\partial^i f|=O(r^{-\tau})$ as $r\to\infty$, where $r=|x|$.
\begin{definition} The Arnowitt-Deser-Misner (ADM) mass (see \cite{ADM}) of an end $E$ of an AF manifold $M$ is defined as: \begin{equation} \label{defadm1} \mathfrak{m}_{ADM}(E)=\lim_{r\to\infty}\frac{1}{4(n-1)\omega_{n-1}}\int_{S_r} \left(g_{ij,i}-g_{ii,j}\right)\nu^jd\Sigma_r^0, \end{equation} in an admissible coordinate chart, where $S_r$ is the Euclidean sphere, $\omega_{n-1}$ is the volume of the $(n-1)$-dimensional unit sphere, $d\Sigma_r^0$ is the volume element induced by the Euclidean metric, $\nu$ is the outward unit normal of $S_r$ in $\mathbb R^n$, and the derivatives are ordinary partial derivatives. \end{definition} By the result of Bartnik \cite{BTK86}, $\mathfrak{m}_{ADM}(E)$ is well-defined, i.e., it is independent of the choice of admissible charts.
For smooth metrics, without assuming the manifold is spin, we have the following positive mass theorem by Schoen and Yau \cite{SY1979,SY1981,Schoen1989}: \begin{thm}\label{t-SY} Let $(M^n,g)$, $3\le n\le 7$, be an AF manifold with nonnegative scalar curvature $\mathcal{S}\ge0$. Then the ADM mass of each end is nonnegative. Moreover, if the ADM mass of one of the ends is zero, then $(M^n,g)$ is isometric to $\mathbb R^n$ with the standard metric.
\end{thm}
We want to prove the following positive mass theorem for metrics which are smooth outside a compact set of codimension at least 2. More precisely, we want to prove the following: \begin{thm}\label{t-pmtsing} Let $(M^n,g_0)$ be an AF manifold with $3\leq n\leq 7$, $g_0$ being a continuous metric on $M$ such that \begin{enumerate}
\item [(i)] $g_0$ is smooth outside a compact set $\Sigma$ with codimension at least 2 as in {\bf (a4)} in section \ref{s-approx-1}.
\item [(ii)] The scalar curvature $\mathcal{S}$ of $g_0$ is nonnegative outside $\Sigma$.
\item [(iii)] $g_0\in W^{1,p}_{\text{loc}}$ for some $p>n$ as in {\bf (a2)} in section \ref{s-approx-1}.
\item[(iv)] On each end $E$, in an admissible coordinate chart,
$$
g_{ij}=\delta_{ij}+\sigma_{ij}
$$
with $\sigma_{ij}=O_5 (r^{-\tau})$, where $\tau>\frac{n-2}2$. \end{enumerate} Then the ADM mass of each end is nonnegative. Moreover, if the mass of one of the ends is zero, then $M$ is diffeomorphic to $\mathbb R^n$, and $g_0$ is flat outside $\Sigma$.
\end{thm} \begin{remark} \begin{enumerate}
\item [(a)] The assumption of continuity of the metric cannot be removed. See the construction in Proposition \ref{p-cone-pm-2}.
\item [(b)] The case that the singular set is an embedded hypersurface has been studied in \cite{Miao2002,ShiTam2002}, see also \cite{McFeronSzekelyhidi2012}.
\item [(c)] In case the singular set has codimension larger than 1, for spin manifolds, positive mass theorems have been obtained under rather general assumptions in \cite{LeeFeloch2015}. Without the spin condition, there are also results for metrics with bounded $C^2$ norm and with singular set to have codimension at least $n/2$ \cite{Lee2013}. \end{enumerate}
\end{remark}
We proceed as in \cite{McFeronSzekelyhidi2012}. As in section \ref{s-approx-1}, let $\epsilon>0$, $\epsilon\to0$, we can construct a family of metrics $g_{\epsilon,0}$ such that \begin{enumerate}
\item [(i)] $g_{\epsilon,0}\to g_0$ uniformly.
\item [(ii)] $g_{\epsilon,0}=g_0$ outside $\Sigma(2\epsilon)$.
\item [(iii)] The $W^{1,p}$ norm of $g_{\epsilon,0}$ in a fixed precompact open set containing $\Sigma$ is bounded by a constant independent of $\epsilon$. \end{enumerate}
As in section \ref{s-approx-1}, we can choose $\epsilon_0>0$ small enough and let $h=g_{\epsilon_0,0}$. Then there is a $T>0$ independent of $\epsilon$ such that if $0<\epsilon \le \epsilon_0$, then there is a smooth solution $g_\epsilon(t)$ on $M\times[0,T]$ to the $h$-flow with initial data $g_{\epsilon,0}$. There is also a smooth solution $g(t)$ on $M\times(0,T]$ to the $h$-flow such that $g(t)\to g_0$ uniformly on compact sets as $t\to0$. Moreover, Lemma \ref{l-approx-2} is still true with $M$ being noncompact in this case because $M$ is AF.
Let $\widetilde g_\epsilon(t)$ be the corresponding solution to the Ricci flow with $\widetilde g_\epsilon(t)=\Phi_t^*(g_\epsilon(t))$ as in the compact case in section \ref{s-approx-1}. Then we have the following:
\begin{lma}\label{l-approx-1} \begin{enumerate}
\item [(i)] $g_\epsilon(t)$, $\widetilde g_\epsilon(t)$, $g(t)$ are AF in the sense of Definition \ref{defaf}.
\item [(ii)] For each end $E$ of $M$, $\mathfrak{m}(E)(\epsilon,t)=\mathfrak{m}(E)(\epsilon,0) =\mathfrak{m}(E)$, where $\mathfrak{m}(E)(\epsilon,t)$ is the mass with respect to $g_\epsilon(t)$ or $ \widetilde g_\epsilon(t)$, and $\mathfrak{m}(E)(\epsilon,0)$, $\mathfrak{m}(E)$ are the masses with respect to $g_{\epsilon,0}$ and $g_0$ respectively. \end{enumerate}
\end{lma} \begin{proof} (i) First note that $C_1^{-1}h\le g_\epsilon(t)\le C_1h$ for some $C_1>0$ independent of $\epsilon, t$. On the other hand, by Lemma \ref{l-approx-2} applied to the noncompact case, we conclude that the curvature of $\widetilde g_\epsilon(t)$ is bounded by $Ct^{-\frac12(1+\delta)}$ for some $0<\delta<1$, where $C, \delta$ are independent of $\epsilon, t$. Hence we also have $C_1^{-1}g_{\epsilon,0}\le g_\epsilon(t)\le C_1g_{\epsilon,0}$ and $C_1^{-1}h\le \widetilde g_\epsilon(t)\le C_1h$, with a possibly larger $C_1$.
Using the fact that $\sigma_{ij}=O_5(r^{-\tau})$, we can proceed with some modifications as in \cite{DaiMa2007,McFeronSzekelyhidi2012} to show that outside a fixed compact set, for $0\le l\le 3$,
$$
|^h\nabla^lg_\epsilon(x,t)|\le C_2d^{-l-\tau}(x) $$ for some constant $C_2$ independent of $\epsilon,t,x$, where $d(x)$ is the distance function from a fixed point with respect to $h$. The proof is similar to the proof of the decay rate of the scalar curvature, so we only carry out the proof for that case in more detail.
We want to prove the following: There is a constant $C_3>0$ independent of $\epsilon, t$ and a compact set $K$ such that if $\widetilde {\mathcal{S}}_\epsilon(t)$ is the scalar curvature of $\widetilde g_\epsilon(t)$, then \begin{equation}\label{e-scalar-1}
\sup_{M\setminus K}d^q(x)|\widetilde {\mathcal{S}}_\epsilon(x,t)|\le C_3. \end{equation}
We will prove this on each end. Fix $\epsilon$. Denote the scalar curvature of $g_\epsilon(t)$ simply by $\mathcal{S}$ and the curvature by $\text{Rm}$ etc. Let $E$ be an end which is diffeomorphic to $\mathbb R^n\setminus B(R)$, say. By the result of \cite{Simon2002}, choosing $R$ large enough so that $g_{\epsilon,0}=h=g_0$ outside $B(\frac R2)$ and $g_0$ is smooth there, we may assume that $|\text{Rm}(g_\epsilon(t))|\le C_4$ for some constant $C_4$ independent of $\epsilon, t$ outside $B(\frac R2)$. Here we have used the fact that $g_\epsilon(t), \widetilde g_\epsilon(t)$ are uniformly equivalent.
Let $g_e$ be the standard Euclidean metric and let $0\le\phi\le 1$ be a fixed smooth function on $\mathbb R^n$ so that $\phi=1$ in $B(R)$ and $\phi=0$ outside $B(2R)$. Consider the metric $\phi g_e+(1-\phi)g_\epsilon(t)$. Still denote its curvature by $\text{Rm}$ etc.
Let $\rho$ be a fixed function $\rho\ge 1$, $\rho=1$ in $B(R)$, $\rho(x)=|x|$ outside $B(2R)$. Hence the gradient and the Hessian of $\rho$ with respect to $g_\epsilon(t)$ are bounded by a constant independent of $\epsilon, t$.
By the evolution equation of the scalar curvature and the uniform curvature bound above, $$ \frac{\partial}{\partial t}\mathcal{S}^2\le \Delta \mathcal{S}^2+C_5 $$ in $B(2R)$, and
$$
\frac{\partial}{\partial t}\mathcal{S}^2=\Delta \mathcal{S}^2+2\mathcal{S}|\mbox{Ric}|^2-2|\nabla \mathcal{S}|^2, $$ outside $B(2R)$.
Let $F=\rho^{2q}\mathcal{S}^2$; then outside $B(2R)$, \begin{equation*} \begin{split}
\left(\frac{\partial}{\partial t}-\Delta\right)F= &\rho^{2q}\left(2\mathcal{S}|\mbox{Ric}|^2-2|\nabla \mathcal{S}|^2\right)-2\langle\nabla \rho^{2q},\nabla \mathcal{S}^2\rangle\\ \le &C_6\rho^{q-4-2\tau}\rho^q\mathcal{S}-4q\rho^{-1}\langle\nabla\rho,\nabla F\rangle+C_6F\\ \le &C_7-4q\rho^{-1}\langle\nabla\rho,\nabla F\rangle+C_7F \end{split}
\end{equation*} for some constants $C_6, C_7$ independent of $\epsilon, t$. The inequality is still true in $B(2R)$ because in $B(R)$, $\nabla \rho=0$ and in $B(2R)\setminus B(R)$, $|\nabla\rho|$ and $|\nabla\mathcal{S}|$ are uniformly bounded. Hence if $\widetilde F=e^{-C_7t}F-C_7t$, then \begin{equation}\label{e-scalar-2} \left(\frac{\partial}{\partial t}-\Delta\right)\widetilde F\le -4q\rho^{-1}\langle\nabla\rho,\nabla \widetilde F\rangle. \end{equation} Let $A>0$ be a constant to be chosen later, and let $\eta=\exp(2At+\rho)$. Then \begin{equation*} \left(\frac{\partial}{\partial t}-\Delta\right)\eta\ge 2A\eta-C\eta \end{equation*} for some constant $C$ independent of $\epsilon, t$. Choosing $A=C$, we have \begin{equation*} \left(\frac{\partial}{\partial t}-\Delta\right)\eta\ge A\eta. \end{equation*}
Let $\kappa>0$ be any positive number, then
$$
\left(\frac{\partial}{\partial t}-\Delta\right)(\widetilde F-\kappa \eta)\le -4q\rho^{-1}\langle\nabla\rho,\nabla \widetilde F\rangle-\kappa A\eta.
$$ Since $\widetilde F$ has at most polynomial growth, if $\widetilde F-\kappa\eta$ has a positive maximum, then it is attained at some point $(x_0,t_0)$. Suppose $t_0>0$; then at $(x_0,t_0)$, $$ \nabla \widetilde F=\kappa \nabla \eta. $$ Hence at $(x_0,t_0)$ \begin{equation*} \begin{split} 0\le &\left(\frac{\partial}{\partial t}-\Delta\right)(\widetilde F-\kappa \eta)\\ \le& -4q\rho^{-1}\langle\nabla\rho,\nabla \widetilde F\rangle-\kappa A\eta\\ =& -4q\rho^{-1}\kappa\langle\nabla\rho,\nabla \eta \rangle-\kappa A\eta\\ \le& -\kappa A\eta \end{split} \end{equation*} which is impossible. Hence either $\widetilde F-\kappa \eta\le 0$, or $$ \widetilde F-\kappa\eta\le \sup_{\mathbb R^n}\left(\rho^{2q}(x) \mathcal{S}^2(0)\right) $$
where $\mathcal{S}(0)$ is the scalar curvature of $\phi g_e+(1-\phi)g_0$. Letting $\kappa\to0$, we conclude that \eqref{e-scalar-1} holds.
(ii) Since $g_{\epsilon,0}=g_0$ outside a compact set, $\mathfrak{m}(E)=\mathfrak{m}(E)(\epsilon,0)$. On the other hand, since $\widetilde g_\epsilon(t)$ and $g_\epsilon(t)$ differ by a diffeomorphism, by (i) and \cite{BTK86} the mass of $E$ is the same whether it is computed with respect to $\widetilde g_\epsilon(t)$ or $g_\epsilon(t)$.
The fact that $\mathfrak{m}(E)(\epsilon,t)=\mathfrak{m}(E)(\epsilon,0)$ follows from \cite{DaiMa2007}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{t-pmtsing}] By Lemmas \ref{l-approx-1} and \ref{l-noncompact-1}, we conclude that $g(t)$ is AF with nonnegative scalar curvature for $t>0$. Let $E$ be an end. Using the notation of Lemma \ref{l-approx-1}, by that lemma and \cite[Theorem 14]{McFeronSzekelyhidi2012}, the mass $\mathfrak{m}(E)(t)$ of $E$ with respect to $g(t)$ satisfies \begin{equation*} \begin{split} \mathfrak{m}(E)=&\liminf_{\epsilon\to 0}\mathfrak{m}(E)(\epsilon,0)\\ =&\liminf_{\epsilon\to 0}\mathfrak{m}(E)(\epsilon,t)\\ \ge& \mathfrak{m}(E)(t). \end{split} \end{equation*} By Theorem \ref{t-SY}, $\mathfrak{m}(E)(t)\ge0$, and hence $\mathfrak{m}(E)\ge0$. If $\mathfrak{m}(E)=0$, then $\mathfrak{m}(E)(t)=0$ and $(M^n,g(t))$ is isometric to the Euclidean space. Since $g(t)$ converges to $g_0$ in $C^\infty$ as $t\to 0$ away from $\Sigma$, $g_0$ is flat outside $\Sigma$. \end{proof}
\end{document}
\begin{document}
\bstctlcite{IEEEexample:BSTcontrol}
\title{Satellite-based Distribution of Hybrid Entanglement} \author{
\IEEEauthorblockN{Hung Do\IEEEauthorrefmark{1}, Robert Malaney\IEEEauthorrefmark{1} and Jonathan Green\IEEEauthorrefmark{2}}\\
\IEEEauthorblockA{\IEEEauthorrefmark{1}School of Electrical Engineering and Telecommunications,\\ The University of New South Wales, Sydney, NSW 2052, Australia.}\\
\IEEEauthorblockA{\IEEEauthorrefmark{2}Northrop Grumman Corporation, San Diego, California, USA.} }
\maketitle \thispagestyle{distribution}
\begin{abstract}
Heterogeneous quantum networks consisting of mixed technologies, Continuous Variable (CV) and Discrete Variable (DV), will become ubiquitous as global quantum communication matures. Hybrid quantum entanglement between CV and DV modes will be a critical resource in such networks. A leading candidate for such hybrid quantum entanglement is that between Schr\"odinger-cat states and photon-number states. In this work, we explore, for the first time, the use of Two-Mode Squeezed Vacuum (TMSV) states, distributed from satellites, as a teleportation resource for the re-distribution of our candidate hybrid entanglement pre-stored within terrestrial quantum networks. We determine the loss conditions under which teleportation via the TMSV resource outperforms direct-satellite distribution of the hybrid entanglement, in addition to quantifying the advantage of teleporting the DV mode relative to the CV mode.
Our detailed calculations show that under the loss conditions anticipated from Low-Earth-Orbit, DV teleportation via the TMSV resource will always provide for significantly improved outcomes, relative to other means for distributing hybrid entanglement within heterogeneous quantum networks. \end{abstract}
\section{Introduction}
The Schr\"odinger-cat state is of fundamental importance because it represents the superposition of two macroscopic quantum states. In the optical domain, the cat state can take the form of a superposition of two coherent states of opposite phase \cite{dodonov1974even}. Such a cat state finds applications in many areas including universal quantum computing via Continuous Variables (CV), where the quantum information is encoded in the quadratures of the optical field \cite{ralph2003quantum,lund2008fault,brask2010ahybrid,sangouard2010quantumrepeaters}. CV protocols can in many instances be more efficient relative to Discrete Variable (DV) versions of the same protocols, where the quantum information is encoded in the DV properties of single photons such as polarization or photon-number \cite{park2010entangled,morin2013remote,takeda2015entanglementswapping}.
\begin{figure}
\caption{Direct distribution.}
\label{direct_setup}
\caption{Teleportation.}
\label{teleported_setup}
\caption{(a) The hybrid entangled state $A- B$ is beamed directly from the satellite down to Earth. (b) In the hybrid scheme, the attenuated CV entanglement of the modes $A'-B'$ is used as a teleportation channel. The hybrid entangled state $C-D$ is comprised of the CV mode $C$ and the DV mode $D$. Mode $D$ (or $C$) is teleported through the CV teleportation channel, with teleportation gain $g$, to create the final mode $B''$ entangled to $C$ (or $D$). ${\bar x_u}$ and ${\bar p_v}$ stand for the Bell state measurement outputs. In both plots, $\epsilon_A$ and $\epsilon_B$ represent the vacuum contributions, while $T_A$ and $T_B$ represent the channel transmissivities. }
\label{g_opt_E_max}
\end{figure}
Hybrid entanglement between quantum states can be considered as the entanglement between two physically separated modes, one of which is encoded in DV information while the other is encoded in CV information. An example of such a hybrid state would be entanglement between the CV Schr\"odinger-cat states and the DV photon-number states. Such a hybrid entangled state has been demonstrated to have applications in quantum control, specifically for the remote preparation of a CV qubit using a local DV mode \cite{morin2013remote,huang2013performance,lejeannic2018remote,sychez2018entanglement}. In quantum communications, hybrid entangled states can lead to the violation of the steering inequality \cite{cavailles2018demonstration}, thus generating a positive key rate for the one-sided device-independent Quantum Key Distribution (QKD) protocol \cite{branciard2012onesided}. Impressively, the hybrid entangled state can be used to distribute secret keys over 1000km of optical fibre via the coherent-state-based twin-field QKD protocol \cite{yin2019coherent, yin2019finite, zhang2019improving}.
Hybrid entanglement may become particularly important for long-distance communication via satellites, especially as a mechanism to interconnect (via teleportation and entanglement swapping) terrestrial devices operating within heterogeneous (mixed DV and CV) networks. Connecting such mixed-technology devices through traditional fiber links has its limitations since the loss in optical fiber scales exponentially with distance.
In contrast, for a satellite in Low-Earth-Orbit (LEO), about 500km above the ground, the Micius experiment has shown that the loss in the down-link (satellite-to-ground) channel is dominated by atmospheric turbulence and diffraction \cite{yin2017satellite}. In future systems, this level of loss will likely be further mitigated by large receiving telescopes and/or adaptive optical tracking techniques \cite{zhao2012aberration,li2014evaluation}. It is, therefore, of great interest to explore the use of entanglement distribution via satellites as a means to interconnect terrestrial devices running on mixed technologies.
Previous work has studied the scenario where both the state to be teleported and the teleportation channel are hybrid entangled states \cite{sychez2018entanglement,lim2016loss,ulanov2017teleportation,guccione2020connecting}. However, the generation of a hybrid-entangled teleportation resource channel is experimentally challenging - especially when the source is placed on board a satellite as in Fig.~\ref{direct_setup}. On the other hand, the generation of CV entanglement in the form of a Two-Mode Squeezed Vacuum (TMSV) state is relatively easy to achieve and potentially available as a satellite-based technology \cite{yin2017satellite}. In this work, we study, for the first time, the use of a TMSV channel (as shown in Fig.~\ref{teleported_setup}) to teleport a hybrid entangled state.
The use of the attenuated TMSV channel for teleportation (henceforth simply referred to as the teleportation) was previously studied using the Wigner function formalism \cite{braustein1998teleportationCV}, the P-function formalism \cite{takeoka2003continuous}, and the Fock basis formalism \cite{lie2019limitations}. However, such mathematical models were limited to the teleportation of common single-mode states, such as the vacuum state, the single-photon state \cite{lie2019limitations}, the coherent state \cite{takeoka2003continuous}, or the Schr\"odinger's cat state \cite{braustein1998teleportationCV}. The Wigner function formalism was then extended to describe the teleportation of one mode in a \emph{pure} DV entangled state, which takes into consideration the cross-diagonal matrices of zero and single photon numbers, $\ketbra{0}{1}{}$ and $\ketbra{1}{0}{}$ \cite{takeda2013gaintuning,do2019hybrid}. Different from \cite{braustein1998teleportationCV,takeoka2003continuous, lie2019limitations,takeda2013gaintuning, do2019hybrid}, in this work, we describe the teleportation of either the DV or CV mode of \emph{hybrid} entanglement. Since the DV mode is only comprised of \emph{zero} and \emph{single} photon numbers, the DV-mode teleportation can be derived by extending the Wigner-function formalism in \cite{takeda2013gaintuning,do2019hybrid}. The CV mode, however, is comprised of cross-diagonal matrices of \emph{multi}-photon numbers, where the teleportation is too complex to be described by the previous formalisms. As a solution, we use the characteristic function formalism to model the CV-mode teleportation, identifying a new simplification in this formalism that allows for analytic determination of the teleportation fidelity of a hybrid state.
Hybrid entanglement distribution will be important for the design of future heterogeneous quantum networks. Our two main novel contributions to this end, relative to the existing literature, are: \begin{itemize} \item We determine the teleportation fidelity of a hybrid state when either the DV or CV mode is passed through a lossy TMSV channel.
\item We calculate the fidelity of a hybrid entangled state following a direct distribution of each mode through a differential lossy channel; using this calculation to then quantify the difference in performance between the direct and teleported distribution schemes as a function of loss in the satellite-to-ground channel.
\end{itemize}
The structure of the remainder of this paper is as follows. Section~\ref{hybrid_entanglement_math} introduces hybrid entanglement. Section~\ref{direct_distribution} studies the direct distribution of hybrid entanglement, with transmission loss in both the DV and CV mode. Section~\ref{teleportation} studies the teleportation of either mode of the hybrid entanglement via a lossy TMSV channel. Section~\ref{result} compares the results between CV/DV-mode teleportation and direct distribution, highlighting the different advantages of both schemes. Section~\ref{conclusion} summarizes our findings.
\section{Hybrid entanglement}
\label{hybrid_entanglement_math} Consider the hybrid entanglement between cat states and qubit states \begin{align} &\ket{\psi}_{h} = \frac{\ket{cat_-}_C\ket{0}_D + \ket{cat_+}_C\ket{1}_D}{\sqrt{2}}, \label{hybrid_ket} \end{align} where for clarity, we have specified the spatial modes as $C$ and $D$ following the setup in Fig.~\ref{teleported_setup}. $\ket{0}$ and $\ket{1}$ are the vacuum state and the single photon state in the photon-number basis, while the Schr\"{o}dinger cat states are given by \begin{equation} \ket{cat_\pm} = \frac{\ket{\alpha_0} \pm \ket{-\alpha_0}}{N_{\pm}}, \label{cat} \end{equation} with $\ket{\pm \alpha_0}$ denoting the coherent states of
amplitude $|\alpha_0|$, and the normalization constants being
\begin{equation} N_{\pm} = \sqrt{2\left(1 \pm e^{-2 |\alpha_0|^2}\right)}. \label{N_cat} \end{equation} The corresponding density operator is given by \begin{align} &\rho_{h} = \frac{1}{2}\left(\ketbra{cat_-}{cat_-}{C}\otimes\ketbra{0}{0}{D} + \ketbra{cat_+}{cat_+}{C}\otimes\ketbra{1}{1}{D} \right.\nonumber\\ &+ \left.\ketbra{cat_-}{cat_+}{C}\otimes\ketbra{0}{1}{D} + \ketbra{cat_+}{cat_-}{C}\otimes\ketbra{1}{0}{D}\right). \label{hybrid_CD} \end{align}
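As a quick numerical sanity check of the normalization constants in Eq.~(\ref{N_cat}), the following sketch (ours, not part of the original analysis) verifies that $\langle cat_\pm|cat_\pm\rangle = 1$, using only the standard coherent-state overlap $\langle\alpha|\beta\rangle = \exp(-\tfrac12(|\alpha|^2+|\beta|^2)+\alpha^*\beta)$:

```python
import numpy as np

def coherent_overlap(a, b):
    # <a|b> for coherent states |a>, |b>, from the standard overlap formula
    return np.exp(-0.5*(abs(a)**2 + abs(b)**2) + np.conjugate(a)*b)

alpha0 = 1.3
norms = {}
for sign in (+1, -1):
    N2 = 2*(1 + sign*np.exp(-2*abs(alpha0)**2))   # N_pm^2 of Eq. (N_cat)
    ip = (coherent_overlap(alpha0, alpha0) + coherent_overlap(-alpha0, -alpha0)
          + sign*(coherent_overlap(alpha0, -alpha0)
                  + coherent_overlap(-alpha0, alpha0)))
    norms[sign] = (ip / N2).real
print(norms)  # both entries ~ 1.0
```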
Due to the mathematical complexity of the hybrid entangled state in Eq.~(\ref{hybrid_ket}), especially when describing the teleportation of the CV mode in section~\ref{section_teleport_cv_mode}, we will find it useful to introduce approximations to such a hybrid entangled state in the cases of large and small cat states. The mathematical approximations we utilize will have high fidelity with Eq.~(\ref{hybrid_ket}) and will allow for improved analytical insight. Deviation from results produced using Eq.~(\ref{hybrid_ket}) directly will be inconsequential \cite{morin2014remote,huang2019engineering,nielson2007transforming}. Our main interest will be large cat states, as they tend to be of wider interest in a range of quantum information protocols, such as information processing \cite{gilchrist2004}. However, we will still investigate the small cat-state limit, mainly because the results in this regime provide firm upper bounds on the teleportation fidelities as the cat states approach the so-called `kitten' states.
\paragraph{Large cat states ($|\alpha_0|>1$)}
In order to give an approximation for large cat states, we first need to rearrange the terms in Eq.~(\ref{hybrid_ket}) by substituting \begin{equation} \ket{0} = \left(\ket{+}+\ket{-}\right)/\sqrt{2}, \; \ket{1} = \left(\ket{+}-\ket{-}\right)/\sqrt{2}, \label{pm_01} \end{equation} giving \begin{align} \ket{\psi}_{h} = \frac{1}{\sqrt{2}}& \;\left( \; \frac{\ket{cat_-}_C + \ket{cat_+}_C}{\sqrt{2}} \ket{+}_D \right.\nonumber\\ & + \left. \frac{\ket{cat_-}_C - \ket{cat_+}_C}{\sqrt{2}}\ket{-}_D\right). \end{align}
When $|\alpha_0|>1$, the states in mode $C$ can be approximated by coherent states, and our hybrid entangled state becomes \cite{morin2014remote}\cite{huang2019engineering} \begin{equation} \ket{\psi}_{h}^{(large)} \approx \frac{\ket{\alpha_0}_C\ket{+}_D - \ket{-\alpha_0}_C\ket{-}_D}{\sqrt{2}} \ . \label{hybrid_large_alpha} \end{equation} The corresponding density matrix is \begin{align} &\rho_h^{(large)} \approx \nonumber\\ &\quad\frac{1}{2}(\ketbra{\;\alpha_0}{\;\alpha_0}{C} \otimes\ketbra{+}{+}{D}
+\ketbra{-\alpha_0}{-\alpha_0}{C} \otimes \ketbra{-}{-}{D} \nonumber\\ &\quad - \ketbra{\;\alpha_0}{-\alpha_0}{C} \otimes \ketbra{+}{-}{D}
-\ketbra{-\alpha_0}{\;\alpha_0}{C} \otimes \ketbra{-}{+}{D}). \label{hybrid_large_alpha_rho} \end{align}
Eqs.~(\ref{hybrid_large_alpha}) and (\ref{hybrid_large_alpha_rho}) will be useful for the calculation of the fidelity of CV-mode teleportation later in section~\ref{section_teleport_cv_mode}.
\paragraph{Small cat states ($|\alpha_0|<0.5$)}
We can approximate small cat states with the Single-Mode Squeezed Vacuum (SMSV) state and the Single-Photon Subtracted Squeezed Vacuum (1PS) state \cite{huang2019engineering}\cite{nielson2007transforming}, \begin{align} \ket{cat_+} &\approx \ket{SMSV} = S(\zeta)\ket{0}, \\ \ket{cat_-} &\approx \ket{1PS} = \frac{\hat{a}S(\zeta)\ket{0}}{\sinh \zeta} = S(\zeta)\ket{1}, \label{cat_approximation} \end{align} where $S(\zeta)$ is the single-mode squeezing operator, and $\zeta = s e^{i\theta}$ with $s$ being the single-mode squeezing parameter. For an arbitrary value of $\alpha_0$ in Eq.~(\ref{cat}), the fidelity between $\ket{cat_-}$ and $\ket{1PS}$ becomes maximized when $\alpha_0$ is real and \cite{nielson2007transforming} \begin{equation} s = \frac{\sqrt{9+4 \alpha_0^4} - 3}{2 \alpha_0^2}. \label{s} \end{equation} Under these conditions, the fidelity between the SMSV (or 1PS) state and $\ket{cat_+}$ (or $\ket{cat_-}$) increases as $\alpha_0$ decreases, approaching unity as $\alpha_0$ approaches $0$ \cite{nielson2007transforming}.
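For intuition, Eq.~(\ref{s}) behaves like $s \approx \alpha_0^2/3$ for small $\alpha_0$; this limit follows from a Taylor expansion of the square root and is our own observation, not a statement from the references. A minimal numerical sketch:

```python
import numpy as np

def squeeze_param(alpha0):
    # Eq. (s): squeezing parameter maximizing the cat/1PS fidelity
    return (np.sqrt(9 + 4*alpha0**4) - 3) / (2*alpha0**2)

alpha0 = 0.3
s = squeeze_param(alpha0)
# small-alpha0 expansion: sqrt(9+4a^4) ~ 3 + (2/3)a^4, hence s ~ a^2/3
print(s, alpha0**2/3)  # nearly equal in the small-cat regime
```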
For simplicity, we will henceforth set $\theta=0$ and write the single-mode squeezing operator as $S(s)$. The original hybrid entangled state in Eq.~(\ref{hybrid_ket}) can therefore be approximated as \cite{huang2019engineering} \begin{equation} \ket{\psi}_h^{(small)} \approx \frac{ S_C(s)\ket{1}_C \otimes \ket{0}_{D} + S_C(s)\ket{0}_C\otimes \ket{1}_{D}}{\sqrt{2}}. \label{approx_hybrid} \end{equation} The corresponding density matrix is given by \begin{align} \rho_h^{(small)} &\approx \frac{1}{2}\left[S(s)\ketbra{1}{1}{C} S(s)^\dagger \otimes \ketbra{0}{0}{D} \right.\nonumber\\ &\quad + S(s)\ketbra{0}{0}{C} S(s)^\dagger \otimes \ketbra{1}{1}{D} \nonumber\\ &\quad + S(s)\ketbra{1}{0}{C} S(s)^\dagger \otimes \ketbra{0}{1}{D} \nonumber\\ &\quad + \left. S(s)\ketbra{0}{1}{C} S(s)^\dagger\otimes\ketbra{1}{0}{D}\right]. \label{approx_hybrid_rho} \end{align} From now on, we will assume that $\alpha_0$ is real in all our calculations. For simplicity, we will also replace the approximation signs following $\rho_h^{(large)}$ and $\rho_h^{(small)}$ with equalities. Henceforth, whenever we mention large (small) cat states, or large (small) $\alpha_0$, we are referring to $\alpha_0>1$ ($\alpha_0<0.5$).
\paragraph{Intermediate cat states ($0.5 \le \alpha_0 \le 1$)} In this case, we use the exact form of the hybrid entangled state in Eq.~(\ref{hybrid_ket}) to carry out our calculations. These lead to relations that are perhaps less illuminating than the approximate solutions above, and as such we relegate this exposition to appendix~\ref{appendix_cat_exact}. We shall see that the results from this exact solution match the approximate solutions in the relevant range.
Hybrid entanglement with large cat states can be deterministically generated by a weak and dispersive light-matter interaction \cite{vanloock2006hybrid}, or by a cross-Kerr nonlinear interaction between a coherent state and a photon-number qubit state \cite{gerry1999generation}\cite{jeong2005using}. However, due to experimental challenges in nonlinear optics, hybrid entanglement with small cat states can be more easily produced by using linear optics and a probabilistic heralded scheme
\cite{morin2013remote}\cite{sychez2018entanglement}\cite{huang2019engineering}\cite{knill2001ascheme}. The latter setup takes a small cat state ($\ket{cat_+}$) as an input, and produces hybrid entanglement between small cat states and qubit states. The advantage of the heralded scheme is that the loss in the heralding channel only affects the success rate but not the fidelity of the final entanglement. Such a scheme is particularly beneficial for long-distance communication \cite{huang2019engineering}. In addition, the heralded setup in \cite{sychez2018entanglement} has the potential to be extended for cat states with higher amplitudes.
\section{Direct distribution of a hybrid entangled state through lossy channels} \label{direct_distribution}
The hybrid entangled state is given in Eq.~(\ref{hybrid_CD}), where the cat states have amplitude $\alpha_0$. We refer to the modes as given in Fig.~\ref{direct_setup}. That is, we denote the two modes of the entanglement as $A$ and $B$, where $A$ is the CV mode and $B$ is the DV mode. For each mode, the channel attenuation can be modeled by a beam splitter with corresponding transmissivities $T_A$ and $T_B$. The photonic loss in \emph{either} the DV or CV mode has been studied in previous papers \cite{yin2019coherent,zhang2019improving,parker2017hybrid, parker2020photonic}. However, when the loss is present in \emph{both} the DV and CV mode, the resulting density matrix becomes complex and has not been hitherto determined. In this section, we calculate this density matrix and then determine the resulting fidelity of the hybrid state after direct distribution from the satellite.
At the first beam splitter, the CV mode $A$ is mixed with the auxiliary vacuum mode $\epsilon_A$ to give the output modes $A'$ and $\epsilon_{A'}$.
For a general bi-coherent input state $\ket{\alpha}_A\ket{\alpha_\epsilon}_{\epsilon_A}$, the beam splitter transformation gives \begin{align}
&\ket{\alpha}_A\ket{\alpha_\epsilon}_{\epsilon_A} \rightarrow \nonumber\\ &\ket{\sqrt{T_A}\alpha - \sqrt{1-T_A}\alpha_\epsilon}_{A'} \ket{\sqrt{T_A}\alpha_\epsilon + \sqrt{1-T_A}\alpha}_{\epsilon_{A'}}. \label{BStransformation_CV} \end{align}
At the second beam splitter, the DV mode $B$ is mixed with the auxiliary vacuum mode $\epsilon_B$, giving the output modes $B'$ and $\epsilon_{B'}$. Let $\hat{a}_B^{\dagger}$, $\hat{a}_{\epsilon_B}^{\dagger}$, $\hat{a}_{B'}^{\dagger}$ and $\hat{a}_{\epsilon_{B'}}^{\dagger}$ denote the creation operators of the four modes; the beam splitter transformation can then be written as \begin{align} \hat{a}^{\dagger}_B&\rightarrow \sqrt{T_B}\,\hat{a}^{\dagger}_{B'} + \sqrt{1-T_B}\,\hat{a}^{\dagger}_{\epsilon_{B'}},\label{BStransformation_DV}\\ \hat{a}^{\dagger}_{\epsilon_B}&\rightarrow \sqrt{T_B}\,\hat{a}^{\dagger}_{\epsilon_{B'}} - \sqrt{1-T_B}\,\hat{a}^{\dagger}_{B'}. \label{BStransformation_DV_au} \end{align}
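The amplitude map in Eq.~(\ref{BStransformation_CV}) is an orthogonal (hence unitary) $2\times2$ transformation, so the mean photon number is conserved across the beam splitter. A short numerical check of this property (ours, with arbitrary illustrative amplitudes):

```python
import numpy as np

T_A = 0.6
# Amplitude map of Eq. (BStransformation_CV) acting on (alpha, alpha_eps)
V = np.array([[np.sqrt(T_A),  -np.sqrt(1 - T_A)],
              [np.sqrt(1 - T_A), np.sqrt(T_A)]])
print(np.round(V @ V.T, 12))  # identity: the beam splitter is unitary

alpha, alpha_eps = 0.8 + 0.2j, 0.1 - 0.3j   # arbitrary test amplitudes
out = V @ np.array([alpha, alpha_eps])
# total mean photon number |.|^2 is conserved
print(abs(out[0])**2 + abs(out[1])**2, abs(alpha)**2 + abs(alpha_eps)**2)
```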
After applying the beam splitter transformations, the density matrix of the hybrid entangled state (Eq.~(\ref{hybrid_CD})) becomes $\rho_{A^{'}\epsilon_{A'} B^{'} \epsilon_{B'}}$. The transmitted state can be found by tracing out the auxiliary output modes $\epsilon_{A'}$ and $\epsilon_{B'}$. Mode $\epsilon_{B'}$ can be easily traced out, since it is a DV mode. The CV mode $\epsilon_{A'}$ can be traced out by applying the integration \begin{equation} \int \frac{d^2 \beta}{\pi} \bra{\beta}\rho_{A^{'}\epsilon_{A'} B^{'} \epsilon_{B'}}\ket{\beta}_{\epsilon_{A'}}, \end{equation} where $\beta$ is a generic complex number. This integration can be calculated by applying the following well-known identities for a general coherent state $\ket{\alpha}_{\epsilon_{A'}}$
\begin{align} \int \frac{d^2 \beta}{\pi}\braket{\beta}{\pm \alpha}_{\epsilon_{A'}}\braket{\pm \alpha}{\beta}_{\epsilon_{A'}} &= 1,\nonumber\\
\int \frac{d^2 \beta}{\pi}\braket{\beta}{\pm \alpha}_{\epsilon_{A'}}\braket{\mp \alpha}{\beta}_{\epsilon_{A'}} &= \exp\left(-2 |\alpha|^2\right). \label{trace_cv} \end{align} After tracing out the auxiliary modes (where the details can be found in appendix~\ref{appendix_direct_distribution}), we find the directly-distributed state \begin{align} &\rho_{A'B'}=\textrm{Tr}_{\epsilon_{A'} \epsilon_{B'}}(\rho_{A^{'}\epsilon_{A'} B^{'} \epsilon_{B'}})= \frac{1}{2}\left\{\ketbra{\sqrt{T_A}\alpha_0}{\sqrt{T_A}\alpha_0}{A}\right.\nonumber\\ &\otimes\left[(a_1 + a_2)\ketbra{0}{0}{B}+ a_3 \ketbra{1}{1}{B} + a_4\ketbra{0}{1}{B} + a_4 \ketbra{1}{0}{B}\right]\nonumber\\ &+\ketbra{-\sqrt{T_A}\alpha_0}{-\sqrt{T_A}\alpha_0}{A}\nonumber\\ &\otimes\left[(a_1 + a_2)\ketbra{0}{0}{B}+ a_3 \ketbra{1}{1}{B} - a_4\ketbra{0}{1}{B} - a_4 \ketbra{1}{0}{B}\right]\nonumber\\ &+ e^{-2(1-T_A)\alpha_0^2}\ketbra{\sqrt{T_A}\alpha_0}{-\sqrt{T_A}\alpha_0}{A}\nonumber\\ &\otimes\left[(a_1 - a_2)\ketbra{0}{0}{B}+ a_3 \ketbra{1}{1}{B} + a_4\ketbra{0}{1}{B} - a_4 \ketbra{1}{0}{B}\right]\nonumber\\ &+ e^{-2(1-T_A)\alpha_0^2}\ketbra{-\sqrt{T_A}\alpha_0}{\sqrt{T_A}\alpha_0}{A}\nonumber\\ &\left.\otimes \left[(a_1 - a_2)\ketbra{0}{0}{B}+ a_3 \ketbra{1}{1}{B} - a_4\ketbra{0}{1}{B} + a_4 \ketbra{1}{0}{B}\right]\vphantom{\sqrt{T_A}}\right\}, \label{arbitrary_alpha} \end{align} where \begin{equation} a_1 = \frac{1-T_B}{N_+^2},\; a_2 = \frac{1}{N_-^2}, \; a_3= \frac{T_B}{N_+^2},\; a_4 = \frac{\sqrt{T_B}}{N_+ N_-}, \end{equation} with $N_\pm$ defined in Eq.~(\ref{N_cat}).
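As a sanity check on Eq.~(\ref{arbitrary_alpha}), one can verify numerically that $\textrm{Tr}(\rho_{A'B'})=1$: tracing mode $A$ turns the diagonal coherent projectors into $1$ and the off-diagonal terms into the overlap $\langle-\beta|\beta\rangle = e^{-2T_A\alpha_0^2}$ with $\beta=\sqrt{T_A}\alpha_0$. The following sketch (ours) carries this out for illustrative parameter values:

```python
import numpy as np

alpha0, TA, TB = 1.0, 0.6, 0.7          # illustrative parameters
q = np.exp(-2*alpha0**2)
Np2, Nm2 = 2*(1 + q), 2*(1 - q)         # N_+^2, N_-^2 of Eq. (N_cat)
a1, a2, a3 = (1 - TB)/Np2, 1/Nm2, TB/Np2

# B-mode trace of diagonal blocks: a1+a2+a3; of off-diagonal blocks: a1-a2+a3.
# A-mode trace of |b><-b| is <-b|b> = exp(-2 TA alpha0^2), combined with the
# explicit prefactor exp(-2 (1-TA) alpha0^2) in Eq. (arbitrary_alpha):
cross = np.exp(-2*(1 - TA)*alpha0**2) * np.exp(-2*TA*alpha0**2)
trace = 0.5*(2*(a1 + a2 + a3) + 2*cross*(a1 - a2 + a3))
print(trace)  # 1.0
```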
After calculating the directly-distributed state, we will compare it to the original hybrid entangled state, using fidelity as our key performance metric. However, since in some applications the logarithmic negativity of the entangled state is useful, that metric will also be discussed. In the density matrix formalism, the fidelity between the perfect hybrid entangled state ($\rho_h$ in Eq.~(\ref{hybrid_CD})) and an arbitrary state $\rho$ is given by \cite{jozsa1994fidelity} \begin{equation} F = \left[\textrm{Tr}\left( \sqrt{ \sqrt{\rho_h} \, \rho \, \sqrt{\rho_h}} \right)\right]^2.
\label{fidelity_rho} \end{equation}
As the fidelity approaches one, the directly-distributed state becomes identical to the hybrid state created by the satellite (we will assume the hybrid state is initially perfect when produced anywhere). The negativity of an arbitrary state $\rho$ is defined by \cite{vidal2002computable}
\begin{equation} E_N(\rho) = \frac{1}{2}\sum_i\left( |\lambda_i|-\lambda_i \right), \end{equation} where $\lambda_i$ are the eigenvalues of the partial transpose of the entangled state $\rho$.
The logarithmic negativity ($0 \leq E_{LN} \leq 1$) is related to the negativity by \cite{vidal2002computable} \begin{equation} E_{LN}(\rho) = \log_2[1+2 E_N(\rho)]. \label{ELN_rho} \end{equation} When $E_{LN}>0$, $\rho$ is an entangled state.
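Both metrics are straightforward to evaluate numerically on small density matrices. The sketch below (ours; a two-qubit Bell state is used purely as a familiar test case, not as the hybrid state of this paper) implements Eqs.~(\ref{fidelity_rho}) and (\ref{ELN_rho}) and recovers the expected values $F=1$ for a state with itself, $F=1/4$ against the maximally mixed state, and $E_{LN}=1$ for a maximally entangled pair:

```python
import numpy as np

def psd_sqrt(m):
    # matrix square root of a positive semi-definite matrix via eigendecomposition
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def fidelity(rho_h, rho):
    # Eq. (fidelity_rho): F = [Tr sqrt(sqrt(rho_h) rho sqrt(rho_h))]^2
    s = psd_sqrt(rho_h)
    return np.trace(psd_sqrt(s @ rho @ s)).real**2

def log_negativity(rho):
    # negativity from the partial transpose on the second qubit, then Eq. (ELN_rho)
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    lam = np.linalg.eigvalsh(pt)
    E_N = 0.5*np.sum(np.abs(lam) - lam)
    return np.log2(1 + 2*E_N)

psi = np.array([1, 0, 0, 1])/np.sqrt(2)      # two-qubit Bell state
rho_bell = np.outer(psi, psi)
print(fidelity(rho_bell, rho_bell))          # 1.0
print(fidelity(rho_bell, np.eye(4)/4))       # 0.25
print(log_negativity(rho_bell))              # 1.0 (maximally entangled)
```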
\section{Teleportation through a TMSV channel} \label{teleportation}
In this section, we detail the teleportation of an input mode of the hybrid entangled state through the attenuated TMSV teleportation channel (Fig.~\ref{teleported_setup}). The CV entangled state $A-B$ is created on the satellite, then transmitted down to Alice and Bob, with channel transmissivities $T_A$ and $T_B$, respectively, resulting in the attenuated TMSV channel $A'-B'$. When the input mode is the DV mode (mode $D$) of the hybrid entangled pair $C-D$, the CV teleportation channel teleports $D$ to $B''$, resulting in the entanglement between $C$ and $B''$. Otherwise, when the input mode is the CV mode, $C$ is teleported to $B''$, resulting in the entanglement between $D$ and $B''$.
Experimentally, the CV entanglement ($A-B$) can be produced by letting two single-mode squeezed vacuum states interfere through a balanced beam splitter.\footnote{In this work, we do not take into account the loss at this beam splitter. However, such additional losses can be readily mapped to a re-scaling in both the initial squeezing parameter and channel loss \cite{takeda2013gaintuning}.} Theoretically, the CV entanglement can be created by applying a two-mode squeezing operator on a two-mode vacuum state to obtain a TMSV state \begin{equation} \ket{TMSV} = S(\xi)\ket{0,0} = \exp\left(\xi \hat{a}_A^\dagger \hat{a}_B^\dagger - \xi^* \hat{a}_A\hat{a}_B\right)\ket{0,0}, \label{TMSV_ket} \end{equation} where $\xi = r e^{i\phi}$, with $r$ being the two-mode squeezing parameter (henceforth referred to as the initial squeezing), and $\phi$ being the phase. $\hat{a}_l$ and $\hat{a}_l^\dagger$ represent the annihilation and creation operators of the optical field, respectively, with $l \in \{A,B\}$ denoting the spatial modes. Setting $\hbar = 1/2$, CV entanglement is encoded in the dimensionless position and momentum quadrature operators of the optical field, $\hat{x_l} = (\hat{a_l}^\dagger+\hat{a_l})/{2}$ and $\hat{p_l}=i(\hat{a_l}^\dagger-\hat{a_l})/{2}$, respectively, where $i$ denotes the imaginary unit. In the following two subsections, we will present the mathematical models that describe the teleportation of either the DV or CV mode of a hybrid entangled state through an attenuated TMSV channel.\footnote{The models developed in this work have been verified by teleporting a simple vacuum state through a symmetric channel, which produces the same fidelity as the analytical results in \cite{takeoka2003continuous} and \cite{lie2019limitations}.}
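The $\hbar=1/2$ convention above implies the commutator $[\hat{x}_l,\hat{p}_l]=i/2$. A minimal numerical check of this convention (ours), using truncated Fock-space ladder operators:

```python
import numpy as np

N = 30
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # truncated annihilation operator
x = (a.conj().T + a) / 2                   # quadratures in the hbar = 1/2 convention
p = 1j*(a.conj().T - a) / 2
comm = x @ p - p @ x                       # equals (i/2) I away from the truncation edge
print(comm[0, 0])  # 0.5j
```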
\subsection{Teleporting the DV mode of hybrid entanglement} \label{section_dv_teleportation} In order to describe the teleportation of the DV mode of the hybrid entanglement through the CV teleportation channel, we will use the Wigner function representation. We first study a general input state with the Wigner function $W_{in}(\alpha)$, where $\alpha = x + i p$ with non-zero real values $x$ and $p$. Let $G_{\sigma}(\alpha)$ denote a Gaussian function, with $x$ and $p$ denoting two uncorrelated random variables with equal variance $\sigma$
\begin{equation} G_\sigma(\alpha) =\frac{1}{\pi\sigma} \exp\left(-\frac{|\alpha|^2}{\sigma}\right)=\frac{1}{\pi\sigma} \exp\left(-\frac{x^2 + p^2}{\sigma}\right). \label{Gaussian} \end{equation} From \cite{takeda2013gaintuning}, the teleported Wigner function after the use of the TMSV channel is \begin{align} W_{tel}(\alpha_{B''}) &= \frac{1}{g^2}\left[W_{in}*G_{\sigma}\right]\left(\frac{\alpha_{B''}}{g}\right)\nonumber\\ &= \frac{1}{g^2}\iint dx dp W_{in}(\alpha) G_{\sigma}\left(\frac{\alpha_{B''}}{g}-\alpha\right). \label{general_teleportation} \end{align}
Here $G_{\sigma}$ is the Gaussian function defined in Eq.~(\ref{Gaussian}) with variance $\sigma$ given by \begin{align} \sigma & = \frac{1}{4g^2}\;\,\left[\;\, e^{+2r}\left(g\sqrt{T_A}-\sqrt{T_B}\right)^2 \right.\nonumber\\ &\qquad\quad+e^{-2r}\left(g\sqrt{T_A}+\sqrt{T_B}\right)^2 \nonumber\\ &\qquad\quad + 2g^2 (1-T_A) + 2(1-T_B) \left.\vphantom{\left(\sqrt{T_A}\right)^2}\right], \label{sigma_general} \end{align} where $g$ is the teleportation gain. For fixed values of $T_A$ and $T_B$, the calculation of the logarithmic negativity has shown that the optimal gain $g$ increases with $r$, and reaches its maximal value at $r\simeq 2.5$ \cite{do2019hybrid}. From now on, for simplicity, whenever we mention the ``optimal gain'', we are referring to the optimal gain when the initial squeezing satisfies $r>2.5$. When $T_A = T_B=1$, the optimal gain is approximately $g\approx 1$, and we have $\sigma= \exp(-2 r)$ \cite{braustein1998teleportationCV}. When the channels are asymmetric, i.e., $T_A\neq T_B$, the teleportation (in terms of logarithmic negativity and fidelity) is optimized when $g \approx \sqrt{\frac{T_B}{T_A}}$ \cite{do2019hybrid}. Note that tuning the gain further does not improve the fidelity much; thus, from now on, we will simply replace the approximation signs with equalities.
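The two special cases above can be checked directly from Eq.~(\ref{sigma_general}): for $g=1$, $T_A=T_B=1$ the variance collapses to $e^{-2r}$, while for asymmetric channels the choice $g=\sqrt{T_B/T_A}$ cancels the exponentially growing $e^{+2r}$ term, keeping $\sigma$ bounded as $r$ increases. A short numerical sketch (ours, with illustrative transmissivities):

```python
import numpy as np

def sigma(g, r, TA, TB):
    # Eq. (sigma_general): variance of the Gaussian convolution kernel
    return (np.exp(2*r)*(g*np.sqrt(TA) - np.sqrt(TB))**2
            + np.exp(-2*r)*(g*np.sqrt(TA) + np.sqrt(TB))**2
            + 2*g**2*(1 - TA) + 2*(1 - TB)) / (4*g**2)

# lossless symmetric case with unit gain: sigma reduces to exp(-2r)
print(sigma(1.0, 1.2, 1.0, 1.0), np.exp(-2.4))

# asymmetric channels: g = sqrt(TB/TA) removes the e^{+2r} term
TA, TB = 0.8, 0.5
print(sigma(np.sqrt(TB/TA), 10.0, TA, TB))  # stays bounded even at large r
print(sigma(1.0, 10.0, TA, TB))             # blows up without gain tuning
```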
When the input mode is the DV mode ($D$) of a hybrid entangled pair $C-D$, the subspace of $C$ stays the same, while the subspace of $D$ is transformed. Mode $D$ can be decomposed into different terms $\ketbra{m}{n}{D}$ ($m,n \in \{0,1\}$), with Wigner functions $W^{\ketbra{m}{n}{D}}(\alpha_D)$. The teleportation of each term is described by Eqs.~(\ref{Gaussian}), (\ref{general_teleportation}) and (\ref{sigma_general}), leading to \begin{equation} W^{\ketbra{m}{n}{D}}(\alpha_D) \rightarrow \frac{1}{g^2}\left[W_{in}^{\ketbra{m}{n}{D}}(\alpha_D)*G_{\sigma}\right]\left(\frac{\alpha_{B''}}{g}\right). \label{Fock_teleportation} \end{equation} The teleportation transformation has been previously calculated for the case where mode $D$ is part of a Bell state \cite{takeda2013gaintuning,do2019hybrid}. When mode $D$ is part of hybrid entanglement, we recognize that the initial hybrid entanglement $\ket{\psi}_h$ (Eq.~(\ref{hybrid_CD})) has a form similar to a Bell state, where $\ket{cat_-}_C$ corresponds to $\ket{1}_C$, and $\ket{cat_+}_C$ corresponds to $\ket{0}_C$. By transforming the subspace of mode $D$ in a manner similar to that described in \cite{takeda2013gaintuning,do2019hybrid}, and then converting the Wigner formalism to the density-matrix formalism, we find the DV-mode teleported state \begin{align} \rho_{tel} &= \sum_{k=-1}^{\infty}\rho_k, \,\textrm{where} \nonumber\\ \rho_{k} &= a_{k} \ketbra{cat_+}{cat_+}{C}\otimes\ketbra{k}{k}{B''} \nonumber \\ &+b_{k}\ketbra{cat_-}{cat_+}{C}\otimes\ketbra{k}{k+1}{B''} \nonumber\\ &+ b_{k} \ketbra{cat_+}{cat_-}{C}\otimes\ketbra{k+1}{k}{B''}\nonumber\\
&+c_{k} \ketbra{cat_-}{cat_-}{C}\otimes\ketbra{k+1}{k+1}{B''} , \label{teleported_output} \end{align} where $a_k$, $b_k$ and $c_k$ are defined by \begin{align}
a_k &= \frac{1}{2}T_{1,1\rightarrow k,k} \,( k\geq 0),\text{or} \;0 \; (k = -1),\nonumber\\
b_k & = \frac{1}{2}T_{1,0\rightarrow k+1, \,k}\, (k\geq 0), \text{or} \;0 \;(k=-1) ,\nonumber\\
c_k & = \frac{1}{2}T_{0,0\rightarrow k+1, \, k+1} \,(k\geq -1), \,\textrm{and where}
\label{abc} \end{align} \begin{align} T_{0,0\rightarrow k,k} &= \frac{2(\gamma-1)^k}{(\gamma+1)^{k+1}},\nonumber\\ T_{1,1\rightarrow k,k} &= \frac{2(\gamma-1)^{k-1}}{(\gamma+1)^{k+2}}\left[(\gamma-2g^2+1)(\gamma-1)+4kg^2\right],\nonumber\\ T_{1,0\rightarrow k+1,\,k} &= \frac{4g\sqrt{k+1}(\gamma-1)^k}{(\gamma+1)^{k+2}}, \label{T_transform} \end{align} with $\gamma \equiv g^2(2\sigma + 1)$.
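As a consistency check on Eq.~(\ref{T_transform}) (a minimal Python sketch; the truncation $K$ and the function names are our own), the teleported state should remain normalized: since the input DV mode carries equal weights $\frac{1}{2}$ on $\ketbra{0}{0}{}$ and $\ketbra{1}{1}{}$, summing $a_k + c_k$ over all $k$ should recover unity.

```python
import math

def gamma_of(g, sig):
    """gamma = g^2 (2 sigma + 1), as defined below Eq. (T_transform)."""
    return g ** 2 * (2 * sig + 1)

def T00(k, g, sig):
    gm = gamma_of(g, sig)
    return 2 * (gm - 1) ** k / (gm + 1) ** (k + 1)

def T11(k, g, sig):
    gm = gamma_of(g, sig)
    return (2 * (gm - 1) ** (k - 1) / (gm + 1) ** (k + 2)
            * ((gm - 2 * g ** 2 + 1) * (gm - 1) + 4 * k * g ** 2))

g, sig, K = 1.0, 0.4, 400                                # truncate the sums at k = K
a = [0.5 * T11(k, g, sig) for k in range(K)]             # a_k for k >= 0
c = [0.5 * T00(k + 1, g, sig) for k in range(-1, K)]     # c_k for k >= -1

trace = sum(a) + sum(c)
assert abs(trace - 1.0) < 1e-10   # the teleported state stays normalized
```

The off-diagonal $b_k$ terms do not contribute to the trace, since $\braket{cat_+}{cat_-}=0$ and $\ketbra{k}{k+1}{}$ is traceless.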
In the density matrix representation, the teleported state $\rho_{tel}$ can be compared to the perfect hybrid entangled state $\rho_h$ by using the measures of fidelity (Eq.~(\ref{fidelity_rho})) and logarithmic negativity (Eq.~(\ref{ELN_rho})). The simulation results are shown in section~\ref{result}.
\subsection{Teleporting the CV mode of hybrid entanglement} \label{section_teleport_cv_mode} In this section, we will discuss how to teleport the CV mode $C$ of the hybrid entangled state $D-C$ through the TMSV channel to obtain the final entangled state $D-B''$. (Note that, in this section, we swap the order of modes $C$ and $D$ in Fig.~\ref{teleported_setup}). Due to mathematical complexity, we will rewrite all required states in the characteristic-function formalism and approximate the hybrid entangled state ($D-C$) for the limits of large and small cat states in section~\ref{hybrid_entanglement_math}.
\subsubsection{The characteristic function} \label{section_define_cf} For an arbitrary state $\rho$, the usual characteristic function is defined as \begin{equation} \bigchi_{\rho}(\beta) = \textrm{Tr}\left[ \rho D(\beta)\right], \label{characteristic_function_define} \end{equation} with complex number $\beta=x + ip$ and displacement operator $
D(\beta) = e^{-|\beta|^2/2}e^{\beta\hat{a}^\dagger}e^{-\beta^* \hat{a}}. $
For example, for a vacuum state $\ket{0}$, the characteristic function is given by \begin{equation}
\bigchi_{vac}(\beta) = \exp{\left[ -|\beta|^2/2 \right]}. \label{vac_chi} \end{equation} For a single-mode squeezed vacuum (SMSV) state, let $S(\zeta)$ be the single-mode squeezing operator with $\zeta = s e^{i\theta}$, where $s$ is the single-mode squeezing parameter (which is different from the two-mode squeezing parameter $r$ in Eq.~(\ref{TMSV_ket})) and $\theta$ is the phase. The characteristic function of the SMSV state can be written as \cite{farias2009thesis,seshadreesan2015nongaussian} \begin{align}
\bigchi_{SMSV}(\beta) &= \exp\left( -\frac{1}{2}|\beta \cosh s - e^{i\theta}\beta^* \sinh s|^2 \right)\\ &=\exp\left[-\frac{1}{2}(x^2e^{-2s}+ p^2e^{2s}) \right], \label{SMSV} \end{align} where the last equality has assumed $\theta=0$. For a TMSV state given by Eq.~(\ref{TMSV_ket}), when there is no channel loss, the characteristic function is given by
\cite{seshadreesan2015nongaussian}, \begin{align}
&\bigchi_{TMSV}(\beta_A,\beta_B) = \exp \left[ -\frac{1}{2} \left( |\beta_A \cosh r + \beta_B^* e^{i\phi}\sinh r|^2 \right.\right.\nonumber\\
&\qquad\qquad\qquad\qquad\left.\left. + |\beta_B \cosh r + \beta_A^*e^{i\phi}\sinh r|^2 \right) \vphantom{\frac{1}{2}}\right]\\ &=\exp \left\{ -\frac{e^{2r}}{4}\left[\left(x_A-x_B\right)^2 + \left( p_A+p_B\right)^2 \right]\right.\nonumber\\ & \qquad\quad\left. -\frac{e^{-2r}}{4}\left[\left(x_A+x_B\right)^2 + \left( p_A-p_B\right)^2 \right]\right\}, \label{TMSV_chi} \end{align} where the last equality has used $\phi=\pi$ for optimal teleportation fidelity.
For a tensor product of arbitrary density matrices, the characteristic function is a product of individual characteristic functions. For example, for an arbitrary separable state $\rho=\rho_1\otimes\rho_2$, we have \cite{dellanno2018nongaussian} \begin{equation} \bigchi_\rho(\beta_1,\beta_2) = \textrm{Tr}\left[\rho D(\beta_1)D(\beta_2)\right] = \bigchi_{\rho_1}(\beta_1)\bigchi_{\rho_2}(\beta_2) . \label{characteristic_terms} \end{equation} The characteristic function is also linear with respect to the density operator, as can be seen from the linearity of the trace function in Eq.~(\ref{characteristic_function_define}).
\subsubsection{Characteristic function of an attenuated TMSV state} Again, we model the two down-link channels by two beam splitters with transmissivities $T_A$ and $T_B$, respectively. At the beam splitters, modes $A$ and $B$ are mixed with the auxiliary vacuum modes $\epsilon_A$ and $\epsilon_B$, giving the output modes $\{A',\epsilon_{A'}\}$ and $\{ B', \epsilon_{B'}\}$, respectively. With $t_l = \sqrt{T_l}$ and $r_l = \sqrt{1-T_l}$ for $l \in \{A,B\}$, the beam splitter transformation is given by \begin{align}
\beta_{l'} &= t_l \beta_l - r_l \beta_{\epsilon_l}, \quad \beta_{\epsilon_{l'}} = r_l \beta_l + t_l \beta_{\epsilon_l}. \label{bs_cv} \end{align} The four-mode characteristic function before the beam splitters is given by multiplying the characteristic function of the TMSV state with those of the vacuum states \begin{align}
\bigchi_{TMSV}(\beta_A,\beta_B)\bigchi_{vac}(\beta_{\epsilon_A})\bigchi_{vac}(\beta_{\epsilon_B}). \end{align} The beam splitter transformation can be performed by substituting $\beta_l = \;t_l \beta_{l'} + r_l \beta_{\epsilon_{l'}}$ and $\beta_{\epsilon_l} = - r_l \beta_{l'} + t_l \beta_{\epsilon_{l'}}$
into the above equation. It has been shown that tracing out the vacuum modes is equivalent to setting $\beta_{\epsilon_{l'}}=0$ \cite{dellanno2010realistic}. The resulting attenuated TMSV state is given by \begin{align} &\bigchi_{TMSV}^{(T_A,T_B)}(\beta_{A'},\beta_{B'}) = \bigchi_{TMSV}(\sqrt{T_A}\beta_{A'},\sqrt{T_B}\beta_{B'})\nonumber\\ &\qquad\times\bigchi_{vac}(-\sqrt{1-T_A}\beta_{A'})\bigchi_{vac}(-\sqrt{1-T_B}\beta_{B'}). \label{TMSV_tatb} \end{align}
\subsubsection{Characteristic function of a hybrid entangled state} \label{approximate_hybrid} The exact density matrix of the hybrid entangled state is given by Eq.~(\ref{hybrid_CD}), in which the cat states $\ket{cat_\pm}$ have the characteristic functions \cite{Girish2013QuantumOptics} \begin{align}
\bigchi_{cat_\pm}(\beta) = &\frac{1}{N_{\pm}^2}\left\{2 e^{-\frac{|\beta|^2}{2}} \cos\left[ 2\textrm{Im}\left( \beta\alpha_0^* \right)\right]\right.\nonumber\\
&\left. \pm e^{-\frac{1}{2}|\beta+2\alpha_0|^2} \pm e^{-\frac{1}{2}|\beta-2\alpha_0|^2}\right\}. \end{align} However, due to the mathematical complexity, especially when calculating the matrix $\ketbra{cat_\pm}{cat_\mp}{}$ in Eq.~(\ref{hybrid_CD}), we will approximate the hybrid entangled state for the cases of large $\alpha_0$ (Eq.~(\ref{hybrid_large_alpha_rho})) and small $\alpha_0$ (Eq.~(\ref{approx_hybrid_rho})). For hybrid states with intermediate sizes, an exact calculation can be found in appendix~\ref{appendix_cat_exact}.
\paragraph {When the cat state is large} the hybrid entangled state can be approximated by $\rho_h^{(large)}$ in Eq.~(\ref{hybrid_large_alpha_rho}).
Given the definition of the characteristic function (Eq.~(\ref{characteristic_function_define})), and the form of the density matrix of the approximate hybrid state (in Eq.~(\ref{hybrid_large_alpha_rho})), we will find it useful to introduce a new function for a general matrix $M$ \begin{equation} X_{M}(\beta) = \textrm{Tr}\left[M D(\beta) \right]. \label{X_function} \end{equation} From the linearity of the trace function, the above function is also linear with respect to the matrix $M$ \cite{Girish2013QuantumOptics}. For a tensor product of the form $M = M_1 \otimes M_2$, we have \begin{align} X_M (\beta_1,\beta_2)&= \textrm{Tr}\left[M_1 \otimes M_2 \; D_1(\beta_1)\otimes D_2(\beta_2)\right] \nonumber\\ &= X_{M_1}(\beta_1)X_{M_2}(\beta_2), \label{X_tensor} \end{align} where the last equality makes use of the tensor product property of the trace function. From the linearity and tensor product property above, the characteristic function of $\rho_h^{(large)}$ can thus be found by applying Eq.~(\ref{X_tensor}) to a modified Eq.~(\ref{hybrid_large_alpha_rho}) where the order of modes $C$ and $D$ is swapped (see the discussion at the start of section~\ref{section_teleport_cv_mode}), giving
\begin{align}
\bigchi_{h}^{(large)}(\beta_D,\beta_C) = &\frac{1}{2}\left[X_{\ketbra{+}{+}{}}(\beta_D) X_{\ketbra{\alpha_0}{\alpha_0}{}}(\beta_C) \right.\nonumber\\ &+X_{\ketbra{-}{-}{}}(\beta_D) X_{\ketbra{-\alpha_0}{-\alpha_0}{}}( \beta_C) \nonumber\\ &-X_{\ketbra{+}{-}{}}(\beta_D) X_{\ketbra{\alpha_0}{-\alpha_0}{}}(\beta_C) \nonumber\\ &-\left. X_{\ketbra{-}{+}{}}(\beta_D) X_{\ketbra{-\alpha_0}{\alpha_0}{}}(\beta_C)\right]. \label{hybrid_chi_large} \end{align}
In the following, we will explicitly calculate the individual terms in the above equation.
From the change of basis in Eq.~(\ref{pm_01}), we can show \begin{align} &X_{\ketbra{+}{+}{}}(\beta) \nonumber\\ &\quad = \frac{1}{2}\left[X_{\ketbra{0}{0}{}}(\beta) + X_{\ketbra{1}{1}{}}(\beta) + X_{\ketbra{0}{1}{}}(\beta) + X_{\ketbra{1}{0}{}}(\beta)\right],\nonumber\\ &X_{\ketbra{-}{-}{}}(\beta) \nonumber\\ &\quad = \frac{1}{2}\left[X_{\ketbra{0}{0}{}}(\beta) + X_{\ketbra{1}{1}{}}(\beta) - X_{\ketbra{0}{1}{}}(\beta) - X_{\ketbra{1}{0}{}}(\beta)\right],\nonumber\\ &X_{\ketbra{+}{-}{}}(\beta) \nonumber\\ &\quad = \frac{1}{2}\left[X_{\ketbra{0}{0}{}}(\beta) - X_{\ketbra{1}{1}{}}(\beta) - X_{\ketbra{0}{1}{}}(\beta) + X_{\ketbra{1}{0}{}}(\beta)\right],\nonumber \\ &X_{\ketbra{-}{+}{}}(\beta) \nonumber\\ &\quad = \frac{1}{2}\left[X_{\ketbra{0}{0}{}}(\beta) - X_{\ketbra{1}{1}{}}(\beta) + X_{\ketbra{0}{1}{}}(\beta) - X_{\ketbra{1}{0}{}}(\beta)\right], \label{cf_pm} \end{align} where \begin{align}
X_{\ketbra{0}{0}{}}(\beta) &= \bra{0}D(\beta)\ket{0}=e^{-|{\beta}|^2/2},\nonumber\\
X_{\ketbra{1}{1}{}}(\beta) &= \bra{1}D(\beta)\ket{1} = e^{-|{\beta}|^2/2}(1-|{\beta}|^2),\nonumber\\
X_{\ketbra{0}{1}{}}(\beta) &= \bra{1}D(\beta)\ket{0}=\beta e^{-|{\beta}|^2/2 }, \nonumber\\
X_{\ketbra{1}{0}{}}(\beta) &= \bra{0}D(\beta)\ket{1}= \left[\bra{0}D^\dagger(-\beta)\right]\ket{1}= -{\beta}^* e^{-|{\beta}|^2/2}. \label{cf_fock} \end{align} In Eq.~(\ref{cf_fock}) we have used the appropriate relations for the displaced number states \cite{glauber1963coherent,sudarshan1963equivalence, deoliveira1990properties}. For the terms $\ketbra{\pm\alpha}{\pm \alpha}{}$ and $\ketbra{\pm\alpha}{\mp \alpha}{}$ in Eq.~(\ref{hybrid_large_alpha_rho}), we have \cite{Girish2013QuantumOptics} \begin{align}
X_{\ketbra{\pm\alpha_0}{\pm\alpha_0}{}} (\beta)&=e^{-\frac{|\beta|^2}{2}}e^{\pm i 2 \textrm{Im}[\beta\alpha_0^*]}, \nonumber\\
X_{\ketbra{\pm\alpha_0}{\mp\alpha_0}{}}(\beta) &= e^{-\frac{|\beta\pm 2\alpha_0|^2}{2}}. \label{cf_coh} \end{align} By substituting Eqs.~(\ref{cf_pm}), (\ref{cf_fock}) and (\ref{cf_coh}) into Eq.~(\ref{hybrid_chi_large}), we can find the characteristic function of the hybrid entangled state with large $\alpha_0$.
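The closed-form matrix elements in Eq.~(\ref{cf_fock}) can be reproduced numerically by building the displacement operator in a truncated Fock basis (a minimal Python sketch using numpy/scipy; the truncation dimension $N$ is our own choice, selected so that truncation errors are negligible for small $|\beta|$):

```python
import numpy as np
from scipy.linalg import expm

N = 40                                     # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
adag = a.conj().T

def D(beta):
    """Displacement operator D(beta) = exp(beta a^dag - beta^* a)."""
    return expm(beta * adag - np.conj(beta) * a)

beta = 0.4 + 0.3j
Dm = D(beta)
e = np.exp(-abs(beta) ** 2 / 2)

assert abs(Dm[0, 0] - e) < 1e-10                         # <0|D(beta)|0>
assert abs(Dm[1, 1] - e * (1 - abs(beta) ** 2)) < 1e-10  # <1|D(beta)|1>
assert abs(Dm[1, 0] - beta * e) < 1e-10                  # <1|D(beta)|0>
assert abs(Dm[0, 1] + np.conj(beta) * e) < 1e-10         # <0|D(beta)|1> = -beta^* e
```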
\paragraph {When the cat state is small} the hybrid entangled state can be approximated by $\rho_h^{(small)}$. By applying Eqs.~(\ref{characteristic_function_define}) and (\ref{X_tensor}) to Eq.~(\ref{approx_hybrid_rho}), the characteristic function of $\rho_h^{(small)}$ is given by \begin{align} \bigchi_{h}^{(small)}(\beta_D,\beta_C) &= \frac{1}{2}\left[X_{\ketbra{0}{0}{}}(\beta_D) X_{S(s)\ketbra{1}{1}{}S(s)^\dagger}(\beta_C) \right.\nonumber\\ &\quad\;+ X_{\ketbra{1}{1}{}}( \beta_D) X_{S(s)\ketbra{0}{0}{}S(s)^\dagger}( \beta_C) \nonumber\\ &\quad\;+X_{\ketbra{0}{1}{}}(\beta_D) X_{S(s)\ketbra{1}{0}{}S(s)^\dagger}(\beta_C)\nonumber\\ &\quad \; + \left. X_{\ketbra{1}{0}{}}(\beta_D) X_{S(s)\ketbra{0}{1}{}S(s)^\dagger}(\beta_C)\right]. \end{align}
For the matrix $S(s)\ketbra{m}{n}{} S(s)^\dagger$ with $m,n \in \{0,1\}$ and $s$ defined in Eq.~(\ref{s}), $ X_{S(s)\ketbra{m}{n}{} S(s)^\dagger}(\beta)$
can be found by applying the relation \begin{equation} S^\dagger(s)D(\beta)S(s) = D(\tilde{\beta}),\,\textrm{with} \; \tilde{\beta} = \beta \cosh s - \beta^* \sinh s. \end{equation} As a result, $X_{S(s)\ketbra{0}{0}{}S(s)^\dagger}(\beta)$ and $X_{S(s)\ketbra{1}{1}{}S(s)^\dagger}(\beta)$ can be found by substituting $\tilde{\beta}$ for $\beta$ in Eq.~(\ref{cf_fock}), giving \cite{farias2009thesis,seshadreesan2015nongaussian} \begin{align} X_{S(s)\ketbra{0}{0}{}S(s)^\dagger}(\beta) &=X_{\ketbra{0}{0}{}}(\tilde{\beta}),\\ X_{S(s)\ketbra{1}{1}{}S(s)^\dagger}(\beta) &=X_{\ketbra{1}{1}{}}(\tilde{\beta}).
\end{align} Our calculation shows that $X_{S(s)\ketbra{0}{1}{}S(s)^\dagger}(\beta)$ and $X_{S(s)\ketbra{1}{0}{}S(s)^\dagger}(\beta)$ can also be found by the same substitution \begin{align} X_{S(s)\ketbra{0}{1}{}S(s)^\dagger}(\beta) &= X_{\ketbra{0}{1}{}}(\tilde{\beta}),\\ X_{S(s)\ketbra{1}{0}{}S(s)^\dagger}(\beta) &= X_{\ketbra{1}{0}{}}(\tilde{\beta}). \end{align} (Note that, when $s=0$, $\tilde{\beta}$ reduces to $\beta$, so that $X_{S(0)\ketbra{m}{n}{} S(0)^\dagger}(\beta) = X_{\ketbra{m}{n}{}}(\beta)$.) We can now write \begin{align} &\bigchi_{h}^{(small)}(\beta_D,\beta_C) = \nonumber\\ &\frac{1}{2}\left[X_{\ketbra{0}{0}{}}(\beta_D) X_{\ketbra{1}{1}{}}(\tilde{\beta}_C) \right.+ X_{\ketbra{1}{1}{}}(\beta_D) X_{\ketbra{0}{0}{}}(\tilde{\beta}_C) \nonumber\\ &\quad+X_{\ketbra{0}{1}{}}(\beta_D) X_{\ketbra{1}{0}{}}(\tilde{\beta}_C)+ \left. X_{\ketbra{1}{0}{}}(\beta_D) X_{\ketbra{0}{1}{}}(\tilde{\beta}_C)\right]. \label{hybrid_chi} \end{align} By substituting Eq.~(\ref{cf_fock}) into the above equation, we can find the characteristic function of a hybrid entangled state with small $\alpha_0$.
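The squeezed-displacement relation $S^\dagger(s)D(\beta)S(s)=D(\tilde{\beta})$ can likewise be checked in a truncated Fock basis (a numerical sketch; we assume the convention $S(s)=\exp[\frac{s}{2}(\hat{a}^{\dagger 2}-\hat{a}^2)]$ for real $s$, which reproduces the sign convention $\tilde{\beta}=\beta\cosh s - \beta^*\sinh s$; the truncation dimension is our own choice):

```python
import numpy as np
from scipy.linalg import expm

N = 60
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
adag = a.conj().T

def D(beta):
    return expm(beta * adag - np.conj(beta) * a)

def S(s):
    """Single-mode squeezing operator for real squeezing parameter s."""
    return expm(0.5 * s * (adag @ adag - a @ a))

beta, s = 0.3 + 0.2j, 0.25
btil = beta * np.cosh(s) - np.conj(beta) * np.sinh(s)

# X_{S|0><0|S^dag}(beta) = <0|S^dag D(beta) S|0> should equal exp(-|btil|^2/2)
lhs = (S(s).conj().T @ D(beta) @ S(s))[0, 0]
assert abs(lhs - np.exp(-abs(btil) ** 2 / 2)) < 1e-8
```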
\subsection{Fidelity} \label{section_fidelity_cv} In general, fidelities above the classical teleportation limit are considered useful in quantum communications (the limit is $\frac{2}{3}$ for a qubit state and $\frac{1}{2}$ for a coherent state \cite{massar1995optimal}). The fidelity between two arbitrary two-mode states, for example, between the original hybrid entangled state $\bigchi_h(\beta_D,\beta_C)$ and the CV-mode teleported state $\bigchi_{tel}(\beta_D,\beta_{B''})$, is given by \begin{align} F &= \frac{1}{\pi^2}\iint d^2\beta_D d^2 \beta_{B''} \bigchi_h(\beta_D,\beta_{B''})\bigchi_{tel}(-\beta_D,-\beta_{B''}). \label{eq_fidelity} \end{align}
In order to verify our calculations, we check that the fidelity of a state with itself is always unity. For example, in the limit where the TMSV channel has no loss and has infinite squeezing, $\bigchi_{tel}(\beta_D,\beta_{B''})$ becomes identical to $\bigchi_h(\beta_D,\beta_{B''})$. For the case of large~$\alpha_0$, $\bigchi_h^{(large)}(\beta_D,\beta_{B''})$ is given by Eq.~(\ref{hybrid_chi_large}). The multiplication $\bigchi_h^{(large)}(\beta_D,\beta_{B''})\bigchi_h^{(large)}(-\beta_D,-\beta_{B''})$ creates a summation of 16 different terms. The summation can be simplified by exploiting the symmetry of the DV mode, which gives \begin{align} \int d^2\beta_D X_{\ketbra{+}{+}{}}(\beta_D)X_{\ketbra{+}{+}{}}(-\beta_D)& \nonumber\\ \qquad=\int d^2\beta_D \lvert X_{\ketbra{+}{+}{}}(\beta_D)\rvert ^2 &=\pi,\nonumber\\ \int d^2\beta_D X_{\ketbra{-}{-}{}}(\beta_D)X_{\ketbra{-}{-}{}}(-\beta_D)&\nonumber\\ \qquad =\int d^2\beta_D \lvert X_{\ketbra{-}{-}{}}(\beta_D)\rvert ^2 &=\pi,\nonumber\\ \int d^2\beta_D X_{\ketbra{+}{-}{}}(\beta_D)X_{\ketbra{-}{+}{}}(-\beta_D)&=\pi,\nonumber\\ \int d^2\beta_D X_{\ketbra{-}{+}{}}(\beta_D)X_{\ketbra{+}{-}{}}(-\beta_D)&=\pi, \label{symmetrya} \end{align} while the other twelve integrations give zero, for example, \begin{align} &\int d^2\beta_D X_{\ketbra{+}{+}{}}(\beta_D)X_{\ketbra{-}{-}{}}(-\beta_D)=0,\nonumber\\ &\int d^2\beta_D X_{\ketbra{+}{+}{}}(\beta_D)X_{\ketbra{+}{-}{}}(-\beta_D)=0,\cdots \label{symmetryb} \end{align} The fidelity of a large hybrid entangled state with itself is therefore given by the four non-zero terms \begin{align} &F_{self}^{(large)} = \nonumber\\
&\frac{1}{4}\left[ \frac{1}{\pi}\int d^2\beta_D|X_{\ketbra{+}{+}{}}(\beta_D)|^2 \right.\nonumber\\
&\qquad\times \frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{\alpha_0}{\alpha_0}{}}(\beta_{B''})|^2 \nonumber\\
&\;+\frac{1}{\pi}\int d^2\beta_D|X_{\ketbra{-}{-}{}}(\beta_D)|^2 \nonumber\\
&\qquad\times \frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{-\alpha_0}{-\alpha_0}{}}(\beta_{B''})|^2\nonumber\\ &\;+\frac{1}{\pi}\int d^2\beta_D X_{\ketbra{+}{-}{}}(\beta_D)X_{\ketbra{-}{+}{}}(-\beta_D) \nonumber\\ &\qquad\times \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{\alpha_0}{-\alpha_0}{}}(\beta_{B''}) X_{\ketbra{-\alpha_0}{\alpha_0}{}}(-\beta_{B''})\nonumber\\ &\;+\frac{1}{\pi}\int d^2\beta_D X_{\ketbra{-}{+}{}}(\beta_D) X_{\ketbra{+}{-}{}}(-\beta_D)\nonumber\\ & \left.\qquad\times \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{-\alpha_0}{\alpha_0}{}}(\beta_{B''}) X_{\ketbra{\alpha_0}{-\alpha_0}{}}(-\beta_{B''})\right]. \label{fidelity1_large_alpha} \end{align} All the integrals above are equal to $\pi$, giving $F_{self}^{(large)}=~1$. Similarly, for the case of small $\alpha_0$, $\bigchi_h^{(small)}(\beta_D,\beta_{B''})$ is given by Eq.~(\ref{hybrid_chi}). The DV mode in the $\{\ket{0},\ket{1}\}$ basis also exhibits a symmetry similar to that in Eqs.~(\ref{symmetrya}) and (\ref{symmetryb}). From such a symmetry, the fidelity of the state to itself can be simplified from sixteen terms to only four non-zero terms \begin{align} &F_{self}^{(small)} = \nonumber\\
&\frac{1}{4}\left[ \frac{1}{\pi}\int d^2\beta_D|X_{\ketbra{0}{0}{}}(\beta_D)|^2 \right.\nonumber\\
&\qquad\times \frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{1}{1}{}}(\tilde{\beta}_{B''})|^2 \nonumber\\
&\;+\frac{1}{\pi}\int d^2\beta_D|X_{\ketbra{1}{1}{}}(\beta_D)|^2 \nonumber\\
&\qquad\times \frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{0}{0}{}}(\tilde{\beta}_{B''})|^2\nonumber\\ &\;+\frac{1}{\pi}\int d^2\beta_D X_{\ketbra{0}{1}{}}(\beta_D)X_{\ketbra{1}{0}{}}(-\beta_D) \nonumber\\ &\qquad\times \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{1}{0}{}}(\tilde{\beta}_{B''}) X_{\ketbra{0}{1}{}}(-\tilde{\beta}_{B''})\nonumber\\ &\;+\frac{1}{\pi}\int d^2\beta_D X_{\ketbra{1}{0}{}}(\beta_D) X_{\ketbra{0}{1}{}}(-\beta_D)\nonumber\\ & \left.\qquad\times \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{0}{1}{}}(\tilde{\beta}_{B''}) X_{\ketbra{1}{0}{}}(-\tilde{\beta}_{B''})\right]. \label{fidelity1_small_alpha} \end{align} All the integrals above are equal to $\pi$, again giving $F_{self}^{(small)}=~1$.
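The DV-mode symmetry integrals used above (Eqs.~(\ref{symmetrya}) and (\ref{symmetryb})) can be verified by direct quadrature over the complex plane (a Python sketch; the grid extent and resolution are our own choices, selected so that both truncation and discretization errors are far below the stated tolerance):

```python
import numpy as np

# Grid over the complex beta plane (symmetric, so beta -> -beta is a flip)
L, M = 6.0, 601
x = np.linspace(-L, L, M)
X, P = np.meshgrid(x, x)
beta = X + 1j * P
dA = (x[1] - x[0]) ** 2

# Matrix elements of Eq. (cf_fock)
e = np.exp(-np.abs(beta) ** 2 / 2)
X00, X11 = e, e * (1 - np.abs(beta) ** 2)
X01, X10 = beta * e, -np.conj(beta) * e

# Change of basis, Eq. (cf_pm)
Xpp = 0.5 * (X00 + X11 + X01 + X10)   # X_{|+><+|}
Xpm = 0.5 * (X00 - X11 - X01 + X10)   # X_{|+><-|}
Xmp = 0.5 * (X00 - X11 + X01 - X10)   # X_{|-><+|}

I1 = np.sum(np.abs(Xpp) ** 2) * dA           # -> pi
I2 = np.sum(Xpm * Xmp[::-1, ::-1]) * dA      # X(beta) X(-beta) -> pi
I3 = np.sum(Xpp * Xpm[::-1, ::-1]) * dA      # cross term -> 0

assert abs(I1 - np.pi) < 1e-6
assert abs(I2 - np.pi) < 1e-6
assert abs(I3) < 1e-6
```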
When the CV mode of the hybrid entangled state is teleported, mode $C$ will be teleported to mode $B''$. Assuming a teleportation gain of $g=1$,\footnote{$g=1$ was found to be the optimal gain for symmetric channels with regard to the logarithmic negativity \cite{takeda2013gaintuning,do2019hybrid}.} the teleportation output is given by \cite{farias2009thesis,dellanno2018nongaussian}
\begin{equation} \bigchi_{tel}(\beta_D, \beta_{B''}) = \bigchi_h(\beta_D, \beta_{B''})\bigchi_{TMSV}^{(T_A,T_B)}(\beta_{B''}^*,\beta_{B''}),
\label{teleportation_characteristic} \end{equation} where the channel is given by the attenuated TMSV state\footnote{More generally, see \cite{marian2006continuous} for how teleportation can operate under any type of CV teleportation channel.} $\bigchi_{TMSV}^{(T_A,T_B)}(\beta_{B''}^*,\beta_{B''})$ in Eq.~(\ref{TMSV_tatb}).
The fidelity of a hybrid entangled state after teleportation can be calculated by applying Eq.~(\ref{eq_fidelity}) \cite{farias2009thesis,seshadreesan2015nongaussian,dellanno2018nongaussian} \begin{align}
F_{tel} &=\frac{1}{\pi^2}\iint d^2\beta_D d^2\beta_{B''}\bigchi_h(\beta_D,\beta_{B''})\bigchi_h(-\beta_{D},-\beta_{B''})\nonumber\\ &\qquad \qquad\times \bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''}), \end{align}
where, by applying Eqs.~(\ref{vac_chi}), (\ref{TMSV_chi}), and (\ref{TMSV_tatb}), we have \begin{equation}
\bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''}) = \exp \left[-\sigma|\beta_{B''}|^2\right], \label{TMSV_minus} \end{equation} with $\sigma$ defined in Eq.~(\ref{sigma_general}). When there is no loss ($T_A=T_B=1$), we have $\sigma = e^{-2r}$ \cite{seshadreesan2015nongaussian}, where $r$ is again the initial squeezing. When the two channels are symmetric ($T_A=T_B=T$), we have \begin{equation} \sigma =T e^{-2r} + (1-T). \label{sigma_T} \end{equation} In order to calculate the fidelity, we perform a summation of integrations similar to that in Eqs.~(\ref{fidelity1_large_alpha}) and (\ref{fidelity1_small_alpha}).
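Eq.~(\ref{TMSV_minus}) can be verified by composing Eqs.~(\ref{vac_chi}), (\ref{TMSV_chi}) and (\ref{TMSV_tatb}) directly (a minimal Python sketch for the symmetric case with $g=1$; the parameter values are arbitrary choices of ours):

```python
import numpy as np

def chi_tmsv(bA, bB, r):
    """TMSV characteristic function, Eq. (TMSV_chi), with phi = pi."""
    xA, pA, xB, pB = bA.real, bA.imag, bB.real, bB.imag
    return np.exp(-np.exp(2 * r) / 4 * ((xA - xB) ** 2 + (pA + pB) ** 2)
                  - np.exp(-2 * r) / 4 * ((xA + xB) ** 2 + (pA - pB) ** 2))

def chi_vac(b):
    """Vacuum characteristic function, Eq. (vac_chi)."""
    return np.exp(-abs(b) ** 2 / 2)

def chi_tmsv_attenuated(bA, bB, r, TA, TB):
    """Attenuated TMSV state, Eq. (TMSV_tatb)."""
    return (chi_tmsv(np.sqrt(TA) * bA, np.sqrt(TB) * bB, r)
            * chi_vac(np.sqrt(1 - TA) * bA) * chi_vac(np.sqrt(1 - TB) * bB))

r, TA, TB = 0.8, 0.7, 0.7
beta = 0.5 - 0.3j
sigma = TA * np.exp(-2 * r) + (1 - TA)   # Eq. (sigma_T), symmetric channels

lhs = chi_tmsv_attenuated(-np.conj(beta), -beta, r, TA, TB)
assert abs(lhs - np.exp(-sigma * abs(beta) ** 2)) < 1e-12
```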
In the following, we will calculate the fidelity for the cases when $\alpha_0$ is large or small, respectively. For hybrid states with intermediate sizes, the detailed calculations can be found in appendix~\ref{appendix_cat_exact}.
\paragraph{Large cat state} \label{large_fidelity} When calculating the fidelity of the large hybrid state after teleportation, similar to Eqs.~(\ref{symmetrya}), (\ref{symmetryb}), and (\ref{fidelity1_large_alpha}), we obtain sixteen terms, which can be simplified to four non-zero terms by applying the DV-mode symmetry, giving \begin{align} &F_{tel}^{(large)} = \nonumber\\
&\frac{1}{4}\left[\vphantom{\frac{1}{4}}\right. \frac{1}{\pi}\int d^2\beta_D|X_{\ketbra{+}{+}{}}(\beta_D)|^2 \nonumber\\
&\times \frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{\alpha_0}{\alpha_0}{}}(\beta_{B''})|^2 \bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''}) \nonumber\\
&+\frac{1}{\pi}\int d^2\beta_D|X_{\ketbra{-}{-}{}}(\beta_D)|^2 \nonumber\\
&\times \frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{-\alpha_0}{-\alpha_0}{}}(\beta_{B''})|^2 \bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''}) \nonumber\\ &+\frac{1}{\pi}\int d^2\beta_D X_{\ketbra{+}{-}{}}(\beta_D)X_{\ketbra{-}{+}{}}(-\beta_D) \nonumber\\ &\times \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{\alpha_0}{-\alpha_0}{}}(\beta_{B''}) X_{\ketbra{-\alpha_0}{\alpha_0}{}}(-\beta_{B''}) \nonumber\\ &\qquad\quad \times\bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''})\nonumber\\ &+{\frac{1}{\pi}}\int d^2\beta_D X_{\ketbra{-}{+}{}}(\beta_D) X_{\ketbra{+}{-}{}}(-\beta_D)\nonumber\\ & \times \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{-\alpha_0}{\alpha_0}{}}(\beta_{B''}) X_{\ketbra{\alpha_0}{-\alpha_0}{}}(-\beta_{B''})\nonumber\\
& \qquad\quad \times\left.\bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''})\right]. \end{align} The integrations over mode $D$ are still equal to $\pi$. For the integrations over mode $B''$, after applying Eq.~(\ref{TMSV_minus}), we define the following four terms and calculate them using Mathematica \begin{align}
f_{\pm\pm\alpha_0} &= \frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{\pm\alpha_0}{\pm\alpha_0}{}}(\beta_{B''})|^2 e^{-\sigma|\beta_{B''}|^2} \nonumber\\
&= \frac{1}{\pi}\int d^2\beta_{B''}e^{-|\beta_{B''}|^2} e^{-\sigma|\beta_{B''}|^2}= \frac{1}{1+\sigma},\\ f_{\pm\mp\alpha_0} &=\frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{\pm\alpha_0}{\mp\alpha_0}{}}(\beta_{B''}) X_{\ketbra{\mp\alpha_0}{\pm\alpha_0}{}}(-\beta_{B''})\nonumber\\
&\qquad \times e^{-\sigma|\beta_{B''}|^2}\nonumber\\
&= \frac{1}{\pi}\int d^2\beta_{B''} e^{-|\beta_{B''}\pm 2\alpha_0|^2}e^{-\sigma|\beta_{B''}|^2} \nonumber\\ &=\frac{1}{1+\sigma} \exp\left(-\frac{4\alpha_0^2\sigma}{1+\sigma}\right). \end{align} So the fidelity after teleportation is
\begin{align} F_{tel}^{(large)} &= \frac{1}{4}(f_{++\alpha_0}+ f_{--\alpha_0}+ f_{+-\alpha_0}+f_{-+\alpha_0}) \nonumber\\ &= \frac{1}{2(1+\sigma)}\left[1+\exp\left(-\frac{4\alpha_0^2\sigma}{1+\sigma}\right)\right]. \label{fidelity_large_alpha} \end{align}
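The closed form in Eq.~(\ref{fidelity_large_alpha}) can be evaluated directly (a minimal Python sketch; $\sigma$ is taken from Eq.~(\ref{sigma_T}) for symmetric channels, and the function names are our own):

```python
import math

def sigma_T(r, T):
    """Variance for symmetric channels T_A = T_B = T and gain g = 1, Eq. (sigma_T)."""
    return T * math.exp(-2 * r) + (1 - T)

def F_tel_large(alpha0, sig):
    """Teleportation fidelity of a large hybrid state, Eq. (fidelity_large_alpha)."""
    return (1 + math.exp(-4 * alpha0 ** 2 * sig / (1 + sig))) / (2 * (1 + sig))

# Lossless channel with strong squeezing: sigma -> 0 and the fidelity -> 1
assert abs(F_tel_large(1.5, sigma_T(10.0, 1.0)) - 1.0) < 1e-7

# At fixed loss and squeezing, larger cat states are harder to teleport
s = sigma_T(2.5, 0.8)
assert F_tel_large(2.0, s) < F_tel_large(1.0, s)
```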
\paragraph{Small cat state}
\begin{figure*}
\caption{$\alpha_0 = 1 \rightarrow 2$}
\label{fidelity_large}
\caption{$\alpha_0 < 0.5$}
\label{fidelity_small}
\caption{The fidelity is plotted for direct distribution (black) and for teleportation of the DV/CV mode (red-orange/blue-navy). The channels are assumed to be symmetric ($T_A = T_B$), while the initial two-mode squeezing parameter $r$ is set to either 2.5 or 0.5, and the teleportation gain is set to $g=1$. In general, the fidelity of DV-mode teleportation stays the same regardless of $\alpha_0$. (a) For $\alpha_0 = 1 \rightarrow 2$, the fidelities of direct distribution and CV-mode teleportation vary in the shaded ranges, with the upper bound corresponding to $\alpha_0=1$, and the lower bound corresponding to $\alpha_0=2$. (b) For $\alpha_0< 0.5$, the fidelities of the three schemes stay approximately unchanged, providing an upper bound for the fidelity with respect to $\alpha_0$.}
\label{fidelity}
\end{figure*}
When calculating the fidelity of the small hybrid state after CV-mode teleportation, we arrive at sixteen different terms. Similar to Eq.~(\ref{fidelity1_small_alpha}), by applying the DV-mode symmetry, twelve of these terms become zero, leaving the following four terms \begin{align} &F_{tel}^{(small)} = \nonumber\\
&\frac{1}{4}\left[ \frac{1}{\pi}\int d^2\beta_D|X_{\ketbra{0}{0}{}}(\beta_D)|^2 \right.\nonumber\\
&\qquad\times \frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{1}{1}{}}(\tilde{\beta}_{B''})|^2 \bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''}) \nonumber\\
&\;+\frac{1}{\pi}\int d^2\beta_D|X_{\ketbra{1}{1}{}}(\beta_D)|^2 \nonumber\\
&\qquad\times \frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{0}{0}{}}(\tilde{\beta}_{B''})|^2\bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''}) \nonumber\\ &\;+\frac{1}{\pi}\int d^2\beta_D X_{\ketbra{0}{1}{}}(\beta_D)X_{\ketbra{1}{0}{}}(-\beta_D) \nonumber\\ &\qquad\times \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{1}{0}{}}(\tilde{\beta}_{B''}) X_{\ketbra{0}{1}{}}(-\tilde{\beta}_{B''})\nonumber\\ &\qquad\qquad\quad \times\bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''})\nonumber\\ &\;+\frac{1}{\pi}\int d^2\beta_D X_{\ketbra{1}{0}{}}(\beta_D) X_{\ketbra{0}{1}{}}(-\beta_D)\nonumber\\ & \qquad\times \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{0}{1}{}}(\tilde{\beta}_{B''}) X_{\ketbra{1}{0}{}}(-\tilde{\beta}_{B''})\nonumber\\ &\qquad\qquad\quad \times\left.\bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''})\right]. \label{fidelity_small_alpha} \end{align} The integrations over mode $D$ still give $\pi$; for the integrations over mode $B''$, we apply Eq.~(\ref{TMSV_minus}) and define \begin{align}
f_{00}&=\frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{0}{0}{}}(\tilde{\beta}_{B''})|^2 e^{-\sigma|\beta_{B''}|^2} = \frac{1}{\sqrt{\tau}},\nonumber\\
f_{11}&=\frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{1}{1}{}}(\tilde{\beta}_{B''})|^2 e^{-\sigma|\beta_{B''}|^2} \nonumber\\ &= \frac{2+\sigma^2+2\sigma^4+4\sigma(1+\sigma^2)\cosh(2s)+3\sigma^2\cosh(4s)}{2\sqrt{e^{-2s}+\sigma}(e^{2s}+\sigma)^{5/2}(e^{-2s}+\sigma)^2}, \nonumber\\
f_{10}&=\frac{1}{\pi}\int d^2\beta_{B''}X_{\ketbra{1}{0}{}}(\tilde{\beta}_{B''})X_{\ketbra{0}{1}{}}(\tilde{\beta}_{B''}) e^{-\sigma|\beta_{B''}|^2} \nonumber\\ &= \frac{1+\sigma\cosh(2s)}{\tau^{3/2}},\nonumber\\
f_{01}&=\frac{1}{\pi}\int d^2\beta_{B''}X_{\ketbra{0}{1}{}}(\tilde{\beta}_{B''})X_{\ketbra{1}{0}{}}(-\tilde{\beta}_{B''}) e^{-\sigma|\beta_{B''}|^2} \nonumber\\ &=f_{10}, \end{align} where $\tau = 1 + \sigma^2 +2\sigma\cosh(2s)$, with $s$ being the single-mode squeezing parameter in Eq.~(\ref{s}). The final fidelity of hybrid entanglement after teleportation is given by \begin{equation} F_{tel}^{(small)} = \frac{1}{4}\left(f_{00}+f_{11}+ f_{10} + f_{01}\right), \label{fidelity_cv} \end{equation} which tends to unity as the channel loss approaches zero ($T_A=T_B\rightarrow 1$). The detailed simulation results are presented in the next section.
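The resulting closed form in Eq.~(\ref{fidelity_cv}) can be evaluated directly (a minimal Python sketch; the function name is our own, and the formulas transcribe the $f$ terms above):

```python
import math

def F_tel_small(s, sig):
    """Teleportation fidelity of a small hybrid state, Eq. (fidelity_cv),
    for single-mode squeezing s and channel variance sigma."""
    tau = 1 + sig ** 2 + 2 * sig * math.cosh(2 * s)
    f00 = 1 / math.sqrt(tau)
    f11 = ((2 + sig ** 2 + 2 * sig ** 4
            + 4 * sig * (1 + sig ** 2) * math.cosh(2 * s)
            + 3 * sig ** 2 * math.cosh(4 * s))
           / (2 * math.sqrt(math.exp(-2 * s) + sig)
              * (math.exp(2 * s) + sig) ** 2.5
              * (math.exp(-2 * s) + sig) ** 2))
    f10 = (1 + sig * math.cosh(2 * s)) / tau ** 1.5
    return (f00 + f11 + 2 * f10) / 4   # f01 = f10

# Lossless limit: sigma = exp(-2r) -> 0 as r grows, and the fidelity -> 1
assert abs(F_tel_small(0.3, 0.0) - 1.0) < 1e-12
# A nonzero channel variance degrades the fidelity
assert F_tel_small(0.3, 0.2) < 1.0
```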
\section{Results} \label{result}
In this section, we aim to compare the long-distance distribution of hybrid entanglement using two different schemes: one scheme (in Fig.~\ref{direct_setup}) uses the direct distribution from the satellite, with one terrestrial node receiving the DV mode and the other node the CV mode; and one scheme (in Fig.~\ref{teleported_setup}) uses the TMSV channel created by the satellite to teleport one mode of an initially localised hybrid state from one terrestrial node to another. We compare the two schemes in terms of the fidelity and logarithmic negativity.
We simulate the hybrid entangled state in Eq.~(\ref{hybrid_CD}) with different real values of $\alpha_0$,\footnote{ The same simulation was run with complex values of $\alpha_0$ as well. However, for both direct distribution and DV-mode teleportation, we found that the fidelity is the highest when the initial cat states are small and real. } especially in the cases of small cat states ($\alpha_0~<~0.5$) and large cat states ($\alpha_0~>~1$). For DV-mode teleportation, when calculating the summation in Eq.~(\ref{teleported_output}), we stop at a value of $k=k_{max}$. Since the average photon number is given by $\alpha_0^2$, we see that $k_{max}$ should be at least $\alpha_0^2$ \cite{parker2018thesis}. Practically, to find $k_{max}$, we start with a high value, then slowly reduce $k_{max}$ while checking that $\textrm{Tr}\left(\ketbra{cat_\pm}{cat_\pm}{}\right) = 1 \pm \delta$, where $\delta$ is the tolerance. In the following simulations, we use the values of $\delta=10^{-14}$ and $k_{max} \approx 30$.
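This truncation procedure can be sketched numerically (a minimal Python sketch; the photon-number distribution follows from $\ket{cat_\pm}\propto\ket{\alpha_0}\pm\ket{-\alpha_0}$, and the function names \texttt{cat\_trace} and \texttt{find\_kmax} are our own):

```python
import math

def cat_trace(alpha0, k_max, parity=+1):
    """Trace of |cat_+-><cat_+-| truncated at Fock number k_max.
    parity = +1 for cat_+ (even photon numbers only), -1 for cat_- (odd only)."""
    norm = 2 * (1 + parity * math.exp(-2 * alpha0 ** 2))   # N_+-^2
    tr = 0.0
    for n in range(k_max + 1):
        if (-1) ** n != parity:
            continue
        tr += 4 * math.exp(-alpha0 ** 2) * alpha0 ** (2 * n) / math.factorial(n) / norm
    return tr

def find_kmax(alpha0, delta=1e-14, start=60):
    """Reduce k_max from a high starting value while both cat traces
    stay within delta of unity."""
    k = start
    while k > 0 and abs(cat_trace(alpha0, k - 1) - 1) < delta \
              and abs(cat_trace(alpha0, k - 1, -1) - 1) < delta:
        k -= 1
    return k

assert abs(cat_trace(1.5, 30) - 1) < 1e-13       # k_max ~ 30 suffices here
assert 3 <= find_kmax(1.5) <= 40                  # at least alpha0^2, well below start
```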
In our first simulation, we assume that the two down-link channels are symmetric, i.e., $T_A = T_B = T$. The total loss of the two channels can be calculated as $-10 \log_{10}(T^2)$ (dB). The teleportation gain is set to $g=1$, which is the optimal gain when $T_A=T_B$ \cite{takeda2013gaintuning,do2019hybrid}. The fidelity is calculated by Eqs.~(\ref{arbitrary_alpha}), (\ref{fidelity_rho}), (\ref{teleported_output}), (\ref{fidelity_large_alpha}), and (\ref{fidelity_cv}). Fig.~\ref{fidelity_large} shows the case of large cat states, where we selectively adopted values of $\alpha_0$ in the range $1$--$2$ ($\alpha_0 = 1\rightarrow 2$).
We first vary the initial squeezing $r$ of the TMSV channel and find that the fidelity of teleportation increases as $r$ increases, then saturates to its maximum value when $r\approx 2.5$, which we will refer to as the optimal squeezing parameter. Thus, in this figure, we plot the fidelity for $r=2.5$ and a lower value of $r=0.5$. We found that, for higher loss, the effect of the initial squeezing $r$ on the teleportation results becomes less significant. This implies that for long-distance communications, we need not expend excessive experimental resources on enhancing the channel squeezing.
We can also see from Fig.~\ref{fidelity_large} that the fidelity of DV-mode teleportation stays the same regardless of $\alpha_0$, while the fidelities of CV-mode teleportation and direct distribution decrease when $\alpha_0$ increases. In particular, for $r=2.5$, our simulation shows that when $\alpha_0>1.2$, the fidelity of DV-mode teleportation is always higher than direct distribution, and the difference further increases with increasing $\alpha_0$ and higher channel loss.
When $\alpha_0$ decreases below 1.2, the fidelity of direct distribution starts to become higher than that of DV-mode teleportation at low channel loss. However, as the loss increases, the fidelity of direct distribution quickly decreases, while the fidelity of DV-mode teleportation decreases more slowly and then saturates at a higher value. The loss level where the crossover occurs (which we will henceforth refer to as the `crossover loss') increases as $\alpha_0$ decreases, and varies from 0dB to 5dB as $\alpha_0$ varies from 1.2 to 1.
This crossover can be explained by how the channel loss enters the calculations of the direct distribution scheme (via both DV and CV loss components) as compared to the teleportation channel (via two CV loss components). Ultimately, our detailed mathematical models offer a full explanation of these findings.
\begin{figure*}
\caption{Direct Distribution}
\caption{DV-mode Teleportation, $g=1$}
\caption{The fidelities of direct distribution (a) and DV-mode teleportation (b,c) are plotted as $T_A$ and $T_B$ are independently varied. We fixed $\alpha_0=1.5$, $r=2.5$, and $g=1$ (b) or $g=\sqrt{T_B/T_A}$ (c).
}
\label{3d_f_direct}
\label{3d_f_teleport}
\caption{DV-mode Teleportation, $g=\sqrt{T_B/T_A}$}
\label{3d_f_teleport_g}
\label{3D}
\end{figure*}
\begin{figure*}
\caption{Direct Distribution}
\caption{DV-mode Teleportation, $g=1$}
\label{3d_ln_direct}
\label{3d_ln_teleport}
\caption{DV-mode Teleportation, $g=\sqrt{T_B/T_A}$}
\label{3d_ln_teleport_g}
\caption{The logarithmic negativities of direct distribution (a) and DV-mode teleportation (b,c), plotted as in Fig.~\ref{3D} with $\alpha_0=1.5$, $r=2.5$, and $g=1$ (b) or $g=\sqrt{T_B/T_A}$ (c).}
\label{3D_LN}
\end{figure*}
In addition, in Fig.~\ref{fidelity_large}, under all loss conditions, teleporting the DV mode of the hybrid entangled state is always better than, or equal to, teleporting the CV mode. The difference becomes more significant when $\alpha_0$ increases. Our results can be explained by the notion that for a given squeezing level and a given loss level in the CV teleportation channel, the teleportation of larger Fock states yields a lower fidelity than that of smaller Fock states. Only in the limit of zero loss would this not be the case. As such, superpositions of Fock states weighted towards small photon numbers should be `easier' to teleport. Our DV state can be considered loosely as such a superposition of Fock states.
Fig.~\ref{fidelity_small} shows the case of small cat states, where we selectively simulated $\alpha_0 = 0.1$--$0.5$. In this case, we can see that the fidelities of the three schemes all remain almost unchanged, and the differences between them cannot be seen from the curves in the plot. For losses below 7dB, we can also see that direct distribution gives a higher fidelity than teleportation. However, when the loss increases above 7dB, teleportation is always better. In addition, for all loss conditions, we find that it does not matter whether we teleport the DV or the CV mode of the hybrid entanglement, since the output fidelity is the same. This symmetry can be explained by the fact that when $\alpha_0<0.5$, Eq.~(\ref{s}) gives $s<0.1$, so that the SMSV and 1PS states in Eq.~(\ref{cat_approximation}) are very lightly squeezed. Intuitively, we can understand that when the cat states become small, their average photon numbers approach either 0 or 1. As a result, the cat states approach $\ket{0}$ and $\ket{1}$, and the approximated hybrid entangled state in Eq.~(\ref{approx_hybrid}) approaches a DV entangled state, which is spatially symmetric. In general, we can see that lower photon-number states, especially qubit states, are less sensitive to channel loss. Thus, the case of small $\alpha_0$ provides the upper bounds for the fidelities of the three schemes.
For cat states of intermediate sizes where $0.5<\alpha_0<1$, our simulations show that the fidelity of DV-mode teleportation still stays unchanged, while the fidelities of direct distribution and CV-mode teleportation decrease monotonically as $\alpha_0$ increases. For this intermediate range, the fidelities naturally stay in between the bounds set by large and small $\alpha_0$.
To study asymmetric channels, we next vary $T_A$ and $T_B$ independently, while plotting the 3D map for the fidelity. Since large cat states are more useful for quantum computations, in this simulation we fix $\alpha_0=1.5$ and $r=2.5$. Since our simulations have shown that DV-mode teleportation is always preferable to CV-mode teleportation, especially for large $\alpha_0$, we will only plot the fidelity for direct distribution and DV-mode teleportation. From the plot of direct distribution (Fig.~\ref{3d_f_direct}), we see that the fidelity decreases monotonically with decreasing $T_A$ and $T_B$. In particular, the fidelity decreases faster with the loss in the CV mode (mode $B$). For DV-mode teleportation, in order to study the effect of gain-tuning, we set the gain to either $g=1$ or $g=\sqrt{T_B/T_A}$.\footnote{This $g$ value was previously found to be optimal with regard to logarithmic negativity (see section~\ref{section_dv_teleportation} and \cite{do2019hybrid}).} As can be seen in Fig.~\ref{3d_f_teleport}, when $g=1$, the output fidelity is highest when the channels are symmetric ($T_A=T_B$), but quickly drops to near zero when $T_A\neq T_B$. In Fig.~\ref{3d_f_teleport_g}, we can see that tuning the teleportation gain to $g=\sqrt{T_B/T_A}$ significantly improves the fidelity for asymmetric channels \cite{do2019hybrid}. During gain tuning, it is especially important to maintain a high transmissivity in Bob's down-link channel: when $T_B$ is high, the fidelity remains reasonably high for all values of $T_A$.
In the next simulation, we use the same parameters as the previous one, but we plot the logarithmic negativity using Eq.~(\ref{ELN_rho}). From the results in Fig.~\ref{3D_LN}, we see that the logarithmic negativity follows the same trend as the fidelity; however, the former decreases more sharply with low transmissivities and channel asymmetry as compared to the latter. We also see that tuning the gain to the optimal value of $g=\sqrt{T_B/T_A}$ significantly improves the quality of the teleportation outcomes.
We also perform a series of similar simulations where the initial hybrid entangled cat state in Eq.~(\ref{hybrid_ket}) is replaced by a hybrid coherent state of the form \cite{kreis2012classifying}\cite{sheng2013hybrid} \begin{align}
( \ket{\alpha_0}_C\ket{0}_D + \ket{-\alpha_0}_C\ket{1}_D)/\sqrt{2}. \end{align} We find that for large $\alpha_0$, all the conclusions found for our hybrid cat state remain intact. Indeed, the detailed results show trends for both direct and teleported states very similar to those shown in Fig.~\ref{fidelity_large}.
For small $\alpha_0$, clear differences in both the direct and teleported states can be identified relative to Fig.~\ref{fidelity_small}. However, we again emphasize that large-$\alpha_0$ states are the states of interest for almost all hybrid DV-CV quantum protocols. States beyond large cat states remain worthy of study, particularly in regard to upper bounds on the teleported fidelities.
\section{Conclusion} \label{conclusion} Hybrid technologies play an important role in interfacing mixed quantum protocols, which will in turn help accelerate the global development of quantum networks. In this work, we studied, for the first time, the teleportation of either mode of a hybrid entangled state via a lossy TMSV channel, and compared such teleportation to the case where the hybrid entanglement is generated on board a satellite and directly distributed to Earth. Due to the complexity of the hybrid entangled state, especially in the cross-diagonal matrices with multi-photon numbers, we resorted to the characteristic function formalism. By identifying symmetries within this formalism, we then developed a novel mathematical framework to calculate the fidelity of the teleported hybrid state. Our results showed that when the initial cat states have $\alpha_0>1.2$, it was always preferable to teleport the DV mode of the hybrid entangled state. For cat states with $\alpha_0<1.2$, direct distribution gave a higher fidelity than teleportation at low channel loss. However, for losses higher than 7dB, the teleportation of the DV mode was always better, regardless of $\alpha_0$. We note that losses above 7dB are expected for satellite-to-Earth channels for transceiver apertures of reasonable sizes. For all loss conditions, we found CV-mode teleportation always gave a fidelity that is less than or equal to that of DV-mode teleportation, with equality when the cat states become small ($\alpha_0<0.5$). Similar results to the above were found for a hybrid state in which the cat states were replaced by large coherent states of opposite phases. Our results will be important for next-generation heterogeneous quantum communication networks, whose teleportation resource is sustained by standard TMSV entanglement distribution from LEO satellites.
\begin{appendices}
\section{CV-mode teleportation of an exact hybrid entangled state} \label{appendix_cat_exact} This appendix uses the exact mathematical form of the hybrid entangled state (Eq.~(\ref{hybrid_CD})) to calculate the characteristic function and fidelity after CV-mode teleportation. The mathematical model below is especially useful for hybrid states with intermediate sizes ($0.5<\alpha_0<1$), where a simple approximation does not exist.
The characteristic function of the exact hybrid entangled state is found by substituting Eq.~(\ref{hybrid_CD}) into Eqs.~(\ref{characteristic_function_define}) and (\ref{X_function}), giving
\begin{align} \bigchi_{h}^{(exact)}(\beta_D,\beta_C) &= \frac{1}{2}\left[X_{\ketbra{0}{0}{}}(\beta_D) X_{\ketbra{cat_-}{cat_-}{}}(\beta_C) \right.\nonumber\\ &\qquad + X_{\ketbra{1}{1}{}}( \beta_D) X_{\ketbra{cat_+}{cat_+}{}}(\beta_C) \nonumber\\ &\qquad+X_{\ketbra{0}{1}{}}(\beta_D) X_{\ketbra{cat_-}{cat_+}{}}(\beta_C) \nonumber\\ &\qquad + \left. X_{\ketbra{1}{0}{}}(\beta_D) X_{\ketbra{cat_+}{cat_-}{}}( \beta_C)\right]. \end{align}
To find the fidelity after teleporting the CV mode, we follow the calculations in section~\ref{section_fidelity_cv}, giving \begin{align}
&F_{tel}^{(exact)} = \frac{1}{4}\left[ \frac{1}{\pi}\int d^2\beta_D|X_{\ketbra{0}{0}{}}(\beta_D)|^2 \right.\nonumber\\
&\times \frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{cat_-}{cat_-}{}}(\beta_{B''})|^2 \bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''}) \nonumber\\
&+\frac{1}{\pi}\int d^2\beta_D|X_{\ketbra{1}{1}{}}(\beta_D)|^2 \nonumber\\
&\times \frac{1}{\pi}\int d^2\beta_{B''}|X_{\ketbra{cat_+}{cat_+}{}}(\beta_{B''})|^2 \bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''}) \nonumber\\ &+\frac{1}{\pi}\int d^2\beta_D X_{\ketbra{1}{0}{}}(\beta_D)X_{\ketbra{0}{1}{}}(-\beta_D) \nonumber\\ &\times \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{cat_+}{cat_-}{}}(\beta_{B''}) X_{\ketbra{cat_-}{cat_+}{}}(-\beta_{B''}) \nonumber\\ &\qquad\quad \times\bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''})\nonumber\\ &+{\frac{1}{\pi}}\int d^2\beta_D X_{\ketbra{0}{1}{}}(\beta_D) X_{\ketbra{1}{0}{}}(-\beta_D)\nonumber\\ & \times \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{cat_-}{cat_+}{}}(\beta_{B''}) X_{\ketbra{cat_+}{cat_-}{}}(-\beta_{B''})\nonumber\\
& \qquad\quad \times\left.\bigchi_{TMSV}^{(T_A,T_B)}(-\beta_{B''}^*,-\beta_{B''})\right]. \end{align} The integrations over mode $D$ still give $\pi$. For the integrations over mode $B''$, after applying Eq.~(\ref{TMSV_minus}), the exact fidelity is given by \begin{equation} F_{tel}^{(exact)} = \frac{1}{4}(f_{++cat} + f_{--cat} + f_{+-cat} + f_{-+cat}), \label{f_exact} \end{equation} where \begin{align} f_{++cat} &= \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{cat_+}{cat_+}{}}(\beta_{B''})X_{\ketbra{cat_+}{cat_+}{}}(-\beta_{B''})\nonumber\\
&\qquad\quad\times e^{-\sigma|\beta_{B''}|^2},\nonumber\\ f_{--cat} &= \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{cat_-}{cat_-}{}}(\beta_{B''})X_{\ketbra{cat_-}{cat_-}{}}(-\beta_{B''})\nonumber\\
&\qquad\quad\times e^{-\sigma|\beta_{B''}|^2},\nonumber\\ f_{+-cat} &= \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{cat_+}{cat_-}{}}(\beta_{B''})X_{\ketbra{cat_-}{cat_+}{}}(-\beta_{B''})\nonumber\\
&\qquad\quad\times e^{-\sigma|\beta_{B''}|^2},\nonumber\\ f_{-+cat} &= \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{cat_-}{cat_+}{}}(\beta_{B''})X_{\ketbra{cat_+}{cat_-}{}}(-\beta_{B''})\nonumber\\
&\qquad\quad\times e^{-\sigma|\beta_{B''}|^2}. \end{align} In order to calculate the above integrations, we first apply Eq.~(\ref{cat}) to find \begin{align} &X_{\ketbra{cat_+}{cat_+}{}}(\beta)= \frac{1}{N_+^2}( \nonumber\\ &\quad X_{\ketbra{\alpha_0}{\alpha_0}{}}+X_{\ketbra{-\alpha_0}{-\alpha_0}{}}+X_{\ketbra{\alpha_0}{-\alpha_0}{}}+X_{\ketbra{-\alpha_0}{\alpha_0}{}}),\nonumber\\ &X_{\ketbra{cat_-}{cat_-}{}}(\beta) = \frac{1}{N_-^2}(\nonumber\\ &\quad X_{\ketbra{\alpha_0}{\alpha_0}{}}+X_{\ketbra{-\alpha_0}{-\alpha_0}{}}-X_{\ketbra{\alpha_0}{-\alpha_0}{}}-X_{\ketbra{-\alpha_0}{\alpha_0}{}}),\nonumber\\ &X_{\ketbra{cat_+}{cat_-}{}}(\beta) = \frac{1}{N_+N_-}( \nonumber\\ &\quad X_{\ketbra{\alpha_0}{\alpha_0}{}}-X_{\ketbra{-\alpha_0}{-\alpha_0}{}}-X_{\ketbra{\alpha_0}{-\alpha_0}{}}+X_{\ketbra{-\alpha_0}{\alpha_0}{}}),\nonumber\\ &X_{\ketbra{cat_-}{cat_+}{}}(\beta) = \frac{1}{N_+N_-}( \nonumber\\ &\quad X_{\ketbra{\alpha_0}{\alpha_0}{}}-X_{\ketbra{-\alpha_0}{-\alpha_0}{}}+X_{\ketbra{\alpha_0}{-\alpha_0}{}}-X_{\ketbra{-\alpha_0}{\alpha_0}{}}). \label{X_function_cat} \end{align} We then define the following integrations \begin{align} f_{jlmn}&= \frac{1}{\pi}\int d^2\beta_{B''} X_{\ketbra{j\alpha_0}{l\alpha_0}{}}(\beta_{B''})X_{\ketbra{m\alpha_0}{n\alpha_0}{}}(-\beta_{B''})\nonumber\\
&\qquad\quad\times e^{-\sigma|\beta_{B''}|^2}, \end{align} where $j,l,m,n\in\{+,-\}$. By substituting Eq.~(\ref{cf_coh}) into the above equation and evaluating with Mathematica, we find \begin{align} f_0 &= f_{\pm\pm\pm\pm} = \frac{1}{1+\sigma},\nonumber\\ f_1 &= f_{\pm\mp\mp\pm} = \frac{1}{1+\sigma} \exp\left(-\alpha_0^2 \frac{4\sigma}{1+\sigma}\right),\nonumber\\ f_2 &= f_{\pm\pm\mp\mp} = \frac{1}{1+\sigma} \exp\left(-\alpha_0^2 \frac{4}{1+\sigma}\right),\nonumber\\ f_3 &= f_{\pm\mp\pm\mp} = \frac{1}{1+\sigma} \exp\left(-4\alpha_0^2 \right),\nonumber\\ f_4 &= f_{\pm\pm\pm\mp}=f_{\mp\mp\pm\mp} \nonumber\\ &=f_{\pm\mp\pm\pm}=f_{\pm\mp\mp\mp}= \frac{1}{1+\sigma} \exp\left(-2\alpha_0^2 \right). \label{f01234} \end{align} We can then find that \begin{align} f_{++cat} &= \frac{2}{N_+^4}(f_0+f_1+f_2+f_3 +4f_4),\nonumber\\ f_{--cat} &= \frac{2}{N_-^4}(f_0+f_1+f_2+f_3 -4f_4),\nonumber\\ f_{+-cat} &= f_{-+cat} =\frac{2}{N_+^2N_-^2}(f_0+f_1-f_2-f_3 ). \label{f_pm_cat} \end{align} From Eqs.~(\ref{f_exact}), (\ref{f01234}), and (\ref{f_pm_cat}), we can find the exact fidelity of CV-mode teleportation. From our simulation, we find that for cat states with intermediate sizes ($0.5<\alpha_0<1$), the fidelity stays between that of large and small cat states, and decreases monotonically as $\alpha_0$ increases.
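These closed forms make the exact fidelity straightforward to evaluate numerically. The following is a minimal sketch, not the authors' code: it assumes the standard cat normalizations $N_\pm^2 = 2(1\pm e^{-2\alpha_0^2})$, writes the overlap exponents in terms of $\alpha_0^2$ for real $\alpha_0$, and treats $\sigma$ as a free channel-noise parameter (it would be computed from the squeezing and transmissivities via Eq.~(\ref{TMSV_minus})).

```python
import numpy as np

def exact_cv_fidelity(alpha0, sigma):
    """Assemble the exact CV-mode teleportation fidelity of Eq. (f_exact).

    alpha0 is real; sigma is the effective Gaussian noise parameter
    (sigma -> 0 is the lossless, infinite-squeezing limit).
    """
    e2 = np.exp(-2 * alpha0**2)
    Np2 = 2 * (1 + e2)      # N_+^2 (standard cat normalization, assumed)
    Nm2 = 2 * (1 - e2)      # N_-^2
    pre = 1 / (1 + sigma)
    f0 = pre
    f1 = pre * np.exp(-alpha0**2 * 4 * sigma / (1 + sigma))
    f2 = pre * np.exp(-alpha0**2 * 4 / (1 + sigma))
    f3 = pre * np.exp(-4 * alpha0**2)
    f4 = pre * np.exp(-2 * alpha0**2)
    f_pp = (2 / Np2**2) * (f0 + f1 + f2 + f3 + 4 * f4)   # f_{++cat}; N_+^4 = (N_+^2)^2
    f_mm = (2 / Nm2**2) * (f0 + f1 + f2 + f3 - 4 * f4)   # f_{--cat}
    f_pm = (2 / (Np2 * Nm2)) * (f0 + f1 - f2 - f3)       # f_{+-cat} = f_{-+cat}
    return 0.25 * (f_pp + f_mm + 2 * f_pm)

print(exact_cv_fidelity(0.7, 0.0))   # 1 up to rounding: perfect teleportation limit
```

The $\sigma \rightarrow 0$ limit returning unit fidelity is a useful internal consistency check on Eqs.~(\ref{f_exact})--(\ref{f_pm_cat}).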
\section{Direct distribution over a lossy channel} \label{appendix_direct_distribution} This section derives the attenuated hybrid entangled state after the direct distribution in Fig.~\ref{direct_setup} and section~\ref{direct_distribution}. The original hybrid entangled state in Eq.~(\ref{hybrid_ket}), together with two auxiliary vacuum modes, can be written as the four-mode state \begin{align} &\ket{\psi_{A\epsilon_A B \epsilon_B}} = \frac{1}{\sqrt{2}}\left[ \frac{\ket{\alpha_0}_A - \ket{-\alpha_0}_A}{N_-}\ket{0}_{\epsilon_A} \otimes \ket{0}_B \ket{0}_{\epsilon_B} \right.\nonumber\\ &+ \frac{\ket{\alpha_0}_A + \ket{-\alpha_0}_A}{N_+}\ket{0}_{\epsilon_A}\otimes \hat{a}^\dagger_B\ket{0}_B\ket{0}_{\epsilon_B}\left.\vphantom{\frac{1}{2}}\right], \end{align} where $N_\pm$ is given in Eq.~(\ref{N_cat}). The two lossy down-link channels are modeled by two beam splitters. Let $t_l = \sqrt{T_l}$ and $r_l = \sqrt{1-T_l}$ with $l \in \{A,B\}$. After the two beam-splitter transformations in Eqs.~(\ref{BStransformation_CV}) and (\ref{BStransformation_DV}), the state becomes \begin{align} &\ket{\psi_{A'{\epsilon_{A'}} B' {\epsilon_{B'}}}} = \frac{1}{\sqrt{2}}\left\{ \vphantom{\frac{1}{2}} \right. \nonumber\\ &\left[\frac{\ket{t_A \alpha_0}_{A'} \ket{r_A \alpha_0}_{\epsilon_{A'}} - \ket{-t_A\alpha_0}_{A'} \ket{-r_A\alpha_0}_{\epsilon_{A'}}}{N_-} \otimes \ket{0}_{B'} \ket{0}_{\epsilon_{B'}}\right] \nonumber\\ &+ \left[\frac{\ket{t_A \alpha_0}_{A'} \ket{r_A \alpha_0}_{\epsilon_{A'}} + \ket{-t_A\alpha_0}_{A'} \ket{-r_A\alpha_0}_{\epsilon_{A'}}}{N_+}\right.\nonumber\\ & \quad\left.\left. \otimes \left(t_B\ket{1}_{B'} \ket{0}_{\epsilon_{B'}} + r_B\ket{0}_{B'}\ket{1}_{\epsilon_{B'}}\right) \vphantom{\frac{1}{2}}\right]\right\}. 
\end{align} The corresponding density matrix can be written as \begin{align} &\rho_{A'{\epsilon_{A'}} B' {\epsilon_{B'}}} = \frac{1}{2} \left\{\vphantom{\frac{r_B^2}{N_+^2}}\right.\nonumber\\ &\quad\;\ketbra{1}{1}{{\epsilon_{B'}}}\otimes\left[\frac{r_B^2}{N_+^2} \ketbra{0}{0}{B'}\otimes(\rho_{++}+\rho_{--}+\rho_{-+}+\rho_{+-})\right]\nonumber\\ &+\ketbra{0}{0}{{\epsilon_{B'}}}\otimes\left[\frac{t_B^2}{N_+^2} \ketbra{1}{1}{B'}\otimes(\rho_{++}+\rho_{--}+\rho_{-+}+\rho_{+-})\right.\nonumber\\ &\qquad \quad\;\;\,\quad + \frac{1}{N_-^2}\; \; \ketbra{0}{0}{B'}\otimes(\rho_{++}+\rho_{--}-\rho_{-+}-\rho_{+-})\nonumber\\ &\qquad \quad \;\;\,+ \frac{t_B}{N_+ N_-} \ketbra{0}{1}{B'}\otimes(\rho_{++}-\rho_{--}+\rho_{-+}-\rho_{+-})\nonumber\\ &\qquad \quad \;\,+ \left.\frac{t_B}{N_+ N_-} \ketbra{1}{0}{B'}\otimes(\rho_{++}-\rho_{--}-\rho_{-+}+\rho_{+-})\right]\nonumber\\ &+ \left.\ketbra{0}{1}{{\epsilon_{B'}}}\otimes(\cdots) + \ketbra{1}{0}{{\epsilon_{B'}}}\otimes(\cdots)\vphantom{\frac{1}{2}}\right\}, \end{align} where \begin{align} \rho_{\pm\pm} &= \ketbra{\pm t_A\alpha_0}{\pm t_A\alpha_0}{A'}\otimes \ketbra{\pm r_A\alpha_0}{\pm r_A\alpha_0}{{\epsilon_{A'}}},\nonumber\\ \rho_{\pm\mp} &= \ketbra{\pm t_A\alpha_0}{\mp t_A\alpha_0}{A'}\otimes \ketbra{\pm r_A\alpha_0}{\mp r_A\alpha_0}{{\epsilon_{A'}}}. \end{align} The states indicated by the triple dots are not important, because when tracing out the auxiliary mode ${\epsilon_{B'}}$ from the state $\rho_{A'{\epsilon_{A'}} B' {\epsilon_{B'}}} $, the terms with $\ketbra{0}{1}{{\epsilon_{B'}}}$ and $\ketbra{1}{0}{{\epsilon_{B'}}}$ are removed. The tracing of ${\epsilon_{B'}}$ also equates $\ketbra{1}{1}{{\epsilon_{B'}}}$ and $\ketbra{0}{0}{{\epsilon_{B'}}}$ to unity. 
The tracing of ${\epsilon_{A'}}$ can be done by applying the integrations in Eq.~(\ref{trace_cv}), which transforms $\rho_{\pm\pm}$ and $\rho_{\pm\mp}$ to \begin{align} \rho'_{\pm\pm} &= \ketbra{\pm t_A\alpha_0}{\pm t_A\alpha_0}{A'},\nonumber\\ \rho'_{\pm\mp} &= \ketbra{\pm t_A\alpha_0}{\mp t_A\alpha_0}{A'}e^{-2r_A^2 \alpha_0^2}, \end{align} respectively. After tracing out ${\epsilon_{A'}}$ and ${\epsilon_{B'}}$, we can find the two-mode directly distributed state \begin{align} &\rho_{A'B'} = \textrm{Tr}_{{\epsilon_{A'}}{\epsilon_{B'}}} \left[\rho_{A'{\epsilon_{A'}} B'{\epsilon_{B'}}}\right] = \frac{1}{2} \left\{\vphantom{\frac{r_B^2}{N_+^2}}\right.\nonumber\\ &\qquad\; \; \frac{r_B^2}{N_+^2} \; \ketbra{0}{0}{B'}\otimes(\rho'_{++}+\rho'_{--}+\rho'_{-+}+\rho'_{+-})\nonumber\\ &\quad+\;\frac{t_B^2}{N_+^2}\; \ketbra{1}{1}{B'}\otimes(\rho'_{++}+\rho'_{--}+\rho'_{-+}+\rho'_{+-})\nonumber\\ &\quad+\;\frac{1}{N_-^2} \; \ketbra{0}{0}{B'}\otimes(\rho'_{++}+\rho'_{--}-\rho'_{-+}-\rho'_{+-})\nonumber\\ &+\frac{t_B}{N_+ N_-} \ketbra{0}{1}{B'}\otimes(\rho'_{++}-\rho'_{--}+\rho'_{-+}-\rho'_{+-})\nonumber\\ &+\frac{t_B}{N_+ N_-} \ketbra{1}{0}{B'}\otimes(\rho'_{++}-\rho'_{--}-\rho'_{-+}+\rho'_{+-})\left.\vphantom{\frac{t_B}{N_+ N_-}}\right\}. \end{align} By rearranging the terms, we come to the directly-distributed state given by Eq.~(\ref{arbitrary_alpha}).
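The damping factor $e^{-2r_A^2\alpha_0^2}$ multiplying $\rho'_{\pm\mp}$ is simply the coherent-state overlap $\langle \mp r_A\alpha_0 | \pm r_A\alpha_0\rangle$ picked up when the loss mode ${\epsilon_{A'}}$ is traced out. A quick numerical sanity check (a sketch, truncating the Fock expansion):

```python
import numpy as np
from math import factorial

def coherent_fock(alpha, k_max=60):
    """Fock amplitudes of the coherent state |alpha>, real alpha, truncated at k_max."""
    n = np.arange(k_max + 1)
    return np.exp(-alpha**2 / 2) * alpha**n / np.sqrt([float(factorial(k)) for k in n])

# Tracing out the loss mode damps the coherences rho'_{+-} by the overlap
# <-r_A alpha0 | r_A alpha0> = exp(-2 r_A^2 alpha0^2); check it numerically.
alpha0, T_A = 1.5, 0.6
r_A = np.sqrt(1 - T_A)
overlap = np.dot(coherent_fock(-r_A * alpha0), coherent_fock(r_A * alpha0))
print(overlap, np.exp(-2 * r_A**2 * alpha0**2))
```

The two printed values agree to numerical precision, confirming the analytic damping factor used in the derivation.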
\end{appendices}
\end{document}
\begin{document}
\begin{abstract} We investigate the strength of a randomness notion $\mathcal R$ as a set-existence principle in second-order arithmetic: for each $Z$ there is an $X$ that is $\mathcal R$-random relative to $Z$. We show that the equivalence between $2$-randomness and being infinitely often $C$-incompressible is provable in $\mathsf{RCA}_0$. We verify that $\mathsf{RCA}_0$ proves the basic implications among randomness notions: $2$-random $\Rightarrow$ weakly $2$-random $\Rightarrow$ Martin-L\"of random $\Rightarrow$ computably random $\Rightarrow$ Schnorr random. Also, over $\mathsf{RCA}_0$ the existence of computable randoms is equivalent to the existence of Schnorr randoms. We show that the existence of balanced randoms is equivalent to the existence of Martin-L\"of randoms, and we describe a sense in which this result is nearly optimal. \end{abstract}
\title{Randomness notions and reverse mathematics}
\section{Introduction} \subsection*{Randomness} The theory of randomness via algorithmic tests has its beginnings in Martin-L\"of's paper \cite{MartinLof:68}, in the work of Schnorr~\cites{Schnorr:71,Schnorr:75}, as well as in the work of Demuth such as~\cite{Demuth:75a}. Each of these authors employed algorithmic tools to introduce tests of whether an infinite bit sequence is random. Rather than an absolute notion of algorithmic randomness, a hierarchy of randomness notions emerged based on the strength of the algorithmic tools that were allowed. Martin-L\"of introduced the randomness notion now named after him, which was based on uniformly computably enumerable sequences of open sets in Cantor space. Schnorr considered more restricted tests based on computable betting strategies, which led to the weaker notion now called computable randomness and the even weaker notion now called Schnorr randomness. Notions of randomness stronger than Martin-L\"of's but still arithmetical were introduced somewhat later by Kurtz \cite{Kurtz:81}. Of importance for us will be 2-randomness (namely, ML-randomness relative to the halting problem), and the notion of weak 2-randomness intermediate between 2-randomness and ML-randomness. See Sections~\ref{sec-RandFormal} and~\ref{sec-Implications} for the formal definitions.
The field of algorithmic randomness entered a period of intense activity from the late 1990s, with a flurry of research papers leading to the publication of two textbooks~\cites{NiesBook, DHBook}. One reason for this was the realization, going back to Ku\v{c}era~\cites{Kucera:85,Kucera:86}, that sets satisfying randomness notions interact in a meaningful way with the computational complexity of Turing oracles (the latter is a prime topic in computability theory). One can discern two main directions in the study of randomness notions:
\noindent \emph{(A) Characterizing theorems.} Give conditions on bit sequences that are equivalent to being random in a particular sense, and thereby reveal more about the randomness notions. The Levin-Schnorr theorem characterizes ML-randomness of $Z$ by the incompressibility of $Z$'s initial segments in the sense of the prefix-free descriptive string complexity $K$: $$Z \text{ is ML-random } \Leftrightarrow \, \exists b \forall n \, K(Z {\restriction} n) \ge n-b.$$ 2-randomness is equivalent to being infinitely often incompressible in the sense of the plain descriptive string complexity $C$
\cites{NiesStephanTerwijn,MillerKRand} (see also~\cite[Theorem~3.6.10]{NiesBook}): $$Z \text{ is 2-random } \Leftrightarrow \, \exists b \exists^\infty n \, C(Z {\restriction} n) \ge n-b.$$ There are also examples of characterizations not relying on the descriptive complexity of initial segments. For instance, a bit sequence $Z$ is 2-random iff $Z$ is ML-random and the halting probability $\Omega$ is ML-random relative to it; $Z$ is weakly 2-random iff $Z$ is ML-random and bounds no incomputable set that is below the halting problem.
\noindent \emph{(B) Separating theorems.} Given randomness notions that appear to be close to each other, one wants to find a bit sequence that is random in the weaker sense but not in the stronger sense. For instance, Schnorr provided a sequence that is Schnorr random but not computably random. For a more recent example, Day and Miller \cite{Day.Miller:15} separated notions only slightly stronger than ML-randomness. They provided a sequence that is difference random but not density random. Difference randomness, introduced via so-called difference tests, is equivalent to being ML-random and Turing incomplete~\cite{Franklin.Ng:10}. Density randomness, by definition, is the combination of ML-randomness and satisfying the conclusion of the Lebesgue density theorem for effectively closed sets.
Some separations of randomness notions are open problems. For instance, it is unknown whether Oberwolfach randomness is stronger than density randomness~\cite[Section 6]{Miyabe.Nies.Zhang:16} and of course whether ML-randomness is stronger than Kolmogorov-Loveland randomness \cites{Ambos.Kucera:00,Miller.Nies:06}.
Some motivation for obtaining separations of notions that appear to be close was provided by the above-mentioned interaction of randomness with computability and, in particular, with lowness properties of oracles. The Turing incomplete ML-random set obtained in the Day/Miller result is Turing above all $K$-trivial sets because it is not density random~\cite{Bienvenu.Day.ea:14}. Whether such a set exists had been open for eight years \cite{Miller.Nies:06}.
\subsection*{The viewpoint of reverse mathematics} Our purpose is to study randomness notions from the viewpoint of {reverse mathematics}. This program in the foundations of mathematics, introduced by H.\ Friedman~\cite{Friedman}, attempts to classify the axiomatic strength of mathematical theorems. The typical goal in reverse mathematics is to determine which axioms are necessary and sufficient to prove a given mathematical statement. In order to do this, one fixes a base axiom system over which the reasoning is done. Then one asks which stronger axioms must be added to this base system in order to prove a given statement.
Algorithmic randomness plays an important role in reverse mathematics. Axioms asserting that random sets exist are interesting because typically they are weak compared to traditional comprehension schemes; in particular, the randomness notions that we consider produce axioms that are weaker than arithmetical comprehension. They still have important mathematical consequences, particularly concerning measure theory. Given a randomness notion~$\mathcal R$, we consider the statement ``for every set $Z$, there is a set $X$ that is $\mathcal R$-random relative to $Z$.'' Informally, we refer to this statement as the ``existence of $\mathcal R$-random sets.''
Quite a bit is known in the case that $\mathcal R$ is Martin-L\"of randomness. The existence of Martin-L\"of random sets is equivalent to \emph{weak weak K\"onig's lemma}, which states that every binary-branching tree of positive measure has an infinite path. This equivalence is obtained by formalizing a classic result of Ku\v{c}era (see e.g.~\cite[Proposition~3.2.24]{NiesBook}).
Via the equivalence, the existence of Martin-L\"of random sets is also equivalent to the statement ``every Borel measure on a compact complete separable metric space is countably additive''~\cite{YuSimpson}, as well as to the monotone convergence theorem for Borel measures on compact metric spaces~\cite{YuLebesgue} (see also~\cite[Section~X.1]{SimpsonSOSOA}).
Recently~\cite{NiesTriplettYokoyama}, equivalences between the existence of Martin-L\"of random sets and well-known theorems from analysis have been found: ``every continuous function of bounded variation is differentiable at some point'' and ``every continuous function of bounded variation is differentiable almost everywhere.''
In this paper we mainly consider the reverse mathematics of randomness notions other than Martin-L\"of's. The two directions outlined above lead to two types of questions.
\begin{itemize} \item[(A)] Examine whether characterizing theorems can be proved over a weak axiomatic system such as $\mathsf{RCA}_0$.
\item[(B)] For randomness notions that appear close to each other yet can be separated, see whether nonetheless the corresponding existence principles are equivalent over $\mathsf{RCA}_0$. (If so, this would give a precise meaning to the intuition that the notions are close.) \end{itemize}
\subsection*{Results} Our first result follows direction (A): we investigate the above-mentioned fact that a set is $2$-random if and only if it is infinitely often $C$-incompressible. Formalizing randomness notions relative to the halting problem is delicate in weak axiomatic systems because of subtleties involving the induction axioms. For example, the existence of $2$-random sets does not imply $\Sigma^0_2$-bounding (equivalently, $\Delta^0_2$-induction)~\cite{SlamanFrag}, but weak weak K\"onig's lemma for $\Delta^0_2$ trees (i.e., \emph{$2$-weak weak K\"onig's lemma}) does imply $\Sigma^0_2$-bounding~\cite{AvigadDeanRute}. Therefore the equivalence between the existence of $2$-random sets and $2$-weak weak K\"onig's lemma requires $\Sigma^0_2$-bounding. It is then natural to ask how much induction is required to prove theorems about $2$-random sets. We show that the equivalence between $2$-randomness and infinitely often $C$-incompressibility can be proved without appealing to $\Sigma^0_2$-bounding. Formalizing infinitely often $C$-incompressibility in weak systems is straightforward, whereas formalizing $2$-randomness in terms of tests is not, so we hope that the formalized equivalence between the two notions will prove useful in future applications of algorithmic randomness in reverse mathematics, in addition to being technically interesting.
Towards direction (B), as a motivating example consider balanced randomness, introduced in \cite[Section 7]{FigueiraHirschfeldtMillerNgNies} (see Definition~\ref{df:balanced} below), which was the first notion slightly stronger than ML-randomness to be considered (Oberwolfach, density, and difference randomness, discussed above, are even closer to ML-randomness). The existence of Martin-L\"of random sets is equivalent to the existence of balanced random sets (Theorem~\ref{thm-MLRimpBran} below). Always working relative to the same oracle: if a balanced random set exists, then that set is Martin-L\"of random; conversely, if a Martin-L\"of random set exists, then at least one of its ``halves'' (i.e., either the bits in the even positions or the bits in the odd positions) is balanced random, so a balanced random set exists.
We show in Theorem~\ref{thm-WKLnothWDR} that the preceding equivalence is nearly optimal, in the sense that if $h \colon \mathbb{N} \rightarrow \mathbb{N}$ is any function that eventually dominates every function of the form $n \mapsto k^n$, then the existence of $h$-weakly Demuth random sets is strictly stronger than the existence of Martin-L\"of random sets.
Still following (B), we show that
the existence of Schnorr random sets is equivalent to the existence of computably random sets (Theorem~\ref{thm-SRimpCR}). In all cases, we actually prove that the equivalence holds for the same oracle.
In the alternative context of the Muchnik and Medvedev degrees (see~\cites{HinmanSurvey, SimpsonSurvey, SimpsonMassProblemsRandomness} for background), related work has recently been done by Miyabe~\cite{Miyabe}. He views randomness notions as mass problems (so there is no relativization). Miyabe shows that computable randomness and Schnorr randomness are Muchnik equivalent but not Medvedev equivalent, and he gives a similar result for difference randomness versus ML-randomness. Yet another alternative context for (B) is given by the Weihrauch degrees. Randomness notions are now viewed as multivalued functions mapping an oracle $X$ to the sets random in $X$. See~\cites{Brattka2015LasVC, brattka2016algebraic, Brattka2017UniformComputability}. In the Weihrauch degrees, ML-randomness is strictly weaker than weak weak K\"onig's lemma. Brattka and Pauly~\cite[Proposition~6.6]{brattka2016algebraic} exactly characterize ML-randomness in terms of weak weak K\"onig's lemma and a weak choice principle.
This paper is organized as follows. In Section~\ref{sec-RM}, we recall the basic axiom system $\mathsf{RCA}_0$ and establish notational conventions. In Sections~\ref{sec-RandFormal} and \ref{sec-Implications}, we explain how the randomness notions we discussed above can be formalized in second-order arithmetic. In Section~\ref{sec-Inc}, we show that the equivalence between $2$-randomness and infinitely often $C$-incompressibility can be proved in $\mathsf{RCA}_0$. In Sections~\ref{sec-Implications} and~\ref{s:wD}, we study implications and equivalences among randomness notions as set-existence principles that can be proved in $\mathsf{RCA}_0$. In Section~\ref{sec-NonImp}, we exhibit non-implications over $\mathsf{RCA}_0$ among certain randomness notions, recursion-theoretic principles, and combinatorial principles.
\section{Preliminaries}\label{sec-RM}
\subsection*{Basic axioms}
We provide a short introduction to the typical base system of reverse mathematics $\mathsf{RCA}_0$ that suits our purposes here. We refer the reader to Simpson~\cite{SimpsonSOSOA} for further details. The setting of $\mathsf{RCA}_0$ is second-order arithmetic. Its axioms consist of: \begin{itemize} \item
The basic axioms of Peano arithmetic (denoted $\mathsf{PA^{-}}$) expressing that the natural numbers form a discretely-ordered commutative semi-ring with $1$;
\item the \emph{$\Sigma^0_1$ induction scheme} ($\mathsf{I}\Sigma^0_1$, for short), which consists of the universal closures of all formulas of the form \begin{align*} \bigl(\varphi(0) \wedge \forall n(\varphi(n) \rightarrow \varphi(n+1))\bigr) \rightarrow \forall n \varphi(n),\label{fmla-IND}\tag{$\star$} \end{align*} where $\varphi$ is $\Sigma^0_1$;
\item the \emph{$\Delta^0_1$ comprehension scheme}, which consists of the universal closures of all formulas of the form \begin{align*} \forall n (\varphi(n) \leftrightarrow \psi(n)) \rightarrow \exists X \forall n(n \in X \leftrightarrow \varphi(n)), \end{align*} where $\varphi$ is $\Sigma^0_1$, $\psi$ is $\Pi^0_1$, and $X$ is not free in $\varphi$. \end{itemize}
`$\mathsf{RCA}_0$' stands for `recursive comprehension axiom,' which refers to the $\Delta^0_1$ comprehension scheme, and the `$0$' indicates that the induction scheme is restricted to $\Sigma^0_1$ formulas. The intuition is that $\mathsf{RCA}_0$ corresponds to computable mathematics. To show that some set exists when working in $\mathsf{RCA}_0$, one must show how to compute that set from existing sets.
$\mathsf{RCA}_0$ proves many variants of the $\Sigma^0_1$ induction scheme, which we list here for the reader's reference. In the list below, $\varphi$ is a formula and $\Gamma$ is a class of formulas. \begin{itemize} \item The \emph{induction axiom for $\varphi$} is the universal closure of \eqref{fmla-IND} above. The $\Gamma$ induction scheme consists of the induction axioms for all $\varphi \in \Gamma$.
\item The \emph{least element principle for $\varphi$} is the universal closure of the formula \begin{align*} \exists n \varphi(n) \rightarrow \exists n[\varphi(n) \wedge (\forall m < n)(\neg\varphi(m))]. \end{align*} The $\Gamma$ least element principle consists of the least element principles for all $\varphi \in \Gamma$.
\item The \emph{bounded comprehension axiom for $\varphi$} is the universal closure of the formula \begin{align*} \forall b \exists X \forall n[n \in X \leftrightarrow (n < b \wedge \varphi(n))], \end{align*} where $X$ is not free in $\varphi$. The bounded $\Gamma$ comprehension scheme consists of the bounded comprehension axioms for all $\varphi \in \Gamma$.
\item The \emph{bounding (or collection) axiom for $\varphi$} is the universal closure of the formula \begin{align*} \forall a[(\forall n < a)(\exists m)\varphi(n,m) \rightarrow \exists b(\forall n < a)(\exists m < b)\varphi(n,m)], \end{align*} where $a$ and $b$ are not free in $\varphi$. The $\Gamma$ bounding scheme consists of the bounding axioms for all $\varphi \in \Gamma$. \end{itemize}
In addition to $\mathsf{I}\Sigma^0_1$, $\mathsf{RCA}_0$ proves \begin{itemize} \item the $\Pi^0_1$ induction scheme ($\mathsf{I}\Pi^0_1$);
\item the $\Sigma^0_1$ least element principle and the $\Pi^0_1$ least element principle;
\item the bounded $\Sigma^0_1$ comprehension scheme and the bounded $\Pi^0_1$ comprehension scheme;
\item the $\Sigma^0_1$ bounding scheme ($\mathsf{B}\Sigma^0_1$). \end{itemize}
The schemes $\mathsf{I}\Sigma^0_1$, $\mathsf{I}\Pi^0_1$, the $\Sigma^0_1$ least element principle, the $\Pi^0_1$ least element principle, the bounded $\Sigma^0_1$ comprehension scheme, and the bounded $\Pi^0_1$ comprehension scheme are all equivalent over $\mathsf{PA^{-}}$ (or over $\mathsf{PA^{-}}$ plus $\Delta^0_1$ comprehension in the case of the bounded comprehension schemes). The scheme $\mathsf{B}\Sigma^0_1$ is weaker. $\mathsf{RCA}_0$ does \textbf{not} prove the $\Pi^0_1$ bounding scheme ($\mathsf{B}\Pi^0_1$), which is equivalent to both the $\Sigma^0_2$ bounding scheme ($\mathsf{B}\Sigma^0_2$) and the $\Delta^0_2$ induction scheme. We refer the reader to~\cite[Section~I.2]{HajekPudlak} and \cite[Section~II.3]{SimpsonSOSOA} for proofs of these facts. The equivalence of $\mathsf{B}\Sigma^0_2$ and $\Delta^0_2$ induction is proved in~\cite{SlamanInd}.
$\mathsf{RCA}_0$ suffices to implement the typical codings ubiquitous in computability theory. Finite strings, finite sets, integers, rational numbers, etc.\ are coded in the usual way. Real numbers are coded by rapidly converging Cauchy sequences of rational numbers. $\mathsf{RCA}_0$ also suffices to define Turing reducibility $\leq_\mathrm{T}$ and an effective sequence $(\Phi_e)_{e \in \mathbb{N}}$ of all Turing functionals (see~\cite[Section~VII.1]{SimpsonSOSOA}).
\subsection*{Notation} Let us fix some notation and terminology for strings. $\mathbb{N}^{<\mathbb{N}}$ denotes the set of all finite strings, and $2^{<\mathbb{N}}$ denotes the set of all finite binary strings. We also sometimes use $2^n$ to denote the set of binary strings of length $n$, use $2^{< n}$ to denote the set of binary strings of length less than $n$, etc. For strings $\sigma$ and $\tau$, $|\sigma|$ denotes the length of $\sigma$, $\sigma \subseteq \tau$ denotes that $\sigma$ is an initial segment of $\tau$, $\sigma^\smallfrown\tau$ denotes the concatenation of $\sigma$ and $\tau$, and $\sigma {\restriction} n = \langle \sigma(0), \dots, \sigma(n-1) \rangle$ denotes the initial segment of $\sigma$ of length $n$ (when $n \leq |\sigma|$). The `$\subseteq$' and `${\restriction}$' notation extend to second-order objects, thought of as infinite strings. For example, $\sigma \subseteq X$ denotes that $\sigma$ is an initial segment of $X$, and $X {\restriction} n = \langle X(0), \dots, X(n-1) \rangle$ denotes the initial segment of $X$ of length $n$. For a string $\sigma$, $[\sigma]$ denotes the basic open set determined by $\sigma$, i.e., the class of all $X$ such that $\sigma \subseteq X$. Likewise, if $U$ is a set of strings, then $[U]$ represents the open set determined by $U$, and $X \in [U]$ abbreviates $(\exists \sigma \in U)(\sigma \subseteq X)$. As usual, a \emph{tree} is a set $T \subseteq \mathbb{N}^{<\mathbb{N}}$ that is closed under initial segments: $\forall \sigma \forall \tau((\sigma \in T \wedge \tau \subseteq \sigma) \rightarrow \tau \in T)$. $T^n = \{\sigma \in T : |\sigma| = n\}$ denotes the $n$\textsuperscript{th} level of tree $T$. A function $f$ is a \emph{path} through a tree $T$ if every initial segment of $f$ is in $T$: $\forall n (f {\restriction} n \in T)$. $[T]$ denotes the set of paths through tree $T$.
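The string operations above are simple enough to mirror executably. The following Python sketch is purely illustrative (all names are ours, and finite strings stand in for initial segments of infinite sequences):

```python
def restrict(X, n):
    """X restricted to n: the initial segment of X of length n
    (X given here as a finite string long enough to contain it)."""
    return X[:n]

def is_initial_segment(sigma, tau):
    """sigma is an initial segment of tau."""
    return tau[:len(sigma)] == sigma

def in_open_set(U, X):
    """X lies in the open set [U]: some sigma in U is a prefix of X."""
    return any(is_initial_segment(sigma, X) for sigma in U)
```

For infinite $X$, membership in $[U]$ is in general only semi-decidable; here $U$ is finite, so the check terminates.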
We follow the common convention distinguishing the two symbols `$\mathbb{N}$' and `$\omega$' in reverse mathematics. `$\mathbb{N}$' denotes the (possibly non-standard) first-order part of whatever structure is implicitly under consideration, whereas `$\omega$' denotes the standard natural numbers. We write `$\mathbb{N}$' when explicitly working in a formal system, such as when proving some implication over $\mathsf{RCA}_0$. We write `$\omega$' when constructing a standard model of $\mathsf{RCA}_0$ witnessing some non-implication.
\section{Formalizing algorithmic randomness in second-order arithmetic}\label{sec-RandFormal}
Here and at the beginning of Section~\ref{sec-Implications} we provide a reference for formalized definitions from effective topology and algorithmic randomness for use in $\mathsf{RCA}_0$, following the style of~\cite{AvigadDeanRute}. The notions we review here are easily seen to form a linear hierarchy according to randomness strength; verifying these implications in $\mathsf{RCA}_0$, however, will require some effort.
In order to define Martin-L\"of randomness in $\mathsf{RCA}_0$, we must define (codes for) effectively open sets and the measures of these sets. We could of course consider $2^\mathbb{N}$ as a complete separable metric space in $\mathsf{RCA}_0$ (see~\cite[Section~II.5]{SimpsonSOSOA}) and use the corresponding notion of open set. Instead, we use the following equivalent formulation because it more closely resembles the definition used in algorithmic randomness, and it makes defining an open set's measure a little easier. \begin{Definition}[$\mathsf{RCA}_0$]\label{def-EffOpenSet}{\ } \begin{itemize} \item A \emph{code for a $\mbf{\Sigma^0_1}$ set} (or an \emph{open} set) is a sequence $(B_i)_{i \in \mathbb{N}}$, where each $B_i$ is a coded finite subset of $2^{<\mathbb{N}}$.
\item A \emph{code for a $\Sigma^{0,Z}_1$ set} is a code $(B_i)_{i \in \mathbb{N}}$ for a $\mbf{\Sigma^0_1}$ set with $(B_i)_{i \in \mathbb{N}} \leq_\mathrm{T} Z$.
\item A \emph{code for a uniform sequence of $\Sigma^{0,Z}_1$ sets} is a double-sequence $(B_{n,i})_{n,i \in \mathbb{N}} \leq_\mathrm{T} Z$, where $(B_{n,i})_{i \in \mathbb{N}}$ is a code for a $\Sigma^{0,Z}_1$ set for each $n \in \mathbb{N}$.
\item If $\MP U = (B_i)_{i \in \mathbb{N}}$ codes a $\mbf{\Sigma^0_1}$ set, then `$X \in \MP U$' denotes $\exists i (X \in [B_i])$. \end{itemize} \end{Definition}
Equivalently, we could take a code for a $\Sigma^{0, Z}_1$ set to be an index $e$ for $W_e^Z = \dom(\Phi_e^Z)$. Typically, we write `$\MP U$ is a $\Sigma^{0, Z}_1$ set' and `$(\MP{U}_n)_{n \in \mathbb{N}}$ is a uniform sequence of $\Sigma^{0, Z}_1$ sets' instead of `$\MP U$ codes a $\Sigma^{0, Z}_1$ set' and `$(\MP{U}_n)_{n \in \mathbb{N}}$ codes a uniform sequence of $\Sigma^{0, Z}_1$ sets.'
Now we define Lebesgue measure for $\mbf{\Sigma^0_1}$ sets. \begin{Definition}[$\mathsf{RCA}_0$]{\ } \begin{itemize}
\item Let $B \subseteq 2^{<\mathbb{N}}$ be finite. Define $\mu(B) = \sum_{\sigma \in \widehat B}2^{-|\sigma|}$, where\\ $\widehat B= \{\sigma \in B : \text{$\sigma$ has no proper initial segment in $B$}\}$.
\item Let $\MP U = (B_i)_{i \in \mathbb{N}}$ be a $\mbf{\Sigma^0_1}$ set. \begin{itemize} \item The \emph{Lebesgue measure} of $\MP U$ is $\mu(\MP U) = \lim_m \mu(\bigcup_{i \leq m}B_i)$ (if the limit exists). \item For $r \in \mathbb{R}$, `$\mu(\MP U) > r$' denotes $\exists m (\mu(\bigcup_{i \leq m}B_i) > r)$. \item For $r \in \mathbb{R}$, `$\mu(\MP U) \leq r$' denotes $\forall m (\mu(\bigcup_{i \leq m}B_i) \leq r)$. \end{itemize} \end{itemize} \end{Definition}
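The measure of a finite prefix set is concretely computable. The following Python sketch (the function names are ours) implements $\mu(B)$ exactly as in the definition, by first passing to the prefix-minimal hull $\widehat B$:

```python
def mu(B):
    """Lebesgue measure of the open set determined by a finite set B of
    binary strings, via the prefix-minimal hull B_hat."""
    def has_proper_prefix_in_B(sigma):
        return any(tau != sigma and sigma.startswith(tau) for tau in B)
    B_hat = {sigma for sigma in B if not has_proper_prefix_in_B(sigma)}
    # B_hat is prefix-free, so the basic open sets [sigma] for sigma in
    # B_hat are pairwise disjoint and their measures 2^{-|sigma|} add up.
    return sum(2.0 ** -len(sigma) for sigma in B_hat)
```

For example, $B = \{0, 00, 11\}$ has hull $\widehat B = \{0, 11\}$, so $\mu(B) = 2^{-1} + 2^{-2} = 3/4$.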
We warn the reader that $\mathsf{RCA}_0$ is not strong enough to prove that the limit defining $\mu(\MP U)$ exists for every $\mbf{\Sigma^0_1}$ set $\MP U$, which is why we must give explicit definitions for $\mu(\MP U) > r$ and $\mu(\MP U) \leq r$. In $\mathsf{RCA}_0$, the assertion $\mu(\MP U) = r$ includes the implicit assertion that the limit exists.
Now we can define the notions of algorithmic randomness that we consider. We start with Martin-L\"of randomness. \begin{Definition}[$\mathsf{RCA}_0$]{\ } \begin{itemize} \item A \emph{$\Sigma^{0,Z}_1$-test} (or \emph{Martin-L\"of test relative to $Z$}) is a uniform sequence $(\MP{U}_n)_{n \in \mathbb{N}}$ of $\Sigma^{0,Z}_1$ sets such that $\forall n(\mu(\MP{U}_n) \leq 2^{-n})$.
\item $X$ is \emph{$1$-random relative to $Z$} (or \emph{Martin-L\"of random relative to $Z$}) if $X \notin \bigcap_{n \in \mathbb{N}}\MP{U}_n$ for every $\Sigma^{0,Z}_1$-test $(\MP{U}_n)_{n \in \mathbb{N}}$.
\item $\mathsf{MLR}$ is the statement ``for every $Z$ there is an $X$ that is $1$-random relative to $Z$.'' \end{itemize} \end{Definition}
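As a toy illustration of the definition (the example is ours, not part of the formal development), the sequence $\MP{U}_n = [0^n]$ is a $\Sigma^0_1$-test: $\mu(\MP{U}_n) = 2^{-n}$, and the only sequence captured by the test is the all-zeros sequence.

```python
def mu(B):
    """Measure of [B] for a finite prefix-free set B of binary strings."""
    return sum(2.0 ** -len(sigma) for sigma in B)

def U(n):
    """The n-th test component, coded by the single string 0^n."""
    return {"0" * n}

def captured(prefix_of_X, n):
    """Membership of X in [U(n)], decided from a long enough finite
    prefix of X (possible here because U(n) is a finite set)."""
    return any(prefix_of_X.startswith(s) for s in U(n))
```

Any $X$ containing a $1$ escapes $\MP{U}_n$ for large $n$, so every such $X$ passes this particular test; of course, $1$-randomness requires passing all tests.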
A notion stronger than Martin-L\"of randomness is weak $2$-randomness. A weak $2$-test generalizes the concept of a Martin-L\"of test in that one no longer bounds the rate at which the measures of the components of the test converge to $0$.
\begin{Definition}[$\mathsf{RCA}_0$]{\ } \begin{itemize} \item A \emph{weak $2$-test relative to $Z$} is a uniform sequence $(\MP{U}_n)_{n \in \mathbb{N}}$ of $\Sigma^{0,Z}_1$ sets such that $\lim_n \mu(\MP{U}_n) = 0$, meaning that $\forall k \exists n (\forall m > n)(\mu(\MP{U}_m) \leq 2^{-k})$.
\item $X$ is \emph{weakly $2$-random relative to $Z$} if $X \notin \bigcap_{n \in \mathbb{N}}\MP{U}_n$ for every weak $2$-test $(\MP{U}_n)_{n \in \mathbb{N}}$ relative to $Z$.
\item $\mathsf{W2R}$ is the statement ``for every $Z$ there is an $X$ that is weakly $2$-random relative to $Z$.'' \end{itemize} \end{Definition}
Even stronger is $2$-randomness, which we define here in terms of $\Sigma^{0, Z}_2$-tests. We must first define $\mbf{\Sigma}^0_2$ sets and their measures.
\begin{Definition}[$\mathsf{RCA}_0$]\label{def-Sigma2Class}{\ } \begin{itemize} \item A \emph{code for a $\mbf{\Sigma}^0_2$ set} is a sequence $(T_i)_{i \in \mathbb{N}}$ of subtrees of $2^{<\mathbb{N}}$.
\item A \emph{code for a $\Sigma^{0,Z}_2$ set} is a code for a $\mbf{\Sigma}^0_2$ set $(T_i)_{i \in \mathbb{N}}$ with $(T_i)_{i \in \mathbb{N}} \leq_\mathrm{T} Z$.
\item A \emph{code for a uniform sequence of $\Sigma^{0,Z}_2$ sets} is a double-sequence $(T_{n,i})_{n,i \in \mathbb{N}} \leq_\mathrm{T} Z$, where $(T_{n,i})_{i \in \mathbb{N}}$ is a code for a $\Sigma^{0,Z}_2$ set for each $n \in \mathbb{N}$.
\item If $\MP W = (T_i)_{i \in \mathbb{N}}$ codes a $\mbf{\Sigma}^0_2$ set, then $X \in \MP W$ denotes $\exists i \forall n (X {\restriction} n \in T_i)$. \end{itemize} \end{Definition}
Again, we write `$\MP W$ is a $\Sigma^{0,Z}_2$ set,' etc.\ instead of `$\MP W$ codes a $\Sigma^{0,Z}_2$ set,' etc.
\begin{Definition}[$\mathsf{RCA}_0$]
Let $(T_i)_{i \in \mathbb{N}}$ be a sequence of trees that codes the $\mbf{\Sigma}^0_2$ set $\MP W$. Let $q \in \mathbb{Q}$. Then `$\mu(\MP W) \leq q$' denotes $\forall i \exists n(2^{-n}|\bigcup_{j \leq i}T_j^n| \leq q)$. \end{Definition}
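To see this definition in action, consider a toy example of ours: a single tree $T$ consisting of the binary strings comparable with $00$. Then $[T] = [00]$, $|T^n| = 2^{n-2}$ for $n \geq 2$, and the level measures $2^{-n}|T^n|$ witness $\mu(\MP W) \leq \tfrac14$ at every level $n \geq 2$.

```python
from itertools import product

def tree_level(rho, n):
    """Level n of the tree of all binary strings comparable with rho:
    the length-n prefix of rho, or all length-n extensions of rho."""
    if n <= len(rho):
        return {rho[:n]}
    return {rho + "".join(bits) for bits in product("01", repeat=n - len(rho))}

# Level measures 2^{-n} * |T^n| for the tree comparable with "00":
# they start at 1, 1/2 and then stabilize at 1/4 = mu([00]).
levels = [2.0 ** -n * len(tree_level("00", n)) for n in range(10)]
```

Since some level has measure $\leq \tfrac14$, the definition certifies $\mu(\MP W) \leq \tfrac14$ for this $\MP W$.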
\begin{Definition}[$\mathsf{RCA}_0$; \cite{AvigadDeanRute}]{\ } \begin{itemize} \item A \emph{$\Sigma^{0,Z}_2$-test} is a uniform sequence $(\MP{W}_n)_{n \in \mathbb{N}}$ of $\Sigma^{0,Z}_2$ sets such that $\forall n(\mu(\MP{W}_n) \leq 2^{-n})$.
\item A set $X$ is \emph{$2$-random relative to $Z$} if $X \notin \bigcap_{n \in \mathbb{N}}\MP{W}_n$ for every $\Sigma^{0,Z}_2$-test $(\MP{W}_n)_{n \in \mathbb{N}}$.
\item $\mlrr{2}$ is the statement ``for every $Z$ there is an $X$ that is $2$-random relative to $Z$.'' \end{itemize} \end{Definition}
\section{$\mlrr{2}$ and $C$-incompressibility over $\mathsf{RCA}_0$}\label{sec-Inc}
The statement $\mlrr 2$ (i.e., the existence of 2-random sets) is well-studied in reverse mathematics. For instance, in the presence of the scheme $\mathsf{B}\Sigma^0_2$ (i.e., $\Sigma^0_2$-bounding), $\mlrr 2$ is equivalent to two formalizations of the dominated convergence theorem~\cite{AvigadDeanRute}, and it implies the rainbow Ramsey theorem for pairs and $2$-bounded colorings~\cites{CsimaMileti,ConidisSlaman}.
The goal of this section is to prove the equivalence between $2$-randomness and infinitely often $C$-incompressibility in $\mathsf{RCA}_0$. The difficulty in doing so is in avoiding arbitrary computations relative to $Z'$ for a set $Z$ (in the sense described in the discussion of $\mathsf{DNR}$ in Section~\ref{sec-NonImp}). In general, $\mathsf{B}\Sigma^0_2$ is required to show that if $\forall n(\Phi^{Z'}(n){\downarrow})$, then for every $n$ the sequence $\sigma = \langle \Phi^{Z'}(0), \dots, \Phi^{Z'}(n-1) \rangle$ of the first $n$ values of $\Phi^{Z'}$ exists, because this is essentially an arbitrary instance of bounded $\Delta^0_2$ comprehension, which is equivalent to $\Delta^0_2$ induction and hence to $\mathsf{B}\Sigma^0_2$. Thus there is a danger of needing $\mathsf{B}\Sigma^0_2$ when working with computations relative to $Z'$ in $\mathsf{RCA}_0$. Furthermore, we wish to give proofs that are as concrete as possible, meaning that we prefer to work with objects that exist as sets in $\mathsf{RCA}_0$, such as codes for tests, rather than with virtual objects defined by formulas, such as $Z'$ and sets computable from $Z'$. This is one reason why we prefer the formalization of $2$-randomness relative to $Z$ in terms of $\Sigma^{0,Z}_2$-tests to the formalization in terms of $1$-randomness relative to $Z'$.
In $\mathsf{RCA}_0$, we may define the standard optimal plain oracle machine $\mathbb{V}$ from an effective sequence of all Turing functionals in the usual way. We may then discuss plain complexity relative to a set $Z$ by writing \begin{itemize}
\item $C^Z(\sigma) \leq n$ if there is a $\tau$ such that $|\tau| \leq n$ and $\mathbb{V}^Z(\tau) = \sigma$ (and similarly with `$<$' in place of `$\leq$');
\item $C^Z(\sigma) > n$ if $C^Z(\sigma) \nleq n$ (and similarly with `$\geq$' in place of `$>$');
\item $C^Z(\sigma) = n$ if $n$ is least such that $C^Z(\sigma) \leq n$. \end{itemize}
$\mathsf{RCA}_0$ proves, using $\mathsf{I}\Sigma^0_1$ in the guise of the $\Sigma^0_1$ least element principle, that for every $\sigma$ there is an $n$ such that $C(\sigma) = n$. However, the function $\sigma \mapsto C(\sigma)$ is not computable and therefore $\mathsf{RCA}_0$ does not prove that this function exists.
\begin{Definition}[$\mathsf{RCA}_0$]{\ } \begin{itemize} \item $X$ is \emph{eventually $C^Z$-compressible} if $\forall b \forall^\infty m (C^Z(X {\restriction} m) < m-b)$.
\item $X$ is \emph{infinitely often $C^Z$-incompressible} if $\exists b \exists^\infty m(C^Z(X {\restriction} m) \geq m-b)$.
\item $C\mbox{-}\mathsf{INC}$ is the statement ``for every $Z$ there is an $X$ that is infinitely often $C^Z$-incompressible.'' \end{itemize} \end{Definition}
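A standard counting argument, used implicitly later when bounding the measure of sets of the form $\{Y : C^Z(Y {\restriction} i) < i - b\}$, is that at most $2^{m-b} - 1$ strings of length $m$ satisfy $C^Z(\sigma) < m - b$, simply because there are fewer than $2^{m-b}$ descriptions of length $< m - b$. The following Python sketch checks this for a toy machine of our own invention (the real optimal machine $\mathbb{V}$ is of course not computable in this way):

```python
from itertools import product

def all_strings(up_to):
    """All binary strings of length strictly less than up_to."""
    return ["".join(b) for n in range(up_to) for b in product("01", repeat=n)]

def toy_machine(tau):
    """A hypothetical total machine standing in for the optimal machine:
    it decodes the description tau as 'print tau twice'."""
    return tau + tau

def C_M(machine, sigma, bound):
    """Least description length < bound producing sigma, else None."""
    lengths = [len(t) for t in all_strings(bound) if machine(t) == sigma]
    return min(lengths) if lengths else None

# At most 2^(m-b) - 1 strings of length m can be b-compressible, since
# there are only 2^(m-b) - 1 candidate descriptions -- for ANY machine.
m, b = 8, 3
compressible = [s for s in ("".join(x) for x in product("01", repeat=m))
                if C_M(toy_machine, s, m - b) is not None]
```

For this machine the compressible strings of length $8$ are exactly the $16$ strings of the form $\tau\tau$ with $|\tau| = 4$, well within the counting bound.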
We first show that $\mathsf{RCA}_0 \vdash C\mbox{-}\mathsf{INC} \rightarrow \mlrr{2}$. The original proof that every infinitely often $C$-incompressible set is $2$-random makes use of prefix-free Kolmogorov complexity relative to $0'$, which we wish to avoid. We give a proof that is similar to the one given in~\cite{BMSV}. To do this, we use the following parameterized version of~\cite[Proposition~2.1.14]{NiesBook}, which says that if $\rho(p,n,\tau,Z)$ defines a sequence of $\Sigma^{0,Z}_1$ `sets' (`sets' in quotation because, in $\mathsf{RCA}_0$, $\rho$ may not literally define a set) of requests indexed by $p$, then there is a machine $M$ such that, for every $p$, $M(p, \cdot)$ honors request set $p$.
\begin{Proposition}[$\mathsf{RCA}_0$]\label{prop-MachineExist} Let $Z$ be a set and suppose that $\rho(p,n,\tau,Z)$ is a $\Sigma^0_1$ formula such that, for each $p, n \in \mathbb{N}$, there are at most $2^n$ strings $\tau \in 2^{<\mathbb{N}}$ such that $\rho(p,n,\tau,Z)$ holds. Then there is a machine $M$ such that \begin{align*} (\forall p,n \in \mathbb{N})(\forall \tau \in 2^{<\mathbb{N}})[\rho(p,n,\tau,Z) \leftrightarrow (\exists \sigma \in 2^n)(M^Z(p,\sigma) = \tau)]. \end{align*} \end{Proposition}
\begin{proof} The proof is a straightforward (even in $\mathsf{RCA}_0$) extension of the proof of~\cite[Proposition~2.1.14]{NiesBook}. \end{proof}
\begin{Theorem} \begin{align*} \mathsf{RCA}_0 \vdash \forall X \forall Z(\text{$X$ is infinitely often $C^Z$-incompressible} \rightarrow \text{$X$ is $2$-random relative to $Z$}). \end{align*} Hence $\mathsf{RCA}_0 \vdash C\mbox{-}\mathsf{INC} \rightarrow \mlrr{2}$. \end{Theorem}
\begin{proof} We work in $\mathsf{RCA}_0$ and show that for every $X$ and $Z$, if $X$ is not $2$-random relative to $Z$, then $X$ is eventually $C^Z$-compressible.
Suppose $X$ and $Z$ are sets where $X$ is not $2$-random relative to $Z$. Let $(T_{n,i})_{n,i \in \mathbb{N}} \leq_\mathrm{T} Z$ be a code for a $\Sigma^{0, Z}_2$-test $(\MP{U}_n)_{n \in \mathbb{N}}$ capturing $X$. Assume that $(\forall n,i,j)(i \leq j \rightarrow T_{n,i} \subseteq T_{n,j})$ by replacing each $T_{n,j}$ by $\bigcup_{i \leq j}T_{n,i}$ if necessary. Note that $(\forall n,i)(\mu([T_{n,i}]) \leq 2^{-n})$ because $(\MP{U}_n)_{n \in \mathbb{N}}$ is a $\Sigma^{0, Z}_2$-test.
Recall that for a tree $T$, $T^m = \{\sigma \in T : |\sigma| = m\}$ denotes the $m$\textsuperscript{th} level of $T$. To compress the initial segments of $X$, define a parameterized $\Sigma^{0, Z}_1$ set of requests as follows. First, uniformly define auxiliary sequences $p < m_{p,0} < m_{p,1} < m_{p,2} < \cdots$ for each $p \in \mathbb{N}$ so that $(\forall p,i)(|T_{p+1,i}^{m_{p,i}}| \leq 2^{m_{p,i}-p})$, which is possible because $(\forall p,i)(\mu([T_{p+1,i}]) \leq 2^{-(p+1)})$. Let \begin{align*} \rho(p, n, \tau, Z) = \exists i[(\tau \in T_{p+1,i}^{p+n}) \wedge (m_{p,i} \leq p+n < m_{p,i+1})]. \end{align*} If $\rho(p, n, \tau, Z)$ holds, then it must be that $\tau \in T_{p+1,i}^{p+n}$ for the $i$ such that $m_{p,i} \leq p+n < m_{p,i+1}$. There are at most $2^{m_{p,i} - p} \leq 2^n$ such $\tau$ by the choice of $m_{p,i}$. Thus for every $p,n \in \mathbb{N}$, there are at most $2^n$ strings $\tau \in 2^{<\mathbb{N}}$ such that $\rho(p,n,\tau,Z)$ holds. Thus let $M$ be as in the conclusion of Proposition~\ref{prop-MachineExist} for this $\rho$. Let $N$ be a machine such that $(\forall p \in \mathbb{N})(\forall \sigma \in 2^{<\mathbb{N}})(N^Z({0^p}^\smallfrown{1}^\smallfrown\sigma) = M^Z(2^p, \sigma))$ (here we warn the reader that in $N$, `$0^p$' is the string of $0$'s of length $p$, but in $M$, `$2^p$' is the number $2^p$). Let $c \in \mathbb{N}$ be a constant such that $\forall \tau(C^Z(\tau) \leq C_N^Z(\tau) + c)$.
We show that $\forall b \forall^\infty m(C^Z(X {\restriction} m) < m-b)$ by showing that $\forall b \forall^\infty m(C_N^Z(X {\restriction} m) < m-b-c)$. Fix $b \in \mathbb{N}$. Let $p$ be large enough so that $2^p > b+c+p+1$. By the fact that $(\MP{U}_n)_{n \in \mathbb{N}}$ captures $X$, let $i_0$ be such that $(\forall i \geq i_0)(X \in [T_{2^p+1, i}])$. Now consider any $n \geq m_{2^p, i_0} - 2^p$. Let $i \geq i_0$ be the $i$ such that $m_{2^p, i} \leq 2^p + n < m_{2^p, i+1}$. Then $X {\restriction} (2^p+n) \in T_{2^p+1, i}^{2^p + n}$ by the choice of $i_0$, so $\rho(2^p, n, X {\restriction} (2^p + n), Z)$. Thus there is a $\sigma \in 2^n$ such that $N^Z({0^p}^\smallfrown{1}^\smallfrown\sigma) = M^Z(2^p, \sigma) = X {\restriction} (2^p + n)$. Thus \begin{align*}
C_N^Z(X {\restriction} (2^p + n)) \leq p + 1 + |\sigma| = p+1+n < 2^p + n - b - c. \end{align*} Therefore, if $m \geq m_{2^p, i_0}$, then $C_N^Z(X {\restriction} m) < m-b-c$, as desired. Thus $X$ is eventually $C^Z$-compressible. \end{proof}
Next we show the harder implication that $\mathsf{RCA}_0 \vdash \mlrr{2} \rightarrow C\mbox{-}\mathsf{INC}$. The proof in Miller~\cite{MillerKRand} that every $2$-random set is infinitely often $C$-incompressible uses the familiar characterization of $2$-random sets in terms of prefix-free Kolmogorov complexity relative to $0'$, which we wish to avoid. The proof in Nies, Stephan, and Terwijn~\cite{NiesStephanTerwijn} (see also Nies~\cite[Theorem 3.6.10]{NiesBook}) uses so-called \emph{compression functions} and an application of the low basis theorem. Though we did not pursue this approach in detail, we believe that it is possible to give a metamathematical version of the argument via compression functions in $\mathsf{RCA}_0$ by following the proof of~\cite[Theorem 3.6.10]{NiesBook} and using a carefully formalized version of the low basis theorem, such as H\'{a}jek and Pudl\'{a}k~\cite[Theorem~I.3.8]{HajekPudlak}. This strategy could be implemented entirely (and quite delicately) in $\mathsf{RCA}_0$, or it could be implemented by observing that the desired statement \begin{align*} \forall X \forall Z (\text{$X$ is $2$-random relative to $Z$} \rightarrow \text{$X$ is infinitely often $C^Z$-incompressible})\label{fmla-2RandImpCinc}\tag{$\star$} \end{align*} is $\Pi^1_1$ and by appealing to conservativity. A classic result of Harrington is that every countable model of $\mathsf{RCA}_0$ can be extended to a countable model of $\mathsf{WKL}_0$ with the same first-order part (see~\cite[Theorem~IX.2.1]{SimpsonSOSOA}). It follows that $\mathsf{WKL}_0$ is $\Pi^1_1$-conservative over $\mathsf{RCA}_0$. By combining the proof of Harrington's result with the proof of the formalized low basis theorem from H\'{a}jek and Pudl\'{a}k, one may ensure that the sets in the extended model of $\mathsf{WKL}_0$ are all low in the sense of H\'{a}jek and Pudl\'{a}k.
This yields that $\mathsf{RCA}_0$ plus the statement ``every infinite binary-branching tree has a low infinite path'' is $\Pi^1_1$-conservative over $\mathsf{RCA}_0$. The conceptual advantage of the conservativity strategy over the directly-in-$\mathsf{RCA}_0$ strategy is that one may assume that the desired compression function actually exists as a second-order object instead of merely being defined by some formula. We thank Keita Yokoyama for many helpful comments concerning metamathematical approaches to showing that $\mathsf{RCA}_0 \vdash$~\eqref{fmla-2RandImpCinc}.
We prefer a concrete argument given in $\mathsf{RCA}_0$ to the metamathematical approach outlined above, and find it interesting that a concrete argument is possible. Our argument is a formalization of the proof presented in Bauwens~\cite{Bauwens}, which itself is based on the proof in Bienvenu et al.~\cite{BMSV}. The proof in~\cite{Bauwens} proceeds via the following covering result.
\begin{Theorem}[{Conidis \cite[Theorem~3.1]{Conidis}}]\label{thm-Conidis} Let $q \in \mathbb{Q}$, and let $(\MP{U}_i)_{i \in \omega}$ be a uniform sequence of $\Sigma^0_1$ sets such that $\mu(\MP{U}_i) \leq q$ for each $i$. For every $p \in \mathbb{Q}$ with $p > q$, there is a $\Sigma^{0,0'}_1$ set $\MP V$ such that $\mu(\MP V) \leq p$ and $\bigcap_{i \geq N}\MP{U}_i \subseteq \MP V$ for each $N$. Furthermore, $\MP V$ is produced uniformly from an index $e$ such that $\Phi_e = (\MP{U}_i)_{i \in \omega}$, as well as from $q$ and $p$. \end{Theorem}
Assuming Theorem~\ref{thm-Conidis}, we sketch the argument that no eventually $C$-compressible set $X$ is $2$-random. Suppose that $\forall^\infty i (C(X {\restriction} i) < i-b)$ for each $b$. We want to find a $\Sigma^{0,0'}_1$-test capturing $X$. Define a double-sequence $(\MP{U}_{b,i})_{b,i \in \omega}$ of $\Sigma^0_1$ sets by taking $\MP{U}_{b,i} = \{Y : C(Y {\restriction} i) < i-b\}$. Then $ \mu(\MP{U}_{b,i}) \leq 2^{-b}$ for each $b$ and $i$. By Theorem~\ref{thm-Conidis}, we obtain a $\Sigma^{0,0'}_1$-test $(\MP{V}_b)_{b \in \omega}$ such that $ \bigcap_{i \geq N}\MP{U}_{b+1, i} \subseteq \MP{V}_b$ for each $b$ and $N$. The test $(\MP{V}_b)_{b \in \omega}$ captures $X$ because for each $b$ there is an $N$ such that $(\forall i > N)(C(X {\restriction} i) < i-(b+1))$, and hence $ X \in \bigcap_{i \geq N}\MP{U}_{b+1, i}$, which is contained in $ \MP{V}_b$. Thus $X$ is not $2$-random.
The proof of Theorem~\ref{thm-Conidis} in~\cite{Bauwens} makes use of an inclusion-exclusion principle for open sets provable in $\mathsf{RCA}_0$. We include the standard proof in order to convince the reader that it can be carried out in $\mathsf{RCA}_0$. \begin{Lemma}[$\mathsf{RCA}_0$]\label{lem-IncExcl} Let $\MP A, \MP B \subseteq 2^{\mathbb{N}}$ be open sets, and let $a,b,r \in \mathbb{Q}^{\geq 0}$ be such that $\mu(\MP A) \leq a$, $\mu(\MP B) \leq b$ and $\mu(\MP{A} \cup \MP{B}) > r$. Then $\mu(\MP{A} \cap \MP{B}) \leq a+b-r$. \end{Lemma}
\begin{proof} Suppose for a contradiction that $\mu(\MP{A} \cap \MP{B}) > a+b-r$. Then $\mu(\MP{A} \cap \MP{B}) > a+b-r + 2^{-n}$ for some $n \in \mathbb{N}$, so there is a clopen $\MP C \subseteq \MP{A} \cap \MP{B}$ with $a+b-r + 2^{-(n+1)} \leq \mu(\MP C) \leq a+b-r + 2^{-n}$. Let $\MP{A}_0 = \MP{A} \setminus \MP{C}$, and let $\MP{B}_0 = \MP{B} \setminus \MP{C}$. Note that $\mu(\MP{A}_0) \leq a - (a+b-r + 2^{-(n+1)}) = r-b - 2^{-(n+1)}$ and that $\mu(\MP{B}_0) \leq b - (a+b-r + 2^{-(n+1)}) = r-a - 2^{-(n+1)}$. Then \begin{align*} \mu(\MP{A} \cup \MP{B}) &\leq \mu(\MP{A}_0) + \mu(\MP{B}_0) + \mu(\MP C)\\ &\leq (r-b - 2^{-(n+1)}) + (r-a - 2^{-(n+1)}) + (a+b-r + 2^{-n}) = r. \end{align*} This contradicts $\mu(\MP{A} \cup \MP{B}) > r$. \end{proof}
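The lemma's inequality is just inclusion-exclusion rearranged. A quick numeric check on concrete clopen sets (the example data is ours; clopen sets are represented by the length-$n$ strings they contain):

```python
from itertools import product

def clopen(prefixes, n):
    """The clopen set [prefixes], listed as the length-n strings it
    contains (n at least the longest prefix length)."""
    return {"".join(b) for b in product("01", repeat=n)
            if any("".join(b).startswith(p) for p in prefixes)}

def mu(S, n):
    """Measure of a clopen set given by its set S of length-n strings."""
    return len(S) / 2.0 ** n

n = 4
A = clopen({"0"}, n)          # mu(A) = 1/2
B = clopen({"00", "10"}, n)   # mu(B) = 1/2
# Lemma: mu(A) <= a, mu(B) <= b, mu(A | B) > r imply mu(A & B) <= a+b-r.
a, b, r = 0.5, 0.5, 0.7       # here mu(A | B) = 3/4 > 0.7
```

Here $\mu(\MP A \cap \MP B) = 1/4 \leq 0.3 = a + b - r$, as the lemma predicts.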
Lemma~\ref{lem-2MLRimpINCHelper} formalizes Theorem~\ref{thm-Conidis} for use in $\mathsf{RCA}_0$. Notice that the set $\MP V$ produced is now a $\Sigma^{0,Z}_2$ set, rather than a $\Sigma^{0,Z'}_1$ set.
\begin{Lemma}[$\mathsf{RCA}_0$]\label{lem-2MLRimpINCHelper} Let $Z$ be a set, let $q \in \mathbb{Q}$, and let $(\MP{U}_i)_{i \in \mathbb{N}}$ be a uniform sequence of $\Sigma^{0,Z}_1$ sets such that $\forall i(\mu(\MP{U}_i) \leq q)$. Then, for every $p \in \mathbb{Q}$ with $p > q$, there is a $\Sigma^{0,Z}_2$ set $\MP V$ such that $\mu(\MP V) \leq p$ and $\forall N(\bigcap_{i \geq N}\MP{U}_i \subseteq \MP V)$. Furthermore, $\MP V$ is produced uniformly from $Z$, an index $e$ such that $\Phi_e^Z = (\MP{U}_i)_{i \in \mathbb{N}}$, $q$, and $p$. \end{Lemma}
\begin{proof} The basic idea is to replace \begin{align*} \bigcup_{N \in \mathbb{N}} \bigcap_{i \geq N} \MP{U}_i \end{align*} by a superset of the form \begin{align*} \MP V = \bigcup_{N \in \mathbb{N}} \bigcap_{i = N}^{b_N} \MP{U}_i \end{align*}
for an appropriate sequence $0 < b_0 < b_1 < \cdots$ because $\bigcup_{N \in \mathbb{N}} \bigcap_{i \geq N} \MP{U}_i$ is too complicated, whereas $\bigcup_{N \in \mathbb{N}} \bigcap_{i = N}^{b_N} \MP{U}_i$ is open (but in our case not effectively open; we produce a $\Sigma^0_2$ code for a set that happens to be open).
We want to identify a sequence $0 < b_0 < b_1 < \cdots$ that yields $\mu(\MP V) \leq p$. The proof in~\cite{Bauwens} computes such a sequence from $0'$. We wish to avoid explicit computations relative to $0'$ because, as discussed above, analyzing such computations risks requiring $\mathsf{B}\Sigma^0_2$.
First some notation. For $a, b \in \mathbb{N}$ with $a < b$, let $\MP{U}_{a \ldots b} = \bigcap_{i=a}^b \MP{U}_i$. For a sequence $\langle b_0, b_1, \dots, b_{n-1} \rangle$ with $0 < b_0 < b_1 < \cdots < b_{n-1}$, let $$\MP{S}_{\langle b_0, \dots, b_{n-1} \rangle} = \bigcup_{j < n} \MP{U}_{j \ldots b_j}.$$ We can fix codes for these sets: \begin{itemize} \item Let $(U_{i,s})_{i,s \in \mathbb{N}} \leq_\mathrm{T} Z$ denote the code for $(\MP{U}_i)_{i \in \mathbb{N}}$ so that, for all $i$, $\bigcup_{s \in \mathbb{N}}[U_{i,s}] = \MP{U}_i$.
\item From $(U_{i,s})_{i,s \in \mathbb{N}}$, define codes $(U_{a \ldots b, s})_{s \in \mathbb{N}} \leq_\mathrm{T} Z$ uniformly for all $a,b \in \mathbb{N}$ with $a < b$ so that $\bigcup_{s \in \mathbb{N}}[U_{a \ldots b, s}] = \MP{U}_{a \ldots b}$.
\item Similarly, for every sequence $\langle b_0, b_1, \dots, b_{n-1} \rangle$ with $0 < b_0 < b_1 < \cdots < b_{n-1}$, uniformly define codes $(S_{\langle b_0, \dots, b_{n-1} \rangle, s})_{s \in \mathbb{N}} \leq_\mathrm{T} Z$ so that $\bigcup_{s \in \mathbb{N}}[S_{\langle b_0, \dots, b_{n-1} \rangle, s}] = \MP{S}_{\langle b_0, \dots, b_{n-1} \rangle}$. \end{itemize}
Notice that if $\langle b_0, b_1, \dots, b_{n-1} \rangle$ is a sequence with $0 < b_0 < b_1 < \cdots < b_{n-1}$, then $\bigcap_{i \geq N}\MP{U}_i \subseteq \MP{S}_{\langle b_0, \dots, b_{n-1} \rangle}$ holds when $N < n$.
We would like to define $\MP V$ by taking the union of sets of the form $\MP{S}_{\langle b_0, \dots, b_{n-1} \rangle}$ for longer and longer sequences $\langle b_0, \dots, b_{n-1} \rangle$. However, we also need to ensure that $\mu(\MP V) \leq p$. Thus we need to find sequences $\langle b_0, \dots, b_{n-1} \rangle$ where $\MP{S}_{\langle b_0, \dots, b_{n-1} \rangle}$ has small measure and that additionally are extendible to longer sequences $\langle b_0, \dots, b_{m-1} \rangle \supseteq \langle b_0, \dots, b_{n-1} \rangle$ where $\MP{S}_{\langle b_0, \dots, b_{m-1} \rangle}$ also has small measure. Part~(ii) of the following claim says that this is possible: there are sequences $\langle b_0, \dots, b_{n-1} \rangle$ of arbitrary length such that for every subsequence $\langle b_0, \dots, b_k \rangle$ with $k < n$, the set $\MP{S}_{\langle b_0, \dots, b_k \rangle} \cup \MP{U}_i$ has small measure for all $i > b_k$. The main technical work to prove the claim is in its Part~(i).
\begin{Claim}\label{claim-LongSeq}{\ } \begin{itemize} \item[(i)] For every $a \in \mathbb{N}$ and every $r \in \mathbb{Q}$ with $r > q$, there is $b > a$ such that $\mu(\MP{U}_{a \ldots b} \cup \MP{U}_i) \leq r$ for each $i>b$.
\item[(ii)] For every $n \in \mathbb{N}$ and every $q_0, \dots, q_{n-1} \in \mathbb{Q}$ with $q < q_0 < \cdots < q_{n-1}$, there is a sequence $\langle b_0, b_1, \dots, b_{n-1} \rangle$ with $0 < b_0 < b_1 < \cdots < b_{n-1}$ such that $\mu(\MP{S}_{\langle b_0, \dots, b_k \rangle} \cup \MP{U}_i) \leq q_k$ for each $k < n$ and each $i > b_k$. \end{itemize} \end{Claim}
\begin{proof}[Proof of Claim] (i) Suppose for a contradiction that $(\forall b > a)(\exists i > b)(\mu(\MP{U}_{a \ldots b} \cup \MP{U}_i) > r)$. Consider for a moment any $b > a$ and a $c > b$ such that $\mu(\MP{U}_{a \ldots b} \cup \MP{U}_c) > r$. By assumption, $\mu(\MP{U}_c) \leq q$, so if $\mu(\MP{U}_{a \ldots b}) \leq x$ for some $x \in \mathbb{Q}$, then $\mu(\MP{U}_{a \ldots c}) \leq \mu(\MP{U}_{a \ldots b} \cap \MP{U}_c) \leq x - (r-q)$ by Lemma~\ref{lem-IncExcl}. By iterating this argument sufficiently many times, we find a $c$ with $\mu(\MP{U}_{a \ldots c}) < 0$, a contradiction.
To implement this argument formally, consider the formula \begin{align*} \varphi(k) = (\exists \langle b_0, \dots, b_k \rangle \in \mathbb{N})\left[(a < b_0) \wedge (\forall i < k)(b_i < b_{i+1}) \wedge (\forall i < k)(\mu(\MP{U}_{a \ldots b_i} \cup \MP{U}_{b_{i+1}}) > r)\right]. \end{align*} The formula $\varphi(k)$ is $\Sigma^{0, Z}_1$ because `$\mu(\MP{U}_{a \ldots b_i} \cup \MP{U}_{b_{i+1}}) > r$' is $\Sigma^{0,Z}_1$. Thus we may conclude $\forall k \varphi(k)$ by $\mathsf{I}\Sigma^0_1$ and the assumption $(\forall b > a)(\exists i > b)(\mu(\MP{U}_{a \ldots b} \cup \MP{U}_i) > r)$. Now choose $k > q/(r-q)$ and, by $\varphi(k)$, let $a < b_0 < b_1 < \cdots < b_k$ be such that $(\forall i < k)(\mu(\MP{U}_{a \ldots b_i} \cup \MP{U}_{b_{i+1}}) > r)$. Then, for any $x \in \mathbb{Q}$ and $i < k$, if $\mu(\MP{U}_{a \ldots b_i}) \leq x$, then $\mu(\MP{U}_{a \ldots b_{i+1}}) \leq x - (r-q)$ by Lemma~\ref{lem-IncExcl} and the assumption $\mu(\MP{U}_{b_{i+1}}) \leq q$. By $\mathsf{I}\Pi^0_1$, we can then conclude that $(\forall i \leq k)[\mu(\MP{U}_{a \ldots b_i}) \leq q - i(r-q)]$. This is a contradiction because for $i = k$ it gives \begin{align*} \mu(\MP{U}_{a \ldots b_k}) \leq q - k(r-q) < q - q = 0. \end{align*}
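To see the arithmetic behind the choice of $k$, consider the purely illustrative values $q = \tfrac{1}{2}$ and $r = \tfrac{3}{4}$ (any rationals with $r > q$ behave the same way). Then $q/(r-q) = 2$, so $k = 3$ suffices, and the final bound becomes \begin{align*} \mu(\MP{U}_{a \ldots b_k}) \leq q - k(r-q) = \tfrac{1}{2} - 3 \cdot \tfrac{1}{4} = -\tfrac{1}{4} < 0. \end{align*} Each of the three intersections costs the measure at least $r - q = \tfrac{1}{4}$, and three such losses exhaust the initial budget of $q = \tfrac{1}{2}$.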
\noindent (ii) Given $n$ and $q_0, \dots, q_{n-1} \in \mathbb{Q}$ with $q < q_0 < \cdots < q_{n-1}$, let $b > n$ be such that $ \mu(\MP{U}_{n \ldots b} \cup \MP{U}_i) \leq q_0$ for each $i> b$. Let $b_j = b+j$ for each $j < n$. Then $(\forall k < n)(\MP{S}_{\langle b_0, b_1, \dots, b_k \rangle} \subseteq \MP{U}_{n \ldots b})$, so if $i > b_k \geq b$, then $\mu(\MP{S}_{\langle b_0, b_1, \dots, b_k \rangle} \cup \MP{U}_i) \leq \mu(\MP{U}_{n \ldots b} \cup \MP{U}_i) \leq q_0 \leq q_k$. \end{proof}
Choose an increasing sequence of rationals $q_0 < q_1 < q_2 < \cdots$ inside the interval $(q, p)$. We first illustrate some of the ideas behind constructing the code for $\MP V$ before diving into its full construction. Claim~\ref{claim-LongSeq} part~(ii) tells us that it is possible to find arbitrarily long sequences $b_0 < \cdots < b_{n-1}$ with $\mu(\MP{S}_{\langle b_0, \dots, b_{n-1} \rangle})$ under control that can be extended to even longer sequences with the corresponding measure still under control. The conclusion of Claim~\ref{claim-LongSeq} part~(ii) is $\Pi^{0, Z}_1$, so we can use trees to identify sequences $b_0 < \cdots < b_{n-1}$ satisfying the conclusion for $q_0, \dots, q_{n-1}$ in the following way. For each $t$ and $\langle b_0, \dots, b_{n-1} \rangle$ we can define a tree $T_{\langle t, b_0, \dots, b_{n-1} \rangle}$ such that \begin{align*} [T_{\langle t, b_0, \dots, b_{n-1} \rangle}] = \begin{cases} [S_{\langle b_0, \dots, b_{n-1} \rangle, t}] & \text{if $\langle b_0, \dots, b_{n-1} \rangle$ satisfies Claim~\ref{claim-LongSeq} part~(ii)}\\ \emptyset & \text{otherwise}. \end{cases} \end{align*} This is accomplished by adding to $T_{\langle t, b_0, \dots, b_{n-1} \rangle}$ all strings comparable with the finitely many strings in $S_{\langle b_0, \dots, b_{n-1} \rangle, t}$ until possibly noticing that $b_0 < \cdots < b_{n-1}$ does \textbf{not} satisfy Claim~\ref{claim-LongSeq} part~(ii) for $q_0, \dots, q_{n-1}$.
For a fixed $b_0 < \cdots < b_{n-1}$, we then have that \begin{align*} \bigcup_{t \in \mathbb{N}}&[T_{\langle t, b_0, \dots, b_{n-1} \rangle}] =\\ &\begin{cases} \bigcup_{t \in \mathbb{N}}[S_{\langle b_0, \dots, b_{n-1} \rangle, t}] = \MP{S}_{\langle b_0, \dots, b_{n-1} \rangle} & \text{if $\langle b_0, \dots, b_{n-1} \rangle$ satisfies Claim~\ref{claim-LongSeq} part~(ii)}\\ \emptyset & \text{otherwise}. \end{cases} \end{align*}
Therefore, if we take the sequence $(T_i)_{i \in \mathbb{N}}$ of all trees $T_{\langle t, b_0, \dots, b_{n-1} \rangle}$ for all $t$, $n$, and $b_0 < \cdots < b_{n-1}$ as a code for the $\Sigma^0_2$ set $\MP V$, we get that \begin{align*} \MP V = \bigcup \{\MP{S}_{\langle b_0, \dots, b_{n-1} \rangle} : \text{$\langle b_0, \dots, b_{n-1} \rangle$ satisfies Claim~\ref{claim-LongSeq} part~(ii)}\}. \end{align*} In this case, we certainly have \begin{align*} \bigcup_{N \in \mathbb{N}} \bigcap_{i \geq N} \MP{U}_i \subseteq \MP V, \end{align*} but we have done nothing to help keep track of $\mu(\MP V)$.
So instead of having $\MP V$ contain $\MP{S}_{\langle b_0, \dots, b_{n-1} \rangle}$ for every $b_0 < \dots < b_{n-1}$ that satisfies Claim~\ref{claim-LongSeq} part~(ii), we want $\MP V$ to contain $\MP{S}_{\langle b_0, \dots, b_{n-1} \rangle}$ for exactly one $b_0 < \dots < b_{n-1}$ satisfying Claim~\ref{claim-LongSeq} part~(ii) for each $n$. Moreover, if $n > m$, we want $\langle b_0, \dots, b_{n-1} \rangle$ to extend $\langle b_0, \dots, b_{m-1} \rangle$ so that $\MP{S}_{\langle b_0, \dots, b_{n-1} \rangle} \supseteq \MP{S}_{\langle b_0, \dots, b_{m-1} \rangle}$, which makes the measures of these sets easier to analyze. To accomplish this and to give the full construction of the code for $\MP V$, we introduce the notion of a \emph{good} sequence.
Call a sequence $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ \emph{good} if $\langle s_0, \dots, s_{n-1} \rangle$ witnesses that $\langle b_0, \dots, b_{n-1} \rangle$ is the lexicographically least sequence of length $n$ satisfying Claim~\ref{claim-LongSeq} part~(ii) for $q_0, \dots, q_{n-1}$. More formally, $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is \emph{good} if \begin{itemize} \item[(i)] $0 < b_0 < b_1 < \cdots < b_{n-1}$;
\item[(ii)] $(\forall k < n)(\forall i > b_k)(\mu(\MP{S}_{\langle b_0, \dots, b_k \rangle} \cup \MP{U}_i) \leq q_k)$; and
\item[(iii)] for all $k < n$, if $b_k > b_{k-1} + 1$ (or if $b_0 > 1$ in the case $k = 0$), then $s_k = \langle i, s \rangle$ is such that $i > b_k - 1$ and $\mu([S_{\langle b_0, \dots, b_{k-1}, b_k - 1 \rangle, s}] \cup [U_{i,s}]) > q_k$. \end{itemize}
Item (iii) says that if $b_k$ is not as small as possible (i.e., if $b_k > b_{k-1}+1$ or if $b_0 > 1$ in the case $k = 0$), then $s_k$ is a pair witnessing that $b_k$ cannot be chosen smaller and still satisfy Claim~\ref{claim-LongSeq} part~(ii). It is in this sense that $\langle s_0, \dots, s_{n-1} \rangle$ witnesses that $\langle b_0, \dots, b_{n-1} \rangle$ is the lexicographically least sequence of length $n$ satisfying Claim~\ref{claim-LongSeq} part~(ii). Notice that items~(i) and~(iii) are $\Delta^{0, Z}_1$ and that item~(ii) is $\Pi^{0, Z}_1$, so `$\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good' is $\Pi^{0, Z}_1$. So instead of defining trees $T_{\langle t, b_0, \dots, b_{n-1} \rangle}$ as above, we will define similar trees $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}$ so that \begin{align*} [T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}] = \begin{cases} [S_{\langle b_0, \dots, b_{n-1} \rangle, t}] & \text{if $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good}\\ \emptyset & \text{otherwise}. \end{cases} \end{align*} However, before we do this, we show that the good sequences do indeed have their intended properties. Note that if $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good and $k \leq n$, then $\langle b_0, s_0, \dots, b_{k-1}, s_{k-1} \rangle$ is also good. By the following claim, the good sequences identify a unique infinite sequence $0 < b_0 < b_1 < \cdots$, which is the sequence we use to define $\MP V$.
\begin{Claim}\label{claim-UniqueSeq} For each $n$ there is exactly one sequence $b_0 < \cdots < b_{n-1}$ for which there are $s_0, \dots, s_{n-1}$ such that $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good. \end{Claim}
\begin{proof}[Proof of Claim] Fix $n$. We first show that there is at most one sequence $b_0 < \cdots < b_{n-1}$ for which there are $s_0, \dots, s_{n-1}$ such that $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good. Suppose that $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ and $\langle b_0', s_0', \dots, b_{n-1}', s_{n-1}' \rangle$ are both good and that (without loss of generality) there is a $k < n$ such that $b_k < b_k'$ and $(\forall j < k)(b_j = b_j')$. Then $(\forall i > b_k)(\mu(\MP{S}_{\langle b_0, \dots, b_k \rangle} \cup \MP{U}_i) \leq q_k)$ because $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good. However, $\MP{S}_{\langle b_0', \dots, b_{k-1}', b_k'-1 \rangle} \subseteq \MP{S}_{\langle b_0, \dots, b_k \rangle}$ because $b_k \leq b_k'-1$ and $(\forall j < k)(b_j = b_j')$. Therefore $(\forall i > b_k'-1)(\mu(\MP{S}_{\langle b_0', \dots, b_{k-1}', b_k'-1 \rangle} \cup \MP{U}_i) \leq q_k)$. Thus there can be no $s_k' = \langle i, s \rangle$ such that $i > b_k' - 1$ and $\mu([S_{\langle b_0', \dots, b_{k-1}', b_k' - 1 \rangle, s}] \cup [U_{i,s}]) > q_k$. Therefore $\langle b_0', s_0', \dots, b_{n-1}', s_{n-1}' \rangle$ is not good.
Now we show that there is at least one sequence $b_0 < \cdots < b_{n-1}$ for which there are $s_0, \dots, s_{n-1}$ such that $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good. By Claim~\ref{claim-LongSeq} part~(ii) and the $\Pi^0_1$ least element principle, there is a least code $\langle b_0, \dots, b_{n-1} \rangle$ with $0 < b_0 < \cdots < b_{n-1}$ and such that $(\forall k < n)(\forall i > b_k)(\mu(\MP{S}_{\langle b_0, \dots, b_k \rangle} \cup \MP{U}_i) \leq q_k)$. As usual, we tacitly assume that the coding of sequences is increasing in every coordinate. Let $A$ be the set of $k < n$ such that $b_k > b_{k-1}+1$ (or $b_0 > 1$ in the case $k=0$). Then, by the minimality of $\langle b_0, \dots, b_{n-1} \rangle$, $(\forall k \in A)(\exists i > b_k-1)(\mu(\MP{S}_{\langle b_0, \dots, b_{k-1}, b_k-1 \rangle} \cup \MP{U}_i) > q_k)$ and so $(\forall k \in A)(\exists i > b_k-1)(\exists s)(\mu([S_{\langle b_0, \dots, b_{k-1}, b_k-1 \rangle, s}] \cup [U_{i, s}]) > q_k)$. Thus, for every $k \in A$ we may choose an $s_k = \langle i, s \rangle$ such that $i > b_k - 1$ and $\mu([S_{\langle b_0, \dots, b_{k-1}, b_k-1 \rangle, s}] \cup [U_{i,s}]) > q_k$. Then, letting $s_k = 0$ for all $k < n$ that are not in $A$, we see that $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good. \end{proof}
We are now ready to define a code $(T_i)_{i \in \mathbb{N}}$ for the desired $\Sigma^{0,Z}_2$ set $\MP V$. The idea is to arrange that $\MP V = \bigcup_{n \in \mathbb{N}} \MP{S}_{\langle b_0, \dots, b_{n-1} \rangle}$, for the sequence $b_0 < b_1 < \cdots$ identified above.
We view each $i$ as a sequence $i = \langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ and use the trees $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}$ to ensure that $\MP{S}_{\langle b_0, \dots, b_{n-1} \rangle} \subseteq \MP V$ when there are $s_0, \dots, s_{n-1}$ such that $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good. Thus for every $\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle \in \mathbb{N}$, we define $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}$ so that \begin{align*} [T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}] = \begin{cases} [S_{\langle b_0, \dots, b_{n-1} \rangle, t}] & \text{if $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good}\\ \emptyset & \text{otherwise}, \end{cases} \end{align*} as described above.
To define $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}$, first check that $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ satisfies items (i) and (iii) in the definition of `good.' If the check fails, set $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle} = \emptyset$. If the check passes, then add to $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}$ all initial segments of the strings in $S_{\langle b_0, \dots, b_{n-1} \rangle, t}$, and then add all extensions of all strings in $S_{\langle b_0, \dots, b_{n-1} \rangle, t}$, level-by-level, until possibly seeing that $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is not good by the failure of item~(ii) in the definition of `good.' In the end, if $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good, then $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}$ consists of all strings comparable with some string in $S_{\langle b_0, \dots, b_{n-1} \rangle, t}$, so $[T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}] = [S_{\langle b_0, \dots, b_{n-1} \rangle, t}]$. Otherwise, $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}$ is finite, so we have that $[T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}] = \emptyset$.
Formally, if $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is not good by the failure of either (i) or (iii), then let $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle} = \emptyset$. Otherwise, let $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}$ be the set of all strings $\tau \in 2^{< \mathbb{N}}$ such that either \begin{itemize} \item $\tau \subseteq \sigma$ for some $\sigma \in S_{\langle b_0, \dots, b_{n-1} \rangle, t}$; or
\item $\tau \supseteq \sigma$ for some $\sigma \in S_{\langle b_0, \dots, b_{n-1} \rangle, t}$ and $(\forall k < n)(\forall i < |\tau|)(i > b_k \rightarrow \mu([S_{\langle b_0, \dots, b_k \rangle, |\tau|}] \cup [U_{i, |\tau|}]) \leq q_k)$. \end{itemize} That is, in this case we add to $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}$ all extensions of strings in $S_{\langle b_0, \dots, b_{n-1} \rangle, t}$ until possibly reaching a level witnessing that $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is not good by the failure of (ii).
Let $\MP V$ denote the $\Sigma^{0,Z}_2$ set defined by $(T_i)_{i \in \mathbb{N}}$ according to Definition~\ref{def-Sigma2Class}. To show that $\mu(\MP V) \leq p$, we need to show that $\forall m \exists \ell(2^{-\ell}|\bigcup_{i \leq m}T_i^\ell| \leq p)$. Fix $m \in \mathbb{N}$. We find an $\ell$ large enough so that each string in $\bigcup_{i \leq m}T_i^\ell$ is an extension of some string in $\bigcup_{t \in \mathbb{N}}S_{\langle \tilde{b}_0, \dots, \tilde{b}_{\tilde{n}-1} \rangle, t}$ for a $\langle \tilde{b}_0, \dots, \tilde{b}_{\tilde{n}-1} \rangle$ for which there are $\tilde{s}_0, \dots, \tilde{s}_{\tilde{n}-1}$ such that $\langle \tilde{b}_0, \tilde{s}_0, \dots, \tilde{b}_{\tilde{n}-1}, \tilde{s}_{\tilde{n}-1} \rangle$ is good. Once we have $\ell$, it follows that $2^{-\ell}|\bigcup_{i \leq m}T_i^\ell| \leq p$ because then \begin{align*}
2^{-\ell}\left|\bigcup_{i \leq m}T_i^\ell\right| \leq \mu(\MP{S}_{\langle \tilde{b}_0, \dots, \tilde{b}_{\tilde{n}-1} \rangle}) \leq q_{\tilde{n}-1} < p. \end{align*}
To find $\ell$, first use bounded $\Pi^0_1$ comprehension to let $A$ be the set of all $\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle \leq m$ such that $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is good. By bounded $\Sigma^0_1$ comprehension, let $B$ be the set of all $\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle \leq m$ such that $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is not good due to the failure of~(ii). Then, for each $\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle \in B$, \begin{align*} (\exists k < n)(\exists i > b_k)(\exists s)(\mu([S_{\langle b_0, \dots, b_k \rangle, s}] \cup [U_{i, s}]) > q_k). \end{align*} By $\mathsf{B}\Sigma^0_1$ there is a bound $\ell$ such that, for each $\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle \in B$, there are a $k < n$, an $i$ with $b_k < i < \ell$, and an $s < \ell$ such that $\mu([S_{\langle b_0, \dots, b_k \rangle, s}] \cup [U_{i, s}]) > q_k$. Therefore $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}^\ell = \emptyset$ for each $\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle \in B$. We have established that if $\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle \leq m$ and $\langle b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle$ is not good, then $T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}^\ell = \emptyset$. Therefore $\bigcup_{i \leq m}T_i^\ell = \bigcup_{i \in A}T_i^\ell$. Now, let $\tilde{n}$ be greatest such that some $\langle \tilde{t}, \tilde{b}_0, \tilde{s}_0, \dots, \tilde{b}_{\tilde{n}-1}, \tilde{s}_{\tilde{n}-1} \rangle$ is in $A$, and fix a witnessing $\langle \tilde{b}_0, \dots, \tilde{b}_{\tilde{n}-1} \rangle$. By Claim~\ref{claim-UniqueSeq}, $\langle \tilde{b}_0, \dots, \tilde{b}_{\tilde{n}-1}\rangle$ is the unique sequence of length $\tilde{n}$ for which there are $\tilde{s}_0, \dots, \tilde{s}_{\tilde{n}-1}$ such that $\langle \tilde{b}_0, \tilde{s}_0, \dots, \tilde{b}_{\tilde{n}-1}, \tilde{s}_{\tilde{n}-1} \rangle$ is good. 
Therefore, for any $\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle \in A$, it must be that $n \leq \tilde{n}$ (by the maximality of $\tilde{n}$) and $(\forall j < n)(b_j = \tilde{b}_j)$. We thus have that if $\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle \in A$, then \begin{align*} [T_{\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle}] = [S_{\langle b_0, \dots, b_{n-1} \rangle, t}] \subseteq \MP{S}_{\langle b_0, \dots, b_{n-1} \rangle} \subseteq \MP{S}_{\langle \tilde{b}_0, \dots, \tilde{b}_{\tilde{n}-1} \rangle}. \end{align*} However, $\mu(\MP{S}_{\langle \tilde{b}_0, \dots, \tilde{b}_{\tilde{n}-1} \rangle}) \leq q_{\tilde{n}-1}$. So if we increase $\ell$ so as to be greater than the length of every string in every $S_{\langle b_0, \dots, b_{n-1} \rangle, t}$ for every $\langle t, b_0, s_0, \dots, b_{n-1}, s_{n-1} \rangle \in A$, we have that \begin{align*}
2^{-\ell}\left|\bigcup_{i \leq m}T_i^\ell\right| = 2^{-\ell}\left|\bigcup_{i \in A}T_i^\ell\right| \leq \mu(\MP{S}_{\langle \tilde{b}_0, \dots, \tilde{b}_{\tilde{n}-1} \rangle}) \leq q_{\tilde{n}-1} < p \end{align*} as desired.
To see that $\bigcap_{i \geq N}\MP{U}_i \subseteq \MP V$ for each $N \in \mathbb{N}$, fix $N$ and suppose that $X \in \bigcap_{i \geq N}\MP{U}_i$. Let $\langle b_0, s_0, \dots, b_N, s_N \rangle$ be good (which exists because by Claim~\ref{claim-UniqueSeq} there are good sequences of arbitrary length). Then \begin{align*} X \in \bigcap_{i \geq N}\MP{U}_i \subseteq \MP{U}_{N \ldots b_N} \subseteq \MP{S}_{\langle b_0, \dots, b_N \rangle}. \end{align*} Let $t$ be such that $X \in [S_{\langle b_0, \dots, b_N \rangle, t}]$. Then $X \in [T_{\langle t, b_0, s_0, \dots, b_N, s_N \rangle}] \subseteq \MP V$ as desired.
Finally, we observe that the sequence of trees $(T_i)_{i \in \mathbb{N}}$, and therefore the set $\MP V$, is produced with the required uniformity. \end{proof}
\begin{Theorem} \begin{align*} \mathsf{RCA}_0 \vdash \forall X \forall Z(\text{$X$ is $2$-random relative to $Z$} \rightarrow \text{$X$ is infinitely often $C^Z$-incompressible}). \end{align*} Hence $\mathsf{RCA}_0 \vdash \mlrr{2} \rightarrow C\mbox{-}\mathsf{INC}$. \end{Theorem}
\begin{proof} We work in $\mathsf{RCA}_0$ and show that for every $X$ and $Z$, if $X$ is eventually $C^Z$-compressible, then $X$ is not $2$-random relative to $Z$.
Suppose $X$ and $Z$ are sets where $X$ is eventually $C^Z$-compressible. That is, \begin{align*} \forall b \forall^\infty i(C^Z(X {\restriction} i) < i-b). \end{align*} We show that there is a $\Sigma^{0, Z}_2$-test capturing $X$ and therefore that $X$ is not $2$-random relative to $Z$.
Define a double-sequence of open sets $(\MP{U}_{b,i})_{b, i \in \mathbb{N}} \leq_\mathrm{T} Z$ by defining $U_{b,i,s}$ so that, for each $b$ and $i$, $\bigcup_{s \in \mathbb{N}}U_{b,i,s}$ is an enumeration of all $\sigma \in 2^i$ such that $C^Z(\sigma) < i - b$. Then $\forall b \forall i(\mu(\MP{U}_{b,i}) \leq 2^{-b})$ because there are at most $2^{i-b}$ strings $\sigma$ with $C^Z(\sigma) < i-b$. Thus, for each fixed $b \in \mathbb{N}$, $(\MP{U}_{b,i})_{i \in \mathbb{N}}$ is a sequence of open sets such that $\forall i(\mu(\MP{U}_{b,i}) \leq 2^{-b})$. Therefore, by the uniformity in Lemma~\ref{lem-2MLRimpINCHelper}, there is a sequence $(\MP{V}_b)_{b \in \mathbb{N}} \leq_\mathrm{T} Z$ of $\Sigma^{0, Z}_2$ sets such that $\forall b(\mu(\MP{V}_b) \leq 2^{-b+1})$ and $\forall N(\bigcap_{i \geq N}\MP{U}_{b, i} \subseteq \MP{V}_b)$. The sequence $(\MP{V}_{b+1})_{b \in \mathbb{N}}$ is thus a $\Sigma^{0, Z}_2$ test. We show that it captures $X$. Given $b$, let $N$ be such that $(\forall i \geq N)[C^Z(X {\restriction} i) < i-(b+1)]$. Then $(\forall i \geq N)(X \in \MP{U}_{b+1, i})$. Thus $X \in \bigcap_{i \geq N}\MP{U}_{b+1, i} \subseteq \MP{V}_{b+1}$ as desired. \end{proof}
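For clarity, the counting bound $\mu(\MP{U}_{b,i}) \leq 2^{-b}$ used above can be written out: there are fewer than $2^{i-b}$ descriptions of length less than $i-b$, so there are at most $2^{i-b}$ strings $\sigma \in 2^i$ with $C^Z(\sigma) < i - b$, and each such string contributes a cylinder of measure $2^{-i}$. Hence \begin{align*} \mu(\MP{U}_{b,i}) \leq |\{\sigma \in 2^i : C^Z(\sigma) < i - b\}| \cdot 2^{-i} \leq 2^{i-b} \cdot 2^{-i} = 2^{-b}. \end{align*}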
\begin{Corollary} \begin{align*} \mathsf{RCA}_0 \vdash \forall X \forall Z(\text{$X$ is infinitely often $C^Z$-incompressible} \leftrightarrow \text{$X$ is $2$-random relative to $Z$}). \end{align*} Hence $C\mbox{-}\mathsf{INC}$ and $\mlrr{2}$ are equivalent over $\mathsf{RCA}_0$. \end{Corollary}
\section{Implications between major randomness notions within $\mathsf{RCA}_0$}\label{sec-Implications} Recall the implications of randomness notions
\begin{center} $2$-random $\Rightarrow$ weakly $2$-random $\Rightarrow$ $1$-random $\Rightarrow$ computably random $\Rightarrow$ Schnorr random. \end{center}
In this section, we show that the implications between the corresponding principles are provable in $\mathsf{RCA}_0$. We first provide the definitions of Schnorr and computable randomness. For a Schnorr test one additionally requires that the measures of the components of the test be uniformly computable.
\begin{Definition}[$\mathsf{RCA}_0$] A \emph{Schnorr test relative to $Z$} is a Martin-L\"of test $(\MP{U}_n)_{n \in \mathbb{N}}$ relative to $Z$ where additionally the measures of the components of the test are uniformly computable from $Z$: $(\mu(\MP{U}_n))_{n \in \mathbb{N}} \leq_\mathrm{T} Z$. $X$ is \emph{Schnorr random relative to $Z$} if $X \notin \bigcap_{n \in \mathbb{N}}\MP{U}_n$ for every Schnorr test $(\MP{U}_n)_{n \in \mathbb{N}}$ relative to $Z$. $\mathsf{SR}$ is the statement ``for every $Z$ there is an $X$ that is Schnorr random relative to $Z$.'' \end{Definition}
For the purpose of defining Schnorr randomness relative to a set $Z$, we may assume that if $(\MP{U}_n)_{n \in \mathbb{N}}$ is a Schnorr test relative to $Z$, then $\mu(\MP{U}_n) = 2^{-n}$ for every $n$. It is straightforward to implement the usual proof of this fact (see~\cite[Proposition~7.1.6]{DHBook}, for example) in $\mathsf{RCA}_0$.
Computable randomness is defined in terms of computable betting strategies, which in this context are called supermartingales.
\begin{Definition}[$\mathsf{RCA}_0$]{\ }
A function $S \colon 2^{<\mathbb{N}} \rightarrow \mathbb{Q}^{\geq 0}$ is called a \emph{supermartingale} if \begin{align*} (\forall \sigma \in 2^{<\mathbb{N}})(S(\sigma^\smallfrown 0) + S(\sigma^\smallfrown 1) \leq 2S(\sigma)), \end{align*} and it is called a \emph{martingale} if the defining property always holds with equality.
A supermartingale $S$ \emph{succeeds} on a set $X$ if $\forall k \exists n(S(X {\restriction} n) > k)$.
$X$ is \emph{computably random relative to $Z$} if there is no supermartingale $S \leq_\mathrm{T} Z$ that succeeds on $X$. $\mathsf{CR}$ is the statement ``for every $Z$ there is an $X$ that is computably random relative to $Z$.''
\end{Definition}
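A standard concrete example, included only for illustration: the strategy that always stakes its entire capital on the next bit being $0$ is the (computable, hence $\leq_\mathrm{T} Z$ for any $Z$) martingale \begin{align*} M(\sigma) = \begin{cases} 2^{|\sigma|} & \text{if $\sigma$ contains no $1$,}\\ 0 & \text{otherwise.} \end{cases} \end{align*} Indeed, $M(\sigma^\smallfrown 0) + M(\sigma^\smallfrown 1) = 2M(\sigma)$ for every $\sigma$, and $M$ succeeds on $X = 000\cdots$ because $M(X {\restriction} n) = 2^n$ is unbounded. Hence $000\cdots$ is not computably random relative to any $Z$.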
By \cite[Propositions~7.1.6 and~7.3.8]{NiesBook}, it makes no difference whether computable randomness relative to $Z$ is defined in terms of \begin{itemize} \item supermartingales $S \colon 2^{<\mathbb{N}} \rightarrow \mathbb{Q}^{\geq 0}$ that are $\leq_\mathrm{T} Z$; \item supermartingales $S\colon 2^{<\mathbb{N}} \rightarrow \mathbb{R}^{\geq 0}$ that are $\leq_\mathrm{T} Z$; \item martingales $M \colon 2^{<\mathbb{N}} \rightarrow \mathbb{Q}^{\geq 0}$ that are $\leq_\mathrm{T} Z$; or \item martingales $M \colon 2^{<\mathbb{N}} \rightarrow \mathbb{R}^{\geq 0}$ that are $\leq_\mathrm{T} Z$. \end{itemize} It is straightforward to formalize these arguments in $\mathsf{RCA}_0$. In this setting, a function $S \colon 2^{<\mathbb{N}} \rightarrow \mathbb{R}^{\geq 0}$ is coded by the corresponding sequence of values $(S(\sigma))_{\sigma \in 2^{<\mathbb{N}}}$.
\begin{Proposition}\label{prop-BasicRandImp}{\ } \begin{itemize} \item[(i)] $\mathsf{RCA}_0 \vdash \forall X \forall Z(\text{$X$ is $2$-random relative to $Z$} \rightarrow \text{$X$ is weakly $2$-random relative to $Z$})$.
Hence $\mathsf{RCA}_0 \vdash \mlrr{2} \rightarrow \mathsf{W2R}$.
\item[(ii)] $\mathsf{RCA}_0 \vdash \forall X \forall Z(\text{$X$ is weakly $2$-random relative to $Z$} \rightarrow \text{$X$ is $1$-random relative to $Z$})$.
Hence $\mathsf{RCA}_0 \vdash \mathsf{W2R} \rightarrow \mathsf{MLR}$.
\item[(iii)] $\mathsf{RCA}_0 \vdash \forall X \forall Z(\text{$X$ is $1$-random relative to $Z$} \rightarrow \text{$X$ is computably random relative to $Z$})$.
Hence $\mathsf{RCA}_0 \vdash \mathsf{MLR} \rightarrow \mathsf{CR}$.
\item[(iv)] $\mathsf{RCA}_0 \vdash \forall X \forall Z(\text{$X$ is computably random rel.\ to $Z$} \rightarrow \text{$X$ is Schnorr random rel.\ to $Z$})$.
Hence $\mathsf{RCA}_0 \vdash \mathsf{CR} \rightarrow \mathsf{SR}$. \end{itemize} \end{Proposition}
\begin{proof} (i) To prove that every $2$-random set is weakly $2$-random, one views $2$-randomness as $1$-randomness relative to $\emptyset'$ and shows that every weak $2$-test can be thinned to a Martin-L\"of test relative to $\emptyset'$ because $\emptyset'$ can uniformly compute the measures of $\Pi^0_1$ classes. However, our formulation of $2$-randomness in $\mathsf{RCA}_0$ is in terms of $\Sigma^0_2$-tests, so we need a version of this argument that works with $\Sigma^0_2$-tests instead of with Martin-L\"of tests relative to $\emptyset'$.
Let $(\MP{U}_n)_{n \in \mathbb{N}}$ be a weak $2$-test relative to $Z$, and let $(U_{n,i})_{n,i \in \mathbb{N}} \leq_\mathrm{T} Z$ be a code for $(\MP{U}_n)_{n \in \mathbb{N}}$. For notational ease, assume that $\forall n \forall i(U_{n,i} \subseteq U_{n,i+1})$. We define a double-sequence $(T_{n,i})_{n,i \in \mathbb{N}} \leq_\mathrm{T} Z$ of trees coding a $\Sigma^{0,Z}_2$-test $(\MP{W}_n)_{n \in \mathbb{N}}$ such that $\bigcap_{n \in \mathbb{N}} \MP{U}_n \subseteq \bigcap_{n \in \mathbb{N}} \MP{W}_n$. The idea is to take $\MP{W}_n = \bigcup_{i \in \mathbb{N}}[T_{n, i}]$ to be $\MP{U}_m$ for the least $m$ such that $\mu(\MP{U}_m) \leq 2^{-n}$. To do this, we view each $i$ as a triple $i = \langle \sigma, m, s \rangle$ and use the trees $T_{n, \langle \sigma, m, s \rangle}$ to ensure that $[\sigma] \subseteq \MP{W}_n$ when $[\sigma] \subseteq \MP{U}_m$ and $m$ is least such that $\mu(\MP{U}_m) \leq 2^{-n}$.
To define $T_{n, \langle \sigma, m, s \rangle}$, first check that $\sigma \in U_{m,s}$ and that $s$ is large enough to witness that $\mu(\MP{U}_k) > 2^{-n}$ for all $k < m$. If one of the checks fails, set $T_{n, \langle \sigma, m, s \rangle} = \emptyset$. If both checks pass, then $[\sigma] \subseteq \MP{U}_m$, and possibly $m$ is least such that $\mu(\MP{U}_m) \leq 2^{-n}$. In this case, start adding to $T_{n, \langle \sigma, m, s \rangle}$ all strings comparable with $\sigma$, level-by-level, until possibly seeing that $\mu(\MP{U}_m) > 2^{-n}$. In the end, if $[\sigma] \subseteq \MP{U}_m$, $m$ is least such that $\mu(\MP{U}_m) \leq 2^{-n}$, and $s$ is large enough, then $T_{n, \langle \sigma, m, s \rangle}$ consists of all strings comparable with $\sigma$, so $[T_{n, \langle \sigma, m, s \rangle}] = [\sigma]$. Otherwise, $T_{n, \langle \sigma, m, s \rangle}$ is finite, so $[T_{n, \langle \sigma, m, s \rangle}] = \emptyset$. Therefore $\MP{W}_n = \MP{U}_m$.
Formally, for each $n$ and $\langle \sigma, m, s \rangle$, define $T_{n, \langle \sigma, m, s \rangle}$ so that \begin{align*} [T_{n, \langle \sigma, m, s\rangle}] = \begin{cases} [\sigma] & \text{if $\sigma \in U_{m,s}$, $\mu(\MP{U}_m) \leq 2^{-n}$, and $(\forall k < m)(\mu(U_{k,s}) > 2^{-n})$}\\ \emptyset & \text{otherwise}. \end{cases} \end{align*}
To do this, if $\sigma \notin U_{m,s}$ or $(\exists k < m)(\mu(U_{k,s}) \leq 2^{-n})$, then let $T_{n, \langle \sigma, m, s \rangle} = \emptyset$. Otherwise, let $T_{n, \langle \sigma, m, s \rangle}$ be the set of all strings $\tau \in 2^{< \mathbb{N}}$ such that $\tau$ is comparable with $\sigma$ (i.e., either $\tau \subseteq \sigma$ or $\tau \supseteq \sigma$) and $\mu(U_{m, |\tau|}) \leq 2^{-n}$. Observe that $(T_{n,i})_{n,i \in \mathbb{N}} \leq_\mathrm{T} Z$ because $(U_{n,i})_{n,i \in \mathbb{N}} \leq_\mathrm{T} Z$. Let $(\MP{W}_n)_{n \in \mathbb{N}}$ denote the uniform sequence of $\Sigma^{0,Z}_2$ sets defined by $(T_{n,i})_{n,i \in \mathbb{N}}$.
Fix $n$. We show that there is a least $m$ such that $\mu(\MP{U}_m) \leq 2^{-n}$, that $\MP{U}_m \subseteq \MP{W}_n$, and that $\mu(\MP{W}_n) \leq 2^{-n}$.
To see that there is a least $m$ such that $\mu(\MP{U}_m) \leq 2^{-n}$, first observe that there is some $m$ such that $\mu(\MP{U}_m) \leq 2^{-n}$ because $\lim_m \mu(\MP{U}_m) = 0$ by the fact that $(\MP{U}_n)_{n \in \mathbb{N}}$ is a weak $2$-test. Thus there is a least such $m$ by the $\Pi^0_1$ least element principle. Henceforth $m$ always denotes the least $m$ such that $\mu(\MP{U}_m) \leq 2^{-n}$.
To show that $\MP{U}_m \subseteq \MP{W}_n$, we first show that there is a $t$ large enough to witness that $\mu(\MP{U}_k) > 2^{-n}$ for all $k < m$. Once we have $t$, we argue that if $\sigma \in U_{m,s}$ for some $s$, then $\sigma \in U_{m,s}$ for some $s > t$ (as we assume that the $U_{m,s}$'s are nested), in which case $[\sigma] = [T_{n, \langle \sigma, m, s \rangle}] \subseteq \MP{W}_n$. Formally, because $m$ is least, we have that $(\forall k < m)(\mu(\MP{U}_k) > 2^{-n})$ and hence that $(\forall k < m)(\exists t)(\mu(U_{k,t}) > 2^{-n})$. By $\mathsf{B}\Sigma^0_1$, there is a fixed $t$ such that $(\forall k < m)(\mu(U_{k,t}) > 2^{-n})$. Now, suppose that $Y \in \MP{U}_m$, and let $\sigma \subseteq Y$ and $s > t$ be such that $\sigma \in U_{m,s}$. Then $[T_{n, \langle \sigma, m, s \rangle}] = [\sigma]$, so $Y \in [T_{n, \langle \sigma, m, s \rangle}] \subseteq \MP{W}_n$. Thus $\MP{U}_m \subseteq \MP{W}_n$.
To show that $\mu(\MP{W}_n) \leq 2^{-n}$, we need to show that $\forall i \exists b (2^{-b}|\bigcup_{j \leq i}T_{n,j}^b| \leq 2^{-n})$. Fix $i$. We find a $b$ large enough so that each string in $\bigcup_{j \leq i}T_{n,j}^b$ is an extension of some string in $\bigcup_{s \in \mathbb{N}} U_{m,s}$. This is achieved by choosing $b$ to be greater than $|\sigma|$ for every $\langle \sigma, k, s \rangle \leq i$ and greater than the length of every string in the finite trees $T_{n, \langle \sigma, k, s \rangle}$ with $\langle \sigma, k, s \rangle \leq i$. Once we have $b$, since $\mu(\MP{U}_m) \leq 2^{-n}$ it follows that $2^{-b}|\bigcup_{j \leq i}T_{n,j}^b| \leq 2^{-n}$.
As above, let $t$ be such that $(\forall k < m)(\mu(U_{k,t}) > 2^{-n})$. Let $b > \max\{t, i\}$ (so that if $\langle \sigma, k, s \rangle \leq i$, then $b > |\sigma|$). We show that this $b$ is large enough. Consider a $\langle \sigma, k, s \rangle \leq i$. If $k < m$, then $T_{n, \langle \sigma, k, s \rangle}^t = \emptyset$ because $\mu(U_{k,t}) > 2^{-n}$ by choice of $t$. If $k > m$, then $T_{n, \langle \sigma, k, s \rangle} = \emptyset$ because $\mu(U_{m,s}) \leq 2^{-n}$. So if $\tau \in T_{n, \langle \sigma, k, s \rangle}^b$ for $\langle \sigma, k, s \rangle \leq i$, then it must be that $k = m$, $\sigma \in U_{m,s}$, and $\tau \supseteq \sigma$. Thus every string in $\bigcup_{j \leq i}T_{n,j}^b$ is an extension of some string in $\bigcup_{s \in \mathbb{N}} U_{m,s}$. Therefore $2^{-b}|\bigcup_{j \leq i}T_{n,j}^b| \leq \mu(\MP{U}_m) \leq 2^{-n}$.
Now, suppose that $X$ is not weakly $2$-random relative to $Z$. Then there is a weak $2$-test $(\MP{U}_n)_{n \in \mathbb{N}}$ relative to $Z$ that captures $X$. By the above, there is a $\Sigma^{0,Z}_2$-test $(\MP{W}_n)_{n \in \mathbb{N}}$ such that $X \in \bigcap_{n \in \mathbb{N}} \MP{U}_n \subseteq \bigcap_{n \in \mathbb{N}} \MP{W}_n$. Therefore $X$ is not $2$-random relative to $Z$.
(ii) This is immediate from the definitions because every Martin-L\"of test relative to $Z$ is also a weak $2$-test relative to $Z$.
(iii) See the proof of the (i)$\Rightarrow$(iii) implication of~\cite[Proposition~7.2.6]{NiesBook}, which is straightforward to formalize in $\mathsf{RCA}_0$.
(iv) See the proof of~\cite[Proposition~7.3.2]{NiesBook}, which is straightforward to formalize in $\mathsf{RCA}_0$. Note however that this proof makes use of $\mathbb{R}^{\geq 0}$-valued martingales. \end{proof}
Not every Schnorr random set is computably random (see for example~\cite[Theorem 7.5.10]{NiesBook}). However, it is provable in $\mathsf{RCA}_0$ that if a Schnorr random set exists, then a computably random set exists. Thus computable randomness and Schnorr randomness are equivalent as set-existence axioms.
\begin{Theorem}\label{thm-SRimpCR} $\mathsf{RCA}_0 \vdash \mathsf{SR} \rightarrow \mathsf{CR}$. Thus $\mathsf{SR}$ and $\mathsf{CR}$ are equivalent over $\mathsf{RCA}_0$. \end{Theorem}
\begin{proof} Assume $\mathsf{SR}$. Let $Z$ be given. We want to find a set $X$ that is computably random relative to $Z$. By $\mathsf{SR}$, let $Y$ be Schnorr random relative to $Z$. If $Y$ is $1$-random relative to $Z$, then it is also computably random relative to $Z$ by Proposition~\ref{prop-BasicRandImp} and we are done. Otherwise, $Y$ is not $1$-random relative to $Z$, so there is a $\Sigma^{0,Z}_1$-test $(\MP{U}_n)_{n \in \mathbb{N}}$ with $Y \in \bigcap_{n \in \mathbb{N}}\MP{U}_n$. Let $(B_{n,i})_{n,i \in \mathbb{N}}$ denote the code for $(\MP{U}_n)_{n \in \mathbb{N}}$. For notational ease, assume that $\forall n \forall i(B_{n,i} \subseteq B_{n,i+1})$. Define $f \colon \mathbb{N} \rightarrow \mathbb{N}$ by \begin{align*} f(n) = \text{the least $i$ such that $(\exists \sigma \in B_{n,i})(\sigma \subseteq Y)$} \end{align*} (recall that each $B_{n,i}$ is finite, so $f$ can be defined in $\mathsf{RCA}_0$).
For functions $f, g \colon \mathbb{N} \rightarrow \mathbb{N}$, say that \emph{$f$ eventually dominates $g$} if $(\exists n)(\forall k > n)(g(k) < f(k))$.
\begin{Claim}\label{claim-ZDom} If $g \colon \mathbb{N} \rightarrow \mathbb{N}$ is a function with $g \leq_\mathrm{T} Z$, then $f$ eventually dominates $g$. \end{Claim}
\begin{proof}[Proof of Claim] Suppose for a contradiction that there is a $g \leq_\mathrm{T} Z$ that is not eventually dominated by $f$. Define a uniform sequence of $\Sigma^{0,Z}_1$ sets $(\MP{V}_n)_{n \in \mathbb{N}}$ coded by $(C_{n,m})_{n,m \in \mathbb{N}} \leq_\mathrm{T} Z$ by letting
\begin{align*} C_{n,m} = \begin{cases} \emptyset & \text{if $n \geq m$}\\ \text{a finite $C \supseteq C_{n,m-1} \cup B_{m,g(m)}$ with $\mu(C) = 2^{-n}-2^{-m}$} & \text{if $n < m$}. \end{cases} \end{align*} This is possible because if $n < m$ and $\mu(C_{n,m-1}) = 2^{-n} - 2^{-(m-1)}$, then \begin{align*} \mu(C_{n, m-1} \cup B_{m,g(m)}) \leq 2^{-n} - 2^{-(m-1)} + 2^{-m} = 2^{-n} - 2^{-m}, \end{align*} so such a $C_{n,m}$ exists. We have that $\forall n (\mu(\MP{V}_n) = 2^{-n})$, so $(\MP{V}_n)_{n \in \mathbb{N}}$ is a Schnorr test relative to $Z$. Furthermore, this test captures $Y$ because if $m > n$ and $f(m) \leq g(m)$, then $Y \in [B_{m,g(m)}] \subseteq [C_{n,m}] \subseteq \MP{V}_n$. This contradicts that $Y$ is Schnorr random relative to $Z$. \end{proof}
The rest of the proof follows the usual proof that every high set computes a computably random set (see e.g.\ \cite[Theorem~7.5.2]{NiesBook}). We use $f$ to define a supermartingale that multiplicatively dominates every supermartingale $\leq_\mathrm{T} Z$. In the following, all supermartingales are $\mathbb{Q}^{\geq 0}$-valued.
First, using our effective sequence $(\Phi_e)_{e \in \mathbb{N}}$ of all Turing functionals, define a sequence of Turing functionals $(\Psi_e)_{e \in \mathbb{N}}$ such that
\begin{itemize}
\item $\Psi_e^Z$ always computes a partial supermartingale, and
\item if $\Phi_e^Z$ is total and computes a supermartingale, then $\forall \sigma(\Phi_e^Z(\sigma) = \Psi_e^Z(\sigma))$.
\end{itemize}
This can be accomplished by setting \begin{align*} \Psi_e^Z(\emptyset) &= \Phi_e^Z(\emptyset)\\ \Psi_e^Z(\sigma^\smallfrown a) &= \begin{cases} \Phi_e^Z(\sigma^\smallfrown a) & \text{if $\Phi_e^Z(\sigma){\downarrow}$, $\Phi_e^Z(\sigma^\smf0){\downarrow}$, $\Phi_e^Z(\sigma^\smf1){\downarrow}$, and $\Phi_e^Z(\sigma^\smf0) + \Phi_e^Z(\sigma^\smf1) \leq 2\Phi_e^Z(\sigma)$}\\ {\uparrow} & \text{otherwise}, \end{cases} \end{align*} for $a \in \{0,1\}$. Now define a sequence of Turing functionals $(\Gamma_e)_{e \in \mathbb{N}}$ such that
\begin{itemize}
\item $\Gamma_e^Z$ always computes a partial supermartingale,
\item $\Gamma_e^Z(\sigma) = 1$ if $|\sigma| \leq e$, and
\item if $\Phi_e^Z$ is total and computes a supermartingale, then there is a $c \in \mathbb{N}$ such that $\forall \sigma(\Phi_e^Z(\sigma) \leq c\Gamma_e^Z(\sigma))$.
\end{itemize}
This can be accomplished by setting \begin{align*} \Gamma_e^Z(\sigma) = \begin{cases}
1 & \text{if $|\sigma| \leq e$}\\
0 & \text{if $|\sigma| > e$ and $\Psi_e^Z(\sigma {\restriction} e){\downarrow} = 0$}\\
\Psi_e^Z(\sigma)/\Psi_e^Z(\sigma {\restriction} e) & \text{if $|\sigma| > e$, $\Psi_e^Z(\sigma){\downarrow}$, and $\Psi_e^Z(\sigma {\restriction} e){\downarrow} > 0$}\\ {\uparrow} & \text{otherwise}. \end{cases} \end{align*} If $\Phi_e^Z$ is total and computes a supermartingale, let $c > \max\{\Phi_e^Z(\sigma) : \sigma \in 2^{\leq e}\}$. Then $\forall \sigma(\Phi_e^Z(\sigma) \leq c\Gamma_e^Z(\sigma))$.
Assemble a supermartingale from $Z$ and $f$ as follows. First, for each $e \in \mathbb{N}$, let \begin{align*} S_e(\sigma) = \begin{cases}
\Gamma_e^Z(\sigma) & \text{if $|\sigma| \leq e$ or $(\forall \tau \in 2^{\leq|\sigma|})(\text{$\Gamma_e^Z(\tau)$ halts within $f(|\sigma|) + e$ steps})$}\\ 0 & \text{otherwise.} \end{cases} \end{align*}
Now let $S(\sigma) = \sum_{e \in \mathbb{N}}2^{-e}S_e(\sigma)$. Notice that $S$ is $\mathbb{Q}^{\geq 0}$-valued because $S_e(\sigma) = 1$ when $e \geq |\sigma|$, so $\sum_{e \geq |\sigma|}2^{-e}S_e(\sigma) = 2^{-|\sigma|+1}$. One may then verify that each $S_e$ is a supermartingale and therefore that $S$ is a supermartingale.
Suppose that $P \leq_\mathrm{T} Z$ is a supermartingale. We show that there is a $d \in \mathbb{N}$ such that $\forall \sigma(P(\sigma) \leq dS(\sigma))$. Let $e_0$ be such that $P = \Phi_{e_0}^Z$. Then $\Gamma_{e_0}^Z$ is total, so define the total function $g \leq_\mathrm{T} Z$ by \begin{align*} g(n) = \text{the least $t$ such that $(\forall \tau \in 2^{\leq n})(\text{$\Gamma_{e_0}^Z(\tau)$ halts within $t$ steps})$}. \end{align*} By Claim~\ref{claim-ZDom}, there is an $n \in \mathbb{N}$ such that $(\forall k > n)(g(k) < f(k))$. By padding, let $e_1 > \max\{g(m) : m \leq n\}$ be such that $\Gamma_{e_1}$ is the same functional as $\Gamma_{e_0}$. Then \begin{align*} (\forall k)(\forall \tau \in 2^{\leq k})(\text{$\Gamma_{e_1}^Z(\tau)$ halts within $f(k) + e_1$ steps}), \end{align*} and therefore $\forall \sigma (S_{e_1}(\sigma) = \Gamma_{e_1}^Z(\sigma))$. Let $c$ be such that $\forall \sigma(P(\sigma) \leq c\Gamma_{e_1}^Z(\sigma))$. Let $d = c2^{e_1}$. Then, for all $\sigma \in 2^{<\mathbb{N}}$, \begin{align*} P(\sigma) \leq c\Gamma_{e_1}^Z(\sigma) = cS_{e_1}(\sigma) \leq c2^{e_1}S(\sigma) = dS(\sigma), \end{align*} as desired.
To finish the proof, let $X$ be the leftmost non-ascending path of $S$. That is, define $X = \lim_{s}\sigma_s$ recursively by $\sigma_0 = \emptyset$ and \begin{align*} \sigma_{s+1} = \begin{cases} {\sigma_s}^\smallfrown 0 & \text{if $S({\sigma_s}^\smallfrown 0) \leq S(\sigma_s)$}\\ {\sigma_s}^\smallfrown 1 & \text{otherwise}. \end{cases} \end{align*} If $P \leq_\mathrm{T} Z$ is a supermartingale, there is a $d \in \mathbb{N}$ such that $\forall \sigma(P(\sigma) \leq dS(\sigma))$. So for all $n \in \mathbb{N}$, $P(X {\restriction} n) \leq dS(X {\restriction} n) \leq dS(\emptyset)$. Thus $P$ does not succeed on $X$. Thus no supermartingale $P \leq_\mathrm{T} Z$ succeeds on $X$, so $X$ is computably random relative to $Z$. \end{proof}
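The leftmost non-ascending path construction at the end of the proof can be illustrated concretely. The following Python sketch is illustrative only: the particular martingale `S` (bet everything on the next bit being 1) and the prefix length are hypothetical choices of ours, but the diagonalization works the same way against any (super)martingale.

```python
# Illustrative sketch: the leftmost non-ascending path of a martingale.
# The martingale S below is a hypothetical example; it satisfies
# S(sigma + '0') + S(sigma + '1') == 2 * S(sigma).

def S(sigma):
    """Capital doubles on each 1 and is lost on a 0."""
    capital = 1.0
    for bit in sigma:
        capital = 2 * capital if bit == "1" else 0.0
    return capital

def leftmost_non_ascending_path(martingale, length):
    """Follow the branch on which the martingale does not increase:
    extend by 0 whenever that does not raise the capital, else by 1."""
    sigma = ""
    for _ in range(length):
        sigma += "0" if martingale(sigma + "0") <= martingale(sigma) else "1"
    return sigma

path = leftmost_non_ascending_path(S, 20)
# Along this path the capital never exceeds the initial capital S(""),
# so the martingale does not succeed on the corresponding set.
assert all(S(path[:n]) <= S("") for n in range(len(path) + 1))
```

For this particular `S` the constructed path is $000\cdots$, on which the capital immediately drops to $0$; in the proof, the same step-by-step choice is made against the universal supermartingale $S$, which dominates every supermartingale computable from $Z$.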
\section{Weak Demuth and balanced randomness} \label{s:wD}
Randomness notions that have been introduced only recently include $h$-weak Demuth randomness for an order function $h$ as well as the special case of balanced randomness, where $h(n) $ can be taken to be $2^n$ \cite[Section 7]{FigueiraHirschfeldtMillerNgNies}. An $h$-Demuth test is like a Martin-L\"of test, except that we allow ourselves to change the index of the $n$\textsuperscript{th} component of the test up to $h(n)$ many times. To make this precise, we must first define codes for $h$-r.e.\ functions.
\begin{Definition}[$\mathsf{RCA}_0$] \label{df:balanced}
Let $h \colon \mathbb{N} \rightarrow \mathbb{N}$. A (coded) \emph{$h$-r.e.\ function} is a function $g \colon \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{N}$ such that $|\{s : g(n,s) \neq g(n,s+1)\}| \leq h(n)$ for every $n \in \mathbb{N}$. If also $h, g \leq_\mathrm{T} Z$ for some set $Z$, then we say that $g$ is a (coded) \emph{$h$-r.e.\ function relative to $Z$}. \end{Definition}
If $g$ codes an $h$-r.e.\ function, then $\mathsf{RCA}_0$ proves that the limit $\lim_s g(n,s)$ exists for each individual $n$, but it does not prove that there is always a function $f$ such that $\forall n (f(n) = \lim_s g(n,s))$.
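The mind-change bound in Definition~\ref{df:balanced} is easy to check on finite data. The following Python sketch is purely illustrative (the helper name and the sample approximation $g(n,s)=\min(s,n)$ are our own choices, not from the text): it counts the changes of a candidate approximation on a finite window and compares them with $h$.

```python
def respects_change_bound(g, h, n_max, s_max):
    """Check |{s < s_max - 1 : g(n, s) != g(n, s + 1)}| <= h(n)
    for all n < n_max, i.e. a finite portion of the h-r.e. bound."""
    for n in range(n_max):
        changes = sum(1 for s in range(s_max - 1) if g(n, s) != g(n, s + 1))
        if changes > h(n):
            return False
    return True

# g(n, s) = min(s, n) changes exactly n times (at s = 0, ..., n - 1),
# so it obeys the bound for h(n) = n but not for h(n) = max(n - 1, 0).
g = lambda n, s: min(s, n)
assert respects_change_bound(g, lambda n: n, n_max=10, s_max=50)
assert not respects_change_bound(g, lambda n: max(n - 1, 0), n_max=10, s_max=50)
```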
\begin{Definition}[$\mathsf{RCA}_0$]{\ } \begin{itemize} \item Let $h \leq_\mathrm{T} Z$. A code for an \emph{$h$-Demuth test relative to $Z$} is a coded $h$-r.e.\ function $g \leq_\mathrm{T} Z$ where, for all $n \in \mathbb{N}$, $e_n = \lim_s g(n,s)$ is an index such that $\Phi_{e_n}^Z$ computes a code for a $\Sigma^{0,Z}_1$ set $\MP{U}_n$ with $\mu(\MP{U}_n) \leq 2^{-n}$.
\item A set $X$ \emph{weakly passes} the $h$-Demuth test relative to $Z$ coded by $g$ if there is an $n \in \mathbb{N}$ such that $X \notin \MP{U}_n$, where, as above, $\MP{U}_n$ is the $\Sigma^{0,Z}_1$ set coded by $\Phi_{e_n}^Z$ for $e_n = \lim_s g(n,s)$.
\item For $h \leq_\mathrm{T} Z$, $X$ is \emph{$h$-weakly Demuth random relative to $Z$} if $X$ weakly passes every $h$-Demuth test relative to $Z$. These definitions are sometimes extended to classes of order functions in the expected way.
\item $X$ is \emph{balanced random relative to $Z$} if $X$ weakly passes every $O(2^n)$-Demuth test relative to $Z$ (that is, if, for every $c \in \mathbb{N}$, $X$ weakly passes every $c2^n$-Demuth test relative to $Z$).
\item Let $h$ be a function that is provably total in $\mathsf{RCA}_0$. Then $\wdrr{h}$ is the statement ``for every $Z$ there is an $X$ that is $h$-weakly Demuth random relative to $Z$.''
\item $\mathsf{BR}$ is the statement ``for every $Z$ there is an $X$ that is balanced random relative to $Z$.'' \end{itemize} \end{Definition}
Not every $1$-random set is balanced random. For example, there are left-r.e.\ $1$-random sets, but no left-r.e.\ set is balanced random. However, if $X = X_0 \oplus X_1$ is $1$-random, then either $X_0$ is balanced random or $X_1$ is balanced random. This fact follows from the more elaborate \cite[Theorem 23]{FigueiraHirschfeldtMillerNgNies}, which states that a $1$-random set $X$ is $O(h(n)2^n)$-weakly Demuth random for some order function $h$ if and only if it is not $\omega$-r.e.-tracing (roughly, $X$ is $\omega$-r.e.-tracing if for each $\omega$-r.e.\ function there is an $X$-r.e.\ trace of a fixed size bound). We sketch the argument. Suppose that $X = X_0 \oplus X_1$ is $1$-random. Then $X_0$ is $1$-random and $X_1$ is $1$-random relative to $X_0$ by van Lambalgen's theorem. If $X_0$ is not $\omega$-r.e.-tracing, then, by \cite[Theorem 23]{FigueiraHirschfeldtMillerNgNies}, $X_0$ is $O(h(n)2^n)$-weakly Demuth random for some order function $h$, and therefore $X_0$ is balanced random. On the other hand, if $X_0$ is $\omega$-r.e.-tracing, then every $O(2^n)$-Demuth test can be converted into a $\Sigma^{0, X_0}_1$-test. So if $X_1$ were not balanced random, then $X_1$ would not be $1$-random relative to $X_0$, which contradicts van Lambalgen's theorem. Thus, in this case, $X_1$ must be balanced random.
We now give a direct proof that if $X = X_0 \oplus X_1$ is $1$-random, then either $X_0$ or $X_1$ is balanced random. This proof avoids considering $\omega$-r.e.-traceability and is easy to formalize in $\mathsf{RCA}_0$.
\begin{Theorem}\label{thm-MLRimpBran} $\mathsf{RCA}_0 \vdash \mathsf{MLR} \rightarrow \mathsf{BR}$. Thus $\mathsf{MLR}$ and $\mathsf{BR}$ are equivalent over $\mathsf{RCA}_0$. \end{Theorem}
\begin{proof} Assume $\mathsf{MLR}$. Let $Z$ be given. We want to find a set that is balanced random relative to $Z$. Let $X = X_0 \oplus X_1$ be $1$-random relative to $Z$. We show that one of $X_0$, $X_1$ is balanced random relative to $Z$. Assume otherwise. Let $g_0, g_1 \colon \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{N}$ be codes for $c2^n$-Demuth tests (for some $c \in \mathbb{N}$) relative to $Z$ capturing $X_0$ and $X_1$, respectively. By modifying $g_0$ and $g_1$ if necessary, we may assume that $\Phi_{g_0(n,s)}^Z$ and $\Phi_{g_1(n,s)}^Z$ both compute codes of $\Sigma^{0,Z}_1$ sets $\MP{U}^0_{n,s}$ and $\MP{U}^1_{n,s}$ of measure $\leq 2^{-n}$ for all $n$ and $s$. We may also assume that $g_0(n, \cdot)$ and $g_1(n, \cdot)$ change at least once for each $n$ (by increasing $c$ by $1$ and adding dummy changes, if necessary).
We define a $\Sigma^{0,Z}_1$-test capturing $X$, contradicting that $X$ is $1$-random relative to $Z$. For each $n$, say that $g_0$ \emph{changes last} on $n$ if the final change of $g_0(n,\cdot)$ occurs at or after the final change of $g_1(n,\cdot)$ (recall that each of $g_0(n,\cdot)$ and $g_1(n,\cdot)$ changes at least once). If $g_0$ does not change last on infinitely many $n$, then $g_1$ changes last on infinitely many $n$. So suppose for the sake of argument that $g_0$ changes last on infinitely many $n$. Define a uniform sequence $(\MP{O}_n)_{n \in \mathbb{N}}$ of $\Sigma^{0,Z}_1$ sets by letting \begin{align*} \MP{O}_n = \bigcup_{\substack{s > 0\\g_0(n,s) \neq g_0(n,s-1)}} \MP{U}^0_{n,s} \oplus \MP{U}^1_{n,s} \end{align*} for each $n$. Here, for $\mbf{\Sigma^0_1}$ sets $\MP{A}_0$ and $\MP{A}_1$, $\MP{A}_0 \oplus \MP{A}_1$ denotes the $\mbf{\Sigma^0_1}$ set of all $Y = Y_0 \oplus Y_1$ with $Y_0 \in \MP{A}_0$ and $Y_1 \in \MP{A}_1$. It is straightforward to produce a code for $\MP{A}_0 \oplus \MP{A}_1$ and to show that if $\mu(\MP{A}_0) \leq a_0$ and $\mu(\MP{A}_1) \leq a_1$, then $\mu(\MP{A}_0 \oplus \MP{A}_1) \leq a_0a_1$. So $\mu(\MP{U}^0_{n,s} \oplus \MP{U}^1_{n,s}) \leq 2^{-2n}$ for all $n$ and $s$. Each $\MP{O}_n$ is the union of at most $c2^n$ sets (because $g_0$ is $c2^n$-r.e.) of measure at most $2^{-2n}$ each. Therefore $\mu(\MP{O}_n) \leq c2^{-n}$ for each $n$. Now, choose $k$ such that $2^k > c$. Define another uniform sequence $(\MP{V}_n)_{n \in \mathbb{N}}$ of $\Sigma^{0,Z}_1$ sets by letting $\MP{V}_n = \bigcup_{i > n+k}\MP{O}_i$ for each $n$. Then $\mu(\MP{V}_n) \leq c2^{-n-k} \leq 2^{-n}$ for each $n$, so $(\MP{V}_n)_{n \in \mathbb{N}}$ is a $\Sigma^{0,Z}_1$-test.
We claim that $X \in \bigcap_{n \in \mathbb{N}}\MP{V}_n$. It suffices to show that, for every $n$, there is an $i > n+k$ with $X \in \MP{O}_i$. By the assumption on $g_0$, let $i > n+k$ be such that there is an $s_0 > 0$ such that $g_0(i, s_0) \neq g_0(i, s_0-1)$ and $(\forall t > s_0)(g_1(i, t) = g_1(i, s_0))$. Let $s_0 > 0$ be greatest such that $g_0(i, s_0) \neq g_0(i, s_0-1)$. Then $g_0(i,s_0) = \lim_s g_0(i,s)$ and $g_1(i,s_0) = \lim_s g_1(i,s)$, so $X_0 \in \MP{U}^0_{i,s_0}$ and $X_1 \in \MP{U}^1_{i,s_0}$ because the $c2^n$-Demuth tests coded by $g_0$ and $g_1$ capture $X_0$ and $X_1$. Thus \begin{align*} X = X_0 \oplus X_1 \in \MP{U}^0_{i,s_0} \oplus \MP{U}^1_{i,s_0} \subseteq \MP{O}_i \end{align*} as desired. \end{proof}
\section{Non-implications via $\omega$-models}\label{sec-NonImp}
In this section, we exhibit $\omega$-models of $\mathsf{RCA}_0$ that witness various non-implications between pairs of randomness-existence principles. We also compare randomness-existence principles to principles asserting the existence of diagonally non-recursive functions.
\begin{Definition}[$\mathsf{RCA}_0$]{\ }
A function $f \colon \mathbb{N} \rightarrow \mathbb{N}$ is \emph{diagonally non-recursive relative to $Z$} if $\forall e(\Phi_e^Z(e){\downarrow} \rightarrow f(e) \neq \Phi_e^Z(e))$.
$\mathsf{DNR}$ is the statement ``for every $Z$ there is an $f$ that is diagonally non-recursive relative to $Z$.'' \end{Definition}
$\mathsf{DNR}$ is a common benchmark by which to gauge the computability-theoretic strength of set-existence principles. It is a well-known observation of Ku\v{c}era (see e.g.~\cite[Proposition~4.1.2]{NiesBook}) that every $1$-random set computes a diagonally non-recursive function. By formalizing this result, one readily sees that $\mathsf{RCA}_0 \vdash \mathsf{MLR} \rightarrow \mathsf{DNR}$. In contrast, $\mathsf{CR}$ is not strong enough to produce diagonally non-recursive functions.
\begin{Proposition} There is an $\omega$-model of $\mathsf{RCA}_0 + \mathsf{CR} + \neg\mathsf{DNR}$. Thus $\mathsf{RCA}_0 \nvdash \mathsf{CR} \rightarrow \mathsf{DNR}$, and therefore also $\mathsf{RCA}_0 \nvdash \mathsf{CR} \rightarrow \mathsf{MLR}$. \end{Proposition}
\begin{proof} For the purposes of this proof, say that a set $A$ is \emph{high} relative to a set $B$ if there is a single function $f \leq_\mathrm{T} A$ that eventually dominates every function $g \leq_\mathrm{T} B$. We apply the following two results.
\begin{itemize} \item[(i)] If $A$ is high relative to $B$, then $A \oplus B$ computes a set that is computably random relative to $B$ (see \cite[Theorem~7.5.2]{NiesBook}; the proof is also replicated in the proof of Theorem~\ref{thm-SRimpCR}).
\item[(ii)] If $B$ does not compute a diagonally non-recursive function, then there is an $A$ that is high relative to $B$ such that $A \oplus B$ does not compute a diagonally non-recursive function~\cite[Lemma~4.14]{Cholak:2006eg}. \end{itemize}
\noindent By iterating result~(ii) in the usual way, we produce an $\omega$-model $\mf M = (\omega, \MP S)$ of $\mathsf{RCA}_0 + \neg\mathsf{DNR}$ such that for every $B \in \MP S$ there is an $A \in \MP S$ that is high relative to $B$. By result~(i), $\mf M \models \mathsf{CR}$. Thus $\mf M \models \mathsf{RCA}_0 + \mathsf{CR} + \neg\mathsf{DNR}$. \end{proof}
In order to give useful formalizations of stronger versions of $\mathsf{DNR}$, we must carefully express computations relative to $Z'$ for a set $Z$ without implying the existence of $Z'$ as a set. We make the following definitions in $\mathsf{RCA}_0$ (see~\cites{AvigadDeanRute,BienvenuPateyShafer}).
\begin{itemize} \item Let $e \in Z'$ abbreviate the formula $\Phi_e^Z(e){\downarrow}$.
\item Let $\sigma \subseteq Z'$ abbreviate the formula $(\forall e < |\sigma|)(\sigma(e) = 1 \leftrightarrow e \in Z')$.
\item Let $\Phi_e^{Z'}(x) = y$ abbreviate the formula $(\exists \sigma \subseteq Z')(\Phi_e^\sigma(x) = y)$. Similarly, let $\Phi_e^{Z'}(x){\downarrow}$ denote that there is a $y$ such that $\Phi_e^{Z'}(x) = y$. \end{itemize}
Notice that, by bounded $\Sigma^0_1$ comprehension, $\mathsf{RCA}_0$ proves that the set $\{e < n : e \in Z'\}$ exists for every $Z$ and $n$. Then by letting $\sigma$ be the characteristic string of $\{e < n : e \in Z'\}$, we see that $\mathsf{RCA}_0$ proves that for every $Z$ and $n$ there is a $\sigma$ of length $n$ such that $(\forall e < |\sigma|)(\sigma(e) = 1 \leftrightarrow e \in Z')$.
\begin{Definition}[$\mathsf{RCA}_0$]{\ }
A function $f \colon \mathbb{N} \rightarrow \mathbb{N}$ is \emph{diagonally non-recursive relative to $Z'$} for a set $Z$ if $\forall e(\Phi_e^{Z'}(e){\downarrow} \rightarrow f(e) \neq \Phi_e^{Z'}(e))$.
$\dnrr{2}$ is the statement ``for every $Z$ there is an $f$ that is diagonally non-recursive relative to $Z'$.'' \end{Definition}
Ku\v{c}era in fact showed that every $n$-random set computes a function that is diagonally non-recursive relative to $0^{(n-1)}$. This result can be formalized in $\mathsf{RCA}_0$ (see \cite[Theorem~2.8]{BienvenuPateyShafer}). In particular, $\mathsf{RCA}_0 \vdash \mlrr{2} \rightarrow \dnrr{2}$. In contrast, $\mathsf{W2R}$ is not strong enough to produce diagonally non-recursive functions relative to $0'$.
\begin{Theorem}\label{thm-WTRnot2DNR} There is an $\omega$-model of $\mathsf{RCA}_0 + \mathsf{W2R} + \neg\dnrr{2}$. Thus $\mathsf{RCA}_0 \nvdash \mathsf{W2R} \rightarrow \dnrr{2}$, and therefore also $\mathsf{RCA}_0 \nvdash \mathsf{W2R} \rightarrow \mlrr{2}$. \end{Theorem}
\begin{proof} The intuition is to build a model of $\mathsf{RCA}_0 + \mathsf{W2R} + \neg\dnrr{2}$ out of the columns of a weakly $2$-random set $Z$ that does not compute a $\dnrr{2}$ function. For this idea to work, $Z$ must be chosen with a little care because the relevant direction of van Lambalgen's theorem does not hold for weak $2$-randomness in general~\cite{BarmpaliasDowneyNg}.
Recall that a set $A$ has \emph{hyperimmune-free degree} (or is \emph{computably dominated}) if every $f \leq_\mathrm{T} A$ is eventually dominated by a computable function. Let $Z$ be a $1$-random set of hyperimmune-free degree that does not compute a diagonally non-recursive function relative to $0'$. Such a $Z$ exists by \cite[Theorem 5.1]{Kucera.Nies:11} (also see~\cite[Exercise 1.8.46]{NiesBook}), which states that if $\MP C \subseteq 2^\omega$ is a non-empty $\Pi^0_1$ class and $B >_\mathrm{T} 0'$ is $\Sigma^0_2$, then there is a $Z \in \MP C$ of hyperimmune-free degree with $Z' \leq_\mathrm{T} B$. Let $\MP C \subseteq 2^\omega$ be a non-empty $\Pi^0_1$ class consisting entirely of $1$-randoms, and let $B$ be any set r.e.\ in $0'$ such that $0' <_\mathrm{T} B <_\mathrm{T} 0''$. Let $Z \in \MP C$ be of hyperimmune-free degree such that $Z' \leq_\mathrm{T} B$. Then of course $Z$ is $1$-random and has hyperimmune-free degree. Furthermore, $Z$ does not compute a diagonally non-recursive function relative to $0'$. If $Z$ computes a diagonally non-recursive function relative to $0'$, then so does $B$, but then we would have $B \geq_\mathrm{T} 0''$ by the Arslanov completeness criterion relative to $0'$, which is a contradiction.
Decompose $Z$ into columns $Z = \bigoplus_{n \in \omega}Z_n$, where $Z_n = \{k : \langle n, k \rangle \in Z\}$ for each $n$. By a straightforward relativization of~\cite[Proposition~3.6.4]{NiesBook}, if $X \oplus Y$ has hyperimmune-free degree and $Y$ is $1$-random relative to $X$, then $Y$ is also weakly $2$-random relative to $X$. It follows that $Z_{n+1}$ is weakly $2$-random relative to $\bigoplus_{i \leq n}Z_i$ for every $n$. This is because $\bigoplus_{i \leq n+1}Z_i$ has hyperimmune-free degree (as $Z$ has hyperimmune-free degree) and $Z_{n+1}$ is $1$-random relative to $\bigoplus_{i \leq n}Z_i$ by van Lambalgen's theorem.
Let $\MP S = \{X : \exists n(X \leq_\mathrm{T} \bigoplus_{i \leq n}Z_i)\}$, and let $\mf M = (\omega, \MP S)$. $\MP S$ contains no diagonally non-recursive function relative to $0'$, so $\mf M \models \mathsf{RCA}_0 + \neg\dnrr{2}$. If $X \in \MP S$ and $n$ is such that $X \leq_\mathrm{T} \bigoplus_{i \leq n}Z_i$, then $Z_{n+1} \in \MP S$ is weakly $2$-random relative to $X$. Thus $\mf M \models \mathsf{W2R}$. Therefore $\mf M \models \mathsf{RCA}_0 + \mathsf{W2R} + \neg\dnrr{2}$. \end{proof}
The principles $\mlrr{2}$ and $\dnrr{2}$ are closely related to the \emph{rainbow Ramsey theorem}. Let $[\mathbb{N}]^n$ denote the set of $n$-element subsets of $\mathbb{N}$, and call a function $f \colon [\mathbb{N}]^n \rightarrow \mathbb{N}$ \emph{$k$-bounded} if $|f^{-1}(c)| \leq k$ for every $c \in \mathbb{N}$. Call an infinite $R \subseteq \mathbb{N}$ a \emph{rainbow for $f$} if $f$ is injective on $[R]^n$. The rainbow Ramsey theorem for pairs and $2$-bounded colorings (denoted $\mathsf{RRT}^2_2$) is the statement ``for every 2-bounded $f \colon [\mathbb{N}]^2 \rightarrow \mathbb{N}$, there is a set $R$ that is a rainbow for $f$.'' By formalizing work of Csima and Mileti~\cite{CsimaMileti}, Conidis and Slaman~\cite{ConidisSlaman} have shown that $\mathsf{RCA}_0 \vdash \mlrr{2} \rightarrow \mathsf{RRT}^2_2$. J.\ Miller~\cite{MillerRRT}, again building on~\cite{CsimaMileti}, has shown that in fact $\mathsf{RCA}_0 \vdash \dnrr{2} \leftrightarrow \mathsf{RRT}^2_2$. By Theorem~\ref{thm-WTRnot2DNR}, it follows that $\mathsf{RCA}_0 \nvdash \mathsf{W2R} \rightarrow \mathsf{RRT}^2_2$.
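On finite universes, the rainbow Ramsey theorem has a simple constructive flavor that may help intuition. The following Python sketch is a hypothetical finite analogue (the greedy strategy and the particular $2$-bounded coloring are our own illustrations, and say nothing about the effective content of $\mathsf{RRT}^2_2$): it greedily grows a set on which a $2$-bounded coloring of pairs is injective.

```python
from itertools import combinations

def greedy_rainbow(f, universe):
    """Greedily build R such that f is injective on the pairs from R
    (a finite 'rainbow' for the pair-coloring f)."""
    R = []
    for x in universe:
        candidate = R + [x]
        colors = [f(a, b) for a, b in combinations(candidate, 2)]
        if len(colors) == len(set(colors)):
            R = candidate
    return R

# A 2-bounded coloring: enumerate all pairs of {0, ..., 7} and give
# each color to exactly two consecutive pairs.
pairs = list(combinations(range(8), 2))
color = {p: i // 2 for i, p in enumerate(pairs)}
f = lambda a, b: color[(min(a, b), max(a, b))]

R = greedy_rainbow(f, range(8))
# f is injective on the pairs from R by the invariant of the greedy loop.
assert len({f(a, b) for a, b in combinations(R, 2)}) == len(list(combinations(R, 2)))
```

The infinitary content of $\mathsf{RRT}^2_2$ lies, of course, in producing an \emph{infinite} rainbow, which this greedy procedure does not address.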
From Theorem~\ref{thm-MLRimpBran}, we know that $\mathsf{RCA}_0 \vdash \mathsf{MLR} \rightarrow \mathsf{BR}$. In particular, if $h$ is any provably total function that is $O(2^n)$, then $\mathsf{RCA}_0 \vdash \mathsf{MLR} \rightarrow \wdrr{h}$. We now show that this implication is close to optimal. Specifically, in Theorem~\ref{thm-WKLnothWDR} below we show that if $h$ is a provably total function that dominates the function $n \mapsto k^n$ for every $k$, then $\mathsf{RCA}_0 \nvdash \mathsf{MLR} \rightarrow \wdrr{h}$. In fact, in this case even $\mathsf{WKL}_0 \nvdash \wdrr{h}$. $\mathsf{WKL}_0$ is the system whose axioms are those of $\mathsf{RCA}_0$, plus \emph{weak K\"onig's lemma}, which is the statement ``every infinite subtree of $2^{<\mathbb{N}}$ has an infinite path.'' $\mathsf{WKL}_0$ is strictly stronger than $\mathsf{RCA}_0 + \mathsf{MLR}$~\cite{YuSimpson}.
Recall the following definitions for a set $X \subseteq \omega$. \begin{itemize} \item Write $\sigma <_\mathrm{L} X$ if $\sigma$ is to the left of $X$: $\exists \rho(\rho^\smallfrown 0 \subseteq \sigma \wedge \rho^\smallfrown 1 \subseteq X)$. Then $X$ is \emph{left-r.e.}\ if $\{\sigma : \sigma <_\mathrm{L} X\}$ is r.e.
\item $X$ is \emph{superlow} if $X' \leq_\mathrm{tt} 0'$. Equivalently, $X$ is superlow if $X' \leq_\mathrm{wtt} 0'$ because, for any $Z \subseteq \omega$, $Z \leq_\mathrm{wtt} 0'$ if and only if $Z \leq_\mathrm{tt} 0'$. \end{itemize}
\begin{Proposition}\label{prop-SuperlowKre} For every non-empty $\Pi^0_1$ class $\MP C \subseteq 2^\omega$, there is a superlow $Z \in \MP C$ such that, for every set $X \leq_\mathrm{T} Z$, there is a $k \in \omega$ such that $X$ is $k^n$-r.e. \end{Proposition}
\begin{proof} Let $Z \mapsto W^Z$ be the r.e.\ operator defined by \begin{align*} 2^e(2n+1) \in W^Z \Leftrightarrow \Phi_e^Z(n) = 1. \end{align*} Apply the proof of the superlow basis theorem as given in~\cite[Theorem~1.8.38]{NiesBook}, but with the operator $W$ instead of the usual Turing jump operator $J$, to get a $Z \in \MP C$ such that $W^Z$ is left-r.e. Clearly $Z' \leq_\mathrm{m} W^Z$, from which it follows that $Z$ is superlow. Now suppose that $X \leq_\mathrm{T} Z$, and let $e$ be such that $\Phi_e^Z = X$. The fact that $W^Z$ is left-r.e.\ implies that $X$ is $2^{2^e(2n+1)}$-r.e., so $X$ is $k^n$-r.e.\ for $k = 2^{2^{e+2}}$. \end{proof}
\begin{Proposition}\label{prop-WKL-Kre} There is an $\omega$-model $\mf M = (\omega, \MP S)$ of $\mathsf{WKL}_0$ such that every $X \in \MP S$ is superlow and for every $X \in \MP S$ there is a $k \in \omega$ such that $X$ is $k^n$-r.e. \end{Proposition}
\begin{proof} Given a set $Z$, decompose $Z$ into columns $Z = \bigoplus_{n \in \omega}Z_n$, and let $$\MP{S}_Z = \{X : \exists n(X \leq_\mathrm{T} \bigoplus_{i \leq n}Z_i)\}.$$ Let $\MP C \subseteq 2^\omega$ be a non-empty $\Pi^0_1$ class such that $(\omega, \MP{S}_Z) \models \mathsf{WKL}_0$ for all $Z \in \MP C$. This can be accomplished, for example, by taking $\MP C$ to be the class of all sets $Z$ such that, for every $n$, $Z_{n+1}$ codes a $\{0,1\}$-valued diagonally non-recursive function relative to $\bigoplus_{i \leq n}Z_i$. Then, for any such $Z$, $(\omega, \MP{S}_Z)$ models $\mathsf{RCA}_0$ plus ``for every $X$ there is a $\{0,1\}$-valued diagonally non-recursive function relative to $X$,'' which is well-known to be equivalent to $\mathsf{WKL}_0$ by formalizing classic results of Jockusch and Soare~\cite{Jockusch-Soare:Pi01}. Let $Z \in \MP C$ be as in the conclusion of Proposition~\ref{prop-SuperlowKre}. Then every $X \in \MP{S}_Z$ is superlow, and, for every $X \in \MP{S}_Z$, there is a $k$ such that $X$ is $k^n$-r.e. Thus $\mf M = (\omega, \MP{S}_Z)$ is the desired model. \end{proof}
\begin{Theorem}\label{thm-WKLnothWDR} Let $h \colon \omega \rightarrow \omega$ be a function that is provably total in $\mathsf{RCA}_0$ and eventually dominates the function $n \mapsto k^n$ for every $k \in \omega$. Then there is an $\omega$-model of $\mathsf{WKL}_0 + \neg\wdrr{h}$. Thus $\mathsf{WKL}_0 \nvdash \wdrr{h}$ and therefore also $\mathsf{RCA}_0 \nvdash \mathsf{MLR} \rightarrow \wdrr{h}$. \end{Theorem}
\begin{proof} If $X$ is $k^n$-r.e.\ and $h$ eventually dominates $k^n$, then it is straightforward to define an $h$-Demuth test capturing $X$. Thus no $k^n$-r.e.\ set is $h$-weakly Demuth random. Let $\mf M = (\omega, \MP S)$ be the model of $\mathsf{WKL}_0$ from Proposition~\ref{prop-WKL-Kre}. Then no $X \in \MP S$ is $h$-weakly Demuth random because for every $X \in \MP S$ there is a $k$ such that $X$ is $k^n$-r.e. Thus $\mf M \models \mathsf{WKL}_0 + \neg\wdrr{h}$. \end{proof}
\end{document} |
\begin{document}
\title{Strong Amplifiers of Natural Selection: Proofs}
\begin{abstract} We consider the modified Moran process on graphs to study the spread of genetic and cultural mutations on structured populations. An initial mutant arises either spontaneously (aka \emph{uniform initialization}), or during reproduction (aka \emph{temperature initialization}) in a population of $n$ individuals, and has a fixed fitness advantage $r>1$ over the residents of the population. The fixation probability is the probability that the mutant takes over the entire population. Graphs that ensure fixation probability of~1 in the limit of infinite populations are called \emph{strong amplifiers}. Previously, only a few examples of strong amplifiers were known for uniform initialization, whereas no strong amplifiers were known for temperature initialization.
In this work, we study necessary and sufficient conditions for strong amplification, and prove negative and positive results. We show that for temperature initialization, graphs that are unweighted and/or self-loop-free have fixation probability upper-bounded by $1-1/f(r)$, where $f(r)$ is a function linear in $r$. Similarly, we show that for uniform initialization, bounded-degree graphs that are unweighted and/or self-loop-free have fixation probability upper-bounded by $1-1/g(r,c)$, where $c$ is the degree bound and $g(r,c)$ a function linear in $r$. Our main positive result complements these negative results, and is as follows: every family of undirected graphs with (i)~self loops and (ii)~diameter bounded by $n^{1-\epsilon}$, for some fixed $\epsilon>0$, can be assigned weights that make it a strong amplifier, both for uniform and temperature initialization. \end{abstract}
\tableofcontents
\section{Introduction}\label{sec:intro}
\noindent{\em The Moran process.} Evolutionary dynamics study the change of populations over time under natural selection and random drift~\cite{Nowak06b}. The Moran process~\cite{Moran1962} is an elegant stochastic model for the rigorous study of how mutations spread in a population. Initially, a population of $n$ individuals, called the residents, exists in a homogeneous state, and a random individual becomes a mutant. The mutants are associated with a fitness advantage $r\geq 1$, whereas the residents have fitness normalized to~1. The Moran process is a discrete-time stochastic process, described as follows. In every step, a single individual is chosen for reproduction with probability proportional to its fitness. This individual produces a single offspring (a copy of itself), which replaces another individual chosen uniformly at random from the population. The main quantity of interest is the \emph{fixation probability} $\mathsf{\rho}(n, r)$, defined as the probability that the single invading mutant will eventually take over the population. As $r$ is typically small (i.e., $r=1+\epsilon$, for some small $\epsilon>0$) and $n$ is large, we study the fixation probability in the limit of large populations, i.e., $\mathsf{\rho}(r)=\lim_{n\to\infty}\mathsf{\rho}(n,r)$. It is known that $\mathsf{\rho}(r)=1-r^{-1}$.
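The limit $\mathsf{\rho}(r)=1-r^{-1}$ comes from the standard finite-population formula $\mathsf{\rho}(n,r)=(1-r^{-1})/(1-r^{-n})$, and it can be checked by simulation. The following Python sketch (function names and parameters are our own, for illustration) uses the fact that in the well-mixed population the mutant count, observed at the steps that change it, performs a biased random walk that moves up with probability $r/(r+1)$: a step increasing the count has probability proportional to $rk(n-k)$ and a decreasing step proportional to $k(n-k)$.

```python
import random

def moran_fixation_sim(n, r, trials, seed=1):
    """Monte Carlo estimate of the fixation probability of a single
    mutant of fitness r in a well-mixed population of size n. Only the
    state-changing steps are simulated: the mutant count k rises with
    probability r/(r+1) and falls otherwise."""
    rng = random.Random(seed)
    up = r / (r + 1)
    fixations = 0
    for _ in range(trials):
        k = 1  # one initial mutant
        while 0 < k < n:
            k += 1 if rng.random() < up else -1
        fixations += (k == n)
    return fixations / trials

def moran_fixation_exact(n, r):
    """Standard closed form: rho(n, r) = (1 - 1/r) / (1 - r**(-n))."""
    return (1 - 1 / r) / (1 - r ** (-n))

# For r = 2 and n = 10 the closed form gives roughly 0.5005; a
# moderately sized Monte Carlo run should land close to it.
assert abs(moran_fixation_sim(10, 2.0, 20000) - moran_fixation_exact(10, 2.0)) < 0.03
```

Increasing $n$ pushes both quantities toward the limit $1-r^{-1}$ quoted above.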
\noindent{\em The Moran process on graphs.} The standard Moran process takes place on \emph{well-mixed} populations where the reproducing individual can replace any other in the population. However, natural populations have spatial structure, where each individual has a specific set of neighbors, and mutation spread must respect this structure. Evolutionary graph theory represents spatial structure as a (generally weighted, directed) graph, where each individual occupies a vertex of the graph, and edges define interactions between neighbors~\cite{Lieberman05}. The Moran process on graphs is similar to the standard Moran process, with the exception that the offspring replaces a neighbor of the reproducing individual. The well-mixed population is represented by the complete graph $K_n$. If the graph is strongly connected, the Moran process is guaranteed to reach a homogeneous state where mutants either fixate or go extinct.
\noindent{\em Mutant initialization.} The asymmetry introduced by the population structure makes the fixation probability depend on the placement of the initial mutant. In \emph{uniform initialization}, the initial mutant arises \emph{spontaneously}, i.e., uniformly at random on each vertex. In \emph{temperature initialization}, the initial mutant arises \emph{during reproduction}, i.e., on each vertex with probability proportional to the rate at which the vertex is replaced by offspring from its neighbors. Hence our interest is in the fixation probability $\mathsf{\rho}(G_n^{\mathsf{w}}, r, Z)$ for a weighted graph $G_n^{\mathsf{w}}$ of $n$ vertices and under initialization $Z\in\{\mathsf{U}, \mathsf{T}\}$, denoting uniform and temperature initialization, respectively.
\noindent{\em Amplifiers of selection.} Population structure affects the fixation probability of mutants. An infinite family of graphs $(G_n^{\mathsf{w}})_n$ is \emph{amplifying} for initialization $Z$ if $\lim_{n\to\infty}\mathsf{\rho}(G_n^{\mathsf{w}}, r, Z)>1-r^{-1}$.
Intuitively, the fitness advantage of mutants is being ``amplified'' by the structure compared to the well-mixed population. \emph{Strong amplifying} families have $\lim_{n\to\infty}\mathsf{\rho}(G_n^{\mathsf{w}}, r, Z)=1$, and hence ensure the fixation of mutants. On the other hand, \emph{bounded amplifiers} have $\lim_{n\to\infty}\mathsf{\rho}(G_n^{\mathsf{w}}, r, Z)\leq 1-1/f(r)$, where $f$ is a linear function, and hence provide limited amplification at best.
\noindent{\em Existing results.} The Moran process on graphs was introduced in~\cite{Lieberman05}, where several amplifying and strongly amplifying families were presented. Under uniform initialization, the canonical example is the family of undirected Star graphs, with fixation probability $1-r^{-2}$, making it a \emph{quadratic uniform amplifier}~\cite{Lieberman05,Broom2008,Monk2014}. Among directed graphs, strongly amplifying families are known to exist: (i)~Superstars and Metafunnels were already introduced in~\cite{Lieberman05}, where their strong amplifying properties were outlined, and (ii)~more recently, the family of Megastars was rigorously proved to be a strong amplifying family~\cite{Galanis17}. Megastars were subsequently shown to be optimal (up to logarithmic factors) with respect to the rate at which the fixation probability converges to~1 as a function of $n$~\cite{Goldberg17}. Among undirected graphs, the family of Stars was the best amplifying family known for a long time, and the existence of strong amplifiers was open. Recently, undirected strong amplifiers were presented independently in~\cite{Goldberg17} and~\cite{Giakkoupis16}.
Under temperature initialization, the landscape is sparser. None of the uniform amplifiers mentioned in the previous paragraph is a temperature amplifier. It turns out that on all those structures the mutants go extinct with high probability when the initial placement is according to temperature. Recently, the Looping Star family was introduced in~\cite{Adlam15} and was shown to be a quadratic amplifier under both initialization schemes. Crucially, Looping Stars contain self-loops and weights. To our knowledge, no other temperature amplifier is known.
\noindent{\em Our contributions.} In this work, we study necessary and sufficient conditions for strong amplifiers, and prove negative and positive results. \begin{compactenum} \item Our negative results are as follows. For temperature initialization, we show that graphs which are unweighted and/or self-loop-free have fixation probability upper-bounded by $1-1/f(r)$, where $f(r)$ is a function linear in $r$. Hence, without both weights and self-loops, there are only bounded temperature amplifiers. Similarly, we show that for uniform initialization, bounded-degree graphs that are unweighted and/or self-loop-free have fixation probability upper-bounded by $1-1/g(r,c)$, where $c$ is the degree bound and $g(r,c)$ is a function linear in $r$. Hence, without both weights and self-loops, bounded-degree graph families are only bounded uniform amplifiers. \item Our positive result complements these negative results and is as follows. We show that every family of undirected graphs with (i)~self-loops and (ii)~diameter bounded by $n^{1-\epsilon}$, for some fixed $\epsilon>0$, can be assigned weights that make the family a strong amplifier, both for uniform and temperature initialization. Moreover, the weight construction requires $O(n)$ time. \end{compactenum}
Our proof techniques rely on the analysis of Markov chains, the Cauchy-Schwarz inequality, concentration bounds, stochastic domination and coupling arguments. The weight construction in our positive result is straightforward; however, proving the amplification properties of the resulting structure is more involved.
\subsection{Other Related Work}\label{subsec:other_related} Strong amplifiers were already introduced in~\cite{Lieberman05}, however it was later shown that the fixation probability on Superstars is lower than originally stated, and hence the heuristic argument for strong amplification cannot be made formal~\cite{Diaz13}. In~\cite{Galanis17}, it was shown that the fixation probability on Superstars as appeared in~\cite{Lieberman05} is indeed too optimistic, by proving an upper bound on the rate at which the probability can tend to~1 as a function of $n$. A revised analysis of Superstars appeared in~\cite{Hauert}. The work of~\cite{Pavlogiannis17} introduced the Metastars as a family of unweighted undirected graphs with better amplification properties than Stars, for specific values of the fitness advantage $r$. Other aspects of the Moran process on graphs have also been studied in the literature. In~\cite{Mertzios13}, the authors studied undirected suppressors of selection, which are graphs that suppress the selective advantage of mutants, as opposed to amplifying it. Recently, a family of strong suppressors was presented~\cite{Giakkoupis16}. The work of~\cite{Mertzios13b} studies selective amplifiers, a notion that characterizes the number of initial vertices that guarantee mutant fixation. Randomly structured populations were shown to have no effect on fixation probability in~\cite{Adlam2014}. Besides the fixation probability, the absorption time of the Moran process is crucial for characterizing the rate of evolution~\cite{Frean2013b} and has been studied on various graphs~\cite{Diaz16}. Finally, computational aspects of computing the fixation probability on graphs were studied in~\cite{CSPPL}, where the problem was shown to admit a fully polynomial randomized approximation scheme, later improved in~\cite{Chatterjee17}.
\section{Organization} The organization of this document is as follows: we first present a detailed description of our model and our results (Section 2). We then present the formal notation (Section 3), the proofs of our negative results (Section 4), and the proofs of our positive results (Section 5).
\section{Model and Summary of Results}
\subsection{Model}
\noindent{\em The birth-death Moran process.} The \textit{Moran process} considers a population of $n$ individuals, which undergoes reproduction and death, and each individual is either a resident or a mutant~\cite{Moran1962}. The residents and the mutants have constant fitness 1 and $r$, respectively. The Moran process is a discrete-time stochastic process defined as follows: in the initial step, a single mutant is introduced into a homogeneous resident population. At each step, an individual is chosen randomly for reproduction with probability proportional to its fitness; another individual is chosen uniformly at random for death and is replaced by a new individual of the same type as the reproducing individual. Eventually, this Markovian process ends when all individuals become of one of the two types. The probability of the event that all individuals become mutants is called the {\em fixation} probability.
\noindent{\em The Moran process on graphs.} In general, the Moran process takes place on a population structure, which is represented as a graph. The vertices of the graph represent individuals and edges represent interactions between individuals \cite{Lieberman05, Nowak06b}. Formally, let $G_n=(V_n,E_n,W_n)$ be a weighted, directed graph, where $V_n=\{1,2,\ldots,n\}$ is the vertex set, $E_n$ is the Boolean edge matrix, and $W_n$ is a stochastic weight matrix. An edge is a pair of vertices $(i,j)$ which is indicated by $E_n[i,j]=1$ and denotes that there is an interaction from $i$ to $j$ (whereas we have $E_n[i,j]=0$ if there is no interaction from $i$ to $j$). The stochastic weight matrix $W_n$ assigns weights to interactions, i.e., $W_n[i,j]$ is positive iff $E_n[i,j]=1$, and for all $i$ we have $\sum_{j} W_n[i,j]=1$. For a vertex $i$, we denote by $\mathsf{In}(i)=\{ j \mid E_n[j,i]=1\}$ (resp., $\mathsf{Out}(i)=\{ j \mid E_n[i,j]=1\}$) the set of vertices that have an incoming (resp., outgoing) edge to (resp., from) $i$. Similarly to the Moran process, at each step an individual is chosen randomly for reproduction with probability proportional to its fitness. An edge originating from the reproducing vertex is selected randomly with probability equal to its weight. The terminal vertex of the chosen edge takes on the type of the vertex at the origin of the edge. In other words, the stochastic matrix $W_n$ is the weight matrix that represents the choice probability of the edges. We only consider graphs which are {\em connected}, i.e., every pair of vertices is connected by a path. This is a sufficient condition to ensure that in the long run, the Moran process reaches a homogeneous state (i.e., the population consists entirely of individuals of a single type). \iffullproofs See Figure~\ref{fig:moran} for an illustration. \fi The well-mixed population is represented by a complete graph where all edges have equal weight of $1/n$.
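As an illustration, one birth-death step of the process on a weighted digraph can be sketched as follows (a minimal Python sketch; the representation of $W_n$ as a nested list and the function names are our own, not part of the paper's formalism):

```python
import random

def moran_step(mutants, W, r, rng):
    """One birth-death step on a weighted digraph.
    mutants: set of vertices currently occupied by mutants;
    W: row-stochastic weight matrix as a nested list, with W[u][v] > 0
    iff there is an edge (u, v); r: mutant fitness.  Returns the new set."""
    n = len(W)
    fitness = [r if v in mutants else 1.0 for v in range(n)]
    # Birth: reproducing vertex chosen proportionally to fitness.
    u = rng.choices(range(n), weights=fitness, k=1)[0]
    # Death: replaced neighbour chosen with probability W[u][v].
    v = rng.choices(range(n), weights=W[u], k=1)[0]
    successor = set(mutants)
    if u in mutants:
        successor.add(v)      # mutant offspring placed on v
    else:
        successor.discard(v)  # resident offspring placed on v
    return successor
```

Iterating this step until the configuration becomes empty or full yields one run of the Moran process on the graph.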
\iffullproofs \begin{figure}
\caption{ Illustration of one step of the Moran process on a weighted graph with self-loops. Residents are depicted as red vertices, and mutants as blue vertices. As a concrete example, we consider that the relative fitness of the mutants is $r=2$. In Figure~\ref{fig:moran}(A), the total fitness of the population is $\mathcal{F}=1+2=3$, and hence the probability of selecting the resident (resp., mutant) for reproduction equals $1/3$ (resp., $2/3$). The mutant reproduces along an edge, and the edge is chosen randomly proportionally to the edge weight. Figure~\ref{fig:moran}(B) shows that different reproduction events might lead to the same outcome. }
\label{fig:moran}
\end{figure} \fi
\noindent{\em Classification of graphs.} We consider the following classification of graphs: \begin{enumerate} \item {\em Directed vs undirected graphs.} A graph $G_n=(V_n,E_n,W_n)$ is called {\em undirected} if for all $1\leq i,j \leq n$ we have $E_n[i,j]=E_n[j,i]$. In other words, there is an edge from $i$ to $j$ iff there is an edge from $j$ to $i$, which represents symmetric interaction. If a graph is not undirected, then it is called a {\em directed} graph.
\item {\em Self-loop free graphs.} A graph $G_n=(V_n,E_n,W_n)$ is called a {\em self-loop free} graph iff for all $1 \leq i \leq n$ we have $E_n[i,i]=W_n[i,i]=0$.
\item {\em Weighted vs unweighted graphs.} A graph $G_n=(V_n,E_n,W_n)$ is called an {\em unweighted} graph if for all $1 \leq i \leq n$ we have \[ W_n[i,j]= \begin{cases}
\frac{1}{|\mathsf{Out}(i)|} & j \in \mathsf{Out}(i);\\ 0 & j \not\in \mathsf{Out}(i) \end{cases} \] In other words, in unweighted graphs for every vertex the edges are chosen uniformly at random. Note that for unweighted graphs the weight matrix is not relevant, and the graph can be specified simply by the structure $(V_n,E_n)$. In the sequel, we will represent unweighted graphs as $G_n=(V_n,E_n)$.
\item {\em Bounded degree graphs.} The degree of a graph $G_n=(V_n,E_n,W_n)$, denoted $\deg(G_n)$, is $\max\{ |\mathsf{In}(i)|,|\mathsf{Out}(i)| \mid 1 \leq i \leq n\}$, i.e., the maximum in-degree or out-degree. For a family of graphs $(G_n)_{n > 0}$ we say that the family has bounded degree if there exists a constant $c$ such that the degree of all graphs in the family is at most $c$, i.e., for all $n$ we have $\deg(G_n) \leq c$. \end{enumerate}
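The induced weight matrix of an unweighted graph and the degree bound above can be sketched in code (Python, with our own helper names; $E_n$ is represented as a 0/1 nested list):

```python
def unweighted_matrix(E):
    """Row-stochastic weight matrix induced by an unweighted graph:
    each vertex spreads weight uniformly over its out-neighbours."""
    n = len(E)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        out = [j for j in range(n) if E[i][j]]
        for j in out:
            W[i][j] = 1.0 / len(out)
    return W

def degree(E):
    """deg(G) = maximum over all vertices of in-degree and out-degree."""
    n = len(E)
    out_degs = [sum(E[i]) for i in range(n)]
    in_degs = [sum(E[i][j] for i in range(n)) for j in range(n)]
    return max(out_degs + in_degs)
```

For the Star graph with center $0$ and three leaves, for instance, the center's three outgoing edges each receive weight $1/3$, each leaf's single edge receives weight $1$, and the degree is $3$.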
\noindent{\em Initialization of the mutant.} The fixation probability is affected by many different factors \cite{Patwa2008}. In a well-mixed population, the fixation probability depends on the population size $n$ and the relative fitness advantage $r$ of mutants~\cite{Maruyama1974a, Nowak06b}. For the Moran process on graphs, the fixation probability also depends on the population structure, which breaks the symmetry and homogeneity of the well-mixed population
\cite{levin1974disturbance, levin1976population, durrett1994stochastic, Lieberman05, Broom2008, Frean2013, Whitlock2003, Houchmandzadeh2011}. Finally, for general population structures, the fixation probability typically depends on the initial location of the mutant \cite{Allen2014, Antal2006}, unlike the well-mixed population where the probability of the mutant fixing is independent of where the mutant arises \cite{Maruyama1974a, Nowak06b}. There are two standard ways mutants may arise in a population~\cite{Lieberman05,Adlam15}. First, mutants may arise spontaneously and with equal probability at any vertex of the population structure. In this case we consider that the mutant arises at any vertex uniformly at random, and we call this \textit{uniform initialization}. Second, mutants may be introduced through reproduction, and thus arise at a vertex with rate proportional to the incoming edge weights of the vertex. We call this \textit{temperature initialization}. In general, uniform and temperature initialization result in different fixation probabilities.
\noindent{\em Amplifiers, quadratic amplifiers, and strong amplifiers.} Depending on the initialization, a population structure can distort fitness differences \cite{Lieberman05, Nowak06b, Broom2008}, where the well-mixed population serves as a canonical point of comparison. Intuitively, amplifiers of selection exaggerate variations in fitness by increasing (respectively decreasing) the chance of fitter (respectively weaker) mutants fixing compared to their chance of fixing in the well-mixed population. In a well-mixed population of size $n$, the fixation probability is \[ \frac{1-1/r}{1-(1/r)^n}. \] Thus, in the limit of large population (i.e., as $n \to \infty$) the fixation probability in a well-mixed population is $1-1/r$. We focus on two particular classes of amplifiers that are of special interest. A family of graphs $(G_n)_{n>0}$ is a {\em quadratic} amplifier if in the limit of large population the fixation probability is $1-1/r^2$. Thus, a mutant with a 10\% fitness advantage over the resident has approximately the same chance of fixing in quadratic amplifiers as a mutant with a 21\% fitness advantage in the well-mixed population. A family of graphs $(G_n)_{n> 0}$ is an {\em arbitrarily strong} amplifier (hereinafter called simply a strong amplifier) if for any constant $r> 1$ the fixation probability approaches~1 at the limit of large population sizes, whereas when $r<1$, the fixation probability approaches~0. There is a much finer classification of amplifiers presented in~\cite{Adlam15}.
We focus on quadratic amplifiers which are the most well-known among polynomial amplifiers, and strong amplifiers which represent the strongest form of amplification.
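The ``10\% vs.\ 21\%'' comparison for quadratic amplifiers follows from the identity $1-1/r^2$ being the well-mixed limit evaluated at fitness $r^2$: a mutant of fitness $r$ on a quadratic amplifier fixates with the same limiting probability as a mutant of fitness $r^2$ in the well-mixed population. A tiny Python check (the helper names are ours):

```python
def rho_well_mixed(r):
    """Large-population fixation probability in the well-mixed population."""
    return 1 - 1 / r

def rho_quadratic(r):
    """Large-population fixation probability on a quadratic amplifier."""
    return 1 - 1 / r ** 2
```

Thus a $10\%$ advantage ($r=1.1$) on a quadratic amplifier matches a $21\%$ advantage ($r=1.1^2=1.21$) in the well-mixed population.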
Amplifiers tend to have fixation times longer than the well-mixed population. Therefore they are especially useful in situations where the rate-limiting step is the discovery and evaluation of marginally advantageous mutants. An interesting direction for future work would be to consider amplifiers together with the time-scale of evolutionary trajectories.
\noindent{\em Existing results.} We summarize the main existing results in terms of uniform and temperature initialization. \begin{enumerate} \item {\em Uniform initialization.} First, consider the family of Star graphs, which consist of one central vertex and $n-1$ leaf vertices, with each leaf being connected to and from the central vertex. Star graphs are unweighted, undirected, self-loop free graphs, whose degree is linear in the population size. Under uniform initialization, the family of Star graphs is a quadratic amplifier~\cite{Lieberman05, Nowak06b}. A generalization of Star graphs, called Superstars~\cite{Lieberman05,Nowak06b,Hauert,CSPPL}, are known to be strong amplifiers under uniform initialization~\cite{Galanis17}. The Superstar family consists of unweighted, self-loop free, but directed graphs where the degree is linear in the population size. Another family of directed graphs with strong amplification properties, called Megastars, was recently introduced in~\cite{Galanis17}. The Megastars are stronger amplifiers than the Superstars: the fixation probability on the former is approximately $1-n^{-1/2}$ (ignoring logarithmic factors), which is asymptotically optimal (again, ignoring logarithmic factors), whereas the fixation probability on the Superstars converges to~1 at a slower rate. In the limit of $n\to \infty$, both families approach the fixation probability~1.
\item {\em Temperature initialization.} While the family of Star graphs is a quadratic amplifier under uniform initialization, it is not even an amplifier under temperature initialization~\cite{Adlam15}. It was shown in~\cite{Adlam15} that by adding self-loops and weights to the edges of the Star graph, a graph family, namely the family of Looping Stars, can be constructed, which is a quadratic amplifier simultaneously under temperature and uniform initialization. Note that in contrast to Star graphs, the Looping Star graphs are weighted and also have self-loops.
\end{enumerate}
\noindent{\em Open questions.} Despite several important existing results on amplifiers of selection, several basic questions have remained open: \begin{enumerate} \item {\em Question~1.} Does there exist a family of self-loop free graphs (weighted or unweighted) that is a quadratic amplifier under temperature initialization?
\item {\em Question~2.} Does there exist a family of unweighted graphs (with or without self-loops) that is a quadratic amplifier under temperature initialization?
\item {\em Question~3.} Does there exist a family of bounded degree self-loop free (weighted or unweighted) graphs that is a strong amplifier under uniform initialization?
\item {\em Question~4.} Does there exist a family of bounded degree unweighted graphs (with or without self-loops) that is a strong amplifier under uniform initialization?
\item {\em Question~5.} Does there exist a family of graphs that is a strong amplifier under temperature initialization? More generally, does there exist a family of graphs that is a strong amplifier both under temperature and uniform initialization?
\end{enumerate} To summarize, the open questions ask for (i)~the existence of quadratic amplifiers under temperature initialization without the use of self-loops, or weights (Questions~1 and~2); (ii)~the existence of strong amplifiers under uniform initialization without the use of self-loops, or weights, and while the degree of the graph is small;
and (iii)~the existence of strong amplifiers under temperature initialization. While the answers to Question~1 and Question~2 are positive under uniform initialization, they have remained open under temperature initialization. Questions~3 and~4 are similar to Questions~1 and~2, but focus on uniform rather than temperature initialization. The restriction to graphs of bounded degree is natural: large degree means that some individuals must have a lot of interactions, whereas graphs of bounded degree represent simple structures. Question~5 was mentioned as an open problem in~\cite{Adlam15}. Note that under temperature initialization, even the existence of a cubic amplifier, i.e., one that achieves fixation probability at least $1-(1/r^3)$ in the limit of large population, has been open~\cite{Adlam15}.
\subsection{Results} In this work we present several negative as well as positive results that answer the open questions (Questions~1-5) mentioned above. We first present our negative results.
\noindent{\em Negative results.} Our main negative results are as follows: \begin{enumerate} \item Our first result (Theorem~1) shows that for any self-loop free weighted graph $G_n=(V_n,E_n,W_n)$, for any $r\geq 1$, under temperature initialization the fixation probability is at most $1-1/(r+1)$. The implication of the above result is that it answers Question~1 in the negative.
\item Our second result (Theorem~2) shows that for any unweighted (with or without self-loops) graph $G_n=(V_n,E_n)$, for any $r \geq 1$, under temperature initialization the fixation probability is at most $1-1/(4r+2)$. The implication of the above result is that it answers Question~2 in the negative.
\item Our third result (Theorem~3) shows that for any bounded degree self-loop free graph (possibly weighted) $G_n=(V_n,E_n,W_n)$, for any $r \geq 1$, under uniform initialization the fixation probability is at most $1-1/(c+c^2r)$, where $c$ is the bound on the degree, i.e., $\deg(G_n) \leq c$. The implication of the above result is that it answers Question~3 in the negative.
\item Our fourth result (Theorem~4) shows that for any unweighted, bounded degree graph (with or without self-loops) $G_n=(V_n,E_n)$, for any $r \geq 1$, under uniform initialization the fixation probability is at most $1-1/(1+r c)$, where $c$ is the bound on the degree, i.e., $\deg(G_n) \leq c$. The implication of the above result is that it answers Question~4 in the negative.
\end{enumerate}
\noindent{\em Significance of the negative results.} We now discuss the significance of the above results. \begin{enumerate} \item The first two negative results show that in order to obtain quadratic amplifiers under temperature initialization, self-loops and weights are inevitable, complementing the existing results of~\cite{Adlam15}. More importantly, it shows a sharp contrast between temperature and uniform initialization: while self-loop free, unweighted graphs (namely, Star graphs) are quadratic amplifiers under uniform initialization, no such graph families are quadratic amplifiers under temperature initialization.
\item The third and fourth results show that without using self-loops and weights, bounded degree graphs cannot be made strong amplifiers even under uniform initialization. See also \cref{rem:bounded_degree_uniform}. \end{enumerate}
\noindent{\em Positive result.} Our main positive result shows the following: \begin{enumerate} \item For any constant $\epsilon>0$, consider any connected unweighted graph $G_n=(V_n,E_n)$ of $n$ vertices with self-loops and which has {\em diameter} at most $n^{1-\epsilon}$. The diameter of a connected graph is the maximum, among all pairs of vertices, of the length of the shortest path between that pair. We establish (Theorem~5) that there is a stochastic weight matrix $W_n$
such that for any $r>1$ the fixation probability on $G_n=(V_n,E_n,W_n)$ both under uniform and temperature initialization is at least $1-\frac{1}{n^{\epsilon/3}}$. An immediate consequence of our result is the following: for any family of connected unweighted graphs with self-loops $(G_n=(V_n,E_n))_{n> 0}$ such that the diameter of $G_n$ is at most $n^{1-\epsilon}$, for a constant $\epsilon>0$, one can construct a stochastic weight matrix $W_n$ such that the resulting family $(G_n=(V_n,E_n,W_n))_{n > 0}$ of weighted graphs is a strong amplifier simultaneously under uniform and temperature initialization. Thus we answer Question~5 in the affirmative. \end{enumerate}
\noindent{\em Significance of the positive result.} We highlight some important aspects of the results established in this work. \begin{enumerate} \item First, note that for the fixation probability of the Moran process on graphs to be well defined, a necessary and sufficient condition is that the graph is connected. A uniformly chosen random connected unweighted graph of $n$ vertices has diameter bounded by a constant, with high probability. Hence, within the family of connected, unweighted graphs, the family of graphs of diameter at most $O(n^{1-\epsilon})$, for any constant $0<\epsilon<1$, has probability measure~1. Our results establish a strong dichotomy: (a)~the negative results state that without self-loops and/or without weights, {\em no} family of graphs can be a quadratic amplifier (let alone a strong amplifier) even for temperature initialization alone; and (b)~in contrast, for {\em almost all} families of connected graphs with self-loops, there exist weight functions such that the resulting family of weighted graphs is a strong amplifier both under temperature and uniform initialization.
\item Second, with the use of self-loops and weights, even simple graph structures, such as Star graphs, Grids, and well-mixed structures (i.e., complete graphs) can be made strong amplifiers.
\item Third, our positive result is constructive, rather than existential. In other words, we not only show the existence of strong amplifiers, but present a construction of them.
\end{enumerate}
Our results are summarized in \cref{table}.
\begin{remark}
\noindent{\em Edges with zero weight.} Note that edges can be effectively removed by assigning them zero weight (however, no weight assignment can create edges that do not exist). Therefore, when our construction works for some graph, it also works for a graph that contains some additional edges. In particular, our construction easily works for complete graphs. The construction can also be extended to a scenario in which we insist that each edge is assigned a positive (non-zero) weight. \end{remark}
\begin{table} \begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
& \multicolumn{2}{|c|}{Temperature} & \multicolumn{2}{|c|}{Uniform$^\star$} \\ \cline{2-5}
& Loops & No Loops & Loops & No Loops \\ \hline Weights & $\checkmark$ & $\times$ & $\checkmark$& $\times$ \\ \hline No Weights & $\times$ & $\times$ & $\times$ & $\times$ \\ \hline \end{tabular} \end{center} \caption{ Summary of our results on the existence of strong amplifiers for different initialization schemes (temperature initialization or uniform initialization) and graph families (presence or absence of loops and/or weights). The ``$\checkmark$'' symbol marks that for the given choice of initialization scheme and graph family, almost all graphs admit a weight function that makes them strong amplifiers. The ``$\times$'' symbol marks that for the given choice of initialization scheme and graph family, no strong amplifiers exist (under any weight function). The asterisk signifies that the negative results under uniform initialization only hold for bounded degree graphs. } \label{table} \end{table}
\section{Preliminaries: Formal Notation}\label{sec:preliminaries}
\begin{comment}
\subsection{Notation on Graphs}\label{subsec:graphs} We denote by $G_n=(V,E)$ an undirected graph where $V$ is the set of $n$ nodes and $E$ is the set of edges. For every node $u\in V$, we denote by $\mathsf{Nh}(u)=\{v: (u,v)\in E\}$ the set of \emph{neighbors} of $u$ (since $G_n$ is undirected, $v\in \mathsf{Nh}(u) \iff u\in \mathsf{Nh}(v)$). We require that $u\in \mathsf{Nh}(u)$ for every node $u\in V$ (i.e, the graph $G_n$ has self-loops). Additionally, for every pair of nodes $u,v$, a path $P:u\rightsquigarrow v$ exists (i.e., $G_n$ is \emph{connected}). Given $G_n$, a \emph{weight function} $\mathsf{w}:E\to \mathbb{R}_{\geq 0}$ maps every edge of $G_n$ to a non-negative real value. For brevity, we extend the weight function to nodes, and denote by $\mathsf{w}(u)=\sum_{v\in \mathsf{Nh}(u)}\mathsf{w}(u,v)$. We denote by $G_n^{\mathsf{w}}$ the weighted undirected graph $G_n$ with weight function $\mathsf{w}$. The \emph{temperature} of a node $u$ is defined as \[ \mathsf{T}(u) = \sum_{v\in \mathsf{Nh}(u)} \frac{\mathsf{w}(u,v)}{ \mathsf{w}(v)} \]
Note that \[ \sum_u\mathsf{T}(u) = \sum_u \sum_{v\in \mathsf{Nh}(u)} \frac{\mathsf{w}(u,v)}{\mathsf{w}(v)} = \sum_v \sum_{u\in\mathsf{Nh}(v)} \frac{\mathsf{w}(u,v)}{\mathsf{w}(v)} = \sum_v 1 = n \numberthis\label{eq:total_temp} \]
The weighted graph $G_n^{\mathsf{w}}$ is called \emph{isothermal}~\cite{Lieberman05} if for every node $u$ we have $\mathsf{T}(u)=1$. Given a set of nodes $X\subseteq V$ with $|X|=m$, we denote by $G_m^{\mathsf{w}}\restr{X}=(X, E\restr{X} = E\cap (X\times X))$ the subgraph of $G$ induced by $X$, with weight function $\mathsf{w}\restr{X}: E\restr{X}\to \mathbb{R}_{\geq 0}$ such that \[ \mathsf{w}\restr{X}(u,v)= \left\{ \begin{array}{lr} \mathsf{w}(u,u) + \sum_{(u,w)\in E\setminus E\restr{X} }\mathsf{w}(u,w) & \text{ if } u=v\\ \mathsf{w}(u,v) \text{ otherwise } \end{array} \right. \] In words, the weights on the edges of $u$ to nodes that do not belong to $X$ are added to the self-loop weight of $u$. The temperature of $u$ in $G_n^{\mathsf{w}}\restr{X}$ is defined as \[ \mathsf{T}\restr{X}(u) = \sum_{v\in \mathsf{Nh}(u)\cap X} \frac{\mathsf{w}\restr{X}(u,v)}{ \mathsf{w}\restr{X}(v)} \]
Finally, we call $G_n^{\mathsf{w}}$ \emph{self-loop free} if $\mathsf{w}(u,u)=0$ for every $u\in V$, and we call $G_n^{\mathsf{w}}$ \emph{unweighted} if (i)~$\mathsf{w}(u,v)=1$ for all $u\neq v$ and (ii)~$\mathsf{w}(u,u)\in\{0,1\}$ for all $u\in V$. \end{comment}
\subsection{The Moran Process on Weighted Structured Populations}\label{subsec:moran_process} We consider a population of $n$ individuals on a graph $G_n=(V_n,E_n,W_n)$. Each individual of the population is either a \emph{resident}, or a \emph{mutant}. Mutants are associated with a \emph{reproductive rate} (or \emph{fitness}) $r$, whereas the reproductive rate of residents is normalized to $1$. Typically we consider the case where $r>1$, i.e., mutants are \emph{advantageous}, whereas when $r<1$ we call the mutants \emph{disadvantageous}. We now introduce the formal notation related to the process.
\noindent{\em Configuration.} A \emph{configuration} of $G_n$ is a subset $S \subseteq V_n$ which specifies the vertices of $G_n$ that are occupied by mutants; the remaining vertices $V_n\setminus S$ are occupied by residents. We denote by $\mathsf{F}(S)=r\cdot |S| + n-|S|$ the total fitness of the population in configuration $S$, where $|S|$ is the number of mutants.
\noindent{\em The Moran process.}
The birth-death Moran process on $G_n$ is a discrete-time Markovian random process. We denote by $\mathsf{X}_i$ the random variable for a configuration at time step $i$, and $\mathsf{F}(\mathsf{X}_i)$ and $|\mathsf{X}_i|$ denote the total fitness and the number of mutants of the corresponding configuration, respectively. The probability distribution for the next configuration $\mathsf{X}_{i+1}$ at time $i+1$ is determined by the following two events in succession: \begin{compactdesc} \item[{\em Birth:}] One individual is chosen at random to reproduce, with probability proportional to its fitness. That is, the probability to reproduce is $r/\mathsf{F}(\mathsf{X}_i)$ for a mutant, and $1/\mathsf{F}(\mathsf{X}_i)$ for a resident.
Let $u$ be the vertex occupied by the reproducing individual. \item[{\em Death:}] A neighboring vertex $v\in \mathsf{Out}(u)$ is chosen randomly with probability $W_n[u,v]$. The individual occupying $v$ dies, and the reproducing individual places a copy of itself on $v$. Hence, if $u\in \mathsf{X}_i$, then $\mathsf{X}_{i+1}=\mathsf{X}_i\cup \{v\}$, otherwise $\mathsf{X}_{i+1}=\mathsf{X}_i\setminus\{v\}$. \end{compactdesc} The above process is known as the \emph{birth-death} Moran process, where the death event is conditioned on the birth event, and the dying individual is a neighbor of the reproducing one.
\noindent{\em Probability measure.} Given a graph $G_n$ and the fitness $r$, the birth-death Moran process defines a probability measure on sequences of configurations, which we denote as $\mathbb{P}^{G_n,r}[\cdot]$. If the initial configuration is $\{u\}$, then we define the probability measure as $\mathbb{P}^{G_n,r}_u[\cdot]$, and if the graph and fitness $r$ are clear from the context, then we drop the superscript.
\noindent{\em Fixation event.} The fixation event, denoted $\mathcal{E}$, represents that all vertices are mutants, i.e., $\mathsf{X}_i=V$ for some $i$. In particular, $\mathbb{P}^{G_n,r}_u[\mathcal{E}]$ denotes the fixation probability in $G_n$ for fitness $r$ of the mutant, when the initial mutant is placed on vertex $u$. We will denote this fixation probability as $\mathsf{\rho}(G_n,r,u)=\mathbb{P}^{G_n,r}_u[\mathcal{E}]$.
\subsection{Initialization and Fixation Probabilities} We will consider three types of initialization, namely, (a)~uniform initialization, where the mutant arises at vertices with uniform probability, (b)~temperature initialization, where the mutant arises at vertices proportional to the temperature, and (c)~convex combination of the above two.
\noindent{\em Temperature.} For a weighted graph $G_n=(V_n,E_n,W_n)$, the temperature of a vertex $u$, denoted $\mathsf{T}(u)$, is $\sum_{v \in \mathsf{In}(u)} W_n[v,u]$, i.e., the sum of the incoming weights. Note that $\sum_{u \in V_n} \mathsf{T}(u)=n$, and a graph is {\em isothermal} iff $\mathsf{T}(u)=1$ for all vertices $u$.
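The temperature of every vertex can be computed directly from the weight matrix; a small illustrative sketch (assuming $W$ is given as a list of rows of a row-stochastic matrix; the name `temperatures` is ours):

```python
def temperatures(W):
    """Temperature T(u) = sum of incoming weights W[v][u],
    for a row-stochastic matrix W given as a list of rows."""
    n = len(W)
    return [sum(W[v][u] for v in range(n)) for u in range(n)]
```

On an undirected cycle with weight $1/2$ on each edge, every vertex has temperature $1$, i.e., the graph is isothermal, and the temperatures always sum to $n$.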
\noindent{\em Fixation probabilities.} We now define the fixation probabilities under the different initializations.
\begin{compactenum} \item {\em Uniform initialization.} The fixation probability under uniform initialization is \[ \mathsf{\rho}(G_n,r,\mathsf{U}) =\sum_{u \in V_n} \frac{1}{n} \cdot \mathsf{\rho}(G_n,r,u). \]
\item {\em Temperature initialization.} The fixation probability under temperature initialization is \[ \mathsf{\rho}(G_n,r,\mathsf{T}) =\sum_{u \in V_n} \frac{\mathsf{T}(u)}{n} \cdot \mathsf{\rho}(G_n,r,u). \]
\item {\em Convex initialization.} In $\eta$-\emph{convex initialization}, where $\eta\in[0,1]$, the initial mutant arises with probability $(1-\eta)$ via uniform initialization, and with probability $\eta$ via temperature initialization. The fixation probability is then \[ \mathsf{\rho}(G_n,r,\eta) =(1-\eta) \cdot \mathsf{\rho}(G_n,r,\mathsf{U}) + \eta \cdot \mathsf{\rho}(G_n,r,\mathsf{T}). \] \end{compactenum}
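The three initialization schemes above can be combined in one short sketch (illustrative; `rho` holds the per-vertex fixation probabilities $\mathsf{\rho}(G_n,r,u)$ and `T` the temperatures; the name `rho_convex` is ours). Note that $\eta=0$ recovers uniform and $\eta=1$ recovers temperature initialization.

```python
def rho_convex(rho, T, eta):
    """Fixation probability under eta-convex initialization, given
    per-vertex fixation probabilities rho[u] and temperatures T[u]."""
    n = len(rho)
    uniform = sum(rho) / n                           # rho(G, r, U)
    temp = sum(t * p for t, p in zip(T, rho)) / n    # rho(G, r, T)
    return (1 - eta) * uniform + eta * temp
```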
\subsection{Strong Amplifier Graph Families}
A \emph{family} of graphs $\mathcal{G}$ is an infinite sequence of weighted graphs $\mathcal{G}=(G_n)_{n\in \Nats^+}$.
\begin{compactitem} \item {\em Strong amplifiers.} A family of graphs $\mathcal{G}$ is a \emph{strong uniform amplifier} (resp. \emph{strong temperature amplifier}, \emph{strong convex amplifier}) if for every fixed $r_1>1$ and $r_2<1$ we have that \[ \liminf_{n\to\infty} \mathsf{\rho}(G_n,r_1,Z) = 1 \qquad \text{and} \qquad \limsup_{n\to\infty} \mathsf{\rho}(G_n,r_2,Z) = 0\ ; \] where $Z=\mathsf{U}$ (resp., $Z=\mathsf{T}$, $Z=\eta$). \end{compactitem} Intuitively, strong amplifiers ensure (a)~fixation of advantageous mutants with probability tending to~1 and (b)~extinction of disadvantageous mutants with probability tending to~1. In other words, strong amplifiers represent the strongest form of amplifiers possible.
\section{Negative Results}\label{sec:bounded_temp} In this section we present our negative results, which show the nonexistence of strong amplifiers in the absence of either self-loops or weights. In our proofs, we consider a weighted graph $G_n=(V_n,E_n,W_n)$, and for notational simplicity we drop the subscripts from vertices, edges and weights, i.e., we write $G_n=(V,E,W)$. We also assume that $G_n$ is connected and $n\geq 2$. Throughout this section we will use a technical lemma, which we present below. Given a configuration $\mathsf{X}_i=\{u\}$ with one mutant, let $x$ and $y$ be the probabilities that in the next configuration the mutants increase and go extinct, respectively. The following lemma bounds the fixation probability $\mathsf{\rho}(G_n,r,u)$ as a function of $x$ and $y$.
\begin{lemma}\label{lemm:technical} Consider a vertex $u$ and the initial configuration $\mathsf{X}_0=\{u\}$ where the initial mutant arises at vertex $u$. For any configuration $\mathsf{X}_i=\{u\}$, let \[
x=\mathbb{P}^{G_n,r}[|\mathsf{X}_{i+1}|=2 \mid \mathsf{X}_i=\{u\}]
\qquad \text{and} \qquad y=\mathbb{P}^{G_n,r}[|\mathsf{X}_{i+1}|=0 \mid \mathsf{X}_i=\{u\}] \] be the probabilities that the number of mutants increases to two or drops to zero in a single step. Then the fixation probability from $u$ is at most $x/(x+y)$, i.e., \[ \mathsf{\rho}(G_n,r,u) \leq \frac{x}{x+y} = 1 -\frac{y}{x+y}\ . \] \end{lemma} \ifshortproofs \begin{proof} We focus on the first event that changes the number of mutants. The probability that this event decreases the number of mutants equals $\frac{y}{x+y}$. In that case, the mutants have gone extinct, hence the extinction probability is at least $\frac{y}{x+y}$ and the fixation probability is at most $\frac{x}{x+y}$. \end{proof} \fi \iffullproofs \begin{proof} We upper-bound the fixation probability $\mathsf{\rho}(G_n,r,u)$ starting from $u$ by the probability that a configuration
$\mathsf{X}_t$ is reached with $|\mathsf{X}_t|=2$. Note that to reach fixation the Moran process must first reach a configuration with at least two mutants. We now analyze the probability to reach at least two mutants. This is represented by a three-state one-dimensional random walk, where two states are absorbing: one absorbing state represents a configuration with two mutants, the other absorbing state represents the extinction of the mutants, and the bias towards the absorbing state representing two mutants is $x/y$. See \cref{fig:three_state_rw} for an illustration. Using the formulas for absorption probabilities in one-dimensional three-state Markov chains (see, e.g.,~\cite{Kemeny12},~\cite[Section~6.3]{Nowak06b}), the probability that a configuration with two mutants is reached is \[ \frac{1 -(x/y)^{-1}}{1-(x/y)^{-2}}= \frac{1}{1 + (x/y)^{-1}}= \frac{x}{x+y}\ . \] Hence it follows that $\mathsf{\rho}(G_n,r,u) \leq 1-\frac{y}{x+y}$. \end{proof} \input{three_state_rw} \fi
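The gambler's-ruin formula used in the full proof can be checked numerically; the sketch below (the name `reach_two_prob` is ours) evaluates $\frac{1-(x/y)^{-1}}{1-(x/y)^{-2}}$ and agrees with $x/(x+y)$.

```python
def reach_two_prob(x, y):
    """Probability that the biased walk hits the state with two
    mutants before extinction, given per-step probabilities x (up)
    and y (down); lazy steps that keep one mutant do not matter."""
    if x == y:
        return 0.5  # unbiased walk: the formula degenerates to 1/2
    b = x / y  # bias towards gaining a mutant
    return (1 - b**-1) / (1 - b**-2)
```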
\subsection{Negative Result~1}\label{subsec:bounded_loopless_temperatue} We now prove our negative result~1.
\begin{theorem}\label{them:loopless_bounded_temp} For all self-loop free graphs $G_n$ and for every $r\geq 1$ we have $\mathsf{\rho}(G_n,r,\mathsf{T})\leq 1-1/(r+1)$. \end{theorem} \ifshortproofs \begin{proof} Since $G_n$ is self-loop free, for all $u$ we have $W[u,u]=0$. Hence $ \mathsf{T}(u) = \sum_{v\in \mathsf{In}(u)\setminus\{u\}} W[v,u]$. Consider the case where the initial mutant is placed on vertex $u$. Using Lemma~\ref{lemm:technical},
a simple calculation shows that the fixation probability $\mathsf{\rho}(G_n,r,u)$ from $u$ is at most
\[ 1-\frac{\mathsf{T}(u)}{\mathsf{T}(u)+r}\ . \] Summing over all vertices $u$, we obtain \[ \mathsf{\rho}(G_n,r,\mathsf{T}) = \sum_{u} \frac{\mathsf{T}(u)}{n} \cdot \mathsf{\rho}(G_n,r,u) \leq \frac{1}{n}\cdot \sum_{u} \mathsf{T}(u)\cdot \left(1-\frac{\mathsf{T}(u)}{\mathsf{T}(u)+r}\right) \leq 1-\frac{1}{r+1}\ ; \] where the first inequality uses the bound above, and the last inequality is obtained using the Cauchy-Schwarz inequality in the form $$\sum_u \frac{\mathsf{T}(u)^2}{\mathsf{T}(u)+r} \geq \frac{\left(\sum_u \mathsf{T}(u)\right)^2}{\sum_u (\mathsf{T}(u)+r)}=\frac{n}{r+1}. $$
The desired result follows. \end{proof} \fi \iffullproofs \begin{proof} Since $G_n$ is self-loop free, for all $u$ we have $W[u,u]=0$. Hence $ \mathsf{T}(u) = \sum_{v\in \mathsf{In}(u)\setminus\{u\}} W[v,u]$. Consider the case where the initial mutant is placed on vertex $u$, i.e., $\mathsf{X}_0=\{u\}$. For any configuration $\mathsf{X}_i=\{u\}$, we have the following: \[
x=\mathbb{P}^{G_n,r}[|\mathsf{X}_{i+1}|=2 \mid \mathsf{X}_i=\{u\}] = \frac{r}{\mathsf{F}(\mathsf{X}_i)} \]
\[
y=\mathbb{P}^{G_n,r}[|\mathsf{X}_{i+1}|=0 \mid \mathsf{X}_i=\{u\}] = \frac{1}{\mathsf{F}(\mathsf{X}_i)}\cdot \sum_{v\in \mathsf{In}(u)\setminus\{u\}} W[v,u] = \frac{1}{\mathsf{F}(\mathsf{X}_i)}\cdot \mathsf{T}(u)\ . \] Thus $x/y= r/\mathsf{T}(u)$.
Hence by Lemma~\ref{lemm:technical} we have \[ \mathsf{\rho}(G_n,r,u) \leq 1-\frac{\mathsf{T}(u)}{\mathsf{T}(u)+r}\ . \]
Summing over all $u$, we obtain \[ \mathsf{\rho}(G_n,r,\mathsf{T})=\sum_{u} \frac{\mathsf{T}(u)}{n} \cdot \mathsf{\rho}(G_n,r,u) \leq \frac{1}{n}\cdot \sum_{u} \mathsf{T}(u)\cdot \left(1-\frac{\mathsf{T}(u)}{\mathsf{T}(u)+r}\right) = 1-\frac{1}{n}\cdot \sum_u \frac{\mathsf{T}(u)^2}{\mathsf{T}(u)+r}\ ; \numberthis\label{eq:temp_upperbound} \] since $\sum_u\mathsf{T}(u)=n$. Using the Cauchy-Schwarz inequality, we obtain \[ \sum_u \frac{\mathsf{T}(u)^2}{\mathsf{T}(u)+r}\geq \frac{\left(\sum_u\mathsf{T}(u)\right)^2}{\sum_u (\mathsf{T}(u) + r)} = \frac{n^2}{n+n\cdot r} = \frac{n}{r+1}\ ; \] and thus \cref{eq:temp_upperbound} becomes \[ \mathsf{\rho}(G_n,r,\mathsf{T}) \leq 1-\frac{1}{n}\cdot \frac{n}{r+1} = 1-\frac{1}{r+1}\; \] as desired. \end{proof} \fi
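The chain of inequalities in the proof can be sanity-checked numerically on random self-loop free stochastic matrices; the sketch below (illustrative, the name `theorem1_bound_holds` is ours) verifies that $\frac{1}{n}\sum_u \mathsf{T}(u)\bigl(1-\frac{\mathsf{T}(u)}{\mathsf{T}(u)+r}\bigr) \leq 1-\frac{1}{r+1}$.

```python
import random

def theorem1_bound_holds(n, r, trials=50, seed=0):
    """Check the proof's intermediate bound on random self-loop free
    row-stochastic weight matrices of size n, for fitness r >= 1."""
    rng = random.Random(seed)
    for _ in range(trials):
        W = []
        for u in range(n):
            row = [0.0 if v == u else rng.random() for v in range(n)]
            s = sum(row)
            W.append([x / s for x in row])  # normalize to a stochastic row
        # Temperatures are the column sums of W.
        T = [sum(W[v][u] for v in range(n)) for u in range(n)]
        bound = sum(t * (1 - t / (t + r)) for t in T) / n
        if bound > 1 - 1 / (r + 1) + 1e-9:
            return False
    return True
```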
We thus arrive at the following corollary.
\begin{corollary}\label{cor:loopless_bounded_temp} There exists no self-loop free family of graphs which is a strong temperature amplifier. \end{corollary}
\subsection{Negative Result~2}\label{subsec:bounded_weightless_temperature} We now prove our negative result~2.
\begin{theorem}\label{them:unweighted_bounded_temp} For all unweighted graphs $G_n$ and for every $r\geq 1$ we have $\mathsf{\rho}(G_n,r,\mathsf{T}) \leq 1-1/(4r+2)$. \end{theorem} \ifshortproofs \begin{proof} For every vertex $u\in V$, let \[
\mathsf{T}'(u) = \sum_{v\in \mathsf{In}(u)\setminus\{u\}} \frac{1}{|\mathsf{Out}(v)|}\ . \] We have the following two inequalities for $\mathsf{T}'$ \[ \mathsf{T}(u) \geq \mathsf{T}'(u) \qquad \text{ and } \qquad \sum_u\mathsf{T}'(u) \geq \frac{n}{2}\ . \] Consider the case where the initial mutant is placed on vertex $u$. Using Lemma~\ref{lemm:technical},
a simple calculation shows that the fixation probability $\mathsf{\rho}(G_n,r,u)$ from $u$ is at most
\[ 1-\frac{\mathsf{T}'(u)}{\mathsf{T}'(u)+r}. \] Summing over all vertices $u$, we obtain \[ \mathsf{\rho}(G_n,r,\mathsf{T}) = \sum_{u} \frac{\mathsf{T}(u)}{n}\cdot \mathsf{\rho}(G_n,r,u) \leq \frac{1}{n}\cdot \sum_{u} \mathsf{T}(u)\cdot \left(1-\frac{\mathsf{T}'(u)}{\mathsf{T}'(u)+r}\right) \leq 1-\frac{1}{4r+2}\ ; \] where the last inequality follows from the inequalities for $\mathsf{T}'$, the Cauchy-Schwarz inequality in the form
$$\sum_u \frac{\mathsf{T}'(u)^2}{\mathsf{T}'(u)+r} \geq \frac{\left(\sum_u \mathsf{T}'(u)\right)^2}{\sum_u (\mathsf{T}'(u)+r)}$$ and from the fact that the function $\frac{x^2}{x+n\cdot r}$ is increasing in $x$ for $x>0$ and any $r,n>0$.
The desired result follows. \end{proof} \fi \iffullproofs \begin{proof} For every vertex $u\in V$, let \[
\mathsf{T}'(u) = \sum_{v\in \mathsf{In}(u)\setminus\{u\}} \frac{1}{|\mathsf{Out}(v)|}\ . \] We establish two inequalities related to $\mathsf{T}'$. Since $G_n$ is unweighted, we have \[
\mathsf{T}(u) = \sum_{v\in \mathsf{In}(u)} \frac{1}{|\mathsf{Out}(v)|}\geq \mathsf{T}'(u)\ . \]
For a vertex $u$, let $\mathsf{sl}(u)=1$ if $u$ has a self-loop and $\mathsf{sl}(u)=0$ otherwise. Since $G_n$ is connected, each vertex $u$ has at least one neighbor other than itself. Thus for every vertex $u$ with $\mathsf{sl}(u)=1$ we have that $|\mathsf{Out}(u)|\geq 2$. Hence \begin{align*}
\sum_u\mathsf{T}'(u) = & \sum_u\left (\sum_{v\in \mathsf{In}(u)} \frac{1}{|\mathsf{Out}(v)|} -\mathsf{sl}(u)\frac{1}{|\mathsf{Out}(u)|}\right)
=\sum_u\left (\sum_{v\in \mathsf{In}(u)} \frac{1}{|\mathsf{Out}(v)|}\right) -\sum_{u:\mathsf{sl}(u)=1}\left(\frac{1}{|\mathsf{Out}(u)|}\right)\\[2ex]
\geq & \sum_u \mathsf{T}(u) - \sum_u\frac{1}{2}= n- \frac{n}{2} = \frac{n}{2}.
\numberthis\label{eq:tempprime} \end{align*}
Similarly to the proof of \cref{them:loopless_bounded_temp}, the fixation probability given that a mutant is initially placed on vertex $u$ is at most \[ \mathsf{\rho}(G_n,r,u) \leq
1-\frac{\mathsf{T}'(u)}{\mathsf{T}'(u)+r}\ . \] Summing over all $u$, we obtain \[ \mathsf{\rho}(G_n,r,\mathsf{T})= \frac{1}{n}\cdot \sum_{u} \mathsf{T}(u)\cdot \mathsf{\rho}(G_n,r,u) \leq \frac{1}{n}\cdot \sum_{u} \mathsf{T}(u)\cdot \left(1-\frac{\mathsf{T}'(u)}{\mathsf{T}'(u)+r}\right) \leq 1-\frac{1}{n}\cdot \sum_u \frac{\mathsf{T}'(u)^2}{\mathsf{T}'(u)+r}\ ; \numberthis \label{eq:temp_upperbound2} \] since $\sum_u\mathsf{T}(u)=n$ and $\mathsf{T}(u)\geq \mathsf{T}'(u)$.
Using the Cauchy-Schwarz inequality we get \[ \sum_u \frac{\mathsf{T}'(u)^2}{\mathsf{T}'(u)+r}\geq \frac{\left(\sum_u\mathsf{T}'(u)\right)^2}{\sum_u (\mathsf{T}'(u) + r)} = \frac{x^2}{x+n\cdot r}, \] where $x=\sum_u\mathsf{T}'(u)$. Note that the function $f(x)=\frac{x^2}{x+n\cdot r}$ is increasing in $x$ for $x>0$ and any $r,n>0$. Since $x\geq n/2$, the right-hand side is minimized for $x=n/2$, that is \[ \sum_u \frac{\mathsf{T}'(u)^2}{\mathsf{T}'(u)+r}\geq \frac{(n/2)^2}{n/2+n\cdot r} = \frac{n}{4r+2}. \] Thus \cref{eq:temp_upperbound2} becomes \[ \mathsf{\rho}(G_n,r,\mathsf{T}) \leq 1-\frac{1}{n}\cdot \frac{n}{4r+2} = 1-\frac{1}{4r+2} \] as desired.
\end{proof} \fi
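The two inequalities for $\mathsf{T}'$ used in the proof can be checked on concrete unweighted graphs; a small illustrative sketch (the name `temp_and_tprime` is ours), where `adj[u]` is the out-neighbour set of $u$ and may contain $u$ itself (a self-loop):

```python
def temp_and_tprime(adj):
    """T(u) and T'(u) for an unweighted graph: each out-neighbour of v
    receives weight 1/|Out(v)|; T' drops the self-loop contribution."""
    n = len(adj)
    outdeg = [len(adj[u]) for u in range(n)]
    T = [sum(1.0 / outdeg[v] for v in range(n) if u in adj[v])
         for u in range(n)]
    Tp = [sum(1.0 / outdeg[v] for v in range(n) if u in adj[v] and v != u)
          for u in range(n)]
    return T, Tp
```

On the triangle with a self-loop at every vertex, $\mathsf{T}(u)=1$ and $\mathsf{T}'(u)=2/3$, so both $\mathsf{T}(u)\geq\mathsf{T}'(u)$ and $\sum_u\mathsf{T}'(u)\geq n/2$ hold.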
We thus arrive at the following corollary.
\begin{corollary}\label{cor:unweighted_bounded_temp} There exists no unweighted family of graphs which is a strong temperature amplifier. \end{corollary}
\subsection{Negative Result~3}\label{subsec:bounded_loopless_uniform} We now prove our negative result~3.
\begin{theorem}\label{them:loopless_bounded_uniform} For all self-loop free graphs $G_n$ with $c=\deg(G_n)$, and for every $r\geq 1$ we have $\mathsf{\rho}(G_n,r,\mathsf{U})\leq 1-1/(c+r\cdot c^2)$. \end{theorem} \ifshortproofs \begin{proof} Let $G_n=(V,E,W)$ and $\gamma=1/c$. Let $V^{h}$ be the set of vertices $u$ which have a neighbor $v \in \mathsf{In}(u)$ such that $W[v,u] \geq \gamma$. In words, the set $V^{h}$ contains ``hot'' vertices, since each vertex $u\in V^h$ is replaced frequently (with rate at least $\gamma$) by at least one neighbor $v$. Consider that the initial mutant is placed on a vertex $u\in V^h$. Then using Lemma~\ref{lemm:technical} a simple calculation shows that the fixation probability is at most
$(r\cdot c)/(1+r\cdot c)$, and hence it is upperbounded by a constant. Thus for vertices $u \in V^h$ we have $\mathsf{\rho}(G_n,r,u) \leq (r\cdot c)/(1+r\cdot c)$. Additionally, we have $|V^{h}| \geq \frac{n}{c}$, i.e., the set $V^{h}$ contains at least a constant fraction of the vertices. Hence the initial mutant will be placed on some hot vertex from $V^{h}$ with at least some constant probability, by uniform initialization. To favor fixation even more, we consider that if the initial mutant is placed on some vertex outside $V^{h}$, then fixation is achieved with probability~1. We have \[
\mathsf{\rho}(G_n,r,\mathsf{U}) \leq \frac{|V^{h}|}{n} \cdot \frac{r\cdot c}{1+r\cdot c} +
\frac{n-|V^{h}|}{n}\cdot 1 \leq \frac{1}{c}\cdot \frac{r\cdot c}{1+r\cdot c} + \frac{c-1}{c} = 1-\frac{1}{c+r\cdot c^2}\ . \]
The desired result follows. \end{proof} \fi \iffullproofs \begin{proof} Let $G_n=(V,E,W)$ and $\gamma=1/c$. For a vertex $u$, denote by $\mathsf{Out}^{\gamma}(u)=\{v\in \mathsf{Out}(u) \ :\ W[u,v] \geq \gamma \}$. Observe that since $\deg(G_n)=c$, every vertex $u$ has an outgoing edge of weight at least $1/c$, and thus $\mathsf{Out}^{\gamma}(u)\neq \emptyset$ for all $u\in V$. Let $V^{h}=\bigcup_u \mathsf{Out}^{\gamma}(u)$. Intuitively, the set $V^{h}$ contains ``hot'' vertices, since each vertex $u\in V^h$ is replaced frequently (with rate at least $\gamma$) by at least one neighbor $v$.
\noindent{\em Bound on size of $V^h$.} We first obtain a bound on the size of $V^h$. Consider a vertex $u \in V$ and a vertex $v \in \mathsf{Out}^{\gamma}(u)$ (i.e., $v \in V^h$). The vertex $v$ is counted once for every vertex $w \in \mathsf{In}(v)$ such that $v\in \mathsf{Out}^{\gamma}(w)$; to avoid multiple counting, we assign to each such count a contribution of
$\frac{1}{|\{w\in \mathsf{In}(v):~v\in \mathsf{Out}^{\gamma}(w)\}|}$, which is at least $\frac{1}{c}$ due to the degree bound. Hence we have \[
|V^{h}| = \sum_{u\in V} \sum_{v\in \mathsf{Out}^{\gamma}(u)} \frac{1}{|\{w\in \mathsf{In}(v):~v\in \mathsf{Out}^{\gamma}(w)\}|} \geq \sum_{u\in V} \sum_{v\in \mathsf{Out}^{\gamma}(u)} \frac{1}{c} \geq \sum_{u\in V} \frac{1}{c}= \frac{n}{c}\ ; \] where the last inequality follows from the fact that $\mathsf{Out}^{\gamma}(u)\neq \emptyset$ for all $u\in V$. Hence, under uniform initialization, the initial mutant arises in $V^h$ with probability at least $1/c$.
\noindent{\em Bound on probability.} Consider that the initial mutant is a vertex $u\in V^{h}$.
For any configuration $\mathsf{X}_i=\{u\}$ we have the following: \[
x=\mathbb{P}^{G_n,r}[|\mathsf{X}_{i+1}|=2 \mid \mathsf{X}_i=\{u\}] = \frac{r}{\mathsf{F}(\mathsf{X}_i)} \]
\[
y=\mathbb{P}^{G_n,r}[|\mathsf{X}_{i+1}|=0 \mid \mathsf{X}_i=\{u\}] = \frac{1}{\mathsf{F}(\mathsf{X}_i)}\cdot \sum_{(v,u)\in E} W[v,u] \geq \frac{1}{\mathsf{F}(\mathsf{X}_i)}\cdot \sum_{v:u\in \mathsf{Out}^{\gamma}(v)} \gamma \geq \frac{1}{\mathsf{F}(\mathsf{X}_i)}\cdot \gamma\ . \] Thus $x/y \leq r/\gamma$. Hence by Lemma~\ref{lemm:technical} we have \[ \mathsf{\rho}(G_n,r,u) \leq
\frac{r\cdot c}{1+r\cdot c}\ . \]
Finally, we have \begin{align*} \mathsf{\rho}(G_n,r,\mathsf{U})= & \sum_{u\in V^{h}} \frac{1}{n}\cdot \mathsf{\rho}(G_n,r,u) + \sum_{u\in V\setminus V^{h}} \frac{1}{n}\cdot \mathsf{\rho}(G_n,r,u) \\[2ex] \leq & \frac{1}{c}\cdot \frac{r\cdot c}{1+r\cdot c} + \frac{c-1}{c}\cdot 1 = 1-\frac{1}{c}\cdot \left(1-\frac{r\cdot c}{1+r\cdot c}\right) = 1-\frac{1}{c+r\cdot c^2} \ . \end{align*} The desired result follows. \end{proof} \fi
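The set $V^h$ of hot vertices and the bound $|V^h|\geq n/c$ can be illustrated with a short sketch (the name `hot_vertices` is ours; `W` is a row-stochastic matrix with at most $c$ positive entries per row):

```python
def hot_vertices(W, c):
    """V^h: vertices v that have an in-neighbour u with W[u][v] >= 1/c,
    i.e., vertices that are replaced frequently by some neighbour."""
    gamma = 1.0 / c
    n = len(W)
    return {v for u in range(n) for v in range(n) if W[u][v] >= gamma}
```

In the example below, vertices 2 and 3 receive only incoming weight below $1/c=1/2$, so $V^h=\{0,1\}$, which still meets the lower bound $n/c=2$.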
We thus arrive at the following corollary.
\begin{corollary}\label{cor:loopless_bounded_uniform} There exists no self-loop free, bounded-degree family of graphs which is a strong uniform amplifier. \end{corollary}
\subsection{Negative Result~4}\label{subsec:bounded_weightless_uniform} We now prove our negative result~4.
\begin{theorem}\label{them:unweighted_bounded_uniform} For all unweighted graphs $G_n$ with $c=\deg(G_n)$, and for every $r\geq 1$ we have $\mathsf{\rho}(G_n,r,\mathsf{U})\leq 1-1/(1+r\cdot c)$. \end{theorem} \ifshortproofs \begin{proof} Let $G_n=(V,E,W)$ and consider that the initial mutant is placed on a vertex $u$. Then the fixation probability is at most
$r\cdot c/(1+r\cdot c)$ (using Lemma~\ref{lemm:technical} and simple calculations). Summing over all vertices $u$, we obtain \[ \mathsf{\rho}(G_n,r,\mathsf{U}) = \frac{1}{n}\cdot \sum_u\mathsf{\rho}(G_n,r,u) \leq \frac{r\cdot c}{1+r\cdot c} = 1-\frac{1}{1+r\cdot c}\ . \]
The desired result follows. \end{proof} \fi \iffullproofs \begin{proof} Let $G_n=(V,E,W)$ and consider that $\mathsf{X}_0=\{u\}$ for some $u\in V$.
For any configuration $\mathsf{X}_i=\{u\}$ we have the following: \[
x=\mathbb{P}^{G_n,r}[|\mathsf{X}_{i+1}|=2 \mid \mathsf{X}_i=\{u\}] \leq \frac{r}{\mathsf{F}(\mathsf{X}_i)}\ . \]
\[
y=\mathbb{P}^{G_n,r}[|\mathsf{X}_{i+1}|=0 \mid \mathsf{X}_i=\{u\}] = \frac{1}{\mathsf{F}(\mathsf{X}_i)}\cdot \sum_{v\in \mathsf{In}(u)\setminus\{u\}} W[v,u] \geq \frac{1}{\mathsf{F}(\mathsf{X}_i)}\cdot \frac{1}{c}\ . \] Thus $x/y \leq r\cdot c$. By Lemma~\ref{lemm:technical} we have \[ \mathsf{\rho}(G_n,r,u)\leq \frac{r\cdot c}{1+r\cdot c}. \] Finally, we have \begin{align*} \mathsf{\rho}(G_n,r,\mathsf{U}) =\frac{1}{n}\cdot \sum_u \mathsf{\rho}(G_n,r,u) \leq \frac{r\cdot c}{1+r\cdot c} = 1-\frac{1}{1+r\cdot c}\ . \end{align*} The desired result follows. \end{proof} \fi
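As a sanity check of the bound, one can compare it against the exact fixation probability on the complete graph $K_n$, which is unweighted, self-loop free and isothermal; the formula $\frac{1-1/r}{1-r^{-n}}$ is the classical Moran-process result (not derived in this paper), and the function name below is ours.

```python
def complete_graph_fixation(n, r):
    """Exact single-mutant fixation probability of the birth-death
    Moran process on the complete graph K_n (classical gambler's-ruin
    result; K_n is isothermal with degree c = n - 1)."""
    return (1 - 1 / r) / (1 - r**-n)
```

For $K_3$ and $r=2$ the exact value is $4/7\approx 0.571$, well below the theorem's bound $1-1/(1+r\cdot c)=0.8$ with $c=2$.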
We thus arrive at the following corollary.
\begin{corollary}\label{cor:weightless_bounded_uniform} There exists no unweighted, bounded-degree family of graphs which is a strong uniform amplifier. \end{corollary}
\begin{remark}\label{rem:bounded_degree_uniform} \cref{them:loopless_bounded_uniform,them:unweighted_bounded_uniform} establish the nonexistence of strong amplification with bounded-degree graphs. A relevant result can be found in~\cite{Mertzios13}, which establishes an upper bound on the fixation probability of mutants under uniform initialization on unweighted, undirected graphs. If the bounded-degree restriction is relaxed to bounded average degree, then recent results show that strong amplifiers (called \emph{sparse incubators}) exist~\cite{Goldberg17}.
\end{remark}
\section{Positive Result}\label{sec:amplifiers}
In the previous section we showed that self-loops and weights are necessary for the existence of strong amplifiers. In this section we present our positive result, namely that every family of undirected graphs with self-loops and whose diameter is not ``too large'' can be made a strong amplifier by using appropriate weight functions. Our result relies on several novel conceptual steps; therefore, the proof is structured in three parts. \begin{enumerate} \item First, we introduce some formal notation that will help with the exposition of the ideas that follow. \item Second, we describe an algorithm which takes as input an undirected graph $G_n=(V_n,E_n)$ on $n$ vertices, and constructs a weight matrix $W_n$ to obtain the weighted graph $G^{\mathsf{w}}_n=(V_n, E_n, W_n)$. \item Lastly, we prove that $G_n^{\mathsf{w}}$ is a strong amplifier both for uniform and temperature initialization. \end{enumerate}
\subsection{Undirected Graphs and Notation}\label{subsec:notation} We first present some additional notation required for the exposition of the results of this section.
\noindent{\em Undirected graphs.} Our input is an unweighted undirected graph $G_n=(V_n,E_n)$ with self loops. For ease of notation, we drop the subscript $n$ and refer to the graph $G=(V,E)$ instead. Since $G$ is undirected, for all vertices $u$ we have $\mathsf{In}(u)=\mathsf{Out}(u)$, and we denote by $\mathsf{Nh}(u)=\mathsf{In}(u)=\mathsf{Out}(u)$ the set of neighbors of vertex $u$. Hence, $v\in \mathsf{Nh}(u)$ iff $u\in \mathsf{Nh}(v)$. Moreover, since $G$ has self-loops, we have $u\in \mathsf{Nh}(u)$. Also we consider that $G$ is connected, i.e., for every pair of vertices $u,v$, there is a path from $u$ to $v$.
\noindent{\em Symmetric weight function.} So far we have used a stochastic weight matrix $W$, where for every $u$ we have $\sum_{v} W[u,v]=1$. In this section, we will consider a weight function $\mathsf{w}:E\to \mathbb{R}_{\geq 0}$, and given a vertex $u \in V$ we denote by $\mathsf{w}(u)=\sum_{v\in \mathsf{Nh}(u)}\mathsf{w}(u,v)$. Our construction will not only assign weights, but also ensure symmetry. In other words, we construct \emph{symmetric} weights such that for all $u,v$ we have $\mathsf{w}(u,v)=\mathsf{w}(v,u)$. Given such a weight function $\mathsf{w}$, the corresponding stochastic weight matrix $W$ is defined as $W[u,v]=\mathsf{w}(u,v)/\mathsf{w}(u)$ for all pairs of vertices $u,v$. Given an unweighted graph $G$ and weight function $\mathsf{w}$, we denote by $G^{\mathsf{w}}$ the corresponding weighted graph.
\noindent{\em Vertex-induced subgraphs.} Given a set of vertices $X\subseteq V$, we denote by $G^{\mathsf{w}}\restr{X}=(X, E\restr{X}, \mathsf{w}\restr{X})$ the subgraph of $G$ induced by $X$, where $E\restr{X}= E\cap (X\times X)$, and the weight function $\mathsf{w}\restr{X}: E\restr{X}\to \mathbb{R}_{\geq 0}$ defined as \[ \mathsf{w}\restr{X}(u,v)= \left\{ \begin{array}{lr} \mathsf{w}(u,u) + \sum_{(u,w)\in E\setminus E\restr{X} }\mathsf{w}(u,w) & \text{ if } u=v\\ \mathsf{w}(u,v)& \text{ otherwise } \end{array} \right. \] In words, the weights on the edges of $u$ to vertices that do not belong to $X$ are added to the self-loop weight of $u$. Since the sum of all weights does not change, we have $\mathsf{w}\restr{X}(u)=\mathsf{w}(u)$ for all $u$. The temperature of $u$ in $G\restr{X}$ is \[ \mathsf{T}\restr{X}(u) = \sum_{v\in \mathsf{Nh}(u)\cap X} \frac{\mathsf{w}\restr{X}(v,u)}{ \mathsf{w}\restr{X}(v)}\ . \]
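The folding of weights on edges leaving $X$ into self-loops can be sketched as follows (illustrative; `w` is a symmetric dict-of-dicts with a self-loop entry present for every vertex, and the name `restrict` is ours). As in the definition, the total weight of each vertex is preserved.

```python
def restrict(w, X):
    """Induced subgraph weights: keep edges inside X, and fold the
    weight of each edge leaving X into the self-loop of its endpoint."""
    wX = {u: {v: wt for v, wt in w[u].items() if v in X}
          for u in w if u in X}
    for u in wX:
        # Add weights of edges from u to vertices outside X to w(u, u).
        wX[u][u] = wX[u].get(u, 0.0) + sum(
            wt for v, wt in w[u].items() if v not in X)
    return wX
```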
\subsection{Algorithm for Weight Assignment on $G$}\label{subsec:weights} We start with the construction of the weight function $\mathsf{w}$ on $G$. Since we consider arbitrary input graphs, $\mathsf{w}$ is constructed by an algorithm. The algorithm runs in $O(n\cdot \log n)$ time; since our focus is on the properties of the resulting weighted graph, we do not analyze the time complexity explicitly.
\noindent{\bf Steps of the construction.} Consider a connected graph $G$ with diameter $\mathrm{diam}(G)\leq n^{1-\varepsilon}$, where $\varepsilon>0$ is a constant independent of $n$. We construct a weight function $\mathsf{w}$ such that whp an initial mutant arising under uniform or temperature initialization, eventually fixates on $G^{\mathsf{w}}$. The weight assignment consists of the following conceptual steps.
\begin{compactenum} \item {\em Spanning tree construction and partition.} First, we construct a \emph{spanning tree} $\mathcal{T}_n^x$ of $G$ rooted on some arbitrary vertex $x$. In words, a spanning tree of an undirected graph is a connected subgraph that is a tree and includes all of the vertices of the graph. Then we partition the tree into a number of component trees of appropriate sizes.
\item {\em Hub construction.} Second, we construct the \emph{hub} of $G$, which consists of the vertices $x_i$ that are roots of the component trees, together with all vertices in the paths that connect each $x_i$ to the root $x$ of $\mathcal{T}_n^x$. All vertices that do not belong to the hub belong to the \emph{branches} of $G$.
\item {\em Weight assignment.} Finally, we assign weights to the edges of $G$, such that the following properties hold: \begin{compactenum} \item The hub is an isothermal graph, and evolves exponentially faster than the branches. \item All edges between vertices in different branches are effectively cut-out (by being assigned weight $0$). \end{compactenum} \end{compactenum}
In the following we describe the above steps formally.
\noindent{\bf Spanning tree $\mathcal{T}_n^x$ construction and partition.} Given the graph $G$, we first construct a spanning tree using the standard breadth-first-search (BFS) algorithm. Let $\mathcal{T}_n^x$ be such a spanning tree of $G$, rooted at some arbitrary vertex $x$. We now construct the partitioning as follows: We choose a constant $c=2\varepsilon/3$, and pick a set $S\subset V$ such that \begin{compactenum}
\item $|S|\leq n^c$, and \item the removal of $S$ splits $\mathcal{T}_n^x$ into $k$ trees $T^{x_1}_{n_1},\dots, T^{x_k}_{n_k}$, each $T^{x_i}_{n_i}$ rooted at vertex $x_i$ and of size $n_i$, with the property that $n_i\leq n^{1-c}$ for all $1\leq i\leq k$. \end{compactenum} The set $S$ is constructed by a simple bottom-up traversal of $\mathcal{T}_n^x$ in which we keep track of the size $\mathsf{size}(u)$
of the subtree rooted at the current vertex $u$, excluding the subtrees rooted at vertices already added to $S$. Once $\mathsf{size}(u)> n^{1-c}$, we add $u$ to $S$ and proceed as before. Since every time we add a vertex $u$ to $S$ we have $\mathsf{size}(u)>n^{1-c}$, it follows that $|S|\leq n^c$. Additionally, the subtree rooted in every child of $u$ has size at most $n^{1-c}$, otherwise that child of $u$ would have been chosen to be included in $S$ instead of $u$.
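The bottom-up construction of $S$ can be sketched as follows (illustrative; `children` maps each vertex to its children in the spanning tree, `bound` plays the role of $n^{1-c}$, and the name `partition_tree` is ours). Subtrees already cut into $S$ contribute $0$ to their parent's size.

```python
def partition_tree(children, root, bound):
    """Bottom-up traversal: add u to S once the size of its remaining
    subtree (excluding subtrees already cut) exceeds bound."""
    S = set()

    def visit(u):
        s = 1 + sum(visit(v) for v in children.get(u, ()))
        if s > bound:
            S.add(u)
            return 0  # subtree of u is cut out; contributes nothing upward
        return s

    visit(root)
    return S
```

For example, on a path $0\to 1\to\dots\to 9$ with `bound = 3`, the traversal cuts at vertices 6 and 2, leaving components of sizes 2, 3 and 3, each at most the bound.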
\noindent{\bf Hub construction: hub $\mathcal{H}$.} Given the set of vertices $S$ constructed during the spanning tree partitioning, we construct the set of vertices $\mathcal{H}\subset V$ called the \emph{hub}, as follows: \begin{compactenum} \item We choose a constant $\gamma=\varepsilon/3$.
\item For every vertex $u\in S$, we add in $\mathcal{H}$ every vertex $v$ that lies in the unique simple path $P_u:x\rightsquigarrow u$ between the root $x$ of $\mathcal{T}_n^x$ and $u$ (including $x$ and $u$). Since $\mathrm{diam}(G)\leq n^{1-\varepsilon}$ and $|S|\leq n^c$, we have that $|\mathcal{H}|\leq n^{1-\varepsilon+c} \leq n^{1-\gamma}$.
\item We add $n^{1-\gamma}-|\mathcal{H}|$ extra vertices to $\mathcal{H}$, such that in the end, the vertices of $\mathcal{H}$ form a connected subtree of $\mathcal{T}_n^x$ (rooted in $x$). This is simply done by choosing a vertex $u\in \mathcal{H}$ and a neighbor $v$ of $u$ with $v\not \in \mathcal{H}$, and adding $v$ to $\mathcal{H}$, until $\mathcal{H}$ contains $n^{1-\gamma}$ vertices. \end{compactenum}
\noindent{\bf Branches $B_j=T_{m_j}^{y_j}$.} The hub $\mathcal{H}$ defines a number of trees $B_j=T_{m_j}^{y_j}$, where each tree is rooted at a vertex $y_j\not\in \mathcal{H}$ adjacent to $\mathcal{H}$, and has $m_j$ vertices. We will refer to these trees as \emph{branches} (see~\cref{fig:sink_branches}).
\begin{proposition}\label{prop:sizes}
By construction, we have $m_j\leq n^{1-2\varepsilon/3}$ for every $j$, $|\mathcal{H}|=n^{1-\varepsilon/3}$, and $\sum_j m_j = n-n^{1-\varepsilon/3}$. \end{proposition}
\begin{figure}
\caption{ Illustration of the hub $\mathcal{H}$ and the branches $T_{m_j}^{y_j}$. }
\label{fig:sink_branches}
\end{figure}
\noindent{\bf Notation.} To make the exposition of the ideas clear, we rely on the following notation. \begin{compactenum} \item {\em Parent $\mathsf{par}(u)$ and ancestors $\mathsf{anc}(u)$.} Given a vertex $u\neq x$, we denote by $\mathsf{par}(u)$ the parent of $u$ in $\mathcal{T}_n^x$ and by $\mathsf{anc}(u)$ the set of ancestors of $u$.
\item {\em Children $\mathsf{chl}(u)$ and descendants $\mathsf{des}(u)$.} Given a vertex $u$ that is not a leaf in $\mathcal{T}_n^x$, we denote by $\mathsf{chl}(u)$ the children of $u$ in $\mathcal{T}_n^x$ that do not belong to the hub $\mathcal{H}$, and by $\mathsf{des}(u)$ the set of descendants of $u$ in $\mathcal{T}_n^x$ that do not belong to the hub $\mathcal{H}$. \end{compactenum}
\noindent{\bf Frontier, distance, and branches.} We present few notions required for the weight assignment: \begin{compactenum} \item {\em Frontier $\mathcal{F}$.} Given the hub $\mathcal{H}$, the \emph{frontier} of $\mathcal{H}$ is the set of vertices $\mathcal{F}\subseteq \mathcal{H}$ defined as \[ \mathcal{F} = \bigcup_{u\in V\setminus\mathcal{H}} \mathsf{Nh}(u) \cap \mathcal{H}\ . \] In words, $\mathcal{F}$ contains all vertices of $\mathcal{H}$ that have a neighbor not in $\mathcal{H}$.
\item {\em Distance function $\lambda$.} For every vertex $u$, we define its \emph{distance} $\lambda(u)$ to be the length of the shortest path $P:u\rightsquigarrow v$ in $T_n^{x}$ to some vertex $v\in \mathcal{F}$ (e.g., if $u\in \mathcal{F}$, we have (i)~$\lambda(u)=0$, and (ii)~for every $v\in \mathsf{Nh}(u)\setminus \mathcal{H}$ we have $\lambda(v)=1$).
\item {\em Values $\mu$ and $\nu$.}
For every vertex $u\in \mathcal{H}$, we define $\mathsf{deg}(u)=|(\mathsf{Nh}(u)\cap \mathcal{H})\setminus\{u\}|$, i.e., $\mathsf{deg}(u)$ is the number of neighbors of $u$ that belong to the hub (excluding $u$ itself). Let \[
\mu=\max_{u\in \mathcal{F}} |\mathsf{chl}(u)| \qquad \text{ and } \qquad \nu=\max_{u\in \mathcal{H}}\mathsf{deg}(u)\ . \] \end{compactenum}
\noindent{\bf Weight assignment.} We are now ready to define the weight function $\mathsf{w}:E\to \mathbb{R}_{\geq 0}$.
\begin{compactenum} \item For every edge $(u,v)$ such that $u\neq v$ and $u,v\not \in \mathcal{H}$ and $u$ and $v$ are not neighbors in $\mathcal{T}_n^x$, we assign $\mathsf{w}(u,v)=0$.
\item For every vertex $u\in \mathcal{F}$ we assign $\mathsf{w}(u,u)=(\mu-|\mathsf{chl}(u)|)\cdot 2^{-n}+\nu-\mathsf{deg}(u)$. \item For every vertex $u\in \mathcal{H}\setminus\mathcal{F}$ we assign $\mathsf{w}(u,u)=\mu\cdot 2^{-n}+\nu-\mathsf{deg}(u)$. \item For every vertex $u\not\in\mathcal{H}$ we assign $\mathsf{w}(u,u)=n^{-2\cdot\lambda(u)}$. \item For every edge $(u,v)\in E$ such that $u\neq v$ and $u,v\in \mathcal{H}$ we assign $\mathsf{w}(u,v)=1$. \item For every remaining edge $(u,v)\in E$ such that $u=\mathsf{par}(v)$ we assign $\mathsf{w}(u,v)=2^{-n}\cdot n^{-4\cdot \lambda(u)}$. \end{compactenum}
The following lemma is straightforward from the weight assignment, and captures that every vertex in the hub has the same weight.
\begin{lemma}\label{lem:sink_weights} For every vertex $u\in \mathcal{H}$ we have $\mathsf{w}(u)=\sum_{v\in\mathsf{Nh}(u)}\mathsf{w}(u,v)=\mu\cdot 2^{-n}+\nu$. \end{lemma} \iffullproofs \begin{proof} Consider any vertex $u\in \mathcal{H}\setminus\mathcal{F}$. We have \begin{align*} \mathsf{w}(u) =& \mathsf{w}(u,u) + \sum_{v\in\mathsf{Nh}(u)\setminus\{u\}} \mathsf{w}(u,v)\\ =& \mu\cdot 2^{-n}+\nu - \mathsf{deg}(u) + \sum_{v\in\mathsf{Nh}(u)\setminus\{u\}} 1\\ =& \mu\cdot 2^{-n}+\nu - \mathsf{deg}(u) + \mathsf{deg}(u) \\ =& \mu\cdot 2^{-n}+\nu \numberthis\label{eq:weight_sink} \end{align*}
Similarly, consider any $u\in \mathcal{F}$. We have \begin{align*} \mathsf{w}(u) =& \mathsf{w}(u,u) + \sum_{v\in(\mathsf{Nh}(u)\cap \mathcal{H})\setminus\{u\}} \mathsf{w}(u,v)+ \sum_{v\in \mathsf{chl}(u)} \mathsf{w}(u,v)\\
=& (\mu-|\mathsf{chl}(u)|)\cdot 2^{-n}+\nu-\mathsf{deg}(u) + \sum_{v\in(\mathsf{Nh}(u)\cap \mathcal{H}) \setminus\{u\}} 1 + \sum_{v\in \mathsf{chl}(u)}2^{-n} \\
=& \mu\cdot 2^{-n} - |\mathsf{chl}(u)|\cdot 2^{-n} +\nu - \mathsf{deg}(u) + \mathsf{deg}(u) + |\mathsf{chl}(u)|\cdot 2^{-n}\\ =& \mu\cdot 2^{-n}+\nu \numberthis\label{eq:weight_frontier} \end{align*} \end{proof} \fi
\subsection{Analysis of the Fixation Probability} In this section we present a detailed analysis of the fixation probability, starting with an outline of the proof.
\subsubsection{Outline of the proof}
The fixation of new mutants is guaranteed by showing that each of the following four stages happens with high probability. \begin{compactenum} \item[(A)] In stage~1 we consider the event $\mathcal{E}_1$ that a mutant arises in one of the branches (i.e., outside the hub $\mathcal{H}$). We show that event $\mathcal{E}_1$ happens whp.
\item[(B)] In stage~2 we consider the event $\mathcal{E}_2$ that a mutant occupies a vertex $v$ of the branches that is adjacent to the hub. We show that given event $\mathcal{E}_1$ the event $\mathcal{E}_2$ happens whp.
\item[(C)] In stage~3 we consider the event $\mathcal{E}_3$ that the mutants fixate in the hub. We show that given event $\mathcal{E}_2$ the event $\mathcal{E}_3$ happens whp.
\item[(D)] In stage~4 we consider the event $\mathcal{E}_4$ that the mutants fixate in all the branches. We show that given event $\mathcal{E}_3$ the event $\mathcal{E}_4$ happens whp.
\end{compactenum}
\noindent{\bf Crux of the proof.} Before presenting the details, we outline the crux of the argument. We say that a vertex $v \not \in \mathcal{H}$ \emph{hits} the hub when it places an offspring in the hub. First, our construction ensures that the hub is isothermal. Second, it ensures that a mutant appearing in a branch reaches a vertex adjacent to the hub, and then hits the hub with a mutant polynomially many times. Third, it ensures that whp the hub returns to a homogeneous configuration between any two consecutive hits. We now describe two crucial events. \begin{itemize} \item Consider a mutant adjacent to a hub of residents. Every time a mutant is introduced in the hub, it has a constant probability (around $1-1/r$ for a large population) of fixating there, since the hub is isothermal. The polynomially many hits of the hub by mutants thus ensure that the hub becomes fully mutant whp.
\item In contrast, consider a resident adjacent to a hub of mutants. Every time a resident is introduced in the hub, it has an exponentially small probability (around $(r-1)/(r^{|\mathcal{H}|}-1)$) of fixating there. \end{itemize}
Hence, given a hub of mutants, the probability (say, $\eta_1=2^{-\Omega(|\mathcal{H}|)}$) that the residents win over the hub is exponentially small. Given a hub of mutants, the probability that the hub wins over a branch $B_j$ is also exponentially small (say, $\eta_2=2^{-O(|B_j|)}$). More importantly, the ratio $\eta_1/\eta_2$ is itself exponentially small (by \cref{prop:sizes} regarding the sizes of the hub and the branches). Using this property, we show that the mutants reach fixation whp. We now analyze each stage in detail.
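The two constants quoted above are instances of the classical fixation probability $(1-1/\rho)/(1-\rho^{-N})$ of a single invader with relative fitness $\rho$ in an isothermal population of size $N$. A quick numerical sanity check (illustrative values only; the function below is not part of the construction):

```python
def fixation_isothermal(rel_fitness, N):
    """Fixation probability of a single invader with relative fitness
    rel_fitness in an isothermal population of size N (Moran process)."""
    if rel_fitness == 1:
        return 1 / N                      # neutral drift
    return (1 - 1 / rel_fitness) / (1 - rel_fitness ** (-N))

r, N = 2.0, 20                            # illustrative fitness and hub size
p_mutant = fixation_isothermal(r, N)      # mutant invades a resident hub
p_resident = fixation_isothermal(1 / r, N)  # resident invades a mutant hub
# p_mutant is close to 1 - 1/r, while p_resident = (r-1)/(r**N - 1)
```

With $r=2$ and $N=20$ the invading mutant fixates with probability close to $1/2$, whereas the invading resident fixates with probability $1/(2^{20}-1)$, matching the two constants above.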
\subsubsection{Analysis of Stage~1: Event $\mathcal{E}_1$}
\begin{lemma}\label{lem:initialization} Consider the event $\mathcal{E}_1$ that the initial mutant is placed at a vertex outside the hub. Formally, the event $\mathcal{E}_1$ is that $\mathsf{X}_0 \cap \mathcal{H}=\emptyset$. The event $\mathcal{E}_1$ happens with probability at least $1-O(n^{-\varepsilon/3})$, i.e., the event $\mathcal{E}_1$ happens whp. \end{lemma} \ifshortproofs \begin{proof} We present the result for the uniform and temperature initialization below. \begin{compactitem} \item \emph{(Uniform initialization):} The initial mutant is placed on a vertex $u\not\in \mathcal{H}$ with probability \[
\sum_{u\not\in \mathcal{H}}\frac{1}{n}=\frac{|V\setminus \mathcal{H}|}{n} = \frac{n-n^{1-\gamma}}{n} = 1-\frac{n^{1-\gamma}}{n}=1-O(n^{-\varepsilon/3})\ ; \] since by definition $\gamma=\varepsilon/3$.
\item \emph{(Temperature initialization):} For every vertex $u\not\in \mathcal{H}$, we have \[ \sum_{v\in \mathsf{Nh}(u)\setminus\{u\}}\mathsf{w}(v,u) \leq \sum_{v\in \mathsf{Nh}(u)\setminus\{u\}} 2^{-n} = 2^{-\Omega(n)}\ ; \] whereas since $\mathrm{diam}(G)\leq n^{1-\varepsilon}$ we have \[ \mathsf{w}(u,u) = n^{-2\cdot \lambda(u)} \geq n^{-2\cdot \mathrm{diam}(G)} \geq n^{-O(n^{1-\varepsilon})}\ . \] Let $A=\mathsf{w}(u,u)$ and $B=\sum_{v\in \mathsf{Nh}(u)\setminus\{u\}}\mathsf{w}(v,u)$, and we have \[ \frac{\mathsf{w}(u,u)}{\mathsf{w}(u)} = \frac{A}{A + B} = 1-\frac{B}{A+B} = 1-2^{-\Omega(n)}\ . \] Then the desired event happens with probability at least \begin{align*}
\sum_{u\not \in \mathcal{H}} \frac{\mathsf{T}(u)}{n} = &\frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \sum_{v\in \mathsf{Nh}(u)} \frac{\mathsf{w}(v,u)}{ \mathsf{w}(v)} \geq \frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \frac{\mathsf{w}(u,u)}{\mathsf{w}(u)} \geq \frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \left(1-2^{-\Omega(n)}\right) \\ =& 1-O(n^{-\varepsilon/3}) \end{align*} since $\gamma=\varepsilon/3$. The desired result follows. \end{compactitem} \end{proof} \fi
\iffullproofs \begin{proof} We examine the uniform and temperature initialization schemes separately. \begin{compactitem} \item \emph{(Uniform initialization):} The initial mutant is placed on a vertex $u\not\in \mathcal{H}$ with probability \[
\sum_{u\not\in \mathcal{H}}\frac{1}{n}=\frac{|V\setminus \mathcal{H}|}{n} = \frac{n-n^{1-\gamma}}{n} = 1-\frac{n^{1-\gamma}}{n}=1-O(n^{-\varepsilon/3})\ ; \] since $\gamma=\varepsilon/3$.
\item \emph{(Temperature initialization):} For any vertex $u\not\in \mathcal{H}$, we have \[ \sum_{v\in \mathsf{Nh}(u)\setminus\{u\}}\mathsf{w}(u,v) \leq \sum_{v\in \mathsf{Nh}(u)\setminus\{u\}} 2^{-n} = 2^{-\Omega(n)}\ ; \] whereas since $\mathrm{diam}(G)\leq n^{1-\varepsilon}$ we have \[ \mathsf{w}(u,u) = n^{-2\cdot \lambda(u)} \geq n^{-2\cdot \mathrm{diam}(G)} \geq n^{-O(n^{1-\varepsilon})}\ . \] Note that \[ n^{-O(n^{1-\varepsilon})}=2^{-O(n^{1-\varepsilon}\cdot \log n)} \gg 2^{-O(n)}\ . \] Let $A=\mathsf{w}(u,u)$ and $B=\sum_{v\in \mathsf{Nh}(u)\setminus\{u\}}\mathsf{w}(u,v)$, and we have \[ \frac{\mathsf{w}(u,u)}{\mathsf{w}(u)} = \frac{A}{A + B} = 1-\frac{B}{A+B} \geq 1-\frac{2^{-\Omega(n)}}{n^{-O(n^{1-\varepsilon})}+2^{-\Omega(n)}} \geq 1-\frac{2^{-\Omega(n)}}{n^{-O(n^{1-\varepsilon})}} = 1-2^{-\Omega(n)}\ . \] Then the desired event happens with probability at least \begin{align*} \sum_{u\not\in \mathcal{H}}\ProbT{\mathsf{X}_0=\{u\}} =& \sum_{u\not \in \mathcal{H}} \frac{\mathsf{T}(u)}{n} = \frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \sum_{v\in \mathsf{Nh}(u)} \frac{\mathsf{w}(u,v)}{ \mathsf{w}(v)} \geq \frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \frac{\mathsf{w}(u,u)}{\mathsf{w}(u)} \geq \frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \left(1-2^{-\Omega(n)}\right) \\
=& \frac{|V\setminus \mathcal{H}|}{n}\cdot \left(1-2^{-\Omega(n)}\right) = \frac{n-n^{1-\gamma}}{n}\cdot \left(1-2^{-\Omega(n)}\right) = (1-n^{-\gamma})\cdot \left(1-2^{-\Omega(n)}\right)\\ =& 1-O(n^{-\varepsilon/3}) \end{align*} since $\gamma=\varepsilon/3$. The desired result follows.
\end{compactitem} \end{proof} \fi
\subsubsection{Analysis of Stage~2: Event $\mathcal{E}_2$}
The following lemma states that if a mutant is placed on a vertex $w$ outside the hub, then whp the mutant will propagate to the ancestor $v$ of $w$ at distance $\lambda(v)=1$ from the hub (i.e., the parent of $v$ belongs to the hub). This is a direct consequence of the weight assignment, which guarantees that for every vertex $u\not\in\mathcal{H}$, the individual occupying $u$ will place an offspring on the parent of $u$ before some neighbor of $u$ places an offspring on $u$, and this event happens with probability at least $1-O(n^{-1})$.
\begin{lemma}\label{lem:ancestor_mutant} Consider that at some time $j$ the configuration of the Moran process on $G^{\mathsf{w}}$ is $\mathsf{X}_j=\{w\}$ with $w\not\in \mathcal{H}$. Let $v\in \mathsf{anc}(w)$ with $\lambda(v)=1$, i.e., $v$ is the ancestor of $w$ and $v$ is adjacent to the hub. Then a subsequent configuration $\mathsf{X}_{t}$ with $v\in \mathsf{X}_t$ is reached with probability $1-O(n^{-1})$, i.e., given event $\mathcal{E}_1$, the event $\mathcal{E}_2$ happens whp. \end{lemma}
\ifshortproofs \begin{proof} Consider any configuration $\mathsf{X}_i$, with $i\geq j$, and let $u\neq v$ be the highest ancestor of $w$ that is occupied by a mutant (initially it is $u=w$). Let $\rho_1$ be the probability that $u$ reproduces and places an offspring on its parent $\mathsf{par}(u)$, and $\rho_2$ be the probability that a neighbor of $u$ places an offspring on $u$. It is straightforward from the weight assignment that \[ \frac{\rho_1}{\rho_2} = \Omega(n)\ ; \] that is, whp $u$ places an offspring on its parent before $u$ is replaced by a neighbor. Hence, if $\rho=s_i, s_{i+1},\dots $ is the one-dimensional random walk that tracks the highest ancestor of $w$ which is occupied by a mutant, the forward bias of $\rho$ is $\Omega(n)$. The desired result follows directly from the analysis of biased random walks (see, e.g.,~\cite{Kemeny12},~\cite[Section~6.3]{Nowak06b}). \end{proof} \fi \iffullproofs \begin{proof} Let $t$ be the first time such that $v\in \mathsf{X}_t$ (possibly $t=\infty$, denoting that $v$ never becomes mutant). Let $s_i$ be the random variable such that \[ s_i= \left\{ \begin{array}{lr}
|\mathsf{X}_i\cap \mathsf{anc}(w)| &\text{ if } i<t\\
| \mathsf{anc}(w)| &\text{ if } i\geq t \end{array} \right. \]
In words, $s_i$ counts the number of mutant ancestors of $w$ until time $t$. Given the current configuration $\mathsf{X}_i$ with $0<s_i<|\mathsf{anc}(w)|$, let $u=\arg\min_{z\in \mathsf{X}_i\cap \mathsf{anc}(w)} \lambda(z)$. The probability that $s_{i+1}=s_i+1$ is lowerbounded by the probability that $u$ reproduces and places an offspring on $\mathsf{par}(u)$. Similarly, the probability that $s_{i+1}=s_i-1$ is upperbounded by the probability that (i)~$\mathsf{par}(u)$ reproduces and places an offspring on $u$, plus (ii)~the probability that some $z\in\mathsf{des}(u)\setminus\mathsf{X}_i$ reproduces and places an offspring on $\mathsf{par}(z)$.
We now proceed to compute the above probabilities. Consider any configuration $\mathsf{X}_i$, and let $z$ be any child of $u$ and $z'$ any child of $z$. The above probabilities crucially depend on the following quantities: \[ \frac{\mathsf{w}(u, \mathsf{par}(u))}{\mathsf{w}(u)};\qquad \frac{\mathsf{w}(u, \mathsf{par}(u))}{\mathsf{w}(\mathsf{par}(u))}; \qquad \sum_{z_i\in \mathsf{des}(u)}\frac{\mathsf{w}(\mathsf{par}(z_i),z_i)}{\mathsf{w}(z_i)}\ . \]
Recall that \begin{compactitem} \item $\mathsf{w}(u, \mathsf{par}(u))=2^{-n}\cdot n^{-4\cdot \lambda(\mathsf{par}(u))}$ \item $\mathsf{w}(u,z)=2^{-n}\cdot n^{-4\cdot \lambda(u)}$ \item $\mathsf{w}(z,z')=2^{-n}\cdot n^{-4\cdot \lambda(z)}$ \item $\mathsf{w}(\mathsf{par}(u), \mathsf{par}(\mathsf{par}(u))) = 2^{-n}\cdot n^{-4\cdot \lambda(\mathsf{par}(\mathsf{par}(u)))}$ \item $\mathsf{w}(u,u)=n^{-2\cdot\lambda(u)}$ \item $\mathsf{w}(\mathsf{par}(u), \mathsf{par}(u))=n^{-2\cdot\lambda(\mathsf{par}(u))}$ \item $\mathsf{w}(z,z)=n^{-2\cdot\lambda(z)}$ \end{compactitem}
Thus, we have
\begin{align*}
\frac{\mathsf{w}(u, \mathsf{par}(u))}{\mathsf{w}(u)}=&\frac{\mathsf{w}(u, \mathsf{par}(u))}{\mathsf{w}(u,u) + \mathsf{w}(u, \mathsf{par}(u)) + |\mathsf{chl}(u)|\cdot \mathsf{w}(u,z)} = \frac{2^{-n}\cdot n^{-4\cdot(\lambda(u)-1)}}{O(n^{-2\cdot\lambda(u)})}\\ =&\Omega(2^{-n}\cdot n^{-2\cdot(\lambda(u)-2)})\numberthis\label{eq:weights1}
\begin{align*}
\frac{\mathsf{w}(u, \mathsf{par}(u))}{\mathsf{w}(\mathsf{par}(u))}=&\frac{\mathsf{w}(u, \mathsf{par}(u))}{\mathsf{w}(\mathsf{par}(u), \mathsf{par}(u)) + \mathsf{w}(\mathsf{par}(u), \mathsf{par}(\mathsf{par}(u))) + |\mathsf{chl}(\mathsf{par}(u))|\cdot \mathsf{w}(u,\mathsf{par}(u))}\\ =& \frac{2^{-n}\cdot n^{-4\cdot(\lambda(u)-1)}}{\Omega(n^{-2\cdot(\lambda(u)-1)})} =O(2^{-n}\cdot n^{-2\cdot(\lambda(u)-1)})\numberthis\label{eq:weights2} \end{align*}
\begin{align*}
\sum_{z_i\in \mathsf{des}(u)}\frac{\mathsf{w}(\mathsf{par}(z_i),z_i)}{\mathsf{w}(z_i)} = & |\mathsf{des}(u)|\cdot \frac{\mathsf{w}(u,z)}{\mathsf{w}(z,z)+\mathsf{w}(u,z) + |\mathsf{chl}(z)|\cdot \mathsf{w}(z,z')}\\
\leq & |\mathsf{des}(u)|\cdot \frac{2^{-n}\cdot n^{-4\cdot\lambda(u)}}{\Omega(n^{-2\cdot (\lambda(u)+1)})} = n\cdot O(2^{-n}\cdot n^{-2\cdot (\lambda(u)-1)})\\ =& O(2^{-n}\cdot n^{-2\cdot \lambda(u)+3}) \numberthis\label{eq:weights3} \end{align*}
Thus, using \cref{eq:weights1}, \cref{eq:weights2} and \cref{eq:weights3}, we obtain
\begin{align*} \frac{\Probr{s_{i+1}=s_i+1}}{\Probr{s_{i+1}=s_i-1}}\geq &\frac{\frac{r}{\mathsf{F}(\mathsf{X}_i)}\cdot \frac{\mathsf{w}(u, \mathsf{par}(u))}{\mathsf{w}(u)} }{\frac{1}{\mathsf{F}(\mathsf{X}_i)}\cdot \left(\frac{\mathsf{w}(u, \mathsf{par}(u))}{\mathsf{w}(\mathsf{par}(u))} + \sum_{z_i\in \mathsf{des}(u)}\frac{\mathsf{w}(\mathsf{par}(z_i),z_i)}{\mathsf{w}(z_i)}\right)}\\ =&\frac{\Omega(2^{-n}\cdot n^{-2\cdot(\lambda(u)-2)})}{O(2^{-n}\cdot n^{-2\cdot(\lambda(u)-1)})+ O(2^{-n}\cdot n^{-2\cdot \lambda(u)+3})} =\Omega(n) \numberthis\label{eq:ratio5} \end{align*}
Let $\alpha(n)=1-O(n^{-1})$ and consider a one-dimensional random walk $P:s'_0, s'_1,\dots$ on states $0\leq i\leq |\mathsf{anc}(w)| $, with transition probabilities
\[\numberthis\label{eq:walk2}
\Probr{s'_{i+1}=\ell|s'_i}= \left\{\begin{array}{lr}
\alpha(n) & \text{ if } 0<s'_i<|\mathsf{anc}(w)| \text{ and } \ell=s'_i+1\\
1-\alpha(n) & \text{ if } 0<s'_i<|\mathsf{anc}(w)| \text{ and } \ell=s'_i-1\\
0 & \text{otherwise } \end{array}\right. \]
Using \cref{eq:ratio5}, we have that
\[ \frac{\Probr{s'_{i+1}=s'_i+1}}{\Probr{s'_{i+1}=s'_i-1}} = \frac{\alpha(n)}{1-\alpha(n)} = \Omega(n) \leq \frac{\Probr{s_{i+1}=s_i+1}}{\Probr{s_{i+1}=s_i-1}}\ . \]
Hence the probability that $s_{\infty}=|\mathsf{anc}(w)|$ is lowerbounded by the probability that $s'_{\infty}=|\mathsf{anc}(w)|$. The latter event occurs with probability $1-O(n^{-1})$ (see e.g.,~\cite{Kemeny12},~\cite[Section~6.3]{Nowak06b}), as desired. \end{proof} \fi
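The absorption bound invoked above is the classical gambler's-ruin formula: for a birth-death walk on $\{0,\dots,N\}$ whose ratio of down-step to up-step probabilities is $\beta$ at every interior state, the probability of hitting $N$ before $0$ from state $k$ is $(1-\beta^k)/(1-\beta^N)$. With forward bias $\Omega(n)$, i.e. $\beta=O(n^{-1})$, a walk started at $k=1$ is absorbed at the top with probability $1-O(n^{-1})$. A minimal sketch with illustrative numbers:

```python
def hit_top_prob(beta, N, k=1):
    """Gambler's ruin: P[walk hits N before 0 | start at k] when the ratio
    P(step down) / P(step up) equals beta at every interior state."""
    if beta == 1:
        return k / N                      # unbiased walk
    return (1 - beta ** k) / (1 - beta ** N)

n = 100
p = hit_top_prob(beta=1 / n, N=50)        # forward bias Omega(n), start at 1
# p = (1 - 1/n) / (1 - n**-50), i.e. 1 - O(1/n)
```

The values of $n$ and $N$ are placeholders; in the lemma the state space is the set of ancestors of $w$, of size at most $n$.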
\subsubsection{Analysis of Stage~3: Event $\mathcal{E}_3$} We now focus on the evolution on the hub $\mathcal{H}$, and establish several useful results. \begin{compactenum} \item First, we show that $G^{\mathsf{w}}\restr{\mathcal{H}}$ is isothermal (\cref{lem:sink_isothermal}).
\item Second, the above result implies that the hub behaves as a well-mixed population. For advantageous mutants ($r>1$), this implies the following (\cref{lem:sink_independent}). \begin{compactenum} \item Every time a mutant hits a hub of only residents, the mutant has at least a \emph{constant} probability of fixating in the hub. \item In contrast, every time a resident hits a hub of only mutants, the resident has an \emph{exponentially small} probability of fixating in the hub. \end{compactenum}
\item Third, we show that an initial mutant adjacent to the hub hits the hub a polynomial number of times (\cref{lem:sink_hit}).
\item Finally, we show that an initial mutant adjacent to the hub leads to fixation in the hub whp (\cref{lem:sink_mutant}), i.e., we show that given event $\mathcal{E}_2$ the event $\mathcal{E}_3$ happens whp. \end{compactenum} We start by observing that the hub is isothermal, which follows by a direct application of the definition of isothermal (sub)graphs~\cite{Lieberman05}.
\begin{lemma}\label{lem:sink_isothermal} The graph $G^{\mathsf{w}}\restr{\mathcal{H}}$ is isothermal. \end{lemma}
\begin{proof}
Consider any vertex $u\in \mathcal{H}\setminus\mathcal{F}$. We have \begin{align*} \mathsf{T}\restr{\mathcal{H}}(u)=&\sum_{v\in \mathsf{Nh}(u)\cap \mathcal{H}}\frac{\mathsf{w}\restr{\mathcal{H}}(v,u)}{\mathsf{w}\restr{\mathcal{H}}(v)} = \frac{\mathsf{w}\restr{\mathcal{H}}(u,u)}{\mathsf{w}\restr{\mathcal{H}}(u)} + \sum_{v\in (\mathsf{Nh}(u)\setminus\{u\})\cap \mathcal{H} }\frac{\mathsf{w}\restr{\mathcal{H}}(v,u)}{\mathsf{w}\restr{\mathcal{H}}(v)} \\ =&\frac{\mathsf{w}(u,u)}{\mathsf{w}(u)} + \sum_{v\in (\mathsf{Nh}(u)\setminus\{u\})\cap \mathcal{H} }\frac{\mathsf{w}(v,u)}{\mathsf{w}(v)}\\ =&\frac{1}{\mu\cdot 2^{-n}+\nu} \cdot \left(\mathsf{w}(u,u) + \sum_{v\in (\mathsf{Nh}(u)\setminus\{u\})\cap \mathcal{H} }1\right)\\ =&\frac{1}{\mu\cdot 2^{-n}+\nu}\cdot (\mu\cdot 2^{-n}+\nu - \mathsf{deg}(u) + \mathsf{deg}(u))\\ =& 1 \end{align*}
since by \cref{lem:sink_weights} we have $\mathsf{w}(u)=\mu\cdot 2^{-n}+\nu$. Similarly, consider any $u\in \mathcal{F}$. We have \begin{align*} \mathsf{T}\restr{\mathcal{H}}(u)=&\sum_{v\in \mathsf{Nh}(u)\cap \mathcal{H}}\frac{\mathsf{w}\restr{\mathcal{H}}(v,u)}{\mathsf{w}\restr{\mathcal{H}}(v)} = \frac{\mathsf{w}\restr{\mathcal{H}}(u,u)}{\mathsf{w}\restr{\mathcal{H}}(u)} + \sum_{v\in (\mathsf{Nh}(u)\setminus\{u\})\cap \mathcal{H} }\frac{\mathsf{w}\restr{\mathcal{H}}(v,u)}{\mathsf{w}\restr{\mathcal{H}}(v)} \\ =&\frac{\mathsf{w}(u,u) + \sum_{v\in \mathsf{Nh}(u)\setminus\mathcal{H}}\mathsf{w}(u,v)}{\mathsf{w}(u)} + \sum_{v\in (\mathsf{Nh}(u)\setminus\{u\})\cap \mathcal{H} }\frac{\mathsf{w}(v,u)}{\mathsf{w}(v)}\\ =&\frac{1}{\mu\cdot 2^{-n}+\nu} \cdot \left(\mathsf{w}(u,u) +\sum_{v\in \mathsf{Nh}(u)\setminus\mathcal{H}} 2^{-n} + \sum_{v\in (\mathsf{Nh}(u)\setminus\{u\})\cap \mathcal{H} }1\right)\\
=&\frac{1}{\mu\cdot 2^{-n}+\nu}\cdot ((\mu-|\mathsf{chl}(u)|)\cdot 2^{-n}+\nu - \mathsf{deg}(u) + |\mathsf{chl}(u)|\cdot 2^{-n} + \mathsf{deg}(u))\\ =& 1 \end{align*} Thus, for all $u\in \mathcal{H}$ we have $\mathsf{T}\restr{\mathcal{H}}(u)=1$, as desired. \end{proof}
\begin{lemma}\label{lem:sink_independent} Consider that at some time $j$ the configuration of the Moran process on $G^{\mathsf{w}}$ is $\mathsf{X}_j$. \begin{compactenum}
\item If $|\mathcal{H}\cap \mathsf{X}_j|\geq 1$, i.e., there is at least one mutant in the hub, then a subsequent configuration $\mathsf{X}_{t}$ with $\mathcal{H}\subseteq \mathsf{X}_t$ will be reached with probability at least $1-r^{-1} - 2^{-\Omega(n)}$ (i.e., mutants fixate in the hub with constant probability).
\item If $|\mathcal{H}\setminus \mathsf{X}_j|=1$, i.e., there is exactly one resident in the hub, then a subsequent configuration $\mathsf{X}_{t}$ with $\mathcal{H}\subseteq \mathsf{X}_t$ will be reached with probability at least $1-2^{-\Omega(m)}$, where $m=n^{1-\gamma}$ (i.e., mutants fixate in the hub with probability exponentially close to~1). \end{compactenum}
\end{lemma} \ifshortproofs \begin{proof}
Given a configuration $\mathsf{X}_i$, denote by $s_i=|\mathcal{H} \cap \mathsf{X}_i|$ the number of mutant individuals in the hub. Consider any configuration $\mathsf{X}_i$ with $0<s_i<|\mathcal{H}|$, i.e., mutants and residents coexist in the hub. It follows directly from the weight assignment and \cref{lem:sink_isothermal} that the ratio of the probability of decreasing the mutants by one (i.e., $s_{i+1}=s_i-1$) over that of increasing the mutants by one (i.e., $s_{i+1}=s_i+1$) is at most \[ \frac{1}{r} + 2^{-\Omega(n)} = \beta\ . \] Hence we have a one-dimensional random walk $\rho=s_j, s_{j+1},\dots $ with backward bias $\beta=1/r + 2^{-\Omega(n)}$. We use the standard results on one-dimensional random walks (see, e.g.,~\cite{Kemeny12},~\cite[Section~6.3]{Nowak06b}) to obtain a lowerbound on the probability that the Moran process reaches a configuration $\mathsf{X}_t$ with $\mathcal{H}\subseteq \mathsf{X}_t$. In particular, we have the following. \begin{compactenum}
\item If initially $s_j=|\mathcal{H}\cap \mathsf{X}_j|\geq 1$ (i.e., at least one mutant is present in a resident hub), then the probability is \[
\rho_1\geq \frac{1-\beta}{1-\beta^{|\mathcal{H}|}} \geq 1-r^{-1} - 2^{-\Omega(n)}\ . \]
\item If initially $|\mathcal{H}\setminus \mathsf{X}_j|=1$, i.e., $s_j=|\mathcal{H}|-1$ (a single resident invades a mutant hub), then the probability is \[
\rho_2\geq 1-\frac{1-\beta^{-1}}{1-\beta^{-|\mathcal{H}|}} \geq 1-2^{-\Omega(m)}\ ; \]
where $m=|\mathcal{H}|=n^{1-\gamma}$. \end{compactenum} The desired result follows. \end{proof} \fi
\iffullproofs \begin{proof}
Given a configuration $\mathsf{X}_i$, denote by $s_i=|\mathcal{H} \cap \mathsf{X}_i|$. Let $\mathsf{X}_i$ be any configuration of the Moran process with $0<s_i<|\mathcal{H}|$, $u$ be the random variable that indicates the vertex that is chosen for reproduction in $\mathsf{X}_i$, and $\mathsf{X}_{i+1}$ be the random variable that indicates the configuration of the population in the next step. By \cref{lem:sink_isothermal}, the subgraph $G^{\mathsf{w}}\restr{\mathcal{H}}$ induced by the hub $\mathcal{H}$ is isothermal, thus \[
\frac{\Probr{s_{i+1}=s_i-1|u\in \mathcal{H}}}{\Probr{s_{i+1}=s_i+1|u\in \mathcal{H}}}=\frac{1}{r}\ . \numberthis\label{eq:ratio1} \] Additionally, \begin{align*}
\Probr{s_{i+1}=s_i-1|u\not \in \mathcal{H}} \leq & \sum_{\substack{v \in \mathcal{F} \\ u\in\mathsf{chl}(v)}} \left( \frac{1}{\mathsf{F}(\mathsf{X}_i)}\cdot \frac{\mathsf{w}(u,v)}{\mathsf{w}(u)} \right) \leq n^{-1}\cdot \sum_{\substack{v \in \mathcal{F} \\ u\in\mathsf{chl}(v)}} \frac{ 2^{-n}}{n^{-2}}\\ \leq& n^{-1}\cdot n \cdot 2^{-n}\cdot n^{2} =O(n^{2}\cdot 2^{-n}) \numberthis\label{eq:ratio2} \end{align*}
since $1/\mathsf{F}(\mathsf{X}_i)\leq n^{-1}$, $\mathsf{w}(u,v)=2^{-n}$ and $\mathsf{w}(u,u)=n^{-2}$. Moreover, as $\mathcal{H}$ is heterogeneous, it contains at least a mutant vertex $v$ and a resident vertex $w\in \mathsf{Nh}(v)$, and $v$ reproduces with probability $r/\mathsf{F}(\mathsf{X}_i)\geq n^{-1}$, and replaces the individual $w\in \mathcal{H}$ with probability at least $1/\mathsf{w}(v)$. Hence we have \begin{align*}
\Probr{s_{i+1}=s_i+1|u\in \mathcal{H}}\cdot \Probr{u\in \mathcal{H}} \geq \frac{1}{\mathsf{w}(v)} \cdot \frac{r}{\mathsf{F}(\mathsf{X}_i)} \geq \frac{1}{\mu\cdot 2^{-n} + \nu} \cdot n^{-1} \geq \frac{1}{n\cdot 2^{-n} + n}\cdot n^{-1} = \Omega(n^{-2}) \numberthis\label{eq:ratio3}
since by \cref{lem:sink_weights} we have $\mathsf{w}(v)=\mu\cdot 2^{-n}+\nu$. Using \cref{eq:ratio1}, \cref{eq:ratio2} and \cref{eq:ratio3}, we have \begin{align*}
\frac{\Probr{s_{i+1}=s_i-1}}{\Probr{s_{i+1}=s_i+1}}=&\frac{\Probr{s_{i+1}=s_i-1|u\in \mathcal{H}}\cdot \Probr{u\in \mathcal{H}} + \Probr{s_{i+1}=s_i-1|u\not \in \mathcal{H}}\cdot \Probr{u\not \in \mathcal{H}}}{\Probr{s_{i+1}=s_i+1|u\in \mathcal{H}}\cdot \Probr{u \in \mathcal{H}} + \Probr{s_{i+1}=s_i+1|u \not \in \mathcal{H}}\cdot \Probr{u \not \in \mathcal{H}}}\\
\leq& \frac{\Probr{s_{i+1}=s_i-1|u\in \mathcal{H}}\cdot \Probr{u\in \mathcal{H}} + \Probr{s_{i+1}=s_i-1|u\not \in \mathcal{H}}\cdot \Probr{u\not \in \mathcal{H}}}{\Probr{s_{i+1}=s_i+1|u\in \mathcal{H}}\cdot \Probr{u \in \mathcal{H}}}\\
\leq & \frac{\Probr{s_{i+1}=s_i-1|u\in \mathcal{H}}}{\Probr{s_{i+1}=s_i+1|u\in \mathcal{H}}} + O(n^{2})\cdot \Probr{s_{i+1}=s_i-1|u\not \in \mathcal{H}} = \frac{1}{r} + 2^{-\Omega(n)} \numberthis\label{eq:ratio4} \end{align*}
Hence, $s_{j}, s_{j+1},\dots$ performs a one-dimensional random walk on the states $0\leq i\leq |\mathcal{H}|$, with the ratio of transition probabilities given by \cref{eq:ratio4}. Let $\alpha(n)=r/(r+1+2^{-\Omega(n)})$ and consider the one-dimensional random walk $\rho:s'_{j}, s'_{j+1},\dots$ on states $0\leq i\leq |\mathcal{H}|$, with transition probabilities
\[\numberthis\label{eq:walk1}
\Probr{s'_{i+1}=\ell|s'_i}= \left\{\begin{array}{lr}
\alpha(n) & \text{ if } 0<s'_i<|\mathcal{H}| \text{ and } \ell=s'_i+1\\
1-\alpha(n) & \text{ if } 0<s'_i<|\mathcal{H}| \text{ and } \ell=s'_i-1\\
0 & \text{otherwise } \end{array}\right. \]
Using \cref{eq:ratio4} we have that \[ \frac{\Probr{s'_{i+1}=s'_i-1}}{\Probr{s'_{i+1}=s'_i+1}} = \frac{1-\alpha(n)}{\alpha(n)} = \frac{1}{r} + 2^{-\Omega(n)} \geq \frac{\Probr{s_{i+1}=s_i-1}}{\Probr{s_{i+1}=s_i+1}}\ . \]
Let $\rho_1$ (resp. $\rho_2$) be the probability that the Moran process starting on configuration $\mathsf{X}_j$ with $|\mathcal{H}\cap \mathsf{X}_j|\geq 1$ (resp. $|\mathcal{H}\setminus \mathsf{X}_j|=1$) will reach a configuration $\mathsf{X}_t$ with $\mathcal{H}\subseteq \mathsf{X}_t$. We have that $\rho_1$ (resp. $\rho_2$) is lowerbounded by the probability that $\rho$ gets absorbed in $s'_{\infty}=|\mathcal{H}|$ when it starts from $s'_{j}=1$ (resp. $s'_{j}=|\mathcal{H}|-1$). Let \[ \beta=\frac{\Probr{s'_{i+1}=s'_i-1}}{\Probr{s'_{i+1}=s'_i+1}}=\frac{1}{r} + 2^{-\Omega(n)}<1\ ; \] and we have (see e.g.,~\cite{Kemeny12},~\cite[Section~6.3]{Nowak06b}) \[
\rho_1\geq \frac{1-\beta}{1-\beta^{|\mathcal{H}|}}\geq 1-\beta = 1-\frac{1}{r} - 2^{-\Omega(n)}\ ; \] and \[
\rho_2\geq 1-\frac{1-\beta^{-1}}{1-\beta^{-|\mathcal{H}|}} \geq 1-\frac{\beta^{-1}}{\beta^{-|\mathcal{H}|}} = 1-\beta^{|\mathcal{H}|-1}= 1-\left(\frac{1}{r} + 2^{-\Omega(n)}\right)^{n^{1-\gamma}-1}= 1-2^{-\Omega(n^{1-\gamma})}\ ; \]
since $\beta^{-|\mathcal{H}|} > \beta^{-1}$ and thus $(\beta^{-1}-1)/(\beta^{-|\mathcal{H}|}-1)\leq \beta^{-1}/\beta^{-|\mathcal{H}|}$. The desired result follows. \end{proof} \fi
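The two lower bounds on $\rho_1$ and $\rho_2$ can be checked numerically against the exact absorption probability of the comparison walk (illustrative values of $r$ and $|\mathcal{H}|$; the $2^{-\Omega(n)}$ correction is dropped for the check):

```python
def absorb_at_top(beta, N, k):
    """Exact P[absorbed at N before 0 | start at k] for a birth-death walk
    with P(down)/P(up) = beta at every interior state."""
    if beta == 1:
        return k / N
    return (1 - beta ** k) / (1 - beta ** N)

r, N = 2.0, 30               # illustrative fitness r and hub size |H|
beta = 1 / r                 # backward bias, ignoring the 2^{-Omega(n)} term
rho1 = absorb_at_top(beta, N, 1)       # one mutant in a resident hub
rho2 = absorb_at_top(beta, N, N - 1)   # one resident in a mutant hub
```

Both exact values dominate the stated bounds: `rho1` exceeds $1-\beta=1-1/r$ and `rho2` exceeds $1-\beta^{|\mathcal{H}|-1}$.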
\begin{lemma}\label{lem:sink_hit} Consider that at some time $j$ the configuration of the Moran process on $G^{\mathsf{w}}$ is $\mathsf{X}_j$ such that $v\in \mathsf{X}_j$ for some $v\not\in \mathcal{H}$ that is adjacent to the hub ($\lambda(v)=1$). Then a mutant hits the hub at least $n^{1/3}$ times with probability $1-O(n^{-1/3})$. \end{lemma} \ifshortproofs \begin{proof} For any configuration $\mathsf{X}_i$ occurring after $\mathsf{X}_j$, let \begin{compactenum} \item $A$ be the event that $v$ places an offspring on $\mathsf{par}(v)$ in $\mathsf{X}_{i+1}$, and \item $B$ be the event that a neighbor of $v$ places an offspring on $v$ in $\mathsf{X}_{i+1}$, \end{compactenum} and let $\rho_A$ and $\rho_B$ be the corresponding probabilities. It follows directly from the weight assignments that \[ \rho_A=\Omega\left( n\cdot 2^{-n} \right) \qquad \text{ and } \qquad \rho_B = O(2^{-n})\ ; \] i.e., the event $A$ occurs $\Omega(n)$ times more frequently than event $B$. Using Markov's inequality, we have that with probability at least $1-O(n^{-1/3})$, the event $A$ occurs at least $n^{1/3}$ times before event $B$ occurs. The desired result follows. \end{proof} \fi
\iffullproofs \begin{proof} For any configuration $\mathsf{X}_i$ occurring after $\mathsf{X}_j$, let \begin{compactenum} \item $A$ be the event that $v$ places an offspring on $\mathsf{par}(v)$ in $\mathsf{X}_{i+1}$, and \item $B$ be the event that a neighbor of $v$ places an offspring on $v$ in $\mathsf{X}_{i+1}$, \end{compactenum} and let $\rho_A$ and $\rho_B$ be the corresponding probabilities. Using \cref{eq:weights1}, we have \[ \rho_A=\frac{r}{\mathsf{F}(\mathsf{X}_i)}\cdot \frac{\mathsf{w}(v, \mathsf{par}(v))}{\mathsf{w}(v)}=\Omega\left( n\cdot 2^{-n} \right)\ ; \numberthis\label{eq:rhoA} \] and using \cref{eq:weights2} and \cref{eq:weights3} \[ \rho_B\leq \frac{r}{\mathsf{F}(\mathsf{X}_i)}\cdot \left(\frac{\mathsf{w}(v, \mathsf{par}(v))}{\mathsf{w}(\mathsf{par}(v))} + \sum_{z\in \mathsf{chl}(v)}\frac{\mathsf{w}(v,z)}{\mathsf{w}(z)}\right) \leq \frac{r}{n}\cdot \left(2^{-n}+O\left(n\cdot 2^{-n}\right)\right) = 2^{-\Omega(n)}\ ; \numberthis\label{eq:rhoB} \] since $\mathsf{par}(v)\in \mathcal{H}$ and by \cref{lem:sink_weights} we have $\mathsf{w}(\mathsf{par}(v))\geq 1$. Let $X$ be the random variable that counts the time required until event $A$ occurs $n^{1/3}$ times. Then, for all $\ell\in \mathbb{N}$ we have $\Prob{X\geq \ell}\leq \Prob{X'\geq \ell}$ where $X'$ is a random variable that follows the negative binomial distribution on $n^{1/3}$ failures with success rate $\rho_{X'}=1-O(n\cdot 2^{-n})$, where $1-\rho_{X'}\leq \rho_A$ (using \cref{eq:rhoA}). The expected value of $X'$ is \[ \Expect{X'} = \frac{\rho_{X'}\cdot n^{1/3}}{1-\rho_{X'}} = O\left(\frac{1-n\cdot 2^{-n}}{n^{2/3}\cdot 2^{-n}}\right)\ . \] Let $\alpha=2^n\cdot n^{-1/3}$, and by Markov's inequality, we have \[ \Prob{X'\geq \alpha}\leq \frac{\Expect{X'}}{\alpha}=\frac{O\left(\frac{1-n\cdot 2^{-n}}{n^{2/3}\cdot 2^{-n}}\right)}{2^n\cdot n^{-1/3}}=O(n^{-1/3})\ . \] Similarly, let $Y$ be the random variable that counts the time required until event $B$ occurs.
Then, for all $\ell\in \mathbb{N}$, we have $\Prob{Y\leq \ell}\leq \Prob{Y'\leq \ell}$, where $Y'$ is a geometrically distributed variable with rate $\rho_{Y'}=O(2^{-n})$ satisfying $\rho_{Y'}\geq \rho_B$ (using \cref{eq:rhoB}). Then \[ \Prob{Y'\leq \alpha} = 1-(1-\rho_{Y'})^{\alpha} = O(n^{-1/3})\ ; \] and thus \[ \Prob{Y\leq X} \leq \Prob{Y\leq \alpha} + \Prob{X\geq \alpha} \leq \Prob{Y'\leq \alpha } + \Prob{X'\geq \alpha} = O(n^{-1/3})\ . \numberthis\label{eq:hitsink} \] Hence, with probability at least $1-O(n^{-1/3})$, the vertex $v$ places an offspring on $\mathsf{par}(v)$ at least $n^{1/3}$ times before it is replaced by a neighbor. The desired result follows. \end{proof} \fi
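The race between events $A$ and $B$ in the proof is a sequence of competing geometrics: if each step is an $A$-step with probability $p_A$, a $B$-step with probability $p_B$, and neutral otherwise, then $A$ wins any given round with probability $p_A/(p_A+p_B)$, and $A$ occurs $k$ times before $B$ occurs once with probability $(p_A/(p_A+p_B))^k$. A sketch with small stand-in values (the proof's scales are $\rho_A=\Omega(n\cdot 2^{-n})$, $\rho_B=O(2^{-n})$, $k=n^{1/3}$):

```python
def k_wins_before_loss(pA, pB, k):
    """P[A occurs k times before B occurs once], when each step is A w.p. pA,
    B w.p. pB, and neither otherwise: the neutral steps can be ignored, and
    the k rounds are independent, each won by A w.p. pA / (pA + pB)."""
    return (pA / (pA + pB)) ** k

n = 1000
k = round(n ** (1 / 3))          # about n^(1/3) hits of the hub are needed
pA = n * 2.0 ** -20              # stand-in for rho_A (n * 2^-n scale)
pB = 2.0 ** -20                  # stand-in for rho_B (2^-n scale)
p = k_wins_before_loss(pA, pB, k)
# with pA/pB = n, this is (n/(n+1))**k, i.e. 1 - O(k/n)
```

With ratio $p_A/p_B=n$ this probability is $(n/(n+1))^{n^{1/3}}=1-O(n^{-2/3})$, comfortably within the lemma's $1-O(n^{-1/3})$ bound.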
\begin{lemma}\label{lem:sink_mutant} Consider that at some time $j$ the configuration of the Moran process on $G^{\mathsf{w}}$ is $\mathsf{X}_j$ with $v\in \mathsf{X}_j$ for some $v\not\in \mathcal{H}$ that is adjacent to the hub ($\lambda(v)=1$). Then a subsequent configuration $\mathsf{X}_t$ with $\mathcal{H} \subseteq \mathsf{X}_t$ (mutants fixating in the hub) is reached with probability $1-O(n^{-1/3})$, i.e., given event $\mathcal{E}_2$, the event $\mathcal{E}_3$ happens whp. \end{lemma} \ifshortproofs \begin{proof} By \cref{lem:sink_hit}, we have that with probability at least $1-O(n^{-1/3})$, the vertex $v$ places an offspring on $\mathsf{par}(v)$ at least $n^{1/3}$ times before it is replaced by a neighbor. By \cref{lem:sink_independent}, each time a new mutant hits the hub, the probability that a configuration $\mathsf{X}_t$ is reached with $\mathcal{H}\subseteq \mathsf{X}_t$ (i.e., the hub becomes mutant) is at least $ 1-r^{-1}-2^{-\Omega(n)} $. Hence, the probability that a configuration $\mathsf{X}_t$ is reached with $\mathcal{H}\subseteq \mathsf{X}_t$ is at least \[ (1-O(n^{-1/3}))\cdot \left(1-\left(r^{-1}+z\right)^{m}\right) = 1-O(n^{-1/3})\ ; \] where $z=2^{-\Omega(n)}$ and $m=n^{1/3}$. The desired result follows. \end{proof} \fi
\iffullproofs \begin{proof} By \cref{lem:sink_hit}, we have that with probability at least $1-O(n^{-1/3})$, the vertex $v$ places an offspring on $\mathsf{par}(v)$ at least $n^{1/3}$ times before it is replaced by a neighbor. Let $t_i$ be the time that $v$ places its $i$-th offspring on $\mathsf{par}(v)$, with $1\leq i\leq n^{1/3}$. Let $A_i$ be the event that a configuration $\mathsf{X}_t$ is reached, where $t\geq t_i$ and such that $\mathcal{H}\subseteq \mathsf{X}_t$. By \cref{lem:sink_independent}, we have $\Probr{A_i}\geq 1-r^{-1}-2^{-\Omega(n)}$. Moreover, with probability $1-2^{-\Omega(n)}$, at each time $t_i$ the hub is in a homogeneous state, i.e., either $\mathcal{H}\subseteq \mathsf{X}_{t_i}$ or $\mathcal{H}\cap \mathsf{X}_{t_i}=\emptyset$. The proof is similar to that of \cref{lem:sink_leak}, and is based on the fact that every edge with one endpoint in the hub and the other outside the hub has exponentially small weight (i.e., $2^{-n}$), whereas the hub $G^{\mathsf{w}}\restr{\mathcal{H}}$ resolves to a homogeneous state in polynomial time with probability exponentially close to 1. It follows that with probability at least $p=1-2^{-\Omega(n)}$, the events $\overline{A}_i$ are mutually independent, and thus
\[ \Probr{\overline{A}_1\cap \overline{A}_2\dots \cap \overline{A}_{n^{1/3}}} \leq p\cdot \prod_{i=1}^{n^{1/3}}\Probr{\overline{A}_i} + (1-p) \leq \prod_{i=1}^{n^{1/3}}(1-\Probr{A_i}) + 2^{-\Omega(n)} \leq \left(r^{-1}+2^{-\Omega(n)}\right)^{n^{1/3}} + 2^{-\Omega(n)}\ . \numberthis\label{eq:independentAi} \]
Finally, starting from $\mathsf{X}_0=\{u\}$, the probability that a configuration $\mathsf{X}_t$ is reached such that $\mathcal{H}\subseteq\mathsf{X}_t$ is lowerbounded by the probability of the events that \begin{compactenum} \item the ancestor $v$ of $u$ is eventually occupied by a mutant, and \item $v$ places at least $n^{1/3}$ offspring on $\mathsf{par}(v)\in \mathcal{H}$ before a neighbor of $v$ places an offspring on $v$, and \item the event $\overline{A}_1\cap \overline{A}_2\dots \cap \overline{A}_{n^{1/3}}$ does not occur. \end{compactenum} Combining \cref{lem:ancestor_mutant}, \cref{eq:hitsink} and \cref{eq:independentAi}, we obtain that the goal configuration $\mathsf{X}_t$ is reached with probability at least \[ (1-O(n^{-1}))\cdot (1-O(n^{-1/3}))\cdot \left(1-\Probr{\overline{A}_1\cap \overline{A}_2\dots \cap \overline{A}_{n^{1/3}}}\right) = 1-O(n^{-1/3})\ ; \] as desired. \end{proof} \fi
\subsubsection{Analysis of Stage~4: Event $\mathcal{E}_4$}
In this section we present the last stage to fixation. This is established in four intermediate steps. \begin{compactenum} \item First, we consider the event of some vertex in the hub placing an offspring in one of the branches, while the hub is heterogeneous. We show that this event has exponentially small probability of occurring (\cref{lem:sink_leak}). \item We introduce the \emph{modified} Moran process, which, when certain events occur, favors residents more than the conventional Moran process does. This modification underapproximates the fixation probability of mutants, but simplifies the analysis. \item We define a set of simple Markov chains $\mathcal{M}_j$ and show that the fixation of mutants on the $j$-th branch $T_{m_j}^{y_j}$ is captured by the absorption probability to a specific state of $\mathcal{M}_j$ (\cref{lem:coupling}). This absorption probability is computed in \cref{lem:mc_absorb}. \item Finally, we combine the above steps in \cref{lem:modified_process_fix} to show that if the hub is occupied by mutants (i.e., given that event $\mathcal{E}_3$ holds), the mutants eventually fixate in the graph (i.e., event $\mathcal{E}_4$ holds) whp. \end{compactenum}
We start with an intermediate lemma, which states that while the hub is heterogeneous, the probability that a node from the hub places an offspring to one of the branches is exponentially small.
\begin{lemma}\label{lem:sink_leak}
For any configuration $\mathsf{X}_j$ with $|\mathcal{H}\setminus \mathsf{X}_j|=1$, let $t_1\geq j$ be the first time such that $\mathcal{H}\subseteq \mathsf{X}_{t_1}$ (possibly $t_1=\infty$), and $t_2\geq j$ the first time in which a vertex $u\in \mathcal{F}$ places an offspring on some vertex $v\in\mathsf{Nh}(u)\setminus\mathcal{H}$. We have that $\Probr{t_2<t_1}=2^{-\Omega(m)}$, where $m=n^{1-\gamma}$. \end{lemma} \ifshortproofs \begin{proof}
Consider any configuration $\mathsf{X}_i$, and let $s_i=|\mathcal{H} \cap \mathsf{X}_i|$ denote the number of mutants in the hub. As shown in the proof of \cref{lem:sink_independent}, the random variables $s_i$ perform a random walk $\rho$ with backward bias upperbounded by \[ \beta= \frac{1}{r} + 2^{-\Omega(n)}<1\ . \]
Without self-loops, the walk $\rho$ has expected fixation time $O(n)$. With self-loops, in every round, the walk $\rho$ changes state with probability at least $n^{-2}$. Hence, the expected fixation time of $\rho$ is $O(n^3)$, i.e., $\Expect{t_1|t_1 \text{ is finite}}=O(n^3)$. On the other hand, we have $\Expect{t_2}=2^{\Omega(n)}$ since the probability of a vertex $u\in \mathcal{F}$ placing an offspring on some vertex $v\in\mathsf{Nh}(u)\setminus\mathcal{H}$ is at most \[ \frac{1}{n}\cdot \mathsf{w}(u,v)=2^{-\Omega(n)}\ . \]
The desired result follows easily by applying Markov's inequality to the expectations $\Expect{t_1}$ and $\Expect{t_2}$. \end{proof} \fi
\iffullproofs \begin{proof}
Given a configuration $\mathsf{X}_i$, denote by $s_i=|\mathcal{H} \cap \mathsf{X}_i|$ the number of mutants in the hub. Recall from the proof of \cref{lem:sink_independent} that $s_j,s_{j+1},\dots $
performs a one-dimensional random walk on the states $0\leq i\leq |\mathcal{H}|$, with the ratio of transition probabilities given by \cref{eq:ratio4}. Observe that at each step $i$, the random walk changes state with probability at least $n^{-2}$, since $n^{-2}$ is a lowerbound on the probability that the walk progresses to $s_{i+1}=s_i + 1$ (i.e., the mutants increase by one). Consider that the walk starts from $s_j$, and let $H_a$ be the expected absorption time, $H_f$ the expected fixation time on state $|\mathcal{H}|$, and $H_e$ the expected extinction time on state $0$ of the random walk. The unlooped variant of the random walk $s_i,s_{i+1},\dots$ has expected absorption time $O(n)$~\cite{Levin06}, hence the random walk $s_j,s_{j+1},\dots $ has expected absorption time \[ H_a\leq n^{2}\cdot O(n) = O(n^3)\ ; \]
and since by \cref{lem:sink_independent} for large enough $n$ we have $\Probr{s_{\infty}=|\mathcal{H}|}\geq \Probr{s_{\infty}=0}$, we have \[
H_a = \Probr{s_{\infty}=|\mathcal{H}|}\cdot H_f + \Probr{s_{\infty}=0}\cdot H_e \implies H_f \leq 2\cdot H_a =O(n^3)\ . \] Let $t_1'$ be the random variable defined as $t_1'=t_1-j$, and we have \[
\Expect{t_1'|t'_1<\infty} = H_f=O(n^3)\ ; \] i.e., given that a configuration $\mathsf{X}_{t_1}$ with $\mathcal{H}\subseteq \mathsf{X}_{t_1}$ is reached (thus $t_1<\infty$ and $t'_1<\infty$), the expected time we have to wait after time $j$ for this event to happen equals the expected fixation time $H_f$ of the random walk $s_j,s_{j+1},\dots$. Let $\alpha=2^{\frac{n}{2}}$, and by Markov's inequality, we have \[
\Probr{t_1'>\alpha|t'_1<\infty} \leq \frac{\Expect{t_1'|t'_1<\infty}}{\alpha} \leq n^3\cdot 2^{-\frac{n}{2}}\ . \numberthis\label{eq:markov_bound} \] Consider any configuration $\mathsf{X}_i$. The probability $p$ that a vertex $u\in \mathcal{F}$ places an offspring on some vertex $v\in\mathsf{Nh}(u)\setminus\mathcal{H}$ is at most \[ p\leq \frac{r}{\mathsf{F}(\mathsf{X}_i)}\cdot \sum_{u\in \mathcal{F}}\sum_{v\in \mathsf{Nh}(u)\setminus \mathcal{H}}\frac{\mathsf{w}(u,v)}{\mathsf{w}(u)}\leq r\cdot n^{-1}\cdot n^{1-\gamma}\cdot 2^{-n} \leq r\cdot n^2 \cdot 2^{-n}\ , \] since $\mathsf{w}(u,v)=2^{-n}$ and by \cref{lem:sink_weights} we have $\mathsf{w}(u)>1$. Let $t_2'=t_2-j$, and we have $\Probr{t'_2\leq\alpha}\leq \Probr{X\leq \alpha}$, where $X$ is a geometrically distributed random variable with rate $\rho= r\cdot n^2 \cdot 2^{-n}$. Since $\Probr{t_2<t_1} = \Probr{t'_2<t'_1}$, we have \begin{align*}
\Probr{t_2<t_1} = & \Probr{t'_2<t'_1|t'_1<\infty}\cdot \Probr{t'_1<\infty} + \Probr{t'_2<t'_1|t'_1=\infty}\cdot \Probr{t'_1=\infty}\\
\leq & \Probr{t'_2<t'_1|t'_1<\infty} + \Probr{t'_1=\infty} \\
\leq & \Probr{t'_2<t'_1|t'_1<\infty} + 2^{-\Omega(n^{1-\gamma})}\\
\leq & \Probr{t'_2\leq \alpha|t'_1<\infty} + \Probr{t'_1>\alpha|t'_1<\infty} + 2^{-\Omega(n^{1-\gamma})}\\
\leq & \Probr{t'_2\leq \alpha|t'_1<\infty} + n^3\cdot 2^{-\frac{n}{2}} + 2^{-\Omega(n^{1-\gamma})}\\ \leq & \Probr{X\leq \alpha} + 2^{-\Omega(n^{1-\gamma})}\\ \leq & 1-(1-\rho)^{\alpha} + 2^{-\Omega(n^{1-\gamma})}\\ \leq & 1-(1-r\cdot n^2\cdot 2^{-n})^{2^{n/2}}+ 2^{-\Omega(n^{1-\gamma})}\\ =& 2^{-\Omega(n^{1-\gamma})} \end{align*}
The second inequality holds since by \cref{lem:sink_independent} we have $\Probr{t'_1=\infty}=2^{-\Omega(n^{1-\gamma})}$. The fourth inequality comes from \cref{eq:markov_bound}. \end{proof} \fi
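The laziness argument used above (self-loops that leave only a small per-round probability of moving scale expected hitting times by the inverse of that probability) can be checked numerically on a small birth-death chain. The following sketch is purely illustrative and not part of the formal proof; the parameters $m$, $p$, $q$, $L$ are arbitrary small values chosen for the demonstration.

```python
# Illustrative check: adding self-loops so that the walk moves only with
# probability 1/L per round scales expected absorption times by exactly L.

def expected_absorption_times(m, p, q, sweeps=20000):
    """Expected time to reach an absorbing end (0 or m) from each state,
    for a walk that moves up w.p. p, down w.p. q, and stays otherwise.
    Solved by Gauss-Seidel iteration of T_i = 1 + p*T_{i+1} + q*T_{i-1}
    + (1-p-q)*T_i, with T_0 = T_m = 0."""
    T = [0.0] * (m + 1)
    for _ in range(sweeps):
        for i in range(1, m):
            T[i] = 1.0 + p * T[i + 1] + q * T[i - 1] + (1.0 - p - q) * T[i]
    return T

m, p, q, L = 8, 0.4, 0.2, 5                        # illustrative values
base = expected_absorption_times(m, p, q)          # unlooped walk
lazy = expected_absorption_times(m, p / L, q / L)  # lazy walk
for i in range(1, m):
    assert abs(lazy[i] - L * base[i]) / (L * base[i]) < 1e-6
```

The exact factor $L$ follows by substituting $L\cdot T_i$ into the lazy chain's hitting-time equations; the $n^{2}\cdot O(n)$ bound in the proof is the same scaling with $L=n^2$.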
To simplify the analysis, we replace the Moran process with a \emph{modified} Moran process, which favors the residents (hence it is conservative) and allows for rigorous derivation of the fixation probability of the mutants.
\noindent{\bf The modified Moran process.} Consider the Moran process on $G^{\mathsf{w}}$, and assume there exists a first time $t^*<\infty$ when a configuration $\mathsf{X}_{t^{*}}$ is reached such that $\mathcal{H}\subseteq \mathsf{X}_{t^*}$. We underapproximate the fixation probability of the Moran process starting from $\mathsf{X}_{t^*}$ by the fixation probability of the \emph{modified} Moran process $\ov{\State}_{t^*},\ov{\State}_{t^{*}+1},\dots$, which behaves as follows. Recall that for every vertex $y_j$ with $\lambda(y_j)=1$, we denote by $T_{m_j}^{y_j}$ the subtree of $\mathcal{T}_n^{x}$ rooted at $y_j$, which has $m_j$ vertices. Let $V_i$ be the set of vertices of $T_{m_i}^{y_i}$, and note that by construction $m_i\leq n^{1-c}$, while there are at most $n$ such trees. The \emph{modified} Moran process is identical to the Moran process, except for the following modifications.
\begin{compactenum} \item\label{item:mod0} Initially, $\ov{\State}_{t^*}=\mathcal{H}$.
\item\label{item:mod1} At any configuration $\ov{\State}_i$ with $\mathcal{H}\subseteq \ov{\State}_i$, for all trees $T_{m_j}^{y_j}$, if a resident vertex $u\in V_j$ places an offspring on some vertex $v$ with $u\neq v$, then $\ov{\State}_{i+1}=\ov{\State}_i\setminus V_j$ and $|\mathcal{H}\setminus\ov{\State}_{i+1}|=1$, i.e., all vertices of $T_{m_j}^{y_j}$ become residents and the hub is invaded by a single resident. \item\label{item:mod2} If the modified process reaches a configuration $\ov{\State}_i$ with $\ov{\State}_i\cap \mathcal{H}=\emptyset$, the process instead transitions to configuration $\ov{\State}_i=\emptyset$, i.e., if the hub becomes resident, then all mutants go extinct. \item\label{item:mod3} At any configuration $\ov{\State}_i$ with $\mathcal{H}\setminus\ov{\State}_i \neq \emptyset$, if some vertex $u\in \mathcal{F}$ places an offspring on some vertex $v\in\mathsf{Nh}(u)\setminus\mathcal{H}$, then the process instead transitions to configuration $\ov{\State}_i=\emptyset$, i.e., if, while the hub is heterogeneous, an offspring is placed from the hub on a vertex outside the hub, then the mutants go extinct. \end{compactenum} Note that any time a case of \cref{item:mod0}-\cref{item:mod3} applies, the Moran and modified Moran processes transition to configurations $\mathsf{X}_i$ and $\ov{\State}_i$ respectively, with $\ov{\State}_i\subseteq \mathsf{X}_i$. \ifshortproofs Thus, the fixation probability of the Moran process on $G_{n}^{\mathsf{w}}$ is underapproximated by the fixation probability of the modified Moran process. \fi \iffullproofs Thus, the fixation probability of the Moran process on $G_{n}^{\mathsf{w}}$ is underapproximated by the fixation probability of the modified Moran process
(i.e., we have $\Probr{\mathsf{X}_{\infty}=V|t^{*}<\infty}\geq \Probr{\ov{\State}_{\infty}=V}$). \fi It is easy to see that \cref{lem:sink_independent} and \cref{lem:sink_leak} directly apply to the modified Moran process.
\noindent{\bf The Markov chain $\mathcal{M}_j$.} Recall that $T_{m_j}^{y_j}$ refers to the $j$-th branch of the weighted graph $G^{\mathsf{w}}$, rooted at the vertex $y_j$ and consisting of $m_j$ vertices. We associate $T_{m_j}^{y_j}$ with a Markov chain $\mathcal{M}_j$ with $m_j+3$ states, which captures the number of mutants in $T_{m_j}^{y_j}$ and the state of the hub. Intuitively, a state $0\leq i \leq m_j$ of $\mathcal{M}_j$ represents a configuration where the hub is homogeneous and consists only of mutants, and there are $i$ mutants in the branch $T_{m_j}^{y_j}$. The state $\mathcal{H}$ represents a configuration where the hub is heterogeneous, whereas the state $\mathcal{D}$ represents a configuration where the mutants have gone extinct in the hub, and thus the modified Moran process has terminated. We first formally present the Markov chain $\mathcal{M}_j$, and later (in \cref{lem:coupling}) we couple $\mathcal{M}_j$ with the modified Moran process.
Consider any tree $T_{m_j}^{y_j}$, and let $\alpha=1/(n^3+1)$. We define the Markov chain $\mathcal{M}_j=(\mathcal{X}_j, \delta_j)$ as follows: \begin{compactenum} \item The set of states is $\mathcal{X}_j=\{\mathcal{H}, \mathcal{D}\}\cup \{0,1,\dots, m_j\}$. \item The transition probability matrix $\delta_j: \mathcal{X}_j\times \mathcal{X}_j\to [0,1]$ is defined as follows: \begin{compactenum} \item $\delta_j[i, i+1]=\alpha $ for $0\leq i < m_j$, \item $\delta_j[i, \mathcal{H}]= 1-\alpha$ for $0\leq i<m_j$, \item $\delta_j[\mathcal{H}, 0]=1-2^{-\Omega(m)}$, and $\delta_j[\mathcal{H}, \mathcal{D}]=2^{-\Omega(m)}$, where $m=n^{1-\gamma}$, \item $\delta_j[m_j,m_j]=\delta_j[\mathcal{D}, \mathcal{D}]=1$, \item $\delta_j[x,y]=0$ for all other pairs $x,y\in \mathcal{X}_j$. \end{compactenum} \end{compactenum}
See \cref{fig:mc_coupling} for an illustration. \input{fig_mc_coupling} The Markov chain $\mathcal{M}_j$ has two absorbing states, $\mathcal{D}$ and $m_j$. We denote by $\rho_j$ the probability that a random walk on $\mathcal{M}_j$ starting from state $0$ will be absorbed in state $m_j$. The following lemma lowerbounds $\rho_j$, and comes from a straightforward analysis of $\mathcal{M}_j$.
\begin{lemma}\label{lem:mc_absorb} For all Markov chains $\mathcal{M}_j$, we have $\rho_j=1-2^{-\Omega(m)}$, where $m=n^{1-\gamma}$. \end{lemma}
\iffullproofs \begin{proof} Given a state $a\in \mathcal{X}_j$, we denote by $x_a$ the probability that a random walk starting from state $a$ will be absorbed in state $m_j$. Then $\rho_j=x_0$, and we have the following linear system
\begin{align*}
x_{\mathcal{H}} =& \delta[\mathcal{H},0] \cdot x_0 = \left(1-2^{-\Omega(n^{1-\gamma})}\right)\cdot x_0\\
x_{i} =& \delta[i, \mathcal{H}] \cdot x_{\mathcal{H}} + \delta[i,i+1] \cdot x_{i+1} = (1-\alpha)\cdot x_{\mathcal{H}} + \alpha\cdot x_{i+1} & \text{ for } 0\leq i<m_j\\
x_{m_j} =& 1 \end{align*} and thus
\begin{align*} &x_{\mathcal{H}} = \left(1-2^{-\Omega(n^{1-\gamma})}\right)\cdot\left(x_{\mathcal{H}}\cdot (1-\alpha)\cdot \sum_{i=0}^{m_j-1} \alpha^i +\alpha^{m_j}\right)\\ \implies & x_{\mathcal{H}} = \left(1-2^{-\Omega(n^{1-\gamma})}\right)\cdot \left(x_{\mathcal{H}}\cdot \left(1-\alpha^{m_j}\right) + \alpha^{m_j}\right)\\ \implies& x_{\mathcal{H}}\left(1-\left(1-2^{-\Omega(n^{1-\gamma})}\right)\cdot \left(1-\alpha^{m_j}\right)\right) = \left(1-2^{-\Omega(n^{1-\gamma})}\right)\cdot \alpha^{m_j} \numberthis\label{eq:xzero} \end{align*}
Note that \[ 1-\left(1-2^{-\Omega(n^{1-\gamma})}\right)\cdot \left(1-\alpha^{m_j}\right) \leq 2^{-\Omega(n^{1-\gamma})} + \alpha^{m_j}\ ; \] and from \cref{eq:xzero} we obtain \[ x_{\mathcal{H}}\geq \left(1-2^{-\Omega(n^{1-\gamma})}\right)\cdot \frac{\alpha^{m_j}}{2^{-\Omega(n^{1-\gamma})} + \alpha^{m_j}} \geq 1-2^{-\Omega(n^{1-\gamma})} - \frac{2^{-\Omega(n^{1-\gamma})}}{\alpha^{m_j}} = 1-2^{-\Omega(n^{1-\gamma})}\cdot \left(1+(n^3+1)^{m_j}\right) = 1-2^{-\Omega(n^{1-\gamma})}\ ; \] since $\alpha=1/(n^3+1)$, by construction $m_j\leq n^{1-c}$, and $\gamma=\varepsilon/3<\varepsilon/2=c$, so that $(n^3+1)^{m_j}\leq 2^{O(n^{1-c}\log n)}=2^{o(n^{1-\gamma})}$. Finally, we have that $\rho_j=x_0\geq x_{\mathcal{H}}\geq 1-2^{-\Omega(n^{1-\gamma})}$, as desired. \end{proof} \fi
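The absorption probability can also be sanity-checked numerically on a small instance of $\mathcal{M}_j$. The sketch below is illustrative only: the exponentially small escape probability $2^{-\Omega(m)}$ is modeled by a fixed constant $z$, and $n$, $m_j$ are arbitrary small values; it solves the absorption system from the proof by value iteration and compares against the closed form $x_0=\alpha^{m_j}/\left(1-(1-z)(1-\alpha^{m_j})\right)$ implied by that linear system.

```python
# Absorption probability of the chain M_j (illustrative parameters).
# States: 0..m (mutants in the branch, hub all-mutant), 'H', 'D'.
# z models the 2^{-Omega(m)} escape probability; alpha = 1/(n^3 + 1).

n, m = 4, 2
alpha = 1.0 / (n**3 + 1)
z = 2.0**-20

# Value iteration for x[a] = Pr[absorbed in state m | start in a].
x = {a: 0.0 for a in range(m + 1)}
x.update({'H': 0.0, 'D': 0.0})
x[m] = 1.0
for _ in range(100000):
    x['H'] = (1.0 - z) * x[0]          # delta[H,0] = 1-z, delta[H,D] = z
    for i in range(m):
        x[i] = (1.0 - alpha) * x['H'] + alpha * x[i + 1]

closed_form = alpha**m / (1.0 - (1.0 - z) * (1.0 - alpha**m))
assert abs(x[0] - closed_form) < 1e-9
assert x[0] > 0.99  # absorption in state m_j is near-certain
```

As in the lemma, absorption in $m_j$ is near-certain precisely because $z\ll \alpha^{m_j}$ here, mirroring the asymptotic condition $\gamma<c$.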
Given a configuration $\ov{\State}_k$ of the modified Moran process, we denote by $\overline{\rho}_j(\ov{\State}_k)$ the probability that the process reaches a configuration $\ov{\State}_t$ with $\mathcal{H} \cup V_j\subseteq \ov{\State}_t$. The following lemma states that the probability $\overline{\rho}_j(\ov{\State}_{\ell})$ is underapproximated by the probability $\rho_j$. The proof is by a coupling argument, which ensures that \begin{compactenum} \item every time the run on $\mathcal{M}_j$ is on a state $0\leq i\leq m_j$, there are at least $i$ mutants placed on $T_{m_j}^{y_j}$, and \item every time the modified Moran process transitions to a configuration where the hub is heterogeneous (i.e., we reach a configuration $\mathsf{X}$ with $\mathcal{H}\setminus\mathsf{X}\neq \emptyset$), the run on $\mathcal{M}_j$ transitions to state $\mathcal{H}$. \end{compactenum}
\begin{lemma}\label{lem:coupling} Consider any configuration $\ov{\State}_{\ell}$ of the modified Moran process, with $\mathcal{H}\subseteq \ov{\State}_{\ell}$, and any tree $T_{m_j}^{y_j}$. We have $\overline{\rho}_j(\ov{\State}_{\ell})\geq \rho_j$. \end{lemma} \begin{proof} The proof is by coupling the modified Moran process and the Markov chain $\mathcal{M}_j$. To do so, we let the modified Moran process execute, and use certain events of that process as the source of randomness for a run in $\mathcal{M}_j$. We describe the coupling process at a high level. Intuitively, every time the run on $\mathcal{M}_j$ is on a state $0\leq i\leq m_j$, there are at least $i$ mutants placed on $T_{m_j}^{y_j}$. Additionally, every time the modified Moran process transitions to a configuration where the hub is heterogeneous (i.e., we reach a configuration $\mathsf{X}$ with $\mathcal{H}\setminus\mathsf{X}\neq \emptyset$), then the run on $\mathcal{M}_j$ transitions to state $\mathcal{H}$. Finally, if the modified Moran process ends on a configuration $\mathsf{X}=\emptyset$, then the run on $\mathcal{M}_j$ gets absorbed to state $\mathcal{D}$. The coupling works based on the following two facts. \begin{compactenum} \item For every state $0<i<m_j$, the ratio $\delta_j[i,i+1]/\delta_j[i,\mathcal{H}]$ is upperbounded by the ratio of the probabilities of increasing the number of mutant vertices in $T_{m_j}^{y_j}$ by one, over decreasing that number and having the hub invaded by a resident. Indeed, we have \[ \frac{\delta_j[i,i+1]}{\delta_j[i,\mathcal{H}]}=\frac{\alpha}{1-\alpha}=\frac{1}{n^3}\ ; \] while for every mutant vertex $x$ of $G$ with at least one resident neighbor, the probability that a resident neighbor of $x$ becomes mutant in the next step of the modified Moran process, over the probability that $x$ becomes resident, is at least $1/n^3$ (this ratio is at least $1/n^2$ for every resident neighbor $y$ of $x$, and there are at most $n$ such resident neighbors). 
The same holds for the ratio $\delta_j[0,1]/\delta_j[0,\mathcal{H}]$. \item The probability of transitioning from state $\mathcal{H}$ to state $0$ is upperbounded by the probability that, once the mutant hub gets invaded by a resident, the modified Moran process reaches a configuration where the hub consists of only mutants (using \cref{lem:sink_independent} and \cref{lem:sink_leak}). \end{compactenum} \end{proof}
The following lemma captures the probability that the modified Moran process reaches fixation whp. That is, whp a configuration $\ov{\State}_{i}$ is reached which contains all vertices of $G^{\mathsf{w}}$. The proof is based on repeated applications of \cref{lem:coupling} and \cref{lem:mc_absorb}, one for each subtree $T_{m_j}^{y_j}$.
\begin{lemma}\label{lem:modified_process_fix} Consider that at some time $t^*$ the configuration of the Moran process on $G^{\mathsf{w}}$ is $\mathsf{X}_{t^*}$ with $\mathcal{H}\subseteq \mathsf{X}_{t^*}$. Then, a subsequent configuration $\mathsf{X}_t$ with $\mathsf{X}_t=V$ is reached with probability at least $1-2^{-\Omega(m)}$, where $m=n^{1-\gamma}$, i.e., given event $\mathcal{E}_3$, the event $\mathcal{E}_4$ happens whp. \end{lemma} \ifshortproofs \begin{proof} It suffices to consider the modified Moran process on $G$ starting from configuration $\ov{\State}_{t^*}=\mathcal{H}$, and show that whp we eventually reach a configuration $\ov{\State}_t=V$. First note that if there exists a configuration $\ov{\State}_{t'}$ with $V_i\subseteq \ov{\State}_{t'}$ for any $V_i$, then for all $t''\geq {t'}$ with $\ov{\State}_{t''}\neq \emptyset$ we have $V_i\subseteq \ov{\State}_{t''}$. Let $t_1=t^*$. Since $\mathcal{H}\subseteq \ov{\State}_{t_1}$, by \cref{lem:coupling}, with probability $\overline{\rho}_1(\ov{\State}_{t_1}) \geq \rho_1$ there exists a time $t_2\geq t_1$ such that $\mathcal{H} \cup V_1\subseteq \ov{\State}_{t_2}$. Inductively, given the configuration $\ov{\State}_{t_i}$, with probability $\overline{\rho}_i(\ov{\State}_{t_i}) \geq \rho_i$ there exists a time $t_{i+1}\geq t_i$ such that $\mathcal{H}\cup V_1\cup\dots \cup V_i\subseteq \ov{\State}_{t_{i+1}}$. Since $V=\mathcal{H}\cup (\bigcup_{i=1}^k V_i)$ and $k\leq n$, we obtain that the probability that the mutants get fixed is at least \[ \prod_{i=1}^{k} \rho_i \geq \left(1-2^{-\Omega(m)}\right)^n=1-2^{-\Omega(m)}\ ; \] as by \cref{lem:mc_absorb} we have that $\rho_i=1-2^{-\Omega(m)}$ for all $i$. The desired result follows. \end{proof} \fi \iffullproofs \begin{proof} It suffices to consider the modified Moran process on $G$ starting from configuration $\ov{\State}_{t^*}=\mathcal{H}$, and show that whp we eventually reach a configuration $\ov{\State}_t=V$. 
First note that if there exists a configuration $\ov{\State}_{t'}$ with $V_i\subseteq \ov{\State}_{t'}$ for any $V_i$, then for all $t''\geq {t'}$ with $\ov{\State}_{t''}\neq \emptyset$ we have $V_i\subseteq \ov{\State}_{t''}$. Let $t_1=t^*$. Since $\mathcal{H}\subseteq \ov{\State}_{t_1}$, by \cref{lem:coupling}, with probability $\overline{\rho}_1(\ov{\State}_{t_1}) \geq \rho_1$ there exists a time $t_2\geq t_1$ such that $\mathcal{H} \cup V_1\subseteq \ov{\State}_{t_2}$. Inductively, given the configuration $\ov{\State}_{t_i}$, with probability $\overline{\rho}_i(\ov{\State}_{t_i}) \geq \rho_i$ there exists a time $t_{i+1}\geq t_i$ such that $\mathcal{H}\cup V_1\cup\dots \cup V_i\subseteq \ov{\State}_{t_{i+1}}$. Since $V=\mathcal{H}\cup (\bigcup_{i=1}^k V_i)$ and $k\leq n$, we obtain \[ \Probr{\ov{\State}_{\infty}=V} \geq \prod_{i=1}^{k} \rho_i = \prod_{i=1}^{k} \left(1-2^{-\Omega(n^{1-\gamma})}\right) \geq \left(1-2^{-\Omega(n^{1-\gamma})}\right)^n=1-2^{-\Omega(m)}\ ; \] as by \cref{lem:mc_absorb} we have that $\rho_i=1-2^{-\Omega(m)}$ for all $i$. The desired result follows. \end{proof} \fi
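The step $\left(1-2^{-\Omega(m)}\right)^n=1-2^{-\Omega(m)}$ used in the proof above is an instance of Bernoulli's inequality: writing $x=2^{-\Omega(n^{1-\gamma})}$ for the per-branch failure probability, we have
\[
\left(1-x\right)^{n}\ \geq\ 1-n\cdot x\ =\ 1-2^{\log_2 n}\cdot 2^{-\Omega(n^{1-\gamma})}\ =\ 1-2^{-\Omega(n^{1-\gamma})}\ ,
\]
since $\log_2 n=o(n^{1-\gamma})$ for every fixed $\gamma<1$.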
\subsubsection{Main Positive Result}
We are now ready to prove the main theorem of this section. First, combining \cref{lem:initialization}, \cref{lem:ancestor_mutant}, \cref{lem:sink_mutant} and \cref{lem:modified_process_fix}, we obtain that if $r>1$, then the mutants fixate in $G_n$ whp.
\begin{lemma}\label{lem:amplifiers} For any fixed $\varepsilon >0$, for any graph $G_n$ of $n$ vertices and diameter $\mathrm{diam}(G_n)\leq n^{1-\varepsilon}$, there exists a weight function $\mathsf{w}$ such that for all $r>1$, we have $\mathsf{\rho}(G^{\mathsf{w}}_n,r,\mathsf{U})=1-O(n^{-\varepsilon/3})$ and $\mathsf{\rho}(G^{\mathsf{w}}_n,r,\mathsf{T})=1-O(n^{-\varepsilon/3})$. \end{lemma}
\begin{comment} \ifshortproofs \begin{proof} We present the result for the uniform and temperature initialization below. \begin{compactitem} \item For uniform initialization, the initial mutant is placed on a vertex $u\not\in \mathcal{H}$ with probability \[
\sum_{u\not\in \mathcal{H}}\frac{1}{n}=\frac{|V\setminus \mathcal{H}|}{n} = \frac{n-n^{1-\gamma}}{n} = 1-\frac{n^{1-\gamma}}{n}=1-O(n^{-\varepsilon/3})\ ; \] since by definition $\gamma=\varepsilon/3$. By \cref{lem:sink_mutant}, a configuration $\mathsf{X}_{t^*}$ with $\mathcal{H}\subseteq \mathsf{X}_{t^*}$ is reached with probability $1-O(n^{-1/3})$. Since the fixation probability in the Moran process is underapproximated by the fixation probability in the modified Moran process, and by \cref{lem:modified_process_fix} the later probability is at least $1-2^{-\Omega(n^{1-\gamma})}$, we reach $\mathsf{\rho}(G^{\mathsf{w}}_n,r,\mathsf{U})=1-O(n^{-1/3})$.
\item For temperature initialization, it suffices to show that the initial mutant is placed on a vertex $u\not\in\mathcal{H}$ with probability $1-O(n^{-\varepsilon/3})$, as the rest of the proof is similar to the first part. Indeed, for any vertex $u\not\in \mathcal{H}$, we have \[ \sum_{v\in \mathsf{Nh}(u)\setminus\{u\}}\mathsf{w}(u,v) \leq \sum_{v\in \mathsf{Nh}(u)\setminus\{u\}} 2^{-n} = 2^{-\Omega(n)}\ ; \] whereas since $\mathrm{diam}(G)\leq n^{1-\varepsilon}$ we have \[ \mathsf{w}(u,u) = n^{-2\cdot \lambda(u)} \geq n^{-2\cdot \mathrm{diam}(G)} \geq n^{-O(n^{1-\varepsilon})}\ . \] Let $A=\mathsf{w}(u,u)$ and $B=\sum_{v\in \mathsf{Nh}(u)\setminus\{u\}}\mathsf{w}(u,v)$, and we have \[ \frac{\mathsf{w}(u,u)}{\mathsf{w}(u)} = \frac{A}{A + B} = 1-\frac{B}{A+B} = 1-2^{-\Omega(n)}\ . \] Then the desired event happens with probability at least \begin{align*}
\sum_{u\not \in \mathcal{H}} \frac{\mathsf{T}(u)}{n} = &\frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \sum_{v\in \mathsf{Nh}(u)} \frac{\mathsf{w}(u,v)}{ \mathsf{w}(v)} \geq \frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \frac{\mathsf{w}(u,u)}{\mathsf{w}(u)} \geq \frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \left(1-2^{-\Omega(n)}\right) \\ =& 1-O(n^{-\varepsilon/3}) \end{align*} since $\gamma=\varepsilon/3$. The desired result follows. \end{compactitem} \end{proof} \fi
\iffullproofs \begin{proof} We examine the uniform and temperature initialization schemes separately. \begin{compactitem} \item For uniform initialization, the initial mutant is placed on a vertex $u\not\in \mathcal{H}$ with probability \[
\sum_{u\not\in \mathcal{H}}\frac{1}{n}=\frac{|V\setminus \mathcal{H}|}{n} = \frac{n-n^{1-\gamma}}{n} = 1-\frac{n^{1-\gamma}}{n}=1-O(n^{-\varepsilon/3})\ ; \]
since $\gamma=\varepsilon/3$. By \cref{lem:sink_mutant}, a configuration $\mathsf{X}_{t^*}$ with $\mathcal{H}\in \mathsf{X}_{t^*}$ is reached with probability $1-O(n^{-1/3})$. Since $\Probr{\mathsf{X}_{\infty}=V|t^{*}<\infty}\geq \Probr{\ov{\State}_{\infty}=V}$, and by \cref{lem:modified_process_fix} we have $\Probr{\ov{\State}_{\infty}=V}=1-2^{-\Omega(n^{1-\gamma})}$, we reach $\mathsf{f}^\Uniform_r(G_n^{\mathsf{w}})=1-O(n^{-1/3})$.
\item For temperature initialization, it suffices to show that the initial mutant is placed on a vertex $u\not\in\mathcal{H}$ with probability $1-O(n^{-\varepsilon/3})$, as the rest of the proof is similar to the first part. Indeed, for any vertex $u\not\in \mathcal{H}$, we have \[ \sum_{v\in \mathsf{Nh}(u)\setminus\{u\}}\mathsf{w}(u,v) \leq \sum_{v\in \mathsf{Nh}(u)\setminus\{u\}} 2^{-n} = 2^{-\Omega(n)}\ ; \] whereas since $\mathrm{diam}(G)\leq n^{1-\varepsilon}$ we have \[ \mathsf{w}(u,u) = n^{-2\cdot \lambda(u)} \geq n^{-2\cdot \mathrm{diam}(G)} \geq n^{-O(n^{1-\varepsilon})}\ . \] Let $A=\mathsf{w}(u,u)$ and $B=\sum_{v\in \mathsf{Nh}(u)\setminus\{u\}}\mathsf{w}(u,v)$, and we have \[ \frac{\mathsf{w}(u,u)}{\mathsf{w}(u)} = \frac{A}{A + B} = 1-\frac{B}{A+B} =1-\frac{2^{-\Omega(n)}}{n^{-O(n^{1-\varepsilon})}+2^{-\Omega(n)}} \leq 1-\frac{2^{-\Omega(n)}}{n^{-O(n^{1-\varepsilon})}} = 1-2^{-\Omega(n)}\ . \] Then the desired event happens with probability at least \begin{align*} \sum_{u\not\in \mathcal{H}}\ProbT{\mathsf{X}_0=\{u\}} =& \sum_{u\not \in \mathcal{H}} \frac{\mathsf{T}(u)}{n} = \frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \sum_{v\in \mathsf{Nh}(u)} \frac{\mathsf{w}(u,v)}{ \mathsf{w}(v)} \geq \frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \frac{\mathsf{w}(u,u)}{\mathsf{w}(u)} \geq \frac{1}{n} \cdot \sum_{u\not \in \mathcal{H}} \left(1-2^{-\Omega(n)}\right) \\
=& \frac{|V\setminus \mathcal{H}|}{n}\cdot \left(1-2^{-\Omega(n)}\right) = \frac{n-n^{1-\gamma}}{n}\cdot \left(1-2^{-\Omega(n)}\right) = (1-n^{-\gamma})\cdot \left(1-2^{-\Omega(n)}\right)\\ =& 1-O(n^{-\varepsilon/3}) \end{align*} since $\gamma=\varepsilon/3$. The desired result follows.
\end{compactitem} \end{proof} \fi \end{comment}
It now remains to show that if $r<1$, then the mutants go extinct whp. This is a direct consequence of the following lemma, which states that for any $r\geq 1$, the fixation probability of a mutant with relative fitness $1/r$ is upperbounded by one minus the fixation probability of a mutant with relative fitness $r$, in the same population.
\begin{lemma}\label{lem:uniform_suppress_residents} For any graph $G_n$ and any weight function $\mathsf{w}$, for all $r\geq 1$, we have that $\mathsf{\rho}(G^{\mathsf{w}}_n,1/r,\mathsf{U}) \leq 1-\mathsf{\rho}(G^{\mathsf{w}}_n,r,\mathsf{U})$. \end{lemma} \begin{proof} Let $\sigma$ be any irreflexive permutation of $V$ (i.e., $\sigma(u)\neq u$ for all $u\in V$), and observe that for every vertex $u$, the probability that a mutant of fitness $1/r$ arising at $u$ fixates in $G_n$ is upperbounded by one minus the probability that a mutant of fitness $r$ arising at $\sigma(u)$ fixates in $G_n$. We have \begin{align*} \mathsf{\rho}(G^{\mathsf{w}}_n,1/r,\mathsf{U}) =& \frac{1}{n}\sum_{u}\mathsf{\rho}(G^{\mathsf{w}}_n,1/r,u)\\ \leq& \frac{1}{n}\cdot \sum_{u} (1- \mathsf{\rho}(G^{\mathsf{w}}_n,r,\sigma(u)))\\ =&1- \frac{1}{n}\cdot \sum_{u}\mathsf{\rho}(G^{\mathsf{w}}_n,r,\sigma(u))\\ =& 1-\mathsf{\rho}(G^{\mathsf{w}}_n,r,\mathsf{U})\ , \end{align*} where the last equality holds since $\sigma$ is a permutation of $V$. \end{proof}
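As a sanity check of the fitness-inversion bound (illustrative, outside the formal development): on the unweighted complete graph $K_n$ the Moran fixation probability has the classical closed form $\rho(r)=\frac{1-1/r}{1-1/r^{n}}$, and the inequality $\rho(1/r)\leq 1-\rho(r)$ for $r\geq 1$ can be checked directly.

```python
# Classical Moran fixation probability on the complete (well-mixed)
# graph K_n; used only to illustrate rho(1/r) <= 1 - rho(r) for r >= 1.

def moran_fixation(r, n):
    if r == 1.0:
        return 1.0 / n  # neutral mutant
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r**n)

for n in (3, 10, 50):
    for r in (1.0, 1.1, 2.0, 10.0):
        assert moran_fixation(1.0 / r, n) <= 1.0 - moran_fixation(r, n) + 1e-12
```

The lemma generalizes this well-mixed intuition to arbitrary weighted graphs via the permutation argument.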
A direct consequence of the above lemma is that under uniform initialization, for any graph family where the fixation probability of advantageous mutants ($r>1$) approaches $1$, the fixation probability of disadvantageous mutants ($r<1$) approaches zero. Since under our weight function $\mathsf{w}$ temperature initialization coincides with uniform initialization whp, \cref{lem:amplifiers} and \cref{lem:uniform_suppress_residents} lead to the following theorem, which is our positive result.
\begin{theorem}\label{them:amplifiers} Let $\varepsilon>0$ and $n_0>0$ be any two fixed constants, and consider any sequence of unweighted, undirected graphs $(G_n)_{n>0}$ such that $\mathrm{diam}(G_n)\leq n^{1-\varepsilon}$ for all $n>n_0$. There exists a sequence of weight functions $(\mathsf{w}_n)_{n>0}$ such that the graph family $\mathcal{G}=(G_n^{\mathsf{w}_n})$ is a (i)~strong uniform, (ii)~strong temperature, and (iii)~strong convex amplifier. \end{theorem}
{
}
\end{document}
\begin{document}
\maketitle
\begin{abstract} In this paper, we first introduce the notion of double Satake diagrams for compact symmetric triads. In terms of this notion, we give an alternative proof for the classification theorem for compact symmetric triads, which was originally given by Toshihiko Matsuki. Secondly, we introduce the notion of canonical forms for compact symmetric triads, and prove the existence of canonical forms for compact simple symmetric triads. We also give some properties of canonical forms. \end{abstract}
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction}
A \textit{compact symmetric triad} is a triple $(G,\theta_{1},\theta_{2})$ which consists of a compact connected semisimple Lie group $G$ and two involutions $\theta_{1}$ and $\theta_{2}$ on it. The study of compact symmetric triads is motivated by the geometry of \textit{Hermann actions}. If we denote by $K_{i}$ the identity component of the fixed point subgroup of $G$ for $\theta_{i}$, $i=1,2$, then $G/K_{i}$ is a compact Riemannian symmetric space. The natural isometric action of $K_{2}$ on $G/K_{1}$ is called the Hermann action of $K_{2}$ on $G/K_{1}$. In the case when $\theta_{1}=\theta_{2}$, we have $K_{1}=K_{2}$, and the Hermann action is nothing but the isotropy action on $G/K_{1}$. It is known that Hermann actions have a geometrically good property, the so-called hyperpolarity (\cite{HPTT}). In general, an isometric action of a compact connected Lie group on a Riemannian manifold is called \textit{hyperpolar}, if there exists a connected closed flat submanifold that meets all orbits orthogonally. Such a submanifold is called a \textit{section} or a \textit{canonical form} of the action. It is known that any section becomes a totally geodesic submanifold. The classification of hyperpolar actions on compact Riemannian symmetric spaces was given by Kollross (\cite{Kollross}). By his classification, most hyperpolar actions on compact Riemannian symmetric spaces are given by Hermann actions. It is expected that a further development of the theory for compact symmetric triads promotes a precise understanding of Hermann actions and their orbits.
In this paper, we first study the classification theory for compact symmetric triads. Matsuki (\cite{Matsuki97}) introduced a non-trivial equivalence relation $\sim$ on compact symmetric triads (Definition \ref{dfm:cst_sim}). Roughly speaking, if two compact symmetric triads are isomorphic with respect to $\sim$, then their Hermann actions are essentially the same. Our concern is to classify the local isomorphism classes of compact symmetric triads. For this, we will generalize the method of classifying compact symmetric pairs due to Araki (\cite{Araki}). In fact, he obtained the local isomorphism classes of compact symmetric pairs in terms of Satake diagrams. Then, we introduce the notion of double Satake diagrams as a generalization of Satake diagrams (Definition \ref{dfn:2satake}). The equivalence relation $\sim$ induces a natural equivalence relation on double Satake diagrams. In fact, the local isomorphism class of a compact symmetric triad determines the equivalence class of a double Satake diagram, and this correspondence becomes bijective (Theorem \ref{thm:cst_dsatake_sim}, Lemma \ref{lem:cstdsatabij}). By using these results we obtain the classification of the local isomorphism classes of compact symmetric triads; namely, the classification of double Satake diagrams (Theorem \ref{thm:dsatake_classify}) yields that of the local isomorphism classes of compact symmetric triads (Corollary \ref{cor:cst_classify}). Our classification is listed in Table \ref{table:rank_ord}. In addition, for each isomorphism class of compact symmetric triads, we can give a method to determine its rank and order by means of the corresponding double Satake diagram; these data are also given in the same table. Although the original classification of compact symmetric triads was given by Matsuki (\cite{Matsuki97}), such data are an advantage of our classification. 
Our motivation for giving the alternative proof comes from the study not only of Hermann actions but also of the classification of noncompact symmetric pairs in terms of the theory of compact symmetric triads (\cite{BIS}). The results of this paper play an important role in the forthcoming paper \cite{BIS2}.
Next we study canonical forms for compact symmetric triads. Intuitively, for an isomorphism class $[(G,\theta_{1},\theta_{2})]$, a canonical form is defined as a representative of $[(G,\theta_{1},\theta_{2})]$ which has the simplest structure in $[(G,\theta_{1},\theta_{2})]$. Our precise definition of the canonical form of $[(G,\theta_{1},\theta_{2})]$ is given in Definition \ref{dfn:cst_can}. We prove the existence of canonical forms in the case when $G$ is simple (Theorem \ref{thm:cst_exist_can}). As mentioned above, if two compact symmetric triads are isomorphic to each other, then so are their Hermann actions. Nevertheless, there can be a difference in tractability between their Hermann actions, so it is necessary to choose a suitable representative in the isomorphism class. This is why the study of canonical forms of compact symmetric triads is significant. For example, in the case when $[(G,\theta_{1},\theta_{2})]$ is commutable, its canonical form is given by a commutative compact symmetric triad $(G,\theta_{1}',\theta_{2}')\sim(G,\theta_{1},\theta_{2})$. The second author (\cite{Ikawa}) developed a systematic method to study orbits of Hermann actions for such $(G,\theta_{1}',\theta_{2}')$. By applying his method, many mathematicians contributed to the study of commutative Hermann actions (for example, \cite{Ikawa3}, \cite{ITT}, \cite{O}, \cite{Ohno}, \cite{OSU}). On the other hand, studies of Hermann actions in the non-commutable case can be found in \cite{GT} and \cite{Ohno21}, and they were based on the classification. As a further direction, one should construct a unified theory for the geometry of Hermann actions whether or not $(G,\theta_{1},\theta_{2})$ is commutative, and we expect that canonical forms will play an important role in such a study.
The organization of this paper is as follows:
In Section \ref{sec:cst_fundamental}, we recall the notion of compact symmetric triads. We define the rank and the order for a compact symmetric triad and for its isomorphism class. The hyperpolarity of a Hermann action is also explained.
In Section \ref{sec:pre}, we recall that the local isomorphism classes of compact symmetric pairs correspond to $\sigma$-systems and Satake diagrams.
In Section \ref{sec:2satake}, we first introduce the notion of double $\sigma$-systems. We also define an equivalence relation on double $\sigma$-systems based on the equivalence relation $\sim$ (Subsection \ref{sec:2sigroot}). Next, we introduce the notion of double Satake diagrams and their isomorphism classes for double $\sigma$-systems (Subsection \ref{sec:sub2satake}).
In Section \ref{sec:cst_dsatake}, we introduce the notion of double Satake diagrams for compact symmetric triads. We prove Theorems \ref{thm:cst_dsatake_sim} and \ref{thm:dsatake_classify} mentioned above. We determine the rank and the order for the isomorphism classes of compact simple symmetric triads based on the classification. Furthermore, we give special isomorphisms for compact simple symmetric triads and determine which compact simple symmetric triads are self-dual.
In Section \ref{sec:cst_cf}, we introduce the notion of the canonicality for compact symmetric triads (Subsection \ref{sec:cst_can}), and prove its existence (Theorem \ref{thm:cst_exist_can} in Subsection \ref{sec:cst_can_exist}). We also give some properties for the rank and the order of a canonical form (Subsection \ref{sec:rankorder}).
\section{Compact symmetric triads}\label{sec:cst_fundamental}
\subsection{Compact symmetric triads and Hermann actions}
Let $G$ be a compact connected semisimple Lie group, and $\theta_{1}, \theta_{2}$ be two involutions of $G$. We call the triplet $(G,\theta_{1},\theta_{2})$ a \textit{compact symmetric triad}. Denote by $K_{i}$ ($i=1$, $2$) the identity component of the fixed point subgroup of $\theta_{i}$ in $G$. Then $G/K_{i}$ is a compact Riemannian symmetric space with respect to the Riemannian metric induced from a bi-invariant Riemannian metric on $G$. The natural isometric action of $K_{2}$ on $G/K_{1}$ is called the \textit{Hermann action}.
In what follows, we show that the Hermann action is a hyperpolar action; in particular, we give its section. We also recall an equivalence relation on compact symmetric triads which was introduced by Matsuki (\cite{Matsuki}). We then observe that if two compact symmetric triads are isomorphic in his sense, then their Hermann actions are essentially the same.
Let $\mathfrak{g}$ be the Lie algebra of $G$ and $\exp:\mathfrak{g}\to G$ denote the exponential map. For each $i=1,$ $2$, the differential $d\theta_{i}$ of $\theta_{i}$ at the identity element of $G$ gives an involution of $\mathfrak{g}$, which we denote by the same symbol $\theta_{i}$ if there is no confusion. Let $\mathfrak{g}=\mathfrak{k}_{1}\oplus\mathfrak{m}_{1}=\mathfrak{k}_{2}\oplus \mathfrak{m}_{2}$ be the canonical decompositions of $\mathfrak{g}$ for $\theta_{1}$ and $\theta_{2}$, respectively. We set $\mathfrak{g}^{\theta_{1}\theta_{2}}=\{X\in\mathfrak{g}\mid \theta_{1}\theta_{2}(X)=X\}=\{X\in\mathfrak{g}\mid\theta_{1}(X)=\theta_{2}(X)\}=\mathfrak{g}^{\theta_{2}\theta_{1}}$. Then $\mathfrak{g}^{\theta_{1}\theta_{2}}$ becomes a $(\theta_{1},\theta_{2})$-invariant Lie subalgebra of $\mathfrak{g}$. Clearly, $\theta_{1}=\theta_{2}$ holds on $\mathfrak{g}^{\theta_{1}\theta_{2}}$. The canonical decomposition of $\mathfrak{g}^{\theta_{1}\theta_{2}}$
for $\theta_{1}|_{\mathfrak{g}^{\theta_{1}\theta_{2}}}$ is given by \[ \mathfrak{g}^{\theta_{1}\theta_{2}}=(\mathfrak{k}_{1}\cap\mathfrak{k}_{2})\oplus(\mathfrak{m}_{1}\cap\mathfrak{m}_{2}). \] Let $\mathfrak{a}$ be a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$. It is known that $A:=\exp(\mathfrak{a})$ is closed in $G$. Hence, $A$ becomes a compact connected abelian Lie subgroup of $G$, that is, a toral subgroup. The following theorem was proved by Hermann. \begin{thm}[\cite{Hermann}]\label{thm:Hermann} Retain the notation as above. Then, \[ G=K_{1}AK_{2} =K_{2}AK_{1}. \] \end{thm}
Let $\pi_{1}:G\to G/K_{1}$ denote the natural projection. Then $\pi_{1}(A)$ is a flat totally geodesic submanifold of $G/K_{1}$. It follows from Theorem \ref{thm:Hermann} that each $K_{2}$-orbit intersects $\pi_{1}(A)$. In fact, it is shown that $\pi_{1}(A)$ gives a section of the Hermann action $K_{2}$ on $G/K_{1}$. Hence this action is hyperpolar.
Let $\mathrm{Aut}(G)$ denote the group of automorphisms on $G$ and $\mathrm{Int}(G)$ the group of inner automorphisms on $G$. Then $\mathrm{Int}(G)$ is a normal subgroup of $\mathrm{Aut}(G)$. Matsuki (\cite{Matsuki}) introduced the following equivalence relation on compact symmetric triads.
\begin{dfn}\label{dfm:cst_sim} Two compact symmetric triads $(G,\theta_{1},\theta_{2})$ and $(G,\theta_{1}',\theta_{2}')$ are \textit{isomorphic}, which we write $(G,\theta_{1},\theta_{2})\sim(G,\theta_{1}',\theta_{2}')$, if there exist $\varphi\in\mathrm{Aut}(G)$ and $\tau\in\mathrm{Int}(G)$ satisfying the following relations: \begin{equation}\label{eqn:dfn_cstsim} \theta_{1}'=\varphi\theta_{1}\varphi^{-1},\quad \theta_{2}'=\tau\varphi\theta_{2}\varphi^{-1}\tau^{-1}. \end{equation} \end{dfn} Geometrically, $(G,\theta_{1},\theta_{2})\sim(G,\theta_{1}',\theta_{2}')$ means that their Hermann actions are isomorphic. Indeed, we obtain an isomorphism between them as follows. Assume that there exist $\varphi\in\mathrm{Aut}(G)$ and $\tau\in\mathrm{Int}(G)$ as in \eqref{eqn:dfn_cstsim}. Denote by $K_{i}'$ the identity component of the fixed point subgroup of $\theta_{i}'$ in $G$. We obtain an isometric isomorphism $\Phi: G/K_{1}\to G/K_{1}'$ by \[ \Phi: G/K_{1}\to G/K_{1}';~gK_{1}\mapsto \varphi(g)K_{1}'. \] Then the $K_{2}$-action on $G/K_{1}$ is isomorphic to the $\tau^{-1}(K_{2}')$-action on $G/K_{1}'$ via $\Phi$.
The Lie subgroups $K_{1}\cap K_{2}$ and $G^{\theta_{1}\theta_{2}}:=\{g\in G\mid \theta_{1}\theta_{2} (g)=g\}$ of $G$ play a fundamental role in the study of $(G,\theta_{1},\theta_{2})$. However, their Lie group structures and even their Lie algebra structures depend on the choice of a representative of its isomorphism class $[(G,\theta_{1},\theta_{2})]$. We expect that these structures are determined by taking a representative with `easy structure' in $[(G,\theta_{1},\theta_{2})]$. We will introduce such a representative as a canonical form in Section \ref{sec:cst_cf}, which is one of the main subjects of the present paper.
\subsection{Rank and order for compact symmetric triads}
We define the rank of a compact symmetric triad $(G,\theta_{1},\theta_{2})$ as the dimension of a maximal abelian subspace $\mathfrak{a}$ of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$, which we write $\mathrm{rank}(G,\theta_{1},\theta_{2})$. Its well-definedness follows from the $\mathrm{Ad}(K_{1}\cap K_{2})$-conjugacy of maximal abelian subspaces of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$, where $\mathrm{Ad}$ denotes the adjoint representation of $G$. Since the tangent space of $\pi_{1}(A)$ gives the normal space of a principal orbit of $K_{2}$ on $G/K_{1}$, the rank is equal to the cohomogeneity of the action. Hence, for two compact symmetric triads $(G,\theta_{1},\theta_{2})\sim(G,\theta_{1}',\theta_{2}')$, the cohomogeneity of the $K_{2}$-action on $G/K_{1}$ is equal to that of the $K_{2}'$-action on $G/K_{1}'$. Namely, we have the following lemma.
\begin{lem}\label{lem:cst_rank} Assume that two compact symmetric triads $(G,\theta_{1},\theta_{2})$ and $(G,\theta_{1}',\theta_{2}')$ satisfy $(G,\theta_{1},\theta_{2})\sim (G,\theta_{1}',\theta_{2}')$. Then we have $\mathrm{rank}(G,\theta_{1},\theta_{2})=\mathrm{rank}(G,\theta_{1}',\theta_{2}')$. Hence we define the rank of the isomorphism class $[(G,\theta_{1},\theta_{2})]$ of $(G,\theta_{1},\theta_{2})$ as that of $(G,\theta_{1},\theta_{2})$, which we write $\mathrm{rank}[(G,\theta_{1},\theta_{2})]$. \end{lem}
Let $(G,\theta_{1},\theta_{2})$ be a compact symmetric triad and $\mathfrak{a}$ a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$. Then, for each $i=1$, $2$, there exists a maximal abelian subspace $\mathfrak{a}_{i}$ of $\mathfrak{m}_{i}$ containing $\mathfrak{a}$. However, $[\mathfrak{a}_{1},\mathfrak{a}_{2}]=\{0\}$ does not hold in general. In the case when $[\mathfrak{a}_{1},\mathfrak{a}_{2}]\neq\{0\}$, there exists no maximal abelian subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ containing both $\mathfrak{a}_{1}$ and $\mathfrak{a}_{2}$. On the other hand, retaking $(G,\theta_{1},\theta_{2})$ in its isomorphism class if necessary, the following lemma holds.
\begin{lem}\label{lem:lemm2.5} There exists a compact symmetric triad $(G,\theta_{1},\theta_{2}')\sim(G,\theta_{1},\theta_{2})$ and a maximal abelian subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ satisfying the following conditions$:$ \begin{enumerate} \item[$(1)$] $\mathfrak{a}_{1}:=\mathfrak{t}\cap\mathfrak{m}_{1}$ and $\mathfrak{a}_{2}':=\mathfrak{t}\cap\mathfrak{m}_{2}'$ are maximal abelian subspaces of $\mathfrak{m}_{1}$ and $\mathfrak{m}_{2}'$, respectively. In particular, $\mathfrak{t}$ is $(\theta_{1},\theta_{2}')$-invariant. \item[$(2)$] $\mathfrak{a}:=\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2}')$ is a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}'$. \end{enumerate} \end{lem}
\begin{proof} Let $\mathfrak{a}$ be a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$. Let $\mathfrak{a}_{i}$ be a maximal abelian subspace of $\mathfrak{m}_{i}$ containing $\mathfrak{a}$. We define a closed subgroup $N(\mathfrak{a})$ of $G$ by $N(\mathfrak{a}):=\{g\in G\mid \mathrm{Ad}(g)\mathfrak{a}=\mathfrak{a}\}$. Since $G$ is compact, so is $N(\mathfrak{a})$. Then the identity component $N(\mathfrak{a})_{0}$ of $N(\mathfrak{a})$ becomes a compact connected Lie group. Furthermore, its Lie algebra $\mathfrak{n}(\mathfrak{a})$ has the following expression: \[ \mathfrak{n}(\mathfrak{a})=\{X\in\mathfrak{g}\mid [X,\mathfrak{a}]\subset\mathfrak{a}\}. \] Since $[\mathfrak{a}_{i},\mathfrak{a}]\subset[\mathfrak{a}_{i},\mathfrak{a}_{i}]=\{0\}$ holds, we have $\mathfrak{a}_{i}\subset \mathfrak{n}(\mathfrak{a})$. Hence $\mathfrak{a}_{i}$ is an abelian subalgebra of $\mathfrak{n}(\mathfrak{a})$. Extending $\mathfrak{a}_{1}$ and $\mathfrak{a}_{2}$ to maximal abelian subalgebras of $\mathfrak{n}(\mathfrak{a})$ and using the conjugacy of maximal abelian subalgebras of $\mathfrak{n}(\mathfrak{a})$ under $N(\mathfrak{a})_{0}$, we obtain $g\in N(\mathfrak{a})_{0}$ satisfying $[\mathfrak{a}_{1}, \mathrm{Ad}(g)\mathfrak{a}_{2}]=\{0\}$. We set $\theta_{2}':=\tau_{g}\theta_{2}\tau_{g}^{-1}$, where $\tau_{g}$ denotes the inner automorphism of $G$ defined by $\tau_{g}(h)=ghg^{-1}$. Then we have $d\theta_{2}'=\mathrm{Ad}(g)d\theta_{2}\mathrm{Ad}(g)^{-1}$ and $\mathfrak{m}_{2}'=\mathrm{Ad}(g)\mathfrak{m}_{2}$. From the inclusion $\mathfrak{a}\subset\mathfrak{a}_{2}$, we get $\mathfrak{a}=\mathrm{Ad}(g)\mathfrak{a}\subset \mathrm{Ad}(g)\mathfrak{a}_{2}=:\mathfrak{a}_{2}'\subset \mathfrak{m}_{2}'$. This yields $\mathfrak{a}\subset \mathfrak{a}_{1}\cap\mathfrak{a}_{2}'$. In addition, by the maximality of $\mathfrak{a}$, we obtain $\mathfrak{a}=\mathfrak{a}_{1}\cap\mathfrak{a}_{2}'$. Since $[\mathfrak{a}_{1},\mathfrak{a}_{2}']=\{0\}$ holds, there exists a maximal abelian subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ containing $\mathfrak{a}_{1}$ and $\mathfrak{a}_{2}'$. This $\mathfrak{t}$ satisfies the two conditions in the statement. \end{proof}
We denote by $\mathrm{rank}(G)$ the rank of the compact connected semisimple Lie group $G$, and by $\mathrm{rank}(G,\theta_{i})$ the rank of the compact symmetric pair $(G,\theta_{i})$. From Lemma \ref{lem:lemm2.5} we have the following corollary.
\begin{cor} If $\mathrm{rank}(G)=\mathrm{rank}(G,\theta_{1})$, then $\mathrm{rank}(G,\theta_{2})=\mathrm{rank}(G,\theta_{1},\theta_{2})$ holds. \end{cor}
\begin{proof} The statement of this corollary is independent of the choice of a representative in the isomorphism class $[(G,\theta_{1},\theta_{2})]$. By Lemma \ref{lem:lemm2.5} we may assume that there exists a maximal abelian subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{m}_{i}$ ($i=1,2$) is a maximal abelian subspace of $\mathfrak{m}_{i}$, and that $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$ is a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$. Then we have $\mathfrak{t}\subset \mathfrak{m}_{1}$ by $\mathrm{rank}(G)=\mathrm{rank}(G,\theta_{1})$. This implies $\mathrm{rank}(G,\theta_{1},\theta_{2}) =\dim(\mathfrak{t}\cap\mathfrak{m}_{1}\cap\mathfrak{m}_{2}) =\dim(\mathfrak{t}\cap\mathfrak{m}_{2})=\mathrm{rank}(G,\theta_{2})$. Thus, we have completed the proof. \end{proof}
We will define the order for the isomorphism class $[(G,\theta_{1},\theta_{2})]$. For the representative $(G,\theta_{1},\theta_{2})$, the order of the composition $\theta_{1}\theta_{2}$, which we write $\mathrm{ord}(\theta_{1}\theta_{2})$, is defined as the smallest positive integer $k$ satisfying $(\theta_{1}\theta_{2})^{k}=1$. If there is no such $k$, then $\theta_{1}\theta_{2}$ has infinite order, which we write $\mathrm{ord}(\theta_{1}\theta_{2})=\infty$. The value of $\mathrm{ord}(\theta_{1}\theta_{2})$ depends on the choice of a representative of $[(G,\theta_{1},\theta_{2})]$. We define the \textit{order} of the isomorphism class $[(G,\theta_{1},\theta_{2})]$ by \[ \mathrm{ord}[(G,\theta_{1},\theta_{2})] :=\min\{\mathrm{ord}(\theta_{1}'\theta_{2}')\mid (G,\theta_{1}',\theta_{2}')\sim(G,\theta_{1},\theta_{2})\}\in \mathbb{N}\cup\{\infty\}. \] It will be shown later that $[(G,\theta_{1},\theta_{2})]$ has finite order in the case when $G$ is simple.
Here, we observe compact symmetric triads with low order. For two involutions $\theta_{1}$ and $\theta_{2}$ on $G$, we write $\theta_{1}\sim\theta_{2}$ if there exists $\tau\in\mathrm{Int}(G)$ satisfying $\theta_{2}=\tau\theta_{1}\tau^{-1}$. A compact symmetric triad $(G,\theta_{1},\theta_{2})$ satisfying $\theta_{1}\sim\theta_{2}$ is isomorphic to $(G,\theta_{1},\theta_{1})$. Hence, for a compact symmetric triad $(G,\theta_{1},\theta_{2})$, the order of $[(G,\theta_{1},\theta_{2})]$ is equal to one if and only if $\theta_{1}\sim\theta_{2}$ holds. The Hermann action induced from such $(G,\theta_{1},\theta_{2})$ is nothing but the isotropy action of $K_{1}$ on $G/K_{1}$. In other words, $(G,\theta_{1},\theta_{2})$ with $\theta_{1}\not\sim\theta_{2}$ gives a nontrivial Hermann action. The isotropy actions of compact symmetric spaces have been studied by many geometers. Therefore we will mainly focus our attention on compact symmetric triads $(G,\theta_{1},\theta_{2})$ with $\theta_{1}\not\sim\theta_{2}$. A compact symmetric triad $(G,\theta_{1},\theta_{2})$ is said to be \textit{commutative}, if $\theta_{1}\theta_{2}=\theta_{2}\theta_{1}$ holds. Clearly, $\mathrm{ord}[(G,\theta_{1},\theta_{2})] \leq 2$ holds if and only if $[(G,\theta_{1},\theta_{2})]$ is commutable, i.e., there exists a commutative compact symmetric triad $(G,\theta_{1}',\theta_{2}')\sim(G,\theta_{1},\theta_{2})$.
The following proposition gives a sufficient condition that the order of $[(G,\theta_{1},\theta_{2})]$ is equal to one.
\begin{pro}\label{pro:ordn_n+1_sim} Let $(G,\theta_{1},\theta_{2})$ and $(G,\theta_{1}',\theta_{2}')$ be two compact symmetric triads satisfying $(G,\theta_{1},\theta_{2})\sim(G,\theta_{1}',\theta_{2}')$. Assume that there exists $n\in\mathbb{N}$ such that $(\theta_{1}\theta_{2})^{n}=1$ and $(\theta_{1}'\theta_{2}')^{n+1}=1$. Then we have $\theta_{1}\sim\theta_{2}$. In particular, $\theta_{1}'\sim\theta_{2}'$ holds. \end{pro}
\begin{proof} Without loss of generality we may assume that $\theta_{1}'=\theta_{1}$ and $\theta_{2}'=\tau\theta_{2}\tau^{-1}$ for some $\tau\in\mathrm{Int}(G)$. Then, we have $(\theta_{1}\theta_{2}')^{n+1} =(\theta_{1}\tau\theta_{2}\tau^{-1})^{n}\theta_{1}(\tau\theta_{2}\tau^{-1})$. Hence it is sufficient to show that there exists $\tau_{1}\in\mathrm{Int}(G)$ such that $(\theta_{1}\tau\theta_{2}\tau^{-1})^{n}\theta_{1}=\tau_{1}\theta_{2}\tau_{1}^{-1}$.
Let us consider the case when $n$ is even: $n=2m$ for some $m\in\mathbb{N}$. Then we have $(\theta_{1}\tau\theta_{2}\tau^{-1})^{n}\theta_{1}=(\theta_{1}\tau\theta_{2}\tau^{-1})^{m}\theta_{1}(\theta_{1}\tau\theta_{2}\tau^{-1})^{m}$. Let $g\in G$ satisfy $\tau=\tau_{g}$. Since $\theta_{i}\tau_{g}=\tau_{\theta_{i}(g)}\theta_{i}$ holds, there exists $\tau_{1}\in\mathrm{Int}(G)$ satisfying $(\theta_{1}\tau\theta_{2}\tau^{-1})^{m}=\tau_{1}(\theta_{1}\theta_{2})^{m}$. From $(\theta_{1}\theta_{2})^{2m}=1$, we obtain \[ (\theta_{1}\tau\theta_{2}\tau^{-1})^{n}\theta_{1} =\tau_{1}(\theta_{1}\theta_{2})^{m}\theta_{1}(\theta_{2}\theta_{1})^{m}\tau_{1}^{-1} =\tau_{1}(\theta_{1}\theta_{2})^{2m}\theta_{1}\tau_{1}^{-1}=\tau_{1}\theta_{1}\tau_{1}^{-1}. \] In the case when $n$ is odd, a similar argument shows that there exists $\tau_{1}\in\mathrm{Int}(G)$ such that $(\theta_{1}\tau\theta_{2}\tau^{-1})^{n}\theta_{1}=\tau_{1}\theta_{2}\tau_{1}^{-1}$. Thus, we have completed the proof. \end{proof}
Here, let us consider the case when the rank of $[(G,\theta_{1},\theta_{2})]$ is equal to zero. Then $K_{2}$ acts transitively on $G/K_{1}$ by Theorem \ref{thm:Hermann}. Furthermore, the value of the order of $\theta_{1}'\theta_{2}'$ is independent of the choice of a representative $(G,\theta_{1}',\theta_{2}')$ in $[(G,\theta_{1},\theta_{2})]$, namely, the following proposition holds.
\begin{pro}\label{pro:rank0_ordconst} Assume that the rank of $[(G,\theta_{1},\theta_{2})]$ is equal to zero. If $(G,\theta_{1},\theta_{2})\sim (G,\theta_{1}',\theta_{2}')$, then $\mathrm{ord}(\theta_{1}\theta_{2})=\mathrm{ord}(\theta_{1}'\theta_{2}')$ holds. \end{pro}
\begin{proof} It follows from $(G,\theta_{1},\theta_{2})\sim (G,\theta_{1}',\theta_{2}')$ that there exist $\varphi\in\mathrm{Aut}(G)$ and $g\in G$ satisfying the following relation: \begin{equation}\label{eqn:theta'theta} \theta_{1}'=\varphi\theta_{1}\varphi^{-1},\quad \theta_{2}'=\tau_{g}\varphi\theta_{2}\varphi^{-1}\tau_{g}^{-1}, \end{equation} where $\tau_{g}$ is the inner automorphism of $G$ defined by $\tau_{g}(h)=ghg^{-1}$ ($h\in G$). By applying Theorem \ref{thm:Hermann} to $(G,\theta_{1}',\theta_{2}')$, we have $k_{1}\in K_{1}'$ and $k_{2}\in K_{2}'$ satisfying $g=k_{2}k_{1}$. Here, we have used the assumption $\mathrm{rank}[(G,\theta_{1},\theta_{2})]=0$. Then from \eqref{eqn:theta'theta} we obtain \begin{align*} \theta_{1}' &=\tau_{k_{1}}\theta_{1}'\tau_{k_{1}}^{-1} =(\tau_{k_{1}}\varphi)\theta_{1}(\tau_{k_{1}}\varphi)^{-1},\\ \theta_{2}'&=\tau_{k_{2}}^{-1}\theta_{2}'\tau_{k_{2}} =\tau_{k_{2}}^{-1}(\tau_{k_{2}}\tau_{k_{1}}\varphi\theta_{2}\varphi^{-1}\tau_{k_{1}}^{-1}\tau_{k_{2}}^{-1})\tau_{k_{2}} =(\tau_{k_{1}}\varphi)\theta_{2}(\tau_{k_{1}}\varphi)^{-1}. \end{align*} This implies $\mathrm{ord}(\theta_{1}\theta_{2})=\mathrm{ord}(\theta_{1}'\theta_{2}')$. Thus we have completed the proof. \end{proof}
\subsection{Compact symmetric triads at the Lie algebra level}
In the present paper, we will also treat compact symmetric triads at the Lie algebra level. A compact symmetric triad at the Lie algebra level is a triplet $(\mathfrak{g},\theta_{1},\theta_{2})$ which consists of a compact semisimple Lie algebra $\mathfrak{g}$ and two involutions $\theta_{1}$ and $\theta_{2}$ of $\mathfrak{g}$. Let $\mathrm{Aut}(\mathfrak{g})$ denote the group of automorphisms on $\mathfrak{g}$ and $\mathrm{Int}(\mathfrak{g})$ the group of inner automorphisms on $\mathfrak{g}$. Then $\mathrm{Int}(\mathfrak{g})$ is a normal subgroup of $\mathrm{Aut}(\mathfrak{g})$. Let us define the Lie algebra version of Definition \ref{dfm:cst_sim} as follows.
\begin{dfn}\label{dfn:cst_local_equiv} Two compact symmetric triads $(\mathfrak{g},\theta_{1},\theta_{2})$ and $(\mathfrak{g},\theta_{1}',\theta_{2}')$ are \textit{isomorphic}, which we write $(\mathfrak{g},\theta_{1},\theta_{2})\sim(\mathfrak{g},\theta_{1}',\theta_{2}')$, if there exist $\varphi\in\mathrm{Aut}(\mathfrak{g})$ and $\tau\in\mathrm{Int}(\mathfrak{g})$ satisfying the following relations: \[ \theta_{1}'=\varphi\theta_{1}\varphi^{-1},\quad \theta_{2}'=\tau\varphi\theta_{2}\varphi^{-1}\tau^{-1}. \] \end{dfn}
Let us consider a correspondence between the Lie group level and the Lie algebra level for compact symmetric triads. For a compact symmetric triad $(G,\theta_{1},\theta_{2})$ at the Lie group level, $(\mathfrak{g},d\theta_{1},d\theta_{2})$ gives a compact symmetric triad at the Lie algebra level. Then $(\mathfrak{g},d\theta_{1},d\theta_{2})$ is called the compact symmetric triad at the Lie algebra level associated with $(G,\theta_{1},\theta_{2})$. We find that for two compact symmetric triads $(G,\theta_{1},\theta_{2})$ and $(G,\theta_{1}',\theta_{2}')$, $(G,\theta_{1},\theta_{2})\sim(G,\theta_{1}',\theta_{2}')$ implies $(\mathfrak{g},d\theta_{1},d\theta_{2})\sim(\mathfrak{g},d\theta_{1}',d\theta_{2}')$. We say that two compact symmetric triads $(G,\theta_{1},\theta_{2})$ and $(G,\theta_{1}',\theta_{2}')$ are \textit{locally isomorphic}, if $(\mathfrak{g},d\theta_{1},d\theta_{2})\sim(\mathfrak{g},d\theta_{1}',d\theta_{2}')$ holds.
Conversely, for a compact symmetric triad $(\mathfrak{g},\theta_{1},\theta_{2})$ at the Lie algebra level, there exists a compact symmetric triad $(G, \Theta_{1}, \Theta_{2})$ satisfying $(\mathfrak{g}, d\Theta_{1}, d\Theta_{2})=(\mathfrak{g},\theta_{1},\theta_{2})$, where $\mathfrak{g}$ is the Lie algebra of $G$. Indeed, let $G$ denote the universal covering group of a connected Lie group with Lie algebra $\mathfrak{g}$, or the adjoint group of $\mathfrak{g}$. Then we can take $\Theta_{i}$ to be the extension of $\theta_{i}$ to an involution of $G$.
Let $(\mathfrak{g},\theta_{1},\theta_{2})$ be a compact symmetric triad at the Lie algebra level. The rank of $(\mathfrak{g},\theta_{1},\theta_{2})$ is defined as the dimension of a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$, and the rank of $[(\mathfrak{g},\theta_{1},\theta_{2})]$ as that of $(\mathfrak{g},\theta_{1},\theta_{2})$. The orders of $(\mathfrak{g},\theta_{1},\theta_{2})$ and of its isomorphism class $[(\mathfrak{g},\theta_{1},\theta_{2})]$ are defined in the same way as at the Lie group level. We denote by $\mathrm{rank}[(\mathfrak{g},\theta_{1},\theta_{2})]$ the rank of $[(\mathfrak{g},\theta_{1},\theta_{2})]$, and by $\mathrm{ord}[(\mathfrak{g},\theta_{1},\theta_{2})]$ the order of $[(\mathfrak{g},\theta_{1},\theta_{2})]$.
By definition we have the following lemma. \begin{lem}\label{lem:rank_welldef} For any compact symmetric triad $(G,\theta_{1},\theta_{2})$, we have $\mathrm{rank}[(G,\theta_{1},\theta_{2})]=\mathrm{rank}[(\mathfrak{g},d\theta_{1},d\theta_{2})]$. \end{lem} In order to state a similar result for the order, we prepare the following lemma.
\begin{lem}\label{lem:autoGg} An automorphism $\theta$ of $G$ is the identity transformation on it if and only if so is its differential $d\theta$ on $\mathfrak{g}$. \end{lem}
We omit the details of the proof. The following lemma follows immediately from Lemma \ref{lem:autoGg}.
\begin{lem} Let $(G,\theta_{1},\theta_{2})$ be a compact symmetric triad. Then $\mathrm{ord}(\theta_{1}\theta_{2})=\mathrm{ord}(d\theta_{1}d\theta_{2})$ holds. In particular, we have $\mathrm{ord}[(G,\theta_{1},\theta_{2})]=\mathrm{ord}[(\mathfrak{g},d\theta_{1},d\theta_{2})]$. \end{lem}
\begin{lem}\label{lem:t1simt2>dt1dt2=1} Let $(G,\theta_{1},\theta_{2})$ be a compact symmetric triad. Assume that there exists a maximal abelian subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{m}_{i}$ and $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$ are maximal abelian subspaces of $\mathfrak{m}_{i}$ and $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$, respectively. Then $\theta_{1}\sim\theta_{2}$
implies $\mathrm{ord}(d\theta_{1}d\theta_{2}|_{\mathfrak{t}})=1$. \end{lem}
\begin{proof} It follows from $\theta_{1}\sim\theta_{2}$ that $d\theta_{2}=\mathrm{Ad}(g)d\theta_{1}\mathrm{Ad}(g)^{-1}$ for some $g\in G$. By Theorem \ref{thm:Hermann} there exist $k_{i}\in K_{i}$ and $H\in\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$ such that $g=k_{2}\exp(H)k_{1}$ holds. Here, we have used the maximality of $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$ in $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$. Then we have \begin{align*} \mathrm{Ad}(g)d\theta_{1}\mathrm{Ad}(g)^{-1} &=\mathrm{Ad}(k_{2})e^{\mathrm{ad}(H)}\mathrm{Ad}(k_{1})d\theta_{1}\mathrm{Ad}(k_{1})^{-1}e^{-\mathrm{ad}(H)}\mathrm{Ad}(k_{2})^{-1}\\ &=\mathrm{Ad}(k_{2})e^{\mathrm{ad}(H)}d\theta_{1}e^{-\mathrm{ad}(H)}\mathrm{Ad}(k_{2})^{-1}, \end{align*} from which $d\theta_{2}=e^{\mathrm{ad}(H)}d\theta_{1}e^{-\mathrm{ad}(H)}$ holds. Since the automorphism $e^{\mathrm{ad}(H)}$ gives the identity transformation on $\mathfrak{t}$, we obtain
$d\theta_{2}|_{\mathfrak{t}}
=d\theta_{1}|_{\mathfrak{t}}$. This yields $\mathrm{ord}(d\theta_{1}d\theta_{2}|_{\mathfrak{t}})=1$.
\end{proof}
\section{$\sigma$-systems, Satake diagrams and compact symmetric pairs}\label{sec:pre}
In this section, we recall the notions of $\sigma$-systems, Satake diagrams and compact symmetric pairs. We refer to the references \cite{Helgason} and \cite{Warner}, for example. The contents of this section will be generalized in Sections \ref{sec:2satake} and \ref{sec:cst_dsatake}.
\subsection{Root systems}
We begin with recalling the definition of a root system. Let $\mathfrak{t}$ be a finite dimensional real vector space. Fix an inner product $\INN{\,}{\,}$ on $\mathfrak{t}$. We write $\|\alpha\|=\INN{\alpha}{\alpha}^{1/2}$ as the norm of $\alpha\in\mathfrak{t}$. For $\alpha\in\mathfrak{t}-\{0\}$ we define a linear isometry $w_{\alpha}\in O(\mathfrak{t})$ by \begin{equation*}\label{eqn:wWeyl}
w_{\alpha}(H)=H-2\dfrac{\INN{\alpha}{H}}{\|\alpha\|^{2}}\alpha\quad (H\in\mathfrak{t}). \end{equation*} Then $w_{\alpha}$ satisfies $w_{\alpha}^{2}=1$ and $w_{\alpha}(\alpha)=-\alpha$.
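As a simple illustration of the formula, take $\mathfrak{t}=\mathbb{R}^{2}$ with the standard inner product and $\alpha=(1,0)$. Then \[ w_{\alpha}(x,y)=(x,y)-2\dfrac{\INN{(1,0)}{(x,y)}}{\|(1,0)\|^{2}}(1,0)=(x,y)-2x(1,0)=(-x,y), \] that is, $w_{\alpha}$ is the reflection across the hyperplane orthogonal to $\alpha$; in particular, $w_{\alpha}^{2}=1$ and $w_{\alpha}(\alpha)=-\alpha$ are immediate.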
\begin{dfn} A finite subset $\Delta\subset\mathfrak{t}-\{0\}$ is called a \textit{root system} of $\mathfrak{t}$, if it satisfies the following two conditions: \begin{enumerate} \item $\mathfrak{t}=\mathrm{span}_{\mathbb{R}}(\Delta)$.
\item If $\alpha$ and $\beta$ are in $\Delta$, then $w_{\alpha}(\beta)=\beta-2\dfrac{\INN{\alpha}{\beta}}{\|\alpha\|^{2}}\alpha$ is in $\Delta$, and
$2\dfrac{\langle\alpha ,\beta\rangle}{\|\alpha\|^2}$ is in $\mathbb{Z}$. \end{enumerate} In addition, a root system $\Delta$ is said to be \textit{reduced}, if it satisfies the following condition: \begin{enumerate} \setcounter{enumi}{2} \item If $\alpha$ and $\beta$ are in $\Delta$ with $\beta = m\alpha$, then $m=\pm 1$ holds. \end{enumerate} \end{dfn} A root system $\Delta$ of $\mathfrak{t}$ is said to be reducible if there exist two non-empty subsets $\Delta_{1}$ and $\Delta_{2}$ of $\Delta$ satisfying the following conditions: \[ \Delta=\Delta_{1}\cup\Delta_{2},\quad \Delta_{1}\cap\Delta_{2}=\emptyset,\quad \INN{\Delta_{1}}{\Delta_{2}}=\{0\}. \] Otherwise it is said to be \textit{irreducible}. Any root system is decomposed into irreducible ones, namely, there exist unique irreducible root systems $\Delta_{1},\dotsc,\Delta_{l}$ up to permutation of the indices such that $\Delta=\Delta_{1}\cup \dotsb \cup \Delta_{l}$ and that $\INN{\Delta_{i}}{\Delta_{j}}=\{0\}$ for $1\leq i\neq j\leq l$. This decomposition of $\Delta$ is called the irreducible decomposition of $\Delta$.
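For example, if $\alpha,\beta\in\mathfrak{t}$ satisfy $\|\alpha\|=\|\beta\|$ and $2\INN{\alpha}{\beta}/\|\alpha\|^{2}=-1$, then $\Delta=\{\pm\alpha,\pm\beta,\pm(\alpha+\beta)\}$ is an irreducible reduced root system of $\mathrm{span}_{\mathbb{R}}\{\alpha,\beta\}$ of type $A_{2}$; indeed, \[ w_{\alpha}(\beta)=\beta-2\dfrac{\INN{\alpha}{\beta}}{\|\alpha\|^{2}}\alpha=\beta+\alpha\in\Delta. \] On the other hand, if $\INN{\alpha}{\beta}=0$, then $\Delta=\{\pm\alpha\}\cup\{\pm\beta\}$ is reduced but reducible, with irreducible decomposition $\Delta_{1}=\{\pm\alpha\}$ and $\Delta_{2}=\{\pm\beta\}$, that is, $\Delta$ is of type $A_{1}\times A_{1}$.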
Let $\Delta$ and $\Delta'$ be reduced root systems of $\mathfrak{t}$ and $\mathfrak{t}'$, respectively. It is shown that, if $\Delta$ and $\Delta'$ are irreducible, then, for any linear isomorphism $\varphi:\mathfrak{t}\to\mathfrak{t}'$ satisfying $\varphi(\Delta)=\Delta'$, we have \[ 2\dfrac{\INN{\beta}{\alpha}}{\|\alpha\|^{2}} =2\dfrac{\INN{\varphi(\beta)}{\varphi(\alpha)}}{\|\varphi(\alpha)\|^{2}},\quad \alpha,\beta\in\Delta. \] Based on this observation, we define an isomorphism of root systems as follows: a linear isomorphism $\varphi:\mathfrak{t}\to\mathfrak{t}'$ satisfying $\varphi(\Delta)=\Delta'$ is called an \textit{isomorphism} of root systems between $\Delta$ and $\Delta'$. Two root systems $\Delta$ and $\Delta'$ are \textit{isomorphic}, which we write $\Delta\simeq\Delta'$, if there exists such a $\varphi$. We find that $\simeq$ gives an equivalence relation on the set of root systems.
In the case when $\mathfrak{t}=\mathfrak{t}', \INN{\,}{\,}=\INN{\,}{\,}', \Delta=\Delta'$, an isomorphism $\varphi:\mathfrak{t}\to\mathfrak{t}$ of $\Delta$ is called an automorphism of $\Delta$. Denote by $\mathrm{Aut}(\Delta)$ the group of automorphisms of $\Delta$. It is clear that $\mathrm{Aut}(\Delta)$ is a finite group. The subgroup of $O(\mathfrak{t})$ generated by $\{w_{\alpha}\mid \alpha\in\Delta\}$ is called the \textit{Weyl group} of $\Delta$, which we write $W(\Delta)$. Then $W(\Delta)$ is a normal subgroup of $\mathrm{Aut}(\Delta)$. In particular, $W(\Delta)$ is a finite group.
\subsection{$\sigma$-systems}
Let $\Delta$ be a reduced root system of $\mathfrak{t}$. Let $\sigma:\mathfrak{t}\to\mathfrak{t}$ be an involutive linear isometry with $\sigma(\Delta)=\Delta$, which we call an involution of $\Delta$. Then the pair $(\Delta, \sigma)$ is called a \textit{$\sigma$-system} of $\mathfrak{t}$. If we put $\mathfrak{t}^{\pm\sigma}:=\{H\in\mathfrak{t}\mid \sigma(H)=\pm H\}$, then we have an orthogonal decomposition $\mathfrak{t}=\mathfrak{t}^{\sigma}\oplus\mathfrak{t}^{-\sigma}$ with respect to the inner product $\INN{\;}{\;}$. The rank of $(\Delta,\sigma)$ is defined as the dimension of $\mathfrak{t}^{\sigma}$, which we write $\mathrm{rank}(\Delta,\sigma)$. By definition, we have $\mathrm{rank}(\Delta,\sigma)\leq \mathrm{rank}(\Delta)$. Let $pr:\mathfrak{t}\to\mathfrak{t}^{\sigma}$ denote the orthogonal projection, that is, \[ pr:\mathfrak{t}\to\mathfrak{t}^{\sigma};~ H\mapsto \dfrac{1}{2}(H+\sigma(H)). \] Set $\Delta_{0}:=\{\alpha\in\Delta\mid pr(\alpha)=0\}=\{ \alpha\in\Delta\mid\sigma(\alpha)=-\alpha\}$. Then $\Delta_{0}$ satisfies $\Delta_{0}=-\Delta_{0}$ and $\alpha+\beta\in\Delta_{0}$ for all $\alpha,\beta\in\Delta_{0}$ with $\alpha+\beta\in\Delta$. We call such a subset of $\Delta$ a closed subsystem of $\Delta$. Then $\Delta_{0}$ becomes a root system of $\mathrm{span}_{\mathbb{R}}(\Delta_{0})$.
A $\sigma$-system $(\Delta,\sigma)$ is said to be $\sigma$-reducible if there exist two non-empty $\sigma$-invariant subsets $\Delta_{1}$ and $\Delta_{2}$ of $\Delta$ satisfying the following conditions: \[ \Delta=\Delta_{1}\cup\Delta_{2},\quad \Delta_{1}\cap\Delta_{2}=\emptyset,\quad \INN{\Delta_{1}}{\Delta_{2}}=\{0\}. \] Otherwise it is said to be \textit{$\sigma$-irreducible}. Any $\sigma$-system is decomposed into $\sigma$-irreducible ones, that is, there exist mutually orthogonal, $\sigma$-irreducible $\sigma$-systems $(\Delta_{1},\sigma_{1})$, $\dotsc$, $(\Delta_{l},\sigma_{l})$, unique up to permutation of the indices, such that $\Delta=\Delta_{1}\cup\dotsb\cup\Delta_{l}$ and $\sigma=\sigma_{j}$ holds on $\Delta_{j}$ for each $1\leq j\leq l$. This decomposition is called the $\sigma$-irreducible decomposition of the $\sigma$-system $(\Delta, \sigma)$, which we write \[ (\Delta, \sigma)=(\Delta_{1},\sigma_{1})\cup \dotsb\cup(\Delta_{l},\sigma_{l}). \] It is clear that $(\Delta,\sigma)$ is $\sigma$-irreducible if $\Delta$ is irreducible as a root system. Two $\sigma$-systems $(\Delta,\sigma)$ and $(\Delta',\sigma')$ are said to be isomorphic, which we write $(\Delta,\sigma)\simeq (\Delta',\sigma')$, if there exists an isomorphism $\varphi: \mathfrak{t}\to \mathfrak{t}'$ of root systems satisfying $\sigma'=\varphi\sigma\varphi^{-1}$. We call such $\varphi$ an isomorphism of $\sigma$-systems. Then $\simeq$ gives an equivalence relation on the set of $\sigma$-systems. We find that if $(\Delta,\sigma)\simeq (\Delta',\sigma')$, then their ranks coincide.
\subsection{Normal $\sigma$-systems and their Satake diagrams}\label{sec:NssSd} A $\sigma$-system $(\Delta,\sigma)$ is said to be \textit{normal} if $\sigma(\alpha)-\alpha\notin\Delta$ for all $\alpha\in\Delta$. For a normal $\sigma$-system $(\Delta,\sigma)$, Araki (\cite{Araki}) proved that the set $\{pr(\alpha) \mid \alpha \in \Delta-\Delta_{0}\}=:\Sigma$ becomes a root system of $\mathfrak{t}^{\sigma}$ (see also \cite[Proposition 1.1.3.1]{Warner}), which is called the \textit{restricted root system} of $(\Delta,\sigma)$. Then we have $\mathrm{rank}(\Sigma)=\mathrm{rank}(\Delta,\sigma)$. The equivalence relation $\simeq$ is compatible with the normality of a $\sigma$-system. Namely, if $(\Delta,\sigma)\simeq (\Delta',\sigma')$ and $(\Delta,\sigma)$ is normal, then $(\Delta',\sigma')$ is also normal. In addition, if we denote by $\Sigma'$ the restricted root system of $(\Delta',\sigma')$, then $\Sigma \simeq \Sigma'$ holds as root systems.
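To illustrate these notions, we note the following simple example. Let $\Delta=\{\pm e_{1},\pm e_{2}\}$ be the root system of type $A_{1}\times A_{1}$ in $\mathfrak{t}=\mathbb{R}^{2}$ with the standard inner product, and let $\sigma$ be the involution exchanging $e_{1}$ and $e_{2}$. Since $\sigma(\pm e_{1})\mp e_{1}=\pm(e_{2}-e_{1})\notin\Delta$, and similarly for $\pm e_{2}$, the $\sigma$-system $(\Delta,\sigma)$ is normal. In this case $\Delta_{0}=\emptyset$, $\mathfrak{t}^{\sigma}=\mathbb{R}(e_{1}+e_{2})$ and $pr(\pm e_{1})=pr(\pm e_{2})=\pm\frac{1}{2}(e_{1}+e_{2})$, so that the restricted root system is \[ \Sigma=\Bigl\{\pm\frac{1}{2}(e_{1}+e_{2})\Bigr\}, \] which is of type $A_{1}$. In particular, $\mathrm{rank}(\Delta,\sigma)=1<2=\mathrm{rank}(\Delta)$.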
Now, let us recall the notion of Satake diagrams for normal $\sigma$-systems. Let $(\Delta,\sigma)$ be a normal $\sigma$-system. Let $\Pi$ be a fundamental system of $\Delta$. The positive root system $\Delta^{+}$ for $\Pi$ is described by $\Delta^{+}=\{\sum_{\alpha\in\Pi}m_{\alpha}\alpha\in\Delta\mid m_{\alpha}\in\mathbb{Z}_{\geq 0}\}$, where $\mathbb{Z}_{\geq 0}:=\{m\in\mathbb{Z} \mid m \geq 0\}$. Then $\Pi$ is called a \textit{$\sigma$-fundamental system}, if $\sigma(\alpha)$ is in $\Delta^{+}$ for all $\alpha\in\Delta^{+}-\Delta_{0}$. It is known that a $\sigma$-fundamental system always exists (cf.~\cite[p.~11]{Araki}). The following lemma will be needed later.
\begin{lem}\label{lem:Pi_varphiPi} Let $\Pi$ be a $\sigma$-fundamental system. For any $\varphi\in \mathrm{Aut}(\Delta)$, $\varphi(\Pi)$ is a $(\varphi\sigma\varphi^{-1})$-fundamental system of $\Delta$. \end{lem}
The proof is straightforward and is omitted. Let $\Pi$ be a $\sigma$-fundamental system of $\Delta$. It is known that $\Pi\cap\Delta_{0}=: \Pi_{0}$ is a fundamental system of $\Delta_{0}$ (cf.~\cite[p.~23]{Warner}). Denote by $(\Pi_{0})_{\mathbb{Z}}$ the $\mathbb{Z}$-submodule of $\mathfrak{t}$ generated by $\Pi_{0}$. It follows from \cite[Lemma 1.1.3.2]{Warner} that there exists a unique permutation $p:\Pi-\Pi_{0}\to\Pi-\Pi_{0}$ of order two such that \[ \sigma(\alpha)\equiv p(\alpha)\quad (\operatorname{mod}(\Pi_{0})_{\mathbb{Z}}), \] which is called the \textit{Satake involution} of $(\Delta,\sigma)$ associated with $\Pi$. Then the \textit{Satake diagram} $S=S(\Pi,\Pi_{0},p)$ of $(\Delta,\sigma)$ associated with $\Pi$ is described as follows: In the Dynkin diagram of $\Pi$, every root in $\Pi_{0}$ is represented by a black circle instead of a white circle, and two roots $\alpha,\alpha'\in\Pi-\Pi_{0}$ with $\alpha\neq\alpha'$ are connected by a curved arrow if $p(\alpha)=\alpha'$.
The normal $\sigma$-system $(\Delta,\sigma)$ can be reconstructed from $S(\Pi,\Pi_{0},p)$. The Dynkin diagram of $\Pi$ determines the structures of $\Delta$, $\mathfrak{t}=\mathrm{span}_{\mathbb{R}}(\Pi)$ and $\INN{\,}{\,}$. We write $\Pi=\{\alpha_{1},\dotsc,\alpha_{l}\}$ with $l=\mathrm{rank}(\Delta)$. By renumbering the indices if necessary, we may assume that there exist $l_{1},l_{2}$ with $l_{1}+2l_{2}\leq l$ such that \[\Pi-\Pi_{0}=\{\alpha_{1},\dotsc,\alpha_{l_{1}}, \alpha_{l_{1}+1}, \dotsc, \alpha_{l_{1}+l_{2}}, \alpha_{l_{1}+l_{2}+1},\dotsc, \alpha_{l_{1}+2l_{2}}\}, \] and \[ p(\alpha_{j})=\alpha_{j}~(1\leq j\leq l_{1}),\quad p(\alpha_{l_{1}+j'})=\alpha_{l_{1}+l_{2}+j'}~(1\leq j'\leq l_{2}). \] In particular, $l_{2}$ is equal to the number of arrows in $S(\Pi,\Pi_{0},p)$. Under this assumption the cardinality of $\Pi_{0}$ is equal to $l-(l_{1}+2l_{2})=:l_{0}$. Clearly, $\Pi_{0}=\{\alpha_{l-l_{0}+1},\dotsc,\alpha_{l}\}$ holds. For $1\leq j\leq l_{1}$, the condition $p(\alpha_{j})=\alpha_{j}$ implies that \begin{equation}\label{eqn:satakesigma1} \alpha_{j}-\sigma(\alpha_{j}) \in\sum_{k=l-l_{0}+1}^{l}\mathbb{Z}\alpha_{k}\subset\mathfrak{t}^{-\sigma}. \end{equation} Furthermore, for $1\leq j'\leq l_{2}$, from $p(\alpha_{l_{1}+j'})=\alpha_{l_{1}+l_{2}+j'}$ we have \begin{equation}\label{eqn:satakesigma2} \mathfrak{t}^{-\sigma}\ni \alpha_{l_{1}+j'}-\sigma(\alpha_{l_{1}+j'}) = \alpha_{l_{1}+j'}-\alpha_{l_{1}+l_{2}+j'}+\sum_{k=l-l_{0}+1}^{l}m_{k}\alpha_{k}, \end{equation} for some integers $m_{l-l_{0}+1},\dotsc,m_{l}$. Hence it follows from \eqref{eqn:satakesigma1} and \eqref{eqn:satakesigma2} that the $(-1)$-eigenspace $\mathfrak{t}^{-\sigma}$ of $\sigma$ in $\mathfrak{t}$ has the following description: \[ \mathfrak{t}^{-\sigma} =\sum_{k=1}^{l}\mathbb{R}(\alpha_{k}-\sigma(\alpha_{k})) =\sum_{j'=1}^{l_{2}}\mathbb{R}(\alpha_{l_{1}+j'}-\alpha_{l_{1}+l_{2}+j'}) \oplus \sum_{k=l-l_{0}+1}^{l}\mathbb{R}\alpha_{k}. 
\] In addition, we obtain $\mathfrak{t}^{\sigma}$ as the orthogonal complement of $\mathfrak{t}^{-\sigma}$ in $\mathfrak{t}$. Thus, the action of $\sigma$ on $\Delta$ is reconstructed. In particular, we get $\mathrm{rank}(\Delta,\sigma)= l-(l_{2}+l_{0})=l_{1}+l_{2}$.
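For example, if a Satake diagram has rank $l=4$, no black circles and exactly one arrow, then $l_{0}=0$, $l_{2}=1$ and $l_{1}=l-l_{0}-2l_{2}=2$, so that \[ \mathrm{rank}(\Delta,\sigma)=l_{1}+l_{2}=3. \] This is precisely the situation of the Satake diagram of the pair $(\mathfrak{so}(8),\mathfrak{so}(3)\oplus\mathfrak{so}(5))$ given in Example \ref{ex:so8so3so5_ndsig} below.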
Let us explain that the definition of the Satake diagram of $(\Delta,\sigma)$ is independent of the choice of $\sigma$-fundamental systems. Suppose that $\tilde{\Pi}$ is another $\sigma$-fundamental system of $\Delta$. Set $\tilde{\Pi}_{0}:=\tilde{\Pi}\cap\Delta_{0}$. We denote by $\tilde{p}$ the Satake involution associated with $\tilde{\Pi}$. Then it follows from \cite[Proposition A in Appendix]{Satake} that there exists $w\in W(\Delta)$ satisfying $\tilde{\Pi}=w(\Pi)$ and $w\sigma=\sigma w$. Thus, we have $\tilde{\Pi}_{0}=w(\Pi_{0})$ and $w(p(\alpha))=\tilde{p}(w(\alpha))$ for all $\alpha\in\Pi-\Pi_{0}$. Then we write \begin{equation*}\label{eqn:satake_eq} S(\Pi,\Pi_{0},p)=S(\tilde{\Pi},\tilde{\Pi}_{0},\tilde{p}). \end{equation*}
\begin{dfn}\label{dfn:satake_simeq} Let $(\Delta,\sigma)$ and $(\Delta',\sigma')$ be two normal $\sigma$-systems, and $S(\Pi,\Pi_{0},p)$ and $S(\Pi',\Pi_{0}',p')$ denote the Satake diagrams of $(\Delta,\sigma)$ and $(\Delta',\sigma')$, respectively. We write $S(\Pi,\Pi_{0},p)\simeq S(\Pi',\Pi'_{0},p')$ if there exists an isomorphism $\psi:\Pi\to\Pi'$ of Dynkin diagrams such that $\psi(\Pi_{0})=\Pi_{0}'$ and $\psi(p(\alpha))=p'(\psi(\alpha))$ for all $\alpha \in \Pi-\Pi_{0}$. We call such $\psi$ an isomorphism of Satake diagrams. Then $\simeq$ gives an equivalence relation for Satake diagrams. \end{dfn}
By the reconstruction of $(\Delta, \sigma)$ from $S(\Pi,\Pi_{0},p)$ we have the following lemma:
\begin{lem}\label{lem:sigma_satake} Retain the notation as in Definition $\ref{dfn:satake_simeq}$. Then, $(\Delta,\sigma)\simeq (\Delta',\sigma')$ if and only if $S(\Pi,\Pi_{0},p)\simeq S(\Pi',\Pi_{0}',p')$. In particular, any isomorphism of Satake diagrams can be extended to an isomorphism of $\sigma$-systems. \end{lem}
\subsection{Compact symmetric pairs and their Satake diagrams}
Let $G$ be a compact connected semisimple Lie group, and $\theta$ be an involution of $G$. We call the pair $(G,\theta)$ a compact symmetric pair. Denote by $\mathfrak{g}$ the Lie algebra of $G$. Fix an $\mathrm{ad}(\mathfrak{g})$-invariant inner product $\INN{\,}{\,}$ on $\mathfrak{g}$. The differential $d\theta$ of $\theta$ at the identity element in $G$ gives an involution of $\mathfrak{g}$, which we denote by the same symbol $\theta$ if there is no confusion. Let $\mathfrak{g}=\mathfrak{g}^{\theta}\oplus\mathfrak{g}^{-\theta}=:\mathfrak{k}\oplus\mathfrak{m}$ be the canonical decomposition of $\mathfrak{g}$ for $\theta$. Take a maximal abelian subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{m}$ is a maximal abelian subspace of $\mathfrak{m}$. This implies that $\mathfrak{t}$ is $\theta$-invariant.
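As a simple illustration of this setup, let $G=SU(2)$ and $\theta(g)=\overline{g}$ (entrywise complex conjugation). Then $\mathfrak{g}=\mathfrak{su}(2)$, $\mathfrak{k}=\mathfrak{so}(2)$ consists of the real matrices in $\mathfrak{su}(2)$, and $\mathfrak{m}$ consists of the purely imaginary ones, namely the matrices $\sqrt{-1}S$ with $S$ real symmetric and traceless. The diagonal subalgebra \[ \mathfrak{t}=\{\mathrm{diag}(\sqrt{-1}t,-\sqrt{-1}t)\mid t\in\mathbb{R}\} \] is a maximal abelian subalgebra of $\mathfrak{g}$ contained in $\mathfrak{m}$, so that $\mathfrak{t}\cap\mathfrak{m}=\mathfrak{t}$ is a maximal abelian subspace of $\mathfrak{m}$.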
Let $\Delta(\subset \mathfrak{t})$ denote the root system of $\mathfrak{g}$
with respect to $\mathfrak{t}$. Since $\theta$ induces an automorphism of $\Delta$, the pair $(\Delta, \sigma):=(\Delta, -\theta|_{\mathfrak{t}})$ gives a $\sigma$-system of $\mathfrak{t}$. It follows from \cite[Lemma 1.1.3.6]{Warner} that $(\Delta,\sigma)$ is normal. We call it the $\sigma$-system of $(G, \theta)$ (or $(\mathfrak{g},\theta)$) for $\mathfrak{t}$. We will show that the $\sigma$-system $(\Delta,\sigma)$
is uniquely determined from $(G,\theta)$ up to isomorphism. Let $\mathfrak{t}'$ be another maximal abelian subalgebra of $\mathfrak{g}$
such that $\mathfrak{t}'\cap\mathfrak{m}$ is a maximal abelian subspace of $\mathfrak{m}$. Denote by $(\Delta',\sigma'):=(\Delta',-\theta|_{\mathfrak{t}'})$ the $\sigma$-system of $(G,\theta)$ for $\mathfrak{t}'$. Let $K$ be the identity component of the fixed point subgroup of $\theta$ in $G$. From the $\mathrm{Ad}(K)$-conjugacy of maximal abelian subspaces of $\mathfrak{m}$, there exists $k\in K$ satisfying $\mathfrak{t}'\cap\mathfrak{m}=\mathrm{Ad}(k)(\mathfrak{t}\cap\mathfrak{m})$. In addition, there also exists $k'\in K$ satisfying the following relations (cf.~\cite[Proposition 5]{Sugiura}): \[ \mathrm{Ad}(k')(H)=H \quad (H\in\mathfrak{t}'\cap\mathfrak{m}),\quad \mathrm{Ad}(k')(\mathrm{Ad}(k)(\mathfrak{t}\cap\mathfrak{k}))=\mathfrak{t}'\cap\mathfrak{k}. \] If we put $\tilde{k}:=k'k\in K$, then we have $\mathrm{Ad}(\tilde{k})(\mathfrak{t}\cap\mathfrak{m})=\mathfrak{t}'\cap\mathfrak{m}$ and $\mathrm{Ad}(\tilde{k})(\mathfrak{t}\cap\mathfrak{k})=\mathfrak{t}'\cap\mathfrak{k}$. This yields $\mathrm{Ad}(\tilde{k})(\mathfrak{t})=\mathfrak{t}'$. Thus, we obtain \begin{equation*}\label{eqn:csp_sigma} (\Delta',\sigma')
=(\mathrm{Ad}(\tilde{k})(\Delta),-(\mathrm{Ad}(\tilde{k})\theta\mathrm{Ad}(\tilde{k})^{-1})|_{\mathrm{Ad}(\tilde{k})(\mathfrak{t})})
\simeq (\Delta, -\theta|_{\mathfrak{t}})=(\Delta,\sigma). \end{equation*} We define the Satake diagram of $(G,\theta)$ as that of $(\Delta,\sigma)$, which is uniquely determined up to isomorphisms due to Lemma \ref{lem:sigma_satake}.
Two compact symmetric pairs $(G,\theta)$ and $(G,\theta')$ are said to be isomorphic, which we write $(G,\theta)\simeq (G,\theta')$, if there exists $\varphi\in\mathrm{Aut}(G)$ satisfying $\theta'=\varphi\theta\varphi^{-1}$. Then their Satake diagrams are isomorphic in the sense of Definition \ref{dfn:satake_simeq}. In order to consider the converse (see Theorem \ref{thm:cps_sigma_satake_equiv} for the precise statement), we will recall the result for compact symmetric pairs due to Araki (\cite{Araki}).
Let $(G,\theta)$ and $(G,\theta')$ be two compact symmetric pairs. Let $\mathfrak{t}$ be a maximal abelian subalgebra of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{m}$ and $\mathfrak{t}\cap\mathfrak{m}'$ are maximal abelian subspaces of $\mathfrak{m}$ and $\mathfrak{m}'$, respectively, where $\mathfrak{m}':=\mathfrak{g}^{-\theta'}$. The following theorem states that, if the differentials $d\theta$ and $d\theta'$ coincide with each other on $\mathfrak{t}$, then $\theta$ and $\theta'$ coincide on the whole $G$ up to $\mathrm{Int}(G)$-conjugacy.
\begin{thm}[Araki]\label{thm:araki} Retain the notation above. Assume that
$d\theta|_{\mathfrak{t}}=d\theta'|_{\mathfrak{t}}$ holds. Then, there exists $H\in\mathfrak{t}\cap\mathfrak{m}$ satisfying $d\theta'=e^{\mathrm{ad}(H)}d\theta e^{-\mathrm{ad}(H)}$. In addition, $\theta'=\tau_{h}\theta\tau_{h}^{-1}$ holds for $h=\exp(H)$, where $\tau_{h}$ denotes the inner automorphism of $G$ defined by $g\mapsto hgh^{-1}$. \end{thm}
Here, we note that, under the assumption of this theorem,
$d\theta|_{\mathfrak{t}}=d\theta'|_{\mathfrak{t}}$ implies $\mathfrak{t}\cap\mathfrak{m}=\mathfrak{t}\cap\mathfrak{m}'$. Theorem \ref{thm:araki} will be used in the proof of Theorem \ref{thm:cst_dsatake_sim}. For completeness, we will prove Theorem \ref{thm:araki} (see \cite[Theorem 2.14]{Araki} for the original statement and its proof).
For this purpose we need some preparation. We first restate Theorem \ref{thm:araki} in terms of the complexification of $\mathfrak{g}$. Let $\mathfrak{g}^{\mathbb{C}}$ denote the complexification of $\mathfrak{g}$. We write $d\theta^{\mathbb{C}}$ and $d\theta'^{\mathbb{C}}$ as the complexifications of $d\theta$
and $d\theta'$, respectively. Then it is sufficient to show that, if $d\theta|_{\mathfrak{t}}=d\theta'|_{\mathfrak{t}}$ holds, then there exists $H\in\mathfrak{t}\cap\mathfrak{m}$ satisfying $d\theta'^{\mathbb{C}}=e^{\mathrm{ad}(H)}d\theta^{\mathbb{C}} e^{-\mathrm{ad}(H)}$. In order to give such $H$, we next recall the result for compact symmetric pairs due to Klein (\cite{Klein}). In fact, following his result, we can obtain a description of the action of $d\theta^{\mathbb{C}}$ on $\mathfrak{g}^{\mathbb{C}}$ by means of the corresponding Satake diagram.
Let $(G,\theta)$ be a compact symmetric pair. Take a maximal abelian subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{m}$ is a maximal abelian subspace of $\mathfrak{m}$. Denote by $\Delta$ the root system of $\mathfrak{g}$ with respect to $\mathfrak{t}$. We write the root space decomposition of the complexification $\mathfrak{g}^{\mathbb{C}}$ as follows: \[ \mathfrak{g}^{\mathbb{C}}=\mathfrak{t}^{\mathbb{C}}\oplus\sum_{\alpha\in\Delta}\mathfrak{g}(\mathfrak{t},\alpha), \] where $\mathfrak{g}(\mathfrak{t},\alpha):=\{X\in\mathfrak{g}^{\mathbb{C}}\mid [H,X]=\sqrt{-1}\INN{\alpha}{H}X,\,H\in\mathfrak{t}\}$. For each $\alpha\in\Delta$, $\mathfrak{g}(\mathfrak{t},\alpha)$ is a complex one-dimensional subspace of $\mathfrak{g}^{\mathbb{C}}$. A family $\{X_{\alpha}\}_{\alpha\in\Delta}$ of vectors in $\mathfrak{g}^{\mathbb{C}}$ is called a \textit{Chevalley basis} of $\mathfrak{g}^{\mathbb{C}}$, if it satisfies the following conditions: \begin{enumerate} \item For each $\alpha\in\Delta$, $X_{\alpha}$ is a nonzero vector in $\mathfrak{g}(\mathfrak{t},\alpha)$. \item $[X_{\alpha},X_{-\alpha}]=-\sqrt{-1}\alpha$ for $\alpha\in\Delta$. \item There exists a family $\{c_{\alpha,\beta} \mid \alpha,\beta\in\Delta, \alpha+\beta\in\Delta\}$ of real numbers satisfying $[X_{\alpha},X_{\beta}]=c_{\alpha,\beta}X_{\alpha+\beta}$ and $c_{\alpha,\beta}=-c_{-\alpha,-\beta}$. \item $[X_{\alpha},X_{\beta}]=0$ for $\alpha,\beta\in\Delta$ with $\alpha+\beta\not\in\Delta\cup\{0\}$. \end{enumerate} For formal reasons we put $c_{\alpha,\beta}=0$ for $\alpha,\beta\in\Delta$ with $\alpha+\beta\not\in\Delta\cup\{0\}$. Then $\{c_{\alpha,\beta}\}$ is called the \textit{Chevalley constants} associated with $\{X_{\alpha}\}_{\alpha\in\Delta}$. We note that a Chevalley basis is not a basis of the whole $\mathfrak{g}^{\mathbb{C}}$ but of the subspace $\sum_{\alpha\in\Delta}\mathfrak{g}(\mathfrak{t},\alpha)$.
It is known that a Chevalley basis exists (see \cite[Theorem 6.6, Chapter VI]{Knapp} for the proof).
We extend $\INN{\,}{\,}$ to a complex bilinear form on $\mathfrak{g}^{\mathbb{C}}$, which is denoted by the same symbol $\INN{\,}{\,}$. Then it is $\mathrm{ad}(\mathfrak{g}^{\mathbb{C}})$-invariant and nondegenerate. For each $\alpha\in\Delta$, by taking the scalar product of both sides of $[X_{\alpha},X_{-\alpha}]=-\sqrt{-1}\alpha$ with $\alpha$ and using the invariance, namely \[ -\sqrt{-1}\,\|\alpha\|^{2}=\INN{[X_{\alpha},X_{-\alpha}]}{\alpha}=\INN{X_{\alpha}}{[X_{-\alpha},\alpha]}=\sqrt{-1}\,\|\alpha\|^{2}\INN{X_{\alpha}}{X_{-\alpha}}, \] we get $\INN{X_{\alpha}}{X_{-\alpha}}=-1$. We write $\overline{X}$ for the complex conjugate of $X\in\mathfrak{g}^{\mathbb{C}}$ with respect to $\mathfrak{g}$ in $\mathfrak{g}^{\mathbb{C}}$. By an argument similar to the proof of \cite[Proposition 3.5]{Klein} we obtain the following lemma.
\begin{lem}\label{lem:Cbasis_conj} There exists a Chevalley basis $\{X_{\alpha}\}_{\alpha\in\Delta}$ of $\mathfrak{g}^{\mathbb{C}}$ which satisfies $\overline{X_{\alpha}}=-X_{-\alpha}$ for $\alpha\in\Delta$. Then we have $\INN{X_{\alpha}}{\overline{X_{\alpha}}}=1$. \end{lem}
Let $\{X_{\alpha}\}_{\alpha\in\Delta}$ be a Chevalley basis of $\mathfrak{g}^{\mathbb{C}}$. The complexification of $\theta$ will be denoted by the same symbol $\theta$. For each $\alpha\in\Delta$, it follows from $\theta(\mathfrak{g}(\mathfrak{t},\alpha))=\mathfrak{g}(\mathfrak{t},\theta(\alpha))$ that there exists a nonzero complex number $s_{\alpha}$ satisfying $\theta(X_{\alpha})=s_{\alpha}X_{\theta(\alpha)}$. The family $\{s_{\alpha}\}_{\alpha\in\Delta}$ is called the \textit{Klein constants} of $(\mathfrak{g},\theta)$ associated with $\{X_{\alpha}\}_{\alpha\in\Delta}$. Then we get $s_{\theta(\alpha)}=s_{\alpha}^{-1}$ because of $\theta^{2}=1$. Furthermore, if $\overline{X_{\alpha}}=-X_{-\alpha}$ holds for $\alpha\in\Delta$, then $\{s_{\alpha}\}_{\alpha\in\Delta}$ has the following properties:
\begin{lem}[{\cite[Proposition 4.1]{Klein}}]\label{lem:Klein_pro4.1} Assume that $\{X_{\alpha}\}_{\alpha\in\Delta}$ satisfies $\overline{X_{\alpha}}=-X_{-\alpha}$ for $\alpha\in\Delta$. Let $\alpha$ and $\beta$ be in $\Delta$. \begin{enumerate}
\item We have $s_{-\alpha}=\overline{s_{\alpha}}=s_{\alpha}^{-1}$. In particular, $|s_{\alpha}|=1$ holds. \item If $\theta(\beta)=\beta$, then $\mathfrak{g}(\mathfrak{t},\beta)\subset\mathfrak{k}^{\mathbb{C}}$. In particular, we get $s_{\beta}=1$. \end{enumerate} \end{lem}
Fix an element $H\in\mathfrak{t}$. We have another involution $\theta':=e^{\mathrm{ad}(H)}\theta e^{-\mathrm{ad}(H)}$
of $\mathfrak{g}$, which satisfies $\theta'|_{\mathfrak{t}}=\theta|_{\mathfrak{t}}$. Set $\mathfrak{m}':=\mathfrak{g}^{-\theta'}=e^{\mathrm{ad}(H)}(\mathfrak{m})$. Then we obtain $\mathfrak{t}\cap\mathfrak{m}'=e^{\mathrm{ad}(H)}(\mathfrak{t}\cap\mathfrak{m})$, so that $\mathfrak{t}\cap\mathfrak{m}'$ is a maximal abelian subspace of $\mathfrak{m}'$. Denote by $\{s'_{\alpha}\}_{\alpha\in\Delta}$ the Klein constants of $(\mathfrak{g},\theta')$ associated with $\{X_{\alpha}\}_{\alpha\in\Delta}$. Then we have $s_{\alpha}'=e^{\sqrt{-1}\INN{H}{\theta(\alpha)-\alpha}}s_{\alpha}$ for $\alpha\in\Delta$.
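The last relation can be checked directly: since $e^{\pm\mathrm{ad}(H)}X_{\alpha}=e^{\pm\sqrt{-1}\INN{\alpha}{H}}X_{\alpha}$ for $H\in\mathfrak{t}$, we compute \[ \theta'(X_{\alpha}) =e^{\mathrm{ad}(H)}\theta\bigl(e^{-\sqrt{-1}\INN{\alpha}{H}}X_{\alpha}\bigr) =e^{-\sqrt{-1}\INN{\alpha}{H}}s_{\alpha}\,e^{\mathrm{ad}(H)}X_{\theta(\alpha)} =e^{\sqrt{-1}\INN{H}{\theta(\alpha)-\alpha}}s_{\alpha}X_{\theta(\alpha)}. \]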
The following lemma is useful in our proof of Theorem \ref{thm:araki}.
\begin{lem}\label{lem:uehtgs} Let $\{X_{\alpha}\}_{\alpha\in\Delta}$ be a Chevalley basis of $\mathfrak{g}^{\mathbb{C}}$ satisfying $\overline{X_{\alpha}}=-X_{-\alpha}$ for $\alpha\in\Delta$. We put $\Delta_{0}=\{\alpha\in\Delta\mid \theta(\alpha)=\alpha\}$. Assume that $\gamma_{1},\dotsc,\gamma_{r}\in\Delta-\Delta_{0}$ $(r\in\mathbb{N})$ are such that the vectors $\theta(\gamma_{1})-\gamma_{1},\dotsc,\theta(\gamma_{r})-\gamma_{r}\in \mathfrak{t}\cap\mathfrak{m}$ are linearly independent. Then, for any $u_{1},\dotsc,u_{r}\in U(1)$, there exists $H\in\mathfrak{t}\cap\mathfrak{m}$ satisfying the following relation: \begin{equation}\label{eqn:ues} u_{j}=e^{\sqrt{-1}\INN{H}{\theta(\gamma_{j})-\gamma_{j}}}s_{\gamma_{j}}\quad (1\leq j\leq r). \end{equation} \end{lem}
\begin{proof} For each $1\leq j\leq r$, it follows from Lemma \ref{lem:Klein_pro4.1}, (1) that there exists $t_{j}\in\mathbb{R}$ satisfying $s_{\gamma_{j}}=e^{\sqrt{-1}t_{j}}$. Since $u_{j}$ is also in $U(1)$, there exists $v_{j}\in\mathbb{R}$ satisfying $u_{j}=e^{\sqrt{-1}v_{j}}$. We define a matrix $C$ by \[ C:=\bigl(\INN{\theta(\gamma_{j})-\gamma_{j}}{\theta(\gamma_{k})-\gamma_{k}}\bigr)_{1\leq j,k\leq r}. \] Since $C$ is the Gram matrix of the linearly independent vectors $\theta(\gamma_{1})-\gamma_{1},\dotsc,\theta(\gamma_{r})-\gamma_{r}$, it is positive definite; in particular, $C$ is invertible. Let $h_{1},\dotsc,h_{r}$ be real numbers defined by \[ \left( \begin{array}{c} h_{1}\\ \vdots\\ h_{r} \end{array} \right):=C^{-1}\left( \begin{array}{c} v_{1}-t_{1}\\ \vdots\\ v_{r}-t_{r} \end{array} \right). \] If we put $H:=\sum_{k=1}^{r}h_{k}(\theta(\gamma_{k})-\gamma_{k})\in\mathfrak{t}\cap\mathfrak{m}$, then the following relation holds: \[ v_{j}=t_{j}+\INN{H}{\theta(\gamma_{j})-\gamma_{j}}\quad (1\leq j\leq r). \] Thus we obtain the assertion because $H$ satisfies (\ref{eqn:ues}). \end{proof}
Now, we are ready to prove Theorem \ref{thm:araki}.
\begin{proof}[Proof of Theorem $\ref{thm:araki}$] Let $\{X_{\alpha}\}_{\alpha\in\Delta}$ be a Chevalley basis of $\mathfrak{g}^{\mathbb{C}}$ with $\overline{X_{\alpha}}=-X_{-\alpha}$. Denote by $\{s_{\alpha}\}_{\alpha\in\Delta}$ (resp.~$\{s_{\alpha}'\}_{\alpha\in\Delta}$) the Klein constants of $(\mathfrak{g},\theta)$ (resp.~$(\mathfrak{g},\theta')$)
associated with $\{X_{\alpha}\}_{\alpha\in\Delta}$. By the assumption $\theta|_{\mathfrak{t}}=\theta'|_{\mathfrak{t}}$ we obtain
$(\Delta,-\theta|_{\mathfrak{t}})=(\Delta,-\theta'|_{\mathfrak{t}})=:(\Delta,\sigma)$. Set $r:=\mathrm{rank}(\Delta,\sigma)$. Let $\Pi$ be a $\sigma$-fundamental system of $\Delta$, and $\Pi_{0}:=\Pi\cap\Delta_{0}$. We can take $\alpha_{1},\dotsc,\alpha_{r}\in\Pi-\Pi_{0}$ such that $\{\theta(\alpha_{1})-\alpha_{1},\dotsc,\theta(\alpha_{r})-\alpha_{r}\}(\subset \mathfrak{t}\cap\mathfrak{m})$ are linearly independent (cf.~\cite[p.~23]{Warner}). By applying Lemma \ref{lem:uehtgs} to $s_{\alpha_{1}}',\dotsc,s_{\alpha_{r}}'\in U(1)$, there exists $H\in\mathfrak{t}\cap\mathfrak{m}$ satisfying the following relation: \begin{equation*}\label{eqn:saj'sa_rel} s_{\alpha_{j}}'=e^{\sqrt{-1}\INN{H}{\theta(\alpha_{j})-\alpha_{j}}} s_{\alpha_{j}}\quad (1\leq j\leq r). \end{equation*} Then we have $\theta'=e^{\mathrm{ad}(H)}\theta e^{-\mathrm{ad}(H)}$ on $\sum_{j=1}^{r}\mathfrak{g}(\mathfrak{t},\alpha_{j})$. Furthermore, if we put \[ \mathfrak{h} :=\mathfrak{t}\oplus \sum_{\beta\in\Pi_{0}}\mathfrak{g}(\mathfrak{t},\beta) \oplus\sum_{j=1}^{r}(\mathfrak{g}(\mathfrak{t},\alpha_{j})\oplus \mathfrak{g}(\mathfrak{t},-\theta(\alpha_{j}))), \] then we have $\theta'=e^{\mathrm{ad}(H)}\theta e^{-\mathrm{ad}(H)}$ on the subset $\mathfrak{h}\cup\overline{\mathfrak{h}}$ of $\mathfrak{g}^{\mathbb{C}}$. Since $\mathfrak{g}^{\mathbb{C}}$ is generated by $\mathfrak{h}\cup\overline{\mathfrak{h}}$, we have $\theta'=e^{\mathrm{ad}(H)}\theta e^{-\mathrm{ad}(H)}$ on $\mathfrak{g}^{\mathbb{C}}$. Therefore we have completed the proof. \end{proof}
The following theorem is shown by Lemma \ref{lem:sigma_satake} and Theorem \ref{thm:araki}.
\begin{thm}\label{thm:cps_sigma_satake_equiv} Let $(G,\theta)$ and $(G,\theta')$ be two compact symmetric pairs. Then the following conditions are equivalent: \begin{enumerate} \item $(G,\theta)$ and $(G,\theta')$ are locally isomorphic, namely, there exists $\varphi\in\mathrm{Aut}(\mathfrak{g})$ satisfying $d\theta'=\varphi d\theta\varphi^{-1}$. \item The $\sigma$-systems of $(G,\theta)$ and $(G,\theta')$ are isomorphic. \item The Satake diagrams of $(G,\theta)$ and $(G,\theta')$ are isomorphic. \end{enumerate} In addition, in the case when $G$ is simply-connected or when $G$ is the adjoint group, $(G,\theta)$ and $(G,\theta')$ are isomorphic if and only if one of the above conditions $(1)$--$(3)$ holds. \end{thm}
An abstract $\sigma$-system $(\Delta,\sigma)$ is said to be \textit{admissible}, if there exists a compact symmetric pair whose $\sigma$-system is isomorphic to $(\Delta,\sigma)$. Clearly, any admissible $\sigma$-system is normal. Araki (\cite[No.~5.11]{Araki}) determined the admissibility of abstract normal $\sigma$-systems based on the classification. As a consequence of Theorem \ref{thm:cps_sigma_satake_equiv}, he gave an alternative proof of Cartan's classification for compact symmetric pairs at the Lie algebra level. Hence the local isomorphism class of a compact symmetric pair is represented by its Satake diagram. Furthermore, we can determine the restricted root system of $(G,\theta)$ with multiplicities by means of the Satake diagram, which characterizes the local isomorphism classes of compact symmetric pairs. This is one significance of the alternative proof. In Section \ref{sec:cst_dsatake}, we will generalize this method to classify compact symmetric triads at the Lie algebra level.
Here, in order to present concrete examples of compact symmetric triads, we give an explicit description of the classification for the isomorphism classes of compact symmetric pairs $(\mathfrak{g},\theta)$ at the Lie algebra level. The following theorem gives a criterion for two compact symmetric pairs to be isomorphic to each other.
\begin{thm}\label{thm:csp_iso_iff} Assume that $\mathfrak{g}$ is simple. Two compact symmetric pairs $(\mathfrak{g},\theta)$ and $(\mathfrak{g},\theta')$ are isomorphic if and only if the fixed point subalgebras $\mathfrak{k}$ and $\mathfrak{k}'$ are isomorphic as Lie algebras. \end{thm}
The proof is essentially due to Helgason (\cite{Helgason}).
\begin{proof} The necessity is clear. In order to prove the sufficiency we assume that $\mathfrak{k}$ and $\mathfrak{k}'$ are isomorphic. We extend $\theta$ and $\theta'$ to complex linear involutions on $\mathfrak{g}^{\mathbb{C}}$, which we write $\theta^{\mathbb{C}}$ and $\theta'^{\mathbb{C}}$, respectively. Then the fixed point subalgebras of $\theta^{\mathbb{C}}$ and $\theta'^{\mathbb{C}}$ are isomorphic to each other. It follows from \cite[Theorem 6.2, Chapter X]{Helgason} that $\theta^{\mathbb{C}}$ and $\theta'^{\mathbb{C}}$ are $\mathrm{Aut}(\mathfrak{g}^{\mathbb{C}})$-conjugate. In addition, by \cite[Proposition 1.4, Chapter X]{Helgason} there exists $\varphi\in\mathrm{Aut}(\mathfrak{g})$ satisfying $\theta'=\varphi\theta\varphi^{-1}$. Hence the assertion holds. \end{proof}
By Theorem \ref{thm:csp_iso_iff}, we may write $[(\mathfrak{g},\mathfrak{k})]$ in place of $[(\mathfrak{g},\theta)]$ without confusion. Table \ref{table:fixed_pt_algebra} exhibits the classification of the fixed point subalgebras of involutions on $\mathfrak{g}$. In Section \ref{sec:cst_dsatake}, we will classify compact simple symmetric triads at the Lie algebra level, based on the classification for compact simple symmetric pairs.
\begin{table}[H] \caption{The classification of fixed point subalgebras of involutions (\cite[TABLE V, p.~518]{Helgason})}\label{table:fixed_pt_algebra} \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{cc} \hline \hline $\mathfrak{g}$ & Fixed point subalgebra\\ \hline \hline $\mathfrak{su}(n)$ & $\mathfrak{so}(n)$, $\mathfrak{sp}(n/2)$ ($n$: even), $\mathfrak{s}(\mathfrak{u}(a)\oplus\mathfrak{u}(b))$ ($a+b=n$) \\
$\mathfrak{so}(n)$ & $\mathfrak{so}(a)\oplus\mathfrak{so}(b)$ ($a+b=n\neq 2,4$), $\mathfrak{u}(n/2)$ ($n\geq 6$, even) \\
$\mathfrak{sp}(n)$ & $\mathfrak{u}(n)$, $\mathfrak{sp}(a)\oplus\mathfrak{sp}(b)$ ($a+b=n$) \\
$\mathfrak{e}_{6}$ & $\mathfrak{sp}(4)$, $\mathfrak{su}(6)\oplus\mathfrak{su}(2)$, $\mathfrak{so}(10)\oplus\mathfrak{so}(2)$, $\mathfrak{f}_{4}$\\
$\mathfrak{e}_{7}$ & $\mathfrak{su}(8)$, $\mathfrak{so}(12)\oplus\mathfrak{su}(2)$, $\mathfrak{e}_{6}\oplus\mathfrak{so}(2)$\\
$\mathfrak{e}_{8}$ & $\mathfrak{so}(16)$, $\mathfrak{e}_{7}\oplus\mathfrak{su}(2)$\\
$\mathfrak{f}_{4}$ & $\mathfrak{sp}(3)\oplus\mathfrak{su}(2)$, $\mathfrak{so}(9)$\\
$\mathfrak{g}_{2}$ & $\mathfrak{su}(2)\oplus\mathfrak{su}(2)$\\ \hline \hline \end{tabular} \renewcommand{\arraystretch}{1.0} \end{table}
\section{Double Satake diagrams for double $\sigma$-systems}\label{sec:2satake}
In this section, we will introduce the notions of double $\sigma$-systems and double Satake diagrams, which are generalizations of $\sigma$-systems and Satake diagrams, respectively. Based on the equivalence relation for compact symmetric triads, we define equivalence relations for double $\sigma$-systems and for double Satake diagrams. In Theorem \ref{thm:dsig_dsatake_equiv} we give a necessary and sufficient condition for two double $\sigma$-systems to be equivalent. As explained in more detail in Section \ref{sec:cst_dsatake}, this theorem plays a fundamental role in the definition of double Satake diagrams for compact symmetric triads. We also define the rank and the order for the equivalence class of a double $\sigma$-system. We will discuss a geometrical meaning of the rank in Section \ref{sec:cst_dsatake}. On the other hand, we will show the relation between the ranks and orders of compact symmetric triads and those of double $\sigma$-systems in Section \ref{sec:cst_cf}.
\subsection{Double $\sigma$-systems}\label{sec:2sigroot}
Let $\mathfrak{t}$ be a finite dimensional real vector space. Fix an inner product $\INN{\,}{\,}$ on $\mathfrak{t}$. Let $\Delta$ be a reduced root system of $\mathfrak{t}$. For two involutions $\sigma_{1}$ and $\sigma_{2}$ on $\Delta$, the triplet $(\Delta,\sigma_{1},\sigma_{2})$ is called a \textit{double $\sigma$-system} of $\mathfrak{t}$. In this paper, $\sigma_{1}$ and $\sigma_{2}$ do not necessarily commute unless otherwise stated. Based on the equivalence relation for compact symmetric triads as in Definition \ref{dfm:cst_sim}, we introduce an equivalence relation $\sim$ on double $\sigma$-systems as follows.
\begin{dfn} Two double $\sigma$-systems $(\Delta,\sigma_{1},\sigma_{2})$ and $(\Delta',\sigma_{1}',\sigma_{2}')$ are isomorphic, which we write $(\Delta,\sigma_{1},\sigma_{2})\sim(\Delta',\sigma_{1}',\sigma_{2}')$, if there exist an isomorphism $\varphi:\mathfrak{t}\to \mathfrak{t}'$ of root systems between $\Delta$ and $\Delta'$, and $w'\in W(\Delta')$ satisfying the following relations: \begin{equation}\label{eqn:dsig_sim} \sigma_{1}'=\varphi\sigma_{1}\varphi^{-1},\quad \sigma_{2}'=w'\varphi\sigma_{2}\varphi^{-1}w'^{-1}. \end{equation} We write $[(\Delta,\sigma_{1},\sigma_{2})]$ for the isomorphism class of $(\Delta,\sigma_{1},\sigma_{2})$. \end{dfn}
A double $\sigma$-system $(\Delta,\sigma_{1},\sigma_{2})$ is said to be \textit{normal}, if both $(\Delta,\sigma_{1})$ and $(\Delta,\sigma_{2})$ are normal as $\sigma$-systems. The normality of a double $\sigma$-system is compatible with $\sim$; namely, if $(\Delta,\sigma_{1},\sigma_{2})\sim(\Delta',\sigma_{1}',\sigma_{2}')$ and $(\Delta,\sigma_{1},\sigma_{2})$ is normal, then so is $(\Delta',\sigma_{1}',\sigma_{2}')$.
\begin{dfn}\label{dfn:dsig_canonical} Let $(\Delta,\sigma_{1},\sigma_{2})$ be a normal double $\sigma$-system. \begin{enumerate} \item A fundamental system $\Pi$ of $\Delta$ is called a \textit{$(\sigma_{1},\sigma_{2})$-fundamental system}, if $\Pi$ is both $\sigma_{1}$- and $\sigma_{2}$-fundamental systems. \item $(\Delta,\sigma_{1},\sigma_{2})$ is said to be \textit{canonical}, if $\Delta$ admits a $(\sigma_{1},\sigma_{2})$-fundamental system. \end{enumerate} \end{dfn}
\begin{pro}\label{pro:exist_dfs} For any normal double $\sigma$-system $(\Delta,\sigma_{1},\sigma_{2})$, there exists a normal double $\sigma$-system $(\Delta,\sigma_{1},\sigma_{2}')\sim(\Delta,\sigma_{1},\sigma_{2})$ such that $(\Delta,\sigma_{1},\sigma_{2}')$ is canonical. \end{pro}
\begin{proof} For $i=1,2$, let $\Pi_{i}$ be a $\sigma_{i}$-fundamental system of $\Delta$. Since $W(\Delta)$ acts transitively on the set of fundamental systems of $\Delta$, there exists $w\in W(\Delta)$ such that $\Pi_{1}=w(\Pi_{2})=:\Pi$. If we put $\sigma_{2}':=w\sigma_{2}w^{-1}$, then $(\Delta,\sigma_{1},\sigma_{2}')\sim(\Delta,\sigma_{1},\sigma_{2})$ holds. It follows from Lemma \ref{lem:Pi_varphiPi} that $\Pi$ is a $\sigma_{2}'$-fundamental system. Hence we get the assertion. \end{proof}
In general, a normal double $\sigma$-system $(\Delta, \sigma_{1},\sigma_{2})$ is not necessarily canonical. Furthermore, there exist two canonical normal double $\sigma$-systems $(\Delta, \sigma_{1},\sigma_{2})$ and $(\Delta, \sigma_{1},\sigma_{2}')$ such that $(\Delta, \sigma_{1},\sigma_{2})\not\sim(\Delta, \sigma_{1},\sigma_{2}')$ and $(\Delta,\sigma_{2})\simeq(\Delta,\sigma_{2}')$ hold. Before giving an example we prepare the following notation.
\begin{nota}\label{nota:Dr} Let $e_{1},\dotsc,e_{r}$ be the canonical basis of $\mathbb{R}^{r}$. We write $D_{r}^{+}=\{e_{i}\pm e_{j}\mid 1\leq i< j\leq r\}$ for the set of positive roots of the root system of type $D$ with rank $r$ (\cite{Bo}). Then the set of simple roots for $D_{r}^{+}$ is given by \[ \Pi=\{\alpha_{1}=e_{1}-e_{2},\dotsc,\alpha_{r-1}=e_{r-1}-e_{r},\alpha_{r}=e_{r-1}+e_{r}\}. \] \end{nota}
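For instance, in the case $r=4$, which is the case appearing in Example \ref{ex:so8so3so5_ndsig} below, this instantiates to \[ \Pi=\{\alpha_{1}=e_{1}-e_{2},\ \alpha_{2}=e_{2}-e_{3},\ \alpha_{3}=e_{3}-e_{4},\ \alpha_{4}=e_{3}+e_{4}\}. \]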
\begin{ex}\label{ex:so8so3so5_ndsig} Let $(\Delta,\sigma)$ be the $\sigma$-system corresponding to the compact symmetric pair $(\mathfrak{so}(8),\mathfrak{so}(3)\oplus\mathfrak{so}(5))$. Then we have $\Delta=\{\pm e_{i}\pm e_{j}\mid 1\leq i<j \leq 4\}$. There exists a $\sigma$-fundamental system $\Pi=\{\alpha_{1},\dotsc,\alpha_{4}\}$ of $\Delta$ such that its Satake diagram is described as follows: \begin{equation*} \begin{tabular}{c} \begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\circ}="a1";(10,0)*++!D{\alpha_{2}}*{\circ}="a2" \ar@{-}"a2";(17.09,7.09)*++!L{\alpha_{3}}*{\circ}="a3" \ar@{-}"a2";(17.09,-7.09)*++!L{\alpha_{4}}*{\circ}="a4"
\ar@/^/@{<->} "a3";"a4" \end{xy} \end{tabular} \end{equation*} Then we have $\sigma:(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})\mapsto (\alpha_{1},\alpha_{2},\alpha_{4},\alpha_{3})$. Clearly, $(\Delta, \sigma,\sigma)$ gives a trivial example of a canonical normal double $\sigma$-system. In what follows, we shall give an example of a normal double $\sigma$-system $(\Delta,\sigma,\sigma')$ such that $(\Delta,\sigma,\sigma')\sim(\Delta,\sigma,\sigma)$ but $(\Delta,\sigma,\sigma')$ is not canonical. Furthermore, we give another example of a normal double $\sigma$-system $(\Delta,\sigma,\sigma'')$ such that $(\Delta,\sigma,\sigma'')$ is canonical, $(\Delta,\sigma)\simeq(\Delta,\sigma'')$ and $(\Delta,\sigma,\sigma'')\not\sim(\Delta,\sigma,\sigma)$.
We define an automorphism $w\in \mathrm{Aut}(\Delta)$ by $w:(e_{1},e_{2},e_{3},e_{4})\mapsto(e_{1},e_{2},e_{4},e_{3})$. Then $w\in W(\Delta)$ holds. If we put $\sigma':=w\sigma w^{-1}$, then $(\Delta,\sigma,\sigma')\sim(\Delta,\sigma,\sigma)$ is a normal double $\sigma$-system. In addition, since $\mathrm{ord}(\sigma\sigma')=2\neq 1=\mathrm{ord}(\sigma\sigma)$, it follows from Theorem \ref{thm:dsig_dsatake_equiv} below that $(\Delta,\sigma,\sigma')$ cannot be canonical.
Let $\kappa$ be an automorphism of $\Delta$ of order three defined by $\kappa:(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})\mapsto(\alpha_{4},\alpha_{2},\alpha_{1},\alpha_{3})$. We set $\sigma'':=\kappa\sigma\kappa^{-1}$. Then $(\Delta,\sigma)\simeq(\Delta,\sigma'')$ holds. In addition, $(\Delta,\sigma'')$ is normal. Hence the double $\sigma$-system $(\Delta,\sigma,\sigma'')$ is normal. It follows from $\kappa(\Pi)=\Pi$ that $\Pi$ becomes a $\sigma''$-fundamental system by Lemma \ref{lem:Pi_varphiPi}. This yields that $(\Delta,\sigma,\sigma'')$ is canonical. Since $\sigma\sigma''$ has order three, we have $\mathrm{ord}(\sigma\sigma)\neq\mathrm{ord}(\sigma\sigma'')$. Thus $(\Delta,\sigma,\sigma)\not\sim(\Delta,\sigma,\sigma'')$ holds by means of Theorem \ref{thm:dsig_dsatake_equiv}. \end{ex}
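For the reader's convenience, we sketch how the orders of $\sigma\sigma'$ and $\sigma\sigma''$ in Example \ref{ex:so8so3so5_ndsig} can be checked directly from the stated actions on the simple roots $\alpha_{1}=e_{1}-e_{2}$, $\alpha_{2}=e_{2}-e_{3}$, $\alpha_{3}=e_{3}-e_{4}$, $\alpha_{4}=e_{3}+e_{4}$ of Notation \ref{nota:Dr}. Since $\sigma$ fixes $\alpha_{1},\alpha_{2}$ and interchanges $\alpha_{3}$ with $\alpha_{4}$, it acts linearly on $\mathfrak{t}$ by $e_{4}\mapsto -e_{4}$ and $e_{i}\mapsto e_{i}$ $(i=1,2,3)$. Hence $\sigma'=w\sigma w^{-1}$ acts by $e_{3}\mapsto -e_{3}$, and \[ \sigma\sigma':(e_{1},e_{2},e_{3},e_{4})\mapsto(e_{1},e_{2},-e_{3},-e_{4}) \] has order two. On the other hand, a direct computation shows that $\sigma''=\kappa\sigma\kappa^{-1}$ interchanges $\alpha_{1}$ and $\alpha_{3}$ and fixes $\alpha_{2},\alpha_{4}$, so that $\sigma\sigma''$ induces the $3$-cycle $\alpha_{1}\mapsto\alpha_{4}\mapsto\alpha_{3}\mapsto\alpha_{1}$ on $\Pi$, whence $\mathrm{ord}(\sigma\sigma'')=3$.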
\subsection{Double Satake diagrams}\label{sec:sub2satake}
Let $(\Delta,\sigma_{1},\sigma_{2})$ be a canonical normal double $\sigma$-system, and $\Pi$ be a $(\sigma_{1},\sigma_{2})$-fundamental system of $\Delta$. Set $\Delta_{i,0}:=\{\alpha\in\Delta\mid \sigma_{i}(\alpha)=-\alpha\}$ for $i=1,2$. We denote by $S_{i}=S(\Pi, \Pi_{i,0}, p_{i})$ the Satake diagram of $(\Delta,\sigma_{i})$ associated with $\Pi$, where $\Pi_{i,0}:=\Pi\cap\Delta_{i,0}$ and $p_{i}$ is the Satake involution. We note that both Satake diagrams $S_{1}$ and $S_{2}$ are drawn on the common Dynkin diagram of $\Pi$.
\begin{dfn}\label{dfn:2satake} Retain the notation above. The pair $(S_{1},S_{2})$ is called the \textit{double Satake diagram} of $(\Delta,\sigma_{1},\sigma_{2})$ associated with $\Pi$. \end{dfn}
Let us prove that the double Satake diagram $(S_{1},S_{2})$ of $(\Delta, \sigma_{1},\sigma_{2})$ is independent of the choice of $\Pi$. Let $\Pi'$ be another $(\sigma_{1},\sigma_{2})$-fundamental system of $\Delta$, and $(S_{1}',S_{2}')$ denote the double Satake diagram of $(\Delta,\sigma_{1},\sigma_{2})$ associated with $\Pi'$. It follows from \cite[Proposition A in Appendix]{Satake} that there exist $w_{1}\in W(\Delta)_{\sigma_{1}}$ and $w_{2}\in W(\Delta)_{\sigma_{2}}$ satisfying $w_{1}(\Pi)=\Pi'=w_{2}(\Pi)$, where $W(\Delta)_{\sigma_{i}}:=\{w\in W(\Delta)\mid \sigma_{i}w=w\sigma_{i}\}$. Since the action of $W(\Delta)$ on the set of fundamental systems of $\Delta$ is simply transitive, we obtain $w:=w_{1}=w_{2}\in W(\Delta)_{\sigma_{1}}\cap W(\Delta)_{\sigma_{2}}$. Thus, we get \begin{equation*}\label{eqn:dsatake_eq} S_{1}=S_{1}',\quad S_{2}=S_{2}'. \end{equation*} Then we write $(S_{1},S_{2})=(S_{1}',S_{2}')$.
\begin{dfn}\label{dfn:2satake_sim} Two double Satake diagrams $(S_{1},S_{2})$ and $(S_{1}',S_{2}')$ are isomorphic, if there exists a common isomorphism $\psi$ of Satake diagrams between $S_{i}$ and $S_{i}'$ for $i=1,2$. Then we write $(S_{1},S_{2})\sim(S_{1}',S_{2}')$ for short. Such $\psi$ is called an isomorphism of double Satake diagrams. We denote by $[(S_{1},S_{2})]$ the isomorphism class of $(S_{1},S_{2})$. \end{dfn}
\begin{thm}\label{thm:dsig_dsatake_equiv} Let $(\Delta,\sigma_{1},\sigma_{2})$ and $(\Delta',\sigma_{1}',\sigma_{2}')$ be two canonical double $\sigma$-systems of $\mathfrak{t}$ and $\mathfrak{t}'$, respectively. Let $(S_{1},S_{2})$ and $(S_{1}',S_{2}')$ denote their double Satake diagrams. Then, the following three conditions are equivalent$:$ \begin{enumerate} \item $(\Delta,\sigma_{1},\sigma_{2})\sim(\Delta',\sigma_{1}',\sigma_{2}')$. \item There exists an isomorphism $\varphi:\mathfrak{t}\to\mathfrak{t}'$ of root systems between $\Delta$ and $\Delta'$ satisfying $\sigma_{i}'=\varphi\sigma_{i}\varphi^{-1}$ for $i=1,2$. \item $(S_{1},S_{2})\sim(S_{1}',S_{2}')$. \end{enumerate} In particular, we have $\dim(\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}}) =\dim(\mathfrak{t}'^{\sigma_{1}'}\cap\mathfrak{t}'^{\sigma_{2}'})$ and $\mathrm{ord}(\sigma_{1}\sigma_{2})=\mathrm{ord}(\sigma_{1}'\sigma_{2}')$. \end{thm}
\begin{proof} It is sufficient to show $(1)\Rightarrow (2)$ and $(2)\Leftrightarrow (3)$ because $(2)\Rightarrow (1)$ is clear.
$(1)\Rightarrow (2)$: Assume that $(\Delta,\sigma_{1},\sigma_{2})\sim(\Delta',\sigma_{1}',\sigma_{2}')$. Then there exist an isomorphism $\varphi:\Delta\to\Delta'$ and $w'\in W(\Delta')$ satisfying \eqref{eqn:dsig_sim}. Let $\Pi$ and $\Pi'$ be a $(\sigma_{1},\sigma_{2})$-fundamental system of $\Delta$ and a $(\sigma_{1}',\sigma_{2}')$-fundamental system of $\Delta'$, respectively. It follows from Lemma \ref{lem:Pi_varphiPi} that $\varphi(\Pi)$ is a $(\sigma_{1}',\varphi\sigma_{2}\varphi^{-1})$-fundamental system of $\Delta'$. Then there exist $w'_{1}\in W(\Delta')_{\sigma_{1}'}$ and $w'_{2}\in W(\Delta')_{\sigma_{2}'}$ satisfying the following relations: \[ \Pi'=w_{1}'(\varphi(\Pi)),\quad \Pi'=w'_{2}(w'\varphi(\Pi)), \] from which we have $w_{1}'(\varphi(\Pi))=w_{2}'w'(\varphi(\Pi))$. This yields $w'=w_{2}'^{-1}w_{1}'$. If we put $\varphi':=w_{1}'\varphi$, then it is an isomorphism of root systems which satisfies $\sigma_{1}'=w_{1}'\sigma_{1}'w_{1}'^{-1}=\varphi'\sigma_{1}\varphi'^{-1}$ and \[ \sigma_{2}' =w'_{2}\sigma_{2}'w_{2}'^{-1} =w'_{2}(w'\varphi\sigma_{2}\varphi^{-1}w'^{-1})w_{2}'^{-1} =w'_{2}(w_{2}'^{-1}w_{1}'\varphi\sigma_{2}\varphi^{-1}w_{1}'^{-1}w'_{2})w_{2}'^{-1} =\varphi'\sigma_{2}\varphi'^{-1}. \] Hence we have the implication $(1)\Rightarrow (2)$.
$(2)\Rightarrow (3)$: Let $\varphi:\mathfrak{t}\to\mathfrak{t}'$ be an isomorphism of root systems between $\Delta$ and $\Delta'$ satisfying $\sigma_{i}'=\varphi\sigma_{i}\varphi^{-1}$ for $i=1,2$. If $\Pi$ is a $(\sigma_{1},\sigma_{2})$-fundamental system of $\Delta$, then $\varphi(\Pi)$ is a $(\sigma_{1}',\sigma_{2}')$-fundamental system of $\Delta'$. This implies $(S_{1},S_{2})\sim(S_{1}',S_{2}')$.
$(3)\Rightarrow (2)$: Let $\psi:\Pi\to\Pi'$ be an isomorphism of double Satake diagrams between $(S_{1},S_{2})$ and $(S_{1}',S_{2}')$. We extend $\psi$ to an isomorphism $\tilde{\psi}$ of root systems between $\Delta$ and $\Delta'$ (cf.~Lemma \ref{lem:sigma_satake}). Then $\tilde{\psi}$ satisfies $\sigma_{i}'=\tilde{\psi} \sigma_{i}\tilde{\psi}^{-1}$ for $i=1,2$. Thus, we have the implication $(3)\Rightarrow (2)$.
From the above argument we have completed the proof. \end{proof}
For two double $\sigma$-systems $(\Delta,\sigma_{1},\sigma_{2})$ and $(\Delta',\sigma_{1}',\sigma_{2}')$, we write $(\Delta,\sigma_{1},\sigma_{2})\equiv(\Delta',\sigma_{1}',\sigma_{2}')$ if they satisfy condition (2) stated in Theorem \ref{thm:dsig_dsatake_equiv}. Then $\equiv$ gives an equivalence relation on the set of double $\sigma$-systems.
We define the rank and the order for the isomorphism class of a normal double $\sigma$-system $(\Delta,\sigma_{1},\sigma_{2})$ as follows: For a canonical representative $(\Delta,\sigma_{1}',\sigma_{2}')\in [(\Delta,\sigma_{1},\sigma_{2})]$,
\[ \mathrm{rank}[(\Delta,\sigma_{1},\sigma_{2})]:=\dim(\mathfrak{t}^{\sigma_{1}'}\cap\mathfrak{t}^{\sigma_{2}'}),\quad \mathrm{ord}[(\Delta,\sigma_{1},\sigma_{2})]:=\mathrm{ord}(\sigma_{1}'\sigma_{2}'). \]
It follows from Theorem \ref{thm:dsig_dsatake_equiv} that the values of $\dim(\mathfrak{t}^{\sigma_{1}'}\cap\mathfrak{t}^{\sigma_{2}'})$ and $\mathrm{ord}(\sigma_{1}'\sigma_{2}')$ are independent of the choice of $(\Delta,\sigma_{1}',\sigma_{2}')$. Thus the rank and the order of $[(\Delta,\sigma_{1},\sigma_{2})]$ are well-defined. Since $\sigma_{1}'\sigma_{2}'$ induces a permutation of $\Delta$, the order of $[(\Delta,\sigma_{1},\sigma_{2})]$ is finite. As will be shown later, in the case when $G$ is simple, the rank and the order of $[(G,\theta_{1},\theta_{2})]$ coincide with those of $[(\Delta,\sigma_{1},\sigma_{2})]$ (see Theorem \ref{thm:cst_RO_can}).
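To illustrate these invariants, consider the canonical normal double $\sigma$-system $(\Delta,\sigma,\sigma)$ of Example \ref{ex:so8so3so5_ndsig}. Since $\sigma$ fixes $\alpha_{1},\alpha_{2}$ and interchanges $\alpha_{3},\alpha_{4}$, it acts on $\mathfrak{t}\cong\mathbb{R}^{4}$ by $e_{4}\mapsto -e_{4}$ and $e_{i}\mapsto e_{i}$ $(i=1,2,3)$, so that \[ \mathrm{rank}[(\Delta,\sigma,\sigma)]=\dim\mathfrak{t}^{\sigma}=3,\qquad \mathrm{ord}[(\Delta,\sigma,\sigma)]=\mathrm{ord}(\mathrm{id}_{\mathfrak{t}})=1, \] in accordance with the fact that the compact symmetric pair $(\mathfrak{so}(8),\mathfrak{so}(3)\oplus\mathfrak{so}(5))$ has rank three.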
\section{Double Satake diagrams for compact symmetric triads}\label{sec:cst_dsatake}
In Subsection \ref{sec:dsfcst}, we give a normal double $\sigma$-system for a compact symmetric triad. In Subsection \ref{sec:cst_qcan}, we define a quasi-canonical compact symmetric triad as a compact symmetric triad which admits a canonical normal double $\sigma$-system. Furthermore, we prove that, for any compact symmetric triad $(G,\theta_{1},\theta_{2})$, there exists $(G,\theta_{1}',\theta_{2}')\sim (G,\theta_{1},\theta_{2})$ such that $(G,\theta_{1}',\theta_{2}')$ is quasi-canonical. In Subsection \ref{sec:cst_2satake_const}, we introduce the notion of double Satake diagrams for quasi-canonical compact symmetric triads. We will show that the isomorphism class of a compact symmetric triad uniquely determines the double Satake diagram up to isomorphism (Propositions \ref{pro:exist_qcan} and \ref{pro:cst_dstake_determ}). For its converse, we generalize Theorem \ref{thm:cps_sigma_satake_equiv} to compact symmetric triads, which is given in Theorem \ref{thm:cst_dsatake_sim}. In Subsection \ref{sec:cst_class_2satake}, we classify compact symmetric triads $(G,\theta_{1},\theta_{2})$ such that $G$ is simple in terms of double Satake diagrams. Our classification will be given in Corollary \ref{cor:cst_classify}. In addition, we give some results by means of the classification.
\subsection{Double $\sigma$-systems for compact symmetric triads}\label{sec:dsfcst}
\subsubsection{Construction of double $\sigma$-systems from compact symmetric triads}
Let $(G,\theta_{1},\theta_{2})$ be a compact symmetric triad and $\mathfrak{t}$ be a maximal abelian subalgebra of $\mathfrak{g}$
such that $\mathfrak{t}\cap\mathfrak{m}_{i}$ is a maximal abelian subspace of $\mathfrak{m}_{i}$ (cf.~Lemma \ref{lem:lemm2.5}). Denote by $\Delta$ the root system of $\mathfrak{g}$ with respect to $\mathfrak{t}$. Then, for each $i=1$, $2$,
$(\Delta, \sigma_{i}):=(\Delta, -d\theta_{i}|_{\mathfrak{t}})$ gives a normal $\sigma$-system of $\Delta$. Hence $(\Delta,\sigma_{1},\sigma_{2})$ becomes a normal double $\sigma$-system. We call $(\Delta,\sigma_{1},\sigma_{2})$ the double $\sigma$-system of $(G,\theta_{1},\theta_{2})$
with respect to $\mathfrak{t}$. We will show that the double $\sigma$-system $(\Delta,\sigma_{1},\sigma_{2})$ is uniquely determined up to isomorphism, that is, we have the following lemma.
\begin{lem}\label{lem:dsig_isomorphic} Let $\mathfrak{t}'$ be another maximal abelian subalgebra of $\mathfrak{g}$ such that $\mathfrak{t}'\cap\mathfrak{m}_{i}$ is a maximal abelian subspace of $\mathfrak{m}_{i}$ and $(\Delta',\sigma_{1}',\sigma_{2}')$ denote the corresponding normal double $\sigma$-system. Then we have $(\Delta,\sigma_{1},\sigma_{2})\sim(\Delta',\sigma_{1}',\sigma_{2}')$. \end{lem}
\begin{proof} By the choices of $\mathfrak{t}$ and $\mathfrak{t}'$, there exist $\nu_{1}\in\mathrm{Int}(\mathfrak{k}_{1})$ and $\nu_{2}\in\mathrm{Int}(\mathfrak{k}_{2})$ satisfying $\nu_{1}(\mathfrak{t})=\mathfrak{t}'=\nu_{2}(\mathfrak{t})$ (cf.~\cite[Proposition 5]{Sugiura}). In particular, $\nu_{1}^{-1}\nu_{2}(\mathfrak{t})=\mathfrak{t}$ holds. Thus we obtain \begin{align*} (\Delta',\sigma_{1}',\sigma_{2}')
&=(\nu_{1}(\Delta),-\nu_{1}d\theta_{1}\nu_{1}^{-1}|_{\nu_{1}(\mathfrak{t})},
-\nu_{1}(\nu_{1}^{-1}d\theta_{2}\nu_{1})\nu_{1}^{-1}|_{\nu_{1}(\mathfrak{t})})\\
&\sim(\Delta,-d\theta_{1}|_{\mathfrak{t}},(\nu_{1}^{-1}\nu_{2})|_{\mathfrak{t}}(-d\theta_{2}|_{\mathfrak{t}})
(\nu_{2}^{-1}\nu_{1})|_{\mathfrak{t}})\\
&\sim(\Delta,\sigma_{1},\sigma_{2}). \end{align*} Hence we have the assertion. \end{proof}
\begin{lem}\label{lem:CST_dsigma2} Let $(G,\theta_{1}',\theta_{2}')\sim (G,\theta_{1},\theta_{2})$ be another compact symmetric triad and $\mathfrak{t}'$ be a maximal abelian subalgebra of $\mathfrak{g}$ such that $\mathfrak{t}'\cap\mathfrak{m}_{i}'$ $(i=1,2)$ is a maximal abelian subspace of $\mathfrak{m}_{i}'$. Let $(\Delta',\sigma_{1}',\sigma_{2}')$ be the corresponding normal double $\sigma$-system of $(G,\theta_{1}',\theta_{2}')$. Then we have $(\Delta,\sigma_{1},\sigma_{2})\sim (\Delta',\sigma_{1}',\sigma_{2}')$. \end{lem}
\begin{proof} It is sufficient to consider the case when $\theta_{1}'=\theta_{1}$ and $\theta_{2}'=\tau\theta_{2}\tau^{-1}$ for some $\tau\in\mathrm{Int}(G)$. Then $\tau(\mathfrak{t})$ is $\theta_{2}'$-invariant and $\tau(\mathfrak{t})\cap\mathfrak{m}_{2}'$ is a maximal abelian subspace of $\mathfrak{m}_{2}'$. Furthermore, $\tau(\Delta)$ is the root system of $\mathfrak{g}$
with respect to $\tau(\mathfrak{t})$. If we put $\sigma_{1}''=-d\theta_{1}|_{\tau(\mathfrak{t})}$ and
$\sigma_{2}''=-d\theta_{2}'|_{\tau(\mathfrak{t})}$, then $(\tau(\Delta),\sigma_{1}'', \sigma_{2}'')$ gives the normal double $\sigma$-system of $(G,\theta_{1}',\theta_{2}')$ corresponding to $\tau(\mathfrak{t})$. By Lemma \ref{lem:dsig_isomorphic} we have $(\Delta',\sigma_{1}',\sigma_{2}')\sim(\tau(\Delta),\sigma_{1}'',\sigma_{2}'')$. On the other hand, we have $(\tau(\Delta),\sigma_{1}'',\sigma_{2}'')\sim(\Delta,\sigma_{1},\sigma_{2})$. Indeed, there exists $\nu_{1}\in\mathrm{Int}(\mathfrak{k}_{1})$ such that $\nu_{1}(\mathfrak{t})=\tau(\mathfrak{t})$, from which $\tau^{-1}\nu_{1}(\mathfrak{t})=\mathfrak{t}$ holds. Hence we have \begin{equation*} (\tau(\Delta),\sigma_{1}'',\sigma_{2}'') \equiv
(\Delta,-\tau^{-1}d\theta_{1}\tau|_{\mathfrak{t}},-d\theta_{2}|_{\mathfrak{t}})
\sim(\Delta,(\tau^{-1}\nu_{1})|_{\mathfrak{t}}(-d\theta_{1}|_{\mathfrak{t}})(\nu_{1}^{-1}\tau)|_{\mathfrak{t}},-d\theta_{2}|_{\mathfrak{t}}). \end{equation*} We have completed the proof. \end{proof}
\subsubsection{Another interpretation of the rank for compact symmetric triads} As shown in Section \ref{sec:cst_fundamental}, the rank of a compact symmetric triad $(G,\theta_{1},\theta_{2})$ coincides with the cohomogeneity of the Hermann action induced from $(G,\theta_{1},\theta_{2})$. We give another interpretation of the rank in terms of the double $\sigma$-system of $(G,\theta_{1},\theta_{2})$. More precisely, we prove the following proposition.
\begin{pro}\label{pro:rank=maxdim} Let $\mathfrak{t}$ be a maximal abelian subalgebra of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{m}_{i}$
$(i=1,2)$ is a maximal abelian subspace of $\mathfrak{m}_{i}$, and $(\Delta,\sigma_{1},\sigma_{2}):=(\Delta,-d\theta_{1}|_{\mathfrak{t}},-d\theta_{2}|_{\mathfrak{t}})$. Then we have$:$ \[ \mathrm{rank}(G,\theta_{1},\theta_{2}) =\max\{\dim(\mathfrak{t}^{\sigma_{1}}\cap s\mathfrak{t}^{\sigma_{2}})\mid s\in W(\Delta)\}. \] \end{pro}
\begin{proof} First, we prove \begin{equation}\label{eqn:rankgeqmax} \mathrm{rank}(G,\theta_{1},\theta_{2}) \geq \max\{\dim(\mathfrak{t}^{\sigma_{1}}\cap s\mathfrak{t}^{\sigma_{2}})\mid s\in W(\Delta)\}. \end{equation} Let $s$ be in $W(\Delta)$
and $g$ be an element of $G$ with $\mathrm{Ad}(g)|_{\mathfrak{t}}=s$. If we put $\theta_{2}'=\tau_{g}\theta_{2}\tau_{g}^{-1}$, then $(G,\theta_{1},\theta_{2}')$ is a compact symmetric triad which is isomorphic to $(G,\theta_{1},\theta_{2})$. Furthermore, we find that $\mathfrak{t}^{\sigma_{1}}\cap s\mathfrak{t}^{\sigma_{2}}=\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2}')$ is an abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}'$. Hence we have \[ \mathrm{rank}(G,\theta_{1},\theta_{2}) =\mathrm{rank}(G,\theta_{1},\theta_{2}') \geq \dim(\mathfrak{t}^{\sigma_{1}}\cap s\mathfrak{t}^{\sigma_{2}}). \] By the arbitrariness of $s$, this yields \eqref{eqn:rankgeqmax}.
Next, we show the reverse inequality of \eqref{eqn:rankgeqmax}. It follows from Lemma \ref{lem:lemm2.5} that there exist a compact symmetric triad $(G,\theta_{1}',\theta_{2}')\sim(G,\theta_{1},\theta_{2})$ and a maximal abelian subalgebra $\mathfrak{t}'$ such that $\mathfrak{t}'\cap\mathfrak{m}_{i}'$ ($i=1,2$) is a maximal abelian subspace of $\mathfrak{m}_{i}'$, and that $\mathfrak{t}'\cap(\mathfrak{m}_{1}'\cap\mathfrak{m}_{2}')$
is a maximal abelian subspace of $\mathfrak{m}_{1}'\cap\mathfrak{m}_{2}'$. We write $(\Delta',\sigma_{1}',\sigma_{2}'):=(\Delta',-d\theta_{1}'|_{\mathfrak{t}'},-d\theta_{2}'|_{\mathfrak{t}'})$ as the double $\sigma$-system of $(G,\theta_{1}',\theta_{2}')$. From Lemma \ref{lem:CST_dsigma2}, $(G,\theta_{1},\theta_{2})\sim(G,\theta_{1}',\theta_{2}')$ yields $(\Delta,\sigma_{1},\sigma_{2})\sim(\Delta',\sigma_{1}',\sigma_{2}')$. Then there exist an isomorphism $\varphi:\Delta\to \Delta'$ of root systems and $s\in W(\Delta)$ satisfying $\sigma_{1}'=\varphi\sigma_{1}\varphi^{-1}$ and $\sigma_{2}'=\varphi s\sigma_{2}s^{-1} \varphi^{-1}$, from which we get \[ \dim(\mathfrak{t}'^{\sigma_{1}'}\cap\mathfrak{t}'^{\sigma_{2}'}) =\dim(\varphi(\mathfrak{t}^{\sigma_{1}})\cap \varphi(s\mathfrak{t}^{\sigma_{2}})) =\dim(\mathfrak{t}^{\sigma_{1}}\cap s\mathfrak{t}^{\sigma_{2}}). \] Hence we obtain \[ \mathrm{rank}(G,\theta_{1},\theta_{2}) =\dim(\mathfrak{t}'^{\sigma_{1}'}\cap\mathfrak{t}'^{\sigma_{2}'}) \leq \max\{\dim(\mathfrak{t}^{\sigma_{1}}\cap s\mathfrak{t}^{\sigma_{2}})\mid s\in W(\Delta)\}. \] From the above we have completed the proof. \end{proof}
\subsection{Quasi-canonical forms in compact symmetric triads}\label{sec:cst_qcan}
\subsubsection{Definition and existence for quasi-canonical compact symmetric triads}
Let us introduce the notion of a quasi-canonical compact symmetric triad as follows.
\begin{dfn}\label{dfn:cst_qcf} A compact symmetric triad $(G,\theta_{1},\theta_{2})$ is said to be \textit{quasi-canonical}, if there exists a maximal abelian subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ which satisfies the following conditions: \begin{enumerate} \item $\mathfrak{t}\cap\mathfrak{m}_{i}$ is a maximal abelian subspace of $\mathfrak{m}_{i}$ for $i=1,2$. \item The normal double $\sigma$-system
$(\Delta,\sigma_{1},\sigma_{2}):=(\Delta,-d\theta_{1}|_{\mathfrak{t}},-d\theta_{2}|_{\mathfrak{t}})$ is canonical, that is, there exists a $(\sigma_{1},\sigma_{2})$-fundamental system of $\Delta$. \end{enumerate} Then, $\mathfrak{t}$ is said to be quasi-canonical with respect to $(G,\theta_{1},\theta_{2})$. A \textit{quasi-canonical form} of $[(G,\theta_{1},\theta_{2})]$ is a representative $(G,\theta_{1}',\theta_{2}')$ of the isomorphism class $[(G,\theta_{1},\theta_{2})]$ such that $(G,\theta_{1}',\theta_{2}')$ is quasi-canonical as a compact symmetric triad. \end{dfn}
\begin{pro}\label{pro:exist_qcan} For a compact symmetric triad $(G,\theta_{1},\theta_{2})$, there exists a quasi-canonical compact symmetric triad $(G,\theta_{1},\theta_{2}')\sim(G,\theta_{1},\theta_{2})$. \end{pro}
\begin{proof} Let $(G,\theta_{1},\theta_{2})$ be a compact symmetric triad and $\mathfrak{t}$ be a maximal abelian subalgebra of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{m}_{i}$ is a maximal abelian subspace of $\mathfrak{m}_{i}$. Denote by $(\Delta,\sigma_{1},\sigma_{2})$ the corresponding normal double $\sigma$-system of $(G,\theta_{1},\theta_{2})$. Let $\Pi_{i}$ be a $\sigma_{i}$-fundamental system of $\Delta$. Since $N(\mathfrak{t})$ acts transitively on the set of fundamental systems of $\Delta$, there exists $g\in N(\mathfrak{t})$ satisfying $\Pi_{1}=\mathrm{Ad}(g)(\Pi_{2})$. If we put $\theta_{2}':=\tau_{g}\theta_{2}\tau_{g}^{-1}$, then it is verified that $(G,\theta_{1},\theta_{2}')\sim(G,\theta_{1},\theta_{2})$ is quasi-canonical. \end{proof}
In Section \ref{sec:cst_cf}, we will define the notion of canonicality for compact symmetric triads, which is a stronger condition than quasi-canonicality (see Definition \ref{dfn:cst_can}). Furthermore, in the case when $G$ is simple, we will prove the existence of a representative of $[(G,\theta_{1},\theta_{2})]$ which is canonical as a compact symmetric triad (see Theorem \ref{thm:cst_exist_can}).
\subsubsection{Commutative compact symmetric triads are quasi-canonical}
The following proposition shows that the notion of a quasi-canonical compact symmetric triad generalizes that of a commutative one.
\begin{pro}\label{pro:ccst_qcan} Any commutative compact symmetric triad is quasi-canonical. \end{pro}
The proof of this proposition consists of the following three lemmas, which are essentially due to Oshima-Sekiguchi (\cite{OS}). Roughly speaking, the first lemma states that Lemma \ref{lem:lemm2.5} holds without changing representatives of $[(G,\theta_{1},\theta_{2})]$ in the case when $(G,\theta_{1},\theta_{2})$ is commutative.
\begin{lem}\label{lem:comm_quasican} Assume that $(G,\theta_{1},\theta_{2})$ is commutative. Then there exists a maximal abelian subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{m}_{i}$ and $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$ are maximal abelian subspaces of $\mathfrak{m}_{i}$ and $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$, respectively. In particular, $(G,\theta_{1},\theta_{2})$ satisfies the condition {\rm (1)} as in Definition {\rm \ref{dfn:cst_qcf}}. \end{lem}
\begin{proof} From $\theta_{1}\theta_{2}=\theta_{2}\theta_{1}$ we have $\mathfrak{g} =(\mathfrak{k}_{1}\cap\mathfrak{k}_{2})\oplus(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})\oplus (\mathfrak{k}_{1}\cap\mathfrak{m}_{2})\oplus(\mathfrak{m}_{1}\cap\mathfrak{k}_{2})$. Let $\mathfrak{a}$ be a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$. Let $\mathfrak{a}_{i}$ be a maximal abelian subspace of $\mathfrak{m}_{i}$ containing $\mathfrak{a}$. By an argument similar to the proofs of \cite[Lemmas (2.2) and (2.4)]{OS}, it is shown that $\mathfrak{a}_{1}$ and $\mathfrak{a}_{2}$ are $(\theta_{1},\theta_{2})$-invariant and that $[\mathfrak{a}_{1},\mathfrak{a}_{2}]=\{0\}$. In particular, $\mathfrak{a}_{1}+\mathfrak{a}_{2}$ is an abelian subalgebra of $\mathfrak{g}$. Let $\mathfrak{t}$ be a maximal abelian subalgebra of $\mathfrak{g}$ containing $\mathfrak{a}_{1}+\mathfrak{a}_{2}$. Since $\mathfrak{t}$ contains $\mathfrak{a}_{1}$ and $\mathfrak{a}_{2}$, it is shown that $\mathfrak{t}$ is $(\theta_{1},\theta_{2})$-invariant. We also obtain $\mathfrak{t}\cap\mathfrak{m}_{i}=\mathfrak{a}_{i}$ and $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})=\mathfrak{a}$. Hence we get the assertion. \end{proof}
\begin{lem}\label{lem:a=0a_i=0} Assume that $(G,\theta_{1},\theta_{2})$ is commutative. Let $\mathfrak{t}$ be a maximal abelian subalgebra of $\mathfrak{g}$ which satisfies the condition stated in Lemma $\ref{lem:comm_quasican}$. Set $\mathfrak{a}_{2}:=\mathfrak{t}\cap\mathfrak{m}_{2}$ and $\mathfrak{a}:=\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$. Then, we have the following$:$ \begin{enumerate} \item We denote by $\Sigma_{2}$ the restricted root system of $(G,\theta_{2})$ with respect to $\mathfrak{a}_{2}$. For $\lambda\in\Sigma_{2}$ with $\INN{\lambda}{\mathfrak{a}}=\{0\}$, we have $\mathfrak{g}(\mathfrak{a}_{2},\lambda)\subset\mathfrak{k}_{1}^{\mathbb{C}}$, where \[ \mathfrak{g}(\mathfrak{a}_{2},\lambda) :=\{X\in\mathfrak{g}^{\mathbb{C}}\mid [H,X]=\sqrt{-1}\INN{\lambda}{H}X,H\in\mathfrak{a}_{2}\}. \]
\item We denote by $\Delta$ the root system of $\mathfrak{g}$ with respect to $\mathfrak{t}$. For $\alpha\in\Delta$ with $\INN{\alpha}{\mathfrak{a}}=\{0\}$, if $\INN{\alpha}{\mathfrak{a}_{2}}\neq\{0\}$ holds, then we obtain $\INN{\alpha}{\mathfrak{a}_{1}}=\{0\}$. \end{enumerate} \end{lem}
We omit the proof since one can prove this lemma by an argument similar to the proofs of \cite[Lemmas (2.7) and (2.8)]{OS}.
\begin{lem}\label{lem:ccst_t_std} Retain the notations $(G,\theta_{1},\theta_{2})$ and $\mathfrak{t}$ as in Lemma $\ref{lem:comm_quasican}$. Let $\Delta$ be the root system of $\mathfrak{g}$
with respect to $\mathfrak{t}$, and $\sigma_{i}:=-d\theta_{i}|_{\mathfrak{t}}$ for $i=1,2$. Then there exists a $(\sigma_{1},\sigma_{2})$-fundamental system of $\Delta$. Hence $(G,\theta_{1},\theta_{2})$ satisfies the condition {\rm (2)} as in Definition {\rm \ref{dfn:cst_qcf}}. \end{lem}
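For the proof, we recall the lexicographic ordering on $\mathfrak{t}$ determined by an ordered basis $(v_{1},\dotsc,v_{n})$: an element $H=\sum_{j=1}^{n}c_{j}v_{j}\in\mathfrak{t}$ satisfies $H>0$ if and only if there exists an index $j_{0}$ with $c_{j}=0$ for all $j<j_{0}$ and $c_{j_{0}}>0$. A root $\alpha\in\Delta$, regarded as an element of $\mathfrak{t}$ via the inner product, is called positive if $\alpha>0$ with respect to this ordering.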
\begin{proof} Let $\mathfrak{a}_{i}:=\mathfrak{t}\cap\mathfrak{m}_{i}$ ($i=1,2$), and $\mathfrak{a}:=\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$. Then $\mathfrak{t}$ is decomposed into $\mathfrak{t} =\mathfrak{a} \oplus (\mathfrak{a}_{1}\cap\mathfrak{k}_{2}) \oplus (\mathfrak{a}_{2}\cap\mathfrak{k}_{1}) \oplus (\mathfrak{t}\cap(\mathfrak{k}_{1}\cap\mathfrak{k}_{2}))$. Take an ordered basis $\{X_{j}, Y_{k}, Z_{l}, W_{r}\}$ of $\mathfrak{t}$ such that $\{X_{j}\}$, $\{Y_{k}\}$, $\{Z_{l}\}$ and $\{W_{r}\}$ are bases of $\mathfrak{a}$, $\mathfrak{a}_{1}\cap\mathfrak{k}_{2}$, $\mathfrak{a}_{2}\cap\mathfrak{k}_{1}$ and $\mathfrak{t}\cap(\mathfrak{k}_{1}\cap\mathfrak{k}_{2})$, respectively. We denote by $\Delta^{+}$ the set of positive roots in $\Delta$ with respect to the lexicographic ordering $>$ on $\mathfrak{t}$ determined by this basis. We obtain a fundamental system $\Pi$ of $\Delta$ such that $\Delta^{+}=\{\sum_{\alpha\in\Pi}m_{\alpha}\alpha\in\Delta\mid m_{\alpha}\in\mathbb{Z}_{\geq0}\}$. An argument similar to that in \cite[p.~453]{OS} shows that $\Pi$ becomes a $(\sigma_{1},\sigma_{2})$-fundamental system of $\Delta$ by means of Lemma \ref{lem:a=0a_i=0}. \end{proof}
From the above argument we conclude that Proposition \ref{pro:ccst_qcan} holds.
\subsection{Double Satake diagrams for compact symmetric triads}\label{sec:cst_2satake_const}
Let us explain our construction of the double Satake diagram from a quasi-canonical compact symmetric triad. Let $(G,\theta_{1},\theta_{2})$ be a quasi-canonical compact symmetric triad and $\mathfrak{t}$ be a quasi-canonical maximal abelian subalgebra of $\mathfrak{g}$ with respect to $(G,\theta_{1},\theta_{2})$. Denote by $\Delta$ the root system of $\mathfrak{g}$
with respect to $\mathfrak{t}$. Then we obtain a normal double $\sigma$-system $(\Delta,\sigma_{1},\sigma_{2}):=(\Delta,-d\theta_{1}|_{\mathfrak{t}},-d\theta_{2}|_{\mathfrak{t}})$. It follows from the quasi-canonicality of $(G,\theta_{1},\theta_{2})$ that $(\Delta,\sigma_{1},\sigma_{2})$ becomes canonical in the sense of Definition \ref{dfn:dsig_canonical}. We define the double Satake diagram of $(G,\theta_{1},\theta_{2})$ as that of $(\Delta,\sigma_{1},\sigma_{2})$. Then the definition of the double Satake diagram of $(G,\theta_{1},\theta_{2})$ is independent of the choice of $\mathfrak{t}$. Indeed, we can show the following lemma.
\begin{lem} Let $\mathfrak{t}'$ be another quasi-canonical maximal abelian subalgebra of
$\mathfrak{g}$ with respect to $(G,\theta_{1},\theta_{2})$. We denote by $(\Delta',\sigma_{1}',\sigma_{2}'):=(\Delta',-d\theta_{1}|_{\mathfrak{t}'},-d\theta_{2}|_{\mathfrak{t}'})$ the corresponding canonical normal double $\sigma$-system of $(G,\theta_{1},\theta_{2})$. Then we have $(\Delta,\sigma_{1},\sigma_{2})\equiv(\Delta',\sigma_{1}',\sigma_{2}')$. Therefore the double Satake diagram of $(\Delta,\sigma_{1},\sigma_{2})$ is isomorphic to that of $(\Delta',\sigma_{1}',\sigma_{2}')$. \end{lem}
The proof is omitted since it is immediate from Theorem \ref{thm:dsig_dsatake_equiv} and Lemma \ref{lem:dsig_isomorphic}.
We next show that the double Satake diagram of a quasi-canonical compact symmetric triad is independent of the choice of the representative of its isomorphism class, namely, we have the following proposition, which is immediate from Theorem \ref{thm:dsig_dsatake_equiv} and Lemma \ref{lem:CST_dsigma2}.
\begin{pro}\label{pro:cst_dstake_determ} Let $(G,\theta_{1}',\theta_{2}')\sim(G,\theta_{1},\theta_{2})$ be another quasi-canonical compact symmetric triad and $(\Delta',\sigma_{1}',\sigma_{2}')$ be the corresponding canonical normal double $\sigma$-system of $(G,\theta_{1}',\theta_{2}')$. Then we have $(\Delta,\sigma_{1},\sigma_{2})\equiv(\Delta',\sigma_{1}',\sigma_{2}')$. \end{pro}
It follows from Propositions \ref{pro:exist_qcan} and \ref{pro:cst_dstake_determ} that, for a compact symmetric triad $(G,\theta_{1},\theta_{2})$, its isomorphism class $[(G,\theta_{1},\theta_{2})]$ uniquely determines the double Satake diagram up to isomorphism. In fact, the converse also holds as shown in the following theorem.
\begin{thm}\label{thm:cst_dsatake_sim} Let $(G,\theta_{1},\theta_{2})$ and $(G,\theta_{1}',\theta_{2}')$ be two compact symmetric triads. We write $(\Delta,\sigma_{1},\sigma_{2})$ and $(\Delta',\sigma_{1}',\sigma_{2}')$ as the corresponding canonical normal double $\sigma$-systems of the isomorphism classes $[(G,\theta_{1},\theta_{2})]$ and $[(G,\theta_{1}',\theta_{2}')]$, respectively. We also write $(S_{1},S_{2})$ and $(S_{1}',S_{2}')$ as the double Satake diagrams of $(\Delta,\sigma_{1},\sigma_{2})$ and $(\Delta',\sigma_{1}',\sigma_{2}')$, respectively. Then the following conditions are equivalent$:$ \begin{enumerate} \item $(G,\theta_{1},\theta_{2})$ and $(G,\theta_{1}',\theta_{2}')$ are locally isomorphic, namely, there exist $\varphi\in\mathrm{Aut}(\mathfrak{g})$ and $\tau\in\mathrm{Int}(\mathfrak{g})$ satisfying $d\theta_{1}'=\varphi d\theta_{1}\varphi^{-1}$ and $d\theta_{2}'=\tau\varphi d\theta_{2}\varphi^{-1}\tau^{-1}$. \item $(\Delta,\sigma_{1},\sigma_{2})\sim(\Delta',\sigma_{1}',\sigma_{2}')$. \item $(S_{1},S_{2})\sim(S_{1}',S_{2}')$. \end{enumerate} In addition, in the case when $G$ is simply-connected or when $G$ is the adjoint group, $(G,\theta_{1},\theta_{2})$ and $(G,\theta_{1}',\theta_{2}')$ are isomorphic if and only if one of the above conditions $(1)$--$(3)$ holds. \end{thm}
\begin{proof} The implication $(1)\Rightarrow(2)$ follows from Propositions \ref{pro:exist_qcan} and \ref{pro:cst_dstake_determ}. We obtain $(2)\Leftrightarrow(3)$ from Theorem \ref{thm:dsig_dsatake_equiv}.
We prove the implication $(2)\Rightarrow(1)$. Without loss of generality we may assume that $(G,\theta_{1},\theta_{2})$ and $(G,\theta_{1}',\theta_{2}')$
are quasi-canonical. We write $(\Delta,\sigma_{1},\sigma_{2})=(\Delta,-d\theta_{1}|_{\mathfrak{t}},-d\theta_{2}|_{\mathfrak{t}})$
and $(\Delta',\sigma_{1}',\sigma_{2}')=(\Delta',-d\theta'_{1}|_{\mathfrak{t}'},-d\theta_{2}'|_{\mathfrak{t}'})$. It follows from Theorem \ref{thm:dsig_dsatake_equiv} that there exists an isomorphism $\varphi:\mathfrak{t}\to\mathfrak{t}'$ of root systems between $\Delta$ and $\Delta'$ satisfying $\sigma_{i}'=\varphi\sigma_{i}\varphi^{-1}$ for $i=1,2$.
Let $\tilde{\varphi}$
be an automorphism of $\mathfrak{g}$ with $\tilde{\varphi}|_{\mathfrak{t}}=\varphi$. Since $d\theta_{1}'|_{\mathfrak{t}'}=\tilde{\varphi}d\theta_{1}\tilde{\varphi}^{-1}|_{\mathfrak{t}'}$ holds, it follows from Theorem \ref{thm:araki} that there exists $H_{1}'\in\mathfrak{t}'\cap\mathfrak{m}_{1}'$ such that \begin{equation}\label{eqn:theta1'theta1} d\theta_{1}'=e^{\mathrm{ad}(H_{1}')}\tilde{\varphi}d\theta_{1}\tilde{\varphi}^{-1}e^{-\mathrm{ad}(H_{1}')}. \end{equation} In addition, from
$d\theta_{2}'|_{\mathfrak{t}'}=e^{\mathrm{ad}(H_{1}')}\tilde{\varphi}d\theta_{2}\tilde{\varphi}^{-1}e^{-\mathrm{ad}(H_{1}')}|_{\mathfrak{t}'}$ there also exists $H_{2}'\in\mathfrak{t}'\cap\mathfrak{m}_{2}'$ such that \begin{equation}\label{eqn:theta2'theta2} d\theta_{2}'=e^{\mathrm{ad}(H_{2}')}e^{\mathrm{ad}(H_{1}')}\tilde{\varphi}d\theta_{2}\tilde{\varphi}^{-1}e^{-\mathrm{ad}(H_{1}')}e^{-\mathrm{ad}(H_{2}')}. \end{equation} By combining (\ref{eqn:theta1'theta1}) and (\ref{eqn:theta2'theta2}), we find that $(G,\theta_{1},\theta_{2})$ and $(G,\theta_{1}',\theta_{2}')$ are locally isomorphic. Therefore we have completed the proof. \end{proof}
\subsection{The classification of compact symmetric triads by double Satake diagrams}\label{sec:cst_class_2satake}
In this subsection, we consider the classification problem for compact symmetric triads at the Lie algebra level.
\subsubsection{Reduction of the problem}
A compact symmetric triad $(\mathfrak{g},\theta_{1},\theta_{2})$ is said to be \textit{irreducible}, if it does not admit non-trivial $(\theta_{1},\theta_{2})$-invariant ideals of $\mathfrak{g}$ (cf.~\cite[p.~48]{Matsuki}). Any compact symmetric triad $(\mathfrak{g},\theta_{1},\theta_{2})$ is decomposed into irreducible ones, namely, there exist unique irreducible compact symmetric triads $(\mathfrak{g}^{(1)},\theta_{1}^{(1)},\theta_{2}^{(1)})$, $\dotsc$, $(\mathfrak{g}^{(k)},\theta_{1}^{(k)},\theta_{2}^{(k)})$ such that $\mathfrak{g}=\mathfrak{g}^{(1)} \oplus\dotsb\oplus\mathfrak{g}^{(k)}$ and that $\theta_{i}=\theta_{i}^{(j)}$ holds on $\mathfrak{g}^{(j)}$ for $i=1,2$ and $j=1,\dotsc,k$. Then we write \[ (\mathfrak{g},\theta_{1},\theta_{2}) =(\mathfrak{g}^{(1)},\theta_{1}^{(1)},\theta_{2}^{(1)})\oplus \dotsb\oplus(\mathfrak{g}^{(k)},\theta_{1}^{(k)},\theta_{2}^{(k)}). \] This decomposition is called the irreducible decomposition of $(\mathfrak{g},\theta_{1},\theta_{2})$.
The equivalence relation $\sim$ is compatible with the irreducibility of a compact symmetric triad, that is, if $(\mathfrak{g},\theta_{1},\theta_{2})\sim(\mathfrak{g},\theta_{1}',\theta_{2}')$ and $(\mathfrak{g},\theta_{1},\theta_{2})$ is irreducible, then $(\mathfrak{g},\theta_{1}',\theta_{2}')$ is also irreducible. This means that the classification problem for compact symmetric triads reduces to that for irreducible ones. Clearly, $(\mathfrak{g},\theta_{1},\theta_{2})$ is irreducible if $\mathfrak{g}$ is simple. Irreducible compact symmetric triads $(\mathfrak{g},\theta_{1},\theta_{2})$ can be classified depending on whether $\mathfrak{g}$ is simple or not. In the present paper, we only deal with the classification problem for compact symmetric triads $(\mathfrak{g},\theta_{1},\theta_{2})$ such that $\mathfrak{g}$ is simple.
Let $\mathfrak{g}$ be any fixed compact simple Lie algebra. We write $\mathrm{Inv}(\mathfrak{g})$ as the set of all the involutions on $\mathfrak{g}$. We will explain our strategy to find all elements of the set $\mathcal{T}(\mathfrak{g}):=\{[(\mathfrak{g},\theta_{1},\theta_{2})] \mid \theta_{1},\theta_{2}\in\mathrm{Inv}(\mathfrak{g})\}$. Denote by $\mathrm{Inv}(\mathfrak{g})/\mathrm{Aut}(\mathfrak{g})$ the set of conjugacy classes in $\mathrm{Aut}(\mathfrak{g})$ of the elements in $\mathrm{Inv}(\mathfrak{g})$. Let $[\theta_{i}]$ be in $\mathrm{Inv}(\mathfrak{g})/\mathrm{Aut}(\mathfrak{g})$ for $i=1,2$, and $\mathfrak{k}_{i}$ denote the fixed point subalgebra of $\theta_{i}$ in $\mathfrak{g}$.
We set \[\mathcal{T}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2}) := \{ [(\mathfrak{g},\varphi_{1}\theta_{1}\varphi_{1}^{-1},\varphi_{2}\theta_{2}\varphi_{2}^{-1})] \mid \varphi_{1},\varphi_{2}\in\mathrm{Aut}(\mathfrak{g}) \}.\] Then $\mathcal{T}(\mathfrak{g})$ has the following decomposition: \[ \mathcal{T}(\mathfrak{g}) =\bigcup_{[\theta_{1}],[\theta_{2}]\in\mathrm{Inv}(\mathfrak{g})/\mathrm{Aut}(\mathfrak{g})} \mathcal{T}(\mathfrak{g}, \mathfrak{k}_{1}, \mathfrak{k}_{2})\quad (\text{disjoint union}). \] Thus, it is sufficient to determine $\mathcal{T}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})$ for each $[\theta_{1}],[\theta_{2}]\in\mathrm{Inv}(\mathfrak{g})/\mathrm{Aut}(\mathfrak{g})$.
For this purpose we make use of the classification of $\mathrm{Inv}(\mathfrak{g})/\mathrm{Aut}(\mathfrak{g})$ and a one-to-one correspondence between $\mathcal{T}(\mathfrak{g}, \mathfrak{k}_{1}, \mathfrak{k}_{2})$ and the set $\mathcal{DS}(S_{1},S_{2})$ which is defined as follows: We may assume that $(\mathfrak{g},\theta_{1},\theta_{2})$ is quasi-canonical (cf.~Proposition \ref{pro:exist_qcan}). Let $(\Delta,\sigma_{1},\sigma_{2})$ be the double $\sigma$-system of $(\mathfrak{g},\theta_{1},\theta_{2})$ and $\Pi$ be a $(\sigma_{1},\sigma_{2})$-fundamental system of $\Delta$. We write $(S_{1},S_{2})=(S(\Pi,\Pi_{1,0},p_{1}),S(\Pi,\Pi_{2,0},p_{2}))$ as the double Satake diagram associated with $\Pi$. We define \begin{align} \mathcal{DS}(S_{1},S_{2}) &=\{[(\psi_{1}\cdot S_{1},\psi_{2}\cdot S_{2})]\mid \psi_{1},\psi_{2}\in\mathrm{Aut}(\Pi)\}\notag\\ &=\{[(S_{1},\psi\cdot S_{2})]\mid \psi\in\mathrm{Aut}(\Pi)\},\label{eqn:DSS1S2} \end{align} where $\psi_{i}\cdot S_{i}$ is the Satake diagram defined by $\psi_{i}\cdot S_{i}= S(\Pi,\psi_{i}(\Pi_{i,0}), \psi_{i}\cdot p_{i})$
with $\psi_{i}\cdot p_{i}=\psi_{i} p_{i} \psi_{i}^{-1}|_{\Pi-\psi_{i}(\Pi_{i,0})}$. Here, we describe the one-to-one correspondence between $\mathcal{T}(\mathfrak{g}, \mathfrak{k}_{1}, \mathfrak{k}_{2})$ and $\mathcal{DS}(S_{1},S_{2})$. Let $[(\mathfrak{g},\theta_{1}',\theta_{2}')]$ be in $\mathcal{T}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})$ such that $(\mathfrak{g},\theta_{1}',\theta_{2}')$ is quasi-canonical. Denote by $(S_{1}',S_{2}')$ the double Satake diagram of $(\mathfrak{g},\theta_{1}',\theta_{2}')$. By Theorem \ref{thm:cps_sigma_satake_equiv} it follows from $(\mathfrak{g},\theta_{i})\simeq(\mathfrak{g},\theta_{i}')$ that there exists an isomorphism $\psi_{i}:S_{i}\to S_{i}'$ of Satake diagrams. Then we have $(S_{1}',S_{2}') =(\psi_{1}\cdot S_{1},\psi_{2}\cdot S_{2}) \sim (S_{1},\psi_{1}^{-1}\psi_{2}\cdot S_{2})$, so that $[(S_{1}',S_{2}')]=[(S_{1},\psi_{1}^{-1}\psi_{2}\cdot S_{2})]$ is in $\mathcal{DS}(S_{1},S_{2})$ from \eqref{eqn:DSS1S2}. Furthermore, it can be shown that the following correspondence is well-defined in terms of Theorem \ref{thm:cst_dsatake_sim}, $(1) \Rightarrow (3)$: \begin{equation}\label{eqn:corrTtoDS} \mathcal{T}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})\to \mathcal{DS}(S_{1},S_{2});~ [(\mathfrak{g},\theta_{1}',\theta_{2}')]\mapsto [(S_{1}',S_{2}')]. \end{equation}
\begin{lem}\label{lem:cstdsatabij} The correspondence \eqref{eqn:corrTtoDS} is bijective. \end{lem}
\begin{proof} We first prove that \eqref{eqn:corrTtoDS} is injective. Let $[(\mathfrak{g},\theta_{1}',\theta_{2}')], [(\mathfrak{g},\theta_{1}'',\theta_{2}'')]$ be in $\mathcal{T}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})$ such that $(\mathfrak{g},\theta_{1}',\theta_{2}')$ and $(\mathfrak{g},\theta_{1}'',\theta_{2}'')$ are quasi-canonical. We write $(S_{1}',S_{2}')$ and $(S_{1}'',S_{2}'')$ as the double Satake diagrams of $(\mathfrak{g},\theta_{1}',\theta_{2}')$ and $(\mathfrak{g},\theta_{1}'',\theta_{2}'')$, respectively. If $[(S_{1}',S_{2}')]=[(S_{1}'',S_{2}'')]$ holds, then we obtain $[(\mathfrak{g},\theta_{1}',\theta_{2}')] =[(\mathfrak{g},\theta_{1}'',\theta_{2}'')]$ from Theorem \ref{thm:cst_dsatake_sim}, $(3) \Rightarrow (1)$.
Next, we prove that \eqref{eqn:corrTtoDS} is surjective. Let $[(S_{1}',S_{2}')]$ be in $\mathcal{DS}(S_{1},S_{2})$. Then, for each $i=1,2$, there exists $\psi_{i}\in\mathrm{Aut}(\Pi)$ satisfying $S_{i}'=\psi_{i}\cdot S_{i}$. Let $\varphi_{i}$ be an automorphism of $\mathfrak{g}$
such that $\varphi_{i}|_{\Pi}=\psi_{i}$ holds. Then $(\mathfrak{g},\varphi_{1}\theta_{1}\varphi_{1}^{-1},\varphi_{2}\theta_{2}\varphi_{2}^{-1})$ gives a compact symmetric triad. Let $(\Delta,\sigma_{1}',\sigma_{2}')$ be its double $\sigma$-system. Since $\Pi$ becomes a $(\sigma_{1}',\sigma_{2}')$-fundamental system, $(\mathfrak{g},\varphi_{1}\theta_{1}\varphi_{1}^{-1},\varphi_{2}\theta_{2}\varphi_{2}^{-1})$ is quasi-canonical and its double Satake diagram coincides with $(S_{1}',S_{2}')$. Thus we have completed the proof. \end{proof}
\subsubsection{The classification} Based on the above argument, we first determine $\mathcal{DS}(S_{1},S_{2})$ for $[\theta_{1}],[\theta_{2}]\in\mathrm{Inv}(\mathfrak{g})/\mathrm{Aut}(\mathfrak{g})$. This can be easily obtained by means of the structure of $\mathrm{Aut}(\Pi)$ and the table of Satake diagrams of compact symmetric pairs (cf.~\cite[TABLE VI]{Helgason}). Our determination will be given in Theorem \ref{thm:dsatake_classify}.
Following \cite[Chapter X, Theorem 3.29]{Helgason}, the structure of $\mathrm{Aut}(\Pi)$ is given as follows: \[ \mathrm{Aut}(\Pi)=\begin{cases} \{1\} & (\mathfrak{g}=\mathfrak{su}(2), \mathfrak{so}(2m+1), \mathfrak{sp}(n), \mathfrak{e}_{7}, \mathfrak{e}_{8}, \mathfrak{f}_{4}, \mathfrak{g}_{2}),\\ \mathbb{Z}_{2} & (\mathfrak{g}=\mathfrak{su}(n)\,(n\geq 3), \mathfrak{so}(2m)\,(m\geq 5), \mathfrak{e}_{6}),\\ \mathfrak{S}_{3} & (\mathfrak{g}=\mathfrak{so}(8)), \end{cases} \] where $\mathbb{Z}_{2}$ and $\mathfrak{S}_{3}$ are the cyclic group of order two and the symmetric group of degree three, respectively. Clearly, in the case when $\mathrm{Aut}(\Pi)=\{1\}$, $\mathcal{DS}(S_{1},S_{2})$ consists of only one element, that is, $\mathcal{DS}(S_{1},S_{2})=\{[(S_{1},S_{2})]\}$. For the others, we will obtain $\mathcal{DS}(S_{1},S_{2})$ by a case-by-case verification based on the classification of $\mathrm{Inv}(\mathfrak{g})/\mathrm{Aut}(\mathfrak{g})$ as shown in Table \ref{table:fixed_pt_algebra}.
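In the simply-laced cases, $\mathrm{Aut}(\Pi)$ is just the automorphism group of the underlying Dynkin diagram, so the entries of the table above can be checked by brute force. The following sketch (an illustration added here, not part of the original argument) enumerates the permutations of the simple roots that preserve the edge set, recovering $\mathfrak{S}_{3}$ (six elements) for $D_{4}$, i.e.\ $\mathfrak{so}(8)$, and $\mathbb{Z}_{2}$ for a chain of type $A_{3}$, i.e.\ $\mathfrak{su}(4)$:

```python
from itertools import permutations

def diagram_automorphisms(n, edges):
    """All permutations of n nodes preserving the given undirected edge set.

    For a simply-laced Dynkin diagram this is exactly Aut(Pi)."""
    eset = {frozenset(e) for e in edges}
    return [p for p in permutations(range(n))
            if {frozenset((p[a], p[b])) for a, b in eset} == eset]

# D4 (so(8)): the central node alpha_2 is joined to the three outer nodes.
d4 = diagram_automorphisms(4, [(0, 1), (1, 2), (1, 3)])
# A3 chain (su(4)): only the identity and the end-to-end flip survive.
a3 = diagram_automorphisms(4, [(0, 1), (1, 2), (2, 3)])
print(len(d4), len(a3))  # 6 2
```

The central $D_{4}$ node is the unique node of degree three, so every diagram automorphism fixes it and permutes the three outer nodes freely, which explains the count of six.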
Let us consider the case when $\mathrm{Aut}(\Pi)=\mathbb{Z}_{2}$. We first determine $\mathcal{DS}(S_{1},S_{2})$ for $(\mathfrak{g},\mathfrak{k}_{1})=(\mathfrak{g},\mathfrak{k}_{2})=(\mathfrak{so}(4m),\mathfrak{u}(2m))$ with $m\geq 3$.
\begin{ex}\label{ex:so4mu2mu2m} Let $(\mathfrak{g},\mathfrak{k}_{1})=(\mathfrak{g},\mathfrak{k}_{2})=(\mathfrak{so}(4m),\mathfrak{u}(2m))$ with $m\geq 3$. Denote by $(\Delta,\sigma_{1},\sigma_{2})$ the double $\sigma$-system of a quasi-canonical form $(\mathfrak{g},\theta_{1},\theta_{2})$. Let $\Pi$ be a $(\sigma_{1},\sigma_{2})$-fundamental system of $\Delta$. If we write $\Pi=\{\alpha_{1},\dotsc,\alpha_{2m}\}$ as in Notation \ref{nota:Dr}, then $\mathrm{Aut}(\Pi)$ is generated by \[ \tau:\Pi\to\Pi; ~(\alpha_{1},\dotsc,\alpha_{2m-2},\alpha_{2m-1},\alpha_{2m}) \mapsto (\alpha_{1},\dotsc,\alpha_{2m-2},\alpha_{2m},\alpha_{2m-1}). \] Denote by $S_{i}=S(\Pi,\Pi_{i,0},p_{i})$ the Satake diagram of $(\Delta,\sigma_{i})$ associated with $\Pi$. Then, for each $i=1,2$, the graph of $S_{i}$ coincides with that of $S(\Pi,\Pi_{0}, p)$ or $S(\Pi,\tau(\Pi_{0}),\tau\cdot p)$ as in Table \ref{table:Satake_DIII}. It can be shown that $(S(\Pi,\Pi_{0},p),S(\Pi,\Pi_{0},p))$ and $(S(\Pi,\Pi_{0},p),S(\Pi,\tau(\Pi_{0}),\tau\cdot p))$ give a complete set of representatives of $\mathcal{DS}(S_{1},S_{2})$. \end{ex}
\begin{table}[H] \centering \caption{Satake diagram of $(\mathfrak{so}(4m),\mathfrak{u}(2m))$ with $m\geq 3$}\label{table:Satake_DIII} \begin{tabular}{cc} \hline \hline $S(\Pi,\Pi_{0}, p)$ & $S(\Pi,\tau(\Pi_{0}),\tau\cdot p)$ \\ \hline \hline \begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\bullet};(10,0)*++!D{\alpha_{2}}*{\circ}="a2" \ar@{-}"a2";(15,0)*{} \ar@{.}(15,0)*{};(20,0)*{} \ar@{-}(20,0)*{};(25,0)*++!D{\alpha_{2m-3}}*{\bullet}="a2m-3" \ar@{-}"a2m-3";(35,0)*++!L{\alpha_{2m-2}}*{\circ}="a2m-2" \ar@{-}"a2m-2";(40,8)*++!L{\alpha_{2m-1}}*{\bullet} \ar@{-}"a2m-2";(40,-8)*++!L{\alpha_{2m}}*{\circ}
\end{xy} & \begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\bullet};(10,0)*++!D{\alpha_{2}}*{\circ}="a2" \ar@{-}"a2";(15,0)*{} \ar@{.}(15,0)*{};(20,0)*{} \ar@{-}(20,0)*{};(25,0)*++!D{\alpha_{2m-3}}*{\bullet}="a2m-3" \ar@{-}"a2m-3";(35,0)*++!L{\alpha_{2m-2}}*{\circ}="a2m-2" \ar@{-}"a2m-2";(40,8)*++!L{\alpha_{2m-1}}*{\circ} \ar@{-}"a2m-2";(40,-8)*++!L{\alpha_{2m}}*{\bullet}
\end{xy}\\ \hline \hline \end{tabular} \end{table}
For the remaining compact symmetric triads $(\mathfrak{g},\theta_{1},\theta_{2})$ with $\mathrm{Aut}(\Pi)=\mathbb{Z}_{2}$, it can be verified that $\mathcal{DS}(S_{1},S_{2})=\{[(S_{1},S_{2})]\}$ holds by means of the following lemma.
\begin{lem}\label{lem:2satake_determ_1pt} Assume that $S_{1}$ or $S_{2}$ is invariant under the action of $\mathrm{Aut}(\Pi)$, that is, there exists $i\in\{1,2\}$ such that $S_{i}=\psi\cdot S_{i}$ holds for all $\psi\in\mathrm{Aut}(\Pi)$. Then we have $\mathcal{DS}(S_{1},S_{2})=\{[(S_{1},S_{2})]\}$. \end{lem}
We omit the proof, since the lemma follows easily from the definition of $\mathcal{DS}(S_{1},S_{2})$.
\begin{ex}\label{ex:susosuu_ds} Let us consider the case when $(\mathfrak{g},\mathfrak{k}_{1})=(\mathfrak{su}(n),\mathfrak{so}(n))$ and $(\mathfrak{g},\mathfrak{k}_{2})=(\mathfrak{su}(n),\mathfrak{s}(\mathfrak{u}(a)\oplus\mathfrak{u}(b)))$ with $n\geq 3$. Since the Satake diagram $S_{1}$ contains no black circles and no curved arrows, $S_{1}$ is invariant under the action of $\mathrm{Aut}(\Pi)$. From Lemma \ref{lem:2satake_determ_1pt} we get $\mathcal{DS}(S_{1},S_{2})=\{[(S_{1},S_{2})]\}$. \end{ex}
From the above argument we conclude that $\mathcal{DS}(S_{1},S_{2})$ has been determined in the case when $\mathrm{Aut}(\Pi)=\mathbb{Z}_{2}$.
Finally, we consider the case when $\mathrm{Aut}(\Pi)=\mathfrak{S}_{3}$.
\begin{ex}\label{ex:so8soasob} Let $\mathfrak{g}=\mathfrak{so}(8)$. From Table \ref{table:fixed_pt_algebra} we have $\{[(\mathfrak{so}(8),\theta)]\mid \theta\in\mathrm{Inv}(\mathfrak{so}(8))\} =\{[(\mathfrak{so}(8),\mathfrak{so}(a)\oplus\mathfrak{so}(8-a))]\mid a=1,2,3,4\}$. Here, we have used a special isomorphism $\mathfrak{u}(4)\simeq \mathfrak{so}(2)\oplus\mathfrak{so}(6)$. We proceed by a case-by-case argument as follows.
We first consider the case when $(\mathfrak{g},\mathfrak{k}_{1})$ or $(\mathfrak{g},\mathfrak{k}_{2})$ is isomorphic to $(\mathfrak{so}(8),\mathfrak{so}(4)\oplus\mathfrak{so}(4))$. An argument similar to that in Example \ref{ex:susosuu_ds} yields $\mathcal{DS}(S_{1},S_{2})=\{[(S_{1},S_{2})]\}$.
Next, let us consider the case when $(\mathfrak{g},\mathfrak{k}_{1})=(\mathfrak{so}(8),\mathfrak{so}(a)\oplus\mathfrak{so}(8-a))$ and $(\mathfrak{g},\mathfrak{k}_{2})=(\mathfrak{so}(8),\mathfrak{so}(c)\oplus\mathfrak{so}(8-c))$ for some $a,c\in\{1,2,3\}$. Denote by $(\Delta,\sigma_{1},\sigma_{2})$ the double $\sigma$-system of a quasi-canonical form $(\mathfrak{g},\theta_{1},\theta_{2})$. Let $\Pi$ be a $(\sigma_{1},\sigma_{2})$-fundamental system of $\Delta$. If we write $\Pi=\{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}\}$ as in Notation \ref{nota:Dr}, then $\mathrm{Aut}(\Pi)=\{1,\kappa,\kappa^{2},\tau,\kappa\tau\kappa^{-1},\kappa^{2}\tau\kappa^{-2}\}$ holds, where $\kappa, \tau\in\mathrm{Aut}(\Pi)$ are defined by $\kappa:(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}) \mapsto (\alpha_{4},\alpha_{2},\alpha_{1},\alpha_{3})$ and by $\tau:(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}) \mapsto (\alpha_{1},\alpha_{2},\alpha_{4},\alpha_{3})$. Denote by $S_{i}$ the Satake diagram of $(\Delta,\sigma_{i})$ associated with $\Pi$. Then, there exist $\psi,\psi'\in\{1,\kappa,\kappa^{2}\}$ satisfying $S_{1}=S(\Pi,\psi(\Pi_{0}^{(a)}),\psi\cdot p^{(a)})$ and $S_{2}=S(\Pi,\psi'(\Pi_{0}^{(c)}),\psi'\cdot p^{(c)})$, where the Satake diagrams $S(\Pi,\psi(\Pi_{0}^{(*)}),\psi\cdot p^{(*)})$ are given in Table \ref{table:Satake_DI} for $\psi\in\{1,\kappa,\kappa^{2}\}$. We write $S^{*,\psi}:=S(\Pi,\psi(\Pi_{0}^{(*)}),\psi\cdot p^{(*)})$ for short. Then it can be verified case by case that $\mathcal{DS}(S_{1},S_{2})=\{[(S^{a,1},S^{c,1})],[(S^{a,1},S^{c,\kappa})]\}$ holds. For example, in the case when $a=1, c=2$, we have $(S^{1,1},S^{2,1})\not\sim(S^{1,1},S^{2,\kappa})\overset{\tau}{\sim}(S^{1,1},S^{2,\kappa^{2}})$. This implies that $(S^{1,1},S^{2,1})$ and $(S^{1,1},S^{2,\kappa})$ give a complete set of representatives of $\mathcal{DS}(S_{1},S_{2})$. \end{ex}
\begin{table}[H] \centering \caption{Satake diagram of $(\mathfrak{so}(8),\mathfrak{so}(a)\oplus\mathfrak{so}(8-a))$ with $a=1,2,3$}\label{table:Satake_DI} \begin{tabular}{cccc} \hline \hline $a$ & $S(\Pi,\Pi_{0}^{(a)},p^{(a)})$ & $S(\Pi,\kappa(\Pi_{0}^{(a)}),\kappa\cdot p^{(a)})$ & $S(\Pi,\kappa^{2}(\Pi_{0}^{(a)}),\kappa^{2}\cdot p^{(a)})$ \\ \hline \hline $1$ & \begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\circ}="a1";(10,0)*++!D{\alpha_{2}}*{\bullet}="a2" \ar@{-}"a2";(17.09,7.09)*++!D{\alpha_{3}}*{\bullet}="a3" \ar@{-}"a2";(17.09,-7.09)*++!D{\alpha_{4}}*{\bullet}="a4" \end{xy} & \begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\bullet}="a1";(10,0)*++!D{\alpha_{2}}*{\bullet}="a2" \ar@{-}"a2";(17.09,7.09)*++!D{\alpha_{3}}*{\bullet}="a3" \ar@{-}"a2";(17.09,-7.09)*++!D{\alpha_{4}}*{\circ}="a4" \end{xy} & \begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\bullet}="a1";(10,0)*++!D{\alpha_{2}}*{\bullet}="a2" \ar@{-}"a2";(17.09,7.09)*++!D{\alpha_{3}}*{\circ}="a3" \ar@{-}"a2";(17.09,-7.09)*++!D{\alpha_{4}}*{\bullet}="a4" \end{xy} \\ $2$ & \begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\circ}="a1";(10,0)*++!D{\alpha_{2}}*{\circ}="a2" \ar@{-}"a2";(17.09,7.09)*++!D{\alpha_{3}}*{\bullet}="a3" \ar@{-}"a2";(17.09,-7.09)*++!D{\alpha_{4}}*{\bullet}="a4" \end{xy} & \begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\bullet}="a1";(10,0)*++!D{\alpha_{2}}*{\circ}="a2" \ar@{-}"a2";(17.09,7.09)*++!D{\alpha_{3}}*{\bullet}="a3" \ar@{-}"a2";(17.09,-7.09)*++!D{\alpha_{4}}*{\circ}="a4" \end{xy} &\begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\bullet}="a1";(10,0)*++!D{\alpha_{2}}*{\circ}="a2" \ar@{-}"a2";(17.09,7.09)*++!D{\alpha_{3}}*{\circ}="a3" \ar@{-}"a2";(17.09,-7.09)*++!D{\alpha_{4}}*{\bullet}="a4" \end{xy} \\ $3$ & \begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\circ}="a1";(10,0)*++!D{\alpha_{2}}*{\circ}="a2" \ar@{-}"a2";(17.09,7.09)*++!D{\alpha_{3}}*{\circ}="a3" \ar@{-}"a2";(17.09,-7.09)*++!D{\alpha_{4}}*{\circ}="a4"
\ar@/^/@{<->} "a3";"a4" \end{xy} & \begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\circ}="a1";(10,0)*++!D{\alpha_{2}}*{\circ}="a2" \ar@{-}"a2";(17.09,7.09)*++!D{\alpha_{3}}*{\circ}="a3" \ar@{-}"a2";(17.09,-7.09)*++!D{\alpha_{4}}*{\circ}="a4"
\ar@/^/@{<->} "a1";"a3" \end{xy}& \begin{xy} \ar@{-}(0,0)*++!D{\alpha_{1}}*{\circ}="a1";(10,0)*++!D{\alpha_{2}}*{\circ}="a2" \ar@{-}"a2";(17.09,7.09)*++!D{\alpha_{3}}*{\circ}="a3" \ar@{-}"a2";(17.09,-7.09)*++!D{\alpha_{4}}*{\circ}="a4"
\ar@/_/@{<->} "a1";"a4" \end{xy}\\ \hline \hline \end{tabular} \end{table}
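As an informal sanity check (added here for illustration, not part of the original argument), one can verify directly that the six maps $\{1,\kappa,\kappa^{2},\tau,\kappa\tau\kappa^{-1},\kappa^{2}\tau\kappa^{-2}\}$ listed in Example \ref{ex:so8soasob} are pairwise distinct and exhaust the group generated by $\kappa$ and $\tau$:

```python
def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

e = (0, 1, 2, 3)
kappa = (3, 1, 0, 2)   # alpha_1 -> alpha_4, alpha_3 -> alpha_1, alpha_4 -> alpha_3
tau = (0, 1, 3, 2)     # swaps alpha_3 and alpha_4

k2 = compose(kappa, kappa)
listed = {e, kappa, k2, tau,
          compose(kappa, compose(tau, inverse(kappa))),
          compose(k2, compose(tau, inverse(k2)))}

# close {kappa, tau} under composition to obtain the generated group
group = {e, kappa, tau}
while True:
    new = {compose(a, b) for a in group for b in group}
    if new <= group:
        break
    group |= new

print(len(listed), group == listed)  # 6 True
```

Since $\kappa$ is a three-cycle on the outer nodes and $\tau$ a transposition, the generated group is the full $\mathfrak{S}_{3}$ on $\{\alpha_{1},\alpha_{3},\alpha_{4}\}$ fixing $\alpha_{2}$, in agreement with the list in the example.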
From the above argument we conclude:
\begin{thm}\label{thm:dsatake_classify} Fix a compact simple Lie algebra $\mathfrak{g}$. Let $[\theta_{1}],[\theta_{2}]\in\mathrm{Inv}(\mathfrak{g})/\mathrm{Aut}(\mathfrak{g})$ be such that $(\mathfrak{g},\theta_{1},\theta_{2})$ is quasi-canonical. Denote by $(S_{1},S_{2})$ the double Satake diagram corresponding to $(\mathfrak{g},\theta_{1},\theta_{2})$. Then we obtain $\mathcal{DS}(S_{1},S_{2})$ as follows:
\noindent $(1)$ Let $\mathfrak{g}\neq\mathfrak{so}(4m)$ with $m\geq 2${\rm :} $\mathcal{DS}(S_{1},S_{2})=\{[(S_{1},S_{2})]\}$ holds.
\noindent $(2)$ Let $\mathfrak{g}=\mathfrak{so}(4m)$ with $m\geq 3${\rm :} \begin{enumerate} \item[{\rm (2-a)}] In the case when $(\mathfrak{g},\mathfrak{k}_{i})=(\mathfrak{so}(4m),\mathfrak{u}(2m))$ for $i=1,2$, the two double Satake diagrams $(S(\Pi,\Pi_{0},p),S(\Pi,\Pi_{0},p))$ and $(S(\Pi,\Pi_{0},p),S(\Pi,\tau(\Pi_{0}),\tau\cdot p))$ as in Example {\rm \ref{ex:so4mu2mu2m}} give a complete set of representatives of $\mathcal{DS}(S_{1},S_{2})$. \item[{\rm (2-b)}] Otherwise, $\mathcal{DS}(S_{1},S_{2})=\{[(S_{1},S_{2})]\}$ holds. \end{enumerate}
\noindent $(3)$ Let $\mathfrak{g}=\mathfrak{so}(8)${\rm :} \begin{enumerate} \item[{\rm (3-a)}] In the case when $\mathfrak{k}_{1}$ or $\mathfrak{k}_{2}$ is isomorphic to $\mathfrak{so}(4)\oplus\mathfrak{so}(4)$, $\mathcal{DS}(S_{1},S_{2})=\{[(S_{1},S_{2})]\}$ holds. \item[{\rm (3-b)}] Otherwise, there exist $a,c\in\{1,2,3\}$ such that $(\mathfrak{g},\mathfrak{k}_{1})=(\mathfrak{so}(8),\mathfrak{so}(a)\oplus\mathfrak{so}(8-a))$ and $(\mathfrak{g},\mathfrak{k}_{2})=(\mathfrak{so}(8),\mathfrak{so}(c)\oplus\mathfrak{so}(8-c))$. Then, the two double Satake diagrams $(S(\Pi,\Pi_{0}^{(a)},p^{(a)}),S(\Pi,\Pi_{0}^{(c)},p^{(c)}))$ and $(S(\Pi,\Pi_{0}^{(a)},p^{(a)}),S(\Pi,\kappa(\Pi_{0}^{(c)}),\kappa\cdot p^{(c)}))$ as in Example {\rm \ref{ex:so8soasob}} give a complete set of representatives of $\mathcal{DS}(S_{1},S_{2})$. \end{enumerate} \end{thm}
As a corollary of Theorem \ref{thm:dsatake_classify} we can obtain $\mathcal{T}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})$ for $[\theta_{1}],[\theta_{2}]\in\mathrm{Inv}(\mathfrak{g})/\mathrm{Aut}(\mathfrak{g})$. In order to present our determination of $\mathcal{T}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})$ we prepare the following notation.
\begin{nota}\label{nota:matrix_involutions} In order to give involutions on a compact simple Lie algebra $\mathfrak{g}$, we utilize the following notation: If $I_{n}$ denotes the unit matrix of order $n$, then we put \begin{equation}\label{eqn:IJ} \begin{array}{cc} I_{a,b}=\begin{pmatrix} I_{a} & O \\ O & -I_{b} \end{pmatrix}\in GL(a+b,\mathbb{R}),& J_{n}=\begin{pmatrix} O & -I_{n} \\ I_{n} & O \end{pmatrix} \in GL(2n,\mathbb{R}), \end{array} \end{equation} and $J_{n}'=I_{n-1,n+1}J_{n}\in GL(2n,\mathbb{R})$.
\end{nota}
\begin{cor}\label{cor:cst_classify} Fix a compact simple Lie algebra $\mathfrak{g}$. Then we have{\rm :}
\noindent $(1)$ Let $\mathfrak{g}\neq\mathfrak{so}(4m)$ with $m\geq 2${\rm :} $\mathcal{T}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})=\{[(\mathfrak{g},\theta_{1},\theta_{2})]\}$ holds.
\noindent $(2)$ Let $\mathfrak{g}=\mathfrak{so}(4m)$ with $m\geq 3${\rm :} \begin{enumerate} \item[{\rm (2-a)}] In the case when $(\mathfrak{g},\mathfrak{k}_{i})=(\mathfrak{so}(4m),\mathfrak{u}(2m))$ for $i=1,2$, $(S(\Pi,\Pi_{0},p),S(\Pi,\Pi_{0},p))$, $(S(\Pi,\Pi_{0},p),S(\Pi,\tau(\Pi_{0}),\tau\cdot p))$ as in Example {\rm \ref{ex:so4mu2mu2m}} correspond to the two compact symmetric triads \begin{equation}\label{eqn:CST_2-a} (\mathfrak{so}(4m),\mathrm{Ad}(J_{2m}),\mathrm{Ad}(J_{2m})), \quad (\mathfrak{so}(4m),\mathrm{Ad}(J_{2m}),\mathrm{Ad}(J_{2m}')), \end{equation} respectively. Furthermore, the compact symmetric triads \eqref{eqn:CST_2-a} give a complete set of representatives of $\mathcal{T}(\mathfrak{so}(4m),\mathfrak{u}(2m),\mathfrak{u}(2m))$. \item[{\rm (2-b)}] Otherwise, $\mathcal{T}(\mathfrak{so}(4m),\mathfrak{k}_{1},\mathfrak{k}_{2})=\{[(\mathfrak{so}(4m),\theta_{1},\theta_{2})]\}$ holds. \end{enumerate}
\noindent $(3)$ Let $\mathfrak{g}=\mathfrak{so}(8)${\rm :} \begin{enumerate} \item[{\rm (3-a)}] In the case when $\mathfrak{k}_{1}$ or $\mathfrak{k}_{2}$ is isomorphic to $\mathfrak{so}(4)\oplus\mathfrak{so}(4)$, we have $\mathcal{T}(\mathfrak{so}(8),\mathfrak{k}_{1},\mathfrak{k}_{2})=\{[(\mathfrak{so}(8),\theta_{1},\theta_{2})]\}$. \item[{\rm (3-b)}] Otherwise, we have $(\mathfrak{g},\mathfrak{k}_{1})=(\mathfrak{so}(8),\mathfrak{so}(a)\oplus\mathfrak{so}(8-a))$ and $(\mathfrak{g},\mathfrak{k}_{2})=(\mathfrak{so}(8),\mathfrak{so}(c)\oplus\mathfrak{so}(8-c))$ for some $a,c\in\{1,2,3\}$. Then, $(S(\Pi,\Pi_{0}^{(a)},p^{(a)}),S(\Pi,\Pi_{0}^{(c)},p^{(c)}))$ and $(S(\Pi,\Pi_{0}^{(a)},p^{(a)}),S(\Pi,\kappa(\Pi_{0}^{(c)}),\kappa\cdot p^{(c)}))$ as in Example {\rm \ref{ex:so8soasob}} correspond to the two compact symmetric triads \begin{equation}\label{eqn:CST_3-b} (\mathfrak{so}(8),\mathrm{Ad}(I_{a,8-a}),\mathrm{Ad}(I_{c,8-c})),\quad (\mathfrak{so}(8),\mathrm{Ad}(I_{a,8-a}),\tilde{\kappa}\mathrm{Ad}(I_{c,8-c})\tilde{\kappa}^{-1}), \end{equation} respectively, where $\tilde{\kappa}$ denotes the extension of $\kappa$ to an automorphism of $\mathfrak{so}(8)$. Furthermore, the compact symmetric triads \eqref{eqn:CST_3-b} give a complete set of representatives of $\mathcal{T}(\mathfrak{so}(8),\mathfrak{k}_{1},\mathfrak{k}_{2})$. \end{enumerate} \end{cor}
From Corollary \ref{cor:cst_classify}, with a few exceptions, the isomorphism class $[(\mathfrak{g},\theta_{1},\theta_{2})]$ is uniquely determined by means of the Lie algebra structures of $\mathfrak{k}_{1}$ and $\mathfrak{k}_{2}$. Then, there is no confusion when we write $[(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})]$ in place of $[(\mathfrak{g},\theta_{1},\theta_{2})]$ except for compact simple symmetric triads as in Corollary \ref{cor:cst_classify}, (2-a) and (3-b). On the other hand, in the case of (2-a) in Corollary \ref{cor:cst_classify}, we shall use the symbols $[(\mathfrak{so}(4m),\mathfrak{u}(2m),\mathfrak{u}(2m))]$ and $[(\mathfrak{so}(4m),\mathfrak{u}(2m),\mathfrak{u}(2m)')]$ as the isomorphism classes of $(\mathfrak{so}(4m),\mathrm{Ad}(J_{2m}),\mathrm{Ad}(J_{2m}))$ and $(\mathfrak{so}(4m),\mathrm{Ad}(J_{2m}),\mathrm{Ad}(J_{2m}'))$, respectively. In the case of (3-b) in Corollary \ref{cor:cst_classify}, we shall also write $[(\mathfrak{so}(8),\mathfrak{so}(a)\oplus\mathfrak{so}(8-a),\mathfrak{so}(c)\oplus\mathfrak{so}(8-c))]$ and $[(\mathfrak{so}(8),\mathfrak{so}(a)\oplus\mathfrak{so}(8-a),\tilde{\kappa}(\mathfrak{so}(c)\oplus\mathfrak{so}(8-c)))]$
as the isomorphism classes of $(\mathfrak{so}(8),\mathrm{Ad}(I_{a,8-a}),\mathrm{Ad}(I_{c,8-c}))$ and $(\mathfrak{so}(8),\mathrm{Ad}(I_{a,8-a}),\tilde{\kappa}\mathrm{Ad}(I_{c,8-c})\tilde{\kappa}^{-1})$, respectively.
\subsubsection{Determination of rank and order for double $\sigma$-systems}
Based on the classification, we will determine the rank and the order of the double $\sigma$-system $(\Delta,\sigma_{1},\sigma_{2})$ for compact symmetric triads $(\mathfrak{g},\theta_{1},\theta_{2})$ such that $\mathfrak{g}$ is simple. Since $(\Delta,\sigma_{1},\sigma_{2})$ is canonical, we have $\mathrm{rank}[(\Delta,\sigma_{1},\sigma_{2})]=\mathrm{rank}(\Delta,\sigma_{1},\sigma_{2})$ and $\mathrm{ord}[(\Delta,\sigma_{1},\sigma_{2})]=\mathrm{ord}(\Delta,\sigma_{1},\sigma_{2})$.
First, we consider the case when $\theta_{1}\sim\theta_{2}$. Then $(\Delta,\sigma_{1},\sigma_{2})\sim(\Delta,\sigma_{1},\sigma_{1})$ holds. Since $(\Delta,\sigma_{1},\sigma_{1})$ is canonical, by Theorem \ref{thm:dsig_dsatake_equiv}, we obtain $\mathrm{rank}(\Delta, \sigma_{1},\sigma_{2})=\mathrm{rank}(\Delta,\sigma_{1},\sigma_{1}) =\mathrm{rank}(\mathfrak{g},\theta_{1})$ and $\mathrm{ord}(\Delta, \sigma_{1},\sigma_{2})=\mathrm{ord}(\Delta,\sigma_{1},\sigma_{1})=1$. In addition, we have the value of $\mathrm{rank}(\mathfrak{g},\theta_{1})$ from TABLE V in \cite{Helgason}. Thus, we have determined $\mathrm{rank}[(\Delta,\sigma_{1},\sigma_{2})]$ and $\mathrm{ord}[(\Delta,\sigma_{1},\sigma_{2})]$ in the case when $\theta_{1}\sim\theta_{2}$.
Secondly, we consider the case when $\theta_{1}\not\sim\theta_{2}$. In a similar manner as in Subsection \ref{sec:NssSd}, $(\Delta,\sigma_{1},\sigma_{2})$ can be reconstructed from its double Satake diagram. Then, a direct calculation gives the rank and the order of $(\Delta,\sigma_{1},\sigma_{2})$.
\begin{ex} Let $(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})=(\mathfrak{so}(8),\mathfrak{so}(1)\oplus\mathfrak{so}(7),\tilde{\kappa}(\mathfrak{so}(2)\oplus\mathfrak{so}(6)))$ and $(\Delta,\sigma_{1},\sigma_{2})$ denote its double $\sigma$-system. Then, $(S(\Pi,\Pi_{0}^{(1)},p^{(1)}),S(\Pi,\kappa(\Pi_{0}^{(2)}),\kappa\cdot p^{(2)}))$ as in Table \ref{table:Satake_DI} gives the double Satake diagram of $(\Delta,\sigma_{1},\sigma_{2})$. We write $\Pi=\{\alpha_{1},\dotsc,\alpha_{4}\}$ by means of the standard basis $e_{1},\dotsc, e_{4}$ of $\mathbb{R}^{4}$ as in Notation \ref{nota:Dr}. Under this setting, we have \begin{equation}\label{eqn:sigsso8so1726} \sigma_{1}(e_{1},e_{2},e_{3},e_{4})=(e_{1},-e_{2},-e_{3},-e_{4}),\quad \sigma_{2}(e_{1},e_{2},e_{3},e_{4})=(e_{2},e_{1},e_{4},e_{3}). \end{equation} Since we have \[ \mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}}= \mathbb{R}e_{1}\cap(\mathbb{R}(e_{1}+e_{2})\oplus\mathbb{R}(e_{3}+e_{4}))=\{0\}, \] $\mathrm{rank}(\Delta,\sigma_{1},\sigma_{2})=0$ holds. We also obtain $\mathrm{ord}(\Delta,\sigma_{1},\sigma_{2})=4$ by means of \eqref{eqn:sigsso8so1726}. \end{ex}
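The rank and order computed in this example admit an elementary numerical cross-check (added here for illustration only; it assumes, as in the example, that $\mathrm{rank}(\Delta,\sigma_{1},\sigma_{2})=\dim(\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}})$, and it takes the working assumption that $\mathrm{ord}(\Delta,\sigma_{1},\sigma_{2})$ is realized by the order of the composition $\sigma_{1}\sigma_{2}$). Encoding \eqref{eqn:sigsso8so1726} as integer matrices:

```python
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

I4 = [[int(i == j) for j in range(4)] for i in range(4)]
s1 = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]  # sigma_1
s2 = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]     # sigma_2: e1<->e2, e3<->e4

# order of the composition sigma_1 sigma_2
M = mat_mul(s1, s2)
P, order = M, 1
while P != I4:
    P = mat_mul(P, M)
    order += 1

# rank = dim of the intersection of Fix(sigma_1) and Fix(sigma_2),
# i.e. the nullity of the stacked 8x4 system (sigma_i - I)v = 0
rows = [[Fraction(s[i][j] - I4[i][j]) for j in range(4)]
        for s in (s1, s2) for i in range(4)]
rank = 0
for col in range(4):
    piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
    if piv is None:
        continue
    rows[rank], rows[piv] = rows[piv], rows[rank]
    for r in range(len(rows)):
        if r != rank and rows[r][col]:
            f = rows[r][col] / rows[rank][col]
            rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
    rank += 1

print(4 - rank, order)  # 0 4
```

Here $(\sigma_{1}\sigma_{2})^{2}=\mathrm{diag}(-1,-1,1,1)$, so the composition has order four, while the stacked system has full column rank, so the fixed subspaces intersect trivially, in agreement with the values in the example.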
We can carry out a similar calculation for the other compact symmetric triads $(\mathfrak{g},\theta_{1},\theta_{2})$ such that $\mathfrak{g}$ is simple and that $\theta_{1}\not\sim\theta_{2}$. Then, we have the following proposition.
\begin{pro} Table {\rm \ref{table:rank_ord}} exhibits the ranks and the orders of the isomorphism classes of the double $\sigma$-systems corresponding to compact symmetric triads $(\mathfrak{g},\theta_{1},\theta_{2})$ such that $\mathfrak{g}$ is simple and that $\theta_{1}\not\sim\theta_{2}$. \end{pro}
\begin{rem}\label{rem:table_ro} As will be shown later, Table {\rm \ref{table:rank_ord}} exhibits the ranks and the orders of the isomorphism classes of compact symmetric triads $(G,\theta_{1},\theta_{2})$ such that $G$ is simple. Indeed, this is shown by means of Theorems \ref{thm:cst_exist_can} and \ref{thm:cst_RO_can} in the next section. \end{rem}
\begin{table}[H]
\centering \caption{Rank and order for double $\sigma$-system corresponding to $(\mathfrak{g},\theta_{1},\theta_{2})$ with $\theta_{1}\not\sim\theta_{2}$}\label{table:rank_ord}
\renewcommand{\arraystretch}{2.0} \tiny
\begin{tabular}{cccc} \hline \hline $(\mathfrak{g}, \mathfrak{k}_{1},\mathfrak{k}_{2})$ & Rank & Order & Remark\\ \hline \hline $(\mathfrak{su}(2m), \mathfrak{so}(2m), \mathfrak{sp}(m))$ & $m-1$ & $2$ & \\ \hline $(\mathfrak{su}(n), \mathfrak{so}(n), \mathfrak{s}(\mathfrak{u}(a) \oplus\mathfrak{u}(b)))$ & $a$ & $2$ & $n\geq 2a$\\ \hline $(\mathfrak{su}(2m), \mathfrak{sp}(m), \mathfrak{s}(\mathfrak{u}(a)\oplus\mathfrak{u}(b)))$ & $\left[\dfrac{a}{2}\right]$ & $\begin{cases}4 & (\text{$a$: odd, $m>a$} ),\\2 & (\text{otherwise})\end{cases}$ &$m\geq a$ \\ \hline $(\mathfrak{su}(n), \mathfrak{s}(\mathfrak{u}(a)\oplus\mathfrak{u}(b)), \mathfrak{s}(\mathfrak{u}(c) \oplus\mathfrak{u}(d)))$ & $a$ & $2$ & $a<c \leq d<b$ \\ \hline $(\mathfrak{so}(n), \mathfrak{so}(a)\oplus\mathfrak{so}(b), \mathfrak{so}(c) \oplus\mathfrak{so}(d))$ & $a$ & $2$ & $a<c \leq d<b$ \\ \hline $(\mathfrak{so}(8), \mathfrak{so}(a)\oplus\mathfrak{so}(b),\tilde{\kappa}(\mathfrak{so}(c) \oplus\mathfrak{so}(d)))$ & $\begin{cases}0 & ((a,c)=(1,\{1,2,3\})),\\1&((a,c)=(2,\{2,3\})),\\2 & ((a,c)=(3,3))\end{cases}$ & $\begin{cases} 2 & ((a,c)=(2,2)),\\ 3 & ((a,c)=(1,1),(3,3)),\\ 4 & ((a,c)=(1,2),(2,3)),\\ 6 & ((a,c)=(1,3))\end{cases}$& \\ \hline $(\mathfrak{so}(2m), \mathfrak{so}(a)\oplus\mathfrak{so}(b), \mathfrak{u}(m))$ & $\left[\dfrac{a}{2}\right]$ & $\begin{cases}4 & (\text{$a$: odd, $m>a$} ),\\2 & (\text{otherwise})\end{cases}$ &$m\geq a$ \\ \hline $(\mathfrak{so}(4m), \mathfrak{u}(2m), \mathfrak{u}(2m)')$ & $m-1$ & $2$ & \\ \hline $(\mathfrak{sp}(n), \mathfrak{u}(n), \mathfrak{sp}(a)\oplus\mathfrak{sp}(b))$ & $a$ & $2$ & $n\geq 2a$\\ \hline $(\mathfrak{sp}(n), \mathfrak{sp}(a)\oplus\mathfrak{sp}(b), \mathfrak{sp}(c)\oplus\mathfrak{sp}(d))$ & $a$ & $2$ & $a<c \leq d<b$ \\ \hline
$(\mathfrak{e}_{6}, \mathfrak{sp}(4), \mathfrak{su}(6)\oplus \mathfrak{su}(2))$ & $4$ & $2$ & \\ \hline $(\mathfrak{e}_{6}, \mathfrak{sp}(4), \mathfrak{so}(10)\oplus\mathfrak{so}(2))$ & $2$ & $2$ & \\ \hline $(\mathfrak{e}_{6}, \mathfrak{sp}(4), \mathfrak{f}_{4})$ & $2$ & $2$ & \\ \hline $(\mathfrak{e}_{6}, \mathfrak{su}(6)\oplus\mathfrak{su}(2), \mathfrak{so}(10)\oplus\mathfrak{so}(2))$ & $2$ & $2$ & \\ \hline $(\mathfrak{e}_{6}, \mathfrak{su}(6)\oplus\mathfrak{su}(2), \mathfrak{f}_{4})$ & $1$ & $2$ & \\ \hline $(\mathfrak{e}_{6}, \mathfrak{so}(10)\oplus\mathfrak{so}(2), \mathfrak{f}_{4})$ & $1$ & $2$ & \\ \hline $(\mathfrak{e}_{7}, \mathfrak{su}(8), \mathfrak{so}(12)\oplus\mathfrak{su}(2))$ & $4$ & $2$ & \\ \hline $(\mathfrak{e}_{7}, \mathfrak{su}(8), \mathfrak{e}_{6}\oplus\mathfrak{so}(2))$ & $3$ & $2$ & \\ \hline $(\mathfrak{e}_{7}, \mathfrak{so}(12) \oplus\mathfrak{su}(2), \mathfrak{e}_{6}\oplus\mathfrak{so}(2))$ & $2$ & $2$ & \\ \hline $(\mathfrak{e}_{8}, \mathfrak{so}(16), \mathfrak{e}_{7} \oplus\mathfrak{su}(2))$ & $4$ & $2$ & \\ \hline $(\mathfrak{f}_{4}, \mathfrak{su}(2)\oplus\mathfrak{sp}(3), \mathfrak{so}(9))$ & $1$ & $2$ & \\ \hline \hline \end{tabular}
\renewcommand{\arraystretch}{1.0}
\end{table}
\subsubsection{Special isomorphism and self-duality}\label{sec:cst_SI}
First, we consider special isomorphisms for compact symmetric triads. In the theory of compact Lie algebras, there are some special isomorphisms for low-dimensional compact Lie algebras (cf.~\cite[pp.~519--520]{Helgason}). Hence there are some overlaps in Table \ref{table:fixed_pt_algebra}. These overlaps arise from the following special isomorphisms for compact symmetric triads of low rank.
\begin{cor}\label{cor:si_CST} The following relations hold: \begin{enumerate} \item $(\mathfrak{so}(8),\mathfrak{u}(4),\mathfrak{so}(4)\oplus\mathfrak{so}(4))\sim (\mathfrak{so}(8),\mathfrak{so}(2)\oplus\mathfrak{so}(6),\mathfrak{so}(4)\oplus\mathfrak{so}(4))$. \item $(\mathfrak{so}(5),\mathfrak{so}(1)\oplus\mathfrak{so}(4),\mathfrak{so}(2)\oplus\mathfrak{so}(3)) \sim (\mathfrak{sp}(2),\mathfrak{sp}(1)\oplus\mathfrak{sp}(1),\mathfrak{u}(2))$. \item $(\mathfrak{su}(4),\mathfrak{so}(4),\mathfrak{sp}(2)) \sim (\mathfrak{so}(6),\mathfrak{so}(3)\oplus\mathfrak{so}(3),\mathfrak{so}(1)\oplus\mathfrak{so}(5))$. \item $(\mathfrak{su}(4),\mathfrak{so}(4),\mathfrak{s}(\mathfrak{u}(2)\oplus\mathfrak{u}(2))) \sim (\mathfrak{so}(6),\mathfrak{so}(3)\oplus\mathfrak{so}(3),\mathfrak{so}(2)\oplus\mathfrak{so}(4))$. \item $(\mathfrak{su}(4),\mathfrak{so}(4),\mathfrak{s}(\mathfrak{u}(1)\oplus\mathfrak{u}(3))) \sim (\mathfrak{so}(6),\mathfrak{so}(3)\oplus\mathfrak{so}(3),\mathfrak{u}(3))$. \item $(\mathfrak{su}(4),\mathfrak{sp}(2),\mathfrak{s}(\mathfrak{u}(2)\oplus\mathfrak{u}(2))) \sim (\mathfrak{so}(6),\mathfrak{so}(1)\oplus\mathfrak{so}(5),\mathfrak{so}(2)\oplus\mathfrak{so}(4))$. \item $(\mathfrak{su}(4),\mathfrak{sp}(2),\mathfrak{s}(\mathfrak{u}(1)\oplus\mathfrak{u}(3))) \sim(\mathfrak{so}(6),\mathfrak{so}(1)\oplus\mathfrak{so}(5),\mathfrak{u}(3))$. \item $(\mathfrak{su}(4),\mathfrak{s}(\mathfrak{u}(2)\oplus\mathfrak{u}(2)),\mathfrak{s}(\mathfrak{u}(1)\oplus\mathfrak{u}(3))) \sim (\mathfrak{so}(6),\mathfrak{so}(2)\oplus\mathfrak{so}(4),\mathfrak{u}(3))$. \end{enumerate} \end{cor}
\begin{proof} (1) For $i=1,2$, let $\theta_{i}$ and $\theta_{i}'$ be the involutions of $\mathfrak{g}=\mathfrak{so}(8)$ defined by \[ \begin{array}{ccc} \theta_{1}=\mathrm{Ad}(I_{2,6}),& \theta_{1}'=\mathrm{Ad}(J_{4}),& \theta_{2}=\theta_{2}'=\mathrm{Ad}(I_{4,4}). \end{array} \] Then we have $\mathfrak{k}_{1}\simeq\mathfrak{so}(2)\oplus\mathfrak{so}(6)\simeq\mathfrak{u}(4)\simeq\mathfrak{k}_{1}'$ and $\mathfrak{k}_{2}=\mathfrak{k}_{2}'\simeq\mathfrak{so}(4)\oplus\mathfrak{so}(4)$. It follows from Corollary \ref{cor:cst_classify}, (3-a) that $(\mathfrak{g},\theta_{1},\theta_{2})\sim(\mathfrak{g},\theta_{1}',\theta_{2}')$ holds. A similar argument gives (2)--(8). \end{proof}
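As a sanity check on $\mathfrak{k}_{1}\simeq\mathfrak{u}(4)\simeq\mathfrak{so}(2)\oplus\mathfrak{so}(6)$, the fixed-point sets of $\mathrm{Ad}(I_{2,6})$ and $\mathrm{Ad}(J_{4})$ on $\mathfrak{so}(8)$ can be computed numerically; both have dimension $16$. The following is a minimal sketch, which checks dimensions only (not the isomorphism type) and assumes the standard conventions $I_{2,6}=\mathrm{diag}(I_{2},-I_{6})$ and $J_{4}=\left(\begin{smallmatrix}O&-I_{4}\\ I_{4}&O\end{smallmatrix}\right)$.

```python
import numpy as np

def so_basis(n):
    """Basis E_ij - E_ji (i < j) of the skew-symmetric matrices so(n)."""
    basis = []
    for i in range(n):
        for j in range(i + 1, n):
            B = np.zeros((n, n))
            B[i, j], B[j, i] = 1.0, -1.0
            basis.append(B)
    return basis

def fixed_dim(g):
    """Dimension of the fixed-point set of Ad(g): X -> g X g^{-1} on so(n)."""
    ginv = np.linalg.inv(g)
    basis = so_basis(g.shape[0])
    # Columns are vec(g B g^{-1} - B); the fixed-point set is the kernel.
    D = np.column_stack([(g @ B @ ginv - B).ravel() for B in basis])
    return len(basis) - np.linalg.matrix_rank(D)

I26 = np.diag([1.0, 1.0] + [-1.0] * 6)
J4 = np.block([[np.zeros((4, 4)), -np.eye(4)],
               [np.eye(4), np.zeros((4, 4))]])

# dim(so(2) + so(6)) = 1 + 15 = 16 and dim(u(4)) = 16
print(fixed_dim(I26), fixed_dim(J4))  # both are 16
```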
A compact symmetric triad $(\mathfrak{g},\theta_{1},\theta_{2})$ is said to be \textit{self-dual}, if it satisfies $(\mathfrak{g},\theta_{1},\theta_{2})\sim(\mathfrak{g},\theta_{2},\theta_{1})$. Secondly, we classify self-dual compact symmetric triads $(\mathfrak{g},\theta_{1},\theta_{2})$ in the case when $\mathfrak{g}$ is simple. It is verified that, if two compact symmetric triads $(\mathfrak{g},\theta_{1},\theta_{2})$ and $(\mathfrak{g},\theta_{1}',\theta_{2}')$ are isomorphic, and $(\mathfrak{g},\theta_{1},\theta_{2})$ is self-dual, then so is $(\mathfrak{g},\theta_{1}',\theta_{2}')$. In the case when $\theta_{1}\sim\theta_{2}$, it follows from $(\mathfrak{g},\theta_{1},\theta_{1})\sim(\mathfrak{g},\theta_{1},\theta_{2})$ that $(\mathfrak{g},\theta_{1},\theta_{2})$ is self-dual. In the case when $\theta_{1}\not\sim\theta_{2}$, we can determine whether $(\mathfrak{g},\theta_{1},\theta_{2})$ is self-dual or not by means of our classification as follows.
\begin{cor}\label{cor:sdual_CST} Let $\theta_{1}$ and $\theta_{2}$ be involutions on a compact simple Lie algebra with $\theta_{1}\not\sim\theta_{2}$. The only self-dual compact symmetric triads are given by $(\mathfrak{so}(4m),\mathrm{Ad}(J_{2m}),\mathrm{Ad}(J_{2m}'))$ with $m\geq 3$, and by $(\mathfrak{so}(8),\mathrm{Ad}(I_{a,8-a}),\tilde{\kappa}\mathrm{Ad}(I_{a,8-a})\tilde{\kappa}^{-1})$ with $a\in\{1,2,3\}$.
In particular, $\mathfrak{k}_{1}\simeq\mathfrak{k}_{2}$ implies that $(\mathfrak{g},\theta_{1},\theta_{2})$ is self-dual. \end{cor}
\begin{proof} It is clear that, if $(\mathfrak{g},\theta_{1},\theta_{2})$ is self-dual, then $\mathfrak{k}_{1}\simeq\mathfrak{k}_{2}$ holds. Conversely, from the classification for compact symmetric triads, the only compact symmetric triads $(\mathfrak{g},\theta_{1},\theta_{2})$ satisfying $\mathfrak{k}_{1}\simeq\mathfrak{k}_{2}$ are ones as in the statement. Furthermore, it is verified that they are self-dual as follows: By using $\mathrm{Ad}(I_{2m-1,2m+1})^{2}=1$ we have \begin{align*} (\mathfrak{g},\theta_{1},\theta_{2})
&=(\mathfrak{so}(4m),\mathrm{Ad}(J_{2m}),\mathrm{Ad}(I_{2m-1,2m+1})\mathrm{Ad}(J_{2m})\mathrm{Ad}(I_{2m-1,2m+1})^{-1})\\ &\sim(\mathfrak{so}(4m),\mathrm{Ad}(I_{2m-1,2m+1})\mathrm{Ad}(J_{2m})\mathrm{Ad}(I_{2m-1,2m+1})^{-1},\mathrm{Ad}(J_{2m}))\\ &=(\mathfrak{g},\theta_{2},\theta_{1}). \end{align*} It is shown that the double Satake diagram for $(\mathfrak{so}(8),\mathrm{Ad}(I_{a,8-a}),\tilde{\kappa}\mathrm{Ad}(I_{a,8-a})\tilde{\kappa}^{-1})$ and that for $(\mathfrak{so}(8),\tilde{\kappa}\mathrm{Ad}(I_{a,8-a})\tilde{\kappa}^{-1},\mathrm{Ad}(I_{a,8-a}))$ are isomorphic. Thus, by Theorem \ref{thm:cst_dsatake_sim}, $(3)\Rightarrow(1)$ we have $(\mathfrak{so}(8),\mathrm{Ad}(I_{a,8-a}),\tilde{\kappa}\mathrm{Ad}(I_{a,8-a})\tilde{\kappa}^{-1})$ $\sim$$(\mathfrak{so}(8),\tilde{\kappa}\mathrm{Ad}(I_{a,8-a})\tilde{\kappa}^{-1},\mathrm{Ad}(I_{a,8-a}))$. Hence we have the assertion. \end{proof}
\section{Canonical forms in compact symmetric triads}\label{sec:cst_cf}
In Subsection \ref{sec:cst_can}, we define the canonicality for compact symmetric triads, and give concrete examples of canonical compact symmetric triads. In Subsection \ref{sec:cst_can_exist}, for any compact symmetric triad $(G,\theta_{1},\theta_{2})$, we prove the existence of a canonical one $(G,\theta_{1},\theta_{2}')\sim(G,\theta_{1},\theta_{2})$ in the case when $G$ is simple. In Subsection \ref{sec:rankorder}, we give properties of canonical compact symmetric triads (Theorem \ref{thm:cst_RO_can}).
\subsection{Definition and examples for canonical compact symmetric triads}\label{sec:cst_can}
Let $G$ be a compact connected semisimple Lie group, and $\mathfrak{g}$ denote its Lie algebra.
\begin{dfn}\label{dfn:cst_can} A compact symmetric triad $(G,\theta_{1},\theta_{2})$ is said to be \textit{canonical}, if there exists a maximal abelian subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ which satisfies the following conditions: \begin{enumerate} \item[(C1)] $\mathfrak{t}$ is quasi-canonical with respect to $(G,\theta_{1},\theta_{2})$, that is, $\mathfrak{t}$ satisfies the conditions (1) and (2) as in Definition \ref{dfn:cst_qcf}.
\item[(C2)] $\mathrm{ord}(\theta_{1}\theta_{2})=\mathrm{ord}(d\theta_{1}d\theta_{2}|_{\mathfrak{t}})$. \end{enumerate} Then, $\mathfrak{t}$ is said to be canonical with respect to $(G,\theta_{1},\theta_{2})$. A \textit{canonical form} of $[(G,\theta_{1},\theta_{2})]$ is a representative $(G,\theta_{1}',\theta_{2}')$ of the isomorphism class $[(G,\theta_{1},\theta_{2})]$ such that $(G,\theta_{1}',\theta_{2}')$ is canonical as a compact symmetric triad. \end{dfn}
In the case when $(G,\theta_{1},\theta_{2})$ is canonical, the condition (C2) implies that $\mathrm{ord}(\theta_{1}\theta_{2})$ and $\mathrm{ord}[(G,\theta_{1},\theta_{2})]$ are finite. Here, we give examples of canonical compact symmetric triads as follows.
\begin{ex}\label{ex:order1_can} For any involution $\theta$ on $G$, $(G,\theta,\theta)$ is canonical. Indeed, let $\mathfrak{t}$ be a maximal abelian subalgebra of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{m}$ is a maximal abelian subspace of $\mathfrak{m}$. Then $(G,\theta,\theta)$ and $\mathfrak{t}$ satisfy the two conditions as in Definition \ref{dfn:cst_can}. \end{ex}
\begin{ex}\label{ex:com_can} Any commutative compact symmetric triad $(G,\theta_{1},\theta_{2})$ with $\theta_{1}\not\sim\theta_{2}$ is canonical. Indeed, it follows from Lemma \ref{lem:comm_quasican} that there exists a maximal abelian subalgebra $\mathfrak{t}$ of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{m}_{i}$ and $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$ are maximal abelian subspaces of $\mathfrak{m}_{i}$ and $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$, respectively. This shows that $\mathfrak{t}$ is quasi-canonical with respect to $(G,\theta_{1},\theta_{2})$. In addition, from Lemma \ref{lem:t1simt2>dt1dt2=1} we obtain
$2\leq \mathrm{ord}(d\theta_{1}d\theta_{2}|_{\mathfrak{t}})\leq \mathrm{ord}(\theta_{1}\theta_{2})=2$. In particular, $\mathfrak{t}$ satisfies the condition (C2). \end{ex}
Canonical forms are not uniquely determined in general; namely, there exist two canonical compact symmetric triads $(G,\theta_{1},\theta_{2})\sim(G,\theta_{1}',\theta_{2}')$ that are not equivalent under the following equivalence relation $\equiv$.
\begin{dfn} Two compact symmetric triads $(G,\theta_{1},\theta_{2})$, $(G,\theta_{1}',\theta_{2}')$ satisfy $(G,\theta_{1},\theta_{2})\equiv(G,\theta_{1}',\theta_{2}')$, if there exists $\varphi\in\mathrm{Aut}(G)$ satisfying $\theta_{1}'=\varphi\theta_{1}\varphi^{-1}$ and $\theta_{2}'=\varphi\theta_{2}\varphi^{-1}$. In a similar manner, we define an equivalence relation on the set of compact symmetric triads at the Lie algebra level. By the definition $(G,\theta_{1},\theta_{2})\equiv(G,\theta_{1}',\theta_{2}')$ implies $(G,\theta_{1},\theta_{2})\sim(G,\theta_{1}',\theta_{2}')$. The converse does not hold in general. \end{dfn}
The following example gives canonical compact symmetric triads $(G,\theta_{1},\theta_{2})\sim(G,\theta_{1}',\theta_{2}')$ with $(G,\theta_{1},\theta_{2})\not\equiv(G,\theta_{1}',\theta_{2}')$ at the Lie algebra level. Another example is given in \cite[Examples 2.14--16]{BIS}.
\begin{ex}\label{ex:spn} Let $\mathfrak{g}=\mathfrak{su}(4m)$ with $m\geq 1$. We define two involutions $\theta$ and $\theta'$ of $\mathfrak{g}$ as follows: \[ \begin{array}{ccc} \theta(Z)=\overline{Z},& \theta'(Z)=I_{2m,2m}ZI_{2m,2m}& (Z\in\mathfrak{g}), \end{array} \] where $I_{2m,2m}$ is defined in \eqref{eqn:IJ}. Then we have $\theta\theta'=\theta'\theta$. Furthermore, $\mathfrak{g}^{\theta}=\mathfrak{so}(4m)$ and \[ \mathfrak{g}^{\theta'} =\left\{ \left( \begin{array}{cc} Z_{1} & O \\ O & Z_{2} \end{array}
\right)\biggm| \begin{array}{l} Z_{1}, Z_{2}\in\mathfrak{u}(2m),\\ \mathrm{Tr}(Z_{1}+Z_{2})=0 \end{array} \right\}= \mathfrak{s}(\mathfrak{u}(2m)\oplus\mathfrak{u}(2m)). \] If we put \[ I'_{2m,2m}=\dfrac{1}{\sqrt{2}}\left(\begin{array}{cc} I_{2m} & \sqrt{-1}I_{2m}\\ \sqrt{-1}I_{2m} & I_{2m} \end{array}\right)\in SU(4m), \] then the product $I'_{2m,2m}I_{2m,2m}(I'_{2m,2m})^{-1}$ has the following expression: \[ I'_{2m,2m}I_{2m,2m}(I'_{2m,2m})^{-1} =\left( \begin{array}{cc} O & -\sqrt{-1}I_{2m} \\ \sqrt{-1}I_{2m} & O \end{array} \right)=:I_{2m,2m}''\in SU(4m). \] Since $(I_{2m,2m}'')^{2}=I_{4m}$ holds, we have another involution $\theta'':=\mathrm{Ad}(I_{2m,2m}'')$ of $\mathfrak{g}$. Then $\theta''$ satisfies $\theta\theta''=\theta''\theta$ and $\theta'\sim\theta''$. In addition, we have $\mathfrak{g}^{\theta''}\simeq\mathfrak{g}^{\theta'} =\mathfrak{s}(\mathfrak{u}(2m)\oplus\mathfrak{u}(2m))$.
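For $m=1$ the two matrix identities above can be verified numerically; the following is a minimal sketch, assuming the convention $I_{2,2}=\mathrm{diag}(I_{2},-I_{2})$ for the matrix defined in \eqref{eqn:IJ}.

```python
import numpy as np

i = 1j
I2, O2 = np.eye(2), np.zeros((2, 2))

I22 = np.block([[I2, O2], [O2, -I2]])                    # I_{2,2}
Ip = np.block([[I2, i * I2], [i * I2, I2]]) / np.sqrt(2)  # I'_{2,2}
Ipp = np.block([[O2, -i * I2], [i * I2, O2]])             # I''_{2,2}

# I'_{2,2} I_{2,2} (I'_{2,2})^{-1} = I''_{2,2}
assert np.allclose(Ip @ I22 @ np.linalg.inv(Ip), Ipp)
# (I''_{2,2})^2 = I_4, so Ad(I''_{2,2}) is an involution
assert np.allclose(Ipp @ Ipp, np.eye(4))
```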
Now, let us consider the following two compact symmetric triads: \[ (\mathfrak{g},{\theta}_{1},{\theta}_{2})=(\mathfrak{su}(4m),\theta,\theta'),\quad (\mathfrak{g},{\theta}_{1}',{\theta}_{2}')=(\mathfrak{su}(4m),\theta,\theta''). \] It follows from Corollary \ref{cor:cst_classify} that $(\mathfrak{g},{\theta}_{1},{\theta}_{2}) \sim (\mathfrak{g},{\theta}_{1}',{\theta}_{2}')$ holds. In addition, by Example \ref{ex:com_can} they are canonical. A direct calculation shows that \begin{align*} \mathfrak{k}_{1}\cap\mathfrak{k}_{2}&=\left\{ \left( \begin{array}{cc} X_{1} & O \\ O & X_{2} \end{array}
\right)\biggm| X_{1}, X_{2}\in\mathfrak{so}(2m) \right\}= \mathfrak{so}(2m)\oplus\mathfrak{so}(2m),\\ \mathfrak{k}_{1}'\cap\mathfrak{k}_{2}'&=\left\{ \left( \begin{array}{cc} X_{1} & X_{2} \\ -X_{2} & X_{1} \end{array}
\right)\biggm| \begin{array}{ll} X_{1}\in\mathfrak{so}(2m),\\ X_{2}\in\mathfrak{gl}(2m,\mathbb{R});X_{2}={}^{t}X_{2} \end{array} \right\}\simeq \mathfrak{u}(2m). \end{align*} This implies that $\mathfrak{k}_{1}\cap\mathfrak{k}_{2}$ is not isomorphic to $\mathfrak{k}_{1}'\cap\mathfrak{k}_{2}'$. Thus, we have $(\mathfrak{g},{\theta}_{1},{\theta}_{2})\not\equiv (\mathfrak{g},{\theta}_{1}',{\theta}_{2}')$. \end{ex}
The uniqueness of canonical forms can be proved by imposing an additional condition in the definition. However, at least in the commutative case, we do not need to determine a canonical form uniquely.
\subsection{Existence for canonical compact symmetric triads}\label{sec:cst_can_exist}
The purpose of this subsection is to prove the following.
\begin{thm}\label{thm:cst_exist_can} Assume that $G$ is simple. For any compact symmetric triad $(G,\theta_{1},\theta_{2})$, there exists a canonical compact symmetric triad $(G,\theta_{1},\theta_{2}')\sim(G,\theta_{1},\theta_{2})$. \end{thm}
By Proposition \ref{pro:exist_qcan}, without loss of generality we may assume that $(G,\theta_{1},\theta_{2})$ is quasi-canonical. Let $\mathfrak{t}$ be a maximal abelian subalgebra of $\mathfrak{g}$
which is quasi-canonical with respect to $(G,\theta_{1},\theta_{2})$. Denote by $(\Delta,\sigma_{1},\sigma_{2})=(\Delta,-d\theta_{1}|_{\mathfrak{t}},-d\theta_{2}|_{\mathfrak{t}})$ the double $\sigma$-system of $(G,\theta_{1},\theta_{2})$. Let $\Pi$ be a $(\sigma_{1},\sigma_{2})$-fundamental system of $\Delta$.
\begin{lem}\label{lem:order<=>Pi}
Let $n=\mathrm{ord}(\theta_{1}\theta_{2}|_{\mathfrak{t}})\in\mathbb{N}$. Then, we have$:$ \begin{enumerate} \item For any $\beta\in\Pi_{1,0}\cup\Pi_{2,0}$, we have $(\theta_{1}\theta_{2})^{n}=1$ on the root space $\mathfrak{g}(\mathfrak{t},\beta)$. \item The automorphism $\theta_{1}\theta_{2}$ of $\mathfrak{g}$ satisfies $(\theta_{1}\theta_{2})^{n}=1$ if and only if $(\theta_{1}\theta_{2})^{n}=1$ holds on $\mathfrak{g}(\mathfrak{t},\alpha)$ for all $\alpha\in\Pi-(\Pi_{1,0}\cup\Pi_{2,0})$. \end{enumerate} \end{lem}
\begin{proof} (1) Let $\beta$ be in $\Pi_{1,0}\cup\Pi_{2,0}$. It can be verified that $(\theta_{1}\theta_{2})^{n}=1$ holds on $\mathfrak{g}(\mathfrak{t},\beta)$ by a case-by-case verification. Let us consider the case when $n$ is odd and $\beta\in\Pi_{1,0}$. It is sufficient to show $(\theta_{2}\theta_{1})^{n}=1$ on $\mathfrak{g}(\mathfrak{t},\beta)$. If we write $n=2l+1$, then $(\theta_{1}\theta_{2})^{n}(\beta)=\beta$ yields $\theta_{2}((\theta_{1}\theta_{2})^{l}(\beta))=(\theta_{1}\theta_{2})^{l}(\beta)$. Hence we get $\mathfrak{g}(\mathfrak{t},\beta)\subset \mathfrak{k}_{1}^{\mathbb{C}}$ and $\mathfrak{g}(\mathfrak{t},(\theta_{1}\theta_{2})^{l}(\beta))\subset\mathfrak{k}_{2}^{\mathbb{C}}$ by Lemma \ref{lem:Klein_pro4.1}, (2). For any $X\in\mathfrak{g}(\mathfrak{t},\beta)$, we obtain \[ (\theta_{2}\theta_{1})^{n}(X) =(\theta_{2}\theta_{1})^{2l}\theta_{2}X =(\theta_{2}\theta_{1})^{l}\theta_{2}((\theta_{1}\theta_{2})^{l}(X)) =X. \] For the other cases, a similar argument shows that $(\theta_{1}\theta_{2})^{n}=1$ on $\mathfrak{g}(\mathfrak{t},\beta)$ for each $\beta\in\Pi_{1,0}\cup\Pi_{2,0}$.
(2) The necessity is clear. We will only prove the sufficiency. From (1), we have $(\theta_{1}\theta_{2})^{n}=1$ on $\sum_{\alpha\in\Pi}\mathfrak{g}(\mathfrak{t},\alpha)$. This yields $(\theta_{1}\theta_{2})^{n}=1$ on $\mathfrak{g}^{\mathbb{C}}$, equivalently, $(\theta_{1}\theta_{2})^{n}=1$ on $\mathfrak{g}$. Thus we have the assertion. \end{proof}
In order to prove Theorem \ref{thm:cst_exist_can} we need to give a refinement of Lemma \ref{lem:order<=>Pi}, (2) (see Lemma \ref{lem:order<=>Pi_refine}). Let $(G,\theta_{1},\theta_{2})$ be a quasi-canonical compact symmetric triad and $\mathfrak{t}$ be a quasi-canonical maximal abelian subalgebra of $\mathfrak{g}$ with respect to $(G,\theta_{1},\theta_{2})$. Denote by $(S_{1}(\Pi,\Pi_{1,0},p_{1}),S_{2}(\Pi,\Pi_{2,0},p_{2}))$
the corresponding double Satake diagram of $(G,\theta_{1},\theta_{2})$. We put $n=\mathrm{ord}(\theta_{1}\theta_{2}|_{\mathfrak{t}})$. Let $pr:\mathfrak{t}\to\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$ be the orthogonal projection. Then, for any $\alpha\in\mathfrak{t}$, we have \[ pr(\alpha)=\dfrac{1}{2n}\sum_{k=0}^{n-1}\{(\theta_{1}\theta_{2})^{k}(\alpha)-\theta_{1}(\theta_{1}\theta_{2})^{k}(\alpha)\} =\dfrac{1}{2n}\sum_{k=0}^{n-1}\{(\theta_{1}\theta_{2})^{k}(\alpha)-\theta_{2}(\theta_{1}\theta_{2})^{k}(\alpha)\}, \] which is also expressed, for $i=1,2$, as \begin{equation}\label{eqn:pr_expression2} pr(\alpha)=\dfrac{1}{2n}\sum_{k=0}^{n-1}(\theta_{1}\theta_{2})^{k}(\alpha-\theta_{i}(\alpha)). \end{equation}
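Since $pr$ is the orthogonal projection onto the common fixed-point set of the dihedral group generated by $\sigma_{1}=-\theta_{1}|_{\mathfrak{t}}$ and $\sigma_{2}=-\theta_{2}|_{\mathfrak{t}}$, a group of order $2n$, it coincides with the average of these $2n$ isometries. The following is a toy numerical check, with a pair of commuting involutions on $\mathbb{R}^{3}$ chosen purely for illustration.

```python
import numpy as np

# Toy model: commuting involutions on t = R^3, so n = ord(theta1 theta2) = 2.
t1 = np.diag([-1.0, -1.0, 1.0])   # theta_1 restricted to t
t2 = np.diag([-1.0, 1.0, -1.0])   # theta_2 restricted to t
n = 2
T = t1 @ t2                        # theta_1 theta_2, of order n

# Average over the 2n dihedral-group elements (theta1 theta2)^k and
# -theta1 (theta1 theta2)^k, i.e. (1/2n) sum_k {T^k - theta1 T^k}:
P = sum(np.linalg.matrix_power(T, k) - t1 @ np.linalg.matrix_power(T, k)
        for k in range(n)) / (2 * n)

# It agrees with the orthogonal projection onto the common (-1)-eigenspace
# of theta_1 and theta_2, here spanned by e_1:
assert np.allclose(P, np.diag([1.0, 0.0, 0.0]))
```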
The following lemma is fundamental in our argument.
\begin{lem}\label{lem:pr_S1S2_fr} Under the above settings, we have: \begin{enumerate} \item Fix $i\in\{1,2\}$. For any $\alpha,\beta\in\Pi-\Pi_{i,0}$, $\alpha=p_{i}(\beta)$ yields $pr(\alpha)=pr(\beta)$. \item $pr(\alpha)=0$ holds for all $\alpha\in \Pi_{1,0}\cup\Pi_{2,0}$. \end{enumerate} \end{lem}
This lemma can be shown by means of the expression \eqref{eqn:pr_expression2}. We omit the details.
We set $\Pi_{0}=\{\alpha\in \Pi\mid pr(\alpha)=0\}$. By Lemma \ref{lem:pr_S1S2_fr}, (2), we have $\Pi_{1,0}\cup\Pi_{2,0}\subset\Pi_{0}$. In fact, it is verified that $\Pi_{0}$ is expressed as \[ \Pi_{0}=\Pi_{1,0}\cup\Pi_{2,0}\cup \{\alpha\in\Pi-(\Pi_{1,0}\cup\Pi_{2,0})\mid p_{1}(\alpha)\in \Pi_{2,0} \text{ or } p_{2}(\alpha)\in \Pi_{1,0} \}. \] In the case when $(G,\theta_{1},\theta_{2})$ is commutative, it follows from Lemma \ref{lem:a=0a_i=0}, (2) that $\Pi_{0}=\Pi_{1,0}\cup\Pi_{2,0}$ holds.
Let $\Pi^{*}$ be a subset of $\Pi-\Pi_{0}$ satisfying \[ \{\alpha,p_{1}(\alpha),p_{2}(\alpha)\mid \alpha\in \Pi^{*}\}=\Pi-\Pi_{0} \] with minimum cardinality among all such subsets. We call such $\Pi^{*}$ a \textit{core} of $\Pi-\Pi_{0}$. Clearly, if $\Pi=\Pi_{0}$, then we have $\Pi^{*}=\emptyset$. By means of the Satake involutions $p_{1}, p_{2}$, we can reconstruct $\Pi_{0}$ and $\Pi-\Pi_{0}$ from $\Pi_{1,0}\cup\Pi_{2,0}$ and $\Pi^{*}$, respectively. Then $\Pi$ is obtained from $\Pi^{*}\cup\Pi_{1,0}\cup\Pi_{2,0}$, and so is $\Delta^{+}$.
By Lemma \ref{lem:a=0a_i=0}, we get $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2}) =pr(\mathrm{span}_{\mathbb{R}}\Pi) =\mathrm{span}_{\mathbb{R}}\{pr(\alpha)\mid \alpha\in\Pi-\Pi_{0}\}$. Furthermore, we have the following proposition.
\begin{pro}\label{pro:cardPi=rank} Assume that $G$ is simple. Then, there exists a core $\Pi^{*}\subset \Pi-\Pi_{0}$ satisfying the following conditions$:$ \begin{enumerate} \item $\{pr(\alpha)\mid \alpha\in\Pi^{*}\}$ are linearly independent. \item $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2}) =\mathrm{span}_{\mathbb{R}}\{pr(\alpha)\mid \alpha\in\Pi^{*}\}$. \end{enumerate} In particular, the cardinality of $\Pi^{*}$ is equal to the dimension of $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$. \end{pro}
The proof is given by a case-by-case verification based on the classification.
\begin{ex} Let us consider the case when $(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})= (\mathfrak{so}(8),\mathfrak{so}(3)\oplus\mathfrak{so}(5),\tilde{\kappa}(\mathfrak{so}(3)\oplus\mathfrak{so}(5)))$. Its double Satake diagram is given by $(S(\Pi,\psi(\Pi_{0}^{(3)}),p^{(3)}),S(\Pi,\kappa(\Pi_{0}^{(3)}),\kappa\cdot p^{(3)}))$ as in Table \ref{table:Satake_DI}. Then $\Pi^{*}=\{\alpha_{2},\alpha_{3}\}$ gives a core of $\Pi-\Pi_{0}$. Since we have \[ pr(\alpha_{2})=\alpha_{2},\quad pr(\alpha_{3})=\dfrac{1}{3}(\alpha_{1}+\alpha_{3}+\alpha_{4}), \] $\{pr(\alpha_{2}),pr(\alpha_{3})\}$ are linearly independent. From $\dim(\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2}))=2$, we have $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})=\mathrm{span}_{\mathbb{R}} \{pr(\alpha_{2}),pr(\alpha_{3})\}$. \end{ex}
In a similar manner, we can prove Proposition \ref{pro:cardPi=rank} for the other cases. We omit the details.
The following is a refinement of Lemma \ref{lem:order<=>Pi}, (2).
\begin{lem}\label{lem:order<=>Pi_refine}
Let $n=\mathrm{ord}(\theta_{1}\theta_{2}|_{\mathfrak{t}})\in\mathbb{N}$ and $\Pi^{*}\subset \Pi-\Pi_{0}$ be a core. Then the automorphism $\theta_{1}\theta_{2}$ of $\mathfrak{g}$ satisfies $(\theta_{1}\theta_{2})^{n}=1$ if and only if $(\theta_{1}\theta_{2})^{n}=1$ holds on $\mathfrak{g}(\mathfrak{t},\alpha)$ for all $\alpha\in\Pi^{*}$. \end{lem}
\begin{proof} We will only prove the sufficiency. We define a subspace $\mathfrak{h}$ of $\mathfrak{g}^{\mathbb{C}}$ as follows: \[ \mathfrak{h} =\mathfrak{t}^{\mathbb{C}} \oplus\sum_{\beta\in\Pi_{1,0}\cup\Pi_{2,0}} \left(\mathfrak{g}(\mathfrak{t},\beta)\oplus\mathfrak{g}(\mathfrak{t},-\beta)\right) \oplus\sum_{\alpha\in\Pi^{*}} \left(\mathfrak{g}(\mathfrak{t},\alpha)\oplus\mathfrak{g}(\mathfrak{t},-\alpha)\right). \] By the definition, we have $(\theta_{1}\theta_{2})^{n}=1$ on $\mathfrak{h}$. This implies that $(\theta_{1}\theta_{2})^{n}=1$ holds on $\mathfrak{h}+\theta_{1}(\mathfrak{h})+\theta_{2}(\mathfrak{h})$. Since $\mathfrak{h}+\theta_{1}(\mathfrak{h})+\theta_{2}(\mathfrak{h})$ generates $\mathfrak{g}^{\mathbb{C}}$, we get $(\theta_{1}\theta_{2})^{n}=1$ on $\mathfrak{g}^{\mathbb{C}}$. Thus we have the assertion. \end{proof}
We are ready to prove Theorem \ref{thm:cst_exist_can}.
\begin{proof}[Proof of Theorem $\ref{thm:cst_exist_can}$] Without loss of generality we may assume that $(G,\theta_{1},\theta_{2})$ is quasi-canonical. Let $\mathfrak{t}$ be a maximal abelian subalgebra of $\mathfrak{g}$
which is quasi-canonical with respect to $(G,\theta_{1},\theta_{2})$. Let $n=\mathrm{ord}(\theta_{1}\theta_{2}|_{\mathfrak{t}})$ and $\Pi^{*}\subset \Pi-\Pi_{0}$ be a core as in Proposition \ref{pro:cardPi=rank}.
First,
we show $\mathrm{ord}(\theta_{1}\theta_{2})=n$ in the case when $\Pi^{*}=\emptyset$. Indeed, we get $\mathrm{ord}(\theta_{1}\theta_{2})\geq\mathrm{ord}(\theta_{1}\theta_{2}|_{\mathfrak{t}})=n$. In addition, by Lemma \ref{lem:order<=>Pi}, (1), we have $(\theta_{1}\theta_{2})^{n}=1$. Hence, we obtain $\mathrm{ord}(\theta_{1}\theta_{2})=n$, so that $(G,\theta_{1},\theta_{2})$ is canonical.
Secondly, we consider the case when $\Pi^{*}\neq\emptyset$. For any $H\in\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$, if we put $g=\exp(H)$, then $\mathrm{Ad}(g)$ gives the identity transformation on $\mathfrak{t}$. Hence $\mathfrak{t}$ is also quasi-canonical with respect to $(G,\theta_{1},\tau_{g}\theta_{2}\tau_{g}^{-1})=:(G,\theta_{1},\theta_{2}')$. Then it is sufficient to show that there exists $H\in\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$ such that $(\theta_{1}\theta_{2}')^{n}=1$ holds.
Let $H$ be any element in $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$. Let $\{X_{\alpha}\}_{\alpha\in\Delta}$ be a Chevalley basis of $\mathfrak{g}^{\mathbb{C}}$ with $\overline{X_{\alpha}}=-X_{-\alpha}$ (cf.~Lemma \ref{lem:Cbasis_conj}). For each $\alpha\in\Delta$, we define a complex number $S_{\alpha}$ by $(\theta_{1}\theta_{2})^{n}X_{\alpha}=S_{\alpha}X_{\alpha}$. By the definition, we have $(\theta_{1}\theta_{2}')^{n}X_{\alpha}= e^{\sqrt{-1}\INN{2n\,\cdot\, pr(\alpha)}{H}}S_{\alpha}X_{\alpha}$ for each $\alpha\in\Delta$. Then, it follows from Lemma \ref{lem:order<=>Pi_refine} that $(\theta_{1}\theta_{2}')^{n}=1$ holds if and only if $e^{\sqrt{-1}\INN{2n\,\cdot\, pr(\alpha)}{H}}S_{\alpha}=1$ for all $\alpha\in\Pi^{*}$. From Lemma \ref{lem:Klein_pro4.1}
it is shown that $|S_{\alpha}|=1$ holds, so that there exists $u_{\alpha}\in\mathbb{R}$ such that $S_{\alpha}=e^{\sqrt{-1}u_{\alpha}}$. It follows from Proposition \ref{pro:cardPi=rank}, (1) that the square matrix $(\INN{pr(\alpha)}{pr(\beta)})_{\alpha,\beta\in\Pi^{*}}$ is invertible, so that the following equation has a solution $H$: \[ \INN{2n\,\cdot\, pr(\alpha)}{H}+u_{\alpha}=0 \quad (\alpha\in\Pi^{*}). \] Then $(\theta_{1}\theta_{2}')^{n}=1$ holds for the solution $H$.
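The final solvability step is plain linear algebra: writing $H=\sum_{\beta\in\Pi^{*}}c_{\beta}\,pr(\beta)$, the conditions $\INN{2n\cdot pr(\alpha)}{H}+u_{\alpha}=0$ become the linear system $2nGc=-u$, where $G=(\INN{pr(\alpha)}{pr(\beta)})_{\alpha,\beta\in\Pi^{*}}$ is the invertible Gram matrix. A small numerical sketch, with random linearly independent vectors standing in for the $pr(\alpha)$ and random phases standing in for the $u_{\alpha}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                               # stands in for ord(theta1 theta2|_t)
V = rng.standard_normal((4, 7))     # rows: stand-ins for pr(alpha), alpha in Pi^*
u = rng.standard_normal(4)          # stand-ins for the phases u_alpha

G = V @ V.T                         # Gram matrix <pr(alpha), pr(beta)>
c = np.linalg.solve(2 * n * G, -u)  # solvable: the rows of V are independent
H = V.T @ c                         # a solution H in span{pr(alpha)}

# Each condition <2n pr(alpha), H> + u_alpha = 0 is satisfied:
assert np.allclose(2 * n * (V @ H) + u, 0.0)
```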
This completes the proof. \end{proof}
\subsection{Properties of canonical compact symmetric triads}\label{sec:rankorder}
The purpose of this subsection is to prove the following.
\begin{thm}\label{thm:cst_RO_can} Assume that $G$ is simple. Let $(G,\theta_{1},\theta_{2})$ be a canonical compact symmetric triad. Then, the following hold$:$ \begin{enumerate} \item Let $\mathfrak{t}$ be a maximal abelian subalgebra of $\mathfrak{g}$ which is canonical with respect to $(G,\theta_{1},\theta_{2})$. Then, $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$ is a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$. \item $\mathrm{ord}[(G,\theta_{1},\theta_{2})]=\mathrm{ord}(\theta_{1}\theta_{2})$. \end{enumerate} \end{thm}
\begin{rem}\label{rem:commutable} Let $(G,\theta_{1},\theta_{2})$ be a canonical compact symmetric triad and $\mathfrak{t}$ be a canonical maximal abelian subalgebra of $\mathfrak{g}$ with respect to $(G,\theta_{1},\theta_{2})$. If $\theta_{1}$ and $\theta_{2}$ commute on $\mathfrak{t}$, then $[(G,\theta_{1},\theta_{2})]$ is commutable. In addition, Theorem \ref{thm:cst_RO_can}, (2) implies that the converse is also true in the case when $G$ is simple. Thus, we have the complete classification of commutable compact symmetric triads $[(G,\theta_{1},\theta_{2})]$ by means of Table \ref{table:rank_ord}. We note that, if the simple Lie group $G$ is of exceptional type, then $[(G,\theta_{1},\theta_{2})]$ is commutable.
\subsubsection{Proof of Theorem $\ref{thm:cst_RO_can}$, $(1)$}
We give two sufficient conditions for $\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}}$ to be a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$. One is that $(G,\theta_{1},\theta_{2})$ is commutative as shown in Lemma \ref{lem:comm_quasican}. The other is the following lemma.
\begin{lem}\label{lem:rank_ineq} Let $(G,\theta_{1},\theta_{2})$ be a compact symmetric triad and $\mathfrak{t}$ be a maximal abelian subalgebra of $\mathfrak{g}$ such that $\mathfrak{t}\cap\mathfrak{m}_{i}$ $(i=1,2)$ is a maximal abelian subspace of $\mathfrak{m}_{i}$. We denote by $(\Delta,\sigma_{1},\sigma_{2})$ the double $\sigma$-system of $(G,\theta_{1},\theta_{2})$ with respect to $\mathfrak{t}$. Then the following holds: \begin{equation}\label{eqn:rank_ineq} \dim(\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}})\leq \min\{\mathrm{rank}(G,\theta_{i})\mid i=1,2\}. \end{equation} Furthermore, if equality holds in \eqref{eqn:rank_ineq}, then $\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}}=\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$ becomes a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$. \end{lem}
\begin{proof} By the definition we have the inequality \eqref{eqn:rank_ineq}. Assume that the equality in \eqref{eqn:rank_ineq} holds. Let $\mathfrak{a}$ be a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$ containing $\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}}$. From $\dim(\mathfrak{a})\leq \mathrm{rank}(G,\theta_{i})$, we have $\dim(\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}})\leq \dim(\mathfrak{a})\leq \min\{\mathrm{rank}(G,\theta_{i})\mid i=1,2\}$. By the assumption we obtain $\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}}=\mathfrak{a}$. Thus, we have the assertion. \end{proof}
There are some non-commutative, canonical compact symmetric triads such that the equality in \eqref{eqn:rank_ineq} does not hold. In the case when $G$ is simple, we can classify such compact symmetric triads at the Lie algebra level by means of Table \ref{table:rank_ord}, which are listed in Table \ref{table:r<rr}.
\begin{table}[H] \centering \renewcommand{\arraystretch}{1.3} \caption{ Canonical compact symmetric triads $(G,\theta_{1},\theta_{2})$ satisfying $\mathrm{ord}(\theta_{1}\theta_{2})\geq 3$ and $\dim(\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}})< \min\{\mathrm{rank}(G,\theta_{i})\mid i=1,2\}$ }\label{table:r<rr} \begin{tabular}{cc} \hline \hline $(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})$ & Remark \\ \hline \hline $(\mathfrak{su}(2a+2b+2),\mathfrak{sp}(a+b+1), \mathfrak{s}(\mathfrak{u}(2a+1)\oplus\mathfrak{u}(2b+1)))$ & $0\leq a<b$ \\ $(\mathfrak{so}(2a+2b+2),\mathfrak{so}(2a+1)\oplus\mathfrak{so}(2b+1), \mathfrak{u}(a+b+1))$ & $0\leq a <b$ \\ $(\mathfrak{so}(8),\mathfrak{so}(1)\oplus\mathfrak{so}(7),\tilde{\kappa}(\mathfrak{so}(c)\oplus\mathfrak{so}(8-c)))$ & $c=1,2,3$ \\ $(\mathfrak{so}(8),\mathfrak{so}(2)\oplus\mathfrak{so}(6),\tilde{\kappa}(\mathfrak{so}(c)\oplus\mathfrak{so}(8-c)))$ & $c=2,3$ \\ $(\mathfrak{so}(8),\mathfrak{so}(3)\oplus\mathfrak{so}(5),\tilde{\kappa}(\mathfrak{so}(3)\oplus\mathfrak{so}(5)))$ & \\
\hline \hline \end{tabular} \end{table}
For these canonical compact symmetric triads, we will prove Theorem \ref{thm:cst_RO_can}, (1) by a case-by-case verification.
\begin{ex}\label{ex:so3535_maximal} Let us consider the case when $(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})=(\mathfrak{so}(8),\mathfrak{so}(3)\oplus\mathfrak{so}(5),\tilde{\kappa}(\mathfrak{so}(3)\oplus\mathfrak{so}(5)))$. Then we have \[ 2=\dim(\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}}) \leq \max\{\dim(\mathfrak{t}^{\sigma_{1}}\cap s\mathfrak{t}^{\sigma_{2}})\mid s\in W(\Delta)\}\leq 3. \] We will show that $\max\{\dim(\mathfrak{t}^{\sigma_{1}}\cap s\mathfrak{t}^{\sigma_{2}})\mid s\in W(\Delta)\}=2$. Under Notation \ref{nota:Dr} we have \[\mathfrak{t}^{\sigma_{1}}= \mathbb{R}e_{3} \oplus\mathbb{R}(e_{1}-e_{2}) \oplus\mathbb{R}(e_{2}-e_{3}),~ \mathfrak{t}^{\sigma_{2}}=\mathbb{R}( e_{1}-e_{2}+e_{3}-e_{4})\oplus\mathbb{R}(e_{2}-e_{3})\oplus\mathbb{R}(e_{3}+e_{4}). \] Suppose for contradiction that there exists $s\in W(\Delta)$ satisfying $\dim(\mathfrak{t}^{\sigma_{1}}\cap s\mathfrak{t}^{\sigma_{2}})=3$. Then we have $s^{-1}\mathfrak{t}^{\sigma_{1}}=\mathfrak{t}^{\sigma_{2}}$. It follows from the expression of $\mathfrak{t}^{\sigma_{1}}$ that there exists $j\in\{1,2,3,4\}$ satisfying $e_{j}\in s^{-1}\mathfrak{t}^{\sigma_{1}}$. This contradicts the fact that $\mathfrak{t}^{\sigma_{2}}$ contains none of the vectors $e_{1},e_{2},e_{3},e_{4}$. In addition, by Proposition \ref{pro:rank=maxdim} we obtain $\dim(\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}}) = \max\{\dim(\mathfrak{t}^{\sigma_{1}}\cap s\mathfrak{t}^{\sigma_{2}})\mid s\in W(\Delta)\} =\mathrm{rank}(G,\theta_{1},\theta_{2})$. \end{ex}
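The dimension counts in this example can be checked by elementary linear algebra, using $\dim(U\cap W)=\dim U+\dim W-\dim(U+W)$:

```python
import numpy as np

# Spanning vectors of t^{sigma_1} and t^{sigma_2} in the coordinates e_1..e_4:
V1 = np.array([[0, 0, 1, 0],     # e_3
               [1, -1, 0, 0],    # e_1 - e_2
               [0, 1, -1, 0]])   # e_2 - e_3
V2 = np.array([[1, -1, 1, -1],   # e_1 - e_2 + e_3 - e_4
               [0, 1, -1, 0],    # e_2 - e_3
               [0, 0, 1, 1]])    # e_3 + e_4

rank = np.linalg.matrix_rank

# dim(U ∩ W) = dim U + dim W - dim(U + W)
dim_int = rank(V1) + rank(V2) - rank(np.vstack([V1, V2]))
assert dim_int == 2

# t^{sigma_2} contains none of the vectors e_1, ..., e_4:
for j in range(4):
    e = np.eye(4)[j]
    assert rank(np.vstack([V2, e])) == rank(V2) + 1
```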
In a similar manner as in Example \ref{ex:so3535_maximal}, we have Theorem \ref{thm:cst_RO_can}, (1) for $(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})=(\mathfrak{so}(8),\mathfrak{so}(a)\oplus\mathfrak{so}(8-a),\tilde{\kappa}(\mathfrak{so}(c)\oplus\mathfrak{so}(8-c)))$ with $(a,c)=(1,\{1, 2,3\})$, $(2,\{2,3\})$.
\begin{ex} Let us consider the case when $(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})= (\mathfrak{so}(2a+2b+2),\mathfrak{so}(2a+1)\oplus\mathfrak{so}(2b+1), \mathfrak{u}(a+b+1))$ with $0\leq a<b$. We will show the following relation: \[ \mathrm{rank}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})=a(=\dim(\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2}))). \] We define two involutions $\theta_{1}',\theta_{2}'$ on $\mathfrak{so}(2a+2b+2)$ as follows: \[ \theta'_{1}(X)=I_{2a+1,2b+1}XI_{2a+1,2b+1},\quad \theta'_{2}(X)=J_{a+b+1}XJ_{a+b+1}^{-1}. \] Then we have $\mathfrak{g}^{\theta_{1}'}=\mathfrak{so}(2a+1)\oplus\mathfrak{so}(2b+1)$ and $\mathfrak{g}^{\theta_{2}'}=\mathfrak{u}(a+b+1)$. By the classification, $(\mathfrak{g},\theta_{1}',\theta_{2}')$ is in $[(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})]$. From \[ {\mathfrak m}_{1}'\cap {\mathfrak m}_{2}' =\mathfrak{g}^{-\theta_{1}'}\cap\mathfrak{g}^{-\theta_{2}'} =\left\{\left. \begin{pmatrix} O_{2a+1} & O & W & O \\ O & O_{b-a} & O & O \\ W & O & O_{2a+1} & O \\ O & O & O & O_{b-a}
\end{pmatrix}\right| W\in \mathfrak{so}(2a+1) \right\}, \] we obtain \[ [\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}',\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}']= \left\{\left. \begin{pmatrix} X & & & \\
& O & & \\
& & X & \\
& & & O
\end{pmatrix}\right| X\in \mathfrak{so}(2a+1) \right\}= \mathfrak{so}(2a+1). \] For any $Z\in [\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}',\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}']$ and $Y\in \mathfrak{m}_{1}'\cap {\mathfrak m}_{2}'$ with \[ Z=\begin{pmatrix} X & & & \\
& O & & \\
& & X & \\
& & & O \end{pmatrix},\quad Y=\begin{pmatrix} O_{2a+1} & O & W & O \\ O & O_{b-a} & O & O \\ W & O & O_{2a+1} & O \\ O & O & O & O_{b-a} \end{pmatrix}, \] we have \[ [Z,Y]=\begin{pmatrix} O_{2a+1} & O & [X,W] & O \\ O & O_{b-a} & O & O \\ [X,W] & O & O_{2a+1} & O \\ O & O & O & O_{b-a} \end{pmatrix}. \] This yields \begin{align*} &([\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}',\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}'] \oplus \mathfrak{m}_{1}'\cap {\mathfrak m}_{2}',[\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}',\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}'])\\ &\qquad\simeq (\mathfrak{so}(2a+1)\oplus \mathfrak{so}(2a+1), \Delta \mathfrak{so}(2a+1)). \end{align*} Hence we get \[ \mathrm{rank}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2}) =\mathrm{rank}(\mathfrak{g},\theta_{1}',\theta_{2}') =\mathrm{rank}(\mathfrak{so}(2a+1))=a,\] so that $\mathfrak{t}\cap(\mathfrak{m}_{1}\cap\mathfrak{m}_{2})$ is a maximal abelian subspace of $\mathfrak{m}_{1}\cap\mathfrak{m}_{2}$. \end{ex}
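The block computation of $[Z,Y]$ above is elementary matrix arithmetic and can be sanity-checked numerically. The following Python sketch is purely illustrative (the helper names and the sample values $a=1$, $b=2$, hence block sizes $(3,1,3,1)$, are ours); it verifies that $[Z,Y]$ has the claimed off-diagonal blocks $[X,W]$:

```python
# Verify the block form of [Z, Y] for sample values a = 1, b = 2,
# i.e. block sizes (2a+1, b-a, 2a+1, b-a) = (3, 1, 3, 1), total size 8.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matsub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def embed(blocks, sizes):
    """Assemble a block matrix from a dict keyed by (block_row, block_col)."""
    n = sum(sizes)
    offs = [sum(sizes[:i]) for i in range(len(sizes))]
    M = [[0] * n for _ in range(n)]
    for (bi, bj), B in blocks.items():
        for i in range(sizes[bi]):
            for j in range(sizes[bj]):
                M[offs[bi] + i][offs[bj] + j] = B[i][j]
    return M

sizes = [3, 1, 3, 1]
# W, X: arbitrary skew-symmetric 3x3 matrices, i.e. elements of so(3).
W = [[0, 1, -2], [-1, 0, 3], [2, -3, 0]]
X = [[0, 4, 5], [-4, 0, -1], [-5, 1, 0]]

Y = embed({(0, 2): W, (2, 0): W}, sizes)    # Y in the intersection m1' \cap m2'
Z = embed({(0, 0): X, (2, 2): X}, sizes)    # Z in [m1' \cap m2', m1' \cap m2']
XW = matsub(matmul(X, W), matmul(W, X))     # the commutator [X, W]
lhs = matsub(matmul(Z, Y), matmul(Y, Z))    # the commutator [Z, Y]
rhs = embed({(0, 2): XW, (2, 0): XW}, sizes)
assert lhs == rhs
```

Note that $[X,W]$ is again skew-symmetric, in accordance with $[X,W]\in\mathfrak{so}(2a+1)$.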
\begin{ex} Let us consider the case when $(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})=(\mathfrak{su}(2a+2b+2),\mathfrak{sp}(a+b+1), \mathfrak{s}(\mathfrak{u}(2a+1)\oplus\mathfrak{u}(2b+1)))$ with $0\leq a<b$. It is sufficient to show $\mathrm{rank}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})=a$. We define two involutions $\theta_{1}',\theta_{2}'$ on $\mathfrak{su}(2a+2b+2)$ as follows: \[ \theta'_{1}(X)=J_{a+b+1}\bar{X}J_{a+b+1}^{-1},\quad \theta'_{2}(X)= I_{2a+1,2b+1}XI_{2a+1,2b+1}. \] Then we have $\mathfrak{g}^{\theta_{1}'}=\mathfrak{sp}(a+b+1)$ and $\mathfrak{g}^{\theta_{2}'}=\mathfrak{s}(\mathfrak{u}(2a+1)\oplus\mathfrak{u}(2b+1))$, from which $(\mathfrak{g},\theta_{1}',\theta_{2}')$ is in $[(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})]$. A direct calculation shows \[ {\mathfrak m}_{1}'\cap {\mathfrak m}_{2}' =\mathfrak{g}^{-\theta_{1}'}\cap\mathfrak{g}^{-\theta_{2}'} =\left\{\left. \begin{pmatrix} O_{2a+1} & O & Y & O \\ O & O_{b-a} & O & O \\ \bar{Y} & O & O_{2a+1} & O \\ O & O & O & O_{b-a}
\end{pmatrix}\right| Y=-{}^tY \in M(2a+1,\mathbb{C}) \right\}.\] In the case when $a=0$, we have $\mathfrak{m}_{1}'\cap\mathfrak{m}_{2}'=\{0\}$. This implies that $\mathrm{rank}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})= \mathrm{rank}(\mathfrak{g},\theta_{1}',\theta_{2}')=0=a$. In what follows, we assume that $a\geq 1$. Since $\mathfrak{u}(2a+1)$ can be expressed by \[ \mathfrak{u}(2a+1)=\mathrm{span}_{\mathbb{R}}\{X\bar{Y}-Y\bar{X}\mid X,Y\in\mathfrak{gl}(2a+1,\mathbb{C}), {}^tX=-X,{}^tY=-Y\}, \] we have \[ [\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}',\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}']= \left\{\left. \begin{pmatrix} X & & & \\
& O & & \\
& & \bar{X} & \\
& & & O
\end{pmatrix}\right| X\in \mathfrak{u}(2a+1) \right\}=\mathfrak{u}(2a+1). \] Hence we get \[ ([\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}',\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}'] \oplus \mathfrak{m}_{1}'\cap {\mathfrak m}_{2}',[\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}',\mathfrak{m}_{1}'\cap {\mathfrak m}_{2}']) \simeq (\mathfrak{so}(4a+2),\mathfrak{u}(2a+1)), \] from which $\mathrm{rank}(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2}) =\mathrm{rank}(\mathfrak{so}(4a+2),\mathfrak{u}(2a+1))=a$. \end{ex}
From the above argument we have completed the proof of Theorem \ref{thm:cst_RO_can}, (1).
\subsubsection{Proof of Theorem $\ref{thm:cst_RO_can}$, $(2)$}
In the rest of this paper, we give the proof of Theorem \ref{thm:cst_RO_can}, (2). In the case when $\mathrm{rank}(G,\theta_{1},\theta_{2})=0$, we have $\mathrm{ord}[(G,\theta_{1},\theta_{2})]=\mathrm{ord}(\theta_{1}\theta_{2})$ by Proposition \ref{pro:rank0_ordconst}. In what follows, we will prove $\mathrm{ord}[(G,\theta_{1},\theta_{2})]=\mathrm{ord}(\theta_{1}\theta_{2})$ in the case when $\mathrm{rank}(G,\theta_{1},\theta_{2})\geq 1$. Our proof is based on a case-by-case verification of the order for each canonical compact symmetric triad $(G,\theta_{1},\theta_{2})$. We note that $n:=\mathrm{ord}(\theta_{1}\theta_{2})\in\{1,2,3,4,6\}$ holds by the classification.
In the case when $n=1$, we have $1\leq \mathrm{ord}[(G,\theta_{1},\theta_{2})]\leq \mathrm{ord}(\theta_{1}\theta_{2})=1$. This yields $\mathrm{ord}[(G,\theta_{1},\theta_{2})]=\mathrm{ord}(\theta_{1}\theta_{2})$.
Next, we consider the case when $n=2$. Suppose for a contradiction that there exists $(G,\theta_{1}',\theta_{2}')\sim(G,\theta_{1},\theta_{2})$ such that $\mathrm{ord}(\theta_{1}'\theta_{2}')=1$. Then we have $\theta_{1}'=\theta_{2}'$. As explained in Example \ref{ex:order1_can},
$(G,\theta_{1}',\theta_{1}')$ is canonical. If we denote by $(\Delta',\sigma_{1}',\sigma_{1}')=(\Delta',-d\theta_{1}'|_{\mathfrak{t}'},-d\theta_{1}'|_{\mathfrak{t}'})$ the double $\sigma$-system of $(G,\theta_{1}',\theta_{1}')$, then we have $(\Delta,\sigma_{1},\sigma_{2})\equiv(\Delta',\sigma_{1}',\sigma_{1}')$ by Proposition \ref{pro:cst_dstake_determ}. This yields
$1=\mathrm{ord}(d\theta_{1}'d\theta_{1}'|_{\mathfrak{t}'})=\mathrm{ord}(d\theta_{1}d\theta_{2}|_{\mathfrak{t}})=2$, which is a contradiction. Thus we have $2\leq \mathrm{ord}[(G,\theta_{1},\theta_{2})]\leq \mathrm{ord}(\theta_{1}\theta_{2})=2$, that is, $\mathrm{ord}[(G,\theta_{1},\theta_{2})]=\mathrm{ord}(\theta_{1}\theta_{2})$.
Here, we note that $n=6$ implies $\mathrm{rank}(G,\theta_{1},\theta_{2})=0$ by the classification. Hence it remains to consider the case when $n\in\{3,4\}$, in which $(G,\theta_{1},\theta_{2})$ or $(G,\theta_{2},\theta_{1})$ is locally isomorphic to one of the following two cases among the compact symmetric triads:
(Case 1) $(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})=(\mathfrak{so}(8),\mathfrak{so}(a)\oplus\mathfrak{so}(8-a),\tilde{\kappa}(\mathfrak{so}(3)\oplus\mathfrak{so}(5)))$ for $a=2,3$: First, we consider the case when $a=2$. Then we have $\mathrm{ord}(\theta_{1}\theta_{2})=4$. Since $\mathfrak{k}_{1}\not\simeq\mathfrak{k}_{2}$ implies $\theta_{1}\not\sim\theta_{2}$, we have $\mathrm{ord}[(G,\theta_{1},\theta_{2})]\geq 2$. Suppose for a contradiction that there exists $(G,\theta_{1}',\theta_{2}')\sim(G,\theta_{1},\theta_{2})$ such that $\mathrm{ord}(\theta_{1}'\theta_{2}')=2$. As explained in Example \ref{ex:com_can}, $(G,\theta_{1}',\theta_{2}')$ is canonical. Let
$(\Delta',\sigma_{1}',\sigma_{2}')=(\Delta',-d\theta_{1}'|_{\mathfrak{t}'},-d\theta_{2}'|_{\mathfrak{t}'})$ denote the double $\sigma$-system of $(G,\theta_{1}',\theta_{2}')$. From $(\Delta,\sigma_{1},\sigma_{2})\equiv(\Delta',\sigma_{1}',\sigma_{2}')$, we have
$2=\mathrm{ord}(d\theta_{1}'d\theta_{2}'|_{\mathfrak{t}'})
=\mathrm{ord}(d\theta_{1}d\theta_{2}|_{\mathfrak{t}})=4$, which is a contradiction. Furthermore, from $(\theta_{1}\theta_{2})^{4}=1$, it can be verified that there exist no compact symmetric triads $(G,\theta_{1}',\theta_{2}')\sim(G,\theta_{1},\theta_{2})$ satisfying $\mathrm{ord}(\theta_{1}'\theta_{2}')=3$ by Proposition \ref{pro:ordn_n+1_sim}. Thus we have $\mathrm{ord}[(G,\theta_{1},\theta_{2})]=\mathrm{ord}(\theta_{1}\theta_{2})$.
Secondly, we consider the case when $a=3$. Then we have $\mathrm{rank}(G,\theta_{1},\theta_{2})=2$ and $\mathrm{ord}(\theta_{1}\theta_{2})=3$. Suppose for a contradiction that there exists $(G,\theta_{1}',\theta_{2}')\sim(G,\theta_{1},\theta_{2})$ such that $\mathrm{ord}(\theta_{1}'\theta_{2}')=1$, that is, $\theta_{1}'=\theta_{2}'$. Let
$(\Delta',\sigma_{1}',\sigma_{1}')=(\Delta',-d\theta_{1}'|_{\mathfrak{t}'},-d\theta_{1}'|_{\mathfrak{t}'})$ denote the double $\sigma$-system of $(G,\theta_{1}',\theta_{1}')$. From $(\Delta,\sigma_{1},\sigma_{2})\equiv(\Delta',\sigma_{1}',\sigma_{1}')$, we have $2 =\dim(\mathfrak{t}^{\sigma_{1}}\cap\mathfrak{t}^{\sigma_{2}}) =\dim(\mathfrak{t}^{\sigma_{1}'}\cap\mathfrak{t}^{\sigma_{1}'})=3$, which is a contradiction. By a similar argument as in the case when $a=2$, it can be shown that there exist no compact symmetric triads $(G,\theta_{1}',\theta_{2}')\sim(G,\theta_{1},\theta_{2})$ satisfying $\mathrm{ord}(\theta_{1}'\theta_{2}')=2$. Thus, we have $\mathrm{ord}[(G,\theta_{1},\theta_{2})]=\mathrm{ord}(\theta_{1}\theta_{2})$.
(Case 2) $(\mathfrak{g},\mathfrak{k}_{1},\mathfrak{k}_{2})=(\mathfrak{su}(2m),\mathfrak{sp}(m),\mathfrak{s}(\mathfrak{u}(2a+1)\oplus\mathfrak{u}(2b+1)))$ or $(\mathfrak{so}(2m),\mathfrak{so}(2a+1)\oplus\mathfrak{so}(2b+1),\mathfrak{u}(m))$ for $a<b$, $m=a+b+1$: Then we have $\mathrm{ord}(\theta_{1}\theta_{2})=4$ and $\theta_{1}\not\sim\theta_{2}$. By a similar argument as in (Case 1) with $a=2$, it can be shown that $\mathrm{ord}[(G,\theta_{1},\theta_{2})]=\mathrm{ord}(\theta_{1}\theta_{2})$.
From the above argument we have completed the proof of Theorem \ref{thm:cst_RO_can}, (2).
\end{document} |
\begin{document}
\title[The linking form and non-negative curvature]{Highly connected $7$-manifolds, the linking form and non-negative curvature}
\author[S.\ Goette]{S.\ Goette} \address[Goette]{ Mathematisches Institut, Universit\"at Freiburg, Germany.} \email{sebastian.goette@math.uni-freiburg.de}
\author[M.\ Kerin]{M.\ Kerin} \address[Kerin]{School of Mathematics, Statistics and Applied Mathematics, N.U.I. Galway, Ireland.} \email{martin.kerin@nuigalway.ie}
\author[K.\ Shankar]{K.\ Shankar} \address[Shankar]{Department of Mathematics, University of Oklahoma, U.S.A.} \email{Krishnan.Shankar-1@math.ou.edu} \thanks{}
\date{\today}
\subjclass[2010]{primary: 53C20, secondary: 55R55, 57R19, 57R30} \keywords{highly connected, non-negative curvature, linking form}
\begin{abstract} In a recent article, the authors constructed a six-parameter family of highly connected $7$-manifolds which admit an $\mathrm{SO}(3)$-invariant metric of non-negative sectional curvature. Each member of this family is the total space of a Seifert fibration with generic fibre $\mathbf{S}^3$ and, in particular, has the cohomology ring of an $\mathbf{S}^3$-bundle over $\mathbf{S}^4$. In the present article, the linking form of these manifolds is computed and used to demonstrate that the family contains infinitely many manifolds which are not even homotopy equivalent to an $\mathbf{S}^3$-bundle over $\mathbf{S}^4$, the first time that any such spaces have been shown to admit non-negative sectional curvature. \end{abstract}
\maketitle
Closed manifolds admitting non-negative sectional curvature are not very well understood and it is, at present, quite difficult to obtain examples with interesting topology. This is partially explained by the dearth of known constructions, all of which depend in some way on two basic facts: First, compact Lie groups admit a bi-invariant metric (hence, non-negative curvature) and, second, Riemannian submersions do not decrease sectional curvature.
In \cite{GKS1}, a $6$-parameter family of non-negatively curved, $2$-connected $7$-manifolds $M^7_{\ul{a}, \ul{b}}$ was constructed, where the parameters $\underline a = (a_1, a_2, a_3), \underline b = (b_1, b_2, b_3) \in \mathbb{Z}^3$ satisfy $a_i, b_i \equiv 1 \!\! \mod 4$, for all $ i \in \{1,2,3\}$, and $$ \gcd(a_1, a_2 \pm a_3) = 1 = \gcd(b_1, b_2 \pm b_3). $$
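These parameter conditions, together with the determinant formula for the order of $H^4$ recalled below, are elementary integer arithmetic and can be checked programmatically. The following Python sketch is purely illustrative (the function names and sample parameters are ours, not from \cite{GKS1}):

```python
from math import gcd

def admissible(a, b):
    """Check the conditions on (a, b): every entry is 1 mod 4,
    and gcd(a1, a2 +/- a3) = 1 = gcd(b1, b2 +/- b3)."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (all(x % 4 == 1 for x in a + b)
            and gcd(a1, a2 + a3) == gcd(a1, abs(a2 - a3)) == 1
            and gcd(b1, b2 + b3) == gcd(b1, abs(b2 - b3)) == 1)

def order_H4(a, b):
    """n = (1/8) det [[a1^2, b1^2], [a2^2 - a3^2, b2^2 - b3^2]];
    H^4(M^7_{a,b}; Z) is cyclic of order |n| (and Z itself when n = 0)."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    det = a1**2 * (b2**2 - b3**2) - b1**2 * (a2**2 - a3**2)
    assert det % 8 == 0  # entries 1 mod 4 force a2^2 - a3^2 and b2^2 - b3^2 in 8Z
    return det // 8

# sample admissible parameters
a, b = (1, 5, 1), (1, 1, 5)
assert admissible(a, b)
print(order_H4(a, b))  # here n = (1*(-24) - 1*24)/8 = -6, so H^4 = Z_6
```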
Each of the manifolds $M^7_{\ul{a}, \ul{b}}$ is the total space of a Seifert fibration over an orbifold $\mathbf{S}^4$ with generic fibre $\mathbf{S}^3$ and has the cohomology ring of an $\mathbf{S}^3$-bundle over $\mathbf{S}^4$. In particular, $H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) = \mathbb{Z}_{|n|}$, where $n = \frac{1}{8} \det \left(\begin{smallmatrix} a_1^2 & b_1^2 \\ a_2^2 - a_3^2 & b_2^2 - b_3^2 \end{smallmatrix}\right)$ and, in the case $n = 0$, the notation $\mathbb{Z}_0$ signifies the integers $\mathbb{Z}$. The manifolds $M^7_{\ul{a}, \ul{b}}$ were shown in \cite{GKS1} to realise all exotic $7$-spheres. To the authors' knowledge, this was the first time that it was observed that all exotic $7$-spheres are Seifert fibred by $\mathbf{S}^3$. The following result is somewhat surprising.
\begin{theorem} \label{T:thmA}
Infinitely many of the manifolds $M^7_{\ul{a}, \ul{b}}$ are not even homotopy equivalent to an $\mathbf{S}^3$-bundle over $\mathbf{S}^4$. \end{theorem}
In the context of non-negative curvature, the construction of the manifolds $M^7_{\ul{a}, \ul{b}}$ fits neatly into the general scheme of increasing topological complexity via reducing symmetry assumptions. The standard example of a non-negatively curved manifold is a compact homogeneous space. In \cite{GM}, Gromoll and Meyer discovered the first example of an exotic sphere admitting non-negative curvature by introducing the notion of a \emph{biquotient} $G /\!\!/ H$, that is, the quotient of a compact Lie group $G$ by a closed subgroup $H \subseteq G \times G$ acting freely on $G$ via $(h_1, h_2) \cdot g = h_1 g h_2^{-1}$, $g \in G$, $(h_1, h_2) \in H$. Clearly, the isometry group of a biquotient will, in general, be much smaller than that of a homogeneous space. In contrast to the homogeneous situation, Totaro \cite{To} showed, for example, that there are infinitely many rational homotopy types of (non-negatively curved) biquotients already in dimension $6$.
An alternative approach to reducing symmetry is to assume that the manifold in question has low cohomogeneity, that is, that the quotient by a group of isometries is low dimensional. In particular, when the quotient space is a closed interval, that is, for manifolds of \emph{cohomogeneity one}, Grove and Ziller \cite{GZ} discovered sufficient conditions to ensure the existence of an invariant metric of non-negative curvature, thus generalising earlier work of Cheeger \cite{Ch}, and used this to demonstrate that all $\mathbf{S}^3$-bundles over $\mathbf{S}^4$ admit a metric with non-negative curvature.
A cohomogeneity-one manifold as above naturally admits a codimension-one singular Riemannian foliation whose leaves are the orbits of the action, that is, are homogeneous spaces. It was observed by Wilking in \cite{BWi} that a manifold which admits a codimension-one singular Riemannian foliation with biquotient leaves will also admit non-negative curvature, provided the sufficient conditions of Grove and Ziller \cite{GZ} are satisfied. The manifolds $M^7_{\ul{a}, \ul{b}}$ fall into this category and can thus be seen as a further success of the strategy of symmetry reduction.
The manifolds mentioned in Theorem \ref{T:thmA} occur in infinitely many cohomology types and are distinguished from $\mathbf{S}^3$-bundles over $\mathbf{S}^4$ by having a non-standard linking form. In particular, these are the first manifolds with non-standard linking form observed to admit non-negative curvature (cf.\ \cite{GoKiSh}), thus implying that the linking form is not an obstruction to non-negative sectional curvature.
\begin{theorem} \label{T:thmB}
Suppose the manifold $M^7_{\ul{a}, \ul{b}}$ has $H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) = \mathbb{Z}_{|n|}$, $n \neq 0$. Then there is a generator $\mathbf 1 \in H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z})$ such that the linking form of $M^7_{\ul{a}, \ul{b}}$ is given (up to sign) by \begin{align*} \lk : H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \otimes H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \to& \mathbb{Q}/\mathbb{Z} \\ (x \mathbf 1,y \mathbf 1) \mapsto&
\pm \left(e_1 \, b_1^2 + e_0 \left(\tfrac{b_2^2 - b_3^2}{8} \right) \right) \frac{xy}{n} \!\! \mod 1
\end{align*} where $e_0, e_1 \in \mathbb{Z}$ satisfy $e_1 \, a_1^2 + e_0 \, \frac{1}{8}(a_2^2 - a_3^2) = 1$. \end{theorem}
Observe that, if $f_0, f_1 \in \mathbb{Z}$ are chosen such that $f_1 \, b_1^2 + f_0 \, \frac{1}{8}(b_2^2 - b_3^2) = 1$, then $$ \left( f_1 \, a_1^2 + f_0 \left(\tfrac{a_2^2 - a_3^2}{8} \right) \right) \left( e_1 \, b_1^2 + e_0 \left( \tfrac{b_2^2 - b_3^2}{8}\right) \right) \equiv 1 \!\! \mod n. $$ Therefore, the linking form of $M^7_{\ul{a}, \ul{b}}$ can equivalently be written (up to sign) as $$ \lk(x \mathbf 1',y \mathbf 1') = \pm \left(f_1 \, a_1^2 + f_0 \left( \tfrac{a_2^2 - a_3^2}{8} \right) \right) \frac{xy}{n} \!\! \mod 1 $$ with respect to the generator $\mathbf 1' := \left(f_1 \, a_1^2 + f_0 \left( \tfrac{a_2^2 - a_3^2}{8} \right) \right) \mathbf 1 \in H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z})$.
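Both $(e_{0},e_{1})$ and $(f_{0},f_{1})$ exist by B\'ezout's lemma, and the congruence above is easy to verify numerically. The following Python sketch is illustrative (the helper name \texttt{bezout} and the sample parameters are ours); it computes the coefficients by the extended Euclidean algorithm and checks that the two linking-form coefficients are inverse to each other modulo $n$:

```python
def bezout(p, q):
    """Extended Euclidean algorithm: return (u, v) with u*p + v*q == gcd(p, q)."""
    old_r, r = p, q
    old_u, u = 1, 0
    old_v, v = 0, 1
    while r != 0:
        k = old_r // r
        old_r, r = r, old_r - k * r
        old_u, u = u, old_u - k * u
        old_v, v = v, old_v - k * v
    return old_u, old_v

# sample admissible parameters (entries 1 mod 4, gcd conditions satisfied)
a1, a2, a3 = 1, 5, 1
b1, b2, b3 = 5, 13, 1
A, B = (a2**2 - a3**2) // 8, (b2**2 - b3**2) // 8   # the "divided by 8" entries
n = abs(a1**2 * B - b1**2 * A)                      # order of H^4

e1, e0 = bezout(a1**2, A)   # e1*a1^2 + e0*A == 1
f1, f0 = bezout(b1**2, B)   # f1*b1^2 + f0*B == 1
assert e1 * a1**2 + e0 * A == 1 and f1 * b1**2 + f0 * B == 1

# the coefficients of the two expressions of the linking form are inverse mod n
assert (f1 * a1**2 + f0 * A) * (e1 * b1**2 + e0 * B) % n == 1
```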
It will be demonstrated in Lemma \ref{L:stdLF} that $M^7_{\ul{a}, \ul{b}}$ has standard linking form whenever $\gcd(a_1, b_1) = 1$. In particular, this is the case for all $\mathbf{S}^3$-bundles over $\mathbf{S}^4$. However, it is well known from \cite{DWi} that there exist $2$-connected $7$-manifolds with non-standard linking form which have the same cohomology ring as in the case $\gcd(a_1, b_1) = 1$: see, for instance, Example \ref{Eg:nonst}.
\begin{obs} The manifolds $M^7_{\ul{a}, \ul{b}}$ do not realise all $2$-connected $7$-manifolds with the cohomology ring of an $\mathbf{S}^3$-bundle over $\mathbf{S}^4$. \end{obs}
In light of this observation, it is tempting to make the following conjecture.
\begin{conjecture*} Every $2$-connected $7$-manifold with the cohomology ring of an $\mathbf{S}^3$-bundle over $\mathbf{S}^4$ admits a non-negatively curved, codimension-one singular Riemannian foliation with singular leaves of codimension two, and a Seifert fibration onto an orbifold $\mathbf{S}^4$ with generic fibre $\mathbf{S}^3$. \end{conjecture*}
The paper is organised as follows. In Section \ref{S:prelim}, the construction and properties of the manifolds $M^7_{\ul{a}, \ul{b}}$ are reviewed and relevant notation introduced, before the linking form is introduced and some important facts recalled. In Section \ref{S:Bockstein}, the structure of the manifolds $M^7_{\ul{a}, \ul{b}}$ is used to obtain an understanding of the Bockstein homomorphism. Theorem \ref{T:thmB} is proved in Section \ref{S:linking}, while Section \ref{S:numth} is dedicated to the elementary number theory necessary to construct explicit manifolds $M^7_{\ul{a}, \ul{b}}$ satisfying the conclusion of Theorem \ref{T:thmA}.
\ack{It is a pleasure to thank Diarmuid Crowley for his interest in this project and for useful conversations about the linking form. Part of this research was performed at the mathematical research institute MATRIX in Australia and the authors wish to thank the institute for its hospitality. S.\ Goette and M.\ Kerin have received support from the DFG Priority Program 2026 \emph{Geometry at Infinity}, while M.\ Kerin and K.\ Shankar received support from SFB 898: \emph{Groups, Geometry \& Actions} at WWU M\"unster. K.\ Shankar received support from the National Science Foundation.\footnote{The views expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation.}}
\section{Preliminaries and notation} \label{S:prelim}
\subsection{The manifolds $M^7_{\ul{a}, \ul{b}}$\, } \hspace*{1mm}\\ \label{SS:family}
Suppose that a compact Lie group $G$ acts smoothly on a closed, connected, smooth manifold $M$ via $G \times M \to M,\ (g,p) \mapsto g \cdot p$. For each $p \in M$, the \emph{isotropy group} at $p$ is the subgroup $G_p = \{g \in G \mid g \cdot p = p\} \subseteq G$, and the \emph{orbit} through $p$ is the submanifold $G \cdot p = \{g \cdot p \in M \mid g \in G \} \subseteq M$. The manifold $M$ is foliated by $G$-orbits and an orbit $G \cdot p$ is diffeomorphic to the homogeneous space $G/G_p$.
The action $G \times M \to M$ is said to be of \emph{cohomogeneity one} if there is an orbit of codimension one or, equivalently, if $\dim(M/G) = 1$. In such a case, the manifold $M$ is called a \emph{cohomogeneity-one ($G$-)manifold}. If, in addition, $\pi_1(M)$ is assumed to be finite, then the orbit space $M/G$ can be identified with a closed interval. By fixing an appropriately normalised $G$-invariant metric on $M$, it may be assumed that $M/G = [-1,1]$. Let $\pi: M \to M/G = [-1,1]$ denote the quotient map. The orbits $\pi^{-1}(t)$, $t \in (-1,1)$, are called \emph{principal orbits} and the orbits $\pi^{-1}(\pm 1)$ are called \emph{singular orbits}.
Choose a point $p_0 \in \pi^{-1}(0)$ and consider a geodesic $c:\mathbb{R} \to M$ orthogonal to all the orbits, such that $c(0) = p_0$ and $\pi \circ c|_{[-1,1]} = \mathrm{id}_{[-1,1]}$. Then, for every $t \in (-1,1)$, one has $G_{c(t)} = G_{p_0} \subseteq G$, and this \emph{principal isotropy group} will be denoted by $H \subseteq G$. If $p_\pm = c(\pm 1) \in M$, denote the \emph{singular isotropy groups} $G_{p_\pm}$ by $K_\pm$ respectively. In particular, $H \subseteq K_\pm$.
By the slice theorem, $M$ can be decomposed as the union of two disk-bundles, over the singular orbits $G/K_- = \pi^{-1}(- 1)$ and $G/K_+ = \pi^{-1}(+ 1)$ respectively, which are glued along their common boundary $G/H = \pi^{-1}(0)$: $$ M = (G \times_{K_-} \mathbf{D}^{l_-}) \cup_{G/H} (G \times_{K_+} \mathbf{D}^{l_+}) \, . $$ Since the principal orbit $G/H$ is the boundary of both disk-bundles, it follows that $K_\pm/H = \mathbf{S}^{l_\pm-1}$, where $l_\pm$ denote the codimensions of $G/K_\pm \subseteq M$.
Conversely, given any chain $H \subseteq K_\pm \subseteq G$, with $K_\pm/H = \mathbf{S}^{d_\pm}$, one can construct a cohomogeneity-one $G$-manifold $M$ with codimension $d_\pm + 1$ singular orbits. For this reason, a cohomogeneity-one manifold is conveniently represented by its group diagram: $$ \xymatrix{ & G & \\ K_- \ar@{-}[ur] & & K_+ \ar@{-}[ul] \\ & H \ar@{-}[ur] \ar@{-}[ul] & } $$
In \cite{GZ}, the authors determined a sufficient condition for a cohomogeneity-one manifold to admit non-negative curvature.
\begin{thm}[\cite{GZ}] \label{T:GZ} Let $G$ be a compact Lie group acting on a manifold $M$ with cohomogeneity one. If the singular orbits are of codimension $2$, then $M$ admits a $G$-invariant metric of non-negative sectional curvature. \end{thm}
Consider now the subgroups \begin{align*} Q &= \{\pm 1, \pm i, \pm j, \pm k\}, \\ \mathrm{Pin}(2) &= \{e^{i \theta} \mid \theta \in \mathbb{R}\} \cup \{e^{i \theta} j \mid \theta \in \mathbb{R}\}, \\ \mathrm{Pjn}(2) &= \{e^{j \theta} \mid \theta \in \mathbb{R}\} \cup \{i \, e^{j \theta} \mid \theta \in \mathbb{R}\} \end{align*} of the group $\mathbf{S}^3$ of unit quaternions, where the notation $\mathrm{Pjn}(2)$ is intended to be suggestive since, clearly, the groups $\mathrm{Pin}(2)$ and $\mathrm{Pjn}(2)$ are isomorphic, the only difference being that the roles of $i$ and $j$ are switched.
For $\underline a = (a_1, a_2, a_3), \underline b = (b_1, b_2, b_3) \in \mathbb{Z}^3$, with $a_i, b_i \equiv 1$ mod $4$ for all $i \in \{1,2,3\}$ and $\gcd(a_1, a_2, a_3) = \gcd(b_1, b_2, b_3) = 1$, a family of cohomogeneity-one ($\mathbf{S}^3 \times \mathbf{S}^3 \times \mathbf{S}^3$)-manifolds $P^{10}_{\ul{a}, \ul{b}}$ was introduced in \cite{GKS1} via the group diagram \begin{equation} \label{E:Pab} \xymatrix{ & \mathbf{S}^3 \times \mathbf{S}^3 \times \mathbf{S}^3 & \\ \mathrm{Pin}(2)_{\ul{a}} \ar@{-}[ur] & & \mathrm{Pjn}(2)_{\ul{b}} \ar@{-}[ul] \\ & \Delta Q \ar@{-}[ur] \ar@{-}[ul] & } \end{equation} where the principal isotropy group $\Delta Q$ denotes the diagonal embedding of $Q$ into $\mathbf{S}^3 \times \mathbf{S}^3 \times \mathbf{S}^3$, and the singular isotropy groups are given by \begin{align*} \mathrm{Pin}(2)_{\ul{a}} &= \{(e^{i a_1 \theta}, e^{i a_2 \theta}, e^{i a_3 \theta}) \mid \theta \in \mathbb{R}\} \cup \{(e^{i a_1 \theta} j, e^{i a_2 \theta} j, e^{i a_3 \theta} j) \mid \theta \in \mathbb{R}\}, \\ \mathrm{Pjn}(2)_{\ul{b}} &= \{(e^{j b_1 \theta}, e^{j b_2 \theta}, e^{j b_3 \theta}) \mid \theta \in \mathbb{R}\} \cup \{(i \, e^{j b_1 \theta}, i\, e^{j b_2 \theta}, i\, e^{j b_3 \theta}) \mid \theta \in \mathbb{R}\}. \end{align*} Note that the restriction $a_i, b_i \equiv 1$ mod $4$ serves only to ensure that $\Delta Q$ is a subgroup of both $\mathrm{Pin}(2)_{\ul{a}}$ and $\mathrm{Pjn}(2)_{\ul{b}}$. The subfamily consisting of those $P^{10}_{\ul{a}, \ul{b}}$ having $a_1 = b_1 = 1$ describes all principal ($\mathbf{S}^3 \times \mathbf{S}^3$)-bundles over $\mathbf{S}^4$; see \cite{GZ}.
For the sake of notation, let $G = \mathbf{S}^3 \times \mathbf{S}^3 \times \mathbf{S}^3$ from now on. It was proven in \cite[Lemma 1.2]{GKS1} that the subgroup $\{1\} \times \Delta \mathbf{S}^3 \subseteq \{1\} \times \mathbf{S}^3 \times \mathbf{S}^3 \subseteq G$ acts freely on $P^{10}_{\ul{a}, \ul{b}}$ if and only if \begin{equation} \label{E:free} \gcd(a_1, a_2 \pm a_3) = 1 \ \textrm{ and } \ \gcd(b_1, b_2 \pm b_3) = 1. \end{equation}
Therefore, given a cohomogeneity-one $G$-manifold $P^{10}_{\ul{a}, \ul{b}}$ determined by a group diagram \eqref{E:Pab} satisfying the conditions \eqref{E:free}, one obtains a smooth, $7$-dimensional manifold $M^7_{\ul{a}, \ul{b}}$ defined via $$ M^7_{\ul{a}, \ul{b}} = (\{1\} \times \Delta \mathbf{S}^3) \backslash P^{10}_{\ul{a}, \ul{b}} \, . $$
Since the singular orbits of the cohomogeneity-one $G$-action on $P^{10}_{\ul{a}, \ul{b}}$ are of codimension $2$, it follows from Theorem \ref{T:GZ} that each $P^{10}_{\ul{a}, \ul{b}}$ admits a $G$-invariant metric of non-negative sectional curvature. As the free action of $\{1\} \times \Delta \mathbf{S}^3$ is by isometries, there is an induced metric of non-negative curvature on $M^7_{\ul{a}, \ul{b}}$.
By construction, there is a codimension-one singular Riemannian foliation of $M^7_{\ul{a}, \ul{b}}$ by biquotients, such that the leaf space is $[-1,1]$ and $M^7_{\ul{a}, \ul{b}}$ decomposes as a union of two-dimensional disk-bundles over the two singular leaves which are glued along their common boundary, a regular leaf. This follows easily from the Slice Theorem applied to $P^{10}_{\ul{a}, \ul{b}}$. Indeed, the action of $\{1\} \times \Delta \mathbf{S}^3$ preserves the $G$-orbits of $P^{10}_{\ul{a}, \ul{b}}$, and the image of an orbit $G/U$ is a leaf given by \begin{equation} \label{E:BiqDiff} (\{1\} \times \Delta \mathbf{S}^3) \backslash G / U \cong (\mathbf{S}^3 \times \mathbf{S}^3) /\!\!/ U \, , \end{equation} where this diffeomorphism is induced by $$ (q_1 \, u_1, q_2 \, u_2, q_3 \, u_3) \mapsto (q_1 \, u_1, u_2^{-1} q_2^{-1} q_3 \, u_3), $$ for $(q_1, q_2, q_3) \in G$ and $(u_1, u_2, u_3) \in U \subseteq G$. Viewing $M^7_{\ul{a}, \ul{b}}$ in this way, the $\gcd$ conditions \eqref{E:free} required in the definition are simply the conditions ensuring that each of the biquotient actions on $\mathbf{S}^3 \times \mathbf{S}^3$ is free.
If $\varepsilon \in (0,1)$ and if $\tau : M^7_{\ul{a}, \ul{b}} \to [-1,1]$ denotes the projection onto the leaf space of the codimension-one foliation of $M^7_{\ul{a}, \ul{b}}$ by biquotients, define
$$ M_- = \tau^{-1}([-1,\varepsilon)), \ M_+ = \tau^{-1}((-\varepsilon, 1]) \ \text{ and } \ M_0 = \tau^{-1}((-\varepsilon, \varepsilon)). $$ The preimages $M_\pm$ are two-dimensional disk-bundles over the singular leaves $(\mathbf{S}^3 \times \mathbf{S}^3) /\!\!/ \mathrm{Pin}(2)_{\ul{a}}$ and $(\mathbf{S}^3 \times \mathbf{S}^3) /\!\!/ \mathrm{Pjn}(2)_{\ul{b}}$, while $M_0 = M_- \cap M_+ \cong (\mathbf{S}^3 \times \mathbf{S}^3) /\!\!/ \Delta Q \times (-\varepsilon,\varepsilon) $. Clearly $M^7_{\ul{a}, \ul{b}} = M_- \cup M_+$.
It was shown in \cite{GKS1} that the manifolds $M^7_{\ul{a}, \ul{b}}$ are $2$-connected and that \begin{equation} \label{E:cohom}
H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) = \mathbb{Z}_{|n|}, \text{ where } n = \frac{1}{8} \det \begin{pmatrix} a_1^2 & b_1^2 \\ a_2^2 - a_3^2 & b_2^2 - b_3^2 \end{pmatrix}. \end{equation} The notation $\mathbb{Z}_0$ signifies the integers $\mathbb{Z}$, in the case $n = 0$. From Lemmas 2.6 and 2.7 of \cite{GKS1} it follows that \begin{equation} \label{E:leafcohom} \begin{split} H^j(M_\pm ; \mathbb{Z}) &= \begin{cases} \mathbb{Z}, & j = 0,3, \\ \mathbb{Z}_2, & j = 2, 5, \\ 0, & \text{otherwise,} \end{cases} \\
H^j(M_0 ; \mathbb{Z}) &= \begin{cases} \mathbb{Z}, & j = 0,6, \\ \mathbb{Z}_2 \oplus \mathbb{Z}_2, & j = 2, 5,\\ \mathbb{Z} \oplus \mathbb{Z}, & j = 3,\\ 0, & \text{otherwise.} \end{cases} \end{split} \end{equation}
Denote by \begin{equation} \label{E:inclusions} i_\pm : M_\pm \hookrightarrow M^7_{\ul{a}, \ul{b}} \ \text{ and } \ j_\pm : M_0 \hookrightarrow M_\pm \end{equation} the respective inclusion maps, and by \begin{equation} \label{E:pairmaps} q_\pm : (M^7_{\ul{a}, \ul{b}}, \emptyset) \to (M^7_{\ul{a}, \ul{b}}, M_\pm) \ \text{ and } \ f_\pm : (M_\mp, M_0) \to (M^7_{\ul{a}, \ul{b}}, M_\pm) \end{equation} the maps of pairs induced by the identity map on $M^7_{\ul{a}, \ul{b}}$ and by $i_\pm$ respectively. Note, furthermore, that the maps on cohomology induced by the inclusions $j_\pm$ are determined by the projection maps $\pi_\pm$ in the circle-bundles \begin{align*} \mathbf{S}^1 = \mathrm{Pin}(2)_{\ul{a}}/\Delta Q &\longrightarrow (\mathbf{S}^3 \times \mathbf{S}^3) /\!\!/ \Delta Q \stackrel{\pi_-}{\longrightarrow} (\mathbf{S}^3 \times \mathbf{S}^3) /\!\!/ \mathrm{Pin}(2)_{\ul{a}} \,, \\[1mm] \mathbf{S}^1 = \mathrm{Pjn}(2)_{\ul{b}}/\Delta Q &\longrightarrow (\mathbf{S}^3 \times \mathbf{S}^3) /\!\!/ \Delta Q \stackrel{\pi_+}{\longrightarrow} (\mathbf{S}^3 \times \mathbf{S}^3) /\!\!/ \mathrm{Pjn}(2)_{\ul{b}} \, , \end{align*} since $\pi_\pm$ respect deformation retractions of $M_-$, $M_+$ and $M_0$ onto the respective leaves. In particular, the maps $\pi_\pm^*$ have been computed in degree three in \cite[Equation (2.16)]{GKS1} and, with respect to fixed bases $\{x_\pm\}$ of $H^3(M_\pm ; \mathbb{Z}) = \mathbb{Z}$ and $\{v_1, v_2\}$ of $H^3(M_0; \mathbb{Z}) = \mathbb{Z} \oplus \mathbb{Z} $, yield \begin{equation} \label{E:maps} \begin{split} j_-^*(x_-) &= \frac{1}{8}(a_2^2 - a_3^2) \, v_1 + a_1^2 \, v_2, \\ j_+^*(x_+) &= -\frac{1}{8}(b_2^2 - b_3^2) \, v_1 - b_1^2 \, v_2, \end{split} \end{equation} which, by the gcd conditions \eqref{E:free}, are each generators of $H^3(M_0; \mathbb{Z})$. Finally, by excision, the induced homomorphisms $f_\pm^* : H^j(M^7_{\ul{a}, \ul{b}}, M_\pm; R) \to H^j(M_\mp, M_0; R)$ are isomorphisms in all degrees, for any choice of coefficient ring $R$.
\subsection{The linking form} \hspace*{1mm}\\ \label{SS:link}
If $M$ is an $(s-1)$-connected, closed, oriented, smooth, $(2s+1)$-dimensional manifold, let $TH_s(M)$ and $TH^{s+1}(M;\mathbb{Z})$ denote the torsion subgroups of the respective integral homology and cohomology groups. Let $a \in \mathcal C_{s}(M)$ be a chain representing a homology class $[a] \in TH_{s}(M)$. Then there is some $n_a \in \mathbb{Z}$ such that $n_a \cdot [a] = 0$ and, hence, some $c_a \in \mathcal C_{s+1} (M)$ such that $n_a \cdot a$ is the boundary of $c_a$, that is, $n_a \cdot a = \partial c_a$. The \emph{linking form} is a non-degenerate, bilinear pairing defined by \begin{equation} \label{E:LFhom} \begin{split} \lk : TH_{s}(M) \otimes TH_{s}(M) &\to \mathbb{Q} / \mathbb{Z} \\ ([a], [b]) &\mapsto \frac{\mathrm{Int}(c_a, b)}{n_a} \mod 1, \end{split} \end{equation} where $\mathrm{Int}: \mathcal C_{s+1}(M) \times \mathcal C_{s}(M) \to \mathbb{Z}$ yields the signed count of intersections of its arguments with respect to the orientation of $M$. The linking form is symmetric (respectively, skew-symmetric) for $s$ odd (respectively, $s$ even). It was introduced in \cite{Br} and \cite{ST}.
Consider now the short exact sequence $$ 0 \longrightarrow \mathbb{Z} \stackrel{m}{\longrightarrow} \mathbb{Q} \stackrel{r}{\longrightarrow} \mathbb{Q} / \mathbb{Z} \longrightarrow 0. $$ The boundary homomorphism $\beta : H^j(M; \mathbb{Q}/\mathbb{Z}) \to H^{j+1}(M; \mathbb{Z})$ in the associated long exact sequence $$ \dots \longrightarrow H^{j}(M; \mathbb{Z}) \stackrel{m}{\longrightarrow} H^{j}(M; \mathbb{Q}) \stackrel{r}{\longrightarrow} H^{j}(M; \mathbb{Q}/\mathbb{Z}) \stackrel{\beta}{\longrightarrow}H^{j+1}(M; \mathbb{Z}) \longrightarrow \dots $$ is called the \emph{Bockstein homomorphism}. Observe that $TH^j(M; \mathbb{Z}) \subseteq \im(\beta)$, since $TH^j(M; \mathbb{Z})$ lies in the kernel of $m : H^{j}(M; \mathbb{Z}) \to H^{j}(M; \mathbb{Q}) $.
Now, if $D : H_j(M) \to H^{2s + 1 - j}(M; \mathbb{Z})$ denotes the inverse of Poincar\'e duality, $[M] \in H_{2s+1}(M)$ the fundamental class of $M$ and $\langle \, ,\rangle : H^j(M;R) \otimes H_j(M) \to R$ the $R$-valued Kronecker pairing, the right-hand side of \eqref{E:LFhom} is given, modulo the integers, by $$ \frac{\mathrm{Int}(c_a,b)}{n_a} = \langle w_a \smile D([b]), [M] \rangle, $$ where $w_a \in H^{s}(M; \mathbb{Q}/\mathbb{Z})$ is such that $\beta(w_a) = D([a])$. That is, the linking form can be rewritten as a non-degenerate, bilinear form \begin{equation} \label{E:LFcohom} \begin{split} \lk : TH^{s+1}(M; \mathbb{Z}) \otimes TH^{s+1}(M; \mathbb{Z}) &\to \mathbb{Q}/\mathbb{Z} \\ (x,y) &\mapsto \langle w \smile y, [M] \rangle \mod 1, \end{split} \end{equation} where $\beta(w) = x \in TH^{s+1}(M; \mathbb{Z})$. Note, in particular, that the sign of the linking form depends on the choice of orientation on $M$. Furthermore, if $H^{s+1}(M; \mathbb{Z})$ is torsion, that is, $TH^{s+1}(M; \mathbb{Z}) = H^{s+1}(M; \mathbb{Z})$, then $M$ being $(s-1)$-connected implies that the Bockstein homomorphism is an isomorphism and it follows from \eqref{E:LFcohom} that \begin{equation} \label{E:LF} \lk(x,y) = \langle\beta^{-1}(x) \smile y, [M]\rangle, \end{equation} for all $x,y \in H^{s+1}(M; \mathbb{Z})$.
Suppose now that $TH^{s+1}(M; \mathbb{Z})$ is cyclic of order $n$. In this case, bilinearity ensures that the linking form is completely determined by $\lk(\mathbf 1, \mathbf 1)$, where $\mathbf 1$ is some generator of $TH^{s+1}(M; \mathbb{Z}) = \mathbb{Z}_n$. The linking form is said to be \emph{standard} if there exists an isomorphism $\theta : TH^{s+1}(M; \mathbb{Z}) \to TH^{s+1}(M; \mathbb{Z})$ such that $$ \lk(\theta(\mathbf 1), \theta(\mathbf 1)) = \frac{1}{n} \in \mathbb{Q}/\mathbb{Z}. $$ Recall, however, that the group of isomorphisms of $\mathbb{Z}_n$ is isomorphic to the group of units $\mathbb{Z}_n^* \subseteq \mathbb{Z}_n$. Therefore, the linking form is standard if and only if there is some unit $\lambda \in \mathbb{Z}_n^*$ such that $$ \lk(\mathbf 1, \mathbf 1) = \frac{\lambda^2}{n} \mod 1. $$ For $2$-connected $7$-manifolds, the linking form being standard imposes topological restrictions on the manifold.
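In computational terms, whether a linking form on a finite cyclic group is standard reduces to testing whether a residue is the square of a unit. The following sketch (illustrative names, not from the paper or any library) makes this test explicit:

```python
# Illustrative sketch: a linking form on Z_n with lk(1,1) = k/n is
# "standard" iff k is congruent to the square of a unit mod n.
from math import gcd

def is_standard(k: int, n: int) -> bool:
    """Return True iff k = u^2 mod n for some unit u in Z_n."""
    units = [u for u in range(1, n) if gcd(u, n) == 1]
    return any((u * u - k) % n == 0 for u in units)
```

For example, the squares of units in $\mathbb{Z}_5$ are $\{1, 4\}$, so `is_standard(2, 5)` is `False` while `is_standard(4, 5)` is `True`.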
\begin{thm}[{\cite[Corollary 2]{KiSh}}] \label{T:KS} A closed, smooth, $2$-connected $7$-manifold $M$, with $H^4(M; \mathbb{Z})$ finite cyclic, is homotopy equivalent to an $\mathbf{S}^3$-bundle over $\mathbf{S}^4$ if and only if its linking form is standard for some choice of orientation on $M$. \end{thm}
By work of Crowley and Escher \cite{CE} and Kitchloo and Shankar \cite{KiSh} (cf.\ \cite{DWi}), the homotopy equivalence in Theorem \ref{T:KS} can, in fact, be strengthened to a PL-homeomorphism.
The strategy for proving Theorem \ref{T:thmA} is now clear. One must identify manifolds $M^7_{\ul{a}, \ul{b}}$ which have a non-standard linking form, regardless of the choice of orientation. A simple example might shed some light on the number theoretic side of the problem, even though, by Lemma \ref{L:stdLF}, this particular example cannot occur among the manifolds $M^7_{\ul{a}, \ul{b}}$.
\begin{eg} \label{Eg:nonst} Suppose $M$ is a closed, smooth, $2$-connected $7$-manifold with $H^4(M; \mathbb{Z}) = \mathbb{Z}_5$ and $\lk(\mathbf 1, \mathbf 1) = \frac{2}{5} \in \mathbb{Q}/\mathbb{Z}$, for some generator $\mathbf 1 \in H^4(M; \mathbb{Z}) $. Since $\pm 2 \in \mathbb{Z}_5$ is not the square of a unit in $\mathbb{Z}_5^* = \{1,2,3,4\}$, it follows from Theorem \ref{T:KS} that $M$ is not homotopy equivalent to an $\mathbf{S}^3$-bundle over $\mathbf{S}^4$. \end{eg}
Example \ref{Eg:nonst} exploits a well-known fact from the theory of quadratic residues which plays an important role in finding further examples; namely, for an odd prime $p$, the unit $-1 \in \mathbb{Z}_p^*$ is a square if and only if $p \equiv 1$ mod $4$. Since the product of a square and a non-square in $\mathbb{Z}_p^*$ is again a non-square, multiplying a non-square by $-1$ will not turn it into a square whenever $p \equiv 1$ mod $4$. This observation will yield non-standard linking forms, even up to a change of sign.
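This supplement to quadratic reciprocity is easy to confirm numerically for small primes; the brute-force check below is purely illustrative:

```python
# Brute-force confirmation, for small odd primes, that -1 is a
# square mod p exactly when p = 1 (mod 4).
def is_square_mod(k: int, p: int) -> bool:
    return any((x * x - k) % p == 0 for x in range(p))

for p in [3, 5, 7, 11, 13, 17, 19, 29]:
    assert is_square_mod(-1, p) == (p % 4 == 1)
```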
\section{The Bockstein homomorphism} \label{S:Bockstein}
Since the formula \eqref{E:LF} describes the linking form of the manifolds $M^7_{\ul{a}, \ul{b}}$, it will be important in what follows to have a good understanding of the Bockstein homomorphism for these manifolds.
Recall that $M^7_{\ul{a}, \ul{b}} = M_- \cup M_+$ and $M_- \cap M_+ = M_0$ and suppose from now on that \begin{equation} \label{E:finite}
H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) = \mathbb{Z}_{|n|}, \text{ where } n = \frac{1}{8} \det \begin{pmatrix} a_1^2 & b_1^2 \\ a_2^2 - a_3^2 & b_2^2 - b_3^2 \end{pmatrix} \neq 0. \end{equation}
It follows from \eqref{E:leafcohom} and the long exact cohomology sequence for the pair $(M^7_{\ul{a}, \ul{b}}, M_\pm)$ that $$ H^3(M^7_{\ul{a}, \ul{b}}, M_\pm ; \mathbb{Z}) = \mathbb{Z}_2 \ \text{ and } \ H^5(M^7_{\ul{a}, \ul{b}}, M_\pm ; \mathbb{Z}) = 0. $$ On the other hand, the long exact sequence for the pair $(M_\mp, M_0)$ yields a short exact sequence $$ 0 \longrightarrow H^3(M_\mp; \mathbb{Z}) \stackrel{j_\mp^*}{\longrightarrow } H^3(M_0; \mathbb{Z}) \longrightarrow H^4(M_\mp, M_0; \mathbb{Z}) \longrightarrow 0. $$ By \eqref{E:leafcohom} and \eqref{E:maps}, it now follows that $H^4(M_\mp, M_0; \mathbb{Z}) = \mathbb{Z}$. However, excision implies that the map $f_\pm^* : H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z}) \to H^4(M_\mp, M_0; \mathbb{Z})$ is an isomorphism, from which it may be concluded that $$ H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z}) = \mathbb{Z}. $$
These considerations, together with the Universal Coefficient Theorem for cohomology \cite[Chap.\ 5, Theorem 10]{Sp}, now easily yield the cohomology groups listed in Table \ref{table:cohomgps}.
\begin{table}
\begin{tabular}{|Sc||Sc|c|Sc|} \cline{2-4}
\multicolumn{1}{c||}{} & $M^7_{\ul{a}, \ul{b}}$ & $M_\pm$ & $(M^7_{\ul{a}, \ul{b}}, M_\pm)$ \\ \hline \hline $H^3(-; \mathbb{Z})$ & $0$ & $\mathbb{Z}$ & $\mathbb{Z}_2$ \\ \hline $H^3(-; \mathbb{Q})$ & $0$ & $\mathbb{Q}$ & $0$ \\ \hline
$H^3(-; \mathbb{Q}/\mathbb{Z})$ & $\mathbb{Z}_{|n|}$ & $\mathbb{Q}/\mathbb{Z}$ & $0$ \\ \hline\hline
$H^4(-; \mathbb{Z})$ & $\mathbb{Z}_{|n|}$ & $0$ & $\mathbb{Z}$ \\ \hline $H^4(-; \mathbb{Q})$ & $0$ & $0$ & $\mathbb{Q}$ \\ \hline $H^4(-; \mathbb{Q}/\mathbb{Z})$ & $0$ & $\mathbb{Z}_2$ & $\mathbb{Q}/\mathbb{Z}$ \\ \hline \end{tabular}
\caption{Important cohomology groups in $\mathbb{Z}$, $\mathbb{Q}$ and $\mathbb{Q}/\mathbb{Z}$ coefficients.} \label{table:cohomgps} \end{table}
From the short exact coefficient sequence $$ 0 \longrightarrow \mathbb{Z} \stackrel{m}{\longrightarrow} \mathbb{Q} \stackrel{r}{\longrightarrow} \mathbb{Q} / \mathbb{Z} \longrightarrow 0, $$ together with the maps $i_\pm : M_\pm \hookrightarrow M^7_{\ul{a}, \ul{b}}$ and $q_\pm : (M^7_{\ul{a}, \ul{b}}, \emptyset) \to (M^7_{\ul{a}, \ul{b}}, M_\pm)$, one obtains a commutative diagram \begin{equation} \label{E:bigdiag}
\xymatrix@C=0.54cm{ & 0 \ar[d] & 0 \ar[d] & 0 \ar[d] & \\ 0 \ar[r] & \mathcal C^*(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z}) \ar[r]^(0.55){q_\pm^*} \ar[d]^m & \mathcal C^*(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \ar[r]^{i_\pm^*} \ar[d]^m & \mathcal C^*(M_\pm; \mathbb{Z}) \ar[r] \ar[d]^m & 0 \\ 0 \ar[r] & \mathcal C^*(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q}) \ar[r]^(0.55){q_\pm^*} \ar[d]^r & \mathcal C^*(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}) \ar[r]^{i_\pm^*} \ar[d]^r & \mathcal C^*(M_\pm; \mathbb{Q}) \ar[r] \ar[d]^r & 0 \\ 0 \ar[r] & \mathcal C^*(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q}/\mathbb{Z}) \ar[r]^(0.55){q_\pm^*} \ar[d] & \mathcal C^*(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z}) \ar[r]^{i_\pm^*} \ar[d] & \mathcal C^*(M_\pm; \mathbb{Q}/\mathbb{Z}) \ar[r] \ar[d] & 0 \\ & 0 & 0 & 0 & }
\end{equation} of cochain complexes. This induces a commutative diagram
\begin{equation} \label{E:cohomdiag} \resizebox{\displaywidth}{!}{\xymatrix@C=0.4cm{ & & H^3(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q}/\mathbb{Z}) \ar[d] \ar[r]^(0.55){q_\pm^*} & H^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z}) \ar[d]^\beta \\ H^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \ar[r]^{i_\pm^*} \ar[d]^m & H^3(M_\pm; \mathbb{Z}) \ar[r]^(0.45){\delta_\pm} \ar[d]^m & H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z}) \ar[r]^(0.55){q_\pm^*} \ar[d]^m & H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \ar[d]^m \\ H^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}) \ar[r]^{i_\pm^*} \ar[d]^r & H^3(M_\pm; \mathbb{Q}) \ar[r]^(0.45){\delta_\pm} \ar[d]^r & H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q}) \ar[r]^(0.55){q_\pm^*} \ar[d]^r & H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}) \ar[d]^r \\ H^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z}) \ar[r]^{i_\pm^*} \ar[d]^\beta & H^3(M_\pm; \mathbb{Q}/\mathbb{Z}) \ar[r]^(0.45){\delta_\pm} \ar[d] & H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q}/\mathbb{Z}) \ar[r]^(0.55){q_\pm^*} & H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z}) \\ H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \ar[r]^{i_\pm^*} & H^4(M_\pm; \mathbb{Z}) & & }} \end{equation} of long exact sequences for the pair $(M^7_{\ul{a}, \ul{b}}, M_\pm)$, where $\delta_\pm : H^j(M_\pm; R) \to H^{j+1}(M^7_{\ul{a}, \ul{b}}, M_\pm; R)$, $R \in \{\mathbb{Z}, \mathbb{Q}, \mathbb{Q}/\mathbb{Z}\}$, denotes the coboundary homomorphism.
By exactness and by Table \ref{table:cohomgps}, the following can immediately be deduced from diagram \eqref{E:cohomdiag}: both the Bockstein homomorphism $\beta : H^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z}) \to H^4(M^7_{\ul{a}, \ul{b}};\mathbb{Z})$ and $\delta_\pm : H^3(M_\pm; \mathbb{Q}) \to H^{4}(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q})$ are isomorphisms; the homomorphisms $m : H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z}) \to H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q})$ and $i_\pm^* : H^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/ \mathbb{Z}) \to H^3(M_\pm; \mathbb{Q}/\mathbb{Z})$ are injective; and the homomorphisms $r : H^3(M_\pm; \mathbb{Q}) \to H^3(M_\pm; \mathbb{Q}/\mathbb{Z})$ and $q_\pm^* : H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z}) \to H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z})$ are surjective. From these observations, it is now possible to gain some further understanding of the Bockstein homomorphism.
\begin{prop} \label{P:Bockstein} Suppose $M^7_{\ul{a}, \ul{b}}$ satisfies \eqref{E:finite}. Then, with the notation above, the Bockstein homomorphism satisfies $$ i_\pm^* \circ \beta^{-1} \circ q_\pm^* = r \circ \delta_\pm^{-1} \circ m : H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z}) \to H^3(M_\pm; \mathbb{Q}/\mathbb{Z}). $$ \end{prop}
\begin{proof} The proof will be on the level of cochains. If $w_\pm \in \mathcal C^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z})$ represents a cohomology class $[w_\pm] \in H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z})$, then, since the Bockstein homomorphism $\beta : H^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z}) \to H^4(M^7_{\ul{a}, \ul{b}};\mathbb{Z})$ is an isomorphism, there is a unique class $[w] \in H^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z})$ such that $\beta([w]) = q_\pm^*([w_\pm])$. Let $w \in \mathcal C^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z})$ represent $[w] \in H^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z})$ and $q_\pm^*(w_\pm) \in \mathcal C^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z})$ represent $\beta([w]) = q_\pm^* ([w_\pm]) \in H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z})$.
Recall that $\beta : H^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z}) \to H^4(M^7_{\ul{a}, \ul{b}};\mathbb{Z})$ arises by applying the Snake Lemma to the commutative diagram \begin{equation} \label{E:Bock} \xymatrix{ 0 \ar[r] & \mathcal C^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \ar[r]^m \ar[d]^\delta & \mathcal C^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}) \ar[r]^r \ar[d]^\delta & \mathcal C^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z}) \ar[r] \ar[d]^\delta & 0 \\ 0 \ar[r] & \mathcal C^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \ar[r]^m & \mathcal C^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}) \ar[r]^r & \mathcal C^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z}) \ar[r] & 0 } \end{equation} of exact sequences of cochain groups, where $\delta : \mathcal C^3(M^7_{\ul{a}, \ul{b}}; R) \to \mathcal C^4(M^7_{\ul{a}, \ul{b}}; R)$ is the coboundary map for coefficients in $R$. Since $\beta([w]) = q_\pm^* ([w_\pm])$, a cochain $u \in \mathcal C^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q})$ may thus be chosen such that \begin{equation} \label{E:ruw} r(u) = w \ \ \text{ and } \ \ \delta u = m(q_\pm^*(w_\pm)). \end{equation}
Notice, however, that the middle vertical map in \eqref{E:Bock} also appears in the same position in the commutative diagram \begin{equation} \label{E:pairdiag} \xymatrix{ 0 \ar[r] & \mathcal C^3(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q}) \ar[r]^(0.55){q_\pm^*} \ar[d]^\delta & \mathcal C^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}) \ar[r]^{i_\pm^*} \ar[d]^\delta & \mathcal C^3(M_\pm; \mathbb{Q}) \ar[r] \ar[d]^\delta & 0 \\ 0 \ar[r] & \mathcal C^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q}) \ar[r]^(0.55){q_\pm^*} & \mathcal C^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}) \ar[r]^{i_\pm^*} & \mathcal C^4(M_\pm; \mathbb{Q}) \ar[r] & 0 } \end{equation} for the pair $(M^7_{\ul{a}, \ul{b}}, M_\pm)$. Observe that, although $u \in \mathcal C^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q})$ is only a cochain, its image $i_\pm^*(u)$ under $i_\pm^* : \mathcal C^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}) \to \mathcal C^3(M_\pm; \mathbb{Q})$ is a cocycle. Indeed, from \eqref{E:ruw} and the diagram \eqref{E:bigdiag} it may be deduced that $$ \delta(i_\pm^*(u)) = i_\pm^*(\delta u) = i_\pm^*(m(q_\pm^*(w_\pm))) = m(i_\pm^*(q_\pm^*(w_\pm))) = 0. $$
Therefore, by applying the Snake Lemma to \eqref{E:pairdiag}, the image $\delta_\pm ([i_\pm^*(u)])$ of the class $[i_\pm^*(u)] \in H^3(M_\pm; \mathbb{Q})$ under the boundary homomorphism $\delta_\pm : H^3(M_\pm; \mathbb{Q}) \to H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q})$ can be represented by a cocycle $c_\pm \in \mathcal C^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q})$ such that, by \eqref{E:ruw} and \eqref{E:bigdiag}, $$ q_\pm^*(c_\pm) = \delta u = m(q_\pm^*(w_\pm)) = q_\pm^*(m(w_\pm)). $$
However, by \eqref{E:bigdiag}, the cochain map $q_\pm^* : \mathcal C^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z}) \to \mathcal C^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z})$ is injective, implying that $c_\pm = m(w_\pm) \in \mathcal C^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q})$ and, hence, that $\delta_\pm([i_\pm^*(u)]) = [m(w_\pm)] = m([w_\pm])$.
Since $\delta_\pm : H^3(M_\pm; \mathbb{Q}) \to H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Q})$ is an isomorphism, it thus follows from \eqref{E:bigdiag} and \eqref{E:ruw} that \begin{align*} r \circ \delta_\pm^{-1} \circ m ([w_\pm]) &= r([i_\pm^*(u)]) \\ &= i_\pm^*([r(u)]) \\ &= i_\pm^*([w]) \\ &= i_\pm^* \circ \beta^{-1} \circ q_\pm^* ([w_\pm]), \end{align*} as desired, where the final equality is a consequence of $\beta([w]) = q_\pm^*([w_\pm])$ and the fact that $\beta : H^3(M^7_{\ul{a}, \ul{b}}; \mathbb{Q}/\mathbb{Z}) \to H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z})$ is an isomorphism. \end{proof}
\section{The linking form} \label{S:linking}
Associated to the decomposition $M^7_{\ul{a}, \ul{b}} = M_-\cup M_+$ of each $M^7_{\ul{a}, \ul{b}}$ into the union of two disk-bundles with $M_- \cap M_+ = M_0$, there is a commutative braid diagram \begin{equation} \label{E:braid} \resizebox{\displaywidth}{!}{ \xymatrix@=0.4cm{ {\phantom{0}} \ar[dr]^(0.4){q_-^*} \ar@(ur,ul)[rr] & & H^3(M_+; R) \ar[dr]^{j_+^*} \ar@(ur,ul)[rr]^{\delta_+} & & H^4(M^7_{\ul{a}, \ul{b}}, M_+; R) \ar[dr]^(0.55){q_+^*} \ar@(ur,ul)[rr] & & {\phantom{0}} \\ & H^3(M^7_{\ul{a}, \ul{b}}; R) \ar[ur]^{i_+^*} \ar[dr]^(0.55){i_-^*} & & H^3(M_0; R) \ar[ur]^(0.45){\partial_-} \ar[dr]^(0.5){\partial_+} & & H^4(M^7_{\ul{a}, \ul{b}}; R) \ar[ur]^(0.55){i_-^*} \ar[dr]^(0.6){i_+^*} & \\ {\phantom{0}} \ar[ur]^(0.4){q_+^*} \ar@(dr,dl)[rr]& & H^3(M_-; R) \ar[ur]^{j_-^*} \ar@(dr,dl)[rr]_{\delta_-} & & H^4(M^7_{\ul{a}, \ul{b}}, M_-; R) \ar[ur]^(0.5){q_-^*} \ar@(dr,dl)[rr]& & {\phantom{0}} }} \end{equation} with coefficients in $R \in \{\mathbb{Z}, \mathbb{Q}, \mathbb{Q}/\mathbb{Z}\}$, where each braid is the long exact sequence of a pair. In particular, the isomorphisms $f_\pm^* : H^j(M^7_{\ul{a}, \ul{b}}, M_\pm; R) \to H^j(M_\mp, M_0; R)$ given by excision are being used implicitly and the homomorphism $\partial_\pm : H^3(M_0;R) \to H^4(M^7_{\ul{a}, \ul{b}}, M_\mp; R)$ corresponds to the boundary homomorphism in the long exact sequence for the pair $(M_\pm, M_0)$.
Furthermore, given the projection $\tau : M^7_{\ul{a}, \ul{b}} \to [-1,1]$ discussed in Section \ref{SS:family}, observe that the inclusion of the submanifold $\tau^{-1}[0,1] \subseteq M^7_{\ul{a}, \ul{b}}$ with boundary $\tau^{-1}\{0\}$ into the disk-bundle $M_+$ induces a homotopy equivalence $(\tau^{-1}[0,1], \tau^{-1}\{0\}) \to (M_+, M_0)$. Therefore, Poincar\'e duality holds for $(M_+, M_0)$ just as for compact, orientable manifolds with boundary. In particular, if $[M_+] \in H_7(M_+, M_0;\mathbb{Z})$ is a fundamental class, then $$ \frown [M_+] : H^k(M_+, M_0; R) \to H_{7-k}(M_+;R) \,;\, \alpha \mapsto \alpha \frown [M_+] $$ is an isomorphism for all $k$. An analogous argument works for $(M_-, M_0)$.
Let $[M] \in H_7(M^7_{\ul{a}, \ul{b}})$ be a fundamental class of $M^7_{\ul{a}, \ul{b}}$. Then $(q_-)_* [M]$ is a fundamental class for the pair $(M^7_{\ul{a}, \ul{b}}, M_-)$ and, by excision, there is a fundamental class $[M_+] \in H_7(M_+, M_0)$ for the pair $(M_+, M_0)$ such that $(f_-)_*[M_+] = (q_-)_* [M]$.
Let $x_\pm \in H^3(M_\pm; \mathbb{Z}) = \mathbb{Z}$ be the generators used in \eqref{E:maps}. By the Universal Coefficient Theorem, together with Table \ref{table:cohomgps}, $H^3(M_+; \mathbb{Z})$ is naturally isomorphic to $\Hom(H_3(M_+), \mathbb{Z})$. Therefore, by Poincar\'e duality, a generator $\gamma_- \in H^4(M^7_{\ul{a}, \ul{b}}, M_-; \mathbb{Z}) = \mathbb{Z}$ may be chosen such that $f_-^*(\gamma_-) \in H^4(M_+, M_0; \mathbb{Z})$ is a generator and the generator $(f_-^*(\gamma_-)) \!\frown\! [M_+] \in H_3(M_+)$ is dual to $x_+$.
By exactness and since $H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) = \mathbb{Z}_{|n|}$, the boundary homomorphisms $\delta_\pm : H^3(M_\pm; \mathbb{Z}) = \mathbb{Z} \to H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z}) = \mathbb{Z}$ are given, up to sign, by multiplication by $n$. Therefore, a generator $\gamma_+ \in H^4(M^7_{\ul{a}, \ul{b}}, M_+; \mathbb{Z})$ can be chosen such that $\delta_+(x_+) = n \gamma_+$. Moreover, since $m : H^4(M^7_{\ul{a}, \ul{b}}, M_+; \mathbb{Z}) \to H^4(M^7_{\ul{a}, \ul{b}}, M_+; \mathbb{Q})$ is injective and $\delta_+ : H^3(M_+; \mathbb{Q}) \to H^{4}(M^7_{\ul{a}, \ul{b}}, M_+; \mathbb{Q})$ is an isomorphism, it follows from \eqref{E:cohomdiag} that \begin{equation} \label{E:gens} \delta_+^{-1} \circ m (\gamma_+) = \frac{1}{n} \, m(x_+) \in H^3(M_+; \mathbb{Q}).
\end{equation} Now, since $q_\pm^* : H^4(M^7_{\ul{a}, \ul{b}}, M_\pm; \mathbb{Z}) \to H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z})$ are surjective, a generator of $H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z})$ can be defined by $\mathbf 1 := q_+^*(\gamma_+)$. This is the generator mentioned in Theorem \ref{T:thmB} and, furthermore, there is some $\lambda \in \mathbb{Z}$ such that $\lambda$ mod $|n|$ is a unit in $\mathbb{Z}_{|n|}$ and such that $q_-^*(\lambda \gamma_-) = \mathbf 1$.
\begin{prop} \label{P:LF} With the notation above, the linking form $$ \lk : H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \otimes H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \to \mathbb{Q}/\mathbb{Z} $$ is given by $\lk(x \mathbf 1, y \mathbf 1) = \frac{\lambda xy}{n} \!\! \mod 1$. \end{prop}
\begin{proof} By bilinearity, only $\lk(\mathbf 1, \mathbf 1)$ needs to be computed. By \eqref{E:LF}, \begin{align*} \lk(\mathbf 1, \mathbf 1) &= \langle \beta^{-1}(\mathbf 1) \smile \mathbf 1, [M]\rangle \\ &= \langle \lambda \, \beta^{-1}(\mathbf 1) \smile (q_-^*(\gamma_-)), [M] \rangle \\ &= \langle \lambda \, q_-^*(\beta^{-1}(\mathbf 1) \smile \gamma_-), [M] \rangle, \end{align*} where the last equality follows from \cite[page 251]{Sp}, since $q_- : (M^7_{\ul{a}, \ul{b}}, \emptyset) \to (M^7_{\ul{a}, \ul{b}}, M_\pm)$ is induced by the identity map on $M^7_{\ul{a}, \ul{b}}$. By naturality of the Kronecker pairing, it now follows that \begin{align*} \lk(\mathbf 1, \mathbf 1) &= \langle \lambda \, \beta^{-1}(\mathbf 1) \smile \gamma_-, (q_-)_* [M] \rangle \\ &= \langle \lambda \, \beta^{-1}(\mathbf 1) \smile \gamma_-, (f_-)_* [M_+] \rangle \\ &= \langle \lambda \, f_-^*(\beta^{-1}(\mathbf 1) \smile \gamma_-), [M_+] \rangle \\ &= \langle \lambda \, i_+^*(\beta^{-1}(\mathbf 1)) \smile f_-^*(\gamma_-), [M_+] \rangle, \end{align*} where the last equality again follows from \cite[page 251]{Sp}, since the map $f_- : (M_+, M_0) \to (M^7_{\ul{a}, \ul{b}}, M_-)$ is induced by the inclusion $i_+ : M_+ \to M^7_{\ul{a}, \ul{b}}$. Now, by Proposition \ref{P:Bockstein} and \eqref{E:gens}, \begin{align*} i_+^*(\beta^{-1}(\mathbf 1)) &= i_+^* \circ \beta^{-1} \circ q_+^*(\gamma_+) \\ &= r \circ \delta_+^{-1} \circ m(\gamma_+) \\ &= r \left(\frac{1}{n} \, m(x_+) \right). 
\end{align*} Therefore, by naturality with respect to the inclusion $m : \mathbb{Z} \to \mathbb{Q}$ and the reduction $r : \mathbb{Q} \to \mathbb{Q}/\mathbb{Z}$, it follows that \begin{align*} \lk(\mathbf 1, \mathbf 1) &= \left\langle\lambda \, r \left(\frac{1}{n} \, m(x_+) \right) \smile f_-^*(\gamma_-), [M_+] \right\rangle \\ &= r \left( \frac{\lambda}{n} \, m \left( \langle x_+ \smile f_-^*(\gamma_-), [M_+] \rangle \right) \right) \\ &= r \left( \frac{\lambda}{n} \, m \left( \langle x_+, f_-^*(\gamma_-) \frown [M_+] \rangle \right) \right) \\ &= r \left( \frac{\lambda}{n} \right) \\ &= \frac{\lambda}{n} \!\! \mod 1, \end{align*} as desired, where the second-last equality follows since $(f_-^*(\gamma_-)) \frown [M_+] \in H_3(M_+)$ is dual to $x_+ \in H^3(M_+; \mathbb{Z})$. \end{proof}
Therefore, in order to prove Theorem \ref{T:thmB}, it remains only to determine the value of $\lambda \in \mathbb{Z}$ in the formula for the linking form given in Proposition \ref{P:LF}. To this end, it is necessary to first introduce two further bases, $\{u_1, u_2\}$ and $\{w_1, w_2\}$, for $H^3(M_0; \mathbb{Z}) = \mathbb{Z} \oplus \mathbb{Z}$, in addition to the basis $\{v_1, v_2\}$ used in \eqref{E:maps}. Recall from \eqref{E:free} that $$ \gcd(a_1, a_2 \pm a_3) = 1 = \gcd(b_1, b_2 \pm b_3). $$ Hence, there exist $e_0, e_1, f_0, f_1 \in \mathbb{Z}$ such that $$ e_1 \, a_1^2 + e_0 \left(\tfrac{a_2^2 - a_3^2}{8} \right) = 1 \ \ \text{ and } \ \ f_1 \, b_1^2 + f_0 \left(\tfrac{b_2^2 - b_3^2}{8} \right) = 1. $$ Therefore, as each of the elements $j_-^*(x_-) = \frac{1}{8}(a_2^2 - a_3^2) \, v_1 + a_1^2 \, v_2$ and $j_+^*(x_+) = -\frac{1}{8}(b_2^2 - b_3^2) \, v_1 - b_1^2 \, v_2$ is a generator of $H^3(M_0;\mathbb{Z})$, the two new bases can be defined via $$ u_1 := j_-^*(x_-), \qquad u_2 := - e_1 \, v_1 + e_0 \, v_2 $$ and $$ w_1 := j_+^*(x_+), \qquad w_2 := \varepsilon (- f_1 \, v_1 + f_0 \, v_2), $$ where $\varepsilon \in \{\pm 1\}$ is such that $\delta_-(x_-) = \varepsilon n \gamma_-$. Define, in addition, the integers $$ \kappa := f_1 \, a_1^2 + f_0 \left(\tfrac{a_2^2 - a_3^2}{8} \right) \ \ \text{ and } \ \ \rho := e_1 \, b_1^2 + e_0 \left(\tfrac{b_2^2 - b_3^2}{8} \right), $$ for which the following congruence identities hold: \begin{equation} \begin{split} \label{E:cong} a_1^2 \, \rho &\equiv b_1^2 \!\! \mod n, \\ b_1^2 \, \kappa &\equiv a_1^2 \!\! \mod n, \\ \frac{1}{8}(a_2^2 - a_3^2) \, \rho &\equiv \frac{1}{8}(b_2^2 - b_3^2) \!\! \mod n, \\ \frac{1}{8}(b_2^2 - b_3^2) \, \kappa &\equiv \frac{1}{8}(a_2^2 - a_3^2) \!\! \mod n. \end{split} \end{equation} Observe, finally, that the basis element $u_2$ can be written in terms of the basis $\{w_1, w_2\}$ as \begin{equation} \label{E:u2} u_2 = (e_1 \, f_0 - e_0 \, f_1)\, w_1 + \varepsilon \rho \, w_2. \end{equation}
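The coefficients $e_0, e_1, f_0, f_1$ are B\'ezout coefficients, so they can be produced by the extended Euclidean algorithm, and the congruences \eqref{E:cong} can then be checked numerically. The sketch below does this for sample values standing in for $a_1^2$, $\frac{1}{8}(a_2^2 - a_3^2)$, $b_1^2$ and $\frac{1}{8}(b_2^2 - b_3^2)$ (chosen only for illustration):

```python
# Illustrative check of the congruences (E:cong) via extended Euclid.
def ext_gcd(x, y):
    # returns (g, s, t) with s*x + t*y = g = gcd(x, y)
    if y == 0:
        return x, 1, 0
    g, s, t = ext_gcd(y, x % y)
    return g, t, s - (x // y) * t

a1sq, a0 = 25, 3             # e.g. a_1 = 5, a_2 = 7, a_3 = 5
b1sq, b0 = 25, 2             # e.g. b_1 = 5, b_2 = 5, b_3 = 3
n = a1sq * b0 - a0 * b1sq    # here n = -25

_, e1, e0 = ext_gcd(a1sq, a0)   # e1*a1^2 + e0*a0 = 1
_, f1, f0 = ext_gcd(b1sq, b0)   # f1*b1^2 + f0*b0 = 1
kappa = f1 * a1sq + f0 * a0
rho = e1 * b1sq + e0 * b0

assert (a1sq * rho - b1sq) % n == 0
assert (b1sq * kappa - a1sq) % n == 0
assert (a0 * rho - b0) % n == 0
assert (b0 * kappa - a0) % n == 0
assert (kappa * rho - 1) % n == 0   # kappa * rho = 1 mod n
```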
It is now possible to complete the proof of Theorem \ref{T:thmB}.
\begin{thm} \label{T:PfthmB} With the notation above, the linking form $\lk : H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \otimes H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) \to \mathbb{Q}/\mathbb{Z}$ is given by $$ \lk(x \mathbf 1, y \mathbf 1) = \pm \frac{\rho \, xy}{n} \!\! \mod 1. $$ Alternatively, with respect to the generator $\mathbf 1' := \kappa \, \mathbf 1$, the linking form is given by $\lk(x \mathbf 1', y \mathbf 1') = \pm \frac{\kappa \, xy}{n} \!\! \mod 1$. \end{thm}
\begin{proof} From exactness and commutativity in the braid diagram \eqref{E:braid}, the following identities hold: $$ \partial_\pm \circ j_\pm^* = 0 \ \ \text{ and } \ \ \partial_\mp \circ j_\pm^* = \delta_\pm. $$ Now, recall that $\delta_+(x_+) = n \, \gamma_+$ and $\delta_-(x_-) = \varepsilon n \gamma_-$ for some $\varepsilon \in \{\pm 1\}$. Therefore, it is a simple calculation to show that the homomorphisms $\partial_\pm : H^3(M_0; \mathbb{Z}) \to H^4(M^7_{\ul{a}, \ul{b}}, M_\mp; \mathbb{Z})$ are given by $$ \partial_-(v_1) = -a_1^2 \, \gamma_+, \qquad \partial_-(v_2) = \frac{a_2^2 - a_3^2}{8} \, \gamma_+ $$ and $$ \partial_+(v_1) = - \varepsilon b_1^2 \, \gamma_-, \qquad \partial_+(v_2) = \varepsilon \frac{b_2^2 - b_3^2}{8} \, \gamma_-, $$ respectively. From the definition of the bases $\{u_1, u_2\}$ and $\{w_1, w_2\}$, it now follows that $$ \partial_-(u_1) = 0, \qquad \partial_-(u_2) = \gamma_+, \ \ \text{ and } \ \ \partial_-(w_2) = \varepsilon \kappa \, \gamma_+, $$ while $$ \partial_+(w_1) = 0, \qquad \partial_+(w_2) = \gamma_-, \ \ \text{ and } \ \ \partial_+(u_2) = \varepsilon \rho \, \gamma_-. $$ Therefore, by \eqref{E:braid} and \eqref{E:u2}, \begin{align*} \lambda \, q_-^* (\gamma_-) &= \mathbf 1 \\ &= q_+^*(\gamma_+) \\ &= q_+^*(\partial_-(u_2)) \\ &= q_-^*(\partial_+(u_2)) \\ &= q_-^*(\partial_+((e_1 \, f_0 - e_0 \, f_1)\, w_1 + \varepsilon \rho \, w_2)) \\ &= \varepsilon \rho \, q_-^*(\partial_+(w_2)) \\
&= \varepsilon \rho \, q_-^*(\gamma_-) \in H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) = \mathbb{Z}_{|n|}, \end{align*}
from which it immediately follows that $\lambda \equiv \varepsilon \rho$ mod $n$, as desired. The final statement in the theorem follows from a direct calculation showing that $\kappa \, \rho \equiv 1$ mod $n$, since this implies that $\mathbf 1' = \kappa \mathbf 1$ is a generator of $H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) = \mathbb{Z}_{|n|}$. \end{proof}
\section{Some elementary number theory} \label{S:numth}
As a simple corollary of Theorem \ref{T:PfthmB}, it turns out that any $M^7_{\ul{a}, \ul{b}}$ with $\gcd(a_1, b_1) = 1$ and satisfying \eqref{E:finite} has standard linking form. Such manifolds include, of course, all $\mathbf{S}^3$-bundles over $\mathbf{S}^4$ with non-trivial $H^4$, as described by Grove and Ziller \cite{GZ}, which are well known to have standard linking form \cite{CE}.
\begin{lem} \label{L:stdLF} Every $M^7_{\ul{a}, \ul{b}}$ with $\gcd(a_1, b_1) = 1$ and satisfying \eqref{E:finite} is homotopy equivalent, hence PL-homeomorphic, to an $\mathbf{S}^3$-bundle over $\mathbf{S}^4$. \end{lem}
\begin{proof} Suppose that $M^7_{\ul{a}, \ul{b}}$ has $\gcd(a_1, b_1) = 1$. Then, by the definition of $n$, $$ \gcd(a_1, n) = 1 = \gcd(b_1, n), $$
that is, $a_1$ mod $n$ and $b_1$ mod $n$ are units in $\mathbb{Z}_{|n|}$. Therefore, $a_1 \mathbf 1$ and $b_1 \mathbf 1$ are generators of $H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) = \mathbb{Z}_{|n|}$. In particular, by \eqref{E:cong} and Theorem \ref{T:PfthmB}, \begin{align*} \lk(a_1 \mathbf 1, a_1 \mathbf 1) &= \pm \frac{a_1^2 \, \rho}{n} \!\! \mod 1 \\ &= \pm \frac{b_1^2}{n} \!\! \mod 1. \end{align*} Now, by the definition of standard linking form in Section \ref{SS:link} and Theorem \ref{T:KS}, the result follows. \end{proof}
As a consequence of Lemma \ref{L:stdLF}, to have any hope of obtaining manifolds $M^7_{\ul{a}, \ul{b}}$ with non-standard linking form, it is necessary to assume that $\gcd(a_1, b_1) \neq 1$. In particular, this implies that there is some prime $p$ dividing $n$ such that $p^2$ also divides $n$. Therefore, as in Example \ref{Eg:nonst}, whenever $n$ is squarefree, there is the possibility of finding manifolds with non-standard linking form which cannot be described as a manifold $M^7_{\ul{a}, \ul{b}}$. Hence, the manifolds $M^7_{\ul{a}, \ul{b}}$ do not realise all $2$-connected $7$-manifolds with $H^4$ finite cyclic.
Returning to the search for manifolds $M^7_{\ul{a}, \ul{b}}$ with non-standard linking form, the following simple observation will prove useful.
\begin{lem} \label{L:primes} Suppose $d \in \mathbb{N}$ divides $n \in \mathbb{N}$ and that $k \in \mathbb{Z}$ is not a square mod $d$. Then $k \in \mathbb{Z}$ is not a square mod $n$. \end{lem}
\begin{proof} Suppose that there is some $l \in \mathbb{Z}$ such that $k \equiv l^2$ mod $n$. Then it is clear that $k \equiv l^2$ mod $d$, a contradiction. \end{proof}
It now turns out that it is reasonably straightforward to find examples of manifolds $M^7_{\ul{a}, \ul{b}}$ with non-standard linking form. To keep the computations which follow from becoming unnecessarily complicated, let $$ a_0 := \frac{a_2^2 - a_3^2}{8}, \qquad b_0 := \frac{b_2^2 - b_3^2}{8}. $$ With this notation, \begin{equation} \label{E:conds} \begin{split} n &= a_1^2 \, b_0 - a_0\, b_1^2, \\ 1 &= e_1 \, a_1^2 + e_0 \, a_0, \\ 1 &= f_1 \, b_1^2 + f_0 \, b_0.
\end{split} \end{equation}
Recall that, for $p$ an odd prime and $x \in \mathbb{Z}$, the \emph{Legendre symbol} $\bigl(\frac{x}{p} \bigr)$ is defined via $$ \Bigl(\frac{x}{p} \Bigr) = \begin{cases} \phantom{-}1, & \text{ if $x$ is a square mod $p$ and $x \not\equiv 0$ mod $p$},\\ -1, & \text{ if $x$ is not a square mod $p$},\\ \phantom{-}0, & \text{ if $x \equiv 0$ mod $p$}. \end{cases} $$ The Legendre symbol has the following properties: \begin{equation} \label{E:legendre} \begin{split} \Bigl(\frac{x}{p} \Bigr) &= \Bigl(\frac{y}{p} \Bigr), \ \ \text{ if } x \equiv y \!\! \mod p \,; \\ \Bigl(\frac{xy}{p} \Bigr) &= \Bigl(\frac{x}{p} \Bigr) \Bigl(\frac{y}{p} \Bigr). \end{split} \end{equation} The first supplement to the law of quadratic reciprocity states that \begin{equation} \label{E:-1} \Bigl(\frac{-1}{p} \Bigr) = 1 \ \text{ if and only if } p \equiv 1 \!\! \mod 4, \end{equation} that is, $-1$ is a square if and only if $p \equiv 1$ mod $4$.
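Euler's criterion, $\bigl(\frac{x}{p}\bigr) \equiv x^{(p-1)/2} \bmod p$, gives a direct way to compute Legendre symbols; a minimal sketch (illustrative, not part of the paper):

```python
# Legendre symbol via Euler's criterion for an odd prime p,
# with the residue p-1 interpreted as -1.
def legendre(x: int, p: int) -> int:
    r = pow(x, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

# Multiplicativity, the second property in (E:legendre):
p = 13
for x in range(1, p):
    for y in range(1, p):
        assert legendre(x * y, p) == legendre(x, p) * legendre(y, p)
```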
\begin{thm} \label{T:egs} Suppose $M^7_{\ul{a}, \ul{b}}$ satisfies \eqref{E:finite} and that there is a prime $p \equiv 1$ mod $4$ such that $p$ divides $\gcd(a_1, b_1)$. If $a_0$ is not a square mod $p$ and $b_0$ is a square mod $p$, then $M^7_{\ul{a}, \ul{b}}$ has non-standard linking form and, hence, is not even homotopy equivalent to an $\mathbf{S}^3$-bundle over $\mathbf{S}^4$. \end{thm}
\begin{proof}
By Theorem \ref{T:PfthmB}, there is a generator $\mathbf 1 \in H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) = \mathbb{Z}_{|n|}$ such that $\lk(\mathbf 1, \mathbf 1) = \pm \frac{\rho}{n}$ mod $1$, with $\rho = e_1 \, b_1^2 + e_0 \, b_0$. On the other hand, by \eqref{E:conds}, $e_0 \, a_0 \equiv 1 \!\! \mod p$. Since $1$ is obviously a square mod $p$, \eqref{E:legendre} implies that $$ 1 = \Bigl(\frac{1}{p} \Bigr) = \Bigl(\frac{e_0 \, a_0}{p} \Bigr) = \Bigl(\frac{e_0}{p} \Bigr) \Bigl(\frac{a_0}{p} \Bigr) = - \Bigl(\frac{e_0}{p} \Bigr), $$ because $\bigl(\frac{a_0}{p} \bigr) = -1$, by hypothesis. That is, $\bigl(\frac{e_0}{p} \bigr) = -1$.
Therefore, since $p \equiv 1$ mod $4$ was assumed to divide $b_1$, \begin{align*} \Bigl(\frac{\pm \rho}{p} \Bigr) &= \Bigl(\frac{\pm (e_1 \, b_1^2 + e_0 \, b_0)}{p} \Bigr) \\ &= \Bigl(\frac{ \pm e_0 \, b_0}{p} \Bigr) \\ &= \Bigl(\frac{\pm 1}{p} \Bigr) \Bigl(\frac{e_0}{p} \Bigr) \Bigl(\frac{b_0}{p} \Bigr) \\ &= -1, \end{align*} where the final equality follows from \eqref{E:-1}, $\bigl(\frac{e_0}{p} \bigr) = -1$ and the hypothesis that $b_0$ is a square mod $p$.
Hence, $\pm \rho$ is not a square mod $p$ and, by Lemma \ref{L:primes}, it follows that $\pm \rho$ is not a square mod $n$. However, since $\pm \rho$ is a unit in $\mathbb{Z}_{|n|}$ (by the proof of Theorem \ref{T:PfthmB}), this implies that $M^7_{\ul{a}, \ul{b}}$ has a non-standard linking form, as desired. \end{proof}
Explicit examples satisfying the hypotheses of Theorem \ref{T:egs} are plentiful. Indeed, note that, by a simple counting argument, for any prime $p \equiv 1$ mod $4$ there must be a pair $m$, $m+1$, $m \in \{1, \dots, p-2\}$, of consecutive integers such that $\bigl( \frac{m}{p} \bigr) = -1$ and $\bigl( \frac{m+1}{p} \bigr) = 1$: since $p - 1 \equiv -1$ is a square mod $p$ by \eqref{E:-1}, the largest non-residue $m$ satisfies $m \le p - 2$ and is immediately followed by a square.
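This counting argument is easily confirmed by machine; the sketch below (our function names, assuming Euler's criterion for the symbol) locates such a pair for the first few primes $p \equiv 1$ mod $4$:

```python
def legendre(x, p):
    # Euler's criterion; the value p-1 stands for -1.
    r = pow(x, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def consecutive_pair(p):
    """Smallest m in {1, ..., p-2} with (m/p) = -1 and ((m+1)/p) = 1."""
    for m in range(1, p - 1):
        if legendre(m, p) == -1 and legendre(m + 1, p) == 1:
            return m
    return None  # never reached for p = 1 mod 4, by the counting argument

for p in [5, 13, 17, 29, 37, 41]:
    m = consecutive_pair(p)
    assert m is not None and legendre(m, p) == -1 and legendre(m + 1, p) == 1
```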
\begin{cor}
Let $p \equiv 1$ mod $4$ be an odd prime. If $m \in \{1, \dots, p-2\}$ is such that $\bigl( \frac{m}{p} \bigr) = -1$ and $\bigl( \frac{m+1}{p} \bigr) = 1$, then $a_1 = b_1 = p$, $|a_2| = 2m - 1$, $|a_3| = |b_2| = 2m +1$ and $|b_3| = 2m+3$ define a manifold $M^7_{\ul{a}, \ul{b}}$ with non-standard linking form. \end{cor}
\begin{proof} Observe first that some choice of signs for $\pm(2m-1)$, $\pm(2m+1)$ and $\pm(2m+3)$ yields integers $\equiv 1$ mod $4$. Furthermore, $a_2^2 - a_3^2 = -8m$ and $b_2^2 - b_3^2 = -8(m+1)$ are, by definition, prime to $p = a_1 = b_1$. Therefore, the freeness conditions \eqref{E:free} are satisfied and $\underline a$, $\underline b$ define a manifold $M^7_{\ul{a}, \ul{b}}$. Moreover, $n = -p^2$, so that $H^4(M^7_{\ul{a}, \ul{b}}; \mathbb{Z}) = \mathbb{Z}_{p^2}$.
Now $a_0 = -m$ and $b_0 = -(m+1)$. Thus, by the hypotheses on $m \in \{1, \dots, p-2\}$, Theorem \ref{T:egs} implies that $M^7_{\ul{a}, \ul{b}}$ has non-standard linking form. \end{proof}
\end{document}
\begin{document}
\newtheorem{claim}{Claim}[section] \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{problem}[theorem]{Problem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition}
\newenvironment{remark}{\par\noindent {\bf Remark.~~}}{}
\title{Planar graphs without cycles of length $4$ or $5$ are $(11:3)$-colorable} \author{Zden\v{e}k Dvo\v{r}\'{a}k\thanks{Computer Science Institute (CSI) of Charles University,
Malostransk\'{e} n\'am\v{e}st\'{\i} 25, 118 00 Prague,
Czech Republic. E-mail: \protect\href{mailto:rakdver@iuuk.mff.cuni.cz}{\protect\nolinkurl{rakdver@iuuk.mff.cuni.cz}}.}\and Xiaolan Hu\thanks{School of Mathematics and Statistics $\&$ Hubei Key Laboratory of Mathematical Sciences, Central China Normal University, Wuhan 430079, PR China.} } \date{} \maketitle
\begin{abstract} A graph $G$ is \emph{$(a:b)$-colorable} if there exists an assignment of $b$-element subsets of $\{1,\ldots,a\}$ to vertices of $G$ such that sets assigned to adjacent vertices are disjoint. We show that every planar graph without cycles of length $4$ or $5$ is $(11:3)$-colorable, a weakening of the recently disproved Steinberg conjecture. In particular, each such graph with $n$ vertices has an independent set of size at least $\frac{3}{11}n$.
\vskip 0.2cm \noindent{\bf Keywords:} planar graph; coloring; independence ratio \end{abstract}
\section{Introduction}
A function that assigns sets to all vertices of a graph is a \emph{set coloring} if the sets assigned to adjacent vertices are disjoint. For positive integers $a$ and $b\le a$, an {\em $(a:b)$-coloring} of a graph $G$ is a set coloring with range $\binom {\{1,\ldots, a\}}{b}$, i.e., a set coloring that to each vertex assigns a $b$-element subset of $\{1,\ldots, a\}$. The concept of $(a:b)$-coloring generalizes conventional vertex coloring: an $(a:1)$-coloring is exactly an ordinary proper $a$-coloring.
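As a small illustration (our example, not from the paper): the five-cycle $C_5$ requires three ordinary colors, yet it is $(5:2)$-colorable, witnessing $\chi_f(C_5)\le \frac{5}{2}$. A minimal check:

```python
def is_set_coloring(edges, phi):
    """Check that adjacent vertices receive disjoint color sets."""
    return all(phi[u].isdisjoint(phi[v]) for u, v in edges)

# C_5 is (5:2)-colorable: vertex i receives the pair {2i mod 5, (2i+1) mod 5}.
edges = [(i, (i + 1) % 5) for i in range(5)]
phi = {i: {(2 * i) % 5, (2 * i + 1) % 5} for i in range(5)}
assert is_set_coloring(edges, phi) and all(len(phi[i]) == 2 for i in phi)
```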
The {\em fractional chromatic number} of $G$, denoted by $\chi_f(G)$, is the infimum of the fractions $a/b$ such that $G$ admits an $(a:b)$-coloring. Note that $\chi_f(G)\leq \chi(G)$ for any graph $G$, where $\chi(G)$ is the chromatic number of $G$. Fractional coloring was first introduced in 1973 \cite{planfr5} in the search for a proof of the Four Color Problem. Since then, it has been the focus of many intensive research efforts, see \cite{ScheinermanUllman2011}. In particular, fractional coloring of planar graphs without cycles of certain lengths is widely studied. Pirnazar and Ullman~\cite{PU02} showed that the fractional chromatic number of a planar graph with girth at least $8k-4$ is at most $2+\frac{1}{k}$. Dvo\v{r}\'{a}k {\em et al.}~\cite{DSV08} showed that every planar graph of odd-girth at least 9 is $(5:2)$-colorable. Recently, Dvo\v{r}\'{a}k {\em et al.} \cite{frpltr} showed that every planar triangle-free graph on $n$ vertices is $(9n:3n+1)$-colorable, and thus it has fractional chromatic number at most $3-\frac{3}{3n+1}$.
The well-known conjecture of Steinberg asserts that every planar graph without cycles of length 4 or 5 is 3-colorable. Recently, this conjecture was disproved \cite{CohenAddad2016}. Though now disproved, the conjecture has motivated a great deal of research, see \cite{borsurvey}. Since $\chi_f(G)\leq \chi(G)$ for any graph $G$, it is natural to ask whether there exists a constant $c<4$ such that $\chi_f(G)\leq c$ for all planar graphs without cycles of length 4 or 5. In this paper, we confirm that this is the case for $c=\frac{11}{3}$. In fact, we prove the following stronger theorem.
\begin{theorem} \label{MT} Every planar graph without cycles of length 4 or 5 is $(11:3)$-colorable, and thus its fractional chromatic number is at most $\frac{11}{3}$. \end{theorem}
The \emph{independence number} $\alpha(G)$ of a graph $G$ is the size of a largest independent set in $G$. The \emph{independence ratio} of $G$ is the quantity $\frac{\alpha(G)}{|V(G)|}$. The famous Four Color Theorem \cite{AppHak1} implies that every planar graph has independence ratio at least $\frac{1}{4}$. In 1976, Albertson \cite{Albertson76} proved a weaker result that every planar graph has independence ratio at least $\frac{2}{9}$ without using the Four Color Theorem. In 2016, Cranston and Rabern \cite{CR16} improved this constant to $\frac{3}{13}$. If $G$ is a triangle-free planar graph, a classical theorem of Gr\"{o}tzsch \cite{grotzsch1959} says that $G$ is 3-colorable, and thus $G$ has independence ratio at least $\frac{1}{3}$. This bound can be slightly improved---Steinberg and Tovey~\cite{SteinbergTovey1993}
proved that the independence ratio is at least $\frac{1}{3}+\frac{1}{3|V(G)|}$, and gave an infinite family of planar triangle-free graphs for which this bound is tight. Steinberg's Conjecture would imply that every planar graph without cycles of length 4 or 5 has independence ratio at least $\frac{1}{3}$, and it is not known whether this weaker statement holds or not. Since $\alpha(G)\geq \frac{|V(G)|}{\chi_f(G)}$ for any graph $G$, we have the following corollary by Theorem~\ref{MT}.
\begin{corollary}\label{ratio} Every planar graph without cycles of length 4 or 5 has independence ratio at least $\frac{3}{11}$. \end{corollary}
It is not clear whether the constant $\frac{11}{3}$ from Theorem~\ref{MT} is the best possible, and we suspect this is not the case. Hence, the following question is of interest. \begin{problem} What is the infimum of fractional chromatic numbers of planar graphs without cycles of length 4 or 5? \end{problem} Let us remark that the counterexample to Steinberg's conjecture constructed in \cite{CohenAddad2016} is $(6:2)$-colorable, and thus we cannot even exclude the possibility that the answer is $3$.
The proof of Theorem~\ref{MT} naturally proceeds in list coloring setting. A \emph{list assignment} for a graph $G$ is a function $L$ that to each vertex $v$ of $G$ assigns a set $L(v)$ of colors. A set coloring $\varphi$ of $G$ is an \emph{$L$-set coloring} if $\varphi(v)\subseteq L(v)$ for all $v\in V(G)$. For a positive integer $b$, we say that $\varphi$ is an \emph{$(L:b)$-coloring} of $G$ if $\varphi$ is an $L$-set coloring and $|\varphi(v)|=b$ for all $v\in V(G)$. If such an $(L:b)$-coloring exists, we say that $G$ is \emph{$(L:b)$-colorable}. For an integer $a\ge b$, we say that $G$ is \emph{$(a:b)$-choosable} if $G$ is $(L:b)$-colorable from any assignment $L$ of lists of size $a$. We actually prove the following strengthening of Theorem~\ref{MT}.
\begin{theorem}\label{MTl} Every planar graph without cycles of length 4 or 5 is $(11:3)$-choosable. \end{theorem}
\section{Colorability of small graphs}
Let us start with some technical results on list-colorability of small graphs, especially paths and cycles. In the proofs, it is convenient to work with a non-uniform version of set coloring. Let $f:V(G)\to\mathbf{Z}_0^+$ be an arbitrary function. An \emph{$(L:f)$-coloring} of a graph $G$ is an $L$-set coloring $\varphi$ such that $|\varphi(v)|=f(v)$ for all $v\in V(G)$. If such an $(L:f)$-coloring exists, we say that $G$ is \emph{$(L:f)$-colorable}. We repeatedly use the following simple observation. \begin{lemma}\label{lemma-redulist}
Let $L$ be an assignment of lists to vertices of a graph $G$, let $f$ assign non-negative integers to vertices of $G$, and let $\psi$ be an $L$-set coloring of $G$ such that $|\psi(v)|\le f(v)$ for all $v\in V(G)$. Let $L'$ be the list assignment defined by $$L'(v)=L(v)\setminus \Big(\psi(v)\cup\bigcup_{u\in N_G(v)} \psi(u)\Big)$$
for all $v\in V(G)$, and let $f'(v)=f(v)-|\psi(v)|$ for all $v\in V(G)$. If $G$ is $(L':f')$-colorable, then $G$ has an $(L:f)$-coloring $\varphi$ such that $\psi(v)\subseteq \varphi(v)$ for all $v\in V(G)$. \end{lemma} \begin{proof} If $\varphi'$ is an $(L':f')$-coloring of $G$, it suffices to set $\varphi(v)=\psi(v)\cup\varphi'(v)$ for all $v\in V(G)$. \end{proof}
We also use the following observation. \begin{lemma}\label{lemma-greedy} Let $L$ be an assignment of lists to vertices of a graph $G$, let $f$ assign non-negative integers to vertices of $G$, and let $v_1$, \ldots, $v_n$ be an ordering of vertices of $G$. If \begin{equation}\label{eq:assgreedy}
|L(v_i)|\ge f(v_i)+\sum_{v_jv_i\in E(G), j<i} f(v_j) \end{equation} holds for $1\le i\le n$, then $G$ has an $(L:f)$-coloring. \end{lemma} \begin{proof}
We prove the claim by induction on $n$. The base case $n=0$ is trivial. If $n\ge 1$, then $|L(v_1)|\ge f(v_1)$ by the assumptions, and thus there exists a subset $A$ of $L(v_1)$ of size $f(v_1)$. Let $L'(v_i)=L(v_i)\setminus A$ for all $i$
such that $v_1v_i\in E(G)$, and $L'(v_i)=L(v_i)$ for all $i\ge 2$ such that $v_1v_i\not\in E(G)$. Since $|L'(v_i)|\ge |L(v_i)|-f(v_1)$ in the former case and $|L'(v_i)|=|L(v_i)|$ in the latter case, it is easy to verify that the assumption (\ref{eq:assgreedy}) holds for $G-v_1$ with the vertex ordering $v_2$, \ldots, $v_n$ and the list assignment $L'$. Hence, by the induction hypothesis, $G-v_1$ has an $(L':f)$-coloring. Assigning $A$ to $v_1$ turns this coloring into an $(L:f)$-coloring of $G$. \end{proof} When Lemma~\ref{lemma-greedy} applies, we say that we \emph{color vertices of $G$ greedily in order $v_1$, \ldots, $v_n$}.
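The inductive proof of Lemma~\ref{lemma-greedy} is effectively an algorithm; the following standalone sketch (our notation: \texttt{adj} maps each vertex to its neighbors) makes the greedy procedure explicit:

```python
def greedy_set_coloring(order, adj, L, f):
    """Color vertices in the given order; vertex v gets f(v) colors of L(v)
    not already used on its earlier neighbors.  This succeeds whenever
    |L(v_i)| >= f(v_i) + sum of f(v_j) over earlier neighbors v_j."""
    phi = {}
    for v in order:
        avail = set(L[v])
        for u in adj[v]:
            avail -= phi.get(u, set())
        assert len(avail) >= f[v], "greedy condition violated"
        phi[v] = set(sorted(avail)[:f[v]])
    return phi

# Example: path a-b-c with f = (1, 2, 1) and lists of sizes 1, 3, 3,
# which satisfy the condition for the ordering a, b, c.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
L = {'a': {1}, 'b': {1, 2, 3}, 'c': {2, 3, 4}}
f = {'a': 1, 'b': 2, 'c': 1}
phi = greedy_set_coloring(['a', 'b', 'c'], adj, L, f)
assert all(phi[u].isdisjoint(phi[v]) for u in adj for v in adj[u])
```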
Finally, let us make another simple observation, which we will often (implicitly) apply. Let $G$ be a graph, let $G_0$ be a subgraph of $G$, and let $f,g:V(G)\to\mathbf{Z}_0^+$ be functions such that
$f(v)\le g(v)$ for all $v\in V(G)$. Let us consider the situation where we need to prove that a graph is $(L:f)$-colorable for every list assignment $L$ such that $|L(v)|\ge g(v)$ for all $v\in V(G)$, under the assumption that $G_0$ is $(L:f)$-colorable. Then it suffices to prove this for all list assignments $L$ such that $|L(v)|=g(v)$ for all $v\in V(G)$: if $|L(v)|>g(v)$, then we can without loss of generality discard from $L(v)$ any color not used in the $(L:f)$-coloring of $G_0$ when $v\in V(G_0)$.
\begin{lemma}\label{3-3-3}
Let $L$ be a list assignment for a path $P=v_1v_2v_3$. If $|L(v_1)|=|L(v_3)|=5$ and $|L(v_2)|=8$, then $P$ is $(L:3)$-colorable. Moreover, for any colors $\alpha_1,\alpha_2\in L(v_1)$ and $\beta\in L(v_3)$, there exists an $(L:3)$-coloring $\varphi$ of $P$ such that $\alpha_1,\alpha_2\in \varphi(v_1)$ and $\beta\in \varphi(v_3)$. \end{lemma} \begin{proof}
Consider arbitrary colors $\alpha_1,\alpha_2\in L(v_1)$ and $\beta\in L(v_3)$. Let $f'(v_1)=1$, $f'(v_2)=3$, and $f'(v_3)=2$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $P$ has an $(L':f')$-coloring for any list assignment $L'$ such that $|L'(v_1)|=3$, $|L'(v_2)|=5$, and $|L'(v_3)|=4$.
Choose colors $\gamma_1\in L'(v_2)\setminus L'(v_3)$ and $\gamma_2\in L'(v_2)\setminus (\{\gamma_1\}\cup L'(v_1))$. Choose $\varphi'(v_2)$ as any $3$-element subset of $L'(v_2)$ containing $\gamma_1$ and $\gamma_2$. Then $|L'(v_1)\setminus \varphi'(v_2)|\ge 1$ and $|L'(v_3)\setminus \varphi'(v_2)|\ge 2$, and thus we can choose $\varphi'(v_1)$ as a $1$-element subset of $L'(v_1)\setminus \varphi'(v_2)$ and $\varphi'(v_3)$ as a $2$-element subset of $L'(v_3)\setminus \varphi'(v_2)$. Clearly, $\varphi'$ is an $(L':f')$-coloring of $P$. \end{proof}
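Lemma~\ref{3-3-3} can also be confirmed by exhaustive search on a small color universe. The following standalone sketch (our function names, not part of the proof) brute-forces the constrained coloring for random lists of the prescribed sizes:

```python
from itertools import combinations
import random

def path_333_colorable(L1, L2, L3, a1, a2, b):
    """Is there an (L:3)-coloring of the path v1 v2 v3 with a1, a2 in phi(v1)
    and b in phi(v3)?  Pure brute force over 3-element subsets."""
    for A in combinations(L1, 3):
        if a1 not in A or a2 not in A:
            continue
        for B in combinations(L2, 3):
            if not set(A).isdisjoint(B):
                continue
            for C in combinations(L3, 3):
                if b in C and set(B).isdisjoint(C):
                    return True
    return False

random.seed(1)
for _ in range(100):
    L1 = random.sample(range(12), 5)
    L2 = random.sample(range(12), 8)
    L3 = random.sample(range(12), 5)
    a1, a2 = random.sample(L1, 2)
    b = random.choice(L3)
    assert path_333_colorable(L1, L2, L3, a1, a2, b)
```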
\begin{lemma}\label{3-3-4-3} Let $L$ be a list assignment for a path $P=v_1v_2v_3v_4$
such that $|L(v_1)|=|L(v_3)|=|L(v_4)|=5$, $|L(v_2)|=8$ and the subpath $v_3v_4$ of $P$ has an $(L:3)$-coloring. Then $P$ is $(L:3)$-colorable. Moreover, for any colors $\alpha\in L(v_1)$ and $\beta\in L(v_4)$, the path $P$ has an $(L:3)$-coloring $\varphi$ such that $\alpha\in \varphi(v_1)$ and $\beta\in \varphi(v_4)$. \end{lemma}
\begin{proof} Since the path $v_3v_4$ is $(L:3)$-colorable, we have $L(v_3)\neq L(v_4)$. Consider arbitrary colors $\alpha\in L(v_1)$ and $\beta,\beta'\in L(v_4)$
such that at most one of the colors $\beta$ and $\beta'$ belongs to $L(v_3)$. Let $f'(v_1)=2$, $f'(v_2)=f'(v_3)=3$ and $f'(v_4)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $P$ has an $(L':f')$-coloring for any list assignment $L'$ such that $|L'(v_1)|=4$, $|L'(v_2)|=7$, $|L'(v_3)|=4$, and $|L'(v_4)|=3$.
Let $\gamma_3$ be any color in $L'(v_3)\setminus L'(v_4)$ and let $\gamma_2$ be any color in $L'(v_2)\setminus (\{\gamma_3\}\cup L'(v_1))$. Let $f''(v_1)=f''(v_2)=f''(v_3)=2$ and $f''(v_4)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $P$ has an $(L'':f'')$-coloring for any list assignment $L''$ such that $|L''(v_1)|=4$, $|L''(v_2)|=5$, $|L''(v_3)|=2$, and $|L''(v_4)|=3$. This is the case by coloring the vertices of $P$ greedily in order $v_3$, $v_4$, $v_2$, and $v_1$. \end{proof}
\begin{lemma} \label{3-3-4-4-3} Let $L$ be a list assignment for a path $P=v_1\ldots v_5$
such that $|L(v_1)|=|L(v_3)|=|L(v_4)|=|L(v_5)|=5$, $|L(v_2)|=8$, and the subpath $v_3v_4v_5$ has an $(L:3)$-coloring. Then $P$ is $(L:3)$-colorable. Moreover, for any colors $\alpha\in L(v_1)$ and $\beta\in L(v_5)$ such that $\{\beta\}\neq L(v_4)\setminus L(v_3)$, the path $P$ has an $(L:3)$-coloring $\varphi$ such that $\alpha\in \varphi(v_1)$ and $\beta\in \varphi(v_5)$. \end{lemma} \begin{proof} Since the path $v_3v_4v_5$ is $(L:3)$-colorable, we have $L(v_3)\neq L(v_4)\neq L(v_5)$. Consider arbitrary colors $\alpha\in L(v_1)$, $\varepsilon\in L(v_3)\setminus L(v_4)$, and $\beta\in L(v_5)$
such that $\{\beta\}\neq L(v_4)\setminus L(v_3)$. There exists a color $\gamma\in L(v_4)\setminus L(v_3)$ such that $\gamma\neq \beta$. If $\beta\not\in L(v_4)$, then choose $\beta'\in L(v_5)\setminus\{\beta,\gamma\}$ arbitrarily; otherwise, choose $\beta'\in L(v_5)\setminus L(v_4)$ arbitrarily. In either case, assigning sets $\{\alpha\}$, $\emptyset$, $\{\varepsilon\}$, $\{\gamma\}$, $\{\beta,\beta'\}$ to vertices of $P$ in order gives an $L$-set coloring. Let $f'(v_1)=f'(v_3)=f'(v_4)=2$, $f'(v_2)=3$, and $f'(v_5)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $P$ has an $(L':f')$-coloring for any list assignment $L'$ such that $|L'(v_1)|=4$, $|L'(v_2)|=6$, $|L'(v_3)|=4$, $|L'(v_4)|=3$, and $|L'(v_5)|=2$.
Choose $\kappa_3\in L'(v_3)\setminus L'(v_4)$, $\kappa_4\in L'(v_4)\setminus L'(v_5)$, and $\kappa_2\in L'(v_2)\setminus (\{\kappa_3\}\cup L'(v_1))$. Let $f''(v_1)=f''(v_2)=2$ and $f''(v_3)=f''(v_4)=f''(v_5)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $P$ has an $(L'':f'')$-coloring for any list assignment $L''$ such that $|L''(v_1)|=|L''(v_2)|=4$, $|L''(v_3)|=1$, and $|L''(v_4)|=|L''(v_5)|=2$. This is the case by coloring the vertices of $P$ greedily in order $v_3$, $v_4$, $v_5$, $v_2$, and $v_1$. \end{proof}
\begin{lemma} \label{3-3-4-4-4-3} Let $L$ be a list assignment for a path $P=v_1\ldots v_6$
such that $|L(v_1)|=|L(v_3)|=\ldots=|L(v_6)|=5$, $|L(v_2)|=8$, and the subpath $v_3\ldots v_6$ has an $(L:3)$-coloring. Then $P$ is $(L:3)$-colorable. Moreover, for any color $\alpha\in L(v_1)$, the path $P$ has an $(L:3)$-coloring $\varphi$ such that $\alpha\in \varphi(v_1)$. \end{lemma} \begin{proof}
Since $v_3v_4v_5v_6$ has an $(L:3)$-coloring $\psi$, we have $L(v_3)\neq L(v_4)\neq L(v_5)\neq L(v_6)$. Furthermore, if $|L(v_4)\setminus L(v_3)|=1$, then $\psi(v_4)$ contains the unique color $\gamma\in L(v_4)\setminus L(v_3)$, and thus $L(v_5)\setminus \{\gamma\}\not\subseteq L(v_6)$; in this case, let $\beta$ be an arbitrary color in $L(v_5)\setminus (\{\gamma\}\cup L(v_6))$. Otherwise, let $\beta$ be an arbitrary color in $L(v_5)\setminus L(v_6)$.
In either case, we have $\{\beta\}\neq L(v_4)\setminus L(v_3)$; hence, considering any $\alpha\in L(v_1)$, $P-v_6$ has an $(L:3)$-coloring $\varphi$ such that $\alpha\in \varphi(v_1)$ and $\beta\in \varphi(v_5)$
by Lemma~\ref{3-3-4-4-3}. Since $\beta\not\in L(v_6)$, we have $|L(v_6)\setminus\varphi(v_5)|\ge 3$, and thus $\varphi$ can be extended to an $(L:3)$-coloring of $P$ by choosing $\varphi(v_6)$ as an arbitrary $3$-element subset of $L(v_6)\setminus\varphi(v_5)$. \end{proof}
\begin{lemma}\label{3-4-3---3} Let $L$ be a list assignment for a path $P=v_1\ldots v_k$ with $5\le k\le 7$
such that $|L(v_1)|=|L(v_2)|=|L(v_4)|=\ldots=|L(v_k)|=5$, $|L(v_3)|=8$, and $P-v_3$ has an $(L:3)$-coloring. Then $P$ is $(L:3)$-colorable. \end{lemma} \begin{proof}
Since $v_1v_2$ has an $(L:3)$-coloring, we have $L(v_1)\neq L(v_2)$, and thus there exists a color $\alpha\in L(v_2)\setminus L(v_1)$. By Lemmas~\ref{3-3-4-3}, \ref{3-3-4-4-3}, and \ref{3-3-4-4-4-3}, there exists an $(L:3)$-coloring $\varphi$ of $P-v_1$ such that $\alpha\in \varphi(v_2)$. Since $\alpha\not\in L(v_1)$, we have $|L(v_1)\setminus\varphi(v_2)|\ge 3$, and thus $\varphi$ can be extended to an $(L:3)$-coloring of $P$ by choosing $\varphi(v_1)$ as an arbitrary $3$-element subset of $L(v_1)\setminus\varphi(v_2)$. \end{proof}
\begin{lemma}\label{triangle} Let $L$ be a list assignment for a triangle $C=v_1v_2v_3$. Then $C$ is $(L:3)$-colorable if and only if
$|L(v_i)|\ge 3$ for $1\le i\le 3$,
$|L(v_i)\cup L(v_j)|\ge 6$ for $1\le i < j\le 3$, and $|L(v_1)\cup L(v_2)\cup L(v_3)|\ge 9$. \end{lemma} \begin{proof}
If $\varphi$ is an $(L:3)$-coloring of $C$ and $S$ is a subset of $V(C)$, then $\varphi$ assigns pairwise disjoint sets to vertices of $S$, and thus $\big|\bigcup_{v\in S} L(v)\big|\ge \big|\bigcup_{v\in S} \varphi(v)\big|=3|S|$, proving that the conditions from the statement of the lemma are necessary.
Consider an auxiliary bipartite graph $H$ with one part $U$ consisting of $L(v_1)\cup L(v_2)\cup L(v_3)$ and the other part $V$ consisting of vertices $v_{i,k}$ for $1\le i,k\le 3$, with $c\in U$ adjacent to $v_{i,k}$ if and only if $c\in L(v_i)$. Using Hall's theorem, the assumptions of the lemma imply that $H$ has a matching saturating the vertices of $V$. Letting $\varphi(v_i)$ consist of the colors joined to $v_{i,1}$, $v_{i,2}$, and $v_{i,3}$ in this matching for $1\le i\le 3$ gives an $(L:3)$-coloring of $C$. \end{proof}
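The characterization in Lemma~\ref{triangle} is easy to check by exhaustive search on a small universe. The following standalone sketch (our function names, not part of the proof) compares brute-force colorability against the stated conditions on random instances:

```python
from itertools import combinations
import random

def triangle_L3_colorable(L1, L2, L3):
    """Brute force: do pairwise disjoint 3-element subsets A_i of L_i exist?"""
    return any(
        set(A).isdisjoint(B) and set(A).isdisjoint(C) and set(B).isdisjoint(C)
        for A in combinations(L1, 3)
        for B in combinations(L2, 3)
        for C in combinations(L3, 3))

def lemma_conditions(L1, L2, L3):
    """The necessary and sufficient conditions of the lemma."""
    S = [set(L1), set(L2), set(L3)]
    return (all(len(T) >= 3 for T in S)
            and all(len(S[i] | S[j]) >= 6
                    for i in range(3) for j in range(i + 1, 3))
            and len(S[0] | S[1] | S[2]) >= 9)

random.seed(0)
for _ in range(200):
    Ls = [random.sample(range(10), random.randint(3, 5)) for _ in range(3)]
    assert triangle_L3_colorable(*Ls) == lemma_conditions(*Ls)
```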
\begin{lemma}\label{lollipop}
Let $L$ be a list assignment for the graph $H$ consisting of a path $v_1v_2v_3v_4$ and an edge $v_1v_3$, such that $|L(v_1)|=|L(v_4)|=5$, $|L(v_2)|=|L(v_3)|=8$, and the triangle $v_1v_2v_3$ has an $(L:3)$-coloring. Then $H$ is $(L:3)$-colorable. \end{lemma} \begin{proof}
Since $v_1v_2v_3$ is $(L:3)$-colorable, we have $|L(v_1)\cup L(v_2)\cup L(v_3)|\ge 9$ by Lemma~\ref{triangle}, and thus there exists a color $\alpha\in (L(v_1)\cup L(v_3))\setminus L(v_2)$. Let $\beta$ be a color in $L(v_3)\setminus L(v_1)$. Let $\varphi(v_4)$ be any $3$-element subset of $L(v_4)\setminus\{\alpha,\beta\}$. Let $L'(v_3)=L(v_3)\setminus \varphi(v_4)$, $L'(v_1)=L(v_1)$ and $L'(v_2)=L(v_2)$. Note that $\beta\in L'(v_3)\setminus L'(v_1)$, and thus $|L'(v_1)\cup L'(v_3)|\ge 6$. Furthermore, $\alpha\in (L'(v_1)\cup L'(v_3))\setminus L'(v_2)$, and thus
$|L'(v_1)\cup L'(v_2)\cup L'(v_3)|\ge |L'(v_2)|+1=9$. By Lemma~\ref{triangle}, $v_1v_2v_3$ has an $(L':3)$-coloring, and this coloring extends $\varphi$ to an $(L:3)$-coloring of $H$. \end{proof}
\begin{lemma}\label{l6cycle} Let $L$ be a list assignment for a $6$-cycle $C=v_1\ldots v_6$, such that
$|L(v_i)|\ge 5$ for $1\le i\le 6$. Suppose that there exists $S\subseteq V(C)$
such that $|S|=2$, $|L(u)|=8$ for all $u\in S$, and $C-S$ is $(L:3)$-colorable. Then $C$ is $(L:3)$-colorable. \end{lemma} \begin{proof}
Without loss of generality, we can assume $|L(v)|=5$ for $v\in V(C)\setminus S$ and $S=\{v_1,v_t\}$ for some $t\in\{2,3,4\}$. Let us discuss the possible values of $t$ separately. \begin{itemize} \item Suppose first that $t=2$. Since $C-S$ is $(L:3)$-colorable, we have
$L(v_3)\neq L(v_4)\neq L(v_5)\neq L(v_6)$, and furthermore, if $|L(v_4)\setminus L(v_3)|=1$
and $|L(v_5)\setminus L(v_6)|=1$, then $L(v_4)\setminus L(v_3)\neq L(v_5)\setminus L(v_6)$. Select $\beta\in L(v_4)\setminus L(v_3)$ such that $|L(v_5)\setminus (\{\beta\}\cup L(v_6))|\ge 1$, and let $\gamma\in L(v_5)\setminus (\{\beta\}\cup L(v_6))$ be arbitrary. Then, select $\beta'\in L(v_4)\setminus\{\beta,\gamma\}$ so that at most one of $\beta$ and $\beta'$ belongs to $L(v_5)$, and $\gamma'\in L(v_5)\setminus \{\beta,\beta',\gamma\}$ so that at most one of $\gamma$ and $\gamma'$ belongs to $L(v_4)$. Furthermore, arbitrarily select $\alpha\in L(v_3)\setminus L(v_4)$ and $\varepsilon\in L(v_6)\setminus L(v_5)$. Note that assignment of sets $\emptyset$, $\emptyset$, $\{\alpha\}$, $\{\beta,\beta'\}$, $\{\gamma,\gamma'\}$, $\{\varepsilon\}$ to vertices of $C$ in order is a set coloring. Let $f'(v_1)=f'(v_2)=3$, $f'(v_3)=f'(v_6)=2$, and $f'(v_4)=f'(v_5)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $C$ has an $(L':f')$-coloring for any list assignment $L'$ such that
$|L'(v_1)|=|L'(v_2)|=7$, $|L'(v_3)|=|L'(v_6)|=3$, and $|L'(v_4)|=|L'(v_5)|=2$.
Choose $\alpha'\in L'(v_3)\setminus L'(v_4)$ and $\varepsilon'\in L'(v_6)\setminus L'(v_5)$. Let $f''(v_1)=f''(v_2)=3$ and $f''(v_3)=\ldots=f''(v_6)=1$. Applying Lemma~\ref{lemma-redulist} again, it suffices to prove that $C$ has an $(L'':f'')$-coloring for any list assignment $L''$ such that
$|L''(v_1)|=|L''(v_2)|=6$ and $|L''(v_3)|=\ldots=|L''(v_6)|=2$. If $L''(v_1)\neq L''(v_2)$, then let $\kappa$ be a color in $L''(v_1)\setminus L''(v_2)$, and let $\varphi$ be an $(L'':f'')$-coloring of the path $v_6v_5v_4v_3$ such that $\varphi(v_6)\neq \{\kappa\}$, obtained greedily. If $L''(v_1)=L''(v_2)$, then let $\varphi$ be an $(L'':f'')$-coloring of the path $v_6v_5v_4v_3$ such that $\varphi(v_3)\neq\varphi(v_6)$, which exists, since a $4$-cycle is $(2:1)$-choosable~\cite{erdosrubintaylor1979}. In either case, the choice of $\varphi$ ensures that if $L''(v_1)\setminus\varphi(v_6)$ and $L''(v_2)\setminus \varphi(v_3)$ both have size $5$, then they are different. Hence, we can choose $\varphi(v_1)$ and $\varphi(v_2)$ as disjoint $3$-element subsets of $L''(v_1)\setminus\varphi(v_6)$ and $L''(v_2)\setminus \varphi(v_3)$, respectively. This gives an $(L'':f'')$-coloring of $C$, as required.
\item Next, suppose that $t=3$. If $L(v_2)\not\subseteq L(v_1)$, then choose $\alpha\in L(v_2)\setminus L(v_1)$
and $\beta\in L(v_6)$ arbitrarily so that $L(v_5)\setminus L(v_4)\neq\{\beta\}$. If $L(v_2)\subseteq L(v_1)$, then note that $|L(v_6)\setminus (L(v_1)\setminus L(v_2))|\ge 2$, and thus we can choose $\beta\in L(v_6)\setminus (L(v_1)\setminus L(v_2))$ so that $L(v_5)\setminus L(v_4)\neq\{\beta\}$. In this case, if $\beta\in L(v_2)$ then let $\alpha=\beta$, otherwise choose $\alpha\in L(v_2)$ arbitrarily. By Lemma~\ref{3-3-4-4-3}, the path $v_2v_3\ldots v_6$ has an $(L:3)$-coloring $\varphi$ such that $\alpha\in \varphi(v_2)$ and $\beta\in \varphi(v_6)$. By the choice of $\alpha$ and $\beta$ we have
$|L(v_1)\setminus (\varphi(v_2)\cup \varphi(v_6))|\ge 3$, and thus we can extend $\varphi$ to an $(L:3)$-coloring of $C$ by choosing $\varphi(v_1)$ as a $3$-element subset of $L(v_1)\setminus (\varphi(v_2)\cup \varphi(v_6))$.
\item Finally, suppose that $t=4$. Since $C-S$ is $(L:3)$-colorable, we have $L(v_2)\neq L(v_3)$ and $L(v_5)\neq L(v_6)$. Hence, there exist $\alpha\in L(v_2)\setminus L(v_3)$, $\beta\in L(v_3)\setminus L(v_2)$, $\gamma\in L(v_5)\setminus L(v_6)$, and $\varepsilon\in L(v_6)\setminus L(v_5)$. Let $f'(v_1)=f'(v_4)=3$ and $f'(v_2)=f'(v_3)=f'(v_5)=f'(v_6)=2$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $C$ has an $(L':f')$-coloring for any list assignment $L'$ such that
$|L'(v_1)|=|L'(v_4)|=6$ and $|L'(v_2)|=|L'(v_3)|=|L'(v_5)|=|L'(v_6)|=4$.
Suppose first that there exists a color $\alpha'\in L'(v_2)\setminus L'(v_1)$. Since $|L'(v_5)|+|L'(v_3)\setminus\{\alpha'\}|>|L'(v_4)|$, there exist colors $\beta'\in L'(v_3)\setminus\{\alpha'\}$ and $\gamma'\in L'(v_5)$ such that either $\beta'=\gamma'$ or at most one of $\beta'$ and $\gamma'$ belongs to $L'(v_4)$. Let $f''(v_1)=f''(v_4)=3$, $f''(v_2)=f''(v_3)=f''(v_5)=1$, and $f''(v_6)=2$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $C$ has an $(L'':f'')$-coloring for any list assignment $L''$ such that
$|L''(v_1)|=6$, $|L''(v_4)|=5$, $|L''(v_2)|=|L''(v_3)|=2$, and $|L''(v_5)|=|L''(v_6)|=3$. This is the case by coloring the vertices of $C$ greedily in order $v_2$, $v_3$, $v_5$, $v_6$, $v_1$, and $v_4$.
Hence, we can assume that $L'(v_2)\subset L'(v_1)$, and by symmetry also $L'(v_6)\subset L'(v_1)$ and $L'(v_3),L'(v_5)\subset L'(v_4)$. Since $|L'(v_2)|+|L'(v_6)|-|L'(v_1)|=2$, it follows that $|L'(v_2)\cap L'(v_6)|\ge 2$, and symmetrically $|L'(v_3)\cap L'(v_5)|\ge 2$. Hence, there exist $\alpha'\in L'(v_2)\cap L'(v_6)$ and $\beta'\in L'(v_3)\cap L'(v_5)$ such that $\alpha'\neq\beta'$. Let $f''(v_1)=f''(v_4)=3$ and $f''(v_2)=f''(v_3)=f''(v_5)=f''(v_6)=1$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $C$ has an $(L'':f'')$-coloring for any list assignment $L''$ such that
$|L''(v_1)|=|L''(v_4)|=5$ and $|L''(v_2)|=|L''(v_3)|=|L''(v_5)|=|L''(v_6)|=2$. This is again the case by coloring the vertices of $C$ greedily in order $v_2$, $v_3$, $v_5$, $v_6$, $v_1$, and $v_4$. \end{itemize} \end{proof}
\begin{lemma}\label{claw}
Let $L$ be a list assignment for the graph $H$ consisting of a vertex $v$ with three neighbors $v_1$, $v_2$, and $v_3$, such that $|L(v_i)|=5$ for $1\le i\le 3$ and $|L(v)|=8$. Then $H$ is $(L:3)$-colorable. \end{lemma} \begin{proof}
For $1\le i\le 3$, there exists $\alpha_i\in L(v)\setminus L(v_i)$. Fix $\varphi(v)$ as a $3$-element subset of $L(v)$ containing $\alpha_1$, $\alpha_2$, and $\alpha_3$. Then $|L(v_i)\setminus\varphi(v)|\ge 3$ for $1\le i\le 3$, and thus $\varphi$ extends to an $(L:3)$-coloring of $H$. \end{proof}
\begin{lemma}\label{claw5}
Let $L$ be a list assignment for the graph $H$ consisting of a vertex $v$ with four neighbors $v_1$, \ldots, $v_4$, and possibly the edge $v_3v_4$, such that $|L(v_1)|=|L(v_2)|=5$, $|L(v)|=8$ and either \begin{itemize}
\item $v_3v_4\not\in E(H)$ and $|L(v_3)|=|L(v_4)|=5$, or
\item $v_3v_4\in E(H)$, $|L(v_3)|=|L(v_4)|=8$, and the triangle $vv_3v_4$ is $(L:3)$-colorable. \end{itemize} Then $H$ is $(L:3)$-colorable. \end{lemma} \begin{proof}
If $v_3v_4\not\in E(H)$, then let $A_i$ be a $3$-element subset of $L(v)\setminus L(v_i)$ for $1\le i\le 4$. Since $\sum_{i=1}^4 |A_i|>|L(v)|$, there exists a color in $L(v)$ belonging to at least two of the sets $A_1$, \ldots, $A_4$. Hence, there exists a $3$-element set $\varphi(v)\subset L(v)$ such that $\varphi(v)\cap A_i\neq \emptyset$ for $1\le i\le 4$. Then $|L(v_i)\setminus \varphi(v)|\ge 3$, and thus $\varphi$ extends to an $(L:3)$-coloring of $H$.
If $v_3v_4\in E(H)$, then since $vv_3v_4$ is $(L:3)$-colorable, there exists a color $\alpha\in L(v)$
such that $|\{\alpha\}\cup L(v_3)\cup L(v_4)|\ge 9$. Since $|L(v_1)\setminus \{\alpha\}|+|L(v_2)\setminus \{\alpha\}|>|L(v)\setminus\{\alpha\}|$, there exist colors $\beta_1\in L(v_1)\setminus\{\alpha\}$ and $\beta_2\in L(v_2)\setminus\{\alpha\}$ such that either $\beta_1=\beta_2$ or at most one of $\beta_1$ and $\beta_2$ belongs to $L(v)$. For $i\in\{1,2\}$, let $\varphi(v_i)$ be any $3$-element subset of $L(v_i)\setminus\{\alpha\}$ containing $\beta_i$. Then $L(v)\setminus (\varphi(v_1)\cup\varphi(v_2))$ has size at least $3$ and contains $\alpha$, and thus $\varphi$ extends to an $(L:3)$-coloring of $H$ by Lemma~\ref{triangle}. \end{proof}
\begin{lemma}\label{claw53}
Let $L$ be a list assignment for the graph $H$ consisting of a vertex $v$ with three neighbors $v_1$, $v_2$, and $v_3$, a vertex $u_1$ adjacent to $v_1$, and possibly one edge between the vertices $v_1$, $v_2$, and $v_3$, such that $|L(u_1)|=|L(v)|=5$ and $|L(v_i)|=2+3\deg_H(v_i)$ for $1\le i\le 3$. If $H-v_1$ is $(L:3)$-colorable, then $H$ is $(L:3)$-colorable. \end{lemma} \begin{proof} If $v_1v_2\in E(H)$, then let $\varphi(v_2)$ be a $3$-element subset of $L(v_2)\setminus L(v)$. Let $L'(v)=L(v)$, $L'(v_3)=L(v_3)$, $L'(u_1)=L(u_1)$, and $L'(v_1)=L(v_1)\setminus\varphi(v_2)$. Since $H-v_1$ is $(L:3)$-colorable, $H-\{v_1,v_2\}$ is $(L':3)$-colorable, and thus $H-v_2$ is $(L':3)$-colorable by Lemma~\ref{3-3-4-3}. Hence, $\varphi$ extends to an $(L:3)$-coloring of $H$.
Hence, assume that $v_1v_2\not\in E(H)$, and by symmetry, $v_1v_3\not\in E(H)$. If $v_2v_3\in E(H)$, then since $vv_2v_3$ is $(L:3)$-colorable, there exists $\alpha\in L(v)$
such that $|\{\alpha\}\cup L(v_2)\cup L(v_3)|\ge 9$. Choose distinct $\beta,\beta'\in L(v_1)\setminus\{\alpha\}$ such that $\beta\not\in L(u_1)$ and $\beta'\not\in L(v)$. Let $\varphi(v_1)$ be an arbitrary $3$-element subset of $L(v_1)\setminus\{\alpha\}$ containing
$\beta$ and $\beta'$. Then $\varphi$ extends to an $(L:3)$-coloring of $H$ (using Lemma~\ref{triangle}), since $|L(u_1)\setminus\varphi(v_1)|\ge 3$,
$|L(v)\setminus\varphi(v_1)|\ge 3$, and $\alpha\in L(v)\setminus\varphi(v_1)$.
Finally, suppose that $\{v_1, v_2,v_3\}$ is an independent set. Since the path $v_2vv_3$ is $(L:3)$-colorable, we have $L(v_2)\neq L(v)\neq L(v_3)$. Choose colors $\beta\in L(v)\setminus L(v_2)$ and $\beta'\in L(v)\setminus L(v_3)$ arbitrarily. By Lemma~\ref{3-3-3}, there exists an $(L:3)$-coloring $\varphi$ of
$vv_1u_1$ such that $\beta,\beta'\in \varphi(v)$. Then $|L(v_i)\setminus\varphi(v)|\ge 3$ for $i\in \{2,3\}$, and thus $\varphi$ extends to an $(L:3)$-coloring of $H$. \end{proof}
\begin{lemma}\label{claw543}
Let $L$ be a list assignment for the graph $H$ consisting of a path $u_1v_1vv_2u_2$, a vertex $v_3$ adjacent to $v$, and possibly the edge $v_1v_3$, such that $|L(u_1)|=|L(v)|=|L(u_2)|=5$, $|L(v_2)|=8$, $|L(v_3)|=2+3\deg_H(v_3)$, and $|L(v_1)|=3\deg_H(v_1)-1$. If $H-v_2$ is $(L:3)$-colorable, then $H$ is $(L:3)$-colorable. \end{lemma} \begin{proof} Let us first consider the case that $v_1v_3\in E(H)$. Since the triangle $v_1vv_3$ is $(L:3)$-colorable, there exists $\alpha\in (L(v)\cup L(v_1))\setminus L(v_3)$. Choose $\beta\in L(v_2)\setminus (\{\alpha\}\cup L(u_2))$ and distinct colors $\gamma,\gamma'\in L(v_2)\setminus L(v)$ arbitrarily. Let $\varphi(v_2)$ be a $3$-element subset of
$L(v_2)$ containing $\beta$, $\gamma$, and $\gamma'$. Note that $|L(u_2)\setminus\varphi(v_2)|\ge 3$ by the choice of $\beta$, and thus we can choose $\varphi(u_2)$ as a $3$-element subset of $L(u_2)\setminus\varphi(v_2)$. By the choice of $\gamma$ and $\gamma'$, there exists a $4$-element subset $A$ of $L(v)$, containing $\alpha$ if $\alpha\in L(v)$. Choose distinct colors $\kappa,\kappa'\in L(v_1)\setminus A$, such that $\kappa=\alpha$ if $\alpha\not\in A$. Let $\varphi(u_1)$ be a $3$-element subset of $L(u_1)\setminus \{\kappa,\kappa'\}$, and let $B=L(v_1)\setminus \varphi(u_1)$. Note that $|B|\ge 5$, $|A\cup B|\ge |A\cup \{\kappa,\kappa'\}|=6$, and $|A\cup B\cup L(v_3)|\ge |L(v_3)\cup \{\alpha\}|=9$, and thus by Lemma~\ref{triangle}, $\varphi$ extends to an $(L:3)$-coloring of $H$.
Suppose now that $v_1v_3\not\in E(H)$. Since the path $u_1v_1vv_3$ is $(L:3)$-colorable, we have
$L(u_1)\neq L(v_1)\neq L(v)\neq L(v_3)$, and furthermore, if $|L(v_1)\setminus L(u_1)|=1$, then $L(v_1)\setminus L(u_1)\neq L(v)\setminus L(v_3)$. Hence, there exists a color $\alpha\in L(v)\setminus L(v_3)$
such that $|L(v_1)\setminus (\{\alpha\}\cup L(u_1))|\ge 1$. Let $\beta$ be any color in $L(v_1)\setminus (\{\alpha\}\cup L(u_1))$. If $\alpha\in L(v_1)$, then let $\alpha'$ be any color in $L(v)\setminus L(v_1)$, otherwise let $\alpha'$ be any color in $L(v)\setminus \{\alpha,\beta\}$. If $\beta\in L(v)$, then let $\beta'$ be any color in $L(v_1)\setminus L(v)$, otherwise let $\beta'$ be any color in $L(v_1)\setminus\{\alpha,\alpha',\beta\}$. Let $\gamma$ be any color in $L(u_1)\setminus L(v_1)$, let $\varepsilon$ be any color in $L(v_3)\setminus L(v)$, and let $\kappa$ be any color in $L(v_2)\setminus (\{\alpha,\alpha'\}\cup L(u_2))$. Let $f'(u_1)=f'(v_3)=f'(v_2)=2$, $f'(v_1)=f'(v)=1$, and $f'(u_2)=3$. By Lemma~\ref{lemma-redulist}, it suffices to prove that $H$ has an $(L':f')$-coloring for any list assignment $L'$ such that $|L'(u_1)|=|L'(v_3)|=3$, $|L'(v_1)|=2$, $|L'(v)|=1$, and $|L'(v_2)|=|L'(u_2)|=5$. This is the case by coloring the vertices of $H$ greedily in order $v$, $v_3$, $v_1$, $u_1$, $v_2$, and $u_2$. \end{proof}
\section{Properties of a minimal counterexample}
We are going to prove a mild strengthening of Theorem~\ref{MTl} where a clique (one vertex, two adjacent vertices, or a triangle) is precolored. A (hypothetical) \emph{counterexample} (to this strengthening) is a triple $(G,L,Z)$, where $G$ is a plane graph without $4$- or $5$-cycles, $Z$ is the vertex set of a clique of $G$, and $L$ is an assignment of lists of size $11$ to vertices of $V(G)\setminus Z$ and pairwise disjoint lists of size $3$ to the vertices of $Z$, such that $G$ is not $(L:3)$-colorable. The \emph{order} of the counterexample is the number of vertices of $G$. A counterexample is \emph{minimal} if there exists no counterexample of smaller order.
\begin{lemma}\label{conn} If $(G,L,Z)$ is a minimal counterexample, then $G$ is $2$-connected and every triangle in $G$ bounds a face. \end{lemma} \begin{proof} If $G$ is not $2$-connected, then there exist proper induced subgraphs $G_1$ and $G_2$ of $G$ and a vertex $z\in V(G_2)$ such that $G=G_1\cup G_2$, $V(G_1\cap G_2)\subseteq\{z\}$, and $Z\subseteq V(G_1)$. By the minimality of the counterexample, there exists an $(L:3)$-coloring $\varphi_1$ of $G_1$. If $z\in V(G_1)$, then let $L'(z)=\varphi_1(z)$, otherwise let $L'(z)$ be any $3$-element subset of $L(z)$. Let $L'(v)=L(v)$ for all $v\in V(G_2)\setminus \{z\}$. Note that $(G_2,L',\{z\})$ is not a counterexample since it has smaller order than $(G,L,Z)$, and thus there exists an $(L':3)$-coloring $\varphi_2$ of $G_2$. However, then $\varphi_1$ and $\varphi_2$ combine to an $(L:3)$-coloring of $G$, which is a contradiction.
Similarly, if $G$ contains a non-facial triangle $T$, then there exist proper induced subgraphs $G_1$ and $G_2$ of $G$ such that $G=G_1\cup G_2$, $T=G_1\cap G_2$, and $Z\subseteq V(G_1)$. By the minimality of the counterexample, there exists an $(L:3)$-coloring $\varphi_1$ of $G_1$. Let $L'(z)=\varphi_1(z)$ for all $z\in V(T)$ and $L'(v)=L(v)$ for all $v\in V(G_2)\setminus V(T)$. Note that $(G_2,L',V(T))$ is not a counterexample since it has smaller order than $(G,L,Z)$, and thus there exists an $(L':3)$-coloring $\varphi_2$ of $G_2$. However, then $\varphi_1$ and $\varphi_2$ combine to an $(L:3)$-coloring of $G$, which is a contradiction. \end{proof}
Let $L$ be a list assignment for a graph $G$, let $H$ be an induced subgraph of $G$, and let $\psi$ be an $(L:3)$-coloring of $G-V(H)$. Let $L_\psi$ denote the list assignment for $H$ defined by $$L_\psi(v)=L(v)\setminus\bigcup_{uv\in E(G), u\not\in V(H)} \psi(u)$$ for all $v\in V(H)$. Note that \begin{equation}\label{eq:size}
|L_\psi(v)|\ge |L(v)|-3(\deg_G(v)-\deg_H(v)). \end{equation} Furthermore, any $(L_\psi:3)$-coloring of $H$ combines with $\psi$ to an $(L:3)$-coloring of $G$. Hence, the following claim holds. \begin{proposition}\label{obs:extend} Let $(G,L,Z)$ be a minimal counterexample and let $H$ be an induced subgraph of $G$ disjoint from $Z$. If $\psi$ is an $(L:3)$-coloring of $G-V(H)$, then $H$ is not $(L_\psi:3)$-colorable. \end{proposition}
In a counterexample $(G,L,Z)$, a vertex $v\in V(G)$ is \emph{internal} if $v\not\in Z$.
\begin{lemma}\label{MD} If $(G,L,Z)$ is a minimal counterexample, then every internal vertex of $G$ has degree at least $3$. \end{lemma} \begin{proof}
Suppose for a contradiction that there exists a vertex $v\in V(G)\setminus Z$ of degree at most two. By the minimality of the counterexample, the graph $G-v$ has an $(L:3)$-coloring $\psi$. By Proposition~\ref{obs:extend}, $v$ is not $(L_\psi:3)$-colorable. However, this is a contradiction, since $|L_\psi(v)|\ge 11-2\cdot 3=5$ by (\ref{eq:size}). \end{proof}
\begin{lemma} \label{33-path} Let $(G,L,Z)$ be a minimal counterexample. Let $P=v_1\ldots v_k$ be a path in $G$ disjoint from $Z$ such that $3\le k\le 6$, $\deg (v_1) = \deg (v_2) = \deg (v_k) = 3$ and $\deg(v_i)=4$ for $3\le i\le k-1$. Then $k=3$ and $v_1v_3\in E(G)$. \end{lemma} \begin{proof} Suppose for a contradiction that either $k\ge 4$, or $k=3$ and $v_1v_3\not\in E(G)$. Choose such a path $P$ with $k$ minimum.
Note that $G$ contains at most one of the edges $v_1v_k$ and $v_2v_k$; this follows by the assumptions if $k=3$ and by the fact that $G$ does not contain $4$- or $5$-cycles otherwise. By the minimality of $k$, we conclude that $G$ contains neither of these edges, with the exception of the case $k=3$ and the edge $v_2v_3$ (since otherwise we can consider a path $v_1v_2v_k$ or $v_2v_1v_k$ instead of $P$). Consequently, by the minimality of $k$, it follows that the path $P$ is induced.
By the minimality of $G$, the graph $G-\{v_1,v_2\}$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(P)$, and consider the list assignment $L_\psi$ for $P$. By the existence of $\psi_0$, we conclude that $P-\{v_1,v_2\}$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v_2)|\ge 8$ and $|L_\psi(v_i)|\ge 5$ for $i=1,\ldots, k$ by (\ref{eq:size}). By Lemmas~\ref{3-3-3}, \ref{3-3-4-3}, \ref{3-3-4-4-3}, and \ref{3-3-4-4-4-3}, the path $P$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof}
\begin{lemma}\label{6-cycle} Let $(G,L,Z)$ be a minimal counterexample. Let $C=v_1\ldots v_6$ be a $6$-cycle in $G$ disjoint from $Z$ such that all vertices of $C$ have degree at most $4$. Then at most one vertex of $C$ has degree three. \end{lemma}
\begin{proof} Suppose for a contradiction that $C$ contains at least two vertices of degree three, and let $S$ be the set of two such vertices. Note that since $G$ does not contain $4$- or $5$-cycles, the cycle $C$ is induced.
By the minimality of $G$, the graph $G-S$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(C)$, and consider the list assignment $L_\psi$ for $C$. By the existence of $\psi_0$, we conclude that $C-S$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v)|\ge 8$ for all $v\in S$ and $|L_\psi(v)|\ge 5$ for all $v\in V(C)\setminus S$ by (\ref{eq:size}). By Lemma~\ref{l6cycle}, the cycle $C$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof}
\begin{lemma}\label{tria3} Let $(G,L,Z)$ be a minimal counterexample. Let $v_1v_2v_3$ be a triangle in $G$ disjoint from $Z$ such that $\deg(v_1),\deg(v_3)\le 4$ and $\deg(v_2)=3$. Then $v_3$ has no neighbor $v_4\not\in \{v_1,v_2\}\cup Z$ of degree $3$. \end{lemma} \begin{proof} Suppose for a contradiction that $v_3$ has such a neighbor $v_4$. Note that $v_1v_4, v_2v_4\not\in E(G)$, since $G$ does not contain $4$-cycles. Let $H$ be the subgraph of $G$ induced by $\{v_1,v_2,v_3,v_4\}$.
By the minimality of $G$, the graph $G-v_4$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(H)$, and consider the list assignment $L_\psi$ for $H$. By the existence of $\psi_0$, we conclude that the triangle $v_1v_2v_3$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v_2)|,|L_\psi(v_3)|\ge 8$ and $|L_\psi(v_1)|,|L_\psi(v_4)|\ge 5$ by (\ref{eq:size}). By Lemma~\ref{lollipop}, the graph $H$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof}
\begin{lemma} \label{34-path} Let $(G,L,Z)$ be a minimal counterexample. Then $G$ does not contain a path $P=v_1\ldots v_k$ disjoint from $Z$ such that $5\le k\le 7$, $\deg (v_1) = \deg (v_3) = \deg (v_k) = 3$, $\deg(v_2)=4$, and $\deg(v_i)=4$ for $4\le i\le k-1$. \end{lemma} \begin{proof} Suppose for a contradiction that $G$ contains such a path $P$. By Lemma~\ref{33-path}, the vertices $v_1$, $v_3$, and $v_k$ form an independent set. By considering $P$ as short as possible, we can assume that $v_3\ldots v_k$ is an induced path. Note that $v_2v_j\not\in E(G)$ for $5\le j\le k$ and $v_1v_i\not\in E(G)$ for $4\le i\le k-1$ by the absence of $4$- and $5$-cycles and by Lemma~\ref{6-cycle}. Furthermore, $v_2v_4\not\in E(G)$ by Lemma~\ref{tria3}. Hence, $P$ is an induced path.
By the minimality of $G$, the graph $G-v_3$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(P)$, and consider the list assignment $L_\psi$ for $P$. By the existence of $\psi_0$, we conclude that $P-v_3$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v_3)|\ge 8$ and $|L_\psi(v_i)|\ge 5$ for $1\le i\le k$ by (\ref{eq:size}). By Lemma~\ref{3-4-3---3}, the path $P$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof}
We now consider the neighborhoods of vertices of degree 4.
\begin{lemma} \label{4-vertex} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be an internal vertex of $G$ of degree four. Then $v$ has at most two internal neighbors of degree three. \end{lemma} \begin{proof}
Suppose for a contradiction that $v$ has three such neighbors $v_1, v_2,v_3\in V(G)\setminus Z$. Note that $\{v_1,v_2,v_3\}$ is an independent set by Lemma~\ref{tria3}. By the minimality of $G$, the graph $G-\{v,v_1,v_2,v_3\}$ has an $(L:3)$-coloring $\psi$. Note that $|L_\psi(v)|\ge 8$ and $|L_\psi(v_i)|\ge 5$ for $1\le i\le 3$ by (\ref{eq:size}). By Lemma~\ref{claw}, the subgraph $G[\{v,v_1,v_2,v_3\}]$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof}
Next, let us consider the neighborhoods of vertices of degree 5.
\begin{lemma} \label{5-vertex} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be an internal vertex of $G$ of degree five. If $v$ has four internal neighbors $v_1$, \ldots, $v_4$ of degree three, then $G[\{v_1,\ldots,v_4\}]$ is a perfect matching. \end{lemma} \begin{proof} Suppose for a contradiction that $G[\{v_1,\ldots,v_4\}]$ is not a perfect matching. Since $G$ does not contain $4$-cycles, no two edges of $G[\{v_1,\ldots,v_4\}]$ share a vertex, and since $G[\{v_1,\ldots,v_4\}]$ is not a perfect matching, it follows that $G[\{v_1,\ldots,v_4\}]$ has at most one edge; we can assume that it contains no edge other than $v_3v_4$. Let $H=G[\{v,v_1,v_2,v_3,v_4\}]$.
By the minimality of $G$, the graph $G-\{v_1,v_2\}$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(H)$, and consider the list assignment $L_\psi$ for $H$. By the existence of $\psi_0$, we conclude that $H-\{v_1,v_2\}$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v)|\ge 8$, $|L_\psi(v_i)|\ge 5$ for $1\leq i\leq 2$ and $|L_\psi(v_i)|\ge 2+3\deg_H(v_i)$ for $3\leq i\leq 4$ by (\ref{eq:size}). By Lemma~\ref{claw5}, the graph $H$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof}
\begin{lemma} \label{5-vertex3} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be an internal vertex of $G$ of degree five. If $v$ has three internal neighbors $v_1$, $v_2$, and $v_3$ of degree three, then $v_1$ has no neighbor of degree three not belonging to $Z$ and not adjacent to $v$. \end{lemma} \begin{proof} Suppose for a contradiction that $v_1$ has a neighbor $u_1\not\in Z\cup N_G(v)$ of degree three. Since $G$ does not contain $4$-cycles, it follows that $u_1v_2,u_1v_3\not\in E(G)$ and that $G$ contains at most one of the edges $v_1v_2$, $v_2v_3$, and $v_1v_3$. Let $H=G[\{v,v_1,v_2,v_3,u_1\}]$.
By the minimality of $G$, the graph $G-v_1$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(H)$, and consider the list assignment $L_\psi$ for $H$. By the existence of $\psi_0$, we conclude that $H-v_1$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v)|\ge 5$, $|L_\psi(u_1)|\ge 5$ and $|L_\psi(v_i)|\ge 2+3\deg_H(v_i)$ for $1\leq i\leq 3$ by (\ref{eq:size}). By Lemma~\ref{claw53}, the graph $H$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof}
\begin{lemma}\label{5-vertex43} Let $(G,L,Z)$ be a minimal counterexample, and let $P=u_1v_1vv_2u_2$ be a path in $G$ vertex-disjoint from $Z$. If $vu_2\not\in E(G)$, $\deg(v)=5$, $\deg(u_1)=\deg(u_2)=\deg(v_2)=3$, and $\deg(v_1)=4$, then $v$ has no internal neighbors of degree three distinct from $v_2$ and $u_1$. \end{lemma} \begin{proof} Suppose for a contradiction that $v$ has a neighbor $v_3\not\in \{v_2,u_1\}\cup Z$ of degree three. Note that $G$ does not contain the edge $v_1v_2$ by Lemma~\ref{33-path} and the edge $vu_1$ by Lemma~\ref{5-vertex3}. Since $G$ does not contain $4$- or $5$-cycles, $P$ is an induced path. By Lemma~\ref{33-path} and the absence of $4$- and $5$-cycles, $v_3$ has no neighbors among $u_1$, $v_2$, and $u_2$. Consequently, since $G$ does not contain $4$- or $5$-cycles, $H=G[\{u_1,v_1,v,v_2,u_2,v_3\}]$ consists of the path $P$, the edge $vv_3$, and possibly the edge $v_1v_3$.
By the minimality of $G$, the graph $G-v_2$ has an $(L:3)$-coloring $\psi_0$. Let $\psi$ be the restriction of $\psi_0$ to $G-V(H)$, and consider the list assignment $L_\psi$ for $H$. By the existence of $\psi_0$, we conclude that $H-v_2$ is $(L_\psi:3)$-colorable. Note that $|L_\psi(v)|\ge 5$, $|L_\psi(u_i)|\ge 5$ for $1\leq i\leq 2$, $|L_\psi(v_1)|\ge 3\deg_H(v_1)-1$, $|L_\psi(v_2)|\ge 8$ and $|L_\psi(v_3)|\ge 2+3\deg_H(v_3)$ by (\ref{eq:size}). By Lemma~\ref{claw543}, the graph $H$ is $(L_\psi:3)$-colorable. However, this contradicts Proposition~\ref{obs:extend}. \end{proof}
\section{Discharging}\label{sec-discharge}
\subsection{Notation} Consider a minimal counterexample $(G,L,Z)$. We say that the faces of $G$ of length at least $6$ are \emph{$6^+$-faces}. Since $G$ is $2$-connected by Lemma~\ref{conn}, every face of $G$ is bounded by a cycle, and in particular, every face of $G$ is either a $3$-face or a $6^+$-face. A vertex $v\in V(G)$ is a \emph{$k$-vertex} if $v$ is internal and $\deg(v)=k$. We say that $v$ is a \emph{$k^+$-vertex} if either $v\in Z$ or $\deg(v)\ge k$.
Let $v_1vv_2$ be a part of the cycle bounding a $6^+$-face $f$ of $G$, and for $i\in\{1,2\}$, let $f_i\neq f$ be the face incident with the edge $vv_i$. If both $f_1$ and $f_2$ are $3$-faces, we say that $v$ is \emph{type-II incident} with $f$. If exactly one of $f_1$ and $f_2$ is a $3$-face, we say that $v$ is \emph{type-I incident} with $f$. If neither $f_1$ nor $f_2$ is a $3$-face, we say that $v$ is \emph{type-0 incident} with $f$. See Figure~\ref{fig-incid} for an illustration.
\begin{figure}
\caption{Type-II, type-I, and type-0 incidences.}
\label{fig-incid}
\end{figure}
Suppose that in the situation described in the previous paragraph, $v$ is a 4-vertex type-I incident with $f$, where $f_1$ is a $3$-face, $v_1$ is a $4^+$-vertex and $v_2$ is a $5^+$-vertex. Let $v_2vx$ be the subpath of the cycle bounding $f_2$ centered at $v$. If $x$ is a $3$-vertex, then we say $v$ is {\em type-I-1 incident} with $f$. If $x$ is a $4$-vertex, $f_2$ is bounded by a $6$-cycle $xvv_2w_1w_2w_3$, $w_1$ and $w_3$ are $3$-vertices and $w_2$ is a $4$-vertex type-II incident with $f_2$, then we say $v$ is {\em type-I-2 incident} with $f$. See Figure~\ref{fig-type-Ia} for an illustration.
\begin{figure}
\caption{Type-I-1 and type-I-2 incidences.}
\label{fig-type-Ia}
\end{figure}
Let $v_0v_1vv_2v_3$ be a subpath of the cycle bounding a $6^+$-face $f$, where $vv_1$ is incident with a $3$-face, $vv_2$ is not incident with a $3$-face, $v$ is a $5$-vertex and $v_1$ is a $3$-vertex. Let $v_1$, $v_2$, $x_1$, $x_2$, $x_3$ be the neighbors of $v$ listed in cyclic order according to their drawing in $G$. If both $v_2$ and $v_3$ are $3$-vertices, then we say $v$ is {\em type-I-3 incident} with $f$. If $v_0$ and $x_1$ are $3$-vertices and $x_1$ is contained in a triangle $x_1yz$ for $4^+$-vertices $y$ and $z$ distinct from $x_2$ and $x_3$, then we say $v$ is {\em type-I-4 incident} with $f$. See Figure~\ref{fig-type-Ib} for an illustration.
\begin{figure}
\caption{Type-I-3 and type-I-4 incidences.}
\label{fig-type-Ib}
\end{figure}
Let $v_1vv_2v_3$ be a part of the cycle bounding a $6^+$-face $f$, where $v$ is a $5$-vertex type-0 incident with $f$. Let $v_1$, $v_2$, $x_1$, $x_2$, $x_3$ be the neighbors of $v$ listed in cyclic order according to their drawing in $G$. If $v_1$ and $v_2$ are $3$-vertices, then we say $v$ is {\em type-0-1 incident} with $f$. If $v_2$ and $v_3$ are $3$-vertices, then we say $v$ is {\em type-0-2 incident} with $f$. If both $x_1$ and $x_3$ belong to triangles containing only $3$-vertices distinct from $x_2$, then we say $v$ is {\em type-0-3 incident} with $f$. See Figure~\ref{fig-type-0} for an illustration.
\begin{figure}
\caption{Type-0-1, type-0-2, and type-0-3 incidences.}
\label{fig-type-0}
\end{figure}
\subsection{Initial charge and discharging rules}
Now we proceed by the discharging method. Consider a minimal counterexample $(G,L,Z)$. Set the initial charge of every vertex $v$ of $G$ to be $\text{ch}_0(v)=2\deg (v)-6$, and the initial charge of every face $f$ of $G$ to be $\text{ch}_0(f)=|f|-6$. By Euler's formula, \begin{align}
\sum_{v\in V(G)}\text{ch}_0(v)+\sum_{f\in F(G)}\text{ch}_0(f)&=\sum_{v\in V(G)}(2\deg (v)-6)+\sum_{f\in F(G)}(|f|-6)\nonumber\\
&=6(|E(G)|-|V(G)|-|F(G)|)=-12.\label{eq:sum} \end{align} We redistribute the charges according to the following rules:
\begin{description} \item[Rt] If a $6^+$-face $f$ shares an edge with a $3$-face $f'$, then $f$ sends 1 to $f'$. \item[R4] Suppose $v$ is a 4-vertex and $f$ is a $6^+$-face incident with $v$. \begin{itemize} \item[(II)] If $v$ is type-II incident with $f$, then $v$ sends $1$ to $f$. \item[(I)] Suppose $v$ is type-I incident with $f$. If $v$ is type-I-1 or type-I-2 incident with $f$, then $v$ sends $1/2$ to $f$, otherwise $v$ sends $1$ to $f$. \item[(0)] Suppose $v$ is type-0 incident with $f$. If either $v$ is not incident with any $3$-faces or $v$ is type-I-1 or type-I-2 incident with another $6^+$-face, then $v$ sends $1/2$ to $f$. \end{itemize} \item[R5] Suppose $v$ is a 5-vertex and $f$ is a $6^+$-face incident with $v$. \begin{itemize} \item[(II)] Suppose $v$ is type-II incident with $f$. If $v$ is type-I-3 incident with another $6^+$-face, then $v$ sends $1$ to $f$, otherwise $v$ sends $2$ to $f$. \item[(I)] Suppose $v$ is type-I incident with $f$. If $v$ is type-I-3 or type-I-4 incident with $f$, then $v$ sends $3/2$ to $f$, otherwise $v$ sends $1$ to $f$. \item[(0)] Suppose $v$ is type-0 incident with $f$. If $v$ is type-0-1 or type-0-2 incident with $f$, then $v$ sends $1$ to $f$; otherwise, if $v$ is not type-0-3 incident with $f$, then $v$ sends $1/2$ to $f$. \end{itemize} \item[R6] Suppose $v$ is a $6^+$-vertex and $f$ is a $6^+$-face incident with $v$. \begin{itemize} \item[(II)] If $v$ is type-II incident with $f$, then $v$ sends $2$ to $f$. \item[(I)] If $v$ is type-I incident with $f$, then $v$ sends $3/2$ to $f$. \item[(0)] If $v$ is type-0 incident with $f$, then $v$ sends $1$ to $f$. \end{itemize} \end{description} In the situations of rules R4, R5, and R6, we write $\text{ch}(v\to f)$ for the amount of charge sent from $v$ to $f$.
\subsection{Final charges of vertices}
Let $\text{ch}$ denote the charge assignment after performing the charge redistribution using the rules Rt, R4, R5, and R6.
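Note that the rules only transfer charge between vertices and faces, and thus the total amount of charge is preserved; hence, by (\ref{eq:sum}),
$$\sum_{v\in V(G)}\text{ch}(v)+\sum_{f\in F(G)}\text{ch}(f)=-12.$$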
\begin{lemma}\label{charge-4vertex} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be a vertex of $G$. If $v$ is a $4$-vertex, then $\text{ch}(v)\ge 0$. \end{lemma} \begin{proof} Note that $\text{ch}_0(v)=2$. Let $v_1$, \ldots, $v_4$ be the neighbors of $v$ listed in cyclic order according to their drawing in $G$. For $1\le i\le 4$, let $f_i$ be the face whose boundary contains the path $v_ivv_{i+1}$ (where $v_5=v_1$). Since $G$ contains no $4$-cycles, we can without loss of generality assume that $f_2$ and $f_4$ are $6^+$-faces. If $f_1$ and $f_3$ are $3$-faces, then $v$ is type-II incident with $f_2$ and $f_4$, so $\text{ch}(v\to f_2)=\text{ch}(v\to f_4)=1$ by R4(II), and $\text{ch}(v)=2-2\times 1=0$. Hence, suppose that $f_3$ is a $6^+$-face.
Suppose now that $f_1$ is a $3$-face. If $v$ is neither type-I-1 nor type-I-2 incident with either $f_2$ or $f_4$, then $\text{ch}(v\to f_3)=0$ by R4(0) and $\text{ch}(v\to f_2)=\text{ch}(v\to f_4)=1$ by R4(I), and $\text{ch}(v)=2-2\times 1=0$. If $v$ is type-I-1 or type-I-2 incident with, say, $f_2$, then $\text{ch}(v\to f_3)=1/2$ by R4(0) and $\text{ch}(v\to f_2)=1/2$ and $\text{ch}(v\to f_4)\le 1$ by R4(I), and $\text{ch}(v)\ge 2-1-2\times 1/2=0$.
Finally, if $v$ is not incident with any $3$-faces, then $\text{ch}(v\to f_i)=1/2$ by R4(0) for $1\le i\le 4$ and $\text{ch}(v)=2-4\times 1/2=0$. \end{proof}
\begin{lemma}\label{charge-5vertex} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be a vertex of $G$. If $v$ is a $5$-vertex, then $\text{ch}(v)\ge 0$. \end{lemma} \begin{proof} Note that $\text{ch}_0(v)=4$. Let $v_1$, \ldots, $v_5$ be the neighbors of $v$ listed in cyclic order according to their drawing in $G$. For $1\le i\le 5$, let $f_i$ be the face whose boundary contains the path $v_ivv_{i+1}$ (where $v_6=v_1$). Since $G$ contains no $4$-cycles, we can without loss of generality assume that $f_2$, $f_4$, and $f_5$ are $6^+$-faces.
Suppose first that $f_1$ and $f_3$ are $3$-faces. Then $v$ is type-II incident with $f_2$ and type-I incident with $f_4$ and $f_5$. Note that $v$ cannot be type-I-4 incident with $f_4$ and $f_5$. If $v$ is not type-I-3 incident with $f_4$ and $f_5$, then $\text{ch}(v\to f_2)=2$ by R5(II) and $\text{ch}(v\to f_4)=\text{ch}(v\to f_5)=1$ by R5(I), and thus $\text{ch}(v)=4-2-2\times 1=0$. If $v$ is type-I-3 incident with $f_4$ or $f_5$, then $\text{ch}(v\to f_2)=1$ by R5(II) and $\text{ch}(v\to f_4),\text{ch}(v\to f_5)\le 3/2$ by R5(I), and thus $\text{ch}(v)\ge 4-1-2\times 3/2=0$.
Hence, we can assume that $f_3$ is a $6^+$-face. Suppose now that $f_1$ is a $3$-face. If $v$ is type-I-3 or type-I-4 incident with neither $f_2$ nor $f_5$, then $\text{ch}(v\to f_i)\le 1$ by R5(I) and R5(0) for $2\le i\le 5$, and $\text{ch}(v)\ge 4-4\times 1=0$. If $v$ is type-I-3 incident with $f_2$, then $v_4$, $v_5$, and $v_1$ are $4^+$-vertices by Lemma~\ref{5-vertex3}, and thus $v$ is neither type-I-3 nor type-I-4 incident with $f_5$ and $v$ is neither type-0-1 nor type-0-2 incident with $f_4$. If $v$ is type-I-4 incident with $f_2$, then $v_3$, $v_5$, and $v_1$ are $4^+$-vertices by Lemma~\ref{5-vertex3}, and thus $v$ is neither type-I-3 nor type-I-4 incident with $f_5$ and $v$ is neither type-0-1 nor type-0-2 incident with $f_3$ and $f_4$. In either case, $\text{ch}(v\to f_2)=3/2$ and $\text{ch}(v\to f_5)=1$ by R5(I) and $\text{ch}(v\to f_4)\le 1/2$ and $\text{ch}(v\to f_3)\le 1$ by R5(0), and $\text{ch}(v)\ge 4-3/2-2\times 1-1/2=0$.
Finally, let us consider the case that $v$ is incident with no $3$-faces. If $v$ is type-0-1 or type-0-2 incident with at most three faces, then $\text{ch}(v)\ge 4-3\times 1-2\times 1/2=0$ by R5(0). Hence, suppose that $v$ is type-0-1 or type-0-2 incident with at least 4 faces. By Lemma~\ref{5-vertex}, $v$ is adjacent to at most three $3$-vertices. If $v$ is adjacent to three $3$-vertices, then by Lemma~\ref{5-vertex3} $v$ is not type-0-2 incident with any faces, and clearly $v$ is type-0-1 incident with at most two faces, which is a contradiction. Hence, $v$ is adjacent to at most two $3$-vertices, and by symmetry, we can assume that $v_5$ and $v_1$ are $4^+$-vertices. Then $v$ is neither type-0-1 nor type-0-2 incident with $f_5$, and thus it is type-0-1 or type-0-2 incident with $f_1$, \ldots, $f_4$. It cannot be type-0-1 incident with $f_1$ and $f_4$, and thus it is type-0-2 incident with these faces; i.e., $v_2$ and $v_4$ are $3$-vertices and have $3$-vertex neighbors $x_2$ and $x_4$ incident with $f_1$ and $f_4$. By Lemma~\ref{5-vertex3}, $v_3$ is a $4^+$-vertex. Consequently, $v$ is also type-0-2 incident with $f_2$ and $f_3$, and thus $v_2$ and $v_4$ have $3$-vertex neighbors $x'_2$ and $x'_4$ incident with $f_2$ and $f_3$. By Lemma~\ref{33-path}, $x_2x'_2\in E(G)$ and $x_4x'_4\in E(G)$. But then $v$ is type-0-3 incident with $f_5$ and $\text{ch}(v\to f_5)=0$ by R5(0); and thus $\text{ch}(v)=4-4\times 1=0$. \end{proof}
\begin{lemma} \label{vertex} Let $(G,L,Z)$ be a minimal counterexample and let $v$ be a vertex of $G$. If $v$ is internal, then $\text{ch}(v)\ge 0$. If $v\in Z$, then $\text{ch}(v)=\deg(v)-6$. \end{lemma} \begin{proof} By Lemma~\ref{MD}, if $v$ is internal then $\deg (v)\geq 3$. If $v$ is a $3$-vertex, then $\text{ch}(v)=\text{ch}_0(v)=0$. If $v$ is a $4$- or $5$-vertex, then $\text{ch}(v)\ge 0$ by Lemmas~\ref{charge-4vertex} and \ref{charge-5vertex}.
Hence, suppose that $v$ is a $6^+$-vertex, incident with $t$ $3$-faces, and type-II, type-I and type-0 incident with $d_{II}$, $d_I$, and $d_0$ $6^+$-faces, respectively. Note that $d_{II}+d_I/2=t$. By R6, we have \begin{align*} \text{ch}(v)&=\text{ch}_0(v)-d_{II}\times 2-d_I\times 3/2-d_0\\ &=\text{ch}_0(v)-(d_{II}+d_I+d_0+t)=\text{ch}_0(v)-\deg(v)=\deg(v)-6. \end{align*} If $v$ is internal, then $\deg(v)\ge 6$, and thus $\text{ch}(v)\ge 0$. \end{proof}
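To illustrate the computation, consider a hypothetical internal $6$-vertex $v$ incident with two $3$-faces, type-II incident with one $6^+$-face, type-I incident with two $6^+$-faces, and type-0 incident with the remaining $6^+$-face. Then $t=2$, $d_{II}=1$, $d_I=2$, and $d_0=1$, so indeed $d_{II}+d_I/2=2=t$, and
$$\text{ch}(v)=6-2\cdot 1-\tfrac{3}{2}\cdot 2-1\cdot 1=0=\deg(v)-6,$$
in accordance with the computation above.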
\subsection{Final charge of faces}
Let $f$ be a $6^+$-face. A subpath $S=u_0u_1\ldots u_t$ of the cycle bounding $f$ with at least two vertices is called a \emph{segment} of $f$ if $u_1$, \ldots, $u_{t-1}$ are type-II incident with $f$, and $u_0$ and $u_t$ are type-I incident with $f$. In particular, for $1\le i\le t$, the edge $u_{i-1}u_i$ is incident with a $3$-face. Note that the segments of $f$ are pairwise vertex-disjoint. Let us define $\text{ch}(S)=-t+\sum_{i=0}^t \text{ch}(u_i\to f)$; note that $\text{ch}(S)$ denotes the amount of charge received by $f$ from vertices of the segment, minus the amount sent by the rule Rt to $3$-faces incident with edges of $S$. If $\text{ch}(S)<0$, then we say $S$ is a \emph{negative segment}. A \emph{$t$-segment} is a segment with $t$ edges.
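For example, if $S=u_0u_1u_2$ is a $2$-segment of $f$ in which $u_0$ is a $3$-vertex, $u_1$ is a $4$-vertex (type-II incident with $f$), and $u_2$ is a $4$-vertex type-I-1 incident with $f$, then by R4(II) and R4(I),
$$\text{ch}(S)=-2+0+1+\tfrac{1}{2}=-\tfrac{1}{2},$$
and thus $S$ is a negative segment.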
\begin{proposition} \label{segment} Let $(G,L,Z)$ be a minimal counterexample. Let $S=u_0\ldots u_t$ be a negative $t$-segment of a $6^+$-face $f$ of $G$, where $\deg(u_0)\le \deg(u_t)$. Then all vertices of $S$ are internal, and either \begin{itemize} \item both $u_0$ and $u_t$ are $3$-vertices and $\text{ch}(S)=-1$, or \item $t\ge 2$, $u_0$ is a $3$-vertex, $u_t$ is a $4$-vertex type-I-1 or type-I-2 incident with $f$, and $\text{ch}(S)=-1/2$. \end{itemize} Furthermore, if either $t\le 3$, or $t\le 5$ and $\text{ch}(S)=-1$, then $u_1$, \ldots, $u_{t-1}$ are $4$-vertices. \end{proposition} \begin{proof} Let $\beta_0=\beta_t=0$ and $\beta_i=1$ for $1\le i \le t-1$. Note that $\text{ch}(S)=-1+\sum_{i=0}^t (\text{ch}(u_i\to f)-\beta_i)$. For $1\le i\le t-1$, the edges $u_{i-1}u_i$ and $u_iu_{i+1}$ are incident with $3$-faces, and thus $u_i$ is a $4^+$-vertex type-II incident with $f$; hence, we have $\text{ch}(u_i\to f)\ge 1$ by R4(II), R5(II), and R6(II). Consequently, $\text{ch}(u_i\to f)-\beta_i\ge 0$ for $0\le i\le t$ and $\text{ch}(S)\ge -1$. Since $S$ is negative, we have $\text{ch}(u_i\to f)<\beta_i+1$ for $0\le i\le t$, and by R6(II), R6(I), R5(I), and R4(I), we conclude that all vertices of $S$ are internal of degree at most $5$, both $u_0$ and $u_t$ have degree at most $4$, and if they are $4$-vertices, then they are type-I-1 or type-I-2 incident with $f$. Furthermore, by R5(II), if $u_i$ is a $5$-vertex for some $i\in\{1,\ldots, t-1\}$, then $u_i$ is type-I-3 incident with another $6^+$-face, so that $\text{ch}(u_i\to f)=\beta_i$. By R4(I), if $u_i$ is a $4$-vertex for some $i\in\{0,t\}$, then $\text{ch}(u_i\to f)=\beta_i+1/2$, and thus either both $u_0$ and $u_t$ are $3$-vertices and $\text{ch}(S)=-1$, or $u_0$ is a $3$-vertex and $u_t$ is a $4$-vertex and $\text{ch}(S)=-1/2$. In the latter case, $u_t$ is type-I-1 or type-I-2 incident with $f$, and in particular $u_{t-1}$ is a $4^+$-vertex, and consequently $t\ge 2$.
Suppose now that some vertex $u_i$ with $i\in\{1,\ldots,t-1\}$ is a $5$-vertex; as we observed, $u_i$ is type-I-3 incident with another $6^+$-face. By Lemma~\ref{5-vertex3}, we have $i\ge 2$, and by Lemma~\ref{5-vertex43}, we have $i\ge 3$. If $\text{ch}(S)=-1$, then $u_t$ is a $3$-vertex, and a symmetric argument shows that neither $u_{t-1}$ nor $u_{t-2}$ is a $5$-vertex. Consequently, if either $t\le 3$, or $t\le 5$ and $\text{ch}(S)=-1$, then $u_1$, \ldots, $u_{t-1}$ are $4$-vertices. \end{proof}
We say two segments of the same $6^+$-face $f$ are \emph{adjacent} if an edge of the cycle bounding $f$ joins their ends.
\begin{proposition} \label{charge} Let $(G,L,Z)$ be a minimal counterexample and let $f$ be a $6^+$-face of $G$. The following propositions hold. \begin{itemize} \item[(1)] If $S$ is a segment of $f$ adjacent to a negative $1$-segment, then $\text{ch}(S)\ge 0$. Additionally, if $S$ is a $1$-segment, then $\text{ch}(S)\ge 1/2$. \item[(2)] Suppose $uvw$ is a subpath of the cycle bounding $f$, where $uv$ is incident with a $3$-face. If $w$ is type-0 incident with $f$ and $v$ is a $4^+$-vertex, then $\text{ch}(v\to f)+\text{ch}(w\to f)\geq 1$ (see Figure~\ref{fig-charge}(a)). \item[(3)] Suppose $uvw$ is a subpath of the cycle bounding $f$, where both $v$ and $w$ are type-0 incident with $f$ and $v$ is a $4$-vertex. Let $u$, $w$, $x$, $y$ be the neighbors of $v$ listed in cyclic order according to their drawing in $G$. If $u$ is a $3$-vertex, $w$ is a $5^+$-vertex, and both $x$ and $y$ are $4^+$-vertices, then $\text{ch}(v\to f)+\text{ch}(w\to f)\geq 1$ (see Figure~\ref{fig-charge}(b)). \item[(4)] Suppose $uvwx$ is a subpath of the cycle bounding $f$, where $uv$ is incident with a $3$-face and $u$ and $v$ are $3$-vertices. If $wx$ is incident with a $3$-face, then let $T=\{w\}$, otherwise let $T=\{w,x\}$. Then either $\sum_{z\in T} \text{ch}(z\to f)\ge 1$ or both $w$ and $x$ are $4$-vertices type-0 incident with $f$. \end{itemize}
\end{proposition} \begin{proof} Let us prove the claims separately. \begin{itemize} \item[(1)] Let $S=u_0\ldots u_t$, and let $S'=v_0v_1$ be a negative $1$-segment adjacent to $S$; say $v_1u_0$ is an edge of the cycle bounding $f$. By Proposition~\ref{segment}, both $v_0$ and $v_1$ are $3$-vertices. Note that $u_0$ is not adjacent to $v_0$, since all triangles in $G$ bound faces and $\deg(v_1)>2$. By Lemma~\ref{33-path}, $u_0$ is a $4^+$-vertex. Since $v_1$ is a $3$-vertex, $u_0$ is neither type-I-1 nor type-I-2 incident with $f$, and thus $\text{ch}(S)\ge 0$ by Proposition~\ref{segment}.
Suppose now that $t=1$. If $\text{ch}(u_0\to f)\ge 3/2$, then $\text{ch}(S)\ge \text{ch}(u_0\to f)-1=1/2$. Hence, we can assume that $\text{ch}(u_0\to f)<3/2$, and thus by R6(I), $u_0$ is not a $6^+$-vertex. Since $u_0$ is neither type-I-1 nor type-I-2 incident with $f$, R4(I) and R5(I) imply $\text{ch}(u_0\to f)=1$. If $u_0$ is a $5$-vertex, this by R5(I) implies that $u_0$ is not type-I-3 incident with $f$, and thus $u_1$ is a $4^+$-vertex. If $u_0$ is a $4$-vertex, then by Lemma~\ref{33-path}, we again conclude that $u_1$ is a $4^+$-vertex. In either case, R4(I), R5(I), and R6(I) imply $\text{ch}(u_1\to f)\ge 1/2$, and thus $\text{ch}(S)=\text{ch}(u_0\to f)+\text{ch}(u_1\to f)-1\ge 1/2$.
\item[(2)] By R4(I), R5(I), and R6(I), we have $\text{ch}(v\to f)\ge 1$, unless $v$ is a $4$-vertex type-I-1 or type-I-2 incident with $f$. If $v$ is type-I-1 or type-I-2 incident with $f$, then $\text{ch}(v\to f)=1/2$ and $w$ is a $5^+$-vertex. Note that in this case $w$ is not type-0-3 incident with $f$ (this is clear if $v$ is type-I-2 incident with $f$, and follows by Lemma~\ref{5-vertex43} if $v$ is type-I-1 incident with $f$). Hence, $\text{ch}(w\to f)\ge 1/2$ by R5(0) and R6(0), and $\text{ch}(v\to f)+\text{ch}(w\to f)\geq 1$.
\item[(3)] If $w$ is a $6^+$-vertex, then $\text{ch}(w\to f)=1$ by R6(0). Hence, assume that $w$ is a $5$-vertex. By Lemma~\ref{5-vertex43}, $w$ is not type-0-3 incident with $f$, and thus $\text{ch}(w\to f)\ge 1/2$ by R5(0). If $xy\in E(G)$, then consider the face $g$ whose boundary contains the path $wvx$, and observe that $v$ is type-I-1 incident with $g$. If $xy\not\in E(G)$, then $v$ is not incident with any $3$-faces. In either case, $\text{ch}(v\to f)=1/2$ by R4(0), and thus $\text{ch}(v\to f)+\text{ch}(w\to f)\ge 1$.
\item[(4)] By Lemma~\ref{33-path}, $w$ is a $4^+$-vertex. If $w$ is a $5$-vertex, then it is either type-I or type-0-2 incident with $f$. Hence, if $w$ is a $5^+$-vertex, then $\text{ch}(w\to f)\ge 1$ by R5(I), R5(0), and R6. Consequently, we can assume that $w$ is a $4$-vertex. If $wx$ is incident with a $3$-face, then note that $w$ is neither type-I-1 nor type-I-2 incident with $f$, and $\text{ch}(w\to f)=1$ by R4(I). Hence, assume that $wx$ is not incident with a $3$-face. By Lemma~\ref{33-path}, $x$ is a $4^+$-vertex. If $x$ is a $5^+$-vertex, then note that $w$ has no $3$-vertex neighbors other than $v$ by Lemma~\ref{33-path}, and thus $\text{ch}(w\to f)+\text{ch}(x\to f)\geq 1$ by (3). If $x$ is a $4$-vertex type-I incident with $f$, then note that $x$ is neither type-I-1 nor type-I-2 incident with $f$, and thus $\text{ch}(x\to f)=1$ by R4(I). Therefore, either $\sum_{z\in T} \text{ch}(z\to f)\ge 1$ or both $w$ and $x$ are $4$-vertices type-0 incident with $f$. \end{itemize} \end{proof}
\begin{figure}
\caption{Configurations from Proposition~\ref{charge}.}
\label{fig-charge}
\end{figure}
\begin{lemma}\label{6face} Let $(G,L,Z)$ be a minimal counterexample and let $f$ be a face of $G$. If $f$ has length $6$, then $\text{ch}(f)\ge 0$. \end{lemma} \begin{proof} Note that $\text{ch}_0(f)=0$. Let $v_1\ldots v_6$ be the cycle bounding $f$. If all faces that share edges with $f$ are $3$-faces, then $v_1$, \ldots, $v_6$ are $4^+$-vertices type-II incident with $f$, and thus $\text{ch}(f)\ge 6\times 1-6\times 1=0$ by R4(II), R5(II), R6(II), and Rt. Hence, we can assume that $f$ shares an edge with a $6^+$-face. Hence, each edge which $f$ shares with a $3$-face is contained in a segment. Let $S_1$, \ldots, $S_k$ be segments of $f$. By Rt, we have $\text{ch}(f)\ge \sum_{i=1}^k \text{ch}(S_i)$, and thus we can assume that say $S_1$ is a negative segment. We can label the vertices so that $S_1=v_1\ldots v_m$ for some $m\ge 2$.
Let us first consider the case that $\text{ch}(S)\ge -1/2$ for every segment $S$ of $f$. By Proposition~\ref{segment}, we have $\text{ch}(S_1)=-1/2$, $m\ge 3$, and we can assume that $v_1$ is a $3$-vertex and $v_m$ is a $4$-vertex type-I-1 or type-I-2 incident with $f$. Consequently, $v_{m+1}$ is a $5^+$-vertex (and in particular $m\le 5$). If $v_{m+1}$ is type-0 incident with $f$, then $f$ cannot have a negative segment other than $S_1$ (as it would be a $1$-segment, which cannot have charge at least $-1/2$ by Proposition~\ref{segment}). Furthermore, $\text{ch}(v_m\to f)+\text{ch}(v_{m+1}\to f)\ge 1$ by Proposition~\ref{charge}(2), and thus $\text{ch}(f)\ge (\text{ch}(S_1)-\text{ch}(v_m\to f))+(\text{ch}(v_m\to f)+\text{ch}(v_{m+1} \to f))\ge -1+1=0$. Hence, we can assume that $v_{m+1}$ is type-I incident with $f$ and starts a segment $S_2=v_{m+1}\ldots v_i$ for some $i\in \{5,6\}$. We have $\text{ch}(v_{m+1}\to f)\ge 1$ by R5(I) and R6(I), and if $v_i$ is a $4^+$-vertex, then $\text{ch}(v_i\to f)\ge 1/2$ by R4(I), R5(I), and R6(I), and thus $\text{ch}(S_2)\ge 1/2$, and $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(S_2)\ge 0$. Hence, suppose that $v_i$ is a $3$-vertex. Note that $m\le 4$, and thus $v_2$, \ldots, $v_{m-1}$ are $4$-vertices by Proposition~\ref{segment}. If $m=3$, then by Lemmas~\ref{33-path} and \ref{34-path}, we conclude that $i=5$ and $v_6$ is a $5^+$-vertex, not type-0-3 incident with $f$ by Lemma~\ref{5-vertex3}. Hence, $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(S_2)+\text{ch}(v_6\to f)\ge -1/2+0+1/2=0$ by R5(0) and R6(0). Therefore, we can assume that $m=4$, and thus $i=6$. By Lemma~\ref{33-path}, $v_4$ is not type-I-1 incident with $f$, and thus it is type-I-2 incident; i.e., $f$ shares the edge $v_4v_5$ with a $6$-face bounded by a cycle $v_4v_5w_1yx_1x$, where the edge $w_1y$ is incident with a $3$-face bounded by cycle $w_1yy_2$, $x_1$ and $w_1$ are $3$-vertices and $y$ is a $4$-vertex. Note that $y_2$ is a $4^+$-vertex by Lemma~\ref{4-vertex}. 
Consequently, $v_5$ is either a $6^+$-vertex, or a $5$-vertex type-I-4 incident with $f$, and $\text{ch}(v_5\to f)=3/2$ by R5(I) and R6(I). Consequently, $\text{ch}(S_2)=1/2$ and $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(S_2)=0$.
Let us now consider the case that $f$ is incident with a segment $S$ with $\text{ch}(S)<-1/2$, say $S=S_1$. By Proposition~\ref{segment}, $\text{ch}(S_1)=-1$ and both $v_1$ and $v_m$ are $3$-vertices. Since $m\le 6$, Proposition~\ref{segment} also implies that $v_2$, \ldots, $v_{m-1}$ are $4$-vertices. By Lemma~\ref{6-cycle}, we have $m\le 5$. If $m=5$, then by Lemmas~\ref{33-path} and \ref{6-cycle}, $v_6$ is a $5^+$-vertex. Note that if $v_6$ is a $5$-vertex, then it is type-0-1 incident with $f$. Hence, $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(v_6\to f)=-1+1=0$ by R5(0) and R6(0). Let us distinguish cases depending on whether $m$ is $2$, $3$, or $4$.
\textbf{Case $m=4$:} By Lemmas~\ref{33-path} and \ref{6-cycle}, we can assume that $v_5$ is a $4^+$-vertex and $v_6$ is a $5^+$-vertex. If $v_5v_6$ is incident with a $3$-face (and thus $v_5v_6$ is a segment $S_2$), then note that $v_5$ is not type-I-1 or type-I-2 incident with $f$, and thus $\text{ch}(v_i\to f)\ge 1$ for $i\in\{5,6\}$ by R4(I), R5(I), and R6(I), $\text{ch}(S_2)\ge 1$, and $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(S_2)\ge -1+1=0$. Hence, suppose that $v_5$ and $v_6$ are type-0 incident with $f$. Note that neither $v_5$ nor $v_6$ is type-0-3 incident with $f$ by Lemma~\ref{5-vertex3}, and thus $\text{ch}(v_5\to f)+\text{ch}(v_6\to f)\ge 1$ by R5(0) and R6(0) unless $v_5$ is a $4$-vertex. If $v_5$ is a $4$-vertex, then note that $v_4$ is the only $3$-vertex neighbor of $v_5$ by Lemma~\ref{34-path}, and thus $\text{ch}(v_5\to f)+\text{ch}(v_6\to f)\ge 1$ by Proposition~\ref{charge}(3). Hence, $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(v_5\to f)+\text{ch}(v_6\to f)\ge -1+1=0$.
\textbf{Case $m=3$:} By Lemma~\ref{33-path}, $v_4$ and $v_6$ are $4^+$-vertices. If $v_5$ is a $3$-vertex, then $v_4$ and $v_6$ are $5^+$-vertices by Lemma~\ref{33-path} and by symmetry we can assume that $v_4$ is type-0 incident with $f$. Then $S_1$ is the only negative segment of $f$ ($v_5v_6$ could be a $1$-segment, but by Proposition~\ref{charge}(1), a negative $1$-segment cannot be adjacent to $S_1$) and if $v_4$ is a $5$-vertex, then it is type-0-1 incident with $f$. Hence, $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(v_4\to f)\ge -1+1=0$ by R5(0) and R6(0). Consequently, we can assume that $v_5$ is also a $4^+$-vertex.
Suppose that $f$ is incident with a segment $S_2\neq S_1$. If $\text{ch}(S_2)\ge 1$, then $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(S_2)\ge -1+1=0$. Hence, we can assume that $\text{ch}(S_2)<1$. Note that neither $v_4$ nor $v_6$ is type-I-1 or type-I-2 incident with $f$, and thus by R4(I), R5(I), and R6(I), this is only possible if $v_5$ is an end of $S_2$ (say $S_2=v_5v_6$), $v_5$ is a $4$-vertex and $v_5$ is type-I-1 or type-I-2 incident with $f$. Then $\text{ch}(S_2)=1/2$ and $v_4$ is a $5^+$-vertex. By Lemma~\ref{5-vertex3}, $v_4$ is not type-0-3 incident with $f$, and thus $\text{ch}(v_4\to f)\ge 1/2$ by R5(0) and R6(0). Hence, $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(S_2)+\text{ch}(v_4\to f)\ge -1+1/2+1/2=0$.
Hence, we can assume that $S_1$ is the only segment of $f$. If $\text{ch}(v_4\to f)\ge 1/2$ and $\text{ch}(v_6\to f)\ge 1/2$, then $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(v_4\to f)+\text{ch}(v_6\to f)\ge -1+2\times 1/2=0$. Hence, by symmetry we can assume that $\text{ch}(v_4\to f)<1/2$. By Lemma~\ref{5-vertex3}, $v_4$ is not type-0-3 incident with $f$, and thus by R4(0), R5(0), and R6(0), we conclude that $v_4$ is a $4$-vertex. By Lemma~\ref{34-path}, $v_3$ is the only $3$-vertex neighbor of $v_4$. If $v_5$ is a $5^+$-vertex, then $\text{ch}(v_4\to f)+\text{ch}(v_5\to f)\ge 1$ by Proposition~\ref{charge}(3), and thus $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(v_4\to f)+\text{ch}(v_5\to f)\ge -1+1=0$. Hence, we can assume that $v_5$ is a $4$-vertex, and thus $v_6$ is a $5^+$-vertex by Lemma~\ref{6-cycle}. By Lemma~\ref{5-vertex3}, $v_6$ is not type-0-3 incident with $f$, and thus $\text{ch}(v_6\to f)\ge 1/2$ by R5(0) and R6(0). Let $f'$ denote the face with which $f$ shares the edge $v_5v_6$, and let $z\neq v_6$ be the neighbor of $v_5$ in the boundary cycle of $f'$. By Lemma~\ref{34-path}, $z$ is a $4^+$-vertex, and thus if $v_5$ is incident with a $3$-face, then $v_5$ is type-I-2 incident with $f'$. Consequently, $\text{ch}(v_5\to f)=1/2$ by R4(0), and $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(v_5\to f)+\text{ch}(v_6\to f)\ge -1+2\times 1/2=0$.
\textbf{Case $m=2$:} If $k=3$, then the segments of $f$ are $S_1$, $S_2=v_3v_4$ and $S_3=v_5v_6$. By Proposition~\ref{charge}(1), $\text{ch}(S_2),\text{ch}(S_3)\ge 1/2$, and thus $\text{ch}(f)=\text{ch}(S_1)+\text{ch}(S_2)+\text{ch}(S_3)=0$. Hence, we can assume that $k\le 2$. If $v_3$ is contained in a segment, then let $T_3=\{v_3\}$, otherwise let $T_3=\{v_3,v_4\}$. If $v_6$ is contained in a segment, then let $T_6=\{v_6\}$, otherwise let $T_6=\{v_5,v_6\}$. For $i\in\{3,6\}$, let $\gamma_i=\sum_{x\in T_i} \text{ch}(x\to f)$. By Lemma~\ref{6-cycle}, $v_3$, \ldots, $v_6$ cannot all be $4$-vertices, and thus by Proposition~\ref{charge}(4), we have $\max(\gamma_3,\gamma_6)\ge 1$. If $k=1$, then $\text{ch}(f)=\text{ch}(S_1)+\gamma_3+\gamma_6\ge -1+0+1=0$. Hence, we can assume that $k=2$. Let $S_2\neq S_1$ be the other segment of $f$, with ends $x$ and $y$, and let $\beta=\text{ch}(S_2)-\text{ch}(x\to f)-\text{ch}(y \to f)$. Observe that $\beta\ge -1$, and $\text{ch}(f)\ge \text{ch}(S_1)+\beta+\gamma_3+\gamma_6$. If $\gamma_3,\gamma_6\ge 1$, we have $\text{ch}(f)\ge 0$. Hence, by symmetry we can assume that $\gamma_3<1$, and by Proposition~\ref{charge}(4), $v_3$ and $v_4$ are $4$-vertices type-0 incident with $f$, and thus $S_2=v_5v_6$. By Lemma~\ref{33-path}, $v_5$ and $v_6$ are $4^+$-vertices, and clearly neither of them is type-I-1 or type-I-2 incident with $f$. By R4(I), R5(I), and R6(I), we have $\text{ch}(v_i\to f)\ge 1$ for $i\in \{5,6\}$, and thus $\text{ch}(S_2)\ge 1$. Consequently, $\text{ch}(f)\ge \text{ch}(S_1)+\text{ch}(S_2)\ge 0$. \end{proof}
\begin{lemma}\label{7face} Let $(G,L,Z)$ be a minimal counterexample and let $f$ be a face of $G$. If $f$ has length at least $7$, then $\text{ch}(f)\ge 0$. \end{lemma} \begin{proof} Let $C=v_1\ldots v_m$ be the cycle bounding $f$. If all faces that share edges with $f$ are $3$-faces, then $v_1$, \ldots, $v_m$ are $4^+$-vertices type-II incident with $f$, and thus $\text{ch}(f)\ge \text{ch}_0(f)+m\times 1-m\times 1\ge 0$ by R4(II), R5(II), R6(II), and Rt. Hence, we can assume that $f$ shares an edge with a $6^+$-face. Hence, each edge which $f$ shares with a $3$-face is contained in a segment. Let $S_1$, \ldots, $S_k$ be the segments of $f$, and let $n$ denote the number of them which are negative.
For a negative segment $S=v_iv_{i+1}\ldots v_s$, we say that $S$ \emph{owns} the edges of the path $S$, the edge $v_sv_{s+1}$, and if $S$ is a $1$-segment, then also the edge $v_{i-1}v_i$ (with all indices taken cyclically modulo $m$). Note that by Proposition~\ref{segment} and Lemma~\ref{33-path}, each edge of $C$ is owned by at most one negative segment, and each negative segment owns at least three edges. Consequently, $n\le \lfloor m/3\rfloor$, and $$\text{ch}(f)\ge \text{ch}_0(f)+\sum_{i=1}^k \text{ch}(S_i)\ge \text{ch}_0(f)-n\ge m-6-\lfloor m/3\rfloor.$$ It follows that if $m\ge 8$, then $\text{ch}(f)\ge 0$.
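Indeed, since $\lfloor m/3\rfloor+\lceil 2m/3\rceil=m$ for every integer $m$, the last bound can be rewritten as $$m-6-\lfloor m/3\rfloor=\left\lceil 2m/3\right\rceil-6,$$ which is nonnegative precisely when $m\ge 8$ (with equality for $m\in\{8,9\}$).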
Hence, suppose that $|f|=7$, and thus $n\le 2$. If $f$ has at most one negative segment, or two negative segments of charge at least $-1/2$, then $\text{ch}(f)\ge \text{ch}_0(f)-1=0$. Hence, we can assume that $f$ has two negative segments $S_1$ and $S_2$ and $\text{ch}(S_1)<-1/2$. By Proposition~\ref{segment}, $\text{ch}(S_1)=-1$, and we can assume $S_1=v_1\ldots v_s$ for some $s\le 5$, $v_1$ and $v_s$ are $3$-vertices, and $v_2$, \ldots, $v_{s-1}$ are $4$-vertices. By Lemma~\ref{33-path}, $v_{s+1}$ and $v_7$ are $4^+$-vertices; furthermore, they are clearly neither type-I-1 nor type-I-2 incident with $f$. Since $S_2$ is negative, Proposition~\ref{segment} implies that $v_{s+1},v_7\not\in V(S_2)$, and thus $s\le 3$. If $\text{ch}(S_2)>-1$, then by Proposition~\ref{segment} we conclude that $s=2$ and $S_2$ is the $2$-segment $v_4v_5v_6$, and one end of $S_2$ is a $3$-vertex; otherwise, Proposition~\ref{segment} implies that both ends of $S_2$ are $3$-vertices. By symmetry, we can assume that $v_6$ is a $3$-vertex belonging to $S_2$. By Lemma~\ref{34-path}, $v_7$ is a $5^+$-vertex. Note that if $v_7$ is a $5$-vertex, then it is type-0-1 incident with $f$, and thus $\text{ch}(v_7\to f)=1$ by R5(0) and R6(0). Hence, $\text{ch}(f)\ge \text{ch}_0(f)+\text{ch}(S_1)+\text{ch}(S_2)+\text{ch}(v_7\to f)\ge 1-2\times 1+1=0$. \end{proof}
\begin{lemma} \label{face} Let $(G,L,Z)$ be a minimal counterexample. Then every face $f$ of $G$ satisfies $\text{ch}(f)\ge 0$. \end{lemma} \begin{proof} If $f$ is a $6^+$-face, this follows from Lemmas~\ref{6face} and \ref{7face}. If $f$ is a $3$-face, then note that $f$ only shares edges with $6^+$-faces by the absence of $4$- and $5$-cycles, and thus $\text{ch}(f)=\text{ch}_0(f)+3\times 1=0$ by Rt. \end{proof}
\section{$(11:3)$-colorability of planar graphs}
We are now ready to prove our main result.
\begin{proof}[Proof of Theorem~\ref{MTl}] Suppose for a contradiction that there exists a plane graph $G_0$ without $4$- or $5$-cycles and an assignment $L_0$ of lists of size 11 to vertices of $G_0$ such that $G_0$ is not $(L_0:3)$-colorable. Let $z$ be any vertex of $G_0$, let $L_0'(z)$ be any $3$-element subset of $L_0(z)$, and let $L'_0(v)=L_0(v)$ for all $v\in V(G_0)\setminus\{z\}$. Then $G_0$ is not $(L'_0:3)$-colorable, and thus $(G_0, L'_0,\{z\})$ is a counterexample.
Therefore, there exists a minimal counterexample $(G,L,Z)$. Let $\text{ch}$ be the assignment of charges to vertices and faces of $G$ obtained from the initial charge $\text{ch}_0$ as described in Section~\ref{sec-discharge}. By (\ref{eq:sum}), the fact that the total amount of charge does not change by its redistribution, and Lemmas~\ref{vertex} and \ref{face}, we have $$-12=\sum_{v\in V(G)}\text{ch}_0(v)+\sum_{f\in F(G)} \text{ch}_0(f)=\sum_{v\in V(G)}\text{ch}(v)+\sum_{f\in F(G)} \text{ch}(f)\ge \sum_{z\in Z} (\deg(z)-6).$$
Since $|Z|\le 3$ and $\deg(z)\ge 2$ for all $z\in Z$ by Lemma~\ref{conn}, we conclude that $|Z|=3$ and all vertices of $Z$ have degree two. But since $G$ is connected and $G[Z]$ is a triangle, this implies that $V(G)=Z$, and thus $G$ is $(L:3)$-colorable. This is a contradiction. \end{proof}
\section*{Acknowledgments}
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement n.616787. Xiaolan Hu is partially supported by NSFC under grant number 11601176 and NSF of Hubei Province under grant number 2016CFB146.
\end{document}
\begin{document}
\begin{center} \Large {\bfseries Uniqueness of W*-tensor Products}
\\
\end{center} \begin{flushright} \large \bfseries Corneliu Constantinescu \end{flushright}
\begin{abstract} In contrast to C*-algebras, distinct C*-norms on the algebraic tensor product of two W*-algebras produce isomorphic W*-tensor products. \end{abstract}
AMS classification code: 46L06, 46L10
Keywords: W*-tensor products
We use the notation and terminology of [C]. For W*-tensor products of W*-algebras we use [T].
In the sequel we give a list of some notation used in this paper. \renewcommand{\labelenumi}{\arabic{enumi}.} \begin{enumerate} \item $\ensuremath{\mathrm{I\! K}}$ denotes the field of real or the field of complex numbers. The whole theory is developed in parallel for the real and complex case (but the proofs coincide).
\item If $f$ is a map defined on a set $X$ and $Y$ is a subset of $X$ then $f|Y$ denotes the restriction of $f$ to $Y$. \item If $E,F$ are vector spaces in duality then $E_F$ denotes the vector space $E$ endowed with the locally convex topology of pointwise convergence on $F$, i.e. with the weak topology $\sigma (E,F)$. \item If $E$ is a Banach space then $E'$ denotes its dual, $E''$ its bi-dual, and $E^{\#}$ its unit ball:
$$ E^{\#}:= \me{x\in E}{\| x\| \leq 1}.$$ We put for every $x\in E$, $$\ma{j_Ex}{E'}{\ensuremath{\mathrm{I\! K}}}{x'}{\sa{x}{x'}}$$ and call the map $\mb{j_E}{E}{E''}$ the evaluation map of $E$. If $F$ is a vector subspace of $E$ then we set
$$F^0:=\me{x'\in E'}{x'|F=0}. $$ If $F$ is a vector subspace of $E'$ then we define $$^0F:=\me{x\in E}{x'\in F\Longrightarrow \sa{x}{x'}=0}.$$ \item Let $E,F$ be Banach spaces and let $\mb{\varphi }{E}{F}$ be a continuous linear map. $\varphi $ is called an isometry if it preserves the norms and if it is surjective. We put $Im\,\varphi :=\varphi (E)=\me{\varphi x}{x\in E}$ and denote by $$\ma{\varphi '}{F'}{E'}{y'}{y'\circ \varphi }$$ the transpose of $\varphi $. \item If $E$ is a C*-algebra then we denote by $Pr\,E$ the set of orthogonal projections of $E$. If in addition $E$ is unital then we denote by $1_E$ its unit. If $F$ is a subset of $E$ then we put $$F^c:=\me{x\in E}{y\in F\Longrightarrow xy=yx}.$$ We put for all $(x,x')\in E\times E'$ $$\ma{xx'}{E}{\ensuremath{\mathrm{I\! K}}}{y}{\sa{yx}{x'}},$$ $$\ma{x'x}{E}{\ensuremath{\mathrm{I\! K}}}{y}{\sa{xy}{x'}};$$ then $xx',x'x\in E'$. \item If $E$ is a W*-algebra then $\ddot{E} $ denotes its predual. \item $\odot$ denotes the algebraic tensor product. If $E,F$ are C*-algebras and if $\alpha $ is a C*-norm on $E\odot F$ then $E\otimes _\alpha F$ denotes the C*-algebra obtained by the completion of $E\odot F$ with respect to $\alpha $. \end{enumerate}
\renewcommand{\labelenumi}{\alph{enumi})}
\begin{p}\label{8085} Let $E$ be a W*-algebra and $F$ a closed vector subspace of $\ddot{E} $ such that $xF\subset F$ and $Fx\subset F$ for all $x\in E$. \begin{enumerate} \item There is a $p\in Pr\, E^c$ such that $F=p\ddot{E} $ and $F^0=(1_E-p)E$. \item For every $x\in pE$ put $$\ma{\tilde{x} }{F}{\ensuremath{\mathrm{I\! K}}}{a}{\sa{x}{a}}.$$ Then $\tilde{x}\in F' $ for all $x\in pE$ and the map $$\map{pE}{F'}{x}{\tilde{x} }$$ is an isometry of Banach spaces. \end{enumerate} \end{p}
a) follows from [T] Theorem III.2.7 c).
b) For $a\in \ddot{E} $, $$\sa{x}{a}=\sa{px}{a}=\sa{x}{ap}=\sa{\tilde{x} }{ap},$$ so $\tilde{x}\in F' $ and $\n{x}=\n{\tilde{x} }$. Let $a'\in F'$ and put $$\ma{y}{\ddot{E} }{\ensuremath{\mathrm{I\! K}}}{a}{\sa{a'}{pa}}.$$ Then $y\in E$ and for $a\in \ddot{E} $, $$\sa{py}{a}=\sa{y}{ap}=\sa{y}{a},$$ so $y=py\in pE$ and $\tilde{y}=a' $, i.e. the map is surjective.\qed
\begin{p}\label{8086} Let $E$ be a C*-algebra and $F$ a closed vector subspace of $E'$ such that $xF\subset F$ and $Fx\subset F$ for all $x\in E$. \begin{enumerate} \item There is a $p\in Pr\,(E'')^c$ such that $F=pE'$ and $F^0=(1_{E''}-p)E''$. \item The map
$$\map{pE''}{F'}{x''}{x''|F}$$ is an isometry of Banach spaces. \end{enumerate} \end{p}
By [C] Corollary 1.3.6.5, $Im\,j_E$ is dense in $E''_{E'}$ so $x''F\subset F$ and $Fx''\subset F$ for all $x''\in E''$. By [C] Theorem 6.3.2.1 b), $E''$ is a W*-algebra and the assertions follow from Proposition \ref{8085}.\qed
\begin{de}\label{8088} Let $E,F$ be W*-algebras. We define the (bilinear) maps $$\map{(E\odot F)\times (\ddot{E}\odot \ddot{F} )}{\ddot{E}\odot \ddot{F} }{(x\otimes y,a\otimes b)}{(x\otimes y)(a\otimes b):=(xa)\otimes (yb)},$$ $$\map{(\ddot{E}\odot \ddot{F} )\times (E\odot F)}{\ddot{E}\odot \ddot{F} }{(a\otimes b,x\otimes y)}{(a\otimes b)(x\otimes y):=(ax)\otimes (by)}.$$ and similarly the (bilinear) maps $$(E\odot F)\times (E'\odot F')\longrightarrow E'\odot F',$$ $$(E'\odot F')\times (E\odot F)\longrightarrow E'\odot F'.$$ \end{de}
\begin{p}\label{8087} Let $E,F$ be W*-algebras, $\alpha $ a C*-norm on $E\odot F$, and $$G:=\ddot{E}\bar{\otimes }_\alpha \ddot{F}$$ the closure of $Im\,j_{\ddot{E} }\odot Im\,j_{\ddot{F} }$ in $(E\otimes _\alpha F)'$. \begin{enumerate} \item There is a $p\in Pr\,((E\otimes _\alpha F)'')^c$ such that $G=p(E\otimes _\alpha F)'$ and such that the map
$$\map{p(E\otimes _\alpha F)''}{G'}{x''}{x''|G}$$ is an isometry of Banach spaces. \item If $\mb{j}{E\otimes _\alpha F}{(E\otimes _\alpha F)''}$ denotes the evaluation map then $Im\,j$ is a C*-subalgebra of $p(E\otimes _\alpha F)''$ generating it as a W*-algebra. \end{enumerate} \end{p}
a) For $(u,v),(x,y)\in E\times F$ and $(a,b)\in \ddot{E}\times \ddot{F} $, $$\sa{u\otimes v}{(x\otimes y)((j_{\ddot{E} }a)\otimes (j_{\ddot{F}}b))}=\sa{(u\otimes v)(x\otimes y)}{(j_{\ddot{E}}a)\otimes (j_{\ddot{F} }b)}=$$ $$=\sa{(ux)\otimes (vy)}{(j_{\ddot{E}}a)\otimes (j_{\ddot{F} }b)}=\sa{ux}{j_{\ddot{E} }a}\sa{vy}{j_{\ddot{F} }b}=$$ $$=\sa{ux}{a}\sa{vy}{b}=\sa{u}{xa}\sa{v}{yb} =\sa{u}{j_{\ddot{E} }(xa)}\sa{v}{j_{\ddot{F} }(yb)}=$$ $$=\sa{u\otimes v}{j_{\ddot{E}}(xa)\otimes j_{\ddot{F} }(yb)},$$ so $$(x\otimes y)((j_{\ddot{E}}a)\otimes (j_{\ddot{F} }b))=j_{\ddot{E}}(xa)\otimes j_{\ddot{F} }(yb)\in G.$$ It follows $(x\otimes y)G\subset G$ and $zG\subset G$ for all $z\in E\otimes _\alpha F$. Similarly $Gz\subset G$ for all $z\in E\otimes _\alpha F$. By Proposition \ref{8086}, there is a $p\in Pr\,((E\otimes _\alpha F)'')^c$ such that $G=p(E\otimes _\alpha F)'$ and such that the map
$$\map{p(E\otimes _\alpha F)''}{G'}{x''}{x''|G}$$ is an isometry of Banach spaces.
b) Let $(x,y)\in E\times F$ and $\varepsilon >0$. There are $a\in E^{\#}$ and $b\in F^{\#}$ such that $$\n{x}\n{y}-\varepsilon <\sa{x}{a}\sa{y}{b}=\sa{x}{j_{\ddot{E} }a}\sa{y}{j_{\ddot{F} }b}=$$
$$=\sa{x\otimes y}{(j_{\ddot{E} }a)\otimes (j_{\ddot{F} }b)}=\sa{j(x\otimes y)}{(j_{\ddot{E} }a)\otimes (j_{\ddot{F} }b)}\leq \n{j(x\otimes y)|G}.$$ Since $\varepsilon $ is arbitrary,
$$\n{j(x\otimes y)|G}=\n{x}\n{y},$$ so by a), $j(x\otimes y)\in p(E\otimes _\alpha F)''$. It follows $j(E\odot F)\subset p(E\otimes _\alpha F)''$ and $j(E\otimes _\alpha F)\subset p(E\otimes _\alpha F)''$. Let $$z\in (Im\,j_{\ddot{E}}\odot Im\,j_{\ddot{F} })\cap \,^0(j(E\otimes _\alpha F)).$$
Then $z|(E\otimes _\alpha F)=0$ and so $z=0$. Thus $$G\cap \,^0(j(E\otimes _\alpha F))=\{0\},$$ so by a), $$^0(j(E\otimes _\alpha F))\subset (1_{(E\otimes _\alpha F)''}-p)(E\otimes _\alpha F)',$$ $$p(E\otimes _\alpha F)''\subset (\,^0(j(E\otimes _\alpha F)))^0,$$ $$p(E\otimes _\alpha F)''= (\,^0(j(E\otimes _\alpha F)))^0.$$ Hence $p(E\otimes _\alpha F)''$ is the closure of $j(E\otimes _\alpha F)$ in $(E\otimes _\alpha F)''_{(E\otimes_ \alpha F)'}$ ([C] Proposition 1.3.5.4). By [C] Corollary 4.4.4.12 a), $p(E\otimes _\alpha F)''$ is the W*-subalgebra of $(E\otimes _\alpha F)''$ generated by $j(E\otimes _\alpha F)$.\qed
\begin{de}\label{8090} We put \emph{(with the notation of Proposition \ref{8087})} $$E\bar{\otimes }_\alpha F:=p(E\otimes _\alpha F)'' .$$ It is a W*-algebra with $\ddot{E}\bar{\otimes }_\alpha \ddot{F}$ as predual. \end{de}
\begin{theo}\label{8091} If $E,F$ are W*-algebras and $\alpha ,\beta $ are C*-norms on $E\odot F$ then $E\bar{\otimes }_\alpha F$ and $E\bar{\otimes }_\beta F$ are isomorphic. \end{theo}
We may assume $\alpha \leq \beta $. By [W] Proposition T.6.24, there is a surjective C*-homomorphism $$\mb{\varphi }{E\otimes _\beta F}{E\otimes _\alpha F}.$$ Then $$\mb{\varphi '}{(E\otimes _\alpha F)'}{(E\otimes _\beta F)'}$$ preserves the norms ([C] Proposition 1.3.5.2). Thus $\ddot{E}\bar{\otimes }_\alpha \ddot{F}=\ddot{E}\bar{\otimes }_\beta \ddot{F}$ and $\varphi ''$ is an isometry of Banach spaces. By [C] Corollary 6.3.2.3, $\varphi ''$ is a W*-isomorphism.\qed
\begin{center} {\bfseries REFERENCES} \end{center} \begin{flushleft} [C] Constantinescu, Corneliu, C*-algebras. Elsevier, 2001. \newline [T] Takesaki, Masamichi, Theory of Operator Algebras I. Springer, 2002. \newline [W] Wegge-Olsen, N. E., K-theory and C*-algebras. Oxford University Press, 1993. \newline \end{flushleft} \begin{flushright} {\scriptsize \hspace{-5mm} Corneliu Constantinescu$\quad$\\ Bodenacherstr. 53$\qquad\;$\\ CH 8121 Benglen$\qquad\;\;$\\ e-mail: constant@math.ethz.ch } \end{flushright}
\end{document}
\begin{document}
\title[Mod~$2$ homology for $GL(4)$ and Galois representations]{Mod~$2$ homology for $GL(4)$ and Galois representations}
\author{Avner Ash} \address{Boston College\\ Chestnut Hill, MA 02445} \email{Avner.Ash@bc.edu} \author{Paul E. Gunnells} \address{University of Massachusetts Amherst\\ Amherst, MA 01003} \email{gunnells@math.umass.edu} \author{Mark McConnell} \address{Princeton University\\ Princeton, New Jersey 08540} \email{markwm@princeton.edu}
\thanks{
AA wishes to thank the National Science Foundation for support of this research through NSF grant DMS-0455240, and also the NSA through grant H98230-09-1-0050. This manuscript is submitted for publication with the understanding that the United States government is authorized to produce and distribute reprints. PG wishes to thank the National Science Foundation for support of this research through NSF grants DMS-0801214 and DMS-1101640.}
\keywords{Cohomology of arithmetic groups, Galois representations, Voronoi complex, Steinberg module, modular symbols}
\subjclass{Primary 11F75; Secondary 11F67, 20J06, 20E42} \dedicatory{Dedicated to the memory of Steve Rallis} \begin{abstract} We extend the computations in \cite{AGM4} to find the mod~$2$ homology in degree $1$ of a congruence subgroup $\Gamma$ of $\mathrm{SL}(4,\mathbb{Z})$ with coefficients in the sharbly complex, along with the action of the Hecke algebra. This homology group is related to the cohomology of $\Gamma$ with $\mathbb{F}_2$ coefficients in the top cuspidal degree. These computations require a modification of the algorithm to compute the action of the Hecke operators, whose previous versions required division by $2$. We verify experimentally that every mod~$2$ Hecke eigenclass found appears to have an attached Galois representation, giving evidence for a conjecture in \cite{AGM4}. Our method of computation was justified in \cite{AGM5}. \end{abstract}
\maketitle
\section{Introduction}\label{intro}
\subsection{} This is a continuation of a series of papers \cite{AGM1, AGM2, AGM3,
AGM4, AGM5} devoted to the computation of the cohomology of congruence subgroups $\Gamma\subset \mathrm{SL}(4,\mathbb{Z})$ with constant coefficients, together with the action of the Hecke operators on the cohomology. We also investigated the representations of the absolute Galois group of $\mathbb{Q}$ that appear to be attached to Hecke eigenclasses in the cohomology. The papers \cite{AGM1, AGM2, AGM3} deal with complex coefficients, while \cite{AGM4, AGM5} deal with coefficients in a prime finite field $\mathbb{F}_p$, with $p$ odd. The current paper takes $p=2$. We concentrate on $H^5(\Gamma)$ because on the one hand, $H^5$ supports cuspidal cohomology with $\mathbb{C}$-coefficients, and on the other hand it is only one degree below the virtual cohomological dimension of $\Gamma$ and therefore amenable to an algorithm due to one of us (PG) for computing Hecke operators \cite{experimental}. Our next project will be to rewrite our code to deal with finite-dimensional twisted coefficients, which should lead to more interesting examples of attached Galois representations, aimed at testing the generalization of Serre's conjecture found in \cite{ADP} and in \cite{Herzig}.
As explained in \cite{AGM4}, when $p>5$, the $\mathbb{C}$- and mod~$p$ Betti numbers coincide. In this case we can compute the cohomology in terms of the Steinberg module and the sharbly complex, which is what we did. Namely, $H^5(\Gamma,K)\approx H_1(\Gamma,St\otimes K)\approx H_1(\Gamma,Sh_{\bullet }\otimes K)$. Here, $K=\mathbb{C}$ or $\mathbb{F}_p$, $\Gamma \subset \mathrm{SL}(4,\mathbb{Z})$ is a congruence subgroup, $St$ denotes the Steinberg module, and $Sh_{\bullet }$ the sharbly complex, whose definitions are recalled in Section~\ref{sh} below.
What we actually compute is the homology valued in the sharbly complex. In theory we could compute the mod~$p$ cohomology using a spectral sequence similar to that used by Soul\'e for $\mathrm{SL}(3,\mathbb{Z})$ in \cite{Soule}. However, even if we carried out this arduous task, we do not know how to compute the Hecke action on the resulting cohomology.
The method we use to compute $H_1(\Gamma,Sh_{\bullet }\otimes \mathbb{F}_2)$ is the same as in the previous papers. However, the algorithm in \cite{experimental} for the computation of the Hecke action had required division by~$2$, which prevented our treatment of mod~$2$ coefficients. Following a suggestion of Dan Yasaki, we overcome that problem in this paper.
\subsection{} The mod~$2$ homology is especially interesting for two reasons. One is that there are many more mod~$2$ classes than exist for odd primes, so there is more opportunity for testing conjectures and studying phenomenology. The other is that every mod~$2$ Galois representation is odd and therefore again there are more possibilities for investigating the Serre-type conjectures.
By Theorem 13 of \cite{AGM5}, the Hecke eigenvalue data we compile gives us parts of Hecke eigenpackets occurring in the sharbly homology of $\Gamma_0(N)$, for various~$N$. Therefore we can test Conjecture~5(d) of \cite{AGM4}, which asserts the existence of a Galois representation unramified outside $2N$ associated to each such eigenpacket. We do this by searching for the Galois representation using a computer program described in Section~\ref{gal}. There can easily be more than one Galois representation that fits our data for any given Hecke eigenclass, because we have only computed a few Hecke operators at each level (because of time and space constraints). Our Galois finder searches for the ``simplest'' Galois representation that fits our data in each case. We use the supply of characters and 2-dimensional representations coming from classical modular forms of weights~2, 3 and~4. In no case do we fail to find a match, using just reducible representations made out of these blocks. An explanation of why we use just these weights appears in Section~\ref{gal}.
Although we stop searching when we have found one Galois representation that appears to be attached to a given Hecke eigenpacket, we know by the Brauer-Nesbitt Theorem that up to semisimplification there can be at most one Galois representation that is truly attached. This Galois representation might be describable in many different ways using characters and classical cuspforms, because such things can be congruent modulo a prime above~2. Of course, we would expect more complicated and even irreducible 4-dimensional representations to be needed if we could compute for much larger levels and more Hecke operators. But at least in this small way we find evidence both for Conjecture~5(d) of \cite{AGM4} and of the correctness of our computations.
\subsection{}\label{ss:1.3}
As we have said, we compute Hecke eigenvalues in the sharbly homology, not in the group homology. When the coefficients are $\mathbb{F}_2$ (as in this paper), the relationship between these two homology theories is rather obscure, as explained in \cite{AGM4}. We might say that the sharbly homology is more ``geometric'' than the group homology, closely related as the former is to the Voronoi decomposition of the cone of $n$-dimensional quadratic forms. We believe that the sharbly homology is an interesting Hecke module in its own right, conjecturally possessing attached Galois representations.
We have no idea at present how one might prove in general that Galois representations are attached to Hecke eigenclasses in the sharbly homology. However, we believe that any naturally occurring Hecke module arising from the ``geometry'' of $\mathrm{GL}(n,\mathbb{Z})$ ought to have attached Galois representations, and that is what we are testing in this paper in the case of the mod~2 sharbly homology.
When looking experimentally for apparently attached Galois representations, naturally we can only look for Galois representations with small image, ones we can get our hands on. Of course we would be happy to find Hecke eigenclasses where the conjecturally attached Galois representations have large image, especially irreducible or non-selfdual image. But if, for example, the predicted image were $\mathrm{GL}(4,\mathbb{F}_2)$, it would be unlikely that we could find this Galois representation by searching for an appropriate polynomial.
On the other hand, experience suggests that when the level is small, the attached Galois representation will be reducible, and the smaller the level, the more reducible. This is borne out by the data in this paper, and it allows our ``Galois finder'' to succeed. Of course, if we could compute much larger levels we would expect more interesting Galois representations to be attached, although we would probably be unable to find them, unless they were lifts from a smaller group, e.g.~an orthogonal or a symplectic group. In this sense, failure of our Galois finder is to be expected in the case of sufficiently large level, but for the small levels studied in this paper, we expected and indeed found that our Galois finder, which only deploys 1- and 2-dimensional Galois representations, always succeeded. The limitations on the size of the level $N$ and prime $\ell$ in the Hecke operators $T(\ell,k)$ that we compute come purely from limitations on computer speed and memory size. The speed limits the Hecke operators, because the number of single cosets in $T(\ell,k)$ grows like $\ell^3$ or $\ell^4$, depending on $k$. The size of the memory limits $N$ because the number of rows and columns of the matrices on which we have to perform row reduction to compute the sharbly homology grows like $N^3$. For the largest $N$ and $\ell$ occurring in this paper, our computations were using up all available memory and could take a full day of CPU time to compute a single $T(\ell,k)$.
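The growth rate of the number of single cosets can be made concrete: for $\mathrm{GL}(4)$ the number of single cosets in $T(\ell,k)$ is the Gaussian binomial coefficient $\binom{4}{k}_\ell$, the number of $k$-dimensional subspaces of $\mathbb{F}_\ell^4$, which grows like $\ell^3$ for $k=1,3$ and like $\ell^4$ for $k=2$. The following plain-Python fragment (an illustration, not part of our Sage-based implementation) tabulates these degrees:

```python
def gaussian_binomial(n, k, q):
    """Number of k-dimensional subspaces of F_q^n (the q-binomial [n choose k]_q)."""
    num = den = 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

# Degrees of T(ell,k) on GL(4): roughly ell^3 for k = 1, 3 and ell^4 for k = 2.
for ell in (3, 5, 7):
    print(ell, [gaussian_binomial(4, k, ell) for k in (1, 2, 3)])
```

For instance, at $\ell=3$ the three degrees are $40$, $130$, $40$, already two orders of magnitude more cosets than the $\ell+1$ familiar from $\mathrm{GL}(2)$.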
Our Galois finder has two phases. In the first phase, we find attached Galois representations which are sums of 4 characters or sums of 2 characters and a Galois representation attached to a classical cusp form of weight 2 or 4 (modulo 2). These weights are natural in view of the geometry of the Borel-Serre boundary. (Note however that no one has yet carried out a complete explicit computation of the homology of the Borel-Serre boundary for general congruence subgroups of $\mathrm{GL}(n,\mathbb{Z})$ for any $n>3$.)
The other phase of the Galois finder is similar except that it uses cuspforms of weights 2 and 3. These weights can be predicted by means of Serre's conjecture for $\mathrm{GL}(2)$, and thus rely on the structure of the set of classical modular forms mod 2. See Section 4 for more discussion of these matters. It would be very interesting to have a more geometric or automorphic interpretation of the weight 3 forms that appear, perhaps some kind of mod 2 endoscopic or other functorial lifting. But we do not know of any.
\subsection{} After the completion of this paper, we learned of the recent remarkable work of Peter Scholze \cite{scholze}, which among other things attaches Galois representations to Hecke eigenclasses in the mod $p$ cohomology of locally symmetric spaces. This result builds on results of Harris--Lan--Taylor--Thorne \cite{hltt}, which attaches Galois representations to Hecke eigenclasses in the characteristic $0$ cohomology of locally symmetric spaces. It is quite likely that Scholze's results will imply that Galois representations are attached to Hecke eigenclasses in the mod $p$ cohomology of congruence subgroups $\Gamma $ of $\mathrm{GL}(n,\mathbb{Z})$, for all $p$ and $\Gamma $, although at the moment the necessary results seem to be conditional on stabilization of the twisted trace formula.
\subsection{} We now give a guide to the paper. In Section~\ref{sh} we recall the definitions of the Steinberg module, the sharbly complex, and the concept of attached Galois representation. We state the conjecture of \cite{AGM4} that asserts the existence of attached Galois representations to Hecke eigenclasses in the sharbly homology.
In Section~\ref{gu} we describe what we actually compute, namely certain Hecke eigenclasses in the sharbly homology in degree $1$. We use the Voronoi complex. We describe how the sharbly homology is calculated as a Hecke module, with reference to our earlier papers for details. Then we explain what modifications we made to the Hecke algorithm to allow us to work with $\mathbb{F}_2$-coefficients.
In Section~\ref{gal} we describe our Galois representation finder. Because there are so many mod~$2$ homology classes, we had to automate the process of finding candidates for the conjecturally attached Galois representations.
In Section~\ref{res} we give our results. We give the level $N$ of $\Gamma$, the dimension of $H_1(\Gamma,Sh_{\bullet }\otimes \mathbb{F}_2)$, and an enumeration of packets of Hecke eigenvalues. For each packet, we give the dimension of its simultaneous eigenspace and a Galois representation that appears to be attached to the packet.
We thank Dan Yasaki for conversations that greatly helped this project at the start. We thank Kevin Buzzard for very helpful correspondence, particularly in regard to~(\ref{buzztalk}). We thank the referee for raising the issues in subsection \ref{ss:1.3}.
\section{The Steinberg module and the sharbly complex}\label{sh}
\subsection{} Let $n\geqslant 2$ and let $\mathbb{Q}^n$ denote the vector space of $n$-dimensional row vectors.
\begin{definition} The \emph{Sharbly complex} $Sh_{\bullet } $ is the complex of $\mathbb{Z} \mathrm{GL}(n,\mathbb{Q})$-modules defined as follows. As an abelian group, $Sh_{k}$ is generated by symbols $[v_1,\dots,v_{n+k}]$, where the $v_i$ are nonzero vectors in $\mathbb{Q}^n$, modulo the submodule generated by the following relations:
(i) $[v_{\sigma (1)},\dots,v_{\sigma(n+k)}]-(-1)^\sigma[v_1,\dots,v_{n+k}]$ for all permutations $\sigma$;
(ii) $[v_1,\dots,v_{n+k}]$ if $v_1,\dots,v_{n+k}$ do not span all of $\mathbb{Q}^n$; and
(iii) $[v_1,\dots,v_{n+k}]-[av_1,v_{2},\dots,v_{n+k}]$ for all $a\in \mathbb{Q}^\times$.
\noindent The boundary map $\partial \colon Sh_{k} \rightarrow Sh_{k-1} $ is given by \[ \partial([v_1,\dots,v_{n+k}])= \sum_{i=1}^{n+k} (-1)^i [v_1,\dots,\widehat{v_i},\dots v_{n+k}], \] where as usual $\widehat{v_i}$ means to delete $v_{i}$. \end{definition}
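With $\mathbb{F}_2$-coefficients, as in this paper, the signs in relation (i) and in $\partial$ disappear, and $\partial\circ\partial=0$ becomes the statement that each double deletion occurs an even number of times. The following plain-Python sketch (illustrative only: it imposes relation (i) mod 2 by sorting, and ignores relations (ii) and (iii)) verifies this on a small symbol:

```python
from collections import Counter

def boundary(symbol):
    """Formal boundary of a sharbly symbol over F_2.

    A symbol [v_1,...,v_m] is a tuple of vectors (tuples of ints).  Mod 2
    the sign (-1)^i vanishes, and relation (i) lets us canonicalize each
    face by sorting.  Relations (ii) and (iii) are ignored in this sketch."""
    faces = Counter()
    for i in range(len(symbol)):
        face = tuple(sorted(symbol[:i] + symbol[i + 1:]))
        faces[face] ^= 1                     # coefficients mod 2
    return {f for f, c in faces.items() if c}

def boundary_of_chain(chain):
    """Boundary of a mod-2 chain, given as a set of symbols."""
    total = Counter()
    for s in chain:
        for f in boundary(s):
            total[f] ^= 1
    return {f for f, c in total.items() if c}

# Each (i,j) double deletion occurs twice, so the boundary of a boundary
# vanishes mod 2:
x = ((1, 0), (0, 1), (1, 1))
assert boundary_of_chain(boundary(x)) == set()
```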
The sharbly complex $$ \dots\to Sh_i\to Sh_{i-1}\to \dots \to Sh_1\to Sh_0 $$ is an exact sequence of $\mathrm{GL}(n,\mathbb{Q})$-modules. We may define the Steinberg module $St$ as the cokernel of $\partial\colon Sh_1\to Sh_0$ (cf.~\cite[Theorem 5]{AGM5}).
Of course, all these objects depend on $n$, which we suppress from the notation, since we will later only work with $n=4$.
Let $\Gamma$ be a congruence subgroup of $\mathrm{SL}(n,\mathbb{Z})$.
\begin{definition} Let $M$ be a right $\Gamma$-module, concentrated in degree $0$. The \emph{sharbly homology} of $\Gamma$ with coefficients in $M$ is defined to be $H_*(\Gamma,Sh_{\bullet} \otimes_\mathbb{Z} M)$, where $\Gamma$ acts diagonally on the tensor product. \end{definition}
If $(\Gamma,S)$ is a Hecke pair in $\mathrm{GL}(n,\mathbb{Z})$ and $M$ is a right $S$-module, the Hecke algebra $\cH(\Gamma,S)$ acts on the sharbly homology since $S$ acts (diagonally) on $Sh_{\bullet} \otimes_\mathbb{Z} M$.
Here is a restatement of Corollary 8 of \cite{AGM5}, which shows the close connection between the sharbly homology and the group cohomology of $\Gamma$:
\begin{theorem}\label{S-Steinberg} For any $\Gamma\subset \mathrm{GL}(n,\mathbb{Z})$ and any coefficient module $M$ in which $2$ is invertible, there is a natural isomorphism of Hecke modules $$ H_*(\Gamma,Sh_{\bullet} \otimes_\mathbb{Z} M)\to H_*(\Gamma,St \otimes_\mathbb{Z} M). $$ \end{theorem} By Borel-Serre duality \cite{B-S}, if~$\Gamma$ is torsionfree, there is a natural isomorphism of Hecke modules $$ H_i(\Gamma,St \otimes_\mathbb{Z} M)\to H^{\binom{n}{2}-i}(\Gamma, M) $$ for all $i$. This result can be extended to any~$\Gamma$ as long as its torsion primes are invertible on~$M$.
In general, the sharbly homology is more mysterious. Nevertheless, we still expect it to have number theoretic significance, as described in Conjecture~\ref{conj1} below.
\subsection{} Let $\Gamma_0(N)$ be the subgroup of matrices in $\mathrm{SL}(4,\mathbb{Z})$ whose first row is congruent to $(*,0,0,0)$ modulo $N$. Define $S_N$ to be the subsemigroup of integral matrices in $\mathrm{GL}(4,\mathbb{Q})$ satisfying the same congruence condition and having positive determinant relatively prime to $2N$.
Let $\cH(N)$ denote the $\mathbb{Z}$-algebra of double cosets $\Gamma_0(N)S_N\Gamma_0(N)$. Then $\cH(N)$ is a commutative algebra that acts on the cohomology and homology of $\Gamma_0(N)$ with coefficients in any $\mathbb{F}_2[S_N]$-module. When a double coset is acting on cohomology or homology, we call it a Hecke operator. Clearly, $\cH(N)$ contains all double cosets of the form $\Gamma_0(N)D(\ell,k)\Gamma_0(N)$, where $\ell$ is a prime not dividing $2N$, $0\leqslant k\leqslant 4$, and $$D(\ell,k)=\left(\begin{matrix} 1&&&&&\cr&\ddots&&&&\cr&&1&&&\cr&&&\ell&&\cr&&&&\ddots&\cr&&&&&\ell\cr\end{matrix}\right)$$ is the diagonal matrix with the first $4-k$ diagonal entries equal to 1 and the last $k$ diagonal entries equal to $\ell$. It is known that these double cosets generate $\cH (N)$ (cf.~\cite[Thm.~3.20]{Shimura}). When we consider the double coset generated by $D(\ell,k)$ as a Hecke operator, we call it $T(\ell,k)$.
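For concreteness, a minimal sketch (plain Python; the helper name `D` simply mirrors the notation above) that builds $D(\ell,k)$:

```python
def D(ell, k, n=4):
    """Diagonal matrix with the first n-k entries 1 and the last k entries ell."""
    diag = [1] * (n - k) + [ell] * k
    return [[diag[i] if i == j else 0 for j in range(n)] for i in range(n)]

# D(5,2) = diag(1,1,5,5); its determinant 5^2 is positive and prime to 2N
# whenever 5 does not divide N, as required of elements of S_N.
assert [row[i] for i, row in enumerate(D(5, 2))] == [1, 1, 5, 5]
```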
We can extend $\cH(N)$ to a larger commutative algebra $\cH^*(N)$ by adjoining the double cosets of $D(\ell,k)$ for $\ell\mid N$. Such a double coset, considered as a Hecke operator, is denoted $U(\ell,k)$.
Let $\overline{\mathbb{F}_2}$ be an algebraic closure of $\mathbb{F}_2$.
\begin{definition}\label{def:hp} Let $V$ be an $\cH(N)\otimes_\mathbb{Z} \overline{\mathbb{F}_2}$-module. Suppose that $v\in V$ is a simultaneous eigenvector for all $T(\ell,k)$ and that
$T(\ell,k)v=a(\ell,k)v$ with $a(\ell,k)\in \overline{\mathbb{F}_2}$ for all primes $\ell \nmid 2N$ and all $0\leqslant k\leqslant 4$. If $$\rho\colon G_\mathbb{Q}\to \mathrm{GL}(4,\overline{\mathbb{F}_2})$$ is a continuous representation of $G_{\mathbb{Q}} = \Gal (\overline{\mathbb{Q}}/\mathbb{Q})$ unramified outside $2N$, and \begin{equation}\label{eqn:hp} \sum_{k=0}^{4}(-1)^k\ell^{k(k-1)/2}a(\ell,k)X^k=\det(I-\rho(\Frob_\ell)X) \end{equation}
for all $\ell \nmid 2N$, then we say that $\rho$ is attached to $v$. \end{definition} Here, $\Frob_\ell$ refers to an arithmetic Frobenius element, so that if $\varepsilon$ is the cyclotomic character, we have $\varepsilon (\Frob_\ell)=\ell$. The polynomial in~(\ref{eqn:hp}) is called the \emph{Hecke polynomial} for~$v$ and~$\ell$. If $\ell\mid N$, we can still compute the left-hand side of~(\ref{eqn:hp}) and call it the Hecke polynomial for $U(\ell,k)$, but it has no obvious bearing on the attached Galois representation.
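In characteristic 2 the Hecke polynomial simplifies: since $\ell$ is odd, both $(-1)^k$ and $\ell^{k(k-1)/2}$ reduce to $1$ in $\overline{\mathbb{F}_2}$, so the left-hand side of~(\ref{eqn:hp}) collapses to $\sum_k a(\ell,k)X^k$. The following plain-Python sketch (illustrative; the actual eigenvalues live in an extension $\mathbb{F}$ of $\mathbb{F}_2$, not just in $\mathbb{F}_2$) assembles such a polynomial and checks it against the simplest reducible case, $\rho$ a sum of four trivial characters:

```python
def hecke_polynomial_mod2(a):
    """Coefficient list (degrees 0..4) of the Hecke polynomial in characteristic 2.

    Both (-1)^k and ell^{k(k-1)/2} (ell odd) reduce to 1 mod 2, so the
    polynomial is just sum_k a(ell,k) X^k.  `a` maps k to the eigenvalue
    a(ell,k), with a(ell,0) = 1.  (Sketch over F_2 only.)"""
    return [a.get(k, 0) & 1 for k in range(5)]

def mul_mod2(p, q):
    """Multiply polynomials over F_2 (coefficient lists, lowest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= pi & qj
    return r

# For rho a sum of four trivial characters, det(I - rho(Frob_ell) X) is
# (1 + X)^4 = 1 + X^4 in characteristic 2:
p = [1]
for _ in range(4):
    p = mul_mod2(p, [1, 1])
assert p == [1, 0, 0, 0, 1]
# The matching eigenpacket therefore has a(ell,1) = a(ell,2) = a(ell,3) = 0:
assert hecke_polynomial_mod2({0: 1, 4: 1}) == p
```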
The following is a special case of \cite[Conjecture 5]{AGM4}:
\begin{cnj}\label{conj1} Let $N\geqslant1$. Let $v$ be a Hecke eigenclass in $H_{\ast}(\Gamma_0(N),Sh_{\bullet}\otimes_\mathbb{Z} \overline{\mathbb{F}_2} )$. Then there is attached to~$v$ a continuous representation unramified outside $2N$, $$\rho\colon G_\mathbb{Q}\to \mathrm{GL}(4,\overline{\mathbb{F}_2}).$$ \end{cnj}
\section{Computing homology and the Hecke action mod~$2$}\label{gu}
\subsection{} As explained in Sections 5 and 6 of \cite{AGM5}, we compute the Hecke operators acting on sharbly cycles that are supported on Voronoi sharblies. Theorem 13 of \cite{AGM5} guarantees that the packets of Hecke eigenvalues we compute do occur on eigenclasses in $H_{1}(\Gamma_0(N),Sh_{\bullet}\otimes_\mathbb{Z} \overline{\mathbb{F}_2})$. In this section, we recall results from \cite{AGM5} and explain how they are modified to work with $\mathbb{F}_2$ coefficients.
The sharbly complex is not finitely generated as a $\mathbb{Z} \mathrm{SL}(n,\mathbb{Z})$-module, which makes it difficult to use in practice to compute homology. To get a finite complex to compute $H_{1}$, we use the Voronoi complex. We refer to \cite[Section 5]{AGM5} for any unexplained notation in what follows.
Let $X_{n}^{0}\subset \mathbb{R}^{\binom{n+1}{2}}$ be the convex cone of positive-definite real quadratic forms in $n$ variables. This has a partial (Satake) compactification $(X_{n}^{0})^{*}$ obtained by adjoining rational boundary components, which is itself a convex cone. The space $(X_{n}^{0})^{*}$ can be partitioned into cones $\sigma =\sigma (x_{1},\dotsc ,x_{m})$, called \emph{Voronoi cones}, where the $x_{i}$ are contained in certain subsets of nonzero vectors from $\mathbb{Z}^{n}$. (We write elements of $\mathbb{Z}^{n}$ as row vectors, as we did in Section~\ref{sh} for $\mathbb{Q}^{n}$.) The cones are built as follows: each nonzero $x_{i}\in \mathbb{Z}^{n}$ determines a rank $1$ quadratic form $q (x_{i}) = {}^{t}x_{i} x_{i}\in (X_{n}^{0})^{*}$. Let $\Pi$ be the closed convex hull of the points $\{ q (x)\mid x\in \mathbb{Z}^{n}, x\not = 0\}$. Then each of the proper faces of $\Pi$ is a polytope, and the $\sigma$s are exactly the cones on these polytopes. The indexing sets are constructed in the obvious way: if $\sigma$ is the cone on $F\subset \Pi$, and $F$ has distinct vertices $q (x_{1}),\dotsc ,q (x_{m})$, then the indexing set is $\{\pm x_{1},\dotsc ,\pm x_{m} \}$. We let $\Sigma$ denote the set of all Voronoi cones.
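The rank-$1$ forms $q(x)={}^{t}x\,x$ spanning the Voronoi cones are ordinary outer products; a two-line illustration (plain Python, with `rank_one_form` a hypothetical helper name):

```python
def rank_one_form(x):
    """The rank-1 form q(x) = x^t x attached to a nonzero integer row vector x,
    returned as a symmetric matrix.  (Hypothetical helper, for illustration.)"""
    return [[a * b for b in x] for a in x]

# q(x) = q(-x), matching the indexing sets {±x_1, ..., ±x_m} above:
assert rank_one_form((1, -2)) == rank_one_form((-1, 2)) == [[1, -2], [-2, 4]]
```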
Let $X^{*}_{n}$ be the quotient of $(X^{0}_{n})^{*}$ by homotheties. The images of the Voronoi cones are cells in $X^{*}_{n}$. Let $\mathbb{Z} V_{\bullet}$ be the oriented chain complex on these cells, graded by dimension, and let $\mathbb{Z} \partial V_{\bullet}$ be the subcomplex generated by those cells that do not meet the interior of $X^{*}_{n}$ (i.e., the image in $X^{*}_{n}$ of the positive-definite cone). The \emph{Voronoi complex} is then defined to be $\cV_{\bullet} = \mathbb{Z} V_{\bullet}/\mathbb{Z} \partial V_{\bullet}$. For our purposes, it is convenient to reindex $\cV_{\bullet}$ by introducing the complex $\cW_{\bullet}$, where $\cW_{k} = \cV_{n+k-1}$. The results of \cite{AGM5} prove that if $n\leqslant 4$, both $\cW_{\bullet}$ and $Sh_{\bullet}$ give resolutions of the Steinberg module. In particular, let $\Gamma =\Gamma_{0} (N)$. If $M$ is a $\mathbb{Z}[\Gamma]$-module such that the orders of all torsion elements in $\Gamma$ are invertible in $M$, then $H_{*} (\Gamma , \cW_{\bullet } \otimes_{\mathbb{Z}}M)\approx H_{*} (\Gamma , Sh_{\bullet}\otimes_{\mathbb{Z}}M)$, and furthermore, by Borel--Serre duality, both are isomorphic (after reindexing) to $H^{*} (\Gamma , M)$. These two complexes can be related as follows in our case of interest: when $n=4$, every Voronoi cell of dimension $\leqslant 5$ is a simplex. Thus for $0\leqslant k \leqslant 2$, we can define a map of $\mathbb{Z}[\mathrm{SL}(4,\mathbb{Z})]$-modules $$ \theta_k\colon \cW_{k} \to Sh_{k} $$ that takes the Voronoi cell $\sigma(v_1,\dots,v_{k+4})$ to $\theta_k(\sigma(v_1,\dots,v_{k+4})):=[v_1,\dots,v_{k+4}]$. This allows us to realize Voronoi cycles in these degrees in the sharbly complex.
In the current setting, in which $M\cong \overline{\mathbb{F}_2}$ with trivial $\Gamma$-action, the torsion orders in $\Gamma$ are of course not all invertible in $M$. Hence what we actually compute is more subtle. Let $K$ denote either $\cW_{\bullet } $ or $Sh_{\bullet }$. It is necessary to distinguish between $H_{*}(K) = H_*(\Gamma ,K\otimes_\mathbb{Z} M)$, i.e.~the homology of $\Gamma$ with coefficients in the complex $K\otimes_\mathbb{Z} M$, and $H_1(K\otimes_\Gamma M)$, which is the homology of the complex $K\otimes_\Gamma M$ and which is the bottom line of a spectral sequence that computes $H_{*}(K)$. The Hecke algebra ${\mathcal H}$ acts on both of these homologies when $K=Sh_{\bullet }$, and the spectral sequence just mentioned is ${\mathcal H}$-equivariant.
Thus our computation begins by computing a basis $\{x_{i} \}$ of the homology group $H_{1} (\cW_{\bullet } \otimes_{\Gamma }\mathbb{F}_{2})$. We then compute elements $y_{i} = \theta_{1,*} (x_{i})\in H_{1} (Sh_{\bullet }\otimes_{\Gamma }\mathbb{F}_{2})$. Let $T$ be a Hecke operator. We compute each Hecke translate $Ty_{i}$ and then find sharbly cycles $z_{i}$ such that $z_{i}=Ty_{i}$ in $H_{1} (Sh_{\bullet}\otimes_{\Gamma}\overline{\mathbb{F}_2})$ and such that $z_{i}$ is in the image of the map $\theta_{1,*}$. The inverse images $\theta_{1,*}^{-1} (z_{i})$ can be written as linear combinations of the cycles $x_{i}$, which gives a matrix representing the action of $T$ from which we can find eigenclasses and eigenvalues.
Unfortunately, as indicated in \cite[Section 6]{AGM5}, we don't know if the map $\theta_{1,*}$ is injective. This raises the question of what these eigenvalues mean. The answer is provided by Theorem 13 in \cite{AGM5}, which guarantees that if we find a cycle $v$ representing a nonzero class in $H_1(\cW \otimes_\Gamma \overline{\mathbb{F}_2})$ such that $\theta_1(v)T$ is homologous to $a\theta_1(v)$ in $Sh_\bullet \otimes_\Gamma \overline{\mathbb{F}_2}$ (for a Hecke operator $T$), then there exists an eigenclass in $H_1(Sh_{\bullet })$ with eigenvalue $a$ for $T$. Hence eigenvalues we find in this way do occur in the sharbly homology and conjecturally are associated with Galois representations as in Conjecture~\ref{conj1} above.
\subsection{} Next we turn to the actual computation of the Hecke operators. Assume for the moment that $\Gamma$ is torsionfree. Let $\xi = \sum n (x)x$ be a $1$-sharbly cycle mod~$\Gamma$, where all multiplicities $n (x)$ are taken to be nonzero. We also assume for the moment that $2$ is invertible in the coefficient module. As described in \cite{experimental}, we can encode $\xi$ as a collection of $4$-tuples $(x , n (x ), \{y \}, \{L (y ) \})$ of the following data: \begin{enumerate} \item The $1$-sharbly $x$ appears in $\xi = \sum n (x)x$ with multiplicity $n (x)$. \item $\{y \}$ is the set of $0$-sharblies appearing in the boundary of $x$.\label{steptwo} \item For each $0$-sharbly $y$ in \eqref{steptwo}, the matrix $L (y)$ is a \emph{lift} of $y$ to $M_{4} (\mathbb{Z})$. In other words, the rows of the matrix $L (y)$ equal the entries of $y$, up to permutation and scaling by $\{\pm 1 \}$.\label{stepthree} \end{enumerate} We further require that the lift matrices in \eqref{stepthree} are chosen $\Gamma $-equivariantly: suppose that for $x ,x'$ in the support of $\xi $ there exist $y$ (respectively $y'$) appearing in the boundary of $x$ (resp., $x'$) with $y = y'\gamma$ for some $\gamma \in \Gamma $. Then we require $L (y)= L (y')\gamma$. Thus we have written $\xi$ as a collection of $1$-sharblies with multiplicities and with extra data that reflects the cycle structure of $\xi$ mod~$\Gamma$.
The congruence groups $\Gamma$ we treat are not torsionfree in general, and we must modify the above data. When $\Gamma$ has torsion, it can happen that a given $0$-sharbly $y$ is taken to itself by an element of $\Gamma$ that reverses orientation. In the language of \cite[Section 3.8]{AGM1}, such Voronoi cells are \emph{nonorientable}; in that paper and its sequels \cite{AGM2, AGM3,
AGM4} these cells are discarded when one computes $H_{1} (\cW)$. Unfortunately, these cells are not discardable when one computes Hecke operators using the ideas in \cite{experimental}: after applying a Hecke operator, such $0$-sharblies must themselves be ``reduced'' to rewrite the Hecke translate in terms of cycles in the image of $\theta_{1}$.
The point for the current discussion is that, when encoding $\xi$ as a $4$-tuple, any nonorientable $0$-sharbly $y$ must effectively have more than one lift matrix chosen for it. In particular, if $y$ is nonorientable then we can find an orientation-reversing $\gamma$ in the stabilizer of $y$ with $y\gamma^{2}=y$, and we must replace the tuple $\Phi=(x,n (x),\{y \},\{L (y) \})$ in our data with a \emph{pair} of tuples $\Phi' , \Phi ''$. These tuples are the same as $\Phi$ except that (i) if $\Phi$ has multiplicity $n (x)$, then $\Phi '$, $\Phi ''$ each have multiplicity $n (x)/2$, and (ii) if $\Phi '$ has a lift matrix $L (y)$ for $y$, then $\Phi ''$ has the lift matrix $L (y)\gamma$ for $y$ in the same position.
Hence we ``split'' the contribution $n (x)x$ to $\xi$ into a contribution of two $1$-sharblies, each of multiplicity $n (x)/2$, so that we can encode it as two $4$-tuples that can maintain the $\Gamma$-equivariance of the data. Of course there may be more nonorientable $0$-sharblies in the boundary of $x$ than just $y$. If so we continue to split tuples as needed, dividing multiplicities by $2$ along the way. Since $x$ has at most $5$ $0$-sharblies in its boundary, our original $x$ gives rise to at most $2^{5}$ tuples.
We now return to the case at hand, in which $2$ is not invertible in the coefficients. Clearly we cannot apply the above construction to encode $\xi$ as a collection of tuples, since we cannot replace $n (x)$ by $n (x)/2$ if a $0$-sharbly is taken to itself by its stabilizer. Fortunately we are saved by an observation of Dan Yasaki: since $-1=1$ in the coefficients, there is no distinction between orientable and nonorientable Voronoi cells! All Voronoi cells are orientable; none are discarded when one builds the complex $\cW_{\bullet}$. The consequence is that a sharbly chain \emph{never} becomes a cycle mod~$\Gamma$ because of orientation-reversing self-maps on $0$-sharblies in its boundary. Hence we never have to divide by $2$ in building the tuples $\Phi$ to encode $\xi$.
\section{Finding attached Galois representations}\label{gal}
\subsection{} Suppose we have a finite-dimensional $\mathbb{F}_2$-vector space~$V$ with a Hecke action. We now describe how we find Galois representations that are conjecturally attached to Hecke eigenvectors in $V \otimes_{\mathbb{F}_2} \overline{\mathbb{F}_2}$. Our Galois representation finder is a Python script built on the mathematical software package Sage~\cite{sage}.
\subsection{} We start by using the algorithm in \cite{experimental} to compute explicitly the Hecke operators $T(\ell, k)$ for $k=1, 2, 3$ and for~$\ell$ ranging through a set~$L$ of small odd primes. The operator is $U(\ell, k)$ rather than $T(\ell, k)$ if $\ell \mid N$. The~$L$ we use depends on~$N$ as in Table~\ref{table_L}. We use a larger~$L$ when~$N$ is smaller, because the computations are faster for smaller~$N$.
\begin{table} \caption{\label{table_L} We compute $T(\ell,k)$ and $U(\ell,k)$ at level~$N$ for the~$\ell$ shown in~$L$.}
\begin{tabular}{|l|l|} \hline $N$ & $L$ \\ \hline 3--10, 17 & $\{3,5,7,11,13\}$ \\ 11 & $\{3,5,7,11,13,17\}$ \\ 13 & $\{3,5,7,11\}$ \\ other & $\{3,5,7\}$ \\ \hline \end{tabular} \end{table}
Let $\mathbb{F}$ be the field generated over $\mathbb{F}_2$ by the eigenvalues of the Hecke operators we have computed. It is a finite extension of $\mathbb{F}_2$. We replace~$V$ with its extension of scalars $V\otimes_{\mathbb{F}_2} \mathbb{F}$ for the rest of the discussion.
For each operator we have computed, we decompose~$V$ into eigen\-spaces under that operator. Then we take the common refinement of all the decompositions. In other words, let $E$ have the form $\bigcap_{(\ell, k)} E_{\ell, k}$, where $E_{\ell, k}$ is any one of the eigenspaces for the operator at $(\ell, k)$, and the intersection is over all $\ell\in L$ and $k=1,2,3$. We find all the non-zero~$E$ of this form, and call each a \emph{simultaneous eigen\-space}. The $E$'s are pairwise disjoint, and together they span a subspace of~$V$. By construction, the Hecke eigenvalues $a(\ell, k)$ are constant on each~$E$ and characterize it. The function $(\ell, k) \mapsto a(\ell, k)$ is the \emph{Hecke eigenpacket} of~$E$.
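Equivalently, the simultaneous eigenspace for a candidate tuple of eigenvalues $(a(\ell,k))$ is the common kernel of the operators $T(\ell,k)-a(\ell,k)I$, which can be computed by stacking those matrices and taking the null space. A minimal sketch over $\mathbb{F}_2$ (our actual computation works over the extension field $\mathbb{F}$ in Sage; the function names and the sample operators below are hypothetical):

```python
def gf2_kernel(rows, n):
    """Basis of the null space of a matrix over GF(2).

    `rows` is a list of length-n lists of bits; Gaussian elimination to
    reduced row-echelon form, then one basis vector per free column."""
    rows = [r[:] for r in rows]
    pivots = {}                              # column -> pivot row index
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[r])]
        pivots[c] = r
        r += 1
    basis = []
    for free in range(n):
        if free in pivots:
            continue
        v = [0] * n
        v[free] = 1
        for c, pr in pivots.items():
            v[c] = rows[pr][free]            # in char. 2 no sign is needed
        basis.append(v)
    return basis

def simultaneous_eigenspace(ops, eigs):
    """Kernel of the stacked matrices T_i - a_i*I over GF(2)."""
    n = len(ops[0])
    stacked = []
    for T, a in zip(ops, eigs):
        for i in range(n):
            row = T[i][:]
            row[i] ^= a                      # subtract a*I (char. 2: add)
            stacked.append(row)
    return gf2_kernel(stacked, n)

# Two commuting toy operators on a 2-dimensional V (hypothetical data):
ops = [[[1, 0], [0, 0]],
       [[1, 0], [0, 1]]]
assert simultaneous_eigenspace(ops, [1, 1]) == [[1, 0]]
assert simultaneous_eigenspace(ops, [0, 1]) == [[0, 1]]
assert simultaneous_eigenspace(ops, [1, 0]) == []       # empty refinement
```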
To a simultaneous eigenspace~$E$ we now attach a family of polynomials. Let $L' = \{\ell \in L \mid\, \ell \nmid N\}$.
\begin{definition} The \emph{polynomial system} $\mathcal{F}(E)$ is the mapping that sends $\ell \in L'$ to the Hecke polynomial with eigenvalues $a(\ell, k)$ defined in~(\ref{eqn:hp}). \end{definition}
The Hecke polynomials have coefficients in the field of eigenvalues~$\mathbb{F}$, but they do not necessarily split into linear factors over that field. We enlarge~$\mathbb{F}$ if necessary so that all the Hecke polynomials for $\ell\in L'$ split into linear factors in $\mathbb{F}[X]$, and again we replace~$V$ with its extension of scalars $V\otimes_{\mathbb{F}_2} \mathbb{F}$. The largest $\mathbb{F}$ we have had to work with is $\mathbb{F}_{64}$ at level $N=59$, a very small field from the computational standpoint.
\subsection{} We will be using various Galois representations~$\rho$ that have been defined classically. Each $\rho$ is a continuous, semisimple representation of $G_{\mathbb{Q}}$ unramified outside $2N$. It takes values in $\mathrm{GL}(n', \mathbb{F})$ for $n' = 1$ or~2, where $\mathbb{F}$ is the particular finite extension of $\mathbb{F}_2$ described above. The characteristic polynomial of Frobenius for~$\rho$ is known and is of degree~$n'$ for each $\ell \nmid 2N$.
\begin{definition} The \emph{polynomial system} $\mathcal{F}(\rho)$ is the mapping that sends $\ell \in L'$ to the characteristic polynomial of Frobenius for~$\rho$ at~$\ell$. \end{definition}
Before we say which~$\rho$ we consider, let us describe how we conjecturally attach a sum of $\rho$'s to a simultaneous eigenspace~$E$. Say that $\mathcal{F}(\rho)$ \emph{divides} $\mathcal{F}(E)$ if, for each $\ell\in L'$, the polynomial at~$\ell$ for $\rho$ divides the polynomial at~$\ell$ for~$E$. When one polynomial system divides another, define the \emph{quotient system} in the obvious way.
For a given~$E$, let $\mathcal{F} = \mathcal{F}(E)$ be its polynomial system. We run through a list of Galois representations~$\rho$ in some fixed order. The first time we find a $\rho$ (call it $\rho_1$) whose system divides $\mathcal{F}$, we replace $\mathcal{F}$ by the quotient system. If the system for~$\rho_1$ divides $\mathcal{F}$ more than once (say $n_1$ times), we take the quotient $n_1$ times. After that, we continue running through the rest of the $\rho$'s in our fixed order. When we find a $\rho_2$ whose system divides the new $\mathcal{F}$, say $n_2$ times, we again replace $\mathcal{F}$ with the quotient system. We stop with success when $\mathcal{F}$ becomes the trivial system, meaning all polynomials have degree zero. We stop with failure when we run out of $\rho$'s before $\mathcal{F}$ becomes trivial. In the successful cases, we say that the \emph{Galois
representation apparently attached to}~$E$ is $$ \rho_1^{\oplus n_1} \oplus \rho_2^{\oplus n_2} \oplus \cdots . $$ The word ``apparently'' means that this Galois representation matches our Hecke data as far as our data extends.
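The divisibility test and quotient are ordinary polynomial division in $\mathbb{F}[X]$, carried out prime by prime. A minimal plain-Python sketch of the greedy peeling loop over $\mathbb{F}_2$ (illustrative only; our actual finder is the Sage script described above, and the representation names in the example are hypothetical):

```python
def gf2x_divmod(num, den):
    """Polynomial division over F_2; coefficient lists, lowest degree first.
    `den` must be nonzero (every nonzero polynomial over F_2 is monic)."""
    num = [c & 1 for c in num]
    den = [c & 1 for c in den]
    while num and num[-1] == 0:
        num.pop()
    while den and den[-1] == 0:
        den.pop()
    quot = [0] * max(len(num) - len(den) + 1, 1)
    while len(num) >= len(den):
        shift = len(num) - len(den)
        quot[shift] = 1
        num = [c ^ (den[i - shift] if i >= shift else 0)
               for i, c in enumerate(num)]
        while num and num[-1] == 0:
            num.pop()
    return quot, num                         # (quotient, remainder)

def divides(f_rho, f_E):
    """Does the system of rho divide the system of E at every prime in L'?"""
    return all(gf2x_divmod(f_E[l], f_rho[l])[1] == [] for l in f_rho)

def split(f_E, rho_list):
    """Greedy peeling: repeatedly divide off the first rho whose system
    divides E's.  Returns (names peeled off, whether E's system became
    trivial, i.e. all remaining polynomials have degree zero)."""
    found = []
    for name, f_rho in rho_list:
        while divides(f_rho, f_E):
            found.append(name)
            f_E = {l: gf2x_divmod(f_E[l], f_rho[l])[0] for l in f_E}
    trivial = all(gf2x_divmod(p, [1])[0] == [1] for p in f_E.values())
    return found, trivial

# Hypothetical systems at ell = 3, 5: "chi" a character, "f" a cuspidal block.
rho_list = [("chi", {3: [1, 1], 5: [1, 1]}),
            ("f",   {3: [1, 1, 1], 5: [1, 0, 1]})]
E = {3: [1, 1, 0, 1, 1], 5: [1, 0, 0, 0, 1]}     # degree-4 Hecke polynomials
assert split(E, rho_list) == (["chi", "chi", "f"], True)
```

In this toy run the representation apparently attached to~$E$ would be $\chi^{\oplus 2}\oplus\rho_f$, with $\chi$ peeled off twice before the 2-dimensional block divides what remains.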
\subsection{} Now we describe the Galois representations~$\rho$ we use. We have two different lists of Galois representations, $\bm{\rho}_{2,4}$ and $\bm{\rho}_{2,3}$. With either list, we always successfully find a Galois representation that is apparently attached to one of our simultaneous eigenspaces~$E$. Specific results are in Section~\ref{res}. The lists $\bm{\rho}_{2,4}$ and $\bm{\rho}_{2,3}$ are ordered, and the order matters in the following sense. When we split the first representation, $\rho_1$, off of~$E$, we want $\rho_1$ to be as simple as possible. $\rho_2$ should be the second simplest, and so on.
In this subsection, we define $\bm{\rho}_{2,4}$ and $\bm{\rho}_{2,3}$. In~(\ref{ChiReasons})--(\ref{exponent2}), we give the motivation behind the definitions.
The list $\bm{\rho}_{2,4}$ begins with one-dimensional Galois representations~$\chi$. These are Dirichlet characters with values in~$\mathbb{F}$, which we identify with one-dimensional representations as usual. Let~$M$ be the odd part of~$N$. The definition is that a Dirichlet character~$\chi$ belongs to $\bm{\rho}_{2,4}$ if and only if the conductor~$N_1$ of~$\chi$ is a divisor of~$M$. Following the intuition that a Dirichlet character with smaller conductor is simpler than one with a larger conductor, we put the Dirichlet characters into $\bm{\rho}_{2,4}$ in order of increasing~$N_1$. For instance, $\chi=1$ comes first. Sage's class \texttt{DirichletGroup} enumerates the~$\chi$ for a given~$N_1$ automatically. The characteristic polynomial of Frobenius at~$\ell$ for~$\chi$ is $1 + \chi(\ell) X$, for all $\ell \nmid 2N$.
After the Dirichlet characters, we put into $\bm{\rho}_{2,4}$ certain Galois representations~$\rho$ coming from classical cusp forms for congruence subgroups of $\mathrm{SL}(2,\mathbb{Z})$. We emphasize that the cusp forms are in characteristic zero, though the~$\rho$ take values in characteristic two. The characteristic polynomials of Frobenius for the cusp forms are naturally defined over number fields, so, as we describe which cusp forms we use, we must also describe how we reduce to get Galois representations defined over~$\mathbb{F}$.
Let $N_1$ be a divisor of~$M$. Let $f$ be a newform of weight 2 or 4 for $\Gamma_0(N_1)$. The coefficients of the $q$-expansion of~$f$ generate a number field~$K_f$, with ring of integers $\mathcal{O}_{K_f}$. Let $\mathfrak{p}$ be a prime of~$K_f$ over~2. If $\mathbb{F}$ is of high enough degree over $\mathbb{F}_2$, then the finite field $\mathcal{O}_{K_f} / \mathfrak{p}$ has an embedding $\alpha_{\mathfrak{p}}$ into $\mathbb{F}$. In every case we have computed, $\mathbb{F}$ is indeed large enough so that this embedding exists. Then the pair $(f, \mathfrak{p})$ gives rise to a Galois representation~$\rho$ into $\mathrm{GL}(2, \mathbb{F})$, by reduction mod~$\mathfrak{p}$ composed with $\alpha_{\mathfrak{p}}$. For any $\ell \nmid 2N$, the characteristic polynomial of Frobenius is $1 - \alpha_{\mathfrak{p}}(a_\ell) X + X^2$, where $a_\ell$ is the $\ell$-th coefficient in the $q$-expansion of~$f$.
By definition, $\bm{\rho}_{2,4}$ contains the representation~$\rho$ for $(f, \mathfrak{p})$, for all $N_1 \mid M$ and all newforms $f$ of weight 2 or 4 for $\Gamma_0(N_1)$. The order is as follows. The outermost loop is over weight~2 first, then weight~4. For a given weight, we let $N_1$ run through the divisors of~$M$ in increasing order. We find the newforms~$f$ for $\Gamma_0(N_1)$ and the given weight. Sage's class \texttt{CuspForms}, with its method \texttt{newforms}, makes this last step automatic. For each newform~$f$, we find the number field $K_f$. If there is more than one~$f$ for the given weight and~$N_1$, we sort these $f$'s by two keys: the primary key says the degree $[K_f : \mathbb{Q}]$ should be increasing, and the secondary key says that the absolute value of the discriminant of $K_f$ should be increasing.
We now turn to the definition of $\bm{\rho}_{2,3}$. It begins with the same Dirichlet characters as $\bm{\rho}_{2,4}$, in the same order. Next, let $N_1 \mid M$. Let $\psi$ be a character on $\mathbb{Z}/N_1\mathbb{Z}$. Let $f$ be a newform of weight 2 or 3 with level~$N_1$ and nebentype character~$\psi$. Let $K_f$ and~$\mathfrak{p}$ be as before. The pair $(f, \mathfrak{p})$ gives rise to a Galois representation~$\rho$ into $\mathrm{GL}(2, \mathbb{F})$ as above.
By definition, $\bm{\rho}_{2,3}$ contains the representation~$\rho$ for $(f, \mathfrak{p})$, for all $N_1 \mid M$, all~$\psi$, and all newforms $f$ of weight 2 or 3 of level~$N_1$ and nebentype character~$\psi$. The order is as follows. The outermost loop is over weight~2 first, then weight~3. For a given weight, we let $N_1$ run through the divisors of~$M$ in increasing order. For a given~$N_1$, we run through the~$\psi$ in the order Sage uses, which is to fix generators of the character group and raise them to powers in lexicographic order, starting with 0-th powers. In particular, the trivial~$\psi$ comes first. We find the newforms for the given weight and~$\psi$, again using Sage's class \texttt{CuspForms}. For each newform~$f$, we find the number field $K_f$, and sort the $f$'s by degree and discriminant as before.
\subsection{}\label{ChiReasons} The definitions of $\bm{\rho}_{2,4}$ and $\bm{\rho}_{2,3}$ present two different perspectives on how Galois representations would be attached to our homology classes.
Our construction of $\bm{\rho}_{2,4}$ reflects a guess based on an analogy with our first papers in this series, which studied homology in characteristic zero \cite{AGM1, AGM2, AGM3}. In them, we found that, for small levels, all the homology appeared to be accounted for by classes supported on the Borel-Serre boundary, and that it was always related to Dirichlet characters and classical cuspforms of weights~2 and~4. Although with this guess we might expect to need newforms of even level dividing $N$, in practice we did not.
By contrast, the list $\bm{\rho}_{2,3}$ reflects the conjecture found in~\cite{ADP}. Here we seek mod~2 Galois representations which the conjecture would associate to a homology class of level~$N$, weight~$k$ and trivial nebentype. In particular, the Serre conductor of such Galois representations would divide the odd part~$M$ of the level~$N$. We never get to test this for more than two-dimensional Galois representations, not all the way to four dimensions, because we keep splitting off the Dirichlet characters. Our guesses for two-dimensional representations are that they are mod~2 Galois representations which Serre's conjecture would attach to a homology class of level~$M$, weight~$k$ and trivial nebentype. But we don't have a good way to construct these mod~2 objects, except by reducing characteristic-zero modular forms mod~2. Kevin Buzzard tells us that we are guaranteed to find all such two-dimensional mod~2 Galois representations by looking at modular forms of level~$M$, weights 2 or 3, and a range of nebentypes. We explain this guarantee in~(\ref{buzztalk}).
\subsection{}\label{buzztalk} We thank Kevin Buzzard for much of the information in this subsection. Let $\sigma$ be a mod~2 Galois representation which Serre's conjecture (now a theorem of Khare-Wintenberger \cite{KW1,KW2}) would attach to a homology class of level~$M$, weight~$k$ and trivial nebentype. The following arguments are valid for all~$p$ prime to~$M$, including $p=2$ \cite[Thm.~4.3]{Eidx}. First of all, we do not have to worry about $k=1$, because by multiplying by the Hasse invariant we can move to $k=p=2$. Thus we may assume $k\geqslant 2$. Any eigenform will show up, up to a twist, in weight at most $p+1$. Thus for $p=2$, where there are no twists at all, we need only compute in weights $k=2$ and~3. All the mod~2 eigenforms lift to characteristic zero, because $k\geqslant 2$. Because $p\leqslant 3$, we cannot guarantee that the nebentype character lifts to the character we expect; however, we know that it lifts to \emph{some} character. Since the eigenforms lift to characteristic zero, well-known work of Deligne attaches $p$-adic representations to them. In turn, these reduce to mod~$p$ representations, one of which is the given~$\sigma$.
In practice, there are a large number of nebentypes~$\psi$, and often a large number of cusp forms for a given nebentype. We cut down on the amount of computation as follows. Our desired four-dimensional Galois representation must have determinant~1. In every case where we use a cusp form, we have already split off two Dirichlet characters; that is, the four-dimensional representation is $\chi_1 \oplus \chi_2 \oplus \rho_3$ where $\chi_1$, $\chi_2$ are Dirichlet characters and $\rho_3$ is from a cusp form of nebentype~$\psi$. Thus $\det \rho_3 = \delta$, where we define $\delta = \det(\chi_1 \chi_2)^{-1}$. Furthermore, $\det\rho_3 = \det\psi$. When we construct the list $\bm{\rho}_{2,3}$ in our program, we already know~$\chi_1$ and~$\chi_2$, so we make the list smaller by only including $\psi$ that are congruent to $\delta$ mod~2.
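For a cyclic character group, this filter can be sketched as follows. This is an illustrative sketch of ours, not the program's code; it uses the criterion proved in the next subsection, namely that a character is 1 mod~2 exactly when its order is a power of~2:

```python
# Hypothetical sketch of the nebentype filter for a cyclic character group.
# For an odd prime q, (Z/qZ)^x is cyclic of order q - 1, so its characters
# are indexed by k mod (q - 1): psi_k sends a fixed generator to zeta^k.
# psi_k ≡ psi_j (mod 2) exactly when psi_{k-j} has 2-power order, i.e.
# (q - 1) / gcd(q - 1, k - j) is a power of 2.
from math import gcd

def is_pow2(n):
    return n & (n - 1) == 0  # n assumed >= 1

def congruent_mod2(k, j, q):
    order = (q - 1) // gcd(q - 1, (k - j) % (q - 1))
    return is_pow2(order)

q, delta_index = 13, 0  # trivial delta: keep the psi with psi ≡ 1 mod 2
kept = [k for k in range(q - 1) if congruent_mod2(k, delta_index, q)]
print(kept)  # the characters of order 1, 2, or 4: indices 0, 3, 6, 9
```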
\subsection{}\label{exponent2} Let~$\Delta$ be the group of characters $\psi : (\mathbb{Z}/M\mathbb{Z})^\times \to \mathbb{C}$ that are congruent to 1 mod~2. The $\psi$ we need to use are the coset of~$\Delta$ translated by~$\delta$, so we would like to understand~$\Delta$. Let $\mu$ be the exponent of the group $(\mathbb{Z}/M\mathbb{Z})^\times$. All our~$\psi$ take values in $\mathbb{Q}(\zeta_\mu)$, the cyclotomic field of $\mu$-th roots of unity, and ``mod~2'' means modulo a prime ideal $\mathfrak{p}_\mu$ over~2 in $\mathbb{Q}(\zeta_\mu)$. Let $\nu$ be the power of~2 dividing~$\mu$, and let $o$ be the odd part, so that $\mu = \nu o$. As usual, $\mathbb{Q}(\zeta_\mu)$ is the compositum of $\mathbb{Q}(\zeta_\nu)$ and $\mathbb{Q}(\zeta_o)$, and $\mathfrak{p}_\mu$ can be understood by studying the primes $\mathfrak{p}_\nu$, $\mathfrak{p}_o$ over~2 in their respective fields.
\begin{lemma} $\Delta$ is the group of characters whose image lies in
$\mathbb{Q}(\zeta_\nu)$. Equivalently, it is the group of characters whose
orders are pure powers of two dividing~$\nu$. \end{lemma}
To prove the lemma, first consider~$\nu$. In $\mathbb{Q}(\zeta_\nu)$, 2 is totally ramified, and $\mathfrak{p}_\nu = (2, 1 - \zeta_\nu)$ is the only prime over~2. Any Dirichlet character with values in $\mathbb{Q}(\zeta_\nu)$ is 1 mod~2, because $\zeta_\nu$ and all its powers are congruent to~1 mod~$\mathfrak{p}_\nu$. Second, consider~$o$. For an odd prime~$q$ that divides~$o$, let $o'$ be the maximal power of~$q$ that divides~$o$. In $\mathbb{Q}(\zeta_{o'})$, 2 is unramified. Under the mapping to the residue class field, the $o'$ distinct powers of $\zeta_{o'}$ map to $o'$ distinct values, so only the trivial power $\zeta_{o'}^0 = 1$ maps to~1 mod~2. That is, only the trivial Dirichlet character is 1 mod $\mathfrak{p}_{o'}$. The lemma follows from the Chinese remainder theorem. \qed
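The two halves of the proof can be sanity-checked in pure Python for the small cases $\nu = 4$ and $o' = 7$ (an illustrative sketch, not part of the computation described in this paper):

```python
# Sanity check of the lemma in two small cases.

# 2-power part: in Q(zeta_4) = Q(i), the prime over 2 is (1 - i); reduction
# mod (1 - i) sends a + b*i to (a + b) mod 2, so every power of i maps to 1.
def mod_one_minus_i(a, b):
    return (a + b) % 2

powers_of_i = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # 1, i, -1, -i
assert all(mod_one_minus_i(a, b) == 1 for a, b in powers_of_i)

# Odd part: take o' = 7. 2 is unramified in Q(zeta_7), and a prime over 2
# has residue field F_8, where the image of zeta_7 generates F_8^x.
# Model F_8 as GF(2)[x]/(x^3 + x + 1), elements encoded as 3-bit integers.
def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011  # reduce by x^3 + x + 1
    return r

g = 0b010  # the class of x generates F_8^x (order 7)
powers, p = [], 1
for _ in range(7):
    powers.append(p)
    p = mul(p, g)
assert len(set(powers)) == 7  # the 7 powers of zeta_7 stay distinct mod 2
assert powers.count(1) == 1   # only the trivial power reduces to 1
print("lemma verified for nu = 4 and o' = 7")
```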
\section{Results}\label{res}
For the list $\bm{\rho}_{2,4}$, subsection~(\ref{resultsS}) contains a table of results for several levels~$N$. For each level~$N$, we first give the overall dimension of the~$H_1$ we compute. Each succeeding row describes a simultaneous eigenspace~$E$. The first two columns in the row give the type of~$E$, a Roman numeral to be defined below, followed by $\dim E$.
Let $\mathbf{1}$ be the trivial one-dimensional Galois representation. Roman numeral~I means that the Galois representation apparently attached to our Hecke eigenspace is the sum of four trivial representations, $\mathbf{1}^{\oplus 4}$. The symbol $\mathrm{I}_m$ means the representation is the sum of two trivial and two non-trivial representations, $\mathbf{1} \oplus \mathbf{1} \oplus \chi_m \oplus \bar{\chi}_m$. The non-trivial representations go to $\mathbb{F}_4$ rather than $\mathbb{F}_2$. More precisely, $\chi_m$ maps $(\mathbb{Z}/m\mathbb{Z})^\times$ surjectively to $\mathbb{F}_4^\times$, and $\bar{\chi}_m$ is its conjugate under $\Gal(\mathbb{F}_4/\mathbb{F}_2)$. These statements characterize $\chi_m$ and $\bar{\chi}_m$ up to conjugation.
Roman numerals~II and~IV mean that the Galois representation apparently attached to our Hecke eigenspace is the sum of two $\mathbf{1}$'s and the Galois representation attached to a cuspidal newform from $\bm{\rho}_{2,4}$. The newform has weight~2 or~4, respectively. The congruence subgroup is $\Gamma_0(N)$, where~$N$ is the level where the representation first appears in the tables.
In subsection~(\ref{resultsB}), we present data for $\bm{\rho}_{2,3}$, but list only the representations where $\bm{\rho}_{2,4}$ and $\bm{\rho}_{2,3}$ give different results. Roman numeral III stands for the sum of two $\mathbf{1}$'s and a cuspidal newform of weight~3 from $\bm{\rho}_{2,3}$.
Our tables do not give the Hecke polynomials of the $T(\ell,k)$. This is because the Hecke polynomials can easily be recovered from the Galois representation. For example, all the $T$'s for a type~I representation have Hecke polynomial $(x+1)^4$. The Hecke polynomials for the~$U(\ell,k)$ are described below. The list of~$\ell$'s used for a given~$N$ was given in Table~\ref{table_L}.
For types~II, III, and~IV, we give details about the cusp form in the third column of each row. The coefficients of the $q$-expansions are in~$\mathbb{Q}$ unless the number field is indicated. We write $i = \sqrt{-1}$ as usual.
We observe that the results for types~I, $\mathrm{I}_m$, and~II are always the same at a given level for $\bm{\rho}_{2,4}$ and $\bm{\rho}_{2,3}$. The only differences we see are when type~IV changes to type~III. It is somewhat surprising that the type~II representations never change between $\bm{\rho}_{2,4}$ and $\bm{\rho}_{2,3}$. For $\bm{\rho}_{2,4}$ and weight~2, we always searched for cusp forms on $\Gamma_0(N_1)$, which means $\Gamma_1(N_1)$ with trivial nebentype. For $\bm{\rho}_{2,3}$ and weight~2, we searched for cusp forms of all nebentypes. The observation is that our program produced a weight~2 cusp form for some nebentype if and only if that nebentype was trivial.
The same cusp form can appear at the same level~$N$ for different simultaneous eigenspaces. This reflects the different embeddings of the number fields into~$\mathbb{F}$. For example, in the last table of~(\ref{resultsS}), with $\bm{\rho}_{2,4}$ and $N=59$, the same weight-2 cusp form appears four times, in all four of the type~II representations. The cusp form is defined over a quintic extension of~$\mathbb{Q}$. We find that~2 factors in the quintic field as a product $\mathfrak{p}_1 \mathfrak{p}_2$ of prime ideals, where~$\mathfrak{p}_1$ is unramified and has residue class field $\mathbb{F}_8$, while~$\mathfrak{p}_2$ has ramification index~2 and residue class field $\mathbb{F}_2$. The first three occurrences of the cusp form belong to~$\mathfrak{p}_1$. Let $\varpi$ be a root of $x^3 + x + 1 = 0$ in $\mathbb{F}_8$. The Hecke polynomials for the first representation are \begin{eqnarray*} (x+1)^2 \cdot (x^2 + \varpi^2 x + 1) &\qquad& (\ell=3) \\ (x+1)^2 \cdot (x^2 + \varpi x + 1) &\qquad& (\ell=5) \\ (x+1)^2 \cdot (x^2 + (\varpi^2 + \varpi) x + 1) &\qquad& (\ell=7) \end{eqnarray*} The Galois group $\Gal(\mathbb{F}_8/\mathbb{F}_2)$ permutes $\varpi$, $\varpi^2$, and $\varpi^4 = \varpi+\varpi^2$ in a three-cycle. Checking the Hecke polynomials of the second and third Galois representations, we see that these three representations (those with eigenspaces of dimension~4) are permuted in a three-cycle by the Galois group. The fourth occurrence of the cusp form (dimension~15) is for~$\mathfrak{p}_2$; here the coefficients of the Hecke polynomials are down in~$\mathbb{F}_2$.
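The three-cycle can be checked directly with a small model of $\mathbb{F}_8$ (an illustrative sketch; the encoding of field elements as 3-bit integers is ours):

```python
# Check that squaring -- the Frobenius of F_8 over F_2 -- cycles the three
# roots varpi, varpi^2, varpi^4 = varpi + varpi^2 of x^3 + x + 1, matching
# the permutation of the three Hecke polynomials described above.
def mul(a, b):  # F_8 as GF(2)[x]/(x^3 + x + 1), elements as 3-bit ints
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011
    return r

def frob(a):
    return mul(a, a)  # the Frobenius x -> x^2

w = 0b010                      # varpi, the class of x
w2, w4 = frob(w), frob(frob(w))
assert w4 == w ^ w2            # varpi^4 = varpi + varpi^2
assert frob(w4) == w           # Frobenius closes the three-cycle
for a in (w, w2, w4):          # each is indeed a root of x^3 + x + 1
    assert mul(mul(a, a), a) ^ a ^ 1 == 0
print("three-cycle confirmed")
```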
For a given~$N$, the sum of the dimensions of the simultaneous eigenspaces is often less than the dimension of the full~$H_1$. This is because many Hecke operators, both $T(\ell,k)$ and $U(\ell,k)$, turn out not to be semisimple.
Every level~$N$ we have computed has some representations of type~I. A few have type $\mathrm{I}_m$. To avoid cluttering the tables, we list these representations here. The notation $(N,d)$ means level~$N$ has a representation with corresponding eigenspace of dimension~$d$. When the same~$N$ occurs in more than one pair, there are simultaneous eigenspaces where the $T(\ell,k)$ act the same but the $U(\ell,k)$ act differently. \begin{itemize} \item The type~I representations that appear to be attached to our
data are $(3,1)$, $(4,1)$, $(5,1)$, $(6,5)$, $(7,3)$, $(8,6)$,
$(9,1)$, $(9,4)$, $(10,7)$, $(11,1)$, $(12,19)$, $(13,1)$,
$(14,13)$, $(15,14)$, $(16,17)$, $(17,6)$, $(18,5)$, $(18,16)$,
$(19, 1)$, $(20,30)$, $(21,16)$, $(22, 5)$, $(23,3)$, $(24,55)$,
$(25,1)$, $(25,9)$, $(26,7)$, $(27,1)$, $(27,3)$, $(27,4)$,
$(28,43)$, $(29,1)$, $(30,59)$, $(31,3)$, $(32,40)$, $(33, 14)$,
$(34,29)$, $(35,18)$, $(36, 19)$, $(36,50)$, $(37,1)$, $(38,5)$,
$(39,21)$, $(41,8)$, $(43,10)$, $(47,3)$, $(53,1)$, $(59,1)$.
\item We find one type $\mathrm{I}_9$ representation of dimension~2
at level~27.
\item We find one type $\mathrm{I}_7$ representation of dimension~4
at level~35. \end{itemize}
We use the operators $U(\ell,k)$ to divide up the simultaneous eigenspaces as finely as possible, and we compute their Hecke polynomials, but we do not consider the $U(\ell,k)$ when attaching Galois representations. Again, to avoid cluttering the tables with $U(\ell,k)$ data, we summarize their Hecke polynomials here. The general rule is that, when $\ell$ is an odd prime dividing~$N$, the Hecke polynomial of~$U$ is $x^4+x^3+x^2+x+1$. We list the exceptions in the format ($N$, $U_\ell$, $d$), which means that for all the representations with a $d$-dimensional eigenspace we have found at level~$N$, the operator $U(\ell,k)$ has the Hecke polynomial described. \begin{itemize} \item The Hecke polynomial is $(x^2+x+1)^2$ for (9, $U_3$, 4), (18,
$U_3$, 16), (25, $U_5$, 2 or 9), (27, $U_3$, 2 or 4) and (36, $U_3$,
50). \item The Hecke polynomial is $x^4+x^3+1$ for (27, $U_3$, 3). \item Let $\omega$ be a primitive cube root of unity in $\mathbb{F}_4$. At
level $N=33$, the Hecke polynomial for $U_3$ is $x^4 + \omega x^3 +
x^2 + \omega x + 1$ on one of the 4-dimensional eigenspaces; for
the other 4-dimensional eigenspace, it is the polynomial's
conjugate under $\Gal(\mathbb{F}_4/\mathbb{F}_2)$, namely $x^4 + (\omega+1) x^3 +
x^2 + (\omega+1) x + 1$. At level $N=39$, the same pair of
conjugate Hecke polynomials occurs for $U_3$ on the pair of
2-dimensional eigenspaces, for both types III and~IV. \item The Hecke polynomial is $x^4+x+1$ for (33, $U_3$, 9), and also
for (39, $U_3$, 4) for both types III and~IV. \end{itemize}
\subsection{}\label{resultsS} Here are the results of types~II and~IV for $\bm{\rho}_{2,4}$. (Types~I and~$\mathrm{I}_m$ were described above.)
\newcolumntype{x}[1]{>{\RaggedRight}p{#1}}
\begin{center}
\begin{longtable}{|l|l|x{5in}|}
\hline \multicolumn{3}{|l|}{\textbf{Level 11}. Dimension 5.} \\ \hline II & 4 & $\rho_{2,11} = q - 2q^{2} - q^{3} + 2q^{4} + q^{5} + O(q^{6})$ \\ \hline \hline
\multicolumn{3}{|l|}{\textbf{Level 13}. Dimension 5.} \\ \hline IV & 2 & $\rho_{4,13} = q - 5q^{2} - 7q^{3} + 17q^{4} - 7q^{5} + O(q^{6})$ \\ \hline \hline
\multicolumn{3}{|l|}{\textbf{Level 19}. Dimension 9.} \\ \hline II & 4 & $\rho_{2,19} = q - 2q^{3} - 2q^{4} + 3q^{5} + O(q^{6})$ \\ \hline IV & 2 & $\rho_{4,19} = q - 3q^{2} - 5q^{3} + q^{4} - 12q^{5} + O(q^{6})$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 23}. Dimension 12.} \\ \hline II & 9 & $q + b_{0}q^{2} + \left(-2 b_{0} - 1\right)q^{3} + \left(-b_{0} - 1\right)q^{4} + 2 b_{0}q^{5} + O(q^{6}),$ with $b_0 = (-1 + \sqrt{5})/2$. \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 25}. Dimension 14.} \\ \hline IV & 2 & $q - q^{2} - 7q^{3} - 7q^{4} + O(q^{6})$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 26}. Dimension 25.} \\ \hline IV & 10 & $\rho_{4,13}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 27}. Dimension 20.} \\ \hline II & 4 & $q - 2q^{4} + O(q^{6})$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 29}. Dimension 17.} \\ \hline II & 5 & $q + b_{0}q^{2} - b_{0}q^{3} + \left(-2 b_{0} - 1\right)q^{4} - q^{5} + O(q^{6}),$ with $b_0 = -1 + \sqrt{2}$. \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 31}. Dimension 16.} \\ \hline II & 9 & $q + b_{0}q^{2} - 2 b_{0}q^{3} + \left(b_{0} - 1\right)q^{4} + q^{5} + O(q^{6}),$ with $b_0 = (1 + \sqrt{5})/2$. \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 33}. Dimension 35.} \\ \hline II & 4 & $\rho_{2,11}$ \\ \hline II & 4 & $\rho_{2,11}$ \\ \hline II & 9 & $\rho_{2,11}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 37}. Dimension 21.} \\ \hline II & 12 & $q - 2q^{2} - 3q^{3} + 2q^{4} - 2q^{5} + O(q^{6})$ \\ \hline IV & 2 & $\rho_{4,37} = q + b_{0}q^{2} + \left(-\frac{1}{8} b_{0}^{3} - \frac{9}{8} b_{0}^{2} - \frac{13}{4} b_{0} - \frac{11}{4}\right)q^{3}+ \left(b_{0}^{2} - 8\right)q^{4} $ \newline $+ \left(\frac{13}{8} b_{0}^{3} + \frac{85}{8} b_{0}^{2} + \frac{25}{4} b_{0} - \frac{93}{4}\right)q^{5}+ O(q^{6}),$ with $b_{0}^{4} + 6 b_{0}^{3} - b_{0}^{2} - 16 b_{0} + 6=0$. \\ \hline IV & 2 & $\rho_{4,37}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 38}. Dimension 40.} \\ \hline II & 15 & $\rho_{2,19}$ \\ \hline IV & 10 & $\rho_{4,19}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 39}. Dimension 41.} \\ \hline IV & 2 & $\rho_{4,13}$ \\ \hline IV & 2 & $\rho_{4,13}$ \\ \hline IV & 4 & $\rho_{4,13}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 43}. Dimension 26.} \\ \hline IV & 2 & $\rho_{4,43} = q + b_{0}q^{2} + \left(\frac{1}{8} b_{0}^{3} + \frac{1}{8} b_{0}^{2} - \frac{7}{2} b_{0} - \frac{13}{4}\right)q^{3} + \left(b_{0}^{2} - 8\right)q^{4} + \left(-\frac{9}{8} b_{0}^{3} - \frac{33}{8} b_{0}^{2} + \frac{19}{2} b_{0} + \frac{5}{4}\right)q^{5} + O(q^{6}),$ with $b_{0}^{4} + 4 b_{0}^{3} - 9 b_{0}^{2} - 14 b_{0} + 2=0$. \\ \hline IV & 2 & $\rho_{4,43}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 47}. Dimension 25.} \\ \hline II & 9 & $\rho_{2,47} = q + b_{0}q^{2} + \left(b_{0}^{3} - b_{0}^{2} - 6 b_{0} + 4\right)q^{3} + \left(b_{0}^{2} - 2\right)q^{4} + \left(-4 b_{0}^{3} + 2 b_{0}^{2} + 20 b_{0} - 10\right)q^{5} + O(q^{6}),$ with $b_{0}^{4} - b_{0}^{3} - 5 b_{0}^{2} + 5 b_{0} - 1=0$. \\ \hline II & 9 & $\rho_{2,47}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 53}. Dimension 33.} \\ \hline II & 8 & $q - q^{2} - 3q^{3} - q^{4} + O(q^{6})$ \\ \hline IV & 2 & $q + b_{1}q^{2} + \left(\frac{1}{14} b_{1}^{3} - \frac{3}{14} b_{1}^{2} - \frac{37}{14} b_{1} - \frac{3}{2}\right)q^{3}+\left(b_{1}^{2} - 8\right)q^{4} + \left(-\frac{5}{14} b_{1}^{3} - \frac{13}{14} b_{1}^{2} + \frac{31}{14} b_{1} + \frac{3}{2}\right)q^{5}+ O(q^{6}),$ with $b_{1}^{4} + 4 b_{1}^{3} - 16 b_{1}^{2} - 42 b_{1} + 49=0$. \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 59}. Dimension 36.} \\ \hline II & 4 & $\rho_{2,59} = q + b_{0}q^{2} + \left(-\frac{1}{4} b_{0}^{4} + \frac{5}{4} b_{0}^{2} - \frac{1}{2} b_{0}\right)q^{3} + \left(b_{0}^{2} - 2\right)q^{4} $ \newline $ + \left(\frac{3}{4} b_{0}^{4} + \frac{1}{2} b_{0}^{3} - \frac{23}{4} b_{0}^{2} - 3 b_{0} + 7\right)q^{5} + O(q^{6}),$ with $b_{0}^{5} - 9 b_{0}^{3} + 2 b_{0}^{2} + 16 b_{0} - 8=0$. \\ \hline II & 4 & $\rho_{2,59}$ \\ \hline II & 4 & $\rho_{2,59}$ \\ \hline II & 15 & $\rho_{2,59}$ \\ \hline IV & 4 & $q + b_{1}q^{2} + \left(-3 b_{1} + 1\right)q^{3} + \left(b_{1} - 4\right)q^{4} + \left(3 b_{1} - 17\right)q^{5} + O(q^{6}),$ with $b_1 = (1 + \sqrt{17})/2.$ \\ \hline \end{longtable} \end{center}
\subsection{}\label{resultsB} Here are the results for $\bm{\rho}_{2,3}$, where they differ from $\bm{\rho}_{2,4}$.
\begin{center}
\begin{longtable}{|l|l|p{5in}|}
\hline \multicolumn{3}{|l|}{\textbf{Level 13}. Dimension 5.} \\ \hline III & 2 & $\rho_{3,13} = q + b_{0}q^{2} + \left(\left(i - 1\right) b_{0} - 3\right)q^{3} + \left(\left(-2 i - 2\right) b_{0} - i\right)q^{4}+ \left(b_{0} + 3 i + 3\right)q^{5} + O(q^{6}),$ for $\Gamma_1(13)$ with nebentype mod 13 mapping $2 \mapsto i$, with coefficients in $\mathbb{Q}(i)[b_{0}]/(b_{0}^{2} + \left(2 i + 2\right) b_{0} - 3 i)$. \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 19}. Dimension 9.} \\ \hline III & 2 & $\rho_{3,19} = q + b_{1}q^{2} - b_{1}q^{3} - 9q^{4} + 4q^{5} + O(q^{6}),$ for $\Gamma_1(19)$ with nebentype mod 19 mapping $2 \mapsto -1$, where $b_1 = \sqrt{-13}$. \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 25}. Dimension 14.} \\ \hline III & 2 & $q + b_{0}q^{2} + i b_{0}q^{3} - iq^{4} + O(q^{6}),$ for $\Gamma_1(25)$ with nebentype mod 25 mapping $2 \mapsto i$, with coefficients in $\mathbb{Q}(i)[b_{0}]/(b_{0}^{2} - 3 i)$. \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 26}. Dimension 25.} \\ \hline III & 10 & $\rho_{3,13}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 37}. Dimension 21.} \\ \hline III & 2 & $\rho_{3,37} = q + b_{0}q^{2} + \left(\frac{1}{4} i b_{0}^{4} + \left(\frac{1}{4} i - \frac{1}{4}\right) b_{0}^{3} + \frac{11}{4} b_{0}^{2} + \left(\frac{5}{4} i + \frac{5}{4}\right) b_{0} - 3 i\right)q^{3} + \left(b_{0}^{2} - 4 i\right)q^{4} + \left(-\frac{1}{4} i b_{0}^{5} + \left(-\frac{1}{2} i + \frac{1}{2}\right) b_{0}^{4} - \frac{13}{4} b_{0}^{3} + \left(-5 i - 5\right) b_{0}^{2} + \frac{17}{2} i b_{0} + 6 i - 6\right)q^{5} + O(q^{6}),$ \newline for $\Gamma_1(37)$ with nebentype mod 37 mapping $2 \mapsto i$, with coefficients in \newline $\mathbb{Q}(i)[b_{0}]/(b_{0}^{6} + \left(3 i + 3\right) b_{0}^{5} - 10 i b_{0}^{4} + \left(-34 i + 34\right) b_{0}^{3} - 5 b_{0}^{2} + \left(-59 i - 59\right) b_{0} - 24 i)$. \\ \hline III & 2 & $\rho_{3,37}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 38}. Dimension 40.} \\ \hline III & 10 & $\rho_{3,19}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 39}. Dimension 41.} \\ \hline III & 2 & $\rho_{3,39} = q + b_{0}q^{2} + \left(\left(i - 1\right) b_{0} - 3\right)q^{3} + \left(\left(-2 i - 2\right) b_{0} - i\right)q^{4} + \left(b_{0} + 3 i + 3\right)q^{5} + O(q^{6}),$ for $\Gamma_1(13)$ with nebentype mod 13 mapping $2 \mapsto i$, with coefficients in $\mathbb{Q}(i)[b_{0}]/(b_{0}^{2} + \left(2 i + 2\right) b_{0} - 3 i)$. \\ \hline III & 2 & $\rho_{3,39}$ \\ \hline III & 4 & $\rho_{3,39}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 43}. Dimension 26.} \\ \hline III & 2 & $\rho_{3,43} = q + b_{1}q^{2} + \left(-\frac{1}{4} b_{1}^{5} - \frac{15}{4} b_{1}^{3} - \frac{25}{2} b_{1}\right)q^{3} + \left(b_{1}^{2} + 4\right)q^{4} + \left(\frac{1}{4} b_{1}^{5} + \frac{11}{4} b_{1}^{3} + \frac{9}{2} b_{1}\right)q^{5} + O(q^{6}),$ for $\Gamma_1(43)$ with nebentype mod 43 mapping $3 \mapsto -1$, with coefficients in $\mathbb{Q}[b_{1}]/(b_{1}^{6} + 20 b_{1}^{4} + 121 b_{1}^{2} + 214)$. \\ \hline III & 2 & $\rho_{3,43}$ \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 53}. Dimension 33.} \\ \hline III & 2 & $q + b_{0}q^{2} +\bigl(-\frac{39}{578} i b_{0}^{7} + \left(-\frac{91}{578} i + \frac{91}{578}\right) b_{0}^{6} - \frac{649}{578} b_{0}^{5} + \left(-\frac{1499}{578} i - \frac{1499}{578}\right) b_{0}^{4} + \frac{2233}{578} i b_{0}^{3} + \left(\frac{2666}{289} i - \frac{2666}{289}\right) b_{0}^{2} + \frac{219}{578} b_{0} + \frac{861}{289} i + \frac{861}{289}\bigr)q^{3}+\left(b_{0}^{2} - 4 i\right)q^{4} $\newline$+\bigl(-\frac{15}{578} b_{0}^{7} + \left(-\frac{35}{578} i - \frac{35}{578}\right) b_{0}^{6} +\frac{383}{578} i b_{0}^{5} + \left(\frac{621}{578} i - \frac{621}{578}\right) b_{0}^{4} +\frac{2993}{578} b_{0}^{3} + \left(\frac{1181}{289} i + \frac{1181}{289}\right) b_{0}^{2} - \frac{6131}{578} i b_{0} - \frac{220}{289} i + \frac{220}{289}\bigr)q^{5} + O(q^{6}),$ for $\Gamma_1(53)$ with nebentype mod 53 mapping $2 \mapsto i$, with coefficients in $\mathbb{Q}(i)[b_{0}]/(b_{0}^{8} + \left(3 i + 3\right) b_{0}^{7} - 16 i b_{0}^{6} + \left(-52 i + 52\right) b_{0}^{5} - 48 b_{0}^{4} + \left(-207 i - 207\right) b_{0}^{3} - 26 i b_{0}^{2} + \left(122 i - 122\right) b_{0} - 7)$. \\ \hline \pagebreak[2]\hline
\multicolumn{3}{|l|}{\textbf{Level 59}. Dimension 36.} \\ \hline III & 4 & $q + b_{1}q^{2} + \left(\frac{1}{4} b_{1}^{4} + \frac{9}{2} b_{1}^{2} + \frac{65}{4}\right)q^{3} + \left(b_{1}^{2} + 4\right)q^{4} + \left(\frac{1}{4} b_{1}^{4} + \frac{11}{2} b_{1}^{2} + \frac{93}{4}\right)q^{5} + O(q^{6}),$ for $\Gamma_1(59)$ with nebentype mod 59 mapping $2 \mapsto -1$, with coefficients in $\mathbb{Q}[b_{1}]/(b_{1}^{6} + 27 b_{1}^{4} + 215 b_{1}^{2} + 509)$. \\ \hline \end{longtable} \end{center}
\end{document} |
\begin{document}
\title{Approximations for Generalized Unsplittable Flow on Paths with Application to Power Systems Optimization\thanks{This work was supported by the Khalifa University under Awards No. CIRA-2020-286 and KKJRC-2019-Trans1.}}
\titlerunning{Approximations for Unsplittable Stairstep Flow on Paths}
\author{Areg Karapetyan \and Khaled Elbassioni\and Majid Khonji \and Sid Chi-Kin Chau}
\authorrunning{A.\, Karapetyan, K.\, Elbassioni, M.\, Khonji and S.\, C.-K. Chau}
\institute{A.\, Karapetyan \and K.\, Elbassioni \and M.\, Khonji \at Department of Electrical Engineering and Computer Science, Khalifa University, UAE
\email{\{areg.karapetyan, khaled.elbassioni, majid.khonji\}@ku.ac.ae}
\and
S.\, C.-K. Chau \at
Research School of Computer Science, Australian National University, Canberra, Australia
\email{sid.chau@anu.edu.au} }
\maketitle
\begin{abstract}
The \textit{Unsplittable Flow on a Path} (UFP) problem has garnered considerable attention as a challenging combinatorial optimization problem with notable practical implications. Steered by its pivotal applications in \textit{power engineering}, the present work formulates a novel generalization of UFP, wherein demands and capacities in the input instance are monotone step functions over the set of edges. As an initial step towards tackling this generalization, we draw on and extend ideas from prior research to devise a quasi-polynomial time approximation scheme (QPTAS) under the premise that the demands and capacities lie in a quasi-polynomial range. Second, retaining the same assumption, an efficient logarithmic approximation is introduced for the single-source variant of the problem. Finally, we round out the contributions by designing a (kind of) black-box reduction that, under some mild conditions, allows one to translate LP-based approximation algorithms for the studied problem into their counterparts for the \textit{Alternating Current Optimal Power Flow} (AC OPF) problem -- a fundamental workflow in operation and control of power systems.
\keywords{Unsplittable Flow Problem \and QPTAS \and LP Rounding \and Logarithmic Approximation \and Power Systems Engineering \and AC Optimal Power Flow.} \end{abstract}
{\section*{Nomenclature}
\vspace*{-20pt} \def\arraystretch{1.35} \begin{table}[H]
\centering
\begin{tabular}{|cl|}
\hline
\multicolumn{1}{|c|}{\textbf{Notation}} & \textbf{Description} \\ \hline
\multicolumn{2}{r}{ \rule{0pt}{3ex}\textit{\fontsize{6.5}{6}\selectfont Summary of key notations related to the generalized UFP}} \\ \hline
$\mathcal G$& Line graph \\
$\ensuremath{\mathcal{V}}$& Set of vertices (indexed by $i$ or $j$) \\
$\ensuremath{\mathcal{E}} $& Set of $m$ edges (indexed by $e$ or $(i, j)$) \\
$\mathcal I$& Set of $n$ users (indexed by $k$) \\
$\ensuremath{\mathcal{Q}}$& Grouping (of cardinality $Q$) of users based on utility-to-demand ratio \\
$\mathcal I^{q}$& Set of users in group $q \in \ensuremath{\mathcal{Q}}$ \\
$\mathcal L^q$& Set of users with ``large'' demands in group $q \in \ensuremath{\mathcal{Q}}$ \\
$\mathcal S^q$& Set of users with ``small'' demands in group $q \in \ensuremath{\mathcal{Q}}$ \\
\hline \hline
$d$& Number of dimensions \\
$u_k$& User $k$'s utility value \\
$x_k$& Decision variable for user $k$ \\
$f_k^r (\cdot)$& User $k$'s demand function over $\ensuremath{\mathcal{E}} $ in dimension $r \in \{1,2, \hdots, d\}$ \\
$e_k^r, \hat{e}_k^r$& User $k$'s demand function's binding edges in dimension $r \in \{1,2, \hdots, d\}$ \\
$c^r (\cdot)$& Capacity function over $\ensuremath{\mathcal{E}} $ in dimension $r \in \{1,2, \hdots, d\}$ \\ \hline\hline
$T_1, \hdots , T_d$& $d$ positive integers \\
$T$& Maximum among $T_1, \hdots , T_d$ \\
$C_1, \hdots , C_d$& $d$ integers each greater than $1$ \\
$P_r$& Number of edge partitions in dimension $r \in \{1,2, \hdots, d\}$ \\
$\epsilon$& Constant in $(0,1)$ \\\hline
\multicolumn{2}{r}{ \rule{0pt}{3ex}\textit{\fontsize{6.5}{6}\selectfont Summary of key notations related to AC OPF}} \\ \hline
$\mathcal T$& Graph of a radial distribution network \\
$\ensuremath{\mathcal{V}}^+$& Set of vertices excluding the root $0$ \\
$\ensuremath{\mathcal{V}}_i^+$& Set of vertices in $\ensuremath{\mathcal{V}}^+$ excluding the node $i$ \\
$\mathcal N$& Set of all users (electrical loads) (indexed by $k$) \\
$\mathcal U_j$& Set of users at node $j$ \\
$\mathcal N_j$& Set of users residing on the subpath rooted at node $j$ \\
$\mathcal F$& Set of users with elastic power demands \\
$\ensuremath{\mathcal{P}}_j$& The (unique) path from node $j$ to the root $0$\\\hline \hline
$s_k$& User $k$'s complex power demand \\
$z_{i,j}$& Impedance of power line $(i,j)$ \\
$V_j$& Voltage at node $j$ \\
$v_j$& Voltage magnitude square at node $j$ \\
$I_{i,j}$& Current traversing through line $(i,j)$ \\
$l_{i,j}$& Squared magnitude of current flowing through line $(i,j)$ \\
$S_{i,j}$& Complex power flowing from node $i$ to node $j$ \\ \hline
\end{tabular}
\label{tab:my-table} \end{table}} \def\arraystretch{1}
\section{Introduction}
The UFP, in its most generic form, takes as input a capacitated line graph along with a collection of flow requests, each parameterized by a demand, a profit (utility) and a pair of source-sink vertices. Constrained by edge capacities, the pursued objective is to compute a maximum profit subset of requests routable simultaneously. Despite its apparent simplicity, UFP specializes to a number of classical NP-hard combinatorial problems, including the Knapsack problem (when the graph comprises a solitary edge) and the Maximum Edge-disjoint Path problem (when all demands and capacities are set to unity). On the practical side, this problem underlies a spectrum of real-world applications in communication networks~\citep{Bar-Noy2001}, space missions~\citep{Hall1994}, the Web~\citep{Albers:1999} and data centers management~\citep{BCES06}, to name a few.
Recently, several studies have revisited UFP, generalizing it from different perspectives. In~\citep{10.1007/978-3}, UFP is extended to the Storage Allocation problem, where the requests are additionally characterized by a vertical position (i.e., height) and a coupling constraint enforces a non-overlapping placement of the requests. Another line of work~\citep{Adamaszek2018} adapted the problem to the setting of a submodular objective function, motivated by its theoretical and practical appeal. Expanding the application scope further, this paper introduces a novel generalization of UFP. Previously, Cook et al.~\citep{BFb0055088} developed an interesting framework linking electrical power transmission with unsplittable flow on general graphs. Solidifying this nexus, we establish a formal bond between UFP and AC OPF, an essential problem in power systems engineering introduced by Carpentier in 1962~\citep{C62} (see Section~\ref{elecflows} for particulars). Formally, the proposed generalization is defined as follows. \paragraph{\textbf{Generalization of UFP:}} In the {\it $d$-dimensional Unsplittable Stairstep Flow on a Path} ($d$-USFP) problem, defined here for a fixed positive integer $d\in\mathbb Z_+$, the input is a line network $\mathcal G=(\ensuremath{\mathcal{V}},\ensuremath{\mathcal{E}} )$ rooted at node $0$ and a set $\mathcal I$ of $n$ users. Assuming an ascending ordering of the edges by distance from the root {(i.e., $e_1<e_2<\ldots<e_m$, where $e_i=(i-1,i)$)}, the demand of each user $k$ is captured by a $d$-dimensional vector $f_k=(f_k^1,\ldots,f_k^d)$, where, for all $r$, $f_k^r:\ensuremath{\mathcal{E}} \to\mathbb R_+$ is either a monotone non-increasing or a monotone non-decreasing step function over $\ensuremath{\mathcal{E}} $ (i.e., $f_k^r$ is monotone non-decreasing if $f_k^r(e)\le f_k^r(e')$ whenever $e\le e'$). 
As with UFP, if $f_k$ is satisfied (routed), $u_k \geq 0$ is the utility perceived by customer $k$, and each edge $e\in\ensuremath{\mathcal{E}} $ is associated with a capacity, which in the current context is a $d$-dimensional vector $c=(c^1,\ldots,c^d)$, where $c^r:\ensuremath{\mathcal{E}} \to\mathbb R_+$ is a monotone non-decreasing function on $\ensuremath{\mathcal{E}} $. With this input, $d$-USFP takes the form \begin{align} \big(d\text{-USFP}[\mathcal I,c]\big)&\quad \max_{\substack{x}} \sum_{k\in\mathcal I}u_kx_k, \notag \\ \text{s.t.} \ & \quad \sum_{k\in \mathcal I}f_k^r(e)x_k\le c^r(e),~\quad \forall~e\in\ensuremath{\mathcal{E}} ,~r\in\{1,...,d\}\label{ee0}\\ & \quad x_k \in \{0,1\},\quad \forall ~k \in \mathcal I \,.\label{ee1} \end{align} In the above formulation, we may assume without loss of generality (by reversing the order on $\ensuremath{\mathcal{E}} $ if necessary) that $f_k^r(\cdot)$ is monotone non-decreasing for all $k \in \mathcal I$ and $r \in \{1,...,d\}$. While $d$-USFP can be defined for any such $f_k^r(\cdot)$, this paper confines its scope to functions of {\it separable form}. More precisely, given positive integers $T_1,\ldots,T_d$, monotone (non-decreasing) functions $b^{r,t}:\ensuremath{\mathcal{E}} \to\mathbb R_+$ for $t=1,\ldots,T_r$, $r\in[d]$, as well as non-negative numbers $a^{r,t}_k\in\mathbb R_+$ and edges $e^{r}_k,\hat e^{r}_k\in\ensuremath{\mathcal{E}} $ for $t=1,\ldots,T_r$, it is assumed that \begin{align} \label{funct} f_k^r(e)=\sum_{t=1}^{T_r}a_k^{r,t}\tilde b^{r,t}_k(e), \text{ where } \tilde b^{r,t}_k(e)=\left\{ \begin{array}{ll} 0&\text{ if }e< e^{r}_k,\\ b^{r,t}(e)&\text{ if }e^{r}_k\le e< \hat e^{r}_k,\\ b^{r,t}(\hat e^{r}_k)&\text{ otherwise.} \end{array} \right. 
\end{align} This choice of functions\footnote{Note that, for all $r \in \{1,...,d\}$, the capacity function $c^r(\cdot)$ adheres to this form trivially with $T_r=a^{r,t}=1, b^{r,t}(\cdot)=c^r(\cdot)$ for all $t$ and $e^{r} = e_1, \hat e^{r}=e_m$.} stems from the relevant structural properties of OPF constraints, as elaborated in Section~\ref{sec:pre}. Yet, even with this condition in place, $d$-USFP remains substantially more complicated than UFP, as it entails the packing of monotone step functions of a special type (rather than intervals) within a given capacity function. Hence, known techniques for UFP, where applicable, must be extended in a non-trivial manner to deal with $d$-USFP. In the following paragraphs, we briefly review these techniques.
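For concreteness, the separable form~\raf{funct} can be evaluated directly; the following is a minimal illustrative sketch (hypothetical Python, not part of the formal development), assuming edges are indexed $0,\ldots,m-1$ and the breakpoints $e_k^r$, $\hat e_k^r$ are given as edge indices:

```python
# Hypothetical sketch: evaluating a separable demand function f_k^r(e).
# b[t] lists the monotone base-function values b^{r,t}(e_0),...,b^{r,t}(e_{m-1});
# a[t] is the coefficient a_k^{r,t}; e_k and e_hat_k are the breakpoint edges.

def demand(e, a, b, e_k, e_hat_k):
    total = 0.0
    for a_t, b_t in zip(a, b):
        if e < e_k:
            contrib = 0.0           # demand has not started yet
        elif e < e_hat_k:
            contrib = b_t[e]        # follows the base function
        else:
            contrib = b_t[e_hat_k]  # saturates past e_hat_k
        total += a_t * contrib
    return total
```

Monotonicity of each $b^{r,t}$ then makes the resulting $f_k^r$ a monotone non-decreasing step function that vanishes before $e_k^r$ and saturates beyond $\hat e_k^r$.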
\paragraph{\textbf{Related Work:}}\label{subsec:litreview}
As noted previously, UFP is NP-hard since it specializes to the Knapsack problem. In fact, even in the setting of uniform profits and capacities, it is strongly NP-hard~\citep{Chrobak2012}. In light of this hurdle, most prior studies addressed simplified variants of UFP, the two most common being the uniform capacity UFP (UCUFP) and the UFP with the {\it no-bottleneck assumption} (UFP-NBA).
For UCUFP, which was also studied under the name of the Resource Allocation problem, the first constant factor approximation was presented in~\citep{1099-1425}, attaining a $6$-approximation via LP rounding techniques. This factor was then refined to $(2 + \epsilon)$ by~\citep{3-540-47867}. The approach therein decouples the instance into small and large requests, subsequently tackling the former in a fashion analogous to~\citep{1099-1425} and the latter through dynamic programming.
More general than UCUFP, UFP-NBA only restricts the maximum demand to be at most the minimum capacity over all edges. The significance of this condition is rooted in the integrality gap of the natural LP relaxation of UFP, which was shown to be $\Omega(n)$ in~\citep{CCGK07}, whereas that of UFP-NBA is O$(1)$. For UFP-NBA, the first constant factor approximation was derived in~\citep{CCGK07}. Improving upon this, Chekuri et al.~\citep{CMS07} obtained a $(2 + \epsilon)$-approximation. Both studies broadly follow the aforementioned framework of decomposing the requests into small and large.
Turning to UFP, in 2006 Bansal et al.~\citep{BCES06} developed a deterministic QPTAS under the assumption that the capacities and demands are bounded by $2^{\operatorname{polylog}(n)}$, thereby ruling out UFP's APX-hardness (under standard complexity assumptions) and hinting at the likely existence of a PTAS. The first polynomial-time approximation algorithm for UFP, yielding an O$(\log n)$ guarantee, was introduced in~\citep{BFKS14}. The algorithm is combinatorial, thus bypassing the $\Omega(n)$ integrality gap of the natural LP relaxation. Later on, this result was improved in~\citep{anagnostopoulos2014mazing} to a $(2+\epsilon)$-approximation. {Beating the barrier of $2$, Grandoni et al.~\citep{Grandoni3188745} provided a dynamic programming-based polynomial time algorithm with an approximation factor of $(\frac{5}{3} + \epsilon)$, which was subsequently improved in~\citep{1.9781611977073.39} to $1 +\frac{1}{1+e} +\epsilon < 1.269$ (in expectation) via a novel randomized sketching technique. Closing the search for a PTAS, very recently Grandoni et al.~\citep{10.1145/3519935.3519959} devised a polynomial time $(1 + \epsilon)$-approximation algorithm which tackles UFP by rephrasing the problem as a solitary game; this is the best possible result unless P=NP.}
\paragraph{\textbf{Contributions and Paper Outline:}} This study advances extant research in the following two respects: \begin{itemize}
\item[\ding{227}] We introduce a practically-driven generalization of UFP and initiate the search for its efficient approximations. As a first step in this direction, we extend the ideas in~\citep{BCES06} to construct a QPTAS for separable $d$-USFP, under the assumption that the demands and capacities lie in a quasi-polynomial range. Second, relying on the same assumption, we devise an LP-based O$(d\log n)$-approximation for the single-source setting of the problem (i.e., when all the requests share the same origin). The algorithm hinges on a simple reduction that transforms the problem into an easier instance with only O$(d\log n)$ constraints.
\item[\ding{227}] A (kind of) black-box reduction is derived that, under some practical assumptions, translates an LP-based approximation for separable $d$-USFP into its analog for OPF on line distribution networks. This result complements the strand of research in~\citep{7590153, 8637153, KHONJI201934, Elbassioni2019, 9301221} concerned with developing efficient approximations tailored for combinatorial optimization of AC electric power systems. \end{itemize}
The remainder of this article is organized as follows. Section~\ref{sec:prelim} covers the adopted notation along with a basic result on the partitioning of the studied step functions. Section~\ref{sec:qpt} presents the QPTAS for separable $d$-USFP. In Section~\ref{sec:logarithmic} we provide the logarithmic approximation for single-source separable $d$-USFP. Section~\ref{elecflows} contains an overview of the AC OPF problem, followed by its mathematical formulation and the proposed reduction procedure producing LP-based approximations for OPF on line networks. Lastly, Section~\ref{conclrem} concludes the paper with a discussion of applications and implications of the present contributions, as well as prospective directions for further development.
\section{Notational Convention and Preliminaries}\label{sec:prelim}
{In what follows, unless otherwise explicitly mentioned, constants or variables are denoted in normal font (e.g., $C$, $d$), while sets are denoted by calligraphic capital letters (e.g., $\ensuremath{\mathcal{E}} $). We let $\mathbf 0$ and $\ensuremath{\boldsymbol{1}}$ symbolize the vectors of all zeros and ones, respectively, and as a shorthand, we shall write $[n]$ to encode the range $\{ 1,...,n \}$ for an integer $n$. Unless stated differently, we use the accents $\bar{~}$, $\underline{~}$ to denote the maximum and minimum values of a variable/parameter/function, respectively. Given a complex number $\nu\in\mathbb C$, we let $|\nu|$ be its {\it magnitude}, $\arg(\nu)$ be the {\it phase angle} that it makes with the real axis, $\nu^*$ be its complex {\it conjugate}, and write $\nu^{\rm R} \triangleq {\rm Re}(\nu)$, $\nu^{\rm I} \triangleq {\rm Im}(\nu)$ for its real and imaginary components, respectively. With a slight abuse of notation, we shall also use the superscript $^*$ to mark optimal solutions.}
In line with~\citep{BCES06}, we suppose the range of demands and capacities is quasi-polynomial. Mathematically, $$ \displaystyle \max\left\{\frac{\max_{e\in\ensuremath{\mathcal{E}} , k \in \mathcal I, r\in [d]}f_k^r(e)}{\min_{e\in\ensuremath{\mathcal{E}} , k \in \mathcal I, r\in [d]: f_k^r(e) > 0}f_k^r(e)},\frac{\max_{e\in\ensuremath{\mathcal{E}} , r\in [d]}c^r(e)}{\min_{e\in\ensuremath{\mathcal{E}} , r\in [d]: c^r(e) > 0}c^r(e)}\right\}=2^{\operatorname{polylog}(n)} . $$ This assumption is leveraged both in the QPTAS and in the logarithmic approximation; however, one can possibly discard it with techniques from~\citep{BGKMW15}.
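As an illustration only, the spread on the left-hand side can be computed as follows (a hypothetical Python sketch; the nested lists `f` and `c` are assumed to hold the demand values $f_k^r(e)$ and capacities $c^r(e)$):

```python
# Hypothetical sketch: the demand/capacity spread that the
# quasi-polynomial-range assumption bounds by 2^{polylog(n)}.
# f[k][r][e] and c[r][e] hold the non-negative demand and capacity values.

def spread(f, c):
    dem = [v for user in f for dim in user for v in dim]
    cap = [v for dim in c for v in dim]
    ratio = lambda vals: max(vals) / min(v for v in vals if v > 0)
    return max(ratio(dem), ratio(cap))
```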
The proposed approximations employ the following simple, yet crucial, lemma which, in a sense, states that the line can be partitioned into a logarithmic (in $n$) number of regions such that, for each user $k$, the function $f_k(\cdot)$ is roughly constant in each region.
\begin{lemma}\label{l5}For any $C_r>1$, $r\in[d]$,
$\ensuremath{\mathcal{E}} $ can be partitioned along each coordinate $r\in[d]$ into $P_r<T_r\log_{C_r} \big( \frac{\overline b^r}{\underline b^r}\big)$ intervals $\ensuremath{\mathcal{E}} ^r=\bigcup_{p=1}^{P_r}\ensuremath{\mathcal{E}} _p^r$, where $\ensuremath{\mathcal{E}} _p^r:=\{e_{\underline{i}(p,r)},e_{\underline{i}(p,r)+1},\ldots,e_{\overline{i}(p,r)}\}$,
and
$$\cdots<e_{\overline{i}(p-1,r)}<e_{\underline{i}(p,r)}<e_{\underline{i}(p,r)+1}<\cdots<e_{\overline{i}(p,r)}<e_{\underline{i}(p+1,r)}<\cdots,
$$
with the following property:
\begin{align}\label{property}
\overline{f}^{p,r}_k\le C_r \cdot \underline{f}^{p,r}_k, \quad \forall k\in\mathcal I,~ \forall p\in[P_r],~ \forall r\in[d],
\end{align}
where $\underline{b}^{r}:=\min_{e\in\ensuremath{\mathcal{E}} ,~t\in[T_r]:~b^{r,t}(e)>0} b^{r,t}(e)$, $\overline{b}^{r}:=\max_{e\in\ensuremath{\mathcal{E}} ,~t\in[T_r]} b^{r,t}(e)=\max_{t\in[T_r]}b^{r,t}(e_m)$, $\underline{f}^{p,r}_k:=\min_{e\in\ensuremath{\mathcal{E}} ^r_p:~f_k^r(e)>0}f_k^r(e)$ and $\overline{f}^{p,r}_k:=\max_{e\in\ensuremath{\mathcal{E}} ^r_p}f_k^r(e)=f_k^r(e_{\overline{i}(p,r)})$. \end{lemma}
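In algorithmic terms, the construction underlying the proof below amounts to a single greedy scan per base function: a new interval is opened whenever $b^{r,t}$ first exceeds $C_r$ times its value at the start of the current interval, and the resulting breakpoints are merged over $t$. A minimal illustrative sketch (hypothetical Python, with edges indexed $0,\ldots,m-1$):

```python
# Hypothetical sketch of the partition behind the lemma: for each monotone
# base function b_t (list of values over edges 0..m-1), record a breakpoint
# whenever the value first exceeds C times the value at the previous
# breakpoint; each b_t then varies by at most a factor C inside an interval.

def breakpoints(b_t, C):
    pts, last = [], None
    for j, v in enumerate(b_t):
        if v > 0 and (last is None or v > C * last):
            pts.append(j)
            last = v
    return pts

def partition(bs, C, m):
    # merge breakpoints over all base functions and cut the edge range
    cuts = sorted({j for b_t in bs for j in breakpoints(b_t, C)} | {0})
    return [list(range(cuts[i], cuts[i + 1] if i + 1 < len(cuts) else m))
            for i in range(len(cuts))]
```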
\begin{proof}
Fix $r\in[d]$. For $t\in[T_r]$,
let $j^{t,1}\in\ensuremath{\mathcal{V}}$ be the smallest index such that ${b}^{r,t}((j^{t,1},j^{t,1}+1))>0$, and for $\ell'=2,3,\ldots,$ let $j^{t,\ell'}\in\ensuremath{\mathcal{V}}$, be the smallest index such that
\begin{align}\label{eeee1}
b^{r,t}((j^{t,\ell'},j^{t,\ell'}+1)) &> C_r\cdot b^{r,t}((j^{t,\ell'-1},j^{t,\ell'-1}+1)).
\end{align}
{Let $\bar{\ell}$ be the largest index} for which~\raf{eeee1} is possible (if no such index exists, then the lemma follows with $P_r=1$), and {set $\ell_{t}:=\bar{\ell}+1$ and $j^{t,\ell_t}:=m$.} The inequality in~\raf{eeee1} implies that
$b^{r,t}(e_m)>C_r^{\ell_t-1}\cdot b^{r,t}((j^{t,1},j^{t,1}+1))$, which in turn implies that
\begin{align}\label{b1}
\ell_t\le \log_{C_r}\frac{b^{r,t}(e_m)}{ b^{r,t}((j^{t,1},j^{t,1}+1))}\le \log_{C_r} \Big( \frac{\overline b^r}{\underline b^r}\Big).
\end{align}
Moreover, \raf{eeee1} implies
\begin{align}\label{fff1}
\frac{{b}^{r,t}((j-1,j))}{{b}^{r,t}((j'-1,j'))}\le C_r,\quad\forall j,j'\in\{j^{t,\ell'}+1,\ldots,j^{t,\ell'+1}\},\forall \ell'=2,\ldots,\ell_t-1.
\end{align}
The set $\bigcup_{t\in[T_r]}\{j^{t,\ell'}:\ell'\in[\ell_t]\}\subseteq\ensuremath{\mathcal{V}}$ defines a partition of $\ensuremath{\mathcal{E}} $ into $P_r\le\sum_{t=1}^{T_r}(\ell_t-1)$ intervals $\ensuremath{\mathcal{E}} ^r_1,\ldots,\ensuremath{\mathcal{E}} ^r_{P_r}$. By \raf{b1},
\begin{align}
\label{b1--}
P_r< T_r\log_{C_r} \Big( \frac{\overline b^r}{\underline b^r}\Big).
\end{align}
Consider any interval $\ensuremath{\mathcal{E}} ^r_p:=\big\{e_{\underline{i}(p,r)},e_{\underline{i}(p,r)+1},\ldots,e_{\overline{i}(p,r)}\big\}$ in the partition. Then by~\raf{funct} and~\raf{fff1}, for any $e,{e}'\in\ensuremath{\mathcal{E}} ^r_p$, we have
$$
\tilde{b}^{r,t}_k(e)\le C_r\cdot \tilde{b}^{r,t}_k(e'),\quad \text{ whenever }e'\ge e^r_k
$$
and thus, it follows from~\raf{funct} that, whenever $f_k^r(e')>0$ (and hence $e'\ge e^r_k$), we have
\begin{align*}
f_k^r(e)&=\sum_{t=1}^{T_r}a_k^{r,t}\tilde b^{r,t}_k(e)\le \sum_{t=1}^{T_r}a_k^{r,t} C_r \tilde b^{r,t}_k(e')
\le C_r f_k^r(e'),
\end{align*}
as required by~\raf{property}.
$\blacksquare$ \end{proof} \section{A QPTAS for separable $d$-USFP}\label{sec:qpt}
This section presents an LP-based approach that arrives at a QPTAS for separable $d$-USFP, with the main result stated in Theorem~\ref{th:mcqptas-}. {The high-level idea behind the provided scheme is to segment the users' demand functions in each partition of edges guaranteed by Lemma~\ref{l5} into ``large'' and ``small'', and then effectively combine their solutions by exploiting the monotonicity and separability of these functions. As the number of ``large'' demands in the optimal solution turns out to be provably bounded, we guess the corresponding decision variables through exhaustive search. On the other hand, the situation with ``small'' demands is more complicated since their presence in the optimal solution can be significant. However, as shown in Lemma~\ref{l4}, for such demands, a given fractional solution $\tilde x$ for separable $d$-USFP can be rounded to an integral one that fits within $\tilde x$'s resource requirements without a notable sacrifice in the objective value.}
For clarity of exposition, the analysis is organized into two subsections, which are further divided into short paragraphs. We proceed by exploring the properties of near-optimal solutions.
\subsection{Structure of Near-optimal Solutions}\label{sec:structure} \paragraph{\textbf{Discretizing the instance:}} Let $u_{\max}:=\max_{k\in\mathcal I}u_k$ and $\epsilon\in(0,1)$ be a given constant. Define $\hat\mathcal I:=\{k\in\mathcal I:~u_k\ge\frac{\epsilon u_{\max}}{n}\}$. Note that $u_{\max}\le\ensuremath{\textsc{Opt}}$ for a feasible instance, where $\ensuremath{\textsc{Opt}}$ is the value of an optimal solution for $d$-USFP$[\mathcal I,c]$. It follows that $\sum_{k \in\mathcal I\setminus\hat\mathcal I}u_k\le \epsilon u_{\max}\le \epsilon\ensuremath{\textsc{Opt}}$ and hence, $\sum_{k \in\hat\mathcal I}u_k\ge (1-\epsilon)\ensuremath{\textsc{Opt}}$.
For $k\in\hat\mathcal I$ and $r\in[d]$, let $\underline{f}^r_k:=\min_{e:~f_k^r(e)>0}f_k^r(e)$, $\overline{f}^r_k:=\max_{e}f_k^r(e)=f_k^r(e_m)$, $\underline{f}^r:=\min_{k} \underline f_k^r$ and $\overline{f}^r:=\max_{k} \overline f_k^r$. We consider discrete levels of function values: for $l=-\infty,0,1,2,\ldots,\left\lceil\log_{(1+\epsilon)}\frac{n\overline f^r}{\underline f^r}\right\rceil$ let $F_l^r:=(1+\epsilon)^l\underline f^r$, and $F^r:=\Big\{F_l^r:~l=-\infty,0,1,2,\ldots,\left\lceil\log_{(1+\epsilon)}\frac{n\overline f^r}{\underline f^r}\right\rceil\Big\}$ {with $\overline F:=\max\Big\{|F^r| : r\in[d]\Big\}$.}
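The discretization just described snaps each positive value up to the nearest level $(1+\epsilon)^l\underline f^r$, over-estimating it by a factor of at most $1+\epsilon$; an illustrative sketch (hypothetical Python, with `f_min` playing the role of $\underline f^r$):

```python
# Hypothetical sketch: snapping a positive value to the discrete level set
# F^r = { (1+eps)^l * f_min }; the returned level over-estimates v by a
# factor of at most (1+eps).

import math

def snap_up(v, f_min, eps):
    if v <= 0:
        return 0.0  # the l = -infinity level
    l = max(0, math.ceil(math.log(v / f_min, 1 + eps)))
    return (1 + eps) ** l * f_min
```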
\paragraph{\textbf{Partitioning the instance:}} For each $r\in[d]$, we assume the partition of $\ensuremath{\mathcal{E}} $ guaranteed by Lemma~\ref{l5}, and let $\underline {a}^r:=\min_{k\in\hat \mathcal I,~t\in[T_r]:~a_k^{r,t}>0}a_k^{r,t}$ and $\overline {a}^r:=\max_{k\in\hat \mathcal I,~t\in[T_r]}$ $a_k^{r,t}$. Note that if $a_k^{r,t}>0$ and $k\in\hat\mathcal I$, then $ \frac{\epsilon u_{\max}}{n\overline a^r}\le\frac{u_k}{a_k^{r,t}}\le \frac{u_{\max}}{\underline{a}^r}. $ We partition the users in $\hat\mathcal I$ into $Q:=\prod_{r=1}^d\prod_{t=1}^{T_r}Q_{r,t}$ groups, where $Q_{r,t}:=\left\lceil\log\frac{n\overline a^r}{\epsilon\underline a^r}\right\rceil+1$: \begin{align}\label{grouping} \mathcal I^{q}=\Big\{k\in\hat\mathcal I:~2^{q_{r,t}-1}L\le\frac{u_k}{a_k^{r,t}}< 2^{q_{r,t}}L\text{ for all }t\in[T_r],~ r\in[d]\Big\}, \end{align}
for $q=(q_{r,t}:~t\in[T_r],~r\in[d])\in \ensuremath{\mathcal{Q}}:=\prod_{r=1}^d\prod_{t=1}^{T_r}\{1,\ldots,Q_{r,t}-1,\infty\}$, where\footnote{For clarity, it is assumed in \raf{grouping} that the strict inequality is replaced by an inequality when $a_k^{r,t}=0$.} $L:=\frac{\epsilon u_{\max}}{n\overline a^r}$. Let $\overline Q:=\max_{t,r}Q_{r,t}$.
Then $Q\leq \overline Q^{\sum_{r=1}^d{T_r}}$.
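The group of a user is thus determined by the dyadic level of each ratio $u_k/a_k^{r,t}$ relative to $L$; a minimal illustrative sketch (hypothetical Python, with `None` standing for the level $\infty$ assigned when $a_k^{r,t}=0$):

```python
# Hypothetical sketch: the group index vector q of a user, per the dyadic
# grouping 2^{q_{r,t}-1} L <= u_k / a_k^{r,t} < 2^{q_{r,t}} L; a_k[r][t]
# holds a_k^{r,t}, and None encodes q_{r,t} = infinity.

import math

def group_index(u_k, a_k, L):
    q = []
    for a_r in a_k:
        for a_rt in a_r:
            if a_rt == 0:
                q.append(None)  # q_{r,t} = infinity
            else:
                q.append(1 + max(0, math.floor(math.log2(u_k / (a_rt * L)))))
    return tuple(q)
```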
\paragraph{\textbf{Structure of the optimal solution:}} Consider an optimal solution $x^*$ to separable $d$-USFP[$\mathcal I,c$], and let $\mathcal T^*=\{k\in\hat\mathcal I:~x_k^*=1\}$. Then, for $q\in\ensuremath{\mathcal{Q}}$, $(f^*)^{q,r}(e):=\sum_{k\in\mathcal T^*\cap\mathcal I^{q}}f_k^r(e)$, for $r\in[d]$, defines a monotone non-decreasing function on $\ensuremath{\mathcal{E}} $. We call such a function a ``profile'' defined by the optimal solution in group $\mathcal I^q$. For $p\in[P_r]$, let $(h^*)^{q,p,r}=\max_{e\in\ensuremath{\mathcal{E}} ^r_p}(f^*)^{q,r}(e)$ be the peak demand defined by the optimal solution (from group $q$) within the interval $\ensuremath{\mathcal{E}} _p^r$.
For $q\in \ensuremath{\mathcal{Q}}$, let $(\mathcal L^*)^q:=\{k\in \mathcal I^q\cap\mathcal T^*:~\underline{f}_k^{p,r}>\epsilon^2(h^*)^{q,p,r}\text{ for some } p\in[P_r],~ r\in[d]\}$ be the set of ``large'' demands within group $\mathcal I^{q}$ in the optimal solution, and let $\mathcal S^q:=\mathcal I^q\cap\mathcal T^*\setminus(\mathcal L^*)^q$ be the set of ``small'' demands within the same group. Note that, by the definition of $(h^*)^{q,p,r}$ and the monotonicity of $f_k^r(\cdot)$, there cannot be more than $\frac{1}{\epsilon^2}$ demands $k$ in $\mathcal I^q\cap \mathcal T^*$ such that $\underline{f}^{p,r}_k>\epsilon^2(h^*)^{q,p,r}$, and hence $|(\mathcal L^*)^q|\le \frac{\sum_{r=1}^dP_r}{\epsilon^2}$. The situation with small demands is more complicated as their number in the optimal solution can be high. However, with a small loss in the objective value, the profile defined by such small demands can be restricted to one that admits a small description. This motivates the following definition (generalizing that in~\citep{BCES06}). \begin{definition}\label{d3.3}\emph{($(h,\epsilon)$-restricted profile)}
Let $\epsilon>0$ be such that $1/\epsilon\in\mathbb Z_+$.
For $r\in [d]$ and $p\in[P_r]$, let $h=(h^{p,r})_{p\in[P_r],~r\in[d]}$ be a given vector of numbers such that $h^{p,r}\in F^r$ and $h^{p,r}\ge h^{p-1,r}$, for all $p=2,\ldots,P_r$ and $r\in[d]$.
An $(h,\epsilon)$-restricted profile $g=(g^r)_{r\in[d]}$ is a vector of monotone functions $g^{r}:\ensuremath{\mathcal{E}} \to\mathbb R_+$ such that $g^r(e)\in\{l\epsilon h^{p,r}:~l\in\{0,1,\ldots,1/\epsilon\},~p\in[P_r]\}$ (see Figure~\ref{f1} in Section~\ref{lemma2b} for a pictorial interpretation of an $(h,\epsilon)$-restricted profile). \end{definition} Accordingly, the total number of $(h,\epsilon)$-restricted profiles is at most $m^{\sum_{r=1}^dP_r/\epsilon}$. For $q\in\ensuremath{\mathcal{Q}}$ and for $p\in[P_r]$, define \begin{align}\label{hpr} H^{q,p,r}:=\sum_{t=1}^{T_r}\frac{b^{r,t}\big(e_{\overline i(p,r)}\big)}{2^{q_{r,t}}L}. \end{align} Note that \begin{align} \forall p\in[P_r]:~H^{q,p,r}>0~&\Leftrightarrow~\exists t\in[T_r]:~q_{r,t}\ne\infty \nonumber\\ &\Leftrightarrow~ \forall k\in\mathcal I^q~\exists t\in[T_r]:~a_k^{r,t}>0~\nonumber\\ &\Leftrightarrow~ \forall k\in\mathcal I^q:~f_k^r(e_m)>0. \label{p1} \end{align}
Let $\mathcal H^q:=\{r\in[d]:~H^{q,P_r,r}>0\}$ and $\alpha:=\frac{\sum_{r=1}^dP_r}{\sum_{r\in\mathcal H^{q}}P_r}.$ Assume $\mathcal H^q\neq \emptyset$ since otherwise, $f_k^r(e_m)=0$ for all $k\in\mathcal I^q$ and hence all the users in $\mathcal I^q$ can be taken in the solution without affecting the constraints.
In proving Theorem~\ref{th:mcqptas-}, we shall resort to the following lemma, which builds on the findings in~\citep{BCES06} and is proved in Section~\ref{lemma2b}.
\begin{lemma}\label{l4}
Fix $q\in\ensuremath{\mathcal{Q}}$ and $\epsilon\in(0,1)$.
Let $\mathcal S^q\subseteq\mathcal I^q$ be a set of demands within group $q$ such that $\underline{f}_k^{p,r}\le B^{q,p,r}$ for all $k\in\mathcal S^q$, $p\in[P_r]$, $r\in[d]$, and some numbers $B^{q,p,r}\in\mathbb R_+$. Let $h^q=(h^{q,p,r})_{p\in[P_r],~r\in[d]}$ be a given vector of numbers such that $h^{q,p,r}\in F^r$ and $h^{q,p,r}\ge h^{q,p-1,r}$, for all $p=2,\ldots,P_r$ and $r\in[d]$, {and $(\tilde{x}_k)_{k\in\mathcal S^q}\in[0,1]^{\mathcal S^q}$ be such that}
\begin{align} \label{rel0}
\sum_{k\in\mathcal S^q}\overline f_k^{p,r}\tilde x_k\le(1+\epsilon)h^{q,p,r},\quad\forall p\in[P_r],~\forall r\in[d].
\end{align}
Then we can find in polynomial time an integral vector $(\hat x_k)_{k\in\mathcal S^q}\in\{0,1\}^{\mathcal S^q}$ and {an $(h,\epsilon)$-restricted} profile $g^{q}$, such that
~\\
\noindent (i) $\sum_{k\in\mathcal S^q}f_k^r(e)\hat x_k\leq g^{q,r}(e)\leq\sum_{k\in\mathcal S^q}f_k^r(e)\tilde{x}_k$ for all $e\in\ensuremath{\mathcal{E}} ,~r\in[d]$, and
\noindent (ii) $\sum\limits_{k\in\mathcal S^q}u_k\hat x_k\geq \sum\limits_{k\in\mathcal S^q}u_k\tilde x_k-\sum\limits_{r\in\mathcal H^q}\left(\sum\limits_{p=1}^{P_r}\left(\frac{C_r}{H^{q,p,r}}\left(\epsilon h^{q,p,r}+B^{q,p,r}\right)\right) + \frac{\alpha P_rB^{q,P_r,r}}{\epsilon H^{q,P_r,r}}\right).$ \end{lemma} In other terms, Lemma~\ref{l4} establishes that, when all demands are small, a given fractional solution $\tilde x$ for separable $d$-USFP can be rounded to an integral solution $\hat x$ that fits within a capacity profile with a small description, losing only a small part of the utility of $\tilde x$.
\subsection{Approximation Scheme}
The featured QPTAS, formally stated in Alg.~\ref{alg:qptas}, proceeds as follows. As $\ensuremath{\textsc{Opt}}\ge u_{\max}$, by restricting the set of demands to $\hat \mathcal I$ (defined in Section~\ref{sec:structure}) we lose at most $\epsilon\ensuremath{\textsc{Opt}}$ of the optimal value. Next, the algorithm discretizes the instance and partitions the users in $\hat\mathcal I$ into $Q$ groups $(\mathcal I^q)_{q\in\ensuremath{\mathcal{Q}}}$, as described in Section~\ref{sec:structure}. Additionally, Alg.~\ref{alg:qptas} partitions $\ensuremath{\mathcal{E}} $ into intervals $\ensuremath{\mathcal{E}} ^r$ satisfying property~\raf{property}, as per Lemma~\ref{l5} (with $C_r=2$).
\begin{algorithm}[!htb]
\caption{ {\sc $d$-USFP-QPTAS} }
\label{alg:qptas}
\begin{algorithmic}[1]
\Require An approximation parameter $\epsilon\in(0,1)$; separable {\sc $d$-USFP} input $(f_k^{r})_{k\in \mathcal I,~r\in[d]}$ satisfying~\raf{funct}; {capacities $(c^r)_{r\in[d]}$}
\Ensure An integral solution $\hat x$ to $d$-USFP such that $\sum_{k\in\mathcal I}u_k\hat x_k \ge (1-O(\epsilon)) \ensuremath{\textsc{Opt}}$
\For {{each selection $\Big(\mathcal L = (\mathcal L^q)_{q\in\ensuremath{\mathcal{Q}}}, h = \big(h^q=(h^{q,p,r})_{p\in[P_r],~r\in[d]}\big)_{q\in\ensuremath{\mathcal{Q}}}\Big)$ such that} $\mathcal L^q\subseteq \mathcal I$, $|\mathcal L^q| \le \frac{\sum_{r=1}^dP_r}{\epsilon^2}$ and $h^{q,p,r}\in F^r$ } \label{alg:guess}
\If{{$\sum_{k\in\mathcal L}f_k^r(e)$}$+\sum_{p\in[P_r],~q\in\ensuremath{\mathcal{Q}}}h^{q,p,r}\le c^r(e)$ $\forall e\in\ensuremath{\mathcal{E}} ,~r\in[d]$} \label{qptas:mc-feas}
\State {$\hat x_k'\gets1$ $\forall~ k\in \mathcal L$}
\For{$q\in\ensuremath{\mathcal{Q}}$}\label{qptas:Q}
\State Let $\mathcal S^q$ be given by~\raf{small}
\For {every $(h,\epsilon)$-restricted profile $g^q$}\label{qptas:prof}
\State$ (\hat x_k')_{k\in \mathcal S^q} \leftarrow $ Integral vector returned by applying Lemma~\ref{l4} with vector $h^q$, {and $(\tilde x_k)_{k\in \mathcal S^q}=(x'_k)_{k\in \mathcal S^q}$} \label{qptas:mc-round}
\EndFor
\EndFor
\If{$\sum_{k\in\mathcal I}u_k\hat x_k' > \sum_{k\in\mathcal I}u_k\hat x_k$ }
\State $ \hat x\leftarrow \hat x'$
\EndIf
\EndIf
\EndFor
\State \Return $\hat x$
\end{algorithmic} \end{algorithm}
Then for each group $q\in\ensuremath{\mathcal{Q}}$, Alg.~\ref{alg:qptas} guesses the set of large demands $\mathcal L^q\subseteq\mathcal I^q$ in the optimal solution, and the peaks $h^{q,p,r}$, within $1+\epsilon$, of the small demands in the optimal solution within the interval $\ensuremath{\mathcal{E}} _p^r$. Let $\mathcal L=(\mathcal L^q)_{q\in\ensuremath{\mathcal{Q}}}$ and $h^q=(h^{q,p,r})_{p\in[P_r],~r\in[d]}$ where $h^{q,p,r}\in F^r$. Define the set of small demands within group $q\in\ensuremath{\mathcal{Q}}$ as \begin{align}\label{small} \mathcal S^q:=\left\{k\in\mathcal I^q:~\underline{f}_k^{p,r}\le B^{q,p,r}\text{ for all } p\in[P_r],~r\in[d]\right\}, \end{align} where $B^{q,p,r}:=\epsilon^2\left[h^{q,p,r}+\sum_{k\in\mathcal L^q}\overline{f}_k^{p,r}\right]$.
Let $T:=\max_{r}T_r$ and $M:=\max\left\{\max_{r}\frac{\overline a^r}{\underline a^r},\max_{r}\frac{\overline b^r}{\underline b^r}\right\}.$ \begin{theorem}
\label{th:mcqptas-}
For any fixed $\varepsilon\in(0,1)$, Alg.~\ref{alg:qptas} attains a $(1- \varepsilon)$-approximation for separable $d$-USFP in time $\left(\frac{nm\log (dnTM)}{\varepsilon}\right)^{dT\cdot O\left(\log \frac{dnTM}{\varepsilon}\right)^{dT}}$.
\end{theorem}
\begin{proof}
Let $\epsilon:=\frac{\varepsilon}{2\beta+1}$, where $\beta=\max_{r\in\mathcal H^q}2\left(2C_r+\alpha P_r\right)=O(d^3(T\log M)^2)$.
The number of possible choices for each $\mathcal L^q$ in step~\ref{alg:guess} of Alg.~\ref{alg:qptas} is at most $n^{\sum_{r=1}^dP_r/\epsilon^2}$. Thus, using $Q\leq \overline Q^{\sum_{r=1}^dT_r}$, and $\overline Q=O(\log\frac{nM}{\epsilon})$, the number of possible choices for $\mathcal L$ is at most \begin{align}\label{ch-L-}
n^{\sum_{r=1}^dP_rQ/\epsilon^2}\le n^{\sum_{r=1}^dP_r\overline Q^{{\sum_{r=1}^dT_r}}/\epsilon^2}= n^{dT\log M\cdot O(\log \frac{nM}{\epsilon})^{dT}/\epsilon^2}.
\end{align} The number of choices for each $h^q=(h^{q,p,r})_{p\in[P_r],~r\in[d]}$ is $$\overline F^{\sum_{r=1}^dP_r}= O\Big(\big(\frac{\log(nTM)}{\epsilon}\big)^{dT\log M}\Big)\,,$$ and the number of iterations of the loop in step~\ref{qptas:Q} is
\begin{align}\label{ch-Q-}
\overline{Q}^{\sum_{r=1}^d {T}_r}\le O\left(\log \frac{nM}{\epsilon}\right)^{dT},
\end{align}
giving at most
\begin{align}
\label{ch-h-}
\left(O\left(\frac{\log(nTM)}{\epsilon}\right)^{dT\log M}\right)^{ Q}
=\left(O\left(\frac{\log(nTM)}{\epsilon}\right)^{dT\log M}\right)^{O\left(\log \frac{nM}{\epsilon}\right)^{dT}}
\end{align} choices for $h=(h^q)_{q\in\ensuremath{\mathcal{Q}}}$ in step~\ref{alg:guess}.
The number of choices for the $\epsilon$-restricted profiles in step~\ref{qptas:prof} is bounded from above by
$
m^{\sum_{r=1}^dP_r/\epsilon}=m^{O(\frac{dT\log{M}}{\epsilon})}.
$ The bound on the running time of Alg.~\ref{alg:qptas} follows from this and~\raf{ch-L-},\raf{ch-Q-},\raf{ch-h-}.
We now argue that the solution $\hat x$ returned by Alg.~\ref{alg:qptas} is a $(1-O(\epsilon))$-approximation for separable $d$-USFP.
Let $x^*$ be an optimal solution for {\sc $d$-USFP} of objective value $\ensuremath{\textsc{Opt}} \triangleq \sum_{k\in\mathcal I}u_kx^*_k$. By the definition of $\hat\mathcal I$, we have
\begin{align}\label{en0-}
\sum_{k\in\mathcal I\setminus\hat\mathcal I}u_k\le\epsilon\ensuremath{\textsc{Opt}}.
\end{align}
Define $\mathcal T^*\triangleq\{k \in \hat\mathcal I \mid x^*_k = 1\}$ and $(h^*)^{q,p,r}=\sum_{k\in\mathcal T^*\cap\mathcal I^q}\overline f_k^{p,r}$, for $p\in[P_r],$ $r\in[d]$ and $q\in\ensuremath{\mathcal{Q}}$.
Let $(\mathcal L^*)^q:=\Big\{k\in \mathcal I^q\cap\mathcal T^*:~\underline{f}_k^{p,r}>\epsilon^2(h^*)^{q,p,r}\text{ for some } p\in[P_r],~\text{and some } r\in[d]\Big\}$ be the set of ``large'' demands within group $\mathcal I^{q}$ in the optimal solution, and let $(\mathcal S^*)^q:=\mathcal I^q\cap\mathcal T^*\setminus(\mathcal L^*)^q$ be the set of ``small'' demands within the same group. Note by this definition that $|(\mathcal L^*)^q|\le\frac{\sum_{r=1}^dP_r}{\epsilon^2}$, and thus $\mathcal L^*=((\mathcal L^*)^q)_{q\in\ensuremath{\mathcal{Q}}}$ and $h=(h^q)_{q\in\ensuremath{\mathcal{Q}}}$ will be one of the guesses considered by the algorithm in step~\ref{alg:guess}. Let us focus on this particular iteration of the loop in step~\ref{alg:guess}. {Let $h^{q,p,r}=(1+\epsilon)^{\underline{\ell}}\underline{f}^{r}$, where $\underline{\ell}$ is the smallest integer (including $-\infty$) such that $h^{q,p,r}+\sum_{k\in(\mathcal L^*)^q}\overline f_k^{p,r}\ge (h^*)^{q,p,r}$.} Note that $h^{q,p,r}\in F^r$, and
\begin{align}\label{rel-}
\frac{1}{1+\epsilon}h^{q,p,r}+\sum_{k\in(\mathcal L^*)^q}\overline f_k^{p,r}\le (h^*)^{q,p,r}\le h^{q,p,r}+\sum_{k\in(\mathcal L^*)^q}\overline f_k^{p,r}.
\end{align}
Note that for any $k\in(\mathcal S^*)^q$, $q\in\ensuremath{\mathcal{Q}}$, $p\in[P_r]$, and $r\in[d]$, we have by \raf{rel-},
$$
\underline f_k^{p,r}\le\epsilon^2 (h^*)^{q,p,r}\leq\epsilon^2\left(h^{q,p,r}+\sum_{k\in(\mathcal L^*)^q}\overline f_k^{p,r}\right),
$$
and hence $(\mathcal S^*)^q\subseteq\mathcal S^q$. Note also that
\begin{align}
\label{bbb2-}
B^{q,p,r}&=\epsilon^2\left[h^{q,p,r}+\sum_{k\in(\mathcal L^*)^q}\overline{f}_k^{p,r}\right] \nonumber \\
&\le\epsilon^2\left[h^{q,p,r}+(1+\epsilon)\sum_{k\in(\mathcal L^*)^q}\overline{f}_k^{p,r}\right]\le \epsilon^2(1+\epsilon)(h^*)^{q,p,r}.
\end{align}
For each $q\in\ensuremath{\mathcal{Q}}$, there is an $(h,\epsilon)$-restricted profile $g^q$ and an integral solution $ (\hat x_k')_{k\in \mathcal S^q} $ that satisfy Lemma~\ref{l4} (applied with $\hat x\leftarrow \hat x'$ and $\tilde x\leftarrow x^*$). Since all the possible $(h,\epsilon)$-restricted profiles are probed, the profile $g^q$ will be identified in one of the iterations of the loop in step~\ref{qptas:prof} of Alg.~\ref{alg:qptas}. Let us consider this iteration.
By condition~(ii) of Lemma~\ref{l4} and \raf{bbb2-},
{\footnotesize
\begin{align}\label{en1-}
\sum_{k\in\mathcal S^q}u_k\hat x_k'&\geq \sum_{k\in\mathcal S^q}u_kx_k^*-\sum_{r\in\mathcal H^q}\left(\sum_{p=1}^{P_r}\left(\frac{C_r\left(\epsilon h^{q,p,r}+B^{q,p,r}\right)}{H^{q,p,r}}\right)+\frac{\alpha P_rB^{q,P_r,r}}{\epsilon H^{q,P_r,r}}\right)\nonumber\\
&\ge\sum_{k\in\mathcal S^q}u_kx^*_k- \sum_{r\in\mathcal H^q}\left(\sum_{p=1}^{P_r}\left(\frac{C_r\epsilon (1+\epsilon)^2(h^*)^{q,p,r}}{H^{q,p,r}}\right)+\frac{\alpha P_r\epsilon^2(1+\epsilon)(h^*)^{q,P_r,r}}{\epsilon H^{q,P_r,r}}\right)\nonumber\\
&=\sum_{k\in\mathcal S^q}u_kx^*_k-\epsilon(1+\epsilon)\sum_{r\in\mathcal H^q}\left(\sum_{p=1}^{P_r}\left(\frac{C_r(1+\epsilon)(h^*)^{q,p,r}}{H^{q,p,r}}\right)+\frac{\alpha P_r(h^*)^{q,P_r,r}}{H^{q,P_r,r}}\right).
\end{align}
}
On the other hand, for $k\in\mathcal S^q$ and $r\in[d]$ such that $f_k^r(e_m)>0$ (and hence $H^{q,p,r}>0$ for all $p\in[P_r]$ by \raf{p1}), we have $u_k\ge 2^{q_{r,t}-1}La_k^{r,t}$ and thus
\begin{align}
\label{bb2-}
u_k\frac{b^{r,t}(e_{\overline{i}(p,r)})}{2^{q_{r,t}-1}L}\ge a_k^{r,t} b^{r,t}(e_{\overline{i}(p,r)})\ge a_k^{r,t} \tilde b^{r,t}(e_{\overline{i}(p,r)}).
\end{align}
Summing up \raf{bb2-} over $t\in[T_r]$, we get $u_k\ge \frac{\overline f_k^{p,r}}{2H^{q,p,r}}$. {Recall that $(h^*)^{q,p,r}=\sum_{k\in\mathcal T^*\cap\mathcal I^q}\overline f_k^{p,r}$,} then summing this inequality over $k\in\mathcal T^*\cap\mathcal I^q$ yields
\begin{align}
\label{LB-}
\ensuremath{\textsc{Opt}}^q:=\sum_{k\in\mathcal T^*\cap\mathcal I^q}u_k\ge \sum_{k\in\mathcal T^*\cap\mathcal I^q}\frac{\overline f_k^{p,r}}{2H^{q,p,r}}=\frac{(h^*)^{q,p,r}}{2H^{q,p,r}}.
\end{align}
Summing~\raf{LB-} over $r\in\mathcal H^q$ and $p\in[P_r]$ gives
{
\begin{align}
\label{LA2-}
\ensuremath{\textsc{Opt}}^q&\ge\sum_{r\in\mathcal H^q}\sum_{p=1}^{P_r}\frac{(h^*)^{q,p,r}}{2H^{q,p,r}} \nonumber \\
&=\frac1{\beta}\cdot\sum_{r\in\mathcal H^q}\sum_{p=1}^{P_r}\frac{\beta(h^*)^{q,p,r}}{2H^{q,p,r}} \nonumber \\
&\geq\frac1{\beta}\cdot\sum_{r\in\mathcal H^q}\sum_{p=1}^{P_r}\frac{2\left(2C_r+\alpha P_r\right)(h^*)^{q,p,r}}{2H^{q,p,r}} \nonumber \\
&\geq\frac1{\beta}\cdot\sum_{r\in\mathcal H^q}\sum_{p=1}^{P_r}\Bigg(\frac{2C_r(h^*)^{q,p,r}}{H^{q,p,r}} + \frac{\alpha P_r(h^*)^{q,p,r}}{H^{q,p,r}} \Bigg)\nonumber \\
&\geq\frac1{\beta}\cdot\sum_{r\in\mathcal H^q}\Bigg(\sum_{p=1}^{P_r}\frac{(1+\epsilon)C_r(h^*)^{q,p,r}}{H^{q,p,r}} + \sum_{p=1}^{P_r}\frac{\alpha P_r(h^*)^{q,p,r}}{H^{q,p,r}} \Bigg)\nonumber \\
&\ge\frac1{\beta}\cdot\sum_{r\in\mathcal H^q}\left(\sum_{p=1}^{P_r}\left(\frac{C_r}{H^{q,p,r}} (1+\epsilon)(h^*)^{q,p,r}\right)+\frac{\alpha P_r(h^*)^{q,P_r,r}}{H^{q,P_r,r}}\right)\,,
\end{align}}\noindent{where $\beta=\max_{r\in\mathcal H^q}2\left(2C_r+\alpha P_r\right)$ as defined previously.}
Thus, it follows from~\raf{en1-} and~\raf{LA2-} that
\begin{align}\label{en2-}
\sum_{k\in\mathcal S^q}u_k\hat x_k'&\geq \sum_{k\in\mathcal S^q}u_kx_k^*-\epsilon(1+\epsilon)\beta\ensuremath{\textsc{Opt}}^q\geq \sum_{k\in(\mathcal S^*)^q}u_kx_k^*-\epsilon(1+\epsilon)\beta\ensuremath{\textsc{Opt}}^q.
\end{align}
Summing~\raf{en2-} over all $q\in\ensuremath{\mathcal{Q}}$ and using~\raf{en0-} gives
\begin{align*}
\sum_{k\in\mathcal I}u_k\hat x_k'&=\sum_{q\in\ensuremath{\mathcal{Q}}}\left(
\sum_{k\in(\mathcal L^*)^q}u_k\hat x_k'+\sum_{k\in\mathcal S^q}u_k\hat x_k'\right)\\
&\ge \sum_{q\in\ensuremath{\mathcal{Q}}}\left(
\sum_{k\in(\mathcal L^*)^q}u_kx_k^*+\sum_{k\in(\mathcal S^*)^q}u_kx_k^*-\epsilon(1+\epsilon)\beta\ensuremath{\textsc{Opt}}^q\right)\\
&=\sum_{k \in\mathcal T^*}u_kx_k^*-\epsilon(1+\epsilon)\beta\sum_{k\in\mathcal T^*}u_k\\
&=\sum_{k \in \hat\mathcal I}u_kx_k^*-\epsilon(1+\epsilon)\beta\sum_{k\in\mathcal T^*}u_k\\
&\ge \sum_{k \in \mathcal I}u_kx_k^*-\epsilon(2\beta+1)\ensuremath{\textsc{Opt}}= (1-\varepsilon)\ensuremath{\textsc{Opt}}.
\end{align*}
It follows that the solution $\hat x$ returned by Alg.~\ref{alg:qptas} satisfies $$\sum_{k\in\mathcal I}u_k\hat x_k\ge\sum_{k\in\mathcal I}u_k\hat x_k'\ge(1-\varepsilon)\ensuremath{\textsc{Opt}}\,,$$
thus concluding the proof.
$\blacksquare$ \end{proof}
Note that the running time is quasi-polynomial if $M=2^{\operatorname{polylog} (m,n)}$, $d=O(1)$, and $T=O(1)$.
\section{A Logarithmic approximation for single-source separable $d$-USFP}\label{sec:logarithmic}
Notwithstanding its theoretical appeal, the QPTAS devised in Sec.~\ref{sec:qpt} is computationally prohibitive even for modest problem sizes, hence is of limited practicality. This section presents an efficient logarithmic approximation for single-source separable $d$-USFP$[\mathcal I,c]$ with a running time dominated by solving an LP. Before stating the result formally, we rewrite the problem in a suitable matrix notation and briefly outline the underlying technique. Notice that $d$-USFP$[\mathcal I,c]$ can be cast as a general {\it packing integer program} (PIP) of the form \begin{align} \Big(\mathcal{P}\big[(A^r)_{r \in [d]}, u, (c^r)_{r \in [d]}\big]\Big) & \quad \max_{\substack{x}} ~~ u^Tx &\qquad \qquad\notag \\ \text{s.t.} \ & \quad A^rx\le c^r,~\quad \forall~r\in[d]\label{lg0} &\\ & \quad x \in \{0,1\}^n,\label{lg1} \end{align} where $u \in \mathbb R_+^n$ is the utility vector, $c^r \in \mathbb R_+^m$ denotes the edge capacities in dimension $r \in [d]$ and $A^r \in \mathbb R_+^{m \times n}$ encodes the edge-demand incidence relation for the corresponding dimension $r \in [d]$, with the rows corresponding to the edges and the columns to the demands (i.e., $A_{ik}^r = f_k^r (e_i)$ for all $i \in [m]$, $k \in \mathcal I$).
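As a quick illustration of the matrix notation above, the following sketch (in Python, with an invented toy network and demand functions, not part of the formal development) assembles the PIP constraint matrices $A^r$ entrywise via $A^r_{ik}=f^r_k(e_i)$:

```python
# Sketch: assembling the PIP constraint matrices A^r from per-dimension
# demand functions f_k^r(e_i), as in A^r_{ik} = f_k^r(e_i).  The demand
# functions below are illustrative placeholders.

def build_pip_matrices(demand_funcs, edges):
    """demand_funcs[r][k] is a callable e -> f_k^r(e); returns one matrix per r."""
    matrices = []
    for funcs_r in demand_funcs:                          # one matrix per r in [d]
        A_r = [[f(e) for f in funcs_r] for e in edges]    # rows = edges, cols = demands
        matrices.append(A_r)
    return matrices

# Toy single-source line network with 3 edges and 2 demands.
edges = [0, 1, 2]

# Demand k loads every edge e >= origin_k with a flat rate (monotone in e).
def make_demand(origin, rate):
    return lambda e: rate if e >= origin else 0.0

demand_funcs = [[make_demand(0, 2.0), make_demand(1, 3.0)]]   # d = 1 dimension
A = build_pip_matrices(demand_funcs, edges)
# A[0] == [[2.0, 0.0], [2.0, 3.0], [2.0, 3.0]]
```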
Exploiting the special structure of $\mathcal{P}$ induced by the monotonicity and separability of demands, we develop a simple {\it grouping and scaling} method that allows us to reduce the problem to an easier instance with only logarithmically many constraints. Recall that an analogously named technique was derived in~\citep{kolliopoulos2001approximation} for the single-source unsplittable flow problem. Whereas~\citep{kolliopoulos2001approximation} partitions the instance in the demand space, the proposed approach instead decomposes the edges into disjoint segments, each defining a subproblem of $\mathcal{P}$ in which every capacity and demand varies within a preset range. These subproblems, after certain alterations, are then reconsolidated, effectively formulating the compacted problem with O$(d\log n)$ constraints. It is noteworthy that this reduction subroutine {\it holds irrespective of} the rather restrictive NBA condition, which is stipulated in~\citep{kolliopoulos2001approximation}. Thereafter, invoking the standard {\it randomized rounding} algorithm on the natural LP relaxation of the reduced problem ensures the claimed approximation factor. Formally, the preceding analysis culminates in Theorem~\ref{logap}.
In proving Theorem~\ref{logap}, we capitalize on several established results on randomized rounding and its derandomization (codified in the theorem to follow) as a unified black-box technique, and thereby omit the intricate particulars.
\begin{theorem}[\citep{Srinivasan1999,Raghavan1987}~] Let $\mathcal{B}$ be a PIP of the form $\max \{ u^Tx : Ax \leq c, x \in \{0,1\}^n\}$, where $A \in [0,1]^{m \times n}$, $u \in [0,1]^n$ and $c \in [1,\infty)^m$ with $\max_j u_j = 1$. Then, there exists an algorithm outputting in deterministic polynomial time a feasible solution to $\mathcal{B}$ of value $$\Omega \bigg( \max \bigg\{ \frac{\ensuremath{\textsc{Opt}}_L}{m^{1/ \nu}}, \bigg( \frac{\ensuremath{\textsc{Opt}}_L}{m^{1/ \nu}} \bigg)^{\frac{\nu}{\nu -1}} \bigg\}\bigg) \, ,$$ where $\ensuremath{\textsc{Opt}}_L$ is the optimum of the linear relaxation of $\mathcal{B}$ and $\nu = \min_j c_j$. \label{rnd} \end{theorem}
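The following minimal sketch conveys the randomized-rounding idea underlying Theorem~\ref{rnd}: scale the fractional point down and round each coordinate independently, retrying until a feasible $0/1$ vector emerges. It is purely illustrative (the function name and parameters are ours); the cited results employ careful scaling and derandomization to obtain the stated guarantees.

```python
import random

def randomized_round(x_frac, A, c, scale=0.5, tries=100, seed=0):
    """Round a down-scaled fractional point and keep the first 0/1 vector
    feasible for Ax <= c.  Illustrative only; no guarantee is claimed here."""
    rng = random.Random(seed)
    m, n = len(A), len(x_frac)
    for _ in range(tries):
        x = [1 if rng.random() < scale * xi else 0 for xi in x_frac]
        if all(sum(A[i][j] * x[j] for j in range(n)) <= c[i] for i in range(m)):
            return x
    return [0] * n   # the all-zero vector is always feasible for c >= 0

# Toy PIP: one constraint x1 + x2 + x3 <= 2 with fractional optimum near the boundary.
x = randomized_round([0.9, 0.8, 0.1], [[1.0, 1.0, 1.0]], [2.0])
```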
\begin{theorem}\label{logproof}There is an O$(d\log n)$-approximation for single-source separable $d$-USFP, provided the edge capacities and demands are bounded by $2^{\operatorname{polylog}(n)}$. \label{logap} \end{theorem}
\begin{proof} Let $\Lambda = \Big( (A^r)_{r \in [d]}, u, (c^r)_{r \in [d]} \Big)$ be an input instance of $\mathcal{P}$ with $\ensuremath{\textsc{Opt}}$ denoting the value of its optimal solution $x^*$. From $\Lambda$, construct an augmented instance $\Lambda^{\prime} = \Big( \big(\begin{bmatrix}
A^r&c^r
\end{bmatrix}\big)_{r \in [d]}, (u, 0), (c^r)_{r \in [d]} \Big)$, which essentially models the outcome of incorporating {\it a dummy request} with a utility of $0$ and a demand equal to the edge capacities. This auxiliary step, meant to streamline the proof, incurs no loss of generality, as neither $x^*$ nor its structure is affected. Thus, to elude cumbersome notation, $(A^r)_{r \in [d]}$ and $u$ are hereafter assumed implicitly of the augmented form as in $\Lambda^{\prime}$.
At the loss of only a constant factor in $\ensuremath{\textsc{Opt}}$, we shall now transform $\mathcal{P}$ to a problem with O$(d\log n)$ constraints. Let $\Pi$ denote the LP relaxation of $\mathcal{P}$, obtained by allowing $x$ to lie in $[0,1]^n$. Fix a constant $C > 1$, along with the corresponding partitions $(\ensuremath{\mathcal{E}} ^r)_{r \in [d]}$ guaranteed by Lemma~\ref{l5}, and denote by $A^{r, p}$ the submatrix of $A^r$ restricted to the rows in $\big\{i \in [m] \mid e_i \in \ensuremath{\mathcal{E}} _p^r\big\}$. Observe that each interval $\ensuremath{\mathcal{E}} ^r_p$ in $(\ensuremath{\mathcal{E}} ^r)_{r \in [d]}$ naturally defines a subproblem $\Pi\big[A^{r,p}, u, c^{r,p}\big]$, where $\frac{\max_{i}A_{ij}^{r,p}}{\min_{i:A_{ij}^{r,p}>0}A_{ij}^{r,p}} \leq C$ for all $j \in [n+1]$ and, owing to the introduced dummy request, $\frac{\max_{i}c_i^{r,p}}{\min_{i:c_i^{r,p}>0}c_{i}^{r,p}} \leq C$. Given $\Pi\big[A^{r,p}, u, c^{r,p}\big]$, compose a simplified instance $\Pi\big[\overline A^{r,p}, u, \underline c^{r,p}\big]$, with $\underline c^{r,p} := \min_i c^{r,p}_i \cdot \boldsymbol1 $ and $\overline A^{r,p}$ standing for the matrix whose $i,j$-th entry equals $\max_{i}A_{ij}^{r,p}$ if $A_{ij}^{r,p} \neq 0$ and $0$ otherwise. In a sense, this amounts to setting each demand to its maximum, thereby flattening the step functions into constant ones, and making the edge capacities uniform across the interval. Consider an optimal solution $y^*$ of $\Pi\big[A^{r,p}, u, c^{r,p}\big]$ and set $\tilde {y} := \frac{y^*}{C^2}$. Then $\tilde{y}$ is a feasible solution for $\Pi\big[\overline A^{r,p}, u, \underline c^{r,p}\big]$. On the other hand, any feasible solution to $\Pi\big[\overline A^{r,p}, u, \underline c^{r,p}\big]$ translates into one of $\Pi\big[A^{r,p}, u, c^{r,p}\big]$ of the same value. Taken together and generalized over all the partitions, these observations imply that
\begin{equation}
\widetilde {\ensuremath{\textsc{Opt}}}_{\Pi} \geq \frac{\ensuremath{\textsc{Opt}}_{\Pi} }{C^2} \geq \frac{\ensuremath{\textsc{Opt}} }{C^2} \,, \label{eq63}
\end{equation}
where $\widetilde{\ensuremath{\textsc{Opt}}}_{\Pi}$ and $ \ensuremath{\textsc{Opt}}_{\Pi} $ are the optimal objective values of $\Pi\big[(\overline A^r)_{r \in [d]}, u,$ $ (\underline c^r)_{r \in [d]}\big]$ and $\Pi\big[(A^r)_{r \in [d]}, u$ $, (c^r)_{r \in [d]}\big]$, respectively. Furthermore, a finer inspection of the former problem can render the majority of its constraints redundant. Indeed, by construction, each subproblem $\Pi\big[\overline A^{r,p}, u, \underline c^{r,p}\big]$ of $\Pi\big[(\overline A^r)_{r \in [d]}, u, $ $(\underline c^r)_{r \in [d]}\big]$ boils down to a {\it single Knapsack inequality}\footnote{This inequality is captured by the first constraint appearing in the subproblem, and thus can be extracted in O$(1)$ time.} since both demands and capacities are levelled therein, and all the requests share the same origin. Compounding these $\tilde{m} = O(d\log n)$ inequalities into $\tilde{A} \in \mathbb R_+^{\tilde{m}\times (n+1)}$ and $\tilde{c} \in \mathbb R_+^{\tilde{m}}$, formulate a new PIP $\mathcal{P}\big[\tilde{A} , u, \tilde{c}\big]$, noting that $\widetilde{\ensuremath{\textsc{Opt}}}_{\Pi}$ is the optimum value of its linear relaxation.
Henceforth, it remains to invoke Theorem~\ref{rnd} on $\mathcal{P}\big[\tilde{A} , u, \tilde{c}\big]$ after some proper scaling. In particular, without loss of generality, assume that $\tilde{A}_{i,j} \leq \tilde{c}_i$ for all $i,j$, since otherwise we might as well set the corresponding $j$-th decision variable to $0$. This being so, scale down each row $i$ of $\tilde{A}$ and $\tilde{c}$ by $\max_j \tilde{A}_{i,j}$, consequently letting $\tilde{A} \in [0,1]^{\tilde{m}\times (n+1)}$ and $\tilde{c} = \boldsymbol1 $ (due to the dummy requests). Next, scaling $u$ so that $\max_j u_j = 1$ conforms $\mathcal{P}\big[\tilde{A} , u, \tilde{c}\big]$ to the form in Theorem~\ref{rnd}. Accordingly, we obtain a feasible integral solution to $\mathcal{P}\big[\tilde{A} , u, \tilde{c}\big]$, and hence to $\mathcal{P}\big[(A^r)_{r \in [d]}, u, (c^r)_{r \in [d]}\big]$, of value $ \frac{\widetilde {\ensuremath{\textsc{Opt}}}_{\Pi}}{O (d\log n)}$, which together with~\raf{eq63} yields the theorem.
$\blacksquare$ \end{proof}
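One grouping-and-scaling step from the proof can be sketched numerically as follows (a toy sketch with invented numbers, under the assumption that nonzero entries and capacities in a segment vary by at most a factor $C$): lift each nonzero entry of $A^{r,p}$ to its column maximum, level the capacities to the segment minimum, and check that $y^*/C^2$ remains feasible.

```python
# Sketch of one grouping-and-scaling step: within a segment where nonzero
# entries of A^{r,p} and the capacities each vary by at most a factor C,
# raise every nonzero entry to its column maximum (A_bar) and level the
# capacities to the segment minimum (c_under); a feasible y scaled by 1/C^2
# stays feasible for the simplified instance.

def flatten_segment(A, c):
    m, n = len(A), len(A[0])
    col_max = [max(A[i][j] for i in range(m)) for j in range(n)]
    A_bar = [[col_max[j] if A[i][j] != 0 else 0.0 for j in range(n)]
             for i in range(m)]
    c_under = [min(c)] * m
    return A_bar, c_under

C = 2.0
A = [[1.0, 2.0], [2.0, 1.0]]        # column-wise entries within factor C
c = [4.0, 6.0]                       # capacities within factor C
A_bar, c_under = flatten_segment(A, c)

y = [1.0, 1.0]                       # feasible for (A, c)
y_scaled = [yi / C**2 for yi in y]   # claimed feasible for (A_bar, c_under)
for i in range(2):
    assert sum(A_bar[i][j] * y_scaled[j] for j in range(2)) <= c_under[i]
```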
\vspace*{8pt} \begin{remark}
For the sake of brevity, the result in this section was provided in an existential form, rather than in an algorithmic variant as in Section~\ref{sec:qpt}. However, the algorithm is straightforward and follows immediately from the proof. Also, it should be noted that, at an additional loss of an O$(\log n)$ factor, one can possibly extend this result to separable $d$-USFP through the approach in~\citep{BFKS14} of decomposing the given instance into one in which all the demands intersect. \end{remark}
\section{From Unsplittable Flows to Electrical Flows: Application to Power Systems}\label{elecflows}
In this section, we develop a reduction procedure that can be applied to LP-based approximations for separable $d$-USFP to produce approximations for AC OPF on line distribution networks. To this end, Section~\ref{oppf} first outlines the pertinent background on OPF and formulates the problem mathematically, then Section~\ref{sec:pre} expounds the proposed reduction.
\subsection{AC OPF and its Exact Relaxation for Radial Networks}\label{oppf}
The AC OPF problem, introduced by Carpentier in 1962~\citep{C62}, lies at the heart of techniques routinely deployed in power systems for performance optimization and control (see, e.g., \citep{F12a} for a comprehensive survey on OPF). The input of OPF comprises an electrical network, such as the one depicted in Fig.~\ref{f2}, represented by an undirected graph where nodes stand for \textit{electric buses}, whereas the edges model \textit{power lines}. Among the buses, some correspond to AC generators while others to demand nodes (loads). The objective is to determine an operating point, optimal with respect to a given objective (e.g., minimizing generation cost), that satisfies user demands while meeting operational (engineering) constraints (e.g., line thermal limits) and physical properties (imposed by Ohm’s and Kirchhoff’s laws) of the electrical network.
From a computational perspective, OPF is notoriously difficult, due mainly to the existence of {\em non-convex} constraints involving complex-valued power system parameters such as current, voltage and power. Recently, there has been major progress on tackling OPF through convex relaxations~\citep{BGCL15,huang2017sufficient,gan2015exact,low2014convex1,low2014convex2}. These papers focus chiefly on {\it radial} (i.e., tree) networks, since they are fairly common in the real world, and derive sufficient conditions under which the convex relaxation is exact (i.e., equivalent to the original non-convex problem); for example, relaxing the rank-$1$ constraint in the semidefinite programming (SDP) formulation \citep{BGCL15}, or relaxing the equality constraints in the second order cone programming (SOCP) formulation \citep{huang2017sufficient,gan2015exact,low2014convex1,low2014convex2}. While these results yield polynomial time algorithms for OPF, their scope is limited to the case with {\it continuously adjustable} power injection constraints (the control variables responsible for modulating power loads are fractional and defined in terms of buses). In a more general setting, however, it is often necessary to account for {\it discrete} (or a mix of discrete and continuous) variables~\citep{6629395,7433473,9301221, 8892494}. Specifically, certain loads and devices, e.g., a TV, vacuum cleaner or washing machine, operate only under a particular supply of electricity: they are either switched on with a fixed power consumption rate or turned off. This \textit{combinatorial structure} renders a substantially more complicated instance of OPF. Concretely, as demonstrated in~\citep{khonji2016optimal}, OPF with discrete demands in a {\it delta} network is hard to approximate within any polynomial guarantees unless P=NP.
Prior studies on OPF with discrete control variables, e.g.,~\citep{7926079, 4578739, Hijazi2017}, mainly resort to heuristic techniques, which, per se, are devoid of any optimality guarantees or theoretical guidance.
\begin{figure}
\caption{An example of a radial electrical network.}
\label{f2}
\end{figure}
With the above background in view, we next provide a model of an electrical network and define OPF formally. {Recall from the convention in Sec.~\ref{sec:prelim} that given a complex number $\nu\in\mathbb C$ we let $|\nu|$ be its {\it magnitude}, $\arg(\nu)$ be the {\it phase angle} that it makes with the real axis, $\nu^*$ be its complex {\it conjugate} and write $\nu^{\rm R} \triangleq {\rm Re}(\nu)$, $\nu^{\rm I} \triangleq {\rm Im}(\nu)$ for its real and imaginary components, respectively.} Consider a radial distribution network represented by a line graph $\mathcal T=(\ensuremath{\mathcal{V}},\ensuremath{\mathcal{E}} )$, where $\ensuremath{\mathcal{V}}=\{0,1,\ldots,m\}$ denotes the electric {\it buses}, whereas $\ensuremath{\mathcal{E}} $ symbolizes the distribution {\it lines}. Each line $e\in\ensuremath{\mathcal{E}} $ is characterized by a complex {\it impedance} $z_{e}\in\mathbb C$, with a {\it non-negative} real part representing the {\it resistance} of the line (to the flow of current) and imaginary part quantifying the {\it reactance} ({\it inductance} if positive and {\it capacitance} if negative). In the setup under study, a {\it substation generator} is attached to the root of $\mathcal T$, node $0$. By convention, it is assumed that power flows from the root to the nodes. Let $\ensuremath{\mathcal{V}}^+\triangleq \ensuremath{\mathcal{V}} \setminus\{0\}$ and $\ensuremath{\mathcal{V}}_i^+ \triangleq \ensuremath{\mathcal{V}}_i\setminus\{i\}$. When referring to an edge, we shall use the (ordered) pair of subscripts $(i,j)$ and $e$ \textit{interchangeably}, where it is assumed that $i$ is the {\it parent} of $j$ in $\mathcal T$.
At each node $j\in\ensuremath{\mathcal{V}}^+$, attached is a set $\mathcal U_j$ of {\it users} (electrical loads). Let $\mathcal N \triangleq \cup_{j\in\ensuremath{\mathcal{V}}^+} \mathcal U_j$ be the set of all users ($|\mathcal N|=\tilde{ n}$), while $\mathcal N_j$ be those residing in the subpath rooted at node $j\in\ensuremath{\mathcal{V}}^+$. Among these users, some have {\it inelastic} (discrete) power demands, denoted by $\mathcal I \subseteq\mathcal N$. A discrete demand is either completely satisfied or dropped. An example is an appliance that is either switched on with a fixed power consumption rate or switched off. The rest of users, denoted by $\mathcal F \triangleq \mathcal N\backslash\mathcal I$, have elastic demands which can be partially satisfied. The demand of user $k$ is represented by a complex-valued number $s_k\in \mathbb C$; the real part $s_k^{\rm R}$ denotes the so-called {\it active} power while the imaginary part $s_k^{\rm I}$ captures the {\it reactive} power; the {\it apparent} power is defined as the magnitude $|s_k|=\sqrt{(s_k^{\rm R})^2+(s_k^{\rm I})^2}$ of $s_k$. Additionally, each user $k\in\mathcal I$ is associated with a number $u_k\in\mathbb R_+$ indicating the {\it utility} of user $k$ if her demand $s_k$ is fully satisfied.
Denote the unique path from node $j$ to the root $0$ by $\ensuremath{\mathcal{P}}_j$. For each user $k\in \mathcal U_j$, define $\ensuremath{\mathcal{P}}_k \triangleq \ensuremath{\mathcal{P}}_j$. With a slight abuse of notation, we use $\ensuremath{\mathcal{P}}_j$ interchangeably for the set of edges and the set of nodes on the path from $j$ to the root. \begin{figure}
\caption{Conservation of power flow at node $j$.}
\label{f3}
\end{figure}
A steady-state power flow in a distribution network is generally described by a system of equations. For radial networks (which include paths), these can be framed through the \textit{Branch Flow (a.k.a. DistFlow) Model} (BFM)~\citep{baran19266}. Under BFM, OPF in $\mathcal T$ is embodied by the following mixed-integer programming problem. \begin{align} \textsc{Input}:\; \quad & v_0;(\underline v_j, \overline v_j)_{j\in \ensuremath{\mathcal{V}}^+}; (\overline S_e, \overline \ell_e, z_e)_{e\in \ensuremath{\mathcal{E}} }; (s_k)_{k\in \mathcal N} \notag\\ \textsc{Output}:\; \quad &s_0 ; (v_j)_{j\in \ensuremath{\mathcal{V}}^+}; (S_{e}, \ell_{e})_{e\in \ensuremath{\mathcal{E}} }; (x_k)_{k \in \mathcal N} \notag\\ &~\notag \\ \textsc{(OPF)} \quad &\max_{\substack{s_0, x, v,\ell, S \;\;}} f_{\textsc{OPF}}(s_0, x), \notag \\
\text{s.t.} \ & \ell_{i,j} = \frac{|S_{i,j}|^2}{v_i}, ~~ \forall (i,j) \in \ensuremath{\mathcal{E}} \label{p1:con1} \\ & S_{i,j}= \sum_{k \in \mathcal U_j} s_k x_k + \sum_{t:(j,t)\in \ensuremath{\mathcal{E}} } S_{j,t} + z_{i,j}\ell_{i,j}, ~~ \forall (i,j) \in \ensuremath{\mathcal{E}} \label{p1:con2} \end{align} \begin{align} & S_{0,1} =- s_0\label{p1:con3}\\
& v_j = v_i + |z_{i,j}|^2 \ell_{i,j} - 2 \ensuremath{\mathrm{Re}}(z_{i,j}^\ast S_{i,j}), ~~ \forall (i,j) \in \ensuremath{\mathcal{E}} \label{p1:con4} \\ & \underline v_j \le v_j \le \overline v_j, ~~\forall j \in\ensuremath{\mathcal{V}}^+ \label{p1:con5} \\
& |S_e| \le \overline S_e,~|-S_e + z_e\ell_e| \le \overline S_e, ~~\forall e \in \ensuremath{\mathcal{E}} \label{p1:con6}\\ & \ell_e \le\overline \ell_e ~~\forall e \in \ensuremath{\mathcal{E}} \label{p1:con7}\\ & x_k \in \{0,1\},~~\forall k \in \mathcal I,~~ x_k \in [0,1],~~ \forall k \in \mathcal F \label{p1:con8}\\ & v_j \in \mathbb R_+,~\forall j \in \ensuremath{\mathcal{V}}^+ \ \ell_e \in \mathbb R_+,~ S_{e} \in \mathbb C \label{p1:con9},~\forall e\in \ensuremath{\mathcal{E}} \,. \end{align}
\paragraph{\bfseries The variables:} In the above formulation, the complex variable $S_{i,j}$ represents the power output at node $i$ along the edge $(i,j)$, {$z_{i,j}^*$ denotes the \textit{complex conjugate} of $z_{i,j}$}, and $v_j\triangleq |V_j|^2$ and $\ell_e\triangleq|I_e|^2$ denote the squared voltage and current magnitudes at node $j$ and link $e$, respectively. Note that in BFM, the phase angles of the voltages and currents, $\arg(V_j)$ and $\arg(I_e)$, are eliminated from the formulation. However, as proved in \citep{FL13a}, this relaxation is exact for radial networks. That is, one can (in polynomial time) uniquely recover the phase angles once a solution to the relaxation is obtained. Finally, each user demand $k\in \mathcal N$ is assigned a control variable $x_k$; if $k \in \mathcal I$, then $x_k\in\{0,1\}$, otherwise, $x_k \in [0,1]$ for $k \in \mathcal F$. Define vectors $S\triangleq(S_e)_{e\in\ensuremath{\mathcal{E}} }, \ell \triangleq (\ell_e)_{e\in\ensuremath{\mathcal{E}} }, x\triangleq(x_k)_{k\in \mathcal N}, v \triangleq (v_i)_{i\in \ensuremath{\mathcal{V}}^+}$.
\paragraph{\bfseries The objective:} OPF seeks to assign values to the control vector $x$, complex power vector $S$ as well as current and voltage magnitude vectors $\ell$ and $v$, such that the following {\em concave} non-negative objective function\footnote{Traditionally, the objective is to minimize the generation cost $c(S_{0,1}^{\rm R})$, which is typically a non-decreasing convex function of the active generation power $S_{0,1}^{\rm R}$. In the discrete demand case under study, we combine the minimization of the generation cost with the utility maximization of the satisfied demands by using the function $f_{\textsc{OPF}}(s_0,x)$, where $f_0(s_0^{\rm R})\triangleq Y-c(S_{0,1}^{\rm R})=Y-c(-s_{0}^{\rm R})$, for a sufficiently large number $Y$, is a nonnegative concave function, non-decreasing in $s_{0}^{\rm R}$.} $$f_{\textsc{OPF}}(s_0, x) = f_0(s_0^{\rm R}) + f_1\big((s_k^{\rm R} x_k)_{k\in\mathcal F} \big) + \sum_{k \in \mathcal I}u_k x_k,$$ is maximized, without violating the physical and operating constraints described below.
\paragraph{\bfseries The constraints:} Let $\underline v_j, \overline v_j \in\mathbb R^+$ be respectively the minimum and maximum allowable voltage magnitude squares at node $j$, and $\overline S_{e}, \overline \ell_{e}\in \mathbb R^+$ be the maximum allowable apparent power and current magnitude on edge $e \in \ensuremath{\mathcal{E}} $, respectively.
As customary, it is assumed that the generator voltage $v_0\in \mathbb R^+$ is given. In the above formulation, Eqn.~\raf{p1:con1} is immediate from the definition of the magnitude of the complex power $S_{i,j}=V_iI_{i,j}^*$. Eqn.~\raf{p1:con2} (in complex variables) captures the power flow conservation rule at node $j$ (see Figure~\ref{f3}). The rule equates the power output at node $i$ along the edge $(i,j)$ minus the power lost on that line ($z_{i,j}\ell_{i,j}=z_{i,j}|I_{i,j}|^2$) to the total power consumed by the loads at node $j$ (namely, $\sum_{k \in \mathcal U_j} s_k x_k$) plus the total power output on the lines outgoing from $j$ (which is $\sum_{t:(j,t)\in \ensuremath{\mathcal{E}} } S_{j,t}$). Eqn.~\raf{p1:con3} is the special case of Eqn.~\raf{p1:con2} applied to node $0$ (assuming an artificial edge $(0,0)$), where the demand $s_0$ is negated to indicate power generation (rather than consumption). Eqn.~\raf{p1:con4} is a consequence of Ohm's law: $V_i-V_j=z_{i,j}I_{i,j}$, and the definition of power $S_{i,j}=V_iI_{i,j}^*.$ The inequalities in~\raf{p1:con5} and \raf{p1:con7} limit the voltage and current magnitudes at each node and on each line, respectively, to the allowable range. While those in~\raf{p1:con6} cap the apparent power on each link in both directions by the capacity of the link: $|S_{i,j}|\le \overline S_{i,j}$ and $|S_{j,i}|\le \overline S_{i,j}$, where $S_{j,i}=V_jI_{j,i}^*=-V_jI_{i,j}^*=-(V_i-z_{i,j}I_{i,j})I_{i,j}^*=-S_{i,j}+z_{i,j}|I_{i,j}|^2$.
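The consistency of the BFM relations~\raf{p1:con1} and~\raf{p1:con4} with the underlying circuit laws can be verified numerically on a toy two-bus feeder (all numbers below are illustrative, chosen only for the check):

```python
# Numeric sanity check of the BFM relations on a toy two-bus feeder.
# With root voltage V0, line current I and impedance z (illustrative values):
V0 = 1.0 + 0.0j
I  = 0.3 - 0.1j
z  = 0.05 + 0.02j

S   = V0 * I.conjugate()          # power entering the line, S_{i,j} = V_i I_{i,j}^*
v0  = abs(V0) ** 2
ell = abs(S) ** 2 / v0            # Eqn. (p1:con1); equals |I|^2
assert abs(ell - abs(I) ** 2) < 1e-12

V1 = V0 - z * I                   # Ohm's law: V_i - V_j = z_{i,j} I_{i,j}
v1 = v0 + abs(z) ** 2 * ell - 2 * (z.conjugate() * S).real   # Eqn. (p1:con4)
assert abs(v1 - abs(V1) ** 2) < 1e-12
```

The second assertion expands $|V_0 - zI|^2 = v_0 + |z|^2\ell - 2\,{\rm Re}(z^*S)$, which is exactly the voltage-drop equation.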
\subsubsection{Assumptions} In tackling OPF, we shall rely on the following practical assumptions. \begin{itemize}
\setlength{\itemindent}{.1in}
\item [{\sf A0:}] $f_0(\cdot)$ is non-decreasing in $s^{\rm R}_0$. {Recall that by definition $f_0(s_0^{\rm R}) = Y-c(-s_{0}^{\rm R})$, where $c(-s_{0}^{\rm R}) = c(S_{0,1}^{\rm R})$ captures the active power generation cost. As is customary in the power systems literature \citep{huang2017sufficient, FL13a, gan2015exact, 6290429}, we treat the generation cost $c(\cdot)$ as a non-decreasing convex function of $S_{0,1}^{\rm R}$. Consequently, one can set $Y$ to be a sufficiently large number such that $f_0(\cdot)$ is non-negative and non-decreasing in $s_{0}^{\rm R}$.}
\item [{\sf A1:}] $z_e\ge 0$ for all $e\in\ensuremath{\mathcal{E}} $, which naturally holds in distribution networks.
\item[{\sf A2:}] $ \underline v_j\le v_0 \le \overline v_j$ for all $j\in\ensuremath{\mathcal{V}}^+$. Typically in a distribution network, $v_0=1$ (per unit), $\underline v_{j}=(0.95)^2$ and $\overline v_{j}=(1.05)^2$; in other words, a $5\%$ deviation from the nominal voltage is allowed.
\item [{\sf A3:}] $\ensuremath{\mathrm{Re}}(z_{e}^* s_k)\ge 0 \text{ for all } k\in \mathcal N,~e\in \ensuremath{\mathcal{E}} .$
Equivalently, the angle difference between $z_e$ and $s_k$ is at most $ \tfrac{\pi}{2}$.
\item [{\sf A4:}] $\big|\arg(s_k) - \arg(s_{k'})\big| \le \tfrac{\pi}{2}$ for any $k, k' \in \mathcal N$. In practical settings, the so-called {\it load power factor} usually varies between $0.8$ and $1$ \citep{1339347}, and thus the maximum phase angle difference between any pair of demands is restricted to the range $[0, 36^{\circ}]$.
We also assume $s^{\rm R}_k\ge 0$ for all $k\in \mathcal N$, which always holds in power systems (assuming no power generation at non-root nodes in $\ensuremath{\mathcal{V}}^+$).
\item [{\sf A5:}] The range of impedances and demands is quasi-polynomial, that is,
{\small$$
\max\left\{\frac{\max_{e\in\ensuremath{\mathcal{E}} }z_{e}^{\rm R}}{\min_{e:z_e^{\rm R}>0}z_{e}^{\rm R}},\frac{\max_{e\in\ensuremath{\mathcal{E}} } z_{e}^{\rm I}}{\min_{e:z_e^{\rm I}>0}z_{e}^{\rm I}},\frac{\max_{k\in\mathcal N}s_{k}^{\rm R}}{\min_{k:s_k^{\rm R}>0}s_{k}^{\rm R}},\frac{\max_{k\in\mathcal N}s_{k}^{\rm I}}{\min_{k:s_k^{\rm I}>0}s_{k}^{\rm I}}\right\}=2^{\operatorname{polylog}(m,\tilde{ n})}
.
$$} \end{itemize}
Assumptions {\sf A3} and {\sf A4} are motivated, from a theoretical point of view, by the inapproximability results in \citep{khonji2016optimal} (if either one is invalid, the problem cannot be approximated within any polynomial factor unless P=NP). Assumption {\sf A3} holds in reasonable practical settings~\citep{huang2017sufficient}. As clarified in the next subsection, by performing an axis rotation, {\sf A4} implies $s_k\ge 0$. Clearly, under this and {\sf A1}, the reverse power constraint in~\raf{p1:con6} is implied by the forward power one ($|S_e|\le\overline S_e$). Similarly, under {\sf A1}, {\sf A2} and {\sf A3}, the voltage upper bounds in \raf{p1:con5} can be dropped, as elaborated in subsection~\ref{dolya}. Lastly, {\sf A5} is required merely for the analysis of the featured approximations and may possibly be bypassed with techniques from~\citep{BGKMW15}.
\subsubsection{Rotational Invariance of {\sc OPF}}\label{dolya} In the lemma below, it is argued that the complex quantities in the OPF formulation (namely, $z_e, s_k$) can be rotated by a fixed angle without affecting the problem's structure. This property allows us to replace {\sf A0} and {\sf A4} by the ones listed below. \begin{itemize}
\setlength{\itemindent}{.13in}
\item [{\sf A0$'$}:] $f_0(s_0^{\rm R} \cos \phi + s_0^{\rm I} \sin \phi)$ is non-decreasing in $s_0^{\rm R}, s_0^{\rm I}$.
\item [{\sf A4$'$:}] $s_k \ge 0$ for all $k\in \mathcal N$. \end{itemize} Note that {\sf A1} and {\sf A4$'$} already imply {\sf A3}. \begin{lemma}\label{lem:rot}
Assume {\sf A4} and suppose that $s_k$, for all $k\in\mathcal N$, and $z_e$, for all $e\in\ensuremath{\mathcal{E}} $, are rotated by an angle $\phi \triangleq \min\{\max_{k\in \mathcal N} -\arg(s_k),0\}\in[0,\frac{\pi}{2}]$. Denote the resulting {\sc OPF} problem by {\sc OPF$^\phi$}:
\begin{align}
\textsc{(OPF$^\phi$)} \quad &\max_{\substack{s_0, x, v,\ell, S \;\;}} f_{\textsc{OPF}}(s_0e^{-{\bf i}\phi}, x), \notag \\
\text{s.t.} \ &\raf{p1:con1}-\raf{p1:con9}, \text{ with $z_e$ replaced by $z_ee^{{\bf i}\phi}$, and $s_k$ replaced by $s_ke^{{\bf i}\phi}$ } . \notag
\end{align}
Then {\sc OPF$^\phi$} is equivalent to {\sc OPF} and satisfies assumptions {\sf A0$'$}, {\sf A1}, {\sf A2}, {\sf A3} and {\sf A4$'$}.
\end{lemma} \begin{proof}
One can easily show that a feasible solution $F=(s_0,x,v,\ell,S)$ to {\sc (OPF$^\phi$)} can be converted to a feasible solution $\bar{\bar F}=(\bar{\bar s}_0, x,v,\ell,\bar{\bar S})$ to {\sc OPF}, such that $\bar{\bar S}_{i,j}\triangleq S_{i,j} e^{-{\bf i}\phi}, \bar{\bar s}_0\triangleq s_0 e^{-{\bf i} \phi}$ are rotated back by $\phi$, and vice versa.
Moreover, the two objective functions are equal. It is immediate to see that assumptions {\sf A0$'$}, {\sf A1}, {\sf A2}, {\sf A3}, and {\sf A4$'$} hold for {\sc OPF$^\phi$}.
$\blacksquare$ \end{proof}
Hereafter, we implicitly consider the rotated problem which, with a slight abuse of notation, is simply denoted by OPF.
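The invariance underlying Lemma~\ref{lem:rot} admits a one-line numeric check (illustrative numbers): quantities of the form ${\rm Re}(z^*s)$, $|z|$ and $|s|$ are unchanged when $z$ and $s$ are both rotated by the same angle $\phi$, since $(ze^{{\bf i}\phi})^*(se^{{\bf i}\phi})=z^*s$.

```python
import cmath

# Rotating z and s by the same angle phi leaves Re(z^* s), |z|, |s| unchanged,
# which is the invariance used in the rotation lemma (illustrative numbers).
z   = 0.05 + 0.02j
s   = 2.0 + 1.0j
phi = 0.4

rot = cmath.exp(1j * phi)
z_r, s_r = z * rot, s * rot

assert abs((z_r.conjugate() * s_r).real - (z.conjugate() * s).real) < 1e-12
assert abs(abs(z_r) - abs(z)) < 1e-12
assert abs(abs(s_r) - abs(s)) < 1e-12
```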
\subsubsection{Exact Second Order Cone Relaxation}\label{sec:rlx}
As observed from the preceding formulation, OPF's feasible set is non-convex due to the quadratic equality constraint~\raf{p1:con1}. Replacing this by $\ell_{i,j} \ge \frac{|S_{i,j}|^2}{v_i}$, one obtains an SOCP relaxation of \textsc{OPF}\footnote{Note that Cons.~\raf{p2:con1} can be rewritten as \begin{align*}
\Bigg\|\left(\begin{array}{c} 2S_{i,j}^{\rm R}\\ 2S_{i,j}^{\rm I}\\ \ell_{i,j}-v_i \end{array}
\right)\Bigg\|_2 \le \ell_{i,j}+v_i\,. \end{align*}}, defined below and denoted by \textsc{cOPF}. \begin{align} \textsc{(cOPF)} &\max_{\substack{s_0, x, v,\ell, S \;\;}} f_{\textsc{OPF}}(s_0, x) \notag \\ \text{s.t.} \ &\raf{p1:con2}-\raf{p1:con9},\notag \\
& \ell_{i,j} \ge \frac{|S_{i,j}|^2}{v_i},~\forall (i,j) \in \ensuremath{\mathcal{E}} . \label{p2:con1} \end{align} Let {\sc rcOPF} be the relaxation of {\sc cOPF} where the integrality constraints in~\raf{p1:con8} are replaced by $x_k \in [0,1]$ for all $k \in \mathcal N$. For a given $\hat x\in[0,1]^{\tilde{ n}}$, define by {\sc cOPF}$[\hat x]$ the restriction of {\sc cOPF} where $x=\hat x$.
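The footnote's second-order-cone rewriting can be verified numerically: for $\ell, v > 0$, $\ell \ge |S|^2/v$ holds iff $\|(2S^{\rm R}, 2S^{\rm I}, \ell-v)\|_2 \le \ell+v$, since $(\ell+v)^2-(\ell-v)^2 = 4\ell v$. The sketch below checks this equivalence on random draws (ours, for illustration only).

```python
import math, random

# Check the SOC rewriting: for ell, v > 0,
#   ell >= |S|^2 / v   <=>   ||(2*SR, 2*SI, ell - v)||_2 <= ell + v,
# which follows from (ell + v)^2 - (ell - v)^2 = 4*ell*v.
rng = random.Random(1)
for _ in range(1000):
    SR, SI = rng.uniform(-2, 2), rng.uniform(-2, 2)
    ell, v = rng.uniform(0.01, 3), rng.uniform(0.01, 3)
    if abs(ell - (SR**2 + SI**2) / v) < 1e-9:
        continue                     # skip numerically borderline draws
    lhs = ell >= (SR**2 + SI**2) / v
    rhs = math.hypot(math.hypot(2*SR, 2*SI), ell - v) <= ell + v
    assert lhs == rhs
```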
Recently, studies in~\citep{low2014convex2,huang2017sufficient,gan2015exact} presented sufficient conditions for \textsc{cOPF} to have an optimal solution in which Cons.~\raf{p2:con1} holds with equality. For current purposes, we make use of the following lemma, which is a slightly simplified version of that in \citep{huang2017sufficient} and is proved in Section~\ref{lem44}.
\begin{lemma}\label{lem:exact} Under assumptions~{\sf A0}, {\sf A1}, {\sf A2}, and {\sf A3}, for any given $x'\in[0,1]^{\tilde{ n}}$, there exists an optimal solution $F'=(s_0', x', v',\ell',S')$ of {\sc cOPF}$[x']$ that satisfies $\ell'_{i,j} = \frac{|S_{i,j}'|^2}{v_i'}$ for all $(i,j)\in \ensuremath{\mathcal{E}} .$ Such a solution can be found in polynomial time. \end{lemma}
\subsection{Reduction Scheme}\label{sec:pre}
Having defined OPF formally, we next present the developed technique that obtains approximations for OPF on path distribution networks from LP-based approximations intended for separable $d$-USFP. \begin{lemma}\label{feas}
Let $F'=\big(s'_0,x', v', \ell', S'\big)$ be a feasible solution for \textsc{rcOPF}. Let $\bar x\in[0,1]^{\tilde{ n}}$ be such that
\begin{align}
\sum_{k\in\mathcal I}u_k\bar x_k&\ge \sum_{k\in\mathcal I}u_k x_k'-\varepsilon f_{\textsc{OPF}}(s_0',x'), \text{ for some }\varepsilon\in[0,1] \label{feas:con0}\\
\sum_{k \in \mathcal N} \ensuremath{\mathrm{Re}}\Big( \sum_{(h,t)\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^*_{h,t} s_k\Big) \bar x_k &\le \sum_{k \in \mathcal N} \ensuremath{\mathrm{Re}}\Big( \sum_{(h,t)\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^*_{h,t} s_k\Big) x'_k\quad \forall (i,j) \in \ensuremath{\mathcal{E}} , \label{feas:con1}\\
\sum_{k \in \mathcal N_j} s_k^{\rm R} \bar x_k &\le \sum_{k \in \mathcal N_j} s_k^{\rm R} x'_k\quad \forall (i,j)\in \ensuremath{\mathcal{E}} , \label{feas:con2}\\
\sum_{k \in \mathcal N_j} s_k^{\rm I} \bar x_k &\le\sum_{k \in \mathcal N_j} s_k^{\rm I} x_k'\quad \forall (i,j)\in \ensuremath{\mathcal{E}} ,\label{feas:con3}\\
\bar x_k&= x_k'\quad \forall k\in\mathcal F,\label{feas:con4}
\end{align}
where $f_{\textsc{OPF}}(\cdot)$ is the objective function of OPF. Then, under assumptions~{\sf A0$'$}, {\sf A1}, {\sf A2}, {\sf A3} and {\sf A4$'$}, \textsc{rcOPF$[\bar x]$} has a feasible solution $\tilde F=\big(\tilde s_0,\tilde x, \tilde v, \tilde \ell, \tilde S\big)$ such that $f_{\textsc{OPF}}(\tilde s_0,\tilde x)\ge (1-\varepsilon) f_{\textsc{OPF}}(s_0',x')$, where \textsc{rcOPF$[\bar x]$} denotes the restriction of {\sc rcOPF} with $x$ set to $\bar x$. \end{lemma}
Observe that, in Lemma~\ref{feas} (which is proved in Section~\ref{lprf}), the inequalities \raf{feas:con1}, \raf{feas:con2} and \raf{feas:con3} taken together form a single-source separable $d$-USFP with $d=3$. Indeed, for $k\in\mathcal I$ and $e=(i,j)\in\ensuremath{\mathcal{E}} $, define \begin{align*} f_k^1(e)=\ensuremath{\mathrm{Re}}\Big( \sum_{e'\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^*_{e'} s_k\Big), \quad f_k^2(e)=\left\{\begin{array}{ll} s_k^{\rm R}&\text{ if }k\in\mathcal N_j \\ 0&\text{otherwise,} \end{array} \right.\quad f_k^3(e)&=\left\{\begin{array}{ll} s_k^{\rm I}&\text{ if }k\in\mathcal N_j \\ 0&\text{otherwise.} \end{array} \right. \end{align*} Note that $f^1_k$ is monotone non-decreasing on $\ensuremath{\mathcal{E}} $ when ordered by distance from the root, while $f^2_k$ and $f^3_k$ are monotone non-decreasing considering the reverse order on $\ensuremath{\mathcal{E}} $. Moreover, these functions are of the form~\raf{funct} (i.e., separability condition in $d$-USFP). For $r=2$ (similarly, for $r=3$), set $T_2=1$, $a_k^{2,1}:=s_k^{\rm R}$, $e_k^2=\hat e_k^2:= e_{j(k)}$, $b^{2,1}(e):=1$. As for $r=1$, note that \begin{align} \label{fff0} f_k^1(e)=\Big(\sum_{e'\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^{\rm R}_{e'}\Big) s^{\rm R}_k+\Big(\sum_{e'\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^{\rm I}_{e'}\Big) s^{\rm I}_k \text{ for } \forall e=(i,j)\in\ensuremath{\mathcal{E}} . \end{align} Thus, setting $T_1=2$, $a_k^{1,1}:= \ensuremath{\mathrm{Re}}( s_k)$, $a_k^{1,2}:=\ensuremath{\mathrm{Im}}( s_k)$, $e^1_k:=e_1$, $\hat e^1_k:=e_{j(k)}$, $\tilde{b}^{1,1}((i,j)):=\sum_{e'\in \ensuremath{\mathcal{P}}_j} \ensuremath{\mathrm{Re}}( z_{e'})$ and $\tilde{b}^{1,2}((i,j)):=\sum_{e'\in \ensuremath{\mathcal{P}}_j} \ensuremath{\mathrm{Im}}( z_{e'})$ writes $f_k^1(e)$ in the form \raf{funct}.
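The construction of the three edge-demand functions just described can be sketched for a toy line network $0\mbox{--}1\mbox{--}\cdots\mbox{--}m$ (a sketch with assumed numbers; the function factory and variable names are ours): $f^1_k$ accumulates ${\rm Re}(z^*s_k)$ over the path prefix $\ensuremath{\mathcal{P}}_k\cap\ensuremath{\mathcal{P}}_j$, while $f^2_k, f^3_k$ are step functions active on the edges below the user's node.

```python
# Illustrative construction of the edge-demand functions f_k^1, f_k^2, f_k^3
# on a toy line network 0-1-2-...-m.  Edge e_j = (j-1, j); user k sits at
# node j_k with complex demand s_k (assumed numbers only).

def make_demand_functions(z, j_k, s_k):
    """z[j] is the impedance of edge e_j (1-indexed via z[1..m])."""
    def f1(j):   # Re(z^* s_k) accumulated over P_k ∩ P_j = {e_1, ..., e_min(j, j_k)}
        zsum = sum(z[t] for t in range(1, min(j, j_k) + 1))
        return (zsum.conjugate() * s_k).real
    def f2(j):   # s_k^R if user k lies in the subpath rooted at node j, else 0
        return s_k.real if j <= j_k else 0.0
    def f3(j):
        return s_k.imag if j <= j_k else 0.0
    return f1, f2, f3

z = [None, 0.05 + 0.02j, 0.04 + 0.01j, 0.06 + 0.03j]   # impedances of e_1..e_3
f1, f2, f3 = make_demand_functions(z, j_k=2, s_k=2.0 + 1.0j)

# f1 is non-decreasing towards the leaves, while f2 and f3 are non-decreasing
# in the reverse order, matching the monotonicity noted in the text.
assert f1(1) <= f1(2) <= f1(3)
assert f2(1) >= f2(2) >= f2(3)
```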
The above arguments, coupled with Lemma~\ref{feas}, imply the following theorem.
\begin{theorem} Under assumptions~{\sf A0$'$}, {\sf A1}, {\sf A2}, {\sf A3}, {\sf A4$'$}, and {\sf A5}, there is a quasi-polynomial time algorithm that for any $\varepsilon \in (0,1)$ produces a $(1- \varepsilon)$-approximation for \textsc{OPF} on line networks with a single substation generator. \label{qptasopf} \end{theorem} \begin{proof}
Let $\widehat{\ensuremath{\textsc{Opt}}}$ be the optimal objective value of OPF. Consider the approximation scheme detailed in Alg.~\ref{alg:qptas-}, which is the analog of Alg.~\ref{alg:qptas} for OPF.
\begin{algorithm}[!htb]
\caption{ {\sc QPTAS-OPF} \label{alg:qptas-} }
\begin{algorithmic}[1]
\Require An approximation parameter $\epsilon\in(0,1)$; {\sc OPF} input $v_0;(\underline v_j, \overline v_j)_{j\in \ensuremath{\mathcal{V}}^+}; (\overline S_e, \overline \ell_e, z_e)_{e\in \ensuremath{\mathcal{E}} }$
\Ensure A solution $\hat F$ to {\sc OPF} such that $f_{\textsc{OPF}}(\hat F) \ge (1-O(\epsilon)) \widehat{\ensuremath{\textsc{Opt}}}$
\For {each selection {$\Big(\mathcal L = (\mathcal L^q)_{q\in\ensuremath{\mathcal{Q}}}, h = \big(h^q=(h^{q,p,r})_{p\in[P_r],~r\in[d]}\big)_{q\in\ensuremath{\mathcal{Q}}}\Big)$ such that} $\mathcal L^q\subseteq \mathcal I$, $|\mathcal L^q| \le \frac{\sum_{r=1}^dP_r}{\epsilon^2}$ and $h^{q,p,r}\in F^r$ } \label{alg:guess-}
\If{{\sc rcOPF$[\mathcal L,h]$} is feasible} \label{qptas:mc-feas-}
\State $F' \leftarrow$ Solution of {\sc rcOPF$[\mathcal L,h]$} \label{qptas:cvx}
\For{$q\in\ensuremath{\mathcal{Q}}$}\label{qptas:Q-}
\State Let $\mathcal S^q$ be given by~\raf{small}
\For {every $(h,\epsilon)$-restricted profile $g^q$}\label{qptas:prof-}
\State$ (\hat x_k)_{k\in \mathcal S^q} \leftarrow $ Integral vector returned by applying Lemma~\ref{l4} with vector $h^q$, and {$(\tilde x_k)_{k\in \mathcal S^q}=(x'_k)_{k\in \mathcal S^q}$} \label{qptas:mc-round-}
\EndFor
\EndFor
\State$\bar x_k\leftarrow \left\{\begin{array}{ll}
\hat x_k &\text{ if }k\in \bigcup_{q\in\ensuremath{\mathcal{Q}}}\mathcal S^q, \\
x'_k&\text { if }{k\in \mathcal N\setminus(\bigcup_{q\in\ensuremath{\mathcal{Q}}}\mathcal S^q)}
\end{array}\right.$ \label{qptas:mc-round--}
\State $\tilde F \leftarrow$ Solution of \textsc{cOPF$[\bar x]$} \label{qptas:r3}
\If{$f_{\textsc{OPF}}(\tilde F) > f_{\textsc{OPF}}(\hat F')$ }
\State $ \hat F' \leftarrow \tilde F$
\EndIf
\EndIf
\EndFor
\State Apply Lemma~\ref{lem:exact} to convert $\hat F'$ to a feasible solution $\hat F$ for OPF
\State \Return $\hat F$
\end{algorithmic}
\end{algorithm}
Similar to Alg.~\ref{alg:qptas}, the algorithm guesses the set of large demands $\mathcal L^q\subseteq\mathcal I^q$ in the optimal solution for each group $q\in\ensuremath{\mathcal{Q}}$, and the peaks $h^{q,p,r}$, within a factor of $1+\epsilon$, of the small demands in the optimal solution within the interval $\ensuremath{\mathcal{E}} _p^r$. Let $\mathcal L=(\mathcal L^q)_{q\in\ensuremath{\mathcal{Q}}}$ and $h^q=(h^{q,p,r})_{p\in[P_r],~r\in[d]}$, where $h^{q,p,r}\in F^r$. Define a restrictive version of {\sc rcOPF}, denoted by {\sc rcOPF$[\mathcal L,h]$}, which enforces that $x_{k} = 1$ for all ${k} \in \mathcal L^q$ and $q\in\ensuremath{\mathcal{Q}}$, and that the peak total contribution of the small demands in group $q$ within the interval $\ensuremath{\mathcal{E}} _p^r$ is at most $h^{q,p,r}$: $\sum_{k\in\mathcal S^q}\overline f_k^{p,r} x_k\le h^{q,p,r}$.
\begin{align}
\textsc{(rcOPF}[\mathcal L,h]\textsc{)}\quad &\max_{\substack{s_0, x,v, \ell, S}} f_{\textsc{OPF}}(s_0,x), \notag \\
\text{s.t.} \ & \raf{p1:con2}-\raf{p1:con7},\raf{p1:con9},\raf{p2:con1}\\
&\sum_{k\in\mathcal S^q}\overline{f}_k^{p,r}x_k\le h^{q,p,r},~\forall p\in[P_r],~\forall r\in[d],~\forall q\in\ensuremath{\mathcal{Q}} \label{e111}\\
& x_k = 0,\quad \forall k \in \mathcal I\setminus \bigcup_{q\in\ensuremath{\mathcal{Q}}}(\mathcal L^q\cup\mathcal S^q)\label{e112-}\\
& x_k = 1,\quad \forall k \in \mathcal L^q,~\forall q\in\ensuremath{\mathcal{Q}}\label{e112}\\
& x_k \in [0,1],\quad \forall k \in \mathcal F\cup (\bigcup_{q\in\ensuremath{\mathcal{Q}}}\mathcal S^q). \label{e113}
\end{align}
Here, the set of small demands within group $q\in\ensuremath{\mathcal{Q}}$ is
\begin{align}\label{small-}
\mathcal S^q=\left\{k\in\mathcal I^q:~\underline{f}_k^{p,r}\le B^{q,p,r}\text{ for all } p\in[P_r],~r\in[d]\right\},
\end{align}
where $B^{q,p,r}=\epsilon^2\left[h^{q,p,r}+\sum_{k\in\mathcal L^q}\overline{f}_k^{p,r}\right].$
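The membership test defining $\mathcal S^q$ in \raf{small-} is straightforward to implement. A minimal sketch, with a hypothetical data layout in which `f_lo[k][r][p]` and `f_hi[k][r][p]` stand for $\underline f_k^{p,r}$ and $\overline f_k^{p,r}$ and `h[r][p]` for the guessed peak $h^{q,p,r}$:

```python
def small_demands(group, f_lo, f_hi, h, large, P, eps):
    """Return the demands k in the group whose lower-bound contribution
    f_lo[k][r][p] stays below the threshold
    B^{q,p,r} = eps^2 * (h^{q,p,r} + sum of upper-bound contributions of
    the guessed large demands) on every interval (p, r)."""
    small = []
    for k in group:
        if all(
            f_lo[k][r][p] <= eps ** 2 * (h[r][p] + sum(f_hi[j][r][p] for j in large))
            for r in range(len(P))
            for p in range(P[r])
        ):
            small.append(k)
    return small
```

With $d=1$, a single interval, no guessed large demands and $\epsilon=0.1$, a demand with lower bound $0.01$ against $h=10$ qualifies as small, while one with lower bound $5$ does not.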
Given a feasible solution $ F'=\big(s'_0,x', $ $ v', \ell', S'\big)$ to {\sc rcOPF$[\mathcal L,h]$}, Alg.~\ref{alg:qptas-} applies Lemma~\ref{l4} with $\tilde{x}=x'$. By the lemma, one can find (in polynomial time) an integral solution $\hat x$ satisfying conditions (i) and (ii). Next, the algorithm recalculates $s_0, S, \ell, v$ utilizing the program~\textsc{cOPF$[\bar x]$} given in Section~\ref{sec:rlx}, and then applies Lemma~\ref{lem:exact} to obtain a feasible solution to OPF.
Define
{\footnotesize
\begin{align}
\label{qp-bd}
\tilde{M}:=&\max\left\{\frac{\overline z}{\underline z},\max_{r}\frac{\overline f^r}{\underline f^r}\right\} \nonumber\\
&=\max\left\{\frac{\overline z}{\underline z},\max_{k,k'\in\mathcal I}\frac{s_k^{\rm R}}{ s_{k'}^{\rm R}},\max_{k,k'\in\mathcal I}\frac{s_k^{\rm I}}{ s_{k'}^{\rm I}},\max_{k,k'\in\mathcal I,~(i,j),(i',j')\in\ensuremath{\mathcal{E}} }\frac{\ensuremath{\mathrm{Re}}\Big( \sum_{e'\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^*_{e'} s_k\Big)}{\ensuremath{\mathrm{Re}}\Big( \sum_{e'\in \ensuremath{\mathcal{P}}_{k'} \cap \ensuremath{\mathcal{P}}_{j'}} z^*_{e'} s_{k'}\Big)}\right\},
\end{align}
}
where $\underline z:=\min\{\min_{e:\ensuremath{\mathrm{Re}}(z_e)>0}\ensuremath{\mathrm{Re}}(z_{e}),\min_{e:\ensuremath{\mathrm{Im}}(z_e)>0}\ensuremath{\mathrm{Im}}(z_{e})\}$ and $\overline z:=\max_{e\in\ensuremath{\mathcal{E}} }$ $\max\{\ensuremath{\mathrm{Re}}(z_{e}),\ensuremath{\mathrm{Im}}(z_{e})\}$.
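The quantities $\underline z$ and $\overline z$ entering $\tilde M$ can be computed in one pass over the line impedances. A minimal sketch (hypothetical helper; non-positive real or imaginary parts are excluded from the minimum, exactly as in the definition of $\underline z$):

```python
def z_spread(z_list):
    """Ratio zbar / zlow from the definition of M~: the largest
    resistance/reactance over the smallest strictly positive one."""
    parts = [p for z in z_list for p in (z.real, z.imag)]
    z_low = min(p for p in parts if p > 0)   # underline z
    z_bar = max(parts)                       # overline z
    return z_bar / z_low
```

For impedances $0.1+0.2\mathrm{i}$ and $0.4$, the zero reactance is ignored and the spread is $0.4/0.1=4$.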
In what follows, we prove that, for any fixed $\varepsilon\in(0,1)$, Alg.~\ref{alg:qptas-} arrives at a $(1- \varepsilon)$-approximation in time $(\frac{\tilde{n}\log (\tilde{ n}m\tilde{ M})}{\varepsilon})^{O(\log^9( \frac{\tilde{n}m\tilde{M}}{\varepsilon})/\varepsilon^2)}$.
Let $\epsilon:=\frac{\varepsilon}{3(2\beta+1)}$, where $\beta=\max_{r\in\mathcal H^q}2\left(2C_r+\alpha P_r\right)=O(\log^2 (m\tilde{M}))$.
The number of possible choices for each $\mathcal L^q$ in step~\ref{alg:guess-} of Alg.~\ref{alg:qptas-} is at most $\tilde{n}^{\sum_{r=1}^dP_r/\epsilon^2}$, where $\tilde{ n}=|\mathcal N|$. With $d=3$, $P_1=O(\log (m\tilde{M}))$, $P_2=P_3=1$, $T_1=2$, $T_2=T_3=1$, $Q\leq \overline Q^{\sum_{r=1}^dT_r}$, and $\overline Q=O(\log\frac{\tilde{n}\tilde{M}}{\epsilon})$, the number of possible choices for $\mathcal L$ is thus at most \begin{align}\label{ch-L}
\tilde{n}^{\sum_{r=1}^dP_rQ/\epsilon^2}\le \tilde{n}^{\sum_{r=1}^dP_r\overline Q^{{\sum_{r=1}^dT_r}}/\epsilon^2}= \tilde{n}^{O(\log(m\tilde{M})\log^4( \frac{\tilde{ n}\tilde{ M}}{\epsilon})/\epsilon^2)}.
\end{align} The number of choices for each $h^q=(h^{q,p,r})_{p\in[P_r],~r\in[d]}$ is $$\overline F^{\sum_{r=1}^dP_r}=O\Bigg(\Big(\frac{\log(\tilde{n}\tilde{M})}{\epsilon}\Big)^{\log (m\tilde{M})}\Bigg)\,,$$ and the number of choices for $Q$ in step~\ref{qptas:Q-} is
\begin{align}
\label{ch-Q}
\overline{Q}^{\sum_{r=1}^d {T}_r}\le\log^4\left( \frac{\tilde{ n}\tilde{ M}}{\epsilon}\right),
\end{align}
giving at most
\begin{align}
\label{ch-h}
O\left(\left(\left(\frac{\log(\tilde{n}\tilde{ M})}{\epsilon}\right)^{\log (m\tilde{ M})}\right)^{ Q}\right)&=O\left(\left(\left(\frac{\log(\tilde{ n}\tilde{ M})}{\epsilon}\right)^{\log (m\tilde{ M})}\right)^{\overline Q^{\sum_{r=1}^d {T}_r}}\right)\nonumber\\
&=O\left(\left(\frac{\log(\tilde{ n}\tilde{ M})}{\epsilon}\right)^{\log (m\tilde{ M})\log^4( \frac{\tilde{ n}\tilde{ M}}{\epsilon})}\right)
\end{align} choices for $h=(h^q)_{q\in\ensuremath{\mathcal{Q}}}$ in step~\ref{alg:guess-}.
The number of choices for the $(h,\epsilon)$-restricted profiles in step~\ref{qptas:prof-} is bounded from above by
$
m^{\sum_{r=1}^dP_r/\epsilon}=m^{O(\log{(m\tilde{ M})}/\epsilon)}.
$
Thus, the bound on the running time follows from this and~\raf{ch-L},\raf{ch-h},\raf{ch-Q}.
We now argue that the solution $\hat F$ output by Alg.~\ref{alg:qptas-} is a $(1-O(\epsilon))$-approximation for {\sc OPF}.
Let $F^*=\big(s^*_0,x^*, v^*, \ell^*, S^*\big)$ be an optimal solution for {\sc OPF} of objective value $\widehat{\ensuremath{\textsc{Opt}}} = f_{\textsc{OPF}}(F^*)$. By the definition of $\hat{\mathcal I}$, we have
\begin{align}\label{en0}
\sum_{k\in\mathcal I\setminus\hat{\mathcal I}}u_k\le\epsilon\widehat{\ensuremath{\textsc{Opt}}}\le\epsilon f_{\textsc{OPF}}(F^*).
\end{align}
Define $\mathcal T^*\triangleq\{k \in \hat{\mathcal I} \mid x^*_k = 1\}$ and $(h^*)^{q,p,r}=\sum_{k\in\mathcal T^*\cap\mathcal I^q}\overline f_k^{p,r}$, for $p\in[P_r],$ $r\in[d]$ and $q\in\ensuremath{\mathcal{Q}}$.
Let $(\mathcal L^*)^q:=\{k\in \mathcal I^q\cap\mathcal T^*:~\underline{f}_k^{p,r}>\epsilon^2(h^*)^{q,p,r}\text{ for some } p\in[P_r],~\text{and some } r\in[d]\}$ be the set of large demands within group $\mathcal I^{q}$ in the optimal solution, and let $(\mathcal S^*)^q:=\mathcal I^q\cap\mathcal T^*\setminus(\mathcal L^*)^q$ be the set of ``small'' demands within the same group. Note by this definition that $|(\mathcal L^*)^q|\le\frac{\sum_{r=1}^dP_r}{\epsilon^2}$, and thus $\mathcal L^*=((\mathcal L^*)^q)_{q\in\ensuremath{\mathcal{Q}}}$ and $h=(h^q)_{q\in\ensuremath{\mathcal{Q}}}$ will be one of the guesses considered by the algorithm in step~\ref{alg:guess-}. Let us focus on this particular iteration of the loop in step~\ref{alg:guess-}. Let $h^{q,p,r}=(1+\epsilon)^{\ell'}\underline{f}^{r}$, where $\ell'$ is the smallest integer (possibly $-\infty$) such that $h^{q,p,r}+\sum_{k\in\mathcal L^q}\overline f_k^{p,r}\ge (h^*)^{q,p,r}$. Note that $h^{q,p,r}\in F^r$, and
\begin{align}\label{rel}
\frac{1}{1+\epsilon}h^{q,p,r}+\sum_{k\in(\mathcal L^*)^q}\overline f_k^{p,r}\le (h^*)^{q,p,r}\le h^{q,p,r}+\sum_{k\in(\mathcal L^*)^q}\overline f_k^{p,r}.
\end{align}
Moreover, for any $k\in(\mathcal S^*)^q$, $q\in\ensuremath{\mathcal{Q}}$, $p\in[P_r]$, and $r\in[d]$, we have by \raf{rel},
$$
\underline f_k^{p,r}\le\epsilon^2 (h^*)^{q,p,r}\leq\epsilon^2\left(h^{q,p,r}+\sum_{k\in(\mathcal L^*)^q}\overline f_k^{p,r}\right),
$$
and hence $(\mathcal S^*)^q\subseteq\mathcal S^q$. Note also that
\begin{align}
\label{bbb2}
B^{q,p,r}&=\epsilon^2\left[h^{q,p,r}+\sum_{k\in(\mathcal L^*)^q}\overline{f}_k^{p,r}\right]\nonumber\\
&\le\epsilon^2\left[h^{q,p,r}+(1+\epsilon)\sum_{k\in(\mathcal L^*)^q}\overline{f}_k^{p,r}\right]\le \epsilon^2(1+\epsilon)(h^*)^{q,p,r}.
\end{align}
Furthermore, $x^*$ is feasible for the constraint~\raf{e111} as
$$
\sum_{k\in\mathcal S^q}\overline f_k^{p,r}x_k^*=\sum_{k\in(\mathcal S^*)^q}\overline f_k^{p,r}x_k^*=\sum_{k\in(\mathcal S^*)^q}\overline f_k^{p,r}=(h^*)^{q,p,r}-\sum_{k\in(\mathcal L^*)^q}\overline f_k^{p,r}\le h^{q,p,r}.
$$
It follows that $F^*$ is feasible for~{\sc rcOPF$[\mathcal L,h]$}, implying by~\raf{en0} that the solution $F'$ obtained in step~\ref{qptas:cvx} of the algorithm satisfies
\begin{equation}
f_{\textsc{OPF}}(F') \ge (1-\epsilon)f_{\textsc{OPF}}(F^*). \label{eq:qptas-cvx}
\end{equation}
For each $q\in\ensuremath{\mathcal{Q}}$, there is an $(h,\epsilon)$-restricted profile $g^q$ and an integral solution $ (\hat x_k)_{k\in \mathcal S^q} $ that satisfy Lemma~\ref{l4}. Since all the possible $(h,\epsilon)$-restricted profiles are probed, the profile $g^q$ will be found in one of the iterations in the loop in line~\ref{qptas:prof-}. Let us consider this iteration. By condition (i) of the lemma, $\sum_{k\in\mathcal S^q}f_k^r(e)\hat x_k\leq \sum_{k\in\mathcal S^q}f_k^r(e){x}'_k$ for all $e\in\ensuremath{\mathcal{E}} $ and $r\in[d]$, which implies that conditions~\raf{feas:con1}-\raf{feas:con4} of Lemma~\ref{feas} hold for the vector $\bar x$, defined in line~\ref{qptas:mc-round--} of Alg.~\ref{alg:qptas-}.
At this point, following exactly the same lines as in the proof of Theorem~\ref{th:mcqptas-}, it can be shown that
\begin{align*}
\sum_{k\in\mathcal I}u_k\bar x_k \ge \sum_{k \in \hat{\mathcal I}}u_kx_k'-3\epsilon(2\beta+1)f_{\textsc{OPF}}(F').
\end{align*}
Thus condition~\raf{feas:con0} in Lemma~\ref{feas} is satisfied with $\varepsilon=3\epsilon(2\beta+1)$ implying that $\tilde F$ is a feasible solution for \textsc{cOPF}, and hence for OPF by Lemma~\ref{lem:exact}, with $f_{\textsc{OPF}}(\tilde F)\ge (1-\varepsilon) f_{\textsc{OPF}}(F')\ge(1-\varepsilon)f_{\textsc{OPF}}(F^*)$.
$\blacksquare$ \end{proof}
\vspace*{5pt} \begin{remark} Following arguments analogous to those in the above proof, it is conceivable to generalize the logarithmic approximation devised in Section~\ref{sec:logarithmic} to OPF on line networks with single substation generator, provided assumptions~{\sf A0$'$}, {\sf A1}, {\sf A2}, {\sf A3}, {\sf A4$'$} and {\sf A5} hold. To this end, however, an additional constant factor would be lost in the approximation ratio for bounding the capacities (i.e., the right hand sides of inequalities~\raf{feas:con1}, \raf{feas:con2} and \raf{feas:con3}). \end{remark}
\section{Concluding Remarks}\label{conclrem}
This study defined a novel generalization of UFP, dubbed $d$-USFP, and bridged it with AC OPF, a fundamental problem in power systems engineering. As a preliminary step towards tackling this extended problem, we devised a QPTAS and an efficient logarithmic approximation for its single-source variant. Leveraging the connection between separable $d$-USFP and AC OPF, we developed a (kind of) black-box reduction that, under some mild conditions, converts an approximation algorithm for the former problem into one for AC OPF on line distribution networks with discrete demands. It is noteworthy that this reduction applies only to algorithms based on LP-rounding techniques, hence the focus of the present study on LP-based approximations. For future work, it would be interesting to generalize and extend the known alternative techniques (e.g., the surveyed combinatorial and dynamic-programming-based ones) to $d$-USFP, thereby improving upon the current results. From a power systems perspective, one avenue to explore would be the extension of the established framework to a more practical setting with multiple generation sources and tree networks.
\iffalse \subsection{Approximation Solutions}
Let $F \triangleq \big(s_0,x, v, \ell, S\big) $ and $F^*\triangleq \big(s_0^*,x^*, v^*, \ell^*, S^*\big) $ be a feasible and an optimal solution to {\sc OPF}, respectively. With a slight abuse of notation, we write $f(F) $ to refer to the objective value of $F$, $f(s_0, (x_k)_k)$. Let $\ensuremath{\textsc{Opt}} = f(F^*)$ be the objective value of $F^*$.
\begin{definition}
An $\alpha$-approximation to \textsc{OPF} is a feasible solution $F$ such that $$f(F)\ge \alpha \ensuremath{\textsc{Opt}},$$
where $\alpha\in(0,1)$. \end{definition} In the above definition, $\alpha$ characterizes the approximation gap between the approximate and the optimal solutions. \begin{definition}
An $(\alpha, \beta)$-approximation to \textsc{OPF} is a solution $F$ subject to Cons.\raf{p1:con1}-\raf{p1:con4},\raf{p1:con6}-\raf{p1:con9} such that
\begin{align}
f(F)&\ge \alpha \ensuremath{\textsc{Opt}}, \notag \\
S_{e} &\le \beta \overline S_{e} \quad \text{for all } e \in \ensuremath{\mathcal{E}} , \text{ and } \label{eq:app1}\\
\tfrac{1}{\beta } \underline v_j &\le v_j \le \beta \overline v_j \quad \text{for all } j \in \ensuremath{\mathcal{V}}^+, \label{eq:app2}
\end{align}
where $\beta \ge 1$ and $\ensuremath{\textsc{Opt}}$ is the optimal solution subject to Cons.~\raf{p1:con1}-\raf{p1:con9}. \end{definition} In the above definition, $\beta$ characterizes the violation gap of either voltage or capacity constraints. For {\sc OPF$_{\sf S}$}, Cons.~\raf{eq:app2} is dropped, whereas for {\sc OPF$_{\sf V}$}, Cons.~\raf{eq:app1} is dropped. In particular, {\em polynomial-time approximation scheme} (PTAS) is a $(1-\epsilon,1)$-approximation algorithm for any $\epsilon \in (0,1)$. The running time of a PTAS is polynomial in the input size for every fixed $\epsilon$, but the exponent of the polynomial might depend on $1/\epsilon$. An even stronger notion is a {\em fully polynomial-time approximation scheme} (FPTAS), which requires the running time to be polynomial in both input size and $1/\epsilon$.
\fi
\appendix
\section*{Appendix}
\section{Proof of Lemma~\ref{l4}}\label{lemma2b}
\begin{proof}
For $r\in[d]$, consider the graph of the fractional profile $\sum_{k\in\mathcal S^q}f_k^r(e)\tilde{x}_k$ illustrated in Figure~\ref{f1}. For $p\in[P_r]$, slice the region between the horizontal axis and the horizontal line at height $h^{q,p,r}$ with $\frac{1}{\epsilon}+1$ horizontal lines, with inter-distance $\epsilon h^{q,p,r}$. The intersections of the fractional profile with these lines define a monotone function $g^{q,r}$, as pictured in Figure~\ref{f1}, with $g^{q,r}(e)\in\{l\epsilon h^{q,p,r}:~l\in\{0,1,\ldots,1/\epsilon\},~p\in[P_r]\}$, for all $e\in\ensuremath{\mathcal{E}} $. We adopt a greedy procedure, explained in Algorithm~\ref{alg:modify} below, to remove a set of demands from $\mathcal S^q$ in each interval $\ensuremath{\mathcal{E}} _p^r$ such that the remaining set of demands fractionally fits below $g^{q,r}$ (see lines~\ref{alg:s1}-\ref{alg:s7}). The algorithm proceeds by removing the ``left-most'' set of demands that minimally ensures that the remaining ones in $\mathcal S^q$ can be packed under capacity $g^{q,r}$. This defines an intermediate fractional vector $\bar{\bar x}$ for separable $d$-USFP-R[$\mathcal S^q,g^q$], where $g^q=(g^{q,r})_{r\in[d]}$, which can be converted to a basic feasible solution (BFS) with the same or better objective value. Lastly, the fractional components of $\bar{\bar x}$ are rounded down, yielding an integral solution $\hat x$.
\begin{algorithm}[!htb]
\caption{ {\sc Modify} \label{alg:modify} }
\begin{algorithmic}[1]
\Require $q\in\ensuremath{\mathcal{Q}}$; a restricted profile $\mbox{RP}_{\epsilon}(h;w;g)$; a set of users $\mathcal S^q\subseteq\mathcal I^q$; a fractional vector $(\tilde x_k)_{k\in\mathcal S^q}\in[0,1]^{\mathcal S^q}$
\Ensure An integral vector $(\hat x_k)_{k\in\mathcal S^q}\in\{0,1\}^{\mathcal S^q}$ satisfying conditions (i) and (ii) of Lemma~\ref{l4}
\State $\bar x\leftarrow\tilde x$; $t\leftarrow 0$\label{alg:s2}
\For{$r=1$ to $d$}\label{alg:s1}
\For{$p=1,\ldots,P_r$}\label{alg:s1-}
\State $i\leftarrow \underline{i}(p,r)$
\While {$t<\epsilon h^{q,p,r}$ } \label{alg:cond}
\If{$\exists k\in\mathcal S^q$ such that $\bar x_kf_k^r(e_i)>0$}\label{alg:s3}
\State $\bar x_k=0$\label{alg:s4}
\State $t\leftarrow t+\tilde{x}_kf_k^r(e_i)$\label{alg:s5}
\Else{}
$i\leftarrow i+1$\label{alg:s6}
\EndIf
\EndWhile
\EndFor
\EndFor \label{alg:s7}
\State Convert $\bar x$ to a BFS $\bar{\bar x}$ for $d$-USFP-R[$\mathcal S^q,g^q$] with $\sum_{k\in\mathcal S^q}u_k\bar{\bar x}_k\ge\sum_{k\in\mathcal S^q}u_k\bar x_k$ \label{alg:s8}
\State$ (\hat x_k)_{k\in \mathcal S^q} \leftarrow \big( \lfloor \bar{\bar x}_k \rfloor \big)_{k\in \mathcal S^q}$ \label{alg:USFP-round}
\State \Return $\hat x$
\end{algorithmic}
\end{algorithm}
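The greedy removal in lines~\ref{alg:s1}-\ref{alg:s7} of Algorithm~\ref{alg:modify} can be sketched for a single interval as follows (a hypothetical `greedy_trim` helper; `f(k, e)` stands for $f_k^r(e)$ and `eps_h` for the removal budget $\epsilon h^{q,p,r}$):

```python
def greedy_trim(edges, demands, x, f, eps_h):
    """Scan edges left to right and zero out demands that contribute
    there, until a fractional mass of at least eps_h has been removed,
    so that the surviving demands fit under the shifted step profile g."""
    x_bar = dict(x)
    t, i = 0.0, 0
    while t < eps_h and i < len(edges):
        e = edges[i]
        k = next((k for k in demands if x_bar[k] > 0 and f(k, e) > 0), None)
        if k is None:
            i += 1          # no remaining contributor here: move right
        else:
            x_bar[k] = 0.0  # drop demand k entirely
            t += x[k] * f(k, e)
    return x_bar, t
```

On a two-edge line with three unit demands contributing from their release edge onwards, a budget of $1.5$ removes the two left-most demands and leaves the third untouched.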
We first show that condition (i) holds when $\hat x$ is replaced by $\bar x$. For $r\in[d]$, let $\mathcal J^{r}(e_i)$ be the set of demands $k\in\mathcal S^q$ for which $\bar x_k$ was set to $0$ in step~\ref{alg:s4} when considering edge $e_i\in\ensuremath{\mathcal{E}} $. Consider an edge $e\in\ensuremath{\mathcal{E}} _p^r$ such that $\sum_{k\in\mathcal S^q}f_k^r(e)\bar x_k>0$. Note that $0\leq \sum_{k\in\mathcal S^q}f_k^r(e)\tilde x_k-g^{q,r}(e)\le \epsilon h^{q,p,r}$ by \raf{rel0} and the definition of $g^{q,r}$.
By the monotonicity of $f^r_k(\cdot)$ and the condition of the while-loop in step~\ref{alg:cond} we have
\begin{align*}
\sum_{k\in\mathcal S^q}f_k^r(e)\bar x_k&=\sum_{k\in\mathcal S^q}f_k^r(e)\tilde x_k-\sum_{i:~e_i\le e}\sum_{k\in\mathcal J^{r}(e_i)}f_k^r(e)\tilde x_k\\
&\le \sum_{k\in\mathcal S^q}f_k^r(e)\tilde x_k-\sum_{i:~e_i\le e}\sum_{k\in\mathcal J^{r}(e_i)}f_k^r(e_i)\tilde x_k\\
&\le \sum_{k\in\mathcal S^q}f_k^r(e)\tilde x_k-\epsilon h^{q,p,r}\\
&\le g^{q,r}(e).
\end{align*}
\begin{figure}
\caption{A profile and its $(h,\epsilon)$-restriction.}
\label{f1}
\end{figure}
Since $\bar x$ is feasible for $d$-USFP-R[$\mathcal S^q,g^q$], one can obtain a BFS $\bar{\bar x}$ for the same linear program with $\sum_{k\in\mathcal S^q}u_k\bar{\bar x}_k\ge\sum_{k\in\mathcal S^q}u_k\bar x_k$ as in step~\ref{alg:s8} of procedure {\sc Modify}. Then, round down the fractional components in $\bar{\bar{x}}$ to obtain an integral solution $\hat x$. Note that, for all $e\in\ensuremath{\mathcal{E}} $,
$$\sum_{k\in\mathcal S^q}f_k^r(e)\hat x_k\le \sum_{k\in\mathcal S^q}f_k^r(e)\bar {\bar x}_k\le g^{q,r}(e)\le \sum_{k\in\mathcal S^q}f_k^r(e)\tilde{x}_k,$$
and hence (i) holds.
Note that the total fractional utility of demands removed by Algorithm~\ref{alg:modify} in steps~\ref{alg:s1}-\ref{alg:s7} is
\begin{align*}
\sum_{r\in[d],~e\in\ensuremath{\mathcal{E}} }\sum_{k\in\mathcal J^r(e)}u_k\tilde x_k&=
\sum_{r\in[d]:~H^{q,P_r,r}>0}\sum_{p=1}^{P_r}\sum_{e\in\ensuremath{\mathcal{E}} _p^r}\sum_{k\in\mathcal J^r(e)}u_k\tilde x_k\\
&\le \sum_{r\in\mathcal H^{q}}\sum_{p=1}^{P_r}\sum_{e\in\ensuremath{\mathcal{E}} _p^r}\sum_{k\in\mathcal J^r(e)} \frac{1}{H^{q,p,r}} \sum_{t=1}^{T_r}a_k^{r,t}b^{r,t}(e_{\overline i(p,r)})\tilde x_k\\
&=\sum_{r\in\mathcal H^{q}}\sum_{p=1}^{P_r}\frac{1}{H^{q,p,r}} \sum_{e\in\ensuremath{\mathcal{E}} _p^r}\sum_{k\in\mathcal J^r(e)} f_k^r(e_{\overline i(p,r)})\tilde x_k\\
&\le\sum_{r\in\mathcal H^{q}}\sum_{p=1}^{P_r}\frac{C_r}{H^{q,p,r}} \left(\sum_{e\in\ensuremath{\mathcal{E}} _p^r}\sum_{k\in\mathcal J^r(e)} f_k^r(e)\tilde x_k\right)\\
&\le\sum_{r\in\mathcal H^{q}}\sum_{p=1}^{P_r}\frac{C_r}{H^{q,p,r}}(\epsilon h^{q,p,r}+B^{q,p,r}),
\end{align*}
where we use the fact that $k\in\mathcal I^q$ in the first inequality, property~\raf{property} in the second inequality, and $\underline{f}_k^{p,r}\le B^{q,p,r}$ and the condition of the while-loop in step~\ref{alg:cond} in the last inequality. (Note that we sum above over $r\in[d]$ such that $H^{q,P_r,r}>0$ since $k\in\mathcal J^r(e)$ implies that $f_k^r(e)>0$, which in turn implies by~\raf{p1} that $H^{q,P_r,r}>0$.)
It follows that
\begin{align}\label{e2}
\sum_{k \in\mathcal S^q}u_k\bar x_k&\ge\sum_{k \in\mathcal S^q}u_k\tilde x_k-\sum_{r,e}\sum_{k\in\mathcal J^r(e)}u_k\tilde x_k \nonumber \\
&\ge \sum_{k \in\mathcal S^q}u_k\tilde x_k-\sum_{r\in\mathcal H^{q}}\sum_{p=1}^{P_r}\frac{C_r}{H^{q,p,r}}(\epsilon h^{q,p,r}+B^{q,p,r}).
\end{align}
By the monotonicity of the functions $f^r_k(\cdot)$, $d$-USFP-R[$\mathcal S^q,g^q$] has only $\frac{1}{\epsilon}\sum_{r=1}^dP_r$ {\it non-redundant} packing inequalities of the form \raf{ee0}. It follows that the BFS $\bar{\bar x}$ computed in step~\ref{alg:s8} has at most $\frac{1}{\epsilon}\sum_{r=1}^dP_r$ fractional components $\bar{\bar x}_k\in(0,1)$. Thus,
\begin{align}\label{e2-}
\sum_{k \in\mathcal S^q}u_k{\hat x}_k&=\sum_{k \in\mathcal S^q}u_k\bar{\bar x}_k-\sum_{k \in\mathcal S^q:~\bar{\bar x}_k\in(0,1)}u_k\bar{\bar x}_k \nonumber \\
&\ge\sum_{k \in\mathcal S^q}u_k\bar x_k-\frac{1}{\epsilon}\sum_{r=1}^dP_r\cdot\max_ku_k\bar{\bar x}_k \nonumber\\
&\ge\sum_{k \in\mathcal S^q}u_k\bar{ x}_k-\frac{1}{\epsilon}\frac{\sum_{r=1}^dP_r}{\sum_{r\in\mathcal H^{q}}P_r}\sum_{r\in\mathcal H^{q}}\frac{P_rB^{q,P_r,r}}{H^{q,P_r,r}},
\end{align}
where we use in the last inequality that $\bar{\bar x}_k\le 1$ and
\begin{align*}
u_k&\le \frac{\sum_{r\in\mathcal H^{q}}P_r\sum_{t=1}^{T_r}a_k^{r,t}b^{r,t}(e_{n})/H^{q,P_r,r}}{\sum_{r\in\mathcal H^{q}}P_r}=\frac{\sum_{r\in\mathcal H^{q}}P_rf_k^r(e_n)/H^{q,P_r,r}}{\sum_{r\in\mathcal H^{q}}P_r}\\
& \le \frac{\sum_{r\in\mathcal H^{q}}P_rB^{q,P_r,r}/H^{q,P_r,r}}{\sum_{r\in\mathcal H^{q}}P_r},
\end{align*}
for $k\in\mathcal S^q$. Condition (ii) follows from~\raf{e2} and~\raf{e2-}.
$\blacksquare$ \end{proof}
\section{Proof of Lemma~\ref{lem:exact}}\label{lem44}
\begin{algorithm}[!htb]
\caption{ {\sc Forward-Backward-Sweep} \label{alg:fbsweep} }
\begin{algorithmic}[1]
\Require A feasible solution $F'=(s_0', x', v',\ell',S')$ to {\sc cOPF$'[x']$} such that $\ell_{h,t}' > \frac{|S_{h,t}'|^2}{v_h'}$ for some $(h,t)\in \ensuremath{\mathcal{E}} $
\Ensure A feasible solution $\tilde F=(\tilde s_0, \tilde x, \tilde v, \tilde \ell, \tilde S)$ to {\sc cOPF$'[x']$} such that $\tilde x = x'$ and $\sum_{e\in\ensuremath{\mathcal{E}} }\tilde\ell_e<\sum_{e\in\ensuremath{\mathcal{E}} }\ell_e'$
\State $\tilde x \leftarrow x'$; $\tilde v_0 \leftarrow v_0$ \label{alg:fbs0}
\State Number nodes $\ensuremath{\mathcal{V}}=\{0,1,\ldots, m\}$ in a breadth-first search order
\For{$j=m,m-1,\ldots,1$} /* Forward sweep */
\State Let $i$ be s.t. $(i,j) \in \ensuremath{\mathcal{E}} $
\State $\tilde \ell_{i,j} \leftarrow \frac{|S_{i,j}'|^2}{v_i'}$ \label{alg:fbs1}
\State $\tilde S_{i,j} \leftarrow \sum_{k \in \mathcal N_j} s_k \tilde x_k + \sum_{t:(j,t)\in \ensuremath{\mathcal{E}} } \tilde S_{j,t} + z_{i,j} \tilde \ell_{i,j}$\label{alg:fbs2}
\EndFor
\State $\tilde s_0 \leftarrow - \tilde S_{0,1}$ \label{alg:fbs3}
\For{$j = 1,2,\ldots, m$} /* Backward sweep */
\State Let $i$ be s.t. $(i,j) \in \ensuremath{\mathcal{E}} $
\State $\tilde v_j \leftarrow \tilde v_i + |z_{i,j}|^2 \tilde \ell_{i,j} - 2 \ensuremath{\mathrm{Re}}(z_{i,j}^\ast \tilde S_{i,j}) $ \label{alg:fbs4}
\EndFor
\State \Return $\tilde F$
\end{algorithmic} \end{algorithm}
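The sweep can be illustrated on a real-valued path network (a simplified sketch of Alg.~\ref{alg:fbsweep}, dropping the imaginary parts so that $\ensuremath{\mathrm{Re}}(z^*S)$ becomes $zS$; function and variable names are ours):

```python
def forward_backward_sweep(s, z, v0, S_prev, v_prev):
    """One sweep on a path 0-1-...-m. s[j] is the demand at node j
    (s[0] unused), z[i] the impedance of edge (i, i+1); S_prev and
    v_prev come from the previous iterate, as in steps 1 and 5."""
    m = len(z)
    # step 5: recompute losses from the previous iterate
    ell = [S_prev[i] ** 2 / v_prev[i] for i in range(m)]
    S = [0.0] * m
    # forward sweep (leaf to root), step 6
    for j in range(m, 0, -1):
        downstream = S[j] if j < m else 0.0
        S[j - 1] = s[j] + downstream + z[j - 1] * ell[j - 1]
    s0 = -S[0]  # step 8
    # backward sweep (root to leaf), step 11
    v = [v0] + [0.0] * m
    for j in range(1, m + 1):
        v[j] = v[j - 1] + z[j - 1] ** 2 * ell[j - 1] - 2 * z[j - 1] * S[j - 1]
    return s0, S, ell, v
```

On a single edge with unit demand, impedance $0.1$ and previous iterate $S=v=1$, the sweep yields $\tilde\ell=1$, $\tilde S=1.1$ and $\tilde v_1=1+0.01-0.22=0.79$, matching steps~\ref{alg:fbs1}, \ref{alg:fbs2} and \ref{alg:fbs4}.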
\begin{proof}
The analysis follows the same lines as in \citep{gan2015exact, low2014convex2,huang2017sufficient} and is sketched here for completeness.
Let $F''=(s_0'', x', v'',\ell'',S'')$ be an optimal solution of {\sc cOPF}$[x']$, which can be found (to within any desired accuracy) in polynomial time, by solving a convex program. Consider the following problem.
\begin{align}
\textsc{(cOPF$'[x']$)} \quad &\min_{\substack{s_0, x, v,\ell, S \;\;}}\sum_{e\in\ensuremath{\mathcal{E}} }\ell_e, &\notag \\
\text{s.t.} \ &\raf{p1:con2}-\raf{p1:con7}, \raf{p1:con9}, \raf{p2:con1}\notag &\\
& x=x'&\\
& f_{\textsc{OPF}}(s_0, x)\ge f_{\textsc{OPF}}(s_0'',x'). \label{p3:con1} &
\end{align}
Clearly, \textsc{cOPF$'[x']$} is feasible as $F''$ satisfies all its constraints. Hence, it has an optimal solution $F'=(s_0', x', v',\ell',S')$, which we claim satisfies the statement of the lemma.
Suppose, for the sake of contradiction, that there exists an edge $(h,t)$ such that $\ell_{h,t}' > \tfrac{|S_{h,t}'|^2}{v'_h}$.
In the sequel, we construct a feasible solution $\tilde F=(\tilde s_0, x', \tilde v, \tilde \ell, \tilde S)$ for \textsc{cOPF$'[x']$} such that $\sum_{e\in\ensuremath{\mathcal{E}} }\tilde\ell_e<\sum_{e\in\ensuremath{\mathcal{E}} }\ell_e'$, leading to a contradiction.
Apply the forward-backward sweep algorithm, illustrated in Alg.~\ref{alg:fbsweep}, on the solution $F'$ to obtain a feasible solution $\tilde F$.
We show the feasibility of the solution $\tilde F$.
By Steps~\ref{alg:fbs2}, \ref{alg:fbs3} and \ref{alg:fbs4} of Alg.~\ref{alg:fbsweep}, all equality constraints of \textsc{cOPF$'[x']$} are satisfied.
By Step~\ref{alg:fbs1} and the feasibility of $F'$, we also have
\begin{align}\label{e-l-}
\tilde \ell_e \le \ell_e'\le \overline \ell_e\text{ for all $e \in \ensuremath{\mathcal{E}} $.}
\end{align}
Next, by rewriting $\tilde S_{i,j}$, recursively substituting from the leaves, we get
\begin{align}\label{et1}
\tilde S_{i,j}=\sum_{k \in \mathcal N_j} s_k \tilde x_k + \sum_{e \in \ensuremath{\mathcal{E}} _j\cup\{(i,j)\}}z_e \tilde \ell_{e}.
\end{align}
Write $\Delta \ell_e \triangleq \tilde \ell_e - \ell_e'\le 0$, $\Delta S_e \triangleq \tilde S_e - S_e'$, and $\Delta |S_e|^2 \triangleq |\tilde S_e|^2 - |S_e'|^2$, for $e\in \ensuremath{\mathcal{E}} $. Let $\widehat S_{j} \triangleq \sum_{k \in \mathcal N_j} s_k x_k'$, $\tilde L_{i,j} \triangleq \sum_{e \in \ensuremath{\mathcal{E}} _j\cup\{(i,j)\}}z_e \tilde \ell_e$, and $ L'_{i,j} \triangleq \sum_{e \in \ensuremath{\mathcal{E}} _j\cup\{(i,j)\}}z_e \ell_e'$. Note by \raf{et1} that $\tilde S_{i,j} = \widehat S_{j} + \tilde L_{i,j}$ and, similarly, $S_{i,j}' = \widehat S_{j} + L_{i,j}'$. It follows that, for all $(i,j)\in\ensuremath{\mathcal{E}} $,
\begin{align}\label{eq:lem1.0}
\Delta S_{i,j} &= \tilde L_{i,j}-L_{i,j}'=\sum_{e \in \ensuremath{\mathcal{E}} _j\cup\{(i,j)\}}z_{e} \Delta \ell_{e} \le 0,
\end{align}
where the inequality follows by assumption {\sf A1}.
In particular, for $(i,j)=(0,1)$, we obtain
\begin{align}\label{s-e-}
\tilde s_0^{\rm R} &= -\tilde S_{0,1}^{\rm R} \ge - S_{0,1}'^{\rm R} = s_0'^{\rm R},
\end{align}
implying by {\sf A0} that $f_0(\tilde s_0^{\rm R}) \ge f_0(s_0'^{\rm R})$ and hence \raf{p3:con1} is satisfied.
Furthermore,
\begin{align}
\Delta |S_{i,j}|^2 &= |\tilde S_{i,j}|^2 - |S_{i,j}'|^2\\
&= (\tilde S_{i,j}^{\rm R})^2 - (S_{i,j}'^{\rm R})^2 +(\tilde S_{i,j}^{\rm I})^2 - (S_{i,j}'^{\rm I})^2 \\
&=\Delta S_{i,j}^{\rm R} (\tilde S_{i,j}^{\rm R} + S_{i,j}'^{\rm R}) + \Delta S_{i,j}^{\rm I} (\tilde S_{i,j}^{\rm I} + S_{i,j}'^{\rm I})\\
&=\sum_{e \in \ensuremath{\mathcal{E}} _j\cup\{(i,j)\}} z_e^{\rm R} \Delta \ell_e \big( 2 \widehat S^{\rm R}_{j} + \tilde L_{i,j}^{\rm R} + L_{i,j}'^{\rm R} \big) \nonumber \\
&\quad+ \sum_{e \in \ensuremath{\mathcal{E}} _j\cup\{(i,j)\}} z_e^{\rm I} \Delta \ell_e \big( 2 \widehat S^{\rm I}_{j} + \tilde L_{i,j}^{\rm I} + L_{i,j}'^{\rm I} \big) \\
&= \sum_{e \in \ensuremath{\mathcal{E}} _j\cup\{(i,j)\}} 2\Delta \ell_e \ensuremath{\mathrm{Re}}(z_e^*\widehat S_j)+ \sum_{e \in \ensuremath{\mathcal{E}} _j\cup\{(i,j)\}} \Delta \ell_e \ensuremath{\mathrm{Re}}(z_e^*\tilde L_{i,j})\nonumber\\
&\quad+\sum_{e \in \ensuremath{\mathcal{E}} _j\cup\{(i,j)\}} \Delta \ell_e\ensuremath{\mathrm{Re}}(z_e^*L'_{i,j})\le 0, \label{eq:lem1.1}
\end{align}
where Eqn.~\raf{eq:lem1.1} follows by {\sf A1}, {\sf A3} (or {\sf A4$'$}) and $\Delta \ell_e \le 0$. Therefore, by the feasibility of $S_e$,
\begin{equation}\label{eq:mS}
|\tilde{S}_{e}| \le |S_{e}'| \le \overline S_{e} \quad \text{for all } e \in \ensuremath{\mathcal{E}} .
\end{equation}
Note that, by~{\sf A1}, the inequalities in ~\raf{eq:mS} also imply that the reverse power constraint in~\raf{p1:con6} is satisfied for $\tilde S$.
Rewrite Cons.~\raf{p1:con4} by recursively substituting $\tilde v_j$, for $j$ moving away from the root, and then substituting for $\tilde S_{h,t}$ using \raf{et1}:
\begin{align}
\tilde v_j &= v_0 -2\sum_{ (h,t) \in \ensuremath{\mathcal{P}}_{j} } \ensuremath{\mathrm{Re}}(z_{h,t}^\ast \tilde S_{h,t}) + \sum_{( h,t) \in \ensuremath{\mathcal{P}}_{j} } |z_{h,t}|^2 \tilde \ell_{h,t}\notag\\
&=v_0 -2 \sum_{ (h,t) \in \ensuremath{\mathcal{P}}_{j} } \ensuremath{\mathrm{Re}}\Big(z^*_{h,t}\big(\sum_{k\in \mathcal N_t} s_k \tilde x_k + \sum_{e \in \ensuremath{\mathcal{E}} _t\cup\{(h,t)\}} z_{e} \tilde \ell_{e} \big)\Big) + \sum_{ (h,t) \in \ensuremath{\mathcal{P}}_{j} } |z_{h,t}|^2 \tilde \ell_{h,t},\notag\\
&=v_0 -2 \sum_{k \in \mathcal N} \ensuremath{\mathrm{Re}}\Big( \sum_{(h,t)\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^*_{h,t} s_k\Big) \tilde x_k - 2 \sum_{(h,t)\in \ensuremath{\mathcal{P}}_j} \ensuremath{\mathrm{Re}} \Big( z_{h,t}^* \sum_{e \in \ensuremath{\mathcal{E}} _t} z_{e} \tilde \ell_{e}\Big) \notag \\
&~~- 2\sum_{ (h,t) \in\ensuremath{\mathcal{P}}_{j} }|z_{h,t}|^2 \tilde \ell_{h,t} +\sum_{ (h,t) \in\ensuremath{\mathcal{P}}_{j} }|z_{h,t}|^2 \tilde \ell_{h,t}, \label{eq:vLL}
\end{align}
where the last statement follows from exchanging the summation operators, and $z^*_{e} z_{e} = |z_{e}|^2$. Thus,
\begin{align}
\tilde v_j&=v_0 -2 \sum_{k \in \mathcal N} \ensuremath{\mathrm{Re}}\Big( \sum_{(h,t)\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^*_{h,t} s_k\Big) \tilde x_k \nonumber\\
&\quad - \Big(2 \sum_{(h,t)\in \ensuremath{\mathcal{P}}_j} \ensuremath{\mathrm{Re}} \big( z_{h,t}^* \sum_{e \in \ensuremath{\mathcal{E}} _t} z_{e} \tilde \ell_{e}\big) + \sum_{ (h,t) \in\ensuremath{\mathcal{P}}_{j} }|z_{h,t}|^2 \tilde \ell_{h,t}\Big) \nonumber \\
&\le v_0 <\overline v_j, \label{eq:vL0}
\end{align}
where the first inequality follows by {\sf A1} and {\sf A3}, and the last inequality follows by {\sf A2}.
Since $\tilde \ell_e \le \ell'_e$ and $\tilde x = x'$, we get by {\sf A1} and the feasibility of $F'$,
\begin{align}
\tilde v_j &\ge v_0 -2 \sum_{k \in \mathcal N} \ensuremath{\mathrm{Re}}\Big( \sum_{(h,t)\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^*_{h,t} s_k\Big) x_k' \nonumber \\
&\quad - \Big(2 \sum_{(h,t)\in \ensuremath{\mathcal{P}}_j} \ensuremath{\mathrm{Re}} \big( z_{h,t}^* \sum_{e \in \ensuremath{\mathcal{E}} _t} z_{e} \ell_{e}\big) + \sum_{ (h,t) \in\ensuremath{\mathcal{P}}_{j} }|z_{h,t}|^2 \ell_{h,t}\Big) \nonumber \\
&=v_j'\ge \underline v_j. \label{eq:vL2}
\end{align}
By Ineqs.~\raf{eq:mS} and \raf{eq:vL2}, $\tilde \ell_{i,j} = \frac{| S_{i,j} '|^2}{v_i'} \ge \frac{| \tilde S_{i,j} |^2}{\tilde v_i}$, hence, $\tilde F$ is feasible.
Finally, by the first inequality in \raf{e-l-} and the fact that $\ell_{h,t}' > \tfrac{|S_{h,t}|^2}{v_h}=\tilde\ell_{h,t}$, we have $\sum_{e\in\ensuremath{\mathcal{E}} }\tilde \ell_{e}<\sum_{e\in\ensuremath{\mathcal{E}} } \ell_{e}'$, contradicting the optimality of $F'$ for \textsc{cOPF$'[x']$}.
$\blacksquare$
\end{proof}
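The branch-flow recursions manipulated in this proof (the flow balance in \raf{et1}, the voltage drop in Cons.~\raf{p1:con4}, and the current update $\ell_{i,j}=|S_{i,j}|^2/v_i$ of Cons.~\raf{p2:con1}) can be illustrated numerically. The sketch below is a minimal backward/forward sweep specialized to a single feeder $0\textrm{--}1\textrm{--}\cdots\textrm{--}n$; the network data are invented for illustration, and the actual Alg.~\ref{alg:fbsweep} operates on general trees.

```python
# Minimal sketch of the backward/forward sweep recursions used above,
# specialized to a path network 0 - 1 - ... - n (line j feeds bus j+1).
# All network data in the usage below are invented for illustration.

def sweep(s, z, v0, iters=50):
    """s[j]: complex power demand at bus j+1; z[j]: impedance of line j.
    Returns sending-end flows S, squared currents l, squared voltages v."""
    n = len(s)
    l = [0.0] * n                          # squared current magnitudes
    for _ in range(iters):
        # Backward sweep: S[j] accumulates downstream demands and line
        # losses, as in S_{h,t} = sum_k s_k x_k + sum_e z_e l_e.
        S, acc = [0j] * n, 0j
        for j in range(n - 1, -1, -1):
            acc += s[j] + z[j] * l[j]
            S[j] = acc
        # Forward sweep: v_{j+1} = v_j - 2 Re(z* S) + |z|^2 l.
        v = [v0] + [0.0] * n
        for j in range(n):
            v[j + 1] = v[j] - 2 * (z[j].conjugate() * S[j]).real \
                       + abs(z[j]) ** 2 * l[j]
        # Current update: l_{i,j} = |S_{i,j}|^2 / v_i at the sending end.
        l = [abs(S[j]) ** 2 / v[j] for j in range(n)]
    return S, l, v
```

On light loads the iteration converges quickly, and at the returned point $\ell_j=|S_j|^2/v_j$ holds by construction, mirroring Cons.~\raf{p2:con1}.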
\section{Proof of Lemma~\ref{feas}}\label{lprf}
\begin{proof}
The argument is similar to that in Lemma~\ref{lem:exact}.
We apply a slightly modified version of Alg.~\ref{alg:fbsweep} on the solution $F'$ to obtain a feasible solution $\tilde F$. Replace steps~\ref{alg:fbs0} and~\ref{alg:fbs1} in Alg.~\ref{alg:fbsweep}, respectively, by:
\begin{align}\label{e-msteps}
\text{1: $\tilde x \leftarrow \bar x$; $\tilde v_0 \leftarrow v_0$, and 5: $\tilde \ell_{i,j} \leftarrow \ell'_{i,j}$.}
\end{align}
By Steps~\ref{alg:fbs2}, \ref{alg:fbs3} and \ref{alg:fbs4} of the (modified) algorithm, all equality constraints of \textsc{(rcOPF$[\bar x]$)} are satisfied.
By (modified) Step~\ref{alg:fbs1} and the feasibility of $F'$, we also have
\begin{align}\label{e-l}
\tilde \ell_e =\ell_e'\le \overline \ell_e\text{ for all $e \in \ensuremath{\mathcal{E}} $.}
\end{align}
Write $\Delta S_e \triangleq \tilde S_e - S_e'$, and $\Delta |S_e|^2 \triangleq |\tilde S_e|^2 - |S_e'|^2$, for $e\in \ensuremath{\mathcal{E}} $. Let $S_{j}' \triangleq \sum_{k \in \mathcal N_j} s_k x_k'$, $\tilde S_{j}\triangleq \sum_{k \in \mathcal N_j} s_k \tilde x_k$, and $\tilde L_{i,j} \triangleq \sum_{e \in \ensuremath{\mathcal{E}} _j\cup\{(i,j)\}}z_e \tilde \ell_e$. Note by \raf{et1} that $\tilde S_{i,j} = \tilde S_{j} + \tilde L_{i,j}$ and, $ S_{i,j}' = S_{j}' + \tilde L_{i,j}$. It follows that, for all $(i,j)\in\ensuremath{\mathcal{E}} $,
\begin{align}\label{eq:lem1.0.1}
\Delta S_{i,j} &= \tilde S_{j}-S_{j}'=\sum_{k \in \mathcal N_j} s_k \bar x_k-\sum_{k \in \mathcal N_j} s_k x_k' \le 0,
\end{align}
where the inequality follows from \raf {feas:con2} and \raf{feas:con3}.
In particular, for $(i,j)=(0,1)$, we obtain
\begin{align}\label{s-e}
\tilde s_0^{\rm R} &= -\tilde S_{0,1}^{\rm R} \ge - S_{0,1}'^{\rm R} = s_0'^{\rm R},
\end{align}
implying by {\sf A0$'$} that $f_0(\tilde s_0^{\rm R} \cos \phi + \tilde s_0^{\rm I} \sin \phi) \ge f_0(s_0'^{\rm R} \cos \phi + s_0'^{\rm I} \sin \phi)$, and hence $f_{\textsc{OPF}}(\tilde s_0,\tilde x)\ge(1-\varepsilon)f_{\textsc{OPF}}(s_0',x')$ follows from \raf{feas:con0} and \raf{feas:con4}.
Furthermore,
\begin{align*}
\Delta |S_{i,j}|^2 &= |\tilde S_{i,j}|^2 - |S_{i,j}'|^2\\
&= (\tilde S_{i,j}^{\rm R})^2 - (S_{i,j}'^{\rm R})^2 +(\tilde S_{i,j}^{\rm I})^2 - (S_{i,j}'^{\rm I})^2 \\
&=\Delta S_{i,j}^{\rm R} (\tilde S_{i,j}^{\rm R} + S_{i,j}'^{\rm R}) + \Delta S_{i,j}^{\rm I} (\tilde S_{i,j}^{\rm I} + S_{i,j}'^{\rm I})\\
&=\Delta S_{i,j}^{\rm R}(\tilde S_j^{\rm R}+S_j'^{\rm R}+2\tilde L_{i,j}^{\rm R})+ \Delta S_{i,j}^{\rm I}(\tilde S_j^{\rm I}+S_j'^{\rm I}+2\tilde L_{i,j}^{\rm I})\le 0,
\end{align*}
where the last inequality follows by {\sf A1}, {\sf A4$'$} and \raf{eq:lem1.0.1}. Therefore,
\begin{equation}\label{eq:bS}
|\tilde{S}_{i,j}| \le |S'_{i,j}| \le \overline S_{i,j}.
\end{equation}
Next, we show $\underline v_j \le \tilde v_j \le \overline v_j$. As in~\raf{eq:vLL}, rewrite Cons.~\raf{p1:con4} by recursively substituting $v_j'$, for $j$ moving away from the root, and then substituting for $\tilde S_{h,t}$ using \raf{et1}:
\begin{align}\label{eq:ew1}
v_j'&=v_0 -2 \sum_{k \in \mathcal N} \ensuremath{\mathrm{Re}}\Big( \sum_{(h,t)\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^*_{h,t} s_k\Big) x_k' \nonumber \\
& \quad- \Big(2 \sum_{(h,t)\in \ensuremath{\mathcal{P}}_j} \ensuremath{\mathrm{Re}} \big( z_{h,t}^* \sum_{e \in \ensuremath{\mathcal{E}} _t} z_{e} \ell'_{e}\big) + \sum_{ (h,t) \in\ensuremath{\mathcal{P}}_{j} }|z_{h,t}|^2 \ell'_{h,t}\Big).
\end{align}
A similar equation can be derived for $\tilde v_j$, where $x'$ and $\ell'$ in \raf{eq:ew1} are replaced by $\tilde x$ and $\tilde\ell$, respectively. By assumptions {\sf A2} and {\sf A3}, we have
\begin{align*}
\tilde v_j&=v_0 -2 \sum_{k \in \mathcal N} \ensuremath{\mathrm{Re}}\Big( \sum_{(h,t)\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^*_{h,t} s_k\Big) \tilde x_k \nonumber\\
&\quad - \Big(2 \sum_{(h,t)\in \ensuremath{\mathcal{P}}_j} \ensuremath{\mathrm{Re}} \big( z_{h,t}^* \sum_{e \in \ensuremath{\mathcal{E}} _t} z_{e} \tilde \ell_{e}\big) + \sum_{ (h,t) \in\ensuremath{\mathcal{P}}_{j} }|z_{h,t}|^2 \tilde \ell_{h,t}\Big) \nonumber\\
&\le v_0 <\overline v_j.
\end{align*}
Moreover, since $\tilde \ell_e = \ell'_e$ and $\tilde x = \bar x$ satisfies~\raf{feas:con1}, we get by {\sf A1} and the feasibility of $F'$,
\begin{align}\label{eq:bV}
\tilde v_j
&\ge v_0 -2 \sum_{k \in \mathcal N} \ensuremath{\mathrm{Re}}\Big( \sum_{(h,t)\in \ensuremath{\mathcal{P}}_k \cap \ensuremath{\mathcal{P}}_j} z^*_{h,t} s_k\Big) x_k' \nonumber \\
&\quad - \Big(2 \sum_{(h,t)\in \ensuremath{\mathcal{P}}_j} \ensuremath{\mathrm{Re}} \big( z_{h,t}^* \sum_{e \in \ensuremath{\mathcal{E}} _t} z_{e} \ell_{e}'\big) + \sum_{ (h,t) \in\ensuremath{\mathcal{P}}_{j} }|z_{h,t}|^2 \ell_{h,t}'\Big) \nonumber \\
&=v_j'\ge \underline v_j.
\end{align}
Finally, by inequalities~\raf{eq:bS} and \raf{eq:bV}, $\tilde \ell_{i,j} = \ell_{i,j}' = \frac{|S'_{i,j}|^2}{v'_i} \ge \frac{|\tilde S_{i,j}|^2}{\tilde v_i}$, hence $\tilde \ell_{i,j}$ satisfies Cons.~\raf{p2:con1}.
$\blacksquare$ \end{proof}
\end{document}
\begin{document}
\title{
A Dolbeault Lemma for Temperate Currents.
}
\maketitle
\begin{center}\emph{Dedicated to the memory of Pierre Dolbeault.}\end{center}
\begin{abstract} We consider a bounded open Stein subset $\Omega$ of a complex Stein manifold $X$ of dimension $n$. We prove that if $f$ is a current on $X$ of bidegree $(p,q+1)$, $\overline\partial$-closed on $\Omega$,
we can find a current $u$ on $X$ of bidegree $(p,q)$ which is a solution of the equation $\overline\partial u=f$ in $\Omega$. In other words, we prove that the Dolbeault complex of temperate currents on $\Omega$ (\emph{i.e.} currents on $\Omega$ which extend to currents on $X$) is concentrated in degree $0$. Moreover if $f$ is a current on $X=\bf \mbox{I\hspace{-.47em}C}^n$ of order $k$, then we can find a solution $u$ which is a current on $\bf \mbox{I\hspace{-.47em}C}^n$ of order $k+2n+1$. \end{abstract}
\textbf{Keywords}: Stein open subset of $\bf \mbox{I\hspace{-.47em}C}^n$ or of a Stein manifold, $L^2$ estimates, $\bar\partial$-operator, Dolbeault $\bar\partial$-complex, temperate distributions and currents, temperate cohomology, Sobolev spaces.
\section{Introduction} \label{SectionIntroduction}
We will prove the following result in the same way as the famous ``Dolbeault-Grothendieck'' lemma for $\bar\partial$.
\begin{theorem} \label{partialtemperedchomology}
Let $\Omega$ be a bounded Stein open subset of $\bf \mbox{I\hspace{-.47em}C}^n$ and let $f$ be a given current of bidegree $(p,q+1)$ on $\bf \mbox{I\hspace{-.47em}C}^n$ (with compact support) which is $\bar\partial$-closed on $\Omega$. Then there exists a current $u$ of bidegree $(p,q)$ (with compact support) in $\bf \mbox{I\hspace{-.47em}C}^n$ such that: \begin{equation} \bar\partial u=f \end{equation} in $\Omega$.\\ Moreover, if $f$ is of order $k$ (resp. if $f\in H^{-s}_{(p,q+1)}(\bf \mbox{I\hspace{-.47em}C}^n)$ for some $s> 0$), we can find a solution $u$ of order at most $k+2n+1$
(resp. $u\in H_{(p,q)}^{-s-2n-1}(\bf \mbox{I\hspace{-.47em}C}^n)$; more precisely, if $k$ is the integer such that
$s\le k<s+1$, then for every $r>k$ we can find $u\in H_{(p,q)}^{-r-2n}(\bf \mbox{I\hspace{-.47em}C}^n)$). \end{theorem} We say that a current $T$ on $\Omega\subset\bf \mbox{I\hspace{-.47em}C}^n$ is temperate
if and only if it can be extended to $\bf \mbox{I\hspace{-.47em}C}^n$. In other words, we have:
\begin{corollary}\label{partialtemperedchomology2} For a given relatively compact open Stein subset $\Omega$ of $\bf \mbox{I\hspace{-.47em}C}^n$, the Dolbeault $\bar\partial$-cohomology of temperate currents on $\Omega$ vanishes in positive degrees.
\end{corollary}
As usual, we denote by $H^{s}_{(p,q)}(\bf \mbox{I\hspace{-.47em}C}^n)$ the space of currents on $\bf \mbox{I\hspace{-.47em}C}^n$ of bidegree $(p,q)$ the coefficients of which are distributions in the Sobolev space $H^s(\bf \mbox{I\hspace{-.47em}C}^n)$. A distribution $T\in\mathcal D'({\bf \mbox{I\hspace{-.17em}R}}^n)$ is of order $k\in\mbox{I\hspace{-.15em}N}$ if it is locally a finite linear combination of derivatives of order at most $k$ of Radon measures on ${\bf \mbox{I\hspace{-.17em}R}}^n$, or equivalently if $T$ can be extended as a continuous linear form defined on all functions of class $C^k$ with compact support in ${\bf \mbox{I\hspace{-.17em}R}}^n$, or equivalently if for every relatively compact open subset $\Omega\subset {\bf \mbox{I\hspace{-.17em}R}}^n$, all functions $\phi\in \mathcal{D}(\Omega)$ satisfy an inequality $\vert<T,\phi>\vert\le C(\Omega,T)\sup_{x\in\,\Omega} \Sigma_{\vert\alpha\vert\le k}\; \vert D_{x}^{\alpha}\phi(x)\vert$,
in which the constant $C(\Omega,T)$ only depends on $\Omega$ and $T$. Of course a current is of order $k$ if its coefficients are distributions of order $k$.\\
The preceding results remain valid when $\bf \mbox{I\hspace{-.47em}C}^n$ is replaced by a Stein manifold (section \ref{Proofoftheorem2}, theorem \ref{partialtemperedchomologyStein2}) and for currents taking their values in a given holomorphic vector bundle. But for the sake of simplicity, we begin with the case of $\bf \mbox{I\hspace{-.47em}C}^n$, as in the Dolbeault-Grothendieck lemma: the general case of a Stein manifold only requires more involved technical tools, but no truly new ideas or methods. In the case of a Stein manifold the loss of regularity is larger than $2n+1$, because we have to iterate the construction made in the case of $\bf \mbox{I\hspace{-.47em}C}^n$ several times.
This result answers a question raised by Pierre Schapira in a personal discussion. He hopes it can be useful to make significant progress in the Microlocal Analysis theories highlighted for instance in the papers of M. Kashiwara and P. Schapira, [KS1996] and [Scha2017] in which such a temperate cohomology naturally appears.\\ Even though the result is essentially a consequence of L. H\"ormander's $L^2$ estimates for $\bar\partial$ (corollary \ref{beanopenpseudoconvex3}),
it seems that it cannot be explicitly found in the literature on the subject (with a complete proof). Let us observe the following features of the result.
No assumption of smoothness is required for $\Omega$. The given current $f$ and the solution $u$ have coefficients in distribution spaces $H^s(\bf \mbox{I\hspace{-.47em}C}^n)$ with $s<0$. Hence they are never supposed to be smooth, but only to have temperate singularities, such as derivatives of Dirac measures. The result is thus quite different
from the most usual regularity results for $\bar\partial$, which involve $C^k$ regularity up to the boundary of $\Omega$ ($k\ge0$),
both for $\Omega$ and for the given differential forms on $\Omega$.
If $f\in H^s_{(p,q+1)}(\bf \mbox{I\hspace{-.47em}C}^n)$ for some $s\ge0$, then $f\in L^2_{(p,q+1)}(\Omega)$
and the result is an immediate consequence of H\"ormander's theorem
which provides a solution $u$ in $L^2_{(p,q)}(\Omega)$.
Then $u$ has a trivial extension in $ L^2_{(p,q)}(\bf \mbox{I\hspace{-.47em}C}^n)$ (by $0$ outside $\Omega$).\\ The gap $2n+1$ of regularity for the solution $u$ does not depend on $\Omega$. In the basic example $\Omega=B(0,R)\setminus H$ in which $H$ is a complex analytic hypersurface of the ball $B(0,R)$ of center 0 and radius $R$,
the result does not depend at all on the complexity of the singularities of $H$ (nor on the degree of $H$ when $H$ is algebraic). The gap $2n+1$ is an automatic consequence of the method of proof. Improving the gap $2n+1$ does not seem to be of immediate interest for the purpose of [Scha2017].\\ We need four steps to prove theorem \ref{partialtemperedchomology}. First, as P. Dolbeault in [Dol1956], by solving an appropriate Laplacian equation $\frac{1}{2}\Delta v=\bar\partial^{\star}f$ on $\bf \mbox{I\hspace{-.47em}C}^n$ ($\Delta$ is the usual Laplacian on $\bf \mbox{I\hspace{-.47em}C}^n$ defined on differential forms and currents, and $\bar\partial^{\star}$ is the adjoint of $\bar\partial$ for the usual Hermitian structure on $\bf \mbox{I\hspace{-.47em}C}^n$) and replacing $f$ by $f-\bar\partial v$, we reduce the problem to the case of a current $f$
which has harmonic coefficients on $\Omega$. As $f$ is temperate, the mean value properties of harmonic functions imply that $f$ grows at the boundary of $\Omega$ like a negative power of the distance $d(z,\partial\Omega)$ to the boundary of $\Omega$ (for $z\in\Omega$).
Then H\"ormander's $L^2$ estimates for $\bar\partial$ give a solution $u$ of $\bar\partial u=f$
such that $\int_{\Omega}\vert u\vert^2\lbrack d(z,\partial\Omega)\rbrack^{2l} d\lambda(z)<+\infty$ (for some $l>0$). Finally using an extension theorem of L. Schwartz [Schw1950]
for distributions, $u$ can be extended as a current on $\bf \mbox{I\hspace{-.47em}C}^n$.\\
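As a sanity check on the first step, the current $f-\bar\partial v$ is indeed harmonic on $\Omega$: assuming the standard identity $\frac{1}{2}\Delta=\bar\partial\bar\partial^{\star}+\bar\partial^{\star}\bar\partial$ for the flat metric on $\bf \mbox{I\hspace{-.47em}C}^n$, and that $\Delta$ commutes with $\bar\partial$, we have on $\Omega$ (where $\bar\partial f=0$):
\begin{align*}
\tfrac{1}{2}\Delta(f-\bar\partial v)
&=\bar\partial\bar\partial^{\star}f+\bar\partial^{\star}\bar\partial f-\bar\partial\big(\tfrac{1}{2}\Delta v\big)
=\bar\partial\bar\partial^{\star}f-\bar\partial\bar\partial^{\star}f=0,
\end{align*}
since $\tfrac{1}{2}\Delta v=\bar\partial^{\star}f$.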
Similar methods were already used by P. Lelong [Le1964] for the Lelong-Poincar\'e $\partial\bar\partial$-equation and
by H. Skoda [Sk1971] for the $\bar\partial$-equation, to obtain solutions explicitly given on $\bf \mbox{I\hspace{-.47em}C}^n$
by integral representations and with precise polynomial estimates. Y.T. Siu had already studied holomorphic functions of polynomial growth on bounded open domains of $\bf \mbox{I\hspace{-.47em}C}^n$
using H\"ormander's $L^2$ estimates for $\bar\partial$ in [Siu1970].\\ We establish the preliminary results we need in Section \ref{SectionPreliminarydefinitionsandresultsnNr}
and we prove theorem \ref{partialtemperedchomology} in Section \ref{Proofoftheorem 1}.
We extend the results to a Stein manifold in section \ref{Proofoftheorem2}, theorem \ref{partialtemperedchomologyStein2} (using J-P. Demailly's theorem \ref{beanopenpseudoconvex3SteinDem12} extending H\"ormander's results to manifolds).\\ In the case of a subanalytic bounded open Stein subset $\Omega$ in a Stein manifold $X$, Pierre Schapira in [Scha2020] independently gives a proof of Theorem 1 (\emph{i.e.} of corollary \ref{partialtemperedchomology2}) and of theorem \ref{partialtemperedchomologyStein2}. His proof is essentially founded on cohomological methods, which are particularly well adapted to the subanalytic case. It also heavily depends on H\"ormander's $L^2$ estimates for $\bar\partial$, which he uses in the case of a bounded Stein open subset of $\bf \mbox{I\hspace{-.47em}C}^n$ after embedding the given Stein manifold in some space $\bf \mbox{I\hspace{-.47em}C}^n$. He also uses \L ojasiewicz inequalities and another inequality of H\"ormander for subanalytic subsets.\\ A first version of this article, treating only the case of $\bf \mbox{I\hspace{-.47em}C}^n$, was submitted to arXiv [Sk2020] in March 2020.\\ I thank Pierre Schapira very much for raising his insightful question, which has strongly motivated this research.
\section{Preliminary definitions and results} \label{SectionPreliminarydefinitionsandresultsnNr}
Before proving theorem \ref{partialtemperedchomology}, we need to recall several classical results. We have sometimes given a direct proof in order to establish the results in the appropriate form.\\ An open subset of $\bf \mbox{I\hspace{-.47em}C}^n$ is called Stein if it is holomorphically convex: for every compact $K$ in $\Omega$ the holomorphic hull $\hat K_{\Omega}$ of $K$ is compact ($x\in\hat K_{\Omega}$ if and only if $x\in \Omega$ and
for every holomorphic function $f$ on $\Omega$, $\vert f(x)\vert\le \max_{\xi\in K}\vert f(\xi)\vert$).\\ Let us recall the following fundamental $L^2$ existence theorem of H\"ormander for $\bar\partial$ [H\"or1966] or [H\"or1965].
We can also use J.P. Demailly's book [Dem2012], Chapter VIII, paragraph 6, Theorem 6.9, p. 379. We denote by $L^2_{(p,q)}(\Omega,loc)$ the vector space of currents of bidegree $(p,q)$ in $\Omega$ the coefficients of which are in $ L^2(\Omega,loc)$ for the usual Lebesgue measure $d\lambda$ on $\bf \mbox{I\hspace{-.47em}C}^n$.
\begin{theorem}\label{beanopenpseudoconvex} Let $\Omega$ be an open pseudoconvex subset of $\bf \mbox{I\hspace{-.47em}C}^n$ and $\phi$ a plurisubharmonic function defined on $\Omega$.
For every $g\in L^2_{(p,q+1)}(\Omega,loc )$ with $\bar\partial g=0$
such that: $\int_{\Omega}\vert g\vert^2 e^{-\phi} d\lambda<+\infty$, there exists $u\in L^2_{(p,q)}(\Omega,loc )$ such that: \begin{equation}\label{beanopenpseudoconvex1}
\bar\partial u=g \end{equation} in $\Omega$ and:
\begin{equation}\label{beanopenpseudoconvex2} \int_{\Omega}\vert u\vert^2 e^{-\phi}(1+\vert z\vert^2)^{-2} d\lambda\le\frac{1}{2} \int_{\Omega}\vert g\vert^2 e^{-\phi} d\lambda.
\end{equation} \end{theorem}
If $\Omega $ is bounded, $u$ verifies the $L^2$ estimate: \begin{equation}\label{beanopenpseudoconvex3dfl}
\int_{\Omega}\vert u\vert^2 e^{-\phi}d\lambda\le C(\Omega) \int_{\Omega}\vert g\vert^2 e^{-\phi} d\lambda \end{equation}
with $ C(\Omega):=\frac{1}{2}(1+\max_{z\in\Omega}\vert z\vert^2)^{2}.$\\ The classical Oka-Norguet-Bremermann theorem ([H\"or1966] paragraph 2.6 and theorem 4.2.8) states that the following assertions are equivalent:\\
1) $\Omega$ is Stein,\\ 2) $\Omega$ is pseudoconvex: \emph{i.e.} there exists a plurisubharmonic function $\phi$ on $\Omega$
which is exhaustive (for all $c\in{\bf \mbox{I\hspace{-.17em}R}}$ the subset $\lbrace z\in\Omega\vert\;\phi(z)<c\rbrace$ is relatively compact in $\Omega$),\\ 3) the function $-\log d(z,\partial\Omega)$ is plurisubharmonic in $\Omega$.\\ Therefore for a given $k\ge0$, we can choose $\phi(z)=-k\log d(z,\partial\Omega)$ in the inequality (\ref{beanopenpseudoconvex3dfl})
and we will only need to use the following special case of theorem \ref{beanopenpseudoconvex} (see also [H\"or1965] theorem 2.2.1').
\begin{corollary}\label{beanopenpseudoconvex3} Let $\Omega$ be a bounded Stein open subset of $\bf \mbox{I\hspace{-.47em}C}^n$ and $k\ge 0$ be a given real number.
Then for every $g\in L^2_{(p,q+1)}(\Omega,loc )$ with $\bar\partial g=0$
such that: $\int_{\Omega}\vert g\vert^2 \lbrack d(z,\partial\Omega)\rbrack^k d\lambda<+\infty$, there exists $u\in L^2_{(p,q)}(\Omega,loc )$ such that: \begin{equation}\label{beanopenpseudoconvex4}
\bar\partial u=g \end{equation} in $\Omega$ and:
\begin{equation}\label{beanopenpseudoconvex5} \int_{\Omega}\vert u\vert^2 \lbrack d(z,\partial\Omega)\rbrack^k d\lambda\le C(\Omega) \int_{\Omega}\vert g\vert^2 \lbrack d(z,\partial\Omega)\rbrack^k d\lambda
\end{equation} \end{corollary} If we denote by $L^{2,k}_{(p,q)}(\Omega)$ the space of $u\in L^2_{(p,q)}(\Omega,loc )$ such that\\
$\int_{\Omega}\vert u\vert^2 \lbrack d(z,\partial\Omega)\rbrack^{2k} d\lambda<+\infty$, by: \begin{equation}\label{beanopenpseudoconvex6anotheL2} L^{2,k}_{0,(p,q)}(\Omega):=\lbrace u\in L^{2,k}_{(p,q)}(\Omega)\vert\;\bar\partial u\in L^{2,k}_{(p,q+1)}(\Omega)\rbrace \end{equation} and by $\mathcal O^{2,k}_{(p,0)}(\Omega):=\lbrace u\in L^{2,k}_{(p,0)}(\Omega)\vert\;\bar\partial u=0\rbrace$,
corollary \ref{beanopenpseudoconvex3} means that the following Dolbeault-complex is exact:
\begin{equation}\label{beanopenpseudoconvex6} \begin{aligned} 0\rightarrow\mathcal O^{2,k}_{(p,0)}(\Omega)\rightarrow L^{2,k}_{0,(p,0)}(\Omega)\xrightarrow{\bar\partial}
L^{2,k}_{0,(p,1)}(\Omega)\xrightarrow{\bar\partial} \ldots\xrightarrow{\bar\partial} L^{2,k}_{0,(p,q)}(\Omega)\xrightarrow{\bar\partial}
L^{2,k}_{0,(p,q+1)}(\Omega)\xrightarrow{\bar\partial} \ldots\\ \xrightarrow{\bar\partial} L^{2,k}_{0,(p,n)}(\Omega) \rightarrow 0. \end{aligned} \end{equation}
We also need two results of real analysis. \begin{lemma}\label{LetwbeasdistributiononRoforderk} Let $w$ be a distribution on ${\bf \mbox{I\hspace{-.17em}R}}^n$ of order $k$ which is harmonic (for the usual Laplacian on ${\bf \mbox{I\hspace{-.17em}R}}^n$) on the bounded open subset $\Omega$ of ${\bf \mbox{I\hspace{-.17em}R}}^n$. Then $w$ is of polynomial growth on $\Omega$: $\vert w(z)\vert\le C(\Omega,w)\; \lbrack\; d(z,{\bf \mbox{I\hspace{-.17em}R}}^n \setminus \Omega)\rbrack^{-k-n}$
where the constant $C(\Omega,w)$ only depends on $\Omega$ and $w$.\\ If $w\in H^{-s}({\bf \mbox{I\hspace{-.17em}R}}^n)$ for $s\ge 0$ we have: $\vert w(z)\vert\le C(\Omega,w)\; \lbrack\; d(z,{\bf \mbox{I\hspace{-.17em}R}}^n \setminus \Omega)\rbrack^{-k-\frac{n}{2}}$ where $k$ is the integer such that $s\le k<s+1$.
\end{lemma}
\begin{proof} Let $\rho$ be a nonnegative regularizing function in $\mathcal D ({\bf \mbox{I\hspace{-.17em}R}}^n)$ which only depends on $\vert \zeta\vert$,
has its support in the Euclidean ball of radius $1$
and verifies: $\int_{{\bf \mbox{I\hspace{-.17em}R}}^n} \rho(\zeta) d\lambda(\zeta)=1$ where $d\lambda$ is the Lebesgue measure on ${\bf \mbox{I\hspace{-.17em}R}}^n$.\\ Let $\rho_{\epsilon}(\zeta):=\frac{1}{\epsilon^{n}}\rho(\frac{\zeta}{\epsilon})$
be the associated family of regularizing functions in $\mathcal{D}({\bf \mbox{I\hspace{-.17em}R}}^n)$, so that $\rho_{\epsilon}$ has its support in the ball of radius $\epsilon$ and also verifies $\int_{{\bf \mbox{I\hspace{-.17em}R}}^n} \rho_{\epsilon}(\zeta) d\lambda(\zeta)=1$.\\ As $w$ is harmonic in $\Omega$, for every $z\in\Omega$, $w(z)$ coincides with its mean value on every Euclidean sphere of center $z$ and radius $r< d(z,\partial\Omega)$. Therefore, using Fubini's theorem, we get for every $\epsilon< d(z,\partial\Omega)$: \begin{equation}\label{Thereforeusingfubinistheorem}
w(z)=\int_{{\bf \mbox{I\hspace{-.17em}R}}^n} w(z+\zeta) \rho_{\epsilon}(\zeta) d\lambda(\zeta)
=\int_{{\bf \mbox{I\hspace{-.17em}R}}^n} w(\zeta) \rho_{\epsilon}(z-\zeta) d\lambda(\zeta). \end{equation}
\emph{i.e.} $w=w\star\rho_{\epsilon}$ on $\Omega_{\epsilon}:=\lbrace z\in\Omega\vert\, d(z,\partial\Omega)>\epsilon\rbrace$ (in which $\star$ represents a convolution product).\\ Testing $w$ as a distribution on the test function (in the variable $\zeta$) $\rho_{\epsilon}(z-\zeta)$, with $\epsilon< d(z,\partial\Omega)\le 1$,
equation (\ref{Thereforeusingfubinistheorem}) becomes: \begin{equation}\label{w(z)=w(zeta),rhoepsilon}
w(z)=<w(\zeta),\rho_{\epsilon}(z-\zeta)>_{\zeta}. \end{equation}
As $w$ is a distribution of order $k$, we have for every function $\phi\in\mathcal D ({\bf \mbox{I\hspace{-.17em}R}}^n)$ an inequality: \begin{equation}\label{}
\vert< w,\phi>\vert\le C_1(w)\;\sup_{\zeta\in\,{\bf \mbox{I\hspace{-.17em}R}}^n} \Sigma_{\vert\alpha\vert\le k}\; \vert D_{\zeta}^{\alpha}\phi(\zeta)\vert, \end{equation} in which $C_1(w)>0$ is a constant only depending on $w$.\\ Taking $\phi(\zeta)=\rho_{\epsilon}(z-\zeta)$, we get: \begin{equation}\label{}
\vert w(z)\vert\le C_1(w)\;\sup_{\vert\zeta\vert\le \epsilon} \Sigma_{\vert\alpha\vert\le k}\; \vert D_{\zeta}^{\alpha}\rho_{\epsilon}(z-\zeta)\vert, \end{equation} and: \begin{equation}\label{}
\vert w(z)\vert\le C_2(w) \epsilon^{-n-k}\; , \end{equation} for some constant $C_2(w)>0$.\\ As this is true for every $\epsilon< d(z,\partial\Omega)$, we take the limit as $\epsilon\rightarrow d(z,\partial\Omega)$ and we get: \begin{equation}\label{} \vert w(z)\vert\le C_2(w)\;\lbrack d(z,\partial\Omega)\rbrack^{-l} \end{equation}
with $l=n+k$ and then:
\begin{equation} \int_{\Omega} \vert w\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{2l} d\lambda <\infty. \end{equation} If we now assume that $w\in H^{-s}({\bf \mbox{I\hspace{-.17em}R}}^n)$ for a given $s>0$, equation (\ref{w(z)=w(zeta),rhoepsilon}) becomes:
\begin{equation}\label{w(z)=w(zeta),rhoepsilon1}
\vert w(z)\vert=\vert<w(\zeta),\rho_{\epsilon}(z-\zeta)>_{\zeta}\vert\le \vert\vert w\vert\vert_{H^{-s}({\bf \mbox{I\hspace{-.17em}R}}^n)}\vert\vert\rho_{\epsilon}(z-\zeta)\vert\vert_{H^s({\bf \mbox{I\hspace{-.17em}R}}^n)}. \end{equation} Let $k$ be the integer defined by $s\le k< s+1$ so that (denoting as usual by $\hat\phi$ the Fourier transform of $\phi$):
\begin{equation}\label{w(z)=w(zeta),rhoepsilon2}
\vert\vert \phi\vert\vert^2_{H^s({\bf \mbox{I\hspace{-.17em}R}}^n)}= \int_{{\bf \mbox{I\hspace{-.17em}R}}^n}(1+\vert\xi\vert^2)^s \vert \hat\phi(\xi)\vert^2 d\lambda (\xi) \le\int_{{\bf \mbox{I\hspace{-.17em}R}}^n}(1+\vert\xi\vert^2)^k \vert \hat\phi(\xi)\vert^2 d\lambda (\xi)=\vert\vert \phi\vert\vert^2_{H^k({\bf \mbox{I\hspace{-.17em}R}}^n)} \end{equation}
As $k$ is an integer, the norm $\vert\vert \phi\vert\vert_{H^k({\bf \mbox{I\hspace{-.17em}R}}^n)}$ is equivalent to the sum of the $L^2$ norms of the derivatives of $\phi$ of order less than or equal to $k$, so we have: \begin{equation}\label{w(z)=w(zeta),rhoepsilon3}
\vert\vert \phi\vert\vert^2_{H^k({\bf \mbox{I\hspace{-.17em}R}}^n)}\le C_2(k) \int_{{\bf \mbox{I\hspace{-.17em}R}}^n}\sum_{\vert\alpha\vert\le k}\vert D^{\alpha}\phi\vert^2 d\lambda \end{equation} We replace $\phi$ by $\phi_{\epsilon}(\zeta):=\frac{1}{\epsilon^n}\phi(\frac{\zeta}{\epsilon})$\, (with $\epsilon\le 1$) so that we get:
\begin{equation}\label{w(z)=w(zeta),rhoepsilon4}
\vert\vert \phi_{\epsilon}\vert\vert^2_{H^k({\bf \mbox{I\hspace{-.17em}R}}^n)}\le C_2(k) \epsilon^{-2k-n} \lbrack\int_{{\bf \mbox{I\hspace{-.17em}R}}^n}\sum_{\vert\alpha\vert\le k}\vert D^{\alpha}\phi\vert^2 d\lambda\rbrack \end{equation} Using (\ref{w(z)=w(zeta),rhoepsilon2}), (\ref{w(z)=w(zeta),rhoepsilon3}) and (\ref{w(z)=w(zeta),rhoepsilon4}) with $\phi(\zeta)=\rho(z-\zeta)$ (for a fixed $z\in\Omega$ with $\epsilon<d(z,\partial\Omega)\le1$) we finally obtain:
\begin{equation}\label{w(z)=w(zeta),rhoepsilon5}
\vert w(z)\vert\le C_3(k,n)\;\vert\vert w\vert\vert_{H^{-s}({\bf \mbox{I\hspace{-.17em}R}}^n)}\;\epsilon^{-k-\frac{n}{2}} \end{equation} and when $\epsilon\rightarrow d(z,\partial\Omega)$: \begin{equation}\label{w(z)=w(zeta),rhoepsilon6}
\vert w(z)\vert\le C_3(k,n)\;\vert\vert w\vert\vert_{H^{-s}({\bf \mbox{I\hspace{-.17em}R}}^n)} \;\lbrack d(z,\partial\Omega)\rbrack^{-k-\frac{n}{2}} \end{equation}
\begin{equation}\label{w(z)=w(zeta),rhoepsilon7} \int_{\Omega} \vert w(z)\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{2k+n} d\lambda(z) <+\infty. \end{equation}
\end{proof} \begin{remark} Instead of using mean properties of harmonic functions, one can also use the elementary solution of $\Delta$ in ${\bf \mbox{I\hspace{-.17em}R}}^n$ as in [KS1996] proposition 10.1., p. 53. \end{remark}
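For the reader's convenience, the $\epsilon$-scaling used twice in the preceding proof is the elementary identity
\begin{equation*}
D^{\alpha}_{\zeta}\,\rho_{\epsilon}(z-\zeta)=(-1)^{\vert\alpha\vert}\,\epsilon^{-n-\vert\alpha\vert}\,(D^{\alpha}\rho)\Big(\frac{z-\zeta}{\epsilon}\Big),
\end{equation*}
which, for $0<\epsilon\le 1$, gives $\sup_{\zeta}\Sigma_{\vert\alpha\vert\le k}\vert D^{\alpha}_{\zeta}\rho_{\epsilon}(z-\zeta)\vert\le C\epsilon^{-n-k}$ in the first case and $\Vert\rho_{\epsilon}(z-\cdot)\Vert^2_{H^{k}({\bf \mbox{I\hspace{-.17em}R}}^n)}\le C\epsilon^{-2k-n}$ in the second, with a constant $C$ depending only on $\rho$, $k$ and $n$.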
We also need the following theorem of L. Schwartz (in his book on distribution theory [Schw1950]). We can also directly use the theory of Sobolev spaces. We say that a measure $\mu$ defined on an open bounded subset $\Omega$ of ${\bf \mbox{I\hspace{-.17em}R}}^n$ is of polynomial growth at most $l$ in $\Omega$ if $\int_{\Omega}\; d(z,\partial\Omega)^l d\vert\mu\vert(z)<+\infty$.
\begin{theorem}\label{w(z)=w(zeta),rhoepsilon9on}
A measure of polynomial growth $l$ defined on an open bounded subset $\Omega$ of ${\bf \mbox{I\hspace{-.17em}R}}^n$ can be extended as a distribution on ${\bf \mbox{I\hspace{-.17em}R}}^n$ of order at most $l$.\\ Moreover if $w\in L^2(\Omega,loc)$ verifies the estimate: $\int_{\Omega} \vert w(z)\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{2l} d\lambda(z) <+\infty$, with $l\in\mbox{I\hspace{-.15em}N}$, then for every $r>l$, $w$ can be extended as a distribution in $H^{-r-\frac{n}{2}}({\bf \mbox{I\hspace{-.17em}R}}^n)$ (particularly in $H^{-l-\frac{n}{2}-1}({\bf \mbox{I\hspace{-.17em}R}}^n))$ . \end{theorem}
\begin{remark}\label{observethattheextensiontildew1} If $\int_{\Omega} \vert w(z)\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{2l} d\lambda(z)<+\infty$, let us observe that the extension $\tilde w$ of $w$ depends ($a\, priori$) on the choice of $r>l$. We will use the results of theorem \ref{w(z)=w(zeta),rhoepsilon9on} in the case of $\bf \mbox{I\hspace{-.47em}C}^n={\bf \mbox{I\hspace{-.17em}R}}^{2n}$, so that $\tilde w$ can be constructed in $H^{-l-n-1}$. \end{remark}
\begin{remark}\label{observethattheextensiontildew2}
If $\int_{\Omega} \vert w(z)\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{2l} d\lambda(z)<+\infty$, as $\Omega$ is bounded,
the Schwarz inequality implies that
$\int_{\Omega}\, d(z,\partial\Omega)^l\,\vert w(z)\vert\; d\lambda(z)<+\infty$, so that $w$ defines on $\Omega$ a measure of polynomial growth at most $l$.
Hence the first part of theorem \ref{w(z)=w(zeta),rhoepsilon9on} implies that $w$ can be extended to ${\bf \mbox{I\hspace{-.17em}R}}^n$ as a distribution of order at most $l$. \end{remark}
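Explicitly, the Schwarz-inequality step in the preceding remark reads:
\begin{equation*}
\int_{\Omega} \lbrack d(z,\partial\Omega)\rbrack^{l}\,\vert w(z)\vert\, d\lambda(z)\le\Big(\int_{\Omega}\vert w\vert^{2}\,\lbrack d(z,\partial\Omega)\rbrack^{2l}\, d\lambda\Big)^{\frac{1}{2}}\,\lambda(\Omega)^{\frac{1}{2}}<+\infty,
\end{equation*}
where the finiteness of $\lambda(\Omega)$ uses that $\Omega$ is bounded.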
\begin{proof}
In L. Schwartz's book there is no proof and no reference, so we give the following proof. We consider the subspace $F\subset\mathcal D(\bf \mbox{I\hspace{-.47em}C}^n)$ of functions whose derivatives of order $\le l-1$ vanish at every point $\zeta\in\partial\Omega$.
For a given $z\in\Omega$, we choose a point $\zeta\in\partial\Omega$ such that $\vert z-\zeta\vert=d(z,\partial\Omega)$
and we apply Taylor's formula at the point $\zeta\in\partial\Omega$, at the order $l-1$ (with integral remainder, cf. [H\"or1983] paragraph 1.1, formula (1.1.7'))
to a function $\phi\in F$ restricted to the real interval $\lbrace tz+(1-t)\zeta\vert \, t\in{\bf \mbox{I\hspace{-.17em}R}},\, 0\le t\le 1\rbrace$
linking in $\Omega$ the point $\zeta\in\partial\Omega$ to $z\in\Omega$, so that we obtain:
\begin{equation}\label{w(z)=w(zeta),rhoepsilon8taylor}
\phi(z)=l\int_0^1 (1-t)^{l-1}\left\lbrack\,\Sigma_{\vert\alpha\vert= l}\; D_{}^{\alpha} \phi(\zeta+t(z-\zeta))\,\frac{(z-\zeta)^{\alpha}}{\alpha!}\,\right\rbrack dt, \end{equation}
and then: \begin{equation}\label{w(z)=w(zeta),rhoepsilon8}
\vert \phi(z)\vert\le C_4(l,n)\; \lbrack d(z,\partial\Omega)\rbrack^{l}
\;\max_{\xi \in\bar\Omega} \; \lbrack\Sigma_{\vert\alpha\vert= l}\; \vert D_{\xi}^{\alpha} \phi(\xi)\vert\rbrack. \end{equation} For all functions $\phi\in F$ and all measures $\mu$ on $\Omega$
of polynomial growth $l$, \emph{i.e} $\int_{\Omega}\; d(z,\partial\Omega)^l d\vert\mu\vert(z)<+\infty$, (using (\ref{w(z)=w(zeta),rhoepsilon8})) we have:
\begin{equation}\label{w(z)=w(zeta),rhoepsilon9} \vert\int_{\Omega} \phi\; d\mu\; \vert \le C_4(l,n)\;\lbrack\,\int_{\Omega}\; d(z,\partial\Omega)^l d\vert\mu\vert\,\rbrack
\;\max_{\xi \in\bar\Omega} \;\lbrack \Sigma_{\vert\alpha\vert\le l}\; \vert D_{\xi}^{\alpha} \phi(\xi)\vert\rbrack. \end{equation}
For a given measure $\mu$ of polynomial growth $l$, we consider the space $\mathcal E^l(\bf \mbox{I\hspace{-.47em}C}^n)$ of functions of class $\mathrm C^l$ on $\bf \mbox{I\hspace{-.47em}C}^n$. We apply the Hahn-Banach theorem to the linear form $\phi\rightarrow\int_{\Omega} \phi\; d\mu$ defined on the subspace $F\subset\mathcal D(\bf \mbox{I\hspace{-.47em}C}^n)\subset\mathcal E^l(\bf \mbox{I\hspace{-.47em}C}^n)$ and continuous for the seminorm
$\max_{\xi\in\bar\Omega} \;\lbrack \Sigma_{\vert\alpha\vert\le l}\; \vert D_{\xi}^{\alpha} \phi(\xi)\vert\rbrack$. This linear form can be extended to a continuous linear form $T$ on $\mathcal E^l(\bf \mbox{I\hspace{-.47em}C}^n)$, such that: \begin{equation}\label{w(z)=w(zeta),rhoepsilon10} \vert<T,\phi> \vert \le C_4(l,n)\;\lbrack\int_{\Omega}\; d(z,\partial\Omega)^l d\vert\mu\vert \rbrack
\;\max_{\xi \in\bar\Omega} \;\lbrack \Sigma_{\vert\alpha\vert\le l}\; \vert D_{\xi}^{\alpha} \phi(\xi)\vert\rbrack. \end{equation} for all $\phi\in\mathcal E^l(\bf \mbox{I\hspace{-.47em}C}^n)$,
i.e. a distribution of order $l$ on $\bf \mbox{I\hspace{-.47em}C}^n$ (with compact support).\\
Let us now assume that $w\in L^2(\Omega, loc)$ satisfies the estimate: \begin{equation}\label{w(z)=w(zeta),rhoepsilon11} I_l(w):=\int_{\Omega} \vert w(z)\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{2l} d\lambda(z) <+\infty \end{equation}
for some integer $l\ge 0$.\\
For every $\phi\in F$, the Cauchy-Schwarz inequality gives: \begin{equation}\label{w(z)=w(zeta),rhoepsilon12} \vert<w,\phi>\vert^2=\vert\int_{\Omega} w\phi\; d\lambda\:\vert^2\le I_l(w)\; \int_{\Omega} \vert \phi(z)\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{-2l} d\lambda(z). \end{equation} Using inequality (\ref{w(z)=w(zeta),rhoepsilon8}), (\ref{w(z)=w(zeta),rhoepsilon12}) becomes: \begin{equation}\label{w(z)=w(zeta),rhoepsilon13} \vert<w,\phi>\vert^2\le C_5(l,n,\Omega)\;I_l(w)\;
\lbrack\max_{\xi\in\bar\Omega}\Sigma_{\vert\alpha\vert= l}\;\vert D_{\xi}^{\alpha} \phi(\xi)\vert\rbrack^2, \end{equation}
with $C_5(l,n,\Omega):=\lbrack C_4(l,n)\rbrack^2\,\int_{\Omega}d\lambda$.\\ For every $r>l$, we use the classical Sobolev inequality: \begin{equation}\label{w(z)=w(zeta),rhoepsilon14max}
\max_{\xi \in{\bf \mbox{I\hspace{-.17em}R}}^n} \; \Sigma_{\vert\alpha\vert\le l}\; \vert D_{\xi}^{\alpha} \phi(\xi)\vert\le C_6(r)\;\vert\vert\phi\vert\vert_{H^{r+\frac{n}{2}}} \end{equation}
and inequality (\ref{w(z)=w(zeta),rhoepsilon13}), so that
we obtain: \begin{equation}\label{w(z)=w(zeta),rhoepsilon14} \vert <w,\phi>\vert\le C_7(l,n,r,\Omega)\;\lbrack I_l(w)\rbrack^{\frac{1}{2}}\;\vert\vert\phi\vert\vert_{H^{r+\frac{n}{2}}}, \end{equation} with $C_7(l,n,r,\Omega):=C_6(r)\, \lbrack C_5(l,n,\Omega)\rbrack^{\frac{1}{2}}$.\\ Using the Hahn-Banach theorem again for the linear form $\phi\mapsto<w,\phi>$ defined on the subspace $F$ of $H^{r+\frac{n}{2}}({\bf \mbox{I\hspace{-.17em}R}}^n)$ and continuous for the norm of $H^{r+\frac{n}{2}}$, we extend $w$ to a distribution $T\in H^{-r-\frac{n}{2}}$ such that: $\vert\vert T\vert\vert_{H^{-r-\frac{n}{2}}}\le C_7(l,n,r,\Omega)\;\lbrack I_l(w)\rbrack^{\frac{1}{2}}$ (of course we can also obtain this extension by orthogonal projection onto the closed subspace $\bar F$ in the Hilbert space $H^{r+\frac{n}{2}}({\bf \mbox{I\hspace{-.17em}R}}^n)$). \end{proof}
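Let us note that the Sobolev inequality (\ref{w(z)=w(zeta),rhoepsilon14max}) used in the proof follows from the Fourier inversion formula and the Cauchy-Schwarz inequality: with the convention $\vert\vert\phi\vert\vert^2_{H^{s}}=\int_{{\bf \mbox{I\hspace{-.17em}R}}^n}(1+\vert\zeta\vert^2)^{s}\,\vert\hat\phi(\zeta)\vert^2 d\zeta$ (up to the normalization of the Fourier transform), for every $\vert\alpha\vert\le l$ we have
\[ \vert D^{\alpha}\phi(\xi)\vert\le C\int \vert\zeta\vert^{\vert\alpha\vert}\,\vert\hat\phi(\zeta)\vert\, d\zeta \le C\,\Big\lbrack\int (1+\vert\zeta\vert^2)^{l-r-\frac{n}{2}}\, d\zeta\Big\rbrack^{\frac{1}{2}}\; \vert\vert\phi\vert\vert_{H^{r+\frac{n}{2}}}, \]
where the last integral converges precisely because $r>l$, \emph{i.e.} $2(r+\frac{n}{2}-l)>n$; summing over $\vert\alpha\vert\le l$ gives (\ref{w(z)=w(zeta),rhoepsilon14max}).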
We can now prove theorem 1.
\section{Proof of theorem 1} \label{Proofoftheorem 1}
We follow P. Dolbeault's proof of the Dolbeault-Grothendieck lemma. A. Grothendieck's proof was different and (in some sense) more elementary
than P. Dolbeault's, but it is not useful for our present purpose. Of course we can suppose (w.l.o.g.) that $f$ has compact support in $\bf \mbox{I\hspace{-.47em}C}^n$ (using a cutoff function in $\mathcal D ({\bf \mbox{I\hspace{-.17em}R}}^n)$ equal to 1 in a neighborhood of $\bar \Omega$).
Let us recall that, $\bf \mbox{I\hspace{-.47em}C}^n$ being equipped with its usual flat Hermitian metric, the Laplacian acting on differential forms and currents is defined on $\bf \mbox{I\hspace{-.47em}C}^n$ by: \begin{equation}\label{beanopenpseudoconvex7delta1} \frac{1}{2} \Delta=\frac{1}{2} (dd^{\star}+d^{\star}d)=\bar\partial\bar\partial^{\star}+\bar\partial^{\star}\bar\partial =\partial\partial^{\star}+\partial^{\star}\partial, \end{equation}
so that $\frac{1}{2}\Delta f$ is the usual Laplacian on $\bf \mbox{I\hspace{-.47em}C}^n$ acting on each coefficient of the current $f$.
$\bar\partial^{\star}$ (resp. $\partial^{\star}$) (resp. $d^{\star}$)
is the adjoint of $\bar\partial$ (resp. $\partial$) (resp. $d:=\partial+\bar\partial$) for the same constant metric on $\bf \mbox{I\hspace{-.47em}C}^n$
(there is no weight function).\\ At first we solve in $\bf \mbox{I\hspace{-.47em}C}^n$ the Laplacian equation: \begin{equation}\label{beanopenpseudoconvex7} \frac{1}{2}\Delta v:=(\bar\partial\bar\partial^{\star}+\bar\partial^{\star}\bar\partial)\; v=\bar\partial^{\star}f. \end{equation}
($v$ and $\bar\partial^{\star}f$ are of bidegree $(p,q)$.)\\
If we write: $f=\sum_{\vert I\vert=p,\vert J\vert=q+1}^{'} f_{I,J} dz_I\wedge d\bar z_J$
($\Sigma ^{'}$ means that we only sum on strictly increasing multi-indices $I$ and $J$), we have (cf. [H\"or1966] paragraph 4.1, p. 82 or 85, or [Dem2012] Chapter 6, paragraph 6.1): \begin{equation}\label{beanopenpseudoconvex8adjoint} \bar\partial^{\star} f=(-1)^{p-1}\Sigma_{\vert I\vert=p,\vert K\vert=q}^{'} \; \left\lbrack\Sigma_{j=1}^{j=n} \frac{\partial}{\partial z_j}(f_{I,jK})\right\rbrack \; dz_I\wedge d\bar z_{K}. \end{equation} If $f$ is of bidegree $(0,1)$, we simply have: $f=\Sigma_{j=1}^{j=n} f_j d \bar z_j$, $\bar\partial^{\star} f=- \Sigma_{j=1}^{j=n} \frac{\partial f_j}{\partial z_j}$ and (\ref{beanopenpseudoconvex7}) is the Laplace equation in $\bf \mbox{I\hspace{-.47em}C}^n$: $\Sigma_{j=1}^{j=n} \frac{\partial^2}{\partial z_j \partial\bar z_j} v=\Sigma_{j=1}^{j=n} \frac{\partial f_j}{\partial z_j}$.\\ $v$ is obtained by convolution of each coefficient $\frac{\partial f_{I,jK}}{\partial z_j}$ of $\bar\partial^{\star} f$ in (\ref{beanopenpseudoconvex8adjoint}) with the elementary solution $E$ of the usual Laplacian in $\bf \mbox{I\hspace{-.47em}C}^n$.\\ We set: \begin{equation}\label{beanopenpseudoconvex8} g:=f-\bar\partial v. \end{equation} As $\bar\partial^2=0$, we have (by a standard computation): \begin{equation}\label{beanopenpseudoconvex94p} \frac{1}{2}\Delta(\bar\partial v)=(\bar\partial\bar\partial^{\star}+\bar\partial^{\star}\bar\partial) \bar\partial v=\bar\partial\bar\partial^{\star}\bar\partial v=\bar\partial(\bar\partial\bar\partial^{\star}+\bar\partial^{\star}\bar\partial) v =\bar\partial(\frac{1}{2}\Delta v) \end{equation}
\emph{i.e.} $\bar\partial$ commutes with $\Delta$ on $\bf \mbox{I\hspace{-.47em}C}^n$.
Using (\ref{beanopenpseudoconvex7}), we have: $\frac{1}{2}\Delta(\bar\partial v)=\bar\partial\bar\partial^{\star}f$,
and: \begin{equation}\label{beanopenpseudoconvex9} \frac{1}{2}\Delta g=\frac{1}{2}\Delta f-\frac{1}{2}\Delta (\bar\partial v)=(\bar\partial\bar\partial^{\star}+\bar\partial^{\star}\bar\partial) \; f-\bar\partial\bar\partial^{\star}f =\bar\partial^{\star}\bar\partial\; f. \end{equation} Hence (as $\bar\partial f=0$ on $\Omega$), $g$ is harmonic on $\Omega$: \begin{equation}\label{beanopenpseudoconvex11} \Delta g=0. \end{equation} $f$ being of order $k$ with compact support,
$\bar\partial^{\star}f$ is of order $k+1$ with the same support (the coefficients of $\bar\partial^{\star}f$ are linear combinations
of derivatives $\frac{\partial}{\partial z_j}$ of the coefficients of $f$). Therefore the solution $v$ of the Laplacian equation (\ref{beanopenpseudoconvex7}) is of order at most $k$. Indeed it is obtained by convolution: $E\star\frac{\partial f_{I,jK}}{\partial z_j}=\frac{\partial E}{\partial z_j}\star f_{I,jK}$
of each coefficient $\frac{\partial f_{I,jK}}{\partial z_j}$
of $\bar\partial^{\star} f$ in (\ref{beanopenpseudoconvex8adjoint})
with the elementary solution $E:=-C_n \vert z\vert^{-2n+2}$ of $\Delta$; the first derivatives $\frac{\partial E}{\partial z_j}$ of $E$ are $O(\vert z\vert^{-2n+1})$, hence in $L^1 (\bf \mbox{I\hspace{-.47em}C}^n,loc)$, and the convolution $\frac{\partial E}{\partial z_j}\star f_{I,jK}$
of a function in $L^1({\bf \mbox{I\hspace{-.17em}R}}^{2n}, loc)$ with a distribution of order $k$ and compact support is still of order at most $k$. Hence $\bar\partial v$ is of order at most $k+1$ and $g=f-\bar\partial v$ is also of order at most $k+1$.\\
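For instance, the local integrability of $\frac{\partial E}{\partial z_j}$ claimed above is immediate in polar coordinates on ${\bf \mbox{I\hspace{-.17em}R}}^{2n}$:
\[ \int_{\vert z\vert\le 1}\vert z\vert^{-2n+1}\, d\lambda(z)=\sigma_{2n-1}\int_0^1 r^{-2n+1}\, r^{2n-1}\, dr=\sigma_{2n-1}<+\infty, \]
where $\sigma_{2n-1}$ denotes the area of the unit sphere of ${\bf \mbox{I\hspace{-.17em}R}}^{2n}$.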
We write: $g=\sum_{\vert I\vert=p,\vert J\vert=q+1} g_{I,J} dz_I\wedge d\bar z_J$ with strictly increasing multi-indices $I$ and $J$. Let $g_{I,J}$ be a coefficient of $g$. As $g_{I,J}$ is harmonic in $\Omega$, we can apply lemma \ref{LetwbeasdistributiononRoforderk} in ${\bf \mbox{I\hspace{-.17em}R}}^{2n}$ to $g_{I,J}$, which is a distribution of order at most $k+1$, and we get the inequality:
\begin{equation}\label{beanopenpseudoconvex12}
\vert g_{I,J}(z)\vert\le C_1(\Omega,g_{I,J})\; \lbrack d(z,\partial\Omega)\rbrack^{-2n-k-1}. \end{equation} Hence : \begin{equation}\label{beanopenpseudoconvex13} \vert g(z)\vert\le C_2(\Omega,g)\;\lbrack d(z,\partial\Omega)\rbrack^{-l} \end{equation}
for some constant $C_2(\Omega,g)>0$, with $l:=2n+k+1$ and $z\in\Omega$, and then:
\begin{equation}\label{beanopenpseudoconvex14} \int_{\Omega} \vert g\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{2l} d\lambda <\infty \end{equation}
where $d\lambda$ is the Lebesgue measure on $\bf \mbox{I\hspace{-.47em}C}^n$.\\
L. H\"ormander's $L^2$ estimates for $\bar\partial$ (corollary \ref{beanopenpseudoconvex3}) provide a solution $u$ in $\Omega$ of the equation:
\begin{equation}\label{beanopenpseudoconvex15} \bar\partial u=g \end{equation} with the $L^2$ estimate: \begin{equation}\label{beanopenpseudoconvex16} \int_{\Omega} \vert u\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{2l} d\lambda <\infty \end{equation}
As $\Omega$ is bounded, the Cauchy-Schwarz inequality gives the following $L^1$ estimate: \begin{equation}\label{beanopenpseudoconvex17} \int_{\Omega} \vert u\vert\;\lbrack d(z,\partial\Omega)\rbrack^{l} d\lambda <\infty. \end{equation}
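Explicitly, writing $\vert u\vert\,\lbrack d(z,\partial\Omega)\rbrack^{l}=\big(\vert u\vert\,\lbrack d(z,\partial\Omega)\rbrack^{l}\big)\cdot 1$, the Cauchy-Schwarz inequality gives
\[ \int_{\Omega} \vert u\vert\;\lbrack d(z,\partial\Omega)\rbrack^{l}\, d\lambda \le \Big\lbrack\int_{\Omega} \vert u\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{2l}\, d\lambda\Big\rbrack^{\frac{1}{2}}\; \lbrack\lambda(\Omega)\rbrack^{\frac{1}{2}}<+\infty. \]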
Therefore a coefficient $u_{I,J}$ of $u$ defines a measure of polynomial growth $l$ on $\Omega$.
Using L. Schwartz's theorem \ref{w(z)=w(zeta),rhoepsilon9on}, such a measure (of polynomial growth $l$) defined on $\Omega$ can be extended as a distribution on $\bf \mbox{I\hspace{-.47em}C}^n$ (of order at most $l$) so that $u$ can be extended as a current on $\bf \mbox{I\hspace{-.47em}C}^n$ of order at most $l$. Then $u+v$ is a current on $\bf \mbox{I\hspace{-.47em}C}^n$ verifying: \begin{equation}\label{beanopenpseudoconvex18} \bar\partial( u+v)=f \end{equation} on $\Omega$. Moreover $u+v$ has order at most $l=k+2n+1$.\\ We now consider the case of a given $f\in H^{-s}_{(p,q+1)}(\bf \mbox{I\hspace{-.47em}C}^n)$ for some $s\ge0$. Then $\bar\partial^{\star}f\in H^{-s-1}_{(p,q)}(\bf \mbox{I\hspace{-.47em}C}^n)$.
Classically we can find a solution $v$ of the Laplace equation (\ref{beanopenpseudoconvex7})
in $H^{-s+1}_{(p,q)}(\bf \mbox{I\hspace{-.47em}C}^n)$ so that $g=f-\bar\partial v$ is also in $ H^{-s}_{(p,q+1)}(\bf \mbox{I\hspace{-.47em}C}^n)$.
We apply lemma \ref{LetwbeasdistributiononRoforderk} to every coefficient $g_{I,J}$ of $g$ in $\bf \mbox{I\hspace{-.47em}C}^n={\bf \mbox{I\hspace{-.17em}R}}^{2n}$ so that $\vert g(z)\vert\le C\; \lbrack d(z,\bf \mbox{I\hspace{-.47em}C}^n \setminus \Omega)\rbrack^{-k-n}$ where $k$ is the integer such that $s\le k<s+1$ and therefore: \begin{equation} \int_{\Omega} \vert g\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{2k+2n} d\lambda <+\infty. \end{equation} Corollary \ref{beanopenpseudoconvex3} implies we can solve $\bar\partial u=g=f-\bar\partial v$ with the estimate: \begin{equation} \int_{\Omega} \vert u\vert^2\;\lbrack d(z,\partial\Omega)\rbrack^{2k+2n} d\lambda <+\infty. \end{equation}
We now apply theorem \ref{w(z)=w(zeta),rhoepsilon9on} in $\bf \mbox{I\hspace{-.47em}C}^n={\bf \mbox{I\hspace{-.17em}R}}^{2n}$ to every coefficient of $u$ with $l=k+n$ ($l+n=k+2n$) so that for every $r>k$, $u$ can be extended as a current in $\bf \mbox{I\hspace{-.47em}C}^n$ in $H^{-r-2n}_{(p,q)}(\bf \mbox{I\hspace{-.47em}C}^n)$. As $v\in H^{-s+1}_{(p,q)}(\bf \mbox{I\hspace{-.47em}C}^n)$, $u+v$ is also in $H^{-r-2n}_{(p,q)}(\bf \mbox{I\hspace{-.47em}C}^n)$ and satisfies $\bar\partial(u+v)=f$ in $\Omega$.
\begin{remark}\label{Proofoftheorem2remarkprecise} We have a slightly more precise result: $f=\bar\partial(u+v)$ in $\Omega$ with $u\in L^{2,k+n}_{(p,q)}(\Omega)\cap H^{-r-2n}_{(p,q)}(\bf \mbox{I\hspace{-.47em}C}^n)$ (for every $r>k$, in particular for $r=k+1$)
and $v\in H^{-s+1}_{(p,q)}(\bf \mbox{I\hspace{-.47em}C}^n)$. We will use the fact that $v\in H^{-s+1}_{(p,q)}(\bf \mbox{I\hspace{-.47em}C}^n)$ has a better regularity than $f\in H^{-s}_{(p,q+1)}(\bf \mbox{I\hspace{-.47em}C}^n)$ in section \ref{Proofoftheorem2}.
\end{remark}
\section{Extension of theorem 1 to Stein manifolds} \label{Proofoftheorem2} We will now see that theorem \ref{partialtemperedchomology} remains true
for a relatively compact open Stein subset $\Omega$ of a given Stein manifold $X$.
We can essentially use the same reasoning as in $\bf \mbox{I\hspace{-.47em}C}^n$, but we need much stronger technical results.\\ Let us recall that a complex manifold $X$ is Stein if, by definition, the global holomorphic functions $\cal O(X)$ separate the points of $X$, give local holomorphic coordinates on $X$ and if $X$ is holomorphically convex (for every compact $K$ in $X$ the holomorphic hull $\hat K$ of $K$ is compact, with $\hat K:=\lbrace x\in X\vert\, \vert f(x)\vert\le \max_{\xi\in K}\vert f(\xi)\vert \;\mbox{for all}\; f\in\cal O(X)\rbrace$). Let us also recall the two following other characterizations of a Stein manifold $X$ of complex dimension $n$. First, a complex manifold $X$ is Stein if and only if it can be embedded as a closed complex submanifold of $\bf \mbox{I\hspace{-.47em}C}^{2n+1}$.
Second, $X$ is Stein if and only if there exists a strictly plurisubharmonic exhaustion function $\psi$ on $X$ of class $C^2$
(if $X$ is a closed submanifold of $\bf \mbox{I\hspace{-.47em}C}^{2n+1}$,
we can take for $\psi$ the restriction to $X$ of the function $\vert\vert x\vert\vert^2$ defined on $\bf \mbox{I\hspace{-.47em}C}^{2n+1}$).\\ Hence $X$ is a K\"ahlerian manifold [Weil1958] (taking, for instance, the K\"ahler metric associated with the closed K\"ahler form $i\partial\bar\partial \psi$).\\
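The basic example is of course $X=\bf \mbox{I\hspace{-.47em}C}^n$ itself, with $\psi(z):=\vert\vert z\vert\vert^2$: then
\[ i\partial\bar\partial \psi=i\,\Sigma_{j=1}^{j=n}\, dz_j\wedge d\bar z_j, \]
which is (up to normalization) the standard K\"ahler form of $\bf \mbox{I\hspace{-.47em}C}^n$, and $\psi$ is a strictly plurisubharmonic exhaustion function.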
It is proved in [Ele75]
that if we consider a relatively compact Stein open subset $\Omega$ of $X$ and the geodesic distance associated with a given K\"ahlerian metric on $X$, then the function $-\log d(z,\partial\Omega)+C(\Omega,\psi)\, \psi$ is strictly plurisubharmonic in $\Omega$
for a constant $C(\Omega,\psi)$ large enough.\\
Therefore using [Dem2012] Chapter VIII, paragraph 6, Theorem 6.1 p. 376 and 6.5, p. 378 or [Dem1982] the following result (similar to corollary \ref{beanopenpseudoconvex3}) still holds on a Stein manifold:\\ \begin{theorem}\label{beanopenpseudoconvex3SteinDem12} Let $\Omega$ be a relatively compact open Stein subset of the Stein manifold $X$. We consider on $X$ a given K\"ahler form $\omega$,
the geodesic distance on $X$ associated with $\omega$
and for $z\in\Omega$ the corresponding distance $d(z,\partial\Omega)$ to the boundary of $\Omega$.
Let us consider a holomorphic Hermitian vector bundle $F$ of rank $r$ on $X$ and currents with values in $F$.
Let $k\ge0$ be a given real number. Then for every $g\in L^2_{(p,q+1)}(\Omega, F,loc )$ with $\bar\partial g=0$
such that: $\int_{\Omega}\vert g\vert^2 \lbrack d(z,\partial\Omega)\rbrack^k d\lambda<+\infty$, there exists $u\in L^2_{(p,q)}(\Omega, F, loc )$ such that: \begin{equation}\label{beanopenpseudoconvex4}
\bar\partial u=g \end{equation} in $\Omega$ and:
\begin{equation}\label{beanopenpseudoconvex5Stein} \int_{\Omega}\vert u\vert^2 \lbrack d(z,\partial\Omega)\rbrack^k d\lambda\le C(\Omega, F, k) \int_{\Omega}\vert g\vert^2 \lbrack d(z,\partial\Omega)\rbrack^k d\lambda,
\end{equation} where $d\lambda=\frac{\omega^n}{n!}$ is the positive measure on $X$ defined by the $(n,n)$ form $\frac{\omega^n}{n!}$ ($C(\Omega, F, k)$ is a constant $>0$ only depending on $\Omega$, $F$ and $k$). \end{theorem}
Let us give more details about how to deduce theorem \ref{beanopenpseudoconvex3SteinDem12} from theorems 6.1 and 6.5 in [Dem2012]. At first let us recall Demailly's theorem 6.5 (for the sake of simplicity we state it with a slightly more restrictive assumption): \begin{theorem}\label{beanopenpseudoconvex3Stein} Let $(X,\omega)$ be a Stein manifold $X$ of complex dimension $n$ with a given K\"ahler metric $\omega$. Let us consider a holomorphic Hermitian vector bundle $F$ of rank $r$ on $X$ and a $\mathrm C^{\infty}$
function $\phi$ on $X$ such that $ic(F)+i\partial\bar\partial\phi\ge\mu\,\omega$, where $c(F)$ is the curvature form of $F$ and $\mu>0$ a given constant. Then for every $g\in L^2_{(n,q+1)}(X, F,loc )$ with $\bar\partial g=0$
such that: $\int_{X}\vert g\vert^2 e^{-\phi} dV<+\infty$, there exists $u\in L^2_{(n,q)}(X, F,loc)$ such that: \begin{equation}\label{beanopenpseudoconvex4}
\bar\partial u=g \end{equation} in $X$ and:
\begin{equation}\label{beanopenpseudoconvex5Steinhm45} \int_{X}\vert u\vert^2 e^{-\phi} dV\le \frac{1}{\mu}\int_{X}\vert g\vert^2 e^{-\phi} dV,
\end{equation} where $dV=\frac{\omega^n}{n!}$ is the positive measure on $X$ defined by the $(n,n)$ form $\frac{\omega^n}{n!}$. \end{theorem} In theorem 6.5 in [Dem2012], $F$ is a line bundle, but the result is still valid for a vector bundle: one only needs to consider the positivity of the curvature form $ic(F)$ of $F$ in the strong sense of Nakano, as explained in [Dem2012] (theorem 6.1).
For $(p,q)$-forms (with $p\neq n$) with values in $F$, we consider $(n,q)$-forms with values in the new vector bundle $F\bigotimes \wedge^p T^{\star}(X)\bigotimes\wedge^n T(X)$.\\
If now $\Omega$ is a relatively compact open Stein subset of $X$, we can choose a constant $C_1(\Omega, F)$
such that $ic(F)+C_1(\Omega, F)\, i\partial\bar\partial\psi\ge\omega$ on $\bar\Omega$ (in the strong sense of Nakano). For every $C^{\infty}$ plurisubharmonic function $\phi$ on $\Omega$
we can apply theorem \ref{beanopenpseudoconvex3Stein} restricted to the Stein manifold $\Omega$ and the function $C_1(\Omega, F)\,\psi+\phi$ so that (as $\psi$ is bounded on $\Omega$) we get an estimate: \begin{equation}\label{beanopenpseudoconvex5Stein6YUJN} \int_{\Omega}\vert u\vert^2 e^{-\phi} dV\le C_2(\Omega,F)\int_{\Omega}\vert g\vert^2 e^{-\phi} dV.
\end{equation} We can now take $\phi=-k\log d(z,\partial\Omega)+k\, C(\Omega,\psi)\, \psi$ (for some $k\ge 0$), so that (as $\psi$ is bounded on $\Omega$) we get the estimate (\ref{beanopenpseudoconvex5Stein}) of theorem \ref{beanopenpseudoconvex3SteinDem12} (for $(p,q)$-forms with values in $F$). The function $\phi:=-k\log d(z,\partial\Omega)+k\,C(\Omega,\psi)\, \psi$ is only continuous on $\Omega$, but as $\phi$ is strictly plurisubharmonic on the Stein manifold $\Omega$ it can be closely approximated by a family $(\phi_{\epsilon })$ ($0<\epsilon<\epsilon_0$) of $C^{\infty}$ strictly plurisubharmonic functions on $\Omega$, as explained in [Dem2012] chapter 1, paragraph 5.E, page 42 (Richberg theorem (5.21)), such that $\phi\le \phi_{\epsilon }\le \phi +\epsilon$. At first we obtain the estimate (\ref{beanopenpseudoconvex5Stein6YUJN})
for the functions $\phi_{\epsilon}$ and a solution $u_{\epsilon}$ of (\ref{beanopenpseudoconvex4}). Taking the limit as $\epsilon$ goes to 0 and using the weak compactness of the closed ball of $L^{2,k}_{(p,q)}(X, F)$,
we get (\ref{beanopenpseudoconvex5Stein6YUJN}) for $\phi$ and a weak limit $u$ of a subsequence of the family $(u_{\epsilon})$.\\
We will only use lemma \ref{LetwbeasdistributiononRoforderk} in a local chart of $X$ (i.e. in $\bf \mbox{I\hspace{-.47em}C}^n$): we do not need to extend this lemma to the Riemannian Laplacian operator on $X$ with variable coefficients. Replacing $\bf \mbox{I\hspace{-.47em}C}^n={\bf \mbox{I\hspace{-.17em}R}}^{2n}$ by a complex Riemannian manifold $X$, extension theorem \ref{w(z)=w(zeta),rhoepsilon9on} is still valid as it is a local result (along the boundary of $\Omega$) using a partition of unity
of class $C^{\infty}$ on $X$.\\
We will give an appropriate simple extension of theorem \ref{partialtemperedchomology} in $\bf \mbox{I\hspace{-.47em}C}^n$ (proposition \ref{partialtemperedchomologyescrtyX}), which will be enough to work on a Stein manifold $X$ and with an arbitrary holomorphic vector bundle on $X$. Applying and iterating this last result in local charts of $X$, we will reduce the problem to J-P. Demailly's estimates for $\bar\partial$ (theorem \ref{beanopenpseudoconvex3SteinDem12}).\\ For brevity, we set: $d_{\Omega}(z):=d(z,\partial \Omega)$. Let us recall that for a given $k\in{\bf \mbox{I\hspace{-.17em}R}}$, we denote by $L^{2,k}(\Omega)=L^{2,k}_{(0,0)}(\Omega)$ the space of functions $u\in L^2(\Omega,loc)$ such that
$\int_{\Omega}\vert u\vert^2 \lbrack d_{\Omega}(z)\rbrack^{2k} d\lambda<+\infty$ and we set: $\vert\vert u\vert\vert_k^2:=\int_{\Omega}\vert u\vert^2 \lbrack d_{\Omega}(z)\rbrack^{2k} d\lambda$. We need the following preliminary lemma.
\begin{lemma}\label{w(z)=w(zeta),rhoepsilon9on23avril18WX}
Let $\Omega$ be a bounded open subset of ${\bf \mbox{I\hspace{-.17em}R}}^n$
and $k\in\mbox{I\hspace{-.15em}N}$ be given.
Then for every $w\in L^{2,k}(\Omega)$
there exists a solution
$v\in L^{2,k+n-1}(\Omega)$
of the equation:
$\Delta v=\frac{\partial}{\partial x_1} w$
or of the equation : $\Delta v= w$
such that:
$\vert\vert v\vert\vert_{k+n-1}^2\le C(\Omega,k,n) \vert\vert w\vert\vert_k^2$.
\end{lemma}
We will apply lemma \ref{w(z)=w(zeta),rhoepsilon9on23avril18WX} in $\bf \mbox{I\hspace{-.47em}C}^n={\bf \mbox{I\hspace{-.17em}R}}^{2n}$, so that $v\in L^{2,k+2n-1}(\Omega)$ and we will consider the equations : $\Delta v=\frac{\partial}{\partial x_j} w$, $1\le j\le n$ (we only refer to the equation $\Delta v= w$ for completeness).
\begin{proof} We can suppose that $Diam\,\Omega<1$. According to theorem \ref{w(z)=w(zeta),rhoepsilon9on} and remark \ref{observethattheextensiontildew2}, $w$ can be extended to ${\bf \mbox{I\hspace{-.17em}R}}^n$ as a distribution of order at most $k$ with compact support, which we denote by $\tilde w$. We suppose that the support of $\tilde w$ is a subset of an open ball $B$ of ${\bf \mbox{I\hspace{-.17em}R}}^n$ and we set $L:=\bar B$.\\
The solution $v$ is given by the convolution with the elementary solution $E=-c_n\vert x\vert^{-n+2}$ of the Laplacian in ${\bf \mbox{I\hspace{-.17em}R}}^n$: $v:=E\star (\frac{\partial}{\partial x_1}\tilde w)=(\frac{\partial}{\partial x_1}E)\star \tilde w$ (resp. $v:=E\star \tilde w$). Let us set $K:=\frac{\partial}{\partial x_1}E=(n-2)c_n\vert x\vert^{-n+1}$ (resp. $K=E$). We will need the fact that $K\in L^1_{loc}({\bf \mbox{I\hspace{-.17em}R}}^n)$ and is of class $\mathrm C^{\infty}$ outside $0$. We only have to prove that $v\vert_{\Omega}$ satisfies the required estimate: $v\in L^{2,k+n-1}(\Omega)$.
Let $\psi$ be a cutoff function in $\mathcal D({\bf \mbox{I\hspace{-.17em}R}}^n)$
such that $0\le \psi\le 1$, $\psi=1$
in a neighborhood of the closed ball $\bar B(0,1)$,
with support in the open ball $B(0,2)$ and such that $\psi(x)=\psi(-x)$ for all $x\in{\bf \mbox{I\hspace{-.17em}R}}^n$. We can split $v$ into $v:=K\star \tilde w=(\psi K)\star \tilde w+\lbrack(1-\psi)K\rbrack\star \tilde w$.
$(1-\psi)K\in\mathrm C^{\infty}({\bf \mbox{I\hspace{-.17em}R}}^n)$ and $\tilde w$ is a distribution with compact support in ${\bf \mbox{I\hspace{-.17em}R}}^n$,
then $\lbrack(1-\psi)K\rbrack\star \tilde w$ is in $\mathrm C^{\infty}({\bf \mbox{I\hspace{-.17em}R}}^n)$ and is bounded on the bounded subset $\bar\Omega$. Therefore we only have to verify that $v':=(\psi K)\star \tilde w$ is in $L^{2,k+n-1}(\Omega)$.
Henceforth we replace $K$ by $\tilde K:=\psi K$. $\tilde K$ has its support in the ball $B(0,2)$, $\tilde K(x)=\tilde K(-x)$ for all $x\in{\bf \mbox{I\hspace{-.17em}R}}^n$
and we only have to consider $v':=\tilde K\star \tilde w$ instead of $v$.\\
The idea of the proof is to locally use in $\Omega$ the classical $L^2$ inequality for convolution $\vert\vert \tilde K\star w\vert\vert_{L^2}\le\vert\vert \tilde K\vert\vert_{L^1} \vert\vert w\vert\vert_{L^2}$ in ${\bf \mbox{I\hspace{-.17em}R}}^n$ (of course after truncating $w$ by an appropriate cutoff function) (cf. [H\"or1983] Corollary 4.5.2, p. 117). Close to the boundary $\partial\Omega$ we get another estimate, using the extension $\tilde w$ of $w$ as a distribution in ${\bf \mbox{I\hspace{-.17em}R}}^n$. It will give us a polynomial loss $\lbrack d_{\Omega}(z)\rbrack^{-n+1}$. We have to prove the following inequality (using duality between $L^{2,k+n-1}(\Omega)$ and $L^{2,-k-n+1}(\Omega)$): \begin{equation} \label{artialtemperedchomolog56laplacientemperate1UX} \vert < \tilde K\star\tilde w,\phi>\vert^2\le C(\Omega,\tilde K,n) \vert\vert w\vert\vert_{k}^2 \vert\vert \phi\vert\vert_{-k-n+1}^2 \end{equation} for all $\phi \in\mathcal D(\Omega)$ or equivalently: \begin{equation} \label{artialtemperedchomolog56laplacientemperate2UVX} \vert < \tilde K\star\phi,\tilde w>\vert^2\le C(\Omega,\tilde K,n) \vert\vert w\vert\vert_{k}^2 \vert\vert \phi\vert\vert_{-k-n+1}^2 \end{equation} for all $\phi\in\mathcal D(\Omega)$ (we have $<T\star S,\phi>=<\check T\star\phi,S>$
for $S$, $T$ in $\mathcal E'({\bf \mbox{I\hspace{-.17em}R}}^n)$,
$\check T(x):=T(-x)$, we take $T=\tilde K$ which is symmetric and $S=\tilde w$).\\ For a given $\epsilon>0$, let us define $\Omega_{\epsilon}:=\,\lbrace x\in\Omega\vert\, d_{\Omega}(x)>\epsilon\rbrace$, $K_{\epsilon}:=\lbrace x\in \Omega\vert\, d_{\Omega}(x)\ge \epsilon\rbrace$, $V_{\epsilon}:=\,\lbrace x\in \Omega\vert\, \epsilon< d_{\Omega}(x)<2\epsilon\rbrace$
so that for all $x$ and $y$ in $V_{\epsilon}$
we have the inequality: \begin{equation} \label{artialtemperedchomolog56laplacientemperate3dxdyUV} \frac{1}{2}d_{\Omega}(y)\le d_{\Omega}(x)\le 2d_{\Omega}(y). \end{equation} Of course we will later take $\epsilon=2^{-j}$, $j\in\mbox{I\hspace{-.15em}N}$, and fill $\Omega$ with the exhaustive family of sets $V_{\epsilon}$.
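To illustrate the decomposition, take $\Omega=B(0,R)\subset{\bf \mbox{I\hspace{-.17em}R}}^n$ (with $2R=Diam\,\Omega<1$): then $d_{\Omega}(x)=R-\vert x\vert$ and $V_{\epsilon}=\lbrace x\vert\, R-2\epsilon<\vert x\vert<R-\epsilon\rbrace$ is an open annulus of width $\epsilon$; the annuli $V_{2^{-j}}$, $j\in\mbox{I\hspace{-.15em}N}$, are pairwise disjoint, and their union covers $\Omega$ except the (Lebesgue negligible) spheres $\lbrace \vert x\vert=R-2^{-j}\rbrace$, which are in turn covered by the shifted family $V_{\frac{3}{4}2^{-j}}$, since $\frac{3}{4}2^{-j}<2^{-j}<\frac{3}{2}2^{-j}$.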
For a given subset $A$ of ${\bf \mbox{I\hspace{-.17em}R}}^n$
we denote by $\chi_A$ the characteristic function of $A$: $\chi_A(x)=1$ if $x\in A$, $\chi_A(x)=0$ if $x\notin A$. We define $\psi_{\epsilon}:=\rho_{\frac{\epsilon}{4}}\star\chi_{K_{\frac{\epsilon}{2}}}$. $\psi_{\epsilon}\in\mathcal D(\Omega)$ is a function such that $0\le\psi_{\epsilon}\le1$, $\psi_{\epsilon}=1$ on a neighborhood of $K_{\epsilon}$, $Supp\,\psi_{\epsilon}\subset \Omega_{\frac{\epsilon}{4}}$ and $\vert D^{\alpha}\psi_{\epsilon}\vert\le C(\alpha) \epsilon^{-\vert\alpha\vert }$ for all multi-indices $\alpha\in\mbox{I\hspace{-.15em}N}^n$, where $C(\alpha)$ is a constant depending only on $\alpha$. Indeed we have: $\psi_{\epsilon}(y)=\int_{K_{\frac{\epsilon}{2}}}\rho_{\frac{\epsilon}{4}}(y-x) d\lambda (x)$
with $\rho_{\epsilon}(x)= \frac{1}{\epsilon^n}\rho(\frac{x}{\epsilon})$ so that
$D^{\alpha}\psi_{\epsilon}(y)=\frac{1}{\epsilon^{n+\vert\alpha\vert}}\int_{K_{\frac{\epsilon}{2}}} (D^{\alpha}\rho)({\frac{y-x}{\epsilon}}) d\lambda (x)$
and $\vert D^{\alpha}\psi_{\epsilon}(y)\vert\le\frac{1}{\epsilon^{\vert\alpha\vert}}\int_{R^n} \vert D^{\alpha}\rho\vert(x) d\lambda (x)$.\\ We also set : $\psi'_{\epsilon}=1-\psi_{\epsilon}$ so that $\psi_{\epsilon}+\psi'_{\epsilon}=1$ on $\Omega$. $Supp\, \psi'_{\epsilon}\subset \lbrace x\in\Omega\vert\,d_{\Omega}(x)<\epsilon\rbrace$ so that $Supp\,\psi'_{\epsilon}\cap K_{\epsilon}=\emptyset$.\\
$\epsilon$ being fixed, let us assume at first that $Supp\,\phi\subset V_{\epsilon}$. We set: $I_1:=<\tilde K\star\phi,\psi_{\epsilon}\,\tilde w>$ and $I_2:=<\tilde K\star\phi,\psi'_{\epsilon}\,\tilde w>$ so that we have: \begin{equation} \label{artialtemperedchomolog56laplacientemperate41UVWX} \vert < \tilde K\star\phi,\tilde w>\vert^2=\vert I_1+I_2\vert^2\le 2\vert I_1\vert^2+2\vert I_2\vert^2 \end{equation}
with $I_1:=< \tilde K\star\phi,\psi_{\epsilon}\,\tilde w>=<\psi_{\epsilon}(y)\int_{x\in Supp \phi} \tilde K(y-x)\phi(x) d\lambda (x),\tilde w>_y$ and $I_2:=< \tilde K\star\phi,\psi'_{\epsilon}\,\tilde w>=<\psi'_{\epsilon}(y)\int_{x\in Supp \phi} \tilde K(y-x)\phi(x) d\lambda (x),\tilde w>_y$.\\ As $Supp\,\psi_{\epsilon}\subset\Omega$
we have $\psi_{\epsilon}\,\tilde w=\psi_{\epsilon}\,w$
($w$ is in $L^{2,k}(\Omega)$ and $\tilde w$ is an extension of $w$ as a distribution in $\mathcal D'({\bf \mbox{I\hspace{-.17em}R}}^n)$),\\ and then $I_1:=< \tilde K\star\phi,\psi_{\epsilon}\,\tilde w>=< \tilde K\star(\psi_{\epsilon}\, w),\phi>$. We use the Cauchy-Schwarz inequality and the convolution inequality:\\ $\vert I_1\vert^2\le \vert\vert \tilde K\star(\psi_{\epsilon}\, w)\vert\vert_2^2 \,\,\vert\vert\phi\vert\vert_2^2 \le\vert\vert \tilde K\vert\vert_1^2 \,\,\vert\vert\psi_{\epsilon}\, w\vert\vert_{2}^2 \,\,\vert\vert\phi\vert\vert_{2}^2$, hence\\ $\vert I_1\vert^2 \le\vert\vert \tilde K\vert\vert_1^2 \,\,\vert\vert\psi_{\epsilon}\, w\vert\vert_{2}^2 \,\,\vert\vert\phi\vert\vert_{2}^2 \le\vert\vert \tilde K\vert\vert_1^2 \,(\frac{\epsilon}{4})^{-2k}\,\vert\vert\psi_{\epsilon}\, w\vert\vert_{2,k}^2 \,(2\epsilon)^{2(k+n-1)}\,\vert\vert\phi\vert\vert_{2,-k-n+1}^2$.\\ Indeed, as $d_{\Omega}(x)\ge \frac{\epsilon}{4}$ on the support of $\psi_{\epsilon}$, we have: $(\frac{\epsilon}{4})^{2k}\vert\vert\psi_{\epsilon}\, w\vert\vert_{2}^2\le\vert\vert\psi_{\epsilon}\, w\vert\vert_{2,k}^2$ and, as $d_{\Omega}(x)\le 2\epsilon$ on $V_{\epsilon}$ which contains the support of $\phi$, we have: $(2\epsilon)^{-2(k+n-1)}\vert\vert\phi\vert\vert_{2}^2\le\vert\vert\phi\vert\vert_{2,-k-n+1}^2$.\\ Finally we get: \begin{equation} \label{artialtemperedchomolog56laplacientemperate52341cdgUVWX} \vert I_1\vert^2 \le 2^{6k+2n-2} \epsilon^{2n-2} \vert\vert \tilde K\vert\vert_1^2 \,\,\vert\vert w\vert\vert_{2,k}^2 \,\,\vert\vert\phi\vert\vert_{2,-k-n+1}^2 \end{equation}
for every $\phi$ with $Supp\,\, \phi\subset V_{\epsilon}$.\\
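The constant $2^{6k+2n-2}$ in (\ref{artialtemperedchomolog56laplacientemperate52341cdgUVWX}) simply comes from the product of the two weight factors:
\[ \Big(\frac{\epsilon}{4}\Big)^{-2k}\,(2\epsilon)^{2(k+n-1)}=2^{4k}\,2^{2k+2n-2}\,\epsilon^{-2k}\,\epsilon^{2k+2n-2}=2^{6k+2n-2}\,\epsilon^{2n-2}. \]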
We now consider the term $I_2$. As $\tilde w$ is a distribution of order at most $k$ and has its support in the ball $L$,
$\tilde w$ satisfies an inequality: \begin{equation} \label{artialtemperedchomolog56laplacientemperate52341UVWX} \vert<\tilde w,\psi>\vert\le C(\tilde w)\,\sup_{y\in\, L} \Sigma_{\vert\alpha\vert\le k}
\, \vert D_{y}^{\alpha}\psi(y)\vert, \end{equation}
for all $\psi\in \mathrm{C}^{k}({\bf \mbox{I\hspace{-.17em}R}}^n)$
with $C(\tilde w):=C(\Omega,L)\vert\vert w\vert\vert_k$, using theorem \ref{w(z)=w(zeta),rhoepsilon9on} and remark \ref{observethattheextensiontildew2} (replacing $L$ by a compact neighborhood of $L$ if necessary).\\ We apply inequality (\ref{artialtemperedchomolog56laplacientemperate52341UVWX}) to the function $\psi (y):=\int_{x\in Supp\, \phi} \psi'_{\epsilon}(y) \tilde K(y-x)\phi(x) d\lambda (x)$ and we take the derivative in the variable $y$ under the integral sign, so that we get: \begin{equation} \label{artialtemperedchomolog56laplacientemperate521UVWX} \vert I_2\vert\le C(\tilde w) \max_{y\in L\setminus K_{\frac{\epsilon}{4}}} \Sigma_{\vert\alpha\vert\le k}\,\vert\int_{x\in Supp\,\phi} D^{\alpha}_y\lbrack\psi'_{\epsilon}(y)\tilde K(y-x)\rbrack\,\phi(x) d\lambda (x)\;\vert. \end{equation} Let us observe that $Supp\,\psi'_{\epsilon}\subset {\bf \mbox{I\hspace{-.17em}R}}^n\setminus K_{\frac{\epsilon}{4}}$.\\ For $x\in\mathrm{Supp}\,\phi\subset V_{\epsilon}$ and $y\in{\bf \mbox{I\hspace{-.17em}R}}^n\setminus K_{\frac{\epsilon}{4}}$ we have $\vert x-y\vert\ge \frac{3\epsilon}{4}>\frac{3}{8} d_{\Omega}(x)$ (there exists $z\in\partial\Omega$ such that $ d_{\Omega}(y)=\vert y-z\vert<\frac{\epsilon}{4}$, then $\vert x-z\vert\ge d_{\Omega}(x)>\epsilon$ so that $\vert x-y\vert\ge \vert x-z\vert-\vert y-z\vert>\epsilon-\frac{\epsilon}{4}=\frac{3\epsilon}{4}$) and then\\ $\vert D_y^{\beta}\tilde K(y-x)\vert=O(\vert x-y\vert^{-\vert\beta\vert-n+1}) =O(\lbrack d_{\Omega}(x)\rbrack^{-\vert\beta\vert-n+1})$.
We also have $\vert D_y^{\gamma}\psi'_{\epsilon}(y)\vert=O(\epsilon^{-\vert\gamma\vert}) =O(\lbrack d_{\Omega}(x)\rbrack^{-\vert\gamma\vert})$ so that (as we have $\vert\beta\vert+\vert\gamma\vert=\vert\alpha\vert$) : \begin{equation} \label{artialtemperedchomolog56laplacientemperate6121UVWX} \vert D^{\alpha}_y\lbrack\psi'_{\epsilon}(y) \tilde K(y-x)\rbrack\vert \le C_1(\Omega)\lbrack d_{\Omega}(x)\rbrack^{-\vert\alpha\vert-n+1} \end{equation} (\ref{artialtemperedchomolog56laplacientemperate521UVWX}) and (\ref{artialtemperedchomolog56laplacientemperate6121UVWX}) imply: \begin{equation} \label{artialtemperedchomolog56laplacientemperate523ret1UVX} \vert I_2)\vert\le C(\Omega,\tilde w) \int_{x\in Supp\,\phi} \lbrack d_{\Omega}(x)\rbrack^{-k-n+1}\,\vert\phi(x)\vert d\lambda (x) \end{equation} Using the Cauchy-Schwarz inequality we have: \begin{equation} \label{artialtemperedchomolog56laplacientemperate523retschz1UVX} \vert I_2\vert^2\le \lbrack C(\Omega,\tilde w)\rbrack^2(\lambda (Supp\,\phi))^2 \int_{x\in Supp\,\phi} \lbrack d_{\Omega}(x)\rbrack^{-2k-2n+2}\,\vert\phi(x)\vert^2 d\lambda (x) \end{equation} and then: \begin{equation} \label{artialtemperedchomolog56laplacientemperate523retschz21UVWX} \vert I_2\vert^2\le \lbrack C(\Omega, \tilde w)\rbrack^2(\lambda (V_{\epsilon}))^2 \vert\vert\phi\vert\vert_{2,-k-n+1}^2 \end{equation} (as $Supp\,\, \phi\subset V_{\epsilon}$ and as $\vert\vert\phi\vert\vert_{2,-k-n+1}^2:=\int_{x\in V_{\epsilon}} \lbrack d_{\Omega}(x)\rbrack^{-2k-2n+2}\,\vert\phi(x)\vert^2 d\lambda (x))$.\\ Using inequalities (\ref{artialtemperedchomolog56laplacientemperate41UVWX}), (\ref{artialtemperedchomolog56laplacientemperate52341cdgUVWX}) (for $I_1$) and (\ref{artialtemperedchomolog56laplacientemperate523retschz21UVWX}) (for $I_2$) we get: \begin{equation} \label{artialtemperedchomolog56laplacientemperate4sdghx1UVX} \vert < K\star\tilde w,\phi>\vert^2\le C_1(\Omega,\tilde w)\lbrack(\lambda (V_{\epsilon}))^2 +\epsilon^{2n-2}\rbrack \int_{x\in 
V_{\epsilon}} \lbrack d_{\Omega}(x)\rbrack^{-2k-2n+2}\,\vert\phi(x)\vert^2 d\lambda (x) \end{equation} for all $\phi$ with $Supp\,\phi\subset V_{\epsilon}$ and therefore: \begin{equation} \label{artialtemperedchomolog56laplacientemperate4sdghx21UVX} \int_{V_{\epsilon}}\vert K\star\tilde w\vert^2\lbrack d_{\Omega}(x)\rbrack^{2k+2n-2}d\lambda\le C_1(\Omega,\tilde w)
\lbrack(\lambda (V_{\epsilon}))^2 + \epsilon^{2n-2}\rbrack \end{equation} with $C_1(\Omega,\tilde w):=2\lbrack C(\Omega, \tilde w)\rbrack^2+2\cdot 2^{6k+2n-2} \vert\vert \tilde K\vert\vert_k^2 \,\,\vert\vert w\vert\vert_{2,k}^2 \le C_2(k,n,\Omega)\vert\vert w\vert\vert_{2,k}^2$.\\ As $\lbrack\lambda(V_{\epsilon})\rbrack^2\le \lambda(V_{\epsilon})$, taking $\epsilon=2^{-j}$, $j\in\mbox{I\hspace{-.15em}N}$,
summing on $j$ and setting $\Omega':=\bigcup_{j\in\mbox{I\hspace{-.15em}N}} V_{2^{-j}}$ we have:
\begin{equation} \label{artialtemperedchomolog56laplacientemperate4sdghx2311UVWX} \int_{\Omega'}\vert K\star\tilde w\vert^2\lbrack d_{\Omega}(x)\rbrack^{2k+2n-2}d\lambda\le C_1(\Omega,\tilde w) \lbrack\lambda(\Omega')+2\rbrack. \end{equation} Taking $\epsilon=\frac{3}{4}2^{-j}$, $j\in\mbox{I\hspace{-.15em}N}$,
summing on $j$ and setting $\Omega'':=\bigcup_{j\in\mbox{I\hspace{-.15em}N}} V_{\frac{3}{4}2^{-j}}$ we have:
\begin{equation} \label{artialtemperedchomolog56laplacientemperate4sdghx2312UVWX} \int_{\Omega''}\vert K\star\tilde w\vert^2\lbrack d_{\Omega}(x)\rbrack^{2k+2n-2}d\lambda\le C_1(\Omega,\tilde w) \lbrack\lambda(\Omega'')+2\rbrack. \end{equation} Given $j\in\mbox{I\hspace{-.15em}N}$, the set $K_{2^{-j}}\setminus \Omega_{2^{-j}}:=\lbrace x\in\Omega\,\vert\, d_{\Omega}(x)=2^{-j}\rbrace$
is a subset of $\Omega''$ so that
$\Omega\subset\Omega'\cup\Omega''$. Summing (\ref{artialtemperedchomolog56laplacientemperate4sdghx2311UVWX}) and (\ref{artialtemperedchomolog56laplacientemperate4sdghx2312UVWX}) we finally have:
\begin{equation} \label{artialtemperedchomolog56laplacientemperate4sdghx23113UVWX} \int_{\Omega}\vert K\star\tilde w\vert^2\lbrack d_{\Omega}(x)\rbrack^{2k+2n-2}d\lambda\le 2 C_1(\Omega,\tilde w) \lbrack\lambda(\Omega)+2\rbrack, \end{equation}
i.e. $\vert\vert v'\vert\vert_{k+n-1}^2\le
2 C_1(\Omega,\tilde w)\lbrack\lambda(\Omega)+2\rbrack$. \end{proof}
We can now prove the following slight extension of theorem \ref{partialtemperedchomology}, the aim of which is to iterate the construction made in $\bf \mbox{I\hspace{-.47em}C}^n$. \begin{proposition} \label{partialtemperedchomologyescrtyX}
Let $\Omega$ be a bounded Stein open subset of $\bf \mbox{I\hspace{-.47em}C}^n$ and $f$ be a given current of bidegree $(p,q+1)$ on $\bf \mbox{I\hspace{-.47em}C}^n$ with compact support which is $\bar\partial$-closed on $\Omega$ and can be written $f=g+h$
with $g_{\vert\Omega}\in L^{2,k}_{(p,q+1)}(\Omega)$ and $h\in H^{-s}_{(p,q+1)}(\bf \mbox{I\hspace{-.47em}C}^n)$ for some $s\le k$. Then there exists a current $w=u+v$ of bidegree $(p,q)$ (with compact support) in $\bf \mbox{I\hspace{-.47em}C}^n$ such that: \begin{equation} \bar\partial w=f, \end{equation} in $\Omega$,\\ with $w=u+v\in H_{(p,q)}^{-k-5n-2}(\bf \mbox{I\hspace{-.47em}C}^n)$,
$u_{\vert\Omega}\in L_{(p,q)}^{2,k+4n+1}(\Omega)$
and $v\in H_{(p,q)}^{-s+1}(\bf \mbox{I\hspace{-.47em}C}^n)$. \end{proposition}
Let us observe that $v$ has better regularity than $h$, while $u$ still has polynomial $L^2$ growth in $\Omega$, as $g$ does (even if its growth is larger than that of $g$).
\begin{proof}
Using extension theorem \ref{w(z)=w(zeta),rhoepsilon9on} (remark \ref{observethattheextensiontildew1}),
we can assume that $g$ and therefore $f$ are in $H^{-k-n-1}_{(p,q+1)}(\bf \mbox{I\hspace{-.47em}C}^n)$. At first we have to solve: \begin{equation}\label{beanopenpseudoconvex10proposi} \frac{1}{2}\Delta v= \bar\partial^{\star} f=\bar\partial^{\star} g+\bar\partial^{\star} h. \end{equation} We solve separately the equations $\frac{1}{2}\Delta v_1=\bar\partial^{\star} g$ in $\Omega$ and $\frac{1}{2}\Delta v_2=\bar\partial^{\star} h$ in $\bf \mbox{I\hspace{-.47em}C}^n$. Using lemma \ref{w(z)=w(zeta),rhoepsilon9on23avril18WX} (in ${\bf \mbox{I\hspace{-.17em}R}}^{2n}$) we can at first find $v_1\in L_{(p,q)}^{2,k+2n-1}(\Omega)$ and then using theorem \ref{w(z)=w(zeta),rhoepsilon9on} (remark \ref{observethattheextensiontildew1})
we can find an extension of $v_1$ to $\bf \mbox{I\hspace{-.47em}C}^n$ which is a distribution in $H^{-k-3n}(\bf \mbox{I\hspace{-.47em}C}^n)$. We set $v:=v_1+v_2$. As $h\in H^{-s}(\bf \mbox{I\hspace{-.47em}C}^n)$, $v_2$ is in $H^{-s+1}(\bf \mbox{I\hspace{-.47em}C}^n)$ so that $v:=v_1+v_2\in H_{(p,q)}^{-k-3n}(\bf \mbox{I\hspace{-.47em}C}^n)$. $f-\bar\partial v\in H_{(p,q+1)}^{-k-3n-1}(\bf \mbox{I\hspace{-.47em}C}^n)$ is harmonic in $\Omega$ and therefore (using lemma \ref{LetwbeasdistributiononRoforderk}) $f-\bar\partial v$ is in $ L_{(p,q+1)}^{2,k+4n+1}(\Omega)$. Finally corollary \ref{beanopenpseudoconvex3} gives a solution $u\in L_{(p,q)}^{2,k+4n+1}(\Omega)$ of the equation $\bar\partial u=f-\bar\partial v$ in $\Omega$. Setting $w=u+v$, we get $\bar\partial w=f$ in $\Omega$. Using again theorem \ref{w(z)=w(zeta),rhoepsilon9on} (remark \ref{observethattheextensiontildew1}) we get an extension of $u$ in $H_{(p,q)}^{-k-5n-2}(\bf \mbox{I\hspace{-.47em}C}^n)$
so that $w\in H_{(p,q)}^{-k-5n-2}(\bf \mbox{I\hspace{-.47em}C}^n)$. \end{proof} We need the following lemma, which compares the several distances to the boundary that we have to consider. \begin{lemma} \label{partialtemperedchomologyescrtynlminX} Let $\Omega$ and $\Omega_j$ be two bounded open subsets of the complex Riemannian manifold $X$ such that $\Omega\cap\Omega_j\neq\emptyset$ and $u\in L^{2,k}_{(p,q)}(\Omega)$. Then:
\begin{equation}\label{avedOmegacapegageminpsijwedgeX} \int_{\Omega\cap\Omega_j}\vert u\vert^2 \lbrack d_{\Omega\cap\Omega_j}(z)\rbrack^{2k}d\lambda\le \int_{\Omega}\vert u\vert^2 \lbrack d_{\Omega}(z)\rbrack^{2k}d\lambda. \end{equation} If $\psi_j\in\mathcal D(\Omega_j)$, and $u\in L^{2,k}_{(p,q)}(\Omega\cap\Omega_j)$ then: \begin{equation}\label{avedOmegacapegageminpsijghtkX} \int_{\Omega}\vert \psi_j u\vert^2 \lbrack d_{\Omega}(z)\rbrack^{2k}d\lambda\le\epsilon_j^{-2k} \int_{\Omega\cap\Omega_j}\vert \psi_j u\vert^2 \lbrack d_{\Omega\cap\Omega_j}(z)\rbrack^{2k}d\lambda, \end{equation} and \begin{equation}\label{avedOmegacapegageminpsijghtk1X} \int_{\Omega}\vert \bar\partial\psi_j\wedge u\vert^2 \lbrack d_{\Omega}(z)\rbrack^{2k}d\lambda\le\epsilon_j^{-2k} \int_{\Omega\cap\Omega_j}\vert \bar\partial\psi_j\wedge u\vert^2 \lbrack d_{\Omega\cap\Omega_j}(z)\rbrack^{2k}d\lambda, \end{equation}
with $\epsilon_j:=\min_{z\in Supp\,\psi_j} d_{\Omega_j}(z)$.
\end{lemma} Let us observe that $Supp\,(\psi_j u)\subset \Omega\cap \Omega_j$ and $Supp\,(\bar\partial\psi_j\wedge u)\subset \Omega\cap \Omega_j$, so that integrating $\psi_j u$ (resp. $\bar\partial\psi_j\wedge u$) on $\Omega$ or on $\Omega\cap \Omega_j$ gives the same result, but we will later need to consider this extension of $\psi_j u$ to $\Omega$ (by zero outside its support).
\begin{proof} For $z\in\Omega\cap\Omega_j$,
we have: $d_{\Omega\cap\Omega_j}(z)=\min(d_{\Omega}(z), d_{\Omega_j}(z))\le d_{\Omega}(z)$ and therefore: \begin{equation}\label{avedOmegacapegageminpsij1ome} \int_{\Omega\cap \Omega_j}\vert u\vert^2\lbrack d_{\Omega\cap\Omega_j}(z)\rbrack^{2k}d\lambda \le\int_{\Omega}\vert u\vert^2\lbrack d_{\Omega}(z)\rbrack^{2k}d\lambda \end{equation}
On the other hand, we define $\epsilon_j:=\min_{z\in Supp\,\psi_j} d_{\Omega_j}(z)=\min_{z\in Supp\,\psi_j} d(z,\partial\Omega_j)$
($0<\epsilon_j\le 1$)
so that for $z\in Supp\, \psi_j$, we have: $d_{\Omega\cap\Omega_j}(z)=\min (d_{\Omega}(z),d_{\Omega_j}(z))\ge\min (d_{\Omega}(z),\epsilon_j)\ge\epsilon_j d_{\Omega}(z)$ (as $d_{\Omega}(z)\le 1$)
and then: \begin{equation}\label{avedOmegacapegageminpsijerlvrty} \int_{\Omega\cap \Omega_j}\vert \psi_j u\vert^2 \lbrack d_{\Omega}(z)\rbrack^{2k}d\lambda \le \epsilon_j^{-2k} \int_{\Omega\cap \Omega_j}\vert \psi_j u\vert^2 \lbrack d_{\Omega\cap\Omega_j}(z)\rbrack^{2k}d\lambda \end{equation} and : \begin{equation}\label{avedOmegacapegageminpsijerlvrtyvariante} \int_{\Omega\cap \Omega_j}\vert \bar\partial\psi_j\wedge u\vert^2 \lbrack d_{\Omega}(z)\rbrack^{2k}d\lambda \le \epsilon_j^{-2k} \int_{\Omega\cap \Omega_j}\vert\bar\partial \psi_j\wedge u\vert^2 \lbrack d_{\Omega\cap\Omega_j}(z)\rbrack^{2k}d\lambda. \end{equation}
\end{proof}
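For completeness, the elementary inequality $\min (d_{\Omega}(z),\epsilon_j)\ge\epsilon_j\, d_{\Omega}(z)$ used in the proof above can be checked by distinguishing two cases:
\begin{align*}
d_{\Omega}(z)\le\epsilon_j &\;\Longrightarrow\; \min (d_{\Omega}(z),\epsilon_j)=d_{\Omega}(z)\ge\epsilon_j\, d_{\Omega}(z) &&\text{(as $\epsilon_j\le 1$)},\\
d_{\Omega}(z)>\epsilon_j &\;\Longrightarrow\; \min (d_{\Omega}(z),\epsilon_j)=\epsilon_j\ge\epsilon_j\, d_{\Omega}(z) &&\text{(as $d_{\Omega}(z)\le 1$)}.
\end{align*}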
We can now prove the following result by the same reasoning as in the case of $\bf \mbox{I\hspace{-.47em}C}^n$. Moreover we consider currents in $\mathcal D'_{(p,q)}(X,F)$ with values in a given holomorphic vector bundle $F$ (to simplify we only consider currents in $H^{-k}(X,F)$, $k\in\mbox{I\hspace{-.15em}N}$).
\begin{theorem} \label{partialtemperedchomologyStein2}
Let $\Omega$ be a relatively compact open Stein subset of a Stein manifold $X$ and $F$ be a given Hermitian holomorphic vector bundle on $X$. Then for every current $f$ of bidegree $(p,q+1)$ on $X$ with values in $F$ (and with compact support in $X$) which is $\bar\partial$-closed on $\Omega$,
there exists a current $w$ of bidegree $(p,q)$ on $X$ with values in $F$ (with compact support) such that: \begin{equation} \label{artialtemperedchomolog56} \bar\partial w=f, \end{equation} in $\Omega$. Moreover if $f$ is in $H^{-k}_{(p,q+1)}(X,F)$ for some $k\in\mbox{I\hspace{-.15em}N}$, we can find a solution $w$ in $H_{(p,q)}^{-n-1-r}(X,F)$ with $r=k(4n+2)-3n-1$, more precisely $w=u+v$, $u_{\vert\Omega}\in L_{(p,q)}^{2,r}(\Omega,F)$, $u\in H_{(p,q)}^{-n-1-r}(X,F)$
and $v\in H_{(p,q)}^{-k+1}(X,F)$. \end{theorem}
\begin{proof}
Let us assume that $f\in H^{-k}_{(p,q+1)}(X, F,loc)$. We will prove that we can find a solution $u\in H^{-n-1-r}_{(p,q)}(X, F,loc)$.\\ By considering local charts of $X$, we will locally reduce the problem to the case of $\bf \mbox{I\hspace{-.47em}C}^n$ and, using a partition of unity, we will patch together these local solutions to obtain a first approximate global solution $\tilde w_1$ such that $f_1:=f-\bar\partial \tilde w_1$ has in some sense better regularity than $f$; we then iterate the construction (replacing $f$ by $f_1$) until $f_1$ becomes regular enough that we can use J-P. Demailly's $L^2$-estimate for $\bar\partial$
(i.e. until we have found in the temperate cohomology class of $f$ a new current $f_1$ such that ${f_1}_{\vert\Omega}\in L^{2,k+r}_{(p,q+1)}(\Omega, F)$ for some $r$ large enough).\\
Using local charts on $X$ and Borel-Lebesgue lemma,
we can find a finite open covering of the compact set $\bar\Omega$
by relatively compact open subsets $\Omega_j$ of $X$, $1\le j\le N$ such that every $\bar \Omega_j$ is contained in a geodesic chart for the given Riemannian metric and every $\Omega_j$ is biholomorphic to a bounded open ball $U_j:=B_j(z_j, r_j)$ of $\bf \mbox{I\hspace{-.47em}C}^n$, by a local biholomorphic map $\phi_j$ defined on a neighborhood of $\bar\Omega_j\subset X$ and taking its values into $\bf \mbox{I\hspace{-.47em}C}^n$ ($z_j\in\phi_j(\bar\Omega)$, $r_j>0$). Moreover we can also suppose (by shrinking enough each $\Omega_j$)
that the exponential map sending
the tangent space $T_{z_j} X$ (of $X$ at $z_j$) into $X$ is a diffeomorphism of an open ball in $T_{z_j} X$ onto a geodesic open ball of center $z_j$ containing $\bar\Omega_j$ so that the geodesic distance and the Euclidian distance coming from $\bf \mbox{I\hspace{-.47em}C}^n$ (by means of $\phi_j$) are equivalent on a neighborhood of $\bar\Omega_j$
and so that the spaces $L^{2,k}_{(p,q)}(\Omega_j\cap\Omega, F)$ ($k\in \mbox{I\hspace{-.15em}N}$) associated with the geodesic distance to $\partial(\Omega_j\cap\Omega)$ or with the Euclidian distance to $\partial(\Omega_j\cap\Omega)$ coming from $\bf \mbox{I\hspace{-.47em}C}^n$ (by means of $\phi_j$) are the same. Finally we can also suppose that the given holomorphic vector bundle $F$ is trivial on a neighborhood of each $\bar\Omega_j$.\\ For every bounded Stein open subset $\phi_j(\Omega\cap\Omega_j)\subset \phi_j(\Omega_j)=:U_j\subset\bf \mbox{I\hspace{-.47em}C}^n$ we use the construction made in the case of $\bf \mbox{I\hspace{-.47em}C}^n$ (\emph{i.e.} remark \ref{Proofoftheorem2remarkprecise}, $F$ is trivial on a neighborhood of $\bar\Omega_j$) so that we can construct $u_j\in H^{-k-2n-1}_{(p,q)}(\Omega_j,F)$ with ${u_j}_{\vert\Omega\cap\Omega_j}\in L^{2,k+n}_{(p,q)}(\Omega_j\cap\Omega, F)$ and (it is the key point) $v_j\in H^{-k+1}_{(p,q)}(\Omega_j, F)$
such that $w_j:=u_j+v_j$ is a solution of $f=\bar\partial w_j=\bar\partial (u_j+ v_j)$ in $\Omega\cap\Omega_j$ and $w_j\in H^{-k-2n-1}_{(p,q)}(\Omega_j, F)$.\\
Let $(\psi_j)_{1\le j\le N}$, with $\psi_j\in\mathcal D(\Omega_j)$ and $\psi_j\ge 0$, be a partition of unity such that $\sum_{j=1}^{j=N} \psi_j =1$ on a neighborhood of $\bar\Omega$.
Then $\psi_j w_j\in H^{-k-2n-1}_{(p,q)}(X, F)$
and a key point is that ${\psi_j u_j}_{\vert\Omega}\in L^{2,k+n}_{(p,q)}(\Omega, F)$ for the distance $d_{\Omega}(z)$ using lemma \ref{partialtemperedchomologyescrtynlminX} inequality (\ref{avedOmegacapegageminpsijghtkX})
(we apply it with $u=u_j$) and then for $i\neq j$, ${\psi_j u_j}_{\vert\Omega\cap\Omega_i}\in L^{2,k+n}_{(p,q)}(\Omega\cap\Omega_i, F)$ for the distance $d_{\Omega\cap\Omega_i}(z)$ using lemma \ref{partialtemperedchomologyescrtynlminX} inequality (\ref{avedOmegacapegageminpsijwedgeX}) (we apply it with $u=\psi_j u_j$ and with $\Omega_i$ instead of $\Omega_j$).
Particularly $({\sum_{j=1}^{j=N}\psi_j u_j})_{\vert\Omega}\in L^{2,k+n}_{(p,q)}(\Omega, F)$ for the distance $d_{\Omega}(z)$ and for all $1\le i\le N$, $({\sum_{j=1}^{j=N}\psi_j u_j})_{\vert\Omega\cap\Omega_i}\in L^{2,k+n}_{(p,q)}(\Omega\cap\Omega_i, F)$ for the distance $d_{\Omega\cap\Omega_i}(z)$.
Gluing together the local solutions $w_j$, we define $\tilde w_1 =\sum_{j=1}^{j=N}\psi_j w_j=\tilde u_1+\tilde v_1$, with $\tilde u_1 =\sum_{j=1}^{j=N}\psi_j u_j$ and $\tilde v_1 =\sum_{j=1}^{j=N}\psi_j v_j$
so that $\tilde w_1\in H^{-k-2n-1}_{(p,q)}(X, F)$, $\tilde {u_1}_{\vert\Omega}\in L^{2,k+n}_{(p,q)}(\Omega, F)$, $\tilde v_1\in H^{-k+1}_{(p,q)}(X, F)$ and we obtain: \begin{equation}\label{artialtemperedchomolog5674} \bar\partial \tilde w_1= \sum_{j=1}^{j=N}\psi_j\bar\partial w_j+\sum_{j=1}^{j=N}(\bar\partial \psi_j)\wedge w_j. \end{equation}
We have: $\bar\partial w_j=f$ in $\Omega\cap\Omega_j$ and $Supp\,\psi_j\subset \Omega_j$ so that equation (\ref{artialtemperedchomolog5674}) implies in $\Omega$ : \begin{equation}\label{artialtemperedchomolog5684} \bar\partial\tilde w_1=(\sum_{j=1}^{j=N}\psi_j)f+\sum_{j=1}^{j=N}\bar\partial\psi_j\wedge w_j, \end{equation}
or (in $\Omega$) : \begin{equation}\label{artialtemperedchomolog5684parz} \bar\partial\tilde w_1=f+\sum_{j=1}^{j=N}\bar\partial\psi_j\wedge w_j. \end{equation} We set: $f_1:=-\sum_{j=1}^{j=N}\bar\partial\psi_j\wedge w_j$, $g_1:=-\sum_{j=1}^{j=N}\bar\partial\psi_j\wedge u_j$ and $h_1:=-\sum_{j=1}^{j=N}\bar\partial\psi_j\wedge v_j$,
so that we obtain: \begin{equation}\label{artialtemperedchomolog5694} f_1=f-\bar\partial \tilde w_1=g_1+h_1. \end{equation}
$f_1$ is $\bar\partial$-closed on $\Omega$ and we have $f_1:=g_1+h_1$ with $g_1\in H^{-k-2n-1}_{(p,q+1)}(X, F)$ and $h_1\in H^{-k+1}_{(p,q+1)}(X, F)$ (as each $v_j\in H^{-k+1}_{(p,q)}(\Omega_j, F)$). Moreover according to lemma \ref{partialtemperedchomologyescrtynlminX}, ${g_1}_{\vert\Omega}\in L^{2,k+n}_{(p,q+1)}(\Omega, F)$ for the distance $d_{\Omega}(z)$ (inequality (\ref{avedOmegacapegageminpsijghtk1X})) and for all $i$,
${g_1}_{\vert\Omega\cap\Omega_i}\in L^{2,k+n}_{(p,q+1)}(\Omega\cap\Omega_i, F)$,
for the distance $d_{\Omega\cap\Omega_i}(z)$
(inequality (\ref{avedOmegacapegageminpsijwedgeX})
applied in $\Omega_i$ to each form $\bar\partial\psi_j \wedge u_j$ of bidegree $(p,q+1)$ with $j\neq i$). Let us observe that $g_1$ and $h_1$ are not necessarily $\bar\partial$-closed (that is the difficulty). The key point is that $f_1$ is better than $f$ in the sense that $h_1\in H^{-k+1}_{(p,q+1)}(X, F)$ has better regularity than $f\in H^{-k}_{(p,q+1)}(X, F)$ and $g_1$ has $L^2$ polynomial growth in $\Omega$.\\ We have built a first approximate global solution $\tilde w_1$ such that $f_1:=f-\bar\partial \tilde w_1$ is better than $f$. Applying proposition \ref{partialtemperedchomologyescrtyX} and lemma \ref{partialtemperedchomologyescrtynlminX} to each Stein open subset $\phi_j(\Omega\cap\Omega_j)\subset U_j\subset\bf \mbox{I\hspace{-.47em}C}^n$, we can iterate this construction and construct a finite sequence of currents on $X$: $f_{l+1}=g_{l+1}+h_{l+1}=f_l-\bar\partial \tilde w_{l+1}$, $l\in\mbox{I\hspace{-.15em}N}$, $f_0:=f$, such that $h_{l+1}\in H^{-k+l+1}_{(p,q+1)}(X, F)$ has strictly better regularity than $h_l\in H^{-k+l}_{(p,q+1)}(X, F)$ and ${g_{l+1}}_{\vert\Omega}\in L^{2,k+n+l(4n+1)}_{(p,q+1)}(\Omega, F)$
still has $L^2$ polynomial growth in $\Omega$; moreover $g_{l+1}\in H^{-k-2n-1-l(4n+1)}_{(p,q+1)}(X, F)$. Given $f_l=g_l+h_l$, we apply proposition \ref{partialtemperedchomologyescrtyX} to each $\phi_j(\Omega\cap\Omega_j)\subset\bf \mbox{I\hspace{-.47em}C}^n$ and construct a solution $w_{l+1,j}=u_{l+1,j}+v_{l+1,j}$ of $\bar\partial w_{l+1,j}=f_l$ in $\Omega\cap\Omega_j\subset X$
($\Omega\cap\Omega_j$ is biholomorphic to $\phi_j(\Omega\cap\Omega_j)$).\\ We set $\tilde w_{l+1}:=\sum_{j=1}^{j=N}\psi_j w_{l+1,j}=\tilde u_{l+1} + \tilde v_{l+1}$
with $\tilde u_{l+1}:=\sum_{j=1}^{j=N}\psi_j u_{l+1,j}$,
$\tilde v_{l+1}:=\sum_{j=1}^{j=N}\psi_j v_{l+1,j}$ and then $f_{l+1}:=f_l-\bar\partial \tilde w_{l+1}=-\sum_{j=1}^{j=N}\bar\partial\psi_j\wedge w_{l+1,j}=g_{l+1}+h_{l+1}$,
with $g_{l+1}:=-\sum_{j=1}^{j=N}\bar\partial\psi_j\wedge u_{l+1,j}$ and $h_{l+1}:=-\sum_{j=1}^{j=N}\bar\partial\psi_j\wedge v_{l+1,j}$.\\ The estimates we need to iterate are straightforward consequences of proposition
\ref{partialtemperedchomologyescrtyX} (we use it at step $l\ge1$, replacing $k$ by $k+n+(l-1)(4n+1)$ and $s$ by $k-l$) and lemma \ref{partialtemperedchomologyescrtynlminX}. Of course the polynomial $L^2$ growth of $g_{l+1}$ is larger than that of $g_l$ but it doesn't matter as we are able to solve the $\bar\partial$ equation in $\Omega$ for any $L^2$ polynomial growth.\\ Finally for $l=k$ (it is the last key point) $h_{k}\in H_{(p,q+1)}^{0}(X, F)=L_{(p,q+1)}^{2,0}(X, F)$, particularly ${h_{k}}_{\vert\Omega}\in L_{(p,q+1)}^{2,0}(\Omega, F)$, so that $f_{k}$ verifies the $L^2$-estimate we need: $f_{k}=g_{k}+h_{k}$ has polynomial $L^2$ growth in $\Omega$: $\int_{\Omega}\vert f_{k}\vert^2 \lbrack d_{\Omega}(z)\rbrack^{2r} d\lambda<+\infty$
with $r=k+n+(k-1)(4n+1)=k(4n+2)-3n-1$.
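The identity defining $r$ is a direct expansion:
\begin{align*}
k+n+(k-1)(4n+1) &= k+n+4nk+k-4n-1\\
&= (4n+2)k-3n-1\\
&= k(4n+2)-3n-1.
\end{align*}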
We can now use J-P. Demailly's theorem \ref{beanopenpseudoconvex3SteinDem12} to solve in $\Omega$ the equation $\bar\partial \tilde w_{k+1}=f_{k}$
with $\tilde w_{k+1}\in L^{2,r}_{(p,q)}(\Omega,F)$ so that we obtain: $f= \bar\partial \tilde w$ with $\tilde w:= \tilde w_{k+1}+\Sigma_{l=1}^{l=k} \,\tilde w_{l}=u+v$,
$u:= \tilde w_{k+1}+\Sigma_{l=1}^{l=k} \,\tilde u_{l}$ and $v:= \Sigma_{l=1}^{l=k} \,\tilde v_{l}$, $u\in L^{2,r}_{(p,q)}(\Omega, F)$, $v\in H^{-k+1}_{(p,q)}(X,F)$. Using theorem \ref{w(z)=w(zeta),rhoepsilon9on} we get an extension of $u$ in $H_{(p,q)}^{-n-1-r}(X,F)$
so that $\tilde w\in H_{(p,q)}^{-n-1-r}(X,F)$. \end{proof}
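With the notation of the proof above, the regularity bookkeeping of the iteration can be summarised as follows, for $1\le l\le k$:
\begin{align*}
h_l\in H^{-k+l}_{(p,q+1)}(X, F), \qquad {g_l}_{\vert\Omega}\in L^{2,\,k+n+(l-1)(4n+1)}_{(p,q+1)}(\Omega, F),
\end{align*}
so that for $l=k$ we have $h_{k}\in H^{0}_{(p,q+1)}(X, F)=L^{2,0}_{(p,q+1)}(X, F)$ and $f_{k}=g_{k}+h_{k}$ has polynomial $L^2$ growth in $\Omega$ with exponent $r=k+n+(k-1)(4n+1)=k(4n+2)-3n-1$.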
As explained in P. Schapira's article [Scha2020] (remark 2.3.4), theorem \ref{partialtemperedchomologyStein2} implies the following result
(theorem 2.3.3 in [Scha2020]). We refer to [Scha2020] for the definitions of the (derived) sheaf $\mathcal O_{X_{\mathrm{sa}}}^{\mathrm {tp}}$ of temperate holomorphic functions (defined on the subanalytic site $X_{\mathrm{sa}}$)
and of the other objects associated with them.
\begin{theorem}[P. Schapira] \label{partialtemperedchomologyStein2PS}
Let $X$ be a complex Stein manifold and let $\Omega$ be a subanalytic relatively compact Stein open subset of $X$ contained in a Stein compact subset $K$ of $X$. Let $\mathcal F$ be a coherent $\mathcal O_X$-module defined on a neighborhood of $K$. Then $\mathrm{R}\Gamma(\Omega;\mathcal F^{\mathrm{tp}})$ is concentrated in degree 0.
\end{theorem}
\section*{References}
[Dem1982] Demailly, J.P., \textit{Estimations $\mathrm {L}^2$ pour l'op\'erateur $\bar{\partial }$ d'un fibr\'e vectoriel holomorphe semi-positif au-dessus d'une vari\'et\'e k\"ahl\'erienne compl\`ete} in \textit{Annales scientifiques de l' \'Ecole Normale Sup\'erieure} \textbf{15}, 457-511, (1982).
[Dem2012] Demailly, J.P., \textit{Complex Analytic and Differential Geometry}. Open Content Book. Chapter VIII, paragraph 6, Theorem 6.9, p. 379 (2012).
[Dol1956] Dolbeault, P., \textit{Formes diff\'erentielles et cohomologie sur une vari\'et\'e analytique complexe}, I, in \textit{Ann. of Math.} 64, 83-130, (1956); II, in \textit{Ann. of Math.} \textbf{65}, 282-330, (1957).
[Ele1975] Elencwajg, G., \textit{Pseudoconvexit\'e locale dans les vari\'et\'es k\"ahl\'eriennes}, in \textit{Annales de l'Institut Fourier}, \textbf{25}, 295-314, (1975).
[H\"or1965] H\"ormander, L., $L^2$\textit{Estimates and Existence Theorem for the $\bar\partial$ \textit{Operator.}}, in \textit{Acta Math.}, \textbf{113}, 89-152 (1965).
[H\"or1966] H\" ormander, L., \textit{An Introduction to Complex Analysis in Several Variables,} (1966), 3d edition,
\textit{North Holland Math. Libr.}, Vol.\textbf{7}, Amsterdam, London, (1990).
[H\"or1983] H\" ormander, L., \textit{The Analysis of Linear Partial Diffenrential Operators I},
\textit{Grundlehren der Mathematischen Wissenschhaften}, Vol. \textbf{256}, Springer Verlag, Berlin, Heidelberg, New York, Tokyo (1983).
[KS1996] Kashiwara, M. and Schapira, P., \textit{Moderate and Formal Cohomology associated with Constructible Sheaves}, in \textit{M\'emoires Soc. Math. France}, Vol. \textbf{64}, 76 pp, (1996).
[Le1964] Lelong, P., \textit{Fonctions enti\`eres (n variables) et fonctions plurisousharmoniques d'ordre fini dans $\bf \mbox{I\hspace{-.47em}C}^n$}, \textit{Journal d'Analyse (Jerusalem)}, \textbf{12}, 365-406 (1964).
[Scha2017] Schapira, P., \textit{Microlocal Analysis and beyond,} in \textit{arXiv}. 1701.08955 [math] (Pdf) (2017)
[Scha2020] Schapira, P., \textit{Vanishing of Temperate Cohomology on Complex Manifolds}, in \textit{arXiv.} 2003.11361 [math CV] (Pdf) (2020)
[Schw1950] Schwartz, L., \textit{Th\'eorie des distributions.} Tome I, in \textit{Actualit\'es Sci. Ind.}, vol. 1091, Publ. Inst. Math. Univ. Strasbourg, no IX, Paris: Hermann \& Cie., (1950), 2e \'ed. (1957); r\'e\'ed. (1966).
[Siu1970] Siu, Y.T., \textit{Holomorphic Functions of Polynomial Growth on Bounded Domains.}, in \textit{Duke Mathematical Journal}, \textbf{37}, 77-84, (1970).
[Sk1971] Skoda, H., \textit{$d^{\prime \prime } $-cohomologie \`a croissance lente dans C$^{n}$}, in \textit{Annales scientifiques de l'\'Ecole Normale Sup\'erieure,} S\'erie 4, Tome 4, no. 1, 97-120 (1971).
[Sk2020] Skoda, H., \textit{A Dolbeault lemma for temperate currents}, in \textit{arXiv.} 2003.11437 v2 [math CV] (Pdf) (2020)
[Weil1958] Weil, A., \textit{Introduction \`a l'\'etude des vari\'et\'es k\"ahl\'eriennes}, \textit{Actualit\'es scientifiques et industrielles}, 1267, Hermann, Paris, (1958).
\end{document}
\begin{document}
\pagestyle{plain}
\title{Optimal Reachability in Divergent Weighted Timed Games\thanks{The first author has been supported by ENS Cachan, Universit\'e Paris-Saclay. This work has been funded by the DeLTA project (ANR-16-CE40-0007), and by the SoSI project (PEPS SISC CNRS).}}
\maketitle
\begin{abstract}
Weighted timed games are played by two players on a timed automaton
equipped with weights: one player wants to minimise the accumulated
weight while reaching a target, while the other has an opposite
objective. Used in a reactive synthesis perspective, this
quantitative extension of timed games allows one to measure the
quality of controllers. Weighted timed games are notoriously
difficult and quickly undecidable, even when restricted to
non-negative weights. Decidability results exist for subclasses of
one-clock games, and for a subclass with non-negative weights
defined by a semantical restriction on the weights of cycles. In
this work, we introduce the class of \emph{divergent weighted timed
games} as a generalisation of this semantical restriction to
arbitrary weights. We show how to compute their optimal value,
yielding the first decidable class of weighted timed games with
negative weights and an arbitrary number of clocks. In addition, we
prove that divergence can be decided in polynomial space. Last, we
prove that for untimed games, this restriction yields a class of
games for
which the value can be computed in polynomial time. \end{abstract}
\section{Introduction}
Developing programs that verify real-time specifications is notoriously difficult, because such programs must take care of delicate timing issues, and are difficult to debug a posteriori. One research direction to ease the design of real-time software is to automatise the process. We model the situation into a timed game, played by a \emph{controller} and an antagonistic \emph{environment}: they act, in a turn-based fashion, over a \emph{timed
automaton}~\cite{AD94}, namely a finite automaton equipped with real-valued variables, called clocks, evolving with a uniform rate. A usual objective for the controller is to reach a target. We are thus looking for a \emph{strategy} of the controller, that is a recipe dictating how to play (timing delays and transitions to follow), so that the target is reached no matter how the environment plays. Reachability timed games are decidable~\cite{AsaMal99}, and \EXP-complete~\cite{JurTri07}.
If the controller has a winning strategy in a given reachability timed game, several such winning strategies could exist. Weighted extensions of these games have been considered in order to measure the quality of the winning strategy for the controller~\cite{BCFL04,ABM04}. This means that the game now takes place over a \emph{weighted (or priced)
timed automaton}~\cite{BehFeh01,AluLa-04}, where transitions are equipped with weights, and states with rates of weights (the cost is then proportional to the time spent in this state, with the rate as proportional coefficient). While solving weighted timed automata has been shown to be \PSPACE-complete~\cite{BouBri07} (i.e.\ the same complexity as the non-weighted version), weighted timed games are known to be undecidable~\cite{BBR05}. This has led to many restrictions in order to regain decidability, the first and most interesting one being the class of strictly non-Zeno cost with only non-negative weights (in transitions and states)~\cite{BCFL04,ABM04}: this hypothesis states that every execution of the timed automaton that follows a cycle of the region automaton has a weight far from 0 (in interval $[1,+\infty)$, for instance).
Less is known for weighted timed games in the presence of negative weights in transitions and/or states. In particular, no results exist so far for a class that does not restrict the number of clocks of the timed automaton to 1. However, negative weights are particularly interesting from a modelling perspective, for instance in case weights represent the consumption level of a resource (money, energy\dots) with the possibility to spend and gain some resource. In this work, we introduce a generalisation of the strictly non-Zeno cost hypothesis in the presence of negative weights, that we call \emph{divergence}. We show the decidability of the class of divergent weighted timed games, with a $2$-\EXP\ complexity (and an $\EXP$-hardness lower bound). These complexity results match the ones that could be obtained in the non-negative case from the study of~\cite{BCFL04,ABM04}.
Payoffs other than the accumulated weight we study (i.e.\ total payoff) have been considered for weighted timed games. For instance, energy and mean-payoff timed games have been introduced in~\cite{BreCas14}. They are also undecidable in general. Interestingly, a subclass called \emph{robust timed games}, not far from our divergence hypothesis, admits decidability results. A weighted timed game is robust if, roughly speaking, every simple cycle (cycle without repetition of a state) has non-negative weight or weight less than a constant $-\varepsilon$. Solving robust timed games can be done in \EXPSPACE, and is \EXP-hard. Moreover, deciding if a weighted timed game is robust has complexity $2$-\EXPSPACE\ (and is $\coNEXP$-hard). In contrast, we show that deciding the divergence of a weighted timed game is a $\PSPACE$-complete problem.\footnote{Whereas all divergent weighted timed games are robust, the
converse may not be true, since it is possible to mix positive and
negative simple cycles in an SCC.} In terms of modelling power, we do believe that divergence is sufficient for most cases. It has to be noted that extending our techniques and results to the case of robust timed games is intrinsically impossible: indeed, the value problem for this class is undecidable~\cite{BJM15}.
The property of divergence is also interesting in the absence of time. Indeed, weighted games with reachability objectives have been recently explored as a refinement of mean-payoff games~\cite{BGHM15,BGHM16}. A pseudo-polynomial time (i.e.\ polynomial if weights are encoded in unary) procedure has been proposed to solve them, and they are at least as hard as mean-payoff games. In this article, we also study divergent weighted games, and show that they are the first non-trivial class of weighted games with negative weights solvable in polynomial time. Table~\ref{tab:summary} summarises our results. We start in Sections~\ref{sec:weighted-games} and \ref{sec:solving-divergent-weighted-games} by studying weighted (untimed) games, before considering the timed setting in Sections~\ref{sec:weighted-timed-games} and \ref{sec:solving-divergent-weighted-timed-games}.
\begin{table}[tbp]
\centering
\caption{Deciding weighted (timed) games with arbitrary weights}
\label{tab:summary}
\begin{tabular}{|c|c|c|c|}
\hhline{~|*{3}{-|}}
\multicolumn{1}{c|}{} & {\cellcolor{gray!20}}Value of a game
& \cellcolor{gray!20}Value of a divergent game
& \cellcolor{gray!20}Deciding the divergence \\
\hline
\cellcolor{gray!20}Untimed & pseudo-poly. \cite{BGHM16} &
\P-complete
& \NL-complete (unary), \P (binary) \\\hline
\cellcolor{gray!20}Timed & Undecidable \cite{BBR05} & 2-\EXP, \EXP-hard & \PSPACE-complete\\\hline
\end{tabular} \end{table}
\section{Weighted games}\label{sec:weighted-games}
We start our study with untimed games. We consider two-player turn-based games played on weighted graphs and denote the two \emph{players} by $\MaxPl$ and $\MinPl$. A \emph{weighted game}\footnote{Weighted games are called \emph{min-cost
reachability games} in \cite{BGHM16}.} is a tuple $\game=\langle \Vertices=\VerticesMin\uplus \VerticesMax,\VerticesT,\alphabet, \Edges,\weight\rangle$ where $\Vertices$ are vertices, partitioned into vertices belonging to \MinPl (\VerticesMin) and \MaxPl (\VerticesMax), $\VerticesT\subseteq \VerticesMin$ is a subset of target vertices for player $\MinPl$, $\alphabet$ is an alphabet, $\Edges\subseteq \Vertices\times\alphabet\times \Vertices$ is a set of directed edges, and $\weight\colon \Edges \to \Z$ is the weight function, associating an integer weight with each edge. These games need not be finite in general, but in Sections~\ref{sec:weighted-games} and \ref{sec:solving-divergent-weighted-games}, we limit our study to the resolution of finite weighted games (where all previous sets are finite). We suppose that: \begin{inparaenum}[($i$)] \item the game is deadlock-free, i.e.\ for each vertex
$v\in \Vertices$, there is a letter $a\in \alphabet$ and a vertex
$v'\in \Vertices$, such that $(v,a,v')\in \Edges$; \item the game is deterministic, i.e.\ for each pair
$(v,a)\in \Vertices\times \alphabet$, there is at most one vertex
$v'\in\Vertices$ such that $(v,a,v')\in\Edges$.\footnote{Actions are usually not considered, but they become useful in the timed setting.} \end{inparaenum}
A \emph{finite play} is a finite sequence of edges $\play=v_0\xrightarrow{a_0}v_1\xrightarrow{a_1}\cdots \xrightarrow{a_{k-1}}v_k$, i.e.\ for all $0\leq i<k$,
$(v_i,a_i,v_{i+1})\in \Edges$. We denote by $|\play|$ the length $k$ of $\play$. We often write $v_0\xrightarrow{\play}v_k$ to denote that \play is a finite play from $v_0$ to $v_k$. The play \play is said to be a \emph{cycle} if $v_k=v_0$. We let $\FinitePlays_\game$ be the set of all finite plays in $\game$, whereas $\FinitePlaysMin_\game$ and $\FinitePlaysMax_\game$ denote the finite plays that end in a vertex of $\MinPl$ and $\MaxPl$, respectively. A \emph{play} is then an infinite sequence of consecutive edges.
A \emph{strategy} for $\MinPl$ (respectively, $\MaxPl$) is a mapping $\strat\colon \FinitePlaysMin_\game \to \alphabet$ (respectively, $\strat\colon \FinitePlaysMax_\game \to \alphabet$) such that for all finite plays $\play\in\FinitePlaysMin_\game$ (respectively, $\play\in\FinitePlaysMax_\game$) ending in vertex $v_k$, there exists a vertex $v'\in\Vertices$ such that $(v_k,\strat(\play),v')\in \Edges$. A play or finite play $\play = v_0\xrightarrow{a_0}v_1\xrightarrow{a_1}\cdots$ conforms to a strategy $\strat$ of $\MinPl$ (respectively, $\MaxPl$) if for all $k$ such that $v_k\in \VerticesMin$ (respectively, $v_k\in\VerticesMax$), we have that $a_{k} = \strat(v_0\xrightarrow{a_0}v_1\cdots v_k)$. A strategy $\strat$ is \emph{memoryless} if for all finite plays $\play, \play'$ ending in the same vertex, we have that $\strat(\play)=\strat(\play')$. For all strategies $\minstrategy$ and $\maxstrategy$ of players \MinPl and \MaxPl, respectively, and for all vertices~$v$, we let $\outcomes_\game(v,\maxstrategy,\minstrategy)$ be the outcome of $\maxstrategy$ and $\minstrategy$, defined as the unique play conforming to $\maxstrategy$ and $\minstrategy$ and starting in~$v$.
The objective of \MinPl is to reach a target vertex, while minimising the accumulated weight up to the target. Hence, we associate to every finite play $\play=v_0\xrightarrow{a_0}v_1 \ldots\xrightarrow{a_{k-1}}v_k$ its accumulated weight $\weight_\game(\play)=\sum_{i=0}^{k-1} \weight(v_i,a_i,v_{i+1})$. Then, the weight of an infinite play $\play=v_0\xrightarrow{a_0}v_1\xrightarrow{a_1}\cdots$, also denoted by $\weight_\game(\play)$, is defined by $+\infty$ if $v_k\notin \VerticesT$ for all $k\geq 0$, or the weight of $v_0\xrightarrow{a_0}v_1 \ldots\xrightarrow{a_{k-1}}v_k$ if $k$ is the first index such that $v_k\in\VerticesT$. Then, we let $\Val_\game(v,\minstrategy)$ and $\Val_\game(v,\maxstrategy)$ be the respective values of the strategies: \begin{align*} \Val_\game(v,\minstrategy) &= \sup_{\maxstrategy} \weight_\game(\outcomes(v,\maxstrategy,\minstrategy)) \\ \Val_\game(v,\maxstrategy) &= \inf_{\minstrategy} \weight_\game(\outcomes(v,\maxstrategy,\minstrategy))\,. \end{align*} Finally, for all vertices~$v$, we let $\lowervalue_\game(v) = \sup_{\maxstrategy} \Val_\game(v,\maxstrategy)$ and $\uppervalue_\game(v) = \inf_{\minstrategy} \Val_\game(v,\minstrategy)$ be the \emph{lower} and \emph{upper
values} of $v$, respectively. We may easily show that $\lowervalue_\game(v)\leq \uppervalue_\game(v)$ for all~$v$. We say that strategies $\minstrategy^\star$ of $\MinPl$ and $\maxstrategy^\star$ of $\MaxPl$ are optimal if, for all vertices~$v$, $\Val_\game(v,\maxstrategy^\star)=\lowervalue_\game(v)$ and $\Val_\game(v,\minstrategy^\star)=\uppervalue_\game(v)$, respectively. We say that a game $\game$ is \emph{determined} if for all vertices~$v$, its lower and upper values are equal. In that case, we write $\Val_\game(v)=\lowervalue_\game(v)=\uppervalue_\game(v)$, and refer to it as the \emph{value} of~$v$ in $\game$. Finite weighted games are known to be determined~\cite{BGHM16}. If the game is clear from the context, we may drop the index $\game$ from all previous notations.
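For concreteness, the accumulated weight of a finite play follows directly from this definition; the minimal sketch below (Python, with a hypothetical encoding of edges as triples and the weight function as a dictionary) illustrates it.

```python
# Minimal sketch (hypothetical encoding): a finite play is a list of edges
# (v, a, v2), and `weight` maps each edge to an integer.

def play_weight(play, weight):
    # Accumulated weight: the sum of the weights of the edges of the play.
    return sum(weight[e] for e in play)

# A play v0 --a--> v1 --b--> v0 of length 2: it is a cycle, of weight 2.
weight = {("v0", "a", "v1"): 3, ("v1", "b", "v0"): -1}
play = [("v0", "a", "v1"), ("v1", "b", "v0")]
assert len(play) == 2                  # |play| is the number of edges
assert play_weight(play, weight) == 2
```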
\paragraph{Problems} We want to compute the value of a \emph{finite} weighted game, as well as optimal strategies for both players, if they exist. The corresponding decision problem, called the \emph{value problem}, asks whether $\Val_\game(v) \leq \alpha$, given a finite weighted game $\game$, one of its vertices $v$, and a threshold $\alpha\in\Z\cup\{-\infty,+\infty\}$.
\paragraph{Related work} The value problem is a generalisation of the classical shortest path problem in a weighted graph to the case of two-player games. If all edge weights are non-negative, a generalised Dijkstra algorithm solves it in polynomial time~\cite{KBB+08}. In the presence of negative weights, a pseudo-polynomial-time (i.e.\ polynomial in the size of the game where weights are stored in unary) solution has been given in~\cite{BGHM16}, based on a fixed point computation with value iteration techniques. Moreover, the value problem with threshold $-\infty$ is shown there to be in $\NP\cap \coNP$, and as hard as solving mean-payoff games.
\section{Solving divergent weighted games} \label{sec:solving-divergent-weighted-games}
Our first contribution is to solve in polynomial time the value problem, for a subclass of finite weighted games that we call \emph{divergent}. To the best of our knowledge, this is the first attempt to solve a non-trivial class of weighted games with arbitrary weights in polynomial time. Moreover, the same core technique is used for the decidability result in the timed setting that we will present in the next sections. Let us first define the class of divergent weighted games:
\begin{definition}
A weighted game \game is divergent when every cycle \play of \game
satisfies $\weight(\play)\neq 0$. \end{definition}
Divergence is a property of the underlying weighted graph, independent of how vertices are distributed between the players. The term \emph{divergent} reflects that cycling in the game ultimately makes the accumulated weight grow in absolute value. We first formalise this intuition by analysing the strongly connected components (SCCs) of the graph structure of a divergent game (the distribution of vertices between the players does not matter for the SCC decomposition). Based on this analysis, we obtain the following results: \begin{theorem}
The value problem over finite divergent weighted games is a
\P-complete problem. Moreover, deciding if a given finite weighted
game is divergent is an \NL-complete problem when weights are
encoded in unary, and \P\ when they are encoded in binary. \end{theorem}
\paragraph{SCC analysis} A play \play in $\game$ is said to be positive (respectively, negative) if $\weight(\play)>0$ (respectively, $\weight(\play)<0$). It follows that a cycle in a divergent weighted game is either positive or negative. A cycle is said to be simple if no vertices are visited twice (except for the common vertex at the beginning and the end of the cycle). We will rely on the following characterisation of divergent games in terms of SCCs.
\begin{proposition}\label{prop:scc-sign}
A weighted game \game is divergent if and only if, in each SCC
of~\game, all simple cycles are either all positive, or all
negative. \end{proposition} \begin{proof}
Let us first suppose that \game is divergent. By contradiction,
consider a negative simple cycle \play (of weight $-p<0$) and a
positive simple cycle $\play'$ (of weight $p'>0$) in the same SCC.
Let $v$ and $v'$ be respectively the first vertices of \play and
$\play'$. By strong connectivity, there exists a finite play $\eta$
from $v$ to $v'$ and a finite play $\eta'$ from $v'$ to $v$. Let us
consider the cycle $\play''$ obtained as the concatenation of $\eta$
and $\eta'$. If $\play''$ has weight $q>0$, the cycle obtained by
concatenating $q$ times \play and $p$ times $\play''$ has weight
$0$, which contradicts the divergence of~$\game$. The same reasoning
on $\play''$ and $\play'$ proves that $\play''$ can not be
negative. Thus, $\play''$ is a cycle of weight $0$, which again
contradicts the hypothesis.
Conversely, consider a cycle of \game. It can be decomposed into
simple cycles, all belonging to the same SCC, so they are all
positive or all negative. As the accumulated weight of the cycle is
the sum of the weights of these simple cycles, it is nonzero, and
\game is therefore divergent.\qed \end{proof}
\paragraph{Computing the values} Consider a divergent weighted game $\game$. Let us start by observing that the vertices with value $+\infty$ are those from which \MinPl cannot reach the target vertices: thus, they can be computed with the classical attractor algorithm, and we can safely remove them without changing the other values or optimal strategies. In the rest of this section, we therefore assume all values to be in $\Z\cup\{-\infty\}$.
Our computation of the values relies on a value iteration algorithm to find the greatest fixed point of operator $\IteOpe\colon (\Z\cup\{-\infty,+\infty\})^\Vertices \to (\Z\cup\{-\infty,+\infty\})^\Vertices$, defined for every vector $\vec x$ by $\IteOpe(\vec x)_v = 0$ if $v\in \VerticesT$, and otherwise \[\IteOpe(\vec x)_v =
\begin{cases}
\displaystyle{\min_{e = (v, a, v')\in\Edges} \weight(e) + \vec x_{v'}} &
\text{if }
v\in \VerticesMin\\
\displaystyle{\max_{e = (v, a, v')\in\Edges} \weight(e) + \vec x_{v'}} &
\text{if } v\in \VerticesMax\,.
\end{cases} \] Indeed, this greatest fixed point is known to be the vector of values of the game (see, e.g., \cite[Corollary~11]{BGHM16}). In \cite{BGHM16}, it is shown that, by initialising the iterative evaluation of \IteOpe with the vector $\vec x^0$ mapping all vertices to $+\infty$, the computation terminates after a number of iterations pseudo-polynomial in $\game$ (i.e.\ polynomial in the number of vertices and the greatest weight in $\game$). For $i>0$, we let $\vec x^i = \IteOpe(\vec x^{i-1})$. Notice that the sequence $(\vec x^i)_{i\in\N}$ is non-increasing, since \IteOpe is a monotonic operator. Value iteration algorithms usually benefit from decomposing a game into SCCs (in polynomial time), considering them in a bottom-up fashion: starting with target vertices that have value $0$, SCCs are then considered in inverse topological order since the values of vertices in an SCC only depend on values of vertices of greater SCCs (in topological order), that have been previously computed.
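The fixed-point computation described above can be sketched as follows: a naive value iteration in Python, under a hypothetical encoding of the game (an owner map, a set of target vertices, and edges as tuples), that iterates $\IteOpe$ from the all-$+\infty$ vector until stabilisation, without making the pseudo-polynomial bound on the number of iterations explicit.

```python
import math

# Sketch of value iteration (hypothetical game encoding): `owner[v]` is
# 'min' or 'max', `targets` is the set of target vertices, and `edges`
# is a list of (v, a, v2, w) tuples with integer weight w.

def apply_F(x, owner, targets, edges):
    # One application of the operator F to the vector x.
    succ = {}
    for (v, a, v2, w) in edges:
        succ.setdefault(v, []).append(w + x[v2])
    y = {}
    for v in x:
        if v in targets:
            y[v] = 0
        elif owner[v] == 'min':
            y[v] = min(succ[v])
        else:
            y[v] = max(succ[v])
    return y

def value_iteration(owner, targets, edges):
    # Start from the vector mapping every vertex to +infinity and iterate
    # F until stabilisation: the greatest fixed point is the value vector.
    x = {v: math.inf for v in owner}
    while True:
        y = apply_F(x, owner, targets, edges)
        if y == x:
            return x
        x = y

# Tiny game: MinPl owns v0, MaxPl owns v1, t is the target.
owner = {'v0': 'min', 'v1': 'max', 't': 'min'}
targets = {'t'}
edges = [('v0', 'a', 't', 5), ('v0', 'b', 'v1', 1),
         ('v1', 'a', 't', 10), ('v1', 'b', 'v0', 0),
         ('t', 'a', 't', 0)]
vals = value_iteration(owner, targets, edges)
assert vals == {'v0': 5, 'v1': 10, 't': 0}
```

On this example, \MinPl goes directly to the target from $v_0$ (weight $5$), since passing through $v_1$ would let \MaxPl force the more expensive edge of weight $10$.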
\begin{example}
\begin{figure}
\caption{SCC decomposition of a divergent weighted game:
$\{v_1,v_2,v_3,v_4\}$ and $\{v_7\}$ are negative SCCs, $\{v_6\}$ and
$\{v_8,v_9\}$ are positive SCCs, and $\{v_5\}$ is a trivial positive
SCC.}
\label{fig:SCC}
\end{figure} Consider the weighted game of \figurename~\ref{fig:SCC}, where \MinPl vertices are drawn with circles, and \MaxPl vertices with squares. Vertex $\vertex_t$ is the only target. Near each vertex is placed its value. For a given vector $\vec x$, we have $\IteOpe(\vec x)_{v_8} = \min(0+\vec x_{v_t},-1+\vec x_{v_9})$ and $\IteOpe(\vec x)_{v_2} = \max(-2+\vec x_{v_1},-1+\vec x_{v_3},-10+\vec x_{v_5})$. By a computation of the attractor of $\{v_t\}$ for \MinPl, we obtain directly that $v_4$ and $v_7$ have value~$+\infty$. The inverse topological order on SCCs prescribes then to compute first the values for the SCC $\{v_8,v_9\}$, with target vertex $v_t$ associated with value~$0$. Then, we continue with SCC $\{v_6\}$, also keeping a new target vertex $v_8$ with (already computed) value $0$. For the trivial SCC $\{v_5\}$, a single application of $\IteOpe$ suffices to compute the value. Finally, for the SCC $\{v_1,v_2,v_3,v_4\}$, we keep a new target vertex $v_5$ with value $1$.\footnote{This means that, in
the definition of $\IteOpe$, a vertex $v$ of $\VerticesT$ is indeed
mapped to its previously computed value, not necessarily $0$.} Notice that this game is divergent, since, in each SCC, all simple cycles have the same sign. \end{example}
For a divergent game $\game$, Proposition~\ref{prop:scc-sign} allows us to know in polynomial time if a given SCC is positive or negative, i.e.\ if all cycles it contains are positive or negative, respectively: it suffices to consider an arbitrary cycle of it, and compute its weight. A trivial SCC (i.e.\ with a single vertex and no edges) will be arbitrarily considered positive. We now explain how to compute in polynomial time the value of all vertices in a positive or negative SCC.
First, in case of a positive SCC, we show that: \begin{proposition}\label{prop:positive-scc}
The value iteration algorithm applied on a positive SCC with $n$
vertices stabilises after at most $n$ steps. \end{proposition} \begin{proof}[Proof (inspired by techniques used in \cite{BCFL04})] Let
$W=\max_{\edge\in\Edges} |\weight(\edge)|$ be the greatest weight in
the game. There are no negative cycles in the SCC, thus there are no
vertices with value $-\infty$ in the SCC, and all values are
finite. Let $K$ be an upper bound on the values $|\vec x^n_\vertex|$
obtained after $n$ steps of the algorithm.\footnote{After $n$ steps,
the value iteration algorithm has set to a finite value all
vertices, since it extends the attractor computation.} Fix an
integer $p>(2K+W(n-1))n$. We will show that the values obtained
after $n+p$ steps are identical to those obtained after $n$ steps
only. Therefore, since the algorithm computes non-increasing
sequences of values, we have indeed stabilised after $n$ steps only.
Assume the existence of a vertex $\vertex$ such that
$\vec x^{n+p}_\vertex<\vec x^{n}_\vertex$. By induction on $p$, we
can show (see Lemma~\ref{lem:valite} in Appendix~\ref{app:technical}
for a detailed proof) the existence of a vertex $\vertex'$ and a
finite play \play from \vertex to $\vertex'$ with length $p$ and
weight $\vec x^{n+p}_\vertex-\vec x^{n}_{\vertex'}$: the play is
composed of the edges that optimise successively the min/max
operator in \IteOpe. This finite play being of length greater than
$(2K+W(n-1))n$, there is at least one vertex appearing more than
$2K+W(n-1)$ times. Thus, it can be decomposed into at least
$2K+W(n-1)$ cycles and a finite play $\play'$ visiting each vertex
at most once. All cycles of the SCC are positive, hence of weight at
least $1$, and the weight of $\play'$, of length at most $n-1$, is
bounded from below by $-(n-1)W$; therefore the weight of \play is at
least $2K+W(n-1) - (n-1)W = 2K$. Then,
$\vec x^{n+p}_\vertex-\vec x^{n}_{\vertex'} \geq 2K$, so
$\vec x^{n+p}_\vertex \geq 2K + \vec x^{n}_{\vertex'} \geq K$. But
$K \geq \vec x^{n}_{\vertex}$, so
$\vec x^{n+p}_\vertex\geq\vec x^{n}_{\vertex}$, and that is a
contradiction.\qed \end{proof}
\begin{example}
For the SCC $\{v_8,v_9\}$ of the game in \figurename~\ref{fig:SCC},
starting from $\vec x$ mapping $v_8$ and $v_9$ to $+\infty$, and
$v_t$ to $0$, after one iteration, $\vec x_{v_8}$ changes to value
$0$, and after the second iteration, $\vec x_{v_9}$ stabilises to
value $2$. \end{example}
Consider then the case of a negative SCC. Contrary to the previous case, we must deal with vertices of value $-\infty$. However, in a negative SCC, those vertices are easy to find\footnote{This is in
contrast with the general case of (non divergent) finite weighted
games where the problem of deciding if a vertex has value $-\infty$
is as hard as solving mean-payoff games~\cite{BGHM16}.}. These are all the vertices from which $\MaxPl$ cannot unilaterally guarantee reaching a target vertex:
\begin{proposition}\label{prop:minus-infty}
In a negative SCC with no vertices of value $+\infty$, vertices of
value $-\infty$ are all the ones not in the attractor for $\MaxPl$
to the targets. \end{proposition} \begin{proof}
Consider a vertex $v$ in the attractor for $\MaxPl$ to the
targets. Then, if \MaxPl applies a winning memoryless strategy for
the reachability objective to the target vertices, all strategies of
\MinPl will generate a play from $v$ reaching a target after at most
$|\Vertices|$ steps. This implies that $v$ has a finite (lower)
value in the game.
Conversely, if $v$ is not in the attractor, by determinacy of
games with reachability objectives, \MinPl has a (memoryless)
strategy $\minstrategy$ ensuring that no target vertex is reached
from $v$, whatever \MaxPl plays. By applying $\minstrategy$ long
enough to accumulate many negative cycles, before switching to a
strategy allowing \MinPl to reach the target (such a strategy exists
since no vertex has value $+\infty$ in the game), \MinPl can obtain
from $v$ a weight as small as desired. Thus, $v$
has value~$-\infty$. \qed \end{proof}
Thus, we can compute vertices of value $-\infty$ in polynomial time for a negative SCC. The finite values of the other vertices can then be computed in polynomial time with the following procedure. Given a negative SCC $\game$ with no remaining vertices of value $+\infty$ or $-\infty$, consider the dual (positive) SCC $\widetilde \game$ obtained by: \begin{inparaenum}[($i$)] \item switching the vertices of $\MinPl$ and $\MaxPl$; \item taking the opposite of every edge weight. \end{inparaenum} The sets of strategies of both players are exchanged in those two games, so that the upper value in $\game$ is equal to the opposite of the lower value in $\widetilde \game$, and vice versa. Since weighted games are determined, the value of $\game$ is the opposite of the value of $\widetilde\game$, for which Proposition~\ref{prop:positive-scc} applies. We may also interpret this result as follows: \begin{proposition}\label{prop:negative-scc}
The value iteration algorithm, initialised with $\vec x^0_v=-\infty$
(for all $v$), applied on a negative SCC with $n$ vertices, and no
vertices of value $+\infty$ or $-\infty$, stabilises after at most
$n$ steps. \end{proposition} \begin{proof}
It is immediate that the vectors computed with this modified value
iteration (that computes the smallest fixed point of $\IteOpe$) are
exactly the opposite vectors of the ones computed in the dual
positive SCC. The result then follows from the explanation
above.\qed \end{proof}
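The dual-game construction used here is purely syntactic; the following sketch (Python, with the same hypothetical encoding of a game as an owner map and a list of weighted edges) makes it explicit.

```python
# Sketch of the dual-game construction used for negative SCCs
# (hypothetical encoding: `owner` maps vertices to 'min'/'max',
# `edges` is a list of (v, a, v2, w) tuples).

def dualize(owner, edges):
    """Swap the two players and take the opposite of every edge weight."""
    dual_owner = {v: ('max' if o == 'min' else 'min')
                  for v, o in owner.items()}
    dual_edges = [(v, a, v2, -w) for (v, a, v2, w) in edges]
    return dual_owner, dual_edges

owner = {'v0': 'min', 'v1': 'max'}
edges = [('v0', 'a', 'v1', -3), ('v1', 'a', 'v0', -2)]
d_owner, d_edges = dualize(owner, edges)
# A negative SCC becomes a positive one in the dual game...
assert d_edges == [('v0', 'a', 'v1', 3), ('v1', 'a', 'v0', 2)]
# ...and dualizing twice gives back the original game.
assert dualize(d_owner, d_edges) == (owner, edges)
```

Since the strategy sets of the two players are exchanged, the value vector of the original negative SCC is the opposite of the one computed on its dual.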
\begin{example}
Consider the SCC $\{v_1,v_2,v_3,v_4\}$ of the game in
\figurename~\ref{fig:SCC}, where the value of vertex $v_5$ has been
previously computed. We already know that $v_4$ has value $+\infty$
so we do not consider it further. The attractor of $\{v_5\}$ for
\MaxPl is $\{v_2,v_3\}$, so that the value of $v_1$ is
$-\infty$. Then, starting from $\vec x^0$ mapping $v_2$ and $v_3$ to
$-\infty$, the value iteration algorithm computes this sequence of
vectors: $\vec x^1 = (-9,-\infty)$ (\MaxPl tries to maximise the
payoff, so he prefers to jump to the target to obtain $-10+1$ rather
than going to $v_3$ where he gets $-1-\infty$, while \MinPl chooses
$v_2$ to still guarantee $0-\infty$), $\vec x^2 = (-9,-9)$ (now,
\MinPl has a choice between the target giving $0+1$ or $v_3$ giving $0-9$). \end{example}
\P-hardness follows from a logarithmic-space reduction from the problem of solving finite games with reachability objectives \cite{Imm81}. To a reachability game, we simply add weight $1$ on every transition, making it a divergent weighted game. Then,
\MinPl wins the reachability game if and only if the value in the weighted game is less than $|\Vertices|$.
In a divergent weighted game where all values are finite, optimal strategies exist. As observed in \cite{BGHM16}, \MaxPl always has a memoryless optimal strategy, whereas \MinPl may require (finite) memory. Optimal strategies for both players can be obtained by combining optimal strategies in each SCC, the latter being obtained as explained in \cite{BGHM16}.
\paragraph{Class decision} We explain why deciding the divergence of a weighted game is an $\NL$-complete problem when weights are encoded in unary. First, to prove the membership in \NL, notice that a weighted game is \emph{not divergent} if and only if there is a positive cycle and a negative cycle, both of length at most $|\Vertices|$, and belonging to the same SCC.\footnote{If the game is not divergent,
there exists an SCC containing a negative simple cycle and a
positive one by Proposition~\ref{prop:scc-sign}. This implies the
existence of a negative cycle and a positive cycle in the same SCC,
both of length at most $|\Vertices|$. Reciprocally, this property
implies the non-divergence, by the same proof as for
Proposition~\ref{prop:scc-sign}.} To test this property in \NL, we first guess a starting vertex for both cycles. Verifying that those are in the same SCC can be done in \NL. Then, we guess the two cycles on-the-fly, keeping in memory their accumulated weights (bounded in absolute value by
$W\times |\Vertices|$, with $W$ the biggest weight in the game, and thus of size at most logarithmic in the size of $\game$ if weights are encoded in unary), and stop the on-the-fly exploration when the length of the cycles exceeds $|\Vertices|$. Therefore, testing divergence is in $\co\NL=\NL$ \cite{Imm88,Sze88}.
The $\NL$-hardness (indeed $\co\NL$-hardness, which is equivalent \cite{Imm88,Sze88}) is shown by a reduction of the reachability problem in a finite automaton. More precisely, we consider a finite automaton with a starting state and a different target state without outgoing transitions. We construct from it a weighted game by distributing all states to \MinPl, and equipping all transitions with weight $1$. We also add a loop with weight $-1$ on the target state and a transition from the target state to the initial state with weight $0$. Then, the game is not divergent if and only if the target can be reached from the initial state in the automaton.
When weights are encoded in binary, the previous decision procedure gives \NP\ membership. However, we can achieve a \P\ upper bound with the following procedure. For every vertex~$\vertex$, let $C_\vertex=\{\weight(\play) \mid \play\text{ cycle containing }\vertex\}$. Using Floyd-Warshall's algorithm, it is possible to compute in polynomial time $\inf C_\vertex$ (in particular, to detect whether $C_\vertex\neq\emptyset$), as well as $\sup C_\vertex$ in a dual fashion. Then, Proposition~\ref{prop:scc-sign} guarantees that a weighted game $\game$ is divergent if and only if $0\not\in[\inf C_\vertex,\sup C_\vertex]$ for all vertices $\vertex$ such that $C_\vertex\neq\emptyset$.
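A sketch of this decision procedure is given below (Python, hypothetical encoding of the weighted graph as a vertex list and an edge-to-weight dictionary; players are irrelevant for divergence). It computes min-plus and max-plus Floyd-Warshall closures; in the presence of negative (resp.\ positive) cycles the diagonal entries need not equal $\inf C_\vertex$ and $\sup C_\vertex$ exactly, but since every closed walk decomposes into cycles, they have the correct sign, which is all the test $0\notin[\inf C_\vertex,\sup C_\vertex]$ requires.

```python
import itertools

# Sketch of the polynomial-time divergence test (hypothetical encoding:
# `vertices` is a list, `edges` maps pairs (u, v) to an integer weight).

def closure(vertices, edges, better):
    # Floyd-Warshall closure for `better` = min (min-plus) or max (max-plus).
    unit = float('inf') if better is min else float('-inf')
    d = {(u, v): unit for u in vertices for v in vertices}
    for (u, v), w in edges.items():
        d[(u, v)] = better(d[(u, v)], w)
    for k, i, j in itertools.product(vertices, repeat=3):
        d[(i, j)] = better(d[(i, j)], d[(i, k)] + d[(k, j)])
    return d

def is_divergent(vertices, edges):
    lo = closure(vertices, edges, min)  # sign of inf C_v on the diagonal
    hi = closure(vertices, edges, max)  # sign of sup C_v on the diagonal
    for v in vertices:
        if lo[(v, v)] != float('inf'):          # C_v is non-empty
            if lo[(v, v)] <= 0 <= hi[(v, v)]:   # 0 in [inf C_v, sup C_v]
                return False
    return True

# A positive cycle and a negative cycle in the same SCC: not divergent.
vertices = ['a', 'b']
edges = {('a', 'b'): 1, ('b', 'a'): 1, ('a', 'a'): -1}
assert not is_divergent(vertices, edges)
# Removing the negative self-loop leaves only positive cycles: divergent.
del edges[('a', 'a')]
assert is_divergent(vertices, edges)
```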
\section{Weighted timed games}\label{sec:weighted-timed-games}
We now turn our attention to a timed extension of the weighted games. We will first define weighted timed games, giving their semantics in terms of \emph{infinite} weighted games. We let \Clocks be a finite set of variables called clocks. A valuation of clocks is a mapping $\val\colon \Clocks\to \Rplus$. For a valuation $\val$, $d\in\Rplus$ and $Y\subseteq \Clocks$, we define the valuation $\val+d$ by $(\val+d)(x)=\val(x)+d$, for all $x\in \Clocks$, and the valuation $\val[Y\leftarrow 0]$ by $(\val[Y\leftarrow 0])(x)=0$ if $x\in Y$, and $(\val[Y\leftarrow 0])(x)=\val(x)$ otherwise. The valuation $\valnull$ assigns $0$ to every clock. A guard on clocks of \Clocks is a conjunction of atomic constraints of the form $x\bowtie c$, where ${\bowtie}\in\{{\leq},<,=,>,{\geq}\}$ and $c\in \N$. A valuation $\val\colon \Clocks\to \Rplus$ satisfies an atomic constraint $x\bowtie c$ if $\val(x)\bowtie c$. The satisfaction relation is extended naturally to all guards $g$, and denoted by $\val\models g$. We let $\Guards \Clocks$ denote the set of guards over \Clocks.
A weighted timed game is then a tuple $\game=\langle\States=\StatesMin\uplus\StatesMax,\StatesT, \Trans,\weight\rangle$ where $\StatesMin$ and $\StatesMax$ are \emph{finite} disjoint subsets of states belonging to \MinPl and \MaxPl, respectively, $\StatesT\subseteq \StatesMin$ is a subset of target states for player $\MinPl$, $\Trans\subseteq \States\times\Guards \Clocks\times \powerset \Clocks \times \States$ is a \emph{finite} set of transitions, and $\weight\colon \Trans\uplus\States \to \Z$ is the weight function, associating an integer weight with each transition and state. Without loss of generality, we may suppose that for each state $\state\in \States$ and valuation $\val$, there exists a transition $(\state,g,Y,\state')\in \Trans$ such that $\val\models g$.
The semantics of a weighted timed game $\game$ is defined in terms of the infinite weighted game $\mathcal H$ whose vertices are configurations of the weighted timed game. A configuration is a pair $(\state,\val)$ with a state and a valuation of the clocks. Configurations are split between the players according to their states. A configuration is a target if its state is a target state. The alphabet of $\mathcal H$ is given by $\Rplus\times \Trans$ and encodes the delay that a player wants to spend in the current state before firing a certain transition. For every delay $d\in\Rplus$, transition $\trans=(\state,g,Y,\state')\in \Trans$ and valuation~$\val$, there is an edge $(\state,\val)\xrightarrow{d,\trans}(\state',\val')$ if $\val+d\models g$ and $\val'=(\val+d)[Y\leftarrow 0]$. The weight of such an edge $e$ is given by $d\times \weight(\state) + \weight(\trans)$.
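This semantics can be sketched as follows (Python, with hypothetical encodings of valuations as dictionaries and guards as tuples of atomic constraints); the function `fire` builds one edge of the infinite game $\mathcal H$ together with its weight $d\times\weight(s)+\weight(t)$.

```python
import operator

# Sketch of the semantics of a weighted timed game (hypothetical encoding):
# a valuation is a dict from clock names to non-negative reals; a guard is
# a tuple of atomic constraints (clock, op, constant).
OPS = {'<': operator.lt, '<=': operator.le, '==': operator.eq,
       '>': operator.gt, '>=': operator.ge}

def delay(val, d):
    # The valuation val + d: time elapses uniformly on all clocks.
    return {x: t + d for x, t in val.items()}

def reset(val, Y):
    # The valuation val[Y <- 0]: the clocks of Y are reset.
    return {x: (0.0 if x in Y else t) for x, t in val.items()}

def satisfies(val, guard):
    # val satisfies a guard if it satisfies every atomic constraint.
    return all(OPS[op](val[x], c) for (x, op, c) in guard)

def fire(config, d, trans, state_weight, trans_weight):
    # Edge (s, val) --(d, t)--> (s', val') of the infinite game H, with
    # weight d * weight(s) + weight(t); None if the guard is not satisfied.
    s, val = config
    s_src, guard, Y, s_dst = trans
    assert s == s_src
    val2 = delay(val, d)
    if not satisfies(val2, guard):
        return None
    return (s_dst, reset(val2, Y)), d * state_weight[s] + trans_weight[trans]

# One clock x; a transition from s0 to s1 guarded by 1 <= x <= 2, resetting x.
t = ('s0', (('x', '>=', 1), ('x', '<=', 2)), frozenset({'x'}), 's1')
state_weight, trans_weight = {'s0': 3, 's1': 0}, {t: -4}
assert fire(('s0', {'x': 0.0}), 1.5, t, state_weight, trans_weight) == \
    (('s1', {'x': 0.0}), 0.5)                     # weight 1.5 * 3 - 4
assert fire(('s0', {'x': 0.0}), 0.5, t, state_weight, trans_weight) is None
```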
Plays, strategies, and values in the weighted timed game $\game$ are then defined as those of $\mathcal H$. It is known that weighted timed games are determined ($\lowervalue_\game(s,\nu)=\uppervalue_\game(s,\nu)$ for all states $s$ and valuations $\nu$).\footnote{The result is stated in \cite{BGH+15}
for weighted timed games (called priced timed games) with one clock,
but the proof does not use the assumption on the number of clocks.}
As usual in related work \cite{ABM04,BCFL04,BJM15}, we assume that all clocks are \emph{bounded}, i.e.\ there is a constant $M\in\N$ such that every transition of the weighted timed games is equipped with a guard $g$ such that $\val\models g$ implies $\val(x)\leq M$ for all clocks $x\in \Clocks$. We will rely on the crucial notion of regions, as introduced in the seminal work on timed automata \cite{AD94}: a region is a set of valuations, that are all time-abstract bisimilar. There is only a finite number of regions and we denote by $\regions \Clocks M$ the set of regions associated with set of clocks \Clocks and maximal constant $M$ in guards. For a valuation~$\val$, we denote by $[\val]$ the region that contains it. A region $r'$ is said to be a time successor of region $r$ if there exist $\val\in r$, $\val'\in r'$, and $d>0$ such that $\val'=\val+d$. Moreover, for $Y\subseteq \Clocks$, we let $r[Y\leftarrow 0]$ be the region where clocks of $Y$ are reset.
The region automaton $\rgame$ of a game $\game= \langle\States=\StatesMin\uplus\StatesMax,\StatesT, \Trans,\weight\rangle$ is the finite automaton with states $\States\times \regions \Clocks M$, alphabet $\Trans$, and a transition $(s,r)\xrightarrow{\trans}(s',r')$ labelled by $\trans=(s,g,Y,s')$ if there exists a region $r''$, time successor of $r$, such that $r''$ satisfies the guard $g$ and $r'=r''[Y\leftarrow 0]$. We call \emph{path} an execution (not necessarily accepting) of this finite automaton, and we denote paths by $\rpath$. A play $\play$ in $\game$ is projected onto an execution $\rpath$ in $\rgame$ by replacing actual valuations with the regions containing them: we say that $\play$ \emph{follows} the path $\rpath$. It is important to notice that, even if $\rpath$ is a cycle (i.e.\ starts and ends in the same state of the region automaton), there may exist plays following it in $\game$ that are not cycles, since regions are sets of valuations.
\paragraph{Problems} As in weighted (untimed) games, we consider the \emph{value problem}, mimicked from the one in $\mathcal H$. Precisely, given a weighted timed game $\game$, a configuration $(s,\val)$ and a threshold $\alpha\in \Z\cup\{-\infty,+\infty\}$, we want to know whether $\Val_\game(s,\val)\leq \alpha$. In the context of timed games, optimal strategies may not exist. We generally focus on $\varepsilon$-optimal strategies, that guarantee the optimal value, up to a small error $\varepsilon$.
\paragraph{Related work} In the one-player case, computing the optimal value and an $\varepsilon$-optimal strategy for weighted timed automata is known to be $\PSPACE$-complete \cite{BouBri07}. In the two-player case, much work for weighted timed games (also called priced timed games in the literature) has been achieved in the case of non-negative weights. In this setting, the value problem is undecidable \cite{BBR05,BJM15}. To obtain decidability, one possibility is to limit the number of clocks to 1: then, there is an exponential-time algorithm to compute the value as well as $\varepsilon$-optimal strategies \cite{BBM06,Rut11,DueIbs13}, whereas the problem is only known to be $\P$-hard. The other possibility to obtain a decidability result~\cite{ABM04,BCFL04} is to enforce a semantical property of divergence, originally called strictly non-Zeno cost: it asks that every play following a cycle in the region automaton has weight at least~$1$.
In the presence of negative weights, undecidability even holds for weighted timed games with only 2 clocks \cite{BGNK+14} (for the existence problem asking whether a strategy of player \MinPl can guarantee a given threshold). Only the one-clock restriction has been studied so far, yielding an exponential-time algorithm under restrictions on the resets of the clock in cycles \cite{BGH+15}. For weighted timed games, the strictly non-Zeno cost property has only been defined and studied in the absence of negative weights \cite{BCFL04}. As already mentioned in the introduction, the notion is close, but not equivalent, to the one of robust weighted timed games, studied for mean-payoff and energy objectives \cite{BreCas14}. In the next section, we extend the strictly non-Zeno cost property to negative weights, calling it the divergence property, in order to obtain decidability of a large class of multi-clock weighted timed games in the presence of arbitrary weights.
\section{Solving divergent weighted timed
games}\label{sec:solving-divergent-weighted-timed-games}
We introduce divergent weighted timed games, as an extension of divergent weighted games to the timed setting.
\begin{definition}
A weighted timed game \game is divergent when every finite
play~\play in \game following a cycle in the region automaton
$\rgame$ satisfies $\weight(\play)\notin (-1,1)$.\footnote{As in
\cite{BCFL04}, we could replace $(-1,1)$ by $(-\kappa,\kappa)$ to
define a notion of $\kappa$-divergence. However, since weights and
guard constraints in weighted timed games are integers, for
$\kappa\in(0,1)$, a weighted timed game $\game$ is
$\kappa$-divergent if and only if it is divergent.} \end{definition}
The weight is required not only to be different from $0$, but also to be bounded away from~$0$: otherwise, the original intuition that the accumulated weight of plays ultimately grows in absolute value would not hold. If $\game$ has only non-negative weights on states and transitions, this definition matches the \emph{strictly non-Zeno cost} property of~\cite[Thm.~6]{BCFL04}. Our contributions can be summarised as follows:
\begin{theorem}
The value problem over divergent weighted timed games is decidable
in $2$-\EXP, and is \EXP-hard. Moreover, deciding if a given weighted
timed game is divergent is a \PSPACE-complete problem. \end{theorem}
Remember that these complexity results match the ones that can be obtained from the study of \cite{BCFL04} for non-negative weights.
\paragraph{SCC analysis} Keeping the terminology of the untimed setting, a cycle~$\rpath$ of~\rgame is said to be positive (respectively, negative) if every play \play following~$\rpath$ satisfies $\weight(\play)\geq 1$ (respectively, $\weight(\play)\leq -1$). By definition, every cycle of the region automaton of a divergent weighted timed game is positive or negative. Moreover, notice that checking whether a cycle $\rpath$ is positive or negative can be done in polynomial time with respect to the length of $\rpath$. Indeed, the set $\{\weight(\play) \mid \play \text{ is a play following } \rpath \}$ is an interval, as the image of a convex set under an affine function (see \cite[Sec.~3.2]{BouBri07} for an explanation), and the extremal points of this interval can be computed in polynomial time by solving a linear program \cite[Cor.~1]{BouBri07}. We first transfer to the timed setting the characterisation of divergent games in terms of SCCs that we relied on in the untimed setting:
\begin{proposition}\label{prop:timed-scc-sign}
A weighted timed game \game is divergent if and only if, in each SCC
of \rgame, simple cycles are either all positive, or all negative. \end{proposition}
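The polynomial-time sign check described above can be illustrated in code. The model below is our own simplification, not the construction of \cite{BouBri07}: each edge of the cycle is a triple (rate, discrete weight, delay interval), and we assume the delays range independently over their intervals, ignoring the coupling that clock constraints induce between delays. Under that assumption the weight is affine in the delays, so its extremes are attained at interval endpoints.

```python
def weight_interval(cycle):
    """Extremal accumulated weights of plays following a cycle.

    Simplified model (ours): each edge is (rate, disc, (dlo, dhi)),
    contributing rate * d + disc for a delay d in [dlo, dhi].  Delays
    are assumed independent, so the affine weight is extremal at the
    endpoints of each delay interval."""
    lo = hi = 0
    for rate, disc, (dlo, dhi) in cycle:
        a, b = rate * dlo + disc, rate * dhi + disc
        lo += min(a, b)
        hi += max(a, b)
    return lo, hi

def sign_of_cycle(cycle):
    lo, hi = weight_interval(cycle)
    if lo >= 1:
        return "positive"   # every play following the cycle has weight >= 1
    if hi <= -1:
        return "negative"   # every play following the cycle has weight <= -1
    return "neither"        # witnesses non-divergence
```

In the actual region automaton the delays are coupled, and the extremal weights are obtained by solving a linear program over the delay constraints, as in \cite[Cor.~1]{BouBri07}.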
The proof of the converse follows the exact same reasoning as for weighted games (see Proposition~\ref{prop:scc-sign}). For the direct implication, the situation is more complex: we need to be more careful when composing cycles with each other, and weights in the timed game are no longer integers, forbidding the arithmetical reasoning we applied. To help us, we will rely on the corner-point abstraction introduced in \cite{BouBri08a} to study multi-weighted timed automata. It consists in adding weight information to the edges $(s,r)\xrightarrow{\delta}(s',r')$ of the region automaton. Since the weights depend on the exact valuations $\nu$ and $\nu'$, taken in regions $r$ and $r'$, respectively, the weight of such an edge in the region automaton is computed for each pair of \emph{corners} of the regions. Formally, corners of region $r$ are valuations in $\overline{r} \cap \N^\Clocks$ (where $\overline{r}$ denotes the topological closure of $r$). Since corners do not necessarily belong to their regions, we must consider a modified version $\overline\game$ of the game $\game$ where all strict inequalities of guards have been replaced with non-strict ones. Then, for a path $\rpath$ in $\rgame$, we denote by $\overline\rpath$ the equivalent of path $\rpath$ in $\mathcal R(\overline\game)$.
In the following, our focus is on cycles of the region automaton, so we only need to consider the aggregation of all the behaviours following a cycle. Inspired by the \emph{folded orbit graphs} (FOG) introduced in~\cite{Pur00}, we define the folded orbit graph $\FOG(\rpath)$ of a cycle $\rpath=(\state_1,r=r_1) \xrightarrow{\trans_1} (\state_2,r_2) \xrightarrow{\trans_2} \cdots \xrightarrow{\trans_n} (\state_1,r)$ in \rgame as a graph whose vertices are corners of region $r$, and that contains an edge from corner $\regv$ to corner $\regv'$ if there exists a finite play $\overline{\play}$ in $\overline\game$ from $(s_1,\regv)$ to $(s_1,\regv')$ following $\overline\rpath$ jumping from corners to corners\footnote{Notice that if there is a play from
$(s_1,\regv)$ to $(s_1,\regv')$ in $\overline\game$, there is
another one that only jumps at corners of regions.}. We fix such a finite play $\overline{\play}$ arbitrarily and label the edge between $\regv$ and $\regv'$ in the FOG by this play: it is then denoted by $\regv\xrightarrow{\overline{\play}}\regv'$. Moreover, since $\overline\play$ jumps from corners to corners, its weight $\weight(\overline\play)$ is an integer, conforming to the definitions of the corner-point abstraction of \cite{BouBri08a}. Following \cite[Prop.~5]{BouBri08a} (see Appendix~\ref{app:FOG} for a complete proof), it is possible to find a play $\play$ in $\game$ close to $\overline\play$, in the sense that we control the difference between their respective weights: \begin{lemma}\label{lem:fog-exec}
For all $\varepsilon>0$ and edge
$\regv\xrightarrow{\overline{\play}}\regv'$ of $\FOG(\rpath)$, there
exists a play \play in \game following \rpath such that
$|\weight(\play)-\weight(\overline{\play})|\leq \varepsilon$. \end{lemma}
In order to prove the direct implication of Proposition~\ref{prop:timed-scc-sign}, suppose now that \game is divergent, and consider two simple cycles \rpath and $\rpath'$ in the same SCC of $\rgame$. We need to show that they have the same sign. Lemma~\ref{lem:timed-touching-cycles} will first take care of the case where $\rpath$ and $\rpath'$ share a state $(\state,r)$.
\begin{lemma}\label{lem:timed-touching-cycles}
If \game is divergent and two cycles $\rpath$ and $\rpath'$ of
\rgame share a state~$(\state,r)$, they are either both positive or
both negative. \end{lemma} \begin{proof}
Suppose by contradiction that $\rpath$ is negative and $\rpath'$ is
positive. We assume that $(\state,r)$ is the first state of both
$\rpath$ and $\rpath'$, possibly performing cyclic permutations of
states if necessary. We construct a graph $\FOG(\rpath,\rpath')$ as
the union of $\FOG(\rpath)$ and $\FOG(\rpath')$ (that share the same
set of vertices), colouring in blue the edges of $\FOG(\rpath)$ and
in red the edges of $\FOG(\rpath')$. A path in
$\FOG(\rpath,\rpath')$ is said to be blue (respectively, red) when all of its edges are blue (respectively, red).
We assume first that there exists in $\FOG(\rpath,\rpath')$
a blue cycle $C$ and a red cycle~$C'$ with the same first vertex
\regv. Let $k$ and $k'$ be the respective lengths of $C$ and $C'$,
so that $C$ can be decomposed as
$\regv\xrightarrow{\overline{\play_1}}\cdots
\xrightarrow{\overline{\play_k}}\regv$ and $C'$ as
$\regv\xrightarrow{\overline{\play_1'}}\cdots
\xrightarrow{\overline{\play_{k'}'}}\regv$, where
$\overline{\play_i}$ are plays following $\overline{\rpath}$ and
$\overline{\play_i'}$ are plays following $\overline{\rpath'}$, all
jumping only on corners of regions. Let $\overline{\play}$ be the
concatenation of $\overline{\play_1},\ldots,\overline{\play_k}$, and
$\overline{\play'}$ be the concatenation of
$\overline{\play_1'},\ldots,\overline{\play_{k'}'}$. Recall that
$w=|\weight(\overline{\play})|$ and
$w'=|\weight(\overline{\play'})|$ are integers. Since $\rpath$ is
negative, so is $\rpath^k$, the concatenation of $k$ copies of
$\rpath$ (the weight of a play following it is a sum of weights all
below $-1$). Therefore, $\overline{\play}$, that follows $\rpath^k$,
has a weight $\weight(\overline{\play})\leq -1$. Similarly,
$\weight(\overline{\play'})\geq 1$. We consider the cycle $C''$
obtained by concatenating~$w'$ copies of $C$ and $w$ copies of
$C'$. Similarly, we let $\overline{\play''}$ be the play obtained by
concatenating $w'$ copies of $\overline{\play}$ and $w$ copies of
$\overline{\play'}$. By Lemma~\ref{lem:fog-exec}, there exists a
play $\play''$ in \game, following $C''$ such that
$|\weight(\play'') - \weight(\overline{\play''})|\leq 1/3$. But
$\weight(\overline{\play''})=\weight(\overline{\play})w'+
\weight(\overline{\play'})w=0$, so $\weight(\play'')\in(-1,1)$: this
contradicts the divergence of $\game$, since $\play''$ follows the
cycle of $\rgame$ composed of $w'$ copies of $\rpath^k$ and $w$ copies of ${\rpath'}^{k'}$.
We now return to the general case, where $C$ and $C'$ may
not exist. Since $\FOG(\rpath)$ and $\FOG(\rpath')$ are finite
graphs with no deadlocks (every corner has an outgoing edge), from
every corner of $\FOG(\rpath,\rpath')$, we can reach a blue simple
cycle, as well as a red simple cycle. Since there are only a finite
number of simple cycles in $\FOG(\rpath,\rpath')$, there exists a
blue cycle $C$ and a red cycle $C'$ that can reach each other in
$\FOG(\rpath,\rpath')$.\footnote{Indeed, one can apply the following
construction. We start from a fixed vertex and reach a red simple
cycle. We fix a vertex of this red cycle, and from it we can reach
a blue simple cycle. We fix a vertex of this blue cycle, and from
it we can reach a red simple cycle. There is a finite number of
red and blue simple cycles, so we keep alternating between red and
blue until we reach a previously seen red simple cycle. This red
cycle and, for example, the previous blue one can reach each
other.} In $\FOG(\rpath,\rpath')$, we let $P$ be a path from the
first vertex of $C$ to the first vertex of $C'$, and $P'$ be a path
from the first vertex of $C'$ to the first vertex of $C$. Consider
the cycle $C''$ obtained by concatenating $P$ and $P'$. As a cycle
of $\FOG(\rpath,\rpath')$, we can map it to a cycle $\rpath''$ of
\rgame (alternating \rpath and $\rpath'$ depending on the colours of
the traversed edges), so that $C''$ is a cycle (of length 1) of
$\FOG(\rpath'')$. By the divergence of $\game$, $\rpath''$ is
positive or negative. Suppose for instance that it is
positive. Since $(s,r)$ is the first state of both $\rpath$ and
$\rpath''$, we can construct the $\FOG(\rpath,\rpath'')$, in which
$C$ is a blue cycle and $C''$ is a red cycle, both sharing the same
first vertex. We then conclude as in the previous case. A similar reasoning with $\rpath'$ applies when $\rpath''$ is negative. Therefore, in all cases, we reach a contradiction.\qed \end{proof}
To finish the proof of the direct implication of Proposition~\ref{prop:timed-scc-sign}, we suppose that the two simple cycles $\rpath$ and $\rpath'$ in the same SCC of $\rgame$ do not share any states. By strong connectivity, in $\rgame$, there exists a path $\rpath_1$ from the first state of~$\rpath$ to the first state of $\rpath'$, and a path $\rpath_2$ from the first state of $\rpath'$ to the first state of $\rpath$. Consider the cycle of $\rgame$ obtained by concatenating $\rpath_1$ and~$\rpath_2$. By divergence of $\game$, it must be positive or negative. Since it shares a state with both $\rpath$ and $\rpath'$, Lemma~\ref{lem:timed-touching-cycles} allows us to derive a contradiction in both cases. This concludes the proof of Proposition~\ref{prop:timed-scc-sign}.
\paragraph{Value computation} We will now explain how to compute the values of a divergent weighted timed game \game. Remember that the function $\Val$ maps configurations of $\States\times\Rplus^\Clocks$ to a value in $\Rbar=\R\cup\{-\infty,+\infty\}$. The semi-algorithm of \cite{BCFL04} relies on the same principle as the value iteration algorithm used in the untimed setting, only this time we compute the greatest fixed point of operator $\IteOpe\colon\Rbar^{\States\times\Rplus^\Clocks} \to \Rbar^{\States\times\Rplus^\Clocks}$, defined by $\IteOpe(\vec x)_{(\state,\val)}=0$ if $\state\in\StatesT$, and otherwise \[\IteOpe(\vec x)_{(\state,\val)}= \begin{cases}
\displaystyle{\sup_{(\state,\val)\xrightarrow{d,\trans}(\state',\val')}
d\times\weight(\state)+\weight(\trans)+\vec x_{(\state',\val')}
} &
\text{if }\state\in\StatesMax\\
\displaystyle{\inf_{(\state,\val)\xrightarrow{d,\trans}(\state',\val')}
d\times\weight(\state)+\weight(\trans)+\vec x_{(\state',\val')} } &
\text{if }\state\in\StatesMin \end{cases}\] \noindent where $(\state,\val)\xrightarrow{d,\trans}(\state',\val')$ ranges over the edges of the infinite weighted game associated with $\game$ (the one defining its semantics). Then, starting from $\vec x^0$ mapping every configuration to $+\infty$, we let $\vec x^i= \IteOpe(\vec x^{i-1})$ for all $i>0$. Since $\vec x^0$ is piecewise affine (even constant), and $\IteOpe$ preserves piecewise affinity, all iterates $\vec x^i$ are piecewise affine with a finite number of pieces. In \cite{ABM04}, it is proved that the number of pieces of $\vec x^i$ is at most linear in the size of \rgame and exponential in~$i$.\footnote{
For divergent games with only non-negative weights, the fixed point
is reached after a number of steps linear in the size of the region
automaton \cite{BCFL04}: overall, this leads to a doubly
exponential complexity.}
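As an illustration of the iteration scheme, here is a sketch on a finite (untimed) abstraction; the function name and graph encoding are ours, and the genuine algorithm manipulates piecewise affine functions over configurations rather than a finite value table.

```python
import math

def value_iteration(vertices, edges, targets, owner, steps):
    """Finite (untimed) sketch of iterating the operator F.

    `edges[v]` lists pairs (weight, v'); `owner[v]` is 'Min' or 'Max';
    target vertices are fixed to 0.  Starting from +infinity everywhere
    else, each step applies one min/max update per vertex, computing the
    greatest fixed point in the limit."""
    x = {v: (0.0 if v in targets else math.inf) for v in vertices}
    for _ in range(steps):
        y = {}
        for v in vertices:
            if v in targets:
                y[v] = 0.0
            else:
                vals = [w + x[u] for (w, u) in edges[v]]
                y[v] = max(vals) if owner[v] == 'Max' else min(vals)
        x = y
    return x
```

In the timed setting, each entry of the table becomes a piecewise affine function of the clock valuation, and the min/max additionally ranges over delays.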
First, we can compute the set of configurations having value $+\infty$. Indeed, the region automaton \rgame can be seen as a reachability two-player game \hgame by saying that $(\state,r)$ belongs to \MinPl (\MaxPl, respectively) if $\state\in\StatesMin$ ($\state\in\StatesMax$, respectively). Notice that if $\Val(\state,\val)=+\infty$, then for all $\val'\in[\val], \Val(\state,\val')=+\infty$. Therefore, a configuration $(\state,\val)$ cannot reach the target states if and only if $(\state,[\val])$ is not in the attractor of \MinPl to the targets in \hgame. As a consequence, we can compute all such states of \hgame with complexity linear in the size of~\rgame.
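The attractor computation on \hgame is the classical linear-time backward algorithm for two-player reachability games; a sketch with our own encoding follows.

```python
from collections import deque

def attractor(vertices, edges, owner, targets, player):
    """Attractor for `player` to `targets` in a finite two-player game.

    `edges[v]` lists the successors of v.  A vertex of `player` joins
    the attractor as soon as one successor is in it; a vertex of the
    opponent joins once all its successors are in it."""
    preds = {v: [] for v in vertices}
    for v in vertices:
        for u in edges[v]:
            preds[u].append(v)
    # for opponent vertices: number of successors not yet attracted
    count = {v: len(edges[v]) for v in vertices}
    attr = set(targets)
    queue = deque(targets)
    while queue:
        u = queue.popleft()
        for v in preds[u]:
            if v in attr:
                continue
            if owner[v] == player:
                attr.add(v)
                queue.append(v)
            else:
                count[v] -= 1
                if count[v] == 0:
                    attr.add(v)
                    queue.append(v)
    return attr
```

Configurations $(\state,\val)$ with $\Val(\state,\val)=+\infty$ are then exactly those whose region state $(\state,[\val])$ lies outside the attractor of \MinPl.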
We then decompose \rgame in SCCs. By Proposition~\ref{prop:timed-scc-sign}, each SCC is either positive or negative (i.e.\ it contains only positive cycles, or only negative ones). Then, in order to find the sign of a component, it suffices to find one of its simple cycles, for example with a depth-first search, then compute the weight of one play following it.
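Finding one simple cycle inside an SCC, e.g.\ by a depth-first search, can be sketched as follows (a generic graph routine with our own encoding; classifying the sign of the returned cycle is then done by computing the weight interval of one play following it, as described above).

```python
def find_simple_cycle(scc, edges):
    """Return one simple cycle inside a strongly connected component,
    as a list of states starting and ending at the same state, or None
    for a trivial SCC.  DFS from an arbitrary state: the first edge
    leading back to the start closes a simple cycle, since the current
    DFS path visits pairwise distinct states."""
    start = next(iter(scc))
    stack = [(start, iter(edges[start]))]
    on_path = [start]
    seen = {start}
    while stack:
        v, it = stack[-1]
        advanced = False
        for u in it:
            if u not in scc:
                continue            # stay inside the component
            if u == start:
                return on_path + [start]
            if u not in seen:
                seen.add(u)
                on_path.append(u)
                stack.append((u, iter(edges[u])))
                advanced = True
                break
        if not advanced:
            stack.pop()
            if on_path and on_path[-1] == v:
                on_path.pop()
    return None  # trivial SCC: no cycle through `start`
```

In a divergent game, by Proposition~\ref{prop:timed-scc-sign}, the sign of this single cycle determines the sign of the whole SCC.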
As we did for weighted (untimed) games, we then compute values in inverse topological order over the SCCs. Once the values of all configurations in $(s,r)$ appearing in previously considered SCCs have been computed, they are no longer modified in further computation. This is the case, in particular, for all pairs $(s,r)$ that have value $+\infty$, which we precompute from the beginning. In order to resolve a positive SCC of \rgame, we apply \IteOpe to the current piecewise affine function, only modifying the pieces appearing in the SCC, until reaching a fixed point over these pieces. In order to resolve a negative SCC of \rgame, we compute the attractor for \MaxPl to the previously computed SCCs: outside of this attractor, we set the value to $-\infty$. Then, we apply \IteOpe for pieces appearing in the SCC, initialising them to $-\infty$ (equivalently, we compute in the dual game, which is a positive SCC), until reaching a fixed point over these pieces. The next proposition contains the correctness and termination arguments that were presented in Propositions~\ref{prop:positive-scc}, \ref{prop:minus-infty}, and \ref{prop:negative-scc} for the untimed setting: \begin{proposition}\label{prop:VI-timed}
Let $\game$ be a divergent game with no configurations of value
$+\infty$.
\begin{enumerate}
\item\label{item:positive} The value iteration algorithm applied on
a positive SCC of $\rgame$ with $n$ states stabilises after at
most $n$ steps.
\item\label{item:minus-infinity} In a negative SCC, states $(s,r)$
of $\rgame$ of value $-\infty$ are all the ones not in the
attractor for \MaxPl to the targets.
\item\label{item:negative} The value iteration algorithm,
initialised with $-\infty$, applied on a negative SCC of $\rgame$
with $n$ states, and no vertices of value $-\infty$, stabilises
after at most $n$ steps.
\end{enumerate} \end{proposition}
By the complexity results of \cite[Thm.~3]{ABM04}, we obtain a doubly exponential time algorithm computing the value of a divergent weighted timed game. This shows that the value problem is in $2$-\EXP\ for divergent weighted timed games. The proof of \EXP-hardness comes from a reduction from the problem of solving timed games with reachability objectives \cite{JurTri07}. To a reachability timed game, we simply add weights 1 on every transition and 0 on every state, making it a divergent weighted timed game. Then, \MinPl wins the reachability timed game if and only if the value in the weighted timed game is lower than the threshold $\alpha=|\States|\times |\regions \Clocks M|$. A complete proof can be found in Appendix~\ref{app:exp-hardness}.
In an SCC of $\rgame$, the value iteration algorithm of \cite{ABM04} allows us to compute an $\varepsilon$-optimal strategy for both players (for configurations having a finite value), that is constant (delay or fire a transition) over each piece of the piecewise affine value function. As in the untimed setting, we may then compose such $\varepsilon$-optimal strategies to obtain an $\varepsilon'$-optimal strategy in $\game$ ($\varepsilon'$ is greater than $\varepsilon$, but can be controlled with respect to the number of SCCs in $\rgame$).
\paragraph{Class decision} Deciding if a weighted timed game is divergent is \PSPACE-complete. The complete proof is given in Appendix~\ref{app:class-decision}, and is an extension of the untimed setting \NL-complete result, but this time we reason on regions, hence the exponential blowup in complexity: it heavily relies on Proposition~\ref{prop:timed-scc-sign}, as well as the corner-point abstraction to keep a compact representation of plays.
\section{Conclusion}
In this article, we introduced the first decidable class of weighted timed games with arbitrary weights, with no restrictions on the number of clocks. Future work includes the approximation problem for a larger class of weighted timed games (divergent ones where we also allow cycles of weight exactly $0$), already studied with only non-negative weights in~\cite{BJM15}.
\appendix
\section{Technical lemmas regarding value iteration
algorithms}\label{app:technical}
\begin{lemma}[untimed setting notations]\label{lem:valite}
For all $i<j\in\N$, if $\vec x^j\neq \vec x^i$ then for all
$\vertex\in\Vertices$ there exists $\vertex'$ and a play \play from
\vertex to $\vertex'$ with $|\play|=j-i$ and
$\weight(\play)=\vec x^j_\vertex-\vec x^i_{\vertex'}$. \end{lemma} \begin{proof} Let us fix $i$, and prove it by induction on $j>i$.
\textit{Initialisation} : If $j=i+1$, we applied one step of the value iteration algorithm between $\vec x^i$ and $\vec x^j$, so for all $\vertex$ there exists $\vertex'$ and an edge $\edge=\vertex\rightarrow\vertex'$ such that $\weight(\edge)=\vec x^{i+1}_\vertex-\vec x^i_{\vertex'}$.
\textit{Iteration} : We assume the property holds for $j-1>i$, and $\vec x^j\neq \vec x^i$. We applied one step of the value iteration algorithm between $\vec x^{j-1}$ and $\vec x^j$, so for all \vertex there exists $\vertex'$ and an edge $\edge=\vertex\rightarrow\vertex'$ such that $\weight(\edge)=\vec x^j_\vertex-\vec x^{j-1}_{\vertex'}$. We apply the property on $i$ and $j-1$ ($\vec x^{j-1}\neq \vec x^i$
because $\vec x^j\neq \vec x^i$ and as soon as $\vec x$ stabilises, the fixed point is reached and the iteration stops), and obtain that for all $\vertex'\in\Vertices$ there exists $\vertex''$ and a play \play from $\vertex'$ to $\vertex''$ with $|\play|=j-1-i$ and $\weight(\play)=\vec x^{j-1}_{\vertex'}-\vec x^i_{\vertex''}$. Then we define
$\play'=\vertex\rightarrow\vertex'\xrightarrow{\play}\vertex''$ and it holds that $|\play'|=j-i$ and $\weight(\play')=\vec x^j_{\vertex}-\vec x^i_{\vertex''}$.\qed \end{proof}
\begin{lemma}[timed setting notations]\label{lem:valite-timed}
For all $i<j\in\N$, if $\vec x^j\neq \vec x^i$ then, for all
configurations $(\state,\val)$, there exists $(\state',\val')$ and a
play \play from $(\state,\val)$ to $(\state',\val')$ with
$|\play|=j-i$ and
$\weight(\play)=\vec x^j_{(\state,\val)}-\vec
x^i_{(\state',\val')}$. \end{lemma} \begin{proof} Let us fix $i$, and prove it by induction on $j>i$.
\textit{Initialisation} : If $j=i+1$, we applied \IteOpe once between $\vec x^i$ and $\vec x^j$, so for all configurations $(\state,\val)$ there exists $(\state',\val')$ and a transition $(\state,\val)\xrightarrow{d,\trans}(\state',\val')$ of weight $\vec x^j_{(\state,\val)}-\vec x^i_{(\state',\val')}$.
\textit{Iteration} : We assume the property holds for $j-1>i$, and $\vec x^j\neq \vec x^i$. We applied \IteOpe once between $\vec x^{j-1}$ and $\vec x^j$, so for all configurations $(\state,\val)$, there exists $(\state',\val')$ and a transition $(\state,\val)\xrightarrow{d,\trans}(\state',\val')$ of weight $\vec x^j_{(\state,\val)}-\vec x^{j-1}_{(\state',\val')}$. We apply the property on $i$ and $j-1$ ($\vec x^{j-1}\neq \vec x^i$ because $\vec x^j\neq \vec x^i$ and as soon as $\vec x$ stabilises, the fixed point is reached and the iteration stops), and obtain that for all configurations $(\state',\val')$, there exists $(\state'',\val'')$ and a play \play from $(\state',\val')$ to $(\state'',\val'')$ with
$|\play|=j-1-i$ and $\weight(\play)=\vec x^{j-1}_{(\state',\val')}-\vec x^i_{(\state'',\val'')}$. Then we define $\play'=(\state,\val) \xrightarrow{d,\trans}(\state',\val')
\xrightarrow{\play}(\state'',\val'')$ and it holds that $|\play'|=j-i$ and $\weight(\play')=\vec x^j_{(\state,\val)}-\vec x^i_{(\state'',\val'')}$.\qed \end{proof}
\section{FOG and corner-point abstraction}\label{app:FOG}
If \val is a valuation and $\varepsilon>0$, $\Ball(\val,\varepsilon)$ denotes the open ball of radius $\varepsilon$ centered at $\val$ for the infinity norm $\|.\|_\infty$ over $\Rplus^\Clocks$: $\|\val-\val'\|_\infty = \max_{x\in\Clocks}|\val(x)-\val'(x)|$. We let $W$ be the biggest weight appearing in \game in absolute value. If \play and $\overline{\play}$ are plays following, respectively, a path \rpath in \game and its copy in $\overline{\game}$, we denote by $d(\play,\overline{\play})$ the distance between those two plays, defined as the sum of the differences in absolute value between the delays on the edges of \play and $\overline{\play}$. By the triangle inequality, we obtain
$|\weight(\play)-\weight(\overline{\play})|\leq W d(\play,\overline{\play})$, since the same transitions are fired in \play and $\overline{\play}$, with only different delays. We will now relate an edge $\regv\xrightarrow{\overline{\play}}\regv'$ of a FOG with an actual play $\play$ in the original timed game, while controlling the distance between $\weight(\play)$ and $\weight(\overline{\play})$:
\begin{lemma}
Let \rpath be a cycle of \rgame, and
$\regv\xrightarrow{\overline{\play}}\regv'$ be an edge of
$\FOG(\rpath)$. For all $\varepsilon>0$ and
$\val\in r\cap\Ball(\regv,\varepsilon)$, there exists
$\val'\in r\cap\Ball(\regv',\varepsilon)$ and a play \play in \game from \val to $\val'$ following \rpath such that
$|\weight(\play)-\weight(\overline{\play})|\leq 2\varepsilon
|\rpath| W$. \end{lemma} \begin{proof}
By the previous explanation, it is sufficient to find a play $\play$
such that $d(\play,\overline{\play})\leq 2\varepsilon |\rpath|$. By
induction, it is sufficient to prove a similar result only for a
single edge
$(\state,r)\xrightarrow{\trans=(\state,g,Y,\state')} (\state',r')$
of the region automaton $\rgame$, between regions~$r$ and $r'$. We
thus consider a play
$(\state,\regv) \xrightarrow{d,\trans}(\state',\regv')$ in the
closed timed game $\overline{\game}$ from a
corner~$\regv\in\overline{r}$ to a corner~$\regv'\in\overline{r'}$.
Consider a valuation $\val\in r\cap\Ball(\regv,\varepsilon)$. We now
explain how to construct a valuation
$\val'\in r'\cap\Ball(\regv',\varepsilon)$ and $d'\geq 0$ such that
$\val\xrightarrow{d',\trans}\val'$ is a valid play in $\game$ and
$|d-d'|\leq 2\varepsilon$, which implies the lemma.
Let $r''$ be the time successor region of~$r$ such that
$r''[Y\leftarrow 0]=r'$. We let $\regv''=\regv+d$ be the valuation
of~$\overline{r''}$, just before the possible resets of the clocks
of~$Y$ in~$\overline\game$: $\regv'=\regv''[Y\leftarrow 0]$. Then,
the timed successors of $\val$, i.e.\ the affine line
$\val+(1,1,\ldots,1)\R$, intersect the set
$r''\cap\Ball(\regv'',\varepsilon)$ in a valuation $\val''$: indeed,
lines obtained by time elapsing starting from \val and \regv are
parallel, and $r''$ is a time successor of~$r$. There exists $d'$ such that
$\val''=\val+d'$. Moreover,
$d'=\|\val-\val''\|_\infty\leq \|\val-\regv\|_\infty +
\|\regv-\regv''\|_\infty + \|\regv''-\val''\|_\infty \leq 2
\varepsilon + d$, and
$d=\|\regv-\regv''\|_\infty\leq \|\regv-\val\|_\infty + \|\val-\val''\|_\infty + \|\val''-\regv''\|_\infty \leq 2 \varepsilon + d'$ so that $|d-d'|\leq 2\varepsilon$. Letting
$\val'=\val''[Y\leftarrow 0]$, we have
$\val\xrightarrow{d',\trans}\val'$ and
$\val'\in r'\cap\Ball(\regv',\varepsilon)$. \qed \end{proof}
This lemma is stronger than Lemma~\ref{lem:fog-exec}, and thus proves it.
\section{Proofs of correctness and termination of the algorithm for
divergent weighted timed games}
\begin{proof}[of Proposition~\ref{prop:VI-timed}-\ref{item:positive}]
Let $W=\max_{a\in\Trans\cup\States} |\weight(a)|$ be the greatest
weight in the game. There are no negative cycles in the SCC,
therefore there are no configurations with value $-\infty$, and all
values are finite. Let $K$ be a bound on the values
$|\vec x^n_{(\state,\val)}|$ obtained after $n$ steps of the
algorithm.\footnote{The value iteration emulates the attractor
computation, so every value is finite after $n$ steps. Moreover,
functions $(\state,\val)\mapsto \vec x^n_{(\state,\val)}$ are
piecewise affine with a finite number of pieces over a compact
space, allowing us to obtain this uniform bound $K$.} Let us fix
an integer $p>(2K+(n-1)^2(W+MW))n$. We will show that the values
obtained after $n+p$ steps are identical to those obtained after $n$ steps. Therefore, since the algorithm computes non-increasing sequences of values, it has indeed stabilised after $n$ steps.
Let us assume the existence of a configuration $(\state,\val)$ such
that $\vec x^{n+p}_{(\state,\val)}<\vec x^{n}_{(\state,\val)}$. By
induction on $p$, we can show (see Lemma~\ref{lem:valite-timed} in
Appendix~\ref{app:technical} for a detailed proof) the existence of
a configuration $(\state',\val')$ and a finite play \play from
$(\state,\val)$ to $(\state',\val')$, with length $p$ and weight
$\vec x^{n+p}_{(\state,\val)}-\vec x^{n}_{(\state',\val')}$: the
play is composed of the delays and transitions that optimise
successively the min/max operator in \IteOpe. This finite play
being of length greater than $(2K+(n-1)^2(W+MW))n$, if we associate
each visited configuration $(\state,\val)$ to the state
$(\state,[\val])$ of \rgame, there is at least one state of \rgame
appearing more than $2K+(n-1)^2(W+MW)$ times. Thus, it can be
decomposed into at least $2K+(n-1)^2(W+MW)$ plays following cycles
of \rgame and at most $(n-1)$ finite plays $\play'_i$ visiting each
state of \rgame at most once.
All cycles of the SCC being positive, and each $\play'_i$'s weight being bounded from below by $-(n-1)(W+MW)$, the weight of \play is at least $(2K+(n-1)^2(W+MW)) - (n-1)^2(W+MW)=2K$. Then,
$\vec x^{n+p}_{(\state,\val)}-\vec x^{n}_{(\state',\val')} \geq 2K$,
so
$\vec x^{n+p}_{(\state,\val)} \geq 2K + \vec x^{n}_{(\state',\val')}
\geq 2K - K \geq K$. But $K \geq \vec x^{n}_{(\state,\val)}$, so
$\vec x^{n+p}_{(\state,\val)}\geq\vec x^{n}_{(\state,\val)}$, and
that is a contradiction.\qed \end{proof}
Much like in the untimed setting, negative SCCs can be resolved using a dual method. First, we characterise the $-\infty$ values as regions of \hgame where $\MaxPl$ cannot unilaterally guarantee reaching the targets.
\begin{proof}[of Proposition~\ref{prop:VI-timed}-\ref{item:minus-infinity}]
Consider a state $(\state,r)$ of \hgame in the attractor for
$\MaxPl$ to the targets. Then, if \MaxPl applies a winning
memoryless strategy for the reachability objective to the target
states, for all $\val\in r$, all strategies of \MinPl will generate
a play from $(\state,\val)$ reaching a target after at most
$|\hgame|$ steps. This implies that $(\state,\val)$ has a finite
(lower) value in the game.
Reciprocally, if $(\state,r)$ is not in the attractor, by
determinacy of timed games with reachability objectives, for all
$\val\in r$, \MinPl has a (memoryless) strategy $\minstrategy$ to
ensure that no strategy of \MaxPl permits to reach a target state
from $(\state,\val)$. Applying $\minstrategy$ long enough to
generate a play following many negative cycles, before switching to
a strategy allowing \MinPl to reach the target (such a strategy
exists since no configuration has value $+\infty$ in the game),
allows \MinPl to obtain from $(\state,\val)$ a negative weight as
small as possible. Thus, $(\state,\val)$ has value~$-\infty$. \qed \end{proof}
Thus, given a negative SCC, we can compute configurations of value $-\infty$ in time polynomial in the SCC's size. Then, finite values of other configurations can be computed by applying \IteOpe.
\begin{proof}[of Proposition~\ref{prop:VI-timed}-\ref{item:negative}]
Given a negative SCC of $\game$ with no remaining configurations of value $+\infty$ or $-\infty$, consider the dual (positive) SCC $\widetilde \game$ obtained by:
\begin{inparaenum}[($i$)]
\item switching states of $\MinPl$ and $\MaxPl$;
\item taking the opposite of every weight in states and transitions.
\end{inparaenum}
Sets of strategies of both players are exchanged in those two games,
so that the upper value in $\game$ is equal to the opposite of the
lower value in $\widetilde \game$, and vice versa. Since weighted
games are determined, the value of $\game$ is the opposite of the
value of $\widetilde\game$. Then, the value of $\game$ can be
deduced from the value of $\widetilde\game$, for which
Proposition~\ref{prop:VI-timed}-\ref{item:positive} applies.
It is then immediate that the values obtained by this computation of the smallest fixed point of $\IteOpe$ are exactly the opposites of the values computed in the dual positive SCC.\qed \end{proof}
\section{\EXP-hardness of the value problem in divergent weighted
timed games}\label{app:exp-hardness}
Let us show \EXP-hardness of the value problem on divergent weighted timed games by a reduction from the reachability problem in timed games \cite{JurTri07}. Consider an instance of timed game reachability with a timed game $\mathcal A$ with target states and a configuration $(\state,\val)$. Let us call \MinPl the player trying to enforce reachability of a target state in $\mathcal A$ and \MaxPl the one opposing it. We construct from $\mathcal A$ a weighted timed game \game by considering the same game with all state weights at $0$ and all transition weights at $1$. Notice that \game is divergent. Let us define $\alpha$ as an upper bound on the number of states in the region automaton of $\mathcal A$, using classical bounds on the number of regions: $\alpha=|\States|(4(M+1))^{|\Clocks|}|\Clocks|!$ ($M$ is the upper bound on clock values). We consider the instance of the value problem defined with \game, the configuration $(\state,\val)$, and the bound $\alpha$. Then $\Val_\game(\state,\val)\leq \alpha$ if and only if \MinPl can ensure reaching a target state from $(\state,\val)$ in $\mathcal A$.
One direction of this equivalence follows directly from the definition of having a value smaller than $+\infty$. The other comes from the fact that reachability in $\mathcal A$ implies reachability in the region game of $\mathcal A$ in at most $\alpha$ transitions, which in turn implies reachability in \game from the configuration $(\state,\val)$ to a target state with weight at most $\alpha$, therefore $\Val_\game(\state,\val)\leq \alpha$.
Notice that this reduction is polynomial in the size of $\mathcal A$ because the bound $\alpha$ can be encoded and computed in binary.
\section{Deciding divergence for weighted timed
games}\label{app:class-decision}
Let us show how to decide if a game is \emph{not divergent}. By Proposition~\ref{prop:timed-scc-sign}, it suffices to search for an SCC of the region automaton containing a non-negative simple cycle (i.e.\ there exists a play following it of weight in $(-1,+\infty)$) and a non-positive one (i.e.\ there exists a play following it of weight in $(-\infty,1)$). Since simple cycles have a length bounded by
$\alpha=|\States|\times |\regions\Clocks M|=|\States|(4(M+1))^{|\Clocks|}|\Clocks|!$, we simply need to test the existence of two cycles of length at most $\alpha$, in the same SCC, one being non-negative and the other non-positive: notice that this condition indeed implies the non-divergence by the same proof as in Proposition~\ref{prop:timed-scc-sign}. Finally, notice that, by Lemma~\ref{lem:fog-exec}, given a cycle of the region automaton, we can decide if there exists a play following it with weight in $(-\infty,1)$ (respectively, $(-1,+\infty)$), by guessing a play following the corners in $\overline\game$ (game where strict inequalities in guards are replaced with non-strict ones) and checking that its accumulated weight is an integer in $(-\infty,0]$ (respectively, $[0,+\infty)$). We test this condition in $\NPSPACE$, by the same techniques as in the untimed setting: we guess a starting region, and a starting corner, for both cycles, we check in polynomial space that the regions are in the same SCC of $\rgame$, and we guess on-the-fly the two cycles, i.e.\ the sequences of regions with one of their corners, keeping in memory their accumulated weight, stopping when we have found two cycles of $\rgame$, or if their length becomes larger than $\alpha$. Note the accumulated weights are integers bounded (in absolute value) by
$\alpha\times \max_{a\in\Trans\cup\States} |\weight(a)|$, and can thus be stored in polynomial space. This shows that deciding the divergence is in $\co\NPSPACE=\NPSPACE=\PSPACE$ (using the theorems of Immerman-Szelepcs\'enyi~\cite{Imm88,Sze88} and Savitch~\cite{Sav70}).
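A deterministic stand-in for this on-the-fly guessing can be sketched as a bounded depth-first search over the corner graph. The encoding and helper names below are ours; the sketch only examines cycles on their first return to the start vertex, whereas the real procedure is nondeterministic and runs in polynomial space.

```python
def has_cycle_with_sign(graph, weight, start, alpha, nonneg=True):
    """Bounded DFS over a corner graph (nodes: region-corner pairs).

    Searches for a cycle through `start` of length <= alpha whose
    accumulated integer weight is >= 0 (nonneg=True) or <= 0
    (nonneg=False).  `weight[(v, u)]` is the integer weight of the
    corner edge from v to u."""
    def dfs(v, length, acc):
        if length > 0 and v == start:
            # first return to the start: test the accumulated weight
            return acc >= 0 if nonneg else acc <= 0
        if length == alpha:
            return False  # cycles longer than alpha are not examined
        return any(dfs(u, length + 1, acc + weight[(v, u)])
                   for u in graph[v])
    return dfs(start, 0, 0)
```

Finding such a witness for both signs within a single SCC certifies non-divergence.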
Let us now show the $\PSPACE$-hardness (indeed the $\co\PSPACE$-hardness, which is identical) by a reduction from the reachability problem in a timed automaton. As in the untimed setting, we consider a timed automaton with a starting state and a different target state without outgoing transitions. We construct from it a weighted timed game by assigning all states to \MinPl, equipping all transitions with weight $1$, and all states with weight $0$. We also add a loop with weight $-1$ on the target state, and a transition from the target state to the initial state with weight $0$, both resetting all clocks and bounded by a delay of 1. Then, the weighted timed game is not divergent if and only if the target can be reached from the initial state in the timed automaton.
\end{document}
\begin{document}
\title[Anti-self-dual orbifolds]{Anti-self-dual orbifolds with cyclic\\ quotient singularities} \author{Michael T. Lock} \address{Department of Mathematics, University of Wisconsin, Madison, WI, 53706} \email{lock@math.wisc.edu} \author{Jeff A. Viaclovsky} \address{Department of Mathematics, University of Wisconsin, Madison, WI, 53706} \email{jeffv@math.wisc.edu} \thanks{Research partially supported by NSF Grants DMS-0804042 and DMS-1105187} \begin{abstract} An index theorem for the anti-self-dual deformation complex on anti-self-dual orbifolds with cyclic quotient singularities is proved. We present two applications of this theorem. The first is to compute the dimension of the deformation space of the Calderbank-Singer scalar-flat K\"ahler toric ALE spaces. A corollary of this is that, except for the Eguchi-Hanson metric, all of these spaces admit non-toric anti-self-dual deformations, thus yielding many new examples of anti-self-dual ALE spaces. For our second application, we compute the dimension of the deformation space of the canonical Bochner-K\"ahler metric on any weighted projective space $\mathbb{CP}^2_{(r,q,p)}$ for relatively prime integers $1 < r < q < p$. A corollary of this is that, while these metrics are rigid as Bochner-K\"ahler metrics, infinitely many of these admit non-trivial self-dual deformations, yielding a large class of new examples of self-dual orbifold metrics on certain weighted projective spaces. \end{abstract} \date{May 17, 2012} \maketitle
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction} \label{intro}
If $(M^{4},g)$ is an {\em{oriented}} four-dimensional Riemannian manifold, the Hodge star operator $*:\Lambda^{2} \rightarrow \Lambda^{2}$ satisfies $*^2 = Id$, and induces the decomposition on the space of $2$-forms $\Lambda^{2}=\Lambda^{2}_{+} \oplus\Lambda^{2}_{-}$, where $\Lambda^{2}_{\pm}$ are the $\pm 1$ eigenspaces of $*$. The Weyl tensor can be viewed as an operator $\mathcal{W}_g: \Lambda^2 \rightarrow \Lambda^2$, so this decomposition enables us to decompose the Weyl tensor as $\mathcal{W}_g = \mathcal{W}^{+}_g + \mathcal{W}^{-}_g$, into the self-dual and anti-self-dual Weyl tensors, respectively. The metric $g$ is called {\em{anti-self-dual}} if $\mathcal{W}^+_g = 0$, and $g$ is called {\em{self-dual}} if $\mathcal{W}^-_g = 0$. Note that, by reversing orientation, a self-dual manifold is converted into an anti-self-dual manifold, and vice versa. There are now so many known examples of anti-self-dual metrics on various compact four-manifolds that it is difficult to give a complete list here, and we refer the reader to \cite{ViaclovskyIndex} for a recent list of references.
The deformation theory of anti-self-dual metrics is roughly analogous to the theory of deformation of complex structures. If $(M,g)$ is an anti-self-dual four-manifold, the anti-self-dual deformation complex is given by \begin{align} \label{thecomplex} \Gamma(T^*M) \overset{\mathcal{K}_g}{\longrightarrow} \Gamma(S^2_0(T^*M)) \overset{\mathcal{D}}{\longrightarrow} \Gamma(S^2_0(\Lambda^2_+)), \end{align} where $\mathcal{K}_g$ is the conformal Killing operator defined by \begin{align} ( \mathcal{K}_g(\omega))_{ij} = \nabla_i \omega_j + \nabla_j \omega_i - \frac{1}{2} (\delta \omega) g, \end{align} with $\delta \omega = \nabla^i \omega_i$, $S^2_0(T^*M)$ denotes traceless symmetric tensors, and $\mathcal{D} = (\mathcal{W}^+)_g'$ is the linearized self-dual Weyl curvature operator.
If $M$ is a compact manifold then there is a formula for the index depending only upon topological quantities. The analytical index is given by \begin{align} Ind(M, g) = \dim( H^0(M,g)) - \dim( H^1(M,g)) + \dim( H^2(M,g)), \end{align} where $H^i(M,g)$ is the $i$th cohomology of the complex \eqref{thecomplex}, for $i = 0,1,2$. The index is given in terms of topology via the Atiyah-Singer index theorem \begin{align} \label{manifoldindex} Ind(M, g) = \frac{1}{2} ( 15 \chi(M) + 29 \tau(M)), \end{align} where $\chi(M)$ is the Euler characteristic and $\tau(M)$ is the signature of $M$, see \cite{KotschickKing}.
The cohomology groups of the complex \eqref{thecomplex} yield information about the local structure of the moduli space of anti-self-dual conformal classes, which we briefly recall \cite{Itoh2, KotschickKing}. There is a map \begin{align} \Psi: H^1(M,g) \rightarrow H^2(M,g) \end{align} called the {\em{Kuranishi map}} which is equivariant with respect to the action of $H^0$, and the moduli space of anti-self-dual conformal structures near $g$ is locally isomorphic to $\Psi^{-1}(0) / H^0$. Therefore, if $H^2 = 0$, the moduli space is locally isomorphic to $H^1 / H^0$.
In this paper, we will be concerned with orbifolds in dimension four with isolated singularities modeled on $\mathbb{R}^4 / \Gamma$, where $\Gamma$ is a finite subgroup of ${\rm{SO}}(4)$ acting freely on $\mathbb{R}^4 \setminus \{0 \}$. We will say that $(M,g)$ is a {\em{Riemannian orbifold}} if $g$ is a smooth metric away from the singular points, and at any singular point, the metric is locally the quotient of a smooth $\Gamma$-invariant metric on $B^4$ under the orbifold group $\Gamma$.
The above results regarding the Kuranishi map are also valid for anti-self-dual Riemannian orbifolds. However, the index formula \eqref{manifoldindex} does not hold without adding a correction term. In \cite{Kawasaki}, Kawasaki proved a version of the Atiyah-Singer index theorem for orbifolds, and gave a general formula for the correction term. Our first result is an explicit formula for this correction term for the complex \eqref{thecomplex} in the case that $\Gamma$ is an action of a cyclic group. In order to state this, we first make some definitions.
For $1 \leq q < p$ relatively prime integers, we denote by $\Gamma_{(q,p)}$ the cyclic action \begin{align} \label{qpaction} \left( \begin{matrix} e^{2 \pi i k / p} & 0 \\ 0 & e^{2 \pi i k q / p } \\ \end{matrix} \right), \ \ 0 \leq k < p, \end{align} acting on $\mathbb{R}^4$, which we identify with $\mathbb{C}^2$ using $z_1 = x_1 + i x_2$, and $z_2 = x_3 + i x_4$. We will also refer to this action as a type $(q,p)$-action. \begin{definition}{\em A group action $\Gamma_1: G \rightarrow {\rm{SO}(4)}$ is {\em{conjugate}} to another group action $\Gamma_2: G \rightarrow {\rm{SO}(4)}$ if there exists an element $O \in O(4)$ such that for any $g \in G$, $\Gamma_1(g) \circ O = O \circ \Gamma_2(g)$. If $O \in {\rm{SO}}(4)$, then the actions are said to be orientation-preserving conjugate, while if $O \notin {\rm{SO}}(4)$, the actions are said to be orientation-reversing conjugate. } \end{definition}
\begin{remark} \label{actrem} {\em We note the important fact that if $\Gamma$ is an $\rm{SO}(4)$ representation of a cyclic group, then $\Gamma$ is orientation-preserving conjugate to a $\Gamma_{(q,p)}$-action \cite{MCC}; we therefore only need consider the $\Gamma_{(q,p)}$-actions. Furthermore, for $1 \leq q , q' < p$, if a $\Gamma_{(q,p)}$-action is orientation-preserving conjugate to a $\Gamma_{(q',p)}$-action then $q' = q$ or $q q' \equiv 1 \mod p$. We also note that a $\Gamma_{(q,p)}$-action is orientation-reversing conjugate to a $\Gamma_{(p-q,p)}$-action. } \end{remark} We will employ the following modified Euclidean algorithm. For $1 \leq q < p$ relatively prime integers, write \begin{align} \begin{split}\label{mea} p&=e_1q-a_1\\ q&=e_2a_1-a_2\\ &\hspace{2mm} \vdots \\ a_{k-2}&=e_ka_{k-1}, \end{split} \end{align} where $e_i \geq 2$, and $0 \leq a_i < a_{i-1}$; since $p$ and $q$ are relatively prime, the final nonzero remainder is $a_{k-1}=1$. This can also be written as the continued fraction expansion \begin{align} \frac{p}{q} = e_1 - \cfrac{1}{e_2 - \cfrac{1}{\cdots - \cfrac{1}{e_k}}}. \end{align} We refer to the integer $k$ as the {\em{length}} of the modified Euclidean algorithm.
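The modified Euclidean algorithm is easily carried out mechanically. The following Python sketch (an illustration for the reader, not part of the proofs) computes the Hirzebruch-Jung expansion $p/q = e_1 - 1/(e_2 - \cdots - 1/e_k)$ by repeatedly taking ceilings, and verifies the continued fraction exactly:

```python
from fractions import Fraction

def modified_euclid(q, p):
    """Hirzebruch-Jung expansion of p/q for coprime 1 <= q < p:
    returns [e_1, ..., e_k] with every e_i >= 2, computed via
    p = e_1*q - a_1, q = e_2*a_1 - a_2, ... (ceiling algorithm)."""
    es, a, b = [], p, q
    while b > 0:
        e = -(-a // b)            # ceiling of a/b
        es.append(e)
        a, b = b, e * b - a       # next remainder a_i = e_i*a_{i-1} - a_{i-2}
    return es

def continued_fraction(es):
    """Evaluate e_1 - 1/(e_2 - 1/(... - 1/e_k)) exactly."""
    value = Fraction(es[-1])
    for e in reversed(es[:-1]):
        value = e - 1 / value
    return value

# q = 1 gives k = 1, e_1 = p; q = p - 1 gives e_i = 2 and length k = p - 1.
assert modified_euclid(1, 5) == [5]
assert modified_euclid(4, 5) == [2, 2, 2, 2]
assert modified_euclid(3, 7) == [3, 2, 2]
assert continued_fraction(modified_euclid(3, 7)) == Fraction(7, 3)
```

The last two assertions illustrate that the expansion of $7/3$ has length $k=3$ with $(e_1,e_2,e_3)=(3,2,2)$.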
Our main theorem expresses the correction term in the index theorem in terms of the $e_i$ and the length of the modified Euclidean algorithm: \begin{theorem} \label{mainthm} Let $(M,g)$ be a compact anti-self-dual orbifold with a single orbifold point of type $(q,p)$. The index of the anti-self-dual deformation complex on $(M,g)$ is given by \begin{align} Ind(M,g)= \begin{cases}
\displaystyle \frac{1}{2}(15\chi_{top}+29\tau_{top})+\sum_{i=1}^{k}4e_i-12k-2 &\text{ when $q \neq p-1$}\\
\displaystyle \frac{1}{2}(15\chi_{top}+29\tau_{top}) - 4p + 4 &\text{ when $q=p-1$}. \end{cases} \end{align} \end{theorem} In some other special cases, the correction term may be written directly in terms of $p$. For example, if $q = 1$, and $p > 2$, we have \begin{align} \sum_{i=1}^{k}4e_i-12k-2 = 4p -14. \end{align} We note that the cases $q = 1$ and $q = p-1$ were proved earlier in \cite{ViaclovskyIndex} using a different method.
\begin{remark} \label{srmk} {\em While Theorem \ref{mainthm} is stated in the case of a single orbifold point for simplicity, if a compact anti-self-dual orbifold has several cyclic quotient orbifold points, then a similar formula holds, with the correction term simply being the sum of the corresponding correction terms for each type of orbifold point. } \end{remark}
\subsection{Asymptotically locally Euclidean spaces}
Many interesting examples of anti-self-dual metrics are complete and non-compact. Given a compact Riemannian orbifold $(\hat{X}, \hat{g})$ with non-negative scalar curvature, letting $G_p$ denote the Green's function for the conformal Laplacian associated with any point $p$, the non-compact space $X = \hat{X} \setminus \{p\}$ with metric $g_p = G_p^{2}\hat{g}$ is a complete scalar-flat orbifold. Inverted normal coordinates in the metric $\hat{g}$ in a neighborhood of the point $p$ give rise to a coordinate system in a neighborhood of infinity of $X$, which motivates the following: \begin{definition} \label{ALEdef} {\em
A noncompact Riemannian orbifold $(X^4,g)$ is called {\em{asymptotically locally Euclidean}} or {\em{ALE}} of order $\tau$ if there exists a finite subgroup $\Gamma \subset {\rm{SO}}(4)$ acting freely on $\mathbb{R}^4 \setminus \{0\}$, and a diffeomorphism $\phi : X \setminus K \rightarrow ( \mathbb{R}^4 \setminus B(0,R)) / \Gamma$ where $K$ is a compact subset of $X$, satisfying $(\phi_* g)_{ij} = \delta_{ij} + O( r^{-\tau})$ and
$\partial^{|k|} (\phi_*g)_{ij} = O(r^{-\tau - k })$ for any partial derivative of order $k$, as $r \rightarrow \infty$, where $r$ is the distance to some fixed basepoint. } \end{definition}
An {\em{orbifold compactification}} of an ALE space $(X,g)$, is a choice of a conformal factor $u : X \rightarrow \mathbb{R}_+$ such that $u = O(r^{-2})$ as $r \rightarrow \infty$. The space $(X, u^2 g)$ then compactifies to a $C^{1,\alpha}$ orbifold. If $(X,g)$ is anti-self-dual, then there moreover exists a $C^{\infty}$-orbifold conformal compactification $(\hat{X}, \hat{g})$ \cite[Proposition 12]{CLW}.
\begin{remark}{\em It is crucial to note that if $(X,g)$ is an anti-self-dual ALE space with a $\Gamma$-action at infinity, then the conformal compactification $(\hat{X}, \hat{g})$ with the anti-self-dual orientation has a $\tilde{\Gamma}$-action at the orbifold point where $\tilde{\Gamma}$ is orientation-reversing conjugate to $\Gamma$. In the case of a cyclic group, if the action at infinity of the anti-self-dual ALE space $(X,g)$ is of type $(q,p)$, then the action at the orbifold point of the compactification $(\hat{X}, \hat{g})$ with the anti-self-dual orientation is of type $(p-q,p)$. } \end{remark}
Many examples of anti-self-dual ALE spaces with nontrivial group at infinity have been discovered. The first non-trivial example was due to Eguchi and Hanson, who found a Ricci-flat anti-self-dual metric on $\mathcal{O}(-2)$ which is ALE with group $\mathbb{Z} / 2 \mathbb{Z}$ at infinity \cite{EguchiHanson}. Gibbons-Hawking then wrote down a metric ansatz depending on the choice of $n$ monopole points in $\mathbb{R}^3$, giving anti-self-dual ALE hyperk\"ahler metrics with cyclic action at infinity contained in ${\rm{SU}}(2)$, which are called multi-Eguchi-Hanson metrics \cite{GibbonsHawking, Hitchin2}.
Using the Joyce construction from \cite{Joyce1995}, Calderbank and Singer produced many examples of toric ALE anti-self-dual metrics, which are moreover scalar-flat K\"ahler, and have cyclic groups at infinity contained in ${\rm{U}}(2)$ \cite{CalderbankSinger}. For a type $(q,p)$-action, the space $X$ is the minimal Hirzebruch-Jung resolution of $\mathbb{C}^2/ \Gamma_{(q,p)}$, with exceptional divisor given by the union of $2$-spheres $S_1 \cup \cdots \cup S_k$, with intersection matrix \begin{align} \label{intersect} (S_i \cdot S_j) = \left( \begin{matrix} -e_1 & 1 & 0 & \cdots & 0 \\ 1 & - e_2 & 1 & \cdots & 0 \\ 0 & 1 & - e_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & - e_k \\ \end{matrix} \right), \end{align} where the $e_i$ and $k$ are defined above in \eqref{mea} with $e_i \geq 2$.
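As a quick consistency check relating \eqref{mea} to the intersection matrix \eqref{intersect}: it is a standard fact (not needed in the sequel) that the determinant of the Hirzebruch-Jung intersection matrix has absolute value $p$. The following Python sketch, illustrative only, verifies this using the three-term recurrence for tridiagonal determinants:

```python
def modified_euclid(q, p):
    # Hirzebruch-Jung expansion of p/q (ceiling algorithm), each e_i >= 2.
    es, a, b = [], p, q
    while b > 0:
        e = -(-a // b)
        es.append(e)
        a, b = b, e * b - a
    return es

def hj_determinant(es):
    # Determinant of the tridiagonal matrix with diagonal -e_1, ..., -e_k
    # and off-diagonal entries 1, via the standard recurrence
    # D_i = -e_i * D_{i-1} - D_{i-2}.
    d_prev, d = 1, -es[0]
    for e in es[1:]:
        d_prev, d = d, -e * d - d_prev
    return d

# The intersection form of the minimal resolution has determinant +-p.
for (q, p) in [(1, 5), (2, 5), (3, 7), (4, 5), (5, 12)]:
    assert abs(hj_determinant(modified_euclid(q, p))) == p
```

For example, for a $(2,5)$-action the expansion is $[3,2]$ and the matrix $\begin{psmallmatrix}-3&1\\1&-2\end{psmallmatrix}$ has determinant $5$.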
The K\"ahler scalar-flat metric on $X$ is then written down explicitly using the Joyce ansatz from \cite{Joyce1995}. We do not require the details of the construction here, but only note the following: for $q > 1$ the identity component of the isometry group of these metrics is a real $2$-torus, and for $q = 1$, it is ${\rm{U}}(2)$.
When $q = p - 1$, these metrics are the {\em{toric}} Gibbons-Hawking multi-Eguchi-Hanson metrics (when all monopole points are on a common line). In this case $k = p-1$ and $e_i = 2$ for $1 \leq i \leq k$. The moduli space of toric metrics in this case is of dimension $p-2$, but the moduli space of all multi-Eguchi-Hanson metrics is of dimension $3(p-2)$, so it is well-known that these metrics admit non-toric anti-self-dual deformations. When $q = 1$, these metrics agree with the LeBrun negative mass metrics on $\mathcal{O}(-p)$ discovered in \cite{LeBrunnegative}. In this case $k =1$ and $e_1 = p$. For $p > 2$, it was recently shown in \cite{HondaOn, ViaclovskyIndex} that these spaces also admit non-toric anti-self-dual deformations. Theorem \ref{CScor} will give a vast generalization of this phenomenon to the general case $1 < q < p-1$. The proof of Theorem \ref{CScor} relies on the following explicit formula for the index of the complex \eqref{thecomplex} on the conformal compactification of these metrics: \begin{theorem} \label{CSthm} Let $(\hat{X}, \hat{g})$ be the orbifold conformal compactification of a Calderbank-Singer space $(X,g)$ with a $(q,p)$-action at infinity. Then the index of the anti-self-dual deformation complex is given by \begin{align} Ind(\hat{X}, \hat{g}) = \begin{cases} \displaystyle 5k+5-\sum_{i=1}^{k}4e_i &\text{ when $q \neq 1$}\\ \displaystyle-4p+12 &\text{ when $q=1$}, \end{cases} \end{align} where the integers $k$ and $e_i$, $1 \leq i \leq k$, are the integers occurring in the modified Euclidean algorithm defined in \eqref{mea}. \end{theorem} We note that if $q = p - 1$, the index simplifies to $-3 p + 8$. A consequence of the above is that the Calderbank-Singer spaces admit large families of non-toric anti-self-dual deformations, thereby yielding many new examples:
\begin{theorem} \label{CScor} Let $(X,g)$ be a Calderbank-Singer space with a $(q,p)$-action at infinity, and $(\hat{X}, \hat{g})$ be the orbifold conformal compactification. Let $\mathcal{M}_{\hat{g}}$ denote the moduli space of anti-self-dual conformal structures near $(\hat{X}, \hat{g})$. Then, \begin{itemize} \item If $q = 1$ and $p =2$, then $\hat{g}$ is rigid. \item If $q = 1$ and $p = 3$, then $\mathcal{M}_{\hat{g}}$ is locally of dimension $1$. \item If $q = 1$ and $p > 3$, then $\mathcal{M}_{\hat{g}}$ is locally of dimension $4p -12$. \item If $q = p -1$, then $\mathcal{M}_{\hat{g}}$ is locally of dimension $3p -7$. \item If $1 < q < p - 1$, then $\mathcal{M}_{\hat{g}}$ is locally of dimension at least \begin{align} \label{csest} \dim(H^1) - 2 = -5k-5+\sum_{i=1}^{k}4e_i. \end{align} \end{itemize} Consequently, if $p > 2$, these spaces admit non-toric anti-self-dual deformations. \end{theorem} \begin{remark}{\em Theorem \ref{CSthm} could be equivalently stated in terms of the ALE metrics rather than the compactified metrics. However, the definition of the index on an ALE space involves defining certain weighted spaces; see \cite[Proposition 3.1]{ViaclovskyIndex} for the precise formula which relates the index on the ALE space to the index on the compactification; for our purposes here, we only require the statement on the compactification. Similarly, Theorem~\ref{CScor} could be equivalently stated in terms of anti-self-dual ALE deformations of the ALE model. } \end{remark} By a result of LeBrun-Maskit, $H^2(\hat{X}, \hat{g}) = 0$ for these metrics, so the actual moduli space is locally isomorphic to $H^1/ H^0$, \cite[Theorem 4.2]{LeBrunMaskit}. Therefore the moduli space could be of dimension $\dim(H^1)$, $\dim(H^1) - 1$, or $\dim(H^1) - 2$. This action in the toric multi-Eguchi-Hanson case $q = p -1$ is well-known; in this case for $p \geq 3$, $\dim(H^1) = 3p - 6$, and the dimension of the moduli space is equal to $\dim(H^1) -1 = 3p - 7$. 
In the LeBrun negative mass case $q = 1$, this action was recently completely determined by Nobuhiro Honda using arguments from twistor theory \cite{HondaOn}. For $ 1 < q < p-1$, further arguments are needed to determine this action explicitly; this is an interesting problem.
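The simplification of Theorem \ref{CSthm} in the toric multi-Eguchi-Hanson case $q = p-1$ is elementary to check by machine. In the Python sketch below (illustrative only), the expansion \eqref{mea} is computed by the ceiling algorithm, and the index formula $5k+5-\sum 4e_i$ collapses to $-3p+8$:

```python
def modified_euclid(q, p):
    # Hirzebruch-Jung expansion of p/q (ceiling algorithm), each e_i >= 2.
    es, a, b = [], p, q
    while b > 0:
        e = -(-a // b)
        es.append(e)
        a, b = b, e * b - a
    return es

# For q = p - 1 the expansion is [2, 2, ..., 2] of length k = p - 1, so
# 5k + 5 - 4*sum(e_i) = 5(p-1) + 5 - 8(p-1) = -3p + 8.
for p in range(3, 40):
    es = modified_euclid(p - 1, p)
    assert es == [2] * (p - 1)
    assert 5 * len(es) + 5 - 4 * sum(es) == -3 * p + 8
```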
\subsection{Weighted projective spaces} We first recall the definition of weighted projective spaces in real dimension four: \begin{definition} {\em{ For relatively prime integers $1 \leq r \leq q \leq p$, the {\em{weighted projective space}} $\mathbb{CP}^2_{(r,q,p)}$ is $S^{5}/S^1$, where the $S^1$-action is given by \begin{align} (z_0,z_1,z_2)\mapsto (e^{ir\theta}z_0,e^{iq\theta}z_1 ,e^{ip\theta}z_2), \end{align} for $0\leq \theta <2\pi$. }} \end{definition}
The space $\mathbb{CP}^2_{(r,q,p)}$ has the structure of a compact complex orbifold. In \cite{Bryant}, Bryant proved that every weighted projective space admits a Bochner-K\"ahler metric. Subsequently, David and Gauduchon gave a simple and direct construction of these metrics \cite{DavidGauduchon}. Using an argument due to Apostolov, they also showed that this metric is the unique Bochner-K\"ahler metric on a given weighted projective space \cite[Appendix D]{DavidGauduchon}, and thus we will call this metric the {\em{canonical}} Bochner-K\"ahler metric. In complex dimension two, the Bochner tensor is the same as the anti-self-dual part of the Weyl tensor, so Bochner-K\"ahler metrics are the same as self-dual K\"ahler metrics.
The work of Derdzinski \cite{Derdzinski} showed that a self-dual K\"ahler metric $g$ is conformal to a self-dual Hermitian Einstein metric on $M^*:=\{p\in M : R(p)\neq 0\}$, given by $\tilde{g}=R^{-2}g$, where $R$ is the scalar curvature. This conformal metric is not K\"ahler unless $R$ is constant. Conversely, Apostolov and Gauduchon \cite{ApostolovGauduchon} showed that any self-dual Hermitian Einstein metric that is not conformally flat is of the form $\tilde{g}$ for a unique self-dual K\"ahler metric $g$ with $R\neq 0$.
For a weighted projective space $\mathbb{CP}^2_{(r,q,p)}$, there are the following 3 cases: \begin{itemize} \item When $p < r+q$ the canonical Bochner-K\"ahler metric has $R>0$ everywhere, so it is conformal to a Hermitian Einstein metric with positive Einstein constant. \item When $p = r+q$ the canonical Bochner-K\"ahler metric has $R>0$ except at one point, so it is conformal to a complete Hermitian Einstein metric with vanishing Einstein constant outside this point.
\item When $p > r+q$ the canonical Bochner-K\"ahler metric has $R$ vanishing along a hypersurface and the complement is composed of two open sets on which the metric is conformal to a Hermitian Einstein metric with negative Einstein constant. \end{itemize} For $x \in \mathbb{R}$, $\lfloor x \rfloor$ denotes the integer part of $x$, and $\{x\} = x -\lfloor x \rfloor $ denotes the fractional part of $x$. We also define the integer $\epsilon$ by \begin{align} \epsilon = \begin{cases} 0 & \text{if } p \not\equiv q \text{ mod } r \mbox{ and } p \not\equiv r \text{ mod } q\\ 1 & \text{if } p \equiv q \text{ mod } r \mbox{ or } p \equiv r \text{ mod } q, \mbox{ but not both}, \\ 2 & \text{if } p \equiv q \text{ mod } r \mbox{ and } p \equiv r \text{ mod } q. \\ \end{cases} \end{align} Our main result for the index on weighted projective spaces is the following, with the answer depending upon certain number-theoretic properties of the triple $(r,q,p)$: \begin{theorem} \label{introthm} Let $g$ be the canonical Bochner-K\"ahler metric with reversed orientation on $\overline{\mathbb{CP}}^2_{(r,q,p)}$. Assume that $1<r<q<p$. If $r+q\geq p$ then \begin{align} Ind(\overline{\mathbb{CP}}^2_{(r,q,p)},g)=2. \end{align} If $r+q<p$, then \begin{align} Ind(\overline{\mathbb{CP}}^2_{(r,q,p)},g)= \begin{cases} 2 +2\epsilon -4\lfloor \frac{p}{qr} \rfloor &\text{ when $\{ \frac{p}{qr}\}<\{\frac{q^{-1;r}p}{r}\}$}\\ -2 +2\epsilon -4\lfloor \frac{p}{qr} \rfloor &\text{ when $\{ \frac{p}{qr}\}>\{\frac{q^{-1;r}p}{r}\}$}. \end{cases} \end{align} where $q^{-1;r}$ denotes the inverse of $q$ modulo $r$. \end{theorem} We note that in the case $\{ \frac{p}{qr}\}<\{\frac{q^{-1;r}p}{r}\}$, the integer $\epsilon$ can only be $0$ or $1$; the value $\epsilon = 2$ does not actually occur in this case. Thus there are exactly $5$ cases which do in fact all occur, see Section \ref{wpssec}.
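The case analysis defining the integer $\epsilon$ is mechanical; a minimal Python sketch (illustrative only, for pairwise coprime triples $r < q < p$):

```python
def epsilon(r, q, p):
    # Case analysis defining epsilon for a pairwise coprime triple r < q < p.
    c1 = (p - q) % r == 0    # p congruent to q mod r
    c2 = (p - r) % q == 0    # p congruent to r mod q
    return int(c1) + int(c2)

assert epsilon(2, 3, 7) == 1     # 7 = 3 mod 2, but 7 is not 2 mod 3
assert epsilon(2, 3, 11) == 2    # 11 = 3 mod 2 and 11 = 2 mod 3
assert epsilon(3, 4, 5) == 0     # neither congruence holds
```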
Theorem \ref{introthm} implies the following result regarding the moduli space of anti-self-dual metrics on $\overline{\mathbb{CP}}^2_{(r,q,p)}$: \begin{theorem} \label{wpsthm} Let $g$ be the canonical Bochner-K\"ahler metric with reversed orientation on $\overline{\mathbb{CP}}^2_{(r,q,p)}$. Assume that $1<r<q<p$. Then, \begin{itemize} \item If $p \leq q + r$ then $[g]$ is isolated as an anti-self-dual conformal class. \item If $p > q + r$ then the moduli space of anti-self-dual orbifold conformal classes near $g$, $\mathcal{M}_g$, is of dimension at least \begin{align} \dim( \mathcal{M}_g) \geq \begin{cases} 4\lfloor \frac{p}{qr} \rfloor- 2 -2\epsilon &\text{ when $\{ \frac{p}{qr}\}<\{\frac{q^{-1;r}p}{r}\}$}\\ 4\lfloor \frac{p}{qr} \rfloor +2 -2\epsilon &\text{ when $\{ \frac{p}{qr}\}>\{\frac{q^{-1;r}p}{r}\}$}. \end{cases} \end{align} \end{itemize} \end{theorem}
\begin{remark} {\em Since the case $p < q + r$ is conformal to an Einstein metric, it is perhaps not surprising (although not obvious) that these metrics are also isolated as self-dual metrics. But the non-trivial anti-self-dual deformations we have found in the case $p > q +r$ are quite surprising, since these metrics are rigid as Bochner-K\"ahler metrics. } \end{remark}
The proof of Theorem \ref{wpsthm} also relies on the fact that $H^2(M,g) = 0$ for these metrics, see Corollary \ref{h2wps} below. Then as pointed out above, the actual moduli space is locally isomorphic to $H^1/ H^0$, so the moduli space could be of dimension $\dim(H^1)$, $\dim(H^1) - 1$, or $\dim(H^1) - 2$. As in the case of the Calderbank-Singer spaces, we do not determine this action explicitly here; this is another very interesting problem.
\subsection{Outline of paper} We begin in Section \ref{ogi} by recalling Kawasaki's orbifold index theorem, and apply it to the complex \eqref{thecomplex}. Then in Section \ref{gcyclic}, we analyze the correction terms for cyclic group actions, culminating, when $1 < q < p-1$, in the following formula for the index in terms of a trigonometric sum: \begin{align} \begin{split} \label{N(q,p)intro} Ind_{\Gamma}(\hat{M})&=\frac{1}{2}(15\chi_{top}+29\tau_{top})-6+\frac{14}{p}\sum_{j=1}^{p-1}\Big[\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big]\\ &\phantom{==}-\frac{2}{p}\sum_{j=1}^{p-1}\Big[\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\cos(\frac{2\pi}{p}j)\cos(\frac{2\pi}{p}qj)\Big]. \end{split} \end{align} We note that the quantity \begin{align} \label{Dedekind} s(q,p) = \frac{1}{4p}\sum_{j=1}^{p-1}\Big[\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big] \end{align} is the well-known Dedekind sum \cite{Rademacher2}. This has a closed form expression in several special cases, but not in general. It is not surprising that this term appears, since Dedekind sums arise naturally in the index theorem for the signature complex \cite{HirzebruchZagier, Katase1, Zagier}. However, for the anti-self-dual deformation complex, the interaction of the Dedekind sum term with the final term in \eqref{N(q,p)intro} makes a huge difference. In particular, formula \eqref{N(q,p)intro} says that the sum of these terms must always be an integer!
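This integrality, and the agreement with the correction term $\sum_{i=1}^{k}4e_i-12k-2$ of Theorem \ref{mainthm}, can be tested numerically. The Python sketch below (illustrative only) evaluates the non-topological terms of \eqref{N(q,p)intro} for several pairs with $1<q<p-1$:

```python
from math import pi, tan, cos

def cot(x):
    return 1.0 / tan(x)

def modified_euclid(q, p):
    # Hirzebruch-Jung expansion of p/q (ceiling algorithm), each e_i >= 2.
    es, a, b = [], p, q
    while b > 0:
        e = -(-a // b)
        es.append(e)
        a, b = b, e * b - a
    return es

def N_trig(q, p):
    # Non-topological terms of the index formula, evaluated numerically.
    s1 = sum(cot(pi*j/p) * cot(pi*q*j/p) for j in range(1, p))
    s2 = sum(cot(pi*j/p) * cot(pi*q*j/p) * cos(2*pi*j/p) * cos(2*pi*q*j/p)
             for j in range(1, p))
    return -6 + (14.0/p)*s1 - (2.0/p)*s2

# For 1 < q < p-1 the trigonometric expression is an integer, equal to
# the correction term 4*sum(e_i) - 12k - 2 of the main index theorem.
for (q, p) in [(2, 5), (2, 7), (3, 7), (3, 8)]:
    es = modified_euclid(q, p)
    assert abs(N_trig(q, p) - (4*sum(es) - 12*len(es) - 2)) < 1e-8
```

For instance $N(2,5)=-6$ and $N(3,8)=-2$, matching the expansions $[3,2]$ and $[3,3]$ respectively.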
For $x \in \mathbb{R} \setminus \mathbb{Z}$, we define the sawtooth function $((x)) = \{ x \} - \frac{1}{2}$. In Section \ref{nontop}, we show that when $1 < q < p-1$, the non-topological terms in \eqref{N(q,p)intro} can be rewritten as a Dedekind sum plus terms involving the sawtooth function: \begin{align} N(q,p)=-6+\frac{12}{p}\sum_{j=1}^{p-1}\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)-4\bigg(\bigg(\frac{q^{-1;p}}{p}\bigg)\bigg) -4\bigg(\bigg(\frac{q}{p}\bigg)\bigg), \end{align} where $q^{-1;p}$ is the inverse of $q$ modulo $p$. In Section \ref{explicit} we use this, together with classical reciprocity for Dedekind sums to prove Theorem \ref{mainthm}. The results dealing with the Calderbank-Singer spaces, Theorems \ref{CSthm} and \ref{CScor}, are proved in Section~\ref{CSindex}. Finally, in Section \ref{wpssec}, we present a complete analysis of the index for the canonical Bochner-K\"ahler metric on a weighted projective space, and prove Theorem \ref{wpsthm}. Interestingly, an important ingredient is Rademacher's triple reciprocity formula for Dedekind sums \cite{Rademacher1}. We conclude the paper with some remarks on the number-theoretic condition on the triple $(r,q,p)$ which occurs in Theorem \ref{introthm}.
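The equality between the trigonometric expression in \eqref{N(q,p)intro} and the sawtooth form above can also be confirmed numerically. The sketch below (illustrative only; it assumes Python 3.8+ for the modular-inverse form pow(q, -1, p) computing $q^{-1;p}$):

```python
from math import pi, tan, cos

def cot(x):
    return 1.0 / tan(x)

def saw(x):
    # Sawtooth function ((x)) = {x} - 1/2 for non-integer x.
    return (x % 1.0) - 0.5

def N_trig(q, p):
    # Non-topological terms of the index formula \eqref{N(q,p)intro}.
    s1 = sum(cot(pi*j/p) * cot(pi*q*j/p) for j in range(1, p))
    s2 = sum(cot(pi*j/p) * cot(pi*q*j/p) * cos(2*pi*j/p) * cos(2*pi*q*j/p)
             for j in range(1, p))
    return -6 + (14.0/p)*s1 - (2.0/p)*s2

def N_sawtooth(q, p):
    # Rewritten form: Dedekind-sum term plus sawtooth corrections.
    s1 = sum(cot(pi*j/p) * cot(pi*q*j/p) for j in range(1, p))
    q_inv = pow(q, -1, p)      # the inverse of q modulo p
    return -6 + (12.0/p)*s1 - 4*saw(q_inv/p) - 4*saw(q/p)

# The two expressions agree for 1 < q < p-1.
for (q, p) in [(2, 5), (2, 7), (3, 7), (3, 8)]:
    assert abs(N_trig(q, p) - N_sawtooth(q, p)) < 1e-8
```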
\section{The orbifold $\Gamma$-Index} \label{ogi}
For an orbifold $(M,g)$, the $\Gamma$-Index is given analytically by \begin{align} Ind_{\Gamma}(M,g) = \dim (H^0(M,g)) - \dim (H^1(M,g)) + \dim (H^2(M,g)). \end{align} From Kawasaki's orbifold index theorem \cite{Kawasaki}, it follows that we have a $\Gamma$-index formula of the form \begin{align} \label{I-G}
Ind_{\Gamma}(M)=\frac{1}{2}(15\chi_{orb}(M)+29\tau_{orb}(M))+\frac{1}{|\Gamma|}\sum_{\gamma \neq Id}\frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}({\lambda_{-1}N_{\mathbb{C}}})}, \end{align} where the quantity $\chi_{orb}(M)$ is the orbifold Euler characteristic defined by \begin{align}
\chi_{orb}(M)=\frac{1}{8\pi^2}\int_{M}(|W|^2-\frac{1}{2}|Ric|^2+\frac{1}{6}R^2)dV_{g}, \end{align} the quantity $\tau_{orb}(M)$ is the orbifold signature defined by \begin{align}
\tau_{orb}(M)=\frac{1}{12\pi^2}\int_{M}(|W^+|^2-|W^-|^2)dV_{g}, \end{align} and the quantity $\frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})}$ is a correction term depending upon the action of $\gamma$ on certain bundles, which we will describe in what follows.
In the next subsection, we compute the trace of the action of $\gamma$, an element in the orbifold group $\Gamma$, on the bundles $[N_{\mathbb{C}}]$, $[S^2_0(N_{\mathbb{C}})]$ and $[S^2_0(\Lambda^2_+)]$ over the fixed point set, which we then use to compute a general formula for the $\frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})}$ term. We then express the orbifold Euler characteristic and the orbifold signature in terms of their topological counterparts, together with correction terms which also depend upon the $\gamma$-action. Finally, we combine this information into a formula for the orbifold $\Gamma$-Index.
\subsection{Group action on bundles} In order to compute the $\Gamma$-Index, we first need to find the trace of the $\gamma$-action, for every $\gamma$ in $\Gamma$, on the pullback of the complexified principal symbol, $i^*\sigma$, where \begin{align} \label{pr1} i:p\rightarrow M \end{align} is the inclusion map from the fixed point $p$ into the orbifold $M$. In this case \begin{align} \label{symbol} i^*\sigma=[N_{\mathbb{C}}]-[S^2_0(N_{\mathbb{C}})]+[S^2_0\Lambda^2_+]. \end{align} For a general $\gamma$ of the form \begin{align} \gamma = \left( {\begin{array}{*{20}c}
\cos\theta_1 & -\sin\theta_1 & 0 & 0 \\
\sin\theta_1 & \cos\theta_1 & 0 & 0 \\
0 & 0 & \cos\theta_2 & -\sin\theta_2 \\
0 & 0 & \sin\theta_2 & \cos\theta_2
\end{array} } \right), \end{align} fixing the point $p$, the normal bundle is trivial, so $N_\mathbb{C}:=N\otimes \mathbb{C}=\mathbb{C}^4$, and we have the following proposition. \begin{proposition} \label{traceprop} The trace of the $\gamma$-action on the components of $i^*\sigma$ is as follows: \begin{enumerate}
\item $\displaystyle tr(\gamma|_{N_{\mathbb{C}}})=2\cos (\theta_1)+2\cos(\theta_2)$,
\item $\displaystyle tr(\gamma|_{S^2_0(N_{\mathbb{C}})})=1+2\cos(\theta_1+\theta_2)+ 2\cos(-\theta_1+\theta_2)+4\cos(\theta_1+\theta_2)\cos(-\theta_1+\theta_2)$,
\item $\displaystyle tr(\gamma|_{S^2_0(\Lambda^2_+)})=2\cos(\theta_1+\theta_2)+4\cos^2(\theta_1+\theta_2)-1.$ \end{enumerate} \end{proposition} \begin{proof} The normal bundle can be written as $N=x_1\oplus \cdots \oplus x_4$ in real coordinates. After complexifying the normal bundle we can diagonalize $\gamma$ to write \begin{align}
\gamma|_{N_{\mathbb{C}}} = \left( {\begin{array}{*{20}c}
e^{i\theta_1} & 0 & 0 & 0 \\
0 & e^{-i\theta_1} & 0 & 0 \\
0 & 0 & e^{i\theta_2}& 0 \\
0 & 0 & 0 & e^{-i\theta_2}
\end{array} } \right), \end{align} with respect to the complex basis $\{\lambda_1\oplus \lambda_2\oplus \lambda_3 \oplus \lambda_4\}=\mathbb{C}^4$, where \begin{align} \{2x_1, 2x_2, 2x_3, 2x_4 \} = \{ \lambda_1-i\lambda_2, i\lambda_1-\lambda_2, \lambda_3-i\lambda_4,i\lambda_3-\lambda_4 \}. \end{align} Formula $(1)$ follows immediately.
Next, to see how $\gamma$ acts on $S^2_0(N_{\mathbb{C}})=\Lambda^2_+ \otimes \Lambda^2_-$ we first examine how $\gamma$ acts on $\Lambda^2_+$ and $\Lambda^2_-$ independently. We use the following basis for $\Lambda^2_+$: \begin{align} \begin{split} \omega_1^+&=\frac{1}{2} [d\lambda_2\wedge d\lambda_1+d\lambda_4\wedge d\lambda_3],\\ \omega_2^+&= \frac{1}{2}[d\lambda_1\wedge d\lambda_3+d\lambda_4\wedge d\lambda_2],\\ \omega_3^+ &=\frac{1}{2}[id\lambda_1\wedge d\lambda_3+id\lambda_2\wedge d\lambda_4], \end{split} \end{align} and the following basis for $\Lambda^2_-$: \begin{align} \begin{split} \omega_1^-&=\frac{1}{2} [d\lambda_2\wedge d\lambda_1-d\lambda_4\wedge d\lambda_3],\\ \omega_2^-&=\frac{1}{2}[id\lambda_3\wedge d\lambda_2+id\lambda_4\wedge d\lambda_1],\\ \omega_3^-&=\frac{1}{2}[d\lambda_2\wedge d\lambda_3+d\lambda_4\wedge d\lambda_1]. \end{split} \end{align} So we see that $\gamma$ acts on $\Lambda_+^2$ by \begin{align} \begin{split} \gamma(\omega_1^+)&=\omega_1^+\\ \gamma(\omega_2^+) &=\frac{1}{2}[e^{i(\theta_1+\theta_2)}(\omega_2^+-i\omega_3^+)+e^{-i(\theta_1+\theta_2)}(\omega_2^++i\omega_3^+)]\\ \gamma(\omega_3^+) &=\frac{1}{2}[e^{i(\theta_1+\theta_2)}(\omega_3^++i\omega_2^+)+e^{-i(\theta_1+\theta_2)}(\omega_3^+-i\omega_2^+)], \end{split} \end{align} and $\gamma$ acts on $\Lambda_-^2$ by \begin{align} \begin{split} \gamma(\omega_1^-)&=\omega_1^-\\ \gamma(\omega_2^-) &=\frac{1}{2}[e^{i(-\theta_1+\theta_2)} (\omega_2^--i\omega_3^-)+e^{i(\theta_1-\theta_2)}(\omega_2^-+i\omega_3^-)]\\ \gamma(\omega_3^-) &=\frac{1}{2}[e^{i(-\theta_1+\theta_2)}(\omega_3^-+i\omega_2^-)+e^{i(\theta_1-\theta_2)}(\omega_3^--i\omega_2^-)]. \end{split} \end{align} Therefore, we can write
\gamma|_{\Lambda_+^2} = \left( \begin{matrix} 1&0&0\\ 0& \cos(\theta_1+\theta_2) &- \sin(\theta_1+\theta_2) \\
0& \sin(\theta_1+\theta_2) & \cos(\theta_1+\theta_2) \end{matrix} \right), \end{align} and \begin{align}
\gamma|_{\Lambda_-^2} = \left( \begin{matrix} 1&0&0\\ 0& \cos(-\theta_1+\theta_2) &- \sin(-\theta_1+\theta_2) \\
0& \sin(-\theta_1+\theta_2) & \cos(-\theta_1+\theta_2) \end{matrix} \right). \end{align} To derive $(2)$, we compute \begin{align} \begin{split}
tr(\gamma|_{S^2_0 N_{\mathbb{C}}})&=tr(\gamma|_{\Lambda^2_+\otimes \Lambda^2_-})\\
&=tr(\gamma|_{\Lambda^2_+})\cdot tr(\gamma|_{\Lambda^2_-})\\ &=(1+ 2\cos(\theta_1+\theta_2))\cdot (1+ 2\cos(-\theta_1+\theta_2))\\ &=1+2\cos(\theta_1+\theta_2)+ 2\cos(-\theta_1+\theta_2)+4\cos(\theta_1+\theta_2)\cos(-\theta_1+\theta_2). \end{split} \end{align} Next, to see how $\gamma$ acts on $S^2_0(\Lambda^2_+)$, decompose \begin{align} S^2_0\Lambda_+^2=[\omega_1^+\otimes (\omega_2^+\oplus \omega_3^+)]\oplus S^2_0(\omega_2^+\oplus \omega_3^+)\oplus tr, \end{align} where $ tr=2\omega_1^+\otimes \omega_1^+-(\omega_2^+\otimes \omega_2^++\omega_3^+\otimes \omega_3^+)$ denotes the trace component, and write the basis of $S^2_0(\omega_2^+\oplus \omega_3^+)$ as \begin{align}
\{\omega_2^+\otimes \omega_2^+-\omega_3^+\otimes \omega_3^+, \omega_2^+\otimes \omega_3^++\omega_3^+\otimes \omega_2^+\}.
\end{align} We see that \begin{align}
&\gamma|_{\omega_1^+\otimes (\omega_2^+ \oplus \omega_3^+)}=\left( \begin{matrix}
\cos(\theta_1+\theta_2) &- \sin(\theta_1+\theta_2) \\
\sin(\theta_1+\theta_2) & \cos(\theta_1+\theta_2) \end{matrix} \right), \\
&\gamma|_{S^2_0(\omega_2^+\oplus \omega_3^+)}=\left( \begin{matrix}
\cos^2(\theta_1+\theta_2) -\sin^2(\theta_1+\theta_2) & -2\sin(\theta_1+\theta_2) \cos(\theta_1+\theta_2) \\
2\sin(\theta_1+\theta_2) \cos(\theta_1+\theta_2) & \cos^2(\theta_1+\theta_2) -\sin^2(\theta_1+\theta_2) \end{matrix} \right),\\
&\gamma|_{tr\in S^2_0\Lambda^2_+}=1. \end{align} Using these, we derive $(3)$ by computing \begin{align} \begin{split}
tr(\gamma|_{S^2_0\Lambda^2_+})&=[2\cos(\theta_1+\theta_2)]+[4\cos^2(\theta_1+\theta_2)-2]+[1]\\ &=2\cos(\theta_1+\theta_2)+4\cos^2(\theta_1+\theta_2)-1. \end{split} \end{align} \end{proof}
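As a quick numerical sanity check (illustrative only, not part of the proof), the closed forms $(2)$ and $(3)$ can be recovered by assembling the block matrices above and taking traces. The following Python sketch does this at generic angles; the function names are ours:

```python
import numpy as np

def rot(phi):
    # 2x2 rotation block, as in the matrices for gamma above
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

def tr_S20_NC(t1, t2):
    # tr(gamma|S^2_0 N_C) = tr(gamma|Lambda^2_+) * tr(gamma|Lambda^2_-);
    # each factor is 1 (the fixed omega_1^{+-}) plus the trace of a rotation
    return (1 + np.trace(rot(t1 + t2))) * (1 + np.trace(rot(-t1 + t2)))

def tr_S20_Lplus(t1, t2):
    # three blocks: rotation by phi, rotation by 2*phi, and 1 on the trace part
    phi = t1 + t2
    return np.trace(rot(phi)) + np.trace(rot(2 * phi)) + 1.0
```

At generic angles both functions agree with the expanded right-hand sides of $(2)$ and $(3)$ to machine precision.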
\subsection{Equivariant Chern character} We next compute the term $\frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})}$. The numerator of this term is the ${\gamma}$-equivariant Chern character of the pullback of the principal symbol, $i^*\sigma$, described in $\eqref{pr1}$ and $\eqref{symbol}$. The denominator is the ${\gamma}$-equivariant Chern character of the $K$-theoretic Thom class of the complexified normal bundle. Since the normal bundle is trivial over the fixed point, this is \begin{align} \lambda_{-1}N_\mathbb{C}=[\Lambda^0(\mathbb{C}^4)]-[\Lambda^1(\mathbb{C}^4)]+[\Lambda^2(\mathbb{C}^4)]-[\Lambda^3(\mathbb{C}^4)]+[\Lambda^4(\mathbb{C}^4)]. \end{align} Since the ${\gamma}$-equivariant Chern character is just the ${\gamma}$-action times the Chern character of each eigenspace, using Proposition \ref{traceprop}, we compute \begin{align} \begin{split}
ch_{\gamma}(i^*\sigma)&=tr({\gamma}|_{N_\mathbb{C}})-tr({\gamma}|_{S^2_0N_\mathbb{C}})+tr({\gamma}|_{S^2_0\Lambda^2_+})\\ &=[2\cos(\theta_1)+2\cos(\theta_2)]\\ &\phantom{=}-[1+2\cos(\theta_1+\theta_2)+2\cos(-\theta_1+\theta_2)+4\cos(\theta_1+\theta_2)\cos(-\theta_1+\theta_2)]\\ &\phantom{=}+[2\cos(\theta_1+\theta_2)+4\cos^2(\theta_1+\theta_2)-1]\\ &=[2\cos\theta_1+2\cos\theta_2-2-2\cos(\theta_1)\cos(\theta_2)]\\ &\phantom{=}+[-2\sin(\theta_1)\sin(\theta_2)-8\cos(\theta_1)\cos(\theta_2)\sin(\theta_1)\sin(\theta_2)+8\sin^2(\theta_1)\sin^2(\theta_2)]\\ &=[-2(\cos\theta_1-1)(\cos\theta_2-1)]+[8(1-\cos^2\theta_1)(1-\cos^2\theta_2)]\\ &\phantom{=}+[-2\sin(\theta_1)\sin(\theta_2)-8\cos(\theta_1)\cos(\theta_2)\sin(\theta_1)\sin(\theta_2)]. \end{split} \end{align} Similarly, we compute \begin{align} \begin{split}
ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})&=tr({\gamma}|_{[\Lambda^0(\mathbb{C}^4)]})-tr({\gamma}|_{[\Lambda^1(\mathbb{C}^4)]})+tr({\gamma}|_{[\Lambda^2(\mathbb{C}^4)]})\\
&\phantom{=}-tr({\gamma}|_{[\Lambda^3(\mathbb{C}^4)]})+tr({\gamma}|_{[\Lambda^4(\mathbb{C}^4)]}) =4(\cos\theta_1-1)(\cos\theta_2-1). \end{split} \end{align} Therefore \begin{align} \begin{split} \frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})}&=\Big[-\frac{1}{2}+2(1+\cos\theta_1)(1+\cos\theta_2)\Big]\\ &\phantom{=}-\bigg[\frac{2\sin(\theta_1)\sin(\theta_2)+8\cos(\theta_1)\cos(\theta_2)\sin(\theta_1)\sin(\theta_2)}{4(\cos\theta_1-1)(\cos\theta_2-1)}\bigg]. \end{split} \end{align} Since $\frac{\sin(\theta_1)\sin(\theta_2)}{(\cos\theta_1-1)(\cos\theta_2-1)}=\cot(\frac{\theta_1}{2})\cot(\frac{\theta_2}{2})$, we see that \begin{align} \begin{split} \label{ch_g} \frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})} &=-\frac{1}{2}+2(1+\cos\theta_1)(1+\cos\theta_2)-\frac{1}{2}\cot (\frac{\theta_1}{2})\cot(\frac{\theta_2}{2})\\ &\phantom{==}-2\cot(\frac{\theta_1}{2})\cot(\frac{\theta_2}{2})\cos(\theta_1)\cos(\theta_2). \end{split} \end{align}
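The simplification leading to $\eqref{ch_g}$ passes through several trigonometric identities; a short numerical check (illustrative only, function names ours) confirms that the closed form agrees with the raw quotient of traces:

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

def ch_quotient(t1, t2):
    # numerator: tr(gamma|N_C) - tr(gamma|S^2_0 N_C) + tr(gamma|S^2_0 Lambda^2_+)
    a, b = t1 + t2, -t1 + t2
    num = ((2*math.cos(t1) + 2*math.cos(t2))
           - (1 + 2*math.cos(a) + 2*math.cos(b) + 4*math.cos(a)*math.cos(b))
           + (2*math.cos(a) + 4*math.cos(a)**2 - 1))
    # denominator: alternating sum over Lambda^k(C^4) = 4(cos t1 - 1)(cos t2 - 1)
    den = 4*(math.cos(t1) - 1)*(math.cos(t2) - 1)
    return num / den

def ch_closed_form(t1, t2):
    # the right-hand side of (ch_g)
    return (-0.5 + 2*(1 + math.cos(t1))*(1 + math.cos(t2))
            - 0.5*cot(t1/2)*cot(t2/2)
            - 2*cot(t1/2)*cot(t2/2)*math.cos(t1)*math.cos(t2))
```

For instance, at $\theta_1=\theta_2=2\pi/3$ both expressions evaluate to $-1/3$.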
\subsection{The $\Gamma$-Index} For an orbifold with a single isolated singularity, we have a formula for the Euler characteristic \begin{align} \label{euler}
\chi_{top}(M)=\chi_{orb}(M)+\frac{|\Gamma|-1}{|\Gamma|}, \end{align} and a formula for the signature \begin{align} \label{tau} \tau_{top}(M)=\tau_{orb}(M)-\eta(S^3/\Gamma), \end{align} where $\Gamma \subset {\rm{SO}}(4)$ is the orbifold group around the fixed point and $\eta(S^3/\Gamma)$ is the eta-invariant, which in our case is given by \begin{align}
\eta(S^3/\Gamma)=\frac{1}{|\Gamma|}\sum_{\gamma \neq Id}\Big[\cot(\frac{\theta_1}{2})\cot(\frac{\theta_2}{2})\Big], \end{align} where $\theta_1$ and $\theta_2$ are the rotation angles of $\gamma$. See \cite{Hitchin} for a useful discussion of the formulas $\eqref{euler}$ and $\eqref{tau}$.
Combining formulas $\eqref{euler}$ and $\eqref{tau}$ with the formula for the $\Gamma$-Index given in $\eqref{I-G}$, we have \begin{align} \label{Gamma-I}
Ind_{\Gamma}=\frac{1}{2}(15\chi_{top}+29\tau_{top})-\frac{15}{2}\bigg(\frac{|\Gamma|-1}{|\Gamma|}\bigg)+\frac{29}{2}\eta(S^3/\Gamma)+\frac{1}{|\Gamma|}\sum_{\gamma\neq Id}\frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})}, \end{align} where the last term is given by formula $\eqref{ch_g}$.
\section{$\Gamma$-Index for cyclic group actions} \label{gcyclic}
We consider an orbifold with an isolated singularity having the group action $\Gamma_{(q,p)}$ generated by \begin{align} \gamma = \left( {\begin{array}{*{20}c}
\cos(\frac{2\pi}{p}) & -\sin(\frac{2\pi}{p}) & 0 & 0 \\
\sin(\frac{2\pi}{p}) & \cos(\frac{2\pi}{p}) & 0 & 0 \\
0 & 0 & \cos(\frac{2\pi}{p}q)& -\sin(\frac{2\pi}{p}q) \\
0 & 0 & \sin(\frac{2\pi}{p}q)& \cos(\frac{2\pi}{p}q)
\end{array} } \right), \end{align} where $p$ and $q$ are relatively prime. The cases when $q=1$ and $q=p-1$ have already been resolved in \cite{ViaclovskyIndex}, and although we are specifically interested in the case $1<q<p-1$, we will make use of the sum \begin{align} \sum_{\gamma \neq Id} \frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})} \end{align} in all cases, and make our computations accordingly. We begin this section by simplifying our formula for this sum in general: \begin{align} \begin{split} \label{sich} \sum_{{\gamma} \neq Id} \frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})}&=\sum_{j=1}^{p-1}\Big[-\frac{1}{2}+2(1+\cos(\frac{2\pi}{p}j))(1+\cos(\frac{2\pi}{p}qj))-\frac{1}{2}\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big]\\ &\phantom{==}-\sum_{j=1}^{p-1}\Big[2\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\cos(\frac{2\pi}{p}j)\cos(\frac{2\pi}{p}qj)\Big]\\ &=\sum_{j=1}^{p-1}\Big[\frac{3}{2}+2\cos(\frac{2\pi}{p}j)+2\cos(\frac{2\pi}{p}qj)+ \cos(\frac{2\pi}{p}(q+1)j)\Big]\\ &\phantom{==}+\sum_{j=1}^{p-1}\Big[\cos(\frac{2\pi}{p}(q-1)j)-\frac{1}{2}\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj) \Big]\\ &\phantom{==}+\sum_{j=1}^{p-1}\Big[-2\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\cos(\frac{2\pi}{p}j)\cos(\frac{2\pi}{p}qj)\Big]. \end{split} \end{align} Now, to further simplify our formula for the $\Gamma$-Index, it is necessary to separate into the following cases:
\subsection{$\Gamma$-Index when $1<q<p-1$} Using $\eqref{sich}$, we see that in this case \begin{align} \begin{split} \sum_{{\gamma}\neq Id} \frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})}&=\Big[\frac{3}{2}p-\frac{15}{2}\Big] -\frac{1}{2}\sum_{j=1}^{p-1}\Big[\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big]\\ &\phantom{==}-2\sum_{j=1}^{p-1}\Big[\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\cos(\frac{2\pi}{p}j)\cos(\frac{2\pi}{p}qj)\Big]. \end{split} \end{align} Therefore, by combining this with formula $\eqref{Gamma-I}$ for the $\Gamma$-Index, we have \begin{align} \begin{split} \label{N(q,p)} Ind_{\Gamma}(M)&=\frac{1}{2}(15\chi_{top}+29\tau_{top})-6+\frac{14}{p}\sum_{j=1}^{p-1}\Big[\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big]\\ &\phantom{==}-\frac{2}{p}\sum_{j=1}^{p-1}\Big[\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\cos(\frac{2\pi}{p}j)\cos(\frac{2\pi}{p}qj)\Big]. \end{split} \end{align}
\subsection{$\Gamma$-Index when $q=1$ and $p=2$} Using $\eqref{sich}$, we see that in this case \begin{align} \begin{split} \sum_{{\gamma}\neq Id} \frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})}=-\frac{1}{2}. \end{split} \end{align} Therefore, by combining this with formula $\eqref{Gamma-I}$ for the $\Gamma$-Index, we have \begin{align} \begin{split} Ind_{\Gamma}(M)&=\frac{1}{2}(15\chi_{top}+29\tau_{top})-4. \end{split} \end{align}
\subsection{$\Gamma$-Index when $q=1$ and $p>2$} Using $\eqref{sich}$, we see that in this case \begin{align} \begin{split} \sum_{{\gamma}\neq Id} \frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})}&=\frac{5}{2}p-\frac{15}{2}-\sum_{j=1}^{p-1}\Big[\frac{1}{2}\cot^2(\frac{\pi}{p}j)+2\cot^2(\frac{\pi}{p}j)\cos^2(\frac{2\pi}{p}j)\Big]. \end{split} \end{align} Therefore, by combining this with formula $\eqref{Gamma-I}$ for the $\Gamma$-Index, and the following well-known formula for the Dedekind sum (see \cite{Rademacher2}): \begin{align} \frac{1}{4p}\sum_{j=1}^{p-1}\cot^2(\frac{\pi}{p}j) = \frac{1}{12p}(p-1)(p-2), \end{align} we have \begin{align} \begin{split} \label{q=1} Ind_{\Gamma}(M)&=\frac{1}{2}(15\chi_{top}+29\tau_{top})-5+\frac{14}{p}\sum_{j=1}^{p-1}\cot^2(\frac{\pi}{p}j)-\frac{2}{p}\sum_{j=1}^{p-1}\cot^2(\frac{\pi}{p}j)\cos^2(\frac{2\pi}{p}j)\\ &=\frac{1}{2}(15\chi_{top}+29\tau_{top})-5+\frac{12}{p}\sum_{j=1}^{p-1}\cot^2(\frac{\pi}{p}j)+\frac{8}{p}\sum_{j=1}^{p-1}\cos^4(\frac{\pi}{p}j)\\ &=\frac{1}{2}(15\chi_{top}+29\tau_{top})-2-\frac{8}{p}+\frac{12}{p}\sum_{j=1}^{p-1}\cot^2(\frac{\pi}{p}j)\\ &=\frac{1}{2}(15\chi_{top}+29\tau_{top})-2-\frac{8}{p}+\frac{4}{p}(p^2-3p+2)\\ &=\frac{1}{2}(15\chi_{top}+29\tau_{top})+4p-14. \end{split} \end{align}
\subsection{$\Gamma$-Index when $q=p-1$ and $p>2$} Using $\eqref{sich}$, we see that in this case \begin{align} \begin{split} \sum_{{\gamma}\neq Id} \frac{ch_{\gamma}(i^*\sigma)}{ch_{\gamma}(\lambda_{-1}N_{\mathbb{C}})}&=\frac{5}{2}p-\frac{15}{2}+\sum_{j=1}^{p-1}\Big[\frac{1}{2}\cot^2(\frac{\pi}{p}j)+2\cot^2(\frac{\pi}{p}j)\cos^2(\frac{2\pi}{p}j)\Big]. \end{split} \end{align} Therefore, by combining this with formula $\eqref{Gamma-I}$ for the $\Gamma$-Index, we have \begin{align} \begin{split} \label{q=p-1} Ind_{\Gamma}(M)&=\frac{1}{2}(15\chi_{top}+29\tau_{top})-5-\frac{14}{p}\sum_{j=1}^{p-1}\cot^2(\frac{\pi}{p}j)+\frac{2}{p}\sum_{j=1}^{p-1}\cot^2(\frac{\pi}{p}j)\cos^2(\frac{2\pi}{p}j)\\ &=\frac{1}{2}(15\chi_{top}+29\tau_{top})-8+\frac{8}{p}-\frac{12}{p}\sum_{j=1}^{p-1}\cot^2(\frac{\pi}{p}j)\\ &=\frac{1}{2}(15\chi_{top}+29\tau_{top})-8+\frac{8}{p}-\frac{4}{p}(p^2-3p+2)\\ &=\frac{1}{2}(15\chi_{top}+29\tau_{top})-4p+4. \end{split} \end{align}
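The arithmetic in $\eqref{q=1}$ and $\eqref{q=p-1}$ can be checked numerically; the sketch below (a consistency check only, with our own function names) verifies the cotangent identity and the final closed forms $4p-14$ and $-4p+4$:

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

def cot2_sum(p):
    # sum_{j=1}^{p-1} cot^2(pi j / p); classically equals (p-1)(p-2)/3
    return sum(cot(math.pi*j/p)**2 for j in range(1, p))

def nontop_q1(p):
    # non-topological part of (q=1): -5 + (14/p) S1 - (2/p) S2
    s2 = sum(cot(math.pi*j/p)**2 * math.cos(2*math.pi*j/p)**2 for j in range(1, p))
    return -5 + 14/p*cot2_sum(p) - 2/p*s2

def nontop_qp1(p):
    # non-topological part of (q=p-1): -5 - (14/p) S1 + (2/p) S2
    s2 = sum(cot(math.pi*j/p)**2 * math.cos(2*math.pi*j/p)**2 for j in range(1, p))
    return -5 - 14/p*cot2_sum(p) + 2/p*s2
```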
\section{Non-topological terms in the $\Gamma$-Index} \label{nontop}
We denote the terms in the $\Gamma$-Index not involving the topological Euler characteristic or topological signature by $N(q,p)$. Also we change our notation of the $\Gamma$-Index from $Ind_{\Gamma}$ to $Ind_{(q,p)}$ to reflect the particular group action. With this new notation we can write the index as \begin{align} \label{indn} Ind_{(q,p)}&=\frac{1}{2}(15\chi_{top}+29\tau_{top})+N(q,p). \end{align} In this section we will simplify our formulas for $N(q,p)$. Also, for the remainder of the paper we will use the following notation. For two relatively prime positive integers $\alpha < \beta$, denote $\alpha$'s inverse modulo $\beta$ by $\alpha^{-1;\beta}$, and $\beta$'s inverse modulo $\alpha$ by $\beta^{-1;\alpha}$, i.e. \begin{align} \alpha\alpha^{-1;\beta}\equiv \text{1 mod $\beta$} \text{ and } \beta\beta^{-1;\alpha}\equiv \text{1 mod $\alpha$}. \end{align}
In the cases where $N(q,p)$ is easy to compute, we see that \begin{align} \label{etc} N(q,p)= \begin{cases} 4p-14 &\text{ when $1=q<p-1$}\\ -4p+4 &\text{ when } q=p-1. \end{cases} \end{align} Note that the case when $q=\pm 1$ and $p=2$ can actually be included in the $q=p-1$ case. It will be convenient later in the paper to also have these formulas written in terms of sawtooth functions, a cotangent sum, and a constant, where the sawtooth function is defined to be \begin{align} \label{st} ((x))= \begin{cases} x-\lfloor x \rfloor -\frac{1}{2} &\text{ when $x\notin \mathbb{Z}$}\\ 0 &\text{ when $x\in \mathbb{Z}$}. \end{cases} \end{align} We will include the formulas from $\eqref{etc}$, written in this way, below in Theorem $\ref{N-nonexceptional}$.
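In code, the sawtooth function $\eqref{st}$ can be implemented directly; the following sketch uses exact rational arithmetic to avoid floating-point ambiguity at integers:

```python
from fractions import Fraction
import math

def sawtooth(x):
    # ((x)) = x - floor(x) - 1/2 for non-integral x, and 0 at integers
    x = Fraction(x)
    if x.denominator == 1:
        return Fraction(0)
    return x - math.floor(x) - Fraction(1, 2)
```

Note that $((x))$ is odd and periodic with period $1$.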
To compute $N(q,p)$ in all other cases we will employ the following proposition: \begin{proposition} \label{fsp} \begin{align} -\frac{1}{2p}\sum_{j=1}^{p-1}\sin(\frac{2\pi}{p}qj)\cot(\frac{\pi}{p}j)=\bigg(\bigg(\frac{q}{p}\bigg)\bigg), \end{align} which is the sawtooth function defined in $\eqref{st}$. \end{proposition} \begin{proof} This is due to Eisenstein; see \cite{Apostol}. \end{proof} Now, we have \begin{theorem} \label{N-nonexceptional} When $q\not\equiv (p-1) \text { mod } p$ we have the formula \begin{align} N(q,p)=-6+\frac{12}{p}\sum_{j=1}^{p-1}\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)-4\bigg(\bigg(\frac{q^{-1;p}}{p}\bigg)\bigg) -4\bigg(\bigg(\frac{q}{p}\bigg)\bigg), \end{align} and when $q\equiv (p-1) \text{ mod } p$ we have the formula \begin{align} N(q,p)=N(p-1,p)=-4 -\frac{12}{p}\sum_{j=1}^{p-1}\cot^2(\frac{\pi}{p}j)+4\bigg(\bigg(\frac{1}{p}\bigg)\bigg)+4\bigg(\bigg(\frac{1}{p}\bigg)\bigg). \end{align} \end{theorem} \begin{proof} For the $q=p-1$ case, by examining the formulas in $\eqref{q=1}$ and $\eqref{q=p-1}$, one can easily see that we can also write $N(p-1,p)=-4p+4$ in this way. For $q=1$, the first formula evaluates directly to $4p-14$, in agreement with $\eqref{etc}$. Now, consider the $1< q<p-1$ case.
From \eqref{N(q,p)}, we begin by computing \begin{align} \begin{split} N(q,p)&=-6+\frac{14}{p}\sum_{j=1}^{p-1}\Big[\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big]\\ &\phantom{==}-\frac{2}{p}\sum_{j=1}^{p-1}\Big[\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\cos(\frac{2\pi}{p}j)\cos(\frac{2\pi}{p}qj)\Big]\\ &=-6+\frac{2}{p}\sum_{j=1}^{p-1}\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big[7-\cos(\frac{2\pi}{p}j)\cos(\frac{2\pi}{p}qj)\Big], \end{split} \end{align} and using the identity $\cos (2x) = 1 - 2 \sin^2 (x)$ this expands to \begin{align} \begin{split} &=-6+\frac{2}{p}\sum_{j=1}^{p-1}\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big[7-(1-2\sin^2(\frac{\pi}{p}j))(1-2\sin^2(\frac{\pi}{p}qj))\Big]\\ &=-6+\frac{2}{p}\sum_{j=1}^{p-1}\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big[6+2\sin^2(\frac{\pi}{p}j)+2\sin^2(\frac{\pi}{p}qj)\Big]\\ &\phantom{==}+\frac{2}{p}\sum_{j=1}^{p-1}\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big[-4\sin^2(\frac{\pi}{p}j)\sin^2(\frac{\pi}{p}qj)\Big], \end{split} \end{align} which simplifies to \begin{align} \begin{split} \label{rhsf} N(q,p)&= -6+\frac{1}{p}\sum_{j=1}^{p-1}\Big[12\cot(\frac{\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big]+\frac{1}{p}\sum_{j=1}^{p-1}\Big[2\sin(\frac{2\pi}{p}j)\cot(\frac{\pi}{p}qj)\Big]\\ &\phantom{==}+\frac{1}{p}\sum_{j=1}^{p-1}\Big[4\sin(\frac{\pi}{p}qj)\cos(\frac{\pi}{p}qj)\cot(\frac{\pi}{p}j)\Big]\\ &\phantom{==}-\frac{1}{p}\sum_{j=1}^{p-1}\Big[8\sin(\frac{\pi}{p}j)\cos(\frac{\pi}{p}j)\sin(\frac{\pi}{p}qj)\cos(\frac{\pi}{p}qj)\Big]. \end{split} \end{align} The fifth term on the right hand side of \eqref{rhsf} sums to zero because \begin{align} \begin{split} \frac{-8}{p}\sum_{j=1}^{p-1}\sin(\frac{\pi}{p}j)\cos(\frac{\pi}{p}j)&\sin(\frac{\pi}{p}qj)\cos(\frac{\pi}{p}qj)=\frac{-2}{p}\sum_{j=1}^{p-1}\sin(\frac{2\pi}{p}j)\sin(\frac{2\pi}{p}qj)\\ &=\frac{-1}{p}\sum_{j=1}^{p-1}\Big[\cos(\frac{2\pi}{p}(1-q)j)-\cos(\frac{2\pi}{p}(1+q)j)\Big] =0.
\end{split} \end{align} Using Proposition \ref{fsp}, the fourth term on the right hand side of \eqref{rhsf} is \begin{align*} \frac{4}{p} \sum_{j=1}^{p-1} \sin(\frac{\pi}{p}qj)\cos(\frac{\pi}{p}qj)\cot(\frac{\pi}{p}j) &=\frac{2}{p} \sum_{j=1}^{p-1} \sin(2 \frac{\pi}{p}qj)\cot(\frac{\pi}{p}j) = -4 \bigg(\bigg( \frac{q}{p}\bigg)\bigg), \end{align*} and the third term on the right hand side of \eqref{rhsf} is \begin{align*} \frac{2}{p}\sum_{j=1}^{p-1}\sin(\frac{2\pi}{p}j)\cot(\frac{\pi}{p}qj)&=\frac{2}{p}\sum_{j=1}^{p-1}\sin(\frac{2\pi}{p}qq^{-1;p}j)\cot(\frac{\pi}{p}qj)\\ &=\frac{2}{p}\sum_{r=1}^{p-1}\sin(\frac{2\pi}{p}q^{-1;p}r)\cot(\frac{\pi}{p}r) =-4\bigg(\bigg(\frac{q^{-1;p}}{p}\bigg)\bigg), \end{align*} where $r\equiv qj \text{ mod } p$ (both the sine and the cotangent depend only on the residue of $qj$ modulo $p$), and this finishes the proof. \end{proof} Since the formulas for $N(q,p)$ given in Theorem $\ref{N-nonexceptional}$ are the same in all cases except for when $q = p-1$, we make the following definition:
\begin{definition} \label{d-exc} {\em{ A singularity is said to be {\em{exceptional}} if it results from a $(p-1,p)$-action. Otherwise, it is called {\em{non-exceptional}}. }} \end{definition}
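Both Proposition \ref{fsp} and the first formula of Theorem \ref{N-nonexceptional} lend themselves to direct numerical verification. The following sketch (illustrative only; Python 3.8+ so that \texttt{pow(q, -1, p)} gives $q^{-1;p}$) compares the sawtooth form of $N(q,p)$ with the raw sum $\eqref{N(q,p)}$ for $1<q<p-1$:

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

def saw(num, den):
    # sawtooth ((num/den)) for positive integers num, den
    r = num % den
    return 0.0 if r == 0 else r/den - 0.5

def eisenstein_lhs(q, p):
    # left-hand side of Proposition fsp
    return -1/(2*p) * sum(math.sin(2*math.pi*q*j/p) * cot(math.pi*j/p)
                          for j in range(1, p))

def N_raw(q, p):
    # formula (N(q,p)): -6 + (14/p) S1 - (2/p) S2
    s1 = sum(cot(math.pi*j/p) * cot(math.pi*q*j/p) for j in range(1, p))
    s2 = sum(cot(math.pi*j/p) * cot(math.pi*q*j/p)
             * math.cos(2*math.pi*j/p) * math.cos(2*math.pi*q*j/p)
             for j in range(1, p))
    return -6 + 14/p*s1 - 2/p*s2

def N_sawtooth(q, p):
    # Theorem N-nonexceptional, first formula
    s1 = sum(cot(math.pi*j/p) * cot(math.pi*q*j/p) for j in range(1, p))
    return -6 + 12/p*s1 - 4*saw(pow(q, -1, p), p) - 4*saw(q, p)
```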
\section{Explicit formula for $N(q,p)$} \label{explicit} We begin this section by proving reciprocity formulas for the individual summands of $N(q,p)$. Then, we use these relations to prove reciprocity formulas for $N(q,p)$, which will later be used to compute $N(q,p)$ explicitly. Since we have already computed $N(1,p)$, for the simplicity of presentation, we will assume that $q>1$ for the following. To simplify notation we let $A(q,p) = 48 s(q,p)$, where $s(q,p)$ is the Dedekind sum defined in \eqref{Dedekind}. \begin{proposition} \label{reci}Writing $p = eq - a$, the following reciprocity relations are satisfied: \begin{enumerate} \item $A(q,p)+A(p,q)=-12+4e-4\frac{a}{q}+4\frac{q}{p}+4\frac{1}{pq}$, \item $-4\Big(\Big( \frac {q^{-1;p}}{p}\Big)\Big)-4\Big(\Big( \frac {p^{-1;q}}{q}\Big)\Big)=-\frac{4}{pq}$, \item $-4\Big(\Big( \frac {q}{p}\Big)\Big)-4\Big(\Big( \frac {p}{q}\Big)\Big)=-4\frac{q}{p}+4\frac{a}{q}$. \end{enumerate} \end{proposition} \begin{proof} By the reciprocity formula for Dedekind sums \cite{Rademacher2}, we have that \begin{align} \begin{split} A(q,p)+A(p,q)&=-12+4\Big(\frac{p}{q}+\frac{q}{p}+\frac{1}{pq}\Big)\\ &=-12+4\Big(e-\frac{a}{q}+\frac{q}{p}+\frac{1}{pq}\Big)=-12+4e-4\frac{a}{q}+4\frac{q}{p}+4\frac{1}{pq}. \end{split} \end{align} Next, we have that \begin{align} \begin{split} -4\bigg(\bigg( \frac {q^{-1;p}}{p}\bigg)\bigg)&-4\bigg(\bigg( \frac {p^{-1;q}}{q}\bigg)\bigg)=\Big(-4\frac{q^{-1;p}}{p}+2\Big)+\Big(4\frac{a^{-1;q}}{q}-2\Big)\\ &=-4\frac{q^{-1;p}}{p}+4\frac{a^{-1;q}}{q}=4\frac{-qq^{-1;p}+a^{-1;q}p}{pq}. \end{split} \end{align} Next, using that $q^{-1;p}q=1+ a^{-1;q} p$ (see Proposition \ref{nt}), we have \begin{align} \begin{split}
-4\bigg(\bigg( \frac {q^{-1;p}}{p}\bigg)\bigg)&-4\bigg(\bigg( \frac {p^{-1;q}}{q}\bigg)\bigg)=4\frac{-qq^{-1;p}+a^{-1;q}p}{pq}\\
&=4\frac{-(1+a^{-1;q} p)+a^{-1;q}p}{pq}=-\frac{4}{pq}.
\end{split}
\end{align} Finally, we have \begin{align} \begin{split} -4\bigg(\bigg( \frac {q}{p}\bigg)\bigg)-4\bigg(\bigg( \frac {p}{q}\bigg)\bigg)&=\Big(-4\frac{q}{p}+2\Big)+\Big(4\frac{a}{q}-2\Big)=-4\frac{q}{p}+4\frac{a}{q}. \end{split} \end{align} \end{proof} Next, we will prove useful reciprocity formulas for $N(q,p)$. Denote \begin{align} \begin{split} &R^+(q,p)=N(q,p)+N(p,q)\\ &R^-(q,p)=N(-q,p)+N(-p,q). \end{split} \end{align} \begin{proposition} Writing $p=eq-a$ with $0<a<q$, we have the following formulas: \label{plus,minus} \begin{align} R^+(q,p)= \begin{cases} -4 &\text{ when } q=1 \text{ and } p=2\\ -14 &\text{ when $1<q=p-1$}\\ 4p-14 &\text{ when $1=q<p-1$}\\ 4e-22 &\text{ when } p=eq-1\\ 4e-24 &\text{ when $2\leq a \leq q-1$}, \end{cases} \end{align} and \begin{align} R^-(q,p) = \begin{cases} -4 &\text{ when } q=1 \text{ and } p=2\\ -6 &\text{ when $1<q=p-1$}\\ -4p+4 &\text{ when $1=q<p-1$}\\ -4e+2 &\text{ when $p=eq-(q-1)$ and $1<q<p-1$}\\
-4e &\text{ when $1\leq a \leq q-2$ and $2<q$}. \end{cases} \end{align} \end{proposition} \begin{proof} The first three formulas for both $R^+(q,p)$ and $R^-(q,p)$ are easily computable from the cases where $N(q,p)$ is easy to compute. Denote by $C_{(\alpha,\beta)}$ the constant term in $N(\alpha,\beta)$, so \begin{align} \label{cab} C_{(\alpha,\beta)}= \begin{cases} -6 &\text{ for a non-exceptional singularity}\\ -4 &\text{ for an exceptional singularity}. \end{cases} \end{align} For the case when $p=eq-a$, where $1\leq a \leq q-1$, we have that \begin{align*} R^+(q,p)=N(q,p)+N(p,q)&=\bigg[C_{(q,p)}+A(q,p)-4\bigg(\bigg(\frac{q^{-1;p}}{p}\bigg)\bigg) -4\bigg(\bigg(\frac{q}{p}\bigg)\bigg)\bigg]\\ &\phantom{==}+\bigg[C_{(p,q)}+A(p,q)-4\bigg(\bigg(\frac{p^{-1;q}}{q}\bigg)\bigg)-4\bigg(\bigg(\frac{p}{q}\bigg)\bigg)\bigg]. \end{align*} Then, by Proposition $\ref{reci}$, we see that \begin{align*} R^+(q,p)&=C_{(q,p)}+C_{(p,q)}+\Big[-12+4e-4\frac{a}{q}+4\frac{q}{p}+\frac{4}{pq}\Big]+\Big[-\frac{4}{pq}-4\frac{q}{p}+4\frac{a}{q}\Big]\\ &=4e+C_{(q,p)}+C_{(p,q)}-12, \end{align*} which proves the reciprocity formulas in each respective case. The proof for $R^-(q,p)$ is similar and is omitted. \end{proof} We next use the above reciprocity relations to recursively compute an explicit formula for $N(q,p)$: \begin{theorem}For $q$ and $p$ relatively prime, we have \label{N=} \begin{align} \label{nqpform} N(q,p) = \begin{cases} \displaystyle \sum_{i=1}^{k}4e_i-12k-2 &\text{ when $q\not\equiv (p-1)$ mod $p$} \\ \displaystyle \sum_{i=1}^{k}4e_i-12k=-4p+4 &\text{ when $q\equiv (p-1)$ mod $p$}, \end{cases} \end{align} where $k$ and $e_i$, $1 \leq i \leq k$, were defined above in the modified Euclidean algorithm~\eqref{mea}. \end{theorem} \begin{proof} We have already proved the second case in $\eqref{etc}$, and we will now prove the first case, so we need only consider $q\not\equiv (p-1) \text{ mod } p$. Since our formulas only depend upon $q$ mod $p$, we can assume that $1\leq q<p-1$.
We begin by using Proposition \ref{plus,minus} to compute $N(q,p)$ as follows: \begin{align*} N(q,p)&=R^+(q,p)-N(p,q)\\ &=R^+(q,p)-N(e_1q-a_1,q)\\ &=R^+(q,p)-N(-a_1,q)\\ &=R^+(q,p)-N(-a_1,q)-N(-q,a_1)+N(-q,a_1)\\ &=R^+(q,p)-R^-(a_1,q)+N(a_2,a_1)+N(a_1,a_2)-N(a_1,a_2)\\ &=R^+(q,p)-R^-(a_1,q)+R^+(a_2,a_1)-N(-a_3,a_2). \end{align*} Continuing this iteratively, we arrive at the formula \begin{align*} N(q,p) &=\sum_{i=1}^{r+1}4e_i-24\Big\lceil \frac{r+1}{2} \Big\rceil \\ &\phantom{==}+\Big[(-1)^{r+1}R^{(-1)^{r+1}}(a_{r+1},a_r)+(-1)^{r+2}N((-1)^{r+2}a_{r+2},a_{r+1})\Big], \end{align*} where $a_r=e_{r+2}a_{r+1}-1$ or $a_r=e_{r+2}a_{r+1}-(a_{r+1}-1)$. It is only necessary to consider the four following cases: \begin{enumerate} \item When $r+2$ is even and $a_{r+2}=1$:\\ \phantom{======}$N(q,p)=\displaystyle \sum_{i=1}^{r+3}4e_i-12(r+2)-14$. \item When $r+2$ is odd and $a_{r+2}=1$:\\ \phantom{======} $N(q,p)= \displaystyle \sum_{i=1}^{r+3}4e_i-12(r+1)-26$. \item When $r+2$ is even and $a_{r+2}=a_{r+1}-1$:\\
\phantom{======}$N(q,p)= \displaystyle \sum_{i=1}^{r+2}4e_i-4a_{r+1}-12(r+2)+2$. \item When $r+2$ is odd and $a_{r+2}=a_{r+1}-1$:\\ \phantom{======} $N(q,p)= \displaystyle \sum_{i=1}^{r+2}4e_i-4a_{r+1}-12(r+1)-10$. \end{enumerate} The formulas for $N(q,p)$ in each case are a direct consequence of formula $\eqref{etc}$ and Proposition $\ref{plus,minus}$. In case (1) and case (2), $k=r+3$. So written in terms of $k$ we have \begin{align} N(q,p) = \sum_{i=1}^{k}4e_i-12k-2, \end{align} for both cases. Now, in case (3), $k=(a_{r+1}-1)+(r+2)$ and $e_i=2$ for $i\geq r+3$. Therefore we can check that \begin{align} \begin{split} \sum_{i=1}^{k}4e_i-12k-2&=\bigg[\sum_{i=1}^{r+2}4e_i-12(r+2)\bigg]+\bigg[\sum_{i=r+3}^{k}4e_i-12(a_{r+1}-1)-2\bigg]\\ &=\sum_{i=1}^{r+2}4e_i-12(r+2)-4a_{r+1}+2=N(q,p). \end{split} \end{align} Finally, in case (4), $k=(a_{r+1}-1)+(r+2)$ and $e_i=2$ for $i\geq r+3$, and the result holds similarly. \end{proof} Theorem \ref{mainthm} is then a trivial consequence of Theorem \ref{N=} and \eqref{indn}.
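Theorem \ref{N=} can be tested numerically. The sketch below assumes that the modified Euclidean algorithm \eqref{mea} is the standard negative-regular (Hirzebruch-Jung) continued fraction expansion $p/q=e_1-1/(e_2-1/(\cdots-1/e_k))$ with all $e_i\geq 2$; the function names are ours, chosen for illustration:

```python
import math

def hj_expansion(q, p):
    # coefficients e_1, ..., e_k of p/q = e_1 - 1/(e_2 - ... - 1/e_k)
    es = []
    a, b = p, q
    while b > 0:
        e = -(-a // b)          # ceiling of a/b
        es.append(e)
        a, b = b, e*b - a       # continue the expansion with b / (e*b - a)
    return es

def cot(x):
    return math.cos(x) / math.sin(x)

def saw(num, den):
    r = num % den
    return 0.0 if r == 0 else r/den - 0.5

def N_sawtooth(q, p):
    # Theorem N-nonexceptional, non-exceptional case
    s1 = sum(cot(math.pi*j/p) * cot(math.pi*q*j/p) for j in range(1, p))
    return -6 + 12/p*s1 - 4*saw(pow(q, -1, p), p) - 4*saw(q, p)

def N_euclidean(q, p):
    # Theorem N=: sum 4 e_i - 12 k, minus 2 in the non-exceptional case
    es = hj_expansion(q, p)
    base = 4*sum(es) - 12*len(es)
    return base if q == p - 1 else base - 2
```

For instance, $5/2=3-1/2$ gives the expansion $[3,2]$, so $N(2,5)=4(3+2)-24-2=-6$.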
\begin{remark}{\em Ashikaga and Ishizaka prove a recursive formula for the Dedekind sum in \cite[Theorem 1.1]{AshiIshi}, which is equivalent to Theorem \ref{N=}. However, our proof is more elementary and relies only on the reciprocity law for Dedekind sums. We will also need to use Proposition \ref{plus,minus} below in Section \ref{wpssec}. } \end{remark}
\section{Index on Calderbank-Singer spaces } \label{CSindex}
In this section, we prove the results regarding the Calderbank-Singer metrics. Let $k$ and $k'$ be the lengths of the modified Euclidean algorithm for $(q,p)$ and $(p-q,p)$ respectively. \begin{proof}[Proof of Theorem \ref{CSthm}] It follows from \eqref{intersect} that the compactified Calderbank-Singer space $(\hat{X}, \hat{g})$ satisfies $\tau_{top}(\hat{X})=-k$ and $\chi_{top}(\hat{X})=k+2$, so for a $(p-q,p)$-action when $q\neq 1$, the index is \begin{align*} Ind(\hat{X}, \hat{g})&=\frac{1}{2}(15\chi_{top}+29\tau_{top})+N(p-q,p) =[-7k+15]+\Big[\sum_{i=1}^{k'}4e_i'-12k'-2\Big]. \end{align*} We next use a 4-dimensional $(q,p)$-football, denoted by $S^4_{(q,p)}$, to relate $k$ and $k'$. This is defined using the $\Gamma_{(q,p)}$ action, acting by rotations around the $x_5$-axis: \begin{align} S^4_{(q,p)}=S^4/\Gamma_{(q,p)}. \end{align} This quotient is an orbifold with two singular points, one of $(q,p)$-type, and the other of $(-q,p)$-type. Since $\chi_{top}(S^4_{(q,p)})=2$ and $\tau_{top}(S^4_{(q,p)})=0$, the index of \eqref{thecomplex} on $S^4_{(q,p)}$ with the round metric $g_S$ is \begin{align} Ind(S^4_{(q,p)},g_S)= 3 \text{ for $1<q<p-1$}. \end{align} Using the formula \begin{align} Ind(S^4_{(q,p)},g_S)=\frac{1}{2}(15\chi_{top}+29\tau_{top})+N(q,p)+N(-q,p), \end{align} and Theorem \ref{N=}, we have \begin{align} \begin{split} -12&=N(q,p)+N(-q,p)=N(q,p)+N(p-q,p)\\ &=\Big [\sum_{i=1}^{k}4e_i-12k-2\Big]+\Big[\sum_{i=1}^{k'}4e_i'-12k'-2\Big], \end{split} \end{align} which yields the formula \begin{align} k'=\frac{1}{12}\Big(8+\sum_{i=1}^{k}4e_i+\sum_{i=1}^{k'}4e_i'-12k\Big). \end{align} Then, substituting this expression for $k'$ into $Ind(\hat{X}, \hat{g})$ gives \begin{align} \begin{split} Ind(\hat{X}, \hat{g})&=[-7k+15]+\Big[\sum_{i=1}^{k'}4e_i'-\Big(8+\sum_{i=1}^{k}4e_i+\sum_{i=1}^{k'}4e_i'-12k \Big)-2\Big]\\ &=5k+5-\sum_{i=1}^{k}4e_i. \end{split} \end{align}
Next, when $q=1$, we have $k =1$, so the index is \begin{align} Ind(\hat{X}, \hat{g}) =[-7k+15]+[-4p+4] =-4p+12. \end{align} \end{proof}
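The football identity $N(q,p)+N(-q,p)=-12$ used in the proof can itself be checked numerically (sawtooth form of Theorem \ref{N-nonexceptional}, valid here since for $1<q<p-1$ both singular points are non-exceptional; function names ours):

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

def saw(num, den):
    r = num % den
    return 0.0 if r == 0 else r/den - 0.5

def N(q, p):
    # non-exceptional sawtooth formula for N(q,p)
    s1 = sum(cot(math.pi*j/p) * cot(math.pi*q*j/p) for j in range(1, p))
    return -6 + 12/p*s1 - 4*saw(pow(q, -1, p), p) - 4*saw(q, p)
```

For $1<q<p-1$, the quantity $N(q,p)+N(p-q,p)$ evaluates to $-12$.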
\begin{proof}[Proof of Theorem \ref{CScor}] Calderbank-Singer showed that their toric metrics come in families of dimension $k-1$. It was proved by Dominic Wright that the moduli space of toric anti-self-dual metrics on the orbifolds is of dimension exactly $k -1$ \cite[Corollary 1.1]{Wright}. So as long as we show the moduli space is strictly larger than $k-1$, there must be non-toric deformations.
The $(1,2)$ case is the Eguchi-Hanson metric which has no deformations. For $q=1$ and $p > 2$, the $(1,p)$ type Calderbank-Singer spaces are exactly the LeBrun negative mass metrics on $\mathcal{O}(-p)$ found in \cite{LeBrunnegative}. It was shown in \cite{HondaOn} that for $p = 3$, the moduli space of these metrics is of dimension $1$, so the result is true since $1 > 0 = k - 1$. For $p \geq 4$, by \cite[Theorem 1.9]{ViaclovskyIndex}, the moduli space has dimension at least $4p - 12 > 0$ (in fact the dimension is exactly $4p-12$, see \cite[Theorem~1.1]{HondaOn}). So the result holds for $q = 1$ and $p \geq 3$. We also mention that \cite[Theorem~1.1]{HondaOn} determines exactly the identity component of the automorphism groups of the deformations.
Next, assume that $q = p-1$. In this case, the metrics are hyperk\"ahler, and correspond to toric multi-Eguchi-Hanson metrics. Here, the moduli space of all hyperk\"ahler metrics is known to be exactly of dimension $3(k-1)$.
Next, we assume that $1 < q < p-1$. As mentioned in the Introduction, from \cite[Theorem 4.2]{LeBrunMaskit}, we know that $\dim(H^2(\hat{X},\hat{g}))=0$. Also, $\dim(H^0) = 2,$ since the metrics are toric and $q >1$. Therefore \begin{align} \dim(H^1)=-Ind(\hat{X}, \hat{g}) +\dim(H^0) = -Ind(\hat{X}, \hat{g}) +2. \end{align} When $q \neq 1$, we have that \begin{align} -Ind =-5k-5+\sum_{i=1}^{k}4e_i. \end{align} Since $e_i \geq 2$ for all $i$, and since $q < p-1$, we have $e_j \geq 3$ for some $j$, $1 \leq j \leq k$. Therefore \begin{align} \dim(H^1)\geq 3k + 1. \end{align} The actual moduli space is locally isomorphic to $H^1 / H^0$, so it has dimension at least $3k - 1 > 3(k-1)$. \end{proof}
\section{Index on weighted projective spaces} \label{wpssec}
In this section we will study the index of the complex \eqref{thecomplex} at the Bochner-K\"ahler metrics of Bryant {\em{with reversed orientation to make them anti-self-dual}}. This reversal of orientation makes the orbifold points have orientation-reversing conjugate actions as follows: \begin{enumerate} \item Around $[1,0,0]$ there is a $(-q^{-1;r}p,r)$-action. \item Around $[0,1,0]$ there is a $(-p^{-1;q}r,q)$-action. \item Around $[0,0,1]$ there is a $(-r^{-1;p}q,p)$-action. \end{enumerate}
In the next subsection, we will present some elementary number theoretic propositions that we will use throughout our computations. After that, we will prove crucial reciprocity laws for sawtooth functions relating $r, q$ and $p$ and then employ these to prove our main formula for the index. Finally, we use this formula to prove Theorem~\ref{wpsthm}.
\subsection{Elementary number theoretic preliminaries}
Recall that for two relatively prime positive integers $1<\alpha < \beta$, we denote $\alpha$'s inverse modulo $\beta$ by $\alpha^{-1;\beta}$, and $\beta$'s inverse modulo $\alpha$ by $\beta^{-1;\alpha}$. Since $\alpha < \beta$ we can write \begin{align} \beta=e\alpha-a, \end{align} where $e$ and $a$ are positive integers with $a<\alpha$. Then we have the following proposition: \begin{proposition} \label{nt} We have the following identities: \begin{enumerate} \item $\beta^{-1;\alpha}=\alpha-a^{-1;\alpha}$, \item $\alpha \alpha^{-1;\beta}=1+a^{-1;\alpha}\beta$. \end{enumerate} \end{proposition} \begin{proof} To prove the first identity, recall that $\beta=e\alpha-a$, so \begin{align*} \beta (\alpha-a^{-1;\alpha})&=(e\alpha-a)(\alpha-a^{-1;\alpha}) =e\alpha^2-e\alpha a^{-1;\alpha}-a\alpha+aa^{-1;\alpha}\equiv 1 \mod \alpha. \end{align*} This proves the first identity because $\alpha-a^{-1;\alpha}<\alpha$ and the multiplicative inverses are unique.
To prove the second identity, we first write \begin{align} \alpha\alpha^{-1;\beta}=1+X\beta. \end{align} Since $1<\alpha$ we know $X$ must be a positive integer. We can then solve for \begin{align} \beta=\frac{\alpha \alpha^{-1;\beta}-1}{X}. \end{align} Therefore $\beta=\frac{\alpha \alpha^{-1;\beta}-1}{X}=e\alpha-a$, so \begin{align} \alpha\alpha^{-1;\beta}-1=e\alpha X-aX, \end{align} from which we see that \begin{align} aX=\alpha(eX-\alpha^{-1;\beta})+1, \end{align} so $aX\equiv \text{1 mod $\alpha$}$. This proves the second identity because $X=\frac{\alpha \alpha^{-1;\beta}-1}{\beta}<\alpha$ and multiplicative inverses are unique. \end{proof} For convenience, we define the fractional part of $x$ by \begin{align} \{ x \} = x - \lfloor x \rfloor. \end{align} We will use the following proposition extensively in the next section; the proof is elementary: \begin{proposition} \label{saw} For any real $\alpha$ and $\beta$, both non-integral, \begin{align} ((\alpha+\beta))= \begin{cases}
((\alpha))+((\beta))+\frac{1}{2} &\text{ when $\{\alpha\}+\{\beta\}<1$}\\
((\alpha))+((\beta))-\frac{1}{2} &\text{ when $\{\alpha\}+\{\beta\}>1$}\\
0 &\text{ when $\{\alpha\}+\{\beta\}=1$}.
\end{cases} \end{align} \end{proposition}
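Both propositions are easy to test in exact arithmetic; the sketch below (function names ours; Python 3.8+ so that \texttt{pow(x, -1, m)} gives $x^{-1;m}$) checks the inverse identities of Proposition \ref{nt} and the case analysis of Proposition \ref{saw}:

```python
from fractions import Fraction
import math

def check_nt(alpha, beta):
    # verify Proposition nt for coprime 1 < alpha < beta, with beta = e*alpha - a
    a = (-beta) % alpha                   # 0 < a < alpha
    inv = lambda x, m: pow(x, -1, m)      # x^{-1;m}, normalized to 1..m-1
    ok1 = inv(beta, alpha) == alpha - inv(a, alpha)
    ok2 = alpha * inv(alpha, beta) == 1 + inv(a, alpha) * beta
    return ok1 and ok2

def saw(x):
    # sawtooth ((x)) in exact arithmetic
    x = Fraction(x)
    if x.denominator == 1:
        return Fraction(0)
    return x - math.floor(x) - Fraction(1, 2)

def check_saw_addition(x, y):
    # verify the case analysis of Proposition saw for non-integral rationals x, y
    fx, fy = x - math.floor(x), y - math.floor(y)
    if fx + fy < 1:
        return saw(x + y) == saw(x) + saw(y) + Fraction(1, 2)
    if fx + fy > 1:
        return saw(x + y) == saw(x) + saw(y) - Fraction(1, 2)
    return saw(x + y) == 0
```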
\subsection{Reciprocity formulas for sawtooth functions} Let $r<q<p$ and write: \begin{align} \begin{split} &p=e_{pr}r-a_{pr}\\ &p=e_{pq}q-a_{pq}\\ &q=e_{qr}r-a_{qr}. \end{split} \end{align} We have the following identities from Proposition $\ref{nt}$ (1): \begin{align} \begin{split} &p^{-1;r}=r-a^{-1;r}_{pr}\\ &p^{-1;q}=q-a^{-1;q}_{pq}\\ &q^{-1;r}=r-a^{-1;r}_{qr} \end{split} \end{align} and from Proposition $\ref{nt}$ (2): \begin{align} \begin{split} &rr^{-1;p}=1+a^{-1;r}_{pr}p\\ &rr^{-1;q}=1+a^{-1;r}_{qr}q\\ &qq^{-1;p}=1+a^{-1;q}_{pq}p. \end{split} \end{align} We now use these identities to prove reciprocity laws for the sawtooth function. These reciprocity laws will be broken up into two theorems where the first is independent of $r+q$ in relation to $p$ and the second is dependent. \begin{theorem} \label{2} We have the following reciprocity relations: \begin{enumerate} \item $\Big(\Big(\frac{qp^{-1;r}}{r}\Big)\Big)+ \Big(\Big(\frac{qr^{-1;p}}{p}\Big)\Big)=\frac{q}{pr}$\\ \item $\Big(\Big(\frac{rp^{-1;q}}{q}\Big)\Big)+ \Big(\Big(\frac{rq^{-1;p}}{p}\Big)\Big)=\frac{r}{pq}$. \end{enumerate} \end{theorem}
\begin{proof} Consider the first reciprocity relation. We have \begin{align} \begin{split} \bigg(\bigg(\frac{qp^{-1;r}}{r}\bigg)\bigg)+ \bigg(\bigg(\frac{qr^{-1;p}}{p}\bigg)\bigg)&=\bigg(\bigg(\frac{q(r-a^{-1;r}_{pr})}{r}\bigg)\bigg)+\bigg(\bigg(\frac{rr^{-1;p}q}{pr}\bigg)\bigg)\\ &=\bigg(\bigg(q-\frac{qa^{-1;r}_{pr}}{r}\bigg)\bigg)+\bigg(\bigg(\frac{q}{pr}+\frac{qa^{-1;r}_{pr}}{r}\bigg)\bigg)\\ &=-\bigg(\bigg(\frac{qa^{-1;r}_{pr}}{r}\bigg)\bigg)+\bigg(\bigg(\frac{q}{pr}+\frac{qa^{-1;r}_{pr}}{r}\bigg)\bigg). \end{split} \end{align} Now, we can write $\frac{qa^{-1;r}_{pr}}{r}=X+\frac{C}{r}$, where $X$ and $C$ are positive integers with $0<C<r$, so that \begin{align} \label{1}-\bigg(\bigg(\frac{qa^{-1;r}_{pr}}{r}\bigg)\bigg)+\bigg(\bigg(\frac{q}{pr}+\frac{qa^{-1;r}_{pr}}{r}\bigg)\bigg)&= -\bigg(\bigg(\frac{C}{r}\bigg)\bigg)+\bigg(\bigg(\frac{q}{pr}+\frac{C}{r}\bigg)\bigg). \end{align} Since $0<C<r$ we know that \begin{align} \frac{q}{pr}+\frac{C}{r}\leq \frac{q}{pr}+\frac{r-1}{r}=\frac{q}{pr}+\frac{pr-p}{pr}<1, \end{align} because $q<p$, which implies that \begin{align} \bigg\{\frac{q}{pr}\bigg\}+\bigg\{\frac{C}{r}\bigg\}<1. \end{align} Therefore, by Proposition $\ref{saw}$, we can separate the second sawtooth function to get \begin{align} \bigg(\bigg(\frac{q}{pr}+\frac{C}{r}\bigg)\bigg)=\bigg(\bigg(\frac{q}{pr}\bigg)\bigg)+\bigg(\bigg(\frac{C}{r}\bigg)\bigg)+\frac{1}{2}. \end{align} Putting this back into $\eqref{1}$ we see that \begin{align} \begin{split} \bigg(\bigg(\frac{qp^{-1;r}}{r}\bigg)\bigg)&+ \bigg(\bigg(\frac{qr^{-1;p}}{p}\bigg)\bigg)=-\bigg(\bigg(\frac{C}{r}\bigg)\bigg)+\bigg(\bigg(\frac{q}{pr}+\frac{C}{r}\bigg)\bigg)\\ &=-\bigg(\bigg(\frac{C}{r}\bigg)\bigg)+\bigg(\bigg(\frac{q}{pr}\bigg)\bigg)+\bigg(\bigg(\frac{C}{r}\bigg)\bigg)+\frac{1}{2}=\bigg(\bigg(\frac{q}{pr}\bigg)\bigg)+\frac{1}{2}\\ &=\frac{q}{pr}-\Big\lfloor \frac{q}{pr}\Big \rfloor=\frac{q}{pr}. \end{split} \end{align} The proof of the second reciprocity relation follows exactly that of the first. 
\end{proof}
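Theorem \ref{2} can also be verified numerically over small triples. A sketch (ours, not from the text), assuming Python 3.8+ for the modular inverse \texttt{pow(a, -1, m)}:

```python
from fractions import Fraction
from math import floor, gcd

def saw(x):
    # Sawtooth ((x)) = x - floor(x) - 1/2 for non-integral x, 0 at integers.
    return Fraction(0) if x == floor(x) else x - floor(x) - Fraction(1, 2)

def inv(a, m):
    # Multiplicative inverse of a modulo m, taken in {1, ..., m-1}.
    return pow(a, -1, m)

# Check both relations of the theorem over small pairwise coprime triples r < q < p.
for r in range(2, 12):
    for q in range(r + 1, 14):
        for p in range(q + 1, 20):
            if gcd(r, q) == gcd(r, p) == gcd(q, p) == 1:
                assert saw(Fraction(q * inv(p, r), r)) + saw(Fraction(q * inv(r, p), p)) \
                    == Fraction(q, p * r)
                assert saw(Fraction(r * inv(p, q), q)) + saw(Fraction(r * inv(q, p), p)) \
                    == Fraction(r, p * q)
```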
The next theorem gives a similar reciprocity relation, but one that depends on the size of $r+q$ relative to $p$. \begin{theorem} \label{3} For $1<r<q<p$ we have the following reciprocity relation: \begin{align} \bigg(\bigg(\frac{q^{-1;r}p}{r}\bigg)\bigg)+\bigg(\bigg(\frac{r^{-1;q}p}{q}\bigg)\bigg)= \begin{cases} \frac{p}{qr} &\text{when $r+q>p$}\\ \frac{p}{qr}-1 &\text{when $r+q=p$}\\ \frac{p}{qr}-\lfloor \frac{p}{qr}\rfloor &\text{when $r+q<p$}\\ &\text{and $\Big\{\frac{p}{qr}\Big\}<\Big\{\frac{q^{-1;r}p}{r}\Big\}$}\\ \frac{p}{qr}-\lfloor \frac{p}{qr}\rfloor-1 &\text{when $r+q<p$}\\ &\text{and $\Big\{\frac{p}{qr}\Big\}>\Big\{\frac{q^{-1;r}p}{r}\Big\}$}. \end{cases} \end{align} \end{theorem}
\begin{proof} We begin in a similar way to the proof of Theorem \ref{2}: \begin{align} \begin{split} \bigg(\bigg(\frac{q^{-1;r}p}{r}\bigg)\bigg)+\bigg(\bigg(\frac{r^{-1;q}p}{q}\bigg)\bigg)&=\bigg(\bigg(\frac{p(r-a^{-1;r}_{qr})}{r}\bigg)\bigg)+\bigg(\bigg(\frac{(1+a^{-1;r}_{qr}q)p}{qr}\bigg)\bigg)\\ &=-\bigg(\bigg(\frac{a^{-1;r}_{qr}p}{r}\bigg)\bigg)+\bigg(\bigg(\frac{p}{qr}+\frac{a^{-1;r}_{qr}p}{r}\bigg)\bigg). \end{split} \end{align} Now, we can write $\frac{a^{-1;r}_{qr}p}{r}=X+\frac{C}{r}$, where $X$ and $C$ are positive integers with $0<C<r$, so that \begin{align} -\bigg(\bigg(\frac{a^{-1;r}_{qr}p}{r}\bigg)\bigg)+\bigg(\bigg(\frac{p}{qr}+\frac{a^{-1;r}_{qr}p}{r}\bigg)\bigg)&=-\bigg(\bigg(\frac{C}{r}\bigg)\bigg)+\bigg(\bigg(\frac{p}{qr}+\frac{C}{r}\bigg)\bigg). \end{align} The same argument that we used in the previous proof to split up the second sawtooth function will no longer work because $p>q$, which could allow $\frac{p}{qr}+\frac{C}{r}>1$. Let us consider the first case of this reciprocity relation, when $r+q>p$. In this case we know that $p<r+q$, so $p<2q$ since $1<r<q$. Now, we will show that $C\leq r-2$ and use this to prove the first case. Write $p$ as \begin{align} p=k_{pr}r+m_{pr}, \end{align} where $k_{pr}=e_{pr}-1$ and $m_{pr}=r-a_{pr}$ are positive integers. We know that $\frac{C}{r}$ is going to be the fractional part of $\frac{pa^{-1;r}_{qr}}{r}$, which equals the fractional part of $\frac{m_{pr}a^{-1;r}_{qr}}{r}$. If this equals $\frac{r-1}{r}$ then $m_{pr}a^{-1;r}_{qr}\equiv -1\text{ mod $r$}$ and therefore $m_{pr}=r-a_{qr}$, because multiplicative inverses are unique, which implies that $a_{pr}=a_{qr}$ (denote this value by $A$). So we have that \begin{align} e_{pr}r-A=p<q+r=(e_{qr}r-A)+r=(e_{qr}+1)r-A, \end{align} which is a contradiction because $e_{pr}\geq e_{qr}+1$. 
Therefore $\frac{C}{r}\leq \frac{r-2}{r}$, so \begin{align} \frac{p}{qr}+\frac{C}{r}\leq \frac{p}{qr}+\frac {r-2}{r}=\frac{p}{qr}+\frac{qr-2q}{qr}<1, \end{align} which implies that \begin{align} \bigg\{\frac{p}{qr}\bigg\}+\bigg\{\frac{C}{r}\bigg\}<1. \end{align} Therefore, by Proposition $\ref{saw}$, we can separate the second sawtooth function to get \begin{align} \begin{split} -\bigg(\bigg(\frac{C}{r}\bigg)\bigg)+\bigg(\bigg(\frac{p}{qr}+\frac{C}{r}\bigg)\bigg)&=-\bigg(\bigg(\frac{C}{r}\bigg)\bigg)+\bigg(\bigg(\frac{p}{qr}\bigg)\bigg)+\bigg(\bigg(\frac{C}{r}\bigg)\bigg)+\frac{1}{2}\\ &=\bigg(\bigg(\frac{p}{qr}\bigg)\bigg)+\frac{1}{2}=\frac{p}{qr}-\Big\lfloor \frac{p}{qr}\Big \rfloor=\frac{p}{qr}, \end{split} \end{align} which proves the first case.
Now, consider the second case when $r+q=p$. In this case we see that \begin{align} \begin{split} \bigg(\bigg(\frac{p}{qr}+\frac{pa^{-1;r}_{qr}}{r}\bigg)\bigg)&=\bigg(\bigg(\frac{p}{qr}+\frac{(r+q)a^{-1;r}_{qr}}{r}\bigg)\bigg)\\ &=\bigg(\bigg(\frac{p}{qr}-\frac{a_{qr}a^{-1;r}_{qr}}{r}\bigg)\bigg)=\bigg(\bigg(\frac{p}{qr}-\frac{1}{r}\bigg)\bigg)\\ &=\Big(\frac{p}{qr}-\frac{1}{r}\Big)-\Big\lfloor \frac{p}{qr}-\frac{1}{r} \Big\rfloor -\frac{1}{2} =\frac{1}{q}-\frac{1}{2}. \end{split} \end{align} Then, we compare this to \begin{align} \begin{split} \bigg(\bigg(&\frac{p}{qr}\bigg)\bigg)+\bigg(\bigg(\frac{pa^{-1;r}_{qr}}{r}\bigg)\bigg)+\frac{1}{2}=\bigg(\bigg(\frac{p}{qr}\bigg)\bigg)+\bigg(\bigg(\frac{(r+q)a^{-1;r}_{qr}}{r}\bigg)\bigg)+\frac{1}{2}\\ &=\bigg(\bigg(\frac{p}{qr}\bigg)\bigg)+\bigg(\bigg(\frac{-a_{qr}a^{-1;r}_{qr}}{r}\bigg)\bigg)+\frac{1}{2}=\bigg(\bigg(\frac{p}{qr}\bigg)\bigg)-\bigg(\bigg(\frac{1}{r}\bigg)\bigg)+\frac{1}{2}\\ &=\Big(\frac{p}{qr}-\Big\lfloor \frac{p}{qr}\Big \rfloor-\frac{1}{2}\Big)-\Big(\frac{1}{r}-\Big\lfloor \frac{1}{r}\Big \rfloor -\frac{1}{2}\Big)+\frac{1}{2}\\ &=\frac{p}{qr}-\frac{1}{r}+\frac{1}{2}=\frac{1}{q}+\frac{1}{2} =\bigg(\bigg(\frac{p}{qr}+\frac{pa^{-1;r}_{qr}}{r}\bigg)\bigg)+1. \end{split} \end{align} Therefore we see that \begin{align} \begin{split} -\bigg(\bigg(\frac{a^{-1;r}_{qr}p}{r}\bigg)\bigg)&+\bigg(\bigg(\frac{p}{qr}+\frac{a^{-1;r}_{qr}p}{r}\bigg)\bigg)\\ &=-\bigg(\bigg(\frac{a^{-1;r}_{qr}p}{r}\bigg)\bigg)+\bigg(\bigg(\frac{p}{qr}\bigg)\bigg)+\bigg(\bigg(\frac{pa^{-1;r}_{qr}}{r}\bigg)\bigg)-\frac{1}{2}\\ &=\bigg(\bigg(\frac{p}{qr}\bigg)\bigg)-\frac{1}{2}=\frac{p}{qr}-\Big\lfloor \frac{p}{qr} \Big\rfloor -1=\frac{p}{qr}-1, \end{split} \end{align} which proves the second case.
To prove the third and fourth cases we begin once again by using that \begin{align} \bigg(\bigg(\frac{q^{-1;r}p}{r}\bigg)\bigg)+\bigg(\bigg(\frac{r^{-1;q}p}{q}\bigg)\bigg)=\bigg(\bigg(\frac{q^{-1;r}p}{r}\bigg)\bigg)+\bigg(\bigg(\frac{p}{qr}-\frac{q^{-1;r}p}{r}\bigg)\bigg). \end{align} Notice that \begin{align} \Big\{\frac{p}{qr}\Big\}+\Big\{\frac{-q^{-1;r}p}{r}\Big\}=\Big\{\frac{p}{qr}\Big\}+1-\Big\{\frac{q^{-1;r}p}{r}\Big\}, \end{align} which is never equal to one because $r$, $q$ and $p$ are relatively prime. Now, the rest of the proof of the third and fourth cases follows directly from Proposition $\ref{saw}$. \end{proof}
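The four cases of Theorem \ref{3} can likewise be checked exhaustively over small pairwise coprime triples; the sketch below (ours, not from the text) encodes the case distinctions exactly as stated in the theorem.

```python
from fractions import Fraction
from math import floor, gcd

def saw(x):
    # Sawtooth ((x)) = x - floor(x) - 1/2 for non-integral x, 0 at integers.
    return Fraction(0) if x == floor(x) else x - floor(x) - Fraction(1, 2)

def frac(x):
    # Fractional part {x}.
    return x - floor(x)

for r in range(2, 10):
    for q in range(r + 1, 12):
        for p in range(q + 1, 30):
            if not (gcd(r, q) == gcd(r, p) == gcd(q, p) == 1):
                continue
            a = Fraction(pow(q, -1, r) * p, r)   # q^{-1;r} p / r
            b = Fraction(pow(r, -1, q) * p, q)   # r^{-1;q} p / q
            lhs = saw(a) + saw(b)
            x = Fraction(p, q * r)
            if r + q > p:
                assert lhs == x
            elif r + q == p:
                assert lhs == x - 1
            elif frac(x) < frac(a):
                assert lhs == x - floor(x)
            else:                                # {p/qr} > {q^{-1;r}p/r}
                assert lhs == x - floor(x) - 1
```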
\subsection{$\Gamma$-Index for weighted projective spaces} First, recall Definition $\ref{d-exc}$: singularities resulting from a $(p-1,p)$-action are said to be {\em{exceptional}} and otherwise they are called {\em{non-exceptional}}. Consider the case when $1<r<q<p$ so that there are three singularities. Before giving theorems concerning the index, we will first examine what types of singularities, exceptional or non-exceptional, are admitted around each orbifold point in the cases $r+q>p$, $r+q=p$ and $r+q<p$.
\begin{proposition} \label{gr} When $r+q>p$ all three singularities are non-exceptional. When $r+q=p$ we have that \begin{enumerate} \item The singularity at $[1,0,0]$ is always exceptional. \item The singularity at $[0,1,0]$ is always exceptional. \item The singularity at $[0,0,1]$ is non-exceptional and comes from a $(1,p)$-action. \end{enumerate} When $r+q<p$ we have that \begin{enumerate} \item The singularity at $[1,0,0]$ is exceptional if and only if $p\equiv q\text{ mod $r$}$. \item The singularity at $[0,1,0]$ is exceptional if and only if $p\equiv r\text{ mod $q$}$. \item The singularity at $[0,0,1]$ is always non-exceptional. \end{enumerate} \end{proposition} \begin{proof} At $[1,0,0]$ the $(-q^{-1;r}p,r)$-action is equivalent to a $(-a^{-1;r}_{qr}a_{pr},r)$-action, and this is equivalent to a $(r-1,r)$-action if and only if $a_{pr}=a_{qr}$. If $ r + q > p$, suppose that $a_{pr}=a_{qr}$, then \begin{align}
p=e_{pr}r-a_{qr}, \text{ and } q=e_{qr}r-a_{qr}, \end{align} so $p<q+r=(e_{qr}+1)r-a_{qr}$, which is a contradiction because $e_{pr}\geq e_{qr}+1$. If $r+q=p$ we have that \begin{align} p=q+r=(e_{qr}+1)r-a_{qr}, \end{align} so we see that $a_{pr}=a_{qr}$ since $a_{qr}<r$. If $ r + q < p$, then this happens if and only if $p\equiv q\text{ mod $r$}$.
At $[0,1,0]$, by Remark \ref{actrem}, the $(-p^{-1;q}r,q)$-action is equivalent to a $(-r^{-1;q}p,q)$-action. This is equivalent to a $(r^{-1;q}a_{pq},q)$-action, which is equivalent to a $(q-1,q)$-action if and only if $a_{pq}r^{-1;q}\equiv -1 \mod q$, which would imply that $a_{pq}=q-r$. If $ r + q > p$, suppose that $a_{pq}=q-r$, then \begin{align} p=2q-a_{pq}=2q-(q-r) =q+r, \end{align} which is a contradiction because $r+q>p$. If $r+q=p$, we have that \begin{align} p=2q-(q-r), \end{align} so we see that $a_{pq}=q-r$. If $ r + q < p$ then this happens if and only if $p\equiv r \mod q$.
At $[0,0,1]$ the $(-r^{-1;p}q,p)$-action is equivalent to a $(p-1,p)$-action if and only if $r^{-1;p}q\equiv 1 \mod p$. If $ r + q > p$, this condition would imply that $q=r$, which is a contradiction. If $r+q=p$ then the $(-r^{-1;p}q,p)$-action is obviously equivalent to a $(1,p)$-action since $q = p - r$. If $ r + q < p$ then $r^{-1;p}q\equiv 1 \mod p$ occurs if and only if $q=r$, but $q>r$ so this can never happen. \end{proof} In the case $r + q < p$, we can add the following: \begin{proposition} \label{lee} When $r+q<p$ and the singularities at $[1,0,0]$ and $[0,1,0]$ are both exceptional, we have that $p=Xqr+r+q$ for some integer $X$, and \begin{align} \bigg\{\frac{p}{qr} \bigg\}>\bigg\{\frac{q^{-1;r}p}{r}\bigg\}. \end{align} \end{proposition} \begin{proof} Since the singularities around $[1,0,0]$ and $[0,1,0]$ are both exceptional, from Proposition $\ref{gr}$ we know that \begin{align} p\equiv q\text{ mod } r, \text{ and } p\equiv r \text{ mod } q. \end{align} Therefore, we can write \begin{align} p=Y_1q+r=Y_2r+q, \end{align} and solve for \begin{align} r=\frac{Y_1-1}{Y_2-1}q, \end{align} which implies that $qX=Y_2-1$ for some $X$ in $\mathbb{Z}$, since $q$ and $r$ are relatively prime. Then solving for $Y_2=qX+1$ we see that \begin{align} p=(qX+1)r+q =Xqr+r+q. \end{align} Now, since $p=Xqr+r+q$ we see that $a_{pr}=a_{qr}$. Therefore \begin{align} \begin{split} \bigg\{\frac{p}{qr} \bigg\}- \bigg\{\frac{a^{-1;r}_{qr}a_{pr}}{r} \bigg\}&=\bigg\{\frac{Xqr+r+q}{qr} \bigg\}- \bigg\{\frac{a^{-1;r}_{qr}a_{qr}}{r} \bigg\}\\ &=\bigg\{\frac{1}{q}+\frac{1}{r} \bigg\}- \bigg\{\frac{1}{r} \bigg\}=\frac{1}{q}>0. 
\end{split} \end{align} \end{proof} The following is the main result of this section, which is the same as Theorem \ref{introthm} upon identifying the integer $\epsilon$ with the number of exceptional singularities: \begin{theorem} \label{thm} Let $g$ be the canonical Bochner-K\"ahler metric with reversed orientation on $\overline{\mathbb{CP}}^2_{(r,q,p)}$, and assume that $1<r<q<p$. If $r+q\geq p$ then \begin{align} Ind(\overline{\mathbb{CP}}^2_{(r,q,p)},g)=2. \end{align} If $r+q<p$ then \begin{align} Ind(\overline{\mathbb{CP}}^2_{(r,q,p)},g)= \begin{cases} 2+2\epsilon -4\lfloor \frac{p}{qr} \rfloor &\text{ when $\{ \frac{p}{qr}\}<\{\frac{q^{-1;r}p}{r}\}$}\\ -2+2\epsilon-4\lfloor \frac{p}{qr} \rfloor &\text{ when $\{ \frac{p}{qr}\}>\{\frac{q^{-1;r}p}{r}\}$}, \end{cases} \end{align} where $\epsilon$ is the number of exceptional singularities, either $0$, $1$, or $2$. \end{theorem} Note that from Proposition $\ref{lee}$ the only instance when two exceptional singularities can occur is in the second case, thus there are really only five distinct cases. All of these cases do in fact occur, see Table \ref{casestable}. \begin{table} \caption{Cases in Theorem \ref{thm}} \label{casestable}
\begin{tabular}{ | l | l | c |}
\hline
$(r,q,p)$ & $\epsilon$ & $\{ \frac{p}{qr}\} - \{\frac{q^{-1;r}p}{r}\}$ \\ \hline
$(3,7,11)$ & 0 & $< 0$ \\ \hline
$(3,7,41)$ & 0 & $> 0$ \\ \hline
$(3,7,25)$ & 1 & $< 0$ \\ \hline
$(3,7,13)$ & 1 & $> 0$ \\ \hline
$(3,7,31)$ & 2 & $> 0$ \\ \hline
\end{tabular} \end{table} \begin{proof}[Proof of Theorem \ref{thm}] Since $1<r<q<p$, there are three singularities. Furthermore, $\chi_{top} = 3$ and $\tau_{top} = -1$ (see \cite[Appendix B]{Dimca}), so the $\Gamma$-index is \begin{align} \begin{split} Ind&= 8 + N(-q^{-1;r}p,r)+N(-p^{-1;q}r,q)+N(-r^{-1;p}q,p)\\ = 8 &+ \bigg[C_{(-q^{-1;r}p,r)}+A(-q^{-1;r}p,r)-4\bigg(\bigg( \frac{-q^{-1;r}p}{r}\bigg)\bigg)-4\bigg(\bigg(\frac{-p^{-1;r}q}{r}\bigg)\bigg)\bigg]\\ &+\bigg[C_{(-r^{-1;q}p,q)}+A(-p^{-1;q}r,q)-4\bigg(\bigg( \frac{-r^{-1;q}p}{q}\bigg)\bigg)-4\bigg(\bigg(\frac{-p^{-1;q}r}{q}\bigg)\bigg)\bigg]\\ &+\bigg[C_{(-r^{-1;p}q,p)}+A(-r^{-1;p}q,p)-4\bigg(\bigg( \frac{-r^{-1;p}q}{p}\bigg)\bigg)-4\bigg(\bigg(\frac{-q^{-1;p}r}{p}\bigg)\bigg)\bigg], \end{split} \end{align} recalling that $C_{(\alpha, \beta)}$ was defined above in \eqref{cab}. Then, using Rademacher's triple reciprocity for Dedekind sums \cite{Rademacher1} \begin{align} s(q^{-1;r}p,r)+s(p^{-1;q}r,q)+s(r^{-1;p}q,p) = -\frac{1}{4}+\frac{1}{12}\bigg(\frac{r}{pq}+\frac{q}{pr}+\frac{p}{qr}\bigg), \end{align} we see that \begin{align} \begin{split} Ind&=8+[C_{(-q^{-1;r}p,r)}+C_{(-r^{-1;q}p,q)}+C_{(-r^{-1;p}q,p)}]\\ &\phantom{==}+48\bigg[\frac{1}{4}-\frac{1}{12}\bigg(\frac{r}{pq}+\frac{q}{pr}+\frac{p}{qr}\bigg)\bigg]\\ &\phantom{==}+4\bigg[\bigg(\bigg( \frac{q^{-1;r}p}{r}\bigg)\bigg)+\bigg(\bigg(\frac{p^{-1;r}q}{r}\bigg)\bigg)+\bigg(\bigg( \frac{r^{-1;q}p}{q}\bigg)\bigg)\bigg]\\ &\phantom{==}+4\bigg[\bigg(\bigg(\frac{p^{-1;q}r}{q}\bigg)\bigg)+\bigg(\bigg( \frac{r^{-1;p}q}{p}\bigg)\bigg)+\bigg(\bigg(\frac{q^{-1;p}r}{p}\bigg)\bigg)\bigg]. \end{split} \end{align} Now, using our reciprocity laws for sawtooth functions, Theorems $\ref{2}$ and $\ref{3}$, and the restrictions on the types of singularities admitted, Proposition $\ref{gr}$, we complete the proof for each case.
When $r+q>p$: \begin{align*} Ind&=8+[-18]+48\bigg[\frac{1}{4}-\frac{1}{12}\bigg(\frac{r}{pq}+\frac{q}{pr}+\frac{p}{qr}\bigg)\bigg] +4\bigg[\frac{r}{pq}+\frac{q}{pr}+\frac{p}{qr}\bigg] =2. \end{align*}
When $r+q=p$: \begin{align*} Ind&=8+[-14]+48\bigg[\frac{1}{4}-\frac{1}{12}\bigg(\frac{r}{pq}+\frac{q}{pr}+\frac{p}{qr}\bigg)\bigg] +4\bigg[\frac{r}{pq}+\frac{q}{pr}+\frac{p}{qr}-1\bigg] =2. \end{align*}
When $r+q<p$ and $\{ \frac{p}{qr}\}<\{\frac{q^{-1;r}p}{r}\}$: \begin{align*} Ind&=8+[C_{(-q^{-1;r}p,r)}+C_{(-r^{-1;q}p,q)}+C_{(-r^{-1;p}q,p)}]\\ &\phantom{==}+48\bigg[\frac{1}{4}-\frac{1}{12}\bigg(\frac{r}{pq}+\frac{q}{pr}+\frac{p}{qr}\bigg)\bigg]+4\bigg[\frac{r}{pq}+\frac{q}{pr}+\frac{p}{qr}-\Big\lfloor \frac{p}{qr}\Big\rfloor\bigg]\\ &=20+[C_{(-q^{-1;r}p,r)}+C_{(-r^{-1;q}p,q)}+C_{(-r^{-1;p}q,p)}]-4\Big\lfloor \frac{p}{qr}\Big\rfloor\\ &=2+2\epsilon-4\Big\lfloor \frac{p}{qr}\Big\rfloor. \end{align*}
When $r+q<p$ and $\{ \frac{p}{qr}\}>\{\frac{q^{-1;r}p}{r}\}$: \begin{align*} Ind&=8+[C_{(-q^{-1;r}p,r)}+C_{(-r^{-1;q}p,q)}+C_{(-r^{-1;p}q,p)}]\\ &\phantom{==}+48\bigg[\frac{1}{4}-\frac{1}{12}\bigg(\frac{r}{pq}+\frac{q}{pr}+\frac{p}{qr}\bigg)\bigg]+4\bigg[\frac{r}{pq}+\frac{q}{pr}+\frac{p}{qr}-1-\Big\lfloor \frac{p}{qr}\Big\rfloor\bigg]\\ &=16+[C_{(-q^{-1;r}p,r)}+C_{(-r^{-1;q}p,q)}+C_{(-r^{-1;p}q,p)}]-4\Big\lfloor \frac{p}{qr}\Big\rfloor\\ &=-2+2\epsilon-4\Big\lfloor \frac{p}{qr}\Big\rfloor. \end{align*} This completes the proof. \end{proof}
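Rademacher's triple reciprocity, which the proof relies on, can be confirmed numerically from the definition $s(a,b)=\sum_{k=1}^{b-1}((k/b))((ka/b))$ of the Dedekind sum. The following sketch (ours, not from the text) checks it on a few pairwise coprime triples, including an entry from Table \ref{casestable}.

```python
from fractions import Fraction
from math import floor, gcd

def saw(x):
    # Sawtooth ((x)) = x - floor(x) - 1/2 for non-integral x, 0 at integers.
    return Fraction(0) if x == floor(x) else x - floor(x) - Fraction(1, 2)

def dedekind(a, b):
    # Dedekind sum s(a, b); only a mod b matters, since ((x)) has period 1.
    return sum(saw(Fraction(k, b)) * saw(Fraction(k * a, b)) for k in range(1, b))

def check(r, q, p):
    # s(q^{-1;r}p, r) + s(p^{-1;q}r, q) + s(r^{-1;p}q, p)
    lhs = (dedekind(pow(q, -1, r) * p, r)
           + dedekind(pow(p, -1, q) * r, q)
           + dedekind(pow(r, -1, p) * q, p))
    rhs = (-Fraction(1, 4)
           + Fraction(1, 12) * (Fraction(r, p * q) + Fraction(q, p * r) + Fraction(p, q * r)))
    assert lhs == rhs

for (r, q, p) in [(2, 3, 5), (3, 5, 7), (3, 7, 11), (3, 7, 25)]:
    assert gcd(r, q) == gcd(q, p) == gcd(r, p) == 1
    check(r, q, p)
```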
We also state the following theorem, which gives the index in the cases when there are strictly less than three singularities. \begin{theorem} Let $g$ be the canonical Bochner-K\"ahler metric with reversed orientation on $\overline{\mathbb{CP}}^2_{(r,q,p)}$. When $1=r<q<p$ there are two singularities and \begin{align} Ind(\overline{\mathbb{CP}}^2_{(1,q,p)},g)= \begin{cases} 2 &\text{ when $q=p-1$}\\ -4\lfloor \frac{p}{q} \rfloor+6 &\text{ when $p=eq-(q-1)$ and $q\neq p-1$}\\ -4\lfloor \frac{p}{q} \rfloor+4 &\text{ when $1\leq a_{pq} \leq q-2$ and $q>2$}. \end{cases} \end{align} When $1=r=q<p$ there is one singularity and \begin{align} Ind(\overline{\mathbb{CP}}^2_{(1,1,p)},g)=-4p+12. \end{align} \end{theorem} \begin{proof} We have that \begin{align} \frac{1}{2}(15\chi_{top}+20\tau_{top})=8. \end{align} Since $1=r<q<p$ we know that $p>2$. The first case follows from the reciprocity formula for $R^-(q,p)$ in Proposition $\ref{plus,minus}$. The second case follows from $N(-1,p)=-4p+4$ in $\eqref{etc}$. \end{proof}
\subsection{Proof of Theorem \ref{wpsthm}} We first present a general result about $H^2(M,g)$ on certain self-dual K\"ahler orbifolds: \begin{proposition} \label{sdk} Let $(M,g)$ be a compact self-dual K\"ahler orbifold and assume that the set $M^{>0} = \{ p \in M, R(p) > 0 \}$ is non-empty. With the reversed orientation to make $g$ anti-self-dual, we have $H^2(M,g) = 0$. \end{proposition} \begin{proof} As mentioned in the Introduction, the metric $\tilde{g} = R^{-2} g$ is an Einstein metric, which is complete on components of $M^*$. If $Z \in S^2_0(\Lambda^2_+(T^*M))$ satisfies $\mathcal{D}_g^* Z = 0$, where $ \mathcal{D}^*_{g}$ is the adjoint of $\mathcal{D}_{g}$, then from conformal invariance $\mathcal{D}_{\tilde g}^* Z = 0$ when $Z$ is viewed as a $(1,3)$ tensor. We compute \begin{align}
|Z|_{\tilde{g}}^2 = \tilde{g}^{ip} \tilde{g}^{jq} Z_{ijk}^{\phantom{ijk}l} Z_{pql}^{\phantom{pql}k}
= R^4 {g}^{ip} {g}^{jq} Z_{ijk}^{\phantom{ijk}l} Z_{pql}^{\phantom{pql}k} = R^4 |Z|_{g}^2, \end{align} so we have \begin{align} \label{zeqn}
|Z|_{\tilde{g}} = R^2 |Z|_{g}. \end{align} Let $M^*_1$ denote any non-trivial component of $M^*$. Since the metric $\tilde{g}$ is Einstein on $M^*_1$, from \cite[Proposition 5.1]{Itoh}, we have \begin{align} \label{dds} \mathcal{D}_{\tilde{g}} \mathcal{D}^*_{\tilde{g}} Z = \frac{1}{24} ( 3 \nabla^*_{\tilde{g}} \nabla_{\tilde{g}} + 2 R_{\tilde{g}})(2 \nabla^*_{\tilde{g}} \nabla_{\tilde{g}} + R_{\tilde{g}}) Z, \end{align} where $R_{\tilde{g}}$ is the (constant) scalar curvature of the Einstein metric $\tilde{g}$ on $M^*_1$. If $M^{> 0} = M$, then the maximum principle immediately implies that $Z = 0$. Otherwise, there is a nontrivial open component of $M^*$, which we again call $M^*_1$. The metric $\tilde{g}$ is a complete Einstein metric on
$M^*_1$, and \eqref{zeqn} shows that $|Z|_{\tilde{g}} (x) = o(1)$ as $r \rightarrow 0$, where $r$ is the distance to the zero set of the scalar curvature. Viewed on the complete manifold $(M^*_1, \tilde{g})$, $Z$ is then a decaying solution at infinity of $\eqref{dds}$. Since $R_{\tilde{g}}$ is a constant, a standard separation of variables argument (see for example \cite{Donn}) implies that $Z$ must decay faster than the inverse of any polynomial in the $\tilde{g}$ metric (it moreover has exponential decay). Equivalently,
$|Z|_g = O(r^k)$ as $r \rightarrow 0$ for any $k > 0$. This implies that $Z$ has a zero of infinite order along the zero set of the scalar curvature. The unique continuation principle for elliptic operators (see \cite{Aron}) then implies that $Z$ is identically zero. \end{proof} As a corollary, we obtain \begin{corollary} \label{h2wps} If $g$ is the canonical Bochner-K\"ahler metric with reversed orientation on $\overline{\mathbb{CP}}^2_{(r,q,p)}$, then $H^2(M,g) = 0$. \end{corollary} \begin{proof} From \cite[Equation (2.32)]{DavidGauduchon}, the set $M^{> 0}$ is non-empty. So this follows immediately from Proposition \ref{sdk}. \end{proof}
\begin{proof}[Proof of Theorem \ref{wpsthm}] From Corollary \ref{h2wps}, $H^2(M,g) = 0$, so the actual moduli space is locally isomorphic to $H^1/ H^0$. Depending upon the action of $H^0$, the moduli space could therefore be of dimension $\dim(H^1)$, $\dim(H^1) - 1$, or $\dim(H^1) - 2$. The result then follows immediately from the determination of $H^1(M,g)$ in Theorem \ref{thm}. \end{proof}
\subsection{Final remarks} We end with a non-rigorous remark on the number-theoretic condition appearing in Theorem \ref{thm}. Figure~\ref{plot} contains a plot of the function \begin{align} H(r,q,p(j)) = \Big\{ \frac{p}{qr} \Big\} - \Big\{\frac{q^{-1;r}p}{r}\Big\} \end{align} for $r = 3$ and $q = 7$, where the horizontal axis indexes the $j$th prime. The plot begins at the fifth prime, $11$, and ends with the $100$th prime, $541$. This, along with other empirical examples, indicates that the cases $H > 0$ and $H < 0$ occur with approximately the same frequency.
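The data behind the plot can be reproduced directly; a sketch (ours, not from the text) that tabulates the sign of $H(3,7,p)$ over the same range of primes, using a simple sieve:

```python
from fractions import Fraction
from math import floor

def frac(x):
    # Fractional part {x}.
    return x - floor(x)

def H(r, q, p):
    # H(r, q, p) = {p/(qr)} - {q^{-1;r} p / r}
    return frac(Fraction(p, q * r)) - frac(Fraction(pow(q, -1, r) * p, r))

# Primes from the 5th (11) through the 100th (541), via a simple sieve.
N = 600
sieve = [True] * N
for i in range(2, 25):
    if sieve[i]:
        for j in range(i * i, N, i):
            sieve[j] = False
primes = [i for i in range(2, N) if sieve[i]][4:100]
assert primes[0] == 11 and primes[-1] == 541

signs = [1 if H(3, 7, p) > 0 else -1 for p in primes]
# The two signs occur with roughly comparable frequency, as the plot suggests.
print(signs.count(1), signs.count(-1))
```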
\numberwithin{figure}{section} \begin{figure}
\caption{$H(3,7,p(j))$}
\label{plot}
\end{figure}
\end{document}
\begin{document}
\title{Finite Volume Properly Convex Deformations of the Figure-eight Knot} \author{Samuel A. Ballas} \email{sballas@math.ucsb.edu} \date{\today} \address{Department of Mathematics\\ University of California Santa Barbara\\ CA 93106}
\maketitle
\begin{abstract}
In this paper we show that some open set of the representations of the fundamental group of the figure-eight knot complement found in \cite{Ballas12a} are the holonomies of a family of finite volume properly convex projective structures on the figure-eight knot complement. \end{abstract}
\section{Introduction}
Let $M$ be an orientable hyperbolizable $n$-manifold with fundamental group $\Gamma$. When $n$ is at least 3, Mostow-Prasad rigidity tells us that if $M$ admits a complete finite volume hyperbolic structure, then this structure is unique, up to isometry. Such a structure gives rise to a discrete faithful holonomy representation $\rho_{{\rm geo}}:\Gamma\to \text{Isom}^+(\mathbb{H}^n)$ (unique up to conjugation in $\text{Isom}(\mathbb{H}^n)$) and $M$ can be realized as $\mathbb{H}^n/\rho_{{\rm geo}}(\Gamma)$. The representation coming from this structure is called the \emph{geometric representation}.
We can view this construction in the projective setting: Let $\Omega\subset \mathbb{RP}^n$ be the projectivization of the negative cone of a quadratic form of signature $(n,1)$. Then $\mathbb{H}^n$ can be identified with $\Omega$ in such a way that elements of $\text{Isom}^+(\mathbb{H}^n)$ correspond to $\text{PSO}(n,1)$. Using this identification, $M$ can be realized as $\Omega/\rho_{{\rm geo}}(\Gamma)$. \emph{Properly convex projective structures} offer us a flexible way to generalize this construction. In this setting we no longer insist that $\Omega$ is the projectivization of the negative cone of a form. Instead we ask only that $\Omega$ is properly convex and that $M$ can be realized as $\Omega/\rho(\Gamma)$, where $\rho:\Gamma\to \PGL[n+1]{\mathbb{R}}$ is a discrete and faithful representation whose image preserves $\Omega$.
There is no notion of Mostow rigidity in this setting and so the deformation theory of convex projective structures is richer than its hyperbolic counterpart in the following sense. Let $\gold{M}$ be the set of (equivalence classes of) properly convex projective structures on $M$. Mostow rigidity tells us that there is a distinguished point $N_{hyp}\in\gold{M}$ corresponding to the unique complete hyperbolic structure on $M$. There are examples of manifolds for which $\gold{M}$ has positive dimension at $N_{hyp}$ (see \cite{Goldman90,Marquis12b,CooperLongThist06,PortiHeusener09,Ballas12a} for examples). In contrast to these results, there are many hyperbolic manifolds whose hyperbolic structure cannot be deformed to a non-hyperbolic convex projective structure \cite{Ballas12a,CooperLongThist07}. Said another way, it is possible to find $M$ such that $N_{hyp}$ is an isolated point of $\gold{M}$.
An important principle in the proof of the results in the previous paragraph is that conjugacy classes of representations of $\Gamma$ into $\PGL[n+1]{\mathbb{R}}$ near $\rho_{{\rm geo}}$ give a local parameterization of (equivalence classes of) projective structures on $M$. When $M$ is closed, work of Koszul \cite{Koszul68} shows that the projective structures near $N_{hyp}$ corresponding to conjugacy classes near $\rho_{{\rm geo}}$ are properly convex.
When $M$ is non-compact Koszul's result is no longer true. In general there are representations near $\rho_{{\rm geo}}$ that are not the holonomy of any point in $\gold{M}$. Examples of such representations are the holonomies of incomplete hyperbolic structures on non-compact $3$-manifolds. Fortunately, recent work of Cooper and Long \cite{CooperLong13} has shown that if the nearby representation satisfies some mild hypotheses then it is the holonomy of a (possibly not properly convex) projective structure on $M$ with nice behavior near the boundary. Furthermore, Cooper, Long, and Tillmann \cite{CooperLongTillman13} have shown that if the restriction of the representation to the peripheral subgroups (fundamental groups of the boundary components) satisfies slightly stronger hypotheses then the representation is the holonomy of a properly convex projective structure.
In \cite{Ballas12a} the author showed that when $M$ is the figure-eight knot complement there exists an explicit family of representations, $\rho_s:\Gamma\to\rm{PGL}_{4}(\R)$ that satisfy the following conditions. \begin{itemize}
\item $\rho_0=\rho_{{\rm geo}}$,
\item the $\rho_s$ are pairwise non-conjugate, and
\item $\rho_s$ maps the meridian of $M$ to a unipotent element of $\rm{PGL}_{4}(\R)$ and the longitude to a non-unipotent element of $\rm{PGL}_{4}(\R)$. \end{itemize}
By applying results from \cite{CooperLongTillman13} and carefully analyzing the structure of the cusps we are able to prove the main theorem of this paper.
\begin{theorem}\label{t:maintheorem}
Let $M$ be the complement in $S^3$ of the figure-eight knot. There exists $\varepsilon>0$ such that for each $s\in(-\varepsilon,\varepsilon)$, $\rho_s$ is the holonomy of a finite volume properly convex projective structure on $M$. Furthermore, when $s\neq 0$ this structure is not strictly convex. \end{theorem}
\begin{remark}
In recent work with D.\ Long \cite{BallasLong14}, we are able to show that Theorem \ref{t:maintheorem} holds for all $s\in \mathbb{R}$. \end{remark}
Rephrasing Theorem \ref{t:maintheorem} in terms of $\gold{M}$ we have the following immediate corollary.
\begin{corollary}\label{c:structurecurve} Let $M$ be the complement in $S^3$ of the figure-eight knot. Then there is a non-trivial curve $c:(-\varepsilon,\varepsilon)\to \gold{M}$ such that $c(0)=N_{hyp}$. Furthermore, for every $s\in (-\varepsilon,\varepsilon)$, $c(s)$ has finite Busemann volume and $c(s)$ corresponds to a strictly convex structure if and only if $s=0$. \end{corollary}
When $s\neq 0$ we can regard $\rho_s$ as the holonomy of a singular projective structure on $S^3$ with ``cone singularities.'' The singular locus of this structure is the figure-eight knot sitting inside $S^3$ (See \cite{DancigerThesis} for definitions). The properly convex projective structure on the figure-eight knot complement can be recovered from this singular projective structure by deleting the singular locus.
In \cite{Marquis12b} Marquis shows that there exist non-compact hyperbolic 3-manifolds that admit finite volume convex projective deformations. However, his examples are based on the ``bending'' construction of Johnson and Millson \cite{JohnsonMillson87}, which requires the presence of an embedded totally geodesic surface. Since the figure-eight knot complement does not contain any embedded totally geodesic surfaces, we see that the deformations in Theorem \ref{t:maintheorem} are not covered by Marquis' work. Furthermore, to the best of our knowledge these are the first examples of finite volume, properly convex deformations of a non-compact $3$-manifold where the holonomy is explicitly computed.
The outline of the paper is as follows. Section \ref{s:background} provides background in projective geometry and properly convex projective structures. Section \ref{s:Z^2} defines and discusses certain Lie subgroups of $\PGL[4]{\mathbb{R}}$ that preserve properly (but not strictly) convex domains. In Section \ref{s:obstruction} we show that the cusp shape of a finite volume hyperbolic 3-manifold places restrictions on deformations that can occur. Section \ref{s:volume} examines the volume of cusps. In Section \ref{s:fig8} we use results from the previous sections and \cite{CooperLongTillman13} to prove Theorem \ref{t:maintheorem}.
\section{Background}\label{s:background}
\subsection{Convex Projective Geometry}
Let $\mathbb{RP}^n$ be $n$-dimensional real projective space, that is, the quotient of the space $\mathbb{R}^{n+1}\backslash \{0\}$ by the natural $\mathbb{R}^\times$ action given by scaling. For $v\in \mathbb{R}^{n+1}\backslash \{0\}$ we denote by $[v]$ the image of $v$ under the natural quotient map. Let $\PGL[n+1]{\mathbb{R}}$ be the quotient of $\textrm{GL}_{n+1}(\mathbb{R})$ by its center. It is easy to see that $\PGL[n+1]{\mathbb{R}}$ coincides with the set of self maps of $\mathbb{RP}^n$ induced by linear maps on $\mathbb{R}^{n+1}$.
The image of a 2-dimensional vector subspace under the quotient map is called a \emph{projective line} and the image of an $n$-dimensional vector subspace is called a \emph{projective hyperplane}. The complement of a projective hyperplane in $\mathbb{RP}^n$ is called an \emph{affine patch}. This name comes from the fact that if $A$ is an affine patch, then after a projective coordinate change we can write $A$ in homogeneous coordinates as $$A=\{[x_1:\ldots:x_n:1]\mid (x_1,\ldots,x_n)\in \mathbb{R}^n\}.$$
An open subset $\Omega$ of $\mathbb{RP}^n$ is called \emph{convex} if it is contained in some affine patch (i.e. is disjoint from a projective hyperplane) and its intersection with every projective line is connected. If in addition its closure $\ol{\Omega}$ is contained in an affine patch then we say that $\Omega$ is \emph{properly convex}. An alternative characterization of proper convexity is that $\Omega$ does not contain a complete affine line. A point $p\in \partial \Omega$ is strictly convex if $p$ is not contained in the interior of any line segment in $\partial \Omega$. If $\Omega$ is properly convex and every $p\in \partial \Omega$ is strictly convex then we say that $\Omega$ is \emph{strictly convex}. If $\Omega$ is a properly convex domain and $p\in \partial \Omega$ then $p$ is contained in a (possibly non-unique) hyperplane which is disjoint from $\Omega$. Such a hyperplane is called a \emph{supporting hyperplane}. When $p$ is a $C^1$ point of $\partial \Omega$ there is a unique supporting hyperplane containing $p$ which can be identified with the tangent plane to $\partial\Omega$ at $p$.
The space $\mathbb{RP}^n$ is double covered by the sphere $S^n$, which we realize as the quotient of $\mathbb{R}^{n+1}\backslash\{0\}$ by the positive scalars. Let $\pi:S^n\to \mathbb{RP}^n$ be the covering map. The automorphisms of $S^n$ are identified with $\slpm[n+1]{\mathbb{R}}$, which consists of linear transformations with determinant $\pm 1$. Let $[T]\in \PGL[n+1]{\mathbb{R}}$ be an equivalence class of linear transformations. By scaling $T$ we can arrange that $T\in \slpm[n+1]{\mathbb{R}}$. Additionally, we see that $T\in \slpm[n+1]{\mathbb{R}}$ if and only if $-T\in \slpm[n+1]{\mathbb{R}}$. As a result, there is a 2-to-1 map, which by abuse of notation we also call $\pi:\slpm[n+1]{\mathbb{R}} \to \PGL[n+1]{\mathbb{R}}$ given by $\pi(T)= [T]$. If we let $\slpm{\Omega}$ and $\PGL{\Omega}$ be subsets of $\slpm[n+1]{\mathbb{R}}$ and $\PGL[n+1]{\mathbb{R}}$ preserving $\pi^{-1}(\Omega)$ and $\Omega$, respectively, then we see that $\pi$ restricts to a 2-to-1 map from $\slpm[]{\Omega}$ to $\PGL[]{\Omega}$.
When $\Omega$ is properly convex we can construct a section of $\pi$ that is a homomorphism as follows. Since $\Omega\subset \mathbb{RP}^n$ is properly convex, the preimage of $\Omega$ under $\pi$ will consist of two connected components. Every element of $\slpm[]{\Omega}$ either preserves both of these components or interchanges them. Furthermore, $T\in \slpm[]{\Omega}$ preserves both components if and only if $-T$ interchanges them. As a result we see that if $[T]\in \PGL[]{\Omega}$ then there is a unique lift of $[T]$ to $\slpm[]{\Omega}$ that preserves both components of $\pi^{-1}(\Omega)$. Mapping $[T]$ to this lift yields the desired section. By using this section we are able to identify $\PGL[]{\Omega}$ with a subgroup of $\slpm[]{\Omega}$. As a result we will regard elements of $\PGL[]{\Omega}$ as linear transformations when convenient.
\subsection{Parabolic Coordinates}
The upper half space model of hyperbolic space offers a coordinate system that allows us to view hyperbolic space ``from a point at infinity,'' thus allowing us to simplify many arguments. In \cite{CooperLongTillman11} a projective analogue of these coordinates is introduced, and in this section we describe a generalization of these coordinates.
\begin{figure}
\caption{Parabolic coordinates for $\Omega$ }
\label{f:paraboliccoords}
\end{figure}
Let $\Omega$ be a properly convex domain, $p$ a point in $\partial \Omega$ and $H$ a supporting hyperplane containing $p$. The affine patch $\mathbb{RP}^n\backslash H$ can be identified with $\mathbb{R}^n$ in such a way that lines through $p$ not contained in $H$ are parallel to the $x_1$ axis. This is done by applying a projective coordinate change that maps $p$ to $[e_1]$ and $H$ to the projective plane dual to $[e_{n+1}]$, where $e_i$ is the $i$th standard basis vector of $\mathbb{R}^{n+1}$. We call the $x_1$ direction the \emph{vertical direction}. A set of coordinates with this property is referred to as \emph{parabolic coordinates centered at $(H,p)$}, or just parabolic coordinates when $H$ and $p$ are clear from context (see Figure \ref{f:paraboliccoords}). It is common to choose parabolic coordinates such that the origin is taken to be a point in $\partial \Omega$ and the plane $H_0$ of points where $x_1=0$ is a supporting hyperplane to $\Omega$, but we will not insist that this is the case.
Assume we have chosen parabolic coordinates centered at $(H,p)$. If $\gamma\in \PGL{\Omega}$ preserves both $p$ and $H$, then in these coordinates $\gamma$ will act as an affine map. Furthermore, $\gamma$ preserves the foliation of $\mathbb{R}^n$ by lines parallel to the $x_1$-axis. The space of these lines can be identified with $H_0$ via vertical projection and thus induces an affine action of $\gamma$ on $H_0$.
If $p$ is not contained in a line segment in $\partial \Omega\backslash H$ then in these coordinates $\partial \Omega\backslash H$ can be realized as the graph of a continuous convex function $h:U\to \mathbb{R}$, where $U$ is an open convex subset of $H_0$. Additionally, $\Omega\backslash H$ can be identified with the epigraph of this function. For this reason we call $h$ a \emph{boundary function} of $\Omega$. Since $\partial \Omega$ is preserved by $\gamma$ we see that $h$ has certain equivariance properties. Specifically, let $q\in \partial \Omega\backslash H$. Then there exists $v\in U$ such that $q=(h(v),v)$ and we see that $\gamma\cdot(h(v),v)=(h(\gamma\cdot v),\gamma\cdot v)$, where the action of $\gamma$ on $v$ is given by the affine action on $H_0$ described in the previous paragraph.
Parabolic coordinates allow us to define algebraic horospheres as follows. Let $t>0$ and let $e_1$ be the first standard basis vector. Define $\mathcal{S}_t$ as the translation by the vector $t e_1$ of the portion of $\partial \Omega$ that does not contain any line segments through $p$. These sets are called \emph{algebraic horospheres centered at $(p,H)$} (see \cite{CooperLongTillman11} for more details about algebraic horospheres). Furthermore, if $T>0$ we define an \emph{algebraic horoball centered at $(p,H)$} to be $\bigcup_{t>T}\mathcal{S}_t$.
\subsection{Convex Projective Manifolds}
Let $M$ be an $n$-manifold. A \emph{projective atlas} on $M$ is a collection of charts, $\phi_\alpha:U_\alpha\to \mathbb{RP}^n$, that cover $M$ with the property that if $U_\alpha$ and $U_\beta$ are charts with nontrivial intersection then $\phi_\alpha\circ \phi_\beta^{-1}$ is locally the restriction of an element of $\PGL[n+1]{\mathbb{R}}$. It can easily be shown that every projective atlas is contained in a unique maximal projective atlas and we call a maximal projective atlas on $M$ a \emph{projective structure} on $M$. A manifold equipped with a projective structure is called a \emph{projective manifold}. From this definition it is clear that a projective manifold is also a smooth manifold. A projective structure is a specific instance of a $(G,X)$ structure on $M$ (see \cite{Ratcliffe06} for an introduction to $(G,X)$ structures).
If $M$ and $M'$ are projective manifolds of the same dimension, then a continuous map $f:M\to M'$ is \emph{projective} if for each chart $\phi:U\to \mathbb{RP}^n$ of $M$ and each chart $\psi:V\to \mathbb{RP}^n$ of $M'$ such that $U$ and $f^{-1}(V)$ intersect, the map $$\psi\circ f\circ \phi^{-1}:\phi(U\cap f^{-1}(V))\to \psi(f(U)\cap V)$$ is locally the restriction of an element of $\PGL[n+1]{\mathbb{R}}$. Such a map is necessarily smooth.
The local data of a projective structure can be replaced with more global data as follows. By performing analytic continuation of a chosen chart, we obtain a local diffeomorphism $D:\tilde M\to \mathbb{RP}^n$, where $\tilde M$ is the universal cover of $M$, called a \emph{developing map}. We can also construct a representation $\rho:\fund{M}\to \PGL[n+1]{\mathbb{R}}$ called the \emph{holonomy}. The holonomy representation is equivariant with respect to the developing map in the following way: for $\gamma\in \fund{M}$ and $x\in \tilde M$ $$D(\gamma x)=\rho(\gamma)D(x),$$ where $\gamma$ acts on $\tilde M$ by deck transformation and $\rho(\gamma)$ acts on $\mathbb{RP}^n$ by projective automorphism. The pair $(D,\rho)$ is called a \emph{developing/holonomy pair}. A projective structure does not uniquely determine the pair $(D,\rho)$; however, if $(D',\rho')$ is another developing/holonomy pair for the same structure then there exists $g\in \PGL[n+1]{\mathbb{R}}$ such that $D'=gD$ and $\rho'=g\rho g^{-1}$.
Suppose that we are given a projective structure on $M$ with developing/holonomy pair $(D,\rho)$. If $D$ is a diffeomorphism onto a convex subset, $\Omega$, of $\mathbb{RP}^n$ then we say that the projective structure is \emph{convex}. If in addition, $\Omega$ is properly (resp.\ strictly) convex then we say that the structure is \emph{properly (resp.\ strictly) convex}. When a projective structure is convex, $\rho$ is a discrete and faithful representation and $M$ can be identified with $\Omega/\rho(\fund{M})$.
A fixed manifold often admits many different projective structures and we would like a space that coherently organizes these structures. Let $N$ be a manifold and suppose that $N$ is either closed or is the interior of a compact manifold with boundary. A \emph{marked projective structure} on $N$ is a pair $(M,f)$, where $M$ is a projective manifold and $f:N\to M$ is a diffeomorphism called a \emph{marking}. Two marked projective structures $(M,f)$ and $(M',f')$ are \emph{isotopic} if there exists a projective bijection, $h$, \emph{that is defined on the complement of a collar neighborhood of $\partial M$ onto the complement of a collar neighborhood of $\partial M'$} such that the following diagram commutes up to isotopy.
$$ \xymatrix{ & M\ar[dd]^h\\ N\ar[ur]^f\ar[dr]_{f'} &\\ & M' } $$
Let $\mathbb{RP}(N)$ be the set of isotopy classes of marked projective structures on $N$. The space $\mathbb{RP}(N)$ can be realized as the quotient of the space of isotopy classes of developing maps by the action of $\PGL[n+1]{\mathbb{R}}$ by post composition. We can thus topologize $\mathbb{RP}(N)$ using the smooth compact open topology on the set of functions from $\tilde N$ to $\mathbb{RP}^n$.
Let $\rvar[{\PGL[n+1]{\mathbb{R}}}]{\fund{N}}$ be the set of representations from $\fund{N}$ into $\PGL[n+1]{\mathbb{R}}$. We topologize $\rvar[{\PGL[n+1]{\mathbb{R}}}]{\fund{N}}$ using the compact open topology (this coincides with the topology of pointwise convergence on the images of a fixed generating set). Let $\charvar[{\PGL[n+1]{\mathbb{R}}}]{\fund{N}}$ be the quotient of $\rvar[{\PGL[n+1]{\mathbb{R}}}]{\fund{N}}$ by the action of $\PGL[n+1]{\mathbb{R}}$ by conjugation. We topologize $\charvar[{\PGL[n+1]{\mathbb{R}}}]{\fund{N}}$ using the quotient topology.
We can define a map \begin{equation}\label{e:holonomy} hol:\mathbb{RP}(N)\to \charvar[{\PGL[n+1]{\mathbb{R}}}]{\fund{N}} \end{equation}
by $hol([(M,f)])=[\rho_M\circ f_\ast]$, where $\rho_M$ is a holonomy for the projective manifold $M$. The following theorem was originally stated by Thurston \cite[Sect.\ 5.1]{ThurstonNotes} and detailed proofs can be found in \cite{Goldman88b} and \cite{CanaryEpsteinGreen87}.
\begin{theorem}[Thurston \cite{ThurstonNotes}]\label{t:thurstonholonomy} Let $N$ be the interior of a compact smooth manifold. Then $hol:\mathbb{RP}(N)\to\charvar[{\PGL[n+1]{\mathbb{R}}}]{\fund{N}}$ is a local homeomorphism. \end{theorem} As a consequence we see that elements of $\mathbb{RP}(N)$ can be locally parameterized by $\charvar[{\PGL[n+1]{\mathbb{R}}}]{\fund{N}}$. Let $\gold{N}$ be the set of isotopy classes that contain a properly convex representative. That is $[(M,f)]\in \gold{N}$ if there is $(M',f')\in [(M,f)]$ such that $M'$ is a properly convex projective manifold. The behavior of $\gold{N}$ as a subset of $\mathbb{RP}(N)$ depends on whether or not $N$ is closed. When $N$ is closed, work of Koszul \cite{Koszul68} shows that $\gold{N}$ is an open subset of $\mathbb{RP}(N)$. Furthermore, Benoist has shown in \cite{Benoist05} that when $N$ is closed $\gold{N}$ is a closed subset of $\mathbb{RP}(N)$. However, when $N$ is non-compact there exist, in general, sequences of non-discrete representations from $\fund{N}$ into $\PGL[n+1]{\mathbb{R}}$ that converge to the holonomy of a properly convex projective structure on $N$. As a result we see that $\gold{N}$ is not generally an open subset of $\mathbb{RP}(N)$.
However, recent work of Cooper, Long, and Tillmann has given a sufficient condition for a small deformation of the holonomy of a properly convex projective structure to continue to be the holonomy of a properly convex projective structure. Their condition is phrased in terms of the ends of the manifold having the structure of \emph{generalized cusps}, which are generalizations of cusps of hyperbolic manifolds and are defined in Section \ref{s:Z^2}. Specifically, they prove the following theorem.
\begin{theorem}[{\cite[Thm 0.1]{CooperLongTillman13}}]\label{t:holonomytheorem} Let $M$ be a properly convex projective manifold with strictly convex boundary and holonomy $\rho$. Suppose that $M=A\cup \mathcal{B}$ where $A$ is compact and $\partial A=A\cap \mathcal{B}=\partial \mathcal{B}$, and each component $B$ of $\mathcal{B}$ is a generalized cusp. If $\rho'$ is sufficiently close to $\rho$ in $\rvar[{\PGL[n+1]{\mathbb{R}}}]{\fund{M}}$ and for each component $B\subset \mathcal{B}$ there is a convex projective structure on $B$ which is a generalized cusp with holonomy $\rho'\vert_{\fund{B}}$ then there is a properly convex projective structure on $M$ with holonomy $\rho'$. \end{theorem}
\subsection{Hilbert Metric and Busemann Volume}
Every properly convex subset of $\mathbb{RP}^n$ comes equipped with a \emph{Hilbert metric} that is invariant under $\PGL{\Omega}$ and is defined as follows: Let $\Omega$ be an open properly convex domain in $\mathbb{RP}^n$ and let $x,y$ be distinct points in $\Omega$. Since $\Omega$ is properly convex, it follows that the line segment connecting $x$ to $y$ intersects $\partial \Omega$ in exactly two points, $a$ and $b$, such that $a$ is closer to $x$ and $b$ is closer to $y$ (see Figure \ref{f:hilbertmetric}). We define the Hilbert distance between $x$ and $y$ as $\dhilb{x}{y}:=\log\left([a:x:y:b]\right)$. Here $[a:x:y:b]=\frac{\abs{y-a}\abs{x-b}}{\abs{y-b}\abs{x-a}}$ is the projective cross ratio and $\abs{\cdot}$ is the Euclidean norm on any affine patch containing $\Omega$. When $x=y$ we define $\dhilb{x}{y}=0$. A proof that $d_\Omega$ is a metric can be found in \cite{delaHarpe93}. Since the cross ratio is invariant under projective automorphisms it is clear that the Hilbert metric is invariant under $\PGL{\Omega}$. When $\Omega$ is an ellipsoid the Hilbert metric coincides with twice the hyperbolic metric on the Klein model of hyperbolic space.
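The cross-ratio formula is concrete enough to evaluate directly. The following Python sketch (function name ours) computes the Hilbert distance on the interval $\Omega=(-1,1)$, the Klein model of $\mathbb{H}^1$, and confirms numerically that it is twice the hyperbolic distance there.

```python
import math

def hilbert_distance(a, x, y, b):
    # log of the projective cross ratio [a : x : y : b] for collinear points,
    # where a and b are the two boundary points cut out by the line through x, y
    cross = (abs(y - a) * abs(x - b)) / (abs(y - b) * abs(x - a))
    return math.log(cross)

# Omega = (-1, 1): the segment from x = 0 to y = 1/2 meets the boundary at -1 and 1
d = hilbert_distance(-1.0, 0.0, 0.5, 1.0)
# cross ratio = (1.5 * 1.0) / (0.5 * 1.0) = 3, so d = log 3,
# which is twice the Klein-model hyperbolic distance artanh(1/2)
assert abs(d - 2 * math.atanh(0.5)) < 1e-12
```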
\begin{figure}
\caption{The line segment defining the Hilbert metric }
\label{f:hilbertmetric}
\end{figure}
We now define a $\PGL{\Omega}$-invariant measure on $\Omega$. We first recall the definition of \emph{Hausdorff measure} for an arbitrary metric space. Let $(X,d)$ be a metric space and let $r$ be a non-negative real number. Let $E\subset X$ and let $S^\varepsilon_E$ be the set of all countable covers of $E$ by sets of diameter less than $\varepsilon$. We define $$m_\varepsilon^r(E)=\inf_{\{E_i\}\in S^\varepsilon_E}c_r\sum_{i}diam(E_i)^r,$$ for some constant $c_r$, and we let $m^r(E)=\lim_{\varepsilon \to 0}m_\varepsilon^r(E).$ This construction defines an outer measure on $X$ which when restricted to the $\sigma$-algebra of Borel sets gives the \emph{$r$-dimensional Hausdorff measure on $X$}.
We can use the Hilbert metric on $\Omega$ to define the $n$-dimensional Hausdorff measure on Borel subsets of $\Omega$. We normalize so that $c_n=\alpha_n$, where $\alpha_n$ is the Lebesgue measure of the Euclidean ball of diameter 1. We denote this measure by $\mu_{\Omega}$ and call it the \emph{Busemann volume on $\Omega$}. Since this measure is defined using only the Hilbert metric it is easy to see that it is invariant under $\PGL{\Omega}$.
The above definition of Busemann volume is not conducive to performing computations and so we now recast the definition in a way that is more computationally convenient (compare Marquis \cite{Marquis13}). Let $\mathcal{A}$ be an affine patch containing $\Omega$, equipped with the Euclidean metric $\abs{\cdot}$. We can use the Hilbert metric to define a Finsler structure on $\Omega$ as follows. We identify $T_x\Omega$ with $\mathcal{A}$ and if $v\in T_x\Omega$ we define \begin{equation}\label{e:hilbertnorm}
\Abs{v}_\Omega=\left.\frac{d}{dt}\right|_{t=0}\dhilb{x}{x+tv}=\abs{v}\left(\frac{1}{\abs{x-p_{-}}}+\frac{1}{\abs{x-p_{+}}}\right), \end{equation} where $p_{+}$ and $p_{-}$ are the intersection points of $\partial \Omega$ and the line in $\Omega$ through $x$ determined by $v$. Using this norm we can define the $n$-dimensional Hausdorff measure on $T_x\Omega$. We normalize this measure by choosing the constant $c_n=\alpha_n$, and denote it by $\mu^x_{\Omega}$.
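For the interval $\Omega=(-1,1)$ the norm \eqref{e:hilbertnorm} can be checked against a finite difference of the Hilbert distance. The Python sketch below (names ours) does this at $x=0.3$, $v=1$, where the boundary points are $p_{-}=-1$, $p_{+}=1$ and the formula gives $1/1.3+1/0.7 = 2/(1-x^2)$.

```python
import math

def d_hilb(x, y):
    # Hilbert distance on Omega = (-1, 1): log of the cross ratio [-1 : x : y : 1]
    return math.log((abs(y + 1) * abs(x - 1)) / (abs(y - 1) * abs(x + 1)))

def finsler_norm(x, v):
    # |v| (1/|x - p_-| + 1/|x - p_+|) with p_- = -1 and p_+ = 1
    return abs(v) * (1.0 / abs(x + 1) + 1.0 / abs(x - 1))

x, v, h = 0.3, 1.0, 1e-7
finite_diff = d_hilb(x, x + h * v) / h   # approximates (d/dt) d(x, x + tv) at t = 0
```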
In order to facilitate computations we define another Finsler structure on $\Omega$. In this Finsler structure the norm on each tangent space is given by the Euclidean norm on $\mathcal{A}$. Similarly, we can define a measure on each tangent space using the Lebesgue measure, $\mu_L$. The identity map on $\Omega$ gives a map between these two Finsler manifolds. Let $B\subset \Omega$ be a Borel set, then by the ``change of variables formula'' \cite[Prop 5.5.11]{BuragoBuragoIvanov01} we see that there exists $f:\Omega\to \mathbb{R}$ such that \begin{equation}\label{e:volform1} \hilbvol{B}=\int_Bf(x)d\mu_L(x), \end{equation} furthermore, the function $f$ can be easily described. The measures $\mu_L$ and $\mu^x_\Omega$ are both translation invariant measures and therefore differ by multiplication by a constant. The function $f(x)$ is this constant. Let $B_x^\Omega(1)$ be the unit ball in $T_x\Omega$ for the norm $\Abs{\cdot}_{\Omega}$. A simple computation shows that $f(x)=\frac{\mu^x_\Omega(B_x^\Omega(1))}{\mu_L(B_x^\Omega(1))}$. By work of Busemann \cite{Busemann47} we see that $\mu^x_\Omega(B_x^\Omega(1))=\alpha_n$, and we see that \eqref{e:volform1} becomes \begin{equation}\label{e:hilbertvolume} \hilbvol{B}=\int_B\frac{\alpha_n}{\mu_L(B_x^\Omega(1))}d\mu_L(x). \end{equation}
The Busemann volume is $\PGL{\Omega}$-invariant and thus descends to a Borel measure on any properly convex manifold $M=\Omega/\Gamma$, where $\Gamma$ is a discrete torsion-free subgroup of $\PGL{\Omega}$. We refer to this measure as the \emph{Busemann volume on $M$} and denote it $\mu_M$.
We close this section by mentioning some comparison properties of the Hilbert metric and Busemann volume that we will use throughout. Let $\Omega\subset \Omega'$ be two properly convex domains. If $x,y\in \Omega$ then a simple computation shows that $\dhilb[\Omega']{x}{y}\leq \dhilb[\Omega]{x}{y}$. Similarly, if $A\subset \Omega$ is a Borel subset then $\hilbvol[\Omega']{A}\leq \hilbvol[\Omega]{A}$.
\section{Properly Convex Cusps and Representations of $\mathbb{Z}^2$}\label{s:Z^2}
In this section we construct properly convex cusps that can be realized as small deformations of standard cusps coming from the complete hyperbolic structure on a cusped 3-manifold. To begin with let $\mathfrak{L}_0$ be the Abelian Lie subalgebra of $\mathfrak{gl}_4$ consisting of matrices of the form $$\begin{pmatrix}
0 & r & s & 0\\
0 & 0 & 0 & r\\
0 & 0 & 0 & s\\
0 & 0 & 0 & 0
\end{pmatrix} $$ If we exponentiate $\mathfrak{L}_0$ we get a Lie subgroup $L_0$ of $\GL[4]{\mathbb{R}}$ consisting of matrices of the form $$\begin{pmatrix}
1 & r & s & \frac{1}{2}(r^2+s^2)\\
0 & 1 & 0 & r\\
0 & 0 & 1 & s\\
0 & 0 & 0 & 1
\end{pmatrix} $$ The elements of $L_0$ preserve the 3-dimensional affine subspace ${\{(x_1,x_2,x_3,x_4)\in \mathbb{R}^4\vert x_4=1\}}$ and can thus be regarded as affine transformations of $\mathbb{R}^3$. Furthermore, by projectivizing we get an embedded copy of $L_0$ in $\rm{PGL}_{4}(\R)$, which we also call $L_0$.
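Since a generic element $N$ of $\mathfrak{L}_0$ is $3$-step nilpotent, its exponential is the finite sum $I+N+N^2/2$, which reproduces the matrices above. The following Python sketch (helper names ours) checks this with exact rational arithmetic.

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

r, s = F(2), F(3)
# generic element N of the Lie algebra L_0
N = [[F(0), r, s, F(0)],
     [F(0), F(0), F(0), r],
     [F(0), F(0), F(0), s],
     [F(0), F(0), F(0), F(0)]]
N2 = matmul(N, N)
assert matmul(N2, N) == [[F(0)] * 4 for _ in range(4)]   # N^3 = 0
I = [[F(int(i == j)) for j in range(4)] for i in range(4)]
expN = [[I[i][j] + N[i][j] + N2[i][j] / 2 for j in range(4)] for i in range(4)]
# matches the stated form of elements of L_0
expected = [[F(1), r, s, (r * r + s * s) / 2],
            [F(0), F(1), F(0), r],
            [F(0), F(0), F(1), s],
            [F(0), F(0), F(0), F(1)]]
assert expN == expected
```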
Let $D_0=\{(x_1,x_2,x_3)\vert x_1>\frac{1}{2}(x_2^2+x_3^2)\}\subset \mathbb{R}^3$, then we see that the affine action of $L_0$ preserves $D_0$. Furthermore, if we let $c\geq 0$ and let $\mathcal{H}^0_c$ be the $L_0$-orbit of $(c,0,0)$ then we get a foliation of $D_0$. Explicitly, $\mathcal{H}^0_c$ is the graph of the strictly convex function $f_c(x_2,x_3)=\frac{1}{2}(x_2^2+x_3^2)+c$ and we can think of $D_0$ as the epigraph of $f_0$. We can now realize $D_0$ as a subset of $\mathbb{RP}^3$ via the embedding $(x_1,x_2,x_3)\mapsto[x_1:x_2:x_3:1]$ of $\mathbb{R}^3$ into $\mathbb{RP}^3$. In this way we realize parabolic coordinates for $D_0$ in which $f_0$ is a boundary function. It is easy to see that $D_0$ is the copy of $\mathbb{H}^3$ obtained by projectivizing the negative cone of the quadratic form
$$Q(x_1,x_2,x_3,x_4)=\frac{1}{2}(x_2^2+x_3^2)-x_1x_4.$$
With this point of view we see that the foliation $\mathcal{H}^0_c$ is just the foliation by horospheres centered at the point $[1:0:0:0]$ (which we think of as the point at $\infty$).
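One can verify directly both that $L_0$ preserves $Q$ and that the graph of $f_0$ lies on the quadric $Q=0$. A Python sketch with exact arithmetic (helper names ours):

```python
from fractions import Fraction as F

def Q(x):
    # Q(x1, x2, x3, x4) = (x2^2 + x3^2)/2 - x1 x4
    x1, x2, x3, x4 = x
    return (x2 * x2 + x3 * x3) / 2 - x1 * x4

def apply(g, x):
    return tuple(sum(g[i][j] * x[j] for j in range(4)) for i in range(4))

r, s = F(2), F(-3)
g = [[F(1), r, s, (r * r + s * s) / 2],   # an element of L_0
     [F(0), F(1), F(0), r],
     [F(0), F(0), F(1), s],
     [F(0), F(0), F(0), F(1)]]
x = (F(5), F(1), F(2), F(1))
assert Q(apply(g, x)) == Q(x)             # L_0 preserves the form Q
x2, x3 = F(3), F(4)
p = ((x2 * x2 + x3 * x3) / 2, x2, x3, F(1))   # the point (f_0(x2, x3), x2, x3, 1)
assert Q(p) == 0                          # the boundary of D_0 lies on Q = 0
```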
If we let $\Gamma_0$ be a lattice inside $L_0$ then $D_0/\Gamma_0$ is a properly (in fact strictly) convex manifold diffeomorphic to $T^2\times(0,\infty)$ via a diffeomorphism that maps $\mathcal{H}^0_c/\Gamma_0$ to $T^2\times \{c\}$. Furthermore, each $T^2\times \{c\}$ is equipped with the Euclidean similarity structure whose developing map is given by the restriction of the projection $[x_1:x_2:x_3:1]\mapsto [x_2:x_3:1]$ to $\mathcal{H}^0_c$.
We now recall the construction, in dimension 3, of generalized cusps from \cite{CooperLongTillman13}. Let $\Phi_t$ be the affine flow on $\mathbb{R}^3$ given by the affine transformation $$\Phi_t=\begin{pmatrix}
1 & 0 & 0 & t\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix} $$ If $M$ is a convex projective $3$-manifold then there exist a convex open $\Omega\subset \mathbb{RP}^3$ and a discrete group $\Gamma\subset \PGL{\Omega}$ such that $M\cong \Omega/\Gamma$. We say that $M$ is a \emph{complete generalized cusp} if $\Gamma$ is conjugate in $\rm{PGL}_{4}(\R)$ into the centralizer of $\Phi_t$.
A \emph{local hyperplane} in $M$ is an evenly covered subset (with respect to the universal cover $\Omega$) such that some (hence any) lift to $\Omega$ develops into a convex subset of an affine 2-plane in $\mathbb{R}^3$. Let $W$ be a 2-dimensional submanifold of $M$. A point $p\in W$ is called \emph{strictly convex} if there exists a local hyperplane $P$ such that $P\cap W=\{p\}$. If every point of $W$ is strictly convex then we say that $W$ is \emph{strictly convex}.
Let $B$ be an open properly convex submanifold of $M$ such that $\partial B$ is strictly convex. Suppose also that there is an identification of $B$ with $\partial B\times [0,\infty)$ such that some (hence any) lift of $\{b\}\times [0,\infty)$ to $\Omega$ develops into a flowline of $\Phi_t$ for every $b\in \partial B$. If $B$ has these properties then $B$ is called a \emph{generalized cusp}.
Since $\Gamma_0$ is contained in the centralizer of $\Phi_t$ we see that $\mathbb{R}^3/\Gamma_0$ is a complete generalized cusp. The properly convex submanifold $D_0/\Gamma_0$ has strictly convex boundary and admits a product structure given by flowlines of $\Phi_t$, and is thus a generalized cusp.
We now repeat the above construction while allowing ourselves to vary the initial Lie algebra. Let $\mathfrak{L}_t$ be the Abelian Lie subalgebra of $\mathfrak{gl}_4$ consisting of matrices of the form $$\begin{pmatrix}
0 & r & s & 0\\
0 & t r & 0 & r\\
0 & 0 & 0 & s\\
0 & 0 & 0 & 0
\end{pmatrix} $$ If we exponentiate $\mathfrak{L}_t$ then we obtain the Lie subgroup, $L_t\leq \GL[4]{\mathbb{R}}$ of affine transformations of the form $$\begin{pmatrix}
1 & \frac{e^{t r}-1}{t} & s & \frac{e^{tr}-t r-1}{t^2}+\frac{s^2}{2}\\
0 & e^{t r} & 0 & \frac{e^{t r}-1}{t}\\
0 & 0 & 1 & s\\
0 & 0 & 0 & 1
\end{pmatrix} $$ By projectivizing we obtain an embedded copy of $L_t$ inside $\rm{PGL}_{4}(\R)$, which we also call $L_t$.
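The closed form for the exponential of $\mathfrak{L}_t$ above can be confirmed by truncating the exponential series. The Python sketch below (names ours) compares the two numerically at sample parameter values.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def expm(N, terms=40):
    # truncated exponential series I + N + N^2/2! + ...
    result = [[float(i == j) for j in range(4)] for i in range(4)]
    term = [[float(i == j) for j in range(4)] for i in range(4)]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in matmul(term, N)]   # N^k / k!
        result = [[result[i][j] + term[i][j] for j in range(4)] for i in range(4)]
    return result

t, r, s = 0.5, 0.7, 1.3
N = [[0, r, s, 0],          # generic element of the Lie algebra L_t
     [0, t * r, 0, r],
     [0, 0, 0, s],
     [0, 0, 0, 0]]
e = math.exp(t * r)
closed = [[1, (e - 1) / t, s, (e - t * r - 1) / t ** 2 + s ** 2 / 2],
          [0, e, 0, (e - 1) / t],
          [0, 0, 1, s],
          [0, 0, 0, 1]]
E = expm(N)
err = max(abs(E[i][j] - closed[i][j]) for i in range(4) for j in range(4))
```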
Using parabolic coordinates, we can again regard the $L_t$-orbit of $(0,0,0)$ as the graph of a strictly convex function and let $D_t$ be the epigraph of this function. As above, we can regard $D_t$ as a domain in $\mathbb{RP}^3$. The following lemma demonstrates some convexity properties of $D_t$ in this context.
\begin{lemma}\label{l:propconvex}
For each $t>0$, $D_t$ is a properly, but not strictly convex subset of $\mathbb{RP}^3$. \end{lemma} \begin{proof}
Let $\mathfrak{L}'$ be the Abelian Lie algebra of matrices of the form
\begin{equation} \begin{pmatrix}
0 & 0 & b & -a\\
0 & a & 0 & 0\\
0 & 0 & 0 & b\\
0 & 0 & 0 & 0
\end{pmatrix}\label{e:liealg} \end{equation} which exponentiates to the Lie group, $L'$ of matrices of the form \begin{equation}\label{e:liegroup1}
\begin{pmatrix}
1 & 0 & b & \frac{1}{2}b^2-a\\
0 & e^a & 0 & 0\\
0 & 0 & 1 & b\\
0 & 0 & 0 & 1
\end{pmatrix} \end{equation} Next, let $D'$ be the epigraph of the $L'$-orbit of the point $(0,1,0)$ (see Figure \ref{f:domain1}).
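The exponential of $\mathfrak{L}'$ can likewise be checked against a truncated series; in the sketch below (names ours), the $e^a$ entry comes from the diagonal part while the remaining coordinates see a nilpotent block.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def expm(N, terms=40):
    # truncated exponential series I + N + N^2/2! + ...
    result = [[float(i == j) for j in range(4)] for i in range(4)]
    term = [[float(i == j) for j in range(4)] for i in range(4)]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in matmul(term, N)]   # N^k / k!
        result = [[result[i][j] + term[i][j] for j in range(4)] for i in range(4)]
    return result

a, b = 0.7, -1.1
X = [[0, 0, b, -a],                  # generic element of the Lie algebra L'
     [0, a, 0, 0],
     [0, 0, 0, b],
     [0, 0, 0, 0]]
closed = [[1, 0, b, b * b / 2 - a],  # the stated form of elements of L'
          [0, math.exp(a), 0, 0],
          [0, 0, 1, b],
          [0, 0, 0, 1]]
EX = expm(X)
err = max(abs(EX[i][j] - closed[i][j]) for i in range(4) for j in range(4))
```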
\begin{figure}
\caption{The boundary of $D'$}
\label{f:domain1}
\end{figure}
The affine transformation \begin{equation}\label{e:Vt} V_t=\begin{pmatrix}
1/t^2 & 1/t^2 & 0 & -1/t^2\\
0 & 1/t & 0 & -1/t\\
0 & 0 & 1/t & 0\\
0 & 0 & 0 & 1
\end{pmatrix} \end{equation} conjugates $\mathfrak{L}_t$ to $\mathfrak{L}'$ (under this conjugation, $a=tr$ and $b=ts$). Since $V_t\cdot[0:1:0:1]=[0:0:0:1]$ we see that $V_t^{-1}D_t=D'$. Thus it suffices to show that $D'$ is properly, but not strictly convex.
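The conjugation can be checked by exact matrix arithmetic. In the Python sketch below (names ours), we verify that $V_t^{-1} X V_t$ lies in $\mathfrak{L}'$ with $a=tr$ and $b=ts$ for a generic element $X\in \mathfrak{L}_t$, at sample rational parameter values.

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

t, r, s = F(2), F(3), F(5)
V = [[1 / t ** 2, 1 / t ** 2, F(0), -1 / t ** 2],   # the matrix V_t
     [F(0), 1 / t, F(0), -1 / t],
     [F(0), F(0), 1 / t, F(0)],
     [F(0), F(0), F(0), F(1)]]
Vinv = [[t ** 2, -t, F(0), F(0)],                   # its inverse
        [F(0), t, F(0), F(1)],
        [F(0), F(0), t, F(0)],
        [F(0), F(0), F(0), F(1)]]
I = [[F(int(i == j)) for j in range(4)] for i in range(4)]
assert matmul(V, Vinv) == I
X = [[F(0), r, s, F(0)],          # generic element of the Lie algebra L_t
     [F(0), t * r, F(0), r],
     [F(0), F(0), F(0), s],
     [F(0), F(0), F(0), F(0)]]
a, b = t * r, t * s
Lprime = [[F(0), F(0), b, -a],    # expected element of L', with a = tr, b = ts
          [F(0), a, F(0), F(0)],
          [F(0), F(0), F(0), b],
          [F(0), F(0), F(0), F(0)]]
assert matmul(matmul(Vinv, X), V) == Lprime
```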
To see this observe that the $L'$-orbit of $(0,1,0)$ is the set $$\left\{\left(\frac{b^2}{2}-a,e^{a},b\right)\vert a,b\in \mathbb{R}\right\}.$$ Under the coordinate change $A=e^{a}$ and $B=b$, this orbit can be written as $$\left\{\left(\frac{B^2}{2}-\log(A),A,B\right)\vert A,B\in \mathbb{R}, A>0\right\},$$ and we see that $${D'=\left\{\left(x_1,x_2,x_3\right)\in\mathbb{R}^3\vert x_1>\frac{x_3^2}{2}-\log(x_2),x_2>0\right\}}.$$ Thus we can realize $D'$ as the epigraph of the function $F(x_2,x_3)=\frac{x_3^2}{2}-\log(x_2)$. By computing the Hessian we see that $F$ is a convex function and thus $D'$ is a convex set. The function $F$ is the boundary function for parabolic coordinates for $D'$ centered at $(p,H)$, where $p=[1:0:0:0]$ and $H$ is the projective plane dual to $[0:0:0:1].$
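The orbit computation translates directly into code: applying an element of $L'$ to $(0,1,0)$ produces the point $(\frac{b^2}{2}-a, e^a, b)$, which lies on the graph of the boundary function $F$. A numerical Python sketch (names ours):

```python
import math

def F(x2, x3):
    # boundary function of D'
    return x3 * x3 / 2 - math.log(x2)

def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(4)) for i in range(4)]

a, b = 0.7, 1.3
g = [[1, 0, b, b * b / 2 - a],    # element of L'
     [0, math.exp(a), 0, 0],
     [0, 0, 1, b],
     [0, 0, 0, 1]]
x1, x2, x3, x4 = apply(g, [0.0, 1.0, 0.0, 1.0])   # orbit of (0, 1, 0)
assert x4 == 1.0
assert abs(x1 - F(x2, x3)) < 1e-12   # the orbit point lies on the graph of F
```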
We next show that $D'$ does not contain a complete affine line. Suppose for contradiction that $\ell$ is an affine line that is entirely contained in $D'$. Our argument will make use of the parabolic coordinates introduced in the previous paragraph. If the tangent vector of $\ell$ has any component in the $x_2$ direction then $\ell$ will intersect the $x_1x_3$-plane, which is disjoint from $D'$. Thus $\ell$ must lie in a plane, $P$, parallel to the $x_1x_3$-plane. The intersection of $P$ with $D'$ is the epigraph of a function of the form $x_3^2/2+C$ for some constant $C$. It is easy to see that any affine line in $P$ intersects the graph of $x^2_3/2+C$, which contradicts the fact that $\ell$ is entirely contained in $D'$.
Finally, we show that $D'$ is not strictly convex by exhibiting an explicit line segment in $\partial D'$. Let $c>0$ and observe that the affine ray $$R_c=\{[c u:u:0:1]\mid u>T_c\},$$ (where $T_c$ is a positive constant that depends on $c$) is completely contained in $D'$. The ideal endpoint of this affine ray is $[c:1:0:0]$, which is contained in the boundary of $D'$ in $\mathbb{RP}^3$. Varying $c$ parameterizes a line segment that is contained in $\partial D'$. \qed \end{proof}
The domains $D'$ and $D_t$ also admit foliations by horospheres. Let $\mathcal{H}'_\kappa$ be the $L'$-orbit of $(\kappa,1,0)$ and let $\mathcal{H}^t_\kappa$ be the $L_t$-orbit of the point $(\kappa,0,0)$. The collections $\{\mathcal{H}'_\kappa\}_{\kappa>0}$ (resp.\ $\{\mathcal{H}^t_\kappa\}_{\kappa>0}$) are easily seen to be foliations of $D'$ (resp.\ $D_t$) by algebraic horospheres. For this reason we call $\mathcal{H}'_\kappa$ (resp.\ $\mathcal{H}^t_\kappa$) \emph{$L'$-horospheres} (resp.\ \emph{$L_t$-horospheres}).
If $\Gamma_t$ is a lattice inside $L_t$ then $D_t/\Gamma_t$ is a properly convex manifold that is again diffeomorphic to $T^2\times(0,\infty)$ by a diffeomorphism that sends $\mathcal{H}^t_\kappa/\Gamma_t$ to $T^2\times\{\kappa\}$. In this case the map $[x_1:x_2:x_3:1]\mapsto[x_2:x_3:1]$ restricted to $\mathcal{H}^t_\kappa$ is the developing map for an affine structure on $\mathcal{H}^t_\kappa/\Gamma_t$. If we let $E_t=\{(x_1,x_2,x_3)\in \mathbb{R}^3\vert x_2>-1/t\}$, then $E_t/\Gamma_t$ is a complete generalized cusp and $D_t/\Gamma_t$ is a generalized cusp. As $t\to 0$ we see that $\mathfrak{L}_t$ converges to $\mathfrak{L}_0$, $E_t$ converges to $\mathbb{R}^3$, and $D_t$ converges to $D_0$ (see Figure \ref{f:domaindeformation}). Furthermore, if $\Gamma_t$ converges, as $t\to 0$, to a lattice in $L_0$ then $D_t/\Gamma_t$ will converge to a hyperbolic cusp and the affine tori $\mathcal{H}^t_\kappa/\Gamma_t$ will converge to a Euclidean torus (see Figure \ref{f:affinedeformation}). In \cite{Baues11}, Baues describes in more detail how 2-dimensional affine tori can converge to one another. In his notation, the convergence we have just described is a sequence of type $C_1$ affine structures converging to a type $D$ affine structure.
\begin{figure}
\caption{Convergence of $D_t$ to $D_0$.}
\label{f:domaindeformation}
\end{figure}
\begin{figure}
\caption{Fundamental domains of affine tori converging to fundamental domains of a Euclidean torus }
\label{f:affinedeformation}
\end{figure}
\subsection{Rank 2 Lattices in $L'$}
To conclude this section we characterize the holonomy of cusps which are projectively equivalent to $D'/\Gamma'$, where $\Gamma'$ is a lattice in $L'$. Additionally, we discuss when a family of such holonomies converges to the holonomy of a hyperbolic cusp. We begin by analyzing which rank 2 discrete Abelian subgroups of $\rm{PGL}_{4}(\R)$ are conjugate into $L'$.
We begin by observing that if $A$ and $B$ are two commuting matrices in $\GL[4]{\mathbb{R}}$ with positive, real eigenvalues then the group $\langle A,B\rangle$ is contained in a Lie subgroup, $L_{AB}$, of $\GL[4]{\mathbb{R}}$ isomorphic to $\mathbb{R}^2$. This Lie group is isomorphic to its Lie algebra, $\mathfrak{L}_{AB}$ via the exponential map. Let $\alpha$ and $\beta$ be the elements of $\mathfrak{L}_{AB}$ corresponding to $A$ and $B$ via the exponential map. We can now define a map $m:\mathfrak{L}_{AB}\to \mathbb{R}[t]$ that assigns to an element of $\mathfrak{L}_{AB}$ its minimal polynomial. We call this function the \emph{minimal polynomial map} for $\mathfrak{L}_{AB}$. It is clear that the minimal polynomial map is invariant under conjugation by an element of $\GL[4]{\mathbb{R}}$.
Next, we examine how the function $m$ behaves on the Lie algebra $\mathfrak{L'}$. An element $x\neq 0$ of $\mathfrak{L'}$ has the form $$x=\begin{pmatrix}
0 & 0 & b & -a\\
0 & a & 0 & 0\\
0 & 0 & 0 & b\\
0 & 0 & 0 & 0
\end{pmatrix} $$ and we see that $m(x)=t^{n(a,b)}(t-a)$, where $$n(a,b)=\left\{\begin{array}{rcl}
2 & \text{if} & ab=0\\
3 & & \text{otherwise}
\end{array} \right.$$ More generally, we see that \begin{equation}\label{e:minpolyform} m(x)=t^{n(x)}(t-f(x)) \end{equation}
where $f(x)$ is a linear functional on $\mathfrak{L'}$ and $n:\mathfrak{L'}\backslash\{0\}\to \{2,3\}$. The kernel of this linear functional is contained in $n^{-1}(2)$, and $n^{-1}(2)$ is a union of two distinct linear subspaces of $\mathfrak{L'}$. Elements of $L'$ that correspond under the exponential map to elements of $\ker f$ are called \emph{pure translations}. Elements of $L'$ that correspond under the exponential map to elements of $n^{-1}(2)\backslash \ker f$ are called \emph{pure dilations}. Elements of $L'$ are affine transformations of $\mathbb{R}^3$ that preserve the foliation of $\mathbb{R}^3$ by vertical lines. The set of these lines can be identified with $\mathbb{R}^2$ in such a way that $L'$ acts by affine transformations. The action on $\mathbb{R}^2$ of a pure translation corresponds to an affine transformation whose linear part is the identity and the action on $\mathbb{R}^2$ of a pure dilation corresponds to an affine transformation whose linear part has distinct real eigenvalues and whose translational part is trivial.
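These minimal polynomials are straightforward to verify by exact matrix arithmetic. The Python sketch below (names ours) checks that $x^2(x-aI)\neq 0$ while $x^3(x-aI)=0$ when $ab\neq 0$, and that $x^2(x-aI)=0$ (with $x(x-aI)\neq 0$) when $ab=0$.

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def elt(a, b):
    # element x of the Lie algebra L'
    return [[F(0), F(0), b, -a],
            [F(0), a, F(0), F(0)],
            [F(0), F(0), F(0), b],
            [F(0), F(0), F(0), F(0)]]

def poly(x, a, power):
    # evaluates x^power (x - a I)
    out = [[x[i][j] - a * F(int(i == j)) for j in range(4)] for i in range(4)]
    for _ in range(power):
        out = matmul(x, out)
    return out

Z = [[F(0)] * 4 for _ in range(4)]
a, b = F(2), F(3)
x = elt(a, b)
assert poly(x, a, 2) != Z and poly(x, a, 3) == Z   # ab != 0: m(x) = t^3 (t - a)
y = elt(a, F(0))
assert poly(y, a, 1) != Z and poly(y, a, 2) == Z   # ab = 0:  m(y) = t^2 (t - a)
```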
At first, one might hope that any Lie algebra for which the minimal polynomial map has these properties would be conjugate to $\mathfrak{L'}$. However the Lie algebra $\mathfrak{L'_{-}}$ of elements of the form
$$\begin{pmatrix}
0 & 0 & b & a\\
0 & a & 0 & 0\\
0 & 0 & 0 & b\\
0 & 0 & 0 & 0
\end{pmatrix} $$ gives rise to a minimal polynomial map of the same form, but $\mathfrak{L'}$ is not conjugate to $\mathfrak{L_{-}'}.$
\begin{remark}\label{r:convexorbit}
To see why $\mathfrak{L'}$ and $\mathfrak{L'_{-}}$ are not conjugate we look at the corresponding Lie groups. As we have seen, the Lie group $L'$ corresponding to $\mathfrak{L'}$ preserves an open, properly convex domain bounded by the closure of the $L'$-orbit of $[0:1:0:1]$. The Lie group $L'_{-}$ corresponding to $\mathfrak{L'_{-}}$ preserves no such properly convex domain. \end{remark}
However, the following lemma shows that once we know that the minimal polynomial map has these properties, this is the only ambiguity.
\begin{lemma}\label{l:liealgconjugacy} Let $\mathfrak{F}$ be a two dimensional Abelian Lie subalgebra of $\mathfrak{gl}_4$. Suppose that for $x\neq 0$ in $\mathfrak{F}$ that $m(x)=t^{n(x)}(t-f(x))$, where $f:\mathfrak{F}\to \mathbb{R}$ is a non-trivial linear functional and $n:\mathfrak{F}\backslash\{0\}\to \{2,3\}$ with the property that \begin{itemize}
\item $\ker f\subset n^{-1}(2)$ and
\item $n^{-1}(3)\neq \emptyset$ . \end{itemize} Then $\mathfrak{F}$ is conjugate to either $\mathfrak{L'}$ or $\mathfrak{L'_{-}}$. \end{lemma}
\begin{proof}
We begin by selecting generators $\alpha,\beta$ for $\mathfrak{F}$ such that $\alpha\in n^{-1}(3)$ (observe that our hypothesis forces $f(\alpha)\neq0$). Using (a small variation of) Jordan normal form, we can select a basis where
$$\alpha=\begin{pmatrix}
0 & 0 & 1 & 0\\
0 & f(\alpha) & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 0 & 0
\end{pmatrix}\text{ and } \beta=\begin{pmatrix}
b_{11} & b_{12} & b_{13} & b_{14}\\
b_{21} & b_{22} & b_{23} & b_{24}\\
b_{31} & b_{32} & b_{33} & b_{34}\\
b_{41} & b_{42} & b_{43} & b_{44}
\end{pmatrix} $$
Computing the commutator we see that $$[\alpha,\beta]=\begin{pmatrix} b_{31} & b_{32}-f(\alpha)b_{12} & b_{33}-b_{11} & b_{34}-b_{13}\\ f(\alpha)b_{21} & 0 & f(\alpha)b_{23}-b_{21} & f(\alpha)b_{24}-b_{23}\\ b_{41} & b_{42}-f(\alpha)b_{32} & b_{43}-b_{31} & b_{44}-b_{33}\\ 0 & -f(\alpha)b_{42} & -b_{41} & -b_{43} \end{pmatrix}.$$ Therefore, $b_{12}=b_{21}=b_{23}=b_{24}=b_{31}=b_{32}=b_{41}=b_{42}=b_{43}=0$, $b_{11}=b_{33}=b_{44}$, and $b_{13}=b_{34}$. We conclude that $$\beta=\begin{pmatrix} b_{11} & 0 & b_{13} & b_{14}\\ 0 & b_{22} & 0 & 0 \\ 0 & 0 & b_{11} & b_{13}\\ 0 & 0 & 0 & b_{11} \end{pmatrix}.$$
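As a sanity check (with arbitrary numerical entries, not taken from the paper), a matrix of this concluded shape does commute with $\alpha$:

```python
import numpy as np

# alpha in the chosen basis, with f(alpha) = lam (arbitrary nonzero value)
lam = 1.5
alpha = np.array([[0, 0, 1, 0],
                  [0, lam, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0.0]])

# beta of the concluded shape (entries chosen arbitrarily)
b11, b13, b14, b22 = 0.2, 0.4, 0.7, 0.9
beta = np.array([[b11, 0, b13, b14],
                 [0, b22, 0, 0],
                 [0, 0, b11, b13],
                 [0, 0, 0, b11]])

comm = alpha @ beta - beta @ alpha
print(np.allclose(comm, 0))  # True: matrices of this shape commute with alpha
```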
From the properties of $m$ we deduce that $b_{11}=0$ and $b_{22}=f(\beta)$, and thus
\begin{comment}
Thus $\alpha$ is a matrix with spectrum $\{0,0,0,f(\alpha)\}$. Since $\mathfrak{F}$ is Abelian we can select a basis such that
$$\alpha=\begin{pmatrix}
0 & 0 & a_{13} & a_{14}\\
0 & f(\alpha) & a_{23} & a_{24}\\
0 & 0 & 0 & a_{34}\\
0 & 0 & 0 & 0
\end{pmatrix}\text{ and } \beta=\begin{pmatrix}
b_{1} & 0 & b_{13} & b_{14}\\
0 & b_{2} & b_{23} & b_{24}\\
0 & 0 & b_{3} & b_{34}\\
0 & 0 & 0 & b_{4}
\end{pmatrix}. $$
From the form of $m$ we know that every element of $\mathfrak{F}$ has at most one non-zero eigenvalue (not counted with multiplicity). We claim that $b_{1}=b_{3}=b_{4}=0$. To see this assume for contradiction that $b_1\neq 0$. There are now two cases: either $b_2=b_1$ or $b_2=0$. If $b_2=b_1$ then we see that $\alpha+\beta$ has eigenvalues $b_1$ and $f(\alpha)+b_1$. Since $f(\alpha)\neq 0$ we see that this implies that $f(\alpha)=-b_1$. Therefore the matrix $\alpha+2\beta$ has eigenvalues $2b_1$ and $b_1$. One of these eigenvalues must be zero which contradicts the fact that $b_1\neq 0$. In the case where $b_2=0$ we see that $\alpha+\beta$ has eigenvalues $b_1$ and $f(\alpha)$. Since $f(\alpha)\neq 0$ this forces $f(\alpha)=b_1$. Now $\alpha+2\beta$ has eigenvalues $b_1$ and $2b_1$ and again we see that this contradicts the fact that $b_1\neq 0$. A similar argument shows that $b_3=b_4=0$.
By conjugating by the matrix $$\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & -\frac{a_{23}}{f(\alpha)} & -\frac{f(\alpha) a_{24}+a_{23}a_{34}}{f(\alpha)^2}\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}$$
we can assume that $a_{23}=a_{34}=0$. Examining the commutator we see that
\begin{equation}\label{e:commutator}[\alpha,\beta]=\begin{pmatrix}
0 & 0 & 0 & a_{13}b_{34}-a_{34}b_{13}\\
0 & 0 & f(\alpha)b_{23} & f(\alpha)b_{24}-a_{34}b_{23}\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix} \end{equation} Therefore $b_{23}=b_{24}=0$. We have now reduced to the case where $$\alpha=\begin{pmatrix}
0 & 0 & a_{13} & a_{14}\\
0 & f(\alpha) & 0 & 0\\
0 & 0 & 0 & a_{34}\\
0 & 0 & 0 & 0
\end{pmatrix}\text{ and } \beta=\begin{pmatrix}
0 & 0 & b_{13} & b_{14}\\
0 & f(\beta) & 0 & 0\\
0 & 0 & 0 & b_{14}\\
0 & 0 & 0 & 0
\end{pmatrix} $$ We can assume that both $a_{13}$ and $a_{34}$ are nonzero since otherwise we would have $n(\alpha)=2$. After conjugating by $$\begin{pmatrix}
\frac{a_{13}}{a_{34}} & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}$$
we can assume that $a_{13}=a_{34}$. From \eqref{e:commutator} we see that this forces $b_{13}=b_{34}$.
\end{comment}
we have now reduced to the case that $\mathfrak{F}$ is conjugate to a Lie algebra of the form
$$\begin{pmatrix}
0 & 0 & b &c_1a+c_2b\\
0 & a & 0 & 0\\
0 & 0 & 0 & b\\
0 & 0 & 0 & 0
\end{pmatrix} $$ Here $c_1\neq0$, since otherwise there would be an element of $\mathfrak{F}$ whose minimal polynomial is not divisible by $t^2$. Finally, by conjugating by $$\begin{pmatrix}
\abs{c_1} & 0 & 0 & 0\\
0 & 1& 0 & 0\\
0 & 0 & \sqrt{\abs{c_1}} & -c_2\\
0 & 0 & 0 & 1
\end{pmatrix} $$ we can assume that $c_2=0$ and $c_1=\pm 1$. \qed \end{proof}
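The effect of this last conjugation can be checked numerically. The sketch below takes arbitrary values with $c_1>0$ and assumes the convention that conjugation acts by $X\mapsto C^{-1}XC$; the $c_2$ term disappears, $b$ is rescaled, and the $(1,4)$ entry becomes $+a$ (the $\mathfrak{L'_{-}}$ form; $c_1<0$ instead produces $-a$, the $\mathfrak{L'}$ form):

```python
import numpy as np

a, b, c1, c2 = 0.5, 0.7, 2.0, 3.0  # arbitrary values, with c1 > 0
X = np.array([[0, 0, b, c1 * a + c2 * b],
              [0, a, 0, 0],
              [0, 0, 0, b],
              [0, 0, 0, 0.0]])
C = np.array([[abs(c1), 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, np.sqrt(abs(c1)), -c2],
              [0, 0, 0, 1.0]])

Y = np.linalg.inv(C) @ X @ C
bp = b / np.sqrt(c1)  # the rescaled "b" parameter
expected = np.array([[0, 0, bp, a],  # c2 is gone; the (1,4) entry is +a
                     [0, a, 0, 0],
                     [0, 0, 0, bp],
                     [0, 0, 0, 0.0]])
print(np.allclose(Y, expected))  # True
```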
\begin{comment}
Lemma \ref{l:liealgconjugacy} can be used to determine when a rank 2 Abelian group is conjugate to a lattice in $L'$ or $L'_{-}$. If $A$ and $B$ are commuting matrices all of whose eigenvalues are positive, then there exist commuting matrices $\alpha$ and $\beta$ such that $A=\exp(\alpha)$ and $B=\exp(\beta)$. If the minimal polynomial of $\alpha$ is $t^{n(\alpha)}(t-f(\alpha))$ then a simple computation shows that the minimal polynomial of $A$ is $(t-1)^{n(\alpha)}(t-e^{f(\alpha)})$.
Let $A\in \GL[4]{\mathbb{R}}$, let $D(A)$ be the largest integer $n$ such that $(t-1)^n$ divides $m(A)$, and let $E(A)$ be the number of distinct eigenvalues of $A$. The following corollary, which follows from a more detailed inspection of the proof of Lemma \ref{l:liealgconjugacy}, shows that we can verify that a rank 2 Abelian group is conjugate into $L'$ or $L'_{-}$ by examining only finitely many group elements.
\begin{corollary}
Let $A$ and $B$ be matrices in $\GL[4]{\mathbb{R}}$ with real positive eigenvalues such that $\langle A,B\rangle$ is a rank 2 Abelian group. Let $S=\{A,B,AB,AB^2\}$ and suppose that $D(A)=3$, $E(A)=2$ and that $1\leq E(C)\leq 2$ for $C\in S$. Then $\langle A,B\rangle$ is conjugate to a lattice in either $L'$ or $L'_{-}$. \end{corollary}
\begin{proof}\label{c:L'lattice} In the proof of Lemma \ref{l:liealgconjugacy} it was only necessary to examine the minimal polynomials of the elements $\alpha$, $\beta$, $\alpha+\beta$, and $\alpha+2\beta$. If $A=\exp(\alpha)$ and $B=\exp(\beta)$ then the elements $\alpha$, $\beta$, $\alpha+\beta$, and $\alpha+2\beta$ correspond under the exponential map to the elements of $S$. If $A$ and $B$ commute then $[\alpha,\beta]=0$ and so $\mathfrak{F}$ is Abelian. The only properties we used were the size of the Jordan blocks of $\alpha$, that $\alpha$ had exactly 2 distinct eigenvalues, and that $\beta$, $\alpha+\beta$, and $\alpha+2\beta$ had at most 2 distinct eigenvalues. It is easy to check that these properties follows from the hypotheses on $D(C)$ and $E(C)$ for $C\in S$. \qed \end{proof}
\end{comment} \begin{comment}
\begin{proposition}\label{p:L'lattice}
Suppose that $\Gamma=\langle A,B\rangle$ is a rank 2 free abelian subgroup of $\rm{PGL}_{4}(\R)$ such that $A$ and $B$ are each conjugate to a matrix of the form \eqref{e:conjugacytype}. Then $\Gamma$ is conjugate to a lattice in $L'$ if and only if
\begin{enumerate}
\item $\abs{c_A}+\abs{c_B}$, $\abs{c_A}+\varepsilon_A$, $\abs{c_B}+\varepsilon_B$, and $\varepsilon_A+\varepsilon_B$ are all positive,
\item $\dim(E^A_1\cap E^B_1)=\dim(E^A_+\cap E^B_+)=1$
\item $\Gamma$ preserves a strictly convex hypersurface in $\mathbb{RP}^3$.
\end{enumerate} \end{proposition} \begin{proof}
If $\Gamma$ satisfies the above conditions then by Lemma \ref{l:normform1} and Remark \ref{r:convexorbit} we see that $\Gamma$ is conjugate into $L'$. Furthermore, condition (1) guarantees that $\Gamma$ is not contained in a 1-parameter subgroup, which implies that $\Gamma$ is a lattice.
Suppose now that $\Gamma$ is conjugate to a lattice in $L'$. By Remark \ref{r:convexorbit} we see that $\Gamma$ satisfies condition (3). Since $\Gamma$ is conjugate into $L'$ the only way $\abs{c_A}+\varepsilon_A=0$ (resp. $\abs{c_B}+\varepsilon_B=0$) is if $A$ (resp. $B$) is trivial. Since $\Gamma$ has rank 2 we see that this is not the case. As mentioned in the proof of Lemma \ref{l:normform1}, we can assume that $A$ is not unipotent, and thus $c_A\neq 0$. Thus to verify condition (1) it only remains to show that $\varepsilon_A+\varepsilon_B$ is nonzero. However, if $\varepsilon_A+\varepsilon_B=0$ then we see that $\Gamma$ is contained in contained in the 1-parameter subgroup $\exp(tv)$, where
$$
v=\begin{pmatrix}
0 & 0 & 0 & -s\\
0 & s & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}, $$ which contradicts the fact that $\Gamma$ is conjugate to a rank 2 lattice.
Since $A$ is not unipotent it is clear from \eqref{e:liegroup1} that $\dim(E_+^A\cap E_+^B)=1$. Finally, from \eqref{e:liegroup1} we see that the only way that $\dim(E_1^A\cap E_1^B)\neq 1$ is if $\varepsilon_A+\varepsilon_B=0$, which contradicts condition (1) which we have already shown is satisfied. \qed \end{proof} \end{comment}
Next, we will address the question of when a family, $\Gamma_t$, of lattices in $L'$ can be conjugated so that the limit is a lattice $\Gamma_0\subset L_0$. Since $L'$ is a simply connected abelian Lie group it is isomorphic to its Lie algebra (via the exponential map), which is in turn isomorphic to $\mathbb{R}^2$. Explicitly, we have a map \begin{equation}\label{e:liealgebramap} (a,b)\stackrel{g}{\mapsto} \begin{pmatrix}
0 & 0 & b & -a\\
0 & a & 0 & 0\\
0 & 0 & 0 & b\\
0 & 0 & 0 & 0
\end{pmatrix}\stackrel{\exp}{\mapsto} \begin{pmatrix}
1 & 0 & b & \frac{b^2}{2}-a\\
0 & e^a & 0 & 0\\
0 & 0 & 1 & b\\
0 & 0 & 0 & 1
\end{pmatrix} \end{equation}
We can now give a sufficient condition for a family of lattices in $L'$ to converge after conjugation to a lattice in $L_0$. \begin{proposition}\label{p:Ltconvergence} Let $\Gamma_t=\langle A_t,B_t\rangle$ be a family of lattices in $L'$, and let $a_t$ and $b_t$ be the vectors in $\mathbb{R}^2$ that correspond to $A_t$ and $B_t$ under \eqref{e:liealgebramap}. Suppose that \begin{enumerate}
\item $\lim_{t\to 0}a_t=\lim_{t\to 0}b_t=0$,
\item $a_t$ and $b_t$ are differentiable near 0, and
\item the derivatives of $a_t$ and $b_t$ at $0$ are linearly independent vectors. \end{enumerate} Then there exists a family $C_t$ of invertible matrices such that $C_t \Gamma_t C_t^{-1}\subset L_t$ and $\lim_{t\to 0} C_t \Gamma_t C_t^{-1}$ is a lattice in $L_0$. \end{proposition} \begin{proof} Let $C_t=V_t$ be the family of matrices from \eqref{e:Vt}. As previously mentioned, the matrix $V_t$ conjugates the Lie algebra $\mathfrak{L}'$ to the Lie algebra $\mathfrak{L}_t$. Explicitly we see that if $x\in \mathfrak{L}'$ corresponds to the vector $(x_1,x_2)\in \mathbb{R}^2$ via $g$ then \begin{equation}\label{e:conjfamily} C_t x C_t^{-1}=\begin{pmatrix}
0 & \frac{x_1}{t} & \frac{x_2}{t} & 0\\
0 & x_1 & 0 & \frac{x_1}{t}\\
0 & 0 & 0 & \frac{x_2}{t}\\
0 & 0 & 0 & 0
\end{pmatrix} \end{equation} From \eqref{e:conjfamily} we see that $C_t\Gamma_tC_t^{-1}\subset L_t$. Furthermore, $C_t\Gamma_tC_t^{-1}$ will converge as $t\to 0$ provided that the entries of $C_tA_tC_t^{-1}$ and $C_tB_tC_t^{-1}$ converge as $t\to 0$. Since $\exp$ is a smooth map we see from \eqref{e:conjfamily} that $C_tB_tC_t^{-1}$ and $C_tA_tC_t^{-1}$ will converge provided that conditions (1) and (2) are satisfied. Furthermore, if condition (3) is satisfied then it is easy to see that the limits of $C_tA_tC_t^{-1}$ and $C_tB_tC_t^{-1}$ will generate a rank two group. \qed \end{proof} \begin{comment}
Let $\Delta'_t$ be the conjugate of $\Delta_t$ by $U_t$. The group $\Delta'_t$ can now be realized as a lattice inside the abelian Lie group, $L_t$ of matrices of the form \begin{equation}\label{e:liegroup2}
\begin{pmatrix}
1 & \frac{e^{a_t x}-1}{a_t} & y & \frac{e^{a_t x}-a_t x-1}{a_t^2} +\frac{y^2}{2}\\
0 & e^{a_t x} & 0 & \frac{e^{a_t x}-1}{a_t}\\
0 & 0 & 1 & y\\
0 & 0 & 0 & 1
\end{pmatrix} \end{equation} For $t>0$ let $P_t$ be the epigraph of the $L_t$-orbit of the vector $(0,0,0,1)$. Since $P_t$ can also be realized as the image of $P$ under $U_t$ we see that for $t>0$, $P_t$ is also properly, but not strictly, convex. As $t\to0$ the $P_t$ converges to the epigraph of $(\frac{x^2+y^2}{2},x,y,1)$, whose epigraph can be realized as the boundary of the paraboloid model of $\mathbb{H}^3$ (explain paraboloid model in earlier section). There is again a foliation $\mathcal{F}^t_c$ of $P_t$ by algebraic horospheres which are $L_t$ orbits of $(c,0,0,0)$, for $c>0$. The group $\Delta'_t$ preserves each leaf of this foliation and $\mathcal{F}^t_c/\Delta'_t$ is again an affine torus. As $t\to 0$ the foliation $\mathcal{F}^t_c$ converges to the foliation of $\mathbb{H}^3$ by standard horospheres centered at infinity. In addition, if we assume that $\lim_{t\to 0}\frac{a_t}{b_t}$ exists and is non-zero, then $\mathcal{F}^t_c/\Delta't$ will converge to a Euclidean torus. Figures \ref{f:domaindeformation} and \ref{f:affinedeformation} depicts this convergence.
\end{comment}
\section{Obstructions Coming From Cusp Shape}\label{s:obstruction}
In this section we discuss the relationship between the cusp shape of a finite volume hyperbolic manifold and types of projective deformations that can occur.
For the sake of completeness we will briefly describe the structure of finite volume hyperbolic 3-manifolds and cusp shapes. For details about the structure of hyperbolic manifolds see \cite{Ratcliffe06, ThurstonNotes} and for details about cusp shape see \cite{Riley79}. Let $M$ be an orientable 3-manifold that admits a complete, finite volume hyperbolic structure. Then $M$ can be written as \begin{equation}\label{e:hyperbolicdecomp} M=M_K\cup\bigcup_{j=1}^m C_j, \end{equation} where $M_K$ is a compact manifold with $m$ torus boundary components, each $C_j$ is diffeomorphic to $T^2\times [1,\infty)$, where $T^2$ is a 2-dimensional torus, and $M_K\cap C_j=T^2\times\{1\}$. Furthermore, each $C_j$ is a convex submanifold of $M$ with strictly convex boundary. The interior of each $C_j$ is projectively equivalent to $D_0/\Gamma_0$, where $\Gamma_0$ is a lattice in $L_0$. Therefore, each $C_j$ is a generalized cusp. We refer to the $C_j$ as \emph{cusps} and after picking a basepoint in $M$ we can identify $\fund{C_j}\cong \mathbb{Z}^2$ with a subgroup of $\fund{M}$; we refer to these subgroups as \emph{peripheral subgroups}. Next, let $\Delta_j$ be a peripheral subgroup associated with the cusp $C_j$; then the finite volume hyperbolic structure provides a well-defined Euclidean similarity structure associated to $C_j$ called the \emph{cusp shape} of $C_j$.
For each $j$ we choose generators $m_j$ and $l_j$ for $\Delta_j$. The cusp shape induces a discrete faithful representation from $\Delta_j$ into $\psl$. After conjugating, we can assume that under this representation $$m_j\mapsto \begin{pmatrix}
1 & 1\\
0 & 1
\end{pmatrix}\text{ and } l_j\mapsto \begin{pmatrix}
1 & \omega_j\\
0 & 1
\end{pmatrix} $$ We call the number $\omega_j$ the \emph{cusp shape of $C_j$ relative to $\{m_j,l_j\}$} (see \cite{Riley79} for a proof that this complex number is well defined). Furthermore, by choosing $m_j$ and $l_j$ to be properly oriented with respect to the orientation on $C_j$ coming from $M$ we can assume that $\omega_j$ has positive imaginary part. If $m'_j$ and $l'_j$ are another set of generators that are positively oriented with respect to the induced orientation coming from $M$ then the cusp shape of $C_j$ with respect to $\{m'_j,l'_j\}$ is in the same $PSL_2(\mathbb{Z})$-orbit as $\omega_j$ (here we are assuming that $PSL_2(\mathbb{Z})$ acts on $\mathbb{C}$ by linear fractional transformations). With this in mind we say that $C_j$ \emph{has imaginary cusp shape} if for some choice of generators of $\Delta_j$ the cusp shape with respect to these generators has real part equal to 0.
Let $P$ be the subgroup of $\psl$ of matrices of the form $$\begin{pmatrix}
1 & x+iy\\
0 & 1
\end{pmatrix} $$ There is an isomorphism between $L_0$ and $P$ given by \begin{equation}\label{e:L_0toP}
\begin{pmatrix}
1 & x & y & \frac{1}{2}\left(x^2+y^2\right)\\
0 & 1 & 0 & x\\
0 & 0 & 1 & y\\
0 & 0 & 0 & 1
\end{pmatrix}\mapsto \begin{pmatrix}
1 & x+iy\\
0 & 1 \end{pmatrix} \end{equation} If we have a complete hyperbolic structure on $M$ then \eqref{e:L_0toP} tells us that there is an induced representation of $\Delta_j$ into $L_0$, where $$m_j\mapsto \begin{pmatrix}
1 & x^j_1 & y^j_1 & \frac{1}{2}\left((x^j_1)^2+(y^j_1)^2\right)\\
0 & 1 & 0 & x^j_1\\
0 & 0 & 1 & y^j_1\\
0 & 0 & 0 & 1
\end{pmatrix}\text{ and } l_j\mapsto \begin{pmatrix}
1 & x^j_2 & y^j_2 & \frac{1}{2}\left((x^j_2)^2+(y^j_2)^2\right)\\
0 & 1 & 0 & x^j_2\\
0 & 0 & 1 & y^j_2\\
0 & 0 & 0 & 1
\end{pmatrix}$$ Thus we see that the cusp shape of the $j$th cusp of $M$ relative to $\{m_j,l_j\}$ is given by \begin{equation}\label{e:cuspshape} \frac{x_2^j+iy_2^j}{x_1^j+iy_1^j}=\frac{1}{(x^j_1)^2+(y^j_1)^2}\left(x^j_1x^j_2+y^j_1y^j_2+i\left(x^j_1y^j_2-x^j_2y^j_1\right)\right). \end{equation}
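Formula \eqref{e:cuspshape} is simply multiplication by the conjugate of the denominator; a short numerical check (arbitrary values):

```python
import numpy as np

x1, y1, x2, y2 = 0.4, 1.1, -0.3, 0.9  # arbitrary values
lhs = (x2 + 1j * y2) / (x1 + 1j * y1)
rhs = (x1 * x2 + y1 * y2 + 1j * (x1 * y2 - x2 * y1)) / (x1**2 + y1**2)
print(np.isclose(lhs, rhs))  # True
```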
The next proposition uses \eqref{e:cuspshape} to show how the cusp shape of a manifold can obstruct certain types of deformations. \begin{proposition}\label{p:cuspshapeobstruction}
Let $M$ be a non-compact finite volume hyperbolic 3-manifold. Let $C$ be a cusp of $M$ and let $\{m,l\}$ be a choice of generators for a peripheral subgroup corresponding to $C$. Suppose that $M$ admits a family of projective structures with holonomy $\rho_t$ such that $\rho_0=\rho_{{\rm geo}}$ and $\rho_t(m)$ is a pure translation in $L_t$ and $\rho_t(l)$ is a pure dilation in $L_t$. Then the cusp shape of $C$ is purely imaginary. \end{proposition} \begin{proof}
Since $\rho_t(m)$ is a pure translation and $\rho_t(l)$ is a pure dilation, we see that there exist functions $\mu_t$ and $\nu_t$, continuous near zero, such that
$$\rho_t(m)=\exp\left(\begin{pmatrix}
0 & 0 & \mu_t & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & \mu_t\\
0 & 0 & 0 & 0
\end{pmatrix}\right) \textrm{ and } \rho_t(l)=\exp\left(\begin{pmatrix}
0 & \nu_t & 0 & 0\\
0 & \nu_t t & 0 & \nu_t\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix} \right).$$ Thus if we let $\mu_0=\lim_{t\to 0}\mu_t$ and $\nu_0=\lim_{t\to0}\nu_t$, then we see that $$\rho_0(m)=\exp\left(\begin{pmatrix}
0 & 0 & \mu_0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & \mu_0\\
0 & 0 & 0 & 0
\end{pmatrix} \right)=\begin{pmatrix}
1 & 0 & \mu_0 & \frac{1}{2}\mu_0^2\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & \mu_0\\
0 & 0 & 0 & 1
\end{pmatrix} \textrm{ and }$$ $$\rho_0(l)=\exp\left(\begin{pmatrix}
0 & \nu_0 & 0 & 0\\
0 & 0 & 0 & \nu_0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{pmatrix} \right)=\begin{pmatrix}
1 & \nu_0 & 0 & \frac{1}{2}\nu_0^2\\
0 & 1 & 0 & \nu_0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}.$$ From \eqref{e:cuspshape} we see that the cusp shape of $C$ relative to $\{m,l\}$ is $-i\frac{\nu_0}{\mu_0}$, and is thus purely imaginary. \qed \end{proof}
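The last step of the proof is an application of \eqref{e:cuspshape} with $(x_1,y_1)=(0,\mu_0)$ for $\rho_0(m)$ and $(x_2,y_2)=(\nu_0,0)$ for $\rho_0(l)$; a minimal numerical check (arbitrary nonzero $\mu_0,\nu_0$):

```python
import numpy as np

mu0, nu0 = 0.8, 1.3  # arbitrary nonzero values
# rho_0(m) corresponds to (x1, y1) = (0, mu0); rho_0(l) to (x2, y2) = (nu0, 0)
shape = (nu0 + 0j) / (0 + 1j * mu0)
print(np.isclose(shape, -1j * nu0 / mu0))  # True
```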
In \cite{Ballas12a} it is shown that the complements of the knots $5_2$ and $6_1$ in $S^3$ do not admit families $\rho_t$ of representations passing through $\rho_{{\rm geo}}$ such that the image under $\rho_t$ of the meridian is a pure translation and the image under $\rho_t$ of the longitude is a pure dilation. On the other hand, we have seen that the figure-eight knot complement does admit a family $\rho_t$ of representations passing through $\rho_{{\rm geo}}$ such that the image under $\rho_t$ of the meridian is a pure translation and the image under $\rho_t$ of the longitude is a pure dilation. The cusp shapes of $5_2$ and $6_1$ are not purely imaginary, but the cusp shape of the figure-eight knot is purely imaginary. Proposition \ref{p:cuspshapeobstruction} can be seen as a partial explanation of this phenomenon.
\section{Volume of Cusps}\label{s:volume}
In this section we will examine the Busemann volume of cusps that are projectively equivalent to $C_\Gamma=D'/\Gamma$, where $\Gamma$ is a lattice inside $L'$. While $C_\Gamma$ will always have infinite volume, we will show that $C_\Gamma$ admits an exhaustion by sets of finite Busemann volume. The domain $D'$ can be written as $$\bigcup_{s>0}\mathcal{H}'_s,$$ (see Figure \ref{f:domain1}). For each $k>0$ we let $D'_k=\cup_{s>k}\mathcal{H}'_s$, and it is easy to see that $\{D'_k\}_{k>0}$ is an exhaustion of $D'$ by horoballs. Each $D_k'$ is preserved by $L'$ and the main result of this section is that some (hence any) fundamental domain for the action of $\Gamma$ on $D'_k$ has finite Busemann volume when regarded as a subset of $D'$.
As mentioned in Section \ref{s:Z^2}, $\mathcal{H}'_k/\Gamma$ is an affine torus whose developing map is given by the restriction of $[x_1:x_2:x_3:1]\mapsto [x_2:x_3:1]$ to $\mathcal{H}'_k$. Let $R$ be a compact fundamental domain for the affine action of $\Gamma$ on affine 2-space. Then a fundamental domain for $D'_k/\Gamma$ can be taken to be $$\mathcal{D}_k=\{[x_1:x_2:x_3:1]\mid (x_2,x_3)\in R, x_1>0\}\cap D'_k.$$ Geometrically, $\mathcal{D}_k$ is the intersection of $D'_k$ with the cone over $R$ whose cone point is ${[1:0:0:0]}$.
\begin{figure}
\caption{Slices of $D'$ in the $x_2$ and $x_3$ directions}
\label{f:slices}
\end{figure}
\begin{lemma}\label{l:euclideanvolume}
Let $\Gamma$ be a lattice in $L'$, let $k>0$, and let $x=[x_1:x_2:x_3:1]$ be a point in $\mathcal{D}_k$. There exist constants $C$ and $N$ (depending only on $\Gamma$) such that if $x_1>N$ then $Cx_1^{3/2}<\mu_L(B^{D'}_x(1)).$ \end{lemma}
\begin{proof}
We write $x=(x_1,x_2,x_3)$ as a vector in $\mathbb{R}^3$. The basic idea of the proof is to show that when $x_1$ is large, $B_x^{D'}(1)$ contains a simplex of large Lebesgue volume. Let $v^2_c=x+ce_2$. From Figure \ref{f:slices} and \eqref{e:hilbertnorm} we see that
\begin{equation}\label{e:distform1}
\Abs{v^2_c}_{D'}=\left.\frac{d}{dt}\right|_{t=0}\log\left(\frac{x_2+ct-k_1}{x_2-k_1}\right)=\frac{c}{x_2-k_1},
\end{equation} where $k_1=k_1(x_1,x_3):=\frac{e^{x_3^2/2}}{e^{x_1}}$. Since $(x_2,x_3)\in R$ and $R$ is compact we know that there are positive constants $c_1,c_2$, and $c_3$ such that $c_1<x_2<c_2$ and $-c_3<x_3<c_3$. Combining this fact with \eqref{e:distform1} we see that we can find a small positive constant $T$ such that when $x_1$ is sufficiently large $\Abs{v^2_T}_{D'}<1$. As a consequence $v^2_T\in B_x^{D'}(1)$.
Let $v^1_c=x+ce_1$. From Figure \ref{f:slices} and \eqref{e:hilbertnorm} we see that \begin{equation}\label{e:distform2}
\Abs{v^1_c}_{D'}=\left.\frac{d}{dt}\right|_{t=0}\log\left(\frac{x_1+ct-k_2}{x_1-k_2}\right)=\frac{c}{x_1-k_2}, \end{equation} where $k_2=k_2(x_2,x_3):=\frac{1}{2}x_3^2-\log x_2$. When $x_1$ is sufficiently large we see that $\frac{x_1}{2(x_1-k_2)}<1$ and so $v^1_{x_1/2}\in B^{D'}_x(1)$.
Finally, let $v^3_c=x+ce_3$. From Figure \ref{f:slices} and \eqref{e:hilbertnorm} we see that \begin{equation}\label{e:distform3}
\Abs{v^3_c}_{D'}=\left.\frac{d}{dt}\right|_{t=0}\log\left(\frac{(x_3+ct+k_3)(k_3-x_3)}{(x_3+k_3)(k_3-x_3-ct)}\right)=c\left(\frac{1}{(x_3+k_3)}+\frac{1}{(k_3-x_3)}\right), \end{equation}
where $k_3=k_3(x_1,x_2):=\sqrt{2\left(x_1+\log x_2\right)}$. When $x_1$ is sufficiently large we see that $\frac{\sqrt{x_1}}{3\sqrt{2}}\left(\frac{1}{(x_3+k_3)}+\frac{1}{(k_3-x_3)}\right)=\frac{\sqrt{x_1}}{3\sqrt{2}}\left(\frac{2k_3}{k_3^2-x_3^2}\right)<1$ and so $v^3_{\sqrt{x_1}/3\sqrt{2}}\in B^{D'}_x(1)$.
Thus for sufficiently large $x_1$, we see that $B^{D'}_x(1)$ contains $v^2_{T}$, $v^1_{x_1/2}$ and $v^3_{\sqrt{x_1}/3\sqrt{2}}$. Since $B^{D'}_x(1)$ is the unit ball for a norm we see that it is convex and thus it contains the convex hull of the set $\left\{x,v^2_T,v^1_{x_1/2},v^3_{\sqrt{x_1}/3\sqrt{2}}\right\}$. This convex hull is a tetrahedron and its Lebesgue measure is easily computed to be $\frac{x_1^{3/2}}{36\sqrt{2}}T$, and we conclude that when $x_1$ is sufficiently large, $\frac{x_1^{3/2}}{36\sqrt{2}}T<\mu_L(B_x^{D'}(1))$.
\qed \end{proof}
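The determinant computation at the end of the proof can be checked numerically: the tetrahedron with vertex $x$ and the three displayed edge vectors has Lebesgue measure $\frac{1}{6}|\det|$ of the edge matrix (arbitrary values of $x_1$ and $T$):

```python
import numpy as np

x1, T = 5.0, 0.1  # arbitrary values
# edge vectors from x to the vertices v^1_{x1/2}, v^2_T, v^3_{sqrt(x1)/(3 sqrt 2)}
E = np.array([[x1 / 2, 0, 0],
              [0, T, 0],
              [0, 0, np.sqrt(x1) / (3 * np.sqrt(2))]])
vol = abs(np.linalg.det(E)) / 6
print(np.isclose(vol, T * x1**1.5 / (36 * np.sqrt(2))))  # True
```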
We can now prove the main result of this section.
\begin{proposition}\label{t:finitevolume}
$\mathcal{D}_k$ has finite Busemann volume when regarded as a subset of $D'$. \end{proposition} \begin{proof}
From \eqref{e:hilbertvolume} we see that $\hilbvol[D']{\mathcal{D}_k}=\int_{\mathcal{D}_k}\frac{d\mu_L(x)}{\mu_L(B^{D'}_x(1))}$. By Lemma \ref{l:euclideanvolume} we know that there exists a compact set $K\subset \mathcal{D}_k$ and a constant $C$ such that $Cx_1^{3/2}<\mu_L(B^{D'}_x(1))$ for every $x=(x_1,x_2,x_3)\in \mathcal{D}_k\backslash K$. Thus we see that
$$\hilbvol[D']{\mathcal{D}_k}=\int_K\frac{d\mu_L(x)}{\mu_L(B^{D'}_x(1))}+\int_{\mathcal{D}_k\backslash K}\frac{d\mu_L(x)}{\mu_L(B^{D'}_x(1))}<\int_K\frac{d\mu_L(x)}{\mu_L(B^{D'}_x(1))}+\int_{\mathcal{D}_k\backslash K}\frac{d\mu_L(x)}{Cx_1^{3/2}}<\infty$$ \qed \end{proof}
\section{Figure-eight example}\label{s:fig8}
In this section we use the ideas from the previous section along with work of \cite{CooperLongTillman13} to exhibit an explicit path of pairwise inequivalent finite volume properly convex projective structures that passes through the complete hyperbolic structure of the figure-eight knot complement.
Let $M$ be the complement of the figure-eight knot. Next, choose a basepoint $y\in M$, and let $\Gamma=\fund{M}$ with respect to this choice of basepoint. The group $\Gamma$ is generated by two elements $m$ and $n$, which are freely homotopic to meridians of the knot. Explicitly, the group $\Gamma$ can be written as $$\Gamma=\langle m,n\vert mw=wn\rangle,$$ where $w=nm^{-1}n^{-1}m$. As mentioned in Section \ref{s:obstruction}, $M$ can be written as $M_K\cup C$ where $M_K$ is compact and $C$ is diffeomorphic to $T^2\times [0,\infty)$ and we let $\Delta=\fund{C}$ be a choice of peripheral subgroup.
The complete hyperbolic structure of $M$ induces a representation $\rho_{{\rm geo}}:\Gamma\to \text{P}\so{3}\subset \rm{PGL}_{4}(\R)$. Under this representation, $\rho_{{\rm geo}}(\Delta)$ is conjugate to a lattice inside $L_0$ as described in Section \ref{s:Z^2}. For the figure-eight knot complement, the group $\Delta$ can be generated by $m$ and the element $l=ww^{op}=nm^{-1}n^{-1}m^2n^{-1}m^{-1}n$, where $w^{op}$ is the word $w$ written backwards. It is easy to see that $l$ is homologically trivial and that $l$ corresponds to a longitude of the knot.
In \cite{Ballas12a} an explicit family, $\rho_t$, of representations from $\Gamma$ into $\rm{PGL}_{4}(\R)$ is found for which $\mathcal{M}_t=\rho_t(m)$ and $\mathcal{N}_t=\rho_t(n)$ are both unipotent matrices and $\rho_{1/2}$ is the holonomy of the complete hyperbolic structure on $M$. Additionally, $\mathcal{L}_t=\rho_t(l)$ is unipotent if and only if $t=\frac{1}{2}$. We now show that the restriction of $\rho_t$ to the peripheral subgroup is the holonomy of a properly convex projective structure on $C$ which converges to the hyperbolic structure on $C$ coming from $\rho_{1/2}$. Specifically, $\rho_t$ is given by
$$\mathcal{M}_t=\begin{pmatrix}
1 & 0 & 1 & t-1\\
0 & 1 & 1 & t\\
0 & 0 & 1 & t+\frac{1}{2}\\
0 & 0 & 0 & 1
\end{pmatrix} \textrm{ and }
\mathcal{N}_t=\begin{pmatrix}
1 & 0 & 0 & 0\\
2+\frac{1}{t} & 1 & 0 & 0\\
2 & 1 & 1 & 0\\
1 & 1 & 0 & 1
\end{pmatrix}$$ $$\mathcal{L}_t=\begin{pmatrix}
\frac{8t^3-4t^2-2t-1}{8t^2} & \frac{8t^3+4t^2+2t+1}{8t^2} & \frac{-4t^2-1}{4t^2} & \frac{40t^3+24t^2+4t+3}{8t^2}\\
\frac{8t^4-4t^3-2t^2-t-1}{8t^3} & \frac{8t^4+4t^3+2t^2+t+1}{8t^3} & \frac{4t^3-4t^2+t-1}{4t^3} & \frac{56t^4+16t^3+20t^2+t+3}{8t^3}\\
0 & 0 & 2t & 0 \\
0 & 0 & 0 & 2t
\end{pmatrix}. $$
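A few consistency checks on the displayed matrices can be carried out numerically (this sketch is illustrative, not part of the argument): $\mathcal{M}_t$ and $\mathcal{N}_t$ are unipotent for all $t$, $\mathcal{L}_t$ is unipotent at $t=\frac{1}{2}$, and for $t\neq\frac{1}{2}$ the eigenvalue $2t$ of $\mathcal{L}_t$ has algebraic multiplicity three:

```python
import numpy as np

def M(t):
    return np.array([[1, 0, 1, t - 1],
                     [0, 1, 1, t],
                     [0, 0, 1, t + 0.5],
                     [0, 0, 0, 1.0]])

def N(t):
    return np.array([[1, 0, 0, 0],
                     [2 + 1 / t, 1, 0, 0],
                     [2, 1, 1, 0],
                     [1, 1, 0, 1.0]])

def L(t):
    return np.array([
        [(8*t**3 - 4*t**2 - 2*t - 1) / (8*t**2),
         (8*t**3 + 4*t**2 + 2*t + 1) / (8*t**2),
         (-4*t**2 - 1) / (4*t**2),
         (40*t**3 + 24*t**2 + 4*t + 3) / (8*t**2)],
        [(8*t**4 - 4*t**3 - 2*t**2 - t - 1) / (8*t**3),
         (8*t**4 + 4*t**3 + 2*t**2 + t + 1) / (8*t**3),
         (4*t**3 - 4*t**2 + t - 1) / (4*t**3),
         (56*t**4 + 16*t**3 + 20*t**2 + t + 3) / (8*t**3)],
        [0, 0, 2*t, 0],
        [0, 0, 0, 2*t]])

I = np.eye(4)
t = 0.4  # an arbitrary value different from 1/2
print(np.allclose(np.linalg.matrix_power(M(t) - I, 4), 0))    # True: unipotent
print(np.allclose(np.linalg.matrix_power(N(t) - I, 4), 0))    # True: unipotent
print(np.allclose(np.linalg.matrix_power(L(0.5) - I, 4), 0))  # True: unipotent at t = 1/2
ev = np.linalg.eigvals(L(t))
print(sum(abs(e - 2 * t) < 1e-3 for e in ev))  # 3: eigenvalue 2t has multiplicity three
```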
Let $\Delta_t=\rho_t(\Delta)=\langle \mathcal{M}_t,\mathcal{L}_t\rangle$. We now replace $\mathcal{L}_t$ with the projectively equivalent matrix $\frac{1}{2t}\mathcal{L}_t$. After performing the conjugacy described in the proof of Lemma \ref{l:liealgconjugacy} and applying the coordinate change $s=\log\left(\frac{1}{16t^4}\right)$ we see that $\mathcal{M}_s$ and $\mathcal{L}_s$ are conjugate to $$\begin{pmatrix}
1 & 0 & \sqrt{\frac{s\sinh(s/4)}{3}} & \frac{s\sinh(s/4)}{6}\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & \sqrt{\frac{s\sinh(s/4)}{3}}\\
0 & 0 & 0 & 1
\end{pmatrix}\textrm{ and }
\begin{pmatrix}
1 & 0 & 0 & -s\\
0 & e^s & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}$$ respectively. We can further conjugate $\mathcal{M}_s$ and $\mathcal{L}_s$ to the matrices
\begin{equation}\label{e:fig8cuspconvergence} \mathcal{M}'_s=\begin{pmatrix}
1 & 0 & \sqrt{\frac{\sinh(s/4)}{3s}} & \frac{\sinh(s/4)}{6s}\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & \sqrt{\frac{\sinh(s/4)}{3s}}\\
0 & 0 & 0 & 1
\end{pmatrix} \textrm{ and } \mathcal{L}'_s=\begin{pmatrix}
1 & \frac{e^s-1}{s} & 0 & \frac{e^s-s-1}{s^2}\\
0 & e^s & 0 & \frac{e^s-1}{s}\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}. \end{equation} A Mathematica notebook containing these calculations can be found at \cite{BallasNotebook}.
From the discussion in Section \ref{s:Z^2} we see that for each $s\neq 0$, $\Delta_s$ is conjugate to the group $\Delta_s'=\langle \mathcal{M}'_s,\mathcal{L}'_s \rangle$ that preserves $D_s$ and such that $D_s/\Delta'_s\cong C$. Thus we see that the restriction of $\rho_s$ to $\Delta$ gives $C$ the structure of a generalized cusp.
As $s\to 0$ (equivalently, $t\to\frac{1}{2}$), $\mathcal{M}'_s$ and $\mathcal{L}'_s$ converge to the matrices \begin{equation}\label{e:limitmatrix} \mathcal{M}_0=\begin{pmatrix}
1 & 0 & \frac{1}{2\sqrt{3}} & \frac{1}{24}\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & \frac{1}{2\sqrt{3}}\\
0 & 0 & 0 & 1
\end{pmatrix}\textrm{ and }
\mathcal{L}_0=\begin{pmatrix}
1 & 1 & 0 & \frac{1}{2}\\
0 & 1 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}. \end{equation} In other words, as $s\to0$ the group $\Delta'_s$ is converging to a lattice $\Delta_0=\langle \mathcal{M}_0,\mathcal{L}_0 \rangle \leq L_0$ and $D_s/\Delta'_s$ is converging to a hyperbolic cusp $D_0/\Delta_0$ (see Figure \ref{f:domaindeformation}). By looking at the entries of the matrices in \eqref{e:limitmatrix} and applying the formula \eqref{e:cuspshape} we see the cusp shape of the limit structure is $-2\sqrt{3} i$, which is the cusp shape of the figure-eight knot. Thus the projective structure on $C$ coming from $\rho_s$ is converging to the hyperbolic structure on $C$ coming from $\rho_{0}$.
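Both the limit \eqref{e:limitmatrix} and the cusp shape value can be checked numerically; the sketch below evaluates the nontrivial entries of \eqref{e:fig8cuspconvergence} at a small value of $s$ and then applies \eqref{e:cuspshape} to the limit matrices:

```python
import numpy as np

s = 1e-4  # a small parameter value
# nontrivial entries of M'_s and L'_s, compared with their claimed limits
print(np.isclose(np.sqrt(np.sinh(s / 4) / (3 * s)), 1 / (2 * np.sqrt(3)), atol=1e-3))  # True
print(np.isclose(np.sinh(s / 4) / (6 * s), 1 / 24, atol=1e-3))                         # True
print(np.isclose((np.exp(s) - 1) / s, 1, atol=1e-3))                                   # True
print(np.isclose((np.exp(s) - s - 1) / s**2, 0.5, atol=1e-3))                          # True

# cusp shape of the limit: M_0 gives (x1, y1) = (0, 1/(2*sqrt(3))), L_0 gives (x2, y2) = (1, 0)
shape = (1 + 0j) / (1j / (2 * np.sqrt(3)))
print(np.isclose(shape, -2 * np.sqrt(3) * 1j))  # True
```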
The next result shows that for small values of $s$, $\rho_s$ is the holonomy of a properly convex projective structure.
\begin{proposition}\label{p:convexstructure}
There exists $\varepsilon>0$ such that for $s\in (-\varepsilon,\varepsilon)$, $\rho_s$ is the holonomy of a properly convex structure on $M$. Furthermore, this structure is strictly convex if and only if $s=0$. \end{proposition}
\begin{proof} When $s=0$, $\rho_0$ is the holonomy of the complete hyperbolic structure on $M$. As a result, $\rho_0$ is the holonomy of a strictly convex projective structure on $M$. From \eqref{e:hyperbolicdecomp} we see that $M=A\cup B$ where $A$ is compact and $B$ has the structure of a generalized cusp. When $s\neq 0$ we have seen that $\Delta_s=\rho_s(\fund{B})$ is conjugate to a lattice in $L'$. If $\mathcal{B}$ is an $L'$-horoball then $\Delta_s$ is the holonomy of the generalized cusp $\mathcal{B}/\Delta_s$. By applying Theorem \ref{t:holonomytheorem} we see that for sufficiently small $s$, $\rho_s$ will be the holonomy of a properly convex projective structure on $M$.
When $s\neq 0$, a simple calculation shows that the matrix $\mathcal{L}_s$ has two distinct positive real eigenvalues and is thus not projectively equivalent to a parabolic transformation. By \cite[Prop 3.2.4]{BallasThesis} we see that this implies that $\rho_s$ cannot be the holonomy of a strictly convex projective structure on $M$.
\qed \end{proof}
From Proposition \ref{p:convexstructure} we know that for small values of $s$, $\rho_s$ is the holonomy of a properly convex projective structure on $M$. To complete the proof of Theorem \ref{t:maintheorem} it remains to prove that this structure has finite Busemann volume.
Let $\Omega_s$ be the properly convex domain preserved by $\Gamma_s:=\rho_s(\Gamma)$. Assume that we have conjugated so that $\Delta_s\subset L'$ and choose parabolic coordinates for $\Omega_s$ centered at $(p,H)$, where $p=[1:0:0:0]$ and $H$ is the hyperplane dual to $[0:0:0:1]$. In these affine coordinates, the $x_1$ direction is the vertical direction. We can further assume that the affine action of the pure translations on $H_0$ is by translation in the $x_3$ direction and the affine action of the pure dilations on $H_0$ is by dilation in the $x_2$ direction. The basic idea is that even though we do not know exactly what $\Omega_s$ looks like we can approximate $\Omega_s$ using $L'$-horoballs. We then use our geometric understanding of $D'$ to deduce information about $\Omega_s$.
\begin{lemma}\label{l:horoapprox} There exist $L'$-horoballs $\mathcal{B}$ and $\mathcal{B}'$ so that in parabolic coordinates for $\Omega_s$ centered at $(p,H)$ we have $\mathcal{B}\subset \Omega_s\subset \mathcal{B}'$. \end{lemma}
\begin{proof} Let $f$ be the boundary function for $\Omega_s$ and $g$ be the boundary function for $D'$. The domain, $U$, of $g$ is one of the two half spaces in $H_0$ that form the complement of the plane defined by the equation $x_2=0$. We claim that the domain of $f$ is also $U$. The only $\Delta_s$-invariant convex subsets of $H_0$ with non-empty interior are $U$, $\overline{U}$, and $H_0$. Suppose the domain of $f$ is not $U$; then there is a point of $\partial U$ in the domain of $f$. This implies that there is a point of the form $[c:0:d:1]\in \Omega_s$. Since the action of a pure translation on this point is by translation in the vertical direction, it is easy to see that the convex hull of the $\Delta_s$-orbit of this point is the affine plane whose vertical projection is $\partial U$. This contradicts the fact that $\Omega_s$ is properly convex, and so the domain of $f$ is $U$.
Since every $L'$-horoball is simply the epigraph of a function $g+c$, where $c\in \mathbb{R}$, the proof will be complete if we can find positive constants $c$ and $C$ such that for all $x\in U$ we have $g(x)-c< f(x)< g(x)+C$. We have seen that there is a cocompact affine action of $\Delta_s$ on $U$ whose quotient is a torus. Let $K\subset U$ be a compact fundamental domain for this action. Pick $c$ and $C$ to be constants such that $(g-c)|_K< f|_K< (g+C)|_K$.
Suppose for contradiction that there exists a point $x\in U$ such that $g(x)-c\geq f(x)$. Then by the intermediate value theorem we can find a point $x'\in U$ such that $g(x')-c=f(x')$. There exists $\gamma\in \Delta_s$ such that $\gamma\cdot x'\in K$. We see that $(f(x'),x')=(g(x')-c,x')$, and so $\gamma\cdot (f(x'),x')=\gamma\cdot(g(x')-c,x')$. By equivariance we see that $(f(\gamma\cdot x'),\gamma\cdot x')=(g(\gamma\cdot x')-c,\gamma\cdot x')$. This implies that $f(\gamma\cdot x')=g(\gamma\cdot x')-c$, which contradicts our choice of $c$, and thus $g-c<f$. A similar argument shows that $f<g+C$. \qed \end{proof}
Let $G$ be a group acting on a set $X$. If $Y\subset X$ and $H$ is a subgroup of $G$, then we say that $Y$ is \emph{precisely invariant under $H$} if $\gamma\cdot Y=Y$ for all $\gamma \in H$ and $\gamma\cdot Y\cap Y=\emptyset$ for all $\gamma\in G\backslash H$. By combining the previous lemma with a version of the Margulis lemma for properly convex domains \cite[Thm 0.1]{CooperLongTillman11}, we can show that there are precisely invariant $L'$-horoballs in $\Omega_s$.
\begin{lemma}\label{l:precinvhoroball} For sufficiently small $s$ there is an $L'$-horoball, $\mathcal{B}\subset \Omega_s$ that is precisely invariant under $\Delta_s$. \end{lemma}
\begin{proof} Let $\mu$ be a 3-dimensional properly convex Margulis constant, which exists by \cite[Thm 0.1]{CooperLongTillman11}. By Lemma \ref{l:horoapprox} we know that we can find an $L'$-horoball $\mathcal{B}'\subset \Omega_s$; let $\mathcal{B}''$ be an $L'$-horoball contained in the interior of $\mathcal{B}'$. Let $z\in \partial \mathcal{B}''$. We claim that for every $y\in \partial \mathcal{B}''$, $\dhilb[\mathcal{B}']{y}{\mathcal{M}_s y}=\dhilb[\mathcal{B}']{z}{\mathcal{M}_s z}$. To see this, observe that $\Delta_s$ acts transitively on $\partial \mathcal{B}''$, and so there exists $\gamma\in \Delta_s$ such that $\gamma z=y$. From this fact we deduce that $$\dhilb[\mathcal{B}']{y}{\mathcal{M}_s y}=\dhilb[\mathcal{B}']{\gamma z}{\mathcal{M}_s\gamma z}=\dhilb[\mathcal{B}']{\gamma z}{\gamma \mathcal{M}_s z}=\dhilb[\mathcal{B}']{z}{\mathcal{M}_s z}.$$
\begin{figure}
\caption{Distance estimates for $L'$-horoballs }
\label{f:horoballs}
\end{figure}
As a result we see that every point on $\partial \mathcal{B}''$ is moved the same distance in the Hilbert metric on $\mathcal{B}'$ by $\mathcal{M}_s$. For $t>0$ let $z_t=z+te_1$. Let $\ell$ be the affine line connecting $z$ and $\mathcal{M}_s z$ and let $a$ and $b$ be the intersection points of $\ell$ with $\partial \mathcal{B}'$. Similarly, let $\ell_t$ be the affine line connecting $z_t$ and $\mathcal{M}_s z_t$ and let $a_t$ and $b_t$ be the intersection points of $\ell_t$ with $\partial \mathcal{B}'$. Let $V_a$ and $V_b$ be the vertical lines passing through $a_t$ and $b_t$, respectively. Define $a'$ and $b'$ to be the respective intersection points of $V_a$ and $V_b$ with $\ell$. Figure \ref{f:horoballs} depicts this configuration, and we see that $$[a_t:z_t:\mathcal{M}_s z_t: b_t]=[a':z:\mathcal{M}_s z:b']<[a:z:\mathcal{M}_s z:b],$$ and thus $\dhilb[\mathcal{B}']{z_t}{\mathcal{M}_s z_t}$ is a strictly decreasing function of $t$. Furthermore, as $t\to\infty$ we see that $[a_t:z_t:\mathcal{M}_sz_t:b_t]\to 1$, and so for sufficiently large $t$ we have $\dhilb[\mathcal{B}']{z_t}{\mathcal{M}_s z_t}<\mu$. Let $z_T$ be such that $\dhilb[\mathcal{B}']{z_T}{\mathcal{M}_s z_T}<\mu$ and let $\mathcal{B}$ be the $L'$-horoball such that $z_T\in \partial\mathcal{B}$. By construction, every point $z'\in \mathcal{B}$ is moved a distance less than $\mu$ in the Hilbert metric on $\mathcal{B}'$ by $\mathcal{M}_s$. Since $\mathcal{B}\subset \Omega_s$ we have that $\dhilb[\Omega_s]{z'}{\mathcal{M}_s z'}\leq \dhilb[\mathcal{B}']{z'}{\mathcal{M}_sz'}<\mu$. Let $\tau\in \Gamma_s$ and suppose that $u\in \tau\mathcal{B}\cap \mathcal{B}$. The proof will be complete if we can show that $\tau \in \Delta_s$. Since $u\in \tau \mathcal{B}$ we can find $v\in \mathcal{B}$ such that $\tau v=u$. 
As a result we have $$\dhilb[\Omega_s]{u}{\tau \mathcal{M}_s \tau^{-1}u}=\dhilb[\Omega_s]{\tau v}{\tau \mathcal{M}_s v}=\dhilb[\Omega_s]{v}{\mathcal{M}_s v}<\mu.$$ The elements $\mathcal{M}_s$ and $\tau \mathcal{M}_s \tau^{-1}$ both move $u$ a distance less than $\mu$ in the Hilbert metric on $\Omega_s$. By the properly convex Margulis lemma we see that $\langle \mathcal{M}_s,\tau \mathcal{M}_s \tau^{-1} \rangle$ is virtually nilpotent.
Since $\Gamma_s$ is the fundamental group of a finite volume hyperbolic 3-manifold, it admits an action on $\mathbb{H}^3$ and its ideal boundary. Consequently, elements of $\Gamma_s$ commute if and only if they have the same fixed point set for this action. Since any virtually nilpotent subgroup of the fundamental group of a finite volume hyperbolic 3-manifold is abelian, we see that $\mathcal{M}_s$ and $\tau\mathcal{M}_s\tau^{-1}$ commute. Since $\mathcal{M}_s$ is contained in a peripheral subgroup, it acts as a parabolic isometry on $\mathbb{H}^3$. This implies that $\mathcal{M}_s$ and $\tau\mathcal{M}_s\tau^{-1}$ have the same unique fixed point. Since the fixed point of $\tau\mathcal{M}_s\tau^{-1}$ is the $\tau$-image of the fixed point of $\mathcal{M}_s$, it follows that $\tau$ also fixes the unique fixed point of $\mathcal{M}_s$. Since the action of $\Gamma_s$ on $\mathbb{H}^3$ is properly discontinuous, this implies that $\tau$ also has a single fixed point, and we conclude that $\tau$ and $\mathcal{M}_s$ commute. Thus $\tau\in \Delta_s$.
\qed \end{proof}
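The distance comparisons in the proofs above rest on two standard properties of the Hilbert metric, which we recall for convenience (with the usual normalization). For a properly convex domain $\Omega$ and distinct points $x,y\in\Omega$, let $a$ and $b$ be the points where the chord through $x$ and $y$ meets $\partial\Omega$, ordered $a,x,y,b$; then
\[
d_{\Omega}(x,y)\;=\;\tfrac12\log[a:x:y:b]\;=\;\tfrac12\log\!\left(\frac{|ay|\,|bx|}{|ax|\,|by|}\right).
\]
In particular, projective transformations preserving $\Omega$ act by isometries, and enlarging the domain can only decrease distances: if $\Omega\subset\Omega'$ then $d_{\Omega'}(x,y)\le d_{\Omega}(x,y)$ for all $x,y\in\Omega$.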
\begin{remark}\label{r:precinvhoroball} From the proof of Lemma \ref{l:precinvhoroball} we see that if $\mathcal{B}$ is an $L'$-horoball that is precisely invariant under $\Delta_s$ and $\mathcal{B}'$ is an $L'$-horoball such that $\mathcal{B}'\subset \mathcal{B}$, then $\mathcal{B}'$ is also precisely invariant under $\Delta_s$. \end{remark}
We can now complete the proof of Theorem \ref{t:maintheorem}.
\begin{proof}[Proof of Theorem \ref{t:maintheorem}] From Proposition \ref{p:convexstructure} we know that for sufficiently small $s$, $\rho_s$ is the holonomy of a properly convex projective structure on $M$, and that this structure is strictly convex if and only if $s=0$. Thus we can find $\Omega_s$ such that $M\cong \Omega_s/\Gamma_s$. The proof will be complete if we can show that $\Omega_s/\Gamma_s$ has finite Busemann volume.
From Lemma \ref{l:precinvhoroball} and Remark \ref{r:precinvhoroball} we know that we can find $L'$-horoballs $\mathcal{B}$ and $\mathcal{B}'$ that are precisely invariant under $\Delta_s$ and such that $\mathcal{B}'\subset \mathcal{B}\subset \Omega_s$. Since $\mathcal{B}'$ is precisely invariant under $\Delta_s$ we see that $\mathcal{B}'/\Delta_s$ is an embedded submanifold of $\Omega_s/\Gamma_s$. The complement $(\Omega_s/\Gamma_s)\backslash\left(\mathcal{B}'/\Delta_s\right)$ is compact and so the proof will be complete if we can show that $\mathcal{B}'/\Delta_s$ is a finite Busemann volume submanifold of $\Omega_s/\Gamma_s$.
Let $K$ be a fundamental domain in $\mathcal{B}'$ for the action of $\Delta_s$. By Proposition \ref{t:finitevolume} we know that $\hilbvol[\mathcal{B}]{K}<\infty$. However $\mathcal{B}\subset \Omega_s$ and so $\hilbvol[\Omega_s]{K}\leq \hilbvol[\mathcal{B}]{K}<\infty$. We conclude that $\mathcal{B}'/\Delta_s$ is a finite volume submanifold of $\Omega_s/\Gamma_s$. \qed \end{proof}
\end{document}
\begin{document}
\title{Choosability of the square of a planar graph\\ with maximum degree four }
\author{ Daniel W. Cranston\thanks{Virginia Commonwealth University, Department of Mathematics and Applied Mathematics, Richmond, VA, USA. Email: {\tt dcranston@vcu.edu}} \and Rok Erman\thanks{Institute of Mathematics, Physics and Mechanics, Jadranska 19, 1000 Ljubljana, Slovenia. Email: {\tt rok.erman@gmail.com} } \and Riste \v Skrekovski\thanks{Department of Mathematics, University of Ljubljana, Jadranska 21, 1000 Ljubljana, Slovenia. Email: {\tt skrekovski@gmail.com} } }
\date{\today}
\maketitle
\noindent {\bf Keywords:} choosability; square of a graph; maximum average degree; discharging; girth; maximum degree; list-colouring \\
\begin{abstract} We study squares of planar graphs with the aim of determining their list chromatic number. We present new upper bounds for the square of a planar graph with maximum degree $\Delta \leq 4$. In particular, $G^2$ is 5-, 6-, 7-, 8-, 12-, or 14-choosable if the girth of $G$ is at least 16, 11, 9, 7, 5, or 3, respectively. In fact, we prove more general results, in terms of maximum average degree, that imply the results above.
\end{abstract}
\section{Introduction}
The square of a graph $G$, denoted by $G^2$, is the graph with $V(G^2)=V(G)$ and $E(G^2)=\{uv\mid d_G(u,v)\leq 2\}$. This means that two vertices are adjacent in $G^2$ if they are at distance at most two in $G$. If $\Delta$ is the maximum degree of $G$, then colouring its square $G^2$ requires at least $\Delta+1$ colours, while the greedy algorithm gives an upper bound of $\Delta^2+1$. This upper bound is attained by a few graphs, for example by the Petersen graph.
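The quadratic upper bound is just a count of vertices within distance two: a vertex $v$ has at most $\Delta$ neighbours, each of which contributes at most $\Delta-1$ further vertices at distance two from $v$, so
\[
d_{G^2}(v)\;\le\;\Delta+\Delta(\Delta-1)\;=\;\Delta^2,
\]
and colouring greedily in any vertex order therefore uses at most $\Delta^2+1$ colours.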
Regarding the colouring of the square of planar graphs, Wegner~\cite{wegner} posed the following conjecture in 1977:
\begin{conjecture}[Wegner] \label{wegner} For a planar graph $G$ of maximum degree $\Delta$: $$\chi(G^2)\leq \left\{ \begin{array}{ll}
7, & \mbox{$\Delta=3$};\\
\Delta+5, & \mbox{$4\leq \Delta \leq 7$};\\
\lceil\frac{3}{2}\Delta \rceil+1, & \mbox{$\Delta \geq 8$}.
\end{array} \right.$$ \end{conjecture}
In~\cite{Havet1} Havet, van den Heuvel, McDiarmid, and Reed showed that
the following holds: $\chi(G^2)\leq \frac{3}{2}\Delta(1+o(1))$, which is also true for the choice number (defined below). Dvo\v r\'ak, Kr\'al', Nejedl\'y, and \v Skrekovski~\cite{skreko1} showed that the square of every planar graph of girth at least six with sufficiently large maximum degree $\Delta$ is $(\Delta+2)$-colorable. Borodin and Ivanova~\cite{borodin2} strengthened this result to prove that for every planar graph $G$ of girth at least six with maximum degree $\Delta \ge 24$, the choice number of $G^2$ is at most $\Delta+2$. For colouring (rather than list-colouring), the same authors showed~\cite{borodin3} that for every planar graph $G$ of girth at least six with maximum degree $\Delta \ge 18$, the chromatic number of $G^2$ is at most $\Delta+2$.
Lih, Wang, and Zhu~\cite{Lih} showed that the square of a $K_4$-minor free graph with maximum degree $\Delta$ has chromatic number at most $\lfloor \frac{3}{2}\Delta \rfloor+1$ if $\Delta \geq 4$ and $\Delta+3$ if $\Delta =2,3$. The same bounds were shown to hold for the choice number by Hetherington and Woodall~\cite{Hetherington}.
All graphs in this paper are undirected, simple, and finite. For standard graph definitions see~\cite{diestel}.
Denote by $l(f)$ the length of a face $f$ and by $d(v)$ the degree of a vertex $v$. A {\it $k$-vertex} is a vertex of degree $k$.
A {\it $k^-$-vertex} is a vertex of degree at most $k$, and a {\it $k^+$-vertex} is a vertex of degree at least $k$. If a vertex $u$ is adjacent to a $k$-vertex $v$, then $v$ is a $k$-neighbor of $u$. A {\it thread} between two vertices with degree at least three is a path between them consisting of only 2-vertices. A {\it $k$-thread} is a thread with $k$ internal 2-vertices. If vertices $u$ and $v$ lie on a common thread, then $u$ and $v$ are {\it weak neighbors} of each other. Similarly, we define a weak $k$-neighbor.
A {\it colouring} of the vertices of a graph $G$ is a mapping $c:V(G)\rightarrow \mathbb{N}$; we call elements of $\mathbb{N}$ {\it colours}.
A colouring is {\it proper} if every two adjacent vertices are mapped to different colours. List colouring was first studied by Vizing~\cite{vizing} and is defined as follows. Let $G$ be a simple graph. A {\it list-assignment} $L$ is an assignment of lists of colours to vertices. A {\it list-colouring} is then a colouring where each vertex $v$ receives a colour from $L(v)$. The graph
$G$ is {\it $L$-choosable} if there is a proper $L$-list-colouring. If $G$ has a list-colouring for every list-assignment with $|L(v)|=k$ for each vertex $v$, then $G$ is {\it $k$-choosable}. The minimum $k$ such that $G$ is $k$-choosable is called the {\it choice number} of $G$, denoted $\chi_l(G)$.
To prove our theorem we will use the discharging method, which was first used by Wernicke~\cite{wernicke}; this technique is used to prove statements in structural graph theory, and it is commonly applied in the context of planar graphs. It is most well-known for its central role in the proof of the Four Colour Theorem. Here we apply the discharging method in the more general context of the {\it maximum average degree}, denoted $\mad(G)$, which is defined as $\mad(G):=\max_{H\subseteq G}\frac{2|E(H)|}{|V(H)|}$, where $H$ ranges over all subgraphs of $G$. A straightforward consequence of Euler's Formula is that every planar graph $G$ with girth at least $g$ satisfies $\mad(G)<\frac{2g}{g-2} = 2+\frac{4}{g-2}$. We call this Fact~1. Most of our results for planar graphs will follow from corresponding results for maximum average degree, via Fact 1.
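For completeness, here is a sketch of the standard computation behind Fact~1. It suffices to bound $2|E(H)|/|V(H)|$ for a subgraph $H$ attaining the maximum; since $\frac{2g}{g-2}>2$, we may assume this maximum exceeds $2$, and then $H$ may be taken connected with minimum degree at least $2$ (deleting a vertex of degree less than $|E(H)|/|V(H)|$ would increase the ratio). Hence every face boundary walk of a plane drawing of $H$ contains a cycle and so has length at least $g$, and counting edge--face incidences gives $g|F|\le 2|E|$. Combining this with Euler's Formula $|V|-|E|+|F|=2$ yields
\[
|E|\;\le\;\frac{g}{g-2}\bigl(|V|-2\bigr)\;<\;\frac{g}{g-2}\,|V|,
\qquad\text{hence}\qquad
\frac{2|E|}{|V|}\;<\;\frac{2g}{g-2}\;=\;2+\frac{4}{g-2}.
\]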
The key tool in many of our proofs is global discharging, which relies on reducible configurations that may be arbitrarily large. Global discharging was introduced by Borodin~\cite{borodin1}. Typically, the vertices in these reducible configurations have degrees only 2 and $\Delta$. Our innovation in this paper is that we consider arbitrarily large reducible configurations consisting entirely of 2-vertices and 3-vertices, even though $\Delta=4$. For two similar applications of global discharging, see~\cite{cranston1} and~\cite{borodin2}.
Kostochka and Woodall~\cite{kostochka} conjectured that, for every square of a graph, the chromatic number and choice number are the same: \begin{conjecture}[Kostochka and Woodall] \label{kostochka} Let $G$ be a simple graph. Then $$\chi_l(G^2) =\chi(G^2).$$ \end{conjecture}
When $G$ is a planar graph, the upper bound on $\chi(G^2)$ in terms of $\Delta$ was successively improved by Jonas~\cite{Jonas},
Wong~\cite{Wong}, Van den Heuvel and McGuinness~\cite{Heuvel},
Agnarsson and Halld\'orsson~\cite{Agnarsson},
Borodin et al.~\cite{borodin} and finally by
Molloy and Salavatipour~\cite{Molloy} to the best known upper bound so far: $\chi(G^2)\leq \lceil \frac{5}{3}\Delta\rceil+78.$
The choosability of squares of subcubic planar graphs has been extensively studied by Dvo\v r\'ak, \v Skrekovski, and Tancer~\cite{skreko}, Montassier and Raspaud~\cite{montassier}, Thomassen~\cite{thomassen}, Havet~\cite{havet}, and Cranston and Kim~\cite{cranston}. For the case $\Delta=4$ there have been no results so far. We give some upper bounds on $\chi_l(G^2)$ when $\Delta(G)= 4$ and $\mad(G)$ is bounded. These results imply bounds for $\chi_l(G^2)$ when $G$ is planar with prescribed girth:
\begin{theorem} \label{mainthm} Let $G$ be a graph with maximum degree $\Delta=4$. The following bounds hold: \begin{enumerate}[(a)]
\item[$(a)$] $G^2$ is $5$-choosable if $\mad(G)<16/7$; in particular, if $G$ is planar with girth at least 16.
\item[$(b)$] $G^2$ is $6$-choosable if $\mad(G)<22/9$; in particular, if $G$ is planar with girth at least 11.
\item[$(c)$] $G^2$ is $7$-choosable if $\mad(G)<18/7$; in particular, if $G$ is planar with girth at least 9.
\item[$(d)$] $G^2$ is $8$-choosable if $\mad(G)<14/5$; in particular, if $G$ is planar with girth at least 7.
\item[$(e)$] $G^2$ is $12$-choosable if $\mad(G)<10/3$; in particular, if $G$ is planar with girth at least 5.
\item[$(f)$] $G^2$ is $14$-choosable if $G$ is planar. \end{enumerate} \end{theorem}
This theorem is summarized in the following table: \begin{table}[!h] \begin{center}
\begin{tabular}{ r |c |c |c |c |c |c }
$\chi_l\leq$ & $5$ & $6$ & $7$ & $8$ & $12$ & $14$ \\
\hline
$\mad(G)<$ & $16/7$ & $22/9$ & $18/7$ & $14/5$ & $10/3$ & $-$ \\
\hline
$\mbox{planar and~}g\geq$ & $16$ & $11$ & $9$ & $7$ & $5$ & $3$ \\
\end{tabular} \caption{Upper bounds on the choice number for squares of graphs with $\Delta=4$ and bounded maximum average degree, including planar graphs with bounded girth.} \end{center} \end{table}
We will prove each of the claims by contradiction, studying a smallest counterexample to the claim with respect to the number of vertices. If we remove one or more vertices from this graph, then by minimality the square of the resulting graph can be properly coloured from the given lists. We will use this fact throughout the proofs.
\subsection{Reducible configurations}
A {\it configuration} is an induced subgraph $C$ of a graph $G$. We call a configuration {\it reducible} if it cannot appear in a minimal counterexample.
To prove that a configuration $C$ is reducible, we infer from the minimality of $G$ that the subgraph $G-C$ can be properly coloured, and then prove that this colouring can be extended to a proper colouring of the original graph $G$; this gives a contradiction. A configuration is {\it $k$-reducible} if it is reducible in the setting of $k$-choosability. Clearly a $k$-reducible configuration is also $(k+1)$-reducible.
We split our proof of the main theorem into six lemmas, one for each part of the theorem. Within each lemma, we prove the reducibility of the configurations used in that lemma. Once we prove a configuration is reducible, we will assume that such a configuration is not present in a minimal counterexample to that lemma.
We will prove that the configurations are reducible by using the same method each time: remove some vertices and colour the remaining graph by minimality. If necessary uncolour some vertices, and finally extend this colouring to the whole graph.
To simplify the presentation of the reducibility proofs we give figures using the following notation: A removed vertex is marked with a square around it. An uncoloured vertex is marked with a circle around it. The minimal number of colours left in the list of a removed or uncoloured vertex is written next to it.
These figures allow the reader to quickly verify that the configurations pictured are reducible.
In the first few reducibility proofs we will provide detailed reasoning but in the remaining ones we will only present the corresponding figure and leave the details to the reader.
We call a graph {\it degree choosable} if it can be colored from any list assignment $L$ such that $|L(v)|=d(v)$ for all $v\in V(G)$. For a few of the reducibility proofs, we will need the following result of Erd\H{o}s, Rubin, and Taylor~\cite{ERT}:
\begin{lem}[Choosability Lemma] \label{ERTlemma} A connected graph fails to be degree-choosable if and only if every block is a complete graph or an odd cycle. \end{lem}
\section{Proof of the Main Theorem}
In this section, we prove our main result, Theorem~\ref{mainthm}. The six parts of Theorem~\ref{mainthm} are completely independent, so we present the proof as six self-contained lemmas, each proving a corresponding part of the theorem. The proofs of Lemmas~\ref{lemma1}--\ref{lemma5} all use maximum average degree, while Lemma~\ref{lemma6} requires planarity, since it sends charge to faces. The proofs of Lemmas~\ref{lemma1}, \ref{lemma2}, and \ref{lemma4} make use of global discharging; the easiest of these proofs is Lemma~\ref{lemma1}, while Lemmas~\ref{lemma2} and \ref{lemma4} require additional details and subtlety. We now prove the six lemmas without further comment.
\begin{lem} \label{lemma1} If $\Delta(G)\le 4$ and $\mad(G)<16/7$, then $\chi_l(G^2)\le 5$. In particular, for every planar graph $G$ with $\Delta(G)\le 4$ and girth at least 16, we have $\chi_l(G^2)\le 5$. \end{lem} \begin{proof} The second statement follows from the first by Fact~1. To prove the first, we use discharging. Let $G$ be a minimal counterexample to the lemma, i.e., a minimal graph with $\Delta(G)\le 4$ and $\mad(G)<16/7$ such that $\chi_l(G^2)>5$. For each vertex $v$, we begin with charge $\mu(v)=d(v)$; we will show that after the discharging phase each vertex finishes with charge at least $16/7$, which gives a contradiction and proves the lemma.
We call a configuration {\it 5-reducible} if it cannot appear in a minimal counterexample to the lemma. We use the following configurations (see Fig.~\ref{slika1}): \begin{itemize}
\item[$(i)$] \textit{A 4-thread is 5-reducible.}
Let $v$ and $w$ be the middle two vertices of
the 4-thread. By the minimality of $G$ we can 5-list-color
$(G\setminus\{v,w\})^2$. Now $v$ and $w$ each have at least 2 colors
available, so we can extend the coloring to $G$.
\item[$(ii)$] \textit{ A 3-thread $S$ incident to a 3-vertex $u$ is 5-reducible.}
Let $v$ be the 2-vertex on $S$ adjacent to $u$ and let $w$ be the 2-vertex
adjacent to $v$. By the minimality of $G$, we can 5-list-color
$(G\setminus\{v,w\})^2$. Now $u$ and $v$ have at least one and two available
colors, respectively. So we can extend the coloring to $G$ by coloring $u$
then $v$.
\item[$(iii)$] \textit{A $3k$-cycle $C_{3k}$ with $d(v_{3i})=3$ for all $i$ and
$d(v_{3i+1})=d(v_{3i+2})=2$ for all $i$ (the subscripts are modulo $3k$)
is 5-reducible.} Let $S=\{v_{3i}:1\le i\le k\}$. We delete all
vertices on $C_{3k}$ with degree 2. Now the subgraph of $G^2$ that we
must color, $(C_{3k})^2\setminus S\cong C_{2k}$, is isomorphic to an even cycle.
Each uncolored vertex has at most 3 restrictions on its color, so it
has a list of at least 2 available colors. Now we can extend the coloring to $G$ since $\chi_l(C_{2k})=2$ (this is an easy exercise, and also follows immediately from the Choosability Lemma). \end{itemize}
Let $H$ denote the subgraph of $G$ induced by 2-threads with 3-vertices at both ends. Since configuration $(iii)$ is 5-reducible, $H$ must be acyclic. Since every tree has one more vertex than edge, we can recursively assign each 2-thread in $H$ to be sponsored by an incident 3-vertex such that each 3-vertex sponsors at most one 2-thread.
\begin{figure}
\caption{Configurations $(i)$, $(ii)$, and $(iii)$ from Lemma~\ref{lemma1} are 5-reducible.}
\label{slika1}
\end{figure}
We use the initial charge function $\mu(v)=d(v)$ and the following discharging rules.
\begin{itemize} \item {\bf R1:} Each $3$-vertex gives charge $1/7$ to each incident thread. \item {\bf R2:} Each 4-vertex gives charge $3/7$ to each incident thread.\footnote{If a 4-vertex $v$ is adjacent to two vertices in the same thread, i.e., $v$ serves as both endpoints of the thread, then $v$ sends twice the normal charge to the thread; similarly for Lemma~2.} \item {\bf R3:} Each $3$-vertex incident with a 2-thread that it sponsors gives an additional charge of $2/7$ to that 2-thread.
\end{itemize}
Now we show that each $3^+$-vertex finishes with charge at least $16/7$ and that each $k$-thread receives charge at least $2k/7$ (so that it finishes with charge at least $16k/7$). Note that a 1-vertex is 5-reducible, so $\delta(G)\ge 2$. First we consider $3^+$-vertices. If $d(v)=4$, then $v$ gives charge $3/7$ to each incident thread, so $\mu^*(v)\ge 4-4(3/7)=16/7$. If $d(v)=3$, then $v$ sends charge $1/7$ to each incident thread and an additional charge of $2/7$ to at most one incident thread, so $\mu^*(v)\ge 3 - 3(1/7)-1(2/7)=16/7$.
Now we consider threads. Each 3-thread receives charge $3/7$ from each endpoint, which are both 4-vertices by $(ii)$. Each 1-thread receives charge at least $1/7$ from each endpoint. Each 2-thread with at least one degree 4 endpoint receives charge $3/7$ from one endpoint and at least $1/7$ from the other. Finally, each 2-thread with two degree 3 endpoints receives charge $1/7$ from each endpoint and an additional charge of $2/7$ from its sponsor, for a total of $4/7$. Thus $\mad(G)\ge 16/7$. This contradiction completes the proof. \end{proof}
\begin{lem} \label{lemma2} If $\Delta(G)\le 4$ and $\mad(G)<22/9$, then $\chi_l(G^2)\le 6$. In particular, for every planar graph $G$ with $\Delta(G)\le 4$ and girth at least 11, we have $\chi_l(G^2)\le 6$. \end{lem} \begin{proof} The second statement follows from the first by Fact~1. To prove the first, we use discharging. Let $G$ be a minimal counterexample to the lemma.
For each vertex $v$, we begin with charge $\mu(v)=d(v)$, and we will show that after discharging each vertex finishes with charge at least $22/9$, which gives a contradiction and proves the lemma.
\begin{figure}
\caption{Configurations $(i)$ and $(ii)$ from Lemma~\ref{lemma2} are 6-reducible. }
\label{slika11}
\end{figure}
We call a configuration {\it 6-reducible} if it cannot appear in a minimal counterexample to the lemma. We use the following configurations (see Fig.~\ref{slika11}): \begin{itemize}
\item[$(i)$]\textit{ A 3-thread $S$ is 6-reducible.}
Let $v$ and $w$ be adjacent 2-vertices on $S$, with $v$ adjacent to an
endpoint of $S$. By the minimality of $G$ we can 6-list-color
$(G\setminus\{v,w\})^2$. Now $v$ and $w$ have at least 1 and 3 colors
available, respectively. So we can extend the coloring to $G$ by coloring $v$
then $w$.
\item[$(ii)$] \textit{A 2-thread $T$ incident to a 3-vertex $u$ is 6-reducible.}
Let $v$ and $w$ be the two 2-vertices of $T$, with $v$ adjacent to $u$. By the
minimality of $G$ we can 6-list-color $(G\setminus\{v,w\})^2$. Now $v$ and $w$
have at least 2 and 1 colors available, respectively. So we can extend the
coloring to $G$ by coloring $w$ then $v$. \end{itemize} Let $H$ be the subgraph induced by 2-threads; recall that the endpoints of each 2-thread must be 4-vertices, by $(ii)$. As in the proof of Lemma~1, $H$ must be acyclic. Thus, we can assign each 2-thread of $H$ to be sponsored by an incident $4$-vertex such that each 4-vertex sponsors at most one 2-thread.
If a 2-vertex $v$ has two 3-neighbors, call the 1-thread containing $v$ {\it light}. Let $J$ be the subgraph induced by light 1-threads. We will show that each component of $J$ must be a tree or a cycle. Suppose instead that $J$ contains a cycle with an incident edge. We denote the cycle by $u_1v_1u_2v_2\ldots u_kv_k$ where $d(u_i)=2$ and $d(v_i)=3$ for all $i$ and $v_1$ is adjacent to a 2-vertex $z$ not on the cycle (which is adjacent to a second 3-vertex). By minimality, we can 6-list-color $(G\setminus \{u_1,v_1,u_2,z\})^2$. Now only three neighbors of $v_1$ in $G^2$ are colored, so we can color $v_1$. Finally, we uncolor each vertex $u_i$. Now the uncolored vertices induce in $G^2$ a subgraph $K$
consisting of a cycle with a single vertex $z$ adjacent to two successive vertices on the cycle. For each vertex $x\in V(K)$, let $L(x)$ denote the colors available for $x$. Note that we have $|L(x)|\ge d_K(x)$ for all $x\in V(K)$. Thus, by the Choosability Lemma, we can extend the list-coloring to all of $V(G)$. So each component of $J$ must be a tree or a cycle; hence we can assign each 1-thread of $J$ to be sponsored by an incident 3-vertex such that each 3-vertex sponsors at most one 1-thread.
We use the initial charge function $\mu(v)=d(v)$ and the following discharging rules.
\begin{itemize} \item {\bf R1:} Each $3$-vertex gives charge $1/9$ to each incident thread. \item {\bf R2:} Each 4-vertex gives charge $3/9$ to each incident thread. \item {\bf R3:} Each $3^+$-vertex incident with a sponsored thread gives an additional charge of $2/9$ to that thread.
\end{itemize}
Now we show that each $3^+$-vertex finishes with charge at least $22/9$ and that each $k$-thread receives charge at least $4k/9$ (so it finishes with charge at least $22k/9$). As in Lemma~\ref{lemma1}, note that $\delta(G)\ge 2$. If $d(v)=4$, then $v$ gives charge $3/9$ to each incident thread and an additional $2/9$ to at most one sponsored thread, so $\mu^*(v)\ge 4-4(3/9)-1(2/9)=22/9$. If $d(v)=3$, then $v$ sends charge $1/9$ to each incident thread and an additional $2/9$ to at most one incident thread, so $\mu^*(v)\ge 3 - 3(1/9)-1(2/9)=22/9$.
Now we consider threads. Each 2-thread receives charge $3/9$ from each endpoint and charge 2/9 from its sponsor, for a total charge of $8/9$. Consider a 1-thread with interior 2-vertex $v$. If $v$ has at least one 4-neighbor, then the 1-thread receives charge at least $3/9+1/9=4/9$. Each 1-thread with both endpoints of degree 3 receives charge $1/9$ from each endpoint and charge $2/9$ from its sponsor for a total charge of $4/9$. Thus $\mad(G)\ge 22/9$. This contradiction completes the proof. \end{proof}
\begin{lem} \label{lemma3} If $\Delta(G)\le 4$ and $\mad(G)<18/7$, then $\chi_l(G^2)\le 7$. In particular, for every planar graph $G$ with $\Delta(G)\le 4$ and girth at least 9, we have $\chi_l(G^2)\le 7$. \end{lem} \begin{proof} The second statement follows from the first by Fact~1. To prove the first, we use discharging. Let $G$ be a minimal counterexample to the lemma.
For each vertex $v$, we begin with charge $\mu(v)=d(v)$, and we will show that after discharging each vertex finishes with charge at least $18/7$, which gives a contradiction and proves the lemma. We leave to the reader the details of verifying that each of the three following configurations is 7-reducible (see Fig.~\ref{slika2}): \begin{itemize}
\item[$(i)$] \textit{A thread of two 2-vertices;}
\item[$(ii)$] \textit{A 3-vertex adjacent to three 2-vertices;}
\item[$(iii)$]\textit{A 3-vertex adjacent to two 2-vertices, one of which is adjacent to a second 3-vertex.} \end{itemize}
We use the initial charge function $\mu(v)=d(v)$ and the following discharging rules.
\begin{itemize} \item {\bf R1:} Each $4$-vertex gives charge $5/14$ to each 2-neighbor. \item {\bf R2:} Each $3$-vertex with a single 2-neighbor gives charge $4/14$ to that 2-neighbor. \item {\bf R3:} Each $3$-vertex with two 2-neighbors gives charge $3/14$ to each 2-neighbor. \end{itemize}
Now we show that each vertex finishes with charge at least $18/7$. Note that $\delta(G)\ge 2$. If $d(v)=2$ and $v$ has a 4-neighbor, then $\mu^*(v)\ge 2+5/14+3/14=18/7$. If $d(v)=2$ and $v$ has no 4-neighbor, then $v$ receives charge $4/14$ from each of its 3-neighbors, since otherwise we have configuration $(iii)$ in Figure~\ref{slika2}. Now $\mu^*(v)\ge 2+2(4/14)=18/7$. If $d(v)=3$, then by $(ii)$ $v$ has at most two 2-neighbors, so $\mu^*(v)\ge 3-2(3/14)=18/7$. Finally, if $d(v)=4$, then $v$ has at most four 2-neighbors, so $\mu^*(v)\ge 4 - 4(5/14)=18/7$. Thus, $\mad(G)\ge 18/7$. This contradiction completes the proof. \end{proof}
\begin{figure}
\caption{Configurations $(i)$, $(ii)$, and $(iii)$ from Lemma~\ref{lemma3} are 7-reducible. }
\label{slika2}
\end{figure}
\begin{lem} \label{lemma4} If $\Delta(G)\le 4$ and $\mad(G)<14/5$, then $\chi_l(G^2)\le 8$. In particular, for every planar graph $G$ with $\Delta(G)\le 4$ and girth at least 7, we have $\chi_l(G^2)\le 8$. \end{lem} \begin{proof} The second statement follows from the first by Fact~1. To prove the first, we use discharging. Let $G$ be a minimal counterexample to the lemma.
For each vertex $v$, we begin with charge $\mu(v)=d(v)$, and we will show that after discharging each vertex finishes with charge at least $14/5$, which gives a contradiction and proves the lemma. We call a 2-vertex with two 3-neighbors a {\it light 2-vertex}. We call a 2-vertex with a 3-neighbor and a 4-neighbor a {\it medium 2-vertex}. We call a 2-vertex with two 4-neighbors a {\it heavy 2-vertex}. We call a 3-vertex adjacent to a light 2-vertex a {\it needy 3-vertex}. Below we note that adjacent 2-vertices are 8-reducible. This implies that every 2-vertex is heavy, medium, or light.
Recall from the previous lemma that adjacent 2-vertices are 7-reducible, so they are also 8-reducible. We leave to the reader the details of verifying that the following configurations are 8-reducible (see Fig.~\ref{slika22}): \begin{itemize}
\item[$(i)$] \textit{ a 3-vertex with two 2-neighbors;}
\item[$(ii)$] \textit{ a 3-vertex with two 3-neighbors and a light 2-neighbor;}
\item[$(iii)$] \textit{ a 4-vertex with three 2-neighbors, one of which is medium;}
\item[$(iv)$] \textit{a 4-vertex with a needy 3-neighbor and two 2-neighbors, one of which is medium.} \end{itemize} If a 1-thread $S$ contains a heavy 2-vertex $v$ then we call $S$ {\it heavy}. Let $J$ be the subgraph induced by heavy 1-threads. Each component of $J$ must be a tree or a cycle. Since the proof is identical to that given in Lemma~\ref{lemma2}, we do not repeat the details here. Since each component of $J$ is a tree or a cycle, we can assign each 2-vertex on a heavy 1-thread to be sponsored by an adjacent 4-vertex, so that each 4-vertex sponsors at most one such 2-vertex.
We use the initial charge function $\mu(v)=d(v)$ and the following discharging rules.
\begin{itemize} \item {\bf R1:} Each $3^+$-vertex gives charge $1/5$ to each adjacent 2-vertex. \item {\bf R2:} Each 4-vertex gives charge $1/5$ to each adjacent needy 3-vertex. \item {\bf R3:} Each needy 3-vertex gives an additional $1/5$ to each adjacent light 2-vertex. \item {\bf R4:} Each 4-vertex gives an additional $2/5$ to each adjacent medium 2-vertex and each adjacent sponsored 2-vertex. \end{itemize}
\begin{figure}
\caption{Configurations $(i)$--$(iv)$ from Lemma~\ref{lemma4} are 8-reducible.}
\label{slika22}
\end{figure}
Now we show that each vertex finishes with charge at least $14/5$. Note that $\delta(G)\ge 2$.
Suppose $d(v)=2$. If $v$ is heavy, then $v$ receives charge $1/5$ from each neighbor and an additional charge $2/5$ from its sponsor, so $\mu^*(v)=2+2(1/5)+2/5=14/5$. If $v$ is medium, then $v$ receives charge $1/5$ from its 3-neighbor and charge $1/5+2/5$ from its 4-neighbor, so $\mu^*(v)=2+1/5+1/5+2/5=14/5$. If $v$ is light, then $v$ receives charge $1/5$ from each neighbor and an additional charge $1/5$ from each neighbor, so $\mu^*(v)=2+2(2/5)=14/5$.
Suppose $d(v)=3$. By $(i)$, $v$ has at most one 2-neighbor. If $v$ has a light 2-neighbor, then $v$ gives it charge $1/5+1/5$ and $v$ receives charge $1/5$ from some 4-neighbor, since otherwise we have configuration $(ii)$. So $\mu^*(v)\ge 3-2/5+1/5=14/5$. If $v$ has a medium 2-neighbor, then $v$ gives it only charge $1/5$, so $\mu^*(v)\ge 3-1/5=14/5$.
Suppose $d(v)=4$. If $v$ has no medium neighbors, then $v$ gives charge at most $1/5$ to each neighbor and an additional charge of $2/5$ to at most one sponsored 2-vertex, so $\mu^*(v)\ge 4-4(1/5)-2/5=14/5$. So suppose that $v$ has a medium 2-neighbor. If $v$ has only one 2-neighbor, then $v$ gives charge at most 1/5 to each other neighbor and charge $1/5+2/5$ to its medium 2-neighbor, so $\mu^*(v)\ge 4-3(1/5)-1/5-2/5=14/5$. If $v$ has at least two 2-neighbors, at least one of which is medium, then by configurations $(iii)$ and $(iv)$, $v$ gives charge to no neighbors besides these two 2-neighbors. Since $v$ gives total charge at most $3/5$ to each of these 2-neighbors, $\mu^*(v)\ge 4-2(3/5)=14/5$.
Thus, each vertex finishes with charge at least $14/5$, so $\mad(G)\ge 14/5$. This contradiction completes the proof. \end{proof}
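Since this lemma has more cases than the previous one, a mechanical check of the arithmetic is again worthwhile. The following sketch (our own sanity check, not part of the proof) confirms that each case of the analysis finishes with charge exactly $14/5$.

```python
from fractions import Fraction as F

target = F(14, 5)
q = F(1, 5)  # the basic unit of charge in rules R1-R4

final = {
    "heavy 2-vertex":                    2 + 2 * q + 2 * q,      # R1 twice, R4 from sponsor
    "medium 2-vertex":                   2 + q + q + 2 * q,      # R1 twice, R4 from 4-neighbor
    "light 2-vertex":                    2 + 2 * q + 2 * q,      # R1 twice, R3 twice
    "3-vertex, light 2-neighbor":        3 - 2 * q + q,          # R1+R3 out, R2 in
    "3-vertex, medium 2-neighbor":       3 - q,                  # R1 out only
    "4-vertex, no medium 2-neighbor":    4 - 4 * q - 2 * q,      # R1/R2 out, R4 to sponsoree
    "4-vertex, one (medium) 2-neighbor": 4 - 3 * q - q - 2 * q,  # R1/R2, then R1+R4
    "4-vertex, two 2-neighbors":         4 - 2 * 3 * q,          # by (iii),(iv): only these
}
for case, mu_star in final.items():
    assert mu_star >= target, case
```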
\begin{lem} \label{lemma5} If $\Delta(G)\le 4$ and $\mad(G)<10/3$, then $\chi_l(G^2)\le 12$. In particular, for every planar graph $G$ with $\Delta(G)\le 4$ and girth at least 5, we have $\chi_l(G^2)\le 12$. \end{lem} \begin{proof} The second statement follows from the first by Fact~1. To prove the first, we use discharging. Let $G$ be a minimal counterexample to the lemma.
For each vertex $v$, we begin with charge $\mu(v)=d(v)$, and we will show that after discharging each vertex finishes with charge at least $10/3$, which gives a contradiction and proves the lemma.
\begin{figure}
\caption{Configurations $(i)$--$(iv)$ from Lemma~\ref{lemma5} are 12-reducible.}
\label{slikaQ15}
\end{figure}
We leave to the reader the details of verifying 12-reducibility of the following four configurations (see Fig.~\ref{slikaQ15}): \begin{itemize}
\item[$(i)$] \textit{a 2-vertex adjacent to a 3-vertex;}
\item[$(ii)$] \textit{a 3-vertex adjacent to two 3-vertices;}
\item[$(iii)$] \textit{a 4-vertex with two adjacent 2-vertices;}
\item[$(iv)$] \textit{a 4-vertex adjacent to a 2-vertex and a 3-vertex.} \end{itemize} We use the initial charge function $\mu(v)=d(v)$ and the following discharging rules.
\begin{itemize} \item {\bf R1:} Each 4-vertex gives charge $2/3$ to each adjacent 2-vertex. \item {\bf R2:} Each 4-vertex gives charge $1/6$ to each adjacent 3-vertex. \end{itemize}
Now we show that each vertex finishes with charge at least $10/3$. Note that $\delta(G)\ge 2$. If $d(v)=2$, then by $(i)$ both neighbors of $v$ are 4-vertices, so $\mu^*(v)=2+2(2/3)=10/3$. If $d(v)=3$, then by $(i)$ $v$ has no 2-neighbors and by $(ii)$ $v$ has two 4-neighbors, so $\mu^*(v)\ge3+2(1/6)=10/3$. Suppose that $d(v)=4$. If $v$ has a 2-neighbor, then by $(iii)$ and $(iv)$ $v$ has no other $3^-$-neighbor, so $\mu^*(v)\ge 4-2/3=10/3$. If $v$ has no 2-neighbor, then $\mu^*(v)\ge 4-4(1/6)=10/3$.
Thus, each vertex finishes with charge at least $10/3$, so $\mad(G)\ge 10/3$. This contradiction completes the proof. \end{proof}
\begin{lem} \label{lemma6} If $G$ is planar and $\Delta(G)\le 4$, then $\chi_l(G^2)\le 14$. \end{lem} \begin{proof} Let $G$ be a minimal planar graph with $\chi_l(G^2)>14$. The following six configurations are 14-reducible (see Fig.~\ref{slika6}): \begin{itemize}
\item[$(i)$] \textit{a 2-vertex;}
\item[$(ii)$] \textit{two adjacent 3-vertices;}
\item[$(iii)$] \textit{a 3-vertex incident to a 3-face;}
\item[$(iv)$] \textit{a 3-vertex incident to a 4-face;}
\item[$(v)$] \textit{a 4-vertex incident to two 3-faces (sharing an edge or not);}
\item[$(vi)$] \textit{a 4-vertex incident to a 3-face and a 4-face (sharing an edge or not).} \end{itemize}
We use discharging with the following initial charges: \begin{itemize} \item $\mu(v)=2d(v)-6$ for each vertex $v$. \item $\mu(f)=\ell(f)-6$ for each face $f$. \end{itemize} By Euler's formula, the sum of the charges is negative. We use the following discharging rules. \begin{itemize} \item {\bf R1:} Each 4-vertex gives charge 1 to each incident 3-face. \item {\bf R2:} Each 4-vertex gives charge 1/2 to each incident 4-face. \item {\bf R3:} Each 4-vertex gives charge 1/3 to each incident 5-face. \end{itemize} \begin{figure}
\caption{Configurations $(i)$--$(vi)$ from Lemma~\ref{lemma6} are 14-reducible.}
\label{slika6}
\end{figure}
Now we show that all vertices and faces finish with nonnegative charge, which is a contradiction. By $(i)$, we have $\delta(G)\ge 3$. Thus, we must verify that each $5^-$-face receives sufficient charge and that no 4-vertex gives away too much charge; note that $4$-vertices give charge only to faces.
If $\ell(f)=3$, then by $(iii)$ each incident vertex is a 4-vertex, so $\mu^*(f)=-3+3(1)=0$. If $\ell(f)=4$, then by $(iv)$ each incident vertex is a 4-vertex, so $\mu^*(f)=-2+4(1/2)=0$. If $\ell(f)=5$, then (since $G$ has no adjacent 3-vertices by $(ii)$), $f$ has at least three incident 4-vertices, so $\mu^*(f)\ge -1 + 3(1/3)=0$.
If $d(v)=3$, then $\mu^*(v)=\mu(v)=0$. If $d(v)=4$ and $v$ is incident to a triangle, then by $(v)$ and $(vi)$ vertex $v$ is also incident to three $5^+$-faces, so $\mu^*(v)\ge 2-1-3(1/3)=0$. If $d(v)=4$ and $v$ is not incident to a triangle, then $\mu^*(v)\ge 2-4(1/2)=0$.
Thus, each vertex and face finishes with nonnegative charge. This contradicts the fact that the sum of the initial charges was negative. This contradiction completes the proof. \end{proof}
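The claim that the initial charges sum to a negative number follows from Euler's formula: for a connected plane graph, $\sum_v (2d(v)-6) + \sum_f (\ell(f)-6) = 4|E|-6|V|+2|E|-6|F| = 6(|E|-|V|-|F|) = -12$. The sketch below (our own illustration, not from the paper) verifies this identity on the cube graph $Q_3$.

```python
# Cube graph Q3: 8 vertices of degree 3, 6 quadrilateral faces.
degrees = [3] * 8
face_lengths = [4] * 6

V, F = len(degrees), len(face_lengths)
E = sum(degrees) // 2                # handshake lemma
assert sum(face_lengths) == 2 * E    # each edge borders two faces
assert V - E + F == 2                # Euler's formula for connected plane graphs

total = sum(2 * d - 6 for d in degrees) + sum(l - 6 for l in face_lengths)
assert total == -12                  # the sum of initial charges is negative
```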
\end{document} |
\begin{document}
\title{Detection of Gravitational Wave - \\ An Application of Relativistic Quantum Information Theory} \author{Ye Yeo} \affiliation{Department of Physics, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore}
\author{Chee Leong Ching} \affiliation{Department of Physics, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore}
\author{Jeremy Chong} \affiliation{Department of Physics, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore}
\author{Wee Kang Chua} \affiliation{Department of Physics, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore}
\author{Andreas Dewanto} \affiliation{Department of Physics, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore}
\author{Zhi Han Lim} \affiliation{Department of Physics, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore}
\begin{abstract} We show that a passing gravitational wave may influence the spin entropy and spin negativity of a system of $N$ massive spin-$1/2$ particles, in a way that is characteristic of the radiation. We establish the specific conditions under which this effect may be nonzero. The change in spin entropy and negativity, however, is extremely small. Here, we propose and show that this effect may be amplified through entanglement swapping. Relativistic quantum information theory may thus contribute towards the detection of gravitational waves. \end{abstract}
\maketitle
Relativity and quantum mechanics are the two pillars of 20th century physics. Einstein's general relativity \cite{Hartle} is the classical theory of gravity. Many of its predictions have been experimentally confirmed via very precise measurements. One of its most intriguing predictions is the propagation of ripples in spacetime curvature at the speed of light ($c = 1$) called {\em gravitational waves}. Gravity is a long-range interaction and it is not possible to shield this interaction. Gravitational waves thus provide a new window for exploring astronomical phenomena. However, gravity is the weakest of the four fundamental interactions; this means that gravitational waves are not easily detected. In fact, they have not yet been detected on Earth. Quantum mechanics, in combination with computation and information, leads to unexpected new ways that information can be processed and transmitted, extending the known capabilities in the field of classical information to previously unsuspected limits \cite{Nielsen}. One of the greatest challenges faced by quantum information scientists is the fragility of quantum coherence and entanglement in the presence of environmental decoherence \cite{Zurek}. In particular, multipartite entangled states such as the GHZ states \cite{Greenberger} become more susceptible under certain kinds of noise as the number of particles increases \cite{Carvalho}. Motivated by its fundamental importance in gravitational wave detection and several recent developments in relativistic quantum information theory \cite{Peres1}, we explore, in this paper, the possibility of turning this fragility into a quantum means to detect gravitational radiation.
Peres, {\em et al.} \cite{Peres2} were the first to study the relativistic properties of spin entropy for a single, free particle of spin 1/2 and nonzero mass in flat spacetime. They showed that even if the initial quantum state of the particle is a direct product of a function of momentum and a function of spin, the state under a Lorentz boost is in general not a direct product. This is because the spin undergoes a Wigner rotation \cite{Wigner} whose direction and magnitude depend on the momentum of the particle. Spin and momentum appear to be ``entangled''. As a result, the reduced density matrix for spin becomes mixed and the corresponding entropy becomes nonzero. Slightly later, Gingrich and Adami \cite{Gingrich} showed that Lorentz boost can also affect the entanglement between spins. Namely, a maximally entangled Bell state of two massive spin-1/2 particles loses entanglement under a Lorentz boost.
More recently, Terashima and Ueda \cite{Tera1} (see also \cite{Tera2}) extended the original investigation by Peres {\em et al.}, by considering the relativistic quantum mechanics of a massive spin-1/2 particle moving in curved spacetime, which entails a breakdown of the global $SO(3, 1)$ symmetry associated with flat spacetime. In this case, spin can only be defined locally at each spacetime point by invoking the $SO(3, 1)$ symmetry of some local inertial frame. Specifically, a spin-1/2 particle in curved spacetime is defined as a particle whose one-particle states furnish the spin-1/2 representation of the {\em local Lorentz transformation}. Terashima and Ueda showed that, as a consequence of this local definition, the motion of the particle is accompanied by a continuous succession of local Lorentz transformations, which gives rise to spin entropy production that is unique to the curved spacetime. They illustrated their ideas with the Schwarzschild spacetime in \cite{Tera1, Tera2}.
In this paper, we study the effects on the quantum states of one, two, and in general $N$ spin-1/2 particles due to a plane gravitational wave spacetime propagating in the positive $z$-direction \cite{Hartle}: \begin{equation} ds^2 = -dt^2 + [1 + f(t - z)]dx^2 + [1 - f(t - z)]dy^2 + dz^2. \end{equation}
The size and shape of the propagating ripple in curvature are determined by some dimensionless function $f$ ($|f(t - z)| \ll 1$). For example, for a Gaussian wave packet with width $\omega$ and maximum height $A$, \begin{equation} f(t - z) = A\exp\left[-\frac{(t - z)^2}{\omega^2}\right]. \end{equation} For a gravitational wave of amplitude $A$ and definite frequency $\varpi$, \begin{equation} f(t - z) = A\sin[\varpi(t - z)]. \end{equation} The only nonvanishing Christoffel symbols for the above metric are $2\Gamma^t_{xx} = -2\Gamma^t_{yy} = \partial f/\partial t$, $-2\Gamma^z_{xx} = 2\Gamma^z_{yy} = \partial f/\partial z$, $\Gamma^x_{tx} = \partial\ln\sqrt{1 + f}/\partial t$, $\Gamma^x_{xz} = \partial\ln\sqrt{1 + f}/\partial z$, $\Gamma^y_{ty} = \partial\ln\sqrt{1 - f}/\partial t$, and $\Gamma^y_{yz} = \partial\ln\sqrt{1 - f}/\partial z$.
We begin with a single spin-1/2 particle $A$ of mass $m$ in a local inertial frame at the spacetime point $x^{\mu}_i$. This particle is initially prepared at proper time $\tau_i$ in the state \begin{equation}
|\psi\rangle_A = \int N(k^a)d^3\vec{k}\sum_{\lambda}C(k^a, \lambda)|k^a, \lambda\rangle_A, \end{equation}
where $N(k^a)d^3\vec{k} = md^3\vec{k}/\sqrt{\vec{k}\cdot\vec{k} + m^2}$ is the Lorentz-invariant volume element. From here on, it is assumed that Latin and Greek letters run over the four inertial-coordinate labels 0, 1, 2, 3 and the four general-coordinate labels, respectively. $|k^a, \lambda\rangle_A$ is the momentum eigenstate of the particle, labeled by the four-momentum $k^a = (\sqrt{\vec{k}\cdot\vec{k} + m^2}, \vec{k})$ and by the $z$-component $\lambda$ ($= \uparrow$ or 0, $\downarrow$ or 1) of the spin. We consider, in particular, the case where the coefficient $C(k^a, \lambda) = D(k^a)\delta_{\lambda 0}$, \begin{equation} D(k^a) = \frac{1}{\sqrt{N(k^a)}\sqrt{\pi}w} \prod_{a = 1, 3}\exp\left[-\frac{(k^a - q^a(x_i))^2}{2w^2}\right]\sqrt{\delta(k^2)}. \end{equation}
We assume that the spacetime curvature does not change drastically within the spacetime scale of the wave packet. $q^a(x_i)$ is as given in Eq.(18). Together with the orthogonality condition ${_A}\langle k'^a, \lambda'|k^a, \lambda\rangle_A = \delta^3(\vec{k}' - \vec{k})\delta_{\lambda'\lambda}/N(k^a)$ we clearly have ${_A}\langle\psi|\psi\rangle_A = \int N(k^a)d^3\vec{k}\sum_{\lambda}|C(k^a, \lambda)|^2 = 1$, i.e., $|\psi\rangle_A$ is normalized. To ease calculations, we set $k^2$ to zero with no loss of generality. It follows that at $\tau_i$ the reduced density matrix for spin, \begin{equation}
\rho_A(\tau_i) \equiv \int N(k^a)d^3\vec{k}\ {_A}\langle k^a|\psi\rangle_A\langle\psi|k^a\rangle_A = |0\rangle_A\langle 0|, \end{equation} and the corresponding entropy $S_A(\tau_i) \equiv -{\rm tr}[\rho_A(\tau_i)\log_2\rho_A(\tau_i)] = 0$. We will show that at a later proper time $\tau_f$, $\rho_A(\tau_i)$ evolves to \begin{equation} \rho'_A(\tau_f) \equiv {\cal E}[\rho_A(\tau_i)] = \frac{1}{2}\left(\begin{array}{cc} 1 + \bar{c} & \bar{s} \\ \bar{s} & 1 - \bar{c}\end{array}\right), \end{equation}
with spin entropy $S'_A(\tau_f) = -P\log_2P - (1 - P)\log_2(1 - P)$, $P = (1 - |\bar{u}|)/2$. Here, \begin{equation}
\bar{u} = \int N(k^a)d^3\vec{k}|D(k^a)|^2\exp(i\Omega), \end{equation} with $\Omega = \Omega(k^a; \tau_i, \tau_f, \xi, \vartheta)$ as given in Eq.(22), $\bar{c} = {\rm Re}(\bar{u})$ and $\bar{s} = {\rm Im}(\bar{u})$. It will be useful to note \begin{eqnarray}
{\cal E}[R^{00}] \equiv {\cal E}[|0\rangle\langle 0|] = \frac{1}{2}\left(\begin{array}{cc} 1 + \bar{c} & \bar{s} \\ \bar{s} & 1 - \bar{c}\end{array}\right), & &
{\cal E}[R^{01}] \equiv {\cal E}[|0\rangle\langle 1|] = \frac{1}{2}\left(\begin{array}{cc} -\bar{s} & 1 + \bar{c} \\ -1 + \bar{c} & \bar{s}\end{array}\right), \nonumber \\
{\cal E}[R^{10}] \equiv {\cal E}[|1\rangle\langle 0|] = \frac{1}{2}\left(\begin{array}{cc} -\bar{s} & -1 + \bar{c} \\ 1 + \bar{c} & \bar{s}\end{array}\right), & &
{\cal E}[R^{11}] \equiv {\cal E}[|1\rangle\langle 1|] = \frac{1}{2}\left(\begin{array}{cc} 1 - \bar{c} & -\bar{s} \\ -\bar{s} & 1 + \bar{c}\end{array}\right). \end{eqnarray}
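The eigenvalues of $\rho'_A(\tau_f)$ in Eq.(7) are $(1 \mp |\bar{u}|)/2$, which is where the expression for $P$ comes from. A minimal numerical sketch (ours, with hypothetical values of $\bar{c}$ and $\bar{s}$, which would in practice come from Eq.(8)):

```python
import numpy as np

c, s = 0.6, 0.3  # hypothetical values of c_bar = Re(u_bar), s_bar = Im(u_bar)
rho = 0.5 * np.array([[1 + c, s],
                      [s, 1 - c]])        # rho'_A(tau_f) of Eq. (7)

u_abs = np.hypot(c, s)                    # |u_bar|
P = 0.5 * (1 - u_abs)

# eigenvalues of rho'_A are (1 -+ |u_bar|)/2, so S'_A is the binary entropy of P
assert np.allclose(np.sort(np.linalg.eigvalsh(rho)), [P, 1 - P])
S = -P * np.log2(P) - (1 - P) * np.log2(1 - P)
```

As $|\bar{u}| \to 1$ the entropy $S$ tends to $0$, consistent with the remark later in the text that the physical effect is extremely small.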
Next, we consider two spin-1/2 particles $A$ and $B$ with equal mass $m$, initially prepared in the state \begin{equation}
|\Psi\rangle_{AB} = \int\int N(k^a)N(p^b)d^3\vec{k}d^3\vec{p}\sum_{\lambda, \sigma}C(k^a, \lambda; p^b, \sigma)
|k^a, \lambda\rangle_A \otimes |p^b, \sigma\rangle_B, \end{equation}
with $C(k^a, \lambda; p^b, \sigma) = D(k^a)D(p^b)\delta_{\lambda\sigma}$. By writing $|\Psi\rangle_{AB}$ as a density matrix and tracing over the momentum degrees of freedom, we obtain, at $\tau_i$, a maximally entangled Bell state \begin{equation}
\chi_{AB}(\tau_i) = |\Psi^0_{Bell}\rangle_{AB}\langle\Psi^0_{Bell}| = \frac{1}{2}\sum^1_{j, k = 0} R^{jk}_A \otimes R^{jk}_B, \end{equation}
where $|\Psi^0_{Bell}\rangle \equiv (|00\rangle + |11\rangle)/\sqrt{2}$, $S_{AB}(\tau_i) \equiv -{\rm tr}[\chi_{AB}(\tau_i)\log_2\chi_{AB}(\tau_i)] = 0$, and (spin) {\em negativity} ${\cal N}[\chi_{AB}(\tau_i)] = 1$. Consider a density matrix $\chi_{AB}$ and its partial transposition $\chi^{T_A}_{AB}$ for a two spin-1/2 system $AB$. $\chi_{AB}$ is entangled if and only if $\chi^{T_A}_{AB}$ has any negative eigenvalues \cite{Peres3, Horodecki}. The negativity \cite{Vidal} is a computable measure of entanglement defined by ${\cal N}[\chi_{AB}] \equiv \max\{-2\sum_i\eta_i,\ 0\}$, where $\eta_i$ is a negative eigenvalue of $\chi^{T_A}_{AB}$. At $\tau_f$, we will show that \begin{equation} \chi'_{AB}(\tau_f) = \frac{1}{2}\sum^1_{j, k = 0} {\cal E}[R^{jk}_A] \otimes {\cal E}[R^{jk}_B] = \frac{1}{4}\left(\begin{array}{cccc}
1 + |\bar{u}|^2 & 0 & 0 & 1 + |\bar{u}|^2 \\
0 & 1 - |\bar{u}|^2 & -(1 - |\bar{u}|^2) & 0 \\
0 & -(1 - |\bar{u}|^2) & 1 - |\bar{u}|^2 & 0 \\
1 + |\bar{u}|^2 & 0 & 0 & 1 + |\bar{u}|^2 \end{array}\right), \end{equation}
with $S_{AB}(\tau_f) = -P\log_2P - (1 - P)\log_2(1 - P)$ but $P = (1 - |\bar{u}|^2)/2$ (see FIG. 1 and 2), and ${\cal N}[\chi'_{AB}(\tau_f)] = |\bar{u}|^2$ (see FIG. 3 and 4). Generalization to a system of $N$ spin-$1/2$ particles is straightforward: \begin{eqnarray}
\chi_{A_1\cdots A_N}(\tau_i) & = & |\Psi^0_{GHZ}\rangle_{A_1\cdots A_N}\langle\Psi^0_{GHZ}| = \frac{1}{2}\sum^1_{j, k = 0} R^{jk}_{A_1} \otimes \cdots \otimes R^{jk}_{A_N}, \nonumber \\ \chi'_{A_1\cdots A_N}(\tau_f) & = & \frac{1}{2}\sum^1_{j, k = 0} {\cal E}[R^{jk}_{A_1}] \otimes \cdots \otimes {\cal E}[R^{jk}_{A_N}]. \end{eqnarray}
Here, $|\Psi^0_{GHZ}\rangle \equiv (|0\cdots 0\rangle + |1\cdots 1\rangle)/\sqrt{2}$ \cite{Greenberger}.
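The matrix form of $\chi'_{AB}(\tau_f)$ in Eq.(12), and its negativity $|\bar{u}|^2$, can be checked directly from the maps ${\cal E}[R^{jk}]$ of Eq.(9). The sketch below is our own numerical verification, using a hypothetical (exaggerated) value of $\bar{u}$.

```python
import numpy as np

u = 0.8 * np.exp(0.3j)   # hypothetical u_bar of Eq. (8)
c, s = u.real, u.imag    # c_bar and s_bar

E = {  # the four matrices E[R^{jk}] of Eq. (9)
    (0, 0): 0.5 * np.array([[1 + c, s], [s, 1 - c]]),
    (0, 1): 0.5 * np.array([[-s, 1 + c], [-1 + c, s]]),
    (1, 0): 0.5 * np.array([[-s, -1 + c], [1 + c, s]]),
    (1, 1): 0.5 * np.array([[1 - c, -s], [-s, 1 + c]]),
}

# chi'_{AB}(tau_f) = (1/2) sum_{j,k} E[R^{jk}] (x) E[R^{jk}], per Eq. (12)
chi = 0.5 * sum(np.kron(E[j, k], E[j, k]) for j in (0, 1) for k in (0, 1))

u2 = abs(u) ** 2
expected = 0.25 * np.array([[1 + u2, 0, 0, 1 + u2],
                            [0, 1 - u2, -(1 - u2), 0],
                            [0, -(1 - u2), 1 - u2, 0],
                            [1 + u2, 0, 0, 1 + u2]])
assert np.allclose(chi, expected)

# negativity from the eigenvalues of the partial transpose over A
pt = chi.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
eigs = np.linalg.eigvalsh(pt)
neg = max(-2.0 * eigs[eigs < 0].sum(), 0.0)
assert np.isclose(neg, u2)
```

Note that $\chi'_{AB}$ is Bell-diagonal: it is the mixture $\frac{1+|\bar{u}|^2}{2}|\Psi^0_{Bell}\rangle\langle\Psi^0_{Bell}| + \frac{1-|\bar{u}|^2}{2}|\Psi^-\rangle\langle\Psi^-|$, which makes the negativity $|\bar{u}|^2$ immediate.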
In order to measure the effects described by Eqs.(7) and (12), we introduce a {\em static} observer at each spacetime point along the ``trajectory'' of the particle(s). Each observer is assigned a local inertial frame defined by the following convenient choice of {\em vierbein} $e^{\mu}_a(x)$: \begin{equation} e^t_0(x) = 1, e^x_1(x) = \frac{1}{\sqrt{1 + f}}, e^y_2(x) = \frac{1}{\sqrt{1 - f}}, e^z_3(x) = 1, \end{equation} with all the other components being zero. Furthermore, we demand that the particle(s) be moving with four-velocity \begin{equation} u^{\mu}(x) = (\cosh\xi, \frac{\sinh\xi\sin\vartheta}{\sqrt{1 + f}}, 0, \sinh\xi\cos\vartheta) \end{equation} or four-momentum $q^{\mu}(x) = mu^{\mu}(x)$. Here, $\tanh\xi \equiv v$ ($=$ constant $< 1$), i.e., $\xi$ is the rapidity in the local inertial frame, and $0 < \vartheta < \pi/2$. In order for the particle(s) to move in this way, which is not a geodesic motion, we must apply an external force. The acceleration due to this external force is given by $a^{\mu}(x) = u^{\lambda}(x)\nabla_{\lambda}u^{\mu}(x)$: \begin{equation} a^{\mu}(x) = (\sinh^2\xi\sin^2\vartheta\frac{\partial}{\partial t}\ln\sqrt{1 + f}, \frac{F\sinh\xi\sin\vartheta}{\sqrt{1 + f}}, 0, -\sinh^2\xi\sin^2\vartheta\frac{\partial}{\partial z}\ln\sqrt{1 + f}), \end{equation} where \begin{equation} F = F(t, z; \xi, \vartheta) \equiv \left(\cosh\xi\frac{\partial}{\partial t} + \sinh\xi\cos\vartheta\frac{\partial}{\partial z}\right)\ln\sqrt{1 + f} = \frac{d}{d\tau}\ln\sqrt{1 + f(t - z)}. \end{equation} The inverse of the vierbein $e^a_{\mu}(x)$ in Eq.(14) is given by $e^0_t(x) = 1, e^1_x(x) = \sqrt{1 + f}, e^2_y(x) = \sqrt{1 - f}, e^3_z(x) = 1$. The vierbein transforms a tensor in a general coordinate system $x^{\mu}$ into that in a local inertial frame $x^a$. 
For instance, \begin{equation} q^a(x) = e^a_{\mu}(x)q^{\mu}(x) = (m\cosh\xi, m\sinh\xi\sin\vartheta, 0, m\sinh\xi\cos\vartheta), \end{equation} and similarly, $a^a(x) = e^a_{\mu}(x)a^{\mu}(x)$ yields \begin{equation} a^a(x) = (\sinh^2\xi\sin^2\vartheta\frac{\partial}{\partial t}\ln\sqrt{1 + f}, F\sinh\xi\sin\vartheta, 0, -\sinh^2\xi\sin^2\vartheta\frac{\partial}{\partial z}\ln\sqrt{1 + f}). \end{equation} A straightforward calculation shows that the nonzero components of the spin connection $\omega^a_{\mu b}(x) \equiv e^a_{\lambda}(x)\nabla_{\mu}e^{\lambda}_b(x)$ are $\omega^0_{x1}(x) = \omega^1_{x0}(x) = \sqrt{1 + f}\partial\ln\sqrt{1 + f}/\partial t$, $\omega^1_{x3}(x) = -\omega^3_{x1}(x) = \sqrt{1 + f}\partial\ln\sqrt{1 + f}/\partial z$, $\omega^0_{y2}(x) = \omega^2_{y0}(x) = \sqrt{1 - f}\partial\ln\sqrt{1 - f}/\partial t$, and $\omega^2_{y3}(x) = -\omega^3_{y2}(x) = \sqrt{1 - f}\partial\ln\sqrt{1 - f}/\partial z$.
Suppose at proper time $\tau$ the particle(s) is at $x^{\mu}$. After an infinitesimal proper time $d\tau$, the particle(s) moves to a new local inertial frame at the new point $x'^{\mu} = x^{\mu} + u^{\mu}d\tau$. $q^a(x)$ changes to $q^a(x') = q^a(x) + \delta q^a(x) = \Lambda^a_{\ b}(x)q^b(x)$, where the infinitesimal local Lorentz transformation $\Lambda^a_{\ b}(x) \equiv \delta^a_{\ b} + \lambda^a_{\ b}(x)d\tau$, with $\lambda^a_{\ b}(x) \equiv -[a^a(x)q_b(x) - q^a(x)a_b(x)]/m + \chi^a_{\ b}(x)$ and $\chi^a_{\ b}(x) \equiv -u^{\mu}(x)\omega^a_{\mu b}(x)$. For our case, we have $\lambda^0_{\ 1}(x) = \lambda^1_{\ 0}(x) = \sinh^2\xi\cos\vartheta\sin\vartheta G(t, z; \xi, \vartheta)$, $\lambda^0_{\ 3}(x) = \lambda^3_{\ 0}(x) = -\sinh^2\xi\sin^2\vartheta G(t, z; \xi, \vartheta)$, and $\lambda^1_{\ 3}(x) = -\lambda^3_{\ 1}(x) = -\cosh\xi\sinh\xi\sin\vartheta G(t, z; \xi, \vartheta)$; where $G(t, z; \xi, \vartheta) \equiv (\sinh\xi\cos\vartheta\partial/\partial t + \cosh\xi\partial/\partial z)\ln\sqrt{1 + f(t - z)} = -F(t, z; \xi, \vartheta)$. Corresponding to $\Lambda^a_{\ b}(x)$ is the infinitesimal local Wigner rotation $W^a_{\ b}(x) \equiv \delta^a_{\ b} + \varphi^a_{\ b}(x)d\tau$, where $\varphi^0_{\ 0}(x) = \varphi^0_{\ i}(x) = \varphi^i_{\ 0}(x) = 0$ and $\varphi^i_{\ j}(x) = \lambda^i_{\ j}(x) + [\lambda^i_0(x)k_j - k^i\lambda_{j0}(x)]/(\sqrt{\vec{k}\cdot\vec{k} + m^2} + m)$. Its spin-1/2 representation is $D^{(1/2)}(W(x)) = \sigma^0 + i[\varphi_{23}(x)\sigma^1 + \varphi_{31}(x)\sigma^2 + \varphi_{12}(x)\sigma^3]d\tau/2$, with the identity matrix $\sigma^0$ and the Pauli matrices $\{\sigma^1, \sigma^2, \sigma^3\}$. It follows that $\varphi^1_{\ 3}(x) = -G(t, z; \xi, \vartheta)H(k^a; \xi, \vartheta)$, with \begin{equation} H(k^a; \xi, \vartheta) \equiv \left(1 - \frac{k^1\sin\vartheta + k^3\cos\vartheta}{\sqrt{\vec{k}\cdot\vec{k} + m^2} + m}\tanh\xi\right)\cosh\xi\sinh\xi\sin\vartheta. 
\end{equation} Hence, for a finite proper time interval, $\tau_f - \tau_i$, we have \begin{equation} D^{(1/2)}(W(x_f, x_i)) = \exp\left[-\frac{i}{2}\sigma^2\Omega(k^a; \tau_i, \tau_f, \xi, \vartheta)\right], \end{equation} where \begin{eqnarray} \Omega(k^a; \tau_i, \tau_f, \xi, \vartheta) & \equiv & \int^{\tau_f}_{\tau_i}\varphi^1_{\ 3}(x)d\tau \nonumber \\ & = & H(k^a; \xi, \vartheta)\left[\ln\sqrt{1 + f(t_f - z_f)} - \ln\sqrt{1 + f(t_i - z_i)}\right] \nonumber \\ & \approx & \frac{1}{2}[f(t_f - z_f) - f(t_i - z_i)]H(k^a; \xi, \vartheta). \end{eqnarray}
Consequently, $|\psi\rangle_A$ evolves to \begin{equation}
|\psi'\rangle_A = \int N(k^a)d^3\vec{k}\sum_{\lambda, \lambda'}C(k^a, \lambda)
D^{(1/2)}_{\lambda'\lambda}(W(x_f, x_i))|\Lambda(x_f, x_i)k^a, \lambda'\rangle_A, \end{equation}
and similarly $|\Psi\rangle_{AB}$ to $|\Psi'\rangle_{AB} = \int N(k^a)N(p^b)d^3\vec{k}d^3\vec{p}\sum_{\lambda, \lambda', \sigma, \sigma'}C(k^a, \lambda; p^b, \sigma) \times D^{(1/2)}_{\lambda'\lambda}(W(x_f, x_i))|\Lambda(x_f, x_i)k^a, \lambda'\rangle_A \otimes D^{(1/2)}_{\sigma'\sigma}(W(x_f, x_i))|\Lambda(x_f, x_i)p^b, \sigma'\rangle_B$. We obtain Eqs.(7) and (12) by writing $|\psi'\rangle_A$ and $|\Psi'\rangle_{AB}$ respectively as density matrices, and tracing over the momentum degrees of freedom. This completes what we set out to do.
In summary, we have shown that the spin entropy of a single massive spin-1/2 particle may change under the influence of a passing gravitational wave. Interestingly, this change has a dependence on the shape of the wave [see Eqs.(8) and (22)]. In other words, by determining the entropy change, one could in principle deduce $f$. To measure this change, one could prepare an identical ensemble of many particles in the state $|\psi\rangle$ [Eq.(4)] and subject them to an external force that produces the acceleration in Eq.(16). The observers at each spacetime point then select a subensemble of particles to determine as accurately as possible its spin state. The variation of the spin entropy with proper time can then be determined. We may also consider the same experimental setup for two- or $N$-particle state $|\Psi\rangle$ [Eq.(10) or its generalization, which gives Eq.(13)]. In this case, we can analyze the entanglement properties of the resulting states. Specifically, we have ${\cal N}[\chi'_{AB}(\tau_f)] = |\bar{u}|^2$.
We have to emphasize that the above effect, even though nonzero, is extremely tiny, especially in light of the fact that the height or amplitude $A$ of a gravitational wave may be of the order of $10^{-21}$. Consequently, $|\bar{u}|^2$ would be extremely close to $1$. So, in order to measure such a minute effect, we need to ``amplify'' or ``concentrate'' it. Our preliminary analysis of the 3- to 7-particle states shows that although a passing gravitational wave may have a greater effect on the 3-particle state compared to a 2-particle one, the 4-, 5-, 6-, and 7-particle states are surprisingly robust. Thus, it seems, considering $N$-particle states (with $N \geq 4$) does not help. Here, we turn to another well-known phenomenon in quantum information science, {\em entanglement swapping} \cite{Zukowski}. Briefly, we analyze the negativity of the resulting two-particle state \begin{equation} \Xi^{(4)}_{A_1A_2} \equiv \frac{1}{p_i}{\rm tr}_{B_1B_2}
[(I_{A_1A_2} \otimes |\Psi^i_{Bell}\rangle_{B_1B_2}\langle\Psi^i_{Bell}|)(\chi'_{A_1B_1}(\tau_f) \otimes \chi'_{A_2B_2}(\tau_f))], \end{equation}
where $|\Psi^i_{Bell}\rangle = (\sigma^i \otimes \sigma^0)|\Psi^0_{Bell}\rangle$ ($i = 0, 1, 2, 3$) and $p_i = {\rm tr}[(I_{A_1A_2} \otimes |\Psi^i_{Bell}\rangle_{B_1B_2}\langle\Psi^i_{Bell}|) \times (\chi'_{A_1B_1}(\tau_f) \otimes \chi'_{A_2B_2}(\tau_f))] = 1/4$ is the probability of obtaining outcome $i$ from the Bell basis measurement. Particles $B_1$ and $B_2$ become maximally entangled, but $\Xi^{(4)}_{A_1A_2}$ yields ${\cal N}[\Xi^{(4)}_{A_1A_2}] = |\bar{u}|^4$. This is therefore an amplification of the decoherence effect due to a gravitational wave. We repeat the procedure with $\chi'_{A_jB_j}(\tau_f)$ in Eq.(24) replaced by $\Xi^{(4)}_{A_jA_j}$ to obtain $\Xi^{(8)}_{A_1A_2}$, which has negativity ${\cal N}[\Xi^{(8)}_{A_1A_2}] = |\bar{u}|^8$. It is not difficult to see how one can achieve ${\cal N}[\Xi^{(n)}_{A_1A_2}] = |\bar{u}|^n$, with $n$ the number of particles. Hence, instead of a direct measurement on the spin states, we subject the particles to the above cycles of entanglement swapping, obtaining a smaller number of pairs of particles with negativities that differ appreciably from $1$.
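The amplification claim can be verified numerically by implementing Eq.(24) directly. The sketch below is our own check (outcome $i = 0$; `u2` is a hypothetical, exaggerated value of $|\bar{u}|^2$, since the physical value is extremely close to $1$); iterating the swap squares the negativity each time, giving $|\bar{u}|^4$ and then $|\bar{u}|^8$.

```python
import numpy as np

def chi_prime(u2):
    """Bell-diagonal state chi'_{AB}(tau_f) of Eq. (12), with u2 = |u_bar|^2."""
    a, b = (1 + u2) / 4.0, (1 - u2) / 4.0
    return np.array([[a, 0, 0, a],
                     [0, b, -b, 0],
                     [0, -b, b, 0],
                     [a, 0, 0, a]])

def negativity(rho):
    """N(rho) = max(-2 * sum of negative eigenvalues of rho^{T_A}, 0)."""
    pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    eigs = np.linalg.eigvalsh(pt)
    return max(-2.0 * eigs[eigs < 0].sum(), 0.0)

def swap(rho1, rho2):
    """Eq. (24) with outcome i = 0: project B1B2 of rho1_{A1B1} (x) rho2_{A2B2}
    onto |Psi^0_Bell> and renormalise by the outcome probability p_0 = 1/4."""
    P = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)  # |Psi^0_Bell> on (B1,B2)
    T = np.kron(rho1, rho2).reshape((2,) * 8)  # indices (A1,B1,A2,B2; A1',B1',A2',B2')
    out = np.einsum('abcdefgh,bd,fh->aceg', T, P, P).reshape(4, 4)
    return out / np.trace(out)

u2 = 0.9                     # hypothetical |u_bar|^2
rho = chi_prime(u2)
xi4 = swap(rho, rho)         # Xi^(4): negativity |u_bar|^4
xi8 = swap(xi4, xi4)         # Xi^(8): negativity |u_bar|^8
```

Because $\chi'_{AB}$ is Bell-diagonal, each round of swapping with outcome $i = 0$ returns another state of the same one-parameter family, with the parameter squared; this is exactly the claimed geometric amplification of the decoherence effect.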
In conclusion, we have established the specific conditions under which the spin entropy or negativity of massive spin-$1/2$ particles may change due to a passing gravitational wave. This very small change may be amplified via the above entanglement swapping scheme, and may be measurable. It is therefore, a potentially viable means of gravitational wave detection. More generally, our results demonstrate the exciting possibility of detecting measurable effects due to spacetime curvature using ideas and tools developed in quantum information science. Effects including those due to our expanding universe will be discussed in a longer paper in preparation \cite{ZhiHan}.
\end{document} |
\begin{document}
\title{Approximately multiplicative maps between algebras of bounded operators on Banach spaces}
\begin{abstract} We show that for any separable reflexive Banach space $X$ and a large class of Banach spaces $E$, including those with a subsymmetric shrinking basis but also all spaces $L_p[0,1]$ for $1\le p \le \infty$, every bounded linear map ${\mathcal B}(E)\to {\mathcal B}(X)$ which is approximately multiplicative is necessarily close in the operator norm to some bounded homomorphism ${\mathcal B}(E)\to {\mathcal B}(X)$.
That is, the pair $({\mathcal B}(E),{\mathcal B}(X))$ has the AMNM property in the sense of Johnson (\textit{J.~London Math.\ Soc.} 1988). Previously this was only known for $E=X=\ell_p$ with $1<p<\infty$; even for those cases, we improve on the previous methods and obtain better constants in various estimates. A crucial role in our approach is played by a new result, motivated by cohomological techniques, which establishes AMNM properties relative to an amenable subalgebra; this generalizes a theorem of Johnson (\textit{op cit.}). \end{abstract}
\hrule
\eject
\tableofcontents
\begin{section}{Introduction}
\begin{subsection}{Background context, and the statement of our main theorem} The AMNM property referred to in the abstract was formulated by B. E. Johnson in \cite{BEJ_AMNM2}, and fits into the broader theme of ``Ulam stability'' for normed representations of groups or algebras: see \cite{BOT_ulam,choi_jaust13,Ko,McVi19} for more recent work in a similar direction. The main purpose of the present paper is to extend our knowledge of the AMNM property to a class of Banach algebras where relatively little has been done, namely the algebras consisting of all bounded operators on $E$, for various Banach spaces~$E$. (The more restricted setting of stability for \emph{surjective} homomorphisms has recently been considered by the second author with Tarcsay; see~\cite{HorTar}.)
To state Johnson's original definition, and our own results, we need to set up some notation. For a Banach space $X$ and $r\ge 0$, $\operatorname{\rm ball}\nolimits_r(X)$ denotes $\{ x\in X \colon \norm{x}\le r\}$. Given Banach spaces $E$ and $F$, and $n\in\mathbb N$, we write ${\mathcal L}^n(E,F)$ for the space of bounded $n$-multilinear maps $E\times \dots \times E \to F$. If $n=1$, then we shall usually modify this notation slightly and write ${\mathcal L}(E,F)$.
One exception to this notational convention is that when $n=1$ and $E=F$, we will denote the Banach algebra of all bounded linear operators $E\to E$ by ${\mathcal B}(E)$, to emphasise that this space is being equipped with extra algebraic structure. (We use the notation ${\mathcal L}^n(E,F)$ for the space of bounded, $n$-linear maps in place of ${\mathcal B}^n(E,F)$ to avoid confusion later in the paper; ${\mathcal B}^n$ usually stands for the space of continuous $n$-coboundaries in the context of Hochschild cohomology.)
For Banach algebras ${\mathsf A}$ and ${\mathsf B}$ we write $\operatorname{Mult}({\mathsf A},{\mathsf B})$ for the set of bounded algebra homomorphisms ${\mathsf A}\to {\mathsf B}$ (the zero map is allowed). Then, given $\psi\in {\mathcal L}({\mathsf A},{\mathsf B})$, we have a ``global'' measure of how far $\psi$ is from being a homomorphism; namely, we can consider the distance of $\psi$ from the set $\operatorname{Mult}({\mathsf A},{\mathsf B})$ with respect to the operator norm. Explicitly, \begin{align*} \operatorname{dist}(\psi) := \inf \lbrace \Vert \psi - \phi \Vert \, : \, \phi \in \operatorname{Mult}({\mathsf A},{\mathsf B}) \rbrace. \end{align*} (Note that since $\operatorname{Mult}({\mathsf A},{\mathsf B})$ is closed, $\operatorname{dist}(\psi) = 0$ if and only if $\psi \in \operatorname{Mult}({\mathsf A},{\mathsf B})$.) On the other hand, since a linear map $\psi:{\mathsf A}\to{\mathsf B}$ is a homomorphism if and only if it satisfies the identity $\psi(a_1a_2)=\psi(a_1)\psi(a_2)$ for each $a_1$ and $a_2$ in the closed unit ball of ${\mathsf A}$, we may consider the following ``local'' measure of how far $\psi$ is from being a homomorphism.
\begin{dfn} Given a linear map $\psi:{\mathsf A}\to{\mathsf B}$, the \dt{multiplicative defect} of $\psi$ is \begin{align*} \operatorname{def}(\psi) := \sup \{ \norm{\psi(a_1a_2)-\psi(a_1)\psi(a_2)} \colon a_1,a_2 \in \operatorname{\rm ball}\nolimits_1({\mathsf A})\} \in [0, \infty]. \end{align*} \end{dfn}
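Since the multiplicative defect is defined as a supremum over the unit ball, it is rarely computable in closed form, but it is easy to estimate from below by sampling. The following Python snippet is a purely illustrative aside of ours (not part of this paper's arguments): it estimates the defect of a small perturbation of the identity homomorphism on the $2\times 2$ matrices, where the map $a\mapsto a+\varepsilon\,\mathrm{tr}(a)I$ is an ad hoc toy example chosen for illustration.

```python
import numpy as np

def op_norm(a):
    # spectral (operator) norm of a matrix
    return np.linalg.norm(a, 2)

def defect_estimate(psi, dim=2, samples=2000, seed=0):
    """Monte Carlo lower bound for def(psi), i.e. the supremum of
    ||psi(ab) - psi(a)psi(b)|| over a, b in the closed unit ball."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(samples):
        a = rng.standard_normal((dim, dim))
        b = rng.standard_normal((dim, dim))
        a /= max(1.0, op_norm(a))  # push a into the unit ball
        b /= max(1.0, op_norm(b))
        best = max(best, op_norm(psi(a @ b) - psi(a) @ psi(b)))
    return best

eps = 0.01
identity_map = lambda a: a                                  # a true homomorphism
perturbed = lambda a: a + eps * np.trace(a) * np.eye(2)     # toy perturbation

d0 = defect_estimate(identity_map)   # should be 0
d1 = defect_estimate(perturbed)      # strictly positive, of order eps
print(d0, d1)
```

As expected, the sampled defect of the identity map vanishes, while the perturbed map has a small but nonzero defect, consistent with the heuristic that a small linear perturbation of a homomorphism has small multiplicative defect.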
If $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ and we have some a priori upper bound on $\norm{\psi}$ (say $\norm{\psi}\le 1000$), it is easily checked that $\operatorname{dist}(\psi)$ being small implies $\operatorname{def}(\psi)$ is small. That is: starting with a multiplicative and bounded linear map, adding a linear perturbation with small norm yields a bounded linear map that has small multiplicative defect. Ulam stability is then the phenomenon that, under certain conditions on our algebras ${\mathsf A}$ and ${\mathsf B}$, we can go the other way. The following definition is due to B.~E.~Johnson, see \cite[Definition~1.2]{BEJ_AMNM2}.
\begin{dfn}(AMNM pair)\label{amnmdef} Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras. The pair $({\mathsf A},{\mathsf B})$ is said to have the \dt{AMNM property}, or be an \dt{AMNM pair}, if the following holds:
\begin{quote} For any $\varepsilon > 0$ and $L > 0$ there exists $\delta > 0$ such that for all $\phi \in \operatorname{\rm ball}\nolimits_L {\mathcal L}({\mathsf A},{\mathsf B})$ with $\operatorname{def}(\phi) < \delta$, we have $\operatorname{dist}(\phi) < \varepsilon$. \end{quote} \end{dfn}
Johnson investigated a diverse range of AMNM pairs $({\mathsf A},{\mathsf B})$, in addition to providing some explicit examples of ${\mathsf A}$ and ${\mathsf B}$ which do \emph{not} form an AMNM pair. However, when it came to Banach algebras of the form ${\mathcal B}(E)$, only one infinite-dimensional example was considered in \cite{BEJ_AMNM2}. Namely, Johnson showed (see \cite[Proposition~6.3]{BEJ_AMNM2}) that the pair $({\mathcal B}(\ell_2),{\mathcal B}(\ell_2))$ has the AMNM property, which is striking since one is not making any assumptions about \ensuremath{{\rm w}^*}-\ensuremath{{\rm w}^*} continuity.
Johnson's result was extended from $\ell_2$ to $\ell_p$, for $1<p<\infty$, in the PhD thesis of Howey \cite[Theorem 5.2.1]{Howey}; his proof is essentially identical to Johnson's. In both cases, the argument has a somewhat ``monolithic'' feel, and freely uses special features of $\ell_p$, so that it is not obvious how one might adapt the proof to more general Banach spaces.
Our main theorem extends the Johnson--Howey results to a much wider range of Banach spaces, including the classical spaces $L_p[0,1]$ for $p\in (1,\infty)$, but also many of their complemented subspaces such as $\ell_p(\ell_2)$ or Rosenthal's $X_p$-spaces, and also any reflexive space with a subsymmetric basis. At the same time, we obtain results for pairs $({\mathcal B}(E),{\mathcal B}(X))$ where $E\not\cong X$ and $E$ need not be reflexive. To state our theorem, it will be convenient to make the following definition.
\begin{dfn}\label{d:clone system} Let $E$ be a Banach space. A \dt{clone system} for $E$ is a bounded family $(P_i)_{i\in {\mathbb I}}$ of idempotents in ${\mathcal B}(E)$, such that the operator $P_iP_j$ has finite rank for all $i\neq j$, and $\sup_{i\in{\mathbb I}} d(E,\operatorname{\rm Ran}(P_i)) <\infty$ where $d$ denotes the Banach--Mazur distance. \end{dfn}
\begin{thm}\label{t:headline result}
Let $X$ be any separable, reflexive Banach space. Let $E$ be a Banach space such that both of the following conditions hold: \begin{romnum} \item ${\mathcal K}(E)$, the algebra of compact operators on $E$, is amenable as a Banach algebra; \item $E$ has an uncountable clone system. \end{romnum} Then the pair $({\mathcal B}(E),{\mathcal B}(X))$ has the AMNM property. \end{thm}
Although the hypotheses of Theorem \ref{t:headline result} are rather technical, we will show in the next section that they hold for several classical examples of interest. \end{subsection}
\begin{subsection}{Examples covered by our main theorem}\label{s: examples}
\begin{cor}\label{c:subsymm-shrinking} Let $E$ be a Banach space with a subsymmetric shrinking basis. Then $({\mathcal B}(E),{\mathcal B}(X))$ is an AMNM pair for every reflexive and separable~$X$. \end{cor}
Note that in this corollary, the hypothesis on $E$
is satisfied by $\ell_p$ for all $p\in (1,\infty)$ and $c_0$ (see \cite[Section~9.2]{ak}), and also for several natural families of Orlicz sequence spaces (see \cite[Propositions~4.a.4 and 3.a.3]{LTbook_combined}) and for Lorentz sequence spaces (see \cite[Propositions~4.e.3 and 1.c.12]{LTbook_combined}).
\begin{proof}[Proof of Corollary~\ref{c:subsymm-shrinking}.] By \cite[Theorem 4.2]{GJW_isr} and \cite[Theorem 4.5]{GJW_isr}, ${\mathcal K}(E)$ is amenable. The construction of an uncountable clone system for $E$ is a straightforward consequence of the definition of ``subsymmetric'' and the existence of uncountable almost disjoint families of subsets of $\mathbb N$; given such a family $\mathcal{D}\subset\mathcal{P}(\mathbb N)$ and
a subsymmetric basis $(u_n)_{n\ge 1}$ for~$E$, for each $S\in \mathcal{D}$ define $P_S$ to be the projection $\sum_{n\ge 1} \lambda_n u_n \mapsto \sum_{n\in S} \lambda_n u_n$. For details, see e.g.~the proof of \cite[Proposition 3.5(1)]{HorTar} (although this technique was already well known to specialists in Banach space theory). \end{proof}
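The uncountable almost disjoint families invoked in this construction can be made quite concrete. One standard recipe assigns to each real $\alpha\in(0,1)$ the set of its decimal truncations, read as integers; distinct reals then share only finitely many truncations. The following Python sketch is our own illustration of this recipe (any other encoding of initial segments would do equally well):

```python
import math

def truncation_set(alpha, k=10):
    """First k decimal truncations of alpha in (0,1), read as integers.
    Two distinct reals agree on only finitely many truncations, so the
    family of all such sets is almost disjoint (and uncountable)."""
    return {int(alpha * 10**i) for i in range(1, k + 1)}

# fractional parts of a few irrationals, all lying in (0,1)
alphas = [math.sqrt(2) - 1, math.sqrt(3) - 1, math.sqrt(5) - 2, math.pi - 3]
sets = [truncation_set(a) for a in alphas]

for i in range(len(sets)):
    for j in range(i + 1, len(sets)):
        # pairwise intersections are finite; for these particular reals
        # they are empty, since the reals differ already in the first digit
        assert len(sets[i] & sets[j]) == 0
print("pairwise almost disjoint")
```

Feeding such a family $\mathcal{D}$ into the projections $P_S$ above produces the required uncountable clone system.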
The construction of an uncountable clone system in Corollary \ref{c:subsymm-shrinking} only used the fact that $E$ possessed a subsymmetric basis; the shrinking condition was needed to invoke results from \cite{GJW_isr} on amenability of ${\mathcal K}(E)$. On the other hand, it is well known that ${\mathcal K}(\ell_1)$ is amenable: this is a special case of \cite[Theorem 4.7]{GJW_isr}. We may therefore run the same argument as before to obtain an extra example.
\begin{cor}\label{c: ell_1 amnm} $({\mathcal B}(\ell_1),{\mathcal B}(X))$ is an AMNM pair for every reflexive and separable $X$. \end{cor}
The spaces $L_p[0,1]$ do not have a subsymmetric basis unless $p=2$; see \textit{e.g.}\ \cite[Theorem~21.2, Chapter~II, p.~568]{Sing}. In fact, $L_1[0,1]$ does not even have an unconditional basis; see \textit{e.g.}\ \cite[Theorem~6.3.3]{ak}.
Thus, the next corollary shows that Corollaries \ref{c:subsymm-shrinking} and \ref{c: ell_1 amnm} are far from describing the full extent of the spaces covered by Theorem \ref{t:headline result}.
\begin{cor} Let $p\in [1,\infty]$. Then $({\mathcal B}(L_p[0,1]),{\mathcal B}(X))$ is an AMNM pair for every reflexive and separable $X$. \end{cor}
\begin{proof} By \cite[Theorem 4.7]{GJW_isr} ${\mathcal K}(L_p[0,1])$ is amenable. For $1\le p <\infty$, an uncountable clone system for $L_p[0,1]$ is given by the construction in \cite[Proposition 3.5]{HorTar}. While that construction does not work for $p=\infty$, we recall that by a celebrated application of Pe\l czy\'nski's decomposition method $L_\infty[0,1] \cong \ell_\infty$ as Banach spaces. Then it is simple to construct an uncountable clone system for $\ell_\infty$ using an uncountable family of almost disjoint subsets of $\mathbb N$, as in previous proofs. \end{proof}
For our final corollary, we rely on recent work of Johnson--Phillips--Schechtman \cite{JPS_SHAI}, which we learned of after the initial work was done on this paper. For details we refer to \cite{ros} and \cite{JPS_SHAI}.
\begin{cor}\label{c:JPS(hash)} Let $p\in (1,2)\cup(2,\infty)$. Then $({\mathcal B}(E),{\mathcal B}(X))$ is an AMNM pair for every reflexive and separable $X$, whenever $E$ is any of the following Banach spaces: \begin{romnum} \item $\ell_p\oplus\ell_2$; \item $\ell_p(\ell_2) \equiv \ell_p(\mathbb N; \ell_2)$; \item $\overbrace{X_p \otimes_p \dots \otimes_p X_p}^{n}$ for some $n\in\mathbb N$, where $X_p$ denotes Rosenthal's $X_p$-space and $\otimes_p$ denotes the tensor product for closed subspaces of $L_p[0,1]$. \end{romnum} \end{cor}
\begin{proof} All of the listed choices for $E$ are complemented subspaces of $L_p[0,1]$, and hence are $\mathscr{L}_p$-spaces in the sense of Lindenstrauss--Pe\l czy\'nski by \cite[Theorem~III]{lr}. Thus ${\mathcal K}(E)$ is amenable by \cite[Theorem 6.4]{GJW_isr}, so it only remains to show that $E$ has an uncountable clone system.
In \cite[Definitions~1.2 and 2.1]{JPS_SHAI} the notion of an unconditional finite dimensional Schauder decomposition (UFDD) with a so-called \dt{property $(\sharp)$} is introduced. We do not give the precise definition here, but it should be clear from the arguments below. It follows from Propositions~2.4 and 2.5 and the paragraph after Definition~2.1 in \cite{JPS_SHAI} that all of the listed choices for $E$ have a UFDD with $(\sharp)$ with some constant $K>0$, in the sense of \cite[Definition~2.1]{JPS_SHAI}.
We now show that whenever $E$ is a Banach space with a UFDD that has property $(\sharp)$ with some constant $K>0$, then $E$ has an uncountable clone system.
Take a UFDD $(E_n)$ with property $(\sharp)$ with some constant $K >0$. By taking an uncountable almost disjoint family ${\mathcal D}$ on $\mathbb N$, we obtain that $E_S:=\overline{\spanning}(E_n \colon n \in S)$ is $K$-isomorphic to $E$ for each $S \in {\mathcal D}$. Hence $\sup_{S \in {\mathcal D}} d(E,E_S) \le K$.
As outlined on page~2 in \cite{JPS_SHAI}, for every $B \subseteq \mathbb N$ there is an idempotent $P_B \in {\mathcal B}(E)$ such that $\operatorname{\rm Ran}(P_B) = \overline{\spanning}(E_n \colon n \in B)$.
Moreover, there is a $C >0$ (called the \dt{suppression constant} in \cite{JPS_SHAI}) such that $\sup_{B \subseteq \mathbb N} \norm{ P_{B} } \le C$. So $P_S \in {\mathcal B}(E)$ is an idempotent with $\operatorname{\rm Ran}(P_S)=E_S$ and $\norm{ P_S } \le C$ for each $S \in {\mathcal D}$. Also, $\operatorname{\rm Ran}(P_S P_{S'}) = \overline{\spanning}(E_n \colon n \in S \cap S')$ is finite-dimensional, whenever $S, S' \in {\mathcal D}$ are distinct.
Thus $E$ has an uncountable clone system, as required. \end{proof}
We hope that this selection of examples, while not exhaustive, shows that one can go far beyond the cases $E=X=\ell_p$ ($1<p<\infty$) studied by Johnson and Howey. Even for those special cases, our proof of Theorem \ref{t:headline result} makes several technical improvements over their approach: we provide an argument with clearer structure, and we obtain better constants, which in principle could be made explicit.
\begin{rem}\label{r: tsirelson} One can show that the Tsirelson space $T$ (as constructed by Figiel and Johnson~\cite{FJ}) has an uncountable clone system. This may be folklore, but we include a proof in an appendix for sake of completeness (see Proposition \ref{p:tsirelson}). On the other hand, Blanco and Gr{\o}nb{\ae}k proved that ${\mathcal K}(T)$ is \emph{not} amenable, see \cite[Corollary~5.8]{bg}, and so Theorem \ref{t:headline result} cannot be applied to ${\mathcal B}(T)$. It is an open problem whether the pair $({\mathcal B}(T),{\mathcal B}(T))$ has the AMNM property, and we believe this would be an interesting case to study further. \end{rem}
\paragraph{Note added in proof.} Let $K$ be an uncountable, compact metric space. The second-named author has recently shown that there exists an uncountable clone system in $C(K)$; details of this result will appear elsewhere. Since ${\mathcal K}(C(K))$ is amenable by \cite[Theorem~4.7]{GJW_isr}, Theorem \ref{t:headline result} shows that the pair $({\mathcal B}(C(K)), {\mathcal B}(X))$ has the AMNM property for every reflexive and separable~$X$.
\end{subsection}
\begin{subsection}{Comments on the proof of our main theorem, and other results of interest}
Theorem \ref{t:headline result} will follow by combining several other technical results. In this section we wish to highlight two of them, which correspond to the two conditions in the theorem. Proofs will be given in later sections.
The following definition will be used repeatedly throughout our arguments.
\begin{dfn}[Self-modular maps with respect to a subalgebra] \label{d:self-modular} Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras and let ${\mathsf D}$ be a closed subalgebra of ${\mathsf A}$. We denote by $\selfhom{{\mathsf D}}({\mathsf A},{\mathsf B})$ the set of all bounded linear maps $\theta:{\mathsf A}\to{\mathsf B}$ which satisfy \[ \theta(ar)=\theta(a)\theta(r)\text{ and }\theta(ra)=\theta(r)\theta(a)\quad\text{for all $a\in {\mathsf A}$ and all $r\in{\mathsf D}$.} \] \end{dfn}
Our main technical innovation is the following theorem, which provides a significant generalization of the main result in \cite{BEJ_AMNM2}.
\begin{thm}[AMNM with respect to an amenable subalgebra]\label{t:main innovation} Let ${\mathsf A}$ be a Banach algebra with a closed amenable subalgebra ${\mathsf D}_0$, and let ${\mathsf B}$ be a unital dual Banach algebra with an isometric predual. Fix some $L\ge 1$. Then there exists a constant $C'\ge 1$ (possibly depending on $L$ and ${\mathsf D}_0$) such that the following holds: whenever $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ satisfies $\norm{\psi}\le L$ and $C'\operatorname{def}(\psi)\le 1$, there exists $\theta\in \selfhom{{\mathsf D}_0}({\mathsf A},{\mathsf B})$ with $\norm{\theta-\psi} \le C'\operatorname{def}(\psi)$. \end{thm}
The case where ${\mathsf A}$ itself is amenable is \cite[Theorem 3.1]{BEJ_AMNM2}, but in order to obtain our generalization, it does not suffice to bootstrap from the earlier result. Instead we rework the arguments in Johnson's proof, introducing a version of the multiplicative defect relative to a closed subalgebra, and putting certain calculations from that proof in the framework of ``approximate cobounding'' for a modified version of the Hochschild cochain complex. This will be treated in Sections \ref{s:using improving} and \ref{s:building improving}.
We note that in the setting of Ulam stability for bounded representations of discrete groups on Hilbert space, a result analogous to Theorem \ref{t:main innovation} was given in \cite[Theorem 3.2]{BOT_ulam}; the proof makes use of features particular to groups and to operators on Hilbert space.
Our other main ingredient in the proof of Theorem \ref{t:headline result} is the following proposition, whose proof will be given in Section \ref{ss:prove MvN}. It can be viewed as a ``perturbed'' version of \cite[Proposition~3.8]{HorTar} (see also \cite[Corollary~6.16]{bp}), and it generalizes an argument of Johnson (from the proof of \cite[Proposition 6.3]{BEJ_AMNM2}) in the case $X=E=\ell_2$. Moreover, we obtain better constants than those obtained by just repeating the steps in \cite{BEJ_AMNM2}; see Remark \ref{r:finesse} for further details.
\begin{prop}\label{p:MvN trick} Let $E$ be a Banach space with an uncountable clone system. There exists a constant $c_E\in (0,1]$ such that the following holds: whenever $X$ is a separable Banach space, and $\psi: {\mathcal B}(E)/{\mathcal K}(E) \to {\mathcal B}(X)$ is bounded linear with $\operatorname{def}(\psi)\le c_E$, we have $\norm{\psi} \le \frac{3}{2}\operatorname{def}(\psi)$. \end{prop}
The key point here is that the constant $c_E$ does not depend on the chosen $\psi$, and so $\operatorname{def}(\psi)$ could be much smaller than $c_E$.
Note that in the conclusion of Proposition \ref{p:MvN trick}, we obtain the constant $3/2$ rather than some constant depending on the Banach algebras ${\mathcal B}(E)$ and ${\mathcal B}(X)$.
Obtaining a universal constant (such as $3/2$) is not essential to the proof of Theorem \ref{t:headline result} but it makes some of the epsilon-delta chasing significantly simpler.
\end{subsection} \end{section}
\begin{section}{Definitions and preliminary results}
\begin{subsection}{Basic properties of the multiplicative defect} First we have a general lemma. (A similar estimate is given without proof in \cite[Proposition~1.1]{BEJ_AMNM2}.)
\begin{lem}\label{l:defect of perturbed} Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras and let $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$. Suppose that $\theta\in {\mathcal L}({\mathsf A},{\mathsf B})$ satisfies $\norm{\theta-\psi}\le 1$. Then \[ \operatorname{def}(\theta)\le \operatorname{def}(\psi) + 2\norm{\theta-\psi} (1+\norm{\psi}). \] \end{lem}
\begin{proof} Writing $\theta=\psi+\gamma$, for each $a$ and $b$ in ${\mathsf A}$ we have \[ \theta(ab)-\theta(a)\theta(b) = \psi(ab) + \gamma(ab) - \psi(a)\psi(b) - \psi(a)\gamma(b)-\gamma(a)\psi(b)-\gamma(a)\gamma(b). \] Hence $\operatorname{def}(\theta) \le \operatorname{def}(\psi) + \norm{\gamma} + 2 \norm{\gamma} \norm{\psi} + \norm{\gamma}^2 $. Since we are assuming $\norm{\gamma}\le 1$, the desired inequality follows. \end{proof}
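The final absorption step in this proof, namely that $\norm{\gamma}+2\norm{\gamma}\norm{\psi}+\norm{\gamma}^2 \le 2\norm{\gamma}(1+\norm{\psi})$ whenever $\norm{\gamma}\le 1$, reduces to the inequality $\norm{\gamma}^2\le\norm{\gamma}$. The following throwaway numerical sweep (our own check, external to the text) confirms it over a grid of values:

```python
import numpy as np

# g stands for ||gamma|| in [0,1], s for ||psi|| >= 0; the proof's bound
# g + 2*g*s + g^2 should never exceed the lemma's bound 2*g*(1+s).
gs = np.linspace(0.0, 1.0, 101)
ss = np.linspace(0.0, 10.0, 101)
max_gap = max((g + 2 * g * s + g**2) - 2 * g * (1 + s)
              for g in gs for s in ss)
print("largest violation:", max_gap)   # <= 0, with equality at g = 0 or g = 1
```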
In the rest of this section we collect some general results concerning approximately multiplicative maps between Banach algebras, which do not seem to be spelled out in \cite{BEJ_AMNM2}. These may be useful for future work on the AMNM property for other kinds of Banach algebras.
It will be convenient to use the following terminology: given $\eta\in [0,\infty)$, we say that a linear map $\psi:{\mathsf A}\to{\mathsf B}$ is \dt{$\eta$-multiplicative} if $\operatorname{def}(\psi)\le\eta$; equivalently, if \[ \norm{\psi(ab)-\psi(a)\psi(b)} \le \eta\norm{a}\norm{b} \qquad\text{for all $a,b\in {\mathsf A}$.} \] The point is that often we are not concerned with the precise value of the multiplicative defect, but merely with whether it is controlled by some (small) constant or parameter.
\begin{lem}\label{l:absorption trick} Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras and let $\eta\ge 0$. Let $\psi:{\mathsf A}\to {\mathsf B}$ be linear and $\eta$-multiplicative. \begin{romnum} \item\label{li:left unit}
Suppose $ab=b$ with $\norm{\psi(a)}\le 1/3$. Then $\norm{\psi(b)}\le \frac{3}{2} \eta\norm{a}\norm{b}$. \item\label{li:right unit}
Suppose $bc=b$ with $\norm{\psi(c)}\le 1/3$. Then $\norm{\psi(b)}\le \frac{3}{2} \eta\norm{b}\norm{c}$. \end{romnum} \end{lem}
\begin{proof} We prove \ref{li:left unit}; the proof for \ref{li:right unit} is identical with left and right swapped.
Since $ab=b$, $\norm{\psi(b)-\psi(a)\psi(b)} \le \eta\norm{a}\norm{b}$. Hence \[
\norm{\psi(b)} \le \eta\norm{a}\norm{b} + \norm{\psi(a)\psi(b)} \le \eta\norm{a}\norm{b} + \frac{1}{3}\norm{\psi(b)}. \] Rearranging we obtain the desired upper bound on $\norm{\psi(b)}$. \end{proof}
The following corollary is immediate.
\begin{cor}\label{c:small on identity} Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras with ${\mathsf A}$ unital. Let $\psi:{\mathsf A}\to {\mathsf B}$ be linear and $\eta$-multiplicative. If $\norm{\psi(1_{\mathsf A})}\le 1/3$ then $\psi$ is bounded with $\norm{\psi}\le 3\eta/2$. \end{cor}
\begin{rem} As observed in Section 1 of \cite{BEJ_AMNM2}, for a general linear $T:{\mathsf A}\to {\mathsf B}$ one can have $\operatorname{def}(T)$ small while $T$ has large norm, even when ${\mathsf A}=\mathbb C$. But examination of Example 1.5 in that paper shows that $T(1_{\mathsf A})$ is large in that example. Corollary \ref{c:small on identity} shows that this is the only obstruction. \end{rem}
The next result will be applied to show that if $p$ is an idempotent in a unital Banach algebra ${\mathsf A}$ and $p$ is Murray--von Neumann equivalent to $1_{\mathsf A}$, then $\psi(p)$ being small implies $\psi(1_{\mathsf A})$ is small, provided that $\operatorname{def}(\psi)$ is small. Normally, in perturbing exact algebraic arguments, one has to impose an {\it a~priori} upper bound on norms: informally, large times zero equals zero, but large times small might not be small. It is therefore somewhat surprising that in our result, we do not need to impose such a bound on $\norm{\psi}$.
\begin{prop}\label{p:equivalent proj} Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras. Let $u,v\in {\mathsf A}$ be such that $uv$ and $vu$ are idempotents. Let $\psi:{\mathsf A}\to {\mathsf B}$ be linear and $\eta$-multiplicative, for some $\eta$ satisfying $0\le \eta\norm{u}^3\norm{v}^3 \le 2/9$. If $\norm{\psi(uv)}\le 1/3$ then $\norm{\psi(vu)}\le 1/3$. \end{prop}
\begin{proof} If $vu=0$ then $\psi(vu)=0$ so there is nothing to prove. Hence we assume $vu\neq 0$; since $vu$ is an idempotent, $1\le \norm{vu}\le\norm{v}\norm{u}$.
Since $uv$ is an idempotent, $uvu=uv\cdot uvu$ and $vuv=vuv\cdot uv$. Applying Lemma \ref{l:absorption trick} gives \[ \norm{\psi(uvu)} \le \frac{3}{2} \eta \norm{uv} \norm{uvu} \quad\text{and}\quad \norm{\psi(vuv)} \le \frac{3}{2} \eta \norm{vuv} \norm{uv} \] and so \[ \norm{\psi(uvu)\psi(vuv)}
\le \left(\frac{3}{2} \eta\right)^2 \norm{u}^5\norm{v}^5
\le \left(\frac{3}{2} \eta\right)^2 \norm{u}^6\norm{v}^6
\le \left(\frac{3}{2} \right)^2 \left(\frac{2}{9}\right)^2 = \frac{1}{9} \,. \] But since $vu$ is an idempotent, $vuv\cdot uvu= vu$. Hence \[ \norm{\psi(vu)-\psi(vuv)\psi(uvu)} \le \eta\norm{vuv}\norm{uvu} \le \eta\norm{u}^3\norm{v}^3 \le \frac{2}{9} \] and so $\norm{\psi(vu)} \le \frac{2}{9} + \norm{\psi(vuv)\psi(uvu)} \le \frac{1}{3}$. \end{proof}
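The numerical bookkeeping in the proof above is tight enough to deserve a sanity check. Writing $t=\eta\norm{u}^3\norm{v}^3\le 2/9$, the two estimates used are $(\tfrac{3}{2}t)^2\le\tfrac{1}{9}$ and $t+(\tfrac{3}{2}t)^2\le\tfrac{1}{3}$. A quick sweep (our own verification, outside the paper's argument) confirms both over the allowed range of $t$:

```python
import numpy as np

ts = np.linspace(0.0, 2.0 / 9.0, 1001)
# bound on ||psi(uvu) psi(vuv)|| from the two absorption estimates
worst_cross = max((1.5 * t) ** 2 for t in ts)
# resulting bound on ||psi(vu)||: t + (3t/2)^2
worst_final = max(t + (1.5 * t) ** 2 for t in ts)
print(worst_cross, worst_final)   # attain 1/9 and 1/3 exactly at t = 2/9
```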
\begin{rem} The choice of $\tfrac{1}{3}$ is somewhat arbitrary, and the reader may wonder why we did not attempt to prove sharper inequalities. In fact, it follows automatically from Corollary \ref{c:norm-dichotomy} below that if $\psi(uv)$ is ``moderately small'' then $\psi(vu)$ will be ``very small''. However, this refinement is not needed for the proofs of our main results. \end{rem}
\end{subsection}
\begin{subsection}{Dual Banach algebras}
There are various equivalent formulations in the literature of the notion of a dual Banach algebra. We follow the definition in \cite[Section 1]{Daws_DBArep}, although our terminology is slightly different and is influenced by \cite[Section 2]{DawsPhamWhite}.
\begin{dfn}\label{d:DBA} Let ${\mathsf B}$ be a Banach algebra and let ${\mathsf V}$ be a Banach space. We say that ${\mathsf B}$ is a \dt{dual Banach algebra} with \dt{isometric predual}~${\mathsf V}$, if there is an isometric isomorphism of Banach spaces $j:{\mathsf B}\to{\mathsf V}^*$ such that multiplication ${\mathsf B}\times{\mathsf B}\to{\mathsf B}$ is separately $\sigma({\mathsf B},{\mathsf V})$-continuous. \end{dfn}
Strictly speaking, in this definition, the choice of isometric isomorphism $j:{\mathsf B}\to{\mathsf V}^*$ should be part of the data. However, in most examples that occur in practice, it is clear from context which map $j$ is being used. Moreover, as discussed in \cite[Section 2]{DawsPhamWhite}: \begin{itemize} \item the ``dual Banach algebra structure'' induced on ${\mathsf B}$ only depends on the image of the isometry $j^*\kappa:{\mathsf V} \to {\mathsf B}^*$, where $\kappa$ is the canonical embedding of ${\mathsf V}$ in its bidual; \item the condition that multiplication in ${\mathsf B}$ be separately $\sigma({\mathsf B},{\mathsf V})$-continuous is equivalent to requiring $j^*\kappa({\mathsf V})$ to be a sub-${\mathsf B}$-bimodule of ${\mathsf B}^*$. \end{itemize} This latter condition is often easier to check in practice.
If the choice of isometric predual for ${\mathsf B}$ is not important, or is clear from context, then we will usually just refer to the \ensuremath{{\rm w}^*}-topology on ${\mathsf B}$ without mentioning the particular predual.
\begin{eg} The following Banach algebras are dual Banach algebras with an isometric predual. \begin{itemize} \item[--] $M(G)$ where $G$ is a locally compact group, with the isometric predual being $C_0(G)$; \item[--] any von Neumann algebra ${\mathsf N}$, with the isometric predual being the space of normal linear functionals on ${\mathsf N}$; \item[--] ${\mathcal B}(X)$ for any reflexive Banach space $X$, with the isometric predual being the projective tensor product $X^*\mathbin{\widehat{\otimes}} X$. \end{itemize} \end{eg}
\begin{rem} It was shown by Daws \cite[Theorem 3.5 and Corollary 3.8]{Daws_DBArep} that the last of these examples is in some sense a universal one: given any dual Banach algebra ${\mathsf B}$ with an isometric predual, there exists a reflexive Banach space $X$ and an isometric, \ensuremath{{\rm w}^*}-\ensuremath{{\rm w}^*}-continuous algebra homomorphism ${\mathsf B}\to{\mathcal B}(X)$. \end{rem}
Throughout the paper, we will work with \emph{isometric} preduals, as they suffice to cover all our applications. Our methods would also work for isomorphic (possibly non-isometric) preduals, although one would need to be much more careful in the estimates when keeping track of the constants. For instance, if the isomorphism $j:{\mathsf B}\to {\mathsf V}^*$ is not assumed to be isometric, and we wish to take $\sigma({\mathsf B},{\mathsf V})$-cluster points of a net $(b_i)$ in $\operatorname{\rm ball}\nolimits_1({\mathsf B})$, then it is not clear why $j^{-1}(\ensuremath{{\rm w}^*}\lim_i j(b_i))$ should have norm $\le 1$.
\end{subsection}
\begin{subsection}{A sharper dichotomy result} \label{ss:sharper dichotomy} This section is not required for the proof of our main result, but it is included since the proofs are elementary and since it may be useful in future work. The following lemma is inspired by similar observations/calculations in \cite[Section~3.1]{choi_jaust13}, but we are able to give a simpler proof.
\begin{lem}\label{l:kicsi-nagy} Let $x \in [0,\infty)$ and suppose that $x\le x^2 + c$ for some $c\in [0,2/9]$. Then \[ \min(x, 1-x) \le \frac{3c}{2} \le \frac{1}{3} \;. \] \end{lem}
\begin{proof} By comparing the graphs of the functions $f(u)=u$ and $g(u)=u^2+c$ for $u\ge 0$, which cross in exactly two points, we see that $x\in [0, u_1] \cup [u_2,\infty)$, where $0\le u_1< u_2\le 1$ are the solutions of $u=u^2+c$. Explicitly \[ u_1 = \frac{1}{2} (1- \sqrt{1-4c}) \quad,\quad u_2 = \frac{1}{2} (1+ \sqrt{1-4c}) =1-u_1\;. \]
It therefore suffices to prove that $u_1 \le 3c/2$. This is equivalent to proving that $1-3c \le \sqrt{1-4c}$, which (since both sides are non-negative) is equivalent to proving that $(1-3c)^2\le 1-4c$. Since $0\le c\le 2/9$, we have $9c^2\le 2c$, and therefore $1-6c+9c^2 \le 1-4c$ as required. \end{proof}
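As an independent numerical check on the lemma (our own, purely illustrative): for $c\in[0,2/9]$ the smaller root $u_1=\tfrac{1}{2}(1-\sqrt{1-4c})$ of $u=u^2+c$ should satisfy $u_1\le 3c/2\le 1/3$, whence any $x$ with $x\le x^2+c$ has $\min(x,1-x)\le 3c/2$.

```python
import math

cs = [(2.0 / 9.0) * k / 1000.0 for k in range(1001)]
# smaller root of u = u^2 + c, compared against the claimed bound 3c/2
gaps = [0.5 * (1.0 - math.sqrt(1.0 - 4.0 * c)) - 1.5 * c for c in cs]
print("largest gap:", max(gaps))   # <= 0 up to rounding; equality at c = 2/9
```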
\begin{cor}[A norm dichotomy]\label{c:norm-dichotomy} Let ${\mathsf A}$, ${\mathsf B}$ be Banach algebras and let $p$ be an idempotent in ${\mathsf A}$. Let $\delta$ satisfy $0\le\delta\norm{p}^2 \le \tfrac{2}{9}$, and suppose $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ is $\delta$-multiplicative. Then either $\norm{\psi(p)}\le \frac{3}{2}\norm{p}^2\delta \le \tfrac{1}{3}$, or $\norm{\psi(p)}\ge 1-\tfrac{3}{2}\norm{p}^2\delta \ge \tfrac{2}{3}$. \end{cor}
The point of this result is that we do not need {\it a priori} control on $\norm{\psi}$ to choose how small $\delta$ must be; nor do we need any holomorphic functional calculus for the codomain~${\mathsf B}$.
\begin{proof} Since $p^2=p$, we have $\norm{\psi(p)} \le \norm{\psi(p)-\psi(p)^2}+ \norm{\psi(p)}^2 \le \delta\norm{p}^2 + \norm{\psi(p)}^2$. Now applying Lemma \ref{l:kicsi-nagy} completes the proof. \end{proof} \end{subsection} \end{section}
\begin{section}{Towards a proof of the main theorem}
\begin{subsection}{Self-modular maps relative to an ideal} Throughout this section, ${\mathsf B}$ is a dual Banach algebra with an isometric predual (Definition~\ref{d:DBA}). We denote \ensuremath{{\rm w}^*}-limits in ${\mathsf B}$ by $\lim\nolimits^\sigma$.
\begin{prop}[Decomposition relative to an ideal]\label{p:decompose} Let ${\mathsf B}$ be a dual Banach algebra with an isometric predual. Let ${\mathsf A}$ be a Banach algebra and ${\mathsf J}$ be a closed ideal in ${\mathsf A}$ with a b.a.i. Then each $\theta\in\selfhom{{\mathsf J}}({\mathsf A},{\mathsf B})$ can be written as $\theta=\phi+\theta_s$, where $\phi:{\mathsf A}\to{\mathsf B}$ is a bounded homomorphism, ${\theta_s\vert}_{\mathsf J}=0$, and $\operatorname{def}(\theta_s)=\operatorname{def}(\theta)$. \end{prop}
\begin{proof} Let ${\mathsf B}_0$ denote the \ensuremath{{\rm w}^*}-closure of $\theta({\mathsf J})$ inside~${\mathsf B}$. Since ${\mathsf J}$ is an ideal and multiplication in ${\mathsf B}$ is separately \wstar-\wstar-continuous, the self-modular property of $\theta$ implies that \begin{equation}\label{eq:1} \theta(a){\mathsf B}_0\subseteq {\mathsf B}_0 \quad\text{and}\quad {\mathsf B}_0\theta(a)\subseteq {\mathsf B}_0\quad\text{for all $a\in {\mathsf A}$.} \end{equation}
If $a_1,a_2\in {\mathsf A}$ and $x\in {\mathsf J}$, then repeated use of the self-modularity property yields \begin{equation}\label{eq:2} \theta(x)\theta(a_1a_2) = \theta(xa_1a_2) = \theta(xa_1)\theta(a_2) = \theta(x)\theta(a_1)\theta(a_2); \end{equation} hence, by taking \ensuremath{{\rm w}^*}-limits in \eqref{eq:2}, we have \begin{equation}\label{eq:3} b \theta(a_1a_2)= b\theta(a_1)\theta(a_2) \qquad\text{for all $a_1,a_2\in {\mathsf A}$ and all $b\in {\mathsf B}_0$.} \end{equation}
Now let $(e_i)$ be a b.a.i.\ in ${\mathsf J}$. Passing to a subnet, we may assume that $\theta(e_i)$ \ensuremath{{\rm w}^*}-converges in~${\mathsf B}$ to some $p\in {\mathsf B}_0$. Then for any $x\in {\mathsf J}$, \begin{equation}\label{eq:4} \begin{aligned} \theta(x)=\lim_i \theta(e_ix) & = \lim_i\theta(e_i)\theta(x) \\ & = \lim\nolimits^\sigma_i \theta(e_i)\theta(x) = \left(\lim\nolimits^\sigma_i\theta(e_i)\right)\theta(x) = p\theta(x), \end{aligned} \end{equation} and similarly $\theta(x)=\theta(x)p$. Hence, by another application of \wstar-\wstar-continuity, \begin{equation}\label{eq:5} pb=b=bp \qquad\text{for all $b\in {\mathsf B}_0$.} \end{equation} (In particular, $p$ is idempotent, although we do not use this explicitly in what follows.)
For each $a\in {\mathsf A}$, \eqref{eq:1} implies that $\theta(a)p\in {\mathsf B}_0$ and $p\theta(a)\in {\mathsf B}_0$. Hence by \eqref{eq:5} \begin{equation}\label{eq:6} p\theta(a)p =\theta(a)p\quad\text{and}\quad p\theta(a)=p\theta(a)p \quad\text{for all $a\in {\mathsf A}$.} \end{equation}
Now define $\phi$ by putting $\phi(a):= p\theta(a)$. Combining \eqref{eq:3} and \eqref{eq:6}, for all $a_1,a_2\in {\mathsf A}$ we have \begin{equation}\label{eq:7} \phi(a_1a_2)=p\theta(a_1)\theta(a_2)=p\theta(a_1)p\theta(a_2) = \phi(a_1)\phi(a_2), \end{equation} and thus $\phi$ is multiplicative.
Put $\theta_s(a) := \theta(a)-p\theta(a)$. Clearly $\phi+\theta_s=\theta$, and \eqref{eq:4} implies that $\theta_s(x)=0$ for all $x\in {\mathsf J}$.
Finally, note that by \eqref{eq:6}, $\theta_s(a_1)p=0$ for all $a_1\in{\mathsf A}$. Hence, for all $a_1,a_2\in {\mathsf A}$, \begin{equation}\label{eq:8} \begin{aligned} \theta_s(a_1)\theta_s(a_2) = \theta_s(a_1)\theta(a_2) & = \theta(a_1)\theta(a_2)-p\theta(a_1)\theta(a_2) \\ & = \theta(a_1)\theta(a_2)-p\theta(a_1a_2), \end{aligned} \end{equation} where the last equality follows from \eqref{eq:3}. Therefore \begin{equation} \theta_s(a_1a_2) -\theta_s(a_1)\theta_s(a_2)= \theta(a_1a_2)-\theta(a_1)\theta(a_2), \end{equation} and we conclude that $\operatorname{def}(\theta_s)=\operatorname{def}(\theta)$. \end{proof}
\begin{rem} If the b.a.i.\ in ${\mathsf J}$ has norm $\le M$, then the functions $\phi$ and $\theta_s$ in this result can be taken to satisfy $\norm{\phi}\le M\norm{\theta}$ and $\norm{\theta_s}\le (1+M)\norm{\theta}$. However, we will not need these bounds in the applications of Proposition \ref{p:decompose}. \end{rem}
\end{subsection}
\begin{subsection}{The proof of Proposition \ref{p:MvN trick}} \label{ss:prove MvN} In this section we prove Proposition \ref{p:MvN trick}. For convenience, we repeat the statement: \begin{quote} {\itshape Let $E$ be a Banach space with an uncountable clone system. There exists a constant $c_E\in (0,1]$ such that the following holds: whenever $X$ is a separable Banach space, and $\psi: {\mathcal B}(E)/{\mathcal K}(E) \to {\mathcal B}(X)$ is bounded linear with $\operatorname{def}(\psi)\le c_E$, we have $\norm{\psi} \le \frac{3}{2}\operatorname{def}(\psi)$. } \end{quote}
We start by shifting perspective slightly in the definition of a clone system. It is well known (see \textit{e.g.}\ \cite[Lemma~1.4]{l03} for a proof) that an idempotent $P\in {\mathcal B}(E)$ satisfies $\operatorname{\rm Ran}(P)\cong E$ if and only if $P$ is Murray--von Neumann equivalent to $I_E$. We state a quantitative version in the following lemma, whose proof is left to the reader.
\begin{lem}\label{l:shift POV} Let $E$ be a Banach space and let $P\in{\mathcal B}(E)$ be an idempotent. \begin{romnum} \item If $\operatorname{\rm Ran}(P)\cong E$, then for every $\varepsilon >0$ there exist $U,V\in {\mathcal B}(E)$ such that $P=UV$, $I_E=VU$ and $\norm{U}\norm{V} \le (d(E,\operatorname{\rm Ran}(P))+ \varepsilon) \norm{P}$. \item If $U,V\in{\mathcal B}(E)$ are such that $I_E=VU$ and $UV=P$, then $\operatorname{\rm Ran}(U)=\operatorname{\rm Ran}(P)$ and ${V\vert}_{\operatorname{\rm Ran}(P)}$ is an isomorphism from $\operatorname{\rm Ran}(P)$ onto $E$. Hence, $d(E,\operatorname{\rm Ran}(P))\le\norm{U}\norm{V}$ (and clearly $\norm{P}\le \norm{U}\norm{V}$). \end{romnum} \end{lem}
We recall that idempotents $p,q$ in a ring are said to be \dt{orthogonal} if $pq=0=qp$.
\begin{lem}\label{l:separation lemma} Let ${\mathsf Q}$ be a Banach algebra containing an uncountable family $\Omega$ of pairwise orthogonal idempotents, and suppose $\sup_{p\in\Omega} \norm{p} \le L$ for some $L\ge 1$.
Let $X$ be a separable Banach space, and suppose $\psi\in{\mathcal L}({\mathsf Q},{\mathcal B}(X))$ is $\eta$-multiplicative for some $\eta>0$. Then $\norm{\psi(p)}\le 2\eta L^2$ for uncountably many $p\in\Omega$. \end{lem}
\begin{proof} For $\varepsilon>0$ let $\Omega_\varepsilon = \{ p\in \Omega \colon \norm{\psi(p)} > \varepsilon\}$. It suffices to show that $\Omega_{2\eta L^2}$ is countable; since $\Omega_{2\eta L^2} = \bigcup_{n=1}^\infty \Omega_{2\eta L^2 + 1/n}$, it suffices in turn to show that $\Omega_c$ is countable for every $c> 2\eta L^2$.
Fix $c> 2\eta L^2$. We may assume that $\Omega_c$ is infinite (otherwise there is nothing to prove); in particular, this implies $\norm{\psi}>0$. For each $p\in\Omega_c$ pick a unit vector $x_p\in X$ such that $\norm{\psi(p)x_p}\ge c$, and let $y_p=\psi(p)x_p$.
If $r\in\Omega_c$ and $r\neq p$, then \[ \norm{\psi(p)y_r} =\norm{\psi(p)\psi(r)x_r} \le \norm{\psi(p)\psi(r)} = \norm{\psi(p)\psi(r)-\psi(pr)} \le \eta L^2 \;; \] on the other hand, since $\norm{\psi(p)-\psi(p)\psi(p)} \le \eta L^2$, \[ \norm{\psi(p)y_p} = \norm{\psi(p)\psi(p)x_p} \ge \norm{\psi(p)x_p} - \eta L^2 \ge c -\eta L^2 \;. \] Combining these inequalities yields $\norm{\psi(p) y_p -\psi(p) y_r} \ge c -2\eta L^2$. Hence \[ \norm{y_p-y_r} \ge \frac{c-2\eta L^2}{\norm{\psi(p)}} \ge \frac{c-2\eta L^2}{\norm{\psi} L} > 0 \quad\text{for all $p,r\in\Omega_c$ with $p\neq r$.} \] Since $X$ is separable this is only possible if $\Omega_c$ is countable. \end{proof}
\begin{proof}[Proof of Proposition \ref{p:MvN trick}] Let $\Omega$ be an uncountable clone system for $E$. By Lemma \ref{l:shift POV}(i), there is a constant $C\ge 1$ such that each $P\in\Omega$ can be factorized as $P=UV$, for some $V$ and $U$ in ${\mathcal B}(E)$ satisfying $\norm{U}\norm{V}\le C$ and $VU=I_E$. We will show that the conclusion of Proposition \ref{p:MvN trick} holds with $c_E:= 6^{-1}C^{-3}$.
Let $\psi:{\mathcal B}(E)/{\mathcal K}(E)\to{\mathcal B}(X)$ be bounded linear. For convenience, let $\eta:= \operatorname{def}(\psi)$, and suppose that $\eta\le 6^{-1}C^{-3}$. Writing $q$ for the quotient homomorphism ${\mathcal B}(E)\to {\mathcal B}(E)/{\mathcal K}(E)$, note that $q(\Omega)$ is an uncountable family of orthogonal idempotents in ${\mathcal B}(E)/{\mathcal K}(E)$ with $\norm{q(P)}\le \norm{P}\le C$ for every $P\in \Omega$. By Lemma \ref{l:separation lemma} with ${\mathsf Q}={\mathcal B}(E)/{\mathcal K}(E)$, there exists some $P\in\Omega$ such that \[ \norm{\psi q(P)} \le 2 \eta C^2 \le 2\eta C^3 \le \frac{1}{3} \,.\] (In fact there exist uncountably many, but we only need one!) Consider $\psi q: {\mathcal B}(E)\to{\mathcal B}(X)$, which satisfies $\operatorname{def}(\psi q)=\operatorname{def}(\psi) = \eta$. We have \[ \eta\norm{U}^3 \norm{V}^3 \le \eta C^3 \le \frac{1}{6} < \frac{2}{9}\;. \] Hence, applying Proposition \ref{p:equivalent proj} to the map $\psi q :{\mathcal B}(E)\to{\mathcal B}(X)$, we deduce that $\norm{\psi q(I_E)} \le 1/3$. Since $q(I_E)$ is the identity element of ${\mathcal B}(E)/{\mathcal K}(E)$, it follows from Corollary \ref{c:small on identity} that $\norm{\psi}\le 3\eta /2$ as required. \end{proof}
\begin{rem}\label{r:finesse} Comparing our proof of Proposition \ref{p:MvN trick} with Johnson's arguments in \cite{BEJ_AMNM2}: he uses the fact that in any Banach algebra an element $x$ for which $\norm{x^2-x}$ is ``small'' is ``close in norm'' to a genuine idempotent. The proof of this result relies on holomorphic functional calculus, and hence has implicit constants depending on the given algebra. Our approach bypasses this issue. \end{rem}
The proof works just as well if ${\mathcal B}(E)$ is replaced by an arbitrary unital Banach algebra ${\mathsf A}$ and ${\mathcal K}(E)$ by an arbitrary closed ideal ${\mathsf J} \unlhd {\mathsf A}$. However, we do not know of natural examples satisfying the \emph{hypotheses} of Proposition \ref{p:MvN trick} that are not of the form ${\mathsf A}={\mathcal B}(E)$ with ${\mathsf J}$ a closed operator ideal, so it seems more appropriate to restrict ourselves to this setting.
\end{subsection}
\begin{subsection}{Deducing the main theorem from other results}
We now show how Theorem \ref{t:headline result} will follow from combining Theorem \ref{t:main innovation}, Proposition \ref{p:decompose} and Proposition \ref{p:MvN trick}. For convenience let us restate the theorem:
\begin{quote} {\itshape Let $X$ be any separable, reflexive Banach space. Let $E$ be a Banach space such that both of the following conditions hold: \begin{romnum} \item ${\mathcal K}(E)$, the algebra of compact operators on $E$, is amenable as a Banach algebra; \item $E$ has an uncountable clone system. \end{romnum} Then the pair $({\mathcal B}(E),{\mathcal B}(X))$ has the AMNM property. } \end{quote}
\begin{proof}[Proof of Theorem \ref{t:headline result}, assuming Theorem \ref{t:main innovation}] ${\mathcal B}(X)$ is a dual Banach algebra with an isometric predual, since $X$ is reflexive. Hence we may apply Theorem~\ref{t:main innovation} with ${\mathsf A}={\mathcal B}(E)$, ${\mathsf D}_0={\mathcal K}(E)$ and ${\mathsf B}={\mathcal B}(X)$. Fix some $L\ge 1$, and let $C'\geq1$ satisfy the conclusion of Theorem~\ref{t:main innovation} (recall that $C'$ may depend on the constant $L$ and also on the Banach space~$E$).
Given $\varepsilon>0$, we fix some $\delta > 0$ to be determined later. Let $\psi:{\mathcal B}(E)\to{\mathcal B}(X)$ satisfy $\norm{\psi}\le L$ and $\operatorname{def}(\psi)\le \delta$. It suffices to prove that there exists some bounded homomorphism $\phi:{\mathcal B}(E)\to{\mathcal B}(X)$ with $\norm{\phi-\psi}\le \varepsilon$.
By Theorem \ref{t:main innovation}, \emph{provided that $C'\delta \le 1$}, there exists $\theta \in \selfhom{{\mathcal K}(E)}({\mathcal B}(E),{\mathcal B}(X))$ such that $\norm{\theta-\psi}\le C'\delta$. Note that by Lemma \ref{l:defect of perturbed}, \[ \begin{aligned} \operatorname{def}(\theta) \le \operatorname{def}(\psi)+ 2(1+\norm{\psi})\norm{\theta-\psi} & \le \delta + 2(1+L) C'\delta \le 5LC'\delta . \end{aligned} \]
By Proposition \ref{p:decompose}, applied with ${\mathsf A}={\mathcal B}(E)$, ${\mathsf J}={\mathcal K}(E)$ and ${\mathsf B}={\mathcal B}(X)$, there exist \begin{itemize} \item a bounded homomorphism $\phi:{\mathcal B}(E) \to{\mathcal B}(X)$, \item a bounded linear map $\theta_s:{\mathcal B}(E)\to{\mathcal B}(X)$ which vanishes on ${\mathcal K}(E)$ and satisfies \[ \operatorname{def}(\theta_s)=\operatorname{def}(\theta) \le 5LC'\delta\] \end{itemize} such that $\theta=\phi+\theta_s$. Writing $q$ for the quotient homomorphism ${\mathcal B}(E)\to{\mathcal B}(E)/{\mathcal K}(E)$, we may factorize $\theta_s$ as $\widetilde{\theta_s}q$ where $\norm{\widetilde{\theta_s}}=\norm{\theta_s}$.
Let $c_E$ be the constant provided by Proposition \ref{p:MvN trick} (recall that this depends only on the chosen clone system for $E$).
Applying that proposition to $\widetilde{\theta_s}$, \emph{provided that $5 LC'\delta \le c_E$}, we have $\norm{\widetilde{\theta_s}}\le 15 LC'\delta/2$. Hence \[ \norm{\phi-\psi} \le \norm{\theta-\psi} + \norm{\theta_s} \le C'\delta + \frac{15}{2}LC'\delta < 9LC'\delta. \] Therefore, if we originally chose our $\delta$ to satisfy $0< 5 LC'\delta \le c_E$ and $9LC'\delta \le\varepsilon$, we have $\norm{\phi-\psi}\le\varepsilon$ as required. \end{proof}
At this point, the only piece missing from our proof of Theorem \ref{t:headline result} is the proof of our main technical novelty, Theorem \ref{t:main innovation}. This will take up the rest of the paper. \end{subsection}
\end{section}
\begin{section}{Towards a proof of Theorem \ref{t:main innovation}} \label{s:using improving}
The process of proving Theorem \ref{t:main innovation} is quite long, and it may be helpful for the reader to know that the key implications are given by the following chain:
\begin{quote} Theorem \ref{t:main innovation} $\Longleftarrow$ Theorem \ref{t:one-sided improved BEJ} $\Longleftarrow$ Proposition \ref{p:improving} $\Longleftarrow$ Section \ref{s:define-imp-op}. \end{quote}
\begin{subsection}{The projective tensor product and approximate diagonals} It turns out that we need to make \emph{quantitative} (rather than merely \emph{qualitative}) use of amenability. Thus, we shall briefly review the basic properties of the projective tensor norm for Banach spaces and the associated completed tensor product; a good source for background material is the monograph \cite{Ryan}. In what follows $\operatorname{Bil}(E,F; X)$ denotes the space of bounded, bilinear maps $E \times F \to X$ for Banach spaces $E,F$ and~$X$.
Rather than defining the projective tensor norm directly, we use the following property (see also \cite[Theorem~2.9]{Ryan}). \begin{quote} Given Banach spaces $E$ and $F$, there exists a Banach space $E\mathbin{\widehat{\otimes}} F$ and a map $\iota_{E,F}\in\operatorname{Bil}(E,F; E\mathbin{\widehat{\otimes}} F)$ of norm~$1$ such that for each Banach space $X$ the map $T \mapsto T \circ \iota_{E,F}, \; {\mathcal L}(E\mathbin{\widehat{\otimes}} F,X) \to \operatorname{Bil}(E,F;X)$ is an isometric isomorphism. \end{quote}
As is standard, for $x\in E$ and $y\in F$ we write $x\mathbin{\otimes} y$ for $\iota_{E,F}(x,y)$. It follows from the previous remarks that for each $T\in{\mathcal L}(E\mathbin{\widehat{\otimes}} F, X)$, \begin{equation}\label{eq:ball of ptp}
\norm{T}_{{\mathcal L}(E\mathbin{\widehat{\otimes}} F, X)} = \norm{T\circ \iota_{E,F}}_{\operatorname{Bil}(E,F;X)} = \sup\{ \norm{T(x\mathbin{\otimes} y)} \colon x\in \operatorname{\rm ball}\nolimits_1(E), y\in \operatorname{\rm ball}\nolimits_1(F)\} \;. \end{equation} That is: to determine the norm of $T\in{\mathcal L}(E\mathbin{\widehat{\otimes}} F,X)$, it suffices to check how $T$ acts on elementary tensors arising from the unit balls of $E$ and $F$.
The theory of amenability for Banach algebras is now a vast topic (see \textit{e.g.\ }\cite{Runde} for a comprehensive modern study). We shall only need the following fragment.
Let ${\mathsf A}$ be a Banach algebra. A bounded net $(\Delta_{\alpha})_{\alpha \in {\mathbb I}}$ in ${\mathsf A} \mathbin{\widehat{\otimes}} {\mathsf A}$ is called a \dt{bounded approximate diagonal for ${\mathsf A}$} if
\begin{equation}\label{amenabledef} \lim\nolimits_{\alpha} (a \cdot \Delta_{\alpha} - \Delta_{\alpha} \cdot a) = 0 \quad \text{and} \quad \lim\nolimits_{\alpha} a \pi_{\mathsf A}(\Delta_{\alpha}) = a \qquad \text{for all $a \in {\mathsf A}$,} \end{equation}
where the limits are taken in the norm topology, and $\pi_{\mathsf A} :{\mathsf A}\mathbin{\widehat{\otimes}}{\mathsf A}\to {\mathsf A}$ is the unique bounded linear map satisfying $\pi_{{\mathsf A}}(a\mathbin{\otimes} b)=ab$ for all $a,b \in {\mathsf A}$. We refer to $\sup_\alpha \norm{\Delta_\alpha}$ as the norm of the bounded approximate diagonal.
A Banach algebra ${\mathsf A}$ is \dt{amenable} if there is a bounded approximate diagonal for ${\mathsf A}$, and the \dt{amenability constant of ${\mathsf A}$} is the infimum of norms of all possible bounded approximate diagonals. It follows from compactness arguments in the bidual, together with Goldstine's lemma and a convexity argument, that we can always find a bounded approximate diagonal for ${\mathsf A}$ whose norm achieves this infimum. \end{subsection}
\begin{subsection}{Reduction to a unital version}
Let us revisit the definition of the multiplicative defect. Given Banach algebras ${\mathsf A}$ and ${\mathsf B}$ and a linear map $\phi \colon {\mathsf A} \to {\mathsf B}$, we define $\phi^\vee: {\mathsf A}\times {\mathsf A} \to {\mathsf B}$ by \begin{equation}\label{eq:define phi-check} \phi^\vee(a,b) := \phi(ab)-\phi(a)\phi(b) \qquad \text{for all $a,b\in {\mathsf A}$.} \end{equation} Our earlier definition merely says that \begin{equation} \operatorname{def}(\phi) = \sup \{ \norm{\phi(a_1a_2)-\phi(a_1)\phi(a_2)} \colon a_1,a_2 \in \operatorname{\rm ball}\nolimits_1({\mathsf A})\} =\norm{\phi^\vee}_{\operatorname{Bil}({\mathsf A},{\mathsf A};{\mathsf B})}. \end{equation}
Now let ${\mathsf D}\subseteq {\mathsf A}$ be a closed subalgebra. We will need to define quantities analogous to $\operatorname{def}(\phi)$ where the ``multiplicative property'' is only tested on pairs in ${\sD\times\sA}$ or ${\sA\times\sD}$. To be precise: \begin{equation} \begin{aligned} \operatorname{def}_{{\sD\times\sA}}(\phi) & = \norm{\phi^\vee}_{\operatorname{Bil}({\mathsf D},{\mathsf A};{\mathsf B})} \\ & = \sup \{ \norm{\phi(a_1a_2)-\phi(a_1)\phi(a_2)} \colon a_1\in \operatorname{\rm ball}\nolimits_1({\mathsf D}),a_2 \in \operatorname{\rm ball}\nolimits_1({\mathsf A})\} \end{aligned} \end{equation} with $\operatorname{def}_{{\sA\times\sD}}(\phi)$ defined similarly. The function $\operatorname{def}_{{\sD\times\sA}} \colon {\mathcal L} ({\mathsf A},{\mathsf B}) \to [0, \infty)$ is continuous.
The next lemma is a sharper version of Lemma \ref{l:defect of perturbed}.
\begin{lem}\label{l:relative defect of perturbed} Let ${\mathsf A}$, ${\mathsf B}$ be Banach algebras and let $\phi,\gamma\in{\mathcal L}({\mathsf A},{\mathsf B})$. Then for all $a_1,a_2 \in {\mathsf A}$, \begin{equation}\label{eq:linearize} (\phi+\gamma)^\vee(a_1,a_2) = \phi^\vee(a_1,a_2) -\phi(a_1)\gamma(a_2)+ \gamma(a_1a_2)-\gamma(a_1)\phi(a_2) - \gamma(a_1)\gamma(a_2)\;. \end{equation} In particular, for any closed subalgebra ${\mathsf D}\subseteq{\mathsf A}$, \begin{align} \operatorname{def}_{{\sD\times\sA}}(\phi+\gamma) &\le \operatorname{def}_{{\sD\times\sA}}(\phi) + (2\norm{\phi}+1) \norm{\gamma} +\norm{\gamma}^2 \,, \\ \operatorname{def}_{{\sA\times\sD}}(\phi+\gamma) &\le \operatorname{def}_{{\sA\times\sD}}(\phi) + (2\norm{\phi}+1) \norm{\gamma} +\norm{\gamma}^2 \,. \end{align} \end{lem} \begin{proof} The first identity is a direct calculation, and we omit the details. The subsequent inequalities follow easily from the first identity and the definitions of $\operatorname{def}_{{\sD\times\sA}}$ and $\operatorname{def}_{{\sA\times\sD}}$. \end{proof}
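For the reader's convenience, the omitted calculation in the proof of Lemma \ref{l:relative defect of perturbed} is simply an expansion using \eqref{eq:define phi-check}: \[ \begin{aligned} (\phi+\gamma)^\vee(a_1,a_2) & = \phi(a_1a_2)+\gamma(a_1a_2) - \bigl(\phi(a_1)+\gamma(a_1)\bigr)\bigl(\phi(a_2)+\gamma(a_2)\bigr) \\ & = \phi^\vee(a_1,a_2) -\phi(a_1)\gamma(a_2) + \gamma(a_1a_2) -\gamma(a_1)\phi(a_2) -\gamma(a_1)\gamma(a_2), \end{aligned} \] which is exactly \eqref{eq:linearize}.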
The following theorem, which extends \cite[Theorem 3.1]{BEJ_AMNM2}, is the heart of Theorem~\ref{t:main innovation}. Note that unlike the earlier theorem, we impose the condition that the subalgebra is unital and restrict attention to unit-preserving maps, even though in the eventual application to Theorem \ref{t:headline result} it is important to allow non-unital examples.
\begin{thm}[AMNM with respect to a unital amenable subalgebra]\label{t:one-sided improved BEJ} Let ${\mathsf A}$ be a Banach algebra, let ${\mathsf D}$ be a closed subalgebra of ${\mathsf A}$ which is unital and amenable with amenability constant $\le K$, and let ${\mathsf B}$ be a unital dual Banach algebra with an isometric predual. Fix $L\ge1$ and $\delta>0$ satisfying $K^2L^2\delta \le 1/8$.
Let $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$ satisfy $\norm{\phi}\le L$, $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$, $\operatorname{def}_{{\sA\times\sD}}(\phi)\le\delta$, and $\operatorname{def}_{{\sD\times\sA}}(\phi)\le\delta$. Then there exists $\psi\in\selfhom{{\mathsf D}}({\mathsf A},{\mathsf B})$ with $\psi(1_{{\mathsf D}})=1_{{\mathsf B}}$ and $\norm{\phi-\psi}\le 12K^2L^3\delta$. \end{thm}
Recall the statement of Theorem \ref{t:main innovation}: \begin{quote} {\itshape Let ${\mathsf A}$ be a Banach algebra with a closed amenable subalgebra ${\mathsf D}_0$, and let ${\mathsf B}$ be a unital dual Banach algebra with an isometric predual. Fix some $L\ge 1$. Then there exists a constant $C'\ge 1$ (possibly depending on $L$ and ${\mathsf D}_0$) such that the following holds: whenever $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ satisfies $\norm{\psi}\le L$ and $C'\operatorname{def}(\psi)\le 1$, there exists $\theta\in \selfhom{{\mathsf D}_0}({\mathsf A},{\mathsf B})$ with $\norm{\theta-\psi} \le C'\operatorname{def}(\psi)$. } \end{quote}
\begin{proof}[Deducing Theorem \ref{t:main innovation} from Theorem \ref{t:one-sided improved BEJ}] We start by considering an arbitrary $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$. Let $\fu{{\mathsf A}}= \mathbb C 1 \oplus_1 {\mathsf A}$ denote the forced unitization of ${\mathsf A}$ (here $\oplus_1$ denotes the $\ell_1$-sum of two Banach spaces). Then there is a natural extension of $\psi$ to $\fu{\psi}:\fu{{\mathsf A}}\to {\mathsf B}$, given by \[ \fu{\psi}(\lambda, a) = \lambda 1_{\mathsf B} + \psi(a) \qquad \text{for all $\lambda\in\mathbb C$, $a\in {\mathsf A}$.} \] It is easily checked that $\norm{\fu{\psi}}=\norm{\psi}$ (one direction is trivial since ${\mathsf A}\subset\fu{{\mathsf A}}$, and the other follows by our choice of norm on $\fu{{\mathsf A}}$). Moreover, a direct calculation shows that \begin{equation}\label{eq:unitize} \fu{\psi}((\lambda_1,a_1)(\lambda_2,a_2)) - \fu{\psi}(\lambda_1,a_1)\fu{\psi}(\lambda_2,a_2) = \psi(a_1a_2)-\psi(a_1)\psi(a_2); \end{equation} and hence $\operatorname{def}(\fu{\psi})=\operatorname{def}(\psi)$. (Once again, one direction is trivial since ${\mathsf A}\subset\fu{{\mathsf A}}$; the non-trivial direction follows from the identity \eqref{eq:unitize}.)
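For the reader's convenience, we record the calculation behind \eqref{eq:unitize}: since $(\lambda_1,a_1)(\lambda_2,a_2)=(\lambda_1\lambda_2,\, \lambda_1 a_2+\lambda_2 a_1 + a_1a_2)$ in $\fu{{\mathsf A}}$, \[ \begin{aligned} \fu{\psi}\bigl((\lambda_1,a_1)(\lambda_2,a_2)\bigr) & = \lambda_1\lambda_2 1_{\mathsf B} + \lambda_1\psi(a_2)+\lambda_2\psi(a_1)+\psi(a_1a_2), \\ \fu{\psi}(\lambda_1,a_1)\,\fu{\psi}(\lambda_2,a_2) & = \lambda_1\lambda_2 1_{\mathsf B} + \lambda_1\psi(a_2)+\lambda_2\psi(a_1)+\psi(a_1)\psi(a_2), \end{aligned} \] and subtracting the second line from the first gives \eqref{eq:unitize}.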
Let ${\mathsf D}=\fu{{\mathsf D}_0}$, which coincides with the closed subalgebra $\mathbb C 1 \oplus_1 {\mathsf D}_0$ of $\fu{{\mathsf A}}$, where $1$ is the adjoined unit. It is well known that the unitization of any amenable Banach algebra is itself amenable;
let $K$ be the amenability constant of ${\mathsf D}$, which automatically satisfies $K\ge 1$.
Given $L\ge 1$, put $C':= 12K^2L^3$. Suppose $\psi\in\operatorname{\rm ball}\nolimits_L{\mathcal L}({\mathsf A},{\mathsf B})$ satisfies $\operatorname{def}(\psi)= \delta$, for some $\delta \in [0, 1/C' ]$. By our previous remarks, the extended map $\fu{\psi}:\fu{{\mathsf A}}\to {\mathsf B}$ also has multiplicative defect~$\delta$ and norm $\le L$, and by construction it satisfies $\fu{\psi}(1)=1_{\mathsf B}$. Applying Theorem \ref{t:one-sided improved BEJ} to the triple $({\mathsf D},\fu{{\mathsf A}},{\mathsf B})$ (note that $8K^2L^2\delta \le 12K^2L^3\delta \le 1$), we deduce that there exists $\phi\in \selfhom{{\mathsf D}}(\fu{{\mathsf A}},{\mathsf B})$ with $\phi(1)=1_{\mathsf B}$ and $\norm{\phi-\fu{\psi}}\le C'\delta$. Taking $\theta = {\phi\vert}_{\mathsf A} \in \selfhom{{\mathsf D}_0}({\mathsf A},{\mathsf B})$, we see that the conclusions of Theorem \ref{t:main innovation} are satisfied. \end{proof}
\end{subsection}
\begin{subsection}{Obtaining the unital version, using an improving operator} Guided by the case ${\mathsf D}={\mathsf A}$ that is treated in \cite{BEJ_AMNM2}, we shall prove Theorem~\ref{t:one-sided improved BEJ} by an iterative argument. Notably, the proof works by repeated application of a \emph{nonlinear} operator $F:{\mathcal L}({\mathsf A},{\mathsf B})\to {\mathcal L}({\mathsf A},{\mathsf B})$ with certain ``improving'' properties.
The operator $F$ is designed in such a way that for each $\phi$ satisfying the assumptions of Theorem~\ref{t:one-sided improved BEJ}, the sequence of iterates $(F^n(\phi))_{n\in\mathbb N}$ is a fast Cauchy sequence in ${\mathcal L}({\mathsf A},{\mathsf B})$ and satisfies $\operatorname{def}_{{\sD\times\sA}}(F^n(\phi))\to 0$. The map $\phi_\infty:=\lim_{n\to\infty} F^n(\phi)$ then satisfies $\operatorname{def}_{{\sD\times\sA}}(\phi_\infty)=0$; and $\norm{\phi-\phi_\infty}$ can be bounded above in terms of $\operatorname{def}_{{\sD\times\sA}}(\phi)$, using the geometric decay from the fast Cauchy property. To get the final map $\psi$, one performs a ``left--right switch'' and exploits some {\it ad hoc} features of the operator $F$.
Before constructing the operator $F$, we isolate those of its properties which are needed for the argument in the previous paragraph.
\begin{prop}[A nonlinear improving operator]\label{p:improving} Let ${\mathsf A}$ be a Banach algebra, let ${\mathsf B}$ be a unital dual Banach algebra with an isometric predual, and let ${\mathsf D}$ be a closed subalgebra of ${\mathsf A}$ which is unital and amenable with amenability constant $\le K$. Then there is a function $F:{\mathcal L}({\mathsf A},{\mathsf B})\to{\mathcal L}({\mathsf A},{\mathsf B})$ with the following properties: for each $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$ satisfying $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$, we have \begin{romnum} \item\label{li:unital}
$F(\phi)(1_{{\mathsf D}})=1_{{\mathsf B}}$; \item\label{li:small step}
$\norm{F(\phi)-\phi} \le K\norm{\phi} \operatorname{def}_{{\sD\times\sA}}(\phi)$; \item\label{li:improve defect}
$\operatorname{def}_{{\sD\times\sA}}(F(\phi)) \le 3K^2 \norm{\phi}^2 \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi)$. \end{romnum} Moreover, \begin{romnum} \setcounter{enumi}{3} \item\label{li:preserve right}
if $\operatorname{def}_{{\sA\times\sD}}(\phi)=0$, then $\operatorname{def}_{{\sA\times\sD}}(F(\phi))=0$. \end{romnum} \end{prop}
\begin{proof}[{Proof of Theorem \ref{t:one-sided improved BEJ}, given Proposition~\ref{p:improving}}] We fix $K$, $L$ and $\delta$ as in the statement of the theorem. Let $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$ with $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$, $\norm{\phi}\le L$, $\operatorname{def}_{{\sA\times\sD}}(\phi)\le\delta$, and $\operatorname{def}_{{\sD\times\sA}}(\phi)\le\delta$.
The first step is to prove that $(F^n(\phi))_{n\ge 0}$ is a Cauchy sequence in ${\mathcal L}({\mathsf A},{\mathsf B})$. In fact, we prove a more precise technical statement, as follows.
\subproofhead{Claim} $\norm{F^n(\phi)-F^{n-1}(\phi)} \le KL\delta 2^{-(n-1)}$ and $\operatorname{def}_{{\sD\times\sA}}(F^n(\phi))\le 3\delta 2^{-2n-1}$, for each $n \ge 1$.
The claim is proved by strong induction on $n$. For the base case ($n=1$): applying Proposition~\ref{p:improving} to $\phi$, we obtain $\norm{F(\phi)-\phi} \le K\norm{\phi}\operatorname{def}_{{\sD\times\sA}}(\phi) \le K L\delta$ and
\begin{align*} \operatorname{def}_{{\sD\times\sA}}(F(\phi)) & \le 3K^2\norm{\phi}^2 \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi) \\ &\le 3K^2\norm{\phi}^2 \operatorname{def}_{{\sD\times\sA}}(\phi)^2 \notag \\ & \le 3K^2L^2\delta^2 \\ & \le 3\delta /8 \end{align*}
as required.
Now suppose the claim holds for all $1\le j\le n$ for some $n\in\mathbb N$. Then
\begin{align}\label{est0} \norm{F^n(\phi)} & \le \norm{\phi}+ \sum_{j=1}^n \norm{F^j(\phi)-F^{j-1}(\phi)} \notag \\ & \le L + KL\delta \sum_{j=1}^n2^{-(j-1)} \le L + 2KL\delta \le 5 L /4, \end{align}
using the fact that $K\delta \le KL\delta \le 1/8$. Combining \eqref{est0} with the second part of the inductive hypothesis yields
\begin{align}\label{est1} \norm{F^n(\phi)}\operatorname{def}_{{\sD\times\sA}}(F^n(\phi)) & \le (5L/4) \cdot 3\delta 2^{-2n-1} \notag \\ & \le L \delta 2^{-2n+1} \le L\delta 2^{-n} \qquad\text{(since $n\ge 1$)}. \end{align}
Applying Proposition~\ref{p:improving}~(ii) to $F^n(\phi)$ and using \eqref{est1} yields
\begin{align*} \norm{F^{n+1}(\phi)-F^n(\phi)} \le K\norm{F^n(\phi)}\operatorname{def}_{{\sD\times\sA}}(F^n(\phi)) \le KL\delta 2^{-n}\,, \end{align*} and applying Proposition~\ref{p:improving}~(iii) to $F^n(\phi)$ yields \[ \begin{aligned} \operatorname{def}_{{\sD\times\sA}}(F^{n+1}(\phi)) & \le 3K^2 \norm{F^n(\phi)}^2 \operatorname{def}_{{\sD\times\sD}}(F^n(\phi)) \operatorname{def}_{{\sD\times\sA}}(F^n(\phi)) \\ & \le 3 \left( K \norm{F^n(\phi)} \operatorname{def}_{{\sD\times\sA}}(F^n(\phi))\right)^2 \\ & \le 3 (KL\delta 2^{-n})^2 \quad\qquad\text{(using \eqref{est1})} \\ & = 3K^2L^2\delta \cdot \delta 2^{-2n} \\ & \le 3 \delta 2^{-2n-3} \quad\qquad \text{(since $K^2L^2\delta \le 1/8$).} \end{aligned} \] This completes the inductive step, and hence proves the claim.
It follows from the claim that the sequence $(F^n(\phi))_{n\ge 0}$ is Cauchy in ${\mathcal L}({\mathsf A},{\mathsf B})$. Let $\phi_\infty= \lim_{n\to\infty} F^n(\phi) \in {\mathcal L} ({\mathsf A},{\mathsf B})$.
Since $F^n(\phi)(1_{{\mathsf D}})=1_{{\mathsf B}}$ for all $n \in \mathbb N$ and $\lim_n\operatorname{def}_{{\sD\times\sA}}(F^n(\phi)) = 0$, we have $\phi_\infty(1_{{\mathsf D}})=1_{{\mathsf B}}$ and $\operatorname{def}_{{\sD\times\sA}}(\phi_\infty)=0$ by continuity.
Also, summing the geometric series from the claim, $\norm{\phi-\phi_\infty}\le \sum_{n=1}^\infty KL\delta\, 2^{-(n-1)} = 2KL\delta$. This implies
\[ \norm{\phi_\infty} \le \norm{\phi}+\norm{\phi-\phi_\infty} \le L + 2KL\delta \le L (1+ 2K^2L^2\delta) \le 5 L /4 \;, \]
and, by the estimate given at the end of Lemma~\ref{l:relative defect of perturbed},
\[ \begin{aligned} \operatorname{def}_{{\sA\times\sD}}(\phi_\infty) & \le \operatorname{def}_{{\sA\times\sD}}(\phi)+ (2\norm{\phi}+1) \norm{\phi_\infty-\phi} + \norm{\phi_\infty-\phi}^2 \\ & \le \delta + (2L+1) 2KL\delta + (2KL\delta)^2 \\ & \le \delta ( 1+ 6KL^2 + 4K^2L^2\delta) \quad \le \quad \delta ( 3/2 + 6KL^2) \quad \le \quad 8KL^2\delta. \end{aligned} \]
To obtain the final map $\psi$, let $\op{\sA}$ and $\op{\sB}$ be the Banach algebras whose underlying Banach spaces are the same as ${\mathsf A}$ and ${\mathsf B}$ respectively, but which have the opposite algebra structures, so that $a_1\cdot_{(\op{\sA})} a_2 := a_2a_1$, etc. Note that $\op{\sD}$ is a closed subalgebra of $\op{\sA}$. Moreover, $\op{\sD}$ is unital and amenable with constant $\le K$: for if $\sigma: {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D} \to {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$ is the flip map defined by $c_1\mathbin{\otimes} c_2 \mapsto c_2\mathbin{\otimes} c_1$, then $\sigma$ maps bounded approximate diagonals for ${\mathsf D}$ to bounded approximate diagonals for $\op{\sD}$.
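Indeed, this is a direct check on elementary tensors: with the bimodule actions determined by $a\cdot(x\mathbin{\otimes} y)=(ax)\mathbin{\otimes} y$ and $(x\mathbin{\otimes} y)\cdot a = x\mathbin{\otimes}(ya)$, for $c_1,c_2,d\in{\mathsf D}$ we have \[ d\cdot_{(\op{\sD})}\sigma(c_1\mathbin{\otimes} c_2) = (c_2d)\mathbin{\otimes} c_1 = \sigma\bigl((c_1\mathbin{\otimes} c_2)\cdot d\bigr), \qquad \sigma(c_1\mathbin{\otimes} c_2)\cdot_{(\op{\sD})} d = \sigma\bigl(d\cdot(c_1\mathbin{\otimes} c_2)\bigr), \] and $\pi_{\op{\sD}}\circ\sigma=\pi_{{\mathsf D}}$. Since $\sigma$ is an isometry of ${\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$, the conditions \eqref{amenabledef} for $(\Delta_\alpha)$ transfer to $(\sigma(\Delta_\alpha))$ with the same norm bound. (For the second condition, note that applying $\pi_{{\mathsf D}}$ to the first condition for $(\Delta_\alpha)$ gives $d\pi_{{\mathsf D}}(\Delta_\alpha)-\pi_{{\mathsf D}}(\Delta_\alpha)d\to 0$, whence $\pi_{{\mathsf D}}(\Delta_\alpha)d\to d$ as well.)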
Let $\phi'\in {\mathcal L}(\op{\sA}, \op{\sB})$ be the same function as $\phi_\infty\in{\mathcal L}({\mathsf A},{\mathsf B})$ (we introduce new notation to emphasise that we are now working with different algebras as domain and codomain, which affects the definition of $\operatorname{def}$). Then the following properties hold: \[ \phi'(1_{\op{\sD}})=\phi_\infty(1_{{\mathsf D}}) =1_{{\mathsf B}} = 1_{\op{\sB}}; \qquad \operatorname{def}_{{\AO\times\DO}}(\phi') = \operatorname{def}_{{\sD\times\sA}}(\phi_\infty)=0 \;. \] Applying Proposition \ref{p:improving} to the triple $(\op{\sA},\op{\sB},\op{\sD})$, there is a function $F': {\mathcal L}(\op{\sA},\op{\sB})\to{\mathcal L}(\op{\sA},\op{\sB})$ such that
\begin{enumerate} \item $F'(\phi')(1_{\op{\sD}})=1_{\op{\sB}}$; \item $\norm{F'(\phi')-\phi'} \le K \norm{\phi'} \operatorname{def}_{{\DO\times\AO}}(\phi') = K\norm{\phi_\infty} \operatorname{def}_{{\sA\times\sD}}(\phi_\infty)$; \item\label{li:exploit} $\operatorname{def}_{{\DO\times\AO}}(F'(\phi'))
\le 3K^2 \norm{\phi'}^2 \operatorname{def}_{{\DO\times\DO}}(\phi') \operatorname{def}_{{\DO\times\AO}}(\phi')$; \item
$\operatorname{def}_{{\AO\times\DO}}(F'(\phi'))=0$. \end{enumerate}
Now observe that $\operatorname{def}_{{\DO\times\DO}}(\phi')\le \operatorname{def}_{{\AO\times\DO}}(\phi')=0$. Hence we may improve the property \ref{li:exploit} above to: $\operatorname{def}_{{\DO\times\AO}}(F'(\phi'))=0$.
We define $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ to have the same underlying function as $F'(\phi')$. Then
$\psi(1_{{\mathsf D}})=F'(\phi')(1_{\op{\sD}})=1_{\op{\sB}}=1_{{\mathsf B}}$, and
\[ \begin{aligned}
\norm{\psi-\phi}
& \le \norm{F'(\phi')-\phi'}+\norm{\phi_\infty-\phi} \\
& \le K (5L/4)\, 8K L^2\delta +2KL\delta \\
& \le 12K^2L^3\delta\;. \end{aligned} \]
Finally, $\psi\in\selfhom{{\mathsf D}}({\mathsf A},{\mathsf B})$ since $\operatorname{def}_{{\sD\times\sA}}(\psi)=\operatorname{def}_{{\AO\times\DO}}(F'(\phi'))=0$ and
$\operatorname{def}_{{\sA\times\sD}}(\psi)=\operatorname{def}_{{\DO\times\AO}}(F'(\phi'))=0$. \end{proof} \end{subsection}
\begin{subsection}{Explanation for the improving operator} We have not given any definition of the operator $F$, let alone explained why amenability of ${\mathsf D}$ would allow us to find or construct~$F$. In fact the definition of $F$ is quite simple and explicit --- see Equation \eqref{eq:define next step} below --- but attempting to prove directly that $F$ has the required ``improving properties'' is far less straightforward. Subtle cancellations are required, and one has to pay attention to technical issues arising when carrying out repeated \ensuremath{{\rm w}^*}-averaging.
These issues are already present in the proof of \cite[Theorem 3.1]{BEJ_AMNM2}, where an operator analogous to ours is constructed in the special case ${\mathsf D}={\mathsf A}$. Although Johnson chooses in his proof to verify the necessary properties directly, he follows this with a brief sketch of how the construction of the operator and the proof that it has the required properties are motivated by a ``vanishing $H^2$ argument'' that is standard in the Hochschild cohomology theory of (amenable) Banach algebras.
In our setting, the algebra ${\mathsf A}$ is no longer amenable, but the unital subalgebra ${\mathsf D}$ is, and the corresponding notion in cohomology theory is that of \emph{normalizing a $2$-cocycle with respect to an amenable subalgebra}. It is this approach which guides our construction of the desired ``improving operator'' $F$. Rather than adapting the calculations in the proof of \cite[Theorem 3.1]{BEJ_AMNM2} in an \textit{ad hoc} way to the setting of an amenable subalgebra ${\mathsf D}\subseteq {\mathsf A}$, it seems both more comprehensible and more robust to set up a general framework. This is our goal in the final section of the paper; the desired ``improving operator'' $F$ will then emerge naturally as a special case of the general machinery.
\end{subsection}
\end{section}
\begin{section}{Constructing the nonlinear improving operator} \label{s:building improving}
\begin{subsection}{An approximate cochain complex} Throughout this subsection, we fix Banach algebras ${\mathsf A},{\mathsf B}$ and $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$; we shall think of $\phi$ as defining an ``approximate action'' of ${\mathsf A}$ on ${\mathsf B}$.
As mentioned earlier, we are guided by a standard construction in the Hochschild cohomology theory of Banach algebras, which arises when normalizing cochains with respect to an amenable unital subalgebra. However, we require the actual techniques in the proofs and not just the results, and therefore we shall build the required machinery from scratch.
\begin{rem}\label{r:kazhdan disclaimer} After the work in this section was originally completed, it was brought to our attention that \cite{kazhdan} adopts a similar setup involving an approximate cochain complex; however, that work is carried out only in the setting of (bounded) group cohomology for discrete groups. Moreover, \cite{kazhdan} does not explore the ``relative'' setting, in which one has amenability only for a subgroup rather than for the whole group. \end{rem}
\begin{dfn}\label{d:approx cochain complex} For each $n\in\mathbb N$, define the bounded linear map $\mathop{\partial}\nolimits_\phi^n: {\mathcal L}^n({\mathsf A},{\mathsf B})\to {\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$ by \[ \mathop{\partial}\nolimits_\phi^n \psi(a_1,\dots, a_{n+1}) = \left\{ \begin{aligned} \phi(a_1)\psi(a_2,\dots, a_{n+1}) \\ + \sum_{j=1}^{n}(-1)^j \psi(a_1,\dots, a_ja_{j+1}, \dots, a_{n+1}) \\ + (-1)^{n+1} \psi(a_1,\dots, a_n)\phi(a_{n+1}). \end{aligned} \right. \] \end{dfn}
In fact, to prove Proposition~\ref{p:improving} we only need this definition for $n\in\{1,2\}$. We include the definition for general~$n$ to put the arguments that follow in their proper context.
\begin{rem}\label{r:coho waffle} We make some remarks to provide context; they are not necessary for the proof of Proposition \ref{p:improving}. \begin{romnum}
\item If $\phi$ is multiplicative, then $(a,b)\mapsto \phi(a)b$ and $(b,a)\mapsto b\phi(a)$ give ${\mathsf B}$ the structure of an ${\mathsf A}$-bimodule ${}_\phi {\mathsf B}_\phi$, and the operator $\mathop{\partial}\nolimits^n_\phi$ is just the usual Hochschild coboundary operator for ${}_\phi {\mathsf B}_\phi$-valued cochains. If $\phi$ is not multiplicative, then we might have $\mathop{\partial}\nolimits^{n+1}_\phi\circ\mathop{\partial}\nolimits^n_\phi\neq 0$, but a direct calculation shows that $\norm{\mathop{\partial}\nolimits^{n+1}_\phi\circ\mathop{\partial}\nolimits^n_\phi} \le 4 \operatorname{def}(\phi)$. \item Recall that we have a nonlinear function $(\underline{\quad})^\vee: {\mathcal L}({\mathsf A},{\mathsf B})\to{\mathcal L}^2({\mathsf A},{\mathsf B}); \; \psi \mapsto \psi^{\vee}$ (where $\psi^{\vee}$ is defined as in Equation \eqref{eq:define phi-check}), which satisfies $\operatorname{def}(\psi)=\norm{\psi^\vee}$. If $\gamma\in{\mathcal L}({\mathsf A},{\mathsf B})$, Equation \eqref{eq:linearize} may be rewritten as \[ (\phi+\gamma)^\vee(a_1,a_2) = \phi^\vee(a_1,a_2) - \mathop{\partial}\nolimits_\phi^1 (\gamma)(a_1,a_2) - \gamma(a_1,a_2) \quad\text{for all $a_1,a_2\in {\mathsf A}$,} \] and it follows that the derivative of the function $(\underline{\quad})^\vee$ at $\phi$ is just $-\mathop{\partial}\nolimits_\phi^1$. (This observation is taken from remarks in \cite[Section~3]{BEJ_AMNM2}.) \item For now, we do not assume either ${\mathsf A}$ or ${\mathsf B}$ is unital; but when it comes to our analogue of ``normalization of cocycles'', some kind of unitality assumption is needed to obtain maps with the right properties. \end{romnum} \end{rem}
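To illustrate part (i) of the remark in the lowest nontrivial case $n=1$: writing $\phi^\vee(a,b)=\phi(ab)-\phi(a)\phi(b)$ (as in Equation \eqref{eq:define phi-check}), a direct expansion of the definitions shows that all terms in which $\psi$ is evaluated at a product of two of the three elements cancel in pairs, leaving \[ \mathop{\partial}\nolimits^2_\phi\mathop{\partial}\nolimits^1_\phi \psi(a_1,a_2,a_3) = \psi(a_1)\,\phi^\vee(a_2,a_3) - \phi^\vee(a_1,a_2)\,\psi(a_3) \qquad\text{for all $a_1,a_2,a_3\in{\mathsf A}$,} \] which gives the (better) bound $\norm{\mathop{\partial}\nolimits^2_\phi\circ\mathop{\partial}\nolimits^1_\phi}\le 2\operatorname{def}(\phi)$ in this case.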
Since $\mathop{\partial}\nolimits_\phi^2$ can be applied to arbitrary elements of ${\mathcal L}^2({\mathsf A},{\mathsf B})$, we may apply it to the particular bilinear map $\phi^\vee$.
\begin{lem}[A $2$-cocycle for $\mathop{\partial}\nolimits_\phi$] \label{l:2-cocycle} $\mathop{\partial}\nolimits_\phi^2(\phi^\vee)=0$. \end{lem} The proof is a straightforward calculation, which we omit.
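For the reader who prefers to see the calculation: writing $\phi^\vee(a,b)=\phi(ab)-\phi(a)\phi(b)$ as in Equation \eqref{eq:define phi-check}, we have \[ \begin{aligned} \mathop{\partial}\nolimits_\phi^2(\phi^\vee)(a_1,a_2,a_3) & = \phi(a_1)\phi^\vee(a_2,a_3) - \phi^\vee(a_1a_2,a_3) + \phi^\vee(a_1,a_2a_3) - \phi^\vee(a_1,a_2)\phi(a_3) \\ & = \phi(a_1)\phi(a_2a_3) - \phi(a_1)\phi(a_2)\phi(a_3) - \phi(a_1a_2a_3) + \phi(a_1a_2)\phi(a_3) \\ & \quad + \phi(a_1a_2a_3) - \phi(a_1)\phi(a_2a_3) - \phi(a_1a_2)\phi(a_3) + \phi(a_1)\phi(a_2)\phi(a_3) \\ & = 0, \end{aligned} \] since the eight terms cancel in pairs.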
\begin{dfn}[Notation for restricting in first variable]\label{d:currying} Let $E$ and $V$ be Banach spaces, and let $F$ be a closed subspace of $E$. Let $n \ge 2$. Given $\psi\in{\mathcal L}^n(E,V)$ we may regard it as an element of ${\mathcal L}(E, {\mathcal L}^{n-1}(E,V))$, which is defined by
\[ x_1 \mapsto \left( (x_2,\dots, x_n) \mapsto \psi(x_1,\dots, x_n) \right). \]
Restricting this function to $F$ yields a bounded linear map $F\to{\mathcal L}^{n-1}(E,V)$, which we denote by
\begin{align*} \LRES{F}(\psi)\in {\mathcal L}(F,{\mathcal L}^{n-1}(E,V)). \end{align*} The function $\LRES{F}: {\mathcal L}^n(E,V) \to {\mathcal L}(F,{\mathcal L}^{n-1}(E,V))$ is linear and contractive.
\end{dfn}
For the rest of this subsection, we fix a closed subalgebra ${\mathsf D}\subseteq{\mathsf A}$. Note that $\operatorname{def}_{{\sD\times\sA}}(\phi)=\norm{\LRES{{\mathsf D}}(\phi^\vee)}$.
Our goal is to define (linear) operators $\mathop{\sigma}\nolimits_\phi^n : {\mathcal L}^{n+1}({\mathsf A},{\mathsf B}) \to {\mathcal L}^n({\mathsf A},{\mathsf B})$ such that for each $\psi\in {\mathcal L}^n({\mathsf A},{\mathsf B})$, the map \[ \LRES{{\mathsf D}}\left( \mathop{\partial}\nolimits_\phi^{n-1}\mathop{\sigma}\nolimits_\phi^{n-1}(\psi) + \mathop{\sigma}\nolimits_\phi^n\mathop{\partial}\nolimits_\phi^n(\psi) - \psi\right) \] has norm controlled by $\operatorname{def}_{{\sD\times\sA}}(\psi)$ (we make this precise in Proposition \ref{p:approx-splitting-v2} below). As a first step towards this, we set up a general construction by which elements of ${\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$ define (linear) operators ${\mathcal L}^{n+1}({\mathsf A},{\mathsf B})\to{\mathcal L}^n({\mathsf A},{\mathsf B})$.
\begin{dfn}\label{d: ave} Let $n\in\mathbb N$. Given any $c,d\in {\mathsf D}$ and $\psi\in {\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$, we obtain an element of ${\mathcal L}^n({\mathsf A},{\mathsf B})$, defined by
\begin{align} (a_1,\dots, a_n) \mapsto \phi(c)\ \psi(d,a_1,\dots, a_n) \qquad \text{for all $a_1,\dots, a_n\in {\mathsf A}$.} \end{align}
This process yields a bounded bilinear map ${\sD\times\sD}\to {\mathcal L}( {\mathcal L}^{n+1}({\mathsf A},{\mathsf B}) , {\mathcal L}^n({\mathsf A},{\mathsf B}))$, with norm $\le \norm{\phi}\norm{\LRES{{\mathsf D}}}$, and hence uniquely defines a bounded linear map \[ {\mathsf D}\mathbin{\widehat{\otimes}} {\mathsf D}\to {\mathcal L}({\mathcal L}^{n+1}({\mathsf A},{\mathsf B}),{\mathcal L}^n({\mathsf A},{\mathsf B}))\] that we denote by $w\mapsto \ave[\phi]{w}^n$. Explicitly: for $c,d\in {\mathsf D}$ and $\psi\in{\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$,
\begin{align} \ave[\phi]{c \mathbin{\otimes} d}^n (\psi)(a_1,\dots, a_n) = \phi(c)\ \psi(d,a_1,\dots, a_n) \quad \text{for all $a_1,\dots, a_n\in {\mathsf A}$.} \end{align}
\end{dfn}
Note that from our definitions, for each $\psi \in {\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$ we have
\begin{equation}\label{eq:bound of averaging operator} \norm{\ave[\phi]{w}^n(\psi)}_{{\mathcal L}^{n}({\mathsf A},{\mathsf B})} \le \norm{w}_{{\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}} \norm{\phi} \norm{ \LRES{{\mathsf D}}(\psi) }_{{\mathcal L}({\mathsf D},{\mathcal L}^{n}({\mathsf A},{\mathsf B}))} \qquad\text{for all $w\in {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$.} \end{equation}
Also, since we used the universal property of $\mathbin{\widehat{\otimes}}$ to define $\ave[\phi]{w}^n$, it is clear that this operator is independent of any chosen representation of $w$ as an absolutely convergent sum of elementary tensors.
\begin{lem}[Approximate splitting, 1st version]\label{l:approx-splitting-v1} Let $n\ge 2$. Then \begin{equation}\label{eq:towards homotopy} \left. \begin{gathered}
\mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{w}^{n-1} (\psi)(a_1,\dots,a_n) \\
+ \ave[\phi]{w}^n \mathop{\partial}\nolimits_\phi^n(\psi)(a_1,\dots, a_n) \end{gathered}\right\} = \left\{ \begin{gathered}
\pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(w)\cdot\psi(a_1,\dots,a_n) \\
+ \phi(a_1)\cdot \ave[\phi]{w}^{n-1}(\psi)(a_2,\dots,a_n) \\
- \ave[\phi]{w \cdot a_1}^{n-1}(\psi)(a_2,\dots,a_n) \end{gathered} \right. \end{equation} for all $w \in {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$, $a_1,\dots,a_n\in {\mathsf A}$ and $\psi\in{\mathcal L}^n({\mathsf A},{\mathsf B})$. \end{lem}
\begin{proof} Fix $a_1,\dots, a_n\in {\mathsf A}$ and $\psi\in{\mathcal L}^n({\mathsf A},{\mathsf B})$. We denote the left-hand side of \eqref{eq:towards homotopy} by $T_L(w)$ and denote the right-hand side by $T_R(w)$. Then $T_L$ and $T_R$ are bounded linear maps from ${\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$ to ${\mathsf B}$, so it suffices to prove that $T_L(c\mathbin{\otimes} d)=T_R(c\mathbin{\otimes} d)$ for all $c,d\in {\mathsf D}$.
Consider \[ T_L(c\mathbin{\otimes} d) = \mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{c\mathbin{\otimes} d}^{n-1}\psi(a_1,\dots,a_n) + \ave[\phi]{c\mathbin{\otimes} d}^n\mathop{\partial}\nolimits_\phi^n\psi(a_1,\dots,a_n). \] Expanding these expressions, most of the terms cancel, leaving \[ \begin{gathered}
\phi(a_1)\cdot \phi(c) \cdot\psi(d,a_2,\dots, a_n)
+ \phi(c)\cdot \phi(d)\cdot\psi(a_1,\dots, a_n)
- \phi(c)\cdot \psi(da_1,a_2,\dots, a_n)
\\ =
\phi(a_1)\cdot \ave[\phi]{c\mathbin{\otimes} d}^{n-1}\psi (a_2,\dots,a_n)
+ \pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(c\mathbin{\otimes} d) \cdot \psi(a_1,\dots,a_n) \\
- \ave[\phi]{c\mathbin{\otimes} da_1}^{n-1}\psi(a_2,\dots,a_n) \end{gathered} \] which equals $T_R(c\mathbin{\otimes} d)$, as required. \end{proof}
\begin{lem} \label{l:approx-left-modular} Let $n \ge 2$. Let $w\in {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$ and let $a_1\in \operatorname{\rm ball}\nolimits_1({\mathsf D})$, $a_2,\dots,a_n\in \operatorname{\rm ball}\nolimits_1({\mathsf A})$. Then for each $\psi\in{\mathcal L}^n({\mathsf A},{\mathsf B})$, \begin{equation}\label{eq:left-modular} \begin{gathered} \left\Vert \phi(a_1)\cdot \ave[\phi]{w}^{n-1}(\psi)(a_2,\dots,a_n) - \ave[\phi]{a_1\cdot w}^{n-1}(\psi)(a_2,\dots,a_n) \right\Vert \\
\le
\operatorname{def}_{{\sD\times\sD}}(\phi) \,\norm{w}_{{\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}}\ \norm{ \LRES{{\mathsf D}}(\psi) }_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))}. \end{gathered} \end{equation} \end{lem}
\begin{proof} Fixing $a_1\in \operatorname{\rm ball}\nolimits_1({\mathsf D})$ and $a_2,\dots, a_n\in \operatorname{\rm ball}\nolimits_1({\mathsf A})$, let \[ T(w):=\phi(a_1)\cdot \ave[\phi]{w}^{n-1}(\psi)(a_2,\dots,a_n) - \ave[\phi]{a_1\cdot w}^{n-1}(\psi)(a_2,\dots,a_n) \quad \text{for all $w \in {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$.} \] Then $T \colon {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}\to {\mathsf B}$ is a bounded linear map and it suffices to prove that $\norm{T}\le \operatorname{def}_{{\sD\times\sD}}(\phi) \, \norm{ \LRES{{\mathsf D}}(\psi) }_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))}$. By \eqref{eq:ball of ptp} it suffices to prove that \[ \norm{T(c\mathbin{\otimes} d)}\le \operatorname{def}_{{\sD\times\sD}}(\phi) \, \norm{ \LRES{{\mathsf D}}(\psi) }_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))} \quad\text{for all $c,d\in \operatorname{\rm ball}\nolimits_1({\mathsf D})$.} \] This is now a straightforward calculation: \begin{align*} \norm{T(c\mathbin{\otimes} d)} &= \left\Vert \phi(a_1)\cdot \ave[\phi]{c\mathbin{\otimes} d}^{n-1}(\psi)(a_2,\dots,a_n) - \ave[\phi]{a_1c\mathbin{\otimes} d}^{n-1}(\psi)(a_2,\dots,a_n) \right\Vert \\ & = \left\Vert \phi(a_1) \phi(c) \psi(d,a_2,\dots,a_n) - \phi(a_1c) \psi(d,a_2,\dots,a_n) \right\Vert \\ & \le \norm{\phi(a_1)\phi(c)-\phi(a_1c)} \, \norm{\psi(d,a_2,\dots,a_n)} \\ & = \norm{\phi(a_1)\phi(c)-\phi(a_1c)} \, \norm{[\LRES{{\mathsf D}}(\psi)(d)](a_2,\dots,a_n)} \\ & \le \operatorname{def}_{{\sD\times\sD}}(\phi) \, \norm{\LRES{{\mathsf D}}(\psi)}_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))}, \end{align*} as required. \end{proof} \end{subsection}
\begin{subsection}{Defining the approximate homotopy}\label{ss:approx-homotopy} To construct our approximate homotopy, we have to place further restrictions on ${\mathsf B}$ and~${\mathsf D}$. Thus throughout this subsection: \begin{itemize} \item ${\mathsf A}$ is a Banach algebra, ${\mathsf B}$ is a unital dual Banach algebra with an isometric predual, and $\phi\in {\mathcal L}({\mathsf A},{\mathsf B})$; \item ${\mathsf D}$ is a closed subalgebra of ${\mathsf A}$, which is unital and amenable with constant $\le K$. \end{itemize}
We also fix a net $(\Delta_\alpha)_{\alpha\in I}$ which is a bounded approximate diagonal for ${\mathsf D}$ and has the following properties: $\sup_\alpha\norm{\Delta_\alpha}_{{\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}}\le K$; and there exists $\boldsymbol{\Delta} \in ({\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D})^{**}$ such that $\Delta_\alpha \xrto{\ensuremath{{\rm w}^*}} \boldsymbol{\Delta}$.
The desired operators $\mathop{\sigma}\nolimits_\phi^n:{\mathcal L}^{n+1}({\mathsf A},{\mathsf B})\to {\mathcal L}^n({\mathsf A},{\mathsf B})$ will be constructed as limits of the operators $\ave[\phi]{\Delta_\alpha}^n$, with respect to an appropriate topology which we now describe.
Let $E$ and $F$ be Banach spaces and let $n \in \mathbb N$ be fixed. For the sake of readability, elements of $E^n$ will be written as $\underline{x}:= (x_1,x_2, \ldots, x_n)$.
For every $\underline{x} \in E^n$ and $y \in F$, we introduce the seminorm
\begin{align*}
p_{\underline{x},y} &\colon {\mathcal L}^n(E, F^*) \to [0, \infty); \quad \psi \mapsto |\langle y, \psi(\underline{x}) \rangle |. \end{align*}
The topology on ${\mathcal L}^n(E, F^*)$ generated by the family of seminorms $(p_{\underline{x},y})_{(\underline{x},y) \in E^n \times F}$ is called the \dt{topology of point-to-$\ensuremath{{\rm w}^*}$ convergence} and will be denoted by~$\tau$.
The topology $\tau$ is linear, locally convex and Hausdorff (see \cite[Chapter~II,~\S4]{STVS}). We record a lemma here for future reference.
\begin{lem}\label{l:point-to-weak* conv} A net $(\psi_{\gamma})_{\gamma \in \Gamma}$ in ${\mathcal L}^n(E, F^*)$ converges to zero with respect to $\tau$ (in notation, $\lim\nolimits^\tau_{\gamma} \psi_{\gamma} =0$) if and only if $\lim\nolimits^\sigma_{\gamma} \psi_{\gamma}(\underline{x}) = 0$ for all $\underline{x} \in E^n$.
\end{lem}
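Indeed, unwinding the definitions: $\psi_\gamma \xrto{\tau} 0$ means that $p_{\underline{x},y}(\psi_\gamma) = |\langle y, \psi_\gamma(\underline{x})\rangle| \to 0$ for every $\underline{x}\in E^n$ and every $y\in F$, and this is precisely the statement that $\psi_\gamma(\underline{x}) \to 0$ in the topology $\sigma(F^*,F)$ for each $\underline{x}\in E^n$.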
\begin{lem}\label{l:coboundary weak-star-cts} Suppose ${\mathsf B}$ is a dual Banach algebra with an isometric predual. Then for every $n\in\mathbb N$, the operator $\mathop{\partial}\nolimits_\phi^n:{\mathcal L}^n({\mathsf A},{\mathsf B})\to{\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$ is $\tau$-to-$\tau$ continuous. \end{lem}
\begin{proof} Let $(\psi_i)$ be a $\tau$-convergent net in ${\mathcal L}^n({\mathsf A},{\mathsf B})$, with limit~$\psi$. Let $a_1,\dots, a_{n+1}\in {\mathsf A}$. By Lemma~\ref{l:point-to-weak* conv}, for each $j=1,\dots, n$ we have \[ \psi_i(a_1,\dots, a_ja_{j+1}, \dots, a_{n+1}) \xrto{\ensuremath{{\rm w}^*}} \psi(a_1,\dots, a_ja_{j+1},\dots, a_{n+1}). \] Also, since ${\mathsf B}$ is a dual Banach algebra, multiplication in ${\mathsf B}$ is separately \ensuremath{{\rm w}^*}-continuous. Hence \[ \begin{aligned} \lim\nolimits^\sigma_i \big(\phi(a_1)\psi_i(a_2,\dots, a_{n+1}) \big) & = \phi(a_1) \ \lim\nolimits^\sigma_i\psi_i(a_2,\dots, a_{n+1}) \\ & = \phi(a_1)\psi(a_2,\dots,a_{n+1})\;, \\ \end{aligned} \] and similarly \[ \begin{aligned} \lim\nolimits^\sigma_i \big(\psi_i(a_1,\dots, a_n)\phi(a_{n+1}) \big) = \psi(a_1,\dots,a_n)\phi(a_{n+1}) . \end{aligned} \] Thus $(\mathop{\partial}\nolimits_\phi^n\psi_i)(a_1,\dots, a_{n+1}) \xrto{\ensuremath{{\rm w}^*}} (\mathop{\partial}\nolimits_\phi^n\psi)(a_1,\dots, a_{n+1})$, as required. \end{proof}
\begin{lem}\label{l:build splitting} Given $n\in\mathbb N$ and $\psi\in{\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$, the net $(\ave[\phi]{\Delta_\alpha}^n\psi)$ $\tau$-converges in ${\mathcal L}^n({\mathsf A},{\mathsf B})$. \end{lem}
\begin{proof} Fix $\psi\in{\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$. Given $a_1,\dots, a_n\in {\mathsf A}$, define $T\in {\mathcal L}({\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D},{\mathsf B})$ by $T(w) := \ave[\phi]{w}^n(\psi)(a_1,\dots, a_n)$. Then $T: {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}\to {\mathsf B}$ is a bounded linear map with values in a dual Banach space, and hence has a unique \wstar-\wstar-continuous extension $\widetilde{T}:({\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D})^{**} \to {\mathsf B}$, which satisfies $\norm{\widetilde{T}}=\norm{T}$.
In particular, \begin{align}\label{eq: aux splitting}
\ave[\phi]{\Delta_\alpha}^n(\psi)(a_1,\dots,a_n) = T(\Delta_\alpha) \xrto{\ensuremath{{\rm w}^*}} \widetilde{T} (\boldsymbol{\Delta}). \end{align} Denote the right-hand side of \eqref{eq: aux splitting} by $\Psi(a_1,\dots, a_n)$.
Routine calculations show that the map $\Psi \colon {\mathsf A}^n \to {\mathsf B}$ is $n$-multilinear.
Using \eqref{eq: aux splitting} and the bound in \eqref{eq:bound of averaging operator}, we obtain
\begin{align*} \norm{\Psi(a_1,\dots,a_n)}
& = \norm{\widetilde{T}(\boldsymbol{\Delta})}
\le \liminf_{\alpha} \norm{T(\Delta_{\alpha})} \le K \norm{T} \notag \\
& \le K \norm{\phi}\norm{\LRES{{\mathsf D}}(\psi)} \norm{a_1}\dots \norm{a_n} \;. \end{align*} Thus $\Psi\in{\mathcal L}^n({\mathsf A},{\mathsf B})$, and $\ave[\phi]{\Delta_\alpha}^n(\psi) \xrto{\tau} \Psi$ by \eqref{eq: aux splitting} and Lemma~\ref{l:point-to-weak* conv}. \end{proof}
\begin{dfn}[Approximate homotopy] \label{d:define approx homotopy} Define $\mathop{\sigma}\nolimits_\phi^n : {\mathcal L}^{n+1}({\mathsf A},{\mathsf B}) \to {\mathcal L}^n({\mathsf A},{\mathsf B})$ by \begin{equation}\label{eq:define split} \mathop{\sigma}\nolimits_\phi^n(\psi) = \lim\nolimits^\tau\nolimits_{\alpha} \ave[\phi]{\Delta_\alpha}^n (\psi) \qquad \text{for all $\psi\in{\mathcal L}^n({\mathsf A},{\mathsf B})$.} \end{equation} This is well-defined by Lemma~\ref{l:build splitting}. \end{dfn}
The following lemma is basic, and is included just for the sake of convenient reference. We leave the proof to the reader.
\begin{lem}\label{l:bound of w*-limit} Let $F$ be a Banach space, and let $(f_i)$ be a net in $F^*$ which converges \ensuremath{{\rm w}^*}\ to some $f\in F^*$. Suppose also that there is a convergent net $(c_i)$ in $[0,\infty)$ such that $\norm{f_i}\le c_i$. Then $\norm{f}\le \lim_i c_i$. \end{lem}
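(In brief: for each $x\in \operatorname{\rm ball}\nolimits_1(F)$, \ensuremath{{\rm w}^*}-convergence gives $\langle x, f\rangle = \lim_i \langle x, f_i\rangle$, so that $|\langle x, f\rangle| \le \limsup_i \norm{f_i} \le \lim_i c_i$; taking the supremum over such $x$ yields $\norm{f}\le \lim_i c_i$.)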
\begin{prop}[Approximate splitting, 2nd version]\label{p:approx-splitting-v2} Suppose $\phi(1_{\mathsf D})=1_{{\mathsf B}}$. Then for all $n\ge 2$ and all $\psi\in {\mathcal L}^n({\mathsf A},{\mathsf B})$, \[ \begin{gathered} \Norm{ \LRES{{\mathsf D}} \left( \mathop{\partial}\nolimits_\phi^{n-1}\mathop{\sigma}\nolimits_\phi^{n-1} (\psi) + \mathop{\sigma}\nolimits_\phi^n \mathop{\partial}\nolimits_\phi^n(\psi) - \psi \right) }_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))}
\\ \le 2 K \operatorname{def}_{{\sD\times\sD}}(\phi)\norm{\LRES{{\mathsf D}}(\psi)}_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))} \;. \end{gathered} \] \end{prop}
\begin{proof} To ease notational congestion, throughout this proof we let \[ M := \norm{\LRES{{\mathsf D}}(\psi)}_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))}\;.\]
Let $a_1\in \operatorname{\rm ball}\nolimits_1({\mathsf D})$ and let $a_2,\dots,a_n\in \operatorname{\rm ball}\nolimits_1({\mathsf A})$; it suffices to prove that \begin{equation}\label{eq:desired} \left\Vert \begin{gathered} \mathop{\partial}\nolimits_\phi^{n-1}\mathop{\sigma}\nolimits_\phi^{n-1} (\psi)(a_1,\dots, a_n) + \mathop{\sigma}\nolimits_\phi^n \mathop{\partial}\nolimits_\phi^n(\psi)(a_1,\dots, a_n) \\ - \psi(a_1,\dots, a_n) \end{gathered} \right\Vert \le 2K \operatorname{def}_{{\sD\times\sD}}(\phi) M\;. \end{equation} Since $\mathop{\partial}\nolimits_\phi^{n-1}:{\mathcal L}^{n-1}({\mathsf A},{\mathsf B})\to {\mathcal L}^n({\mathsf A},{\mathsf B})$ is $\tau$-to-$\tau$ continuous by Lemma~\ref{l:coboundary weak-star-cts}, \begin{equation*} \begin{aligned} \mathop{\partial}\nolimits_\phi^{n-1}\mathop{\sigma}\nolimits_\phi^{n-1}(\psi) + \mathop{\sigma}\nolimits_\phi^n\mathop{\partial}\nolimits_\phi^n(\psi) -\psi &= \lim\nolimits^\tau_\alpha \left( \mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{\Delta_\alpha}^{n-1}(\psi) +
\ave[\phi]{\Delta_\alpha}^n\mathop{\partial}\nolimits_\phi^n(\psi)-\psi \right). \end{aligned} \end{equation*} Thus the left-hand side of the desired inequality \eqref{eq:desired} is equal to \begin{equation}\label{eq:en route} \left\Vert \lim\nolimits^\sigma_\alpha \left( \begin{gathered} \mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{\Delta_\alpha}^{n-1} (\psi)(a_1,\dots, a_n) + \ave[\phi]{\Delta_\alpha}^n \mathop{\partial}\nolimits_\phi^n(\psi)(a_1,\dots, a_n) \\ - \psi(a_1,\dots, a_n) \end{gathered}\right) \right\Vert. \end{equation}
Combining Lemma \ref{l:approx-splitting-v1}, Lemma \ref{l:approx-left-modular}, and the bound in \eqref{eq:bound of averaging operator} yields \[ \begin{aligned} & \left\Vert \begin{gathered} \mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{\Delta_\alpha}^{n-1} (\psi)(a_1,\dots,a_n) + \ave[\phi]{\Delta_\alpha}^n\mathop{\partial}\nolimits_\phi^n(\psi)(a_1,\dots,a_n) \\ - \pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(\Delta_\alpha)\cdot\psi(a_1,\dots,a_n) \end{gathered} \right\Vert \\ = & \left\Vert
\phi(a_1)\cdot \ave[\phi]{\Delta_\alpha}^{n-1}(\psi)(a_2,\dots,a_n)
- \ave[\phi]{\Delta_\alpha\cdot a_1}^{n-1}(\psi)(a_2,\dots,a_n)
\right\Vert \\
\le & \left\{ \begin{gathered} \left\Vert \phi(a_1)\cdot \ave[\phi]{\Delta_\alpha}^{n-1}(\psi)(a_2,\dots,a_n)
- \ave[\phi]{a_1\cdot\Delta_\alpha}^{n-1}(\psi)(a_2,\dots,a_n) \right\Vert \\ + \left\Vert
\ave[\phi]{a_1\cdot\Delta_\alpha-\Delta_\alpha\cdot a_1}^{n-1}(\psi)(a_2,\dots,a_n) \right\Vert \end{gathered} \right. \\
\le & \operatorname{def}_{{\sD\times\sD}}(\phi) \norm{\Delta_\alpha} M + \norm{\phi} \norm{a_1\cdot\Delta_\alpha-\Delta_\alpha\cdot a_1} M. \end{aligned} \] Also, since $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$, using $\operatorname{def}_{{\sD\times\sD}}(\phi)= \norm{ \phi \pi_{{\mathsf D}} - \pi_{{\mathsf B}} (\phi \mathbin{\widehat{\otimes}} \phi) }$, we obtain \begin{align*} & \phantom{\quad} \left\Vert
\pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(\Delta_\alpha)\cdot\psi(a_1,\dots,a_n) - \psi(a_1,\dots,a_n) \right\Vert \\
& \le \norm{\pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(\Delta_\alpha) - \phi(1_{{\mathsf D}})} \norm{\psi(a_1,\dots,a_n)} \\ & \le \norm{\pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(\Delta_\alpha) - \phi(\pi_{{\mathsf D}}(\Delta_\alpha))} M + \norm{\phi( \pi_{{\mathsf D}}(\Delta_\alpha)-1_{{\mathsf D}})} M \\ & \le \operatorname{def}_{{\sD\times\sD}}(\phi) \norm{\Delta_\alpha} M + \norm{\phi} \norm{\pi_{{\mathsf D}}(\Delta_\alpha)-1_{{\mathsf D}}} M \;. \end{align*}
Putting things together, and recalling that $K \ge\sup_\alpha\norm{\Delta_\alpha}$, we have: \begin{equation}\label{eq:before limit} \left\Vert \begin{gathered} \mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{\Delta_\alpha}^{n-1} (\psi)(a_1,\dots,a_n) + \ave[\phi]{\Delta_\alpha}^n\mathop{\partial}\nolimits_\phi^n(\psi)(a_1,\dots,a_n) \\ - \psi(a_1,\dots,a_n) \end{gathered} \right\Vert \le 2 \operatorname{def}_{{\sD\times\sD}}(\phi) KM + \varepsilon_\alpha\;, \end{equation} where $\varepsilon_\alpha :=
\norm{\phi} \norm{a_1\cdot\Delta_\alpha-\Delta_\alpha\cdot a_1} M
+ \norm{\phi} \norm{\pi_{{\mathsf D}}(\Delta_\alpha)-1_{{\mathsf D}}} M$,
which tends to $0$ by \eqref{amenabledef}. Comparing \eqref{eq:en route} and \eqref{eq:before limit},
and appealing to Lemma~\ref{l:bound of w*-limit},
the desired inequality \eqref{eq:desired} follows. \end{proof} \end{subsection}
\begin{subsection}{Defining the ``improving operator''} \label{s:define-imp-op} As in the previous subsection: \begin{itemize} \item ${\mathsf A}$ is a Banach algebra, ${\mathsf B}$ is a unital dual Banach algebra with an isometric predual; \item ${\mathsf D}$ is a closed subalgebra of ${\mathsf A}$, which is unital and amenable with constant $\le K$. \end{itemize} Then, for any given $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$, we may still form the splitting maps $\mathop{\sigma}\nolimits_\phi^n$, as in Definition \ref{d:define approx homotopy}. However, rather than fixing a single $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$ and working with it throughout, we will now allow $\phi$ to vary.
\begin{dfn}\label{d:define improving} The \dt{improving operator} $F: {\mathcal L}({\mathsf A},{\mathsf B})\to{\mathcal L}({\mathsf A},{\mathsf B})$ is defined by
the formula \begin{equation}\label{eq:define next step} F(\phi) := \phi+ \mathop{\sigma}\nolimits_\phi^1(\phi^\vee) = \phi+ \lim\nolimits^\tau_\alpha \ave[\phi]{\Delta_\alpha}^1(\phi^\vee). \end{equation} \end{dfn}
The desired properties of $F$ follow from applying the machinery of Section~\ref{ss:approx-homotopy} to the bilinear map $\phi^\vee\in{\mathcal L}^2({\mathsf A},{\mathsf B})$, viewed as a ``$2$-cocycle'' with respect to the operator $\mathop{\partial}\nolimits_\phi^2$ (see Lemma~\ref{l:2-cocycle}).
We first deal with some technical details that do not depend on amenability of~${\mathsf D}$.
\begin{lem}\label{l:preserved by improvement} Let $\phi,\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ and let $w\in {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$. \begin{romnum} \item If $\psi(1_{{\mathsf D}})=1_{{\mathsf B}}$, then $\ave[\phi]{w}^1(\psi^\vee)(1_{{\mathsf D}})=0$. \item If $\operatorname{def}_{{\sA\times\sD}}(\psi)=0$, then $\ave[\phi]{w}^1(\psi^\vee)(x)=0$ for all $x\in {\mathsf D}$, and
\[ \ave[\phi]{w}^1(\psi^\vee)(ax)= \ave[\phi]{w}^1(\psi^\vee)(a) \cdot \psi(x)\qquad \text{for all $a\in {\mathsf A}$ and $x\in {\mathsf D}$.} \]
\end{romnum}
\end{lem}
\begin{proof} For fixed $\phi$ and $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$, the map $w\mapsto \ave[\phi]{w}^1(\psi^\vee)$ is bounded linear from ${\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$ to ${\mathcal L}({\mathsf A},{\mathsf B})$.
Hence, for both (i) and (ii), it suffices to prove the desired identity in the special case $w=c\mathbin{\otimes} d$, where $c,d\in {\mathsf D}$. \begin{romnum} \item $\ave[\phi]{c\mathbin{\otimes} d}^1(\psi^\vee)(1_{{\mathsf D}}) = \phi(c) \psi^\vee(d,1_{{\mathsf D}}) = \phi(c)\psi(d1_{{\mathsf D}})-\phi(c)\psi(d)\psi(1_{{\mathsf D}})=0$.
\item Let $a\in {\mathsf A}$ and $x\in {\mathsf D}$. Then \[ \ave[\phi]{c\mathbin{\otimes} d}^1(\psi^\vee)(x)
= \phi(c)\psi^\vee(d,x)
= \phi(c)\psi(dx) - \phi(c)\psi(d)\psi(x) = 0 \] and \[ \begin{aligned} \ave[\phi]{c\mathbin{\otimes} d}^1(\psi^\vee)(ax) & = \phi(c)\psi^\vee(d,ax) \\ & = \phi(c)\psi(dax) - \phi(c)\psi(d)\psi(ax) \\ & = \phi(c)\psi(da)\psi(x) - \phi(c)\psi(d)\psi(a)\psi(x) \\ & = \ave[\phi]{c\mathbin{\otimes} d}^1(\psi^\vee)(a) \cdot \psi(x). \end{aligned} \] \end{romnum} \vskip-1.5em \end{proof}
\begin{proof}[Proof of Proposition \ref{p:improving}] Let $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$ with $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$. Let $F$ be as in Definition~\ref{d:define improving}.
\subproofhead{Part \ref{li:unital}: show that $F(\phi)(1_{{\mathsf D}})=1_{{\mathsf B}}$} By the definition of $F$, this is equivalent to showing that $\lim\nolimits^\sigma_\alpha \ave[\phi]{\Delta_\alpha}^1(\phi^\vee)(1_{{\mathsf D}})=0$, which in turn follows from Lemma~{\ref{l:preserved by improvement}(i)}.
\subproofhead{Part \ref{li:small step}: show that $\norm{F(\phi)-\phi} \le K\norm{\phi} \operatorname{def}_{{\sD\times\sA}}(\phi)$} Applying the bound in \eqref{eq:bound of averaging operator} with $\psi=\phi^\vee$ and $w=\Delta_\alpha$ yields \[ \norm{ \ave[\phi]{\Delta_\alpha}^1(\phi^\vee) }
\le K \norm{\phi} \norm{\LRES{{\mathsf D}}(\phi^\vee)} = K \norm{\phi}\operatorname{def}_{{\sD\times\sA}}(\phi) \;.\] Taking the limit on the left-hand side gives the desired bound on $\norm{F(\phi)-\phi}$.
\subproofhead{Part \ref{li:improve defect}: show that $\operatorname{def}_{{\sD\times\sA}}(F(\phi))\le 3K^2\norm{\phi}^2 \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi)$} We put $\gamma:= F(\phi)-\phi = \mathop{\sigma}\nolimits_\phi^1(\phi^\vee)$ in order to simplify some formulas. Rewriting the identity \eqref{eq:linearize} in terms of the operator $\mathop{\partial}\nolimits_\phi^1$, we have \[ (\phi+\gamma)^\vee = \phi^\vee - \mathop{\partial}\nolimits_\phi^1(\gamma) - \pi_{{\mathsf B}}\circ(\gamma\mathbin{\widehat{\otimes}}\gamma) \circ \iota_{{\mathsf A},{\mathsf A}} \;, \] where $\iota_{{\mathsf A},{\mathsf A}} \in \operatorname{Bil}({\mathsf A},{\mathsf A}; {\mathsf A}\mathbin{\widehat{\otimes}} {\mathsf A})$ is the canonical map. Hence \begin{align}\label{eq:two terms to bound} \operatorname{def}_{{\sD\times\sA}}(F(\phi)) &= \norm{\LRES{{\mathsf D}}(\phi+\gamma)^\vee} \notag \\ & \le \norm { \LRES{{\mathsf D}}\left(\phi^\vee - \mathop{\partial}\nolimits_\phi^1(\gamma)\right)} + \norm{ \LRES{{\mathsf D}}( \pi_{{\mathsf B}}\circ (\gamma\mathbin{\widehat{\otimes}}\gamma) \circ \iota_{{\mathsf A},{\mathsf A}}) } \notag \\ & \le \norm { \LRES{{\mathsf D}}\left(\phi^\vee - \mathop{\partial}\nolimits_\phi^1(\gamma)\right)} + \norm{{\gamma\vert}_{{\mathsf D}}}_{{\mathcal L}({\mathsf D},{\mathsf B})} \ \norm{\gamma} \;. \end{align}
To bound the first term on the right-hand side of \eqref{eq:two terms to bound}, we take $n=2$ and $\psi=\phi^\vee$ in Proposition \ref{p:approx-splitting-v2}. This yields \begin{align}\label{eq:use approx splitting} \norm{ \LRES{{\mathsf D}}\left( \mathop{\partial}\nolimits_\phi^1\mathop{\sigma}\nolimits_\phi^1(\phi^\vee) + \mathop{\sigma}\nolimits_\phi^2 \mathop{\partial}\nolimits_\phi^2(\phi^\vee)- \phi^\vee \right) } & \le 2 K \operatorname{def}_{{\sD\times\sD}}(\phi)\norm{\LRES{{\mathsf D}}(\phi^\vee)} \notag \\ & = 2K \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi). \end{align}
Recall that $\mathop{\partial}\nolimits_\phi^2(\phi^\vee)=0$ (by Lemma~\ref{l:2-cocycle}) and $\gamma=\mathop{\sigma}\nolimits_\phi^1(\phi^\vee)$. Hence \eqref{eq:use approx splitting} may be rewritten as
\begin{align}\label{eq: pre bound on 2nd term}
\norm{ \LRES{{\mathsf D}}\left( \mathop{\partial}\nolimits_\phi^1(\gamma) - \phi^\vee \right) }\le 2K \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi). \end{align}
The second term is easier to deal with. We already know from part~\ref{li:small step} of this proposition that $\norm{\gamma} \le K\norm{\phi}\operatorname{def}_{{\sD\times\sA}}(\phi)$. By the same argument, using \eqref{eq:bound of averaging operator}, we obtain \[ \norm{{\gamma\vert}_{{\mathsf D}}}_{{\mathcal L}({\mathsf D},{\mathsf B})} \le K \norm{\phi}\norm{{\phi^\vee \vert}_{{\sD\times\sD}} }_{{\mathcal L}^2({\mathsf D},{\mathsf B})} = K\norm{\phi} \operatorname{def}_{{\sD\times\sD}}(\phi). \] Hence \begin{equation}\label{eq:bound on 2nd term} \norm{{\gamma\vert}_{{\mathsf D}}}_{{\mathcal L}({\mathsf D},{\mathsf B})} \norm{\gamma} \le K^2\norm{\phi}^2 \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi). \end{equation}
Combining \eqref{eq:two terms to bound} with \eqref{eq: pre bound on 2nd term} and \eqref{eq:bound on 2nd term} yields \[ \operatorname{def}_{{\sD\times\sA}}(F(\phi))\le (2K+K^2\norm{\phi}^2) \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi). \] To finish off the proof of part~\ref{li:improve defect} it suffices to observe that $K\ge 1$ (because $\pi_{{\mathsf D}} \colon {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}\to {\mathsf D}$ is contractive and $(\pi_{{\mathsf D}}(\Delta_\alpha))_{\alpha \in I}$ is a b.a.i.\ for ${\mathsf D}$) and $\norm{\phi}\ge 1$ (since $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$ and both ${\mathsf A}$ and ${\mathsf B}$ are unital).
\subproofhead{Part \ref{li:preserve right}: show that if $\operatorname{def}_{{\sA\times\sD}}(\phi)=0$, then $\operatorname{def}_{{\sA\times\sD}}(F(\phi))=0$} Applying Lemma \ref{l:preserved by improvement}~(ii) with $w=\Delta_\alpha$ and $\psi=\phi$, and then taking the limit, we have \[ \gamma(x) = \lim\nolimits^\sigma_\alpha \ave[\phi]{\Delta_\alpha}^1(\phi^{\vee})(x) = 0 \qquad \text{for all $x\in {\mathsf D}$} \] and \[ \gamma(ax) -\gamma(a)\phi(x) = \lim\nolimits^\sigma_\alpha \big(\ave[\phi]{\Delta_\alpha}^1(\phi^{\vee})(ax) - \ave[\phi]{\Delta_\alpha}^1(\phi^{\vee})(a) \cdot \phi(x) \big) = 0 \quad \text{for all $a\in {\mathsf A}, x\in {\mathsf D}$.} \] Hence, whenever $a\in {\mathsf A}$ and $x\in {\mathsf D}$, we have \[ \begin{aligned} F(\phi)^\vee(a,x) & = \phi(ax)+\gamma(ax) - (\phi(a)+\gamma(a))(\phi(x)+\gamma(x)) \\ & = \phi(ax)+\gamma(a)\phi(x) - (\phi(a)+\gamma(a))\phi(x) \\ & = 0 \end{aligned} \] as required. \end{proof}
This completes the proof of Proposition~\ref{p:improving}, and hence --- via Theorem \ref{t:one-sided improved BEJ} --- the proof of Theorem \ref{t:main innovation}.
\end{subsection} \end{section}
\end{ack}
\appendix
\begin{section}{Constructing an uncountable clone system for the Tsi\-rel\-son space}
Let $T$ denote the Tsirelson space. In this appendix we prove the following result.
\begin{prop}\label{p:tsirelson} There is an uncountable clone system for $T$. \end{prop}
\begin{proof} We use the notation and terminology of \cite{CS} and \cite[Section~3]{BKL}. Let $(t_n)$ denote the unit vector basis for~$T$. For a subset $M$ of~$\mathbb N$, $P_M$ is the norm one basis projection onto the closed linear span of $\{ t_m : m\in M\}$, denoted by $T_M$. We first recall a few definitions. We say that $J\subseteq \mathbb N$ is a nonempty \dt{Schreier set} if $J$ is a finite set with $\lvert J\rvert\le\min J$. Let $M \subseteq \mathbb N$. We say that $J$ is an \dt{interval in $\mathbb N\setminus M$} if $J$ is of the form $J= [a,b] \cap \mathbb N$ for some real numbers $b >a \ge 1$, such that $J \cap M = \emptyset$. Lastly, if $M \subseteq \mathbb N$ and $J$ is an interval in $\mathbb N\setminus M$, we define \[ \sigma(\mathbb N,J) = \sup\biggl\{\sum_{j\in J} s_j : s_j\in [0,1]\ (j\in J),\,
{\biggl\| \sum_{j\in J} s_jt_j
\biggr\| }_T\le 1\biggr\}. \] We rely on the following two results: \begin{itemize} \item Let $J\subseteq \mathbb N$ be a nonempty Schreier set. Then \[ \norm{ x } \ge \frac12\sum_{j\in J}\lvert x_j \rvert\qquad \text{for all $x = (x_j)\in T$.} \]
This is an immediate consequence of how the Tsirelson norm is defined.
\item For an infinite $M\subseteq \mathbb N$, we have $T_M\cong T$ if and only if there is a constant $C\ge 1$ such that $\sigma(\mathbb N,J)\le C$ for every interval $J$ in $\mathbb N\setminus M$. This is a special case of a result of Casazza--Johnson--Tzafriri~\cite{CJT}, stated in \cite[Corollary~3.2]{BKL}, and applied here only in the particular case where $N = \mathbb N$. \end{itemize}
Combining these two results, we obtain the following conclusion: Suppose that $M = \{m_1<m_2<\cdots\}\subseteq \mathbb N$ is an infinite set with \begin{equation}\label{eq1}
m_1 = 1\qquad\text{and}\qquad m_{j+1}\le 2m_j+2\qquad \text{for all $j\in\mathbb N$.} \end{equation} For every nonempty interval $J$ in $\mathbb N\setminus M$, there is a unique $j\in\mathbb N$ such that $J\subseteq [m_j+1,m_{j+1}-1]$. This implies that \[ \lvert J\rvert\le (m_{j+1}-1)-(m_j+1)+1\le 2m_j+1 - m_j = m_j+1\le \min J, \] so $J$ is a Schreier set, and therefore
\[ {\biggl\|\sum_{j\in J} s_jt_j\biggr\|}_T\ge \frac12\sum_{j\in J} s_j\qquad \text{for all $s_j\in[0,1]$, $j\in J$} \] by the first bullet point, so $\sigma(\mathbb N,J)\le 2$. Hence $T_M\cong T$ by the second bullet point. In fact, it follows from the second part of the proof of Theorem~$10$ and the paragraph before Proposition~$3$ in \cite{CJT} that $T_M$ and $T$ are $4$-isomorphic.
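The combinatorial step above admits a quick numerical sanity check (our addition, not part of the proof). The following Python sketch builds the extremal sequence $m_{j+1}=2m_j+2$ permitted by~\eqref{eq1} and verifies that every maximal interval in $\mathbb N\setminus M$ is a Schreier set:

```python
# Sanity check (not part of the proof): for M satisfying (eq1) with the
# extremal recursion m_{j+1} = 2 m_j + 2, every maximal interval in N \ M
# is a Schreier set, i.e. |J| <= min J.
def intervals_in_complement(M, bound):
    """Maximal integer intervals of {1,...,bound} disjoint from M."""
    Ms = set(M)
    out, cur = [], []
    for n in range(1, bound + 1):
        if n in Ms:
            if cur:
                out.append(cur)
            cur = []
        else:
            cur.append(n)
    if cur:
        out.append(cur)
    return out

M = [1]
while M[-1] < 10**4:
    M.append(2 * M[-1] + 2)          # 1, 4, 10, 22, 46, ...

for J in intervals_in_complement(M, M[-1]):
    assert len(J) <= min(J)          # J is a Schreier set
```

Each interval between consecutive elements $m_j<m_{j+1}$ has length $m_{j+1}-m_j-1=m_j+1$ and minimum $m_j+1$, matching the displayed estimate.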
We can therefore establish the result by constructing an uncountable, almost disjoint family~${\mathcal D}$ of sets whose elements satisfy~\eqref{eq1}. For then, the uncountable family of norm one idempotents $(P_M)_{M \in {\mathcal D}}$ will be the desired clone system. We construct ${\mathcal D}$ as follows.
Given a function $f\in\{0,1\}^{\mathbb N}$, define \[ m_n(f) = 2^{n-1} + \sum_{j=1}^{n-1} f(j)2^{n-1-j}\qquad \text{for all $n\in\mathbb N$.} \] Alternatively, we can state this definition recursively as follows: \begin{equation}\label{eq2}
m_1(f)=1\qquad\text{and}\qquad m_{n+1}(f) = 2m_n(f)+f(n)\qquad \text{for all $n\in\mathbb N$.} \end{equation} Set \[ M(f) = \{ m_n(f) : n\in\mathbb N \} \qquad\text{and}\qquad {\mathcal D} = \{ M(f) : f\in\{0,1\}^{\mathbb N} \}. \] Clearly $M(f)$ is an infinite subset of $\mathbb N$ for each $f\in\{0,1\}^{\mathbb N}$. Since $f(n)\in\{0,1\}$, the recursive definition~\eqref{eq2} shows that the elements of $M(f)$ satisfy~\eqref{eq1}.
It remains to verify that the family ${\mathcal D}$ is almost disjoint. More precisely, for distinct functions $f,g\in\{0,1\}^{\mathbb N}$, we claim that $\lvert M(f)\cap M(g)\rvert=k$, where $k\in\mathbb N$ is the smallest number such that $f(k)\ne g(k)$. This however follows from an easy induction argument and~\eqref{eq2}. \end{proof}
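The almost-disjointness claim can likewise be tested numerically. The sketch below (our addition) implements the recursion~\eqref{eq2} for truncated sequences $f,g\in\{0,1\}^N$ and checks both~\eqref{eq1} and $\lvert M(f)\cap M(g)\rvert=k$:

```python
# Numerical check (illustration only): m_1(f) = 1, m_{n+1}(f) = 2 m_n(f) + f(n),
# the bound (eq1), and the claim |M(f) ∩ M(g)| = min{k : f(k) != g(k)}.
import random

def m_sequence(f, N):
    """First N terms of (m_n(f)) via the recursion (eq2); f is 0-based."""
    ms = [1]
    for n in range(1, N):
        ms.append(2 * ms[-1] + f[n - 1])   # f[n-1] plays the role of f(n)
    return ms

random.seed(0)
N = 20
for _ in range(100):
    f = [random.randint(0, 1) for _ in range(N)]
    g = [random.randint(0, 1) for _ in range(N)]
    if f == g:
        continue
    mf, mg = m_sequence(f, N), m_sequence(g, N)
    # (eq1): m_1 = 1 and m_{j+1} <= 2 m_j + 2
    assert mf[0] == 1 and all(mf[j + 1] <= 2 * mf[j] + 2 for j in range(N - 1))
    k = next(j for j in range(N) if f[j] != g[j]) + 1   # smallest k with f(k) != g(k)
    assert len(set(mf) & set(mg)) == k
```

Since $m_n(f)\in[2^{n-1},2^n-1]$, elements with different indices can never collide, so the intersection is exactly $\{m_1,\dots,m_k\}$, in line with the induction argument.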
\end{section}
\Addresses
\end{document}
\begin{document}
\title{Wave Support Theorem \\and Inverse Resonant Uniqueness on the Line} \author{Lung-Hui Chen$^1$}\maketitle\footnotetext[1]{General Education Center, Ming Chi University of Technology, New Taipei City, 24301, Taiwan. Email: mr.lunghuichen@gmail.com.} \begin{abstract} In this paper, we study the inverse problem posed with the resonant scattering determinant as data. We analyze the structure of the characteristics of the perturbed linear waves. Assuming that the potential perturbations have a common part, the corresponding perturbed waves propagate along the same strips, and we estimate the common part of the perturbed wave and its Fourier transform.
We deduce the partial inverse uniqueness from the Nevanlinna type of representation theorem. \\MSC: 34B24/35P25/35R30. \\Keywords: scattering; resonance; Schr\"{o}dinger equation; inverse problem; Nevanlinna theorem. \end{abstract} \section{Introduction} Let us consider the Schr\"{o}dinger equation \begin{equation}\label{1.1}
-\frac{d^{2}}{dx^{2}}+V(x),\,x\in\mathbb{R},\,V(x)\in L_{comp}^{1}([a,b]),\,a<0<b,\,|a|\ll|b|, \end{equation} where we assume the potential is effectively supported on $[a,b]$, that is, $[a,b]$ is the smallest interval containing the support of $V$. The scattering matrix is of the form \begin{equation}\label{S} S(k)=\left(\begin{array}{cc}\frac{ik}{\hat{X}(k)} & \frac{\hat{Y}(k)}{\hat{X}(k)}
\\\frac{\hat{Y}(-k)}{\hat{X}(k)} & \frac{ik}{\hat{X}(k)}\end{array}\right). \end{equation} The scattering matrix $S(k)$ is meromorphic in $\mathbb{C}$, and its poles in $\{\Im k>0\}$ are the square roots of $L^{2}$-eigenvalues of~(\ref{1.1}). In this paper, we understand $\hat{X}(k)$ through the one-dimensional wave equation \begin{eqnarray} \label{AA} \left\{ \begin{array}{ll} \big(D_{x}^{2}-D_{y}^{2}+V(x)\big)A_{\pm}(x,y)=0;
\\ A_{\pm}(x,y)=\delta(x-y),\,\pm x\gg0, \end{array} \right. \end{eqnarray} where $D_{y}A_{-}(x,y)=X(y-x)+Y(y+x)$. In particular \cite[p.\,727]{Mellin}, \begin{eqnarray}\label{1122} &&X(x)-\delta'(x)+\frac{\int V(t)dt}{2}\delta(x)\in L^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R});\\ &&Y(y)-\frac{V(y/2)}{4}\in L^{1}(\mathbb{R})\cap L^{\infty}(\mathbb{R}).\label{511} \end{eqnarray} Thus, $A_{\pm}(x,y)$ satisfies the wave equation with $x$ taking the place of time (this choice is dictated by the forcing condition imposed) in~(\ref{AA}). The uniqueness part follows from the energy estimates of the wave equation \cite{Dya,Mellin,Tang,Zworski2}.
\par In this paper, we consider the complex analysis of the entire functions $\hat{X}(k)$ and $\hat{Y}(k)$, which are represented in the form \begin{eqnarray}\label{1.6} &&\hat{X}(k)=\int_{-2(b-a)}^{0}X(x)e^{-ikx}dx;\\\label{1.7} &&\hat{Y}(k)=\int_{2a}^{2b}Y(y)e^{-iky}dy,\,k\in\mathbb{C}, \end{eqnarray} and which satisfy the unitary identity in $\mathbb{C}$: \begin{equation}\label{U} \hat{X}(k)\hat{X}(-k) = k^{2} + \hat{Y}(k)\hat{Y}(-k). \end{equation} \par More importantly, we take \begin{equation}\label{SS} \det S(k):=\frac{-\hat{X}(-k)}{\hat{X}(k)} \end{equation} as the scattering data in this paper, inspired by its simpler analytic structure. \par We define for the potentials $V^{j}$, $j=1,2$, and $0<r\ll1$, \begin{eqnarray}\label{1.10} &&\hat{X}^{j}_{1}(k):=\mathcal{F}\{X^{j}\chi_{[2a,0]}(y-b)\}(k);\\\label{1.11} &&\hat{X}^{j}_{2}(k):=\mathcal{F}\{X^{j}\chi_{[2a-2r,2a]}(y-b)\}(k);\\\label{1.12} &&\hat{X}^{j}_{3}(k):=\mathcal{F}\{X^{j}\chi_{[2a-2b,2a-2r]}(y-b)\}(k);\\ &&\hat{Y}^{j}_{1}(k):=\mathcal{F}\{Y^{j}\chi_{[2a,0]}(y+b)\}(k);\\ &&\hat{Y}^{j}_{2}(k):=\mathcal{F}\{Y^{j}\chi_{[0,2r]}(y+b)\}(k);\\ &&\hat{Y}^{j}_{3}(k):=\mathcal{F}\{Y^{j}\chi_{[2r,2b]}(y+b)\}(k), \end{eqnarray} in which $\chi_{[2a,0]}(x)$ is the characteristic function of $[2a,0]$, and so on. The support of the linear waves $D_{y}A^{j}_{-}(x,y)$ is illustrated by the shaded areas in Figure \ref{F}. Most importantly, $X^{j}\chi_{[2a-2r,2a]}(y-b)$ is not in the domain of influence of $V^{j}\chi_{[a,0]}$ when the time variable $x=b$. The function $D_{y}A^{j}_{-}(x,y)=X(y-x)+Y(y+x)$ satisfies the wave equation for $x\geq b$. For $x\leq a$, we have $D_{y}A^{j}_{-}(x,y)=-\delta'(x-y)$. If $b=0$, we first see the support of $X^{j}\chi_{[2a,0]}(y)$. For $b\gg0$, the red triangle in the center of the diagram shows the support of the part of the wave solution that is not affected by $V^{j}\chi_{[a,0]}$ and depends only on $V^{j}\chi_{[0,b]}$.
We refer to Figure \ref{F} for a more detailed construction and analysis of the characteristics. \begin{figure}
\caption{Characteristics of linear waves}
\label{F}
\end{figure}
\par Let $$\det S^{j}(k):=\frac{-\hat{X}^{j}(-k)}{\hat{X}^{j}(k)},\,j=1,2, $$ be the scattering determinant corresponding to the potential $V^{j}$. \begin{theorem}\label{11} If $V^{1}(x)\equiv V^{2}(x)$ on the non-empty interval $[0,b]\subset[a,b]$, and $\det S^{1}(k)=\det S^{2}(k)$, then $V^{1}(x)\equiv V^{2}(x)$ on $[a,b]$. \end{theorem} In the literature, when there are no bound states, the potential is determined by the reflection coefficients via Faddeev's theory \cite{DT79,Fa64}. In the inverse resonance problem, we aim to determine the potential $V$ from the resonances of~(\ref{1.1}), which include the square roots of the $L^{2}$-eigenvalues. The inverse resonance problem of the Schr\"{o}dinger operator on the half line has been studied in \cite{Korotyaev1,Korotyaev3,Zworski2}. In the half-line case, the unique recovery of the potential from the eigenvalues and resonances is justified in \cite{Korotyaev1}. However, in the full-line case, the inverse resonance problem remained largely open for a long time. It is known that the potential cannot be determined solely by the eigenvalues and resonances. Specifically, Zworski \cite{Zworski2} proved a uniqueness theorem for symmetric potentials along with certain isopolar results. Furthermore, Korotyaev \cite{Korotyaev1,Korotyaev3} applied value distribution theory from complex analysis to prove that all eigenvalues and resonances, together with a signed sequence, uniquely determine the potential $V$.
\section{Lemmata} \begin{lemma}\label{21} If $F(x)$ is of bounded variation on $(-\infty,\infty)$, then $F(x)$ is constant except on some finite interval if and only if $f(z)=\int_{-\infty}^{\infty}e^{izt}dF(t)$ is an entire function of exponential type; and if $(a,b)$ is the smallest interval outside which $F(x)$ is constant, then $a=-h_{f}(\frac{\pi}{2})$ and $b=h_{f}(-\frac{\pi}{2})$. \end{lemma} \begin{proof} We refer the proof to Boas \cite[p.\,109]{Boas}. \end{proof}
Here, we note that the indicator diagram of $\hat{X}^{j}(k)$ is the segment $i[-2(b-a),0]$ on the imaginary axis. \begin{lemma} The length of the indicator diagram of $\hat{X}^{j}(k)$ is $2(b-a)$. \end{lemma} \begin{proof} We use Lemma \ref{21} and~(\ref{1.6}) to conclude the result. \end{proof} \begin{lemma}
$\hat{X}^{j}_{1}(k)$ has indicator function $|a||\sin\theta|$;\,$\hat{X}^{j}_{2}(k)$ has indicator function $r|\sin\theta|$;\,$\hat{X}^{j}_{3}(k)$ has indicator function $|b-r||\sin\theta|$. \end{lemma} \begin{proof} This is straightforward from Lemma \ref{21},~(\ref{1.10}),~(\ref{1.11}), and~(\ref{1.12}). \end{proof} \begin{lemma}\label{24} $\hat{X}^{j}_{j'}(k)$ has only finitely many zeros in $\mathbb{C}^{+}$, and infinitely many in $\mathbb{C}^{-}$. \end{lemma} \begin{proof} This is well known in the literature; see, e.g., \cite{Dya}. \end{proof} \begin{lemma}\label{id} If $V^{1}\equiv V^{2}$ on $[0,b]$, then $\hat{X}^{1}_{2}(k)\equiv\hat{X}^{2}_{2}(k)$ and $\hat{Y}^{1}_{2}(k)\equiv\hat{Y}^{2}_{2}(k)$. \end{lemma} \begin{proof} Let us examine Figure \ref{F}. First take $b=0$, so the waves starting inside the triangle with vertices $(0,0)$, $(a,a)$, $(0,2a)$ propagate in parallel strips: the shaded strip between the lines through $(a,a)$ and $(0,2a)$, and the strip between the lines through $(a,a)$ and $(0,0)$. The wave functions $X^{j}_{1}$ and $Y^{j}_{1}$, $j=1,2$, are supported on those two strips, respectively. Now consider $x\in[0,b]$: if $V^{1}\equiv V^{2}$ on $[0,b]$, then the wave functions satisfy $Y^{1}_{2}\equiv Y^{2}_{2}$ on the strip between the lines containing $(0,0)$ and $(b,b)$. Similarly, $X^{1}_{2}\equiv X^{2}_{2}$ on the strip between the lines containing $(0,2a)$ and $(b,2a-b)$. Now we deduce from~(\ref{1.11}) that $\hat{X}^{1}_{2}(k)\equiv\hat{X}^{2}_{2}(k)$. A similar argument works for $\hat{Y}^{1}_{2}(k)\equiv\hat{Y}^{2}_{2}(k)$.
\end{proof}
\section{Proof of Theorem \ref{11}}
We start with the assumption of Theorem \ref{11}: $$\det S^{1}(k)\equiv\det S^{2}(k),$$ that is, \begin{equation}\label{3.1} \frac{\hat{X}^{1}(-k)}{\hat{X}^{1}(k)}\equiv\frac{\hat{X}^{2}(-k)}{\hat{X}^{2}(k)}. \end{equation} Using Lemma \ref{24} and comparing the poles on both sides of~(\ref{3.1}), we see that $\hat{X}^{1}(k)$ and $\hat{X}^{2}(k)$ have common zeros in $\mathbb{C}^{-}$, except for finitely many in $\mathbb{C}^{+}$. If $\sigma$ is a common zero of $\hat{X}^{1}(k)$ and $\hat{X}^{2}(k)$, then \begin{equation} \hat{X}^{1}_{1}(\sigma) +\hat{X}^{1}_{3}(\sigma)=\hat{X}^{2}_{1}(\sigma)+\hat{X}^{2}_{3}(\sigma)=0, \end{equation} by Lemma \ref{id}. Now we want to show that \begin{equation}\label{3.2} \hat{X}^{1}(k):=\hat{X}^{1}_{1}(k)+\hat{X}^{1}_{2}(k) +\hat{X}^{1}_{3}(k)\equiv\hat{X}^{2}_{1}(k)+\hat{X}^{2}_{2}(k)+\hat{X}^{2}_{3}(k)=:\hat{X}^{2}(k), \end{equation} that is, \begin{equation}\label{3.22} \hat{X}^{1}_{1}(k) +\hat{X}^{1}_{3}(k)\equiv\hat{X}^{2}_{1}(k)+\hat{X}^{2}_{3}(k). \end{equation} By Lemma \ref{id},~(\ref{3.2}) and~(\ref{3.22}) must have the same density of common zeros. \par Let us count the zero set of \begin{equation} F(k):=\hat{X}^{j}(k),\,j=1,2, \end{equation} and the zero set of \begin{equation}\label{3.3} G(k):=\hat{X}^{1}_{1}(k) +\hat{X}^{1}_{3}(k)-\hat{X}^{2}_{1}(k)-\hat{X}^{2}_{3}(k). \end{equation} Using~(\ref{119}) and~(\ref{120}) in the Appendix, we deduce that \begin{eqnarray}
&&h_{F}(\theta)=(b-a)|\sin\theta|;\\
&&h_{G}(\theta)=\max\{|a|,|b-r|\}|\sin\theta|=|b-r||\sin\theta|. \end{eqnarray} Using Theorem \ref{C}, $F(z)$ has zero density $\frac{b-a}{\pi}$, while $G(k)$ has zero density $\frac{b-r}{\pi}$, which is strictly smaller. Unless $G\equiv0$, this contradicts Lemma \ref{id}. Hence, we deduce that $$ \hat{X}^{1}_{1}(k) +\hat{X}^{1}_{3}(k)\equiv\hat{X}^{2}_{1}(k)+\hat{X}^{2}_{3}(k) .$$ Due to Lemma \ref{id}, we then have $$\hat{X}^{1}(k)\equiv\hat{X}^{2}(k).$$ \par Using~(\ref{U}), we have \begin{equation}\label{312} \hat{X}^{j}(k)\hat{X}^{j}(-k)=k^{2}+\hat{Y}^{j}(k)\hat{Y}^{j}(-k). \end{equation} Thus, we obtain \begin{equation}\label{313} \hat{Y}^{1}(k)\hat{Y}^{1}(-k)\equiv\hat{Y}^{2}(k)\hat{Y}^{2}(-k),\,k\in\mathbb{C}. \end{equation} Equivalently,
$$|\hat{Y}^{1}(k)|^{2}=|\hat{Y}^{2}(k)|^{2},\,k\in\mathbb{R},$$ and we apply Theorem \ref{NL} to deduce \begin{equation}\label{3100} \hat{Y}^{1}(k)\prod_{n=1}^{\infty}\frac{1-\frac{k}{\overline{a}^{1}_{n}}}{1-\frac{k}{a^{1}_{n}}}=e^{i\gamma}\hat{Y}^{2}(k)\prod_{n=1}^{\infty}\frac{1-\frac{k}{\overline{a}^{2}_{n}}}{1-\frac{k}{a^{2}_{n}}}, \end{equation} where $\{a_{n}^{j}\}$ are the zeros of $\hat{Y}^{j}(k)$ in $\mathbb{C}^{+}$. The Blaschke product $$\prod_{n=1}^{\infty}\frac{1-\frac{k}{\overline{a}^{j}_{n}}}{1-\frac{k}{a^{j}_{n}}}$$ is a function of zero exponential type whose zeros have zero density. We refer to \cite{Boas} for the details. Hence, the zero density of each side of~(\ref{3100}) is $\frac{b-a}{\pi}$. Thus, $\hat{Y}^{1}(k)$ and $\hat{Y}^{2}(k)$ have common zeros of density $\frac{b-a}{\pi}$. That is, \begin{equation} \hat{Y}^{1}(\sigma)=\hat{Y}^{2}(\sigma),\,\forall\sigma\in\Sigma, \end{equation} in which $\Sigma$ is a set of density $\frac{b-a}{\pi}$. That is, \begin{equation} \hat{Y}^{1}_{1}(\sigma)+\hat{Y}^{1}_{2}(\sigma)+\hat{Y}^{1}_{3}(\sigma)=\hat{Y}^{2}_{1}(\sigma)+\hat{Y}^{2}_{2}(\sigma)+\hat{Y}^{2}_{3}(\sigma),\,\forall\sigma\in\Sigma. \end{equation} By Lemma \ref{id}, this reduces to \begin{equation} \hat{Y}^{1}_{1}(\sigma)+\hat{Y}^{1}_{3}(\sigma)=\hat{Y}^{2}_{1}(\sigma)+\hat{Y}^{2}_{3}(\sigma),\,\forall\sigma\in\Sigma. \end{equation} However, unless identically zero, the difference of the two sides could only have a zero set of density at most $\frac{b-a-r}{\pi}$. Hence, $\hat{Y}^{1}_{1}(k)+\hat{Y}^{1}_{3}(k)\equiv\hat{Y}^{2}_{1}(k)+\hat{Y}^{2}_{3}(k)$, and then $$\hat{Y}^{1}(k)\equiv\hat{Y}^{2}(k).$$ Therefore, we deduce that $V^{1}$ and $V^{2}$ have the same scattering matrix: \begin{equation} S^{1}(k)\equiv S^{2}(k). \end{equation} Using Zworski \cite[Proposition\,8]{Zworski}, which says that a potential with compact support is determined by its scattering matrix, we deduce that \begin{eqnarray} V^{1}(x)\equiv V^{2}(x). \end{eqnarray} This proves the theorem.
\section{Appendix} We review some results from complex analysis \cite{Boas,Levin2}. \begin{definition} Let $f(z)$ be an entire function. Let
\begin{equation}\nonumber M_f(r):=\max_{|z|=r}|f(z)|. \end{equation} An entire function $f(z)$ is said to be of finite order if there exists a positive constant $k$ such that the inequality \begin{equation}\nonumber M_f(r)<e^{r^k} \end{equation} is valid for all sufficiently large values of $r$. The greatest lower bound of such numbers $k$ is called the order of the entire function $f(z)$. By the type $\sigma$ of an entire function $f(z)$ of order $\rho$, we mean the greatest lower bound of the positive numbers $A$ for which we asymptotically have \begin{equation}\nonumber M_f(r)<e^{Ar^\rho}. \end{equation} That is, \begin{equation}\nonumber \sigma_{f}:=\limsup_{r\rightarrow\infty}\frac{\ln M_f(r)}{r^\rho}. \end{equation} If $0<\sigma_{f}<\infty$, then we say $f(z)$ is of normal type or mean type. For $\sigma_{f}=0$, we say $f(z)$ is of minimal type. \end{definition} We refer to \cite{Levin2} for the details. \begin{definition}\label{33} Let $f(z)$ be an integral function of finite order $\rho$ in the angle $[\theta_1,\theta_2]$. The following quantity is called the indicator function of $f(z)$:
\begin{equation}\nonumber h_f(\theta):=\lim_{r\rightarrow\infty}\frac{\ln|f(re^{i\theta})|}{r^{\rho}}, \,\theta_1\leq\theta\leq\theta_2. \end{equation} \end{definition} The type of a function is connected to the maximal value of indicator function. \begin{definition}\label{d} The following quantity is called the width of the indicator diagram of entire function $f$: \begin{equation}\label{22222} d=h_f(\frac{\pi}{2})+h_f(-\frac{\pi}{2}). \end{equation} \end{definition}
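As an illustration of Definition \ref{d} (an example we add, not taken from the references): for the transform $f(k)=\int_{0}^{1}e^{-ikt}\,dt=(1-e^{-ik})/(ik)$ of the characteristic function of $[0,1]$, written in the convention of~(\ref{1.6}), one has $h_f(\pi/2)=1$ and $h_f(-\pi/2)=0$, so the width of the indicator diagram equals the length of the support interval. A numerical sketch:

```python
# Illustration (assumed example): f(k) = (1 - e^{-ik})/(ik) is the transform of
# the characteristic function of [0,1]; its indicator satisfies h(pi/2) = 1,
# h(-pi/2) = 0, so the width of the indicator diagram is 1 = |[0,1]|.
import math

def log_abs_f(s):
    """log |f(i s)| for real s; f(is) = (e^s - 1)/s, computed stably."""
    if s > 0:
        return s + math.log1p(-math.exp(-s)) - math.log(s)
    return math.log1p(-math.exp(s)) - math.log(-s)

R = 1000.0
h_up = log_abs_f(R) / R        # approximates h_f(pi/2); exact value 1
h_down = log_abs_f(-R) / R     # approximates h_f(-pi/2); exact value 0
width = h_up + h_down          # approximates the width of the indicator diagram
assert abs(h_up - 1.0) < 0.02
assert abs(h_down) < 0.02
assert abs(width - 1.0) < 0.03
```

The slow $O(\ln R/R)$ convergence visible here is the usual rate for the indicator limit.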
\begin{definition}\label{255} Let $f(z)$ be an integral function of order $1$, and let $n(f,\alpha,\beta,r)$ denote the number of the zeros of $f(z)$
inside the angle $[\alpha,\beta]$ and $|z|\leq r$. We define the density function as \begin{equation}\nonumber \Delta_f(\alpha,\beta):=\lim_{r\rightarrow\infty}\frac{n(f,\alpha,\beta,r)}{r}, \end{equation} and \begin{equation}\nonumber \Delta_f(\beta):=\Delta_f(\alpha_0,\beta), \end{equation} with some fixed $\alpha_0\notin E$, where $E$ is at most a countable set \cite{Boas,Levin,Levin2}. In particular, we denote the density function of $f$ on the open right/left half complex plane by $\Delta^{+}_{f}$/$\Delta^{-}_{f}$, respectively. Similarly, we can define the set density of a zero set $S$. Let $n(S,r)$ be the number of the discrete elements of $S$ in $\{|z|<r\}$. We define \begin{equation}\nonumber \Delta_S:=\lim_{r\rightarrow\infty}\frac{n(S,r)}{r}. \end{equation}
\end{definition} \begin{theorem}[Cartwright]\label{C} Let $f$ be an entire function of exponential type with zero set $\{a_{k}\}$. We assume $f$ satisfies one of the following conditions: \begin{equation}\nonumber \mbox{ the integral
}\int_{-\infty}^\infty\frac{\ln^+|f(x)|}{1+x^2}dx\mbox{ exists}. \end{equation} \begin{equation}\nonumber
|f(x)|\mbox{ is bounded on the real axis}. \end{equation} Then \begin{enumerate}
\item all of the zeros of the function $f(z)$, except possibly those of a set of zero density, lie inside arbitrarily small angles $|\arg z|<\epsilon$ and $|\arg z-\pi|<\epsilon$, where the density \begin{equation} \Delta_f(-\epsilon,\epsilon)=\Delta_f(\pi-\epsilon,\pi+\epsilon)=\lim_{r\rightarrow\infty} \frac{n(f,-\epsilon,\epsilon,r)}{r} =\lim_{r\rightarrow\infty}\frac{n(f,\pi-\epsilon,\pi+\epsilon,r)}{r}, \end{equation} is equal to $\frac{d}{2\pi}$, where $d$ is the width of the indicator diagram in~(\ref{22222}). Furthermore, the limit $\delta=\lim_{r\rightarrow\infty}\delta(r)$ exists, where $$
\delta(r):=\sum_{\{|a_k|<r\}}\frac{1}{a_k}; $$ \item moreover, \begin{equation}\nonumber \Delta_f(\epsilon,\pi-\epsilon)=\Delta_f(\pi+\epsilon,-\epsilon)=0; \end{equation} \item the function $f(z)$ can be represented in the form
\begin{equation}\nonumber f(z)=cz^me^{i\kappa z}\lim_{r\rightarrow\infty}\prod_{\{|a_k|<r\}}(1-\frac{z}{a_k}), \end{equation} where $c,m,\kappa$ are constants and $\kappa$ is real; \item the indicator function of $f$ is of the form \begin{equation}
h_f(\theta)=\sigma|\sin\theta|. \end{equation} \end{enumerate} \end{theorem} We refer to Levin \cite[p.\,251]{Levin} for Cartwright's theory. \begin{lemma}\label{36} Let $f$, $g$ be two entire functions. Then the following two inequalities hold: \begin{eqnarray} &&h_{fg}(\theta)\leq h_{f}(\theta)+h_g(\theta),\mbox{ if one limit exists};\label{119}\\\label{120} &&h_{f+g}(\theta)\leq\max\{h_f(\theta),h_g(\theta)\}, \end{eqnarray} where the equality in~(\ref{119}) holds if one of the functions is of completely regular growth, and the equality in~(\ref{120}) holds at each $\theta_0$ at which the indicators of the two summands are not equal. \end{lemma} \begin{proof}
We can find the details in \cite{Levin}. \end{proof}
\begin{lemma}\label{38} The Fourier transform $\hat{X}(z)$ as in~(\ref{1.6}) is of Cartwright class, and the function can be represented in the form
$$\hat{X}(z)=cz^{m}e^{i\delta z}\lim_{R\rightarrow\infty}\prod_{|\sigma_{n}|<R}(1-\frac{z}{\sigma_{n}}),\,z=x+iy,$$ where $\delta\in\mathbb{R}$, and the following integral converges: \begin{equation}\label{2100}
\int_{-\infty}^{\infty}\frac{\ln^{+}|\hat{X}(x)|}{1+x^{2}}dx<\infty. \end{equation} Similar results hold for $\hat{Y}(k)$. \end{lemma} \begin{proof} We refer to \cite{Levin,Levin2} for the definition of the Cartwright class. \end{proof} \begin{theorem}[Nevanlinna-Levin]\label{NL} If the function $F(z)$ is holomorphic and of exponential type in the half-plane $\Im z\geq0$, and if~(\ref{2100}) holds, then \begin{enumerate} \item \begin{equation}\nonumber F(z)\prod_{k=1}^{\infty}\frac{1-\frac{z}{\overline{a}_{k}}}{1-\frac{z}{a_{k}}}=e^{i\gamma}e^{u(z)+iv(z)}, \end{equation}
where $$u(z)=\frac{y}{\pi}\int_{-\infty}^{\infty}\frac{\ln|F(t)|}{(t-x)^{2}+y^{2}}dt+\sigma^{+}_{F} y,$$ $\sigma^{+}_{F}=h_{F}(\frac{\pi}{2})$, $v(z)$ is the harmonic conjugate of $u(z)$, and $\{a_{k}\}$ are the zeros of the function $F(z)$ in the half-plane $\Im z>0$; \item
$$
\ln|F(z)|=\frac{y}{\pi}\int_{-\infty}^{\infty}\frac{\ln|F(t)|}{(t-x)^{2}+y^{2}}dt+\sigma^{+}_{F} y+\ln|\chi(z)|,\,z=x+iy, $$ where $$\chi(z)=\prod_{k=1}^{\infty}\frac{1-\frac{z}{a_{k}}}{1-\frac{z}{\overline{a}_{k}}}.$$ \end{enumerate} \end{theorem} \begin{proof} We refer the proof to \cite[p.\,240]{Levin}. \end{proof}
\end{document}
\begin{document}
\title{Lempert Theorem for strongly linearly convex domains} \author{\L ukasz Kosi\'nski and Tomasz Warszawski} \subjclass[2010]{32F45} \keywords{Lempert Theorem, strongly linearly convex domains, Lempert extremals} \address{Instytut Matematyki, Wydzia\l\ Matematyki i Informatyki, Uniwersytet Jagiello\'nski, ul. Prof. St. \L ojasiewicza 6, 30-348 Krak\'ow, Poland} \email{lukasz.kosinski@gazeta.pl, tomasz.warszawski@im.uj.edu.pl} \begin{abstract} In 1984 L.~Lempert showed that the Lempert function and the Carath\'eodory distance coincide on non-planar bounded strongly linearly convex domains with real analytic boundaries. Following this paper, we present a~slightly modified and more detailed version of the proof. Moreover, the Lempert Theorem is proved for non-planar bounded ${\mathcal C}^2$-smooth strongly linearly convex domains. \end{abstract} \maketitle
The aim of this paper is to present a detailed version of the proof of the Lempert Theorem in the case of non-planar bounded strongly linearly convex domains with smooth boundaries. Lempert's original proof was presented only in the proceedings of a conference (see \cite{Lem1}) with very limited access, and at some places it was quite sketchy. We were encouraged by some colleagues to prepare an extended version of the proof in which all doubts could be removed and some details of the proofs could be simplified. We hope to have done so below. Certainly, \textbf{the idea of the proof belongs entirely to Lempert}. The main differences we would like to draw attention to are the following: \begin{itemize}
\item results are obtained in $\mathcal C^2$-smooth case;
\item the notions of stationary mappings and $E$-mappings are separated;
\item the geometry of the domains is investigated only in neighborhoods of the boundaries of stationary mappings (viewed as boundaries of analytic discs) --- this allows us to obtain localization properties for stationary mappings;
\item boundary properties of strongly convex domains are expressed in terms of the squares of their Minkowski functionals. \end{itemize}
Additional motivation for presenting the proof is the fact, showed recently in \cite{Pfl-Zwo}, that the so-called symmetrized bidisc may be exhausted by strongly linearly convex domains. On the other hand it cannot be exhausted by domains biholomorphic to convex ones (\cite{Edi}). Therefore, the equality of the Lempert function and the Carath\'eodory distance for strongly linearly convex domains does not follow directly from \cite{Lem2}.
\section{Introduction and results}
Let us recall the objects we will deal with. Throughout the paper $\mathbb{D}$ denotes the unit open disc on the complex plane, $\mathbb{T}$ is the unit circle and $p$ --- the Poincar\'e distance on $\mathbb{D}$.
Let $D\subset\mathbb{C}^{n}$ be a domain and let $z,w\in D$, $v\in\mathbb{C}^{n}$. The {\it Lempert function}\/ is defined as \begin{equation}\label{lem} \widetilde{k}_{D}(z,w):=\inf\{p(0,\xi):\xi\in[0,1)\textnormal{ and }\exists f\in \mathcal{O}(\mathbb{D},D):f(0)=z,\ f(\xi)=w\}. \end{equation} The {\it Kobayashi-Royden \emph{(}pseudo\emph{)}metric}\/ we define as \begin{equation}\label{kob-roy} \kappa_{D}(z;v):=\inf\{\lambda^{-1}:\lambda>0\text{ and }\exists f\in\mathcal{O}(\mathbb{D},D):f(0)=z,\ f'(0)=\lambda v\}. \end{equation} Note that \begin{equation}\label{lem1} \widetilde{k}_{D}(z,w)=\inf\{p(\zeta,\xi):\zeta,\xi\in\mathbb{D}\textnormal{ and }\exists f\in \mathcal{O}(\mathbb{D},D):f(\zeta)=z,\ f(\xi)=w\}, \end{equation} \begin{multline}\label{kob-roy1}
\kappa_{D}(z;v)=\inf\{|\lambda|^{-1}/(1-|\zeta|^2):\lambda\in\mathbb{C}_*,\,\zeta\in\mathbb{D}\text{ and }\\ \exists f\in\mathcal{O}(\mathbb{D},D):f(\zeta)=z,\ f'(\zeta)=\lambda v\}. \end{multline}
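These normalizations can be illustrated on $D=\mathbb{D}$, where the M\"obius automorphisms realize the extremals. The sketch below (our addition) assumes the normalization $p(\zeta,\xi)=\operatorname{artanh}\bigl|\frac{\zeta-\xi}{1-\bar\xi\zeta}\bigr|$ and checks numerically the M\"obius invariance that makes \eqref{lem} and \eqref{lem1} agree:

```python
# A sketch for D = the unit disc (added illustration, assuming the normalization
# p(z, w) = artanh |(z - w)/(1 - conj(w) z)|): Mobius invariance of p, which is
# what lets one move one point to the origin when passing from (lem) to (lem1).
import cmath, math, random

def poincare(z, w):
    return math.atanh(abs((z - w) / (1 - w.conjugate() * z)))

def mobius(a, z):
    """Disc automorphism z -> (z - a)/(1 - conj(a) z), sending a to 0."""
    return (z - a) / (1 - a.conjugate() * z)

random.seed(1)
for _ in range(200):
    z = 0.9 * random.random() * cmath.exp(2j * math.pi * random.random())
    w = 0.9 * random.random() * cmath.exp(2j * math.pi * random.random())
    a = 0.9 * random.random() * cmath.exp(2j * math.pi * random.random())
    # invariance: p(m_a(z), m_a(w)) = p(z, w)
    assert abs(poincare(mobius(a, z), mobius(a, w)) - poincare(z, w)) < 1e-9
    # moving z to 0 by m_z: p(z, w) = p(0, |m_z(w)|), the form used in (lem)
    assert abs(poincare(z, w) - poincare(0, abs(mobius(z, w)))) < 1e-9
```

For the disc itself the identity map is extremal, so $\widetilde{k}_{\mathbb D}=p$; the check above only exercises the automorphism-invariance underlying the two equivalent formulas.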
If $z\neq w$ (respectively $v\neq 0$), a mapping $f$ for which the infimum in \eqref{lem1} (resp. in \eqref{kob-roy1}) is attained is called a $\widetilde{k}_D$-\textit{extremal} (or a \textit{Lempert extremal}) for $z,w$ (resp. a $\kappa_D$-\textit{extremal} for $z,v$). A mapping being a $\widetilde k_D$-extremal or a $\kappa_D$-extremal will be called simply an \textit{extremal} or an \textit{extremal mapping}.
We shall say that $f:\mathbb{D}\longrightarrow D$ is a unique $\widetilde{k}_D$-extremal for $z,w$ (resp. a unique $\kappa_D$-extremal for $z,v$) if any other $\widetilde{k}_D$-extremal $g:\mathbb{D}\longrightarrow D$ for $z,w$ (resp. $\kappa_D$-extremal for $z,v$) satisfies $g=f\circ a$ for some M\"obius function $a$.
In general, $\widetilde{k}_{D}$ does not satisfy the triangle inequality --- take for example $D_{\alpha}:=\{(z,w)\in\mathbb{C}^{2}:|z|,|w|<1,\ |zw|<\alpha\}$, $\alpha\in(0,1)$. Therefore, it is natural to consider the so-called \textit{Kobayashi \emph{(}pseudo\emph{)}distance} given by the formula \begin{multline*}k_{D}(w,z):=\sup\{d_{D}(w,z):(d_{D})\text{ is a family of holomorphically invariant} \\\text{pseudodistances less than or equal to }\widetilde{k}_{D}\}.\end{multline*} It follows directly from the definition that $$k_{D}(z,w)=\inf\left\{\sum_{j=1}^{N}\widetilde{k}_{D}(z_{j-1},z_{j}):N\in\mathbb{N},\ z_{1},\ldots,z_{N}\in D,\ z_{0}=z,\ z_{N}=w\right\}.$$
The next objects we deal with are the \textit{Carath\'eodory \emph{(}pseudo\emph{)}distance} $$c_{D}(z,w):=\sup\{p(F(z),F(w)):F\in\mathcal{O}(D,\mathbb{D})\}$$ and the \textit{Carath\'eodory-Reiffen \emph{(}pseudo\emph{)}metric}
$$\gamma_D(z;v):=\sup\{|F'(z)v|:F\in\mathcal{O}(D,\mathbb{D}),\ F(z)=0\}.$$
A holomorphic mapping $f:\mathbb{D}\longrightarrow D$ is said to be a \emph{complex geodesic} if $c_D(f(\zeta),f(\xi))=p(\zeta,\xi)$ for any $\zeta,\xi\in\mathbb{D}$.
Here is some notation. Let $z_1,\ldots,z_n$ be the standard complex coordinates in $\mathbb{C}^n$ and $x_1,\ldots,x_{2n}$ --- the standard real coordinates in $\mathbb{C}^n=\mathbb{R}^n+i\mathbb{R}^n\simeq\mathbb{R}^{2n}$. We use $T_{D}^\mathbb{R}(a)$, $T_{D}^\mathbb{C}(a)$ to denote a real and a complex tangent space to a ${\mathcal C}^1$-smooth domain $D$ at a point $a\in\partial D$, i.e. the sets \begin{align*}T_{D}^\mathbb{R}(a):&=\left\{X\in\mathbb{C}^{n}:\re\sum_{j=1}^n\frac{\partial r}{\partial z_j}(a)X_{j}=0\right\},\\ T_{D}^\mathbb{C}(a):&=\left\{X\in\mathbb{C}^{n}:\sum_{j=1}^n\frac{\partial r}{\partial z_j}(a)X_{j}=0\right\},\end{align*} where $r$ is a defining function of $D$. Let $\nu_D(a)$ be the outward unit normal vector to $\partial D$ at $a$.
Let $\mathcal{C}^{k}(\overline{\DD})$, where $k\in(0,\infty]$, denote a class of continuous functions on $\overline{\DD}$, which are of class ${\mathcal C}^k$ on $\mathbb{D}$ and \begin{itemize} \item if $k\in\mathbb{N}\cup\{\infty\}$ then derivatives up to the order $k$ extend continuously on~$\overline{\DD}$; \item if $k-[k]=:c>0$ then derivatives up to the order $[k]$ are $c$-H\"older continuous on $\mathbb{D}$. \end{itemize} By $\mathcal{C}^\omega$ class we shall denote real analytic functions. Further, saying that $f$ is of class $\mathcal{C}^{k}(\mathbb{T})$, $k\in(0,\infty]\cup\{\omega\}$, we mean that the function $t\longmapsto f(e^{it})$, $t\in\mathbb{R}$, is in $\mathcal{C}^{k}(\mathbb R)$. For a compact set $K\subset\mathbb{C}^n$ let ${\mathcal O}(K)$ denote the set of functions extending holomorphically on a neighborhood of $K$ (we assume that all neighborhoods are open). In that case we shall sometimes say that a given function is of class ${\mathcal O}(K)$. Note that $\mathcal{C}^{\omega}(\mathbb{T})={\mathcal O}(\mathbb{T})$.
Let $|\cdot|$ denote the Euclidean norm in $\mathbb{C}^{n}$ and let $\dist(z,S):=\inf\{|z-s|:s\in S\}$ be the distance of the point $z\in\mathbb{C}^n$ to the set $S\subset\mathbb{C}^n$. For such a set $S$ we define $S_*:=S\setminus\{0\}$. Let $\mathbb{B}_n:=\{z\in\mathbb{C}^n:|z|<1\}$ be the unit ball and $B_n(a,r):=\{z\in\mathbb{C}^n:|z-a|<r\}$ --- an open ball with a center $a\in\mathbb{C}^n$ and a radius $r>0$. Put $$z\bullet w:=\sum_{j=1}^nz_{j}{w}_{j}$$ for $z,w\in\mathbb{C}^{n}$ and let $\langle\cdot,\cdot\rangle$ be the hermitian inner product on $\mathbb{C}^n$. The real inner product on $\mathbb{C}^n$ is denoted by $\langle\cdot,\cdot\rangle_{\mathbb{R}}=\re\langle\cdot,\cdot\rangle$.
We use $\nabla$ to denote the gradient $(\partial/\partial x_1,\ldots,\partial/\partial x_{2n})$. For real-valued functions the gradient is naturally identified with $2(\partial/\partial\overline z_1,\ldots,\partial/\partial\overline z_n)$. Recall that $$\nu_D(a)=\frac{\nabla r(a)}{|\nabla r(a)|}.$$ Let $\mathcal{H}$ be the Hessian matrix $$\left[\frac{\partial^2}{\partial x_j\partial x_k}\right]_{1\leq j,k\leq 2n}.$$ Sometimes, for a ${\mathcal C}^2$-smooth function $u$ and a vector $X\in\mathbb{R}^{2n}$ the Hessian $$\sum_{j,k=1}^{2n}\frac{\partial^2 u}{\partial x_j\partial x_k}(a)X_{j}X_{k}=X^T{\mathcal H} u(a)X$$ will be denoted by ${\mathcal H} u(a;X)$. By $\|\cdot\|$ we denote the operator norm.
\begin{df}\label{29} Let $D\subset\mathbb{C}^{n}$ be a domain.
We say that $D$ is \emph{linearly convex} (resp. \emph{weakly linearly convex}) if through any point $a\in\mathbb C^n\setminus D$ (resp. $a\in \partial D$) there goes an $(n-1)$-dimensional complex hyperplane disjoint from $D$.
A domain $D$ is said to be \emph{strongly linearly convex} if \begin{enumerate} \item $D$ has $\mathcal{C}^{2}$-smooth boundary; \item there exists a defining function $r$ of $D$ such that
\begin{equation}\label{48}\sum_{j,k=1}^n\frac{\partial^2 r}{\partial z_j\partial\overline z_k}(a)X_{j}\overline{X}_{k}>\left|\sum_{j,k=1}^n\frac{\partial^2 r}{\partial z_j\partial z_k}(a)X_{j}X_{k}\right|,\ a\in\partial D,\ X\in T_{D}^\mathbb{C}(a)_*.\end{equation} \end{enumerate}
More generally, any point $a\in\partial D$ for which there exists a defining function $r$ satisfying \eqref{48}, is called a \emph{point of the strong linear convexity} of $D$.
Furthermore, we say that a domain $D$ has \emph{real analytic boundary} if it possesses a real analytic defining function. \end{df}
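For example, the unit ball $\mathbb{B}_n$ is strongly linearly convex: for the defining function $r(z):=|z|^{2}-1$ we have $$\frac{\partial^2 r}{\partial z_j\partial\overline z_k}=\delta_{jk},\qquad\frac{\partial^2 r}{\partial z_j\partial z_k}=0,$$ so the condition \eqref{48} reduces to $|X|^{2}>0$ for $X\in T_{\mathbb{B}_n}^{\mathbb{C}}(a)_*$, which holds trivially.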
Note that the condition \eqref{48} does not depend on the choice of a defining function of $D$.
\begin{rem} Let $D\subset\mathbb{C}^{n}$ be a strongly linearly convex domain. Then \begin{enumerate} \item any $(n-1)$-dimensional complex tangent hyperplane intersects $\partial{D}$ at precisely one point; in other words $$\overline D\cap(a+T_{D}^\mathbb{C}(a))=\{a\},\ a\in\partial D;$$ \item for $a\in\partial D$ the equation $\langle w-a, \nu_D(a)\rangle=0$ describes the $(n-1)$-dimensional complex tangent hyperplane $a+T_{D}^\mathbb{C}(a)$, consequently $$\langle z-a, \nu_D(a)\rangle\neq 0,\ z\in D,\ a\in\partial D.$$ \end{enumerate} \end{rem}
The main aim of the paper is to present a detailed proof of the following
\begin{tw}[Lempert Theorem]\label{lem-car} Let $D\subset\mathbb{C}^{n}$, $n\geq 2$, be a bounded strongly linearly convex domain. Then $$c_{D}=k_{D}=\widetilde{k}_{D}\text{\,\ and\,\, }\gamma_D=\kappa_D.$$ \end{tw}
An important role will be played by strongly convex domains and strongly convex functions. \begin{df} A domain $D\subset\mathbb{C}^{n}$ is called \emph{strongly convex} if \begin{enumerate} \item $D$ has $\mathcal{C}^{2}$-smooth boundary; \item there exists a defining function $r$ of $D$ such that \begin{equation}\label{sc}\sum_{j,k=1}^{2n}\frac{\partial^2 r}{\partial x_j\partial x_k}(a)X_{j}X_{k}>0,\ a\in\partial D,\ X\in T_{D}^\mathbb{R}(a)_*.\end{equation} \end{enumerate} Generally, any point $a\in\partial D$ for which there exists a defining function $r$ satisfying \eqref{sc}, is called a \emph{point of the strong convexity} of $D$. \end{df} \begin{rem} A strongly convex domain $D\subset\mathbb{C}^{n}$ is convex and strongly linearly convex. Moreover, it is strictly convex, i.e. for any different points $a,b\in\overline D$ the interior of the segment $[a,b]=\{ta+(1-t)b:t\in [0,1]\}$ is contained in $D$ (i.e. $ta+(1-t)b\in D$ for any $t\in(0,1)$).
Observe also that any bounded convex domain with a real analytic boundary is strictly convex. Indeed, if a domain $D$ with a real analytic boundary were not strictly convex, then we could find two distinct points $a,b\in\partial D$ such that the segment $[a,b]$ lies entirely in $\partial D$. On the other hand, the identity principle would imply that the set $\{t\in\mathbb R:\exists\varepsilon>0:sa+(1-s)b\in\partial D\text{ for }|s-t|<\varepsilon\}$ is open and closed in $\mathbb R$. Being non-empty (it contains $(0,1)$), it would be equal to $\mathbb{R}$, which contradicts the boundedness of $D$. \end{rem}
\begin{rem} It is well-known that for any convex domain $D\subset\mathbb{C}^{n}$ there is a sequence $\{D_m\}$ of bounded strongly convex domains with real analytic boundaries, such that $D_m\subset D_{m+1}$ and $\bigcup_m D_m=D$.
In particular, Theorem~\ref{lem-car} holds for convex domains. \end{rem}
\begin{df} Let $U\subset\mathbb{C}^n$ be a domain. A function $u:U\longrightarrow\mathbb{R}$ is called \emph{strongly convex} if \begin{enumerate} \item $u$ is $\mathcal{C}^{2}$-smooth; \item $$\sum_{j,k=1}^{2n}\frac{\partial^2 u}{\partial x_j\partial x_k}(a)X_{j}X_{k}>0,\ a\in U,\ X\in(\mathbb{R}^{2n})_*.$$ \end{enumerate} \end{df}
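For example, $u(z):=|z|^{2}=\sum_{j=1}^{2n}x_j^{2}$ is strongly convex on every domain $U\subset\mathbb{C}^n$, as $${\mathcal H}u(a;X)=2\sum_{j=1}^{2n}X_j^{2}=2|X|^{2}>0,\ a\in U,\ X\in(\mathbb{R}^{2n})_*.$$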
\begin{df} The degree of a continuous function $\varphi:\mathbb T\longrightarrow\mathbb T$ (treated as a curve) is called its winding number. Since the fundamental group is a homotopy invariant and $\varphi/|\varphi|$ is homotopic to $\varphi$ in $\mathbb{C}_*$, the \emph{winding number of a continuous function} $\varphi:\mathbb T\longrightarrow\mathbb C_*$ may be defined in the same way. We denote it by $\wind\varphi$.
In the case of a ${\mathcal C}^1$-smooth function $\varphi:\mathbb{T}\longrightarrow\mathbb{C}_*$, its winding number is just the index of $\varphi$ at 0, i.e. $$\wind\varphi=\frac{1}{2\pi i}\int_{\varphi(\mathbb{T})}\frac{d\zeta}{\zeta}=\frac{1}{2\pi i}\int_{0}^{2\pi}\frac{\frac{d}{dt}\varphi(e^{it})}{\varphi(e^{it})}dt.$$ \end{df}
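For example, if $\varphi(\zeta):=\zeta^m$, $m\in\mathbb{Z}$, then $$\wind\varphi=\frac{1}{2\pi i}\int_{0}^{2\pi}\frac{ime^{imt}}{e^{imt}}dt=m;$$ for $m\geq 0$ this agrees with the fact that the holomorphic extension $\widetilde{\varphi}(\zeta)=\zeta^m$ has a zero of multiplicity $m$ in $\mathbb{D}$.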
\begin{rem}\label{49} \begin{enumerate} \item\label{51} If $\varphi\in{\mathcal C}(\mathbb{T},\mathbb{C}_*)$ extends to a function $\widetilde{\varphi}\in{\mathcal O}(\mathbb{D})\cap \mathcal C(\overline{\DD})$ then $\wind\varphi$ is the number of zeroes of $\widetilde{\varphi}$ in $\mathbb{D}$ counted with multiplicities; \item\label{52} $\wind(\varphi\psi)=\wind\varphi+\wind\psi$, $\varphi,\psi\in{\mathcal C}(\mathbb{T},\mathbb{C}_*)$; \item\label{53} $\wind\varphi=0$ if $\varphi\in{\mathcal C}(\mathbb{T})$ and $\re\varphi>0$. \end{enumerate} \end{rem}
\begin{df} The boundary of a domain $D$ of $\mathbb C^n$ is \emph{real analytic in a neighborhood} $U$ of the set $S\subset\partial D$ if there exists a function $r\in\mathcal C^{\omega}(U,\mathbb{R})$ such that $D\cap U=\{z\in U:r(z)<0\}$ and $\nabla r$ does not vanish in $U$. \end{df}
\begin{df}\label{21} Let $D\subset\mathbb{C}^{n}$ be a domain. We call a holomorphic mapping $f:\mathbb{D}\longrightarrow D$ a \emph{stationary mapping} if \begin{enumerate} \item $f$ extends to a holomorphic mapping in a neighborhood of $\overline{\DD}$ $($denoted by the same letter$)$; \item $f(\mathbb{T})\subset\partial D$; \item there exists a real analytic function $\rho:\mathbb{T}\longrightarrow\mathbb{R}_{>0}$ such that the mapping $\mathbb{T}\ni\zeta\longmapsto\zeta \rho(\zeta)\overline{\nu_D(f(\zeta))}\in\mathbb{C}^{n}$ extends to a mapping holomorphic in a neighborhood of $\overline{\DD}$ $($denoted by $\widetilde{f}$$)$. \end{enumerate}
Furthermore, we call a holomorphic mapping $f:\mathbb{D}\longrightarrow D$ a \emph{weak stationary mapping} if \begin{enumerate} \item[(1')] $f$ extends to a ${\mathcal C}^{1/2}$-smooth mapping on $\overline{\DD}$ $($denoted by the same letter$)$; \item[(2')] $f(\mathbb{T})\subset\partial D$; \item[(3')] there exists a ${\mathcal C}^{1/2}$-smooth function $\rho:\mathbb{T}\longrightarrow\mathbb{R}_{>0}$ such that the mapping $\mathbb{T}\ni\zeta\longmapsto\zeta \rho(\zeta)\overline{\nu_D(f(\zeta))}\in\mathbb{C}^{n}$ extends to a mapping $\widetilde{f}\in{\mathcal O}(\mathbb{D})\cap{\mathcal C}^{1/2}(\overline{\DD})$. \end{enumerate}
The definition of a $($weak$)$ stationary mapping $f:\mathbb D\longrightarrow D$ extends naturally to the case when $\partial D$ is real analytic in a neighborhood of $f(\mathbb{T})$. \end{df}
Directly from the definition of a stationary mapping $f$, it follows that $f$ and $\widetilde f$ extend holomorphically on some neighborhoods of $\overline{\DD}$. By $\mathbb{D}_f$ we shall denote their intersection.
\begin{df}\label{21e} Let $D\subset\mathbb{C}^n$, $n\geq 2$, be a bounded strongly linearly convex domain with real analytic boundary. A holomorphic mapping $f:\mathbb{D}\longrightarrow D$ is called a (\emph{weak}) $E$-\emph{mapping} if it is a (weak) stationary mapping and \begin{enumerate} \item[(4)] setting $\varphi_z(\zeta):=\langle z-f(\zeta),\nu_D(f(\zeta))\rangle,\ \zeta\in\mathbb{T}$, we have $\wind\varphi_z=0$ for some $z\in D$. \end{enumerate} \end{df}
\begin{rem} The strong linear convexity of $D$ implies $\varphi_z(\zeta)\neq 0$ for any $z\in D$ and $\zeta\in\mathbb{T}$. Since $D$ is connected and $D\ni z\longmapsto\wind\varphi_z$ is continuous and integer-valued, $\wind\varphi_z$ vanishes for all $z\in D$ if it vanishes for some $z\in D$.
Additionally, any stationary mapping of a convex domain is an $E$-mapping (as $\re \varphi_z<0$). \end{rem}
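To illustrate Definitions~\ref{21} and~\ref{21e}, consider $D=\mathbb{B}_n$ and $f(\zeta):=(\zeta,0,\ldots,0)$. Then $f(\mathbb{T})\subset\partial\mathbb{B}_n$ and $\nu_{\mathbb{B}_n}(z)=z$ for $z\in\partial\mathbb{B}_n$, so taking $\rho\equiv 1$ we get $$\zeta\rho(\zeta)\overline{\nu_{\mathbb{B}_n}(f(\zeta))}=\zeta(\overline\zeta,0,\ldots,0)=(1,0,\ldots,0),\ \zeta\in\mathbb{T},$$ which extends to the constant mapping $\widetilde f\equiv(1,0,\ldots,0)$. Hence $f$ is a stationary mapping. Moreover, for $z\in\mathbb{B}_n$ $$\varphi_z(\zeta)=\langle z-f(\zeta),\nu_{\mathbb{B}_n}(f(\zeta))\rangle=(z_1-\zeta)\overline\zeta=z_1\overline\zeta-1,\ \zeta\in\mathbb{T},$$ so $\re\varphi_z\leq|z_1|-1<0$ and $\wind\varphi_z=0$ by Remark~\ref{49} (applied to $-\varphi_z$). Thus $f$ is an $E$-mapping.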
We shall prove that in a class of non-planar bounded strongly linearly convex domains with real analytic boundaries weak stationary mappings are just stationary mappings, so there is no difference between $E$-mappings and weak $E$-mappings.
We have the following result describing extremal mappings, which is very interesting in its own right.
\begin{tw}\label{main} Let $D\subset\mathbb{C}^n$, $n\geq 2$, be a bounded strongly linearly convex domain.
Then a holomorphic mapping $f:\mathbb{D}\longrightarrow D$ is an extremal if and only if $f$ is a weak $E$-mapping.
For a domain $D$ with real analytic boundary, a holomorphic mapping $f:\mathbb D\longrightarrow D$ is an extremal if and only if $f$ is an $E$-mapping.
If $\partial D$ is of class ${\mathcal C}^k$, $k=3,4,\ldots,\infty$, then any weak $E$-mapping $f:\mathbb{D}\longrightarrow D$ and its associated mappings $\widetilde f,\rho$ are $\mathcal C^{k-1-\varepsilon}$-smooth for any $\varepsilon>0$.
\end{tw}
The idea of the proof of the Lempert Theorem is as follows. In real analytic case we shall show that $E$-mappings are complex geodesics (because they have left inverses). Then we shall prove that for any different points $z,w\in D$ (resp. for a point $z\in D$ and a vector $v\in(\mathbb{C}^n)_*$) there is an $E$-mapping passing through $z,w$ (resp. such that $f(0)=z$ and $f'(0)=v$). This will give the equality between the Lempert function and the Carath\'eodory distance. In the general case, we exhaust a ${\mathcal C}^2$-smooth domain by strongly linearly convex domains with real analytic boundaries.
To prove Theorem \ref{main} we shall additionally observe that (weak) $E$-mappings are unique extremals.
\begin{center}{\sc Real analytic case}\end{center}
In what follows, unless stated otherwise, $D\subset\mathbb{C}^n$, $n\geq 2$, is a \textbf{bounded strongly linearly convex domain with real analytic boundary}. \section{Weak stationary mappings of strongly linearly convex domains with real analytic boundaries are stationary mappings}\label{55} Let $M\subset\mathbb{C}^m$ be a totally real $\mathcal{C}^{\omega}$ submanifold of the real dimension $m$. Fix a point $z\in M$. There are neighborhoods $U,V\subset\mathbb{C}^m$ of $0$ and $z$ respectively and a biholomorphic mapping $\Phi:U\longrightarrow V$ such that $\Phi(\mathbb{R}^m\cap U)=M\cap V$ (for the proof see Appendix).
\begin{prop}\label{6} A weak stationary mapping of $D$ is a stationary mapping of $D$ with the same associated mappings. \end{prop} \begin{proof} Let $f:\mathbb{D}\longrightarrow D$ be a weak stationary mapping. Our aim is to prove that $f,\widetilde{f}\in{\mathcal O}(\overline{\DD})$ and $\rho\in\mathcal C^{\omega}(\mathbb{T})$. Choose a point $\zeta_0\in\mathbb{T}$. Since $\widetilde{f}(\zeta_0)\neq 0$, we can assume that $\widetilde{f}_1(\zeta)\neq 0$ in $\overline{\DD}\cap U_0$, where $U_0$ is a neighborhood of $\zeta_0$. This implies $\nu_{D,1}(f(\zeta_0))\neq 0$, so $\nu_{D,1}$ does not vanish on some set $V_0\subset\partial D$, relatively open in $\partial D$, containing the point $f(\zeta_0)$. Shrinking $U_0$, if necessary, we may assume that $f(\mathbb{T}\cap U_0)\subset V_0$.
Define $\psi:V_0\longrightarrow\mathbb{C}^{2n-1}$ by $$\psi(z)=\left(z_1,\ldots,z_n, \overline{\left(\frac{\nu_{D,2}(z)}{\nu_{D,1}(z)}\right)},\ldots,\overline{\left(\frac{\nu_{D,n}(z)}{\nu_{D,1}(z)}\right)}\right).$$ The set $M:=\psi(V_0)$ is the graph of a $\mathcal{C}^{\omega}$ function defined on the local $\mathcal{C}^{\omega}$ submanifold $V_0$, so it is a local $\mathcal{C}^{\omega}$ submanifold in $\mathbb{C}^{2n-1}$ of the real dimension $2n-1$. Assume for a moment that $M$ is totally real.
Let $$g(\zeta):=\left(f_1(\zeta),\ldots,f_n(\zeta), \frac{\widetilde{f}_2(\zeta)}{\widetilde{f}_1(\zeta)},\ldots,\frac{\widetilde{f}_n(\zeta)}{\widetilde{f}_1(\zeta)}\right),\ \zeta\in\overline{\DD}\cap U_0.$$ If $\zeta\in\mathbb{T}\cap U_0$ then $\widetilde{f}_k(\zeta)\widetilde{f}_1(\zeta)^{-1} = \overline{\nu_{D,k}(f(\zeta))}\ \overline{\nu_{D,1}(f(\zeta))}^{-1}$, so $g(\zeta)=\psi(f(\zeta))$. Therefore, $g(\mathbb{T}\cap U_0)\subset M$. Thanks to the Reflection Principle (see Appendix), $g$ extends holomorphically past $\mathbb{T}\cap U_0$, so $f$ extends holomorphically on a neighborhood of $\zeta_0$.
The mapping $\overline{\nu_D\circ f}$ is real analytic on $\mathbb{T}$, so it extends to a mapping $h$ holomorphic in a neighborhood $W$ of $\mathbb{T}$. For $\zeta\in\mathbb{T}\cap U_0$ we have $$\frac{\zeta h_1(\zeta)}{\widetilde{f}_1(\zeta)}=\frac{1}{\rho(\zeta)}.$$ The function on the left side is holomorphic in $\mathbb{D}\cap U_0\cap W$ and continuous in $\overline{\DD}\cap U_0\cap W$. Since it has real values on $\mathbb{T}\cap U_0$, the Reflection Principle implies that it is holomorphic in a neighborhood of $\mathbb{T}\cap U_0$. Hence $\rho$ and $\widetilde{f}$ are holomorphic in a neighborhood of $\zeta_0$. Since $\zeta_0$ is arbitrary, we get the assertion.
It remains to prove that $M$ is totally real. Let $r$ be a defining function of $D$. Recall that for any point $z\in V_0$ $$\frac{\overline{\nu_{D,k}(z)}}{\overline{\nu_{D,1}(z)}}=\frac{\partial r}{\partial z_k}(z)\left(\frac{\partial r}{\partial z_1}(z)\right)^{-1},\,k=1,\ldots,n.$$ Consider the mapping $S=(S_1,\ldots,S_n):V_0\times\mathbb{C}^{n-1}\longrightarrow\mathbb{R}\times\mathbb{C}^{n-1}$ given by $$S(z,w):=\left(r(z),\frac{\partial r}{\partial z_2}(z)-w_{1}\frac{\partial r}{\partial z_1}(z),\ldots,\frac{\partial r}{\partial z_n}(z)-w_{n-1}\frac{\partial r}{\partial z_1}(z)\right).$$ Clearly, $M=S^{-1}(\{0\})$. Hence \begin{equation}\label{tan} T_{M}^{\mathbb{R}}(z,w)\subset\ker\nabla S(z,w),\ (z,w)\in M,\end{equation} where $\nabla S:=(\nabla S_1,\ldots,\nabla S_n)$.
Fix a point $(z,w)\in M$. Our goal is to prove that $T_{M}^{\mathbb{C}}(z,w)=\lbrace 0\rbrace$. Take an arbitrary vector $(X,Y)=(X_1,\ldots,X_n,Y_1,\ldots,Y_{n-1})\in T_{M}^{\mathbb{C}}(z,w)$. Then we infer from \eqref{tan} that $$\sum_{k=1}^n\frac{\partial r}{\partial z_k}(z)X_k=0,$$ i.e. $X\in T_{D}^{\mathbb{C}}(z)$. Denoting $v:=(z,w)$, $V:=(X,Y)$ and making use of \eqref{tan} again we find that $$0=\nabla S_k(v)(V)=\sum_{j=1}^{2n-1}\frac{\partial S_k}{\partial v_j}(v)V_j+\sum_{j=1}^{2n-1}\frac{\partial S_k}{\partial\overline v_j}(v)\overline V_j$$ for $k=2,\ldots,n$. But $V\in T_{M}^{\mathbb{C}}(v)$, so $iV\in T_{M}^{\mathbb{C}}(v)$. Thus $$0=\nabla S_k(v)(iV)=i\sum_{j=1}^{2n-1}\frac{\partial S_k}{\partial v_j}(v)V_j-i\sum_{j=1}^{2n-1}\frac{\partial S_k}{\partial\overline v_j}(v)\overline V_j.$$ In particular, \begin{multline*}0=\sum_{j=1}^{2n-1}\frac{\partial S_k}{\partial\overline v_j}(v)\overline V_j=\sum_{j=1}^{n}\frac{\partial S_k}{\partial\overline z_j}(z,w)\overline X_j+\sum_{j=1}^{n-1}\frac{\partial S_k}{\partial\overline w_j}(z,w)\overline Y_j=\\=\sum_{j=1}^n\frac{\partial^2r}{\partial z_k\partial\overline{z}_j}(z)\overline X_j-w_{k-1}\sum_{j=1}^n\frac{\partial^2r}{\partial z_1\partial\overline{z}_j}(z)\overline X_j. \end{multline*} The equality $M=S^{-1}(\{0\})$ gives $$w_{k-1}=\frac{\partial r}{\partial z_k}(z)\left(\frac{\partial r}{\partial z_1}(z)\right)^{-1},$$ so $$\frac{\partial r}{\partial z_1}(z)\sum_{j=1}^n\frac{\partial^2r}{\partial z_k\partial\overline{z}_j}(z)\overline X_j=\frac{\partial r}{\partial z_k}(z)\sum_{j=1}^n\frac{\partial^2r}{\partial z_1\partial\overline{z}_j}(z)\overline X_j,\ k=2,\ldots,n.$$ Note that the last equality holds also for $k=1$. 
Therefore, \begin{multline*} \frac{\partial r}{\partial z_1}(z)\sum_{j,k=1}^n\frac{\partial^2r}{\partial z_k\partial\overline{z}_j}(z)\overline X_jX_k=\sum_{k=1}^n\frac{\partial r}{\partial z_k}(z)\sum_{j=1}^n\frac{\partial^2r}{\partial z_1\partial\overline{z}_j}(z)\overline X_jX_k =\\=\left(\sum_{k=1}^n\frac{\partial r}{\partial z_k}(z)X_k\right)\left(\sum_{j=1}^n\frac{\partial^2r}{\partial z_1\partial\overline{z}_j}(z)\overline X_j\right)=0. \end{multline*} By the strong linear convexity of $D$ we have $X=0$. This implies $Y=0$, since $$0=\nabla S_k(z,w)(0,Y)=\sum_{j=1}^{n-1}\frac{\partial S_k}{\partial w_j}(v)Y_j+\sum_{j=1}^{n-1}\frac{\partial S_k}{\partial\overline w_j}(v)\overline Y_j=-\frac{\partial r}{\partial z_1}(z)Y_{k-1}$$ for $k=2,\ldots,n$. \end{proof}
\section{(Weak) $E$-mappings vs. extremal mappings and complex geodesics}
In this section we will prove important properties of (weak) $E$-mappings. In particular, we will show that they are complex geodesics and unique extremals. \subsection{Weak $E$-mappings are complex geodesics and unique extremals} The results of this subsection are related to weak $E$-mappings of bounded strongly linearly convex domains $D\subset\mathbb{C}^n$, $n\geq 2$.
Let $$G(z,\zeta):=(z-f(\zeta))\bullet\widetilde{f}(\zeta),\ z\in\mathbb{C}^n,\ \zeta\in\mathbb{D}_f.$$
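Note how $G$ is related to the function $\varphi_z$ from Definition~\ref{21e}: by the condition (3') of Definition~\ref{21} and the identity $a\bullet\overline{b}=\langle a,b\rangle$, for $\zeta\in\mathbb{T}$ we have $$G(z,\zeta)=(z-f(\zeta))\bullet\zeta\rho(\zeta)\overline{\nu_D(f(\zeta))}=\zeta\rho(\zeta)\langle z-f(\zeta),\nu_D(f(\zeta))\rangle=\zeta\rho(\zeta)\varphi_z(\zeta).$$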
\begin{propp}\label{1} Let $D\subset\mathbb{C}^n$, $n\geq 2$, be a bounded strongly linearly convex domain and let $f:\mathbb{D}\longrightarrow D$ be a weak $E$-mapping. Then there exist an open set $W\supset\overline D\setminus f(\mathbb{T})$ and a holomorphic mapping $F:W\longrightarrow\mathbb{D}$ such that for any $z\in W$ the number $F(z)$ is the unique solution of the equation $G(z,\zeta)=0,\ \zeta\in\mathbb{D}$. In particular, $F\circ f=\id_{\mathbb{D}}$. \end{propp}
In the sequel we will strengthen the above proposition for domains with real analytic boundaries (see Proposition~\ref{34}).
\begin{proof}[Proof of Proposition~\ref{1}] Set $A:=\overline{D}\setminus f(\mathbb{T})$. Since $D$ is strongly linearly convex, $\varphi_z$ does not vanish on $\mathbb{T}$ for any $z\in A$, so by a continuity argument the condition (4) of Definition~\ref{21e} holds for every $z$ in some open set $W\supset A$. For a fixed $z\in W$ we have $$G(z,\zeta)=\zeta\rho(\zeta)\varphi_z(\zeta),\ \zeta\in\mathbb{T},$$ so $\wind G(z,\cdotp)=1$. Since $G(z,\cdotp)\in{\mathcal O}(\mathbb{D})$, it has exactly one root $F(z)$ in $\mathbb{D}$, and this root is simple. Hence $G(z,F(z))=0$ and $\frac{\partial G}{\partial\zeta}(z,F(z))\neq 0$. By the Implicit Function Theorem, $F$ is holomorphic in $W$. The equality $F(f(\zeta))=\zeta$ for $\zeta\in\mathbb{D}$ is clear. \end{proof}
From the proposition above we immediately get the following \begin{corr}\label{5} A weak $E$-mapping $f:\mathbb{D}\longrightarrow D$ of a bounded strongly linearly convex domain $D\subset\mathbb{C}^n$, $n\geq 2$, is a complex geodesic. In particular, $$c_{D}(f(\zeta),f(\xi))=\widetilde k_D(f(\zeta),f(\xi))\text{\,\ and\,\, }\gamma_D(f(\zeta);f'(\zeta))=\kappa_D(f(\zeta);f'(\zeta)),$$ for any $\zeta,\xi\in\mathbb{D}$. \end{corr}
Using left inverses of weak $E$-mappings we may prove the uniqueness of extremals. \begin{propp}\label{2} Let $D\subset\mathbb{C}^n$, $n\geq 2$, be a bounded strongly linearly convex domain and let $f:\mathbb{D}\longrightarrow D$ be a weak $E$-mapping. Then for any $\xi\in(0,1)$ the mapping $f$ is a unique $\widetilde{k}_D$-extremal for $z=f(0)$, $w=f(\xi)$ \emph{(}resp. a unique $\kappa_D$-extremal for $z=f(0)$, $v=f'(0)$\emph{)}. \end{propp} \begin{proof}
Suppose that $g$ is a $\widetilde{k}_D$-extremal for $z,w$ (resp. a $\kappa_D$-extremal for $z,v$) such that $g(0)=z$, $g(\xi)=w$ (resp. $g(0)=z$, $g'(0)=v$). Our aim is to show that $f=g$. Proposition~\ref{1} provides us with the mapping $F$, which is a left inverse for $f$. By the Schwarz Lemma, $F$ is a left inverse for $g$, as well, that is $F\circ g=\text{id}_{\mathbb{D}}$. We claim that $\lim_{\mathbb{D}\ni\zeta\to\zeta_0}g(\zeta)=f(\zeta_0)$ for any $\zeta_0\in\mathbb{T}$ (in particular, we shall show that the limit does exist).
Assume the contrary. Then there are $\zeta_0\in\mathbb{T}$ and a sequence $\{\zeta_m\}\subset\mathbb{D}$ convergent to $\zeta_0$ such that the limit $Z:=\lim_{m\to\infty}g(\zeta_m)\in\overline{D}$ exists and is not equal to $f(\zeta_0)$. We have $G(z,F(z))=0$, so putting $z=g(\zeta_m)$ we infer that $$0=(g(\zeta_m)-f(F(g(\zeta_m))))\bullet \widetilde{f}(F(g(\zeta_m)))=(g(\zeta_m)-f(\zeta_m))\bullet\widetilde{f}(\zeta_m). $$ Letting $m\to\infty$ we get $$0=(Z-f(\zeta_0))\bullet \widetilde{f}(\zeta_0)=\zeta_0\rho(\zeta_0)\langle Z-f(\zeta_0),\nu_D(f(\zeta_0))\rangle.$$ This means that $Z-f(\zeta_0)\in T^{\mathbb{C}}_D(f(\zeta_0))$. Since $D$ is strongly linearly convex, we deduce that $Z=f(\zeta_0)$, which is a contradiction.
Hence $g$ extends continuously on $\overline{\DD}$ and, by the maximum principle, $g=f$. \end{proof}
\begin{propp}\label{3} Let $D\subset\mathbb{C}^n$, $n\geq 2$, be a bounded strongly linearly convex domain, let $f:\mathbb{D}\longrightarrow D$ be a weak $E$-mapping and let $a$ be an automorphism of $\mathbb{D}$. Then $f\circ a$ is a weak $E$-mapping of $D$. \end{propp} \begin{proof} Set $g:=f\circ a$.
Clearly, the conditions (1') and (2') of Definition~\ref{21} are satisfied by $g$.
To prove that $g$ satisfies the condition (4) of Definition~\ref{21e} fix a point $z\in D$. Let $\varphi_{z,f}$, $\varphi_{z,g}$ be the functions appearing in the condition (4) for $f$ and $g$ respectively. Then $\varphi_{z,g}=\varphi_{z,f}\circ a$. Since $a$ maps $\mathbb{T}$ to $\mathbb{T}$ diffeomorphically, we have $\wind\varphi_{z,g}=\pm\wind\varphi_{z,f}=0$.
It remains to show that the condition (3') of Definition~\ref{21} is also satisfied by $g$. Note that the function $\widetilde a(\zeta):=\zeta/a(\zeta)$ has a holomorphic branch of the logarithm in a neighborhood of $\mathbb{T}$. This follows from the fact that $\wind \widetilde a=0$; however, the existence of the holomorphic branch may be shown in an elementary way. Actually, it suffices to prove that $\widetilde a(\mathbb{T})\neq\mathbb{T}$. Expand $a$ as $$a(\zeta)=e^{it}\frac{\zeta-b}{1-\overline b\zeta}$$ with some $t\in\mathbb{R}$, $b\in\mathbb{D}$ and observe that $\widetilde a$ does not attain the value $-e^{-it}$. Indeed, if $\zeta/a(\zeta)=-e^{-it}$ for some $\zeta\in\mathbb{T}$, then $$\frac{1-\overline b\zeta}{1-b\overline\zeta}=-1,$$ so $2=2\re(b\overline\zeta)\leq 2|b|$, which is impossible.
Concluding, there exists a function $v$ holomorphic in a neighborhood of $\mathbb{T}$ such that $$\frac{\zeta}{a(\zeta)}=e^{i v(\zeta)}.$$ Note that $v(\mathbb{T})\subset\mathbb{R}$. Expanding $v$ in a Laurent series $$v(\zeta)=\sum_{k=-\infty}^{\infty}a_k\zeta^k,\ \zeta\text{ near }\mathbb{T},$$ we infer that $a_{-k}=\overline a_k$, $k\in\mathbb{Z}$. Therefore, $$v(\zeta)=a_0+\sum_{k=1}^\infty 2\re(a_k\zeta^k)=\re\left(a_0+2\sum_{k=1}^\infty a_k\zeta^k\right),\ \zeta\in\mathbb{T}.$$ Hence, there is a function $h$ holomorphic in a neighborhood of $\overline{\DD}$ such that $v=\im h$ on $\mathbb{T}$. Put $u:=h-iv$. Then $u\in{\mathcal O}(\mathbb{T})$ and $u(\mathbb{T})\subset\mathbb{R}$.
Take $\rho$ as in the condition (3') of Definition~\ref{21} for $f$ and define $$r(\zeta):=\rho(a(\zeta))e^{u(\zeta)},\ \zeta\in\mathbb{T}.$$ Let us compute \begin{eqnarray*}\zeta r(\zeta)\overline{\nu_D(g(\zeta))}=\zeta e^{u(\zeta)}\rho(a(\zeta))\overline{\nu_D(f(a(\zeta)))}&=&\\=a(\zeta)e^{h(\zeta)}\rho(a(\zeta))\overline{\nu_D(f(a(\zeta)))} &=&e^{h(\zeta)}\widetilde{f}(a(\zeta)),\quad\zeta\in\mathbb{T}. \end{eqnarray*} Thus $\zeta\longmapsto\zeta r(\zeta)\overline{\nu_D(g(\zeta))}$ extends holomorphically to a function of class ${\mathcal O}(\mathbb{D})\cap{\mathcal C}^{1/2}(\overline{\DD})$. \end{proof}
\begin{corr}\label{28} A weak $E$-mapping $f:\mathbb{D}\longrightarrow D$ of a bounded strongly linearly convex domain $D\subset\mathbb{C}^n$, $n\geq 2$, is a unique $\widetilde{k}_D$-extremal for $f(\zeta),f(\xi)$ \emph{(}resp. a unique $\kappa_D$-extremal for $f(\zeta),f'(\zeta)$\emph{)}, where $\zeta,\xi\in\mathbb{D}$, $\zeta\neq\xi$. \end{corr}
\subsection{Generalization of Proposition~\ref{1}} The results obtained in this subsection will play an important role in the sequel.
We start with \begin{propp}\label{4} Let $f:\mathbb{D}\longrightarrow D$ be an $E$-mapping. Then the function $f'\bullet\widetilde{f}$ is a positive constant. \end{propp} \begin{proof} Consider the curve $$\mathbb{R}\ni t\longmapsto f(e^{it})\in\partial D.$$ Any of its tangent vectors $ie^{it}f'(e^{it})$ belongs to $T_{D}^\mathbb{R}(f(e^{it}))$, i.e. $$\re\langle ie^{it}f'(e^{it}),\nu_D(f(e^{it}))\rangle=0.$$ Thus for $\zeta\in\mathbb{T}$ $$0=\rho(\zeta)\re\langle i\zeta f'(\zeta),\nu_D(f(\zeta))\rangle=-\im\left(f'(\zeta)\bullet\widetilde{f}(\zeta)\right),$$ so the holomorphic function $f'\bullet\widetilde{f}$ is a real constant $C$.
Considering the curve $$[0,1+\varepsilon)\ni t\longmapsto f(t)\in\overline D$$ for small $\varepsilon>0$ and noting that $f([0,1))\subset D$, $f(1)\in\partial D$, we see that the derivative of $r\circ f$ at a point $t=1$ is non-negative, where $r$ is a defining function of $D$. Hence $$0\leq\re\langle f'(1),\nu_D(f(1))\rangle =\frac{1}{\rho(1)} \re( f'(1)\bullet\widetilde{f}(1))= \frac{C}{\rho(1)},$$ i.e. $C\geq 0$. For $\zeta\in\mathbb{T}$ $$\frac{f(\zeta)-f(0)}{\zeta}\bullet\widetilde{f}(\zeta)=\rho(\zeta)\langle f (\zeta)-f(0),\nu_D(f(\zeta))\rangle.$$ This function has the winding number equal to $0$. Therefore, the function $$g(\zeta):=\frac{f(\zeta)-f(0)}{\zeta}\bullet\widetilde{f}(\zeta),$$ which is holomorphic in a neighborhood of $\overline{\DD}$, does not vanish in $\mathbb{D}$. In particular, $C=g(0)\neq 0$. \end{proof} The function $\rho$ is defined up to a constant factor. \textbf{We choose $\rho$ so that $ f'\bullet\widetilde{f}\equiv 1$}, i.e. \begin{equation}\label{rho}\rho(\zeta)^{-1}=\langle\zeta f'(\zeta),\nu_D(f(\zeta))\rangle,\ \zeta\in\mathbb{T}.\end{equation} In that way $\widetilde{f}$ and $\rho$ are uniquely determined by $f$.
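For example, for the stationary mapping $f(\zeta):=(\zeta,0,\ldots,0)$ of the unit ball $\mathbb{B}_n$ we have $f'(\zeta)=(1,0,\ldots,0)$ and $\nu_{\mathbb{B}_n}(f(\zeta))=(\zeta,0,\ldots,0)$ for $\zeta\in\mathbb{T}$, so \eqref{rho} gives $$\rho(\zeta)^{-1}=\langle\zeta f'(\zeta),\nu_{\mathbb{B}_n}(f(\zeta))\rangle=\zeta\overline\zeta=1.$$ Thus $\rho\equiv 1$, $\widetilde f\equiv(1,0,\ldots,0)$ and indeed $f'\bullet\widetilde{f}\equiv 1$.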
\begin{propp} An $E$-mapping $f:\mathbb{D}\longrightarrow D$ is injective in $\overline{\DD}$. \end{propp}
\begin{proof}The function $f$ has a left inverse on $\mathbb{D}$, so it suffices to check the injectivity on $\mathbb{T}$. Suppose that $f(\zeta_1)=f(\zeta_2)$ for some $\zeta_1,\zeta_2\in\mathbb{T}$, $\zeta_1\neq\zeta_2$, and consider the curves $$\gamma_j:[0,1]\ni t\longmapsto f(t\zeta_j)\in\overline D,\ j=1,2.$$ Since $$\re\langle\gamma_j'(1),\nu_D(f(\zeta_j))\rangle=\re\langle\zeta_jf'(\zeta_j),\nu_D(f(\zeta_j))\rangle
=\rho(\zeta_j)^{-1}\neq 0,$$ the curves $\gamma_j$ hit $\partial D$ transversally at their common point $f(\zeta_1)$. We claim that there exists $C>0$ such that for $t\in(0,1)$ close to $1$ there is $s_t\in(0,1)$ satisfying $\widetilde k_D(f(t\zeta_1),f(s_t\zeta_2))<C$. This will finish the proof, since $$\widetilde k_D(f(t\zeta_1),f(s_t\zeta_2))=p(t\zeta_1,s_t\zeta_2)\to\infty,\ t\to 1.$$ We may assume that $f(\zeta_1)=0$ and $\nu_D(0)=(1,0,\ldots,0)=:e_1$. There exists a ball $B\subset D$ tangent to $\partial D$ at $0$. Using a homothety, if necessary, one can assume that $B=\mathbb{B}_n-e_1$. From the transversality of $\gamma_1,\gamma_2$ to $\partial D$ there exists a cone $$A:=\{z\in\mathbb{C}^n:-\re z_1>k|z|\},\quad k>0,$$ such that $\gamma_1(t),\gamma_2(t)\in A\cap B$ if $t\in(0,1)$ is close to $1$. For $z\in A$ let $k_z>k$ be a positive number satisfying the equality $$|z|=\frac{-\re z_1}{k_z}.$$
Note that for any $a\in\gamma_1((0,1))$ sufficiently close to $0$ one may find $b\in\gamma_2((0,1))\cap A\cap B$ such that $\re b_1=\re a_1$. To get a contradiction it suffices to show that $\widetilde k_D(a,b)$ is bounded from above by a constant independent of $a$ and $b$.
We have the following estimate \begin{multline*}\widetilde k_D(a,b)\leq\widetilde k_{\mathbb{B}_n-e_1}(a,b)=\widetilde k_{\mathbb{B}_n}(a+e_1,b+e_1)=\\=\tanh^{-1}\sqrt{1-\frac{(1-|a+e_1|^2)(1-|b+e_1|^2)}{|1-\langle a+e_1,b+e_1 \rangle|^2}}.\end{multline*} The last expression is bounded from above if and only if $$\frac{(1-|a+e_1|^2)(1-|b+e_1|^2)}{|1-\langle a+e_1,b+e_1\rangle|^2}$$ is bounded from below by some positive constant. We estimate $$\frac{(1-|a+e_1|^2)(1-|b+e_1|^2)}{|1-\langle a+e_1,b+e_1\rangle|^2}=\frac{(2\re a_1+|a|^2)(2\re b_1+|b|^2)}{|\langle a, b\rangle+a_1+\overline b_1|^2}=$$$$=\frac{\left(2\re a_1+\frac{(\re a_1)^2}{k^2_a}\right)\left(2\re a_1+\frac{(\re a_1)^2}{k^2_b}\right)}{|\langle a, b\rangle+2\re a_1+i\im a_1-i\im b_1|^2}\geq\frac{(\re a_1)^2\left(2+\frac{\re a_1}{k^2_a}\right)\left(2+\frac{\re a_1}{k^2_b}\right)}{2|\langle a, b\rangle+i\im a_1-i\im b_1|^2+2|2\re a_1|^2}$$$$\geq\frac{(\re a_1)^2\left(2+\frac{\re a_1}{k^2_a}\right)\left(2+\frac{\re a_1}{k^2_b}\right)}{2(|a||b|+|a|+|b|)^2+8(\re a_1)^2}=\frac{(\re a_1)^2\left(2+\frac{\re a_1}{k^2_a}\right)\left(2+\frac{\re a_1}{k^2_b}\right)}{2\left(\frac{(-\re a_1)^2}{k^2_ak^2_b}-\frac{\re a_1}{k_a}-\frac{\re a_1}{k_b}\right)^2+8(\re a_1)^2}$$$$=\frac{\left(2+\frac{\re a_1}{k^2_a}\right)\left(2+\frac{\re a_1}{k^2_b}\right)}{2\left(\frac{-\re a_1}{k^2_ak^2_b}+\frac{1}{k_a}+\frac{1}{k_b}\right)^2+8}>\frac{1}{2(1+2/k)^2+8}.$$ This finishes the proof. \end{proof}
Assume now that we are in the setting of Proposition~\ref{1} and that $D$ has real analytic boundary. Our aim is to replace $W$ with a neighborhood of $\overline D$.
\begin{remm}\label{przed34} For $\zeta_0\in\mathbb{D}_f$ we have $G(f(\zeta_0),\zeta_0)=0$ and $\frac{\partial G}{\partial\zeta}(f(\zeta_0),\zeta_0)=-1$. By the Implicit Function Theorem there exist neighborhoods $U_{\zeta_0},V_{\zeta_0}$ of $f(\zeta_0),\zeta_0$ respectively and a holomorphic function $F_{\zeta_0}:U_{\zeta_0}\longrightarrow V_{\zeta_0}$ such that for any $z\in U_{\zeta_0}$ the point $F_{\zeta_0}(z)$ is the unique solution of the equation $G(z,\zeta)=0$, $\zeta\in V_{\zeta_0}$.
In particular, if $\zeta_0\in\mathbb{D}$ then $F_{\zeta_0}=F$ near $f(\zeta_0)$. \end{remm}
\begin{propp}\label{34} Let $f:\mathbb{D}\longrightarrow D$ be an $E$-mapping. Then there exist arbitrarily small neighborhoods $U$, $V$ of $\overline D$, $\overline{\DD}$ respectively such that for any $z\in U$ the equation $G(z,\zeta)=0$, $\zeta\in V$, has exactly one solution. \end{propp} \begin{proof} In view of Proposition~\ref{1} and Remark~\ref{przed34} it suffices to prove that there exist neighborhoods $U$, $V$ of $\overline D$, $\overline{\DD}$ respectively such that for any $z\in U$ the equation $G(z,\cdotp)=0$ has at most one solution $\zeta\in V$.
Assume the contrary. Then for any neighborhoods $U$ of $\overline D$ and $V$ of $\overline{\DD}$ there are $z\in U$, $\zeta_1,\zeta_2\in V$, $\zeta_1\neq\zeta_2$ such that $G(z,\zeta_1)=G(z,\zeta_2)=0$. For $m\in\mathbb{N}$ put $$U_m:=\{z\in\mathbb{C}^n:\dist(z,D)<1/m\},$$ $$V_m:=\{\zeta\in\mathbb{C}:\dist(\zeta,\mathbb{D})<1/m\}.$$ There exist $z_m\in U_m$, $\zeta_{m,1},\zeta_{m,2}\in V_m$, $\zeta_{m,1}\neq\zeta_{m,2}$ such that $G(z_m,\zeta_{m,1})=G(z_m,\zeta_{m,2})=0$. Passing to a subsequence we may assume that $z_m\to z_0\in\overline D$. Analogously we may assume $\zeta_{m,1}\to\zeta_1\in \overline{\DD}$ and $\zeta_{m,2}\to\zeta_2\in\overline{\DD}$. Clearly, $G(z_0,\zeta_1)=G(z_0,\zeta_2)=0$. Let us consider a few cases.
1) If $\zeta_1,\zeta_2\in\mathbb{T}$, then $G(z_0,\zeta_j)=0$ is equivalent to $$\langle z_0-f(\zeta_j), \nu_D(f(\zeta_j))\rangle=0,\ j=1,2,$$ consequently $z_0-f(\zeta_j)\in T^{\mathbb{C}}_D(f(\zeta_j))$. By the strong linear convexity of $D$ we get $z_0=f(\zeta_j)$. But $f$ is injective in $\overline{\DD}$, so $\zeta_1=\zeta_2=:\zeta_0$. It follows from Remark~\ref{przed34} that in a sufficiently small neighborhood of $(z_0,\zeta_0)$ all solutions of the equation $G(z,\zeta)=0$ are of the form $(z,F_{\zeta_0}(z))$. Points $(z_m,\zeta_{m,1})$ and $(z_m,\zeta_{m,2})$ belong to this neighborhood for large $m$, which gives a contradiction.
2) If $\zeta_1\in\mathbb{T}$ and $\zeta_2\in\mathbb{D}$, then analogously as above we deduce that $z_0=f(\zeta_1)$. Let us take an arbitrary sequence $\{\eta_m\}\subset\mathbb{D}$ convergent to $\zeta_1$. Then $f(\eta_m) \in D$ and $f(\eta_m)\to z_0$, so the sequence $G(f(\eta_m),\cdotp)$ converges to $G(z_0,\cdotp)$ uniformly on $\mathbb{D}$. Since $G(z_0,\cdotp)\not\equiv 0$, $G(z_0,\zeta_2)=0$ and $\zeta_2\in\mathbb{D}$, we deduce from Hurwitz Theorem that for large $m$ the functions $G(f(\eta_m),\cdotp)$ have roots $\theta_m\in\mathbb{D}$ such that $\theta_m\to\zeta_2$. Hence $G(f(\eta_m),\theta_m)=0$ and from the uniqueness of solutions in $D\times\mathbb{D}$ (Proposition~\ref{1}) we have $$\theta_m=F(f(\eta_m))=\eta_m.$$ This is a contradiction, because the left side tends to $\zeta_2$ and the right one to $\zeta_1$, as $m\to\infty$.
3) We are left with the case $\zeta_1,\zeta_2\in\mathbb{D}$. If $z_0\in\overline{D}\setminus f(\mathbb{T})$ then $z_0\in W$. In $W\times\mathbb{D}$ all solutions of the equation $G=0$ are of the form $(z,F(z))$, $z\in W$. But for large $m$ the points $(z_m,\zeta_{m,1})$, $(z_m,\zeta_{m,2})$ belong to $W\times\mathbb{D}$, which is a contradiction with the uniqueness.
If $z_0\in f(\mathbb{T})$, then $z_0=f(\zeta_0)$ for some $\zeta_0\in\mathbb{T}$. Clearly, $G(f(\zeta_0),\zeta_0)=0$, whence $G(z_0,\zeta_0)=G(z_0,\zeta_1)=0$ and $\zeta_0\in\mathbb{T}$, $\zeta_1\in \mathbb{D}$. This is just the case 2), which has already been considered. \end{proof}
\begin{corr} There are neighborhoods $U$, $V$ of $\overline D$ and $\overline{\DD}$ respectively with $V\Subset\mathbb{D}_f$, such that the function $F$ extends holomorphically to $U$. Moreover, all solutions of the equation $G|_{U\times V}=0$ are of the form $(z,F(z))$, $z\in U$.
In particular, $F\circ f=\id_{V}$. \end{corr}
\section{H\"older estimates}\label{22}
\begin{df}\label{30} For a given $c>0$ let the family $\mathcal{D}(c)$ consist of all pairs $(D,z)$, where $D\subset\mathbb{C}^n$, $n\geq 2$, is a bounded pseudoconvex domain with real $\mathcal C^2$ boundary and $z\in D$, satisfying \begin{enumerate} \item $\dist(z,\partial D)\geq 1/c$; \item the diameter of $D$ is not greater than $c$ and $D$ satisfies the interior ball condition with a radius $1/c$; \item for any $x,y\in D$ there exist $m\leq 8 c^2$ and open balls $B_0,\ldots,B_m\subset D$ of radius $1/(2c)$ such that $x\in B_0$, $y\in B_m$ and the distance between the centers of the balls $B_j$, $B_{j+1}$ is not greater than $1/(4c)$ for $j=0,\ldots,m-1$; \item for any open ball $B\subset\mathbb{C}^n$ of radius not greater than $1/c$, intersecting non-emptily with $\partial D$, there exists a mapping $\Phi\in{\mathcal O}(\overline{D},\mathbb{C}^n)$ such that \begin{enumerate} \item for any $w\in\Phi(B\cap\partial D)$ there is a ball of radius $c$ containing $\Phi(D)$ and tangent to $\partial\Phi(D)$ at $w$ (let us call it the ``exterior ball condition'' with a radius $c$); \item $\Phi$ is biholomorphic in a neighborhood of $\overline B$ and $\Phi^{-1}(\Phi(B))=B$; \item entries of all matrices $\Phi'$ on $B\cap\overline D$ and $(\Phi^{-1})'$ on $\Phi(B\cap\overline{D})$ are bounded in modulus by $c$; \item $\dist(\Phi(z),\partial\Phi(D))\geq 1/c$; \end{enumerate}
\item the normal vector $\nu_D$ is Lipschitz with a constant $2c$, that is $$|\nu_D(a)-\nu_D(b)|\leq 2c|a-b|,\ a,b\in \partial D;$$ \item the $\varepsilon$-hull of $D$, i.e. a domain $D_{\varepsilon}:=\{w\in\mathbb C^n:\dist (w,D)<\varepsilon\}$, is strongly pseudoconvex for any $\varepsilon\in (0,1/c).$ \end{enumerate} \end{df}
Recall that the {\it interior ball condition} with a radius $r>0$ means that for any point $a\in\partial D$ there is $a'\in D$ and a ball $B_n(a',r)\subset D$ tangent to $\partial D$ at $a$. Equivalently $$D=\bigcup_{a'\in D'}B_n(a',r)$$ for some set $D'\subset D$.
It can be shown that the conditions (2) and (5) may be expressed in terms of the boundedness of the normal curvature, the boundedness of the domain, and the condition (3). This, however, lies beyond the scope of this paper and requires some very technical arguments, so we omit the proof of this fact. The reason why we decided to use (2) in such a form is its connection with the condition (3), which allows us to simplify the proof in some places.
\begin{rem}\label{con} Note that any convex domain satisfying conditions (1)--(4) of Definition~\ref{30} satisfies conditions (5) and (6), as well.
Actually, it follows from (2) that for any $a\in\partial D$ there exists a ball $B_n(a',1/c)\subset D$ tangent to $\partial D$ at $a$. Then $$\nu_D(a)=\frac{a'-a}{|a'-a|}=c(a'-a).$$ Hence $$|\nu_D(a)-\nu_D(b)|=c|a'-a-b'+b|=c|a'-b'-(a-b)|\leq c|a'-b'|+c|a-b|.$$ Since $D$ is convex, we have $|a'-b'|\leq|a-b|$, which gives (5).
The condition (6) is also clear --- for any $\varepsilon>0$ an $\varepsilon$-hull of a strongly convex domain is strongly convex. \end{rem}
\begin{rem} For a convex domain $D$ the condition (3) of Definition \ref{30} follows from the condition (2).
Indeed, for two points $x,y\in D$ take two balls of radius $1/(2c)$ containing them and contained in $D$. Then divide the interval between the centers of the balls into $[4c^2]+1$ equal parts and take balls of radius $1/(2c)$ with centers at the points of the partition.
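Let us briefly verify the constants in this construction. The centers $x'$, $y'$ of the two balls lie in $D$, so $|x'-y'|\leq c$ by the condition (2). Dividing the segment $[x',y']$ into $[4c^2]+1$ equal parts gives consecutive centers at distance at most $$\frac{c}{[4c^2]+1}<\frac{c}{4c^2}=\frac{1}{4c},$$ and the number $m$ of steps equals $[4c^2]+1\leq 4c^2+1\leq 8c^2$ (note that $c\geq\sqrt{2}$, since $D$ contains a ball of radius $1/c$ while its diameter is at most $c$, whence $2/c\leq c$). Finally, by convexity all the balls of radius $1/(2c)$ centered at the points of the partition are contained in the convex hull of the two end balls, hence in $D$.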
Note also that if $D$ is strongly convex and satisfies the interior ball condition with a radius $1/c$ and the exterior ball condition with a radius $c$, one can take $\Phi:=\id_{\mathbb{C}^n}$. \end{rem}
\begin{rem}\label{D(c),4} For a strongly pseudoconvex domain $D$, a constant $c'>0$, and any $z\in D$ such that $\dist(z,\partial D)>1/c'$, there exists $c=c(c')>0$ satisfying $(D,z)\in\mathcal{D}(c)$.
Indeed, the conditions (1)--(3) and (5)--(6) are clear. Only (4) is non-trivial.
The construction of the mapping $\Phi$ amounts to the construction of Forn\ae ss peak functions. Actually, apply directly Proposition 1 from \cite{For} to any boundary point of $\partial D$ (obviously $D$ has a Stein neighborhood basis). This gives a covering of $\partial D$ with a finite number of balls $B_j$, maps $\Phi_j\in{\mathcal O}(\overline{D},\mathbb{C}^n)$ and strongly convex $C^\infty$-smooth domains $C_j$, $j=1,\ldots, N$, such that \begin{itemize}\item $\Phi_j(D)\subset C_j$; \item $\Phi_j(\overline D)\subset\overline C_j$; \item $\Phi_j(B_j\setminus\overline D)\subset\mathbb C^n\setminus\overline C_j$; \item $\Phi_j^{-1}(\Phi_j(B_j))=B_j$;
\item $\Phi_j|_{B_j}: B_j\longrightarrow \Phi_j(B_j)$ is biholomorphic. \end{itemize} Therefore, one may choose $c>0$ such that every $C_j$ satisfies the exterior ball condition with $c$, i.e. for any $x\in \partial C_j$ there is a ball of radius $c$ containing $C_j$ and tangent to $\partial C_j$ at $x$, every ball of radius $1/c$ intersecting non-emptily with $\partial D$ is contained in some $B_j$ (here one may use a standard argument invoking the Lebesgue number) and the conditions (c), (d) are also satisfied (with $\Phi:=\Phi_j$). \end{rem}
In this section we use the words `uniform', `uniformly' if $(D,z)\in \mathcal D(c)$. This means that the estimates will depend only on $c$ and will be independent of $D$ and $z$ whenever $(D,z)\in\mathcal{D}(c)$, as well as of the $E$-mappings of $D$ mapping $0$ to $z$. Moreover, in what follows we assume that $D$ is a strongly linearly convex domain with real-analytic boundary.
\begin{prop}\label{7}
Let $f:(\mathbb{D},0)\longrightarrow(D,z)$ be an $E$-mapping. Then $$\dist(f(\zeta),\partial D)\leq C(1-|\zeta|),\ \zeta\in\overline{\DD}$$ with $C>0$ uniform if $(D,z)\in\mathcal{D}(c)$. \end{prop} \begin{proof} There exists a uniform $C_1$ such that $$\text{if }\dist(w,\partial D)\geq 1/c\text{ then }k_D(w,z)<C_1.$$ Indeed, let $\dist(w,\partial D)\geq 1/c$ and let balls $B_0,\ldots,B_m$ with centers $b_0,\ldots,b_m$ be chosen to the points $w$, $z$ as in the condition (3) of Definition~\ref{30}. Then
\begin{multline*}k_D(w,z)\leq k_D(w,b_0)+\sum_{j=0}^{m-1}k_D(b_j,b_{j+1})+k_D(b_m,z)\leq\\\leq k_{B_n(w,1/c)}(w,b_0)+\sum_{j=0}^{m-1}k_{B_j}(b_j,b_{j+1})+k_{B_n(z,1/c)}(b_m,z)=\\=p\left(0,\frac{|w-b_0|}{1/c}\right)+\sum_{j=0}^{m-1}p\left(0,\frac{|b_j-b_{j+1}|}{1/(2c)}\right)+
p\left(0,\frac{|b_m-z|}{1/c}\right)\leq\\\leq(m+2)p\left(0,\frac{1}{2}\right)\leq(8c^2+2)p\left(0,\frac{1}{2}\right)=:C_1. \end{multline*}
If $\zeta\in\mathbb{D}$ is such that $\dist(f(\zeta),\partial D)\geq 1/c$ then $$k_D(f(0),f(\zeta))\leq C_2-\frac{1}{2}\log\dist(f(\zeta),\partial D)$$ with a uniform $C_2:=C_1+\frac{1}{2}\log c$.
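Indeed, in this case $\dist(f(\zeta),\partial D)\leq c$, since the diameter of $D$ does not exceed $c$; hence $-\frac{1}{2}\log\dist(f(\zeta),\partial D)\geq-\frac{1}{2}\log c$ and therefore $$C_2-\frac{1}{2}\log\dist(f(\zeta),\partial D)\geq C_1+\frac{1}{2}\log c-\frac{1}{2}\log c=C_1>k_D(f(0),f(\zeta)).$$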
In the other case, i.e. when $\dist(f(\zeta),\partial D)<1/c$, denote by $\eta$ the nearest point to $f(\zeta)$ lying on $\partial D$. Let $w\in D$ be a center of a ball $B$ of radius $1/c$ tangent to $\partial D$ at $\eta$. By the condition (2) of Definition~\ref{30} we have $B\subset D$. Hence
\begin{multline*}k_D(f(0),f(\zeta))\leq k_D(f(0),w)+k_D(w,f(\zeta))\leq\\\leq C_1+k_B(w,f(\zeta))\leq C_1+\frac{1}{2}\log 2-\frac{1}{2}\log\left(1-\frac{|f(\zeta)-w|}{1/c}\right)=\\=C_1+\frac{1}{2}\log 2-\frac{1}{2}\log(c\dist(f(\zeta),\partial B))=C_3-\frac{1}{2}\log\dist(f(\zeta),\partial D) \end{multline*} with a uniform $C_3:=C_1+\frac{1}{2}\log\frac{2}{c}$.
We have obtained the same type of estimate in both cases. On the other hand, by Corollary~\ref{5} $$k_D(f(0),f(\zeta))=p(0,\zeta)\geq-\frac{1}{2}\log(1-|\zeta|),$$ which finishes the proof. \end{proof}
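In the proof above we used the explicit formula for the Poincar\'e distance, which we record for further reference: $$p(0,\zeta)=\frac{1}{2}\log\frac{1+|\zeta|}{1-|\zeta|}\geq\frac{1}{2}\log\frac{1}{1-|\zeta|}=-\frac{1}{2}\log(1-|\zeta|),\ \zeta\in\mathbb{D}.$$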
Recall that we have assumed that $\rho$ is of the form~\eqref{rho}. \begin{prop}\label{9} Let $f:(\mathbb{D},0)\longrightarrow(D,z)$ be an $E$-mapping. Then $$C_1<\rho(\zeta)^{-1}<C_2,\ \zeta\in\mathbb{T},$$ where $C_1,C_2$ are uniform if $(D,z)\in\mathcal{D}(c)$. \end{prop} \begin{proof} For the upper estimate fix $\zeta_0\in\mathbb{T}$. Set $B:=B_n(f(\zeta_0),1/c)$ and let $\Phi\in{\mathcal O}(\overline{D},\mathbb{C}^n)$ be as in the condition (4) of Definition~\ref{30} for $B$. One can assume that $f(\zeta_0)=\Phi(f(\zeta_0))=0$ and $\nu_D(0)=\nu_{\Phi(D)}(0)=(1,0,\ldots,0)$. Then $\Phi(D)$ is contained in the half-space $\{w\in\mathbb{C}^n:\re w_1<0\}$. Putting $h:=\Phi\circ f$ we have $$h_1(\mathbb{D})\subset\{w_1\in\mathbb{C}:\re w_1<0\}.$$ By the Schwarz Lemma on the half-plane
\begin{equation}\label{schh1}|h_1'(t\zeta_0)|\leq\frac{-2\re h_1(t\zeta_0)}{1-|t\zeta_0|^2}.\end{equation}
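For the reader's convenience, let us recall how \eqref{schh1} follows from the Schwarz--Pick Lemma: the hyperbolic density of the left half-plane $H:=\{w\in\mathbb{C}:\re w<0\}$ equals $\lambda_H(w)=\frac{1}{-2\re w}$, so for any holomorphic $h_1:\mathbb{D}\longrightarrow H$ $$\frac{|h_1'(\lambda)|}{-2\re h_1(\lambda)}\leq\frac{1}{1-|\lambda|^2},\ \lambda\in\mathbb{D},$$ and it suffices to take $\lambda=t\zeta_0$.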
Let $\delta$ be the signed boundary distance of $\Phi(D)$, i.e. $$\delta(x):=\begin{cases}-\dist(x,\partial\Phi(D)),\ x\in\Phi(D)\\\ \ \ \dist(x,\partial\Phi(D)),\ x\notin\Phi(D).\end{cases}$$ It is a defining function of $\Phi(D)$ in a neighborhood of $0$ (recall that $\Phi^{-1}(\Phi(B))=B$). Observe that $$\delta(x)=\delta(0)+\re\langle\nabla\delta(0), x\rangle+O(|x|^2)=\re x_1+O(|x|^2).$$
If $x\in\Phi(D)$ tends transversally to $0$, then the angle between the vector $x$ and the hyperplane $\{w\in\mathbb{C}^n:\re w_1=0\}$ is separated from $0$, i.e. its sine satisfies $(-\re x_1)/|x|>\varepsilon$ for some $\varepsilon>0$ independent of $x$. Thus $$\frac{\delta(x)}{\re x_1}=1+O(|x|)\text{ as }x\to 0\text{ transversally. }$$ Consequently \begin{equation}\label{50}-\re x_1\leq 2\dist(x,\partial\Phi(D))\text{ as }x\to 0\text{ transversally. }\end{equation}
We know that $t\longmapsto f(t\zeta_0)$ hits $\partial D$ transversally. Therefore, $t\longmapsto h(t\zeta_0)$ hits $\partial \Phi(D)$ transversally, as well. Indeed, we have \begin{multline}\label{hf}\left\langle\left.\frac{d}{dt}h(t\zeta_0)\right|_{t=1},\nu_{\Phi(D)}(h(\zeta_0))\right\rangle=\left\langle \Phi'(0)f'(\zeta_0)\zeta_0,\frac{(\Phi^{-1})'(0)^*\nabla r(0)}{|(\Phi^{-1})'(0)^*\nabla r(0)|}\right\rangle=\\=\frac{\langle\zeta_0 f'(\zeta_0),\nabla r(0)\rangle}{|(\Phi'(0)^{-1})^*\overline{\nabla r(0)}|}=\frac{\langle\zeta_0 f'(\zeta_0),\nu_D(f(\zeta_0))|\nabla r(0)|\rangle}{|(\Phi'(0)^{-1})^*\overline{\nabla r(0)}|},\end{multline} where $r$ is a defining function of $D$. In particular,
\begin{multline*} \re \left\langle\left.\frac{d}{dt}h(t\zeta_0)\right|_{t=1},\nu_{\Phi(D)}(h(\zeta_0))\right\rangle=\re \frac{\langle\zeta_0 f'(\zeta_0),\nu_D(f(\zeta_0))|\nabla r(0)|\rangle}{|(\Phi'(0)^{-1})^*\overline{\nabla r(0)}|}=\\=\frac{\rho(\zeta_0)^{-1}|\nabla r(0)|}{|(\Phi'(0)^{-1})^*\overline{\nabla r(0)}|}\neq 0.\end{multline*} This proves that $t\longmapsto h(t\zeta_0)$ hits $\partial\Phi(D)$ transversally.
Consequently, we may put $x=h(t\zeta_0)$ into \eqref{50} to get \begin{equation}\label{hf1}\frac{-2\re h_1(t\zeta_0)}{1-|t\zeta_0|^2}\leq\frac{4\dist(h(t\zeta_0),\partial\Phi(D))}
{1-|t\zeta_0|^2},\ t\to 1.\end{equation}
But $\Phi$ is a biholomorphism near $0$, so \begin{equation}\label{nfr}\frac{4\dist(h(t\zeta_0),\partial\Phi(D))}{1-|t\zeta_0|^2}\leq C_3\frac{\dist(f(t\zeta_0),\partial D)}{1-|t\zeta_0|},\ t\to 1,\end{equation} where $C_3$ is a uniform constant depending only on $c$ (thanks to the condition (4)(c) of Definition~\ref{30}). By Proposition \ref{7}, the term on the right side of~\eqref{nfr} does not exceed some uniform constant.
It follows from \eqref{hf} that \begin{multline*}\rho(\zeta_0)^{-1}=|\langle f'(\zeta_0)\zeta_0,\nu_D(f(\zeta_0))\rangle|\leq C_4|\langle h'(\zeta_0), \nu_{\Phi(D)}(h(\zeta_0))\rangle|=\\=C_4|h_1'(\zeta_0)|=\lim_{t\to 1}C_4|h_1'(t\zeta_0)|\end{multline*} with a uniform $C_4$ (here we use the condition (4)(c) of Definition~\ref{30} again). Combining \eqref{schh1}, \eqref{hf1} and \eqref{nfr} we get the upper estimate for $\rho(\zeta_0)^{-1}.$
Now we prove the lower estimate. Let $r$ be the signed boundary distance to $\partial D$. For $\varepsilon=1/c$ the function $$\varrho(w):=-\log(\varepsilon-r(w))+\log\varepsilon,\ w\in D_\varepsilon,$$ where $D_\varepsilon$ is the $\varepsilon$-hull of $D$, is plurisubharmonic and is a defining function for $D$. Indeed, we have $$-\log(\varepsilon-r(w))=-\log\dist(w,\partial D_\varepsilon),\ w\in D_\varepsilon$$ and $D_\varepsilon$ is pseudoconvex.
Therefore, the function $$v:=\varrho\circ f:\overline{\mathbb{D}}\longrightarrow(-\infty,0]$$ is subharmonic on $\mathbb{D}$. Moreover, since $f$ maps $\mathbb{T}$ into $\partial D$, we infer that $v=0$ on $\mathbb{T}$. Furthermore, since $|f(\lambda)-z|<c$ for $\lambda\in\mathbb{D}$, we have $$|f(\lambda)-z|<\frac{1}{2c}\text{ if }|\lambda|\leq\frac{1}{2c^2}.$$ Therefore, for a fixed $\zeta_0\in\mathbb{T}$ $$M_{\zeta_0}(x):=\max_{t\in[0,2\pi]}v(\zeta_0 e^{x+it})\leq-\log\left(1+\frac{1}{2c\varepsilon}\right)=:-C_5\text{ if }x\leq-\log(2c^2).$$ Since $M_{\zeta_0}$ is convex for $x\leq 0$ and $M_{\zeta_0}(0)=0$, we get $$v(\zeta_0 e^x)\leq M_{\zeta_0}(x)\leq\frac{C_5x}{\log(2c^2)}\text{\ \ \ for \ }-\log(2c^2)\leq x\leq 0.$$ Hence (remember that $v(\zeta_0)=0$)
\begin{multline}\label{wk}\frac{C_5}{\log(2c^2)}\leq\left.\frac{d}{dx}v(\zeta_0 e^x)\right|_{x=0}=\sum_{j=1}^n\frac{\partial\varrho}{\partial z_j}(f(\zeta_0))f_j'(\zeta_0)\zeta_0=\\=\langle\zeta_0 f'(\zeta_0),\nabla\varrho(f(\zeta_0))\rangle=\rho(\zeta_0)^{-1}|\nabla\varrho(f(\zeta_0))|.\end{multline} Moreover,
\begin{multline*}|\nabla\varrho(f(\zeta_0))|= \left\langle\nabla\varrho(f(\zeta_0)),\frac{\nabla\varrho(f(\zeta_0))}{|\nabla\varrho(f(\zeta_0))|}\right\rangle_\mathbb{R} =\langle\nabla\varrho(f(\zeta_0)),\nu_D(f(\zeta_0))\rangle_\mathbb{R}=\\=\frac{\partial\varrho}{\partial\nu_D}(f(\zeta_0))=\lim_{t\to 0}\frac{\varrho(f(\zeta_0)+t\nu_D(f(\zeta_0)))-\varrho(f(\zeta_0))}{t}=\frac{1}{\varepsilon}=c, \end{multline*} as $r(a+t\nu(a))=t$ if $a\in \partial D$ and $t\in\mathbb R$ is small enough. This, together with \eqref{wk}, finishes the proof of the lower estimate. \end{proof}
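In the proof above, the convexity of $M_{\zeta_0}$ was used via the standard chord estimate: for $-\log(2c^2)\leq x\leq 0$ write $x=(1-t)\cdot(-\log(2c^2))+t\cdot 0$ with $t\in[0,1]$; then $$M_{\zeta_0}(x)\leq(1-t)M_{\zeta_0}(-\log(2c^2))+tM_{\zeta_0}(0)\leq-(1-t)C_5=\frac{C_5x}{\log(2c^2)}.$$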
\begin{prop}\label{8}
Let $f:(\mathbb{D},0)\longrightarrow (D,z)$ be an $E$-mapping. Then $$|f(\zeta_1)-f(\zeta_2)|\leq C\sqrt{|\zeta_1-\zeta_2|},\ \zeta_1,\zeta_2\in\overline{\DD},$$ where $C$ is uniform if $(D,z)\in\mathcal{D}(c)$. \end{prop} \begin{proof}
Let $\zeta_0\in\mathbb{D}$ be such that $1-|\zeta_0|<1/(cC)$, where $C$ is as in Proposition~\ref{7}. Then $B:=B_n(f(\zeta_0),1/c)$ intersects $\partial D$. Take $\Phi$ for the ball $B$ from the condition (4) of Definition~\ref{30}. Let $w$ denote the nearest point to $\Phi(f(\zeta_0))$ lying on $\partial\Phi(D)$. From the conditions (4)(b)-(c) of Definition~\ref{30} we find that there is a uniform constant $r<1$ such that the point $w$ belongs to $\Phi(B\cap\partial D)$ provided that $|\zeta_0|\geq r$.
From the condition (4)(a) of Definition~\ref{30} we get that there is $w_0$ such that $\Phi(D)\subset B_n(w_0,c)$ and the ball $B_n(w_0,c)$ is tangent to $\Phi(D)$ at $w$. Let $$h(\zeta):=(\Phi\circ f)\left(\frac{\zeta_0-\zeta}{1-\overline{\zeta_0}\zeta}\right),\ \zeta\in\mathbb{D}.$$
Then $h$ is holomorphic, $h(\mathbb{D})\subset B_n(w_0,c)$ and $h(0)=\Phi(f(\zeta_0))$. Using Lemma \ref{schw} we get \begin{multline*}|h'(0)|\leq\sqrt{c^2-|h(0)-w_0|^2}\leq\sqrt{2c(c-|\Phi(f(\zeta_0))-w_0|)}=\\
=\sqrt{2c(|w_0-w|-|\Phi(f(\zeta_0))-w_0|)}\leq\sqrt{2c}\sqrt{|\Phi(f(\zeta_0))-w|}=\\
=\sqrt{2c}\sqrt{\dist(\Phi(f(\zeta_0)),\partial\Phi(D))}.\end{multline*} Since $$h'(0)=\Phi'(f(\zeta_0))f'(\zeta_0)\left.\frac{d}{d\zeta}\frac{\zeta_0-\zeta}{1-\overline{\zeta_0}\zeta}\right|_{\zeta=0},$$ by the condition (4)(c) of Definition~\ref{30} we get $$|h'(0)|\geq C_1|f'(\zeta_0)|(1-|\zeta_0|^2)$$ with a uniform $C_1$, so $$|f'(\zeta_0)|\leq\frac{|h'(0)|}{C_1(1-|\zeta_0|^2)}\leq\frac{\sqrt{2c}}{C_1}\frac{\sqrt{\dist(\Phi(f(\zeta_0)),\partial\Phi(D))}}{1-|\zeta_0|^2}\leq C_2\frac{\sqrt{\dist(f(\zeta_0),\partial D)}}{1-|\zeta_0|^2},$$ where $C_2$ is uniform. Combining this with Proposition \ref{7} we obtain \begin{equation}\label{46}|f'(\zeta_0)|\leq C_3\frac{\sqrt{1-|\zeta_0|}}{1-|\zeta_0|^2}\leq\frac{C_3}{\sqrt{1-|\zeta_0|}},\end{equation} where the constant $C_3$ is uniform.
We have shown that \eqref{46} holds for $r\leq |\zeta_0|<1$ with a uniform $r<1$. For $|\zeta_0|<r$ we estimate in the following way $$|f'(\zeta_0)|\leq\max_{|\zeta|=r}|f'(\zeta)|\leq\frac{C_3}{\sqrt{1-r}}\leq\frac{C_4}{\sqrt{1-|\zeta_0|}}$$ with a uniform $C_4:=C_3/\sqrt{1-r}$.
Using Theorems \ref{lit1} and \ref{lit2} with $\alpha=1/2$ we finish the proof. \end{proof}
\begin{prop}\label{10a}
Let $f:(\mathbb{D} ,0)\longrightarrow (D,z)$ be an $E$-mapping. Then $$|\rho(\zeta_1)-\rho(\zeta_2)|\leq C\sqrt{|\zeta_1-\zeta_2|},\ \zeta_1,\zeta_2\in\mathbb{T},$$ where $C$ is uniform if $(D,z)\in\mathcal{D}(c)$. \end{prop}
\begin{proof}It suffices to prove that there exist uniform $C,C_1>0$ such that $$|\rho(\zeta_1)-\rho(\zeta_2)|\leq C\sqrt{|\zeta_1-\zeta_2|},\ \zeta_1,\zeta_2\in\mathbb{T},\ |\zeta_1-\zeta_2|<C_1.$$
Fix $\zeta_1\in\mathbb{T}$. Without loss of generality we may assume that $\nu_{D,1}(f(\zeta_1))=1$. Let $0<C_1\leq 1/4$ be uniform and such that $$|\nu_{D,1}(f(\zeta))-1|<1/2,\ \zeta\in\mathbb{T}\cap B_n(\zeta_1,3C_1).$$ It is possible, since by Proposition \ref{8} $$|{\nu_D(f(\zeta))}-{\nu_D(f(\zeta'))}|\leq 2c|f(\zeta)-f(\zeta')|\leq C'\sqrt{|\zeta-\zeta'|},\ \zeta,\zeta'\in\mathbb{T},$$ with a uniform $C'>0$. There exists a function $\psi\in{\mathcal C}^1(\mathbb{T},[0,1])$ such that $\psi=1$ on $\mathbb{T}\cap B_n(\zeta_1,2C_1)$ and $\psi=0$ on $\mathbb{T}\setminus B_n(\zeta_1,3C_1)$. Then the function $\varphi:\mathbb{T}\longrightarrow\mathbb{C}$ defined by $$\varphi:=(\overline{\nu_{D,1}\circ f}-1)\psi+1$$ satisfies \begin{enumerate} \item $\varphi(\zeta)=\overline{\nu_{D,1}(f(\zeta))}$, $\zeta\in\mathbb{T}\cap B_n(\zeta_1,2C_1)$;
\item $|\varphi(\zeta)-1|<1/2$, $\zeta\in\mathbb{T}$; \item $\varphi$ is uniformly $1/2$-H\"older continuous on $\mathbb{T}$, i.e. it is $1/2$-H\"older continuous with a uniform constant (remember that $\psi$ was chosen uniformly). \end{enumerate}
First observe that $\log\varphi$ is well-defined. Using the properties listed above we deduce that $\log\varphi$ and $\im\log\varphi$ are uniformly $1/2$-H\"older continuous on $\mathbb{T}$, as well. The function $\im\log\varphi$ can be extended continuously to a function $v:\overline{\DD}\longrightarrow\mathbb{R}$, harmonic in $\mathbb{D}$. There is a function $h\in\mathcal O(\mathbb{D})$ such that $v=\im h$ in $\mathbb{D}$. Taking $h-\re h(0)$ instead of $h$, one can assume that $\re h(0)=0$. By Theorem \ref{priv} applied to $ih$, we get that the function $h$ extends continuously to $\overline{\DD}$ and is uniformly $1/2$-H\"older continuous in $\overline{\DD}$. Hence the function $u:=\re h:\overline{\DD}\longrightarrow\mathbb{R}$ is uniformly $1/2$-H\"older continuous in $\overline{\DD}$ with a uniform constant $C_2$. Furthermore, $u$ is uniformly bounded in $\overline{\DD}$, since $$|u(\zeta)|=|u(\zeta)-u(0)|\leq C_2\sqrt{|\zeta|},\ \zeta\in\overline{\DD}.$$
Let $g(\zeta):=\widetilde{f}_1(\zeta)e^{-h(\zeta)}$ and $G(\zeta):=g(\zeta)/\zeta$. Then $g\in\mathcal O(\mathbb{D})\cap\mathcal C(\overline{\mathbb{D}})$ and $G\in\mathcal O(\mathbb{D}_*)\cap\mathcal C((\overline{\mathbb{D}})_*)$. Note that for $\zeta\in\mathbb{T}$ $$|g(\zeta)|=|\zeta
\rho(\zeta)\overline{\nu_{D,1}(f(\zeta))}e^{-h(\zeta)}|\leq\rho(\zeta)e^{-u(\zeta)},$$ which, combined with Proposition \ref{9}, the uniform boundedness of $u$ and the maximum principle, gives the uniform boundedness of $g$ in $\overline{\DD}$. The function $G$ is uniformly bounded in $\overline{\mathbb{D}}\cap B_n(\zeta_1,2C_1)$. Moreover, for $\zeta\in\mathbb{T}\cap B_n(\zeta_1,2C_1)$ \begin{eqnarray*} G(\zeta)&=&\rho(\zeta)\overline{\nu_{D,1}(f(\zeta))}e^{-u(\zeta)-i\im\log \varphi(\zeta)}=\\&=&\rho(\zeta)\overline{\nu_{D,1}(f(\zeta))}e^{-u(\zeta)+\re\log\varphi(\zeta)}e^{-\log\varphi(\zeta)} =\rho(\zeta)e^{-u(\zeta)+\re\log\varphi(\zeta)}\in\mathbb{R}.\end{eqnarray*} By the Reflection Principle one can extend $G$ holomorphically past $\mathbb{T}\cap B_n(\zeta_1,2C_1)$ to a function (denoted by the same letter) uniformly bounded in $B_n(\zeta_1,2C_3)$, where the constant $C_3>0$ is uniform. Hence, from the Cauchy formula, $G$ is uniformly Lipschitz continuous in $B_n(\zeta_1,C_3)$, consequently uniformly $1/2$-H\"older continuous in $B_n(\zeta_1,C_3)$.
Finally, the functions $G$, $h$, $\nu_{D,1}\circ f$ are uniformly $1/2$-H\"older continuous on $\mathbb{T}\cap B_n(\zeta_1,C_3)$, $|\nu_{D,1}\circ f|>1/2$ on $\mathbb{T}\cap B_n(\zeta_1,C_3)$, so the function $\rho=Ge^h/\overline{\nu_{D,1}\circ f}$ is uniformly $1/2$-H\"older continuous on $\mathbb{T}\cap B_n(\zeta_1,C_3)$. \end{proof}
\begin{prop}\label{10b}
Let $f:(\mathbb{D},0)\longrightarrow (D,z)$ be an $E$-mapping. Then $$|\widetilde{f}(\zeta_1)-\widetilde{f}(\zeta_2)|\leq C\sqrt{|\zeta_1-\zeta_2|},\ \zeta_1,\zeta_2\in\overline{\mathbb{D}},$$ where $C$ is uniform if $(D,z)\in\mathcal{D}(c)$.
\end{prop} \begin{proof} By Propositions \ref{8} and \ref{10a} we have the desired inequality for $\zeta_1,\zeta_2\in\mathbb{T}$. Theorem \ref{lit2} finishes the proof. \end{proof}
\section{Openness of the set of $E$-mappings}\label{27} We shall show that if we perturb a little a domain $D$ equipped with an $E$-mapping, then we obtain a domain which also admits an $E$-mapping, close to the given one.
\subsection{Preliminary results}
\begin{propp}\label{11} Let $f:\mathbb{D}\longrightarrow D$ be an $E$-mapping. Then there exist domains $G,\widetilde D,\widetilde G\subset\mathbb{C}^n$ and a biholomorphism $\Phi:\widetilde D\longrightarrow\widetilde G$ such that \begin{enumerate} \item $\widetilde D,\widetilde G$ are neighborhoods of $\overline D,\overline G$ respectively; \item $\Phi(D)=G$; \item $g(\zeta):=\Phi(f(\zeta))=(\zeta,0,\ldots,0),\ \zeta\in\overline{\DD}$; \item $\nu_G(g(\zeta))=(\zeta,0,\ldots,0),\ \zeta\in\mathbb{T}$; \item for any $\zeta\in\mathbb{T}$, a point $g(\zeta)$ is a point of the strong linear convexity of $G$. \end{enumerate} \end{propp} \begin{proof} Let $U,V$ be the sets from Proposition \ref{34}. We claim that after a linear change of coordinates one can assume that $\widetilde{f}_1,\widetilde{f}_2$ do not have common zeroes in $V$.
Since $ f'\bullet\widetilde{f}=1$, at least one of the functions $\widetilde f_1,\ldots,\widetilde f_n$, say $\widetilde f_1$, is not identically equal to $0$. Let $\lambda_1,\ldots,\lambda_m$ be all zeroes of $\widetilde f_1$ in $V$. We may find $\alpha\in\mathbb{C}^n$ such that $$(\alpha_1\widetilde f_1+\ldots+\alpha_n\widetilde f_n)(\lambda_j)\neq 0,\ j=1,\ldots,m.$$ Otherwise, for any $\alpha\in\mathbb{C}^n$ there would exist $j\in\{1,\ldots,m\}$ such that $\alpha\bullet\widetilde f(\lambda_j)=0$, hence $$\mathbb{C}^n=\bigcup_{j=1}^m\{\alpha\in\mathbb{C}^n:\ \alpha\bullet\widetilde f(\lambda_j)=0\}.$$ The sets $\{\alpha\in\mathbb{C}^n:\alpha \bullet \widetilde f(\lambda_j)=0\}$, $j=1,\ldots,m$, are $(n-1)$-dimensional complex hyperplanes, so their finite union cannot be the whole space $\mathbb{C}^n$.
Of course, at least one of the numbers $\alpha_2,\ldots,\alpha_n$, say $\alpha_2$, is non-zero. Let $$A:=\left[\begin{matrix} 1 & 0 & 0 & \cdots & 0\\ \alpha_1 & \alpha_2 & \alpha_3 &\cdots & \alpha_n\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots &\ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1
\end{matrix}\right],\quad B:=(A^T)^{-1}.$$ We claim that $B$ is the change of coordinates we are looking for. If $r$ is a defining function of $D$ then $r\circ B^{-1}$ is a defining function of $B(D)$, so $B(D)$ is a bounded strongly linearly convex domain with real analytic boundary. Let us check that $Bf$ is an $E$-mapping of $B(D)$ with associated mappings \begin{equation}\label{56}A\widetilde f\in{\mathcal O}(\overline{\DD})\text{\ \ and\ \ }\rho\frac{|A\overline{\nabla r\circ f}|}{|\nabla r\circ f|}\in\mathcal{C}^{\omega}(\mathbb{T}).\end{equation} The conditions (1) and (2) of Definition~\ref{21} are clear. For $\zeta\in\mathbb{T}$ we have \begin{equation}\label{57}\overline{\nu_{B(D)}(Bf(\zeta))}=\frac{\overline{\nabla(r\circ B^{-1})(Bf(\zeta))}}{|\nabla(r\circ B^{-1})(Bf(\zeta))|}=\frac{(B^{-1})^T\overline{\nabla r(f(\zeta))}}{|(B^{-1})^T\overline{\nabla r(f(\zeta))}|}=\frac{A\overline{\nabla r(f(\zeta))}}{|A\overline{\nabla r(f(\zeta))}|},\end{equation} so
\begin{equation}\label{58}\zeta\rho(\zeta)\frac{|A\overline{\nabla r(f(\zeta))}|}{|\nabla r(f(\zeta))|}\overline{\nu_{B(D)}(Bf(\zeta))}=\zeta\rho(\zeta)A\overline{\nu_D(f(\zeta))}=A\widetilde f(\zeta).\end{equation} Moreover, for $\zeta\in\mathbb{T}$, $z\in D$ \begin{multline*}\langle Bz-Bf(\zeta), \nu_{B(D)}(Bf(\zeta))\rangle=\overline{\nu_{B(D)}(Bf(\zeta))}^T(Bz-Bf(\zeta))=\\=\frac{\overline{\nabla r(f(\zeta))}^TB^{-1}B(z-f(\zeta))}{|(B^{-1})^T\overline{\nabla r(f(\zeta))}|}=\frac{|\nabla r(f(\zeta))|}{|(B^{-1})^T\overline{\nabla r(f(\zeta))}|}\overline{\nu_D(f(\zeta))}^T(z-f(\zeta))=\\=\frac{|\nabla r(f(\zeta))|}{|(B^{-1})^T\overline{\nabla r(f(\zeta))}|}\langle z-f(\zeta), \nu_D(f(\zeta))\rangle. \end{multline*} Therefore, $B$ is the desired linear change of coordinates, as claimed.
If necessary, we shrink the sets $U,V$ associated with $f$ to sets associated with $Bf$. There exist holomorphic mappings $h_1,h_2:V\longrightarrow\mathbb{C}$ such that $$h_1\widetilde{f}_1+h_2\widetilde{f}_2\equiv 1\text{ in }V.$$ In general this is a well-known fact for functions on pseudoconvex domains; in our case, however, it can be shown quite elementarily. Indeed, if $\widetilde{f}_1\equiv 0$ or $\widetilde{f}_2\equiv 0$ then it is obvious. In the opposite case, let $\widetilde{f}_j=F_jP_j$, $j=1,2$, where $F_j$ are holomorphic and non-zero in $V$ and $P_j$ are polynomials with all (finitely many) zeroes in $V$. Then $P_1,P_2$ are relatively prime, so there are polynomials $Q_j$, $j=1,2$, such that $$Q_1P_1+Q_2P_2\equiv 1.$$ Hence $$\frac{Q_1}{F_1}\widetilde{f}_1+\frac{Q_2}{F_2}\widetilde{f}_2\equiv 1\ \text{ in }V.$$
Consider the mapping $\Psi:V\times\mathbb{C}^{n-1}\longrightarrow\mathbb{C}^n$ given by \begin{equation}\label{et2} \Psi_1(Z):=f_1(Z_1)-Z_2\widetilde{f}_2(Z_1)-h_1(Z_1) \sum_{j=3}^{n}Z_j\widetilde{f}_j(Z_1), \end{equation} \begin{equation}\label{et3} \Psi_2(Z):=f_2(Z_1)+Z_2\widetilde{f}_1(Z_1)-h_2(Z_1) \sum_{j=3}^{n}Z_j\widetilde{f}_j(Z_1), \end{equation} \begin{equation}\label{et4} \Psi_j(Z):=f_j(Z_1)+Z_j,\ j=3,\ldots,n. \end{equation}
We claim that $\Psi$ is biholomorphic in $\Psi^{-1}(U)$. First of all observe that $\Psi^{-1}(\{z\})\neq\emptyset$ for any $z\in U$. Indeed, by Proposition \ref{34} there exists (exactly one) $Z_1\in V$ such that $$(z-f(Z_1))\bullet\widetilde{f}(Z_1)=0.$$ The numbers $Z_j\in\mathbb{C}$, $j=3,\ldots,n$ are determined uniquely by the equations $$Z_j=z_j-f_j(Z_1).$$ At least one of the numbers $\widetilde f_1(Z_1),\widetilde f_2(Z_1)$, say $\widetilde f_1(Z_1)$, is non-zero. Let $$Z_2:=\frac{z_2-f_2(Z_1)+h_2(Z_1)\sum_{j=3}^{n}Z_j\widetilde{f}_j(Z_1)}{\widetilde f_1(Z_1)}.$$ Then we easily check that the equality $$z_1=f_1(Z_1)-Z_2\widetilde{f}_2(Z_1)-h_1(Z_1) \sum_{j=3}^{n}Z_j\widetilde{f}_j(Z_1)$$ is equivalent to $(z-f(Z_1))\bullet\widetilde{f}(Z_1)=0$, which is true.
To finish the proof of biholomorphicity of $\Psi$ in $\Psi^{-1}(U)$ it suffices to check that $\Psi$ is injective in $\Psi^{-1}(U)$. Let us take $Z,W$ such that $\Psi(Z)=\Psi(W)=z\in U$. By a direct computation both $\zeta=Z_1\in V$ and $\zeta=W_1\in V$ solve the equation $$(z-f(\zeta))\bullet\widetilde{f}(\zeta)=0.$$ From Proposition \ref{34} we infer that it has exactly one solution. Hence $Z_1=W_1$. By \eqref{et4} we have $Z_j=W_j$ for $j=3,\ldots,n$. Finally $Z_2=W_2$ follows from one of the equations \eqref{et2}, \eqref{et3}. Let $G:=\Psi^{-1}(D)$, $\widetilde D:=U$, $\widetilde G:=\Psi^{-1}(U)$, $\Phi:=\Psi^{-1}$.
Now we prove that $\Phi$ has the desired properties. We have $$\Psi_j(\zeta,0,\ldots,0)=f_j(\zeta),\ j=1,\ldots,n,$$ so $\Phi(f(\zeta))=(\zeta,0,\ldots,0)$, $\zeta\in\overline{\DD}$. Put $g(\zeta):=\Phi(f(\zeta))$, $\zeta\in\overline{\DD}$. Note that the entries of the matrix $\Psi'(g(\zeta))$ are $$\frac{\partial\Psi_1}{\partial Z_1}(g(\zeta))=f_1'(\zeta),\ \frac{\partial\Psi_1}{\partial Z_2}(g(\zeta))=-\widetilde{f}_2(\zeta),\ \frac{\partial\Psi_1}{\partial Z_j}(g(\zeta))=-h_1(\zeta)\widetilde{f}_j(\zeta),\ j\geq 3,$$$$\frac{\partial\Psi_2}{\partial Z_1}(g(\zeta))=f_2'(\zeta),\ \frac{\partial\Psi_2}{\partial Z_2}(g(\zeta))=\widetilde{f}_1(\zeta),\ \frac{\partial\Psi_2}{\partial Z_j}(g(\zeta))=-h_2(\zeta)\widetilde{f}_j(\zeta),\ j\geq 3,$$$$\frac{\partial\Psi_k}{\partial Z_1}(g(\zeta))=f_k'(\zeta),\ \frac{\partial\Psi_k}{\partial Z_2}(g(\zeta))=0,\ \frac{\partial\Psi_k}{\partial Z_j}(g(\zeta))=\delta^{k}_{j},\ j,k\geq 3.$$ Thus $\Psi'(g(\zeta))^T\widetilde f(\zeta)=(1,0,\ldots,0)$, $\zeta\in\overline{\DD}$ (since $f'\bullet\widetilde f=1$). Let us take a defining function $r$ of $D$. Then $r\circ\Psi$ is a defining function of $G$. Therefore, \begin{multline*}\nu_G(g(\zeta))=\frac{\nabla(r\circ\Psi)(g(\zeta))}{|\nabla(r\circ\Psi)(g(\zeta))|}=
\frac{\overline{\Psi'(g(\zeta))}^T\nabla r(f(\zeta))}{|\overline{\Psi'(g(\zeta))}^T\nabla r(f(\zeta))|}=\\=\frac{\overline{\Psi'(g(\zeta))}^T\overline{\frac{\widetilde f(\zeta)}{\zeta\rho(\zeta)}}|\nabla r(f(\zeta))|}{\left|\overline{\Psi'(g(\zeta))}^T\overline{\frac{\widetilde f(\zeta)}{\zeta\rho(\zeta)}}|\nabla r(f(\zeta))|\right|}=g(\zeta),\ \zeta\in\mathbb{T}.\end{multline*}
It remains to prove the fifth condition. By Definition \ref{29}(2) we have to show that \begin{equation}\label{sgf}\sum_{j,k=1}^n\frac{\partial^2(r\circ\Psi)}{\partial z_j\partial\overline{z}_k}(g(\zeta))X_{j}\overline{X}_{k}>\left|\sum_{j,k=1}^n\frac{\partial^2(r\circ\Psi)}{\partial z_j\partial z_k}(g(\zeta))X_{j}X_{k}\right|\end{equation} for $\zeta\in\mathbb{T}$ and $X\in(\mathbb{C}^{n})_*$ with
$$\sum_{j=1}^n\frac{\partial(r\circ\Psi)}{\partial z_j}(g(\zeta))X_{j}=0,$$ i.e. $X_1=0$. We have $$\sum_{j,k=1}^n\frac{\partial^2(r\circ\Psi)}{\partial z_j\partial\overline{z}_k}(g(\zeta))X_{j}\overline{X}_{k}=\sum_{j,k,s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial\overline{z}_t}(f(\zeta))\frac{\partial\Psi_s}{\partial z_j}(g(\zeta))\overline{\frac{\partial\Psi_t}{\partial z_k}(g(\zeta))}X_{j}\overline{X}_{k}=$$$$=\sum_{s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial\overline{z}_t}(f(\zeta))Y_{s}\overline{Y}_{t},$$ where $$Y:=\Psi'(g(\zeta))X.$$ Note that $Y\neq 0$. Additionally $$\sum_{s=1}^n\frac{\partial r}{\partial z_s}(f(\zeta))Y_{s}=\sum_{j,s=1}^n\frac{\partial r}{\partial z_s}(f(\zeta))\frac{\partial\Psi_s}{\partial z_j}(g(\zeta))X_j=\sum_{j=1}^n\frac{\partial(r\circ\Psi)}{\partial z_j}(g(\zeta))X_{j}=0.$$ Therefore, by the strong linear convexity of $D$ at $f(\zeta)$ $$\sum_{s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial\overline{z}_t}(f(\zeta))Y_{s}\overline{Y}_{t}>\left|\sum_{s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial z_t}(f(\zeta))Y_{s}Y_{t}\right|.$$ To finish the proof observe that $$\left|\sum_{j,k=1}^n\frac{\partial^2(r\circ\Psi)}{\partial z_j\partial z_k}(g(\zeta))X_{j}X_{k}\right|=\left|\sum_{j,k,s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial z_t}(f(\zeta))\frac{\partial\Psi_s}{\partial z_j}(g(\zeta))\frac{\partial\Psi_t}{\partial z_k}(g(\zeta))X_{j}X_{k}+\right.$$$$\left.+\sum_{j,k,s=1}^n\frac{\partial r}{\partial z_s}(f(\zeta))\frac{\partial^2\Psi_s}{\partial z_j\partial z_k}(g(\zeta))X_{j}X_{k}\right|=$$$$=\left|\sum_{s,t=1}^n\frac{\partial^2 r}{\partial z_s\partial z_t}(f(\zeta))Y_{s}Y_{t}+\sum_{j,k=2}^n\sum_{s=1}^n\frac{\partial r}{\partial z_s}(f(\zeta))\frac{\partial^2\Psi_s}{\partial z_j\partial z_k}(g(\zeta))X_{j}X_{k}\right|$$ and $$\frac{\partial^2\Psi_s}{\partial z_j\partial z_k}(g(\zeta))=0,\ j,k\geq 2,\ s\geq 1,$$ which gives \eqref{sgf}. \end{proof}
\begin{remm}\label{rem:theta} Let $D$ be a bounded domain in $\mathbb C^n$ and let $f:\mathbb{D}\longrightarrow D$ be a (weak) stationary mapping such that $\partial D$ is real analytic in a neighborhood of $f(\mathbb{T})$. Assume moreover that there exist a neighborhood $U$ of $f(\overline{\DD})$ and a mapping $\Theta:U\longrightarrow\mathbb{C}^n$, biholomorphic onto its image, such that the set $D\cap U$ is connected. Then $\Theta\circ f$ is a (weak) stationary mapping of $G:=\Theta(D\cap U)$.
In particular, if $U_1$, $U_2$ are neighborhoods of the closures of domains $D_1$, $D_2$ with real analytic boundaries and $\Theta:U_1\longrightarrow U_2$ is a biholomorphism such that $\Theta(D_1)=D_2$, then $\Theta$ maps (weak) stationary mappings of $D_1$ onto (weak) stationary mappings of $D_2$. \end{remm} \begin{proof}
Actually, it is clear that the first two conditions of the definition of (weak) stationary mappings are preserved by $\Theta$. To show the third one we proceed as in the equations \eqref{56}, \eqref{57}, \eqref{58}. Let $f:\mathbb{D}\longrightarrow D$ be a (weak) stationary mapping. The candidates for the mappings in condition (3) (resp. (3')) of Definition~\ref{21} for $\Theta\circ f$ in the domain $G$ are $$((\Theta'\circ f)^{-1})^T\widetilde f\text{\ \ and\ \ }\rho\frac{|((\Theta'\circ f)^{-1})^T\overline{\nabla r\circ f}|}{|\nabla r\circ f|}.$$ Indeed, for $\zeta\in\mathbb{T}$ \begin{multline*}\overline{\nu_{G}(\Theta(f(\zeta)))}=
\frac{\overline{\nabla(r\circ\Theta^{-1})(\Theta(f(\zeta)))}}{|\nabla(r\circ\Theta^{-1})(\Theta(f(\zeta)))|}=\frac{[(\Theta^{-1})'(\Theta(f(\zeta)))]^T\overline{\nabla r(f(\zeta))}}{|[(\Theta^{-1})'(\Theta(f(\zeta)))]^T\overline{\nabla r(f(\zeta))}|}=\\
=\frac{(\Theta'(f(\zeta))^{-1})^T\overline{\nabla r(f(\zeta))}}{|(\Theta'(f(\zeta))^{-1})^T\overline{\nabla r(f(\zeta))}|}, \end{multline*} hence
\begin{multline*}\zeta\rho(\zeta)\frac{|(\Theta'(f(\zeta))^{-1})^T\overline{\nabla r(f(\zeta))}|}{|\nabla r(f(\zeta))|}\overline{\nu_{G}(\Theta(f(\zeta)))}=\\ =\zeta\rho(\zeta)(\Theta'(f(\zeta))^{-1})^T\overline{\nu_{D}(f(\zeta))}= (\Theta'(f(\zeta))^{-1})^T\widetilde f(\zeta). \end{multline*} \end{proof}
\subsection{Situation (\dag)}\label{dag} Consider the following situation, denoted by (\dag) (with data $D_0$ and $U_0$): \begin{itemize} \item $D_0$ is a bounded domain in $\mathbb{C}^n$, $n\geq 2$; \item $f_0:\overline{\DD}\ni\zeta\longmapsto(\zeta,0,\ldots,0)\in\overline D_0$; \item $f_0(\mathbb{D})\subset D_0$; \item $f_0(\mathbb{T})\subset\partial D_0$; \item $\nu_{D_0}(f_0(\zeta))=(\zeta,0,\ldots,0)$, $\zeta\in\mathbb{T}$; \item for any $\zeta\in\mathbb{T}$, the point $f_0(\zeta)$ is a point of strong linear convexity of $D_0$; \item $\partial D_0$ is real analytic in a neighborhood $U_0$ of $f_0(\mathbb{T})$ with a defining function $r_0$;
\item $|\nabla r_0|=1$ on $f_0(\mathbb{T})$ (in particular, $r_{0z}(f_0(\zeta))=(\overline\zeta/2,0,\ldots,0)$, $\zeta\in\mathbb{T}$). \end{itemize}
Since $r_0$ is real analytic on $U_0\subset\mathbb{R}^{2n}$, it extends in a natural way to a holomorphic function in a neighborhood $U_0^\mathbb{C}\subset\mathbb{C}^{2n}$ of $U_0$. Without loss of generality we may assume that $r_0$ is bounded on $U_0^\mathbb{C}$. Set $$X_0=X_0(U_0,U_0^{\mathbb C}):=\{r\in\mathcal{O}(U_0^\mathbb{C}):\text{$r(U_0)\subset\mathbb{R}$ and $r$ is bounded}\},$$ which equipped with the sup-norm is a (real) Banach space.
\begin{remm} Lempert considered the case when $U_0$ is a neighborhood of the boundary of a bounded domain $D_0$ with real analytic boundary. We shall need more general results to prove the `localization property'. \end{remm}
\subsection{General lemmas}\label{General lemmas} We keep the notation from Subsection \ref{dag} and assume Situation (\dag).
Let us introduce some additional objects we shall be dealing with and let us prove more general lemmas (their generality will be useful in the next section).
Consider the Sobolev space $W^{2,2}(\mathbb{T})=W^{2,2}(\mathbb{T},\mathbb{C}^m)$ of functions $f:\mathbb{T}\longrightarrow\mathbb{C}^m$ whose first two derivatives (in the sense of distributions) are in $L^2(\mathbb{T})$. The $W^{2,2}$-norm is denoted by $\|\cdot\|_W$. For the basic properties of $W^{2,2}(\mathbb{T})$ see Appendix.
Put $$B:=\{f\in W^{2,2}(\mathbb{T},\mathbb{C}^n):f\text{ extends holomorphically on $\mathbb{D}$ and $f(0)=0$}\},$$$$B_0:=\{f\in B:f(\mathbb{T})\subset U_0\},\quad B^*:=\{\overline{f}:f\in B\},$$$$Q:=\{q\in W^{2,2}(\mathbb{T},\mathbb{C}):q(\mathbb{T})\subset\mathbb{R}\},\quad Q_0:=\{q\in Q:q(1)=0\}.$$
It is clear that $B$, $B^*$, $Q$ and $Q_0$ equipped with the norm $\|\cdot\|_W$ are (real) Banach spaces. Note that $B_0$ is an open neighborhood of $f_0$. In what follows, we identify $f\in B$ with its unique holomorphic extension on $\mathbb{D}$.
Let us define the projection $$\pi:W^{2,2}(\mathbb{T},\mathbb{C}^n)\ni f=\sum_{k=-\infty}^{\infty}a_k\zeta^{k}\longmapsto\sum_{k=-\infty}^{-1}a_k\zeta^{k}\in{B^*}.$$ Note that $f\in W^{2,2}(\mathbb{T},\mathbb{C}^n)$ extends holomorphically on $\mathbb{D}$ if and only if $\pi(f)=0$ (and the extension is $\mathcal C^{1/2}$ on $\mathbb{T}$). Actually, it suffices to observe that $g(\zeta):=\sum_{k=-\infty}^{-1}a_k\zeta^{k}$, $\zeta\in\mathbb{T}$, extends holomorphically on $\mathbb{D}$ if and only if $a_k=0$ for $k<0$. This follows immediately from the fact that the mapping $\mathbb{T}\ni\zeta\longmapsto g(\overline\zeta)\in\mathbb{C}^n$ extends holomorphically on $\mathbb{D}$.
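For intuition only: the projection $\pi$ simply discards all nonnegative Fourier modes. The following numerical sketch (not part of the argument) approximates $\pi$ on a function sampled at roots of unity via the discrete Fourier transform; the helper name \texttt{pi\_projection} is ours, not notation from the text.

```python
import numpy as np

def pi_projection(samples):
    """Approximate pi: keep only the negative Fourier modes of a
    function sampled at the M-th roots of unity."""
    M = len(samples)
    coeffs = np.fft.fft(samples) / M
    out = np.zeros_like(coeffs)
    # indices M//2+1 .. M-1 hold the negative frequencies -M//2+1 .. -1
    out[M // 2 + 1:] = coeffs[M // 2 + 1:]
    return np.fft.ifft(out) * M

# f(zeta) = zeta^2 + 3/zeta on the unit circle: pi(f) should be 3/zeta,
# and f extends holomorphically iff pi(f) = 0.
M = 64
zeta = np.exp(2j * np.pi * np.arange(M) / M)
f = zeta**2 + 3 / zeta
g = pi_projection(f)
assert np.allclose(g, 3 / zeta)
assert np.allclose(pi_projection(zeta**2), 0)
```

In particular the last assertion mirrors the criterion above: a trigonometric polynomial extends holomorphically on $\mathbb{D}$ exactly when its image under $\pi$ vanishes.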
Consider the mapping $\Xi:X_0\times\mathbb{C}^n\times B_0\times Q_0\times\mathbb{R}\longrightarrow Q\times{B^*}\times\mathbb{C}^n$ defined by $$\Xi(r,v,f,q,\lambda):=(r\circ f,\pi(\zeta(1+q)(r_z\circ f)),f'(0)-\lambda v),$$ where $\zeta$ is treated as the identity function on $\mathbb{T}$.
We have the following
\begin{lemm}\label{cruciallemma} There exist a neighborhood $V_0$ of $(r_0,f_0'(0))$ in $X_0\times\mathbb{C}^n$ and a real analytic mapping $\Upsilon:V_0\longrightarrow B_0\times Q_0\times\mathbb{R}$ such that for any $(r,v)\in V_0$ we have $\Xi(r,v,\Upsilon(r,v))=0$. \end{lemm}
Let $\widetilde\Xi:X_0\times\mathbb{C}^n\times B_0\times Q_0\times(0,1)\longrightarrow Q\times{B^*}\times\mathbb{C}^n$ be defined as $$\widetilde\Xi(r,w,f,q,\xi):=(r\circ f,\pi(\zeta(1+q)(r_z\circ f)),f(\xi)-w).$$
Analogously we have \begin{lemm}\label{cruciallemma1} Let $\xi_0\in(0,1)$. Then there exist a neighborhood $W_0$ of $(r_0,f_0(\xi_0))$ in $X_0\times D_0$ and a real analytic mapping $\widetilde\Upsilon:W_0\longrightarrow B_0\times Q_0\times(0,1)$ such that for any $(r,w)\in W_0$ we have $\widetilde\Xi(r,w,\widetilde\Upsilon(r,w))=0$. \end{lemm}
\begin{proof}[Proof of Lemmas \ref{cruciallemma} and \ref{cruciallemma1}]
We will prove the first lemma. Then we will show that the proof of the second one reduces to it.
We claim that $\Xi$ is real analytic. The only problem is to show that the mapping $$T: X_0\times B_0\ni(r,f)\longmapsto r\circ f\in Q$$ is real analytic (the real analyticity of the mapping $X_0\times B_0\ni(r,f)\longmapsto r_z\circ f\in W^{2,2}(\mathbb{T},\mathbb{C}^n)$ follows from this claim).
Fix $r\in X_0$, $f\in B_0$ and take $\varepsilon>0$ so that a $2n$-dimensional polydisc $P_{2n}(f(\zeta),\varepsilon)$ is contained in $U_0^\mathbb{C}$ for any $\zeta\in\mathbb{T}$. Then any function $\widetilde r\in X_0$ is holomorphic in $U_0^\mathbb{C}$, so it may be expanded as a holomorphic series convergent in $P_{2n}(f(\zeta),\varepsilon)$. Losing no generality we may assume that $n$-dimensional polydiscs $P_{n}(f(\zeta),\varepsilon)$, $\zeta\in\mathbb{T}$, satisfy $P_{n}(f(\zeta),\varepsilon)\subset U_0$. This gives an expansion of the function $\widetilde r$ at any point $f(\zeta)$, $\zeta\in\mathbb{T}$, into a series $$\sum_{\alpha\in\mathbb{N}_0^{2n}}\frac{1}{\alpha!}\frac{\partial^{|\alpha|}\widetilde r}{\partial x^\alpha}(f(\zeta))x^\alpha$$ convergent to $\widetilde r(f(\zeta)+x)$, provided that $x=(x_1,\ldots,x_{2n})\in P_n(0,\varepsilon)$ (where $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$ and $|\alpha|:=\alpha_1+\ldots+\alpha_{2n}$). Hence \begin{equation}\label{69}T(r+\varrho,f+h)=\sum_{\alpha\in\mathbb{N}_0^{2n}}\frac{1}{\alpha!}\left(\frac{\partial^{|\alpha|}r}{\partial x^\alpha}\circ f\right)h^\alpha+\sum_{\alpha\in\mathbb{N}_0^{2n}}\frac{1}{\alpha!}\left(\frac{\partial^{|\alpha|}\varrho}{\partial x^\alpha}\circ f\right)h^\alpha\end{equation} pointwise for $\varrho\in X_0$ and $h\in W^{2,2}(\mathbb{T},\mathbb{C}^n)$ with $\|h\|_{\sup}<\varepsilon$.
Put $P:=\bigcup_{\zeta\in \mathbb{T}} P_{2n}(f(\zeta),\varepsilon)$ and for $\widetilde r\in X_0$ put $||\widetilde r||_P:=\sup_P|\widetilde r|$. Let $\widetilde r$ be equal to $r$ or to $\varrho$, where $\varrho$ lies in a neighborhood of $0$ in $X_0$. The Cauchy inequalities give
\begin{equation}\label{series}\left|\frac{\partial^{|\alpha|}\widetilde r}{\partial x^\alpha}(f(\zeta))\right|\leq\frac{\alpha!\|\widetilde r\|_{P}}{\varepsilon^{|\alpha|}},\quad\zeta\in\mathbb{T}.\end{equation}
Therefore, $$\left|\left|\frac{\partial^{|\alpha|}\widetilde r}{\partial x^\alpha}\circ f\right|\right|_W\leq C_1\frac{\alpha!\|\widetilde r\|_{P}}{\varepsilon^{|\alpha|}}$$ for some $C_1>0$.
There is $C_2>0$ such that $$\|gh^\alpha\|_W\leq C_2^{|\alpha|+1}\|g\|_W\|h_1\|^{\alpha_1}_W\cdotp\ldots\cdotp\|h_{2n}\|^{\alpha_{2n}}_W$$ for $g\in W^{2,2}(\mathbb{T},\mathbb{C})$, $h\in W^{2,2}(\mathbb{T},\mathbb{C}^n)$, $\alpha\in\mathbb{N}_0^{2n}$ (see Appendix for a proof of this fact). Using the above inequalities we infer that $$\sum_{\alpha\in\mathbb{N}_0^{2n}}\left|\left|\frac{1}{\alpha!}\left(\frac{\partial^{|\alpha|}\widetilde r}{\partial x^\alpha}\circ f\right)h^\alpha\right|\right|_W$$ is convergent if $h$ is small enough in the norm $\|\cdot\|_W$. Therefore, the series~\eqref{69} is absolutely convergent in the norm $\|\cdot\|_W$, whence $T$ is real analytic.
To show the existence of $V_0$ and $\Upsilon$ we will make use of the Implicit Function Theorem. More precisely, we shall show that the partial derivative $$\Xi_{(f,q,\lambda)}(r_0,f_0'(0),f_0,0,1):B\times Q_0\times\mathbb{R}\longrightarrow Q\times{B^*}\times\mathbb{C}^n$$ is an isomorphism. Observe that for any $(\widetilde{f},\widetilde{q},\widetilde{\lambda})\in B\times Q_0\times\mathbb{R}$ the following equality holds \begin{multline*}\Xi_{(f,q,\lambda)}(r_0,f_0'(0),f_0,0,1)(\widetilde{f},\widetilde{q},\widetilde{\lambda})=\left.\frac{d}{dt}
\Xi(r_0,f_0'(0),f_0+t\widetilde{f},t\widetilde{q},1+t\widetilde{\lambda})\right|_{t=0}=\\ =((r_{0z}\circ f_0)\widetilde{f}+(r_{0\overline{z}}\circ f_0)\overline{\widetilde{f}},\pi(\zeta\widetilde{q}r_{0z}\circ f_0+\zeta(r_{0zz} \circ f_0)\widetilde{f}+\zeta(r_{0z\overline{z}}\circ f_0)\overline{\widetilde{f}}),\widetilde{f}'(0)-\widetilde{\lambda}f_0'(0)), \end{multline*} where we treat ${r_0}_z,{r_0}_{\overline{z}}$ as row vectors, $\widetilde{f},\overline{\widetilde{f}}$ as column vectors and $r_{0zz}=\left[\frac{\partial^2r_0}{\partial z_j\partial z_k}\right]_{j,k=1}^n$, $r_{0z\overline{z}}=\left[\frac{\partial^2r_0}{\partial z_j\partial\overline z_k}\right]_{j,k=1}^n$ as $n\times n$ matrices.
By the Bounded Inverse Theorem it suffices to show that $\Xi_{(f,q,\lambda)}(r_0,f_0'(0),f_0,0,1)$ is bijective, i.e. for $(\eta,\varphi,v)\in Q\times B^*\times\mathbb{C}^n$ there exists exactly one $(\widetilde{f},\widetilde{q},\widetilde{\lambda})\in B\times Q_0\times\mathbb{R}$ satisfying \begin{equation} (r_{0z}\circ f_0)\widetilde{f}+(r_{0\overline{z}}\circ f_0)\overline{\widetilde{f}}=\eta, \label{al1} \end{equation} \begin{equation} \pi(\zeta\widetilde{q}r_{0z}\circ f_0+\zeta (r_{0zz}\circ f_0)\widetilde{f}+\zeta(r_{0z\overline{z}}\circ f_0)\overline{\widetilde{f}})=\varphi, \label{al2} \end{equation} \begin{equation} \widetilde{f}'(0)-\widetilde{\lambda} f_0'(0)=v. \label{al3} \end{equation} First we show that $\widetilde\lambda$ and $\widetilde f_1$ are uniquely determined. Observe that, in view of assumptions, (\ref{al1}) is just $$\frac{1}{2}\overline{\zeta}\widetilde{f}_1+\frac{1}{2}\zeta\overline{\widetilde{f}_1}=\eta$$ or equivalently \begin{equation} \re(\widetilde{f}_1/\zeta)=\eta\text{ (on }\mathbb{T}). \label{al4} \end{equation} Note that the equation (\ref{al4}) uniquely determines $\widetilde{f}_1/\zeta\in W^{2,2}(\mathbb{T},\mathbb{C})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ up to an imaginary additive constant, which may be computed using (\ref{al3}). Actually, $\eta=\re G$ on $\mathbb{T}$ for some function $G\in W^{2,2}(\mathbb{T},\mathbb{C})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$. To see this, let us expand $\eta(\zeta)=\sum_{k=-\infty}^{\infty}a_k\zeta^{k}$, $\zeta\in\mathbb{T}$. From the equality $\eta(\zeta)=\overline{\eta(\zeta)}$, $\zeta\in\mathbb{T}$, we get \begin{equation}\label{65}\sum_{k=-\infty}^{\infty}a_k\zeta^{k}=\sum_{k=-\infty}^{\infty}\overline a_k\zeta^{-k}=\sum_{k=-\infty}^{\infty}\overline a_{-k}\zeta^{k},\ \zeta\in\mathbb{T},\end{equation} so $a_{-k}=\overline a_k$, $k\in\mathbb{Z}$. 
Hence $$\eta(\zeta)=a_0+\sum_{k=1}^\infty 2\re(a_k\zeta^k)=\re\left(a_0+2\sum_{k=1}^\infty a_k\zeta^k\right),\ \zeta\in\mathbb{T}.$$ Set $$G(\zeta):=a_0+2\sum_{k=1}^\infty a_k\zeta^k,\ \zeta\in\mathbb{D}.$$ This series is convergent for $\zeta\in\mathbb{D}$, so $G\in{\mathcal O}(\mathbb{D})$. Further, the function $G$ extends continuously on $\overline{\DD}$ (to the function denoted by the same letter) and the extension lies in $W^{2,2}(\mathbb{T},\mathbb{C})$. Clearly, $\eta=\re G$ on $\mathbb{T}$.
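The passage from $\eta$ to $G$ is constructive: since $a_{-k}=\overline a_k$, the analytic completion is $G=a_0+2\sum_{k\geq1}a_k\zeta^k$. A hedged numerical sketch of this recipe (via the discrete Fourier transform, for a real trigonometric polynomial; the function name is illustrative):

```python
import numpy as np

def analytic_completion(eta_samples):
    """Given samples of a real function eta on the unit circle, return
    samples of G = a_0 + 2*sum_{k>=1} a_k zeta^k, so that Re G = eta."""
    M = len(eta_samples)
    c = np.fft.fft(eta_samples) / M          # c[k] = a_k for 0 <= k < M//2
    zeta = np.exp(2j * np.pi * np.arange(M) / M)
    G = np.full(M, c[0], dtype=complex)
    for k in range(1, M // 2):               # band-limited eta assumed
        G += 2 * c[k] * zeta**k
    return G

M = 128
t = 2 * np.pi * np.arange(M) / M
eta = 1 + np.cos(3 * t) - 2 * np.sin(t)      # a real trigonometric polynomial
G = analytic_completion(eta)
assert np.allclose(G.real, eta)              # eta = Re G on the circle
```

Here $G$ has only nonnegative Fourier modes, i.e. it extends holomorphically on $\mathbb{D}$, as in the text.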
We are searching for $C\in\mathbb{R}$ such that the functions $\widetilde{f}_1:=\zeta(G+iC)$ and $\theta:=\im(\widetilde{f}_1/\zeta)$ satisfy $$\eta(0)+i\theta(0)=\widetilde{f}_1'(0)$$ and $$\eta(0)+i\theta(0)-\widetilde{\lambda}\re{f_{01}'(0)}-i\widetilde{\lambda}\im{{f_{01}'(0)}}=\re{v_1}+i\im{v_1}.$$ But $$\eta(0)-\widetilde{\lambda}\re{f_{01}'(0)}=\re{v_1},$$ which yields $\widetilde{\lambda}$ and then $\theta(0)$, consequently the number $C$. Having $\widetilde{\lambda}$ and once again using (\ref{al3}), we find uniquely determined $\widetilde{f}_2'(0),\ldots,\widetilde{f}_n'(0)$.
Therefore, the equations $\eqref{al1}$ and $\eqref{al3}$ are satisfied by uniquely determined $\widetilde f_1$, $\widetilde\lambda$ and $\widetilde{f}_2'(0),\ldots,\widetilde{f}_n'(0)$.
Consider (\ref{al2}), which is the system of $n$ equations with unknown $\widetilde{q},\widetilde{f}_2,\ldots,\widetilde{f}_n$. Observe that $\widetilde{q}$ appears only in the first of the equations and the remaining $n-1$ equations mean exactly that the mapping \begin{equation} \zeta(r_{0\widehat{z}\widehat{z}}\circ f_0) \widehat{\widetilde{f}}+\zeta(r_{0\widehat{z}\widehat{\overline{z}}}\circ f_0)\widehat{\overline{\widetilde{f}}}-\psi \label{al5} \end{equation} extends holomorphically on $\mathbb{D}$, where $\widehat{a}:=(a_{2},\ldots,a_{n})$ and $\psi\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})$ may be obtained from $\varphi$ and $\widetilde{f}_1$. Indeed, to see this, write (\ref{al2}) in the form $$\pi(F_{1}+\zeta F_{2}+\zeta F_{3})=(\varphi_1,\ldots,\varphi_n),$$ where $$F_1:=(\widetilde q,0,\ldots,0),$$$$F_2:=(A_{j})_{j=1}^n,\ A_{j}:=\sum\limits_{k=1}^n(r_{0z_jz_k}\circ f_0)\widetilde{f}_k,$$$$F_3=(B_{j})_{j=1}^n,\ B_{j}:=\sum\limits_{k=1}^n(r_{0z_j\overline z_k}\circ f_0)\overline{\widetilde{f}_k}.$$ It follows that $$\widetilde{q}+\zeta A_1+\zeta B_1-\varphi_1$$ and $$\zeta A_j+\zeta B_j-\varphi_j,\ j=2,\ldots,n,$$ extend holomorphically on $\mathbb{D}$ and $$\psi:=\left(\varphi_j-\zeta(r_{0z_jz_1}\circ f_0)\widetilde{f}_1-\zeta(r_{0z_j\overline z_1}\circ f_0)\overline{\widetilde{f}_1}\right)_{j=2}^n.$$ Put $$g(\zeta):=\widehat{\widetilde{f}}(\zeta)/\zeta,\quad\alpha(\zeta):=\zeta^2r_{0\widehat{z}\widehat{z}}(f_0(\zeta)), \quad\beta(\zeta):=r_{0\widehat{z}\widehat{\overline{z}}}(f_0(\zeta)).$$
Observe that $\alpha(\zeta)$, $\beta(\zeta)$ are $(n-1)\times(n-1)$ matrices depending real analytically on $\zeta$ and $g(\zeta)$ is a column vector in $\mathbb{C}^{n-1}$. This allows us to reduce \eqref{al5} to the following problem: we have to find a unique $g\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ such that \begin{equation} \alpha g+\beta\overline{g}-\psi\text{ extends holomorphically on $\mathbb{D}$ and } g(0)={\widehat{\widetilde{f}'}}(0). \label{al6} \end{equation} The fact that every $f_0(\zeta)$ is a point of strong linear convexity of the domain $D_0$ may be written as \begin{equation}
|X^T\alpha(\zeta)X|<X^{T}\beta(\zeta)\overline{X},\ \zeta\in\mathbb{T},\ X\in(\mathbb{C}^{n-1})_*. \label{al7} \end{equation}
Note that $\beta(\zeta)$ is self-adjoint and strictly positive, hence using Proposition \ref{12} we get a mapping $H\in{\mathcal O}(\overline{\DD},\mathbb{C}^{(n-1)\times(n-1)})$ such that $\det H\neq 0$ on $\overline{\DD}$ and $HH^*=\beta$ on $\mathbb{T}$. Using this notation, (\ref{al6}) is equivalent to \begin{equation} H^{-1}\alpha g+H^*\overline{g}-H^{-1}\psi\text{ extends holomorphically on $\mathbb{D}$} \label{al8} \end{equation} or, if we denote $h:=H^Tg$, $\gamma:=H^{-1}\alpha (H^T)^{-1}$, to \begin{equation} \gamma h+\overline{h}-H^{-1}\psi\text{ extends holomorphically on $\mathbb{D}.$} \label{al9} \end{equation}
For any $\zeta\in\mathbb{T}$ the operator norm of the symmetric matrix $\gamma(\zeta)$ is uniformly less than 1. In fact, from (\ref{al7}) for any $X\in\mathbb{C}^{n-1}$ with $|X|=1$ \begin{multline*}|X^{T}\gamma(\zeta)X|=|X^{T}H(\zeta)^{-1}\alpha(\zeta)(H(\zeta)^T)^{-1}X|<X^TH(\zeta)^{-1}\beta(\zeta) \overline{(H(\zeta)^T)^{-1}X}=\\=X^TH(\zeta)^{-1}H(\zeta)H(\zeta)^*\overline{(H(\zeta)^T)^{-1}}
\overline{X}=|X|^2=1,\end{multline*} so, by the compactness argument, $|X^{T}\gamma(\zeta)X|\leq 1-\widetilde\varepsilon$ for some $\widetilde\varepsilon>0$ independent of $\zeta$ and $X$. Thus $\|\gamma(\zeta)\|\leq 1-\widetilde\varepsilon$ by Proposition \ref{59}.
We have to prove that there is a unique solution $h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ of (\ref{al9}) such that $h(0)=a$ with a given $a\in\mathbb{C}^{n-1}$.
Define the operator $$P:W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\ni\sum_{k=-\infty}^{\infty}a_k\zeta^{k}\longmapsto\overline{\sum_{k=-\infty}^{-1}a_k\zeta^{k}}\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1}),$$ where $a_k\in\mathbb{C}^{n-1}$, $k\in\mathbb{Z}$.
We will show that a mapping $h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ satisfies (\ref{al9}) and $h(0)=a$ if and only if it is a fixed point of the mapping $$K:W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\ni h\longmapsto P(H^{-1}\psi-\gamma h)+a\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1}).$$
Indeed, take $h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ and suppose that $h(0)=a$ and $\gamma h+\overline{h}-H^{-1}\psi$ extends holomorphically on $\mathbb{D}$. Then $$h=a+\sum_{k=1}^{\infty}a_k\zeta^{k},\quad\overline{h}=\overline{a}+\sum_{k=1}^{\infty}\overline a_k\zeta^{-k}=\sum_{k=-\infty}^{-1}\overline a_{-k}\zeta^{k}+\overline{a},$$ $$P(h)=0,\quad P(\overline{h})=\sum_{k=1}^{\infty}a_k\zeta^{k}=h-a$$ and $$P(\gamma h+\overline{h}-H^{-1}\psi)=0,$$ which implies $$P(H^{-1}\psi-\gamma h)=h-a$$ and finally $K(h)=h$. Conversely, suppose that $K(h)=h$. Then $$P(H^{-1}\psi-\gamma h)=h-a=\sum_{k=1}^{\infty}a_k\zeta^{k}+a_1-a,\quad P(h)=0$$ and $$P(\overline{h})=\sum_{k=1}^{\infty}a_k\zeta^{k}=h-a_1,$$ from which it follows that $$P(\gamma h+\overline{h}-H^{-1}\psi)=P(\overline{h})-P(H^{-1}\psi-\gamma h)=a-a_1$$ and $$P(\gamma h+\overline{h}-H^{-1}\psi)=0\text{ iff }a=a_1.$$ Observe that $h(0)=K(h)(0)=P(H^{-1}\psi-\gamma h)(0)+a=a$.
We shall make use of the Banach Fixed Point Theorem. To do this, consider $W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})$ equipped with the following norm $$\|h\|_{\varepsilon}:=\|h\|_L+\varepsilon\|h'\|_L+
\varepsilon^2\|h''\|_L,$$ where $\varepsilon>0$ and $\|\cdot\|_L$ is the $L^2$-norm (it is a Banach space). We will prove that $K$ is a contraction with respect to the norm $\|\cdot\|_{\varepsilon}$ for sufficiently small $\varepsilon$. Indeed, there is $\widetilde\varepsilon>0$ such that for any $h_1,h_2\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})$ \begin{equation}
\|K(h_1)-K(h_2)\|_L=\|P(\gamma(h_2-h_1))\|_L\leq\|\gamma(h_2-h_1)\|_L\leq (1-\widetilde\varepsilon)\|h_2-h_1\|_L. \label{al10} \end{equation} Moreover, \begin{multline}
\|K(h_1)'-K(h_2)'\|_L= \|P(\gamma h_2)'-P(\gamma h_1)'\|_L\leq\\
\leq\|(\gamma h_2)'-(\gamma h_1)'\|_L= \|\gamma '(h_2-h_1)+\gamma(h_2'-h_1')\|_L. \label{al11} \end{multline} Furthermore, \begin{equation}
\|K(h_1)''-K(h_2)''\|_L\leq\|\gamma ''(h_2-h_1)\|_L+2\|\gamma '(h_2'-h_1')\|_L+\|\gamma
(h_1''-h_2'')\|_L.\label{al12} \end{equation}
Using the finiteness of $\|\gamma '\|$, $\|\gamma ''\|$ and putting (\ref{al10}), (\ref{al11}), (\ref{al12}) together we see that there exists $\varepsilon>0$ such that $K$ is a contraction with respect to the norm $\|\cdot\|_{\varepsilon}$.
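The contraction argument is effective: iterating $K$ converges geometrically to the unique fixed point. A toy numerical sketch for $n-1=1$, with $H$ the identity and a concrete $\gamma$ of sup-norm $1/2$ (all choices below are illustrative assumptions, not data from the text):

```python
import numpy as np

M = 64
zeta = np.exp(2j * np.pi * np.arange(M) / M)

def neg_part(samples):
    """Negative-frequency Fourier part of a sampled function on the circle."""
    c = np.fft.fft(samples) / M
    out = np.zeros_like(c)
    out[M // 2 + 1:] = c[M // 2 + 1:]      # frequencies -M//2+1 .. -1
    return np.fft.ifft(out) * M

def P(samples):
    """The operator P: conjugate of the negative-frequency part."""
    return np.conj(neg_part(samples))

# Toy data: n-1 = 1, H = identity, |gamma| = 1/2 < 1 on the circle.
gamma = 0.5 * np.conj(zeta)
psi = 1 / zeta + 2.0                       # some non-holomorphic right-hand side
a = 1.0 + 0.0j

h = np.zeros(M, dtype=complex)
for _ in range(200):                       # Banach iteration h <- K(h)
    h = P(psi - gamma * h) + a

# At the fixed point: h is a fixed point of K with mean value (= h(0)) a,
# and gamma*h + conj(h) - psi has no negative frequencies.
assert np.allclose(h, P(psi - gamma * h) + a)
assert abs(np.mean(h) - a) < 1e-10
assert np.max(np.abs(neg_part(gamma * h + np.conj(h) - psi))) < 1e-10
```

The last assertion is exactly the requirement \eqref{al9} that $\gamma h+\overline{h}-H^{-1}\psi$ extends holomorphically on $\mathbb{D}$, checked on Fourier coefficients.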
We have thus found $\widetilde{f}$ and $\widetilde{\lambda}$ satisfying (\ref{al1}) and (\ref{al3}), and such that the last $n-1$ equations from (\ref{al2}) are satisfied.
It remains to show that there exists a unique $\widetilde{q}\in Q_0$ such that $\widetilde{q}+\zeta A_1+\zeta B_1-\varphi_1$ extends holomorphically on $\mathbb{D}$.
Comparing the coefficients as in \eqref{65}, we see that if $$\pi(\zeta A_1+\zeta B_1-\varphi_1)=\sum_{k=-\infty}^{-1}a_k\zeta^{k}$$ then $\widetilde{q}$ has to be taken as $$-\sum_{k=-\infty}^{-1}a_k\zeta^{k}-\sum_{k=0}^{\infty}b_k\zeta^{k}$$ with $b_k:=\overline a_{-k}$ for $k\geq 1$ and $b_0\in\mathbb{R}$ uniquely determined by $\widetilde{q}(1)=0$.\\
Let us show that the proof of the second Lemma follows from the proof of the first one. Since $\widetilde\Xi$ is real analytic it suffices to prove that the derivative $$\widetilde\Xi_{(f,q,\xi)}(r_0,f_0(\xi_0),f_0,0,\xi_0):B\times Q_0\times\mathbb{R}\longrightarrow Q\times{B^*}\times\mathbb{C}^n$$ is invertible. For $(\widetilde{f},\widetilde{q},\widetilde{\xi})\in B\times Q_0\times\mathbb{R}$ we get \begin{multline*} \widetilde\Xi_{(f,q,\xi)}(r_0,f_0(\xi_0),f_0,0,\xi_0)(\widetilde{f},\widetilde{q},\widetilde{\xi})=\left.\frac{d}{dt}
\widetilde\Xi(r_0,f_0(\xi_0),f_0+t\widetilde{f},t\widetilde{q},\xi_0+t\widetilde{\xi})\right|_{t=0}=\\ =((r_{0z}\circ f_0)\widetilde{f}+(r_{0\overline{z}}\circ f_0)\overline{\widetilde{f}}, \pi(\zeta\widetilde{q}r_{0z}\circ f_0+\zeta(r_{0zz}\circ f_0)\widetilde{f}+\zeta(r_{0z\overline{z}}\circ f_0)\overline{\widetilde{f}}),\widetilde{f}(\xi_0)+\widetilde\xi f_0'(\xi_0)). \end{multline*} We have to show that for $(\eta,\varphi,w)\in Q\times B^*\times\mathbb{C}^n$ there exists exactly one $(\widetilde{f},\widetilde{q},\widetilde{\xi})\in B\times Q_0\times\mathbb{R}$ satisfying \begin{equation} (r_{0z}\circ f_0)\widetilde{f}+(r_{0\overline{z}}\circ f_0)\overline{\widetilde{f}}=\eta, \label{1al1} \end{equation} \begin{equation} \pi(\zeta\widetilde{q}r_{0z}\circ f_0+\zeta (r_{0zz}\circ f_0)\widetilde{f}+\zeta(r_{0z\overline{z}}\circ f_0)\overline{\widetilde{f}})=\varphi, \label{1al2} \end{equation} \begin{equation} \widetilde f(\xi_0)+\widetilde\xi f_0'(\xi_0)=w. \label{1al3} \end{equation} The equation (\ref{1al1}) turns out to be \begin{equation} \re(\widetilde{f}_1/\zeta)=\eta\text{ (on }\mathbb{T}). \label{1al4} \end{equation} The equation above uniquely determines $\widetilde{f}_1/\zeta\in W^{2,2}(\mathbb{T},\mathbb{C})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ up to an imaginary additive constant, which may be computed using (\ref{1al3}). Indeed, there exists $G\in W^{2,2}(\mathbb{T},\mathbb{C})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ such that $\eta=\re G$ on $\mathbb{T}$. 
We are searching for $C\in\mathbb{R}$ such that the functions $\widetilde{f}_1:=\zeta(G+iC)$ and $\theta:=\im(\widetilde{f}_1/\zeta)$ satisfy $$\xi_0\eta(\xi_0)+i\xi_0\theta(\xi_0)=\widetilde{f}_1(\xi_0)$$ and $$\xi_0(\eta(\xi_0)+i\theta(\xi_0))+\widetilde{\xi}\re{f_{01}'(\xi_0)}+i\widetilde{\xi}\im{{f_{01}'(\xi_0)}}= \re{w_1}+i\im{w_1}.$$ But $$\xi_0\eta(\xi_0)+\widetilde{\xi}\re{f_{01}'(\xi_0)}=\re{w_1},$$ which yields $\widetilde{\xi}$ and then $\theta(\xi_0)$, consequently the number $C$. Having $\widetilde{\xi}$ and once again using (\ref{1al3}), we find uniquely determined $\widetilde{f}_2(\xi_0),\ldots,\widetilde{f}_n(\xi_0)$.
Therefore, the equations $\eqref{1al1}$ and $\eqref{1al3}$ are satisfied by uniquely determined $\widetilde f_1$, $\widetilde\xi$ and $\widetilde{f}_2(\xi_0),\ldots,\widetilde{f}_n(\xi_0)$.
In the remaining part of the proof we change the second condition of \eqref{al6} to $$g(\xi_0)={\widehat{\widetilde{f}}}(\xi_0)/\xi_0$$ and we have to prove that there is a unique solution $h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ of (\ref{al9}) such that $h(\xi_0)=a$ with a given $a\in\mathbb{C}^{n-1}$. Let $\tau$ be an automorphism of $\mathbb{D}$ (so it extends holomorphically near $\overline{\DD}$), which maps $0$ to $\xi_0$, i.e. $$\tau(\xi):=\frac{\xi_0-\xi}{1-\overline\xi_0\xi},\ \xi\in\mathbb{D}.$$ Let the maps $P,K$ be as before. Then $h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ satisfies (\ref{al9}) and $h(\xi_0)=a$ if and only if $h\circ\tau\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ satisfies (\ref{al9}) and $(h\circ\tau)(0)=a$. We already know that there is exactly one $\widetilde h\in W^{2,2}(\mathbb{T},\mathbb{C}^{n-1})\cap{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ satisfying (\ref{al9}) and $\widetilde h(0)=a$. Setting $h:=\widetilde h\circ\tau^{-1}$, we get the claim. \end{proof}
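The automorphism $\tau$ used above is its own inverse, which is what makes the transport of the fixed-point problem between $0$ and $\xi_0$ painless. A quick numerical check of its properties ($\xi_0=0.4$ is an arbitrary sample value):

```python
import numpy as np

xi0 = 0.4                                   # any point of (0, 1)

def tau(xi):
    """The disc automorphism tau(xi) = (xi0 - xi) / (1 - conj(xi0) * xi)."""
    return (xi0 - xi) / (1 - np.conj(xi0) * xi)

# tau exchanges 0 and xi0 ...
assert np.isclose(tau(0), xi0) and np.isclose(tau(xi0), 0)
# ... is an involution, hence its own inverse ...
pts = np.array([0.1 + 0.2j, -0.5j, 0.9])
assert np.allclose(tau(tau(pts)), pts)
# ... and preserves the unit circle.
theta = np.linspace(0, 2 * np.pi, 50)
assert np.allclose(np.abs(tau(np.exp(1j * theta))), 1)
```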
\subsection{Topology in the class of domains with real analytic boundaries}\label{topol}
We introduce a concept of a domain being close to some other domain. Let $D_0\subset\mathbb{C}^n$ be a bounded domain with real analytic boundary. Then there exist a neighborhood $U_0$ of $\partial D_0$ and a real analytic defining function $r_0:U_0\longrightarrow\mathbb{R}$ such that $\nabla r_0$ does not vanish in $U_0$ and $$D_0\cap U_0=\{z\in U_0:r_0(z)<0\}.$$
\begin{dff} We say that domains $D$ \textit{tend to} $D_0$ $($or are \textit{close to} $D_0${$)$} if one can choose their defining functions $r\in X_0$ so that $r$ tends to $r_0$ in $X_0$. \end{dff}
\begin{remm} If $r\in X_0$ is close to $r_0$ with respect to the topology in $X_0$, then $\{z\in U_0:r(z)=0\}$ is a compact real analytic hypersurface which bounds a bounded domain; we denote this domain by $D^{r}$.
Moreover, if $D^{r_0}$ is strongly linearly convex, then the domain $D^r$ is also strongly linearly convex provided that $r$ is close to $r_0$. \end{remm}
\subsection{Statement of the main result of this section}
\begin{remm}\label{f} Assume that $D^r$ is a strongly linearly convex domain bounded by a real analytic hypersurface $\{z\in U_0:r(z)=0\}$. Let $\xi\in(0,1)$ and $w\in(\mathbb{C}^n)_*$.
Then a function $f\in B_0$ satisfies the conditions $$f\text{ is a weak stationary mapping of }D^r,\ f(0)=0,\ f(\xi)=w$$ if and only if there exists $q\in Q_0$ such that $q>-1$ and $\widetilde\Xi(r,w,f,q,\xi)=0$.
Actually, from $\widetilde\Xi(r,w,f,q,\xi)=0$ we deduce immediately that $r\circ f=0$ on $\mathbb{T}$, $f(\xi)=w$ and $\pi(\zeta(1+q)(r_z\circ f))=0$. From the first equality we get $f(\mathbb{T})\subset \partial D^{r}$. From the last one we deduce that the condition (3') of Definition~\ref{21} is satisfied (with $\rho:=(1+q)|r_z\circ f|$). Since $D^{r}$ is strongly linearly convex, $\overline{D^r}$ is polynomially convex (use the fact that projections of $\mathbb{C}$-convex domains are $\mathbb{C}$-convex, as well, and the fact that $D^r$ is smooth). In particular, $$f(\overline{\DD})=f(\widehat{\mathbb{T}})\subset\widehat{f(\mathbb{T})}\subset\widehat{\overline{D^r}}=\overline{D^r},$$ where $\widehat S:=\{z\in\mathbb{C}^m:|P(z)|\leq\sup_S|P|\text{ for any polynomial }P\in\mathbb{C}[z_1,\ldots,z_m]\}$ is the polynomial hull of a set $S\subset\mathbb{C}^m$.
Note that this implies $f(\mathbb{D})\subset D^r$ --- this follows from the fact that $\partial D^r$ does not contain non-constant analytic discs (as $D^r$ is strongly pseudoconvex).
The opposite implication is clear.
In a similar way we show that for any $v\in(\mathbb{C}^n)_*$ and $\lambda>0$, a function $f\in B_0$ satisfies the conditions $$f\text{ is a weak stationary mapping of }D^r,\ f(0)=0,\ f'(0)=\lambda v$$ if and only if there exists $q\in Q_0$ such that $q>-1$ and $\Xi(r,v,f,q,\lambda)=0$.
\end{remm}
\begin{propp}\label{13} Let $D_0\subset\mathbb{C}^n$, $n\geq 2$, be a strongly linearly convex domain with real analytic boundary and let $f_0:\mathbb{D}\longrightarrow D_0$ be an $E$-mapping.
$(1)$ Let $\xi_0\in(0,1)$. Then there exist a neighborhood $W_0$ of $(r_0,f_0(\xi_0))$ in $X_0\times D_0$ and real analytic mappings $$\Lambda:W_0\longrightarrow\mathcal{C}^{1/2}(\overline{\mathbb{D}}),\ \Omega:W_0\longrightarrow(0,1)$$ such that $$\Lambda(r_0,f_0(\xi_0))=f_0,\ \Omega(r_0,f_0(\xi_0))=\xi_0$$ and for any $(r,w)\in W_0$ the mapping $f:=\Lambda(r,w)$ is an $E$-mapping of $D^{r}$ satisfying $$f(0)=f_0(0)\text{ and }f(\Omega(r,w))=w.$$
$(2)$ There exist a neighborhood $V_0$ of $(r_0,f_0'(0))$ in $X_0\times\mathbb{C}^n$ and a real analytic mapping $$\Gamma:V_0\longrightarrow\mathcal{C}^{1/2}(\overline{\mathbb{D}})$$ such that $$\Gamma(r_0,f_0'(0))=f_0$$ and for any $(r,v)\in V_0$ the mapping $f:=\Gamma(r,v)$ is an $E$-mapping of $D^{r}$ satisfying $$f(0)=f_0(0)\text{ and }f'(0)=\lambda v\text{ for some }\lambda>0.$$ \end{propp}
\begin{proof}
Observe that Proposition \ref{11} provides us with a mapping $g_0=\Phi\circ f_0$ and a domain $G_0:=\Phi(D_0)$ giving a data for situation (\dag) (here $\partial D_0$ is contained in $U_0$). Clearly, $\rho_0:=r_0\circ\Phi^{-1}$ is a defining function of $G_0$.
Using Lemmas \ref{cruciallemma}, \ref{cruciallemma1} we get neighborhoods $V_0$, $W_0$ of $(\rho_0, g_0'(0))$, $(\rho_0,g_0(\xi_0))$ respectively and real analytic mappings $\Upsilon$, $\widetilde\Upsilon$ such that $ \Xi(\rho,v,\Upsilon(\rho,v))=0$ on $V_0$ and $ \widetilde\Xi(\rho,w,\widetilde\Upsilon(\rho,w))=0$ on $W_0$. Define $$\widehat\Lambda:=\pi_B\circ\widetilde\Upsilon,\quad\Omega:=\pi_\mathbb{R}\circ\widetilde\Upsilon,\quad\widehat\Gamma:=\pi_B\circ\Upsilon,$$ where $$\pi_B:B\times Q_0\times\mathbb{R}\longrightarrow B,\quad\pi_\mathbb{R}:B\times Q_0\times\mathbb{R}\longrightarrow\mathbb{R},\ $$ are the projections.
If $\rho$ is sufficiently close to $\rho_0$, then the hypersurface $\{\rho=0\}$ bounds a strongly linearly convex domain. Moreover, $\widehat\Lambda(\rho,w)$ and $\widehat\Gamma(\rho,v)$ are then weak stationary mappings of $G^{\rho}$ (see Remark~\ref{f}).
Composing $\widehat\Lambda(\rho,w)$ and $\widehat\Gamma(\rho,v)$ with $\Phi^{-1}$ and making use of Remark \ref{rem:theta} we get weak stationary mappings in $D^r$, where $r:=\rho\circ\Phi$. To show that they are $E$-mappings we proceed as follows. If $D^r$ is sufficiently close to $D_0$ (this depends on the distance between $\rho$ and $\rho_0$), the domain $D^r$ is strongly linearly convex, so by the results of Section \ref{55} $$\Lambda(r,w):=\Phi^{-1}\circ\widehat\Lambda(\rho,w)\text{\ and\ }\Gamma(r,v):=\Phi^{-1}\circ\widehat\Gamma(\rho,v)$$ are stationary mappings. Moreover, they are close to $f_0$ provided that $r$ is sufficiently close to $r_0$. Therefore, their winding numbers coincide with that of $f_0$, so they satisfy condition (4) of Definition~\ref{21e}, i.e. they are $E$-mappings. \end{proof}
\section{Localization property}
\begin{prop}\label{localization} Let $D\subset\mathbb C^n$, $n\geq 2$, be a domain. Assume that $a\in\partial D$ is such that $\partial D$ is real analytic and strongly convex in a neighborhood of $a$. Then for any sufficiently small neighborhood $V_0$ of $a$ there is a weak stationary mapping $f$ of $D\cap V_0$ such that $f(\mathbb T)\subset\partial D$.
In particular, $f$ is a weak stationary mapping of $D$. \end{prop}
\begin{proof} Let $r$ be a real analytic defining function in a neighborhood of $a$. The problem we are dealing with has a local character, so replacing $r$ with $r\circ\Psi$, where $\Psi$ is a local biholomorphism near $a$, we may assume that $a=(0,\ldots,0,1)$ and a defining function of $D$ near $a$ is $r(z)=-1+|z|^2+h(z-a)$, where $h$ is real analytic in a neighborhood of $0$ and $h(z)=O(|z|^3)$ as $z\to 0$ (cf. \cite{Rud}, p. 321).
Following \cite{Lem2}, let us consider the mappings
$$A_t(z):=\left((1-t^2)^{1/2}\frac{z'}{1+tz_n},\frac{z_n+t}{1+tz_n}\right),\quad z=(z',z_n)\in\mathbb{C}^{n-1}\times\mathbb{D},\,\,t\in(0,1),$$ which restricted to $\mathbb{B}_n$ are automorphisms. Let $$r_t(z):=\begin{cases}\frac{|1+tz_n|^2}{1-t^2}r(A_t(z)),&t\in(0,1),\\-1+|z|^2,&t=1.\end{cases}$$ It is clear that $f_{(1)}(\zeta)=(\zeta,0,\ldots,0)$, $\zeta\in\mathbb{D}$ is a stationary mapping of $\mathbb B_n$. We want to have the situation (\dag) which will allow us to use Lemma \ref{cruciallemma} (or Lemma \ref{cruciallemma1}). Note that $r_t$ does not converge to $r_1$ as $t\to 1$. However, $r_t\to r_1$ in $X_0(U_0,U_0^{\mathbb C})$, where $U_0$ is a neighborhood of $f_{(1)}(\mathbb{T})$ contained in $\{z\in\mathbb C^n:\re z_n>-1/2\}$ and $U_0^{\mathbb C}$ is sufficiently small (remember that $h(z)=O(|z|^3)$).
Therefore, making use of Lemma \ref{cruciallemma} for $t$ sufficiently close to $1$ we obtain stationary mappings $f_{(t)}$ in $D_t:=\{z\in \mathbb C^n: r_t(z)<0,\ \re z_n>-1/2\}$ such that $f_{(t)}\to f_{(1)}$ in the $W^{2,2}$-norm (so also in the sup-norm). Actually, it follows from Lemma~\ref{cruciallemma} that one may take $f_{(t)}:=\pi_B\circ\Upsilon(r_t,f_{(1)}'(0))$ (keeping the notation from this lemma). The argument used in Remark~\ref{f} gives that $f_{(t)}$ satisfies conditions (1'), (2') and (3') of Definition~\ref{21}. Since the non-constant function $r\circ A_t\circ f_{(t)}$ is subharmonic on $\mathbb{D}$, continuous on $\overline{\DD}$ and $r\circ A_t\circ f_{(t)}=0$ on $\mathbb{T}$, we see from the maximum principle that $f_{(t)}$ maps $\mathbb{D}$ into $D_t$. Therefore, $f_{(t)}$ are weak stationary mappings for $t$ close to $1$.
In particular, $$f_{(t)}(\mathbb{D})\subset 2\mathbb B_n \cap \{z\in\mathbb C^n:\re z_n>-1/2\}$$ provided that $t$ is close to $1$. The mappings $A_t$ have the following important property $$A_t(2\mathbb B_n\cap\{z\in\mathbb C^n:\re z_n>-1/2\})\to\{a\}$$ as $t\to 1$ in the sense of the Hausdorff distance.
Therefore, we find from Remark \ref{rem:theta} that $g_{(t)}:=A_t\circ f_{(t)}$ is a stationary mapping of $D$. Since $g_{(t)}$ maps $\mathbb{D}$ into an arbitrarily small neighborhood of $a$ provided that $t$ is sufficiently close to $1$, we immediately get the assertion. \end{proof}
\section{Proofs of Theorems \ref{lem-car} and \ref{main}}
We start this section with the following \begin{lem}\label{lemat} For any distinct $z,w\in D$ $($resp. for any $z\in D$, $v\in(\mathbb{C}^n)_*${$)$} there exists an $E$-mapping $f:\mathbb{D}\longrightarrow D$ such that $f(0)=z$, $f(\xi)=w$ for some $\xi\in(0,1)$ $($resp. $f(0)=z$, $f'(0)=\lambda v$ for some $\lambda>0${$)$}. \end{lem}
\begin{proof} Fix distinct $z,w\in D$ (resp. $z\in D$, $v\in(\mathbb{C}^{n})_*$).
First, consider the case when $D$ is bounded strongly convex with real analytic boundary. Without loss of generality one may assume that $0\in D\Subset\mathbb{B}_n$. We need some properties of the Minkowski functionals.
Let $\mu_G$ be the Minkowski functional of a domain $G\subset\mathbb{C}^n$ containing the origin, i.e. $$\mu_G(x):=\inf\left\{s>0:\frac{x}{s}\in G\right\},\ x\in\mathbb{C}^n.$$ Assume that $G$ is bounded strongly convex with real analytic boundary. We shall show that \begin{itemize} \item $\mu_G-1$ is a defining function of $G$, real analytic outside $0$; \item $\mu^2_G-1$ is a defining function of $G$, real analytic and strongly convex outside $0$. \end{itemize} Clearly, $G=\{x\in\mathbb{R}^{2n}:\mu_G(x)<1\}$. Setting $$q(x,s):=r\left(\frac{x}{s}\right),\ (x,s)\in U_0\times U_1,$$ where $r$ is a real analytic defining function of $G$ (defined near $\partial G$) and $U_0\subset\mathbb{R}^{2n}$, $U_1\subset\mathbb{R}$ are neighborhoods of $\partial G$ and $1$ respectively, we have $$\frac{\partial q}{\partial s}(x,s)=-\frac{1}{s^2}\left\langle\nabla r\left(\frac{x}{s}\right),x\right\rangle_{\mathbb{R}}\neq 0$$ for $(x,s)$ such that $x\in\partial G$ and $s=\mu_G(x)=1$ (since $0\in G$, the vector $-x$ attached at the point $x$ points into $G$, so it is not orthogonal to the normal vector at $x$). By the Implicit Function Theorem for the equation $q=0$, the function $\mu_G$ is real analytic in a neighborhood $V_0$ of $\partial G$. To see that $\mu_G$ is real analytic outside $0$, fix $x_0\in(\mathbb{R}^{2n})_*$. Then the set $$W_0:=\left\{x\in\mathbb{R}^{2n}:\frac{x}{\mu_G(x_0)}\in V_0\right\}$$ is open and contains $x_0$. Since $$\mu_G(x)=\mu_G(x_0)\mu_G\left(\frac{x}{\mu_G(x_0)}\right),\ x\in W_0,$$ the function $\mu_G$ is real analytic in $W_0$. Therefore, we can take $d/ds$ on both sides of $\mu_G(sx)=s\mu_G(x),\ x\neq 0,\ s>0$ to obtain $$\langle\nabla\mu_G(x),x\rangle_{\mathbb{R}}=\mu_G(x),\ x\neq 0,$$ so $\nabla\mu_G\neq 0$ in $(\mathbb{R}^{2n})_*$.
Furthermore, $\nabla\mu^2_G=2\mu_G\nabla\mu_G$, so $\mu^2_G-1$ is also a defining function of $G$. To show that $u:=\mu^2_G$ is strongly convex outside $0$ let us prove that $$X^T\mathcal{H}_aX>0,\quad a\in\partial G,\ X\in(\mathbb{R}^{2n})_*,$$ where $\mathcal{H}_x:=\mathcal{H}u(x)$ for $x\in(\mathbb{R}^{2n})_*$. Taking $\partial/\partial x_j$ on both sides of $$u(sx)=s^2u(x),\ x,s\neq 0,$$ we get \begin{equation}\label{62}\frac{\partial u}{\partial x_j}(sx)=s\frac{\partial u}{\partial x_j}(x)\end{equation} and further taking $d/ds$ $$\sum_{k=1}^{2n}\frac{\partial^2 u}{\partial x_j\partial x_k}(sx)x_k=\frac{\partial u}{\partial x_j}(x).$$ In particular, $$x^T\mathcal{H}_xy=\sum_{j,k=1}^{2n}\frac{\partial^2 u}{\partial x_k\partial x_j}(x)x_ky_j=\langle\nabla u(x),y\rangle_{\mathbb{R}},\ x\in(\mathbb{R}^{2n})_*,\ y\in\mathbb{R}^{2n}.$$ Let $a\in\partial G$. Since $\langle\nabla\mu_G(a),a\rangle_{\mathbb{R}}=\mu_G(a)=1$, we have $a\notin T^\mathbb{R}_G(a)$. Any $X\in(\mathbb{R}^{2n})_*$ can be represented as $\alpha a+\beta Y$, where $Y\in T^\mathbb{R}_G(a)$, $\alpha,\beta\in\mathbb{R}$, $(\alpha,\beta)\neq(0,0)$. Then \begin{eqnarray*}X^T\mathcal{H}_aX&=&\alpha^2a^T\mathcal{H}_aa+2\alpha\beta a^T\mathcal{H}_aY+\beta^2Y^T\mathcal{H}_aY=\\&=&\alpha^2\langle\nabla u(a),a\rangle_{\mathbb{R}} +2\alpha\beta\langle\nabla u(a),Y\rangle_{\mathbb{R}} +\beta^2Y^T\mathcal{H}_aY= \\&=&\alpha^22\mu_G(a)\langle\nabla\mu_G(a),a\rangle_{\mathbb{R}} +\beta^2Y^T\mathcal{H}_aY= 2\alpha^2+\beta^2Y^T\mathcal{H}_aY.\end{eqnarray*} Since $G$ is strongly convex, the Hessian of any defining function is strictly positive on the tangent space, i.e. $Y^T\mathcal{H}_aY>0$ if $Y\in(T^\mathbb{R}_G(a))_*$. Hence $X^T\mathcal{H}_aX\geq 0$. Note that $X^T\mathcal{H}_aX=0$ is impossible: it would give $\alpha=0$, consequently $\beta\neq 0$ and $Y^T\mathcal{H}_aY=0$, whereas $Y=X/\beta\neq 0$ --- a contradiction.
Taking $\partial/\partial x_k$ on both sides of \eqref{62} we obtain $$\frac{\partial^2 u}{\partial x_j\partial x_k}(sx)=\frac{\partial^2 u}{\partial x_j\partial x_k}(x),\ x,s\neq 0$$ and for $a,X\in(\mathbb{R}^{2n})_*$ $$X^T\mathcal{H}_aX=X^T\mathcal{H}_{a/\mu_G(a)}X>0.$$
Let us consider the sets $$D_t:=\{x\in\mathbb{C}^n:t\mu^2_D(x)+(1-t)\mu^2_{\mathbb{B}_n}(x)<1\},\ t\in[0,1].$$ The functions $t\mu^2_D+(1-t)\mu^2_{\mathbb{B}_n}$ are real analytic and strongly convex in $(\mathbb{C}^n)_*$, so $D_t$ are strongly convex domains with real analytic boundaries satisfying $$D=D_1\Subset D_{t_2}\Subset D_{t_1}\Subset D_0=\mathbb{B}_n\text{\ if \ }0<t_1<t_2<1.$$ It is clear that $\mu_{D_t}=\sqrt{t\mu^2_D+(1-t)\mu^2_{\mathbb{B}_n}}$. Further, if $t_{1}$ is close to $t_{2}$ then $D_{t_{1}}$ is close to $D_{t_{2}}$ w.r.t. the topology introduced in Section \ref{27}. We want to show that $D_t$ are in some family $\mathcal D(c)$. Only the interior and exterior ball conditions need to be verified.
There exists $\delta>0$ such that $\delta\mathbb{B}_n\Subset D$. Further, $\nabla\mu_{D_t}^2\neq 0$ in $(\mathbb{R}^{2n})_*$. Set $$M:=\sup\left\{\frac{\mathcal{H}\mu_{D_t}^2(x;X)}{|\nabla\mu_{D_t}^2(y)|}:
t\in[0,1],\ x,y\in 2\overline{\mathbb{B}}_n\setminus\delta\mathbb{B}_n,\ X\in\mathbb{R}^{2n},\ |X|=1\right\}.$$ It is a positive number since the functions $\mu_{D_t}^2$ are strongly convex in $(\mathbb{R}^{2n})_*$ and the `sup' of the continuous, positive function is taken over a compact set. Let $$r:=\min\left\{\frac{1}{2M},\frac{\dist(\partial D,\delta\mathbb{B}_n)}{2}\right\}.$$ For fixed $t\in[0,1]$ and $a\in\partial D_t$ put $a':=a-r\nu_{D_t}(a)$. In particular, $\overline{B_n(a',r)}\subset 2\overline{\mathbb{B}}_n\setminus\delta\mathbb{B}_n$. Let us define $$h(x):=\mu^2_{D_t}(x)-\frac{|\nabla\mu^2_{D_t}(a)|}{2|a-a'|}(|x-a'|^2-r^2),\ x\in 2\overline{\mathbb{B}}_n\setminus\delta\mathbb{B}_n.$$ We have $h(a)=1$ and $$\nabla h(x)=\nabla\mu^2_{D_t}(x)-\frac{|\nabla\mu^2_{D_t}(a)|}{|a-a'|}(x-a').$$ For $x=a$, dividing the right side by $|\nabla\mu^2_{D_t}(a)|$, we get a difference of the same normal vectors $\nu_{D_t}(a)$, so $\nabla h(a)=0$. Moreover, for $|X|=1$ $$\mathcal{H}h(x;X)=\mathcal{H}\mu^2_{D_t}(x;X)-\frac{|\nabla\mu^2_{D_t}(a)|}{r}\leq M|\nabla\mu^2_{D_t}(a)|-2M|\nabla\mu^2_{D_t}(a)|<0.$$ It follows that $h\leq 1$ in any convex set $S$ such that $a\in S\subset 2\overline{\mathbb{B}}_n\setminus\delta\mathbb{B}_n$. Indeed, assume the contrary. Then there is $y\in S$ such that $h(y)>1$. Let us join $a$ and $y$ with an interval $$g:[0,1]\ni t\longmapsto h(ta+(1-t)y)\in S.$$ Since $a$ is a strong local maximum of $h$, the function $g$ has a local minimum at some point $t_0\in(0,1)$. Hence $$0\leq g''(t_0)=\mathcal{H}h(t_0a+(1-t_0)y;a-y),$$ which is impossible.
Setting $S:=\overline{B_n(a',r)}$, we get $$\mu^2_{D_t}(x)\leq 1+\frac{|\nabla\mu^2_{D_t}(a)|}{2|a-a'|}(|x-a'|^2-r^2)<1$$ for $x\in B_n(a',r)$, i.e. $x\in D_t$.
The proof of the exterior ball condition is similar. Set $$m:=\inf\left\{\frac{\mathcal{H}\mu_{D_t}^2(x;X)}{|\nabla\mu_{D_t}^2(y)|}:
t\in[0,1],\ x,y\in(\overline{\mathbb{B}}_n)_*,\ X\in\mathbb{R}^{2n},\ |X|=1\right\}.$$ Note that $m>0$. Actually, the homogeneity of $\mu_{D_t}$ implies $\mathcal{H}\mu_{D_t}^2(sx;X)=\mathcal{H}\mu_{D_t}^2(x;X)$ and $\nabla\mu_{D_t}^2(sx)=s\nabla\mu_{D_t}^2(x)$ for $x\neq 0$, $X\in \mathbb{R}^{2n}$, $s>0$. Therefore, there are positive constants $C_1,C_2$ such that $C_1\leq\mathcal{H}\mu_{D_t}^2(x;X)$ for $x\neq 0$, $X\in \mathbb{R}^{2n}$, $|X|=1$ and $|\nabla\mu_{D_t}^2(y)|\leq C_2$ for $y\in\overline\mathbb{B}_n$. In particular, $m\geq C_1/C_2$.
Let $R:=2/m$. For fixed $t\in[0,1]$ and $a\in\partial D_t$ put $a'':=a-R\nu_{D_t}(a)$. Let us define $$\widetilde h(x):=\mu^2_{D_t}(x)-\frac{|\nabla\mu^2_{D_t}(a)|}{2|a-a''|}(|x-a''|^2-R^2),\ x\in\overline{\mathbb{B}}_n.$$ We have $\widetilde h(a)=1$ and $$\nabla\widetilde h(x)=\nabla\mu^2_{D_t}(x)-\frac{|\nabla\mu^2_{D_t}(a)|}{|a-a''|}(x-a''),$$ so $\nabla\widetilde h(a)=0$. Moreover, for $x\in(\overline{\mathbb{B}}_n)_*$ and $|X|=1$ $$\mathcal{H}\widetilde h(x;X)=\mathcal{H}\mu^2_{D_t}(x;X)-\frac{|\nabla\mu^2_{D_t}(a)|}{R}\geq m|\nabla\mu^2_{D_t}(a)|-\frac{m}{2}|\nabla\mu^2_{D_t}(a)|>0.$$ Therefore, $a$ is a strong local minimum of $\widetilde h$.
Now using the properties listed above we may deduce that $\widetilde h\geq 1$ in $\overline\mathbb{B}_n$. We proceed similarly as before: seeking a contradiction suppose that there is $y\in\overline\mathbb{B}_n$ such that $\widetilde h(y)<1$. Moving $y$ a little (if necessary) we may assume that $0$ does not lie on the interval joining $a$ and $y$. Then the mapping $\widetilde g(t):=\widetilde h(ta+ (1-t)y)$ attains a local maximum at some point $t_0\in(0,1)$. The second derivative of $\widetilde g$ at $t_0$ is non-positive, which contradicts the positivity of the Hessian of $\widetilde h$.
Hence, we get $$\frac{|\nabla\mu^2_{D_t}(a)|}{2|a-a''|}(|x-a''|^2-R^2)\leq\mu^2_{D_t}(x)-1<0,$$ for $x\in D_t$, so $D_t \subset B_n(a'',R)$.
Let $T$ be the set of all $t\in[0,1]$ such that there is an $E$-mapping $f_{t}:\mathbb{D}\longrightarrow D_{t}$ with $f_{t}(0)=z$, $f_{t}(\xi_{t})=w$ for some $\xi_{t}\in(0,1)$ (resp. $f_{t}(0)=z$, $f_{t}'(0)=\lambda_{t}v$ for some $\lambda_{t}>0$). We claim that $T=[0,1]$. To prove it we will use the open-close argument.
Clearly, $T\neq\emptyset$, as $0\in T$. Moreover, $T$ is open in $[0,1]$. Indeed, let $t_{0}\in T$. It follows from Proposition \ref{13} that there is a neighborhood $T_{0}$ of $t_{0}$ such that there are $E$-mappings $f_{t}:\mathbb{D}\longrightarrow D_{t}$ and $\xi_{t}\in(0,1)$ such that $f_{t}(0)=z$, $f_{t}(\xi_{t})=w$ for all $t\in T_{0}$ (resp. $\lambda_{t}>0$ such that $f_{t}(0)=z$, $f_{t}'(0)=\lambda_{t} v$ for all $t\in T_{0}$).
To prove that $T$ is closed, choose a sequence $\{t_{m}\}\subset T$ convergent to some $t\in[0,1]$. We want to show that $t\in T$. Since $f_{t_m}$ are $E$-mappings, they are complex geodesics. Therefore, making use of the inclusions $D\subset D_{t_m}\subset\mathbb B_n$ we find that there is a compact set $K\subset(0,1)$ (resp. a compact set $\widetilde K\subset(0,\infty)$) such that $\{\xi_{t_m}\}\subset K$ (resp. $\{\lambda_{t_m}\}\subset\widetilde K$). By Propositions \ref{8} and \ref{10b} the functions $f_{t_{m}}$ and $\widetilde f_{t_{m}}$ are equicontinuous in $\mathcal{C}^{1/2}(\overline{\mathbb{D}})$ and by Propositions \ref{9} and \ref{10a} the functions $\rho_{t_{m}}$ are uniformly bounded from both sides by positive numbers and equicontinuous in $\mathcal{C}^{1/2}(\mathbb{T})$. From the Arzela-Ascoli Theorem there are a subsequence $\{s_{m}\}\subset\{t_{m}\}$ and mappings $f,\widetilde f\in{\mathcal O}(\mathbb{D})\cap\mathcal C^{1/2}(\overline{\mathbb D})$, $\rho\in{\mathcal C}^{1/2}(\mathbb{T})$ such that $f_{s_{m}}\to f$, $\widetilde{f}_{s_{m}}\to\widetilde f$ uniformly on $\overline{\mathbb{D}}$, $\rho_{s_{m}}\to\rho$ uniformly on $\mathbb{T}$ and $\xi_{s_m}\to\xi\in (0,1)$ (resp. $\lambda_{s_m}\to\lambda>0$).
Clearly, $f(\overline{\DD})\subset\overline{D}_{t}$, $f(\mathbb{T})\subset\partial D_{t}$ and $\rho>0$. By the strong pseudoconvexity of $D_t$ we get $f(\mathbb{D})\subset D_t$.
The conditions (3') and (4) of Definitions~\ref{21} and \ref{21e} follow from the uniform convergence of suitable functions. Therefore, $f$ is a weak $E$-mapping of $D_{t}$, consequently an $E$-mapping of $D_t$, satisfying $f(0)=z$, $f(\xi)=w$ (resp. $f(0)=z$, $f'(0)=\lambda v$).
Let us go back to the general situation, that is, when the domain $D$ is bounded strongly linearly convex with real analytic boundary. Take a point $\eta\in\partial{D}$ such that $\max_{\zeta\in\partial{D}}|z-\zeta|=|z-\eta|$. Then $\eta$ is a point of strong convexity of $D$. Indeed, by the Implicit Function Theorem one can assume that in a neighborhood of $\eta$ the defining functions of $D$ and $B:=B_n(z,|z-\eta|)$ are of the form $r(x):=\widetilde r(\widetilde x)-x_{2n}$ and $q(x):=\widetilde q(\widetilde x)-x_{2n}$ respectively, where $x=(\widetilde x,x_{2n})\in\mathbb{R}^{2n}$ is sufficiently close to $\eta$. From the inclusion $D\subset B$ it follows that $r-q\geq 0$ near $\eta$ and $(r-q)(\eta)=0$. Thus the Hessian $\mathcal{H}(r-q)(\eta)$ is weakly positive in $\mathbb{C}^n$. Since $\mathcal{H}q(\eta)$ is strictly positive on $T_B^\mathbb{R}(\eta)_*=T_D^\mathbb{R}(\eta)_*$, we find that $\mathcal{H}r(\eta)$ is strictly positive on $T_D^\mathbb{R}(\eta)_*$, as well.
By a continuity argument, there is a convex neighborhood $V_0$ of $\eta$ such that all points from $\partial D\cap V_0$ are points of the strong convexity of $D$. It follows from Proposition \ref{localization} (after shrinking $V_0$ if necessary) that there is a weak stationary mapping $g:\mathbb{D}\longrightarrow D\cap V_0$ such that $g(\mathbb{T})\subset\partial D$. In particular, $g$ is a weak stationary mapping of $D$. Since $D\cap V_0$ is convex, the condition with the winding number is satisfied on $D\cap V_0$ (and then on the whole $D$). Consequently $g$ is an $E$-mapping of $D$.
If $z=g(0)$, $w=g(\xi)$ for some $\xi\in\mathbb{D}$ (resp. $z=g(0)$, $v=g'(0)$) then there is nothing to prove. In the other case let us take curves $\alpha:[0,1]\longrightarrow D$, $\beta:[0,1]\longrightarrow D$ joining $g(0)$ and $z$, $g(\xi)$ and $w$ (resp. $g(0)$ and $z$, $g'(0)$ and $v$). We may assume that the images of $\alpha$ and $\beta$ are disjoint. Let $T$ be the set of all $t\in[0,1]$ such that there is an $E$-mapping $g_{t}:\mathbb{D}\longrightarrow D$ such that $g_{t}(0)=\alpha(t)$, $g_{t}(\xi_{t})=\beta(t)$ for some $\xi_{t}\in(0,1)$ (resp. $g_{t}(0)=\alpha(t)$, $g_{t}'(0)=\lambda_{t}\beta(t)$ for some $\lambda_{t}>0$). Again $T\neq\emptyset$ since $0\in T$. Using the results of Section \ref{22} similarly as before (but for one domain), we see that $T$ is closed.
Since $\widetilde k_D$ is symmetric, it follows from Proposition \ref{13}(1) that the set $T$ is open in $[0,1]$ (first we move along $\alpha$, then by the symmetry we move along $\beta$). Therefore, $g_1$ is the $E$-mapping for $z,w$.
In the case of $\kappa_{D}$ we change a point and then we change a direction. To be more precise, consider the set $S$ of all $s\in[0,1]$ such that there is an $E$-mapping $h_{s}:\mathbb{D}\longrightarrow D$ such that $h_{s}(0)=\alpha(s)$. Then $0\in S$, by Proposition \ref{13}(1) the set $S$ is open in $[0,1]$ and by results of Section~\ref{22} again, it is closed. Hence $S=[0,1]$. Now we may join $h'_{1}(0)$ and $v$ with a curve $\gamma:[0,1]\longrightarrow \mathbb C^n$. Let us define $R$ as the set of all $r\in[0,1]$ such that there is an $E$-mapping $\widetilde h_{r}:\mathbb{D}\longrightarrow D$ such that $\widetilde h_{r}(0)=h_1(0)$, $\widetilde h'_{r}(0)=\sigma_{r}\gamma(1-r)$ for some $\sigma_r>0$. Then $1\in R$, by Proposition \ref{13}(2) the set $R$ is open in $[0,1]$ and, by Section \ref{22}, it is closed. Hence $R=[0,1]$, so $\widetilde h_{0}$ is the $E$-mapping for $z,v$. \end{proof}
Now we are in a position to prove the main results of Lempert's paper.
\begin{proof}[Proof of Theorem \ref{lem-car} $($real analytic case$)$] It follows from Lemma \ref{lemat} that for any different points $z,w\in D$ (resp. $z\in D$, $v\in(\mathbb{C}^n)_*$) one may find an $E$-mapping passing through them (resp. $f(0)=z$, $f'(0)=v$). On the other hand, it follows from Proposition \ref{1} that $E$-mappings have left inverses, so they are complex geodesics. \end{proof}
\begin{proof}[Proof of Theorem \ref{main} $($real analytic case$)$] This is a direct consequence of Lemma \ref{lemat} and Corollary \ref{28}. \end{proof}
\begin{center}{\sc ${\mathcal C}^2$-smooth case}\end{center}
\begin{lem}\label{un} Let $D\subset\mathbb C^n$, $n\geq 2$, be a bounded strongly pseudoconvex domain with $\mathcal C^2$-smooth boundary. Take $z\in D$ and let $r$ be a defining function of $D$ such that \begin{itemize}\item $r\in \mathcal C^2(\mathbb C^n);$ \item $D=\{x\in \mathbb C^n:r(x)<0\}$; \item $\mathbb C^n\setminus D=\{x\in \mathbb C^n:r(x)>0\}$;
\item $|\nabla r|=1$ on $\partial D;$
\item $\sum_{j,k=1}^n\frac{\partial^2 r}{\partial z_j\partial\overline z_k}(a)X_{j}\overline{X}_{k}\geq C|X|^2$ for any $a\in \partial D$ and $X\in \mathbb C^n$ with some constant $C>0$. \end{itemize}
Suppose that there is a sequence $\{r_m\}$ of $\mathcal C^2$-smooth real-valued functions such that $D^{\alpha}r_m$ converges to $D^{\alpha}r$ locally uniformly for any $\alpha\in \mathbb N_0^{2n}$ such that $|\alpha|:=\alpha_1 +\ldots+\alpha_{2n}\leq 2$. Let $D_m$ be a connected component of the set $\{x\in\mathbb C^n:r_m(x)<0\}$ containing the point $z$.
Then there is $c>0$ such that $(D_m,z)$ and $(D,z)$ belong to $\mathcal D(c)$, $m>>1.$ \end{lem}
\begin{proof} Losing no generality assume that $D\Subset\mathbb B_n.$
Note that the conditions (1), (5), (6) of Definition \ref{30} are clearly satisfied. To find $c$ satisfying ($2$), we take $s>0$ such that $\mathcal H r (x;X)< s |X|^2$ for $x\in\overline\mathbb{B}_n$ and $X\in(\mathbb R^{2n})_*$. Then ${\mathcal H} r_m (x;X)<2s|X|^2$ for $x\in\overline\mathbb{B}_n$, $X\in(\mathbb R^{2n})_*$ and $m>>1$. Let $U_0\subset\mathbb B_n$ be an open neighborhood of $\partial D$ such that $3/4<|\nabla r|<5/4$ on $U_0$. Note that $\partial D_m\subset U_0$ and $|\nabla r_m|\in (1/2, 3/2)$ on $U_0$ for $m>>1$.
Fix $m$ and $a\in \partial D_m$ and put $b:=a-R\nu_{D_m}(a)$, where a small number $R>0$ will be specified later. There is $t>0$ such that $\nabla r_m(a)=2t(a-b)$. Note that $t$ may be arbitrarily large provided that $R$ is small enough. We take $t:=2s$ and $R:=|\nabla r_m(a)|/(2t)$. Then we have $\mathcal H r_m(x;X)<2t |X|^2$ for $x\in\overline\mathbb{B}_n$, $X\in(\mathbb R^{2n})_*$ and $m>>1$. Then the function $$h(x):=r_m(x)-t(|x-b|^2-R^2),\ x\in \mathbb C^n,$$ attains its global maximum on $\overline\mathbb{B}_n$ at $a$ ($a$ is a strong local maximum and the Hessian of $h$ is negative on the convex set $\overline\mathbb{B}_n$, cf. the proof of Lemma \ref{lemat}). Thus $h\leq 0$ on $\mathbb B_n$. From this we immediately get (2).
Note that it follows from (2) that $D_m=\{x\in\mathbb C^n:r_m(x)<0\}$ for $m$ big enough (i.e. $\{x\in \mathbb C^n:\ r_m(x)<0\}$ is connected).
Moreover, the condition (2) implies the condition (3) as follows. We infer from Remark~\ref{D(c),4} that there is $c'>0$ such that $D$ satisfies (3) with $c'$. Let $m_0$ be such that the Hausdorff distance between $\partial D$ and $\partial D_m$ is smaller than $1/c'$ for $m\geq m_0$. There is $c''$ such that $D_{m_0}$ satisfies (3) with $c''$. Losing no generality we may assume that $c''<c'$. Take any $x,y\in D_m$. Since $D_m$ satisfies the interior ball condition with a radius $c$ we infer that there are balls of a radius $1/c$ contained in $D_m$ and containing $x$ and $y$ respectively. The centers of these balls lie in $D_{m_0}$. Using the fact that $(D_{m_0},z)$ lies in $\mathcal D(c'')$, we may join chosen centers with balls of a radius $1/(2c'')$ as in the condition (3), so we have found a chain consisting of balls of radii $c'$ and $c''$ joining $x$ and $y$.
Thus we may join $x$ and $y$ with balls contained entirely in the constructed chain whose radii depend only on $c'$ and $c''$.
Now we prove $(4)$. We shall show that there is $c>c'$ such that every $D_m$ satisfies (4) with $c$ for $m$ big enough. To do it let us cover $\partial D$ with a finite number of balls $B_j$, $j=1,\ldots,N$, from condition (4) and let $B'_j$ be a ball contained relatively in $B_j$ such that $\{B'_j\}$ covers $\partial D$, as well. Let $\Phi_j$ be mappings corresponding to $B_j$. Let $\varepsilon$ be such that any ball of radius $\varepsilon$ intersecting $\partial D$ non-emptily is relatively contained in $B_j'$ for some $j$. Observe that any ball $B$ of radius $\varepsilon/2$ intersecting non-emptily $\partial D_m$ is contained in a ball of radius $\varepsilon$ intersecting non-emptily $\partial D$; hence it is contained in $B_j'$ for some $j$. Then the pair $B$, $\Phi_j$ satisfies the conditions (4) (b), (c) and (d). Therefore, it suffices to check that there is $c>2/\varepsilon$ such that each pair $B_j'$, $\Phi_j$ satisfies the condition (4) for $D_m$ with $c$ ($m>>1$). This is possible since $\Phi_j(D_m)\subset\Phi_j(D)$, $D^\alpha\Phi_j(\partial D_m\cap B_j)$ converges to $D^\alpha\Phi_j(\partial D\cap B_j)$ for $|\alpha|\leq 2$ and for any $w\in\Phi_j(\partial D\cap B_j)$ there is a ball of radius $2/\varepsilon$ containing $\Phi_j(D)$ and tangent to $\partial\Phi_j(D)$ at $w$. To be precise, we proceed as follows.
Let $a,b\in\mathbb{C}^n$ and let $x\in\partial B_n(a,\widetilde c)$, where $\widetilde c>c'$. Then a ball $B_n(2a-x,2\widetilde c)$ contains $B_n(a,\widetilde c)$ and is tangent to $B_n(a,\widetilde c)$ at $x$. There is a number $\eta=\eta(\delta,\widetilde c)>0$, independent of $a,b,x$, such that the diameter of the set $B_n(b,\widetilde c)\setminus B_n(2a-x,2\widetilde c)$ is smaller than $\delta>0$, whenever $|a-b|<\eta$ (this is a simple consequence of the triangle inequality).
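For completeness, let us verify this estimate. Suppose $|a-b|<\eta$ and take $y\in B_n(b,\widetilde c)\setminus B_n(2a-x,2\widetilde c)$. Put $u:=y-a$ and $w:=x-a$, so that $|w|=\widetilde c$ and $|u|\leq|y-b|+|b-a|<\widetilde c+\eta$. The condition $|y-(2a-x)|=|u+w|\geq 2\widetilde c$ gives $$2\langle u,w\rangle_{\mathbb{R}}\geq 4\widetilde c^2-|u|^2-|w|^2\geq 2\widetilde c^2-2\widetilde c\eta-\eta^2,$$ whence $$|y-x|^2=|u-w|^2=|u|^2+|w|^2-2\langle u,w\rangle_{\mathbb{R}}\leq 4\widetilde c\eta+2\eta^2.$$ Thus the diameter of $B_n(b,\widetilde c)\setminus B_n(2a-x,2\widetilde c)$ is at most $2\sqrt{4\widetilde c\eta+2\eta^2}$, which is smaller than $\delta$ provided that $\eta$ is small enough.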
Let $\widetilde s>0$ be such that $\mathcal H(r\circ\Phi_j^{-1})(x;X)\geq 2\widetilde s|X|^2$ for $x\in U_j$, $j=1,\ldots,N$, where $U_j$ is an open neighborhood of $\Phi_j(\partial D\cap B_j)$. Then, for $m$ big enough, $\mathcal H(r_m\circ \Phi_j^{-1})(x;X)\geq\widetilde s|X|^2$ for $x\in U_j$ and $\Phi_j(\partial D_m\cap B_j')\subset U_j$, $j=1,\ldots,N$. Repeating for the function $$x\longmapsto(r_m\circ\Phi_j^{-1})(x)-\widetilde t(|x-\widetilde b|^2-\widetilde R^2)$$ the argument used in the interior ball condition with suitably chosen $\widetilde t$ and uniform $\widetilde R>c$, we find that there is uniform $\widetilde\varepsilon>0$ such that for any $j,m$ and $w\in\Phi_j(\partial D_m\cap B_j')$ there is a ball $B$ of radius $\widetilde R$, tangent to $\Phi_j(\partial D_m\cap B_j')$ at $w$, such that $\Phi_j(\partial D_m\cap B_j')\cap B_n(w,\widetilde\varepsilon)\subset B$. Let $a_{j,m}(w)$ denote its center.
On the other hand for any $w\in \Phi_j(\partial D_m\cap B_j')$ there is $t>0$ such that $w'=w+t\nu (w)\in \Phi_j(\partial D\cap B_j)$, where $\nu(w)$ is a normal vector to $\Phi_j(\partial D_m\cap B_j')$ at $w$. Let $a_j(w')$ be a center of a ball of radius $\widetilde R$ tangent to $\Phi_j(\partial D\cap B_j)$ at $w'$. It follows that $|a_{j,m}(w)-a_j(w')|<\eta(\widetilde\varepsilon/2,\widetilde R)$ provided that $m$ is big enough.
Combining the facts presented above, we finish the proof of the exterior ball condition (with a radius depending only on $\widetilde\varepsilon$ and $\widetilde R$). \end{proof}
\begin{proof}[Proof of Theorems \ref{lem-car} and \ref{main} \emph{(}$\mathcal C^2$-smooth case$)$] Losing no generality assume that $0\in D\Subset\mathbb{B}_n$.
It follows from the Weierstrass Theorem that there is a sequence $\{P_k\}$ of real polynomials on $\mathbb{C}^n\simeq\mathbb R^{2n}$ such that $$D^{\alpha}P_{k}\to D^{\alpha}r \text{ uniformly on }\overline\mathbb{B}_n,$$ where $\alpha=(\alpha_1,\ldots, \alpha_{2n})\in \mathbb N_0^{2n}$ is such that $|\alpha|=\alpha_1+\ldots +\alpha_{2n}\leq 2$. Consider the open set $$\widetilde D_{k,\varepsilon}:=\{x\in \mathbb C^n:P_{k}(x)+\varepsilon<0\}.$$ Let $\{\varepsilon_{m}\}$ be a sequence of positive numbers converging to $0$ such that $3\varepsilon_{m+1}<\varepsilon_m.$
For any $m\in \mathbb N$ there is $k_{m}\in\mathbb{N}$ such that $\sup_{\overline\mathbb{B}_n}|P_{k_{m}}-r|<\varepsilon_{m}$. Putting $r_{m}:=P_{k_{m}}+2\varepsilon_{m}$, we get $r+\varepsilon_{m}<r_{m}<r+3\varepsilon_{m}$. In particular, $r_{m+1}<r_m.$
Let $D_m$ be the connected component of $\widetilde D_{k_m,2\varepsilon_m}$ containing $0$. It is a bounded strongly linearly convex domain with real analytic boundary and $r_m$ is its defining function provided that $m$ is big enough. Moreover, $D_{m}\subset D_{m+1}$ and $\bigcup_m D_{m}=D$. Using properties of holomorphically invariant functions and metrics we get Theorem~\ref{lem-car}.
We are left with showing the claim that for any different $z,w\in D$ (resp. $z\in D$, $v\in(\mathbb C^n)_*$) there is a weak $E$-mapping for $z,w$ (resp. for $z,v$). Fix $z\in D$ and $w\in D$ (resp. $v\in(\mathbb C^n)_*$). Then $z,w\in D_m$ (resp. $z\in D_m$), $m>>1$. Therefore, for any $m>>1$ one may find an $E$-mapping $f_m$ of $D_m$ for $z,w$ (resp. for $z,v$). Since $(D_m,z)\in \mathcal D(c)$ for some uniform $c>0$ ($m>>1$) (Lemma~\ref{un}), we find that $f_m$, $\widetilde f_m$ and $\rho_m$ satisfy the uniform estimates from Section~\ref{22}. Thus, passing to a subsequence we may assume that $\{f_m\}$ converges uniformly on $\overline{\DD}$ to a mapping $f\in{\mathcal O}(\mathbb{D})\cap{\mathcal C}^{1/2}(\overline{\DD})$ passing through $z,w$ (resp. such that $f(0)=z$, $f'(0)=\lambda v$, $\lambda>0$), $\{\widetilde f_m\}$ converges uniformly on $\overline{\DD}$ to a mapping $\widetilde f\in{\mathcal O}(\mathbb{D})\cap\mathcal C^{1/2}(\overline{\mathbb D})$ and $\{\rho_m\}$ is convergent uniformly on $\mathbb{T}$ to a positive function $\rho\in{\mathcal C}^{1/2}(\mathbb{T})$ (in particular, $f'\bullet\widetilde f=1$ on $\mathbb{D}$, so $\widetilde f$ has no zeroes in $\overline{\DD}$). We already know that this implies that $f$ is a weak $E$-mapping of $D$.
To get ${\mathcal C}^{k-1-\varepsilon}$-smoothness of the extremal $f$ and its associated mappings for $k\geq 3$, it suffices to repeat the proof of Proposition~5 of \cite{Lem2}. This is just the Webster Lemma (we have proved it in the real analytic case --- see Proposition~\ref{6}). Namely, let $$\psi:\partial D\ni z\longmapsto(z,T_{D}^\mathbb{C}(z))\in \mathbb C^n\times(\mathbb P^{n-1})_*,$$ where $\mathbb P^{n-1}$ is the $(n-1)$-dimensional complex projective space. Let $\pi:(\mathbb{C}^n)_*\longrightarrow\mathbb P^{n-1}$ be the canonical projection.
By \cite{Web}, $\psi(\partial D)$ is a totally real manifold of $\mathcal C^{k-1}$ class. Observe that the mapping $(f,\pi\circ \widetilde f):\overline{\DD}\longrightarrow\mathbb{C}^n\times\mathbb P^{n-1}$ is $1/2$-H\"older continuous, is holomorphic on $\mathbb D$ and maps $\mathbb T$ into $\psi(\partial D)$. Therefore, it is $\mathcal C^{k-1-\varepsilon}$-smooth for any $\varepsilon>0$, whence $f$ is $\mathcal C^{k-1-\varepsilon}$-smooth. Since $\nu_D\circ f$ is of class $\mathcal C^{k-1-\varepsilon}$, it suffices to proceed as in the proof of Proposition~\ref{6}. \end{proof}
\section{Appendix}\label{Appendix} \subsection{Totally real submanifolds} Let $M\subset\mathbb{C}^m$ be a totally real local $\mathcal{C}^{\omega}$ submanifold of the real dimension $m$. Fix a point $z\in M$. There are neighborhoods $U_0\subset\mathbb{R}^m$, $V_0\subset\mathbb{C}^m$ of $0$ and $z$ and a $\mathcal{C}^{\omega}$ diffeomorphism $\widetilde{\Phi}:U_0\longrightarrow M\cap V_0$ such that $\widetilde{\Phi}(0)=z$. The mapping $\widetilde{\Phi}$ can be extended in a natural way to a mapping $\Phi$ holomorphic in a neighborhood of $0$ in $\mathbb{C}^m$. Note that this extension will be biholomorphic in a neighborhood of $0$. Actually, we have $$\frac{\partial\Phi_j}{\partial z_k}(0)=\frac{\partial\Phi_j}{\partial x_k}(0)=\frac{\partial\widetilde{\Phi}_j}{\partial x_k}(0),\ j,k=1,\ldots,m,$$ where $x_k=\re z_k$. Suppose that the complex derivative $\Phi'(0)$ is not an isomorphism. Then there is $X\in(\mathbb{C}^m)_*$ such that $\Phi'(0)X=0$, so \begin{multline*}0=\sum_{k=1}^m\frac{\partial\Phi}{\partial z_k}(0)X_k=\sum_{k=1}^m\frac{\partial\widetilde\Phi}{\partial x_k}(0)(\re X_k+i\im X_k)=\\=\underbrace{\sum_{k=1}^m\frac{\partial\widetilde\Phi}{\partial x_k}(0)\re X_k}_{=:A}+i\underbrace{\sum_{k=1}^m\frac{\partial\widetilde\Phi}{\partial x_k}(0)\im X_k}_{=:B}.\end{multline*} The vectors $$\frac{\partial\widetilde\Phi}{\partial x_k}(0),\ k=1,\ldots,m$$ form a basis of $T^{\mathbb{R}}_M(z)$, so $A,B\in T^{\mathbb{R}}_M(z)$, consequently $A,B\in iT^{\mathbb{R}}_M(z)$. Since $M$ is totally real, i.e. $T^{\mathbb{R}}_M(z)\cap iT^{\mathbb{R}}_M(z)=\{0\}$, we have $A=B=0$. By a property of the basis we get $\re X_k=\im X_k=0$, $k=1,\ldots,m$ --- a contradiction.
Therefore, $\Phi$ in a neighborhood of $0$ is a biholomorphism of two open subsets of $\mathbb{C}^m$, which maps a neighborhood of $0$ in $\mathbb{R}^m$ to a neighborhood of $z$ in $M$.
\begin{lemm}[Reflection Principle]\label{reflection} Let $M\subset\mathbb{C}^m$ be a totally real local $\mathcal{C}^{\omega}$ submanifold of the real dimension $m$. Let $V_0\subset\mathbb{C}$ be a neighborhood of $\zeta_0\in\mathbb{T}$ and let $g:\overline{\mathbb{D}}\cap V_0\longrightarrow\mathbb{C}^m$ be a continuous mapping. Suppose that $g\in{\mathcal O}(\mathbb{D}\cap V_0)$ and $g(\mathbb{T}\cap V_0)\subset M$. Then $g$ can be extended holomorphically past $\mathbb{T}\cap V_0$. \end{lemm} \begin{proof} In virtue of the identity principle it is sufficient to extend $g$ locally past an arbitrary point $\zeta_0\in\mathbb{T}\cap V_0$. For a point $g(\zeta_0)\in M$ take $\Phi$ as above. Let $V_1\subset V_0$ be a neighborhood of $\zeta_0$ such that $g(\overline{\DD}\cap V_1)$ is contained in the image of $\Phi$. The mapping $\Phi^{-1}\circ g$ is holomorphic in $\mathbb{D}\cap V_1$ and has real values on $\mathbb{T}\cap V_1$. By the ordinary Reflection Principle we can extend this mapping holomorphically past $\mathbb{T}\cap V_1$. Denote this extension by $h$. Then $\Phi\circ h$ is an extension of $g$ in a neighborhood of $\zeta_0$. \end{proof}
\subsection{Schwarz Lemma for the unit ball} \begin{lemm}[Schwarz Lemma]\label{schw}
Let $f\in{\mathcal O}(\mathbb{D},B_n(a,R))$ and $r:=|f(0)-a|$. Then $$|f'(0)|\leq \sqrt{R^2-r^2}.$$ \end{lemm}
\subsection{Some estimates of holomorphic functions of ${\mathcal C}^{\alpha}$-class}
Let us recall some theorems about functions holomorphic in $\mathbb{D}$ and continuous in $\overline{\DD}$. Concrete values of the constants $M,K$ can be computed by inspecting the proofs; in fact, all that matters is that they do not depend on the functions involved. \begin{tww}[Hardy, Littlewood, \cite{Gol}, Theorem 3, p. 411]\label{lit1} Let $f\in{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$. Then for $\alpha\in(0,1]$ the following conditions are equivalent:
\begin{eqnarray}\label{47}\exists M>0:\ |f(e^{i\theta})-f(e^{i\theta'})|\leq M|\theta-\theta'|^{\alpha},\ \theta,\theta'\in\mathbb{R};\\
\label{45}\exists K>0:\ |f'(\zeta)|\leq K(1-|\zeta|)^{\alpha-1},\ \zeta\in\mathbb{D}. \end{eqnarray} Moreover, if there is given $M$ satisfying \eqref{47} then $K$ can be chosen as $$2^{\frac{1-3\alpha}{2}}\pi^\alpha M\int_0^\infty\frac{t^\alpha}{1+t^2}dt$$ and if there is given $K$ satisfying \eqref{45} then $M$ can be chosen as $(2/\alpha+1)K$. \end{tww} \begin{tww}[Hardy, Littlewood, \cite{Gol}, Theorem 4, p. 413]\label{lit2}
Let $f\in{\mathcal O}(\mathbb{D})\cap{\mathcal C}(\overline{\DD})$ be such that $$|f(e^{i\theta})-f(e^{i\theta'})|\leq M|\theta-\theta'|^{\alpha},\ \theta,\theta'\in\mathbb{R},$$ for some $\alpha\in(0,1]$ and $M>0$. Then $$|f(\zeta)-f(\zeta')|\leq K|\zeta-\zeta'|^{\alpha},\ \zeta,\zeta'\in\overline{\DD},$$ where $$K:=\max\left\{2^{1-2\alpha}\pi^\alpha M,2^{\frac{3-5\alpha}{2}}\pi^\alpha\alpha^{-1} M\int_0^\infty\frac{t^\alpha}{1+t^2}dt\right\}.$$ \end{tww} \begin{tww}[Privalov, \cite{Gol}, Theorem 5, p. 414]\label{priv}
Let $f\in{\mathcal O}(\mathbb{D})$ be such that $\re f$ extends continuously to $\overline{\DD}$ and $$|\re f(e^{i\theta})-\re f(e^{i\theta'})|\leq M|\theta-\theta'|^\alpha,\ \theta,\theta'\in\mathbb{R},$$ for some $\alpha\in(0,1)$ and $M>0$. Then $f$ extends continuously to $\overline{\DD}$ and $$|f(\zeta)-f(\zeta')|\leq K|\zeta-\zeta'|^\alpha,\ \zeta,\zeta'\in\overline{\DD},$$ where $$K:=\max\left\{2^{1-2\alpha}\pi^\alpha,2^{\frac{3-5\alpha}{2}}\pi^\alpha\alpha^{-1}\int_0^\infty\frac{t^\alpha}{1+t^2}dt\right\}\left(\frac{2}{\alpha}+1\right)2^{\frac{3-3\alpha}{2}}\pi^{\alpha}M\int_0^\infty\frac{t^\alpha}{1+t^2}dt.$$ \end{tww} \subsection{Sobolev space} The Sobolev space $W^{2,2}(\mathbb{T})=W^{2,2}(\mathbb{T},\mathbb{C}^m)$ is the space of functions $f:\mathbb{T}\longrightarrow\mathbb{C}^m$ whose first two derivatives (in the sense of distributions) are in $L^2(\mathbb{T})$ (here we use a standard identification of functions on the unit circle and functions on the interval $[0,2\pi]$). Any such $f$ is $\mathcal C^1$-smooth.
It is a complex Hilbert space with the following scalar product $$\langle f,g\rangle_W:=\langle f,g\rangle_{L}+\langle f',g'\rangle_{L}+\langle f'',g''\rangle_{L},$$
where $$\langle\widetilde f,\widetilde g\rangle_{L}:=\frac{1}{2\pi}\int_0^{2\pi}\langle\widetilde f(e^{it}),\widetilde g(e^{it})\rangle dt.$$ Let $\|\cdot\|_L$, $\|\cdot\|_W$ denote the norms induced by $\langle\cdot,\cdot\rangle_L$ and $\langle\cdot,\cdot\rangle_W$. The following characterization follows directly from Parseval's identity: $$W^{2,2}(\mathbb{T})=\left\{f\in L^2(\mathbb{T}):\sum_{k=-\infty}^{\infty}(1+k^2+k^4)|a_k|^2<\infty\right\},$$ where $a_k\in\mathbb{C}^m$ are the $m$-dimensional Fourier coefficients of $f$, i.e. $$f(\zeta)=\sum_{k=-\infty}^{\infty}a_k\zeta^k,\ \zeta\in\mathbb{T}.$$ More precisely, Parseval's identity gives $$\|f\|_W=\sqrt{\sum_{k=-\infty}^{\infty}(1+k^2+k^4)|a_k|^2},\ f\in W^{2,2}(\mathbb{T}).$$ Note that $W^{2,2}(\mathbb{T})\subset\mathcal{C}^{1/2}(\mathbb{T})\subset\mathcal{C}(\mathbb{T})$ and both inclusions are continuous (in particular, both inclusions are real analytic). Note also that
\begin{equation}\label{67}\|f\|_{\sup}\leq\sum_{k=-\infty}^{\infty}|a_k|\leq\sqrt{\sum_{k=-\infty}^{\infty}\frac{1}{1+k^2}\sum_{k=-\infty}^{\infty}(1+k^2)|a_k|^2}\leq\frac{\pi}{\sqrt 3}\|f\|_W.\end{equation}
Now we want to show that there exists $C>0$ such that $$\|h^\alpha\|_W\leq C^{|\alpha|}\|h_1\|^{\alpha_1}_W\cdots\|h_{2n}\|^{\alpha_{2n}}_W,\quad h\in W^{2,2}(\mathbb{T},\mathbb{C}^n),\,\alpha\in\mathbb{N}_0^{2n}.$$ By induction it suffices to prove that there is $\widetilde C>0$ satisfying $$\|h_1h_2\|_W\leq\widetilde C\|h_1\|_W\|h_2\|_W,\quad h_1,h_2\in W^{2,2}(\mathbb{T},\mathbb{C}).$$ Using \eqref{67}, we estimate $$\|h_1h_2\|^2_W=\|h_1h_2\|^2_L+\|h_1'h_2+h_1h_2'\|^2_L+\|h_1''h_2+2h_1'h_2'+h_1h_2''\|^2_L\leq$$$$\leq C_1\|h_1h_2\|_{\sup}^2+(\|h_1'h_2\|_L+\|h_1h_2'\|_L)^2+(\|h_1''h_2\|_L+\|2h_1'h_2'\|_L+\|h_1h_2''\|_L)^2\leq$$\begin{multline*}\leq C_1\|h_1\|_{\sup}^2\|h_2\|_{\sup}^2+(C_2\|h_1'\|_L\|h_2\|_{\sup}+C_2\|h_1\|_{\sup}\|h_2'\|_L)^2+\\+(C_2\|h_1''\|_L\|h_2\|_{\sup}+C_2\|2h_1'h_2'\|_{\sup}+C_2\|h_1\|_{\sup}\|h_2''\|_L)^2\leq\end{multline*}\begin{multline*}\leq C_3\|h_1\|_W^2\|h_2\|_W^2+(C_4\|h_1\|_W\|h_2\|_W+C_4\|h_1\|_W\|h_2\|_W)^2+\\+(C_4\|h_1\|_W\|h_2\|_W+2C_2\|h_1'\|_{\sup}\|h_2'\|_{\sup}+C_4\|h_1\|_W\|h_2\|_W)^2\leq\end{multline*}$$\leq C_5\|h_1\|_W^2\|h_2\|_W^2+(2C_4\|h_1\|_W\|h_2\|_W+2C_2\|h_1'\|_{\sup}\|h_2'\|_{\sup})^2$$ with constants $C_1,\ldots,C_5$. Expanding $h_j(\zeta)=\sum_{k=-\infty}^{\infty}a^{(j)}_k\zeta^{k}$, $\zeta\in\mathbb{T}$, $j=1,2$, we obtain $$\|h_j'\|_{\sup}\leq\sum_{k=-\infty}^{\infty}|k||a^{(j)}_k|\leq\sqrt{\sum_{k\in\mathbb{Z}_*}\frac{1}{k^2}\sum_{k\in\mathbb{Z}_*}k^4|a^{(j)}_k|^2}\leq\frac{\pi}{\sqrt 3}\|h_j\|_W$$ and finally $\|h_1h_2\|^2_W\leq C_6\|h_1\|_W^2\|h_2\|_W^2$ for some constant $C_6$. \subsection{Matrices} \begin{propp}[Lempert, \cite{Lem2}, Th\'eor\`eme $B$]\label{12} Let $A:\mathbb{T}\longrightarrow\mathbb{C}^{n\times n}$ be a matrix-valued real analytic mapping such that $A(\zeta)$ is self-adjoint and strictly positive for any $\zeta\in\mathbb{T}$. 
Then there exists $H\in{\mathcal O}(\overline{\DD},\mathbb{C}^{(n-1)\times(n-1)})$ such that $\det H\neq 0$ on $\overline{\DD}$ and $HH^*=A$ on $\mathbb{T}$. \end{propp} In \cite{Lem2}, the mapping $H$ was claimed to be real analytic in a neighborhood of $\overline{\DD}$ and holomorphic in $\mathbb{D}$, but it is equivalent to $H\in{\mathcal O}(\overline{\DD})$. Indeed, since $\overline\partial H$ is real analytic near $\overline{\DD}$ and $\overline\partial H=0$ in $\mathbb{D}$, the identity principle for real analytic functions implies $\overline\partial H=0$ in a neighborhood of $\overline{\DD}$. \begin{propp}[\cite{Tad}, Lemma $2.1$]\label{59}
Let $A$ be a complex symmetric $n\times n$ matrix. Then $$\|A\|=\sup\{|z^TAz|:z\in\mathbb{C}^n,\,|z|=1\}.$$ \end{propp}
\textsc{Acknowledgements.} We would like to thank Sylwester Zaj\k ac for helpful discussions. We are also grateful to our friends for their participation in preparing parts of this work.
\end{document}
\begin{document}
\title{Multi-type TASEP in discrete time} \abstract
The TASEP (\textit{totally asymmetric simple exclusion process}) is a basic model for a one-dimensional interacting particle system with non-reversible dynamics. Despite its simplicity, the model shows very rich and interesting behaviour. In this paper we study some aspects of the TASEP in discrete time and compare the results to recently obtained results for the TASEP in continuous time. In particular we focus on stationary distributions for multi-type models, speeds of second-class particles, collision probabilities and the ``speed process''. In discrete time, jump attempts may occur at different sites simultaneously, and the order in which these attempts are processed is important; we consider various natural update rules.
\newline {\scshape Keywords:} TASEP, multi-type, second class particle, speed process. \newline {\scshape AMS 2000 Mathematics Subject Classification:} 82C22, 60K35 \renewcommand{\sectionmark}[1]{}
\section{Introduction}
The TASEP in continuous time was introduced by Spitzer in 1970 (\cite{spitzer}) and can be described as follows. It is a Markov process $\left( \eta_t \right)_{t \geq 0}$ on the state space $E = \left\{ 0,1 \right\}^{\mathbb{Z}}$ where for $x \in \mathbb{Z}$ we have that site $x$ is occupied with a particle at time $t$ iff $\eta_t(x) =1$. Otherwise we say that site $x$ is empty at time $t$. Starting from some initial configuration $\eta_0 \in E$, \textit{updates} occur at each site as a Poisson process of rate 1, independently; when an update occurs at site $x$, if there is a particle at site $x$ and a hole to its right at site $x+1$, the particle jumps from site $x$ to site $x+1$. If site $x$ is empty, or if site $x+1$ is already occupied, the update has no effect.
In the model in discrete time, updates occur with some probability $\beta \in \left( 0,1 \right)$ at each site at each time-step. Since updates occur simultaneously, we now have to choose an order in which to update the sites. We will consider sequential updates (from right to left or from left to right) and sublattice parallel updates (even sites first then odd sites).
For the model in continuous time there is a vast literature. For an introduction and background to the topic see Liggett's books \cite{liggett1} (pp. 361-417) and \cite{liggett2} (pp. 209-316). However, in some physical models of interest it might be more natural to use a discrete time scale. For example, in traffic models we can consider the reaction time of individuals as the smallest time scale (Blythe and Evans \cite{blytheevans}, Chowdhury, Santen and Schadschneider \cite{chowdhurysantenschadschneider} and Helbing \cite{helbing}) and this suggests modelling traffic with a model in discrete time. The ASEP (\textit{asymmetric simple exclusion process}, particles jump to the right at rate $p$ and to the left at rate $q < p$) in discrete time was studied for example in Sch\"{u}tz \cite{schutz}, Hinrichsen \cite{hinrichsen}, Rajewsky, Santen, Schadschneider and Schreckenberg \cite{rajewskysantenschadschneiderschreckenberg} and Blythe and Evans \cite{blytheevans}. However, the behaviour of the models in discrete time has not been analysed in as much depth as the model in continuous time. The papers mentioned above are mainly concerned with the model on a finite interval with open boundary conditions and just one type of particle, and analyse density profiles and stationary distributions.
In this paper we derive further results for the TASEP in discrete time that correspond to recently obtained results for the continuous-time model. These include stationary distributions for multi-type systems (e.g.\ \cite{ferrarimartin, ferrarimartin2}), laws of large numbers for the path of a second class particle and their connection to competition interfaces in competition growth models (e.g.\ \cite{ferraripimentel, ferrarigoncalvesmartin}), and the \textit{TASEP speed process} recently studied by Amir, Angel and Valk\'{o} \cite{amirangelvalko}.
We find that the multi-type invariant distributions for the models with sequential updates are identical to those for the model in continuous time, and do not depend on the parameter $\beta$. This has the surprising consequence that various collision probabilities for different particles in a multi-type process started out of equilibrium, of the sort considered in \cite{ferrarigoncalvesmartin} and \cite{amirangelvalko}, are also independent of $\beta$ and coincide with the values for a continuous-time process. These probabilities correspond to survival probabilities of clusters in the associated multi-type competition growth models. At the moment, the only argument we have for this property is indirect, using the fact that the set of invariant measures is identical for all $\beta$; we do not know of a more direct argument based on local dynamics or couplings.
By contrast, in the case of sublattice-parallel updates, the value of $\beta$ plays an important role in the set of stationary distributions. We extend the queue-based construction of the multi-type stationary distributions from \cite{ferrarimartin, ferrarimartin2} by incorporating queues whose arrival and service rates are different at even and odd times.
The paper is organized as follows. In Section 2 we will give a more formal definition of the model and introduce the multi-type TASEP. The main results are described in Section 3,
including results concerning invariant measures and hydrodynamic limits for single-type models which are required in order to state and understand the multi-type models described above. The proofs or proof sketches for the novel results are found in Section 4. In Section 5 we make some brief remarks about a related discrete-time TASEP model with ``fully parallel updates''.
\section{Model} \subsection{Models in continuous and discrete time} The TASEP in continuous time can be described by its generator $L$. For cylinder functions $f: E \rightarrow \mathbb{R}$ we have $$ Lf(\eta) = \sum_{x \in \mathbb{Z}} \eta(x) \left(1 - \eta(x+1) \right) \left[ f \left( \eta^{x,x+1} \right) - f \left( \eta \right) \right] $$ with the configuration $\eta^{x,x+1}$ defined by $$ \eta^{x,x+1}(y) = \begin{cases}
\eta(y) & y \notin \left\{ x, x+1 \right\} \\
\eta(x+1) & y = x \\
\eta(x) & y = x+1 \end{cases} $$
Following ideas of Harris (1978) \cite{harris} we can use the following graphical construction for the TASEP. Let $\{ \left( P^x_t \right)_{t \geq 0} : x \in \mathbb{Z} \}$ be a family of independent mean $1$ Poisson processes on a common probability space $\left(
\Omega, \mathcal{A}, \mathbb{P} \right)$. For $x \in \mathbb{Z}$ the process $P^x$ marks possible jumps from site $x$: If $P^x_{t} - P^x_{t-} = 1$ and $\eta_{t-}(x)=1$ then the particle at $x$ tries to jump one step to the right at time $t$. The jump is successful if the adjacent site $x+1$ was unoccupied, i.e. $\eta_{t-}(x+1)=0$. Note that for every $t>0$ with positive probability ($e^{-t}$) there was no jump in the Poisson process $P^x$ up to time $t$. Since all the Poisson processes are independent there will be infinitely many sites $x$ such that there were no jumps in $P^x$. These sites separate $\mathbb{Z}$ into intervals of finite length. Since no particle can have crossed the boundaries of these intervals, it is enough to be able to construct the process separately on each of these finite intervals.
We can use the same graphical construction to define the TASEP in discrete time. All we have to do is replace the family of Poisson processes with a family $\{ \left( B^x_n \right)_{n \geq 0} : x \in \mathbb{Z} \}$ of independent Bernoulli processes with parameter $\beta \in \left( 0,1 \right)$ and decide on an update rule for the sites. As mentioned in the introduction we will mainly consider the following three update rules: \begin{itemize}
\item Rule R1: Updates are processed in order from right to left.
\item Rule R2: Updates are processed in order from left to right.
\item Rule R3: All updates at even sites are processed before all updates at odd sites. \end{itemize} To highlight the difference between the three update rules we can look at the following example: \begin{figure}
\caption{Configuration at time $n$ and jump marks}\label{TDP}
\caption{Configuration at time $n+1$ if we apply R1}\label{TDP1}
\caption{Configuration at time $n+1$ if we apply R2}\label{TDP2}
\caption{Configuration at time $n+1$ if we apply R3}\label{TDP3'}
\end{figure}
Say we are at time $n$ in the configuration displayed in Figure \ref{TDP}, with particles at sites $-1$ and $0$ and holes at sites $1$ and $2$. There are jump attempts at the sites marked with a $*$. The resulting configurations under the three different update rules are as shown in Figures \ref{TDP1} - \ref{TDP3'}.
Note that in R2, a single particle may jump several times at the same time-step (but jumps are only possible onto sites that were already empty at the beginning of the time-step). In R1, several neighbouring particles may jump together at the same time-step. There is a natural symmetry between systems R1 and R2: one is transformed into the other by exchanging left and right and exchanging the roles of particle and hole. For the last example (R3) the parity of the sites is important.
In connection with the \textit{speed process} we will also mention the model with odd/even updates (R4). Again this can be obtained from R3 by a simple transformation.
As seen above, each of these models shows a slightly different behaviour, but if we rescale time by a factor $\beta^{-1}$ and let $\beta \rightarrow 0$ then they converge to the model in continuous time. In this sense the model in discrete time is more general than the model in continuous time (which in the following we will denote by R0) since we can recover the model in continuous time from the model in discrete time. In discrete time we can also consider the model with (fully) parallel updates where all sites are updated simultaneously. However, many of the methods developed for the model in continuous time that work in the models R1-R3 fail in this case. We will mention some questions connected to this model in Section 5.
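The effect of the three update rules can be sketched in a few lines of code. The following is a minimal illustration on a finite array (boundary effects ignored), not the infinite-volume construction above; the function name and the encoding of jump marks as a set of site indices are our own:

```python
def step(config, attempts, rule):
    """One discrete time-step of the TASEP on a finite array.

    config: list of 0/1 entries (1 = particle), indexed by site;
    attempts: set of sites carrying a Bernoulli jump mark this step;
    rule: 'R1' (right to left), 'R2' (left to right) or
          'R3' (all even sites first, then all odd sites).
    """
    eta = list(config)
    n = len(eta)
    if rule == 'R1':
        order = range(n - 2, -1, -1)
    elif rule == 'R2':
        order = range(n - 1)
    else:  # 'R3'
        order = ([x for x in range(n - 1) if x % 2 == 0]
                 + [x for x in range(n - 1) if x % 2 == 1])
    for x in order:
        # a marked particle jumps iff its right neighbour is currently empty
        if x in attempts and eta[x] == 1 and eta[x + 1] == 0:
            eta[x], eta[x + 1] = 0, 1
    return eta
```

With two particles followed by two holes and jump marks at every site, the three rules give three different configurations, matching the discussion above: under R1 both particles jump (`[0, 1, 1, 0]`), under R2 the front particle jumps twice (`[1, 0, 0, 1]`), and under R3 only the jump attempted at the odd site succeeds (`[1, 0, 1, 0]`).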
\subsection{Percolation representations}\label{percrep} Both in continuous and in discrete time, one important feature of the TASEP is its connection to \textit{last-passage percolation} and the \textit{corner growth model}. Here we consider a special case which corresponds to a particular initial condition of the TASEP, in which, at time 0, all non-positive sites $x\leq0$ contain a particle and all positive sites $x>0$ are empty. We label the particles from right to left, so that for $i\geq 1$, particle $i$ starts at site $-i+1$ at time 0 (and always remains to the right of particle $i+1$).
For $n,k\geq1$, let $T(n,k)$ be the time that particle $k$ jumps to its right for the $n$th time. Then it is well-known that the variables $T(n,k)$ satisfy the recursions \begin{equation}\label{Tnkrecursion} T(n,k) = \max \left\{ T(n-1,k) , T(n,k-1) \right\} + v(n,k) \qquad n,k \geq 1 \end{equation} with boundary conditions $T(0,k)=T(n,0)=0$ for all $n,k$, where $v(n,k)$ are i.i.d.\ exponential random variables with mean 1. The interpretation is that before particle $k$ can make its $n$th jump, both particle $k$ must have made its $(n-1)$st jump, and particle $k-1$ must have made its $n$th jump. Once these two events have happened, an amount of time which is exponentially distributed with rate 1 passes before particle $k$ makes its $n$th jump; this is the random variable $v(n,k)$.
The random variables $T(n,k)$ have an interpretation in terms of last-passage percolation times. For an increasing path $\pi$ from $z \in {\mathbb{Z}}_+^2$ to $z' \in {\mathbb{Z}}_+^2$, i.e. a path with increments in $\{ (0,1) , (1,0) \}$, define the weight of $\pi$ by $$ S \left( \pi \right) = \sum_{z'' \in \pi} v(z''). $$ Write $\Pi(z,z')$ for the set of all increasing paths from $z$ to $z'$; then \begin{equation} R(z,z') = \max_{\pi \in \Pi(z,z')} S(\pi) \label{R} \end{equation} is the weight of the heaviest path from $z$ to $z'$. Then, via the recursions (\ref{Tnkrecursion}), it is easy to see that $T(n,k)=R((1,1),(n,k))$. In this setting we may interpret the random variable $v(n,k)$ as a weight at the lattice point $(n,k)$.
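The recursion (\ref{Tnkrecursion}), equivalently the last-passage formulation (\ref{R}), translates directly into a dynamic program. A small sketch (names are ours; any nonnegative weights can stand in for the exponential $v(n,k)$):

```python
def passage_times(w):
    """Solve T(n,k) = max(T(n-1,k), T(n,k-1)) + w(n,k) with boundary
    conditions T(0,k) = T(n,0) = 0; the weight w(n,k) is w[n-1][k-1]."""
    N, K = len(w), len(w[0])
    T = [[0] * (K + 1) for _ in range(N + 1)]
    for n in range(1, N + 1):
        for k in range(1, K + 1):
            T[n][k] = max(T[n - 1][k], T[n][k - 1]) + w[n - 1][k - 1]
    return T
```

For the weight matrix `[[1, 2], [3, 4]]` the two increasing paths from $(1,1)$ to $(2,2)$ have weights $1+2+4=7$ and $1+3+4=8$, so `passage_times([[1, 2], [3, 4]])[2][2]` returns the heavier one, `8`.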
We turn to the discrete-time case. Now let $w(n,k)$ be i.i.d.\ random variables whose distribution is geometric with parameter $\beta \in (0,1)$ (by which we mean that $\mathbb{P} \left[ w(n,k)=j \right] = (1-\beta)^{j}\beta$ for $j=0,1,2,\dots$). We define passage-times ${\tilde{T}}(n,k)$ analogous to $T(n,k)$ above by the recursions \begin{equation*} {\tilde{T}}(n,k) = \max \left\{ {\tilde{T}}(n-1,k) , {\tilde{T}}(n,k-1) \right\} + w(n,k) \qquad n,k \geq 1. \end{equation*} We will describe three variants on these recursions, which pertain to the different update rules R1, R2 and R3. As above, $w(n,k)$ will correspond to the delay before particle $k$ makes its $n$th jump, once it is free to do so. For $i=1,2,3$, let $T^{(i)}(n,k)$ be the time of the $n$th jump of particle $k$ under update rule R$i$ with boundary conditions $T^{(i)}(0,k) = T^{(i)}(n,0) = -1$ for all $n,k$.
\noindent \textbf{Rule R1 (updates from right to left)}
\begin{itemize}
\item Recursions:
\begin{align}
T^{(1)}(n,k)
&= \max \left\{ T^{(1)}(n-1,k) +1 , T^{(1)}(n,k-1) \right\} + w(n,k) \notag \\
&= {\tilde{T}}(n,k) + n - 1 \label{T1}
\end{align}
\item In accordance with the updates from right to left, particles $k$ and $k-1$ can make their $n$th jumps
at the same time-step, but two jumps by the same particle must be separated by at least one time-step.
\item This corresponds to a percolation model in which as well as weights $w(n,k)$ at the vertices
$(n,k)\in{\mathbb{Z}}_+^2$, we have weights of size 1 on each horizontal edge between $(n-1,k)$ and $(n,k)$.
\end{itemize} \textbf{Rule R2 (updates from left to right)}
\begin{itemize}
\item Recursions:
\begin{align}
T^{(2)}(n,k)
&= \max \left\{ T^{(2)}(n-1,k) , T^{(2)}(n,k-1) +1 \right\} + w(n,k) \notag \\
&= {\tilde{T}}(n,k) + k - 1 \label{T2}
\end{align}
\item With updates from left to right, a particle may make several jumps at the same time-step,
but at least one time-step must separate the $n$th jump of particles $k-1$ and $k$.
\item In the corresponding percolation model, the weights of size 1 are now on the vertical edges of the lattice. \end{itemize} \textbf{Rule R3 (even updates then odd updates)}
\begin{itemize}
\item Recursions:
\begin{align}
T^{(3)}(n,k)
&= \begin{cases}
\max \left\{ T^{(3)}(n-1,k) + 1, T^{(3)}(n,k-1) + 1 \right\} + w(n,k) \\
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad n+k \text{ even} \\
\max \left\{ T^{(3)}(n-1,k), T^{(3)}(n,k-1) \right\} + w(n,k) \\
\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad n+k \text{ odd}
\end{cases} \notag \\
&= \begin{cases}
{\tilde{T}}(n,k) + \frac{n+k-2}{2} & n+k \text{ even} \label{T3} \\
{\tilde{T}}(n,k) + \frac{n+k-3}{2} & n+k \text{ odd}
\end{cases}
\end{align}
\item Now the edge weights of size 1 are added to all edges with an upper/right point $(n,k)$ such that $n+k$ is even. \end{itemize} For the model in continuous time we have, for $x > 0$, \begin{equation} \lim_{n \rightarrow \infty} \frac{T([xn],n)}{n} = \left( \sqrt{x} + 1 \right)^2 \qquad \text{a.s.} \label{LPPexp} \end{equation} This was essentially first shown in \cite{rost}. Replacing the exponential weights by geometric weights gives \begin{equation} \lim_{n \rightarrow \infty} \frac{{\tilde{T}}([xn],n)}{n} = \frac{(1-\beta)x + 2 \sqrt{(1-\beta)x} + (1-\beta)}{\beta} \qquad \text{a.s.}; \label{LPPgeom} \end{equation} see for example \cite{oconnell}. Using (\ref{T1})-(\ref{T3}), this can easily be used to give similar laws of large numbers for $T^{(i)}([xn],n)$, $i=1,2,3$.
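The shift relations (\ref{T1})--(\ref{T3}) between the $T^{(i)}$ and ${\tilde{T}}$ can be checked mechanically for any nonnegative weights. The sketch below (a generic helper of our own, with the edge weights passed in as a function) verifies them on a random integer weight matrix:

```python
import random

def lpp(w, bdry, edges):
    """T(n,k) = max(T(n-1,k) + e1, T(n,k-1) + e2) + w(n,k), where
    (e1, e2) = edges(n, k); boundary values T(0,.) = T(.,0) = bdry."""
    N, K = len(w), len(w[0])
    T = [[bdry] * (K + 1) for _ in range(N + 1)]
    for n in range(1, N + 1):
        for k in range(1, K + 1):
            e1, e2 = edges(n, k)
            T[n][k] = max(T[n - 1][k] + e1, T[n][k - 1] + e2) + w[n - 1][k - 1]
    return T

random.seed(0)
N = K = 6
w = [[random.randrange(4) for _ in range(K)] for _ in range(N)]

Tt = lpp(w, 0, lambda n, k: (0, 0))                                   # tilde-T
T1 = lpp(w, -1, lambda n, k: (1, 0))                                  # rule R1
T2 = lpp(w, -1, lambda n, k: (0, 1))                                  # rule R2
T3 = lpp(w, -1, lambda n, k: (1, 1) if (n + k) % 2 == 0 else (0, 0))  # rule R3

for n in range(1, N + 1):
    for k in range(1, K + 1):
        assert T1[n][k] == Tt[n][k] + n - 1
        assert T2[n][k] == Tt[n][k] + k - 1
        # shift is (n+k-2)/2 for n+k even and (n+k-3)/2 for n+k odd
        assert T3[n][k] == Tt[n][k] + (n + k - 2 - (n + k) % 2) // 2
```

The identities hold deterministically, weight by weight, so the check does not depend on the random seed.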
We may also view the system as a growth model. For the continuous-time case, let \begin{equation}\label{Gdef} G_t = \left\{ (x,y) \in {\mathbb{Z}}_+^2 : T(x,y) \leq t \right\} \end{equation} be the set of vertices whose passage time is at most $t$. This gives a cluster in ${\mathbb{Z}}_+^2$ which grows over time; there is a 1-1 correspondence relating $G_t$ to the configuration of the TASEP at time $t$; the length of the row at height $k\in{\mathbb{Z}}_+$ is the number of jumps particle $k$ has made in the TASEP. In a similar way we can define $G^{(1)}(t)$, $G^{(2)}(t)$ and $G^{(3)}(t)$ by replacing $T$ in (\ref{Gdef}) by $T^{(1)}$, $T^{(2)}$ or $T^{(3)}$ respectively.
\subsection{Multi-type models} In the \textit{multi-type TASEP} each particle belongs to a class $y \in \mathbb{Z}$ (or more generally $y \in \mathbb{R}$). All particles can still jump into unoccupied sites. When a particle of class $k$ tries to jump into a site which is occupied by a particle of class $j$, two things can happen: if $k \geq j$ the jump is suppressed, and if $k < j$ then the two particles swap. This means that the lower the class of a particle, the higher its priority.
An $N$-type TASEP (containing $N$ classes of particles and holes) can be regarded as a coupling of $N$ ordered single-type TASEPs. If $\eta^1_0$, $\ldots$, $\eta^N_0$ are $N$ TASEP configurations such that $\eta^1_0(x) \leq \ldots \leq \eta^N_0(x)$ for all $x \in \mathbb{Z}$, we can use the same Poisson or Bernoulli processes (this is called \textit{basic coupling}) to get a joint realization of the TASEPs $\eta^1$, $\ldots$, $\eta^N$.
The basic coupling preserves the ordering between the processes (since the updates are processed one by one, this is true for the discrete-time models just as in the continuous-time case). Thus we can define a multi-type process $\xi$ by $$ \xi_t(x) = N + 1 - \sum_{k=1}^N \eta^k_t(x). $$ We write $\xi_t=R\eta_t$. Particles of class $k$ occur at sites $x$ where $\xi(x)=k$. For $k>1$, these sites represent discrepancies between the processes $\eta^{k-1}$ and $\eta^k$. We may regard particles of type $N+1$ as holes. Then $\xi$ behaves like a multi-type TASEP with $N$ classes of particles and holes. See for example \cite{ferrarimartin} for further details.
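The collapse of $N$ ordered configurations into a multi-type configuration is exactly the formula above; a one-line sketch (function name ours):

```python
def multitype(etas):
    """xi(x) = N + 1 - sum_k eta^k(x) for sitewise-ordered configurations
    eta^1 <= ... <= eta^N; the value N + 1 plays the role of a hole."""
    N = len(etas)
    return [N + 1 - sum(site) for site in zip(*etas)]
```

For the ordered triple `[[0, 1, 0], [1, 1, 0], [1, 1, 1]]`, site $0$ is a discrepancy between $\eta^1$ and $\eta^2$ (a second class particle), site $1$ carries a first class particle, and site $2$ a third class particle, so the result is `[2, 1, 3]`.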
\section{Results} We will divide this section into three subsections: The first deals with invariant measures for single- and multi-type models, the second with hydrodynamic limits and the third with multi-type models out of equilibrium.
\subsection{Invariant measures}
\begin{proposition} \label{invariant} For the TASEP in continuous time as well as the discrete-time TASEPs R1 and R2, the Bernoulli product measures $\nu_{\rho}$ with marginals $\rho \in \left[ 0,1 \right]$ are the only translation invariant stationary ergodic measures with constant marginals. For the TASEP R3, the Bernoulli product measures $\mu_{\rho}$ with marginals $\rho \in \left[0,1 \right]$ on even sites and marginals $\frac{\rho (1 - \beta)}{1 - \rho \beta}$ on odd sites are the only stationary ergodic measures that are translation invariant under even shifts. \end{proposition}
\begin{remark} Interestingly, the marginals of the invariant Bernoulli product measures for the models R1 and R2 do not depend on the model parameter $\beta$, and coincide with the invariant measures for the model in continuous time. In the model R3 however, the densities at even and odd sites differ (with a specific relation between them) and the measure depends on the parameter $\beta$.
\end{remark}
\begin{proof} For references see for example Liggett \cite{liggett1} for R0, Blythe and Evans \cite{blytheevans} for R1,R2 and Rajewsky, Santen, Schadschneider and Schreckenberg \cite{rajewskysantenschadschneiderschreckenberg} for R3. The uniqueness statements can be proved following the approach of Mountford and Prabhakar \cite{mountfordprabhakar}.
\end{proof}
We now turn to the construction of invariant measures for systems with more than one class of particles. We use the construction based on a system of queues in tandem developed in \cite{ferrarimartin}, and begin by recalling notation from that paper.
Given two processes $\alpha_1$ and $\alpha_2$, taking values in $\{0,1\}^{\mathbb{Z}}$ and representing the arrival and service processes of a queue respectively, let $D(\alpha_1,\alpha_2)$ be the process of departures from the queue. Now define $D^{(1)}(\alpha)=\alpha$, $D^{(2)}(\alpha_1,\alpha_2)= D(\alpha_1,\alpha_2)$, and recursively $D^{(n)} \left( \alpha_1, \ldots , \alpha_n \right) = D \left( D^{(n-1)} \left( \alpha_1 , \ldots , \alpha_{n-1}
\right) , \alpha_n \right)$ for $n>2$. (The process $D^{(n)}$ can be seen as the departure process from a system of $n-1$ queues in tandem). Now for $\alpha=(\alpha_1,\dots,\alpha_n)$ we can define a system of $n$ ordered single-type TASEP configurations, denoted $T\alpha = \eta = \left( \eta^1 ,
\ldots , \eta^n \right)$ by $ \eta^k = D^{(n-k+1)} \left( \alpha_k ,
\ldots , \alpha_n \right) $. Then the corresponding multi-type configuration $\xi = \xi^{(1,\ldots,n)}$ is given by $\xi=R\eta=RT\alpha$, with $\xi(x) = n + 1 - \sum_{k=1}^n \eta^k(x) $ (as in the last paragraph of Section 2). See Remark \ref{queueexplanation} below for further explanation of the construction.
We can now state the main result.
Here we work with systems with jumps \textit{from right to left}. To return to the systems defined before, one simply takes the space-reversal ($\tilde\eta_t(x)=\eta_t(-x)$). Note that time in the queueing system corresponds to space in the particle system.
\begin{theorem} \label{invariantmulti} If $\alpha = \left( \alpha_1 ,
\ldots , \alpha_n \right)$ has distribution $\nu = \nu_{\rho_1}
\times \ldots \times \nu_{\rho_n}$ ($\mu = \mu_{\rho_1} \times
\ldots \times \mu_{\rho_n}$ respectively for model R3) with $\rho_1
< \ldots < \rho_n$, then the law of $T \alpha = \eta$ is invariant
for the coupled multi-line TASEPs R0,R1 and R2 (R3 respectively) and
the law of $RT\alpha = R \eta = \xi$ is invariant for the multi-type
TASEPs R0, R1 and R2 (R3 respectively) with jumps from right to
left. These are the unique stationary translation invariant
(invariant under even shifts respectively) ergodic measures with
density $\rho_1$ of first class particles (density $\rho_1$ of first
class particles on even sites), density $\rho_2 - \rho_1$ of second
class particles (density $\rho_2 - \rho_1$ of second class particles
on even sites), etc. \end{theorem}
\begin{figure}
\caption{Queues in tandem and multi-type configurations $\xi^{(1,2)}$ and $\xi^{(1,2,3)}$}
\label{fig1}
\end{figure}
\begin{remark}\label{queueexplanation}
The mechanism to construct an invariant distribution as described
above can be depicted in the following way: Take $\alpha_1$ as the
arrival process and $\alpha_2$ as the service process of a
queue. Using $\alpha_1$ and $\alpha_2$ we can construct a process
consisting of the departures from this queue (first class
particles), unused services (second class particles) and
times when no service was offered (holes). We then use this process as the
arrival process for a queue with service process $\alpha_3$ where
first class particles have priority over second class particles: If
there is a service and a first and a second class particle are
waiting in the queue then the first class particle gets served
first. In this way we get a resulting process consisting of
departures of first class particles (first class particles),
departures of second class particles (second class particles),
unused services (third class particles) and holes. Now we can feed
this process into a queue with service process $\alpha_4$ and so
on. If $\alpha = \left( \alpha_1 , \ldots , \alpha_n \right)$ has
distribution $\nu = \nu_{\rho_1} \times \ldots \times \nu_{\rho_n}$
($\mu = \mu_{\rho_1} \times \ldots \times \mu_{\rho_n}$
respectively) then the distribution of the resulting multi-type
configuration is invariant for the multi-type TASEP. See Figure
\ref{fig1} for an illustration. Note that for models R0, R1, R2, the queues involved are simply $M/M/1$ queues in discrete-time; the same is almost true for R3, except that we have different arrival and service rates at odd and even times. \end{remark}
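The first stage of the mechanism in Remark \ref{queueexplanation} can be sketched as follows. This is a simplification of the actual construction: the theorem requires the stationary queue on all of $\mathbb{Z}$, whereas here the queue is started empty on a finite window, and we adopt one common convention (an arrival may be served in the same slot); the names are ours.

```python
def queue_output(arrivals, services):
    """Feed a 0/1 arrival process into a 0/1 service process.

    Output per slot: 1 = departure (first class particle),
    2 = unused service (second class particle), 3 = no service (hole).
    """
    q, out = 0, []
    for a, s in zip(arrivals, services):
        q += a                      # customer joins the queue
        if s == 0:
            out.append(3)           # no service offered: hole
        elif q > 0:
            q -= 1
            out.append(1)           # departure: first class
        else:
            out.append(2)           # unused service: second class
    return out
```

For example, `queue_output([1, 0, 1, 0], [1, 1, 0, 1])` gives `[1, 2, 3, 1]`: two departures, one unused service and one slot with no service offered. Feeding such an output into a further queue, with first class customers served before second class ones, produces the next stage of the construction.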
\begin{remark} We observe again that the invariant measures for the multi-type TASEPs R1 and R2 are the same as the invariant measures for the multi-type TASEP in continuous time and that they do not depend on $\beta$. Since the invariant measures for the single-type TASEP R3 depend on $\beta$ the same is true for the invariant measures for the multi-type TASEP R3. \end{remark}
\subsection{Hydrodynamic limits} We now move to considering systems out of equilibrium. We consider the particular initial configuration given by $$\eta_0(x) = \begin{cases}
1 & x \leq 0 \\
0 & x \geq 1 \end{cases} $$ This ``step'' initial condition corresponds to the corner growth model and to the particular initial conditions for the percolation models described in Section \ref{percrep}. We define the following functions $f_0$, $f_1$, $f_2$ and $f_3$, which will describe the evolving density profile for the continuous-time TASEP and for the TASEPs R1-R3 in discrete time: \begin{align*} &f_0(u) = \mathbbm{1}_{ \left. \left( -\infty , -1 \right. \right]}(u) + \frac{1}{2}(1-u) \cdot \mathbbm{1}_{\left[ -1 , 1 \right]}(u) \\ &f_1(u) = \mathbbm{1}_{ \left. \left( -\infty , -\frac{\beta}{1-\beta} \right. \right]}(u) + \frac{1}{\beta} \left( 1 - \sqrt{\frac{1-\beta}{1-u}} \right) \cdot \mathbbm{1}_{\left[ -\frac{\beta}{1-\beta} , \beta \right]}(u) \\ &f_2(u) = \mathbbm{1}_{ \left. \left( -\infty , - \beta \right. \right]}(u) + \left( 1 - \frac{1}{\beta} \left( 1 - \sqrt{\frac{1-\beta}{1+u}} \right) \right) \cdot \mathbbm{1}_{\left[ - \beta , \frac{\beta}{1-\beta} \right]}(u) = 1 - f_1(-u) \\ &f_3(u) = \mathbbm{1}_{ \left. \left( -\infty , - \frac{2 \beta}{2 - \beta} \right. \right]}(u) + \left( \frac{1}{2} - \frac{u}{\beta} \sqrt{ \frac{1 - \beta}{4 - u^2}} \right) \cdot \mathbbm{1}_{\left[ - \frac{2 \beta}{2 - \beta} , \frac{2 \beta}{2 - \beta} \right]}(u) \\ \text{We } &\text{let } a_3 \text{ be defined by } f_3(u) = \frac{1}{2} \left( a_3(u) + \frac{a_3(u) (1-\beta)}{1-a_3(u)\beta} \right) \text{, so} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \\ &a_3(u) = \mathbbm{1}_{ \left. \left( -\infty , - \frac{2 \beta}{2 - \beta} \right. \right]}(u) + \frac{1}{\beta} \left( 1 - \left( 2 + u \right) \sqrt{\frac{1-\beta}{4-u^2}} \right) \cdot \mathbbm{1}_{\left[ - \frac{2 \beta}{2 - \beta} , \frac{2 \beta}{2 - \beta} \right]}(u) \end{align*}
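The internal consistency of these profiles is easy to check numerically: by definition $f_2(u)=1-f_1(-u)$, and $f_3$ is the average of the even-site density $a_3$ and the corresponding odd-site density $\frac{a_3(1-\beta)}{1-a_3\beta}$. A sketch restricted to the middle (non-constant) pieces of the profiles, with the arbitrary choice $\beta=1/2$:

```python
import math

def f1(u, beta):
    """Middle piece of f_1, valid for -beta/(1-beta) <= u <= beta."""
    return (1 - math.sqrt((1 - beta) / (1 - u))) / beta

def f2(u, beta):
    """Middle piece of f_2, valid for -beta <= u <= beta/(1-beta)."""
    return 1 - (1 - math.sqrt((1 - beta) / (1 + u))) / beta

def a3(u, beta):
    """Even-site density for R3, valid for |u| <= 2*beta/(2-beta)."""
    return (1 - (2 + u) * math.sqrt((1 - beta) / (4 - u * u))) / beta

def f3(u, beta):
    """f_3 recovered as the mean of the even- and odd-site densities."""
    a = a3(u, beta)
    return 0.5 * (a + a * (1 - beta) / (1 - a * beta))

beta = 0.5
for u in (-0.5, -0.2, 0.0, 0.3, 0.6):
    assert abs(f2(u, beta) - (1 - f1(-u, beta))) < 1e-12
    direct = 0.5 - (u / beta) * math.sqrt((1 - beta) / (4 - u * u))
    assert abs(f3(u, beta) - direct) < 1e-12
```

Both identities hold pointwise on the overlap of the stated intervals, so the loop passes for any admissible $u$.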
For $i=0,1,2,3$ let $\tau_i(k,t)$ ($t \in \mathbb{R}_+$ or $t \in \mathbb{N}$ respectively) be the distribution of $(\eta_{t}(k+l), l \in \mathbb{Z})$ in the corresponding model. We have the following result for the TASEP in continuous time and the discrete TASEPs R1-R3.
\begin{theorem} \label{hydro} For any $u \in \mathbb{R}$ and $i=0,1,2$ the
measure $\tau_i \left( \left[ ut \right] , t \right)$ converges
weakly to the Bernoulli product measure with marginals $f_i(u)$ and
$\tau_3 \left( \left[ ut \right] , t \right)$ converges weakly to
the Bernoulli product measure with marginals $a_3(u)$ on even sites
and $\frac{a_3(u) (1-\beta)}{1-a_3(u)\beta}$ on odd
sites. In particular, for any $u \in \mathbb{R}$ the limit
$\lim_{t \rightarrow \infty} \mathbb{E} \left[ \eta_t(k) \right]$
exists and is equal to $f_i(u)$, $i = 0,1,2$ depending on which
model we are considering, whenever $\frac{k}{t}$ tends to $u$, and
$\lim_{t \rightarrow \infty} \mathbb{E} \left[
\eta_t(2[\frac{k}{2}]) \right]$ $= a_3(u)$ and $\lim_{t
\rightarrow \infty} \mathbb{E} \left[ \eta_t(2[\frac{k}{2}] + 1)
\right]= \frac{a_3(u) (1-\beta)}{1-a_3(u)\beta}$ in the model
R3. Furthermore, for $i=0,1,2,3$, the quantities $\frac{1}{t}
\sum_{ut < k < vt} \eta_t(k)$ converge a.s. to the constant value
$\int_{u}^{v} f_i(w) dw$, for $u < v$. \end{theorem}
The first part of the theorem states convergence to local equilibrium: suitably rescaled, the models converge locally to the unique invariant measures from Theorem \ref{invariant}. This implies the other statements of the theorem. However, in the models R0, R1 and R2 we can prove the second part without proving convergence to local equilibrium first, while in the model R3 our proof for the second part requires convergence to local equilibrium. The statements for the model in continuous time were first proved by Rost \cite{rost}. O'Connell \cite{oconnell} used the connection between the TASEP and last-passage percolation to prove an equivalent result about the asymptotic shape of the corner growth model (as defined in Section 2). The parts of Theorem \ref{hydro} concerning the models in discrete time can be proved using exactly the same methods as Rost \cite{rost} and O'Connell \cite{oconnell}. In Section 4 we will outline the proof for the model R3.
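For readers who wish to experiment, here is a minimal simulation sketch of the step initial condition under one natural reading of the rule R3 (even sites updated first, then odd sites, with each attempted jump to an empty right neighbour succeeding with probability $\beta$); the window size, number of time-steps and the seed are illustrative. The assertions only check structural invariants (exclusion and conservation of particles), not the limiting profile.

```python
import random

def simulate_R3(beta, steps, L, seed=0):
    # occupancy on sites -L..L; step initial condition: 1 for x <= 0
    rng = random.Random(seed)
    eta = {x: 1 if x <= 0 else 0 for x in range(-L, L + 1)}
    for _ in range(steps):
        for parity in (0, 1):        # even sites are updated first, then odd sites
            for x in range(-L, L):   # sites of equal parity do not interact in one sweep
                if x % 2 == parity and eta[x] == 1 and eta[x + 1] == 0:
                    if rng.random() < beta:
                        eta[x], eta[x + 1] = 0, 1
    return eta

beta, steps, L = 0.5, 100, 250       # L > 2*steps, so no particle reaches the boundary
eta = simulate_R3(beta, steps, L)
# exclusion: every site holds at most one particle
assert all(v in (0, 1) for v in eta.values())
# particles only jump right and cannot leave the window, so the number is conserved
assert sum(eta.values()) == L + 1
```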
\begin{figure}
\caption{Black: Simulation of the corner growth model (R2) for $\beta=0.5$ and time $n=1000$; Red: Limiting shape given by a rescaled version of $g_2$}
\label{simu1}
\end{figure}
\begin{remark} \label{shape} From the convergence to the density profiles $f_0$, $f_1$, $f_2$ and $f_3$ we can easily deduce a shape theorem for the corner growth model defined in Section 2. The asymptotic shapes in the models R1, R2 and R3 (after rescaling by $t$) are, for example, given by the functions \begin{equation} \label{g1} g_1(x) = \frac{1}{1-\beta} \left( \sqrt{\beta} - \sqrt{x} \right)^2 \end{equation} for $x \in \left[ 0, \beta \right]$, \begin{equation} \label{g2} g_2(x) = \left( \sqrt{\beta} - \sqrt{(1-\beta)x} \right)^2 \end{equation} for $x \in [ 0, \frac{\beta}{1-\beta} ]$, see Figure \ref{simu1} (simulation with $\beta=0.5$ up to time $n=1000$), and \begin{equation} \label{g3} g_3(x) = \frac{1}{(2-\beta)^2} \left( \sqrt{4x(1-\beta)} - \sqrt{4\beta - \beta^2x - 2\beta^2} \right)^2 \end{equation} for $x \in \left[0 , \frac{2\beta}{2-\beta} \right]$. \end{remark}
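The endpoints of these shape functions can be checked against the supports of the corresponding density profiles; a short numerical sketch (the value of $\beta$ is illustrative):

```python
import math

def g1(x, b): return (math.sqrt(b) - math.sqrt(x))**2 / (1 - b)
def g2(x, b): return (math.sqrt(b) - math.sqrt((1 - b) * x))**2
def g3(x, b):
    return (math.sqrt(4 * x * (1 - b))
            - math.sqrt(4 * b - b * b * x - 2 * b * b))**2 / (2 - b)**2

b = 0.5
# axis intercepts of the rescaled shapes, matching the supports of f_1, f_2, f_3
assert abs(g1(0, b) - b / (1 - b)) < 1e-12 and abs(g1(b, b)) < 1e-12
assert abs(g2(0, b) - b) < 1e-12 and abs(g2(b / (1 - b), b)) < 1e-12
assert abs(g3(0, b) - 2 * b / (2 - b)) < 1e-12 and abs(g3(2 * b / (2 - b), b)) < 1e-12
```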
\begin{remark}
We may rescale time as well as space in Theorem \ref{hydro},
and look at the limit $f(u,t)=\lim_{N \rightarrow \infty} \mathbb{E}
\left[ \eta_{Nt} (k) \right]$ for $\lim_{N \rightarrow \infty}
\frac{k}{N} = u$. For the continuous-time model, this density profile is governed by Burgers' equation, \begin{equation} \label{burger}
\frac{\partial f}{\partial t} + \frac{\partial(f(1-f))}{\partial u} = 0; \end{equation} the solution, with initial condition $f(u,0) = \mathbbm{1}_{\left( \left. -\infty , 0 \right] \right.}(u),$ is $$ f(u,t) = \mathbbm{1}_{ \left. \left( -\infty , -t \right. \right]}(u) + \frac{t-u}{2t} \cdot \mathbbm{1}_{\left[ -t , t \right]}(u) $$ The differential equation also governs the evolution of the density profile for more general initial configurations than the ``step'' initial condition. We can get equations analogous to (\ref{burger}) for the models in discrete time. For example, for model R1, $$ f_1(u,t) = \mathbbm{1}_{ \left. \left( -\infty , -\frac{\beta t}{1-\beta} \right. \right]}(u) + \frac{1}{\beta} \left( 1 - \sqrt{\frac{t(1-\beta)}{t-u}} \right) \cdot \mathbbm{1}_{\left[ -\frac{\beta t}{1-\beta} , \beta t \right]}(u) $$ solves \begin{equation} \label{burgerR1} \begin{cases}
\frac{\partial f_1}{\partial t} + \frac{\partial}{\partial u} \frac{\beta f_1(1-f_1)}{1-\beta f_1} = 0 \\
f_1(u,0) = \mathbbm{1}_{\left( \left. -\infty , 0 \right] \right.}(u) \end{cases} \end{equation} Here $\frac{\beta f_1(1-f_1)}{1-\beta f_1} =\sum_{n=1}^{\infty} \beta^n f_1^n(1-f_1)$ is the probability that a particle jumps from a given site to its neighbour in a model in equilibrium with marginal density $f_1$.
\end{remark}
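One can verify numerically that the stated profile $f_1(u,t)$ indeed solves (\ref{burgerR1}) in the interior of the rarefaction fan, using central finite differences; the evaluation point, parameter values and step size below are illustrative.

```python
import math

def f1(u, t, b):
    # rarefaction-fan profile for model R1, valid for -b*t/(1-b) <= u <= b*t
    return (1 - math.sqrt(t * (1 - b) / (t - u))) / b

def flux(f, b):
    # probability of a jump across a bond at equilibrium density f
    return b * f * (1 - f) / (1 - b * f)

b, u, t, h = 0.5, 0.2, 2.0, 1e-5
df_dt = (f1(u, t + h, b) - f1(u, t - h, b)) / (2 * h)
dF_du = (flux(f1(u + h, t, b), b) - flux(f1(u - h, t, b), b)) / (2 * h)
# conservation law: f_t + (beta f(1-f)/(1-beta f))_u = 0, up to discretisation error
assert abs(df_dt + dF_du) < 1e-6
```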
\subsection{Multi-type models out of equilibrium}\label{outof}
In this section we consider multi-type TASEPs $\xi_t \in \mathbb{Z}^{\mathbb{Z}}$ similar to Section 2.3. With the results from Theorem \ref{hydro} we can calculate the distribution of the asymptotic speed of a single second class particle in the TASEP with initial configuration $$\xi_0(x) = \begin{cases}
1 & x \leq -1 \\
2 & x = 0 \\
3 & x \geq 1 \end{cases} $$ As the particles of class 3 are weaker than all other particles in the model, we can think of these particles as holes. So the second-class particle sees only particles to its left and only holes to its right. The second-class particle can be seen as a discrepancy between two copies of the ``step'' initial condition considered in the last section, one of which is shifted by one step to the right. Hence the path of the second-class particle corresponds to the propagation of the discrepancy under the basic coupling. The results for the models in discrete time correspond to the result for the model in continuous time first obtained in Ferrari and Kipnis \cite{ferrarikipnis}, who prove convergence in distribution. In order to prove a.s. convergence we can use the connection to last-passage percolation and the growth model: as in Ferrari and Pimentel \cite{ferraripimentel}, the path of the second class particle corresponds to a competition interface in the growth model which a.s.\ has an asymptotic direction.
\begin{theorem} \label{2CP} For $i=0,1,2,3$ let $X^{(i)}(t)$ denote the position of the second class particle at time $t$ in the corresponding model. Then we have $$ \frac{X^{(i)}(t)}{t} \xrightarrow[t \rightarrow \infty]{a.s.} U^{(i)} $$ for random variables $U^{(i)}$ with distribution functions $1-f_i$ for $i=0,1,2$ and $a_3$ for $i=3$. \end{theorem}
\begin{remark} \label{dirR1R3}
The proofs for convergence in distribution are analogous to those
for the model in continuous time (see for example
\cite{ferrarikipnis}) apart from a small complication in the model
R3 with the \textit{particle-particle coupling}. We will give an
account of this proof in Section 4. For the almost sure
convergence we will explain the construction of the competition
interface for the models in discrete time and prove that the second
class particle almost surely has an asymptotic speed by using results
about semi-infinite geodesics in the percolation models similar to
\cite{ferraripimentel}. An interesting observation will be that the
distribution of the asymptotic direction of the competition interface
corresponding to the path of the second class particle is the same
in the models R1 and R3, see Remark \ref{proofdirR1R3}. However, this does not imply that
the distribution of the speed of the second class particle is the same in the two models. \end{remark}
\begin{remark} In the continuous model the distribution of the asymptotic speed of the second class particle turns out to be uniform on $[-1,1]$. The distributions in the models in discrete time are more complicated. \end{remark}
Using Theorem \ref{2CP} we can define the following so-called speed process: Consider the multi-type TASEP with initial configuration $\xi_0(n) = n$. By Theorem \ref{2CP} we know that each particle has a.s.\ an asymptotic speed $U_n$: Particle $n$ has only stronger particles to its left and only weaker particles (that can be seen as holes) to its right just like the second class particle in the initial configuration of Theorem \ref{2CP} and therefore we can apply Theorem \ref{2CP} to the speed of every particle. We call the process $\left\{ U_n \right\}_{n
\in \mathbb{Z}}$ the speed process and denote its distribution by $\mu$. This process is stationary, and its marginals (for the various models) are given by the distributions in Theorem \ref{2CP}. Furthermore, we write $Y_n(m)$ instead of $\xi_m(n)$ and denote the position of particle $n$ at time $m$ by $X_n(m)$ in order to be consistent with the notation introduced by Amir, Angel and Valk\'{o} \cite{amirangelvalko}. They have studied the speed process for the model in continuous time. Note that both $Y_n$ and $X_n$ can be seen as permutations of the set ${\mathbb{Z}}$, and are inverse to each other. We define $g_i = 1 - f_i$ for $i=0,1,2$ and $g_3 = 1 - a_3$, $g_4 = 1 - \frac{a_3 (1-\beta)}{1-a_3\beta}$ and our first result is the following theorem corresponding to Theorem 1.5 in \cite{amirangelvalko} (note that the labels of particles can now be in $\mathbb{R}$ instead of just $\mathbb{Z}$):
\begin{theorem} \label{duality} For $i=0,1,2$, $\mu^{(i)}$, the
distribution of the speed process in model R$i$, is the unique
stationary ergodic measure for the TASEP R0 whose marginals
have distribution function $g_i$. Correspondingly, for $i=3,4$, $\mu^{(i)}$ is the
unique stationary measure for the TASEP R$j(i)$, which has marginals
distributed according to $g_{i}$ on even sites and $g_{j(i)}$ on odd
sites, where $j(3) = 4$ and $j(4) = 3$. \end{theorem}
For $i=0$ this gives the result from \cite{amirangelvalko} saying
that the distribution of the speed process is itself a stationary
ergodic measure for the TASEP in continuous time (the marginals are
uniform on $\left[ -1,1 \right]$ in this case). The other parts of
Theorem \ref{duality} follow from nice dualities between the models R1 and
R2 and between the models R3 and R4, and the fact that R0, R1 and R2 all have the same set of stationary distributions, whatever the value of $\beta$ (as given in Theorem \ref{invariantmulti}). The dualities are given by the following result:
\begin{theorem} \label{duality2} Consider the starting configuration
$Y_n(0) = n$. For $i=0,1,2,3,4$ and any fixed $m > 0$ the process
$\{ X_{n}^{(i)}(m) \}_{n \in \mathbb{Z}}$ has the same distribution
as the process $\{ Y_{n}^{(j(i))}(m) \}_{n \in \mathbb{Z}}$ where $j(0)=0$, $j(1)=2$, $j(2)=1$, $j(3)=4$ and $j(4)=3$. \end{theorem}
The following theorems provide some explicit results about the joint distributions of the speeds of adjacent particles (and particles $0$ and $2$ in model R3). The first result is Theorem 1.7 in \cite{amirangelvalko}. The remaining theorems and remarks give analogous results for the TASEPs R1, R2 and R3.
\begin{theorem}[TASEP R0] \label{jointcont} The joint distribution of $\left( U_0 , U_1 \right)$, supported on $\left[ -1 , 1 \right]^2$, is $$ s(x,y) dx dy + r(x) \mathbbm{1}_{\left\{ x = y \right\}} dx $$ with $$ s(x,y) = \begin{cases}
\frac{1}{4} & x > y \\
\frac{y-x}{4} & x \leq y \end{cases} \qquad \text{and} \qquad r(x) = \frac{1-x^2}{8} $$ In particular, $\mathbb{P} \left[ U_0 > U_1 \right] = \frac{1}{2}$, $\mathbb{P} \left[ U_0 = U_1 \right] = \frac{1}{6}$ and $\mathbb{P} \left[ U_0 < U_1 \right] = \frac{1}{3}$. \end{theorem}
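The three probabilities follow from the stated densities by integration over $\left[ -1,1 \right]^2$; a midpoint-rule sketch (grid size illustrative):

```python
n = 1000                      # grid resolution for the midpoint rule
h = 2.0 / n
pts = [-1 + (i + 0.5) * h for i in range(n)]  # cell midpoints in [-1, 1]

def s(x, y):
    # absolutely continuous part of the joint density of (U_0, U_1)
    return 0.25 if x > y else (y - x) / 4

p_gt = sum(s(x, y) for x in pts for y in pts if x > y) * h * h
p_lt = sum(s(x, y) for x in pts for y in pts if x < y) * h * h
p_eq = sum((1 - x * x) / 8 for x in pts) * h   # mass of the singular part on {x = y}

assert abs(p_gt - 1 / 2) < 1e-3
assert abs(p_eq - 1 / 6) < 1e-6
assert abs(p_lt - 1 / 3) < 1e-3
```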
\begin{theorem}[TASEP R1] \label{jointR1} The joint distribution of $\left( U_0, U_1 \right)$ has support on $\left[ - \frac{\beta}{1-\beta} , \beta \right]^2$ and is given by $$ s_1(x,y) dx dy + r_1(x) \mathbbm{1}_{\left\{ x = y \right\}} dx $$ with $$ s_1(x,y) = \begin{cases}
\frac{1-\beta}{4\beta^2} \left( 1 - x \right)^{-\frac{3}{2}} \left( 1 - y \right)^{-\frac{3}{2}} = g_1^{'}(x)g_1^{'}(y) & x>y \\
\frac{1-\beta}{2\beta^3} \left( 1 - x \right)^{-\frac{3}{2}} \left( 1 - y \right)^{-\frac{3}{2}} \left( \sqrt{\frac{1-\beta}{1-y}} - \sqrt{\frac{1-\beta}{1-x}} \right) & x \leq y \end{cases} $$ and \begin{center} $r_1(x) = \left( \frac{\sqrt{1-\beta}}{2 \beta^2 \left( 1 - x \right)^{\frac{3}{2}}} \left( 1 - \frac{1}{\beta} \right) + \frac{1-\beta}{2 \beta^2 \left( 1 - x \right)^{2}} \left( \frac{2}{\beta} - 1 \right) - \frac{\sqrt{1-\beta}\left( 1 - \beta \right)}{2 \beta^3 \left( 1 - x \right)^{\frac{5}{2}}} \right)$ \end{center} In particular, $\mathbb{P} \left[ U_0 > U_1 \right] = \frac{1}{2}$, $\mathbb{P} \left[ U_0 = U_1 \right] = \frac{1}{6}$ and $\mathbb{P} \left[ U_0 < U_1 \right] = \frac{1}{3}$. \end{theorem}
\begin{remark} By symmetry, the joint distribution of $\left(U_0,U_1\right)$ in the model R2 is the same as that of $\left(-U_1,-U_0\right)$ in the model R1. \end{remark}
\begin{theorem}[TASEP R3] \label{jointR3a} The joint distribution of $\left( U_0, U_1 \right)$ has support on $\left[ - \frac{2\beta}{2-\beta} , \frac{2\beta}{2-\beta} \right]^2$ and is given by $$ s_2(x,y) dx dy + r_2(x) \mathbbm{1}_{\left\{ x = y \right\}} dx $$ with $$ s_2(x,y) = \begin{cases}
g_3^{'}(x) g_{4}^{'}(y) & x>y \\
g_3^{'}(x) g_{4}^{'}(y) \left( g_{4}(y) - g_{4}(x) \right) \\
\qquad \qquad \cdot \left( 2 - g_{4}(x) \beta - g_{4}(y) \beta \right) \left( \frac{2 + y}{2 - y} \right) & x \leq y \end{cases} $$ and \begin{center} $r_2(x) = \frac{1-\beta}{\beta^3} \left( \frac{2(2-\beta)}{4 - x^2} - \frac{8}{4 - x^2} \sqrt{ \frac{1-\beta}{4-x^2}} \right)$ \end{center} In particular, \footnotesize $$ \mathbb{P} \left[ U_0 > U_1 \right] = \frac{1}{\beta^2} \left( \beta - \left( 1 - \beta \right) \log \left( \frac{1}{1-\beta} \right) \right) $$ $$ \mathbb{P} \left[ U_0 = U_1 \right] = \frac{(1-\beta)(2-\beta)}{\beta^3} \log \left( \frac{1}{1-\beta} \right) - \frac{2(1-\beta)}{\beta^2} $$ \normalsize and \footnotesize $$ \mathbb{P} \left[ U_0 < U_1 \right] = \frac{(1-\beta)(2-\beta)}{\beta^2} + \frac{2(1-\beta)^2}{\beta^3} \log(1-\beta) $$ \normalsize \end{theorem}
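As a consistency check, the three probabilities in the theorem sum to $1$ for every $\beta$, and each is positive; a short numerical sketch (the sampled values of $\beta$ are illustrative):

```python
import math

def probs_R3(b):
    # the three probabilities from the theorem, as functions of beta
    L = math.log(1 / (1 - b))
    p_gt = (b - (1 - b) * L) / b**2
    p_eq = (1 - b) * (2 - b) * L / b**3 - 2 * (1 - b) / b**2
    p_lt = (1 - b) * (2 - b) / b**2 + 2 * (1 - b)**2 * math.log(1 - b) / b**3
    return p_gt, p_eq, p_lt

for b in (0.1, 0.5, 0.9):
    p_gt, p_eq, p_lt = probs_R3(b)
    assert all(p > 0 for p in (p_gt, p_eq, p_lt))
    assert abs(p_gt + p_eq + p_lt - 1) < 1e-9   # the three cases exhaust all outcomes
```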
\begin{remark} Again by symmetry, we have that under rule R3, $(U_1,U_2)$ has the same distribution as $(-U_1,-U_0)$. \end{remark}
\begin{theorem}[TASEP R3] \label{jointR3c} The joint distribution of $\left( U_0, U_2 \right)$ has support on $\left[ - \frac{2\beta}{2-\beta} , \frac{2\beta}{2-\beta} \right]^2$ and is given by $$ s_3(x,y) dx dy + r_3(x) \mathbbm{1}_{\left\{ x = y \right\}} dx $$ with $$ s_3(x,y) = \begin{cases}
g_3^{'}(x) g_3^{'}(y) & x>y \\
g_3^{'}(x) g_3^{'}(y) \left( g_{4}(x)^2 - g_{4}(y)^2 \vphantom{+ \frac{2(g_{4}(y) - 1)(g_{4}(x)^2 - 2g_{4}(x)g_{4}(y) + g_{4}(y) - 1)}{1 - \beta g_{4}(y)} - 1} \right. \\
\qquad - \frac{2(g_{4}(x) - 1)(g_{4}(y)^2 - 2g_{4}(x)g_{4}(y) + g_{4}(x) - 1)}{1 - \beta g_{4}(x)} \\
\qquad \qquad \left. + \frac{2(g_{4}(y) - 1)(g_{4}(x)^2 - 2g_{4}(x)g_{4}(y) + g_{4}(y) - 1)}{1 - \beta g_{4}(y)} - 1 \right) & x \leq y \end{cases} $$ and \begin{center} $r_3(x) = \frac{g_3(x)(1-g_3(x))(1 - g_{4}(x)(1 - g_{4}(x)))}{\frac{(2-x)\beta}{2} \sqrt{\frac{4-x^2}{1-\beta}}}$ \end{center} In particular, \footnotesize $$ \mathbb{P} \left[ U_0 > U_2 \right] = \frac{1}{2} $$ $$ \mathbb{P} \left[ U_0 = U_2 \right] = \frac{1}{6} + \frac{1}{3\beta} - \frac{13}{3\beta^2} + \frac{8}{\beta^3} - \frac{4}{\beta^4} - \frac{\left( 1 - \beta \right)^2}{\beta^3} \left( \log \left( \frac{1}{1-\beta} \right) \right) \left( \frac{2}{\beta} - \frac{4}{\beta^2} \right) $$ \normalsize and \footnotesize $$ \mathbb{P} \left[ U_0 < U_2 \right] = \frac{1}{3} - \frac{1}{3\beta} + \frac{13}{3\beta^2} - \frac{8}{\beta^3} + \frac{4}{\beta^4} + \frac{\left( 1 - \beta \right)^2}{\beta^3} \left( \log \left( \frac{1}{1-\beta} \right) \right) \left( \frac{2}{\beta} - \frac{4}{\beta^2} \right) $$ \normalsize \end{theorem}
We see that in every model the speeds are independent on the event where $U_0 > U_1$ (respectively $U_0 > U_2$). This agrees with the result in continuous
time. The striking result, shown in \cite{amirangelvalko} for the continuous model, that with
positive probability the two continuous random variables $U_0$ and
$U_1$ are equal, holds also in the discrete models.
Interestingly, the probabilities $\mathbb{P} \left[ U_0 > U_1
\right]$, $\mathbb{P} \left[ U_0 = U_1 \right]$ and $\mathbb{P} \left[ U_0 < U_1 \right]$ are the same for models R0, R1 and R2, and do not depend on the parameter $\beta$. This is rather surprising since $\beta$ is not just a scaling parameter (i.e. we cannot produce models with different values of $\beta$ by just applying a time change). In fact, much more is true. From the first part of Theorem \ref{invariantmulti}, we see that, although the marginal distribution of each $U_i$ depends on the model and the value of $\beta$, we can obtain the distribution for either of R1 and R2 and any value of $\beta$ by applying an appropriate monotone function to each entry $U_i$ (see the proof of Theorem \ref{jointR1} for further details). Hence the relative ordering of the variables $U_i$ is not affected by the model or the value of $\beta$.
To go further, consider particles $i$ and $j$ with $i<j$. It is clear that if $U_i<U_j$ then particle $i$ can never overtake particle $j$, while if $U_i>U_j$ then particle $i$ must overtake particle $j$. In \cite{amirangelvalko}, it is shown that for the continuous-time model, with probability 1, if $U_i=U_j$ then particle $i$ overtakes particle $j$. The same result can be shown for the discrete-time models, although the calculations involved in the argument are rather more complicated than those used to prove Theorem 1.14 of \cite{amirangelvalko}, and we omit them here. So, for example, the probability that particle $i$ overtakes particle $j$ is the same for models R0, R1 and R2. Indeed, more completely, one can define an ordering $\prec$ on $\mathbb{Z}$ by $i\prec j$ iff particle $j$ is eventually to the right of particle $i$. Then we have the following result: \begin{cor}\label{orderingcorollary} The ordering $\prec$ has the same distribution for R0, R1 and R2 and for any value of $\beta$. \end{cor} It would certainly be interesting to have a more direct understanding of this property, based for example on couplings or local dynamics, as well as the indirect argument based on the equivalence of multi-type equilibrium distributions.
Overtaking probabilities in the multi-type TASEP can also be interpreted in terms of questions of survival or extinction in multi-type growth models. In \cite{ferrarigoncalvesmartin} a coupling is given between the multi-type TASEP and a three-type version of the corner growth model, under which a given cluster survives for ever if and only if particle 0 never overtakes particle 1. (The extinction of the cluster occurs if two interfaces in the growth model meet -- the paths of these interfaces are related to the paths of the two particles). Different overtaking events in the TASEP can be represented by varying the initial condition in the competition growth model. Using the results above, we find that the survival probabilities in the growth model will remain unchanged if we move from the continuous-time model to natural discrete-time models which correspond to models R1 or R2 in the TASEP. Again, this is certainly not obvious from the local dynamics of the processes.
Unlike in models R1 and R2, in the model R3 the probabilities $\mathbb{P} \left[
U_0 > U_1 \right]$, $\mathbb{P} \left[ U_0 = U_1 \right]$ and $\mathbb{P} \left[ U_0 < U_1 \right]$ do depend on $\beta$ and the behaviour of the model is qualitatively different for different values of $\beta$ (Theorem \ref{jointR3a}): For small $\beta$ we have $\mathbb{P} \left[ U_1 < U_0 \right] > \mathbb{P} \left[ U_1 > U_0 \right]$, but $\mathbb{P} \left[ U_1 < U_0 \right] < \mathbb{P} \left[ U_1 > U_0 \right]$ for large $\beta$ (the transition occurs at $\beta = 0.38860064568\ldots$).
Note however that for $\beta \rightarrow 1$ the probabilities relating $U_0$ and $U_2$ in Theorem \ref{jointR3c} converge to $\frac{1}{2}$, $\frac{1}{6}$ and $\frac{1}{3}$, i.e.\ to the probabilities we get in the continuous model and R1 and R2 for the speeds of particle $0$ and $1$. In a sense, for large $\beta$ the particles $0$ and $2$ in the model R3 behave like adjacent particles in the models R1 and R2. This can heuristically be seen in the following way: We consider the particles in the model R3 (with large $\beta$ close to $1$) starting on even sites. In general, particles starting on an even site will move two steps to the right in each time-step since $\beta$ is large and we update even sites first. If a particle does not jump either during the even or the odd update (which happens with probability $2 \beta (1 - \beta)$) it ends up on an odd site and starts moving left until either \begin{itemize}
\item (A) it hits a weaker particle to the left by which it cannot be overtaken
\item (B) it does not get jumped over either during an even or odd
update because the adjacent particle to the left did not try to
jump \end{itemize} In both cases the particle itself will return to an even site (with high probability) and resume moving to the right. The particle that caused the stop (either because it was weaker or because it did not try to jump) will itself start moving to the left until (A) or (B) happens. Now consider the model R1 with large $\beta$. Most particles will move one step to the right in each time-step, but some particles do not jump and therefore get overtaken until again either (A) or (B) happens (where we remove the part ``either during an even or odd update''). Particles in these two models have different speeds, but the probabilities $\mathbb{P} \left[ U_0 > U_1 \right]$, $\mathbb{P} \left[ U_0 = U_1 \right]$, $\mathbb{P} \left[ U_0 < U_1 \right]$ in R1 and $\mathbb{P} \left[ U_0 > U_2 \right]$, $\mathbb{P} \left[ U_0 =
U_2 \right]$ and $\mathbb{P} \left[ U_0 < U_2 \right]$ in R3 are (almost) the same.
\section{Proofs}
\subsection{Invariant Measures}
As the idea of the proof for Theorem \ref{invariantmulti} is the same for the discrete-time models as for the continuous-time model R0, we will only sketch the proof. When thinking about the model R3, bear in mind that we have different densities on even and odd sites.
\begin{proof}[Proof sketch for Theorem \ref{invariantmulti}:] We can proceed in the same way as in \cite{ferrarimartin}: Using arguments as in \cite{mountfordprabhakar} we can see that for every parameter $\rho \in (0,1)$ there exists an essentially unique function $H_{\rho}$ which maps Bernoulli processes $\omega$ on $\mathbb{Z} \times \mathbb{Z}$ onto stationary and space-ergodic doubly infinite trajectories $(\eta_n)_{n \in \mathbb{Z}}$ of the TASEP governed by $\omega$ with time-marginals $\mu_{\rho}$ (Proposition 8 in \cite{ferrarimartin}). For each $\rho$ and $\omega$ we can construct a set of dual points $\bigtriangleup_{\rho}(\omega)$ which govern the time-reversal of $(\eta_n)_{n \in \mathbb{Z}}$ and again form a Bernoulli process. Also the set of dual points before time $m$ is independent of the configuration $\eta_m$ (Proposition 10 in \cite{ferrarimartin}). We now take $\rho_1 < \ldots < \rho_n$ and let $\alpha_m = (\alpha^1_m, \ldots, \alpha^n_m)$ be the multiline TASEP trajectory governed by $\omega$. This means that $\omega^n = \omega$, $\omega^k = \bigtriangleup_{\rho_{k+1}}(\omega^{k+1})$ and $(\alpha^k_m)_{m \in \mathbb{Z}}$ is the TASEP trajectory governed by $\omega^k$ with density $\rho_k$. Then by the independence of the dual points before time $m$ from the configuration $\eta_m$ we get that the multiline process is stationary with product measure $\nu = \nu_{\rho_1} \times \ldots \times \nu_{\rho_n}$ (Proposition 11 in \cite{ferrarimartin}). As in the paragraph preceding Theorem \ref{invariantmulti} we define $\eta = \left( \eta^1 , \ldots , \eta^n \right)$ by $\eta^k = D^{(n-k+1)} \left( \alpha_k , \ldots , \alpha_n \right)$. Then induction arguments and some case-by-case checking for $n=2$ show that $(\eta^k_n)_{n \in \mathbb{Z}}$ is the TASEP trajectory governed by $\omega$ with particle density $\rho_k$ (Proposition 12 in \cite{ferrarimartin}) and this implies Theorem \ref{invariantmulti}. \end{proof}
\begin{remark} As mentioned in the beginning of this section, for the model R3 we have to think of the $\rho_k$ as densities on even sites and we have to replace the $\nu_{\rho_k}$ by $\mu_{\rho_k}$. \end{remark}
\begin{remark} Inherent in the tandem queue construction for the multi-type stationary distribution in model R3 is a version of Burke's theorem for the queues with different arrival
and service rates on even and odd sites.
Consider a queue with arrival process $A_n$, service process
$S_n$ and departure process $D_n$. Let $A_n$ be a Bernoulli process
with rate $\rho_1 = (\gamma_1, \gamma_2) \in (0,1)^2$, which means
that on even sites arrivals happen with probability $\gamma_1$ and
on odd sites they happen with probability $\gamma_2$. Motivated by
the invariant distributions for the TASEP R3 with just one type of
particles we want $\gamma_1$ and $\gamma_2$ to satisfy \begin{align} \gamma_2 = \frac{\gamma_1 \left( 1 - \beta \right)}{1 - \gamma_1 \beta} \label{rho1} \end{align} where $\beta \in \left( 0,1 \right)$ is the rate at which jumps in the TASEP happen. Analogously, we let $S_n$ be a Bernoulli process with rate $\rho_2 = ( \delta_1, \delta_2 ) \in (0,1)^2$ where \begin{align} \delta_2 = \frac{\delta_1 \left( 1 - \beta \right)}{1 - \delta_1 \beta} \label{rho2} \end{align} (and $\gamma_1 < \delta_1$, $\gamma_2 < \delta_2$). The main observation in Burke's Theorem (see for example \cite{burke}) that shows that arrival and departure process have the same distribution is that the queue length process is reversible and that departures look in the reversed process like arrivals in the original process. Interestingly, it turns out that in the queueing model described above there exists a stationary reversible distribution $\pi$ for the queue length process which is independent of whether we just observed arrivals and services at even sites or at odd sites. $\pi$ is given by $$ \pi(j) = \left( 1 - \frac{\gamma_1 \left( 1 - \delta_1 \right)}{\left( 1 - \gamma_1 \right) \delta_1} \right) \left( \frac{\gamma_1 \left( 1 - \delta_1 \right)}{\left( 1 - \gamma_1 \right) \delta_1} \right)^j \qquad j = 0,1,\ldots $$ and it is reversible because it satisfies the two systems of detailed balance equations $$ \pi(j) \gamma_i \left( 1 - \delta_i \right) = \pi(j+1) \left( 1 - \gamma_i \right) \delta_i \qquad j=0,1,\ldots $$ for $i=1,2$ ($i=1$ corresponds to even sites, $i=2$ corresponds to odd sites). This follows from the relations (\ref{rho1}) and (\ref{rho2}). As in Burke's Theorem it follows from the reversibility of the queue length process that the departure process has the same distribution as the arrival process, i.e. $D_n$ is a Bernoulli process with rate $\rho_1 = (\gamma_1, \gamma_2)$.
Indeed, the multi-type construction yields extensions of this result which give input-output theorems for the priority queues with more than one type of customer. For a discussion of the analogous result in the context of constant arrival and service rates, see for example Section 6 of \cite{ferrarimartin}. \end{remark}
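The detailed balance equations in the preceding remark are easy to check numerically; in the sketch below the rates $\gamma_1$, $\delta_1$ and the parameter $\beta$ are illustrative, with $\gamma_2$ and $\delta_2$ determined by (\ref{rho1}) and (\ref{rho2}).

```python
def odd_rate(r, beta):
    # rate on odd sites induced by rate r on even sites, as in (rho1)/(rho2)
    return r * (1 - beta) / (1 - r * beta)

beta, g1, d1 = 0.4, 0.3, 0.7       # illustrative arrival/service rates on even sites
g2, d2 = odd_rate(g1, beta), odd_rate(d1, beta)
assert g1 < d1 and g2 < d2         # the queue must be subcritical on both sublattices

q = g1 * (1 - d1) / ((1 - g1) * d1)      # geometric parameter of pi
pi = lambda j: (1 - q) * q ** j
for g, d in ((g1, d1), (g2, d2)):        # i=1: even sites, i=2: odd sites
    for j in range(5):
        # detailed balance: pi(j) g_i (1 - d_i) = pi(j+1) (1 - g_i) d_i
        assert abs(pi(j) * g * (1 - d) - pi(j + 1) * (1 - g) * d) < 1e-12
```

The point of the check is that the *same* geometric distribution $\pi$ satisfies detailed balance for both the even-site and the odd-site rates, which is exactly the observation behind the discrete Burke theorem above.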
\subsection{Hydrodynamic limits}
\begin{proof}[Proof outline for Theorem \ref{hydro} for the TASEP R3:] The following Propositions correspond to Propositions 2,3 and 5 in \cite{rost} and the proofs are essentially the same as in \cite{rost}. \begin{proposition} \label{convS3} For all $u \in \mathbb{R}$ the random variables $\frac{1}{n}S(\left[un\right],n) = \frac{1}{n} \sum_{k > [un]} \eta_n(k)$ converge a.s. and in $L^1$ to a constant $h_3(u)$, as $n$ goes to infinity. The function $h_3$ is decreasing, convex; one has $h_3(u) = 0$ for $u > \frac{2\beta}{2-\beta}$ and $h_3(u) = -u$ for $u < - \frac{2\beta}{2-\beta}$. \end{proposition} \begin{proposition} \label{densityprofile3} If $h_3$ is differentiable at $u$, one has $$ \lim_{n \rightarrow \infty} \mathbb{E} \left[ \eta_n \left( 2 \left[ \frac{k}{2} \right] \right) + \eta_n \left( 2 \left[ \frac{k}{2} \right] + 1 \right) \right] = - 2 h_3^{'}(u) $$ whenever $\frac{k}{n}$ tends to $u$. \end{proposition} \begin{proposition} \label{weakconv3} Let $\mu_3(k,n)$ be the distribution of
$(\eta_{n}(k+l), l \in \mathbb{Z})$. If $h_3^{'}(u)$ exists, any
weak limit $\mu_3^*$ of the measures $\mu_3 \left( 2 \left[\frac{un}{2} \right] , n \right)$ for $n \rightarrow \infty$ is
of the form $$ \mu_3^* = \int_0^1 \tau_x \sigma(dx) $$ with some probability $\sigma$ on $\left[ 0, 1 \right]$. $\tau_x$ is the Bernoulli product measure with density $b(x)$ on even sites and density $\frac{b(x) \left( 1 - \beta \right)}{1 - b(x) \beta}$ on odd sites where $b(x)$ is such that the average density is given by $x = \frac{1}{2} \left( b(x) + \frac{b(x) \left( 1 - \beta \right)}{1 - b(x) \beta} \right)$. That means that from Proposition \ref{densityprofile3} it follows that the measure $\sigma$ satisfies $$ \int_0^1 x \sigma(dx) = f_3(u) = - h_3^{'}(u) $$ \end{proposition} We can use the results from O'Connell \cite{oconnell} about last-passage percolation (see (\ref{LPPgeom})) to calculate the function $h_3$: $$ h_3(u) = \begin{cases}
-u & u \leq - \frac{2 \beta}{2 - \beta} \\
\frac{1}{\beta} \left( 2 - \frac{u \beta}{2} - \sqrt{ \left( 4 - u^2 \right) \left( 1 - \beta \right) } \right) - 1 & - \frac{2 \beta}{2 - \beta} \leq u \leq \frac{2 \beta}{2 - \beta} \\
0 & u \geq \frac{2 \beta}{2 - \beta} \end{cases} $$ Since $h_3$ is differentiable we can identify $f_3 = - h_3^{'}$ from Proposition \ref{weakconv3} as $$ f_3(u) = -h_3^{'}(u) = \begin{cases}
1 & u \leq - \frac{2 \beta}{2 - \beta} \\
\frac{1}{2} - \frac{u}{\beta} \sqrt{ \frac{1 - \beta}{4 - u^2}} & - \frac{2 \beta}{2 - \beta} \leq u \leq \frac{2 \beta}{2 - \beta} \\
0 & u \geq \frac{2 \beta}{2 - \beta} \end{cases} $$ As mentioned after Theorem \ref{hydro}, in the models R0, R1 and R2 this is enough to prove the convergence statements at the end of Theorem \ref{hydro}; this is done using the monotonicity of the distribution of $\eta_n(k)$ in $k$. However, due to the different behaviour at odd and even sites, this monotonicity does not hold for the model R3, and the average density $f_3$ does not pick up the density fluctuations between even and odd sites. Hence, without knowing that the model converges to local equilibrium (which would allow us to calculate the function $a_3$ from $f_3$), we cannot prove the last part of Theorem \ref{hydro}. The essential step for proving convergence to local equilibrium is the following proposition, the proof of which is again the same as in \cite{rost} (Proposition 6). Let $\rho(k;F;n) = \mathbb{P} \left[ \eta_n(k + i) = 1 , i \in F \right] $ for a set $F \subset \mathbb{Z}$. \begin{proposition} \label{mu3ineq2} For any finite set $F$ and any $\epsilon > 0$ there exist $\delta > 0$ and $n_0$ such that
$$ \left| \rho\left(2\left[\frac{un}{2} \right];F;n\right)
- \rho\left(2\left[\frac{\overline{u}n}{2} \right];F;n\right) \right| \leq \epsilon $$
for $\left| u - \overline{u} \right| \leq \delta$ and $n \geq n_0$. Also
$$ \left| \rho\left(2 \left[\frac{\overline{u}n}{2} \right];F;n+l\right)
- \rho\left(2 \left[\frac{\overline{u}n}{2} \right];F;n\right) \right| \leq \epsilon $$ for $0 \leq l \leq [\delta n]$, $n \geq n_0$. \end{proposition}
Using this proposition and Jensen's inequality we can prove that the measure $\sigma$ from Proposition \ref{weakconv3} is the unit mass on $f_3(u)$ and since $b(f_3(u)) = a_3(u)$ this implies convergence to local equilibrium (see \cite{rost}, Section 4 for the details). \end{proof}
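The identification $f_3 = -h_3^{'}$ on the middle interval can also be checked by finite differences; the sketch below uses illustrative values of $\beta$, $u$ and the step size.

```python
import math

def h3(u, b):
    # middle branch of h_3, valid on [-2b/(2-b), 2b/(2-b)]
    return (2 - u * b / 2 - math.sqrt((4 - u * u) * (1 - b))) / b - 1

def f3(u, b):
    # middle branch of f_3 on the same interval
    return 0.5 - (u / b) * math.sqrt((1 - b) / (4 - u * u))

b, h = 0.5, 1e-6
for u in (-0.4, 0.0, 0.3):
    dh = (h3(u + h, b) - h3(u - h, b)) / (2 * h)   # central difference for h_3'
    assert abs(-dh - f3(u, b)) < 1e-6              # f_3 = -h_3' up to discretisation error
```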
\subsection{Multi-type models out of equilibrium}
\begin{proof}[Proof of Theorem \ref{2CP}:] First we want to outline the proof for convergence of $\frac{X^{(i)}(t)}{t}$ in distribution (as before we have $t \in \mathbb{R}_+$ or $t \in \mathbb{N}$ depending on the model). This follows the ideas in \cite{ferrarikipnis}. We want to couple two TASEPs with initial configurations $$ \eta_0^1(x) = \begin{cases}
1 & x \leq 0 \\
0 & x \geq 1 \end{cases} \text{ and } \eta_0^2(x) = \begin{cases}
1 & x \leq -1 \\
0 & x \geq 0 \end{cases} $$ in two different ways and calculate the difference $\mathbb{E} \left[ S^1([rt],t) \right] - \mathbb{E} \left[ S^2([rt],t) \right]$ in both couplings. $S^1([rt],t)$ and $S^2([rt],t)$ are the number of particles to the right of $[rt]$ at time $t$ in $\eta^1$ and $\eta^2$. Using basic coupling (i.e. using the same Poisson processes $\{ \left( P^x_t \right)_{t \geq 0} : x \in \mathbb{Z} \}$ or Bernoulli processes $\{ \left( B_n^x \right)_{n \in \mathbb{N}} : x \in \mathbb{Z} \}$ for $\eta^1$ and $\eta^2$) gives \begin{equation} \mathbb{E} \left[ S^1([rt],t) \right] - \mathbb{E} \left[ S^2([rt],t) \right] = \mathbb{P} \left[ X^{(i)}(t) > [rt] \right] \label{BC} \end{equation} since we can interpret the discrepancy between $\eta^1$ and $\eta^2$ as a second class particle (see Section 2.3). This works for all models R0, R1, R2 and R3. The second coupling we want to use is called particle-particle coupling. We label the particles in $\eta^1_0$ and $\eta^2_0$ from right to left and let particles with the same label jump at the same time. Then under this coupling \begin{equation} \mathbb{E} \left[ S^1([rt],t) \right] - \mathbb{E} \left[ S^2([rt],t) \right] = \mathbb{P} \left[ \eta_t^1([rt]+1) = 1 \right] \label{PPC} \end{equation} in the models R0, R1 and R2. By Theorem \ref{hydro} the right hand side of (\ref{PPC}) converges to $f_i(r)$, so together with (\ref{BC}) we have $$ \mathbb{P} \left[ X^{(i)}(t) > [rt] \right] \xrightarrow[t \rightarrow \infty]{} f_i(r) $$ which proves convergence in distribution for $i=0,1,2$. However, in the model R3 we cannot use the particle-particle coupling as before because this is no longer a real coupling. If we let particles with the same label jump at the same time in $\eta^1$ and $\eta^2$ then the dynamics of the $\eta^2$ process are different from those of the $\eta^1$ process: in $\eta^2$ we update odd sites first and then even sites. 
We denote by $\mathbb{E}_{PP}$ the expectation in $\eta^1$ and $\eta^2$ if particles with the same label jump at the same time in $\eta^1$ and $\eta^2$ where we update in such a way that $\eta^1$ is still a TASEP with update rule R3. Then we have $$ \mathbb{E}_{PP} \left[ S^1([rt],t) \right] = \mathbb{E} \left[ S^1([rt],t) \right] $$ Notice that starting the second process with updating even sites does not change anything as there is no particle on an even site with an adjacent empty site in the initial configuration. If we remove the last update of even sites at time $t$ this changes the value of $S^2([rt],t)$ if there is a jump from site $[rt]$ to site $[rt] + 1$ during this update. But there can only be a jump from $[rt]$ to $[rt] + 1$ while updating the even sites if $[rt]$ is even. Let us therefore consider odd sites $2 [ \frac{rt}{2} ] + 1$ first.
We get \begin{equation} \label{EPP} \mathbb{E}_{PP} \left[ S^2 \left( 2 \left[ \frac{rt}{2} \right] + 1,t \right) \right] = \mathbb{E} \left[ S^2\left( 2 \left[ \frac{rt}{2} \right] + 1,t \right) \right] \end{equation}
and we still have $$ \mathbb{E}_{PP} \left[ S^1\left( 2 \left[ \frac{rt}{2} \right] + 1,t \right) \right] - \mathbb{E}_{PP} \left[ S^2\left( 2 \left[ \frac{rt}{2} \right] + 1,t \right) \right] = \mathbb{P} \left[ \eta^1_t \left( 2 \left[ \frac{rt}{2} \right] + 2 \right) = 1 \right] $$ as before. Hence we get
\begin{align*} \mathbb{P} \left[ X^{(3)}(t) > 2 \left[ \frac{rt}{2} \right] + 1 \right] &= \mathbb{E} \left[ S^1\left( 2 \left[ \frac{rt}{2} \right] + 1,t \right) \right] - \mathbb{E} \left[ S^2\left( 2 \left[ \frac{rt}{2} \right] + 1,t \right) \right] \\
&= \mathbb{P} \left[ \eta^1_t \left( 2 \left[ \frac{rt}{2} \right] + 2 \right) = 1 \right] \\ &\xrightarrow[t \rightarrow \infty]{} a_3(r)
\end{align*}
by the convergence to local equilibrium (Theorem \ref{hydro}).
But by monotonicity we have $$ \mathbb{P} \left[ X^{(3)}(t) > 2 \left[ \frac{rt}{2} \right] - 1 \right] \geq \mathbb{P} \left[ X^{(3)}(t) > 2 \left[ \frac{rt}{2} \right] \right] \geq \mathbb{P} \left[ X^{(3)}(t) > 2 \left[ \frac{rt}{2} \right] + 1 \right] $$ Hence $$ \lim_{t \rightarrow \infty} \mathbb{P} \left[ \frac{X^{(3)}(t)}{t} > r \right] = a_3(r) $$
\begin{remark} If the second class particle starts on an odd site (with first class particles to the left and holes/third class particles to the right) then $$ \frac{\widetilde{X}^{(3)}(t)}{t} \xrightarrow[t \rightarrow \infty]{a.s.} \widetilde{U}^{(3)} $$ and $\widetilde{U}^{(3)}$ has distribution function $\frac{a_3(1-\beta)}{1-a_3\beta}$ accordingly. \end{remark}
Now we want to prove almost sure convergence of the speed of a second class particle. Our methods follow the approach in \cite{ferraripimentel}. The idea is to establish a connection between the path of the second class particle and a competition interface in the corresponding growth model. The cluster in the growth model can be divided into two clusters corresponding to events happening to the right and to the left of the second class particle. The interface between these two clusters is called the competition interface. Using results about semi-infinite geodesics it can be shown that this competition interface has almost surely an asymptotic direction. This can be used to deduce that the second class particle has almost surely an asymptotic speed and since we know the distribution of this speed we can also calculate the distribution of the random angle of the competition interface. In the following we will describe this method first for the TASEP in continuous time, as given in \cite{ferraripimentel}, and then explain the adjustments that have to be made in the TASEPs in discrete time. \begin{figure}
\caption{Pair representation of the second class particle: The figures on the left show the system with the pair; the figures on the right show the corresponding multi-type system}
\label{HP2CP}
\end{figure} \\ In order to establish the connection between the second class particle and the competition interface we represent the second class particle as a pair consisting of a hole and a particle. This reduces our multi-type model to a model consisting only of particles and holes and allows us to use the connection to last-passage percolation developed in Section 2.2. We let the pair move as follows: If the particle of the pair jumps to the right the pair moves to the right (A) and if a particle jumps from the left into the hole of the pair the pair moves to the left (B), see Figure \ref{HP2CP}. Then the pair behaves indeed like a second class particle. \begin{figure}
\caption{Pair representation of the second class particle with particles labelled from right to left and holes labelled from left to right}
\label{HP2CP2}
\end{figure} If we label the particles from right to left and the holes from left to right as in Figure \ref{HP2CP2} then we can consider the process
$(\varphi_n)_{n \geq 0}$ giving the labels of the pair after the
$n$th jump involving the pair. We have $\varphi_0 = (1,1)$, as initially the pair consists of hole 1 and particle 1, and
$\varphi_{n+1} - \varphi_n \in \{ (0,1) , (1,0) \}$. $\varphi_n$ satisfies the recursion formula \begin{equation} \varphi_{n+1}= \begin{cases}
\varphi_n + (1,0) & T(\varphi_n + (1,0)) < T(\varphi_n + (0,1)) \\
\varphi_n + (0,1) & T(\varphi_n + (1,0)) > T(\varphi_n + (0,1)) \end{cases} \label{phi} \end{equation} If the first
label increases the second class particle moves one step to the right
and if the second label increases the second class particle moves
one step to the left.
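The recursion (\ref{phi}) is easy to simulate. The following is an illustrative sketch of ours (not from the paper), assuming the continuous-time setting: last-passage times on a finite grid are computed from i.i.d. Exponential(1) vertex weights via the standard dynamic-programming recursion $T(i,j) = w(i,j) + \max(T(i-1,j), T(i,j-1))$, and the competition interface is then traced out step by step. All function names are ours.

```python
import random

def last_passage_times(N, rng=random):
    """Last-passage times on {1,...,N}^2 with i.i.d. Exponential(1)
    vertex weights, via T(i,j) = w(i,j) + max(T(i-1,j), T(i,j-1))."""
    T = {}
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            w = rng.expovariate(1.0)
            T[(i, j)] = w + max(T.get((i - 1, j), 0.0), T.get((i, j - 1), 0.0))
    return T

def competition_interface(T, steps):
    """Trace the interface via the recursion (phi): starting from
    phi_0 = (1,1), step to whichever of the two neighbours
    phi + (1,0), phi + (0,1) has the smaller last-passage time."""
    phi = (1, 1)
    path = [phi]
    for _ in range(steps):
        right, up = (phi[0] + 1, phi[1]), (phi[0], phi[1] + 1)
        phi = right if T[right] < T[up] else up
        path.append(phi)
    return path
```

With continuous weights ties occur with probability zero, which is why the sketch can break the comparison with a plain strict inequality; the discrete-time models below need an explicit tie-breaking rule.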
\begin{remark} Note that by inserting an extra site we changed the parity of the sites to the right of and including the particle of the pair: The first hole to the right of the second class particle is at an odd site while the first hole to the right of the pair is at an even site. The second class particle itself is at an even site while the particle in the pair is at an odd site. This will be important for model R3. \end{remark}
Now we introduce geodesics in the last-passage percolation model in order to define a competition interface in the growth model. For $z, z' \in {\mathbb{Z}}_+^2$ the heaviest increasing path from $z$ to $z'$ (i.e. the path that achieves the maximum in $R(z,z')$) is called the geodesic from $z$ to $z'$. Note that in the model in continuous time geodesics are unique. (In the models in discrete time we will need a rule to break ties to achieve uniqueness of the geodesics). A semi-infinite geodesic starting at $z$ is a path $\pi = (z,z_1,z_2,\ldots)$ in ${\mathbb{Z}}_+^2$ such that for every $z'=z_k, z''=z_l \in \pi$ the geodesic from $z'$ to $z''$ is $(z_k,z_{k+1},z_{k+2},\ldots,z_l) \subset \pi$. For $\alpha \in \left[0,90\text{\textdegree}\right]$ an $\alpha$-geodesic is a semi-infinite geodesic with direction $\alpha$. Now we colour every block $Q(i,j) = \left( \left. i-1, i \right] \right. \times \left( \left. j-1, j \right] \right.$ in
$(\mathbb{R}_+)^2 \backslash [0,1]^2$ either red if the geodesic
from $(1,1)$ to $(i,j)$ passes through $(1,2)$ or blue if it
passes through $(2,1)$. The interface between these clusters is
called the competition interface and an induction argument
together with the recursion (\ref{phi}) shows that it is given by
the process $\varphi_n$, see Proposition 3 in
\cite{ferraripimentel}. Using results about the existence and
uniqueness of $\alpha$-geodesics (Propositions 7,8 and 9 in
\cite{ferraripimentel}) it can be shown that $\varphi_n$ has
almost surely an asymptotic direction and we can conclude that the second
class particle has almost surely an asymptotic speed (Propositions
4 and 5 in \cite{ferraripimentel}). Now we want to apply these
methods to the discrete time TASEPs R1,R2 and R3.
\subsubsection*{R1} The last-passage percolation model for rule R1 was described in Section \ref{percrep}, and in particular just after (\ref{T1}). Now to adapt to the initial configuration $$\eta_0(x) = \begin{cases}
1 & x \leq -1 \\
0 & x = 0 \\
1 & x = 1 \\
0 & x \geq 2 \end{cases} $$ we have to remove the `$+1$' weight from the edge between $(1,1)$ and $(2,1)$ and the weight from the vertex $(1,1)$, since in the initial configuration we are considering, particle 1 has already jumped over hole 1. We colour $Q(1,2)$ red, $Q(2,1)$ blue and every other block $Q(i,j)$ in $(\mathbb{R}_+)^2 \backslash \left[0,1\right]^2$ either red if $$ \widetilde{R}^{(1)}((1,2),(i,j)) > \widetilde{R}^{(1)}((2,1),(i,j)) $$ and blue if $$ \widetilde{R}^{(1)}((1,2),(i,j)) \leq \widetilde{R}^{(1)}((2,1),(i,j)) $$ (recall the definition of $R$ in (\ref{R}); $\widetilde{R}^{(1)}$ is the corresponding quantity in model R1 with the changes mentioned above). This implies that if $Q(i,j+1)$ is red and $Q(i+1,j)$ is blue, then $Q(i+1,j+1)$ is red iff \begin{equation} \widetilde{T}^{(1)}(i,j+1) \geq \widetilde{T}^{(1)}(i+1,j) \label{T1in1} \end{equation} and blue iff \begin{equation} \widetilde{T}^{(1)}(i,j+1) < \widetilde{T}^{(1)}(i+1,j) \label{T1in2} \end{equation} where $\widetilde{T}^{(1)}$ is defined as in (\ref{T1}) but now in the model R1 with the modifications described above. The line $\varphi_n^{(1)}$ separating the two clusters is again called the competition interface and due to the way we defined the red and blue clusters we have again that the competition interface corresponds to the path of the second class particle. We can rewrite (\ref{T1in1}) and (\ref{T1in2}) in terms of $\varphi_n^{(1)}$ as \begin{equation} \label{phi1} \varphi_{n+1}^{(1)}= \begin{cases}
\varphi_n^{(1)} + (1,0) & \widetilde{T}^{(1)}(\varphi_n^{(1)} + (1,0)) \leq \widetilde{T}^{(1)}(\varphi_n^{(1)} + (0,1)) \\
\varphi_n^{(1)} + (0,1) & \widetilde{T}^{(1)}(\varphi_n^{(1)} + (1,0)) > \widetilde{T}^{(1)}(\varphi_n^{(1)} + (0,1)) \end{cases} \end{equation} Note the similarity of (\ref{phi}) with (\ref{phi1}); the difference comes from the fact that in the models in discrete time ties are possible. As in \cite{ferraripimentel} we want to prove that this competition interface has almost surely an asymptotic direction. First we note that the results in \cite{ferraripimentel} about geodesics still hold with geometric weights attached to the vertices instead of exponential weights. Secondly, the results still hold in a model where `$+1$' weights are attached to \textit{every} horizontal edge in ${\mathbb{Z}}_+^2$ as these weights do not affect the geodesics (they just give a constant weight to every path from $z$ to $z'$, $z,z' \in {\mathbb{Z}}_+^2$). The only difference in our case is that there is no weight attached to the edge from $(1,1)$ to $(2,1)$. But this local change does not affect the almost sure statements in Propositions 7,8 and 9 in \cite{ferraripimentel}. We conclude that the competition interface in our model almost surely has an asymptotic direction and it follows from arguments analogous to the ones in \cite{ferraripimentel} that the speed of the second class particle converges almost surely.
\subsubsection*{R2} The result for model R2 follows again from the symmetry between R1 and R2.
\subsubsection*{R3} For the purpose of this section it is convenient to change the last-passage percolation model corresponding to model R3 a little bit. Instead of updating the even sites and then the odd sites during a single time-step, we separate the two batches of updates by a half time-step. Then the percolation model corresponding to the initial configuration $$\eta_0(x) = \begin{cases}
1 & x \leq 0 \\
0 & x \geq 1 \end{cases} $$ has no weights attached to the edges, while the weights at the vertices are geometric with an extra $\frac{1}{2}$ added. As noticed in the beginning of this section, introducing an extra site into this model changes the parity of some sites. In order to deal with this we attach a single `$+\frac{1}{2}$' weight to the edge from $(1,1)$ to $(1,2)$ in the percolation model corresponding to the initial configuration $$\eta_0(x) = \begin{cases}
1 & x \leq -1 \\
0 & x = 0 \\
1 & x = 1 \\
0 & x \geq 2 \end{cases} $$ (where we also remove the weight from the vertex $(1,1)$). This ensures that we apply even/odd updates to the left of the particle of the pair and odd/even updates to the right. Then the movement of the pair $\varphi_n^{(3)}$ corresponds to the movement of the second class particle in a model with even/odd updates.
Again we colour $Q(1,2)$ red, $Q(2,1)$ blue and now every other block $Q(i,j)$ in $(\mathbb{R}_+)^2 \backslash \left[0,1\right]^2$ either red if \begin{equation} \label{R31} \widetilde{R}^{(3)}((1,2),(i,j)) \geq \widetilde{R}^{(3)}((2,1),(i,j)) \end{equation} and blue if \begin{equation} \label{R32} \widetilde{R}^{(3)}((1,2),(i,j)) < \widetilde{R}^{(3)}((2,1),(i,j)) \end{equation} The interface between these clusters is again the competition interface and is given by $\varphi_n^{(3)}$. In terms of $\varphi_n^{(3)}$ we get from (\ref{R31}) and (\ref{R32}) that \begin{equation} \label{phi3} \varphi_{n+1}^{(3)}= \begin{cases}
\varphi_n^{(3)} + (1,0) & \widetilde{T}^{(3)}(\varphi_n^{(3)} + (1,0)) < \widetilde{T}^{(3)}(\varphi_n^{(3)} + (0,1)) \\
\varphi_n^{(3)} + (0,1) & \widetilde{T}^{(3)}(\varphi_n^{(3)} + (1,0)) > \widetilde{T}^{(3)}(\varphi_n^{(3)} + (0,1)) \end{cases} \end{equation} $\widetilde{R}^{(3)}$ and $\widetilde{T}^{(3)}$ are defined as in (\ref{R}) and (\ref{T3}) but now we are considering the model R3 with the modifications described above. Due to the additional `$+\frac{1}{2}$' weight on the edge from $(1,1)$ to $(1,2)$ the competition interface never encounters any ties in this model, i.e. $\widetilde{T}^{(3)}(\varphi_n^{(3)} + (1,0)) = \widetilde{T}^{(3)}(\varphi_n^{(3)} + (0,1))$ does not occur. As in R1, the local change given by the extra `$+\frac{1}{2}$' edge-weight in this model does not change the almost sure statements in Propositions 7,8 and 9 in \cite{ferraripimentel}. The rest of the argument is the same as for model R1.
\end{proof}
\begin{remark} \label{proofdirR1R3} We can use the known distributions of the speeds of the second class particles (see Theorem \ref{2CP}) together with the hydrodynamic limit results (see Theorem \ref{hydro} and Remark \ref{shape}) to prove the interesting result mentioned in Remark \ref{dirR1R3} that the distributions of the asymptotic direction of the competition interfaces in the models R1 and R3 are the same: \begin{proof} The proof exploits the connections made in Proposition 5 in \cite{ferraripimentel}. Similar to \cite{ferraripimentel} we let $\psi^{(1)}_t = (I^{(1)}(t),$ $J^{(1)}(t))$, $\psi^{(3)}_t = (I^{(3)}(t),J^{(3)}(t))$ be the position of the competition interface (i.e. the labels of the pair) at time $t$ and denote by $\theta^{(1)}, \theta^{(3)} \in \left[0,90\text{\textdegree}\right]$ the random angle of the competition interface for model R1 and R3, i.e.
$$ \lim_{t \rightarrow \infty} \frac{\psi^{(i)}_t}{\left| \psi^{(i)}_t \right|} = e^{i\theta^{(i)}} = \left(\cos(\theta^{(i)}), \sin(\theta^{(i)})\right) \text{ for } i=1,3 $$ By the arguments in the previous sections we know that these limits exist almost surely. Using the asymptotic shape of the growth models given in Remark \ref{shape} we also have $$ \lim_{t \rightarrow \infty} \frac{\psi^{(i)}_t}{t} = j^{(i)}(\theta^{(i)})e^{i\theta^{(i)}} \text{ a.s. for } i=1,3 $$ where $j^{(i)}(\theta^{(i)})$ is the distance from the origin to the intersection of the line given by $\{ (u,v) \in \mathbb{R}_+^2 : \tan(\theta^{(i)}) = \frac{v}{u} \}$ and the asymptotic \textit{growth interface} $(x,g_i(x))$ for $i=1,3$. With the formulas for $g_1$ and $g_3$ in (\ref{g1}) and (\ref{g3}) we can calculate $j^{(1)}(\theta^{(1)})$ and $j^{(3)}(\theta^{(3)})$ explicitly: $$ j^{(1)}(\theta^{(1)}) = \frac{\beta}{\left( \sqrt{ \left( 1-\beta \right) \sin(\theta^{(1)})} + \sqrt{\cos(\theta^{(1)})} \right)^2} $$ and $$ j^{(3)}(\theta^{(3)}) = \frac{2\beta \left( 2-\beta \right)}{\left( \left( 2-\beta \right) \sqrt{\sin(\theta^{(3)})} + 2\sqrt{\left( 1-\beta \right) \cos(\theta^{(3)})} \right)^2 + \beta^2 \cos(\theta^{(3)})} $$ By the connection between the path of the second class particle and the competition interface ($X^{(i)}(t) = I^{(i)}(t) - J^{(i)}(t)$, $i=1,3$, since the second class particle moves to the right iff the first label of the pair increases and to the left iff the second label of the pair increases) it follows that $$ \lim_{t \rightarrow \infty} \frac{X^{(i)}(t)}{t} = \lim_{t \rightarrow \infty} \frac{I^{(i)}(t) - J^{(i)}(t)}{t} = j^{(i)}(\theta^{(i)}) \left( \cos(\theta^{(i)}) - \sin(\theta^{(i)}) \right) \stackrel{\mathrm{def}}= l_i(\theta^{(i)}) $$ almost surely for $i=1,3$. 
Using the known distributions of the speed of the second class particle in model R1 and R3 (see Theorem \ref{2CP}) we can calculate $$ \mathbb{P}\left[ \theta^{(1)} \leq \alpha \right] = \mathbb{P}\left[ l_1(\theta^{(1)}) \geq l_1(\alpha) \right] = f_1(l_1(\alpha)) $$ and $$ \mathbb{P}\left[ \theta^{(3)} \leq \alpha \right] = \mathbb{P}\left[ l_3(\theta^{(3)}) \geq l_3(\alpha) \right] = a_3(l_3(\alpha)) $$ A calculation shows that $$ f_1(l_1(\alpha)) = a_3(l_3(\alpha)) $$ \end{proof} \begin{figure}
\caption{Distribution function for the random angle of the competition interface in the models R1 and R3 (left) and R2 (right) for $\beta=0.1$ (red), $\beta=0.5$ (yellow) and $\beta=0.9$ (blue)}
\end{figure} \end{remark}
\begin{proof}[Proof of Theorem \ref{duality2}:] Let the operators $\sigma_n$ be defined by $$ \sigma_n Y = \begin{cases}
\tau_n Y & Y_n < Y_{n+1} \\
Y & \text{otherwise} \end{cases} $$ where $\tau_n$ exchanges $Y_n$ and $Y_{n+1}$ in $Y$. The proof of the following general Lemma is the same as in \cite{amirangelvalko} (Lemma 3.1). \begin{lemma}\label{sigma} For a fixed sequence $i_1,\ldots,i_k$ in $\mathbb{Z}$ we have $$ \sigma_{i_k} \cdots \sigma_{i_1} \overset{d}{=} \left( \sigma_{i_1} \cdots \sigma_{i_k} \right)^{-1} $$ \end{lemma} We have the relation $Y_{X_n(m)}(m) = n$ between the $X$ and the $Y$ process. Since $\beta < 1$ each site has a positive probability that no jump occurs at that site at any given time. At each time-step these sites separate $\mathbb{Z}$ into finite intervals and the events on these intervals during that time-step are independent. In the model with updates from right to left we apply a finite sequence of operators $\sigma_{i_1} \cdots \sigma_{i_k}$ where $i_1,\ldots,i_k$ is an increasing sequence (since we update from right to left). Lemma \ref{sigma} states that applying $\sigma_{i_1} \cdots \sigma_{i_k}$ is the same (in distribution) as applying $\sigma_{i_k} \cdots \sigma_{i_1}$ (i.e. updating from left to right) and taking the inverse permutation. But given the configuration $Y_n(0)$ and performing the updates from left to right we get $Y_{n}^{(2)}(1)$ and this is the inverse permutation of $X_{n}^{(2)}(1)$. So $$ Y_{n}^{(1)}(1) \overset{d}{=} X_{n}^{(2)}(1)$$ Inductively we get that this holds for all $m>0$. The other three parts follow in the same way. In the case with even/odd updates, $i_1,\ldots,i_k$ is a sequence such that there exists a $1 \leq j \leq k+1$ such that $i_l$ is odd for $l < j$ and $i_l$ is even for $l \geq j$. \end{proof}
In order to prove Theorem \ref{duality} we will need the following Lemma. We state it here for the model R1, but analogous results hold for the other models as well. The Lemma corresponds to Lemma 4.1 in \cite{amirangelvalko}. \begin{lemma}\label{speedshift} Consider two TASEPs, $Y^{(1)}$ and $\widetilde{Y}^{(1)}$, as functions of the same Bernoulli points on $\mathbb{Z} \times \mathbb{N}$ (i.e. under basic coupling). We set $Y_{n}^{(1)}(0) = n$ and $\widetilde{Y}_{n}^{(1)}(0) = \sigma_j \cdots \sigma_{j+k} Y_{n}^{(1)}(0)$ for some $j \in \mathbb{Z}$ and $k \geq 0$. Let $\{ U_{n}^{(1)} \}$ be the \textit{speed process} of $Y^{(1)}$ and $\{ \widetilde{U}_{n}^{(1)} \}$ be the \textit{speed process} of $\widetilde{Y}^{(1)}$. Then $\widetilde{U}^{(1)} = \sigma_{j+k} \cdots \sigma_j U^{(1)}$. \end{lemma} \begin{proof} Every particle other than $\{ j, \ldots , j+k+1 \}$ is either stronger than all particles $\{ j, \ldots , j+k+1 \}$ or weaker than all particles $\{ j, \ldots , j+k+1 \}$. Any swap of a particle other than $\{ j, \ldots , j+k+1 \}$ will happen in both $Y^{(1)}$ and $\widetilde{Y}^{(1)}$. So for any $i \notin \{ j, \ldots , j+k+1 \}$ we have $X_{i}^{(1)}(m) = \widetilde{X}_{i}^{(1)}(m)$ for all $m \geq 0$ and therefore $U_{i}^{(1)} = \widetilde{U}_{i}^{(1)}$ for those $i$. In $\widetilde{Y}^{(1)}$ particle $j+k+1$ is always to the left of all other particles $\left\{ j, \ldots , j+k \right\}$. So $\widetilde{U}_{j+k+1}^{(1)} = \min \{ U_{j}^{(1)}, \ldots , U_{j+k+1}^{(1)} \}$. 
Define $j \leq r \leq j+k+1$ by $$ \min \left\{ U_{j}^{(1)} \ldots U_{j+k+1}^{(1)} \right\} = U_{r}^{(1)} $$ Then for $i = r, r+1, \ldots , j+k$ we have $\widetilde{U}_{i}^{(1)} = U_{i+1}^{(1)}$ and for $i=j,j+1, \ldots , r-1$ \begin{align*} &\widetilde{U}_{j}^{(1)} = \max \left\{ U_{j}^{(1)} , U_{j+1}^{(1)} \right\} \\ &\widetilde{U}_{j+1}^{(1)} = \max \left\{ \min \left\{ U_{j}^{(1)} , U_{j+1}^{(1)} \right\} , U_{j+2}^{(1)} \right\} \\ &\widetilde{U}_{j+2}^{(1)} = \max \left\{ \min \left\{ \min \left\{ U_{j}^{(1)} , U_{j+1}^{(1)} \right\} , U_{j+2}^{(1)} \right\} , U_{j+3}^{(1)} \right\} \\ &\ldots \end{align*} This shows that $\widetilde{U}^{(1)} = \sigma_{j+k} \cdots \sigma_j U^{(1)}$. \end{proof}
\begin{proof}[Proof of Theorem \ref{duality}:] Consider a Bernoulli process on $\mathbb{Z} \times \mathbb{Z}$. Half of this process ($\mathbb{Z} \times \mathbb{N}$) is used to construct the TASEP $Y^{(1)}$. For any $l \in \mathbb{Z}$ we can translate the Bernoulli process by $l$ (i.e. take points of the form $(n,m+l)$ where $(n,m)$ is in the original process). We can restrict this translated process to $\mathbb{Z} \times \mathbb{N}$ and use this restricted process to construct another TASEP. Let $U^{(1)}(l) = \{ U_{n}^{(1)}(l) \}$ be the speed process for the TASEP that has been constructed using the Bernoulli process translated by $l$. For every $l$, $U^{(1)}(l)$ has distribution $\mu^{(1)}$. So we have to show that $\{ U_{n}^{(1)}(l) \}$ behaves like a TASEP with updates from left to right. In order to do this we look at a transition $\{ U_{n}^{(1)}(l) \} \rightarrow \{ U_{n}^{(1)}(l+1) \}$. The effect on the original TASEP of changing from translating by $l$ to translating by $l+1$ is that some finite sequences of $\sigma$ operators of the form $\sigma_j \cdots \sigma_{j+k}$ are added to be applied to the TASEP before the original sequence of operations. At each location a $\sigma$ operator is added with probability $\beta$. Lemma \ref{speedshift} shows that applying each of these finite sequences has the same effect on the speeds as applying each sequence in reverse order to the speed process. This shows that $\{ U_{n}^{(1)}(l) \}$ behaves like a TASEP with updates from left to right and therefore the measure $\mu^{(1)}$ is stationary for the TASEP with updates from left to right. \\ The proofs for the other three models are essentially the same (using the appropriate versions of Lemma \ref{speedshift}). \end{proof}
The following Lemma will allow us to do the explicit calculations for the joint densities of the speeds in Theorems \ref{jointR1} - \ref{jointR3c} using the connection between queueing models and the invariant measures introduced in Theorem \ref{invariantmulti}. Here $D(g_i)$ is the domain of the distribution function $g_i$ ($i=0,1,2,3,4$).
\begin{lemma} \label{queue} If $F: D(g_i) \rightarrow \{ 1 , \ldots , N \}$ is non-decreasing then for the TASEP $\{ Y_{n}^{(i)}(m) \}_{n,m}$ the distribution of $\{ F(U_{n}^{(i)}) \}_{n}$ is the unique ergodic stationary measure of the multi-type TASEP model R$j(i)$ with types $\{ 1 , \ldots , N \}$ and densities $\lambda_l = g_i ( \sup \{ F^{-1} ( l ) \} ) - g_i ( \inf \{ F^{-1} ( l ) \} )$ for type $l=1,\ldots,N$ ($j$ as in Theorem \ref{duality}). \end{lemma}
\begin{proof} The proof is analogous to the proof of Corollary 5.4 in \cite{amirangelvalko}. \end{proof}
With the help of this Lemma we can do all the calculations needed for the results in Theorems \ref{jointR1} - \ref{jointR3c}. Depending on the model we are considering, we will choose the function $F$ from Lemma \ref{queue} to be $F_i(u) = \min \{ j : g_i(u) < x_j \}$ for some increasing sequence $(x_1,\ldots,x_{N-1})$ in $[0,1]$. Then we put $V_n = F(U_n)$ and the distribution of the $V$s is given by the invariant measure for the multi-type models and can be calculated explicitly using the queueing representation.
\begin{proof}[Proof of Theorem \ref{jointR1}:] By Lemma \ref{queue} we have with $N=3$ that $F_1(U_n)$ is distributed according to the unique ergodic stationary measure of a 3-type TASEP with updates from left to right and densities \begin{align*} &\lambda_1 = \mathbb{P} \left[ x_0 < g_1(U_n) < x_1 \right] = x_1 - x_0 = x_1 \\ &\lambda_2 = \mathbb{P} \left[ x_1 < g_1(U_n) < x_2 \right] = x_2 - x_1 \\ &\lambda_3 = \mathbb{P} \left[ x_2 < g_1(U_n) < x_3 \right] = x_3 - x_2 = 1 - x_2 \end{align*} $\lambda_1$ is the density of first class particles, $\lambda_2$ is the density of second class particles and $\lambda_3$ is the density of third class particles (or holes). Recall that $V_n = F_1(U_n)$. Using the queueing representation for the unique ergodic stationary measure of a 3-type TASEP we can calculate the joint distribution $\left( V_0, V_1 \right)$ explicitly. This distribution depends on the $x_i$. Taking suitable derivatives with respect to these $x_i$ we get the density of the corresponding speeds. We have for example \begin{align*} \mathbb{P} \left[ U_0 < g_1^{-1}(x_1) < U_1 < g_1^{-1}(x_2) \right] &= \mathbb{P} \left[ V_0 = 1, V_1 = 2 \right] \\ &= x_1 x_2 \left( x_2 - x_1 \right) \end{align*} since the probability of having a second class particle at position 1 is $x_2 - x_1$ (since this is the density of second class particles) and to have a first class particle at position 0 we then have to have an arrival (probability $x_1$) and a service (probability $x_2$) because having a second class particle at site 1 means that the queue was empty at that time (so in order to have a departure at site 0 we need an arrival at site 0). Remember that in Theorem \ref{invariantmulti} the particles in the TASEP jumped from the right to the left. If we want to consider the TASEP with jumps from the left to the right (and that is what we are doing here) we have to read the queues from right to left. 
So position 1 comes before position 0 and the probability of having a second class particle at position 1 is independent of arrivals and services at position 0. So for $u_0 < u_1$ we put $x_1 = g_1(u_0)$ and $x_2 = g_1(u_1)$ and get as density \begin{align*} \mathbb{P} \left[ U_0 \in du_0 , U_1 \in du_1 \right] &= \frac{dx_1}{du_0} \frac{dx_2}{du_1} \frac{d}{dx_1} \frac{d}{dx_2} x_1 x_2 \left( x_2 - x_1 \right) \\ &= \frac{1-\beta}{4\beta^2} \left( 1 - u_0 \right)^{-\frac{3}{2}} \left( 1 - u_1 \right)^{-\frac{3}{2}} \left( 2 g_1(u_1) - 2 g_1(u_0) \right) \\ & = \frac{1-\beta}{2\beta^3} \left( 1 - u_0 \right)^{-\frac{3}{2}} \left( 1 - u_1 \right)^{-\frac{3}{2}} \\ &\qquad \qquad \qquad \qquad \cdot \left( \sqrt{\frac{1-\beta}{1-u_1}} - \sqrt{\frac{1-\beta}{1-u_0}} \right) \end{align*} Similarly, we have \begin{align*} \mathbb{P} \left[ g_1^{-1}(x_1) < U_1 < g_1^{-1}(x_2) < U_0 \right] &= \mathbb{P} \left[ V_0 = 3, V_1 = 2 \right] \\ &= \left( 1 - x_2 \right) \left( x_2 - x_1 \right) \end{align*} and therefore we get as density for $u_0 > u_1$ (putting $x_1 = g_1(u_1)$, $x_2 = g_1(u_0)$) \begin{align*} \mathbb{P} \left[ U_0 \in du_0 , U_1 \in du_1 \right] &= \left( - \frac{dx_1}{du_0} \right) \left( - \frac{dx_2}{du_1} \right) \frac{d}{dx_1} \frac{d}{dx_2} \left( 1 - x_2 \right) \left( x_2 - x_1 \right) \\ &= \frac{1-\beta}{4\beta^2} \left( 1 - u_0 \right)^{-\frac{3}{2}} \left( 1 - u_1 \right)^{-\frac{3}{2}} \\ &= g_1^{'}(u_0)g_1^{'}(u_1) \end{align*} To get the density for $u_0 = u_1$ we consider \begin{align*} \mathbb{P} \left[ g_1^{-1}(x_1) < U_0, U_1 < g_1^{-1}(x_2) \right] &= \mathbb{P} \left[ V_0 = 2, V_1 = 2 \right] \\ &= \left( 1 - x_1 \right) x_2 \left( x_2 - x_1 \right) \end{align*} and let $x_1, x_2 \rightarrow g_1(u)$. 
We get \begin{align*} \mathbb{P} \left[ U_0 , U_1 \in du \right] &= \lim_{x_1,x_2 \rightarrow g_1(u)} \frac{\left( 1 - x_1 \right) x_2 \left( x_2 - x_1 \right)}{g_1^{-1}(x_2) - g_1^{-1}(x_1)} \\ &= \frac{\sqrt{1-\beta}\left( 1 - g_1(u) \right) g_1(u)}{2 \beta \left( 1 - u \right)^{\frac{3}{2}}} \\ &= \frac{\sqrt{1-\beta}}{2 \beta^2 \left( 1 - u \right)^{\frac{3}{2}}} \left( 1 - \frac{1}{\beta} \right) + \frac{1-\beta}{2 \beta^2 \left( 1 - u \right)^{2}} \left( \frac{2}{\beta} - 1 \right) \\ &\qquad \qquad - \frac{\sqrt{1-\beta}\left( 1 - \beta \right)}{2 \beta^3 \left( 1 - u \right)^{\frac{5}{2}}} \end{align*} To get the probabilities in the Theorem we only have to integrate the densities over the appropriate ranges of $u_0$, $u_1$ and $u$. Alternatively we can use the following: \begin{align*} \mathbb{P} \left[ U_0^{(1)} < U_1^{(1)} \right] &= \mathbb{P} \left[ g_1(U_0^{(1)}) < g_1(U_1^{(1)}) \right] \\ &\overset{(*)}{=} \mathbb{P} \left[ g_0(U_0^{(0)}) < g_0(U_1^{(0)}) \right] \\ &= \mathbb{P} \left[ U_0^{(0)} < U_1^{(0)} \right] \\ &= \frac{1}{3} \end{align*} $(*)$ follows from the fact that the distribution of $\{ g_1(U_n^{(1)})\}$ is the unique translation invariant stationary ergodic measure for the TASEP R2 with marginals uniform on $[0,1]$. The distribution of $\{ g_0(U_n^{(0)})\}$ is the unique translation invariant stationary ergodic measure for the TASEP in continuous time. Since the stationary distributions for the multi-type TASEPs R0 and R2 are the same (see Theorem \ref{invariantmulti}) $\{ g_1(U_n^{(1)})\}$ has the same distribution as $\{ g_0(U_n^{(0)}) \}$. \end{proof}
The proofs for Theorems \ref{jointR3a} - \ref{jointR3c} work in exactly the same way.
\section{Fully parallel updates} Finally we mention the model with ``fully parallel updates''. If an update occurs at site $x$ at time $t$ (which happens with probability $\beta$ as usual), this update causes a jump from $x$ to $x+1$ if and only if $\eta_{t-1}(x)=1$ and $\eta_{t-1}(x+1)=0$ (that is, the jump is already possible before any other updates at the current time-step are performed).
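As an illustration (our own sketch, not from the paper), one time-step of this dynamics on a finite window can be written as follows. Note that no conflicts arise within a step: a jump into $x+1$ requires $\eta_{t-1}(x+1)=0$, while a jump out of $x+1$ requires $\eta_{t-1}(x+1)=1$, so no site can be both the target and the source of a jump in the same step.

```python
import random

def fully_parallel_step(eta, beta, rng=random):
    """One time-step of the fully parallel TASEP on a finite window:
    each site x receives an update with probability beta, and an update
    at x causes a jump x -> x+1 iff eta(x)=1 and eta(x+1)=0 in the
    configuration *before* this time-step."""
    old = list(eta)   # all jump decisions read the pre-update state
    new = list(eta)
    for x in range(len(old) - 1):
        if rng.random() < beta and old[x] == 1 and old[x + 1] == 0:
            new[x], new[x + 1] = 0, 1
    return new
```

For example, with $\beta = 1$ every admissible jump fires simultaneously, so `[1, 0, 1, 0]` becomes `[0, 1, 0, 1]` in a single step.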
There are several important differences between this model and the models we have studied earlier. The Bernoulli product measures $\nu_{\rho}$ are no longer invariant. Furthermore, the basic coupling no longer preserves an ordering between different initial configurations, and so it is no longer clear how to define a multi-class system. If we use basic coupling to couple two systems which start with one discrepancy at the origin then this single discrepancy can generate additional discrepancies. It would already be interesting to know how the leftmost and rightmost discrepancies behave. Do they have asymptotic speeds, and if so are the speeds random or deterministic? There is still a natural percolation representation, and we can still obtain a hydrodynamic limit result in the sense that $\frac{1}{n} \sum_{un < k < vn} \eta_n(k)$ converges a.s. to the constant value $\int_{u}^{v} f(w) dw$, for $u < v$ and some function $f$, but the stronger result that $\lim_{n \rightarrow
\infty} \mathbb{E} \left[ \eta_n(k) \right]$ exists and is equal to $f(u)$ whenever $\frac{k}{n}$ tends to $u$ does not follow using the same methods as in the proof of Theorem \ref{hydro}.
\section*{Acknowledgments} JM was supported by the EPSRC. PS was supported by a ``DAAD Doktorandenstipendium'' and the EPSRC.
\parbox{0.33\textwidth}{\noindent
James Martin\\
Department of Statistics\\
University of Oxford\\
1 South Parks Road \\
Oxford OX1 3TG\\
United Kingdom\\ \texttt{martin@stats.ox.ac.uk}}
\parbox{0.33\textwidth}{\noindent
Philipp Schmidt \\
Department of Statistics\\
University of Oxford\\
1 South Parks Road \\
Oxford OX1 3TG\\
United Kingdom\\ \texttt{schmidt@stats.ox.ac.uk}}
\end{document}
\begin{document}
\title{A new upper bound for odd perfect numbers of a special form \footnote{This preprint is a revised version of the paper which appeared in Colloq. Math. \textbf{156} (2019), 15--23. See the corrigendum attached in this preprint.} \footnote{2010 Mathematics Subject Classification: Primary 11A05, 11A25; Secondary 11D61, 11J86.} \footnote{Key words and phrases: Odd perfect numbers; the sum of divisors; arithmetic functions; exponential diophantine equations.}} \author{Tomohiro Yamada} \date{} \maketitle
\begin{abstract} We give a new effectively computable upper bound of odd perfect numbers whose Euler factors are powers of fixed exponent, improving our old result from \cite{Ymd}. \end{abstract}
\section{Introduction}\label{intro}
As usual, let $\sigma(N)$ denote the sum of divisors of a positive integer $N$. $N$ is called perfect if $\sigma(N)=2N$. Though it is not known whether or not an odd perfect number exists, many conditions which must be satisfied by such a number are known.
Suppose that $N$ is an odd perfect number. Euler has shown that \[ N=p^{\alpha} q_1^{2\beta_1}\cdots q_r^{2\beta_r} \] for distinct odd primes $p, q_1, \ldots, q_r$ and positive integers $\alpha, \beta_1, \ldots, \beta_r$ with $p\equiv \alpha\equiv 1\pmod{4}$.
The special case $\beta_1=\beta_2=\cdots =\beta_r=\beta$ has been considered by many authors. Steuerwald \cite{St} proved that we cannot have $\beta_1=\cdots=\beta_r=1$. McDaniel \cite{Mc} proved that we cannot have $\beta_1\equiv\cdots\equiv\beta_r\equiv1\pmod{3}$. If $\beta_1=\cdots=\beta_r=\beta$, then it is known that $\beta\neq 2$ (Kanold \cite{Ka1}), $\beta\neq 3$ (Hagis and McDaniel \cite{HMD}), $\beta\neq 5, 12, 17, 24, 62$ (McDaniel and Hagis \cite{MDH}), and $\beta\neq 6, 8, 11, 14, 18$ (Cohen and Williams \cite{CW}). In their paper \cite{HMD}, Hagis and McDaniel conjectured that $\beta_1=\cdots=\beta_r=\beta$ does not occur. We \cite{Ymd} proved that if $\beta_1=\cdots=\beta_r=\beta$, then $r\leq 4\beta^2+2\beta+2$ and \[N<2^{4^{4\beta^2+2\beta+3}}.\] We call this upper bound for $N$ \textit{the classical bound}.
At the RIMS workshop on analytic number theory in 2014, we gave an improved upper bound for such numbers, although that result has not been published anywhere (not even in preprint form or in unrefereed proceedings). We proved that $r<2\beta^2+O(\beta\log\beta)$ with an effectively computable implicit constant. There we used the arithmetic of quadratic fields and lower bounds for linear forms in logarithms.
In this paper, we shall give a slightly stronger result and a simpler proof, using less arithmetic in quadratic fields and a linear algebraic argument in place of Baker's method.
\begin{thm}\label{th} If $N=p^{\alpha} q_1^{2\beta}\cdots q_r^{2\beta}$ with $p, q_1, \ldots, q_r$ distinct primes is an odd perfect number, then $r\leq 2\beta^2+8\beta+2$ and \[N<2^{4^{2\beta^2+8\beta+3}}.\] Further, the coefficient $8$ of $\beta$ can be replaced by $7$ if $2\beta+1$ is not a prime, or $\beta\geq 29$. \end{thm}
The upper bound for $N$ immediately follows from the upper bound for $r$ and Nielsen's result \cite{Nie} that if $N$ is an odd perfect number, then $N<2^{4^{\omega(N)}}$, where $\omega(N)$ denotes the number of distinct prime factors of $N$.
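To make the improvement concrete: since $\omega(N)=r+1$, Nielsen's bound turns the estimate $r\leq 2\beta^2+8\beta+2$ into $N<2^{4^{2\beta^2+8\beta+3}}$. The following snippet (our illustration, not part of the argument) compares the new exponent with the classical one:

```python
def classical_exponent(beta):
    # exponent in the classical bound N < 2^(4^(4*beta^2 + 2*beta + 3)) from [Ymd]
    return 4 * beta**2 + 2 * beta + 3

def new_exponent(beta):
    # exponent in the bound N < 2^(4^(2*beta^2 + 8*beta + 3)) of Theorem 1
    return 2 * beta**2 + 8 * beta + 3
```

The two exponents agree at $\beta=3$, and the new one is strictly smaller for every $\beta\geq 4$ (in particular throughout the range $\beta\geq 9$ used below).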
In Section \ref{reduction}, we use a method used in \cite{Ymd} to reduce the theorem to an upper bound for the number of solutions of some diophantine equations.
\begin{lem}\label{lm0} Assume that $l=2\beta+1$ is a prime $\geq 19$. If $N=p^{\alpha} q_1^{2\beta}\cdots q_r^{2\beta}$ with $p, q_1, \ldots, q_r$ distinct primes is an odd perfect number, then, for each prime $q_j\equiv 1\pmod{l}$, there exist at most five primes $q_i\not\equiv 1\pmod{l}$ such that $(q_i^l-1)/(q_i-1)=p^m q_j$ for any prime $l\geq 59$ and at most six such primes for each prime $19\leq l\leq 53$. \end{lem}
In Section \ref{pr}, we solve this diophantine problem to prove the theorem. Here, we avoid the use of Baker's method by adopting a linear algebraic technique used by Beukers in \cite{Beu}, who gave upper bounds for the numbers of solutions of generalized Ramanujan-Nagell equations.
\section{Preliminaries}\label{lemmas}
In this section, we shall introduce some notations and lemmas.
We begin by introducing two well-known lemmas concerning prime factors of the $n$-th cyclotomic polynomial, which we denote by $\Phi_n(X)$. Lemma \ref{lm1} follows from Theorems 94 and 95 in Nagell \cite{Nag}. Lemma \ref{lm2} has been proved by Bang \cite{Ban} and rediscovered by many authors such as Zsigmondy \cite{Zsi}, Dickson \cite{Dic} and Kanold \cite{Ka1, Ka2}.
\begin{lem}\label{lm1} Let $p, q$ be distinct primes with $q\neq 2$ and $c$ be a positive integer. If $p\equiv 1\pmod{q}$, then $q$ divides $\sigma(p^c)$ if and only if $q$ divides $c+1$. Moreover, if $p\not\equiv 1\pmod{q}$, then $q$ divides $\sigma(p^c)$ if and only if the multiplicative order of $p$ modulo $q$ divides $c+1$. \end{lem}
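This criterion is easy to check by brute force over small primes: $q\mid\sigma(p^c)$ exactly when either $p\equiv 1\pmod q$ and $q\mid c+1$, or $p\not\equiv 1\pmod q$ and the multiplicative order of $p$ modulo $q$ divides $c+1$. A sketch (function names ours):

```python
def sigma_prime_power(p, c):
    # sigma(p^c) = 1 + p + ... + p^c = (p^(c+1) - 1)/(p - 1)
    return (p**(c + 1) - 1) // (p - 1)

def mult_order(a, q):
    # multiplicative order of a modulo q (assumes gcd(a, q) = 1)
    k, x = 1, a % q
    while x != 1:
        x = x * a % q
        k += 1
    return k

def lemma_predicts(p, q, c):
    if p % q == 1:
        return (c + 1) % q == 0
    return (c + 1) % mult_order(p, q) == 0

primes = [3, 5, 7, 11, 13]
ok = all(
    (sigma_prime_power(p, c) % q == 0) == lemma_predicts(p, q, c)
    for p in primes for q in primes if p != q
    for c in range(1, 60)
)
```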
\begin{lem}\label{lm2} If $a$ is an integer greater than $1$, then $\Phi_n(a)$ has a prime factor which does not divide $a^m-1$ for any $m<n$, unless $(a, n)=(2, 1), (2, 6)$ or $n=2$ and $a+1$ is a power of $2$. \end{lem}
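Lemma \ref{lm2} can likewise be spot-checked for small $a$ and $n$, computing $\Phi_n(a)=\prod_{d\mid n}(a^d-1)^{\mu(n/d)}$ by M\"obius inversion and testing each prime factor for primitivity (our illustration):

```python
def mobius(n):
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            res = -res
        d += 1
    return -res if n > 1 else res

def cyclotomic_value(n, a):
    # Phi_n(a) = prod over d | n of (a^d - 1)^mu(n/d)
    num = den = 1
    for d in range(1, n + 1):
        if n % d == 0:
            m = mobius(n // d)
            if m == 1:
                num *= a**d - 1
            elif m == -1:
                den *= a**d - 1
    return num // den

def prime_factors(v):
    fs, d = set(), 2
    while d * d <= v:
        while v % d == 0:
            fs.add(d)
            v //= d
        d += 1
    if v > 1:
        fs.add(v)
    return fs

def has_primitive_prime_factor(a, n):
    # a prime factor of Phi_n(a) dividing no a^m - 1 with m < n
    return any(
        all((a**m - 1) % p for m in range(1, n))
        for p in prime_factors(cyclotomic_value(n, a))
    )
```

For $2\leq a\leq 6$ and $n\leq 10$ the only failures are exactly the listed exceptions $(2,1)$, $(2,6)$, and $n=2$ with $a+1$ a power of $2$.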
Next, we need some notations and results from the arithmetic of a quadratic field. Let $l>3$ be a prime and $D=(-1)^\frac{l-1}{2}l$. Let ${\mathcal K}$ and ${\mathcal O}$ denote ${\mathbf{Q}}(\sqrt{D})$ and its ring of integers ${\mathbf{Z}}[(1+\sqrt{D})/2]$ respectively. We use the overline symbol to express the conjugate in ${\mathcal K}$. In the case $D>0$, $\ep$ and $R=\log\ep$ shall denote the fundamental unit and the regulator in ${\mathcal K}$. In the case $D<-4$, we set $\ep=-1$ and $R=\pi i$. We note that neither $D=-3$ nor $-4$ occurs since we have assumed that $l>3$.
We shall introduce the following lemma on the value of the cyclotomic polynomial $\Phi_l(x)$.
\begin{lem}\label{lm3} Assume that $l$ is a prime $\geq 19$ and $x$ is an integer $>3^{\floor{(l+1)/6}}$. Then $\Phi_l(x)$ can be written in the form $X^2-DY^2$ for some coprime integers $X$ and $Y$ with $0.4387/x<\abs{Y/(X+Y\sqrt{D})}$ and $\abs{Y/(X-Y\sqrt{D})}<0.5608/x$. Moreover, if $p, q$ are primes $\equiv 1\pmod{l}$ and $\Phi_l(x)=p^m q$ for some integer $m$, then, \begin{equation}\label{eq21} \left[\frac{X+Y\sqrt{D}}{X-Y\sqrt{D}}\right]=\left(\frac{\bar{\mathfrak p}}{{\mathfrak p}}\right)^{\pm m}\left(\frac{\bar{\mathfrak q}}{{\mathfrak q}}\right)^{\pm 1}, \end{equation} where $[p]={\mathfrak p}\bar{\mathfrak p}$ and $[q]={\mathfrak q}\bar{\mathfrak q}$ are prime ideal factorizations in ${\mathcal O}$. \end{lem}
\begin{proof} Let $\zeta$ be a primitive $l$-th root of unity. We can factor $(x^l-1)/(x-1)=\psi^+(x)\psi^-(x)$ in ${\mathcal K}$, where \begin{equation*} \begin{split} \psi^+(x)=& \prod_{\left(\frac{i}{l}\right)=1}(x-\zeta^i)=\sum_{i=0}^{\frac{l-1}{2}}a_i x^{\frac{l-1}{2}-i}, \\ \psi^-(x)=& \prod_{\left(\frac{i}{l}\right)=-1}(x-\zeta^i). \end{split} \end{equation*} Hence, taking $P(x)=\psi^+(x)+\psi^-(x)$ and $Q(x)=(\psi^+(x)-\psi^-(x))/\sqrt{D}$, we have \begin{equation} \frac{x^l-1}{x-1}=\psi^+(x)\psi^-(x)=\frac{P^2(x)-DQ^2(x)}{4}. \end{equation} Now, putting $X=P(x)$ and $Y=Q(x)$, we have $\Phi_l(x)=(X^2-DY^2)/4$ with $\psi^+(x)=(X+Y\sqrt{D})/2$ and $\psi^-(x)=(X-Y\sqrt{D})/2$.
If $\Phi_l(x)=p^m q$, then we have the ideal factorizations $[x-\zeta^i]={\mathfrak p}^{(i) m} {\mathfrak q}^{(i)}$ for $i=1, 2, \ldots, l-1$ in ${\mathbf{Q}}(\zeta)$ with $[p]=\prod_{i=1}^{l-1} {\mathfrak p}^{(i)}$ and $[q]=\prod_{i=1}^{l-1} {\mathfrak q}^{(i)}$. We see that $\prod_{\left(\frac{i}{l}\right)=1}{\mathfrak p}^{(i)}={\mathfrak p}$ or $\bar{\mathfrak p}$ and $\prod_{\left(\frac{i}{l}\right)=1}{\mathfrak q}^{(i)}={\mathfrak q}$ or $\bar{\mathfrak q}$. Now $[X+Y\sqrt{D}]$ can be factored into one of the forms ${\mathfrak p}^m {\mathfrak q}, \bar{\mathfrak p}^m {\mathfrak q}, {\mathfrak p}^m \bar{\mathfrak q}$ or $\bar{\mathfrak p}^m \bar{\mathfrak q}$ in ${\mathcal O}$ and (\ref{eq21}) holds.
Now it remains to show that $0.4387/x<\abs{Y/(X+Y\sqrt{D})}$ and \\ $\abs{Y/(X-Y\sqrt{D})}<0.5608/x$. We begin by dealing with the case $x\geq l^2$. We clearly have $a_0=1$. It follows from the well-known result for the Gauss sum that $a_1=\frac{1\pm \sqrt{D}}{2}$. Moreover, it immediately follows from the definition of $\psi^+(x)$ that \[\abs{a_i}\leq\binom{(l-1)/2}{i}<\left(\frac{l-1}{2}\right)^i\] for each $i\leq\frac{l-1}{2}$. Combining these facts on $a_i$'s, we obtain \begin{equation} \begin{split} \abs{P(x)-2x^\frac{l-1}{2}-x^\frac{l-3}{2}} & \leq 2\sum_{i=2}^{\frac{l-1}{2}}\left(\frac{l-1}{2}\right)^i x^{\frac{l-1}{2}-i} \\ & <\frac{(l-1)^2 x^\frac{l-3}{2}}{2x-l-1}\leq \frac{(l-1)^2 x^\frac{l-3}{2}}{2l^2-l-1} \\ & <\frac{x^\frac{l-3}{2}}{2} \end{split} \end{equation} and \begin{equation} \begin{split} \abs{\abs{Q(x)}-x^\frac{l-3}{2}} & \leq \frac{2}{\sqrt{l}}\sum_{i=2}^{\frac{l-1}{2}}\left(\frac{l-1}{2}\right)^i x^{\frac{l-1}{2}-i} \\ & <\frac{(l-1)^2 x^\frac{l-3}{2}}{\sqrt{l}(2x-l-1)}<\frac{(l-1)^2 x^\frac{l-3}{2}}{\sqrt{l}(2l^2-l-1)} \\ & <\frac{x^\frac{l-3}{2}}{2\sqrt{l}}. \end{split} \end{equation} From these inequalities, we deduce that \begin{equation} \abs{\frac{Q(x)}{P(x)-Q(x)\sqrt{D}}}<\frac{1+\frac{1}{2\sqrt{l}}}{2x+\frac{1}{2}-\left(1+\frac{1}{2\sqrt{l}}\right)\sqrt{l}}<\frac{0.5608}{x} \end{equation} and \begin{equation} \abs{\frac{Q(x)}{P(x)+Q(x)\sqrt{D}}}>\frac{1-\frac{1}{2\sqrt{l}}}{2x+\frac{3}{2}+\left(1+\frac{1}{2\sqrt{l}}\right)\sqrt{l}}>\frac{0.4387}{x} \end{equation} for $l\geq 19$, proving the lemma in this case.
In the remaining case $x<l^2$, we have $l\leq 37$ since we have assumed that $x>3^{\floor{(l+1)/6}}$. For each $l$, we can confirm the desired inequality for $3^{\floor{(l+1)/6}}<x<l^2$ by calculation. Now the lemma is completely proved. \end{proof}
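The bounds of the lemma can be checked numerically for a sample pair $(l, x)$; here $l=19$ (so $D=-19$) and $x=400\geq l^2$, and we use the proof's normalization $P^2-DQ^2=4\Phi_l(x)$. This is a floating-point sanity check (our illustration), not part of the proof:

```python
import cmath

l, x = 19, 400                       # a prime l >= 19 with l % 4 == 3, and x >= l*l
D = -l                               # D = (-1)^((l-1)/2) * l
qr = {pow(i, 2, l) for i in range(1, l)}
zeta = [cmath.exp(2j * cmath.pi * k / l) for k in range(l)]

psi_plus = psi_minus = 1 + 0j
for i in range(1, l):
    if i in qr:
        psi_plus *= x - zeta[i]      # product over quadratic residues
    else:
        psi_minus *= x - zeta[i]     # product over non-residues

sqrtD = cmath.sqrt(D)                # purely imaginary since D < 0
P = (psi_plus + psi_minus).real      # X = P(x)
Q = ((psi_plus - psi_minus) / sqrtD).real  # Y = Q(x)

phi = (x**l - 1) // (x - 1)          # Phi_l(x) as an exact integer
ratio_plus = abs(Q / (P + Q * sqrtD))
ratio_minus = abs(Q / (P - Q * sqrtD))
```

Both ratios fall strictly between $0.4387/x$ and $0.5608/x$, and $P^2-DQ^2$ reproduces $4\Phi_l(x)$ up to floating-point error.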
\section{Reduction to a diophantine problem}\label{reduction}
Let $N=p^{\alpha} q_1^{2\beta}\cdots q_r^{2\beta}$ be an odd perfect number. In this section, we shall show that our theorem can be reduced to Lemma \ref{lm0}.
The various results referred to in the introduction allow us to assume that $\beta\geq 9$ without loss of generality.
We see that we can take a prime factor $l$ of $2\beta+1$ which is one of the $q_i$'s. Indeed, if $2\beta+1$ has at least two distinct prime factors $l_1$ and $l_2$, then at least one of them must be one of the $q_i$'s and, if $2\beta+1=l^\gamma$ is a power of a prime $l$, then we must have $l=q_{i_0}$ for some $i_0$ by Kanold \cite{Ka1}.
As we did in \cite{Ymd}, we divide $q_1, \ldots, q_r$ into four disjoint sets. Let \begin{equation*} S=\{i:q_i\equiv 1\pmod{l}\}, \end{equation*} \begin{equation*} T=\{i:q_i\not\equiv 1\pmod{l}, i\neq i_0, q_j\mid\sigma(q_i^{2\beta})\text{ for some }1\leq j\leq r\}, \end{equation*} and \begin{equation*} U=\{i:q_i\not\equiv 1\pmod{l}, i\neq i_0, q_j\nmid\sigma(q_i^{2\beta})\text{ for any }1\leq j\leq r\}. \end{equation*} Hence, we can write $\{i: 1\leq i\leq r\}=S\cup T\cup U\cup \{i_0\}$.
In \cite{Ymd}, we proved that $\#S\leq 2\beta$. Moreover, if $2\beta+1=l^\gamma$ is a prime power, then $\#T\leq (2\beta)^2$ and $\#U\leq 1$, implying that $r\leq 4\beta^2+2\beta+2$ and, if $2\beta+1$ has $s>1$ distinct prime factors, then $\#S\leq 2\beta$ and $r\leq 2\beta\#S/(2^{s-1}-1)$.
For each $i\in T$, let $f(i)$ denote the number of prime factors in $S$ dividing $\sigma(q_i^{2\beta})$ counted with multiplicity. Then, we can easily see that, for any $i\in T$, $\sigma(q_i^{2\beta})$ has at least one prime factor in $S$ from Lemmas \ref{lm1} and \ref{lm2}. Hence, we have $f(i)\geq 1$ for any $i\in T$.
This immediately gives that $\#T\leq \sum_{i\in T}f(i)\leq (2\beta)\#S\leq 4\beta^2$, which is Lemma 3.2 in \cite{Ymd}. This yields the dominant term $4\beta^2$ in the exponent of the classical bound, which we would like to improve. To this end, we denote by $\delta$ the number of $i$'s for which $f(i)=1$. Then we have \[2\#T-\delta=\delta+2(\#T-\delta)\leq \sum_{i\in T}f(i)\leq 4\beta^2;\] that is, $\#T\leq 2\beta^2+(\delta/2)$.
If $2\beta+1$ is composite, then, by Lemma \ref{lm2}, for each divisor $d$ of $(2\beta+1)/l$, $\Phi_{ld}(q_i)$ has a prime factor $\equiv 1\pmod{l}$ not dividing $\Phi_{lk}(q_i)$ for any other divisor $k<d$ of $(2\beta+1)/l$. Hence, we see that $U$ must be empty. Moreover, if $i\in T$ and $f(i)=1$, then $2\beta+1=l^2$ and $\Phi_l(q_i)=q_j$ or $\Phi_{l^2}(q_i)=q_j$ for some $j\in S$, or $2\beta+1=l_1 l$ for some prime $l_1$ and $\Phi_l(q_i)=q_j$ or $\Phi_{l_1 l}(q_i)=q_j$ for some $j\in S$. From this, we can deduce that $f(i)=1$ holds for at most $2\#S\leq 4\beta$ indices $i\in T$. That is, $\delta\leq 2\#S\leq 4\beta$. Since $U$ is empty, we have $\#T+\#U\leq 2\beta^2+2\beta$.
If $2\beta+1=l$ is prime, $i\in T$ and $f(i)=1$, then $\sigma(q_i^{2\beta})=\Phi_l(q_i)=p^m q_j$ for an index $j\in S$ and an integer $m\geq 0$. Moreover, we have $\#U\leq 1$ as mentioned above.
Now, observing that $r\leq \#S+\#T+\#U+1\leq 2\beta^2+2\beta+(\delta/2)+2$, we conclude that Theorem \ref{th} can be derived if we show that, for each prime $q_j\equiv 1\pmod{l}$, there exist at most five primes $q_i$ with $i\in T$ such that $\Phi_l(q_i)=(q_i^l-1)/(q_i-1)=p^m q_j$ for any prime $l\geq 59$ and at most six such primes for each prime $19\leq l\leq 53$. Hence, Theorem \ref{th} would follow from Lemma \ref{lm0}, which we prove in the next section.
\section{Proof of the theorem}\label{pr}
We begin by proving a gap principle using elementary modular arithmetic. \begin{lem}\label{lm4} If $x_2>x_1>0$ are two multiplicatively independent integers and $\Phi_l(x_1)=p^{m_1} q_j$ and $\Phi_l(x_2)=p^{m_2} q_j$, then $x_2>x_1^{\floor{(l+1)/6}}$. \end{lem}
\begin{proof} Assume that $x_2\leq x_1^{\floor{(l+1)/6}}$. We begin by observing that $(x_1^{f_1} x_2^{f_2})^l\equiv 1\pmod{q_j}$ for any integers $f_1$ and $f_2$. In the case $p^{m_1}<q_j$, we must have $q_j>(\Phi_l(x_1))^{1/2}>x_1^{(l-1)/2}$ and therefore \begin{equation} 1\leq x_1^{f_1}x_2^{f_2}\leq x_1^{f_1+f_2\floor{(l+1)/6}}\leq x_1^{(l-1)/2}<q_j \end{equation} for $0\leq f_1\leq (l-1)/2-f_2\floor{(l+1)/6}$. This implies that each integer of the form $x_1^{f_1} x_2^{f_2}$ with $0\leq f_1\leq (l-1)/2-f_2\floor{(l+1)/6}$ must give a solution of the congruence $X^l\equiv 1\pmod{q_j}$ and these solutions are not congruent to each other. For each fixed $f_2$, we have $(l+1)/2-f_2\floor{(l+1)/6}$ such solutions. Hence, recalling that $l\geq 19$, the congruence $X^l\equiv 1\pmod{q_j}$ should have at least \[\sum_{f_2=0}^2\left(\frac{l+1}{2}-f_2\floor{\frac{l+1}{6}}\right)=\frac{3(l+1)}{2}-3\floor{\frac{l+1}{6}}\geq l+1\] solutions in $1\leq X<q_j$, which is impossible. Similarly, in the case $p^{m_1}>q_j$, the congruence $X^l\equiv 1\pmod{p^{m_1}}$ should have at least $l+1$ solutions in $1\leq X<p^{m_1}$, a contradiction again. Hence, we must have $x_2>x_1^{\floor{(l+1)/6}}$. \end{proof}
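The counting at the heart of this gap principle can be illustrated numerically: modulo a prime $q$, the congruence $X^l\equiv 1$ has exactly $\gcd(l,q-1)\leq l$ solutions in $1\leq X<q$, so no such modulus can support $l+1$ pairwise incongruent solutions. A small check (our illustration):

```python
from math import gcd

def count_lth_roots_of_unity(l, q):
    # number of solutions of X^l = 1 (mod q) with 1 <= X < q, q prime
    return sum(1 for X in range(1, q) if pow(X, l, q) == 1)

l = 19
# 191, 229, 419 are primes = 1 (mod 19); 101 is not
counts = {q: count_lth_roots_of_unity(l, q) for q in (101, 191, 229, 419)}
```

In each case the count equals $\gcd(l,q-1)$, which is $l$ when $q\equiv 1\pmod l$ and $1$ otherwise; the analogous statement for odd prime powers $p^m$ uses $\gcd(l,\varphi(p^m))$.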
Using Lemma \ref{lm3}, we shall prove another gap principle, which is more conditional but much stronger than the first gap principle.
\begin{lem}\label{lm5} If $\Phi_l(x_i)=p^{m_i} q_j$ for three integers $x_3>x_2>x_1>0$ with $x_2>x_1^{\floor{(l+1)/6}}$, then $m_3>0.445\abs{R}x_1/\sqrt{l}$. \end{lem} \begin{proof} We write $\xi_i=(X_i+Y_i\sqrt{D})/(X_i-Y_i\sqrt{D})$ for each $i=1, 2, 3$. Factoring $[p]={\mathfrak p}\bar{\mathfrak p}, [q_j]={\mathfrak q}_j\bar{\mathfrak q}_j$ in ${\mathcal O}$ and applying Lemma \ref{lm3} with $q_j$ in place of $q$, we obtain that, for each $i=1, 2, 3$, \begin{equation} [\xi_i]=\left(\frac{\bar{\mathfrak p}}{{\mathfrak p}}\right)^{\pm m_i}\left(\frac{\bar{\mathfrak q}_j}{{\mathfrak q}_j}\right)^{\pm 1}, \end{equation} holds with $0<Y_i/(X_i-Y_i\sqrt{D})<(\Phi_l(x_i))^{-1/(l-1)}$.
Hence, taking an appropriate combination of signs, we obtain \begin{equation} [\xi_1]^{\pm m_2\pm m_3}[\xi_2]^{\pm m_3\pm m_1}[\xi_3]^{\pm m_1\pm m_2}=[1], \end{equation} and therefore \begin{equation} \xi_1^{\pm m_2\pm m_3}\xi_2^{\pm m_3\pm m_1}\xi_3^{\pm m_1\pm m_2}=\pm \ep^a \end{equation} for some integer $a$. Hence, if we let each logarithm $\log\xi_i$ take its principal value, we have \begin{equation} (\pm m_2\pm m_3)\log\xi_1+(\pm m_3\pm m_1)\log\xi_2+(\pm m_1\pm m_2)\log\xi_3=bR \end{equation} for some integer $b$.
If $b\neq 0$, then \begin{equation} (m_2+m_3)\abs{\log \xi_1}+(m_3+m_1)\abs{\log \xi_2}+(m_1+m_2)\abs{\log \xi_3}\geq \abs{R}. \end{equation} Recalling from Lemma \ref{lm3} that $0<Y_i/(X_i-Y_i\sqrt{D})<0.5608/x_i$ and that each complex logarithm takes its principal value, we have \[\log\frac{X_i+Y_i\sqrt{D}}{X_i-Y_i\sqrt{D}}=\log \left(1+\frac{2Y_i\sqrt{l}}{X_i-Y_i\sqrt{l}}\right)<\frac{2Y_i\sqrt{l}}{X_i-Y_i\sqrt{l}}<\frac{1.1216\sqrt{l}}{x_i}\] for $D>0$ and \[\log\frac{X_i+Y_i\sqrt{D}}{X_i-Y_i\sqrt{D}}=2\arctan \frac{Y_i\sqrt{l}}{X_i}<\frac{2Y_i\sqrt{l}}{X_i-Y_i\sqrt{l}}<\frac{1.1216\sqrt{l}}{x_i}\] for $D<0$. Hence $\abs{\log \xi_i}<1.1216\sqrt{l}/x_i$ whether $D>0$ or $D<0$, and therefore \begin{equation} \begin{split} & 2.2432m_3\sqrt{l}\left(\frac{1}{x_1}+\frac{1}{x_2}+\frac{1}{x_3}\right) \\ & \geq 1.1216\sqrt{l}\left(\frac{m_2+m_3}{x_1}+\frac{m_3+m_1}{x_2}+\frac{m_1+m_2}{x_3}\right)>\abs{R}. \end{split} \end{equation} From this and the assumption that $x_3>x_2>x_1^{\floor{(l+1)/6}}\geq x_1^3$ (recall that we have assumed that $l\geq 19$), we can deduce that $m_3>0.445x_1 \abs{R}/\sqrt{l}$.
If $b=0$, then $\abs{\log \xi_1}\leq 2m_3\abs{\log \xi_2}+2m_2\abs{\log\xi_3}$. We see that $\abs{\log \xi_2}<1.1216\sqrt{l}/x_2$ and $\abs{\log \xi_3}<1.1216\sqrt{l}/x_3$ by Lemma \ref{lm3}, while $\abs{\log \xi_1}>0.8774\sqrt{l}/x_1$. Hence, we have \[\frac{0.15}{x_1}<m_3\left(\frac{1}{x_2}+\frac{1}{x_3}\right).\]
Moreover, since $x_2>x_1^{\floor{(l+1)/6}}\geq x_1^3$ and $x_3>x_2^{\floor{(l+1)/6}}$, we have \begin{equation} m_3>\frac{0.15x_2}{x_1}>0.15x_1^2>0.15\times 3^{\floor{\frac{l+1}{6}}} x_1>\abs{R} x_1, \end{equation} where the last inequality follows by observing that, if $l\equiv 3\pmod{4}$, then $0.15\times 3^{\floor{(l+1)/6}}>0.2l>\pi=\abs{R}$ and, if $l\equiv 1\pmod{4}$, then $D=l\geq 29$ and therefore $0.15\times 3^{\floor{(l+1)/6}}>l>R$ using the estimate $R<D^{1/2}\log (4D)$ from \cite{Fai}.
Hence, we conclude that, whether $b=0$ or not, $m_3>0.445\abs{R}x_1/\sqrt{l}$, proving the lemma. \end{proof}
Now we shall prove Lemma \ref{lm0}. Fix a prime $q_j\equiv 1\pmod{l}$. Assume that $q_1<q_2<\cdots <q_6$ are six primes not congruent to $1$ modulo $l$ such that $\Phi_l(q_i)=p^{g_i} q_j$ for $i=1, 2, \ldots, 6$. Moreover, assume that $q_7$ is a prime in $T$ greater than $q_6$ and $\Phi_l(q_7)=p^{g_7} q_j$ if $19\leq l\leq 53$. Write $R^\prime=0.445\abs{R}$. Since $q_2>q_1^{\floor{(l+1)/6}}\geq 3^{\floor{(l+1)/6}}$, we can apply Lemma \ref{lm5} with $(x_i, m_i)=(q_{i+1}, g_{i+1}) (i=1, 2, 3)$ to obtain \[\log q_4>\frac{g_4\log p}{l-1}> \frac{q_2 R^\prime \log p}{(l-1)\sqrt{l} } \geq \frac{3^{\floor{\frac{l+1}{6}}} R^\prime \log (2l+1)}{(l-1)\sqrt{l}},\] where we use the fact $p\geq 2l+1$ by Lemma \ref{lm1} and the assumption $q_i\not\equiv 1\pmod{l}$. Similarly, we have $\log q_6>q_4 R^\prime (\log (2l+1))/(l-1)\sqrt{l}$ and \begin{equation} \frac{\log q_6}{\log 2}>\exp\left(\frac{3^{\floor{\frac{l+1}{6}}}R^\prime \log (2l+1)}{(l-1)\sqrt{l}}+\log\frac{R^\prime \log (2l+1)}{(l-1)\sqrt{l}}\right). \end{equation}
If $l\geq 79$ and $l\equiv 3\pmod{4}$, then $\abs{R}=\pi$ and \begin{equation}\label{eq41} \frac{\log q_6}{\log 2}>4^{l^2}=4^{4\beta^2+4\beta+1}. \end{equation} If $l\geq 73$ and $l\equiv 1\pmod{4}$, then, observing that $R>\sqrt{l}$, we have (\ref{eq41}) again. Similarly, if $l=61$, then we have (\ref{eq41}) observing that $\ep=(39+5\sqrt{61})/2$.
Assume that $l=71$. If $q_1=3$, then $q_j=\Phi_{71}(3)$. Since $q_1$ and $q_2$ are multiplicatively independent, we must have $q_2>\Phi_{71}(3)$. If $q_1\geq 5$, then $q_2>5^{12}$. In both cases, $q_2>5^{12}$ and $\log q_6/\log 2>4^{5041}$. If $l=67$, then we must have $q_1\geq 17, q_2\geq 17^{11}$ and $\log q_6/\log 2>4^{4489}$. If $l=59$, then $q_1\geq 5$ since $\Phi_{59}(3)=14425532687\times 489769993189671059$, where both prime factors are congruent to $3$ modulo $4$. Hence, we have $q_2>5^{10}$ and $\log q_6/\log 2>4^{2809}$.
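The stated factorization of $\Phi_{59}(3)=(3^{59}-1)/2$ can be verified directly; a deterministic Miller--Rabin test with the first twelve prime bases is known to be exact far beyond this size (helper name ours):

```python
def is_prime(n):
    # deterministic Miller-Rabin: these bases are exact for all n < 3.3 * 10^24
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

p1, p2 = 14425532687, 489769993189671059
phi_59_3 = (3**59 - 1) // 2          # Phi_59(3) = (3^59 - 1)/(3 - 1)
```

Both factors are prime and congruent to $3$ modulo $4$, as claimed.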
Hence, for $l\geq 59$, we have (\ref{eq41}), which is impossible, since it would imply that $N\geq q_6$ exceeds the classical bound.
If $23\leq l\leq 53$, applying Lemma \ref{lm4}, we have $q_1\geq 3, q_2\geq 3^4$ and $q_3\geq 3^{16}$. Applying Lemma \ref{lm5} with $(x_i, m_i)=(q_{i+2}, g_{i+2})$ $(i=1, 2, 3)$ and then $(x_i, m_i)=(q_{i+4}, g_{i+4}) (i=1, 2, 3)$, we have $q_5>\exp(610000)$ and $q_7>\exp(\exp(600000))$. If $l=19$, applying Lemma \ref{lm5} with $(x_i, m_i)=(q_{i+2}, g_{i+2}) (i=1, 2, 3)$ and then $(x_i, m_i)=(q_{i+4}, g_{i+4}) (i=1, 2, 3)$, we have $q_1\geq 3, q_2\geq 29, q_3\geq 24391, q_5>\exp(2200)$ and $q_7>\exp(\exp(2000))$. Thus, $q_7$ must exceed the classical bound if $19\leq l\leq 53$, which is a contradiction again.
Hence, we conclude that, for each given $j\in S$, there are at most five indices $i\in T$ with $f(i)=1$ and $q_j\mid \Phi_l(q_i)=\sigma(q_i^{2\beta})$ if $l\geq 59$, and at most six such indices if $19\leq l\leq 53$. This completes the proof of Lemma \ref{lm0}, which in turn implies Theorem \ref{th}.
{} \vskip 12pt
{\small Tomohiro Yamada}\\ {\small Center for Japanese language and culture\\Osaka University\\562-8558\\8-1-1, Aomatanihigashi, Minoo, Osaka\\Japan}\\ {\small e-mail: \protect\normalfont\ttfamily{tyamada1093@gmail.com}}
\end{document}
\begin{document}
\title{The categorified Diassociative cooperad}
\begin{abstract}
Using representations of quivers of type $\mathbb{A}$, we define an
anticyclic cooperad in the category of triangulated categories,
which is a categorification of the linear dual of the Diassociative operad. \end{abstract}
\section{Introduction}
The Diassociative operad has been introduced by Loday \cite{loday_cras,loday_overview,loday_lnm}. It can be described as a collection of free abelian groups $\operatorname{Dias}(n)$ of rank $n$ and maps $\circ_i$ from $\operatorname{Dias}(m)\otimes \operatorname{Dias}(n)$ to $\operatorname{Dias}(m+n-1)$ satisfying some kind of associativity. The composition maps $\circ_i$ have a simple combinatorial description, using grafting of planar trees with a distinguished path from the root to a leaf.
It has been shown in \cite{anticyclic} that one can endow this operad with a refined structure of anticyclic operad. This means that there exists a map of order $n+1$ on $\operatorname{Dias}(n)$, with some compatibility with the $\circ_i$ maps.
The aim of this article is to prove that this whole structure (or rather its linear dual, which is an anticyclic cooperad) is the shadow of a natural representation-theoretic object, related to the Dynkin diagrams of type $\mathbb{A}$.
We will not assume any knowledge of operads, but the interested reader can consult \cite{ginzburg,loday_lnm,smirnov,stasheff} for basics of this theory and \cite{markl,anticyclic} for the notion of anticyclic operad.
We first define a cooperad $\mathcal{A}$ in the category of abelian categories. This amounts to a collection of abelian categories $\mathcal{A}_n$ for $n\geq 1$ and some functors $\nabla$ from these categories to products of two of these categories. The categories $\mathcal{A}_n$ involved are just the categories of modules over the $\overrightarrow{\mathbb{A}}_n$ quivers. These are very classical objects in representation theory.
The $\nabla$ functors are defined as tensor product by some specific multiplicity-free bimodules. The axioms of cooperads are checked by using a combinatorial description of the tensor product of such bimodules.
At the level of the Grothendieck groups, one then checks that the induced cooperad is the linear dual of the Diassociative operad. The classes of simple modules correspond to the usual basis of $\operatorname{Dias}(n)$ and the $\nabla$ functors give the linear dual of the $\circ_i$ maps.
As the $\nabla$ functors are given by the tensor product with projective bimodules, they are exact. Going to the derived categories $D\mathcal{A}_n$, we prove that there is some compatibility between the $\nabla$ functors and the Auslander-Reiten translations. At the level of Grothendieck groups, this amounts to the structure of anticyclic cooperad on the Diassociative cooperad.
\section{General facts}
\subsection{Quivers of type $\mathbb{A}$}
For each integer $n\geq 1$, let $\overrightarrow{\mathbb{A}}_n$ be the quiver $1 \to 2 \to \dots \to n$. This is a quiver on the graph of type $\mathbb{A}_n$ in the classification of Dynkin diagrams.
Let $k$ be a fixed ground field. Let $\mathcal{A}_n$ be the category of finite dimensional right-modules over the path algebra of $\overrightarrow{\mathbb{A}}_n$ over $k$. This is an abelian category, with a finite number of isomorphism classes of indecomposable objects.
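By Gabriel's theorem (a standard fact, not spelled out in the text), the indecomposable objects of $\mathcal{A}_n$ are, up to isomorphism, the interval modules supported on the vertices $i, i+1, \dots, j$, so there are $n(n+1)/2$ of them. A quick enumeration (our illustration):

```python
def interval_modules(n):
    # indecomposables of the linear A_n quiver <-> intervals [i, j], 1 <= i <= j <= n
    return [(i, j) for i in range(1, n + 1) for j in range(i, n + 1)]
```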
Let $D\mathcal{A}_n$ be the bounded derived category of the category $\mathcal{A}_n$. This is a triangulated category, with a shift functor that will be denoted by $S$. Indecomposable objects in $D\mathcal{A}_n$ are just shifts of the images of the indecomposable objects in $\mathcal{A}_n$.
There exists a canonical self-equivalence of $D\mathcal{A}_n$, called the Auslander-Reiten translation and denoted by $\tau_n$.
The Nakayama functor $\nu_n$ is the composite $\tau_n S=S \tau_n$. This functor maps, for each vertex $i$ of $\overrightarrow{\mathbb{A}}_n$, the projective module $P_i$ to the injective module $I_i$.
\subsection{Products of quivers}
Let $\overrightarrow{\mathbb{A}}_{m_1,m_2,\dots,m_k}$ be the product quiver $\overrightarrow{\mathbb{A}}_{m_1} \times \overrightarrow{\mathbb{A}}_{m_2} \times \dots \times \overrightarrow{\mathbb{A}}_{m_k}$. We consider this as a quiver with relations by imposing all possible commutation relations.
A module over this quiver amounts to a module over the tensor product of the path algebras of the quivers $\overrightarrow{\mathbb{A}}_{m_i}$. Therefore, one can forget the action of some of the factors to define restricted modules.
As there is a canonical isomorphism of quivers from $\overrightarrow{\mathbb{A}}_{m,n}$ to $\overrightarrow{\mathbb{A}}_{n,m}$, there are canonical equivalences between the corresponding module and derived categories. Let us denote by $X$ these flip functors.
More generally, any permutation of the factors in a multiple product of quivers $\overrightarrow{\mathbb{A}}_n$ gives rise to a corresponding equivalence.
\subsection{Standard modules}
Let $M$ be a module over a quiver $\overrightarrow{\mathbb{A}}_{m_1,m_2,\dots,m_k}$. Let $M_s$ be the vector space associated with a vertex $s$. One says that $M$ is \textbf{multiplicity-free} if the dimension of $M_{s}$ is at most $1$ for every vertex $s$. Let then $\mathsf{S}(M)$ be the support of $M$, which is the set of vertices $s$ such that $\dim M_s=1$.
Let $M$ be a multiplicity-free module. One says that $M$ is \textbf{standard} if, for any two adjacent vertices $s,s'$ in $\mathsf{S}(M)$, the map between $M_s$ and $M_{s'}$ is an isomorphism.
To describe a standard module $M$ up to isomorphism, it is clearly enough to give its support $\mathsf{S}(M)$. The support of a standard module cannot be arbitrary, because of the commuting conditions that must be satisfied. One can then build back the module using copies of the field $k$ and identity maps between them.
\subsection{Tensor product of projective standard modules}
There is a simple combinatorial description of the tensor product of two projective standard modules.
Let us consider only the special case that we will need. Let $M_{a\,;\,b,c}$ be a $\overrightarrow{\mathbb{A}}_a^{op} \times \overrightarrow{\mathbb{A}}_b \times \overrightarrow{\mathbb{A}}_c$-module and let $M_{c\,;\,d,e}$ be a $\overrightarrow{\mathbb{A}}_c^{op} \times \overrightarrow{\mathbb{A}}_d \times \overrightarrow{\mathbb{A}}_e$-module. Assume that $M_{a\,;\,b,c}$ is $\overrightarrow{\mathbb{A}}_{c}$-projective and that $M_{c\,;\,d,e}$ is $\overrightarrow{\mathbb{A}}_c^{op}$-projective.
Then one can define the tensor product of $M_{a\,;\,b,c}$ and $M_{c\,;\,d,e}$ over (the path algebra of) $\overrightarrow{\mathbb{A}}_c$. This is a $\overrightarrow{\mathbb{A}}_a^{op} \times \overrightarrow{\mathbb{A}}_b \times \overrightarrow{\mathbb{A}}_d \times \overrightarrow{\mathbb{A}}_e$-module denoted by $M_{a\,;\,b,c} \otimes_{\overrightarrow{\mathbb{A}}_c} M_{c\,;\,d,e}$.
Assume that $M_{a\,;\,b,c}$ and $M_{c\,;\,d,e}$ are standard modules with support $\mathsf{S}_{a\,;\,b,c}$ and $\mathsf{S}_{c\,;\,d,e}$. Let us define a set $\mathsf{S}_{a\,;\,b,c}\times_c\mathsf{S}_{c\,;\,d,e}$ as follows: \begin{equation*}
\mathsf{S}_{a\,;\,b,c}\times_c\mathsf{S}_{c\,;\,d,e}=\{ (\alpha,\beta,\delta,\epsilon)\mid \exists \gamma \, (\alpha,\beta,\gamma)\in\mathsf{S}_{a\,;\,b,c} \text{ and } (\gamma,\delta,\epsilon)\in\mathsf{S}_{c\,;\,d,e} \}. \end{equation*}
\begin{proposition}
\label{def_tensor}
The tensor product $M_{a\,;\,b,c} \otimes_{\overrightarrow{\mathbb{A}}_c} M_{c\,;\,d,e}$ is
isomorphic to the standard module with support
$\mathsf{S}_{a\,;\,b,c}\times_c\mathsf{S}_{c\,;\,d,e}$. \end{proposition} \begin{proof}
The tensor product over the field $k$ has a basis indexed by tuples
$(\alpha,\beta,\gamma,\gamma',\delta,\epsilon)$ with
$(\alpha,\beta,\gamma)\in\mathsf{S}_{a\,;\,b,c}$ and
$(\gamma',\delta,\epsilon)\in\mathsf{S}_{c\,;\,d,e}$. Then one has to take
the quotient by the action of the idempotents and arrows of the
quiver $\overrightarrow{\mathbb{A}}_c$. Abusing notation, we will identify tuples with the
corresponding vectors.
By the action of the idempotents in the path algebra, one can see
that all vectors $(\alpha,\beta,\gamma,\gamma',\delta,\epsilon)$
with $\gamma\not=\gamma'$ vanish in the tensor product.
It remains to take the quotient by the action of the arrows. This means
that one has to identify
$(\alpha,\beta,\gamma,\gamma,\delta,\epsilon)$ and
$(\alpha,\beta,\gamma+1,\gamma+1,\delta,\epsilon)$, provided that
one has $(\alpha,\beta,\gamma)\in \mathsf{S}_{a\,;\,b,c}$ and
$(\gamma+1,\delta,\epsilon)\in \mathsf{S}_{c\,;\,d,e}$.
By the hypothesis of projectivity, in this situation, one also has
$(\alpha,\beta,\gamma+1)\in \mathsf{S}_{a\,;\,b,c}$ and
$(\gamma,\delta,\epsilon)\in \mathsf{S}_{c\,;\,d,e}$. Hence both
$(\alpha,\beta,\gamma,\gamma,\delta,\epsilon)$ and
$(\alpha,\beta,\gamma+1,\gamma+1,\delta,\epsilon)$ are non-zero vectors.
Therefore, the tensor product has a basis indexed by tuples
$(\alpha,\beta,\delta,\epsilon)$ such that there exists $\gamma$
with $(\alpha,\beta,\gamma)\in\mathsf{S}_{a\,;\,b,c}$ and
$(\gamma,\delta,\epsilon)\in\mathsf{S}_{c\,;\,d,e}$.
One can also see by construction that indeed the module is standard. \end{proof}
\subsection{Fiber-reversal and action of $\tau$}
Let $N_n$ be the standard $\overrightarrow{\mathbb{A}}_n^{op} \times \overrightarrow{\mathbb{A}}_n$ module with support \begin{equation}
\{(i,j)\in[1,n]\times[1,n] \mid i \geq j\}. \end{equation} Note that $N_n$ is injective as a $\overrightarrow{\mathbb{A}}_n^{op}$-module and as a $\overrightarrow{\mathbb{A}}_n$-module.
\begin{lemma}
The Nakayama functor $\nu_n$ on the category $D\mathcal{A}_n$ is the derived
tensor product $? \otimes^{L}_{\overrightarrow{\mathbb{A}}_n} N_n$. \end{lemma} \begin{proof}
This follows from the fact that the image by $\nu$ of the projective
module $P_i$ is the injective module $I_i$, by the standard way of
representing functors by bimodules. \end{proof}
\begin{figure}
\caption{The bimodule $N_{6}$ corresponding to Nakayama functor $\nu_6$}
\label{exemple_nakayama}
\end{figure}
Let us now introduce some operations on support sets.
Let $\mathsf{S}$ be a subset in the product $[1,m_1]\times\dots\times[1,m_k]$. Fix an index $i$. Assume that $\mathsf{S}$ is projective in the direction $i$, \textit{i.e.} that
\begin{equation}
\text{if } (j_1,\dots,j_i,\dots,j_k)\in\mathsf{S} \text{ then }
(j_1,\dots,\ell,\dots,j_k)\in\mathsf{S} \text{ for all }\ell\geq j_i.
\end{equation}
The \textbf{fiber-reversal} of $\mathsf{S}$ in the direction $i$ is \begin{equation}
\{(j_1,\dots,j_i,\dots,j_k)\in
[1,m_1]\times\dots\times[1,m_k] \mid
(j_1,\dots,j_i-1,\dots,j_k)\not\in\mathsf{S} \}. \end{equation}
Note that the fiber-reversal of $\mathsf{S}$ in direction $i$ is never disjoint from $\mathsf{S}$, and really depends on the index $i$.
One can give a similar definition of the fiber-reversal if the set $\mathsf{S}$ is injective in the direction $i$, \textit{i.e.} if the following condition holds: \begin{equation}
\text{if } (j_1,\dots,j_i,\dots,j_k)\in\mathsf{S} \text{ then }
(j_1,\dots,\ell,\dots,j_k)\in\mathsf{S} \text{ for all }\ell\leq j_i. \end{equation}
Let us now describe the (derived) tensor product with $N_n$. We consider only the special case that we will need.
Let $M_{n;c,d}$ be a $\overrightarrow{\mathbb{A}}_n^{op}\times \overrightarrow{\mathbb{A}}_{c} \times \overrightarrow{\mathbb{A}}_d$ standard module with support $\mathsf{S}_{n;c,d}$. Assume that $M_{n;c,d}$ is projective as a $\overrightarrow{\mathbb{A}}_n^{op}$-module. \begin{proposition}
The derived tensor product $N_n \otimes_{\overrightarrow{\mathbb{A}}_n}^L M_{n;c,d}$ is
isomorphic to the standard module with support the fiber-reversal
of $\mathsf{S}_{n;c,d}$ in the direction of length $n$. \end{proposition}
\begin{proof}
The tensor product $N_n \otimes_{k} M_{n;c,d}$ has a basis indexed
by tuples $(\alpha,\beta,\beta',\gamma,\delta)$ with
$\alpha \geq \beta$ and $(\beta',\gamma,\delta)\in\mathsf{S}_{n;c,d}$. Abusing
notation, we will identify tuples with the corresponding vectors.
Using the idempotents in the path algebra, the tensor product over
$\overrightarrow{\mathbb{A}}_n$ is spanned by tuples $(\alpha,\beta,\beta,\gamma,\delta)$.
Then one has to identify $(\alpha,\beta,\beta,\gamma,\delta)$ and
$(\alpha,\beta+1,\beta+1,\gamma,\delta)$ as soon as $\alpha \geq
\beta$ and $(\beta+1,\gamma,\delta)\in \mathsf{S}_{n;c,d}$.
Using the hypothesis that $M_{n;c,d}$ is projective, one has in this
situation that $(\beta,\gamma,\delta)\in \mathsf{S}_{n;c,d}$.
The only case where one of the two vectors
$(\alpha,\beta,\beta,\gamma,\delta)$ and
$(\alpha,\beta+1,\beta+1,\gamma,\delta)$ is zero and the other is
not zero happens if $\beta+1>\alpha$, \textit{i.e.} $\alpha=\beta$.
It follows that the vector $(\alpha,\beta,\beta,\gamma,\delta)$ is
mapped to zero in the tensor product over $\overrightarrow{\mathbb{A}}_n$ if and only if
$(\alpha+1,\gamma,\delta) \in \mathsf{S}_{n;c,d}$; otherwise the vectors are simply
identified. This is exactly the definition of the fiber-reversal of
$\mathsf{S}_{n;c,d}$ in the first direction.
One can easily check that the tensor product is standard. \end{proof}
\section{The $\nabla$ functors on module categories}
In this section, we define a cooperad structure on the collection of module categories $(\mathcal{A}_n)_{n\geq 1}$. This means that several functors $\nabla$ are introduced and certain associativity properties are proved.
Let $n\geq 1$ be an integer and let $m,i$ be integers such that $1 \leq i \leq m$.
Consider the set $\mathsf{S}_{m;i}^n$ of integer triples $(\gamma,\mu,\nu)$ in $[1,m+n-1]\times[1,m]\times[1,n]$ such that \begin{align*}
&( \mu \leq i-1 \text{ and } \gamma \leq \mu) \\
\text{ or } &(\mu =i \text{ and } \gamma \leq i+\nu-1)\\
\text{ or } &(i+1 \leq \mu \text{ and } \gamma \leq \mu+n-1). \end{align*}
This is illustrated in Figure \ref{exemple_module} with $m=6,n=4$ and $i=3$.
\begin{figure}
\caption{The module $M_{6;3}^{4}$ and its symbolic description}
\label{exemple_module}
\end{figure}
For later use, here is an equivalent description of $\mathsf{S}_{m;i}^{n}$: \begin{align}
\label{description_alt}
&( \gamma \leq i \text{ and } \gamma \leq \mu) \\
\text{ or } &(i+1 \leq \gamma\leq i+n-1 \text{ and } \gamma \leq
i+\nu-1 \text{ and } i\leq \mu)\\
\text{ or } &(i+1 \leq \gamma\leq i+n-1 \text{ and } i+\nu \leq
\gamma \text{ and } i+1\leq \mu)\\
\text{ or } &(i+n \leq \gamma \text{ and } \gamma-n+1 \leq \mu). \end{align}
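Since both descriptions are finite boolean conditions, their agreement can also be verified mechanically for any fixed parameters. The following Python sketch (our own sanity check, not part of the argument; all names are ours) compares the two descriptions on small values of $m$, $n$ and $i$:

```python
from itertools import product

# Support set S_{m;i}^n from its defining inequalities.
def S(m, i, n):
    return {(g, mu, nu)
            for g, mu, nu in product(range(1, m + n), range(1, m + 1), range(1, n + 1))
            if (mu <= i - 1 and g <= mu)
            or (mu == i and g <= i + nu - 1)
            or (i + 1 <= mu and g <= mu + n - 1)}

# The alternative description, reading each displayed line as one clause.
def S_alt(m, i, n):
    return {(g, mu, nu)
            for g, mu, nu in product(range(1, m + n), range(1, m + 1), range(1, n + 1))
            if (g <= i and g <= mu)
            or (i + 1 <= g <= i + n - 1 and g <= i + nu - 1 and i <= mu)
            or (i + 1 <= g <= i + n - 1 and i + nu <= g and i + 1 <= mu)
            or (i + n <= g and g - n + 1 <= mu)}

# The two descriptions agree on a range of small parameters.
assert all(S(m, i, n) == S_alt(m, i, n)
           for m in range(1, 6) for n in range(1, 6) for i in range(1, m + 1))
```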
One can easily check that the set $\mathsf{S}_{m;i}^n$ has the following property: if $(\gamma,\mu,\nu)\in \mathsf{S}_{m;i}^n$ and if $(\gamma',\mu',\nu')\in [1,m+n-1]\times[1,m]\times[1,n]$ is such that $\gamma' \leq \gamma$, $\mu \leq \mu'$ and $\nu \leq \nu'$, then $(\gamma',\mu',\nu')\in \mathsf{S}_{m;i}^n$.
This implies that one can define a representation $M_{m;i}^n$ of the quiver $\overrightarrow{\mathbb{A}}_{m+n-1}^{op} \times \overrightarrow{\mathbb{A}}_m \times \overrightarrow{\mathbb{A}}_n$ as the standard module with support $\mathsf{S}_{m;i}^n$. Note that $M_{m;i}^n$ is projective with respect to $\overrightarrow{\mathbb{A}}_m$, $\overrightarrow{\mathbb{A}}_n$ and $\overrightarrow{\mathbb{A}}_{m+n-1}^{op}$.
Let then $\nabla_{m;i}^n$ be the functor from $\mathcal{A}_{m+n-1}$ to $\mathcal{A}_{m,n}$ defined by the tensor product over $\overrightarrow{\mathbb{A}}_{m+n-1}$ by $M_{m;i}^{n}$ : \begin{equation}
\nabla_{m;i}^n = ? \otimes_{\overrightarrow{\mathbb{A}}_{m+n-1}} M_{m;i}^{n}. \end{equation}
Note that $\nabla_{1;1}^n$ is the identity functor of $\mathcal{A}_n$ and that $\nabla_{m;i}^{1}$ is the identity functor of $\mathcal{A}_m$ for all $i$.
\subsection{Relation to the $\operatorname{Dias}$ cooperad}
Let $n$ be an integer and let $1\leq j \leq n$. Let $S_j^n$ be the simple $\overrightarrow{\mathbb{A}}_{n}$-module supported at vertex $j$. Let $P_j^n$ be the projective $\overrightarrow{\mathbb{A}}_{n}$-module associated with vertex $j$. The class of a module $M$ in the Grothendieck group $K^0(\mathcal{A}_n)$ of $\mathcal{A}_n$ will be denoted by $[M]$. The elements $[S_j^n]$ for $1\leq j \leq n$ form a basis of $K^0(\mathcal{A}_n)$.
Let us now compute the class $[\nabla_{m;i}^{n}(S_j^{m+n-1})]$.
From the explicit description of the module $M_{m;i}^{n}$, one has \begin{equation}
[\nabla_{m;i}^n (P_j^{m+n-1})]=
\begin{cases}
[P_{j,1}^{m,n}] &\text{ if } 1\leq j \leq i,\\
[P_{i+1,1}^{m,n}]+[P_{i,j-i+1}^{m,n}]-[P_{i+1,j-i+1}^{m,n}]
&\text{ if } i+1\leq j\leq i+n-1,\\
[P_{j-n+1,1}^{m,n}] &\text{ if } i+n \leq j \leq m+n-1,
\end{cases} \end{equation} where $P_{i,j}^{m,n}$ is the projective module associated with vertex $(i,j)$ of $\overrightarrow{\mathbb{A}}_m \times \overrightarrow{\mathbb{A}}_n$.
Using a projective resolution of the simple modules, one deduces that \begin{equation}
[\nabla_{m;i}^n (S_j^{m+n-1})]=
\begin{cases}
\sum_{k=1}^{n} [S_{j,k}^{m,n}]&\text{ if } 1\leq j \leq i-1,\\
[S_{i,j-i+1}^{m,n}]&\text{ if } i\leq j\leq i+n-1,\\
\sum_{k=1}^{n} [S_{j-n+1,k}^{m,n}]&\text{ if } i+n \leq j \leq m+n-1,
\end{cases} \end{equation} where $S^{m,n}_{i,j}$ is the simple module at vertex $(i,j)$ for $\overrightarrow{\mathbb{A}}_{m} \times \overrightarrow{\mathbb{A}}_{n}$.
Taking the basis $e$ dual to the basis $[S]$, one finds that the maps $\circ$ dual to the $\nabla$ maps are given by \begin{equation}
\circ_{m;i}^{n}(e^m_j \otimes e^n_k) =
\begin{cases}
e^{m+n-1}_{j}&\text{ if }i>j,\\
e^{m+n-1}_{i+k-1}&\text{ if }i=j,\\
e^{m+n-1}_{j+n-1}&\text{ if }i<j.
\end{cases} \end{equation}
This is exactly the usual description of the Diassociative operad, in the usual basis $e$ of $\operatorname{Dias}(n)$, see \cite[\S 3]{anticyclic}.
\section{Cooperadic properties of $\nabla$ functors}
One has to check two different axioms to prove that the $\nabla$ functors define a cooperad. Let us call them the commutativity axiom and the associativity axiom.
\subsection{Commutativity axiom}
Let $m,n,p$ and $i,j$ be integers such that $1\leq i <j \leq m$.
\begin{proposition}
The modules $M$ have the following property: there is an
isomorphism
\begin{equation}
M_{m+p-1;i}^n \otimes_{\overrightarrow{\mathbb{A}}_{m+p-1}} M_{m;j}^p\simeq
M_{m+n-1;j+n-1}^p \otimes_{\overrightarrow{\mathbb{A}}_{m+n-1}}M_{m;i}^n ,
\end{equation}
where both sides are $\overrightarrow{\mathbb{A}}_{m+n+p-2}^{op}\times\overrightarrow{\mathbb{A}}_m\times \overrightarrow{\mathbb{A}}_n
\times \overrightarrow{\mathbb{A}}_p$-modules. \end{proposition}
\begin{proof}
As the modules $M$ are standard and projective, their tensor
products can be described using their supports. According to the
description of tensor products in Prop. \ref{def_tensor}, one
therefore has to compute and compare the sets $\mathsf{S}_{m;j}^p
\times_{{m+p-1}} \mathsf{S}_{m+p-1;i}^n$ and $\mathsf{S}_{m;i}^n \times_{{m+n-1}}
\mathsf{S}_{m+n-1;j+n-1}^p$.
By an elementary computation with boolean combinations of
inequalities, one can show that both sides are given by the set of
$(\gamma,\mu,\nu,\pi)$ in
$[1,m+n+p-2]\times[1,m]\times[1,n]\times[1,p]$ such that
\begin{align*}
&( \mu \leq i-1 \text{ and } \gamma \leq \mu) \\
\text{ or } &(\mu =i \text{ and } \gamma \leq i+\nu-1)\\
\text{ or } &(i+1 \leq \mu \leq j-1 \text{ and } \gamma \leq
\mu+n-1)\\
\text{ or } &(\mu =j \text{ and } \gamma \leq j+n+\pi-2)\\
\text{ or } &(j+1 \leq \mu \text{ and } \gamma \leq
\mu+n+p-2).
\end{align*} \end{proof}
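For small parameters, this coincidence of supports can also be confirmed by brute force, computing each composite support via the existential middle index of Prop.~\ref{def_tensor}. A Python sketch (our own verification aid, not part of the proof):

```python
from itertools import product

# Support set S_{m;i}^n of the standard module M_{m;i}^n.
def S(m, i, n):
    return {(g, mu, nu)
            for g, mu, nu in product(range(1, m + n), range(1, m + 1), range(1, n + 1))
            if (mu <= i - 1 and g <= mu)
            or (mu == i and g <= i + nu - 1)
            or (i + 1 <= mu and g <= mu + n - 1)}

# Both sides of the commutativity isomorphism, as 4-tuple supports
# (gamma, mu, nu, pi); the shared index is eliminated by an existential.
def commut_sides(m, n, p, i, j):
    A, B = S(m + p - 1, i, n), S(m, j, p)           # left-hand factors
    C, D = S(m + n - 1, j + n - 1, p), S(m, i, n)   # right-hand factors
    left = {(g, mu, nu, pi) for (g, d, nu) in A for (d2, mu, pi) in B if d == d2}
    right = {(g, mu, nu, pi) for (g, d, pi) in C for (d2, mu, nu) in D if d == d2}
    return left, right

# The common description displayed in the proof.
def commut_described(m, n, p, i, j):
    return {(g, mu, nu, pi)
            for g, mu, nu, pi in product(range(1, m + n + p - 1), range(1, m + 1),
                                         range(1, n + 1), range(1, p + 1))
            if (mu <= i - 1 and g <= mu)
            or (mu == i and g <= i + nu - 1)
            or (i + 1 <= mu <= j - 1 and g <= mu + n - 1)
            or (mu == j and g <= j + n + pi - 2)
            or (j + 1 <= mu and g <= mu + n + p - 2)}

for m, n, p, i, j in [(3, 2, 2, 1, 2), (4, 2, 3, 1, 3), (4, 3, 2, 2, 4), (5, 2, 2, 2, 4)]:
    left, right = commut_sides(m, n, p, i, j)
    assert left == right == commut_described(m, n, p, i, j)
```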
\begin{corollary}
The functors $\nabla$ have the following property: there is a
natural transformation
\begin{equation}
(\operatorname{Id}_m \times X)(\nabla_{m;j}^p \times \operatorname{Id}_n)\nabla_{m+p-1;i}^n \simeq (\nabla_{m;i}^n \times \operatorname{Id}_p) \nabla_{m+n-1;j+n-1}^p.
\end{equation} \end{corollary}
\subsection{Associativity axiom}
Let $m,n,p$ and $i,j$ be integers such that $1\leq i \leq m$ and $1 \leq j \leq n$.
\begin{proposition}
The modules $M$ have the following property: there is an
isomorphism
\begin{equation}
M_{m;i}^{n+p-1} \otimes_{\overrightarrow{\mathbb{A}}_{n+p-1}} M_{n;j}^{p} \simeq M_{m+n-1;j+i-1}^p\otimes_{\overrightarrow{\mathbb{A}}_{m+n-1}} M_{m;i}^n ,
\end{equation}
where both sides are $\overrightarrow{\mathbb{A}}_{m+n+p-2}^{op}\times\overrightarrow{\mathbb{A}}_m\times \overrightarrow{\mathbb{A}}_n
\times \overrightarrow{\mathbb{A}}_p$-modules. \end{proposition}
\begin{proof}
As in the previous section, one just has to compute the supports of these modules. One can check that they both give the
set of $(\mu,\nu,\pi,\gamma)$ in $[1,m]\times[1,n]\times[1,p]\times[1,m+n+p-2]$ such that
\begin{align*}
&( \mu \leq i-1 \text{ and } \gamma \leq \mu) \\
\text{ or } &(\mu =i \text{ and } \nu\leq j-1 \text{ and }\gamma \leq i+\nu-1)\\
\text{ or } &(\mu =i \text{ and }\nu=j \text{ and } \gamma \leq
i+j+\pi-2)\\
\text{ or } &(\mu =i \text{ and }j+1 \leq \nu \text{ and } \gamma \leq i+\nu+p-2)\\
\text{ or } &(i+1 \leq \mu \text{ and } \gamma \leq
\mu+n+p-2).
\end{align*} \end{proof}
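Here too the support computation is a finite check for fixed parameters; the following self-contained Python sketch (our own aid, not part of the proof) confirms that both composite supports match the displayed description on small cases:

```python
from itertools import product

# Support set S_{m;i}^n of the standard module M_{m;i}^n.
def S(m, i, n):
    return {(g, mu, nu)
            for g, mu, nu in product(range(1, m + n), range(1, m + 1), range(1, n + 1))
            if (mu <= i - 1 and g <= mu)
            or (mu == i and g <= i + nu - 1)
            or (i + 1 <= mu and g <= mu + n - 1)}

# Both sides of the associativity isomorphism, as supports (mu, nu, pi, gamma).
def assoc_sides(m, n, p, i, j):
    A = S(m, i, n + p - 1)           # triples (gamma, mu, delta)
    B = S(n, j, p)                   # triples (delta, nu, pi)
    C = S(m + n - 1, j + i - 1, p)   # triples (gamma, delta, pi)
    D = S(m, i, n)                   # triples (delta, mu, nu)
    left = {(mu, nu, pi, g) for (g, mu, d) in A for (d2, nu, pi) in B if d == d2}
    right = {(mu, nu, pi, g) for (g, d, pi) in C for (d2, mu, nu) in D if d == d2}
    return left, right

# The common description displayed in the proof.
def assoc_described(m, n, p, i, j):
    return {(mu, nu, pi, g)
            for mu, nu, pi, g in product(range(1, m + 1), range(1, n + 1),
                                         range(1, p + 1), range(1, m + n + p - 1))
            if (mu <= i - 1 and g <= mu)
            or (mu == i and nu <= j - 1 and g <= i + nu - 1)
            or (mu == i and nu == j and g <= i + j + pi - 2)
            or (mu == i and j + 1 <= nu and g <= i + nu + p - 2)
            or (i + 1 <= mu and g <= mu + n + p - 2)}

for m, n, p, i, j in [(3, 2, 2, 1, 1), (3, 3, 2, 2, 2), (4, 2, 3, 3, 1), (2, 4, 2, 2, 3)]:
    left, right = assoc_sides(m, n, p, i, j)
    assert left == right == assoc_described(m, n, p, i, j)
```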
\begin{corollary}
The functors $\nabla$ have the following property: there is a
natural transformation
\begin{equation}
(\operatorname{Id}_m \times \nabla_{n;j}^{p})\nabla_{m;i}^{n+p-1} \simeq (\nabla_{m;i}^n \times \operatorname{Id}_p) \nabla_{m+n-1;j+i-1}^p.
\end{equation} \end{corollary}
\section{Relations between $\nabla$ and $\tau$}
Let us consider the functors $\nabla_{m;i}^{n}$ as the derived tensor product with $M_{m;i}^{n}$. As the modules $M_{m;i}^{n}$ are projective in every direction, this is just the usual tensor product. Therefore, we obtain a cooperad structure on the collection of derived categories $(D\mathcal{A}_n)_{n\geq 1}$.
In this section, we define an anticyclic cooperad structure on the collection of derived categories $(D\mathcal{A}_n)_{n\geq 1}$. This means that some compatibility properties hold between the functors $\nabla$ and the Auslander-Reiten translations $\tau$. We will rather work with the Nakayama functors $\nu$, described as derived tensor product with the modules $N$.
There are two different axioms for the notion of anticyclic operad: let us call them the border axiom and the inner axiom.
\subsection{Border axiom}
\begin{figure}
\caption{The module $M_{4;4}^{6}$, its fiber-reversal in the
direction of length $6$ and the fiber-reversal of the result
in the direction of length $4$}
\label{exemple_bord1}
\end{figure}
\begin{proposition}
The fiber-reversal of $\mathsf{S}_{m;1}^{n}$ in the direction of length
$m+n-1$ is equal to the fiber-reversal in the direction of length
$n$ of the fiber-reversal in the direction of length $m$ of
$\mathsf{S}_{n;n}^{m}$. In terms of modules, this means that
\begin{equation}
N_{m+n-1} \otimes^L_{\overrightarrow{\mathbb{A}}_{m+n-1}} M_{m;1}^n \simeq (M_{n;n}^m
\otimes^L_{\overrightarrow{\mathbb{A}}_m} N_m) \otimes^L_{\overrightarrow{\mathbb{A}}_n} N_n.
\end{equation} \end{proposition}
\begin{proof}
Let us first compute the fiber-reversal of $\mathsf{S}_{m;1}^{n}$ in the
direction of $\gamma$ of length
$m+n-1$. One easily gets
\begin{align*}
&(\mu =1 \text{ and } \gamma \geq \nu)\\
\text{ or } & \gamma \geq \mu+n-1. \end{align*}
Let us then compute the fiber-reversal of $\mathsf{S}_{n;n}^{m}$ in the direction of $\mu$ of length $m$. One gets \begin{align*}
&( \mu = 1 \text{ and } \gamma \leq \nu) \\
\text{ or } &(\nu =n \text{ and } \gamma \geq n+\mu-1). \end{align*}
Then one can compute the fiber-reversal of this set in the direction of $\nu$ and check the expected result. \end{proof}
\begin{figure}
\caption{The module $M_{6;1}^{4}$, its top-bottom fiber-reversal}
\label{exemple_bord2}
\end{figure}
\begin{corollary}
The functors $\nabla$ satisfy
\begin{equation}
\label{anticyclic_reversal}
\nabla_{m;1}^{n} \tau_{m+n-1} \simeq S X (\tau_n \times \tau_m) \nabla_{n;n}^{m}.
\end{equation} \end{corollary}
\subsection{Inner axiom}
Let us assume here that $2 \leq i \leq m$.
\begin{figure}
\caption{The module $M_{8;4}^{7}$, its top-bottom fiber-reversal and its
fiber-reversal in the direction of length $8$}
\label{exemple_interieur}
\end{figure}
\begin{proposition}
The fiber-reversal of $\mathsf{S}_{m;i}^{n}$ in the direction of length
$m+n-1$ coincides with the fiber-reversal of $\mathsf{S}_{m;i-1}^{n}$ in
the direction of length $m$. In terms of modules, this means
\begin{equation}
N_{m+n-1} \otimes^L_{\overrightarrow{\mathbb{A}}_{m+n-1}} M_{m;i}^n \simeq M_{m;i-1}^n
\otimes^L_{\overrightarrow{\mathbb{A}}_m} N_m.
\end{equation} \end{proposition}
\begin{proof}
On the one hand, for the fiber-reversal of $\mathsf{S}_{m;i}^{n}$ in the
direction $\gamma$ of length $m+n-1$, one easily gets
\begin{align*}
&( \mu \leq i-1 \text{ and } \gamma \geq \mu) \\
\text{ or } &(\mu =i \text{ and } \gamma \geq i+\nu-1)\\
\text{ or } &(i+1 \leq \mu \text{ and } \gamma \geq \mu+n-1).
\end{align*}
On the other hand, using the alternative description
(\ref{description_alt}) of $\mathsf{S}_{m;i-1}^n$, the fiber-reversal of
$\mathsf{S}_{m;i-1}^{n}$ in the direction $\mu$ of length $m$ is
\begin{align*}
&( \gamma \leq i-1 \text{ and } \gamma \geq \mu) \\
\text{ or } &(i \leq \gamma\leq i+n-2 \text{ and } \gamma \leq
i+\nu-2 \text{ and } i-1\geq \mu)\\
\text{ or } &(i \leq \gamma\leq i+n-2 \text{ and } i-1+\nu \leq
\gamma \text{ and } i\geq \mu)\\
\text{ or } &(i+n-1 \leq \gamma \text{ and } \gamma-n+1 \geq \mu).
\end{align*}
It is then a routine matter to check that they are indeed equal. \end{proof}
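Both fiber-reversals are finite computations, so the routine check can also be delegated to a machine on small cases. A Python sketch (our own aid; the injective and projective reversals are encoded as successor/predecessor membership tests over the whole box):

```python
from itertools import product

# Support set S_{m;i}^n of M_{m;i}^n.
def S(m, i, n):
    return {(g, mu, nu)
            for g, mu, nu in product(range(1, m + n), range(1, m + 1), range(1, n + 1))
            if (mu <= i - 1 and g <= mu)
            or (mu == i and g <= i + nu - 1)
            or (i + 1 <= mu and g <= mu + n - 1)}

# Fiber-reversal in the direction gamma of length m+n-1 (injective case):
# keep the points whose successor in that direction is not in the support.
def rev_gamma(supp, m, n):
    return {(g, mu, nu)
            for g, mu, nu in product(range(1, m + n), range(1, m + 1), range(1, n + 1))
            if (g + 1, mu, nu) not in supp}

# Fiber-reversal in the direction mu of length m (projective case):
# keep the points whose predecessor in that direction is not in the support.
def rev_mu(supp, m, n):
    return {(g, mu, nu)
            for g, mu, nu in product(range(1, m + n), range(1, m + 1), range(1, n + 1))
            if (g, mu - 1, nu) not in supp}

# Inner axiom on supports: the two reversals agree for 2 <= i <= m.
assert all(rev_gamma(S(m, i, n), m, n) == rev_mu(S(m, i - 1, n), m, n)
           for m in range(2, 6) for n in range(1, 5) for i in range(2, m + 1))
```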
\begin{corollary}
The functors $\nabla$ satisfy
\begin{equation}
\label{anticyclic_shift}
\nabla_{m;i}^{n} \tau_{m+n-1} \simeq (\tau_m \times \operatorname{Id}) \nabla_{m;i-1}^{n}, \end{equation} for $2 \leq i \leq m$. \end{corollary}
Acknowledgements: Thanks to Bernhard Keller for his advice. This work was partially supported by a \emph{programme blanc} of the ANR.
\begin{center} Fr{\'e}d{\'e}ric Chapoton\\ Universit{\'e} de Lyon ; Universit{\'e} Lyon 1 ;\\ CNRS, UMR5208, Institut Camille Jordan,\\ 43 boulevard du 11 novembre 1918,\\ F-69622 Villeurbanne-Cedex, France \end{center}
\end{document}
\begin{document}
\title*{Double-sided Taylor's approximations and their applications in theory of trigonometric inequalities} \titlerunning{Double-sided Taylor's approximations and their applications} \author{Branko Male\v sevi\' c${}^{\mbox{\scriptsize 1)}}$, Tatjana Lutovac${}^{\mbox{\scriptsize 1)}}$, Marija Ra\v sajski${}^{\mbox{\scriptsize 1)}}$, Bojan Banjac${}^{\mbox{\scriptsize 2)}}$} \authorrunning{B.~Male\v sevi\' c, T.~Lutovac, M.~Ra\v sajski, B. Banjac} \institute{Branko Male\v sevi\' c, Tatjana Lutovac, Marija Ra\v sajski and Bojan Banjac \at ${}^{1)}$School of Electrical Engineering, University of Belgrade; ${}^{2)}$Faculty of Technical Sciences, University of Novi Sad\\ Corresponding author \email{branko.malesevic@etf.bg.ac.rs}}
\maketitle
\abstract{In this paper the double-sided {\sc Taylor}'s approximations are used to obtain generalizations and improvements of some trigonometric inequalities.}
\section{Introduction}
Many mathematical and engineering problems cannot be solved without {\sc Taylor}'s approximations \cite{D_S_Mitrinovic_1970}, \cite{Milovanovic_Rassias_2014}, \cite{M_J_Cloud_B_C_Drachman_L_P_Lebedev_2014}. Particularly, their application in proving various analytic inequalities is of great importance \cite{C_Mortici_2011}, \cite{B_Malesevic_M_Makragic_JMI_2016}, \cite{Milica_Makragic_JMI_2017}, \cite{T_Lutovac_B_Malesevic_C_Mortici_JIA_2017}. Recently, numerous inequalities have been generalized and improved by the use of the so-called double-sided {\sc Taylor}'s approximations \cite{Milica_Makragic_JMI_2017}, \cite{H_Alzer_M_K_Kwong_2017}, \cite{B_Malesevic_T_Lutovac_M_Rasajski_C_Mortici_Adv._Difference_Equ._2018}-\cite{M_Nenenzic_L_Zhu_AADM_2018} and \cite{B_Malesevic_M_Rasajski_T_Lutovac_2019}. Many topics regarding these approximations are presented in \cite{B_Malesevic_M_Rasajski_T_Lutovac_2019}. Some of the basic concepts and results about the double-sided {\sc Taylor}'s approximations presented in \cite{B_Malesevic_M_Rasajski_T_Lutovac_2019}, which will be used in this paper, are given in the next section.
In this paper, using the double-sided {\sc Taylor}'s approximations, we obtain generalizations and improvements of some trigonometric inequalities proved by {\sc J. Sandor} \cite{J_Sandor_2016}.
\begin{statement} $(\mbox{\rm \cite{J_Sandor_2016}},~\mbox{\rm Theorem}~1)$ \label{J_Sandor_Th_01} \begin{equation} \label{Eq_01} \displaystyle\frac{3}{8} < \displaystyle\frac{1-\mbox{\small $\displaystyle\frac{\cos x}{\cos \frac{x}{2}}$}}{x^2} < \displaystyle\frac{4}{\pi^2}, \end{equation} for any $x \!\in\! (0,\pi/2)$. \end{statement}
Note that {\sc J. D'Aurizio} \cite{J_D_Aurizio_2014} used infinite products as well as some inequalities connected with the {\sc Riemann} zeta function $\zeta$ to prove the right-hand side of inequality (\ref{Eq_01}).
\begin{statement}$(\mbox{\rm \cite{J_Sandor_2016}},~\mbox{\rm Theorem}~2)$ \label{J_Sandor_Th_02} \begin{equation} \label{Eq_02} \displaystyle\frac{4}{\pi^2}\left(2 - \sqrt{2}\right) < \displaystyle\frac{2-\mbox{\small $\displaystyle\frac{\sin x}{\sin \frac{x}{2}}$}}{x^2} < \displaystyle\frac{1}{4}, \end{equation} for any $x \!\in\! (0,\pi/2)$. \end{statement}
Inequalities (\ref{Eq_01}) and (\ref{Eq_02}) are reducible to mixed trigonometric-polynomial inequalities and can be proved by methods and algorithms that have been developed and shown in papers \cite{B_Malesevic_M_Makragic_JMI_2016}, \cite{T_Lutovac_B_Malesevic_C_Mortici_JIA_2017} and dissertation \cite{B_D_Banjac_2019}.
In this paper, we propose and prove generalizations of inequality (\ref{Eq_01}) by determining the sequence of the polynomial approximations. Also, an improvement of inequality (\ref{Eq_02}) is given for some intervals. The proposed generalizations and improvements are based on the double-sided {\sc Ta\-ylo\-r}'s approximations and the corresponding results presented in \cite{B_Malesevic_M_Rasajski_T_Lutovac_2019}.
\section{An overview of the results related to double-sided \textsc{Ta\-ylo\-r}'s \break approximations}
Let us consider a real function $f : (a, b) \longrightarrow \mathbb{R}$, such that there exist finite limits
$f^{(k)}(a+)=\!\lim\limits_{x \rightarrow a+}{f^{(k)}(x)}$, for $k=0,1,\ldots,n$. \\ {\sc Taylor}'s polynomial $$ T_n^{f,\,a+}(x) =\displaystyle\sum_{k=0}^{n}{\displaystyle\frac{f^{(k)}(a+)}{k!}(x-a)^k}, \; n \!\in\! \mathbb{N}_0, $$ and the polynomial $$ \mbox{$\mathbbmsl{T}$}_n^{f;\,a+,\,b-}(x) = \left\{ \begin{array}{ccl} T_{n-1}^{f,\,a+}(x) + \displaystyle\frac{1}{(b - a)^n}R_{n}^{f,\,a+}(b-)(x-a)^n &,& n \geq 1 \\[2.5 ex] f(b-) &,& n = 0, \end{array} \right. $$ are called the {\em first {\sc Taylor}'s approximation for the function $f$ in the right neighborhood~of~$a$}, and the {\em second {\sc Taylor}'s approximation for the function $f$ in the right neighborhood of $a$}, respectively.
Also, the following functions: $$ R_{n}^{f,\,a+}(x) = f(x) - T_{n-1}^{f,\,a+}(x), ~~ n \!\in\! \mathbb{N} $$ and $$ {\mbox{$\mathbbmsl{R}$}}_{n}^{f;\,a+,\,b-}(x) = f(x) - \mbox{$\mathbbmsl{T}$}_{n-1}^{f;\,a+,\,b-}(x), ~~ n \!\in\! \mathbb{N} $$ are called the {\em remainder of the first {\sc Taylor}'s approximation in the right neighborhood of $a$}, and the {\em remainder of the second {\sc Taylor}'s approximation in the right neighborhood of $a$}, respectively.
The following Theorem, which has been proved in \cite{S_Wu_L_Debnath_2009} and whose variants are considered in \cite{S_Wu_HM_Srivastva_2008a}, \cite{S_Wu_L_Debnath_2008} and \cite{S_Wu_HM_Srivastva_2008b}, provides an important result regarding {\sc Taylor}'s approximations.
\begin{theorem} \label{Theorem_1} $(\mbox{\rm \cite{S_Wu_L_Debnath_2009}},~\mbox{\rm Theorem}~2)$ Suppose that $f(x)$ is a real function on $(a,b)$, and that $n$ is a positive integer such that $f ^{(k)}(a+)$, for $k \!\in\! \{0,1,2, \ldots ,n\}$, exist.
\noindent If $f^{(n)}(x)$ is increasing on $(a,b)$, then for all $x \!\in\! (a,b)$ the following inequality holds$\,:$ \begin{equation} \label{(2)}
T_n^{f,\,a+}(x) < f(x) < \mbox{$\mathbbmsl{T}$}_n^{f;\,a+,\,b-}(x).
\end{equation} Furthermore, if $f^{(n)}(x)$ is decreasing on $(a,b)$, then the reversed inequality~of~\mbox{\rm (\ref{(2)})} holds. \end{theorem}
The above theorem is called {\em Theorem on double-sided \textsc{Taylor}'s approximations} in \cite{B_Malesevic_M_Rasajski_T_Lutovac_2019}, i.e. {\em Theorem WD} in \cite{B_Malesevic_T_Lutovac_M_Rasajski_C_Mortici_Adv._Difference_Equ._2018}-\cite{M_Nenenzic_L_Zhu_AADM_2018}.
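As a toy illustration of this theorem (our own example, not taken from the cited papers): for $f(x)=\sin x$ on $(0,\pi/2)$ and $n=2$, $f''(x)=-\sin x$ is decreasing, so the reversed chain $\mbox{$\mathbbmsl{T}$}_2^{\sin;\,0+,\,\pi/2-}(x) < \sin x < T_2^{\sin,\,0+}(x) = x$ holds. This can be checked numerically in Python:

```python
import math

a, b = 0.0, math.pi / 2

# First Taylor approximation of sin at a = 0+ of order n = 2: T_2(x) = x.
def T2(x):
    return x

# Second Taylor approximation: T_1(x) + R_2(b-) * ((x - a)/(b - a))**2,
# where R_2(b-) = sin(b) - T_1(b) = 1 - pi/2.
def TT2(x):
    return x + (math.sin(b) - b) * (x / b) ** 2

# f'' = -sin is decreasing on (0, pi/2), so the theorem gives the
# reversed chain  TT2(x) < sin(x) < T2(x)  on the open interval.
for k in range(1, 200):
    x = a + (b - a) * k / 200
    assert TT2(x) < math.sin(x) < T2(x)
```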
The proof of the following proposition is given in \cite{B_Malesevic_M_Rasajski_T_Lutovac_2019}.
\begin{proposition} \label{Proposition_1} $(\mbox{\rm \cite{B_Malesevic_M_Rasajski_T_Lutovac_2019}},~\mbox{\rm Proposition}~1)$ Consider a real function $f : (a, b) \longrightarrow \mathbb{R}$ such that there exist its first and second {\sc Taylor}'s approximations, for some $n \!\in\! \mathbb{N}_0$. Then, $$ \label{sgn} {\mathop{\rm sgn}} {\Big (} \mbox{$\mathbbmsl{T}$}_{n}^{f,\, a+, \, b-}(x)\,-\,\mbox{$\mathbbmsl{T}$}_{n+1}^{f,\, a+, \, b-}(x) {\Big )} = {\mathop{\rm sgn}} {\Big (} {f(b-)\,-\,T_{n}^{f,\,a+}(b)} {\Big )}, $$ for all $ x\in(a,b)$. \end{proposition}
From the above proposition, as shown in \cite{B_Malesevic_M_Rasajski_T_Lutovac_2019}, the following theorem directly follows:
\begin{theorem} \label{Theorem_3} $(\mbox{\rm \cite{B_Malesevic_M_Rasajski_T_Lutovac_2019}},~\mbox{\rm Theorem}~4)$ Consider a real analytic function $f : (a, b) \longrightarrow \mathbb{R}$: $$ \label{f(x)} f(x) = \sum_{k=0}^{\infty}{c_k(x-a)^k}, $$ where $c_k \!\in\! \mathbb{R}$ and $c_k \geq 0$ for all $k \!\in\! \mathbb{N}_0$. Then,
$$
\begin{array}{c}
T_0^{f,\,a+}(x) \leq \ldots\leq T_n^{f,\,a+}(x) \leq T_{n+1}^{f,\,a+}(x) \leq \ldots \\[1.0 ex]
\ldots \leq f(x) \leq \ldots \\[1.0 ex]
\ldots \leq \mbox{$\mathbbmsl{T}$}_{n+1}^{f;\,a+,\,b-}(x)
\leq \mbox{$\mathbbmsl{T}$}_n^{f;\,a+,\,b-}(x)
\leq \ldots \leq \mbox{$\mathbbmsl{T}$}_0^{f;\,a+,\,b-}(x),
\end{array}
$$ for all $x \!\in\! (a,b)$. \end{theorem}
\section{Main results}
$\,$ \vspace*{-3.5 mm}
\subsection{Generalization of Statement 1}
Consider the function: $$ f(x) = \left\{ \begin{array}{ccc} \mbox{\small $\displaystyle\frac{3}{8}$} &,& x = 0, \\[3.0 ex] \displaystyle\frac{1-\mbox{\small $\displaystyle\frac{\cos x}{\cos \frac{x}{2}}$}}{x^2} &,& x \!\in\! (0,\pi). \end{array} \right. $$
First, we prove that $ f $ is a real analytic function on $ [0,\pi)$.
Based on the elementary equality$:$ $$ 1-\displaystyle\frac{\cos x}{\cos \frac{x}{2}} = 1+\sec \frac{x}{2} - 2 \cos \frac{x}{2}, $$ and well known power series expansions \cite{I_Gradshteyn_I_Ryzhik_2014} (formula 1.411):
$$ \begin{array}{ccc} \cos t = \displaystyle\sum_{k=0}^{\infty}{\mbox{\small $\displaystyle\frac{(-1)^k}{(2k)!}$} \,t^{2k}} & \qquad & t \!\in\! \mathbb{R}, \\[5.0 ex]
\sec t = \displaystyle\sum_{k=0}^{\infty}{\mbox{\small $\displaystyle\frac{|E_{2k}|}{(2k)!}$}\,t^{2k}} & \qquad & t \!\in\! \left(-\mbox{\small $\displaystyle\frac{\pi}{2}$},\mbox{\small $\displaystyle\frac{\pi}{2}$}\right); \end{array} $$
\noindent where $E_k$ are {\sc Euler}'s numbers \cite{I_Gradshteyn_I_Ryzhik_2014}, for $t=\mbox{\small $\displaystyle\frac{x}{2}$} \!\in\! \left[0,\mbox{\small $\displaystyle\frac{\pi}{2}$}\right)$, i.e. for $x \!\in\! [0,\pi)$, we have$:$ $$
f(x) = \displaystyle\sum_{k=1}^{\infty}{\mbox{\small $\displaystyle\frac{|E_{2k}| - 2(-1)^k}{2^{2k}(2k)!}$}x^{2k-2}} $$
\vspace*{-2.0 mm}
\noindent i.e.
\vspace*{-1.5 mm}
$$ f(x) = \mbox{\small $\displaystyle\frac{3}{8}$} + \mbox{\small $\displaystyle\frac{1}{128}$}\,x^2 + \mbox{\small $\displaystyle\frac{7}{5120}$}\,x^4 + \mbox{\small $\displaystyle\frac{461}{3440640}$}\,x^6 + \mbox{\small $\displaystyle\frac{16841}{1238630400}$}\,x^8 + \ldots $$
\noindent where the power series converges for $x \!\in\! [0,\pi)$.
Further, based on well-known elementary properties of
{\sc Euler}'s numbers $E_k$, we have$:$ $$
c_{2k-2} = \mbox{\small $\displaystyle\displaystyle\frac{|E_{2k}| - 2(-1)^k}{2^{2k}(2k)!}$} > 0 \quad\mbox{and}\quad c_{2k-1} = 0, $$ for $k = 1, 2, ...\,$.
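The displayed coefficients can be verified exactly with rational arithmetic. A Python sketch (our own check; the Euler numbers are computed from the classical recurrence $\sum_{j=0}^{n}\binom{2n}{2j}E_{2j}=0$ for $n\geq 1$, $E_0=1$):

```python
from fractions import Fraction
from math import comb, factorial

# Euler numbers E_{2k} via sum_{j=0}^{n} binom(2n,2j) E_{2j} = 0 (n >= 1).
def euler_numbers(kmax):
    E = [1]                      # E[k] holds E_{2k}
    for n in range(1, kmax + 1):
        E.append(-sum(comb(2 * n, 2 * j) * E[j] for j in range(n)))
    return E

E = euler_numbers(5)
assert E == [1, -1, 5, -61, 1385, -50521]

# Coefficients c_{2k-2} = (|E_{2k}| - 2(-1)^k) / (2^{2k}(2k)!) of f.
c = [Fraction(abs(E[k]) - 2 * (-1) ** k, 2 ** (2 * k) * factorial(2 * k))
     for k in range(1, 6)]
assert c == [Fraction(3, 8), Fraction(1, 128), Fraction(7, 5120),
             Fraction(461, 3440640), Fraction(16841, 1238630400)]
assert all(x > 0 for x in c)     # hence all c_{2k-2} are positive
```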
Finally, from Theorem~\ref{Theorem_3} the following result directly follows.
\break
\begin{theorem} \label{Generalization_Statement_1} For the function $$ f(x) = \left\{ \begin{array}{ccc} \mbox{\small $\displaystyle\frac{3}{8}$} &,& x = 0, \\[2.0 ex] \displaystyle\frac{1-\mbox{\small $\displaystyle\frac{\cos x}{\cos \frac{x}{2}}$}}{x^2} &,& x \!\in\! (0,\pi) \end{array} \right. $$
and any $c\in (0, \pi)$ the following inequalities hold true: \begin{equation} \label{generalizacija_Shandor_1} \begin{array}{c} \mbox{\small $\displaystyle\frac{3}{8}$} = T_0^{f,\,0+}(x) \leq T_2^{f,\,0+}(x)\leq \ldots\leq T_{2n}^{f,\,0+}(x) \leq \ldots \\[2.0 ex]
\ldots \leq f(x) \leq \ldots \\[1.0 ex]
\leq \mbox{$\mathbbmsl{T}$}_{2m}^{f;\,0+,\,c-}(x)
\leq \ldots \leq \mbox{$\mathbbmsl{T}$}_2^{f;\,0+,\,c-}(x) \leq \mbox{$\mathbbmsl{T}$}_0^{f;\,0+,\,c-}(x) = \left(1-\mbox{\small $\displaystyle\frac{\cos c}{\cos \frac{c}{2}}$}\right)\mbox{\Large $/$}{c^2} \end{array} \end{equation} for every $x \!\in\! \left(0, c\right)$, where $m, n \!\in\! \mathbb{N}_0$.
\end{theorem}
\noindent Note that inequalities from Statement 1 can be directly obtained from (\ref{generalizacija_Shandor_1}), for~$c\!=\!\mbox{\small $\displaystyle\frac{\pi}{2}$}$
$$ \mbox{\small $\displaystyle\frac{3}{8}$} < \displaystyle\frac{1-\mbox{\small $\displaystyle\frac{\cos x}{\cos \frac{x}{2}}$}}{x^2} < \mbox{\small $\displaystyle\frac{4}{\pi^2}$}. $$
Also, Theorem~\ref{Generalization_Statement_1} gives a generalization and a sequence of improvements of results from Statement 1. For example, for $c=\pi/2$ i.e. for $x \!\in\! \left(0, \mbox{\small $\displaystyle\frac{\pi}{2}$}\right)$ we have: $$ \mbox{\small $\displaystyle\frac{3}{8}$} \leq T_{2}^{f,\,0+}\!(x) \!=\! \mbox{\small $\displaystyle\frac{3}{8}$} \!+\! \mbox{\small $\displaystyle\frac{1}{128}$}x^2 \leq f(x) \leq \mbox{$\mathbbmsl{T}$}_{2}^{f;\,0+,\,\pi/2-}\!(x) \!=\! \mbox{\small $\displaystyle\frac{3}{8}$} \!+\! \left(\mbox{\small $\displaystyle\frac{16}{\pi^4}$} \!-\! \mbox{\small $\displaystyle\frac{3}{2 \pi^2}$}\right)x^2 \leq \mbox{\small $\displaystyle\frac{4}{\pi^2}$}. $$ Using standard numerical methods it is easy to verify: $$
\max\limits_{x \in [0,\pi/2]}{|R_3^{f,\,0+}(x)|} = |R_3^{f,\,0+}(\pi/2)| = 0.01100 ... $$ and $$
\max\limits_{x \in [0,\pi/2]}|\mbox{$\mathbbmsl{R}$}_{3}^{f;\,0+,\,\pi/2-}(x)| = |\mbox{$\mathbbmsl{R}$}_{3}^{f;\,0+,\,\pi/2-}(1.14909...)| = 0.00315 ... \, . $$
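These numerical values can be reproduced as follows (our own floating-point sketch; the grid-based maxima are only approximate):

```python
import math

# f(x) = (1 - cos x / cos(x/2)) / x^2 on (0, pi/2), its Taylor polynomial
# T_2(x) = 3/8 + x^2/128, and the second approximation
# TT_2(x) = 3/8 + (16/pi^4 - 3/(2 pi^2)) x^2.
def f(x):
    return (1 - math.cos(x) / math.cos(x / 2)) / x ** 2

def R3(x):            # remainder f - T_2
    return f(x) - (3 / 8 + x ** 2 / 128)

def RR3(x):           # remainder f - TT_2
    return f(x) - (3 / 8 + (16 / math.pi ** 4 - 3 / (2 * math.pi ** 2)) * x ** 2)

assert abs(abs(R3(math.pi / 2)) - 0.01100) < 1e-4
assert abs(abs(RR3(1.14909)) - 0.00315) < 1e-4

# The stated maxima dominate a sample grid of the interval.
grid = [math.pi / 2 * k / 1000 for k in range(1, 1000)]
assert max(abs(R3(x)) for x in grid) <= abs(R3(math.pi / 2)) + 1e-9
assert max(abs(RR3(x)) for x in grid) <= abs(RR3(1.14909)) + 1e-4
```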
\vspace*{-2.0 mm}
\subsection{An improvement of Statement 2}
Let $\beta \!\in\! (0,\pi)$ be a fixed real number. Consider the function$:$ $$ g(x) = \left\{ \begin{array}{ccc} \mbox{\small $\displaystyle\frac{1}{4}$} &,& x = 0, \\[2.0 ex] \displaystyle\frac{2-\mbox{\small $\displaystyle\frac{\sin x}{\sin \frac{x}{2}}$}}{x^2} &,& x \!\in\! (0,\beta]. \end{array} \right. $$
\break
We prove that $g$ is a real analytic function on $[0,\beta]$.
\noindent Notice that \begin{equation} \label{g1-g2} g(x) = g_1(x) - g_2(x) \end{equation} for $x \!\in\! [0,\beta]$, where $$ g_1(x) = \left\{ \begin{array}{ccc} \mbox{\small $\displaystyle\frac{1}{4}$} &,& x = 0, \\[2.25 ex] \displaystyle\frac{\cosh\!\frac{x}{2} - \cos\!\frac{x}{2}}{x^2} &,& x \!\in\! (0,\beta] \end{array} \right. $$ and $$ g_2(x) = \left\{ \begin{array}{ccc} 0 &,& x = 0, \\[2.25 ex] \displaystyle\frac{ \cosh\!\frac{x}{2} + \cos\!\frac{x}{2} - 2}{x^2} &,& x \!\in\! (0,\beta]. \end{array} \right. $$
Since the functions $g_1$ and $g_2$ are real analytic functions on $[0,\beta]$, with the following power series expansions: $$ g_1(x) = \displaystyle\sum_{k=0}^{\infty}{\mbox{\small $\displaystyle\frac{1}{2^{4k+1}(4k+2)!}$}x^{4k}} $$ and $$ g_2(x) = \displaystyle\sum_{k=0}^{\infty}{\mbox{\small $\displaystyle\frac{1}{2^{4k+3}(4k+4)!}$}x^{4k+2}}, $$ the function $g$ must also be a real analytic function on $[0,\beta]$.
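A numerical cross-check of these expansions against the closed forms of $g_1$ and $g_2$ (our own sketch; note that the denominators $2^{4k+1}(4k+2)!$ and $2^{4k+3}(4k+4)!$ are forced by $g_1(0)=\frac14$ and by $T_2^{g_2,\,0+}(x)=\frac{x^2}{192}$):

```python
import math

# Partial sums of the expansions of g_1 and g_2.
def g1_series(x, K=10):
    return sum(x ** (4 * k) / (2 ** (4 * k + 1) * math.factorial(4 * k + 2))
               for k in range(K))

def g2_series(x, K=10):
    return sum(x ** (4 * k + 2) / (2 ** (4 * k + 3) * math.factorial(4 * k + 4))
               for k in range(K))

# Compare with (cosh(x/2) - cos(x/2))/x^2 and (cosh(x/2) + cos(x/2) - 2)/x^2.
for x in [0.1, 0.5, 1.0, 2.0, 3.0]:
    assert abs(g1_series(x) - (math.cosh(x / 2) - math.cos(x / 2)) / x ** 2) < 1e-12
    assert abs(g2_series(x) - (math.cosh(x / 2) + math.cos(x / 2) - 2) / x ** 2) < 1e-12
```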
Also, from Theorem~\ref{Theorem_3} the following results directly follow. \begin{theorem} \label{g1} For all $c\in \left(0, \pi\right)$ the following inequalities hold true$:$ $$ \begin{array}{c} \mbox{\small $\displaystyle\frac{1}{4}$} = T_0^{g_1,\,0+}(x) \leq \ldots\leq T_{4n}^{g_1,\,0+}(x) \leq T_{4n+4}^{g_1,\,0+}(x) \leq \ldots \\[1.50 ex]
\ldots \leq g_1(x) \leq \ldots \\[1.25 ex]
\ldots \leq \mbox{$\mathbbmsl{T}$}_{4m+4}^{g_1;\,0+,\,c-}(x)
\leq \mbox{$\mathbbmsl{T}$}_{4m}^{g_1;\,0+,\,c-}(x)
\leq \ldots \leq \mbox{$\mathbbmsl{T}$}_0^{g_1;\,0+,\,c-}(x) = g_1(c). \end{array} $$ for all $x \!\in\! \left(0, c\right)$, where $m, n \!\in\! \mathbb{N}_0$. \end{theorem} \begin{theorem} \label{g2} For all $c\in \left(0, \pi\right)$ the following inequalities hold true$:$ $$ \begin{array}{c} \mbox{\small $\displaystyle\frac{1}{192}$}x^2 =T_2^{g_2,\,0+}(x) \leq \ldots \leq T_{4n+2}^{g_2,\,0+}(x) \leq T_{4n+6}^{g_2,\,0+}(x) \leq \ldots \\[1.50 ex] \ldots \leq g_2(x) \leq \ldots \\[1.00 ex] \ldots \leq \mbox{$\mathbbmsl{T}$}_{4m+6}^{g_2;\,0+,\,c-}(x) \leq \mbox{$\mathbbmsl{T}$}_{4m+2}^{g_2;\,0+,\,c-}(x) \leq \ldots \leq \mbox{$\mathbbmsl{T}$}_2^{g_2;\,0+,\,c-}(x) = \mbox{\small $\displaystyle\frac{g_2(c)}{c^2}x^2$} \end{array} $$ for all $x \!\in\! \left(0, c \right)$, where $m, n \!\in\! \mathbb{N}_0$. \end{theorem}
Thus, from (\ref{g1-g2}), Theorem~\ref{g1} and Theorem~\ref{g2}, for $c=\mbox{\small $\displaystyle\frac{\pi}{2}$}$, an improvement of the inequalities from Statement 2 is obtained, as shown below.
First, for all $x \!\in\! \left(0, \pi/2\right)$ the following inequalities hold true$:$
$$
\mbox{\small $\displaystyle\frac{1}{4}$} - \mbox{\small $\displaystyle\frac{4}{\pi^2}$}\,g_2\!\!\left(\mbox{\small $\displaystyle\frac{\pi}{2}$}\right)x^2 \leq g(x) \leq g_1\!\!\left(\mbox{\small $\displaystyle\frac{\pi}{2}$}\right) - \mbox{\small $\displaystyle\frac{1}{192}$} \, x^2 $$ i.e. $$ \mbox{\small $\displaystyle\frac{1}{4}$} - \mbox{\small $\displaystyle\frac{16}{\pi^4}$}\displaystyle\left( \cosh\!\mbox{\small $\displaystyle\frac{\pi}{4}$} + \mbox{\small $\displaystyle\frac{\sqrt{2}}{2}$} - 2\right)x^2 \leq g(x) \leq \displaystyle \mbox{\small $\displaystyle\frac{4}{\pi^2}$}\!\left( \cosh\!\mbox{\small $\displaystyle\frac{\pi}{4}$} - \mbox{\small $\displaystyle\frac{\sqrt{2}}{2}$}\right) - \mbox{\small $\displaystyle\frac{1}{192}$} \, x^2. $$
It is easy to check that $$ \mbox{\small $\displaystyle\frac{4}{\pi^2}$}\!\left( \cosh\!\mbox{\small $\displaystyle\frac{\pi}{4}$} - \mbox{\small $\displaystyle\frac{\sqrt{2}}{2}$}\right) - \mbox{\small $\displaystyle\frac{1}{192}$} \, x^2 \leq \mbox{\small $\displaystyle\frac{1}{4}$} $$ for all $x \!\in\! [\delta_2, \pi/2]$, where $\delta_2 = \mbox{\small $\displaystyle\frac{4 \sqrt{3} e^{-\frac{\pi}{8}}}{\pi}$} \sqrt{8 + 8 e^{\frac{\pi}{2}} - (\pi^2+8\sqrt{2})e^{\frac{\pi}{4}}} = 0.22525 ...\,$.
Also, $$ \mbox{\small $\displaystyle\frac{4}{\pi^2}$}(2-\sqrt{2}) \leq \mbox{\small $\displaystyle\frac{1}{4}$} - \mbox{\small $\displaystyle\frac{16}{\pi^4}$}\!\left( \cosh\!\mbox{\small $\displaystyle\frac{\pi}{4}$} + \mbox{\small $\displaystyle\frac{\sqrt{2}}{2}$} - 2\right)x^2 $$ for all $x \!\in\! [0,\delta_1]$, where $\delta_1 = \mbox{\small $\displaystyle\frac{\sqrt{2} \pi e^{\frac{\pi}{8}}\sqrt{\pi^2+16\sqrt{2}-32}}{8\sqrt{(\sqrt{2}-4)e^{\frac{\pi}{4}} +e^{\frac{\pi}{2}}+1}}$} =1.55456 ... \,$.
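These bounds and constants admit a quick numerical sanity check (ours, purely illustrative and not part of the proof; all function names are our own). From (\ref{g1-g2}), $g(x) = g_1(x)-g_2(x) = 2\,(1-\cos\frac{x}{2})/x^2$, so the two quadratic bounds above can be evaluated on a grid in $(0,\pi/2)$:

```python
import math

def g(x):
    # g(x) = g1(x) - g2(x) = 2*(1 - cos(x/2))/x^2, with g(0) = 1/4
    return 0.25 if x == 0 else 2.0 * (1.0 - math.cos(x / 2)) / x**2

def lower_bound(x):
    # 1/4 - (16/pi^4)*(cosh(pi/4) + sqrt(2)/2 - 2)*x^2
    c = (16 / math.pi**4) * (math.cosh(math.pi / 4) + math.sqrt(2) / 2 - 2)
    return 0.25 - c * x**2

def upper_bound(x):
    # (4/pi^2)*(cosh(pi/4) - sqrt(2)/2) - x^2/192
    return (4 / math.pi**2) * (math.cosh(math.pi / 4) - math.sqrt(2) / 2) - x**2 / 192

# The constants delta_2 and delta_1 quoted above.
delta2 = (4 * math.sqrt(3) * math.exp(-math.pi / 8) / math.pi) * math.sqrt(
    8 + 8 * math.exp(math.pi / 2)
    - (math.pi**2 + 8 * math.sqrt(2)) * math.exp(math.pi / 4))
delta1 = (math.sqrt(2) * math.pi * math.exp(math.pi / 8)
          * math.sqrt(math.pi**2 + 16 * math.sqrt(2) - 32)
          / (8 * math.sqrt((math.sqrt(2) - 4) * math.exp(math.pi / 4)
                           + math.exp(math.pi / 2) + 1)))
```

Sampling confirms the enclosure $\mathrm{lower}(x) \le g(x) \le \mathrm{upper}(x)$ on the grid and the stated decimal values of $\delta_1$ and $\delta_2$.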
\section{Conclusion}
In this paper, we showed a way to prove some trigonometric inequalities using the double-sided {\sc Taylor}'s approximations. The presented approach enabled generalizations of inequalities (\ref{Eq_01}), i.e., it produced sequences of polynomial approximations of the given trigonometric function $f$.
Note that Theorem~\ref{Theorem_3} cannot be applied directly to inequality (\ref{Eq_02}) because the function $g$ has an alternating series expansion. We overcame this obstacle by representing this function as a linear combination of two functions whose power series expansions have nonnegative coefficients.
Our approach makes a good basis for the systematic proving of trigonometric inequalities. Developing general, automation-oriented methods for proving trigonometric inequalities is an area of our continuing interest \cite{B_Banjac_M_Nenenzic_B_Malesevic_Telfor_2015}, \cite{B_Malesevic_M_Makragic_JMI_2016}-\cite{B_Banjac_M_Makragic_B_Malesevic_Results_2016}, \cite{Milica_Makragic_JMI_2017}-\cite{B_Malesevic_I_Jovovic_B_Banjac_JMI_2017}, \cite{B_Malesevic_T_Lutovac_M_Rasajski_C_Mortici_Adv._Difference_Equ._2018}-\cite{B_Malesevic_M_Rasajski_T_Lutovac_2019} and \cite{B_D_Banjac_2019}.
\textbf{Acknowledgment.} Research of the first, second, and third authors was supported in part by the Serbian Ministry of Education, Science and Technological Development, under Projects ON 174032 \& III 44006, ON 174033 and TR 32023, respectively.
\end{document}
\begin{document}
\begin{titlepage}
\def\thepage {}
\begin{flushleft} \large Andrew V. Goldberg\footnote{A.V. Goldberg: InterTrust Technologies Corp., 4750 Patrick Henry Drive, Santa Clara, CA 95054 email: avg@acm.org. Part of this research was done while this author was at NEC Research Institute, Inc., Princeton, NJ.}
and Alexander V. Karzanov\footnote{A.V. Karzanov: Corresponding author. Institute for System Analysis of the Russian Academy of Sciences, 9, Prospect 60 Let Oktyabrya, 117312 Moscow, Russia, email: sasha@cs.isa.ac.ru. This author was supported in part by a grant from the Russian Foundation of Basic Research.}
\end{flushleft}
\begin{flushleft} \bf\Large Maximum skew-symmetric flows and matchings
\end{flushleft}
\begin{flushleft} December 2003 \end{flushleft}
\noindent {\bf Abstract.} The maximum integer skew-symmetric flow problem (MSFP) generalizes both the maximum flow and maximum matching problems. It was introduced by Tutte~\cite{tut-67} in terms of self-conjugate flows in antisymmetrical digraphs. He showed that for these objects there are natural analogs of classical theoretical results on usual network flows, such as the flow decomposition, augmenting path, and max-flow min-cut theorems. We give unified and shorter proofs for those theoretical results.
We then extend to MSFP the shortest augmenting path method of Edmonds and Karp~\cite{EK-72} and the blocking flow method of Dinits~\cite{din-70}, obtaining algorithms with similar time bounds in the general case. Moreover, in the cases of unit arc capacities and unit ``node capacities'' the blocking skew-symmetric flow algorithm has time bounds similar to those established in~\cite{ET-75,kar-73-2} for Dinits' algorithm. In particular, this implies an algorithm for finding a maximum matching in a nonbipartite graph in $O(\sqrt{n}m)$ time, which matches the time bound for the algorithm of Micali and Vazirani~\cite{MV-80}. Finally, extending a clique compression technique of Feder and Motwani~\cite{FM-91} to particular skew-symmetric graphs, we speed up the implied maximum matching algorithm to run in $O(\sqrt{n}m\log(n^2/m)/\log{n})$ time, improving the best known bound for dense nonbipartite graphs.
Other theoretical and algorithmic results on skew-symmetric flows and their applications are also presented.
{\bf Key words.} skew-symmetric graph -- network flow -- matching -- b-matching
{\it Mathematics Subject Classification (1991):} 90C27, 90B10, 90C10, 05C85
\end{titlepage}
\baselineskip 15pt
\section{\Large Introduction}
By a {\em skew-symmetric flow} we mean a flow in a skew-symmetric directed graph which takes equal values on any pair of ``skew-symmetric'' arcs. This is a synonym of Tutte's {\em self-conjugate flow in an antisymmetrical digraph}~\cite{tut-67}. This paper is devoted to the maximum integer skew-symmetric flow problem, or, briefly, the {\em maximum IS-flow problem}. We study combinatorial properties of this problem and develop fast algorithms for it.
A well-known fact~\cite{FF-62} is that the bipartite matching problem can be viewed as a special case of the maximum flow problem. The combinatorial structure of nonbipartite matchings revealed by Edmonds~\cite{edm-65} involves blossoms and is more complicated than the structure of flows. This phenomenon explains, to some extent, why general matching algorithms are typically more intricate than flow algorithms. The maximum IS-flow problem is a generalization of both the maximum flow and maximum matching (or b-matching) problems. Moreover, this generalization appears to be well-grounded for two reasons. First, the basic combinatorial and linear programming theorems for usual flows have natural counterparts for IS-flows. Second, when solving problems on IS-flows, one can use intuition, ideas and technical tools well-understood for usual flows, so that the implied algorithms for matchings become more comprehensible.
As the maximum flow problem is related to certain path problems, the maximum IS-flow problem is related to certain problems on so-called {\em regular paths} in skew-symmetric graphs. We use some theoretical and algorithmic results on the {\em regular reachability} and {\em shortest regular path} problems from~\cite{GK-96}.
Tutte~\cite{tut-67} originated a mini-theory of IS-flows (in our terms) to bridge theoretical results on matchings and their generalizations (b-factors, b-matchings, degree-constrained subgraphs, Gallai's track packings, etc.) and results on usual flows. This theory parallels Ford and Fulkerson's flow theory~\cite{FF-62} and includes as basic results the decomposition, augmenting path, and max-flow min-cut theorems. Subsequently, some of those results were re-examined in different, but equivalent, terms by other authors, e.g., in~\cite{blu-90,GK-95,KS-93}.
Recall that the flow decomposition theorem says that a flow can be decomposed into a collection of source-to-sink paths and cycles. The augmenting path theorem says that a flow is maximum if and only if it admits no augmenting path. The max-flow min-cut theorem says that the maximum flow value is equal to the minimum cut capacity. Their skew-symmetric analogs are, respectively, that an IS-flow can be decomposed into a collection of pairs of symmetric source-to-sink paths and pairs of symmetric cycles, that an IS-flow is maximum if and only if it admits no regular augmenting path, and that the maximum IS-flow value is equal to the minimum {\em odd-barrier} capacity. We give unified and shorter proofs for these skew-symmetric flow theorems.
There is a relationship between skew-symmetric flows and {\em bidirected flows} introduced by Edmonds and Johnson~\cite{EJ-70} in their combinatorial study of a natural class of integer linear programs generalizing usual flow and matching problems. In particular, they established a linear programming description for integer bidirected flows. We finish the theoretical part by showing how to obtain a linear programming description for maximum IS-flows directly, using the max-IS-flow min-barrier theorem.
The second, larger, part of this paper is devoted to efficient methods to solve the maximum IS-flow problem (briefly, {\em MSFP}) in general and special cases, based on the theoretical ground given in the first part. First of all we explain how to adapt the idea of Anstee's elegant methods~\cite{ans-85,ans-87} for b-matchings in which standard flow algorithms are used to construct an optimal half-integer solution and then, after rounding, the ``good pre-solution'' is transformed into an optimal b-matching by solving $O(n)$ certain path problems. We devise an $O(M(n,m)+nm)$-time algorithm for MSFP in a similar fashion, using a regular reachability algorithm with linear complexity to improve a good pre-solution. Hereinafter $n$ and $m$ denote the numbers of nodes and arcs of the input graph, respectively, and $M(n,m)$ is the time needed to find an integer maximum flow in a usual network with $n$ nodes and $m$ arcs. Without loss of generality, we assume $n=O(m)$.
The next approach is the core of this paper. The purpose is to extend to MSFP the well-known shortest augmenting path algorithm of Edmonds and Karp~\cite{EK-72} with complexity $O(nm^2)$, and its improved version, the blocking flow algorithm of Dinits~\cite{din-70} with complexity $O(n^2m)$, so as to preserve the complexity bounds. Recall that the blocking flow algorithm consists of $O(n)$ {\em phases}, each finding a blocking flow in a layered network representing the union of currently shortest augmenting paths. We introduce concepts of shortest blocking and totally blocking IS-flows and show that an optimal solution to MSFP is also constructed in $O(n)$ phases, each finding a shortest blocking IS-flow in the residual skew-symmetric network. In turn, a phase is reduced to finding a totally blocking IS-flow in an {\em acyclic} (though not necessarily layered) skew-symmetric network.
The crucial point is to perform the latter task in time comparable with the phase time in Dinits' algorithm (which is $O(nm)$ in the general case). We reduce it to a certain auxiliary problem in a usual acyclic digraph. A fast algorithm for this problem provides the desired time bound for a phase.
The complexity of our blocking IS-flow algorithm remains comparable with that of Dinits' algorithm in important special cases where both the number of phases and the phase time significantly decrease. More precisely, Dinits' algorithm applied to the maximum matching problem in a bipartite graph runs in $O(\sqrt{n}m)$ time~\cite{HK-73,kar-73}. Extending that result, it was shown in~\cite{ET-75,kar-73-2} that for arbitrary nonnegative integer capacities, Dinits' algorithm has $O({\rm min}\{n,\sqrt{\Delta}\})$ phases and each phase runs in $O({\rm min}\{nm,m+\Delta\})$ time, where $\Delta$ is the sum of transit capacities of inner nodes. Here the transit capacity of a node (briefly, the {\em node capacity}) is the maximum flow value that can be pushed through this node. We show that both bounds remain valid for the blocking IS-flow algorithm.
When the network has unit arc capacities (resp. unit inner node capacities), the number of phases turns into $O(\sqrt{m})$ (resp. $O(\sqrt{n})$); in both cases the phase time turns into $O(m)$. The crucial auxiliary problem (that we are able to solve in linear time for unit arc capacities) becomes the following {\em maximal balanced path-set problem}:
\begin{itemize} \item[{\em MBP}:] {\sl Given an acyclic digraph in which one sink and an even set of sources partitioned into pairs are distinguished, find an (inclusion-wise) maximal set of pairwise arc-disjoint paths from sources to the sink such that for each pair $\{z,z'\}$ of sources in the partition, the number of paths from $z$ is equal to that from $z'$.}
\end{itemize}
As a consequence, the implied algorithm solves the maximum matching problem in a general graph in the same time, $O(\sqrt{n}m)$, as the algorithm of Micali and Vazirani~\cite{MV-80,vaz-90} (cf.~\cite{blu-90,GT-91}) and solves the b-factor or maximum degree-constrained subgraph problem in $O(m^{3/2})$ time, similarly to Gabow~\cite{gab-83}. The logical structure of our algorithm differs from that of~\cite{MV-80} and sophisticated data structures (incremental trees for set union~\cite{GT-85}) are used only in the regular reachability and shortest regular path algorithms of linear complexity from~\cite{GK-96} (applied as black-box subroutines) and once in the algorithm for MBP.
Finally, we show that a clique compression technique of Feder and Motwani~\cite{FM-91} can be extended to certain skew-symmetric graphs. As a result, our maximum matching algorithm in a general graph is speeded up to run in $O(\sqrt{n}m\log(n^2/m)/\log{n})$ time. This matches the best bound for bipartite matching~\cite{FM-91}.
Fremuth-Paeger and Jungnickel~\cite{FJ-99} developed an algorithm for MSFP (stated in terms of ``balanced flows'') which combines Dinits' approach with ideas and tools from~\cite{MV-80,vaz-90}; it runs in $O(nm^2)$ time for general capacities and in time slightly slower than $O(\sqrt{n}m)$ in the nonbipartite matching case.
This paper is organized as follows. Basic definitions and facts are given in Section~\ref{sec:back}. Sections~\ref{sec:theo} and~\ref{sec:lin} contain theoretical results on combinatorial and linear programming aspects of IS-flows, respectively. Section~\ref{sec:gisa} describes an Anstee-type algorithm for MSFP. The shortest regular augmenting path algorithm and a high level description of the blocking IS-flow method are developed in Section~\ref{sec:sbf}. Section~\ref{sec:shortp} gives a short review of facts and algorithms in~\cite{GK-96} for regular path problems. Using it, Section~\ref{sec:iter} explains the idea of the implementation of a phase in the blocking IS-flow method. It also bounds the number of phases for special skew-symmetric networks. Section~\ref{sec:acyc} completes the description of the blocking IS-flow algorithm by reducing the problem of finding a totally blocking IS-flow in an acyclic skew-symmetric network to the above-mentioned auxiliary problem in a usual acyclic digraph and devising a fast procedure to solve the latter. The concluding Section~\ref{sec:mat} discusses implications for matchings and their generalizations, and explains how to speed up the implied maximum matching algorithm by use of clique compression.
This paper is self-contained up to several quotations from~\cite{GK-96}. The main results presented in this paper were announced in the extended abstract~\cite{GK-95}. Subsequently, the authors found a flaw in the original fast implementation of a phase in the blocking IS-flow method. It was corrected in a revised version of this paper (circulated in 2001) where problem MBP and its weighted analog were introduced and efficiently solved, whereas the original version (identical to preprint~\cite{GK-99}) embraced only the content of Sections~\ref{sec:theo}--\ref{sec:iter}.
\section{\Large Preliminaries}\label{sec:back}
By a {\em skew-symmetric graph} we mean a digraph $G = (V,E)$ with a mapping (involution) $\sigma$ of $V\cup E$ onto itself such that: (i) for each $x\in V\cup E$, $\sigma(x)\ne x$ and $\sigma(\sigma(x))=x$; (ii) for each $v\in V$, $\sigma(v)\in V$; and (iii) for each $a=(v,w)\in E$, $\sigma(a)=(\sigma(w),\sigma(v))$. Although parallel arcs are allowed in $G$, an arc leaving a node $x$ and entering a node $y$ is denoted by $(x,y)$ when it is not confusing. We assume that $\sigma$ is fixed (when there are several such mappings) and explicitly included in the description of $G$. The node (arc) $\sigma(x)$ is called {\em symmetric} to a node (arc) $x$ (using, for brevity, the term {\em symmetric\/} rather than skew-symmetric). Symmetric objects are also called {\em mates}, and we usually use notation with primes for mates: $x'$ denotes the mate $\sigma(x)$ of an element $x$. Note that $G$ can contain an arc $a$ from a node $v$ to its mate $v'$; then $a'$ is also an arc from $v$ to $v'$.
Unless mentioned otherwise, when talking about paths (cycles), we mean directed paths (cycles). The symmetry $\sigma$ is extended in a natural way to paths, subgraphs, and other objects in $G$; e.g., two paths (cycles) are symmetric if the elements of one of them are symmetric to those of the other and go in the reverse order. Note that $G$ cannot contain self-symmetric paths or cycles. Indeed, if $P=(x_0,a_1,x_1,\ldots,a_k,x_k)$ is such a path (cycle), choose arcs $a_i$ and $a_j$ such that $i\le j$, $a_j=\sigma(a_i)$ and $j-i$ is minimum. Then $j>i+1$ (as $j=i$ would imply $\sigma(a_i)=a_i$ and $j=i+1$ would imply $\sigma(x_i)=x_{j-1}=x_i$). Now $\sigma(a_{i+1})=a_{j-1}$ contradicts the minimality of $j-i$.
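As an illustration only (the encoding is our own, not from the paper), conditions (i)--(iii) of the definition can be checked mechanically when the graph is stored with explicit mate maps:

```python
# Sketch (our encoding): nodes is an iterable, arcs maps an arc id to its
# (tail, head) pair, and sigma_node / sigma_arc give the involution sigma.

def is_skew_symmetric(nodes, arcs, sigma_node, sigma_arc):
    for v in nodes:
        # (i)+(ii): sigma is a fixed-point-free involution mapping nodes to nodes
        if sigma_node[v] == v or sigma_node[sigma_node[v]] != v:
            return False
    for a, (v, w) in arcs.items():
        b = sigma_arc[a]
        # (i): fixed-point-free involution on arcs
        if b == a or sigma_arc[b] != a:
            return False
        # (iii): sigma(v, w) = (sigma(w), sigma(v))
        if arcs[b] != (sigma_node[w], sigma_node[v]):
            return False
    return True
```

For instance, the four-node graph with arcs $a=(s,p)$ and $a'=(p',s')$ passes the check, while declaring each arc its own mate violates condition (i).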
We call a function $h$ on $E$ {\em symmetric\/} if $h(a) = h(a')$ for all $a \in E$.
A {\em skew-symmetric network\/} is a quadruple $N=(G,\sigma,u,s)$ consisting of a skew-symmetric graph $G=(V,E)$ with symmetry $\sigma$, a nonnegative integer-valued symmetric function $u$ (of {\em arc capacities}) on $E$, and a {\em source} $s\in V$. The mate $s'$ of $s$ is the {\em sink} of $N$. A {\em flow} in $N$ is a function $f: E \to{\mathbb R}_+$ satisfying the capacity constraints $$
f(a) \leq u(a) \qquad \mbox{for all}\;\; a \in E $$ and the conservation constraints $$
\mbox{div}_f(x):=\sum_{(x,y) \in E} f(x,y) - \sum_{(y,x) \in E} f(y,x) =0
\qquad\mbox{for all}\;\; x \in V - \{s,s'\} . $$ The value $\mbox{div}_f(s)$ is called the {\em value} of $f$ and denoted by
$|f|$; we usually assume that $|f|\ge 0$. Now {\em IS-flow} abbreviates {\em integer symmetric flow}, the main object that we study in this paper. The {\em maximum IS-flow problem (MSFP)} is to find an IS-flow of maximum value in $N$.
The integrality requirement is important: if we do not require $f$ to be integral, then for any integer flow $f$ in $N$, the flow $f'$, defined by $f'(a):=(f(a)+f(a'))/2$ for $a\in E$, is a flow of the same value as $f$, which is symmetric but not necessarily integral. Therefore, the {\em fractional} skew-symmetric flow problem is equivalent to the ordinary flow problem.
Note that, given a digraph $D=(V(D),A(D))$ with two specified nodes $p$ and $q$ and nonnegative integer capacities of the arcs, we can construct a skew-symmetric graph $G$ by taking a disjoint copy $D'$ of $D$ with all arcs reversed, adding two extra nodes $s$ and $s'$, and adding four arcs $(s,p),(s,q'),(q,s'),(p',s')$ of infinite capacity, where $p',q'$ are the copies of $p,q$ in $D'$, respectively. Then there is a natural one-to-one correspondence between integer flows from $p$ to $q$ in $D$ and the IS-flows from $s$ to $s'$ in $G$. This shows that MSFP generalizes the classical (integer) max-flow problem.
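The reduction just described can be sketched in a few lines of Python (the tagging of the two copies by $0$ and $1$ and the names \texttt{'s'}, \texttt{"s'"} are our own conventions, not from the paper):

```python
INF = float('inf')

def flow_to_msfp(nodes, arcs, cap, p, q):
    # Nodes of G: (v, 0) for v in D, (v, 1) for the reversed copy D',
    # plus the source 's' and the sink "s'".
    V = [(v, 0) for v in nodes] + [(v, 1) for v in nodes] + ['s', "s'"]
    E, u = [], []
    for (v, w) in arcs:
        E.append(((v, 0), (w, 0))); u.append(cap[(v, w)])  # arc of D
        E.append(((w, 1), (v, 1))); u.append(cap[(v, w)])  # its mate in D'
    # Four infinite-capacity arcs (s,p), (s,q'), (q,s'), (p',s').
    for e in [('s', (p, 0)), ('s', (q, 1)), ((q, 0), "s'"), ((p, 1), "s'")]:
        E.append(e); u.append(INF)
    return V, E, u
```

By construction the mate of an arc $((v,0),(w,0))$ is $((w,1),(v,1))$ with the same capacity, so the resulting network is skew-symmetric with a symmetric capacity function.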
\noindent {\bf Remark.} Sometimes it is useful to consider a sharper version of MSFP in which double-sided capacity constraints $\ell(a)\le f(a)\le u(a)$, $a\in E$, are imposed, where $\ell,u:E\to{\mathbb Z}_+$ and $\ell \le u$ ({\em problem DMSFP}). Similarly to the max-flow problem with upper and lower capacities~\cite{FF-62}, DMSFP is reduced to MSFP in the skew-symmetric network $N'$ obtained from $N$ by subdividing each arc $a=(x,y)$ into three arcs $(x,v),(v,w),(w,y)$ with (upper) capacities $u(a),u(a)-\ell(a),u(a)$, respectively, and adding extra arcs $(s,w)$ and $(v,s')$ with capacity $\ell(a)$ each. It is not difficult to show (e.g., using Theorem~\ref{tm:m-m}) that DMSFP has a solution if and only if all extra arcs are saturated by a maximum IS-flow $f'$ for $N'$, and in this case $f'$ induces a maximum IS-flow for $N$ in a natural way. For details, see~\cite{FJ-99}.
In our study of IS-flows we rely on results for regular paths in skew-symmetric graphs. A {\em regular path}, or an {\em r-path}, is a path in $G$ that does not contain a pair of symmetric arcs. Similarly, an {\em r-cycle} is a cycle that does not contain a pair of symmetric arcs. The {\em r-reachability problem (RP)\/} is to find an r-path from $s$ to $s'$ or a proof that there is none. Given a symmetric function of {\em arc lengths}, the {\em shortest r-path problem (SRP)\/} is to find a minimum length r-path from $s$ to $s'$ or a proof that there is none.
A criterion for the existence of a regular $s$ to $s'$ path is less trivial than that for the usual path reachability; it involves so-called barriers. We say that $$
{\cal B}=(A; X_1, \ldots, X_k) $$ is an {\em $s$-barrier} if the following conditions hold.
\begin{enumerate}
\item[(B1)] $A, X_1, \ldots, X_k$ are pairwise disjoint subsets of $V$, and $s \in A$.
\item[(B2)] For $A' = \sigma(A)$, $A \cap A' = \emptyset$.
\item[(B3)] For $i = 1, \ldots, k$, $X_i$ is self-symmetric, i.e., $\sigma(X_i) = X_i$.
\item[(B4)] For $i = 1, \ldots, k$, there is a unique arc, $e^i$, from $A$ to $X_i$.
\item[(B5)] For $i,j = 1,\ldots, k$ and $i \not = j$, no arc connects $X_i$ and $X_ j$.
\item[(B6)] For $M := V - (A\cup A' \cup X_1 \cup \ldots \cup X_k)$ and $i = 1, \ldots, k$, no arc connects $X_i$ and $M$.
\item[(B7)] No arc goes from $A$ to $A' \cup M$. \end{enumerate}
(Note that arcs from $A'$ to $A$, from $X_i$ to $A$, and from $M$ to $A$ are possible.) Figure~\ref{fig:bar} illustrates the definition. Tutte proved the following (see also~\cite{blu-90,GK-96}).
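For concreteness, conditions (B1)--(B7) can be checked mechanically. The following sketch (our own helper, not from the paper) does so for a proposed barrier given as Python sets, with $V$ the node set, $E$ a list of arcs $(v,w)$, and \texttt{sigma} the node involution:

```python
def is_s_barrier(V, E, sigma, s, A, Xs):
    if s not in A:                                            # (B1)
        return False
    parts = [A] + Xs
    for i in range(len(parts)):                               # (B1) disjointness
        for j in range(i + 1, len(parts)):
            if parts[i] & parts[j]:
                return False
    A1 = {sigma[v] for v in A}
    if A & A1:                                                # (B2)
        return False
    for X in Xs:                                              # (B3) self-symmetric
        if {sigma[v] for v in X} != X:
            return False
    M = set(V) - A - A1
    for X in Xs:
        M -= X
    for X in Xs:                                              # (B4) unique arc A -> X_i
        if sum(1 for (v, w) in E if v in A and w in X) != 1:
            return False
    for (v, w) in E:
        for X in Xs:
            for Y in Xs:
                if X is not Y and v in X and w in Y:          # (B5)
                    return False
            if (v in X and w in M) or (v in M and w in X):    # (B6)
                return False
        if v in A and (w in A1 or w in M):                    # (B7)
            return False
    return True
```

In the smallest example, the arcless graph on $\{s,s'\}$ has the $s$-barrier $(\{s\})$ with $k=0$; adding an arc from $s$ to $s'$ destroys it, in accordance with Theorem~\ref{tm:bar}.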
\begin{figure}
\caption{A barrier}
\label{fig:bar}
\end{figure}
\begin{theorem}\label{tm:bar}{\rm \cite{tut-67}} There is an r-path from $s$ to $s'$ if and only if there is no $s$-barrier. \end{theorem}
This criterion will be used in Section~\ref{sec:theo} to obtain an analog of the max-flow min-cut theorem for IS-flows. RP is efficiently solvable.
\begin{theorem} \label{tm:ratime} {\rm \cite{blu-90,GK-96}} The r-reachability problem in $G$ can be solved in $O(m)$ time. \end{theorem}
The methods for the maximum IS-flow problem that we develop apply, as a subroutine, the r-reachability algorithm of linear complexity from~\cite{GK-96}, which finds either a regular $s$ to $s'$ path or an $s$-barrier. Another ingredient used in our methods is the shortest r-path algorithm for the case of nonnegative symmetric lengths, which runs in $O(m\,\log n)$ time in general, and in $O(m)$ time for all-unit lengths~\cite{GK-96}. The necessary results on RP and SRP are outlined in Section~\ref{sec:shortp}.
In the rest of this paper, $\sigma$ and $s$ will denote the symmetry map and the source, respectively, regardless of the network in question, which will allow us to use the shorter notation $(G,u)$ for a network $(G,\sigma,u,s)$. Given a simple path $P$, the number of arcs on $P$ is denoted by
$|P|$ and the incidence vector of its arc set in ${\mathbb R}^E$ is denoted by $\chi^P$, i.e., $\chi^P(a)=1$ if $a$ is an arc of $P$, and 0 otherwise.
\subsection{Relationships to Matchings} \label{ssec:rel_mat}
Given an undirected graph $G' = (V',E')$, a {\em matching} is a subset $M\subseteq E'$ such that no two edges of $M$ have a common endnode. The {\em maximum matching problem} is to find a matching $M$
whose cardinality $|M|$ is as large as possible.
There are well-known generalizations of matchings; for a survey see~\cite{law-76,LP-86,sch-03}. Let $u_0,u:E'\to{\mathbb Z}_+\cup\{\infty\}$ and $b_0,b:V'\to{\mathbb Z}_+$ be functions such that $b_0\le b$ and $u_0\le u$. A {\em $(u_0,u)$-capacitated $(b_0,b)$-matching} is a function $h: E'\to{\mathbb Z}_+$ satisfying the capacity constraints $$
u_0(e)\le h(e)\le u(e) \qquad \mbox{for all}\;\; e \in E', $$ and the supply constraints $$
b_0(v)\le \sum_{e=\{v,w\}\in E'} h(e) \le b(v) \qquad
\mbox{for all}\;\; v \in V'. $$ The {\em value}\/ of $h$ is defined to be $h(E')$. Hereinafter, for a numerical function $g$ on a set $S$ and a subset $S'\subseteq S$, $g(S')$ denotes $\sum_{e\in S'} g(e)$. Popular special cases are: a {\em u-capacitated b-matching} (when $b_0=0$); a {\em degree-constrained subgraph} (when $u\equiv 1$); a {\em perfect b-matching} (when $u\equiv \infty$ and $b_0=b$); a {\em b-factor} (when $u\equiv 1$ and $b_0=b$). In these cases one assigns $u_0=0$. Typically, in unweighted versions, one is asked to maximize the value of $h$ (in the former two cases) or to find a feasible $h$ (in the latter two cases).
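The two families of constraints above are easy to state in code. The following sketch (our own naming; edges are tuples, all bounds are dicts) checks whether a candidate $h$ is a $(u_0,u)$-capacitated $(b_0,b)$-matching:

```python
def is_capacitated_matching(h, u0, u, b0, b):
    # h, u0, u are dicts keyed by edges (tuples of endnodes);
    # b0, b are dicts keyed by nodes.
    load = {v: 0 for v in b}
    for e, he in h.items():
        if not (u0[e] <= he <= u[e]):      # capacity constraints
            return False
        for v in e:                        # an edge loads both of its endnodes
            load[v] += he
    # supply constraints
    return all(b0[v] <= load[v] <= b[v] for v in b)
```

With $u_0=b_0=0$ and $u\equiv b\equiv 1$ this is exactly the test for an ordinary matching.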
The general maximum $(u_0,u)$-capacitated $(b_0,b)$-matching problem is reduced to the maximum IS-flow problem (MSFP or DMSFP, depending on whether both $u_0,b_0$ are zero functions or not) without increasing the problem size by more than a constant factor. The construction of the corresponding capacitated skew-symmetric graph $G=(V,E)$ is straightforward (and close to that in~\cite{tut-67}):
(i) for each $v\in V'$, $V$ contains two symmetric nodes $v_1$ and $v_2$;
(ii) also $V$ contains two additional symmetric nodes $s$ and $s'$ (the source and the sink);
(iii) for each $e=\{v,w\} \in E'$, $E$ contains two symmetric arcs $(v_1,w_2)$ and $(w_1, v_2)$ with lower capacity $u_0(e)$ and upper capacity $u(e)$;
(iv) for each $v\in V'$, $E$ contains two symmetric arcs $(s,v_1)$ and $(v_2,s')$ with lower capacity $b_0(v)$ and upper capacity $b(v)$.
There is a natural one-to-one correspondence between the $(u_0,u)$-capacitated $(b_0,b)$-matchings $h$ in $G'$ and the IS-flows $f$ from $s$ to $s'$ in $G$, and the value of $f$ is twice the value of $h$. Figure~\ref{fig:red} illustrates the correspondence for matchings.
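For the plain maximum matching case ($u\equiv 1$, $b\equiv 1$, $u_0=b_0=0$), steps (i)--(iv) can be sketched as follows (tagging the two symmetric copies of a vertex by $1$ and $2$ is our own naming):

```python
def matching_to_isflow(vertices, edges):
    # (i)+(ii): two symmetric copies of each vertex, plus source and sink.
    V = [(v, 1) for v in vertices] + [(v, 2) for v in vertices] + ['s', "s'"]
    E, u = [], []
    for (v, w) in edges:                   # (iii): two symmetric arcs per edge
        E.append(((v, 1), (w, 2))); u.append(1)
        E.append(((w, 1), (v, 2))); u.append(1)
    for v in vertices:                     # (iv): source and sink arcs
        E.append(('s', (v, 1))); u.append(1)
        E.append(((v, 2), "s'")); u.append(1)
    return V, E, u
```

An IS-flow of value $2k$ in the resulting network corresponds to a matching with $k$ edges, in line with the correspondence stated above.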
\begin{figure}
\caption{{\sl Reduction example for maximum cardinality matching}}
\label{fig:red}
\end{figure}
In case of the b-factor or degree-constrained subgraph problem, one may assume that $b$ does not exceed the node degree function of $G$. Therefore, one can make a further reduction to MSFP in a network with $O(|E'|)$
nodes, $O(|E'|)$ arcs, and unit arc capacities (by getting rid of lower capacities as in the Remark above and then splitting each arc $a$ with capacity $q(a)>1$ into $q(a)$ parallel arcs of capacity one). In Section~\ref{sec:mat} we compare the time bounds of our methods for MSFP applied to the matching problem and its generalizations above with known bounds for these problems.
Edmonds and Johnson~\cite{EJ-70} studied the class of integer linear programs in which the constraint matrix entries are integers between $-2$ and $+2$ and the sum of absolute values of entries in each column (without including entries from the box constraints) does not exceed two. Such a problem is often stated in terms of bidirected graphs (for a survey, see~\cite[Chapter 36]{sch-03}). Recall that a {\em bidirected graph} $H=(X,B)$ may contain, besides usual directed edges going from one node to another, edges directed {\em from} both of its endnodes, and {\em to} both of them. A particular problem on such an object is the {\em maximum bidirected flow problem}: given a capacity function $c:B\to {\mathbb Z}_+$ and a {\em terminal} $p\in X$, find a function ({\em biflow}) $g:B\to{\mathbb Z}_+$ maximizing the value $\mbox{div}_g(p)$. (Reasonable versions with more terminals are reduced to this one.) Here $g\le c$ and $\mbox{div}_g(x)=0$ for all $x\in X-\{p\}$, where $\mbox{div}_g(x)$ is the total biflow on the edges directed from $x$ minus the total biflow on the edges directed to $x$ (a loop $e$ at $x$ contributes 0, $2g(e)$ or $-2g(e)$).
The maximum IS-flow problem admits a linear time and space reduction to the maximum biflow problem (in fact, both are equivalent). More precisely, given an instance $N=(G=(V,E),\sigma,u,s)$ of MSFP, take a partition $(X,X')$ of $V$ such that $X'=\sigma(X)$ and $s\in X$. For each pair $\{a,a'\}$ of symmetric arcs in $E$ and nodes $x,y\in X$, assign an edge from $x $ to $y$ if $a$ or $a'$ goes from $x$ to $y$; an edge from both $x,y$ if $a$ or $a'$ goes from $x$ to $\sigma(y)$; an edge to both $x,y$ if $a$ or $a'$ goes from $\sigma(x)$ to $y$. This produces a bidirected graph $H=(X,B)$. We set $p:=s$ and assign the capacity $c(e)$ of each edge $e\in B$ to be the capacity of the arc from which $e$ is created. There is a one-to-one correspondence between the IS-flows in $N$ and the biflows in $(H,c,p)$, and the values of corresponding flows are equal. A reverse reduction is also obvious. Using these reductions, one can try to derive results for IS-flows from corresponding results on biflows, and vice versa. In this paper we give direct proofs and algorithms for IS-flows.
\section{\Large Mini-Theory of Skew-Symmetric Flows}\label{sec:theo}
This section extends the classical flow decomposition, augmenting path, and max-flow min-cut theorems of Ford and Fulkerson \cite{FF-62} to the skew-symmetric case. The {\em support} $\{e\in S: f(e)\ne 0\}$ of a function $f:S\to{\mathbb R}$ is denoted by $\mbox{supp}(f)$.
Let $h$ be a nonnegative integer symmetric function on the arcs of a skew-symmetric graph $G=(V,E)$. A path (cycle) $P$ in $G$ is called $h$-{\em regular} if $h(a)>0$ for all arcs $a$ of $P$ and each arc $a\in P$ such that $a'\in P$ satisfies $h(a)\ge 2$. Clearly when $h$ is all-unit on $E$, the sets of regular and $h$-regular paths (cycles) are the same. We call an arc $a$ of $P$ {\em ordinary} if $a'\not\in P$ and define the $h$-{\em capacity} $\delta_h(P)$ of $P$ to be the minimum of all values $h(a)$ for ordinary arcs $a$ on $P$ and all values $\lfloor h(a)/2\rfloor$ for nonordinary arcs $a$ on $P$.
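As a small illustration (our own encoding: $P$ is a list of arc identifiers, \texttt{sigma} a dict of arc mates, $h$ a symmetric capacity dict), the $h$-capacity $\delta_h(P)$ can be computed directly from the definition:

```python
def h_capacity(P, sigma, h):
    # delta_h(P): min of h(a) over ordinary arcs of P and floor(h(a)/2)
    # over arcs whose mate also lies on P.
    on_path = set(P)
    vals = []
    for a in P:
        if sigma[a] in on_path:    # non-ordinary arc: its mate is on P too
            vals.append(h[a] // 2)
        else:                      # ordinary arc
            vals.append(h[a])
    return min(vals)
```

For instance, if arcs \texttt{a} and \texttt{c} are mates with $h=5$ and both lie on $P$, they each contribute $\lfloor 5/2\rfloor = 2$ to the minimum.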
To state the symmetric flow decomposition theorem, consider an IS-flow $f$ in a skew-symmetric network $N=(G=(V,E),u)$. An IS-flow $g$ in $N$ is called {\em elementary} if it is representable as $g=\delta\chi^P +\delta\chi^{P'}$, where $P$ is a simple cycle or a simple path from $s$ to $s'$ or a simple path from $s'$ to $s$, $P'=\sigma(P)$, and $\delta$ is a {\em positive integer}. Since $g$ is feasible, $P$ is $u$-regular and $\delta\le\delta_u(P)$. We denote $g$ by $(P,P',\delta)$. By a {\em symmetric decomposition} of $f$ we mean a set $D$ of elementary flows such that $f=\sum(g : g\in D)$. The following {\em symmetric decomposition theorem}
(see~\cite{FJ-99,GK-95}) slightly generalizes a result by Tutte~\cite{tut-67} that there exists a symmetric set of $|f|$ paths from $s$ to $s'$ such that any arc $a$ is contained in at most $f(a)$ paths.
\begin{theorem}\label{tm:dec} For an IS-flow $f$ in $G$, there exists a symmetric decomposition consisting of at most $m$ elementary flows. \end{theorem}
\begin{proof} We build up an $f$-regular path $\Gamma$ in $G$ until this path contains a simple cycle $P$ or a simple path $P$ connecting $s$ and $s'$. This will determine a member of the desired flow decomposition. Then we accordingly decrease $f$ and repeat the process for the resulting IS-flow $f'$, and so on until we obtain the zero flow.
We start with $\Gamma$ formed by a single arc $a\in\mbox{supp}(f)$. First we grow $\Gamma$ forward. Let $b=(v,w)$ be the last arc on the current (simple) path $\Gamma$. Suppose that $w \not = s,s'$. By the conservation for $f$ at $w$, $\mbox{supp}(f)$ must contain an arc $q=(w,z)$. If $q'$ is not on $\Gamma$ or $f(q)\ge 2$, we add $q$ to $\Gamma$.
Suppose $q'$ is on $\Gamma$ and $f(q)=1$. Let $\Gamma_1$ be the part of $\Gamma$ between $w'$ and $w$. Then $\Gamma_1$ contains at least one arc since $w \ne w'$. Suppose there is an arc $\widetilde q\in \mbox{supp}(f)$ leaving $w$ and different from $q$. Then we can add $\widetilde q$ to $\Gamma$ instead of $q$, forming a longer $f$-regular path. (Note that since the path $\Gamma$ is simple, ${\widetilde q}'$ is not on $\Gamma$). Now suppose that such a $\widetilde q$ does not exist. Then exactly one unit of the flow $f$ leaves $w$. Hence, exactly one unit of the flow $f$ enters $w$, implying that $b=(v,w)$ is the only arc entering $w$ in $\mbox{supp}(f)$, and that $f(b)=1$. But $\sigma(d)$ also enters $w$, where $d$ is the first arc on $\Gamma_1$. The fact that $\sigma(d)\ne b$ (since $\Gamma_1$ is $f$-regular) leads to a contradiction.
Let $(w,z)$ be the arc added to $\Gamma$. If $z$ is not on $\Gamma$, then $\Gamma$ is a simple $f$-regular path, and we continue growing $\Gamma$. If $z$ is on $\Gamma$, we discover a simple $f$-regular cycle $P$.
If $\Gamma$ reaches $s'$ or $s$, we start growing $\Gamma$ backward from the initial arc $a$ in a way similar to growing it forward. We stop when an $f$-regular cycle $P$ is found or one of $s$, $s'$ is reached. In the latter case $P = \Gamma$ is either an $f$-regular path from $s$ to $s'$ or from $s'$ to $s$, or an $f$-regular cycle (containing $s$ or $s'$).
Form the elementary flow $g=(P,P',\delta)$ with $\delta=\delta_f(P)$ and reduce $f$ to $f':=f-\delta\chi^P-\delta\chi^{P'}$. Since $P$ is
$f$-regular, $\delta>0$. Moreover, there is a pair $e,e'$ of symmetric arcs of $P$ such that either $f'(e)=f'(e')=0$ or $f'(e)=f'(e')=1$; we associate such a pair with $g$. In the former case $e,e'$ vanish in the support of the new IS-flow $f'$, while in the latter case $e,e'$ can be used in further iterations of the decomposition process at most once. Therefore, each pair of arc mates of $G$ is associated with at most two members of the constructed decomposition $D$, yielding $|D|\le m$. \end{proof}
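The defining property $f=\sum(g : g\in D)$ of a symmetric decomposition can be checked mechanically. A minimal sketch, under a hypothetical encoding of each elementary flow $(P,P',\delta)$ as two lists of arc ids and a positive integer:

```python
from collections import Counter

def sum_of_elementary_flows(members):
    """Return f = sum over (P, P', delta) of delta*(chi^P + chi^{P'}),
    as a mapping arc id -> flow value (zero-valued arcs omitted)."""
    f = Counter()
    for P, P_mate, delta in members:
        for a in P:          # chi^P: each arc of the simple path/cycle counted once
            f[a] += delta
        for a in P_mate:     # chi^{P'} for the symmetric copy P' = sigma(P)
            f[a] += delta
    return f
```

Since each member contributes $\delta$ to an arc and to its mate, the resulting function is automatically symmetric, as an IS-flow must be.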
The above proof gives a polynomial-time algorithm for symmetric decomposition. Moreover, the decomposition process can easily be implemented in $O(nm)$ time, which matches the complexity of standard decomposition algorithms for usual flows.
The decomposition theorem and the fact that the network has no self-symmetric cycles imply the following useful property noticed by Tutte as well.
\begin{corollary}\label{cor:even} {\rm \cite{tut-67}} For any self-symmetric set $S\subseteq V$ and any IS-flow in $G$, the total flow on the arcs entering $S$, as well as the total flow on the arcs leaving $S$, is even. \end{corollary}
{\bf Remark.\ } Another consequence of Theorem~\ref{tm:dec} is that one may assume that $G$ has no arc entering $s$. Indeed, consider a maximum IS-flow $f$ in $G$ and a symmetric decomposition $D$ of $f$. Putting together the elementary flows from $s$ to $s'$ in $D$, we obtain an IS-flow $f'$
in $G$ with $|f'| \geq |f|$, so $f'$ is a maximum flow. Since $f'$ uses no arc entering $s$ or leaving $s'$, deletion of all such arcs from $G$ produces an equivalent problem in a skew-symmetric graph.
Next we state a skew-symmetric version of the augmenting path theorem. It is convenient to consider the graph $G^+=(V,E^+)$ formed by adding a reverse arc $(y,x)$ to each arc $(x,y)$ of $G$. For $a \in E^+$, $a^R$ denotes the corresponding reverse arc. The symmetry $\sigma$ is extended to $E^+$ in a natural way. Given a (nonnegative integer) symmetric capacity function $u$ on $E$ and an IS-flow $f$ on $G$, define the {\em residual capacity} $u_f(a)$ of an arc $a \in E^+$ to be $u(a)-f(a)$ if $a \in E$, and $f(a^R)$ otherwise. An arc $a \in E^+$ is called {\em residual} if $u_f(a) > 0$, and {\em saturated} otherwise. Given an IS-flow $g$ in the network $(G^+,u_f)$, we define the function $f \oplus g$ on $E$ by setting $(f \oplus g)(a) := f(a) + g(a) - g(a^R)$. Clearly $f \oplus g$
is a feasible IS-flow in $(G,u)$ whose value is $|f|+|g|$.
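These two operations can be written out directly. The sketch below uses a hypothetical encoding (not from the source) of an arc of $E^+$ as a pair `(arc_id, is_reverse)`:

```python
def residual_capacity(a_plus, u, f):
    """u_f on E^+: u(a) - f(a) for an arc a of E, and f(a) for its reverse a^R.
    An arc of E^+ is encoded as (arc_id, is_reverse)."""
    a, rev = a_plus
    return f[a] if rev else u[a] - f[a]

def combine(f, g, arcs):
    """(f + g)(a) := f(a) + g(a) - g(a^R) for each original arc a of E,
    i.e., the operation written f \oplus g in the text."""
    return {a: f[a] + g.get((a, False), 0) - g.get((a, True), 0) for a in arcs}
```

An arc is residual exactly when `residual_capacity` returns a positive value, and saturated otherwise.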
By an {\em r-augmenting path} for $f$ we mean a $u_f$-regular path from $s$ to $s'$ in $G^+$. If $P$ is an r-augmenting path and if $\delta \in {\mathbb N}$ does not exceed the $u_f$-capacity of $P$, then we can push $\delta$ units of flow through a (not necessarily directed) path in $G$ corresponding to $P$ and then $\delta$ units through the path corresponding to $P'$. Formally, $f$ is transformed into $f \oplus g$, where $g$ is the elementary flow $(P, P', \delta)$ in $(G^+,u_f)$. Such an augmentation increases the value of $f$ by $2\delta$.
\begin{theorem}\label{tm:aug} {\rm \cite{tut-67}} An IS-flow $f$ is maximum if and only if there is no r-augmenting path. \end{theorem}
\begin{proof} The direction that the existence of an r-augmenting path implies that $f$ is not maximum is obvious in light of the above discussion.
To see the other direction, suppose that $f$ is not maximum, and let $f^*$ be a maximum IS-flow in $G$. For $a \in E$ define $g(a) := f^*(a) - f(a)$ and $g(a^R) := 0$ if $f^*(a) \geq f(a)$, while $g(a^R) := f(a) - f^*(a)$ and $g(a) := 0$ if $f^*(a) < f(a)$. One can see that $g$ is a feasible symmetric flow in $(G^+,u_f)$. Take a symmetric decomposition
$D$ of $g$. Since $|g| = |f^*| - |f| > 0$, $D$ has a member $(P,P',\delta)$, where $P$ is a $u_f$-regular path from $s$ to $s'$. Then $P$ is an r-augmenting path for $f$. \end{proof}
In what follows we will use a simple construction which enables us to reduce the task of finding an r-augmenting path to the r-reachability problem. For a skew-symmetric network $(H,h)$, split each arc $a = (x,y)$ of $H$ into two parallel arcs $a_1$ and $a_2$ from $x$ to $y$ (the {\em first\/} and {\em second split-arcs\/} generated by $a$). These arcs are endowed with the capacities $[h](a_1) := \lceil h(a)/2 \rceil$ and $[h](a_2) := \lfloor h(a)/2 \rfloor$. Then delete all arcs with zero capacity $[h]$. The resulting capacitated graph is called the {\em split-graph} for $(H,h)$ and denoted by $S(H,h)$. The symmetry $\sigma$ is extended to the arcs of $S(H,h)$ in a natural way, by defining $\sigma(a_i) := (\sigma(a))_i$ for $i=1,2$.
For a path $P$ in $S(H,h)$, its image in $H$ is denoted by $\omega(P)$ (i.e., $\omega(P)$ is obtained by replacing each arc $a_i$ of $P$ by the original arc $a=:\omega(a_i)$). It is easy to see that if $P$ is regular, then $\omega(P)$ is $h$-regular. Conversely, for any $h$-regular path $Q$ in $H$, there is a (possibly not unique) r-path $P$ in $S(H,h)$ such that $\omega(P)=Q$. Indeed, replace each ordinary arc $a$ of $Q$ by the first split-arc $a_1$ (existing as $h(a)\ge 1$) and replace each pair $a,a'$ of arc mates in $Q$ by $a_i,a'_j$ for $\{i,j\}=\{1,2\}$ (taking into account that $h(a)=h(a')\ge 2$). This gives the required r-path $P$. Thus, Theorem~\ref{tm:aug} admits the following reformulation in terms of split-graphs.
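The split-arc capacities are easy to tabulate. A small sketch, assuming arcs are ids and `h` is the symmetric capacity function (the pair keys `(a, 1)`, `(a, 2)` standing for the first and second split-arcs are an illustrative convention, not from the source):

```python
def split_capacities(h):
    """Capacities of split-arcs: [h](a1) = ceil(h(a)/2), [h](a2) = floor(h(a)/2);
    split-arcs with zero capacity are deleted."""
    cap = {}
    for a, c in h.items():
        c1, c2 = (c + 1) // 2, c // 2   # ceil and floor of c/2 for integer c >= 0
        if c1 > 0:
            cap[(a, 1)] = c1
        if c2 > 0:
            cap[(a, 2)] = c2
    return cap
```

Note that an arc with $h(a)=1$ yields only its first split-arc, which is why a mated pair on an $h$-regular path needs $h(a)\ge 2$ to be represented by two distinct split-arcs.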
\begin{corollary}\label{tm:spaug} An IS-flow $f$ in $(G,u)$ is maximum if and only if there is no regular path from $s$ to $s'$ in $S(G^+,u_f)$. \end{corollary}
Finally, the classic max-flow min-cut theorem states that the maximum flow value is equal to the minimum cut capacity. A skew-symmetric version of this theorem involves a more complicated object which is close to an $s$-barrier occurring in the solvability criterion for the r-reachability problem given in Theorem~\ref{tm:bar}. We say that ${\cal B} = (A; X_1, \ldots, X_k) $ is an {\em odd $s$-barrier} for $(G,u)$ if the following conditions hold.
\begin{enumerate}
\item[(O1)] $A, X_1, \ldots, X_k$ are pairwise disjoint subsets of $V$, and $s \in A$.
\item[(O2)] For $A' = \sigma(A)$, $A \cap A' = \emptyset$.
\item[(O3)] For $i = 1, \ldots, k$, $X_i$ is self-symmetric, {\it i.e.,} $\sigma(X_i) = X_i$.
\item[(O4)] For $i = 1, \ldots, k$, the total capacity $u(A, X_i)$ of the arcs from $A$ to $X_i$ is odd.
\item[(O5)] For $i,j = 1, \ldots, k$ and $i \not = j$, no positive capacity arc connects $X_i$ and $X_j$.
\item[(O6)] For $M := V - (A\cup A' \cup X_1 \cup \ldots \cup X_k) $ and $ i = 1, \ldots, k$, no positive capacity arc connects $X_i$ and $M$.
\end{enumerate} Compare with (B1)--(B7) in Section~\ref{sec:back}. Define the {\em capacity} $u({\cal B})$ of ${\cal B}$ to be $u(A, V-A) - k$. Since the source is denoted by $s$ throughout, we refer to an odd $s$-barrier simply as an {\em odd barrier}.
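The capacity $u({\cal B})=u(A,V-A)-k$ can be computed directly. A minimal sketch, under a hypothetical encoding where `arcs` maps arc ids to `(tail, head)` pairs:

```python
def odd_barrier_capacity(u, arcs, A, Xs):
    """u(B) = u(A, V-A) - k for an odd s-barrier B = (A; X_1, ..., X_k).
    `u` maps arc id -> capacity; `Xs` is the list of the sets X_1, ..., X_k."""
    u_out = sum(u[a] for a, (x, y) in arcs.items() if x in A and y not in A)
    return u_out - len(Xs)
```

The subtraction of $k$ reflects that, by (O4), one unit of each odd capacity $u(A,X_i)$ cannot be used by any IS-flow.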
The following is the {\em maximum IS-flow minimum barrier theorem}.
\begin{theorem}\label{tm:m-m} {\rm \cite{tut-67}} The maximum IS-flow value is equal to the minimum odd barrier capacity. \end{theorem}
\begin{proof} To see that the capacity of an odd barrier ${\cal B}=(A;X_1,\ldots, X_k)$ is an upper bound on the value of an IS-flow $f$, consider a symmetric decomposition $D$ of $f$. For each member $g=(P,P',\delta)$ of $D$, where $P$ is a path from $s$ to $s'$, take the {\em last} arc $a=(x,y)$ of the {\em first} path $P$ such that $x\in A$. If $y\in A'$, then the symmetric arc $a'$ (which is in $P'$) also goes from $A$ to $A'$ (by (O2)), and therefore, $g$ uses at least $2\delta$ units of the capacity of arcs from $A$ to $A'$. Associate $g$ with the pair $a,a'$. Now let $y\not\in A'$. Since $y\not\in A$, $y$ is either in $Y:=M$ or in $Y:=X_i$ for some $i$. The choice of $a$ and (O1),(O5),(O6) imply that $P$ leaves $Y$ by an arc $b$ from $Y$ to $A'$. Then the symmetric arc $b'$ (which is in $P'$) goes from $A$ to $Y$ (since $Y$ is self-symmetric), and therefore, $g$ uses at least $2\delta$ units of the capacity
$u(A,Y)$. Associate $g$ with the pair $a,b'$ (possibly $a=b'$). Note that at least one unit of each capacity $u(A,X_i)$ is left unused under this canonical association of the elementary $s$ to $s'$ flows of $D$ with arcs from $A$ to $V-A$ (since $u(A,X_i)$ is odd, by (O4)). These arguments show that $|f|\le u({\cal B})$.
Next we show that the two values in the theorem are equal. Let $f$ be a maximum IS-flow. By Corollary~\ref{tm:spaug}, the split-graph $S=S(G^+,u_f)$ contains no $s$ to $s'$ r-path, so it must contain an $s$-barrier ${\cal B} = (A; X_1, \ldots, X_k)$, by Theorem~\ref{tm:bar}.
Let $e^i$ be the (unique) arc from $A$ to $X_i$ in $S$ (see (B4) in Section 2). By the construction of $S$, it follows that the residual capacity $u_f$ of every arc from $A$ to $X_i$ in $G^+$ is zero except for the arc $\omega(e^i)$, whose residual capacity is one. Hence,
\begin{itemize} \item[(i)] if $e^i$ was formed by splitting an arc $a\in E$, then $a$ goes from $A$ to $X_i$, and $f(a)=u(a)-1$;
\item[(ii)] if $e^i$ was formed by splitting $a^R$ for $a\in E$, then $a$ goes from $X_i$ to $A$, and $f(a)=1$;
\item[(iii)] all arcs from $A$ to $X_i$ in $G$, except $a$ in case (i), are saturated by $f$;
\item[(iv)] all arcs from $X_i$ to $A$ in $G$, except $a$ in case (ii), are free of flow. \end{itemize}
Furthermore, comparing arcs in $S$ and $G$, we observe that:
\begin{itemize} \item[(v)] property (B7) implies that the arcs from $A$ to $A'\cup M$ are saturated and the arcs from $A'\cup M$ to $A$ are free of flow;
\item[(vi)] property (B5) implies (O5) and (B6) implies (O6). \end{itemize}
Properties (i)--(iv),(O5),(O6) together with Corollary~\ref{cor:even} provide (O4). So ${\cal B}$ is an odd $s$-barrier in $G$. We have
$|f|=f(A,V-A)-f(V-A,A)=u(A,V-A)-k$ (in view of (i)--(v)). Hence,
$|f|=u({\cal B})$.
\end{proof}
\section{\Large Integer and Linear Programming Formulations}\label{sec:lin}
Although the methods for solving MSFP developed in subsequent sections do not explicitly use the linear programming aspects exhibited in this section, these aspects help in understanding the structure of IS-flows.
MSFP is stated as an integer program in a straightforward way. We use function rather than vector notation. For functions $g,h$ on a set $S$, $g\cdot h$ denotes the inner product $\sum_{x\in S}g(x)h(x)$. Assuming that no arc of $G$ enters the source $s$ (see the Remark in the previous section), MSFP can be written as follows:
\begin{eqnarray}
\mbox{\bf maximize}\;\; |f| = \sum\nolimits_{(s,v) \in E} f(s,v)
& & \mbox{\bf subject to} \label{eq:1} \\
f(a) \geq 0 & & \forall a \in E \label{eq:2} \\
f(a) \leq u(a) & & \forall a \in E \label{eq:3} \\
-\sum\nolimits_{(u,v) \in E} f(u,v)
+ \sum\nolimits_{(v,w) \in E} f(v,w) = 0
& & \forall v \in V - \{s,s'\} \label{eq:4} \\
f(a) - f(\sigma(a)) = 0 & & \forall a \in E \label{eq:5} \\
f(a) \;\;\;\mbox{integer} & & \forall a \in E \label{eq:6} \end{eqnarray}
A linear programming formulation for MSFP is obtained by replacing the integrality condition (\ref{eq:6}) by linear constraints related to certain objects that we call odd fragments in $G$. The correctness of the resulting linear program will be shown by use of the max-min relation between IS-flow and odd barriers in Theorem~\ref{tm:m-m}. Alternatively, one can try to derive it from a linear programming characterization of integer bidirected flows in~\cite{EJ-70} (using the reduction as in Section~\ref{sec:back}).
An {\em odd fragment} is a pair $\rho = (V_\rho, U_\rho)$, where $V_\rho$ is a {\em self-symmetric} set of nodes with $s\not\in V_\rho$, and $U_\rho$ is a subset of arcs entering $V_\rho$ such that the total capacity $u(U_\rho)$ is odd. The {\em characteristic function} $\chi_\rho$ of $\rho$ is the function on $E$ defined by
\begin{equation}\label{eq:ch_of}
\chi_\rho (a) := \left\{
\begin{array}{rl}
1 & \mbox{if} \;\; a \in U_\rho \cup \sigma(U_\rho),\\
-1 & \mbox{if} \;\; a \in \delta(V_\rho) - (U_\rho \cup
\sigma(U_\rho)),\\
0 & \mbox{otherwise}.
\end{array}
\right. \end{equation} Here $\delta(V_\rho)$ is the set of arcs with one end in $V_\rho$ and the other in $V - V_\rho$. We denote the set of odd fragments by $\Omega$.
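The characteristic function $\chi_\rho$ can be evaluated arc by arc from~(\ref{eq:ch_of}). A sketch under the same hypothetical encoding as before (`arcs` maps arc ids to `(tail, head)`, `mate` gives $\sigma$):

```python
def chi_rho(a, arcs, V_rho, U_rho, mate):
    """Characteristic function of an odd fragment rho = (V_rho, U_rho):
    +1 on U_rho and sigma(U_rho), -1 on the rest of delta(V_rho), 0 elsewhere."""
    sym_U = U_rho | {mate[b] for b in U_rho}
    if a in sym_U:
        return 1
    x, y = arcs[a]
    if (x in V_rho) != (y in V_rho):   # a has exactly one end in V_rho
        return -1
    return 0
```

Since $V_\rho$ is self-symmetric, the mate of an arc entering $V_\rho$ leaves $V_\rho$, so $\sigma(U_\rho)\subseteq\delta(V_\rho)$ as the definition requires.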
Let $f$ be a (feasible) IS-flow and $\rho\in\Omega$. By~(\ref{eq:ch_of}) and the symmetry of $u$, we have $f\cdot\chi_\rho\le u(U_\rho)+u(\sigma(U_\rho))=2u(U_\rho)$. Moreover, $f\cdot\chi_\rho$ is at most $2u(U_\rho)-2$; this immediately follows from Corollary~\ref{cor:even} and the fact that $u(U_\rho)$ is odd. This gives new linear constraints for MSFP:
\begin{equation}
f\cdot \chi_\rho\le 2u(U_\rho)-2\quad \mbox{for each}\;\; \rho\in\Omega.
\label{eq:8}
\end{equation}
Adding these constraints enables us to drop the symmetry constraints (\ref{eq:5}) and the integrality constraints (\ref{eq:6}) without changing the optimum value of the linear program. This fact is implied by the following theorem.
\begin{theorem}\label{tm:opt} Every maximum IS-flow is an optimal solution to the linear program {\rm (\ref{eq:1})--(\ref{eq:4}), (\ref{eq:8})}. \end{theorem}
\begin{proof} Assign a dual variable $\pi(v)\in{\mathbb R}$ (a {\it potential}) to each node $v\in V$, $\gamma(a)\in{\mathbb R}_+$ (a {\it length}) to each arc $a\in E$, and $\xi(\rho)\in{\mathbb R}_+$ to each odd fragment $\rho\in\Omega$. Consider the linear program:
\begin{eqnarray}
\mbox{\bf minimize}\;\; \psi(\pi,\gamma,\xi):=\sum_E u(a)\gamma(a)+
\sum_{\Omega}(2u(U_\rho)-2)\xi(\rho)
& & \mbox{\bf subject to} \label{eq:9} \\
\gamma(a)\ge 0 & & \forall a\in E \label{eq:10} \\
\xi(\rho)\ge 0 & & \forall \rho\in\Omega \label{eq:11} \\
\pi(s)=0 & & \label{eq:12} \\
\pi(s')=1 & & \label{eq:13} \\
\pi(v)-\pi(w)+\gamma(a)+\sum_{\Omega}\xi(\rho)\chi_\rho(a)
\ge 0 & & \forall a=(v,w)\in E .
\label{eq:14} \end{eqnarray}
In fact, (\ref{eq:9})--(\ref{eq:14}) is dual to linear program~(\ref{eq:1})--(\ref{eq:4}),(\ref{eq:8}). (To see this, introduce an extra arc $(s',s)$, add the conservation constraints for $s$ and $s'$, and replace the objective (\ref{eq:1}) by ${\rm max}\{f(s',s)\}$. The latter generates the dual constraint $\pi(s')-\pi(s)\ge 1$. We can replace it by the equality or impose~(\ref{eq:12})--(\ref{eq:13}).) Therefore,
\begin{equation}
{\rm max}\;|f| ={\rm min}\; \psi(\pi,\gamma,\xi), \label{eq:15}
\end{equation}
where the maximum and minimum range over the corresponding feasible solutions.
We assert that every maximum IS-flow $f$ achieves the maximum in (\ref{eq:15}). To see this, choose an odd barrier $ {\cal B} = (A; X_1, \ldots, X_k) $ of minimum capacity $u({\cal B})$. For $i=1,\ldots,k$, let $U_i$ be the set of arcs from $A$ to $X_i$; then $\rho_i=(X_i,U_i)$ is an odd fragment for $G,u$. Define $\pi(v)$ to be 0 for $v\in A$, 1 for $v\in A'$, and $1/2$ otherwise. Define $\gamma(a)$ to be 1 for $a\in (A,A')$, $1/2$ for $a\in (A,M)\cup (M,A')$, and 0 otherwise, where $M=V-(A\cup A'\cup X_1\cup\ldots\cup X_k)$. Define $\xi(\rho_i)=1/2$ for $i=1,\ldots,k$, and $\xi(\rho)=0$ for the other odd fragments in $(G,u)$.
One can check that (\ref{eq:14}) holds for all arcs $a$ (e.g., both values $\pi(w)-\pi(v)$ and $\gamma(a)+ \sum_{\Omega}\xi(\rho)\chi_\rho(a)$ are equal to 1 for $a=(v,w)\in (A,A')$, and 1/2 for $a=(v,w)\in (A,L)\cup (L,A')$, where $L:=V-(A\cup A')$). Thus $\pi,\gamma,\xi$ are feasible.
Using the fact that $u(A,M)=u(M,A')$, we observe that $u\cdot\gamma=u(A,A')+u(A,M)$. Also
$$
\sum_{\Omega}(2u(U_\rho)-2)\xi(\rho)= \sum_{i=1}^k
\frac{1}{2}(2u(U_{i})-2)=\left(\sum_{i=1}^ku(A,X_i)\right)-k.
$$ This implies $\psi(\pi,\gamma,\xi)=u({\cal B})$, and now the result follows from Theorem~\ref{tm:m-m}. \end{proof}
\section{\Large Algorithm Using a Good Pre-Solution} \label{sec:gisa}
Anstee~\cite{ans-85,ans-87} developed efficient methods for b-factor and b-matching problems (unweighted or weighted) based on the idea that a good pre-solution can easily be found by solving a corresponding flow problem. In this section we adapt his approach to solve the maximum IS-flow problem in a skew-symmetric network $N= (G=(V,E),u)$. The algorithm that we devise is relatively simple; it finds a ``nearly optimal'' IS-flow and then makes $O(n)$ augmentations to obtain a maximum IS-flow. The algorithm consists of four stages.
The {\em first\/} stage ignores the fact that $N$ is skew-symmetric and finds an integer maximum flow $g$ in $N$ by use of a max-flow algorithm. Then we set $h(a):=(g(a)+g(a'))/2$ for all arcs $a\in E$. Since $\mbox{div}_h(s)=\mbox{div}_g(s)/2-\mbox{div}_g(s')/2 =\mbox{div}_g(s)$, $h$ is a maximum flow as well. Also $h$ is symmetric and {\em half-integer}. Let $Z$ be the set of arcs on which $h$ is not integer. If $Z=\emptyset$, then $h$ is already a maximum IS-flow; so assume this is not the case.
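The symmetrization step of the first stage is a one-line computation. A minimal sketch, assuming arcs are ids and `mate` gives $\sigma$ (exact half-integers are kept via `Fraction`):

```python
from fractions import Fraction

def symmetrize(g, mate):
    """First stage: h(a) = (g(a) + g(a'))/2.  Returns h together with the
    set Z of arcs on which h is half-integer but not integer."""
    h = {a: Fraction(g[a] + g[mate[a]], 2) for a in g}
    Z = {a for a in g if h[a].denominator == 2}
    return h, Z
```

By construction $h(a)=h(a')$ for every arc, and $a\in Z$ exactly when $g(a)+g(a')$ is odd.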
The {\em second\/} stage applies simple transformations of $h$ to reduce $Z$. Let $H=(X,Z)$ be the subgraph of $G$ induced by $Z$. Obviously, for each $x\in V$, $\mbox{div}_h(x)$ is an integer, so $x$ is incident to an even number of arcs in $Z$. Therefore, we can decompose $H$ into simple, not necessarily directed, cycles $C_1,\ldots,C_r$ which are pairwise arc-disjoint. Moreover, we can find, in linear time, a decomposition in which each cycle $C_i$ is either self-symmetric ($C_i=\sigma(C_i)$) or symmetric to another cycle $C_j$ ($C_i=\sigma(C_j)$).
To do this, we start with some node $v_0\in X$ and grow in $H$ a simple (undirected) path $P=(v_0,e_1,v_1,\ldots,e_q,v_q)$ such that the mate $v'_i$ of each node $v_i$ is not in $P$. At each step, we choose in $H$ an arc $e\ne e_q$ incident to the last node $v_q$ ($e$ exists since $H$ is eulerian); let $x$ be the other end node of $e$. If none of $x,x'$ is in $P$, then we add $e$ to $P$. If some of $x,x'$ is a node of $P$, $v_i$ say, then we shorten $P$ by removing its end part from $e_{i+1}$ and delete from $H$ the arcs
$e_{i+1},\ldots,e_q,e$ and their mates. One can see that the arcs deleted induce a self-symmetric cycle (when $x'=v_i$) or two disjoint symmetric cycles (when $x=v_i$). We also remove the isolated nodes created by the arc deletions and change the initial node $v_0$ if needed. Repeating the process for the new current graph $H$ and path $P$, we eventually obtain the desired decomposition ${\cal C}$, in $O(|Z|)$ time.
Next we examine the cycles in ${\cal C}$. Each pair $C,C'$ of symmetric cycles is canceled by sending a half unit of flow through $C$ and through $C'$, i.e., we increase (resp. decrease) $h(e)$ by 1/2 on each forward (resp. backward) arc $e$ of these cycles. The resulting function $h$ is symmetric, and $\mbox{div}_h(x)$ is preserved at each node $x$, whence $h$ is again a maximum symmetric flow. Now suppose that two self-symmetric cycles $C$ and $D$ meet at a node $x$. Then they meet at $x'$ as well. Concatenating the $x$ to $x'$ path in $C$ and the $x'$ to $x$ path in $D$, and concatenating the remaining parts of $C$ and $D$, we obtain a pair of symmetric cycles and cancel these cycles as above. These cancellations result in ${\cal C}$ consisting of pairwise {\em node-disjoint} self-symmetric cycles, say $C_1,\ldots,C_k$. The second stage takes $O(m)$ time.
The {\it third\/} stage transforms $h$ into an IS-flow $f$ whose value $|f|$ is at most $k$ units below $|h|$. For each $i$, fix a node $t_i$ in $C_i$ and change $h$ on $C_i$ by sending a half unit of flow through the $t_i$ to $t'_i$ path in $C_i$ and through the reverse of the $t'_i$ to $t_i$ path in it. The resulting function $h$ is integer and symmetric, and the divergences are preserved at all nodes except for the nodes $t_i$ and $t'_i$, where we have $\mbox{div}_h(t_i)=-\mbox{div}_h(t'_i)=1$ for each $i$ (assuming, without loss of generality, that all $t_i$'s are different from $s'$). Therefore, $h$ is, in essence, a multiterminal IS-flow with sources $s,t_1,\ldots,t_k$ and sinks $s',t'_1,\ldots,t'_k$. A genuine IS-flow $f$ from $s$ to $s'$ is extracted by reducing $h$ on some $h$-regular paths. More precisely, we add to $G$ artificial arcs $e_i=(s,t_i)$, $i=1, \ldots, k$ and their mates, extend $h$ by ones to these arcs and construct a symmetric decomposition ${\cal D}$ (defined in Section~\ref{sec:theo}) for the obtained function $h'$ in the resulting graph $G'$ (clearly $h'$ is an IS-flow of value
$|h|+k$).
Let ${\cal D}'$ be the set of elementary flows in ${\cal D}$ formed by the paths or cycles which contain artificial arcs. Then $\delta=1$ for each $(P,P',\delta)\in{\cal D}'$. Define $f':=h'-
\sum(\chi^P+\chi^{P'}: (P,P',1)\in{\cal D}')$. Then $f'$ is an IS-flow in $G'$, and $|f'|\ge|h'|-2k\ge|h|-k$. Moreover, since $f(e_i)=0$ for $i=1,\ldots,k$, the restriction $f$ of $f'$ to $E$ is an IS-flow in
$G$, and $|f|=|f'|$. Thus, $|f|\ge|h|-k$, and now the facts that $k\le n/2$ (as the nodes $t_1,\ldots,t_k,t'_1,\ldots,t'_k$ are different) and that $h$ is a maximum flow in $N$ imply that the value of $f$ differs from the maximum IS-flow value by $O(n)$. The third stage takes $O(nm)$ time (the time needed to construct a symmetric decomposition of $h'$).
The final, {\em fourth}, stage transforms $f$ into a maximum IS-flow. Each iteration applies the r-reachability algorithm (RA) mentioned in Section~\ref{sec:back} to the split-graph $S(G^+,u_f)$ in order to find a $u_f$-regular $s$ to $s'$ path $P$ in $G^+$ and then augment the current IS-flow $f$ by the elementary flow $(P,P',\delta_{u_f}(P))$ as explained in Section~\ref{sec:theo}. Thus, a maximum IS-flow in $N$ is constructed in $O(n)$ iterations. Since the RA runs in $O(m)$ time (by Theorem~\ref{tm:ratime}), the fourth stage takes $O(nm)$ time.
Summing up the above arguments, we conclude with the following.
\begin{theorem} \label{tm:gisa}
The above algorithm finds a maximum IS-flow in $N$ in $O(M(n,m)+nm)$ time, where $M(n,m)$ is the running time of the max-flow procedure it applies.
\end{theorem}
\section{\Large Shortest R-Augmenting Paths and Blocking IS-Flows} \label{sec:sbf}
Theorem \ref{tm:aug} and Corollary \ref{tm:spaug} prompt an alternative method for finding a maximum IS-flow in a skew-symmetric network $N=(G,u)$, which is analogous to the method of Ford and Fulkerson for usual flows. It starts with the zero flow, and at each iteration, the current IS-flow $f$ is augmented by an elementary flow in $(G^+,u_f)$ (found by applying the r-reachability algorithm to $S(G^+,u_f)$). Since each iteration increases the value of $f$ by at least two, a maximum IS-flow is constructed in pseudo-polynomial time. In general, this method is not competitive with the method of Section~\ref{sec:gisa}.
More efficient methods involve the concepts of shortest r-augmenting paths and shortest blocking IS-flows that we now introduce. Let $g$ be an IS-flow in a skew-symmetric network $(H=(V,W),h)$. We call $g(W)$ ($=\sum_{e\in W} g(e)$) the {\em volume\/} of $g$. Considering a symmetric decomposition $D=\{(P_i,P'_i,\delta_i): i=1,\ldots,k\}$ of $g$, we have
$$
g(W)=\sum(\delta_i|P_i|+\delta_i|P'_i| : i=1,\ldots,k) \ge
|g|{\rm min}\{|P_i|: i=1,\ldots,k\}. $$ This implies
\begin{equation}
g(W)\ge |g|\,\mbox{r-dist}_{S(H,h)}(s,s'), \label{eq:5-1}
\end{equation} where $\mbox{r-dist}_{H'}(x,y)$ denotes the minimum length of a regular $x$ to $y$ path in a skew-symmetric graph $H'$ (the {\em regular distance\/} from $x$ to $y$). We say that an IS-flow $g$ is
\begin{itemize} \item[(i)] {\em shortest\/} if (\ref{eq:5-1}) holds with equality, i.e., some (equivalently, any) symmetric decomposition of $g$ consists of shortest $h$-regular paths from $s$ to $s'$;
\item[(ii)] {\em totally blocking\/} if there is no $(h-g)$-regular path from $s$ to $s'$ in $H$, i.e., we cannot augment $g$ using only residual capacities in $H$ itself;
\item[(iii)] {\em shortest blocking\/} if $g$ is shortest (as in (i)) and \begin{equation}
\mbox{r-dist}_{S(H,h-g)}(s,s') > \mbox{r-dist}_{S(H,h)}(s,s'). \label{eq:5-2} \end{equation} \end{itemize}
Note that a shortest blocking IS-flow is not necessarily totally blocking, and vice versa.
Given a skew-symmetric network $N=(G,u)$, the {\em shortest r-augmenting path method (SAPM)\/}, analogous to the method of Edmonds and Karp~\cite{EK-72} for usual flows, starts with the zero flow, and each iteration augments the current IS-flow $f$ by a shortest elementary flow $g=(P,P',\delta_{u_f}(P))$.
The {\em shortest blocking IS-flow method (SBFM)\/}, analogous to Dinits' method~\cite{din-70}, starts with the zero flow, and each {\em phase} (big iteration) augments the current IS-flow $f$ by performing the following two steps:
\begin{itemize}
\item[(P1)] Find a shortest blocking IS-flow $g$ in $(G^+,u_f)$.
\item[(P2)] Update $f:=f\oplus g$.
\end{itemize}
Both methods terminate when $f$ no longer admits r-augmenting paths (i.e., $g$ becomes the zero flow). The following observation is crucial for our methods.
\begin{lemma}\label{lm:incr} Let $g$ be a shortest IS-flow in $(G^+,u_f)$, and let $f':=f\oplus g$. Let $k$ and $k'$ be the minimum lengths of r-augmenting paths for $f$ and $f'$, respectively. Then $k'\ge k$. Moreover, if $g$ is a shortest blocking IS-flow, then $k'>k$. \end{lemma}
\begin{proof}
Take a shortest $u_{f'}$-regular path $P$ from $s$ to $s'$ in $G^+$. Then $|P|=k'$ and $g'=(P,P',1)$ is an elementary flow in $(G^+,u_{f'})$.
Note that $\mbox{supp}(g)$ does not contain opposed arcs $a=(x,y)$ and $b=(y,x)$. Otherwise decreasing $g$ by one on each of $a,b,a',b'$
(which are, obviously, distinct), we would obtain the IS-flow $\widetilde g$ in $(G^+,u_f)$ such that $|\widetilde g|=|g|$
and $\widetilde g(E^+)<g(E^+)$, which is impossible because $\widetilde g(E^+)\ge k|\widetilde g|$ and $g(E^+)=k|g|$. This implies that each arc $a$ in the set $Z:=\{a\in E^+: g(a^R)=0\}$ satisfies
\begin{equation}
u_{f'}(a)=u_f(a)-g(a). \label{eq:5-3}
\end{equation}
If $\mbox{supp}(g')\subseteq Z$, then $g'$ is a feasible IS-flow in
$(G^+,u_f)$ (by~(\ref{eq:5-3})), whence $k'=g'(E^+)/|g'|\ge k$. Moreover, if, in addition, $g$ is a shortest blocking IS-flow, then (\ref{eq:5-2}) and the fact that $g'\le u_f-g$ (by (\ref{eq:5-3})) imply $k'>k$.
Now suppose there is an arc $e\in E^+$ such that $g'(e)>0$ and $g(e^R)>0$. For each $a\in E^+$, put $\lambda(a):={\rm max}\{0,g(a)+g'(a) -g(a^R)-g'(a^R)\}$. One can check that $\lambda(a)\le u_f(a)$ for all arcs $a$ and that $\mbox{div}_\lambda(v)=0$ for all nodes $v\ne s,s'$. Therefore, $\lambda$ is an IS-flow in $(G^+,u_f)$ with
$|\lambda|=|g|+|g'|=|g|+2$. Also $\lambda(E^+)<g(E^+)+g'(E^+)$ since for the $e$ above, $\lambda(e)+\lambda(e^R)<g'(e)+g(e^R)$. We have
$$
2k'=g'(E^+)>\lambda(E^+)-g(E^+)\ge k(|g|+2)-k|g|=2k,
$$ yielding $k'>k$. \end{proof}
Thus, each iteration of SAPM does not decrease the minimum length of an r-augmenting path, and each phase of SBFM increases this length. This gives upper bounds on the numbers of iterations.
\begin{corollary}\label{cor:sapm} SAPM terminates in at most $(n-1)m$ iterations.
\end{corollary}
(This follows by observing, in the proof of Lemma~\ref{lm:incr}, that on the iterations with the same length of shortest r-augmenting paths, the subgraph of $G^+$ induced by the arcs contained in such paths is monotone nonincreasing, and each iteration reduces the capacity of some arc of this subgraph, as well as the capacity of its mate, to zero or one.)
\begin{corollary}\label{cor:n-1} SBFM terminates in at most $n-1$ phases.
\end{corollary}
As mentioned above, SBFM can be considered as a skew-symmetric analog of Dinits' blocking flow algorithm. Recall that each phase of that algorithm constructs a blocking flow in the subnetwork $H$ formed by the nodes and arcs of shortest augmenting paths. Such a network is acyclic (moreover, layered), and a blocking flow in $H$ is easily constructed in $O(nm)$ time.
The problem of finding a shortest blocking IS-flow ((P1) above) is more complicated. Let $H$ be the subgraph of $G^+$ formed by the nodes and arcs contained in shortest $u_f$-regular $s$ to $s'$ paths. Such an $H$ need not be acyclic (a counterexample is not difficult). In Section~\ref{sec:iter} we will show that problem (P1) can be reduced to a seemingly easier task, namely, to finding a totally blocking IS-flow in a certain acyclic network $(\overline H,\overline h)$. Such a network arises when the shortest r-path algorithm from~\cite{GK-96} is applied to the split-graph $S(G^+,u_f)$ with unit arc lengths. First, however, we need to tell more about the r-reachability and shortest r-path algorithms from~\cite{GK-96}.
\section{\Large Properties of Regular and Shortest Regular Path Algorithms} \label{sec:shortp}
In this section we exhibit certain properties of the algorithms in~\cite{GK-96}, referring the reader to that paper for details. We also establish an additional fact (Lemma~\ref{lm:acyc}), which will be used later.
\subsection{The Regular Reachability Algorithm (RA)}\label{sec:ra}
Let $\Gamma=(V,E)$ be a skew-symmetric graph with source $s$ and sink $s'=\sigma(s)$ (as before, $\sigma$ is the symmetry map). A {\em fragment} (or an $s$-{\em fragment}) in $\Gamma$ is a pair $\phi=(V_\phi,e_\phi = (v,w))$, where $V_\phi $ is a self-symmetric set of nodes of $\Gamma$ with $s\not\in V_\phi$ and $e_\phi$ is an arc entering $V_\phi$, i.e., $v\not\in V_\phi\ni w$ (cf. the definition of odd fragments in Section~\ref{sec:lin}). We refer to $e_\phi$ and $e'_\phi$ as the {\em base} and {\em anti-base} arcs of $\phi$, respectively. Let us say that the fragment is {\em well-reachable} if
(i) for each node $x\in V_\phi$, there is an r-path from $w$ to $x$ in the subgraph induced by $V_\phi$ (and therefore, an r-path from $x$ to $w'= \sigma(w)$), and
(ii) there is an r-path from $s$ to $v$ disjoint from $V_\phi$.
The {\em trimming operation} applied to $\phi$ (which is analogous to shrinking a blossom in matching algorithms) transforms $\Gamma$ by removing the nodes of $V_\phi -\{w, w'\}$ and modifying the arcs as follows.
\begin{itemize}
\item[(T1)] Each arc $a=(x,y) \in E$ such that either $x,y \in V - V_\phi$ or $a = e_\phi$ or $a=e'_\phi$ remains an arc from $x$ to $y$.
\item[(T2)] Each arc $(x,y) \in E-\{e'_\phi\}$ that leaves $V_\phi$ is replaced by an arc from $w$ to $y$, and each arc $(x,y) \in E-\{e_\phi\}$ that enters $V_\phi$ is replaced by an arc from $x$ to $w'$.
\item[(T3)] Each arc with both ends in $V_\phi$ is replaced by an arc from $w$ to $w'$.
\end{itemize}
(A variant of trimming deletes all arcs in (T3).) The image of an arc $a$ in the new graph is denoted again by $a$ (so its end nodes can be changed, but not its name). Figure~\ref{fig:trim} illustrates fragment trimming. The new $\Gamma$ is again skew-symmetric.
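As an illustration, the trimming rules (T1)--(T3) can be sketched in a few lines of code. The dictionary-of-arcs representation and the function signature below are our own illustrative choices, not part of the algorithm of~\cite{GK-96}; removal of the nodes of $V_\phi-\{w,w'\}$ is left implicit (they simply no longer occur as arc ends).

```python
def trim_fragment(arcs, V_phi, e_phi, e_phi_anti, w, w_anti):
    """Apply trimming rules (T1)-(T3) to a fragment.

    arcs: dict mapping arc name -> (tail, head); V_phi: node set of the
    fragment; e_phi / e_phi_anti: names of the base and anti-base arcs;
    w, w_anti: head of the base arc and its symmetric node.
    Arc names are preserved, as in the text.
    """
    new_arcs = {}
    for name, (x, y) in arcs.items():
        inside_x, inside_y = x in V_phi, y in V_phi
        if (not inside_x and not inside_y) or name in (e_phi, e_phi_anti):
            new_arcs[name] = (x, y)            # (T1): kept unchanged
        elif inside_x and not inside_y:
            new_arcs[name] = (w, y)            # (T2): arc leaving V_phi
        elif not inside_x and inside_y:
            new_arcs[name] = (x, w_anti)       # (T2): arc entering V_phi
        else:
            new_arcs[name] = (w, w_anti)       # (T3): arc inside V_phi
    return new_arcs
```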
\begin{figure}
\caption{Fragment trimming example}
\label{fig:trim}
\end{figure}
The algorithm RA relies on the following property.
\begin{statement}\label{st:sym} {\rm \cite{GK-96}} If $\phi$ is a well-reachable fragment, then trimming $\phi$ preserves the existence (or non-existence) of a regular path from $s$ to $s'$. \end{statement}
RA searches for a regular $s$ to $s'$ path in $\Gamma$, starting with the trivial path $P=s$. Each iteration either extends the current r-path $P$ or reveals a well-reachable fragment and trims it, producing the new current graph $\Gamma$ and accordingly updating $P$. It terminates when either an $s$ to $s'$ r-path $\overline P$ or an $s$-barrier $\overline{\cal B}$ in the final graph $\overline\Gamma$ is found (cf. Theorem~\ref{tm:bar}). The postprocessing stage extends $\overline P$ into a regular $s$ to $s'$ path $P$ of the initial $\Gamma$ (cf. Statement~\ref{st:sym}) in the former case (the {\em path restoration procedure}) and extends $\overline{\cal B}$ into a barrier ${\cal B}$ of the initial $\Gamma$ in the latter case (the {\em barrier restoration procedure}).
The fragments of current graphs revealed by RA determine fragments of the initial $\Gamma$ in a natural way; all fragments are well-reachable. Moreover, the set $\Phi$ of these fragments of the initial $\Gamma$ is {\em well-nested}. This means that
\begin{itemize}
\item[(F1)] for distinct $\phi,\psi\in\Phi$, either $V_\phi\subset V_\psi$ or $V_\psi\subset V_\phi$ or $V_\phi\cap V_\psi=\emptyset$, and
\item[(F2)] for $\phi,\psi\in\Phi$, if $V_{\psi}\subset V_\phi $ and $e_{\psi}\in\delta(V_\phi)$ then $e_{\psi}=e_\phi$, and if $V_{\phi}\cap V_{\psi}=\emptyset$ and $e_{\psi}\in\delta(V_{\phi})$ then $e_{\phi}\not\in\delta(V_{\psi})$.
\end{itemize}
(Recall that for $X\subseteq V$, $\delta(X)$ is the set of arcs with one end in $X$ and the other in $V-X$.) Let us say that a path $R$ in $\Gamma$ is {\em compatible with} $\Phi$ if for each $\phi\in\Phi$,
$p:=|R\cap\delta(V_\phi)|\le 2$, and if $p=2$ then $R$ contains exactly one of $e_\phi,e'_\phi$. The following additional properties (relying on (T1)--(T3)) are important:
\begin{myitem} any regular $s$ to $s'$ path $\overline P$ in the final graph $\overline\Gamma$ is extendable to a regular $s$ to $s'$ path $P$ compatible with $\Phi$ in the initial $\Gamma$, and the path restoration procedure applied to
$\overline P$ constructs such a $P$ in $O(|P|+d)$ time, where $d$ is the total size of maximal fragments in $\Phi$ traversed by $P$;
\label{eq:P}
\end{myitem}
\begin{myitem}
for each $\phi\in\Phi$ and each arc $a\ne e'_\phi$ leaving $V_\phi$, there exists an r-path $Q_\phi(a)$ compatible with $\Phi$ whose first arc is $e_\phi$, whose last arc is $a$, and whose intermediate nodes all lie in $V_\phi$; such a path can be constructed by (a phase of) the path restoration procedure in $O(|V_\phi|)$ time.
\label{eq:Q}
\end{myitem}
A fast implementation of RA (supported by the disjoint set union data structure of~\cite{GT-85}) runs in linear time, as indicated in Theorem~\ref{tm:ratime}.
\subsection{The Shortest Regular Path Algorithm (SRA)}\label{sec:srpa}
We now consider the shortest regular path problem (SRP) in a skew-symmetric graph $\Gamma=(V,E)$ with {\em nonnegative symmetric} lengths $\ell(e)$ of the arcs $e\in E$: find a minimum length regular path from $s$ to $s'$. One may assume that $s'$ is r-reachable from $s$. The dual problem involves the above-mentioned fragments. Define the characteristic function $\chi_\phi$ of a fragment $\phi = (V_\phi, e_\phi)$ by
\begin{equation}\label{eq:ch_f}
\chi_\phi(a):= \left\{
\begin{array}{rl}
1 & \mbox{for $a=e_\phi, e'_\phi$}, \\
-1 & \mbox{for $a \in \delta(V_\phi) - \{e_\phi, e'_\phi\} $}, \\
0 & \mbox{for the remaining arcs of $\Gamma$}.
\end{array}
\right.
\end{equation}
(Compare with (\ref{eq:ch_of}).) For a function $\pi: V\to{\mathbb R}$ (of node {\em potentials}) and a nonnegative function $\xi$ on a set $\Phi$ of fragments, define the {\em reduced length} of an arc $e = (x,y)$ to be $$
\ell^\xi_\pi (e): = \ell(e) + \pi(x) - \pi(y) +
\sum_{\phi \in \Phi} \xi(\phi)\chi_\phi(e) . $$
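For concreteness, the characteristic function (\ref{eq:ch_f}) and the reduced length $\ell^\xi_\pi$ can be sketched as follows; the tuple encoding of a fragment (node set, base arc, anti-base arc) is illustrative, not the paper's data structure.

```python
def chi(phi, arc):
    """Characteristic function of a fragment phi = (V_phi, e_phi, e_phi_anti).

    Returns 1 on the base/anti-base arcs, -1 on the other arcs of
    delta(V_phi), and 0 elsewhere. An arc is a pair (tail, head).
    """
    V_phi, e_phi, e_phi_anti = phi
    if arc in (e_phi, e_phi_anti):
        return 1
    x, y = arc
    if (x in V_phi) != (y in V_phi):   # arc crosses delta(V_phi)
        return -1
    return 0

def reduced_length(ell, pi, fragments, xi, arc):
    """ell^xi_pi(e) = ell(e) + pi(x) - pi(y) + sum_phi xi(phi)*chi_phi(e)."""
    x, y = arc
    return ell[arc] + pi[x] - pi[y] + sum(
        xi[i] * chi(phi, arc) for i, phi in enumerate(fragments))
```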
An optimality criterion for SRP can be formulated as follows.
\begin{theorem} {\rm \cite{GK-96}}\label{tm:sp} A regular path $P$ from $s$ to $s'$ is a shortest r-path if and only if there exist a potential $\pi: V\to{\mathbb R}$, a set $\Phi$ of fragments, and a {\em positive} function $\xi$ on $\Phi$ such that
\begin{eqnarray}
\ell^\xi_\pi (e) \geq 0 & \mbox{for each} & e \in E; \label{eq:6-2} \\
\ell^\xi_\pi (e) = 0 & \mbox{for each} & e\in P; \label{eq:6-3}\\
\chi^P \cdot \chi_\phi = 0 & \mbox{for each} & \phi\in\Phi.
\label{eq:6-4} \end{eqnarray} \end{theorem}
The {\em shortest r-path algorithm (SRA)} from \cite{GK-96} implicitly maintains $\pi,\Phi,\xi$ in the input graph $\Gamma$ and iteratively modifies the graph by trimming certain fragments. Let $\Gamma^0$ be the subgraph of the current $\Gamma$ with the same set of nodes and with the arcs having zero reduced length, called the current {\em 0-subgraph} (recall that the arcs of the current graph are identified with the corresponding arcs of the initial one). Each iteration applies the above r-reachability algorithm RA to search for a regular $s$ to $s'$ path in $\Gamma^0$. If such a path is found, the algorithm terminates and outputs this path to a postprocessing stage. If such a path does not exist, then, using the barrier ${\cal B}=(A;X_1,\ldots,X_k)$ in $\Gamma^0$ constructed by RA, the iteration trims the fragments determined by the sets $X_i$ and updates $\pi,\Phi,\xi$, modifying $\Gamma^0$. The reduced lengths of the arcs within the newly and previously extracted fragments, as well as of their base and anti-base arcs, are not changed.
Let $\overline\Gamma$ and $\overline\Gamma^0$ denote the final graph $\Gamma$ and the 0-subgraph in it, respectively, and $\overline P$ the regular $s$ to $s'$ path in $\overline\Gamma^0$ found by the algorithm. Let $\Gamma^0$ stand for the 0-subgraph of the initial graph $\Gamma$ (concerning the reduced arc lengths determined by the resulting $\pi,\Phi,\xi$). We call $\Gamma^0$ and $\overline\Gamma^0$ the {\em full} and {\em trimmed 0-graphs}, respectively. The postprocessing stage applies the path restoration procedure of RA to extend $\overline P$ into a regular $s$ to $s'$ path $P$ in $\Gamma^0$, in time indicated in~(\ref{eq:P}). It also explicitly constructs $\Gamma^0$ (in linear time).
Note that any $s$ to $s'$ r-path or r-cycle $Q$ in $\Gamma$ compatible with $\Phi$ satisfies $\chi^Q\cdot\chi_\phi=0$ for each $\phi\in\Phi$. By~(\ref{eq:P}), {\em any} $s$ to $s'$ r-path $\overline P$ in $\overline\Gamma^0$ is extendable to an $s$ to $s'$ r-path $P$ in $\Gamma^0$ compatible with $\Phi$. Therefore, $P$ is shortest, by Theorem~\ref{tm:sp}.
\begin{theorem} \label{tm:sharp} {\rm \cite{GK-96}}
For nonnegative symmetric arc lengths $\ell$, SRA runs in $O(m \log n)$ time, and in $O(m \sqrt{\log L})$ time when $\ell$ is integer-valued and $L$ is the maximum arc length. Furthermore, the algorithm constructs (implicitly) $\pi,\Phi,\xi$ as in Theorem~\ref{tm:sp}, where $\pi$ is {\em anti-symmetric} (i.e., $\pi(x)=-\pi(x')$ for all $x\in V$), and constructs (explicitly) the trimmed 0-graph $\overline\Gamma^0$ and the full 0-graph $\Gamma^0$ such that:
\begin{itemize}
\item[{\rm (A1)}] $\Phi$ is well-nested (obeys {\rm (F1)--(F2)}) and consists of well-reachable fragments in $\Gamma^0$; in particular, $\ell^\xi_\pi(e_\phi)=0$ for each $\phi\in\Phi$;
\item[{\rm (A2)}] $\Phi$ satisfies~(\ref{eq:P}) and~(\ref{eq:Q}) with $\Gamma^0,\overline\Gamma^0$ instead of $\Gamma,\overline\Gamma$; in particular, any regular $s$ to $s'$ path of $\overline\Gamma^0$ is (efficiently) extendable to a shortest regular $s$ to $s'$ path in $(\Gamma,\ell)$.
\end{itemize} \end{theorem}
(Note that the anti-symmetry of $\pi$ and the symmetry of $\ell$ and $\chi_\phi$ for all $\phi\in\Phi$ imply that the reduced length function $\ell^\xi_\pi$ is symmetric. Therefore, the graphs $\Gamma^0$ and $\overline\Gamma^0$ are indeed skew-symmetric.) Let $\Phi^{{\rm max}}$ denote the set of maximal fragments in $\Phi$. The sets $V_\phi$ for $\phi\in\Phi^{{\rm max}}$ are pairwise disjoint (by (F1)), and the graph $\overline\Gamma^0$ can be directly obtained from $\Gamma^0$ by simultaneously trimming the fragments in $\Phi^{{\rm max}}$.
In the next section we will take advantage of the relationship between r-paths in $\overline\Gamma^0$ and shortest r-paths in $(\Gamma,\ell)$ indicated in (A2). Another important property of $\overline\Gamma^0$ is as follows.
\begin{lemma} \label{lm:acyc} If the length $\ell(C)$ of every cycle $C$ in $\Gamma$ is positive, then $\overline\Gamma^0$ is acyclic. In particular, $\overline\Gamma^0$ is acyclic if all arc lengths are positive. \end{lemma}
\begin{proof} Suppose $\overline\Gamma^0$ contains a (not necessarily regular) simple cycle $\overline C$. In view of~(\ref{eq:Q}), $\overline C$ is extendable to a cycle $C$ of $\Gamma^0$ compatible with $\Phi$. Then $\chi^C\cdot\chi_\phi=0$ for all $\phi\in\Phi$. This implies that the original length $\ell(C)$ and the reduced length $\ell^\xi_\pi(C)$ are the same (since the changes in $\ell^\xi_\pi$ due to $\pi$ cancel out as we go around the cycle). Since all arcs of $C$ have zero reduced length, $\ell(C)=\ell^\xi_\pi(C)=0$. This contradicts the hypotheses of the lemma. \end{proof}
\section{\Large Reduction to an Acyclic Network and Special Cases} \label{sec:iter}
We continue the description of the shortest blocking IS-flow method (SBFM) for solving the maximum IS-flow problem in a network $N=(G=(V,E),u)$ begun in Section~\ref{sec:sbf}. Let $f$ be a current IS-flow in $N$. We show that the task of finding a shortest blocking IS-flow $g$ in $(G^+,u_f)$ (step (P1) of a phase of SBFM) reduces to finding a totally blocking IS-flow in an acyclic network.
Build the split-graph $\Gamma=S(G^+,u_f)$ and apply the above shortest regular path algorithm to $\Gamma$ with the {\em all-unit} length function $\ell$ on the arcs. It constructs $\pi,\Phi,\xi$ as in Theorems~\ref{tm:sp} and \ref{tm:sharp}, taking $O(m)$ time (since $L=1$). SRA also constructs the trimmed 0-graph $\overline\Gamma^0$, the main object we will deal with. By Lemma~\ref{lm:acyc}, $\overline\Gamma^0$ is acyclic. The following property also holds.
\begin{lemma}\label{lm:ub}
Let $a\in E^+$ be an arc with $u_f(a)>1$, and let $a_1,a_2$ be the corresponding split-arcs in $\Gamma$. Then $\ell^\xi_\pi(a_1)=\ell^\xi_\pi(a_2)$. Moreover, none of $a_1,a_2$ can be the base or anti-base arc of any fragment in $\Phi$.
\end{lemma}
\begin{proof} Since $a_1,a_2$ are parallel arcs, for each $\phi\in\Phi$, $a_1$ enters (resp. leaves) $V_\phi$ if and only if $a_2$ enters (resp. leaves) $V_\phi$. This implies that $\ell^\xi_\pi(a_1)\ne\ell^\xi_\pi(a_2)$ can happen only if one of $a_1,a_2$ is the base or anti-base arc of some fragment in $\Phi$. Suppose $a_1\in\{e_\phi,e'_\phi\}$ for some $\phi\in\Phi$ (the case $a_2\in\{e_\phi,e'_\phi\}$ is similar). Then $\ell^\xi_\pi(a_1)=0$ (by (A1) in Theorem~\ref{tm:sharp}). Using property~(F2) from Section~\ref{sec:shortp} (valid as $\Phi$ is well-nested), one can see that $a_2$ is not the base or anti-base arc of any fragment in $\Phi$. Therefore, $\chi_{\psi}(a_2)\le\chi_{\psi}(a_1)$ for all $\psi\in\Phi$, yielding $\ell^\xi_\pi(a_2)\le\ell^\xi_\pi(a_1)$. Moreover, the latter inequality is strict because $\chi_\phi(a_2)=-1<1=\chi_\phi(a_1)$ and $\xi(\phi)>0$. Now $\ell^\xi_\pi(a_1)=0$ implies $\ell^\xi_\pi(a_2)<0$, contradicting (\ref{eq:6-2}).
\end{proof}
Let $E^0\subseteq E^+$ be the set of (images of) zero reduced length arcs of $\Gamma$. Lemma~\ref{lm:ub} implies that the base arc $e_\phi$ of each fragment $\phi\in\Phi$ in $\Gamma$ is generated by an arc $e\in E^0$ with $u_f(e)=1$. We can identify these $e$ and $e_\phi$ and consider $\phi$ as a fragment of $G^+$ as well. One can see that $\overline \Gamma^0$ is precisely the split-graph for $(\overline H,\overline h)$, where $\overline H=(\overline V,\overline E^0)$ is obtained from $H=(V,E^0)$ by trimming the maximal fragments in $\Phi$, and $\overline h$ is the restriction of $u_f$ to $\overline E^0$.
Based on the fact that the base arc of each fragment has unit capacity, we reduce step (P1) to the desired problem, namely:
\begin{itemize}
\item[(B)] {\em Find a totally blocking IS-flow in $(\overline H,\overline h)$}.
\end{itemize}
To explain the reduction, suppose we have found a solution $\overline g$ to (B). For each maximal fragment $\phi$ in $\Phi$ with $e_\phi\in\mbox{supp}(\overline g)$, we have $\overline g(e_\phi)=1$; therefore, exactly one unit of flow goes out of the head of $e_\phi$, through an arc $a\in \overline E^0$ say. We choose the path $Q=Q_\phi(a)$ as in~(\ref{eq:Q}) to connect $e_\phi$ and $a$ in (the subgraph on $V_\phi$ of) $H$ and then push a unit of flow through $Q$ and a unit of flow through the symmetric path $Q'$. Doing so for all maximal fragments $\phi$, we extend $\overline g$ to an IS-flow $g$ in $(H,h)$, where $h$ is the restriction of $u_f$ to $E^0$. Moreover, $g$ is a shortest blocking IS-flow in $(G^+,u_f)$.
Indeed, the fact that the chosen paths $Q$ have zero reduced length and are compatible with $\Phi$ implies that a symmetric decomposition of
$g$ consists of shortest $u_f$-regular paths (cf. (A2) in Theorem~\ref{tm:sharp}); so $g$ is shortest. Also $G^+$ cannot contain a $(u_f-g)$-regular $s$ to $s'$ path $R$ of length $g(E^+)/|g|$. For such an $R$ would be a path in $H$ compatible with $\Phi$ (in view of Theorem~\ref{tm:sp}); then the arcs of $R$ occurring in $\overline H$ should form an $(\overline h-\overline g)$-regular $s$ to $s'$ path in it, contrary to the fact that $\overline g$ is totally blocking.
Since each path $Q_\phi(a)$ is constructed in $O(|V_\phi|)$ time (by~(\ref{eq:Q})), and the sets $V_\phi$ of maximal fragments $\phi$ are pairwise disjoint, the reduction to (B) takes linear time.
\begin{lemma}\label{lm:redu} A totally blocking IS-flow in $(\overline H,\overline h)$ can be extended to a shortest blocking IS-flow in $(G^+,u_f)$, in $O(m)$ time. \ \vrule width.2cm height.2cm depth0cm
\end{lemma}
\begin{corollary}\label{cor:cvb} SBFM solves the maximum IS-flow problem in $O(qT(n,m)+qm)$ time, where $q$ is the number of phases ($q\le n$) and $T(n,m)$ is the time needed to find a totally blocking IS-flow in an acyclic network with at most $n$ nodes and $m$ arcs.
\end{corollary}
Clearly $T(n,m)$ is $O(m^2)$, as a totally blocking flow can be constructed by $O(m)$ applications of the regular reachability algorithm; this is slower compared with the phase time $O(nm)$ in Dinits' algorithm. However, we shall show in the next section that problem (B) can be solved in $O(nm)$ time as well. Moreover, the bound will be better for important special cases.
Next we estimate the number of phases. For the standard max-flow problem, the number of phases of Dinits' algorithm becomes significantly less than $n$ in the cases of unit arc capacities and unit ``node capacities''. To combine these into one case, given a network $N=(G=(V,E),u)$ with integer capacities $u$, for a node $x\in V$, define the {\em transit capacity}\/ $u(x)$ to be the minimum of values $\sum_{y:(x,y)\in E} u(x,y)$ and $\sum_{y:(y,x)\in E}u(y,x)$. Define
$$
\Delta:=\Delta(N):=\sum(u(x) : x\in V-\{s,s'\}).
$$ As shown in \cite{ET-75,kar-73-2}, the number $q$ of phases of the blocking flow method does not exceed $2\sqrt{\Delta}$. In particular, if $u\equiv 1$ then $q=O(\sqrt{m})$, and if the transit capacities $u(x)$ of all nodes $x\ne s,s'$ ({\em inner} nodes) equal one, e.g., in the case arising from the bipartite matching problem, then $q=O(\sqrt{n})$.
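The quantity $\Delta(N)$ can be computed in a single pass over the arcs; the following sketch (with an illustrative adjacency-free representation) makes the definition of transit capacity concrete.

```python
def transit_capacity_sum(nodes, arcs, u, s, s_anti):
    """Delta(N): sum over inner nodes x (x != s, s') of the transit
    capacity u(x) = min(total outgoing capacity, total incoming capacity).

    arcs: iterable of pairs (x, y); u: dict mapping an arc to its
    integer capacity. Representation is illustrative.
    """
    out_cap = {v: 0 for v in nodes}
    in_cap = {v: 0 for v in nodes}
    for (x, y) in arcs:
        out_cap[x] += u[(x, y)]
        in_cap[y] += u[(x, y)]
    return sum(min(out_cap[v], in_cap[v])
               for v in nodes if v not in (s, s_anti))
```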
A similar argument works for skew-symmetric networks (see also~\cite{FJ-99} for a special case).
\begin{lemma}\label{lm:root} The number of phases of SBFM is at most ${\rm min}\{n,2\sqrt{\Delta}\}$.
\end{lemma}
\begin{proof} After performing $d:=\sqrt{\Delta}$ phases, the r-distance from $s$ to $s'$ in the network $N'=(G^+,u_f)$ for the current IS-flow $f$ becomes greater than $d$, by Lemma~\ref{lm:incr}. Let $f^*$ be a maximum IS-flow in $N$, and let $g$ be defined as in the proof of Theorem~\ref{tm:aug}. Then $g$ is a feasible IS-flow in $N'$ and
$|g|=|f^*|-|f|$. We assert that $|g|\le d$, which immediately implies that the number of remaining phases is at most $d/2$, thus proving the lemma. To see this, take a symmetric decomposition ${\cal D}$ of $g$ consisting of elementary flows $(P,P',\delta)$ with $\delta=1$. Let ${\cal D}'$ be the family of $s$ to $s'$ paths
$P,P'$ in ${\cal D}$; then $|{\cal D}'|\ge |g|$. It is easy to see that $u_f(x)=u(x)$ for each inner node $x$. Each path in ${\cal D}'$ contains at least $d$ inner nodes, and therefore, it uses at least $d$ units of the total transit capacity of inner nodes of
$N'$. So we have $d|{\cal D}'|\le \Delta(N)$. This implies $|g|\le d$.
\end{proof}
\section{\Large Finding a Totally Blocking IS-Flow in an Acyclic
Network}
\label{sec:acyc}
Our aim is to show the following.
\begin{theorem} \label{tm:ac-unit} A totally blocking IS-flow in an acyclic skew-symmetric graph with $O(n)$ nodes, $O(m)$ arcs, and unit arc capacities can be found in $O(n+m)$ time.
\end{theorem}
This together with Corollary~\ref{cor:cvb} and Lemma~\ref{lm:root} yields the following result for the shortest blocking IS-flow method.
\begin{corollary} \label{cor:unit} In case of a network $N$ with unit arc capacities, SBFM can be implemented so that it finds a maximum IS-flow in $O(m\sqrt{\Delta(N)})$ time (assuming $n=O(m)$). In particular, if the indegree or outdegree of each node is at most one, then the running time becomes $O(\sqrt{n}m)$.
\end{corollary}
In the second half of this section we will extend Theorem~\ref{tm:ac-unit} to general capacities, in which case the phase time will turn into $O(nm)$, similarly to Dinits' algorithm.
For convenience we keep the original notation for the network in question. Let $G=(V,E)$ be a skew-symmetric {\em acyclic} graph with source $s$ and the capacity $u(e)=1$ of each arc $e\in E$. One may assume that each node belongs to a path from $s$ to $\sigma(s)$.
First of all we make a reduction to the {\em maximal balanced path-set problem (MBP)} stated in the Introduction. Since $G$ is acyclic, one can assign, in linear time, a potential function $\pi:V\to{\mathbb Z}$ which is {\em antisymmetric} ($\pi(x)=-\pi(\sigma (x))$ for each $x\in V$) and {\em increasing} on the arcs ($\pi(y)>\pi(x)$ for each $(x,y)\in E$). (Indeed, a function $q:V\to{\mathbb Z}$ increasing on the arcs is constructed, in linear time, by use of the standard topological sorting. Now set $\pi(v):=q(v)-q(\sigma(v))$, $v\in V$.) Subdivide each arc $(x,y)$ with $\pi(x)<0$ and $\pi(y)>0$ into two arcs $(x,z)$ and $(z,y)$ and assign zero potential to $z$. The new graph $G$, with $O(m)$ nodes and $O(m)$ arcs, is again skew-symmetric, and the problem remains essentially the same.
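The text only asserts that such an antisymmetric increasing potential $\pi$ can be built in linear time via topological sorting; the sketch below makes one concrete choice (Kahn's algorithm for the topological order), with illustrative data structures. It assumes the arc $(\sigma(y),\sigma(x))$ is present whenever $(x,y)$ is, as in a skew-symmetric graph.

```python
from collections import deque

def antisymmetric_potential(nodes, arcs, sigma):
    """Return pi with pi(v) = q(v) - q(sigma(v)), where q is a function
    increasing along the arcs, obtained from a topological order
    (Kahn's algorithm). The input graph must be acyclic; sigma is the
    symmetry map given as a dict."""
    indeg = {v: 0 for v in nodes}
    succ = {v: [] for v in nodes}
    for (x, y) in arcs:
        succ[x].append(y)
        indeg[y] += 1
    q = {}
    queue = deque(v for v in nodes if indeg[v] == 0)
    order = 0
    while queue:
        v = queue.popleft()
        q[v] = order          # q increases along every arc
        order += 1
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return {v: q[v] - q[sigma[v]] for v in nodes}
```

Antisymmetry is immediate from the formula, and monotonicity on an arc $(x,y)$ follows since both $q(y)>q(x)$ and $q(\sigma(x))>q(\sigma(y))$ hold.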
Let $\Gamma$ be the subgraph of the new $G$ induced by the nodes with nonnegative potentials. Then $\Gamma\cup\sigma(\Gamma)=G$ and $\Gamma\cap\sigma(\Gamma)=(Z,\emptyset)$, where $Z$ is the self-symmetric set of zero potential nodes of $G$. Also $\Gamma$ contains $\sigma(s)$.
Clearly every $s$ to $\sigma(s)$ path $P$ of $G$ meets $Z$ at exactly one node $z$, which subdivides $P$ into an $s$ to $z$ path $R'$ in $\sigma(\Gamma)$ and a $z$ to $\sigma(s)$ path $Q$ in $\Gamma$. Then $P$ is regular if and only if $\sigma(R')$ and $Q$ are arc-disjoint. Conversely, let $Q,R$ be two $Z$ to $\sigma(s)$ paths in $\Gamma$ beginning at symmetric nodes of $Z$. Then the concatenation $\sigma(Q)\cdot R$ (as well as $\sigma(R)\cdot Q$) is a regular $s$ to $\sigma(s)$ path of $G$ if and only if $Q$ and $R$ are arc-disjoint.
This shows that our particular totally blocking IS-flow problem is reduced, in linear time, to MBP with $\Gamma,\sigma(s),Z$ (in fact, the problems are equivalent). Theorem~\ref{tm:ac-unit} is implied by the following.
\begin{theorem} \label{tm:bbp} MBP is solvable in linear time.
\end{theorem}
We devise an algorithm for MBP and prove Theorem~\ref{tm:bbp}. Let the input of MBP consist of an acyclic graph $\Gamma=(X,U)$, a sink $t$ and a source set $Z$ with a map (involution) $\sigma:Z\to Z$ giving a partition of $Z$ into pairs. We say that two arc-disjoint $Z$ to $t$ paths $Q,R$ beginning at ``symmetric'' sources $z,\sigma(z)$ form a {\em good pair}, and say that a collection of pairwise arc-disjoint $Z$ to $t$ paths in $\Gamma$ is a {\em balanced path-set} if its members can be partitioned into good pairs. So the task is to find a maximal (or ``blocking'') balanced path-set.
Each iteration of the algorithm will reduce the arc set of $\Gamma$ and, possibly, the set $Z$; we will sometimes use the index $i$ for objects in the input of the $i$-th iteration. So $\Gamma_1=(X_1,U_1)$ is the initial graph. Without loss of generality, one may assume that initially each source has zero indegree and
\begin{itemize} \item[(C1)] each node of $\Gamma$ lies on a path from $Z$ to $t$,
\end{itemize}
and these properties will be maintained during the algorithm.
The iteration input will include a path $D$ from a certain node of $\Gamma$ to $t$, called the {\em pre-path}. Initially, $D$ is trivial: $D=t$. The nodes of $\Gamma$ not in $Z\cup\{t\}$ are called {\em inner}. The current $\Gamma$ may contain special inner nodes, called {\em complex} ones. They arise when the algorithm shrinks certain subgraphs of $\Gamma$; the initial graph has no complex nodes. The adjacency structure of $\Gamma$ is given by double-linked lists $I_x$ and $O_x$ of the incoming and outgoing arcs, respectively, for each node $x$. The arc set of a path $P$ is denoted by $E(P)$.
The $i$-th iteration begins by extending $D$ to a
$Z$ to $t$ path $P$ in a natural way; this takes $O(|P|-|D|)$ time. Let $z$ be the first node of $P$. Then we try to obtain a good pair by constructing a path from $z'=\sigma(z)$ to $t$, possibly rearranging $P$. By standard arguments, a good pair for $z,z'$ exists if and only if there exists a path $A$ from $z'$ to $t$, with possible backward arcs, in which the forward arcs belong to $U-E(P)$ and the backward arcs belong to $E(P)$; such an $A$ is called an {\em augmenting path} w.r.t. $P$. For technical reasons, we allow $A$ to be self-intersecting in nodes (but not in arcs). Once $A$ is found, the symmetric difference $E(P)\triangle E(A)$ gives a good pair $Q,R$ (taking into account that $\Gamma$ is acyclic).
To search for an augmenting path, we replace each arc $e=(x,y)\in E(P)$ by the reverse arc $\overline e=(y,x)$; let $\overline \Gamma=(X,\overline U)$ be the resulting graph, and $\overline P$ the $t$ to $Z$ path reverse to $P$. Thus, we have to construct a (directed) path from $z'$ to $t$ in $\overline\Gamma$ or establish that it does not exist.
To achieve the desired time bound, we apply a variant of depth first search which we call here {\em transit depth first search (TDFS)} (such a search procedure was applied in~\cite{kar-70}). The difference from the standard depth first search (DFS) is as follows. When scanning a new outgoing arc $(x,y)$ in the list $O_x$ of a current node $x$, if $y$ has already been visited, then DFS stays at $x$. In contrast, TDFS moves from $x$ to $y$, making $y$ the new current node. Both procedures maintain the stack of arcs traversed only in forward direction and ordered by the time of their traversal. If all outgoing arcs of the current node $x$ are already scanned, then the last arc $(w,x)$ of the stack is traversed in backward direction and $w$ becomes the new current node. We refer to the path determined by the stack, from the initial node to the current one, as the {\em active path}. Note that in case of TDFS the active path may be self-intersecting (while it is simple in DFS).
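A minimal sketch of TDFS follows. The adjacency representation (a map from each node to the list of its outgoing arcs) is our own; the sketch ignores the ordering condition (C2) and simply searches for a path from a start node to a target, traversing each arc forward at most once and maintaining the stack of forward-traversed arcs (the active path).

```python
def tdfs(succ, start, target):
    """Transit depth-first search: unlike plain DFS, on scanning an arc
    (x, y) it always moves to y, even if y was visited before. Returns
    the final active path (a list of arcs, possibly self-intersecting
    in nodes) if target is reached, else None.

    succ must map every node to the list of its outgoing arcs (x, y).
    """
    pos = {v: 0 for v in succ}    # next unscanned outgoing arc per node
    stack = []                    # arcs traversed forward: the active path
    x = start
    while True:
        if x == target:
            return stack
        if pos[x] < len(succ[x]):
            arc = succ[x][pos[x]]
            pos[x] += 1
            stack.append(arc)
            x = arc[1]            # transit: move even to a visited node
        elif stack:
            arc = stack.pop()     # retreat along the last forward arc
            x = arc[0]
        else:
            return None           # all arcs reachable from start exhausted
```

Since every arc is traversed at most once forward and once backward, the search runs in time linear in the number of traversed arcs.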
We impose the condition that the outgoing arc lists of $\overline\Gamma$ are arranged so that
\begin{itemize} \item[(C2)] for each node $x\ne z$ of $\overline P$, the arc $\overline e$ of $\overline P$ leaving $x$ is the {\em last} element of $\overline O_x$.
\end{itemize}
This guarantees that TDFS would scan $\overline e$ after all other outgoing arcs of $x$ (i.e., the arcs of $\overline P$ are ignored as long as possible).
At an iteration, we apply TDFS to $\overline\Gamma$ starting from $z'$ as above. The search terminates either when it reaches $t$ or when it returns to $z'$ with all arcs of $\overline O_{z'}$ traversed. In the first case ({\em breakthrough}) the final active path $\overline A$ in $\overline\Gamma$ determines the desired augmenting path $A$ in $\Gamma$, and we create a good pair $Q,R$ as described above. In the second case, the non-existence of a good pair for the given $z,z'$ is declared. Consider both cases.
{\em Breakthrough case.} Delete from $\Gamma$ the arcs of $Q,R$ and then delete all the nodes and arcs that are no longer contained in $Z$ to $t$ paths (thus maintaining (C1)). This is carried out by an obvious {\em cleaning procedure} in $O(q)$ time, where $q$ is the number of arcs deleted. If $Q$ or $R$ contains a complex node, the iteration finishes by transforming $Q,R$ into a good pair of paths of the initial graph; this is carried out by a {\em path expansion procedure} which will be described later. The obtained $\Gamma,Z$ form the input of the next iteration, and the new pre-path $D$ is assigned to be the trivial path $t$. If $\Gamma$ vanishes, the algorithm terminates. The following observation is crucial for estimating the time bound.
\begin{lemma} \label{lm:break} Let $q$ be the number of arcs deleted at an iteration with a breakthrough. Then, excluding the path expansion procedure if applied, the iteration runs in $O(q)$ time.
\end{lemma}
\begin{proof}
Let $\overline W$ be the set of arcs of $\overline\Gamma$ traversed by TDFS on the iteration, and $W$ the corresponding set in $\Gamma$, i.e., $W=\{e\in U: e\in \overline W$ or $\overline e\in\overline W\}$. The iteration runs in $O(q+|W|)$ time, taking into account that each arc of $P$ not in $Q\cup R$ is contained in $W$. Therefore, it suffices to show that no arc from $W$ remains in the new graph $\Gamma$. Suppose this is not so. Then there is a $Z$ to $t$ path $L$ of the old $\Gamma$ that meets $W$ but not $E(Q)\cup E(R)$ (as the arcs of $Q\cup R$ are deleted). Let $e=(x,y)$ be the {\em last} arc of $L$ in $W$. Let $b=(y,w)$ be the next arc of $L$ (it exists since $y=t$ would imply that $e$ is in $A$ but not in $P$, whence $e$ belongs to $Q\cup R$). Then $b\not\in W\cup E(Q)\cup E(R)$, by the choice of $e$. Two cases are possible.
(i) $e$ is in $P$. Then $\overline e=(y,x)\in\overline W$. According to condition (C2), at the time TDFS traversed $\overline e$ from $y$ to $x$ all arcs of $\overline\Gamma$ leaving $y$ had already been traversed. So $b$ is not in $\overline\Gamma$, implying $b\in E(P)-W$. Then $b$ is in $Q\cup R$; a contradiction.
(ii) $e$ is not in $P$. Then $e\in \overline W$ and $e$ does not lie on the final active path $\overline A$ (otherwise $e$ would be in $Q\cup R$). Therefore, TDFS traversed $e$ in both directions. By the time $e$ was traversed in the backward direction, from $y$ to $x$, all arcs of $\overline\Gamma$ leaving $y$ had been traversed (at this point the difference between TDFS and DFS is important). So $b$ is not in $\overline\Gamma$, whence $b\in E(P)$. Now $\overline b\not\in \overline W$ implies that $b$ is in $Q\cup R$; a contradiction.
\end{proof}
{\em Non-breakthrough case.} Let $Y$ be the set of nodes visited by TDFS. Then no arc of $\overline\Gamma$ leaves $Y$. Therefore, in view of (C1),
\begin{myitem} the set of arcs of $\Gamma$ leaving $Y$ consists of a unique arc $a=(v,w)$, this arc lies on $P$, and the nodes of the part of $P$ from $z$ to $v$ are contained in $Y$.
\label{eq:N1}
\end{myitem}
Since no arc of $\Gamma$ enters $Z$, we also have
\begin{equation} \label{eq:N2}
Y\cap Z=\{z,z'\}.
\end{equation}
We reduce $\Gamma$ by shrinking its subgraph $\Gamma_Y=(Y,U_Y)$ induced by $Y$ into one node; the formed {\em complex} node $v_Y$ is identified with $v$. We call $v$ the {\em root} of $\Gamma_Y$ and store $\Gamma_Y$. The list of arcs entering $v_Y$ in the new graph is produced by simply merging the lists $I_x$ for $x\in Y$ from which the arcs occurring in $\Gamma_Y$ are explicitly removed, using the lists $O_y$ for $y\in Y$. (We do not need to correct the outgoing arc lists $O_x$ for $x\not\in Y$ explicitly, as we explain later.)
Thus, updating $\Gamma$ takes time linear in $|U_Y|$. By~\refeq{N1}, $a$ is the only arc leaving $v_Y$ in $\Gamma$.
From (C1) and~\refeq{N1} it follows that
\begin{myitem} the new graph $\Gamma$ is again acyclic, and for each $x\in Y$, there is a path $P_Y(x)$ from $x$ to the root $v$ in $\Gamma_Y$.
\label{eq:N3}
\end{myitem}
In view of~\refeq{N2}, the set $Z$ is updated as $Z:=Z-\{z,z'\}$. If there is at least one arc entering $v_Y$, then the new graph $\Gamma$ and set $Z$ satisfy (C1) and form the input of the next iteration. The new pre-path $D$ is assigned to be the part of $P$ from the formed complex node $v_Y$ to $t$. If no arc enters $v_Y$, we finish the current iteration by removing the nodes and the arcs not contained in $Z$ to $t$ paths. This further reduces $\Gamma$ and may reduce $Z$ and shorten $D$.
One can see that the set $U_Y$ is exactly $W$. This and the construction of pre-paths imply the following.
\begin{lemma} \label{lm:nobreak} Let $q$ be the number of arcs deleted by the $i$-th iteration without a breakthrough. Then the iteration runs in
$O(q+{\rm max}\{0,|D_{i+1}|-|D_i|\})$ time. \ \vrule width.2cm height.2cm depth0cm
\end{lemma}
As mentioned above, we do not need to explicitly correct the outgoing arc lists $O_x$ for $x\not\in Y$ (this would be expensive). Let ${\cal V}$ be the current set of all complex nodes created from the beginning of the algorithm. We take advantage of the following facts. First, the elements of ${\cal V}$ that are nodes of the current graph (the {\em maximal} complex nodes) lie on the current pre-path $D$. Second, at an iteration with a breakthrough, all complex nodes are removed. Third, at an iteration without a breakthrough, the subgraph $\Gamma_Y$ forming the new complex node $v_Y$ contains a subpath of $P$ from its beginning node (by~\refeq{N1}), and the cleaning procedure (if applied at the iteration) deletes a part of the updated $P$ from its beginning node as well. Therefore, one can store ${\cal V}$ as a tree in a natural way and use the {\em disjoint set union} data structure from~\cite{GT-85} to maintain ${\cal V}$. This enables us to efficiently access the head $v_Y$ of any arc $e=(x,v_Y)$ when $e$ is traversed by TDFS (with $O(1)$ amortized time per one arc).
To complete the algorithm description, it remains to explain the {\em path expansion procedure} to be applied when an iteration with a breakthrough finds paths $Q,R$ containing complex nodes. It proceeds in a natural way by recursively expanding complex nodes occurring in the current $Q,R$ into the corresponding paths $P_Y(x)$ as in~\refeq{N3} and building $P_Y(x)$ into $Q$ or $R$ (this takes
$O(|P_Y(x)|)$ time). The arc sets of subgraphs $\Gamma_Y$ extracted during the algorithm are pairwise disjoint, so the total time for all applications of the procedure is $O(m)$.
Thus, we can conclude from Lemmas~\ref{lm:break} and~\ref{lm:nobreak} that the algorithm runs in $O(m)$ time, yielding Theorem~\ref{tm:bbp}.
In the rest of this section we extend the above approach and algorithm ({\em Algorithm 1}) to the general case of acyclic $(G,u)$. The auxiliary graph $\Gamma=(X,U)$ and the set $Z$ are constructed as above, and the capacity $u(e)$ of each arc $e\in U$ is defined in a natural way. We call an integer $Z$ to $t$ flow $g$ in $(\Gamma,u)$ {\em balanced} if the flow values out of ``symmetric'' sources are equal, i.e.,
$$ \mbox{div}_g(z)=\mbox{div}_g(\sigma(z))\quad \mbox{for each $z\in Z$,}
$$ and {\em balanced blocking} if there exists no balanced flow $g'$ satisfying $g\ne g'\ge g$ (taking into account that $\Gamma$ is acyclic). Then the problem of finding a totally blocking IS-flow in $(G,u)$ is reduced to {\em problem BBF}: find a balanced blocking flow for $\Gamma,u,Z,t$.
{\em Algorithm 2} will find a balanced blocking flow $g$ in the form $g=\alpha_1\chi^{Q_1}+\alpha_1\chi^{R_1}+\ldots+ \alpha_r\chi^{Q_r}+\alpha_r\chi^{R_r}$, where each $\alpha_i$ is a positive integer, $Q_i$ is a path from some $z\in Z$ to $t$, and $R_i$ is a path from $\sigma(z)$ to $t$. It iteratively constructs pairs $Q_i,R_i$ for the current $\Gamma,u,Z$, assigns them the largest possible weight $\alpha_i$, and accordingly reduces the current capacities as $u:=u- \alpha_i\chi^{Q_i}-\alpha_i\chi^{R_i}$. All arc capacities in $\Gamma$ are kept positive: once the capacity of an arc becomes zero, this arc is immediately deleted from $\Gamma$.
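Here ``as large as possible'' means the largest integer $\alpha_i$ for which all residual capacities stay nonnegative. A minimal sketch of this bookkeeping (the data representation is ours; an arc used by both paths contributes twice to its multiplicity):

```python
from collections import Counter

def reduce_capacities(u, Q, R):
    """u: dict arc -> positive integer capacity; Q, R: lists of arcs.
    Returns the largest weight alpha with u - alpha*(chi^Q + chi^R) >= 0
    and updates u in place, deleting saturated arcs."""
    mult = Counter(Q) + Counter(R)                # chi^Q + chi^R
    alpha = min(u[e] // k for e, k in mult.items())
    for e, k in mult.items():
        u[e] -= alpha * k
        if u[e] == 0:
            del u[e]                              # saturated arcs leave Gamma
    return alpha
```

For instance, with $u(a)=5$, $u(b)=3$, $u(c)=4$ and $Q=(a,b)$, $R=(a,c)$, the weight is $\alpha=\min(\lfloor 5/2\rfloor,3,4)=2$.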
Each pair $Q_i,R_i$ is constructed as in Algorithm 1 when it is applied to the corresponding {\em split-graph} $S=S(\Gamma,u)$. More precisely (cf.~Section~\ref{sec:theo}), $S$ is formed by replacing each arc $e=(x,y)$ of $\Gamma$ by two parallel arcs ({\em split-mates}) $e_1,e_2$ from $x$ to $y$ with the capacities $\lceil u(e)/2\rceil$ and $\lfloor u(e)/2\rfloor$, respectively. When $u(e)=1$, $e_2$ vanishes in $S$, and $e_1$ is called {\em critical}. The algorithm maintains $S$ explicitly. The desired pair $Q_i,R_i$ in $(\Gamma,u)$ is determined by a good pair in $S$ in a natural way.
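The split-graph construction admits a direct sketch (the tuple representation of arcs is our own; in the algorithm the split-mates additionally keep references to each other):

```python
import math

def split_graph(arcs):
    """arcs: list of (x, y, u) with integer capacity u >= 1.
    Returns the arcs (x, y, capacity, is_critical) of S = S(Gamma, u):
    each arc is replaced by split-mates e1, e2 with capacities
    ceil(u/2) and floor(u/2); when u = 1, e2 vanishes and e1 is critical."""
    S = []
    for x, y, u in arcs:
        S.append((x, y, math.ceil(u / 2), u == 1))   # e1
        if u // 2 > 0:                               # e2, absent when u = 1
            S.append((x, y, u // 2, False))
    return S
```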
The main part of an iteration of Algorithm 2 is a slight modification of an iteration of Algorithm 1. The difference is the following. While Algorithm 1 deletes {\em all} arcs of the paths $Q,R$ found at an iteration, Algorithm 2 deletes only a {\em nonempty subset} $B$ of arcs in $Q\cup R$ (concerning the graph $S$) including all critical arcs in these paths. One may think that Algorithm 2 essentially works with a graph $S$ (ignoring $(\Gamma,u)$) in which some disjoint pairs of parallel arcs (analogs of split-mates) are distinguished and the other arcs are regarded as critical, and at each iteration, the corresponding subset $B\subseteq E(Q)\cup E(R)$ to be deleted is given by an oracle. We emphasize that the unique arc leaving a complex node is always critical. Therefore, each complex node in $Q\cup R$ will be automatically removed. Computing the $\alpha_i$'s and the other operations of the algorithm beyond the work with the graph $S$ do not affect the asymptotic time bound.
We now estimate the complexity of an iteration of Algorithm 2. In the case without a breakthrough, properties \refeq{N1},\refeq{N2},\refeq{N3} and Lemma~\ref{lm:nobreak} (with $S$ instead of $\Gamma$) remain valid. Note that the arc $a$ in~\refeq{N1} is critical (since it is the unique arc leaving $Y$); therefore, the arc leaving the created complex node is critical. Our analysis of the breakthrough case involves the subset $\overline W_2\subset \overline W$ of arcs traversed by TDFS in both directions (where $\overline W$ is the set of all traversed arcs in the corresponding auxiliary graph $\overline S$). Let $W_2$ be the corresponding set in $S$.
\begin{lemma} \label{lm:break-gen} Suppose an iteration of Algorithm 2 results in a breakthrough. Let $e=(x,y)$ be an arc of $S$ such that $e\in W_2$ or $e\in E(P)\cap W$. Then any $x$ to $t$ path $L$ in $S$ starting with the arc $e$ contains a critical arc in $Q\cup R$ (and therefore, $e$ vanishes in the new graph $S$).
\end{lemma}
\begin{proof} Suppose this is not so and consider a counterexample $(e,L)$ with
$|L|$ minimum. Let $\overline A_0$ be the active path in $\overline S$ just before the traversal of $e$ or $\overline e$ from $y$ to $x$, and $A_0$ the corresponding (undirected) path in $S$. At that time, for the set $\overline O_y$ of arcs of $\overline S$ leaving $y$,
\begin{myitem} all arcs in $\overline O_y$ except $\overline e$ (in case $e\in E(P)$) are already traversed
\label{eq:ast}
\end{myitem}
(in view of condition (C2)). Let $b=(y,w)$ be the second arc of $L$ (which exists, since $y=t$ is impossible). By~(\ref{eq:ast}), if $b$ is not in $P$, then $b\in W$. Also $b\not\in W_2$ and $b\not\in E(P)\cap W$ (otherwise the part of $L$ from $y$ to $t$ would give a smaller counterexample). This implies that $b$ belongs to $Q\cup R$, and therefore, $b$ is not critical. Let $b'$ be the split-mate of $b$. Considering the path starting with $b'$ and then following $L$ from $w$ to $t$ (which is shorter than $L$), we similarly conclude that $b'\not\in W_2$ and $b'\not\in E(P)\cap W$. To come to a contradiction, we proceed as follows.
The fact that $S$ is acyclic implies that the symmetric difference (on the arcs) of $P$ and $A_0$ is decomposed into a path from $Z$ to $t$ and a path from $Z$ to $y$; therefore, $E(P)\triangle E(A_0)$ contains {\em at most one} arc $a$ leaving $y$. This and~(\ref{eq:ast}) imply that all arcs in $O_y\cap \overline O_y$ except, possibly, $a$ have been traversed twice; so they are in $W_2$. Hence, one of $b,b'$ must be in $P$; let for definiteness $b\in E(P)$ (then $b'\not\in E(P)$).
Now $b\not\in W$ implies $b\in E(P)-E(A_0)$, and $b'\in W-W_2$ implies $b'\in E(A_0)-E(P)$. Thus, both arcs $b,b'$ leaving $y$ are in $E(P)\triangle E(A_0)$; a contradiction.
\end{proof}
The running time of an iteration with a breakthrough is
$O(|P|+|W|+q)$, where $q$ is the number of arcs deleted from $S$. Lemma~\ref{lm:break-gen} allows us to refine this bound as
$O(|Q|+|R|+q)$. Combining this with Lemma~\ref{lm:nobreak}, we can conclude that, up to a constant factor, the total time of Algorithm 2 is bounded from above by $m$ plus the sum $\Sigma$ of lengths of paths $Q_1,R_1,\ldots,Q_r,R_r$ in the representation of the flow $g$
constructed by the algorithm. Since $|Q_i|,|R_i|\le n$ and $r\le 2m$ (as each iteration decreases the arc set of $S$), $\Sigma$ is $O(nm)$. Also $\Sigma$ does not exceed the sum of the transit capacities $u(x)$ of inner nodes $x$ of $\Gamma$ (assuming, without loss of generality, that no arc goes from $s$ to $s'$). Thus, Theorem~\ref{tm:ac-unit} is generalized as follows.
\begin{theorem} \label{tm:ac-gen} For an acyclic capacitated skew-symmetric network $N$ with $O(n)$ nodes and $O(m)$ arcs, a totally blocking IS-flow can be found in $O({\rm min}\{m+\Delta(N),nm\})$ time.
\end{theorem}
Together with Corollary~\ref{cor:cvb} and Lemma~\ref{lm:root}, this yields the desired generalization.
\begin{corollary} \label{cor:gen_cap} SBFM can be implemented so that it finds a maximum IS-flow in an arbitrary skew-symmetric network $N$ in $O({\rm min}\{n^2m,\sqrt{\Delta(N)}(m+\Delta(N))\})$ time.
\end{corollary}
\section{\Large Applications to Matchings}\label{sec:mat}
We apply the reduction of the maximum u-capacitated b-matching problem (CBMP) in a graph $G'=(V',E')$ to the maximum IS-flow problem in a network $N=(G=(V,E),\sigma,u,s)$; see Section~\ref{sec:back}. The best time bound for the general case of CBMP is attained by applying the algorithm of Section~\ref{sec:gisa}. Theorem~\ref{tm:gisa} implies the following.
\begin{corollary} \label{cor:bmat}
CBMP can be solved in $O(M(n,m)+nm)$ time, where $n:=|V'|$ and
$m:=|E'|$.
\end{corollary}
When the input functions $u,b$ in CBMP are small enough, the transit capacities of nodes in $N$ become small as well. Then the application of the shortest blocking IS-flow method may result in a competitive or even faster algorithm for CBMP. Suppose all edges of $G'$ have unit capacity. We have $\Delta(N)=O(m)$ in general, and $\Delta(N)=O(n)$ if $b$ is all-unit. Then Corollary~\ref{cor:unit} yields the same time bounds as in~\cite{gab-83,MV-80} for the corresponding cases.
\begin{corollary} \label{cor:bmat_un} SBFM (with the fast implementation of a phase as in Section~\ref{sec:acyc}) solves the maximum degree-constrained subgraph (or b-factor) problem in $O(m^{3/2})$ time and solves the maximum matching problem in a general graph in $O(\sqrt{n}m)$ time.
\end{corollary}
Feder and Motwani~\cite{FM-91} elaborated a clique compression technique and used it to improve the $O(\sqrt{n}m)$ bound for the maximum bipartite matching problem to $O(\sqrt{n}m\log(n^2/m)/\log{n})$. We explain how to apply a similar approach to a special case of MSFP, lowering the bound for dense nonbipartite graphs. We need a brief review of the method in~\cite{FM-91}.
Let $H=(X,Y,E)$ be a bipartite digraph, where $E\subseteq X\times Y$,
$|X|=|Y|=n$ and $|E|=m$. A {\em (bipartite) clique} is a complete bipartite subgraph $(A,B,A\times B)$ of $H$, denoted by $C(A,B)$. Define the {\em size} $s(C)$ of $C=C(A,B)$ to be $|A|+|B|$. A {\em clique partition} of $H$ is a collection ${\cal C}$ of cliques whose arc sets form a partition of $E$; the {\em size} $s({\cal C})$ of ${\cal C}$ is the sum of sizes of its members.
Let a constant $0<\delta<1/2$ be fixed. Then a clique $C(A,B)$ of $H$
is called a $\delta$-{\em clique} if $|A|=\lceil n^{1-\delta}\rceil$
and $|B|=\lfloor \delta\log{n}/\log(2n^2/m) \rfloor$. It is shown in~\cite{FM-91} that a $\delta$-clique exists.
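Since the ratio of the two logarithms is base-independent, the sizes can be computed with natural logarithms; a small sketch (the sample values in the usage note below are our own):

```python
import math

def delta_clique_size(n, m, delta):
    """Prescribed part sizes |A|, |B| of a delta-clique in a bipartite
    graph with parts of size n and m arcs, for a constant 0 < delta < 1/2."""
    a = math.ceil(n ** (1 - delta))
    b = math.floor(delta * math.log(n) / math.log(2 * n * n / m))
    return a, b
```

For example, $n=1000$, $m=500{,}000$, $\delta=1/4$ give $|A|=178$ and $|B|=1$; note that $|B|$ shrinks as the graph gets sparser, which is why the partition algorithm below stops extracting cliques once the $Y$-part becomes empty.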
The {\em clique partition algorithm} in~\cite{FM-91} finds a $\delta$-clique $C_1$ in the initial graph $H_1=(X,Y,E=:E_1)$ and deletes the arcs of $C_1$, obtaining the next graph $H_2= (X,Y,E_2)$. Then it finds a $\delta$-clique $C_2$ (with respect to the number of arcs of $H_2$) and deletes the arcs of $C_2$ from $H_2$, and so on while the number of arcs of the current graph is at least $2n^{2-\delta}$ and the $Y$-part of a $\delta$-clique is nonempty. The remaining arcs are partitioned into cliques consisting of a single arc each. So the cliques $C_i$ extracted during the algorithm form a clique partition. The running time of the algorithm is estimated as the sum of bounds $\tau(C_i)$ on the time to extract the cliques $C_i$ plus a time bound $\tau'$ to maintain a certain data structure (so-called neighborhood trees). One shows that
\begin{myitem} the algorithm runs in $O(\sqrt{n}m\beta)$ time and finds a clique partition ${\cal C}$ of $H$ such that $s({\cal C})=O(m\beta)$, where $\beta:=\log(n^2/m)/\log{n}$.
\label{eq:part}
\end{myitem}
Suppose we wish to find a maximum matching in a bipartite graph or, equivalently, to find a maximum integer flow from $s$ to $t$ in a digraph $G$ with unit arc capacities, node set $X\cup Y\cup\{s,t\}$
and arc set $E\cup(s\times X)\cup(Y\times t)$, where $E\subseteq X\times Y$. One may assume $|X|=|Y|=n$. Using the above algorithm, form a clique partition ${\cal C}$ as in~\refeq{part} for $(X,Y,E)$. Transform each clique $C(A,B)$ in ${\cal C}$ into a star by replacing its arcs by a node $z$, arcs $(x,z)$ for all $x\in A$ and arcs $(z,y)$ for all $y\in B$. There is a natural one-to-one correspondence between the
$s$ to $t$ paths in $G$ and those in the resulting graph $G^\ast$, and the problem for $G^\ast$ is equivalent to that for $G$. Compared with $G$, the graph $G^\ast$ has $|{\cal C}|$ additional nodes but the number $m^\ast$ of its arcs becomes $2n+s({\cal C})$, or $O(m\beta)$. Given a flow in $G^\ast$, any (simple) augmenting path of length $q$ meets exactly $(q-1)/2$ nodes in $X\cup Y$, and these nodes have unit transit capacities. This implies that Dinits' algorithm has $O(\sqrt{n})$ phases (arguing as in~\cite{ET-75,kar-73-2}). Since each phase takes $O(m^\ast)$ time, the whole algorithm runs in $O(\sqrt{n}m\beta)$ time, as desired.
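The clique-to-star substitution can be sketched as follows (the set representation and the names of the fresh centers are ours); each clique $C(A,B)$ with $|A|\,|B|$ arcs is replaced by a star with $|A|+|B|$ arcs:

```python
def cliques_to_stars(arcs, cliques):
    """arcs: set of (x, y); cliques: list of pairs (A, B) whose arc sets
    A x B partition a subset of arcs.  Each clique is replaced by a fresh
    center z with arcs (x, z) for x in A and (z, y) for y in B."""
    arcs = set(arcs)
    for k, (A, B) in enumerate(cliques):
        arcs -= {(x, y) for x in A for y in B}
        z = ('z', k)                                 # fresh star center
        arcs |= {(x, z) for x in A} | {(z, y) for y in B}
    return arcs
```

Cliques consisting of a single arc are simply not transformed, and every $x$ to $y$ route through a clique corresponds to the route $x$, $z$, $y$ through its star, which is the path correspondence used above.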
Now suppose $H=(X,Y,E,\sigma)$ is a skew-symmetric bipartite graph without parallel arcs, where the sets $X$ and $Y$ are symmetric to each other. We modify the above method as follows. Note that any two symmetric cliques in $H$ are disjoint (otherwise some $x\in X$ is adjacent to $\sigma(x)$, implying the existence of two arcs from $x$ to $\sigma(x)$). We call a clique partition ${\cal C}$ {\em symmetric} if $C\in{\cal C}$ implies $\sigma(C)\in{\cal C}$. An iteration of the {\em symmetric clique partition algorithm} works as in the previous algorithm, searching for a $\delta$-clique $C'$ in the current $H$, but then deletes the arcs of {\em both} $C'$ and $\sigma(C')$. Let the algorithm construct a partition ${\cal C}'$ consisting of cliques $C'_1,\sigma(C'_1),\ldots,C'_r,\sigma(C'_r)$ obtained in this order.
To estimate the size of ${\cal C}'$ and the running time, imagine we would apply the previous algorithm to our $H$ (ignoring the fact that $H$ is skew-symmetric). Let the resulting partition ${\cal C}$ be formed by cliques $C_1,\ldots,C_q$ (in this order). Note that for a bipartite graph with $n$ nodes and $m$ arcs, both the number $e(C)$ of arcs of a $\delta$-clique $C$ and its size $s(C)$ are computed uniquely, and these are monotone functions in $m$, as well as the above-mentioned time bound $\tau(C)$ (indicated in~\cite{FM-91}). Moreover, one can check that $m'\le m$ and $e(C')\ne 0$ imply $m'-2e(C')\le m-e(C)$, where $C'$ is a $\delta$-clique in a graph with $n$ nodes and $m'$ arcs. Using these, we can conclude that $r\le q$ and that for $i=1,\ldots r$, $s(C'_i)\le s(C_i)$ and $\tau(C'_i)\le \tau(C_i)$. Then $s({\cal C}')\le 2s({\cal C})$, implying $s({\cal C})=O(m\beta)$, by~\refeq{part}. Also the time of the modified algorithm is $O(\sqrt{n}m\beta)$ (by~\refeq{part} and by the fact that the above bound $\tau'$ remains the same) plus the time needed to treat the symmetric cliques $\sigma(C'_i)$, which is $O(m)$.
Finally, the graph $H^\ast$ obtained from $H$ by transforming the cliques $C'_1,\sigma(C'_1),\ldots, C'_r,\sigma(C'_r)$ into stars has a naturally induced skew-symmetry. By the above argument, $H^\ast$ has $O(m\beta)$ arcs, and computing $H^\ast$ takes $O(\sqrt{n}m\beta)$ time. Apply such a transformation to the input graph of MSFP arising from an instance of the maximum matching problem. Arguing as in the bipartite matching case above and as in the proof of Lemma~\ref{lm:root}, we conclude with the following.
\begin{theorem} \label{tm:compr} A maximum matching in a general graph with $n$ nodes and $m$ edges can be found in $O(\sqrt{n}m\log(n^2/m)/\log{n})$ time.
\end{theorem}
{\bf Acknowledgements.} We thank the anonymous referees for suggesting improvements in the original version of this paper and pointing out to us important references.
\small
\end{document}
\begin{document}
\author{Michał Goliński \thanks{Faculty of Mathematics and Comp. Sci., Adam Mickiewicz University, ul. Uniwersytetu Poznańskiego 4, 61-614 Poznań, POLAND} \thanks{The authors' research was partially supported by National Science Centre (Poland) Grant UMO-2013$\slash$10$\slash$A$\slash$ST1$\slash$00091.} \thanks{\href{mailto:golinski@amu.edu.pl}{golinski@amu.edu.pl}} , Adam Przestacki \setcounter{footnote}{0}\footnotemark \setcounter{footnote}{1} \footnotemark \setcounter{footnote}{3} \thanks{\href{mailto:adamp@amu.edu.pl}{adamp@amu.edu.pl}} \thanks{Corresponding author} }
\title{The invariant subspace problem for the space of smooth functions on the real line}
\begin{abstract} We construct a continuous linear operator acting on the space of smooth functions on the real line that has no non-trivial invariant subspaces. This is the first example of such an operator acting on a Fréchet space without a continuous norm. The construction is based on the ideas due to C. Read, who constructed a continuous operator without non-trivial invariant subspaces on the Banach space $\ell_1$. \end{abstract}
Keywords: \textit{Invariant subspace problem} $\cdot$ \textit{Cyclic vectors} $\cdot$ \textit{Space of smooth functions} $\cdot$ \textit{Sequence spaces}
\section{Introduction} Let $X$ be a locally convex space. The invariant subspace (subset) problem is the question whether every continuous linear operator $T\colon X\to X$ has a non-trivial invariant subspace (subset), i.e., whether there exists a closed subspace (subset) $0\subsetneq H\subsetneq X$ such that $T(H)\subset H$. The case of the separable Hilbert space has been studied by multiple authors and is one of the most important open problems in operator theory.
In the Banach space setting the first counterexamples to the invariant subspace problem were constructed by P. Enflo \cite{MR0473871,MR892591} and C. Read \cite{MR749447,MR806634} in the 1980s. While Enflo constructed an operator on an artificial Banach space, Read was able to build his counterexample on $\ell_1$. An accessible exposition of Read's construction can be found in the last chapter of \cite{MR2533318}. Later on, Read \cite{MR959046,MR950973} improved his methods and was able to build counterexamples on other Banach spaces.
A. Atzmon in \cite{MR701260} published a construction of an operator on a nuclear Fréchet space (with a continuous norm) without non-trivial invariant subspaces.
The first author was able to adapt the methods of Read and constructed counterexamples to the invariant subspace problem for many classical Fréchet spaces including the space of holomorphic functions on the unit disc $H(\mathbb{D})$ and the space of rapidly decreasing sequences $s$ (see \cite{MR2863862}, see also \cite{MR3077885} for an operator without non-trivial invariant subsets on $s$). The construction required the existence of a continuous norm on the underlying space.
In \cite{MR3866905} Q. Menet was able to show that there is a big family of Fréchet spaces with a continuous norm that support an operator without non-trivial invariant subspaces (even subsets).
When the Fréchet space $X$ does not possess a continuous norm, then there are two possibilities: \begin{itemize}
\item There exists a fundamental increasing sequence of seminorms
$(p_n)_{n\in\mathbb{N}}$ for $X$ such that $\ker p_{n+1}$ is of finite codimension in $\ker p_n$
for all $n$ (e.g., the space of all sequences $\omega$).
Then every operator on $X$ has a non-trivial invariant subspace
(see \cite[Theorem 2.1]{MR3866905}).
\item No such fundamental system exists.
In this case it is not clear whether an operator without non-trivial
invariant subspaces exists -- this case is left open in \cite{MR3866905}. \end{itemize}
In this paper we construct an operator without non-trivial invariant subspaces on the space of smooth functions on the real line $C^\infty(\mathbb{R})$ with the usual topology of uniform convergence of functions and their derivatives on compact sets. This space does not possess a continuous norm and is isomorphic to the countable product $s^\mathbb{N}$ of the space of rapidly decreasing sequences, which plays an important role in the theory of nuclear Fréchet spaces because of the celebrated Kōmura-Kōmura theorem.
\section{Preliminaries} Throughout we will denote by $\mathbb{N}$ the set of non-negative integers. For us an operator will always be a continuous linear map. For all unexplained notions from functional analysis we refer to \cite{MR1483073}. Below we describe the spaces which will be used in our construction.
\subsection{The space \texorpdfstring{$s$}{s} of rapidly decreasing sequences} The space of rapidly decreasing sequences is the space \[
s
= \left\lbrace
\left(x_j\right)_{j=0}^\infty:
p_N\left(\left(x_j\right)_{j=0}^\infty\right) =
\sum_{j=0}^\infty |x_j|(j+1)^N<\infty
\quad
\text{for every $N\in \mathbb{N}$}
\right\rbrace \] with the topology generated by the family of seminorms $\{p_N: N\in \mathbb{N}\}$. Due to technical reasons we will use a different, although equivalent, system of seminorms for $s$. The proof of the proposition below is standard (see, e.g., \cite[Section 2]{MR3077885}). \begin{proposition} \label{prop:matrix} There exists a matrix $\displaystyle \left(A_{N,j}\right)_{N,j=0}^\infty$ such that: \begin{enumerate}[leftmargin=2\parindent]
\item We have $A_{N,j}\geq 1$ for every $N,j \in \mathbb{N}$.
\item The sequence $\left(A_{N,j}\right)_{N=0}^\infty$ is increasing and unbounded for every $j\in \mathbb{N}$.
\item The sequence $\left(A_{N,j}\right)_{j=0}^\infty$ is increasing and unbounded for every $N\in \mathbb{N}$.
\item We have $\frac{A_{N,j+1}}{A_{N,j}}\leq 2$ for every $N,j \in \mathbb{N}$.
\item We have $\displaystyle\lim_{j\to\infty} \frac{A_{N,j}}{A_{N+1,j}}=0$ for every $N \in \mathbb{N}$.
\item We have $\displaystyle\lim_{j\to\infty} \frac{2^j}{A_{N,j}}=\infty$ for every $N \in \mathbb{N}$.
\item The family of seminorms $\{p_N:N\in\mathbb{N}\}$ is equivalent to the family of seminorms $\{|\cdot|_N:N\in\mathbb{N}\}$, where
\begin{equation*}
\left|\left(x_j\right)_{j=0}^\infty \right|_N=\sum_{j=0}^\infty|x_j|A_{N,j}.
\end{equation*}
Moreover, the unit balls of the seminorms $|\cdot|_N$ form a basis of neighborhoods of zero in $s$. \end{enumerate} \end{proposition}
\begin{remark}
Observe that in order to prove that a sequence $(x_n)_{n=0}^\infty$ converges to $x$ in $s$ one only needs to prove that for every $N$ the inequality $|x_n-x|_N\leq 1$ holds for every $n$ big enough. \end{remark} \subsection{The space \texorpdfstring{$s^\mathbb{N}$}{sᴺ}} The countable product of the space of rapidly decreasing sequences is the space \[
s^\mathbb{N}
= \left\lbrace
\left(x_{i,j}\right)_{i,j=0}^\infty:
\left(x_{i,j}\right)_{i=0}^\infty \in s
\quad
\text{for every $j\in\mathbb{N}$}
\right\rbrace \]
with the natural Fréchet space topology of the product space. This topology is generated by the family of seminorms $\{\|\cdot\|_N:N\in\mathbb{N}\}$, where \[
\left\|\left(x_{i,j}\right)_{i,j=0}^\infty\right\|_N
= \sum_{j=0}^N \left|\left(x_{i,j}\right)_{\vphantom{j}i=0}^\infty \right|_N. \]
Note that the unit balls of the seminorms $\|\cdot\|_N$ form a basis of neighborhoods of zero in $s^\mathbb{N}$.
The family of unit vectors $\{e_{n,m}:n,m\in\mathbb{N}\}$, where \[
e_{n,m}=\left(\delta_{(n,m)(i,j)}\right)_{i,j=0}^\infty, \] is a Schauder basis for $s^\mathbb{N}$ (where $\delta$ is the Kronecker delta). It follows from the definitions that \[
\|e_{n,m}\|_N
= \begin{cases}
A_{N,n}, & m\leq N;\\
0, & m>N.
\end{cases} \] \subsection{The spaces \texorpdfstring{$s^\mathbb{N}_0$}{sᴺ₀} and \texorpdfstring{$s^\mathbb{N}_{0,0}$}{sᴺ₀₀}} In our paper two dense linear subspaces of $s^\mathbb{N}$ will play an important role. We define \[
s^\mathbb{N}_0
= \left\lbrace
\left(x_{i,j}\right)_{i,j=0}^\infty\in s^\mathbb{N}:
x_{i,j}=0
\quad
\text{for $j$ large enough}
\right\rbrace \] and \[
s^\mathbb{N}_{0,0}
= \left\lbrace
\left(x_{i,j}\right)_{i,j=0}^\infty\in s^\mathbb{N}:
x_{i,j}=0
\quad
\text{for all but finitely many $(i,j)$}
\right\rbrace. \] The formula \[
\vertiii{\left(x_{i,j}\right)_{i,j=0}^\infty}_N
= \sum_{j=0}^\infty
\left| \left(x_{i,j}\right)_{\vphantom{j}i=0}^\infty \right|_N \] defines a norm on $s^\mathbb{N}_0$ and for every $n,m,N\in \mathbb{N}$ we have \[
\vertiii{e_{n,m}}_N = A_{N,n}. \]
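The interplay of these (semi)norms can be checked on finitely supported vectors; in the following sketch the $3\times 3$ matrix stands in for an upper-left corner of a matrix as in Proposition \ref{prop:matrix}, and a vector is a dictionary mapping an index pair $(i,j)$ to a coefficient (all our own conventions for illustration):

```python
def norm_N(x, N, A):
    """||x||_N on s^N for finitely supported x: sum of |(x_{i,j})_i|_N
    over the columns j = 0, ..., N only."""
    return sum(abs(c) * A[N][i] for (i, j), c in x.items() if j <= N)

def triple_norm_N(x, N, A):
    """|||x|||_N on s^N_0: the same sum taken over all columns j."""
    return sum(abs(c) * A[N][i] for (i, j), c in x.items())
```

In particular $\|e_{n,m}\|_N$ vanishes for $m>N$, while $\vertiii{e_{n,m}}_N=A_{N,n}$ for all $m$.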
\section{The strategy} \label{sec:stra} In this section we describe the strategy for the construction of an operator without non-trivial invariant subspaces on a separable Fréchet space $X$
with a Schauder basis $\{e_n:n\in\mathbb{N}\}$ and with the topology generated by the fundamental family of seminorms $\{\|\cdot\|_n: n\in\mathbb{N}\}$ (we assume that the unit balls of those seminorms form a basis of neighborhoods of zero in $X$). This section is modeled on the last chapter of \cite{MR2533318} where F. Bayart and \'{E}. Matheron gave a detailed exposition of Read's construction of
an operator on $\ell_1$ without invariant subspaces.
The idea behind the construction is simple and is based on cyclic vectors. \begin{definition} Let $T\colon X\to X$ be an operator. We say that $x\in X$ is a cyclic vector for $T$ if \[
\Span \left\lbrace x,Tx,T^2x,\ldots\right\rbrace
= \{
P(T)x:
\text{$P$ is a polynomial}
\} \] is dense in $X$. \end{definition} It is a simple observation that an operator $T\colon X\to X$ has no non-trivial invariant subspaces if and only if every non-zero $x\in X$ is a cyclic vector for $T$. For some classes of operators the existence of at least one cyclic vector follows directly from the form of the operator. \begin{definition} We say that an operator $T\colon X\to X$ is a perturbed weighted forward shift if for every $j\in \mathbb{N}$ we have \[
Te_j = \sum_{i=0}^{j+1} \lambda_ie_i, \] where $\lambda_{j+1}\not=0$. \end{definition} It is clear that $e_0$ is a cyclic vector for any perturbed weighted forward shift. We observe now that in order to show that any other non-zero vector is cyclic for $T$ it is enough to prove an inequality. \begin{proposition} \label{prop:strategy} Let $T\colon X\to X$ be an operator for which $e_0$ is a cyclic vector. A non-zero $x\in X$ is a cyclic vector for $T$ if and only if for every $N\in\mathbb{N}$ there is a polynomial $P$ such that \begin{equation} \label{eq:stra}
\|P(T)x-e_0\|_N \leq 4. \end{equation} \end{proposition} \begin{proof} If $x$ is a cyclic vector for $T$, then $\Span\left\lbrace x, Tx, T^2x,\ldots \right\rbrace$ is dense in $X$ and therefore for every $N\in \mathbb{N}$ there is a polynomial $P$ such that \[
\|P(T)x-e_0\|_N\leq 4. \] To prove the converse, recall that the unit balls of the family of seminorms
$\left\lbrace\|\cdot\|_n:n\in\mathbb{N}\right\rbrace$ form a basis of neighborhoods of zero in $X$. Therefore the condition in the proposition implies that \[
e_0 \in \overline{\Span \left\lbrace x, Tx, T^2x,\ldots \right\rbrace}. \] But $e_0$ is a cyclic vector for $T$ and \[
\Span \left\lbrace e_0, Te_0, T^2e_0,\ldots \right\rbrace
\subset
\Span \left\lbrace x, Tx, T^2x,\ldots \right\rbrace. \] Thus $x$ is cyclic for $T$. \end{proof}
In order to define an operator $T\colon X\to X$ for which \eqref{eq:stra} is easy to verify for a large family of vectors we will use the following lemma due to Read, which can be found in a slightly modified version in the last chapter of \cite{MR2533318} (note that the proof uses only simple linear algebra and a compactness argument). \begin{definition} We say that vectors $\gamma_0,\ldots,\gamma_n$ form a perturbed canonical basis of $\Span\{e_0,\ldots,e_n\}$ if for every $0\leq j\leq n$ \[
\gamma_j = \sum_{i=0}^j\mu_ie_i, \] where $\mu_j\not=0$. \end{definition} \begin{remark} Let $T\colon X\to X$ be a perturbed weighted forward shift. For every $n\in \mathbb{N}$ the vectors $e_0, Te_0,\ldots, T^ne_0$ form a perturbed canonical basis of $\Span\{e_0,\ldots,e_n\}$. \end{remark} \begin{lemma}\ \label{lemma:theLemma} Let $\varepsilon>0$, $c$ and $d$ be positive integers and $\gamma_0,\ldots, \gamma_{c+d-1}$ be a perturbed canonical basis of $H=\Span\{e_0,\ldots,e_{c+d-1}\}$ such that \[
\gamma_c = \varepsilon e_c+e_0. \]
Let $\|\cdot\|$ be a norm on $\Span \{e_n:n\in\mathbb{N}\}$ and $K\subset H$ be a compact set such that for every $y\in K$ with \[
y = \sum_{i=0}^{c+d-1}y_i\gamma_i \] there is $0\leq j\leq c-1$ such that $y_j \neq 0$.
Then there is a number $D\geq 1$ such that for every $y\in K$ and every $T\colon X\to X$ which is a perturbed weighted forward shift such that \[
T^je_0=\gamma_j \text{ for } j=0,\ldots, c+d-1 \] there is a polynomial \[
P(t) = \sum_{i=1}^{c+d}c_it^i \text{ with } \sum_{i=1}^{c+d}|c_i|\leq D \] for which \[
\|P(T)y-e_0\|
\leq 2\varepsilon \|e_c\|
+ D\times \max_{c+d\leq j\leq 2(c+d-1)}\| T^je_0\|. \] \end{lemma} This lemma will be used inductively to construct an operator for which \eqref{eq:stra} holds for many vectors from $\Span\{e_n: n\in\mathbb{N} \}$. Having an arbitrary vector $x=\sum_{n=0}^\infty x_ne_n$ we will write $x=h+t$ in such a way that \eqref{eq:stra} holds for $h$ and $P(T)t$ is very small in the corresponding norm. This in essence is what we will do. \begin{remark} In the proofs of Read, Goliński and Menet the fact that $X$ possesses a continuous norm was crucial in the definition of the compact set $K$ when the above lemma is used. In the case of $s^\mathbb{N}$ there is no continuous norm and we need to somehow overcome this difficulty. \end{remark} \begin{remark} To realize the above strategy we need to define an order on the canonical basis of $s^\mathbb{N}$. This is done in the next section. \end{remark}
\section{The definitions of the functions \texorpdfstring{$\Next$}{next} and \texorpdfstring{$\pos$}{pos}} \begin{definition} \label{def:order} Let $(b_n)_{n=1}^\infty$ be an increasing sequence of natural numbers such that $2b_n+1<b_{n+1}$ for every $n\in\mathbb{N}$. \begin{enumerate}[leftmargin=2\parindent] \item We define the function
\[
\Next \colon \mathbb{N}\times\mathbb{N} \to \mathbb{N}\times\mathbb{N}
\]
by the formula (see also Figure \ref{fig:one}):
\begin{equation}
\label{def:next}
\Next(i,j)=
\begin{cases}
(i,j+1),& i = b_n,\, 2 \mid j,\, j < 2n;\\
(i,j+1),& i = 2b_n+1,\, 2 \nmid j,\, 0 < j \leq 2n;\\
(i,j+1),& i = 0,\, 2\nmid j ;\\
(i,j-1),& i = b_n+1,\, 2\nmid j,\, j < 2n;\\
(i,j-1),& i = 2b_n,\, 2\mid j,\, 0 < j \leq 2n;\\
\text{\parbox{\widthof{remaining cases}}{in the\\ remaining cases}}&
\begin{cases}
(i+1,j),& 2\mid j;\\
(i-1,j),& 2\nmid j.\\
\end{cases}
\end{cases}
\end{equation} \item We define $\Next^0$ to be the identity on $\mathbb{N}\times\mathbb{N}$ and for $k\geq 1$ by
$\Next^k$ we denote the composition of $\Next$ with itself $k$ times. \item We define the function
\[
\pos \colon \mathbb{N}\times\mathbb{N} \to \mathbb{N}
\]
by the rule:
\[
\pos(n,m) = k
\quad \text{if and only if} \quad
\Next^k(0,0)=(n,m).
\] \item We define the sequence $(e_k)_{k=0}^\infty$ by the rule
\[
e_k = e_{n,m}
\quad \text{if and only if} \quad
\pos(n,m)=k.
\] \end{enumerate} \end{definition} \begin{remark}
The function $\Next$ defines an ordering of $\mathbb{N}\times\mathbb{N}$ which is illustrated in Figure \ref{fig:one}.
One should assume that the sequence $(b_n)_{n=1}^\infty$ increases very rapidly, so the true image would be very elongated. \end{remark}
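Definition \ref{def:order} can be transcribed into code essentially verbatim, which also makes the injectivity of the walk $k\mapsto\Next^k(0,0)$ easy to check on initial segments. In the sketch below the sample values $b_1=3$, $b_2=8$, $b_3=20$ (chosen by us so that $2b_n+1<b_{n+1}$) are supplied; with only finitely many $b_n$ given, only an initial portion of $\mathbb{N}\times\mathbb{N}$ is enumerated correctly:

```python
def make_next(b):
    """b[n], n >= 1, as in the definition of Next (b[0] is unused).
    Returns Next as a function on N x N, transcribing the case formula;
    for valid b the cases are mutually exclusive."""
    def nxt(i, j):
        for n in range(1, len(b)):
            if i == b[n] and j % 2 == 0 and j < 2 * n:
                return (i, j + 1)
            if i == 2 * b[n] + 1 and j % 2 == 1 and 0 < j <= 2 * n:
                return (i, j + 1)
            if i == b[n] + 1 and j % 2 == 1 and j < 2 * n:
                return (i, j - 1)
            if i == 2 * b[n] and j % 2 == 0 and 0 < j <= 2 * n:
                return (i, j - 1)
        if i == 0 and j % 2 == 1:
            return (i, j + 1)
        # remaining cases: move along the current column
        return (i + 1, j) if j % 2 == 0 else (i - 1, j)
    return nxt

def pos(target, nxt, limit=10**6):
    """Index of target in the walk (0,0), Next(0,0), Next^2(0,0), ...
    (the guard `limit` protects against unreachable targets)."""
    p, k = (0, 0), 0
    while p != target and k < limit:
        p, k = nxt(*p), k + 1
    return k
```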
\begin{figure}
\caption{Ordering defined by $\Next$}
\label{fig:one}
\caption{A single step of the inductive procedure}
\label{fig:two}
\end{figure} \begin{remark}
It is clear that the elements of the sequence $(e_n)_{n=0}^\infty$ form a Schauder basis of $s^\mathbb{N}$.
It is also easy to see that
\[
e_{\pos(n,m)} = e_{n,m}.
\] \end{remark}
\section{Construction of the operator} We fix a sequence \begin{equation*}
(N_n)_{n\in\mathbb{N} } = (0, 1, 0, 1, 2, 0, 1, 2, 3, 0, 1, 2, 3, 4, \ldots). \end{equation*} Two properties of this sequence will be important for us: every non-negative integer appears in this sequence infinitely many times and \begin{equation} \label{eq:N_n}
N_n \leq n \quad \text{for every $n\in \mathbb{N}$}. \end{equation} Assume that we are given integer sequences $(a_n)_{n=0}^\infty$, $(b_n)_{n=1}^\infty$, $(s_n)_{n=1}^\infty$ satisfying \begin{equation}
\begin{aligned}
\label{eq:monotonicity}
1 & = \Delta_0 \phantom{\mbox{} < b_1 < 2b_1 < s_1}\mbox{} < a_0 < a_0 + \pos(\Delta_0, 0) \\
& = \Delta_1 < b_1 < 2b_1 < s_1 < a_1 < a_1 + \pos(\Delta_1, 0) \\
& = \Delta_2 < b_2 < 2b_2 < s_2 < a_2 < a_2 + \pos(\Delta_2, 0) \\
& = \Delta_3 \ldots,
\end{aligned} \end{equation} where the function $\pos$ is derived from the sequence $(b_n)_{n=1}^\infty$ as described in the previous section.
For further reference, note that from the definition of $\Next$ and $\pos$ it follows that \begin{equation} \label{eq:pos}
\pos(\Delta_{n+1},0) = \pos(a_n,0) + \pos(\Delta_n,0). \end{equation} Moreover, assume we are given a sequence $(D_n)_{n=0}^\infty$ of non-zero numbers. We define a linear operator \[
T\colon s^\mathbb{N}_{0,0}\to s^\mathbb{N}_{0,0} \] by the formula \begin{equation} \label{eq:operator}
T^je_0
= \begin{cases}
\alpha_j e_j, & j\in \left[\pos(\Delta_n,0),\pos(a_n,0)\right);\\
\frac{1}{A_{N_n,a_n}}e_j+T^{j-\pos(a_n,0)}e_0, & j\in \left[\pos(a_n,0),\pos(\Delta_{n+1},0)\right);
\end{cases} \end{equation} where \begin{equation} \label{eq:alphas}
\alpha_j
= \begin{cases}
\frac{1}{A_{N_0,a_0}} \cdot 2^{j-\pos(\Delta_0,0)}, & j\in [\pos(\Delta_0, 0),\pos(a_0,0)); \\
\frac{1}{A_{N_n,a_n}} \cdot \left(\frac{1}{2D_{n-1}}\right)^{j- \pos(\Delta_n,0)}, & j\in [\pos(\Delta_n,0),\pos(s_n,0)), n \geq 1;\\
\alpha_{\pos(s_n,0)-1}\cdot 2^{j-\pos(s_n,0)}, & j\in [\pos(s_n, 0),\pos(a_n,0)), n \geq 1.
\end{cases} \end{equation} \begin{remark} Observe that because the function $\Next$ (defined with $(b_n)_{n=1}^\infty$ by \eqref{def:next}) gives us a strict ordering of $\mathbb{N}\times\mathbb{N}$ and all the numbers $\alpha_j$ in \eqref{eq:operator} are non-zero, one can easily deduce that \eqref{eq:operator} really defines a linear operator on $s^\mathbb{N}_{0,0}$ (we calculate the values of $Te_j$ in Section 7). \end{remark}
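For concreteness, the sequence $(N_n)$ fixed at the beginning of this section reads as the concatenation of the blocks $(0,1)$, $(0,1,2)$, $(0,1,2,3),\ldots$, one realization consistent with the displayed terms; a short routine (ours) generates it and makes both required properties evident:

```python
def N_seq(count):
    """First `count` terms of (N_n): the blocks (0,1), (0,1,2), ...
    concatenated, so each value occurs infinitely often and N_n <= n."""
    out, k = [], 1
    while len(out) < count:
        out.extend(range(k + 1))   # append the block 0, 1, ..., k
        k += 1
    return out[:count]
```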
\section{Choosing the parameters} \label{sec:parameters} The sequences $(a_n)_{n=0}^\infty$, $(b_n)_{n=1}^\infty$, $(s_n)_{n=1}^\infty$ that were used in the previous section will be carefully chosen in an inductive procedure carried out over the intervals $[\Delta_n, \Delta_{n+1})$ described in this section. Along the way we will also fix the corresponding sequence $(D_n)_{n=0}^\infty$.
The procedure on the first interval, starting at $\Delta_0$, is very similar to the general step, but simplified, as there is no need for $b_0$ and $s_0$. Hence we will go over the first step of the induction and the general step at the same time.
Assume that $(a_k)_{k=0}^{n-1}$, $(b_k)_{k=1}^{n-1}$, $(s_k)_{k=1}^{n-1}$ and $(D_k)_{k=0}^{n-1}$ have already been fixed. Hence the values of $T^je_0$ are defined by \eqref{eq:operator} for $j$ up to (but not including) $\pos(a_{n-1},0)+\pos(\Delta_{n-1},0)$. We define $\Delta_n=a_{n-1}+\pos(\Delta_{n-1},0)$ in accordance with \eqref{eq:monotonicity} (with $\Delta_0=1$).
If $n \geq 1$, then we choose $b_n$ to be an integer such that $b_n \geq 2 \pos(\Delta_n,0)$. Hence: \begin{equation}
\label{cond:pos_bn}
2(\pos(\Delta_n,0)-1)\leq b_n \leq \pos(b_n,0). \end{equation} We also require that for $k\geq b_n$ we have that \begin{equation}
\label{cond:2bn}
4^{\pos(\Delta_n,0)} A_{N_{n-1}+1,k}
\leq
\frac{A_{N_{n-1}+2,k}}{D_{n-1}}. \end{equation} This is possible as $\frac{A_{N_{n-1}+2,k}}{A_{N_{n-1}+1,k}}$ tends to $+\infty$ (see Proposition \ref{prop:matrix}).
In $(b_n,0)$ the order defined by $\Next$ takes us to columns other than the first (see Figure \ref{fig:two}). The number of iterates needed to return to $(b_{n+1},0)$ is determined by $\{b_1, b_2, \ldots, b_n\}$ only; hence we can calculate $\pos(x,0)$ for $x \leq b_{n+1}$, even though $b_{n+1}$ is not yet fixed and will be chosen in the next step. We choose $s_n$ to be an integer bigger than $2b_n$. The number $b_0$ is in fact not used in the definition of $\Next$; for simplicity we may choose $b_0=s_0=\Delta_0=1$.
We can now choose $a_n$ to be an integer greater than $s_n$ satisfying: \begin{equation} \label{eq:cond1}
\frac{2^{\pos(a_n,0)-\pos(s_n,0)-1}}{A_{N_n,a_n} \left( 2 D_{n-1} \right)^{\pos(s_n,0)-\pos(\Delta_n,0)-1}} \geq 1; \end{equation} \begin{equation} \label{eq:cond3}
A_{N_n,a_n} 4^{\pos(\Delta_n,0)}A_{0,0}
\leq
A_{N_n+1,a_n}. \end{equation} This is possible by Proposition \ref{prop:matrix}. For $n \geq 1$ we also require that: \begin{equation} \label{eq:cond2}
\frac{A_{N_{n-1},a_{n-1}}}{A_{N_{n},a_{n}}} \leq 1; \end{equation}
\begin{equation} \label{eq:cond4}
\frac{D_{n-1}A_{N_{n-1},b_{n}}}{A_{N_{n},a_{n}}} \leq 1. \end{equation} Now the equation \eqref{eq:alphas} gives us the values of $\alpha_j$ for $j \in [\pos(\Delta_n,0), \pos(a_n,0))$. Hence the values of $T^j e_0$ are defined for $j < \pos(a_n,0) + \pos(\Delta_n,0)$ by \eqref{eq:operator}. We put $\Delta_{n+1} = a_n + \pos(\Delta_n,0)$. For further reference note that \eqref{eq:cond1} gives us immediately
\begin{equation}
\label{eq:alpha_a_n}
\alpha_{\pos(a_n,0)-1} \geq 1. \end{equation}
To carry out the construction on the next interval we still need a suitable number $D_n$. It will be an outcome of applying Lemma \ref{lemma:theLemma}.
Let \begin{equation} \label{eq:heads}
H_n
= \Span\left\lbrace e_j:
j\leq \pos(\Delta_{n+1},0)-1
\right\rbrace. \end{equation} Then $\{T^je_0: j\leq \pos(\Delta_{n+1},0)-1\}$ is a perturbed canonical basis of $H_n$. We define a linear projection $\tau_n\colon H_n\to H_n$ by the formula \begin{equation} \label{eq:tau_n}
\tau_n\left(\sum_{j=0}^{\pos(\Delta_{n+1},0)-1} x_j T^je_0\right)
= \sum_{j=0}^{\pos(a_n,0)-1} x_jT^je_0. \end{equation} Using this projection we define a compact set \begin{equation} \label{eq:compacts}
K_n
= \left\lbrace
y\in H_n: \vertiii{y}_0\leq 1 \textrm{ and }
\vertiii{\tau_n(y)}_0\geq \frac{1}{2}
\right\rbrace. \end{equation}
We now apply Lemma \ref{lemma:theLemma} with
$H=H_n$, $K=K_n$, $\|\cdot\|=\vertiii{\cdot}_{N_n}$, $c=\pos(a_n,0)$ and $d=\pos(\Delta_n,0)$. From this lemma we get the necessary number $D_n$ and we can carry out the procedure on the next interval.
Applying Lemma \ref{lemma:theLemma} in the construction gives us the following: \begin{proposition}
\label{prop:heads}
If $y\in K_n$, then there is a polynomial
\begin{equation}
\label{eq:polynomial}
P(t)=\sum_{i=1}^{\pos(\Delta_{n+1},0)}c_it^i
\text{ with }
\sum_{i=1}^{\pos(\Delta_{n+1},0)}|c_i|\leq D_n
\end{equation}
for which
\[
\vertiii{P(T)y-e_0}_{N_n} \leq 3.
\] \end{proposition} \begin{proof} Let $y\in K_n$. Lemma \ref{lemma:theLemma} gives us the existence of a polynomial $P$ satisfying \eqref{eq:polynomial} such that \begin{align*}
\vertiii{P(T)y-e_0}_{N_n}
& \leq
\frac{2}{A_{N_n,a_n}} \vertiii{e_{\pos(a_n,0)}}_{N_n}
+ D_n \times \max_{\pos(\Delta_{n+1},0)\leq j\leq 2(\pos(\Delta_{n+1},0)-1)}
\vertiii{T^je_0}_{N_n}\\
& \overset{\mathpalette\mathclapinternal{\eqref{eq:operator}}}{=}
2 + D_n \times \max_{\pos(\Delta_{n+1},0)\leq j\leq 2(\pos(\Delta_{n+1},0)-1)}
\vertiii{\alpha_je_j}_{N_n}\\
& \overset{\mathpalette\mathclapinternal{\eqref{eq:alphas}}}{\leq}
2 + D_n \alpha_{\pos(\Delta_{n+1},0)} \vertiii{e_{2\pos(\Delta_{n+1},0)}}_{N_n}\\
&\overset{\mathpalette\mathclapinternal{\eqref{eq:alphas}}}{\leq}
2 + \frac{D_n}{A_{N_{n+1},a_{n+1}}} \vertiii{e_{\pos(b_{n+1},0)}}_{N_n}\\
& =
2 + \frac{D_nA_{N_{n},b_{n+1}}}{A_{N_{n+1},a_{n+1}}}
\overset{\mathpalette\mathclapinternal{\eqref{eq:cond4}}}{\leq} 3.\qedhere \end{align*} \end{proof}
\begin{corollary} \label{cor:heads} If $\displaystyle y\in \bigcup_{m=1}^\infty mK_n$, then there is a polynomial \begin{equation*}
P(t) = \sum_{i=1}^{\pos(\Delta_{n+1},0)} c_it^i
\text{ with }
\sum_{i=1}^{\pos(\Delta_{n+1},0)}|c_i| \leq D_n \end{equation*} for which \[
\vertiii{P(T)y-e_0}_{N_n}\leq 3. \] \end{corollary} \begin{proof} By the assumptions there is $m\geq 1$ such that $
y' = \frac{1}{m} y\in K_n. $ By Proposition \ref{prop:heads} there is a polynomial \begin{equation*}
P(t) = \sum_{i=1}^{\pos(\Delta_{n+1},0)}c_it^i
\text{ with }
\sum_{i=1}^{\pos(\Delta_{n+1},0)}|c_i| \leq D_n \end{equation*} for which \[
\vertiii{P(T)y'-e_0}_{N_n}\leq 3. \] To finish the proof observe that the polynomial $P' = \frac{1}{m}P$ has all the requested properties. \end{proof}
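The scaling step at the heart of the corollary, replacing $y$ by $y' = \frac{1}{m}y$ and $P$ by $P' = \frac{1}{m}P$, rests only on the linearity of $y \mapsto P(T)y$. A quick numerical sanity check of the identity $P'(T)y = P(T)y'$, with a small random matrix standing in for $T$ (all names here are illustrative, not part of the construction):

```python
import numpy as np

def poly_apply(coeffs, T, v):
    """Evaluate P(T)v for P(t) = sum_i coeffs[i-1] * t^i (no constant term)."""
    out = np.zeros_like(v)
    w = v.copy()
    for ci in coeffs:
        w = T @ w            # w = T^i v at step i
        out = out + ci * w
    return out

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4))   # stand-in operator
y_prime = rng.standard_normal(4)
m = 3.0
y = m * y_prime                   # y = m * y'
c = [0.5, -0.2, 0.1]              # coefficients of P

# P'(T)y = (1/m) P(T)(m y') = P(T)y', by linearity of P(T)
lhs = poly_apply([ci / m for ci in c], T, y)
rhs = poly_apply(c, T, y_prime)
assert np.allclose(lhs, rhs)
```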
\section{The values \texorpdfstring{$Te_j$}{Teⱼ}} To prove that the operator $T$ has certain properties we need to calculate the exact values of $Te_j$.
\begin{proposition} \label{prop:values} The operator $T\colon s_{0,0}^\mathbb{N}\to s_{0,0}^\mathbb{N}$ defined by \eqref{eq:operator} satisfies the equality: \[
Te_j
= \begin{cases}
\alpha_1 e_1, & j = 0;\\
\frac{\alpha_{j+1}}{\alpha_j}e_{j+1}, & j \in \left[\pos(\Delta_n,0), \pos(a_n,0)-1\right);\\
\frac{1}{\alpha_{\pos(a_n,0)-1}}
\left(\frac{1}{A_{N_n,a_n}} e_{\pos(a_n,0)} +e_0\right),
& j = \pos(a_n,0)-1;\\
e_{j+1}, & j \in \left[\pos(a_n,0),\pos(\Delta_{n+1},0)-1\right);\\
A_{N_n,a_n}\alpha_{\pos(\Delta_{n+1},0)} e_{\pos(\Delta_{n+1},0)}
- A_{N_n,a_n}\alpha_{\pos(\Delta_n,0)} e_{\pos(\Delta_n,0)},
& j = \pos(\Delta_{n+1}, 0)-1.
\end{cases} \] Thus $T$ is a perturbed weighted forward shift. \end{proposition} \begin{proof} We consider all the possible cases for $j$. All the calculations below follow from \eqref{eq:operator}. \begin{enumerate}[leftmargin=2\parindent] \item By the definition we have $Te_0=\alpha_1e_1$. \item If $j \in \left[\pos(\Delta_n,0),\pos(a_n,0)-1\right)$, then
\[
e_j
= \frac{1}{\alpha_j} T^je_0.
\]
Therefore
\[
Te_j
= \frac{1}{\alpha_j}T^{j+1}e_0=\frac{\alpha_{j+1}}{\alpha_j}e_{j+1}.
\] \item If $j = \pos(a_n,0)-1$, then
\[
e_{\pos(a_n,0)-1}
= \frac{1}{\alpha_{\pos(a_n,0)-1}}T^{\pos(a_n,0)-1}e_0.
\]
Therefore
\[
Te_{\pos(a_n,0)-1}
= \frac{1}{\alpha_{\pos(a_n,0)-1}}T^{\pos(a_n,0)}e_0
= \frac{1}{\alpha_{\pos(a_n,0)-1}}\left(\frac{1}{A_{{N_n},a_n}}e_{\pos(a_n,0)} + e_0\right).
\] \item If $j \in \left[\pos(a_n,0),\pos(\Delta_{n+1},0)-1\right)$, then
\[
e_j = A_{N_n,a_n}\left(T^je_0-T^{j-\pos(a_n,0)}e_0 \right).
\]
Therefore
\begin{align*}
Te_j
& = A_{N_n,a_n}\left(T^{j+1}e_0-T^{j+1-\pos(a_n,0)}e_0 \right)\\
& = A_{N_n,a_n}\left(\frac{1}{A_{N_n,a_n}}e_{j+1}
+ T^{j+1-\pos(a_n,0)}e_0-T^{j+1-\pos(a_n,0)}e_0\right)\\
& = e_{j+1}.
\end{align*} \item If $j = \pos(\Delta_{n+1},0)-1$, then
\[
e_{\pos(\Delta_{n+1},0)-1}
= A_{N_n,a_n}\left(T^{\pos(\Delta_{n+1},0)-1}e_0-T^{\pos(\Delta_{n+1},0)-1-\pos(a_n,0)}e_0 \right).
\]
Using \eqref{eq:pos} we get
\begin{align*}
Te_{\pos(\Delta_{n+1},0)-1}
& = A_{N_n,a_n}\left(T^{\pos(\Delta_{n+1},0)} e_0-T^{\pos(\Delta_n,0)}e_0 \right)\\
& = A_{N_n,a_n}\alpha_{\pos(\Delta_{n+1},0)} e_{\pos(\Delta_{n+1},0)}
- A_{N_n,a_n}\alpha_{\pos(\Delta_n,0)}e_{\pos(\Delta_n,0)}.\qedhere
\end{align*} \end{enumerate} \end{proof}
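The proposition says that $T$ is a weighted forward shift except at certain ``return'' indices, where the image also picks up a component along $e_0$. The following toy sketch (with hypothetical weights and a single perturbation index $c-1$; it is \emph{not} the operator of the paper) mimics that shape on finitely supported sequences represented as dictionaries:

```python
# Hypothetical toy, NOT the operator constructed in the paper: a weighted
# forward shift on finitely supported sequences {index: coefficient} with one
# "return" index c-1, where T e_{c-1} = w_{c-1} * ((1/A) e_c + e_0),
# mimicking the case j = pos(a_n,0) - 1 above.

def shift_apply(x, weights, c, A):
    """Apply the toy perturbed weighted shift to a dict {index: coeff}."""
    y = {}
    for j, coeff in x.items():
        w = weights.get(j, 1.0)  # weight w_j (default 1)
        if j == c - 1:
            # perturbed step: forward shift plus a component along e_0
            y[c] = y.get(c, 0.0) + coeff * w / A
            y[0] = y.get(0, 0.0) + coeff * w
        else:
            # plain weighted forward shift: e_j -> w_j e_{j+1}
            y[j + 1] = y.get(j + 1, 0.0) + coeff * w
    return y

# iterating from e_0: at the return index the orbit revisits e_0, which is
# the mechanism making e_0 a cyclic vector for shifts of this shape
x = {0: 1.0}
for _ in range(3):
    x = shift_apply(x, {}, c=3, A=2.0)
```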
\section{Continuity of \texorpdfstring{$T$}{T}} In this section we show that the linear operator $T$ defined on the space $s^\mathbb{N}_{0,0}$ can be extended to a continuous linear operator acting from $s^\mathbb{N}$ to $s^\mathbb{N}$. We also prove some continuity type estimates which will be important in the forthcoming sections.
\begin{lemma} \label{lemma:estimate} For every $j\in \mathbb{N}$ and $N\in\mathbb{N}$
we have $\|e_{j+1}\|_N \leq 2\|e_j\|_{N+1}$ and $\vertiii{e_{j+1}}_N\leq 2 \vertiii{e_j}_N$. \end{lemma} \begin{proof} This follows directly from the definition of the vectors $e_j$ and the definition of
$\|\cdot\|_N$ and $\vvvert \cdot \vvvert_N$. \end{proof}
\begin{proposition} \label{prop:continuity} The operator $T\colon s_{0,0}^\mathbb{N}\to s_{0,0}^\mathbb{N}$ defined by \eqref{eq:operator} satisfies the inequality
$\|Tx\|_N\leq 4\|x\|_{N+1}$ for every $x\in s_{0,0}^\mathbb{N}$ and $N\in\mathbb{N}$.
In particular, $T$ can be extended to a continuous operator on $s^\mathbb{N}$. Moreover, the extension of $T$ (which we also denote by $T$) satisfies the inequality $\vertiii{Tx}_N\leq 4\vertiii{x}_N$ for every $x\in s_0^\mathbb{N}$ and $N\in \mathbb{N}$. \end{proposition} \begin{proof} Fix $N\in\mathbb{N}$. First we will show that for every $(n,m)\in \mathbb{N}\times\mathbb{N}$ we have \begin{equation} \label{eq:cont}
\left\|Te_{n,m}\right\|_N\leq 4\left\|e_{n,m}\right\|_{N+1}
\quad\textrm{and}\quad
\vertiii{Te_{n,m}}_N\leq 4\vertiii{e_{n,m}}_N. \end{equation} Let $j\in\mathbb{N}$ be the unique number such that \[
e_j=e_{n,m}. \] We consider all the possible cases. \begin{enumerate}[leftmargin=2\parindent] \item If $j=0$, then by Proposition \ref{prop:values}
\[
Te_0 = \alpha_1e_1.
\]
Hence, using Lemma \ref{lemma:estimate}
\[
\|Te_0\|_N
= \vertiii{Te_0}_N
= \vertiii{\alpha_1e_1}_N
\leq 2 \vertiii{e_0}_N
= 2 \|e_0\|_N.
\] \item If $j \in \left[\pos(\Delta_n,0),\pos(a_n,0)-1\right)$,
then by Proposition \ref{prop:values}
\[
Te_j = \frac{\alpha_{j+1}}{\alpha_j}e_{j+1}.
\]
By the definition of the numbers $\alpha_j$ (see \eqref{eq:alphas}) we have
\[
\frac{\alpha_{j+1}}{\alpha_j} \leq 2.
\]
Therefore, using Lemma \ref{lemma:estimate}, we get
\[
\left\|Te_j\right\|_N
= \left\|\frac{\alpha_{j+1}}{\alpha_j}e_{j+1}\right\|_N
\leq 4 \left\|e_j\right\|_{N+1}
\quad\textrm{and}\quad
\vertiii{Te_j}_N
= \vertiii{\frac{\alpha_{j+1}}{\alpha_j}e_{j+1}}_N
\leq 4 \vertiii{e_j}_N.
\]
\item If $j=\pos(a_n,0)-1$, then by Proposition \ref{prop:values}
\[
Te_{\pos(a_n,0)-1}
= \frac{1}{\alpha_{\pos(a_n,0)-1}}
\left(\frac{1}{A_{N_n,a_n}} e_{\pos(a_n,0)} +e_0\right).
\]
We have
\begin{align*}
\left\|
\frac{1}{\alpha_{\pos(a_n,0)-1}}\left(\frac{1}{A_{N_n,a_n}}e_{\pos(a_n,0)} + e_0\right)
\right\|_N
& = \vertiii{\frac{1}{\alpha_{\pos(a_n,0)-1}}\left(\frac{1}{A_{N_n,a_n}} e_{\pos(a_n,0)} + e_0\right)}_N \\
& = \frac{\frac{A_{N,a_n}}{A_{N_n,a_n}}+A_{N,0}}{\alpha_{\pos(a_n,0)-1}}
\end{align*}
and
\[
\left\| e_{\pos(a_n,0)-1}\right\|_N
= \vertiii{e_{\pos(a_n,0)-1}}_N
= A_{N,a_n-1}.
\]
Moreover, from \eqref{eq:alpha_a_n} and Proposition \ref{prop:matrix}
\[
\frac{\frac{A_{N,a_n}}{A_{N_n,a_n}} + A_{N,0}}{\alpha_{\pos(a_n,0)-1}}
\leq 4 A_{N,a_n-1}.
\]
This shows that
\[
\left\|T e_{\pos(a_n,0)-1}\right\|_N
= \vertiii{Te_{\pos(a_n,0)-1}}_N
\leq 4\left\| e_{\pos(a_n,0)-1}\right\|_N
= 4 \vertiii{e_{\pos(a_n,0)-1}}_N.
\] \item If $j\in \left[\pos(a_n,0),\pos(\Delta_{n+1},0)-1\right)$,
then by Proposition \ref{prop:values}
\[
Te_j = e_{j+1}.
\]
Therefore, using Lemma \ref{lemma:estimate}, we get
\[
\left\|Te_j\right\|_N
= \left\|e_{j+1}\right\|_N\leq 4\left\|e_j\right\|_{N+1}
\quad\textrm{and}\quad
\vertiii{Te_j}_N
= \vertiii{e_{j+1}}_N\leq 4\vertiii{e_j}_N.
\] \item If $j=\pos(\Delta_{n+1},0)-1$, then by Proposition \ref{prop:values}
\[
Te_{\pos(\Delta_{n+1},0)-1}
= A_{N_n,a_n} \alpha_{\pos(\Delta_{n+1},0)} e_{\pos(\Delta_{n+1},0)}
- A_{N_n,a_n} \alpha_{\pos(\Delta_n,0)} e_{\pos(\Delta_n,0)}.
\]
We have
\begin{align*}
& \left\|A_{N_n,a_n} \alpha_{\pos(\Delta_{n+1},0)} e_{\pos(\Delta_{n+1},0)}
- A_{N_n,a_n}\alpha_{\pos(\Delta_n,0)}e_{\pos(\Delta_n,0)} \right\|_N\\
& \hspace{2cm} =
\vertiii{A_{N_n,a_n} \alpha_{\pos(\Delta_{n+1},0)} e_{\pos(\Delta_{n+1},0)}
- A_{N_n,a_n} \alpha_{\pos(\Delta_n,0)} e_{\pos(\Delta_n,0)}}_N\\
& \hspace{2cm} =
A_{N_n,a_n} \alpha_{\pos(\Delta_{n+1},0)} A_{N,\Delta_{n+1}}
+ A_{N_n,a_n} \alpha_{\pos(\Delta_n,0)} A_{N,\Delta_n}\\
& \hspace{2cm} \overset{\mathpalette\mathclapinternal{\eqref{eq:alphas}}}{=}
\frac{A_{N_n,a_n} A_{N,\Delta_{n+1}}}{A_{N_{n+1},a_{n+1}}}
+ \frac{A_{N_n,a_n}A_{N,\Delta_n}}{A_{N_n,a_n}}
\end{align*}
and
\[
\left\|e_{\pos(\Delta_{n+1},0)-1}\right\|_N
= \vertiii{e_{\pos(\Delta_{n+1},0)-1}}_N=A_{N,\Delta_{n+1}-1}.
\]
By \eqref{eq:cond2} we have
\[
\frac{A_{N_n,a_n}}{A_{N_{n+1},a_{n+1}}} \leq 1.
\]
Therefore, since
$A_{N,\Delta_{n+1}}\leq 2A_{N,\Delta_{n+1}-1}$ and
$A_{N,\Delta_{n}}\leq 2A_{N,\Delta_{n+1}-1}$ (see Proposition \ref{prop:matrix}),
we get that
\[
\frac{A_{N_n,a_n}A_{N,\Delta_{n+1}}}{A_{N_{n+1},a_{n+1}}}
+ \frac{A_{N_n,a_n}A_{N,\Delta_n}}{A_{N_n,a_n}}
\leq 4 A_{N,\Delta_{n+1}-1}.
\]
This shows that
\[
\left\|Te_{\pos(\Delta_{n+1},0)-1}\right\|_N
= \vertiii{Te_{\pos(\Delta_{n+1},0)-1}}_N
\leq 4\left\|e_{\pos(\Delta_{n+1},0)-1}\right\|_N
= 4\vertiii{e_{\pos(\Delta_{n+1},0)-1}}_N.
\] \end{enumerate}
After all the tedious calculations we are now ready to prove the desired inequalities. Let $\left(x_{i,j}\right)_{i,j=0}^\infty\in s_{0,0}^\mathbb{N}$ be arbitrary. Then \begin{align*}
\left\|T\left(x_{i,j}\right)_{i,j=0}^\infty\right\|_N
= \left\|\sum_{i,j=0}^\infty x_{i,j}Te_{i,j}\right\|_N
& \leq
\sum_{i,j=0}^\infty |x_{i,j}|\left\| Te_{i,j} \right\|_N
\overset{\eqref{eq:cont}}{\leq}
4 \sum_{i,j=0}^\infty |x_{i,j}|\left\| e_{i,j} \right\|_{N+1}\\
& =
4 \sum_{j=0}^{N+1} \sum_{i=0}^\infty |x_{i,j}|\left\|e_{i,j} \right\|_{N+1}
=
4 \sum_{j=0}^{N+1} \left| \left(x_{i,j}\right)_{\vphantom{j}i=0}^\infty \right|_{N+1}\\
& =
4\left\|\left(x_{i,j}\right)_{i,j=0}^\infty\right\|_{N+1}. \end{align*} Since $s_{0,0}^\mathbb{N}$ is dense in $s^\mathbb{N}$, this implies that $T$ can be extended to a continuous operator on $s^\mathbb{N}$. We denote this extension by $T$.
Let now $x=\left(x_{i,j}\right)_{i,j=0}^\infty\in s_0^\mathbb{N}$. There is $M\in\mathbb{N}$ such that \[
x = \sum_{j=0}^M \sum_{i=0}^\infty x_{i,j}e_{i,j}. \] It is clear from Proposition \ref{prop:values} that $Tx\in s_0^\mathbb{N}$. Moreover, from \eqref{eq:cont} we get \begin{align*}
\vertiii{T\left(x_{i,j}\right)_{i,j=0}^\infty}_N
& =
\vertiii{\sum_{j=0}^M\sum_{i=0}^\infty x_{i,j}Te_{i,j}}_N
\leq
4 \sum_{j=0}^M \sum_{i=0}^\infty \left|x_{i,j}\right|\vertiii{e_{i,j}}_N
=
4 \vertiii{\sum_{j=0}^M \sum_{i=0}^\infty x_{i,j}e_{i,j}}_N.\qedhere \end{align*} \end{proof}
\section{The properties of the projection \texorpdfstring{$\tau_n$}{τₙ}} In this section we investigate the properties of the projection $\tau_n$ defined in Section \ref{sec:parameters}.
\begin{lemma} \label{lemma:values_tau} The projection $\tau_n\colon H_n\to H_n$ defined by \eqref{eq:tau_n} satisfies the equation \begin{equation*} \tau_n(e_j)
= \begin{cases}
e_j, & j\leq \pos(a_n,0)-1;\\
-A_{N_n,a_n}T^{j-\pos(a_n,0)}e_0, & \pos(a_n,0)\leq j \leq \pos(\Delta_{n+1},0)-1.
\end{cases} \end{equation*} \end{lemma} \begin{proof} Let $j\leq \pos(\Delta_{n+1},0)-1$. The vectors $e_0, Te_0,\ldots, T^je_0$ form a basis of $\Span\{e_0,\ldots,e_j\}$ and therefore \[
e_j = \sum_{i=0}^j \lambda_i T^i e_0 \] for some numbers $\lambda_0,\ldots,\lambda_j$.
\begin{enumerate}[leftmargin=2\parindent] \item If $j\leq \pos(a_n,0)-1$, then from \eqref{eq:tau_n} it follows that
\[
\tau_n \left(e_j\right)
= \tau_n \left(\sum_{i=0}^j\lambda_iT^ie_0 \right)
= \sum_{i=0}^j\lambda_iT^ie_0
= e_j.
\] \item If $\pos(a_n,0)\leq j \leq \pos(\Delta_{n+1},0)-1$, then from the definition of the operator $T$ (see \eqref{eq:operator}) it follows that \[
e_j=A_{N_n,a_n}\left(T^je_0-T^{j-\pos(a_n,0)}e_0\right). \] By \eqref{eq:pos} we have that \[
j-\pos(a_n,0) \leq \pos(\Delta_n,0) \leq \pos(a_n,0)-1. \] This and \eqref{eq:tau_n} give that \[
\tau_n \left(e_j\right)
= \tau_n \left(A_{N_n,a_n} \left(T^je_0-T^{j-\pos(a_n,0)} e_0\right)\right)
= -A_{N_n,a_n} T^{j-\pos(a_n,0)}e_0. \qedhere \] \end{enumerate} \end{proof}
The next proposition gives a continuity type estimate for the projection $\tau_n$ which will be important for us in the next section. \begin{proposition}
\label{prop:cont_tau}
Let $n\in \mathbb{N}$ and $x\in H_n$. Then $\vertiii{\tau_n x}_0\leq \vertiii{x}_{N_n+1}$. \end{proposition} \begin{proof} First we will show that for $j\leq \pos(\Delta_{n+1},0)-1$ we have $\vertiii{\tau_ne_j}_0\leq \vertiii{e_j}_{N_n+1}$. To do this we consider all possible cases. \begin{enumerate}[leftmargin=2\parindent] \item If $j\leq \pos(a_n,0)-1$, then from Lemma \ref{lemma:values_tau}
it follows that
\[
\tau_n(e_j) = e_j.
\]
Thus
\[
\vertiii{\tau_n(e_j)}_0 = \vertiii{e_j}_0\leq \vertiii{e_j}_{N_n+1}.
\] \item If $\pos(a_n,0)\leq j \leq \pos(\Delta_{n+1},0)-1$,
then from Lemma \ref{lemma:values_tau} it follows that
\[
\tau_n(e_j) = -A_{N_n,a_n} T^{j-\pos(a_n,0)} e_0.
\]
Using Proposition \ref{prop:continuity} we get that
\begin{align*}
\vertiii{\tau_n(e_j)}_0
= \vertiii{A_{N_n,a_n}T^{j-\pos(a_n,0)}e_0}_0
\leq A_{N_n,a_n} 4^{j-\pos(a_n,0)} \vertiii{e_0}_0
= A_{N_n,a_n}4^{j-\pos(a_n,0)}A_{0,0}.
\end{align*}
By \eqref{eq:pos} we have
\[
j-\pos(a_n,0) \leq \pos(\Delta_n,0).
\]
Therefore
\begin{equation}
\label{eq:1}
\vertiii{\tau_n(e_j)}_0
\leq A_{N_n,a_n} 4^{\pos(\Delta_n,0)}A_{0,0}.
\end{equation}
Since $\pos(a_n,0) \leq j \leq \pos(\Delta_{n+1},0)-1$, we get that
\begin{equation}
\label{eq:2}
\vertiii{e_j}_{N_n+1}
\geq \vertiii{e_{\pos(a_n,0)}}_{N_n+1}
= \vertiii{e_{a_n,0}}_{N_n+1}=A_{N_n+1,a_n}.
\end{equation}
By \eqref{eq:cond3} we have that
\[
A_{N_n,a_n} 4^{\pos(\Delta_n,0)} A_{0,0}
\leq A_{N_n+1,a_n}.
\]
Thus, putting now \eqref{eq:1} and \eqref{eq:2} together, we get
\[
\vertiii{\tau_n(e_j)}_0
\leq \vertiii{e_j}_{N_n+1}.
\] \end{enumerate}
To finish the proof, take $x\in H_n$. Then \[
x = \sum_{j=0}^{\pos(\Delta_{n+1},0)-1} x_j e_j \] for some numbers $x_j$. By the inequality which we have just proved we get \[
\vertiii{\tau_n(x)}_0
= \vertiii{\sum_{j=0}^{\pos(\Delta_{n+1},0)-1} x_j \tau_n(e_j)}_0
\leq \sum_{j=0}^{\pos(\Delta_{n+1},0)-1} |x_j| \cdot \vertiii{e_j}_{N_n+1}
= \vertiii{x}_{N_n+1}.\qedhere \] \end{proof}
\section{Heads} Recall that \[
H_n
= \Span\left\lbrace e_j:
j \leq \pos(\Delta_{n+1},0)-1
\right\rbrace. \] We define now the projection (truncation) \[
\pi_n\colon s^\mathbb{N}\to H_n \] by the formula \begin{equation} \label{def:pi_n}
\pi_n\left(\sum_{j=0}^\infty x_je_{j} \right)
= \sum_{j=0}^{\pos(\Delta_{n+1}, 0)-1} x_je_{j}. \end{equation} Recall that in \eqref{eq:compacts} we defined compact sets \begin{equation*}
K_n
= \left\lbrace
y \in H_n:
\vertiii{y}_0 \leq 1
\text{ and }
\vertiii{\tau_n(y)}_0\geq \frac{1}{2}
\right\rbrace. \end{equation*} \begin{proposition} \label{prop:head} Let $N\in \mathbb{N}$ and \[
0 \neq x = \sum_{i,j=0}^\infty x_{i,j}e_{i,j}\in s^\mathbb{N} \] be such that \begin{equation} \label{eq:head}
\left|(x_{i,k_0})_{\vphantom{j}i=0}^\infty\right|_0 = 2 \end{equation} for some $k_0\in \mathbb{N}$. Let $\left(n_l\right)_{l=0}^\infty$ be an increasing sequence of natural numbers such that $N_{n_l}=N$ for $l\in \mathbb{N}$. Then for $l$ large enough we have that \[
\pi_{n_l}(x)\in\bigcup_{m=1}^\infty mK_{n_l}. \] \end{proposition} \begin{proof} We can write \[
x = \sum_{j=0}^\infty x'_j e_j \] for some sequence $(x'_j)_{j=0}^\infty$ of scalars. Let \begin{equation}
\label{eq:im1}
C_{n_l}
= \vertiii{\pi_{n_l}(x)}_0
= \vertiii{\sum_{j=0}^{\pos(\Delta_{n_l+1}, 0)-1} x'_j e_j}_0. \end{equation} From \eqref{eq:head} it easily follows that \begin{equation}
\label{eq:im2}
C_{n_l}
\geq 1 \text{ for $l$ large enough}. \end{equation} Observe that \begin{align*}
\vertiii{\tau_{n_l}(\pi_{n_l}(x))}_0
& = \vertiii{\tau_{n_l}\left(\sum_{j=0}^{\pos(\Delta_{n_l+1}, 0)-1} x'_je_{j}\right)}_0
= \vertiii{\tau_{n_l}\left(\sum_{j=0}^{\pos(a_{n_l},0)-1} x'_je_{j}\right)
+ \tau_{n_l}\left(\sum_{j=\pos(a_{n_l},0)}^{\pos(\Delta_{n_l+1}, 0)-1} x'_je_{j}\right)}_0\\
& \overset{\mathpalette\mathclapinternal{\ref{lemma:values_tau}}}{=}
\vertiii{\sum_{j=0}^{\pos(a_{n_l},0)-1} x'_je_{j}
+ \tau_{n_l}\left(\sum_{j=\pos(a_{n_l},0)}^{\pos(\Delta_{n_l+1}, 0)-1} x'_je_{j}\right)}_0\\
& \overset{\mathpalette\mathclapinternal{\ref{prop:cont_tau}}}{\geq}
\vertiii{\sum_{j=0}^{\pos(a_{n_l},0)-1} x'_je_{j}}_0
- \vertiii{\sum_{j=\pos(a_{n_l},0)}^{\pos(\Delta_{n_l+1}, 0)-1} x'_je_{j}}_{N+1}\\
& \geq
\vertiii{\sum_{j=0}^{\pos(\Delta_{n_l+1}, 0)-1} x'_je_{j}}_0
- \vertiii{\sum_{j=\pos(a_{n_l},0)}^{\pos(\Delta_{n_l+1}, 0)-1} x'_je_{j}}_{0}
- \vertiii{\sum_{j=\pos(a_{n_l},0)}^{\pos(\Delta_{n_l+1}, 0)-1} x'_je_{j}}_{N+1}\\
& \geq
C_{n_l} - 2 \vertiii{\sum_{j=\pos(a_{n_l},0)}^{\pos(\Delta_{n_l+1}, 0)-1} x'_je_{j}}_{N+1}. \end{align*} It is clear that \[
\lim_{l\to\infty}
2 \vertiii{\sum_{j=\pos(a_{n_l},0)}^{\pos(\Delta_{n_l+1},0)-1} x'_je_{j}}_{N+1} = 0. \] Therefore, since $C_{n_l}\geq 1$ for $l$ large enough, from the above inequality we get that for $l$ large enough we have \begin{equation} \label{eq:im3}
\vertiii{\tau_{n_l}(\pi_{n_l}(x))}_0 \geq \frac{C_{n_l}}{2}. \end{equation} Thus from \eqref{eq:im1}, \eqref{eq:im2} and \eqref{eq:im3} it follows that for $l$ large enough \[
\pi_{n_l}(x)\in\bigcup_{m=1}^\infty mK_{n_l}.\qedhere \] \end{proof}
From the above proposition and Corollary \ref{cor:heads} we get: \begin{corollary}
\label{cor:last}
Let $N\in \mathbb{N}$ and
\[
0 \neq x = \sum_{i,j=0}^\infty x_{i,j}e_{i,j}\in s^\mathbb{N}
\]
be such that
\begin{equation*}
\label{eq:head1}
\left|(x_{i,k_0})_{\vphantom{j}i=0}^\infty\right|_0 = 2
\end{equation*}
for some $k_0\in \mathbb{N}$.
Let $(n_l)_{l=0}^\infty$ be an increasing sequence of natural numbers
such that $N_{n_l} = N$ for $l \in \mathbb{N}$. Then for every $l$ large enough
there exists a polynomial
\[
P(t)=\sum_{i=1}^{\pos(\Delta_{n_l+1},0)}c_it^i
\text{ with }
\sum_{i=1}^{\pos(\Delta_{n_l+1},0)}|c_i|\leq D_{n_l}
\]
and
\[
\vertiii{P(T)\pi_{n_l}(x)-e_0}_{N}\leq 3.
\] \end{corollary}
\section{Tails} Observe that (heuristically) if $j$ is much larger than $i$, then $e_{j+i}$ is always in the same column as $e_j$ or in one of the adjacent ones. More precisely: \begin{lemma} \label{lemma:snail} If $j\geq \pos(\Delta_{n+1},0)$ and $1\leq i\leq \pos(\Delta_{n+1},0)$, then for every $N\in \mathbb{N}$ we have \[
\|e_{i+j}\|_N \leq 2^i \|e_j\|_{N+1}. \] \end{lemma} \begin{proof} Observe that if $i,j$ are as above and $e_j = e_{p,q}$ for some $p$ and $q$, then $e_{i+j} = e_{k,l}$, where $k \in \mathbb{N}$ and $l \leq q+1$ (see the definition of the function $\Next$ and the condition \eqref{cond:pos_bn}). The desired inequality follows now from Lemma \ref{lemma:estimate}. \end{proof} For every $n \in \mathbb{N}$ we define the linear space \begin{equation}
\label{def:tails}
T_n = s^\mathbb{N} \ominus H_n. \end{equation} The elements of $T_n$ can be regarded as tails of vectors from $s^\mathbb{N}$.
\begin{proposition}
\label{prop:tails}
If $x\in T_n$ and $1 \leq i \leq \pos(\Delta_{n+1},0)$, then
\[
\|T^ix\|_{N_n}
\leq \frac{1}{D_n}\|x\|_{N_n+2}.
\] \end{proposition} \begin{proof} First we will show that for every $j\geq \pos(\Delta_{n+1},0)$ and $1 \leq i \leq \pos(\Delta_{n+1},0)$ we have \begin{equation}
\label{eq:tails1}
\|T^ie_j\|_{N_n}
\leq \frac{1}{D_n}\|e_j\|_{N_n+2}. \end{equation} We consider all possible cases for $j$. \begin{enumerate}[leftmargin=2\parindent] \item If $\pos(\Delta_{n+1},0)\leq j <\pos(s_{n+1},0)-\pos(\Delta_{n+1},0)$, then
by \eqref{eq:operator}
\[
e_j = \frac{1}{\alpha_j} T^j e_0.
\]
Therefore, once again by \eqref{eq:operator},
\[
T^i e_j
= \frac{1}{\alpha_j} T^{i+j} e_0
= \frac{\alpha_{i+j}}{\alpha_j} e_{i+j}.
\]
From \eqref{eq:alphas} we get
\[
\frac{\alpha_{i+j}}{\alpha_j} = \frac{1}{2^iD_n^i}.
\]
Thus, using Lemma \ref{lemma:snail}, we obtain
\[
\|T^ie_j\|_{N_n}
= \frac{1}{2^iD_n^i} \|e_{i+j}\|_{N_n}
\leq \frac{1}{D_n^i} \|e_j\|_{N_n+1}
\leq \frac{1}{D_n} \|e_j\|_{N_n+2}.
\] \item If $\pos(s_p,0)-\pos(\Delta_{n+1},0) \leq j < \pos(s_{p+1},0)-\pos(\Delta_{n+1},0)$
for some $p>n$ and $e_j = e_{k,l}$, where $l \geq N_n+2$,
then from the construction of the function $\Next$,
from the definition of $T$ and from \eqref{cond:pos_bn}
it follows that
\[
\|T^ie_j\|_{N_n} = 0
\]
and therefore \eqref{eq:tails1} holds. \item If $\pos(s_p,0)-\pos(\Delta_{n+1},0) \leq j < \pos(s_{p+1},0)-\pos(\Delta_{n+1},0)$
for some $p > n$ and $e_j = e_{k,l}$, where
$l \leq N_n+1$, then from the construction of the function $\Next$ and from
\eqref{eq:N_n} it follows that $k \geq b_{n+1}$.
We have
\begin{equation}
\label{eq:tails2}
\|e_j\|_{N_n+2}
= \|e_{k,l}\|_{N_n+2}
= A_{N_n+2,k}.
\end{equation}
Using Proposition \ref{prop:continuity} we get that
\begin{align*}
\|T^ie_j\|_{N_n}
& \leq \vertiii{T^ie_j}_{N_n}
\leq 4^i \vertiii{e_j}_{N_n}
\leq 4^{\pos(\Delta_{n+1},0)} \vertiii{e_j}_{N_n} \\
& \leq 4^{\pos(\Delta_{n+1},0)}\|e_j\|_{N_n+1}
= 4^{\pos(\Delta_{n+1},0)} A_{N_n+1,k}\\
& \overset{\mathpalette\mathclapinternal{\eqref{cond:2bn}}}{\leq}
\frac{A_{N_n+2,k}}{D_n}
\overset{\mathpalette\mathclapinternal{\eqref{eq:tails2}}}{=}
\frac{1}{D_n}\|e_j\|_{N_n+2}.
\end{align*}
\end{enumerate} To finish the proof take $x\in T_n$. Then \[
x = \sum_{j=\pos(\Delta_{n+1},0)}^\infty x_je_j \] and by the inequality we have just proven we have that \begin{equation*}
\|T^ix \|_{N_n}
\leq \frac{1}{D_n} \sum_{j=\pos(\Delta_{n+1},0)}^\infty |x_j|\|e_j\|_{N_n+2}
= \frac{1}{D_n} \|x\|_{N_n+2}.\qedhere \end{equation*} \end{proof}
\section{The operator \texorpdfstring{$T$}{T} has no non-trivial invariant subspaces} After all the hard work done in the previous sections we are ready to prove the main result of the paper. \begin{theorem*}
There exists a continuous operator $T \colon s^\mathbb{N} \to s^\mathbb{N}$ which has no non-trivial invariant subspaces. \end{theorem*} \begin{proof} Let $T \colon s^\mathbb{N} \to s^\mathbb{N}$ be the operator defined by \eqref{eq:operator}. From Proposition \ref{prop:values} it follows that $T$ is a perturbed weighted forward shift and therefore $e_0$ is a cyclic vector for $T$. We need to show that every non-zero $x \in s^\mathbb{N}$ is also a cyclic vector for $T$.
Let \[
0 \neq x = \left(x_{i,j}\right)_{i,j=0}^\infty \in s^\mathbb{N} \] be arbitrary and $k_0$ be the smallest integer for which \[
\left|(x_{i,k_0})_{\vphantom{j}i=0}^\infty\right|_0 \neq 0. \] Since any non-zero multiple of a cyclic vector is also a cyclic vector, we may assume with no loss of generality that \begin{equation*}
\left|(x_{i,k_0})_{\vphantom{j}i=0}^\infty\right|_0 = 2. \end{equation*} By Proposition \ref{prop:strategy}, in order to show that $x$ is a cyclic vector for $T$ we need to prove that for every $N \in \mathbb{N}$ there is a polynomial $P$ such that \[
\|P(T)x-e_0\|_N\leq 4. \] Fix $N \in \mathbb{N}$ and let $(n_l)_{l=0}^\infty$ be an increasing sequence of natural numbers such that $N_{n_l}=N$ for every $l\in \mathbb{N}$. For every $n \in \mathbb{N}$ let $\pi_n$ be the projection defined by \eqref{def:pi_n}. It is clear (see \eqref{def:tails}) that \[
x-\pi_n(x) \in T_n \] and that for every $n$ large enough we have \begin{equation*}
\|x-\pi_n(x)\|_{N+2} \leq 1. \end{equation*} Using this and Corollary \ref{cor:last} we can find $l$ large enough and a polynomial $\displaystyle P(t) = \sum_{i=1}^{\pos(\Delta_{n_l+1},0)}c_it^i$ such that \begin{equation}
\label{eq:end}
\sum_{i=1}^{\pos(\Delta_{n_l+1},0)}|c_i| \leq D_{n_l},
\quad
\vertiii{P(T)\pi_{n_l}(x)-e_0}_{N} \leq 3
\quad
\text{and}
\quad
\|x-\pi_{n_l}(x)\|_{N+2} \leq 1. \end{equation} We have \begin{align*}
\|P(T)x-e_0\|_N
& = \|P(T)\pi_{n_l}(x) + P(T)(x-\pi_{n_l}(x))-e_0\|_N\\
& \leq \|P(T)\pi_{n_l}(x)-e_0\|_N
+\|P(T)(x-\pi_{n_l}(x))\|_N\\
& \leq \vertiii{P(T)\pi_{n_l}(x)-e_0}_{N}+D_{n_l}
\max_{1 \leq i \leq \pos(\Delta_{n_l+1},0)} \|T^i(x-\pi_{n_l}(x))\|_N \\
& \overset{\mathpalette\mathclapinternal{\ref{prop:tails}}}{\leq}
\vertiii{P(T)\pi_{n_l}(x)-e_0}_{N}
+ D_{n_l}\frac{\|x-\pi_{n_l}(x)\|_{N+2}}{D_{n_l}}\\
& \overset{\mathpalette\mathclapinternal{\eqref{eq:end}}}{\leq}
3+1=4.\qedhere \end{align*} \end{proof}
\end{document} |
\begin{document}
\title{Periodic Solutions with Singularities in Two Dimensions in the $n$-body Problem}
\begin{abstract} Analytical methods are used to prove the existence of a periodic, symmetric solution with singularities in the planar 4-body problem. A numerical calculation and simulation are used to generate the orbit. The analytical method easily extends to any even number of bodies. Multiple simultaneous binary collisions are a key feature of the orbits generated. \end{abstract}
\textbf{KEYWORDS}: Simultaneous Binary Collision, 4-body Problem, Regularization, Singularity, $2n$-body Problem, Collision Singularity \\
\textbf{MSC}: 37N05, 65P99, 70F15, 70F16
\section{Introduction}
The $n$-body problem of celestial mechanics is one of the most important problems in the field of dynamical systems. The following differential equation \begin{equation}\label{Famous} m_i \ddot{\rho_i} = \sum_{j\neq i} -\frac{m_i m_j (\rho_i - \rho_j)}{{\vert \rho_i - \rho_j \vert}^3} \end{equation} gives a mathematical description of the planar $n$-body problem, where $\rho_i \in \mathbb{R}^2$ denotes the position of the $i$th body having mass $m_i$. All derivatives are taken with respect to time $t$. The potential energy of the system is given by \begin{equation}
U = \sum_{1\le i < j \le n} \frac{m_i m_j}{|\rho_i - \rho_j|}, \end{equation} and the kinetic energy is given by \begin{equation}
T = \frac{1}{2} \sum_{i=1}^n m_i |\dot{\rho_i}|^2. \end{equation}\\
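Equation \eqref{Famous} and the two energy expressions translate directly into code. The following sketch (a straightforward $O(n^2)$ evaluation; the function names are ours, not from the paper) computes the accelerations $\ddot{\rho}_i$ and the energies $U$ and $T$ for given planar positions, velocities and masses:

```python
import numpy as np

def accelerations(rho, m):
    """Right-hand side of the planar n-body equations:
    rho_i'' = sum_{j != i} -m_j (rho_i - rho_j) / |rho_i - rho_j|^3
    (the common factor m_i has been cancelled from both sides)."""
    n = len(m)
    acc = np.zeros_like(rho)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = rho[i] - rho[j]
                acc[i] += -m[j] * d / np.linalg.norm(d) ** 3
    return acc

def potential(rho, m):
    """U = sum_{i<j} m_i m_j / |rho_i - rho_j|."""
    n = len(m)
    return sum(m[i] * m[j] / np.linalg.norm(rho[i] - rho[j])
               for i in range(n) for j in range(i + 1, n))

def kinetic(v, m):
    """T = (1/2) sum_i m_i |rho_i'|^2."""
    return 0.5 * sum(mi * np.dot(vi, vi) for mi, vi in zip(m, v))
```

Note that such a direct evaluation breaks down at a collision singularity ($|\rho_i - \rho_j| \to 0$); following the orbit through the simultaneous binary collisions studied below requires the regularization techniques discussed in the references.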
Linearly stable symmetric periodic orbits are one aspect of the $n$-body problem. The elliptic Lagrangian triangular periodic orbits are linearly stable for certain values of eccentricity and the three masses \cite{MS},\cite{Ro1}. The Montgomery-Chenciner figure-eight orbit for three equal masses \cite{CM}, \cite{Moore} has been shown by Roberts \cite{GR} to be linearly stable by an innovative symmetry reduction technique he developed.\\
Singularities are another particular aspect of the $n$-body problem. Binary collisions, triple collisions, etc., are discussed at length in \cite{SM}. The Simultaneous Binary Collision (SBC) problem has been widely studied as well, both analytically and numerically. Sim\'{o} \cite{SI} showed that the block regularization in those cases of the $n$-body problem which reduce to one-dimensional problems is differentiable, but that the map passing from initial to final conditions (in suitable choices of transversal sections) is exactly $C^{8/3}$. Ouyang and Yan \cite{OY} give another approach to the regularization and analyze some properties of SBC solutions in the collinear four-body problem. Elbialy \cite{EL} studied the nature of the collision-ejection orbits associated with SBC. \\
Schubart \cite{JS} combined these two aspects to produce a singular linearly stable periodic orbit in the three-body equal mass collinear problem. The motion of the middle mass regularly alternates between binary collisions with each of the outer two masses. His work was subsequently extended to the unequal mass case by both H\'enon \cite{MH} and Hietarinta and Mikkola \cite{HM}. Sweatman \cite{SW} later extended this work to a four-body periodic solution in one dimension, with bodies alternating between SBC of the outer mass pairs and binary collision of the inner two masses.\\
In this paper, we present the analytic existence of a family of singular symmetric periodic planar orbits in the four-body equal mass problem. The initial conditions of these orbits are symmetric in both positions and velocities, which lead to periodic simultaneous binary collisions with each of the four masses alternating between collisions with its two nearest neighbors. Due to the abundance of symmetries present in the initial conditions, we can reduce the number of variables needed to just four -- two for representing position and two for representing momentum. In contrast to its one-dimensional counterparts, the proof for existence of this orbit is surprisingly simple. We begin in Section \textbf{\ref{Describe}} by giving a description of the proposed orbit and prove its existence. In Section \textbf{\ref{Numerical}} we present the numerical methods used to produce the initial conditions that will lead to this orbit. In Section \textbf{\ref{Family}}, we consider variants on this orbit, giving a family of orbits with singularities for an even number of equal masses.\\
Since the initial submission of this paper, we have been doing additional work with Dr. Lennard Bakker (Brigham Young University) and Dr. Gareth Roberts (College of the Holy Cross) implementing Roberts' linear stability technique as presented in \cite{GR}. After precisely defining the symmetries that are present in the regularized coordinates, it is shown that the group of symmetries in the orbit is isomorphic to the dihedral group $D_4$. Further, as a consequence of Roberts' technique, we have shown that the four-body planar orbit presented in this paper is linearly stable \cite{BORSY}. Further work has also been done on orbits in this family with alternating unequal masses \cite{BOYS}. Rather than a single mass parameter, the bodies have masses $m_1$, $m_2$, $m_1$, $m_2$ as numbered moving counterclockwise through the plane. Since some symmetries have been lost by this change in masses, it is necessary to choose two initial condition parameters as well as two initial velocities. Although numerically this is not a difficult problem, an analytical technique will require much more work.\\
\section{The Proposed Orbit} \subsection{Analytical Description}\label{Describe}
\begin{center} \myfig{icondsorbit.JPG}{1} \mycaption{On the left, we illustrate the initial conditions leading to the four-body two-dimensional periodic SBC orbit. On the right, the orbit is shown.} \end{center}
Initially we focused on finding a symmetric, periodic SBC orbit for four equal masses in two dimensions. Without loss of generality, we assume that the orbit begins with the four bodies lying at $(\pm 1,0)$ and $(0, \pm1)$ with initial velocities $(0, \pm v)$ and $(\pm v, 0)$, respectively, where $v \in (0, +\infty)$. For convenience throughout the rest of the paper, we number the bodies 1 to 4 as in Figure 1.\\
The singularity of SBC in this problem is not essential. For a better understanding of the behavior of the motion of the bodies in a neighborhood of a collision, the standard technique is to make a change of coordinates and rescale time. In the new coordinates, the orbits which approach collision can be extended across the collision in a smooth manner with respect to the new time variable. This technique is called \textit{regularization}. In our problem, the regularization describes the behavior of the bodies approaching and escaping collisions, similar to the collisions of billiard balls.\\
Due to the symmetry of the initial conditions and the equations governing the motion of the bodies, the symmetry that is present in the initial conditions is maintained in the regularized sense.\\
\begin{theorema} Let $E = T - U$ be the total energy and $m$ be the mass for each of the four bodies. For any $E<0$ and $m>0$, there exists a symmetric, periodic, four-body orbit with SBC in $\mathbb{R}^2$. \end{theorema}
Without loss of generality, we can assume $m=1$ and the initial positions are as illustrated in Figure 1. The proof will be given at the end of this section.\\
Let $t_0$ be the time of first SBC. For $t \in [0, t_0)$, let the coordinate of body 1 be $(x_1, x_2)$. By symmetry, the coordinates of bodies 2, 3, and 4 are $(x_2, x_1)$, $(-x_1, -x_2)$ and $(-x_2, -x_1)$, respectively. Using equation (\ref{Famous}), the acceleration of a body at point $(x_1,x_2)$ is given by: \begin{equation}\label{Newtons} (\ddot{x_1}, \ddot{x_2}) = - \bigg[ \frac{(x_1-x_2,x_2-x_1)}{(2(x_1-x_2)^2)^\frac{3}{2}} + \frac{(2x_1, 2x_2)}{(4x_1^2+4x_2^2)^\frac{3}{2}} + \frac{(x_1+x_2, x_1+x_2)}{(2(x_1+x_2)^2)^\frac{3}{2}} \bigg] \end{equation}
We now perform the regularization of the system. The system has the Hamiltonian: \begin{equation} H=\frac{1}{8} (w_1^2+w_2^2)- \frac{\sqrt{2}}{x_1-x_2}-\frac{\sqrt{2}}{x_1+x_2}-\frac{1}{\sqrt{x_1^2+x_2^2}} \end{equation} where $w_1=4 \dot{x}_1$ and $w_2=4 \dot{x}_2$ are the conjugate momenta to $x_1$ and $x_2$. Note that SBC happens when $x_1= \pm x_2$. We introduce a new set of coordinates: $$q_1=x_1-x_2, \qquad q_2= x_1+x_2.$$ Their conjugate momenta $p_i$ are given by using a generating function $F=(x_1-x_2) p_1 +(x_1+x_2) p_2$: $$w_1= p_1+p_2, \qquad w_2= p_2-p_1.$$ The Hamiltonian corresponding to the new coordinate system is \begin{equation} H= \frac{1}{4}(p_1^2+p_2^2)- \frac{\sqrt{2}}{q_1}-\frac{\sqrt{2}}{q_2} -\frac{\sqrt{2}}{\sqrt{q_1^2+q_2^2}}. \end{equation}
Following the work of Sweatman \cite{SW}, we introduce another canonical transformation: $$q_i=Q_i^2, \qquad P_i=2 Q_i p_i \qquad(i=1,2)$$ with $Q_i > 0$. We also introduce a new time variable $s$, which satisfies $\frac{d t}{d s}=q_1 q_2$. This produces a regularized Hamiltonian in extended phase space: $$\Gamma =\frac{d t}{d s} (H-E)$$ \begin{equation}\label{6} = \frac{1}{16} (P_1^2Q_2^2 + P_2^2 Q_1^2)- \sqrt{2}(Q_1^2+Q_2^2) -\frac{\sqrt{2}Q_1^2 Q_2^2 }{\sqrt{Q_1^4+ Q_2^4} } -Q_1^2 Q_2^2 E \end{equation} where $E$ is the total energy of the Hamiltonian $H$.\\
The regularized Hamiltonian gives the following differential equations of motion: \begin{equation}\label{1} Q_1' = \frac{1}{8}P_1 Q_2^2 \end{equation} \begin{equation}\label{2} Q_2' = \frac{1}{8}P_2 Q_1^2 \end{equation} \begin{equation}\label{3} P_1' = -\frac{1}{8}P_2^2 Q_1 + 2\sqrt{2}Q_1+\frac{2\sqrt{2}Q_1 Q_2^2}{\sqrt{Q_1^4+Q_2^4}}-\frac{2\sqrt{2}Q_1^5 Q_2^2}{(Q_1^4+Q_2^4)^\frac{3}{2}} + 2EQ_1Q_2^2 \end{equation} \begin{equation}\label{4} P_2' = -\frac{1}{8}P_1^2 Q_2 + 2\sqrt{2}Q_2+\frac{2\sqrt{2}Q_2 Q_1^2}{\sqrt{Q_1^4+Q_2^4}}-\frac{2\sqrt{2}Q_2^5 Q_1^2}{(Q_1^4+Q_2^4)^\frac{3}{2}} + 2EQ_2Q_1^2 \end{equation} with initial conditions \begin{equation}\label{5} Q_1(0)=1, \quad Q_2(0)=1, \quad P_1(0)=-4v, \quad P_2(0)=4v \end{equation} where derivatives are with respect to $s$, and $E$ is the total energy of the Hamiltonian $H$.\\
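As a sanity check on equations (\ref{1}) through (\ref{4}), the regularized system can be integrated numerically; since $\Gamma$ is a first integral, it should stay near its initial value, which is $0$ on the orbit. The following Python sketch (ours, not part of the original work; the step size and the sample speed $v$ are arbitrary choices) implements the right-hand side and verifies this along a few thousand RK4 steps:

```python
import math

SQRT2 = math.sqrt(2.0)

def rhs(y, E):
    # Right-hand side of the regularized equations of motion.
    Q1, Q2, P1, P2 = y
    r = math.sqrt(Q1**4 + Q2**4)
    dQ1 = P1 * Q2**2 / 8.0
    dQ2 = P2 * Q1**2 / 8.0
    dP1 = (-P2**2 * Q1 / 8.0 + 2*SQRT2*Q1
           + 2*SQRT2*Q1*Q2**2 / r - 2*SQRT2*Q1**5*Q2**2 / r**3
           + 2*E*Q1*Q2**2)
    dP2 = (-P1**2 * Q2 / 8.0 + 2*SQRT2*Q2
           + 2*SQRT2*Q2*Q1**2 / r - 2*SQRT2*Q2**5*Q1**2 / r**3
           + 2*E*Q2*Q1**2)
    return [dQ1, dQ2, dP1, dP2]

def gamma(y, E):
    # Regularized Hamiltonian Gamma; it vanishes along the orbit.
    Q1, Q2, P1, P2 = y
    return ((P1**2*Q2**2 + P2**2*Q1**2)/16.0 - SQRT2*(Q1**2 + Q2**2)
            - SQRT2*Q1**2*Q2**2/math.sqrt(Q1**4 + Q2**4) - E*Q1**2*Q2**2)

def rk4_step(y, h, E):
    # One classical fourth-order Runge-Kutta step.
    k1 = rhs(y, E)
    k2 = rhs([y[i] + h/2*k1[i] for i in range(4)], E)
    k3 = rhs([y[i] + h/2*k2[i] for i in range(4)], E)
    k4 = rhs([y[i] + h*k3[i] for i in range(4)], E)
    return [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(4)]

v = 0.5                       # sample initial speed (our choice)
E = 2*v**2 - 2*SQRT2 - 1      # total energy H at t = 0
y = [1.0, 1.0, -4*v, 4*v]     # initial conditions from the text
for _ in range(1000):
    y = rk4_step(y, 1e-3, E)
print(abs(gamma(y, E)))       # should remain near zero
```

Because the equations are regularized, the integration passes smoothly through binary collisions ($Q_1 = 0$ or $Q_2 = 0$) with respect to the new time variable $s$.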
\begin{theorem}\label{thm1} Let $s_0$ be the time of the first SBC in the regularized system. Then $s_0$ is a continuous function with respect to the initial velocity $v$. Furthermore, $$p_2 (t_0)= \frac{P_2 (s_0, v)}{2 Q_2 (s_0, v)}$$ is also continuous with respect to $v$. \end{theorem}
\begin{proof} At the first SBC, $Q_1(s_0)= 0$, and $Q_2(s_0) = \sqrt{q_2}= \sqrt{x_1+x_2} > 0$. Our goal is to show that $p_2 (t_0)$ is a continuous function with respect to $v$. \\
Because $\Gamma=0$ at $s=s_0$, $P_1(s_0)= -4 \sqrt[4]{2}$ from (\ref{6}). Since $\Gamma$ is regularized, the solution $P_i=P_i(s, v)$ and $Q_i=Q_i(s, v)$ are continuous functions with respect to the two variables $s$ and $v$. At time $s=s_0$,
$$0= Q_1(s_0(v), v).$$ To apply the implicit function theorem, we need to show that $$\frac{\partial Q_1}{\partial s} (s_0, v) \neq 0.$$ From (\ref{1}) $$\frac{\partial Q_1}{\partial s} (s_0, v)= \frac{1}{8} P_1 Q_2^2 \mid_{(s_0, v)} =- \frac{1}{2}\sqrt[4]{2} Q_2(s_0) ^2 <0. $$ So $s_0=s_0(v) $ is a continuous function of $v$. Therefore both $P_2(s_0, v)$ and $Q_2(s_0, v)$ are continuous functions of $v$. Further, since $Q_2(s_0, v)>0$, $p_2(t_0)$ is also a continuous function of $v$. \end{proof}
\begin{theorem}\label{thm2} There exists a $v=v_0$ such that $\dot{x}_1(t_0)+ \dot{x}_2 (t_0)=\frac{1}{2}p_2 (t_0) = 0$, where $t_0$ is the time of the first SBC, i.e. the net momentum of bodies 1 and 2 at the first SBC is 0. \end{theorem}
The outline of this proof is as follows: We will show that there exist $v_1$ and $v_2$ such that $\dot{x}_1+\dot{x}_2$ is negative at SBC for $v=v_1$ and positive at SBC for $v=v_2$. The result then follows by Theorem \ref{thm1}. \\
\begin{proof} Consider Newton's equation before the time of the first SBC:
\begin{equation}\label{dx1} \ddot{x}_1=\frac{x_2-x_1}{2\sqrt{2} (x_1-x_2)^3 } - \frac{2x_1}{8 (x_1^2+x_2^2)^{3/2} } - \frac{x_1+x_2}{2 \sqrt{2} (x_1+x_2)^3 } , \end{equation}
\begin{equation}\label{dx2}
\ddot{x}_2=\frac{x_1-x_2}{2\sqrt{2} (x_1-x_2)^3 } - \frac{2x_2}{8 (x_1^2+x_2^2)^{3/2} } - \frac{x_1+x_2}{2 \sqrt{2} (x_1+x_2)^3 }. \end{equation} Therefore, \begin{equation}\label{y} \ddot{x}_1+ \ddot{x}_2= - \frac{x_1+x_2}{4 (x_1^2+x_2^2)^{3/2} } - \frac{1}{ \sqrt{2} (x_1+x_2)^2 } <0, \end{equation} which means $\dot{x}_1+ \dot{x}_2$ is decreasing with respect to $t$. \\
At the initial time $t=0$, $x_1=1$, $x_2=0$, $\dot{x}_1=0$, and $\dot{x}_2= v$. Note that for $v \in (0, \infty)$, there is no triple collision or total collision for $t \in [0, t_0]$, where $t_0$ is the time of the first SBC, as a triple collision implies total collapse by symmetry. Also, from the initial time to $t_0$, $0 \leq x_2 \leq x_1 \leq 1$, $0<x_1+x_2<2$, and $x_1^2+x_2^2<4$.\\
Let $y(t)= x_1(t)+ x_2(t)$. Then for any choice of $v$, $ \ddot{y}(t)<0$ and $ 0<y(t)<2$ hold for any $t \in [0, t_0]$. In other words, $\dot{y}(t)$ is decreasing with respect to $t$.\\
First, we will show that there exists $v_1$ such that $\dot{y} (t_0) <0$. When $v=0$ the four bodies form a central configuration and, as a consequence, the motion of the four bodies leads to total collapse. Consider the time interval $t \in [0, t_0/2)$. In this interval, the differential equations (\ref{dx1}) and (\ref{dx2}) have no singularity, and $\ddot{y}(t_0/2)<0$. By continuous dependence on initial conditions, $\dot{y}(t_0/2)= \dot{x}_1 (t_0/2) + \dot{x}_2 (t_0/2) $ is a continuous function with respect to the initial velocity $v$. When $v=0$, $ \dot{x}_1 (t_0/2) <0 , \ \dot{x}_2 (t_0/2) = 0$, which gives $\dot{y}(t_0/2)<0 $. Therefore, there exists a $\delta>0$, such that $\dot{y}(t_0/2)<0$ holds for any $v \in (-\delta, \delta)$. \\
Choose $v_1= \delta/2$, then $\dot{y}(t_0/2)<0$. Because $ \dot{y}(t)$ is decreasing with respect to $t$, $\dot{y}(t_0) \leq \dot{y}(t_0/2)<0.$\\
Next we will show that there exists $v_2$ such that $ \dot{y} (t_0) >0$. Observe that $$\lim_{v \to \infty} y(t_0)= \lim_{v \to \infty} \big( x_1(t_0)+ x_2(t_0) \big) =2 $$ and $$\lim_{v \to \infty} \dot{y}(t_0) = \infty.$$ Therefore there exists some positive value $v_2$ such that $\dot{y}(t_0)>0$. \end{proof}
\begin{proof}[Proof of the Main Theorem]
From Theorem \ref{thm2}, we know there exists an initial velocity $v=v_0$ such that $\dot{x}_1(t_0)+\dot{x}_2(t_0)=0$. Let $\{P_1, P_2, Q_1, Q_2\}$ for $s \in [0, s_0]$ be the solution in the regularized system corresponding to the orbit from $t=0$ to $t=t_0$. Following collision, consider the behavior of the first and second bodies. Assume their velocity was reflected about the $y=x$ line in the plane. In the new coordinate system, this corresponds to a new set of functions $$\{-P_1(2s_0-s), -P_2(2s_0-s), -Q_1(2s_0-s), -Q_2(2s_0-s)\}$$ for $s \in [s_0, 2s_0]$. We can easily check that $$\{-P_1(2s_0-s), -P_2(2s_0-s), -Q_1(2s_0-s), -Q_2(2s_0-s)\}$$ for $s \in [s_0, 2s_0]$ is also a set of solutions for equations (\ref{1}) through (\ref{4}) with initial conditions at $s=s_0$. Also, $\{P_1(s), P_2(s), Q_1(s), Q_2(s)\}$ for $s \in [s_0, 2s_0]$ satisfies equations (\ref{1}) through (\ref{4}) with the same initial conditions at $s=s_0$. Note that equations (\ref{1}) through (\ref{4}) with initial conditions at $s=s_0$ have a unique solution for any choice of $v \in (0, \infty)$. Then by uniqueness, the orbit for $s\in [s_0, 2s_0]$ must be the same as the orbit for $s \in[0, s_0]$ in reverse, i.e. $$P_i(s)= -P_i(2s_0-s), Q_i(s)= -Q_i(2s_0 -s)$$ for $s \in[0, s_0]$. Therefore at time $s=2s_0$, bodies 1 and 2 will have returned to their initial positions with velocities $(0,-v)$ and $(-v, 0)$ respectively. Similarly, at time $s=2s_0$, bodies 3 and 4 will have also returned to their initial positions with velocities $(0, v)$ and $(v,0)$ respectively.\\
Next, we use symmetry and uniqueness to show the orbit from $s=2s_0$ to $s=4s_0$ and the orbit from $s=0$ to $s=2s_0$ will be symmetric with respect to the $y-$axis. Compare the motion of body 2 and body 3 from $s=2s_0$ to $s=4s_0$ with the motion of body 2 and body 1 from time $s=0$ to $s=2s_0$. The initial conditions of body 3 at $s=2s_0$ and the initial conditions of body 1 at $s=0$ are symmetric with respect to the $y-$axis. Also the initial conditions of body 2 at $s=2s_0$ and the initial conditions of body 4 at $s=0$ are symmetric with respect to the $x-$axis. Therefore, by uniqueness, the orbit of bodies 2 and 3 from $s=2s_0$ to $s=4s_0$ and the orbit of bodies 1 and 2 from $s=0$ to $s=2s_0$ must be symmetric with respect to the $y-$axis. Therefore, the orbit of bodies 1 and 4 from $s=2s_0$ to $s=4s_0$ and the orbit of bodies 3 and 4 from $s= 0$ to $s=2s_0$ are symmetric with respect to the $y-$axis. Hence, at $s=4s_0$, the positions and velocities of the four bodies are exactly the same as at $s=0$. Therefore, the orbit is periodic with period $s=4s_0$. \end{proof}
It is worth noting here that the previous proof implies a time-reversing symmetry for the periodic orbit. This provides further evidence for the conjecture made by Roberts \cite{GR}, stating that linearly stable periodic orbits in the equal mass $n$-body problem must have a time-reversing symmetry. (Linear stability of this orbit is shown in \cite{BORSY}.)
\subsection{Numerical Method}\label{Numerical} As we are searching for a periodic orbit of the $n$-body problem, we require the value of the Hamiltonian to be negative. Using the initial positions of the four bodies described earlier, it is not hard to find the potential energy at $t=0$: $$U = {2\sqrt{2} + 1}.$$ Then, acting under the negative Hamiltonian assumption:
$${2\sqrt{2}+1} \geq \sum_{i=1}^n \frac{m_i |v_i|^2}{2}.$$ Since all masses are equal, if we require that the velocities of each body are equal in magnitude, we obtain: \begin{equation} v_{max} = \sqrt{\frac{2\sqrt{2}+1}{2}} \end{equation} with $v_{max}$ defined to be the value of $v$ such that the value of the Hamiltonian is zero. Define $\theta = \frac{v}{v_{max}}$. This parameter is used in the numerical algorithm. \\
At this point it becomes necessary to find out just how much kinetic energy is required to obtain the periodic orbit. Since we know suitable bounds on the velocity parameter ($\theta \in (0, 1)$), we can search the interval numerically. We use an $n$-body simulator with the initial positions previously described. The simulation is run until one SBC occurs. For simplicity, we consider only the collision between the first and second bodies in the first quadrant. Summing their velocities immediately before the collision gives a vector running along the line $y=x$ (due to symmetry), with both components having the same sign. The magnitude of this vector is given in Figure 2. Negative magnitudes represent vectors with both components less than zero.
\begin{center} \myfig{R01N04.jpg}{.75} \mycaption{The magnitude of the net velocity of the first two bodies (vertical axis) at the time of collision for various values of $\theta$ (horizontal axis)}. \end{center}
Next, a standard bisection method is used to find the amount of energy required to cause the net velocity at collision to be zero. Using the initial interval $\theta \in [0, 1]$ and iterating to a tolerance of $10^{-14}$, the correct value of $\theta$ was found to be $\theta = 0.46449539554694$. \\
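The bisection driver itself is standard; a minimal Python sketch (ours) is below, where the net-velocity-at-collision function supplied by the $n$-body simulator is replaced by a hypothetical monotone stand-in so that the snippet is self-contained:

```python
def bisect(f, lo, hi, tol=1e-14):
    # Standard bisection on [lo, hi], assuming f(lo) < 0 < f(hi).
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical stand-in for the net velocity of bodies 1 and 2 at the
# first SBC as a function of theta = v / v_max; in practice this value
# comes from running the n-body simulation up to the collision.
theta_star = 0.46449539554694
f = lambda theta: theta - theta_star

print(bisect(f, 0.0, 1.0))  # converges to theta_star
```

In the actual computation, evaluating $f$ at each midpoint means running the simulator to the first SBC and recording the signed magnitude of the net velocity (Figure 2 shows this quantity changes sign exactly once on $(0,1)$).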
It is worth noting that neither the proof of existence nor the numerical method guarantees the uniqueness of this orbit. Numerical simulations demonstrate that for values of $\theta$ near the correct value, the orbit remains for a significant length of time with the paths of the bodies lying in a ``fattened'' annular region roughly the shape of the original orbit. Near the extreme ends, the orbit experiences a near total collapse and falls apart rapidly. Although we do not pursue these questions here, a more thorough study of the dynamics could prove to be quite interesting.
\section{Variants}\label{Family}
\subsection{Orbits of more than four bodies}
The same technique can be adopted to find similar orbits for any arbitrary even number $n$. A key feature of these orbits will be higher numbers of simultaneous binary collisions. For a given value of $n$, initial positions are given by spacing the bodies evenly about the unit circle. The potential energy (and the value of $v_{max}$) is found numerically by iterating over each pair of planets and summing the reciprocal of the distances between them. (Recall that all $m_i = 1$.) Velocities are then assigned to the bodies in alternating counter-clockwise and clockwise directions, initially tangent to the circle. Again we consider the collision between the first and the second bodies. Although the net velocity of the two at collision will not lie along the $y=x$ line, the components of this vector will both have the same sign. The magnitudes of the net velocity between the first two bodies at initial collision are shown in Figure 3 for various values of $n$. Lower curves in the graph correspond to higher values of $n$. Again, negative magnitudes correspond to both components being negative. \\
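A minimal sketch (ours) of this setup step: place the $n$ unit masses evenly on the unit circle, sum reciprocal pairwise distances to get $U$, and derive $v_{max}$ from $U \ge \sum_i m_i|v_i|^2/2 = n v^2/2$. For $n=4$ it recovers $U = 2\sqrt{2}+1$:

```python
import math

def initial_energy_bound(n):
    # Place n unit masses evenly on the unit circle and compute the
    # potential energy U at t = 0 by summing reciprocal pairwise distances.
    pos = [(math.cos(2*math.pi*k/n), math.sin(2*math.pi*k/n))
           for k in range(n)]
    U = sum(1.0 / math.hypot(pos[i][0]-pos[j][0], pos[i][1]-pos[j][1])
            for i in range(n) for j in range(i+1, n))
    # Negative-energy assumption: U >= n v^2 / 2 when all speeds equal v.
    v_max = math.sqrt(2.0 * U / n)
    return U, v_max

U4, vmax4 = initial_energy_bound(4)
print(U4)  # equals 2*sqrt(2) + 1 for n = 4
```

The parameter $\theta = v / v_{max}$ is then searched over $(0,1)$ exactly as in the four-body case.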
\begin{center} \myfig{R01NVV.jpg}{.75} \mycaption{Curves showing the magnitude of the net velocity of the first two bodies (vertical axis) at the time of collision for various values of $\theta$ (horizontal axis) for $n=4, 6, 8, 10, 12$.} \end{center}
Pictures of the orbit for $n=6$ and $n=8$ are shown in Figure 4. It is readily seen that as $n$ increases, the shape of the orbit more closely approximates a circle.
\begin{center} \myfig{Orbit23.JPG}{.75} \mycaption{The six- and eight-body two-dimensional periodic SBC orbits.} \end{center}
\noindent{\textbf{Acknowledgements}}
Skyler Simmons would like to thank Dr. Kening Lu, who recommended him for this project, and gives special thanks to Jason Grout for hours of discussion that helped point his mathematical career in the right direction. We are also all indebted to the advice given by Dr. Lennard Bakker and all the referees during the revision process.\\
\end{document}
\begin{document}
\selectlanguage{english}
\title{Balanced Combinations of Solutions\\ in Multi-Objective Optimization}
\author{ Christian Glaßer \hspace*{6mm} Christian Reitwießner \hspace*{6mm} Maximilian Witek\\[5.7mm] {Julius-Maximilians-Universität Würzburg, Germany} \\ {\tt \{glasser,reitwiessner,witek\}@informatik.uni-wuerzburg.de}}
\date{}
\maketitle
\begin{abstract}
\noindent For every list of integers $x_1, \ldots, x_m$ there is some $j$ such that $x_1 + \cdots + x_j - x_{j+1} - \cdots - x_m \approx 0$.
So the list can be nearly balanced and for this we only need one alternation between addition and sub\-traction.
But what if the $x_i$ are $k$-dimensional integer {\em vectors}?
Using results from topological degree theory we show that balancing is still possible, now with $k$ alternations.
This result is useful in multi-objective optimization,
as it allows a polynomial-time computable balance of two alternatives with conflicting costs.
The application to two multi-objective optimization problems yields the following results:
\begin{itemize}
\item
A randomized $\nicefrac{1}{2}$-approximation
for multi-objective maximum asymmetric traveling salesman,
which improves and simplifies the best known approximation for this problem.
\item A deterministic $\nicefrac{1}{2}$-approximation for
multi-objective maximum weighted satisfiability.
\end{itemize}
\end{abstract}
\section{Introduction} \label{sec_intro}
\paragraph{Balancing Sums of Vectors.} Suppose we are given a sequence of goods $g_1, \ldots, g_m$, each of which has a value, a weight, and a size. Is it possible to distribute the goods on two trucks such that the loads are nearly the same with respect to value, weight, and size? We show that this is always possible by a very easy partition: For suitable indices $i,j,k,l$, assign $g_i, g_{i+1}, \ldots, g_{j}, \; g_{k}, g_{k+1}, \ldots, g_l$ to the first truck and the remaining goods to the second one. In general, if the goods have $2k$ criteria (value, weight, size, \ldots), then there exist $k$ intervals of goods such that the goods inside and the goods outside of the intervals are nearly equivalent with respect to {\em all} criteria.
More formally, let $x_1, \ldots, x_m \in \mathbb{N}^{2k}$ be {\em vectors of natural numbers} that represent the criteria of each good, and let $z \in \mathbb{N}^{2k}$ be an upper bound for these vectors (i.e., $x_i \le z$ for all $i$). Lemma~\ref{lem:real:balancing} provides intervals $I_1, \ldots, I_k \subseteq \mathbb{N}$ such that for $I = \bigcup_{i=1}^k I_i$, \[
-4kz \;\; \le \;\; \sum_{i \in I} x_i - \sum_{i \notin I} x_i \;\; \le \;\;
4kz, \] where the $\le$ hold with respect to each component. The same is true if $x_1,\ldots,x_m \in \mathbb{Z}^{2k}$ are {\em vectors of integers}, where $-z \le x_i \le z$ for all $i$ (Corollary~\ref{coro:combinatorial:integer}). The proofs of these balancing results are based on the Odd Mapping Theorem, a result from topological degree theory, which we apply in a discrete setting. The discretization is responsible for the term $4kz$, which is caused by a rounding error that unavoidably occurs at the boundaries of the intervals $I_1, \ldots, I_k$.
The simplicity of the desired partition (i.e., a union of $k$ intervals) is important for the application of our balancing results. Algorithmically, it means that for fixed dimension $2k$, the right choice for the intervals $I_1, \ldots, I_k$ can be found by exhaustive search in time polynomial in $m$.
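For illustration, here is a Python sketch (ours, with arbitrary sample data) of this exhaustive search in the smallest case $2k = 2$, where a single interval $I = \{i, \ldots, j-1\}$ suffices and the guaranteed componentwise bound is $4z$:

```python
def balance_by_interval(xs):
    # xs: list of 2-dimensional integer vectors (the case k = 1).
    # Exhaustive O(m^2) search over all single intervals I = {i, ..., j-1};
    # the integer balancing corollary guarantees some I whose
    # componentwise imbalance is at most 4z.
    m = len(xs)
    total = [sum(x[d] for x in xs) for d in range(2)]
    best, best_val = None, None
    for i in range(m + 1):
        for j in range(i, m + 1):
            inside = [sum(x[d] for x in xs[i:j]) for d in range(2)]
            # sum over I minus sum over the complement:
            imbalance = [2*inside[d] - total[d] for d in range(2)]
            val = max(abs(c) for c in imbalance)
            if best_val is None or val < best_val:
                best, best_val = (i, j), val
    return best, best_val

xs = [(3, -1), (2, 4), (-5, 2), (1, 1), (4, -3), (-2, 5), (0, 2)]
z = max(max(abs(x[0]), abs(x[1])) for x in xs)  # componentwise bound
(i, j), val = balance_by_interval(xs)
print(val <= 4 * z)  # True
```

For general $2k$, the same search runs over all $k$-tuples of interval endpoints, which is still polynomial in $m$ for fixed $k$.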
\paragraph{Multi-Objective Optimization.} Many real-life optimization problems have multiple objectives that cannot be easily combined into a single value. Thus, one is interested in solutions that are good with respect to all objectives at the same time. For conflicting objectives we cannot hope for a single optimal solution, but there will be trade-offs. The {\em Pareto set} captures the notion of optimality in this setting. It consists of all solutions that are optimal in the sense that there is no solution that is at least as good in all objectives and better in at least one objective. So the Pareto set contains all optimal decisions for a given situation. For a general introduction to multi-objective optimization we refer to the survey by Ehrgott and Gandibleux \cite{eg00} and the textbook by Ehrgott \cite{ehr05}.
For many problems, the Pareto set has exponential size and hence cannot be computed in polynomial time. Regarding the approximability of Pareto sets, Papadimitriou and Yannakakis \cite{PY00} show that every Pareto set has a $(1-\varepsilon)$-approximation of size polynomial in the size of the instance and $\nicefrac{1}{\varepsilon}$ (for the formal definition of approximation see section~\ref{sec:multi_defs}). Hence, even though a Pareto set might be an exponentially large object, there always exists a polynomial-size approximation. This clears the way for a general investigation of the approximability of Pareto sets of multi-objective optimization problems.
In general, inapproximability and hardness results directly translate from single-objective optimization problems to their multi-objective variants. On the other hand, existing approximation algorithms for single-objective problems cannot always be used for multi-objective approximation. Using our balancing results we demonstrate a translation of single-objective approximation ideas to the multi-objective case: We obtain a randomized $\nicefrac{1}{2}$-approximation for multi-objective maximum asymmetric TSP and a deterministic $\nicefrac{1}{2}$-approximation for multi-objective maximum weighted satisfiability.
\paragraph{Traveling Salesman Problem.} The (single-objective) maximum asymmetric traveling salesman problem (\pn{\bf MaxATSP}, for short) is the optimization problem where on input of a complete directed graph with edge weights from $\mathbb{N}$ the goal is to find a Hamiltonian cycle of maximum weight. Engebretsen and Karpinski \cite{EK01} show that \pn{MaxATSP}\ cannot be $(319/320+\varepsilon)$-approximated (unless P$=$NP). In 1979, Fisher, Nemhauser and Wolsey \cite{FNW79} gave a $\nicefrac{1}{2}$-approximation algorithm for \pn{MaxATSP}\ (remove the lightest edge from each cycle of a maximum cycle cover and connect the remaining parts to a Hamiltonian cycle). Since then, many improvements were achieved and the currently best known approximation ratio of $\nicefrac{2}{3}$ for \pn{MaxATSP}\ is given by Kaplan et al.\ \cite{KLS+05}.
The $k$-objective variant {\boldmath$k${\bf-}}\pn{\bf MaxATSP}\ is defined analogously with edge weights from $\mathbb{N}^k$. The hardness results for \pn{MaxATSP}\ directly translate to its multi-objective variant (just set all but one component of the edge weights to a constant), but algorithms have to be newly designed. Bläser et al.\ \cite{BMP08} show that $k$-\pn{MaxATSP}\ is randomized $(\frac{1}{k+1}-\varepsilon)$-approximable. This was improved by Manthey \cite{man09} to a randomized $(\frac{1}{2}-\varepsilon)$-approximation for all (fixed) numbers of criteria. Both algorithms extend the cycle cover idea to multiple objectives. With a surprisingly simple algorithm we improve the approximation ratio to $\nicefrac{1}{2}$.
\paragraph{Satisfiability.} Given a formula in conjunctive normal form and a non-negative weight in $\mathbb{N}$ for each clause, the maximum weighted satisfiability problem (\mbox{\textbf{MaxSAT}}, for short) aims to find a truth assignment such that the sum of the weights of all satisfied clauses is maximal. The first approximation algorithm for \mbox{\rm MaxSAT}\ is due to Johnson \cite{Joh74}. He proved an approximation ratio of $\nicefrac{(2^r-1)}{(2^r)}$ for formulas where each clause has at least $r$ literals. His work showed that the general \mbox{\rm MaxSAT}\ problem is $\nicefrac{1}{2}$-approximable. Yannakakis \cite{Yan94} improved the approximation ratio of \mbox{\rm MaxSAT}\ to $\nicefrac{3}{4}$, and Goemans and Williamson \cite{GW94} subsequently gave a simpler algorithm with essentially the same approximation ratio, and later \cite{GW95} improved the approximation ratio to $0.758$. Further improvements followed, and the currently best known approximation ratio of $0.7846$ is due to Asano and Williamson \cite{AW02}. Regarding lower bounds, Papadimitriou and Yannakakis \cite{PY91} show that \mbox{\rm MaxSAT}\ is APX-complete. Furthermore, by H{\aa}stad \cite{Has97}, \mbox{\rm MaxSAT}\ cannot be approximated better than $\nicefrac{7}{8}$, unless P$=$NP.
Only little is known about the multi-objective maximum weighted satisfiability problem (\kMaxSATbf, for short), where each clause has a non-negative weight in $\mathbb{N}^k$ for some fixed $k \geq 1$ and where we wish to maximize the weight of the satisfied clauses. Santana et al.\ \cite{SBLL09} apply genetic algorithms to a version of the problem that is equivalent to \kMaxSAT\ with polynomially bounded weights. To our knowledge, the approximability of \kMaxSAT\ has not been investigated so far.
Using our balancing results, we can transfer a simple idea from single-objective optimization to the multi-objective world: For any truth assignment, the assignment itself or its complementary assignment satisfies at least one half of all clauses. We obtain a (deterministic) $\nicefrac{1}{2}$-approximation for $\kMaxSAT$, independent of $k$.
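The core observation is easy to verify in code: a clause is falsified by an assignment only if all of its literals are false, in which case the complementary assignment satisfies it; hence the two weight vectors sum to at least the total weight, componentwise. A small Python sketch (ours, on a hypothetical $2$-objective instance):

```python
def weight(clauses, weights, assignment):
    # Componentwise sum of the weights of all satisfied clauses.
    # A clause is a list of literals: +i means variable i, -i its negation.
    k = len(weights[0])
    w = [0] * k
    for c, cw in zip(clauses, weights):
        if any((lit > 0) == assignment[abs(lit)] for lit in c):
            w = [w[d] + cw[d] for d in range(k)]
    return w

# Hypothetical 2-objective instance over variables 1..3.
clauses = [[1, 2], [-1, 3], [-2, -3], [2, 3]]
weights = [(2, 1), (1, 3), (4, 1), (1, 2)]
a    = {1: True, 2: False, 3: True}
comp = {x: not b for x, b in a.items()}

total = [sum(cw[d] for cw in weights) for d in range(2)]
wa, wc = weight(clauses, weights, a), weight(clauses, weights, comp)
# Every clause is satisfied by a or by its complement, so
# wa + wc >= total holds componentwise.
print([wa[d] + wc[d] >= total[d] for d in range(2)])  # [True, True]
```

In the single-objective case one simply keeps the better of the two assignments; in the multi-objective case neither assignment may dominate, which is exactly where the balancing results of Section~\ref{combinatorics_section} come in.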
\section{Balancing Results} \label{combinatorics_section}
\subsection{Preliminaries}
Let $a,b \in \mathbb{R}$. We call a function $f \colon [a,b] \to \mathbb{R}$ \emph{integrable}, if it is Lebesgue-integrable on $[a,b]$. This is especially the case for bounded functions $f$ with only finitely many points of discontinuity. A function $g \colon [a,b] \to \mathbb{R}^n$ is \emph{componentwise integrable}, if all projections $g_i$ are integrable and in this case we write $\int_a^b g(x) \, dx$ as abbreviation for the tuple $(\int_a^b g_1(x) \, dx, \ldots, \int_a^b g_n(x) \, dx)$. For $x=(x_1,\dots,x_n)$, $y=(y_1,\dots,y_n) \in \mathbb{R}^{n}$ we write $x \le y$ if $x_i \le y_i$ for all $i\in\{1,2,\dots,n\}$. For a set $A \subseteq \mathbb{R}^n$, $\oli{A}$ denotes the (topological) closure of $A$, and $\partial A$ denotes the boundary of $A$. The set $A\subseteq \mathbb{R}^n$ is \emph{symmetric} if $x \in A \iff -x \in A$ for all $x \in \mathbb{R}^n$.
For bounded, open sets $D \subseteq \mathbb{R}^n$, continuous functions $\varphi \colon D \to \mathbb{R}^n$ and points $p \in \mathbb{R}^n \setminus \varphi(\partial D)$ the integer $d(\varphi,D,p)$ is called the \emph{Brouwer degree} of $\varphi$ and $D$ at the point $p$. We will not define it here, but we note that it captures how often $p$ is ``covered'' by $\varphi(D)$, counting ``inverse'' covers negatively, and that it generalizes the winding number in complex analysis.
\subsection{Analytical Version}
We apply the following theorems from topological degree theory to get the analytical version of our balancing results. \begin{theorem}[\protect{\cite[Theorem 2.1.1]{ll78-degree-theory}}]\label{thm:oddmappingtwo} If $D\subseteq\mathbb{R}^n$ is bounded and open, $\varphi\colon \overline{D} \to \mathbb{R}^n$ is continuous, $p \notin \varphi(\partial D)$ and $d(\varphi,D,p)\neq 0$, then $p \in \varphi(D)$. \end{theorem}
\begin{theorem}[Odd Mapping Theorem, \protect{\cite[Theorem 3.2.6]{ll78-degree-theory}}]\label{thm:oddmapping} Let $D$ be a bounded, open, symmetric subset of $\mathbb{R}^n$ containing the origin. If $\varphi\colon \overline{D}\to\mathbb{R}^n$ is continuous, $0 \notin \varphi(\partial D)$, and for all $x \in \partial D$ it holds that $
\frac{\varphi(x)}{|\varphi(x)|} \neq
\frac{\varphi(-x)}{|\varphi(-x)|}, $ then $d(\varphi,D,0)$ is an odd number (and in particular not zero). \end{theorem}
\begin{corollary}\label{coro:oddmapping} Let $D$ be a bounded, open, symmetric subset of $\mathbb{R}^n$ containing the origin. If $\varphi\colon \overline{D} \to \mathbb{R}^n$ is continuous and for all $x \in \partial D$ it holds that $\varphi(-x)=-\varphi(x)$, then $0 \in \varphi(\overline{D})$. \end{corollary} \begin{proof} Assume that $0 \notin \varphi(\overline{D})$. From $\varphi(-x)=-\varphi(x)$ for $x \in \partial D$ it follows that the inequality condition of Theorem~\ref{thm:oddmapping} is fulfilled (note that $0 \notin \varphi(\partial D)$) and thus $d(\varphi,D,0) \neq 0$ and by Theorem~\ref{thm:oddmappingtwo}, $0 \in \varphi(\overline{D})$. This is a contradiction. \end{proof}
\begin{lemma}\label{lem:single_annulator} Let $n \ge 1$, $a,b \in \mathbb{R}$, and $h \colon [a,b] \to \mathbb{R}^{2n}$ be componentwise integrable. There exist $n$ closed intervals $I_1, \ldots, I_n \subseteq [a,b]$ such that for $I=I_1\cup \dots \cup I_n$, \[
\int\limits_I h(x) \,dx = \int\limits_{[a,b]\setminus I} h(x)\,dx. \] \end{lemma} \begin{proof}
Observe that it suffices to show this for $[a,b]=[0,1]$. Define $T = \{(t_1,t_2,\dots,t_{2n})\in \mathbb{R}^{2n} \mid \sum_{i=1}^{2n} |t_i| \le 1\}$ and for every $t=(t_1,\dots,t_{2n}) \in T$, let \[
I_t = \bigcup_{\substack{1 \le k \le 2n\\t_k > 0}} \left[\sum_{i=1}^{k-1}|t_i|,
\sum_{i=1}^{k}|t_i|\right] \] and \[ f \colon T \to \mathbb{R}^{2n},\quad f(t) = \int\limits_{I_t} h(x)\,dx - \int\limits_{[0,1]\setminus I_t} h(x)\,dx. \]
\begin{figure}
\caption{Illustration of the set $I_t$ for some value of $t=(t_1,\dots,t_8)$, where $t_1$, $t_2$, $t_4$ and $t_8$ are positive and $t_3$, $t_5$, $t_6$ and $t_7$ are negative.}
\end{figure}
By the formal definition, $I_t$ is a union of (at most) $2n$ closed intervals. However, it can always be written as a union of at most $n$ closed intervals, by merging adjacent intervals.
We now want to show that $0 \in f(T)$ by applying Corollary~\ref{coro:oddmapping} to $\varphi = f$ and $D$ being the interior of $T$. $D$ is obviously a bounded, open, and symmetric subset of $\mathbb{R}^{2n}$ containing the origin. The function $f$ is continuous because of the fundamental theorem of calculus for the Lebesgue integral and the fact that the endpoints of the intervals in $I_t$ depend continuously on $t$. Furthermore, for any $t \in \partial D$ we get that there are only finitely many points in $[0,1]$ which are not in exactly one of the sets $I_{-t}$ and $I_t$ and thus $f(-t) = -f(t)$ since these finitely many points have no influence on the values of the integrals. Since all preconditions of the corollary are fulfilled, we get $0\in f(T)$ and thus there exists some $t \in T$ such that \[ \int\limits_{I_t} h(x)\,dx = \int\limits_{[0,1]\setminus I_t} h(x)\,dx. \] As already noted, $I_t$ can be written as a union of at most $n$ closed intervals. We obtain a union of exactly $n$ intervals by adding intervals $[a,a]$. \end{proof}
\begin{lemma}\label{lem:double_bisector} Let $n \ge 1$, $a,b \in \mathbb{R}$, and $f,g \colon [a,b] \to \mathbb{R}^{2n}$ be componentwise integrable. There exist $n$ closed intervals $I_1,\ldots,I_n \subseteq [a,b]$ such that for $I=I_1\cup \dots \cup I_n$, \[
\int\limits_{I} f(x) \,dx +
\int\limits_{[a,b]\setminus I} g(x) \,dx =
\frac{1}{2}\int\limits_{[a,b]} f(x) + g(x) \,dx. \] \end{lemma} \begin{proof} Applying Lemma~\ref{lem:single_annulator} to $h(x) = f(x) - g(x)$ yields some $I\subseteq [a,b]$ that is the union of $n$ closed intervals in $[a,b]$ such that \begin{align*} \int\limits_{I} h(x) \,dx &= \int\limits_{[a,b]\setminus I} h(x) \,dx\\ \iff \int\limits_{I} f(x)-g(x) \,dx &= \int\limits_{[a,b]\setminus I} f(x)-g(x) \,dx\\ \iff \int\limits_{I} f(x)-g(x) \,dx + \int\limits_{[a,b]\setminus I} g(x)-f(x) \,dx &= 0\\ \stackrel{(*)}{\iff} 2\int\limits_{I} f(x) \,dx + 2\int\limits_{[a,b]\setminus I} g(x) \,dx &= \int\limits_{[a,b]} f(x) + g(x) \,dx\\ \iff \int\limits_{I} f(x) \,dx + \int\limits_{[a,b]\setminus I} g(x) \,dx &= \frac{1}{2}\int\limits_{[a,b]} f(x) + g(x) \,dx. \end{align*} Note that $(*)$ is obtained by adding $\int_{[a,b]}f(x)+g(x)\,dx$ to both sides. \end{proof}
\subsection{Discretization of the Analytical Results}
Now we discretize the analytical results, which introduces a rounding error that cannot be avoided.
\begin{figure}
\caption{\footnotesize Graphs of the functions $f$ and $g$ used in the proofs of the Lemmas~\ref{lem:real:balancing} and \ref{lem:combinatorial}.}
\label{fig:vector_diagram}
\end{figure}
\begin{lemma} \label{lem:real:balancing}
Let $n,m \ge 1$ and $x_1,\dots,x_m,y_1,\dots,y_m,z\in\mathbb{N}^{2n}$ such that $x_i \le z$ and $y_i \le z$ for all $i$.
There exist natural numbers $1 \le a_1 \le b_1 \le a_2 \le b_2 \le \cdots \le a_n \le b_n \le m$
such that for $I = \bigcup_{i=1}^n \{a_i, a_{i}+1,\ldots, b_{i}-1\}$,
\[
-2nz + \frac{1}{2}\sum_{i=1}^{m} (x_i + y_i)
\;\;\; \le \;\;\; \sum_{i \in I} x_i + \sum_{i \notin I} y_i
\;\;\; \le \;\;\; 2nz + \frac{1}{2}\sum_{i=1}^{m} (x_i + y_i).
\] \end{lemma} \begin{proof}
For the proof it is advantageous to start the indices of $x_i$ and $y_i$ at $0$.
We first define two functions $f$ and $g$ that distribute the values
$x_0,\dots,x_{m-1},y_0,\dots,y_{m-1}\in\mathbb{N}^{2n}$ equally over the interval
$[0,m)$, and then we apply Lemma~\ref{lem:double_bisector}.
Let $f,g\colon[0,m] \to \mathbb{R}^{2n}$ such that
\begin{align*}
f(t) &=
\begin{cases}
2x_i & \text{if $t \in [i,i+\half)$}\\
(0,\dots,0) & \text{otherwise}
\end{cases}\\
\intertext{and}
g(t) &=
\begin{cases}
2y_i & \text{if $t \in [i+\half,i+1)$}\\
(0,\dots,0) & \text{otherwise.}
\end{cases}
\end{align*}
Figure~\ref{fig:vector_diagram} shows the graph of $f$ and $g$.
Note that both functions are componentwise integrable.
Moreover, for $i\in\{0,\dots,m-1\}$,
\begin{equation} \label{eqn_fxgya}
\int_i^{i+1} f(t) \,dt \;=\; x_i
\quad \mbox{and} \quad
\int_i^{i+1} g(t) \,dt \;=\; y_i.
\end{equation}
By Lemma~\ref{lem:double_bisector} there exist closed intervals $I_i = [a_i,b_i] \subseteq [0,m]$ where $1 \le i \le n$
such that for $I=\bigcup_{i=1}^{n}[a_i,b_i]$ it holds that
\begin{equation} \label{eqn_62572}
\int\limits_{I} f(t) \,dt +
\int\limits_{[0,m]\setminus I} g(t) \,dt \;\;=\;\;
\frac{1}{2}\int\limits_{[0,m]} f(t) + g(t) \,dt.
\end{equation}
We may assume
$0 \le a_1 \le b_1 \le a_2 \le b_2 \le \cdots \le a_n \le b_n \le m$.
For $1 \le i \le n$ let
$$a'_i := \lfloor a_i + \nicefrac{1}{2} \rfloor \quad \mbox{and} \quad b'_i := \lfloor b_i + \nicefrac{1}{2} \rfloor.$$
Note that the $a'_i, b'_i$ are natural numbers such that
$0 \le a'_1 \le b'_1 \le a'_2 \le b'_2 \le \cdots \le a'_n \le b'_n \le m$.
By the definition of $f$ and $g$, for $1 \le i \le n$ it holds that
\begin{equation*}
\left| \int_{a_i}^{a'_i} f(t) \,dt \right| + \left| \int_{a_i}^{a'_i} g(t) \,dt \right| \le z \quad \mbox{and} \quad
\left| \int_{b_i}^{b'_i} f(t) \,dt \right| + \left| \int_{b_i}^{b'_i} g(t) \,dt \right| \le z,
\end{equation*}
where $|(v_1, \ldots, v_{2n})| := (|v_1|, \ldots, |v_{2n}|)$ for
$v_1, \ldots, v_{2n} \in \mathbb{R}$.
So if some $a_i$ (resp., $b_i$) is replaced by $a'_i$ (resp., $b'_i$),
then the left-hand side of (\ref{eqn_62572}) changes by at most $z$.
Hence, for $I'=\bigcup_{i=1}^{n}[a'_i,b'_i]$ it holds that
\begin{equation} \label{eqn_72348}
-2nz + \frac{1}{2}\int\limits_{[0,m]} f(t) + g(t) \,dt
\;\;\; \le \;\;\; \int\limits_{I'} f(t) \,dt \; + \!\!\!\!\! \int\limits_{[0,m]\setminus I'} g(t) \,dt
\;\;\; \le \;\;\; 2nz + \frac{1}{2}\int\limits_{[0,m]} f(t) + g(t) \,dt.
\end{equation}
Let $I'' = \bigcup_{i=1}^n \{a'_i, a'_{i}+1,\ldots, b'_{i}-1\}$.
From (\ref{eqn_fxgya}) and (\ref{eqn_72348}) we obtain
\begin{equation*}
-2nz + \frac{1}{2}\sum_{i=0}^{m-1} (x_i + y_i)
\;\;\; \le \;\;\; \sum_{i \in I''} x_i + \sum_{i \notin I''} y_i
\;\;\; \le \;\;\; 2nz + \frac{1}{2}\sum_{i=0}^{m-1} (x_i + y_i).
\end{equation*} \end{proof}
Next we state the integer variant of Lemma~\ref{lem:real:balancing}. \begin{corollary} \label{coro:combinatorial:integer}
Let $n,m \ge 1$, $x_1,\dots,x_m\in\mathbb{Z}^{2n}$, and $z \in \mathbb{N}^{2n}$
such that $-z \le x_i \le z$ for all $i$.
There exist natural numbers $1 \le a_1 \le b_1 \le a_2 \le b_2 \le \cdots \le a_n \le b_n \le m$
such that for $I = \bigcup_{i=1}^n \{a_i, a_{i}+1,\ldots, b_{i}-1\}$,
\[
-4nz \;\;\le\;\; \sum_{i \in I} x_i - \sum_{i \notin I} x_i \;\;\le\;\; 4nz.
\] \end{corollary} \begin{proof}
Let $x'_i := z+x_i$ and $y'_i := z-x_i$.
Thus $x'_i, y'_i \in \mathbb{N}^{2n}$ and $x'_i,y'_i \le 2z$.
Lemma~\ref{lem:real:balancing} applied to $x'_i$ and $y'_i$ provides natural numbers $1 \le a_1 \le b_1 \le a_2 \le b_2 \le \cdots \le a_n \le b_n \le m$
such that for $I = \bigcup_{i=1}^n \{a_i, a_{i}+1,\ldots, b_{i}-1\}$,
\begin{equation*}
-4nz + \frac{1}{2}\sum_{i=1}^{m} (x'_i + y'_i)
\;\;\; \le \;\;\; \sum_{i \in I} x'_i + \sum_{i \notin I} y'_i
\;\;\; \le \;\;\; 4nz + \frac{1}{2}\sum_{i=1}^{m} (x'_i + y'_i).
\end{equation*}
Therefore,
\begin{equation*}
-4nz + \frac{2mz}{2} + \frac{1}{2}\sum_{i=1}^{m} (x_i - x_i)
\;\;\; \le \;\;\; mz + \sum_{i \in I} x_i - \sum_{i \notin I} x_i
\;\;\; \le \;\;\; 4nz + \frac{2mz}{2} + \frac{1}{2}\sum_{i=1}^{m} (x_i - x_i).
\end{equation*} \end{proof}
For the applications to the maximum asymmetric traveling salesman and maximum weighted satisfiability problems we need the following variant of Lemma~\ref{lem:real:balancing}. While providing only a lower bound for the balanced sum, it estimates the rounding error more precisely. \begin{lemma} \label{lem:combinatorial}
Let $n,m \ge 1$ and $x_1,\dots,x_m,y_1,\dots,y_m\in\mathbb{N}^{2n}$.
There exist an $n' \in \{0, \ldots, n\}$ and natural numbers $1 \le a_1 \le b_1 < a_2 \le b_2 < \cdots < a_{n'} \le b_{n'} \le m$
such that for $I = \bigcup_{i \in \{1,\ldots,n'\}} \{a_i, a_{i}+1,\ldots, b_i\}$,
\[
y_{b_1} + y_{b_2} + \cdots + y_{b_{n'}} + \sum_{i \in I} x_i + \sum_{i \notin I} y_i \;\;\ge\;\; \frac{1}{2} \sum_{i=1}^{m}(x_i + y_i).
\] \end{lemma} \begin{proof}
Again let the indices of $x_i$ and $y_i$ start at $0$ and
define the componentwise integrable functions $f$ and $g$ as in the proof of Lemma~\ref{lem:real:balancing}.
So for $i\in\{0,\dots,m-1\}$,
\begin{equation} \label{eqn_fxgy}
\int_i^{i+\half} f(t) \,dt \;\;=\;\; x_i
\quad \mbox{and} \quad
\int_{i+\half}^{i+1} g(t) \,dt \;\;=\;\; y_i.
\end{equation}
By Lemma~\ref{lem:double_bisector} there exists an $n' \in \{0,\ldots,n\}$
and closed intervals $I_i = [a_i,b_i] \subseteq [0,m]$ where $1 \le i \le n'$
such that for $I=\bigcup_{i=1}^{n'}[a_i,b_i]$
it holds that
\begin{align}
\int\limits_{I} f(t) \,dt +
\int\limits_{[0,m]\setminus I} g(t) \,dt \;\;\ge\;\;
\frac{1}{2}\int\limits_{[0,m]} f(t) + g(t) \,dt.\label{eq:combinatorial_1}
\end{align}
Here we only need the inequality even though Lemma~\ref{lem:double_bisector}
states an equality.
We may assume
\begin{equation} \label{eqn_order}
0 \le a_1 \le b_1 \le a_2 \le b_2 \le \cdots \le a_{n'} \le b_{n'} \le m.
\end{equation}
By the definition of $f$ and $g$,
the following holds for every $i\in\{0,\dots,m-1\}$:
\begin{align*}
t \in [i,i+\half) &\implies g(t) = (0,\dots,0)\\
t \in [i+\half,i+1) &\implies f(t) = (0,\dots,0)
\end{align*}
\begin{claim} \label{claim_a}
We may assume that
$\{a_j,b_j\} \not\subseteq [i+\half,i+1]$ and
$\{b_j,a_{j+1}\} \not\subseteq [i,i+\half]$
for all $j \in \{1,\ldots,n'\}$ and $i\in\{0,\dots,m-1\}$.
\end{claim}
\begin{proof}
If $a_j,b_j \in [i+\half,i+1]$,
then $f$ is $0$ on $[a_j,b_j)$ and hence $\int_{a_j}^{b_j} f(t) \,dt = 0$.
Thus the left-hand side of \eqref{eq:combinatorial_1}
does not decrease if we remove the interval $[a_j,b_j]$ from $I$.
Similarly, if $b_j,a_{j+1} \in [i,i+\half]$,
then $g$ is $0$ on $[b_j,a_{j+1})$ and
hence $\int_{b_j}^{a_{j+1}} g(t) \,dt = 0$.
Thus the left-hand side of \eqref{eq:combinatorial_1}
does not decrease if we replace the intervals $[a_j,b_j]$ and
$[a_{j+1},b_{j+1}]$ by the interval $[a_j,b_{j+1}]$.
Note that after these changes (which include a decrement of $n'$), (\ref{eqn_order}) still holds.
\end{proof}
\begin{claim} \label{claim_b}
We may assume that $a_1, \ldots, a_{n'} \in \mathbb{N}$ and
$b_1 + \half, \ldots, b_{n'} + \half \in \mathbb{N}$.
\end{claim}
\begin{proof}
Assume $a_j \in [i+\half,i+1)$.
By Claim~\ref{claim_a}, $b_j \notin [i+\half,i+1]$ and hence $b_j > i+1$.
Since $f$ is $0$ on $[i+\half,i+1)$,
the left-hand side of \eqref{eq:combinatorial_1}
does not decrease if we let $a_j := i+1$.
After this change, (\ref{eqn_order}) still holds.
Assume $a_j \in [i,i+\half)$.
By Claim~\ref{claim_a},
$b_{j-1} \notin [i,i+\half]$ and hence $b_{j-1} < i$ (for $j \ge 2$).
Since $g$ is $0$ on $[i,i+\half)$,
the left-hand side of \eqref{eq:combinatorial_1}
does not decrease if we let $a_j := i$.
After this change, (\ref{eqn_order}) still holds.
Assume $b_j \in [i+\half,i+1)$.
By Claim~\ref{claim_a},
$a_j \notin [i+\half,i+1]$ and hence $a_j < i+\half$.
Since $f$ is $0$ on $[i+\half,i+1)$,
the left-hand side of \eqref{eq:combinatorial_1}
does not decrease if we let $b_j := i+\half$.
After this change, (\ref{eqn_order}) still holds.
Assume $b_j \in [i,i+\half)$ and $i<m$.
By Claim~\ref{claim_a},
$a_{j+1} \notin [i,i+\half]$ and hence $a_{j+1} > i+\half$ (for $j<n'$).
Since $g$ is $0$ on $[i,i+\half)$,
the left-hand side of \eqref{eq:combinatorial_1}
does not decrease if we let $b_j := i+\half$.
After this change, (\ref{eqn_order}) still holds.
It remains to argue for the special case $b_j = m$.
By Claim~\ref{claim_a},
$a_j \notin [m-\half,m]$ and hence $a_j < m-\half$.
Since $f$ is $0$ on $[m-\half,m)$,
the left-hand side of \eqref{eq:combinatorial_1}
does not decrease if we let $b_j := m-\half$.
After this change, (\ref{eqn_order}) still holds.
\end{proof}
If we split the integrals on the left-hand side of (\ref{eq:combinatorial_1})
according to $I=\bigcup_{i=1}^{n'}[a_i,b_i]$,
we obtain
\begin{equation} \label{eqn_31415}
\int\limits_{0}^{a_1} g(t) \,dt
\;+\;
\sum_{i=1}^{n'-1}
\left(
\int\limits_{a_i}^{b_i} f(t) \,dt +
\int\limits_{b_i}^{a_{i+1}} g(t) \,dt
\right)
\;+\;
\int\limits_{a_{n'}}^{b_{n'}} f(t) \,dt
\;+\;
\int\limits_{b_{n'}}^{m} g(t) \,dt
\;\;\;\ge\;\;\;
\frac{1}{2}\int\limits_{[0,m]} f(t) + g(t) \,dt.
\end{equation}
For $i \in \{1,\ldots,n'\}$ let $c_i = b_i-\half$.
From Claim~\ref{claim_b} and (\ref{eqn_order}) it follows that
$$0 \le a_1 \le c_1 < a_2 \le c_2 < \cdots < a_{n'} \le c_{n'} \le m-1.$$
Together with (\ref{eqn_fxgy}) we obtain:
\begin{eqnarray*}
\int\limits_{0}^{a_1} g(t) \,dt &=& y_0 + y_1 + \cdots + y_{a_1-1}\\
\int\limits_{a_i}^{b_i} f(t) \,dt &=& x_{a_i} + x_{a_i+1} + \cdots + x_{c_i}\\
\int\limits_{b_i}^{a_{i+1}} g(t) \,dt &=& y_{c_i} + y_{c_i+1} + \cdots + y_{a_{i+1}-1}\\
\int\limits_{b_{n'}}^{m} g(t) \,dt &=& y_{c_{n'}} + y_{c_{n'}+1} + \cdots + y_{m-1}
\end{eqnarray*}
In these sums, each index $j\in\{c_1,c_2,\ldots,c_{n'}\}$ appears exactly twice,
once as $x_{j}$ and once as $y_{j}$.
All remaining indices $j \in \{0,\ldots,m-1\} \setminus \{c_1, c_2, \ldots, c_{n'}\}$ appear exactly once, either as $x_j$ or as $y_j$.
Therefore, with $I' = \bigcup_{i \in \{1,\ldots,n'\}} \{a_i,a_i + 1, \ldots, c_i\}$ the left-hand side of (\ref{eqn_31415}) is equal to
\begin{equation} \label{eqn_31416}
y_{c_1} + y_{c_2} + \cdots + y_{c_{n'}} + \sum_{i \in I'} x_i + \sum_{i \notin I'} y_i.
\end{equation}
Applying (\ref{eqn_fxgy}) to the right-hand side of (\ref{eqn_31415})
yields the desired inequality
\[
y_{c_1} + y_{c_2} + \cdots + y_{c_{n'}} + \sum_{i \in I'} x_i + \sum_{i \notin I'} y_i
\;\;\;\ge\;\;\;
\frac{1}{2} \sum_{i=0}^{m-1}(x_i + y_i).\qedhere
\] \end{proof}
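As with Lemma~\ref{lem:real:balancing}, the statement of Lemma~\ref{lem:combinatorial} can be checked by brute force on small instances. The sketch below (helper name \texttt{heavy\_cover\_exists} is ours) treats the case $n=1$, i.e., vectors in $\mathbb{N}^2$ and $n' \in \{0,1\}$:

```python
import random

def heavy_cover_exists(xs, ys):
    """n = 1 check of the lemma: vectors in N^2; search n' in {0, 1}.
    For n' = 1 and I = {a, ..., b}: test componentwise whether
    y_b + sum_{i in I} x_i + sum_{i not in I} y_i >= (1/2) sum_i (x_i + y_i);
    for n' = 0 the left-hand side is just sum_i y_i."""
    m = len(xs)
    total = [sum(x[d] for x in xs) + sum(y[d] for y in ys) for d in (0, 1)]
    # n' = 0: I is empty
    if all(2 * sum(y[d] for y in ys) >= total[d] for d in (0, 1)):
        return True
    for a in range(1, m + 1):
        for b in range(a, m + 1):
            I = range(a, b + 1)  # inclusive block {a, ..., b}
            lhs = [ys[b - 1][d] + sum(xs[i - 1][d] if i in I else ys[i - 1][d]
                                      for i in range(1, m + 1)) for d in (0, 1)]
            if all(2 * lhs[d] >= total[d] for d in (0, 1)):
                return True
    return False

# randomized sanity check: the lemma predicts this never fails
random.seed(2)
for _ in range(300):
    m = random.randint(1, 8)
    xs = [(random.randint(0, 9), random.randint(0, 9)) for _ in range(m)]
    ys = [(random.randint(0, 9), random.randint(0, 9)) for _ in range(m)]
    assert heavy_cover_exists(xs, ys)
```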
\begin{corollary} \label{coro:combinatorial}
Let $n,m \ge 1$ and $x_1,\dots,x_m,y_1,\dots,y_m,z\in\mathbb{N}^{2n}$
such that $y_i \le z$ for all $i$.
There exist $n' \le \min(n,m)$
disjoint, nonempty intervals $I_1, \ldots, I_{n'} \subseteq \{1,\ldots,m\}$
such that for $I = I_1 \cup \cdots \cup I_{n'}$,
\[
n' \cdot z + \sum_{i \in I} x_i + \sum_{i \notin I} y_i \;\;\ge\;\; \frac{1}{2} \sum_{i=1}^{m}(x_i + y_i).
\] \end{corollary}
\begin{proof}
Apply Lemma~\ref{lem:combinatorial} to $x_1,\dots,x_m,y_1,\dots,y_m$. The resulting intervals $I_i = \{a_i,\ldots,b_i\}$ are nonempty and pairwise disjoint subsets of $\{1,\ldots,m\}$, hence $n' \le \min(n,m)$, and $y_{b_1} + \cdots + y_{b_{n'}} \le n' \cdot z$ since $y_i \le z$ for all $i$.
\end{proof}
\section{Applications to Multi-Objective Optimization}
\subsection{Preliminaries} \label{sec:multi_defs} Consider some multi-objective maximization problem $\mathcal{O}$ that consists of a set of instances $\mathcal{I}$, a set of solutions $S(x)$ for each instance $x \in \mathcal{I}$, and a function $w$ assigning a $k$-dimensional weight $w(x, s) \in \mathbb{N}^k$ to each solution $s \in S(x)$ depending also on the instance $x \in \mathcal{I}$. If the instance $x$ is clear from the context, we also write $w(s) = w(x, s)$. The components of $w$ are written as $w_i$ for $i \in \{1,2,\dots,k\}$. For weights $a = (a_1 , \dots, a_k)$, $b = (b_1 , \dots, b_k ) \in \mathbb{N}^k$ we write $a \ge b$ if $a_i \ge b_i$ for all $i \in \{1, 2, \dots, k\}$.
Let $x \in \mathcal{I}$. The Pareto set of $x$, the set of optimal solutions, is the set $\{s \in S(x) \mid \neg \exists s' \in S(x)\,\, (w(x, s') \ge w(x, s) \text{ and } w(x, s') \neq w(x, s))\}$. For solutions $s, s' \in S(x)$ and $\alpha < 1$ we say $s$ is $\alpha$-approximated by $s'$ if $w_i(s') \ge \alpha \cdot w_i (s)$ for all $i$. We call a set of solutions an $\alpha$-approximate Pareto set of $x$ if every solution $s \in S(x)$ (or equivalently, every solution from the Pareto set) is $\alpha$-approximated by some $s'$ contained in the set.
We say that some algorithm is an $\alpha$-approximation algorithm for $\mathcal{O}$ if it runs in polynomial time and returns an $\alpha$-approximate Pareto set of $x$ for all input instances $x \in \mathcal{I}$. We call it randomized if it is allowed to fail with probability at most $\nicefrac{1}{2}$ over all of its executions. An algorithm is an FPTAS (fully polynomial-time approximation scheme) for a given optimization problem if, on input $x$ and $0<\varepsilon<1$, it computes a $(1-\varepsilon)$-approximate Pareto set of $x$ in time polynomial in
$|x|+\nicefrac{1}{\varepsilon}$. If the algorithm is randomized, it is called an FPRAS (fully polynomial-time randomized approximation scheme).
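The notions of Pareto set and $\alpha$-approximate Pareto set can be made concrete with a small brute-force sketch (function names are ours; for simplicity we identify each solution with its weight vector):

```python
def pareto_set(solutions, w):
    """Solutions not dominated by any other solution: s' dominates s
    if w(s') >= w(s) componentwise and w(s') != w(s)."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and a != b
    return [s for s in solutions
            if not any(dominates(w(t), w(s)) for t in solutions)]

def is_alpha_approx_pareto(candidates, solutions, w, alpha):
    """Every solution must be alpha-approximated by some candidate:
    w_i(c) >= alpha * w_i(s) for all objectives i."""
    return all(any(all(wc >= alpha * ws for wc, ws in zip(w(c), w(s)))
                   for c in candidates)
               for s in solutions)

# solutions given directly as their weight vectors
sols = [(4, 1), (2, 3), (1, 1), (3, 3)]
ident = lambda s: s
assert pareto_set(sols, ident) == [(4, 1), (3, 3)]
assert is_alpha_approx_pareto([(4, 1), (3, 3)], sols, ident, 1.0)
```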
\subsection{\texorpdfstring{$\boldsymbol{k}$}{k}-Objective Maximum Asymmetric Traveling Salesman Problem} \label{sec_atsp}
\paragraph{Definitions.} Let $k \geq 1$. An \emph{$\mathbb{N}^k$-labeled directed graph} is a tuple $G=(V,E,w)$, where $V$ is some finite set of vertices, $E \subseteq V \times V$ is a set of edges, and $w \colon E \to \mathbb{N}^k$ is a $k$-dimensional weight function. We denote the $i$-th component of $w$ by $w_i$ and extend $w$ to sets of edges by taking the sum over the weights of all edges in the set. A set of edges $M \subseteq E$ is called \emph{matching} in $G$ if no two edges in $M$ share a common vertex. A \emph{walk} in $G$ is an alternating sequence of vertices and edges $v_0,e_1,v_1, \dots e_{m},v_m$, where $v_i \in V$, $e_j \in E$, and $e_j=(v_{j-1},v_j)$ for all $0 \leq i \leq m$ and $1 \leq j \leq m$. If the sequence of vertices $v_0,v_1,\dots,v_m$ does not contain any repetitions, the walk is called a \emph{path} and if $v_0,v_1,\dots,v_{m-1}$ does not contain any repetitions and $v_{m} = v_0$, it is called a \emph{cycle}. A cycle in $G$ is called \emph{Hamiltonian} if it visits every vertex in $G$. For simplicity, we will interpret paths and cycles as sets of edges and can thus (using the above mentioned extension of $w$ to sets of edges) write $w(C)$ for the (multidimensional) weight of a Hamiltonian cycle $C$ of $G$.
Given some $\mathbb{N}^k$-labeled directed graph as input, our goal is to find a maximum Hamiltonian cycle. We will also use the multi-objective version of the maximum matching problem. These two maximization problems are defined as follows:
\begin{quote} \begin{tabbing} \textbf{$\boldsymbol{k}$-Objective Maximum Asymmetric Traveling Salesman Problem}\\ \textbf{($\boldsymbol{k}$-\pn{\bf MaxATSP})}\\ Instance: \= $\mathbb{N}^k$-labeled directed complete graph $(V,E,w)$\\ Solution: \> Hamiltonian cycle $C$\\ Weight: \> $w(C)$ \end{tabbing} \end{quote}
\begin{quote} \begin{tabbing} {\bf $\boldsymbol{k}$-Objective Maximum Matching ($\boldsymbol{k}$-\pn{\bf MM})}\\ Instance: \= $\mathbb{N}^k$-labeled directed graph $(V,E,w)$\\ Solution: \> Matching $M$\\ Weight: \> $w(M)$ \end{tabbing} \end{quote} Papadimitriou and Yannakakis \cite{PY00} give an FPRAS for $k$-\pn{MM}, which we will denote by \texttt{$k$-MM-Approx}$_\texttt{R}$\ and use as a black box in our algorithm. Since \texttt{$k$-MM-Approx}$_\texttt{R}$\ will be called multiple times, we assume that its success probability is amplified in a way such that the probability that {\em all} calls to the FPRAS succeed is at least $\nicefrac{1}{2}$.
\paragraph{High-Level Explanation of the Algorithm.} We apply the balancing results to the multi-objective maximum asymmetric traveling salesman problem and obtain a short algorithm that provides a randomized $\nicefrac{1}{2}$-approximation. This improves and simplifies the $(\nicefrac{1}{2}-\varepsilon)$-approximation that was given by Manthey \cite{man09}. Essentially our algorithm contracts a small number of edges, then computes a maximum matching, adds the contracted edges to the matching, and extends the result in an arbitrary way to a Hamiltonian cycle.
The argument for the correctness of the algorithm is as follows: Each Hamiltonian cycle $H$ induces two perfect matchings (the edges with odd and the edges with even sequence number in the cycle). For each objective $i$, the weight of one of the matchings is at least $\nicefrac{1}{2} \cdot w_i(H)$. The balancing results assure the existence of a matching $M$ such that for {\em all} objectives the inequality $w_i(M) \ge \nicefrac{1}{2} \cdot w_i(H)$ holds up to a small error. This matching can be approximated with the known FPRAS for multi-objective maximum matching. Moreover, by guessing and contracting a constant number of heavy edges in $H$ our algorithm can compensate the errors caused by the balancing and by the FPRAS.
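The first step of this argument, splitting a Hamiltonian cycle into its odd-position and even-position matchings, can be illustrated on a toy weight sequence (the numbers below are made up for illustration); note that different objectives may be covered by different matchings:

```python
# weights w(e_1), ..., w(e_6) along a Hamiltonian cycle on 6 vertices,
# two objectives each
cycle_weights = [(5, 1), (2, 6), (4, 4), (1, 3), (7, 2), (3, 5)]
odd = cycle_weights[0::2]    # matching of odd-position edges
even = cycle_weights[1::2]   # matching of even-position edges

# for each single objective, one of the two matchings carries at least
# half the cycle's weight (here objective 0 favors odd, objective 1 even)
for d in (0, 1):
    total = sum(w[d] for w in cycle_weights)
    assert 2 * max(sum(w[d] for w in odd), sum(w[d] for w in even)) >= total
```

The point of the balancing results is precisely that a single (slightly modified) matching can be chosen that works for all objectives simultaneously, up to a small error.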
\paragraph{Contraction and Expansion of Paths.} Suppose that for a given $\mathbb{N}^k$-labeled complete directed graph $G=(V,E,w)$ we wish to find some Hamiltonian cycle that contains a particular edge $e=(u,v)$. This reduces to the problem of finding some Hamiltonian cycle in the $\mathbb{N}^k$-labeled complete directed graph $G'=(V',E',w')$ where the edge $e$ is contracted by combining the nodes $u$ and $v$ into a single node while retaining the ingoing edges of $u$ and the outgoing edges of $v$. More formally, we remove $v$ and all incident edges from $G$ and set $w'(u,x)=w(v,x)$ for every $x \in V\setminus\{u,v\}$ (Figure \ref{fig:contract}). Now suppose we find a Hamiltonian cycle $C'$ in $G'$. Then there exists some $x$ such that $(u,x) \in C'$. Note that $w'(u,x)=w(v,x)$. We replace the edge $(u,x)$ in $C'$ with the detour $(u,v),(v,x)$ and obtain a Hamiltonian cycle $C$ in $G$ passing through $e$. Moreover, $C$ preserves the weights of $C'$ in the sense that $w(C) = w'(C') + w(e)$.
\begin{figure}
\caption{\footnotesize Contracting the edge $e=(u,v)$ deletes all edges incident to $v$ and sets the weights of every edge $(u,x)$ to the weights of the edge $(v,x)$ for $x \in V \setminus \{u,v\}$. Any Hamiltonian cycle passes through some edge $(u,x)$ and hence can be expanded to a Hamiltonian cycle through $e$ by replacing $(u,x)$ with the detour $(u,v),(v,x)$.}
\label{fig:contract}
\end{figure}
The notion of edge contractions and expansions can easily be extended to sets of pairwise vertex disjoint paths (Figure~\ref{fig_path_contract}), where each path is contracted edge-by-edge starting at the last edge of the path, and different paths can be contracted in an arbitrary order. We make this precise with the following definitions.
\begin{figure}
\caption{Example of the contraction of the path $\{(u,v),(v,y)\}$ in the graph $G$, resulting in the graph $G''$, and the subsequent expansion of the tour $\{(u,x),(x,u)\}$ in $G''$. The final tour in $G$ includes the contracted path.}
\label{fig_path_contract}
\label{fig_path_expand}
\end{figure}
\begin{definition} Let $G=(V,E,w)$ be some $\mathbb{N}^k$-labeled complete directed graph, let $(u,v) \in E$, let $P \subseteq E$ be a path $u_0,e_1,u_1,e_2,u_2,\ldots, e_r,u_r$, and let $Q \subseteq E$ be a set of pairwise vertex disjoint paths $P_1,P_2, \dots, P_r \subseteq E$. \begin{enumerate} \item $\mathrm{contract}_{(u,v)}(G)
= (V \setminus \{v\}, \{e \in E \mid \text{$v$ is not incident to $e$}\}, w')$, where $w'(x,y)=w(x,y)$ if $x \neq u$ and $w'(u,z)=w(v,z)$. \item $
\mathrm{contract}_{P}(G)
=\mathrm{contract}_{e_1}(
\mathrm{contract}_{e_2}(
\dots \mathrm{contract}_{e_r}(G) \dots ))$ \item $
\mathrm{contract}_{Q}(G)
=\mathrm{contract}_{P_1}(
\mathrm{contract}_{P_2}(
\dots \mathrm{contract}_{P_r}(G) \dots ))$ \end{enumerate} We sometimes identify a graph with its edge set and apply $\mathrm{contract}$ directly to sets of edges and not to graphs. In this case, we also interpret the value of $\mathrm{contract}$ as an edge set. \end{definition}
Observe that the result of contracting several pairwise vertex disjoint paths does not depend on the order in which the paths are contracted.
We define edge expansion in a similar manner. Note that Definition~\ref{definition:expand} becomes essential if $G'$ is obtained from $G$ by a contraction of some set $Q$ of pairwise vertex disjoint paths in $G$.
\begin{definition}\label{definition:expand} Let $G=(V,E,w)$ and $G'=(V',E',w')$ be two $\mathbb{N}^k$-labeled complete directed graphs, let $T \subseteq E'$ be a Hamiltonian cycle of $G'$, let $(u,v) \in E$, let $P \subseteq E$ be a path $u_0,e_1,u_1,e_2,u_2,\ldots ,e_r,u_r$ and let $Q \subseteq E$ be a set of pairwise vertex disjoint paths $P_1,P_2, \dots, P_r \subseteq E$. \begin{enumerate} \item $\mathrm{expand}_{(u,v)}(T)
= \{(x,y) \in T \mid x \neq u\}
\cup \{(u,v)\}
\cup \{(v,x) \mid (u,x) \in T\}$ \item $
\mathrm{expand}_{P}(T)
=\mathrm{expand}_{e_{r}}(
\mathrm{expand}_{e_{r-1}}(
\dots \mathrm{expand}_{e_1}(T) \dots ))$ \item $
\mathrm{expand}_{Q}(T)
=\mathrm{expand}_{P_r}(
\mathrm{expand}_{P_{r-1}}(
\dots \mathrm{expand}_{P_1}(T) \dots ))$ \end{enumerate} \end{definition}
Again observe that the result of expanding several pairwise vertex disjoint paths does not depend on the order in which the paths are expanded.
\begin{proposition}\label{prop:expanded_cycle} Let $G=(V,E,w)$ be some $\mathbb{N}^k$-labeled complete directed graph, $Q \subseteq E$ be a set of pairwise vertex disjoint paths, and $G'=(V',E',w')=\mathrm{contract}_Q(G)$. For any Hamiltonian cycle $T' \subseteq E'$ of $G'$, the edges $T=\mathrm{expand}_Q(T')$ form a Hamiltonian cycle of $G$ with $w(T) = w'(T') + w(Q)$. \end{proposition}
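The weight bookkeeping of Proposition~\ref{prop:expanded_cycle} can be checked on a toy complete digraph for a single contracted edge; \texttt{contract} and \texttt{expand} below are minimal sketches of the definitions above (names and the example weights are ours):

```python
import itertools

def contract(w, V, u, v):
    """contract_(u,v): drop vertex v; u keeps its ingoing edges and
    inherits the outgoing edges of v via w'(u, x) = w(v, x)."""
    V2 = [a for a in V if a != v]
    w2 = {}
    for a, b in itertools.permutations(V2, 2):
        w2[(a, b)] = w[(v, b)] if a == u else w[(a, b)]
    return V2, w2

def expand(T, u, v):
    """expand_(u,v): replace the unique cycle edge (u, x) by the
    detour (u, v), (v, x)."""
    x = next(b for (a, b) in T if a == u)
    return [e for e in T if e != (u, x)] + [(u, v), (v, x)]

V = ['p', 'q', 'u', 'v']
# arbitrary 2-dimensional weights on the complete digraph
w = {(a, b): (hash((a, b)) % 7 + 1, hash((b, a)) % 5 + 1)
     for a, b in itertools.permutations(V, 2)}

V2, w2 = contract(w, V, 'u', 'v')
T2 = list(zip(V2, V2[1:] + V2[:1]))  # a Hamiltonian cycle of the contracted graph
T = expand(T2, 'u', 'v')             # a Hamiltonian cycle of G through (u, v)

wT = tuple(sum(w[e][d] for e in T) for d in (0, 1))
wT2 = tuple(sum(w2[e][d] for e in T2) for d in (0, 1))
# w(T) = w'(T') + w(e), as stated by the proposition
assert wT == tuple(a + b for a, b in zip(wT2, w[('u', 'v')]))
```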
\paragraph{Approximation Algorithm.} First we prove that the following algorithm computes a $(\nicefrac{1}{2}-\varepsilon)$-approximation for $k$-\pn{MaxATSP}. Then Theorem~\ref{thm_approx_maxtsp} shows that a modification of the algorithm provides a $\nicefrac{1}{2}$-approximation.
\begin{algorithm}[H] \algosettings \caption{\textbf{Algorithm}: \texttt{2\texttt{$k$-MaxATSP-Approx}$_\texttt{R}$($V,E,w,\varepsilon$)}} \Input{$\mathbb{N}^{2k}$-labeled complete directed graph $G=(V,E,w)$
and even $\card{V}$} \Output{set of Hamiltonian cycles of $G$} \BlankLine \ForEach{$F \subseteq E$ with $\card{F} \leq 2k$ that is a
set of vertex disjoint paths} {
$G':= \textrm{contract}_{F}(G)$\label{alg:contract}\;
${\cal M} := \textrm{2\texttt{$k$-MM-Approx}$_\texttt{R}$}(G',\varepsilon)$\label{alg:match}\;
\ForEach{$M \in {\cal M}$}{
extend $M$ in an arbitrary way to a Hamiltonian cycle $T'$ in $G'$\;
output $\textrm{expand}_{F}(T')$\;
} } \end{algorithm}
\begin{lemma}\label{lem:tsp_asymp_fptas} Let $G=(V,E,w)$ be an $\mathbb{N}^{2k}$-labeled complete directed graph with an even number of vertices, $\varepsilon>0$, and $T \subseteq E$ some Hamiltonian cycle in $G$. With probability at least $\nicefrac{1}{2}$, \texttt{2}\texttt{$k$-MaxATSP-Approx}$_\texttt{R}$$(V,E,w,\varepsilon)$ outputs a $(\nicefrac{1}{2}-\varepsilon)$-approximation of $T$
within time polynomial in $|(V,E,w)|+\nicefrac{1}{\varepsilon}$. \end{lemma}
\begin{proof} Let $G=(V,E,w)$ be an $\mathbb{N}^{2k}$-labeled complete directed graph with even $m=\card{V}$, and let $T$ be some arbitrary Hamiltonian cycle in $G$.
\begin{claim}\label{claim:lemmaAppliedToT} There is a set $F$ of vertex disjoint paths in $T$ with $\card{F} \leq 2k$ such that there is a matching $M'$ in the graph $(V',E',w') =\textrm{contract}_{F}(G)$ with $w'(M') \geq \frac{1}{2} w(T) - w(F) $. \end{claim}
\begin{proof} We apply Lemma \ref{lem:combinatorial} to the sequence of edge weights of $T$. Since $T$ has an even number of edges, we can write it sequentially as \begin{align*} T = u_1,e_1,v_1,f_1,u_2,e_2,v_2,f_2,\dots,u_p,e_p,v_p,f_p,u_1 \end{align*} where $u_i,v_i \in V$ and $e_i,f_i \in T$.
Since $w(e_i),w(f_i) \in \mathbb{N}^{2k}$, Lemma \ref{lem:combinatorial} shows that there exist $k' \in \{0, \ldots, k\}$ and natural numbers $1 \le a_1 \le b_1 < a_2 \le b_2 < \cdots < a_{k'} \le b_{k'} \le p$ such that for $I = \bigcup_{i \in \{1,\ldots,k'\}} \{a_i, a_i+1, \ldots, b_i\}$, \begin{align} w(f_{b_1}) + w(f_{b_2}) + \cdots + w(f_{b_{k'}}) + \sum_{i \in I} w(e_i) + \sum_{i \notin I} w(f_i) \;\;\ge\;\; \frac{1}{2} \sum_{i=1}^{p}(w(e_i) + w(f_i)) \label{eqn:lemma_application_TSP}. \end{align}
Let $S = \{f_{b_1}, f_{b_2}, \dots, f_{b_{k'}}\} \cup \{e_i \mid i \in I\} \cup \{f_i \mid i \notin I\}$. Observe that it is possible that adjacent edges are contained in $S$. Figure \ref{fig:matching_in_cycle} gives an example.
\begin{figure}
\caption{\footnotesize Some part of the cycle $T$, where $S \subseteq T$ contains the depicted edges and is partially defined by $b_i=j$, $a_{i+1}=j+2$, $b_{i+1}=j+3$, and $a_{i+2}=b_{i+2}=j+4$.}
\label{fig:matching_in_cycle}
\end{figure}
Observe that for $1 \le j \le p$ and $f_{0}=f_p$ the following holds. \begin{eqnarray}
f_{j-1},e_{j} \in S &\;\;\iff\;\;& \exists i \in \{1,\ldots,k'\}, j=a_i \label{eqn_cases_a}\\
e_j,f_j \in S &\;\;\iff\;\;& \exists i \in \{1,\ldots,k'\}, j=b_i \label{eqn_cases_b} \end{eqnarray}
Let $F=\{e_{a_1},f_{b_1},e_{a_2},f_{b_2},\dots,e_{a_{k'}},f_{b_{k'}}\}$ and note that $\card{F}=2k'$. We argue that contracting $F$ will transform any path in $S$ into a single edge such that the resulting edge set is a matching:
Suppose $S$ contains some path $P=\{e_r,f_r,e_{r+1},f_{r+1},\dots,e_s,f_s\}$, where we assume $P$ to be maximal (i.e., $f_{r-1},e_{s+1} \notin S$). From (\ref{eqn_cases_a}), (\ref{eqn_cases_b}), and $a_1 \le b_1 < a_2 \le b_2 < \cdots < a_{k'} \le b_{k'}$ we can draw the following conclusions: \begin{itemize}
\item $e_r,f_r \in P$ yields $r=b_i$ for some $1 \leq i \leq k'$
\item $f_r,e_{r+1},f_{r+1} \in P$ yields $r+1=a_{i+1}=b_{i+1}$
\item $f_{r+1},e_{r+2},f_{r+2} \in P$ yields $r+2=a_{i+2}=b_{i+2}$\\[1.5mm]
\hspace*{35mm} $\vdots$
\item $f_{s-1},e_s,f_s \in P$ yields $s=a_{i+s-r}=b_{i+s-r}$ \end{itemize}
Note that $e_r \notin F$, since otherwise $e_{b_i} \in F$, hence $a_i=r$, and by (\ref{eqn_cases_a}), $f_{r-1} \in S$, which contradicts the maximality of $P$. Therefore, contracting $\{f_{b_i},e_{a_{i+1}},f_{b_{i+1}},\dots,f_{b_{i+s-r}}\} \subseteq F$ transforms $P$ into the single edge $e_r$. A similar argument shows the same result for paths that start with some edge $f_r$ or end with some edge $e_s$. Hence, contracting $F$ transforms every path in $S$ into a single edge, and $M' =\textrm{contract}_{F}(S)$ is a matching in the graph $(V',E',w')=\textrm{contract}_{F}(G)$. We further obtain \begin{align*} w'(M') &= w(S) - w(F)\\ &= w(f_{b_1}) + w(f_{b_2}) + \cdots + w(f_{b_{k'}}) + \sum_{i \in I} w(e_i) + \sum_{i \notin I} w(f_i) - w(F)\\ &\stackrel{(\ref{eqn:lemma_application_TSP})}{\geq} \frac{1}{2} \sum_{i=1}^{p}(w(e_i) + w(f_i)) - w(F)\\ &= \frac{1}{2} w(T) - w(F) \end{align*} which proves the claim. \end{proof}
We fix the iteration of \texttt{2}\texttt{$k$-MaxATSP-Approx}$_\texttt{R}$\ where the algorithm chooses $F$ as in the claim. By Claim \ref{claim:lemmaAppliedToT} we know that there is a matching $M'$ of $G'=(V',E',w')$ with \begin{align} w'(M') \geq \frac{1}{2} w(T) - w(F)\label{eqn:matching_in_g'}. \end{align} Hence, with probability at least $\nicefrac{1}{2}$ the set ${\cal M}$ contains some matching $M$ of $G'$ such that \begin{align*} w'(M) &\geq (1-\varepsilon)w'(M')\\ &\stackrel{(\ref{eqn:matching_in_g'})}{\geq} (1-\varepsilon)\left( \frac{1}{2} w(T) - w(F)\right). \end{align*} We extend $M$ to some Hamiltonian cycle $T'$ of $G'$ in an arbitrary way without losing weights. By Proposition~\ref{prop:expanded_cycle} we can expand $T'$ with $F$ and obtain a Hamiltonian cycle $\tilde{T}$ in $G$ with \begin{align*} w(\tilde{T}) &= w'(T') + w(F)\\ &\geq w'(M) + w(F)\\ &\geq (1-\varepsilon)\left(\frac{1}{2} w(T) - w(F)\right) + w(F)\\ &\geq (1-\varepsilon) \frac{1}{2} w(T) + \varepsilon w(F)\\ &\geq (1-\varepsilon) \frac{1}{2} w(T). \end{align*} Moreover, the running time of every operation of the algorithm, including the execution of the randomized maximum matching algorithm, and the number of iterations of the loops are polynomial in the length of the input and $\nicefrac{1}{\varepsilon}$, which completes the proof of the lemma. \end{proof}
\begin{theorem} \label{thm_approx_maxtsp} $k$-\pn{MaxATSP}\ is randomized $\nicefrac{1}{2}$-approximable. \end{theorem}
\begin{proof} Let $G=(V,E,w)$ be an $\mathbb{N}^{2k}$-labeled complete directed graph with even $m=\card{V}$, and let $T$ be some arbitrary Hamiltonian cycle in $G$. The proof can easily be extended to graphs with an odd number of vertices or objectives.
For each $1 \leq i \leq 2k$ we choose an $f_i \in T$ with $w_i(f_i) \geq \frac{1}{m} w_i(T)$ (i.e., $f_i$ is a heaviest edge of $T$ with respect to component $i$; such an edge exists since $T$ has $m$ edges). We let $F \subseteq T$ be a smallest set with even cardinality containing all the $f_i$. We get $\card{F}\leq 2k$ and \begin{align} w(F) \geq \frac{1}{m} w(T). \label{eqn:0} \end{align} $F$ is a set of vertex-disjoint paths in $T$ and hence can be used to contract edges in $G$ and $T$. Let $G' = (V',E',w') = \textrm{contract}_{F}(G)$ and $T' = \textrm{contract}_{F}(T)$. Clearly, $T'$ is a Hamiltonian cycle in $G'$. Moreover, $G'$ has an even number of vertices. By Lemma~\ref{lem:tsp_asymp_fptas}, we can find in polynomial time a Hamiltonian cycle $\tilde{T}$ in $G'$ such that $w'(\tilde{T}) \geq (1-\varepsilon)\frac{1}{2}w'(T')$, where $\varepsilon=\nicefrac{1}{m}$. Moreover, we can expand $\tilde{T}$ with $F$ to obtain a Hamiltonian cycle $\hat{T}$ in $G$ such that \begin{align*} w(\hat{T}) &= w'(\tilde{T}) + w(F)\\ &\geq (1-\varepsilon) \frac{1}{2} w'(T') + w(F)\\ &= \left(\frac{1}{2} w'(T') + \frac{1}{2} w(F)\right) +
\frac{1}{2} w(F) - \frac{\varepsilon}{2} w'(T') \\ &= \frac{1}{2} w(T) + \frac{1}{2} (w(F) - \varepsilon w'(T'))\\ &\stackrel{(*)}{\geq} \frac{1}{2} w(T), \end{align*} where ($*$) follows from \begin{align*} w(F) - \varepsilon w'(T') &= w(F) - \frac{1}{m} w'(T')\\ &\geq w(F) - \frac{1}{m} w(T)\\ &\stackrel{\eqref{eqn:0}}{\geq} \frac{1}{m} w(T) - \frac{1}{m} w(T)\\ &= 0. \end{align*}
Note that, although we do not know the set $F$ of heaviest edges in the Hamiltonian cycle, we can simply try all possible sets of heaviest edges, since the number of objectives $2k$ is constant. \end{proof}
\subsection{\texorpdfstring{$\boldsymbol{k}$}{k}-Objective Maximum Weighted Satisfiability} \label{sec_sat}
\paragraph{Definitions.} We consider formulas over a finite set of propositional variables $V$. A \emph{literal} is a propositional variable $v\in V$ or its negation $\overline{v}$, a \emph{clause} is a finite, nonempty set of literals, and a \emph{formula in conjunctive normal form} (\textbf{CNF}, for short) is a finite set of clauses. A \emph{truth assignment} is a mapping $I \colon V \to \{0,1\}$. For some $v \in V$, we say that $I$ \emph{satisfies the literal $v$} if $I(v)=1$, and $I$ \emph{satisfies the literal $\overline{v}$} if $I(v)=0$. We further say that $I$ \emph{satisfies the clause $C$} and write $I(C)=1$ if there is some literal $l \in C$ that is satisfied by $I$, and $I$ \emph{satisfies a formula in CNF} if $I$ satisfies all of its clauses. For a set of clauses $\hat{H}$ and a variable $v$ let $\hat{H}[v] = \{C \in \hat{H} \mid v \in C\}$ be the set of clauses that are satisfied if this variable is set to one, and analogously $\hat{H}[\oli{v}] = \{C \in \hat{H} \mid \oli{v} \in C\}$ be the set of clauses that are satisfied if this variable is set to zero.
Given a formula in CNF and a $k$-objective weight function that maps each clause to a $k$-objective weight, our goal is to find truth assignments that maximize the sum of the weights of all satisfied clauses.
\begin{quote}
\begin{tabbing}
{\bf $\boldsymbol{k}$-Objective Maximum Weighted Satisfiability
(\kMaxSATbf)}\\
Instance: \= Formula $H$ in CNF over a set of variables $V$,
weight function $w \colon H \to \mathbb{N}^k$\\
Solution: \> Truth assignment $I \colon V \to \{0,1\}$\\
Weight: \> Sum of the weights of all clauses satisfied by $I$, i.\,e.,
$w(I) = \sum\limits_{\substack{C \in H\\I(C) = 1}} w(C)$
\end{tabbing} \end{quote}
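The weight of a solution can be computed directly from this definition. The following sketch is our own illustration (the clause encoding and all identifiers are not from the paper): a clause is a frozenset of (variable, polarity) pairs, and the $k$-objective weight of an assignment is the componentwise sum over satisfied clauses.

```python
def satisfies(assignment, clause):
    # I satisfies C iff I satisfies some literal of C; the literal (v, True)
    # stands for the variable v, and (v, False) for its negation.
    return any(assignment[v] == (1 if pol else 0) for (v, pol) in clause)

def weight(assignment, formula, w, k):
    # w(I) = componentwise sum of w(C) over all clauses C satisfied by I.
    total = [0] * k
    for clause in formula:
        if satisfies(assignment, clause):
            total = [t + wi for t, wi in zip(total, w[clause])]
    return total
```

For instance, with $k=2$, a clause $\{x\}$ of weight $(2,1)$ and a clause $\{\overline{y}\}$ of weight $(3,5)$, the assignment $x\mapsto 1$, $y\mapsto 0$ satisfies both clauses and has weight $(5,6)$.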
\paragraph{High-Level Explanation of the Algorithm.} We apply the balancing results to \kMaxSAT. For a given formula $H$ in CNF over the variables $V$, the strategy is as follows: Start with a list of the variables $V$ and guess a partition of this list into $2k$ consecutive intervals. Assign $1$ to the variables in every second interval and $0$ to the remaining variables. The balancing results assure the existence of a partition that yields an assignment whose weights are approximately one half of the \emph{total} weights of $H$, up to an error induced by the variables at the boundaries of the partition. The error can be removed by first guessing a satisfying assignment for several influential variables $V^0$ of the formula. This results in a $\nicefrac{1}{2}$-approximation for \kMaxSAT.
\paragraph{Approximation Algorithm.} We show that the following algorithm computes a $\nicefrac{1}{2}$-approx\-ima\-tion for \kMaxSAT[2k].
\begin{algorithm}[H]
\algosettings
\caption{\textbf{Algorithm}: \texttt{\mbox{\tt $2k$-MaxSAT-Approx}($H,w$)}}
\Input{Formula $H$ in CNF over the variables $V=\{v_1,\dots,v_m\}$,
$2k$-objective weight function $w \colon H \to \mathbb{N}^{2k}$}
\Output{Set of truth assignments $I \colon V \to \{0,1\}$}
\BlankLine
\ForEach{$V^0 \subseteq V$ with $\card{V^0} \le (2k)^2$}{
let $I(v) := 0$ for all $v \in V^0$\;
$G := \{C \in H \mid \neg \exists v \in V^0\,\,(\oli{v} \in C)\}$\;
$V^1 := \{v \in V \setminus V^0 \mid
2k \cdot w(G[\oli{v}])\not\leq w(H \setminus G)\}$\;
set $I(v) := 1$ for all $v \in V^1$\;
$V' := V \setminus (V^0 \cup V^1)$\;
\ForEach{$a_1,b_1,a_2,b_2,\dots,a_k,b_k \in \{i \mid v_i \in V'\}$}{
\ForEach{$v_i \in V'$}{
\lIf{$\exists j (a_j \le i \le b_j)$}{$I(v_i):=1$}
\lElse{$I(v_i):=0$}
}
output $I$
}
} \end{algorithm}
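The two nested foreach loops at the end of the algorithm enumerate all placements of $k$ index intervals over the free variables and assign $1$ exactly inside them. A minimal sketch of this enumeration, with identifiers of our own choosing:

```python
from itertools import product

def interval_assignment(indices, bounds):
    # I(v_i) := 1 iff a_j <= i <= b_j for some chosen interval [a_j, b_j].
    return {i: int(any(a <= i <= b for (a, b) in bounds)) for i in indices}

def all_interval_assignments(indices, k):
    # Enumerates the assignments tried by the inner loops: all choices of
    # boundary indices a_1, b_1, ..., a_k, b_k among the given indices.
    for flat in product(indices, repeat=2 * k):
        yield interval_assignment(indices, list(zip(flat[0::2], flat[1::2])))
```

Since $k$ is a constant, these $|V'|^{2k}$ iterations keep the overall running time polynomial.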
\begin{theorem}
\kMaxSAT\ is $\nicefrac{1}{2}$-approximable. \end{theorem} \begin{proof} In the following, we assume without loss of generality that the number of objectives is even and denote it by $2k$. We show that the approximation is realized by the algorithm \mbox{\tt $2k$-MaxSAT-Approx}. First note that this algorithm runs in polynomial time since $k$ is constant. For the correctness, let $(H,w)$ be the input where $H$ is a formula over the variables $V=\{v_1,\dots,v_m\}$ and $w \colon H \to \mathbb{N}^{2k}$ is the $2k$-objective weight function. Let $I_o\colon V \rightarrow \{0,1\}$ be an optimal truth assignment. We show that there is an iteration of the loops of \mbox{\tt $2k$-MaxSAT-Approx}($H,w$) that outputs a truth assignment $I$ such that $w(I) \ge w(I_o)/2$. First we show that there is an iteration of the first loop that uses a suitable set $V^0$.
\begin{claim}\label{claimv0} There is some set $V^0 \subseteq \{v \in V\mid I_o(v) = 0\}$ with $\card{V^0} \le (2k)^2$ such that for $G = \{C \in H \mid \neg \exists v \in V^0\,\,(\oli{v} \in C)\}$ and any $v \in V \setminus V^0$ it holds that \begin{align*} 2k \cdot w(G[\oli{v}]) \not\leq w(H \setminus G) \qquad \Longrightarrow \qquad I_o(v)=1. \end{align*} \end{claim} \begin{proof} As a special case, if $\card{\{v \in V \mid I_o(v) = 0\}} < (2k)^2$, the assertion obviously holds for $V^0 = \{v \in V \mid I_o(v) = 0\}$, since $I_o(v)=1$ for all $v \in V\setminus V^0$.
Otherwise, let $V^0 = \{u_{2kt+r} \mid r=1,2,\dots,2k$ and $ t=0,1,\dots,2k-1\}$, where the $u_{2kt+r}\in V$ are defined inductively in the following way: \begin{itemize}[noitemsep] \item[(IB)] $H_0 := H$ \item[(IS)] $2kt+r-1 \to 2kt+r$: \begin{itemize}[noitemsep] \item choose $v \in V \setminus \{u_1,\dots,u_{2kt+r-1}\}$ such that $I_o(v) = 0$ and $w_r(H_{2kt+r-1}[\oli{v}])$ is maximal \item $u_{2kt+r} := v$ \item $H_{2kt+r} := H_{2kt+r-1} \setminus H_{2kt+r-1}[\oli{v}]$ \item $\alpha_{2kt+r} := w(H_{2kt+r-1}[\oli{v}])$. \end{itemize} \end{itemize}
We now show that the stated implication holds, so let $v \in V \setminus V^0$ and $j \in \{1,2,\dots,2k\}$ such that $2k \cdot w_j(G[\oli{v}]) > w_j(H \setminus G)$. Because the union $\bigcup_{i=1}^{4k^2} H_{i-1}[\oli{u_i}] = H \setminus G$ is disjoint, we get \begin{align*} w(H \setminus G) = \sum_{r=1}^{2k}\sum_{t=0}^{2k-1} \alpha_{2kt+r} \ge \sum_{t=0}^{2k-1} \alpha_{2kt+j} \intertext{and thus} w_j(G[\oli{v}]) > \sum_{t=0}^{2k-1} \frac{(\alpha_{2kt+j})_j}{2k}. \end{align*} Hence, by a pigeonhole argument, there must be some $t \in \{0,1,\dots,2k-1\}$ such that $w_j(G[\oli{v}]) > (\alpha_{2kt+j})_j$. But since $G \subseteq H_{2kt+j-1}$, we even have $w_j(H_{2kt+j-1}[\oli{v}]) \ge w_j(G[\oli{v}]) > (\alpha_{2kt+j})_j$, so the only reason we did not choose $v$ in the iteration $2kt+j$ (or even earlier) is that $I_o(v) = 1$. \end{proof}
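The inductive construction of $V^0$ can be sketched as a greedy procedure. This is our own illustration (all names are ours); it presupposes knowledge of $I_o$, which the algorithm itself replaces by brute-force enumeration of all candidate sets $V^0$.

```python
def greedy_v0(clauses, w, zero_vars, k):
    # clauses: list of frozensets of (variable, polarity) literals;
    # w[c]: tuple of 2k weights of the clause with index c;
    # zero_vars: sorted list of the variables v with I_o(v) = 0.
    remaining = set(range(len(clauses)))        # plays the role of H_{2kt+r-1}
    v0 = []
    # covers the special case |zero_vars| < (2k)^2 by stopping early
    for step in range(min((2 * k) ** 2, len(zero_vars))):
        r = step % (2 * k)                      # weight component of this round
        def neg_weight(v):
            # r-th weight of the remaining clauses containing the negation of v
            return sum(w[c][r] for c in remaining if (v, False) in clauses[c])
        v = max((v for v in zero_vars if v not in v0), key=neg_weight)
        v0.append(v)
        remaining -= {c for c in remaining if (v, False) in clauses[c]}
    return v0
```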
We choose the iteration of the algorithm where $V^0$ equals the set whose existence is guaranteed by Claim~\ref{claimv0}. Furthermore let $G$ and $V^1$ be defined as in the algorithm and observe that by the claim it holds that $I_o(v)=1$ for all $v \in V^1$. Since $I_o(v) = 0$ for all $v \in V^0$, the truth assignment $I$ defined in the algorithm coincides with $I_o$ on $V^0 \cup V^1$.
Let further $V' = V \setminus (V^0\cup V^1)$ and $G' = \{ C \in G \mid \neg \exists v \in V^0\,\,(\oli{v} \in C) \land \neg \exists v \in V^1\,\,(v \in C) \land \exists v \in V'\,\, (v \in C \lor \oli{v} \in C)\}$ be the set of clauses that are not yet satisfied by $I$ but that could be satisfied by further extending $I$.
Now we apply the balancing result. Let $L' = V' \cup \{\oli{v} \mid v \in V'\}$. For $v_i \in V'$ let \begin{align*} x_i&=\sum_{C \in G'[v_i]} \frac{w(C)}{\card{(C \cap L')}} &\mbox{and}& &y_i&=\sum_{C \in G'[\oli{v_i}]} \frac{w(C)}{\card{(C \cap L')}}, \end{align*} and for $v_i \in V^0\cup V^1$ let $$x_i = y_i = 0.$$ It holds that \begin{align*} \sum_{v_i \in V} x_i + y_i = \sum_{v_i \in V'} x_i + y_i = w(G'). \end{align*} Note that for all $v_i \in V'$, we have the bound $y_i \le w(G'[\oli{v_i}]) \le w(G[\oli{v_i}]) \le \frac{1}{2k} w(H \setminus G)$ because of the definition of $V'$ and $V^1$. Hence, for all $v_i \in V$, $$y_i \le \frac{1}{2k} w(H \setminus G).$$ If we scale all values $x_i$ and $y_i$ to natural numbers, then by Corollary~\ref{coro:combinatorial}, there exist $k' \le k$ disjoint, nonempty intervals $J_1, \ldots, J_{k'} \subseteq \{1, \ldots, m\}$ such that for $J = J_1 \cup \dots \cup J_{k'}$ it holds that \begin{align*} \sum_{i \in J} x_i + \sum_{i \notin J} y_i \ge \frac{1}{2} w(G') - k' \frac{1}{2k} w(H \setminus G) \ge \frac{1}{2}(w(G') - w(H \setminus G)). \end{align*} The algorithm tries all combinations of $k$ (possibly empty) intervals $J_1 = [a_1,b_1], \ldots, J_k = [a_k,b_k]$. In particular, it will test the combination of the $k'$ nonempty intervals mentioned in Corollary~\ref{coro:combinatorial}. For $I$ being the truth assignment generated in this iteration it holds that \begin{align}\label{eqn227736} w(\{C \in G' \mid I(C) = 1\}) \ge \sum_{i \in J} x_i + \sum_{i \notin J} y_i \ge \frac{1}{2}(w(G') -w(H \setminus G)). 
\end{align} Furthermore, since $I$ and $I_o$ coincide on $V \setminus V'$, we have \begin{align} w(\{C \in H \setminus G' \mid I(C) = 1\}) &= w(\{C \in H \setminus G' \mid I_o(C) = 1\})\label{eqn2343}\\ &\ge w(\{C \in H \setminus G \mid I_o(C) = 1\})\notag\\ & = w(H \setminus G).\label{eqn33438} \end{align} Thus we finally obtain \begin{align*} w(I) &\;=\; w(\{C \in H \setminus G' \mid I(C) = 1\}) + w(\{C \in G'\mid I(C) = 1\})\\ &\stackrel{\eqref{eqn227736}}{\ge} w(\{C \in H \setminus G' \mid I(C) = 1\}) + \tfrac{1}{2}(w(G') - w(H \setminus G))\\ &\stackrel{\eqref{eqn2343}}{=} w(\{C \in H \setminus G' \mid I_o(C) = 1\}) + \tfrac{1}{2}(w(G') - w(H \setminus G))\\ &\stackrel{\eqref{eqn33438}}{\ge} \tfrac{1}{2} w(\{C \in H \setminus G' \mid I_o(C) = 1\}) + \tfrac{1}{2}w(G')\\ &\; \ge\; \tfrac{1}{2}w(I_o).\qedhere \end{align*} \end{proof}
\end{document}
\begin{document}
\title{Variants of a theorem of Helson on general Dirichlet series}
\author[Defant]{Andreas Defant} \address[]{Andreas Defant\newline Institut f\"{u}r Mathematik,\newline Carl von Ossietzky Universit\"at,\newline 26111 Oldenburg, Germany. } \email{defant@mathematik.uni-oldenburg.de}
\author[Schoolmann]{Ingo Schoolmann} \address[]{Ingo Schoolmann\newline Institut f\"{u}r Mathematik,\newline Carl von Ossietzky Universit\"at,\newline 26111 Oldenburg, Germany. } \email{ingo.schoolmann@uni-oldenburg.de}
\maketitle
\begin{abstract} \noindent A result of Helson on general Dirichlet series $\sum a_{n} e^{-\lambda_{n}s}$ states that, whenever $(a_{n})$ is $2$-summable and $\lambda=(\lambda_{n})$ satisfies a certain condition introduced by Bohr, then for almost all homomorphisms $\omega \colon (\mathbb{ R},+) \to \mathbb{T}$ the Dirichlet series $\sum a_{n} \omega(\lambda_{n})e^{-\lambda_{n}s}$ converges on the open right half plane $[Re>0]$. For ordinary Dirichlet series $\sum a_n n^{-s}$ Hedenmalm and Saksman related this result to the famous Carleson-Hunt theorem on pointwise convergence of Fourier series, and Bayart extended it within his theory of Hardy spaces $\mathcal{H}_p$ of such series. The aim here is to prove variants of Helson's theorem within our recent theory of Hardy spaces $\mathcal{H}_{p}(\lambda),\,1\le p \le \infty,$ of general Dirichlet series. To be more precise, in the reflexive case $1 < p < \infty$ we extend Helson's result to Dirichlet series in $\mathcal{H}_{p}(\lambda)$ without any further condition on the frequency $\lambda$, and in the non-reflexive case $p=1$ to
the wider class of frequencies satisfying the so-called Landau condition (more general than Bohr's condition). In both cases we add relevant maximal inequalities.
Finally, we give several applications to the structure theory of
Hardy spaces of general Dirichlet series.
\end{abstract}
\noindent \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \footnotetext{2010 \emph{Mathematics Subject Classification}: Primary 43A17, Secondary 30B50, 43A50} \footnotetext{\emph{Key words and phrases}: general Dirichlet series, Hardy spaces, almost everywhere convergence, maximal inequalities, completeness.}
\section{\bf Introduction}
A general Dirichlet series is a (formal) series of the form $\sum a_n e^{-\lambda_n s}$, where $s $ is a complex variable, $(a_n)$ a sequence of complex coefficients (called Dirichlet coefficients), and $\lambda=(\lambda_n)$ a frequency
(a strictly increasing non-negative real sequence which tends to $+\infty$). Fixing a frequency $\lambda$, we call $D=\sum a_{n}e^{-\lambda_{n}s}$ a $\lambda$-Dirichlet series, and $\mathcal{D}(\lambda)$ denotes the space of all these series. All basic information on general Dirichlet series can be found in \cite{HardyRiesz} or \cite{Helson}. In particular, convergence of $D=\sum a_{n}e^{-\lambda_{n}s}$ at some $s_0 \in \mathbb{C}$ implies convergence at every $s\in \mathbb{C}$ with $Re s > Re s_0$, and the limit function $f(s) = \sum_{n=1}^{\infty} a_{n}e^{-\lambda_{n}s}$ of $D$ is holomorphic on the half plane $[Re > \sigma_c(D)]$, where
\[
\sigma_{c}(D)=\inf\left \{ \sigma \in \mathbb{ R} \mid D \text{ converges on } [Re>\sigma] \right\}
\] determines the so-called abscissa of convergence.
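Two classical ordinary examples (i.e. $\lambda=(\log n)$) may illustrate this abscissa; both values are standard facts:

```latex
% For the Riemann zeta series the abscissa of convergence equals 1,
% while alternating signs shift it to 0:
\sigma_{c}\Big(\sum_{n=1}^{\infty} n^{-s}\Big)=1,
\qquad
\sigma_{c}\Big(\sum_{n=1}^{\infty} (-1)^{n-1}\, n^{-s}\Big)=0.
```

Indeed, $\sum n^{-\sigma}$ converges precisely for $\sigma>1$, whereas the alternating series converges for every $\sigma>0$ by the Leibniz criterion (although it converges absolutely only for $\sigma>1$).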
\subsection{Helson's theorem} Let us start with some details on the state of the art of Helson's result mentioned in the abstract. We first consider the frequency $\lambda=(\log n)$, which is of special interest, since it generates so-called ordinary Dirichlet series $\sum a_{n}n^{-s}$. As usual (see e.g. \cite{Defant}, \cite{HLS}, or \cite{QQ}), we denote by $\mathcal{H}_2$ the Hilbert space of all Dirichlet series $\sum a_{n}n^{-s}$ with 2-summable coefficients, that is $(a_{n}) \in \ell_{2}$.
Recall that the infinite dimensional polytorus $\mathbb{T}^{\infty}:=\prod_{n=1}^{\infty} \mathbb{T}$ forms a compact abelian group (with its natural group structure), with the normalized Lebesgue measure $dz$ as its Haar measure. Denote by $\Xi$ the set of all completely multiplicative characters $\chi\colon \mathbb{ N} \to \mathbb{T}$ (that is $\chi(nm)=\chi(n)\chi(m)$ for all $m$,$n$), which with the pointwise multiplication forms an abelian group. Denote by $\mathfrak{p}=(p_{n})$ the sequence of prime numbers. Looking at the group isomorphism \begin{align*} \iota\colon \Xi \to \mathbb{T}^{\infty}, ~~\chi \mapsto (\chi(p_{n})), \end{align*}
we see that $\Xi$ also forms a compact abelian group, and its Haar measure $d \chi$ is the push forward measure of $dz$ through $\iota^{-1}$.
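A completely multiplicative character is determined by its values on the primes, which is exactly the content of the isomorphism $\iota$. The following minimal sketch (our own illustration; the table `prime_values` of unimodular prime values is a hypothetical input) extends such values to all of $\mathbb{N}$ via prime factorization.

```python
def char_from_primes(prime_values):
    # Extend unimodular values on the primes to a completely
    # multiplicative character chi: N -> T via trial-division factorization.
    def chi(n):
        value, p = 1, 2
        while n > 1:
            if n % p == 0:
                value *= prime_values[p]
                n //= p
            else:
                p += 1
        return value
    return chi
```

By construction $\chi(nm)=\chi(n)\chi(m)$ for all $m,n$.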
The following result of
Helson from \cite{Helson3} (see also \cite[Theorem 4.4]{HLS}) is our starting point.
\begin{Theo} \label{HelsonintroHelson}
Given $D= \sum a_{n}n^{-s}\in \mathcal{H}_2$, for almost all $\chi \in \Xi$ the Dirichlet series
$D^{\chi} =\sum a_{n} \chi(n) n^{-s}$ converges on the open right half plane $[\text{Re}>0]$.
\end{Theo}
Helson actually proves an extended version of Theorem \ref{HelsonintroHelson} for general Dirichlet series. To this end, given a frequency $\lambda$, let us define the space $\mathcal{H}_{2}(\lambda)$ of all (formal) $D=\sum a_{n}e^{-\lambda_{n}s}$ with $2$-summable Dirichlet coefficients. The substitute for $\Xi$ from Theorem \ref{HelsonintroHelson} is given by the so-called Bohr compactification $\overline{\mathbb{ R}}$ of $(\mathbb{ R},+)$. Recall that $\overline{\mathbb{ R}}$ is a compact abelian group, which may be defined to be the set of all homomorphisms $\omega \colon (\mathbb{ R},+) \to \mathbb{T}$ together with the topology of pointwise convergence (i.e. $\overline{\mathbb{ R}}$ is the dual group of $(\mathbb{R},+)$ endowed with the discrete topology $d$). Additionally, Helson assumes Bohr's condition $(BC)$ on $\lambda$, that is \begin{equation} \label{BCHelson} \exists ~l = l (\lambda) >0 ~ \forall ~\delta >0 ~\exists ~C>0~\forall~ n \in \mathbb{ N}: ~~\lambda_{n+1}-\lambda_{n}\ge Ce^{-(l+\delta)\lambda_{n}}. \end{equation} This condition was isolated by Bohr in \cite{Bohr}, and, roughly speaking, it prevents the $\lambda_n$'s from getting too close too fast. Note that $\lambda=(\log n)$ satisfies $(BC)$ with $l=1$. Then the extended version of Helson's Theorem~\ref{HelsonintroHelson} reads as follows.
\begin{Theo} \label{HelsonstheoremHelson}
Let $\lambda$ satisfy $(BC)$ and let $D=\sum a_{n} e^{-\lambda_{n}s} \in \mathcal{H}_{2}(\lambda)$. Then the Dirichlet series $D^{\omega}=\sum a_{n} \omega(\lambda_{n}) e^{-\lambda_{n}s}$ converges on $[Re >0]$ for almost all $\omega \in \overline{\mathbb{ R}}$.
\end{Theo}
One of our aims is to extend Helson's result to the Hardy space $\mathcal{H}_{1}(\lambda)$ (a class of Dirichlet series much larger than $\mathcal{H}_{2}(\lambda)$, see the definition below) under a milder assumption on the frequency $\lambda$. We say that $\lambda$ satisfies Landau's condition $(LC)$ (introduced in \cite{Landau}) provided \begin{equation} \label{LCHelson} \forall~ \delta>0~ \exists ~C>0 ~\forall~ n \in \mathbb{ N} \colon~ \lambda_{n+1}-\lambda_{n}\ge C e^{-e^{\delta \lambda_{n}}}. \end{equation} Observe that $(BC)$ implies $(LC)$, and that e.g. $\lambda=(\sqrt{\log n})$ satisfies $(LC)$ but not $(BC)$. For an example which fails $(LC)$, take e.g. $\lambda=(\log \log n)$.
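The difference between the two conditions can be made concrete numerically. The following is a sanity check on small ranges, not a proof; the parameter choices are ours.

```python
import math

def gap(lam, n):
    # consecutive frequency gap lambda_{n+1} - lambda_n
    return lam(n + 1) - lam(n)

# (BC) for lambda = (log n): with l = 1 and delta = 0.1 the ratio
# (lambda_{n+1} - lambda_n) / e^{-(1 + delta) lambda_n} = gap * n^{1.1}
# stays bounded away from 0 (it even grows like n^{0.1}).
bc_ratios = [gap(math.log, n) * n ** 1.1 for n in range(1, 1001)]

# (LC) fails for lambda = (log log n): with delta = 1/2 we have
# e^{delta * lambda_n} = (log n)^{1/2}, and the ratio
# gap * e^{(log n)^{1/2}} tends to 0, since the gaps decay like 1/(n log n).
loglog = lambda n: math.log(math.log(n))
lc_ratios = [gap(loglog, n) * math.exp(math.sqrt(math.log(n)))
             for n in (10, 10**3, 10**6)]
```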
\subsection{Dirichlet groups} From \cite{DefantSchoolmann2} we recall the definition and some basic facts of so-called Dirichlet groups.
Let $G$ be a compact abelian group and $\beta\colon (\mathbb{ R},+) \to G$ a homomorphism of groups. Then the pair $(G,\beta)$ is called a Dirichlet group if $\beta$ is continuous and has dense range. In this case the dual map $\widehat{\beta}\colon \widehat{G} \hookrightarrow \mathbb{ R}$ is injective, where we identify $\mathbb{R}=\widehat{(\mathbb{R},+)}$ (note that we do not assume $\beta$ to be injective). Consequently, the characters $e^{-ix\pmb{\cdot}} \colon \mathbb{ R} \to \mathbb{T}$, $x\in \widehat{\beta}(\widehat{G})$, are precisely those characters of $\mathbb{R}$ for which there is a (unique) $h_{x} \in \widehat{G}$ such that $h_{x} \circ \beta=e^{-ix\pmb{\cdot}}$. In particular, we have that \begin{equation*} \widehat{G}=\{h_{x} \mid x \in \widehat{\beta}(\widehat{G}) \}. \end{equation*} From \cite[Section 3.1]{DefantSchoolmann2} we know that every $L_{1}(\mathbb{ R})$-function may be interpreted as a bounded regular Borel measure on $G$. In particular, for every $u>0$ the Poisson kernel $$P_{u}(t):=\frac{1}{\pi}\frac{u}{u^{2}+t^{2}}\,,\,\,\, t \in \mathbb{ R},$$
defines a measure $p_{u}$ on $G$, which we call the Poisson measure on $G$. We have $\|p_{u}\|=\|P_{u}\|_{L_{1}(\mathbb{ R})}=1$ and \begin{equation}\label{Fourier1Helson}
\text{$\widehat{p_{u}}(h_{x})=\widehat{P_{u}}(x)=e^{-u|x|}$ for all $u >0$ and $x\in \widehat{\beta}(\widehat{G})$.} \end{equation} Finally, recall from \cite[Lemma 3.11]{DefantSchoolmann2} that, given a measurable function $f:G \to \mathbb{C}$, then for almost all $\omega \in G$ there are measurable functions $f_{\omega} \colon \mathbb{ R} \to \mathbb{C}$ such that \[ \text{$f_{\omega}(t)=f(\omega \beta(t))$ almost everywhere on $\mathbb{ R}$,} \] and if $f\in L_{1}(G)$, then all these $f_\omega$ are locally integrable. Moreover, as shown in \cite[Corollary 2.11]{DefantSchoolmann3}, for almost all $\omega \in G$ \begin{equation} \label{besicoHelson} \widehat{f}(0)=\lim_{T\to \infty} \frac{1}{2T} \int_{-T}^{T} f_{\omega}(t) dt . \end{equation}
We will see later that this way of `restricting' functions on the group $G$ to $\mathbb{ R}$ in fact establishes a bridge between Fourier analysis on Dirichlet groups $(G,\beta)$ and Fourier analysis on $\mathbb{ R}$.
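The normalization $\|P_u\|_{L_1(\mathbb{R})}=1$ of the Poisson kernel is easy to check numerically (an illustration only; the grid parameters are ours):

```python
import math

def poisson(u, t):
    # Poisson kernel P_u(t) = (1/pi) * u / (u^2 + t^2)
    return u / (math.pi * (u * u + t * t))

def mass(u, T=1e4, steps=200000):
    # midpoint-rule approximation of the integral of P_u over [-T, T];
    # the exact value is (2/pi) * atan(T/u), which tends to 1 as T grows
    h = 2 * T / steps
    return h * sum(poisson(u, -T + (i + 0.5) * h) for i in range(steps))
```

The peak height is $P_u(0)=1/(\pi u)$, so the kernels concentrate at $0$ and form an approximate identity on $\mathbb{R}$ as $u\to 0$.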
\subsection{$\lambda$-Dirichlet groups} Now, given a frequency $\lambda$, we call a Dirichlet group $(G,\beta)$ a $\lambda$-Dirichlet group whenever $\lambda \subset \widehat{\beta}(\widehat{G})$, or equivalently whenever for every $e^{-i\lambda_{n} \pmb{\cdot}} \in \widehat{(\mathbb{R},+)}$ there is (a unique) $h_{\lambda_{n}}\in \widehat{G}$ with $h_{\lambda_{n}}\circ \beta=e^{-i\lambda_{n} \pmb{\cdot}}$.
Note that for every $\lambda$ there exists a $\lambda$-Dirichlet group $(G,\beta)$ (which is not unique). To see a very first example, take the Bohr compactification $\overline{\mathbb{ R}}$ together with the mapping $$\beta_{\overline{\mathbb{ R}}} \colon \mathbb{ R} \to \overline{\mathbb{ R}}, ~~ t \mapsto \left[ x \mapsto e^{-itx} \right].$$ Then $\beta_{\overline{\mathbb{ R}}}$ is continuous and has dense range (see e.g. \cite[Theorem 1.5.4, p. 24]{QQ} or \cite[Example 3.6]{DefantSchoolmann2}), and so the pair $(\overline{\mathbb{ R}},\beta_{\overline{\mathbb{ R}}})$ forms a $\lambda$-Dirichlet group for all $\lambda$'s. We refer to \cite{DefantSchoolmann2} for more 'universal' examples of Dirichlet groups. Looking at the frequency $\lambda=(n)=(0,1,2,\ldots)$, the group $G=\mathbb{T}$ together with \[\beta_\mathbb{T}: \mathbb{ R} \to \mathbb{T}, \,\,\beta_{\mathbb{T}}(t)=e^{-it},\] forms a $\lambda$-Dirichlet group, and the so-called Kronecker flow \begin{equation*} \label{oscarHelson} \beta_{\mathbb{T}^{\infty}}\colon \mathbb{ R} \to \mathbb{T}^{\infty}, ~~ t \mapsto \mathfrak{p}^{-it}=(2^{-it},3^{-it}, 5^{-it}, \ldots), \end{equation*} turns the infinite dimensional torus $\mathbb{T}^{\infty}$ into a $\lambda$-Dirichlet group for $\lambda = (\log n)$. We note that, identifying $\widehat{\mathbb{T}} = \mathbb{Z}$ and $\widehat{\mathbb{T}^\infty} = \mathbb{Z}^{(\mathbb{ N})}$ (all finite sequences of integers), in the first case $h_n(z) = z^n$ for $z \in \mathbb{T}, n \in \mathbb{Z}$, and in the second case $h_{\sum \alpha_j \log p_j}(z) = z^\alpha$ for $z \in \mathbb{T}^\infty, \alpha \in \mathbb{Z}^{(\mathbb{ N})}$.
\subsection{Hardy spaces of general Dirichlet series} Fix some $\lambda$-Dirichlet group $(G,\beta)$ and $1\le p \le \infty$. By
$$H_{p}^{\lambda}(G)$$
we denote the Hardy space of all functions $f\in L_{p}(G)$ (recall that, being a compact abelian group, $G$ carries a unique normalized Haar measure) whose Fourier transform is supported on $\{h_{\lambda_n} \colon n \in \mathbb{N}\} \subset \widehat{G}$. Being a closed subspace of $L_p(G)$, this clearly is a Banach space.
These spaces $H_{p}^{\lambda}(G)$ naturally define $\lambda$-Dirichlet series. Let $$\mathcal{H}_{p}(\lambda)$$ be the class of all $\lambda$-Dirichlet series $D=\sum a_n e^{-\lambda_n s}$ for which there is some
$f \in H_p^\lambda(G)$ such that $a_n = \widehat{f}(h_{\lambda_{n}})$ for all $n$. In this case the function $f$ is unique, and together with the norm $\|D\|_{p}:=\|f\|_{p}$ the linear space $\mathcal{H}_{p}(\lambda)$ obviously forms a Banach space. So (by definition) the so-called Bohr map \begin{equation} \label{BohrmapHelson} \mathcal{B}\colon H_{p}^{\lambda}(G)\to \mathcal{H}_{p}(\lambda),~~ f \mapsto \sum \widehat{f}(h_{\lambda_{n}}) e^{-\lambda_{n}s} \end{equation} defines an onto isometry. A fundamental fact from \cite[Theorem 3.24]{DefantSchoolmann2} is that the definition of $\mathcal{H}_{p}(\lambda)$ is independent of the chosen $\lambda$-Dirichlet group $(G,\beta)$. Now we have given two definitions of the Hilbert space $\mathcal{H}_{2}(\lambda)$, but by Parseval's theorem both of these definitions actually coincide.
Our two basic examples of frequencies, $\lambda = (n)$ and $\lambda = (\log n)$, lead to well-known examples: \begin{equation} \label{hardyTHelson}
H_{p}(\mathbb{T}):=H_{p}^{(n)}(\mathbb{T}) \,\,\, \,\text{and} \,\,\,\, H_p(\mathbb{T}^\infty) := H_p^{(\log n)}(\mathbb{T}^\infty) \,. \end{equation} In particular, $f \in H_p^{(n)}(\mathbb{T})$ if and only if $f \in L_p(\mathbb{T})$ and $\hat{f}(n) = 0$ for any $n \in \mathbb{Z}$ with $n < 0$, and $f \in H_p^{(\log n)}(\mathbb{T}^\infty)$ if and only if $f \in L_p(\mathbb{T}^\infty)$ and $\hat{f}(\alpha) = 0$ for any finite sequence $\alpha = (\alpha_k)$ of integers with $\alpha_k < 0$ for some $k$ (where as usual $\widehat{f}(\alpha) := \widehat{f}(h_{\log \mathfrak{p}^\alpha})$). Consequently, if we turn to Dirichlet series, then the Banach spaces $$\mathcal{H}_p= \mathcal{H}_p((\log n))$$ are precisely Bayart's Hardy spaces of ordinary Dirichlet series from \cite{Bayart} (see also \cite{Defant} and \cite{QQ}).
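The coefficient description of $H_p(\mathbb{T})$ can be illustrated numerically: a polynomial in $z$ belongs to every $H_p(\mathbb{T})$, so all its negative Fourier coefficients vanish. The discretization below is our own illustration (exact, up to rounding, for trigonometric polynomials of degree less than $N$).

```python
import cmath
import math

def fourier_coefficient(f, k, N=512):
    # N-point approximation of the k-th Fourier coefficient of f on the torus,
    # i.e. of (1/2pi) * integral of f(e^{i theta}) e^{-ik theta} d theta
    return sum(f(cmath.exp(2j * math.pi * n / N)) * cmath.exp(-2j * math.pi * n * k / N)
               for n in range(N)) / N

f = lambda z: 1 + 2 * z + z ** 3   # analytic polynomial, hence in every H_p(T)
```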
\subsection{Vertical limits} Given a $\lambda$-Dirichlet series $D = \sum a_n e^{-\lambda_n s}$ and $z \in \mathbb{C}$, we say that $$D_{z}:=\sum a_n e^{-\lambda_n z} e^{-\lambda_n s} $$ is the translation of $D$ about $z$, and we distinguish between horizontal translations $D_u, u \in \mathbb{ R}$, and vertical translations $D_{i\tau}, \tau \in \mathbb{ R}$.
If $(G, \beta)$ is a $\lambda$-Dirichlet group and $D \in \mathcal{H}_{p}(\lambda)$ is associated to
$f\in H_p^\lambda(G)$, then for each $u>0$
the horizontal translation $D_u$ corresponds to the convolution of $f$ with the Poisson measure $p_{u}$, i.e.
$\mathcal{B}(f*p_{u})=D_{u}$ (compare coefficients),
and we refer to $f*p_{u}$ as the translation of $f$ about $u$. In particular, we have that $D_{u}\in \mathcal{H}_{p}(\lambda)$ for every $u>0$.
Moreover, each Dirichlet series of the form $$D^{\omega}:=\sum a_{n} h_{\lambda_{n}}(\omega)e^{-\lambda_n s}\,,\,\, \omega \in G,$$ is said to be a vertical limit of $D$. Examples are vertical translations $D_{i\tau}$ with $\tau \in \mathbb{R}$, and the terminology is explained by the fact that each vertical limit may be approximated by vertical translates. More precisely, given $D = \sum a_n e^{-\lambda_n s}$ which converges absolutely on the right half-plane, for every $\omega \in G$ there is a sequence $(\tau_{k})_{k} \subset \mathbb{ R}$ such that $(D_{i\tau_{k}})$ converges to $D^{\omega}$ uniformly on $[Re>\varepsilon]$ for all $\varepsilon>0$.
Assume conversely that for $(\tau_{k})_{k} \subset \mathbb{ R}$ the vertical translations $D_{i\tau_k}$ converge
uniformly on $[Re>\varepsilon]$ for every $\varepsilon>0$
to a holomorphic function $f$ on $[Re>0]$. Then there is $\omega \in G$ such that
$f(s)= \sum_{n=1}^\infty a_n h_{\lambda_n}(\omega) e^{-\lambda_n s}$
for all $s \in [Re>0]$\,. For all this see \cite[Proposition 4.6]{DefantSchoolmann2}.
\subsection{R\'esum\'e of our results on Helson's theorem} With all these preliminaries we give a brief r\'esum\'e of our extensions of Helson's Theorem~\ref{HelsonstheoremHelson}, where we have to distinguish carefully between the cases $1<p<\infty$ and $p=1$.
\noindent {\bf Synopsis I} \label{SIHelson}\\ Let $(G,\beta)$ be a $\lambda$-Dirichlet group, $1 \leq p < \infty$, and $D\in \mathcal{H}_p(\lambda)$ with associated function $f \in H_p^\lambda(G)$. Then the following statements hold true: \begin{itemize} \item[(i)] If $1 < p < \infty$, then almost all vertical limits $D^\omega$ converge almost everywhere on $[Re = 0]$, and consequently almost all of them converge on $[Re >0]$. \item[(ii)]
If $\lambda$ satisfies $(LC)$ and $p=1$, then almost all vertical limits $D^\omega$ converge on $[Re >0]$. \end{itemize} Moreover, there is a null set $N \subset G$ such that for every $\omega \notin N$ in the first case \[ D^\omega(it) = f_\omega (t) \,\,\, \text{for almost all $t \in \mathbb{ R}$}, \] and in both cases \[ D^\omega(u+it) = (f_\omega \ast P_u) (t) \,\,\, \text{for every $u >0$ and almost all $t \in \mathbb{ R}$}. \]
Let us indicate carefully which of these results are already known and which are new. We first discuss the ordinary case $\lambda = (\log n)$ with $(\log n)$-Dirichlet group $(\mathbb{T}^{\infty}, \beta_{\mathbb{T}^\infty})$. Then for $p=2$ statement (i) was proved by Hedenmalm and Saksman in \cite{HedenmalmSaksman}, whereas Bayart in \cite[Theorem 6]{Bayart} proves, for every $D \in \mathcal{H}_1$, the convergence of almost all vertical limits $D^\omega$ on $[Re >0]$. For Dirichlet series in $\mathcal{H}_2$ Bayart deduces his theorem from the Menchoff-Rademacher theorem on almost everywhere convergence of orthonormal series (see also \cite{DefantSchoolmann5}), and then extends it to Dirichlet series in $\mathcal{H}_1$ by so-called hypercontractivity. In the general case, statement (ii) for $p=2$ and under the more restrictive condition $(BC)$ instead of $(LC)$ is Helson's Theorem~\ref{HelsonstheoremHelson}; for $p=1$ and frequencies satisfying $(LC)$ the result is new.
\subsection{Helson's theorem and its maximal inequalities}
Our strategy is to deduce the preceding results in three steps:
\begin{itemize}
\item
prove relevant maximal inequalities for functions in $H_{1}^{\lambda}(G)$,
\item
obtain as a consequence results on pointwise convergence of the Fourier series of these functions,
\item
and in a final step use the Bohr map (\ref{BohrmapHelson}) to transfer these results to Helson-type theorems for Dirichlet series.
\end{itemize}
In the reflexive case $1 < p < \infty$ we follow closely the ideas of Duy \cite{Duy} and Hedenmalm-Saksman \cite{HedenmalmSaksman}, extending the Carleson-Hunt theorem on pointwise convergence of Fourier series to functions in $H_p^\lambda(G)$, and in the non-reflexive case $p=1$ we use, among other things, boundedness properties of a Hardy-Littlewood-type maximal operator for integrable functions on Dirichlet groups which we introduced in \cite{DefantSchoolmann3}.
In order to give a r\'esum\'e of the results we have on the first of the above steps, recall that, given a measure space $(\Omega, \mu)$, the weak $L_{1}$-space $L_{1, \infty}(\mu)$ is the linear space of all measurable functions $f\colon \Omega\to \mathbb{C}$ for which there is a constant $C>0$ such that for all $\alpha>0$ we have $
\mu \big(\left\{ \omega \in \Omega \mid |f(\omega)|>\alpha \right\} \big)\le C/\alpha. $
Together with $\|f\|_{1,\infty}:= \inf C$ the space $L_{1,\infty}(\mu)$ becomes a quasi Banach space (see e.g. \cite[\S 1.1.1 and \S 1.4]{Grafakos1}), where the triangle inequality holds with constant $2$.\\
\noindent {\bf Synopsis II} \label{SIIHelson}\\ Let $(G,\beta)$ be a $\lambda$-Dirichlet group. Then the following statements hold true: \begin{itemize} \item[(i)] For every $1 < p < \infty$ there is a constant $C = C(p) >0$ such that for every $f \in H_p^\lambda(G)$ \begin{equation*}
\Big\| \sup_{N} \big| \sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}} \big|
\Big\| _{L_p(G)} \le C \,\|f\|_{p}. \end{equation*} \item[(ii)]
If $\lambda$ satisfies $(LC)$, then for every $u >0$ there is a constant $C = C(u) >0$ such that for every $f \in H_1^\lambda(G)$ \begin{equation*}
\Big\| \sup_{N} \big| \sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}}) e^{-u \lambda_n} h_{\lambda_{n}} \big|
\Big\| _{L_{1, \infty}(G)} \le C\,\|f\|_{1}. \end{equation*} \item[(iii)]
If $\lambda$ satisfies $(BC)$, then to every $u>0$ there is a constant $C = C(u) >0$ such that for all $1\le p \le \infty$ and $f \in H_{p}^{\lambda}(G)$
\begin{equation*}
\Big\|\sup_{N} \big| \sum_{n=1}^{N}\widehat{f}(h_{\lambda_{n}})e^{-\lambda_{n}u} h_{\lambda_{n}} \big|
\Big\|_p \le C\|f\|_{p}.
\end{equation*}
\end{itemize}
In particular, for all $f \in H_p^\lambda(G), 1 <p < \infty$
\[
f = \sum_{n=1}^{\infty} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}
\,\,\, \text{almost everywhere on $G$},
\] and under $(LC)$ for all $f \in H_1^\lambda(G)$ and $u >0$ \[ f\ast p_u =\sum_{n=1}^{\infty} \widehat{f}(h_{\lambda_{n}}) e^{-u \lambda_n} h_{\lambda_{n}} \,\,\, \text{almost everywhere on $G$.} \]
A standard argument, e.g. using Egoroff's theorem, shows how to deduce pointwise convergence theorems for Fourier series from such maximal inequalities (see \cite[Lemma 3.6]{DefantSchoolmann3} for a more general situation). The following remark indicates
how pointwise convergence theorems of Fourier series then transfer to Dirichlet series (see \cite[Lemma 1.4]{DefantSchoolmann3}).
\begin{Rema} \label{tranferHelson}
Let $(G,\beta)$ be a Dirichlet group, and $f_n, f$ measurable functions on $G$. Then the following are equivalent: \begin{itemize} \item[(i)] $\lim_{n\to \infty} f_n(\omega) = f(\omega)$ \,\,\, \text{for almost all $\omega \in G$.}
\item[(ii)] $\lim_{n\to \infty} (f_n)_\omega(t)= f_\omega(t)$ \,\,\, \text{for almost all $\omega \in G$ and for almost all $t\in \mathbb{ R}$.} \end{itemize} In particular, if $(G,\beta)$ is a $\lambda$-Dirichlet group and $D=\sum a_n e^{-\lambda_n s}$ is associated to $f \in H_1^\lambda(G)$, then \begin{equation*} \label{FseriesHelson} f=\sum_{n=1}^{\infty} \widehat{f}(h_{\lambda_{n}})h_{\lambda_{n}} \end{equation*} almost everywhere on $G$ if and only if for almost all $\omega \in G$ the Dirichlet series \begin{equation*} \label{DseriesHelson} D^{\omega}=\sum a_{n} h_{\lambda_{n}}(\omega) e^{-\lambda_{n}s} \end{equation*} converges almost everywhere on the imaginary line $[Re=0]$, and its limit coincides with $f_\omega$
almost everywhere on $\mathbb{ R}$. \end{Rema}
\subsection{Organization} We handle the reflexive case from Synopses I and II in Theorem~\ref{DirichletintegerHelson} and Theorem~\ref{CorointegerHelson}, and from a different point of view also in Theorem~\ref{maximalineqBCHelson}. Theorems~\ref{Helson(LC)Helson} and \ref{HelsonstheoHelson} cover the non-reflexive parts. In the final Section~\ref{bohrstheoremsectionHelson} we extend and improve parts of the structure theory of general Dirichlet series started in \cite{DefantSchoolmann2}. Among other results we show in Theorem~\ref{equivalenceHelson} that $\mathcal{D}_{\infty}(\lambda)$, the normed space of all $\lambda$-Dirichlet series which
converge to a bounded and then holomorphic function on the right half plane, is complete if and only if $\mathcal{D}_{\infty}(\lambda)=\mathcal{H}_{\infty}(\lambda)$ holds isometrically if and only if
$\lambda$ satisfies (what we call) 'Bohr's theorem'.
\section{\bf Helson's theorem versus the Carleson-Hunt theorem} \label{CarlesonsectionHelson} In this section we provide the proofs of the reflexive statements from the Synopses I and II in the introduction.
To this end, by $CH_{p} >0$ we denote the best constant in the maximal inequality from the Carleson-Hunt theorem -- that is, given $1<p<\infty$, the best $C>0$ such that for all $f\in L_{p}(\mathbb{T})$
$$\bigg(\int_{\mathbb{T}} \sup_{N} \big|\sum_{|k|\le N} \widehat{f}(k)z^{k}\big|^{p} dz\bigg)^{\frac{1}{p}}\le C\|f\|_{p}.$$
\begin{Theo} \label{DirichletintegerHelson} Let $1<p<\infty$ and $\lambda=(\lambda_n)$ an arbitrary frequency. Then for all $\lambda$-Dirichlet groups $(G,\beta)$ and $D=\sum a_{n}e^{-\lambda_{n}s}\in \mathcal{H}_{p}(\lambda)$ we for almost all $\omega \in G$ have \begin{equation} \label{max1Helson}
\lim_{T\to \infty} \bigg(\frac{1}{2T} \int_{-T}^{T} \sup_{N} \big| \sum_{n=1}^{N} a_{n} h_{\lambda_{n}}(\omega) e^{-it\lambda_{n}} \big|^{p} dt \bigg)^{\frac{1}{p}} \le \text{CH}_{p}\|D\|_{p}. \end{equation} Moreover, for almost all $\omega \in G$ almost everywhere on $\mathbb{ R}$ \begin{equation}\label{point1Helson} D^{\omega}(it)=\sum_{n=1}^{\infty} a_n h_{\lambda_{n}}(\omega) e^{-it\lambda_{n}}=f_{\omega}(t), \end{equation} and in particular \begin{equation}\label{point2Helson} \text{$D^{\omega}=\sum a_{n} h_{\lambda_{n}}(\omega)e^{-\lambda_{n}s}$ converges on $[Re>0]$.} \end{equation} \end{Theo}
As described above we deduce this from a Carleson-Hunt type maximal inequality for functions in $H_p^\lambda(G)$.
\begin{Theo} \label{CorointegerHelson} Let $\lambda$ be a frequency and $1<p<\infty$. Then for all $\lambda$-Dirichlet groups $(G,\beta)$ and $f \in H_{p}^{\lambda}(G)$ we have \begin{equation} \label{iiiiiHelson}
\bigg( \int_{G} \sup_{N} \big| \sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega) \big|^{p} d\omega \bigg)^{\frac{1}{p}} \le CH_{p}\|f\|_{p}. \end{equation} In particular, almost everywhere on $G$ \begin{equation} \label{sakssHelson} f=\sum_{n=1}^{\infty} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}. \end{equation} \end{Theo}
Before we begin with the proofs let us apply Theorem~\ref{CorointegerHelson} to the frequency $\lambda =(\log n)$, which, as remarked above, together with the group $(\mathbb{T}^\infty, \beta_{\mathbb{T}^\infty})$ forms a $(\log n)$-Dirichlet group.
\begin{Coro} \label{OrdiAHelson} Let $1 < p < \infty$ and $f \in H_p(\mathbb{T}^\infty)$. Then \[ \lim_{N\to \infty} \sum_{\mathfrak{p}^\alpha \leq N} \widehat{f}(\alpha) z^\alpha=f(z) \,\,\,\,\, \text{almost everywhere on $\mathbb{T}^\infty$}\,, \] and moreover \[
\bigg( \int_{\mathbb{T}^\infty} \sup_N \big| \sum_{\mathfrak{p}^\alpha \leq N} \widehat{f}(\alpha) z^\alpha
\big|^p d z \bigg)^{1/p}
\leq CH_{p} \|f\|_p\,. \] \end{Coro}
We start with the proof of Theorem \ref{CorointegerHelson}, and show at the end of this section that this result in fact also proves Theorem~\ref{DirichletintegerHelson}.
Actually for a certain choice of $\lambda$-Dirichlet groups, Theorem \ref{CorointegerHelson} is due to Duy in his article \cite{Duy}, where convergence of Fourier series of so-called Besicovitch almost periodic functions is investigated.
In our language, fixing a frequency $\lambda$, Duy considers the $\lambda$-Dirichlet group $G_{D}:=\widehat{(U,d)}$, where $U$ is the smallest subgroup of $\mathbb{ R}$ containing $\lambda$ and $d$ denotes the discrete topology. This compact abelian group together with the mapping $$\beta_{D}\colon \mathbb{ R} \to G_{D}, ~~ t \mapsto \left[u \mapsto e^{-itu}\right]$$ forms a $\lambda$-Dirichlet group (see also \cite[Example 3.5]{DefantSchoolmann2}). Then by \cite[Theorem 13, p. 274]{Duy} (in our notation) the maximal operator \begin{equation*} \label{DuymaximalopHelson}
\mathbb{M}(f)(\omega):=\sup_{N>0} \big| \sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega) \big| \end{equation*} defines a bounded operator from $H_{p}^{\lambda}(G_{D})$ to $L_{p}(G_{D})$, whenever $1<p<\infty$, and this in fact proves Theorem \ref{CorointegerHelson} for $(G_{D},\beta_{D})$.
Moreover, the case $p=2$ and $\lambda=(\log n)$ with Dirichlet group $(\mathbb{T}^{\infty},\beta_{\mathbb{T}^{\infty}})$ of Theorem \ref{CorointegerHelson} is proven by Hedenmalm and Saksman in \cite[Theorem 1.5]{HedenmalmSaksman}, without stating (\ref{iiiiiHelson}). Their proof and the proof of Duy are based on Carleson's maximal inequality
on almost everywhere convergence of Fourier series of square integrable functions on $\mathbb{T}$, and
a technique due to Fefferman from \cite{Fefferman}.
Following closely their ideas, we for the sake of completeness provide a self-contained proof of Theorem \ref{CorointegerHelson} within our framework of Hardy spaces $H_{p}^{\lambda}(G)$, which shows that the special choice of the $\lambda$-Dirichlet group $(G, \beta)$ in fact is irrelevant.
A crucial argument of \cite{Duy} is that for every finite set $\{\lambda_{1}, \ldots, \lambda_{N}\}$ of positive numbers there are $\mathbb{Q}$-linearly independent numbers $b_{1}, \ldots, b_{P}$ such that $\{\lambda_{1}, \ldots, \lambda_{N}\} \subset \operatorname{span}_{\mathbb{ N}_{0}} (b_{1},\ldots, b_{P}).$ We demand less and only require integer coefficients.
\begin{Lemm} \label{trickDuyHelson} Let $a_{1}, \ldots, a_{N}$ be positive numbers. Then there are $\mathbb{Q}$-linearly independent real numbers $b_{1}, \ldots, b_{P}$ such that $\{a_{1}, \ldots, a_{N}\} \subset \operatorname{span}_{\mathbb{Z}} (b_{1}, \ldots, b_{P}).$ \end{Lemm} \begin{proof} We prove the claim by induction. If $N=1$, then choose $b_{1}:=a_{1}$. Assume that for $a_{1}, \ldots, a_{N}$ there are $\mathbb{Q}$-linearly independent $b_{1}, \ldots, b_{P}$ such that $\{a_{1}, \ldots, a_{N}\} \subset \operatorname{span}_{\mathbb{Z}} (b_{1}, \ldots, b_{P})$, and let $a_{N+1}$ be arbitrary. If $(a_{N+1}, b_{1}, \ldots, b_{P})$ is $\mathbb{Q}$-linearly independent, then choose $b_{P+1}:=a_{N+1}$. Else, there are rationals $q_{j}$ such that $a_{N+1}=\sum_{j=1}^{P} q_{j}b_{j}$ and so for every $K\in \mathbb{ N}$ $$a_{N+1}=\sum_{j=1}^{P} (Kq_{j}) \frac{b_{j}}{K}.$$ Choose $K$ large enough such that $K q_{j}\in \mathbb{Z}$ for all $j$, and define $\widetilde{b_{j}}:=K^{-1}b_{j}$. Then $\{a_{1}, \ldots, a_{N}, a_{N+1}\} \subset \operatorname{span}_{\mathbb{Z}} (\widetilde{b_{1}}, \ldots, \widetilde{b_{P}})$, which finishes the proof. \end{proof}
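For instance, for $a_{1}=1$ and $a_{2}=\frac{3}{2}$ the pair $(a_{2},b_{1})$ with $b_{1}:=a_{1}=1$ is $\mathbb{Q}$-linearly dependent, since $a_{2}=\frac{3}{2}\,b_{1}$; taking $K=2$ and $\widetilde{b_{1}}=\frac{1}{2}$ gives
\[
a_{1}=2\,\widetilde{b_{1}} \,\,\, \text{and} \,\,\, a_{2}=3\,\widetilde{b_{1}}\,,
\]
hence $\{a_{1},a_{2}\}\subset \operatorname{span}_{\mathbb{Z}}\big(\tfrac{1}{2}\big)$.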
\begin{proof}[Proof of Theorem \ref{CorointegerHelson}] We first consider polynomials from $L_{p}(\mathbb{T}^{\infty})$ and then show that the choice of the Dirichlet group is irrelevant. So let $f \in L_{p}(\mathbb{T}^{\infty})$ be a polynomial, depending only on the first $N$ variables, and define for $x \in \mathbb{R}^N$ the maximal function
\[
M_xf(z) = \sup_{S >0}
\big| \sum_{\substack{\alpha \in \mathbb{Z}^N\\ \langle\alpha,x\rangle \leq S}} \hat{f}(\alpha) z^\alpha \big|\,,\,\,\, z \in \mathbb{T}^N\,,
\]
where $\langle\alpha,x\rangle:=\sum \alpha_{j}x_{j}$. We intend to show that
\begin{align} \label{maxinequalityHelson}
\| M_xf\|_p \leq CH_{p}\|f\|_p\,.
\end{align}
Note that then, taking for $x$ the tuple $B$ of $\mathbb{Q}$-linearly independent numbers constructed at the end of the proof, the proof finishes.
We will use that, given an $N \times N$ matrix $M=(m_{i,j})$ with integer entries and $\det M =1$, the transformation formula for every integrable function $g: \mathbb{T}^N \to \mathbb{R}$ gives \begin{align} \label{integralHelson} \int_{\mathbb{T}^N} g(z)dz = \int_{\mathbb{T}^N} g(\Phi_M(z)) dz\,, \end{align}
where
\[ \Phi_M : \mathbb{T}^N \to \mathbb{T}^N\,, (e^{it_j})_j \mapsto (e^{i \sum_k m_{jk}t_k})_j\,, \] and moreover for all $\alpha \in \mathbb{Z}^N$ and $z \in \mathbb{T}^N$ \begin{align} \label{monoHelson} \Phi_M(z)^\alpha = z^{M^{t} \alpha}\,, \end{align} where $M^{t}$ denotes the transposed matrix of $M$.
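To illustrate \eqref{monoHelson}, take $N=2$ and $M=\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}$. Then $\Phi_M(e^{it_1},e^{it_2})=(e^{i(t_1+t_2)},e^{it_2})$, and for every $\alpha=(\alpha_1,\alpha_2)\in \mathbb{Z}^2$
\[
\Phi_M(z)^\alpha = e^{i\alpha_1(t_1+t_2)}\,e^{i\alpha_2 t_2}=z_1^{\alpha_1}z_2^{\alpha_1+\alpha_2}=z^{M^{t}\alpha}\,,
\]
since $M^{t}\alpha=(\alpha_1,\alpha_1+\alpha_2)$.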
By approximation we only have to prove \eqref{maxinequalityHelson} for a dense collection of $x$ in $\mathbb{R}_{>0}^N$, and, following the argument from the proof of \cite[Theorem 1.4]{HedenmalmSaksman}, we take
\[
x= \bigg(\frac{q_1}{Q}, \ldots,\frac{q_N}{Q} \bigg)\,,
\]
where $q_1, \ldots, q_N, Q \in \mathbb{Z}$ and $\gcd(q_1,q_2) =1$. Choose $r_1, r_2 \in \mathbb{Z}$ such that $q_1r_2 - q_2r_1 = 1$, and define the $N \times N$ matrix
\[ A=
\begin{bmatrix}
q_1 & q_2 & q_3 & \cdots & q_N \\
r_1 & r_2 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{bmatrix} \] Since the lower right $(N-2)\times(N-2)$ block is the identity, $\det A = q_1r_2-q_2r_1=1$. Then we deduce from \eqref{integralHelson} and \eqref{monoHelson} (applied to $M = (A^{-1})^{t}$) that \begin{align*}
\| M_xf\|_p^p & = \int_{\mathbb{T}^N}
\sup_{S >0} \big| \sum_{\substack{\alpha \in \mathbb{Z}^N\\ \langle q,\alpha\rangle \leq QS}} \hat{f}(\alpha) z^{A^{-1}A\alpha}\big|^p dz \\& = \int_{\mathbb{T}^N}
\sup_{S >0} \big| \sum_{\beta \in \{A\alpha \colon \langle q,\alpha\rangle \leq QS\}} \hat{f}(A^{-1}\beta) z^\beta\big|^p dz\,. \end{align*} Now we observe that
for every $S >0$ \begin{align*}\label{indecesHelson} \{ A\alpha \colon & \text{$\alpha \in \mathbb{Z}^N$ and $\langle q,\alpha\rangle \leq QS $} \} = \{ (\beta_1,\gamma) \in \mathbb{Z} \times\mathbb{Z}^{N-1} \colon \text{$\beta_1 \leq QS $ }\}\,, \end{align*} hence \begin{align*}
\| M_xf\|_p^p & = \int_{\mathbb{T}^{N-1}} \bigg( \int_{\mathbb{T}}
\sup_{S >0} \big| \sum_{\substack{\beta_1 \in \mathbb{Z}\\ \beta_1 \leq QS}} \Big[ \sum_{\gamma \in \mathbb{Z}^{N-1} } \hat{f}(A^{-1}(\beta_1, \gamma)) z^\gamma
\Big] z_1^{\beta_1}\big|^p dz_1\bigg) dz\,. \end{align*} Finally, we deduce from the Carleson-Hunt maximal inequality in $L_p(\mathbb{T})$, and another application of \eqref{monoHelson} and \eqref{integralHelson}, that \begin{align*}
\| M_xf\|_p^p \le \int_{\mathbb{T}^{N-1}} CH_{p}^{p} \bigg( \int_{\mathbb{T}}
\big| \sum_{\beta \in \mathbb{Z}^N} & \hat{f}(A^{-1}\beta) z^\beta
\big|^p dz_1\bigg) dz \\& = CH_{p}^{p} \int_{\mathbb{T}^{N}}
\big| \sum_{\alpha \in \mathbb{Z}^N} \hat{f}(\alpha) z^\alpha
\big|^p dz\,, \end{align*} which is what we aimed for. Now let $\lambda$ be a frequency and $(G,\beta)$ be a $\lambda$-Dirichlet group. Fix $N$ and let $E_{N}:=\{\lambda_{1},\ldots, \lambda_{N}\}$. Then by Lemma \ref{trickDuyHelson} there are $\mathbb{Q}$-linearly independent $B_{N}:=(b_{1},\ldots, b_{P_{N}})$ such that $E_{N}\subset \operatorname{span}_{\mathbb{Z}} (b_{1},\ldots, b_{P_{N}})$. Let $f=\sum_{n=1}^{N} a_{n}h_{\lambda_{n}}$ and define $g:=\sum c_{\alpha} z^{\alpha} \in L_{p}(\mathbb{T}^{\infty})$, where $c_{\alpha}:=a_{n}$, whenever $\lambda_{n}=\sum \alpha_{j}b_{j}$. Observe that $\mathbb{T}^{P_{N}}$ with the mapping $$\beta_{B_{N}}\colon \mathbb{ R} \to \mathbb{T}^{P_{N}}, ~~ t \mapsto (e^{-itb_{1}}, \ldots,e^{-itb_{P_{N}}})$$
forms a Dirichlet group. Then by \cite[Proposition 3.17]{DefantSchoolmann2} we have $\|f\|_{p}=\|g\|_{p}$. Moreover, for every Dirichlet group $(H,\beta_{H})$ we for all $f\in C(H)$ have \begin{equation}\label{QQQHelson} \int_{H} f~dm= \lim_{T\to \infty} \frac{1}{2T} \int_{-T}^{T} (f\circ\beta_{H})(t) dt, \end{equation} which is readily checked for polynomials and then follows by density.
Since $\omega \mapsto \sup_{N\le M} \left| \sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega) \right|$ is continuous, we obtain, using (\ref{QQQHelson}) for $(G,\beta)$ and $(\mathbb{T}^{P_N},\beta_{B_{N}})$ and the monotone convergence theorem twice, \begin{align*}
&\bigg( \int_{G} \sup_{N} \big| \sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega) \big|^{p} d\omega \bigg)^{\frac{1}{p}}=\lim_{M\to \infty} \bigg( \int_{G} \sup_{N\le M} \big| \sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega) \big|^{p} d\omega \bigg)^{\frac{1}{p}} \\ &= \lim_{M\to \infty} \bigg(\lim_{T\to \infty} \frac{1}{2T} \int_{-T}^{T} \sup_{N\le M} \big| \sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}})e^{-\lambda_{n}it} \big|^{p} dt \bigg)^{\frac{1}{p}} \\ & =\lim_{M\to \infty} \bigg( \int_{\mathbb{T}^{\infty}} \sup_{N\le M} \big| \sum_{\alpha B \le N} \widehat{g}(\alpha) z^{\alpha} \big|^{p} dz\bigg)^{\frac{1}{p}}= \bigg( \int_{\mathbb{T}^{\infty}} \sup_{N} \big| \sum_{\alpha B \le N} \widehat{g}(\alpha) z^{\alpha} \big|^{p} dz \bigg)^{\frac{1}{p}} \\ &\le CH_{p}\|g\|_{p}=CH_{p}\|f\|_{p}. \qedhere \end{align*} \end{proof}
\begin{proof}[Proof of Theorem \ref{DirichletintegerHelson}]
Let $D\in \mathcal{H}_{p}(\lambda)$ and $f\in H_{p}^{\lambda}(G)$ with $\mathcal{B}(f)=D$. By Theorem \ref{CorointegerHelson} we know that \begin{equation*} \label{fHelson}
\omega \mapsto \sup_{N} \big| \sum_{n=1}^{N} a_{n} h_{\lambda_{n}}(\omega) \big|^p \in L_{1}(G). \end{equation*} Then \eqref{besicoHelson} shows that the maximal inequality from \eqref{iiiiiHelson} implies the maximal inequality from \eqref{max1Helson}. Finally, \eqref{point1Helson} is a consequence of \eqref{sakssHelson} and Remark~\ref{tranferHelson}. \end{proof}
\section{\bf Helson's theorem under Landau's condition} \label{maximalineqsectionLCHelson}
It is almost obvious that Theorem~\ref{DirichletintegerHelson}, \eqref{max1Helson} and \eqref{point1Helson} as well as their equivalent formulations Theorem~\ref{CorointegerHelson}, \eqref{iiiiiHelson} and \eqref{sakssHelson} of the preceding section fail in the non-reflexive case $p=1$. Indeed, as described in \eqref{hardyTHelson} we have that $H_{1}(\mathbb{T}) = H_{1}^{(n)}(\mathbb{T})$, and it is well-known that the Carleson-Hunt theorem fails in $H_{1}(\mathbb{T})$. But as we are going to show now, under Landau's condition $(LC)$ on the frequency $\lambda$ the Helson-type statement from Theorem~\ref{DirichletintegerHelson}, \eqref{point2Helson} can be saved.
\begin{Theo} \label{Helson(LC)Helson} Let $(G,\beta)$ be a $\lambda$-Dirichlet group for a frequency $\lambda$ with $(LC)$, and $D = \sum a_n e^{-\lambda_n s}\in \mathcal{H}_1(\lambda)$. \begin{itemize} \item[(i)] Then for almost all $\omega \in G$ the vertical limits $D^\omega$ converge on $[Re >0]$. \item[(ii)] More precisely, there is a null set $N \subset G$ such that for every $\omega \notin N$ \[ D^\omega(u+it) = (f_\omega \ast P_u) (t) \,\,\, \text{for every $u >0$ and almost all $t \in \mathbb{ R}$}\,, \] where $f \in H_1^\lambda(G)$ is the function associated to $D$ through Bohr's transform. \end{itemize} \end{Theo}
As in the preceding section, our general setting combined with some of our preliminaries shows that this result on general Dirichlet series is in fact equivalent to a result on pointwise convergence of Fourier series in Hardy spaces on $\lambda$-Dirichlet groups.
\begin{Theo}\label{HelsonstheoHelson} Let $(G,\beta)$ be a $\lambda$-Dirichlet group for a frequency $\lambda$ with $(LC)$. \begin{itemize} \item[(i)]
Then for every $u>0$ the sublinear operator \begin{equation*} \label{operatorHHelson}
S_{max}^{u}(f)(\omega):=\sup_{N} \big|\sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}})e^{-u\lambda_{n}} h_{\lambda_{n}}(\omega)\big| \end{equation*} is bounded from $H_{1}^{\lambda}(G)$ to $L_{1,\infty}(G)$. \item[(ii)] Moreover, if $f \in H_{1}^\lambda(G)$, then there is a null set $N\subset G$ such that for every $\omega \notin N$ and every $u>0$ we have \begin{equation*} \label{guertelHelson} (f \ast p_{u})(\omega) = \sum_{n=1}^{\infty} \widehat{f}(h_{\lambda_{n}})e^{-u\lambda_{n}} h_{\lambda_{n}}(\omega). \end{equation*} \end{itemize} \end{Theo}
Note that, by Theorem~\ref{CorointegerHelson}, the operator $S_{max}^{u}$ is bounded from $H_{p}^{\lambda}(G)$ to $L_{p}(G)$ for every $1<p\le \infty$ without any restriction on $\lambda$ (apply Theorem~\ref{CorointegerHelson} to $f \ast p_u$ for $f \in H_{p}^{\lambda}(G)$).
The proof of Theorem~\ref{HelsonstheoHelson} needs two lemmas, the first one of which in fact is crucial.
\begin{Lemm} \label{jojHelson} Let $\lambda$ be an arbitrary frequency. Then for any sequence $(k_{N})\subset ]0,1]$ the sublinear operator
$$T_{max}(f)(\omega):=\sup_{N} \Big(\big| \sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega)\big| k_{N} \Big(\frac{\lambda_{N+1}-\lambda_{N}}{\lambda_{N+1}}\Big)^{k_{N}}\Big)$$ is bounded from $H_{1}^{\lambda}(G)$ to $L_{1,\infty}(G)$ and from $H_{p}^{\lambda}(G)$ to $L_{p}(G)$, where $1<p\le \infty$. \end{Lemm} The proof reduces to boundedness properties of the following Hardy-Littlewood maximal type operator $\overline{M}$ introduced in \cite[Section 2.3]{DefantSchoolmann3}: For $f\in L_{1}(G)$ and almost all $\omega \in G$ we define \begin{equation*}
\overline{M}(f)(\omega):=\sup_{I\subset \mathbb{ R}} \frac{1}{|I|} \int_{I} |f_{\omega}(t)| dt, \end{equation*} where the supremum is taken over all intervals $I\subset \mathbb{ R}$. Then, as shown in \cite[Theorem 2.10]{DefantSchoolmann3}, $\overline{M}$ is a sublinear bounded operator from $L_{1}(G)$ to $L_{1,\infty}(G)$, and from $L_{p}(G)$ to $L_{p}(G)$, whenever $1<p\le \infty$. \begin{proof}[Proof of Lemma \ref{jojHelson}] We recall from \cite[Section 1.3]{DefantSchoolmann3} the notion of Riesz means of some function $f\in H_{1}^{\lambda}(G)$. For $k>0$ and $x>0$ the polynomial $$R_{x}^{\lambda,k}(f):=\sum_{\lambda_{n}<x} \widehat{f}(h_{\lambda_{n}})\bigg(1-\frac{\lambda_{n}}{x}\bigg)^{k} h_{\lambda_{n}}$$ is called the first $(\lambda,k)$-Riesz mean of $f$. Then, choosing $(k_{N})\subset ]0,1]$, from \cite[Lemma 3.5]{Schoolmann} we know that \[
\big| \sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega)\big|\le 3 \bigg(\frac{\lambda_{N+1}}{\lambda_{N+1}-\lambda_{N}} \bigg)^{k_{N}} \sup_{0<x<\lambda_{N+1}} |R^{\lambda,k_{N}}_{x}(f)(\omega)|\,, \]
and additionally from \cite[Proposition 3.2]{DefantSchoolmann3} that \begin{equation*}
\sup_{x>0}|R^{\lambda,k_{N}}_{x}(f)(\omega)|\le Ck_{N}^{-1}\overline{M}(f)(\omega), \end{equation*} where $C$ is an absolute constant. So together \begin{equation} \label{bjHelson}
|T_{max}(f)(\omega)|\le 3C \overline{M}(f)(\omega), \end{equation} and, since $\overline{M}$ has the stated boundedness properties, the claim follows. \end{proof}
The second lemma is a standard consequence of Abel summation.
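For the reader's convenience we recall that, with $A_{n}:=\sum_{k=1}^{n}a_{k}$, the Abel summation formula reads
\[
\sum_{n=1}^{N} a_{n}c_{n}=A_{N}c_{N}+\sum_{n=1}^{N-1}A_{n}(c_{n}-c_{n+1})\,,
\]
which below is applied with $c_{n}=e^{-(u+\varepsilon)\lambda_{n}}$.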
\begin{Lemm} \label{abeleHelson} For every $u>0$ there is a constant $C=C(u)$ such that for every choice of complex numbers $a_1, \ldots, a_N$, every frequency $\lambda=(\lambda_{n})$ and every $0<\varepsilon\le u$ \[
\big| \sum_{n=1}^{N} a_n e^{-(u+\varepsilon) \lambda_n} \big| \leq
C(u)\sup_{n\le N}\big|e^{-\varepsilon \lambda_n}\sum_{k=1}^{n} a_k \big|\,. \] \end{Lemm} \begin{proof} Indeed, with $A_{n}:=\sum_{k=1}^{n}a_{k}$, Abel summation gives \begin{align*}
&\big| \sum_{n=1}^{N} a_n e^{-(u+\varepsilon) \lambda_{n}} \big| =\big| e^{-(u+\varepsilon)\lambda_{N}}A_{N} + \sum_{n=1}^{N-1} A_{n}\big(e^{-(u+\varepsilon) \lambda_{n}}-e^{-(u+\varepsilon) \lambda_{n+1}}\big)\big|\\ &\le \sup_{n\le N}\big|e^{-\varepsilon \lambda_{n}}A_{n} \big| \bigg(e^{-u\lambda_{N}}+\sum_{n=1}^{N-1} \big(e^{-u\lambda_{n}}-e^{-u\lambda_{n+1}}e^{-\varepsilon(\lambda_{n+1}-\lambda_{n})}\big) \bigg) \\ &\le
\sup_{n\le N}\big|e^{-\varepsilon \lambda_{n}}A_{n} \big| \bigg(e^{-u\lambda_{N}}+\sum_{n=1}^{N-1} \big(e^{-u\lambda_{n}}-e^{-u\lambda_{n+1}}\big)+\varepsilon\sum_{n=1}^{N-1} e^{-u\lambda_{n+1}}(\lambda_{n+1}-\lambda_{n}) \bigg)\\ &\le \sup_{n\le N}\big|e^{-\varepsilon \lambda_{n}}A_{n} \big| \bigg(1+\varepsilon\int_{0}^{\infty}e^{-ux}dx\bigg)\le 2\sup_{n\le N}\big|e^{-\varepsilon \lambda_{n}}A_{n} \big|\,,
\end{align*}
where we used $1-e^{-x}\le x$ for $x\ge 0$, the estimate $e^{-u\lambda_{n+1}}(\lambda_{n+1}-\lambda_{n})\le \int_{\lambda_{n}}^{\lambda_{n+1}}e^{-ux}dx$, and finally $\varepsilon\int_{0}^{\infty}e^{-ux}dx=\frac{\varepsilon}{u}\le 1$. \end{proof} \begin{proof}[Proof of Theorem \ref{HelsonstheoHelson}] For the proof of (i) note first that by $(LC)$ for every $u>0$ there is a constant $C(u,\lambda) >0$, such that for all $N$ $$\lambda_{N+1}-\lambda_{N} \ge C(u,\lambda) e^{-e^{u\lambda_{N}}}.$$ Hence with the choice $k_{N}:=e^{-u\lambda_{N}}$ we for all $N$ have \begin{equation} \label{(A)Helson} e^{-u\lambda_{N}}\le C_1(u,\lambda) k_{N} \bigg(\frac{\lambda_{N+1}-\lambda_{N}}{\lambda_{N}}\bigg)^{k_{N}} \,, \end{equation} and conclude from Lemma~\ref{abeleHelson} that \begin{equation} \label{mittHelson} S_{\text{max}}^u(f) (\omega) \leq
C_{2}(u,\lambda) \sup_{N} \big|e^{-u\lambda_{N}} \sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}})h_{\lambda_{n}}(\omega)\big|\le C_{3}(u,\lambda) T_{max}(f)(\omega). \end{equation} Finally, the boundedness of $S_{\text{max}}^u: H_1^\lambda(G) \to L_{1,\infty}(G)$ is an immediate consequence of Lemma~\ref{jojHelson}.
To understand the second statement (ii) take $f \in H_1^\lambda(G)$ and $u >0$. Then $p_u \ast f \in H_1^\lambda(G)$, and recall from \eqref{Fourier1Helson} that all non-zero Fourier coefficients of this function have the form $\widehat{f}(h_{\lambda_{n}}) e^{-u \lambda_n}$. Using a standard argument (see again \cite[Lemma 3.6]{DefantSchoolmann3} for a more general situation) gives that there is a null set $N\subset G$ such that on $G\setminus N$ we have \[f*p_{u}=\sum_{n=1}^{\infty} \widehat{f}(h_{\lambda_{n}})h_{\lambda_{n}}.\] To finish the proof of (ii) we need to show that the dependence of $N$ on $u>0$ may be avoided: Recall first from \eqref{mittHelson} and \eqref{bjHelson}
that for every $u>0$ there is a constant $C(u,\lambda)>0$ such that
for every $f\in H_{1}^{\lambda}(G)$
\[S_{max}^{u}(f)(\omega)\le C(u,\lambda) \overline{M}(f)(\omega) \,.\] So fixing $u>0$ and $f\in H_{1}^{\lambda}(G)$, we for all $v>0$ obtain that for almost all $\omega$ $$S_{max}^{u+v}(f)(\omega)=S_{max}^{u}(f*p_{v})(\omega)\le C(u,\lambda)\overline{M}(f*p_{v})(\omega)\le C(u,\lambda) \overline{M}(f)(\omega)\,,$$ where the last estimate is taken from \cite[Proof of Proposition 3.7]{DefantSchoolmann3}. So for all $u>0$ there is a constant $C_1(u,\lambda)>0$ such that
$$\big\| \sup_{\alpha\ge u} S_{max}^{\alpha}(f)(\pmb{\cdot}) \big\|_{1,\infty}\le C_1(u,\lambda) \|f\|_{1} \,\,\, \text{ and } \,\,\,
\big\| \sup_{\alpha\ge u} |f \ast p_\alpha| \big\|_{1,\infty}\le \|f\|_{1}\,, $$ where the first estimate is a consequence of the $L_{1}$-$L_{1,\infty}$-boundedness of $\overline{M}$ (see again \cite[Theorem 2.10]{DefantSchoolmann3}) and the second inequality can be found in the proof of \cite[Proposition 2.4]{DefantSchoolmann3}. We conclude from \cite[Lemma 3.6]{DefantSchoolmann3} that for every $u$ there is a null set $N_{u}\subset G$ such that for all $\omega \notin N_{u}$ \begin{equation} \label{hansimglueckHelson}
\lim_{N \to \infty} \sup_{\alpha \ge u} \big| \sum_{n=1}^N \widehat{f}(h_{\lambda_n}) e^{-\alpha \lambda_n} h_{\lambda_n}(\omega) - (f\ast p_\alpha)(\omega) \big| =0. \end{equation} Now collecting all null sets $N_{1/n}, n \in \mathbb{ N},$ gives the conclusion. \end{proof}
Now we check that the Helson-type Theorem~\ref{Helson(LC)Helson} is indeed a consequence of the above maximal inequality from Theorem~\ref{HelsonstheoHelson}.
\begin{proof}[Proof of Theorem~\ref{Helson(LC)Helson}]
Both statements (i) and (ii) follow immediately from (\ref{hansimglueckHelson}) and Remark~\ref{tranferHelson}. Indeed, applying Remark \ref{tranferHelson} to (\ref{hansimglueckHelson}) we get that for every $u>0$ there is a null set $N_u\subset G$ such that, if $\omega \notin N_u$, then for almost every $t \in \mathbb{ R}$ \begin{equation*}
\lim_{N \to \infty} \sup_{\alpha \ge u} \big| \sum_{n=1}^N \widehat{f}(h_{\lambda_n})e^{-\alpha \lambda_n} h_{\lambda_n}(\omega)e^{-it\lambda_{n}} - (f\ast p_\alpha)(\omega\beta(t)) \big| =0. \end{equation*} Hence, again collecting all null sets $N_{1/n}, n \in \mathbb{ N},$ we obtain a null set $N$, such that for every $u>0$ and almost every $t\in \mathbb{ R}$ \[D^{\omega}(u+it)=(f*p_{u})(\omega\beta(t))=\int_{\mathbb{ R}} f_{\omega}(t-x) P_{u}(x) dx=f_{\omega}*P_{u}(t),\] whenever $\omega \notin N$, and so the proof is finished. \end{proof}
\begin{Rema} Obviously, the preceding proof of Theorem~\ref{HelsonstheoHelson} works if, instead of the condition $(LC)$ for $\lambda$, we assume that for every $u>0$ there is a constant $C=C(u)\ge 1$ and a sequence $(k_{N})\subset ]0,1]$ such that the estimate from \eqref{(A)Helson} holds for all $N$. Taking the $k_{N}$th root, condition \eqref{(A)Helson} is equivalent to the following: For every $u>0$ there is a constant $C=C(u)\ge 1$ and a sequence $(k_{N})\subset ]0,1]$ such that for all $N$ \begin{equation*} \lambda_{N} e^{-u\lambda_{N}k_{N}^{-1}}\bigg(\frac{1}{Ck_{N}}\bigg)^{k_{N}^{-1}}\le \lambda_{N+1}-\lambda_{N}. \end{equation*} But then an elementary calculation shows that this condition in fact implies $(LC)$. \end{Rema}
\section{\bf Helson's theorem under Bohr's condition} \label{maximalineqsectionBCHelson} We now study the results of the preceding section under the more restrictive condition $(BC)$ instead of $(LC)$ for the frequency $\lambda$. We are going to show that under Bohr's condition $(BC)$ the operator $S_{max}^{u}$ from Theorem~\ref{HelsonstheoHelson} improves considerably in the sense that it maps $H_{1}^{\lambda}(G)$ to $L_{1}(G)$ and that its norm is uniformly bounded in $1 \leq p \leq \infty$.
\begin{Theo} \label{HelsonBCinternalHelson} Let $(BC)$ hold for $\lambda$. Then to every $u>0$ there is a constant $C=C(\lambda,u)$ such that for all $1\le p < \infty$, all $\lambda$-Dirichlet groups $(G,\beta)$ and $D\in \mathcal{H}_{p}(\lambda)$ we for almost all $\omega \in G$ have \begin{equation*}
\lim_{T\to \infty} \bigg(\frac{1}{2T} \int_{-T}^{T} \sup_{N} \big| \sum_{n=1}^{N} a_{n} h_{\lambda_{n}}(\omega) e^{-(u+it)\lambda_{n}} \big|^{p} dt \bigg)^{\frac{1}{p}} \le C\|D\|_{p}. \end{equation*} \end{Theo}
As before we deduce this from an appropriate maximal inequality of 'translated' Fourier series of functions in $H_{p}^{\lambda}(G)$.
\begin{Theo}\label{maximalineqBCHelson} Let $\lambda$ satisfy $(BC)$ and $(G,\beta)$ be a $\lambda$-Dirichlet group. Then for every $u>0$ there is $C=C(u, \lambda)>0$ such that for all $1\le p \le \infty$ and $f \in H_{p}^{\lambda}(G)$
\begin{equation*}
\Big\|\sup_{N} \big| \sum_{n=1}^{N}\widehat{f}(h_{\lambda_{n}})e^{-\lambda_{n}u} h_{\lambda_{n}} \big|
\Big\|_p \le C\|f\|_{p}.
\end{equation*} \end{Theo}
Obviously, Theorem~\ref{maximalineqBCHelson} transfers to Theorem~\ref{HelsonBCinternalHelson} precisely as in the proof of Theorem~\ref{DirichletintegerHelson} (given at the end of Section~\ref{CarlesonsectionHelson}).
Let us, as in Corollary~\ref{OrdiAHelson}, apply Theorem \ref{maximalineqBCHelson} to $\lambda=(\log n)$ and the $\lambda$-Dirichlet group $(\mathbb{T}^{\infty},\beta_{\mathbb{T}^{\infty}})$.
\begin{Coro}\label{OrdiBHelson} Let $f \in H_1(\mathbb{T}^\infty)$. Then for all $u >0$ \[ \lim_{N\to \infty} \sum_{\mathfrak{p}^\alpha \leq N} \widehat{f}(\alpha)\,\Big(\frac{z}{\mathfrak{p}^{u}}\Big)^\alpha=f*p_{u}(z) \,\,\,\,\, \text{almost everywhere on $\mathbb{T}^\infty$}\,, \] and moreover \[
\int_{\mathbb{T}^\infty} \sup_N \big| \sum_{\mathfrak{p}^\alpha \leq N} \widehat{f}(\alpha)
\Big(\frac{z}{\mathfrak{p}^{u}}\Big)^\alpha \big| d z
\leq C \|f\|_{1}\,, \] where $C= C(u)$ only depends on $u$. \end{Coro}
Our proof of Theorem \ref{maximalineqBCHelson}, which is inspired by Helson's proof of Theorem~\ref{HelsonstheoremHelson} from \cite{Helson}, seems to rely strongly on $(BC)$, and it requires the following two main ingredients.
\begin{Prop} \label{continuityHelson} Let $1\le p <\infty$, $\varepsilon>0$ and $u>0$. Then the operator \begin{equation*} \Psi=\Psi(p,u,\varepsilon) \colon L_{p}(G)\hookrightarrow L_{p}(G,L_{1+\varepsilon}(\mathbb{ R})), ~~ f \mapsto \left[ \omega \mapsto \frac{f_{\omega}*P_{u}}{u+i\pmb{\cdot}} \right] \end{equation*} defines a bounded linear embedding with \begin{equation} \label{normpsiHelson}
\|\Psi\|\le \int_{\mathbb{ R}} \bigg(\int_{\mathbb{ R}} \bigg( \frac{P_{u}(t-y)}{|u+it|} \bigg)^{1+\varepsilon} dt \bigg)^{\frac{1}{1+\varepsilon}} dy<\infty. \end{equation} In particular, if $f \in L_{1}(G)$, then $\frac{f_{\omega}*P_{u}}{u+i\pmb{\cdot}} \in L_{1+\varepsilon}(\mathbb{ R})$ for almost every $\omega \in G$. \end{Prop}
So, provided $0<\varepsilon\le 1$, we may apply the Fourier transform $\mathcal{F}_{L_{1+\varepsilon}(\mathbb{ R})}$.
\begin{Prop} \label{perronHelson} Let $0<\varepsilon\le 1$ and $f \in H^{\lambda}_{1}(G)$. Then we for almost all $\omega \in G$ and for almost all $x \in \mathbb{ R}$ have \begin{equation*}
\mathcal{F}_{L_{1+\varepsilon}(\mathbb{ R})}\bigg(\frac{f_{\omega}*P_{u}}{u+i\pmb{\cdot}}\bigg)(-x)=e^{-u|x|}\sum_{\lambda_{n}<x}\widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega). \end{equation*} \end{Prop} Let us first show how to obtain Theorem \ref{maximalineqBCHelson} from the Propositions \ref{continuityHelson} and \ref{perronHelson}. As already mentioned, our strategy is inspired by Helson's proof of Theorem \ref{HelsonstheoremHelson} from \cite{Helson}, which, roughly speaking, relies on Plancherel's theorem in $L_{2}(\mathbb{ R})$. Following Helson's ideas, we use the Hausdorff-Young inequality in $L_{1+\varepsilon}(\mathbb{ R})$ instead.
\begin{proof}[Proof of Theorem \ref{maximalineqBCHelson}] Adding more entries to the frequency $\lambda$ we may assume that $\lambda_{n+1}-\lambda_{n}\le 1$ for all $n$ (as in the proof of \cite[Theorem 4.2]{Schoolmann}). Since $\lambda$ satisfies $(BC)$, there is $l>0$ and $C=C(\lambda)$ such that $\lambda_{n+1}-\lambda_{n}\ge Ce^{-l\lambda_{n}}$ for all $n$. Let $f \in H_{p}^{\lambda}(G)$. Fix $0<\varepsilon \le 1$ and choose $q$ such that $\frac{1}{1+\varepsilon}+\frac{1}{q}=1$. By Proposition \ref{continuityHelson} we know that $\frac{P_{u}*f_{\omega}}{u+i\pmb{\cdot}} \in L_{1+\varepsilon}(\mathbb{ R})$ for almost all $\omega \in G$. For notational convenience let us define $$S(f_{\omega})(x)=\sum_{\lambda_{n}<x} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega).$$ Then, Proposition \ref{perronHelson} and the Hausdorff-Young inequality imply \begin{align*}
\infty &>\left\|\frac{P_{u}*f_{\omega}}{u+i\pmb{\cdot}}\right\|_{1+\varepsilon}^{q} \ge \int_{0}^{\infty} |e^{-ut} S(f_{\omega})(t)|^{q} dt=\sum_{n=1}^{\infty} |S(f_{\omega})(\lambda_{n+1})|^{q} \int_{\lambda_{n}}^{\lambda_{n+1}} e^{-uqt} dt \\ &\ge \sum_{n=1}^{\infty} |S(f_{\omega})(\lambda_{n+1})|^{q}(\lambda_{n+1}-\lambda_{n})e^{-uq\lambda_{n+1}} \ge\sum_{n=1}^{\infty} |S(f_{\omega})(\lambda_{n+1})|^{q} C e^{-l\lambda_{n}}e^{-uq(\lambda_{n}+1)} \\&=Ce^{-uq} \sum_{n=1}^{\infty} |S(f_{\omega})(\lambda_{n+1})|^{q}e^{-\lambda_{n}(uq+l)} \ge Ce^{-uq} \sup_{N} |S(f_{\omega})(\lambda_{N+1})|^{q}e^{-\lambda_{N}(uq+l)}\\ &= Ce^{-uq}\sup_{N} \big(|S(f_{\omega})(\lambda_{N+1})| e^{-\lambda_{N}\big(u+\frac{l}{q}\big)} \big)^{q}. \end{align*} Hence
$$C^{\frac{1}{q}} e^{-u} \sup_{N} |S(f_{\omega})(\lambda_{N+1})| e^{-\lambda_{N}\big(u+\frac{l}{q}\big)}\le \left\|\frac{P_{u}*f_{\omega}}{u+i\pmb{\cdot}}\right\|_{1+\varepsilon}$$ and therefore with the mapping $\Psi$ from Proposition \ref{continuityHelson} \begin{align*}
\bigg( \int_{G} \sup_{N } \left| \frac{S(f_{\omega})(\lambda_{N+1})}{e^{\lambda_{N}\big(u+\frac{l}{q}\big)}} \right|^{p} dm(\omega) \bigg)^{\frac{1}{p}} &\le C^{-\frac{1}{q}}e^{u}\bigg( \int_{G} \left\|\frac{P_{u}*f_{\omega}}{u+i\pmb{\cdot}}\right\|_{1+\varepsilon}^{p} dm(\omega) \bigg)^{\frac{1}{p}}\\ &\le C_{1}(u, \lambda)\|f\|_{p} \|\Psi(p,u,\varepsilon)\|. \end{align*} Now choosing $\varepsilon$ small enough, such that $l\le q u$, we obtain with (\ref{normpsiHelson}) from Proposition \ref{continuityHelson} \begin{equation}\label{rasenHelson}
\bigg( \int_{G} \sup_{N} \left| \frac{S(f_{\omega})(\lambda_{N+1})}{e^{2u\lambda_{N}}} \right|^{p} dm(\omega) \bigg)^{\frac{1}{p}} \le C_{2}(u,\lambda) \|f\|_{p}, \end{equation} which together with Lemma \ref{abeleHelson} proves the claim in the range $1\le p <\infty$. Letting $p$ tend to $+\infty$ then gives the full claim. \end{proof} \subsection{Proof of Proposition \ref{continuityHelson}} The technical part of the proof of Proposition \ref{continuityHelson} is to show that for every $\varepsilon,u>0$ \begin{equation} \label{monsterHelson}
\int_{\mathbb{ R}} \bigg(\int_{\mathbb{ R}} \bigg( \frac{P_{u}(t-y)}{|u+it|} \bigg)^{1+\varepsilon} dt \bigg)^{\frac{1}{1+\varepsilon}} dy<\infty. \end{equation}
Observe that if $\varepsilon=0$, then by Fubini's theorem this integral is infinite for every $u>0$. Since $\|P_{u}\|_{1}=1$ and $\|P_{u}\|_{\infty}=\frac{1}{u}$, by Lyapunov's inequality (see e.g. \cite[Lemma II.4.1, p. 72]{Werner}) we obtain $\|P_{u}\|_{1+\varepsilon} \le \big(\frac{1}{u}\big)^{\frac{\varepsilon}{1+\varepsilon}}$, and so for all $y \in \mathbb{ R}$
\begin{equation} \label{trivialboundHelson}
\bigg(\int_{\mathbb{ R}} \bigg( \frac{P_{u}(t-y)}{|u+it|} \bigg)^{1+\varepsilon} dt \bigg)^{\frac{1}{1+\varepsilon}} \le \frac{1}{u} \|P_{u}\|_{1+\varepsilon}\le \frac{1}{u} \bigg(\frac{1}{u}\bigg)^{\frac{\varepsilon}{1+\varepsilon}}=\bigg(\frac{1}{u}\bigg)^{1+\frac{\varepsilon}{1+\varepsilon}}.
\end{equation} Hence the interior integral of (\ref{monsterHelson}) is well-defined, and in order to verify finiteness of (\ref{monsterHelson}) we claim that the interior integral, considered as a function of $y$, decays sufficiently fast.
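For the reader's convenience we sketch the two elementary facts just used (a side remark, not needed later). If $\varepsilon=0$, Fubini's theorem together with $\|P_{u}\|_{1}=1$ gives
\begin{equation*}
\int_{\mathbb{ R}}\int_{\mathbb{ R}} \frac{P_{u}(t-y)}{|u+it|}\, dt\, dy=\int_{\mathbb{ R}}\frac{1}{|u+it|}\bigg(\int_{\mathbb{ R}}P_{u}(t-y)\, dy\bigg) dt=\int_{\mathbb{ R}}\frac{dt}{\sqrt{u^{2}+t^{2}}}=\infty,
\end{equation*}
and the Lyapunov estimate is the interpolation bound
\begin{equation*}
\|P_{u}\|_{1+\varepsilon}^{1+\varepsilon}\le \|P_{u}\|_{1}\,\|P_{u}\|_{\infty}^{\varepsilon}\le u^{-\varepsilon}.
\end{equation*}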
\begin{Lemm} \label{hardHelson} Let $\varepsilon, u>0$. Then for all $|y|>4u$ we have \begin{equation} \label{uglycalculationHelson}
\bigg(\int_{\mathbb{ R}} \bigg( \frac{P_{u}(t-y)}{|u+it|} \bigg)^{1+\varepsilon} dt\bigg)^{\frac{1}{1+\varepsilon}} \le 4|y|^{-\big(1+\frac{\varepsilon}{1+\varepsilon}\big)}. \end{equation} In particular, \begin{equation}
\int_{\mathbb{ R}} \bigg(\int_{\mathbb{ R}} \bigg( \frac{P_{u}(t-y)}{|u+it|} \bigg)^{1+\varepsilon} dt \bigg)^{\frac{1}{1+\varepsilon}} dy\le 16 \bigg(\frac{1+\varepsilon}{\varepsilon}\bigg) \bigg(\frac{1}{u}\bigg)^{\frac{\varepsilon}{1+\varepsilon}}. \end{equation}
\end{Lemm}
\begin{proof} Since $|u|+|t|\le 2 |u+it|$, we have \begin{equation} \label{buchisdaHelson}
\frac{P_{u}(t-y)}{|u+it|}\le 2\frac{P_{u}(t-y)}{u+|t|}. \end{equation} Then fixing $y$ we now estimate separately the integrals $$ (a): ~~\bigg(\int_{0}^{\infty} \bigg( \frac{P_{u}(t-y)}{u+t} \bigg)^{1+\varepsilon} dt \bigg)^{\frac{1}{1+\varepsilon}} ~ \text{and }~ (b):~~\bigg(\int_{-\infty}^{0} \bigg( \frac{P_{u}(t-y)}{u-t} \bigg)^{1+\varepsilon} dt \bigg)^{\frac{1}{1+\varepsilon}}.$$ Since $$\int_{-\infty}^{0} \bigg(\frac{P_{u}(t-y)}{u-t}\bigg)^{1+\varepsilon} dt=\int_{0}^{\infty}\bigg(\frac{P_{u}(t+y)}{u+t}\bigg)^{1+\varepsilon} dt=\int_{0}^{\infty}\bigg(\frac{P_{u}(t-(-y))}{u+t}\bigg)^{1+\varepsilon} dt,$$ we see that it suffices to control integral $(a)$ for $y>0$ and $y<0$. Part I deals with positive $y$ and Part II with negative $y$ in $(a)$.
\textbf{Part I:} Let $y>4u$. Applying the substitution $x(t)=-y+\frac{1}{t}$ we obtain \begin{align*} &\int_{0}^{\infty} \bigg( \frac{u}{(u^{2}+(x-y)^{2})(u+x)} \bigg)^{1+\varepsilon}~ dx\\ &=\int_{0}^{\frac{1}{y}} \bigg( \frac{u}{(u^{2}+(2y-\frac{1}{t})^{2})(u+\frac{1}{t}-y)} \bigg)^{1+\varepsilon} \frac{dt}{t^{2}}\\ &=
\int_{0}^{\frac{1}{y}} |t|^{2\varepsilon} \bigg( \frac{u}{((tu)^{2}+(2yt-1)^{2})(u+\frac{1}{t}-y)} \bigg)^{1+\varepsilon} dt\\ &\le \frac{1}{|y|^{2\varepsilon}}\int_{0}^{\frac{1}{y}}\bigg( \frac{u}{((tu)^{2}+(2yt-1)^{2})(u+\frac{1}{t}-y)} \bigg)^{1+\varepsilon} dt. \end{align*} Now we consider the function $$g(t):=\frac{u}{((tu)^{2}+(2yt-1)^{2})(u+\frac{1}{t}-y)}\,,$$ and we claim that $g$ is strictly increasing
on $[0,\frac{1}{y}]$ provided $y>4u$. So then $$\sup_{t \in [0,\frac{1}{y}]}g(t)=g(y^{-1})=\frac{1}{(\frac{u}{y})^{2}+1}\le 1\,,$$ and hence \begin{equation} \label{beast1Helson} \bigg(\int_{0}^{\infty} \bigg( \frac{P_{u}(t-y)}{u+t} \bigg)^{1+\varepsilon} dt \bigg)^{\frac{1}{1+\varepsilon}}\le y^{-\big(\frac{1+2\varepsilon}{1+\varepsilon}\big)}. \end{equation} Note that $g$ is not differentiable at $t=\frac{1}{y-u}$. But $g$ is differentiable on $[0, \frac{1}{y}]$, since $\frac{1}{y-u}>\frac{1}{y}$ for $y>u$. We calculate $$g^{\prime}(t)=\frac{u(-2t^{3}(u-y)(u^{2}+4y^{2})-t^{2}(u^{2}-4uy+8y^{2})+1)}{(t(u-y)+1)^{2}(t^{2}(u^{2}+4y^{2})-4ty+1)^{2}}\,,$$ and show that $g^{\prime}$ is positive. Therefore we only have to focus on the polynomial $$p(t):=-2t^{3}(u-y)(u^{2}+4y^{2})-t^{2}(u^{2}-4uy+8y^{2})+1$$ with derivative \begin{align*} p^{\prime}(t)&=-6t^{2}(u-y)(u^{2}+4y^{2})-2t(u^{2}-4uy+8y^{2})\\ &=2t(-3t(u-y)(u^{2}+4y^{2})-2(u^{2}-4uy+8y^{2})), \end{align*} which vanishes in $t=0$ and (assuming $y>u$) in $$t_{0}:=\frac{2(u^{2}-4uy+8y^{2})}{3(y-u)(u^{2}+4y^{2})}.$$ We have $p(0)=1$ and, since $y>4u$, $$p\bigg(\frac{1}{y}\bigg)=\bigg(\frac{u}{y}\bigg)^{2}-2\bigg(\frac{u}{y}\bigg)^{3}-4\bigg(\frac{u}{y}\bigg)+1>0.$$ Moreover $t_{0}>\frac{1}{y}$, and assuming $y>4u$ we have \begin{align*} yt_{0}=\frac{2}{3} \frac{8y^{3}-yu(4y-u)}{(y-u)(u^{2}+4y^{2})} \ge \frac{2}{3} \frac{8y^{3}-(y \frac{y}{4}(4y))}{y\big( \big(\frac{y}{4}\big)^{2}+4y^{2} \big)}= \frac{2}{3} \frac{7}{\frac{1}{16}+4}>1. \end{align*} Let us summarize that $p$ is positive on the boundary and has no extremal point in the interior, which implies that $p$ is positive on $[0,\frac{1}{y}]$. Hence $g$ is strictly increasing.
\textbf{Part II}:
Now let $y<-4u$. Applying the substitution $x(t)=y+\frac{1}{t}$ we obtain
\begin{align*} &\int_{0}^{\infty} \bigg( \frac{u}{(u^{2}+(x-y)^{2})(u+x)} \bigg)^{1+\varepsilon} ~dx =\int_{0}^{-\frac{1}{y}} \bigg( \frac{u}{(u^{2}+(\frac{1}{t})^{2})(u+\frac{1}{t}+y)} \bigg)^{1+\varepsilon} \frac{dt}{t^{2}}\\ &=
\int_{0}^{\frac{1}{|y|}} t^{2\varepsilon} \bigg( \frac{u}{((tu)^{2}+1)(u+\frac{1}{t}+y)} \bigg)^{1+\varepsilon} dt\\ &\le \frac{1}{|y|^{2\varepsilon}}\int_{0}^{\frac{1}{|y|}}\bigg( \frac{u}{((tu)^{2}+1)(u+\frac{1}{t}+y)} \bigg)^{1+\varepsilon} dt. \end{align*} We follow the same strategy as before and consider $$h(t):=\frac{u}{((tu)^{2}+1)(u+y+\frac{1}{t})}.$$
Note that $h$ is differentiable on $[0,\frac{1}{|y|}]$. We calculate $$h^{\prime}(t)=\frac{-u(t^{3}2u^{2}(u+y)+t^{2}u^{2}-1)}{((tu)^{2}+1)^{2}(t(u+y)+1)^{2}}\,,$$
and claim that $h$ is increasing on $[0,\frac{1}{|y|}]$. Therefore consider $$p(t)=t^{3}2u^{2}(u+y)+t^{2}u^{2}-1$$ with derivative $$p^{\prime}(t)=6t^{2}u^{2}(u+y)+2tu^{2}=t2u^{2}(3(u+y)t+1),$$
which vanishes in $t=0$ and in $t_{0}=\frac{-1}{3(u+y)}.$ Note that $t_{0} \in [0,\frac{1}{|y|}]$, whenever $y<-4u$. We have $p(0)=-1$ and $p(\frac{-1}{y})<0$, since $$p\bigg(\frac{-1}{y}\bigg)=\bigg(\frac{u}{y}\bigg)^{2} \bigg(1-\frac{2(u+y)}{y}\bigg)-1<0,$$ provided $-y>2u$. Moreover, $$p(t_{0})=\bigg(\frac{u}{u+y}\bigg)^{2}\bigg(\frac{-2}{27}+\frac{1}{9}\bigg)-1=\frac{1}{27}\bigg(\frac{u}{u+y}\bigg)^{2}-1<0\,,$$ whenever $\bigg(\frac{u}{u+y}\bigg)^{2}\le 27$. But this holds true assuming $y<-4u$, since $$\bigg(\frac{u}{u+y}\bigg)^{2}\le \bigg(\frac{y}{4}\bigg)^{2} \frac{1}{(-y+\frac{y}{4})^{2}}= \frac{1}{9}.$$
Let us summarize: $p$ is negative on the boundary of $[0,\frac{1}{|y|}]$ and has a maximum in $t_{0}$ with $p(t_{0})<0$. Hence $p$ is negative on $[0,\frac{1}{|y|}]$, and consequently $h$ is strictly increasing on $[0,\frac{1}{|y|}]$. So for $y<-4u$ we have \begin{equation} \label{beast2Helson}
\int_{0}^{\infty} \bigg( \frac{u}{(u^{2}+(x-y)^{2})(u+x)} \bigg)^{1+\varepsilon} ~dx \le |y|^{-2\varepsilon} \int_{0}^{\frac{1}{|y|}}\frac{1}{\big(\frac{u}{|y|}\big)^{2}+1} dt\le |y|^{-(1+2\varepsilon)}. \end{equation} Hence (\ref{buchisdaHelson}), (\ref{beast1Helson}) and (\ref{beast2Helson}) imply (\ref{uglycalculationHelson}). Moreover with (\ref{uglycalculationHelson}) and (\ref{trivialboundHelson}) we conclude
\begin{align*}
&\int_{\mathbb{ R}} \bigg(\int_{\mathbb{ R}} \bigg( \frac{P_{u}(t-y)}{|u+it|} \bigg)^{1+\varepsilon} dt \bigg)^{\frac{1}{1+\varepsilon}} dy\\ &=\int_{|y|\le 4u}\bigg(\int_{\mathbb{ R}} \bigg( \frac{P_{u}(t-y)}{|u+it|} \bigg)^{1+\varepsilon} dt\bigg)^{\frac{1}{1+\varepsilon}}dy + \int_{|y|>4u} \bigg(\int_{\mathbb{ R}} \bigg( \frac{P_{u}(t-y)}{|u+it|} \bigg)^{1+\varepsilon} dt\bigg)^{\frac{1}{1+\varepsilon}} dy \\ &\le 8u\bigg(\frac{1}{u}\bigg)^{1+\frac{\varepsilon}{1+\varepsilon}}+ 4\int_{|y|>4u} |y|^{-\frac{1+2\varepsilon}{1+\varepsilon}} dy=8 \bigg(\frac{1}{u}\bigg)^{\frac{\varepsilon}{1+\varepsilon}}+8\frac{1+\varepsilon}{\varepsilon} \bigg(\frac{1}{4u}\bigg)^{\frac{\varepsilon}{1+\varepsilon}}, \end{align*} which completes the proof. \end{proof}
\begin{proof}[Proof of Proposition \ref{continuityHelson}] Let us for simplicity write \begin{equation*}
h(y):=\bigg(\int_{\mathbb{ R}} \bigg( \frac{P_{u}(t-y)}{|u+it|} \bigg)^{1+\varepsilon} dt \bigg)^{\frac{1}{1+\varepsilon}}. \end{equation*} Then applying Minkowski's integral inequality twice we obtain \begin{align*}
&\bigg(\int_{G} \left\|\frac{f_{\omega}*P_{u}}{u+i\pmb{\cdot}} \right\|_{1+\varepsilon}^{p} d\omega \bigg)^{\frac{1}{p}}=\bigg( \int_{G}\bigg( \int_{\mathbb{ R}} \left| \frac{(f_{\omega}*P_{u})(t)}{u+it} \right|^{1+\varepsilon} dt \bigg)^{\frac{p}{1+\varepsilon}} d\omega \bigg)^{\frac{1}{p}}\\ &= \bigg( \int_{G} \bigg( \int_{\mathbb{ R}} \big| \int_{\mathbb{ R}} f_{\omega}(y)\frac{P_{u}(t-y)}{u+it} dy \big|^{1+\varepsilon} dt \bigg)^{\frac{p}{1+\varepsilon}} d\omega \bigg)^{\frac{1}{p}} \\ &\le \bigg(\int_{G} \bigg( \int_{\mathbb{ R}} \bigg( \int_{\mathbb{ R}} |f_{\omega}(y)|^{1+\varepsilon} \left| \frac{P_{u}(t-y)}{u+it} \right|^{1+\varepsilon} dt \bigg)^{\frac{1}{1+\varepsilon}} dy \bigg)^{p} d\omega \bigg)^{\frac{1}{p}}\\ &=\bigg( \int_{G} \bigg( \int_{\mathbb{ R}} |f_{\omega}(y)| h(y) dy \bigg)^{p} d\omega \bigg)^{\frac{1}{p}} \le \int_{\mathbb{ R}} \bigg( \int_{G} |f_{\omega}(y)|^{p} h(y)^{p} d\omega \bigg)^{\frac{1}{p}} dy \\ &\le \|f\|_{p} \int_{\mathbb{ R}} h(y) dy= \|f\|_{p}\int_{\mathbb{ R}}\bigg(\int_{\mathbb{ R}} \bigg( \frac{P_{u}(t-y)}{|u+it|} \bigg)^{1+\varepsilon} dt \bigg)^{\frac{1}{1+\varepsilon}}dy, \end{align*} where the latter integral is finite by Lemma \ref{hardHelson}. Hence $\Psi$ is well-defined and bounded. To prove injectivity we calculate the Fourier coefficients of $\Psi(f)$. First let $f=\sum_{n=1}^{N}a_{n} h_{x_{n}}$. Then for all $x\in \mathbb{ R}$ and all $t \in \mathbb{ R}$
\begin{align*}
\widehat{\Psi(f)}(h_{x})(t)&=\bigg(\int_{G} \Psi(f)(\omega) \overline{h_{x}(\omega)} d\omega\bigg)(t)=\int_{G} \Psi(f)(\omega)(t) \overline{h_{x}(\omega)} d\omega\\ &=\int_{G} \frac{f_{\omega}*P_{u}(t)}{u+it} \overline{h_{x}(\omega)} d\omega= \frac{1}{u+it}\int_{\mathbb{ R}} P_{u}(y) \int_{G} f(\omega\beta(t-y)) \overline{h_{x}(\omega)} d\omega dy \\ &= \frac{1}{u+it}e^{-ixt} \int_{\mathbb{ R}} P_{u}(y)e^{iyx} dy \int_{G} f(\eta) \overline{h_{x}(\eta)} d\eta=\frac{1}{u+it}e^{-u|x|}e^{-ixt} \widehat{f}(h_{x}). \end{align*} Now by density of polynomials and continuity of $\Psi$ we obtain for all $f\in L_{1}(G)$ \begin{equation*}
\widehat{\Psi(f)}(h_{x})(t)=\frac{1}{u+it}e^{-u|x|}e^{-ixt} \widehat{f}(h_{x}). \end{equation*} Hence, assuming $\Psi(f)=0$, we have $\widehat{\Psi(f)}(h_{x})=0$ and so $\widehat{f}(h_{x})=0$ for all $x$, which implies $f=0$. \end{proof}
\subsection{Proof of Proposition \ref{perronHelson}}
To finish the proof of Theorem \ref{maximalineqBCHelson} it remains to calculate the Fourier transform $\mathcal{F}_{L_{1+\varepsilon}(\mathbb{ R})}$ of $\frac{f_{\omega}*P_{u}}{u+i\pmb{\cdot}}$. Observe that this function may fail to be in $L_{1}(\mathbb{ R})$. For instance, if $f=h_{0}$, then $\|\frac{f_{\omega}*P_{u}}{u+i\pmb{\cdot}}\|_{1}=\int_{\mathbb{ R}} \frac{1}{|u+it|} dt=\infty$. Our strategy is to calculate for $k>0$ the Fourier transform of $\frac{f_{\omega}*P_{u}}{(u+i\pmb{\cdot})^{1+k}}$ (which belongs to $L_{1}(\mathbb{ R})$) and then let $k$ tend to zero to obtain Proposition \ref{perronHelson}. First we consider polynomials. \begin{Lemm}\label{kpositiveHelson} Let $g=\sum_{n=1}^{N}a_{n}e^{-i\lambda_{n} \pmb{\cdot}}$ and $k>0$. Then for all $x\in \mathbb{ R}$ \begin{equation}
\frac{\Gamma(k+1)}{2\pi} \mathcal{F}_{L_{1}(\mathbb{ R})}\bigg(\frac{g*P_{u}}{(u+i\pmb{\cdot})^{1+k}}\bigg)(-x)=e^{-u|x|} \sum_{\lambda_{n}<x} a_{n}(x-\lambda_{n})^{k}, \end{equation} where $\Gamma$ denotes the Gamma function. \end{Lemm} \begin{proof} From \cite[Lemma 10, p. 50]{HardyRiesz} we have that for all $\alpha>0$ and $k>0$ \begin{equation}\label{geniusHelson} \frac{\Gamma(k+1)}{2\pi i}\int_{\alpha-i\infty}^{\alpha+i\infty} \frac{e^{ys}}{s^{1+k}} ds = \begin{cases} y^{k}&, \text{if } y\ge 0,\\ 0 &, \text{if } y<0. \end{cases} \end{equation} By linearity it suffices to prove the claim for $g(t)=e^{-\lambda_{n}it}$ for some $n$. Then $g*P_{u}(t)=e^{-(u+it)\lambda_{n}}$ and we obtain \begin{align*} &\frac{\Gamma(k+1)}{2\pi} \mathcal{F}_{L_{1}(\mathbb{ R})}\bigg(\frac{g*P_{u}}{(u+i\pmb{\cdot})^{1+k}}\bigg)(-x)=\frac{\Gamma(k+1)}{2\pi} \int_{\mathbb{ R}} \frac{e^{-(u+it)\lambda_{n}}}{(u+it)^{1+k}} e^{xit} dt \\ &=\frac{\Gamma(k+1)}{2\pi} e^{-xu} \int_{\mathbb{ R}} \frac{e^{(x-\lambda_{n})(u+it)}}{(u+it)^{1+k}} dt= \frac{\Gamma(k+1)}{2\pi i} \int_{u-i\infty}^{u+i\infty}\frac{e^{(x-\lambda_{n})s}}{s^{1+k}} ds, \end{align*} which by (\ref{geniusHelson}) with $\alpha=u$ equals $(x-\lambda_{n})^{k}$, whenever
$x>\lambda_{n}$, and else vanishes. \end{proof}
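As a quick numerical sanity check of \eqref{geniusHelson} (a sketch, not part of the argument: it uses plain trapezoidal quadrature on the truncated line $[\alpha-iT,\alpha+iT]$, and the parameter values below are arbitrary illustrative choices):

```python
import cmath
import math

def perron(y, alpha, k=1.0, T=500.0, n=200000):
    """Approximate Gamma(k+1)/(2*pi*i) * int_{alpha-iT}^{alpha+iT}
    e^{y s} / s^{1+k} ds by the trapezoidal rule.  The full integral
    equals y**k for y >= 0 and 0 for y < 0; the truncation error is
    of order e^{y*alpha}/T."""
    h = 2.0 * T / n
    total = 0j
    for j in range(n + 1):
        s = complex(alpha, -T + j * h)      # point on the vertical line
        w = 0.5 if j in (0, n) else 1.0     # trapezoidal weights
        total += w * cmath.exp(y * s) / s ** (1.0 + k)
    # ds = i dt, so the i from ds cancels the i in 1/(2*pi*i)
    return math.gamma(k + 1.0) / (2.0 * math.pi) * total * h

# y = 2 with k = 1 should give approximately 2, a negative y near 0
print(abs(perron(2.0, 1.0)), abs(perron(-1.0, 1.0)))
```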
\begin{Lemm} \label{transformpolyHelson} Let $g=\sum_{n=1}^{N}a_{n}e^{-i\lambda_{n} \pmb{\cdot}}$ and $0<\varepsilon\le 1$. Then for almost every $x\in \mathbb{ R}$
$$\mathcal{F}_{L_{1+\varepsilon}(\mathbb{ R})}\bigg(\frac{g*P_{u}}{u+i\pmb{\cdot}}\bigg)(-x)=e^{-u|x|} \sum_{\lambda_{n}<x} a_{n}.$$ \end{Lemm} \begin{proof} Observe that $\frac{g*P_{u}}{u+i\pmb{\cdot}} \in L_{1+\varepsilon}(\mathbb{ R})$ and $\frac{g*P_{u}}{(u+i\pmb{\cdot})^{1+k}} \in L_{p}(\mathbb{ R})$ for all $k>0$ and $p\ge 1$. The dominated convergence theorem implies $\lim_{k\to 0}\frac{g*P_{u}}{(u+i\pmb{\cdot})^{1+k}}=\frac{g*P_{u}}{u+i\pmb{\cdot}}$ in $L_{1+\varepsilon}(\mathbb{ R})$. Now by continuity of the Fourier transform and Lemma \ref{kpositiveHelson} \begin{align*}
&\mathcal{F}_{L_{1+\varepsilon}(\mathbb{ R})}\bigg(\frac{g*P_{u}}{u+i\pmb{\cdot}}\bigg)=\lim_{k\to 0}\mathcal{F}_{L_{1+\varepsilon}(\mathbb{ R})}\bigg(\frac{g*P_{u}}{(u+i\pmb{\cdot})^{1+k}}\bigg)=\lim_{k\to 0}\mathcal{F}_{L_{1}(\mathbb{ R})}\bigg(\frac{g*P_{u}}{(u+i\pmb{\cdot})^{1+k}}\bigg)\\ &= \lim_{k\to 0}C(k)\, e^{-u|\pmb{\cdot}|} \sum_{\lambda_{n}<\pmb{\cdot}} a_{n}(\pmb{\cdot}-\lambda_{n})^{k}=C(0)\, e^{-u|\pmb{\cdot}|} \sum_{\lambda_{n}<\pmb{\cdot}} a_{n}, \end{align*} with $C(k)=\frac{2\pi}{\Gamma(k+1)}$ and convergence in $L_{q}(\mathbb{ R})$, where $\frac{1}{1+\varepsilon}+\frac{1}{q}=1$. \end{proof}
\begin{proof}[Proof of Proposition \ref{perronHelson}] Let $(P^{n})$ be a sequence of polynomials from $H_{1}^{\lambda}(G)$ converging to $f$ (see \cite[Proposition 3.14]{DefantSchoolmann2}). Then $\lim_{n\to \infty} \Psi(P^{n})=\Psi(f)$ by Proposition \ref{continuityHelson} and so there is a subsequence $(n_{k})$ such that $\lim_{k\to \infty} \frac{P^{n_{k}}_{\omega}*P_{u}}{u+i\pmb{\cdot}}=\frac{f_{\omega}*P_{u}}{u+i\pmb{\cdot}}$ in $L_{1+\varepsilon}(\mathbb{ R})$ for almost all $\omega \in G$. Hence by continuity of the Fourier transform and Lemma \ref{transformpolyHelson} \begin{align*}
&\mathcal{F}_{L_{1+\varepsilon}(\mathbb{ R})}\bigg(\frac{f_{\omega}*P_{u}}{u+i\pmb{\cdot}}\bigg)=\lim_{k\to \infty} \mathcal{F}_{L_{1+\varepsilon}(\mathbb{ R})}\bigg(\frac{P^{n_{k}}_{\omega}*P_{u}}{u+i\pmb{\cdot}}\bigg)\\ & =\lim_{k\to \infty} e^{-u|\pmb{\cdot}|} \sum_{\lambda_{n}<\pmb{\cdot}} \widehat{P^{n_{k}}}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega)=e^{-u|\pmb{\cdot}|} \sum_{\lambda_{n}<\pmb{\cdot}} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega). \qedhere \end{align*} \end{proof}
\section{\bf Applications} \label{bohrstheoremsectionHelson}
In this final section we give several applications of the results of the preceding sections.
\subsection{Bohr's theorem and its equivalent formulations} Suppose that $D=\sum a_{n}e^{-\lambda_{n}s}$ converges somewhere and that its limit function extends to a bounded and holomorphic function $f$ on $[Re>0]$. Then a prominent problem from the beginning of the 20th century was to determine the class of $\lambda$'s for which, under this assumption, all $\lambda$-Dirichlet series converge uniformly on $[Re>\varepsilon]$ for every $\varepsilon>0$.
{\noindent \bf Bohr's theorem.}
We say that $\lambda$ satisfies 'Bohr's theorem' if the answer to the preceding problem is affirmative, and Bohr indeed proves in \cite{Bohr} that all frequencies with his property $(BC)$ belong to this class.
We denote by $\mathcal{D}^{ext}_{\infty}(\lambda)$ the space of all somewhere convergent $D\in \mathcal{D}(\lambda)$ which have a limit function extending to a bounded and holomorphic function $f$ on $[Re>0]$. It is then immediate that $\lambda$ satisfies Bohr's theorem if and only if every $D \in \mathcal{D}^{ext}_{\infty}(\lambda)$ converges uniformly on $[Re>\varepsilon]$ for every $\varepsilon>0$.
As proven in \cite[Corollary 3.9]{Schoolmann}, the linear space $\mathcal{D}^{ext}_{\infty}(\lambda)$
together with $\|D\|_{\infty}=\sup_{[Re>0]}|f(s)|$ forms a normed space. The isometric subspace of all $D\in \mathcal{D}^{ext}_{\infty}(\lambda)$, which converge on $[Re>0]$, is denoted by $\mathcal{D}_{\infty}(\lambda)$. Note that $\mathcal{D}_{\infty}(\lambda)=\mathcal{D}_{\infty}^{ext}(\lambda)$, whenever Bohr's theorem holds for $\lambda$.
Later in \cite{Landau} Landau improved Bohr's result by showing that the weaker condition $(LC)$ is sufficient for Bohr's theorem. More generally, we know from \cite[Remark 4.8.]{Schoolmann} that Bohr's theorem holds for $\lambda$ in each of the following 'testable' cases:
\begin{itemize}
\item
$\lambda$ is $\mathbb{Q}$-linearly independent,
\item $L(\lambda):=\limsup_{n\to \infty} \frac{\log n}{\lambda_{n}}=0$, \item
$\lambda$ fulfills (LC) (and in particular, if it fulfills (BC)).
\end{itemize}
In particular, the frequency $\lambda =(\log n)$ satisfies Bohr's theorem which constitutes one of the fundamental tools within the theory of ordinary Dirichlet series $\sum a_{n}n^{-s}$ (see e.g. \cite[Theorem 1.13, p. 21]{Defant} or \cite[Theorem 6.2.2., p. 143]{QQ}).
{\noindent \bf Completeness.} In general, $\mathcal{D}_{\infty}(\lambda)$ as well as $\mathcal{D}^{ext}_{\infty}(\lambda)$ may fail to be complete. See \cite[Theorem 5.2]{Schoolmann} for a generic example of such $\lambda$'s. Let us recall \cite[Theorem 5.1]{Schoolmann}, where we prove that $\mathcal{D}_{\infty}(\lambda)$ (and consequently also $\mathcal{D}^{ext}_{\infty}(\lambda)$, see Theorem~\ref{equivalenceHelson}) is complete under each of the following concrete conditions:
\begin{itemize}
\item
$\lambda$ is $\mathbb{Q}$-linearly independent,
\item $L(\lambda)=0$,
\item
$\lambda$ fulfills (LC) and $L(\lambda)<\infty$ (and in particular, if it fulfills (BC)).
\end{itemize}
{\noindent \bf Coincidence.} From \cite[Section 2.5]{DefantSchoolmann3} we know that for any $\lambda$ there is an isometric linear map \begin{equation} \label{isometricembeddingAHelson} \mathcal{A} \colon \mathcal{D}^{ext}_{\infty}(\lambda) \hookrightarrow H_{\infty}^{\lambda}(G),~~ D\mapsto f \end{equation} such that $a_{n}(D)=\widehat{f}(h_{\lambda_{n}})$ for all $n$. Hence $\mathcal{D}_{\infty}^{ext}(\lambda)$, and so also $\mathcal{D}_{\infty}(\lambda)$, actually are isometric subspaces of $\mathcal{H}_{\infty}(\lambda)$.
Clearly, if $\mathcal{D}_{\infty}(\lambda)$ or $\mathcal{D}^{ext}_{\infty}(\lambda)$ are not complete,
then $\mathcal{D}_{\infty}^{ext}(\lambda) \varsubsetneq \mathcal{H}_{\infty}(\lambda)$ or $\mathcal{D}_{\infty}(\lambda) \varsubsetneq \mathcal{H}_{\infty}(\lambda)$, respectively. On the other hand, in the case of the two most prominent examples $\lambda = (n)$ and $\lambda = (\log n)$ we have 'coincidence': \[ \text{$\mathcal{D}_{\infty}((n))=\mathcal{H}_{\infty}((n))$ and $\mathcal{D}_{\infty}((\log n))=\mathcal{H}_{\infty}((\log n))$;} \] the first result is straightforward, the second one is a fundamental result from \cite{HLS} (see also \cite[Corollary 5.3]{Defant}). More generally, \cite[Theorem 4.12]{DefantSchoolmann2} shows that the isometric 'coincidence' $\mathcal{D}_{\infty}(\lambda) = \mathcal{H}_{\infty}(\lambda)$ holds, whenever
\begin{itemize}
\item \text{$L(\lambda) < \infty$ and $\mathcal{D}^{ext}_{\infty}(\lambda)= \mathcal{D}_{\infty}(\lambda)$ (so if e.g. $\lambda$ satisfies Bohr's theorem).}
\end{itemize}
We come to the main point of this subsection -- Bohr's theorem, completeness, and coincidence generate the same class of frequencies.
\begin{Theo} \label{equivalenceHelson} Let $\lambda$ be an arbitrary frequency. Then the following are equivalent:
\begin{itemize}
\item[(a)]
$\lambda$ satisfies Bohr's theorem,
\item[(b)] $\mathcal{D}_{\infty}(\lambda)$ is complete, \item[(c)]
$\mathcal{D}_{\infty}(\lambda) = \mathcal{H}_{\infty}(\lambda)$ isometrically.
\end{itemize}
\end{Theo}
Note that each of the equivalent statements (a), (b), and (c) of Theorem \ref{equivalenceHelson} trivially implies that $\mathcal{D}_{\infty}(\lambda)=\mathcal{D}^{ext}_{\infty}(\lambda) = \mathcal{H}_{\infty}(\lambda)$ (look at (c) and \eqref{isometricembeddingAHelson}), and hence
in this case $\mathcal{D}^{ext}_{\infty}(\lambda)$ is complete. But we do not know whether in general completeness of $\mathcal{D}^{ext}_{\infty}(\lambda)$
implies completeness of $\mathcal{D}_{\infty}(\lambda)$, which would allow us to replace $\mathcal{D}_{\infty}(\lambda)$ in Theorem \ref{equivalenceHelson} by $\mathcal{D}^{ext}_{\infty}(\lambda)$. In this context we would like to mention that an example of Neder from \cite{Neder} shows that in general $\mathcal{D}_{\infty}(\lambda)$ is not a closed subspace of $\mathcal{D}_{\infty}^{ext}(\lambda)$.
For the proof of Theorem~\ref{equivalenceHelson} we need some preparation, and start with the following simple consequence of the principle of uniform boundedness.
\begin{Lemm} \label{previousHelson}Assume that $\mathcal{D}_{\infty}(\lambda)$ is complete, and $\varepsilon>0$. Then there is a constant $C=C(\varepsilon)$ such that for all $D \in \mathcal{D}_{\infty}(\lambda)$ \begin{equation*}
\sup_{N}\big\| \sum_{n=1}^{N} a_{n}(D)e^{-\varepsilon\lambda_{n}}e^{-\lambda_{n}s}\big\|_{\infty} \le C\|D\|_{\infty}. \end{equation*} \end{Lemm}
\begin{proof} Define for every $N$ the functional \begin{equation*} T_{N}\colon \mathcal{D}_{\infty}(\lambda)\to \mathbb{C}, \quad T_{N}(D)=\sum_{n=1}^{N}a_{n}(D)e^{-\varepsilon\lambda_{n}}. \end{equation*} Then each $T_{N}$ is continuous and $\lim_{N}T_{N}(D)=D(\varepsilon)$ exists. Hence by the principle of uniform boundedness (here completeness of $\mathcal{D}_{\infty}(\lambda)$ is essential) there is a constant $C>0$ such that \begin{equation*}
\sup_{N} \|T_{N}\|\le C<\infty, \end{equation*} that is, for all $D\in \mathcal{D}_{\infty}(\lambda)$ we have \begin{equation} \label{bananeHelson}
\sup_{N} \big| \sum_{n=1}^{N} a_{n}(D)e^{-\lambda_{n}\varepsilon} \big|\le C\|D\|_{\infty}. \end{equation} Now let $D\in \mathcal{D}_{\infty}(\lambda)$. Applying (\ref{bananeHelson}) to $D_{z}$, which belongs to $\mathcal{D}_{\infty}(\lambda)$ for all $z\in [Re>0]$, we obtain \begin{equation*}
\sup_{z\in [Re>0]} \sup_{N} \big| \sum_{n=1}^{N} a_{n}e^{-\lambda_{n}z}e^{-\lambda_{n}\varepsilon} \big|\le C \sup_{z\in [Re>0]}\|D_{z}\|_{\infty}\le C\|D\|_{\infty}, \end{equation*} which proves the claim. \end{proof}
The second lemma is crucial, and in fact a consequence of the Helson-type Theorem \ref{DirichletintegerHelson} (compare this with \cite[Propositions 4.3 and 4.5]{DefantSchoolmann2}).
\begin{Lemm} \label{complHelson} Let $\lambda$ be an arbitrary frequency and $D\in \mathcal{H}_{\infty}(\lambda)$. Then for every $\lambda$-Dirichlet group $(G,\beta)$
almost all vertical limits $D^{\omega}$ belong to $\mathcal{D}_{\infty}(\lambda)$, and $\|D^{\omega}\|_{\mathcal{D}_{\infty}(\lambda)}=\|D\|_{\mathcal{H}_{\infty}(\lambda)}$. \end{Lemm} \begin{proof} Let $f \in H_{\infty}^{\lambda}(G)$ be the function associated to $D$, i.e. $\mathcal{B}(f)=D$. Since $\mathcal{H}_{\infty}(\lambda)\subset \mathcal{H}_{2}(\lambda)$ and the function $f_{\omega}*P_{u}$ is continuous, Theorem \ref{DirichletintegerHelson} implies that $D^{\omega}$ converges on $[Re>0]$ and $D^{\omega}(u+it)=f_{\omega}*P_{u}(t)$ for all $t\in \mathbb{ R}$ and $u>0$. Hence
\[\sup_{[Re>u]} |D^{\omega}(s)|=\sup_{[Re=u]} |D^{\omega}(s)|\le \|f_{\omega}*P_{u}\|_{\infty}\le \|f_{\omega}\|_{\infty} \leq \|f\|_{\infty},\]
and so $D^{\omega}\in \mathcal{D}_{\infty}(\lambda)$ with $\|D^{\omega}\|_{\mathcal{D}_{\infty}(\lambda)}\le \|f\|_{\infty} = \|D\|_{\mathcal{H}_{\infty}(\lambda)}$. Moreover, by \cite[Propositions 4.3]{DefantSchoolmann2} and \eqref{isometricembeddingAHelson} we have that $\|D\|_{\mathcal{H}_{\infty}(\lambda)}= \|D^{\omega}\|_{\mathcal{H}_{\infty}(\lambda)}
= \|D^\omega\|_{\mathcal{D}_{\infty}(\lambda)}$. \end{proof}
The third and final ingredient we need for the proof of Theorem~\ref{equivalenceHelson} is a 'Bohr-Cahen formula' for the abscissa of uniform convergence for general Dirichlet series. Given a Dirichlet series $D=\sum a_{n}e^{-\lambda_{n}s}$, the abscissa $\sigma_{u}(D)$ of uniform convergence is defined to be the infimum over all $\sigma \in \mathbb{ R}$ such that $D$ converges uniformly on $[Re > \sigma]$. The following convenient estimate for $\sigma_{u}(D)$ is proved in \cite[Corollary 2.5]{Schoolmann}: \begin{equation}\label{alwaysBohrcahenHelson}
\sigma_{u}(D)\le \limsup_{N\to \infty} \frac{\log\bigg( \sup_{t\in \mathbb{ R}} \big|\sum_{n=1}^{N} a_{n}e^{-it\lambda_{n}}\big|\bigg)}{\lambda_{N}}. \end{equation}
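As a quick illustration (a side remark on the classical case): for the ordinary series $D=\sum n^{-s}$, i.e. $a_{n}=1$ and $\lambda=(\log n)$, the supremum in \eqref{alwaysBohrcahenHelson} equals $N$ (take $t=0$), and hence
\begin{equation*}
\sigma_{u}(D)\le \limsup_{N\to \infty} \frac{\log N}{\log N}=1,
\end{equation*}
which for this series is precisely the abscissa of uniform convergence.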
\begin{proof}[Proof of Theorem~\ref{equivalenceHelson}] In a first step we prove the equivalence $(b) \Leftrightarrow (c)$: Obviously, $(c)$ implies $(b)$. So assume that $(b)$ holds, and let $D\in \mathcal{H}_{\infty}(\lambda)$. Then $D^{\omega}\in \mathcal{D}_{\infty}(\lambda)$ for some $\omega \in G$ by Lemma \ref{complHelson}. Applying \cite[Proposition 3.4, $k=1$]{Schoolmann}, for every $\varepsilon>0$ the Dirichlet polynomials \begin{equation*} R_{x}(D^{\omega}_{\varepsilon})=\sum_{\lambda_{n}<x} a_{n}(D)e^{-\varepsilon\lambda_{n}} h_{\lambda_{n}}(\omega) \bigg(1-\frac{\lambda_{n}}{x}\bigg) e^{-\lambda_{n}s} \end{equation*} converge uniformly to $D^{\omega}_{\varepsilon}$ on $[Re>0]$. Hence, by \cite[Corollary 4.4]{DefantSchoolmann2} (Dirichlet polynomials in $\mathcal{D}_\infty(\lambda)$ and their vertical limits have the same norm)
\begin{equation*} R_{x}(D_{\varepsilon})=\sum_{\lambda_{n}<x} a_{n}(D)e^{- \varepsilon\lambda_{n}} \bigg(1-\frac{\lambda_{n}}{x}\bigg) e^{-\lambda_{n}s}, ~x>0, \end{equation*}
define a Cauchy net in $\mathcal{D}_{\infty}(\lambda)$. By $(b)$ the net $(R_{x}(D_{\varepsilon}))$ has a limit in $\mathcal{D}_{\infty}(\lambda)$, namely $D_{\varepsilon}$ with $a_{n}(D_{\varepsilon})=a_{n}(D)e^{-\varepsilon\lambda_{n}}$ for all $n$ and $\|D_{\varepsilon}\|_{\mathcal{D}_{\infty}(\lambda)}\le \|D\|_{\mathcal{H}_{\infty}(\lambda)}$ for all $\varepsilon>0$. Hence, as desired, $D \in \mathcal{D}_{\infty}(\lambda)$.
In a second step, we check that $(a) \Leftrightarrow (c)$, and start with the implication $(a) \Rightarrow (c)$. So let again $D \in \mathcal{H}_{\infty}(\lambda)$. We have to show that $D \in \mathcal{D}_{\infty}(\lambda)$. By Lemma \ref{complHelson} there is some $\lambda$-Dirichlet group $(G, \beta)$ and some $\omega \in G$ such that $D^{\omega}\in \mathcal{D}_{\infty}(\lambda)$
and $\|D\|_{\mathcal{H}_\infty(\lambda)} = \|D^\omega\|_{\mathcal{D}_\infty(\lambda)} $. We denote by $D_N$ the $N$th partial sum of $D$, and by $D_{N,\varepsilon}$ its horizontal translation by $\varepsilon >0$. Then, for every $\varepsilon>0$, assuming Bohr's theorem for $\lambda$, the sequence $(D^{\omega}_{N,\varepsilon})$ converges to $D^{\omega}_\varepsilon$ in $\mathcal{D}_\infty(\lambda)$. By \cite[Corollary 4.4]{DefantSchoolmann2} we know that \begin{equation*}
\sup_{N}\sup_{t\in \mathbb{ R}} | D_{N,\varepsilon}(it)|= \sup_{N}\sup_{t\in \mathbb{ R}} | D^\omega_{N,\varepsilon}(it)|<\infty, \end{equation*} which by \eqref{alwaysBohrcahenHelson} implies that $\sigma_{u}(D)\le 0$. So $D$ converges on the right half-plane, and it remains to show that the limit function of $D$ is bounded on all of $[Re >0]$. Indeed, if $\varepsilon>0$, then for large $N$ (again by \cite[Corollary 4.4]{DefantSchoolmann2}) \begin{align*}
\|D_{N,\varepsilon}\|_{\mathcal{D}_\infty(\lambda)} &
= \|D^\omega_{N,\varepsilon}\|_{\mathcal{D}_\infty(\lambda)} \\&
\leq 1 + \|D^\omega_{\varepsilon}\|_{\mathcal{D}_\infty(\lambda)}
\leq 1+ \|D^\omega\|_{\mathcal{D}_\infty(\lambda)} = 1+ \|D\|_{\mathcal{H}_\infty(\lambda)}\,. \end{align*}
Hence $\|D\|_{\mathcal{D}_\infty(\lambda)} \leq 1 + \|D \|_{\mathcal{H}_\infty(\lambda)} < \infty$, which is the desired conclusion.
Assume conversely that $(c)$ holds, that is, $\mathcal{D}_\infty(\lambda) = \mathcal{H}_\infty(\lambda)$. Then
$\mathcal{D}_\infty(\lambda)$ is complete and by \eqref{isometricembeddingAHelson} we have $\mathcal{D}_{\infty}^{ext}(\lambda)=\mathcal{D}_{\infty}(\lambda)$. In order to check $(a)$ take some $D\in \mathcal{D}_{\infty}^{ext}(\lambda)$; we have to show that $\sigma_{u}(D)\le 0$.
Indeed, by Lemma~\ref{previousHelson} and another application of the Bohr-Cahen formula~\eqref{alwaysBohrcahenHelson} we know that $\sigma_{u}(D_\varepsilon)\le 0 $ for all $\varepsilon>0$, which implies $\sigma_{u}(D)\le 0$. \end{proof}
\begin{Rema} \label{extHelson} A simple analysis of the previous proof shows that the equivalence of (b) and (c) in Theorem \ref{equivalenceHelson} holds true if we replace $\mathcal{D}_{\infty}(\lambda)$ by $\mathcal{D}_{\infty}^{ext}(\lambda)$; that is, for any frequency $\lambda$ we have that
$\mathcal{D}^{ext}_{\infty}(\lambda)$ is complete if and only if $\mathcal{D}_{\infty}^{ext}(\lambda)=\mathcal{H}_{\infty}(\lambda)$. Indeed, if we assume that $\mathcal{D}^{ext}_{\infty}(\lambda)$ is complete, then in particular for $\varepsilon=1$ the sequence $(R_{x}(D_{1}))$ has a limit $D_{1}\in \mathcal{D}_{\infty}^{ext}(\lambda)$. Hence $\sigma_{c}(D_{1})<\infty$, which implies $\sigma_{c}(D)<\infty$ and so $D\in \mathcal{D}_{\infty}^{ext}(\lambda)$. Again, we do not know whether completeness of $\mathcal{D}^{ext}_{\infty}(\lambda)$ implies that $\lambda$ satisfies Bohr's theorem. \end{Rema}
Let us apply Theorem \ref{equivalenceHelson} to the concrete frequency $\lambda = (\sqrt{\log n})$, which obviously satisfies $(LC)$ and hence fulfills Bohr's theorem. Then, although in this case $L((\sqrt{\log n}))=+\infty$ (!), we may conclude the following (apparently non-trivial) application.
\begin{Coro} $\mathcal{D}_{\infty}((\sqrt{\log n})) = \mathcal{H}_{\infty}((\sqrt{\log n}))$, and $\mathcal{D}_{\infty}((\sqrt{\log n}))$ is complete. \end{Coro}
\subsection{Norm of the partial sum operator in $\pmb{\mathcal{H}_{\infty}(\lambda)}$}
Recall from above that Bohr's theorem holds for $\lambda=(\log n)$, and that a quantitative variant of this (see again \cite[Theorem 6.2.2., p. 143]{QQ} or \cite[Theorem 1.13, p. 21]{Defant}) reads as follows: There is a constant $C>0$ such that for every $D\in \mathcal{D}_{\infty}((\log n))$ and $N$ \begin{equation} \label{ordinaryBohrquantiHelson}
\big\| \sum_{n=1}^{N} a_{n} n^{-s} \big\|_{\infty} \le C\log(N) \|D\|_{\infty}. \end{equation} Given an arbitrary frequency $\lambda$, we are interested
in establishing quantitative variants of Bohr's theorem in the sense of (\ref{ordinaryBohrquantiHelson}), and this means to control the norm of the partial sum operator \begin{equation*} S_{N}\colon \mathcal{D}^{ext}_{\infty}(\lambda) \to \mathcal{D}_{\infty}(\lambda), ~~ D \mapsto \sum_{n=1}^{N}a_{n}(D) e^{-\lambda_{n}s}. \end{equation*} The main result of \cite[Theorem 3.2]{Schoolmann} is then, that for all $0<k\le 1$, $D=\sum a_{n}e^{-\lambda_{n}s}\in \mathcal{D}^{ext}_{\infty}(\lambda)$ and $N$ we have \begin{equation} \label{normofSNHelson}
\big\|\sum_{n=1}^{N} a_{n}e^{-\lambda_{n}s}\big\|_{\infty}\le C \frac{\Gamma(k+1)}{k} \bigg(\frac{\lambda_{N}}{\lambda_{N+1}-\lambda_{N}}\bigg)^{k} \|D\|_{\infty}, \end{equation} where $C$ is an absolute constant and $\Gamma$ denotes the Gamma function. The case $p=\infty$ of Lemma \ref{jojHelson} extends (\ref{normofSNHelson}) from $\mathcal{D}^{ext}_{\infty}(\lambda)$ to $\mathcal{H}_{\infty}(\lambda)$.
\begin{Theo} \label{coro22Helson} Let $\lambda$ be an arbitrary frequency. Then for all $D\in \mathcal{H}_{\infty}(\lambda)$, all $0<k\le 1$ and all $N$ we have \begin{equation*}
\big\|\sum_{n=1}^{N} a_{n}(D) e^{-\lambda_{n}s} \big\|_{\infty} \le \frac{C}{k} \bigg(\frac{\lambda_{N+1}}{\lambda_{N+1}-\lambda_{N}}\bigg)^{k} \|D\|_{\infty}, \end{equation*} where $C>0$ is a universal constant. \end{Theo} In particular, assuming $(LC)$ (respectively, (BC)) for $\lambda$ and choosing $k_{N}=e^{-\delta\lambda_{N}}$ (respectively, $k_{N}=\lambda_{N}^{-1}$) we deduce from Theorem \ref{coro22Helson} (see also again \eqref{(A)Helson}) the following quantitative variants of Bohr's theorem in $\mathcal{H}_{\infty}(\lambda)$. See \cite[Section 4]{Schoolmann} for the corresponding results for $\mathcal{D}^{ext}_{\infty}(\lambda)$. \begin{Coro} \label{coro1Helson} Let $(LC)$ hold for $\lambda$. Then to every $\delta>0$ there is a constant $C=C(\delta)$ such that for all $D\in \mathcal{H}_{\infty}(\lambda)$ and $N$ \begin{equation*}
\big\|\sum_{n=1}^{N} a_{n}(D) e^{-\lambda_{n}s} \big\|_{\infty} \le Ce^{\delta\lambda_{N}} \|D\|_{\infty}. \end{equation*} If $\lambda$ satisfies (BC), then for every $D\in \mathcal{H}_{\infty}(\lambda)$ and $N$ \begin{equation*}
\big\|\sum_{n=1}^{N} a_{n}(D)e^{-\lambda_{n}s}\big \|_{\infty} \le C_{1}\lambda_{N} \|D\|_{\infty}. \end{equation*} with an absolute constant $C_{1}>0$. \end{Coro} \begin{proof}[Proof of Theorem \ref{coro22Helson}] Let us for simplicity write $C=C(k,N):=\frac{1}{k}\bigg(\frac{\lambda_{N+1}}{\lambda_{N+1}-\lambda_{N}}\bigg)^{k}.$ Then for all $\omega \in G$ with $T_{\max}$ from Lemma \ref{jojHelson} we have \begin{align*}
\big|\sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}}) h_{\lambda_{n}}(\omega)\big|=C C^{-1}\big|\sum_{n=1}^{N} \widehat{f}(h_{\lambda_{n}})h_{\lambda_{n}}(\omega)\big|\le C T_{\max}(f)(\omega), \end{align*} and so the claim follows, since $T_{\max}\colon H_{\infty}^{\lambda}(G) \to L_{\infty}(G)$ is bounded. \end{proof}
\subsection{Montel theorem} In the case of ordinary Dirichlet series, that is, series with frequency $\lambda=(\log n)$, Bayart in \cite{Bayart} (see also \cite[Theorem 3.11]{Defant} or \cite[Theorem 6.3.1]{QQ}) proves an important Montel-type theorem in $\mathcal{H}_{\infty}=\mathcal{D}_{\infty}((\log n))$: For every bounded sequence $(D^{j})$ in $\mathcal{D}_{\infty}((\log n))$ there are a subsequence $(D^{j_{k}})$ and $D\in \mathcal{D}_{\infty}((\log n))$ such that $(D^{j_{k}})$ converges uniformly to $D$ on $[Re>\varepsilon]$ for every $\varepsilon>0$.
Bayart's Montel theorem extends to $\lambda$-Dirichlet series whenever $\lambda$ satisfies (LC) or $L(\lambda)=0$, or is $\mathbb{Q}$-linearly independent (see \cite[Theorem 4.10]{Schoolmann}). Moreover, as proven in \cite[Theorem 4.19]{DefantSchoolmann2}, under one of the assumptions [(LC) and $L(\lambda)<\infty$, or $L(\lambda)=0$, or $\mathbb{Q}$-linear independence] it extends from $\mathcal{D}_{\infty}(\lambda)$ to $\mathcal{H}_{p}(\lambda)$.
We prove a considerable extension of all this. A consequence of Theorem \ref{equivalenceHelson} shows that Bayart's Montel theorem holds for every frequency $\lambda$ which satisfies Bohr's theorem (or equivalently (b) or (c) from Theorem \ref{equivalenceHelson}).
\begin{Theo} \label{MontelHelson} Assume that Bohr's theorem holds for $\lambda$ and $1\le p \le \infty$. Then for every bounded sequence $(D^{j})$ in $\mathcal{H}_{p}(\lambda)$ there are a subsequence $(D^{j_{k}})_{k}$ and $D\in \mathcal{H}_{p}(\lambda)$ such that $(D^{j_{k}}_{\varepsilon})$ converges to $D_{\varepsilon}$ in $\mathcal{H}_{p}(\lambda)$ for every $\varepsilon>0$. The same result holds true, if we replace $\mathcal{H}_{p}(\lambda)$ by $\mathcal{D}_{\infty}(\lambda)$. \end{Theo}
We follow the same strategy as in the proof of \cite[Theorem 4.19]{DefantSchoolmann2}. We first prove Theorem \ref{MontelHelson} for $\mathcal{D}_{\infty}(\lambda)$, and then, using some vector valued arguments, we extend this result to $\mathcal{H}_{p}(\lambda)$.
Therefore, let us recall that, given a frequency $\lambda$ and a Banach space $X$, we denote by $\mathcal{D}_{\infty}(\lambda,X)$ the linear space of all Dirichlet series $D=\sum a_{n}e^{-\lambda_{n}s}$ which have coefficients $(a_{n})\subset X$ and which converge and define a bounded function on $[Re>0]$ (this function then being holomorphic with values in $X$).
A result from \cite{vectorvalued} states that for any non-trivial Banach space $X$, the space $\mathcal{D}_{\infty}(\lambda)$ is complete if and only if $\mathcal{D}_{\infty}(\lambda,X)$ is complete (again endowed with sup norm on $[Re >0]$).
Moreover, a standard Hahn-Banach argument shows that Lemma~\ref{previousHelson} extends from the scalar-valued case to the vector-valued case: Given $\varepsilon >0$, there is a constant $C=C(\varepsilon) >0$ such that for every Banach space $X$ and every $D=\sum a_{n}e^{-\lambda_{n}s} \in \mathcal{D}_{\infty}(\lambda,X)$ \begin{equation} \label{translationinftyHelson}
\sup_{N}\big\| \sum_{n=1}^{N} a_{n}e^{-\varepsilon\lambda_{n}}e^{-\lambda_{n}s}\big\|_{\infty} \le C\|D\|_{\infty}\,, \end{equation} provided that $\mathcal{D}_{\infty}(\lambda)$ is complete, or equivalently $\lambda$ satisfies
Bohr's theorem (Theorem \ref{equivalenceHelson}). Indeed, apply Lemma~\ref{previousHelson} to the Dirichlet series
$x^\ast \circ D=\sum x^{\ast}(a_{n})e^{-\lambda_{n}s} \in \mathcal{D}_{\infty}(\lambda),\, x^\ast \in X^\ast$, and use a standard Hahn-Banach argument.
\begin{proof}[Proof of Theorem \ref{MontelHelson}] We first assume that $p=\infty$, so that by assumption and Theorem \ref{equivalenceHelson} we have that $\mathcal{D}_{\infty}(\lambda)=\mathcal{H}_{\infty}(\lambda)$. Moreover, we at first look at a bounded sequence $(D^{j})$ in $\mathcal{D}_{\infty}(\lambda)$, and denote the coefficients of $D^{j}$ by $(a_{n}^{j})_{n}$. So, by \cite[Corollary 3.9]{Schoolmann} there is a constant $C>0$ such that for all $n$, $j$ \begin{equation}
|a_{n}^{j}|\le \|D^{j}\|_{\infty}\le \sup_{j} \|D^{j}\|_{\infty}\le C <\infty. \end{equation} Hence by a diagonal process we find a subsequence $(j_{k})_{k}$ such that $\lim_{k\to \infty} a^{j_{k}}_{n}=:a_{n}$ exists for all $n$. Moreover, applying \eqref{translationinftyHelson} we obtain for every $\varepsilon>0$ a constant $C_1 = C_1(\varepsilon)>0$ such that for all $N$ \begin{equation*}
\sup_{k} \big\| \sum_{n=1}^{N} a_{n}^{j_{k}} e^{-\varepsilon\lambda_{n}}e^{-\lambda_{n}s}\big\|_{\infty}\le C_{1}\sup_{k}\|D^{j_{k}}\|_{\infty}<C_{1}C<\infty\,. \end{equation*}
Hence with $D=\sum a_{n}e^{-\lambda_{n}s}$, by \cite[Proposition 2.4]{Schoolmann} we obtain that $(D^{j_{k}}_{\varepsilon})$ converges uniformly to $D_{\varepsilon}$ on $[Re>\delta]$ for every $\delta>0$, which proves the claim for $\mathcal{D}_{\infty}(\lambda)$. Now let $1\le p < \infty$ and $(D^{j})$ a bounded sequence in $\mathcal{H}_{p}(\lambda)$. Since $\mathcal{D}_{\infty}(\lambda)$ is complete under Bohr's theorem (Theorem \ref{equivalenceHelson}), by \cite[Lemma 4.9]{DefantSchoolmann2} the map \begin{equation*} \label{embeddingptoDirichletHelson} \Phi \colon \mathcal{H}_{p}(\lambda) \hookrightarrow \mathcal{D}_{\infty}(\lambda,\mathcal{H}_{p}(\lambda)), ~~ \sum a_{n}e^{-\lambda_{n}s} \mapsto \sum (a_{n}e^{-\lambda_{n}z}) e^{-\lambda_{n}s} \end{equation*} defines an into isometry. Hence $(\Phi(D^{j}))$ is a bounded sequence in $\mathcal{D}_{\infty}(\lambda,\mathcal{H}_{p}(\lambda))$ and again for all $n$, $j$ \begin{equation*}
|a_{n}^{j}|=\|a_{n}^{j}e^{-\lambda_{n}z}\|_{p}\le \|\Phi(D^{j})\|=\|D^{j}\|_{p}\le \sup_{j} \|D^{j}\|_{p} \le C<\infty\,, \end{equation*} for some absolute constant $C>0$. By another diagonal process we obtain a subsequence $(j_{k})_{k}$ such that $\lim_{k\to \infty} a_{n}^{j_{k}}=:a_{n}$ exists, and using \eqref{translationinftyHelson} together with the vector-valued variant of \cite[Proposition 2.4]{Schoolmann} (its proof follows word by word from the scalar case) we conclude that $(\Phi(D^{j_{k}}_{\varepsilon}))$ converges in $\mathcal{D}_{\infty}(\lambda,\mathcal{H}_{p}(\lambda))$ for every $\varepsilon>0$ as $k\to \infty$. Hence, the sequence $(D_{\varepsilon}^{j_{k}})$ forms a Cauchy sequence in $\mathcal{H}_{p}(\lambda)$ with limit $D_{\varepsilon}$, and the proof is complete. \end{proof}
\subsection{$N$th abschnitte} Let $H_{\infty}(B_{c_{0}})$ denote the Banach space of all holomorphic and bounded functions on the open unit ball $B_{c_{0}}$ (of the Banach space $c_{0}$ of all null sequences). Then as proven in \cite{HLS}
(see also \cite[Theorem 5.1]{Defant}) there is an isometric bijection \begin{equation} \label{HLSHelson} H_{\infty}(B_{c_{0}}) \to H_{\infty}(\mathbb{T}^{\infty}), ~~ F\mapsto f, \end{equation} which preserves the Taylor and Fourier coefficients in the sense that $c_{\alpha}(F)=\widehat{f}(\alpha)$ for all multi indices $\alpha$.
Recall that $F\colon B_{c_{0}} \to \mathbb{C}$ belongs to $H_{\infty}(B_{c_{0}})$ if and only if $F$ is continuous and all its restrictions $F_{N}\colon \mathbb{D}^{N} \to \mathbb{C}$ belong to $H_{\infty}(\mathbb{D}^{N})$ with $\sup_{N} \|F_{N}\|_{\infty}<\infty$ (see e.g. \cite[Corollary 2.22]{Defant}). By the Bohr map (\ref{BohrmapHelson}) and (\ref{HLSHelson}) this result transfers to ordinary Dirichlet series: A Dirichlet series $D=\sum a_{n}n^{-s}$ belongs to $\mathcal{D}_{\infty}((\log n))$ if and only if for every $N$ its so-called $N$th abschnitt, that is $D|_{N}=\sum a_{n}n^{-s}$, where the sum is taken over all natural numbers whose prime divisors are among the first $N$ prime numbers, belongs to $\mathcal{D}_{\infty}((\log n))$ with $\sup_{N} \|D|_{N}\|_{\infty} <\infty$ (see also \cite[Corollary 3.10]{Defant}).
This result extends to general Dirichlet series. To understand this, let us recall that for every frequency $\lambda$ there is another real sequence $B=(b_{n})$ such that for every $n$ there are finitely many rationals $q_{1}^{n}, \ldots, q_{k}^{n}$ such that \begin{equation*} \lambda_{n}=\sum_{j=1}^{k} q_{j}^{n} b_{j}. \end{equation*} In this case, we call $B$ a basis and $R=(q^{n}_{j})_{n,j}$ a Bohr matrix of $\lambda$. Moreover, we write $\lambda=(R,B)$ whenever $\lambda$ decomposes with respect to a basis $B$ with Bohr matrix $R$, and note that every $\lambda$ admits a subsequence which is a basis $B$ for $\lambda$.
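As a concrete example (standard, though not spelled out in the text): for the ordinary frequency $\lambda=(\log n)$ the decomposition is given by the fundamental theorem of arithmetic. Taking as basis $B=(\log p_{j})$, where $p_{1}<p_{2}<\cdots$ denote the prime numbers, we have \begin{equation*} \lambda_{n}=\log n=\sum_{j} \alpha_{j}\log p_{j} \quad \text{for } n=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}\cdots, \end{equation*} so the $n$th row of the Bohr matrix consists of the (integer, hence rational) exponents in the prime factorization of $n$.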
Suppose that $\lambda=(R,B)$ and let $D\in \mathcal{D}(\lambda)$. Then the Dirichlet series $D|_{N}=\sum a_{n}(D)e^{-\lambda_{n}s}$, where $a_{n}(D)\ne 0$ implies that $\lambda_{n}\in \operatorname{span}_{\mathbb{Q}}(b_{1},\ldots, b_{N})$, is denoted as the $N$th abschnitt of $D$.
A consequence of Theorem \ref{MontelHelson} gives an improvement of \cite[Theorem 4.22]{DefantSchoolmann2}.
\begin{Theo} Assume that Bohr's theorem holds for $\lambda$, $1\le p \le \infty$ and $D=\sum a_{n}e^{-\lambda_{n}s}$. Then $D\in \mathcal{H}_{p}(\lambda)$ if and only if its $N$th abschnitte $D|_{N} \in \mathcal{H}_{p}(\lambda)$ with $\sup_{N} \|D|_{N}\|_{p}<\infty$. Moreover, in this case
$\|D\|_{p}=\sup_{N} \|D|_{N}\|_{p}$, and the same result holds true whenever we replace $\mathcal{H}_{p}(\lambda)$ by $\mathcal{D}_{\infty}(\lambda)$. \end{Theo} \begin{proof}
The `if' part is precisely Remark 4.21 from \cite{DefantSchoolmann2}, and holds true without any assumption on $\lambda$. So, suppose $D|_{N} \in \mathcal{H}_{p}(\lambda)$ for all $N$ with $\sup_{N} \|D|_{N}\|_{p}<\infty$. Then by Theorem \ref{MontelHelson} there are a subsequence $(N_{k})$ and $E\in \mathcal{H}_{p}(\lambda)$ such that $(D_{1}|_{N_{k}})$ converges to $E_{1}$ as $k \to \infty$. Comparing Dirichlet coefficients we see that $a_{n}(E)e^{-\lambda_{n}}=a_{n}(E_{1})=a_{n}e^{-\lambda_{n}}$, and so $E=D$. \end{proof}
\end{document} |
\begin{document}
\title{A Stochastic Model for Electric Scooter Systems}
\author{
Jamol Pender \\ School of Operations Research and Information Engineering \\ Cornell University \\ 228 Rhodes Hall, Ithaca, NY 14853 \\ jjp274@cornell.edu \\ \and Shuang Tao \\ School of Operations Research and Information Engineering \\ Cornell University \\ 293 Rhodes Hall, Ithaca, NY 14853 \\ st754@cornell.edu \\ \and Anders Wikum \\ School of Operations Research and Information Engineering \\ Cornell University \\ 206 Rhodes Hall, Ithaca, NY 14853 \\ aew236@cornell.edu \\
}
\maketitle \begin{abstract} Electric scooters are becoming immensely popular across the world as a means of reliable transportation around many cities. As these e-scooters rely on batteries, it is important to understand how many of these e-scooters have enough battery life to transport riders and when these e-scooters might require a battery replacement. To this end, we develop the first stochastic model to capture the battery life dynamics of e-scooters of a large scooter network. In our model, we assume that e-scooter batteries are removable and replaced by agents called \textbf{swappers}. Thus, to gain some insight about the large scale dynamics of the system, we prove a mean field limit theorem and a functional central limit theorem for the fraction of e-scooters that lie in a particular interval of battery life. Exploiting the mean field limit and the functional central limit theorems, we develop an algorithm for determining the number of \textbf{swappers} that are needed to guarantee levels of probabilistic performance of the system. Finally, we show through a stochastic simulation and real data that our stochastic model captures the relevant dynamics. \end{abstract}
\section{Introduction}
It's a bird. It's a plane. Nah, it's a scooter! Electric scooter (e-scooter) companies are growing in popularity across the United States, looking to take advantage of the ride-sharing and micro-mobility economy by providing an alternative to cars and bicycles. E-scooters were first introduced in September of 2017 in Santa Monica \citet{Hall2018} when the micro-mobility company Bird Rides Inc. placed thousands of scooters all around the city. Bird's scooters were immediately popular with commuters, since they are convenient and are a low cost alternative to cars. These e-scooters can reach speeds of up to 25 miles per hour, though actual speeds vary by municipality. E-scooter technology has become a prevalent form of transportation, offering a feasible solution to the last mile problem from the transportation literature.
These web-based e-scooters are controlled by rental networks and are easily operated by smartphones. Customers are able to use e-scooters by downloading mobile applications to their smartphones. The mobile applications (apps) then show customers an image of the nearest e-scooters to their GPS location and this information will direct customers to the nearest available e-scooter. A picture of the mobile application is given in Figure \ref{Bird_App}. Using the information about available e-scooters, a customer will take an e-scooter and after completing their ride, customers can leave their e-scooter anywhere outside restricted zones, as indicated in the mobile application on their phone.
\begin{figure}
\caption{Picture of the Lime E-Scooter App. }
\label{Lime_App}
\end{figure}
\begin{figure}
\caption{Picture of Bird E-Scooter App With Battery Life Remaining. }
\label{Bird_App}
\end{figure}
Companies operating e-scooters are rapidly expanding operations in the United States. For example, another company called Lime has a fleet of e-scooters that are available in more than 60 US cities. They also have an international presence in over 10 cities as well. Today, several major companies, including Bird and Lime, offer dockless e-scooter services, and several other companies, including the ride-sharing companies Uber and Lyft, have recently entered the market as the demand continues to grow. Recent financial analyses show that Lime is valued over \$1 billion and its rival Bird is valued at more than \$2 billion. This growth will continue to accelerate as the demand grows for these e-scooters in many of the largest cities around the world.
While e-scooter transportation should reduce emissions and automobile congestion in local areas, e-scooters are not without accidents, see for example \citet{allem2019electric, kobayashi2019merging, carville2018}. Although they are a convenient and affordable solution to transportation gaps, they are operated by batteries and need to be charged. The battery charging operations for these e-scooters consist of private individuals who go around and collect scooters to charge them. Bird scooter collectors are called “Bird Hunters” and Lime scooter collectors are called "Lime Juicers". These collectors can make significant profits charging these scooters, see for example \citet{goshtasb2018proposing}. Generally, there is a flat payment for charging an e-scooter, and the payment increases depending on the difficulty of locating the scooter. Typically, the "Bird Hunters" take the scooters home and, after charging them overnight, drop them off in groups of three at dedicated assigned points called “Nests” \citet{bordes2019impacts}. However, this particular way of charging e-scooters is not without incident. There have been several situations where juicers and hunters quarrel over the ability to charge the e-scooters \citet{goshtasb2018proposing}. There are also several places where juicers and hunters "own" a specific territory in which they charge the e-scooters. It would be of great interest to eliminate this territorial behavior over charging the e-scooters.
To solve some of these issues, several e-scooter production companies are developing e-scooters with easily removable batteries. In this situation, instead of having "juicers" and "hunters" that need to charge the batteries, batteries are swapped by employees of the scooter company. We call the agents who replace the dead batteries "swappers". Many large scale e-scooter systems such as Bird and Lime already give their users the ability to see real-time availability and "battery life" through a smartphone app and web API. A picture of the Bird app showing an e-scooter with $43\%$ battery life is given in Figure \ref{Bird_App}. This information helps users make better decisions such as where to pick up e-scooters with enough battery life to get to their destination. It is also helpful for "swappers" who will replace the batteries when the scooter's battery life is below a predetermined threshold.
However, in order to implement the "swapping" for a scooter company, we need to understand the battery life dynamics of a large scale e-scooter system. Understanding these dynamics will enable us to determine how many of these swappers are necessary to achieve the ideal performance of the e-scooter system. To this end, in this paper, we develop the first stochastic model that analyzes this "swapping" process along with the battery dynamics of e-scooters. Our goal in this work is to understand the dynamics of a removable e-scooter system and what proportion of the e-scooters have a particular fraction of battery life. In particular, we focus on understanding how many scooters are actually available to customers at a specific time when they want to travel. Our analysis yields new insights for staffing removable battery e-scooter systems that will be used in the future.
\subsection{Main Contributions of Paper}
In this section, we describe the contributions of our work in this paper. \begin{itemize} \item We construct the first stochastic e-scooter model using empirical processes, which measure the battery life dynamics. Since our model is difficult to analyze for a large number of scooters, we propose to analyze an empirical process that describes the proportion of scooters that have a certain fraction of battery life remaining. Our model is informed by real data collected from the JUMP API in Washington D.C. \item We prove a mean field limit and a central limit theorem for our stochastic e-scooter empirical process, showing that the mean field limit and the variance of the empirical process can be described by a system of $\frac{K^2+3K}{2}$ differential equations, where $K$ is the number of equally sized intervals of battery life. \item We develop a novel algorithm based on our limit theorems for staffing the number of swappers that are needed to ensure that the proportion of scooters that have a small amount of battery is smaller than a given threshold. \end{itemize}
\subsection{Organization of Paper}
The remainder of this paper is organized as follows. Section~\ref{Data} describes how we obtained and analyzed our e-scooter data. It also includes insights on how the data was used to inform our model and model parameters. In Section \ref{Sec_Bike_Model}, we introduce two stochastic e-scooter models. The first model assumes that battery usage is instantaneous, while the second model assumes that battery usage occurs according to an exponential distribution. In Section \ref{Mean_Field_Limit}, we prove the mean field limit of our stochastic e-scooter model, showing that the mean field limit is a system of $K$ differential equations. In Section \ref{Central_Limit}, we prove a functional central limit theorem for the empirical process. We also show that the variance of the diffusion limit can be approximated by a system of ordinary differential equations that are coupled to the mean field limit. In Section \ref{Staffing}, we use the mean field and central limit theorems to construct a staffing policy for the number of "swappers" needed to satisfy probabilistic performance constraints. In Section \ref{Numerics}, we show that our results are indeed valid by comparing them to a stochastic simulation of the e-scooter system. In Section \ref{Conclusion}, we conclude and give directions for future work. Finally, additional proofs and theorems for our second model are given in the Appendix (Section \ref{Appendix}).
\subsection{Preliminaries of Weak Convergence}
Following \citet{ko2018strong}, we assume that all random variables in this paper are defined on a common probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Moreover, for all positive integers $k$, we let $\mathcal{D}([0 , \infty), \mathbb{R}^k)$ be the space of right continuous functions with left limits (RCLL) in $\mathbb{R}^k$ that have a time domain in $[0, \infty)$. As is usual, we endow the space $\mathcal{D}([0 , \infty), \mathbb{R}^k)$ with the usual Skorokhod $J_1$ topology, and let $M^k$ be defined as the Borel $\sigma$-algebra associated with the $J_1$ topology. We also assume that all stochastic processes are measurable functions from our common probability space $(\Omega, \mathcal{F}, \mathbb{P})$ into $(\mathcal{D}([0 , \infty), \mathbb{R}^k), M^k)$. Thus, if $\{\zeta^n\}^\infty_{n=1}$ is a sequence of stochastic processes, then the notation $\zeta^n \rightarrow \zeta$ implies that the probability measures that are induced by the $\zeta^n$'s on the space $(\mathcal{D}([0 , \infty), \mathbb{R}^k), M^k)$ converge weakly to the probability measure on the space $(\mathcal{D}([0 , \infty), \mathbb{R}^k), M^k)$ induced by $\zeta$. For any $x \in (\mathcal{D}([0 , \infty), \mathbb{R}^k), M^k)$ and any $T > 0$, we define \begin{equation}
||x||_T \equiv \sup_{0 \leq t \leq T} \ \max_{i = 1,2,...,k} |x_i(t)| \end{equation} and note that $\zeta^n$ converges almost surely to a continuous limit process $\zeta$ in the $J_1$ topology if and only if \begin{equation}
||\zeta^n - \zeta||_T \to 0 \quad a.s. \end{equation} for every $T > 0$.
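As a quick numerical illustration (our own sketch, not from the paper: the helper `sup_norm`, the grid, and the two-coordinate path are all hypothetical), the norm $\|x\|_T$ can be approximated for a path sampled on a time grid by maximizing over coordinates and grid points up to time $T$:

```python
import numpy as np

def sup_norm(paths, t_grid, T):
    """Approximate ||x||_T = sup_{0 <= t <= T} max_i |x_i(t)| on a sampled grid.

    paths  : array of shape (k, len(t_grid)), one row per coordinate
    t_grid : increasing array of sample times
    """
    mask = t_grid <= T
    return np.abs(paths[:, mask]).max()

t = np.linspace(0.0, 2.0, 201)                   # grid step 0.01
x = np.vstack([np.sin(np.pi * t), 0.5 * t])      # a two-coordinate continuous path

# On [0, 1] the first coordinate attains |sin(pi t)| = 1 at t = 1/2,
# while the second stays at or below 0.5, so ||x||_1 is approximately 1.
print(sup_norm(x, t, 1.0))
```

For a continuous path, refining the grid makes this approximation converge to the true supremum.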
\section{Insights from Electric Scooter Data} \label{Data}
In this section, we describe some of the e-scooter trip data that we collected for this paper. This data is used to inform our stochastic models in the subsequent sections. With the real data, we can understand how e-scooter battery levels change when riders take trips, the rate of arrival to use e-scooters, and how long riders use the e-scooters in terms of time duration and distance. Below we describe how we collected raw geographic bike-share data, reconstructed likely trips, and filtered some data points that did not make sense from a rider perspective.
\subsection{The Data Collection Process} General Bikeshare Feed Specification (GBFS) is an industry standard for sharing bike-share data that has been adopted by virtually every bike-sharing company, thanks in no small part to it being a requirement for operation in many US cities. GBFS is designed to provide a real-time snapshot of a city’s fleet, which includes vehicle locations and battery levels for bikes that are not in active use. This information is also collected without keeping records of trips or personal information. In addition to providing more detailed monthly reports to the District Department of Transportation, maintaining a public API with GBFS data is a condition of operating in the Washington D.C. metro area.
\begin{figure}
\caption{Raw GBFS Data}
\label{raw_data}
\end{figure}
The dataset analyzed in this paper is based on GBFS data scraped from APIs maintained by JUMP for the D.C. metro area. The data was pulled from the JUMP API once per minute from the dates 01/01/2020 to 03/01/2020 during peak hours (6:00 – 23:59) to coincide with vehicle location updates. After adding time stamps, the scraped data has the form shown in Figure \ref{raw_data}. Note that the data has a vehicle type field, and some entries say "bike". This is because JUMP operates both e-bikes and e-scooters and the data is collected for both. Since our analysis is centered around e-scooters, we removed the data for the e-bikes.
\begin{table}[H] \caption{GBFS Data Fields} \centering \begin{tabular}{c c } \hline\hline
\hline $bike\_id$ & Unique identification String for each vehicle in fleet.\\ $is\_disabled$ & Boolean, 1 if vehicle is outside of approved geo-fences and 0 otherwise. \\ $is\_reserved$ & Boolean, 1 if vehicle is reserved through the JUMP app, 0 otherwise.\\ $battery$ & Integer between 0 and 100 representing percent battery remaining.\\ $company$ & Company that operates the vehicle.\\ $type$ & Vehicle type, one of `bike' or `scooter'.\\ $lat/lon$ & GPS location of vehicle at the given timestamp.\\ $time$ & Date/Time when API call is executed.\\ $epoch$ & Time API call is executed, in seconds since 01/01/1970.\\ \hline \end{tabular} \label{table:nonlin} \end{table}
\paragraph{Reconstructing E-Scooter Trips from Raw Data} We are ultimately interested in trip data for the purpose of informing the model parameters for our stochastic models. Though GBFS data explicitly excludes trip records, we were able to reconstruct trips by observing the times and locations at which bikes disappeared from and reentered the GBFS dataset. Using this information, our goal was to determine whether a trip actually occurred or whether the disappearance was due to something else, such as rebalancing or other movement of the e-scooter.
More explicitly, we compute the haversine distance between the start and end GPS coordinates. The start location is defined to be where the e-scooter first disappears from the data extracted from the API, and the end location is where the e-scooter reappears in the data. To mitigate noise from potential rebalancing, any disappearance with a distance less than 50 meters was removed from the dataset. Each of the remaining disappearances was tentatively designated as a trip, with corresponding start and end times, locations, and battery levels. The distance of the trip was determined by the haversine distance between its start and end GPS coordinates, which is likely an underestimate of the true distance traveled on the e-scooter. Thus, when measuring per-distance e-scooter battery drain rates, our analyses overestimate this quantity.
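The distance computation above can be sketched as follows (a minimal illustration under our own naming, not the authors' code; `haversine_m`, `is_candidate_trip`, and the coordinate tuples are hypothetical, while the haversine formula and the 50-meter cutoff mirror the description in the text):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in meters between two GPS points."""
    R = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2.0 * R * math.asin(math.sqrt(a))

def is_candidate_trip(start, end, min_distance_m=50.0):
    """Tentatively keep a disappearance as a trip only if endpoints are >= 50 m apart."""
    return haversine_m(*start, *end) >= min_distance_m
```

Since only the trip endpoints are used, `haversine_m` lower-bounds the distance actually ridden, exactly as noted above.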
\begin{figure}
\caption{Cleaned Trip Data}
\label{trip_data}
\end{figure}
Our final data filtering steps included removing trips corresponding to overnight disappearances, trips with an average velocity greater than the maximum theoretical speed of JUMP e-bikes (25 mph), and trips corresponding to recharging in which the e-scooter battery life increased. After the filtering process was complete, we were left with a dataset of 71,518 likely trips with the format shown in Figure \ref{trip_data}.
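The three filters can be sketched as a single predicate (a hypothetical helper of our own, with field names of our choosing; only the 25 mph speed cap and the filtering rules come from the text):

```python
METERS_PER_MILE = 1609.34
MAX_SPEED_MPH = 25.0  # maximum theoretical speed quoted in the text

def keep_trip(duration_min, distance_m, battery_start, battery_end, overnight=False):
    """Return True if a tentative trip survives the final filtering steps."""
    if overnight:
        return False                     # overnight disappearances are dropped
    if battery_end > battery_start:
        return False                     # battery increased: a recharge, not a trip
    speed_mph = (distance_m / METERS_PER_MILE) / (duration_min / 60.0)
    return speed_mph <= MAX_SPEED_MPH    # implausibly fast "trips" are dropped
```

For example, an 8-minute, 800-meter ride that drained the battery from 90% to 85% passes all three filters, while a 5-kilometer jump recorded over a single minute does not.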
\subsection{Insights Gained from Data}
\paragraph{Distance Traveled} In this section, we use the data we collected to understand how far riders travel on the e-scooters. On the top right of Figure \ref{Scooter_Duration_cdf}, we plot a histogram of the distances traveled by riders. Since most of the data has less than 5 kilometers distance, we restricted the dataset to trips of less than 5 kilometers. On the bottom right of Figure \ref{Scooter_Duration_cdf}, we plot the cumulative distribution function (cdf) of the distance data. This cdf plot allows us to understand the quantiles or percentiles of the data more clearly than the histogram plot. In this context, we observe that the median distance or 50\% quantile is roughly equal to 800 meters, or close to half a mile. Moreover, we observe that about 80\% of riders are traveling less than 1.5 kilometers.
\paragraph{Time Duration of Riders} On the top left of Figure \ref{Scooter_Duration_cdf}, we plot a histogram of the time riders spent with the e-scooters. Since most of the data was less than one hour, we restricted the dataset to trips of less than one hour or 60 minutes. On the bottom left of Figure \ref{Scooter_Duration_cdf}, we plot the cumulative distribution function (cdf) of the time duration data. This cdf plot allows us to understand the quantiles or percentiles of the data more clearly than the histogram plot. In this context, we observe that the median duration is roughly 8 minutes and that about 80\% of riders are traveling less than 15 minutes on an e-scooter.
\begin{figure}
\caption{Scooter Rental Duration Distribution (Left). Scooter Rental Distance Distribution (Right). }
\label{Scooter_Duration_cdf}
\end{figure}
\paragraph{Inter-arrival Times of Riders} On the left of Figure \ref{Scooter_interarrival_cdf}, we plot a histogram of the inter-arrival times of riders to e-scooters in the network. This histogram provides information about how many riders we should expect to arrive to the system during a time period. We should mention that the arrival rate depends on time; however, we ignore this time dependence when looking at the inter-arrival times here. Since most of the data for the inter-arrival times was less than 20 minutes, we restricted the dataset to be less than 20 minutes. On the bottom left of Figure \ref{Scooter_interarrival_cdf}, we plot the cumulative distribution function (cdf) of the inter-arrival data. We observe that the median inter-arrival time is roughly equal to 1 minute; however, this information is a bit misleading because of how the data is collected. Since the scooter API is updated only once per minute, it is impossible to observe an inter-arrival time less than one minute. Moreover, we observe that about 95\% of the inter-arrival times are less than three minutes in length. On the right of Figure \ref{Scooter_interarrival_cdf}, we plot the arrival rate as a function of time, averaged over the days of the data set. It is clear that the arrival rate is non-stationary and varies over the time of day. The two hump pattern (one in the morning and one in the afternoon) is also observed in this data and is common in ride-sharing data.
\begin{figure}
\caption{Scooter Rental Inter-Arrival Time Distribution (Left). Scooter Rental Arrival Rate Throughout the Day (Right). }
\label{Scooter_interarrival_cdf}
\end{figure}
\begin{figure}
\caption{Scooter Trip Duration vs. Change in Battery Life (Left). Scooter Trip Distance vs. Change in Battery Life (Right). }
\label{Scooter_Duration_regression}
\end{figure}
\paragraph{Estimating Battery Life of E-Scooters} Another informative measurement from a data perspective is the battery usage dynamics of riders. On the left of Figure \ref{Scooter_Duration_regression}, we show a scatterplot of trip time duration against the decrease in battery life. The plot shows, for each trip, how long the customer used the e-scooter and the subsequent drain in the e-scooter's battery life. On the right of Figure \ref{Scooter_Duration_regression}, we show a scatterplot of trip distance against the decrease in battery life. This plot shows, for each trip, how far in meters the customer drove the e-scooter and the subsequent drain in battery life. We should emphasize that the e-scooter was not tracked during the entire time of usage; only the starting and ending GPS locations were used to compute the haversine distance between them. In both plots of Figure \ref{Scooter_Duration_regression}, we observe that the relationship between time or distance and battery life is negative. We use regression analysis to explore these relationships in a formal way. Table \ref{table:reg_result} summarizes the coefficients of the different regression methods used to understand the relationship between \% battery change and distance/duration.
\begin{table}[h] \begin{center} \caption{Regression coefficients for different methods} \label{table:reg_result}
\begin{tabular}{c|c|c|c} \hline\hline feature & norm & intercept & slope \\ \hline \multirow{4}{*}{distance}& \multicolumn{1}{l}{$L_2$} & \multicolumn{1}{l}{0} & \multicolumn{1}{l}{-6.13\%}\\
&\multicolumn{1}{l}{$L_2$} & \multicolumn{1}{l}{-2.55} & \multicolumn{1}{l}{-4.51\%}\\\cline{2-4}
& \multicolumn{1}{l}{$L_1$} & \multicolumn{1}{l}{0} & \multicolumn{1}{l}{-6.01\%}\\
&\multicolumn{1}{l}{$L_1$} & \multicolumn{1}{l}{-1.15} & \multicolumn{1}{l}{-5.02\%} \\\hline \multirow{4}{*}{duration}& \multicolumn{1}{l}{$L_2$} & \multicolumn{1}{l}{0} & \multicolumn{1}{l}{-14.8\%}\\
&\multicolumn{1}{l}{$L_2$} & \multicolumn{1}{l}{-5.98} & \multicolumn{1}{l}{-6.54\%}\\\cline{2-4}
& \multicolumn{1}{l}{$L_1$} & \multicolumn{1}{l}{0} & \multicolumn{1}{l}{-42.9\%}\\
&\multicolumn{1}{l}{$L_1$} & \multicolumn{1}{l}{-4.25} & \multicolumn{1}{l}{-12.5\%} \\\hline \hline \end{tabular} \end{center} \end{table}
We observe in both plots of Figure \ref{Scooter_Duration_regression} that performing the regression without an intercept increases the absolute value of the negative slopes. Moreover, we see that $L_1$ regression generally yields more negative slopes than its $L_2$ counterparts. This is especially true in the duration vs. battery plot on the left of Figure \ref{Scooter_Duration_regression}. Finally, we observe that the trip distance appears to be a better predictor of battery drain than the trip duration. This is consistent with the estimates of battery life that are reported by the Jump scooter company.
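The with/without-intercept and $L_1$ vs. $L_2$ comparisons in Table \ref{table:reg_result} can be reproduced in a few lines. The sketch below uses synthetic (distance, \% battery change) pairs with an assumed true slope of $-6$ and intercept $-2.5$ (stand-ins, not the Jump data), a closed-form least-squares fit for $L_2$, and iteratively reweighted least squares for the $L_1$ fit.

```python
import random

random.seed(2)
# Hypothetical (distance, % battery change) pairs with a linear relationship.
xs = [random.uniform(0, 5) for _ in range(500)]
ys = [-2.5 - 6.0 * x + random.gauss(0, 1.0) for x in xs]

def ols(xs, ys, intercept=True):
    """Closed-form least squares for y = a + b*x (a fixed to 0 if intercept=False)."""
    n = len(xs)
    if not intercept:
        return 0.0, sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

def lad(xs, ys, iters=100, eps=1e-6):
    """L1 (least absolute deviations) fit via iteratively reweighted least squares."""
    a, b = ols(xs, ys)                       # warm start from the L2 fit
    for _ in range(iters):
        w = [1.0 / max(abs(y - a - b * x), eps) for x, y in zip(xs, ys)]
        sw = sum(w)
        swx = sum(wi * x for wi, x in zip(w, xs))
        swy = sum(wi * y for wi, y in zip(w, ys))
        swxx = sum(wi * x * x for wi, x in zip(w, xs))
        swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        b = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
        a = (swy - b * swx) / sw
    return a, b

a2, b2 = ols(xs, ys)                  # L2 with intercept
_, b2_no = ols(xs, ys, intercept=False)  # L2 through the origin
a1, b1 = lad(xs, ys)                  # L1 with intercept
```

With a negative true intercept, forcing the line through the origin makes the fitted slope more negative, which is the same pattern seen in the table.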
\begin{figure}
\caption{Scooter Rental \% Battery Change Distribution. }
\label{Scooter_Battery_cdf}
\end{figure}
\paragraph{Battery Usage Per Trip} Finally, on the top of Figure \ref{Scooter_Battery_cdf}, we plot a histogram of the battery used per customer trip. Since most of the data was less than 60\% of the total battery life, we restricted the dataset to values below 60\%. We observe that the histogram of battery usage looks nearly exponential, though a gamma distribution is probably a better fit. On the bottom of Figure \ref{Scooter_Battery_cdf}, we plot the cdf of the battery usage data. We observe that the median battery usage is about 6\% of battery life and that about 80\% of riders use less than 10\% of the scooter's battery life on each trip. From the data, we also observe that the minimum starting battery is about 21\%. Thus, the average scooter can complete about 13 trips before its battery needs to be swapped out for a new one.
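The 13-trip figure follows from simple arithmetic on the two quantities just mentioned, the median drain per trip and the minimum observed starting battery:

```python
import math

# Back-of-the-envelope calculation: a fully charged scooter has roughly
# 100% - 21% = 79% of usable battery before hitting the minimum observed
# starting level, and the median trip drains about 6%.
median_drain_pct = 6
min_start_pct = 21
trips_before_swap = math.floor((100 - min_start_pct) / median_drain_pct)
# trips_before_swap == 13
```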
\section{Stochastic Models and Limit Theorems} \label{Sec_Bike_Model}
In this section, we propose two new Markovian queueing models for the empirical process of e-scooter battery life. In our first model, we assume the battery usage time is instantaneous. However, in our second model, we assume the battery usage time is an exponentially distributed random variable. Although one can analyze each e-scooter individually, this is a high dimensional stochastic process, and given the large scale of these e-scooter systems, we do not wish to take on this intractable endeavor. As a result, we resort to an empirical process perspective for the e-scooter system. The empirical process perspective reduces the dimension from the number of e-scooters, which is large, to the number of intervals of battery life one would like to keep track of. For example, in Washington D.C., each scooter company is allowed to operate at most 2500 scooters. However, the number of intervals of battery life for the scooters is at most 100. This represents a 25-fold reduction in dimension. However, it may not be necessary to keep even that much granularity for the purposes of this work. We suggest a value of $K = 10$ to give battery intervals of $10\%$, which would yield a 250-fold reduction in dimension.
There is a large literature in the space of bike sharing and the sharing economy, see for example \citet{hampshire2012analysis, nair2013large, schuijbroek2017inventory, faghih2017empirical, singla2015incentivizing, jian2016simulation, freund2020data}. Despite there being much research on bike sharing networks, there is much less literature on electric scooters and their impact on transportation networks in large cities. Our goal in this work is to add to the growing literature in the sharing economy, but specifically for e-scooters. Our approach leverages new data resources for the scooters and uses the data to inform the structure of the stochastic models we build in the sequel. Our new stochastic models leverage techniques from empirical process theory and weak convergence of martingales.
Empirical processes are not new and have been used in a variety of contexts in queueing theory, see for example \citet{graham1997stochastic, graham2000kinetic, graham2000chaoticity, graham2005functional, li2014mean, li2016mean, mitzenmacher2016analyzing, ying2016approximation, iyer2011mean, iyer2014mean, yang2016mean, yang2018mean, yang2019information}. One of the first papers to consider empirical processes in ride-sharing is \citet{mohamed2012mean}, where the authors model bike sharing networks as a network of finite capacity single server queues. Using empirical processes, they prove a mean field limit theorem for the number of stations that have $k$ bikes. In this context, the dimensionality is reduced from the total number of bikes in the network to roughly the size of the largest station. In large metropolitan cities like New York City and Washington D.C., this reduction is huge and useful. Recently, \citet{tao2017stochastic} proved a central limit theorem for the same bike sharing model, showing that the central limit theorem is quite good at describing the fluctuations of the stochastic bike-sharing network process. Moreover, recent work by \citet{fricker2016incentives, bortolussi2016mean, li2016bike, li2016queueing, li2017fluid, el2018using} has also generalized the mean field limit theorems of \citet{mohamed2012mean} to the setting of non-stationary bike sharing systems and to Markovian arrival processes (MAPs) for the arrival and service distributions. More recently, \citet{graef2019fractional} extend the mean field model from ordinary differential equations to fractional ordinary differential equations. Generally, the mean field limit theorems provide rigorous support for using ordinary differential equations to describe the mean dynamics of the empirical measure.
However, \citet{graef2019fractional} shows that using fractional ordinary differential equations might be more appropriate as they provide more flexibility than their non-fractional counterparts. Before we get specific about the models that we will describe in the sequel, we give some of the common notation that we will use throughout the remainder of the paper below in Table \ref{table:nonlin}.
\begin{table}[H] \caption{Summary of Notation} \centering \begin{tabular}{c c } \hline\hline
$i$ & Index of an e-scooter\\ $N$ & Number of e-scooters \\ $N^*$ & Number of swappers\\ $K$ & Number of battery life buckets\\
$K_U$ & Battery threshold for riding\\ $\lambda$ & Arrival rate of a swapper to an e-scooter\\ $\mu$ & Arrival rate of a customer to an e-scooter\\ $p_{ij}$ & Probability that a ride starting with battery in bucket $i/K$ ends with battery in bucket $j/K$\\ $B_i(t)$ & Battery life of e-scooter $i$ at time $t$\\ $Y^N(t)$ & Empirical process of e-scooter battery life at time $t$\\ \hline \end{tabular} \label{table:nonlin} \end{table}
\subsection{Model 1: Instantaneous Battery Usage}\label{model_1} Here we describe our first model for the battery dynamics of an e-scooter network. We consider an empirical process of the battery life among all e-scooters in the system. The goal here is to model the distribution of battery life as a Markov process and study the asymptotic behavior of the system as the number of e-scooters grows towards infinity, i.e., $N \to \infty$. More specifically, we prove mean field and central limit theorems for the empirical process to help understand how different parameters can affect the system's performance.
\subsubsection{Modeling Assumptions}
\paragraph{Customer Arrival (battery usage):} We assume that customers arrive to the system following a Poisson process with rate $\mu N$ (uniform over geographical location). Only e-scooters with battery life above a certain threshold $K_U/K$ can be picked up and used by a customer. For simplicity of the model, we assume that after the customer picks up the e-scooter, the battery life changes immediately according to a probability matrix $P=(p_{ij})_{ij}$, where $p_{ij}$ represents the probability of an e-scooter moving from the $i^{th}$ interval of battery life to the $j^{th}$ interval. Using the Jump scooter data we collected, we find the following empirical probability matrix $\hat{P}$ when setting $K=5$, which is equal to
$$\hat{P}=\begin{blockarray}{cccccc} [0,20\%] & [20\%,40\%] & [40\%,60\%] & [60\%,80\%] & [80\%,100\%] \\ \begin{block}{(ccccc)c} 0 & 0 & 0 & 0 & 0 & [0,20\%] \\ 0.097 & 0.903 & 0 & 0 & 0 & [20\%,40\%] \\ 0.003& 0.345& 0.652 & 0 &0& [40\%,60\%] \\ 0.0008 & 0.022 & 0.329 & 0.648 & 0 & [60\%,80\%]\\ 0.00 & 0.004 & 0.021 & 0.446 & 0.529 & [80\%,100\%] \\ \end{block} \end{blockarray}$$ Here the first row of $\hat{P}$ is zero because the minimum starting battery life is around 21\% from the Jump scooters data.
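An empirical matrix such as $\hat{P}$ can be estimated by bucketing each trip's starting and ending battery levels and row-normalizing the counts. The sketch below illustrates this on a few hypothetical trips (the trip values are made up for illustration, not drawn from the Jump data):

```python
from collections import defaultdict

K = 5  # battery buckets of width 1/K, as in the paper's hat-P with K = 5

def bucket(b, K=5):
    """Map a battery fraction in [0, 1] to its bucket index 0..K-1."""
    return min(int(b * K), K - 1)

# Hypothetical (start, end) battery fractions for a few trips.
trips = [(0.95, 0.85), (0.90, 0.70), (0.55, 0.45), (0.55, 0.52), (0.35, 0.30)]

counts = defaultdict(lambda: [0] * K)
for start, end in trips:
    counts[bucket(start)][bucket(end)] += 1

# Row-normalize the counts to obtain the empirical transition matrix hat-P.
P_hat = {i: [c / sum(row) for c in row] for i, row in counts.items()}
```

Rows with no observed trips (such as the lowest bucket in the paper, where no trip starts below 21\% battery) simply never appear among the counted starting buckets.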
\paragraph{Swapper Arrival:} We assume that swappers arrive to the system following a Poisson process with rate $\lambda N^*$. The probability of an e-scooter with battery life in bucket $k/K$ getting recharged is based on a choice model $\frac{Y_k^N(t)g_k}{\sum_{i=0}^{K-1}Y_i^N(t)g_i}$, where $\{g_i\}_{i=0}^{K-1}>0$ is a decreasing sequence in $i$. For simplicity of the model, we assume that after a swapper picks up an e-scooter, the battery life jumps to full immediately (i.e. neglecting the swapping time).
\paragraph{Remark:} Note that instead of only recharging e-scooters with low battery, we use a choice model for recharging that gives more weight to e-scooters with low battery life. With the choice model, there is a positive probability of recharging an e-scooter in bucket $\left[\frac{K-1}{K},1\right]$. Without the choice model, we cannot guarantee the Lipschitz property of the drift function for the limiting mean field equations. The Lipschitz property is crucial for proving the mean field and central limit results in this work. However, one can set up the choice model $\{g_i\}_{i=0}^{K-1}$ so that the probability of recharging an e-scooter with high battery is low (in fact, we only need $\min_{i}\{g_i\} > 0$).
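The choice model itself is a simple weighted normalization. A small numerical sketch, with an assumed state $y$ and assumed decreasing weights $g_i$ (both illustrative), shows how swap probability concentrates on low-battery buckets while every occupied bucket keeps positive probability:

```python
# Choice-model swap probabilities for a state y with weights g decreasing in i.
y = [0.1, 0.2, 0.3, 0.2, 0.2]   # hypothetical bucket proportions, K = 5
g = [16, 8, 4, 2, 1]            # assumed decreasing weights, min g_i > 0

denom = sum(yi * gi for yi, gi in zip(y, g))
swap_prob = [yi * gi / denom for yi, gi in zip(y, g)]
# Low-battery buckets receive most of the swap probability, yet every bucket
# with y_k > 0 retains a strictly positive chance of being chosen.
```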
\subsubsection{Markov Jump Process} Now that we have described the dynamics of the model, we are free to construct our empirical process model. To this end, we define the empirical process $Y_k^N(t)$ as the proportion of e-scooters with remaining battery life in $[\frac{k}{K},\frac{k+1}{K})$. Thus, we can write $Y_k^N(t)$ as $$Y_k^N(t)=\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}\left\{\frac{k}{K}\leq B_i(t) < \frac{k+1}{K}\right\}, \quad k=0,\cdots, K-1$$ where $N$ is the total number of e-scooters in the system, and $B_i(t)$ is the battery life of the $i^{th}$ e-scooter at time $t$. Moreover, we also assume that the battery drainage of a single ride follows a discrete probability distribution, i.e. $$P(\text{battery ends in bucket } j/K \text{ given it started in bucket } i/K)=p_{ij}, \quad j\leq i=0,\cdots, K-1.$$ If we condition on $Y_k^{N}(t)=y_k$, the transition rates of $y$ are specified as follows.
\paragraph{Swapping Batteries:} When there is a swapper arriving to the system to swap an e-scooter's battery with battery life in the interval $[\frac{k}{K},\frac{k+1}{K}]$, the proportion of e-scooters with battery life in bucket $[\frac{k}{K},\frac{k+1}{K}]$ goes down by $1/N$, the proportion of e-scooters with battery life in bucket $[\frac{K-1}{K},1]$ goes up by $1/N$, and the transition rate $Q^N$ is \begin{eqnarray} Q^{N}\left(y,y+\frac{1}{N}(\mathbf{1}_{K-1}-\mathbf{1}_{k}) \right) &=& \lambda N^* \frac{y_kg_k}{\sum_{i=0}^{K-1}y_ig_i} . \end{eqnarray}
\paragraph{Riding a Scooter:} When there is a customer riding an e-scooter with battery life in the interval $[\frac{k}{K},\frac{k+1}{K}]$ where $k\geq K_U$, the proportion of e-scooters with battery life in the interval $[\frac{k}{K},\frac{k+1}{K}]$ moves down by $1/N$ and, with probability $p_{kj}$, the proportion of e-scooters with battery life in the interval $[\frac{j}{K},\frac{j+1}{K}]$ moves up by $1/N$. The transition rate $Q^N$ is \begin{eqnarray} Q^{N} \left(y,y+\frac{1}{N}(\mathbf{1}_{j}-\mathbf{1}_{k}) \right) &=& \mu N p_{kj} y_k \mathbf{1}\{k\geq K_U\} \mathbf{1}\{j\leq k\}. \end{eqnarray}
With the described transitions, one can show that $Y^N(t)$ is a Markovian jump process with the above transition rates. In Figure \ref{model1_diagram}, we illustrate the transitions between states in the model proposed above.
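The jump process $Y^N(t)$ can be simulated directly from these transition rates. Below is a Gillespie-style sketch in which $N$, the rates, the weights $g$, and the matrix $P$ are all illustrative stand-ins rather than values fitted to the Jump data:

```python
import random

random.seed(3)
# Illustrative parameters for Model 1.
N, K, K_U = 200, 5, 1
lam_Nstar = 0.5 * N           # total swapper arrival rate lambda * N^*
mu = 1.0                      # per-scooter customer arrival rate
g = [16, 8, 4, 2, 1]          # choice-model weights, decreasing in i
P = [[1, 0, 0, 0, 0],         # row k: distribution of the post-ride bucket j <= k
     [0.5, 0.5, 0, 0, 0],
     [0.1, 0.4, 0.5, 0, 0],
     [0, 0.1, 0.4, 0.5, 0],
     [0, 0, 0.1, 0.4, 0.5]]

counts = [0, 0, 0, 0, N]      # all scooters start fully charged
t, horizon = 0.0, 50.0
while t < horizon:
    y = [c / N for c in counts]
    # Total event rate: swaps (choice model) plus rides (buckets k >= K_U).
    ride_rate = mu * N * sum(y[k] for k in range(K_U, K))
    total = lam_Nstar + ride_rate
    t += random.expovariate(total)          # time to the next event
    if random.random() < lam_Nstar / total:  # a swapper arrives
        denom = sum(y[i] * g[i] for i in range(K))
        k = random.choices(range(K), weights=[y[i] * g[i] / denom
                                              for i in range(K)])[0]
        counts[k] -= 1
        counts[K - 1] += 1                   # battery jumps to full
    else:                                    # a customer rides a scooter
        ridable = [k for k in range(K_U, K) if counts[k] > 0]
        k = random.choices(ridable, weights=[counts[k] for k in ridable])[0]
        j = random.choices(range(k + 1), weights=P[k][:k + 1])[0]
        counts[k] -= 1
        counts[j] += 1                       # post-ride bucket drawn from row k
```

Both transitions preserve the total number of scooters, so `sum(counts) == N` along the whole sample path.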
\begin{figure}
\caption{Diagram of transitions of states for the empirical process $Y^N(t)$.}
\label{model1_diagram}
\end{figure}
Despite its complexity, our model is not completely realistic, since it makes the battery usage transitions instantaneous. What follows in the sequel is a generalization of our first model where the battery usage times have an exponential distribution.
\subsection{Model 2: Exponentially Distributed Battery Usage Time}\label{model_2}
In this subsection, we propose a different model for the empirical process of battery life where the battery usage time is assumed to be exponentially distributed with rate $\mu_U$. We use the same notation from Section \ref{model_1} and introduce a new variable $R^N(t)$ as the number of e-scooters in use (being ridden by customers) at time $t$. Table \ref{table:nonlin2} summarizes the additional notation we need for this new model.
\begin{table}[H] \caption{Summary of Additional Model 2 Notation} \centering \begin{tabular}{c c } \hline\hline
$1/\mu_U$ & Mean trip duration\\ $R^N(t)$ & Number of e-scooters in use at time $t$\\ $X^N(t)$ & Proportion of e-scooters in use at time $t$\\ \hline \end{tabular} \label{table:nonlin2} \end{table}
\subsubsection{Modeling Assumptions}
Now we describe the following modeling assumptions we make for this new model.
\paragraph{Customer Arrival (battery usage):} We assume that customers arrive to the system following a Poisson process with rate $\mu (N-R^N(t))$ (uniform on geographical location). Only e-scooters with battery life above a certain threshold $K_U/K$ can be picked up and used by the customer. After the customer picks up the e-scooter, they will ride the e-scooter for a time that is exponentially distributed with rate $\mu_U$, and after this the battery life changes immediately according to a probability matrix $P=(p_{ij})_{ij}$.
\paragraph{Swapper Arrival:} We assume that swappers arrive to the system following a Poisson process with rate $\lambda N^*\left(1-\frac{R^N(t)}{N}\right)$, which is proportional to the number of e-scooters available at the time. The probability of an e-scooter with battery life in bucket $k/K$ getting swapped is based on a choice model $\frac{Y_k^N(t)g_k}{\sum_{i=0}^{K-1}Y_i^N(t)g_i}$, where $\{g_i\}_{i=0}^{K-1}>0$ is a decreasing sequence in $i$. For simplicity of the model, we assume that after the swapper picks up the e-scooter, the battery life jumps to full immediately (i.e. neglecting the swapping time).
\subsubsection{Markov Jump Process} Now that we have described the dynamics of the model, we are free to construct our empirical process model. To this end, we define the fraction of e-scooters in use as $$X^N(t)=\frac{R^N(t)}{N}.$$
By conditioning on $(X^N(t),Y^{N}(t))=(x,y)$, the transition rates of $(x,y)$ are specified as follows: \paragraph{Battery Swapping:} When there is a swapper arriving to the system to swap the battery of an e-scooter with battery life in the interval $[\frac{k}{K},\frac{k+1}{K}]$, the proportion of e-scooters with battery life in the interval $[\frac{k}{K},\frac{k+1}{K}]$ goes down by $1/N$, the proportion of e-scooters with battery life in the interval $[\frac{K-1}{K},1]$ goes up by $1/N$, and the transition rate $Q^N$ is \begin{eqnarray} Q^{N}\left((x,y),\left(x,y+\frac{1}{N}(\mathbf{1}_{K-1}-\mathbf{1}_{k})\right) \right) &=& \lambda N^* (1-x)\frac{y_kg_k}{\sum_{i=0}^{K-1}y_ig_i} \end{eqnarray}
\paragraph{Customer Arrival:} When there is a customer arriving to the system to pick up an e-scooter with battery life in the interval $[\frac{k}{K},\frac{k+1}{K}]$ where $k\geq K_U$, the proportion of e-scooters in use goes up by $1/N$, and the transition rate $Q^N$ is \begin{eqnarray} Q^{N}\left((x,y),\left(x+\frac{1}{N},y\right) \right) &=& \mu N(1-x)\sum_{k=K_U}^{K-1}y_k \end{eqnarray}
\paragraph{Battery Usage:} When there is a customer riding an e-scooter with battery life in the interval $[\frac{k}{K},\frac{k+1}{K}]$ where $k\geq K_U$, the proportion of e-scooters with battery life in the interval $[\frac{k}{K},\frac{k+1}{K}]$ goes down by $1/N$, with probability $p_{kj}$ the proportion of e-scooters with battery life in the interval $[\frac{j}{K},\frac{j+1}{K}]$ goes up by $1/N$, and the proportion of e-scooters in use goes down by $1/N$. The transition rate $Q^N$ is \begin{eqnarray} Q^{N} \left((x,y),\left(x-\frac{1}{N},y+\frac{1}{N}(\mathbf{1}_{j}-\mathbf{1}_{k})\right) \right) &=& \mu_U Nx p_{kj} y_k \mathbf{1}\{k\geq K_U\} \mathbf{1}\{j\leq k\}. \end{eqnarray} With the above transition rates, we have that $(X^N(t),Y^N(t))$ is a Markov jump process. In Figure \ref{model2_diagram}, we illustrate the transitions between states in the proposed model. Note that in order to make the illustration easier to understand, we break down $X^N(t)$ into each battery interval. Since we assume uniform arrival to scooters with battery life at all levels, we do not need to track the proportion of e-scooters in use in each battery bucket for the model to be Markovian. However, this will be needed when the arrival rate depends on the battery life of e-scooters, in which case we can extend the state space to $$X_{u, k}^N(t)=\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}\left\{\frac{k}{K}\leq B_i(t) < \frac{k+1}{K}, u_i(t)=u\right\}, \quad k=0,\cdots, K-1, u=0,1$$ where $u_i(t)=1$ denotes that scooter $i$ is in use at time $t$ and $u_i(t)=0$ indicates that the $i^{th}$ e-scooter is idle at time $t$.
\begin{figure}
\caption{Diagram of transitions of states for the empirical process $(X^N(t),Y^N(t))$}
\label{model2_diagram}
\end{figure}
Now that we have two models for the dynamic behavior of e-scooter systems, we want to understand some important behavior of the system. Since the system is quite large and is not easy to analyze directly, we resort to asymptotic analysis. Thus, in the sequel we will prove mean field and central limit theorems that describe the mean and variance dynamics of the e-scooter system.
\section{Mean Field Limit of Empirical Processes} \label{Mean_Field_Limit}
In this section, we prove the mean field limit for both of our empirical process models of e-scooter battery life. A mean field limit describes the large-system dynamics of e-scooter battery life and usage over time. Deriving the mean field limit allows us to gain insight into the average system behavior when the number of e-scooters is very large. Thus, we avoid the need to study an $N$-dimensional continuous time Markov chain and compute its steady state distribution in this high dimensional setting, which is quite intractable. We first state the mean field limit result for the stochastic model described in Section \ref{model_1}, the empirical process of e-scooter battery life with instantaneous battery usage.
\begin{theorem}[Functional Law of Large Numbers]\label{fluid_limit}
Let $|\cdot|$ denote the Euclidean norm in $\mathbb{R}^{K}$. Suppose that $\lim_{N\rightarrow \infty}\frac{N^*}{N}=\gamma$ and $Y^{N}(0)\xrightarrow{p} y(0)$. Then for all $\epsilon>0$
$$\lim_{N\rightarrow \infty}P\left(\sup_{t\leq t_0}|Y^N(t)-y(t)|>\epsilon\right)=0,$$ where $y(t)$ is the unique solution to the following differential equation starting at $y(0)$, \begin{equation}\label{diff_eqn} \shortdot{y}=f(y) \end{equation}
where $f:[0,1]^{K}\rightarrow \mathbb{R}^{K}$ is a vector field that satisfies \begin{eqnarray}\label{eqn:b} f(y)&=&\sum_{k=0}^{K-1}\left[ \left( \frac{\lambda \gamma g_k}{\sum_{i=0}^{K-1}y_ig_i} \right)(\mathbf{1}_{K-1}-\mathbf{1}_{k})+\sum_{j=0}^{k}\mu p_{kj} (\mathbf{1}_{j}-\mathbf{1}_{k})\mathbf{1}_{k\geq K_U}\right] y_k\nonumber \\ \end{eqnarray} or componentwise for $0\leq j \leq K-1$, \begin{eqnarray} f_j(y)&=&\underbrace{\sum_{k=\max(K_U,j)}^{K-1}\mu p_{k,j}y_k}_{\text{battery usage from scooters in $k$-th bucket}} +\underbrace{\lambda \gamma \mathbf{1}\{j=K-1\}}_{\text{battery recharge to full}}-\underbrace{\mu y_j \mathbf{1}\{j\geq K_U\}}_{\text{battery usage from scooters in $j$-th bucket}}\nonumber\\ & &-\underbrace{\frac{\lambda \gamma y_jg_j}{\sum_{i=0}^{K-1}y_ig_i}}_{\text{battery recharge to scooters in $j$-th bucket}} . \end{eqnarray} \end{theorem}
\begin{proof} The full proof is provided in the Appendix (Section \ref{Appendix}). \end{proof}
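The componentwise drift $f_j(y)$ above can be integrated numerically with a standard ODE solver. The forward-Euler sketch below is a minimal illustration; the parameters ($\lambda\gamma$, $\mu$, the weights $g$, and the matrix $P$) are assumptions, not fitted values.

```python
# Forward-Euler integration of the Model 1 mean field ODE dy/dt = f(y).
K, K_U = 5, 1
lam_gamma, mu = 0.5, 1.0          # lambda * gamma and the ride rate
g = [16, 8, 4, 2, 1]              # choice-model weights, decreasing in i
P = [[1, 0, 0, 0, 0],             # row k: distribution of post-ride bucket j <= k
     [0.5, 0.5, 0, 0, 0],
     [0.1, 0.4, 0.5, 0, 0],
     [0, 0.1, 0.4, 0.5, 0],
     [0, 0, 0.1, 0.4, 0.5]]

def f(y):
    """Componentwise drift f_j(y): ride inflow + recharge inflow - outflows."""
    denom = sum(yi * gi for yi, gi in zip(y, g))
    out = []
    for j in range(K):
        val = sum(mu * P[k][j] * y[k] for k in range(max(K_U, j), K))
        val += lam_gamma * (j == K - 1)          # recharges arrive at full charge
        val -= mu * y[j] * (j >= K_U)            # rides drain bucket j
        val -= lam_gamma * y[j] * g[j] / denom   # recharges leave bucket j
        out.append(val)
    return out

y = [0.0, 0.0, 0.0, 0.0, 1.0]     # start with all mass fully charged
dt = 0.01
for _ in range(5000):             # integrate to t = 50
    dy = f(y)
    y = [yi + dt * di for yi, di in zip(y, dy)]
```

Since the inflow and outflow terms of $f$ cancel when summed over $j$, the components of $y$ continue to sum to one along the trajectory, mirroring the conservation of scooters in the stochastic model.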
Now we state the mean field limit result for the stochastic model described in Section \ref{model_2}, the empirical process of e-scooter battery life with exponentially distributed battery usage time.
\begin{theorem}[Functional Law of Large Numbers]\label{fluid_limit_model2}
Let $|\cdot|$ denote the Euclidean norm in $\mathbb{R}^{K+1}$. Suppose that $\lim_{N\rightarrow \infty}\frac{N^*}{N}=\gamma$ and $(X^N(0),Y^{N}(0))\xrightarrow{p} (x(0),y(0))$. Then for all $\epsilon>0$
$$\lim_{N\rightarrow \infty}P\left(\sup_{t\leq t_0}|(X^N(t), Y^N(t))-(x(t),y(t))|>\epsilon\right)=0$$ where $(x(t),y(t))$ is the unique solution to the following differential equation starting at $(x(0), y(0))$ \begin{eqnarray}\label{diff_eqn} \shortdot{x}&=&f_x(x, y),\\ \shortdot{y}&=&f_y(x, y), \end{eqnarray}
where $f=(f_x,f_y):[0,1]\times [0,1]^{K}\rightarrow \mathbb{R}\times \mathbb{R}^{K}$ is a vector field that satisfies \begin{eqnarray} f_x(x,y)=\underbrace{\mu(1-x)\sum_{k=K_U}^{K-1}y_k}_{\text{customers picking up scooters}} -\underbrace{\mu_U x \sum_{k=K_U}^{K-1}y_k}_{\text{customers drop off scooters}}, \end{eqnarray}
\begin{eqnarray} f_y(x,y)&=&\sum_{k=0}^{K-1}\left[ \left( \frac{\lambda \gamma (1-x) g_k}{\sum_{i=0}^{K-1}y_ig_i} \right)(\mathbf{1}_{K-1}-\mathbf{1}_{k})+\sum_{j=0}^{k}\mu_U x p_{kj} (\mathbf{1}_{j}-\mathbf{1}_{k})\mathbf{1}_{k\geq K_U}\right] y_k,\nonumber \\ \end{eqnarray}
or componentwise for $0\leq j \leq K-1$, \begin{eqnarray} f_y(x,y)(j)&=&\underbrace{\sum_{k=\max(K_U,j)}^{K-1}\mu _U x p_{k,j}y_k}_{\text{battery usage from scooters in $k$-th bucket}} +\underbrace{\lambda \gamma(1-x) \mathbf{1}\{j=K-1\}}_{\text{battery recharge to full}}\nonumber\\ & &-\underbrace{\mu_U x y_j \mathbf{1}\{j\geq K_U\}}_{\text{battery usage from scooters in $j$-th bucket}}-\underbrace{\frac{\lambda \gamma(1-x) y_jg_j}{\sum_{i=0}^{K-1}y_ig_i}}_{\text{battery recharge to scooters in $j$-th bucket}}. \end{eqnarray} \end{theorem} \begin{proof} The proof ideas for Theorem \ref{fluid_limit_model2} follow easily from the proof of Theorem \ref{fluid_limit} so we do not prove them in this paper. \end{proof}
\iffalse \subsection{Steady State Analysis} By the existence and uniqueness of the fluid limit, $y(t)$ has a unique steady state $\bar{y}$ which satisfies
$$\sum_{k=\max(K_U,j)}^{K-1}\mu p_{k,j}\bar{y}_k +\lambda \gamma \mathbf{1}\{j=K-1\} =\mu \bar{y}_j \mathbf{1}\{j\geq K_U\}+\frac{\lambda \gamma \bar{y}_jg_j}{\sum_{i=0}^{K-1}\bar{y}_ig_i}$$ Specifically, \begin{equation} \sum_{k=K_U}^{K-1}\mu p_{k,j}\bar{y}_k =\frac{\lambda \gamma \bar{y}_jg_j}{\sum_{i=0}^{K-1}\bar{y}_ig_i}, \quad 0\leq j<K_U \end{equation} \begin{equation} \sum_{k=j}^{K-1}\mu p_{k,j}\bar{y}_k =\mu \bar{y}_j+\frac{\lambda \gamma \bar{y}_jg_j}{\sum_{i=0}^{K-1}\bar{y}_ig_i}, \quad K_U\leq j<K-1 \end{equation} \begin{equation} \mu p_{K-1,K-1}\bar{y}_{K-1} +\lambda \gamma =\mu \bar{y}_{K-1}+\frac{\lambda \gamma \bar{y}_{K-1}g_{K-1}}{\sum_{i=0}^{K-1}\bar{y}_ig_i}, \quad j=K-1 \end{equation} \fi
Both mean field limits provide insight into the stochastic models by supplying ordinary differential equations that describe the mean proportion of scooters in a particular interval of battery life. In the first model, the dimension is reduced from $N$ scooters to $K$ intervals, where $K$ is generally much smaller than $N$. In the second model, the dimension is only increased to $K+1$, which is still much smaller than $N$, the number of e-scooters. We will describe in the sequel how these mean field limits can be used to construct staffing algorithms for agents who will swap out the batteries when they are low. However, the mean field limits only describe the mean dynamics of the stochastic models and say nothing about the stochastic fluctuations around them. In the next section, we prove central limit theorems, centered around the mean field limits, for our stochastic models. The central limit theorems provide rigorous support for confidence intervals around the mean field limits.
\section{Central Limit Theorem of Empirical Process} \label{Central_Limit}
In this section, we derive the diffusion limit of our stochastic empirical process model of e-scooter battery life. Diffusion limits are critical for obtaining a deep understanding of the sample path behavior of stochastic processes around their mean. One reason is that diffusion limits describe the fluctuations around the mean or mean field limit and can help understand the variance or the asymptotic distribution of the stochastic process being analyzed. We define our diffusion scaled e-scooter sharing model by subtracting the mean field limit from the scaled stochastic process and rescaling it by $\sqrt{N}$. Thus, we obtain the following expression for the diffusion scaled scooter battery life empirical process \begin{equation} D^{N}(t)=\sqrt{N}(Y^{N}(t)-y(t)). \end{equation} Now we state the functional central limit theorem for the empirical process described in Section \ref{model_1}. \begin{theorem}[Functional Central Limit Theorem]\label{difftheorem} Consider $D^{N}(t)$ in $\mathbb{D}(\mathbb{R}_{+},\mathbb{R}^{K})$ with the Skorokhod $J_{1}$ topology, and suppose that $\limsup_{N\rightarrow \infty}\sqrt{N}\left(\frac{N^*}{N}-\gamma\right)< \infty$. If $D^{N}(0)$ converges in distribution to $D(0)$, then $D^{N}(t)$ converges in distribution to the unique Ornstein-Uhlenbeck (OU) process solving $D(t)=D(0)+\int_{0}^{t}f'(y(s))D(s) ds+M(t)$, where $f'(y)$ is specified as follows, \begin{eqnarray}\label{b'} \frac{\partial f_j(y)}{\partial y_k}&=&\mu p_{k,j}\mathbf{1}\{k\geq \max\{j,K_U\}\}-\mu \mathbf{1}\{k=j\geq K_U\}\nonumber\\ & &+\frac{\lambda \gamma g_kg_jy_j}{(\sum_{i=0}^{K-1}y_ig_i)^2}-\frac{\lambda \gamma g_j}{\sum_{i=0}^{K-1}y_ig_i}\mathbf{1}\{j= k\}, \end{eqnarray} and $M(t) = (M_{0}(t),\cdots,M_{K-1}(t))\in \mathbb{R}^{K}$ is a real continuous centered Gaussian martingale, with Doob-Meyer brackets given by \begin{eqnarray}
\boldlangle M_{k}(t),M_{j}(t)\boldrangle &=&\begin{cases} \int_{0}^{t}\left[\sum_{i=\max(K_U,k+1)}^{K-1}\mu p_{i,k}y_i(s) +\left(\mu(1-p_{k,k})\mathbf{1}_{\{k\geq K_U\}}\right.\right.\nonumber\\ +\left.\left.\frac{\lambda \gamma g_k}{\sum_{i=0}^{K-1}y_i(s)g_i} \right)y_k(s)\right]ds, & k=j<K-1 \nonumber\\ \int_{0}^{t}\left[\mu(1-p_{K-1,K-1})y_{K-1}(s)+\lambda \gamma\left(1- \frac{g_{K-1}y_{K-1}(s)}{\sum_{i=0}^{K-1}y_i(s)g_i}\right)\right]ds, & k=j=K-1\\ -\int_{0}^{t} \left[\mu p_{k,j}y_k(s)+\frac{\lambda \gamma g_j}{\sum_{i=0}^{K-1}y_i(s)g_i}y_j(s)\mathbf{1}\{k=K-1\}\right]ds, & j<k, k\geq K_U\nonumber\\ 0. & \text{otherwise} \end{cases} \end{eqnarray} Define $\mathcal{A}(t)=f'(y(t))$, $\mathcal{B}(t)=\left(\frac{d}{dt}\boldlangle M_i(t),M_j(t)\boldrangle \right)_{ij}$, then the covariance matrix $\Sigma (t)=\mathrm{Cov}[D(t),D(t)]$ satisfies \begin{equation} \frac{d\Sigma(t)}{dt}=\Sigma(t)\mathcal{A}(t)^\top+\mathcal{A}(t)\Sigma(t)+\mathcal{B}(t). \end{equation} Moreover, componentwise, for $i=0,1,\cdots, K-2$, \begin{eqnarray} \frac{d\Sigma_{ii}(t)}{dt} &=&2\sum_{k=\max\{i,K_U\}}^{K-1}\Sigma_{ik}\mu p_{k,i}-2\left( \frac{\lambda \gamma g_i}{\sum_{i=0}^{K-1}y_ig_i}+\mu \mathbf{1}\{i\geq K_U\}\right)\Sigma_{ii} +2\sum_{k=0}^{K-1}\Sigma_{ik}\frac{\lambda \gamma g_kg_iy_i}{(\sum_{i=0}^{K-1}y_ig_i)^2}\nonumber\\ & &+\sum_{k=\max(i+1,K_U)}^{K-1}\mu p_{k,i}y_k +\left(\mu(1-p_{i,i})\mathbf{1}\{i\geq K_U\}+\frac{\lambda \gamma g_i}{\sum_{i=0}^{K-1}y_ig_i} \right)y_i, \end{eqnarray}
and for $i=K-1$, \begin{eqnarray} \frac{d\Sigma_{K-1,K-1}(t)}{dt} &=&-2\left( \frac{\lambda \gamma g_{K-1}}{\sum_{i=0}^{K-1}y_ig_i}+\mu \right)\Sigma_{K-1,K-1}+2\sum_{k=0}^{K-1}\Sigma_{K-1,k}\frac{\lambda \gamma g_kg_{K-1}y_{K-1}}{(\sum_{i=0}^{K-1}y_ig_i)^2}+\mu(1-p_{K-1,K-1})y_{K-1}\nonumber\\ & &+\lambda \gamma\left(1- \frac{g_{K-1}y_{K-1}}{\sum_{i=0}^{K-1}y_ig_i}\right) , \end{eqnarray} and for $0\leq i<j\leq K-1$, \begin{eqnarray} \frac{d\Sigma_{ij}(t)}{dt} &=&\sum_{k=\max\{j,K_U\}}^{K-1}\Sigma_{ik}\mu p_{k,j}-\left( \frac{\lambda \gamma g_j}{\sum_{i=0}^{K-1}y_ig_i}+\mu \mathbf{1}\{j\geq K_U\}\right)\Sigma_{ij}+\sum_{k=0}^{K-1}\Sigma_{ik}\frac{\lambda \gamma g_kg_jy_j}{(\sum_{i=0}^{K-1}y_ig_i)^2}\nonumber\\ & &+\sum_{k=\max\{i,K_U\}}^{K-1}\Sigma_{jk}\mu p_{k,i}-\left( \frac{\lambda \gamma g_i}{\sum_{i=0}^{K-1}y_ig_i}+\mu \mathbf{1}\{i\geq K_U\}\right)\Sigma_{ij}+\sum_{k=0}^{K-1}\Sigma_{jk}\frac{\lambda \gamma g_kg_iy_i}{(\sum_{i=0}^{K-1}y_ig_i)^2}\nonumber\\ & &-\mu p_{j,i}y_j\mathbf{1}\{j\geq K_U\}-\frac{\lambda \gamma g_i}{\sum_{i=0}^{K-1}y_ig_i}y_i\mathbf{1}\{j=K-1\}. \end{eqnarray} \end{theorem}
\begin{proof} In order to prove Theorem \ref{difftheorem}, we need to prove the following four results listed below step by step.
\begin{itemize} \item[1).](Lemma \ref{martingale_brackets})$\sqrt{N}M^{N}(t)$ is a family of martingales independent of $D^{N}(0)$ with Doob-Meyer brackets given by \begin{eqnarray} & &\boldlangle \sqrt{N}M^{N}_k(t),\sqrt{N}M^{N}_j(t)\boldrangle\nonumber\\ &=&\begin{cases} \int_{0}^{t}\left[\sum_{i=\max(K_U,k+1)}^{K-1}\mu p_{i,k}Y^N_i(s) +\left(\mu(1-p_{k,k})\mathbf{1}_{\{k\geq K_U\}}\right.\right. & \nonumber\\ +\left.\left.\lambda N^*\frac{ g_k}{N\sum_{i=0}^{K-1}Y^N_i(s)g_i} \right)Y^N_k(s)\right]ds, & k=j<K-1 \nonumber\\ \int_{0}^{t}\left[\mu(1-p_{K-1,K-1})Y_{K-1}^N(s)+\lambda \frac{N^*}{N}\left(1- \frac{g_{K-1}Y_{K-1}^N(s)}{\sum_{i=0}^{K-1}Y_i^N(s)g_i}\right)\right]ds, & k=j=K-1\\ -\int_{0}^{t} \left[\mu p_{k,j}Y^N_k(s)+\lambda N^*\frac{ g_j}{N\sum_{i=0}^{K-1}Y_i^N(s)g_i}Y_j^N(s)\mathbf{1}\{k=K-1\}\right]ds, & j<k, k\geq K_U\nonumber\\ 0. & \text{otherwise} \end{cases} \end{eqnarray}
\item[2).](Lemma \ref{L2bound}) For any $T\geq 0$, $$\limsup_{N\rightarrow \infty}\mathbb{E}(|D^{N}(0)|^2)<\infty \Rightarrow \limsup_{N\rightarrow \infty}\mathbb{E}(\sup_{0\leq t\leq T}|D^{N}(t)|^2)<\infty. $$ \item[3).] (Lemma \ref{tightness}) If $(D^{N}(0))_{N=1}^{\infty}$ is tight then $(D^{N})_{N=1}^{\infty}$ is tight and its limit points are continuous. \item[4).] If $D^{N}(0)$ converges to $D(0)$ in distribution, then $D^{N}(t)$ converges to the unique OU process solving $D(t)=D(0)+\int_{0}^{t}f'(y(s))D(s) ds+M(t)$ in distribution. \end{itemize} We provide the proofs of Lemma \ref{martingale_brackets}, Lemma \ref{L2bound} and Lemma \ref{tightness} in the Appendix (Section \ref{Appendix}). For step 4), by Theorem 4.1 in Chapter 7 of \citet{Ethier2009}, it suffices to prove the following conditions hold \begin{enumerate} \item [a).] \begin{equation}\label{condition_1}
\lim_{N\rightarrow \infty}\mathbb{E}\left[\sup_{t\leq T}|D^N(t)-D^N(t-)|^2\right]=0, \end{equation} \item [b).] \begin{equation}\label{condition_2}
\lim_{N\rightarrow \infty}\mathbb{E}\left[\sup_{t\leq T}\left|\int_{0}^{t}F^N(Y^N(s))ds-\int_{0}^{t-}F^N(Y^N(s))ds\right|^2\right]=0, \end{equation} \item [c).] for $0\leq k,j\leq K-1$, \begin{equation}\label{condition_3}
\lim_{N\rightarrow \infty}\mathbb{E}\left[\sup_{t\leq T}\left|\boldlangle \sqrt{N}M^{N}_k(t),\sqrt{N}M^{N}_j(t)\boldrangle-\boldlangle \sqrt{N}M^{N}_k(t-),\sqrt{N}M^{N}_j(t-)\boldrangle\right|^2\right]=0, \end{equation} \item [d).] for $0\leq k,j\leq K-1$, \begin{equation}\label{condition_4}
\sup_{t\leq T}\left|\boldlangle \sqrt{N}M^{N}_k(t),\sqrt{N}M^{N}_j(t)\boldrangle-\boldlangle M_k(t),M_j(t)\boldrangle\right|\xrightarrow{p} 0, \end{equation} \item [e).] \begin{equation}\label{condition_5}
\sup_{t\leq T}\left|\int_{0}^{t}\left\{\sqrt{N}[f(Y^N(s))-f(y(s))]-f'(y(s))D^{N}(s)\right\}ds\right|\xrightarrow{p} 0. \end{equation}
\end{enumerate}
Condition (\ref{condition_1}) follows from the fact that $D^N(t)$ has jumps of size $1/\sqrt{N}$. Condition (\ref{condition_2}) follows from the fact that $F^N(y)$ is a Lipschitz function of $y$ together with condition (\ref{condition_1}). By Lemma \ref{martingale_brackets} and the fact that $Y^N(t)$ has jumps of size $1/N$, condition (\ref{condition_3}) also holds. Condition (\ref{condition_4}) follows from Proposition \ref{driftbound} and Lemma \ref{martingale_brackets} (see, for example, the proof of Equation (\ref{martingale_brackets_convergence}) for details).
Finally, to show condition (\ref{condition_5}), by Equation (\ref{b'}) we know that $f(y(t))$ is continuously differentiable with respect to $y(t)$. By the mean value theorem, for every $0\leq s\leq t$ there exists a vector $Z^{N}(s)$ in between $Y^{N}(s)$ and $y(s)$ such that $$f(Y^{N}(s))-f(y(s))=f'(Z^{N}(s))(Y^{N}(s)-y(s)).$$ Therefore, $$\int_{0}^{t}\left\{\sqrt{N}[f(Y^{N}(s))-f(y(s))]-f'(y(s))D^{N}(s)\right\}ds=\int_{0}^{t}[f'(Z^{N}(s))-f'(y(s))]D^{N}(s)ds.$$
We know that $$\lim_{N\rightarrow \infty}\sup_{s\leq T}|f'(Z^{N}(s))-f'(y(s))|=0\quad \text{in probability}$$ by the mean field limit convergence (Theorem \ref{fluid_limit}) and the uniform continuity of $f'$. By Chebyshev's inequality, $D^{N}(s)$ is bounded in probability. Then, by Lemma 5.6 in \citet{ko2018strong}, we have that
$$\sup_{t\leq T}\left|\int_{0}^{t}\left\{\sqrt{N}[f(Y^N(s))-f(y(s))]-f'(y(s))D^{N}(s)\right\}ds\right|\xrightarrow{p} 0.$$ \end{proof} \end{theorem}
As in the mean field case, we now state the functional central limit theorem for the empirical process described in Section \ref{model_2}, where the battery usage time is exponentially distributed.
\begin{theorem}[Functional Central Limit Theorem] \label{difftheorem_model2} Define $$D^{N}(t)=\sqrt{N}((X^N(t),Y^N(t))-(x(t),y(t))).$$ Consider $D^{N}(t)$ in $\mathbb{D}(\mathbb{R}_{+},\mathbb{R}^{K+1})$ with the Skorokhod $J_{1}$ topology, and suppose that $$\limsup_{N\rightarrow \infty}\sqrt{N}\left(\frac{N^*}{N}-\gamma\right)< \infty.$$ Then if $D^{N}(0)$ converges in distribution to $D(0)$, then $D^{N}(t)$ converges to the unique OU process solving $D(t)=D(0)+\int_{0}^{t}f'(x(s), y(s))D(s) ds+M(t)$ in distribution, where $f'(x,y)$ is specified as follows, \begin{eqnarray} \frac{\partial f_x(x,y)}{\partial x}&=&-(\mu+\mu_U)\sum_{k=K_U}^{K-1}y_k,\nonumber\\ \frac{\partial f_x(x,y)}{\partial y_k}&=&(\mu(1-x)-\mu_U x)\mathbf{1}\{k\geq K_U\}\nonumber\\ \frac{\partial f_y(x,y)(j)}{\partial x}&=&\sum_{k=\max(K_U,j)}^{K-1}\mu_Up_{kj}y_k-\lambda\gamma\left(\mathbf{1}\{j=K-1\}-\frac{g_jy_j}{\sum_{i=0}^{K-1}g_iy_i}\right)-\mu_U y_j \mathbf{1}\{j\geq K_U\},\nonumber\\ \frac{\partial f_y(x,y)(j)}{\partial y_k}&=&\mu_U x p_{k,j}\mathbf{1}\{k\geq \max\{j,K_U\}\}-\mu_U x \mathbf{1}\{k=j\geq K_U\},\nonumber\\ & & +\frac{\lambda \gamma (1-x) g_kg_jy_j}{(\sum_{i=0}^{K-1}y_ig_i)^2}-\frac{\lambda \gamma (1-x)g_j}{\sum_{i=0}^{K-1}y_ig_i}\mathbf{1}\{j= k\},\nonumber\\ \end{eqnarray} and $M(t)=(M_{x}(t), M_{y,0}(t),\cdots,M_{y,K-1}(t))\in \mathbb{R}^{K+1}$ is a real continuous centered Gaussian martingale, with Doob-Meyer brackets given by \begin{eqnarray} \boldlangle M_{x}(t) \boldrangle & = & \int_{0}^{t}\left[(\mu(1-x(s))+\mu_U x(s))\sum_{k=K_U}^{K-1}y_k(s)\right]ds,\\ \boldlangle M_{x}(t), M_{y,j}(t)\boldrangle & = & \ -\int_{0}^{t}\left[\mu_U x(s)\sum_{k=\max\{j,K_U\}}^{K-1}p_{k,j}y_k(s)\right]ds, \end{eqnarray} \begin{eqnarray} & &\boldlangle M_{y,k}(t),M_{y,j}(t)\boldrangle \nonumber\\ &=&\begin{cases} \int_{0}^{t}\left[\sum_{i=\max(K_U,k+1)}^{K-1}\mu_U x p_{i,k}y_i(s) +\left(\mu_Ux(1-p_{k,k})\mathbf{1}_{\{k\geq K_U\}}\right.\right.\nonumber\\ +\left.\left.\frac{\lambda 
\gamma(1-x) g_k}{\sum_{i=0}^{K-1}y_i(s)g_i} \right)y_k(s)\right]ds, & k=j<K-1 \nonumber\\ \int_{0}^{t}\left[\mu_Ux(1-p_{K-1,K-1})y_{K-1}(s)+\lambda \gamma(1-x)\left(1- \frac{g_{K-1}y_{K-1}(s)}{\sum_{i=0}^{K-1}y_i(s)g_i}\right)\right]ds, & k=j=K-1\\ -\int_{0}^{t} \left[\mu_U x p_{k,j}y_k(s)+\frac{\lambda \gamma(1-x) g_j}{\sum_{i=0}^{K-1}y_i(s)g_i}y_j(s)\mathbf{1}\{k=K-1\}\right]ds, & j<k, k\geq K_U\nonumber\\ 0. & \text{otherwise} \end{cases}\nonumber\\ \end{eqnarray} Define $\mathcal{A}(t)=f'(x(t),y(t))$, $\mathcal{B}(t)=\left(\frac{d}{dt}\boldlangle M_i(t),M_j(t)\boldrangle \right)_{ij}$, then the covariance matrix $\Sigma (t)=\mathrm{Cov}[D(t),D(t)]$ satisfies \begin{equation} \frac{d\Sigma(t)}{dt}=\Sigma(t)\mathcal{A}(t)^\top+\mathcal{A}(t)\Sigma(t)+\mathcal{B}(t). \end{equation} \begin{proof} The proof ideas for Theorem \ref{difftheorem_model2} follow easily from the proof of Theorem \ref{difftheorem} so we do not prove them in this paper. \end{proof} \end{theorem}
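The covariance ODE $\frac{d\Sigma(t)}{dt}=\Sigma(t)\mathcal{A}(t)^\top+\mathcal{A}(t)\Sigma(t)+\mathcal{B}(t)$ can be integrated numerically; below is a minimal forward-Euler sketch (not the scheme used for the figures), assuming $\mathcal{A}(t)$ and $\mathcal{B}(t)$ are supplied as callables, e.g.\ evaluated along the mean field trajectory.

```python
def integrate_lyapunov(A, B, Sigma0, T, dt=1e-3):
    """Forward-Euler sketch of d Sigma/dt = Sigma A(t)^T + A(t) Sigma + B(t).
    A and B are callables returning K x K matrices as lists of lists."""
    K = len(Sigma0)
    S = [row[:] for row in Sigma0]
    for n in range(int(T / dt)):
        At, Bt = A(n * dt), B(n * dt)
        dS = [[sum(S[i][m] * At[j][m] for m in range(K))    # (Sigma A^T)_ij
               + sum(At[i][m] * S[m][j] for m in range(K))  # (A Sigma)_ij
               + Bt[i][j]
               for j in range(K)]
              for i in range(K)]
        S = [[S[i][j] + dt * dS[i][j] for j in range(K)] for i in range(K)]
    return S
```

In the scalar case with constant $\mathcal{A}=-a$ and $\mathcal{B}=b$, the solution converges to the equilibrium variance $b/(2a)$, which gives a quick sanity check for the integrator.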
\section{Insights into Staffing Swappers}\label{Staffing} In Sections \ref{Mean_Field_Limit} and \ref{Central_Limit}, we prove mean field and central limit theorems for the empirical process of battery life. In this section, we show how to use these limits to staff the number of swappers so as to keep the number of e-scooters with low battery life below a pre-specified threshold. Our limit theorems are useful because when the scale of the system $N$ is large enough, the empirical process representing battery life can be approximated by a normal distribution, which is extremely convenient from a computational perspective. As a result, we can approximate the tail probability of the empirical measure process by the following expression \begin{eqnarray} P(Y_k^N(t)>x)\approx P(y_k(t)+\sqrt{\Sigma_{kk}(t)/N}\cdot Z>x)=1-\Phi\left(\frac{x-y_k(t)}{\sqrt{\Sigma_{kk}(t)/N}}\right), \end{eqnarray} where $Z\sim N(0,1)$ and $\Phi$ is the cdf of the standard normal distribution.
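The tail-probability approximation above is straightforward to evaluate; the sketch below (plain Python, using the error function for $\Phi$) shows the computation, with purely illustrative values for $y_k(t)$ and $\Sigma_{kk}(t)$.

```python
import math

def normal_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tail_prob(y_k, sigma_kk, N, x):
    # P(Y_k^N(t) > x) ~ 1 - Phi((x - y_k(t)) / sqrt(Sigma_kk(t)/N))
    return 1.0 - normal_cdf((x - y_k) / math.sqrt(sigma_kk / N))

# Illustrative (hypothetical) values: y_k(t) = 0.08, Sigma_kk(t) = 0.05, N = 100
p = tail_prob(0.08, 0.05, 100, 0.10)
```

Note that the approximation sharpens as $N$ grows, since the standard deviation scales as $1/\sqrt{N}$.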
Similarly, we may consider several components of the empirical process $(Y_{\sigma(i)})_{i}$, where $\sigma$ is a permutation on $\{0,1,\cdots,K-1\}$. We have \begin{eqnarray} P\left(\sum_{i=1}^{m}Y_{\sigma(i)}^N(t)>x\right)&\approx & P\left(\sum_{i=1}^{m}y_{\sigma(i)}(t)+\sqrt{\frac{1}{N}\left(\sum_{i=1}^{m}\Sigma_{\sigma(i)\sigma(i)}(t)+2\sum_{i<j}^{m}\Sigma_{\sigma(i)\sigma(j)}(t)\right)}\cdot Z>x\right)\nonumber\\ &=&1-\Phi\left(\frac{x-\sum_{i=1}^{m}y_{\sigma(i)}(t)}{\sqrt{\frac{1}{N}\left(\sum_{i=1}^{m}\Sigma_{\sigma(i)\sigma(i)}(t)+2\sum_{i<j}^{m}\Sigma_{\sigma(i)\sigma(j)}(t)\right)}}\right). \end{eqnarray}
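The aggregated version can be evaluated the same way; in the sketch below, `y` and `Sigma` stand for the limiting mean vector $y(t)$ and covariance matrix $\Sigma(t)$ at the time of interest, and the full double sum over the index set automatically collects the diagonal terms plus twice the cross-covariances.

```python
import math

def agg_tail_prob(y, Sigma, idx, N, x):
    """P(sum_{i in idx} Y_i^N(t) > x) via the Gaussian limit.
    sum_{i,j in idx} Sigma_ij = sum_i Sigma_ii + 2 sum_{i<j} Sigma_ij."""
    mean = sum(y[i] for i in idx)
    var = sum(Sigma[i][j] for i in idx for j in idx) / N  # includes cross terms
    z = (x - mean) / math.sqrt(var)
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

With a single index, this reduces exactly to the one-dimensional tail approximation above.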
An important question the above analysis can help answer is how many swappers are needed to keep the proportion of e-scooters with low battery life lower than a threshold, i.e. $P(Y_0>x)<\epsilon$ for given $(x,\epsilon)$. Specifically, we construct the following algorithm for finding the solution numerically.
\begin{algorithm}\label{algo} Given $x>0, \epsilon>0$, we have the following steps to find the number of swappers needed to satisfy $P(Y_0^N>x)\leq \epsilon$. \begin{enumerate} \item Initialize $\gamma=1$. \item Evaluate \begin{equation} f(\gamma)=\bar{y}_0(\gamma)+\sqrt{\frac{\bar{\Sigma}_{00}(\gamma)}{N}}\Phi^{-1}(1-\epsilon)-x \end{equation} where $\bar{y}_0(\gamma)$ and $\bar{\Sigma}_{00}(\gamma)$ are the limiting mean and variance of $Y^N_0(t)$ at equilibrium (computed from the mean field and diffusion limits).
If $f(\gamma)>0$, set $\gamma \leftarrow 2\gamma$ (double the value of $\gamma$) and repeat step 2 until $f(\gamma)<0$. Denote the final value of $\gamma$ by $\gamma_{\max}$. \item Apply the bisection method on the interval $[0,\gamma_{\max}]$ to find the root $\gamma^*$ of $f(\gamma)$. Then $\gamma^*$ is the optimal number of swappers per e-scooter. \end{enumerate} \end{algorithm}
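A minimal Python sketch of Algorithm \ref{algo} is given below. It assumes a user-supplied routine `equilibrium(gamma)` returning the pair $(\bar{y}_0(\gamma),\bar{\Sigma}_{00}(\gamma))$, which in practice would be obtained by integrating the mean field and covariance ODEs to steady state; the quantile $\Phi^{-1}(1-\epsilon)$ is computed by a simple bisection on the error function.

```python
import math

def normal_cdf_inv(p, lo=-10.0, hi=10.0):
    # crude inverse of the standard normal CDF by bisection
    def cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def staff_swappers(equilibrium, N, x, eps, tol=1e-6):
    """Doubling + bisection search for the root gamma* of
    f(gamma) = y0_bar(gamma) + sqrt(Sigma00_bar(gamma)/N)*Phi^{-1}(1-eps) - x,
    where `equilibrium(gamma)` returns (y0_bar, Sigma00_bar)."""
    z = normal_cdf_inv(1.0 - eps)
    def f(gamma):
        y0, s00 = equilibrium(gamma)
        return y0 + math.sqrt(s00 / N) * z - x
    gamma = 1.0
    while f(gamma) > 0:          # step 2: doubling phase
        gamma *= 2.0
    lo, hi = 0.0, gamma          # step 3: bisection on [0, gamma_max]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The search relies on $f$ being decreasing in $\gamma$: adding swappers lowers the equilibrium proportion of low-battery scooters. The toy `equilibrium` map used in the test below is hypothetical and only serves to exercise the search.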
The main idea of the algorithm is to use the mean field and central limit theorems to construct quantiles for each interval of battery life. More specifically, we invert the quantiles to find the number of swappers that achieves the probabilistic performance specified by the system operator. We demonstrate the usefulness of this algorithm in the next section, which is devoted to numerical examples.
\section{Numerical Examples and Simulation} \label{Numerics}
In this section, we use numerical examples and simulation results to provide better insight into the behavior of the e-scooter system. The examples validate our theoretical results in Sections \ref{Mean_Field_Limit} and \ref{Central_Limit} by showing how accurately the mean field and diffusion limits approximate the mean and variance of the empirical process. We also illustrate how to use our results for staffing the number of swappers. The simulation results are computed with the following parameter settings:
\begin{itemize} \item The number of e-scooters $N=100$, \item The number of swappers $N^*=50$, \item Arrival rate of customers $\lambda=1$, \item Arrival rate of swappers $\mu=1$, \item Battery life bucket size $K=5$, \item Battery threshold for riding $K_U=1$, \item Battery usage probability $p_{ij}=\frac{1}{i+1}\mathbf{1}\{j\leq i\}$. \end{itemize}
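The dynamics under these parameter settings can be simulated directly with a Gillespie-style scheme. The sketch below reflects our reading of the transition rates in the drift $F^N$: a swap sends a scooter from bucket $k$ to the full bucket $K-1$ at total rate $\lambda N^* g_k n_k/\sum_i g_i n_i$, and a usage event moves a scooter from bucket $k\geq K_U$ to bucket $j\leq k$ with probability $p_{kj}$ at total rate $\mu n_k$. The weights $g_k$ are not specified above, so uniform weights are assumed here.

```python
import random

# parameter settings from this section (g_k uniform is our assumption)
N, N_star = 100, 50
lam, mu = 1.0, 1.0
K, K_U = 5, 1
g = [1.0] * K
p = [[1.0 / (i + 1) if j <= i else 0.0 for j in range(K)] for i in range(K)]

def simulate(T, seed=0):
    """Gillespie-style sketch; counts[k] = number of scooters in bucket k."""
    rng = random.Random(seed)
    counts = [N // K] * K  # uniform initial condition, matching Y^N(0)
    t = 0.0
    while t < T:
        gc = sum(g[i] * counts[i] for i in range(K))
        swap = [lam * N_star * g[k] * counts[k] / gc for k in range(K)]
        ride = [mu * counts[k] if k >= K_U else 0.0 for k in range(K)]
        total = sum(swap) + sum(ride)
        t += rng.expovariate(total)        # time to next event
        u = rng.random() * total           # pick which event fires
        for k in range(K):
            if u < swap[k]:                # battery swap: k -> K-1
                counts[k] -= 1
                counts[K - 1] += 1
                break
            u -= swap[k]
        else:
            for k in range(K):
                if u < ride[k]:            # battery usage: k -> j ~ p[k][:]
                    j = rng.choices(range(k + 1), weights=p[k][:k + 1])[0]
                    counts[k] -= 1
                    counts[j] += 1
                    break
                u -= ride[k]
    return [c / N for c in counts]
```

Averaging `simulate` over many seeds reproduces the kind of sample-path comparisons shown in the figures below.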
We initialize the battery life of scooters at time 0 to be uniform in each interval of battery life, i.e. $Y^N(0)=[0.2,0.2,0.2,0.2,0.2]$. \begin{figure}
\caption{Single simulated sample path $Y^N(t)$ vs. its mean field limit $y(t)$ ($\gamma=0.5$)}
\label{Sim_1}
\end{figure}
In Figure \ref{Sim_1}, we simulate the e-scooter network according to the model dynamics where $K=5$ (intervals of battery life). It is important to note that we have simulated only one sample path in this figure, not an average of sample paths. Even so, we find that the mean field limit captures the sample path behavior of the empirical process dynamics for all of the proportions.
\begin{figure}
\caption{Average of 100 simulated sample paths $\mathbb{E}[Y^N(t)]$ vs. its mean field limit $y(t)$ ($\gamma=0.5$ (Left)) and ($\gamma=0.1$ (Right)) }
\label{Sim_2}
\end{figure}
In Figure \ref{Sim_2}, we average the dynamics over 100 sample paths. When compared to Figure \ref{Sim_1}, we observe that the averaged dynamics are closer to the mean field limit trajectories, which is to be expected. We also find that the mean field limit equations capture the averaged sample path behavior of the empirical process dynamics for all of the proportions.
\begin{figure}
\caption{Variance of 100 simulated sample paths $\text{Var}[Y^N(t)]$ vs. its diffusion limit $\Sigma(t)$ ($\gamma=0.5$ (Left)) and ($\gamma=0.1$ (Right)) }
\label{Sim_3}
\end{figure}
In Figure \ref{Sim_3}, we average the variance dynamics for each proportion over 100 sample paths. We observe that the variance dynamics are a bit more stochastic than the mean field limit simulations. However, we find that the variance dynamics are well approximated by the variance of the central limit theorem for the e-scooter process and for all of the proportions.
To give an example of the implementation of Algorithm \ref{algo} in Section \ref{Staffing}, we set $(x,\epsilon)=(10\%,10\%)$, i.e., we find the minimum value of $\gamma$ such that the probability that the proportion of scooters with low battery life ($<20\%$) exceeds 10\% is no greater than 10\%. The value of $\gamma$ found through the algorithm is 0.527.
In Figure \ref{hist_y0}, we validate our algorithm by running 500 simulations using the optimal number of swappers found through Algorithm \ref{algo} ($N=100$, $N^*=53$), and plot the distribution of $Y_0^N$ at equilibrium from the 500 simulations. We then plot the 50\%, 80\%, 90\% and 95\% sample quantiles of $Y_0^N$ at equilibrium (pink solid lines) and compare them with the quantiles estimated from the algorithm (red solid lines); the quantiles estimated from the algorithm are very close to the simulated ones. We also plot the normal distribution curve to assess how well it approximates the distribution of $Y^N_0$. We find that the central limit approximations capture the quantile behavior of the e-scooter system quite well.
\begin{figure}
\caption{Histogram of simulated $Y_0^N$ with normal approximation}
\label{hist_y0}
\end{figure}
\begin{table}[h] \begin{center} \caption{Optimal $\gamma$ for different values of $(x,\epsilon)$ ($\lambda=\mu=1$)} \label{table:gammas}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline\hline $(x,\epsilon)$& 0.01 & 0.05 & 0.1& 0.15& 0.2& 0.25 & 0.3\\ \hline 0.05 & 0.828 & 0.734 & 0.695& 0.672& 0.641& 0.625 & 0.609\\ 0.1 & 0.598 & 0.551 & 0.527& 0.508& 0.496& 0.484 & 0.473\\ 0.15 & 0.504 & 0.467 & 0.447& 0.434& 0.424& 0.414 & 0.406\\ 0.2 & 0.443 & 0.412 & 0.395& 0.383& 0.375& 0.366 & 0.359\\ 0.25 & 0.398 & 0.370 & 0.354& 0.344& 0.336& 0.328 & 0.322\\ 0.3 & 0.361 & 0.334 & 0.320& 0.311& 0.303& 0.296 & 0.290\\ \hline \end{tabular} \end{center} \end{table}
Table \ref{table:gammas} summarizes the optimal value of $\gamma$ (the number of swappers per e-scooter) for different values of $(x,\epsilon)$. In addition, Figure \ref{gamma_surface} provides a surface plot of $\gamma$ over the values of $(x,\epsilon)$ given in Table \ref{table:gammas}.
\begin{figure}
\caption{Surface plot of optimal $\gamma$ for different values of $(x,\epsilon).$}
\label{gamma_surface}
\end{figure}
\section{Conclusion}\label{Conclusion}
In this paper, we construct two stochastic models for the battery life dynamics of e-scooters in a large network. In full generality, the model is intractable when one wants to keep track of each scooter's dynamics individually because of the high dimension. Instead, we propose to use empirical processes to capture the essential dynamics of battery life in e-scooter systems; the empirical process describes the proportion of e-scooters whose battery life lies in a particular interval. To this end, we prove a mean field limit and a functional central limit theorem for our e-scooter network. We show that the mean and the variance of the empirical process can be approximated by a system of $\frac{K^2 + 3K}{2}$ differential equations, where $K$ is the number of battery life intervals we want to keep track of. We use the mean and variance to construct a numerical algorithm that computes the number of \textbf{swappers} needed to ensure that the proportion of scooters whose battery life is below a pre-determined threshold remains small with high probability.
There are many directions for future work. As Figure \ref{Scooter_Duration_cdf} shows, the trip durations are not exponential and are closer to a lognormal distribution or gamma distribution. An extension to general arrival and service distributions would aid in showing how the non-exponential distributions affect the dynamics of the empirical process. Recent work by \citet{li2017nonlinear, ko2017diffusion, li2016mean, pender2017approximations} provides a Poisson process representation of Markovian arrival processes. Thus, it might be useful to leverage this representation in future work where the arrivals and service distributions are non-renewal processes.
Although not explicitly studied in this work, it would be interesting to explore the impact of non-stationary arrival rates and service rates. This would undoubtedly change the underlying dynamics; however, we should mention that our analysis is easily generalizable to this setting. Moreover, our numerical algorithm for determining the number of \textbf{swappers} does not depend on the stationarity of our model and would easily generalize to the non-stationary setting. One important quantity to understand in the non-stationary context is the amplitude and the frequency of the mean field limit when the arrival rate is periodic. One way to analyze the amplitude and the frequency is to use Lindstedt's method as in \citet{novitzky2019nonlinear}.
Lastly, it is also interesting to consider a spatial model of arrivals to the e-scooter network. In this case, we would consider customers arriving to the system via a spatial Poisson process and riders would choose among the nearest scooters with enough battery life to make their trip. This spatial process can model the real choices that riders make and would model the real spatial dynamics of e-scooter networks. We intend to pursue these extensions in future work.
\section{Appendix}\label{Appendix}
\subsection*{Proof of Mean Field Limit Results}
\begin{proof}[Proof of Theorem \ref{fluid_limit}] Our proof exploits Doob's inequality for martingales and Gronwall's lemma, and we use Proposition~\ref{bound}, Proposition~\ref{Lipschitz}, and Proposition~\ref{drift} in the proof, which are stated after the proof of Theorem~\ref{fluid_limit}.
Since $Y^{N}(t)$ is a semi-martingale, we have the following decomposition of $Y^N(t)$: \begin{equation}\label{semiY} Y^{N}(t)=\underbrace{Y^{N}(0)}_{\text{initial condition}}+\underbrace{M^{N}(t)}_{\text{martingale}}+\int_{0}^{t}\underbrace{F^N(Y^{N}(s))}_{\text{drift term}}ds \end{equation} where $Y^{N}(0)$ is the initial condition and $M^{N}(t)$ is a family of martingales. Moreover, $\int_{0}^{t}F^N(Y^{N}(s))ds$ is the integral of the drift term, where the drift $F^N: [0,1]^{K}\rightarrow \mathbb{R}^{K}$ is given by \begin{eqnarray}\label{F^N} F^N(y)&=&\sum_{x\neq y}(x-y)Q^N(y,x)\\ &=&\sum_{k=0}^{K-1}\left[ \left( \frac{\lambda N^*}{N} \frac{g_k}{\sum_{i=0}^{K-1}y_ig_i} \right)(\mathbf{1}_{K-1}-\mathbf{1}_{k})+\sum_{j=0}^{k}\mu p_{kj} (\mathbf{1}_{j}-\mathbf{1}_{k})\mathbf{1}_{k\geq K_U}\right] y_k\nonumber \end{eqnarray}
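The limiting drift $f$ of Equation (\ref{eqn:b}), i.e.\ $F^N$ with $N^*/N\rightarrow\gamma$, can be viewed as a flow balance over battery buckets; the following sketch codes it that way. Each bucket $k$ sends a swap flow to the full bucket $K-1$ and, if $k\geq K_U$, usage flows to buckets $j\leq k$ weighted by $p_{kj}$.

```python
def drift(y, lam, gamma, mu, g, p, K_U):
    """Sketch of the limiting drift f(y): for each bucket k,
    swap flow lam*gamma*g_k*y_k/<g,y> moves mass k -> K-1, and
    usage flow mu*p[k][j]*y_k moves mass k -> j for k >= K_U."""
    K = len(y)
    gy = sum(g[i] * y[i] for i in range(K))
    f = [0.0] * K
    for k in range(K):
        swap = lam * gamma * g[k] / gy * y[k]   # swap rate out of bucket k
        f[K - 1] += swap
        f[k] -= swap
        if k >= K_U:
            for j in range(k + 1):
                move = mu * p[k][j] * y[k]      # usage flow k -> j
                f[j] += move
                f[k] -= move
    return f
```

Since every flow removes from one bucket exactly what it adds to another, the components of $f(y)$ sum to zero, consistent with $Y^N(t)$ being a probability vector.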
We want to compare the empirical measure $Y^N(t)$ with the mean field limit $y(t)$ defined by \begin{equation} y(t)=y(0)+\int_{0}^{t}f(y(s))ds. \end{equation}
Let $|\cdot|$ denote the Euclidean norm in $\mathbb{R}^{K}$, then \begin{eqnarray}
\left|Y^N(t)-y(t)\right|&=&\left|Y^{N}(0)+M^{N}(t)+\int_{0}^{t}F^N
(Y^{N}(s))ds-y(0)-\int_{0}^{t}f(y(s))ds\right|\nonumber \\
& =& \left|Y^{N}(0)-y(0)+M^{N}(t)+\int_{0}^{t}\left(F^N (Y^{N}(s))-f(Y^{N}(s))\right)ds \right.\nonumber\\
& & \left.+\int_{0}^{t}(f(Y^{N}(s))-f(y(s)))ds\right|.\nonumber \\ \end{eqnarray}
Now define the random function $f^{N}(t)=\sup_{s\leq t}\left|Y^{N}(s)-y(s)\right|$. Then \begin{eqnarray}
f^{N}(t) &\leq& |Y^{N}(0)-y(0)|+\sup_{s\leq t}|M^{N}(s)|+\int_0^t|F^N(Y^{N}(s))-f(Y^{N}(s))|ds \nonumber \\
&&+\int_{0}^{t}|f(Y^{N}(s))-f(y(s))|ds. \end{eqnarray} By Proposition~\ref{Lipschitz}, $f(y)$ is Lipschitz with respect to the Euclidean norm. Let $L$ be the Lipschitz constant of $f$. Then \begin{eqnarray}
f^{N}(t)&\leq& |Y^{N}(0)-y(0)|+\sup_{s\leq t}|M^{N}(s)|+\int_0^t|F^N(Y^{N}(s))-f(Y^{N}(s))|ds \nonumber \\
&&+\int_{0}^{t}|f(Y^{N}(s))-f(y(s))|ds \nonumber \\
&\leq& |Y^{N}(0)-y(0)|+\sup_{s\leq t}|M^{N}(s)|+\int_0^t| F^N(Y^{N}(s))-f(Y^{N}(s))|ds \nonumber \\
&&+L\int_{0}^{t}|Y^{N}(s)-y(s)|ds \nonumber \\
&\leq& |Y^{N}(0)-y(0)|+\sup_{s\leq t}|M^{N}(s)|+\int_0^t| F^N(Y^{N}(s))-f(Y^{N}(s))|ds \nonumber \\ &&+L\int_{0}^{t}f^{N}(s)ds. \end{eqnarray}
By Gronwall's lemma (See \citet{10.2307/1967124}), \begin{equation}
f^{N}(t) \leq \left(|Y^{N}(0)-y(0)|+\sup_{s\leq t}|M^{N}(s)|+\int_0^t| F^N(Y^{N}(s))-f(Y^{N}(s))|ds\right)e^{Lt}. \end{equation}
Now to bound $f^{N}(t)$ term by term, we define function $\alpha: [0,1]^{K}\rightarrow \mathbb{R}^{K}$ as \begin{eqnarray}
\alpha_k(y)&=&\sum_{x\neq y}|x-y|^2Q^N(y,x)(k)\nonumber\\ &=&\begin{cases} \frac{1}{N}\sum_{i=\max(K_U,k+1)}^{K-1}\mu p_{i,k}y_i +\frac{1}{N}\left(\mu(1-p_{k,k})\mathbf{1}_{\{k\geq K_U\}}+\frac{\lambda N^* g_k}{N\sum_{i=0}^{K-1}y_ig_i} \right)y_k, & k<K-1\nonumber\\ \frac{1}{N}\mu(1-p_{K-1,K-1})y_{K-1}+\lambda \frac{N^*}{N^2}\left(1- \frac{g_{K-1}y_{K-1}}{\sum_{i=0}^{K-1}y_ig_i}\right), & k=K-1 \end{cases} \end{eqnarray}
and consider the following four sets \begin{eqnarray}
\Omega_0 &=& \{|Y^{N}(0)-y(0)|\leq \delta \}, \\
\Omega_1 &=& \left\{\int_0^{t_0}|F^N(Y^{N}(s))-f(Y^{N}(s))|ds\leq \delta \right\}, \\ \Omega_2 &=& \left\{\int_0^{t_0}\alpha(Y^{N}(t))dt \leq A(N)t_0 \right\}, \\
\Omega_3 &=& \left\{\sup_{t\leq t_0}|M_t^{N}|\leq \delta \right\} , \end{eqnarray} where $\delta=\epsilon e^{-Lt_0}/3$. Here the set $\Omega_{0}$ is used to bound the initial condition, the set $\Omega_{1}$ is used to bound the difference between the drift term $F^N$ and its limit $f$, and the sets $\Omega_{2},\Omega_{3}$ are used to bound the martingale $M^{N}(t)$.
Therefore on the event $\Omega_0\cap \Omega_1\cap \Omega_3$, \begin{equation}\label{e} f^{N}(t_0)\leq 3\delta e^{Lt_0}=\epsilon. \end{equation}
Since $\lim_{N\rightarrow \infty}\frac{N^*}{N}=\gamma$, we can choose large enough $N$ such that $$\frac{N^*}{N}\leq 2\gamma.$$ Thus, we have \begin{eqnarray}
\alpha_k(y)&=&\sum_{x\neq y}|x-y|^2Q^N(y,x)(k)\nonumber\\ &\leq & \frac{1}{N}\sum_{i=\max(K_U,k+1)}^{K-1}\mu p_{i,k}y_i +\frac{1}{N}\left(\mu(1-p_{k,k})\mathbf{1}_{\{k\geq K_U\}}+2\frac{\lambda \gamma g_k}{\sum_{i=0}^{K-1}y_ig_i} \right)y_k\nonumber \\ &\leq &\frac{1}{N}\left(\mu K +\mu +2\lambda \gamma \right)\nonumber \\ &= & O\left(\frac{1}{N}\right) . \end{eqnarray}
Now we consider the stopping time $$T=t_0\wedge \inf \left\{t\geq 0:\int _{0}^{t}\alpha(Y^{N}(s))ds>A(N)t_0\right\},$$
and by Proposition ~\ref{bound}, we have that
$$\mathbb{E}\left(\sup_{t\leq T}|M^{N}(t)|^2\right)\leq 4\mathbb{E}\int_{0}^{T}\alpha(Y^N(t))dt\leq 4A(N)t_0. $$ On $\Omega_2$, we have $T =t_0$, so
$\Omega_2 \cap \Omega_3^{c}\subset \{\sup_{t\leq T}|M^{N}_t|>\delta\}$. By Chebyshev's inequality we have that \begin{equation}
\mathbb{P}(\Omega_2 \cap \Omega_3^{c})\leq \mathbb{P}\left(\sup_{t\leq T}|M^{N}_t|>\delta\right)\leq \frac{\mathbb{E}\left(\sup_{t\leq T}|M^{N}(t)|^2\right)}{\delta^2}\leq 4A(N)t_0/\delta^2. \end{equation} Thus, by Equation (\ref{e}), we have the following result, \begin{equation} \begin{split}
\mathbb{P}\left(\sup_{t\leq t_0}|Y^{N}(t)-y(t)|>\epsilon\right)&\leq \mathbb{P}(\Omega_0^c\cup \Omega_1^c\cup \Omega_3^c)\\ &\leq \mathbb{P}(\Omega_2 \cap \Omega_3^{c})+\mathbb{P}(\Omega_0^{c}\cup \Omega_1^{c}\cup \Omega_2^{c})\\ &\leq 4A(N)t_0/\delta^2+\mathbb{P}(\Omega_0^{c}\cup \Omega_1^{c}\cup \Omega_2^{c})\\ &=36A(N)t_0 e^{2Lt_0}/\epsilon^2+\mathbb{P}(\Omega_0^{c}\cup \Omega_1^{c}\cup \Omega_2^{c}). \end{split} \end{equation} \\
Let $A(N)=\frac{4(C+\gamma)}{N}$; then $\Omega_{2}^{c}=\emptyset$. Since $Y^{N}(0)\xrightarrow{p} y(0)$, we have $\lim_{N\rightarrow \infty}\mathbb{P}(\Omega_{0}^{c})=0$. Therefore we have $$\lim_{N\rightarrow \infty}\mathbb{P}\left(\sup_{t\leq t_0}|Y^{N}(t)-y(t)|>\epsilon\right)=\lim_{N\rightarrow\infty}\mathbb{P}(\Omega_{1}^{c}) .$$ By Proposition \ref{drift}, $\lim_{N\rightarrow\infty}\mathbb{P}(\Omega_{1}^{c})=0$. Thus, we obtain the final result
$$\lim_{N\rightarrow \infty}\mathbb{P}\left(\sup_{t\leq t_0}|Y^{N}(t)-y(t)|>\epsilon\right)=0.$$ \end{proof}
\begin{proposition}[Bounding martingales]\label{bound} For any stopping time $T$ such that $\mathbb{E}(T)<\infty$, we have \begin{equation}
\mathbb{E}\left(\sup_{t\leq T}|M^{N}(t)|^2\right)\leq 4\mathbb{E}\int_{0}^{T}\alpha(Y^N(t))dt. \end{equation} \begin{proof} See proof of Proposition 4.2 (page 46) in \citet{tao2017stochastic}. \end{proof} \end{proposition}
\begin{proposition}[Asymptotic Drift is Lipschitz]\label{Lipschitz} The drift function $f(y)$ given in Equation (\ref{eqn:b}) is a Lipschitz function with respect to the Euclidean norm in $\mathbb{R}^{K}$. \begin{proof}
Denote $\|\cdot\|$ the Euclidean norm in $\mathbb{R}^{K}$. Consider $y,\tilde{y}\in [0,1]^{K}$, \begin{eqnarray}
\|f(y)-f(\tilde{y})\|&\leq& 2\left(\frac{\lambda \gamma \max_i g_i}{\min_i g_i}+\mu\right)\|y-\tilde{y}\| \end{eqnarray} which proves that $f(y)$ is Lipschitz with respect to Euclidean norm in $\mathbb{R}^{K}$. \end{proof} \end{proposition}
\begin{proposition}[Drift is Asymptotically Close to Lipschitz Drift]\label{drift} Under the assumptions of Theorem \ref{fluid_limit}, we have for any $\epsilon>0$ and $s\geq 0$,
$$\lim_{N\rightarrow \infty}P(|F^N(Y^{N}(s))-f(Y^{N}(s))|>\epsilon)= 0.$$ \begin{proof} \begin{eqnarray}
\left| F^N(Y^{N}(s))-f(Y^{N}(s))\right|&=&\left|\sum_{k=0}^{K-1}\left(\frac{N^*}{N}-\gamma\right)\frac{Y^{N}(s)(k)g_k}{\sum_{i=0}^{K-1}Y^{N}(s)(i)g_i}(\mathbf{1}_{K-1}-\mathbf{1}_{k})\right|\nonumber\\
&\leq & 2\left|\frac{N^*}{N}-\gamma\right|\left|\sum_{k=0}^{K-1}\frac{\max_i g_i}{\min_i g_i} \right|\nonumber\\
&=& 2\frac{K\max_i g_i}{\min_i g_i} \left|\frac{N^*}{N}-\gamma\right| \rightarrow 0 \end{eqnarray} \end{proof} \end{proposition}
\subsection*{Proof of Central Limit Results}
\begin{lemma}\label{martingale_brackets} $\sqrt{N}M^{N}(t)$ is a family of martingales independent of $D^{N}(0)$ with Doob-Meyer brackets given by \begin{eqnarray} & &\boldlangle \sqrt{N}M^{N}_k(t),\sqrt{N}M^{N}_j(t)\boldrangle\nonumber\\ &=&\begin{cases} \int_{0}^{t}\left[\sum_{i=\max(K_U,k+1)}^{K-1}\mu p_{i,k}Y^N_i(s) +\left(\mu(1-p_{k,k})\mathbf{1}_{\{k\geq K_U\}}\right.\right. & \nonumber\\ +\left.\left.\lambda N^*\frac{ g_k}{N\sum_{i=0}^{K-1}Y^N_i(s)g_i} \right)Y^N_k(s)\right]ds, & k=j<K-1 \nonumber\\ \int_{0}^{t}\left[\mu(1-p_{K-1,K-1})Y_{K-1}^N(s)+\lambda \frac{N^*}{N}\left(1- \frac{g_{K-1}Y_{K-1}^N(s)}{\sum_{i=0}^{K-1}Y_i^N(s)g_i}\right)\right]ds, & k=j=K-1\\ -\int_{0}^{t} \left[\mu p_{k,j}Y^N_k(s)+\lambda N^*\frac{ g_j}{N\sum_{i=0}^{K-1}Y_i^N(s)g_i}Y_j^N(s)\mathbf{1}\{k=K-1\}\right]ds, & j<k, k\geq K_U\nonumber\\ 0. & \text{otherwise} \end{cases} \end{eqnarray} \end{lemma}
\begin{proof}[Proof of Lemma \ref{martingale_brackets}] By Dynkin's formula, \begin{eqnarray}
\boldlangle \sqrt{N}M_k^{N}(t)\boldrangle&=&\int_{0}^{t}N\sum_{x\neq Y^{N}(s)}|x-Y^{N}(s)|^2 Q(Y^{N}(s),x)(k)ds\nonumber\\ &=&\int_{0}^{t}N\alpha_k(Y^N(s))ds\nonumber\\ &=&\begin{cases} \int_{0}^{t}\left[\sum_{i=\max(K_U,k+1)}^{K-1}\mu p_{i,k}Y^N_i(s) +\left(\mu(1-p_{k,k})\mathbf{1}_{\{k\geq K_U\}}\right.\right.\nonumber\\ +\left.\left.\frac{\lambda N^* g_k}{N\sum_{i=0}^{K-1}Y^N_i(s)g_i} \right)Y^N_k(s)\right]ds, & k<K-1\nonumber\\ \int_{0}^{t}\left[\mu(1-p_{K-1,K-1})Y^N_{K-1}(s)+\lambda \frac{N^*}{N}\left(1- \frac{g_{K-1}Y^N_{K-1}(s)}{\sum_{i=0}^{K-1}Y^N_i(s)g_i}\right)\right]ds, & k=K-1 \end{cases}\\ &\triangleq &\int_{0}^{t}(F^N_{+}(Y^{N}(s))(k)+F^N_{-}(Y^{N}(s))(k))ds \end{eqnarray} where $$F^N_{+}(Y^{N}(s))(k)=\begin{cases} \sum_{i=\max(K_U,k+1)}^{K-1}\mu p_{i,k}Y^N_i(s), & k<K-1 \\ \lambda \frac{N^*}{N}\left(1- \frac{g_{K-1}Y^N_{K-1}(s)}{\sum_{i=0}^{K-1}Y^N_i(s)g_i}\right), & k=K-1 \end{cases}$$ and $$F^N_{-}(Y^{N}(s))(k)=\begin{cases} \left(\mu(1-p_{k,k})\mathbf{1}_{\{k\geq K_U\}}+\frac{\lambda N^* g_k}{N\sum_{i=0}^{K-1}Y^N_i(s)g_i} \right)Y^N_k(s), & k<K-1 \\ \mu(1-p_{K-1,K-1})Y^N_{K-1}(s), & k=K-1 \end{cases}$$ To compute $\boldlangle \sqrt{N}M_k^{N}(t),\sqrt{N}M_j^{N}(t)\boldrangle$ for $j<k$ and $k\geq K_U$, since \begin{equation} \begin{split} &\boldlangle M_k^{N}(t)+M_j^{N}(t)\boldrangle\\
=&\int_{0}^{t}\sum_{x\neq Y^{N}(s)}\left|x_k+x_j-Y^{N}(s)(k)-Y_{j}^{N}(s)\right|^2 Q(Y^{N}(s),x)ds\\ =&\frac{1}{N}\int_{0}^{t}\left[\mu\left(\sum_{i=\max(j+1,K_U),i\neq k}^{K-1}p_{i,j}+\sum_{i=k+1}^{K-1}p_{i,k}\right)Y^N_i(s) +\left(\mu(1-p_{j,j})+\frac{\lambda \gamma g_j}{\sum_{i=0}^{K-1}Y^N_i(s)g_i} \right)Y^N_j(s) \right.\\ & \left. +\left(\mu(1-p_{k,k}-p_{k,j})+\frac{\lambda \gamma g_k}{\sum_{i=0}^{K-1}Y^N_i(s)g_i} \right)Y^N_k(s)+\lambda \gamma \{k=K-1\}\right]ds \end{split} \end{equation} We have that \begin{equation} \begin{split} &\boldlangle \sqrt{N}M_k^{N}(t),\sqrt{N}M_j^{N}(t)\boldrangle\\ =&\frac{N}{2}\left[\boldlangle M_k^{N}(t)+M_j^{N}(t)\boldrangle-\boldlangle M_k^{N}(t)\boldrangle-\boldlangle M_j^{N}(t)\boldrangle\right]\\ =&\frac{1}{2}\int_{0}^{t}\left[\mu\left(\sum_{i=\max(j+1,K_U),i\neq k}^{K-1}p_{i,j}+\sum_{i=k+1}^{K-1}p_{i,k}\right)Y^N_i(s) +\left(\mu(1-p_{j,j})+\frac{\lambda \gamma g_j}{\sum_{i=0}^{K-1}Y^N_i(s)g_i} \right)Y^N_j(s) \right.\\ & \left. +\left(\mu(1-p_{k,k}-p_{k,j})+\frac{\lambda \gamma g_k}{\sum_{i=0}^{K-1}Y^N_i(s)g_i} \right)Y^N_k(s)+\lambda \gamma \{k=K-1\}\right]ds\\ &-\frac{1}{2}\int_{0}^{t}(F^N_{+}(Y^{N}(s))(k)+F^N_{+}(Y^{N}(s))(j)+F^N_{-}(Y^{N}(s))(k)+F^N_{-}(Y^{N}(s))(j))ds\\ =&-\int_{0}^{t} \left[\mu p_{k,j}Y^N_k(s)+\lambda N^*\frac{ g_j}{N\sum_{i=0}^{K-1}Y_i^N(s)g_i}Y_j^N(s)\mathbf{1}\{k=K-1\}\right] ds. \end{split} \end{equation} Finally, $M_k^N$ and $M_j^N$ are independent when $j,k\leq K_U$, thus in this case the Doob-Meyer brackets is equal to 0. \end{proof}
\begin{proposition}\label{driftbound} For any $s\geq 0$, \begin{equation}
\limsup_{N\rightarrow \infty}\sqrt{N}\left|F^N(Y^{N}(s))-f(Y^{N}(s))\right|=0, \end{equation} where $F^N$ is the drift function defined in Equation (\ref{F^N}).
\end{proposition}
\begin{proof}[Proof of Proposition \ref{driftbound}] \begin{eqnarray}
& &\limsup_{N\rightarrow \infty}\sqrt{N}\left|F^N(Y^{N}(s))-f(Y^{N}(s))\right|\nonumber\\
&=&\limsup_{N\rightarrow \infty}\sqrt{N}\left|\sum_{k=0}^{K-1}\left(\frac{N^*}{N}-\gamma\right)\frac{Y^{N}_{k}(s)g_k}{\sum_{i=0}^{K-1}Y^{N}_{i}(s)g_i}(\mathbf{1}_{K-1}-\mathbf{1}_{k})\right|\nonumber\\
&\leq & \limsup_{N\rightarrow \infty}2\sqrt{N}\left|\frac{N^*}{N}-\gamma\right|\left|\sum_{k=0}^{K-1}\frac{\max_i g_i}{\min_i g_i} \right|\nonumber\\
&=& \limsup_{N\rightarrow \infty}2\sqrt{N}\frac{K\max_i g_i}{\min_i g_i} \left|\frac{N^*}{N}-\gamma\right| \nonumber\\ &=&0 \end{eqnarray} \end{proof}
\begin{lemma}[Finite Horizon Bound]\label{L2bound}
For any $T\geq 0$, if $$\limsup_{N\rightarrow \infty}\mathbb{E}\left(|D^{N}(0)|^2 \right) < \infty ,$$ then we have $$\limsup_{N\rightarrow \infty}\mathbb{E}\left(\sup_{0\leq t\leq T}|D^{N}(t)|^2 \right) < \infty .$$ \end{lemma}
\begin{proof}[Proof of Lemma \ref{L2bound}]
By Proposition \ref{driftbound}, $\sqrt{N}|F^N(Y^{N}(s))-f(Y^{N}(s))|=O(1)$, and hence \begin{equation} \begin{split}
|D^{N}(t)|&\leq |D^{N}(0)|+\sqrt{N}|M^{N}(t)|+O(1)t+\int_{0}^{t}\sqrt{N} |f(Y^{N}(s))-f(y(s))|ds\\
&\leq|D^{N}(0)|+\sqrt{N}|M^{N}(t)|+O(1)t+\int_{0}^{t}\sqrt{N}L|Y^{N}(s)-y(s)|ds\\
&=|D^{N}(0)|+\sqrt{N}|M^{N}(t)|+O(1)t+\int_{0}^{t}L|D^{N}(s)|ds. \end{split} \end{equation} By Gronwall's Lemma,
$$\sup_{0\leq t\leq T}|D^{N}(t)|\leq e^{LT}\left(|D^{N}(0)|+O(1)T+\sup_{0\leq t\leq T}|\sqrt{N}M^{N}(t)|\right),$$ then
$$\limsup_{N\rightarrow \infty}\mathbb{E}\left(\sup_{0\leq t \leq T}|D^{N}(t)|^2\right)\leq 3e^{2LT}\left[\limsup_{N\rightarrow \infty}\mathbb{E}\left(|D^{N}(0)|^2\right)+O(1)T^2+\limsup_{N\rightarrow \infty}N\mathbb{E}\left(\sup_{0\leq t\leq T}|M^{N}(t)|^2\right)\right],$$ where we have used the elementary inequality $(a+b+c)^2\leq 3(a^2+b^2+c^2)$. We know that
$$N\mathbb{E}\left(\sup_{0\leq t\leq T}|M^{N}(t)|^2\right)\leq 4NA(N)T,$$ and that $A(N)=O(\frac{1}{N})$. Therefore
$$\limsup_{N\rightarrow \infty}N\mathbb{E}\left(\sup_{0\leq t\leq T}|M^{N}(t)|^2 \right)<\infty.$$
Together with our assumption that $\limsup_{N\rightarrow \infty}\mathbb{E}(|D^{N}(0)|^2)<\infty$, we have
$$\limsup_{N\rightarrow \infty}\mathbb{E}\left(\sup_{0\leq t \leq T}|D^{N}(t)|^2 \right)<\infty.$$ \end{proof}
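The Gronwall step in the proof above can be checked numerically. The following sketch (an illustration with assumed constants, not part of the proof) drives a deterministic $u$ at the extremal rate allowed by an integral inequality of the form $u(t)\le a+bt+L\int_0^t u(s)\,ds$ and verifies the resulting exponential bound $\sup_{0\le t\le T}u(t)\le e^{LT}(a+bT)$.

```python
import numpy as np

# Numeric sanity check (an illustration, not part of the proof) of the Gronwall
# step: if u(t) <= a + b*t + L * int_0^t u(s) ds on [0, T], then
# sup_{0<=t<=T} u(t) <= e^{LT} * (a + b*T).
a, b, L, T = 1.0, 0.5, 2.0, 1.0
n = 100_000
dt = T / n
u = np.empty(n + 1)
u[0] = a
integral = 0.0
for i in range(n):
    # drive u at the extremal rate allowed by the integral inequality
    integral += u[i] * dt
    u[i + 1] = a + b * (i + 1) * dt + L * integral

assert u.max() <= np.exp(L * T) * (a + b * T)
```

The recursion saturates the inequality (it discretizes $u' = b + Lu$), so it probes the bound at its worst case on the grid.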
\begin{lemma}\label{tightness} If $(D^{N}(0))_{N=1}^{\infty}$ is tight then $(D^{N})_{N=1}^{\infty}$ is tight and its limit points are continuous. \end{lemma} \begin{proof}[Proof of Lemma \ref{tightness}]
To prove the tightness of $(D^{N})_{N=1}^{\infty}$ and the continuity of its limit points, we apply results from \citet{billingsley2013convergence}, which reduce the task to verifying the following two conditions for each $T>0$ and $\epsilon>0$: \begin{itemize} \item[(i)] \begin{equation}
\lim_{K\rightarrow \infty}\limsup_{N\rightarrow \infty}\mathbb{P}\left(\sup_{0\leq t\leq T}|D^{N}(t)|>K \right)=0, \end{equation} \item[(ii)] \begin{equation} \lim_{\delta\rightarrow 0}\limsup_{N\rightarrow \infty}\mathbb{P}\left(w(D^{N},\delta,T)\geq \epsilon \right)=0 \end{equation} \end{itemize} where for $x\in \mathbb{D}^{d}$, \begin{equation}
w(x,\delta,T)=\sup\left\{\sup_{u,v\in[t,t+\delta]}|x(u)-x(v)|:0\leq t\leq t+\delta\leq T\right\}. \end{equation} By Lemma \ref{L2bound}, there exists $C_{0}>0$ such that \begin{eqnarray}
\lim_{K\rightarrow \infty}\limsup_{N\rightarrow \infty}\mathbb{P} \left(\sup_{0\leq t\leq T}|D^{N}(t)|>K \right) &\leq& \lim_{K\rightarrow \infty}\limsup_{N\rightarrow \infty}\frac{\mathbb{E}\left(\sup_{0\leq t\leq T}|D^{N}(t)|^2 \right)}{K^2} \\ &\leq& \lim_{K\rightarrow \infty}\frac{C_{0}}{K^2} \\ &=&0, \end{eqnarray} which proves condition (i).
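Condition (ii) is stated in terms of the modulus of continuity $w(x,\delta,T)$. For intuition, the following sketch (an illustrative computation with assumed data, not part of the proof) evaluates $w$ for a path sampled on a uniform grid, using a random-walk path whose jumps mimic those of $\sqrt{N}M^{N}$.

```python
import numpy as np

# Illustration (assumed data, not from the paper) of the modulus of continuity
# w(x, delta, T) = sup { sup_{u,v in [t,t+delta]} |x(u)-x(v)| : 0 <= t <= t+delta <= T }
# for a path sampled on a uniform grid of step dt.
def modulus(x, dt, delta):
    k = int(round(delta / dt))  # window width in grid steps
    best = 0.0
    for i in range(len(x) - k):
        window = x[i:i + k + 1]
        best = max(best, window.max() - window.min())
    return best

rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
# random-walk path with jumps of size sqrt(dt), mimicking sqrt(N) * M^N(t)
path = np.cumsum(rng.choice([-1.0, 1.0], size=n + 1)) * np.sqrt(dt)

# oscillation over short windows is dominated by oscillation over longer ones
assert modulus(path, dt, 0.01) <= modulus(path, dt, 0.1)
```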
For condition (ii), we have that
\begin{eqnarray} D^N(u) - D^N(v) &=& \underbrace{\sqrt{N} \cdot ( M^N(u) - M^N(v))}_{\text{first term}} + \underbrace{\int^{u}_{v} \sqrt{N} \left( F^N(Y^N(z)) - f(Y^N(z)) \right) dz}_{\text{second term}} \nonumber \\&+& \underbrace{\int^{u}_{v} \sqrt{N} \left( f(Y^N(z)) - f(y(z)) \right) dz }_{\text{third term}} \end{eqnarray} for any $0\leq t\leq v\leq u\leq t+\delta\leq T$. It therefore suffices to show that each of the three terms of $D^N(u) - D^N(v)$ satisfies condition (ii).
For the first term,
similarly to the proof of Proposition~\ref{drift}, we can show that
$$\sup_{t\leq T}\left|F^N_{+}(Y^N(t))-f_{+}(Y^N(t))\right|\xrightarrow{p}0, \quad \sup_{t\leq T}\left|F^N_{-}(Y^N(t))-f_{-}(Y^N(t))\right|\xrightarrow{p}0.$$ By the proof of Proposition \ref{Lipschitz}, $f_{+}$ and $f_{-}$ are also Lipschitz with constant $L$ (the composition of Lipschitz functions is again Lipschitz), so \begin{eqnarray}
\max\left\{\sup_{t\leq T}|f_{+}(Y^N(t))-f_{+}(y(t))|,\sup_{t\leq T}|f_{-}(Y^N(t))-f_{-}(y(t))|\right\}\leq L\sup_{t\leq T}|Y^N(t)-y(t)|. \end{eqnarray} By Theorem \ref{fluid_limit}, \begin{equation}
\sup_{t\leq T}|Y^N(t)-y(t)|\xrightarrow{p} 0. \end{equation} Thus for any $\epsilon>0$, \begin{eqnarray}
& &\lim_{N\rightarrow \infty}\mathbb{P}\left(\sup_{t\leq T}\left|\boldlangle \sqrt{N}M_k^N(t)\boldrangle- \boldlangle M_k(t)\boldrangle\right|>\epsilon\right)\nonumber \\
&=&\lim_{N\rightarrow \infty}\mathbb{P}\left(\sup_{t\leq T}\left|\int_{0}^{t}\left(F^N_{+}(Y^{N}(s))+F^N_{-}(Y^{N}(s))- f_{+}(y(s))-f_{-}(y(s))\right)ds\right|>\epsilon\right)\nonumber\\
&\leq & \lim_{N\rightarrow \infty}\mathbb{P}\left(\sup_{t\leq T}T\left|F^N_{+}(Y^N(t))- f_{+}(Y^{N}(t))\right|>\epsilon/3\right)+\lim_{N\rightarrow \infty}\mathbb{P}\left(\sup_{t\leq T}T\left|F^N_{-}(Y^N(t))- f_{-}(Y^{N}(t))\right|>\epsilon/3\right)\nonumber\\
& &+\lim_{N\rightarrow \infty}\mathbb{P}\left(\sup_{t\leq T}2LT\left|Y^N(t)- y(t)\right|>\epsilon/3 \right)\nonumber \\ &=& 0, \end{eqnarray} which implies \begin{equation}\label{martingale_brackets_convergence}
\sup_{t\leq T}\left|\boldlangle \sqrt{N}M^N_k(t)\boldrangle- \boldlangle M_k(t)\boldrangle\right|\xrightarrow{p} 0. \end{equation} We also know that the jump size of $D^{N}(t)$ is $1/\sqrt{N}$, therefore \begin{equation}
\lim_{N\rightarrow \infty}\mathbb{E}\left[\sup_{0<t\leq T}\left|M^{N}(t)-M^{N}(t-)\right| \right]=0. \end{equation} By Theorem 1.4 in Chapter 7 of \citet{Ethier2009}, $\sqrt{N}M^{N}(t)$ converges in distribution to the Brownian motion $M(t)$ in $\mathbb{D}(\mathbb{R}_{+},\mathbb{R}^{K+1})$. By Prohorov's theorem, $(\sqrt{N}M^{N})_{N=1}^{\infty}$ is tight, and since $M(t)$ is a Brownian motion, its sample paths are almost surely continuous.
For the second term, we have by Proposition \ref{driftbound} that the quantity $ \sqrt{N} \left( F^N(Y^N(z)) - f(Y^N(z)) \right) $ is bounded for any value of $z\in [0,T]$. Therefore, there exists some constant $C_{1}$ that does not depend on $N$ such that \begin{equation}
\sup_{z\in [0,T]}\sqrt{N} \left| F^N(Y^N(z)) - f(Y^N(z)) \right|\leq C_{1}. \end{equation} Then \begin{eqnarray}
& &\lim_{\delta\rightarrow 0}\lim_{N\rightarrow \infty}\mathbb{P}\left(\sup_{u,v\in [0,T],|u-v|\leq \delta}\int^{u}_{v} \sqrt{N} \left| F^N(Y^N(z)) - f(Y^N(z)) \right| dz > \epsilon \right)\nonumber \\
&\leq & \lim_{\delta\rightarrow 0}\lim_{N\rightarrow \infty}\mathbb{P}\left(\delta \sup_{z\in [0,T]}\sqrt{N} \left| F^N(Y^N(z)) - f(Y^N(z)) \right| > \epsilon \right)\nonumber \\ &\leq & \lim_{\delta\rightarrow 0}\mathbb{P}\left(\delta C_{1} > \epsilon \right)\nonumber \\ &=& 0. \end{eqnarray}
Thus, we have proved the oscillation bound for the second term. Finally for the third term we have that \begin{eqnarray}
\int^{u}_{v} \sqrt{N} \left| f(Y^N(z)) - f(y(z)) \right| dz & \leq& \int^{u}_{v} \sqrt{N} L\left| Y^N(z) - y(z) \right| dz\nonumber \\
&=& \int^{u}_{v} L \cdot \left|D^N(z)\right| dz \nonumber\\
&\leq & L\delta \sup_{t\in [0,T]}|D^{N}(t)|. \end{eqnarray} By Lemma~\ref{L2bound}, \begin{eqnarray}
& &\lim_{\delta\rightarrow 0}\lim_{N\rightarrow \infty}\mathbb{P}\left(\sup_{u,v\in [0,T],|u-v|\leq \delta}\int^{u}_{v} \sqrt{N} \left| f(Y^N(z)) - f(y(z)) \right| dz>\epsilon\right)\nonumber\\
&\leq &\lim_{\delta\rightarrow 0}\lim_{N\rightarrow \infty}\mathbb{P}\left(L\delta\sup_{t\in [0,T]}|D^{N}(t)|>\epsilon\right)\nonumber\\
&\leq & \lim_{\delta\rightarrow 0}\lim_{N\rightarrow \infty}\frac{\mathbb{E}\left(\sup_{t\in [0,T]}|D^{N}(t)|^2\right)}{(\epsilon/L \delta)^2}\nonumber \\ &\leq &\lim_{\delta\rightarrow 0}\frac{C_{0}(L\delta)^2}{\epsilon^2}\nonumber \\ &=& 0, \end{eqnarray}
which implies that the oscillation bound holds for the third term. \end{proof}
\end{document} |
\begin{document}
\title{A unifying theory of exactness of linear penalty functions II: parametric penalty functions}
\begin{abstract} In this article we develop a general theory of exact parametric penalty functions for constrained optimization problems. The main advantage of the method of parametric penalty functions is the fact that a parametric penalty function can be both smooth and exact, unlike the standard (i.e. non-parametric) exact penalty functions that are always nonsmooth. We obtain several necessary and/or sufficient conditions for the exactness of parametric penalty functions, and for the zero duality gap property to hold true for these functions. We also prove some convergence results for the method of parametric penalty functions, and derive necessary and sufficient conditions for a parametric penalty function to not have any stationary points outside the set of feasible points of the constrained optimization problem under consideration. In the second part of the paper, we apply the general theory of exact parametric penalty functions to a class of parametric penalty functions introduced by Huyer and Neumaier, and to smoothing approximations of nonsmooth exact penalty functions. The general approach adopted in this article allows us to unify and significantly sharpen many existing results on parametric penalty functions. \end{abstract}
\section{Introduction}
One of the main approaches to finding a local or global minimum of a constrained optimization problem consists of the reduction of this problem to a sequence of unconstrained optimization problems or to a single unconstrained optimization problem whose locally (or globally) optimal solutions coincide with locally (or globally) optimal solutions of the original problem. In turn, one of the main methods of the reduction of a constrained optimization problem to a single unconstrained optimization problem is the exact penalty method. The exact penalty method was proposed by Eremin \cite{Eremin} and Zangwill \cite{Zangwill} in the 1960s, and, later on, became a standard tool of constrained optimization \cite{Bertsekas,EvansGouldTolle,HanMangasarian,DiPilloGrippo,DiPilloGrippo2,DiPillo,ExactBarrierFunc, RubinovYang,WuBaiYang,Demyanov,Zaslavski,Dolgopolik}.
The exact penalty method allows one to replace a constrained optimization problem by an equivalent unconstrained optimization problem with necessarily nonsmooth objective function. However, the nonsmoothness of exact penalty functions makes them less attractive (especially for practitioners who are often not familiar with efficient methods for solving nonsmooth optimization problems) than other methods of constrained optimization. There exist two approaches to overcome this difficulty within the theory of exact penalty functions. The first one is based on the use of smoothing approximations of nonsmooth exact penalty functions \cite{Pinar,WuBaiYang,MengDangYang,Liu,LiuzziLucidi,MengLiYang,XuMengSunShen,Lian,XuMengSunHuangShen}, while the second approach was developed by Huyer and Neumaier in \cite{HuyerNeumaier}. Throughout this article, we refer to the penalty function proposed in \cite{HuyerNeumaier} as a \textit{singular exact penalty function}, since one achieves smoothness of this exact penalty function via the introduction of a singular term into the definition of this function. Recently, singular exact penalty functions have attracted a lot of attention of researchers \cite{Bingzhuang,WangMaZhou,Dolgopolik_OptLet,Dolgopolik_OptLet2}, and were successfully applied to various constrained optimization problems \cite{YuTeoZhang,LiYu,JianLin,MaLiYiu, YuTeoBai,LinWuYu,LinLoxton}. It should be noted that the main feature of both smoothing approximations of exact penalty functions and singular exact penalty functions is the fact that they depend on some additional parameters apart from the penalty parameter. Thus, both smoothing approximations and singular exact penalty functions are \textit{parametric penalty functions}.
The main goal of this article is to develop a general theory of exact \textit{parametric} penalty functions that allows one not only to unify and generalize existing results on parametric penalty functions, but also to better understand capabilities as well as limitations of the method of exact \textit{parametric} penalty functions. Apart from the general theory of exactness of parametric penalty functions, which can be viewed as an extension of the main results on exactness of standard (i.e. non-parametric) penalty function \cite{Dolgopolik} to the parametric case, we also present some other results closely related to the theory of parametric penalty functions. Namely, we obtain necessary and sufficient conditions for a parametric penalty function to not have any stationary (i.e. critical) points outside the set of feasible points of the initial constrained optimization problem. We also study the zero duality gap property, and obtain several results on convergence of the method of parametric penalty functions.
Note that parametric penalty functions can be viewed as a particular case of separation functions that are studied within the image space analysis (see~\cite{Giannessi_book,Mastroeni,LiFengZhang,ZhuLi,XuLi} and references therein). However, surprisingly, no results of this paper (even the necessary and sufficient condition for the zero duality gap property) can be derived from the general results on separation functions known to the author (see also Remark~\ref{Rmrk_ZeroDualityGap_ImageSpaceAnalysis} below).
The paper is organised as follows. In Section~\ref{Sect_Preliminaries} we present some preliminary material that is used throughout the article. A general theory of exact parametric penalty functions is developed in Section~\ref{Sect_ParametricPenaltyFunctions}. In Section~\ref{Section_SingPenFunc} we apply this theory to singular penalty functions. The main results of this section sharpen and/or generalize many existing results on singular penalty functions. Finally, in Section~\ref{Section_SmoothingPenFunc} we develop a theory of approximations of exact penalty functions within the framework of the theory of exact parametric penalty functions. The theory of approximations of exact penalty functions presented in Section~\ref{Section_SmoothingPenFunc} unifies and generalizes most of the existing results on smoothing approximations of nonsmooth exact penalty functions.
\section{Preliminaries} \label{Sect_Preliminaries}
In this section, we recall the notions of the rate of steepest descent/ascent of a function defined on a metric space (see, e.g., \cite{Demyanov,DemyanovRSD}), and derive some calculus rules for the rate of steepest descent/ascent that will be used throughout the article.
Let $(X, d)$ be a metric space, and $A \subset X$ be a nonempty set. Denote, as usual, $\overline{\mathbb{R}} = \mathbb{R} \cup \{ + \infty \} \cup \{ - \infty \}$ and $\mathbb{R}_+ = [0, + \infty)$. For any function $f \colon C \to \overline{\mathbb{R}}$ with $C \subset X$ being a nonempty set, denote
$\dom f = \{ x \in C \mid |f(x)| < + \infty \}$.
\begin{definition} Let $U \subset X$ be an open set, and $f \colon U \to \overline{\mathbb{R}}$ be a given function. For any $x \in \dom f \cap A$ the quantity $$
f^{\downarrow}_A(x) = \liminf_{y \to x, \: y \in A} \frac{f(y) - f(x)}{d(y, x)} $$ is called \textit{the rate of steepest descent} of the function $f$ with respect to the set $A$ at the point $x$, while the quantity $$
f^{\uparrow}_A(x) = \limsup_{y \to x, \: y \in A} \frac{f(y) - f(x)}{d(y, x)} $$ is called \textit{the rate of steepest ascent} of the function $f$ with respect to the set $A$ at the point $x$. If $x$ is an isolated point of the set $A$, then by definition $f^{\downarrow}_A(x) = + \infty$ and $f^{\uparrow}_A(x) = - \infty$. If $A = X$, then the quantity $f^{\downarrow}_X(x)$ is called \textit{the rate of steepest descent} of $f$ at $x$, and is denoted by $f^{\downarrow}(x)$, and the similar notation is used for \textit{the rate of steepest ascent} of $f$ at $x$. \end{definition}
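For intuition, the rates just defined can be approximated numerically in the simplest setting $X = \mathbb{R}$ with $d(y, x) = |y - x|$. The sketch below (an illustration with assumed tolerances, not part of the text) recovers $f^{\downarrow}(0) = f^{\uparrow}(0) = 1$ for $f(y) = |y|$, and $f^{\downarrow}(0) = -1$, $f^{\uparrow}(0) = 1$ for $f(y) = y$.

```python
import numpy as np

# Numeric approximation (an illustration with assumed tolerances) of the rates
# of steepest descent/ascent on X = R with d(y, x) = |y - x|:
#   f_down(x) = liminf_{y -> x} (f(y) - f(x)) / d(y, x),
#   f_up(x)   = limsup_{y -> x} (f(y) - f(x)) / d(y, x).
def rates(f, x, eps=1e-8, m=1000):
    ys = x + np.concatenate([np.linspace(eps, 100 * eps, m),
                             -np.linspace(eps, 100 * eps, m)])
    q = (f(ys) - f(x)) / np.abs(ys - x)
    return q.min(), q.max()  # approximations of (f_down(x), f_up(x))

down, up = rates(np.abs, 0.0)       # f(y) = |y| at x = 0: both rates equal 1
assert abs(down - 1.0) < 1e-6 and abs(up - 1.0) < 1e-6
down, up = rates(lambda y: y, 0.0)  # f(y) = y at x = 0: rates are -1 and 1
assert abs(down + 1.0) < 1e-6 and abs(up - 1.0) < 1e-6
```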
\begin{remark}
It should be noted that the notion of the rate of steepest descent is very similar (but not identical) to the notion of the strong slope $|\nabla f|$ of a function $f$ defined on a metric space (cf., e.g., \cite{Ioffe,Aze}). Namely, it is easy to see that $f^{\downarrow}(x) < 0$ iff $|\nabla f| (x) > 0$, and in this case
$f^{\downarrow}(x) = - |\nabla f|(x)$. However, it should be noted that $|\nabla f|(x) \ge 0$ for any function $f$, while it is easy to provide an example of a function $f$ such that $f^{\downarrow}(x) > 0$. \end{remark}
Although there is no elaborate calculus for the rate of steepest descent/ascent, in some cases one can easily compute (or estimate) these quantities. Below, we present several results of this type (see also \cite{Aze}). If the proof of a statement is omitted, then it follows directly from definitions.
\begin{lemma} \label{Lemma_SimpleCases} Let $f \colon X \to \mathbb{R}$ be a given function. The following statements hold true: \begin{enumerate} \item{if $X$ is a normed space, and $f$ is Fr\'echet differentiable at a point $x \in X$, then
$f^{\downarrow}(x) = - \| f'(x) \|_{X^*}$ and $f^{\uparrow}(x) = \| f'(x) \|_{X^*}$, where $\| \cdot \|_{X^*}$ is the norm in the topological dual space $X^*$ of $X$; furthermore, if $x$ is a limit point of the set $A$, then
$|f^{\downarrow}_A (x)| \le \| f'(x) \|_{X^*}$ and $|f^{\uparrow}_A(x)| \le \| f'(x) \|_{X^*}$; }
\item{if $f$ is calm at a limit point $x$ of the set $A$ such that $f(x) = 0$, then for any $\alpha > 0$ one has $(f^{1 + \alpha})^{\downarrow}_A(x) = (f^{1 + \alpha})^{\uparrow}_A(x) = 0$; furthermore, if $X$ is a normed space, then the function $f^{1 + \alpha}$ is Fr\'echet differentiable at the point $x$ and $(f^{1 + \alpha})'(x) = 0$; }
\item{if $f(x) = d(x, x_0) + c$ in a neighbourhood of a limit point $x_0$ of the set $A$ with $c \in \mathbb{R}$, then $f^{\downarrow}_A(x_0) = f^{\uparrow}_A(x_0) = 1$. } \end{enumerate} \end{lemma}
\begin{lemma} \label{Lemma_SumRule} Let $f, g \colon X \to \overline{\mathbb{R}}$ be given functions. Then for any $x \in \dom f \cap \dom g$ such that the sum $f + g$ is correctly defined in a neighbourhood of $x$, and the sums $f^{\downarrow}_A (x) + g^{\downarrow}_A (x)$ and $f^{\uparrow}_A (x) + g^{\uparrow}_A (x)$ are correctly defined one has $$
(f + g)^{\downarrow}_A (x) \ge f^{\downarrow}_A(x) + g^{\downarrow}_A (x), \quad
(f + g)^{\uparrow}_A (x) \le f^{\uparrow}_A(x) + g^{\uparrow}_A (x). $$ \end{lemma}
\begin{lemma} \label{Lmm_SDD_SumEstimate} Let $f, g \colon X \to \overline{\mathbb{R}}$ be given functions. Then for any $x \in \dom f \cap \dom g$ such that the sum $f + g$ is correctly defined in a neighbourhood of $x$, and the sum $f^{\uparrow}_A (x) + g^{\downarrow}_A (x)$ is correctly defined one has $(f + g)^{\downarrow}_A (x) \le f^{\uparrow}_A (x) + g^{\downarrow}_A (x)$. \end{lemma}
\begin{lemma} \label{Lemma_RSD_Product} Let a function $f \colon (a, b) \to \mathbb{R}$ be differentiable at a point $t \in (a, b)$ such that $f(t) > 0$. Then for any function $g \colon (a, b) \to \overline{\mathbb{R}}$ such that $t \in \dom g$ one has \begin{gather} \label{RSD_Product}
(f \cdot g)^{\downarrow}(t) = \inf\big\{ f(t) g^{\downarrow}_{[t, b)}(t) + f'(t) g(t),
f(t) g^{\downarrow}_{(a, t]} (t) - f'(t) g(t) \big\}, \\
(f \cdot g)^{\uparrow}(t) = \sup\big\{ f(t) g^{\uparrow}_{[t, b)}(t) + f'(t) g(t),
f(t) g^{\uparrow}_{(a, t]} (t) - f'(t) g(t) \big\}. \label{RSA_Product} \end{gather} In particular, the following inequalities hold true $$
(f \cdot g)^{\downarrow}(t) \ge - |f'(t) g(t)| + f(t) g^{\downarrow}(t), \quad
(f \cdot g)^{\uparrow}(t) \le |f'(t) g(t)| + f(t) g^{\uparrow}(t). $$ \end{lemma}
\begin{proof} Let us prove the validity of~(\ref{RSD_Product}). Equality (\ref{RSA_Product}) is proved in a similar way.
It is easy to verify that \begin{equation} \label{RSD_MinLeftRight}
(f \cdot g)^{\downarrow}(t) = \inf\big\{ (f \cdot g)^{\downarrow}_{[t, b)}(t),
(f \cdot g)^{\downarrow}_{(a, t]}(t) \big\}. \end{equation} From the fact that the function $f$ is differentiable at the point $t$ it follows that $f$ is continuous at the point $t$, and for any sufficiently small $\Delta t \in \mathbb{R}$ one has \begin{multline*}
f(t + \Delta t) g(t + \Delta t) - f(t) g(t)
= f(t + \Delta t) (g(t + \Delta t) - g(t)) + (f(t + \Delta t) - f(t)) g(t) = \\
= f(t + \Delta t) (g(t + \Delta t) - g(t)) + f'(t) g(t) \Delta t + o(\Delta t) g(t), \end{multline*} where $o(\Delta t) / \Delta t \to 0$ as $\Delta t \to 0$. Therefore taking into account the facts that $f(t) > 0$, and $f$ is continuous at the point $t$ one gets that $$
(f \cdot g)^{\downarrow}_{[t, b)}(t) =
\liminf_{\Delta t \to +0} \frac{f(t + \Delta t) g(t + \Delta t) - f(t) g(t)}{\Delta t}
= f(t) g^{\downarrow}_{[t, b)}(t) + f'(t) g(t) $$ and $$
(f \cdot g)^{\downarrow}_{(a, t]}(t) =
\liminf_{\Delta t \to -0} \frac{f(t + \Delta t) g(t + \Delta t) - f(t) g(t)}{|\Delta t|} =
f(t) g^{\downarrow}_{(a, t]}(t) - f'(t) g(t). $$ Hence with the use of (\ref{RSD_MinLeftRight}) one obtains the desired result. \end{proof}
\begin{lemma} \label{Lemma_Superpos} Let $f \colon [0, + \infty] \to [0, + \infty]$ be a non-decreasing function, and $g \colon X \to [0, + \infty]$ be a given function. Let $x \in \dom g$ be such that $g(x) > 0$, $- \infty < g^{\downarrow}_A (x) < 0$, the function $f$ is continuously differentiable at the point $g(x)$, and $f'(g(x)) > 0$. Then \begin{equation} \label{Compos_RSD}
\big[ f(g(\cdot)) \big]^{\downarrow}_A (x) = f'(g(x)) g^{\downarrow}_A (x) < 0. \end{equation} Similarly, if $0 < g^{\uparrow}_A (x) < + \infty$, then \begin{equation} \label{Compos_RSA}
\big[ f(g(\cdot)) \big]^{\uparrow}_A (x) = f'(g(x)) g^{\uparrow}_A (x) > 0 \end{equation} \end{lemma}
\begin{proof} Let us prove the validity of~(\ref{Compos_RSD}). Equality~(\ref{Compos_RSA}) is proved in a similar way.
From the definition of limit inferior it follows that there exists a sequence $\{ x_n \} \subset A$ converging to $x$, and such that $$
\lim_{n \to \infty} \frac{g(x_n) - g(x)}{d(x_n, x)} = g^{\downarrow}_A (x). $$ Observe that $g(x_n) \to g(x)$ as $n \to \infty$, since $g^{\downarrow}_A(x)$ is finite. Therefore applying the fact that the function $f$ is differentiable at the point $g(x)$ one gets that there exists a function $\omega \colon \mathbb{R} \to \mathbb{R}$ such that $\omega(0) = 0$, $\omega(t) \to 0$ as $t \to 0$ and $$
f(g(x_n)) - f(g(x)) = f'(g(x)) \big( g(x_n) - g(x) \big) + \omega\big( g(x_n) - g(x) \big) |g(x_n) - g(x)| $$ for all $n \in \mathbb{N}$. Dividing both sides of the last equality by $d(x_n, x)$, and passing to the limit as $n \to \infty$ one obtains that \begin{equation} \label{CompositionRSD_UpperEstimate}
\liminf_{n \to \infty} \frac{f(g(x_n)) - f(g(x))}{d(x_n, x)} = f'(g(x)) g^{\downarrow}_A (x) < 0. \end{equation} Hence $[f(g(\cdot))]^{\downarrow}_A(x) \le f'(g(x)) g^{\downarrow}_A (x) < 0$.
Let, now, $\{ x_n \} \subset A$ be a sequence converging to $x$, and such that $$
\lim_{n \to \infty} \frac{f(g(x_n)) - f(g(x))}{d(x_n, x)} = \big[ f(g(\cdot)) \big]^{\downarrow}_A(x). $$ Since $[f(g(\cdot))]^{\downarrow}_A(x) < 0$, without loss of generality one can suppose that $f(g(x_n)) < f(g(x))$ for all $n \in \mathbb{N}$. Therefore taking into account the fact that the function $f$ is non-decreasing one obtains that $g(x_n) < g(x)$ for any $n \in \mathbb{N}$. Note that $g(x_n) \to g(x)$ as $n \to \infty$, since otherwise $$
g^{\downarrow}_A(x) \le \liminf_{n \to \infty} \frac{g(x_n) - g(x)}{d(x_n, x)} = - \infty, $$ which contradicts the assumption of the lemma. Consequently, by the mean value theorem for any sufficiently large $n \in \mathbb{N}$ there exists $\theta_n \in [g(x_n), g(x)]$ such that $f(g(x_n)) - f(g(x)) = f'(\theta_n) (g(x_n) - g(x))$. Applying the fact that $f$ is continuously differentiable at the point $g(x)$ one gets that for any $\varepsilon > 0$ there exists $n_0 \in \mathbb{N}$ such that $0 < f'(\theta_n) < f'(g(x)) + \varepsilon$ for any $n \ge n_0$. Hence for any $n \ge n_0$ one has $$
f(g(x_n)) - f(g(x)) = f'(\theta_n) \big( g(x_n) - g(x) \big) \ge (f'(g(x)) + \varepsilon) \big( g(x_n) - g(x) \big). $$ Dividing the last inequality by $d(x_n, x)$, and passing to the limit inferior as $n \to \infty$ one obtains that \begin{multline*}
\big[ f(g(\cdot)) \big]^{\downarrow}_A(x) = \lim_{n \to \infty} \frac{f(g(x_n)) - f(g(x))}{d(x_n, x)} \ge \\
\ge (f'(g(x)) + \varepsilon) \liminf_{n \to \infty} \frac{g(x_n) - g(x)}{d(x_n, x)} \ge
(f'(g(x)) + \varepsilon) g^{\downarrow}_A(x) \end{multline*} for any $\varepsilon > 0$. Hence and from (\ref{CompositionRSD_UpperEstimate}) one obtains the desired result. \end{proof}
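The chain rule (\ref{Compos_RSD}) can be sanity-checked numerically on a toy instance (the data below are assumed for illustration only): on $X = \mathbb{R}$ take $g(y) = 1 - y$ at $x = 0$, so $g(0) = 1 > 0$ and $g^{\downarrow}(0) = -1$, and $f(t) = t^2$, which is non-decreasing on $[0, +\infty)$ with $f'(g(0)) = 2$; the predicted rate is $2 \cdot (-1) = -2$.

```python
import numpy as np

# Sanity check (an assumed toy instance) of the chain rule
# [f(g(.))]^down(x) = f'(g(x)) * g^down(x): X = R, g(y) = 1 - y at x = 0,
# so g(0) = 1 > 0 and g^down(0) = -1, and f(t) = t^2, non-decreasing on
# [0, +inf) with f'(g(0)) = 2; the predicted composite rate is -2.
def rate_down(h, x, eps=1e-8, m=1000):
    ys = x + np.concatenate([np.linspace(eps, 100 * eps, m),
                             -np.linspace(eps, 100 * eps, m)])
    return ((h(ys) - h(x)) / np.abs(ys - x)).min()

g = lambda y: 1.0 - y
comp = lambda y: g(y) ** 2  # f(g(y)) with f(t) = t^2

assert abs(rate_down(g, 0.0) + 1.0) < 1e-5
assert abs(rate_down(comp, 0.0) + 2.0) < 1e-4
```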
Let $(X, d_X)$ and $(Y, d_Y)$ be metric spaces. Hereinafter, we suppose that the Cartesian product $X \times Y$ is endowed with the metric $$
d\big( (x_1, y_1), (x_2, y_2) \big) = d_X(x_1, x_2) + d_Y(y_1, y_2). $$ It is readily seen that the following result holds true.
\begin{lemma} \label{Lemma_PartialRSD} Let $A_X \subset X$ and $A_Y \subset Y$ be nonempty sets, and $A = A_X \times A_Y$. Then for any function $f \colon X \times Y \to \overline{\mathbb{R}}$ and for any $(x, y) \in \dom f \cap A$ one has \begin{gather*}
f^{\downarrow}_A (x, y) \le
\inf\big\{ f(\cdot, y)^{\downarrow}_{A_X}(x), f(x, \cdot)^{\downarrow}_{A_Y}(y) \big\}, \\
f^{\uparrow}_A (x, y) \ge
\sup\big\{ f(\cdot, y)^{\uparrow}_{A_X}(x), f(x, \cdot)^{\uparrow}_{A_Y}(y) \big\}. \end{gather*} \end{lemma}
The lemma above furnishes the upper estimate of the rate of steepest descent $f^{\downarrow}_A (x, y)$ via the ``partial'' rates of steepest descent $f(\cdot, y)^{\downarrow}_{A_X}(x)$ and $f(x, \cdot)^{\downarrow}_{A_Y}(y)$. In the case when $X$ and $Y$ are normed spaces, and the function $f$ is Fr\'echet differentiable, one can obtain the lower estimate of the rate of steepest descent $f^{\downarrow}_A (x, y)$ via the ``partial'' rates of steepest descent.
\begin{theorem} \label{Thrm_RSDLowerEstimViaPartialRSD} Let $X$ and $Y$ be normed spaces, $A_X \subset X$ and $A_Y \subset Y$ be nonempty sets, and $A = A_X \times A_Y$. Let also $U_x \subset X$ be a neighbourhood of a point $x \in A_X$, and $U_y \subset Y$ be a neighbourhood of a point $y \in A_Y$. Suppose that a function $f \colon U_x \times U_y \to \mathbb{R}$ is Fr\'echet differentiable at $(x, y)$, and $g \colon U_x \to \mathbb{R}$ and $h \colon U_y \to \mathbb{R}$ are given functions. Then the function $F(x, y) = f(x, y) + g(x) + h(y)$ is defined in a neighbourhood of the point $(x, y)$ and \begin{equation} \label{RSDLowerEstimViaPartialRSD}
F^{\downarrow}_A(x, y) \ge 2 \cdot \inf\big\{ 0, f(\cdot, y)^{\downarrow}_{A_X}(x) + g^{\downarrow}_{A_X}(x),
f(x, \cdot)^{\downarrow}_{A_Y}(y) + h^{\downarrow}_{A_Y}(y) \big\}. \end{equation} \end{theorem}
\begin{proof} If $(x, y)$ is an isolated point of the set $A$, then $F^{\downarrow}_A(x, y) = + \infty$, and inequality (\ref{RSDLowerEstimViaPartialRSD}) is valid. Therefore one can suppose that $(x, y)$ is a limit point of the set $A$.
By the definition of rate of steepest descent there exists a sequence $\{ (x_n, y_n) \} \subset A$ converging to the point $(x, y)$, and such that \begin{equation} \label{RSDDefInProductSpace}
\lim_{n \to \infty} \frac{F(x_n, y_n) - F(x, y)}{d_X(x_n, x) + d_Y(y_n, y)} = F^{\downarrow}_A(x, y). \end{equation} Note that if $x_n = x$ for all but finitely many $n \in \mathbb{N}$, then taking into account Lemma~\ref{Lemma_SumRule} one obtains that $$
F^{\downarrow}_A(x, y) = \lim_{n \to \infty} \frac{F(x, y_n) - F(x, y)}{d_Y(y_n, y)} \ge
F(x, \cdot)^{\downarrow}_{A_Y}(y) \ge f(x, \cdot)^{\downarrow}_{A_Y}(y) + h^{\downarrow}_{A_Y}(y), $$ which implies the required result. Similarly, if $y_n = y$ for all but finitely many $n \in \mathbb{N}$, then applying Lemma~\ref{Lemma_SumRule} again one gets that $$
F^{\downarrow}_A(x, y) = F(\cdot, y)^{\downarrow}_{A_X}(x) \ge
f(\cdot, y)^{\downarrow}_{A_X}(x) + g^{\downarrow}_{A_X}(x), $$ and inequality (\ref{RSDLowerEstimViaPartialRSD}) holds true. Thus, replacing, if necessary, the sequence $\{ (x_n, y_n) \}$ by its subsequence, one can suppose that $x_n \ne x$ and $y_n \ne y$ for all $n \in \mathbb{N}$.
Taking into account the fact that the function $f$ is Fr\'echet differentiable at the point $(x, y)$ one obtains that \begin{equation} \label{DefOfFrechetDifferetiability}
\frac{1}{\alpha_n} (f(x_n, y_n) - f(x, y)) =
\frac{1}{\alpha_n} \frac{\partial f}{\partial x}(x, y)[x_n - x] +
\frac{1}{\alpha_n} \frac{\partial f}{\partial y}(x, y)[y_n - y] + c_n, \end{equation}
where $\alpha_n = \| x_n - x \|_X + \| y_n - y \|_Y$ and $c_n \to 0$ as $n \to \infty$. Similarly, one has $$
f(x_n, y) - f(x, y) = \frac{\partial f}{\partial x}(x, y)[x_n - x] + o(\|x_n - x\|_X), $$
where $o(\| x_n - x \|_X) / \| x_n - x \|_X \to 0$ as $n \to \infty$. Note that $$
\frac{1}{\alpha_n} |o(\| x_n - x \|_X)| =
\frac{\| x_n - x \|_X}{\alpha_n} \frac{|o(\| x_n - x \|_X)|}{\| x_n - x \|_X} \le
\frac{|o(\| x_n - x \|_X)|}{\| x_n - x \|_X}, $$
which implies that $o(\| x_n - x \|_X) / \alpha_n \to 0$ as $n \to \infty$. Consequently, one gets that $$
\frac{1}{\alpha_n} \frac{\partial f}{\partial x}(x, y)[x_n - x] = \frac{1}{\alpha_n} ( f(x_n, y) - f(x, y) ) + c^x_n
\quad \forall n \in \mathbb{N}, $$ where $c_n^x \to 0$ as $n \to \infty$. Arguing in the same way with $y$ in place of $x$, one obtains that $$
\frac{1}{\alpha_n} \frac{\partial f}{\partial y}(x, y)[y_n - y] = \frac{1}{\alpha_n} ( f(x, y_n) - f(x, y) ) + c^y_n
\quad \forall n \in \mathbb{N}, $$ where $c_n^y \to 0$ as $n \to \infty$. Hence and from (\ref{DefOfFrechetDifferetiability}) it follows that $$
\frac{1}{\alpha_n} (f(x_n, y_n) - f(x, y)) =
\frac{1}{\alpha_n} ( f(x_n, y) - f(x, y) ) + \frac{1}{\alpha_n} ( f(x, y_n) - f(x, y) ) + \sigma_n, $$ where $\sigma_n \to 0$ as $n \to \infty$. Therefore \begin{multline} \label{RSDreductionToPartialRSDInequal}
F^{\downarrow}_A(x, y) = \lim_{n \to \infty} \frac{F(x_n, y_n) - F(x, y)}{d_X(x_n, x) + d_Y(y_n, y)} =
\lim_{n \to \infty} \Bigg( \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\alpha_n} + \\
+ \frac{f(x, y_n) + h(y_n) - f(x, y) - h(y)}{\alpha_n} \Bigg) \ge
\liminf_{n \to \infty} \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\alpha_n} + \\
+ \liminf_{n \to \infty} \frac{f(x, y_n) + h(y_n) - f(x, y) - h(y)}{\alpha_n}. \end{multline} Replacing, if necessary, the sequence $\{ (x_n, y_n) \}$ by its subsequence one can suppose that $$
\liminf_{n \to \infty} \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\| x_n - x \|_X} =
\lim_{n \to \infty} \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\| x_n - x \|_X} $$ (note that equality (\ref{RSDDefInProductSpace}) remains valid if one replaces the sequence $\{ (x_n, y_n) \}$ by its subsequence). Replacing again, if necessary, the obtained sequence by its subsequence one can also suppose that $$
\liminf_{n \to \infty} \frac{f(x, y_n) + h(y_n) - f(x, y) - h(y)}{\| y_n - y \|_Y} =
\lim_{n \to \infty} \frac{f(x, y_n) + h(y_n) - f(x, y) - h(y)}{\| y_n - y \|_Y}. $$ If $$
\lim_{n \to \infty} \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\| x_n - x \|_X} > 0, $$ then there exists $n_0 \in \mathbb{N}$ such that $f(x_n, y) + g(x_n) - f(x, y) - g(x) > 0$ for all $n \ge n_0$. Hence $(f(x_n, y) + g(x_n) - f(x, y) - g(x)) / \alpha_n > 0$ for all $n \ge n_0$, and \begin{equation} \label{FirstCase_PartialRSD}
\liminf_{n \to \infty} \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\alpha_n} \ge 0. \end{equation} If $$
\lim_{n \to \infty} \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\| x_n - x \|_X} = 0, $$ then for any $\varepsilon > 0$ there exists $n_0 \in \mathbb{N}$ such that for any $n \ge n_0$ one has $$
\frac{|f(x_n, y) + g(x_n) - f(x, y) - g(x)|}{\alpha_n} \le
\frac{|f(x_n, y) + g(x_n) - f(x, y) - g(x)|}{\| x_n - x \|_X} < \varepsilon. $$ Therefore \begin{equation}
\lim_{n \to \infty} \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\alpha_n} = 0. \end{equation} Finally, if $$
\lim_{n \to \infty} \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\| x_n - x \|_X} < 0, $$ then there exists $n_0 \in \mathbb{N}$ such that $f(x_n, y) + g(x_n) - f(x, y) - g(x) < 0$ for all $n \ge n_0$. Consequently, for any $n \ge n_0$ one has $$
0 > \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\alpha_n} \ge
\frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\| x_n - x \|_X}, $$ which yields \begin{multline} \label{ThirdCase_PartialRSD}
0 \ge \liminf_{n \to \infty} \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\alpha_n} \ge \\
\ge \lim_{n \to \infty} \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\| x_n - x \|_X} \ge
\big[ f(\cdot, y) + g(\cdot) \big]^{\downarrow}_{A_X}(x). \end{multline} Combining (\ref{FirstCase_PartialRSD})--(\ref{ThirdCase_PartialRSD}), and applying Lemma~\ref{Lemma_SumRule} one gets that $$
\liminf_{n \to \infty} \frac{f(x_n, y) + g(x_n) - f(x, y) - g(x)}{\alpha_n} \ge
\inf\big\{ 0, f(\cdot, y)^{\downarrow}_{A_X}(x) + g^{\downarrow}_{A_X}(x) \big\}. $$ Arguing in a similar way one can show that $$
\liminf_{n \to \infty} \frac{f(x, y_n) + h(y_n) - f(x, y) - h(y)}{\alpha_n} \ge
\inf\big\{ 0, f(x, \cdot)^{\downarrow}_{A_Y}(y) + h^{\downarrow}_{A_Y}(y) \big\}. $$ Hence and from (\ref{RSDreductionToPartialRSDInequal}) one obtains the desired result. \end{proof}
The rates of steepest descent and ascent can be used to express optimality conditions. Namely, it is easy to verify that the following result holds true.
\begin{lemma} \label{Lemma_NessOptCond} Let $f \colon X \to \overline{\mathbb{R}}$ be a given function, and let $x^* \in A \cap \dom f$ be a point of local minimum $($resp.~maximum$)$ of $f$ on $A$. Then $f^{\downarrow}_A(x^*) \ge 0$ $(\text{resp. } f^{\uparrow}_A(x^*) \le 0)$. Furthermore, if a point $x \in A \cap \dom f$ satisfies the inequality $f^{\downarrow}_A(x) > 0$ $(\text{resp. } f^{\uparrow}_A(x) < 0)$, then $x$ is a point of strict local minimum $($resp.~maximum$)$ of the function $f$ on the set $A$. \end{lemma}
For any function $f \colon X \to \overline{\mathbb{R}}$ a point $x \in A \cap \dom f$ such that $f^{\downarrow}_A (x) \ge 0$ is called an \textit{inf-stationary} (or \textit{lower semistationary}, see \cite{Giannessi}) point of the function $f$ with respect to the set $A$.
We will also need the following approximate Fermat's rule in terms of rate of steepest descent, which is a simple corollary to the Ekeland variational principle (cf.~\cite{UderzoCalm, Aze}).
\begin{lemma}[Approximate Fermat's rule] Let $X$ be a complete metric space, $A \subseteq X$ be closed, and $f \colon X \to \mathbb{R} \cup \{ + \infty \}$ be proper, l.s.c. and bounded below on $A$. Let also $\varepsilon > 0$ and $x_{\varepsilon} \in A$ be such that $$
f(x_{\varepsilon}) \le \inf_{x \in A} f(x) + \varepsilon. $$ Then for any $r > 0$ there exists $y \in A$ such that $f(y) \le f(x_{\varepsilon})$, $d(y, x_{\varepsilon}) \le r$ and $f^{\downarrow}_A(y) \ge - \varepsilon / r$. \end{lemma}
\section{Parametric penalty functions} \label{Sect_ParametricPenaltyFunctions}
In this section, we develop a general theory of exact parametric penalty functions. We obtain several sufficient conditions for the exactness of parametric penalty functions, and study the concept of feasibility-preserving penalty function. We also derive necessary and sufficient conditions for the zero duality gap property for a parametric penalty function to hold true, and study some properties of minimizing sequences of a parametric penalty function.
\subsection{Exact penalty functions} \label{Subsect_ExactPenFunc}
Let $X$ be a metric space, $M, A \subset X$ be nonempty sets, and $f \colon X \to \mathbb{R} \cup \{ + \infty \}$ be a given function. Hereinafter, we study the following optimization problem $$
\min f(x) \quad \text{subject to} \quad x \in M, \quad x \in A. \eqno{(\mathcal{P})} $$ Denote by $\Omega = M \cap A$ the set of feasible points of this problem. We suppose that $\dom f \ne \emptyset$, and $f$ attains a global minimum on the set $\Omega$.
Choose a function $\phi \colon X \to [0, +\infty]$ such that $\phi(x) = 0$ iff $x \in M$. The function $g_{\lambda}(x) = f(x) + \lambda \phi(x)$ is called \textit{a penalty function} for the problem $(\mathcal{P})$, where $\lambda \ge 0$ is a penalty parameter. The main goal of the theory of exact penalty functions is to study a relation between the problem $(\mathcal{P})$ and the penalized problem \begin{equation} \label{PenProb}
\min g_{\lambda}(x) \quad \text{subject to} \quad x \in A. \end{equation} Note that only the constraint $x \in M$ is penalized, while the constraint $x \in A$ is taken into account explicitly. Usually, the set $A$ corresponds to ``simple'' constraints, e.g. bound or linear constraints.
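As a minimal numerical illustration of these notions, consider the following hypothetical toy instance (not taken from the text): $f(x) = x$, $M = \{x \in \mathbb{R} \mid x \ge 0\}$, $A = [-1, 1]$, with penalty term $\phi(x) = \max(-x, 0)$, so that $\phi(x) = 0$ iff $x \in M$. The sketch below evaluates the penalty function $g_{\lambda}$ on a grid over $A$:

```python
import numpy as np

# Toy instance of problem (P): minimize f(x) = x over M ∩ A, where
# M = {x >= 0} is penalized and A = [-1, 1] is kept explicit.
# (Hypothetical data chosen for illustration; the text works in
# arbitrary metric spaces.)
f = lambda x: x
phi = lambda x: np.maximum(-x, 0.0)          # phi(x) = 0  iff  x in M
g = lambda x, lam: f(x) + lam * phi(x)       # penalty function g_lambda

A = np.linspace(-1.0, 1.0, 2001)             # grid over the set A
f_star = 0.0                                 # optimal value, x* = 0

for lam in [0.5, 1.0, 2.0]:
    x_min = A[np.argmin(g(A, lam))]
    print(lam, round(x_min, 3), round(g(A, lam).min(), 3))
```

Here $x^* = 0$ and $f^* = 0$; for $\lambda = 0.5$ the minimum of $g_{\lambda}$ over $A$ sits at the infeasible point $x = -1$, while for every $\lambda > 1$ it is attained at $x^*$, so this penalty function is exact with least exact penalty parameter $\lambda^* = 1$.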
Let $x^* \in \dom f$ be a locally optimal solution of the problem $(\mathcal{P})$. Recall that the penalty function $g_{\lambda}$ is said to be \textit{exact} at $x^*$, if there exists $\lambda^* \ge 0$ such that for any $\lambda \ge \lambda^*$ the point $x^*$ is a locally optimal solution of the penalized problem (\ref{PenProb}).
Observe that if the penalty function $g_{\lambda}$ is exact at a locally optimal solution $x^* \in \dom f$ of the problem $(\mathcal{P})$, then the penalty term $\phi$ must be nonsmooth at $x^*$ in the general case. Indeed, suppose that $X$ is a normed space, and the function $\phi$ is Fr\'echet differentiable at $x^*$. Note that $x^* \in M$ due to the fact that $x^*$ is a locally optimal solution of the problem $(\mathcal{P})$. Therefore $\phi(x^*) = 0$, which implies that $x^*$ is a point of global minimum of $\phi$, since $\phi$ is nonnegative. Hence $\phi'(x^*) = 0$. Applying Lemmas~\ref{Lemma_SimpleCases}, \ref{Lmm_SDD_SumEstimate} and \ref{Lemma_NessOptCond} one obtains $$
f^{\downarrow}_A (x^*) = f^{\downarrow}_A (x^*) + \lambda \phi^{\uparrow}_A(x^*) \ge
( g_{\lambda} )^{\downarrow}_A(x^*) \ge 0. $$ Thus, $f^{\downarrow}_A (x^*) \ge 0$, i.e. $x^*$ is an inf-stationary point of the problem $$
\min f(x) \quad \text{subject to} \quad x \in A, $$ which is normally not the case, since otherwise the constraint $x \in M$ is somewhat redundant. As a result, one gets that in order to construct an exact penalty function one needs to choose a nonsmooth penalty term $\phi$. Alternatively, one can consider a \textit{parametric} penalty function.
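The obstruction described above can be checked numerically on a hypothetical example (not from the text): take $f(x) = x$, $M = \{0\}$, $A = [-1, 1]$, and the smooth penalty term $\phi(x) = x^2$, for which $\phi'(0) = 0$. The minimizer of $g_{\lambda}$ over $A$ then lies at $-1/(2\lambda) \ne 0$ for every $\lambda \ge 1/2$, so $g_{\lambda}$ is not exact at $x^* = 0$ no matter how large $\lambda$ is:

```python
import numpy as np

# Smooth penalty term: f(x) = x, M = {0}, A = [-1, 1], phi(x) = x**2.
# g_lam(x) = x + lam * x**2 is minimized at x = -1/(2*lam) != 0,
# so x* = 0 is never a local minimizer of g_lam (no exactness).
A = np.linspace(-1.0, 1.0, 200001)           # grid step 1e-5 over A

for lam in [1.0, 10.0, 1000.0]:
    x_min = A[np.argmin(A + lam * A**2)]
    print(lam, x_min)   # minimizer approaches 0 but never reaches it
```

This matches the analytical argument: the minimizer converges to $x^*$ only as $\lambda \to +\infty$.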
Let $P$ be a metric space of parameters, and $p_0 \in P$ be fixed. Choose a function $\varphi \colon X \times P \to [0, + \infty]$ such that $\varphi(x, p) = 0$ iff $p = p_0$ and $x \in M$. Introduce the following extended penalized problem \begin{equation} \label{ParamPenProb}
\min_{x, p} F_{\lambda}(x, p) \quad \text{subject to} \quad (x, p) \in A \times P, \end{equation} where $F_{\lambda}(x, p) = f(x) + \lambda \varphi(x, p)$ is \textit{a parametric penalty function} for the problem $(\mathcal{P})$, and $\lambda \ge 0$ is a penalty parameter. Note that $F_{\lambda}(x, p_0) = f(x)$ for any $x \in M$, and $F_{\lambda}(x, p) > f(x)$ otherwise. Observe also that $\Omega = \{ x \in A \mid \varphi(x, p_0) = 0 \}$.
\begin{definition} Let $x^* \in \dom f$ be a locally optimal solution of the problem $(\mathcal{P})$. The penalty function $F_{\lambda}$ is said to be exact at $x^*$ if there exists $\lambda_0 \ge 0$ such that the pair $(x^*, p_0)$ is a locally optimal solution of the problem (\ref{ParamPenProb}) with $\lambda = \lambda_0$. The greatest lower bound of all such $\lambda_0$ is referred to as \textit{the exact penalty parameter} of the penalty function $F_{\lambda}$ at $x^*$, and is denoted by $\lambda^*(x^*, f, \varphi)$ or simply by $\lambda^*(x^*)$, if $f$ and $\varphi$ are fixed (some authors refer to $\lambda^*(x^*)$ as \textit{the least exact penalty parameter}, see~\cite{RubinovYang}). \end{definition}
\begin{definition} The penalty function $F_{\lambda}$ is called (\textit{globally}) \textit{exact} if there exists $\lambda_0 \ge 0$ such that the function $F_{\lambda_0}$ attains a global minimum on the set $A \times P$, and a pair $(x^*, p^*)$ is a globally optimal solution of the problem (\ref{ParamPenProb}) with $\lambda = \lambda_0$ iff $p^* = p_0$, and $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$. The greatest lower bound of all such $\lambda_0$ is referred to as \textit{the exact penalty parameter} of the penalty function $F_{\lambda}$, and is denoted by $\lambda^*(f, \varphi)$. \end{definition}
Note that the definitions of local and global exact penalty parameters are meaningful and correct. Namely, if $F_{\lambda}$ is exact at a locally optimal solution $x^*$ of the problem $(\mathcal{P})$, then for any $\lambda > \lambda^*(x^*)$ the pair $(x^*, p_0)$ is a locally optimal solution of the problem (\ref{ParamPenProb}). Similarly, if $F_{\lambda}$ is exact, then for any $\lambda > \lambda^*(f, \varphi)$ the penalty function $F_{\lambda}$ attains a global minimum on the set $A \times P$, and a pair $(x^*, p^*)$ is a globally optimal solution of the problem (\ref{ParamPenProb}) iff $p^* = p_0$, and $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$. The validity of these statements follows directly from the fact that $F_{\lambda}(x, p)$ is non-decreasing with respect to $\lambda$ for any $(x, p) \in X \times P$, and $F_{\lambda}(x, p)$ is strictly increasing in $\lambda$ if both $f(x)$ and $\varphi(x, p)$ are finite and either $p \ne p_0$, $x \in X$ or $p = p_0$, $x \notin M$.
The following result is a direct corollary to the definition of exact parametric penalty function.
\begin{proposition} \label{Prp_ExactLimInfPenFunc} Let $F_{\lambda}(x, p)$ be exact at a locally optimal solution $x^* \in \dom f$ of the problem $(\mathcal{P})$. Then the penalty functions $$
g_{\lambda}(x) = f(x) + \lambda \varphi(x, p_0), \quad
h_{\lambda}(x) = f(x) + \lambda \liminf_{p \to p_0} \varphi(x, p) $$ are exact at $x^*$ as well. Furthermore, if $F_{\lambda}$ is (globally) exact, then so are $g_{\lambda}$ and $h_{\lambda}$ (provided $\liminf_{p \to p_0} \varphi(x, p) > 0$ for any $x \notin M$). \end{proposition}
Thus, in the general case, for the (local or global) exactness of the parametric penalty function $F_{\lambda}(x, p)$, it is necessary that the functions $x \mapsto \varphi(x, p_0)$ and $x \mapsto \liminf_{p \to p_0} \varphi(x, p)$ are nonsmooth. In particular, if $X$ is a normed space, and the function $\varphi$ has the form $\varphi(x, p) = \alpha(p) \phi(x) + \beta(p)$ for some functions $\alpha, \beta \colon P \to [0, + \infty]$ and $\phi \colon X \to \mathbb{R}_+$ such that $\phi$ is smooth and $\beta(p) \to 0$ as $p \to p_0$, then for the exactness of the penalty function $F_{\lambda}$ it is necessary that $\alpha(p) \to + \infty$ as $p \to p_0$ (cf. the parametric penalty function from Section~\ref{Section_SingPenFunc}).
\begin{remark} {(i) A different approach to the definition of parametric penalty functions, in which a parameter is included into the objective function, was considered in \cite{Dolgopolik_OptLet}. }
\noindent{(ii) Note that the parametric penalty function $F_{\lambda}$ is nothing but a standard penalty function for the extended optimization problem of the form $$
\min_{(x, p)} \widehat{f}(x, p) \quad \text{subject to} \quad (x, p) \in M \times \{ p_0 \}, \quad
(x, p) \in A \times P, $$ where $\widehat{f}(x, p) = f(x)$ for all $(x, p) \in X \times P$. Therefore one can apply the existing results on exactness of non-parametric penalty functions \cite{Dolgopolik} to a penalty function for the extended problem in order to obtain sufficient and/or necessary conditions for the exactness of the parametric penalty function $F_{\lambda}$. However, it should be noted that in most cases this approach leads to weaker results than a direct study of the exactness of parametric penalty functions presented below, since the parameter $p$ is treated in the same way as the variable $x$, and the particular structure of the constraint $(x, p) \in M \times \{ p_0 \}$ is not taken into account within this approach (cf., e.g., Theorem~\ref{Th_GlobExFiniteDim} below and its non-parametric counterpart \cite{Dolgopolik}, Theorem~3.17). } \end{remark}
\subsection{Global exactness}
Observe that if the penalty function $F_{\lambda}$ is exact, then it is exact at every globally optimal solution of the problem $(\mathcal{P})$. Our aim is to prove that under some additional assumptions the converse statement holds true. However, at first, we describe an alternative approach to the definition of global exactness of a penalty function that is not based on properties of global minimizers of this function.
Denote by $f^* = \inf_{x \in \Omega} f(x)$ the optimal value of the problem $(\mathcal{P})$. The proposition below follows directly from the fact that the penalty function $F_{\lambda}(x, p)$ is strictly increasing in $\lambda$ for any $(x, p)$ such that both $f(x)$ and $\varphi(x, p)$ are finite and either $p \ne p_0$, $x \in X$ or $p = p_0$, $x \notin M$.
\begin{proposition} \label{Prp_EquivDefExPen} The penalty function $F_{\lambda}$ is exact if and only if there exists $\lambda_0 \ge 0$ such that $$
\inf_{(x, p) \in A \times P} F_{\lambda_0}(x, p) \ge f^*. $$ Moreover, the greatest lower bound of all such $\lambda_0$ is equal to the exact penalty parameter $\lambda^*(f, \varphi)$. \end{proposition}
\begin{corollary} \label{Crlr_ReductionToStandExPen} The penalty function $F_{\lambda}$ is exact if and only if the penalty function $g_{\lambda}(x) = f(x) + \lambda \inf_{p \in P} \varphi(x, p)$ is exact. Furthermore, the exact penalty parameters of these functions coincide. \end{corollary}
\begin{remark} The corollary above describes a general approach to the study of exact parametric penalty functions. Namely, Corollary~\ref{Crlr_ReductionToStandExPen} allows one to study the standard penalty function $g_{\lambda}$ instead of the parametric penalty function $F_{\lambda}$ in order to obtain sufficient (and maybe necessary) conditions for $F_{\lambda}$ to be exact. One simply has to compute the penalty function $g_{\lambda}$ (or to find its lower and upper estimates), and then apply well-developed methods of the theory of exact linear penalty function~\cite{Dolgopolik} to the penalty function $g_{\lambda}$ (or its estimates). For the application of this approach to the study of singular penalty functions see~\cite{Dolgopolik_OptLet2}. However, it should be noted that sometimes it might be difficult to compute the penalty function $g_{\lambda}$ or to find its sharp estimates, which makes the described approach inapplicable in some cases. \end{remark}
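The reduction described in the remark above can be traced on a hypothetical toy instance (assumed data, not from the text): $f(x) = x$, $M = \{0\}$, $A = [-1, 1]$, $P = [0, +\infty)$ with $p_0 = 0$, and $\varphi(x, p) = x^2/p + p$ for $p > 0$ (with $\varphi(x, 0) = 0$ iff $x \in M$). For each fixed $p > 0$ the map $x \mapsto \varphi(x, p)$ is smooth, with $\alpha(p) = 1/p \to +\infty$ as $p \to p_0$, yet analytically $\inf_{p} \varphi(x, p) = 2|x|$ (attained at $p = |x|$), so the reduced penalty $g_{\lambda}(x) = f(x) + 2\lambda|x|$ is a nonsmooth exact penalty function:

```python
import numpy as np

# Reduction F_lam(x, p) = f(x) + lam*phi(x, p)  ->
# g_lam(x) = f(x) + lam * inf_p phi(x, p), on the hypothetical
# instance phi(x, p) = x**2 / p + p, p > 0, p0 = 0, M = {0}.
f = lambda x: x
P_grid = np.linspace(1e-6, 2.0, 400001)      # approximates p in (0, 2]

def inf_phi(x):
    # numerical inf over p of phi(x, .); analytically equal to 2|x|
    return (x**2 / P_grid + P_grid).min()

xs = np.linspace(-1.0, 1.0, 41)
assert max(abs(inf_phi(x) - 2 * abs(x)) for x in xs) < 1e-3

# With lam > 1/2 the reduced penalty g_lam(x) = x + 2*lam*|x| is
# minimized over A = [-1, 1] exactly at x* = 0, the solution of (P).
lam = 1.0
xa = np.linspace(-1.0, 1.0, 2001)
x_min = xa[np.argmin(f(xa) + lam * 2 * np.abs(xa))]
print(x_min)
```

This is precisely the situation anticipated in the necessity discussion above: each $\varphi(\cdot, p)$ is smooth, but the exactness of $F_{\lambda}$ is inherited from the nonsmooth function $x \mapsto \inf_p \varphi(x, p)$.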
Let us show that the infimum in Proposition~\ref{Prp_EquivDefExPen} can be taken over a significantly smaller set than $A \times P$. We need the following definition in order to conveniently formulate this result.
\begin{definition} Let $C \subset A \times P$ be a nonempty set. The penalty function $F_{\lambda}$ is said to be \textit{exact} on the set $C$ if there exists $\lambda_0 \ge 0$ such that $$
F_{\lambda_0}(x, p) \ge f^* \qquad \forall (x, p) \in C. $$ The greatest lower bound of all $\lambda_0 \ge 0$ for which the inequality above holds true is denoted by $\lambda^*(C)$ and is referred to as \textit{the exact penalty parameter} of the penalty function $F_{\lambda}$ on the set $C$. In the case when $C = A_0 \times P$ for some $A_0 \subset A$, we simply say that the penalty function $F_{\lambda}$ is exact on the set $A_0$, and denote $\lambda^*(A_0) := \lambda^*(A_0 \times P)$. \end{definition}
Note that if the penalty function $F_{\lambda}$ is exact on a set $C \subset A \times P$, and there exists a globally optimal solution $x^*$ of the problem $(\mathcal{P})$ such that $(x^*, p_0) \in C$, then for any $\lambda > \lambda^*(C)$ the function $F_{\lambda}$ attains a global minimum on the set $C$, and a pair $(\overline{x}, \overline{p})$ is a point of global minimum of $F_{\lambda}$ on $C$ iff $\overline{p} = p_0$, and $\overline{x}$ is a globally optimal solution of the problem $(\mathcal{P})$. Note also that $\lambda^*(C) \le \lambda^*(f, \varphi)$ for any $C \subseteq A \times P$.
Inspired by the ideas of Prof.~V.~F.~Demyanov \cite{Demyanov}, introduce the set $$
\Omega_{\delta} = \big\{ (x, p) \in A \times P \mid \varphi(x, p) < \delta \big\} \quad \forall \delta > 0. $$ Note that \begin{equation} \label{OmegaDeltaShrinking}
\bigcap_{\delta > 0} \Omega_{\delta} = \Omega \times \{ p_0 \} \end{equation} and for any $\delta_1 > \delta_2$ one has $\Omega_{\delta_2} \subseteq \Omega_{\delta_1}$, i.e. the set $\Omega_{\delta}$ shrinks into the set $\Omega \times \{ p_0 \}$ as $\delta \to +0$.
\begin{lemma} \label{Lmm_Reduction} Let there exist $\lambda_0 \ge 0$ such that the function $F_{\lambda_0}$ is bounded below on $A \times P$. Then for any $\delta > 0$ one has $$
F_{\lambda}(x, p) \ge f^* \qquad \forall (x, p) \notin \Omega_{\delta}
\qquad \forall \lambda \ge \lambda_0 + \frac{f^* - c}{\delta}, $$ where $c = \inf\{ F_{\lambda_0}(x, p) \mid (x, p) \in A \times P \}$, i.e. the penalty function $F_{\lambda}$ is exact on the set $(A \times P) \setminus \Omega_{\delta}$ and $\lambda^*((A \times P) \setminus \Omega_{\delta}) \le \lambda_0 + (f^* - c) / \delta$. \end{lemma}
\begin{proof} Fix $\delta > 0$. Let $(x, p) \notin \Omega_{\delta}$, i.e. $\varphi(x, p) \ge \delta$. Then $$
F_{\lambda}(x, p) = f(x) + \lambda \varphi(x, p) = f(x) + \lambda_0 \varphi(x, p) +
(\lambda - \lambda_0) \varphi(x, p) \ge F_{\lambda_0}(x, p) + (\lambda - \lambda_0) \delta. $$ Therefore for any $\lambda \ge \overline{\lambda}$, where $$
\overline{\lambda} = \lambda_0 + \frac{f^* - c}{\delta}, \qquad
c = \inf_{(x, p) \in A \times P} F_{\lambda_0}(x, p), $$ one has $F_{\lambda}(x, p) \ge c + (\lambda - \lambda_0) \delta \ge f^*$ for all $(x, p) \notin \Omega_{\delta}$, which completes the proof. \end{proof}
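The lemma's threshold $\overline{\lambda} = \lambda_0 + (f^* - c)/\delta$ can be verified numerically on a hypothetical toy instance (assumed data, not from the text): $f(x) = x$, $\varphi(x) = \max(-x, 0)$, $A = [-1, 1]$, with $P$ a singleton so that $F_{\lambda}$ reduces to a one-variable penalty function:

```python
import numpy as np

# Numeric check of Lemma's bound: f(x) = x, phi(x) = max(-x, 0),
# A = [-1, 1], f* = 0 (P is a singleton, so the parameter is dropped).
f = lambda x: x
phi = lambda x: np.maximum(-x, 0.0)
A = np.linspace(-1.0, 1.0, 2001)

f_star, lam0 = 0.0, 0.0
c = (f(A) + lam0 * phi(A)).min()           # c = inf F_{lam0} = -1
delta = 0.5
lam_bar = lam0 + (f_star - c) / delta      # threshold from the lemma: 2

outside = A[phi(A) >= delta]               # complement of Omega_delta
# the lemma guarantees F_lam >= f* outside Omega_delta for lam >= lam_bar
assert ((f(outside) + lam_bar * phi(outside)) >= f_star).all()
print(c, lam_bar)
```

On the complement of $\Omega_{\delta}$ (here the points $x \le -1/2$) one indeed has $F_{\overline{\lambda}}(x) = -x \ge 1/2 \ge f^*$, as the lemma predicts.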
\begin{proposition} \label{Prp_Reduction} The penalty function $F_{\lambda}$ is exact if and only if $F_{\lambda}$ is bounded below on $A \times P$ for some $\lambda \ge 0$, and there exists $\delta > 0$ such that the penalty function $F_{\lambda}$ is exact on $\Omega_{\delta}$. Furthermore, one has $$
\lambda^*(f, \varphi) \le \overline{\lambda} :=
\max\left\{ \lambda^*(\Omega_{\delta}), \lambda_0 + \frac{f^* - c}{\delta} \right\}, \quad
c = \inf_{(x, p) \in A \times P} F_{\lambda_0}(x, p) $$ for any $\lambda_0 \ge 0$ such that $F_{\lambda_0}$ is bounded below on $A \times P$. \end{proposition}
\begin{proof} If the penalty function $F_{\lambda}$ is exact, then applying Proposition~\ref{Prp_EquivDefExPen} one gets that the function $F_{\lambda}$ is exact on the set $\Omega_{\delta}$ for any $\delta > 0$, and is bounded below on $A \times P$ for any $\lambda \ge \lambda^*(f, \varphi)$.
Suppose now that $F_{\lambda}$ is bounded below on $A \times P$, and there exists $\delta > 0$ such that the penalty function $F_{\lambda}$ is exact on $\Omega_{\delta}$. Applying~Lemma~\ref{Lmm_Reduction} and the fact that $F_{\lambda}$ is exact on $\Omega_{\delta}$ one obtains that $F_{\lambda}(x, p) \ge f^*$ for all $(x, p) \in A \times P$ and $\lambda \ge \overline{\lambda}$. Consequently, the penalty function $F_{\lambda}$ is exact by Proposition~\ref{Prp_EquivDefExPen}. \end{proof}
\begin{remark} \label{Rmrk_BarrierTermPenFunc} One can easily verify that if the penalty function $F_{\lambda}$ is exact on $\Omega_{\delta}$ for some $\delta > 0$, then the penalty function $\Psi_{\lambda}(x, p) = f(x) + \lambda \psi_{\delta}(\varphi(x, p))$ is exact and $\lambda^*(f, \psi_{\delta} \circ \varphi) \le \delta \cdot \lambda^*(\Omega_{\delta})$, where $\psi_{\delta}(t) = t / ( \delta - t )$, if $0 \le t < \delta$, and $\psi_{\delta}(t) = + \infty$, otherwise. Conversely, if the penalty function $\Psi_{\lambda}$ is exact, then for any $\theta \in (0, \delta)$ the penalty function $F_{\lambda}$ is exact on $\Omega_{\theta}$ and $\lambda^*(\Omega_{\theta}) \le \lambda^*(f, \psi_{\delta} \circ \varphi) / (\delta - \theta)$. \end{remark}
Recall that $\Omega$ is the set of feasible points of the problem $(\mathcal{P})$. As it was mentioned above, the set $\Omega_{\delta}$ shrinks into the set $\Omega \times \{ p_0 \}$ as $\delta \to +0$ (see~(\ref{OmegaDeltaShrinking})). Furthermore, it is easy to see that if the function $\varphi$ is lower semicontinuous on $A \times P$, and the set $A$ is closed, then $$
\limsup_{\delta \to +0} \Omega_{\delta} = \Omega \times \{ p_0 \}, $$ where $\limsup$ is the outer limit. Thus, the set $\Omega_{\delta}$ can be considered as an outer approximation of the set of feasible solutions of the problem $(\mathcal{P})$. Consequently, the proposition above can be imprecisely formulated as follows: the penalty function $F_{\lambda}$ is exact if and only if it is bounded below for sufficiently large $\lambda \ge 0$, and $F_{\lambda}$ is exact on the outer approximation $\Omega_{\delta}$ of the set $\Omega$ with arbitrarily small $\delta > 0$. However, note that the set $\Omega_{\delta}$ is defined via the penalty term $\varphi$.
Utilizing the previous proposition one can obtain a general sufficient condition for the global exactness of the parametric penalty function $F_{\lambda}$.
\begin{theorem} \label{Th_GlobalExGeneralCase} Let $X$ and $P$ be complete metric spaces, $A$ be closed, $f$ be l.s.c. on $A$, and $\varphi$ be l.s.c. on $A \times P$. Suppose also that there exist $\delta > 0$ and $\lambda_0 \ge 0$ such that \begin{enumerate} \item{the function $f$ is Lipschitz continuous on the projection of the set $$
C(\delta, \lambda_0) = \big\{ (x, p) \in \Omega_{\delta} \mid F_{\lambda_0}(x, p) < f^* \big\} $$ onto the space $X$, and upper semicontinuous (u.s.c.) at every point of this projection; }
\item{there exists $a > 0$ such that $\varphi^{\downarrow}_{A \times P}(x, p) \le - a$ for any $(x, p) \in C(\delta, \lambda_0) \cap \dom \varphi$. \label{Assumpt_MetricRegDiffCond} } \end{enumerate} Then the penalty function $F_{\lambda}$ is exact if and only if it is bounded below on $A \times P$ for some $\lambda \ge 0$. \end{theorem}
\begin{proof} If $F_{\lambda}$ is exact, then, obviously, it is bounded below on $A \times P$ for any $\lambda \ge \lambda^*(f, \varphi)$. Let us prove the converse statement. Note that without loss of generality one can suppose that $F_{\lambda_0}$ is bounded below on $A \times P$.
Arguing by reductio ad absurdum, suppose that the function $F_{\lambda}$ is not exact. Then for any $\lambda \ge 0$ there exists $(x_{\lambda}, p_{\lambda}) \in A \times P$ such that $F_{\lambda}(x_{\lambda}, p_{\lambda}) < f^*$. The function $F_{\lambda}$ is l.s.c. due to the lower semicontinuity of the functions $f$ and $\varphi$. Applying approximate Fermat's rule to the restriction of $F_{\lambda}$ to the set $A \times P$ one obtains that for any $\lambda \ge 0$ there exists $(y_{\lambda}, q_{\lambda}) \in A \times P$ such that \begin{equation} \label{ApproxFermatRule}
F_{\lambda}(y_{\lambda}, q_{\lambda}) \le F_{\lambda}(x_{\lambda}, p_{\lambda}) < f^*, \quad
\big( F_{\lambda} \big)^{\downarrow}_{A \times P} (y_{\lambda}, q_{\lambda}) \ge -1. \end{equation} From Lemma~\ref{Lmm_Reduction} it follows that $$
F_{\lambda}(x, p) \ge f^* \quad \forall (x, p) \notin \Omega_{\delta} \quad \forall
\lambda \ge \overline{\lambda} := \lambda_0 + \frac{f^* - \inf_{(x, p) \in A \times P} F_{\lambda_0}(x, p)}{\delta}, $$ which yields $\varphi(y_{\lambda}, q_{\lambda}) < \delta$ for any $\lambda \ge \overline{\lambda}$. Hence $(y_{\lambda}, q_{\lambda}) \in C(\delta, \lambda_0)$ for all $\lambda \ge \overline{\lambda}$.
Due to assumption~\ref{Assumpt_MetricRegDiffCond} one has $\varphi^{\downarrow}_{A \times P} (y_{\lambda}, q_{\lambda}) \le - a$ for any $\lambda \ge \overline{\lambda}$. Therefore by the definition of rate of steepest descent for all $\lambda \ge \overline{\lambda}$ there exists a sequence $\{ (y_{\lambda}^{(n)}, q_{\lambda}^{(n)}) \} \subset A \times P$ converging to $(y_{\lambda}, q_{\lambda})$ such that \begin{equation} \label{RateOfStDesc_Def}
\frac{\varphi( y_{\lambda}^{(n)}, q_{\lambda}^{(n)} ) - \varphi(y_{\lambda}, q_{\lambda})}{
d( y_{\lambda}^{(n)}, y_{\lambda} )_X + d( q_{\lambda}^{(n)}, q_{\lambda} )_P} \le -
\frac{a}{2} \quad \forall n \in \mathbb{N}. \end{equation} Hence, in particular, $\varphi(y_{\lambda}^{(n)}, q_{\lambda}^{(n)}) < \varphi(y_{\lambda}, q_{\lambda}) < \delta$. Taking into account the facts that $F_{\lambda}(y_{\lambda}, q_{\lambda}) < f^*$, and $f$ is u.s.c. at $y_{\lambda}$, by virtue of the fact that $(y_{\lambda}, q_{\lambda}) \in C(\delta, \lambda_0)$, one obtains that $F_{\lambda}(y_{\lambda}^{(n)}, q_{\lambda}^{(n)}) < f^*$ for any sufficiently large $n$. Therefore without loss of generality one can suppose that $\{ (y_{\lambda}^{(n)}, q_{\lambda}^{(n)}) \} \subset C(\delta, \lambda_0)$ for any $\lambda \ge \overline{\lambda}$. Consequently, applying (\ref{RateOfStDesc_Def}), and the fact that $f$ is Lipschitz continuous on the projection of $C(\delta, \lambda_0)$ onto $X$ one gets that there exists $L > 0$ such that \begin{multline*}
F_{\lambda}(y_{\lambda}^{(n)}, q_{\lambda}^{(n)}) - F_{\lambda}(y_{\lambda}, q_{\lambda})
\le f(y_{\lambda}^{(n)}) - f(y_{\lambda}) + \lambda
\big( \varphi(y_{\lambda}^{(n)}, q_{\lambda}^{(n)}) - \varphi(y_{\lambda}, q_{\lambda}) \big) \le \\
\le \left( L - \frac{a}{2} \lambda \right)
\big( d( y_{\lambda}^{(n)}, y_{\lambda} )_X + d( q_{\lambda}^{(n)}, q_{\lambda} )_P \big) \end{multline*} for any $n \in \mathbb{N}$ and $\lambda \ge \overline{\lambda}$. Hence dividing by $d( y_{\lambda}^{(n)}, y_{\lambda} )_X + d( q_{\lambda}^{(n)}, q_{\lambda} )_P$ and passing to the limit inferior as $n \to \infty$ one obtains that $$
\big( F_{\lambda} \big)^{\downarrow}_{A \times P} (y_{\lambda}, q_{\lambda}) \le L - \frac{a}{2} \lambda
\quad \forall \lambda \ge \overline{\lambda}, $$ which contradicts (\ref{ApproxFermatRule}). \end{proof}
\begin{remark} {(i)~It is not difficult to verify that the theorem above holds true if the set $A \cap \dom f$, considered as a metric subspace of $X$, is complete, while the space $X$ itself can be incomplete. }
{\noindent(ii)~Note that in the theorem above it is sufficient to suppose that the function $f$ is Lipschitz continuous on the set $\{ x \in X \mid f(x) < f^* \}$. Furthermore, if $X$ is a normed space, and for some $\lambda \ge 0$ one has
$\inf_{p \in P} F_{\lambda}(x, p) \to + \infty$ as $\| x \| \to \infty$ (in particular, one can suppose that the function $f$ is coercive), then it is sufficient to suppose that $f$ is Lipschitz continuous on any bounded subset of the space $X$. }
{\noindent(iii)~It should be noted that the fact that a function $g \colon X \to \mathbb{R}$ is Lipschitz continuous on a set $C \subset X$ does not imply that $g$ is u.s.c. on $C$. For example, let $X = \mathbb{R}$, $C = [0, 1]$, $g(x) = 1$ for any $x \in (- \infty, 0) \cup (1, + \infty)$ and $g(x) = 0$ for all $x \in [0, 1]$. Then $g$ is Lipschitz continuous on the set $C$ with the Lipschitz constant $L = 0$, since for any $x, y \in C$ one has
$|g(x) - g(y)| = 0$. However, $g$ is not u.s.c. on the set $C$, since it is not u.s.c. at the points $x = 1 \in C$ and $x = 0 \in C$. Therefore, in the previous theorem, the assumption that the function $f$ is u.s.c. at every point of the projection of $C(\delta, \lambda_0)$ onto $X$ is not redundant. }
{\noindent(iv)~Note, finally, that Theorem~\ref{Th_GlobalExGeneralCase} was inspired by Theorem~3.4.1 from \cite{Demyanov}. Furthermore, Theorem~\ref{Th_GlobalExGeneralCase} sharpens Theorem~3.4.1 from \cite{Demyanov} (since we do not assume that $F_{\lambda}(x, p)$ attains a global minimum for any sufficiently large $\lambda \ge 0$) and extends it to the parametric case. } \end{remark}
Clearly, the most restrictive assumption of the previous theorem is assumption~\ref{Assumpt_MetricRegDiffCond}. Typically, in order to check this assumption one needs to verify that a constraint qualification condition in the problem $(\mathcal{P})$ holds at every point of the projection of $C(\delta, \lambda_0)$ onto $X$ that might have a very complicated structure. However, let us show that under natural assumptions in the case when $X$ is a finite dimensional normed space, the global exactness of the penalty function $F_{\lambda}$ is completely defined by the behaviour of this function near globally optimal solutions of the problem $(\mathcal{P})$. We call this result \textit{the localization principle}, since it allows one to reduce the study of the \textit{global} exactness of a penalty function to a \textit{local} analysis of the behaviour of this function near globally optimal solutions of the original problem.
\begin{theorem}[Localization Principle] \label{Th_GlobExFiniteDim} Let $X$ be a finite dimensional normed space, $A$ be a closed set, $f$ be l.s.c. on $\Omega$, and $\varphi$ be l.s.c. on $A \times \{ p_0 \}$. Suppose that there exists a non-decreasing function $\omega \colon \mathbb{R}_+ \to \mathbb{R}_+$ such that $\omega(t) = 0$ iff $t = 0$, and \begin{equation} \label{PenTermGen_GrowthInP}
\varphi(x, p) \ge \omega(d(p, p_0)) \quad \forall (x, p) \in A \times P. \end{equation} Let also one of the following assumptions be valid: \begin{enumerate} \item{there exists $\delta > 0$ such that the projection of the set $\Omega_{\delta}$ onto the space $X$ is bounded; }
\item{there exists $\mu > 0$ such that the set $\{ x \in A \mid \inf_{p \in P} F_{\mu}(x, p) < f^* \}$ is bounded (in particular, one can suppose that $A$ is compact or $f$ is coercive). \label{Assump_BoundLevelSet}} \end{enumerate} Then the penalty function $F_{\lambda}$ is exact if and only if $F_{\lambda}$ is exact at every globally optimal solution of the problem $(\mathcal{P})$, and there exists $\lambda_0 \ge 0$ such that $F_{\lambda_0}$ is bounded below on $A \times P$. \end{theorem}
\begin{proof} If $F_{\lambda}$ is exact, then it is obviously exact at every globally optimal solution of the problem $(\mathcal{P})$, and for any $\lambda \ge \lambda^*(f, \varphi)$ the function $F_{\lambda}$ is bounded below. Let us prove the converse statement.
Observe that if $F_{\lambda_0}$ is bounded below on $A \times P$ for some $\lambda_0 \ge 0$, then by virtue of Lemma~\ref{Lmm_Reduction} one gets that for any $\delta > 0$ there exists $\mu \ge \lambda_0$ such that $$
\big\{ (x, p) \in A \times P \mid F_{\mu}(x, p) < f^* \big\} \subset \Omega_{\delta}. $$ Therefore without loss of generality one can suppose that assumption~\ref{Assump_BoundLevelSet} is valid.
Our aim is to show that $F_{\lambda}$ is exact on $\Omega_{\delta}$ for sufficiently small $\delta > 0$. Then with the use of Proposition~\ref{Prp_Reduction} one concludes that $F_{\lambda}$ is exact.
From the fact that the set $\{ x \in A \mid \inf_{p \in P} F_{\mu}(x, p) < f^* \}$ is bounded it follows that for any $\lambda \ge \mu$ one has \begin{equation} \label{ExactOutsideBall}
F_{\lambda}(x, p) \ge f^* \quad \forall p \in P \quad \forall x \in A \colon \| x \| > R \end{equation} for sufficiently large $R > 0$.
Denote by $\Omega^* \subset \Omega$ the set of all globally optimal solutions of the problem $(\mathcal{P})$ that belong to the ball $B(0, R) = \{ x \in X \mid \| x \| \le R \}$. Observe that the function $\varphi(\cdot, p_0)$ is l.s.c. on $A$ by virtue of the fact that $\varphi$ is l.s.c. on $A \times \{ p_0 \}$. Hence taking into account the facts that $\varphi$ is nonnegative, $\Omega = \{ x \in A \mid \varphi(x, p_0) = 0 \}$, and $A$ is closed, one obtains that $\Omega$ is closed as well. Therefore by virtue of the lower semicontinuity of the function $f$ on $\Omega$ one gets that $\Omega^*$ is closed and, consequently, compact due to the fact that $X$ is a finite dimensional normed space.
By the assumption of the theorem the penalty function $F_{\lambda}$ is exact at every globally optimal solution of the problem $(\mathcal{P})$. Therefore for any $x^* \in \Omega^*$ there exist $\lambda(x^*) \ge 0$ and $r(x^*) > 0$ such that for any $\lambda \ge \lambda(x^*)$ one has \begin{equation} \label{ExactAtOptSol}
F_{\lambda}(x, p) \ge f(x^*) = f^* \quad
\forall (x, p) \in \Big( U\big( x^*, r(x^*) \big) \cap A \Big) \times U\big( p_0, r(x^*) \big), \end{equation} where $U( p_0, r(x^*) ) = \{ p \in P \mid d(p_0, p) < r(x^*) \}$. Applying the compactness of the set $\Omega^*$ one obtains that there exist $x_1^*, \ldots, x_m^* \in \Omega^*$ such that \begin{equation} \label{FiniteOpenCover}
\Omega^* \subset \bigcup_{k = 1}^m U\left( x_k^*, \frac{r(x_k^*)}{2} \right). \end{equation} Denote $$
\lambda_0 = \max_{k \in \{ 1, \ldots, m \}} \lambda(x_k^*), \quad
r_0 = \min_{k \in \{ 1, \ldots, m \}} \frac{r(x_k^*)}{2}, \quad
U = \bigcup_{x^* \in \Omega^*} \big( U(x^*, r_0) \cap A \big). $$ Observe that $\Omega^* \subset U$, and the set $U$ is open in $A$ (hereafter, we suppose that the set $A$ is endowed with the induced metric).
Let $x \in U$ be arbitrary. By definition there exists $x^* \in \Omega^*$ such that $x \in U(x^*, r_0)$. Applying \eqref{FiniteOpenCover} one obtains that there exists $k \in \{ 1, \ldots, m \}$ such that $x^* \in U(x_k^*, r(x_k^*)/2)$. Hence and from the definition of $r_0$ it follows that $x \in U(x_k^*, r(x_k^*))$, which with the use of \eqref{ExactAtOptSol} and the definition of $\lambda_0$ implies that $F_{\lambda}(x, p) \ge f^*$ for all $p \in U(p_0, r_0)$ and $\lambda \ge \lambda_0$. Thus, one has \begin{equation} \label{ExactNearOptSol}
F_{\lambda}(x, p) \ge f^* \quad \forall (x, p) \in U \times U( p_0, r_0 ) \quad
\forall \lambda \ge \lambda_0. \end{equation} Denote $C = (\Omega \setminus U) \cap B(0, R)$. Clearly, for any $x \in C$ one has $f(x) > f^*$. Applying the lower semicontinuity of the function $f$ on $\Omega$ one gets that for any $x \in C$ there exists $\tau(x) > 0$ such that $f(y) > f^*$ for any $y \in U(x, \tau(x))$. Denote $$
V = \bigcup_{x \in C} \big( U(x, \tau(x)) \cap A \big). $$ Note that $V$ is open in $A$, and \begin{equation} \label{ExactNotInOptSol}
F_{\lambda}(x, p) \ge f(x) > f^* \quad \forall (x, p) \in V \times P \quad \forall \lambda \ge 0, \end{equation} i.e. $F_{\lambda}$ is exact on the set $V$ and $\lambda^*(V) = 0$.
From the facts that the sets $U$ and $V$ are open in $A$, and $A$ is closed it follows that the set $K = ( B(0, R) \cap A ) \setminus (U \cup V)$ is closed in $X$, and, consequently, compact. Furthermore, by the definitions of $U$ and $V$ one has $\Omega \cap B(0, R) \subset U \cup V$, i.e. the sets $K$ and $\Omega$ are disjoint. Therefore for any $x \in K$ one has $\varphi(x, p_0) > 0$. By the assumption of the theorem $\varphi$ is l.s.c. on $A \times \{ p_0 \}$. Hence for any $x \in K$ there exists $s(x) > 0$ such that $$
\varphi(y, p) > \frac{\varphi(x, p_0)}{2} \quad \forall (y, p) \in U(x, s(x)) \times U(p_0, s(x)). $$ Applying the compactness of the set $K$ one gets that there exist $x_1, \ldots, x_l \in K$ such that $$
K \subset \bigcup_{k = 1}^l U(x_k, s(x_k)), $$ which yields $\varphi(x, p) \ge \delta_0$ for all $(x, p) \in K \times U( p_0, s_0 )$, where $$
\delta_0 = \min_{k \in \{ 1, \ldots, l \}} \frac{\varphi(x_k, p_0)}{2}, \qquad
s_0 = \min_{k \in \{ 1, \ldots, l \}} s(x_k). $$ On the other hand, if $p \in P$ is such that $d(p, p_0) \ge s_0$, then $\varphi(x, p) \ge \omega(s_0) > 0$. Denote $\delta = \min\{ \delta_0, \omega(\min\{ r_0, s_0 \}) \} > 0$. Then $\varphi(x, p) \ge \delta$ for all $(x, p) \in K \times P$.
Observe that $$
\Omega_{\delta} \cap ( B(0, R) \times P ) \subseteq (U \cup V) \times U(p_0, r_0). $$ Indeed, if $(x, p) \in \Omega_{\delta} \cap ( B(0, R) \times P )$, then $x \in \big( (A \cap B(0, R)) \setminus K \big) \subseteq U \cup V$, and $d(p, p_0) < r_0$, since otherwise $\varphi(x, p) \ge \omega(r_0) \ge \delta$, which is impossible. Therefore taking into account (\ref{ExactOutsideBall}), (\ref{ExactNearOptSol}) and (\ref{ExactNotInOptSol}) one obtains that $$
F_{\lambda}(x, p) \ge f^* \quad \forall (x, p) \in \Omega_{\delta}
\quad \forall \lambda \ge \max\{ \mu, \lambda_0 \}. $$ Thus, the penalty function $F_{\lambda}$ is exact on the set $\Omega_{\delta}$. \end{proof}
\begin{remark} The theorem above extends Theorem~3.17 from \cite{Dolgopolik} to the case of parametric penalty functions. It should be noted that although the statement of Theorem~3.17 from \cite{Dolgopolik} is correct, its proof is valid only in the case $A = X$ due to a small mistake. The theorem above repairs this gap, since it contains Theorem~3.17 from \cite{Dolgopolik} as a particular case. \end{remark}
Note that by Proposition~\ref{Prp_EquivDefExPen} the boundedness of the set $\{ x \in A \mid \inf_{p \in P} F_{\mu}(x, p) < f^* \}$ for some $\mu \ge 0$ is also \textit{necessary} for the penalty function $F_{\lambda}$ to be exact. Thus, the previous theorem can be reformulated as follows.
\begin{theorem}[Localization Principle] Let $X$ be a finite dimensional normed space, $A$ be a closed set, $f$ be l.s.c. on $\Omega$, $\varphi$ be l.s.c. on $A \times \{ p_0 \}$, and let inequality \eqref{PenTermGen_GrowthInP} hold true. Then the penalty function $F_{\lambda}$ is globally exact if and only if $F_{\lambda}$ is exact at every globally optimal solution of the problem $(\mathcal{P})$, and there exists $\mu \ge 0$ such that the set $\{ x \in A \mid \inf_{p \in P} F_{\mu}(x, p) < f^* \}$ is either bounded or empty, and the function $F_{\mu}$ is bounded below on $A \times P$. \end{theorem}
Arguing in the same way as in the proof of Theorem~\ref{Th_GlobExFiniteDim} one can obtain a complete characterization of the exactness of the penalty function $F_{\lambda}$ on bounded subsets of a finite dimensional space.
\begin{theorem}[Localization Principle] Let $X$ be a finite dimensional normed space, $A$ be a closed set, $f$ be l.s.c. on $\Omega$, $\varphi$ be l.s.c. on $A \times \{ p_0 \}$, and the penalty function $F_{\lambda_0}$ be bounded below on $A \times P$ for some $\lambda_0 \ge 0$. Suppose that there exists a non-decreasing function $\omega \colon \mathbb{R}_+ \to \mathbb{R}_+$ such that $\omega(t) = 0$ iff $t = 0$, and $\varphi(x, p) \ge \omega(d(p, p_0))$ for all $(x, p) \in A \times P$. Then for the penalty function $F_{\lambda}$ to be exact on any bounded subset of the set $A$ it is necessary and sufficient that $F_{\lambda}$ is exact at every globally optimal solution of the problem $(\mathcal{P})$. \end{theorem}
In the case, when $X$ is a finite dimensional normed space, and $P$ is a closed subset of a finite dimensional normed space, one can provide a different characterization of the exactness of a parametric penalty function (cf.~Theorem~3.10 in \cite{Dolgopolik}).
\begin{definition} Let $X$ be a normed space, and $P$ be a subset of a normed space $Y$. The parametric penalty function $F_{\lambda}$ is said to be \textit{non-degenerate} if there exist $\lambda_0 \ge 0$ and $R > 0$ such that for any $\lambda \ge \lambda_0$ the penalty function $F_{\lambda}$ attains a global minimum on the set $A \times P$ and there exists $(x_{\lambda}, p_{\lambda}) \in \argmin_{(x, p) \in A \times P} F_{\lambda}(x, p)$ such that
$\| x_{\lambda} \|_X \le R$ and $\| p_{\lambda} \|_Y \le R$. \end{definition}
Roughly speaking, the non-degeneracy of the penalty function $F_{\lambda}$ means that $F_{\lambda}$ attains a global minimum on $A \times P$ for any sufficiently large $\lambda \ge 0$, and the norm of global minimizers of this function on $A \times P$ cannot increase unboundedly as $\lambda \to +\infty$. In other words, the non-degeneracy condition does not allow points of global minimum of the penalty function $F_{\lambda}$ to escape to infinity as $\lambda \to +\infty$.
\begin{theorem}[Localization Principle] \label{Thrm_ExactnessNessSuffCond_NonDegeneracy} Let $X$ be a finite dimensional normed space, $P$ be a closed subset of a finite dimensional normed space $Y$, and $A$ be closed. Suppose also that $f$ is l.s.c. on $A$, and $\varphi$ is l.s.c. on $A \times P$. Then the parametric penalty function $F_{\lambda}$ is exact if and only if $F_{\lambda}$ is non-degenerate, and exact at every globally optimal solution of the problem $(\mathcal{P})$. \end{theorem}
\begin{proof} Let $F_{\lambda}$ be exact, and let $x^*$ be a globally optimal solution of the problem $(\mathcal{P})$. Then for any $\lambda > \lambda^*(f, \varphi)$ the pair $(x^*, p_0)$ is a point of global minimum of $F_{\lambda}$ on the set $A \times P$. Therefore $F_{\lambda}$ is exact at $x^*$, which implies that $F_{\lambda}$ is exact at every globally optimal solution of the problem $(\mathcal{P})$. Moreover, $F_{\lambda}$ is non-degenerate with
any $\lambda_0 > \lambda^*(f, \varphi)$ and $R = \max\{ \|x^*\|_X, \|p_0\|_Y \}$.
Suppose, now, that $F_{\lambda}$ is non-degenerate and exact at every globally optimal solution of the problem $(\mathcal{P})$. Then there exist $\lambda_0 \ge 0$ and $R > 0$ such that for any $\lambda \ge \lambda_0$ there exists $(x_{\lambda}, p_{\lambda}) \in \argmin_{(x, p) \in A \times P} F_{\lambda}(x, p)$ for which
$\| x_{\lambda} \|_X \le R$ and $\| p_{\lambda} \|_Y \le R$.
Choose an increasing unbounded sequence $\{ \lambda_n \} \subset [\lambda_0, + \infty)$. The corresponding sequence $\{ (x_{\lambda_n}, p_{\lambda_n}) \} \subset A \times P$ is bounded in the finite dimensional normed space $X \times Y$. Therefore, without loss of generality, one can suppose that this sequence converges to some $(x^*, p^*) \in X \times Y$. Recall that the sets $A$ and $P$ are closed. Therefore $(x^*, p^*) \in A \times P$. Let us show that $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$ and $p^* = p_0$.
Indeed, from Theorem~\ref{Thrm_MinimizingSequences} below it follows that $\varphi(x_{\lambda_n}, p_{\lambda_n}) \to 0$ as $n \to \infty$. Hence taking into account the fact that the function $\varphi$ is l.s.c. on $A \times P$ one obtains that $x^* \in \Omega$ and $p^* = p_0$.
From the facts that $(x_{\lambda_n}, p_{\lambda_n})$ is a point of global minimum of the penalty function $F_{\lambda}$ on the set $A \times P$, $F_{\lambda}(x, p_0) = f(x)$ for all $x \in \Omega$, and the function $\varphi$ is nonnegative it follows that $f(x_{\lambda_n}) \le f^*$. Consequently, applying the lower semicontinuity of the function $f$ on the set $A$ and the fact that $x^* \in \Omega$ one gets that $f(x^*) = f^*$. Thus, $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$. Therefore, $F_{\lambda}$ is exact at $x^*$.
Fix an arbitrary $\mu > \lambda^*(x^*)$. Then there exists $r > 0$ such that $$
F_{\mu}(x, p) \ge F_{\mu}(x^*, p_0) = f^* \quad
\forall (x, p) \in \Big( U(x^*, r) \cap A \Big) \times U(p_0, r). $$ Hence and from the fact that the function $F_{\lambda}$ is non-decreasing in $\lambda$ it follows that for any $\lambda \ge \mu$ one has \begin{equation} \label{NonDeg_ExactNearGlobMin}
F_{\lambda}(x, p) \ge F_{\lambda}(x^*, p_0) = f^* \quad
\forall (x, p) \in \Big( U(x^*, r) \cap A \Big) \times U(p_0, r). \end{equation} Applying the facts that $\{ \lambda_n \}$ is an increasing unbounded sequence, and the sequence $\{ (x_{\lambda_n}, p_{\lambda_n}) \}$ converges to the point $(x^*, p_0)$ one obtains that there exists $n_0 \in \mathbb{N}$ such that for any $n \ge n_0$ one has $\lambda_n \ge \mu$ and $(x_{\lambda_n}, p_{\lambda_n}) \in U(x^*, r) \times U(p_0, r)$. Hence and from (\ref{NonDeg_ExactNearGlobMin}) one gets that $$
F_{\lambda_n}(x_{\lambda_n}, p_{\lambda_n}) \ge f^* \quad \forall n \ge n_0. $$ Recall that $(x_{\lambda_n}, p_{\lambda_n})$ is a point of global minimum of the function $F_{\lambda_n}$ on the set $A \times P$. Therefore for any $n \ge n_0$ one has $F_{\lambda_n}(x, p) \ge f^*$ for all $(x, p) \in A \times P$, which, with the use of Proposition~\ref{Prp_EquivDefExPen}, implies that the penalty function $F_{\lambda}$ is exact. \end{proof}
\subsection{Feasibility-preserving parametric penalty functions}
In the previous subsection, we developed a direct approach to the study of exact parametric penalty functions, i.e. we obtained several sufficient conditions for a parametric penalty function to be exact. However, there is an indirect approach to the study of exact penalty functions. In this approach, one is concerned with a useful property of a penalty function that is different from exactness, while the proof of the fact that a penalty function is exact comes as a by-product. This property is the absence of stationary (critical) points of a penalty function not belonging to the set of feasible solutions of the initial problem.
The conditions under which a penalty function does not have any stationary points outside the set of feasible solutions of the original problem are very important for applications, and they have been studied by different researchers (see, e.g., \cite{DiPilloGrippo, DiPilloGrippo2, ExactBarrierFunc, Demyanov, Ye}). It should also be noted that such conditions were the main tool for the study of singular exact penalty functions \cite{HuyerNeumaier,Bingzhuang,WangMaZhou}. Despite all this attention and importance, the property of a penalty function having no infeasible stationary points has never been given a name.
Recall that for any function $g \colon X \to \overline{\mathbb{R}}$ and any nonempty set $K \subset X$ a point $x \in \dom g \cap K$ is called an inf-stationary point of the function $g$ with respect to the set $K$ iff $g^{\downarrow}_K(x) \ge 0$.
\begin{definition} Let $C \subseteq A$ be a nonempty set. The parametric penalty function $F_{\lambda}$ is said to be \textit{feasibility-preserving} on the set $C$ if there exists $\lambda_0 \ge 0$ such that for any $\lambda \ge \lambda_0$ there are no inf-stationary points of the function $F_{\lambda}$ with respect to the set $A \times P$ belonging to the set $(C \times P) \setminus (\Omega \times \{ p_0 \})$. In other words, the penalty function $F_{\lambda}$ is feasibility-preserving on the set $C$ if for any sufficiently large $\lambda \ge 0$ and for any $(x, p) \in (C \times P) \cap \dom F_{\lambda}$ the inequality $(F_{\lambda})^{\downarrow}_{A \times P} (x, p) \ge 0$ implies that $x \in \Omega$ and $p = p_0$. The greatest lower bound of all such $\lambda_0$ is denoted by $\lambda_{fp}(C)$, and is referred to as \textit{the parameter of feasibility preservation}. \end{definition}
\begin{remark} {(i) Note that a penalty function $F_{\lambda}$ might not have any infeasible inf-stationary points with respect to the set $A \times P$ for some $\lambda \ge 0$, and nevertheless be \textit{non}-feasibility-preserving. In particular, it is not difficult to provide an example of a penalty function that does not have any infeasible inf-stationary points for any $\lambda \in [0, \lambda_0)$, and have infeasible inf-stationary points for any $\lambda > \lambda_0$ for some $\lambda_0 < + \infty$ (it is sufficient to choose functions $f$ and $\varphi$ such that $f^{\downarrow}(x) < 0$ and $\varphi^{\downarrow}(x, p) > 0$ for some infeasible $x$ and for all $p \in P$). Such pathological examples are inconsistent with the general theory of (exact) penalty functions, since in general one expects that the greater is the penalty parameter, the better a penalty function approximates the original constrained optimization problem. That is why we excluded such pathological cases from further consideration by requiring that a feasibility-preserving penalty function must not have any infeasible inf-stationary points for any $\lambda$ greater than some $\lambda_0 \ge 0$. }
{\noindent(ii) Let $F_{\lambda}$ be feasibility-preserving on a set $C \subseteq A$. From the definition it follows that for any $\lambda > \lambda_{fp}(C)$ and $x \in C \setminus \Omega$ the inequality $(F_{\lambda})^{\downarrow}_{A \times P}(x, p) < 0$ is satisfied \emph{for all} $p \ne p_0$. In some cases, it can be useful to consider only a subset $K \subset P$, and to require that the inequality $(F_{\lambda})^{\downarrow}_{A \times P}(x, p) < 0$ is satisfied for all $p \in K \setminus \{ p_0 \}$ (moreover, the set $K$ can depend on $x$). In other words, it can be convenient to consider a more general definition of feasibility-preserving parametric penalty function in which the set $C \subseteq A$ is replaced by a set $C \subseteq A \times P$. The interested reader can extend the results below to this more general case. } \end{remark}
Feasibility-preserving penalty functions have a very important advantage over other penalty functions, which makes them more appealing for applications. Namely, any standard minimization algorithm, when applied to a feasibility-preserving penalty function $F_{\lambda}$ with $\lambda > \lambda_{fp}(A)$, converges to a feasible point of the original problem (at least, in theory), since such algorithms, normally, cannot converge to a non-stationary point.
In addition to its value from the computational point of view, the concept of feasibility preservation can be applied to the study of exact penalty functions.
\begin{proposition} \label{Prp_FeasibPreservImpliesExactness} Let a penalty function $F_{\lambda}$ be feasibility-preserving on the set $A$, and suppose that there exists $\lambda_0 \ge 0$ such that for any $\lambda \ge \lambda_0$ the function $F_{\lambda}$ attains a global minimum on the set $A \times P$. Then $F_{\lambda}$ is exact, and $\lambda^*(f, \varphi) \le \max\{ \lambda_{fp}(A), \lambda_0 \}$. \end{proposition}
\begin{proof} Choose $\lambda > \max\{ \lambda_{fp}(A), \lambda_0 \}$, and let $(x^*, p^*)$ be a point of global minimum of $F_{\lambda}$ on the set $A \times P$ that exists due to our assumption. Clearly, the pair $(x^*, p^*)$ is an inf-stationary point of $F_{\lambda}$. Therefore $p^* = p_0$ and $x^* \in \Omega$ by virtue of the fact that $F_{\lambda}$ is feasibility-preserving. Hence, as it is easy to see, $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$ (recall that $F_{\lambda}(x, p_0) = f(x)$ for any $x \in \Omega$) and $\min_{(x, p) \in A \times P} F_{\lambda}(x, p) = F_{\lambda}(x^*, p^*) = f^*$, which implies that $F_{\lambda}$ is exact, and $\lambda^*(f, \varphi) \le \max\{ \lambda_{fp}(A), \lambda_0 \}$ by virtue of Proposition~\ref{Prp_EquivDefExPen}. \end{proof}
It should be noted that the assumption that the penalty function $F_{\lambda}$ attains a global minimum on the set $A \times P$ for any sufficiently large $\lambda$ is indispensable for the validity of the proposition above. Namely, one can construct a parametric penalty function $F_{\lambda}$ that is feasibility-preserving on the set $A$, but is not exact, which implies that it does not attain a global minimum on the set $A \times P$ by virtue of the previous proposition. An example of such a penalty function is taken from \cite{LianZhang}.
\begin{example} Let $X = \mathbb{R}^n$, and the problem $(\mathcal{P})$ have the form \begin{equation} \label{ProblemInExampleOfIncorPenFunc}
\min f(x) \quad \text{subject to} \quad F(x) = 0, \quad x \in [u, v], \end{equation} where the functions $f \colon \mathbb{R}^n \to \mathbb{R}$ and $F \colon \mathbb{R}^n \to \mathbb{R}^m$ are continuously differentiable, $u, v \in \mathbb{R}^n$, and $[u, v] = \{ x \in \mathbb{R}^n \mid u_i \le x_i \le v_i, \: i \in \{1, \ldots, n \} \}$.
Let $P = \mathbb{R}_+$ and $p_0 = 0$. Fix $w \in \mathbb{R}^m$, and for any $p > 0$ denote
$\Delta(x, p) = \| F(x) - p w \|^2$, where $\| \cdot \|$ is the Euclidean norm. Finally, choose $a > 0$, and define \begin{equation} \label{IncorrectPenTerm}
\varphi(x, p) = \begin{cases}
0, & \text{if } p = 0, x \in \Omega, \\
\dfrac{1}{2(p + 1)} \dfrac{\Delta(x, p)}{1 - a \Delta(x, p)} + p, & \text{if } p > 0, \Delta(x, p) < 1/a, \\
+ \infty, & \text{otherwise}.
\end{cases} \end{equation} Note that $\varphi(x, p) = 0$ iff $p = 0$ and $x \in \Omega$.
It was proved in \cite{LianZhang} that under some additional assumptions the parametric penalty function $F_{\lambda}(x, p) = f(x) + \lambda \varphi(x, p)$ with the penalty term (\ref{IncorrectPenTerm}) is feasibility-preserving on the set $A = [u, v]$. However, let us show that this penalty function is not exact.
Indeed, for any $x \in \mathbb{R}^n$ one has $g_{\lambda}(x) = \liminf_{p \to +0} F_{\lambda}(x, p) = f(x) + \lambda \phi(x)$, where $$
\phi(x) = \begin{cases}
\dfrac{1}{2}\dfrac{\| F(x) \|^2}{1 - a \| F(x) \|^2}, & \text{if } \| F(x) \|^2 < 1/a, \\
+ \infty, & \text{if } \| F(x) \|^2 \ge 1/a.
\end{cases} $$
By Proposition~\ref{Prp_ExactLimInfPenFunc}, if the parametric penalty function $F_{\lambda}$ is exact, then the penalty function $g_{\lambda}$ is exact as well. However, note that the penalty term $\phi$ is continuously differentiable for any $x$ such that $\| F(x) \|^2 < 1/a$. Therefore the penalty function $g_{\lambda}$ cannot be exact in the general case.
Indeed, let $n = m = 1$, $f(x) = x$, $F(x) = x - 1$, $u = 0$ and $v = 2$. The only feasible point of this problem is $x^* = 1$. If the penalty function $g_{\lambda}$ is exact, then $x^*$ is a point of global minimum of the function $g_{\lambda}$ on the set $[u, v] = [0, 2]$ for any sufficiently large $\lambda \ge 0$, which implies that $g'_{\lambda}(x^*) = 0$ for any sufficiently large $\lambda$. On the other hand, $g'_{\lambda}(x^*) = f'(x^*) = 1$, since $\phi$ is differentiable at $x^*$ with $\phi'(x^*) = 0$ (note that $F(x^*) = 0$). Consequently, the penalty function $g_{\lambda}$ is not exact, which, with the use of Proposition~\ref{Prp_ExactLimInfPenFunc}, implies that the penalty function $F_{\lambda}$ is not exact either. In spite of this fact, one can show that the penalty function $F_{\lambda}$ is feasibility-preserving (see \cite{LianZhang}). Therefore one concludes that the penalty function $F_{\lambda}$ does not attain a global minimum on the set $A \times P = [u, v] \times \mathbb{R}_+$ for any sufficiently large $\lambda$ by virtue of Proposition~\ref{Prp_FeasibPreservImpliesExactness}. Thus, the claim from \cite{LianZhang} that the penalty function $F_{\lambda}(x, p) = f(x) + \lambda \varphi(x, p)$ with the penalty term $\varphi$ defined by (\ref{IncorrectPenTerm}) is an exact penalty function for the problem~(\ref{ProblemInExampleOfIncorPenFunc}) (see~Theorem~2.2 in \cite{LianZhang}) is not correct. \end{example}
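The non-exactness claim in the example above can be checked numerically. The sketch below is not part of the formal development: it fixes the one-dimensional instance $f(x) = x$, $F(x) = x - 1$, $[u, v] = [0, 2]$ from the example, chooses the constant $a = 1/2$ arbitrarily, and verifies by finite differences that $g'_{\lambda}(x^*) = 1$ at the unique feasible point $x^* = 1$, while $\min_{[0,2]} g_{\lambda} < f^* = 1$ for every tested $\lambda$, so $x^*$ is never a global minimizer of $g_{\lambda}$.

```python
import numpy as np

A_PARAM = 0.5  # an arbitrary choice of the constant a > 0 from the example


def phi(x):
    # phi(x) = (1/2) |F(x)|^2 / (1 - a |F(x)|^2) with F(x) = x - 1;
    # finite on [0, 2], since a * (x - 1)^2 <= 1/2 < 1 there
    t = (x - 1.0) ** 2
    return 0.5 * t / (1.0 - A_PARAM * t)


def g(x, lam):
    # g_lambda(x) = f(x) + lambda * phi(x) with f(x) = x
    return x + lam * phi(x)


xs = np.linspace(0.0, 2.0, 200001)  # a fine grid on [u, v] = [0, 2]
h = 1e-6
derivs, mins = {}, {}
for lam in (1.0, 10.0, 100.0):
    # central finite-difference approximation of g_lambda'(1)
    derivs[lam] = (g(1.0 + h, lam) - g(1.0 - h, lam)) / (2.0 * h)
    mins[lam] = g(xs, lam).min()
    assert abs(derivs[lam] - 1.0) < 1e-4  # g_lambda'(x*) = f'(x*) = 1, independent of lambda
    assert mins[lam] < 1.0                # the infimum always undercuts f* = 1
```

The minimum of $g_{\lambda}$ sits near $x = 1 - 1/\lambda$, where $g_{\lambda} \approx 1 - 1/(2\lambda) < 1$: the smooth penalty term vanishes to second order at $x^*$ and cannot offset the linear decrease of $f$.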
Let us obtain general sufficient conditions for a penalty function to be feasibility-preserving. In order to understand a natural way to formulate these conditions, we, at first, derive simple necessary conditions for a penalty function to be feasibility-preserving.
For any set $C \subset A$ denote $$
(C \times P)_{\inf} = \Big( \big( C \times P \big) \cap \dom \varphi \Big) \setminus
\Big( \Omega \times \{ p_0 \} \Big) $$ (throughout this section we suppose that the penalty term $\varphi$ is fixed). Thus, the set $(C \times P)_{\inf}$ consists of all those points $(x, p) \in (C \times P) \cap \dom \varphi$ that are not ``feasible'' for the problem $(\mathcal{P})$.
\begin{proposition} Let $C \subset A$ be a nonempty set. Suppose that the penalty function $F_{\lambda}$ is feasibility-preserving on $C$ for any function $f$ that is Lipschitz continuous on $X$. Then $$
\varphi^{\downarrow}_{A \times P} (x, p) < 0 \quad
\forall (x, p) \in (C \times P)_{\inf}. $$ \end{proposition}
\begin{proof} Arguing by reductio ad absurdum, suppose that there exists $(y, q) \in (C \times P)_{\inf}$ such that $\varphi^{\downarrow}_{A \times P} (y, q) \ge 0$. Define $f(x) \equiv 0$. Then applying Lemma~\ref{Lemma_SumRule} one obtains that $$
(F_{\lambda})^{\downarrow}_{A \times P} (y, q) \ge f^{\downarrow}_{A \times P} (y, q) +
\lambda \varphi^{\downarrow}_{A \times P} (y, q) \ge 0 + \lambda \cdot 0 = 0, $$ which contradicts the assumption that $F_{\lambda}$ is feasibility-preserving on $C$ for any Lipschitz continuous function $f$. \end{proof}
Under an additional assumption on the penalty function $F_{\lambda}$, that is satisfied, in particular, when $P$ is a one-point set (i.e. when $F_{\lambda}$ is a non-parametric penalty function) or when the penalty term $\varphi$ is Fr\'echet differentiable at all points $(x, p) \in (A \times P)_{\inf}$ (see~Theorem~\ref{Thrm_RSDLowerEstimViaPartialRSD}), one can obtain a stronger necessary condition for a penalty function to be feasibility-preserving.
\begin{theorem} \label{Thrm_NessCondFeasPres} Let $\Omega$ be closed, and $C \subset A$ be a nonempty set. Suppose that the penalty function $F_{\lambda}$ is feasibility-preserving on the set $C$ for any function $f$ that is Lipschitz continuous on $X$, and there exists $\kappa > 0$ such that for any $(x, p) \in (C \times P)_{\inf}$ one has \begin{equation} \label{RSD_SumEstimate}
(F_{\lambda})^{\downarrow}_{A \times P}(x, p) \ge
\kappa \min\big\{ 0, f^{\downarrow}_A(x) + \lambda \varphi(\cdot, p)^{\downarrow}_A (x),
\lambda \varphi(x, \cdot)^{\downarrow}(p) \big\}. \end{equation} Then there exists $a > 0$ such that \begin{equation} \label{NessFPCond}
\varphi(\cdot, p)^{\downarrow}_A (x) \le - a \quad
\forall (x, p) \in \big\{ (y, q) \in (C \times P)_{\inf} \mid \varphi(y, \cdot)^{\downarrow}(q) \ge 0 \big\}. \end{equation} \end{theorem}
\begin{proof} Arguing by reductio ad absurdum, suppose that for any $a > 0$ there exists $(x, p) \in (C \times P)_{\inf}$ such that $\varphi(x, \cdot)^{\downarrow}(p) \ge 0$, and $\varphi(\cdot, p)^{\downarrow}_A (x) > - a$.
Then, in particular, there exists a sequence $\{ (x_n, p_n) \} \subset (C \times P)_{\inf}$ such that $\varphi(x_n, \cdot)^{\downarrow}(p_n) \ge 0$, $\varphi(\cdot, p_n)^{\downarrow}_A (x_n) < 0$ for any $n \in \mathbb{N}$, and $\varphi(\cdot, p_n)^{\downarrow}_A (x_n) \to 0$ as $n \to \infty$. Observe that $x_n$ is a limit point of the set $A$ for any $n \in \mathbb{N}$, since otherwise $\varphi(\cdot, p_n)^{\downarrow}_A (x_n) = + \infty$.
Suppose, at first, that there is only a finite number of different points in the sequence $\{ x_n \}$. Then there exists a point $x_0 \in \{ x_n \}_{n \in \mathbb{N}}$ and a subsequence $\{ p_{n_k} \}$ such that $\varphi(\cdot, p_{n_k})^{\downarrow}_A (x_0) \to 0$ as $k \to \infty$. If $x_0 \in \Omega$, then define $f(x) = d(x, x_0)$. Otherwise, there exists $r > 0$ such that $B(x_0, r) \cap \Omega = \emptyset$ due to the fact that $\Omega$ is closed. Then define $f(x) = \min\{ d(x, x_0) - r, 0 \}$. Note that in both cases $f$ attains a global minimum on the set $\Omega$, and is Lipschitz continuous on $X$.
Applying inequality (\ref{RSD_SumEstimate}), and Lemma~\ref{Lemma_SimpleCases} one obtains that \begin{multline*}
(F_{\lambda})^{\downarrow}_{A \times P} (x_0, p_{n_k}) \ge
\kappa \min\Big\{ 0, f^{\downarrow}_A (x_0) + \lambda \varphi(\cdot, p_{n_k})^{\downarrow}_A (x_0),
\lambda \varphi(x_0, \cdot)^{\downarrow} (p_{n_k}) \Big\} \ge \\
\ge \kappa \min\big\{ 1 + \lambda \varphi(\cdot, p_{n_k})^{\downarrow}_A (x_0), 0 \big\} \end{multline*} for any $\lambda \ge 0$ and $k \in \mathbb{N}$. From the fact that $\varphi(\cdot, p_{n_k})^{\downarrow}_A (x_0) \to 0$ as $k \to \infty$ it follows that for any $\lambda \ge 0$ there exists $k_0 \in \mathbb{N}$ such that for any $k \ge k_0$ one has $(F_{\lambda})^{\downarrow}_{A \times P} (x_0, p_{n_k}) \ge 0$, which contradicts the assumption that $F_{\lambda}$ is feasibility-preserving on $C$ for any Lipschitz continuous function $f$.
Suppose, now, that there is an infinite number of different points in the sequence $\{ x_n \}$. Without loss of generality we can suppose that $x_n \ne x_k$ for all $n, k \in \mathbb{N}$ such that $n \ne k$. Denote $M_1 = \{ x_n \}_{n \in \mathbb{N}} \cap \Omega$ and $M_2 = \{ x_n \}_{n \in \mathbb{N}} \setminus \Omega$. Clearly, one of these sets is infinite.
Let $M_1$ be infinite. Then it can be identified with a subsequence $\{ x_{n_k} \} \subset \Omega$ of the sequence $\{ x_n \}$. For any $k \in \mathbb{N}$ define $r_k = \inf_{m > k} d( x_{n_k}, x_{n_m} )$. If for some $k \in \mathbb{N}$ one has $r_k = 0$, then there exists a subsequence $\{ x_{n_{k_l}} \}$ with $k_l > k$ for any $l \in \mathbb{N}$ converging to the point $x_{n_k}$. Recall that all points in the original sequence $\{ x_n \}$ are distinct. Since a sequence in a metric space cannot converge to more than one point, $\inf_{r > l} d( x_{n_{k_l}}, x_{n_{k_r}} ) > 0$ for all $l \in \mathbb{N}$. Therefore replacing, if necessary, the sequence $\{ x_{n_k} \}$ by the subsequence $\{ x_{n_{k_l}} \}$ one can suppose that $r_k > 0$ for all $k \in \mathbb{N}$.
Define $\theta_1 = r_1$ and $\theta_k = \min\{ \theta_{k - 1}, r_k \}$ for any $k > 1$. Thus, $\{ \theta_k \}$ is a non-increasing sequence such that $\theta_k \le r_k$ for all $k \in \mathbb{N}$. Denote $$
f(x) = \sum_{k = 1}^{\infty} f_k(x), \quad f_k(x) = \min\left\{ d(x, x_{n_k}) - \frac{\theta_k}{3}, 0 \right\}. $$ Observe that the functions $f_k$, $k \in \mathbb{N}$, have disjoint supports. Consequently, the function $f$ is correctly defined. Furthermore, it is easy to verify that $f$ attains a global minimum on $\Omega$ at the point $x_{n_1}$.
Let us show that $f$ is Lipschitz continuous on $X$. Indeed, let $x, y \in X$ be arbitrary. Note that there exist $m, l \in \mathbb{N}$ such that $f(x) = f_m(x)$ and $f(y) = f_l(y)$ by virtue of the fact that the functions $f_k$ have disjoint supports. Hence and from the fact that all functions $f_k$, $k \in \mathbb{N}$, are Lipschitz continuous on $X$ with a Lipschitz constant $L_k \le 1$ it follows that \begin{multline*}
|f(x) - f(y)| = |f_m(x) + f_l(x) - f_m(y) - f_l(y)| \le \\
\le |f_m(x) - f_m(y)| + |f_l(x) - f_l(y)| \le 2 d(x, y) \end{multline*} in the case $m \ne l$, and $$
|f(x) - f(y)| = |f_m(x) - f_m(y)| \le d(x, y) $$ in the case $m = l$. Thus, $f$ is Lipschitz continuous on $X$, which implies that the penalty function $F_{\lambda} = f + \lambda \varphi$ is feasibility-preserving on the set $C$. On the other hand, taking into account the fact that $f(x) = f_k(x) = d(x, x_{n_k}) - \theta_k/3$ in a neighbourhood of the point $x_{n_k}$, and applying inequality (\ref{RSD_SumEstimate}), and Lemma~\ref{Lemma_SimpleCases} one obtains that \begin{multline*}
(F_{\lambda})^{\downarrow}_{A \times P} (x_{n_k}, p_{n_k}) \ge
\kappa \min\Big\{ 0, f^{\downarrow}_A (x_{n_k}) + \lambda \varphi(\cdot, p_{n_k})^{\downarrow}_A (x_{n_k}),
\lambda \varphi(x_{n_k}, \cdot)^{\downarrow} (p_{n_k}) \Big\} \ge \\
\ge \kappa \min\big\{ 1 + \lambda \varphi(\cdot, p_{n_k})^{\downarrow}_A (x_{n_k}), 0 \big\} \end{multline*} for any $\lambda \ge 0$ and $k \in \mathbb{N}$. Recall that by construction one has $\varphi(\cdot, p_n)^{\downarrow}_A (x_n) \to 0$ as $n \to \infty$. Therefore for any $\lambda \ge 0$ there exists $k_0 \in \mathbb{N}$ such that for any $k \ge k_0$ one has $(F_{\lambda})^{\downarrow}_{A \times P} (x_{n_k}, p_{n_k}) \ge 0$, which contradicts the fact that $F_{\lambda}$ is feasibility-preserving on the set $C$.
Suppose, finally, that the set $M_1 = \{ x_n \}_{n \in \mathbb{N}} \cap \Omega$ is finite, but the set $M_2 = \{ x_n \}_{n \in \mathbb{N}} \setminus \Omega$ is infinite. Then one can identify $M_2$ with a subsequence $\{ x_{n_k} \} \subset A \setminus \Omega$ of the sequence $\{ x_n \}$. Repeating, if necessary, the same argument as in the case when the set $M_1$ is infinite, one can suppose that $r_k = \inf_{m > k} d( x_{n_k}, x_{n_m} ) > 0$ for any $k \in \mathbb{N}$. Taking into account the fact that the set $\Omega$ is closed, one gets that for any $k \in \mathbb{N}$ there exists $s_k > 0$ such that $B(x_{n_k}, s_k) \cap \Omega = \emptyset$. Denote $\theta_1 = \min\{ r_1, s_1 \}$ and $\theta_k = \min\{ \theta_{k-1}, r_k, s_k \}$ for any $k > 1$, and define $$
f(x) = \sum_{k = 1}^{\infty} f_k(x), \quad f_k(x) = \min\left\{ d(x, x_{n_k}) - \frac{\theta_k}{3}, 0 \right\}. $$ Observe that the functions $f_k$, $k \in \mathbb{N}$, have disjoint supports that do not intersect with the set $\Omega$. Therefore the function $f$ is correctly defined, and attains a global minimum on the set $\Omega$ (in fact, $f(x) \equiv 0$ on $\Omega$).
Arguing in the same way as in the case when the set $M_1$ is infinite, one can show that the function $f$ is Lipschitz continuous on $X$, and for any $\lambda \ge 0$ there exists $k_0 \in \mathbb{N}$ such that for any $k \ge k_0$ one has $(F_{\lambda})^{\downarrow}_{A \times P} (x_{n_k}, p_{n_k}) \ge 0$, which contradicts the fact that $F_{\lambda}$ is feasibility-preserving on the set $C$. \end{proof}
\begin{remark} Recall that throughout this article we suppose that there exists a globally optimal solution of the problem $(\mathcal{P})$, i.e. we suppose that the function $f$ attains a global minimum on the set $\Omega$, since otherwise the definition of exact penalty function is meaningless. That is why, when we construct a function $f$ in the proof of the theorem above, we must ensure that it attains a global minimum on $\Omega$. \end{remark}
Let us show that condition (\ref{NessFPCond}) is not only necessary but also sufficient for the penalty function $F_{\lambda}$ to be feasibility-preserving.
\begin{proposition} \label{Prp_SuffCondFeasPres} Let $C \subset A$ be a nonempty set, and the function $f$ be Lipschitz continuous on an open set $V$ containing the set $C$. Suppose that there exists $a > 0$ such that $$
\varphi(\cdot, p)^{\downarrow}_A (x) \le - a \quad
\forall (x, p) \in \big\{ (y, q) \in (C \times P)_{\inf} \mid \varphi(y, \cdot)^{\downarrow}(q) \ge 0 \big\}. $$ Then the penalty function $F_{\lambda}$ is feasibility-preserving on the set $C$. Moreover, $\lambda_{fp}(C) \le L / a$, where $L$ is a Lipschitz constant of $f$ on $V$. \end{proposition}
\begin{proof} Let $(x, p) \in (C \times P)_{\inf}$ be arbitrary. If $\varphi(x, \cdot)^{\downarrow}(p) < 0$, then with the use of Lemma~\ref{Lemma_PartialRSD} one gets that for any $\lambda > 0$ the following inequalities hold true $$
(F_{\lambda})^{\downarrow}_{A \times P} (x, p) \le F_{\lambda}(x, \cdot)^{\downarrow}(p) =
\lambda \varphi(x, \cdot)^{\downarrow}(p) < 0. $$ Suppose, now, that $\varphi(x, \cdot)^{\downarrow}(p) \ge 0$. Then by the assumption of the proposition there exists a sequence $\{ x_n \} \subset A$ converging to the point $x$ such that $$
\varphi(\cdot, p)^{\downarrow}_A (x) = \lim_{n \to \infty} \frac{\varphi(x_n, p) - \varphi(x, p)}{d(x_n, x)} \le - a. $$ Since $x \in C$, one can suppose that $\{ x_n \} \subset V$. Hence $f(x_n) - f(x) \le L d(x_n, x)$ for all $n \in \mathbb{N}$. Consequently, for any $\lambda > 0$ one gets \begin{multline*}
(F_{\lambda})^{\downarrow}_{A \times P} (x, p) \le F_{\lambda}(\cdot, p)^{\downarrow}_A(x) \le
\liminf_{n \to \infty} \frac{F_{\lambda}(x_n, p) - F_{\lambda}(x, p)}{d(x_n, x)} \le \\
\le L + \lambda \lim_{n \to \infty} \frac{\varphi(x_n, p) - \varphi(x, p)}{d(x_n, x)} \le
L - \lambda a. \end{multline*} Therefore for any $\lambda > L / a$ one has $(F_{\lambda})^{\downarrow}_{A \times P} (x, p) < 0$, which implies that the penalty function $F_{\lambda}$ is feasibility-preserving on the set $C$ and $\lambda_{fp}(C) \le L / a$.
\end{proof}
\subsection{Zero duality gap}
Sometimes, it might be difficult to verify whether a given parametric penalty function is exact (or feasibility-preserving). In such cases, one can try to characterize a quality of the chosen penalty function via different notions, the most important of which is the \emph{zero duality gap property}. Recall that the zero duality gap property is said to hold true for the penalty function $F_{\lambda}(x, p) = f(x) + \lambda \varphi(x, p)$ if $$
\sup_{\lambda \ge 0} \inf_{(x, p) \in A \times P} F_{\lambda}(x, p) = \inf_{x \in \Omega} f(x). $$ Note that if the penalty function $F_{\lambda}$ is exact, then the zero duality gap property obviously holds for $F_{\lambda}$. However, as is easy to see, the converse is not true.
In this subsection, we obtain necessary and sufficient conditions for the zero duality gap property to hold true. These conditions are similar to the ones for a standard (non-parametric) penalty function (see, e.g., \cite{RubinovYang,RubinovHuangYang}), and are expressed in terms of the lower semicontinuity of the optimal value function of a perturbed optimization problem.
\begin{remark} \label{Rmrk_ZeroDualityGap_ImageSpaceAnalysis} It should be noted that although the fact that the zero duality gap property is equivalent to the lower semicontinuity of the optimal value function (perturbation function) is well-known in constrained optimization, the validity of this statement in the case of parametric penalty functions does not follow from any existing results. For instance, it does not follow from the general results from the image space analysis (see~\cite{ZhuLi} and references therein). In particular, it does not follow from Theorems~3.3 and 3.4 in \cite{ZhuLi} due to the fact that assumptions $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ from \cite{ZhuLi} need not be satisfied for a parametric penalty function. Note that singular penalty functions (see Section~\ref{Section_SingPenFunc}) do not satisfy assumptions $\mathcal{B}$ and $\mathcal{C}$, while smoothing penalty functions (see Section~\ref{Section_SmoothingPenFunc}) do not satisfy assumption $\mathcal{C}$ from \cite{ZhuLi}. \end{remark}
Consider the following perturbation of the problem $(\mathcal{P})$: $$
\min \quad f(x) \quad \text{subject to} \quad \inf_{p \in P} \varphi(x, p) \le \eta, \quad x \in A
\eqno{(\mathcal{P}_{\eta})}, $$ where $\eta \ge 0$ is a perturbation parameter. For any $\eta \ge 0$ denote $$
\Omega(\eta) = \big\{ x \in A \mid \inf_{p \in P} \varphi(x, p) \le \eta \big\}, \quad
\beta(\eta) = \inf_{x \in \Omega(\eta)} f(x). $$ Thus, $\Omega(\eta)$ is the set of feasible points of the problem ($\mathcal{P}_{\eta}$), while $\beta(\eta)$ is the \emph{perturbation function} (or the \textit{optimal value function}) of this problem. Note that $\beta(0) = f^* := \min_{x \in \Omega} f(x)$, and $\beta \colon \mathbb{R}_+ \to \overline{\mathbb{R}}$ is a non-increasing function.
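The perturbation function is easiest to grasp on a concrete instance. The following is a minimal sketch (an assumed toy example, not taken from the text): $f(x) = x$ on $A = \mathbb{R}$ with $\inf_{p \in P} \varphi(x, p) = \max\{0, 1 - x\}$, so that $\Omega(\eta) = [1 - \eta, +\infty)$ and $\beta(\eta) = 1 - \eta$.

```python
# Toy illustration (assumed example, not from the text) of the perturbation
# function beta: f(x) = x on A = R with inf_p varphi(x, p) = max(0, 1 - x).
# Then Omega(eta) = [1 - eta, +infty) and beta(eta) = 1 - eta.

def constraint_infimum(x):
    """inf over p of the penalty term varphi(x, p) for this toy problem."""
    return max(0.0, 1.0 - x)

def beta(eta):
    """Optimal value of the perturbed problem (P_eta)."""
    # the feasible set is the ray [1 - eta, +infty); f(x) = x attains
    # its infimum at the left endpoint
    return 1.0 - eta
```

Here $\beta(0) = f^* = 1$, and $\beta$ is continuous (hence lower semicontinuous) at the origin, so this instance satisfies the hypotheses appearing in the results below.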
\begin{theorem} \label{Thrm_DualityGap} Let the penalty function $F_{\lambda}$ be bounded below on $A \times P$ for some $\lambda \ge 0$. Then the following equality holds true $$
\sup_{\lambda \ge 0} \inf_{(x, p) \in A \times P} F_{\lambda}(x, p) =
\lim_{\eta \to +0} \beta(\eta). $$ \end{theorem}
\begin{proof} Denote \begin{equation} \label{InfOfPenFunc}
h(\lambda) = \inf_{(x, p) \in A \times P} F_{\lambda}(x, p) \quad \forall \lambda \ge 0. \end{equation} Observe that $h(\lambda) \le f^*$ for all $\lambda \ge 0$ due to the fact that $F_{\lambda}(x, p_0) = f(x)$ for any $x \in \Omega$.
Choose an arbitrary $\eta > 0$. From Lemma~\ref{Lmm_Reduction} it follows that there exists $\lambda_0 \ge 0$ such that $$
F_{\lambda}(x, p) \ge f^* \quad \forall (x, p) \notin \Omega_{\eta} \quad \forall \lambda \ge \lambda_0. $$ Therefore for all $\lambda \ge \lambda_0$ and for any $\varepsilon > 0$ there exists $(x, p) \in \Omega_{\eta}$ such that $$
f^* \ge h(\lambda) \ge F_{\lambda}(x, p) - \varepsilon \ge f(x) - \varepsilon $$ (if $h(\lambda) = f^*$, then one can choose $(x, p) = (x^*, p_0) \in \Omega_{\eta}$, where $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$). Note that $\varphi(x, p) < \eta$, since $(x, p) \in \Omega_{\eta}$, which implies that $x$ belongs to the set of feasible points $\Omega(\eta)$ of the problem ($\mathcal{P}_{\eta}$). Consequently, $f(x) \ge \beta(\eta)$, which yields $$
h(\lambda) \ge f(x) - \varepsilon \ge \beta(\eta) - \varepsilon \quad \forall \lambda \ge \lambda_0. $$ Taking the supremum over all $\lambda \ge 0$, and taking into account the fact that $\varepsilon > 0$ is arbitrary one obtains that $$
\sup_{\lambda \ge 0} h(\lambda) \ge \beta(\eta), $$ which yields \begin{equation} \label{LimValPenFunc_LowerEstim}
\sup_{\lambda \ge 0} h(\lambda) \ge \limsup_{\eta \to +0} \beta(\eta) \end{equation} due to the fact that $\eta > 0$ was chosen arbitrarily.
To prove the reverse inequality, choose a decreasing sequence $\{ \eta_k \} \subset (0, +\infty)$ such that $\eta_k \to 0$ as $k \to \infty$, and denote $\lambda_k = 1 / \sqrt{\eta_k}$. By the definition of the perturbation function $\beta$, for any $k \in \mathbb{N}$ there exists $x_k \in A$ such that $$
f(x_k) \le \beta(\eta_k) + \sqrt{\eta_k}, \quad \inf_{p \in P} \varphi(x_k, p) \le \eta_k. $$ Clearly, for any $k \in \mathbb{N}$ there exists $p_k \in P$ such that $\varphi(x_k, p_k) \le 2 \eta_k$. Consequently, one has $$
h(\lambda_k) \le F_{\lambda_k}(x_k, p_k) = f(x_k) + \lambda_k \varphi(x_k, p_k) \le
\beta(\eta_k) + \sqrt{\eta_k} + 2 \lambda_k \eta_k = \beta(\eta_k) + 3 \sqrt{\eta_k}. $$ Passing to the limit inferior as $k \to \infty$ one gets $$
\liminf_{k \to \infty} h(\lambda_k) \le \liminf_{k \to \infty} \beta(\eta_k). $$ Note that the function $h(\lambda)$ is non-decreasing due to the fact that the penalty function $F_{\lambda}$ is non-decreasing in $\lambda$ (see~(\ref{InfOfPenFunc})). Hence $\sup_{\lambda \ge 0} h(\lambda) = \lim_{\lambda \to \infty} h(\lambda)$, which implies that $$
\sup_{\lambda \ge 0} h(\lambda) = \lim_{k \to \infty} h(\lambda_k) \le \liminf_{k \to \infty} \beta(\eta_k), $$ due to the fact that $\lambda_k = 1 / \sqrt{\eta_k}$ and $\eta_k \to +0$ as $k \to \infty$. Taking into account the fact that the sequence $\{ \eta_k \}$ was chosen arbitrarily one obtains that \begin{equation} \label{LimValPenFunc_UpperEstim}
\sup_{\lambda \ge 0} h(\lambda) \le \liminf_{\eta \to +0} \beta(\eta). \end{equation} Combining (\ref{LimValPenFunc_LowerEstim}) and (\ref{LimValPenFunc_UpperEstim}) one obtains the desired result. \end{proof}
As a simple corollary to the theorem above we obtain a characterization of the zero duality gap property in terms of the perturbation function $\beta$. In order to formulate this result note that if the zero duality gap property holds for $F_{\lambda}$, then $F_{\lambda}$ is bounded below on $A \times P$ for some $\lambda \ge 0$ by the fact that $f^* > - \infty$.
\begin{theorem} \label{Thrm_ZeroDualityGapCharacterization} The zero duality gap property holds true for the penalty function $F_{\lambda}$ if and only if $F_{\lambda}$ is bounded below on $A \times P$ for some $\lambda \ge 0$, and the perturbation function $\beta$ is lower semicontinuous at the origin. \end{theorem}
\begin{proof} As it was mentioned above, $\beta(0) = f^* > -\infty$. Hence and from Theorem~\ref{Thrm_DualityGap} it follows that the zero duality gap property holds true for $F_{\lambda}$ iff $F_{\lambda}$ is bounded below on $A \times P$ for some $\lambda \ge 0$ and $\lim_{\eta \to +0} \beta(\eta) = \beta(0)$, i.e. iff $F_{\lambda}$ is bounded below on $A \times P$ for some $\lambda \ge 0$, and $\beta$ is continuous at the origin. It remains to note that $\beta$ is continuous at the origin iff it is lower semicontinuous at the origin due to the fact that $\beta$ is a non-increasing function. \end{proof}
It is interesting to note that necessary and sufficient conditions for the penalty function $F_{\lambda}$ to be exact can also be expressed in terms of the behaviour of the perturbation function $\beta$.
\begin{theorem} The parametric penalty function $F_{\lambda}$ is exact if and only if $F_{\lambda}$ is bounded below on $A \times P$ for some $\lambda \ge 0$, and the perturbation function $\beta$ is calm from below at the origin, i.e. $$
\liminf_{\eta \to +0} \frac{\beta(\eta) - \beta(0)}{\eta} > - \infty. $$ \end{theorem}
\begin{proof} The validity of the theorem follows directly from Corollary~\ref{Crlr_ReductionToStandExPen}, and \cite{Dolgopolik}, Theorem~3.24. \end{proof}
Various sufficient conditions for a perturbation function of a constrained optimization problem to be lower semicontinuous at the origin are well-known in parametric optimization (see, e.g., \cite{RubinovYang}, Section~3.1.6). Below, we provide a modification of one of these conditions that takes into account the particular structure of the perturbed problem ($\mathcal{P}_{\eta}$).
\begin{proposition} \label{Prp_ZeroDualityGapSuffCond} Let $A$ be closed, $f$ be l.s.c. on $\Omega$ and $\varphi$ be l.s.c. on $A \times \{ p_0 \}$. Suppose that there exists $\eta_0 > 0$ such that the set $\{ x \in \Omega(\eta_0) \mid f(x) < f^* \}$ is relatively compact. Suppose also that there exists a non-decreasing function $\omega \colon \mathbb{R}_+ \to \mathbb{R}_+$ such that $\omega(t) = 0$ iff $t = 0$ and \begin{equation} \label{PertFunc_LowerEstimPenTerm}
\varphi(x, p) \ge \omega(d(p, p_0)) \quad \forall p \in P. \end{equation} Then the perturbation function $\beta(\eta)$ is l.s.c. at the origin. \end{proposition}
\begin{proof} Fix an arbitrary $\varepsilon > 0$. Our aim is to find $\overline{\eta} > 0$ such that $\beta(\eta) \ge \beta(0) - \varepsilon$ for all $\eta \in (0, \overline{\eta})$.
By definition, for any $x \in \Omega$ one has $f(x) \ge f^*$. Applying the lower semicontinuity of the function $f$ on the set $\Omega$ one obtains that for any $x \in \Omega$ there exists a neighbourhood $U(x)$ of $x$ such that $f(y) \ge f^* - \varepsilon$ for all $y \in U(x)$. Denote $$
V = \bigcup_{x \in \Omega} U(x), \quad C = \cl\big\{ x \in \Omega(\eta_0) \mid f(x) < f^* \big\} $$ and $K = C \setminus V$, where ``cl'' stands for the closure operator. By the assumption of the proposition the set $C$ is compact, while the set $V$ is open by construction. Consequently, the set $K$ is compact. Note also that $\Omega \subset V$, and $C \subset A$ due to the fact that $A$ is closed, which implies that $K \subset A$ and $K \cap \Omega = \emptyset$.
The function $\varphi(\cdot, p_0)$ is l.s.c. on the set $A$, since $\varphi$ is l.s.c. on the set $A \times \{ p_0 \}$. Hence and from the facts that the set $K$ is compact, and $\varphi(x, p_0) > 0$ for any $x \notin \Omega$ one obtains that there exists $\delta > 0$ such that $\varphi(x, p_0) \ge \delta$ for all $x \in K$. Applying the lower semicontinuity of $\varphi$ on $A \times \{ p_0 \}$, and the compactness of the set $K$ again, one can easily verify that there exists $r > 0$ such that $$
\varphi(x, p) \ge \frac{\delta}{2} \quad \forall (x, p) \in K \times B(p_0, r). $$ On the other hand, if $x \in K$, but $p \notin B(p_0, r)$, then taking into account (\ref{PertFunc_LowerEstimPenTerm}) one gets $\varphi(x, p) \ge \omega(r) > 0$, which yields \begin{equation} \label{LowerEstimOfInfPenTerm}
\inf_{p \in P} \varphi(x, p) \ge \min\left\{ \frac{\delta}{2}, \omega(r) \right\} \quad \forall x \in K. \end{equation} Denote $\overline{\eta} = \min\{ \eta_0, \delta / 2, \omega(r) \}$, and choose arbitrary $\eta < \overline{\eta}$ and $x \in \Omega(\eta)$. Observe that $x \in \Omega(\eta_0)$ due to the fact that $\eta < \overline{\eta} \le \eta_0$. Furthermore, if $f(x) < f^*$, then $x \in C \cap V$, since otherwise $x \in K$, which is impossible due to (\ref{LowerEstimOfInfPenTerm}) and the choice of $\overline{\eta}$. Hence and from the definition of the set $V$ it follows that $$
f(x) \ge f^* - \varepsilon \quad \forall x \in \Omega(\eta) \quad \forall \eta \in (0, \overline{\eta}). $$ Taking the infimum over all $x \in \Omega(\eta)$ one gets $$
\beta(\eta) = \inf_{x \in \Omega(\eta)} f(x) \ge f^* - \varepsilon = \beta(0) - \varepsilon \quad
\forall \eta \in (0, \overline{\eta}). $$ Thus, $\beta$ is l.s.c. at the origin. \end{proof}
\begin{remark} Note that if $X$ is a finite dimensional normed space, then, as is well known, the set
$\{ x \in \Omega(\eta) \mid f(x) < f^* \}$ is relatively compact iff it is bounded. In particular, in this case it is sufficient to suppose that either the set $\Omega(\eta)$ is bounded for some $\eta > 0$ or the function $f$ is coercive, i.e. $f(x) \to + \infty$ as $\| x \| \to \infty$. \end{remark}
\subsection{Minimizing sequences}
Let us discuss a computational aspect of the parametric penalty functions method. In practice, penalty functions can be applied in the following way. One chooses an initial value $\lambda_1$ of the penalty parameter, and then finds an unconstrained minimum of the penalty function. Then one increases the value of the penalty parameter, and minimizes the penalty function again using the point obtained on the first step as the initial guess. This process is repeated until a ``good'' approximation of a solution of the original problem is found. From the theoretical point of view, one can look at this process as a procedure that generates a sequence $\{ (x_n, p_n) \}$ such that \begin{equation} \label{MinimizingSequenceDef}
F_{\lambda_n}(x_n, p_n) \le \inf_{(x, p) \in A \times P} F_{\lambda_n}(x, p) + \varepsilon, \end{equation} where $\lambda_n \to + \infty$, and $\varepsilon > 0$ is a given tolerance. Below, we study the important question of how the sequence $\{ (x_n, p_n) \}$ behaves as $n \to \infty$.
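The iterative scheme just described can be sketched in a few lines. The following is a hedged illustration on an assumed toy problem (minimize $f(x) = x$ subject to $x \ge 1$, penalized via $\varphi(x) = \max\{0, 1 - x\}$, with the parameter $p$ suppressed for simplicity); the crude grid search stands in for an arbitrary unconstrained solver.

```python
def argmin_grid(F, lo=-5.0, hi=5.0, steps=10001):
    """Brute-force stand-in for an unconstrained minimization solver."""
    best_x, best_v = lo, F(lo)
    for i in range(1, steps):
        x = lo + (hi - lo) * i / (steps - 1)
        v = F(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

def f(x):
    return x

def phi(x):
    # constraint violation measure for the toy constraint x >= 1
    return max(0.0, 1.0 - x)

lam, x = 2.0, 0.0
for _ in range(6):
    # minimize the penalty function for the current penalty parameter,
    # then increase the parameter and repeat
    x = argmin_grid(lambda y: f(y) + lam * phi(y))
    lam *= 10.0
# x now approximates the constrained minimizer x* = 1
```

In this toy instance the penalty function is exact for $\lambda > 1$, so the loop stabilizes immediately; in general one only obtains a minimizing sequence of the kind studied below.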
It should be noted that although there are many general results on convergence of penalty/barrier methods (see, e.g., \cite{FiaccoMcCormic,AuslenderCominettiHaddou,BenTal,Auslender}), none of them can be applied in the case of parametric penalty functions. Nevertheless, the proofs of the theorems below are very similar to the proof of convergence of penalty/barrier methods.
Let an increasing unbounded sequence $\{ \lambda_n \} \subset (0, + \infty)$ be fixed. The theorem below sharpens and significantly generalizes Theorem~5.2 and Corollary~5.1 from \cite{WangMaZhou}.
\begin{theorem} \label{Thrm_MinimizingSequences} Let a sequence $\{ (x_n, p_n) \} \subset A \times P$ satisfy the inequality $$
F_{\lambda_n}(x_n, p_n) \le \inf_{(x, p) \in A \times P} F_{\lambda_n}(x, p) + \varepsilon \quad
\forall n \in \mathbb{N} $$ for some $\varepsilon > 0$. Suppose also that the function $F_{\lambda}$ is bounded below on $A \times P$ for some $\lambda \ge 0$. Then the following statements hold true: \begin{enumerate} \item{$\varphi(x_n, p_n) \to 0$ as $n \to \infty$; \label{Item_PhiConv} }
\item{if there exists a non-decreasing function $\omega \colon \mathbb{R}_+ \to \mathbb{R}_+$ such that $\omega(t) = 0$ iff $t = 0$ and $\varphi(x, p) \ge \omega(d(p, p_0))$ for all $p \in P$, then $p_n \to p_0$ as $n \to \infty$; }
\item{if $A$ is closed, and $\varphi$ is l.s.c. on $A \times P$, then a cluster point of the sequence $\{ (x_n, p_n) \}$ (if exists) belongs to the set $\Omega \times \{ p_0 \}$; }
\item{the following inequalities hold true $$
\lim_{\eta \to +0} \beta(\eta) \le \liminf_{n \to \infty} f(x_n) \le
\limsup_{n \to \infty} f(x_n) \le \lim_{\eta \to +0} \beta(\eta) + \varepsilon, $$ where $\beta$ is the perturbation function of the problem ($\mathcal{P}_{\eta}$); \label{Item_FConv} }
\item{if the zero duality gap property holds true, then $f^* \le \liminf_{n \to \infty} f(x_n) \le \limsup_{n \to \infty} f(x_n) \le f^* + \varepsilon$. \label{Item_FConv_ZDG} } \end{enumerate} \end{theorem}
\begin{proof} \ref{Item_PhiConv}. Fix an arbitrary $\sigma > 0$. By the assumption of the theorem there exists $\mu \ge 0$ such that $c := \inf_{(x, p) \in A \times P} F_{\mu}(x, p) > - \infty$. Consequently, for any $(x, p) \in A \times P$ such that $\varphi(x, p) \ge \sigma$ one has $$
F_{\lambda_n}(x, p) = F_{\mu}(x, p) + (\lambda_n - \mu) \varphi(x, p) \ge c + (\lambda_n - \mu) \sigma. $$ Applying the fact that the sequence $\{ \lambda_n \}$ is increasing and unbounded one obtains that for any sufficiently large $n \in \mathbb{N}$ and for any $(x, p)$ such that $\varphi(x, p) \ge \sigma$ the following inequalities hold true $$
\lambda_n > \mu + \frac{f^* + \varepsilon - c}{\sigma}, \quad
F_{\lambda_n}(x, p) \ge c + (\lambda_n - \mu) \sigma > f^* + \varepsilon. $$ On the other hand, taking into account the fact that $F_{\lambda}(x, p_0) = f(x)$ for all $x \in \Omega$ and $\lambda \ge 0$ one gets that for any $n \in \mathbb{N}$ $$
F_{\lambda_n}(x_n, p_n) \le \inf_{(x, p) \in A \times P} F_{\lambda_n}(x, p) + \varepsilon \le
\inf_{x \in \Omega} F_{\lambda_n}(x, p_0) + \varepsilon = f^* + \varepsilon. $$ Therefore for any sufficiently large $n \in \mathbb{N}$ one has $\varphi(x_n, p_n) < \sigma$, which implies that $\varphi(x_n, p_n) \to 0$ as $n \to \infty$ due to the fact that $\sigma > 0$ was chosen arbitrarily.
Note that the second and the third assertions of the theorem are direct corollaries to the first one.
\ref{Item_FConv}. Choose an arbitrary $\eta > 0$. From the first part of the proof it follows that there exists $n_0 \in \mathbb{N}$ such that $\varphi(x_n, p_n) < \eta$ for all $n \ge n_0$. Hence $x_n \in \Omega(\eta)$ for any $n \ge n_0$, where $\Omega(\eta)$ is the set of feasible points of the problem ($\mathcal{P}_{\eta}$). Consequently, one has $f(x_n) \ge \beta(\eta)$ for any $n \ge n_0$, which yields $$
\liminf_{n \to \infty} f(x_n) \ge \lim_{\eta \to +0} \beta(\eta). $$ On the other hand, applying Theorem~\ref{Thrm_DualityGap}, and taking into account the facts that the penalty function $F_{\lambda}$ is non-decreasing in $\lambda$, and $f(x) \le F_{\lambda}(x, p)$ for any $(x, p) \in A \times P$ and $\lambda \ge 0$ one obtains \begin{multline*}
\limsup_{n \to \infty} f(x_n) \le \limsup_{n \to \infty} F_{\lambda_n}(x_n, p_n) \le
\limsup_{n \to \infty} \inf_{(x, p) \in A \times P} F_{\lambda_n}(x, p) + \varepsilon = \\
= \sup_{\lambda \ge 0} \inf_{(x, p) \in A \times P} F_{\lambda}(x, p) + \varepsilon =
\lim_{\eta \to +0} \beta(\eta) + \varepsilon, \end{multline*} which completes the proof of the fourth assertion of the theorem.
The validity of the fifth assertion of the theorem follows directly from the fourth assertion, Theorem~\ref{Thrm_DualityGap} and the definition of the zero duality gap property. \end{proof}
\begin{corollary} \label{Crlr_MinimizingSequences} Let a sequence $\{ (x_n, p_n) \} \subset A \times P$ satisfy the inequality $$
F_{\lambda_n}(x_n, p_n) \le \inf_{(x, p) \in A \times P} F_{\lambda_n}(x, p) + \varepsilon_n, \quad
\forall n \in \mathbb{N} $$ for some sequence $\{ \varepsilon_n \} \subset (0, + \infty)$ such that $\varepsilon_n \to 0$ as $n \to \infty$. Suppose also that the zero duality gap property holds true. Then $\varphi(x_n, p_n) \to 0$ and $f(x_n) \to f^*$ as $n \to \infty$. Moreover, if $A$ is closed, $f$ is l.s.c. on $A$, and $\varphi$ is l.s.c. on $A \times P$, then any cluster point of the sequence $\{ (x_n, p_n) \}$ (if exists) has the form $(x^*, p_0)$, where $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$. \end{corollary}
Note that the assumptions on the penalty function $F_{\lambda}$ in the corollary above are satisfied when the penalty function $F_{\lambda}$ is exact. However, in the case when $F_{\lambda}$ is exact, there is no need to choose an unbounded sequence $\{ \lambda_n \}$, since it is sufficient to choose any $\lambda > \lambda^*(f, \varphi)$.
\begin{proposition} \label{Prp_MinimizingSeq_ExactPenFunc} Let the penalty function $F_{\lambda}$ be exact, and let a sequence $\{ (x_n, p_n) \} \subset A \times P$ be such that $$
\lim_{n \to \infty} F_{\lambda_0}(x_n, p_n) = \inf_{(x, p) \in A \times P} F_{\lambda_0} (x, p) $$ for some $\lambda_0 > \lambda^*(f, \varphi)$. Then $\varphi(x_n, p_n) \to 0$ and $f(x_n) \to f^*$ as $n \to \infty$. \end{proposition}
\begin{proof} Choose an arbitrary $\varepsilon > 0$. By the definition of the sequence $\{ (x_n, p_n) \}$ there exists $n_0 \in \mathbb{N}$ such that \begin{equation} \label{ExPenFunc_MinimizSeq}
F_{\lambda_0}(x_n, p_n) < \inf_{(x, p) \in A \times P} F_{\lambda_0} (x, p) +
\varepsilon ( \lambda_0 - \lambda^*(f, \varphi) ) \quad \forall n \ge n_0. \end{equation} The exactness of the penalty function $F_{\lambda}$ implies that $$
\inf_{(x, p) \in A \times P} F_{\lambda_0} (x, p) =
\inf_{(x, p) \in A \times P} F_{\lambda^*(f, \varphi)} (x, p) = f^*. $$ Consequently, for any $(x, p) \in A \times P$ such that $\varphi(x, p) \ge \varepsilon$ one has $$
F_{\lambda_0}(x, p) = F_{\lambda^*(f, \varphi)}(x, p) + (\lambda_0 - \lambda^*(f, \varphi)) \varphi(x, p) \ge
f^* + (\lambda_0 - \lambda^*(f, \varphi)) \varepsilon. $$ Hence and from (\ref{ExPenFunc_MinimizSeq}) it follows that $\varphi(x_n, p_n) < \varepsilon$ for any $n \ge n_0$, which implies that $\varphi(x_n, p_n) \to 0$ as $n \to \infty$.
Choose arbitrary $\eta > 0$. From the first part of the proof it follows that there exists $n_0 \in \mathbb{N}$ such that for any $n \ge n_0$ one has $\varphi(x_n, p_n) < \eta$. Consequently, $x_n \in \Omega(\eta)$ for any $n \ge n_0$, which implies that $f(x_n) \ge \beta(\eta)$ for all $n \ge n_0$. Therefore $$
\liminf_{n \to \infty} f(x_n) \ge \lim_{\eta \to +0} \beta(\eta) = f^*, $$ where the last equality follows from Theorem~\ref{Thrm_DualityGap}, and the fact that the zero duality gap property holds true due to the exactness of the penalty function $F_{\lambda}$. On the other hand, taking into account the fact that $f(x) \le F_{\lambda}(x, p)$ for any $\lambda \ge 0$ and $(x, p) \in A \times P$ one obtains that $$
\limsup_{n \to \infty} f(x_n) \le \lim_{n \to \infty} F_{\lambda_0}(x_n, p_n) =
\inf_{(x, p) \in A \times P} F_{\lambda_0}(x, p) = f^*, $$ which implies the desired result. \end{proof}
Let us also obtain an interesting property of cluster points of a minimizing sequence constructed with the use of a parametric penalty function (i.e. a sequence satisfying (\ref{MinimizingSequenceDef})) in the case when this penalty function is not exact.
\begin{proposition} \label{Prp_ConvergenceToNonExactPoint} Let $A$ be closed, $f$ be l.s.c. on $A$ and $\varphi$ be l.s.c. on $A \times P$. Suppose also that the zero duality gap property holds true for the penalty function $F_{\lambda}$, and $F_{\lambda}$ is not exact. Let a sequence $\{ (x_n, p_n) \} \subset A \times P$ satisfy the inequality \begin{equation} \label{NonExactPenFunc_SubOptSol}
F_{\lambda_n}(x_n, p_n) < f^* \quad \forall n \in \mathbb{N}. \end{equation} Then any cluster point of the sequence $\{ (x_n, p_n) \}$ (if exists) has the form $(x^*, p_0)$, where $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$ such that the penalty function $F_{\lambda}$ is not exact at $x^*$. \end{proposition}
\begin{proof} Note that $\inf_{(x, p) \in A \times P} F_{\lambda_n} (x, p) < f^*$ for all $n \in \mathbb{N}$ by virtue of Proposition~\ref{Prp_EquivDefExPen}, and the fact that $F_{\lambda}$ is not exact. Therefore there exists a sequence $\{ (x_n, p_n) \}$ satisfying (\ref{NonExactPenFunc_SubOptSol}).
Denote $\varepsilon_n = f^* - \inf_{(x, p) \in A \times P} F_{\lambda_n} (x, p) > 0$ for any $n \in \mathbb{N}$. Then $$
F_{\lambda_n}(x_n, p_n) \le \inf_{(x, p) \in A \times P} F_{\lambda_n} (x, p) + \varepsilon_n \quad
\forall n \in \mathbb{N}. $$ Taking into account the fact that the zero duality gap property holds true one obtains that $\varepsilon_n \to 0$ as $n \to \infty$. Hence and from Corollary~\ref{Crlr_MinimizingSequences} one gets that any cluster point of the sequence $\{ (x_n, p_n) \}$ (if exists) has the form $(x^*, p_0)$, where $x^*$ is a globally optimal solution of the problem $(\mathcal{P})$.
Let $(x^*, p_0)$ be a cluster point of the sequence $\{ (x_n, p_n) \}$. Let us show that the penalty function $F_{\lambda}$ is not exact at the point $x^*$. Arguing by reductio ad absurdum, suppose that $F_{\lambda}$ is exact at $x^*$, and fix an arbitrary $\lambda_0 > \lambda^*(x^*)$. Then there exists $r > 0$ such that $$
F_{\lambda_0}(x, p) \ge F_{\lambda_0}(x^*, p_0) = f(x^*) = f^* \quad
\forall (x, p) \in \big( U(x^*, r) \cap A \big) \times U(p_0, r). $$ Consequently, applying the facts that $\lambda_n \to \infty$ as $n \to \infty$, and $F_{\lambda}$ is non-decreasing with respect to $\lambda$ one obtains that there exists $n_0 \in \mathbb{N}$ such that for any $n \ge n_0$ one has $\lambda_n \ge \lambda_0$ and $$
F_{\lambda_n}(x, p) \ge F_{\lambda_n}(x^*, p_0) = f^* \quad
\forall (x, p) \in \big( U(x^*, r) \cap A \big) \times U(p_0, r). $$ Therefore taking into account the definition of the sequence $\{ (x_n, p_n) \}$ one gets that $(x_n, p_n) \notin U(x^*, r) \times U(p_0, r)$ for any $n \ge n_0$, which contradicts the fact that $(x^*, p_0)$ is a cluster point of the sequence $\{ (x_n, p_n) \}$. Thus, $F_{\lambda}$ is not exact at the point $x^*$. \end{proof}
Let us consider a simple application of the proposition above.
\begin{example} Let $g_{\lambda}$ be the $\ell_1$ penalty function for the nonlinear programming problem \begin{equation} \label{MathProgProblem_l1PenFunc}
\min f(x) \quad \text{subject to} \quad a_i(x) = 0, \quad i \in I, \quad
b_j(x) \le 0, \quad j \in J, \end{equation} where $I = \{ 1, \ldots, m \}$ and $J = \{1, \ldots, l\}$, i.e. let $$
g_{\lambda}(x) = f(x) + \lambda \Big( \sum_{i = 1}^m |a_i(x)| + \sum_{j = 1}^l \max\{ 0, b_j(x) \} \Big). $$ Here $f, a_i, b_j \colon \mathbb{R}^d \to \mathbb{R}$ are continuously differentiable. Suppose that there exists $\lambda_0 \ge 0$ such that the function $g_{\lambda_0}$ is coercive.
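To make the construction concrete, here is a hedged sketch on an assumed toy instance of problem \eqref{MathProgProblem_l1PenFunc}: minimize $f(x_1, x_2) = x_1^2 + x_2^2$ subject to the single equality constraint $x_1 + x_2 - 1 = 0$, whose solution is $(1/2, 1/2)$ with a Lagrange multiplier of absolute value $1$, so that $g_{\lambda}$ is exact for every $\lambda > 1$.

```python
def g(lam, x1, x2):
    """l1 penalty function for the toy problem
    min x1^2 + x2^2 subject to x1 + x2 - 1 = 0."""
    return x1 * x1 + x2 * x2 + lam * abs(x1 + x2 - 1.0)

# For lam = 2 > 1 the penalty function is exact: its unconstrained minimum
# coincides with the constrained solution (0.5, 0.5). A crude grid search
# over [-1, 2] x [-1, 2] recovers it.
pts = [-1.0 + 0.01 * i for i in range(301)]
best_v, best_x1, best_x2 = min(
    (g(2.0, u, v), u, v) for u in pts for v in pts
)
```

If instead one takes $0 \le \lambda < 1$, the unconstrained minimum of $g_{\lambda}$ drifts off the feasible set (for $\lambda = 1/2$ it sits at $(1/4, 1/4)$), which illustrates the role of the exactness threshold.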
As is well known (see, e.g., \cite{HanMangasarian}), if the Mangasarian-Fromovitz constraint qualification (MFCQ) holds true at a locally optimal solution $x^*$ of the problem \eqref{MathProgProblem_l1PenFunc}, then the penalty function $g_{\lambda}$ is exact at $x^*$. Hence and from Theorem~\ref{Thrm_ExactnessNessSuffCond_NonDegeneracy} it follows that the penalty function $g_{\lambda}$ is exact, provided MFCQ holds true at every global minimum of the problem~\eqref{MathProgProblem_l1PenFunc}.
Let, now, $\{ \lambda_n \}$ be an increasing unbounded sequence, and let a sequence $\{ x_n \} \subset \mathbb{R}^d$ be such that $g_{\lambda_n}(x_n) \le \inf_{x \in \mathbb{R}^d} g_{\lambda_n}(x) + \varepsilon_n$ for any $n \in \mathbb{N}$ with sufficiently small $\varepsilon_n > 0$. Then from Proposition~\ref{Prp_ConvergenceToNonExactPoint} it follows that either the $\ell_1$ penalty function $g_{\lambda}$ is exact, and $x_n$ is a globally optimal solution of the problem~\eqref{MathProgProblem_l1PenFunc} for some $n \in \mathbb{N}$, or MFCQ fails to hold true at cluster points of the sequence $\{ x_n \}$ that exist due to our assumption that $g_{\lambda}$ is coercive for sufficiently large $\lambda$.
Thus, roughly speaking, if the $\ell_1$ penalty function is not exact, then a minimizing sequence constructed with the use of this penalty function converges to a globally optimal solution of the problem~\eqref{MathProgProblem_l1PenFunc} at which a constraint qualification fails to hold true. In contrast, a minimizing sequence constructed with the use of the standard (quadratic) smooth penalty function can converge to any globally optimal solution of the problem~\eqref{MathProgProblem_l1PenFunc}. \end{example}
\section{Singular penalty functions} \label{Section_SingPenFunc}
In this section, we apply the general theory of parametric penalty functions developed above to the study of a class of smooth exact penalty functions introduced by Huyer and Neumaier in \cite{HuyerNeumaier} (see also \cite{WangMaZhou,Dolgopolik_OptLet,Dolgopolik_OptLet2}). We refer to penalty functions from this class as \textit{singular penalty functions}. In the terminology of this paper, the class of singular penalty functions consists of parametric penalty functions with the set of parameters $P$ equal to $\mathbb{R}_+$. We obtain new simple sufficient conditions for the local exactness of singular penalty functions, and demonstrate how the theory developed in the previous section can be utilized in order to derive new results on these penalty functions. The results of this section sharpen and generalize the main results of the paper \cite{WangMaZhou}.
\subsection{Local exactness of singular penalty functions}
Let the set $M \subset X$ have the form $M = \{ x \in X \mid 0 \in G(x) \}$, where $G \colon X \rightrightarrows Y$ is a set-valued mapping with closed values, and $Y$ is a normed space. In other words, in this section we suppose that the problem $(\mathcal{P})$ has the form $$
\min f(x) \quad \text{subject to} \quad 0 \in G(x), \quad x \in A. $$ Note that for the problem above one has $\Omega = G^{-1}(0) \cap A$.
Let also $P = \mathbb{R}_+$ and $p_0 = 0$. Choose an arbitrary $w \in Y$, and non-decreasing functions $\phi \colon [0, + \infty] \to [0, + \infty]$ and $\omega \colon \mathbb{R}_+ \to [0, + \infty]$ such that $\phi(t) = 0$ iff $t = 0$ and $\omega(t) = 0$ iff $t = 0$. Inspired by the ideas of \cite{HuyerNeumaier,WangMaZhou}, we introduce the penalty term $\varphi(x, p)$ for the problem $(\mathcal{P})$ as follows: \begin{equation} \label{PenaltyTermOneDim}
\varphi(x, p) = \begin{cases}
0, & \text{if } x \in \Omega, p = 0, \\
+ \infty, & \text{if } x \notin \Omega, p = 0, \\
p^{-1} \phi(d(0, G(x) - p w)^2) + \omega(p), & \text{if } p > 0,
\end{cases} \end{equation} where $$
d(0, G(x) - p w) = \begin{cases}
\inf_{y \in G(x)} \| y - p w \|, & \text{if } G(x) \ne \emptyset, \\
+ \infty, & \text{if } G(x) = \emptyset,
\end{cases} $$ and define the parametric penalty function $F_{\lambda}(x, p) = f(x) + \lambda \varphi(x, p)$. Hereafter, the penalty function $F_{\lambda}$ is referred to as a \textit{singular penalty function}.
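For orientation, consider the simplest scalar instance of \eqref{PenaltyTermOneDim} (an assumed illustration, not a case treated in the text): $X = Y = \mathbb{R}$, $G(x) = \{x\}$ (i.e. the constraint $x = 0$), $w = 1$, $\phi(t) = t$ and $\omega(p) = p$, so that $\varphi(x, p) = (x - p)^2 / p + p$ for $p > 0$. Minimizing over the parameter $p$ then produces a term proportional to $|x|$, which hints at why such penalty functions can be exact even though $\varphi$ is smooth on the set $\{ p > 0 \}$.

```python
import math

INF = float("inf")

def varphi(x, p):
    """Singular penalty term for G(x) = {x}, w = 1, phi(t) = t, omega(p) = p."""
    if p == 0.0:
        # the two p = 0 branches of the definition of varphi
        return 0.0 if x == 0.0 else INF
    return (x - p) ** 2 / p + p

# For x > 0 elementary calculus shows that the infimum of varphi(x, .) over
# p > 0 is attained at p = x / sqrt(2) and equals (2 * sqrt(2) - 2) * x,
# i.e. the collapsed penalty is linear in the constraint violation |x|.
x = 0.7
p_opt = x / math.sqrt(2.0)
val = varphi(x, p_opt)
```

The calculus step: for $p > 0$ one has $\varphi(x, p) = x^2/p - 2x + 2p$, whose derivative in $p$ vanishes at $p = x/\sqrt{2}$, yielding the stated value $(2\sqrt{2} - 2)x$.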
\begin{remark} Note that one needs to include $+\infty$ into the domain of the function $\phi$ in order to allow the set $G(x)$ to be empty for some $x \in X$. \end{remark}
Our aim is to obtain simple sufficient conditions for the penalty function $F_{\lambda}$ to be (locally or globally) exact or feasibility-preserving. We start with sufficient conditions for local exactness.
Recall that a set-valued mapping $Q \colon X \rightrightarrows Y$ is called \textit{metrically subregular} with constant $\tau > 0$ at a point $(\overline{x}, \overline{y}) \in X \times Y$ with $\overline{y} \in Q(\overline{x})$ if there exists a neighbourhood $U$ of $\overline{x}$ such that $$
d(\overline{y}, Q(x)) \ge \tau d(x, Q^{-1}(\overline{y})) \quad \forall x \in U. $$ We say that the set-valued mapping $Q$ is \textit{metrically subregular with respect to a set} $C \subseteq X$ with constant $\tau > 0$ at a point $(\overline{x}, \overline{y}) \in C \times Y$ with $\overline{y} \in Q(\overline{x})$ if the restriction of the mapping $Q$ to the set $C$ is metrically subregular with constant $\tau > 0$ at $(\overline{x}, \overline{y})$ or, equivalently, if there exists a neighbourhood $U$ of $\overline{x}$ such that $$
d(\overline{y}, Q(x)) \ge \tau d(x, Q^{-1}(\overline{y}) \cap C) \quad \forall x \in U \cap C. $$ For various necessary and sufficient conditions for metric subregularity see \cite{Kruger} and references therein.
The following theorem extends Theorem~3 from \cite{Dolgopolik_OptLet} to the case of arbitrary $w \in Y$.
\begin{theorem} \label{Thrm_LocalExactness} Let $x^* \in \Omega$ be a locally optimal solution of the problem $(\mathcal{P})$, $f$ be Lipschitz continuous near $x^*$, and $G$ be metrically subregular with respect to the set $A$ at $(x^*, 0)$. Suppose also that there exist $t_0, \phi_0, \omega_0 > 0$ such that \begin{equation} \label{PhiOmegaLowerEstimate}
\phi(t) \ge \phi_0 t, \quad \omega(t) \ge \omega_0 t \quad \forall t \in [0, t_0]. \end{equation} Then the penalty function $F_{\lambda}$ is exact at $(x^*, 0)$, and \begin{equation} \label{ExPenParamEstimOneDim}
\lambda^*(x^*) \le \frac{L (\sqrt{\phi_0} \| w \| + \sqrt{\omega_0 + \phi_0 \| w \|^2})}{2 \omega_0 \sqrt{\phi_0} \tau}, \end{equation} where $L$ is a Lipschitz constant of $f$ near $x^*$, and $\tau > 0$ is the constant of metric subregularity of $G$ with respect to the set $A$ at $(x^*, 0)$. \end{theorem}
\begin{proof} From the facts that $f$ is Lipschitz continuous in a neighbourhood of $x^*$, and $x^*$ is a locally optimal solution of the problem $(\mathcal{P})$ it follows that there exists $r_1 > 0$ such that \begin{equation} \label{LowerLipEstimObjFunc}
f(x) - f(x^*) \ge - L d(x, \Omega) \quad \forall x \in B(x^*, r_1), \end{equation} where $L > 0$ is a Lipschitz constant of $f$ in a neighbourhood of $x^*$ (see, e.g., \cite{Dolgopolik}, Proposition~2.7). Let us also obtain a lower estimate of the penalty term $\varphi$.
Applying the well-known inequality $\| u - v \| \ge | \| u \| - \| v \| |$, and taking into account the fact that the mapping $G$ is metrically subregular with respect to the set $A$ at $(x^*, 0)$ one obtains that there exist $\tau > 0$ and $r_2 \in (0, r_1)$ such that $$
d(0, G(x) - p w) \ge d(0, G(x)) - p \| w \| \ge \tau d(x, \Omega) - p \| w \| $$ for any $x \in B(x^*, r_2) \cap A$ and $p \ge 0$. Hence for any $x \in B(x^*, r_2) \cap A$ and $p \ge 0$ such that
$\tau d(x, \Omega) \ge p \| w \|$ one has $$
d(0, G(x) - p w)^2 \ge (\tau d(x, \Omega) - p \| w \|)^2. $$ Note that since $x^* \in \Omega$, for any $x \in B(x^*, \sqrt{t_0} / 2 \tau)$ and
$p \in (0, \widehat{p})$ one has $(\tau d(x, \Omega) - p \| w \|)^2 \le t_0$, where
$\widehat{p} = \sqrt{t_0} / 2 \| w \|$ in the case $w \ne 0$, and $\widehat{p} = + \infty$ otherwise. Therefore taking into account (\ref{PhiOmegaLowerEstimate}) and the fact that the function $\phi$ is non-decreasing one gets that \begin{multline*}
\phi( d(0, G(x) - p w)^2 ) \ge \phi( (\tau d(x, \Omega) - p \| w \|)^2 ) \ge \\
\ge \phi_0 \big( \tau^2 d(x, \Omega)^2 - 2 p \| w \| \tau d(x, \Omega) + p^2 \| w \|^2 \big) \end{multline*} for any $x \in B(x^*, \overline{r}) \cap A$ and $p \in (0, \widehat{p})$ such that
$\tau d(x, \Omega) \ge p \| w \|$, where $\overline{r} = \min\{ \sqrt{t_0} / 2 \tau, r_2 \}$. Hence and from (\ref{LowerLipEstimObjFunc}) it follows that for all $\lambda \ge 0$ and for any $x \in B(x^*, \overline{r}) \cap A$ and $p \in (0, \widehat{p})$ such that
$\tau d(x, \Omega) \ge p \| w \|$ one has \begin{multline*}
F_{\lambda}(x, p) - F_{\lambda}(x^*, 0) = f(x) - f(x^*) +
\lambda\left( \frac{1}{p} \phi( d(0, G(x) - p w)^2 ) + \omega(p) \right) \ge \\
\ge - L d(x, \Omega) + \frac{\lambda}{p} \phi_0 \tau^2 d(x, \Omega)^2 -
2 \lambda \phi_0 \| w \| \tau d(x, \Omega) + \lambda \phi_0 \| w \|^2 p + \lambda \omega(p). \end{multline*} Minimizing the right-hand side of the last inequality with respect to $d(x, \Omega)$ one obtains $$
F_{\lambda}(x, p) - F_{\lambda}(x^*, 0) \ge - \left( \frac{L^2}{4 \lambda \phi_0 \tau^2} +
\frac{L \| w \|}{\tau} \right) p + \lambda \omega(p). $$ Consequently, applying (\ref{PhiOmegaLowerEstimate}) one gets that \begin{equation} \label{PartlyExactOneDimParam}
F_{\lambda}(x, p) - F_{\lambda}(x^*, 0) \ge
\left( - \frac{L^2}{4 \lambda \phi_0 \tau^2} - \frac{L \| w \|}{\tau} + \lambda \omega_0 \right) p \ge 0 \end{equation}
for any $x \in B(x^*, \overline{r}) \cap A$, $p \in (0, \overline{p})$ and $\lambda \ge \overline{\lambda}$ such that $\tau d(x, \Omega) \ge p \| w \|$, where $$
\overline{p} = \min\left\{ \widehat{p}, t_0 \right\}, \quad
\overline{\lambda} =
\frac{L (\sqrt{\phi_0} \| w \| + \sqrt{\omega_0 + \phi_0 \| w \|^2})}{2 \omega_0 \sqrt{\phi_0} \tau} $$ ($\overline{\lambda}$ is the smallest positive $\lambda$ for which inequality (\ref{PartlyExactOneDimParam}) is satisfied for any $p > 0$).
On the other hand, if $x \in B(x^*, \overline{r}) \cap A$ and $p \in (0, \overline{p})$ are such that
$\tau d(x, \Omega) < p \| w \|$ (note that in this case $\| w \| > 0$), then applying (\ref{PhiOmegaLowerEstimate}), and taking into account the fact that the function $\phi$ is nonnegative one obtains that \begin{multline*}
F_{\lambda}(x, p) - F_{\lambda}(x^*, 0) = f(x) - f(x^*) + \frac{\lambda}{p} \phi( d(0, G(x) - p w)^2 ) +
\lambda \omega(p) \ge \\
\ge - L d(x, \Omega) + \lambda \omega_0 p \ge
\left( - L + \lambda \frac{\tau \omega_0}{\| w \|} \right) d(x, \Omega) \ge 0 \end{multline*}
for any $\lambda \ge \overline{\lambda} \ge L \| w \| / \tau \omega_0$. Furthermore, if $p = 0$, then $F_{\lambda}(x, 0) = + \infty \ge F_{\lambda}(x^*, 0)$ for any $x \notin \Omega$, and $F_{\lambda}(x, 0) = f(x) \ge f(x^*) = F_{\lambda}(x^*, 0)$ for any $x \in B(x^*, \delta) \cap \Omega$, provided $\delta > 0$ is sufficiently small, due to the fact that $x^*$ is a locally optimal solution of the problem $(\mathcal{P})$. Therefore for any $\lambda \ge \overline{\lambda}$ one has $$
F_{\lambda}(x, p) \ge F_{\lambda}(x^*, 0) \quad
\forall (x, p) \in \big( B(x^*, \min\{ \overline{r}, \delta \}) \cap A \big) \times [0, \overline{p}). $$ Thus, the penalty function $F_{\lambda}$ is exact at $(x^*, 0)$, and inequality (\ref{ExPenParamEstimOneDim}) is valid. \end{proof}
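As a numerical sanity check of the estimate (\ref{ExPenParamEstimOneDim}), the sketch below (with arbitrary illustrative constants, not taken from the text) verifies that $\overline{\lambda}$ is precisely the smallest positive $\lambda$ for which the coefficient of $p$ in (\ref{PartlyExactOneDimParam}) is nonnegative.

```python
import math

# Arbitrary illustrative constants (assumptions made only for the demonstration).
L, tau, phi0, omega0, w_norm = 2.0, 0.5, 1.5, 0.8, 0.3

# Threshold from the proof:
# lambda_bar = L (sqrt(phi0)||w|| + sqrt(omega0 + phi0 ||w||^2)) / (2 omega0 sqrt(phi0) tau).
lam_bar = L * (math.sqrt(phi0) * w_norm
               + math.sqrt(omega0 + phi0 * w_norm ** 2)) / (2 * omega0 * math.sqrt(phi0) * tau)

def coeff(lam):
    """Coefficient of p in (PartlyExactOneDimParam):
    -L^2/(4 lam phi0 tau^2) - L ||w|| / tau + lam omega0."""
    return -L ** 2 / (4 * lam * phi0 * tau ** 2) - L * w_norm / tau + lam * omega0

# coeff is increasing on (0, +inf) and vanishes exactly at lam_bar,
# so lam_bar is the smallest positive lambda with a nonnegative coefficient.
assert abs(coeff(lam_bar)) < 1e-9
assert coeff(0.99 * lam_bar) < 0 < coeff(1.01 * lam_bar)
```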
\subsection{Some properties of singular penalty functions}
In order to apply the general theorems on the global exactness of parametric penalty functions (Theorems~\ref{Th_GlobalExGeneralCase} and \ref{Th_GlobExFiniteDim}) to the penalty function $F_{\lambda}$ introduced above, we need to be able to check the assumptions of these theorems, including the lower semicontinuity of the penalty term $\varphi$, and the boundedness of the set $\Omega_{\delta}$. The following auxiliary results are useful for verifying these assumptions.
\begin{lemma} \label{Lmm_LSC_Criterion} Let $\phi$ and $\omega$ be l.s.c. on $\mathbb{R}_+$, and the function $(x, p) \to d(0, G(x) - p w)$ be l.s.c. on $X \times \mathbb{R}_+$. Then the penalty term $\varphi(x, p)$ is l.s.c. on $X \times \mathbb{R}_+$. \end{lemma}
\begin{proof} For any $\varepsilon > 0$ introduce the function $$
\varphi_{\varepsilon}(x, p) = \frac{1}{p + \varepsilon} \phi( d(0, G(x) - p w)^2 ) + \omega(p) \quad
\forall (x, p) \in X \times \mathbb{R}_+. $$ With the use of the facts that the function $(x, p) \to d(0, G(x) - p w)$ is l.s.c., and the function $\phi$ is l.s.c. and non-decreasing, one can verify that the function $(x, p) \to \phi( d(0, G(x) - p w)^2 )$ is l.s.c., which, as it is easy to see, implies that the function $(x, p) \to \phi( d(0, G(x) - p w)^2 ) / (p + \varepsilon)$ is l.s.c. as well. Hence the function $\varphi_{\varepsilon}$ is l.s.c. as the sum of two l.s.c. functions. Note that $$
\varphi(x, p) = \sup_{\varepsilon > 0} \varphi_{\varepsilon}(x, p) \quad
\forall (x, p) \in X \times \mathbb{R}_+. $$ Therefore $\varphi$ is l.s.c. as the supremum of a family of l.s.c. functions. \end{proof}
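The supremum representation used above is easy to illustrate numerically. In the sketch below, $\phi(t) = t$, and the fixed values of $d(0, G(x) - p w)^2$, $p$ and $\omega(p)$ are arbitrary choices made only for the demonstration.

```python
# varphi_eps increases to varphi as eps decreases to 0 (here for a point with p > 0).
d2, p, omega_p = 0.36, 0.5, 0.5   # d(0, G(x) - p w)^2, p, omega(p): illustrative values

def varphi_eps(eps):
    return d2 / (p + eps) + omega_p

varphi_val = d2 / p + omega_p
sup_approx = max(varphi_eps(10.0 ** (-k)) for k in range(1, 13))

assert varphi_eps(1.0) < varphi_eps(0.1) < varphi_val   # monotone in eps
assert abs(sup_approx - varphi_val) < 1e-9              # sup over eps recovers varphi
```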
\begin{remark} Let us note that by \cite{BorweinZhu}, Theorem~5.1.20, the function $(x, p) \to d(0, G(x) - p w)$ is l.s.c., provided the multifunction $G(x)$ is outer semicontinuous (upper semicontinuous; cf. the terminologies in \cite{BorweinZhu} and \cite{RockWets}). \end{remark}
\begin{lemma} \label{Lmm_UpperEstimPenTerm} Let there exist $t_0, \Phi_0, \Omega_0 > 0$ such that $$
\phi(t) \le \Phi_0 t, \quad \omega(t) \le \Omega_0 t \quad \forall t \in [0, t_0]. $$ Then there exists $\delta_0 > 0$ such that for any $x \in X$ with $d(0, G(x)) < \delta_0$ one has \begin{equation} \label{InfimumPenTerm_UpperEstimate}
\inf_{p \ge 0} \varphi(x, p) \le C_0 d(0, G(x)), \quad
C_0 := 2 \big( \sqrt{\Phi_0^2 \| w \|^2 + \Phi_0 \Omega_0} + \Phi_0 \| w \| \big). \end{equation} \end{lemma}
\begin{proof}
Choose an arbitrary $\delta \in (0, \sqrt{t_0} / 2)$. Clearly, one can suppose that $t_0 \| w \| < \sqrt{t_0} / 2$. Then for any $x \in X$ such that $d(0, G(x)) < \delta$ and for any $p \in (0, t_0)$ one has $$
d(0, G(x) - p w) \le d(0, G(x)) + p \| w \| < \delta + t_0 \| w \| < \sqrt{t_0}. $$ Therefore for any such $x$ and $p$ one has \begin{multline*}
\varphi(x, p) = \frac{1}{p} \phi( d(0, G(x) - p w)^2 ) + \omega(p) \le
\frac{\Phi_0}{p} d(0, G(x) - p w)^2 + \Omega_0 p \le \\
\le \frac{\Phi_0}{p} (d(0, G(x))^2 + 2 p \| w \| d(0, G(x)) + p^2 \| w \|^2) + \Omega_0 p. \end{multline*} Fix an arbitrary $x \in X$ such that $d(0, G(x)) < \delta$. If $d(0, G(x)) = 0$, then $$
\inf_{p \ge 0} \varphi(x, p) \le \inf_{p \in (0, t_0)} \varphi(x, p) \le
\inf_{p \in (0, t_0)} \big( \Phi_0 \| w \|^2 + \Omega_0 \big) p = 0, $$ which implies (\ref{InfimumPenTerm_UpperEstimate}). If $d(0, G(x)) > 0$, then denote $$
h(p) = \frac{\Phi_0}{p} (d(0, G(x))^2 + 2 p \| w \| d(0, G(x)) + p^2 \| w \|^2) + \Omega_0 p. $$ Observe that the function $h(p)$ is continuously differentiable on $(0, + \infty)$, and $h(p) \to +\infty$ as $p \to +0$ or $p \to + \infty$. Therefore the function $h(p)$ attains a global minimum on $(0, + \infty)$ at a point $p^* > 0$ that must satisfy the equality $h'(p^*) = 0$. Solving the equation $h'(p) = 0$ one obtains that $$
p^* = \frac{\sqrt{\Phi_0} d(0, G(x))}{ \sqrt{\Phi_0 \| w \|^2 + \Omega_0}} <
\frac{\sqrt{\Phi_0} \delta}{ \sqrt{\Phi_0 \| w \|^2 + \Omega_0}}. $$ Choosing $\delta \in (0, \sqrt{t_0} / 2)$ sufficiently small one obtains that $p^* < t_0$. Therefore for any $x \in X$ such that $0 < d(0, G(x)) < \delta$ one has \begin{multline*}
\inf_{p \ge 0} \varphi(x, p) \le \inf_{p \in (0, t_0)} \varphi(x, p) \le \inf_{p \in (0, t_0)} h(p) =
h(p^*) = \\
= 2 ( \sqrt{\Phi_0^2 \| w \|^2 + \Phi_0 \Omega_0} + \Phi_0 \| w \| ) d(0, G(x)), \end{multline*} which completes the proof. \end{proof}
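The closed-form minimum obtained in the proof can be checked numerically: the function $h$ attains the value $C_0 \, d(0, G(x))$ at $p^*$. The constants below are arbitrary illustrative values.

```python
import math

# Arbitrary illustrative constants (assumptions, not taken from the text).
Phi0, Omega0, w_norm, d = 1.2, 0.7, 0.4, 0.05   # d stands for d(0, G(x))

def h(p):
    """Upper bound on varphi(x, p) derived in the proof."""
    return Phi0 / p * (d ** 2 + 2 * p * w_norm * d + p ** 2 * w_norm ** 2) + Omega0 * p

p_star = math.sqrt(Phi0) * d / math.sqrt(Phi0 * w_norm ** 2 + Omega0)
C0 = 2 * (math.sqrt(Phi0 ** 2 * w_norm ** 2 + Phi0 * Omega0) + Phi0 * w_norm)

# h attains its global minimum C0 * d at p_star; a coarse grid confirms this.
assert abs(h(p_star) - C0 * d) < 1e-12
grid_min = min(h(k * p_star / 500) for k in range(1, 2001))
assert grid_min >= h(p_star) - 1e-12
```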
\begin{lemma} \label{Lmm_LowerEstimPenTerm} Let there exist $t_0, \phi_0, \omega_0 > 0$ such that $$
\phi(t) \ge \phi_0 t, \quad \omega(t) \ge \omega_0 t \quad \forall t \in [0, t_0]. $$ Then for any $(x, p) \in X \times \mathbb{R}_+$ such that $\varphi(x, p) < \delta_0 := \min\big\{ \omega_0, \omega_0 t_0, \phi_0 t_0 \big\}$ one has $$
\varphi(x, p) \ge c_0 d(0, G(x)), \quad
c_0 :=
\min\left\{ \frac{\omega_0}{\| w \|}, 2 \sqrt{\phi_0 \omega_0 + \phi_0^2 \| w \|^2} - 2 \phi_0 \| w \| \right\}, $$
where by definition $\omega_0 / \| w \| = + \infty$ in the case $w = 0$. \end{lemma}
\begin{proof} Fix $(x, p) \in X \times \mathbb{R}_+$ such that $\varphi(x, p) < \delta_0$. If $p = 0$, then $\varphi(x, 0) = 0$ in the case $d(0, G(x)) = 0$, and $\varphi(x, 0) = + \infty$ otherwise, which implies the validity of the inequality $\varphi(x, p) \ge c_0 d(0, G(x))$ for any $c_0 \ge 0$.
Suppose, now, that $p > 0$. Then $$
\varphi(x, p) = \frac{1}{p} \phi( d(0, G(x) - p w)^2 ) + \omega(p) < \delta_0. $$ Hence $p^{-1} \phi( d(0, G(x) - p w)^2 ) < \delta_0$ and $\omega(p) < \delta_0$. From the fact that the function $\omega$ is non-decreasing it follows that for any $s \ge t_0$ one has $\omega(s) \ge \omega(t_0) \ge \omega_0 t_0$. Therefore $p < t_0$ due to the definition of $\delta_0$, which implies that $\omega_0 p \le \omega(p) < \delta_0$, and $p < 1$ by the fact that $\delta_0 \le \omega_0$. Hence $\phi( d(0, G(x) - p w)^2 ) < \delta_0$. Applying the inequality $\phi(t) \ge \phi_0 t$, and taking into account the facts that $\delta_0 \le \phi_0 t_0$ and the function $\phi$ is non-decreasing one can easily verify that $d(0, G(x) - p w)^2 < t_0$. Consequently, one has $$
\varphi(x, p) = \frac{1}{p} \phi( d(0, G(x) - p w)^2 ) + \omega(p) \ge
\frac{\phi_0}{p} d(0, G(x) - p w)^2 + \omega_0 p. $$
If $ w \ne 0$ and $d(0, G(x)) < p \| w \|$, then from the last inequality it follows that \begin{equation} \label{LowerEstimOfPenTermNearOmega}
\varphi(x, p) \ge \frac{\omega_0}{\| w \|} d(0, G(x)). \end{equation}
On the other hand, if $d(0, G(x)) \ge p \| w \|$, then applying the inequality
$d(0, G(x) - p w) \ge d(0, G(x)) - p \| w \|$ one obtains that $$
d(0, G(x) - p w)^2 \ge \big( d(0, G(x)) - p \| w \| \big)^2 =
d(0, G(x))^2 - 2 p \| w \| d(0, G(x)) + p^2 \| w \|^2, $$ which yields \begin{equation} \label{AuxilInequal}
\varphi(x, p) \ge \frac{\phi_0}{p} d(0, G(x))^2 - 2 \phi_0 \| w \| d(0, G(x)) +
(\phi_0 \| w \|^2 + \omega_0) p. \end{equation} For any $p > 0$ define $$
\sigma(p) = \frac{\phi_0}{p} d(0, G(x))^2 -
2 \phi_0 \| w \| d(0, G(x)) + (\phi_0 \| w \|^2 + \omega_0) p. $$ The function $\sigma$ is nonnegative, continuously differentiable on $(0, + \infty)$, and $\sigma(p) \to + \infty$ as $p \to + 0$ or $p \to + \infty$. Therefore $\sigma$ attains a global minimum on $(0, + \infty)$ at a point $p^*$ that must satisfy the equality $\sigma'(p^*) = 0$. Solving the equation $\sigma'(p) = 0$ one gets $$
p^* = \frac{\sqrt{\phi_0} d(0, G(x))}{\sqrt{\omega_0 + \phi_0 \| w \|^2}}. $$ Hence and from (\ref{AuxilInequal}) it follows that $$
\varphi(x, p) \ge \sigma(p) \ge \min_{q > 0} \sigma(q) = \sigma(p^*) =
2 ( \sqrt{\phi_0 \omega_0 + \phi_0^2 \| w \|^2} - \phi_0 \| w \| ) d(0, G(x)). $$ Combining (\ref{LowerEstimOfPenTermNearOmega}) with the inequality above one obtains the desired result. \end{proof}
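Lemma~\ref{Lmm_LowerEstimPenTerm} can be illustrated on a concrete one-dimensional instance, $X = \mathbb{R}$, $G(x) = \{x\}$ (so that $d(0, G(x)) = |x|$), $\phi(t) = \omega(t) = t$ and a scalar $w$; this instance is our own choice, made only for the demonstration.

```python
import math

# Concrete illustrative instance: phi(t) = t, omega(t) = t, so phi0 = omega0 = t0 = 1.
w = 0.5
phi0 = omega0 = t0 = 1.0
delta0 = min(omega0, omega0 * t0, phi0 * t0)
c0 = min(omega0 / abs(w),
         2 * math.sqrt(phi0 * omega0 + phi0 ** 2 * w ** 2) - 2 * phi0 * abs(w))

def varphi(x, p):
    # phi(d(0, G(x) - p w)^2) / p + omega(p) with d(0, G(x) - p w) = |x - p w|
    return (x - p * w) ** 2 / p + p

# Lemma: varphi(x, p) < delta0 implies varphi(x, p) >= c0 * d(0, G(x)) = c0 * |x|.
for i in range(-200, 201):
    x = i / 100.0
    for k in range(1, 400):
        p = k / 100.0
        v = varphi(x, p)
        if v < delta0:
            assert v >= c0 * abs(x) - 1e-12
```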
\begin{lemma} \label{Lmm_Representation} Let there exist $t_0, \phi_0, \omega_0 > 0$ such that $$
\phi(t) \ge \phi_0 t, \quad \omega(t) \ge \omega_0 t \quad \forall t \in [0, t_0]. $$ Then for any $\eta \in (0, \delta_0)$ one has $$
\Omega(\eta) = \big\{ x \in A \mid \inf_{p \in \mathbb{R}_+} \varphi(x, p) \le \eta \big\} \subseteq
\big\{ x \in A \mid d(0, G(x)) \le \eta / c_0 \big\}, $$ where $\delta_0$ and $c_0$ are the same as in Lemma~\ref{Lmm_LowerEstimPenTerm}. Furthermore, if the penalty function $F_{\lambda}$ is bounded below on $A \times \mathbb{R}_+$ for some $\lambda \ge 0$, then there exists $\lambda_0 \ge 0$ such that for any $\lambda \ge \lambda_0$ one has $$
\big\{ x \in A \mid \inf_{p \in \mathbb{R}_+} F_{\lambda}(x, p) < f^* \big\} \subseteq
\big\{ x \in A \mid f(x) + \lambda c_0 d(0, G(x)) < f^* \big\}. $$ \end{lemma}
\begin{proof} Let $x \in \Omega(\eta)$ with some $\eta \in (0, \delta_0)$. Then for any $\varepsilon \in (0, \delta_0 - \eta)$ there exists $p \ge 0$ such that $\varphi(x, p) < \eta + \varepsilon$. Therefore by Lemma~\ref{Lmm_LowerEstimPenTerm} one has $$
\eta + \varepsilon > \varphi(x, p) \ge c_0 d(0, G(x)), $$ which implies $d(0, G(x)) \le \eta / c_0$, since $\varepsilon$ was chosen arbitrarily.
Suppose, additionally, that the penalty function $F_{\lambda}$ is bounded below on $A \times \mathbb{R}_+$ for some $\lambda \ge 0$. Then applying Lemma~\ref{Lmm_Reduction} one obtains that there exists $\lambda_0 \ge 0$ such that for any $\lambda \ge \lambda_0$ one has \begin{equation} \label{PenFuncOutsideOmega}
F_{\lambda}(x, p) \ge f^* \quad \forall (x, p) \notin \Omega_{\delta_0}. \end{equation} Choose arbitrary $\lambda \ge \lambda_0$ and $x \in A$ such that $\inf_{p \in \mathbb{R}_+} F_{\lambda}(x, p) < f^*$. Clearly, there exists $p \ge 0$ such that $F_{\lambda}(x, p) < f^*$. With the use of (\ref{PenFuncOutsideOmega}) one gets that $(x, p) \in \Omega_{\delta_0}$ or, equivalently, $\varphi(x, p) < \delta_0$. Hence applying Lemma~\ref{Lmm_LowerEstimPenTerm} one obtains that $$
f^* > F_{\lambda}(x, p) = f(x) + \lambda \varphi(x, p) \ge f(x) + \lambda c_0 d(0, G(x)), $$ which completes the proof. \end{proof}
\subsection{Global exactness of singular penalty functions}
Now, we can apply Theorems~\ref{Th_GlobalExGeneralCase} and \ref{Th_GlobExFiniteDim} in order to obtain new sufficient conditions for the singular penalty function $F_{\lambda}$ to be globally exact. At first, we consider the finite dimensional case and apply the localization principle.
\begin{theorem} Let $X$ be a finite dimensional normed space, and the set $A$ be closed. Suppose that the following assumptions are satisfied: \begin{enumerate} \item{$f$ is l.s.c. on $A$, and Lipschitz continuous near each globally optimal solution of the problem $(\mathcal{P})$; }
\item{$G$ is metrically subregular with respect to the set $A$ at $(x^*, 0)$ for every globally optimal solution $x^*$ of the problem $(\mathcal{P})$; }
\item{the mapping $(x, p) \to d(0, G(x) - p w)$ is l.s.c. on $A \times \mathbb{R}_+$ (in particular, one can suppose that $G$ is outer semicontinuous on $A$); }
\item{the functions $\phi$ and $\omega$ are l.s.c., and there exist $\phi_0, \omega_0, t_0 > 0$ such that $\phi(t) \ge \phi_0 t$ and $\omega(t) \ge \omega_0 t$ for any $t \in (0, t_0)$; }
\item{either there exists $\delta > 0$ such that the set $\{ x \in A \mid d(0, G(x)) < \delta \}$ is bounded or there exists $\mu \ge 0$ such that the set $\{ x \in A \mid f(x) + \mu d(0, G(x)) < f^* \}$ is bounded. } \end{enumerate} Then the penalty function $F_{\lambda}$ is exact if and only if it is bounded below on $A \times \mathbb{R}_+$ for some $\lambda \ge 0$. \end{theorem}
\begin{proof} From Theorem~\ref{Thrm_LocalExactness} it follows that the penalty function $F_{\lambda}$ is exact at every globally optimal solution of the problem $(\mathcal{P})$. Applying Lemma~\ref{Lmm_LSC_Criterion} one gets that the penalty term $\varphi$ is l.s.c. on $A \times \mathbb{R}_+$, while taking into account Lemma~\ref{Lmm_Representation} one obtains that either the projection of the set $\Omega_{\eta}$ onto $X$ is bounded for sufficiently small $\eta > 0$ (note that the projection of $\Omega_{\eta}$ onto $X$ is contained in $\Omega(\eta)$) or the set $\{ x \in A \mid \inf_{p \ge 0} F_{\mu}(x, p) < f^* \}$ is bounded for sufficiently large $\mu$. Hence with the use of Theorem~\ref{Th_GlobExFiniteDim} one obtains the desired result. \end{proof}
One can apply Theorem~\ref{Th_GlobalExGeneralCase} in order to obtain sufficient conditions for the singular penalty function $F_{\lambda}$ under consideration to be exact in the general case. However, instead of applying Theorem~\ref{Th_GlobalExGeneralCase} directly, we prove a useful auxiliary result about an intimate relation between the exactness of the parametric penalty function $F_{\lambda}$ and the exactness of the standard penalty function $h_{\lambda}(x) = f(x) + \lambda d(0, G(x))$, and then apply Theorem~\ref{Th_GlobalExGeneralCase} to the function $h_{\lambda}$. This approach allows one to obtain stronger and, at the same time, simpler sufficient conditions for the penalty function $F_{\lambda}$ to be globally exact than a direct application of Theorem~\ref{Th_GlobalExGeneralCase}. Moreover, a theorem about a connection between the exactness of penalty functions $F_{\lambda}$ and $h_{\lambda}$ (the first step in the approach that we use) is a very useful and important result on its own, since it allows one to reduce the study of the singular penalty function $F_{\lambda}$ to the study of the standard penalty function $h_{\lambda}$.
The following result sharpens Theorems 7 and 9 from \cite{Dolgopolik_OptLet2}.
\begin{theorem} \label{Thrm_ReductionToStandPenFunc} Let there exist $\phi_0, \Phi_0, \omega_0, \Omega_0, t_0 > 0$ such that \begin{equation} \label{RHS_Derivatives_Inequal}
\phi_0 t \le \phi(t) \le \Phi_0 t, \quad
\omega_0 t \le \omega(t) \le \Omega_0 t \quad \forall t \in [0, t_0] \end{equation} (in particular, one can suppose that there exist the right-hand side derivatives $\phi'_+(0)$ and $\omega'_+(0)$ of the functions $\phi$ and $\omega$ at the origin such that $\phi'_+(0) > 0$ and $\omega'_+(0) > 0$). Then the penalty function $F_{\lambda}$ is exact on $\Omega_{\delta}$ for some $\delta > 0$ if and only if the penalty function $h_{\lambda}$ is exact on the set $\{ x \in A \mid d(0, G(x)) < \theta \}$ for some $\theta > 0$. \end{theorem}
\begin{proof} Let the penalty function $F_{\lambda}$ be exact on $\Omega_{\delta}$ for some $\delta > 0$. By definition, there exists $\lambda \ge 0$ such that $F_{\lambda}(x, p) \ge f^*$ for all $(x, p) \in \Omega_{\delta}$. Hence applying Proposition~\ref{Prp_EquivDefExPen} one can verify that the penalty function $\Psi_{\lambda}(x, p) = f(x) + \lambda \psi_{\delta}(\varphi(x, p))$, where \begin{equation} \label{BarrierTrasformDef}
\psi_{\delta}(t) = \begin{cases}
\dfrac{t}{\delta - t}, & \text{if } 0 \le t < \delta, \\
+ \infty, & \text{if } t \ge \delta
\end{cases} \end{equation} is exact, and $\lambda^*(f, \psi_{\delta} \circ \varphi) \le \delta \cdot \lambda^*( \Omega_{\delta} )$ (see Remark~\ref{Rmrk_BarrierTermPenFunc}). Consequently, with the use of the fact that the function $\psi_{\delta}$ is non-decreasing and continuous on its effective domain one obtains that $$
\inf_{p \ge 0} \Psi_{\lambda}(x, p) = f(x) + \lambda \psi_{\delta}\left( \inf_{p \ge 0} \varphi(x, p) \right) \ge f^*
\quad \forall x \in A \quad \forall \lambda \ge \lambda^*(f, \psi_{\delta} \circ \varphi). $$ By Lemma~\ref{Lmm_UpperEstimPenTerm} there exist $C_0 > 0$ and $\delta_0 > 0$ such that $\inf_{p \ge 0} \varphi(x, p) \le C_0 d( 0, G(x) )$ for any $x \in X$ with $d(0, G(x)) < \delta_0$. Therefore taking into account the facts that the function $\psi_{\delta}$ is non-decreasing and $\psi_{\delta}(t) \le 2 t / \delta$ for any $t \in [0, \delta / 2]$ one gets that $$
f^* \le f(x) + \lambda \psi_{\delta}\left( \inf_{p \ge 0} \varphi(x, p) \right) \le
f(x) + \lambda \psi_{\delta}\Big( C_0 d(0, G(x)) \Big) \le h_{\lambda C_1}(x), $$ for any $x \in A$ with $d(0, G(x)) < \min\{ \delta_0, \delta / 2 C_0 \}$ and $\lambda \ge \lambda^*(f, \psi_{\delta} \circ \varphi)$, where $C_1 = 2 C_0 / \delta$. Thus, the penalty function $h_{\lambda}$ is exact on the set $\{ x \in A \mid d(0, G(x)) < \theta \}$ with $\theta = \min\{ \delta_0, \delta / 2 C_0 \}$.
Let, now, the penalty function $h_{\lambda}$ be exact on the set $\{ x \in A \mid d(0, G(x)) < \theta \}$ for some $\theta > 0$. Then the penalty function $b_{\lambda}(x) = f(x) + \lambda \psi_{\theta}(d(0, G(x)))$ is exact. By Lemma~\ref{Lmm_LowerEstimPenTerm} there exist $c_0 > 0$ and $\delta_0 > 0$ such that $\varphi(x, p) \ge c_0 d(0, G(x))$ for any $(x, p) \in A \times \mathbb{R}_+$ with $\varphi(x, p) < \delta_0$. Hence taking into account the facts that the function $\psi_{\theta}$ is non-decreasing and $\psi_{\theta}(t) \le 2 t / \theta$ for any $t \in [0, \theta / 2]$ one obtains that $$
f^* \le b_{\lambda}(x) \le f(x) + \lambda \psi_{\theta}\left( \frac{1}{c_0} \varphi(x, p) \right) \le
f(x) + \frac{2 \lambda}{\theta c_0} \varphi(x, p) $$ for any sufficiently large $\lambda \ge 0$ and for all $(x, p) \in A \times \mathbb{R}_+$ such that $\varphi(x, p) < \min\{ \delta_0, \theta c_0 / 2 \}$. Thus, the penalty function $F_{\lambda}$ is exact on the set $\Omega_{\delta}$ with $\delta = \min\{ \delta_0, \theta c_0 / 2 \}$. \end{proof}
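The two properties of the barrier transform $\psi_{\delta}$ used in the proof above (monotonicity on $[0, \delta)$ and the bound $\psi_{\delta}(t) \le 2t/\delta$ on $[0, \delta/2]$) are easy to confirm numerically; the value of $\delta$ below is an arbitrary illustrative choice.

```python
# Barrier transform psi_delta from (BarrierTrasformDef).
delta = 0.8   # illustrative value

def psi(t):
    return t / (delta - t) if t < delta else float('inf')

ts = [k * delta / 1000 for k in range(1000)]
assert all(psi(a) <= psi(b) for a, b in zip(ts, ts[1:]))            # non-decreasing
assert all(psi(t) <= 2 * t / delta + 1e-12 for t in ts if t <= delta / 2)
```

Note that the bound $\psi_{\delta}(t) \le 2t/\delta$ holds with equality at $t = \delta/2$, since $t/(\delta - t) \le 2t/\delta$ is equivalent to $t \le \delta/2$.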
\begin{corollary} Let there exist $\phi_0, \Phi_0, \omega_0, \Omega_0, t_0 > 0$ such that inequalities \eqref{RHS_Derivatives_Inequal} are valid. Suppose also that there exists $\lambda_0 \ge 0$ such that the function $F_{\lambda_0}$ is bounded below on $A \times \mathbb{R}_+$, and the function $h_{\lambda_0}$ is bounded below on $A$. Then the penalty function $F_{\lambda}$ is exact if and only if the penalty function $h_{\lambda}$ is exact. \end{corollary}
Now, we can utilize Theorem~\ref{Th_GlobalExGeneralCase} in order to derive sufficient conditions for the parametric penalty function $F_{\lambda}$ to be exact in the general case.
\begin{theorem} Let $X$ be a complete metric space, $A$ be a closed set, and the functions $f$ and $d(0, G(x))$ be l.s.c. on $A$ (in particular, one can suppose that $G$ is outer semicontinuous on $A$). Suppose also that there exist $\delta > 0$ and $\lambda_0 \ge 0$ such that the following assumptions are satisfied: \begin{enumerate} \item{$f$ is Lipschitz continuous on the set $$
K(\delta, \lambda_0) := \big\{ x \in A \mid d(0, G(x)) < \delta, f(x) + \lambda_0 d(0, G(x)) < f^* \big\}, $$ and u.s.c. at every point of this set; }
\item{there exists $a > 0$ such that $d(0, G(\cdot))^{\downarrow}_A(x) < - a$ for any $x \in K(\delta, \lambda_0)$; }
\item{there exist $\phi_0, \Phi_0, \omega_0, \Omega_0, t_0 > 0$ such that inequalities \eqref{RHS_Derivatives_Inequal} hold true; } \end{enumerate} Then the penalty function $F_{\lambda}$ is exact if and only if it is bounded below on $A \times \mathbb{R}_+$ for some $\lambda \ge 0$. \end{theorem}
\begin{proof} Applying Theorem~\ref{Th_GlobalExGeneralCase} to the penalty function $b_{\lambda}(x) = f(x) + \lambda \psi_{\delta}(d(0, G(x)))$ (see~\eqref{BarrierTrasformDef}) one obtains that $b_{\lambda}$ is exact. Therefore, as it is easy to see, the penalty function $h_{\lambda}$ is exact on the set $\{ x \in A \mid d(0, G(x)) < \theta \}$ for any $\theta < \delta$ (cf.~Remark~\ref{Rmrk_BarrierTermPenFunc}). Hence applying Theorem~\ref{Thrm_ReductionToStandPenFunc} one obtains that the parametric penalty function $F_{\lambda}$ is exact on the set $\Omega_{\tau}$ for some $\tau > 0$, which with the use of Proposition~\ref{Prp_EquivDefExPen} implies the desired result. \end{proof}
\begin{remark} Note that the second assumption of the theorem above, in essence, means that the set-valued mapping $G$ is metrically regular with respect to the set $A$ on the set $K(\delta, \lambda_0) \times \{ 0 \}$ (see~\cite{Ioffe,Kruger}). \end{remark}
Let us also obtain sufficient conditions for the penalty function $F_{\lambda}$ under consideration to be feasibility-preserving.
\begin{theorem} \label{Thrm_SingPenFunc_FeasPreserv} Let $C \subset A$ be a nonempty set, and $f$ be Lipschitz continuous on an open set containing the set $C$. Suppose also that the following assumptions are valid: \begin{enumerate} \item{the functions $\phi$ and $\omega$ are convex and continuously differentiable on $\dom \phi$ and $\dom \omega$, respectively, and $\phi'(0) > 0$ and $\omega'(0) > 0$;}
\item{there exists $a > 0$ such that $d(0, G(\cdot) - p w)^{\downarrow}_A (x) \le - a$ for any $(x, p) \in (C \setminus \Omega) \times \dom \omega$ such that $d(0, G(x) - p w) > 0$ and $\phi(d(0, G(x) - p w)^2) < + \infty$. } \end{enumerate} Then the penalty function $F_{\lambda}$ is feasibility-preserving on the set $C$. \end{theorem}
\begin{proof} Let us show that under the assumptions of the theorem there exists $c > 0$ such that $$
\varphi^{\downarrow}_{A \times \mathbb{R}_+}(x, p) < - c \quad
\forall (x, p) \in (C \times \mathbb{R}_+)_{\inf}. $$ Then applying Proposition~\ref{Prp_SuffCondFeasPres} one obtains the required result.
Let $(x, p) \in (C \times \mathbb{R}_+)_{\inf}$ be arbitrary. Note that if $p = 0$, then $x \notin \Omega$, which implies $\varphi(x, p) = + \infty$. Therefore $p > 0$. Furthermore, one has $\omega(p) < + \infty$ and $\phi(d(0, G(x) - p w)^2) < + \infty$ by the definition of the set $(C \times \mathbb{R}_+)_{\inf}$.
Suppose, at first, that $d(0, G(x) - p w) = 0$. Then by the second part of Lemma~\ref{Lemma_SimpleCases} the function $g(q) = d(0, G(x) - q w)^2$ is differentiable at the point $p$ and $g'(p) = 0$. Consequently, the function $q \to \varphi(x, q) = q^{-1} \phi(g(q)) + \omega(q)$ is differentiable at the point $p$ and $$
\frac{\partial \varphi}{\partial q}(x, p) = - \frac{1}{p^2} \phi(g(p)) + \frac{1}{p} \phi'(g(p)) g'(p) +
\omega'(p) = \omega'(p) $$ by the fact that $\phi(0) = g'(p) = 0$. Note that the function $\omega'(p)$ is non-decreasing due to the convexity of $\omega$. Therefore \begin{equation} \label{RSD_PenTerm_UpperEstimate_1}
\varphi_{A \times \mathbb{R}_+}^{\downarrow}(x, p) \le \varphi(x, \cdot)^{\downarrow}_{\mathbb{R}_+}(p) =
- \left| \frac{\partial \varphi}{\partial q}(x, p) \right| = - | \omega'(p) | \le - |\omega'(0)| < 0. \end{equation} Suppose, now, that $d(0, G(x) - p w) > 0$, and choose an arbitrary $\sigma > 0$. If $$
\frac{1}{p} \phi'( d(0, G(x) - p w)^2 ) d(0, G(x) - p w) \ge \sigma, $$ then applying Lemma~\ref{Lemma_Superpos} one obtains that \begin{equation} \label{RSD_PenTerm_UpperEstimate_2}
\varphi(\cdot, p)^{\downarrow}_A (x) = \frac{2}{p} \phi'( d(0, G(x) - p w)^2 ) d(0, G(x) - p w)
d(0, G(\cdot) - p w)^{\downarrow}_A(x) \le - \sigma a < 0. \end{equation} On the other hand, if \begin{equation} \label{RSD_Estimate_TrickyInequal}
\frac{1}{p} \phi'( d(0, G(x) - p w)^2 ) d(0, G(x) - p w) < \sigma, \end{equation}
then introduce the functions $g_0(q) = d(0, G(x) - q w)$ and $h(q) = q^{-1} \phi( g_0(q)^2 )$. Clearly, the function $g_0$ is Lipschitz continuous on $\mathbb{R}$ with a Lipschitz constant $L \le \| w \|$. Hence
$g_0^{\uparrow}(p) \le \| w \|$. Therefore applying (\ref{RSD_Estimate_TrickyInequal}), and Lemmas~\ref{Lemma_RSD_Product} and \ref{Lemma_Superpos} one obtains that \begin{multline*}
h^{\uparrow}(p) \le \frac{1}{p^2} \phi\big( (g_0(p))^2 \big) +
\frac{1}{p} \phi'(d(0, G(x) - p w)^2) d(0, G(x) - p w) g_0^{\uparrow}(p) \le \\
\le \frac{1}{p^2} \phi\big( (g_0(p))^2 \big) + \sigma \| w \|. \end{multline*} Recall that the function $\phi$ is convex, which implies that its derivative is a non-decreasing function. Consequently, with the use of (\ref{RSD_Estimate_TrickyInequal}) one gets that \begin{multline*}
\phi( d(0, G(x) - p w)^2 ) \le \phi' ( d(0, G(x) - p w)^2 ) d(0, G(x) - p w)^2 \le \\
\le \sigma p d(0, G(x) - p w) \le \frac{\sigma^2 p^2}{\phi'(d(0, G(x) - p w)^2)} \le
\frac{\sigma^2 p^2}{\phi'(0)}. \end{multline*}
Hence one has that $h^{\uparrow}(p) \le \sigma^2 / \phi'(0) + \sigma \| w \|$. Choosing $\sigma > 0$ sufficiently small one gets that $h^{\uparrow}(p) \le \omega'(0) / 2$. Therefore applying Lemma~\ref{Lmm_SDD_SumEstimate} one obtains that \begin{multline} \label{RSD_PenTerm_UpperEstimate_3}
\varphi^{\downarrow}_{A \times \mathbb{R}_+}(x, p) \le \varphi(x, \cdot)^{\downarrow}_{\mathbb{R}_+}(p) =
(h + \omega)^{\downarrow}(p) \le \\
\le h^{\uparrow}(p) + \omega^{\downarrow}(p) \le
\frac{\omega'(0)}{2} - | \omega'(p) | \le - \frac{\omega'(0)}{2} \end{multline} by virtue of the convexity of the function $\omega$. Combining (\ref{RSD_PenTerm_UpperEstimate_1}), (\ref{RSD_PenTerm_UpperEstimate_2}) and (\ref{RSD_PenTerm_UpperEstimate_3}) one gets $$
\varphi^{\downarrow}_{A \times \mathbb{R}_+}(x, p) \le
- \min\left\{ \frac{\omega'(0)}{2}, \sigma a \right\} < 0
\quad \forall (x, p) \in (C \times \mathbb{R}_+)_{\inf}, $$ which completes the proof. \end{proof}
\begin{remark} Let $X$ be a normed space, $Y = \mathbb{R}^m \times \mathbb{R}^l$, and let the space $Y$ be equipped with an arbitrary norm. Suppose also that the multifunction $G$ has the form $$
G(x) = \Big( \prod_{i = 1}^m \{ f_i(x) \} \Big) \times \Big( \prod_{j = 1}^l [g_j(x), + \infty) \Big), $$ where $\prod$ stands for the Cartesian product, and the functions $f_i, g_j \colon X \to \mathbb{R}$ are continuously Fr\'echet differentiable. Thus, the inclusion $0 \in G(x)$ is equivalent to the following system of equations and inequalities \begin{equation} \label{EqualInequalConstraints}
f_i(x) = 0, \quad i \in \{ 1, \ldots, m \}, \quad g_j(x) \le 0, \quad j \in \{ 1, \ldots, l \}. \end{equation} Let $x \in X$ and $p > 0$ be such that $d(0, G(x) - p w) > 0$. Then one can check that if MFCQ holds true for the system (\ref{EqualInequalConstraints}) at the point $x$, then $d(0, G(\cdot) - p w)^{\downarrow}(x) < 0$. Furthermore, one can verify (cf.~\cite{HanMangasarian}, Theorem 2.2, and \cite{DiPilloGrippo}, Lemma 3.1) that in this case there exist $r > 0$ and $a > 0$ such that $$
d(0, G(\cdot) - q w)^{\downarrow}(y) < - a \quad
\forall (y, q) \in U(x, r) \times U(p, r) \colon d(0, G(y) - q w) > 0. $$ Therefore, if MFCQ holds true for the system (\ref{EqualInequalConstraints}) at every point of a compact set $C \subset X$, then for any $t_0 > 0$ there exists $a > 0$ such that $d(0, G(\cdot) - p w)^{\downarrow}(x) < - a$ for any $(x, p) \in (C \setminus \Omega) \times [0, t_0]$ such that $d(0, G(x) - p w) > 0$. Hence assumption~2 of the previous theorem is satisfied provided that one of the sets $\dom \phi$ or $\dom \omega$ is bounded.
Usually, if in Theorem~\ref{Thrm_SingPenFunc_FeasPreserv} the sets $C$ and $\dom \phi \cap \dom \omega$ are relatively compact, then assumption~2 of this theorem is satisfied provided that a constraint qualification holds true at every point of the set $C \setminus \Omega$. \end{remark}
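For the product multifunction above, $d(0, G(x) - p w)$ can be computed explicitly; under the Euclidean norm on $Y$ (our choice, the remark allows an arbitrary norm) the equality components contribute $|f_i(x) - p w_i|$ and the inequality components contribute $\max\{0, g_j(x) - p w_{m+j}\}$. A minimal sketch:

```python
import math

# Distance d(0, G(x) - p w) for G(x) = prod {f_i(x)} x prod [g_j(x), +inf)
# under the Euclidean norm on Y = R^m x R^l (our choice of norm).
def dist(f_vals, g_vals, w, p):
    m = len(f_vals)
    eq = sum((fi - p * wi) ** 2 for fi, wi in zip(f_vals, w[:m]))
    ineq = sum(max(0.0, gj - p * wj) ** 2 for gj, wj in zip(g_vals, w[m:]))
    return math.sqrt(eq + ineq)

# x is feasible for the system (f_i(x) = 0, g_j(x) <= 0) iff dist(...) = 0 at p = 0.
assert dist([0.0], [-1.0], [0.0, 0.0], 0.0) == 0.0
assert dist([0.3], [-1.0], [0.0, 0.0], 0.0) > 0.0
# a shift by p w can restore feasibility of the perturbed inclusion 0 in G(x) - p w:
assert dist([0.3], [0.2], [0.3, 0.2], 1.0) == 0.0
```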
\subsection{Zero duality gap and singular penalty functions}
Let us obtain two simple sufficient conditions for the zero duality gap property for the penalty function $F_{\lambda}$ to hold true, and show how Theorem~\ref{Thrm_MinimizingSequences} about minimizing sequences of a penalty function transforms in the case under consideration.
\begin{theorem} Let there exist $\phi_0, \Phi_0, \omega_0, \Omega_0, t_0 > 0$ such that $$
\phi_0 t \le \phi(t) \le \Phi_0 t, \quad \omega_0 t \le \omega(t) \le \Omega_0 t
\quad \forall t \in [0, t_0]. $$ Then the zero duality gap property holds true for the penalty function $F_{\lambda}$ if and only if $F_{\lambda}$ is bounded below for some $\lambda \ge 0$, and the perturbation function $$
\gamma(\eta) := \inf_{x \in K_{\eta}} f(x), \quad \eta \ge 0 $$ is l.s.c. at the origin, where $K_{\eta} = \{ x \in A \mid d(0, G(x)) \le \eta \}$. \end{theorem}
\begin{proof} Define, as above, the perturbation function $\beta(\eta) := \inf_{x \in \Omega(\eta)} f(x)$ for any $\eta \ge 0$. Note that $\beta(0) = \gamma(0) = f^*$. By Lemma~\ref{Lmm_Representation}, there exist $\delta_1 \ge 0$ and $c_0 > 0$ such that for any $\eta \in (0, \delta_1)$ one has $\beta(\eta) \ge \gamma( \eta / c_0 )$. On the other hand, from Lemma~\ref{Lmm_UpperEstimPenTerm} it follows that there exist $\delta_2 > 0$ and $C_0 > 0$ such that for any $\eta \in (0, \delta_2)$ one has $\{ x \in A \mid d(0, G(x)) \le \eta \} \subseteq \Omega( C_0 \eta )$, which implies $\gamma(\eta) \ge \beta( C_0 \eta )$. Therefore, as is easy to verify, the perturbation function $\gamma$ is l.s.c. at the origin iff the perturbation function $\beta$ is l.s.c. at the origin. It remains to apply Theorem~\ref{Thrm_ZeroDualityGapCharacterization}. \end{proof}
\begin{theorem} Let $A$ be closed, $f$ be l.s.c. on $\Omega$, $\phi$ and $\omega$ be l.s.c. on $\mathbb{R}_+$, and the mapping $(x, p) \to d(0, G(x) - p w)$ be l.s.c. on $A \times \mathbb{R}_+$ (in particular, one can suppose that $G$ is outer semicontinuous on $A$). Suppose also that there exist $t_0, \phi_0, \omega_0 > 0$ such that $$
\phi(t) \ge \phi_0 t, \quad \omega(t) \ge \omega_0 t \quad \forall t \in [0, t_0], $$ and there exists $\eta > 0$ such that the set $\{ x \in A \mid d(0, G(x)) < \eta, f(x) < f^* \}$ is relatively compact. Then the zero duality gap property for the penalty function $F_{\lambda}$ holds true. \end{theorem}
\begin{proof} By Lemma~\ref{Lmm_LSC_Criterion} the penalty term $\varphi(x, p)$ is l.s.c. on $A \times \mathbb{R}_+$. Applying Lemma~\ref{Lmm_Representation} one obtains that there exists $c_0 > 0$ such that $$
\{ x \in \Omega( c_0 \eta ) \mid f(x) < f^* \} \subset \{ x \in A \mid d(0, G(x)) < \eta, f(x) < f^* \}, $$ provided $\eta > 0$ is sufficiently small. Consequently, applying Proposition~\ref{Prp_ZeroDualityGapSuffCond} one gets that the zero duality gap property holds true for $F_{\lambda}$. \end{proof}
\begin{theorem} Let $\{ \lambda_n \} \subset (0, + \infty)$ be an increasing unbounded sequence. Let also a sequence $\{ (x_n, p_n) \} \subset A \times [0, +\infty)$ be such that $$
F_{\lambda_n}(x_n, p_n) \le
\inf_{(x, p) \in A \times \mathbb{R}_+} F_{\lambda_n}(x, p) + \varepsilon $$ for some $\varepsilon > 0$. Then $p_n \to 0$ and $d(0, G(x_n)) \to 0$ as $n \to \infty$. Moreover, if, additionally, the zero duality gap property holds true for the penalty function $F_{\lambda}$, then $f^* \le \liminf_{n \to \infty} f(x_n) \le \limsup_{n \to \infty} f(x_n) \le f^* + \varepsilon$. \end{theorem}
\begin{proof} From Theorem~\ref{Thrm_MinimizingSequences} it follows that $p_n \to 0$ and $\varphi(x_n, p_n) \to 0$ as $n \to \infty$, and $f^* \le \liminf_{n \to \infty} f(x_n) \le \limsup_{n \to \infty} f(x_n) \le f^* + \varepsilon$, if the zero duality gap property holds true. Thus, it remains to show that $d(0, G(x_n)) \to 0$ as $n \to \infty$. Arguing by reductio ad absurdum, suppose that this is not true. Then there exist $\sigma > 0$ and a subsequence $\{ x_{n_k} \}$ such that $d(0, G(x_{n_k})) > \sigma$ for any $k \in \mathbb{N}$.
Note that if $p_{n_k} = 0$ for some $k \in \mathbb{N}$, then $d(0, G(x_{n_k})) = 0$, since otherwise $\varphi(x_{n_k}, p_{n_k}) = + \infty$, which is impossible. Consequently, $p_{n_k} > 0$ for all $k \in \mathbb{N}$. Since $p_n \to 0$ as $n \to \infty$, for any sufficiently large $k$ one has $d(0, G(x_{n_k}) - p_{n_k} w) > \sigma / 2$. Therefore, taking into account the fact that the function $\phi$ is non-decreasing, one obtains that for any $k$ large enough the following inequality holds true $$
\varphi(x_{n_k}, p_{n_k}) = \frac{1}{p_{n_k}} \phi( d(0, G(x_{n_k}) - p_{n_k} w)^2 ) + \omega(p_{n_k}) \ge
\frac{1}{p_{n_k}} \phi(\sigma / 2). $$ Consequently, $\varphi(x_{n_k}, p_{n_k}) \to + \infty$ as $k \to \infty$, which contradicts the fact that $\varphi(x_n, p_n) \to 0$ as $n \to \infty$. Thus, $d(0, G(x_n)) \to 0$ as $n \to \infty$. \end{proof}
\section{Smoothing approximations of nonsmooth penalty functions} \label{Section_SmoothingPenFunc}
In this section, we discuss a general approach to the construction of exact parametric penalty functions based on the use of smoothing approximations of standard exact penalty functions, and study some properties of approximations of penalty functions.
Let $g_{\lambda}(x) = f(x) + \lambda \phi(x)$ be a standard penalty function for the problem $(\mathcal{P})$, i.e. let $\phi \colon X \to [0, + \infty]$ be a given function such that $\phi(x) = 0$ iff $x \in M$. Let also $P$ be a metric space, and $p_0 \in P$ be fixed.
\begin{definition} A function $\Phi \colon X \times P \to [0, + \infty]$ is called an \textit{approximation} of the penalty term $\phi$, if $\Phi(\cdot, p_0) = \phi(\cdot)$, and $\Phi(\cdot, p) \to \phi(\cdot)$ as $p \to p_0$ uniformly on the set $A$. An approximation $\Phi$ of $\phi$ is called an \textit{upper approximation} of the penalty term $\phi$ if $\Phi(x, p) \ge \phi(x)$ for all $(x, p) \in X \times P$. \end{definition}
Approximations of the penalty term $\phi$ can be utilized to construct parametric penalty functions for the problem $(\mathcal{P})$. Indeed, let $\Phi$ be an approximation of the penalty term $\phi$. Choose a non-decreasing function $\omega \colon \mathbb{R}_+ \to [0, + \infty]$ such that $\omega(t) = 0$ iff $t = 0$, and define $\varphi(x, p) = \Phi(x, p) + \omega(d(p_0, p))$ for any $(x, p) \in X \times P$. Clearly, $\varphi(x, p) = 0$ iff $x \in \Omega$, and $p = p_0$. Therefore the function \begin{equation} \label{ParamPenFuncViaUpperApprox}
F_{\lambda}(x, p) = f(x) + \lambda \varphi(x, p) = f(x) + \lambda \big( \Phi(x, p) + \omega(d(p_0, p)) \big) \end{equation} is a parametric penalty function for the problem $(\mathcal{P})$.
Note that if $\Phi$ is an upper approximation of the penalty term $\phi$, then $$
\inf_{p \in P} \varphi(x, p) = \inf_{p \in P} \big( \Phi(x, p) + \omega(d(p_0, p)) \big) =
\phi(x) \quad \forall x \in X. $$ Therefore, in this case, the parametric penalty function $F_{\lambda}$ is exact if and only if the penalty function $g_{\lambda}$ is exact, by virtue of Corollary~\ref{Crlr_ReductionToStandExPen}. Thus, an approach based on upper approximations of the penalty term $\phi$ provides one with a simple method for constructing exact parametric penalty functions.
In the theory of smoothing approximations of exact penalty functions (see, e.g., \cite{Pinar,WuBaiYang,MengDangYang,Liu,LiuzziLucidi,XuMengSunShen,Lian,XuMengSunHuangShen} and references therein), one defines $P = \mathbb{R}_+$, and chooses an approximation $\Phi$ of the penalty term $\phi$ such that the function $\Phi(\cdot, p)$ is smooth for any $p \in (0, + \infty)$. Then one chooses a decreasing sequence $\{ p_n \} \subset (0, + \infty)$ such that $p_n \to 0$ as $n \to \infty$, and uses any method of smooth optimization in order to find a point of global minimum $x_n$ of the function $f(\cdot) + \lambda \Phi(\cdot, p_n)$. Under some additional assumptions one can show that the sequence $\{ x_n \}$ converges to a globally optimal solution of the problem $(\mathcal{P})$. Thus, smoothing approximations allow one to avoid the minimization problem for the \textit{nondifferentiable} exact penalty function $g_{\lambda}$. On the other hand, a smoothing approximation of an exact penalty function often enjoys some good properties that standard smooth penalty functions do not have (see \cite{Pinar,WuBaiYang,MengDangYang,Liu,LiuzziLucidi,XuMengSunShen,Lian,XuMengSunHuangShen}).
From the viewpoint of the theory of exact parametric penalty functions, one can consider the method of smoothing approximation of exact penalty functions as follows. One defines the parametric penalty function $F_{\lambda}$ of the form (\ref{ParamPenFuncViaUpperApprox}), and then applies a ``coordinate'' descent method (with $x$ and $p$ being two ``coordinates'') to find a point of global minimum of this function. The ``coordinate'' descent method is used in order to utilize the smoothness of the function $\Phi$ in $x$, while the term $\omega$ is not used because it does not affect the optimization process.
Let us study how a minimizing sequence of the parametric penalty function $F_{\lambda}$ constructed via the ``coordinate'' descent method behaves as $n \to \infty$. The propositions below unify and significantly generalize many theorems on the convergence of minimization methods based on the use of smoothing approximations of exact penalty functions (cf.~Proposition~4 in \cite{Pinar}, Theorems~2.1 and 3.1 in \cite{MengDangYang}, Theorem~2.1 in \cite{Liu}, etc.).
\begin{proposition} \label{Prp_CoordDescentApproxPenFunc} Let the penalty function $g_{\lambda}$ be bounded below on $A$, $\Phi$ be an approximation of the penalty term $\phi$, and $F_{\lambda}(x, p) = f(x) + \lambda \Phi(x, p)$. Let a sequence $\{ p_n \} \subset P$ converge to the point $p_0$, and let a sequence $\{ x_n \} \subset A$ satisfy the inequality $$
F_{\lambda}(x_n, p_n) \le \inf_{x \in A} F_{\lambda}(x, p_n) + \varepsilon $$ for some $\varepsilon \ge 0$ and $\lambda > 0$. Then \begin{equation} \label{CoordDescentApproxPenFunc_Ineq}
\inf_{x \in A} g_{\lambda}(x) \le \liminf_{n \to \infty} g_{\lambda}(x_n) \le
\limsup_{n \to \infty} g_{\lambda} (x_n) \le \inf_{x \in A} g_{\lambda}(x) + \varepsilon. \end{equation} If, additionally, $f$ and $\phi$ are l.s.c. on $A$, and the set $A$ is closed, then any cluster point $x^*$ of the sequence $\{ x_n \}$ (if exists) belongs to the set $A$, and satisfies the inequality $g_{\lambda}(x^*) \le \inf_{x \in A} g_{\lambda}(x) + \varepsilon$. \end{proposition}
\begin{proof}
By the definition of approximation, $\Phi(\cdot, p) \to \phi(\cdot)$ as $p \to p_0$ uniformly on the set $A$. Therefore for any $\sigma > 0$ there exists $n_0 \in \mathbb{N}$ such that $| \Phi(x, p_n) - \phi(x) | < \sigma$ for all $n \ge n_0$ and $x \in A$. Consequently, for any $n \ge n_0$ the function $F_{\lambda}(\cdot, p_n)$ is bounded below on $A$, and $$
\big| \inf_{x \in A} F_{\lambda}(x, p_n) - \inf_{x \in A} g_{\lambda}(x) \big| \le \sigma. $$ Hence and from the definition of the sequence $\{ x_n \}$ it follows that $$
\inf_{x \in A} g_{\lambda}(x) - \sigma \le F_{\lambda}(x_n, p_n) \le
\inf_{x \in A} g_{\lambda}(x) + \varepsilon + \sigma \quad \forall n \ge n_0. $$
Then applying the fact that $|F_{\lambda}(x_n, p_n) - g_{\lambda}(x_n)| = | \Phi(x_n, p_n) - \phi(x_n) | < \sigma$ for all $n \ge n_0$ one obtains that inequalities (\ref{CoordDescentApproxPenFunc_Ineq}) are valid by virtue of the fact that $\sigma > 0$ is arbitrary.
If, additionally, the functions $f$ and $\phi$ are l.s.c. on $A$, and the set $A$ is closed, then taking into account inequalities (\ref{CoordDescentApproxPenFunc_Ineq}) one obtains that any cluster point $x^*$ of the sequence $\{ x_n \}$ satisfies the inequality $g_{\lambda}(x^*) \le \inf_{x \in A} g_{\lambda}(x) + \varepsilon$. \end{proof}
\begin{proposition} Let the penalty function $g_{\lambda}$ be exact, $\Phi$ be an approximation of the penalty term $\phi$, and $F_{\lambda}(x, p) = f(x) + \lambda \Phi(x, p)$. Let a sequence $\{ p_n \} \subset P$ converge to the point $p_0$, and let a sequence $\{ x_n \} \subset A$ satisfy the inequality $$
F_{\lambda}(x_n, p_n) \le \inf_{x \in A} F_{\lambda}(x, p_n) + \varepsilon_n $$ for some $\lambda > \lambda^*(f, \phi)$, and a decreasing sequence $\{ \varepsilon_n \} \subset (0, + \infty)$ such that $\varepsilon_n \to 0$ as $n \to \infty$. Then $\phi(x_n) \to 0$ as $n \to \infty$, and $\lim_{n \to \infty} f(x_n) = f^*$. If, additionally, $f$ and $\phi$ are l.s.c. on $A$, and the set $A$ is closed, then any cluster point $x^*$ of the sequence $\{ x_n \}$ (if exists) is a globally optimal solution of the problem ($\mathcal{P}$). \end{proposition}
\begin{proof} With the use of Proposition~\ref{Prp_CoordDescentApproxPenFunc} one can easily obtain that $$
\lim_{n \to \infty} g_{\lambda}(x_n) = \inf_{x \in A} g_{\lambda}(x). $$ Then applying Proposition~\ref{Prp_MinimizingSeq_ExactPenFunc} one gets the required result. \end{proof}
Let $\Phi$ be an approximation of the penalty term $\phi$. Denote $$
e(p) = \sup_{x \in A} |\Phi(x, p) - \phi(x)|. $$ By the definition of approximation one has $e(p) \to 0$ as $p \to p_0$. One can use the function $e(\cdot)$ in order to understand how well the parametric penalty function $F_{\lambda}(x, p) = f(x) + \lambda \Phi(x, p)$ approximates the standard penalty function $g_{\lambda}$.
The following proposition and its corollary unify many results on smoothing approximations of exact penalty functions (see, e.g., \cite{Pinar}, Propositions~1 and 2, \cite{MengDangYang}, Theorems~2.2, 2.3, 3.2, 3.3, \cite{Liu}, Theorems~2.2, 2.3 and 3.1, etc.).
\begin{proposition} \label{Prp_ApproxPenFuncEstimate} Let the penalty function $g_{\lambda_0}$ be bounded below on the set $A$. Then for any $\lambda \ge \lambda_0$ and for all $p \in \dom e$ one has \begin{equation} \label{InfEstimateApproxPenFunc}
\big| \inf_{x \in A} F_{\lambda}(x, p) - \inf_{x \in A} g_{\lambda}(x) \big| \le \lambda e(p). \end{equation} Furthermore, if the penalty function $g_{\lambda}$ is exact and $\lambda > \lambda^*(f, \phi)$, then for any $p \in \dom e$ such that the function $F_{\lambda}(\cdot, p)$ attains a global minimum on the set $A$, and for any $x_p \in \argmin_{x \in A} F_{\lambda}(x, p)$ one has $$
| f^* - f(x_p) | \le \lambda \big( e(p) + \Phi(x_p, p) \big). $$ \end{proposition}
\begin{proof} Fix arbitrary $\lambda \ge \lambda_0$ and $p \in \dom e$. For any $x \in A$ one has \begin{equation} \label{UniformEstimateApproxPenFunc}
|F_{\lambda}(x, p) - g_{\lambda}(x)| = \lambda | \Phi(x, p) - \phi(x) | \le \lambda e(p). \end{equation} Consequently, taking into account the fact that the penalty function $g_{\lambda}$ is bounded below on the set $A$ one obtains that the function $F_{\lambda}(\cdot, p)$ is bounded below on the set $A$ as well.
By the definition of infimum, for any $\varepsilon > 0$ there exists $x_{\varepsilon}$ such that $$
F_{\lambda}(x_{\varepsilon}, p) \le \inf_{x \in A} F_{\lambda}(x, p) + \varepsilon. $$ Hence and from \eqref{UniformEstimateApproxPenFunc} one obtains that $$
\lambda e(p) \ge g_{\lambda}(x_{\varepsilon}) - F_{\lambda}(x_{\varepsilon}, p) \ge
\inf_{x \in A} g_{\lambda}(x) - \inf_{x \in A} F_{\lambda}(x, p) - \varepsilon. $$ Passing to the limit as $\varepsilon \to + 0$ one gets $$
\inf_{x \in A} g_{\lambda}(x) - \inf_{x \in A} F_{\lambda}(x, p) \le \lambda e(p). $$ Arguing in a similar way one can easily verify that the inequality $$
\inf_{x \in A} F_{\lambda}(x, p) - \inf_{x \in A} g_{\lambda}(x) \le \lambda e(p) $$ is valid, which implies that \eqref{InfEstimateApproxPenFunc} holds true.
If the penalty function $g_{\lambda}$ is exact and $\lambda > \lambda^*(f, \phi)$, then $\inf_{x \in A} g_{\lambda}(x) = f^*$ by virtue of Proposition~\ref{Prp_EquivDefExPen}. Therefore applying inequality \eqref{InfEstimateApproxPenFunc} one obtains that for any $p \in \dom e$ such that the function $F_{\lambda}(\cdot, p)$ attains a global minimum on the set $A$, and for any
$x_p \in \argmin_{x \in A} F_{\lambda}(x, p)$ one has $| f^* - F_{\lambda}(x_p, p) | \le \lambda e(p)$. Consequently, taking into account the fact that the function $\Phi$ is nonnegative one gets that the following inequalities hold true $$
\lambda e(p) \ge | f^* - F_{\lambda}(x_p, p) | = |f^* - f(x_p) - \lambda\Phi(x_p, p)| \ge
|f^* - f(x_p)| - \lambda \Phi(x_p, p), $$ which implies the desired result. \end{proof}
\begin{proposition} Suppose that the penalty function $g_{\lambda_0}$ is bounded below on the set $A$ for some $\lambda_0 \ge 0$. Let a sequence $\{ p_n \} \subset P$ converge to $p_0$, and let $\{ \lambda_n \} \subset (\lambda_0, + \infty)$ be an increasing unbounded sequence such that $\lambda_n e(p_n) \to 0$ as $n \to \infty$. Suppose, finally, that a sequence $\{ x_n \} \subset A$ satisfies the inequality $$
F_{\lambda_n}(x_n, p_n) \le \inf_{x \in A} F_{\lambda_n}(x, p_n) + \varepsilon $$ for any sufficiently large $n \in \mathbb{N}$, and for some $\varepsilon > 0$. Then $\phi(x_n) \to 0$ as $n \to \infty$. Moreover, if the functions $f$ and $\phi$ are l.s.c. on $A$, and the set $A$ is closed, then any cluster point $x^*$ of the sequence $\{ x_n \}$ is a feasible point of the problem $(\mathcal{P})$, and $f^* \le f(x^*) \le f^* + \varepsilon$. \end{proposition}
\begin{proof} Taking into account the facts that the function $g_{\lambda_0}$ is bounded below on $A$, and $e(p) \to 0$ as $p \to p_0$, one obtains that the function $F_{\lambda_n}(\cdot, p_n)$ is bounded below for any sufficiently large $n \in \mathbb{N}$. Thus, the sequence $\{ x_n \}$ is well defined.
Recall that $g_{\lambda}$ is a penalty function for the problem $(\mathcal{P})$. Therefore for any $\lambda \ge 0$ one has $\inf_{x \in A} g_{\lambda}(x) \le f^*$. Hence and from Proposition~\ref{Prp_ApproxPenFuncEstimate} it follows that \begin{equation} \label{UpperEstimApproxPenFuncMinSeq}
F_{\lambda_n}(x_n, p_n) \le \inf_{x \in A} F_{\lambda_n}(x, p_n) + \varepsilon
\le \inf_{x \in A} g_{\lambda_n}(x) + \varepsilon + \lambda_n e(p_n) \le f^* + \varepsilon + \lambda_n e(p_n) \end{equation} for any sufficiently large $n$, which implies that the sequence $\{ F_{\lambda_n}(x_n, p_n) \}$ is bounded above due to the fact that $\lambda_n e(p_n) \to 0$ as $n \to \infty$.
Observe that for any $n \in \mathbb{N}$ one has \begin{multline*}
F_{\lambda_n}(x_n, p_n) = F_{\lambda_0}(x_n, p_n) + (\lambda_n - \lambda_0) \Phi(x_n, p_n) \ge \\
\ge g_{\lambda_0}(x_n) - \lambda_0 e(p_n) + (\lambda_n - \lambda_0) \Phi(x_n, p_n). \end{multline*} Consequently, taking into account the facts that the function $g_{\lambda_0}$ is bounded below on the set $A$, and $e(p_n) \to 0$ as $n \to \infty$ one obtains that $\Phi(x_n, p_n) \to 0$ as $n \to \infty$, since otherwise $\limsup_{n \to \infty} F_{\lambda_n}(x_n, p_n) = + \infty$, which is impossible. Therefore applying the inequality
$|\Phi(x_n, p_n) - \phi(x_n)| \le e(p_n)$ one gets that $\phi(x_n) \to 0$ as $n \to \infty$.
Suppose, now, that the functions $f$ and $\phi$ are l.s.c. on $A$, and the set $A$ is closed. Then applying the fact that $\phi(x_n) \to 0$ as $n \to \infty$ one obtains that any cluster point $x^*$ of the sequence $\{ x_n \}$ must satisfy the equality $\phi(x^*) = 0$, which implies that $x^*$ is a feasible point of the problem $(\mathcal{P})$. Hence $f(x^*) \ge f^*$. On the other hand, with the use of (\ref{UpperEstimApproxPenFuncMinSeq}), and the fact that the function $\Phi$ is nonnegative one gets that $f(x_n) \le f^* + \varepsilon + \lambda_n e(p_n)$. Passing to the limit as $n \to \infty$ one obtains the desired result. \end{proof}
At the end of this section, we provide two examples of smoothing approximations of exact penalty functions, and of exact parametric penalty functions constructed with the use of these smoothing approximations. Let, for the sake of simplicity, $X = \mathbb{R}^d$, and suppose that the problem $(\mathcal{P})$ has the form \begin{equation} \label{InequalConstrOptimization}
\min f(x) \quad \text{subject to} \quad g_i(x) \le 0, \quad i \in \{ 1, \ldots, m \}, \end{equation} where $g_i \colon \mathbb{R}^d \to \mathbb{R}$ are given functions. Let also $P = \mathbb{R}_+$ and $p_0 = 0$.
\begin{example} \cite{Liu} Let $\phi$ be the $\ell_1$ penalty term for the problem (\ref{InequalConstrOptimization}), i.e. let $$
\phi(x) = \sum_{i = 1}^m \max\{ 0, g_i(x) \}. $$ For any $p > 0$ define $$
\theta(t, p) = \begin{cases}
\dfrac{1}{2} p e^{t/p}, & \text{if } t \le 0, \\
t + \dfrac{1}{2} p e^{- t/p}, & \text{if } t > 0,
\end{cases} $$ and define $\theta(t, 0) = \max\{ 0, t \}$. Set $$
\Phi(x, p) = \sum_{i = 1}^m \theta(g_i(x), p) \quad \forall x \in \mathbb{R}^d. $$ Note that $0 \le \theta(t, p) - \max\{ 0, t \} \le p/2$ for any $t \in \mathbb{R}$. Therefore the function $\Phi$ is an upper approximation of the penalty term $\phi$, and $e(p) \le m p / 2$ for any $p \ge 0$.
Thus, the parametric penalty function $F_{\lambda}(x, p) = f(x) + \lambda \big( \Phi(x, p) + p \big)$ is exact if and only if the $\ell_1$ penalty function $h_{\lambda}(x) = f(x) + \lambda \phi(x)$ is exact. Furthermore, observe that if the functions $f$ and $g_i$ are twice continuously differentiable on $\mathbb{R}^d$, then the function $F_{\lambda}(\cdot, p)$ is twice continuously differentiable on $\mathbb{R}^d$ for any $p > 0$. \end{example}
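The per-constraint bound $0 \le \theta(t, p) - \max\{ 0, t \} \le p/2$ can be checked numerically. The sketch below (the function name \texttt{theta} and the test grid are ours, not taken from \cite{Liu}) verifies the bound on a grid and illustrates that the gap attains its maximum $p/2$ at $t = 0$:

```python
import math

def theta(t, p):
    """Exponential smoothing of max(0, t); theta(., p) is smooth for p > 0."""
    if p == 0.0:
        return max(0.0, t)
    if t <= 0.0:
        return 0.5 * p * math.exp(t / p)
    return t + 0.5 * p * math.exp(-t / p)

# Check the uniform bound 0 <= theta(t, p) - max(0, t) <= p/2 on a grid.
for p in (0.5, 0.1, 0.01):
    gaps = [theta(t, p) - max(0.0, t) for t in [x / 10.0 for x in range(-50, 51)]]
    assert all(0.0 <= g <= p / 2 + 1e-12 for g in gaps)
    assert max(gaps) == p / 2  # the gap is largest at t = 0, where it equals p/2
```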
\begin{example} \cite{LiuzziLucidi} Let $\phi$ be the $\ell_{\infty}$ penalty term for the problem (\ref{InequalConstrOptimization}), i.e. let $$
\phi(x) = \max\left\{ 0, g_1(x), \ldots, g_m(x) \right\}. $$ For any $p > 0$ define $$
\Phi(x, p) = p \ln \Big( 1 + \sum_{i = 1}^m \exp\big( g_i(x) / p \big) \Big) $$ (see, e.g., \cite{Xu}; the additional term $1 = \exp(0 / p)$ accounts for the zero inside the maximum defining $\phi$), and set $\Phi(x, 0) = \phi(x)$. Note that for any $p > 0$ and $x \in \mathbb{R}^d$ one has $$
\Phi(x, p) = \phi(x) + p \ln \Big( \exp\big( - \phi(x) / p \big) + \sum_{i = 1}^m \exp\big( (g_i(x) - \phi(x)) / p \big) \Big), $$ and since every exponent is nonpositive and at least one of the $m + 1$ terms under the logarithm equals $1$, the sum under the logarithm lies in $[1, m + 1]$. Hence $$
\phi(x) \le \Phi(x, p) \le \phi(x) + p \ln (m + 1) \quad \forall (x, p) \in \mathbb{R}^d \times \mathbb{R}_+. $$ Therefore the function $\Phi$ is an upper approximation of the penalty term $\phi$, and $e(p) \le p \ln (m + 1)$.
Thus, the parametric penalty function $F_{\lambda}(x, p) = f(x) + \lambda \big( \Phi(x, p) + p \big)$ is exact if and only if the $\ell_{\infty}$ penalty function $h_{\lambda}(x) = f(x) + \lambda \phi(x)$ is exact. Moreover, note that if the functions $f$ and $g_i$ are twice continuously differentiable on $\mathbb{R}^d$, then the function $F_{\lambda}$ is twice continuously differentiable on $\mathbb{R}^d \times (0, + \infty)$. \end{example}
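As a sanity check (the helper names and sample points are ours), the following sketch verifies numerically that the log-sum-exp smoothing, taken with the extra $\exp(0) = 1$ term inside the logarithm so that $\Phi \ge \phi$ also holds at strictly feasible points, satisfies $\phi(x) \le \Phi(x, p) \le \phi(x) + p \ln(m + 1)$:

```python
import math

def phi(g):
    """l-infinity penalty term: max{0, g_1, ..., g_m}."""
    return max(0.0, *g)

def Phi(g, p):
    """Log-sum-exp smoothing; the '1 +' term plays the role of exp(0/p)."""
    if p == 0.0:
        return phi(g)
    return p * math.log(1.0 + sum(math.exp(gi / p) for gi in g))

# Check phi(x) <= Phi(x, p) <= phi(x) + p*ln(m+1) at a few sample points.
samples = [[-1.0, -2.0], [0.5, -0.3], [2.0, 1.0], [0.0, 0.0]]
for g in samples:
    m = len(g)
    for p in (1.0, 0.1, 0.01):
        v, w = phi(g), Phi(g, p)
        assert v <= w <= v + p * math.log(m + 1) + 1e-12
```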
\end{document}
\begin{document}
\maketitle
\begin{abstract} An invariant measure for a flow is, of course, an invariant measure for any of its time-$t$ maps. But the converse is far from being true. Hence, one may naturally ask: what is the obstruction for an invariant measure for the time-one map to be invariant for the flow itself? We give an answer in terms of measure disintegration. Surprisingly, all it takes is for the measure not to be ``too pathological in the orbits''. We prove the following rigidity result: if $\mu$ is an ergodic probability for the time-one map of a flow, then it is either highly pathological in the orbits, or it is highly regular (i.e.\ invariant for the flow). In particular, this measure rigidity result also holds for measurable flows, by the classical representation theorem of Ambrose and Kakutani for measurable flows. \end{abstract}
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction}
A basic question answered in an introductory ergodic theory course is that one may always find invariant probability measures for many dynamical systems. Two particularly large classes of such dynamical systems are homeomorphisms and continuous flows on a compact manifold. We say that a measurable map $h:X \rightarrow X$ preserves a probability $\mu$ if for every measurable set $A \subset X$ we have $\mu(h^{-1}(A))=\mu(A)$. If $\phi:\mathbb R \times X \rightarrow X$ is a flow, we say that the flow $\phi$ preserves the measure $\mu$ if $\mu$ is preserved by every time-$t$ map $\phi_t:=\phi(t,\cdot)$. We say that a measure $\mu$ is ergodic for a certain dynamical system if any invariant set has either full measure or zero measure. It is not at all expected that an ergodic probability for the time-$t$ map be ergodic (in particular, invariant) for the flow itself. Hence a natural question arises:
\textbf{Question:} \textit{What is the obstruction for an ergodic measure for the time-one map to be ergodic (in particular invariant) for the flow itself?}
To the best of our knowledge, even though this seems to be a natural question, it has not been treated in the literature. We are able to give a precise answer to this question in terms of measure disintegration. Surprisingly, all it takes is for the measure not to be ``too pathological in the orbits''. That is, we prove a measure rigidity result: if $\mu$ is an ergodic probability for the time-one map of a flow, then it is either highly pathological in the orbits, or it is highly regular (i.e.\ invariant for the flow). This is our main result:
\begin{maintheorem}\label{theo:continuous.flow} Let $X$ be a separable metric space and $\phi:\mathbb R \times X \rightarrow X$ be a continuous flow. Denote by $\mathcal F=\{\mathcal F(x)\}_{x\in X}$ the foliation of $X$ by orbits of the flow $\phi$. Given any ergodic Borel measure $\mu$ for the time-$1$ map $\phi_1:= \phi(1,\cdot):X\rightarrow X$, then \begin{enumerate} \item either there is a set $A \subset X$ of full $\mu$-measure such that $\mathcal F(x) \cap A$ is a discrete subset of $\mathcal F(x)$. Moreover, there is a natural number $k\geq 1$ such that $\mathcal F(x) \cap A$ is the $\phi_1-$orbit of exactly $k$ points; or \item for $\mu$-almost every $x\in X$ there is a measure $\mu_{\mathcal F(x)}$ on $\mathcal F(x)$ such that
\[ \mu_{\mathcal F(x)}(\phi([0,t] \times \{x\})) = 2^{-1}t, \] as long as $\tau \mapsto \phi(\tau,x)$ is injective on $[0,t]$, and where $\mu_{\mathcal F(x)}$ is a $\phi_1$-invariant measure which, normalized and restricted to a foliated chart of the orbit, is a disintegration of $\mu$. In particular, $\mu$ is invariant for the flow.
\end{enumerate} \end{maintheorem}
In \cite{Ambrose, AmbroseKakutani} W. Ambrose and S. Kakutani proved a remarkable representation theorem for measurable flows which can be summarized as follows: every measurable measure preserving flow on a Lebesgue space is isomorphic to a flow built under a function (see \cite{AmbroseKakutani} for the definition). This result was later strengthened and extended to larger classes of measurable flows (e.g. non-singular flows) by D. Rudolph \cite{Rudolph}, S. Dani \cite{Dani}, U. Krengel \cite{Krengel, Krengel2} and I. Kubo \cite{Kubo}. In \cite{Wagh} V. Wagh gave a descriptive version of Ambrose-Kakutani's theorem and more recently D. McClendon \cite{McClendon} proved a version of Ambrose-Kakutani's theorem for Borel countable-to-one semi-flows.
\begin{corollary}\label{theorem:A} Let $\phi$ be a measurable flow defined on a Lebesgue space $X$ and $\mathcal F(x)$ be the $\phi-$orbit of the point $x$. Then, given any $\phi_1-$ergodic invariant measure either \begin{enumerate} \item there is a set $A \subset X$ of full $\mu$-measure such that $\mathcal F(x) \cap A$ is a discrete subset of $\mathcal F(x)$. Moreover, there is a natural number $k\geq 1$ such that $\mathcal F(x) \cap A$ is the $\phi_1-$orbit of exactly $k$ points; or \item for $\mu$-almost every $x\in X$ there is a measure $\mu_{\mathcal F(x)}$ on $\mathcal F(x)$ such that
\[ \mu_{\mathcal F(x)}(\phi([0,t] \times \{x\})) = 2^{-1}t, \] as long as $\tau \mapsto \phi(\tau,x)$ is injective on $[0,t]$, $t\geq 0$, and where $\mu_{\mathcal F(x)}$ is a $\phi_1$-invariant measure which, normalized and restricted to a foliated chart of the orbit, is a disintegration of $\mu$.
\end{enumerate}
\end{corollary}
This corollary follows as a direct consequence of the classical Ambrose-Kakutani Theorem (see Theorem~\ref{theorem:ambrose.kakutani} in Subsection~\ref{subsec:Mflows}).
\section{Preliminaries on measure theory}\label{sec:preliminaries}
\subsection{Measurable partitions and Rohklin's Theorem}
Let $(X, \mu, \mathcal B)$ be a probability space, where $X$ is a compact metric space, $\mu$ a probability measure and $\mathcal B$ the Borel $\sigma$-algebra of $X$. Given a partition $\mathcal P$ of $X$ by measurable sets, we associate the probability space $(\mathcal P, \widetilde \mu, \widetilde{\mathcal B})$ in the following way. Let $\pi:X \rightarrow \mathcal P$ be the canonical projection, that is, $\pi$ maps a point $x$ of $X$ to the partition element of $\mathcal P$ that contains it. Then we define $\widetilde \mu := \pi_* \mu$ and
$\widetilde B \in \widetilde{\mathcal B}$ if and only if $\pi^{-1}(\widetilde B) \in \mathcal B$.
\begin{definition} \label{definition:conditionalmeasure}
Given a partition $\mathcal P$, a family $\{\mu_P\}_{P \in \mathcal P}$ is a \textit{system of conditional measures} for $\mu$ (with respect to $\mathcal P$) if \begin{itemize}
\item[i)] given $\phi \in C^0(X)$, the map $P \mapsto \int \phi \, d\mu_P$ is measurable; \item[ii)] $\mu_P(P)=1$ $\widetilde \mu$-a.e.; \item[iii)] if $\phi \in C^0(X)$, then $\displaystyle{ \int_X \phi \, d\mu = \int_{\mathcal P}\left(\int_P \phi \, d\mu_P \right)d\widetilde \mu }$. \end{itemize} \end{definition}
When it is clear which partition we are referring to, we say that the family $\{\mu_P\}$ \textit{disintegrates} the measure $\mu$ or that it is the \textit{disintegration of $\mu$ along $\mathcal P$}.
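In the finite setting, Definition~\ref{definition:conditionalmeasure} is completely explicit: the conditional measures are normalized restrictions, and property iii) reduces to the law of total probability. The following minimal numerical illustration (the finite space, the measure and the partition are our choices) checks properties ii) and iii) with exact rational arithmetic:

```python
from fractions import Fraction as F

# A probability mu on X = {0,...,5} and a partition P of X into blocks.
X = range(6)
mu = {0: F(1, 6), 1: F(1, 6), 2: F(1, 12), 3: F(1, 4), 4: F(1, 6), 5: F(1, 6)}
partition = [{0, 1}, {2, 3}, {4, 5}]

# Quotient measure: tilde_mu(P) = mu(pi^{-1}(P)) = mu(P).
tilde_mu = [sum(mu[x] for x in P) for P in partition]

# Conditional measures mu_P(x) = mu(x) / mu(P): each is a probability on P.
cond = [{x: mu[x] / tilde_mu[i] for x in P} for i, P in enumerate(partition)]
assert all(sum(c.values()) == 1 for c in cond)  # property ii)

# Property iii): the integral of a test function equals the double integral.
test_phi = {x: F(x * x) for x in X}
lhs = sum(test_phi[x] * mu[x] for x in X)
rhs = sum(tilde_mu[i] * sum(test_phi[x] * c[x] for x in c)
          for i, c in enumerate(cond))
assert lhs == rhs
```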
\begin{proposition} \label{prop:uniqueness} \cite{EW, Ro52}
Given a partition $\mathcal P$, if $\{\mu_P\}$ and $\{\nu_P\}$ are conditional measures that disintegrate $\mu$ on $\mathcal P$, then $\mu_P = \nu_P$ $\widetilde \mu$-a.e. \end{proposition}
\begin{definition} \label{def:mensurable.partition} We say that a partition $\mathcal P$ is measurable (or countably generated) with respect to $\mu$ if there exist a measurable family $\{A_i\}_{i \in \mathbb N}$ and a measurable set $F$ of full measure such that if $B \in \mathcal P$, then there exists a sequence $\{B_i\}$, depending on $B$, where $B_i \in \{A_i, A_i^c \}$ such that $B \cap F = \bigcap_i B_i \cap F$. \end{definition}
\begin{theorem}[Rokhlin's disintegration \cite{Ro52}] \label{theo:rokhlin}
Let $\mathcal P$ be a measurable partition of a compact metric space $X$ and $\mu$ a Borel probability. Then there exists a disintegration of $\mu$ along $\mathcal P$. \end{theorem}
\subsection{Souslin Theory}
We list some basic properties of Souslin sets. All the results cited here can be found in \cite[Chapter $6$]{BogachevII}.
\begin{definition} Given a Hausdorff space $X$, a subset $A \subset X$ is called Souslin if it is the image of a complete separable metric space under a continuous mapping. We say that the Hausdorff space $X$ is a Souslin space if it is a Souslin set. By convention, we define the empty set to be a Souslin set. \end{definition}
Observe that, by definition, if $X$ and $Y$ are Hausdorff spaces and $A \subset X$ is Souslin, then given any continuous function $f:X \rightarrow Y$, the image $f(A) \subset Y$ is Souslin. Given Souslin spaces $X$ and $Y$, the product $X\times Y$ is a Souslin space and the images of a Souslin set $A\subset X\times Y$ under the projections $\pi_1:X\times Y \rightarrow X$, $\pi_2:X\times Y \rightarrow Y$ are Souslin sets. Notice that the image of a Borel set, even under a well-behaved continuous function such as a projection, may fail to be a Borel set. In fact, this was the source of a classical mistake committed by Lebesgue \cite{Lebesgue} and corrected by Souslin \cite{Souslin}.
Every Borel subset of a Souslin space is itself a Souslin space. Also, Souslin sets of Souslin spaces are preserved under Borel maps, that is, given $X$ and $Y$ Souslin spaces, $f:X\rightarrow Y$ a Borel map and $A\subset X$, $B\subset Y$ Souslin sets, then $f(A)$ and $f^{-1}(B)$ are Souslin sets. Although the complement of a Souslin set may not be a Souslin set, if the base space is Hausdorff and the complement of a Souslin set is a Souslin set it turns out that the original set is in fact a Borel set. Another interesting property of Souslin sets is that they are universally measurable sets.
\subsection{Measurable flows} \label{subsec:Mflows} Let $(X,\mathcal B, \mu)$ be a Lebesgue space.
In this section we briefly recall some basic notions on the structure of measurable flows and we refer the reader to \cite{Ambrose,AmbroseKakutani} for more on the subject.
\begin{definition} A flow $\phi$ in $X$ is a one-parameter group $\{\phi_t\}_{t\in \mathbb R}$ of measure preserving transformations $\phi_t :X \rightarrow X$. If $x\in X$ and $\phi$ is a flow on $X$, we say that the set $\{\phi_t(x): t\in \mathbb R\}$ is the trajectory of $x$, or the orbit of $x$ under the flow. A flow $\phi : \mathbb R \times X \rightarrow X$ is said to be measurable if $\phi$ is a measurable function, that is, for any measurable set $Y \subset X$ the set $\{(t,x) : \phi_t(x) \in Y\}$ is a measurable set in the product space $\mathbb R \times X$, where the measure is the product of the Lebesgue measure on $\mathbb R$ and $\mu$. \end{definition}
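For a concrete illustration (not needed in the sequel), take $X = \mathbb R/\mathbb Z$ with Lebesgue measure $\mu$ and \[\phi_t(x) = x + t \pmod 1.\] Each $\phi_t$ preserves $\mu$ and $(t,x)\mapsto \phi_t(x)$ is continuous, hence measurable, so $\phi$ is a measurable flow whose orbits are all of $X$. The analogous construction on $\mathbb T^2$, flowing along a line of irrational slope, gives the Kronecker flow appearing in Remark \ref{remark:class.measures}.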
Two flows $\phi = (\phi_t)_{t\in \mathbb R}$ on the Lebesgue space $(X,\mathcal B, \mu)$ and $\psi = (\psi_t)_{t\in \mathbb R}$ on the Lebesgue space $(Y,\mathcal C,\nu)$ are said to be \textit{isomorphic} if there exist invariant full measure sets $X_0 \subset X$, $Y_0 \subset Y$ and an invertible measure preserving transformation $\rho:X_0 \rightarrow Y_0$ such that \[ \rho \circ \phi_t = \psi_t \circ \rho\] for all $t\in \mathbb R$.
The following classical result of Ambrose-Kakutani shows that measure preserving flows on Lebesgue spaces can be represented as continuous flows on metric spaces.
\begin{theorem}[Ambrose-Kakutani \cite{AmbroseKakutani}] \label{theorem:ambrose.kakutani} Let $\{\phi_t\}$ be a measure preserving measurable flow defined on a Lebesgue space $(X,\mathcal B, \mu)$. Then $\{\phi_t\}$ is isomorphic to a continuous flow on a separable metric space $M$ endowed with a measure $\lambda$ such that \begin{itemize} \item[1)] every open set has positive $\lambda$-measure; \item[2)] $\lambda$ is a regular measure. \end{itemize} \end{theorem}
\subsection{Measurable choice} We finish this preliminary section with a result by R. J. Aumann \cite{Aumann} which, although it comes from decision theory in economics, lies in the realm of measure theory. This result will be used in the study of the atomic case.
\begin{theorem}[Measurable Choice Theorem \cite{Aumann}] \label{theo:MCT} Let $(T,\mu)$ be a $\sigma$-finite measure space, let $S$ be a Lebesgue space, and let $G$ be a measurable subset of $T\times S$ whose projection on $T$ is all of $T$. Then there is a measurable function $g:T\rightarrow S$, such that $(t,g(t)) \in G$ for almost all $t \in T$. \end{theorem}
\section{Fibered spaces and disintegration} \label{sec:FiberedSpaces} Given a continuous foliation $\mathcal F$ of a non-atomic Lebesgue probability space $X$, it is useful to look at $\mathcal F$ as fibers over a certain base space. It is not true that we can always choose a measurable set intersecting each plaque $\mathcal F(x)$ in exactly one point (the simplest example being the irrational linear foliation on the $2$-torus), so the quotient space $X/ \mathcal F$ is not always a good candidate for a base of a fibered space. In the light of this observation, instead of taking the quotient by the plaques we construct a fibered-type space over $X$ by literally attaching over each $x\in X$ the plaque $\mathcal F(x)$.
\begin{definition}\label{defi:fibered.type.space} Given a space $X$ and a family $\mathcal P$ of subsets of $X$, we can construct a natural fibered-type space over $X$ whose fibers are given by the elements of the family $\mathcal P$. More precisely, we define the space \[X^{\mathcal P} = \bigcup_{x\in X}\{x\}\times \mathcal P(x) \subset X\times X,\] endowed with the $\sigma$-algebra induced by the product $\sigma$-algebra on $X\times X$.
We call $X^{\mathcal P}$ the $(X,\mathcal P)$-fibered space or simply the $\mathcal P$-fibered space. Each subset $\{x\}\times \mathcal P(x) \subset X^{\mathcal P}$ is called the fiber of $x$ on $X^{\mathcal P}$. \end{definition}
Given a continuous foliation $\mathcal F$ of a non-atomic Lebesgue probability space $(X,\mu)$, consider a local chart \[\varphi_x:(0,1)\times (0,1)^k \rightarrow U\] of $\mathcal F$. The partition $\mathcal V = \{ \{x\} \times (0,1)^k \}$ is a measurable partition of $(0,1)\times (0,1)^k$ with respect to any Borel measure $\mu$, due to the separability of $(0,1)^k$. Hence, on a local chart, the partition given by the segments of leaves $\varphi_x(\{x\} \times (0,1)^k )$ forms a measurable partition of $U$ for any Borel measure on $U$. That means we can always disintegrate a measure on a local chart. Although the partition by the leaves of a foliation is not necessarily a measurable partition, the next result allows us to say that the disintegration of a measure is atomic on the leaves, or absolutely continuous with respect to Lebesgue on the leaves, since these properties persist independently of the foliated box one uses to disintegrate.
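As a minimal illustration of such a disintegration on a chart (a standard instance of Fubini's theorem, recorded here only as an example), take $k=1$ and let $\mu$ be the two-dimensional Lebesgue measure on $(0,1)\times(0,1)$. Then \[\mu = \int_{(0,1)} \mu_x \, d\hat{\mu}(x),\] where $\hat\mu$ is the one-dimensional Lebesgue measure on the base $(0,1)$ and each conditional measure $\mu_x$ is the one-dimensional Lebesgue measure on the vertical $\{x\}\times(0,1)$. More generally, any product measure disintegrates along $\mathcal V$ into normalized copies of its second factor.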
\begin{proposition} \label{prop:disintegration.unbounded} If $U_1$ and $U_2$ are described by the local charts $\varphi_{x_1}$ and $\varphi_{x_2}$ of $\mathcal F$ respectively, then the conditional measures $\mu_x^{U_1}$ and $\mu_x^{U_2}$, of $\mu$ on $U_1$ and $U_2$ respectively, coincide up to a constant on $U_1 \cap U_2$. \end{proposition} \begin{proof} It follows from \cite[Proposition 5.17]{El.pisa}. \end{proof}
\begin{definition} We say that a probability $\mu$ has atomic disintegration with respect to a foliation if its conditional measures on any foliated box are sum of Dirac measures. \end{definition}
\begin{remark}\label{remark:class.measures} Consider the classical volume preserving Kronecker irrational flow on the torus $\mathbb T^2$. Let $\mathcal F$ be the continuous foliation given by the orbits of this flow; this is not a measurable partition in the sense of Definition \ref{def:mensurable.partition}. Hence we cannot apply Rokhlin's disintegration Theorem \ref{theo:rokhlin} even to such apparently well-behaved continuous foliations. But we may always disintegrate locally and compare two local disintegrations by the above result. The proposition above implies that we can talk about the disintegration of a measure over a foliation even if it does not form a measurable partition, as long as we keep in mind that by a disintegration we understand that on each plaque there is a class of conditional measures which differ from one another by multiplication by a constant; we denote this system of conditional measures by $\{[\mu_x]\}$. More precisely, for a given foliation $\mathcal F$, on each plaque $\mathcal F(x)$ there is a family of measures $[\mu_x]$ defined on $\mathcal F(x)$ such that if $\eta \in [\mu_x]$ then $\mu_x = \alpha \eta$ for some positive constant $\alpha \in \mathbb R$. On a foliated box, once these measures are normalized, they form a disintegration of the measure $\mu$ on this foliated box. \end{remark}
\subsection{Disintegration on the unitary fibered space} In this section the foliation $\mathcal F$ comes from the orbits of a continuous flow $\phi$ on a separable metric space $X$. Hence $\mathcal F(x)$ is the orbit of $x$ through the flow $\phi$. Denote $B_{\mathcal F}(x,r):=\phi((-r,r)\times \{x\})$ and consider the family of sets \[\mathcal F^1 = \{ \mathcal F^1(x):= \{x\} \times B_{\mathcal F}(x,1)\}_x.\] For convenience, denote by $X_1^{\mathcal F}$ the $(X,\mathcal F^1)$-fibered space, that is,
\[X_1^{\mathcal F} = \bigcup_{x \in X}\mathcal F^1(x).\]
\begin{lemma} \label{lemma:measurable.for.unbounded} The partition of $X_1^{\mathcal F}$ by the verticals $\mathcal F^1(x)$ is a measurable partition with respect to any measure on $X_1^\mathcal{F}$. \end{lemma} \begin{proof} Let $\{U_i\} \subset X$ be a countable basis of open sets of $X$. By the definition of $\mathcal F$, the $\mathcal F$-saturation $\mathcal F(U_i)$ of $U_i$ is given by $\phi((-\infty,+\infty) \times U_i)$, which is a measurable set since the flow is continuous. Let $V_i:= (U_i \times \mathcal F(U_i)) \cap X_1^{\mathcal F}$. Each $V_i$ is a measurable set in $X^{\mathcal F}_1$. Now, it is easy to see that each fiber can be written as an intersection of sets from the countable family $\{V_i\}$ and their complements.
\end{proof}
\begin{proposition} \label{prop:unbounded.measurable}
For each $x\in X$ denote by $\mu^1_x$ the measure on the equivalence class $[\mu_x]$ (as defined on Remark \ref{remark:class.measures}) such that $\mu^1_x$ is a probability measure when restricted to $B_{\mathcal F}(x,1)$. Then \[x\mapsto \mu^1_x\] is a measurable map, that is, given any measurable set $W\subset X$ the function \[x\mapsto \mu^1_x(W)\] is a measurable function. \end{proposition} \begin{proof} On the fibered space $X^{\mathcal F}_1$ consider the measure $\widetilde{\mu}$ defined by $$\widetilde \mu (\widetilde{A}) = \int_X \mu^1_x(\widetilde{A}_x) d\mu(x), $$
for any measurable set $\widetilde{A} \subset X^{\mathcal F}_1$, where $\widetilde{A}_x = \{y\in B_{\mathcal F}(x,1): (x,y) \in \widetilde{A}\}$. Since the vertical partition on $X^{\mathcal F}_1$ is a measurable partition by Lemma \ref{lemma:measurable.for.unbounded}, the probability measure $\widetilde \mu$ has a Rokhlin disintegration along the leaves, for which the conditional measures vary measurably with the base point. By uniqueness and by the definition of $\widetilde \mu$ we have that the conditional measure on the plaque $\{x\} \times B_{\mathcal F}(x,1)$ is exactly $\mu^1_x$. By the properties of the Rokhlin disintegration it follows that given any measurable set $\widetilde{W} \subset X^{\mathcal F}_1$ we have that $x\mapsto \mu^1_x(\widetilde{W}_x)$ is a measurable function. Given any measurable set $W\subset X$ let \[\widetilde{W}:= \bigcup_{x\in W} \{x\}\times [W\cap B_{\mathcal F}(x,1)].\] Thus $\widetilde{W}_x = W\cap B_{\mathcal F}(x,1)$ and then we have that \[x\mapsto \mu^1_x(W \cap B_{\mathcal F}(x,1)) = \mu^1_x(W)\] is a measurable function in $x$, as we wanted to show. \end{proof}
\begin{proposition} \label{prop:disinmeasurable2} For each $r\in (0,\infty)$ the function \[x\mapsto \mu^1_x(B_{\mathcal F}(x,r))\] is a measurable function. \end{proposition} \begin{proof} For each fixed $0\leq r \leq 1$, define the following $r$-top function \[f_r: X \rightarrow X_1^{\mathcal F}, \quad f_r(z) = (z,\phi_r(z)).\] Let \[W := \bigcup_{z\in X} [f_{-r}(z),f_r(z)]_z,\] where $[x, y]_z$ denotes the closed vertical segment connecting $x$ and $y$ on $\mathcal F(z)$. Observe that $W$ is Borel since it is a compact set. By the measurability of $x\mapsto \mu^1_x(W)$ we have that $z\mapsto \mu^1_z(B_{\mathcal F}(z,r))$ is a measurable function. Since $x\mapsto \mu_{x}^1(B_{\mathcal F}(x,r_0))$ is $\phi_1$-invariant for every fixed $0\leq r_0 \leq 1$, we conclude that for any $r\in (0,\infty)$ the function $x\mapsto \mu_{x}^1(B_{\mathcal F}(x,r))$ is measurable. \end{proof}
\begin{corollary} \label{cor:jointlymeasurable2} If $\{[\mu_x]\}$ is a non-atomic system of conditional measures then, for each typical $x\in X$, the function \[r\mapsto \mu^1_x(B_{\mathcal F}(x,r))\] is continuous. Furthermore, the function \[(x,r) \mapsto \mu^1_x(B_{\mathcal F}(x,r))\] is jointly measurable. \end{corollary} \begin{proof} Let $x\in X$ be a $\mu$-typical point, so that $\mu^1_x$ is a non-atomic measure on $\mathcal F(x)$. First, let us prove that $r \mapsto \mu^1_x(B_{\mathcal F}(x,r))$ is a continuous function. Let $\varepsilon_n \searrow \varepsilon \in (0,\infty)$; then $\mu^1_{x}(B_{\mathcal F}(x,\varepsilon_n)) = \mu^1_x(B_{\mathcal F}(x,\varepsilon)) + \mu^1_x(B_{\mathcal F}(x,\varepsilon_n)\setminus B_{\mathcal F}(x,\varepsilon))$. Because $\mu^1_x$ is non-atomic, $$\lim_{n\rightarrow \infty}\mu^1_x(B_{\mathcal F}(x,\varepsilon_n)\setminus B_{\mathcal F}(x,\varepsilon)) =0.$$ Then $$\mu^1_{x}(B_{\mathcal F}(x,\varepsilon_n)) \rightarrow \mu^1_x(B_{\mathcal F}(x,\varepsilon)).$$ By Proposition \ref{prop:disinmeasurable2} we know that $x \mapsto \mu^1_x(B_{\mathcal F}(x,r))$ is a measurable function, therefore the function $(x,r) \mapsto \mu^1_{x}(B_{\mathcal F}(x,r))$ is a Carath\'eodory function (i.e. measurable in one variable and continuous in the other, see \cite[Definition 4.50]{InfDimAna}); in particular it is jointly measurable \cite[Lemma 4.51]{InfDimAna}. \end{proof}
\subsection{The leafwise measure distortion}
The last concept we introduce in this section is that of leafwise measure distortion.
\begin{definition} \label{defi:distortion2} Let $(X,\mu)$ be a non-atomic Lebesgue space and $\mathcal F$ be a continuous foliation of $X$ induced by the orbits of a continuous flow $\phi_t$. Let $\{[\mu_{x}]\}$ denote the system of equivalence classes of conditional measures along $\mathcal F$. We define the upper and lower $\mu$-distortion at $x$ respectively by \[\overline{\Delta(\mu)}(x):= \limsup_{\varepsilon \rightarrow 0}\frac{\mu^1_x(B_{\mathcal F}(x,\varepsilon))}{\varepsilon} ,\quad \underline{\Delta(\mu)}(x):= \liminf_{\varepsilon \rightarrow 0}\frac{\mu^1_x(B_{\mathcal F}(x,\varepsilon))}{\varepsilon},\] where $\mu^1_x$ is taken to be the measure on the class of $[\mu_{x}]$ which gives weight one to $B_{\mathcal F}(x,1)$. If the upper and lower distortions at $x$ are equal then we just call it the $\mu$-distortion at $x$ and denote by \[\Delta(\mu)(x):= \lim_{\varepsilon \rightarrow 0}\frac{\mu^1_x(B_{\mathcal F}(x,\varepsilon))}{\varepsilon}.\] \end{definition}
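As a sanity check for Definition \ref{defi:distortion2} (an illustration only), suppose that $\mu^1_x$ is the push-forward under $t\mapsto \phi_t(x)$ of the normalized Lebesgue measure on $(-1,1)$, so that $\mu^1_x(B_{\mathcal F}(x,1))=1$. Then \[\mu^1_x(B_{\mathcal F}(x,\varepsilon)) = \frac{2\varepsilon}{2}=\varepsilon, \qquad \text{so} \qquad \Delta(\mu)(x) = \lim_{\varepsilon \rightarrow 0}\frac{\varepsilon}{\varepsilon} = 1.\] At the other extreme, if $\mu^1_x$ has an atom at $x$ then $\overline{\Delta(\mu)}(x) = \underline{\Delta(\mu)}(x) = \infty$.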
\section{Proof of the main result} \label{sec:DDL}
We proceed to the proof of our main result, Theorem \ref{theo:continuous.flow}, but first we provide a sketch of its proof.
\subsection{Sketch of the proof of Theorem \ref{theo:continuous.flow}} The proof will be made in two steps. The first, and easier, case is the atomic case. The second, the non-atomic case, is the one where the main ideas appear.
The first observation is that ergodicity implies that the upper (resp. lower) $\mu$-distortion at $x$ is constant almost everywhere. Then, using the $\varphi_1$-invariance of the family $\{B_{\mathcal F}(x,r)\}$ and the ergodicity of the measure, we obtain some uniformity of the upper (resp. lower) $\mu$-distortion, in the sense that along a certain sequence $(\varepsilon_k)_k$, $\varepsilon_k \rightarrow 0$, the ratios appearing in Definition \ref{defi:distortion2} converge to the upper (resp. lower) $\mu$-distortion with the same rate for almost every point $x \in X$. This is proven in Lemmas \ref{lema:sequenciaboa} and \ref{lema:sequenciaboa2}. Once this uniformity of the upper (resp. lower) distortion is proven, we turn our attention to the set $\overline{\Pi}$ (resp. $\underline{\Pi}$) of all points where such uniformity occurs, and to its topological characteristics when restricted to a plaque. To be more precise, we prove in Lemma \ref{lemma:closed} that the set of points for which the uniform distortion occurs is closed in each plaque intersecting it. The last step consists of analyzing the set $D$ of points $x$ for which $\overline{\Pi}$ is dense in $\mathcal F(x)$, that is, $\overline{\Pi}\cap \mathcal F(x)=\mathcal F(x)$. $D$ is $\varphi_1$-invariant, thus it has full or zero measure. If it has full measure, then the denseness of $\overline{\Pi}$ on the plaques $\mathcal F(x)$, $x\in D$, allows us to extend the uniform upper distortion to every point of the respective plaque (i.e. orbit). Using the uniformity at every point we prove that the upper distortion is a constant times the $\mu_x$-measure of the set $B_{\mathcal F}(x,1)$ on the plaque $\mathcal F(x)$. Applying the same argument to the set $\underline{\Pi}$, where the lower distortion is uniform, we arrive at the same equality and conclude that the upper and lower distortions are equal; thus the limit exists and we actually have a well defined distortion.
Using this fact we prove in Lemma \ref{lemma:equallebesgue} that $\mu^1_x$ is a constant times the natural measure induced by the flow on the orbits. If $D$ has zero measure, then almost every plaque contains open intervals lying in the complement of the set $\overline{\Pi}$. We use these holes to show that atoms would have to appear, which yields a contradiction.
\subsection{Proof of Theorem \ref{theo:continuous.flow}}
To simplify notation we denote $f:=\phi_1$.
First let us deal with the case where $\mu$ itself has atoms, that is, there is a countable subset $Z\subset X$ such that $\mu(\{z\})>0$ for any $z\in Z$. Since $f$ is ergodic and $Z$ is $f$-invariant we have $\mu(Z)=1$. Hence the second item of the theorem is satisfied. We may now assume the measure $\mu$ itself is atomless.
Let $Per(\phi)$ be the set of periodic orbits of the flow $\phi$. First let us assume that $\mu(Per(\phi))=0$ and break the proof into two cases (the \textit{atomic case} and the \textit{non-atomic case}). We deal with $\mu(Per(\phi))>0$ at the end of the proof. Also recall that $\mathcal F$ is the foliation whose plaques are the orbits of the flow and $B_{\mathcal F}(x,r):=\phi((-r,r)\times \{x\})$.
\textbf{The atomic case:} Assume that $\mu$ has atomic disintegration over $\mathcal F$.
Consider the measurable function $g_r: x \mapsto \mu_x^1(B_{\mathcal F}(x,r))$. Now define the weight map $$w: x \mapsto \mu_x^1(\{x\}).$$ This is a measurable map because $w(x) = \lim_{r \rightarrow 0} g_r(x)$ and a pointwise limit of measurable functions is measurable.
Now consider the invariant set $w^{-1} ((0,\delta) )$ of atoms whose weight is less than $\delta$. Ergodicity implies that this set has zero or full measure. Thus, there exists a real number $\delta_0>0$ such that each atom has weight $\delta_0$ and, consequently, each plaque has the same number of atoms $k_0 = 1/\delta_0$.
Hence we have proved statement $(2)$ of Theorem \ref{theo:continuous.flow}.
\textbf{Non-atomic case}: We now assume that the disintegration is not atomic.
Let $\{[\mu_x]\}$, as in Remark \ref{remark:class.measures}, be the equivalence classes of the conditional measures coming from the Rokhlin disintegration of $\mu$ along the leaves of $\mathcal F$.
Observe that $\mu_x^1(B_{\mathcal F}(x,\varepsilon)) >0 $ for every $x \in \operatorname{Supp}_{\mathcal F}(\mu^1_x)$ (where the support is taken inside $\mathcal F(x)$). Thus, it makes sense to evaluate the upper and lower unitary distortions. Also observe that, a priori, $\overline{\Delta}(x)$ and $\underline{\Delta}(x)$ could be infinite. In any case these functions are measurable (see Proposition \ref{prop:disinmeasurable2} and Corollary \ref{cor:jointlymeasurable2}). Also note that both $\overline{\Delta}(x)$ and $\underline{\Delta}(x)$ are $f$-invariant maps because \[f_{*}\mu^1_x = \mu^1_{f(x)} \;\text{ and }\;f(B_{\mathcal F}(x,\varepsilon)) = B_{\mathcal F}(f(x),\varepsilon).\]
By ergodicity of $f$ it follows that both are constant almost everywhere; denote these constants by $\overline{\Delta}$ and $\underline{\Delta}$. That is, for almost every $x$: \begin{equation}\label{eq:delta} \overline{\Delta}(x) = \overline{\Delta} ,\quad \text{ and } \underline{\Delta}(x) = \underline{\Delta}. \end{equation} Let $D$ be a (full measure) set of points $x$ for which \eqref{eq:delta} holds.
\begin{lemma} \label{lema:sequenciaboa} If $\overline{\Delta}$ is finite, there exists a sequence $\varepsilon_k\rightarrow 0$, as $k\rightarrow +\infty$, and a full measure subset $R \subset D$ such that \begin{itemize} \item[i)] $R$ is $f$-invariant; \item[ii)] for every $x \in R$, \begin{equation}\label{eq:uniform}
\left| \frac{ \mu^1_x(B_{\mathcal F}(x,\varepsilon_k))}{\varepsilon_k} - \overline{\Delta} \right| \leq \frac{1}{k};\end{equation}
\end{itemize}
An analogous result holds if instead of $\overline{\Delta}$ we consider $\underline{\Delta}$. \end{lemma}
\begin{proof} Since $\overline{\Delta}(x) = \overline{\Delta}$ for every $x \in D$, for each $k\in \mathbb N^{*}$ define
\[\varepsilon_k(x):= \sup \left\{\varepsilon: \left| \frac{ \mu^1_x(B_{\mathcal F}(x,\varepsilon))}{ \varepsilon} - \overline{\Delta} \right| +\varepsilon \leq \frac{1}{k} \right\}.\] Observe that $\varepsilon_k(x)$ exists: since the $\limsup$ is $\overline{\Delta}$, we can take a sequence $\varepsilon_l(x) \rightarrow 0$ along which the ratio above approaches $\overline{\Delta}$.
\noindent {\bf Claim:} The function $\varepsilon_k(x)$ is measurable for all $k \in \mathbb N$. \begin{proof} Observe that since $\mu_x$ is non-atomic we have \[\varepsilon_k(x) = \lim_{n\rightarrow \infty} \varepsilon^n_k(x),\] where
\[\varepsilon^n_k(x) = \sup \left\{\varepsilon: \left| \frac{ \mu_x^1(B_{\mathcal F}(x,\varepsilon))}{ \varepsilon} - \overline{\Delta} \right| +\varepsilon < \frac{1}{k} +\frac{1}{n} \right\}.\] So, it is enough to prove that $\varepsilon_k^n(x)$ is measurable in $x$.
Define \[g(x,\varepsilon) = \left| \frac{ \mu_x^1(B_{\mathcal F}(x,\varepsilon))}{ \varepsilon} - \overline{\Delta} \right| + \varepsilon.\]
By Corollary \ref{cor:jointlymeasurable2}, for any typical $x\in X$ the function $g(x,\cdot):(0,\infty) \rightarrow (0,\infty)$ is continuous.
Let $\varepsilon>0$ be fixed and let us prove that $g(\cdot, \varepsilon):X \rightarrow (0,\infty)$ is a measurable function. By Proposition \ref{prop:disinmeasurable2} we know that $x\mapsto \mu_x^1(B_{\mathcal F}(x,\varepsilon))$ is a measurable function, therefore $g(\cdot, \varepsilon)$ is a measurable function.
Given any $k\in \mathbb N$, $k>0$, the continuity of $g(x,\cdot)$ implies that \[\varepsilon_k^{-1}((0,\beta))=\{x: \varepsilon_{k}(x) \in (0,\beta)\} = \bigcap_{r\geq \beta, r\in \mathbb Q} g(\cdot, r)^{-1}([1/k, +\infty)). \] Therefore $\varepsilon_k^{-1}((0,\beta))$ is measurable and consequently $\varepsilon_k$ is a measurable function for every $k$. \end{proof}
Note that $\varepsilon_k(x)$ is $f$-invariant. Thus, by ergodicity, let $R_k$ be a full measure set on which $\varepsilon_k(x)$ is constant, equal to $\varepsilon_k$. It is easy to see that the sequence $\varepsilon_k$ goes to $0$ as $k$ goes to infinity. Take $\widetilde{R}:= \bigcap_{k=1}^{+\infty} R_k$. Since each $R_k$ has full measure, $\widetilde{R}$ has full measure and clearly satisfies what we want for the sequence $\{\varepsilon_k\}_{k}$. Finally, take $R = \bigcap_{i=-\infty}^{+\infty} f^i(\widetilde{R})$. Then $R$ is $f$-invariant, has full measure and satisfies $(i)$ and $(ii)$.
\end{proof}
Now consider the following set \[\overline{\Pi} := \bigcup_{x \in R} \overline{\Pi}_x,\] where
\[\overline{\Pi}_x:= \left\{y \in \mathcal F(x):\left| \frac{ \mu^1_x(B_{\mathcal F}(y,\varepsilon_k))}{\varepsilon_k} - \overline{\Delta} \right| \leq \frac{1}{k}, \forall k\geq 1 \right\},\] similarly we define $\underline{\Pi}_x$ and $\underline{\Pi}$ with $\underline \Delta$ in the role of $\overline \Delta$.
\begin{lemma} \label{lemma:closed} For every $x \in R$ the set $\overline{\Pi}_x$ is closed in the plaque $\mathcal F(x)$. \end{lemma} \begin{proof} Let $y_n \rightarrow y$, $y_n \in \overline{\Pi}_x$, $y\in \mathcal F(x)$. To prove that $y\in \overline{\Pi}_x$ it is enough to show that
\[\lim_{n\rightarrow \infty} \mu_x^1(B_{\mathcal F}(y_n,\varepsilon_k)) = \mu_x^1(B_{\mathcal F}(y,\varepsilon_k)). \] Given any $k\in \mathbb N$, since $\mu_x$ is non-atomic we have that \begin{eqnarray*}
\mu^1_x(\partial B_{\mathcal F}(y,\varepsilon_k)) = \mu^1_x(\{\phi(-\varepsilon_k,y), \phi(\varepsilon_k,y)\}) =0 \end{eqnarray*} and \begin{eqnarray*}
\mu^1_x(\partial B_{\mathcal F}(y_n,\varepsilon_k)) = \mu^1_x(\{\phi(-\varepsilon_k,y_n), \phi(\varepsilon_k,y_n)\}) =0, \quad \forall n \in \mathbb N, \end{eqnarray*} where $\partial B_{\mathcal F}$ denotes the boundary of the set inside the leaf.
Now, let $B_n:=B_{\mathcal F}(y_n,\varepsilon_k) \Delta B_{\mathcal F}(y,\varepsilon_k)$, where $Y\Delta Z$ denotes the symmetric difference of the sets $Y$ and $Z$. Observe that, by passing to a subsequence of $y_n$ if necessary, we may assume $B_n \supset B_{n+1}$ for every $n \geq 1$. Thus \begin{align*}
\lim_{n\rightarrow \infty} \mu^1_x(B_n) & = \mu^1_x \left(\bigcap_{n} B_n \right) \\
&\leq \mu^1_x(\{\phi(-\varepsilon_k, y) , \phi(\varepsilon_k,y) \}) \\
&= 0.\end{align*}
Therefore $\lim_{n\rightarrow \infty} \mu_x^1(B_{\mathcal F}(y,\varepsilon_k) \setminus B_{\mathcal F}(y_n,\varepsilon_k)) = \lim_{n\rightarrow \infty} \mu_x^1(B_{\mathcal F}(y_n,\varepsilon_k) \setminus B_{\mathcal F}(y,\varepsilon_k))= 0$ and consequently
\[\lim_{n\rightarrow \infty} \mu_x^1(B_{\mathcal F}(y_n,\varepsilon_k)) = \mu_x^1(B_{\mathcal F}(y,\varepsilon_k)), \]
as we wanted to show.
\end{proof} An analogous result holds for $\underline{\Pi}_x$. \begin{lemma} \label{lema:sequenciaboa2} If $\overline{\Delta}$ is infinite, there exists a sequence $\varepsilon_k\rightarrow 0$, as $k\rightarrow +\infty$, and a full measure subset $R^{\infty} \subset D$ such that \begin{itemize} \item[i)] $R^{\infty}$ is $f$-invariant;
\item[ii)] for every $x \in R^{\infty}$ we have \begin{equation}\label{eq:infity}
\frac{\mu^1_x(B_{\mathcal F}(x,\varepsilon_k))}{\varepsilon_k} \geq k .\end{equation} \end{itemize} An analogous result holds if instead of $\overline{\Delta}$ we consider $\underline{\Delta}$. \end{lemma}
Analogously to what we have done for the finite case, define \[\overline{\Pi}^{\infty}_x:= \left\{y \in \mathcal F(x): \frac{ \mu^1_x(B_{\mathcal F}(y,\varepsilon_k))}{\varepsilon_k} \geq k , \forall k\geq 1\right\},\] and \[\overline{\Pi}^{\infty} := \bigcup_{x\in R^{\infty}} \overline{\Pi}^{\infty}_x.\] Similarly we define $\underline{\Pi}^{\infty}_x$ and $\underline{\Pi}^{\infty}$.
\begin{lemma} If $\overline{\Delta}$ (resp. $\underline{\Delta}$) is infinite, then for every $x \in R^{\infty}$ the set $\overline{\Pi}^{\infty}_x$ (resp. $\underline{\Pi}^{\infty}_x$) is closed in the plaque $\mathcal F(x)$. \end{lemma} \begin{proof} Analogous to the proof of Lemma \ref{lemma:closed}. \end{proof}
\begin{lemma}\label{lemma:two.sets}
If $\overline{\Delta}$ is finite, then there are Borel sets $Q$ and $G$ such that
\begin{itemize}
\item[i)] $f(Q)=Q$ and $f(G)=G$;
\item[ii)] $Q \cap G = \emptyset$;
\item[iii)] $\mu(Q\cup G)=1$;
\item[iv)] if $x \in Q$, then for $\varepsilon_k$ as in Lemma \ref{lema:sequenciaboa} we have
\[\left| \frac{ \mu_x^1(B_{\mathcal F}(x,\varepsilon_k))}{\varepsilon_k} - \overline{\Delta} \right| \leq \frac{1}{k};\]
\item[v)] if $x \in G$, then there exists $k_0 \in \mathbb N$ such that
\[\left| \frac{ \mu^1_x(B_{\mathcal F}(x,\varepsilon_{k_0}))}{\varepsilon_{k_0}} - \overline{\Delta} \right| > \frac{1}{k_0}.\]
\end{itemize} \end{lemma} \begin{proof} Consider $\overline{\Pi}$ as defined above. Take any $x\in \overline{\Pi}^c$, that is, there exists $k\geq 1$ such that
\[\left| \frac{ \mu^1_x(B_{\mathcal F}(x,\varepsilon_{k}))}{\varepsilon_{k}} - \overline{\Delta} \right| > \frac{1}{k}.\] By the measurability of $x\mapsto \mu^1_x(B_{\mathcal F}(x,\varepsilon_k))$, proved in Proposition \ref{prop:disinmeasurable2}, and Lusin's Theorem, we can take a compact set $G_1$ on which this function varies continuously. Thus, there exists an open set $G_2$ such that for every $y\in G_2\cap G_1$ we have
\[\left| \frac{ \mu^1_y(B_{\mathcal F}(y,\varepsilon_{k}))}{\varepsilon_{k}} - \overline{\Delta} \right| > \frac{1}{k}.\] Define $G = \bigcup_{n\in \mathbb Z}f^n(G_2\cap G_1)$.
Let $x \in \overline{\Pi}$. For each $n\in \mathbb N$ we have
\[\left| \frac{ \mu^1_x(B_{\mathcal F}(x,\varepsilon_k))}{\varepsilon_k} - \overline{\Delta} \right| < \frac{1}{k}+\frac{1}{n}.\]
Using again Proposition \ref{prop:disinmeasurable2}, Lusin's Theorem and the invariance of $\mu_x$ under $f$, we find a sequence of nested Borel sets $\ldots \subset Q_{n+1} \subset Q_n \subset Q_{n-1} \subset \ldots \subset Q_1$ such that $f(Q_n)=Q_n$, $n\geq 1$, and for all $y\in Q_n$ we have
\[\left| \frac{ \mu^1_y(B_{\mathcal F}(y,\varepsilon_k))}{\varepsilon_k} - \overline{\Delta} \right| < \frac{1}{k}+\frac{1}{n}.\] By Lemma \ref{lema:sequenciaboa} we have $\mu(Q_n)=1$ for every $n$. Take $Q:=\bigcap_{n=1}^{\infty}Q_n$. Then $Q$ is an $f$-invariant Borel set and $\mu(Q)=1$. Therefore $\mu(Q\cup G) = 1$. Also, it is clear that $Q\cap G = \emptyset$, and we conclude the proof of the lemma. \end{proof} Consider the following measurable set $$D := \mathcal F(Q) \setminus \mathcal F(G).$$ Equivalently, \[D = \{x \in \mathcal F(G \cup Q) : \overline{\Pi}_x \cap \mathcal F(x) = \mathcal F(x)\},\] that is, $D$ is the set of all points whose plaque is fully contained in $\overline{\Pi}$.
In the remainder of the proof we will need the following counting lemma.
\begin{lemma} \label{lemma:aux} Let $r>0$ be a fixed real number and $x\in D$ an arbitrary point. Let $a_i := \varphi_{2i r}(\varphi_{-1}(x))$ and $b_i := \varphi_{2ir}(x)$ for $i=1,2,\ldots, l$, where $l=\left \lfloor \frac{1}{2}\left(\frac{1}{r}-1\right)\right \rfloor$. Then \begin{equation}\label{eq:statement} \sum_{i=1}^{l}\mu_{x}^{1}(B_{\mathcal F}[a_i,1]) + \sum_{i=1}^{l}\mu_{x}^{1}(B_{\mathcal F}[b_i,1]) = 2l,\end{equation} where $B_{\mathcal F}[y,s] := \varphi([-s,s]\times \{y\})$ denotes the closed leafwise ball. \end{lemma} \begin{proof} To simplify the notation, for $s>0$ we will write $[x,\varphi_s(x)]$ to denote the set $\{\varphi_t(x): 0\leq t \leq s\}$. With this notation we can write \begin{equation*} [\varphi_{-1}(x),x] = [\varphi_{-1}(x), a_1] \cup [a_1,a_2] \cup \ldots \cup [a_{l-1}, a_l] \cup [a_l, \varphi_{2(l+1)r-1}(x)] \cup [\varphi_{2(l+1)r-1}(x),x]. \end{equation*} Denote $J_0 := [\varphi_{-1}(x), a_1] $, $J_i:=[a_i,a_{i+1}]$ for $1\leq i \leq l-1$, $J_l := [a_l, \varphi_{2(l+1)r-1}(x)] $ and $J_{l+1} = [\varphi_{2(l+1)r-1}(x),x] $. Thus we can rewrite \begin{equation}\label{eq:part1} [\varphi_{-1}(x),x] = J_0 \cup \ldots \cup J_{l+1}. \end{equation} Now, by applying $\varphi_1$ to \eqref{eq:part1} we can write \begin{align}\label{eq:part2} [x,\varphi_1(x)] = & [x, b_1] \cup [b_1,b_2] \cup \ldots \cup [b_{l-1}, b_l] \cup [b_l, \varphi_{2(l+1)r}(x)] \cup [\varphi_{2(l+1)r}(x),\varphi_1(x)] \\ =& \varphi_1(J_0) \cup \ldots \cup \varphi_1(J_{l+1}). \end{align} Also as a consequence of \eqref{eq:part1} we can write \begin{equation} \label{eq:part3} [\varphi_{-2}(x), \varphi_2(x)] = \varphi_{-1}(J_0) \cup \ldots \cup \varphi_{-1}(J_{l+1}) \cup [\varphi_{-1}(x),x] \cup [x,\varphi_1(x)] \cup \varphi_{2}(J_0) \cup \ldots \cup \varphi_{2}(J_{l+1}) . \end{equation}
Now, observe that each term involved in the sums on the left-hand side of \eqref{eq:statement} can be written as a sum of the $\mu_x^1$-measures of sets of the forms involved in equations \eqref{eq:part1}, \eqref{eq:part2} and \eqref{eq:part3}. Let us count how many times each of these sets appears on the left-hand side of \eqref{eq:statement}. \begin{itemize} \item Observe that the set $\varphi_{-1}(J_0)$ is not contained in any of the sets $B_{\mathcal F}[a_i,1]$, $B_{\mathcal F}[b_i,1]$, thus it does not appear in \eqref{eq:statement}. However, the set $\varphi_1(J_0)=[x,\varphi_1(a_1)]$ is contained in all of the sets $B_{\mathcal F}[a_i,1], B_{\mathcal F}[b_i,1]$, thus it appears $2l$ times in equation \eqref{eq:statement}. Hence, $\mu_x^1(\varphi_1(J_0))$ appears exactly $2l$ times in \eqref{eq:statement}.
\item For any $1\leq i \leq l+1$ the set $\varphi_{-1}(J_i)$ appears in each of the terms $B_{\mathcal F}[a_j,1]$, $j=1,...,i$, that is, it appears $i$ times in \eqref{eq:statement}. On the other hand, the set $\varphi_1(J_i)$ appears $2l-i$ times, as it does not belong only to the sets $B_{\mathcal F}[a_j,1]$, $j=1,...,i$. Since $\varphi_1$ preserves $\mu_x^1$, we know that $\mu_x^1(\varphi_{-1}(J_i)) =\mu_x^1(\varphi_1(J_i))$, and then we can say that $\mu_x^1(\varphi_1(J_i))$ appears exactly $2l$ times in \eqref{eq:statement}. \item By symmetry we see that each of the terms $\mu_x^1(J_i)$ also appears exactly $2l$ times. \end{itemize} Thus we have \begin{align*} \sum_{i=1}^{l}\mu_{x}^{1}(B_{\mathcal F}[a_i,1]) + \sum_{i=1}^{l}\mu_{x}^{1}(B_{\mathcal F}[b_i,1]) = & \; 2l \left( \sum_{i=0}^{l+1} \mu_x^1(J_i) + \sum_{i=0}^{l+1} \mu_x^1(\varphi_{1}(J_i)) \right) \\ = & \; 2l \cdot \mu_{x}^1(B_{\mathcal F}[x,1]) = 2l,\end{align*} as we wanted to show. \end{proof}
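As a quick sanity check of Lemma \ref{lemma:aux} (an illustration, not part of the argument), suppose each conditional measure is leafwise translation invariant, i.e. $\mu^1_x$ is the push-forward of the normalized Lebesgue measure under $t\mapsto \varphi_t(x)$. Then every translated closed ball of radius $1$ has the same mass as $B_{\mathcal F}[x,1]$, namely \[\mu_x^1(B_{\mathcal F}[a_i,1]) = \mu_x^1(B_{\mathcal F}[b_i,1]) = \mu_x^1(B_{\mathcal F}[x,1]) = 1,\] so the left-hand side of \eqref{eq:statement} reduces to $l + l = 2l$, in agreement with the lemma.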
\begin{lemma} \label{lemma:aux2} Let $r>0$ be a fixed real number and $x\in D$ an arbitrary point. Let $a_i := \varphi_{2ir}(\varphi_{-1}(x))$ and $b_i := \varphi_{2ir}(x)$ for $i=1,2,\ldots, l+1$, where $l=\left \lfloor \frac{1}{2}\left(\frac{1}{r}-1\right)\right \rfloor$. Then \begin{equation}\label{eq:statement2} \sum_{i=1}^{l+1}\mu_{x}^{1}(B_{\mathcal F}[a_i,1]) + \sum_{i=1}^{l+1}\mu_{x}^{1}(B_{\mathcal F}[b_i,1]) = 2l+2.\end{equation} \end{lemma} \begin{proof} The proof is identical to that of Lemma \ref{lemma:aux}. \end{proof}
\textbf{Case 1: $D$ has full measure.} First, we will prove that in this case we must have $\underline{\Delta}\leq \overline{\Delta}<\infty$. Assume that $\overline{\Delta}=\infty$. Consider a typical fiber $\mathcal F(x)$ with $x\in D$ and fix any $k\geq 1$. In Lemma \ref{lemma:aux} take $r:=\varepsilon_k$ and let $l, a_i, b_i$, $1\leq i\leq l$, be as in the statement of that lemma. For each $1\leq i \leq l$ we have \begin{equation}
\frac{\mu^1_{a_i}(B_{\mathcal F}[a_i,\varepsilon_k])}{\varepsilon_k} \geq k \Rightarrow \mu^1_{a_i}(B_{\mathcal F}[a_i,\varepsilon_k]) \geq k \varepsilon_k, \end{equation} and similarly we obtain \begin{equation} \mu^1_{b_i}(B_{\mathcal F}[b_i,\varepsilon_k]) \geq k \varepsilon_k.\end{equation} Now observe that \begin{align} \label{eq:xai} \mu_x^1(B_{\mathcal F}[a_i,\varepsilon_k]) = & \mu_x^1(B_{\mathcal F}[a_i,1]) \cdot \mu_{a_i}^1(B_{\mathcal F}[a_i,\varepsilon_k]) \\ \mu_x^1(B_{\mathcal F}[b_i,\varepsilon_k]) = & \mu_x^1(B_{\mathcal F}[b_i,1]) \cdot \mu_{b_i}^1(B_{\mathcal F}[b_i,\varepsilon_k]). \label{eq:xbi} \end{align} Taking the sum over $i$ we have \begin{align*}
\mu_x^1(B_{\mathcal F}[x,1]) \geq \sum_{i=1}^{l} \mu_x^1(B_{\mathcal F}[a_i,\varepsilon_k]) +& \sum_{i=1}^{l} \mu_x^1(B_{\mathcal F}[b_i,\varepsilon_k]) \\ = \sum_{i=1}^{l} \mu_x^1(B_{\mathcal F}[a_i,1]) \cdot \mu_{a_i}^1(B_{\mathcal F}[a_i,\varepsilon_k]) +& \sum_{i=1}^{l} \mu_x^1(B_{\mathcal F}[b_i,1]) \cdot \mu_{b_i}^1(B_{\mathcal F}[b_i,\varepsilon_k]). \end{align*} Using the two inequalities above we get \[ \mu_x^1(B_{\mathcal F}[x,1]) \geq \left( \sum_{i=1}^{l} \mu_x^1(B_{\mathcal F}[a_i,1]) + \sum_{i=1}^{l} \mu_x^1(B_{\mathcal F}[b_i,1]) \right) \cdot k \varepsilon_k.\] Thus, from the conclusion of Lemma~\ref{lemma:aux} we have \[ \mu_x^1(B_{\mathcal F}[x,1]) \geq 2 \left \lfloor \frac{1}{2}\left(\frac{1}{\varepsilon_k}-1\right)\right \rfloor \cdot \varepsilon_k \cdot k.\] As the left side is finite and the right side goes to infinity as $k\to\infty$, we obtain a contradiction. Thus indeed $\overline{\Delta}$ is finite.
\begin{lemma} \label{lemma:uniformDelta} \[\overline{\Delta} = \underline{\Delta} = 1. \] \end{lemma} \begin{proof} For a given $k \in \mathbb N^{*}$, we know that for any $x \in \overline{\Pi}$ \begin{equation}\label{eq:all}
\left| \frac{\mu^1_x(B_{\mathcal F}(x,\varepsilon_k))}{\varepsilon_k} - \overline{\Delta} \right| \leq \frac{1}{k} .\end{equation}
Consider the closed ball $B=B_{\mathcal F}[x,1] \subset \mathcal F(x)$. Given $\epsilon > 0$, take $k_0 \in \mathbb N$ such that $k_{0}^{-1} < \epsilon$. For $k\geq k_0$, let $r=\varepsilon_k$ and let $a_i, b_i$ be as in Lemma \ref{lemma:aux}. Thus, we have a family of disjoint balls inside $B_{\mathcal F}[x,1]$ centered at the points $a_i$ and $b_i$, $1\leq i \leq l:=\left \lfloor \frac{1}{2}\left( \frac{1}{\varepsilon_k}-1\right) \right \rfloor$.
For each $1\leq i \leq l$ we have \begin{equation}\label{eq:ai}
\mu^1_{a_i}(B_{\mathcal F}[a_i,\varepsilon_k]) - \overline{\Delta}\varepsilon_k>- \epsilon \cdot \varepsilon_k \Rightarrow \mu^1_{a_i}(B_{\mathcal F}[a_i,\varepsilon_k]) > \varepsilon_k(\overline{\Delta}-\epsilon), \end{equation} and similarly we obtain \begin{equation}\label{eq:bi} \mu^1_{b_i}(B_{\mathcal F}[b_i,\varepsilon_k]) > \varepsilon_k(\overline{\Delta}-\epsilon).\end{equation}
Therefore, by ~\eqref{eq:xai} and \eqref{eq:xbi}, \begin{eqnarray*}
1=\mu^1_x(B_{\mathcal F}[x,1]) & > & \left( \sum_{i=1}^{l}\mu_{x}^{1}(B_{\mathcal F}[a_i,1]) + \sum_{i=1}^{l}\mu_{x}^{1}(B_{\mathcal F}[b_i,1]) \right) \cdot \varepsilon_k\cdot ({\overline{\Delta}-\epsilon}) .\\ \end{eqnarray*} By Lemma \ref{lemma:aux} we have \begin{eqnarray*}
1=\mu^1_x(B_{\mathcal F}[x,1]) & > & 2 \cdot \left\lfloor \frac{1}{2}\left( \frac{1}{\varepsilon_k}-1\right) \right \rfloor \cdot \varepsilon_k\cdot ({\overline{\Delta}-\epsilon}),\\ \end{eqnarray*} for every $k\geq k_0$. Taking $k\rightarrow \infty$ and then $\epsilon \rightarrow 0$ we obtain $$\overline{\Delta}\leq 1.$$ Similarly, by taking $r:=\varepsilon_k$ and $a_i$, $b_i$, $1\leq i \leq l+1$, with $l=\left \lfloor \frac{1}{2}\left( \frac{1}{\varepsilon_k}-1\right) \right \rfloor$, as in Lemma \ref{lemma:aux2}, we cover $B_{\mathcal F}[x,1]$ with $2l+2$ balls of radius $\varepsilon_k$. Now, we know that \begin{align*}\mu^1_{a_i}(B_{\mathcal F}[a_i,\varepsilon_k])< & \epsilon\cdot \varepsilon_k + \overline{\Delta}\varepsilon_k \\ \mu^1_{b_i}(B_{\mathcal F}[b_i,\varepsilon_k])< & \epsilon\cdot \varepsilon_k + \overline{\Delta}\varepsilon_k,\end{align*}
for $1\leq i \leq l+1$. Consequently, again using \eqref{eq:xai} and \eqref{eq:xbi}, we have \[1= \mu^1_x(B_{\mathcal F}[x,1]) < \left( \sum_{i=1}^{l+1}\mu_{x}^{1}(B_{\mathcal F}[a_i,1]) + \sum_{i=1}^{l+1}\mu_{x}^{1}(B_{\mathcal F}[b_i,1]) \right) \cdot \varepsilon_k(\epsilon+\overline{\Delta}). \] By Lemma \ref{lemma:aux2}, \[1= \mu^1_x(B_{\mathcal F}[x,1]) < (2l +2) \cdot \varepsilon_k(\epsilon+\overline{\Delta}),\] and taking $k\rightarrow\infty$ and then $\epsilon\rightarrow 0$ gives $\overline{\Delta}\geq 1$. Consequently we have that $\overline{\Delta} = 1$. Repeating the same argument with $\underline{\Delta}$ we conclude that \[\overline{\Delta} = \underline{\Delta} =1.\] \end{proof}
Next we conclude that $\mu^1_x$ is equivalent to the measure induced by the flow $\phi$ on the orbits.
\begin{lemma} \label{lemma:equallebesgue} For almost every $x\in X$ and every Borel set $B\subseteq \mathcal F(x)$, \[\mu^1_x(B) = 2^{-1} \cdot \lambda_{\mathcal F(x)}(B),\]
where $\lambda_{\mathcal F(x)}$ is the measure on $\mathcal F(x)$ induced by the flow $\phi$ (i.e., $\lambda_{\mathcal F(x)}([x,y])=|t|$ if $y = \phi_t(x)$). \end{lemma} \begin{proof} Take any typical plaque $\mathcal F(x)$ and any point $a\in \mathcal F(x)$. For each $r>0$ we can write the set $[a,\phi_r(a)]$ as a disjoint union as below \[[a,\phi_r(a)] = \left(\bigcup_{j=0}^{n-1} [\phi_{2j\varepsilon_k}(a),\phi_{2(j+1)\varepsilon_k}(a)] \right) \cup J_k, \quad n:= \left \lfloor r/2\varepsilon_k \right \rfloor, \] where $J_k = [\phi_{2n\varepsilon_k}(a),\phi_r(a)]$. Each of the terms appearing on the right side of the previous equality, except for $J_k$, is a closed $\mathcal F$-ball of radius $\varepsilon_k$ centered at $c_j:=\phi_{(2j+1)\varepsilon_k}(a)$, $j=0,1,\ldots, n-1$. By Lemma \ref{lemma:uniformDelta}, $\overline{\Delta} = \underline{\Delta}=1$, so \[\varepsilon_k(1-1/k)<\mu_{c_j}^1([\phi_{2j\varepsilon_k}(a),\phi_{2(j+1)\varepsilon_k}(a)]) < \varepsilon_k(1+1/k).\] Also, we know that \[\mu_x^1(B_{\mathcal F}[c_j,\varepsilon_k]) = \mu_x^1(B_{\mathcal F}[c_j,1]) \cdot \mu_{c_j}^1(B_{\mathcal F}[c_j,\varepsilon_k]).\] Therefore we have \begin{align*} \left( \sum_{j=0}^{n-1} \mu_x^1(B_{\mathcal F}[c_j,1]) \right) \cdot \varepsilon_k(1-1/k) \leq &\quad \mu_x^1([a,\phi_r(a)]) \leq \\ \leq & \left( \sum_{j=0}^{n-1} \mu_x^1(B_{\mathcal F}[c_j,1]) \right) \cdot \varepsilon_k(1+1/k) + \mu_x^1(J_k). \end{align*} Repeating the argument of the proof of Lemma \ref{lemma:aux}, we see that $ \sum_{j=0}^{n-1} \mu_x^1(B_{\mathcal F}[c_j,1]) = n$. Thus \[ \lfloor r/2\varepsilon_k \rfloor \cdot \varepsilon_k(1-1/k) \leq \mu_x^1([a,\phi_r(a)]) \leq \lfloor r/2\varepsilon_k \rfloor \cdot \varepsilon_k(1+1/k) + \mu_x^1(J_k).\] Since $J_k$ is contained in a closed $\mathcal F$-ball of radius $\varepsilon_k$, we have $\mu_x^1(J_k)\rightarrow 0$ as $k\rightarrow \infty$. Taking $k\rightarrow +\infty$ we obtain \[\mu_x^1([a,\phi_r(a)]) = r/2,\] as we wanted to show.
\end{proof}
\textbf{Case 2: $D$ has null measure.} Since $\overline \Pi_x$ is closed in the plaque $\mathcal F(x)$ for all $x \in D$ and $\mu(\overline \Pi)=1$, there is a full measure set $\mathfrak D$ such that, for $x\in \mathfrak D$, we have $x\notin \overline{\Pi}$ if, and only if, there is $r>0$ with $\mu^1_x(B_{\mathcal F}(x,r)) = 0$. Now let $\{q_1,q_2,\ldots\}$ be an enumeration of the rationals.
For each $i\geq 1$ let us define the function $S_i$ as \[S_i(x) = \max\{q_j: 1\leq j \leq i \text{ and } \mu^1_y(B_{\mathcal F}(y,q_j))=0 \text{ for some } y \in \mathcal F(x) \}.\]
\begin{lemma}
$S_i$ is an invariant measurable function for all $i \in \mathbb N$. \end{lemma} \begin{proof} For each $i \in \mathbb N$ define the function $Q_i: \mathfrak D \rightarrow [0,\infty)$ by \[Q_i(x) = \mu^1_x(B_{\mathcal F}(x,q_i)).\]
By Proposition \ref{prop:disinmeasurable2}, $Q_i$ is a measurable function for every $i$ and, by a standard measure theory argument, we may take a compact set $K \subset \mathfrak D$ of positive measure such that $Q_i |_K$ is continuous for every $i$. Now, given $j \in \mathbb N$, let $\sigma$ be a permutation of $\{1,\ldots,j\}$ such that $q_{\sigma(1)}<q_{\sigma(2)}<\ldots<q_{\sigma(j)}$. Observe that for $$\mathfrak K = \bigcup_{n\in \mathbb Z} f^n(K)$$ we have \[S_j^{-1}(\{q_{\sigma(j)}\}) \cap \mathfrak K = \bigcup_{n\in \mathbb Z} f^n ( \mathcal F(Q_{\sigma(j)}^{-1}(\{0\}) \cap K )),\] which is a measurable set since $Q_{\sigma(j)}^{-1}(\{0\}) \cap K $ is a Borel set. Now, \[S_j^{-1}(\{q_{\sigma(j-1)}\}) \cap \mathfrak K = \bigcup_{n\in \mathbb Z} f^n ( \mathcal F(Q_{\sigma(j-1)}^{-1}(\{0\}) \cap K )) \setminus (S_j^{-1}(\{q_{\sigma(j)}\}) \cap \mathfrak K),\] which is also a measurable set. Inductively we prove that $S_j^{-1}(\{q_{\sigma(i)}\}) \cap \mathfrak K$ is measurable for all $1\leq i \leq j$. Since, by ergodicity, the set $\mathfrak K \subset \mathfrak D$ has full measure, we conclude that $S_j$ is measurable for every $j \geq 1$. \end{proof}
Let $S(x) := \lim_{i\rightarrow \infty} S_i(x)$. Then $S$ is measurable and $f$-invariant, thus it is constant almost everywhere; call this constant $r_0$. This means that there is a full measure set $Y \subset \mathfrak D$ such that, for every $x\in Y$, the plaque $\mathcal F(x)$ has a finite number of intervals of radius $r_0$ outside $\overline \Pi_x$. We call these intervals ``bad'' intervals.
Now consider the set $\mathfrak M$ formed by the midpoints of these ``bad'' intervals of radius $r_0$. Notice that $\mathfrak M$ is a measurable set, since it is contained in a set of zero measure, and also that $f(\mathfrak M)=\mathfrak M$.
Let $\varphi:(0,1) \times (0,1)^k \rightarrow U$ be a local chart for $\mathcal F$ such that $\mathcal F(\mathfrak M \cap U)$, the $\mathcal F$-saturation of the ``bad'' intervals inside $U$, has positive measure. Set $\Sigma := \pi_1(\varphi^{-1}(\mathfrak M\cap U))$, where $\pi_1:(0,1) \times (0,1)^k \rightarrow (0,1)$ is the projection onto the first coordinate. Now we may apply the Measurable Choice Theorem (Theorem \ref{theo:MCT}) to obtain a measurable function $ \mathfrak F: \Sigma \rightarrow (0,1)^k$ such that $(x,\mathfrak F(x)) \in \varphi^{-1}(\mathfrak M\cap U)$ for all $x \in \Sigma$. Again by standard arguments, using Lusin's theorem, we may assume $\Sigma$ to be compact and $\mathfrak F$ to be continuous.
Now consider the set $\mathfrak M_0:= \varphi(\text{graph }\mathfrak F)$, which is a Borel set since the graph of $\mathfrak F$ is a compact set. Notice that our construction implies that $\mathcal F(\mathfrak M_0)$ has positive measure. Now define the following $f$-invariant set $$\mathfrak M_1:= \bigcup_{n \in \mathbb Z} f^n(\mathfrak M_0).$$ By ergodicity the set $\mathcal F (\mathfrak M_1)$ has full measure.
The set $\mathfrak M_1$ intersects almost every plaque in a finite (constant) number of points. Notice that, for each $r \in \mathbb R_+$, the invariant set $$\mathfrak M_1^r:= \bigcup_{x \in \mathfrak M_1} B_{\mathcal F}(x,r)$$ has zero or full measure. Let $\alpha_0$ be such that $\mu(\mathfrak M_1^r)=0$ if $r < \alpha_0$ and $\mu(\mathfrak M_1^r)=1$ if $r \geq \alpha_0$. This implies that the extreme points of the balls $B_{\mathcal F}(x,\alpha_0)$, $x \in \mathfrak M_1$, form a set of atoms.
This is absurd, because we are assuming that we are in the non-atomic case. The case $\overline{\Delta}=\infty$ is treated similarly.
The measure $\mu_{\mathcal F(x)}$ of the statement of the result is given, due to Lemma \ref{lemma:equallebesgue}, as $2^{-1}\lambda_{\mathcal F(x)}$.
Let us now work with the case $\mu(Per(\phi))>0$. By the ergodicity of $f$ and the $f$-invariance of $Per(\phi)$ we have $\mu(Per(\phi))=1$. Hence the partition of $X$ given by the periodic orbits of $\phi$ together with the set $X\setminus Per(\phi)$ forms a measurable partition (e.g. \cite[Proposition 2.5]{PTV}). We can then disintegrate $\mu$ on this partition. Denote the family of conditional measures by $\{\mu_{\mathcal F(x)}\}$. If the set of singularities of the flow $\phi$ has positive measure, then it has full measure; in particular the measure is atomic and we fall into the first item. If not, then we repeat the proof, but instead of working with $\mu_x^1$ we simply work with the disintegrated measures $\mu_{\mathcal F(x)}$, and the theorem follows.
$\square$
\end{document}
\begin{document}
\thispagestyle{empty} \title{A fast 25/6-approximation for the minimum unit disk cover problem} \maketitle
\begin{abstract} Given a point set $P$ in $\mathbb{R}^2$, the problem of finding the smallest set of unit disks that cover all of $P$ is NP-hard. We present a simple algorithm for this problem with an approximation factor of $25/6$ in the Euclidean norm and $2$ in the max norm, obtained by restricting the disk centers to lie on parallel lines. The running time and space usage of this algorithm are $O(n \log n)$ and $O(n)$, respectively. The algorithm extends to any $L_p$ norm and is asymptotically faster than known alternative approximation algorithms with the same approximation factor.
\end{abstract}
\section{Introduction} Given a point set $P$ in $\mathbb{R}^2$, the \textit{unit disk cover problem} (UDC) seeks to find the smallest set of unit disks that cover all of $P$. This problem arises in applications to facility location, motion planning, and image processing \cite{fowler, hochbaummaass}.
In both the $L_2$ and $L_\infty$ norms, UDC is NP-hard \cite{fowler}. A \textit{shifting strategy} admits various polynomial time approximation algorithms in $d$ dimensions --- for an arbitrarily large integer shifting parameter $\ell$, it is possible to approximate to within $\left(1+\frac{1}{\ell}\right)^{d-1}$ \cite{hochbaummaass, gonzalez}. Since these algorithms rely on optimally solving the problem in an $\ell\times \ell$ square through exhaustive enumeration, they tend to have running times that scale exponentially with $\ell$, making them impractical for large data sets. At the cost of incurring a constant approximation factor, the speed of the algorithm may be improved by constraining the disk centers to a unit square grid within the $\ell\times \ell$ square \cite{caltech1, caltech2}.
If the disk centers are constrained to an arbitrary finite set of points, UDC becomes the discrete unit disk covering problem (DUDC), which is also NP-hard. However, DUDC has a number of different approximation algorithms, with the current state-of-the-art achieving a constant factor of 15 \cite{dudc}.
In this paper, we present an algorithm that approximates UDC in the plane under the Euclidean and max norms by constraining the disk centers to a set of parallel lines. The algorithm is usable in practical settings and simple to implement. We show that, in the max norm, choosing a set of parallel lines distance $2$ apart achieves an approximation factor of $2$. In the Euclidean norm, choosing a set of parallel lines distance $\sqrt{3}$ apart achieves an approximation factor of $25/6$. In both norms, the most costly step is simply sorting the points. Consequently, the running time and space usage of the algorithm are $O(n \log n)$ and $O(n)$, respectively.
\renewcommand{\arraystretch}{1.5} \begin{table}[htb]
\centering
\begin{tabular}{c|c|c|c} \hline Paper & Approximation & Running Time & Year\tabularnewline \hline \hline
\cite{hochbaummaass} & $\left(1+\frac{1}{\ell}\right)^2$ & $O\left(\ell^4 (2n)^{4\ell^2+1}\right)$ & 1985\tabularnewline \hline
\cite{gonzalez} & $\left(1+\frac{1}{\ell}\right)$ & $O\left(\ell^{2}n^{6\ell\sqrt{2}+1}\right)$ & 1991\tabularnewline \hline
\cite{gonzalez} & 8 & $O\left(n+n\log H\right)$ & 1991\tabularnewline \hline
\cite{bronnimann} & $O(1)$ & $O(n^{3}\log n)$ & 1995\tabularnewline \hline
\cite{caltech2} & $\alpha\left(1+\frac{1}{\ell}\right)^{2}$ & $O(Kn)$ & 2001\tabularnewline \hline
Ours & $25/6$ & $O(n\log n)$ & 2014\tabularnewline \hline \end{tabular}
\caption{A history of approximation algorithms for the unit disk cover problem in $L_2$. $n$ is the number of points in $P$. The shifting parameter $\ell$ is a positive integer which may be arbitrarily large. $H$ is the number of circles in the optimal solution. $\alpha$ is a constant between 3 and 6. $K$ is a factor at least quadratic in $\ell$ and polynomial in the size of the approximation lattice.}
\end{table}
\section{Line restricted unit disk cover} \label{lrudc} Here we explore a restricted variant of UDC:
Given a point set $P$ in $\mathbb{R}^2$, the \textit{line restricted unit disk cover} problem (LRUDC) seeks to find the smallest set of unit disks --- each with \textit{centers on a given set of parallel lines $S$} --- that cover all of $P$.
For certain carefully chosen sets of lines, LRUDC can be solved efficiently using greedy methods. However, with no restrictions on the placement and number of parallel lines in $S$, LRUDC is NP-hard by reduction from UDC.
\begin{theorem} Using $O(n^2)$ parallel lines, UDC reduces to LRUDC. \end{theorem}
\begin{proof} Consider a circle arrangement $\mathcal A$ consisting of unit radius circles centered at each of the points in the point set $P$. For any circle $C$ in the optimal solution of UDC, let $F$ be the face in $\mathcal A$ in which the center of $C$ resides. Observe that moving this center to any point in $F$ does not change the subset of points in $P$ that $C$ covers. If the set $S$ of parallel lines intersects all faces in $\mathcal A$, then the optimal line-restricted solution can have disks centered in the same set of faces as in the unrestricted case. Hence, any optimal solution of LRUDC for this set of lines is an optimal solution of UDC. Since there are only $O(n^2)$ faces in $\mathcal A$, having one line for each of the faces suffices. \end{proof}
As an aside, it is unknown whether LRUDC is NP-hard if only $O(n)$ parallel lines are used.
\section{Approximation algorithms for UDC}
In our approximation algorithms for UDC, we use solutions to LRUDC on narrow vertical strips. The set $S$ of restriction lines we use for LRUDC will simply be uniformly spaced vertical lines. For each restriction line we will solve LRUDC confined to the subset of $P$ within a thin strip around the line. All points in $P$ will be in some strip, and we will choose the spacing between restriction lines so that a good approximation to UDC is obtained.
\subsection{A $2$-approximation with the $L_\infty$ norm} \label{sec:max-norm-p3} The max norm is a special case, as unit circles in the max norm are axis-aligned squares of width 2. We can take advantage of this fact to obtain a $2$-approximation algorithm.
\begin{enumerate} \item Partition the plane into vertical strips of width $2$, and let the restriction line set $S$ be the set of vertical lines running down the centre of the strips. \item For each non-empty strip, use the simple greedy procedure of inserting a square whose top edge is located at the topmost uncovered point. Repeat until all points in the strip are covered. \end{enumerate}
The asymptotic cost of the algorithm is only $O(n\log n)$, as we need to sort the points by $x$-coordinate to partition them into the strips, and within each strip, we need to sort the points by $y$-coordinate to process the points in order of decreasing height.
The correctness of the greedy procedure in step 2 is easy to see, and is described in some detail by \cite{federgreene}. In fact, for this set of lines, this algorithm solves LRUDC optimally. This is because the greedy procedure is optimal for each strip, and the strips are all \textit{independent} from one another --- meaning that no point will be covered by squares from two different strips.
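The strip-greedy procedure above is short enough to state in code. The following Python sketch is our own illustration (the function name and point-set representation are not from the paper): it partitions the points into width-2 strips and runs the top-down greedy within each strip.

```python
from collections import defaultdict
import math

def cover_linf(points, width=2.0):
    """Greedy strip cover for UDC in the max norm: unit 'disks' are
    axis-aligned 2x2 squares whose centers lie on the vertical line
    through the middle of each width-2 strip."""
    strips = defaultdict(list)
    for x, y in points:
        strips[math.floor(x / width)].append(y)
    centers = []
    for k, ys in strips.items():
        cx = k * width + width / 2   # restriction line of this strip
        ys.sort(reverse=True)        # process points top-down
        top = None                   # y-coordinate of current square's top edge
        for y in ys:
            if top is None or y < top - 2:  # point not covered by current square
                top = y                     # place the square's top edge at the point
                centers.append((cx, top - 1))
    return centers
```

Sorting dominates, so the sketch runs in $O(n\log n)$ time and $O(n)$ space, matching the analysis above.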
\begin{figure}
\caption{Theorem \ref{square}. Top: a point set optimally coverable by one square. Bottom: a covering solution for each strip. Dashed lines denote restriction lines in $S$.}
\label{fig3}
\end{figure}
\begin{theorem}\label{square} This algorithm is a 2-approximation for UDC in $L_\infty$. \end{theorem} \begin{proof}
For convenience, we define an \textit{$S$-restricted} solution to be any line-restricted solution covering all the points of $P$ using the same set $S$ of lines as our algorithm, but not necessarily the same set of circles as the one produced by our algorithm.
Let \textsc{opt} be an optimal solution for UDC in $L_\infty$. Each square in the optimal solution will intersect at most two strips, since each strip has the same width as the squares. We can construct an $S$-restricted solution by simply using two $S$-restricted squares to cover each square in \textsc{opt} (see Figure \ref{fig3}). This uses exactly twice as many squares as \textsc{opt}. Since our algorithm solves LRUDC on $S$ optimally, it is at least as good as this 2-approximation on each strip. As each strip is independent, our algorithm is at least as good as the $S$-restricted solution over all strips. \end{proof}
\subsection{A $5$-approximation with the $L_2$ norm} \label{l2p2approx}
First, we present a simple $5$-approximation algorithm that forms the basis of our $25/6$-approximation algorithm.
\begin{enumerate} \item Partition the plane into vertical strips of width $\sqrt{3}$. As before, let the restriction line set $S$ be the set of center lines of the strips. \item For each non-empty strip, use the simple greedy procedure of inserting a circle positioned as low as possible while still covering the topmost uncovered point. Assume that all points in the strip are uncovered initially, and repeat until all points in the strip are covered. \end{enumerate}
Note that the only difference between the algorithm above and the one for $L_\infty$ is the width of the vertical strip. With a width of $\sqrt{3}$, circles centred on a particular strip can cover points of neighbouring strips. Hence the strips are no longer independent, and we run the greedy procedure in step 2 assuming that all the points in the strip are uncovered initially (even though they may be covered by circles from different strips). Alternatively, we could remove points already covered by neighbouring strips as we go, but this makes no difference to the approximation factor or the asymptotic run time of the algorithm.
As before, partitioning the points into strips takes $O(n \log n)$ time. Within each strip, the subprocedure of greedily placing the circles can be done in $O(n_s \log n_s)$ time, where $n_s$ is the number of points in the strip. This is achieved by transforming the point covering problem into a segment stabbing problem.
The reduction is as follows: from each point $p$ in the strip draw a unit circle $C_p$ centred at $p$. The circle $C_p$ intersects the restriction line of the strip in two points, creating a segment $s_p$ between the two points. If the centre of an $S$-restricted circle is placed anywhere on $s_p$, it will cover $p$. Hence to cover all the points in the strip, we simply have to stab all the segments $\{s_p\}_{p\in P}$ with points representing centres of $S$-restricted circles. The strategy of greedily covering the topmost point reduces to choosing the stabbing point as low as possible, while still stabbing the topmost unstabbed segment. This can be done in $O(n_s \log n_s)$ time via sorting the segments by $y$-coordinate.
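As an illustration of this reduction, the following Python sketch (our own; not code from the paper, and the names are ours) solves LRUDC within a single strip by converting each point to a segment on the restriction line and greedily stabbing segments from the top down:

```python
import math

def cover_strip_l2(points, cx):
    """Greedy LRUDC inside one strip with restriction line x = cx:
    each point p becomes the segment of the line on which a unit-circle
    center covers p; we then stab all segments with the fewest points."""
    segs = []
    for x, y in points:
        dx = x - cx                              # |dx| <= sqrt(3)/2 < 1 inside a strip
        h = math.sqrt(max(0.0, 1.0 - dx * dx))   # half-length of p's segment
        segs.append((y - h, y + h))
    segs.sort(key=lambda s: s[0], reverse=True)  # highest lower endpoint first
    centers, stab = [], None
    for lo, hi in segs:
        if stab is None or stab > hi:  # current stab point misses this segment
            stab = lo                  # lowest center still covering the point
            centers.append((cx, stab))
    return centers
```

Stabbing at the lower endpoint of the topmost unstabbed segment is exactly the greedy rule described above ("as low as possible while still covering the topmost uncovered point"), and the sort makes the whole subprocedure $O(n_s \log n_s)$.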
The correctness of the greedy subprocedure in step 2 follows from the same logic as Section \ref{sec:max-norm-p3}.
The argument that our algorithm is a 5-approximation is based on the following fact: for each circle $C$ in an optimal solution \textsc{opt} of UDC, there exists an $S$-restricted solution which covers each $C$ entirely using at most five circles. Furthermore, this solution is redundant in that the points of each strip are covered completely by $S$-restricted circles on that strip. We call such a solution \textit{oblivious}, as it does not take into account points covered by circles of neighbouring strips. Note that our algorithm produces a solution that is at least as good as any oblivious $S$-restricted solution, as each strip is solved optimally by our algorithm. It follows that our algorithm is also a 5-approximation.
It is necessary in the worst case to cover $C$ entirely since an adversary may provide an input point set $P$ consisting of arbitrarily many points coverable by a single circle (Figures \ref{fig1} and \ref{fig2}). The following proofs use a straightforward application of geometry to establish bounds on the number of $S$-restricted circles to cover $C$. There are two possible cases --- either $C$ intersects two strips, or $C$ intersects three strips.
\begin{obs} \label{4c} Let \textsc{opt} be the set of optimal circles for UDC. Suppose that the centre of a circle $C\in \textsc{opt}$ does not lie within $1-\frac{\sqrt{3}}{2}$ of a restriction line. Then, any oblivious $S$-restricted solution will require at least four circles to cover $C$.
Moreover, there exists an oblivious $S$-restricted solution which uses exactly four circles to cover $C$. \end{obs}
\begin{proof} Without loss of generality, let $C$ be centered at $(x_c,0)$ where $1-\frac{\sqrt{3}}{2}\leq x_c \leq \frac{3\sqrt{3}}{2}-1$. For $x_c$ in this range, $C$ intersects two strips. Let the corresponding restriction lines be called $\mathcal L_1$ and $\mathcal L_2$, placed at $x=0$ and $x=\sqrt{3}$ respectively. Consider the strip boundary, a vertical line $\mathcal L_{12}$ at $x=\frac{\sqrt{3}}{2}$. The intersection of $C$ with this line forms a segment of length greater than 1 but smaller than 2. To cover $C$ entirely, this segment must be covered. Both strips that $C$ intersects must cover this segment, as each strip is oblivious to the fact that the neighbouring strip may have covered the same segment. Since each line-restricted circle can only cover a segment of length 1 on $\mathcal L_{12}$, each strip needs two circles, resulting in a total of 4.
It is easy to see that $C$ is covered by the four circles centred at $\left(0,\frac{1}{2}\right)$, $\left(0,-\frac{1}{2}\right)$, $\left(\sqrt{3},\frac{1}{2}\right)$, $\left(\sqrt{3},-\frac{1}{2}\right)$ (see Figure \ref{fig1}). Note that the obliviousness constraint is satisfied as the circles within each strip do not depend on circles from neighbouring strips to cover the points from $C$. \end{proof}
\begin{figure}
\caption{Observation \ref{4c}. Top: a set of points optimally coverable by one circle. Bottom: a covering solution for each strip. Dashed lines denote restriction lines in $S$.}
\label{fig1}
\end{figure}
\begin{obs} \label{5c} Let \textsc{opt} be the set of optimal circles for UDC. Suppose that the centre of a circle $C\in \textsc{opt}$ lies within $1-\frac{\sqrt{3}}{2}$ of a restriction line. Then, any oblivious $S$-restricted solution will require at least five circles to cover $C$.
Moreover, there exists an oblivious $S$-restricted solution which uses exactly five circles to cover $C$. \end{obs}
\begin{proof} Without loss of generality, let $C$ be centered at $(x_c,0)$ where $0\leq x_c < 1-\frac{\sqrt{3}}{2}$. For $x_c$ in this range, $C$ intersects three strips. Let the corresponding restriction lines be called $\mathcal L_1$, $\mathcal L_2$ and $\mathcal L_3$ and be placed at $x=-\sqrt{3}$, $x=0$ and $x=\sqrt{3}$ respectively. Consider the two strip boundaries, vertical lines $\mathcal L_{12}$ at $x=-\frac{\sqrt{3}}{2}$ and $\mathcal L_{23}$ at $x=\frac{\sqrt{3}}{2}$. The intersection of $C$ with $\mathcal L_{23}$ forms a segment centered at $y=0$ of length greater than 1 but smaller than 2, and the intersection of $C$ with $\mathcal L_{12}$ forms a segment centered at $y=0$ of length smaller than 1. To cover $C$ entirely, these segments must be covered. Since each line-restricted circle can only cover a segment of length 1 on the strip boundary, and each strip is oblivious that the neighbouring strip may have covered the same segment, strips 2 and 3 would need two circles each and strip 1 would need one circle, resulting in a total of 5.
Finally, it is easy to see that $C$ is covered by the five circles centred at $\left(-\sqrt{3},0\right)$, $\left(0,\frac{1}{2}\right)$, $\left(0,-\frac{1}{2}\right)$, $\left(\sqrt{3},\frac{1}{2}\right)$, $\left(\sqrt{3},-\frac{1}{2}\right)$ (see Figure \ref{fig2}). \end{proof}
\begin{figure}
\caption{Observation \ref{5c}. Top: a set of points optimally coverable by one circle. Bottom: a covering solution for each strip. Dashed lines denote restriction lines in $S$.}
\label{fig2}
\end{figure}
\begin{theorem} \label{5app} This algorithm is a $5$-approximation for UDC in $L_2$. \end{theorem}
\begin{proof} Since our algorithm solves each strip optimally, it produces a solution that is at least as good as any oblivious $S$-restricted solution. We have shown that there exists such a solution with an approximation factor of 5, which follows directly from Observations \ref{4c} and \ref{5c}. Hence, our algorithm has an approximation factor of at most 5. \end{proof}
\subsection{Improving the 5-approximation to a 25/6-approximation} \label{improve}
To improve the 5-approximation algorithm to a $25/6$-approximation algorithm, we employ a ``smoothing'' technique. From the calculations in Observation \ref{5c}, the region of centers for which a circle $C$ in an optimal solution requires five covering circles is only $2-\sqrt{3}$ wide. In all other cases, $C$ can be covered by only four circles. Since $2-\sqrt{3}$ is less than one-sixth of the width $\sqrt{3}$ of the entire strip, it is intuitive that we can do better than a 5-approximation. Here, we show that by shifting the strip partition, we can smooth out the regions that require five circles and achieve a $25/6$-approximation.
To be precise, we define a strip partition with shift $\alpha$ to be the partition of $\mathbb{R}^2$ into width $\sqrt{3}$ vertical strips, where the boundaries of the strips are located at $x=\alpha + k\sqrt{3}$, $k\in \mathbb{Z}$. As usual, our restriction lines are in the centres of these strips. Our algorithm with the smoothing technique is:
\begin{enumerate} \item For $\alpha=0,\frac{\sqrt{3}}{6},\frac{2\sqrt{3}}{6},\ldots, \frac{5\sqrt{3}}{6}$, partition the plane into vertical strips of width $\sqrt{3}$ with shift $\alpha$ and use the 5-approximation algorithm. \item Return the best of the six solutions obtained above. \end{enumerate}
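A compact Python sketch of the smoothed algorithm (our own illustration; the helper names are not from the paper) combines the per-strip segment-stabbing greedy with the six shifted partitions and keeps the smallest solution:

```python
import math
from collections import defaultdict

W = math.sqrt(3)  # strip width

def _segment(p, cx):
    # segment of the restriction line x = cx on which a unit-circle center covers p
    h = math.sqrt(max(0.0, 1.0 - (p[0] - cx) ** 2))
    return (p[1] - h, p[1] + h)

def cover_shift(points, alpha):
    """5-approximation with strip boundaries at x = alpha + k*sqrt(3)."""
    strips = defaultdict(list)
    for p in points:
        strips[math.floor((p[0] - alpha) / W)].append(p)
    centers = []
    for k, pts in strips.items():
        cx = alpha + k * W + W / 2  # restriction line of this strip
        segs = sorted((_segment(p, cx) for p in pts),
                      key=lambda s: s[0], reverse=True)
        stab = None
        for lo, hi in segs:
            if stab is None or stab > hi:  # greedy segment stabbing, top-down
                stab = lo
                centers.append((cx, stab))
    return centers

def cover_smoothed(points):
    """25/6-approximation: run the 5-approximation for six shifts, keep the best."""
    return min((cover_shift(points, i * W / 6) for i in range(6)), key=len)
```

Each call to `cover_shift` is $O(n\log n)$, so running all six shifts preserves the overall $O(n\log n)$ bound.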
\begin{theorem} The algorithm with smoothing is a $25/6$-approximation for UDC in $L_2$. \end{theorem}
\begin{proof} Let \textsc{opt} be the set of optimal circles for UDC, and let $S_1,\ldots,S_6$ be the six shifted restriction line sets used in our algorithm. For each of the six shifts, there exists an oblivious $S_i$-restricted solution.
Suppose that for $i=1,\ldots,6$, there are $q_i$ circles in \textsc{opt} with centers $(x,y)$ satisfying \begin{align} \alpha_i + k\sqrt{3} + \frac{5}{12}\sqrt{3} \leq x\leq \alpha_i + k\sqrt{3} + \frac{7}{12}\sqrt{3}\label{eq1} \end{align}
for $k\in\mathbb{Z}$, where $\alpha_i = (i-1)\frac{\sqrt{3}}{6}$. Note that since these six ranges fill the plane, $\sum q_i=|\textsc{opt}|$.
According to Observations \ref{4c} and \ref{5c}, each oblivious $S_i$-restricted solution uses five circles for every circle in \textsc{opt} whose center $(x,y)$ satisfies \begin{align} \alpha_i + k\sqrt{3} + \sqrt{3}-1 \leq x \leq \alpha_i + k\sqrt{3} + 1\label{eq2} \end{align} and four circles for every other circle in \textsc{opt}. Since the range in Equation \ref{eq2} is a subrange of that in Equation \ref{eq1}, it follows that an oblivious $S_i$-restricted solution of Section \ref{l2p2approx} uses no more than \begin{align} 5q_i + 4\sum_{\substack{j=1\\ j \neq i}}^6 q_j \end{align} circles.
For $i=1,\ldots,6$, let $A_i$ be the $i$-th candidate solution generated by our algorithm and let $A^*$ be the solution with fewest circles out of the 6 $A_i$'s. Since each $A_i$ is at least as good as any oblivious $S_i$-restricted solution, we have the inequality: \begin{align}
|A^*| & = \min_{i=1,\ldots,6} |A_i| \\ &\leq \min_{i=1,\ldots,6} \left[5q_i + 4\sum_{\substack{j=1\\ j \neq i}}^6 q_j\right]\\
&=4|\textsc{opt}| + \min_{i=1,\ldots,6} q_i\\
&\leq 4|\textsc{opt}| + \frac{1}{6}|\textsc{opt}| = \frac{25}{6}|\textsc{opt}| \end{align} Hence the output of our algorithm is a $25/6$-approximation to the unit disk cover problem. \end{proof}
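The counting step of the proof can likewise be verified numerically. The following sketch (our illustration, not the paper's code) computes the $q_i$ for a set of optimal-centre $x$-coordinates, using half-open ranges, and confirms the pigeonhole bound $\min_i\bigl[5q_i+4(n-q_i)\bigr]\le \frac{25}{6}n$.

```python
import random

ROOT3 = 3 ** 0.5

def best_shift_bound(xs):
    # q[i] counts the centres whose x-coordinate lies in the i-th range of
    # Equation (1); the ranges tile the line, so the q[i] sum to n.
    n = len(xs)
    q = [0] * 6
    for x in xs:
        for i in range(6):
            if 5 * ROOT3 / 12 <= (x - i * ROOT3 / 6) % ROOT3 < 7 * ROOT3 / 12:
                q[i] += 1
                break
    assert sum(q) == n
    # cost of the i-th oblivious solution: 5 circles per "bad" centre, 4 otherwise
    return min(5 * qi + 4 * (n - qi) for qi in q)

random.seed(1)
xs = [random.uniform(-20.0, 20.0) for _ in range(600)]
assert best_shift_bound(xs) <= 25 * len(xs) / 6  # pigeonhole: some q_i <= n/6
```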
\section{Extensions}
The approximation algorithm outlined above can be applied to any $L_p$ norm --- one simply has to determine the worst-case number of oblivious line-restricted circles needed to cover an arbitrary circle in the plane. For each norm, the optimal line spacing varies. However, our algorithm is guaranteed to produce constant-factor approximations whenever the spacing is less than $2$.
Our algorithm can also be extended to higher dimensions. A natural extension is to use a collection of uniformly spaced parallel lines, and solve LRUDC in a small tube surrounding each line. In this way, we can obtain approximations in arbitrarily large $d$ dimensions, albeit with an approximation factor that scales exponentially with $d$. In particular, applying this technique to the $L_\infty$ norm gives a $2^{d-1}$-approximation in $d$ dimensions, matching an earlier result by \cite{gonzalez}.
Finally, our algorithm applies to covering objects more general than points, such as polygonal shapes. Our proofs only rely on the fact that any optimal circle can be covered entirely with a constant number of oblivious line-restricted circles. The fact that we are covering points is not used.
\section{Concluding Remarks}
We presented a simple algorithm to approximate the unit disk cover problem within a factor of $25/6$ in $L_2$ and within a factor of 2 in $L_\infty$.
The algorithm runs in $O(n \log n)$ time and $O(n)$ space, with the most time-consuming step being a simple sorting of the input. On a practical level, we believe our algorithm has a good mix of performance and simplicity, with a typical implementation of no more than 30 lines of C++.
It remains open what the best approximation factor is that an oblivious line-restricted approach can achieve for the unit disk cover problem. For the $L_\infty$ norm, we saw that 2 is the best possible approximation factor for equally spaced lines. Similarly, for the $L_2$ norm one can show that $15/4$ is a lower bound on the approximation factor of any oblivious algorithm with equally spaced lines. It would be interesting to see whether these lower bounds can be broken by an oblivious algorithm once the equal-spacing condition is removed. Finally, an analysis of the optimal spacing in other $L_p$ norms would be interesting as well.
\end{document}
\begin{document}
\begin{abstract} We check the McKay conjecture on character degrees for the case of the symplectic groups over the field with two elements ${\rm Sp}_{2n}(2)$ and the prime 2. Then we check the inductive McKay condition (see [IMN]~\S~10) for ${\rm Sp}_4(2^m)$ and all primes. \end{abstract}
\title{Odd character degrees for $\Sp (2n,2)$.}
If $G$ is a finite group and $\ell$ is a prime number, denote by ${\rm Irr}_{\ell '}(G)$ the set of irreducible characters of $G$ with degree prime to $\ell$. The McKay conjecture on character degrees asserts that $$|{\rm Irr}_{\ell '}(G)|=|{\rm Irr}_{\ell '}(\No G(P))|$$ for $P$ a Sylow $\ell$-subgroup of $G$. The McKay conjecture has gained new interest since the appearance of the Isaacs-Malle-Navarro theorem reducing it to a related conjecture on quasi-simple groups (see [IMN]). It has been checked for all quasi-simple groups not of Lie type.
Among groups of Lie type, and for the prime $\ell$ being the defining characteristic, the group ${\rm Sp}_{2n}(2)$ is one of the rare cases, and the only infinite series, of finite groups of Lie type whose Sylow subgroup at the defining characteristic (2 in the present case) has an abelian quotient bigger than expected from the rank of the underlying reductive group (see Proposition~3 below). We prove that it nevertheless satisfies the McKay conjecture, which in this case also finishes the checking of the conditions devised in [IMN]. For a uniform treatment of the ``general'' case, see [S3].
We intended to give a sequel to this note by checking the conditions of [IMN] for all simple groups Sp$_{2n}(2^m)$, but the equivariance problems proved more delicate than we first thought. In a joint work with B. Sp\"ath, we developed some general methods which cover this case (see [CS]). We present here, however, the case of Sp$_4(2^m)$, which requires some ad hoc analysis (see \S 2).
\section{Notations}
When $\ell$ is a prime and $n\geq 1$ an integer, one denotes by $n_\ell$ the greatest power of $\ell$ dividing $n$ and $n_{\ell '}:=n/n_\ell$. If $H$ is a finite group and $X{\subseteq} {\rm Irr} (H)$, one denotes $X_{\ell '} :=X\cap {\rm Irr}_{\ell '} (H)$.
If $H$ acts on a set $Y$, one denotes by $Y^H$ the subset of fixed points. If $H$ is cyclic generated by $h$, one writes $Y^H=Y^h$. If $Y$ is a group on which $H$ acts by group automorphisms, one denotes by $[Y,H]$ the subgroup of $Y$ generated by commutators $y^{-1}h(y)$ for $h\in H$, $y\in Y$.
For finite reductive groups ${\bf G}^F$ (${\bf G}$ a reductive group defined over a finite field ${\Bbb{F}}_q$ with associated Frobenius endomorphism $F\colon {\bf G}\to{\bf G}$), the Deligne-Lusztig generalized characters $\Lu{\bf T}{\bf G}{\theta}$, and the associated partition of ${\rm Irr} ({\bf G}^F)$ into series $\ser{{\bf G}^F} s$ (in our cases rational and geometric series will always coincide since ${\bf G}$ will have connected center), we follow the notations of [DM].
\section{Odd character degrees for ${\rm Sp}_{2n}(2)$}
Let us denote by ${\Bbb{F}}$ the algebraic closure of ${\Bbb{F}}_2$, the field with 2 elements. Let $n\geq 2$ be an integer, let ${\bf G} ={\rm Sp}_{2n}({\Bbb{F}} )$ with Frobenius endomorphism ${F_0}\colon {\bf G}\to {\bf G}$ squaring matrix entries. Let $G={\bf G}^{F_0}={\rm Sp}_{2n}({\Bbb{F}}_2)$.
\subsection{The global case}
We use the usual notion and notation $\mathcal{E}(G,1)$ for unipotent characters (see [L], [C2]~\S~12, [DM]~13.19). Unipotent characters of finite classical groups over a finite field are described by the work of Lusztig [L] (see also [C2]\S 13.8). They are parametrized by a set (independent of the finite field) of so-called symbols ${\Lambda}$, which in the case of type $B_n$ are of the kind ${\Lambda} = \left(\begin{matrix} S\\ T \end{matrix} \right)$ where
$S,T$ are subsets of ${\Bbb N}$ with $0\not\in S\cap T$, $|S|-|T|$ is odd and positive, and $n=\sum_{{\lambda}\in S}{\lambda}+\sum_{\mu\in T}\mu
-({|S|+|T|-1\over 2})^2$. Denote by $\chi_{\Lambda}\in \mathcal{E}(G,1)$ the unipotent character associated with ${\Lambda}$. For the following, see also [M1]~6.8 but we give a full proof for the convenience of the reader.
\noindent {\bf Proposition~1.} {\sl Keep $n\geq 2$. Then $\chi_\Lambda (1)$ is odd if and only if $\Lambda$ is among the {\bf five} following symbols $\left(\begin{matrix} n \\ \\ \end{matrix} \right) $, $\left(\begin{matrix} 0,1,n \\ \\ \end{matrix} \right) $, $\left(\begin{matrix} 0,1 \\ n \\ \end{matrix} \right) $, $\left(\begin{matrix} 1,n \\ 0 \\ \end{matrix} \right) $, and $\left(\begin{matrix} 0,n \\ 1 \\ \end{matrix} \right) $ . }
\noindent {\it Proof.} For a unipotent character of a finite classical group over a finite field of cardinality $q$ and characteristic $p$ (see a list of types in [L]~\S~8), $\chi_{\Lambda}(1)_p = D_{\Lambda} (q)_p$ where $D_{\Lambda}$ is the rational function in $q$ given in [L]~2.8 and [C2]~p.467. In particular for $p=2$, denoting ${\Lambda} = \left(\begin{matrix} S\\ T \end{matrix} \right)$ and
$M=|S|+|T|$, we have $$\chi_{\Lambda} (1)_2={\prod_{{\lambda} '<{\lambda}\ {\rm in}\ S }(2^{\lambda}-2^{{\lambda} '})_2\ \prod_{\mu '<\mu\ {\rm in}\ T }(2^\mu-2^{\mu '})_2\ \prod_{{\lambda} \in S,\,\mu\in T}(2^{\lambda}+2^{\mu})_2\over 2^{{M-1\over 2}}\,2^{{1\over 2}\bigl( (M-2)(M-3)+(M-4)(M-5)+\cdots \bigr)}}.$$
The denominator is equal to $2^{{M-1\over 24}(2M^2-7M+15)}$ for any odd $M\geq 1$ (easy induction).
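The ``easy induction'' can also be confirmed by machine; the following sketch (ours, for illustration only) compares the exponent read off from the displayed denominator with the closed form, for odd $M$.

```python
def denominator_exponent(M):
    # (M-1)/2 + ((M-2)(M-3) + (M-4)(M-5) + ... + 1*0)/2, for odd M
    s = sum((M - 2 * k) * (M - 2 * k - 1) for k in range(1, (M - 1) // 2 + 1))
    return (M - 1) // 2 + s // 2  # each product of consecutive integers is even

def closed_form(M):
    num = (M - 1) * (2 * M * M - 7 * M + 15)
    assert num % 24 == 0
    return num // 24

for M in range(1, 200, 2):
    assert denominator_exponent(M) == closed_form(M)
```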
Concerning the numerator, if $c,d\geq 0$ are integers, one has $(2^c-2^d)_2=(2^c+2^d)_2=2^{{\rm min}(c,d)}$ whenever $c\not=d$, while $2^c+2^d=2^{{\rm min}(c,d)+1}$ whenever $c=d$. So $$\chi_{\Lambda} (1)_2=2^{\phi_{S,T}-{{M-1\over 24}(2M^2-7M+15)}}$$ where
$$\phi_{S,T}=|S\cap T|+\sum_{{\lambda} '<{\lambda} \ \rm in\ S}{\rm min}({\lambda} ,{{\lambda} '})+ \sum_{\mu '<\mu \ \rm in\ T}{\rm min}(\mu ,{\mu '})+\sum_{{\lambda}\in S,\mu \in T}{\rm min}({\lambda} ,{\mu }).$$
It is now clear that $\chi_{\Lambda} (1)$ is odd for $ \Lambda\in\Big\{ \left(\begin{matrix} n \\ \\ \end{matrix} \right) , \left(\begin{matrix} 0,1,n \\ \\ \end{matrix} \right) ,
\left(\begin{matrix} 0,1 \\ n \\ \end{matrix} \right) ,
\left(\begin{matrix} 1,n \\ 0 \\ \end{matrix} \right) , \left(\begin{matrix} 0,n \\
1 \\ \end{matrix} \right) \Big\}$. Conversely, if $M=3$ and $\chi_{\Lambda}(1)$ is odd, then ${\Lambda} = \left(\begin{matrix} {\lambda} '<{\lambda}\\ \mu \end{matrix} \right)$ and $\phi_{\Lambda}={\lambda} '+{\rm min} (\mu,{\lambda} ')+{\rm min}(\mu ,{\lambda})$, or ${\Lambda} = \left(\begin{matrix} {\lambda} ''<{\lambda}'<{\lambda}\\ \\ \end{matrix} \right)$ and $\phi_{\Lambda} = {\lambda} '+2{\lambda} ''$. Having $\phi_{\Lambda} =1$ clearly forces $\{{\lambda} ',\mu\}=\{0,1\}$ (since ${\lambda} '=\mu =0$ is forbidden) and $({\lambda} '',{\lambda} ')=(0,1)$, respectively. Hence the Proposition for $M\leq 3$.
There remains to check that $\phi_{S,T}>{{M-1\over 24}(2M^2-7M+15)}$ as soon as $M>3$. Let us re-index the elements of $S$ and $T$ in a single sequence $\nu_1\leq \nu_2\leq\dots\leq \nu_{M} $ such that $S$ and $T$ correspond with disjoint subsets of indexes in $[1,M]$. Then
$$\phi_{S,T}=|S\cap T|+ \sum_{i<j}{\rm min}(\nu_i,\nu_j)=|S\cap T|+\sum_{i=1}^M\nu_i(M-i).$$ Since the sequence $(\nu_i)$ is the merging of two strictly increasing sequences, it may take a given value of $S\cup T$ only once or twice, and the value 0 only once. So $\nu_i$ is at least the $i$-th term of the sequence $0,1,1,2,2,\dots,
{M-1\over 2}, {M-1\over 2}$. Then $\phi_{S,T}\geq |S\cap T|+\sum_{i=1}^{ {M-1\over 2}}i(2M -4i-1)$. An easy induction on ${M-1\over 2}$ shows that $$\sum_{i=1}^{ {M-1\over 2}}i(2M -4i-1) - {M-1\over 24}(2M^2-7M+15)={(M-1)(M-3)\over 4}$$ for any odd $M\geq 3$. Thus our claim.
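The closing identity, too, is easy to confirm mechanically; the check below is our own illustration of the statement for odd $M\geq 3$.

```python
def excess(M):
    # sum_{i=1}^{(M-1)/2} i(2M - 4i - 1) minus (M-1)(2M^2 - 7M + 15)/24
    s = sum(i * (2 * M - 4 * i - 1) for i in range(1, (M - 1) // 2 + 1))
    return s - (M - 1) * (2 * M * M - 7 * M + 15) // 24

# the excess equals (M-1)(M-3)/4, strictly positive as soon as M > 3
for M in range(3, 400, 2):
    assert 4 * excess(M) == (M - 1) * (M - 3)
```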
\vrule height 1.6ex width .7ex depth -.1ex
\noindent {\bf Proposition~2. } {\sl Let $n\geq 2$ be an integer. Then ${\rm Sp}_{2n}({\Bbb{F}}_2)$ has $2^{n+1}$ irreducible characters of odd degree.}
\noindent {\it Proof. } Recall ${\bf G} ={\rm Sp}_{2n}({\Bbb{F}} )$ with Frobenius endomorphism ${F_0}\colon {\bf G}\to {\bf G}$ squaring matrix entries. Let $G={\bf G}^{F_0}={\rm Sp}_{2n}({\Bbb{F}}_2)$ (part of case (a) in [L]~\S~8). Note that ${\bf G}$ has (trivial) connected center.
By [L] p.164, ${\Bbb{F}}$ being of characteristic 2, there is an isogeny between ${\bf G}$ and its dual ${\bf G}^*$ inducing a bijection between rational semi-simple elements, with isomorphisms of the centralizers of corresponding elements. This, along with property (A) of [L]~7.8, shows that ${\rm Irr} (G)$ is in bijection with the disjoint union of the $\ser{\Ce G(s)}1$'s for $s$ ranging over the semi-simple conjugacy classes of $G$ (see [L]~8.7.6). Through this Jordan decomposition, the degrees are multiplied by
$|({\bf G}^*)^{F_0}|_{2'}\,|\Ce G(s)|^{-1}_{2'}$, so
$|{\rm Irr}_{2'}(G)|=\sum_s |\ser{\Ce G(s)}1_{2'}|$, a sum over the semi-simple classes of $G$.
Characteristic polynomials provide a bijection between the classes of semi-simple elements of ${\rm Sp}_{2n}({\Bbb{F}}_2)$ and the set of self-dual polynomials $f\in{\Bbb{F}}_2[X]$ of degree $2n$. If $s$ corresponds with $f$, then $\Ce G (s)\cong {\rm Sp}_{2m}({\Bbb{F}}_2)\times C_s$ where $C_s$ is a product of finite linear groups and $2m$ is the multiplicity of $(X-1)$ in $f$. For a given $m<n$, the number of such classes is $2^{n-m-1}$. This is because one has to count the polynomials $f=(X-1)^{2m}g$ with a self-dual $g(X)=1+a_1X+\dots + a_{n-m-1}X^{n-m-1}+a_{n-m}X^{n-m}+a_{n-m-1}X^{n-m+1} +\dots +a_{1}X^{2n-2m-1}+X^{2n-2m}$ such that $g(1)\not=0$. There are $2^{n-m-1}$ such $g$'s, corresponding to the choice of the coefficients at degrees $1,2,\dots , n-m-1$, since $g(1)=a_{n-m}$ has to be $1$. For $m=n$ (central element) there is one conjugacy class ($s=1$).
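The count of the $g$'s can be double-checked by brute force over ${\Bbb F}_2$; the short script below (our illustration, with $d=n-m$, and a function name of our choosing) enumerates all palindromic coefficient vectors.

```python
from itertools import product

def count_good_palindromic(d):
    # count g = 1 + a_1 X + ... + a_d X^d + ... + a_1 X^(2d-1) + X^(2d)
    # over F_2 with g(1) != 0
    count = 0
    for coeffs in product((0, 1), repeat=d):  # (a_1, ..., a_d)
        a = (1,) + coeffs
        full = a + a[-2::-1]          # palindromic coefficient list, length 2d+1
        if sum(full) % 2 == 1:        # g(1) computed in F_2
            count += 1
    return count

# matches the 2^(d-1) claimed in the text, d = n - m >= 1
for d in range(1, 12):
    assert count_good_palindromic(d) == 2 ** (d - 1)
```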
The unipotent characters of finite reductive groups of type $A$ are of even degrees except the trivial character (see for instance [H] or [M1]~6.8). Then Proposition~1 implies that each semi-simple class $s$ corresponding with $m$ as above satisfies
$|\ser{\Ce G(s)}1_{2'}|=5$ for $m\geq 2$, $|\ser{\Ce G(s)}1_{2'}|=1$ otherwise. So the above implies that
$$|{\rm Irr}_{2'}(G)|=5\cdot\sum_{m=2}^{n-1}2^{n-m-1}+5+2^{n-2}+2^{n-1}=5\cdot 2^{n-2}+3\cdot 2^{n-2}=2^{n+1}.$$ This is our claim.
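The final computation can be verified mechanically as well; the snippet below (ours, for illustration) evaluates the left-hand side for a range of $n$.

```python
def odd_count(n):
    # 5 * sum_{m=2}^{n-1} 2^(n-m-1) + 5 + 2^(n-2) + 2^(n-1)
    return (5 * sum(2 ** (n - m - 1) for m in range(2, n))
            + 5 + 2 ** (n - 2) + 2 ** (n - 1))

for n in range(2, 40):
    assert odd_count(n) == 2 ** (n + 1)
```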
\vrule height 1.6ex width .7ex depth -.1ex
\subsection{The local case.}
We use the description of Sp$_{2n}({\Bbb{F}}_2 )\subset {\rm GL}_{2n}({\Bbb{F}}_2)$ as the subgroup of matrices $u$ such that $^tu\left(\begin{matrix} 0&J\\ J&0 \end{matrix} \right)u=\left(\begin{matrix} 0&J\\ J&0 \end{matrix} \right)$ where $J$ denotes the matrix with coefficients $({\delta}_{i, n+1-j})_{1\leq i,j\leq n}$ and $u\mapsto {}^tu$ denotes transposition (see [DM] 15.2).
Let $U:=\Big\{ \left(\begin{matrix} x&xsJ\\ 0&J\bar x J\end{matrix} \right)\mid x\in V\ ,\ s\in {\rm Sym}_n\Big\}$ where ${\rm Sym}_n$ (resp. $V$) is the set of symmetric (resp. upper triangular unipotent) matrices of order $n$ with coefficients in ${\Bbb{F}}_2$, and one denotes $\bar x = {}^tx^{-1}$. We have
\noindent {\bf Proposition~3. } {\sl $U$ is a Sylow $2$-subgroup of $G={\rm Sp}_{2n}({\Bbb{F}}_2)$ for $n\geq 2$. Moreover ${\rm N}_G(U)=U$ and $U/[U,U]$ is of order $2^{n+1}$.}
\noindent {\bf Corollary~4. } {\sl The McKay conjecture (on character degrees) is satisfied in $G={\rm Sp}_{2n}({\Bbb{F}}_2)$ for the prime 2 ($n\geq 2$). That is, the normalizer of any Sylow $2$-subgroup of $G$ has the same number of characters of odd degree as $G$ itself.}
\noindent {\it Proof. } By Proposition~3, the irreducible characters of N$_G(U)=U$ of odd degrees are exactly the linear characters of $U$. So their number is the cardinality of $U/[U,U]$, that is $2^{n+1}$ thanks to Proposition~3 again. Combining with Proposition~2 gives our claim.
\vrule height 1.6ex width .7ex depth -.1ex
\noindent {\it Proof of Proposition~3. } Note that $U$ equals the group of elements over ${\Bbb{F}}_2$ of a rational Borel subgroup (see [DM] 15.2), so it equals its normalizer by the axioms of finite BN-pairs, which are satisfied by this group. Thus our first claim.
Note also the semi-direct decomposition $U\cong {\rm Sym}_n\rtimes V$ for the action of $ V$ on ${\rm Sym}_n$ given by $x.s = xs{}^t\! x$ for $x\in V$, $s\in {\rm Sym}_n$. Since ${\rm Sym}_n$ is abelian and since the Sylow $2$-subgroup
$V$ of GL$_n({\Bbb{F}}_2)$ is known to satisfy $|V/[V,V]|=2^{n-1}$ (see for instance [DM] p.129 and [H]), our claim about $U/[U,U]$ reduces to showing that ${\rm Sym}_n/[{\rm Sym}_n,V]$ is of order 4. So we have to prove that the sum $S'=\sum_{x\in V}{\theta}_x({\rm Sym}_n)$ of the images of the endomorphisms ${\theta}_x\colon s\mapsto xs{}^t\! x-s$ of ${\rm Sym}_n$ has codimension 2.
For $1\leq i,j\leq n$, let us denote by $E_{ij}$ the usual elementary matrix of order $n$. We have $E_{ij}+E_{ji}+E_{ii}\in S'$ for any $1\leq i<j\leq n$, by computing ${\theta}_x(s)$ for $s=E_{jj}$, $x=I_n+E_{ij}$. We also have $E_{ij}+E_{ji}\in S'$ for any $1\leq i<j\leq n$ with $(i,j)\not= (n-1,n)$ (taking $s=E_{jk}+E_{kj}$ and $x=I_n+E_{ik}$ for some $k>i$, $k\not= j$). This shows that $S'$ contains the $E_{ij}+E_{ji}$'s for $1\leq i<j\leq n$ with $(i,j)\not= (n-1,n)$, along with $E_{11},E_{22},\dots E_{n-2,n-2}$ and $E_{n-1,n}+E_{n,n-1}+E_{n-1,n-1}$. This makes a subspace of codimension 2 in ${\rm Sym}_n$, a supplement subspace being generated by $E_{n-1,n-1}$ and $E_{n,n}$. The action of $V$ on the quotient is easily checked to be trivial (one just has to check the images of $E_{n-1,n-1}$ and $E_{n,n}$ by ${\theta}_x$ for $x=I_n+E_{ij}$ -- which we just did above -- since the latter generate $V$ as a group, using again the fact that the field has two elements). So this subspace is indeed the sum of the images of all the ${\theta}_x$'s for $x\in V$.
\vrule height 1.6ex width .7ex depth -.1ex
\subsection{Inductive McKay condition for ${\rm Sp}_{2n}(2)$}
In [IMN], the authors show that a finite group satisfies the McKay conjecture as soon as all its simple non-abelian subquotients satisfy a series of conditions concerning the automorphism groups of the perfect central extensions of those simple subquotients. Note that ${\rm Sp}_{4}({\Bbb{F}}_2)$ is isomorphic with the symmetric group on 6 letters.
\noindent {\bf Theorem~5. } {\sl Let $n\geq 3$ be an integer. Then ${\rm Sp}_{2n}({\Bbb{F}}_2)$ is a simple group that satisfies the conditions of [IMN]~\S~10 for all prime numbers.}
\noindent {\it Proof of the Theorem. } When $n\geq 3$, ${\rm Sp}_{2n}({\Bbb{F}}_2)$ is a simple group. When $n=3$, it satisfies the theorem by [M2]~4.1. When $n>3$, ${\rm Sp}_{2n}({\Bbb{F}}_2)$ has trivial Schur multiplier and trivial outer automorphism group (see [GLS]), so that the checking required by [IMN] just amounts to the McKay conjecture itself (see [IMN]~10.3). For $\ell =2$, it is Corollary~4. In the case of other primes, this is a consequence of Malle's parametrization [M1]~7.8 along with Sp\"ath's extensibility results (see [S1]~1.2, [S2]~1.2, 8.4).
\vrule height 1.6ex width .7ex depth -.1ex
\section{${\rm Sp}_4({\Bbb{F}}_{2^m})$.}
We prove here the following
\noindent {\bf Theorem~6. } {\sl Let $m\geq 2$ be an integer. Then ${\rm Sp}_{4}({\Bbb{F}}_{2^m})$ is a simple group that satisfies the conditions of [IMN]~\S~10 for all prime numbers.}
We keep ${\Bbb{F}}$ the algebraic closure of the field ${\Bbb{F}}_2$ with 2 elements and ${\bf G} ={\rm Sp}_4({\Bbb{F}} )$. We denote by ${\bf T}_0$ its diagonal torus and $({\Bbb{F}} ,+)\to{\bf G}$, $t\mapsto x_{\alpha} (t)$ its minimal unipotent ${\bf T}_0$-stable subgroups indexed by the ${\bf T}_0$-roots ${\alpha}$. The Weyl group $W:=\No{\bf G} ({\bf T}_0)/{\bf T}_0$ is generated by the classes $s_1$, $s_2$ of the permutation matrices in ${\rm GL}_4({\Bbb{F}} )$ associated with the permutations $(1,2)(3,4)$ and $(2,3)$, respectively.
Denote by $F_0'$ the automorphism of ${\bf G} ={\rm Sp}_4({\Bbb{F}} )$ which sends $x_{\alpha} (t)$ to $x_{{\alpha} '}(t^2)$ if ${\alpha}$ is short (i.e. its associated reflection is conjugate to $s_1$), and to $x_{{\alpha} '}(t)$ otherwise, where ${\alpha}\mapsto {\alpha} '$ is the permutation of the roots corresponding to the swap of $s_1$ and $s_2$, see [C1]~12.3.3. Note that $F_0=(F'_0)^2$ (notation of \S 1). Denote $F=F_0^m$, so that ${\rm Sp}_{4}({\Bbb{F}}_{2^m})={\bf G}^F$.
\noindent\it Proof of Theorem~6. \rm The group $G={\rm Sp}_4({\Bbb{F}}_{2^m})$ ($m\geq 2$) is simple with trivial Schur multiplier and cyclic outer automorphism group generated by $F'_0$ (see [GLS]). Then the conditions of [IMN]~\S~10 amount to finding, for each prime $\ell$ dividing $|G|$, a proper subgroup $N<G$ containing $\No G(P)$ for $P$ a Sylow $\ell$-subgroup of $G$ and such that ${\sigma} (N)=N$ and $|{\rm Irr}_{\ell '} (G)^{\sigma} |=|{\rm Irr}_{\ell '} (N)^{\sigma}|$ for any ${\sigma}\in\No{{{\rm Aut}(G)}}(P)$ (see [Br]~\S~3). The case $\ell =2$ is also done in [Br], so we assume that $\ell$ is odd and divides $(2^{4m}-1)(2^{2m}-1)=|{\rm Sp}_4({\Bbb{F}}_{2^m})|_{2'}$. The order of $2^m$ mod $\ell$ is $e\in\{ 1,2,4\}$. Let ${\bf S}_e$ be a Sylow $\phi_e$-torus of ${\bf G}$. We have that ${\bf T}_e:=\Ce{\bf G} ({\bf S}_e )$ is a maximal torus of ${\bf T}_0$-type $w_e=$ 1, $s_1s_2s_1s_2$, or $s_1s_2$ according to $e$ being 1, 2 or 4 (for the types of maximal $F$-stable tori, and later of Levi subgroups, we refer to [DM] p.113).
Arguing as in the proof of [M1]~5.14, any Sylow $\ell$-subgroup $P$ has a unique maximal toral elementary abelian subgroup, whose normalizer $N$ in $G$ is then also $N:=\No G({\bf S}_e )=\No G({\bf T}_e )$. It is stable under any automorphism ${\sigma}$ such that ${\sigma} (P)=P$. From what has been said about the possible ${\sigma}$'s, and noting that $N$ has an abelian normal subgroup ${\bf T}_e^F$ of index prime to $\ell$, we see that we just have to prove that $$|{\rm Irr}_{\ell '} (G)^{F'}|=|{\rm Irr} (N)^{xF'}|\leqno({\rm E})$$ for any power $F'$ of $F'_0$ and some $x\in G$ such that $F'({\bf S}_e )={\bf S}_e^x$.
Bringing $({\bf T}_e, F)$ to $({\bf T}_0 ,w_eF)$ by conjugacy with some $g\in{\bf G}$ such that $g^{-1}F(g)\in w_e{\bf T}_0$, we may rewrite the above as $$|{\rm Irr}_{\ell '} (G)^{F''}|=|{\rm Irr} (\No{\bf G}({\bf T}_0)^{w_eF})^{F''}|\leqno({\rm E'})$$ if $F''$ commutes with $w_eF$ and is in the same class as $F'$ mod inner automorphisms of $G$.
Recall Malle's bijection ${\rm Irr}_{\ell '} (G)\mr{\sim}{\rm Irr}_{\ell '} (N)$ which, among other properties, sends components of $\Lu{{\bf T}_e}{\bf G}{\theta}$ to components of $\Ind_{{\bf T}_e}^{N}{\theta}$ for relevant ${\theta}\in{\rm Irr} ({\bf T}_e^F)$ (see [M1]~\S 7.1).
Let us first look at the regular characters $\pm\Lu{\bf T}{\bf G} ({\theta} )$. They are of degree prime to $\ell$ if and only if ${\bf T}$ can be taken as ${\bf T}_e=\Ce{\bf G} ({\bf S}_e )$ (see [M1]~6.6). Such a character is fixed by $F'$ if and only if $F'({\bf T}_e ,{\theta} )$ and $({\bf T}_e ,{\theta} )$ are ${\bf G}^F$-conjugate (see [Br]~\S~2.1.2). This is equivalent to $xF'({\theta} )$ being $\No G({\bf S}_e)$-conjugate to ${\theta}$ ([M1]~5.11). This is also the criterion for $\Ind_{{\bf T}_e^F}^N ({\theta} )$ being $xF'$-fixed, as can be seen easily from the definition of induced characters. Thus our claim in the form of (E) above.
Let us now turn to unipotent characters. From [M1] 6.5, we know that they have to be in $\ser{{\bf G}^F} {{\bf T}_e ,1}$, the set of irreducible characters occurring in the generalized character $\Lu{{\bf T}_e}{\bf G} 1$. So we have to check that $\ser{{\bf G}^F} {{\bf T}_e ,1}^{F'}_{\ell '}$ and ${\rm Irr}(N/{\bf T}_e^F )^{F'}$ have the same cardinality.
As for the first set, one knows that among the six unipotent characters of ${\rm Sp}_4({\Bbb{F}}_{2^m})$, only the two that are of generic degree ${1\over 2}q(q^2+1)$ are not fixed by $F'_0$ (see [M1]~3.9.a). Those are among the unipotent characters of degree prime to $\ell$ only when $e=1$ or $2$. So it suffices to check that all characters of $N/{\bf T}_e^F$ but 2 are fixed by $xF'$ in case $e=1$ or $2$ and $F'$ is an odd power of $F'_0$, and that all are fixed otherwise.
In the cases $e=1$ or $2$, both $w_1=1$ and $w_2=s_1s_2s_1s_2$ are fixed by $F'_0$, so one may take $F''=F'$ in (E') above. Recall that $F_0'$ acts on $W$ by permuting $s_1$ and $s_2$. The group $W$ is dihedral of order 8, so $F'_0$ induces an automorphism of order two of $W^{\rm ab}$; hence two linear characters out of four are $F_0'$-fixed, while the character of degree two is fixed. Hence our claim for any odd power of $F'_0$. In the case of an even power, the action is trivial, as expected.
In the case $e=4$, one may take $w_4=s_1s_2$ and $F''=(s_1F'_0)^a$ when $F' =(F'_0)^a$. Then the action of $F''$ on $(\No{\bf G} ({\bf T}_0 )/{\bf T}_0)^{w_4F} =\Ce W(w_4)$ is trivial.
We now assume $\ser Gs^{F'}_{\ell '}\not=\emptyset$ for an $s$ that is neither central nor regular. The group $\Ce{\bf G} (s)$ is always a Levi subgroup of ${\bf G}$ (see the proof of Proposition~2 above) and by [M1] 6.5 it must contain a Sylow $\phi_e$-torus. A proper $F$-stable Levi subgroup of ${\bf G}$ can contain a $\phi_1$-Sylow for the types $(\LL_{\{s_1\}}, F)$ and $(\LL_{\{s_2\}}, F)$, and a $\phi_2$-Sylow for the types $(\LL_{\{s_1\}}, s_2s_1s_2F)$ and $(\LL_{\{s_2\}}, s_1s_2s_1F)$. In each case the corresponding finite group has two unipotent characters, the trivial and the Steinberg character, of distinct degrees, so that for an $s$ whose class is $F'$-stable with such a centralizer in the dual, $\ser Gs$ has two elements with distinct degrees, and $F'$ acts trivially on $\ser Gs$.
The corresponding statement on the local side is as follows: if ${\theta}$ is a non-regular, non-central linear character of ${\bf T}_0^{w_eF}$, then $\Ind^{\No{\bf G}({\bf T}_0)^{w_eF}}_{{\bf T}_0^{w_eF}} {\theta}$ has two components, both $F''$-fixed if $F''({\theta}) \in \No{\bf G}({\bf T}_0)^{w_eF}.{\theta}$. This holds because non-regularity implies that $(\No{\bf G}({\bf T}_0)^{w_eF})_{\theta}/{\bf T}_0^{w_eF}$ is of order 2, but then $F''$ can act only trivially on it.
\vrule height 1.6ex width .7ex depth -.1ex
\section{References.} \begin{enumerate}
\item[{[Br]}] O. Brunat, On the inductive McKay condition in the defining characteristic, {\sl preprint}.
\item[{[CS]}] M. Cabanes and B. Sp\"ath, Equivariance and extendibility in finite reductive groups with connected center, \it in preparation, \rm 2011.
\item[{[C1]}] R. Carter, {\it Simple groups of Lie type}, Wiley, New York 1972.
\item[{[C2]}] R. Carter, {\it Finite groups of Lie type : conjugacy classes and complex characters}, Wiley, New York 1985.
\item[{[DM]}] F. Digne and J. Michel, {\it Representations of finite groups of Lie type}, Cambridge, 1991.
\item[{[GLS]}] D. Gorenstein, R. Lyons and R. Solomon, {\it The classification of the finite simple groups, Number 3.} {\sl Mathematical Surveys and Monographs, Amer. Math. Soc.}, Providence, 1998.
\item[{[H]}] R. Howlett, On the degrees of Steinberg characters of Chevalley groups, {\sl Math. Z.}, {\bf 135} (1974), 125-135.
\item[{[IMN]}] M. Isaacs, G. Malle, and G. Navarro, A reduction theorem for the McKay conjecture, \sl Inventiones Math., \bf 170\rm (2007), 33--101.
\item[{[L]}] G. Lusztig, Irreducible representations of finite classical groups, \sl Inventiones Math. \bf 43 \rm (1977), 125--175.
\item[{[M1]}] G. Malle, Height 0 characters of finite groups of Lie type, \sl Representation Theory, \bf 11\rm (2007), 192--220.
\item[{[M2]}] G. Malle, The inductive McKay condition for simple groups not of Lie type, \sl Comm. Algebra., \bf 36-2\rm (2008), 455--463.
\item[{[S1]}] B. Sp\"ath, Sylow $d$-tori of classical groups and the McKay conjecture I, \sl J. Algebra, \bf 323\rm (2010), 2469-2493.
\item[{[S2]}] B. Sp\"ath, Sylow $d$-tori of classical groups and the McKay conjecture II,\sl J. Algebra, \bf 323\rm (2010), 2494-2509.
\item[{[S3]}] B. Sp\"ath, Inductive McKay condition in defining characteristic, \it preprint \rm (2010) arXiv:1009.0463 .
\end{enumerate}
\end{document}
\begin{document}
\title{Six-dimensional nilpotent Lie algebras} \author{Serena Cical\`o, Willem A. de Graaf and Csaba Schneider} \date{1 June 2020} \maketitle
\begin{abstract} We give a full classification of 6-dimensional nilpotent Lie algebras over an arbitrary field, including fields that are not algebraically closed and fields of characteristic~2. To achieve the classification we use the action of the automorphism group on the second cohomology space, as isomorphism types of nilpotent Lie algebras correspond to orbits of subspaces under this action. In some cases, these orbits are determined using geometric invariants, such as the Gram determinant or the Arf invariant. As a byproduct, we completely determine, for a 4-dimensional vector space $V$, the orbits of $\mathrm{\mathop{GL}}(V)$ on the set of 2-dimensional subspaces of $V\wedge V$. \end{abstract}
\noindent {\bf Keywords and phrases:} nilpotent Lie algebras, six-dimensional Lie algebras, second cohomology, Klein correspondence, quadratic forms, Arf invariant.
\noindent {\bf 2010 Mathematics Subject Classification:} 17B30, 17B40, 17B56, 11E04.
\section{Introduction}
The classification of small-dimensional Lie algebras is a classical problem. The history of the classification problem of 6-dimensional nilpotent Lie algebras goes back to Umlauf (\cite{umlauf}). In the 1950's Morozov (\cite{morozov}) published a classification of 6-dimensional nilpotent Lie algebras valid over fields of characteristic 0. Recently several classifications have appeared, over various ground fields. We mention \cite{gong} (over algebraically closed fields, and over the real field), \cite{sch} (over various finite fields), and \cite{artw} (over fields of characteristic not 2). However, no classification that treats all ground fields, in particular fields of characteristic 2, is known up to now. It is the purpose of this paper to complete the classification of 6-dimensional nilpotent Lie algebras over an arbitrary field. \par Nilpotent Lie algebras up to dimension five are well-known. There is just one isomorphism type of nilpotent Lie algebras with dimension two, two isomorphism types in dimension~3, three isomorphism types in dimension 4, and nine isomorphism types in dimension~5. The classification of nilpotent Lie algebras with dimension up to~5 is independent of the field in the sense that the isomorphism types can be described by uniform structure constant tables with integer entries. This is not, however, the case in dimension~6, as the number of isomorphism types of 6-dimensional nilpotent Lie algebras may depend on the characteristic of the underlying field. There are 36 isomorphism types over $\mathbb{F}_2$, but only 34 isomorphism types over $\mathbb{F}_3$; see~\cite{sch}. Over fields $\mathbb{F}$ of characteristic not 2, the number of isomorphism types is $26+4s$ where $s$ is the (possibly infinite) index of $(\mathbb{F}^*)^2$ in the multiplicative group $\mathbb{F}^*$ of $\mathbb{F}$ (\cite{artw}). In the present paper we give a classification that covers all ground fields, in particular also those of characteristic 2. 
For a field $\mathbb{F}$ of characteristic~2, define an equivalence relation $\relstarplus$ on $\mathbb{F}$ as follows: $\alpha\relstarplus\beta$ if and only if $\alpha=\gamma^2\beta+\delta^2$ with some $\gamma\in\mathbb{F}^*$ and $\delta\in\mathbb{F}$. Let $t$ be the (possibly infinite) number of
equivalence classes of $\relstarplus$ in $\mathbb{F}$. Moreover, for a field $\mathbb{F}$ of characteristic~2, define the
equivalence relation $\relpsi$ by $\alpha\relpsi\beta$ if and only if there is a $\gamma\in \mathbb{F}$ with $\gamma^2+\gamma+\alpha+\beta=0$. Let $u$ be the (possibly infinite) number of equivalence classes of
$\relpsi$ in $\mathbb{F}$.
\begin{thm}\label{main} The number of isomorphism types of nilpotent Lie algebras with dimension~$6$ over fields of characteristic different from~$2$ is $26+4s$, while this number is $26+2s+4t+2u$ over fields of characteristic~$2$. \end{thm}
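The quantities $s$, $t$ and $u$ are easily computed by brute force for small fields. The following sketch (ours, not part of the paper) does this over $\mathbb{F}_2$ and recovers the count of 36 isomorphism types over $\mathbb{F}_2$ mentioned above:

```python
# Brute-force sanity check, over the two-element field F_2, of the
# quantities s, t, u appearing in the theorem.  All arithmetic is mod 2.
F = [0, 1]
Fstar = [1]

# s: index of the squares (F*)^2 in the multiplicative group F*.
squares = {g * g % 2 for g in Fstar}
s = len(Fstar) // len(squares)

# t: classes of  a ~ b  iff  a = c^2*b + d^2  with c in F*, d in F.
def rel_star_plus(a, b):
    return any((c * c * b + d * d) % 2 == a for c in Fstar for d in F)

# u: classes of  a ~ b  iff  c^2 + c + a + b = 0  for some c in F.
def rel_psi(a, b):
    return any((c * c + c + a + b) % 2 == 0 for c in F)

def num_classes(rel):
    reps = []
    for a in F:
        if not any(rel(a, r) for r in reps):
            reps.append(a)
    return len(reps)

t = num_classes(rel_star_plus)
u = num_classes(rel_psi)
print(s, t, u, 26 + 2 * s + 4 * t + 2 * u)  # 1 1 2 36
```

The final value 36 agrees with the number of isomorphism types over $\mathbb{F}_2$ reported in \cite{sch}.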
In Section~\ref{ressect}, a list of the isomorphism classes of the 6-dimensional nilpotent Lie algebras over an arbitrary field is given. The first part of this theorem concerning fields of characteristic different from~2 was already proved in the article~\cite{artw} by the second author (see also~\cite{prew} for more details).
As stressed above, our paper treats fields of characteristic~2, and, for the first time, gives a full classification of 6-dimensional nilpotent Lie algebras over an arbitrary field of characteristic~2. As far as we are aware, in characteristic~2, the only existing classification of such Lie algebras was given, over algebraically closed fields, in Gong's Ph.D.\ dissertation~\cite{gong}. Comparing Gong's results to ours, we found that Gong's classification contains one mistake: namely, his Lie algebras $N_{6,2,10}$ and $(E)$ are both isomorphic to our Lie algebra $L_{6,24}(0)$ defined in Section~\ref{ressect}. Apart from this, our classification agrees with Gong's.\par
The original aim of the research presented in this paper was to extend the results of~\cite{artw} to fields of characteristic~2. Since, in the course of this work, some proofs of~\cite{artw} were revised, we decided, in this paper, to present a full classification of 6-dimensional nilpotent Lie algebras that is valid over all fields. In addition, some results in~\cite{artw} relied on computer calculations (specifically, computing Gr\"obner bases for ideals in polynomial rings), while the arguments of the present paper are all theoretical with no computer calculations involved. Nevertheless, we should mention here that several of the theoretical arguments in Section~\ref{orbsect} were inspired by Gr\"obner basis computations in the computational algebra system Magma~\cite{Magma}, and it would have been significantly more difficult, maybe even impossible, to obtain the classification in Theorem~\ref{main} without performing such computations.\par Our methodology, which is the same as in~\cite{artw}, is explained in Section~\ref{backg}. We construct the 6-dimensional nilpotent Lie algebras as certain central extensions, descendants in our terminology, of lower-dimensional algebras. To separate the isomorphism classes of the descendants, we use the action of the automorphism group on the subspaces of the second cohomology space. It is interesting to note that, in some examples, the automorphism group preserves a non-degenerate quadratic form on the second cohomology space and the isomorphism classes of the descendants can be characterized by purely geometric means. An example of this situation is the abelian Lie algebra $L$ with dimension~4, whose automorphism group $\mathrm{\mathop{GL}}(4,\mathbb{F})$, by the Klein correspondence, preserves a quadratic form on the second cohomology space $H^2(L,\mathbb{F})=(\mathbb{F}^4)\wedge(\mathbb{F}^4)$ with dimension~6. 
In characteristic different from~2, the 6-dimensional descendants of $L$ can be completely determined using the Gram determinant of the restriction of the quadratic form to the 2-dimensional subspaces of $H^2(L,\mathbb{F})$. In characteristic~2, we determined these descendants using the Arf invariant of the restriction of the quadratic form to these 2-dimensional subspaces. See Sections~\ref{formsect} and~\ref{orbsect} for the details. The use of the Klein correspondence in the classification of nilpotent Lie algebras of nilpotency class~2 was also explored in~\cite{stroppel}. An interesting byproduct of our work is the determination of the $\mathrm{\mathop{GL}}(4,\mathbb{F})$-orbits on the 2-dimensional subspaces of $(\mathbb{F}^4)\wedge (\mathbb{F}^4)$ (Theorem~\ref{kleinth}).
Here is an outline of the paper. In Section \ref{backg} we describe the cohomological method that we use to classify nilpotent Lie algebras, which also appeared in \cite{artw}, \cite{skjelsund}, \cite{gong}. In Section \ref{ressect} we present the main result of this paper, that is the classification of the 6-dimensional nilpotent Lie algebras. Section \ref{formsect} has a number of results on bilinear and quadratic forms that we need. Then in Section \ref{orbsect} the main work is performed to prove the main result.\par
The main result of this paper can be accessed electronically using the {\sf LieAlgDB} package~\cite{liealgdb} of the computational algebra system {\sf GAP}~\cite{gap}.
\section{A summary of the method}\label{backg}
The main idea that we use here is to obtain the nilpotent Lie algebras of dimension $n$ as central extensions of Lie algebras of smaller dimension. The central extensions are defined using the second cohomology space, and the isomorphism classes of the central extensions correspond to the orbits of the automorphism group on the set of the subspaces of this cohomology space. This method has been described for Lie algebras by Skjelbred and Sund (\cite{skjelsund}). Similar ideas appear in the recent work concerning the classification of $p$-groups; see, e.g., \cite{eickob}, \cite{newobrvau}, \cite{obrien}, \cite{obvau}. We summarize the method in this section without giving proofs or explanations; the details can, for instance, be found in~\cite{artw}.\par For a Lie algebra $L$, let $L^i$ denote the terms of the lower central series. If $L$ is nilpotent then $L^{i+1}=0$ with some $i$ and the smallest such $i$ is called the {\em nilpotency class} of $L$. The second term $L^2$ of the lower central series will usually be written as $L'$. We denote the center of $L$ by $C(L)$. Adapting terminology from~\cite{obrien} to our context, a Lie algebra $K$ is said to be a {\em descendant} of the Lie algebra $L$ if $K/C(K)\cong L$ and $C(K)\leqslant K'$. If $\dim C(K)=s$ then $K$ is also referred to as a {\em step-$s$ descendant}. A descendant of a nilpotent Lie algebra is nilpotent. Conversely, if $K$ is a finite-dimensional nilpotent Lie algebra over a field $\mathbb{F}$, then $K$ is either a descendant of a smaller-dimensional nilpotent Lie algebra, or $K=K_1\oplus\mathbb{F}$ where $K_1$ is an ideal of $K$ and $\mathbb{F}$ is viewed as a 1-dimensional Lie algebra. 
Hence determining the isomorphism types of the descendants of the nilpotent Lie algebras with dimension at most~5 suffices for the classification of the nilpotent Lie algebras with dimension~6.\par The main idea of the method is that, for a nilpotent Lie algebra $L$ over a field $\mathbb{F}$, the isomorphism types of the descendants of $L$ are in 1-1 correspondence with the $\mathrm{\mathop{Aut}}(L)$-orbits of some of the subspaces of the second cohomology space $H^2(L,\mathbb{F})$. The second cohomology spaces for nilpotent Lie algebras are defined as follows. For a vector space $V$, let $Z^2(L,V)$ denote the set of alternating bilinear maps $\vartheta:L\times L\rightarrow V$ with the property that \[\vartheta([x_1,x_2],x_3)+\vartheta([x_3,x_1],x_2)+\vartheta([x_2,x_3],x_1)=0 \text{ for all $x_1,\ x_2,\ x_3\in L$}.\]
The set $Z^2(L,V)$ is viewed as a vector space over $\mathbb{F}$ and the elements of $Z^2(L,V)$ are said to be {\em cocycles}. We define, for a linear map $\nu \colon L\to V$, a map $\eta_\nu\colon L\times L\rightarrow V$ as $\eta_\nu(x,y)=\nu([x,y])$. The set $\{\eta_\nu\ |\ \mbox{$\nu:L\rightarrow V$ is linear}\}$ is denoted by $B^2(L,V)$. It is routine to check that $B^2(L,V)$ is a subspace of $Z^2(L,V)$, and the elements of $B^2(L,V)$ are called {\em coboundaries}. The {\em second cohomology space} $H^2(L,V)$ is defined as the quotient $Z^2(L,V)/B^2(L,V)$. \par The vector spaces defined in the previous paragraph can be viewed as $\mathrm{\mathop{Aut}}(L)$-modules. Indeed, for $\varphi\in \mathrm{\mathop{Aut}}(L)$ and $\vartheta\in Z^2(L,V)$ define $\varphi\vartheta\in Z^2(L,V)$ by the equation $(\varphi\vartheta)(x,y) = \vartheta(\varphi(x),\varphi(y))$. The action $\vartheta\mapsto \varphi\vartheta$ makes $Z^2(L,V)$ an $\mathrm{\mathop{Aut}}(L)$-module and it is easy to see that $B^2(L,V)$ is an $\mathrm{\mathop{Aut}}(L)$-submodule. Hence the quotient $H^2(L,V)$ can also be viewed as an $\mathrm{\mathop{Aut}}(L)$-module.\par Let $L$ be a Lie algebra and $V$ a vector space over a field $\mathbb{F}$. For $\vartheta\in Z^2(L,V)$, define a Lie algebra $L_\vartheta$ as follows. The underlying space of $L_\vartheta$ is $L\oplus V$. The product of two elements $x+v,\ y+w\in L_\vartheta$ is defined as $[x+v,y+w]=[x,y]_L+\vartheta(x,y)$ where $[x,y]_L$ denotes the product in $L$. Then $L_\vartheta$ is a Lie algebra and $V$ is an ideal of $L_\vartheta$ such that $V\leqslant C(L_\vartheta)$. In addition, $L\cong L_\vartheta/V$, and hence $L_\vartheta$ is a central extension of $L$. Further, if $\vartheta_1,\ \vartheta_2\in Z^2(L,V)$ are such that $\vartheta_1-\vartheta_2\in B^2(L,V)$ then $L_{\vartheta_1}\cong L_{\vartheta_2}$, and so the isomorphism type of $L_\vartheta$ only depends on the element $\vartheta+B^2(L,V)$ of $H^2(L,V)$. 
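To make the construction concrete, here is its smallest non-trivial instance (our own illustration; the algebra $L_{3,2}$ appears in Section~\ref{ressect}):

```latex
Let $L$ be the abelian Lie algebra with basis $\{x_1,x_2\}$ and let
$V=\mathbb{F}$. Since $[L,L]=0$, every map $\eta_\nu(x,y)=\nu([x,y])$
vanishes, so $B^2(L,\mathbb{F})=0$ and
\[H^2(L,\mathbb{F})=Z^2(L,\mathbb{F})=\langle\vartheta\rangle,\quad
\text{where }\vartheta(x_1,x_2)=-\vartheta(x_2,x_1)=1.\]
Writing $x_3$ for a basis vector of $V$, the extension $L_\vartheta$ has
underlying space $L\oplus V$ and its only non-trivial product is
\[[x_1,x_2]=\vartheta(x_1,x_2)x_3=x_3,\]
so $L_\vartheta\cong\langle x_1,x_2,x_3\mid [x_1,x_2]=x_3\rangle=L_{3,2},
\text{ the Heisenberg algebra.}\]
```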
Conversely, let $K$ be a Lie algebra such that $C(K)\neq 0$, and set $V=C(K)$ and $L=K/C(K)$. Let $\pi \colon K\to L$ be the projection map. Choose an injective linear map $\sigma \colon L\to K$ such that $\pi(\sigma(x))=x$ for all $x\in L$. Define $\vartheta \colon L\times L\to V$ by $\vartheta(x,y) = [ \sigma(x), \sigma(y) ] -\sigma([x,y])$. Then $\vartheta$ is a cocycle such that $K\cong L_\vartheta$. Though $\vartheta$ depends on the choice of $\sigma$, the coset $\vartheta+B^2(L,V)$ is independent of $\sigma$. Hence the central extension $K$ of $L$ determines a well-defined element of $H^2(L,V)$. \par Let us now fix a basis $\{e_1,\ldots,e_s\}$ of $V$. A cocycle $\vartheta \in Z^2(L,V)$ can be written as \[\vartheta(x,y) = \sum_{i=1}^s \vartheta_i(x,y) e_i,\] where $\vartheta_i \in Z^2(L,\mathbb{F})$. Furthermore, $\vartheta$ is a coboundary if and only if all $\vartheta_i$ are. For $\vartheta\in Z^2(L,V)$, let $\vartheta^\perp$ denote the {\em radical} of $\vartheta$; that is, the set of elements $x\in L$ such that $\vartheta(x,y)=0$ for all $y\in L$. Then \[\vartheta^\perp=\bigcap_{\eta\in\langle \vartheta_1,\ldots,\vartheta_s\rangle} \eta^\perp=\vartheta_1^\perp\cap\cdots\cap\vartheta_s^\perp.\]
\begin{thm}[Lemmas~2--4 in \cite{artw}]\label{coctheorem} Let $L$ be a Lie algebra, let $V$ be a vector space with fixed basis $\{e_1,\ldots,e_s\}$ over a field $\mathbb{F}$, and let $\vartheta,\ \eta$ be elements of $Z^2(L,V)$. \begin{enumerate} \item[(i)] The Lie algebra $L_\vartheta$ is a step-$s$ descendant of $L$ if and only if $\vartheta^\perp\cap C(L)=0$ and the image of the subspace $\langle \vartheta_1,\ldots,\vartheta_s\rangle$ in $H^2(L,\mathbb{F})$ is $s$-dimensional. \item[(ii)] Suppose that $L_\vartheta$ and $L_\eta$ are descendants of $L$. Then $L_\vartheta\cong L_\eta$ if and only if the images of the subspaces $\langle \vartheta_1,\ldots,\vartheta_s\rangle$ and $\langle \eta_1,\ldots,\eta_s\rangle$ in $H^2(L,\mathbb{F})$ are in the same orbit under the action of $\mathrm{\mathop{Aut}}(L)$. \end{enumerate} \end{thm}
A subspace $U$ of $H^2(L,\mathbb{F})$ is said to be {\em allowable} if $\bigcap_{\vartheta\in U}\vartheta^\perp\cap C(L)=0$. By Theorem~\ref{coctheorem}, there is a one-to-one correspondence between the set of isomorphism types of step-$s$ descendants of $L$ and the $\mathrm{\mathop{Aut}}(L)$-orbits on the $s$-dimensional allowable subspaces of $H^2(L,\mathbb{F})$. Hence the classification of 6-dimensional nilpotent Lie algebras requires that we determine these orbits for all nilpotent Lie algebras of dimension at most~5. The determination of these orbits is achieved in Section~\ref{orbsect}.
\section{The 6-dimensional nilpotent Lie algebras}\label{ressect}
Let $\mathbb{F}$ be an arbitrary field. In order to classify the 6-dimensional nilpotent Lie algebras over $\mathbb{F}$, we determine the isomorphism classes of the 6-dimensional descendants of the nilpotent Lie algebras with dimension at most 5. In this section we summarize the result by listing these isomorphism classes for each of the Lie algebras with dimension at most 5, while in Section~\ref{orbsect} we provide detailed proofs. The notation we use to describe nilpotent Lie algebras of dimension at most~5 is the same as in~\cite{artw}. \par Unlike the 5-dimensional algebras, nilpotent Lie algebras of dimension~6 cannot be described uniformly over all fields. In some cases, the isomorphism types of descendants will depend on a parameter. In order to describe these cases, we need some notation. First, for a field $\mathbb{F}$, let $\mathbb{F}^*$ denote the multiplicative group of non-zero elements of $\mathbb{F}$. If $\mathbb{F}$ is a field, then let $\relstar$ denote the equivalence relation on $\mathbb{F}$ defined as $\alpha\relstar \beta$ if and only if $\alpha= \gamma^2\beta$ with some $\gamma\in\mathbb{F}^*$. If $\text{char}\, \mathbb{F}=2$ then define the equivalence relation $\relstarplus$ as $\alpha\relstarplus\beta$ if and only if $\alpha=\gamma^2\beta+\delta^2$ with some $\gamma\in\mathbb{F}^*$ and $\delta\in\mathbb{F}$. Using that $\text{char}\, \mathbb{F}=2$, it is easy to show that $\relstarplus$ is indeed an equivalence relation. For a set $X$ and an equivalence relation $\sim$, let $X/(\sim)$ denote a transversal of the equivalence relation $\sim$; that is, $X/(\sim)$ is a set that contains precisely one element from each of the equivalence classes of $\sim$. Now let $\mathbb{F}$ be a field of characteristic 2. Viewing $\mathbb{F}$ as a vector space over $\mathbb{F}_2$, the map $\psi \colon \mathbb{F}\to\mathbb{F}$ defined by $\psi(x)=x^2+x$ is $\mathbb{F}_2$-linear with kernel $\mathbb{F}_2$. 
Let $\relpsi$ denote the equivalence relation that corresponds to the coset partition of $\mathbb{F}$ with respect to the subspace $\psi(\mathbb{F})$. Note that if $\mathbb{F}$ is a finite field, then $\psi(\mathbb{F})$ is a subspace of codimension~1, and in
particular $|\mathbb{F}/(\relpsi)|=2$. On the other hand, if $\mathbb{F}$ is algebraically closed
then $\psi$ is surjective and $|\mathbb{F}/(\relpsi)|=1$.
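These observations can be checked mechanically for the smallest non-prime example $\mathbb{F}_4$. The following sketch (ours) represents $\mathbb{F}_4=\mathbb{F}_2(\omega)$ with $\omega^2=\omega+1$ as pairs $(a,b)$ meaning $a+b\omega$:

```python
# The map psi(x) = x^2 + x on F_4, with elements encoded as pairs
# (a, b) standing for a + b*w, where w^2 = w + 1; arithmetic is mod 2.
def mul(x, y):
    a, b = x
    c, d = y
    # (a + b w)(c + d w) = ac + bd w^2 + (ad + bc) w,  and  w^2 = w + 1
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

F4 = [(0, 0), (1, 0), (0, 1), (1, 1)]
image = {add(mul(x, x), x) for x in F4}   # psi(F_4)
print(sorted(image))                      # [(0, 0), (1, 0)]: the subfield F_2
print(len(F4) // len(image))              # 2 cosets, so |F_4/(~psi)| = 2
```

As claimed, $\psi(\mathbb{F}_4)$ is the codimension-1 subspace $\mathbb{F}_2$, giving two equivalence classes.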
Following and extending the notation in~\cite{artw}, the nilpotent Lie algebras in this paper are denoted by $L_{d,k}$, $L_{d,k}(\varepsilon)$, $L^{(2)}_{d,k}$, or $L^{(2)}_{d,k}(\varepsilon)$ where $d$ is the dimension of the algebra, $k$ is its index among the nilpotent Lie algebras with dimension $d$, $\varepsilon$ is a possible parameter, and the superscript ``$(2)$'' refers to the fact that the algebra is defined over a field of characteristic~2. The list of nilpotent Lie algebras with dimension at most~5, described in the same notation, can be found in~\cite{artw}. In particular, there are 9 isomorphism types of nilpotent Lie algebras of dimension~5: $L_{5,1},\ldots,L_{5,9}$. Hence there are 9 isomorphism types of nilpotent Lie algebras with dimension~6 that are not descendants of smaller-dimensional Lie algebras, namely $L_{6,1},\ldots,L_{6,9}$, where $L_{6,i}=L_{5,i}\oplus\mathbb{F}$.\par
Next we describe the 6-dimensional descendants of nilpotent Lie algebras with dimension at most~5. The Lie algebras in this section are given with multiplication tables with respect to fixed bases with trivial products of the form $[x_i,x_j]=0$ omitted. With respect to the list of~\cite{artw} we have made a few small changes. The multiplication table of $L_{6,19}(\varepsilon)$, for $\varepsilon\neq 0$, is different from (but isomorphic to) the Lie algebra denoted with the same symbol in~\cite{artw}. The Lie algebras $L_{6,19}(0)$ and $L_{6,21}(0)$ from~\cite{artw} are denoted here by $L_{6,27}$ and $L_{6,28}$, respectively. We have made these changes because the structure of $L_{6,k}(\varepsilon)$, $k=19,21$, is different for $\varepsilon=0$ and $\varepsilon\neq 0$. Furthermore, in characteristic~2 a few new algebras appear that are not contained in~\cite{artw}. These Lie algebras are $L_{6,k}^{(2)}$ or $L_{6,k}^{(2)}(\varepsilon)$ (here $k\in\{1,\ldots, 8\}$).
\subsection*{Step-1 descendants of $5$-dimensional Lie algebras}
\begin{itemize} \item[(5/1)] The abelian Lie algebra $L_{5,1}$ has no step-1 descendants. \item[(5/2)] The Lie algebra $L_{5,2}=\langle x_1,\ldots,x_5\mid [x_1,x_2]=x_3\rangle$ has only one isomorphism class of step-$1$ descendants, namely \[L_{6,10}=\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_6, [x_4,x_5]=x_6\rangle.\] \item[(5/3)] The Lie algebra $L_{5,3}=\langle x_1,\ldots,x_5\mid [x_1,x_2]=x_3,[x_1,x_3]=x_4\rangle$ has two isomorphism classes of step-$1$ descendants, namely \begin{eqnarray*} L_{6,11}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_4, [x_1,x_4]=x_6, [x_2,x_3]=x_6,[x_2,x_5]=x_6\rangle\mbox{ and}\\ L_{6,12}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_4, [x_1,x_4]=x_6, [x_2,x_5]=x_6\rangle. \end{eqnarray*} \item[(5/4)] The Lie algebra $L_{5,4}=\langle x_1,\ldots,x_5\mid [x_1,x_2]=x_5,[x_3,x_4]=x_5\rangle$ has no step-1 descendants. \item[(5/5)] Let $L_{5,5}=\langle x_1,\ldots,x_5\mid [x_1,x_2]=x_3,[x_1,x_3]=x_5,[x_2,x_4]=x_5\rangle$. If $\text{char}\, \mathbb{F}\neq 2$, then $L_{5,5}$ has a unique isomorphism type of step-1 descendants, namely \[L_{6,13}=\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3,[x_1,x_3]=x_5,[x_1,x_5]=x_6,[x_2,x_4]=x_5, [x_3,x_4]=x_6\rangle.\] If $\text{char}\, \mathbb{F}=2$ then $L_{5,5}$ has two isomorphism classes of step-1 descendants, namely $L_{6,13}$ above and \[L_{6,1}^{(2)}=\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3,[x_1,x_3]=x_5,[x_1,x_5]=x_6,[x_2,x_4]=x_5+x_6, [x_3,x_4]=x_6\rangle.\] \item[(5/6)] Set $L_{5,6}=\langle x_1,\ldots,x_5\mid [x_1,x_2]=x_3,[x_1,x_3]=x_4,[x_1,x_4]=x_5,[x_2,x_3]=x_5\rangle$. 
If $\text{char}\, \mathbb{F}\neq 2$, then $L_{5,6}$ has two isomorphism classes of step-1 descendants, namely \begin{eqnarray*} L_{6,14}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_4, [x_1,x_4]=x_5,\\ &&{[x_2,x_3]}=x_5, {[x_2,x_5]}=x_6, [x_3,x_4]=-x_6\rangle;\\ L_{6,15}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_4, {[x_1,x_4]}=x_5,\\ &&{[x_1,x_5]}=x_6, [x_2,x_3]=x_5, [x_2,x_4]=x_6\rangle. \end{eqnarray*} If $\text{char}\, \mathbb{F}=2$ then $L_{6,15}$ and the Lie algebras \begin{eqnarray*} L_{6,2}^{(2)}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3,[x_1,x_3]=x_4,[x_1,x_4]=x_5,\\ &&{[x_1,x_5]}=x_6,{[x_2,x_3]}=x_5+ x_6,[x_2,x_4]=x_6\rangle;\\ L_{6,3}^{(2)}(\varepsilon)&=&\langle x_1,\ldots,x_6 \mid[x_1,x_2]=x_3,[x_1,x_3]=x_4,[x_1,x_4]=x_5,\\ &&{[x_2,x_3]}=x_5+\varepsilon x_6, {[x_2,x_5]}=x_6,[x_3,x_4]=x_6\rangle\mbox{ with }\varepsilon\in\mathbb{F}/(\relstarplus), \end{eqnarray*} form a complete and irredundant list of representatives of the isomorphism classes of step-1 descendants of $L_{5,6}$. \item[(5/7)] Set $L_{5,7}=\langle x_1,\ldots,x_5\mid [x_1,x_2]=x_3,[x_1,x_3]=x_4,[x_1,x_4]=x_5\rangle$. If $\text{char}\, \mathbb{F}\neq 2$, then the Lie algebra $L_{5,7}$ has three isomorphism classes of step-$1$ descendants namely \begin{eqnarray*} L_{6,16}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_4, {[x_1,x_4]}=x_5,{[x_2,x_5]}=x_6, [x_3,x_4]=-x_6\rangle;\\ L_{6,17}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_4, [x_1,x_4]=x_5, [x_1,x_5]=x_6,[x_2,x_3]=x_6\rangle;\\ L_{6,18}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_4, [x_1,x_4]=x_5, [x_1,x_5]=x_6\rangle. 
\end{eqnarray*} If $\text{char}\, \mathbb{F}=2$ then $L_{6,17}$, $L_{6,18}$ and the Lie algebras \begin{eqnarray*} L_{6,4}^{(2)}(\varepsilon)=\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3,[x_1,x_3]=x_4,[x_1,x_4]=x_5,\\ {[x_2,x_3]}=\varepsilon x_6,{[x_2,x_5]}=x_6,[x_3,x_4]=x_6\rangle, \end{eqnarray*} where $\varepsilon$ runs through the elements of $\mathbb{F}/(\relstarplus)$, form a complete and irredundant set of representatives of the isomorphism types of step-1 descendants of $L_{5,7}$. \item[(5/8)]Set $L_{5,8}=\langle x_1,\ldots,x_5\mid [x_1,x_2]=x_4,[x_1,x_3]=x_5\rangle$. If $\text{char}\, \mathbb{F}\neq 2$, then the Lie algebras \begin{eqnarray*} L_{6,20}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_4, [x_1,x_3]=x_5, [x_1,x_5]=x_6, [x_2,x_4]=x_6\rangle;\\ L_{6,19}(\varepsilon)&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_4, [x_1,x_3]=x_5, [x_1,x_5]=x_6, [x_2,x_4]=x_6, [x_3,x_5]=\varepsilon x_6\rangle,\\ && \mbox{ where $\varepsilon\in \mathbb{F}^*/(\relstar)$}, \end{eqnarray*} form a complete and irredundant set of representatives of the isomorphism classes of step-1 descendants of $L_{5,8}$. If $\text{char}\, \mathbb{F}=2$ then such a set of representatives is formed by the Lie algebras $L_{6,19}(\varepsilon)$ with $\varepsilon\in\mathbb{F}^*/(\relstar)$, $L_{6,20}$ and the Lie algebra \[L_{6,5}^{(2)}=\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_4,[x_1,x_3]=x_5,{[x_2,x_5]}=x_6,[x_3,x_4]=x_6\rangle.\] \item[(5/9)] Set $L_{5,9}=\langle x_1,\ldots,x_5\mid [x_1,x_2]=x_3,[x_1,x_3]=x_4, [x_2,x_3]=x_5\rangle$. If $\text{char}\, \mathbb{F}\neq 2$ then the Lie algebras \[L_{6,21}(\varepsilon)=\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_4, [x_1,x_4]=x_6, [x_2,x_3]=x_5, [x_2,x_5]= \varepsilon x_6\rangle,\] where $\varepsilon$ runs through the elements of $\mathbb{F}^*/(\relstar)$ form a complete and irredundant set of representatives of the isomorphism classes of step-1 descendants of $L_{5,9}$. 
If $\text{char}\, \mathbb{F}=2$ then such a set of representatives is formed by the Lie algebras $L_{6,21}(\varepsilon)$ with $\varepsilon\in \mathbb{F}^*/(\relstar)$ and by the Lie algebra \[L_{6,6}^{(2)}=\langle x_1,\ldots,x_6\mid[x_1,x_2]=x_3, [x_1,x_3]=x_4, [x_1,x_5]=x_6, [x_2,x_3]=x_5, [x_2,x_4]=x_6\rangle.\] \end{itemize}
\subsection*{Step-2 descendants of $4$-dimensional Lie algebras} \begin{itemize} \item[(4/1)] Let $L_{4,1}$ be the abelian Lie algebra of dimension $4$. If $\text{char}\, \mathbb{F}\neq 2$, then the following is a complete and irredundant list of the representatives of the isomorphism classes of the step-2 descendants of $L_{4,1}$: \[ L_{6,22}(\varepsilon) =\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_5, [x_1,x_3]=x_6, [x_2,x_4]=\varepsilon x_6, [x_3,x_4]=x_5\rangle\] where $\varepsilon\in \mathbb{F}/(\relstar)$. If $\text{char}\, \mathbb{F}=2$, then such a list is formed by the Lie algebras $L_{6,22}(\nu)$ as above, where, in this case, $\nu \in \mathbb{F}/(\relstarplus)$, and by the Lie algebras \[ L_{6,7}^{(2)}(\eta) =\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_5, [x_1,x_3]=x_6, [x_2,x_4]=\eta x_6, [x_3,x_4]=x_5+x_6\rangle\] where $\eta\in\mathbb{F}/(\relpsi)$. \item[(4/2)] Set $L_{4,2}=\langle x_1,\ldots,x_4\mid [x_1,x_2]=x_3\rangle$. If $\text{char}\, \mathbb{F}\neq 2$, then the following Lie algebras form a complete and irredundant set of representatives of the isomorphism classes of the step-2 descendants of $L_{4,2}$: \begin{eqnarray*} L_{6,27}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_5, [x_2,x_4]= x_6 \rangle;\\ L_{6,23}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_5, [x_1,x_4]=x_6, [x_2,x_4]= x_5\rangle;\\ L_{6,25}&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_5, [x_1,x_4]=x_6\rangle;\\ L_{6,24}(\varepsilon)&=&\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_5, {[x_1,x_4]}=\varepsilon x_6,\\ &&{[x_2,x_3]}=x_6, [x_2,x_4]= x_5\rangle\mbox{ where $\varepsilon\in \mathbb{F}/(\relstar)$}. 
\end{eqnarray*} If $\text{char}\, \mathbb{F}=2$ then such a set of representatives is formed by the Lie algebras $L_{6,27}$, $L_{6,23}$, $L_{6,24}(\nu)$, where $\nu \in \mathbb{F}/(\relstarplus)$, $L_{6,25}$ and by the Lie algebras \begin{eqnarray*} L_{6,8}^{(2)}(\eta)=\langle x_1,\ldots,x_6&\mid& [x_1,x_2]=x_3,[x_1,x_3]=x_5,[x_1,x_4]=\eta x_6,\\ &&{[x_2,x_3]}=x_6,[x_2,x_4]=x_5+x_6\rangle \mbox{ where $\eta\in \mathbb{F}/(\relpsi)$}; \end{eqnarray*} \item[(4/3)] The Lie algebra $L_{4,3}=\langle x_1,\ldots,x_4\mid[x_1,x_2]=x_3,[x_1,x_3]=x_4\rangle$ has only one isomorphism class of step-$2$ descendants namely \[L_{6,28}=\langle x_1,\ldots,x_6\mid [x_1,x_2]=x_3, [x_1,x_3]=x_4, [x_1,x_4]=x_5, [x_2,x_3]=x_6\rangle.\] \end{itemize}
\subsection*{Step-3 descendants of $3$-dimensional Lie algebras}
\begin{itemize} \item[(3/1)] The abelian Lie algebra $L_{3,1}$ has a unique isomorphism type of step-3 descendants, namely
\[L_{6,26}=\langle x_1,\ldots,x_6\ |\ [x_1,x_2]=x_4,\ [x_1,x_3]=x_5,\ [x_2,x_3]=x_6\rangle.\]
\item[(3/2)] The Lie algebra $L_{3,2}=\langle x_1,x_2,x_3\mid [x_1,x_2]=x_3\rangle$ has no step-3 descendants. \end{itemize}
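Multiplication tables such as the ones listed above can be checked mechanically. The following sketch (ours, not part of the paper) verifies the Jacobi identity for the table of $L_{6,26}$ on all basis triples:

```python
# Verify the Jacobi identity for the structure constants of L_{6,26}.
n = 6
# table[(i, j)] = m means [x_i, x_j] = x_m for i < j; omitted pairs vanish.
table = {(1, 2): 4, (1, 3): 5, (2, 3): 6}

def bracket(i, j):
    """[x_i, x_j] as a coefficient vector of length n."""
    v = [0] * n
    if i != j:
        sign, pair = (1, (i, j)) if i < j else (-1, (j, i))
        if pair in table:
            v[table[pair] - 1] = sign
    return v

def bracket_vec(u, w):
    """Bilinear extension of the bracket to coefficient vectors."""
    v = [0] * n
    for i in range(n):
        for j in range(n):
            if u[i] and w[j]:
                b = bracket(i + 1, j + 1)
                for m in range(n):
                    v[m] += u[i] * w[j] * b[m]
    return v

def basis(i):
    v = [0] * n
    v[i - 1] = 1
    return v

# [[x_i,x_j],x_k] + [[x_k,x_i],x_j] + [[x_j,x_k],x_i] = 0 for all triples
jacobi_ok = all(
    all(bracket_vec(bracket(i, j), basis(k))[m]
        + bracket_vec(bracket(k, i), basis(j))[m]
        + bracket_vec(bracket(j, k), basis(i))[m] == 0
        for m in range(n))
    for i in range(1, n + 1) for j in range(1, n + 1) for k in range(1, n + 1)
)
print(jacobi_ok)  # True
```

Here every product lands in the central span of $x_4,x_5,x_6$, so all double brackets vanish and the identity holds trivially; the same script applies to any of the tables above after changing `table`.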
By explicitly listing the isomorphism classes of 6-dimensional nilpotent Lie algebras, the following theorem gives a summary of the results stated in this section.
Recall that in a field $\mathbb{F}$ of characteristic~2, $\relpsi$ denotes the equivalence relation that corresponds to the coset partition of $\mathbb{F}$ with respect to the subspace
$\{x^2+x\ |\ x\in\mathbb{F}\}$.
\begin{thm}\label{classif} \begin{enumerate} \item[(I)] Over a field $\mathbb{F}$ of characteristic different from~$2$, the list of the isomorphism types of $6$-dimensional nilpotent Lie algebras is the following: $L_{5,k}\oplus\mathbb{F}$ with $k\in\{1,\ldots,9\}$; $L_{6,k}$ with $k\in\{10,\ldots,18,20,23,25,\ldots,28\}$; $L_{6,k}(\varepsilon_1)$ with $k\in\{19,21\}$ and $\varepsilon_1\in\mathbb{F}^*/(\relstar)$; $L_{6,k}(\varepsilon_2)$ with $k\in\{22,24\}$ and $\varepsilon_2\in\mathbb{F}/(\relstar)$. \item[(II)] Over a field $\mathbb{F}$ of characteristic~$2$, the isomorphism types of $6$-dimensional nilpotent Lie algebras are $L_{5,k}\oplus\mathbb{F}$ with $k\in\{1,\ldots,9\}$; $L_{6,k}$ with $k\in\{10,\ldots,13,15,17,18,20,23,25,\ldots,28\}$; $L_{6,k}(\varepsilon_1)$ with $k\in\{19,21\}$ and $\varepsilon_1\in\mathbb{F}^*/(\relstar)$; $L_{6,k}(\varepsilon_2)$ with $k\in\{22,24\}$ and $\varepsilon_2\in\mathbb{F}/(\relstarplus)$; $L_{6,k}^{(2)}$ with $k\in\{1,2,5,6\}$; $L_{6,k}^{(2)}(\varepsilon_3)$ with $k\in\{3,4\}$ and $\varepsilon_3\in\mathbb{F}/(\relstarplus)$; $L_{6,k}^{(2)}(\varepsilon_4)$ with $k\in\{7,8\}$ and $\varepsilon_4\in\mathbb{F}/(\relpsi)$. \end{enumerate} \end{thm}
Theorem~\ref{classif} follows from the statements concerning the descendants of this section. These statements are proved in Section~\ref{orbsect}. Noting that $\mathbb{F}/(\relstar)= \mathbb{F}^*/(\relstar)\cup\{0\}$, the main Theorem~\ref{main} is a consequence of Theorem~\ref{classif}. If $\mathbb{F}$ is an algebraically closed field or a perfect field of characteristic~2, then $s=1$. If $\mathbb{F}$ is a finite field of size $q$ with $q$ odd then $s=2$. Further, if $\mathbb{F}$ is a perfect field of characteristic~2 then $t=1$. Finally, if $\mathbb{F}$ is a finite field of characteristic 2 then $u=2$, and if $\mathbb{F}$ is algebraically closed of characteristic 2 then $u=1$. This gives that the number of isomorphism types of 6-dimensional nilpotent Lie algebras is 34 over a finite field of characteristic different from 2; 30 over an algebraically closed field of characteristic different from 2; 36 over a finite field of characteristic~2; and 34 over an algebraically closed field of characteristic~2.
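For the reader's convenience, these four counts follow by substituting the values of $s$, $t$ and $u$ just listed into the formulas of Theorem~\ref{main}:

```latex
\[
\begin{aligned}
&\text{finite }\mathbb{F},\ \text{char}\,\mathbb{F}\neq 2\ (s=2): && 26+4\cdot 2=34;\\
&\text{algebraically closed }\mathbb{F},\ \text{char}\,\mathbb{F}\neq 2\ (s=1): && 26+4\cdot 1=30;\\
&\text{finite }\mathbb{F},\ \text{char}\,\mathbb{F}=2\ (s=t=1,\ u=2): && 26+2\cdot 1+4\cdot 1+2\cdot 2=36;\\
&\text{algebraically closed }\mathbb{F},\ \text{char}\,\mathbb{F}=2\ (s=t=u=1): && 26+2\cdot 1+4\cdot 1+2\cdot 1=34.
\end{aligned}
\]
```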
\section{Vector spaces with forms}\label{formsect}
As cocycles are alternating bilinear forms, and vector spaces with quadratic forms will play an important role in determining isomorphisms within certain descendants of $L_{4,1}$ and $L_{4,2}$, we summarize in this section some basic facts concerning quadratic and bilinear forms. Suppose that $V$ is a vector space over a field $\mathbb{F}$ and $f$ is a function from $V\times V$ to $\mathbb{F}$. The function $f$ is said to be a {\em bilinear form} if $f$ is linear in both of its variables. Further, $f$ is said to be {\em symmetric} if $f(u,v)=f(v,u)$, $f$ is said to be {\em alternating} if $f(v,v)=0$, while $f$ is said to be {\em skew-symmetric} if $f(u,v)=-f(v,u)$ for all $v,\ u\in V$. An alternating form is always skew-symmetric, while the converse of this statement is only valid if $\text{char}\, \mathbb{F}\neq 2$. In characteristic~2, there are skew-symmetric forms that are not alternating. For a map $Q:V\rightarrow\mathbb{F}$ define $f_Q:V\times V\rightarrow\mathbb{F}$ as \[f_Q(u,v)=Q(u+v)-Q(u)-Q(v).\] Then $Q$ is said to be a {\em quadratic form} if $Q(\alpha v)=\alpha^2Q(v)$ for all $\alpha\in\mathbb{F}$ and $v\in V$ and $f_Q$ is a bilinear form. In this case the bilinear form $f_Q$ is called the {\em associated bilinear form} of $Q$. If $f$ is a symmetric or skew-symmetric bilinear form on a vector space $V$ and $U\subseteq V$ then let $U^\perp$ denote the {\em orthogonal complement} of $U$ in $V$:
\[U^\perp=\{v\in V\ |\ f(u,v)=0\mbox{ for all }u\in U\}.\] The {\em radical} of the form $f$ is defined as $V^\perp$ and $f$ is said to be {\em non-singular} if $V^\perp=0$; otherwise $f$ is said to be {\em singular}. If $U$ is a subspace of $V$ then a quadratic form $Q$ or a bilinear form $f$ can be restricted to $U$ and the restriction is a form with the same symmetry properties as the original. Such a subspace $U$ is called {\em singular} or {\em non-singular} if the restriction of the form to $U$ is singular or non-singular, respectively.\par If $V$ is a vector space with basis $\{b_1,\ldots,b_n\}$ then, for $i,\ j\in\{1,\ldots,n\}$ with $i\neq j$, let $\Delta_{i,j}$ denote the alternating bilinear form defined as $\Delta_{i,j}(b_i,b_j)=-\Delta_{i,j}(b_j,b_i)=1$ and $\Delta_{i,j}(b_k,b_l)=0$ otherwise. Then the set of forms $\Delta_{i,j}$ with $i<j$ is a basis for the linear space of alternating bilinear forms on $V$. Suppose that $V$ is a vector space with a bilinear form $f$. If $g\in\mathrm{\mathop{GL}}(V)$ then the form $gf$ defined by $(gf)(u,v)=f(gu,gv)$ is also a bilinear form on $V$. Further, $gf$ is alternating, skew-symmetric, or symmetric if and only if $f$ is alternating, skew-symmetric, or symmetric, respectively. This defines a $\mathrm{\mathop{GL}}(V)$-action on the set of bilinear forms on $V$. The following lemma is well-known, see for example~\cite[Theorem~8.10.1]{lang}.
\begin{lem}\label{altlemma} Let $V$ be a vector space with a fixed basis $\{b_1,\ldots,b_n\}$ and set $n_1=n$ if $n$ is even, $n_1=n-1$ if $n$ is odd. \begin{enumerate} \item[(i)] If $\Delta$ is a non-singular alternating bilinear form on $V$ then $n$ is even. \item[(ii)] The group $\mathrm{\mathop{GL}}(V)$ has $\lfloor n/2\rfloor$ orbits on the set of alternating bilinear forms on $V$ with orbit representatives \[\Delta_{1,2},\ \Delta_{1,2}+\Delta_{3,4},\ldots,\Delta_{1,2}+\Delta_{3,4}+\cdots +\Delta_{n_1-1,n_1}.\] \end{enumerate} \end{lem}
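In the smallest non-trivial case this lemma can be verified by exhaustive computation. The sketch below (an illustrative brute-force check, not part of the classification itself) enumerates $\mathrm{GL}(4,2)$ and confirms that it has exactly $\lfloor 4/2\rfloor=2$ orbits on the nonzero alternating bilinear forms on a 4-dimensional space over the field with two elements.

```python
# Brute-force check of the lemma for n = 4 over GF(2): GL(V) should have
# exactly floor(4/2) = 2 orbits on the nonzero alternating forms, with
# representatives Delta_{1,2} and Delta_{1,2} + Delta_{3,4}.
from itertools import product

N = 4

def mat_mult(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(N)) % 2
                       for j in range(N)) for i in range(N))

def transpose(A):
    return tuple(tuple(A[j][i] for j in range(N)) for i in range(N))

def invertible(A):
    # Gaussian elimination over GF(2).
    M = [list(r) for r in A]
    for col in range(N):
        piv = next((r for r in range(col, N) if M[r][col]), None)
        if piv is None:
            return False
        M[col], M[piv] = M[piv], M[col]
        for r in range(N):
            if r != col and M[r][col]:
                M[r] = [(x + y) % 2 for x, y in zip(M[r], M[col])]
    return True

GL = [A for A in (tuple(tuple(bits[4 * i:4 * i + 4]) for i in range(N))
                  for bits in product((0, 1), repeat=N * N))
      if invertible(A)]
assert len(GL) == 20160          # |GL(4,2)| = 15*14*12*8

def delta(i, j):
    # The form Delta_{i,j} as a Gram matrix (in char 2, -1 = 1).
    M = [[0] * N for _ in range(N)]
    M[i][j] = M[j][i] = 1
    return tuple(tuple(r) for r in M)

def orbit(M):
    # (g f)(u, v) = f(gu, gv) corresponds to the Gram matrix g^T M g.
    return {mat_mult(transpose(g), mat_mult(M, g)) for g in GL}

rep1 = delta(0, 1)                                      # Delta_{1,2}
rep2 = tuple(tuple((a + b) % 2 for a, b in zip(r, s))   # Delta_{1,2}+Delta_{3,4}
             for r, s in zip(delta(0, 1), delta(2, 3)))
o1, o2 = orbit(rep1), orbit(rep2)
assert not o1 & o2 and len(o1) + len(o2) == 2 ** 6 - 1  # all 63 nonzero forms
print("2 orbits of sizes", len(o1), "and", len(o2))
```

The two orbits are distinguished by the rank of the form (2 and 4, respectively), in agreement with the orbit representatives in the lemma.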
Let $f$ be a form of one of the types above on a vector space $V$ over a field $\mathbb{F}$. If $\{b_1,\ldots,b_n\}$ is a fixed basis of $V$ then the Gram matrix $\mathbb G$ is defined with respect to this basis as the matrix whose $(i,j)$ entry is $f(b_i,b_j)$. The Gram determinant will be crucial for separating isomorphism types within parametric families of Lie algebras in characteristic different from~2. However, in characteristic~2, another invariant, namely the Arf invariant, will be needed. Let $Q$ be a quadratic form on a vector space $V$ over a field of characteristic~2 with associated bilinear form $f$. Note, in this case, that $f$ is alternating. Assume that $f$ is non-singular, which implies by Lemma~\ref{altlemma}(i), that $\dim V$ is even. Let $e_1,\ldots,e_k,f_1,\ldots,f_k$ be a symplectic basis of $V$; that is $f(e_i,e_j)=f(f_i,f_j)=0$ and $f(e_i,f_j)=\delta_{i,j}$ where $\delta_{i,j}$ is the Kronecker-delta. Define the {\em Arf invariant} $q_Q$ of $Q$ with respect to the given basis as \[q_Q=\sum_{i=1}^kQ(e_i)Q(f_i).\] Of course, the Arf invariant of $Q$ depends on the chosen symplectic basis of $V$. However, the following is valid.
\begin{lem}\label{gramlemma} Let $V$ be a vector space over a field $\mathbb{F}$ and let $Q$ be a quadratic form with non-singular associated bilinear form. \begin{itemize}
\item[(i)] Let $\mathbb G_1$ and $\mathbb G_2$ denote the Gram matrices of $V$ with respect to two bases of $V$. Then $\det \mathbb G_1/\det\mathbb G_2$ is an element of the multiplicative subgroup $\{x^2\ |\ x\in\mathbb{F}^*\}$ of $\mathbb{F}^*$. \item[(ii)]
Assume that $\text{char}\, \mathbb{F}=2$ and suppose that $q_1$ and $q_2$ are the values of the Arf invariant of $Q$ with respect to two symplectic bases. Then $q_1+q_2$ is an element of the additive subgroup $\{x^2+x\ |\ x\in\mathbb{F}\}$ of $\mathbb{F}$. \end{itemize} \end{lem} \begin{proof} Statement~(i) is well-known, see for instance equation~(8.1.8) in~\cite{cohn}. Statement~(ii) is proved in~\cite[Theorem 8.11.12]{cohn}. \end{proof}
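Both parts of the lemma are easy to probe numerically in small cases. The sketch below (illustrative field choices and forms, not taken from the text) checks part (i) over GF(7) for a diagonal symmetric form, and part (ii) over GF(2), where $\{x^2+x\mid x\in\mathbb{F}\}=\{0\}$, so the Arf invariant must agree for every symplectic basis.

```python
# Illustrative numerical checks of both parts of the lemma.
P7 = 7
SQUARES = {(x * x) % P7 for x in range(1, P7)}

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0])) % P7

def form(u, v):
    # symmetric bilinear form with Gram matrix diag(1, 2, 3) over GF(7)
    return sum(c * a * b for c, a, b in zip((1, 2, 3), u, v)) % P7

def gram(basis):
    return [[form(u, v) for v in basis] for u in basis]

basis1 = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
basis2 = [(1, 1, 0), (0, 1, 1), (1, 0, 3)]  # another basis of GF(7)^3
d1, d2 = det3(gram(basis1)), det3(gram(basis2))
# (i): the two Gram determinants differ by a nonzero square.
assert d1 and d2 and (d1 * pow(d2, P7 - 2, P7)) % P7 in SQUARES

def Q(v):
    # quadratic form over GF(2) whose associated form is non-singular
    x, y = v
    return (x * x + x * y + y * y) % 2

def fQ(u, v):
    return (Q(((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)) + Q(u) + Q(v)) % 2

nonzero = [(0, 1), (1, 0), (1, 1)]
# (ii): over GF(2) every symplectic basis (e, f) has f_Q(e, f) = 1, and the
# Arf invariant Q(e)Q(f) takes a single value, as {x^2 + x} = {0} here.
arfs = {Q(e) * Q(f) for e in nonzero for f in nonzero if fQ(e, f) == 1}
assert len(arfs) == 1
print("Gram ratio is a square; Arf invariant =", min(arfs))
```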
Suppose that $Q$ is a quadratic form on a vector space $V$ with associated bilinear form $f_Q$. A vector $v\in V$ is said to be {\em singular} if $Q(v)=0$, and it is {\em isotropic} if $f_Q(v,v)=0$. A subspace $U$ of $V$ is {\em totally singular} if $Q(u)=0$ for all $u\in U$, while $U$ is said to be {\em totally isotropic} if $f_Q(u,u)=0$ for all $u\in U$. If $\text{char}\, \mathbb{F}\neq 2$, then the notions of singular and isotropic, and those of totally singular and totally isotropic can be freely interchanged. This, however, is no longer true if $\text{char}\, \mathbb{F}=2$. Let $V$ be a vector space with a quadratic form $Q$ and let $G$ be a group acting on $V$. Then we say that $G$ {\em preserves $Q$ modulo scalars} if for each $g\in G$ there is some $\alpha_g\in\mathbb{F}$ such that \[Q(gv)=\alpha_gQ(v)\quad\mbox{for all }v\in V.\]
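In characteristic 2 the gap between singular and isotropic is visible already in dimension 2: for the quadratic form $Q(x,y)=x^2+xy+y^2$ over GF(2), every vector is isotropic but no nonzero vector is singular. A minimal computational sketch (the form and all names are illustrative):

```python
# Illustrative check over GF(2): Q is a quadratic form, f_Q is alternating
# (so every vector is isotropic), yet only the zero vector is singular.
from itertools import product

P = 2  # field characteristic

def Q(v):
    # illustrative quadratic form Q(x, y) = x^2 + x*y + y^2 over GF(2)
    x, y = v
    return (x * x + x * y + y * y) % P

def add(u, v):
    return tuple((a + b) % P for a, b in zip(u, v))

def scale(c, v):
    return tuple((c * a) % P for a in v)

def fQ(u, v):
    return (Q(add(u, v)) - Q(u) - Q(v)) % P

vectors = list(product(range(P), repeat=2))

# Q scales by squares and f_Q is additive in each slot, so Q is quadratic.
assert all(Q(scale(c, v)) == (c * c * Q(v)) % P
           for c in range(P) for v in vectors)
assert all(fQ(add(u, w), v) == (fQ(u, v) + fQ(w, v)) % P
           for u in vectors for w in vectors for v in vectors)

# Every vector is isotropic (f_Q is alternating in characteristic 2) ...
assert all(fQ(v, v) == 0 for v in vectors)
# ... yet no nonzero vector is singular: Q never vanishes off the origin.
assert all(Q(v) == 1 for v in vectors if v != (0, 0))
print("all vectors isotropic; only 0 is singular")
```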
\begin{lem}\label{arflemma} Suppose that $V$ is a vector space over a field $\mathbb{F}$ with a quadratic form $Q$ whose associated bilinear form is $f$, and let $G$ be a subgroup of $\mathrm{\mathop{GL}}(V)$ preserving $Q$ modulo scalars. Suppose that $S_1$ and $S_2$ are $2$-dimensional subspaces of $V$ in the same $G$-orbit, and let $\{b_1,b_2\}$ and $\{c_1,c_2\}$ be bases of $S_1$ and $S_2$, respectively. \begin{enumerate} \item[(i)]
Suppose that $S_1$ and $S_2$ are non-singular subspaces, and let $\mathbb G_1$ and $\mathbb G_2$ be the Gram matrices of $S_1$ and $S_2$ with respect to the given bases. Then $(\det \mathbb G_1)/(\det \mathbb G_2)\in\{\alpha^2\ |\ \alpha\in\mathbb{F}^*\}$. \item[(ii)] Suppose that $\text{char}\, \mathbb{F}=2$, that $f$ is non-singular, that the given bases of $S_i$ are symplectic,
and that $q_1$ and $q_2$ are the Arf invariants of $S_1$ and $S_2$ with respect to the given bases. Then $q_1+q_2\in\{\alpha+\alpha^2\ |\ \alpha\in\mathbb{F}\}$. \item[(iii)] Suppose that $\text{char}\, \mathbb{F}=2$ and that $f$ is identically zero on $S_i$, for $i=1,2$, and set $q_1=Q(b_1)Q(b_2)$ and $q_2=Q(c_1)Q(c_2)$. Then there exist $\alpha\in\mathbb{F}^*$ and $\beta\in\mathbb{F}$ such that $q_2=\alpha^2q_1+\beta^2$. \end{enumerate} \end{lem} \begin{proof} Suppose that $g\in G$ such that $gS_1=S_2$. Since $G$ preserves the form $Q$ modulo scalars, there is some $\alpha\in\mathbb{F}$ such that $Q(gv)=\alpha Q(v)$, and $f(gu,gv)=\alpha f(u,v)$ for all $u,\ v\in V$. Let us prove statement~(i) first. The elements $gb_1$ and $gb_2$ form a basis for $S_2$ and the Gram matrix of $S_2$ with respect to this basis is $\alpha \mathbb G_1$ with determinant $\alpha^2\det \mathbb G_1$. By a remark above, there is some $\beta\in\mathbb{F}^*$ such that $\alpha^2\det \mathbb G_1=\beta^2\det \mathbb G_2$, and hence $(\det \mathbb G_1)/(\det \mathbb G_2)=(\beta/\alpha)^2$ as claimed.\par Let us now prove the second assertion. Since $b_1,\ b_2$ is a symplectic basis of $S_1$, we obtain that $f(gb_1,gb_1)=f(gb_2,gb_2)=0$ and $f(gb_1,gb_2)=f(gb_2,gb_1)=\alpha$. Hence the basis $(1/\alpha)gb_1,\ gb_2$ is a symplectic basis of $S_2$ and the Arf invariant with respect to this basis is \[Q\left(\frac 1\alpha gb_1\right)Q(gb_2)=\frac{1}{\alpha^2}\alpha^2Q(b_1)Q(b_2)=q_1.\]
Thus Lemma~\ref{gramlemma}(ii) gives that $q_1+q_2\in \{\alpha+\alpha^2\ |\ \alpha\in\mathbb{F}\}$.\par (iii) First we note that if $f$ is identically zero, then $Q(\gamma b_1+\delta b_2)=\gamma^2Q(b_1)+\delta^2Q(b_2)$. Since $gS_1=S_2$, there are $\alpha_1,\ \alpha_2,\ \beta_1,\ \beta_2\in\mathbb{F}$ such that $c_1=g(\alpha_1b_1+\alpha_2 b_2)$ and $c_2=g(\beta_1b_1+\beta_2b_2)$. As $c_1$ and $c_2$ are linearly independent, we obtain that $\alpha_1\beta_2+\alpha_2\beta_1\neq 0$. Then \begin{eqnarray*} q_2&=&Q(c_1)Q(c_2)\\ &=&Q(g(\alpha_1b_1+\alpha_2 b_2))Q(g(\beta_1b_1+\beta_2b_2))\\ &=&\alpha^2(\alpha_1^2Q(b_1)+\alpha_2^2Q(b_2))(\beta_1^2Q(b_1)+\beta_2^2Q(b_2))\\ &=&\alpha^2(\alpha_1^2\beta_1^2Q(b_1)^2+(\alpha_1^2\beta_2^2+\alpha_2^2\beta_1^2)Q(b_1)Q(b_2)+\alpha_2^2\beta_2^2Q(b_2)^2)\\ &=&(\alpha\alpha_1\beta_1Q(b_1)+\alpha\alpha_2\beta_2Q(b_2))^2+(\alpha\alpha_1\beta_2+\alpha\alpha_2\beta_1)^2q_1. \end{eqnarray*} Since $\alpha\neq 0$ and $\alpha_1\beta_2+\alpha_2\beta_1\neq 0$, the assertion follows. \end{proof}
We close this section with an interesting byproduct of our work. In the course of determining the isomorphism classes of the descendants of the 4-dimensional abelian Lie algebra, we determined, for an arbitrary 4-dimensional vector space $V$, the $\mathrm{\mathop{GL}}(V)$-orbits on the 2-dimensional subspaces of $V\wedge V$. As this result may have applications elsewhere, we state it separately.
\begin{thm}\label{kleinth} Let $V$ be a vector space of dimension~$4$ over a field $\mathbb{F}$ with basis $\{v_1,v_2,v_3,v_4\}$. If $\text{char}\, \mathbb{F}\neq 2$ then set \[\mathcal P=\{\langle v_1\wedge v_2,v_1\wedge v_3\rangle\}\cup\{\langle v_1\wedge v_2+v_3\wedge v_4,v_1\wedge v_3+
\varepsilon v_2\wedge v_4\rangle\, |\,\varepsilon\in\mathbb{F}/(\relstar)\}.\] If $\text{char}\, \mathbb{F}=2$, then let
\begin{eqnarray*} \mathcal P&=&\{\langle v_1\wedge v_2,v_1\wedge v_3\rangle,\langle v_1\wedge v_2+v_3\wedge v_4,
\rangle,\\ &&\quad\langle v_1\wedge v_2+v_3\wedge v_4,v_1\wedge v_3+\omega v_2\wedge v_4+v_3\wedge v_4\rangle\mid \omega\in\mathbb{F}/(\relpsi)\}\\ &\cup&\{\langle v_1\wedge v_2+v_3\wedge v_4,v_1\wedge v_3+
\varepsilon v_2\wedge v_4\rangle\, |\,\varepsilon\in\mathbb{F}/(\relstarplus)\}. \end{eqnarray*} Then $\mathcal P$ is a complete and irredundant set of representatives of the $\mathrm{\mathop{GL}}(V)$-orbits on the two-dimensional subspaces of $V\wedge V$. \end{thm}
The proof of Theorem~\ref{kleinth} will be given in Section~\ref{orbsect}.
\section{The calculation of 6-dimensional descendants}\label{orbsect}
In this section we prove the results stated in Section~\ref{ressect}. In order to make the calculations more compact, we introduce uniform notation. In each of the sections, $L$ will denote a fixed nilpotent Lie algebra with dimension $d$, where $d<6$, for which the descendants will be computed. The algebra $L$ will be given by a multiplication table as in Section~\ref{ressect}. Recall that the forms $\Delta_{i,j}$ with $i<j$ form a basis for the space of alternating bilinear forms on $L$, and the cohomology spaces $Z^2(L,\mathbb{F})$, $B^2(L,\mathbb{F})$, and $H^2(L,\mathbb{F})$ will be determined in terms of this basis. These spaces will be described for each of the Lie algebras without proof, as it is an easy exercise to compute them in these cases. If $\Delta$ is an element of $Z^2(L,\mathbb{F})$, then $\overline\Delta$ denotes its image in $H^2(L,\mathbb{F})$.\par The automorphism group of $L$ will be described as a group of $(d\times d)$-matrices with respect to the given basis of $L$. The automorphism groups will also be presented without proof as it is in general easy to verify that the given matrices do indeed form the full automorphism group. We use column notation to describe the automorphisms; that is, the $i$-th column of the given matrix will contain the image of the $i$-th basis vector of $L$. By Theorem~\ref{coctheorem}, we will need to compute the $\mathrm{\mathop{Aut}}(L)$-orbits on the $(6-d)$-dimensional allowable subspaces of $H^2(L,\mathbb{F})$. The set of these subspaces will be denoted by $\mathcal S$. The quotient $H^2(L,\mathbb{F})$ will be given with a fixed basis $\Gamma_1,\ldots,\Gamma_k$. The element $\alpha_1\Gamma_1+\cdots+\alpha_k\Gamma_k$ of $H^2(L,\mathbb{F})$ will be written simply as $(\alpha_1,\ldots,\alpha_k)$. The action of $\mathrm{\mathop{Aut}}(L)$ on $H^2(L,\mathbb{F})$ will be computed explicitly.\par Now we can start determining the 6-dimensional descendants of the nilpotent Lie algebras with dimension at most $5$.
\subsection*{$\mathbf L_{5,1}$, $\mathbf L_{5,4}$, $\mathbf L_{4,3}$, $\mathbf L_{3,1}$, $\mathbf L_{3,2}$}
First we compute the descendants in the easy cases. As there are no non-singular alternating bilinear forms on an odd-dimensional space (Lemma~\ref{altlemma}(i)), the Lie algebra $L_{5,1}$ has no step-1 descendants. If $L=L_{5,4}$ then $Z^2(L,\mathbb{F})=\langle \Delta_{1,2},\Delta_{1,3},\Delta_{1,4},\Delta_{2,3},\Delta_{2,4},\Delta_{3,4}\rangle$, $B^2(L,\mathbb{F})=\langle\Delta_{1,2}+\Delta_{3,4}\rangle$, and $H^2(L,\mathbb{F})=\langle \overline{\Delta_{1,3}},\overline{\Delta_{1,4}}, \overline{\Delta_{2,3}},\overline{\Delta_{2,4}},\overline{\Delta_{3,4}}\rangle$. Note that each element of $H^2(L,\mathbb{F})$ has $x_5$ in its radical. Hence $\mathcal S=\emptyset$, and so there are no step-$1$ descendants of $L$.\par If $L=L_{4,3}$ then $H^2(L,\mathbb{F})$ is 2-dimensional, spanned by $\overline{\Delta_{1,4}}$ and $\overline{\Delta_{2,3}}$. Since $H^2(L,\mathbb{F})$ is allowable, $\mathcal S=\{H^2(L,\mathbb{F})\}$, which gives that $L$ has a single isomorphism type of step-2 descendants, as claimed in Section~\ref{ressect}. Similarly, if $L=L_{3,1}$, we obtain that $H^2(L,\mathbb{F})$ is 3-dimensional, spanned by $\overline{\Delta_{1,2}}$, $\overline{\Delta_{1,3}}$, and $\overline{\Delta_{2,3}}$. Since $H^2(L,\mathbb{F})$ is allowable, we have that $\mathcal S=\{H^2(L,\mathbb{F})\}$, which shows that $L$ has only one isomorphism class of step-3 descendants. Finally, if $L=L_{3,2}$, then $H^2(L,\mathbb{F})$ is 2-dimensional, spanned by $\overline{\Delta_{1,3}}$ and $\overline{\Delta_{2,3}}$, which shows, in this case, that $\mathcal S=\emptyset$.
\subsection*{$\mathbf L_{5,2}$}
Set $L=L_{5,2}$. It is routine to check that $\mathrm{\mathop{Aut}}(L)$ is the group of invertible matrices of the form \begin{equation}\label{A52} A=\begin{pmatrix} a_{11} & a_{12} & 0 & 0 & 0 \\ a_{21} & a_{22} & 0 & 0 & 0\\ a_{31} & a_{32} & u & a_{34} & a_{35} \\ a_{41} & a_{42} & 0 & a_{44} & a_{45}\\ a_{51} & a_{52} & 0 & a_{54} & a_{55} \end{pmatrix} \end{equation} with $u = a_{11}a_{22}-a_{12}a_{21}$. As $Z^2(L,\mathbb{F})=\langle \Delta_{1,2},\Delta_{1,3},\Delta_{1,4},\Delta_{1,5},\Delta_{2,3},\Delta_{2,4},\Delta_{2,5},\Delta_{4,5}\rangle$ and $B^2(L,\mathbb{F})=\langle \Delta_{1,2}\rangle$, we obtain that $H^2(L,\mathbb{F})=\langle \overline{\Delta_{1,3}},\overline{\Delta_{1,4}},\overline{\Delta_{1,5}},\overline{\Delta_{2,3}},\overline{\Delta_{2,4}},\overline{\Delta_{2,5}},\overline{\Delta_{4,5}}\rangle$ and
\[\mathcal S=\left\{\langle (a,b,c,d,e,f,g)\rangle\ |\ g\neq 0\mbox{ and }(a,d)\neq (0,0)\right\}.\] As the matrix $A$ is invertible, $u\neq 0$. For $\vartheta=(a,b,c,d,e,f,g)\in H^2(L,\mathbb{F})$, we compute that $A\vartheta =(\bar a,\bar b,\bar c,\bar d,\bar e,\bar f,\bar g)$ where \begin{eqnarray*} \bar a & =& u(a_{11}a+a_{21}d);\\ \bar b & =& a_{11}a_{34}a+a_{11}a_{44}b+a_{11}a_{54}c+a_{21}a_{34}d+a_{21}a_{44}e+a_{21}a_{54}f +(a_{41}a_{54}-a_{44}a_{51})g; \\ \bar c & =& a_{11}a_{35}a+a_{11}a_{45}b+a_{11}a_{55}c+a_{21}a_{35}d+a_{21}a_{45}e+a_{21}a_{55}f +(a_{41}a_{55}-a_{45}a_{51})g; \\ \bar d & =& u(a_{12}a+a_{22}d);\\ \bar e & =& a_{12}a_{34}a+a_{12}a_{44}b+a_{12}a_{54}c+a_{22}a_{34}d+a_{22}a_{44}e+a_{22}a_{54}f +(a_{42}a_{54}-a_{44}a_{52})g; \\ \bar f & =& a_{12}a_{35}a+a_{12}a_{45}b+a_{12}a_{55}c+a_{22}a_{35}d+a_{22}a_{45}e+a_{22}a_{55}f +(a_{42}a_{55}-a_{45}a_{52})g; \\ \bar g & =& (a_{44}a_{55}-a_{45}a_{54})g. \end{eqnarray*} Choose $S=\langle (a,b,c,d,e,f,g)\rangle\in\mathcal S$ and let $B$ be the first of the following matrices if $a\neq 0$, or the second if $a=0$: \[\begin{pmatrix} 1&-dg&0&0&0\\ 0&ag&0&0&0\\ 0&0&ag&-b&-c\\ 0&-(af-cd)&0&a&0\\ 0&ae-bd&0&0&a \end{pmatrix}, \quad \begin{pmatrix} 0&-dg&0&0&0\\ 1&0&0&0&0\\ 0&0&dg&-e&-f\\ 0&cd&0&d&0\\ 0&-bd&0&0&d \end{pmatrix}.\] Then $B S=\langle (1,0,0,0,0,0,1)\rangle$ and hence the group of matrices of the form~\eqref{A52} has only one orbit on $\mathcal S$. Thus $L$ has only one step-$1$ descendant, namely $L_{6,10}$.
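The computation above can be replayed numerically. The sketch below codes the displayed action formulas over GF(11) and checks, for a sample admissible $(a,\ldots,g)$ with $a\neq 0$ and $g\neq 0$, that the first matrix $B$ indeed moves $S$ onto $\langle(1,0,0,0,0,0,1)\rangle$; the prime and the sample values are illustrative choices.

```python
# Numerical check of the L_{5,2} normalization; the action formulas for
# (abar, ..., gbar) are transcribed from the text, with A[i][j] = a_{i+1,j+1}.
p = 11

def act(A, theta):
    a_, b_, c_, d_, e_, f_, g_ = theta
    u = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % p
    abar = u * (A[0][0] * a_ + A[1][0] * d_)
    bbar = (A[0][0] * A[2][3] * a_ + A[0][0] * A[3][3] * b_ + A[0][0] * A[4][3] * c_
            + A[1][0] * A[2][3] * d_ + A[1][0] * A[3][3] * e_ + A[1][0] * A[4][3] * f_
            + (A[3][0] * A[4][3] - A[3][3] * A[4][0]) * g_)
    cbar = (A[0][0] * A[2][4] * a_ + A[0][0] * A[3][4] * b_ + A[0][0] * A[4][4] * c_
            + A[1][0] * A[2][4] * d_ + A[1][0] * A[3][4] * e_ + A[1][0] * A[4][4] * f_
            + (A[3][0] * A[4][4] - A[3][4] * A[4][0]) * g_)
    dbar = u * (A[0][1] * a_ + A[1][1] * d_)
    ebar = (A[0][1] * A[2][3] * a_ + A[0][1] * A[3][3] * b_ + A[0][1] * A[4][3] * c_
            + A[1][1] * A[2][3] * d_ + A[1][1] * A[3][3] * e_ + A[1][1] * A[4][3] * f_
            + (A[3][1] * A[4][3] - A[3][3] * A[4][1]) * g_)
    fbar = (A[0][1] * A[2][4] * a_ + A[0][1] * A[3][4] * b_ + A[0][1] * A[4][4] * c_
            + A[1][1] * A[2][4] * d_ + A[1][1] * A[3][4] * e_ + A[1][1] * A[4][4] * f_
            + (A[3][1] * A[4][4] - A[3][4] * A[4][1]) * g_)
    gbar = (A[3][3] * A[4][4] - A[3][4] * A[4][3]) * g_
    return tuple(x % p for x in (abar, bbar, cbar, dbar, ebar, fbar, gbar))

a, b, c, d, e, f, g = 3, 5, 2, 7, 4, 6, 9   # sample with a != 0, g != 0
B = [[1, -d * g, 0, 0, 0],
     [0, a * g, 0, 0, 0],
     [0, 0, a * g, -b, -c],
     [0, -(a * f - c * d), 0, a, 0],
     [0, a * e - b * d, 0, 0, a]]
image = act(B, (a, b, c, d, e, f, g))
# The image should span the same line as (1,0,0,0,0,0,1).
lam = image[0]
assert lam % p != 0
assert image == tuple((lam * t) % p for t in (1, 0, 0, 0, 0, 0, 1))
print("B.S = <(1,0,0,0,0,0,1)>")
```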
\subsection*{$\mathbf L_{5,3}$}
Set $L=L_{5,3}$. The group $\mathrm{\mathop{Aut}}(L)$ consists of the invertible matrices of the form \begin{equation}\label{A53} A=\begin{pmatrix} a_{11} & 0 & 0 & 0 & 0 \\ a_{21} & a_{22} & 0 & 0 & 0\\ a_{31} & a_{32} & a_{11}a_{22} & 0 & 0 \\ a_{41} & a_{42} & a_{11}a_{32} & a_{11}^2a_{22} & a_{45}\\ a_{51} & a_{52} & 0 & 0 & a_{55}\end{pmatrix}. \end{equation} We have that $Z^2(L,\mathbb{F})=\langle\Delta_{1,2},\Delta_{1,3},\Delta_{1,4},\Delta_{1,5},\Delta_{2,3},\Delta_{2,5}\rangle$, $B^2(L,\mathbb{F})=\langle \Delta_{1,2},\Delta_{1,3}\rangle$, and so $H^2(L,\mathbb{F})=\langle \overline{\Delta_{1,4}},\overline{\Delta_{1,5}},\overline{\Delta_{2,3}},\overline{\Delta_{2,5}}\rangle$. Further, \[\mathcal S=\{\langle (a,b,c,d)\rangle\mid a,d\neq 0\}.\] If $\vartheta=(a,b,c,d)\in H^2(L,\mathbb{F})$, then $A\vartheta =(\bar a,\bar b,\bar c,\bar d)$ where \[\bar a = a_{11}^3a_{22}a,\quad \bar b = a_{11}a_{45}a+a_{11}a_{55}b+a_{21}a_{55}d,\quad \bar c = a_{11}a_{22}^2c,\quad \bar d = a_{22}a_{55}d.\] Choose $S=\langle(a,b,c,d)\rangle\in\mathcal S$. Set $c_1=1$ if $c=0$ and set $c_1=c/(ad^2)$ otherwise. Let $B$ denote the matrix \[\begin{pmatrix} c_1&0&0&0&0\\ 0&c_1&0&0&0\\ 0 &0&c_1^2&0&0\\ 0&0&0&c_1^3&0\\ 0&0&0&0&c_1^3 \end{pmatrix}\begin{pmatrix} d&0&0&0&0\\ 0&1&0&0&0\\ 0&0&d&0&0\\ 0&0&0&d^2&-bd^2\\ 0&0&0&0&ad^2 \end{pmatrix}.\] Easy computation shows that $BS=\langle (1,0,0,1)\rangle$ if $c=0$ while $BS=\langle (1,0,1,1)\rangle$ otherwise. Hence the group of matrices of the form~\eqref{A53} has two orbits on $\mathcal S$, namely $\langle(1,0,0,1)\rangle$ and $\langle(1,0,1,1)\rangle$. The corresponding Lie algebras are $L_{6,12}$ and $L_{6,11}$.\par The derived subalgebra $\langle x_3, x_4, x_6\rangle$ of both $L_{6,11}$ and $L_{6,12}$ is 3-dimensional. 
However, in $L_{6,11}$ the centralizer $\langle x_3,x_4,x_5,x_6\rangle$ of the derived subalgebra is $4$-dimensional, while in $L_{6,12}$ the centralizer $\langle x_2,x_3,x_4,x_5,x_6\rangle$ of $L_{6,12}'$ is 5-dimensional. Hence $L_{6,11}$ and $L_{6,12}$ are not isomorphic.
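The normalization used for $L_{5,3}$ can likewise be verified mechanically. The sketch below implements the displayed action formulas over GF(7) and checks that the product matrix $B$ sends $\langle(a,b,c,d)\rangle$ to $\langle(1,0,0,1)\rangle$ when $c=0$ and to $\langle(1,0,1,1)\rangle$ otherwise (the prime and the sample values are illustrative):

```python
# Numerical check of the two orbit representatives for L_{5,3};
# A[i][j] = a_{i+1,j+1} in the notation of the matrix form (A53).
p = 7

def act(A, theta):
    a_, b_, c_, d_ = theta
    abar = A[0][0] ** 3 * A[1][1] * a_
    bbar = A[0][0] * A[3][4] * a_ + A[0][0] * A[4][4] * b_ + A[1][0] * A[4][4] * d_
    cbar = A[0][0] * A[1][1] ** 2 * c_
    dbar = A[1][1] * A[4][4] * d_
    return tuple(x % p for x in (abar, bbar, cbar, dbar))

def mat_mult(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(5)) % p for j in range(5)]
            for i in range(5)]

def check(a, b, c, d):
    # c1 = 1 if c = 0, and c1 = c/(a*d^2) otherwise, as in the text.
    c1 = 1 if c == 0 else (c * pow(a * d * d, p - 2, p)) % p
    C1 = [[c1, 0, 0, 0, 0], [0, c1, 0, 0, 0], [0, 0, c1 * c1, 0, 0],
          [0, 0, 0, pow(c1, 3, p), 0], [0, 0, 0, 0, pow(c1, 3, p)]]
    C2 = [[d, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, d, 0, 0],
          [0, 0, 0, d * d, (-b * d * d) % p], [0, 0, 0, 0, (a * d * d) % p]]
    B = mat_mult(C1, C2)
    image = act(B, (a, b, c, d))
    target = (1, 0, 0, 1) if c == 0 else (1, 0, 1, 1)
    lam = image[0]
    assert lam and image == tuple((lam * t) % p for t in target)

check(3, 5, 0, 2)   # c = 0  ->  <(1,0,0,1)>
check(3, 5, 4, 2)   # c != 0 ->  <(1,0,1,1)>
print("both orbit representatives confirmed")
```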
\subsection*{$\mathbf L_{5,5}$} Let $L=L_{5,5}$. Invertible matrices of the form \begin{equation}\label{A55} A=\begin{pmatrix}
a_{11} & 0&0&0&0\\ a_{21}&a_{22}&0&0&0\\ a_{31}&a_{32}&a_{11}a_{22}& -a_{11}a_{21}&0\\ a_{41}&a_{42}&0&a_{11}^2&0\\ a_{51}&a_{52}&u&a_{54}&a_{11}^2a_{22} \end{pmatrix} \end{equation} where $u=a_{11}a_{32}+a_{21}a_{42}-a_{22}a_{41}$, form the group $\mathrm{\mathop{Aut}}(L)$. We have, in addition, that $Z^2(L,\mathbb{F})=\langle \Delta_{1,2},\Delta_{1,3},\Delta_{1,4},\Delta_{1,5}+\Delta_{3,4},\Delta_{2,3},\Delta_{2,4}\rangle$, $B^2(L,\mathbb{F})=\langle \Delta_{1,2},\Delta_{1,3}+\Delta_{2,4}\rangle$, and so $H^2(L,\mathbb{F})=\langle \overline{\Delta_{1,4}},\overline{\Delta_{1,5}}+\overline{\Delta_{3,4}},\overline{\Delta_{2,3}},\overline{\Delta_{2,4}}\rangle$. The set of allowable subspaces of $H^2(L,\mathbb{F})$ is \[\mathcal S=\{\langle(a,b,c,d)\rangle\mid b\neq 0\}.\] If $\vartheta=(a,b,c,d)\in H^2(L,\mathbb{F})$, then $A\vartheta=(\bar a,\bar b,\bar c,\bar d)$ where \begin{eqnarray*} \bar a=a_{11}^3a+(a_{11}a_{54}+a_{11}^2a_{31}+a_{11}a_{21}a_{41})b-a_{11}a_{21}^2c+a_{11}^2a_{21}d;\\ \bar b= a_{11}^3a_{22}b,\quad \bar c=-a_{11}a_{22}a_{42}b+a_{11}a_{22}^2c,\quad \bar d=2a_{11}a_{22}a_{41}b-2a_{11}a_{21}a_{22}c+a_{11}^2a_{22}d. \end{eqnarray*} Choose $S=\langle(a,b,c,d)\rangle\in \mathcal S$. If $\text{char}\, \mathbb{F}\neq 2$ then \[\begin{pmatrix} 2b&0&0&0&0\\ 0&b&0&0&0\\ -2a &0&2b^2&0&0\\ -d&c&0&4b^2&0\\ 0&0&bd&0&4b^3 \end{pmatrix}S=\langle (0,1,0,0)\rangle,\] which shows that the group of matrices of the form~\eqref{A55} is transitive on $\mathcal S$, and that $L$ has only one isomorphism type of step-1 descendants, namely $L_{6,13}$, as claimed.\par Suppose now that $\text{char}\, \mathbb{F}=2$. Set $b_1=d/b^2$ if $d\neq 0$ and set $b_1=1$ otherwise. Let $B$ denote the matrix \[\begin{pmatrix} b_1&0&0&0&0\\ 0&1&0&0&0\\ 0 &0&b_1&0&0\\ 0&0&0&b_1^2&0\\ 0&0&0&0&b_1^2 \end{pmatrix} \begin{pmatrix} b&0&0&0&0\\ 0&b&0&0&0\\ a &0&b^2&0&0\\ 0&c&0&b^2&0\\ 0&0&0&0&b^3 \end{pmatrix}. 
\] Then $BS=\langle (0,1,0,1)\rangle$ if $d\neq 0$ and $BS=\langle (0,1,0,0)\rangle$ otherwise. Hence, in this case, the group of matrices of the form~\eqref{A55} has two orbits on $\mathcal S$, namely $\langle (0,1,0,0)\rangle$ and $\langle(0,1,0,1)\rangle$. The corresponding Lie algebras are $L_{6,13}$ and $L_{6,1}^{(2)}$, respectively.\par We claim, for $\text{char}\, \mathbb{F}= 2$, that $L_{6,1}^{(2)}$ and $L_{6,13}$ are not isomorphic. It suffices to show that the allowable subspaces $\langle (0,1,0,0)\rangle$ and $\langle(0,1,0,1)\rangle$ are in different orbits under $\mathrm{\mathop{Aut}}(L)$. Suppose by contradiction that $A$ is of the form~\eqref{A55} such that $A\langle(0,1,0,1)\rangle=\langle(0,1,0,0)\rangle$. Then, as $\text{char}\, \mathbb{F}=2$, the expression for $\bar d$ gives that $0 = a_{11}^2a_{22}$, which is impossible as $A$ is invertible. Hence the two descendants are non-isomorphic.
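The characteristic $\neq 2$ step for $L_{5,5}$ can be checked in the same computational style. The sketch below applies the displayed action formulas over GF(7) to the matrix used in the transitivity argument (the prime and the sample values are illustrative):

```python
# Numerical check that the displayed matrix sends S = <(a,b,c,d)> with
# b != 0 to <(0,1,0,0)> in characteristic != 2; A[i][j] = a_{i+1,j+1}.
p = 7

def act(A, theta):
    a_, b_, c_, d_ = theta
    a11, a21, a22 = A[0][0], A[1][0], A[1][1]
    a31, a41, a54 = A[2][0], A[3][0], A[4][3]
    a42 = A[3][1]
    abar = (a11 ** 3 * a_ + (a11 * a54 + a11 ** 2 * a31 + a11 * a21 * a41) * b_
            - a11 * a21 ** 2 * c_ + a11 ** 2 * a21 * d_)
    bbar = a11 ** 3 * a22 * b_
    cbar = -a11 * a22 * a42 * b_ + a11 * a22 ** 2 * c_
    dbar = 2 * a11 * a22 * a41 * b_ - 2 * a11 * a21 * a22 * c_ + a11 ** 2 * a22 * d_
    return tuple(x % p for x in (abar, bbar, cbar, dbar))

a, b, c, d = 4, 3, 5, 2            # sample with b != 0
B = [[2 * b, 0, 0, 0, 0],
     [0, b, 0, 0, 0],
     [-2 * a, 0, 2 * b * b, 0, 0],
     [-d, c, 0, 4 * b * b, 0],
     [0, 0, b * d, 0, 4 * b ** 3]]
image = act(B, (a, b, c, d))
lam = image[1]
assert lam % p != 0
assert image == tuple((lam * t) % p for t in (0, 1, 0, 0))
print("B.S = <(0,1,0,0)> in characteristic != 2")
```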
\subsection*{$\mathbf L_{5,6}$}
Set $L=L_{5,6}$. The group $\mathrm{\mathop{Aut}}(L)$ consists of the invertible matrices of the form \[A= \begin{pmatrix} a_{11} & 0 & 0 & 0 & 0 \\ a_{21} & a_{11}^2 & 0 & 0 & 0\\ a_{31} & a_{32} & a_{11}^3 & 0 & 0 \\ a_{41} & a_{42} & a_{11}a_{32} & a_{11}^4 & 0 \\ a_{51} & a_{52} & u & v & a_{11}^5 \end{pmatrix},\] where $u=-a_{11}^2a_{31}+a_{11}a_{42}+a_{21}a_{32}$ and $v=a_{11}^3a_{21}+a_{11}^2a_{32}$. As $Z^2(L,\mathbb{F})=\langle \Delta_{1,2},\Delta_{1,3},\Delta_{1,4},\Delta_{1,5}+\Delta_{2,4},\Delta_{2,3},\Delta_{2,5}-\Delta_{3,4}\rangle$ and $B^2(L,\mathbb{F})=\langle \Delta_{1,2},\Delta_{1,3},\Delta_{1,4}+\Delta_{2,3}\rangle$, we obtain that
$H^2(L,\mathbb{F})=\langle \overline{\Delta_{1,5}}+ \overline{\Delta_{2,4}},\overline{\Delta_{2,3}},\overline{\Delta_{2,5}}-\overline{\Delta_{3,4}}\rangle$. Moreover, \[\mathcal S=\{\langle(a,b,c)\rangle\mid (a,c)\neq (0,0)\}.\] If $\vartheta=(a,b,c)\in H^2(L,\mathbb{F})$, then $A\vartheta=(\bar a,\bar b,\bar c)$ where \[\bar a = a_{11}^5(a_{11}a+a_{21}c),\quad \bar b = -2a_{11}^4a_{21}a+a_{11}^5b+(2a_{11}^3a_{42}-a_{11}^3a_{21}^2-a_{11}a_{32}^2)c,\quad\bar c = a_{11}^7 c.\] Choose $S=\langle(a,b,c)\rangle\in \mathcal S$. Suppose first that $\text{char}\, \mathbb{F}\neq 2$. Let $B$ denote the first of the following matrices if $c\neq 0$, or the second if $c=0$: \[\begin{pmatrix} 2c&0&0&0&0\\ -2a&4c^2&0&0&0\\ 0 &0&8c^3&0&0\\ 0&-2(a^2+bc)&0&16c^4&0\\ 0&0&-4c(a^2+bc)&-16ac^3&32c^5 \end{pmatrix},\quad \begin{pmatrix} 2a&0&0&0&0\\ b&4a^2&0&0&0\\ 0 &0&8a^3&0&0\\ 0&0&0&16a^4&0\\ 0&0&0&8a^3b&32a^5 \end{pmatrix}.\] If $c\neq 0$ then $BS=\langle (0,0,1)\rangle$, while $BS=\langle (1,0,0)\rangle$ otherwise. This shows that $\mathrm{\mathop{Aut}}(L)$ has at most two orbits on $\mathcal S$, namely $\langle(0,0,1)\rangle$ and $\langle(1,0,0)\rangle$, and the corresponding Lie algebras are $L_{6,14}$ and $L_{6,15}$, respectively. The Lie algebras $L_{6,14}$ and $L_{6,15}$ are clearly non-isomorphic, as $(L_{6,15})'$ is abelian, while $(L_{6,14})'$ is not. \par Assume now that $\text{char}\, \mathbb{F}=2$. If $b=c=0$ then $S=\langle (1,0,0)\rangle$, and so we may suppose that $(b,c)\neq(0,0)$. Let $B$ be the first of the following matrices if $c=0$ and $b\neq 0$; while if $c\neq 0$ then let $B$ be the second matrix: \[\begin{pmatrix} b/a&0&0&0&0\\ 0& b^2/a^2&0&0&0\\ 0 &0& b^3/a^3&0&0\\ 0&0&0& b^4/a^4&0\\ 0&0&0&0& b^5/a^5 \end{pmatrix},\quad \begin{pmatrix} c&0&0&0&0\\ a&c^2&0&0&0\\ 0 &ac&c^3&0&0\\ 0&0&ac^2&c^4&0\\ 0&0&a^2c&0&c^5 \end{pmatrix}.\] Now, if $c=0$ and $b\neq 0$ (so that $a\neq 0$) then $BS=\langle (1,1,0)\rangle$, while $BS=\langle (0,b/c^3,1)\rangle$ otherwise. 
Thus the set of the subspaces $\langle (1,0,0)\rangle$, $\langle (1,1,0)\rangle$ and $\langle (0,\varepsilon,1)\rangle$ with $\varepsilon\in\mathbb{F}$ contains a representative of each of the $\mathrm{\mathop{Aut}}(L)$-orbits on $\mathcal S$. The Lie algebras corresponding to these subspaces are $L_{6,15}$, $L_{6,2}^{(2)}$ and $L_{6,3}^{(2)}(\varepsilon)$. \par Let us now determine the possible isomorphisms among the algebras $L_{6,15}$, $L_{6,2}^{(2)}$ and $L_{6,3}^{(2)}(\varepsilon)$. First note that the derived subalgebras of $L_{6,15}$ and $L_{6,2}^{(2)}$ are abelian, while that of $L_{6,3}^{(2)}(\varepsilon)$ is not. Thus $L_{6,3}^{(2)}(\varepsilon)\not\cong L_{6,15}$ and $L_{6,3}^{(2)}(\varepsilon)\not\cong L_{6,2}^{(2)}$. If $L_{6,2}^{(2)} \cong L_{6,15}$, then there is some $A\in\mathrm{\mathop{Aut}}(L)$ such that $A\langle(1,1,0)\rangle=\langle(1,0,0)\rangle$. However, the expression for $\bar b$ implies in this case that $a_{11}=0$, which makes $A$ non-invertible. Thus $L_{6,2}^{(2)} \not \cong L_{6,15}$.\par Therefore it remains to determine the isomorphisms among the algebras $L_{6,3}^{(2)}(\varepsilon)$ with different values of $\varepsilon$. We claim that $L_{6,3}^{(2)}(\varepsilon)\cong L_{6,3}^{(2)}(\nu)$ if and only if $\varepsilon\relstarplus\nu$, where $\relstarplus$ is the equivalence relation defined at the beginning of Section~\ref{ressect}. To prove one direction of this claim, assume that $L_{6,3}^{(2)}(\varepsilon)\cong L_{6,3}^{(2)}(\nu)$. Then there is $A\in\mathrm{\mathop{Aut}}(L)$ such that $A\langle(0,\varepsilon,1)\rangle=\langle(0,\nu,1)\rangle$. The equations for $\bar b$ and $\bar c$ give that $a_{11}^5\varepsilon +a_{11}^3a_{21}^2+a_{11}a_{32}^2=a_{11}^7\nu$. Since $a_{11}\neq 0$, we may divide both sides of this equation by $a_{11}^5$ and obtain that $\varepsilon\relstarplus\nu$, as required. 
Now suppose, conversely, that $\varepsilon\relstarplus\nu$; that is, there are $\alpha\in\mathbb{F}^\ast$ and $\beta\in\mathbb{F}$ such that $\nu=\alpha^2\varepsilon+\beta^2$. Then \[\begin{pmatrix} \alpha^{-1}&0&0&0&0\\ 0&\alpha^{-2}&0&0&0\\ 0 &\beta\alpha^{-3}&\alpha^{-3}&0&0\\ 0&0&\beta\alpha^{-4}&\alpha^{-4}&0\\ 0&0&0&\beta\alpha^{-5}&\alpha^{-5} \end{pmatrix}\langle(0,\varepsilon,1)\rangle=\langle(0,\nu,1)\rangle.\] This proves the claim about the isomorphisms among the algebras $L_{6,3}^{(2)}(\nu)$.
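As a further sanity check, the characteristic $\neq 2$ normalization for $L_{5,6}$ can be verified numerically; the sketch below evaluates the displayed action formulas over GF(7) on the matrix used for the case $c\neq 0$ (the prime and the sample data are illustrative):

```python
# Numerical check that, in characteristic != 2, the first displayed matrix
# (case c != 0) maps S = <(a,b,c)> onto <(0,0,1)>; A[i][j] = a_{i+1,j+1}.
p = 7

def act(A, theta):
    a_, b_, c_ = theta
    a11, a21, a32, a42 = A[0][0], A[1][0], A[2][1], A[3][1]
    abar = a11 ** 5 * (a11 * a_ + a21 * c_)
    bbar = (-2 * a11 ** 4 * a21 * a_ + a11 ** 5 * b_
            + (2 * a11 ** 3 * a42 - a11 ** 3 * a21 ** 2 - a11 * a32 ** 2) * c_)
    cbar = a11 ** 7 * c_
    return tuple(x % p for x in (abar, bbar, cbar))

a, b, c = 3, 4, 2                   # sample with c != 0
B = [[2 * c, 0, 0, 0, 0],
     [-2 * a, 4 * c * c, 0, 0, 0],
     [0, 0, 8 * c ** 3, 0, 0],
     [0, -2 * (a * a + b * c), 0, 16 * c ** 4, 0],
     [0, 0, -4 * c * (a * a + b * c), -16 * a * c ** 3, 32 * c ** 5]]
image = act(B, (a, b, c))
lam = image[2]
assert lam % p != 0
assert image == tuple((lam * t) % p for t in (0, 0, 1))
print("B.S = <(0,0,1)> in characteristic != 2")
```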
\subsection*{$\mathbf L_{5,7}$}
Set $L=L_{5,7}$. The group $\mathrm{\mathop{Aut}}(L)$ consists of the invertible matrices of the form \[A= \begin{pmatrix} a_{11} & 0 & 0 & 0 & 0 \\ a_{21} & a_{22} & 0 & 0 & 0\\ a_{31} & a_{32} & a_{11}a_{22} & 0 & 0 \\ a_{41} & a_{42} & a_{11}a_{32} & a_{11}^2a_{22} & 0 \\ a_{51} & a_{52} & a_{11}a_{42} & a_{11}^2a_{32} & a_{11}^3a_{22} \end{pmatrix}.\] We have $Z^2(L,\mathbb{F})=\langle\Delta_{1,2},\Delta_{1,3},\Delta_{1,4},\Delta_{1,5},\Delta_{2,3},\Delta_{2,5}-\Delta_{3,4}\rangle$ and $B^2(L,\mathbb{F})=\langle\Delta_{1,2},\Delta_{1,3},\Delta_{1,4}\rangle$, and so $H^2(L,\mathbb{F})=\langle \overline{\Delta_{1,5}},\overline{\Delta_{2,3}},\overline{\Delta_{2,5}}-\overline{\Delta_{3,4}}\rangle$. Moreover, \[\mathcal S=\{\langle (a,b,c)\rangle\mid (a,c)\neq (0,0)\}.\] If $\vartheta=(a,b,c)\in H^2(L,\mathbb{F})$, then $A\vartheta =(\bar a,\bar b,\bar c)$ where \[\bar a = a_{11}^3a_{22}(a_{11}a+a_{21}c),\quad \bar b = a_{11}(a_{22}^2b + (2a_{22}a_{42}-a_{32}^2)c),\quad \bar c = a_{11}^3a_{22}^2 c.\] Choose $S=\langle(a,b,c)\rangle\in\mathcal S$. If $b=c=0$ then $S=\langle (1,0,0)\rangle$. Let $B$ be the first, the second, or the third of the following matrices, in the cases when $c=0$ and $b\neq 0$; $c\neq 0$ and $\text{char}\, \mathbb{F}\neq 2$; or $c\neq0$ and $\text{char}\, \mathbb{F}=2$; respectively: \[\begin{pmatrix} b&0&0&0&0\\ 0& ab^2&0&0&0\\ 0 &0& ab^3&0&0\\ 0&0&0& ab^4&0\\ 0&0&0&0& ab^5 \end{pmatrix},\quad \begin{pmatrix} c&0&0&0&0\\ -a&2c&0&0&0\\ 0 &0&2c^2&0&0\\ 0&-b&0&2c^3&0\\ 0&0&-bc&0&2c^4 \end{pmatrix},\quad \begin{pmatrix} c&0&0&0&0\\ a&1&0&0&0\\ 0 &0&c&0&0\\ 0&0&0&c^2&0\\ 0&0&0&0&c^3 \end{pmatrix}.\] Then we obtain that $BS=\langle(1,1,0)\rangle$ if $c=0$ and $b\neq 0$; $BS=\langle(0,0,1)\rangle$ if $c\neq 0$ and $\text{char}\, \mathbb{F}\neq 2$; while $BS=\langle(0,b/c^3,1)\rangle$ if $c\neq 0$ and $\text{char}\, \mathbb{F}=2$. 
Hence if $\text{char}\, \mathbb{F}\neq 2$ then $\mathrm{\mathop{Aut}}(L)$ has at most three orbits on $\mathcal S$, namely $\langle (1,0,0)\rangle$, $\langle (1,1,0)\rangle$, $\langle (0,0,1)\rangle$ and the corresponding Lie algebras are $L_{6,18}$, $L_{6,17}$, and $L_{6,16}$, respectively. If $\text{char}\, \mathbb{F}=2$ then the 1-spaces $\langle (1,0,0)\rangle$, $\langle (1,1,0)\rangle$, $\langle (0,\varepsilon,1)\rangle$ contain a set of representatives for the $\mathrm{\mathop{Aut}}(L)$-orbits on $\mathcal S$ with corresponding Lie algebras $L_{6,18}$, $L_{6,17}$ and $L_{6,4}^{(2)}(\varepsilon)$, respectively.\par Note that the derived subalgebras of $L_{6,17}$ and $L_{6,18}$ are abelian, while those of $L_{6,16}$ and $L_{6,4}^{(2)}(\varepsilon)$ are not. Hence $L_{6,16}\not\cong L_{6,17}$, $L_{6,16}\not\cong L_{6,18}$, $L_{6,4}^{(2)}(\varepsilon) \not\cong L_{6,17}$ and $L_{6,4}^{(2)}(\varepsilon)\not\cong L_{6,18}$. Further, the centralizer $\langle x_3,x_4,x_5,x_6\rangle$ of $(L_{6,17})'$ is 4-dimensional while the centralizer $\langle x_2,x_3,x_4,x_5,x_6\rangle$ of $(L_{6,18})'$ is 5-dimensional, and hence $L_{6,17}\not\cong L_{6,18}$. Thus we are left with having to determine the possible isomorphisms between the algebras $L_{6,4}^{(2)}(\varepsilon)$ with different values of $\varepsilon$. We claim that $L_{6,4}^{(2)}(\varepsilon)\cong L_{6,4}^{(2)}(\nu)$ if and only if $\varepsilon\relstarplus\nu$. To show this claim, suppose first that $L_{6,4}^{(2)}(\varepsilon)\cong L_{6,4}^{(2)}(\nu)$. Then there is $A\in\mathrm{\mathop{Aut}}(L)$ such that $A\langle(0,\varepsilon,1)\rangle=\langle(0,\nu,1)\rangle$. The equations for $\bar b$ and $\bar c$ and the fact that $a_{11}\neq 0$ imply that $a_{22}^2\varepsilon+a_{32}^2=a_{11}^2a_{22}^2\nu$, which, in turn, gives that $\varepsilon\relstarplus\nu$. Conversely, let us assume that $\varepsilon\relstarplus\nu$. 
Then there are some $\alpha\in\mathbb{F}^\ast$ and $\beta\in\mathbb{F}$ such that $\nu=\alpha^2\varepsilon+\beta^2$. If $\beta=0$ then the diagonal matrix with entries $\alpha^{-1}$, $\alpha^{-1}$, $\alpha^{-2}$, $\alpha^{-3}$, $\alpha^{-4}$ maps $\langle(0,\varepsilon,1)\rangle$ to $\langle(0,\nu,1)\rangle$, while if $\beta\neq 0$ then \[\begin{pmatrix} \alpha^{-1} & 0 & 0 & 0 & 0\\ 0 & \alpha^{-1}\beta^{-1} & 0 & 0 & 0 \\ 0 & \alpha^{-2} & \alpha^{-2}\beta^{-1} & 0 & 0\\ 0 & 0 & \alpha^{-3} & \alpha^{-3}\beta^{-1} & 0\\ 0 & 0 & 0 & \alpha^{-4} & \alpha^{-4}\beta^{-1} \end{pmatrix}\langle(0,\varepsilon,1)\rangle =\langle(0,\nu,1)\rangle.\] This completes the proof of the claim concerning the isomorphism among the $L_{6,4}^{(2)}(\varepsilon)$.
\subsection*{$\mathbf L_{5,8}$}
Set $L=L_{5,8}$. The group $\mathrm{\mathop{Aut}}(L)$ consists of the invertible matrices of the form \[A= \begin{pmatrix} a_{11} & 0 & 0 & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 & 0\\ a_{31} & a_{32} & a_{33} & 0 & 0 \\ a_{41} & a_{42} & a_{43} & a_{11}a_{22} & a_{11}a_{23} \\ a_{51} & a_{52} & a_{53} & a_{11}a_{32} & a_{11}a_{33} \end{pmatrix}.\] We have $Z^2(L,\mathbb{F})=\langle\Delta_{1,2},\Delta_{1,3},\Delta_{1,4},\Delta_{1,5},\Delta_{2,3},\Delta_{2,4},\Delta_{2,5}+\Delta_{3,4},\Delta_{3,5}\rangle$ and $B^2(L,\mathbb{F})=\langle\Delta_{1,2},\Delta_{1,3}\rangle$, and so $H^2(L,\mathbb{F})=\langle \overline{\Delta_{1,4}},\overline{\Delta_{1,5}},\overline{\Delta_{2,3}},\overline{\Delta_{2,4}},\overline{\Delta_{2,5}}+\overline{\Delta_{3,4}},\overline{\Delta_{3,5}}\rangle$. Further, \[\mathcal S=\left\{\langle (a,b,c,d,e,f)\rangle\mid \text{rank}\begin{pmatrix}a&b\\d&e\\e&f\end{pmatrix}=2\right\}.\] If $\vartheta=(a,b,c,d,e,f)\in H^2(L,\mathbb{F})$, then $A\vartheta =(\bar a,\bar b,\bar c,\bar d,\bar e,\bar f)$ where \begin{eqnarray*} \bar a & =& a_{11}^2a_{22}a+a_{11}^2a_{32}b+a_{11}a_{21}a_{22}d+a_{11}(a_{21}a_{32}+a_{22}a_{31})e+a_{11}a_{31}a_{32}f;\\ \bar b & =& a_{11}^2a_{23}a+a_{11}^2a_{33}b+a_{11}a_{21}a_{23}d+a_{11}(a_{21}a_{33}+a_{23}a_{31})e +a_{11}a_{31}a_{33}f;\\ \bar c & =& (a_{22}a_{33}-a_{23}a_{32})c+(a_{22}a_{43}-a_{23}a_{42})d+(a_{22}a_{53}-a_{23}a_{52}+a_{32}a_{43}-a_{33}a_{42})e\\ &&{}+(a_{32}a_{53}-a_{33}a_{52})f;\\ \bar d & =& a_{11}a_{22}^2 d +2a_{11}a_{22}a_{32}e +a_{11} a_{32}^2f;\\ \bar e & =& a_{11}a_{22}a_{23}d + a_{11}(a_{22}a_{33}+a_{23}a_{32})e+a_{11}a_{32}a_{33}f;\\ \bar f & =& a_{11}a_{23}^2 d + 2a_{11}a_{23}a_{33} e + a_{11}a_{33}^2 f. \end{eqnarray*} Choose $S=\langle(a,b,c,d,e,f)\rangle\in\mathcal S$ and set $\delta_1=ae-bd$, $\delta_2=af-be$, $\delta_3=df-e^2$. Suppose first that $d\neq 0$. 
If $\delta_1\neq 0$, then let $B$ be the first of the following two automorphisms; if $\delta_1=0$ (which implies that $\delta_3\neq 0$), then let $B$ be the second: \[\begin{pmatrix} 1&0&0&0&0\\ 0&\delta_1&0&0&0\\ 0&0&\delta_1&0&0\\ 0&0&0&\delta_1&0\\ 0&0&0&0&\delta_1 \end{pmatrix}\begin{pmatrix} d&0&0&0&0\\ -a&1&e&0&0\\ 0 &0&-d&0&0\\ 0&0&c&d&de\\ 0&0&0&0&-d^2 \end{pmatrix},\quad \begin{pmatrix} 1&0&0&0&0\\ 0&\delta_3&0&0&0\\ 1&0&\delta_3&0&0\\ 0&0&0&\delta_3&0\\ 0&0&0&0&\delta_3 \end{pmatrix} \begin{pmatrix} d&0&0&0&0\\ -a&1&e&0&0\\ 0 &0&-d&0&0\\ 0&0&c&d&de\\ 0&0&0&0&-d^2 \end{pmatrix}.\] Then $BS=\langle(0,1,0,1,0,\delta_3)\rangle$. Suppose now that $d=0$ and consider the following two cases: $e=0$ which implies that $a,\ f\neq 0$; and $e\neq 0$. In these cases, let $B$ denote the first or the second of the following transformations, respectively: \[\begin{pmatrix} 1&0&0&0&0\\ 0&-b&af&0&0\\ 0 &a&0&0&0\\ 0&0&0&-b&af\\ 0&0&ac&a&0 \end{pmatrix},\quad \begin{pmatrix} e^2&0&0&0&0\\ af-be&1&0&0&0\\ -ae&0&e&0&0\\ 0&c&0&e^2&0\\ 0&0&0&0&e^3 \end{pmatrix}.\] Then, in the first case, $BS=\langle (0,1,0,1,0,0)\rangle$, while in the second case, $BS=\langle (0,0,0,0,1,f)\rangle$. Thus if $d=e=0$ then we obtain that $S$ is in the orbit of $\langle (0,1,0,1,0,0)\rangle$. Therefore we may assume that $d=0$ and $e \neq 0$, and that $S$ is in the orbit of $\langle (0,0,0,0,1,f)\rangle$. 
If $f\neq 0$, then let $B_1$ denote the first of the following transformations; if $f=0$ and $\text{char}\, \mathbb{F}\neq 2$ then let $B_1$ be the second: \[\begin{pmatrix} 1&0&0&0&0\\ 0&f&0&0&0\\ -1&-1&1&0&0\\ 0&0&0&f&0\\ 0&0&0&-1&1 \end{pmatrix},\quad \begin{pmatrix} 1&0&0&0&0\\ 1&1&-1&0&0\\ -1&1&1&0&0\\ 0&0&0&1&-1\\ 0&0&0&1&1 \end{pmatrix}.\] We obtain, in both cases, that $B_1\langle(0,0,0,0,1,f)\rangle=\langle(0,1,0,1,0,-1)\rangle$.\par To summarize, in characteristic different from~2, the set of subspaces $\langle (0,1,0,1,0,\varepsilon)\rangle$ contains a representative from each of the $\mathrm{\mathop{Aut}}(L)$-orbits on $\mathcal S$. In characteristic 2, these orbits are covered by the subspaces $\langle (0,1,0,1,0,\varepsilon)\rangle$ and $\langle (0,0,0,0,1,0)\rangle$. The Lie algebras corresponding to these subspaces are $L_{6,19}(\varepsilon)$ and $L_{6,5}^{(2)}$. The Lie algebra $L_{6,19}(0)$ is written as $L_{6,20}$ in Section~\ref{ressect}, to minimize the difference between our list and that in~\cite{artw}. The expression for $\bar d$ shows, in characteristic~2, that the vectors in the $\mathrm{\mathop{Aut}}(L)$-orbit of $(0,0,0,0,1,0)$ all have 4-th coordinate 0, and so $\langle (0,1,0,1,0,\varepsilon)\rangle$ and $\langle (0,0,0,0,1,0)\rangle$ are indeed in different orbits. Thus we only need to verify the isomorphisms among the algebras $L_{6,19}(\varepsilon)$ with different values of $\varepsilon$. We claim that $L_{6,19}(\varepsilon)\cong L_{6,19}(\nu)$ if and only if $\varepsilon\relstar\nu$. To prove one direction of this claim assume that $L_{6,19}(\varepsilon)\cong L_{6,19}(\nu)$. Then there is some $A\in\mathrm{\mathop{Aut}}(L)$ such that $A\langle (0,1,0,1,0,\varepsilon)\rangle=\langle (0,1,0,1,0,\nu)\rangle$. 
Considering the equations for $\bar a$, $\bar b$ and $\bar f$ we obtain that \begin{equation}\label{eq0} a_{22}^2+\varepsilon a_{32}^2\neq 0 \end{equation} and that \begin{eqnarray} a_{22}a_{23}+\varepsilon a_{32}a_{33}&=&0\label{eq1};\\ a_{23}^2+\varepsilon a_{33}^2-\nu a_{22}^2-\varepsilon\nu a_{32}^2 \label{eq2}&=&0. \end{eqnarray} If $\varepsilon=0$ then~\eqref{eq0} implies that $a_{22}\neq 0$, and equations~\eqref{eq1}--\eqref{eq2} give that $a_{23}=0$ and that $\nu=0$. Thus $L_{6,19}(0)$ is not isomorphic to $L_{6,19}(\nu)$ with $\nu\neq 0$. Therefore we may assume without loss of generality that $\varepsilon,\ \nu\neq 0$. Set $\delta=a_{22}^2+\varepsilon a_{32}^2$. Then routine computation shows that \[(a_{22}a_{23}-\varepsilon a_{32}a_{33})(a_{22}a_{23}+\varepsilon a_{32}a_{33})+(\varepsilon a_{32}^2-\delta)(a_{23}^2+\varepsilon a_{33}^2-\nu a_{22}^2-\varepsilon\nu a_{32}^2)=\delta(\nu a_{22}^2-\varepsilon a_{33}^2).\] Since $\delta\neq 0$, equations~\eqref{eq1} and~\eqref{eq2} imply that $\nu a_{22}^2=\varepsilon a_{33}^2$. Thus either $a_{22}=a_{33}=0$ or $\varepsilon\relstar\nu$, as required. Suppose that $a_{22}=a_{33}=0$. Then $a_{23}^2=\varepsilon\nu a_{32}^2$. Since the matrix $A$ is invertible, we obtain that $a_{23}\neq 0$ and $a_{32}\neq 0$ and hence $1/\varepsilon\relstar \nu$. Since $1/\varepsilon\relstar\varepsilon$, this gives that $\varepsilon\relstar\nu$.\par Now we assume the converse; that is, let $\varepsilon,\ \nu\in\mathbb{F}$ such that $\varepsilon\relstar\nu$, say $\nu=\alpha^2\varepsilon$ with some $\alpha\in\mathbb{F}^*$. Let $A$ be the automorphism of $L_{5,8}$ represented by the diagonal matrix with the entries $(1, \alpha, \alpha^2, \alpha, \alpha^2)$ in the diagonal. Then $A\langle(0,1,0,1,0,\varepsilon)\rangle=\langle(0,1,0,1,0,\alpha^2\varepsilon)\rangle= \langle(0,1,0,1,0,\nu)\rangle$. Hence $L_{6,19}(\varepsilon)\cong L_{6,19}(\nu)$. \par
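The displayed identity can also be verified mechanically. The following sketch is ours, not part of the paper, and assumes a standard sympy installation; `eps` and `nu` stand for $\varepsilon$ and $\nu$, and the first factor is taken as $a_{22}a_{23}-\varepsilon a_{32}a_{33}$.

```python
# Sketch (ours, not from the paper; assumes sympy): a symbolic check of the
# polynomial identity used to compare L_{6,19}(eps) and L_{6,19}(nu).
from sympy import symbols, expand

a22, a23, a32, a33, eps, nu = symbols('a22 a23 a32 a33 eps nu')

delta = a22**2 + eps*a32**2
eq1 = a22*a23 + eps*a32*a33                            # left side of (eq1)
eq2 = a23**2 + eps*a33**2 - nu*a22**2 - eps*nu*a32**2  # left side of (eq2)

lhs = (a22*a23 - eps*a32*a33)*eq1 + (eps*a32**2 - delta)*eq2
rhs = delta*(nu*a22**2 - eps*a33**2)

assert expand(lhs - rhs) == 0  # the identity holds
```

With \eqref{eq1} and \eqref{eq2} both zero and $\delta\neq 0$, the identity forces $\nu a_{22}^2=\varepsilon a_{33}^2$, as used in the text.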
\subsection*{$\mathbf L_{5,9}$}
The group $\mathrm{\mathop{Aut}}(L)$ consists of the invertible matrices of the form \[A= \begin{pmatrix} a_{11} & a_{12} & 0 & 0 & 0 \\ a_{21} & a_{22} & 0 & 0 & 0\\ a_{31} & a_{32} & u & 0 & 0 \\ a_{41} & a_{42} & a_{11}a_{32}-a_{12}a_{31} & a_{11}u & a_{12}u \\ a_{51} & a_{52} & a_{21}a_{32}-a_{22}a_{31} & a_{21}u & a_{22}u \end{pmatrix},\] where $u = a_{11}a_{22}-a_{12}a_{21}$. Then $Z^2(L,\mathbb{F})=\langle\Delta_{1,2},\Delta_{1,3},\Delta_{1,4},\Delta_{1,5}+\Delta_{2,4},\Delta_{2,3},\Delta_{2,5}\rangle$ and $B^2(L,\mathbb{F})=\langle\Delta_{1,2},\Delta_{1,3},\Delta_{2,3}\rangle$, and hence $H^2(L,\mathbb{F})=\langle \overline{\Delta_{1,4}},\overline{\Delta_{1,5}}+\overline{\Delta_{2,4}},\overline{\Delta_{2,5}}\rangle$. Further, \[\mathcal S=\{\langle (a,b,c)\rangle\mid ac-b^2\neq 0\}.\] Let $\vartheta=(a,b,c)\in H^2(L,\mathbb{F})$. Then $A\vartheta=(\bar a,\bar b,\bar c)$ where \begin{eqnarray*} \bar a &=& (a_{11}^2a+2a_{11}a_{21}b+a_{21}^2c)u;\\ \bar b &=& (a_{11}a_{12}a+(a_{11}a_{22}+a_{12}a_{21})b+a_{21}a_{22}c)u;\\ \bar c &=& (a_{12}^2a+2a_{12}a_{22}b+a_{22}^2c)u. \end{eqnarray*} Choose $S=\langle(a,b,c)\rangle\in\mathcal S$. Let us consider three cases: $a\neq 0$; $a=0$ and $c\neq 0$; $a=0$, $c=0$, and $\text{char}\, \mathbb{F}\neq 2$. Let, in these cases, $B$ denote the first, the second, or the third of the following transformations, respectively: \[\begin{pmatrix}
1&-b&0&0&0\\ 0&a&0&0&0\\ 0&0&a&0&0\\ 0&0&0&a&-ab\\ 0&0&0&0&a^2 \end{pmatrix},\quad \begin{pmatrix}
0&-c&0&0&0\\ 1&b&0&0&0\\ 0&0&c&0&0\\ 0&0&0&0&-c^2\\ 0&0&0&c&bc \end{pmatrix},\quad \begin{pmatrix}
1&-1&0&0&0\\ 1&1&0&0&0\\ 0&0&2&0&0\\ 0&0&0&2&-2\\ 0&0&0&2&2 \end{pmatrix}.\] We obtain, in the first and the second case, that $BS=\langle(1,0,ac-b^2)\rangle$, while, in the third case, we find that $BS=\langle(1,0,-1)\rangle$.\par To summarize, if $\text{char}\, \mathbb{F}\neq 2$ then, as $ac-b^2\neq 0$, the set formed by the subspaces $\langle (1,0,\varepsilon)\rangle$ with $\varepsilon\neq 0$ contains a representative in each of the $\mathrm{\mathop{Aut}}(L)$-orbits on $\mathcal S$. The Lie algebra corresponding to $\langle (1,0,\varepsilon)\rangle$ is $L_{6,21}(\varepsilon)$. If $\text{char}\, \mathbb{F}=2$ then the set consisting of the subspaces $\langle(1,0,\varepsilon)\rangle$ and $\langle(0,1,0)\rangle$ contains such a system of representatives. The Lie algebra corresponding to $\langle(0,1,0)\rangle$ is $L_{6,6}^{(2)}$. \par The expressions for $\bar a,\ \bar b,\ \bar c$ give in characteristic~2 that $\langle (0,1,0)\rangle$ is fixed by $\mathrm{\mathop{Aut}}(L)$, and hence $L_{6,6}^{(2)}\not\cong L_{6,21}(\varepsilon)$. We claim, for $\varepsilon,\ \nu\in\mathbb{F}^*$, that $L_{6,21}(\varepsilon)\cong L_{6,21}(\nu)$ if and only if $\varepsilon\relstar\nu$. Suppose first that $L_{6,21}(\varepsilon)\cong L_{6,21}(\nu)$. Then there is some automorphism $A\in\mathrm{\mathop{Aut}}(L)$ such that $A\langle(1,0,\varepsilon)\rangle=\langle(1,0,\nu)\rangle$. Using the equations for $\bar a$, $\bar b$, and $\bar c$ we obtain that \begin{eqnarray} a_{11}a_{12}+a_{21}a_{22}\varepsilon&=&0;\label{eq591}\\ a_{12}^2+a_{22}^2\varepsilon-a_{11}^2\nu-a_{21}^2\varepsilon\nu&=&0\label{eq592}. \end{eqnarray} Simple computation shows that \[(a_{12}a_{22}-\nu a_{11}a_{21})(a_{11}a_{12}+a_{21}a_{22}\varepsilon)- a_{11}a_{22}(a_{12}^2+a_{22}^2\varepsilon-a_{11}^2\nu-a_{21}^2\varepsilon\nu)= (a_{11}a_{22}-a_{12}a_{21})(a_{11}^2\nu-a_{22}^2\varepsilon).\] Since $a_{11}a_{22}-a_{12}a_{21}\neq 0$, we obtain that equations~\eqref{eq591} and~\eqref{eq592} imply that $a_{11}^2\nu=a_{22}^2\varepsilon$. 
Then either $\varepsilon\relstar\nu$ or $a_{11}=a_{22}=0$. If $a_{11}=a_{22}=0$, then equation~\eqref{eq592} becomes $a_{12}^2=a_{21}^2\varepsilon\nu$. Since $A$ is invertible, we obtain, in this case, that $1/\varepsilon\relstar\nu$. Since $\varepsilon\relstar 1/\varepsilon$, this gives that $\varepsilon\relstar\nu$.\par Conversely, let us suppose that $\varepsilon,\ \nu\in\mathbb{F}^*$ such that $\varepsilon\relstar\nu$. That is, there is some $\alpha\in\mathbb{F}^*$ such that $\nu=\alpha^2\varepsilon$. Then let $A$ be the automorphism of $L$ represented by the diagonal matrix with $(1,\alpha,\alpha,\alpha,\alpha^2)$ as its diagonal. Then $A\langle (1,0,\varepsilon)\rangle=\langle (1,0,\alpha^2\varepsilon)\rangle= \langle (1,0,\nu)\rangle$. Hence $L_{6,21}(\varepsilon)\cong L_{6,21}(\nu)$.
\subsection*{$\mathbf L_{4,1}$}\label{41sec}
The automorphism group of $L$ is $\mathrm{\mathop{GL}}_4(\mathbb{F})$ and $H^2(L,\mathbb{F})=Z^2(L,\mathbb{F})$ consists of all skew-symmetric bilinear forms on $L$, and hence $H^2(L,\mathbb{F})=\langle \Delta_{1,2},\Delta_{1,3},\Delta_{1,4},\Delta_{2,3},\Delta_{2,4},\Delta_{3,4}\rangle$. Note that $H^2(L,\mathbb{F})$ is naturally isomorphic to the wedge product $\Lambda^2( \mathbb{F}^4)$ as $\mathrm{\mathop{GL}}_4(\mathbb{F})$-modules; therefore we do not write the explicit formulas for the action. The set of allowable subspaces $\mathcal S$ consists of the 2-dimensional subspaces $S=\langle \vartheta_1,\vartheta_2\rangle$ such that $\vartheta_1^\perp\cap\vartheta_2^\perp=0$. Let us compute the representatives of the $\mathrm{\mathop{Aut}}(L)$-orbits on $H^2(L,\mathbb{F})$. Let $S\in\mathcal S$ and write \[S=\langle(a_1,b_1,c_1,d_1,e_1,f_1),(a_2,b_2,c_2,d_2,e_2,f_2)\rangle.\] By Lemma~\ref{altlemma}(i), we may assume without loss of generality that $a_1=1$. Then \[\begin{pmatrix} 1&0&d_1&e_1\\ 0&1&-b_1&-c_1\\ 0&0&1&0\\ 0&0&0&1& \end{pmatrix}S=\langle(1,0,0,0,0,f_1'),(0,b_2',c_2',d_2',e_2',f_2')\rangle,\] where $f_1',\ b_2',\ c_2',\ d_2',\ e_2',\ f_2'\in\mathbb{F}$. Thus no generality is lost by assuming that \[S=\langle(1,0,0,0,0,f_1),(0,b_2,c_2,d_2,e_2,f_2)\rangle.\] We claim that there is $B\in \mathrm{\mathop{Aut}}(L)$ such that \[BS=\langle(1,0,0,0,0,f_1),(0,1,0,0,\bar e_2,f_2)\rangle.\] Consider the following list of matrices: \[\begin{pmatrix} 1&-d_2&0&0\\ 0&b_2&0&0\\ 0&0&1&-c_2\\ 0&0&0&b_2& \end{pmatrix}, \begin{pmatrix} 1&-e_2&0&0\\ 0&c_2&0&0\\ 0&0&0&-c_2\\ 0&0&1&b_2& \end{pmatrix}, \begin{pmatrix} 0&-d_2&0&0\\ 1&b_2&0&0\\ 0&0&1&-e_2\\ 0&0&0&d_2& \end{pmatrix}, \begin{pmatrix} 0&-e_2&0&0\\ 1&c_2&0&0\\ 0&0&0&-e_2\\ 0&0&1&d_2& \end{pmatrix}.\] If $(b_2,c_2,d_2,e_2)\neq (0,0,0,0)$, then the list above contains at least one invertible matrix, and let $B$ denote this matrix. 
On the other hand, if $(b_2,c_2,d_2,e_2)=(0,0,0,0)$, then $f_2\neq 0$ and set, in this case, \[B=\begin{pmatrix} 1&-f_2&f_1&0\\ 1&0&0&0\\ 0&0&1&0\\ -1&0&0&f_2& \end{pmatrix}.\] Having defined $B$ as above, we obtain $BS= \langle(1,0,0,0,0,f_1),(0,1,0,0,\bar e_2,f_2)\rangle$ with some $\bar e_2$, as claimed. Let us hence suppose without loss of generality that $S=\langle(1,0,0,0,0,f_1),(0,1,0,0,e_2,f_2)\rangle$. Next consider the following three cases: $f_1\neq 0$; $f_1=0$ and $e_2\neq 0$; $f_1=0$ and $e_2=0$. Note that in the last case $f_2\neq 0$, as otherwise $\vartheta_1^\perp\cap\vartheta_2^\perp\neq 0$. Define $C\in\mathrm{\mathop{Aut}}(L)$ in these cases, respectively, as follows: \[\begin{pmatrix} f_1&0&0&0\\ 0&f_1&0&0\\ 0&0&1&0\\ 0&0&0&f_1 \end{pmatrix},\quad \begin{pmatrix} 1&0&0&0\\ 0&f_2&-1&0\\ 0&-e_2&0&0\\ 0&0&0&1 \end{pmatrix},\quad \begin{pmatrix} 1&0&0&0\\ 0&f_2&-1&0\\ 0&0&1&0\\ 0&0&0&1 \end{pmatrix}.\] We obtain, in each of these cases, that $CS=\langle(1,0,0,0,0,1),(0,1,0,0,e_2',f_2)\rangle$ where $e_2'\in\mathbb{F}$. If $f_2=0$ then $CS=\langle(1,0,0,0,0,1),(0,1,0,0,\varepsilon,0)\rangle$ with some $\varepsilon\in\mathbb{F}$. On the other hand, if $f_2\neq 0$ then \[\begin{pmatrix} f_2&0&0&0\\ 0&1&0&0\\ 0&0&f_2&0\\ 0&0&0&1 \end{pmatrix} \langle(1,0,0,0,0,1),(0,1,0,0,e_2',f_2)\rangle=\langle(1,0,0,0,0,1),(0,1,0,0,e_2'',1)\rangle,\] where $e_2''\in\mathbb{F}$. 
Suppose now that $\text{char}\, \mathbb{F}\neq 2$, and let $D$ be the first of the following two automorphisms if $e_2''\notin\{0,-1/4\}$, while we let $D$ be the second otherwise: \[\begin{pmatrix} 2e_2''&0&0&0\\ 0&2&0&0\\ 0&1&1&0\\ -1&0&0&4e_2''+1 \end{pmatrix},\quad \begin{pmatrix} 1&0&0&1\\ 0&2&0&0\\ 0& 1&1&0\\ 0&0&0&2 \end{pmatrix}.\] Then $D\langle(1,0,0,0,0,1),(0,1,0,0,e_2'',1)\rangle=\langle(1,0,0,0,0,1),(0,1,0,0,\varepsilon,0)\rangle$.\par To summarize, in characteristic different from two, the set of 2-spaces $\langle(1,0,0,0,0,1),(0,1,0,0,\varepsilon,0)\rangle$ where $\varepsilon\in\mathbb{F}$ contains at least one representative of each of the $\mathrm{\mathop{Aut}}(L)$-orbits on $\mathcal S$. In characteristic~2, such a set of 2-spaces is formed by the spaces $\langle(1,0,0,0,0,1),(0,1,0,0,\varepsilon,0)\rangle$ and the spaces $\langle(1,0,0,0,0,1),(0,1,0,0,\nu,1)\rangle$ where $\varepsilon,\ \nu\in\mathbb{F}$. The corresponding Lie algebras are $L_{6,22}(\varepsilon)$ and $L_{6,7}^{(2)}(\nu)$, respectively.\par It remains to find the possible isomorphisms of the step-2 descendants of $L$. The group $\mathrm{\mathop{Aut}}(L)$ preserves, modulo scalars, a quadratic form $Q$ on $H^2(L,\mathbb{F})$, defined, for $\vartheta=(a,b,c,d,e,f)\in H^2(L,\mathbb{F})$, as \begin{equation}\label{qfeq} Q(\vartheta)=af-be+cd. \end{equation} Let $f$ denote the symmetric bilinear form associated with $Q$. It is easy to see that $Q$ is indeed preserved by the action of $\mathrm{\mathop{GL}}(V)$ modulo scalars; namely, for $A\in\mathrm{\mathop{GL}}(V)$ and $v\in V$ we have that $Q(Av)=(\det A)Q(v)$. \par Assume first that $\text{char}\, \mathbb{F}\neq 2$, and consider two subspaces $S_1=\langle(1,0,0,0,0,1),(0,1,0,0,\varepsilon,0)\rangle$ and $S_2=\langle(1,0,0,0,0,1),(0,1,0,0,\nu,0)\rangle$ such that $S_1$ and $S_2$ are in the same $\mathrm{\mathop{Aut}}(L)$-orbit. 
Since the determinants of the Gram matrices of the form $f$ restricted to $S_1$ and $S_2$ are $4\varepsilon$ and $4\nu$, respectively, Lemma~\ref{gramlemma}(ii) implies that $\nu=\alpha^2\varepsilon$ with some $\alpha\in\mathbb{F}^*$. Conversely if $\nu=\gamma^2\varepsilon$ with some $\gamma\in\mathbb{F}^*$ and $A$ is the automorphism of $L_{4,1}$ represented by the diagonal matrix with the entries $(1,\gamma,1,\gamma)$ in the diagonal, then $AS_1=S_2$. \par Suppose now that the characteristic of $\mathbb{F}$ is $2$. Set $S_1=\langle(1,0,0,0,0,1),(0,1,0,0,\varepsilon,0)\rangle$ and $S_2=\langle(1,0,0,0,0,1),(0,1,0,0,\nu,1)\rangle$ where $\varepsilon,\ \nu\in\mathbb{F}$. Since the restriction of $f$ is identically zero on $S_1$ while it is non-singular on $S_2$, we obtain that $S_1$ and $S_2$ cannot be in the same $\mathrm{\mathop{Aut}}(L)$-orbit. Suppose now that
$S_1=\langle(1,0,0,0,0,1),(0,1,0,0,\varepsilon_1,0)\rangle$ and $S_2=\langle(1,0,0,0,0,1),(0,1,0,0,\varepsilon_2,0)\rangle$ such that $S_1$ and $S_2$ are in the same $\mathrm{\mathop{Aut}}(L)$-orbit. Since $f$ is identically zero on $S_1$ and $S_2$ and, for $i=1,\ 2$, $Q(1,0,0,0,0,1)Q(0,1,0,0,\varepsilon_i,0)=\varepsilon_i$, we obtain from Lemma~\ref{gramlemma}(iii) that $\varepsilon_2= \alpha^2\varepsilon_1+\beta^2$ with some $\alpha\in\mathbb{F}^*$ and $\beta\in\mathbb{F}$. Assume, conversely that $\varepsilon_2=\alpha^2\varepsilon_1+\beta^2$ with some $\alpha\in\mathbb{F}^*$ and $\beta\in\mathbb{F}$. Since there is nothing to prove if $\varepsilon_1=\varepsilon_2=1$, we may assume without loss of generality that $\varepsilon_1\neq 1$. Let $A$ be the automorphism of $L_{4,1}$ represented by the matrix \[\begin{pmatrix} \varepsilon_1&\beta&1&\varepsilon_1\beta\\ 0&\alpha&0&\alpha\\ 1&\beta&1&\beta\\ 0&\alpha&0&\varepsilon_1\alpha \end{pmatrix}.\] Then $\det A=\alpha^2(\varepsilon_1^2+1)$ which, by assumption, is non-zero, and so $A$ does define an isomorphism. Further, $A\langle(1,0,0,0,0,1),(0,1,0,0,\varepsilon_1,0)\rangle=\langle(1,0,0,0,0,1),(0,1,0,0,\varepsilon_2,0)\rangle$. \par Finally if $S_1=\langle(1,0,0,0,0,1),(0,1,0,0,\nu_1,1)\rangle$ and $S_2=\langle(1,0,0,0,0,1),(0,1,0,0,\nu_2,1)\rangle$, then the restriction of $f$ on $S_1$ and on $S_2$ is non-singular, and the given bases of $S_1$ and $S_2$ are symplectic. Further, the Arf invariants of $S_1$ and $S_2$ with respect to these bases are $\nu_1$ and $\nu_2$, respectively. Thus Lemma~\ref{gramlemma}(iii)
gives that $\nu_1+\nu_2\in\{\alpha^2+\alpha\ |\ \alpha\in\mathbb{F}^\ast\}$. Assume, conversely, that $\alpha^2+\alpha+\nu_1+\nu_2=0$ with some $\alpha$. Let $A$ denote the automorphism of $L_{4,1}$ represented by the matrix \[\begin{pmatrix} 1&0&0&\alpha\\ 0&1&0&0\\ 0&\alpha&1&0\\ 0&0&0&1 \end{pmatrix}.\] Then $A\langle(1,0,0,0,0,1),(0,1,0,0,\nu_1,1)\rangle=\langle(1,0,0,0,0,1),(0,1,0,0,\nu_2,1)\rangle$. \par The arguments presented in this section give rise to the proof of Theorem~\ref{kleinth}.
\begin{proof}[Proof of Theorem~$\ref{kleinth}$.] Let $V$ be the vector space as in the statement of the theorem and let $L$ be the 4-dimensional abelian Lie algebra $L_{4,1}$. As noted before, there is an isomorphism between the $\mathrm{\mathop{GL}}(4,\mathbb{F})$-modules $V\wedge V$ and $H^2(L,\mathbb{F})$ realized by the mapping $b_i\wedge b_j\mapsto\Delta_{i,j}$, and so we will identify $V\wedge V$ with $H^2(L,\mathbb{F})$. If $S$ is a 2-dimensional subspace of $H^2(L,\mathbb{F})$, then the corresponding central extension $L_S$ of $L$ is a 6-dimensional nilpotent Lie algebra with 4~generators and central derived subalgebra of dimension~2. In addition if $S$ is not allowable then $L_S=K\oplus\mathbb{F}$, which, using the classification of 5-dimensional nilpotent Lie algebras, gives that $L_S\cong L_{5,8}\oplus\mathbb{F}$. Hence $\mathrm{\mathop{GL}}(4,\mathbb{F})$ has a single orbit on the set of not allowable 2-dimensional subspaces. Since the orbits on the allowable 2-dimensional subspaces were determined in this section, the theorem follows. \end{proof}
\subsection*{$\mathbf L_{4,2}$}
The group $\mathrm{\mathop{Aut}}(L)$ consists of the invertible matrices of the form \[A= \begin{pmatrix} a_{11} & a_{12} & 0 & 0 \\ a_{21} & a_{22} & 0 & 0 \\ a_{31} & a_{32} & u & a_{34}\\ a_{41} & a_{42} & 0 & a_{44} \end{pmatrix},\] where $u = a_{11}a_{22}-a_{12}a_{21}$. We have $Z^2(L,\mathbb{F})=\langle\Delta_{1,2},\Delta_{1,3},\Delta_{1,4},\Delta_{2,3},\Delta_{2,4}\rangle$, $B^2(L,\mathbb{F})=\langle\Delta_{1,2}\rangle$, and so $H^2(L,\mathbb{F})=\langle \overline{\Delta_{1,3}},\overline{\Delta_{1,4}},\overline{\Delta_{2,3}},\overline{\Delta_{2,4}}\rangle$. The set of allowable subspaces $\mathcal S$ consists of the 2-dimensional subspaces $S=\langle \vartheta_1,\vartheta_2\rangle$ such that $\vartheta_1^\perp\cap\vartheta_2^\perp\cap\langle x_3,x_4\rangle=0$. If $\vartheta=(a,b,c,d)\in H^2(L,\mathbb{F})$, then $A\vartheta =(\bar a,\bar b,\bar c,\bar d)$ where \begin{eqnarray*} \bar a &=& (a_{11}a+a_{21}c)u;\\ \bar b &=& a_{11}a_{34}a + a_{11}a_{44}b + a_{21}a_{34}c+a_{21}a_{44}d;\\ \bar c &=& (a_{12}a+a_{22}c)u;\\ \bar d &=& a_{12}a_{34}a+a_{12}a_{44}b+a_{22}a_{34}c+a_{22}a_{44}d. \end{eqnarray*} Choose a 2-dimensional subspace $S=\langle \vartheta_1,\vartheta_2\rangle$ of $\mathcal S$ where $\vartheta_1=(a_1,b_1,c_1,d_1)$ and $\vartheta_2=(a_2,b_2,c_2,d_2)$. If $a_1=c_1=0$ and $a_2=c_2=0$ then $x_3\in\vartheta_1^\perp\cap\vartheta_2^\perp$, and hence $S$ is not allowable. Thus, by possibly swapping $\vartheta_1$ and $\vartheta_2$, we may assume without loss of generality that $(a_1,c_1)\neq(0,0)$. 
Let $B$ be the first of the following automorphisms if $a_1\neq 0$ and let $B$ be the second if $a_1=0$ (which implies that $c_1\neq 0$): \[\begin{pmatrix} 1&-c_1&0&0\\ 0&a_1&0&0\\ 0&0&a_1&-a_1b_1\\ 0&0&0&a_1^2 \end{pmatrix},\quad \begin{pmatrix} 0&1&0&0\\ -1&0&0&0\\ 0&0&1&-d_1\\ 0&0&0&c_1 \end{pmatrix}.\] Then the image $B S$ is of the form $\langle (1,0,0,d_1'),(0,b_2',c_2',d_2')\rangle$ which implies that we may assume without loss of generality that $S=\langle (1,0,0,d_1),(0,b_2,c_2,d_2)\rangle$. We note that such an $S$ is allowable if and only if $(d_1,b_2,d_2)\neq(0,0,0)$.\par Suppose first that $c_2=0$ and $d_2\neq 0$. Then \[\begin{pmatrix} d_2&0&0&0\\ -b_2&1&0&0\\ 0&0&d_2&d_1b_2\\ 0&0&0&d_2 \end{pmatrix}S= \langle (1,0,0,0),(0,0,0,1)\rangle.\] Next, we assume that $c_2=0$ and $d_2=0$. If $d_1=0$ then $S=\langle (1,0,0,0),(0,1,0,0)\rangle$, while if $d_1\neq 0$ then \[\begin{pmatrix} d_1&0&0&0\\ 0&1&0&0\\ 0&0&d_1&0\\ 0&0&0&d_1 \end{pmatrix}S=\langle (1,0,0,1),(0,1,0,0)\rangle.\] Now suppose that $c_2\neq 0$ and $d_2=0$ and let $C$ be the first or the second of the following matrices, depending on whether $b_2\neq 0$ or not: \[\begin{pmatrix} 0&1&0&0\\ b_2&0&0&0\\ 0&0&-b_2&0\\ 0&0&0&-b_2c_2 \end{pmatrix},\quad \begin{pmatrix} d_1&0&0&0\\ 0&1&0&0\\ 0&0&d_1&0\\ 0&0&0&d_1 \end{pmatrix}.\] Then $CS=\langle (1,0,0,1),(0,\varepsilon,1,0)\rangle$. Finally, assume that $c_2\neq 0$ and $d_2\neq 0$ and let $D$ be the first or the second of the following matrices depending on whether $b_2\neq 0$: \[\begin{pmatrix} d_2&-d_2&0&0\\ -b_2&0&0&0\\ 0&0&-b_2d_2&0\\ 0&0&0&-b_2c_2 \end{pmatrix},\quad \begin{pmatrix} c_2&(d_1-1)c_2&0&0\\ 0&d_2&0&0\\ 0&0&c_2d_2&0\\ 0&0&0&c_2^2 \end{pmatrix}.\] Then $DS=\langle (1,0,0,1),(0,\bar b_2,1,1)\rangle$, with $\bar b_2\in\mathbb{F}$. 
If $\text{char}\, \mathbb{F}\neq 2$ then this gives no new orbit as \[\begin{pmatrix} 2&0&0&0\\ 1&1&0&0\\ 0&0&2&-2\\ 0&0&0&4 \end{pmatrix}\langle (1,0,0,1),(0,\bar b_2,1,1)\rangle=\langle (1,0,0,1),(0,4\bar b_2+1,1,0)\rangle.\] If $\text{char}\, \mathbb{F}=2$ then we obtain that $S=\langle(1,0,0,1),(0,\nu,1,1)\rangle$.\par To summarize, if $\text{char}\, \mathbb{F}\neq 2$ then the list of 2-spaces $\langle (1,0,0,0),(0,0,0,1)\rangle$, $\langle (1,0,0,0),(0,1,0,0)\rangle$, $\langle (1,0,0,1),(0,1,0,0)\rangle$, and $\langle (1,0,0,1),(0,\varepsilon,1,0)\rangle$ with $\varepsilon\in\mathbb{F}$ contains at least one representative for each of the $\mathrm{\mathop{Aut}}(L)$-orbits on $\mathcal S$. The corresponding Lie algebras are $L_{6,27}$, $L_{6,25}$, $L_{6,23}$, and $L_{6,24}(\varepsilon)$. In characteristic~2, such a set is formed by the subspaces above in addition to the subspaces $\langle (1,0,0,1),(0,\nu,1,1)\rangle$ with $\nu\in\mathbb{F}$. The Lie algebra that corresponds to the subspace $\langle (1,0,0,1),(0,\nu,1,1)\rangle$ is $L_{6,8}^{(2)}(\nu)$. \par Finally, in this section we have to determine the possible isomorphisms of the Lie algebras in the previous paragraph. First we note, for $L=L_{6,24}(\varepsilon)$ and $L=L_{6,8}^{(2)}(\nu)$, that $C(L)=L^3$, while this equality is not valid for $L_{6,27}$, $L_{6,23}$, or $L_{6,25}$. In order to separate the Lie algebras $L_{6,27}$, $L_{6,25}$, $L_{6,23}$ we use the geometry of the $\mathrm{\mathop{Aut}}(L)$-action on $H^2(L,\mathbb{F})$. 
The expressions for $\bar a$, $\bar b$, $\bar c$, and $\bar d$ above give that the action of the automorphism $A$ on $H^2(L,\mathbb{F})$ is represented, with respect to the basis $\{\overline{\Delta_{1,3}},\overline{\Delta_{1,4}},\overline{\Delta_{2,3}}, \overline{\Delta_{2,4}}\}$, by the matrix \begin{equation}\label{tensor} \begin{pmatrix} a_{11}u & a_{11}a_{34} & a_{12}u & a_{12}a_{34}\\ 0 & a_{11}a_{44} & 0 & a_{12}a_{44} \\ a_{21}u & a_{21}a_{34} & a_{22}u & a_{22}a_{34} \\ 0 & a_{21}a_{44} & 0 & a_{22}a_{44} \end{pmatrix}= \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}\otimes \begin{pmatrix} u & a_{34} \\ 0 & a_{44} \end{pmatrix}. \end{equation} It is well-known that $\mathrm{\mathop{GL}}(2,\mathbb{F})\otimes\mathrm{\mathop{GL}}(2,\mathbb{F})$ preserves a quadratic form modulo scalars in its natural action on $\mathbb{F}^4$. To exploit this fact in our situation, define a quadratic form $Q$ on $H^2(L,\mathbb{F})$ as \[Q(\vartheta)=\alpha_1\alpha_4-\alpha_2\alpha_3\quad\mbox{where}\quad\vartheta=(\alpha_1,\alpha_2,\alpha_3,\alpha_4).\] Let $A\in\mathrm{\mathop{Aut}}(L)$ and decompose $A$ as $g_1\otimes g_2$ as in~\eqref{tensor}. Then we have, for all $\vartheta\in H^2(L,\mathbb{F})$, that \[Q(A\vartheta)=Q((g_1\otimes g_2)\vartheta)=(\det g_1)(\det g_2)Q(\vartheta).\]
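The rescaling of $Q$ by $(\det g_1)(\det g_2)$ can also be checked symbolically. The sketch below is ours, not part of the paper, and assumes sympy; it uses the standard row-major Kronecker product, so that $(g_1\otimes g_2)$ acting on the coordinates $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ rescales $Q(\vartheta)=\alpha_1\alpha_4-\alpha_2\alpha_3$.

```python
# Sketch (ours, not from the paper; assumes sympy): a Kronecker product
# g1 (x) g2 acting on F^4 rescales Q(v) = v1*v4 - v2*v3 by (det g1)(det g2).
from sympy import Matrix, symbols, expand

g1 = Matrix(2, 2, symbols('p1:5'))
g2 = Matrix(2, 2, symbols('q1:5'))
v = Matrix(4, 1, symbols('v1:5'))

def kron(A, B):
    """Kronecker product of two sympy matrices (row-major convention)."""
    return Matrix(A.rows * B.rows, A.cols * B.cols,
                  lambda i, j: A[i // B.rows, j // B.cols] * B[i % B.rows, j % B.cols])

def Q(w):
    # The quadratic form preserved modulo scalars.
    return w[0] * w[3] - w[1] * w[2]

w = kron(g1, g2) * v
assert expand(Q(w) - g1.det() * g2.det() * Q(v)) == 0
```

This reflects the familiar identity $(g_1\otimes g_2)\,\mathrm{vec}(M)=\mathrm{vec}(g_1 M g_2^T)$ together with $\det(g_1 M g_2^T)=(\det g_1)(\det g_2)\det M$.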
Now to show that the subspaces obtained above are in different $\mathrm{\mathop{Aut}}(L)$-orbits, notice that the subspace $\langle (1,0,0,0),(0,1,0,0)\rangle$ is totally singular. On the other hand, the singular vectors of the subspace $\langle (1,0,0,1),(0,1,0,0)\rangle$ are the elements of the 1-space $\langle (0,1,0,0)\rangle$, and hence any two singular vectors are linearly dependent. Moreover, the singular vectors of the subspace $\langle (1,0,0,0),(0,0,0,1)\rangle$ are the elements of the 1-spaces $\langle (1,0,0,0)\rangle$ and $\langle (0,0,0,1)\rangle$, which shows that there is a pair of linearly independent singular vectors. Thus these three subspaces are in different $\mathrm{\mathop{Aut}}(L)$-orbits. Let us consider now two subspaces of the form $S_1=\langle (1,0,0,1),(0,\varepsilon,1,0)\rangle$ and $S_2=\langle (1,0,0,1),(0,\nu,1,0)\rangle$ and assume that they are in the same $\mathrm{\mathop{Aut}}(L)$-orbit; that is $S_2=AS_1$ with some $A\in\mathrm{\mathop{Aut}}(L)$. If $\text{char}\, \mathbb{F}\neq 2$, then the Gram determinants of $S_1$ and $S_2$ with respect to the given bases are $\varepsilon$ and $\nu$, respectively. Lemma~\ref{gramlemma}(i) gives that $\varepsilon\relstar\nu$. Conversely, assume that $\text{char}\, \mathbb{F}\neq 2$ and that $\varepsilon\relstar\nu$; that is $\nu=\varepsilon\alpha^2$ with some $\alpha\in\mathbb{F}^*$. Let $A$ be the diagonal automorphism of $L$ with $(\alpha,1,\alpha,\alpha^2)$ in the diagonal. Then $A\langle (1,0,0,1),(0,\varepsilon,1,0)\rangle=\langle (1,0,0,1),(0,\nu,1,0)\rangle$. Hence the subspaces $\langle (1,0,0,1),(0,\varepsilon,1,0)\rangle$ and $\langle (1,0,0,1),(0,\nu,1,0)\rangle$ are in the same $\mathrm{\mathop{Aut}}(L)$-orbit if and only if $\varepsilon\relstar\nu$. This settles the isomorphisms among the possible step-2 descendants of $L_{4,2}$ in the cases when $\text{char}\, \mathbb{F}\neq 2$. 
\par Suppose next that $\text{char}\, \mathbb{F}=2$ and let \[S_1=\langle (1,0,0,1),(0,\varepsilon_1,1,0)\rangle\quad\mbox{and}\quad S_2=\langle (1,0,0,1),(0,\varepsilon_2,1,0)\rangle,\] and assume that they are in the same $\mathrm{\mathop{Aut}}(L)$-orbit. Since $f$ is identically zero on $S_1$ and $S_2$ and, for $i=1,\ 2$, $Q(1,0,0,1)Q(0,\varepsilon_i,1,0)=\varepsilon_i$, we obtain from Lemma~\ref{gramlemma}(iii) that $\varepsilon_2=\alpha^2\varepsilon_1+\beta^2$ with some $\alpha\in\mathbb{F}^*$ and $\beta\in\mathbb{F}$. Suppose conversely that $\text{char}\, \mathbb{F}=2$ and that $\varepsilon_1\relstarplus\varepsilon_2$; that is $\varepsilon_2=\alpha^2\varepsilon_1+\beta^2$ with some $\alpha\in\mathbb{F}^*$ and $\beta\in\mathbb{F}$. Let $A$ be the automorphism of $L$ represented by the matrix \[\begin{pmatrix} \alpha & 0 & 0 & 0 \\ \beta & 1 & 0 & 0\\ 0 & 0 & \alpha & \alpha\beta\\ 0 & 0 & 0 & \alpha^2 \end{pmatrix}.\]
Then $AS_1=S_2$, which gives, when $\text{char}\, \mathbb{F}=2$, that $\langle (1,0,0,1),(0,\varepsilon_1,1,0)\rangle$ and $\langle (1,0,0,1),(0,\varepsilon_2,1,0)\rangle$ are in the same $\mathrm{\mathop{Aut}}(L)$-orbit if and only if $\varepsilon_1\relstarplus\varepsilon_2$. \par Suppose now that $S_1=\langle (1,0,0,1),(0,\nu_1,1,1)\rangle$ and $S_2=\langle (1,0,0,1),(0,\nu_2,1,1)\rangle$ and assume that $S_1$ and $S_2$ are in the same $\mathrm{\mathop{Aut}}(L)$-orbit; that is, there is some $A\in\mathrm{\mathop{Aut}}(L)$ such that $AS_1=S_2$. The restriction of $f$ on $S_1$ and on $S_2$ is non-singular, and the given bases of $S_1$ and $S_2$ are symplectic. Further, the Arf invariants of $S_1$ and $S_2$ with respect to these bases are $\nu_1$ and $\nu_2$, respectively. Thus Lemma~\ref{gramlemma}(iii) gives that $\nu_1+\nu_2\in\{\alpha^2+\alpha\ |\ \alpha\in\mathbb{F}\}$. Conversely, suppose that $\nu_1,\ \nu_2\in\mathbb{F}$ such that $x^2+x+\nu_1+\nu_2$ has a root $\alpha$ in $\mathbb{F}$. Let $A$ denote the automorphism represented by the matrix \[\begin{pmatrix} 1 & 0 & 0 & 0 \\ \alpha & 1 & 0 & 0\\ 0 & 0 & 1 & \alpha\\ 0 & 0 & 0 & 1 \end{pmatrix}.\] Then $AS_1=S_2$, and so the two subspaces $\langle(1,0,0,1),(0,\nu_1,1,1)\rangle$ and $\langle (1,0,0,1),(0,\nu_2,1,1)\rangle$ are in the same $\mathrm{\mathop{Aut}}(L)$-orbit if and only if the polynomial $x^2+x+\nu_1+\nu_2$ has a root in $\mathbb{F}$.
\section*{Acknowledgment}
The first and the third authors would like to acknowledge the support of the grants ISFL-1-143/BPD/2009 and PTDC/MAT/101993/2008, respectively, of the {\em Funda\c c\~ao para a Ci\^ encia e a Tecnologia} (Portugal). The third author was also supported by the Hungarian Scientific Research Fund (OTKA) grant~72845.
\noindent Serena Cical\`o and Csaba Schneider\\ Centro de \'Algebra de Universidade de Lisboa\\ Av.\ Professor Gama Pinto, 2, 1649-003 Lisboa, Portugal.\\ cicalo@science.unitn.it, csaba.schneider@gmail.com
\noindent Serena Cical\`o (current address)\\ Dipartimento di Matematica e Informatica\\ Via Ospedale, 72 - 09124 Cagliari, Italy\\ cicalo@science.unitn.it
\noindent Willem A.\ de Graaf\\ Dipartimento di Matematica\\ Via Sommarive 14 - 38050 Povo (Trento), Italy\\ degraaf@science.unitn.it
\end{document}
\begin{document}
\setcounter{page}{1}
\title{Complexity of syntactical tree fragments of Independence-Friendly logic\footnote{The present work has been developed under the Academy of Finland project 286991, ``Dependence and Independence in Logic: Foundations and Philosophical Significance'', and revised under the Academy of Finland project 316460, ``Semantics of causal and counterfactual dependence''.}}
\begin{abstract}
A dichotomy result of Sevenster (2014) completely classified the quantifier prefixes of regular Independence-Friendly (IF) logic according to the patterns of quantifier dependence they contain. On the one hand, prefixes that contain ``Henkin'' or ``signalling'' patterns were shown to characterize fragments of $IF$ logic that capture NP-complete problems; on the other hand, all the remaining prefixes were shown to be essentially first-order.
In the present paper we develop the machinery which is needed in order to extend the results of Sevenster to non-prenex, regular IF sentences. This involves shifting attention from quantifier prefixes to a (rather general) class of syntactical tree prefixes.
We partially classify the fragments of regular $IF$ logic that are thus determined by syntactical trees; in particular, a) we identify three tree prefixes that are neither signalling nor Henkin, and yet express NP-complete problems and other second-order concepts; and b) we give more general criteria for checking the first-orderness of an $IF$ sentence. \\
Keywords:
Independence-Friendly logic, tractability frontier, prefixes, syntactical trees, signalling, NP-complete problems.\\
MSC classification: 03C80, 03B60, 68Q19. \end{abstract}
\section{Introduction}
In formulas of first-order logic, an existential quantifier is implicitly dependent on all quantifiers that occur above it. For example, a sentence of the form $\forall x \exists y\psi(x,y)$ asserts that, for each value that can be picked for $x$, a value for $y$ can be chosen so that $\psi(x,y)$ is satisfied. In other words, the fact that $\exists y$ occurs in the scope of $\forall x$ determines the existence of a (Skolem) function $f$ such that $\psi(x,f(x))$ holds. Independence-Friendly (IF) logic is an extension of first-order logic which frees the notion of dependence between variables from the syntactical notion of scope dependence. This is obtained by enriching the syntax with a slashing device, that is, by allowing quantifiers of the form $(Qv/V)$, where $V$ is a finite set of variables. A quantifier $(\exists v/V)$ expresses the fact that $v$ is to be picked as a function of the variables of all quantifiers above $(\exists v/V)$, \emph{except} for those that are listed in $V$. $IF$ logic was introduced in \cite{HinSan1989} with the purpose of decomposing the so-called partially ordered quantifiers of Henkin (\cite{Hen1961}) into individual quantifiers. The simplest among the Henkin quantifiers is a prefix of $4$ classical quantifiers arranged as in the formula
$$\left(\begin{array}{ll} \forall x & \exists y\\ \forall z & \exists w \end{array}\right)\psi(x,y,z,w)$$
\noindent which is meant to assert that $y$ must be picked as a function of $x$ only, and $w$ as a function of $z$ only. The meaning of such a sentence is expressed by a Skolemization of the form $\exists f\exists g\forall x\forall z\psi(x,f(x),z,g(z))$. In the version of $IF$ logic that we consider here (which was introduced in \cite{Hod97}, and is sometimes called \emph{slash logic}) this Henkin sentence can be rendered in a number of different ways, for example by the sentence $\forall x\exists y\forall z(\exists w/\{x,y\})\psi(x,y,z,w)$. However, it was soon realized that slash logic can express patterns of dependence which differ from those that come from Henkin quantifiers. One example is the short signalling sequence in $$\forall x \exists y(\exists z/\{x\})\chi(x,y,z).$$ \noindent If, say, $\chi(x,y,z)$ is a quantifier-free formula, then this sentence is equivalent to a Skolem formula of the form $\exists f\exists g\forall x\forall z \chi(x,f(x),g(f(x)))$. One way to understand what happens in the evaluation of such a formula (over some structure $M$) is to imagine a game played by a Falsifier and a team of two Verifiers, who pick witnesses for $x,y,z$; the purpose of the Falsifier is to build an assignment which does not satisfy $\chi(x,y,z)$, while the Verifiers aim for the opposite goal. The game consists of three consecutive turns: first the Falsifier picks a value for $x$; the first Verifier then picks a value for $y$; finally, the second Verifier picks a value for $z$ without looking at the value that was chosen for $x$; if the chosen values satisfy $\chi$, then the Verifiers win. If the Verifiers have a winning strategy for this game, then the sentence is true in the structure under consideration. 
Notice that the second Verifier can look at what the first Verifier has chosen; the two might then agree to use the value chosen for $y$ in order to signal some information about the value of $x$ (which is visible to the first Verifier, but not to the second). Hence the name of \emph{signalling} for this kind of sequence of quantifiers. It was realized in \cite{CaiKry1999} that this kind of sentence can express second-order notions such as infinity over a poor vocabulary.\footnote{Actually, some suggestion of this kind already occurred in the literature on Henkin quantifiers, see \cite{End1970}.} Later, \cite{Sev2014} showed the same over finite structures: signalling sentences can express the NP-complete problem EXACT COVER BY 3-SETS. In \cite{Sev2014} one also finds a classification result: if we restrict attention to \emph{prenex} $IF$ sentences which are \emph{regular} (that is, variables cannot be requantified), then the quantifier prefixes which can express second-order concepts are exactly those that are Henkin or signalling (in a specific technical sense that will be reviewed later). If instead a prenex, regular $IF$ sentence has a quantifier prefix that is neither Henkin nor signalling, then it can be mechanically transformed into an equivalent first-order sentence.
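To make the Skolemized reading of the short signalling sequence concrete, here is a small illustrative sketch (the function names and the choice of $\chi$ are ours, not from the literature). It brute-forces $\exists f\exists g\forall x\,\chi(x,f(x),g(f(x)))$ over a two-element domain, and contrasts it with a variant in which the second Verifier receives no signal at all:

```python
from itertools import product

def skolem_check(domain, chi):
    """Brute-force  Ef Eg Ax chi(x, f(x), g(f(x))):  y = f(x) may depend
    on x, while z = g(y) sees only the signal y."""
    dom = list(domain)
    funcs = [dict(zip(dom, vals)) for vals in product(dom, repeat=len(dom))]
    return any(all(chi(x, f[x], g[f[x]]) for x in dom)
               for f in funcs for g in funcs)

def no_signal_check(domain, chi):
    """Same game, but z is a constant c: no information about x reaches it."""
    dom = list(domain)
    funcs = [dict(zip(dom, vals)) for vals in product(dom, repeat=len(dom))]
    return any(all(chi(x, f[x], c) for x in dom)
               for f in funcs for c in dom)

# chi(x,y,z) := z = x: z must "copy" x, which is only possible via the signal y
# (take f injective and g its inverse).
print(skolem_check({0, 1}, lambda x, y, z: z == x))    # True
print(no_signal_check({0, 1}, lambda x, y, z: z == x)) # False
```

The contrast shows exactly where the expressive power comes from: once the constraint $\chi$ forces $z$ to track $x$, a winning strategy exists only because $y$ can carry the needed information.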
The purpose of the present paper is to extend the results of \cite{Sev2014} beyond the boundaries of prenex $IF$ logic, by identifying syntactical structures (tree prefixes) which can capture second order concepts and others which cannot. We are led in this direction by a number of examples from the literature which point to the presence, in $IF$ logic, of interesting interactions between quantifiers \emph{and connectives}. To give an idea, we review an example from \cite{Jan2002} which illustrates the phenomenon of \emph{signalling by disjunction}. Consider the formula $$\forall x (\exists y/\{x\})x\neq y.$$ Here a value for $y$ must be chosen to be different from the value of $x$ without knowing the value of $x$; it should be clear that, if the structure under consideration has at least two distinct elements, no strategy makes this possible; thus the sentence is not true on such a structure. However, the sentence $$\forall x ( (\exists y/\{x\})x\neq y \lor (\exists y/\{x\})x\neq y )$$ is true in all structures with at least two elements. The reason can be again explained in game-theoretical terms. Here we have three Verifiers (one for the disjunction, and one for each existential quantifier) and one Falsifier, for the universal quantifier. First the Falsifier picks a value for $x$; then the first Verifier picks one of the two disjuncts; if the left disjunct is chosen, then the Verifier corresponding to the left occurrence of $(\exists y/\{x\})$ picks a value for $y$; if the witnesses picked for $x$ and $y$ are distinct, then the Verifiers win, and otherwise the Falsifier wins. In case instead the right disjunct is chosen, the game proceeds analogously, but the witness for $y$ is picked by the Verifier associated to the right occurrence of $(\exists y/\{x\})$. 
On structures with at least two elements, the Verifier team has the following winning strategy: fix two distinct elements $a,b$; choose the left disjunct if and only if $a$ was chosen for $x$; choose $b$ for $y$ in the left disjunct; choose $a$ for $y$ in the right disjunct (this ensures that distinct witnesses for $x$ and $y$ are picked in any play). The interpretation that \cite{Jan2002} gives of this phenomenon is that some binary information can be stored by the Verifier team by means of the choice of a disjunct, in analogy with signalling by means of the choice of a value for a quantifier. This parallelism led us to conjecture that some form of signalling by disjunction might be used to express second-order concepts. This is not the case for the syntactical tree of the example above, which we can write as $$\forall x ( (\exists y/\{x\})[\phantom{a}] \lor (\exists y/\{x\})[\phantom{a}]);$$ a careful use of equivalence rules of $IF$ logic transforms any sentence of this form (with the gap symbols $[\phantom{a}]$ replaced by quantifier-free formulas) into a first-order sentence. However, we will see that appropriate extensions of this basic pattern allow expressing NP-complete problems; we identified three such patterns, which appear under the names of GH2($\lor$), C1 and C2 patterns (see e.g. the table at the end of the paper). We must also mention that, recently (\cite{JaaBar2019}), a new second-order pattern has been identified which does not use disjunctions. For completeness, we will briefly describe it (under the name of GH3 pattern).
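The winning strategy just described can be checked mechanically on small domains. In the following sketch (our own encoding, not from \cite{Jan2002}) the Verifiers' choice of disjunct is a function of $x$, while the witness for $y$ in each disjunct is a constant, since it may not depend on $x$:

```python
from itertools import product

def disj_signalling_true(domain):
    """Ax ( (Ey/{x}) x!=y  v  (Ey/{x}) x!=y ): pick a disjunct for each x
    (this choice MAY depend on x), then one constant witness a (left) and
    one constant witness b (right)."""
    dom = list(domain)
    return any(all((x != a) if side[x] == 'L' else (x != b) for x in dom)
               for side in (dict(zip(dom, s))
                            for s in product('LR', repeat=len(dom)))
               for a in dom for b in dom)

def single_disjunct_true(domain):
    """Ax (Ey/{x}) x!=y : a single constant must differ from every x."""
    return any(all(x != a for x in domain) for a in domain)

print(disj_signalling_true({0, 1}))   # True: left iff x = 0, a = 1, b = 0
print(single_disjunct_true({0, 1}))   # False
```

The one bit stored in the choice of disjunct is exactly what separates the two sentences, mirroring the game-theoretical explanation above.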
We briefly explain why the NP-complete problems play an important role in our attempt at classification. The reason lies in the fact that $IF$ logic (as well as many other \emph{logics of imperfect information}, such as positive Henkin quantification \cite{BlaGur1986}, Dependence logic \cite{Vaa2007}, Inclusion logic \cite{Gal2012}, Independence logic \cite{GraVaa2013}) is expressively equivalent, at the level of sentences, with existential second-order logic (ESO). A classical descriptive complexity result of Fagin (\cite{Fag1974}) tells us that, on finite structures, ESO captures exactly the complexity class NP of decision problems which can be solved in polynomial time by a nondeterministic Turing machine. The link between logic and complexity is given, in this case, by the ``data complexity'' version of the model checking problem: given a fixed formula $\varphi$ expressed in some logical language, and a class $K$ of finite structures, the problem asks whether an input structure $M\in K$ satisfies $\varphi$ ($M\models\varphi$). The choice of reasonable encodings of input instances, and of the class $K$, allows reducing decision problems that may seem to be completely unrelated to logic to model checking problems. A decision problem is \emph{described} by a sentence $\varphi$ if there is a reasonable\footnote{The idea behind this notion of ``reasonable'' is a bit vague. Roughly, it means that the encoding transforms instances of a problem D, in one of its typical presentations, into finite structures without using more resources than are needed for solving the problem in the encoded form. For most of the discussion in this paper, this amounts to the requirement that the encoding takes polynomial time.} encoding of the instances of the problem into $K$, so that, for all $M\in K$, $M\models \varphi$ if and only if $M$ encodes a ``yes'' instance of the problem. In this sense, $IF$ logic can be seen as an alternative cartography of the NP complexity class. 
The NP-complete problems are then important for the purpose of classification of second-order fragments of IF logic, because it is well known that such problems are not expressible in first-order logic. If a fragment contains a description of an NP-complete problem, then the fragment is not translatable into first-order logic, and its corresponding model-checking problem is unfeasible.\footnote{This last assertion is conditional on the well-known open conjecture P$\neq$NP, and on Cobham's thesis -- that the polynomial time complexity class captures the notion of feasibility.} Such a fragment will be itself called NP-hard (or \emph{mighty}, in the terminology of \cite{BlaGur1986}). Descriptions of complete problems for lower complexity classes such as L, NL, P would serve the purpose of identifying second-order fragments equally well; for example, recent work on the classification of prefixes of relational ESO (\cite{GotKolSch2000},\cite{Tan2015}) led to the discovery of fragments which describe L- or NL-complete problems, but no problems of higher complexity. However, the classification result of \cite{Sev2014} shows that quantifier prefix fragments of regular $IF$ logic are either NP-hard or in FO (the class of first-order definable problems); in this sense, it was called a \emph{dichotomy result}. The results of our paper, although they do not reach a full classification, seem to confirm that also the tree prefixes fall into this dichotomy.\footnote{This statement concerns those tree prefixes that generalize the notion of quantifier prefix; we will see that a less strict notion of tree prefix allows capturing an NL-complete fragment.} (On the other hand, a recent paper, \cite{BarHelRon2018}, proposed a candidate for an \emph{ir}regular prefix that is not first-order but plausibly stays within the complexity class L.)
The kind of tree prefixes that we are trying to classify are those that are \emph{positive} and \emph{initial}, by which we mean that they contain no occurrences of negation symbols nor of atomic formulas; this seems to be the reasonable analogue of a quantifier prefix, at least in the context of $IF$ logic. To give a concrete example, we will study the fragment of $IF$ sentences of the special form $\forall x(\forall y(\exists u/\{x\})\epsilon_1(x,y,u) \, \, \lor \, \, \forall z(\exists v/\{x\})\epsilon_2(x,z,v))$, where $\epsilon_1(x,y,u)$ and $\epsilon_2(x,z,v)$ are quantifier-free. We want to point out that the study of this fragment is not easily reducible to known results\footnote{The only classification results for \emph{functional} ESO that we are aware of are those of \cite{Gra1990}, which show that the smallest non-first order prefix of functional ESO, $\exists f\forall x$, already suffices to capture NP-complete problems.} on ESO: the Skolemization procedure transforms sentences of this form into equivalent functional ESO sentences of the (not very simple) form $\exists f\exists g\forall x\forall y\forall z(\epsilon_1(x,y,f(y)/u) \lor \epsilon_2(x,z,g(z)/v))$. Sentences of this form do not fully cover the fragment of ESO corresponding to the quantifier prefix $\exists f\exists g\forall x\forall y\forall z$, because not all quantifier-free formulas $\epsilon(x,y,z)$ are equivalent to quantifier-free formulas of the form $\epsilon_1(x,y,f(y)/u) \lor \epsilon_2(x,z,g(z)/v)$ (i.e., formulas which have $\lor$ as their most external operator; in which each disjunct is a two-variable formula; in which all occurrences of $f$ are restricted to be applied to $y$, and all occurrences of $g$ are applied to $z$). Thus, we will either have to conjure new descriptions of NP-complete problems, or instead make use of the semantics and the inferential rules for IF logic in order to show that the given fragments are first-order.
Summarizing, in the present paper we give a partial classification of the complexity of the fragments of regular $IF$ logic which are induced by tree prefixes; some of these fragments are shown to contain descriptions of NP-complete problems, while others are shown to express only first-order concepts (more precisely, each sentence of such a fragment is shown to be equivalent to a first-order sentence).
In section \ref{SEM} we briefly present $IF$ logic. Section \ref{EQRULES} presents some notions of equivalence of $IF$ sentences and formulas and reviews several equivalence rules of $IF$ logic that are used throughout the paper. An earlier draft (\cite{Bar2016}) made use of syntactical manipulations of trees rather than formulas; the difficulties of this alternative approach are succinctly described in appendix \ref{APPMANTREE}. Section \ref{EQUIVTREESSEC1} introduces syntactical trees and related notions; section \ref{COMPLEXITY1} isolates a notion of complexity for tree prefixes. In section \ref{SECROUGH}, we extend the notions of Henkin and signalling prefixes to the case of trees; then two new significant classes of tree prefixes are introduced: the \emph{generalized Henkin} and the \emph{coordinated} ones. It is then shown that the search for NP-hard prefixes can be narrowed down to the Henkin, signalling, generalized Henkin and coordinated classes; instead, sentences that have as prefix one of the remaining trees (called \emph{modest} trees) can be mechanically transformed into first-order sentences. In section \ref{HIGHCOMP} we generalize Sevenster's extension lemma, showing that taking extensions of regular syntactical trees preserves properties such as NL-, P- and NP-hardness; and we use it to show that all trees that contain Henkin or signalling patterns are NP-complete. Section \ref{GHTREES} divides the generalized Henkin fragment into four subclasses; of these, one is shown to contain only NP-hard prefixes, which can express the SAT problem; for the other three classes, we give partial results, showing that many of the trees they contain are in FO. We also account for the recent discovery (\cite{JaaBar2019}) of a new NP-hard tree. Section \ref{CTREES} considers a first kind of coordinated trees, which are classified into three subclasses, all shown to be NP-complete (the first two define SAT, and the third one the SET SPLITTING problem). 
We also show that the trees in the third class can express 2-COLORABILITY (a logspace, non first-order problem). Section \ref{CTREES2} briefly takes into account the remaining coordinated trees (of ``second kind''), showing that a few of them are in FO. Two difficult proofs are postponed to appendices \ref{APPMODEST} and \ref{APPEXTLEM} for the sake of readability.
\section{$IF$ logic} \label{SEM}
We present here the syntax of $IF$ logic in the form which is also sometimes called \emph{slash logic}. The reader can consult \cite{ManSanSev2011} for further details on this language. We assume a countable set of (individual) variables. Signatures and terms are defined as for first-order logic. An \textbf{$IF$ formula} is an expression of one of the following forms
$$t_1 = t_2 \ | \ t_1 \neq t_2 \ | \ R(t_1,\dots,t_n) \ | \ \neg R(t_1,\dots,t_n) \ | \ \psi \land \chi \ | \ \psi \lor \chi \ | \ (\forall v/V)\psi \ | \ (\exists v/V)\psi$$ where $t_1,\dots,t_n$ are terms, $R$ is an $n$-ary relation symbol, $v$ a variable, $V$ a finite set of variables (called \emph{slash set}), $\psi$ and $\chi$ $IF$ formulas.\footnote{Variants of this language allow a so-called \emph{dual negation} to occur in front of any (sub)formula. For most purposes, however, this extended language offers nothing new with respect to ours, in which formulas are negation normal -- only atomic formulas can be negated. See \cite{ManSanSev2011} for a discussion.} Formulas of the forms $t_1 = t_2, t_1 \neq t_2, R(t_1,\dots,t_n), \neg R(t_1,\dots,t_n)$ are called, as usual, \textbf{literals}.
The set of free variables of a given $IF$ formula is defined inductively as follows: \begin{itemize} \item For a literal $\alpha$, $FV(\alpha)$ is the set of variables occurring in $\alpha$ \item $FV(\psi\land\chi) = FV(\psi\lor\chi) = FV(\psi)\cup FV(\chi)$ \item $FV((\forall v/V)\psi) =FV((\exists v/V)\psi) =(FV(\psi)\setminus \{v\})\cup V$ \end{itemize} Thus, also occurrences of variables in slash sets may be counted as free.
The set of bound variables of an $IF$ formula $\psi$, denoted as $Bound(\psi)$, is as usual the set of variables that occur quantified in $\psi$.
If $FV(\psi)=\emptyset$, then $\psi$ is said to be an \textbf{$IF$ sentence}; otherwise, it is an \textbf{open formula}. If $Bound(\psi)=\emptyset$, then $\psi$ is said to be \textbf{quantifier-free}.
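As an illustration, the inductive clauses for $FV$ can be transcribed directly into code; the tuple encoding of formulas used below is our own convention, not part of the official syntax:

```python
def fv(phi):
    """Free variables of an IF formula, following the inductive definition:
    literals contribute their variables; a quantifier binds v, but the
    variables of its slash set V count as free."""
    op = phi[0]
    if op == 'lit':                        # ('lit', set_of_variables)
        return set(phi[1])
    if op in ('and', 'or'):                # (op, psi, chi)
        return fv(phi[1]) | fv(phi[2])
    # ('forall', v, V, psi) or ('exists', v, V, psi)
    return (fv(phi[3]) - {phi[1]}) | set(phi[2])

# (Ey/{x}) x != y  has free variables {x}: y is bound, but x occurs
# both in the literal and in the slash set.
print(fv(('exists', 'y', {'x'}, ('lit', {'x', 'y'}))))
# Prefixing Ax yields a sentence (empty set of free variables).
print(fv(('forall', 'x', set(), ('exists', 'y', {'x'}, ('lit', {'x', 'y'})))))
```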
For brevity, we will sometimes write quantifiers as $(Qv/u_1,\dots,u_n)$ instead of $(Qv/\{u_1,\dots,u_n\})$. A quantifier with empty slash set, say $(Qv/\emptyset)$, will be simply written $Qv$. A conservativity result (theorem \ref{TEOCONS} below) supports the identification of quantifiers with empty slash set with first-order quantifiers.
In the introduction, we made use of games in order to give an idea of the meaning of $IF$ sentences. For technical purposes, it will be convenient to use in the rest of the paper a different semantics, nowadays called \emph{team semantics} (\cite{Hod97}, \cite{Vaa2007}), which is in accordance with the game-theoretical account over sentences, but also assigns a meaning to open formulas.
In team semantics, formulas are interpreted over sets of assignments of a common variable domain (\emph{teams}), and thus their ``meanings'' are sets of teams.\footnote{The word ``team'' as used in this context has no relation to the teams of players from the game-theoretical interpretation.} Indeed, intuitively the notion of independence has no meaning over single assignments, and this intuition has been confirmed by a combinatorial argument (\cite{CamHod2001}). We will write $M,X\models\varphi$ to say that the formula $\varphi$ is satisfied by the team $X$ on a (first-order) structure $M$.
\begin{conv} By a structure $M$ we mean a pair $(dom(M),I_M)$, where $dom(M)$ is a set and $I_M$ is a function that maps each element of the signature into its interpretation (defined as for first-order logic). As is common, we write $M$ for $dom(M)$ when there is no risk of ambiguity. \end{conv}
\begin{df} A \textbf{team} $X$ on a structure $M$ is a set of assignments such that, for all $s,s'\in X$, $dom(s)$ is a finite set of variables, and $dom(s) = dom(s') = : dom(X)$.
A team $X$ is \textbf{suitable} for a formula $\psi$ in case $FV(\psi)\subseteq dom(X)$.
A structure $M$ is \textbf{suitable} for $\psi$ if the signature of $M$ contains all the nonlogical symbols of $\psi$. \end{df}
\begin{df}
Given a team $X$ over a structure $M$ and a variable $v$, the \textbf{duplicated team} $X[M/v]$ is defined as the team $\{s(a/v) \ | \ s\in X,a\in M\}$.
Given a team $X$ over a structure $M$, a variable $v$ and a function $F: X \rightarrow M$, the \textbf{supplemented team} $X[F/v]$ is defined as the team $\{s(F(s)/v) \ | \ s\in X\}$. \end{df}
\begin{df} Given two assignments $s,s'$ with the same domain, and a set of variables $V$, we say that $s$ and $s'$ are \textbf{$V$-equivalent}, and we write $s\sim_V s'$, if $s(x) = s'(x)$ for all variables $x\in dom(s)\setminus V$.\\ Given a team $X$, a structure $M$ and a set $V$ of variables, a function $F:X\rightarrow M$ is \textbf{V-uniform} if $s\sim_V s'$ implies $F(s)=F(s')$ for all $s,s'\in X$. \end{df}
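These operations are straightforward to implement; the following sketch (our own representation: assignments as Python dictionaries) may help fix the definitions:

```python
def duplicate(X, M, v):
    """The duplicated team X[M/v]: every assignment is extended with
    every value of the domain."""
    return [{**s, v: a} for s in X for a in M]

def supplement(X, F, v):
    """The supplemented team X[F/v]: each assignment s is extended with F(s)."""
    return [{**s, v: F(s)} for s in X]

def v_equivalent(s, t, V):
    """s ~_V t : s and t agree on all variables outside of V."""
    return all(s[x] == t[x] for x in s if x not in V)

def is_v_uniform(F, X, V):
    """F is V-uniform if it takes equal values on V-equivalent assignments."""
    return all(F(s) == F(t) for s in X for t in X if v_equivalent(s, t, V))

X = [{'x': 0}, {'x': 1}]
print(len(duplicate(X, [0, 1], 'y')))   # 4: two assignments times two values
F = lambda s: 1 - s['x']                # F depends on x ...
print(is_v_uniform(F, X, {'x'}))        # ... so it is not {x}-uniform: False
print(is_v_uniform(F, X, set()))        # but trivially {}-uniform: True
```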
The notion of satisfaction by a team is defined by the following compositional clauses, which we present in the style of \cite{Vaa2007}. We assume familiarity with Tarskian semantics.
\begin{df} We say that a suitable team $X$ satisfies an $IF$ formula $\varphi$ over a structure $M$, and we write $M,X\models \varphi$, according to the following inductive clauses: \begin{itemize} \item For $\alpha$ literal, $M,X\models \alpha$ if $M,s\models \alpha$ in the Tarskian sense for every $s\in X$. \item $M,X\models \psi\land\chi$ if $M,X\models\psi$ and $M,X\models\chi$. \item $M,X\models \psi\lor\chi$ if there are $Y,Z\subseteq X$ such that $Y\cup Z=X$, $M,Y\models \psi$, and $M,Z\models \chi$. \item $M,X\models (\forall v/V)\psi$ if $M,X[M/v]\models \psi$. \item $M,X\models (\exists v/V)\psi$ if $M,X[F/v]\models \psi$ for some $V$-uniform function $F:X\rightarrow M$. \end{itemize} \end{df}
\begin{df} An $IF$ sentence $\varphi$ is said to be \textbf{true} in a structure $M$, and we write $M\models\varphi$, if $M,\{\emptyset\}\models\varphi$.\footnote{Here $\{\emptyset\}$ denotes the singleton team containing the empty assignment.} \end{df}
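For small finite structures, the satisfaction relation just defined can be decided by exhaustive search. The sketch below (our own encoding: formulas as nested tuples, teams as lists of dictionaries; note that, as in the clause above, the slash set of a universal quantifier plays no role) enumerates covers $Y\cup Z=X$ for disjunctions and $V$-uniform functions for existentials:

```python
from itertools import product

def classes(X, V):
    """Indices of team X grouped into ~V-equivalence classes."""
    cls = {}
    for i, s in enumerate(X):
        key = tuple(sorted((x, a) for x, a in s.items() if x not in V))
        cls.setdefault(key, []).append(i)
    return list(cls.values())

def sat(M, X, phi):
    """M, X |= phi by brute force (finite domain M)."""
    op = phi[0]
    if op == 'lit':                    # ('lit', test) with test: assignment -> bool
        return all(phi[1](s) for s in X)
    if op == 'and':
        return sat(M, X, phi[1]) and sat(M, X, phi[2])
    if op == 'or':                     # try every cover Y u Z = X
        return any(sat(M, [s for s, c in zip(X, w) if c in 'LB'], phi[1]) and
                   sat(M, [s for s, c in zip(X, w) if c in 'RB'], phi[2])
                   for w in product('LRB', repeat=len(X)))
    v, V, psi = phi[1], phi[2], phi[3]
    if op == 'forall':                 # the slash set is irrelevant here
        return sat(M, [{**s, v: a} for s in X for a in M], psi)
    # 'exists': a V-uniform F amounts to one domain value per ~V-class
    for vals in product(M, repeat=len(classes(X, V))):
        F = {i: a for c, a in zip(classes(X, V), vals) for i in c}
        if sat(M, [{**s, v: F[i]} for i, s in enumerate(X)], psi):
            return True
    return False

def true_in(M, phi):
    return sat(M, [{}], phi)          # truth = satisfaction by {empty assignment}

# The two sentences from the introduction, over a two-element structure:
neq = ('lit', lambda s: s['x'] != s['y'])
single = ('forall', 'x', set(), ('exists', 'y', {'x'}, neq))
double = ('forall', 'x', set(),
          ('or', ('exists', 'y', {'x'}, neq), ('exists', 'y', {'x'}, neq)))
print(true_in([0, 1], single))   # False
print(true_in([0, 1], double))   # True: signalling by disjunction
```

This is of course hopelessly inefficient (exponential in the team size), but it is a faithful transcription of the clauses and reproduces the signalling-by-disjunction phenomenon discussed in the introduction.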
\noindent These definitions conservatively extend the usual semantics of first-order logic, in the following sense:
\begin{prop}\label{TEOCONS} 1) (\cite{CaiDecJan2009}, Theorem 4.11) Let $\varphi$ be an $IF$ sentence which is syntactically first-order (i.e., all of its slash sets are empty). Then $M\models\varphi$ according to team semantics if and only if $M\models\varphi$ according to Tarskian semantics.
2) (\cite{CaiDecJan2009}, Lemma 4.10) Let $\psi$ be an $IF$ formula which is syntactically first-order. Then $M,X\models\psi$ according to team semantics if and only if, for all $s\in X$, $M,s\models\psi$ according to Tarskian semantics. \end{prop}
The focus of the paper will be on the \emph{regular} fragment of $IF$ logic, in which requantification is forbidden:
\begin{df} An $IF$ formula $\psi$ is said to be \textbf{regular} if: \begin{enumerate} \item Variables are not requantified, i.e., if a quantifier $(Qv/V)$ occurs in $\psi$, then no other quantifier of the form $(Q'v/V')$ occurs in the scope of $(Qv/V)$. \item No variable occurs both free and bound in $\psi$. \end{enumerate} \end{df}
\noindent Notice that condition 2. is automatically satisfied by sentences.
\section{Equivalence of sentences and formulas} \label{EQRULES}
\begin{df} Two $IF$ sentences are (truth-)equivalent if they are true in the same structures (i.e., $\varphi\equiv\chi$ if for all structures $M$, $M\models\varphi\Leftrightarrow M\models\chi$). \end{df}
We will need a well known fact about the expressivity of $IF$ sentences:
\begin{prop} \label{FAGIN} (\cite{ManSanSev2011}, Theorems 6.10, 6.16) On the sentence level, $IF$ logic is equiexpressive with existential second order logic. Thus, by Fagin's theorem (\cite{Fag1974}), the set of $IF$ sentences characterizes the complexity class NP. \end{prop}
A rich repertoire of equivalence rules for $IF$ formulas was developed (mainly) in \cite{Dec2005}, \cite{CaiDecJan2009}, \cite{Man2009}, \cite{ManSanSev2011}, \cite{Bar2013}, \cite{Sev2014}; we list here those rules that will be needed in the following. These rules act on formulas, so our notion of truth-equivalence of sentences does not suffice to describe them. Many alternatives have been considered in the literature for what regards equivalence of $IF$ formulas;
the simplest option would be to consider two formulas $\psi,\theta$ equivalent if in all structures they are satisfied by the same teams, \emph{provided that we only consider teams whose variable domain contains $FV(\psi)\cup FV(\theta)$}. However, many important equivalence rules of $IF$ logic are context-dependent: they hold only if some kinds of restrictions are imposed on the contexts in which the formulas may appear; that is, these rules only hold if the formulas do not occur in the scope of certain quantifiers. Thus, it is in many occasions more convenient to consider notions of equivalence relativized to contexts.
We do it here in the style of Caicedo, Dechesne and Janssen (\cite{CaiDecJan2009}), specifying which variables should not appear in the context.\footnote{Actually, our definitions will slightly differ from those of \cite{CaiDecJan2009}. Our definitions are meant to extend truth-equivalence to open formulas, whereas \cite{CaiDecJan2009} aimed at extending a stricter notion called ``strong equivalence''.}
\begin{df} Let $\psi$ be an $IF$ formula, $Z$ a finite set of variables. Then $\psi$ is \textbf{$Z$-closed} if $FV(\psi)\cap Z = \emptyset$. \end{df}
\begin{df} Let $\psi,\chi$ be $IF$ formulas, let $Z$ be a finite set of variables. We say that $\psi$ and $\chi$ are \textbf{$Z$-equivalent}, and we write $\psi\equiv_Z\chi$, if they are $Z$-closed and, furthermore, $M,X\models \psi \Leftrightarrow M,X\models \chi$ for all structures $M$ and for all teams $X$ that are suitable for $\psi$ and $\chi$ and such that $dom(X)\cap Z = \emptyset$.
If we have an explicit listing $\{z_1,z_2,\dots,z_n\}$ of $Z$, we can also write, for brevity, $\psi\equiv_{z_1z_2\dots z_n}\chi$. \end{df}
So, the subscripts to the equivalence symbols mean that the equivalence only holds for those teams whose domain does not contain any of the subscripted variables; and also, in order to avoid triviality, the subscripted variables must not occur free in the formulas under consideration. This notion of equivalence of formulas works well because of the following two facts:
\begin{prop} \label{TEOEQSENT}
(\cite{CaiDecJan2009}, remarks on page 22) If $\varphi$ and $\chi$ are $IF$ sentences, then, for any finite set $Z$ of variables, $\varphi\equiv_Z \chi$ if and only if $\varphi\equiv \chi$.
\end{prop}
\begin{prop} \label{TEOSUBEQ} (\cite{CaiDecJan2009}, Theorem 6.14)
If $\varphi,\psi,\psi'$ are $IF$ formulas, $Z$ a finite set of variables, $\varphi'$ is obtained from $\varphi$ by replacing a subformula occurrence of $\psi$ with $\psi'$, and $\psi\equiv_Z\psi'$, then $\varphi\equiv_Z\varphi'$. \end{prop}
We can now present the equivalence rules that we shall need. We just point out that many of the context restrictions of each rule can in practice be ignored when applying the rules within a regular formula.
\begin{prop}[Renaming]\label{VARIANT}\footnote{A note of warning. This proposition is nothing else than Theorem 6.12 of \cite{CaiDecJan2009}. If the reader compares our formulation with the rule stated in that paper, (s)he might think that we have forgotten a clause; that we should have specified that $u$ must not be in $U$. Yet, this is already implied by $uv$-closedness: the formulation in \cite{CaiDecJan2009} was redundant.} Suppose $u$ is not bound in $\psi$. If $v$ does not occur in $(Qu/U)\psi$, then \[ (Qu/U)\psi \equiv_{uv} (Qv/U) Subst(\psi,u,v). \] where $Subst(\psi,u,v)$ is the formula obtained by replacing, in $\psi$, all free occurrences of $u$ with $v$. \end{prop}
When extracting a quantifier $(Qu/U)$, say, from a left disjunct (resp. conjunct), the variable $u$ must in general be added to the slash sets of the right disjunct in order to prevent it from being used as a source of signals -- which could be used to circumvent the restrictions imposed by slash sets (see \cite{CaiDecJan2009}). However, using the form of extraction rule that we review below, we can avoid adding $u$ to \emph{empty} slash sets (i.e., we can preserve the first-order quantifiers).
\begin{df} \label{VERTICALSLASH}
Given an IF formula $\psi$, we define $\psi|_v$ to be the formula obtained by adding the variable $v$ to all \emph{nonempty} slash sets of $\psi$; and similarly for syntactical trees. \end{df}
\begin{prop}[Strong quantifier extraction, a special case of Theorem 8.3 of \cite{CaiDecJan2009}]\label{STRONGEXTRACTION} If $u$ occurs neither in $\psi$ nor in $U$, then: \[
(Qu/U)\varphi \circ \psi \equiv_u (Qu/U)(\varphi \circ \psi|_u) \] where $\circ$ is either $\land$ or $\lor$. \end{prop}
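As an illustration (our own example, not from \cite{CaiDecJan2009}), extracting $\forall u$ from the left conjunct adds $u$ only to the \emph{nonempty} slash sets on the right, so plain first-order quantifiers are preserved:
\[ \forall u\,\varphi \,\land\, \exists w(\exists v/\{x\})\chi \;\equiv_u\; \forall u\,\big(\varphi \,\land\, \exists w(\exists v/\{x,u\})\chi\big), \]
provided $u$ occurs neither in the right conjunct nor in its slash sets: the quantifier $\exists w$ keeps its empty slash set, while $(\exists v/\{x\})$ becomes $(\exists v/\{x,u\})$.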
We list two more useful equivalence rules, distribution of universal quantifiers (see \cite{ManSanSev2011}, 5.23) and quantifier swapping (\cite{ManSanSev2011}):
\begin{prop}[Distribution of universal quantifiers over conjunctions] \label{DISTRIBUTION} For all $\varphi,\psi$ $IF$ formulas: \[
\hspace{10pt} \forall u(\varphi \land \psi) \equiv_u \forall u\varphi \land \forall u\psi. \]
\end{prop}
\begin{prop}[Quantifier swapping]\label{SWAP} Let $Q,Q'$ be quantifiers, $\psi$ an $IF$ formula. Then: \[
(Qu/U)(Q'v/V\cup\{u\})\psi \equiv_{uv} (Q'v/V)(Qu/U\cup\{v\})\psi. \] \end{prop}
\noindent Observe that adjacent quantifiers of the same kind are not always allowed to commute: for example, notice that in the left member of the above formula we require $u$ to occur in the slash set of $v$. It is also worth noting that the usual first-order rule for swapping quantifiers of the same type is not a special case of this scheme; indeed, $IF$ logic has a second rule which allows swapping certain quantifiers of the same type (\cite{CaiDecJan2009}, Theorem 13.3); we will only make use of the following very special case:
\begin{prop}[Swapping first-order universal quantifiers]\label{UNISWAP} For any $IF$ formula $\psi$: \[ \forall u \forall v\psi \equiv \forall v \forall u\psi. \] \end{prop}
\noindent Finally, we look at two rules which are specific of $IF$ quantification.\footnote{The previous rules hold under a more restrictive notion called \emph{strong equivalence}, under which $IF$ logic is treated as a three-valued logic. The two remaining rules, instead, owe their validity to the fact that we are only analyzing the truth of sentences, not their falsity.}
\begin{prop}[Slash sets of universal quantifiers are irrelevant] \label{UNIDEP} For any $IF$ formula $\psi$, \[ (\forall u/U)\psi \equiv \forall u \psi. \] \end{prop}
\begin{prop}[Purely existential slash sets are irrelevant] \label{DEPEX} Suppose two regular IF sentences $\varphi,\varphi'$ differ only for one quantifier, which is $(\exists v/V)$ in $\varphi$ and $\exists v$ in $\varphi'$; suppose furthermore that all variables in $V$ are existentially quantified. Then $\varphi\equiv\varphi'$. \end{prop}
\section{Syntactical trees: basic definitions} \label{EQUIVTREESSEC1}
We define here the class of syntactical trees which is of our interest -- we are seeking the simplest possible generalization of what a prefix is if we do not restrict attention to prenex sentences. This requires including connectives in the prefixes as well, and taking into account the binary ramifications they induce in the structure of formulas. This class of trees (the \emph{positive initial} trees) has already been introduced elsewhere (\cite{Bar2013}), but here we will require some more precision in the formal details. For technical ease, in our trees we will allow occurrences of the gap symbol $[\phantom{a}]$. Each gap symbol is a marker for a node to which (the tree of) some $IF$ formula might potentially be attached. By a \textbf{tree} here we mean a finite partially ordered set $(T,\preceq_T)$ with a minimum element (the \textbf{root}) and such that, for each $t\in T$, the set of predecessors of $t$ is linearly ordered by $\preceq_T$.
\begin{df} A \textbf{syntactical tree} is a (finite) tree whose nodes are occurrences of atomic formulas, negation, conjunction, disjunction, quantifiers (with their slash sets), and the gap symbol $[\phantom{a}]$, and which respects the following constraints: 1) atomic formulas are leaves (i.e., they have no successors) \\ 2) gap symbols are leaves \\ 3) each negation has exactly one successor \\ 4) each binary connective has exactly two successors \\ 5) each quantifier has exactly one successor. \end{df} It should be clear in what sense we can associate to each $IF$ sentence \emph{its} syntactical tree (which is, of course, a tree without gaps), and in the following we will always identify a formula with its tree. Here are some examples of trees that are not the syntactical tree of any formula, since they contain gaps:
\Tree [.$\forall x$ [.$\exists y$ [.$[\phantom{a}]$ ] ] ] \Tree [.$\forall x$ [.$\lor$ [.$[\phantom{a}]$ ] [.$[\phantom{a}]$ ] ] ] \Tree [.$(\forall x/\{z\})$ [.$\lor$ $A(x)$ [.$\neg$ [.$[\phantom{a}]$ ] ] ] ] \Tree [.$\forall x$ [.$\lor$ [.$[\phantom{a}]$ ] [.$\land$ $B(y)$ $C(z)$ ] ] ] \Tree [.$\lor$ [.$\exists x$ $A(x)$ ] [.$\exists y$ [.$[\phantom{a}]$ ] ] ]
We will use some terminology which is standard for trees:
\begin{df}\label{DEFCHAINBRANCH} If $(T,\preceq_T)$ is a syntactical tree, a \textbf{chain} of $(T,\preceq_T)$ is a pair $(S, \preceq_S)$, where $S\subseteq T$, $\preceq_S$ is the restriction of $ \preceq_T$ to $S$, and $\preceq_S$ linearly orders $S$.
A \textbf{branch} of $(T,\preceq_T)$ is a maximal chain of $(T,\preceq_T)$.\footnote{I.e., a chain $(S, \preceq_S)$ such that, for each $t\in T\setminus S$, the set $S\cup\{t\}$, together with the restriction of $\preceq_T$ to $S\cup\{t\}$, is not a chain of $(T,\preceq_T)$.} \end{df}
The notion of a quantifier prefix is generalized by the following class of syntactical trees:
\begin{df} A \textbf{positive initial tree}, or \textbf{tree prefix}, is a syntactical tree which contains no occurrence of atomic formulas nor of negation. \end{df} Said otherwise, a positive initial tree can be obtained from the syntactical tree of some negation normal $IF$ formula by removing from it all nodes that correspond to literals.
The word \emph{positive} refers to the fact that we do not allow negation symbols to occur in the tree, while the word \emph{initial} refers to the fact that none of the branches of the tree end with an atomic formula (among the trees in the previous picture, only the first and second are positive and initial; the fourth and fifth are positive but not initial). This generalizes the fact that a quantifier prefix is an initial segment of the syntactical tree of a formula; the obvious generalization of ``initial segment'' for a tree is the notion of \emph{down set}.
\begin{df} A \textbf{down set} $Y$ of a tree $T$ is $Y\subseteq T$ such that \[ \forall y\in Y\forall t\in T(t\preceq_T y \rightarrow t\in Y). \]
\end{df} \begin{df} Let $T$ be a syntactical tree, and $T^-$ the tree obtained by removing the gap nodes from $T$. An IF formula $\varphi$ \textbf{begins with $T$} if $T^-$ is a down set of the syntactical tree of $\varphi$. \end{df} \begin{df} Given an $IF$ formula $\varphi$, we define \textbf{the tree prefix of $\varphi$}, and denote it as $PTr(\varphi)$, to be the largest positive initial tree $T$ such that $\varphi$ begins with $T$. \end{df}
For example, we can say that the formula $\varphi = \forall x(A(x)\lor \neg B(x))$ begins with the tree
\Tree [.$\forall x$ [.$\lor$ $[$\phantom{a}$]$ [.$\neg$ [.$B(x)$ ] ] ] ] \\
\noindent even though this tree is not positive initial (it contains a negation, and also an atomic formula). Instead, $PTr(\varphi)$ is
\Tree [.$\forall x$ [.$\lor$ $[$\phantom{a}$]$ [.$[$\phantom{a}$]$ ] ] ]
\noindent We can also write these kinds of trees in linear notation; e.g., the tree above is $\forall x([\phantom{a}]\lor[\phantom{a}])$.
The notion of \emph{subtree} is in a sense dual to the notion of a prefix.
\begin{df} An \textbf{up set} $Y$ of a tree $T$ is $Y\subseteq T$ such that \[ \forall y\in Y\forall t\in T(y\preceq_T t \rightarrow t\in Y). \] \end{df}
\begin{df} A \textbf{subtree} $S$ of a tree $T$ is a suborder of $T$ which 1) is an up set of $T$, and 2) has a root (i.e. a minimum according to $\preceq_T$). \end{df}
So, a subtree of $T$ is obtained whenever we choose a node $t$ of $T$ and we pick all nodes that follow $t$ in the ordering $\preceq_T$. We might also say that such a subtree is made of $t$ and of all the nodes of $T$ which are in the scope of $t$.
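Continuing the toy child-to-parent representation (an assumption for illustration, not the paper's notation), the down-set condition and the subtree construction can be checked directly:

```python
def is_down_set(parent, Y):
    """Y is a down set iff it is closed under taking ancestors."""
    def anc(y):
        while y is not None:
            yield y
            y = parent[y]
    return all(t in Y for y in Y for t in anc(y))

def subtree(parent, t):
    """The subtree rooted at t: the up set generated by t."""
    children = {}
    for c, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(c)
    out, stack = set(), [t]
    while stack:
        u = stack.pop()
        out.add(u)
        stack.extend(children.get(u, []))
    return out

# Toy tree for forall x ([] or (B(y) and C(z))), hypothetical ASCII labels:
tree = {"Ax": None, "or": "Ax", "gap": "or", "and": "or", "By": "and", "Cz": "and"}
```

For instance, `{"Ax", "or"}` is a down set, `{"or"}` is not (its ancestor `"Ax"` is missing), and `subtree(tree, "and")` consists of `"and"` together with all nodes in its scope.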
\begin{notation} Given a formula $\varphi$ (resp. a quantifier prefix $\vec Q$, a syntactical tree $T$), we denote the relation of scope between pairs of logical operators as $\prec_\varphi$ (resp. $\prec_{\vec Q}$, $\prec_T$). So, for example, $\forall x \prec_\varphi \exists y$ means that (a specific occurrence of) $\exists y$ occurs within the scope of (a specific occurrence of) $\forall x$ in formula $\varphi$. In the case of trees, $\prec_T$ is just the strict partial order which is associated to the ordering $\preceq_T$ of the tree. \end{notation}
Two quantifier prefixes $R$, $S$ can obviously always be concatenated in order to obtain a longer prefix $RS$; this notation can sometimes be extended to trees:
\begin{notation} Whenever $R$ is a finite linearly ordered syntactical tree whose last element is a gap, and $S$ is a tree, we can unambiguously denote as $RS$ the concatenation of $R$ and $S$, that is, the tree obtained by removing the last (gap) node of $R$ and replacing it with the tree $S$. \end{notation}
\begin{df}
An \textbf{incomplete branch} of $T$ is a branch whose last element is a gap $[\phantom{a}]$. The set of the incomplete branches of $T$ will be denoted as $IBranch(T)$. \end{df}
\noindent So, there is an obvious bijection between incomplete branches of a tree $T$, and the occurrences of gap symbols in $T$.
The main results of \cite{Sev2014} worked properly only for a restricted class of quantifier prefixes, the \emph{sentential} class. These are prefixes that can in principle give rise to a sentence when they are prefixed to some appropriate quantifier-free formulas. An example of a \emph{non}-sentential prefix is $\forall x(\exists y/z)$. Any formula which begins with this prefix is not a sentence, because it has a free variable, $z$. We define here a class of tree prefixes which incorporates both the notions of sententiality and \emph{regularity} (which mainly amounts to forbidding requantification of a variable). For brevity (and since sententiality is an obvious requirement for the kind of analysis we pursue here) such trees will be simply called ``regular''.
The definitions of the sets of free and bound variables, which we have given for formulas, extend in a straightforward way to trees, if one reads $(Qu/U)\prec_T(Qv/V)$ as ``$(Qv/V)$ is in the scope of $(Qu/U)$''. We write $FV(T)$, resp. $Bound(T)$ for these sets.
\begin{df} \label{REGU2} A tree prefix $T$ is \textbf{regular} if the following hold:\\ 0) If a quantifier $(Qv/V)$ occurs in $T$, and a variable $u$ is in $V$, then there is in $T$ another quantifier $(Q'u/U)\prec_T(Qv/V)$ (sententiality).\\
1) No variable occurs both free and bound in $T$.\\ 2) If a quantifier $(Qv/V)$ occurs in $T$, then it is not in the scope of any quantifier of the form $(Qv/W)$. \end{df}
This definition slightly clashes with our earlier definition of a regular $IF$ formula, which did not include requirement $0$). However, in this paper we deal almost exclusively with sentences; and notice that sentences automatically satisfy conditions 0) and 1).
Now we specify what $IF$ sentences can be obtained by filling the incomplete branches of a tree with (the trees of) quantifier-free formulas. The first of the following definitions generalizes the operation of postfixing an open formula to a quantifier prefix; here we may need to attach \emph{many} formulas, one for each gap in the tree.
\begin{df} \label{DEFCF} Let $T$ be a syntactical tree. We call any function $e:IBranch(T)\rightarrow QFree$ a \textbf{completing function for $T$}.
A completing function is \textbf{sentential} if, for each $R\in IBranch(T)$, we have $FV(e(R))\subseteq Bound(R)$.
\end{df}
\begin{df} For any tree $T$ and completing function $e$ (for $T$), we call $\hat e(T)$ the formula obtained by replacing, for each $R\in IBranch(T)$, the gap at the end of $R$ with $e(R)$. We will call the formula $\hat e(T)$ a \textbf{completion} of $T$.
If $S$ is a subtree of $T$, we denote by $\hat e(S)$ the smallest subformula of $\hat e(T)$ which contains $S$. \end{df}
It should be clear that, if $T$ is a regular tree prefix, then asserting that $e:IBranch(T)\rightarrow QFree$ is sentential amounts to saying that $\hat e(T)$ is a sentence.
\begin{example} 1) The simplest possible examples of tree prefixes are the quantifier prefixes. For instance, let $T = \exists y(\forall x/\{y\})[\phantom{a}]$. There is only one gap, so only one incomplete branch, which is $T$ itself (with its ordering); so, a completing function for $T$ is just a function from the singleton set $\{T\}$ to $QFree$. Set for example $e(T) = P(x)\land Q(x,y)$. Applying this completing function to $T$, one obtains the formula $\hat e(T) = \exists y(\forall x/\{y\})(P(x)\land Q(x,y))$. Notice that we have $FV(e(T)) = \{x,y\} = Bound(T)$: $e$ is sentential, and indeed $\hat e(T)$ is a sentence.
2) Consider a tree of the form $T' = \exists y([\phantom{a}]\lor(\forall x/\{y\})[\phantom{a}])$, which is not linear. It has two branches. Call $A$ the branch containing the leftmost gap, and $B$ the other one. A completing function for $T'$ will be a function $j:\{A,B\}\rightarrow QFree$, for example \begin{equation*} \left\{\begin{array}{l} j(A)=P(y,z)\\ j(B)=Q(x,y) \end{array} \right. \end{equation*}
\noindent which is not sentential because of $z$ occurring free in $j(A)$. The result of the completion is the (open) formula $\hat j(T') = \exists y(P(y,z)\lor (\forall x/\{y\})Q(x,y))$ with free variable $z$.
If we instead define a completing function $k$ by $k(A):= S(y), k(B):=Q(x,y)$, then $k$ is sentential, and $\hat k(T') = \exists y(S(y) \lor (\forall x/\{y\})Q(x,y))$ is a sentence. \end{example}
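The sententiality condition is a pointwise inclusion check, one incomplete branch at a time. A minimal sketch using the data of Example 2 (the branch names and variable sets are the ones used above):

```python
def is_sentential(bound, free):
    """FV(e(R)) is a subset of Bound(R) for every incomplete branch R.
    bound, free: dicts mapping a branch name to a set of variables."""
    return all(free[R] <= bound[R] for R in bound)

# Example 2: T' = Ey([] or (Ax/{y})[]), with branches A (left gap), B (right gap)
bound = {"A": {"y"}, "B": {"x", "y"}}
j_free = {"A": {"y", "z"}, "B": {"x", "y"}}   # j(A) = P(y,z), j(B) = Q(x,y)
k_free = {"A": {"y"}, "B": {"x", "y"}}        # k(A) = S(y),   k(B) = Q(x,y)
```

The completing function $j$ fails the check because of the free $z$ in $j(A)$, while $k$ passes it.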
\section{Complexity of $IF$ tree prefixes} \label{COMPLEXITY1}
We assume the reader is familiar with basic notions of complexity theory, in particular reductions, hardness, completeness and the complexity classes FO, L, NL, P and NP. In the following, when we speak of NP-completeness, we are thinking of completeness up to polynomial reductions (although, in most cases, much weaker reductions are adequate). It is known that the following inclusions hold: \[ \text{FO}\subseteq\text{AC}^0\subset\text{TC}^0\subseteq\text{L}\subseteq\text{NL}\subseteq\text{P}\subseteq\text{NP} \] where AC$^0$ and TC$^0$ are two classes of computation by circuits (AC$^0$: problems decidable by boolean circuits of unbounded fan-in and constant depth, TC$^0$: problems solvable by threshold circuits of constant depth). $\text{AC}^0\subset\text{TC}^0$ is one of the few strict inclusions known within NP; it has the interesting consequence that first-order formulas cannot even express all L problems.
We study the complexity of $IF$ positive initial trees, in the sense given by the following definitions (given along the lines of \cite{BlaGur1986}).
\begin{df} \label{DEFC}
To each $IF$ sentence $\varphi$, we associate the class $F_\varphi = \{M \ | \ M \text{ finite, }$ $M\models \varphi\}$ of its finite models. Given a syntactical tree $T$, we define the \textbf {complexity class of $T$}: \begin{equation*}
\text{\emph C}(T) = \{F_{\hat e(T)} \ | \text{ } e \text{ is a sentential completing function for } T\}. \end{equation*}
If we have \emph{C}$(T) =$ \emph{C}$(T')$, resp. \emph{C}$(T) \subseteq$\emph{ C}$(T')$... then we say that $T$ is as complex as $T'$, resp. $T$ is less complex than $T'$... \end{df}
The complexity classes $C(T)$, as defined above, are, from the set-theoretical point of view, metaclasses. If the reader is worried by this point, (s)he just has to replace the above definition of $F_\varphi$ with $\{[M] \ | \ M \text{ finite, }M\models \varphi\}$, where $[M]$ is a fixed representative of the isomorphism class of $M$, having as domain a subset of $\mathbb{N}$. All finite models have isomorphic copies of this kind, and $\{[M] \ | \ M \text{ finite, }M\models \varphi\}$ is a set.
\begin{df} The (``data complexity'' version of) the model-checking problem for an $IF$ sentence $\varphi$ is the problem of establishing whether $M\models \varphi$ when a (representation of a) finite structure $M$ is given as input. \end{df}
\begin{df}
We shall say that a regular tree prefix $T$ \textbf{is in complexity class \emph{K}} if for all sentential completing functions $e$ the model-checking problem for $\hat e(T)$ is in \emph{K} (equivalently: if $\text{\emph C(T)}\subseteq \text{\emph K}$).
We say that $T$ is \textbf{\emph{K}-hard}, or that it \textbf{encodes a \emph{K}-hard problem}, if there is at least one sentential completing function $e$ such that the model-checking problem for $\hat e(T)$ is \emph{K-hard} (equivalently: if $\text{\emph C(T)}\cap \text{\emph K-hard} \neq \emptyset$).
If $T$ is in \emph{K} and it is \emph{K}-hard, we say it is \textbf{\emph{K}-complete}\footnote{We might say, more properly, K-complete \emph{up to reduction closure}. Even if a tree prefix $T$ is K-complete in this sense, there may be problems from K which are not definable by sentences that begin with $T$. \cite{BlaGur1986} also use the term \emph{mighty} to refer to an NP-complete prefix.} (equivalently: if $\text{\emph C(T)}\cap \text{\emph K-complete} \neq \emptyset$ \emph{and} $\text{\emph C(T)}\subseteq \text{\emph K}$). \end{df}
\section{A rough classification of tree prefixes} \label{SECROUGH}
We are now in a position to state in our framework the dichotomy result given by Sevenster (\cite{Sev2014}, Theorem 5.1), restricted to the case of $IF$ regular prefixes: \begin{prop} \label{OLDDICHOTOMY} Every regular $IF$ quantifier prefix either encodes an \emph{NP}-complete problem, or it is in the class $\operatorname{FO}$ of first-order definable problems. \end{prop}
This result can be stated in a stronger form, saying 1) that the FO prefixes are equivalent, in a rather strong sense, to syntactically first-order prefixes, and 2) giving a complete (and effective) classification of the NP-complete vs. the FO prefixes. The NP-complete prefixes were classified according to the presence of particular patterns of dependence and independence among the quantifiers. We define analogous classes for syntactical trees.
\begin{df} We say that a quantifier $(Qy/Y)$, occurring in a regular formula or a regular tree, \textbf{depends on} $(Q'x/X)$ if $(Q'x/X)\prec(Qy/Y)$ and $x\notin Y$. If any of these two conditions does not hold, we say that $(Qy/Y)$ \textbf{does not depend on} $(Q'x/X)$.
For brevity, we will sometimes more simply write that $y$ depends (resp. does not depend) on $x$. \end{df}
We define two ``branch properties'' which generalize the homonymous properties defined in \cite{Sev2014}. They identify branches which mimic Henkin quantifiers (\cite{Hen1961}) and branches which contain signalling patterns (\cite{Hod97}, \cite{Jan2002}).
\begin{df} A branch of a syntactical tree is \textbf{Henkin} if it contains quantifiers $(\forall x/X),(\exists y/Y),(\forall z/Z),(\exists w/W)$ such that:\\ 1) $(\exists y/Y)$ depends on $(\forall x/X)$ but does not depend on $(\forall z/Z)$ nor $(\exists w/W)$\\ 2) $(\exists w/W)$ depends on $(\forall z/Z)$ but does not depend on $(\forall x/X)$ nor $(\exists y/Y)$. \end{df}
\begin{df} A branch of a syntactical tree is \textbf{signalling} if it contains quantifiers $(\forall x/X),(\exists y/Y),(\exists z/Z)$ such that:\\ 1) $(\exists y/Y)$ depends on $(\forall x/X)$ \\ 2) $(\exists z/Z)$ depends on $(\exists y/Y)$ but does not depend on $(\forall x/X)$. \end{df}
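For quantifier prefixes (i.e., linear trees), both branch properties can be decided by exhaustive search over tuples of quantifiers. A minimal Python sketch, under the assumed representation of a quantifier as a triple (symbol `"A"` or `"E"`, variable, slash set):

```python
def depends(prefix, i, j):
    """Quantifier j depends on quantifier i: i precedes j in scope order
    and i's variable is absent from j's slash set."""
    return i < j and prefix[i][1] not in prefix[j][2]

def is_signalling(prefix):
    """Some (Ax/X), (Ey/Y), (Ez/Z): y depends on x, z on y, but z not on x."""
    idx = range(len(prefix))
    return any(prefix[i][0] == "A" and prefix[j][0] == "E" and prefix[k][0] == "E"
               and depends(prefix, i, j) and depends(prefix, j, k)
               and not depends(prefix, i, k)
               for i in idx for j in idx for k in idx)

def is_henkin(prefix):
    """Some (Ax/X), (Ey/Y), (Az/Z), (Ew/W) forming a Henkin pattern."""
    idx = range(len(prefix))
    return any(prefix[i][0] == "A" and prefix[j][0] == "E"
               and prefix[k][0] == "A" and prefix[l][0] == "E"
               and depends(prefix, i, j)
               and not depends(prefix, k, j) and not depends(prefix, l, j)
               and depends(prefix, k, l)
               and not depends(prefix, i, l) and not depends(prefix, j, l)
               for i in idx for j in idx for k in idx for l in idx)
```

On the standard examples, the Henkin prefix $\forall x\exists y\forall z(\exists w/\{x,y\})$ is Henkin but not signalling, $\forall x\exists y(\exists z/\{x\})$ is signalling, and $\forall x\exists y\exists z$ is neither.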
\begin{df} Given any property $\mathbb{P}$ of branches, we say that a tree $T$ has property $\mathbb{P}$ if there is a branch of $T$ which has property $\mathbb{P}$. \end{df}
Of course, not every interesting property of a syntactical tree is induced in this way from the properties of its branches; the next two properties of trees exemplify this point.
\begin{df} A syntactical tree T is \textbf{first-order} if all of its slash sets are empty. \end{df}
\begin{df} A syntactical tree T is \textbf{primary} if it is neither Henkin nor signalling. \end{df}
With these definitions, the classification result of Sevenster can be summarized more precisely thus: \begin{prop} \label{SEVCLASSIFY} (\cite{Sev2014}) \\ 1) Henkin and signalling (regular) $IF$ quantifier prefixes are \emph{NP}-complete. \\ 2) Primary regular $IF$ quantifier prefixes are in $\operatorname{FO}$. \end{prop}
The apparatus developed so far allows us to (partially) extend the result on primary prefixes to trees. We define three new classes of trees. We will show that the first two of these, the \emph{generalized Henkin} and the \emph{coordinated} class, delimit the space for searching for genuinely new NP-complete prefixes.
\begin{df} A syntactical tree $T$ is \textbf{generalized Henkin} if it contains logical operators $\forall x,\forall y,\circ,(\exists u/U),(\exists v/V)$ ($\circ$ being either $\land$ or $\lor$) such that:\\ 0) $\forall x,\forall y,(\exists u/U),(\exists v/V)$ do not all occur in a same branch\\ 1) $\forall x\prec_T \circ \prec_T (\exists u/U),(\exists v/V)$\\ 2) $u$ depends on $x$ but does not depend on $y$ nor $v$\\ 3) $v$ depends on $y$ but does not depend on $x$ nor $u$. \end{df}
\noindent We can think of this structure as a sort of Henkin prefix which is split over multiple branches of the tree.
A typical example of a generalized Henkin tree prefix is $\forall x(\exists u[\phantom{a}]\lor\forall y(\exists v/\{x\})[\phantom{a}])$, in which the two existentials occur in distinct branches of the tree. This is a syntactical structure that can support phenomena of ``signalling by disjunction''. But notice that, in the general definition, $\circ$ is also allowed to be a conjunction. Clause 0) excludes the Henkin tree prefixes from this class, while clause 1) excludes some first-order trees, such as $(\forall x\exists u[\phantom{a} ])\lor(\forall y\exists v[\phantom{a}])$. If some operators $\forall x,\forall y, (\exists u/U),(\exists v/V)$ and $\circ$ (occurring in a tree) satisfy 0)-1)-2)-3) , we say that they form a \textbf{generalized Henkin pattern}. We will see many more examples in section \ref{GHTREES}, where we identify some classes of generalized Henkin prefixes that are NP-complete, and others that are in $\operatorname{FO}$.
\begin{df} A syntactical $IF$ tree is \textbf{coordinated} if it contains logical operators $(\forall x/X), \lor, (\forall y/Y), (\forall z/Z), (\exists u/U), (\exists w/W)$ such that:\\ 0) $(\forall x/X), (\forall y/Y), (\forall z/Z), (\exists u/U), (\exists w/W)$ do not all occur in a same branch\\ 1) $(\forall x/X)\prec_T \lor \prec_T (\exists u/U), (\exists w/W)$\\ 2) $u$ depends on $y$, but does not depend on $x, z, w$ \\ 3) $w$ depends on $z$, but does not depend on $x, y, u$.\\ In case $(\exists u/U), (\exists w/W)$ occur in distinct disjuncts below $\lor$, we say the coordinated tree is of \textbf{first kind}; otherwise, we say it is of \textbf{second kind}. \end{df}
\noindent Again, we say that six operators $(\forall x/X), \lor, (\forall y/Y), (\forall z/Z), (\exists u/U), (\exists w/W)$ occurring in a given tree form a \textbf{coordinated pattern} if they respect clauses 0)-1)-2)-3). The main difference with respect to the case of generalized Henkin patterns is that there is one quantifier $(\forall x/X)$ which is not ``seen'' by the existential quantifiers. The other two universal quantifiers can occur either above or below the disjunction. In the case of generalized Henkin patterns, we forbid the possibility of having both universal quantifiers below the disjunction (or conjunction), so as to exclude trivially first-order prefixes; here it is allowed for $\forall y$ and $\forall z$ to occur both below $\lor$ (as in the tree $\forall x(\forall y (\exists u/x)[\phantom{a}] \lor \forall z(\exists w/x)[\phantom{a}])$ ), and this actually leads to second-order expressive power. More generally, in section \ref{CTREES} we will show that \emph{all} coordinated trees of first kind are NP-complete. The second kind (e.g. the tree $\forall x([\phantom{a}] \lor(\forall y (\exists u/x)[\phantom{a}] \land \forall z(\exists w/x)[\phantom{a}]))$ ) is less well understood; in section \ref{CTREES2} we show that some of these tree prefixes are first-order.
\begin{df} A regular tree is \textbf{modest} if it is neither signalling, Henkin, generalized Henkin nor coordinated. \end{df}
\begin{conv} We apply the above terminology (Henkin, signalling, generalized Henkin, coordinated of first-second kind, modest) also to (regular) sentences, whenever the syntactical tree of the sentence has the corresponding property.
We will sometimes take the liberty, in the following, to call a sentence a tree, and to call a subformula a subtree (but not vice versa, at least in general). \end{conv}
The importance of the taxonomy given in this section lies in the following result:
\begin{teo} \label{MODESTTREE}
a) All regular, modest $IF$ trees are in FO.
b) Every regular, modest $IF$ sentence can be transformed, by means of equivalence rules, into a first-order sentence. \end{teo}
\noindent The rather involved proof is postponed to Appendix \ref{APPMODEST}. An example is in order to show that part b) of this theorem concretely improves over the criterion given in \cite{Sev2014} for recognizing first-orderness (i.e. checking whether a prenex $IF$ sentence is primary). A naive strategy for checking whether a (possibly non-prenex) regular $IF$ sentence is equivalent to some first-order sentence might be: transform the sentence into prenex normal form, and then check whether the resulting prenex sentence is primary (i.e. its syntactical tree is not Henkin nor signalling). However, this strategy gives some false negatives; in particular, it does not recognize some modest sentences as essentially first-order. Consider e.g. a sentence $\forall x(\forall y(\exists u/x)\psi \land \forall z(\exists v/x)\chi)$, with $\psi,\chi$ quantifier-free. By inspection one sees that this is a modest sentence. Furthermore, we can prove its equivalence to a first-order sentence as follows: first distribute $\forall x$ to obtain $\forall x\forall y(\exists u/x)\psi \land \forall x\forall z(\exists v/x)\chi$. Then by repeated applications of quantifier swapping (and by cancelling the slash sets of universal quantifiers) we obtain $\forall y\exists u\forall x\psi \land \forall z\exists v\forall x\chi$, which is first-order. These transformations apply to every sentence of this form, so what we have shown is that the tree $\forall x(\forall y(\exists u/x)[\phantom{a}] \land \forall z(\exists v/x)[\phantom{a}])$ is in FO. However, the use of prenex form does not account for this fact. Our sentence can e.g. be transformed into $\forall x\forall y(\exists u/x) \forall z(\exists v/x,y,u)(\psi \land\chi)$; here the prefix contains the Henkin pattern $\forall y,(\exists u/x),\forall z,(\exists v/x,y,u)$, and thus the sentence is not recognized as first-order using the criterion of \cite{Sev2014}.
We also have to note that many different prenex forms can be obtained, and different prenex forms might yield different outcomes.\footnote{A systematical enumeration of prenex forms is further complicated by the fact that the \emph{strong} extraction rule presented here is not the only extraction rule that can be used in $IF$ logic. See \cite{ManSanSev2011}, Theorem 5.35 for a different, ``weak'' rule. The results in \cite{CaiDecJan2009} also show that a variety of intermediate versions of the extraction rule are available.}
\section{Extension lemma for tree prefixes} \label{HIGHCOMP}
Because of theorem \ref{MODESTTREE}, we know that the search for tree prefixes with second-order expressive power can be limited to the Henkin, signalling, generalized Henkin and coordinated classes. In the present section we move a first step in this direction: we generalize to tree prefixes the Extension Lemma of \cite{Sev2014} (which concerned quantifier prefixes); this is the main tool for the study of regular NP-complete prefixes. Roughly stated, it says that whatever can be expressed using a regular prefix can also be expressed using a larger regular prefix. In particular, if a regular tree can define NP-complete problems, so can its extensions. This result fails dramatically if irregular prefixes are allowed; for example, $\forall x\exists y(\exists z/x)$ is an NP-complete prefix, but its extension $\forall x\exists y(\exists z/x)\forall x\forall y\forall z$ is first-order. Also in the regular case, the proof is not straightforward: we must carefully take into account the signalling phenomena that may be introduced by the additional quantifiers.
First of all, we must make precise what we mean by an extension of a tree prefix.
\begin{df} Let $T,U$ be tree prefixes. We say that $U$ \textbf{extends} $T$ if there is an injective function $\mu:T\rightarrow U$ such that: \\ 1) for every quantifier node $(Qv/V)$ in $T$, $\mu((Qv/V)) = (Qv/V')$ for some finite set of variables $V\subseteq V'$; and for every connective node $c$, $\mu(c)$ is an occurrence of the same connective. \\ 2) $\mu$ preserves the scope ordering $\prec_T$: if $c,d$ are two nodes in $T$, and $c\prec_T d$, then $\mu(c)\prec_U\mu(d)$.\\ 3) if $(Qv/V)$ and $(Qw/W)$ occur in $T$, then the latter depends on the former if and only if $\mu((Qw/W))$ depends on $\mu((Qv/V))$.\footnote{Clause 2) makes our definition stricter than the corresponding notion for quantifier prefixes given by Sevenster. The correct generalization should allow some form of swapping of independent quantifiers. However, we will not need such subtleties.} \end{df}
In short, $U$ extends $T$ if it contains all the logical operators of $T$, possibly with swollen slash sets, and the operators that come from $T$ keep their original mutual relations of dependence and independence.
\begin{example} Let $T$ be $\forall x\exists y(\exists z/x)[\phantom{a}]$, and $U$ be $\forall w\forall x\exists y([\phantom{a}] \land (\exists z/x)[\phantom{a}])$. $U$ extends $T$ via the function $\mu$ that sends each operator from $T$ into an occurrence of the same operator in $U$, and the gap of $T$ into the right-hand gap of $U$. Notice that this extension preserves the property of being a signalling tree. However, $U$ has more intricate dependencies: for example, both $\exists y$ and $ (\exists z/x)$ now depend on $\forall w$.
Also $U':\forall w\forall x\exists y([\phantom{a}] \land (\exists z/xw)[\phantom{a}])$ is an extension of $T$, via a function $\mu'$ that differs from $\mu$ only in that it sends $(\exists z/x)$ to $(\exists z/xw)$. In this case, $z$ is independent of the new variable $w$; however the (signalling) dependencies between the operators that were already in $T$ have not changed. Instead, $U'':\forall w\forall x\exists y([\phantom{a}] \land \exists z[\phantom{a}])$ and $U''':\forall w\forall x\exists y([\phantom{a}] \land (\exists z/x,y)[\phantom{a}])$ are not extensions of $T$; the signalling pattern involving $x,y$ and $z$ is not anymore in place. \end{example}
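For the special case in which both trees are linear quantifier prefixes, the extension relation can be decided by brute force over order-preserving injections. A sketch (an assumption for illustration: a quantifier is a triple of quantifier symbol, variable, slash set, and slash sets may only grow):

```python
from itertools import combinations

def depends(prefix, i, j):
    """Quantifier j depends on quantifier i: i precedes j in scope order
    and i's variable is absent from j's slash set."""
    return i < j and prefix[i][1] not in prefix[j][2]

def extends(T, U):
    """Does the linear prefix U extend the linear prefix T?"""
    n = len(T)
    # combinations are increasing, so every candidate map preserves scope order
    for mu in combinations(range(len(U)), n):
        ok_nodes = all(T[i][0] == U[mu[i]][0] and T[i][1] == U[mu[i]][1]
                       and T[i][2] <= U[mu[i]][2] for i in range(n))
        ok_deps = all(depends(T, i, j) == depends(U, mu[i], mu[j])
                      for i in range(n) for j in range(n))
        if ok_nodes and ok_deps:
            return True
    return False
```

For example, with $T = \forall x\exists y(\exists z/x)$, the linear analogue of $U'$, namely $\forall w\forall x\exists y(\exists z/x,w)$, extends $T$, while the analogue of $U'''$, $\forall w\forall x\exists y(\exists z/x,y)$, does not (the signalling dependence of $z$ on $y$ is destroyed).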
\begin{lem}[Extension Lemma for tree prefixes] \label{EXTENDLEMMA} Let $T$, $U$ be regular $IF$ tree prefixes. Suppose $U$ extends $T$, and $\varphi$ is a completion of $T$. Then there are a completion $\varphi'$ of $U$ and, for every structure $M$ suitable for $\varphi$, an expansion $M'$ of $M$ such that $M\models\varphi$ iff $M'\models\varphi'$. The signature of $M'$ only contains an additional constant symbol. \end{lem}
The idea behind the proof is simple, but the details are rather involved; so the proof is postponed to Appendix \ref{APPEXTLEM}.
\begin{cor} If a regular $IF$ tree prefix extends an \emph{NL}-hard (resp. \emph{P}, \emph{NP}-hard) positive initial tree, then it is \emph{NL}-hard (resp. \emph{P}, \emph{NP}-hard) itself. \end{cor}
\begin{proof}
Suppose the tree $T$ has a completion $\varphi$ which defines a C-hard problem (for C = NL, P or NP) over a class of structures K, and that tree $U$ extends $T$. Then by lemma \ref{EXTENDLEMMA} there is a completion $\varphi'$ of $U$ such that, for every structure $M$ suitable for $\varphi$, $M\models\varphi$ iff $M'\models\varphi'$, where $M'$ is $M$ expanded with a new constant. Since the increase in size of the structure, from $M$ to $M'$, is constant, the problem that $\varphi'$ defines on K$':=\{M'\ | \ M \in \operatorname K\}$ is C-hard. \end{proof}
We can immediately apply this result to show that not only quantifier prefixes, but also tree prefixes are NP-complete if they are Henkin or signalling.
\begin{teo} \label{HENKINCOMPLETE} Any regular $IF$ tree prefix which is Henkin is \emph{NP}-complete.\footnote{We emphasize once more that, by saying that a tree prefix is NP-complete, we do not mean that it can express all NP problems. It just means that all problems it can define are in NP, and that \emph{at least one} NP-complete problem is defined by (a completion of) the tree prefix.} \end{teo}
\begin{proof} Any such tree $T$ extends a regular Henkin quantifier prefix. It was shown in \cite{Sev2014}, Theorem 22, that regular Henkin prefixes encode the NP-complete problem of 3-COLORABILITY. So, from Lemma \ref{EXTENDLEMMA} it follows that $T$ is NP-hard. Since $IF$ sentences are in NP (Prop. \ref{FAGIN}), $T$ is NP-complete. \end{proof}
\begin{teo} \label{SIGNALLINGCOMPLETE} Any regular $IF$ tree prefix which is signalling is \emph{NP}-complete. \end{teo}
\begin{proof} Use Lemma \ref{EXTENDLEMMA} and Prop. \ref{FAGIN} again, on the basis that regular signalling quantifier prefixes codify the NP-complete problem EXACT COVER BY 3-SETS (\cite{Sev2014}, Theorem 23).\footnote{In \cite{BarHelRon2017} it is shown that also SAT and DOMINATING SET can be defined using the smallest signalling prefix.} \end{proof}
The remaining sections are devoted to a systematical study of the classes of generalized Henkin and coordinated trees.
\section{Complexity of generalized Henkin trees} \label{GHTREES}
The minimal examples of generalized Henkin trees are of the following forms:
\Tree [.$\forall x$ [.$\circ$ [.$\exists u$ [.$[\phantom a]$ ] ] [.$\forall y$ [.$(\exists v/x)$ [.$[\phantom a]$ ] ] ] ] ] \Tree [.$\forall x$ [.$\forall y$ [.$\circ$ [.$(\exists u/y)$ [.$[\phantom a]$ ] ] [.$(\exists v/x)$ [.$[\phantom a]$ ] ] ] ] ]
\noindent where $\circ$ is either $\lor$ or $\land$. We will call GH1($\circ$) the first type, and GH2($\circ$) the second.
\subsection{Describing SAT by a minimal generalized Henkin sentence} \label{GH2OR} Here we express the NP-complete problem SAT by means of an $IF$ sentence whose (positive initial) tree is generalized Henkin (specifically, GH2($\lor$)), but not Henkin nor signalling nor coordinated.\\ \\ SAT Problem: Given a proposition in conjunctive normal form,
decide whether there is an assignment which satisfies the proposition.\\ \\ How we model the problem: each instance of it is a structure of signature $P^2,N^2,C^1,0,1$ ($0,1$ are constants denoting two distinct elements; $C(y)$: ``$y$ is a clause''; $\neg C(y)$: ``$y$ is a propositional letter''; $P(x,y)$ : ``$x$ occurs positively in $y$''; $N(x,y)$ : ``$x$ occurs negatively in $y$''); we only allow structures such that for each clause $y$ there are at least two propositional letters $x$ such that $P(x,y)\lor N(x,y)$. It is well known that, even with this restriction, the SAT problem stays NP-complete.
For brevity, we shall write $O(x,y)$, ``$x$ occurs in $y$'', as a shortening for $\neg C(x) \land C(y) \land (P(x,y) \lor N(x,y))$.
The defining sentence is: \[ \varphi: \forall x\forall y((\exists u/y)\psi_1 \lor (\exists v/x)\psi_2) \] where \[ \psi_1: O(x,y) \land (P(x,y) \rightarrow u = 1) \land (N(x,y) \rightarrow u = 0) \] and \[ \psi_2: O(x,y) \rightarrow (O(v,y) \land x\neq v). \] \begin{teo} If $M$ is a suitable structure, then $M \models \varphi$ iff $M$ is a ``yes'' instance of SAT. \end{teo}
The idea behind this description is similar to that of Jarmo Kontinen's Theorem 4.3.3 from his PhD thesis (although he deals with a very different kind of descriptive complexity; and although his method seems to capture just the 2-SAT problem). See \cite{Kon2010} or \cite{Kon2013} for a comparison. Think of $x$ as a propositional letter, $u$ as the truth value which is assigned to $x$, $y$ as a clause, $v$ as a propositional letter which corresponds to a literal of $y$ which is made true by the truth assignment. The left disjunct enforces $u$ to be a truth assignment; the $y$-uniformity of the function which picks $u$ guarantees that the assignment is correctly defined, i.e., a function of the propositional letters. The right disjunct ensures that, for every clause $\hat y$, there is at least one literal in it (corresponding to a propositional letter $\hat x$) which is made true by the assignment described by $u$; this is so because the formula $x\neq v$ enforces that at least one pair of values $(\hat x, \hat y)$ for $(x,y)$ is sent to the left disjunct, which ensures that $\hat x$ is in the domain of the assignment.
For the sake of the present proof, it will be convenient to adopt the following notation: if $X$ is a team and $v_1,\dots v_n$ a sequence of variables in the domain of $X$, we denote as $X(v_1,\dots,v_n)$ the relation $\{(s(v_1),\dots, s(v_n)) \ | \ s\in X\}$.
\begin{proof}
1) Suppose $M$ is a ``yes'' instance. Then there is a truth assignment $T$ on propositional letters which makes the proposition $\bigwedge \{c\in M| c\in C^M\}$ true. This means that to each clause $c$ we can associate a propositional letter $f(c)$ which either occurs positively in $c$ and $T(f(c)) = 1$, or it occurs negated in $c$ and $T(f(c)) = 0$. Let $R$ be $\{(f(c),c) \ | \ c\in C^M\}$, and $S = M^2 \setminus R$. Let $Y = \{s:\{x,y\}\rightarrow M \ | \ (s(x),s(y)) \in R \}$ and $Z = \{s:\{x,y\}\rightarrow M \ | \ (s(x),s(y)) \in S \}$ be the corresponding teams of domain $\{x,y\}$. They form a partition of $\{\emptyset\}[M/x,M/y]$. Let $Y':= Y[T/u]$; clearly $M,Y'\models \psi_1$. Let $g$ be any extension of $f$ to the whole $dom(M)$. Define $Z' = Z[g/v]$. Any triple $(\hat x,\hat y,\hat v)\in Z'(x,y,v)$ either is such that $\hat x$ does not occur in the clause $\hat y$, or, if it does, $\hat x$ is not $f(\hat y)$ (because the pair $(f(\hat y),\hat y)$ is not in $Z(x,y)$). So, $Z'$ satisfies $\psi_2$.
2) Suppose $M$ is a ``no'' instance. Let $Y,Z$ be any partition of $\{\emptyset\}[M/x,M/y]$; let $T$ be a $y$-uniform function $Y\rightarrow M$; let $g$ be an $x$-uniform function $Z\rightarrow M$. Define $Y', Z'$ from $Y,Z,T,g$ as was done above. Since $T$ cannot be a satisfying assignment, there must be a clause $\hat y$ such that, for each propositional letter $x$, the triple $(x,\hat y,u)$
falsifies either $P(x,y) \rightarrow u = 1$ or $N(x,y) \rightarrow u = 0$ or $O(x,y)$. So, if $M,Y'\models \psi_1$, then for every $x\notin C^M$, $(x,\hat y,u)\notin Y'(x,y,u)$; so, $(x,\hat y)\notin Y(x,y)$; so, $(x,\hat y)\in Z(x,y)$. But then, if $(x,\hat y,v)\in Z'(x,y,v)$, by $x$-uniformity of $g$, we have that $(x',\hat y,v)\in Z'(x,y,v)$ for every propositional letter $x'$. So, $v$ must be equal to some such $x'$. Thus, $M, Z'\not\models O(x,y)\rightarrow x\neq v$: contradiction. \end{proof}
\begin{cor} The minimal generalized Henkin tree GH2($\lor$)
\Tree [.$\forall x$ [.$\forall y$ [.$\lor$ [.$(\exists u/y)$ [.$[\phantom a]$ ] ] [.$(\exists v/x)$ [.$[\phantom a]$ ] ] ] ] ]
\noindent and any regular tree prefix extending it are NP-complete. \end{cor} It is perhaps of some interest that the SAT-describing sentence above can be rewritten as an H$_2^1$ Henkin prefix sentence \[ \left(\begin{array}{cc} \forall x & \exists u \\ \forall y & \exists v \end{array}\right) (\psi_1 \lor \psi_2). \]
\noindent The paper \cite{KryVaa1989} introduced the so-called function quantifiers; in particular, the quantifier F$_2^1$, whose semantics is given by: $M\models$F$_2^1xyzw\psi(x,y,z,w)$ if and only if $\exists f\forall x \forall z\psi(x,f(x),z,f(z))$. It is unknown whether F$_2^1$ is strictly less expressive than H$_2^1$. Our SAT-defining sentence is an example of an H$_2^1$ sentence which cannot be reduced in any obvious way to an F$_2^1$ sentence, since the variables $u$ and $v$ describe here two very different functions. We are not aware of other examples of this kind in the literature.
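On finite structures the semantics of F$_2^1$ can be checked by brute force over all unary functions on the domain. The sketch below simply transcribes the clause $\exists f\forall x \forall z\,\psi(x,f(x),z,f(z))$; the name `holds_F21` and the representation of $\psi$ as a Python predicate are our own conventions, not notation from the literature.

```python
from itertools import product

def holds_F21(domain, psi):
    """Brute-force evaluation of the function quantifier F_2^1 on a finite
    domain: does some unary f satisfy psi(x, f(x), z, f(z)) for all x, z?"""
    domain = list(domain)
    for values in product(domain, repeat=len(domain)):
        f = dict(zip(domain, values))  # one candidate function f : domain -> domain
        if all(psi(x, f[x], z, f[z]) for x in domain for z in domain):
            return True
    return False

# "f has no fixed point" is satisfiable on {0, 1} (swap the elements),
# but not on a one-element domain.
assert holds_F21({0, 1}, lambda x, u, z, w: u != x)
assert not holds_F21({0}, lambda x, u, z, w: u != x)
```

Note that the single function symbol $f$ is reused at both argument positions, which is exactly why our SAT-defining sentence, where $u$ and $v$ describe two different functions, does not translate into this form in any obvious way.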
\subsection{Disjunction-free Generalized Henkin trees} \label{GHAND}
\begin{conv} From this section onwards, it will be useful to follow a convention: talking of a tree prefix $T$ which is a minimal representative of some class K of trees, we will say that another tree $U$ is an extension$^*$ of $T$ if it is an extension of $T$ and furthermore it does not fall in any of the significant classes that are listed in the table at the end of the paper (signalling, Henkin, GH1($\land$), GH1($\lor$), GH2($\land$), GH2($\lor$), GH3, C1, C2, C1', modest) - except class K itself. So for example an extension$^*$ of the smallest signalling prefix $\forall x\exists y(\exists z/x)$ is any extension of it which is not Henkin, GH1($\land$),...nor modest. \end{conv}
The minimal trees GH1($\circ$) and GH2($\land$) can be easily shown to be first-order (use quantifier distribution for GH1($\land$) and GH2($\land$); use the strong extraction rule in the longest branch of GH1($\lor$)). This tells us nothing about their extensions. We might conjecture that, if a tree falls in one of these classes but not in any other class that we have isolated (i.e., it is an extension$^*$ of GH1($\circ$) or GH2($\land$)), then it is in FO. We fall short of proving such results in full generality; for example, in the case of extensions of GH1($\land$) we prove this to hold only under the additional assumption that the tree in question does not contain disjunctions. In an earlier draft we claimed that the same assumption suffices for a GH2($\land$) tree to be in FO; instead, a stronger assumption is required. In order to express this condition, we follow \cite{JaaBar2019} and give a name to a particular extension of GH2($\land$), which we shall refer to as GH3:
\Tree [.$\forall x$ [.$\forall y$ [.$\exists i$ [.$\land$ [.$(\exists u/y,i)$ [.$[\phantom a]$ ] ] [.$(\exists v/x,i)$ [.$[\phantom a]$ ] ] ] ] ] ]
\noindent In it, the extra quantifier $\exists i$ does not produce any Henkin or signalling patterns; and it blocks the possibility of distributing the universal quantifiers below the conjunction. In \cite{JaaBar2019} it is shown that the GH3 tree suffices to describe NP-complete problems (3-COLORING and SAT) and that a family of its extensions$^*$ captures ESO. Those results show that ``signalling by disjunction'' does not exhaust the sources of second-order expressive power of regular, non-prenex $IF$ logic.
\begin{teo} \label{GHANDTEO} 1) If a regular tree prefix $T$
does not contain disjunctions, and is not GH2($\land$), Henkin, nor signalling, then it has first-order complexity. \\ 2) If a regular tree prefix $T$
does not contain disjunctions, and is not in GH1($\land$), GH3, Henkin, nor signalling, then it has first-order complexity. \end{teo}
\begin{proof} 1) Suppose $T$ satisfies the hypotheses; then, it is either modest or GH1($\land$). In the former case, it is in FO by theorem \ref{MODESTTREE}. In the latter, it contains at least one pattern
\Tree [.$\vdots$\\$\forall x$\\$\vdots$\\$\land$ [.$\vdots$\\$(\exists u/U)$\\$\vdots$ ] [.$\vdots$\\$\forall y$\\$\vdots$\\$(\exists v/V)$\\$\vdots$ ] ]
\noindent where $x\notin U, y\notin V,x\in V$, witnessing that $T$ is GH1($\land$). Notice:\\ \\ 1. There are, by assumption, no disjunctions between $\forall x$ and $(\exists v/V)$. \\ 2. Every existential quantifier $(\exists w/W)$ between $\forall x$ and $\forall y$ is independent of $\forall x$ (otherwise either $\forall x,\exists w,\forall y, \exists v$ form a Henkin pattern, or $\forall x,\exists w, \exists v$ form a signalling pattern). Consequently, $\forall x$ can be pushed below any such quantifier by the quantifier swapping rule.\\ 3. $\forall x$ can be pushed below any other universal quantifier by quantifier swapping. \\ 4. $\forall x$ can be pushed below any conjunction by means of quantifier distribution.\\ \\ None of these transformations generates new dependence patterns, nor disjunction symbols; so, applying them preserves the hypotheses of the theorem. Using these transformations, one can push $\forall x$ below
$\land$, so that there is one less witness of the GH1($\land$) pattern.
Iterating the process, one can remove all witnesses of the GH1($\land$) pattern, until the resulting tree is modest (and thus of first-order complexity, Theorem \ref{MODESTTREE}).
2) Analogous. Point 2. by itself may be insufficient for the purpose of pushing $\forall x$ below the conjunction $\land$, since it does not exclude that there might be existential quantifiers depending on $\forall x$ and $\forall y$ between $\forall y$ and $\land$; in that case, it would be impossible to push $\forall x$ and/or $\forall y$ below $\land$. But this configuration does not arise, because of the assumption that $T$ is not GH3. \end{proof}
This theorem adds to theorem \ref{MODESTTREE} in that it allows recognizing some non-modest trees/sentences as FO; for example, $\forall z\forall x((\exists u/z)[\phantom{a}] \land \forall y(\exists v/x)[\phantom{a}])$ is a GH1($\land$) prefix which satisfies part 1) of the theorem, while $\forall z\forall x\forall y((\exists u/zy)[\phantom{a}]\land(\exists v/zx)[\phantom{a}])$ is a GH2($\land$) prefix which satisfies part 2). As far as we could see, it seems impossible to tell that sentences with these prefixes are first-order just by taking prenex forms and checking that the tree is primary; the prenex transformations seem to always generate Henkin patterns, for the first tree, and signalling patterns for the second.
In the contrapositive, the theorem above tells us that the search for genuinely new second-order tree prefixes without disjunctions can be restricted to trees that are \emph{both} GH1($\land$) and GH2($\land$):
\begin{cor} Let $T$ be a regular tree prefix without occurrences of disjunction and which is not signalling, Henkin nor GH3. If $T$ is not in $FO$, then $T$ is both GH1($\land$) and GH2($\land$). \end{cor}
\subsection{Conjunction-free GH1 trees} \label{GH1OR}
For GH1($\lor$), we conjecture that a dual result may hold: extensions$^*$ of GH1($\lor$) which do not contain \emph{conjunction} symbols are in FO.
\begin{conjecture} \label{GH1ORTEO} Suppose an IF regular tree prefix has no conjunctions, and it is not GH2($\lor$), coordinated, Henkin nor signalling. Then it is in the FO complexity class. \end{conjecture}
\noindent However, we have no fully convincing proof of this statement.
\section{Coordinated trees of the first kind} \label{CTREES}
The minimal examples of coordinated trees of the first kind can have the following forms:
\Tree [.$\forall x$ [.$\lor$ [.$\forall y$ [.$(\exists u/x)$ [.$[\phantom a]$ ] ] ] [.$\forall z$ [.$(\exists v/x)$ [.$[\phantom a]$ ] ] ] ] ] \Tree [.$\forall x$ [.$\forall y$ [.$\lor$ [.$(\exists u/x)$ [.$[\phantom a]$ ] ] [.$\forall z$ [.$(\exists v/x,y)$ [.$[\phantom a]$ ] ] ] ] ] ] \Tree [.$\forall x$ [.$\forall y$ [.$\forall z$ [.$\lor$ [.$(\exists u/x,z)$ [.$[\phantom a]$ ] ] [.$(\exists v/x,y)$ [.$[\phantom a]$ ] ] ] ] ] ]
\noindent We call these trees (and the corresponding fragments of $IF$ logic) C1, C2, and C3, from left to right. It is apparent that C(C1)$\subseteq$ C(C2) $\subseteq$ C(C3). In the following subsections we show that all extensions of these trees are NP-complete.
\subsection{SAT by coordinated trees}\label{CSAT}
First: observe that the coordinated tree C3 is an extension of the generalized Henkin tree GH2($\lor$). So, by the Extension Lemma, it permits defining the SAT problem. So, all trees extending C3 are NP-complete (and we can exclude them from our classification, since they are a special case of GH2($\lor$) trees).\\ \\ Secondly: we give a different (but similar in spirit) description of SAT by means of the coordinated tree C2. This will prove that all trees extending C2 are NP-complete. We use the same notations and conventions as in the previous section, with the following exception\footnote{This difference is just related to our choice of names for the quantified variables; it has no deeper meaning.}: now $P(x,y)$ is interpreted as ``$y$ is a prop. letter occurring positively in the clause $x$'', and not vice versa; similarly for the relations $N$ and $O$. $O(x,y)$ is an abbreviation for $C(x)\land \neg C(y) \land (P(x,y)\lor N(x,y))$. We assume that each clause contains at least one literal. Then, the SAT-defining sentence is: \[ \theta: \forall x\forall y((\exists u/x)\chi_1 \lor\forall z(\exists v/xy) \chi_2) \] where \[
\chi_1 :
O(x,y)\land [ (P(x,y) \rightarrow u=1) \land (N(x,y) \rightarrow u=0)] \] and \[ \chi_2 : (z=x \land O(x,y)) \rightarrow (v\neq y \land O(x,v)). \]
\begin{teo}\label{TEOC2} If $M$ is a suitable structure, then $M\models\theta$ iff $M$ encodes a ``yes'' instance of SAT. \end{teo}
\begin{proof}
1) Suppose $M$ encodes a ``yes'' instance of SAT. Then there is an assignment $T$ of truth values to the propositional variables which makes the conjunction of clauses true. This means that to each clause $a$ we can associate a proposition $b = g(a)$ (we functionally choose one) such that either $b$ occurs positively in $a$, and $T(b)=1$, or $b$ occurs negatively in $a$, and $T(b)=0$. We say that such pairs are in a relation $R(a,b)$. Now define $Y,Z\subseteq \{\emptyset\}[MM/xy]$ as: $Y:=\{\{(x,a),(y,b)\} \ | \ (a,b)\in R\}$, $Z:= \{\emptyset\}[MM/xy]\setminus Y$. Define the function $F:Y\rightarrow M$, by $F(s):= T(s(y))$ whenever $s(y)$ is a propositional letter, and arbitrarily otherwise. Then, by the comments on $T$ above, $M,Y[F/u]\models \chi_1$.
Define the function $G: Z[M/z]\rightarrow M$ as $G(s) := g(s(z))$. Then, if $s\in Z[MG/zv]$ and $s(z)=s(x)$, we have $s(v)=g(s(z))=g(s(x))$; therefore $s(y)\neq s(v)$, because $s(v)= g(s(x))$, and the assignment $\{(x,s(x)),(y,g(s(x)))\}$ is in $Y$, so not in $Z$. Furthermore,
by the definition of $G$, $(s(x),s(v)) = (s(x),g(s(x)))\in R$, which implies $(s(x),s(v))\in O^M$. So $M, Z[MG/zv]\models \chi_2$.
2) Suppose $M\models\theta$. Then there are $Y,Z\subseteq \{\emptyset\}[MM/xy]$, an $x$-uniform function $F: Y\rightarrow M$ and an $xy$-uniform $G: Z[M/z]\rightarrow M$ such that $Y\cup Z=\{\emptyset\}[MM/xy]$, $M,Y[F/u]\models\chi_1$ and $M,Z[MG/zv]\models\chi_2$. Suppose for the sake of contradiction that, for some clause $\hat a\in C^M$, $s\in Y$ implies $s(x)\neq \hat a$. By our assumption on structures, that in each clause at least one literal occurs, there must be an $s\in Z$ such that $b:=s(y)$ occurs in $\hat a$. Pick $s'\in Z[MG/zv]$ such that $s'(x) = \hat a$, $s'(y) = b$ and $s'(z) = s'(x)$. Since $M,Z[MG/zv]\models\chi_2$, we have $s'(v)\neq s'(y)$ and $(s'(x),s'(v))\in O^M$. By $xy$-uniformity of $G$, $s'(v)= G(s'_{\upharpoonright \{x,y,z\}})$ is different from all $s''(y)$ such that $s''\in Z$. Then, the assignment $\{(x,\hat a),(y,s'(v))\}$ must be in $Y$, contradicting our hypothesis.
So, for each clause $a$, there is a propositional letter $g(a)$ occurring in $a$ such that $s_a:=\{(x,a),(y,g(a))\}\in Y$. Define $T(g(a)):= F(s_a)$, and extend it arbitrarily to a propositional assignment over propositional letters that are not of the form $g(a)$. Since $M,Y[F/u]\models \chi_1$ and $Y$ contains $s_a$ for each clause $a$, $T$ is an assignment that satisfies the instance of SAT which is encoded by $M$. \end{proof}
In this proof we used the restriction that each clause contain at least one literal; eliminating this restriction on the class of structures would require the use of an extra existential quantifier (independent of $x$) in each disjunct; the resulting tree would be an extension of the one considered, and (because of the right disjunct) either signalling or a Henkin tree. In any case, as before, also this variant of SAT is NP-complete. So:
\begin{cor} The coordinated tree C2 (and any tree extending it) is NP-complete. \end{cor}
\subsection{NP-completeness of C1} \label{C1LHARD}
We do not know whether the SAT problem is definable by means of the coordinated tree C1; however, we show here that a different NP-complete problem, SET SPLITTING (see e.g. \cite{GarJoh1979}), is definable by means of C1. This result was obtained in collaboration with Lauri Hella, who kindly agreed to its inclusion in this paper.\\ \\ Input: a set $A$, a family $\mathcal{B}\subseteq \wp(A)$ s.t., for every $B\in \mathcal{B}$, $card(B)\geq 2$.\\ \\ Measure of the input: $card(A\cup\mathcal{B})$.\\ \\ Problem: Is there a partition $\{U,V\}$ of $A$ such that, for each $B\in \mathcal{B}$, $B\cap U \neq\emptyset$ and $B\cap V\neq\emptyset$?\\ \\ We encode input instances as structures of domain $A\cup\mathcal{B}$ (with $\mathcal{B}\subseteq \wp(A)$ such that each of its elements has at least cardinality 2) which interpret in the obvious way the unary predicates $A$ and $\mathcal{B}$, and a binary ``set membership'' relation $R_\in$ (with the restriction that, if $(a,B)\in R_\in$, then $a\in A$, $B\in \mathcal{B}$ and $a\in B$).
The requirement that the sets in $\mathcal{B}$ have at least cardinality 2 is our addition to the original problem; it obviously does not decrease its complexity, and it makes the problem easier to define in our fragment of $IF$ logic.
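Independently of the first-order encoding below, the problem itself admits a straightforward exponential-time decision procedure. The following brute-force sketch (the name `set_splitting` and the representation of the family as Python sets are ours) just enumerates all candidate partitions $\{U,V\}$:

```python
from itertools import product

def set_splitting(A, family):
    """SET SPLITTING by brute force: is there a partition {U, V} of A such
    that every B in family meets both U and V?"""
    A = list(A)
    for bits in product([0, 1], repeat=len(A)):
        U = {a for a, b in zip(A, bits) if b == 1}
        V = set(A) - U
        if all(B & U and B & V for B in family):
            return True
    return False

# U = {1, 3}, V = {2} splits both sets.
assert set_splitting({1, 2, 3}, [{1, 2}, {2, 3}])
# The family of all 2-element subsets of a 3-element set cannot be split.
assert not set_splitting({1, 2, 3}, [{1, 2}, {1, 3}, {2, 3}])
```

The second example shows a ``no'' instance respecting our cardinality-2 restriction.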
The defining sentence is: \[ \eta: \forall x(\forall y(\exists u/\{x\})\epsilon_1 \lor \forall z(\exists v/\{x\})\epsilon_2) \] where \[ \epsilon_1: (A(x) \land \mathcal{B}(y)) \rightarrow (u\neq x \land R_\in(u,y)) \] and \[ \epsilon_2: (A(x) \land \mathcal{B}(z)) \rightarrow (v\neq x \land R_\in(v,z)) \]
\begin{teo} \label{C1ISNP} For every suitable structure $M$, $M\models\eta$ iff $M$ encodes a ``yes'' instance of SET SPLITTING. \end{teo}
\begin{proof}
$\Leftarrow$) Let $M$ be a ``yes'' instance of SET SPLITTING. Let $\{U,V\}$ be a partition of $A$ which satisfies the requirement of the problem: for every $B\in \mathcal{B}$, $B\cap U \neq\emptyset$ and $B\cap V\neq\emptyset$. For each $B\in \mathcal{B}$, choose a $u_B\in B\cap V$ and a $v_B\in B\cap U$. Define teams $Y:=\{s\in\{\emptyset\}[M/x] \ | \ s(x)\in U\}$ and $Z:=\{\emptyset\}[M/x] \setminus Y$; they form a partition of $\{\emptyset\}[M/x]$. Let $F:Y[M/y]\rightarrow M$ be defined as $F(s) := u_B$ if $s(y) = B\in\mathcal{B}$, and as an arbitrary function of $y$ otherwise. Let $G:Z[M/z]\rightarrow M$ be defined as $G(s) := v_B$ if $s(z) = B\in\mathcal{B}$, and arbitrarily otherwise. Since the $u_B$s are in $V$, they are not in $U$, and so every $s\in Y[MF/yu]$ is such that, if $s(y)\in\mathcal{B}$, then $s(u) = F(s_{\upharpoonright\{x,y\}}) = u_B \neq s(x)\in U$. So, $M,Y[MF/yu]\models \epsilon_1$. A symmetrical argument shows that $M,Z[MG/zv]\models \epsilon_2$.
$\Rightarrow$) Suppose $M\models \eta$. Then there are $Y,Z\subseteq \{\emptyset\}[M/x]$ such that $Y\cup Z = \{\emptyset\}[M/x]$, and $x$-uniform functions $F:Y[M/y]\rightarrow M$ and $G: Z[M/z]\rightarrow M$, such that $M,Y[MF/yu]\models \epsilon_1$ and $M,Z[MG/zv]\models \epsilon_2$. Define $U$ as $\{a\in A \ | \ \exists s\in Y(s(x)=a)\}$, and $V:= A\setminus U$. Since $U\cup V=A$, at least one out of $U$ and $V$ is nonempty. We suppose w.l.o.g. that $U$ is nonempty, which implies that $Y$ is nonempty.
Let $B\in\mathcal{B}$. Let $s_B\in Y[MF/yu]$ be an assignment such that $s_B(y) = B$ and $s_B(x)\in A$ (there is at least one such $s_B$, because of the nonemptiness of $Y$ and the fact that $y$ is universally quantified). The fact that $M,s_B\models \epsilon_1$ implies that $s_B(u)\in s_B(y)=B$ and $s_B(u)\neq s_B(x)$; since $s_B(u) = F({s_B}_{\upharpoonright \{x,y\}})$, the $x$-uniformity of $F$ implies that $s_B(u)\neq s(x)$ for each $s\in Y[MF/yu]$, that is, $s_B(u)\neq a$ for all $a\in U$. So $s_B(u)\in B\cap V$.
This furthermore implies that $V\neq\emptyset$. So, by a symmetric argument one can prove the existence of one element in $B\cap U$. \end{proof}
\begin{cor} The coordinated tree C1, as all trees extending it, is NP-complete. \end{cor}
This result, together with the Extension Lemma (\ref{EXTENDLEMMA}), yields an alternative, more indirect proof of theorem \ref{TEOC2}.
This concludes the classification of coordinated trees of the first kind up to reduction closure. However, since the minimal coordinated trees plausibly do not capture all NP problems, it might be of interest that we found a description of an L-complete problem, 2-COLORABILITY, by means of the minimal C1 tree. This problem is known not to be in FO. The 2-COLORABILITY problem can be described as follows: given a graph $G = (V,E)$, decide whether $V$ can be partitioned into subsets $A,B$ such that $A^2\cap E =\emptyset$ and $B^2\cap E =\emptyset$ (i.e., there are no edges between vertices of $A$, and similarly for $B$).
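Unlike the NP-complete problems considered above, 2-COLORABILITY is decidable in polynomial time by the standard BFS 2-coloring procedure; the following minimal sketch (function name and graph representation are our own) makes the partition $\{A,B\}$ explicit as the two color classes:

```python
from collections import deque

def two_colorable(vertices, edges):
    """Decide 2-COLORABILITY: can the vertices be split into sets A, B with
    no edge inside A and none inside B? (Standard BFS 2-coloring.)"""
    adj = {v: [] for v in vertices}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    color = {}
    for start in vertices:          # handle each connected component
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in color:
                    color[w] = 1 - color[v]
                    queue.append(w)
                elif color[w] == color[v]:
                    return False    # an odd cycle obstructs 2-coloring
    return True

assert two_colorable({1, 2, 3, 4}, {(1, 2), (2, 3), (3, 4)})      # a path
assert not two_colorable({1, 2, 3}, {(1, 2), (2, 3), (1, 3)})     # a triangle
```

In the soundness proof below, the team $Y$ plays exactly the role of the color class $A$.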
The defining sentence, in the language of graphs, is: \[ \xi: \forall x(\forall y(\exists u/\{x\})\xi_1\lor\forall z(\exists v/\{x,y\})\xi_2) \] where \[ \xi_1: E(x,y) \rightarrow (u=y \land u\neq x) \] and \[ \xi_2: E(x,z) \rightarrow (v=z \land v\neq x). \]
\begin{teo} A graph structure $M$ satisfies $\xi$ if and only if it encodes a ``yes'' instance of 2-COLORABILITY. \end{teo}
\begin{proof} 1) Suppose $M$ is a ``yes'' instance of 2-COLORABILITY. Then, its domain can be partitioned into two subsets $A,B$ such that $c\in A$ and $(c,d)\in E^M$ imply $d\in B$, and vice versa.
Define $Y:= \{s\in \{\emptyset\}[M/x] \ | \ s(x)\in A\}$ and $Z=\{\emptyset\}[M/x] \setminus Y$. Define $F:Y[M/y]\rightarrow M$, $F(s) := s(y)$, and $G:Z[M/z]\rightarrow M$, $G(s) := s(z)$.
Notice that, if $s\in Y$, then $s(x)\in A$; and if $(s(x),s(y))\in E^M$, then $s(y)\in B$; so, since $A$ and $B$ are disjoint, $s(y)\neq s(x)$. Furthermore, $s(u)=s(y)$ by the definition of $F$. Thus, $M,Y[MF/yu]\models \xi_1$. The proof that $M,Z[MG/zv]\models \xi_2$ is completely analogous.
2) Suppose $M\models \xi$. Then there are $Y,Z\subseteq \{\emptyset\}[M/x]$, an $x$-uniform function $F: Y[M/y]\rightarrow M$ and an $x$-uniform $G: Z[M/z]\rightarrow M$ such that $Y\cup Z=\{\emptyset\}[M/x]$, $M,Y[MF/yu]\models\xi_1$ and $M,Z[MG/zv]\models\xi_2$. By downward closure, we can assume that $Y\cap Z=\emptyset$. Let $A:=\{a\in M \ | \ \{(x,a)\}\in Y\}$, and $B=M\setminus A$. Now suppose, for the sake of contradiction, that $a\in A$, $(a,c)\in E^M$ and $c\in A$. There is an $s\in Y[MF/yu]$ such that $s(x)=a$ and $s(y)=c$. Since $M,Y[MF/yu]\models\xi_1$ and $M,s\models E(x,y)$, we have $s(u)=s(y)$ and $s(u)\neq s(x)$. Since $s(u) = F(s_{\upharpoonright \{x,y\}})$ and $F$ is $x$-uniform, we have $s(u) \neq s'(x)$ for all $s'\in Y[MF/yu]$. But $s(u)=s(y)$; so, $s(y)$ is different from all $s''(x)$ such that $s''\in Y$. Thus $c=s(y) \in B$: a contradiction.
Similarly one proves that $b\in B$, $(b,c)\in E^M$ implies $c\in A$. \end{proof}
\section{Coordinated trees, second kind} \label{CTREES2}
A coordinated tree is of the second kind if it contains some logical operators $\forall x,\forall y,\forall z,\lor,(\exists u/U),(\exists v/V)$ that form a coordinated pattern, and such that $(\exists u/U),(\exists v/V)$ occur in the same disjunct below $\lor$. Since the definition of coordinated pattern excludes the trivial (Henkin) case that $(\exists u/U),(\exists v/V)$ are in a same branch of the tree, we must suppose that below $\lor$ there is a connective $\circ$ such that $(\exists u/U)$ occurs (say) in the left subformula below $\circ$, and $(\exists v/V)$ occurs in the right subformula. If $\circ$ is a disjunction, then the tree is also of the first kind; so it is NP-complete by the results of the previous section. We focus then on the case $\circ = \land$. Taking into account all the different positions in which $\forall y,\forall z$ can occur, and ignoring permutations of quantifiers of the same kind, we can isolate six minimal coordinated trees of the second kind:
\Tree [.$\forall x$ [.$\lor$ [.$[\phantom{a}]$ ] [.$\land$ [.$\forall y$ [.$(\exists u/x)$ [.$[\phantom a]$ ] ] ] [.$\forall z$ [.$(\exists v/x)$ [.$[\phantom a]$ ] ] ] ] ] ] \Tree [.$\forall x$ [.$\lor$ [.$[\phantom{a}]$ ] [.$\forall y$ [.$\land$ [.$(\exists u/x)$ [.$[\phantom a]$ ] ] [.$\forall z$ [.$(\exists v/x,y)$ [.$[\phantom a]$ ] ] ] ] ] ] ] \Tree [.$\forall x$ [.$\lor$ [.$[\phantom{a}]$ ] [.$\forall y$ [.$\forall z$ [.$\land$ [.$(\exists u/x,z)$ [.$[\phantom a]$ ] ] [.$(\exists v/x,y)$ [.$[\phantom a]$ ] ] ] ] ] ] ]
\Tree [.$\forall x$ [.$\forall y$ [.$\lor$ [.$[\phantom{a}]$ ] [.$\land$ [.$(\exists u/x)$ [.$[\phantom a]$ ] ] [.$\forall z$ [.$(\exists v/x,y)$ [.$[\phantom a]$ ] ] ] ] ] ] ] \Tree [.$\forall x$ [.$\forall y$ [.$\lor$ [.$[\phantom{a}]$ ] [.$\forall z$ [.$\land$ [.$(\exists u/x,z)$ [.$[\phantom a]$ ] ] [.$(\exists v/x,y)$ [.$[\phantom a]$ ] ] ] ] ] ] ] \Tree [.$\forall x$ [.$\forall y$ [.$\forall z$ [.$\lor$ [.$[\phantom{a}]$ ] [.$\land$ [.$(\exists u/x,z)$ [.$[\phantom a]$ ] ] [.$(\exists v/x,y)$ [.$[\phantom a]$ ] ] ] ] ] ] ]
\noindent From left to right, we call these trees C1', C2', C3', C4', C5', C6'. Before the reader starts worrying because of this explosion of cases, we point out that only the C1' case is genuinely new, while C2', C3', C4', C5', C6' are extensions of either tree GH1($\land$) or GH2($\land$); they fall into cases that we had already left open before. Notice, furthermore, that C(C1') $\subseteq$ C(C2') $\subseteq$ C(C3') $\subseteq$ C(C5') $\subseteq$ C(C6') and C(C1') $\subseteq$ C(C2') $\subseteq$ C(C4') $\subseteq$ C(C5') $\subseteq$ C(C6').
We prove that the tree C6' (and thus C5', C4', C3', C2', C1') is in FO (the key idea of the proof is due to Lauri Hella). The result tells us nothing about extensions of these trees.
\begin{teo} \label{C6TEO} The trees C1', C2', C3', C4', C5', C6' are in FO. \end{teo}
\begin{proof} We prove that sentences which begin with tree C6', that is, are of the form \[ \varphi: \forall x\forall y\forall z(\psi_1(x,y,z) \lor ((\exists u/\{x,z\})\psi_2(x,y,z,u) \land (\exists v/\{x,y\})\psi_3(x,y,z,v))), \]
with $\psi_1,\psi_2,\psi_3$ quantifier-free, are equivalent to sentences of the form \[ \varphi': \forall y\forall z( ((\exists u/\{z\})\forall x(\psi_1(x,y,z) \lor\psi_2(x,y,z,u)) \land \] \[ (\exists v/\{y\})\forall x(\psi_1(x,y,z) \lor \psi_3(x,y,z,v)))). \]
\noindent Notice now that $\varphi'$ can also be obtained as completion of the positive initial tree
\Tree [.$\forall x$ [.$\forall y$ [.$\land$ [.$(\exists u/x,z)$ [.$\forall x$ [.$[\phantom{a}]$ ] ] ] [.$(\exists v/x,y)$ [.$\forall x$ [.$[\phantom{a}]$ ] ] ] ] ] ]
\noindent which is a disjunction-free extension$^*$ of the GH2($\land$) tree, and thus in FO, by Theorem \ref{GHANDTEO}. So, $\varphi$ itself is equivalent to a first-order sentence.
We have to prove the above equivalence.
$\Longrightarrow$) $M\models \varphi$ iff there are teams $X_1,X_2\subseteq \{\emptyset\}[MMM/xyz]$ such that $X_1 \cup X_2 = \{\emptyset\}[MMM/xyz]$, an $\{x,z\}$-uniform function $F:X_2\rightarrow M$, and an $\{x,y\}$-uniform function $G:X_2\rightarrow M$, such that $M,X_1\models \psi_1(x,y,z)$, $M,X_2[F/u]\models \psi_2(x,y,z,u)$, and $M,X_2[G/v]\models \psi_3(x,y,z,v)$.
Now fix an $a\in M$, and define functions $F',G':\{\emptyset\}[MM/yz]\rightarrow M$ as $F'(s) := F(s(a/x))$ and $G'(s) := G(s(a/x))$. Obviously $F'$ is $z$-uniform and $G'$ is $y$-uniform.
Define teams $X_2':= \{s\in \{\emptyset\}[MMF'M/yzux] \ | \ M,s \models \psi_2(x,y,z,u)\}$ and $X_1' :=\{\emptyset\}[MMF'M/yzux] \setminus X_2'$. We have to verify that, then, $M,X_1'\models \psi_1(x,y,z)$. Suppose this is not the case, that is, there is an assignment $s\in X_1'$ such that $M,s\not\models\psi_1(x,y,z)$. By locality of quantifier-free formulas, $M,s_{-u}\not\models \psi_1(x,y,z)$ (where $s_{-u}$ is the assignment $s$ restricted to $dom(s)\setminus\{u\}$). This implies that $s_{-u}\in X_2$; so, that $s\in X_2'$; this contradicts the initial assumption that $s\in X_1'$.
One can then analogously define $X_4':= \{s\in \{\emptyset\}[MMG'M/yzvx] | M,s \models \psi_3(x,y,z,v)\}$ and $X_3' :=\{\emptyset\}[MMG'M/yzvx] \setminus X_4'$, and prove that $M,X_3'\models \psi_1(x,y,z)$.
$\Longleftarrow$) $M\models\varphi'$ iff there are functions $F',G':\{\emptyset\}[MM/yz]\rightarrow M$ ($F'$ $z$-uniform, and $G'$ $y$-uniform) such that $M, \{\emptyset\}[MMF'M/yzux]\models \psi_1(x,y,z)\lor\psi_2(x,y,z,u)$, and $M, \{\emptyset\}[MMG'M/yzvx]\models \psi_1(x,y,z)\lor\psi_3(x,y,z,v)$. Calling $X_1':= \{s\in \{\emptyset\}[MMF'M/yzux] | M,s\models \psi_1(x,y,z)\}$ and $\overline X_1':= \{s\in \{\emptyset\}[MMG'M/yzvx] | M,s\models \psi_1(x,y,z)\}$, the last two statements above are equivalent to the existence of a team $X_2'\subseteq \{\emptyset\}[MMF'M/yzux]$ such that $X_1'\cup X_2' = \{\emptyset\}[MMF'M/yzux]$ and $M,X_2'\models \psi_2$, and, respectively, to the existence of a team $X_3'\subseteq \{\emptyset\}[MMG'M/yzvx]$ such that $\overline X_1'\cup X_3' = \{\emptyset\}[MMG'M/yzvx]$ and $M,X_3'\models \psi_3$.
Let $X_1 := \{s\in\{\emptyset\}[MMM/xyz] | M,s\models\psi_1(x,y,z)\}$. Let $X_2$ be its complement $\{\emptyset\}[MMM/xyz] \setminus X_1$.
Define $F:X_2\rightarrow M$ as $F(s) := F'(s_{-x})$ and $G:X_2\rightarrow M$ as $G(s) := G'(s_{-x})$. Obviously $F$ is $\{x,z\}$-uniform and $G$ is $\{x,y\}$-uniform.
Does $M, X_2[F/u]\models \psi_2(x,y,z,u)$? Yes, because $s\in X_2[F/u]$ implies $s\in \{\emptyset\}[MMF'M/yzux]$, and we already know that $M,\{\emptyset\}[MMF'M/yzux]\models \psi_2(x,y,z,u)$. Similarly, one can see that $M, X_2[G/v]\models \psi_3(x,y,z,v)$.
\end{proof}
\section{Conclusions}
In this paper we have classified, up to reduction closure, many of the syntactical fragments of $IF$ logic that are individuated by positive initial trees (see the table at the end). All the tree prefixes that we have examined fall in the FO/NPC dichotomy. So, the question whether positive initial trees respect the dichotomy is still open.
One of the main contributions of the paper is the individuation of new patterns which allow defining second-order properties in $IF$ logic: we have found three patterns (GH2($\lor$), C2, C1) which express NP-complete problems even though they contain no Henkin nor signalling quantifier patterns. They do this by using some forms of ``signalling by disjunction''. (As already pointed out before, a further pattern GH3, which uses conjunctions instead of disjunctions, has been recently discovered in \cite{JaaBar2019}). For all we know, there might still be other unrecognized higher-order patterns (to be found among extensions of the GH1($\land$), GH1($\lor$), GH2($\land$) and C1' trees). We also point out that the descriptions of NP-complete problems we have found are quite atypical; in particular, they can be easily translated into H$_2^1$ sentences (H$_2^1$ being the smallest, four-place Henkin quantifier) but not so easily into F$_2^1$ sentences (F$_2^1$ being the smallest function quantifier, see \cite{KryVaa1989}).
As regards trees of low complexity, our theorem on modest trees (\ref{MODESTTREE}), together with further results on generalized Henkin trees (\ref{GHANDTEO}) and coordinated trees of the second kind (\ref{C6TEO}), provides a rather general sufficient (and effective) criterion for recognizing $IF$ sentences that have first-order expressive power, thus extending some criteria that come from earlier literature: the primality test of Sevenster (\cite{Sev2014}), on one side, and the Knowledge Memory test (\cite{Bar2013}) (which in turn extended the earlier Perfect Recall test, \cite{ManSanSev2011}). A different criterion for first-orderness is given by checking the absence of \emph{broken signalling sequences}, in the sense of \cite{Bar2013}; a moment of thought shows that this is also a special case of the modest tree criterion. The search for increasingly general sufficient, effective criteria for first-orderness has an interest because recognizing the $IF$ sentences (resp. ESO sentences, etc.) that are equivalent to first-order ones is an \emph{undecidable} problem.\footnote{We do not know where to find an easy proof of this fact in the literature (but see \cite{Cha1991}).}
The present results can be seen as a step forward in the understanding of fragments of $IF$ logic. Future work should be addressed to a more systematic understanding of the classes GH1, GH2($\land$) and C1', although it is not clear at present whether a complete and reasonable classification of the regular tree prefixes is possible.
In the somewhat long time that has elapsed since the archiving of an earlier draft of this paper, some significant further progress has been made in the classification of fragments of $IF$ logic induced by quantifier and tree prefixes. We have already mentioned the discovery of the NP-complete GH3 tree prefix (\cite{JaaBar2019}), which, for completeness, we include in the summary table at the end of the paper. \cite{BarHelRon2018} developed tools for the study of \emph{ir}regular prefixes, isolated some new minimal NP-complete prefixes that do not occur in the regular case (e.g. ``long signalling sequences'') and a putative counterexample to the FO/NP-complete dichotomy. Aside from the issue of tractability, a complementary problem has been studied: whether classes of extensions$^*$ of the significant patterns can be used to capture the whole existential second-order logic. These kinds of results typically are obtained by using the $IF$ patterns to explicitly simulate a complete set of Henkin quantifiers: this has been shown to be possible using extensions$^*$ of the signalling (\cite{BarHelRon2017}), GH2($\lor$) (\cite{BarHelRon2018}), C1 and GH3 (\cite{JaaBar2019}) prefixes.
Further work might be directed at finding exact characterizations of the expressive power of fragments (not just up to reduction closure). Secondly, it might be interesting to investigate what happens abandoning the restriction that trees be positive initial; although, surely in this case a satisfactory classification is impossible (a complete classification of syntactical trees would yield in particular a sufficient and necessary criterion for the first-orderness of sentences -- which, as we said, is an undecidable problem). One interesting example, in this sense, is the tree \[ \forall x\exists \alpha\forall z(\exists\beta/\{x,\alpha\})((\alpha = 0 \lor \alpha = 1) \land (\beta = 0 \lor \beta =1) \land [\phantom a]) \] which is equivalent to the smallest of the so-called \emph{narrow Henkin quantifiers}. From the results of \cite{BlaGur1986}, it follows that this tree is NL-complete, a possibility that, so far, we have not individuated among regular tree prefixes.
A third direction of work might be the analysis of quantifier and tree prefixes of logics similar to $IF$, such as the system $IF^*$ (see e.g. \cite{CaiDecJan2009}), which also allows slashed connectives, or Dependence-friendly logic (\cite{Vaa2007}), or their extensions via generalized quantifiers (see \cite{Eng2012}, \cite{Sev2014}). We also hope that the understanding of fragments of $IF$ logic might be of help for the analysis of other logics that cover NP, and that are structurally very different. One example is given by the logics of imperfect information based on atoms, such as Dependence logic (\cite{Vaa2007}), Independence logic (\cite{GraVaa2013}) or Inclusion logic (\cite{Gal2013}), whose higher-order expressive power is in part
generated at the level of quantifier-free formulas. The other main example is existential second-order logic; in particular, its functional version, for which a prefix approach only yields a not too interesting dichotomy.
In the following pages, a table summarizes all that we know about regular, positive initial tree prefixes. Remember that, as a convention, if $T$ is the name of a specific tree, we refer to its \emph{extensions$^*$} to mean trees that extend $T$ and do not fall in any other of the categories described in the table.
\begin{tabular}{|c|c|} \hline
\textbf{Tree} & \textbf{Complexity} \\ \hline \hline
\multicolumn{2}{|l|}{\textbf{Henkin}}\\ \hline \pbox{20cm}{$\phantom{a}$\\$\forall x\exists y\forall z(\exists w/x,y)$, \\ $\forall x\forall z(\exists y/z)(\exists w/x,y)$, \\ $\forall x\forall z(\exists w/x)(\exists y/z,w)$ \\ and their extensions\\} & NP-complete (3-COLORING) \\ \hline
\multicolumn{2}{|l|}{\textbf{Signalling}}\\ \hline \pbox{20cm}{$\phantom{a}$\\$\forall x\exists y(\exists z/x)$ \\ and its extensions\\} & \pbox{20cm}{NP-complete \\ (EXACT COVER BY 3-SETS, \\ SAT, DOMINATING SET)} \\ \hline
\multicolumn{2}{|l|}{\textbf{Generalized Henkin}}\\ \hline \pbox{20cm}{$\phantom{a}$\\ GH1($\land$): \Tree [.$\forall x$ [.$\land$ [.$\forall y$ [.$(\exists v/x)$ [.$[\phantom a]$ ] ] ] [.$\exists u$ [.$[\phantom a]$ ] ] ] ] \\ and its disjunction-free\\ extensions$^*$\\} & FO \\ \hline \pbox{20cm}{$\phantom{a}$\\ GH2($\land$): \Tree [.$\forall x$ [.$\forall y$ [.$\land$ [.$(\exists v/x)$ [.$[\phantom a]$ ] ] [.$(\exists u/y)$ [.$[\phantom a]$ ] ] ] ] ] \\ and its disjunction-free\\ extensions$^*$\\} & FO \\ \hline \pbox{20cm}{$\phantom{a}$\\ GH3: \Tree [.$\forall x$ [.$\forall y$ [.$\exists i$ [.$\land$ [.$(\exists u/y,i)$ [.$[\phantom a]$ ] ] [.$(\exists v/x,i)$ [.$[\phantom a]$ ] ] ] ] ] ] \\ and its extensions \\} & NP-complete (3-COLORING, SAT) (\cite{JaaBar2019}) \\ \hline \end{tabular}
\begin{tabular}{|c|c|} \hline \pbox{20cm}{$\phantom{a}$\\ GH1($\lor$): \Tree [.$\forall x$ [.$\lor$ [.$\forall y$ [.$(\exists v/x)$ [.$[\phantom a]$ ] ] ] [.$\exists u$ [.$[\phantom a]$ ] ] ] ]
\\ \\} & FO \\ \hline \pbox{20cm}{$\phantom{a}$\\ GH2($\lor$): \Tree [.$\forall x$\\$\forall y$ [.$\lor$ [.$(\exists v/x)$ [.$[\phantom a]$ ] ] [.$(\exists u/y)$ [.$[\phantom a]$ ] ] ] ] \\ and its extensions\\} & NP-complete (SAT) \\ \hline
\pbox{20cm}{ \phantom{a} Extensions$^*$ of GH1($\land$) or GH2($\land$) \\ without disjunctions \\} & FO \\ \hline
\pbox{20cm}{ \phantom{a} Extensions$^*$ of GH1($\land$) or GH2($\land$) \\ with disjunctions \\ } & ??? \\ \hline
\pbox{20cm}{ \phantom{a} Extensions$^*$ of GH1($\lor$) \\ } & ??? \\ \hline
\multicolumn{2}{|l|}{\textbf{Coordinated}}\\ \hline \pbox{20cm}{
C1: \Tree [.$\forall x$ [.$\lor$ [.$\forall y$ [.$(\exists u/x)$ [.$[\phantom a]$ ] ] ] [.$\forall z$ [.$(\exists v/x)$ [.$[\phantom a]$ ] ] ] ] ] \\ and its extensions \\} & \pbox{20cm}{NP-complete (SET SPLITTING) } \\ \hline \end{tabular}
\begin{tabular}{|c|c|} \hline \pbox{20cm}{
C2: \Tree [.$\forall x$\\$\forall y$ [.$\lor$ [.$(\exists u/x)$ [.$[\phantom a]$ ] ] [.$\forall z$ [.$(\exists v/x,y)$ [.$[\phantom a]$ ] ] ] ] ] \\ and its extensions \\} & \pbox{20cm}{NP-complete (SAT, SET SPLITTING)} \\ \hline \pbox{20cm}{
C1': \Tree [.$\forall x$ [.$\lor$ [.$[\phantom{a}]$ ] [.$\land$ [.$\forall y$ [.$(\exists u/x)$ [.$[\phantom a]$ ] ] ] [.$\forall z$ [.$(\exists v/x)$ [.$[\phantom a]$ ] ] ] ] ] ]
\\
} & \phantom{aaaaaaaaaaaaa}FO \phantom{aaaaaaaaaaaa} \\ \hline \pbox{20cm}{\phantom{a} Extensions$^*$ of C1' \\ } & C2'-C6' are FO; in general: ??? \\ \hline \pbox{20cm}{\phantom{a} \textbf{Modest} \phantom{a} \\ } & FO \\ \hline \end{tabular}
\normalsize
\pagebreak
\Large \begin{center} \textbf{APPENDIX} \end{center} \normalsize
\appendix
\renewcommand{\thesection}{\Alph{section}}
\section{Proof of theorem \ref{MODESTTREE}} \label{APPMODEST}
The key to proving theorem \ref{MODESTTREE} will be to show that it is possible to transform a modest sentence into prenex form while preserving the property of being modest. The following lemma shows that many of the transformations that we consider in this paper (with the notable exception of quantifier extraction) do not introduce new Henkin, signalling, generalized Henkin or coordinated patterns (in short: \textbf{non-modest} patterns). Therefore, when applied to a modest sentence, they produce a new modest sentence. This lemma will mostly be applied implicitly in what follows.
\begin{lemma} \label{SIMPLERULESPRESERVEMODESTY} The following transformations do not introduce new non-modest patterns in regular $IF$ sentences: \begin{enumerate}[a)]
\item Swapping independent quantifiers (Prop. \ref{SWAP}).
\item Swapping first-order universal quantifiers (Prop. \ref{UNISWAP}).
\item Removing the slash set of a universal quantifier (Prop. \ref{UNIDEP}).
\item Removing purely existential slash sets (Prop. \ref{DEPEX}).
\item Distributing universal quantifiers over conjunctions (Prop. \ref{DISTRIBUTION}).
\end{enumerate} \end{lemma}
\begin{proof} a) This rule does not change the dependencies between logical operators, therefore it cannot generate new non-modest patterns.
b) Just observe that the relative order and dependencies between universal quantifiers play no role in the definitions of non-modest patterns.
c) The slash sets of universal quantifiers play no role in the definitions of non-modest patterns.
d) Generalized Henkin and coordinated patterns contain no existential quantifiers with empty slash sets; so, they cannot be produced by this rule.
If an existential quantifier with empty slash set occurs in a signalling pattern, then it must be the second quantifier $\exists y$ in a sequence $\forall x\dots\exists y\dots (\exists z/Z)$. But then, if $Y$ is any set of existentially quantified variables (or more generally any set which does not contain $x$), then also $\forall x\dots(\exists y/Y)\dots (\exists z/Z)$ is a signalling pattern. So, this rule cannot produce new signalling patterns.
The case of Henkin patterns is treated similarly to the signalling case.
e) This transformation does not change the dependencies among quantifiers, therefore it cannot produce new signalling or Henkin patterns. Since it also does not involve disjunctions, it cannot produce new coordinated patterns. All it does in terms of dependencies is to make a conjunction independent of a universal quantifier; therefore it can eliminate a generalized Henkin pattern, but not introduce a new one. \end{proof}
We move towards a second lemma, which shows that regular, modest sentences can be required to be in a ``normal form'' with some properties that will be convenient for proving the main result.
\begin{df}
By the \textbf{depth} of a node $t$ in a tree $(T,\preceq_T)$ we will mean the cardinality of the set $\{s\in T \ | \ s\preceq_T t\}$ (the set of predecessors of $t$). \end{df}
Thus the root of a tree will have depth $1$, the immediate successors of the root depth $2$, and so on. We are departing from the more common convention that the root have depth $0$ for technical reasons that will become clear in the proof of the main result.
\begin{df} Let $\varphi$ be an $IF$ sentence, and $\circ$ an occurrence of a connective in $\varphi$. We say that $\circ$ is \textbf{frontline} in $\varphi$ if
1) $\circ$ has at least one quantifier in its scope (i.e., there is at least one quantifier occurrence $(Qv/V)$ such that $\circ\prec_\varphi (Qv/V)$)
2) if $\circ'$ is a connective in the scope of $\circ$ (i.e. $\circ\prec_\varphi\circ'$), then no quantifier is in the scope of $\circ'$. \end{df}
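To fix intuitions, here is a small example of our own (the sentence and its atomic formulas are hypothetical, chosen only for illustration):

```latex
% In this sentence the conjunction is frontline: it has the
% quantifier \exists y in its scope (condition 1), and the only
% connective in its scope, the disjunction, has no quantifier in
% its own scope (condition 2).  The disjunction itself is not
% frontline, since it fails condition 1.
\[
\forall x\,\bigl(\exists y\,(P(x,y)\lor Q(x,y)) \;\land\; R(x)\bigr)
\]
```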
\begin{lem}[Normalization] \label{LEMMANORM} Let $\varphi$ be a regular, modest sentence. Then
we can transform $\varphi$ into an equivalent regular, \CORR{modest} sentence $\varphi'$ satisfying the following requirements: \begin{enumerate}
\item \CORR{All universal quantifiers have empty slash sets.} \item \CORR{Nonempty slash sets contain at least one universally quantified variable.} \item If $\circ$ is a binary connective that is frontline in $\varphi'$, and $\circ \prec_{\varphi'} \forall c \prec_{\varphi'} (\exists d/D)$, then $d$ depends on $c$ (i.e. $c\notin D$).
\end{enumerate}
\end{lem}
\begin{proof}
1) and 2) can be ensured by applying the rules of Prop. \ref{UNIDEP} and \ref{DEPEX} (which preserve modesty, by lemma \ref{SIMPLERULESPRESERVEMODESTY} c)-d) ).
3)
\CORR{Let $\circ \prec_{\varphi} \forall c \prec_{\varphi} (\exists d/D)$ be operators in $\varphi$ such that $\circ$ is frontline and $c\in D$.} We show how to push $\forall c$ below $(\exists d/D)$ by quantifier swapping. By lemma \ref{SIMPLERULESPRESERVEMODESTY} a)-b), the sentence $\varphi'$ obtained in the end is still modest (and, of course, regular).
First of all, notice that there are no connectives between $\circ$ and $(\exists d/D)$ (since $\circ$ is frontline); so, we can use the quantifier swapping rule to rearrange the part of the sentence strictly between $\circ$ and $(\exists d/D)$ in Hintikka normal form (\cite{ManSanSev2011}, Theorem 5.45), as a sequence of universal quantifiers followed by a sequence of existential quantifiers.
Secondly, one can push $\forall c$ below the other universal quantifiers (using proposition \ref{UNISWAP}), until it is immediately above the sequence of existential quantifiers.
If the sequence of existential quantifiers begins with quantifiers that are independent of $\forall c$, swap them above $\forall c$.
Then, below $\forall c$ and before $\exists d$, find the first pair of existential quantifiers $(\exists u/U),(\exists v/V)$ such that: 1) $(\exists u/U)$ is immediately above $(\exists v/V)$, 2) $u$ depends on $c$, and 3) $v$ does not depend on $c$. Now, if $v$ depended on $u$, then $\forall c,(\exists u/U),(\exists v/V)$ would form a signalling sequence, contradicting the hypothesis that $\varphi$ is modest. So, $v$ is independent of $u$; thus, we can swap $(\exists v/V)$ above $(\exists u/U)$; then, for the same reason, we can push it above all the existential quantifiers that were between $\forall c$ and $(\exists u/U)$; and finally, above $\forall c$. Iterating this process, one can push above $\forall c$ all the existential quantifiers that are independent of $c$, including $(\exists d/D)$.
\end{proof}
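The iteration in step 3) can be illustrated on a toy configuration of our own (the variables $c,u,d$ are hypothetical). Suppose that below the frontline connective we find $\forall c\,\exists u\,(\exists d/\{c,u\})$; note that $u$ must belong to the slash set of $d$, since otherwise $\forall c,\exists u,(\exists d/\{c\})$ would be a signalling pattern, contradicting modesty. Two swaps of independent quantifiers (prop. \ref{SWAP}) then push $\forall c$ below the existential quantifier that was independent of it:

```latex
% Toy illustration (our example): each swap moves (\exists d/...)
% one step up; a variable leaves the slash set as soon as the
% corresponding quantifier is no longer above \exists d.
\[
\forall c\,\exists u\,(\exists d/\{c,u\})\,[\phantom a]
\;\leadsto\;
\forall c\,(\exists d/\{c\})\,\exists u\,[\phantom a]
\;\leadsto\;
\exists d\,\forall c\,\exists u\,[\phantom a]
\]
```

After the last swap, condition 3. of the normalization lemma holds for this configuration.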
Suppose we have a modest, normalized sentence, we extract a universal quantifier above one of its frontline connectives, and in the process a generalized Henkin pattern is formed. The following lemma shows that, in this particular case, we have a good amount of information concerning the structure of the generalized Henkin pattern. In some cases the same holds for coordinated patterns of the first kind.
\begin{lem}[Consequences of normalization]\label{LEMGENHEN}
Let $\varphi$ be a regular, modest $IF$ sentence in the normal form described in lemma \ref{LEMMANORM}. Let $\psi$ be a subformula of $\varphi$ of the form $\forall a\chi_1 \circ \chi_2$, where $a$ does not occur in $\chi_2$ and $\circ$ is a frontline connective of $\varphi$. Let $\varphi'$ be obtained by replacing $\psi$ with $\psi':\forall a(\chi_1 \circ (\chi_2)_{|a})$.\footnote{$\varphi'$ is equivalent to $\varphi$ by the strong extraction rule (proposition \ref{STRONGEXTRACTION}) and substitution of equivalents (proposition \ref{TEOSUBEQ}). }
a) Suppose $\varphi'$ contains a generalized Henkin pattern $\forall a,(\exists u/U),\forall z,(\exists v/V),\circ$, where $u$ depends on $a$ (but not on $z,v$) and $v$ depends on $z$ (but not on $a,u$). Then: \begin{enumerate}
\item $(\exists v/V)$ is in $(\chi_2)_{|a}$. \item $(\exists u/U)$ is in $\chi_1$. \item There is a quantifier $\forall b\prec_\varphi \circ$ such that $b\in V$. \end{enumerate}
b) Suppose $\varphi'$ contains a first kind coordinated pattern $\forall a,\forall y,\forall z,\lor,(\exists u/U),$ $(\exists v/V)$, where $u$ depends on $y$ (but not on $a,z$) and $v$ depends on $z$ (but not on $a,y$). Then this occurrence of $\lor$ is $\circ$, and 1.,2.,3. hold.
\end{lem}
\begin{proof} We begin with proving a).
1) Normalization excludes the possibility that $\chi_1$ contains an existential quantifier independent of $a$; so, $(\exists v/V)$ is not in $\chi_1$. It is not above $\circ$, either, because otherwise it would be in the same branch as $(\exists u/U)$, contradicting the fact that these two quantifiers are part of a generalized Henkin pattern. The third and last possibility is that there is a connective $\circ'\prec_{\varphi'} \circ$ such that $(\exists v/V)$ occurs in the subformula immediately below $\circ'$ which does not contain $\circ$. We show that this is impossible. Since $\forall a\prec_{\varphi}(\exists u/U)$, we see that $\forall a,(\exists u/U),\forall z,(\exists v/V),\circ'$ already formed a generalized Henkin pattern in $\varphi$: this contradicts the assumption that $\varphi$ was modest. So we can conclude:
(*) $(\exists v/V)$ is in $(\chi_2)_{|a}$.
2) We want to show that, instead, $(\exists u/U)$ is in $\chi_1$. It cannot be above $\circ$ (otherwise it would also be above $\forall a$, while we know it depends on it). For similar reasons, it cannot be in a branch which does not contain $\circ$. Finally, if it were in $(\chi_2)_{|a}$, we already know that it must be on a different branch with respect to $(\exists v/V)$; thus there is a connective $\circ'$ in $(\chi_2)_{|a}$ such that $\circ'\prec_{\varphi'}(\exists u/U),(\exists v/V)$. But this is impossible, since $\circ \prec_{\varphi'}\circ'$ and $\circ$ is frontline in $\varphi'$. So, $(\exists u/U)$ is in $\chi_1$.
3) Given (*) and the fact that $v$ does not depend on $a$, we can conclude that $a\in V$. The fact that the strong extraction rule was used implies that $V\setminus\{a\}\neq\emptyset$; i.e., there is a quantifier $Qb\prec_{\varphi'}(\exists v/V)$, with $b\neq a$, such that $b\in V$. By the normalization assumption 2., $Q$ is $\forall$. By the normalization assumption 3., there is at least one connective $\circ'$ occurring between $\forall b$ and $(\exists v/V)$; since $\circ$ is frontline, we conclude that $\circ'\preceq\circ$. This immediately entails that $\forall b \prec_{\varphi'}\circ$.
The statement b) has a completely analogous proof, once one proves that $\circ$ is the specific occurrence of $\lor$ which is part of the coordinated pattern. We prove this point. By definition of coordinated pattern we have $\forall a\prec_{\varphi'}\lor\prec_{\varphi'} (\exists u/U),(\exists v/V)$. However, since $\forall a\prec_{\varphi'}\lor$ and $\forall a$ is immediately above $\circ$, we also have $\circ \preceq_{\varphi'}\lor$. Since $\circ \preceq_{\varphi'}\lor$, $\circ$ is frontline, and there are quantifiers in the scope of $\lor$, we must conclude that $\circ$ and $\lor$ coincide.
\end{proof}
We will say that operators $(\forall x/X), \land, (\forall y/Y), (\forall z/Z), (\exists u/U), (\exists w/W)$ occurring in a syntactical tree form a \textbf{$\land$-coordinated pattern} if\\ 0) $(\forall x/X), \land, (\forall y/Y), (\forall z/Z), (\exists u/U), (\exists w/W)$ do not all occur in a same branch\\ 1) $u$ depends on $y$, but not on $x, z, w$ \\ 2) $w$ depends on $z$, but not on $x, y, u$\\ 3) $\land$ is in the scope of $\forall x$, and $(\exists u/U), (\exists w/W)$ are in the scope of $\land$.\\ (These are just the conditions for coordinated patterns, restated for $\land$ instead of $\lor$).
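For concreteness, here is a sentence of our own construction that contains a $\land$-coordinated pattern, with $\forall x,\land,\forall y,\forall z,(\exists u/\{x\}),(\exists w/\{x\})$ playing the roles listed in conditions 0)--3):

```latex
% Our illustrative example: u depends on y but not on x (and not
% on z,w, which lie on the other branch of \land); w depends on z
% but not on x (nor on y,u); \land is in the scope of \forall x;
% and the six operators do not all occur on a same branch.
\[
\forall x\,\bigl(\,\forall y\,(\exists u/\{x\})\,[\phantom a]
\;\land\;
\forall z\,(\exists w/\{x\})\,[\phantom a]\,\bigr)
\]
```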
\begin{lem}[Careful extraction does not increase the number of $\land$-coordinated patterns] \label{LEMANDCOORD}
Let $\varphi$ be a regular $IF$ sentence which is normalized according to lemma \ref{LEMMANORM}. Let $\psi$ be a subformula of $\varphi$ of the form $(Qa/A)\chi_1 \circ \chi_2$, where $a$ does not occur in $\chi_2$ and $\circ$ is a frontline connective of $\varphi$. Let $\varphi'$ be obtained by replacing $\psi$ with $\psi':(Qa/A)(\chi_1 \circ (\chi_2)_{|a})$.\footnote{$\varphi'$ is equivalent to $\varphi$ by the strong extraction rule (proposition \ref{STRONGEXTRACTION}) and substitution of equivalents (proposition \ref{TEOSUBEQ}). } Then $\varphi'$ has no new $\land$-coordinated patterns with respect to $\varphi$. \end{lem}
\begin{proof} Suppose that after replacing $\psi$, the resulting sentence $\varphi'$ has some extra $\land$-coordinated pattern. Obviously the new pattern must contain $(Qa/A)$.
Suppose first that $(Qa/A)$ is $\forall a$. Since in the definition of $\land$-coordinated pattern it does not matter whether the quantifiers over $y$ and $z$ occur above or below $\land$, we must conclude that our $\forall a$ plays the part of $(\forall x/X)$ in the definition of $\land$-coordinated pattern. So the pattern contains two existential quantifiers $(\exists u/U)$ and $(\exists v/V)$ such that $a\in U$ and $a\in V$. Since $\circ$ is frontline in $\varphi$, no connective occurring in $\chi_1$ or $\chi_2$ has a quantifier in its scope.
Then, since the $\land$ of the pattern must have both existential quantifiers in its scope, $(\exists u/U)$ and $(\exists v/V)$ cannot both occur in the same $\chi_i$. We can then assume w.l.o.g. that $(\exists u/U)$ is in $\chi_1$. Since $\forall a$ occurred in the scope of $\circ$ in $\varphi$, and $a\in U$, this contradicts the assumption that $\varphi$ was normalized (see point 3. of lemma \ref{LEMMANORM}).
Suppose instead that $Q$ is $\exists$. Then we are asserting that $(\exists a/A)$ forms, in $\varphi'$, a new $\land$-coordinated pattern together with some operators, none of which occurs in the scope of $(\exists a/A)$ in $\varphi'$. However, this is impossible, because replacing $\psi$ with $\psi'$ only changes the dependency relations between $(\exists a/A)$ and some operators in its scope (in $\varphi'$). \end{proof}
By simpler arguments, one can see that also some other equivalence rules do not increase the number of $\land$-coordinated patterns.
\begin{lem}\label{LEMANDCOORD2} Let $\varphi$ be a regular $IF$ sentence, and $\varphi'$ be obtained by applying to $\varphi$ a sequence of the following transformations: \begin{enumerate} \item Swapping independent quantifiers (prop. \ref{SWAP}) \item Swapping universal quantifiers (prop. \ref{UNISWAP}) \item Distributing a universal quantifier over a conjunction (prop. \ref{DISTRIBUTION}). \end{enumerate} Then $\varphi'$ contains no new $\land$-coordinated patterns with respect to $\varphi$. \end{lem}
Suppose we have a normalized sentence $\varphi$, and suppose we extract a quantifier above one of the frontline connectives, say $\circ$. It might then happen that in the new sentence $\varphi'$ there are no more quantifiers in the scope of $\circ$: this implies that the set of frontline connectives is now different ($\circ$ is not anymore frontline, and possibly there is a new frontline connective $\circ'$). As a consequence, it may happen that $\varphi'$ is not in normal form. In the next lemma, we point out that it is then possible to renormalize $\varphi'$ in a way that impacts only ``locally'' the structure of the sentence.
\begin{df} Let $\varphi$ be a sentence and $c$ be a node in its syntactical tree. If there is a frontline connective $\circ$ such that $\circ\prec_\varphi c$, then we say that $c$ is in the \textbf{lower part} of $\varphi$; otherwise, we say it is in the \textbf{upper part} of $\varphi$. \end{df}
\begin{lemma}[Local renormalization]\label{LEMRENORM} Let $\varphi$ be a regular $IF$ sentence which is normalized in the sense of lemma \ref{LEMMANORM}, and let $\circ$ be a frontline connective of $\varphi$. Suppose $\varphi'$ is obtained by extracting a quantifier above $\circ$ by using proposition \ref{STRONGEXTRACTION}. Then there is a regular, normalized $IF$ sentence $\varphi''$ such that $\varphi'\equiv\varphi''$, and which has the same upper part as $\varphi'$.
Furthermore, a) if $\varphi'$ is modest, then also $\varphi''$ is, and b) $\varphi'$ and $\varphi''$ have the same number of $\land$-coordinated patterns. \end{lemma}
\begin{proof} As a first case, suppose that in $\varphi'$ there are no frontline connectives $\circ'\preceq_{\varphi'} \circ$. This means that the frontline connectives of $\varphi'$ are a proper subset of those of $\varphi$; thus, $\varphi'$ already satisfies condition 3. of normalization. Then one can proceed as in the normalization lemma to ensure that conditions 1. and 2. hold within the scope of $\circ$. It is obvious that the resulting sentence $\varphi''$ satisfies the statement.
Suppose instead that there is a connective $\circ'\preceq_{\varphi'}\circ$ which is frontline in $\varphi'$. We observe that, by definition of frontline connective, such $\circ'$ is unique (it is either $\circ$ itself, or the connective of maximum depth among those that have $\circ$ in their scopes). Observe then that each node in the scope of $\circ'$ is in the lower part of $\varphi'$. Now apply the transformations described in the normalization lemma (\ref{LEMMANORM}) to the subformula which has $\circ'$ as its most external operator; clearly, these transformations do not affect the nodes outside the subformula; so, in particular, they do not affect the upper part of $\varphi'$. We can then take $\varphi''$ to be the formula thus obtained.
For a), we have already observed in lemma \ref{LEMMANORM} that the transformations used in the normalization process preserve modesty.
For b), observe once more that the transformations used in the normalization process (i.e., the quantifier swapping rules \ref{SWAP} and \ref{UNISWAP}) do not change the dependencies between quantifiers and connectives; therefore, they cannot create or eliminate $\land$-coordinated patterns. \end{proof}
\begin{df}
By the \textbf{connective-depth} of a node $t$ in a tree $(T,\preceq_T)$ we will mean the cardinality of the set $\{\circ\in T \ | \ \circ = \land \text{ or } \lor \text{, and } \circ\preceq_T t\}$. \end{df}
\noindent Notice: if a connective $\circ\in\{\land,\lor\}$ occurs in $T$, then $\circ$ has connective-depth at least $1$.
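As a quick sanity check (our example): in the sentence below, the conjunction has connective-depth $1$, while the disjunction has connective-depth $2$.

```latex
% Connective-depths (our example):
%   \land : 1   (the only connective weakly above it is itself)
%   \lor  : 2   (\land and \lor lie weakly above it)
\[
\forall x\,\bigl(\,([\phantom a]\lor[\phantom a])\,\land\,[\phantom a]\,\bigr)
\]
```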
\begin{teo} [Theorem \ref{MODESTTREE} in the main text] a) All regular, modest $IF$ tree prefixes are in FO.\\ b) Every regular, modest $IF$ sentence can be transformed, by means of equivalence rules, into a first-order sentence. \end{teo}
\begin{proof}
Let $T$ be a regular, modest tree and $\varphi$ be a sentence which is a completion of $T$. We can also assume without loss of generality that $\varphi$ is normalized as in Lemma \ref{LEMMANORM}. We want to show that $\varphi$ is equivalent to some other sentence which is in prenex form, and still regular and modest (thus, primary). By Sevenster's result (Prop. \ref{SEVCLASSIFY}), this last sentence is then equivalent to some first-order sentence. This yields both a) and b).
We face a problem: extraction of quantifiers does not preserve \emph{in general} the modesty of the tree (or the sentence). We will thus need to choose carefully the order in which the extractions are performed. And in one case (case 2c.C below) we will not be able to apply extraction at all; in that case, we will have to push some quantifiers in the opposite direction, using quantifier distribution and swapping. However, each time this case is reached, we will reduce by one the number of $\land$-coordinated patterns occurring in the sentence; and lemma \ref{LEMANDCOORD} will guarantee that this number is never increased throughout the procedure. So, the transformations given in case 2c.C will be applied at most a finite number of times; this guarantees that the procedure we describe terminates, yielding a regular, modest sentence in prenex form.
To be more precise, our procedure will consist in applying $2n$ transformations \[ \varphi = \varphi_1 \stackrel{E_1}{\mapsto} \varphi_2 \stackrel{N_1}{\mapsto} \varphi_3 \stackrel{E_2}{\mapsto} \dots \stackrel{E_n}{\mapsto}\varphi_{2n} \stackrel{N_n}{\mapsto} \varphi_{2n+1} \] each of which preserves truth-equivalence and modesty; each consecutive application of $E_i$ followed by $N_i$ will also preserve normalization. Typically, $E_i$ will be an application of the strong extraction rule (prop. \ref{STRONGEXTRACTION}, possibly preceded by a renaming, prop. \ref{VARIANT}), applied to a subformula which has a frontline connective as its most external operator; while $N_i$ will be a local renormalization (lemma \ref{LEMRENORM}). Exceptionally, when case 2c.C, described below, is reached, $E_i$ will be a sequence of applications of quantifier swapping (prop. \ref{SWAP} and \ref{UNISWAP}) and quantifier distribution (prop. \ref{DISTRIBUTION}). The sentence $\varphi_{2n+1}$ which is reached at the end will be the sentence that we need (i.e., it will be regular, modest and prenex). To understand that we can really reach such a sentence in a finite number of steps, consider the following counters: \[ k: \text{sum of the connective-depths of frontline connectives in the current sentence} \] \[ l: \text{number of $\land$-coordinated patterns in the current sentence.} \] As long as $k>0$, it is possible to extract quantifiers that lie immediately below some frontline connective; therefore the procedure can go forward. If $k$ is $0$, instead, there are no more frontline connectives (remember that occurrences of connectives have at least connective-depth $1$!); thus, the current sentence is prenex (and regular and modest, since the $E_i$ and $N_i$ preserve these two properties), and so we are done.
Each quantifier extraction reduces the number of quantifiers occurring below a frontline connective without increasing $k$; when the last quantifier occurring under such a connective is extracted, $k$ is reduced at least by one. When quantifier distribution (case 2c.C) is applied, then the number of quantifiers under some connective increases, and possibly also $k$; however, $l$ is decreased by $1$. It will be seen that $l$ is never increased; and that, once $l$ is $0$, the case 2c.C cannot be reached anymore. So, quantifier distribution is applied only a finite number of times; after this, the counter $k$ will not increase anymore.
Let us fix some notation to describe what happens when $E_i$ is applied. So, let $\circ$ be a frontline connective of $\varphi_{2i-1}$.
Since $\circ$ is frontline, immediately below $\circ$ there occurs a quantifier $(Qa/A)$.
Let $L$ be the subformula immediately below $(Qa/A)$, and $R$ be the subformula immediately below $\circ$ that does not contain this occurrence of $(Qa/A)$. First, if needed, we use renaming (proposition \ref{VARIANT}) to replace $a$ with a new variable (but in the following we will keep writing $a$); then we use the strong extraction rule, which also replaces $R$ with $R' = R_{|a}$:
\Tree [.\vdots\\$\circ$ [.$Qa/A$ $L$ ] $R$ ] \Tree [.\vdots\\$Qa/A$ [.$\circ$ $L$ $R'$ ] ]
We call $\varphi_{2i}$ the resulting sentence. By lemma \ref{LEMANDCOORD}, the number of $\land$-coordinated patterns does not increase with this transformation. In the last part of the proof we will check that the sentence resulting after the extraction is still modest (except in case 2c.C, which is treated differently). Before that, we underline that the sentence $\varphi_{2i}$ resulting after extraction may fail to be normalized; normalization is then restored when applying $N_i$, that is, the local renormalization described in lemma \ref{LEMRENORM}. This lemma tells us that the renormalized sentence has the same number of $\land$-coordinated patterns and the same upper part as $\varphi_{2i}$, that is, in particular, the same frontline connectives. So neither counter $k$ nor $l$ is increased by $N_i$.
In the rest of the proof we check that, after extraction, the resulting sentence is still modest, and, in the one case when extraction does not preserve modesty (case 2c.C), we explain what transformations should be applied.
1) Suppose $Q=\exists$.
Then the new sentence $\varphi_{2i}$ obtained after quantifier extraction is not \underline{signalling} (the existential quantifiers in $R'$ are either first-order, in which case they cannot play the role of the rightmost quantifier of a signalling pattern, or they have nonempty slash set, in which case $a$ has been added to their slash set in $R'$, and they cannot receive signals from $\exists a$).
Suppose $\varphi_{2i}$ is \underline{Henkin}. Then there are quantifiers $\forall x,\forall y,$ $(\exists b/B)$ such that $\forall x,(\exists a/A),\forall y,(\exists b/B)$ form a Henkin pattern in $\varphi_{2i}$ (say, $a$ depends on $x$). But then the logical operators $\forall x,(\exists a/A),\forall y,(\exists b/B'),\circ$ (where \CORR{$B' = B\setminus \{a\}$}) formed a generalized Henkin pattern in $\varphi_{2i-1}$: contradiction.
$\varphi_{2i}$ cannot be \underline{generalized Henkin} nor \underline{co\phantom{g}\hspace{-5pt}ordinated}, because in $\varphi_{2i}$ there are no new universal-existential dependence pairs, no new universal-connective dependence pairs and no new connective-existential dependence pairs with respect to $\varphi_{2i-1}$.
2) Suppose $Q=\forall$.
2a) If $\varphi_{2i}$ is \underline{signalling}, then this must be witnessed by $\forall a$ itself (otherwise, the signalling pattern would have been already in the original tree) and by two existential quantifiers $(\exists u/U),(\exists v/V)$ occurring in $R'$ and such that $a\notin U$, $a\in V$, $u\notin V$. We can conclude that $U$ is empty: otherwise, after applying strong extraction, we would have $a\in U$.
Furthermore, we have that the slash set of $v$ contains another variable $b$ (otherwise, the strong extraction rule would have preserved the empty slash set); by our normalization assumptions, we can assume $b$ is universally quantified above $\circ$. But then, the quantifiers over $b,u,v$ prove that $\varphi_{2i-1}$ was signalling: contradiction.
2b) Suppose $\varphi_{2i}$ is \underline{Henkin}. Then there are quantifiers $(\exists u/U),\forall b,(\exists v/ V)$ in $\varphi_{2i}$ such that $u$ depends on $a$ but not on $b$ nor $v$; and $v$ depends on $b$ but not on $a$ nor $u$.
Notice that, since $(\exists u/U)$ and $(\exists v/ V)$ are in the same branch in $\varphi_{2i}$, they are also in the same branch in $\varphi_{2i-1}$. But they cannot be above $\circ$ (otherwise $\forall a,(\exists u/U),\forall b,(\exists v/ V)$ would form a Henkin pattern already in $\varphi_{2i-1}$); and for similar reasons they cannot be in $L$. So, $(\exists u/U)$ and $(\exists v/ V)$ are both in $R'$.
Since strong extraction is used, $a\notin U$ implies that $U = \emptyset$;
but then, since $u$ does not depend on $b$ nor on $v$, we must conclude that $\exists u\prec_{\varphi_{2i}} (\exists v/V), \forall b$, and that $u\in V$. From the former it also follows that $\forall a \prec_{\varphi_{2i}}(\exists v/V)$; so, since $v$ does not depend on $a$, we must conclude that $a\in V$.
The facts that $a\in V$, that $(\exists v/V)$ is in $R'$, that we have applied strong extraction, together with the normalization assumption 2. imply that $V$ contains at least one universally quantified variable distinct from $a$; that is, in $\varphi_{2i}$ there is a universal quantifier $\forall c$ (distinct from $\forall a$) occurring above $(\exists v/ V)$ and such that $c \in V$.
Now notice that $\forall c$ cannot occur above $\exists u$: if it did, then $\forall c,\exists u, (\exists v/V)$ would be a signalling pattern already occurring in $\varphi_{2i-1}$: contradiction. So, $\exists u \prec_{\varphi_{2i-1}} \forall c\prec_{\varphi_{2i-1}}(\exists v/V)$: therefore, $\forall c$ is in $R'$, hence in the scope of $\circ$. But then, since we assumed that $\varphi_{2i-1}$ was normalized, we must have $c\notin V$: contradiction.
2c) Suppose that after applying strong extraction to a frontline connective of $\varphi_{2i}$ we obtain a sentence $\psi$ which is \underline{generalized Henkin}, as witnessed by quantifiers $\forall a,(\exists u/U),\forall z,(\exists v/V)$ such that $u$ depends on $a$ (but not on $z,v$) and $v$ depends on $z$ (but not on $a,u$). Since we are assuming that $\varphi_{2i}$ is regular, modest and normalized, lemma \ref{LEMGENHEN}, a) tells us that $(\exists u/U)$ is in $L$, $(\exists v/V)$ is in $R'$, and there is a quantifier $\forall b \prec_{\psi}\circ$ such that $b\in V$. These observations can be summarized by saying that $\psi$ has the following form:
\Tree [.$\vdots$\\$\forall b$\\$\vdots$\\$\forall a$ [.$\circ$ [.$\vdots$\\$(\exists u/U)$\\$\vdots$ ] [.$\vdots$\\$(\exists v/\{a,b,\dots\})$\\$\vdots$ ] ] ]
The quantifier $\forall z$ is not shown in the picture; we only know that it is somewhere above $(\exists v/V)$ (since $v$ depends on $z$). Now there are three cases.
A) $u$ depends on $b$. Then $\varphi_{2i-1}$ was already generalized Henkin (as witnessed by $\forall b,(\exists u/U),\forall z, \circ,(\exists v/V)$): contradiction. In this case, one can take $\varphi_{2i}$ to be $\psi$.
B) $u$ does not depend on $b$, and there is a disjunction $\forall b \prec_{\psi} \lor \preceq_{\psi} \circ$. Then $\varphi_{2i-1}$ was coordinated (as witnessed by $\forall b,\forall z,\forall a,\lor,(\exists u/U),(\exists v/V)$): a contradiction. Also in this case, one can take $\varphi_{2i}$ to be $\psi$.
C) $u$ does not depend on $b$, and there is no disjunction $\forall b \prec_{\psi} \lor \preceq_{\psi} \circ$. In this case we find no contradiction; so, instead of using quantifier extraction, we will apply some different transformations that will preserve modesty and reduce the number of $\land$-coordinated patterns by one. Observe that $\forall a,\circ,\forall b, \forall z, (\exists u/U)$ and $(\exists v/V)$ form a $\land$-coordinated pattern in $\varphi_{2i}$; therefore, $\forall a,\circ,\forall b, \forall z, (\exists u/U)$ and $(\exists v/V')$ (where $V'= V\setminus\{a\}$) already formed a $\land$-coordinated pattern in $\varphi_{2i-1}$.
Now notice that there are no existential \CORR{quantifiers} depending on $b$ and above $(\exists u/U)$ and $(\exists v/V)$ -- otherwise the tree would be either signalling or Henkin. So, $\forall b$ can be pushed down by quantifier swapping (prop. \ref{SWAP} and \ref{UNISWAP}) and distribution (\ref{DISTRIBUTION}), until it goes below $\circ$. Call $\varphi_{2i}$ the resulting sentence. It should be clear that $\varphi_{2i}$ is still modest (the only dependency relations that have changed are those between $\forall b$ and some occurrences of $\land$; these changes cannot create any new non-modest patterns). Also, by lemma \ref{LEMANDCOORD2}, the number $l$ of $\land$-coordinated patterns did not increase in the process, while the operators $\forall a,\circ,\forall b, \forall z, (\exists u/U)$ and $(\exists v/V)$ do not form such a pattern anymore; so $l$ has decreased by one.\footnote{At later stages of the procedure, the steps described in this case C) will be applied to each of the universal quantifiers that play the same role as $\forall b$ (with respect to $\forall a$ and $\circ$). Only after all these quantifiers have been pushed down will it be possible to extract $\forall a$ above $\circ$.}
2d) Now, suppose instead $\varphi_{2i}$ is \underline{coordinated, first kind}. It means that, in the new tree, $\forall a$ plays either the role of $\forall x$ or that of $\forall y$ in the definition of coordinated tree.
In the first case, we observe that in $\varphi_{2i}$ there are quantifiers $\forall z,\forall y$ and quantifiers $(\exists u/U),(\exists v/V)$ such that $u$ depends on $y$ but not on $a$ nor $z$, while $v$ depends on $z$ but not on $a$ nor $y$; and a disjunction $\forall a\prec_{\varphi_{2i}}\lor\prec_{\varphi_{2i}} (\exists u/U),(\exists v/V)$. Lemma \ref{LEMGENHEN}, b) then tells us that $\circ$ and $\lor$ coincide, $(\exists u/U)$ is in $L$, $(\exists v/V)$ is in $R'$, and there is a quantifier $\forall b \prec_{\psi}\circ$ such that $b\in V$.
We then check again the three subcases A), B), C) that were already considered in case 2c); cases A) and B) work as before.
Case C) cannot occur, because we already know that $\circ$ is an occurrence of $\lor$.
In the second case ($\forall a$ playing the role of \CORR{$\forall y$} in the definition of coordinated tree), to fix notation, $\forall a$ forms a first kind coordinated pattern in $\varphi_{2i}$ together with quantifiers $\forall x,\forall z, (\exists u/U), (\exists v/V)$ (s.t. $u$ depends on $a$ but not on $x$ nor $z$, and $v$ depends on $z$ but not on $x$ nor $a$) and an occurrence of $\lor$. We want to show that this occurrence of $\lor$ is $\circ$. We first observe that $u$ depends on $a$, but $U$ is nonempty; so $\forall a \prec_{\varphi_{2i-1}}(\exists u/U)$ (otherwise quantifier extraction would have added $a$ to $U$); so $(\exists u/U)$ is in $L$ (the left subformula under $\circ$). Next we prove the following
\underline{Claim}: $(\exists v/V)$ is in $R'$.
\begin{proof} If $(\exists v/V)$ occurred above $\forall a$, then the quantifiers $\forall x,\forall z, (\exists u/U), (\exists v/V)$ would all be on the same branch, contradicting the definition of coordinated pattern. If $(\exists v/V)$ occurs in the scope of $\forall a$, then it did so already in $\varphi_{2i-1}$, which is impossible by point 3. of normalization. Finally, it might be that there is a connective $\circ'\prec_{\varphi_{2i}}\circ$ such that $(\exists v/V)$ occurs in the subformula immediately below $\circ'$ which does not contain $\circ$. But we know that $(\exists u/U)$ is in $L$, so that already in $\varphi_{2i-1}$ we had $\forall a \prec_{\varphi_{2i-1}} (\exists u/U)$. This entails that $\forall x,\forall a,\forall z, (\exists u/U), (\exists v/V),\lor$ was already a coordinated pattern in $\varphi_{2i-1}$, contradicting the assumption that $\varphi_{2i-1}$ is modest. \end{proof}
But then $\circ$ is the unique connective which has $(\exists u/U)$ in its left subformula and $(\exists v/V)$ in its right subformula; so $\circ$ is the occurrence of $\lor$ which is part of the coordinated pattern. This and the fact that $\forall a \prec_{\varphi_{2i-1}} (\exists u/U)$ imply that $\forall x,\forall z, \forall a,(\exists u/U), (\exists v/V),\circ$ is a coordinated pattern already in $\varphi_{2i-1}$: contradiction.
2e) Suppose $\varphi_{2i}$ is \underline{coordinated, second kind}. Then, $\varphi_{2i-1}$ contains a coordinated pattern $\forall a,\forall y,\forall z,\lor,(\exists u/U),(\exists v/V)$ of second kind; so, $(\exists u/U),(\exists v/V)$ both occur in a same subformula under this occurrence of $\lor$. By definition of coordinated pattern, $(\exists u/U),(\exists v/V)$ must occur in different branches; this means that there is exactly one connective $\circ'$ which has $(\exists u/U)$ in its left subformula and $(\exists v/V)$ in its right subformula (this is also the connective of maximum depth among those that have $ (\exists u/U)$ and $(\exists v/V)$ in their scope).
Now suppose first that both $u$ and $v$ do not depend on $a$ (i.e., $\forall a$ plays the role of $\forall x$ in the definition of coordinated pattern). Then, again by definition of coordinated pattern, we also have that $\forall a\prec_{\varphi_{2i}}\lor$; and furthermore, $\forall a \prec_{\varphi_{2i}} (\exists u/U),(\exists v/V)$, and consequently $\circ \prec_{\varphi_{2i}} (\exists u/U),(\exists v/V)$. Then it follows that $\circ \prec_{\varphi_{2i}}\circ'$. But then, since $\circ$ is frontline and there are quantifiers in the scope of $\circ'$, we must conclude that $\circ'$ is $\circ$. But then $\lor \prec_{\varphi_{2i}}\circ$; and since $\forall a$ is the immediate predecessor of $\circ$ in $\varphi_{2i}$, we also have $\lor\prec_{\varphi_{2i}}\forall a$, contradicting our initial assumption.
Suppose instead that $\forall a$ plays the role of $\forall y$ in the definition of coordinated pattern, that is: $u$ depends on $a$, and $v$ depends on $z$, $\forall y\prec_{\varphi_{2i}}\lor$ and neither $u$ nor $v$ depends on $y$. Since $u$ depends on $a$, and $U$ is nonempty ($y\in U$), we must conclude that $\forall a \prec_{\varphi_{2i-1}}(\exists u/U)$ (otherwise the strong extraction would have added $a$ to the slash set of $u$). Observe then that $\forall a,\forall y,\forall z,\lor,(\exists u/U),(\exists v/V)$ was already a second kind coordinated pattern in $\varphi_{2i-1}$: contradiction. \end{proof}
\section{Proof of the Extension Lemma (lemma \ref{EXTENDLEMMA})} \label{APPEXTLEM}
We point out the simple fact that, if $U$ extends $T$, then $U$ can be ``constructed'' adding one by one new logical operators to $T$, and updating the slash sets.
\begin{lem}\label{LEMCONSTRUCT} Suppose $U$ extends $T$. Then there is a finite sequence $T=T_0, T_1,\dots,T_{n+1} =U$ of regular tree prefixes such that, for each $i=0,\dots,n$, $T_{i+1}$ is obtained from $T_i$ in one of two ways: \begin{enumerate} \item $T_{i+1}$ is obtained by adding to $T_i$ a connective $\circ$ and one extra gap below it. \item $T_{i+1}$ is obtained by adding to $T_i$ a quantifier $(Qv/V)$, and adding $v$ to some of the slash sets in the scope of $(Qv/V)$. \end{enumerate} \end{lem}
\noindent Notice that we must add the quantifiers starting from those of maximum depth, if we want to keep track correctly of what variables must be added to the slash sets. And on the other hand, if $c,d,e$ are in $U\setminus \mu[T]$ and $c,e$ are to occur in distinct branches under the connective $d$, then it is possible to add both operators $c$ and $e$ only after $d$ has been added (otherwise we do not have enough branches). These two factors make the construction not completely obvious.
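As a small illustration of the second constraint (with a hypothetical pair of prefixes): if $U$ adds a connective $d$ with new quantifiers in both branches below it, then $d$ must be added before either of those quantifiers, since $T$ does not yet have two gaps below that position. For instance, one legitimate construction sequence is:

```latex
% T = \forall x\,[\ ], \qquad U = \forall x\,(\exists y\,[\ ] \land \exists z\,[\ ])
T_0 = \forall x\,[\ ] \ \leadsto\
T_1 = \forall x\,([\ ] \land [\ ])   % the connective (and an extra gap) must come first
\ \leadsto\
T_2 = \forall x\,(\exists y\,[\ ] \land [\ ])
\ \leadsto\
T_3 = \forall x\,(\exists y\,[\ ] \land \exists z\,[\ ]) = U
```

Adding $\exists y$ or $\exists z$ before the conjunction is impossible, because $T_0$ has only one gap.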
\begin{proof} We prove, by induction on the depth of the operators in $U\setminus \mu[T]$, the claim together with the additional statement that $U$ is an extension of $T_{i+1}$ via a function $\mu_{i+1}$. Suppose $T_i$ has been constructed. Pick a connective $c$ of \emph{minimum} depth in $U\setminus \mu[T_i]$, if there is one. Add it to $T_i$ in the same position in which it occurs in $U$; add a gap below $c$ so that $c$ has exactly two nodes below (so that the resulting $T_{i+1}$ is a syntactical tree). Let $\mu_{i+1} = \mu_i\cup\{(c',c'')\}$ (where $c'$ is the occurrence of $c$ in $T_{i+1}$, and $c''$ the occurrence in $U$). It is straightworward to check that, if $T_i$ is regular, also $T_{i+1}$ is; and that, if $U$ is an extension of $T_i$, then it is also an extension of $T_{i+1}$.
When a tree $T_j$ is reached such that $U\setminus\mu_j[T_j]$ contains no connectives, we begin treating the quantifiers. Pick a quantifier $(Qv/V)$ of \emph{maximum} depth in $U\setminus\mu_j[T_j]$ and add it to $T_j$ in the same position in which it occurs in $U$. Then, for each quantifier $(Q'u/U)$ in the scope of $(Qv/V)$, add $v$ to the slash set of $(Q'u/U)$ if and only if $v$ is in the slash set of $\mu_j( (Q'u/U) )$.
Let $\mu_{j+1} = \mu_j\cup\{((Qv/V),(Qv/V))\}$ (where again the former is the occurrence in $T_{j+1}$, and the latter the occurrence in $U$). Regularity of $T_{j+1}$, and the fact that $U$ is an extension of $T_{j+1}$, can be checked as above.
\end{proof}
We will need some lemmas from the literature, which describe some regularities in the interaction between teams and $IF$ formulas. Given two teams $X$, $Y$ of disjoint domains, we define $X\times Y:=\{s\cup t \ | \ s\in X, t\in Y\}$. Notice that this is not the cartesian product of $X$ and $Y$.
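For concreteness, the operation $X\times Y$ can be sketched computationally. This is an illustrative aside, not part of the paper's formal development; the representation of assignments as frozensets of (variable, value) pairs is our own choice, standing in for functions with disjoint domains.

```python
# Teams as sets of assignments; an assignment is a frozenset of (variable, value)
# pairs, so that s | t below is the union of two functions with disjoint domains.

def join(X, Y):
    """The team X x Y = {s ∪ t : s in X, t in Y}, for teams over disjoint domains."""
    return {s | t for s in X for t in Y}

X = {frozenset({("x", 1)}), frozenset({("x", 2)})}   # team with domain {x}
Y = {frozenset({("y", 3)})}                          # team with domain {y}

# Since the domains are disjoint, join(X, Y) contains |X| * |Y| assignments,
# each with domain {x, y} -- unlike a Cartesian product of sets of pairs.
```
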
\begin{lem}[\cite{ManSanSev2011}, Theorem 5.5] \label{LEMCART} Let $\psi$ be an $IF$ formula, $M$ a structure, $X,Y$ teams with $dom(X)\supseteq FV(\psi)$, $dom(X)\cap dom(Y)=\emptyset$. Then $M,X\models \psi$ iff $M,X\times Y\models \psi$. \end{lem}
\begin{lem}[\cite{ManSanSev2011}, Theorem 5.8] \label{LEMEXTTEAM}
Let $\psi$ be an $IF$ formula, $M$ a structure, $X,Y$ teams with $dom(X)\supseteq FV(\psi)$ and $dom(Y) = dom(X)\cup V$, where $V$ is a set of variables that do not occur in $\psi$ nor $dom(X)$. Then $M,X\models \psi$ iff $M,Y\models \psi_{/V}$. \end{lem}
\begin{lem}[\cite{ManSanSev2011}, Theorem 5.22b] \label{LEMEASIER} For all $IF$ formulas $\psi$, structures $M$ and teams $X$, if $M,X\models (\exists v/V)\psi$ and $W\subseteq V$, then $M,X\models (\exists v/W)\psi$. \end{lem}
The following lemma is crucial for keeping the signalling phenomena under control. The key idea is that adding a subformula of the form $v=c$ prevents the variable $v$ from being used as a signal.
\begin{lem}\label{LEMADDQF} Let $\psi$ be an $IF$ formula, $v$ a variable which does not occur in $\psi$, $c$ a constant symbol. Let $\psi'$ be a formula obtained from $\psi$ by adding $v$ to some of the slash sets (as a limit case, $\psi'=\psi$). Then we have the following equivalences: \begin{enumerate}
\item $\psi\equiv_v (\exists v/V)(v=c \land \psi')$. \item $\psi\equiv_v (\forall v/V)\psi'$. \end{enumerate} \end{lem}
\begin{proof} 1) Let $M$ be a structure which interprets the signature of $\psi$ and the constant $c$. Let $X$ be a team s.t. $dom(X)\supseteq FV(\psi)$ and $v\notin dom(X)$.
Suppose $M,X\models \psi$. Let $F:dom(X)\rightarrow dom(M)$ be the constant function such that $F(s)=c^M$ for each $s\in X$. Then obviously $M,X[F/v]\models v=c$. From $M,X\models \psi$ and the fact that $v\notin dom(X)$, we get by Lemma \ref{LEMEXTTEAM} that $M,X[F/v]\models \psi_{/v}$. Then, by lemma \ref{LEMEASIER}, we get $M,X[F/v]\models \psi'$. By the semantical clauses, we can conclude that $M,X\models (\exists v/V)(v=c \land \psi')$.
Suppose $M,X\models (\exists v/V)(v=c \land \psi')$. Then there is $F:dom(X)\rightarrow dom(M)$ such that $M,X[F/v]\models v=c \land \psi'$. From the semantical clauses we get $M,X[F/v]\models \psi'$; from this, by lemma \ref{LEMEASIER} we get $M,X[F/v]\models\psi$. Since $M,X[F/v]\models v=c$, we know that $F$ is the constant function that picks $c^M$. But then $X[F/v] = X\times \{(v,c^M)\}$. From this fact, $M,X[F/v]\models\psi$, and lemma \ref{LEMCART}, we get $M,X\models \psi$.
2) Let $M$ be a structure which interprets the signature of $\psi$. Let $X$ be a team s.t. $dom(X)\supseteq FV(\psi)$ and $v\notin dom(X)$.
Assume $M,X\models \psi$. Since $v$ is neither in $dom(X)$ nor in $\psi$, we can use lemma \ref{LEMEXTTEAM} to obtain $M,X[M/v]\models\psi_{/v}$. By lemma \ref{LEMEASIER}, then, we obtain $M,X[M/v]\models\psi'$. So $M,X\models \forall v\psi'$.
Suppose $M,X\models \forall v\psi'$. Then $M,X[M/v]\models\psi'$. By lemma \ref{LEMEASIER} we get $M,X[M/v]\models\psi$. Since $X[M/v] = X \times (\{\emptyset\}[M/v])$, and $v$ does not occur in $\psi$, by lemma \ref{LEMCART} we get $M,X\models \psi$. \end{proof}
For our purposes, part 1. of the previous lemma must still be refined. Given an $IF$ formula $\psi$, a variable $v$ that does not occur in $\psi$, and a constant $c$, we denote as $\psi^c$ the formula obtained from $\psi$ by replacing each maximal quantifier-free subformula $\alpha$ of $\psi$ with $v=c \land \alpha$.
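For instance, taking the (hypothetical) formula $\psi = \forall x\,(P(x) \lor \exists y\,R(x,y))$, whose maximal quantifier-free subformulas are $P(x)$ and $R(x,y)$, the definition yields:

```latex
\psi^c \;=\; \forall x\,\big( (v=c \land P(x)) \;\lor\; \exists y\,(v=c \land R(x,y)) \big).
```
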
\begin{lem}\label{LEMCFORMULA} Let $\psi$ be an $IF$ formula, $v$ a variable which does not occur in $\psi$, $c$ a constant symbol. Let $\psi'$ be a formula obtained from $\psi$ by adding $v$ to some of the slash sets (as a limit case, $\psi'=\psi$). Then $\psi \equiv_v (\exists v/V)(\psi')^c$. \end{lem}
\begin{proof} By Lemma \ref{LEMADDQF}, 1., it suffices to prove that, for all $IF$ formulas $\theta$ in which $v$ does not occur, $v=c\land \theta \equiv \theta^c$. We do this by induction on $\theta$. If $\theta$ is a quantifier-free formula, then $\theta^c$ is $v=c\land \theta$, and we are done. If $\theta$ is $\eta\land\chi$, then it is easy to see that $v=c\land (\eta\land\chi)$ is equivalent with $(v=c\land \eta)\land(v=c\land\chi)$. By induction hypothesis, this is equivalent to $\eta^c \land \chi^c$, which is $\theta^c$. When $\theta$ is $\eta\lor\chi$ we proceed analogously using the equivalence of $v=c\land (\eta\lor\chi)$ and $(v=c\land \eta)\lor(v=c\land\chi)$. In case $\theta=(Qu/U)\eta$, notice that $v=c\land(Qu/U)\eta\equiv (Qu/U)(v=c\land\eta)$ by quantifier extraction (Proposition \ref{STRONGEXTRACTION}), and apply the inductive hypothesis. \end{proof}
\begin{lem}[Extension Lemma for tree prefixes, lemma \ref{EXTENDLEMMA} in the main text] Suppose $U$ extends $T$, and $\varphi$ is a completion of $T$. Then there are a completion $\varphi'$ of $U$ and, for every structure $M$ suitable for $\varphi$, an expansion $M'$ of $M$ such that $M\models\varphi$ iff $M'\models\varphi'$. \end{lem}
\begin{proof}
Let $M'$ be identical to $M$, except that it interprets a constant symbol $c$ which was not in the signature of $M$. Obviously $M\models\varphi$ iff $M'\models\varphi$.
If $U$ extends $T$, then by lemma \ref{LEMCONSTRUCT} there is a sequence $T=T_0, T_1,\dots,T_{n+1} =U$ of regular tree prefixes such that each $T_{i+1}$ is obtained from $T_i$ by adding one connective (plus a gap) or a quantifier (plus adding the newly quantified variable to some of the slash sets in the scope of the new quantifier). Let $e$ be a sentential completing function for $T$ such that $\hat e(T)=\varphi$, and let $e_0:=e$. We prove, by induction on $i$, that, given any sentential completing function $e_i$ for $T_i$, there is a completing function $e_{i+1}$ for $T_{i+1}$ such that $M'\models\hat e_i(T_i)$ iff $M'\models\hat e_{i+1}(T_{i+1})$. The formula $\hat e_{n+1}(T_{n+1})$ is the formula $\varphi'$ required by the statement of the theorem.
Suppose $T_i$ and $T_{i+1}$ differ only in that one subtree $B$ of $T_i$ is replaced by $(Qv/V)B'$ in $T_{i+1}$ ($B$ may be empty); and that $B'$ differs from $B$ only for the addition of variable $v$ to some slash sets.
For each branch $P$ of $T_i$, call $P'$ the corresponding branch of $T_{i+1}$.\footnote{More precisely: if $P'$ is a branch of $T_{i+1}$ that does not intersect the above mentioned occurrence of $(Qv/V)$, then it is associated to an identical branch $P$ of $T_i$. If instead $P'= S_1(Qv/V)S_2$ contains the occurrence of $(Qv/V)$, it corresponds to a branch $P = S_1S_2$ of $T_i$.}
Let $e_i$ be a completing function for $T_i$. Define $e_{i+1}$ as the completing function which assigns, to each branch $P'$ of $T_{i+1}$, the formula $(v=c \land e_i(P))$ in case $P'$ intersects $B'$ and $Q=\exists$; and the formula $e_i(P)$ otherwise. Then (using lemma \ref{LEMCFORMULA} in case $Q=\exists$, and lemma \ref{LEMADDQF}, 2. if $Q=\forall$) we have: \[ M',X\models \hat e_i(B) \ \Leftrightarrow \ M',X\models (Qv/V)\hat e_{i+1}(B') \] for any suitable team $X$.
Thus, by substitution of equivalents (\ref{TEOSUBEQ} plus \ref{TEOEQSENT}), $e_{i+1}$ is the completing function for $T_{i+1}$ that we were looking for.
Suppose instead that $T_{i+1}$ differs from $T_i$ in that a certain subtree $B$ is turned into a tree $([\phantom a] \land B)$; the ordering of the conjuncts is unimportant. For any completing function $e_i$ of $T_i$, we can define the completing function $e_{i+1}$ for $T_{i+1}$ as that function which differs from $e_i$ only in that it assigns $\forall x(x=x)$ to the branch we marked
with a $[\phantom a]$. Then it is clear that \[ M'\models \hat e_i(T_i) \Leftrightarrow M'\models \hat e_{i+1}(T_{i+1}). \] Notice that this works also in the special case in which $B$ consists just of a gap symbol.
The case that $T_{i+1}$ differs from $T_i$ in that a certain subtree $B$ is turned into a tree $([\phantom a] \lor B)$ can be treated analogously, using $\forall x(x\neq x)$ instead of $\forall x(x=x)$.
\end{proof}
\begin{comment}
\section{Proof of theorem \ref{GH1ORTEO}}
\CORR{ \begin{lem}[Normalization] \label{LEMMANORM2} Let $T$ be a regular tree which is not Henkin, signalling, GH2, GH1($\land$) nor coordinated. Then, by quantifier swapping, we can transform $T$ into a tree $T'$ with the same characteristics, which has the same number of GH1($\lor$) patterns as $T$, and which satisfies the following requirements: \begin{enumerate} \item All universal quantifiers have empty slash sets. \item Nonempty slash sets contain at least one universally quantified variable. \item If $\circ$ is a binary connective occurring with maximal depth in $T$ among those in whose scope there occur quantifiers, and $\circ \prec_{T'} \forall c \prec_{T'} (\exists d/D)$, then $d$ depends on $c$ (i.e. $c\notin D$). \end{enumerate} \end{lem} } \CORR{ \begin{proof} Apply the same procedure that was described in the proof of lemma \ref{LEMMANORM}. It is straightforward to see that the resulting tree satisfies the desiderata. \end{proof} }
\begin{teo}[Theorem \ref{GH1ORTEO} in the main text] Suppose an IF positive initial tree has no conjunctions, and it is not GH2, coordinated, Henkin nor signalling. Then it can be reduced to a first-order tree, and is thus in the FO complexity class. \end{teo}
\begin{proof} Suppose the tree is not modest; then it is GH1($\lor$), and so it has the following form:
\Tree [.$\vdots$\\$\forall x$\\$\vdots$\\$\lor_1$\\$\vdots$\\$\lor_n$ [.$\vdots$\\$(\exists u/U)$\\$\vdots$ ] [.$\vdots$\\$\forall y$\\$\vdots$\\$(\exists v/V)$\\$\vdots$ ] ]
where $\forall x,\lor_n,\exists u,\forall y, \exists v$ form a GH1($\lor$) pattern (i.e., $x\notin U, y\notin V, x\in V$), (*) $\lor_n$ has maximal depth among disjunctions that are part of a GH1($\lor$) pattern,
and (**) $(\exists v/V)$ has minimal depth among existential quantifiers that form a GH1($\lor$) pattern together with $\forall x,\lor_n,\exists u,\forall y$; and $\lor_1,\dots,\lor_{n}$ is an exhaustive list of all disjunctions occurring between $\forall x$ and $\lor_n$.
\CORR{We can assume that $T$ is normalized as in lemma \ref{LEMMANORM2}.} Let $\varphi$ be an $IF$ sentence which is a completion of $T$.
Our final aim is to push the quantifiers $\forall y$ and $\exists v$ above $\forall x$, so that $x$ can be removed from the slash set of $\exists v$. The purpose is to obtain a new sentence which begin with a tree which still satisfies the hypotheses (it has no conjunctions, and it is not GH2, coordinated, Henkin nor signalling) of the theorem, but has one less witness of the GH1($\lor$) pattern (we must also check that new witnesses of GH1($\lor$) are not generated in the process). So, one can repeat the procedure until there are no more such witnesses: the resulting tree is modest, and thus we already know (Theorem \ref{MODESTTREE}) it has first-order complexity.
Notice that it might be necessary to push above also some other quantifiers $(Qr/R)$, occurring between $\forall y$ and $(\exists v/V)$, such that $r\notin V$ (they cannot be pushed below $\exists v$).
We divide this whole process into four phases, which will constitute four parts of the proof.\\ Phase 1: Push $\forall y$ and $\exists v$ (and the quantifiers in between) upwards, until they are immediately below $\lor_n$.\\ Phase 2: push the quantifiers above $\lor_n$\\ Phase 3: push the quantifiers above $\lor_1$\\ Phase 4: push the quantifiers above $\forall x$.
In each phase, we check that the transformation performed cannot generate any new higher-order or GH1($\lor$) patterns. We will proceed by contradiction: ``suppose that the transformed tree $T'$ has a certain pattern. Then, already the untransformed tree $T$ had some forbidden pattern...''\\ \\ PHASE 1: We must show how to push the quantifiers $\forall y$ and $(\exists v/V)$ (together with those in between) above disjunctions. We always use the \emph{weak} extraction rule: this prevents the formation of new signalling patterns. We also always assume w.l.o.g. that the quantifiers we extract are in the right \CORR{disjunct}. So, suppose that after pushing the quantifiers in question above some disjunction $\lor$, the new tree is: \begin{itemize} \item Henkin: then there is, in the left subformula, an ex. quantifier $(\exists w/W)$ which depends on some $\forall z$, and quantifiers $\forall \hat y, (\exists \hat v/\hat V)$, \CORR{this latter} occurring between $\forall y$ and $\exists v$\footnote{Thus, $\forall \hat y, \exists \hat v$ might be $\forall y, \exists v$, or some other quantifiers occurring between $\forall y$ and $\exists v$. We follow the same naming convention in the rest of the proof.}, such that $w$ does not depend on $\forall \hat y$ nor $\exists \hat v$, and $\hat v$ does not depend on $z$ nor $w$. \CORR{Notice that $x\in \hat V$; otherwise $\forall x, \exists \hat v,\exists v$ would form a signalling pattern.}
Suppose $x\in W$. Then $\forall x,\lor,\forall \hat y,\forall z,\exists w,\exists \hat v$ already formed a coordinated pattern in $T$: contradiction.
Suppose instead $x\notin W$. Then $\forall x, \lor, \exists w,\forall \hat y,\exists \hat v$ already formed a GH1($\lor$) pattern in $T$. But $\lor$ has greater depth than $\lor_n$, contradicting the assumption (**).
\item GH2: then there is some existential quantifier $(\exists w/W)$ and $\forall \hat y, (\exists \hat v/\hat V)$ as above, such that $\hat y,\hat v\in W$, and a quantifier $\forall z$ on which $w$ depends, \CORR{but $\hat v$ does not}.
\CORR{As in the Henkin case, we have $x\in \hat V$}. Now, suppose first that $(\exists w/W)$ is in the right disjunct below $\lor$: then either $\forall \hat y,\exists \hat v,\exists w$ already formed a signalling pattern, or they formed a Henkin pattern together with
$\forall z$: contradiction. Suppose instead that $(\exists w/W)$ is in the left disjunct; then there are two possibilities: 1) $x\notin W$. Then $\forall x,\exists w,\lor,\forall\hat y,\exists \hat v$ already formed \CORR{either a GH2 pattern, or} a GH1($\lor$) with $\lor$ of greater depth than $\lor_n$: contradiction. 2) $x\in W$. Then $\forall x, \lor, \forall z, \exists w, \forall \hat y, \exists \hat v$ already formed a coordinated tree: contradiction. \item \CORR{Coordinated: this case is analogous to the GH2 case.} \item The new tree contains an extra GH1($\lor$) pattern, which involves a disjunction $\lor$ occurring below $\lor_n$. There are two possibilities.
Case 1: There is a quantifier $(\exists w/W)$ in the left subformula \CORR{below $\lor$}; a quantifier $\forall \hat y$ as above, such that $\hat y\notin W$; and quantifiers $\forall z$ and $(\exists s/S)$, occurring \CORR{in the right subtree below $\lor$}, such that $s$ depends on $z$ but not $\hat y$. \CORR{Now notice that if $x\in W$, this tree is also Henkin, as witnessed by $\forall x,\exists v,\forall z,\exists w$. This case has already been treated. If instead $x\notin W$, then the old tree had a GH1($\lor$) pattern $\forall x,\lor,\exists w,\forall z,\exists s$; this contradicts the assumption that $\lor_n$ has maximal depth among disjunctions that are part of a GH1($\lor$) pattern.}
Case 2: There are quantifiers $\forall z,(\exists w/W)$ in the left subformula, and $\forall \hat y$ as above, such that $z\notin W$ but $\hat y\in W$, and a quantifier $(\exists s/S)$ occurring below $(\exists v/V)$ such that $\hat y\notin S$. Notice that it must also be $x\in W$, otherwise $\forall x,\lor,\exists w,\forall y, \exists v$ would have formed a GH1($\lor$) pattern with $\lor$ of greater depth than $\lor_n$ (violating (**)). And it must also be the case that $x\in S$, otherwise $\forall x,\lor,\exists s,\forall z,\exists w$ would have formed a GH1($\lor$) pattern with $\lor$ of greater depth than $\lor_n$.
But then, $\forall x,\lor,\forall z,\exists w,\forall \hat y,\exists s$ already formed a coordinated pattern: contradiction.
\end{itemize}
PHASE 2: \CORR{We first show that, without loss of generality, we can assume that the following is satisfied:} \[ \CORR{ \#: \text{ For each existential quantifier $(\exists w/W)$ occurring in the left disjunct and} } \] \[ \CORR{\text{ dependent on $\forall x$, } W=\emptyset } \] \CORR{ In particular, $U=\emptyset$.}
\CORR{ Proof: Suppose $W$ is nonempty; then (by 2. of Lemma \ref{LEMMANORM2}) there some quantifier $\forall z$ such that $z\in W$. Now we have three cases. Suppose first $z\in V$; then we have that $\forall x,\forall z,\forall y,\lor_n,\exists w, \exists v$ form a coordinated pattern, a contradiction. Secondly, we may have $z\notin V$ and $\forall z$ above $\lor_n$. ***********Then $\forall x,\forall z,\lor_n,\exists w, \exists v$ form a GH2 pattern, again a contradiction. The third case is that $z\notin V$ and $\forall z$ is in the left subtree below $\lor_n$; but this possibility is excluded by normalization, 3.*****THIS APPEAL TO NORMALIZATION LOOKS WRONG}
Consequently, if we raise quantifiers from the right disjunct using the \emph{strong} quantifier extraction rule, no Henkin patterns can be generated. Suppose instead that some other pattern is generated:
\begin{itemize} \item Signalling: then the left disjunction contains an existential quantifier $(\exists w/\emptyset)$ and another quantifier $(\exists s/S)$ occurring below it, with $S\neq\emptyset$. But then, by our assumption 2., $S$ must contain some universally quantified variable $t$. Thus $\forall t,\exists w,\exists s$ formed a signalling pattern already before the transformation: a contradiction.
\item Coordinated: then there is an $(\exists w/W)$ in the left disjunct which depends on some universal quantifier $\forall s$ and is independent of some other quantifier (so that, after the strong extraction, also $y$ is in the slash set of $\exists w$); and some $\forall \hat y, (\exists \hat v/\hat V)$, occurring between $\forall y$ and $\exists v$, such that $s,x\in \hat V$ and $\hat y\in W$. But the observation $\#$ above implies that $\exists w$ is independent of $\forall x$. Then, $\forall x,\lor_n,\forall s,\forall \hat y,\exists w,\exists \hat v$ formed a coordinated pattern: contradiction.
\item GH2: Case 1: there was some existential quantifier $(\exists w/W)$, occurring below $(\exists v/V)$, and a $\forall \hat y$ between $\forall y$ and $\exists v$, such that $\hat y\in W$. But then $\forall x, \exists v,\exists w$ would have formed a signalling pattern: contradiction.
Case 2: there was some existential quantifier $(\exists w/W)$ occurring in the left disjunct, which depended on some universal $\forall z$ occurring above $\lor_n$, and such that $W$ is nonempty (so that, after the application of strong extraction, $y$ is inserted in the slash set of $w$). But, \CORR{by observation $\#$}, $W\neq \emptyset$:
contradiction.
\item GH1($\lor$): The proof is identical to the GH2 case (except for the phrase ``$\forall z$ occurring above $\lor_n$''). \end{itemize}
PHASE 3: Just observe that $\#$ can be proved for each of the $\lor_i$. Then, all cases can be treated as in phase 2.
PHASE 4: use quantifier swapping until $\exists v$ is above $\forall x$, and $x$ does not occur anymore in its slash set.
\end{proof}
\end{comment}
\section{Syntactical manipulation of trees} \label{APPMANTREE}
In a previous draft (\cite{Bar2016}) of the present paper we proved our syntactical results by means of rules that manipulate tree prefixes rather than formulas. We briefly account here for this different approach.
In \cite{Sev2014}, the analysis of low complexity quantifier prefixes is based on a somewhat intuitive notion of equivalence of prefixes. Two quantifier prefixes $R$, $S$ are defined to be equivalent if, whenever the same quantifier-free formula $\psi$ is postfixed to them, one obtains truth-equivalent formulas $R\psi\equiv S\psi$. This notion of equivalence has, for quantifier prefixes, two good properties: 1) it preserves complexity, in the sense of the above definitions; and 2) prefixes can be manipulated, up to equivalence, by means of manipulation rules that are formally identical to equivalence rules for $IF$ formulas. However, it seems to us that there is no reasonable notion of equivalence of tree prefixes which satisfies both 1) and 2) (although we lack a formal proof of this statement).
To give an example of what troubles can arise when manipulating tree prefixes (even in the first-order case), consider applying quantifier extraction:
\Tree [.$\forall x$ [.$\lor$ [.$\exists y$ $[$\phantom{a}$]$ ] [.$[$\phantom{a}$]$ ] ] ] \Tree [.$\forall x$ [.$\exists y$ [.$\lor$ [.$[$\phantom{a}$]$ ] [.$[$\phantom{a}$]$ ] ] ] ]
\noindent This should be a legitimate equivalence rule for tree prefixes, according to requirement 2); notice indeed that our notion of complexity only allows attaching, to the right gap of the first tree, formulas containing at most the free variable $x$; and any $IF$ sentence obtained completing (with sentential completing functions) such a tree can undergo this kind of quantifier extraction. But this transformation does not preserve complexity, that is, it does not satisfy requirement 1); indeed, the second tree is more complex than the first, because in its right gap it is possible to attach formulas with free variables $x,y$ (not only $x$).
It would be possible, however, to define, instead of an equivalence relation, an \emph{ordering} relation, or \emph{reduction}, between tree prefixes, in a way that the usual syntactical transformations, when applied to tree prefixes, are reductions; and so that the following property is satisfied: 1') if $T$ reduces to $T'$, then C($T$) $\subseteq$ C($T'$). In \cite{Bar2016} we fully developed this approach, up to a prenex form theorem. Here, for reasons of space, we chose the more direct approach of working directly with completions of trees; that is, we fill the gaps of the tree under consideration with letters denoting generic quantifier free formulas, and we apply the usual equivalence rules to the resulting sentence:
\Tree [.$\forall x$ [.$\lor$ [.$\exists y$ $\psi(x,y)$ ] [.$\chi(x)$ ] ] ] \Tree [.$\forall x$ [.$\exists y$ [.$\lor$ [.$\psi(x,y)$ ] [.$\chi(x)$ ] ] ] ]
\noindent Such an equivalence of sentences tells us that the tree prefix on the right is at least as expressive as the tree prefix on the left.
\end{document}
\begin{document}
\begin{abstract} In this paper we obtain a solution to the second order boundary value problem of the form $\frac{d}{dt}\Phi'(\dot{u})=f(t,u,\dot{u}),\ t\in[0,1],\ u\colon[0,1]\to\mathbb{R}$ with Dirichlet and Sturm-Liouville boundary conditions, where $\Phi\colon\mathbb{R}\to\mathbb{R}$ is a strictly convex, differentiable function and $f\colon[0,1]\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is continuous and satisfies a suitable growth condition. Our result is based on a priori bounds for the solution and homotopical invariance of the Leray-Schauder degree. \end{abstract}
\title{On a generalization of a theorem of S. Bernstein}
\section{Introduction}
Our purpose is to show the existence of solutions to second order boundary value problems of the form \begin{equation} \label{eq:problem}\tag{P} \begin{cases} \frac{d}{dt}\Phi'(\dot{u})=f(t,u,\dot{u}),\ t\in[0,1] \\ u\in (BC) \end{cases} \end{equation} where $u\in (BC)$ means that $u$ satisfies either Dirichlet or Sturm-Liouville boundary conditions, $\Phi'$ is an increasing homeomorphism satisfying some technical assumptions and $f$ is a continuous function satisfying suitable growth conditions.
In particular, $\Phi(x)= \frac{1}{p_1}|x|^{p_1} + \ldots + \frac{1}{p_n}|x|^{p_n}$, $1 < p_i \leq 2$, is in the considered class of functions, and if $n = 1$ then the differential operator on the left-hand side of the equation is a $p$-Laplacian.
To prove the existence we use topological methods. This approach has already been used by many authors. In \cite{GraGueLee78} and \cite{FriORe88} the authors consider the case of the Laplace operator with various boundary conditions. Generalizations to the $p$-Laplacian and to the operator defined by an arbitrary increasing homeomorphism were developed in \cite{KelTer13} and \cite{MaZhaLiu15}, respectively. However, in \cite{KelTer13} and \cite{MaZhaLiu15} the authors subject the equation to very specific boundary conditions, namely $u(0) = A$, $\dot{u}(1) = B$. In order to show existence for Dirichlet and general Sturm-Liouville conditions more effort is needed, as can be seen below. Moreover, we consider different assumptions on the function $f$.
The main idea in the original paper \cite{GraGueLee78} was to use the topological transversality theorem. This is a fixed point type theorem (see \cite{DugGra10}). We decided to use an approach via Leray-Schauder degree theory instead, since it is essentially equivalent but the degree theory is familiar to a broader audience.
\section{Preliminaries}
In this section we make precise the assumptions on the functions $\Phi$ and $f$ occurring in the problem \ref{eq:problem}. We also state the main theorem.
We assume that $\Phi\colon \mathbb{R} \to \mathbb{R}$ satisfies \begin{enumerate}[label=$(\Phi_\arabic*)$,ref=($\Phi_\arabic*$)]
\item \label{asm:Phi:convex} $\Phi$ is strictly convex, differentiable and $\Phi(x)/|x|\to \infty$
as $|x|\to \infty$, \item $\Phi(0) = \Phi'(0) = 0$, \item \label{asm:Phi:psi_differentiable} $(\Phi')^{-1}$ is continuously differentiable, \item \label{asm:Phi:nabla2} there exists a constant $k_\Phi>1$ such that \[ k_\Phi\Phi(x)\leq \Phi'(x) x\quad \text{for all $x\in \mathbb{R}$} \] \end{enumerate}
Assumption \ref{asm:Phi:convex} guarantees that $\Phi'$ is an increasing homeomorphism and so $(\Phi')^{-1}$ exists. However, we will also need it to be continuously differentiable, which is assumption \ref{asm:Phi:psi_differentiable}. We put $\varphi = \Phi'$, $\psi = \varphi^{-1}$ and $L_\varphi u = \frac{d}{dt}\varphi(\dot{u})$. The domain of $L_\varphi$ will be defined later. As already mentioned in the introduction, $\Phi(x) = \frac{1}{p}|x|^p$ with $1 < p \leq 2$ satisfies the above assumptions and in this case $L_\varphi$ is just a $p$-Laplacian. A more general example of $\Phi$ is provided by an N-function satisfying the $\nabla_2$-condition (see \cite{KraRut61}).
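For the reader's convenience we record the elementary verification (our addition, not part of the original exposition) that the model function satisfies the assumptions:

```latex
% Model case: \Phi(x) = \tfrac{1}{p}|x|^p with 1 < p \le 2 and \tfrac1p + \tfrac1{p'} = 1.
\begin{align*}
\Phi'(x) &= |x|^{p-2}x &&\text{is an increasing homeomorphism with } \Phi(0)=\Phi'(0)=0,\\
(\Phi')^{-1}(y) &= |y|^{p'-2}y &&\text{is } C^1 \text{ since } p'\ge 2,\ \text{giving \ref{asm:Phi:psi_differentiable}},\\
\Phi'(x)\,x &= |x|^{p} = p\,\Phi(x) &&\text{so \ref{asm:Phi:nabla2} holds with } k_\Phi = p > 1.
\end{align*}
```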
We assume that $f\colon [0,1]\times \mathbb{R}\times \mathbb{R} \to \mathbb{R}$ is continuous and satisfies \begin{enumerate}[label=$(f_\arabic*)$, ref=$(f_\arabic*)$] \item \label{asm:f:geometric} there exists a constant $R>0$ such that \[
x f(t,x,0) > 0 \text{ for $|x|>R$} \] \item \label{asm:f:growth} there exist positive functions $S$, $T$, bounded on bounded sets such that \[
|f(t,x,v)|\leq S(t,x) (\Phi'(v)\cdot v-\Phi(v)) + T(t,x) \] \end{enumerate} We consider the following boundary conditions:
\begin{enumerate}[ref=BC$_{\arabic*}$,label=(\arabic*)] \item \label{BC:Dirichlet-nh} Dirichlet \begin{equation*} \tag{\ref{BC:Dirichlet-nh}}
u(0)=A,\ u(1)=B, \end{equation*} \item \label{BC:S-L_I-nh}
Sturm-Liouville \begin{gather*} \tag{\ref{BC:S-L_I-nh}}
-\alpha u(0) + \beta \dot{u}(0)=A,\quad \alpha,\beta >0\\
a u(1) + b \dot{u}(1)=B,\quad a,b >0. \end{gather*} \end{enumerate} The purpose of the paper is to prove the following existence result.
\begin{MainTheorem} Suppose $\Phi$ and $f$ satisfy $(\Phi_1)$-$(\Phi_4)$ and $(f_1)$ - $(f_2)$ respectively. Then under boundary conditions \eqref{BC:Dirichlet-nh} or \eqref{BC:S-L_I-nh} the problem $(\ref{eq:problem})$ has at least one solution. \end{MainTheorem}
\section{Proof of the main theorem} Fix $\Phi$, $f$ and boundary conditions $(BC)$. We will now show that the existence of a solution to $(\ref{eq:problem})$ is equivalent to the existence of a fixed point of some map on a Banach space. Let $\hat{K}\colon C^0([0,1]) \times \mathbb{R} \times \mathbb{R} \to C^1([0,1])$ be given by \[ \hat{K}(v,c_1,c_2)(t) = c_1 + \int_0^t \psi(\int_0^\tau v(s) \, ds + c_2)\, d\tau \] For every $v$ we would like to choose $c_1$ and $c_2$ in such a way that $u = \hat{K}(v,c_1,c_2)$ is an element of $C^1_{BC}$, i.e. it satisfies boundary conditions. Moreover, we need that $c_1$ and $c_2$ depend continuously on $v$.
\begin{remark} Note that this trivializes in \cite{KelTer13,MaZhaLiu15}. For boundary conditions considered therein $c_1$ and $c_2$ are constants independent of $v$. We cannot proceed in such a way here. \end{remark}
\begin{lemma}\label{lemma:implicitFunction} Let $X$ be a metric space and let $G\colon X \times \mathbb{R} \to \mathbb{R}$ be continuous. Suppose that \begin{enumerate} \item for every $v \in X$ function $g_v(\cdot) = G(v,\cdot)\colon \mathbb{R} \to \mathbb{R}$ is an increasing homeomorphism, \item if $\{v_n\}$ is bounded and $b_n \to \pm\infty$ then $G(v_n,b_n) \to \pm\infty$. \end{enumerate} Fix a constant $C \in \mathbb {R}$. Then the function $c\colon X \to \mathbb{R}$ defined by $G(v,c(v)) = C$ is continuous. \end{lemma}
Note that if $g_v$ is differentiable and $g_v'$ is positive then the conclusion follows from implicit function theorem. However, in the problem that we consider $g'_v$ is only non-negative.
\begin{proof} Suppose, to derive a contradiction, that $v_n \to v_0$ and $c_n := c(v_n)$ does not converge to
$c_0 := c(v_0)$, i.e. there exists $\epsilon > 0$ such that, up to subsequence, $|c_n - c_0| > \epsilon$. By $(2)$, the sequence $c_n$ is bounded so it converges, again up to subsequence, to some $c_0' \neq c_0$. By the continuity and injectivity of $G$, \[ C = G(v_n,c_n) \to G(v_0,c_0') \neq G(v_0,c_0) = C \] \end{proof} We use this abstract lemma for our problem.
\begin{lemma}\label{lemma:functionK} Fix one of boundary conditions (\ref{BC:Dirichlet-nh}) or (\ref{BC:S-L_I-nh}). Then for every $v \in C^0([0,1])$ there exist unique constants $c_1(v),c_2(v)$ such that $\hat{K}(v,c_1(v),c_2(v)) \in C^1_{BC}([0,1])$. Moreover, the functions $c_1,c_2\colon C^0([0,1]) \to \mathbb{R}$ are continuous. \end{lemma} \begin{proof} Put $u = \hat{K}(v,c_1,c_2)$. Then \[ u(0) = c_1, \quad \quad u(1) = c_1 + \int_0^1 \psi(\int_0^\tau v(s) \, ds + c_2)\, d\tau \] and \[ \dot{u}(0) = \psi(c_2), \quad \quad \dot{u}(1) = \psi(\int_0^1 v(s) \, ds + c_2) \] For each of the boundary conditions we define a suitable function $G\colon C^0([0,1])\times\mathbb{R}\to\mathbb{R}$ and use Lemma \ref{lemma:implicitFunction} to obtain the statement.
\textbf{Case \eqref{BC:Dirichlet-nh}:}\\ In this case $c_1$ is equal to $A$. Define $G_1\colon C^0([0,1]) \times \mathbb{R} \to \mathbb{R}$ by \[ G_1(v,c) = A + \int_0^1 \psi\Bigl(\int_0^\tau v(s) \, ds + c\Bigr)\, d\tau, \] so that $c_2(v)$ is the unique solution of $G_1(v,c) = B$.
\textbf{Case \eqref{BC:S-L_I-nh}:}\\ From the first equation we get $c_1 = -\frac{A}{\alpha} + \frac{\beta}{\alpha}\psi(c_2)$. Now the second equation leads to the definition of $G_2$: \[ G_2(v,c) := a\Bigl[ -\frac{A}{\alpha} + \frac{\beta}{\alpha}\psi(c) + \int_0^1 \psi\Bigl(\int_0^\tau v(s) \, ds + c\Bigr)\, d\tau\Bigr] + b\,\psi\Bigl(\int_0^1 v(s) \, ds + c\Bigr), \] so that $c_2(v)$ is the unique solution of $G_2(v,c) = B$. It is easy to check that the functions $G_i$ satisfy the assumptions of Lemma \ref{lemma:implicitFunction}. Therefore both $c_1$ and $c_2$ depend continuously on $v$. \end{proof}
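To make the construction concrete, here is a minimal numerical sketch (our addition, not part of the paper): in the Dirichlet case we compute $c_2(v)$ by solving $G_1(v,c)=B$ with bisection, which is justified because Lemma \ref{lemma:implicitFunction} only requires monotonicity of $c \mapsto G_1(v,c)$, not differentiability. We take the model case $\Phi(x)=x^2/2$, so $\psi$ is the identity; the discretization grid and tolerances are arbitrary choices for this illustration.

```python
# Sketch: Dirichlet case with Phi(x) = x^2/2, so psi = (Phi')^{-1} = identity.
# G1(v, c) = A + int_0^1 psi( int_0^tau v(s) ds + c ) dtau, via left Riemann sums.

def G1(v, c, A, n=1000):
    """Approximate A + int_0^1 (int_0^tau v(s) ds + c) dtau."""
    h = 1.0 / n
    V = 0.0          # running value of int_0^tau v(s) ds
    total = A
    for i in range(n):
        total += (V + c) * h
        V += v(i * h) * h
    return total

def solve_c2(v, A, B, lo=-100.0, hi=100.0, tol=1e-10):
    """Bisection for the unique c with G1(v, c) = B; c -> G1(v, c) is increasing."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if G1(v, mid, A) < B:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: for v = 0 the solution of u'' = 0, u(0) = A, u(1) = B is the
# line u(t) = A + (B - A)t, so c_2 = u'(0) = B - A.
c2 = solve_c2(lambda t: 0.0, A=1.0, B=3.0)   # expect c2 close to 2.0
```

For the Sturm-Liouville conditions one would replace $G_1$ by $G_2$; the bisection step is unchanged, reflecting the fact that Lemma \ref{lemma:implicitFunction} treats both cases uniformly.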
By Lemma \ref{lemma:functionK}, for given boundary conditions (BC) we have a well defined continuous function $K\colon C^0([0,1]) \to C_{BC}^1([0,1])$ given by \[ K(v) = \hat{K}(v,c_1(v),c_2(v)). \] Note that the image of $K$ is contained in $C^2$ and, since the inclusion of $C^2$ into $C^1$ is compact, so is $K$. Define $N\colon C^1_{BC}([0,1]) \to C^0([0,1])$ by \[ N(u)(t) = f(t,u,\dot{u}). \] \begin{lemma}\label{lemma:fixedPoint} If $u$ is a fixed point of the composition $K \circ N$ then $u$ is a solution to \eqref{eq:problem}. \end{lemma} The proof is straightforward if we notice that $L_\varphi$ is well defined on the image of $K$ and $L_\varphi(K(v)) = v$. \begin{remark} $L_\varphi$ is not well defined on the whole of $C^1_{BC}$ as can be seen in the case of the Laplacian. \end{remark}
Instead of looking for fixed points of $K\circ N$ one can look for zeros of $Id - K \circ N$. For this we will use the Leray-Schauder degree and its homotopical invariance. Consider the homotopy $H\colon [0,1] \times C^1 \to C^1$ given by \[ H(\lambda,u) = Id - K(\lambda N(u)). \]
Let $l = K(0)$ be the unique affine function satisfying the boundary conditions. If $r > \|l\|_{C^1}$ then \[ \deg (H(0,\cdot),D(r)) = 1, \]
where $D(r) \subset C^1$ is the closed disc of radius $r$. If we prove that there exists $r > \|l\|_{C^1}$
such that $H(\lambda,u) \neq 0$ for any $\lambda \in [0,1]$ and any $u$ with $\|u\|_{C^1} = r$, then also $\deg (H(1,\cdot),D(r)) = 1$. This would prove that $Id - K \circ N = H(1,\cdot)$ has a zero.
Since $H(\lambda,u) = 0$ if and only if $u$ is a solution to the boundary value problem
\begin{equation} \label{eq:problem:lambda} \tag{$P_\lambda$} \begin{cases} \frac{d}{dt}\Phi'(\dot{u})=\lambda f(t,u,\dot{u}),\ t\in[0,1] \\ u\in (BC) \end{cases} \end{equation}
we are left only to prove the following. \begin{lemma}[apriori bounds] If $u\in C^1$ is a solution to the problem \eqref{eq:problem:lambda} then there exists a constant $r>0$, independent of $\lambda$ and $u$, such that \[
\|u\|_{C^1} \leq r. \] \end{lemma}
The next section is devoted to the proof of this lemma.
\begin{remark} In \cite{KelTer13,MaZhaLiu15}, the authors use the homotopy $H(\lambda,u) = \lambda K(N(u)) + (1-\lambda)l$. Although $H(\lambda,u)$ satisfies the boundary conditions for every $\lambda$, fixed points of $H(\lambda,\cdot)$ do not have to be solutions to the parametrized problem, as claimed by the authors. \end{remark}
\section{A priori bounds}
We start by noticing that if $u$ is a $C^1$ solution to the problem \eqref{eq:problem:lambda} then $u \in C^2$. Indeed $\dot{u}$ reads \[ \dot{u}(t) = \psi\left( \int_0^t \lambda f(\tau,u,\dot{u}) \,d\tau + c \right) \] and by the assumption \ref{asm:Phi:psi_differentiable} and the continuity of $f$ it is continuously differentiable.
The next lemma is an adaptation of Theorem 3.3 in \cite{FriORe88}.
\begin{lemma}
\label{lem:bound_for_u_2}
If $|u|$ achieves its maximum at $t_0\in (0,1)$ then \[
|u(t)|\leq R, \text{ for $t\in [0,1]$} \] \end{lemma} \begin{proof}
Suppose that $|u|$ achieves its maximum at $t_0\in (0,1)$. We can assume that $u(t_0)>R$. In the case $u(t_0)\leq -R$ the proof is similar. Since $t_0\in (0,1)$, $\dot{u}(t_0)=0$. Let $t\in [0,1]$, then \begin{multline*} \int_{t_0}^t
(t-\sigma) u(\sigma) \frac{d}{d \tau} \varphi(\dot{u})(\tau)\Big\vert_{\tau=\sigma}
\,d\sigma =
t \int_{t_0}^t u\frac{d}{d\tau}\varphi(\dot{u}) \,d\sigma
-
\int_{t_0}^t \sigma u\frac{d}{d\tau}\varphi(\dot{u}) \,d\sigma = \\ =
t\left(
u\varphi(\dot{u})\Big\vert_{t_0}^t - \int_{t_0}^t \dot{u}\varphi(\dot{u}) \,d\sigma
\right)
-
\left(
\sigma u\varphi(\dot{u})\Big\vert_{t_0}^t
-
\int_{t_0}^t (u+\sigma \dot{u})\varphi(\dot{u}) \,d\sigma
\right) = \\ =
t u(t) \varphi(\dot u(t))
-
t \int_{t_0}^t \dot{u}\varphi(\dot{u}) \,d\sigma
-
t u(t) \varphi(\dot u(t))
+
\int_{t_0}^t u \varphi(\dot u) \,d\sigma
+
\int_{t_0}^t \sigma \dot{u}\varphi(\dot{u}) \,d\sigma = \\ =
\int_{t_0}^t u \varphi(\dot u) \,d\sigma
+
\int_{t_0}^t (\sigma-t) \dot{u}\varphi(\dot{u}) \,d\sigma \end{multline*} Hence, using \eqref{eq:problem:lambda}, \begin{equation*} \int_{t_0}^t
(t-\sigma) \Big(
\lambda u(\sigma) f\big(\sigma,u(\sigma),\dot{u}(\sigma)\big)
+
\dot{u}(\sigma)\varphi\big(\dot{u}(\sigma)\big)
\Big)\,d\sigma = \int_{t_0}^t u(\sigma) \varphi(\dot u(\sigma)) \,d\sigma \end{equation*}
Note that for $0<\lambda\leq 1$, $xf(t,x,0)>0$ for $|x|>R$ implies $\lambda xf(t,x,0)>0$ for $|x|>R$. Thus, by the assumption \ref{asm:f:geometric}, $\lambda u(t_0)f(t_0,u(t_0),0)>0$. The continuity of $f$, $u$ and $\dot{u}$ implies that there exists a neighborhood $N$ of $(t_0,u(t_0),0)$ such that \[ \lambda u(t) f(t,u(t),\dot{u}(t))>0\quad \text{ for $(t,u(t),\dot{u}(t))\in N$.} \]
Since $u\in C^1$ and achieves its maximum at $t_0$, there exist $t_0^-$ and $t_0^+$ such that \begin{itemize} \item $u(t)>R$ for $t\in (t_0^-,t_0^+)$, \item $\dot{u}$ is non-negative on $(t_0^-,t_0]$, \item $\dot{u}$ is non-positive on $[t_0,t_0^+)$. \end{itemize} Hence $\varphi(\dot{u}(t))\geq 0$ for $t\in (t_0^-,t_0]$ and $\varphi(\dot{u}(t))\leq 0$ for $t\in [t_0,t_0^+)$. This implies that \begin{equation*} \int_{t_0}^t (t-\sigma)\dot{u}(\sigma) \varphi(\dot u(\sigma)) \,d\sigma \geq 0 \text{ for $t\in (t_0^-,t_0^+)$} \end{equation*} and \begin{equation*} \int_{t_0}^t u(\sigma) \varphi(\dot u(\sigma)) \,d\sigma \leq 0 \text{ for $t\in (t_0^-,t_0^+).$} \end{equation*} It follows that for $t$ close to $t_0$ \[ 0< \int_{t_0}^t
(t-\sigma) \Big(
\lambda u(\sigma) f\big(\sigma,u(\sigma),\dot{u}(\sigma)\big)
+
\dot{u}(\sigma)\varphi\big(\dot{u}(\sigma)\big)
\Big)\,d\sigma = \int_{t_0}^t u(\sigma) \varphi(\dot u(\sigma)) \,d\sigma \leq 0, \] a contradiction. Thus $u(t_0)\leq R$. \end{proof}
\begin{lemma} There exists a constant $r_0>0$, independent of $u$ and $\lambda$ such that \[
|u(t)|\leq r_0 \text{ for all $t\in [0,1]$} \] \end{lemma} \begin{proof}
If $\lambda=0$ then the problem \eqref{eq:problem:lambda} has a unique solution and thus
$|u(t)|\leq C$ for some constant $C\geq 0$. Let $0<\lambda\leq 1$.
Assume that $u$ satisfies \eqref{BC:Dirichlet-nh}. If $|u|$ achieves its maximum at $t_0=0$ (respectively $t_0=1$) then $|u(t)|\leq |A|$ (resp. $|u(t)|\leq |B|$). If the maximum is achieved at $t_0\in (0,1)$ then by Lemma \ref{lem:bound_for_u_2} we get $|u(t)|\leq R$. Hence \[
|u(t)|\leq r_0 = \max\{R,|A|,|B|\}. \]
Assume that $u$ satisfies \eqref{BC:S-L_I-nh}. If $|u|$ has its maximum value at $0$ then $u(0)\dot{u}(0)\leq 0$. The boundary conditions give \[ u(0)(A+\alpha u(0)) = \beta u(0)\dot{u}(0)\leq 0 \]
and consequently $|u(0)|\leq |A/\alpha|$. A similar argument shows that $|u(1)|\leq |B/a|$. If the maximum is achieved at $t_0\in (0,1)$ then by Lemma \ref{lem:bound_for_u_2} we get
$|u(t)|\leq R$. Finally \[
|u(t)|\leq r_0 = \max\{R,|A/\alpha|,|B/a|\}. \] \end{proof}
Now we provide bounds for $\dot{u}$. The proof of the following lemma is based on \cite{FriORe88}.
\begin{lemma} There exists a constant $r_1>0$ (depending only on $r_0$, $A$, $B$ and $\Phi)$ such that \[
|\dot{u}(t)|\leq r_1 \text{ for all $t\in [0,1]$} \] \end{lemma} \begin{proof}
Since we have obtained a priori bounds $|u(t)|\leq r_0$, it is easy to observe that there exists a constant $C\geq 0$ independent of $\lambda$ and $u$, such that \[
|\dot{u}(t_0)|\leq C \] for some $t_0\in [0,1]$. The point $t_0$ belongs to an interval $[\mu,\nu]\subset [0,1]$ such that the sign of $\dot{u}(t)$ does not change in $[\mu,\nu]$ and $\dot{u}(\mu)=\dot{u}(t_0)$ and/or $\dot{u}(\nu)=\dot{u}(t_0)$.
Assume that $\dot{u}(\mu)=\dot{u}(t_0)$ and $\dot{u}(t)\geq 0$ for every $t\in[\mu,\nu]$. The other cases are treated similarly and the same bound is obtained.
Denote by $S_0$, $T_0$ the upper bounds of $S$ and $T$ respectively on $[0,1]\times [-r_0,r_0]$. Since \[
|\lambda f(t,u,\dot{u})|\leq S_0 (\Phi'(\dot{u})\dot{u}-\Phi(\dot{u}))+T_0, \] we have \[ \int_\mu^t \frac{
S_0\,\dot{u}\, \left|\frac{d}{dt}\Phi'(\dot{u})\right| } {
S_0(\Phi'(\dot{u})\dot{u}-\Phi(\dot{u}))+T_0 }\,d\tau \leq S_0 \int_\mu^t \,\dot{u}\, d\tau \leq 2 S_0 r_0 \] For $\mu\leq \tau \leq t$, we have
\begin{multline*} \left( \Phi'(\dot{u}(\tau))\dot{u}(\tau)-\Phi(\dot{u}(\tau)) \right) - \left( \Phi'(\dot{u}(\mu))\dot{u}(\mu)-\Phi(\dot{u}(\mu)) \right) =\\= \int_\mu^\tau \frac{d}{dt}\left( \Phi'(\dot{u})\dot{u}-\Phi(\dot{u})
\right)\Big\vert_{t=\sigma} \,d\sigma = \int_\mu^\tau \dot{u}\,\frac{d}{dt}\Phi'(\dot{u}) \,d\sigma. \end{multline*} There exists $C_0\geq 0$ such that $0 \leq S_0\left(\Phi'(\dot{u}(\mu))\dot{u}(\mu)-\Phi(\dot{u}(\mu))\right) \leq C_0$. Hence, \[ 0 \leq S_0 \left( \Phi'(\dot{u}(\tau))\dot{u}(\tau)-\Phi(\dot{u}(\tau)) \right)+T_0
\leq S_0 \int_\mu^\tau \dot{u}\,\left|\frac{d}{dt}\Phi'(\dot{u})\right| \,d\sigma + C_0 + T_0. \]
Set $g(\tau) = S_0 \int_\mu^\tau \dot{u}\,\left|\frac{d}{dt}\Phi'(\dot{u})\right| \,d\sigma + C_0$, then integration by substitution yields \[ \log\left(\frac{g(t)+T_0}{C_0+T_0}\right) = \int_{C_0}^{g(t)}\frac{1}{x +T_0}\,dx = \int_\mu^t \frac{S_0\,\dot{u}\,\left|\frac{d}{dt}\Phi'(\dot{u})\right|}{g(\tau) +T_0}\,d\tau \leq 2 S_0 r_0. \] Thus \[ g(t)\leq (T_0+C_0) e^{2 S_0 r_0} - T_0 \] and by \ref{asm:Phi:nabla2} \[ (k_\Phi-1)\Phi(\dot{u}(t)) \leq \Phi'(\dot{u}(t))\dot{u}(t)-\Phi(\dot{u}(t)) \leq \frac{1}{S_0} ((T_0+C_0) e^{2 S_0 r_0} - T_0). \]
The last inequality gives $|\dot{u}(t)| \leq r_1$ for all $t\in [0,1]$. \end{proof}
\end{document}
\begin{document}
\title[Pointwise estimate for the Bergman kernel]{Pointwise estimate for the Bergman kernel of the weighted Bergman spaces with exponential type weights} \author{Sa\"{\i}d Asserda and Amal Hichame} \address{Ibn Tofail University, Faculty of Sciences, Department of Mathematics, PO 242, Kenitra, Morocco} \email{asserda-said@univ-ibntofail.ac.ma}
\address{Regional Centre of Trades of Education and Training, Kenitra, Morocco} \email{amalhichame@yahoo.fr}
\subjclass[2010]{Primary 32A25, Secondary 30H20.} \keywords{Bergman Kernel, $\bar\partial$-equation.} \date{\today} \begin{abstract} Let $AL^{2}_{\phi}(\mathbb{D})$ denote the closed subspace of $L^{2}(\mathbb{D},e^{-2\phi}d\lambda)$ consisting of holomorphic functions in the unit disc ${\mathbb D}$. For a certain class of subharmonic functions $\phi : {\mathbb D}\rightarrow\mathbb{R}$, we prove an upper pointwise estimate for the Bergman kernel of $AL^{2}_{\phi}(\mathbb{D})$. \end{abstract} \maketitle \section{Introduction and statement of main result} \label{} Let $\mathbb{D}$ be the unit disc in $\mathbb{C}$ and $d\lambda$ be its Lebesgue measure. For a measurable function $\phi : \mathbb{D}\rightarrow\mathbb{R}$, let $L^{2}_{\phi}(\mathbb{D})$ be the Hilbert space of measurable functions $f$ on $\mathbb{D}$ such that $$ \Vert f\Vert_{L^{2}_{\phi}}:=\Bigl(\int_{\mathbb{D}}\vert f\vert^{2}e^{-2\phi}d\lambda\Bigr)^{1\over 2} < \infty. $$ Let $AL^{2}_{\phi}(\mathbb{D})$ be the closed subspace of $L^{2}_{\phi}(\mathbb{D})$ consisting of analytic functions. Let $P$ be the orthogonal projection of $L^{2}_{\phi}(\mathbb{D})$ onto $AL^{2}_{\phi}(\mathbb{D})$: $$ Pf(z):=\int_{\mathbb{D}}K(z,w)f(w)e^{-2\phi(w)}d\lambda(w), $$ where $K$ is the reproducing kernel of $P$.\\ The purpose of this note is to give an upper pointwise estimate of $K$ for a class of subharmonic functions $\phi$ on $\mathbb{D}$ introduced by Oleinik [10] and Oleinik-Perel'man [11]. \begin{defn} For $\phi \in C^{2}(\mathbb{D})$ with $\Delta\phi > 0$ put $\tau=(\Delta\phi)^{-1/2}$, where $\Delta$ is the Laplace operator. 
We call $\phi\in \mathcal{OP}(\mathbb{D})$ if the following conditions hold.\\ (1)\ $\exists\ C_{1} > 0$ such that $\vert\tau(z)-\tau(w)\vert\le C_{1}\vert z-w\vert$,\\ (2)\ $\exists\ C_{2} > 0$ such that $\tau(z)\leq C_{2}(1-\vert z\vert)$,\\ (3)\ $\exists\ 0< C_{3} <1$ and $a > 0$ such that $\tau(w)\leq \tau(z) + C_{3}\vert z-w\vert$ for $w\notin D(z,a\tau(z))$, where $D(z,a\tau(z))=\{ w\in \mathbb{D}\ :\ \vert w-z\vert\leq a\tau(z)\}$. \end{defn} Some examples of functions in $\mathcal{OP}(\mathbb{D})$ are as follows:\\ (i) $\phi_{1}(z)=-{A\over 2}\log(1-\vert z\vert^{2}),\ A>0$.\\ (ii) $\phi_{2}(z)={1\over 2}\bigl(-A\log(1-\vert z\vert^{2})+B(1-\vert z\vert^{2})^{-\alpha}\bigr),\ A\geq 0,\ B>0,\ \alpha > 0$.\\ (iii) $\phi_{1}+h$ and $\phi_{2}+h$, where $\phi_{1}$ and $\phi_{2}$ are as in (i) and (ii) respectively and $h\in C^{2}(\mathbb{D})$ can be any harmonic function on $\mathbb{D}$.\\ For $z,w\in\mathbb{D}$, the distance $d_{\phi}$ induced by the metric $\tau(z)^{-2}dz\otimes d{\bar z}$ is given by $$ d_{\phi}(z,w)= \inf_{\gamma}\int_{0}^{1}{\vert\gamma'(t)\vert\over\tau(\gamma(t))}\,dt, $$ where $\gamma$ runs over the piecewise $C^{1}$ curves $\gamma : [0,1]\rightarrow\mathbb{D}$ with $\gamma(0)=z$ and $\gamma(1)=w$. Thanks to condition $(2)$ the metric space $(\mathbb{D},d_{\phi})$ is complete and $d_{\phi}\succeq d_{h}$, where $d_{h}$ is the hyperbolic distance.\\
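As an illustration (our computation, not in the original text), conditions (1) and (2) can be checked directly for example (i), using $\Delta = 4\,\partial\bar\partial$:

```latex
\begin{align*}
\Delta\phi_1(z) &= \frac{2A}{(1-|z|^2)^2}, \qquad
\tau(z) = (\Delta\phi_1(z))^{-1/2} = \frac{1-|z|^2}{\sqrt{2A}},\\
|\tau(z)-\tau(w)| &= \frac{\bigl|\,|w|^2-|z|^2\,\bigr|}{\sqrt{2A}}
\le \frac{2}{\sqrt{2A}}\,|z-w|, \qquad
\tau(z) \le \frac{2}{\sqrt{2A}}\,(1-|z|),
\end{align*}
% using |w|^2-|z|^2 = (|w|+|z|)(|w|-|z|) and 1-|z|^2 \le 2(1-|z|) on \mathbb{D}.
```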
Our main result is the following theorem on the off-diagonal decay of the Bergman kernel. \begin{thm} Let $\phi\in \mathcal{OP}(\mathbb{D})$ and let $K$ be the Bergman kernel for $AL^{2}_{\phi}(\mathbb{D})$. There exist positive constants $C$ and $\sigma$ such that for any $z,w \in \mathbb{D}$ $$ \vert K(z,w)\vert e^{-(\phi(z)+\phi(w))}\leq C{1\over\tau(z)\tau(w)}\exp(-\sigma d_{\phi}(z,w)). $$ \end{thm} In [4] and [9], M. Christ and J. Marzo--J. Ortega-Cerd\`a obtained pointwise estimates for the Bergman kernel of the weighted Fock space $\mathcal{F}_{\phi}^{2}(\mathbb{C})$ under the hypothesis that $\Delta\phi$ is a doubling measure. This result was extended to several variables by H. Delin and H. Lindholm in [5] and [7] under similar hypotheses.\\ In [12], A. P. Schuster and D. Varolin obtained a pointwise estimate for the Bergman kernel of the weighted Bergman space $AL^{2}(\mathbb{D},e^{-2\phi}(1-\vert z\vert^{2})^{-2}d\lambda)$ under the hypothesis that $\Delta\phi$ is comparable to the hyperbolic metric of $\mathbb{D}$: $$\vert K(z,w)\vert e^{-(\phi(z)+\phi(w))}\leq C\exp(-\sigma d_{h}(z,w)).$$ For $\phi\in \mathcal{OP}(\mathbb{D})$ and under the strong condition: $\forall\ m\geq 1,\ \exists\ b_{m}>0$ and $0<t_{m}<{1\over m}$ such that $$\tau(w)\leq\tau(z)+t_{m}\vert z-w\vert\ \ \hbox{if}\ \ \vert z-w\vert > b_{m}\tau(z), $$ H. Arroussi and J. Pau [1] give the following pointwise estimate: for each $k\ge 1$ there exists $C_{k} > 0$ such that $$\vert K(z,w)\vert e^{-(\phi(z)+\phi(w))}\leq{C_{k}[d_{\tau}(z,w)]^{-k}\over\tau(z)\tau(w)},$$ where $d_{\tau}(z,w)={\vert z-w\vert\over\min[\tau(z),\tau(w)]}$. A better estimate would be $$\vert K(z,w)\vert e^{-(\phi(z)+\phi(w))}\leq{C\over\tau(z)\tau(w)}e^{-\sigma d_{\tau}(z,w)}.$$
\section{Proof of theorem 1.2} \noindent Near the diagonal, by [8, Lemma 3.6] there exists $\alpha > 0$ sufficiently small such that $$ \vert K(z,w)\vert\sim\sqrt{K(z,z)}\sqrt{K(w,w)}\sim {e^{\phi(z)+\phi(w)}\over\tau(z)\tau(w)}\quad\hbox{if}\quad \vert z-w\vert\leq\alpha\min[\tau(z),\tau(w)]. $$ Off the diagonal, let $\vert z-w\vert > \alpha\min[\tau(z),\tau(w)]$ and let $\beta > 0$ be such that $D(z,\beta\tau(z))\cap D(w,\beta\tau(w))=\emptyset$. We may suppose that $\tau(z)\leq\tau(w)$. Fix a smooth function $\chi\in C^{\infty}_{0}(\mathbb{D})$ such that \\ - $\hbox{supp}\,\chi\subset D(w,\beta\tau(w))$,\\ - $0\leq\chi\le 1$, $\chi=1$ in $D(w,{\beta\over 2}\tau(w))$ and \\ - $\vert\bar\partial\chi\vert^{2}\preceq\chi\tau(w)^{-2}$.\\ Since $\phi\in\mathcal{OP}(\mathbb{D})$, by [10, Lemmas 1 and 2] the following mean inequality holds \begin{eqnarray*} (*)\qquad\vert K(w,z)\vert^{2}e^{-2\phi(w)}&\preceq&{1\over\tau(w)^{2}}\int_{D(w,{\beta\over 2}\tau(w))}\chi(\zeta)\vert K(\zeta,z)\vert^{2}e^{-2\phi(\zeta)}d\lambda(\zeta)\\ &\preceq&{1\over\tau(w)^{2}}\Vert K(.,z)\Vert^{2}_{L^{2}(\chi e^{-2\phi}d\lambda)} \end{eqnarray*} By duality, $$\Vert K(.,z)\Vert_{L^{2}(\chi e^{-2\phi}d\lambda)}=\sup_{f}\vert<f,K(.,z)>_{L^{2}(\chi e^{-2\phi}d\lambda)}\vert,$$ where the supremum runs over $f$ holomorphic in $D(w,\beta\tau(w))$ with $\Vert f\Vert_{L^{2}(\chi e^{-2\phi}d\lambda)}=1$. Since $P_{\phi}(f\chi)(z)=<f,K(.,z)>_{L^{2}(\chi e^{-2\phi}d\lambda)}$, since $u_{f}=f\chi-P_{\phi}(f\chi)$ is the minimal solution in $L^{2}(\mathbb{D}, e^{-2\phi}d\lambda)$ of $\bar\partial u=f\bar\partial\chi$, and since $\chi(z)=0$, we have $$ \vert<f,K(.,z)>_{L^{2}(\chi e^{-2\phi}d\lambda)}\vert=\vert P_{\phi}(f\chi)(z)\vert=\vert u_{f}(z)\vert. $$ Since $D(z,\beta\tau(z))\cap D(w,\beta\tau(w))=\emptyset$, the function $u_{f}$ is holomorphic in $D(z,\nu\tau(z))$ for some $\nu > 0$. 
By the mean value inequality \begin{eqnarray*} \vert u_{f}(z)\vert^{2}e^{-2\phi(z)}&\preceq&{1\over\tau(z)^{2}}\int_{D(z,\nu\tau(z))}\vert u_{f}(\zeta)\vert^{2}e^{-2\phi(\zeta)}d\lambda\\ &\preceq&{1\over\tau(z)^{2}}\int_{D(z,\nu\tau(z))}e^{-\epsilon{\vert\zeta-z\vert\over\nu\tau(z)}}\vert u_{f}(\zeta)\vert^{2}e^{-2\phi(\zeta)}d\lambda \end{eqnarray*}
Since the linear curve $\gamma(t)=(1-t)z+t\zeta$ lies in $D(z,\nu\tau(z))$ and $\tau(\gamma(t))\sim\tau(z)$, we have $d_{\phi}(\zeta,z)\le C{\vert \zeta-z\vert\over\tau(z)}$ for $\zeta\in D(z,\nu\tau(z))$. Hence \begin{eqnarray*} \vert u_{f}(z)\vert^{2}e^{-2\phi(z)}&\preceq &{1\over\tau(z)^{2}}\int_{D(z,\nu\tau(z))}e^{-C\epsilon d_{\phi}(\zeta,z)}\vert u_{f}(\zeta)\vert^{2}e^{-2\phi(\zeta)}d\lambda\\ &\preceq &{1\over\tau(z)^{2}}\int_{\mathbb{D}}e^{-C\epsilon d_{\phi}(\zeta,z)}\vert u_{f}(\zeta)\vert^{2}e^{-2\phi(\zeta)}d\lambda \end{eqnarray*} The function $\zeta\rightarrow d_{\phi}(\zeta,z)$ is smooth on $\mathbb{D}\setminus(\hbox{Cut}(z)\cup\{z\})$, where $\hbox{Cut}(z)$ is the cut locus: the set of all cut points of $z$ along all geodesics that start from $z$. To get a smooth Lipschitz approximation of $d_{\phi}$, we recall the following result of Greene-Wu [6] (see also [2]). \begin{thm} Let $M$ be a complete Riemannian manifold, let $ h : M\rightarrow\mathbb{R}$ be a Lipschitz function, let $\eta : M\rightarrow ]0,+\infty[$ be a continuous function, and let $r$ be a positive number. Then there exists a smooth Lipschitz function $ g : M\rightarrow\mathbb{R}$ such that $\vert h(x)-g(x)\vert\leq\eta(x)$ for every $x\in M$, and $\hbox{Lip}(g)\leq\hbox{Lip}(h)+r$. \end{thm} We use this result with $h(\zeta)=d_{\phi}(\zeta,z),\ \eta=1$ and $r=1$, and denote the resulting function by $g_{z}$. We then have $\vert d_{\phi}(\zeta,z)-g_{z}(\zeta)\vert\le 1$ and $\tau(\zeta)\vert dg_{z}(\zeta)\vert\leq 2$. 
Hence $$ \vert u_{f}(z)\vert^{2}e^{-2\phi(z)}\preceq{1\over\tau(z)^{2}}\int_{\mathbb{D}}e^{-C\epsilon g_{z}(\zeta)}\vert u_{f}(\zeta)\vert^{2}e^{-2\phi(\zeta)}d\lambda. $$ By Berndtsson-Delin's improved $L^{2}$ estimates for the minimal solution of $\bar\partial$ in $L^{2}(\mathbb{D},e^{-2\phi}d\lambda)$ [3], [5], we have $$ \int_{\mathbb{D}}e^{-C\epsilon g_{z}(\zeta)}\vert u_{f}(\zeta)\vert^{2}e^{-2\phi(\zeta)}d\lambda\preceq\int_{\mathbb{D}}e^{-C\epsilon g_{z}(\zeta)}\vert \bar\partial \chi(\zeta)\vert^{2}\vert f(\zeta)\vert^{2}\tau(\zeta)^{2}e^{-2\phi(\zeta)}d\lambda, $$ provided that $\tau\vert\partial\omega_{\epsilon}\vert\leq\mu\omega_{\epsilon}$ with $\mu < \sqrt{2}$, where $\omega_{\epsilon}(\zeta)=e^{-C\epsilon g_{z}(\zeta)}$. If we choose $\epsilon$ small enough so that $\mu=2C\epsilon<\sqrt{2}$, then $\tau\vert\partial\omega_{\epsilon}\vert=C\epsilon\tau\vert\partial g_{z}\vert\omega_{\epsilon}\leq \mu\omega_{\epsilon}$. Thus $$ \vert u_{f}(z)\vert^{2}e^{-2\phi(z)}\preceq{1\over\tau(z)^{2}}\int_{D(w,\beta\tau(w))}e^{-C\epsilon d_{\phi}(\zeta,z)}\chi(\zeta)\vert f\vert^{2}e^{-2\phi(\zeta)}d\lambda, $$ where for the last term we use $\tau(\zeta)\sim\tau(w)$. Since $\zeta\in D(w,\beta\tau(w))$ we have $$ d_{\phi}(\zeta,z)\geq d_{\phi}(z,w)-d_{\phi}(w,\zeta)\succeq d_{\phi}(z,w)-{\vert\zeta-w\vert\over\beta\tau(w)}\succeq d_{\phi}(z,w), $$ and thanks to $(*)$, we conclude $$ \vert K(z,w)\vert e^{-(\phi(w)+\phi(z))}\leq{C\over\tau(z)\tau(w)}e^{-\sigma d_{\phi}(z,w)}. $$
\vskip 20 pt
\end{document}
\begin{document}
\nocite{*}
\title{Rankin-Cohen brackets and Serre derivatives as Poincar\'e series}
\begin{abstract} We give expressions for the Serre derivatives of Eisenstein and Poincar\'e series as well as their Rankin-Cohen brackets with arbitrary modular forms in terms of the Poincar\'e averaging construction, and derive several identities for the Ramanujan tau function as applications. \end{abstract}
\section{Introduction}
Let $k \in 2 \mathbb{Z}$, $k \ge 4$. To any $q$-series $\phi(q) = \phi(e^{2\pi i \tau}) = \sum_{n=0}^{\infty} a_n q^n$ on the upper half-plane $\tau \in \mathbb{H}$ whose coefficients grow slowly enough, one can associate a \textbf{Poincar\'e series} $$\mathbb{P}_k(\phi;\tau) = \sum_{M \in \Gamma_{\infty} \backslash \Gamma} \phi |_k M (\tau) = \frac{1}{2} \sum_{c,d} \sum_{n=0}^{\infty} a_n (c \tau + d)^{-k} e^{2\pi i n \frac{a \tau + b}{c \tau + d}}$$ that converges absolutely and uniformly on compact subsets and defines a modular form of weight $k$. Here, the first sum is taken over cosets of $\Gamma = SL_2(\mathbb{Z})$ by the subgroup $\Gamma_{\infty}$ generated by $\pm \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and the second over all coprime integers $c,d \in \mathbb{Z}$. As usual, $|_k$ is the Petersson slash operator of weight $k$. (More generally, one can also construct Poincar\'e series that are not holomorphic in this way; see \cite{O}, section 8.3 for some applications.) \\
It is easy to show that every modular form $f$ (of weight $k \ge 4$) can be written as a Poincar\'e series $\mathbb{P}_k(\phi)$, because $f$ can always be written as a linear combination of the Eisenstein series $E_k = \mathbb{P}_k(1)$ and the Poincar\'e series of exponential type $P_{k,N} = \mathbb{P}_k(q^N)$ of various indices $N$. However, expressions found by this argument tend to be messy, because the coefficients of $P_{k,N}$ are complicated series involving Kloosterman sums and special values of Bessel functions (\cite{I}, section 3.2). The most reliable way to produce Poincar\'e series with manageable Fourier coefficients seems to be to start with seed functions $\phi(\tau)$ that already behave in a manageable way under the action of $SL_2(\mathbb{Z})$.
\begin{ex} When $\phi = 1$ (a modular form of weight $0$), we obtain the normalized Eisenstein series as mentioned above: $$\mathbb{P}_k(1;\tau) = E_k(\tau) = 1 - \frac{2k}{B_k} \sum_{n=1}^{\infty} \sigma_{k-1}(n) q^n, \; \; q = e^{2\pi i \tau}, \; \sigma_{k-1}(n) = \sum_{d|n} d^{k-1},$$ where $B_k$ is the $k$-th Bernoulli number. More generally, if $\phi$ is a modular form of any weight $k$ then expanding formally yields \begin{align*} \mathbb{P}_{k+l}(\phi;\tau) &= \sum_{M \in \Gamma_{\infty} \backslash \Gamma} (c \tau + d)^{-k-l} \phi \left( \frac{a \tau + b}{c \tau + d} \right) \\ &= \sum_{M \in \Gamma_{\infty} \backslash \Gamma} (c \tau + d)^{-k-l} (c \tau + d)^k \phi(\tau) \\ &= \phi(\tau) E_l(\tau), \end{align*} where $M$ is the coset of $ \begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix}$; although the expression $\mathbb{P}_{k+l}(\phi)$ makes sense only when $l$ is sufficiently large compared to the growth of the coefficients of $\phi$. In recent work \cite{W} the author has considered the Poincar\'e series $\mathbb{P}_k(\vartheta)$ constructed from what are essentially weight $1/2$ theta functions $\vartheta$, which seem to be useful for computing with vector-valued modular forms for Weil representations; the details are somewhat more involved but this is related to the example above. \end{ex}
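The Eisenstein expansions above are easy to reproduce numerically. The following Python sketch (helper names are ours, standard library only) computes the $q$-expansion of $E_k$ directly from the formula, with the Bernoulli numbers generated by their usual recurrence $\sum_{j=0}^{m} \binom{m+1}{j} B_j = 0$.

```python
from fractions import Fraction
from math import comb

def bernoulli(k):
    # B_0, ..., B_k via the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0, m >= 1
    B = [Fraction(1)]
    for m in range(1, k + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B[k]

def sigma(k, n):
    # divisor power sum sigma_k(n) = sum of d^k over the divisors d of n
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def eisenstein(k, prec):
    # q-expansion of E_k = 1 - (2k / B_k) * sum_{n >= 1} sigma_{k-1}(n) q^n
    c = Fraction(-2 * k) / bernoulli(k)
    return [Fraction(1)] + [c * sigma(k - 1, n) for n in range(1, prec)]

# familiar expansions: E_4 = 1 + 240q + 2160q^2 + ..., E_6 = 1 - 504q - ...
assert eisenstein(4, 4) == [1, 240, 2160, 6720]
assert eisenstein(6, 3) == [1, -504, -16632]
```

The same routine with $k = 2$ produces the quasimodular series $E_2 = 1 - 24q - 72q^2 - \cdots$ discussed below.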
The motivation of this note was to consider the Poincar\'e series $\mathbb{P}_k(\phi)$ when $\phi$ is a \textbf{quasimodular form}, a more general class of functions which includes modular forms, their derivatives of all orders, and the series $$E_2(\tau) = 1 - 24 \sum_{n=1}^{\infty} \sigma_1(n) q^n$$ (cf. \cite{Z1}, section 5.3). We find that one obtains Rankin-Cohen brackets and Serre derivatives (see section 2 below for their definitions) of Eisenstein series and Poincar\'e series essentially from such forms $\phi$:
\begin{thrm} For any modular form $f \in M_k$ and $l \in 2 \mathbb{N}$, $l \ge 4$, and $m,N \in \mathbb{N}_0$, with $l \ge k+2$ if $f$ is not a cusp form, set $$\phi(\tau) = q^N \sum_{r=0}^m (-1)^r \binom{k+m-1}{m-r} \binom{l+m-1}{r} N^{m-r} D^r f(\tau);$$ then $$[f,P_{l,N}]_m= \mathbb{P}_{k+l+2m}(\phi).$$ \end{thrm}
Here $D = \frac{1}{2\pi i} \frac{d}{d \tau} = q \frac{d}{dq}$. Since $\mathbb{P}_{k+l+2m}(\phi)$ is modular by construction, and since $[-,-]_m$ is bilinear and $P_{l,N}$, $N \in \mathbb{N}_0$ span all modular forms, this gives another proof of the modularity of Rankin-Cohen brackets (at least for large $l$). This seed function $\phi$ is formally the Rankin-Cohen bracket $[f,q^N]_m$ where $q^N$ is treated like a modular form of weight $l$, so by linearity we see that Rankin-Cohen brackets and Poincar\'e averaging ``commute'' in the following sense:
\begin{cor} Let $f$ be a modular form of weight $k$ and let $\phi$ be a $q$-series whose coefficients grow sufficiently slowly that $\mathbb{P}_l(\phi;\tau)$ is well-defined, and denote by $[f,\phi]_m$ the formal result of the $m$-th Rankin-Cohen bracket where $\phi$ is treated like a modular form of weight $l$ (where $l \ge k+2$ if $f$ is not a cusp form). Then $$[f, \mathbb{P}_l(\phi)]_m = \mathbb{P}_{k+l+2m}([f,\phi]_m).$$ \end{cor}
This expression simplifies considerably for the Eisenstein series: $$[f,E_l]_m = \mathbb{P}_{k+l+2m}(\phi)$$ for the function $$\phi = (-1)^m\binom{l+m-1}{m} D^m f.$$ An equivalent result in this case has appeared in section 5 of \cite{Z3} (in particular see Proposition 6). There may be particular interest in the case that $f$ itself is an Eisenstein or Poincar\'e series as expressions of a different nature for the Rankin-Cohen brackets of two Poincar\'e series are known (e.g. \cite{DS}, section 6). \\
\begin{thrm} For any $m,N \in \mathbb{N}_0$ and $l \in 2 \mathbb{N}$ with $l \ge 2m+2$, set $$\phi(\tau) = q^N \sum_{r=0}^m \binom{m}{r} \frac{(l+m-1)!}{(l+m-r-1)!} (-E_2(\tau) / 12)^r N^{m-r};$$ then the $m$-th order Serre derivative (in the sense of section 2) of $P_{l,N}$ is $$\vartheta^{[m]} P_{l,N} = \mathbb{P}_{l+2m}(\phi).$$ \end{thrm}
Similarly, this seed function $\phi$ is formally the $m$-th Serre derivative of $q^N$ if one pretends that $q^N$ is a modular form of weight $l$; by linearity we find that Serre derivatives also commute with Poincar\'e averaging:
\begin{cor} Let $\phi$ be a $q$-series whose coefficients grow sufficiently slowly that $\mathbb{P}_l(\phi;\tau)$ is well-defined, and denote by $\vartheta^{[m]} \phi$ the formal result of the $m$-th order Serre derivative where $\phi$ is treated like a modular form of weight $l$ (where $l \ge 2m+2$). Then $$\vartheta^{[m]} \mathbb{P}_l(\phi) = \mathbb{P}_{l+2m}(\vartheta^{[m]} \phi).$$ \end{cor}
As before, this simplifies for the Eisenstein series: $$\vartheta^{[m]} E_l = \mathbb{P}_{l+2m}(\phi)$$ for the function $$\phi= \frac{(l+m-1)!}{(-12)^m (l-1)!} E_2^m.$$ It is interesting to compare this to Theorem 2 which suggests that the Serre derivative (at least of the Eisenstein series) is analogous to a Rankin-Cohen bracket with $E_2$. Similar observations have been made before (e.g. \cite{G}, section 2). \\
By computing Rankin-Cohen brackets and Serre derivatives of the cuspidal Poincar\'e series $P_{l,N}$, $N \ge 1$, which vanish in weights $l \le 10$, we can obtain new proofs of Kumar's identity (\cite{K}, eq. (14)) $$\tau(m) = -\frac{20 m^{11}}{m - 5/6} \sum_{n=1}^{\infty} \frac{\sigma_1(n) \tau(m+n)}{(m+n)^{11}}$$ and Herrero's identity (\cite{H}, eq. (1)) $$\tau(m) = -240m^{11} \sum_{n=1}^{\infty} \frac{\sigma_3(n) \tau(m+n)}{(m+n)^{11}}$$ that express the Ramanujan tau function in terms of special values of a shifted $L$-series introduced by Kohnen \cite{Ko}, as well as four additional identities of this form. Namely we find \begin{align*} \tau(m) &= -\frac{14m^8}{m - 7/12} \sum_{n=1}^{\infty} \frac{\sigma_1(n) \tau(m+n)}{(m+n)^8} \\ &= -\frac{16m^9}{m - 2/3} \sum_{n=1}^{\infty} \frac{\sigma_1(n) \tau(m+n)}{(m+n)^9} \\ &= -\frac{18m^{10}}{m - 3/4} \sum_{n=1}^{\infty} \frac{\sigma_1(n) \tau(m+n)}{(m+n)^{10}} \\ &= -240m^{10} \sum_{n=1}^{\infty} \frac{\sigma_3(n)\tau(m+n)}{(m+n)^{10}}. \end{align*} Here $\tau(m)$ is Ramanujan's tau function, i.e. the coefficient of $q^m$ in $\Delta(\tau) = q \prod_{n=1}^{\infty} (1 - q^n)^{24}$. We can also compute the values of these series with $m=0$. Based on numerical computations it seems reasonable to guess that there are no other identities of this type. The details are worked out in section 5.
\section{Background and notation}
Let $\mathbb{H} = \{\tau = x + iy: \, y > 0\}$ be the upper half-plane and let $\Gamma$ be the group $\Gamma = SL_2(\mathbb{Z})$, which acts on $\mathbb{H}$ by $\begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix} \cdot \tau = \frac{a \tau + b}{c \tau + d}$. A \textbf{modular form of weight $k$} is a holomorphic function $f : \mathbb{H} \rightarrow \mathbb{C}$ which transforms under $\Gamma$ by $$f\left( \frac{a \tau + b}{c \tau + d} \right) = (c \tau + d)^k f(\tau), \; \; M = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \Gamma, \; \tau \in \mathbb{H}$$ and whose Fourier expansion involves only non-negative exponents: $f(\tau) = \sum_{n=0}^{\infty} a_n q^n$, $q = e^{2\pi i \tau}$. We denote by $M_k$ the space of modular forms of weight $k$ and by $S_k$ the subspace of cusp forms (which in this context means $a_0 = 0$). \\
The \textbf{Rankin-Cohen brackets} are bilinear maps $$[\cdot,\cdot]_n : M_k \times M_l \rightarrow M_{k+l+2n},$$ \begin{equation}[f,g]_n = \sum_{j=0}^n (-1)^j \binom{k+n-1}{n-j} \binom{l+n-1}{j} D^j f D^{n-j} g,\end{equation} where $D^j f(\tau) = \frac{1}{(2\pi i)^j} \frac{d^j}{d \tau^j} f(\tau) = \frac{1}{(2\pi i)^j} f^{(j)}(\tau)$. If $f(\tau) = \sum_{n=0}^{\infty} a_n q^n$ is the Fourier expansion of $f$ then $$D^j f(\tau) = \sum_{n=0}^{\infty} a_n n^j q^n;$$ and in particular, the Rankin-Cohen brackets preserve integrality of Fourier coefficients. For example, the first few brackets are $$[f,g]_0 = fg, \; \; [f,g]_1 = k f \cdot Dg - lg \cdot Df,$$ $$[f,g]_2 = \frac{k(k+1)}{2} f \cdot D^2 g - (k+1)(l+1) Df \cdot Dg + \frac{l(l+1)}{2} D^2 f \cdot g.$$ These can be characterized as the unique (up to scale) bilinear differential operators of degree $2n$ that preserve modularity (see for example the second proof in section 1 of \cite{Z2}). \\
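Equation (1) is easy to experiment with on truncated $q$-expansions. The sketch below (helper names are ours) implements the bracket and verifies the classical identity $[E_4, E_6]_1 = -3456\,\Delta$, which reappears in section 5.

```python
from math import comb

def sigma(k, n):
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def mul(f, g):
    # product of truncated q-series (lists of coefficients of q^0, q^1, ...)
    h = [0] * len(f)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < len(h):
                h[i + j] += a * b
    return h

def D(f, j=1):
    # D = q d/dq applied j times: a_n q^n -> a_n n^j q^n
    return [a * n ** j for n, a in enumerate(f)]

def bracket(f, k, g, l, n):
    # [f, g]_n per equation (1), for f of weight k and g of weight l
    out = [0] * len(f)
    for j in range(n + 1):
        c = (-1) ** j * comb(k + n - 1, n - j) * comb(l + n - 1, j)
        out = [x + c * y for x, y in zip(out, mul(D(f, j), D(g, n - j)))]
    return out

prec = 6
E4 = [1] + [240 * sigma(3, n) for n in range(1, prec)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, prec)]

# Delta = q * prod_n (1 - q^n)^24, truncated
delta = [1] + [0] * (prec - 1)
for n in range(1, prec):
    factor = [0] * prec
    factor[0], factor[n] = 1, -1
    for _ in range(24):
        delta = mul(delta, factor)
delta = [0] + delta[:-1]  # multiply by q

assert delta == [0, 1, -24, 252, -1472, 4830]
assert bracket(E4, 4, E6, 6, 1) == [-3456 * t for t in delta]
```

Since $[E_4,E_6]_1$ is a cusp form of weight $12$, the constant $-3456$ is already forced by the coefficient of $q$: $4 \cdot (-504) - 6 \cdot 240 = -3456$.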
The Serre derivatives $\vartheta^{[n]}$ following \cite{Z1}, section 5.1 are maps $M_k \rightarrow M_{k+2n}$ defined recursively by $$\vartheta^{[0]} f = f, \; \; \vartheta^{[1]} f = \vartheta f = Df - \frac{k}{12} E_2 f,$$ and $$\vartheta^{[n+1]} f = \vartheta \vartheta^{[n]} f - \frac{n(k+n-1)}{144} E_4\, \vartheta^{[n-1]} f, \; \; n \ge 1.$$ (In particular $\vartheta^{[n]}$ is \emph{not} simply the $n$-th iterate of $\vartheta$.) These functions are given in closed form by \begin{equation} \vartheta^{[n]} f(\tau) = \sum_{r=0}^n \binom{n}{r} \frac{(k+n-1)!}{(k+r-1)!} (-E_2(\tau) / 12)^{n-r} D^r f(\tau),\end{equation} as one can prove by induction or by inverting equation 65 of \cite{Z1} (section 5.2).
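Both descriptions of $\vartheta^{[n]}$ can be compared coefficient by coefficient on truncated $q$-expansions. The Python sketch below (naming is ours) implements the recursion $\vartheta^{[n+1]} f = \vartheta \vartheta^{[n]} f - \frac{n(k+n-1)}{144} E_4 \vartheta^{[n-1]} f$ alongside the closed form (2), and also checks the classical evaluations $\vartheta E_4 = -\frac{1}{3}E_6$ and $\vartheta E_6 = -\frac{1}{2}E_4^2$.

```python
from fractions import Fraction
from math import comb, factorial

def sigma(k, n):
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def mul(f, g):
    # truncated q-series product
    h = [Fraction(0)] * len(f)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < len(h):
                h[i + j] += a * b
    return h

def D(f, j=1):
    return [a * n ** j for n, a in enumerate(f)]

PREC = 6
E2 = [Fraction(1)] + [Fraction(-24 * sigma(1, n)) for n in range(1, PREC)]
E4 = [Fraction(1)] + [Fraction(240 * sigma(3, n)) for n in range(1, PREC)]
E6 = [Fraction(1)] + [Fraction(-504 * sigma(5, n)) for n in range(1, PREC)]

def theta(f, k):
    # first Serre derivative of a weight-k q-expansion: Df - (k/12) E_2 f
    e2f = mul(E2, f)
    return [D(f)[n] - Fraction(k, 12) * e2f[n] for n in range(len(f))]

def theta_n(f, k, n):
    # n-th Serre derivative via the recursion; weights shift by 2 at each step
    if n == 0:
        return list(f)
    if n == 1:
        return theta(f, k)
    a = theta(theta_n(f, k, n - 1), k + 2 * (n - 1))
    b = mul(E4, theta_n(f, k, n - 2))
    c = Fraction((n - 1) * (k + n - 2), 144)
    return [a[m] - c * b[m] for m in range(len(f))]

def theta_closed(f, k, n):
    # closed form, equation (2)
    out = [Fraction(0)] * len(f)
    mE2_12 = [-x / 12 for x in E2]
    for r in range(n + 1):
        c = comb(n, r) * Fraction(factorial(k + n - 1), factorial(k + r - 1))
        pw = [Fraction(1)] + [Fraction(0)] * (len(f) - 1)
        for _ in range(n - r):
            pw = mul(pw, mE2_12)
        out = [o + c * t for o, t in zip(out, mul(pw, D(f, r)))]
    return out

assert theta(E4, 4) == [Fraction(-1, 3) * c for c in E6]
assert theta_n(E6, 6, 2) == theta_closed(E6, 6, 2)
```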
\section{Poincar\'e series}
\begin{rem} A sufficient criterion for the series $$\mathbb{P}_k(\phi;\tau) = \sum_{c,d} (c \tau + d)^{-k} \phi\Big( \frac{a \tau + b}{c \tau + d} \Big), \; \; \phi(\tau) = \sum_{n=0}^{\infty} a_n q^n$$ to converge absolutely and locally uniformly is for the coefficients of $\phi$ to satisfy the bound $a_n = O(n^l)$ where $l = \frac{k}{2} - 2 - \varepsilon$ for some $\varepsilon > 0$. To see this, note that $\binom{n+l}{l}$ is also $O(n^l)$, so we can bound $$\Big| \phi\Big( \frac{a \tau + b}{c \tau + d} \Big) \Big| \ll \sum_{n=0}^{\infty} \binom{n+l}{l} e^{-2\pi n \frac{1}{|c \tau + d|^2}} = \Big( 1 - e^{-\frac{2\pi}{|c \tau + d|^2}} \Big)^{-l-1}$$ up to a constant multiple. Since $(1 - e^{-x})^{-1} < x^{-1-\delta}$ for any fixed (small enough) $\delta > 0$ and all small enough $x > 0$, we can then bound $$\sum_{c,d} \Big| (c \tau + d)^{-k} \phi\Big( \frac{a \tau + b}{c \tau + d} \Big) \Big| \ll \sum_{c,d} |c \tau + d|^{-k + 2(l+1)(1 + \delta)} < \sum_{c,d} |c \tau + d|^{-2}.$$ \end{rem}
\begin{rem} Given a $q$-series $\phi(\tau) = \sum_{n=0}^{\infty} a_n q^n$, one can also consider the series $$\mathbb{P}_k'(\phi) = a_0 E_k + \sum_{n=1}^{\infty} a_n P_{k,n}$$ which generally has better convergence properties than the sum $\mathbb{P}_k(\phi)$ over cosets $\Gamma_{\infty} \backslash \Gamma$. Since $S_k$ is finite-dimensional, the convergence of $\sum_{n=1}^{\infty} a_n P_{k,n}$ to a cusp form in any sense is equivalent to the convergence of the series $$\sum_{n=1}^{\infty} a_n \langle f, P_{k,n} \rangle = \frac{(k-2)!}{(4\pi)^{k-1}} \sum_{n=1}^{\infty} \frac{a_n b_n}{n^{k-1}}$$ for every cusp form $f(\tau) = \sum_{n=1}^{\infty} b_n q^n \in S_k$. The Deligne bound $b_n = O(n^{(k-1)/2 + \varepsilon})$ implies that this is satisfied when the slightly weaker bound $a_n = O(n^{k/2 - 3/2 - \varepsilon})$ holds. It is clear that $\mathbb{P}_k'(\phi) = \mathbb{P}_k(\phi)$ whenever the latter series converges, so we will refer to both of these series by $\mathbb{P}_k(\phi;\tau)$ in what follows. \end{rem}
\section{Proofs}
\begin{proof}[Proof of Theorem 2]
The coefficients $a_n$ of any modular form of weight $k$ satisfy the bound $a_n = O(n^{k-1 + \varepsilon})$ for any $\varepsilon > 0$, while cusp forms satisfy the Deligne bound $a_n = O(n^{(k-1)/2 + \varepsilon})$. In particular, the coefficients of $$\phi(\tau) = q^N \sum_{r=0}^m (-1)^r \binom{k+m-1}{m-r} \binom{l+m-1}{r} N^{m-r} D^r f(\tau)$$ always satisfy the bound $O(n^{k+m-1 + \varepsilon})$, and our growth condition (of Remark 7), $$k+m-1+ \varepsilon \le \frac{k+l+2m}{2} - 3/2 - \varepsilon$$ becomes $k \le l - 1 - 2 \varepsilon$ and therefore (since $k,l \in 2 \mathbb{Z}$) $l \ge k+2$; while for cusp forms we instead require $$\frac{k-1}{2} + m + \varepsilon \le \frac{k+l+2m}{2} - 3/2 - \varepsilon,$$ or equivalently $2 \le l - 2 \varepsilon$ which is always satisfied for small enough $\varepsilon$. \\
Suppose first that the series $\mathbb{P}_{k+l+2m}(\phi;\tau)$ over cosets $\Gamma_{\infty} \backslash \Gamma$ converges normally. Repeatedly differentiating the equation $$f\Big( \frac{a \tau + b}{c \tau + d} \Big) = (c \tau + d)^k f(\tau)$$ yields $$f^{(m)}\Big( \frac{a \tau + b}{c \tau + d} \Big) = \sum_{r=0}^m \binom{m}{r} \frac{(k+m-1)!}{(k+r-1)!} c^{m-r} (c \tau + d)^{k+m+r} f^{(r)}(\tau),$$ as one can prove by induction or derive directly by considering the action of $SL_2(\mathbb{Z})$ on $\tau$ in the generating series $$\sum_{m=0}^{\infty} f^{(m)}(\tau) \frac{w^m}{m!} = f(\tau + w),$$ for $|w|$ sufficiently small. By another induction argument one finds the similar formula \begin{equation} \frac{d^m}{d \tau^m} \Big( (c \tau + d)^{-k} e^{2\pi i N \frac{a \tau + b}{c \tau + d}}\Big) = \sum_{r=0}^m \binom{m}{r} \frac{(k+m-1)!}{(k+r-1)!} (-c)^{m-r} (2\pi i N)^r (c \tau + d)^{-k-m-r} e^{2\pi i N \frac{a \tau + b}{c \tau + d}} \end{equation} for any $N \in \mathbb{N}_0$. \\
Let $(a)_m = \frac{(a+m-1)!}{(a-1)!} = a \cdot (a+1) \cdot ... \cdot (a+m-1)$ denote the Pochhammer symbol. Then \begin{align*} &\quad \sum_{M \in \Gamma_{\infty} \backslash \Gamma} \Big[ q^N \sum_{r=0}^m (-1)^r \binom{k+m-1}{m-r} \binom{l+m-1}{r} N^{m-r} D^r f(\tau) \Big] \Big|_{k+l+2m} M (\tau) \\ &= \sum_{M \in \Gamma_{\infty} \backslash \Gamma} \sum_{r=0}^m \sum_{j=0}^r \Big[ \binom{k+m-1}{m-r} \binom{l+m-1}{r} \binom{r}{j} N^{m-r} \times \\ &\quad \quad (-2\pi i)^{-r} c^{r-j} (k+j)_{r-j} (c \tau + d)^{j+r-l-2m} e^{2\pi i N \frac{a \tau + b}{c \tau + d}} f^{(j)}(\tau) \Big] \\ &=(-1)^m \sum_{M \in \Gamma_{\infty} \backslash \Gamma} \sum_{j=0}^m (-1)^j f^{(j)}(\tau) \sum_{r=0}^{m-j}\Big[ (2\pi i N)^r \binom{k+m-1}{r} \binom{l+m-1}{m-r} \binom{m-r}{j} (k+j)_{m-j-r} \times \\ &\quad\quad\quad\quad\quad\quad\quad\quad \times (-c)^{m-r-j} (c \tau + d)^{j-l-m-r} e^{2\pi i N \frac{a \tau + b}{c \tau + d}} \Big],\end{align*} where we have replaced $r$ by $m-r$ in the second equality. Since \begin{align*} &\quad \binom{k+m-1}{r} \binom{l+m-1}{m-r} \binom{m-r}{j} (k+j)_{m-j-r} \\ &= \frac{(k+m-1)! (l+m-1)! (m-r)! (k+m-r-1)!}{r!(k+m-r-1)!(m-r)!(l+r-1)!j!(m-r-j)!(k+j-1)!} \\ &= \binom{k+m-1}{m-j} \binom{l+m-1}{j} \binom{m-j}{r} (l+r)_{m-j-r}, \end{align*} as we see by replacing $\frac{(m-r)! (k+m-r-1)!}{(m-r)!(k+m-r-1)!}$ by $\frac{(m-j)! 
(l+m-j-1)!}{(m-j)!(l+m-j-1)!}$ in the above expression, this equals \begin{align*} &\quad (2\pi i)^{-m} \sum_{j=0}^m \Big[ (-1)^j f^{(j)}(\tau) \binom{k+m-1}{m-j} \binom{l+m-1}{j} \times \\ &\quad\quad\quad\quad \times \sum_{M \in \Gamma_{\infty} \backslash \Gamma} \sum_{r=0}^{m-j} (2\pi i N)^r \binom{m-j}{r} (l+r)_{m-j-r} (-c)^{m-r-j} (c \tau + d)^{j-l-m-r} e^{2\pi i N \frac{a\tau +b}{c \tau + d}}\Big] \\ &= \sum_{j=0}^m (-1)^j D^j f(\tau) \binom{k+m-1}{m-j} \binom{l+m-1}{j} \sum_{M \in \Gamma_{\infty} \backslash \Gamma} D^{m-j} \Big( (c \tau + d)^{-l} e^{2\pi i N \frac{a \tau + b}{c \tau + d}} \Big) \\ &= \sum_{j=0}^m (-1)^j \binom{k+m-1}{m-j} \binom{l+m-1}{j} D^j f(\tau) D^{m-j} P_{l,N}(\tau) \\ &= [f,P_{l,N}]_m(\tau),\end{align*} the last equality by definition (equation (1)), and the third-to-last equality using equation (3). \\
When $\phi$ satisfies the weaker growth condition, we can include a convergence factor $(c \overline{\tau} + d)^{-s}$ into the argument above (which is ignored by the operator $D$) to see that, if $\phi(\tau) = a_0 + a_1 q + a_2 q^2 + ...$, then
\begin{align*} &\quad a_0 E_{k+l+2m}(\tau;s) + a_1 P_{k+l+2m,1}(\tau;s) + a_2 P_{k+l+2m,2}(\tau;s) + ... \\ &= \sum_{M \in \Gamma_{\infty} \backslash \Gamma} (c \overline{\tau} + d)^{-s} \times \phi(\tau) \Big|_{k+l+2m} M \\ &= \sum_{j=0}^m (-1)^j \binom{k+m-1}{m-j} \binom{l+m-1}{j} D^j f(\tau) D^{m-j} \Big( \sum_{M \in \Gamma_{\infty} \backslash \Gamma} (c \overline{\tau} + d)^{-s} \times q^N \Big|_l M \Big)\end{align*} when $\mathrm{Re}[s]$ is sufficiently large, and $E_k(\tau;s)$ and $P_{k,N}(\tau;s)$ denote the deformed series $$E_k(\tau;s) = \frac{1}{2}\sum_{c,d} \frac{1}{(c \tau + d)^k (c \overline{\tau} + d)^s}, \; \; P_{k,N}(\tau;s) = \frac{1}{2} \sum_{c,d} \frac{e^{2\pi i N \frac{a \tau + b}{c \tau + d}}}{(c \tau + d)^k (c \overline{\tau} + d)^s}.$$ The claim follows by analytic continuation to $s=0$.
\end{proof}
\begin{proof}[Proof of Theorem 4] The condition $l \ge 2m+2$ makes the Fourier coefficients of $\phi$ grow sufficiently slowly: the $n$-th coefficient of $E_2^m$ is $O(n^{2m-1 + \varepsilon})$ for any $\varepsilon > 0$, so the growth condition $$2m-1+\varepsilon \le \frac{l+2m}{2} - 3/2 - \varepsilon$$ of Remark 7 is satisfied for all $l \ge 2m + 2$. \\
Using the transformation law $$E_2 \Big( \frac{a \tau + b}{c \tau + d} \Big) = (c \tau + d)^2 E_2(\tau) + \frac{6}{\pi i} c (c \tau + d),$$ it follows that $$E_2 \Big( \frac{a \tau + b}{c \tau + d} \Big)^m = \sum_{r=0}^m \binom{m}{r} (c \tau + d)^{m+r} c^{m-r} \Big( \frac{12}{2\pi i} \Big)^{m-r} E_2(\tau)^r$$ for all $m \in \mathbb{N}$. Therefore, with $$\phi(\tau) = q^N \sum_{r=0}^m \binom{m}{r} \frac{(l+m-1)!}{(l+m-r-1)!} (-E_2(\tau) / 12)^r N^{m-r},$$ ignoring convergence issues for now, we find
\begin{align*} &\quad \sum_{M \in \Gamma_{\infty} \backslash \Gamma} \phi \Big|_{l+2m} M (\tau) \\ &= \sum_{r=0}^m (-12)^{-r} N^{m-r} \binom{m}{r} \frac{(l+m-1)!}{(l+m-r-1)!} \sum_M \sum_{j=0}^r \binom{r}{j} c^{r-j} (c \tau + d)^{r+j-l-2m} (12 / 2\pi i)^{r-j} E_2(\tau)^j e^{2\pi i N \frac{a \tau + b}{c \tau + d}} \\ &= \sum_{j=0}^m \sum_{r=j}^m (-12)^{-r} N^{m-r} \binom{m}{r} \frac{(l+m-1)!}{(l+m-r-1)!} \binom{r}{j} (12 / 2\pi i)^{r-j} E_2(\tau)^j \sum_M \Big[ c^{r-j} (c \tau + d)^{r+j-l-2m} e^{2\pi i N \frac{a \tau + b}{c \tau + d}} \Big] \\ &= \sum_{j=0}^m E_2(\tau)^j \sum_{r=0}^{m-j} (-12)^{r-m} N^r \binom{m}{r} \frac{(l+m-1)!}{(l+r-1)!} \binom{m-r}{j} (12 / 2\pi i)^{m-r-j} \sum_M c^{m-r-j} (c \tau + d)^{j-l-m-r} e^{2\pi i N \frac{ a \tau + b}{c \tau + d}}, \end{align*} where in the last line we replaced $r$ by $m-r$. Since \begin{align*} &\quad (-12)^{r-m} N^r \binom{m}{r} \frac{(l+m-1)!}{(l+r-1)!} \binom{m-r}{j} (-12 / 2\pi i)^{m-r-j} \\ &= (2\pi i)^{-m} \binom{m}{j} \frac{(l+m-1)!}{(l+m-j-1)!} (-2\pi i / 12)^j \binom{m-j}{r} \frac{(l+m-j-1)!}{(l+r-1)!} (2\pi i N)^r, \end{align*} as one can see by expanding both sides of this equation, the expression above equals \begin{align*} &\quad (2\pi i)^{-m} \sum_{j=0}^m E_2(\tau)^j \sum_{r=0}^{m-j} \Big[ \binom{m}{j} \frac{(l+m-1)!}{(l+m-j-1)!} (-2\pi i / 12)^j \binom{m-j}{r} \frac{(l+m-j-1)!}{(l+r-1)!} \times \\ &\quad\quad\quad\quad\quad\quad\quad\quad \times \sum_M (-c)^{m-j-r} (c \tau + d)^{j-l-m-r} (2\pi i N)^r e^{2\pi i N \frac{a \tau + b}{c \tau + d}} \Big] \\ &= \sum_{j=0}^m \binom{m}{j} \frac{(l+m-1)!}{(l+m-j-1)!} \Big( -E_2(\tau) / 12 \Big)^j D^{m-j} P_{l,N}(\tau) \\ &= \vartheta^{[m]} P_{l,N}(\tau), \end{align*} using equation (2) from section 2. Convergence issues can be resolved by including the factor $(c \overline{\tau} + d)^{-s}$ as in the proof of Theorem 2. \end{proof}
\section{Examples involving Ramanujan's tau function}
In weight $12$, the space $S_{12}$ of cusp forms is one-dimensional and therefore all Poincar\'e series are multiples of the discriminant $\Delta(\tau) = \sum_{n=1}^{\infty} \tau(n) q^n$; we find this multiple by writing $P_{12,m} = \lambda_m \Delta$ and using $\lambda_m \langle \Delta, \Delta \rangle = \langle \Delta, P_{12,m} \rangle = \tau(m) \frac{10!}{(4\pi m)^{11}},$ such that $$P_{12,m} = \frac{10! \cdot \tau(m)}{(4\pi m)^{11} \langle \Delta, \Delta \rangle} \Delta.$$ We can form the Poincar\'e series $\mathbb{P}_{12}(\phi)$ from any $q$-series $\phi(\tau) = \sum_{n=0}^{\infty} a_n q^n$ with $a_n = O(n^{9/2 - \varepsilon})$. This includes the $q$-series $E_2$ and $E_4$ and some of their derivatives. Applying Theorems $2$ and $4$ together with the vanishing of cusp forms in weight $\le 10$ gives identities involving $\tau(n)$. (Similar arguments can be used to derive identities for the coefficients of the normalized cusp forms of weights $16,18,20,22,26$.)
\begin{ex} By Theorem 4, $$0 = \vartheta P_{10,m} = \mathbb{P}_{12}\Big[ q^m \Big( m - \frac{5}{6}E_2 \Big) \Big] = (m-5/6) P_{12,m} + (-5/6) \cdot (-24) \sum_{n=1}^{\infty} \sigma_1(n) P_{12,m+n},$$ so we recover Kumar's identity $$\tau(m) = -\frac{20 m^{11}}{m - 5/6} \sum_{n=1}^{\infty} \frac{\tau(m+n) \sigma_1(n)}{(m+n)^{11}}.$$ \end{ex}
\begin{ex} By Theorem 2, $$0 = P_{8,m}E_4 = \mathbb{P}_{12}(q^m E_4) = P_{12,m} + 240 \sum_{n=1}^{\infty} \sigma_3(n) P_{12,m+n},$$ which yields Herrero's identity $$\tau(m) = -240 m^{11} \sum_{n=1}^{\infty} \frac{\sigma_3(n) \tau(m+n)}{(m+n)^{11}}.$$ \end{ex}
\begin{ex} By Theorem 2, $$0 = [E_4,P_{6,m}]_1 = \mathbb{P}_{12} \Big( q^m (4m E_4 - 6 DE_4)\Big) = 4m P_{12,m} + 240 \sum_{n=1}^{\infty} (4m - 6n) \sigma_3(n) P_{12,m+n},$$ which implies $$\tau(m) = -60 m^{10} \sum_{n=1}^{\infty} \frac{(4m - 6n) \sigma_3(n) \tau(m+n)}{(m+n)^{11}}.$$ Subtracting the previous identity shows that $\sum_{n=1}^{\infty} \frac{n \sigma_3(n) \tau(m+n)}{(m+n)^{11}} = 0$, and therefore $$\tau(m) = -240 m^{10} \sum_{n=1}^{\infty} \frac{(m+n) \sigma_3(n) \tau(m+n)}{(m+n)^{11}} = -240m^{10} \sum_{n=1}^{\infty} \frac{\sigma_3(n) \tau(m+n)}{(m+n)^{10}}.$$ \end{ex}
\begin{ex} By Theorem 4, $$0 = \vartheta^{[2]} P_{8,m} = \mathbb{P}_{12}\Big( q^m (m^2 - (3/2)m E_2(\tau) + (1/2) E_2(\tau)^2) \Big),$$ where by Ramanujan's equation $D E_2 = \frac{1}{12} (E_2^2 - E_4)$ the coefficient of $q^n$ in $E_2(\tau)^2$ is $240\sigma_3(n) - 288n \sigma_1(n)$. Therefore we find $$0 = \left( m^2 - \frac{3}{2}m + \frac{1}{2} \right) P_{12,m} + \sum_{n=1}^{\infty} \Big( 36m \sigma_1(n) + 120 \sigma_3(n) - 144 n \sigma_1(n) \Big) P_{12,m+n}$$ and therefore $$(2m^2 - 3m + 1)\tau(m) = -24m^{11} \sum_{n=1}^{\infty} \frac{((3m-12n) \sigma_1(n) + 10 \sigma_3(n)) \tau(m+n)}{(m+n)^{11}}, \;\; m \in \mathbb{N}.$$ Combining this with the previous examples, we find $$\tau(m) = -180m^9 \sum_{n=1}^{\infty} \frac{n \sigma_1(n) \tau(m+n)}{(m+n)^{11}}$$ and therefore $$\tau(m) = -\frac{18m^{10}}{m - 3/4} \sum_{n=1}^{\infty} \frac{\sigma_1(n) \tau(m+n)}{(m+n)^{10}}.$$ \end{ex}
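The coefficient formula for $E_2^2$ used in this example follows from Ramanujan's equation $DE_2 = \frac{1}{12}(E_2^2 - E_4)$ and is easy to confirm numerically; a short Python check (helper names are ours):

```python
def sigma(k, n):
    # divisor power sum
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

PREC = 12
E2 = [1] + [-24 * sigma(1, n) for n in range(1, PREC)]
E4 = [1] + [240 * sigma(3, n) for n in range(1, PREC)]

# square E_2 as a truncated q-series
E2sq = [sum(E2[i] * E2[m - i] for i in range(m + 1)) for m in range(PREC)]

# coefficient of q^n in E_2^2 is 240 sigma_3(n) - 288 n sigma_1(n)
for n in range(1, PREC):
    assert E2sq[n] == 240 * sigma(3, n) - 288 * n * sigma(1, n)

# equivalently, Ramanujan's equation in the form 12 D E_2 = E_2^2 - E_4
for n in range(PREC):
    assert 12 * n * E2[n] == E2sq[n] - E4[n]
```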
\begin{ex} It is not valid to form the Poincar\'e series $\mathbb{P}_{12}(\phi)$ with either $\phi = E_2^3$ or $E_6$, because their Fourier coefficients grow too quickly; however, their difference $E_2^3 - E_6 = 9 DE_4 +72 D^2 E_2$ has coefficients that satisfy the required bound $O(n^{9/2-\varepsilon})$. We use \begin{align*} 0 &= \vartheta^{[3]} P_{6,m} + \frac{7}{36} P_{6,m} E_6 \\ &= \mathbb{P}_{12}\Big( q^m (m^3 - 2m^2 E_2 + (7/6) m E_2^2 - (7/36) (E_2^3 - E_6)) \Big) \\ &= \left(m^3 - 2m^2 + \frac{7}{6}m \right) P_{12,m} + \sum_{n=1}^{\infty} \Big[ (48m^2 - 336mn + 336n^2) \sigma_1(n) + (280m - 420n) \sigma_3(n) \Big] P_{12,m+n} \end{align*} to obtain $$\tau(m) = -720m^8 \sum_{n=1}^{\infty} \frac{n^2 \sigma_1(n) \tau(m+n)}{(m+n)^{11}},$$ and combining this with the previous examples, $$\tau(m) = -\frac{16m^9}{m - 2/3} \sum_{n=1}^{\infty} \frac{\sigma_1(n) \tau(m+n)}{(m+n)^9}.$$ Similarly, by expressing $D^3 E_2$ in terms of powers of $E_2$ and derivatives of modular forms one obtains the formula $$\tau(m) = -\frac{14m^8}{m - 7/12} \sum_{n=1}^{\infty} \frac{\sigma_1(n) \tau(m+n)}{(m+n)^8}.$$ \end{ex}
\begin{rem} In particular, for any $m \in \mathbb{N}$ the values of the $L$-series $\sum_{n=1}^{\infty} \frac{\sigma_1(n) \tau(m+n)}{(m+n)^s}$ at $s=8,9,10,11$ and of $\sum_{n=1}^{\infty} \frac{\sigma_3(n) \tau(m+n)}{(m+n)^s}$ at $s=10,11$ are rational numbers, and Lehmer's conjecture that $\tau(n)$ is never zero is equivalent to the non-vanishing of any of these $L$-values. Computing these $L$-series at other integers $s$ numerically does not seem to yield rational numbers. In any case, the methods of this note do not apply to other values of $s$. \end{rem}
We can also evaluate these $L$-series at $m=0$ by a similar argument. Comparing $$\vartheta E_{10} = -\frac{5}{6} - 24q - ... = -\frac{5}{6} E_{12} + \frac{38016}{691} \Delta$$ with the result of Theorem 4, $$\vartheta E_{10} = -\frac{5}{6} E_{12} + 20 \sum_{n=1}^{\infty} \sigma_1(n) P_{12,n}$$ we find $$1 = \frac{20 \cdot 691}{38016} \sum_{n=1}^{\infty} \tau(n) \sigma_1(n) \cdot \frac{10!}{\langle \Delta, \Delta \rangle \cdot (4\pi n)^{11}},$$ i.e. $$\sum_{n=1}^{\infty} \frac{\tau(n) \sigma_1(n)}{n^{11}} = \frac{2^{19} \cdot 11}{3 \cdot 5^3 \cdot 7 \cdot 691} \pi^{11} \langle \Delta, \Delta \rangle \approx 0.968.$$ Here, the Petersson norm-square of $\Delta$ is approximately $$\langle \Delta, \Delta \rangle \approx 1.03536205680 \times 10^{-6},$$ which can be computed to high precision using PARI/GP. \\
Similarly, comparing $E_8 E_4 = 1 + 720q + ... = E_{12} + \frac{432000}{691} \Delta$ with $$E_8 E_4 = \mathbb{P}_{12}(E_4) = E_{12} + 240 \sum_{n=1}^{\infty} \sigma_3(n) P_{12,n},$$ we find $$1 = \frac{240 \cdot 691}{432000} \sum_{n=1}^{\infty} \tau(n) \sigma_3(n) \cdot \frac{10!}{\langle \Delta, \Delta \rangle \cdot (4\pi n)^{11}},$$ i.e. $$\sum_{n=1}^{\infty} \frac{\tau(n) \sigma_3(n)}{n^{11}} = \frac{2^{17}}{3^2 \cdot 7 \cdot 691} \pi^{11} \langle \Delta, \Delta \rangle \approx 0.917.$$
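The decomposition $E_8 E_4 = E_{12} + \frac{432000}{691} \Delta$ can be confirmed by exact rational arithmetic on $q$-expansions; a self-contained sketch (helper names are ours):

```python
from fractions import Fraction

def sigma(k, n):
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def mul(f, g):
    # truncated q-series product
    h = [0] * len(f)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < len(h):
                h[i + j] += a * b
    return h

PREC = 8
E4 = [1] + [240 * sigma(3, n) for n in range(1, PREC)]
E8 = [1] + [480 * sigma(7, n) for n in range(1, PREC)]
E12 = [Fraction(1)] + [Fraction(65520, 691) * sigma(11, n) for n in range(1, PREC)]

# Delta = q * prod_n (1 - q^n)^24, truncated
delta = [1] + [0] * (PREC - 1)
for n in range(1, PREC):
    factor = [0] * PREC
    factor[0], factor[n] = 1, -1
    for _ in range(24):
        delta = mul(delta, factor)
delta = [0] + delta[:-1]

lhs = mul(E4, E8)
rhs = [E12[n] + Fraction(432000, 691) * delta[n] for n in range(PREC)]
assert lhs == rhs
```

For instance at $q^1$ one checks $720 = \frac{65520 + 432000}{691}$ directly.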
With similar arguments applied to \begin{align*} -3456 \Delta &= [E_4,E_6]_1 = -6 \mathbb{P}_{12}(DE_4), \\ \frac{1}{2} E_{12} - \frac{49344}{691} \Delta &= \vartheta^{[2]} E_8 = \frac{1}{2} \mathbb{P}_{12}(E_2^2), \\ -168 \Delta &= \vartheta^{[3]} E_6 + \frac{7}{36} E_6^2 = \frac{7}{36} \mathbb{P}_{12}( E_6 - E_2^3), \end{align*} and $$-600 \Delta = \vartheta^{[4]}E_4 - \frac{35}{864} E_4 E_8 - \frac{7}{40} [E_4,E_4]_2 + \frac{35}{432} [E_6,E_4]_1 = \frac{35}{3} \mathbb{P}_{12}(D^3 E_2),$$ one can compute the values \begin{align*} \sum_{n=1}^{\infty} \frac{\tau(n) \sigma_3(n)}{n^{10}} &= \frac{2^{16}}{3^3 \cdot 5^3 \cdot 7} \pi^{11} \langle \Delta, \Delta \rangle \approx 0.845, \\ \sum_{n=1}^{\infty} \frac{\tau(n) \sigma_1(n)}{n^{10}} &= \frac{2^{17}}{3^5 \cdot 5^2 \cdot 7} \pi^{11} \langle \Delta, \Delta \rangle \approx 0.939, \\ \sum_{n=1}^{\infty} \frac{\tau(n) \sigma_1(n)}{n^9} &= \frac{2^{13}}{3^4 \cdot 5 \cdot 7} \pi^{11} \langle \Delta,\Delta \rangle \approx 0.880, \\ \sum_{n=1}^{\infty} \frac{\tau(n) \sigma_1(n)}{n^8} &= \frac{2^{14}}{3^3 \cdot 5 \cdot 7^2} \pi^{11} \langle \Delta, \Delta \rangle \approx 0.754. \end{align*} Unlike the $L$-values of examples $8$ through $12$, none of these are expected to be rational. \\
\textbf{Acknowledgments:} I thank the reviewers for pointing out a mistake in Remark 6 in an earlier version of this note, and also for several suggestions that improved the exposition.
\end{document} |
\begin{document}
\title[Combinatorial properties]{Combinatorial properties of ultrametrics and generalized ultrametrics} \author{Oleksiy Dovgoshey}
\newcommand{\newline\indent}{\newline\indent}
\address{\textbf{O. Dovgoshey}\newline\indent Function theory department\newline\indent Institute of Applied Mathematics and Mechanics of NASU\newline\indent Dobrovolskogo str. 1, Slovyansk 84100, Ukraine}
\email{oleksiy.dovgoshey@gmail.com}
\subjclass[2010]{Primary 54E35, Secondary 06A05, 06A06}
\keywords{ultrametric, generalized ultrametric, equivalence relation, poset, totally ordered set, isotone mapping.}
\begin{abstract} Let \(X\), \(Y\) be sets and let \(\Phi\), \(\Psi\) be mappings with domains \(X^{2}\) and \(Y^{2}\) respectively. We say that \(\Phi\) and \(\Psi\) are \emph{combinatorially similar} if there are bijections \(f \colon \Phi(X^2) \to \Psi(Y^{2})\) and \(g \colon Y \to X\) such that \(\Psi(x, y) = f(\Phi(g(x), g(y)))\) for all \(x\), \(y \in Y\). Conditions under which a given mapping is combinatorially similar to an ultrametric or a pseudoultrametric are found. Combinatorial characterizations are also obtained for poset-valued ultrametric distances recently defined by Priess-Crampe and Ribenboim. \end{abstract}
\maketitle
\section{Introduction}
Recall some definitions from the theory of metric spaces. Let \(X\) be a set, let \(X^{2}\) be the Cartesian square of \(X\), \[ X^{2} = X \times X = \{\<x, y> \colon x, y \in X\}, \] and let \(\mathbb{R}^{+} = [0, \infty)\).
\begin{definition}\label{d1.1} A \textit{metric} on \(X\) is a function \(d\colon X^{2} \to \mathbb{R}^{+}\) such that for all \(x\), \(y\), \(z \in X\): \begin{enumerate} \item \(d(x,y) = 0\) if and only if \(x=y\), the \emph{positive property}; \item \(d(x,y)=d(y,x)\), the \emph{symmetric property}; \item \(d(x, y)\leq d(x, z) + d(z, y)\), the \emph{triangle inequality}. \end{enumerate} A metric \(d\colon X^{2} \to \mathbb{R}^{+}\) is an \emph{ultrametric} on \(X\) if \begin{enumerate} \item [\((iv)\)] \(d(x,y) \leq \max \{d(x,z),d(z,y)\}\) \end{enumerate} holds for all \(x\), \(y\), \(z \in X\). \end{definition}
Inequality \((iv)\) is often called the {\it strong triangle inequality}.
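On a finite set, conditions (i)--(iv) of Definition~\ref{d1.1} can be checked by brute force. The following Python sketch (function names are ours) tests positivity, symmetry and the strong triangle inequality for a distance given as a dictionary, and illustrates that the ordinary distance on \(\{0, 1, 3\} \subset \mathbb{R}\) is a metric but not an ultrametric, while a suitable ``isosceles'' reassignment of distances is.

```python
from itertools import product

def is_ultrametric(points, d):
    # positivity and symmetry, Definition 1.1 (i)-(ii)
    for x, y in product(points, repeat=2):
        if (d[x, y] == 0) != (x == y):
            return False
        if d[x, y] != d[y, x]:
            return False
    # strong triangle inequality, condition (iv); it implies (iii)
    return all(d[x, y] <= max(d[x, z], d[z, y])
               for x, y, z in product(points, repeat=3))

X = [0, 1, 3]
euclid = {(x, y): abs(x - y) for x in X for y in X}
assert not is_ultrametric(X, euclid)  # |0 - 3| = 3 > max(1, 2)

# an ultrametric on three points: every triangle is isosceles with the
# two largest sides equal
u = {(0, 0): 0, (1, 1): 0, (3, 3): 0,
     (0, 1): 1, (1, 0): 1,
     (0, 3): 2, (3, 0): 2, (1, 3): 2, (3, 1): 2}
assert is_ultrametric(X, u)
```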
The theory of ultrametric spaces is closely connected with various investigations in mathematics, physics, linguistics, psychology and computer science. Different properties of ultrametrics have been studied in~\cite{DM2009, DD2010, DP2013SM, Groot1956, Lemin1984FAA, Lemin1984RMS39:5, Lemin1984RMS39:1, Lemin1985SMD32:3, Lemin1988, Lemin2003, QD2009, QD2014, BS2017, DM2008, DLPS2008, KS2012, Vaughan1999, Vestfrid1994, Ibragimov2012, GomoryHu(1961), Carlsson2010, DLW, Fie, GurVyal(2012), GV, Hol, H04, BH2, Bestvina2002, DDP(P-adic), DP2019, DPT(Howrigid),PD(UMB), P2018(p-Adic),DP2018, DPT2015, CO2017TaAoC, Wei2017TaAoC, Ber2019SMJ}.
A useful generalization of the concept of an ultrametric is that of a pseudoultrametric, which is one of the main objects of our research below.
\begin{definition}\label{ch2:d2} Let \(X\) be a set and let \(d \colon X^{2} \to \mathbb{R}^{+}\) be a symmetric function such that \(d(x, x) = 0\) holds for every \(x \in X\). The function \(d\) is a \emph{pseudoultrametric} (\emph{pseudometric}) on \(X\) if it satisfies the strong triangle inequality (triangle inequality). \end{definition}
The strong triangle inequality also admits a natural generalization for poset-valued mappings.
Let \((\Gamma, \leqslant)\) be a partially ordered set with the smallest element \(\gamma_0\) and let \(X\) be a nonempty set.
\begin{definition}\label{d1.3} A mapping \(d \colon X^{2} \to \Gamma\) is an \emph{ultrametric distance}, if the following conditions hold for all \(x\), \(y\), \(z \in X\) and \(\gamma \in \Gamma\). \begin{enumerate} \item [\((i)\)] \(d(x, y) = \gamma_0\) if and only if \(x = y\). \item [\((ii)\)] \(d(x, y) = d(y, x)\). \item [\((iii)\)] If \(d(x, y) \leqslant \gamma\) and \(d(y, z) \leqslant \gamma\), then \(d(x, z) \leqslant \gamma\). \end{enumerate} \end{definition}
The ultrametric distances were introduced by Priess-Crampe and Ribenboim \cite{PR1993AMSUH} and studied in~\cite{PR1996AMSUH, PR1997AMSUH, Rib2009JoA, Rib1996PMH}. This generalization of ultrametrics has some interesting applications to logic programming, computational logic and domain theory \cite{Kro2006TCS, PR2000JLP, SH1998IMSB}.
Let us now recall the definition of combinatorial similarity. In what follows, we denote by \(F(A)\) the range of a mapping \(F \colon A \to B\), \(F(A) = \{F(x) \colon x \in A\}\).
\begin{definition}[{\cite{Dov2019a}}]\label{d2.17} Let \(X\), \(Y\) be nonempty sets and let \(\Phi\), \(\Psi\) be mappings with the domains \(X^{2}\) and \(Y^{2}\), respectively. The mapping \(\Phi\) is \emph{combinatorially similar} to \(\Psi\) if there are bijections \(f \colon \Phi(X^2) \to \Psi(Y^{2})\) and \(g \colon Y \to X\) such that \begin{equation}\label{d2.17:e1} \Psi(x, y) = f(\Phi(g(x), g(y))) \end{equation} holds for all \(x\), \(y \in Y\). In this case, we say that \(g \colon Y \to X\) is a \emph{combinatorial similarity} for the mappings \(\Psi\) and \(\Phi\). \end{definition}
Equality~\eqref{d2.17:e1} means that the diagram \begin{equation*} \ctdiagram{ \ctv 0,50:{X^{2}} \ctv 100,50:{Y^{2}} \ctv 0,0:{\Phi(X^{2})} \ctv 100,0:{\Psi(Y^{2})} \ctet 100,50,0,50:{g\otimes g} \ctet 0,0,100,0:{f} \ctel 0,50,0,0:{\Phi} \cter 100,50,100,0:{\Psi} } \end{equation*} is commutative, where we understand the mapping \(g\otimes g\) as \[ (g\otimes g)(\<y_1, y_2>) := \<g(y_1), g(y_2)> \] for \(\<y_1, y_2> \in Y^{2}\).
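For finite \(X\) and \(Y\), Definition~\ref{d2.17} can be tested by exhaustive search over the bijections \(f\) and \(g\). The Python sketch below (all names are ours) does this for two three-point examples: a mapping obtained from \(\Phi\) by relabelling both the points and the values is combinatorially similar to \(\Phi\), while a mapping whose range has a different cardinality is not.

```python
from itertools import permutations, product

def values(F, S):
    # the range F(S^2), sorted for a stable enumeration
    return sorted({F[p] for p in product(S, repeat=2)})

def combinatorially_similar(X, Phi, Y, Psi):
    # brute-force search for bijections g: Y -> X and f: Phi(X^2) -> Psi(Y^2)
    # with Psi(x, y) = f(Phi(g(x), g(y))) for all x, y in Y
    vals_phi, vals_psi = values(Phi, X), values(Psi, Y)
    if len(vals_phi) != len(vals_psi):
        return False
    for g_img in permutations(X):
        g = dict(zip(Y, g_img))
        for f_img in permutations(vals_psi):
            f = dict(zip(vals_phi, f_img))
            if all(Psi[x, y] == f[Phi[g[x], g[y]]]
                   for x, y in product(Y, repeat=2)):
                return True
    return False

X = [0, 1, 2]
Phi = {(x, y): 0 if x == y else max(x, y) for x in X for y in X}
Y = ["a", "b", "c"]
# Psi is Phi transported along a relabelling of points and of values
relabel = {0: 10, 1: 20, 2: 30}
gm = {"a": 2, "b": 0, "c": 1}
Psi = {(x, y): relabel[Phi[gm[x], gm[y]]] for x in Y for y in Y}
assert combinatorially_similar(X, Phi, Y, Psi)

# a mapping with a two-element range cannot be similar to Phi
Psi2 = {(x, y): 0 if x == y else 1 for x in Y for y in Y}
assert not combinatorially_similar(X, Phi, Y, Psi2)
```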
Some characterizations of mappings which are combinatorially similar to pseudometrics, strongly rigid pseudometrics and discrete pseudometrics were obtained in~\cite{Dov2019a}. The present paper deals with combinatorial properties of ultrametrics and generalized ultrametrics and this can be seen as a further development of research begun in~\cite{Dov2019a, Dov2019IEJA}.
The paper is organized as follows.
In Section~2 we introduce the notions of strongly consistent mappings and \(a_0\)-coherent mappings and show that these properties of mappings are invariant w.r.t. combinatorial similarities, Proposition~\ref{p2.4}. The main results of the section, Proposition~\ref{p2.7} and Theorem~\ref{p2.10}, describe \(a_0\)-coherent mappings in terms of binary relations defined on the domains of these mappings. An important special case of combinatorial similarities, the so-called weak similarities, is introduced in Definition~\ref{d2.9} at the end of the section.
In Section~3, starting from the characterization of mappings which are combinatorially similar to pseudometrics, we prove Theorem~\ref{t3.7}, a characterization of mappings which are combinatorially similar to pseudoultrametrics with at most countable range. The corresponding results for ultrametrics are given in Corollary~\ref{c3.8}. A subclass of Priess-Crampe and Ribenboim ultrametric distances which is basic for our goals, the \({\preccurlyeq}_Q\)-ultra\-metrics and the related \({\preccurlyeq}_Q\)-pseudo\-ultra\-metrics, is introduced in Definition~\ref{d3.11}. In Proposition~\ref{p3.16} we show that \({\preccurlyeq}_Q\)-pseudo\-ultra\-metrics are \(a_0\)-coherent. The main result of the section is Theorem~\ref{t3.15}, which gives a necessary and sufficient condition under which a given mapping is combinatorially similar to some \({\preccurlyeq}_Q\)-pseudo\-ultra\-metric. Proposition~\ref{p3.23} and Corollary~\ref{c3.24} extend to \({\preccurlyeq}_Q\)-pseudo\-ultra\-metrics the characterization of ultrametric-preserving functions obtained recently by Pongsriiam and Termwuttipong.
Section~4 mainly describes the interrelations between combinatorial and weak similarities of \({\preccurlyeq}_Q\)-pseudo\-ultra\-metrics. First of all, in Definition~\ref{d3.13}, we extend the notion of weak similarity from usual pseudo\-ultra\-metrics to \({\preccurlyeq}_Q\)-pseudo\-ultra\-metrics. Proposition~\ref{p3.16} claims that, for all \({\preccurlyeq}_Q\)-pseudo\-ultra\-metrics, every weak similarity is a combinatorial similarity (but not conversely in general). The orders \({\preccurlyeq}_Q\) for which the weak similarities and the combinatorial similarities coincide (for the corresponding \({\preccurlyeq}_Q\)-pseudo\-ultra\-metrics) are described in Theorem~\ref{t4.3}. In Proposition~\ref{c4.4}, for every totally ordered set \((Q, {\preccurlyeq}_Q)\) which contains a smallest element, we construct a \({\preccurlyeq}_Q\)-ultra\-metric satisfying the conditions of Theorem~\ref{t4.3}. Using this result, in Proposition~\ref{p4.8} we construct a metric \(d^{*}\), defined on a set \(X\) with \(|X| = 2^{\aleph_{0}}\), such that \(d^{*}\) is not combinatorially similar to any ultrametric but, for every countable \(X_1 \subseteq X\), the restriction of \(d^{*}\) to \(X_1\) is combinatorially similar to an ultrametric. The mappings which are combinatorially similar to \({\preccurlyeq}_Q\)-pseudo\-ultra\-metrics are described in Theorems~\ref{t4.11}, \ref{t4.15} and \ref{t4.19} for the case of totally ordered \((Q, {\preccurlyeq}_Q)\) satisfying distinct universal and topological restrictions. The final results of the paper, Theorem~\ref{t4.20} and Corollary~\ref{c4.22}, give necessary and sufficient conditions under which a given mapping is combinatorially similar to a pseudoultrametric or, respectively, to an ultrametric.
\section{Consistency with equivalence relations}
Let \(X\) be a set. A \emph{binary relation} on \(X\) is a subset of the Cartesian square \(X^{2}\). A relation \(R \subseteq X^{2}\) is an \emph{equivalence relation} on \(X\) if the following conditions hold for all \(x\), \(y\), \(z \in X\): \begin{enumerate} \item \(\<x, x> \in R\), the \emph{reflexive} property; \item \((\<x, y> \in R) \Leftrightarrow (\<y, x> \in R)\), the \emph{symmetric} property; \item \(((\<x, y> \in R) \text{ and } (\<y, z> \in R)) \Rightarrow (\<x, z> \in R)\), the \emph{transitive} property. \end{enumerate}
Let \(R\) be an equivalence relation on \(X\). A mapping \(F \colon X^{2} \to X\) is \emph{consistent} with \(R\) if the implication \begin{equation*} \bigl(\<x_1, x_2> \in R \text{ and } \<x_3, x_4> \in R\bigr) \Rightarrow \bigl(\<F(x_1, x_3), F(x_2, x_4)> \in R\bigr) \end{equation*} is valid for all \(x_1\), \(x_2\), \(x_3\), \(x_4 \in X\) (see~\cite[p.~78]{KurMost}). Similarly, we will say that a mapping \(\Phi \colon X^{2} \to Y\) is \emph{strongly consistent} with \(R\) if the implication \begin{equation}\label{e2.5} \bigl(\<x_1, x_2> \in R \text{ and } \<x_3, x_4> \in R\bigr) \Rightarrow \bigl(\Phi(x_1, x_3) = \Phi(x_2, x_4)\bigr) \end{equation} is valid for all \(x_1\), \(x_2\), \(x_3\), \(x_4 \in X\).
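For instance, let \(X = \mathbb{Z}\) and let \(R\) be the relation of congruence modulo \(2\). Then the mapping \(\Phi \colon \mathbb{Z}^{2} \to \{0, 1\}\) defined by
\[
\Phi(x, y) := (x + y) \bmod 2
\]
is strongly consistent with \(R\): if \(x_1 \equiv x_2 \pmod{2}\) and \(x_3 \equiv x_4 \pmod{2}\), then \(x_1 + x_3 \equiv x_2 + x_4 \pmod{2}\), so that \(\Phi(x_1, x_3) = \Phi(x_2, x_4)\).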
\begin{remark}\label{r2.1} Let \(R\) be an equivalence relation on a set \(X\). Then every mapping \(\Phi \colon X^{2} \to X\) which is strongly consistent with \(R\) is consistent with \(R\). The converse statement holds if and only if \(R\) is the diagonal of \(X\), \[ R = \Delta_{X} = \{\<x, x> \colon x \in X\}. \] \end{remark}
\begin{definition}\label{d2.5} Let \(X\) be a nonempty set, let \(\Phi\) be a mapping with \(\dom \Phi = X^{2}\) and let \(a_0 \in \Phi(X^{2})\). The mapping \(\Phi\) is \(a_0\)-\emph{coherent} if \(\Phi\) is strongly consistent with the fiber \[ \Phi^{-1}(a_0) := \{\<x, y> \colon \Phi(x, y) = a_0\}. \] \end{definition}
\begin{remark}\label{r2.3} In particular, if \(\Phi\) is \(a_0\)-coherent, then \(\Phi^{-1}(a_0)\) is an equivalence relation on \(X\). \end{remark}
The following proposition claims that the properties of being strongly consistent and of being coherent are invariant w.r.t. combinatorial similarities.
\begin{proposition}\label{p2.4} Let \(X\), \(Y\) be nonempty sets, let \(\Phi\), \(\Psi\) be combinatorially similar mappings with \(\dom \Phi = X^{2}\) and \(\dom \Psi = Y^{2}\) and the commutative diagram \begin{equation*} \ctdiagram{ \def25{25} \ctv 0,25:{X^{2}} \ctv 100,25:{Y^{2}} \ctv 0,-25:{\Phi(X^{2})} \ctv 100,-25:{\Psi(Y^{2})} \ctet 100,25,0,25:{g\otimes g} \ctet 0,-25,100,-25:{f} \ctel 0,25,0,-25:{\Phi} \cter 100,25,100,-25:{\Psi} }. \end{equation*} If \(\Phi\) is strongly consistent with an equivalence relation \(R_X\) on \(X\), then \(\Psi\) is strongly consistent with an equivalence relation \(R_Y\) on \(Y\) satisfying \[ (\<x,y> \in R_Y) \Leftrightarrow (\<g(x), g(y)> \in R_X) \] for every \(\<x, y> \in Y^{2}\). In addition, if \(\Phi\) is \(a_0\)-coherent for \(a_0 \in \Phi(X^{2})\), then \(\Psi\) is \(f(a_0)\)-coherent. \end{proposition}
The proof is straightforward and we omit it here.
Let \(X\) be a set and let \(R_1\) and \(R_2\) be binary relations on \(X\). Recall that the composition of the binary relations \(R_1\) and \(R_2\) is the binary relation \(R_1 \circ R_2 \subseteq X^{2}\) for which \(\<x, y> \in R_1 \circ R_2\) holds if and only if there is \(z \in X\) such that \(\<x, z> \in R_1\) and \(\<z, y> \in R_2\).
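For instance, if \(X = \{1, 2, 3\}\), \(R_1 = \{\<1, 2>, \<2, 2>\}\) and \(R_2 = \{\<2, 3>\}\), then
\[
R_1 \circ R_2 = \{\<1, 3>, \<2, 3>\} \quad \text{and} \quad R_2 \circ R_1 = \varnothing,
\]
which shows, in particular, that the composition of binary relations is not commutative in general.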
Using the notion of composition of binary relations we can reformulate Definition~\ref{d2.5} as follows.
\begin{proposition}\label{p2.7} Let \(X\) be a nonempty set, let \(\Phi\) be a mapping with \(\dom \Phi = X^{2}\) and let \(a_0 \in \Phi(X^{2})\). Then \(\Phi\) is \(a_0\)-coherent if and only if the fiber \(R = \Phi^{-1}(a_0)\) is an equivalence relation on \(X\) and the equality \begin{equation}\label{p2.7:e1} \Phi^{-1}(b) = R \circ \Phi^{-1}(b) \circ R \end{equation} holds for every \(b \in \Phi(X^{2})\). \end{proposition}
\begin{proof} It suffices to show that \(\Phi\) is strongly consistent with \(R\) if and only if equality \eqref{p2.7:e1} holds for every \(b \in \Phi(X^{2})\). Let \(b \in \Phi(X^{2})\) and let \eqref{p2.7:e1} hold. Suppose that \(\<x_1, x_3> \in X^{2}\) is such that \[ \Phi(x_1, x_3) = b. \] If \(\<x_2, x_1> \in R\), \(\<x_1, x_3> \in \Phi^{-1}(b)\) and \(\<x_3, x_4> \in R\), then from the definition of the composition \(\circ\) we obtain \[ \<x_2, x_4> \in R \circ \Phi^{-1}(b) \circ R, \] which implies \(\<x_2, x_4> \in \Phi^{-1}(b)\) by equality~\eqref{p2.7:e1}. Thus, the implication~\eqref{e2.5} is valid.
Conversely, suppose that \(\Phi\) is strongly consistent with \(R\). Then~\eqref{e2.5} implies the inclusion \begin{equation}\label{p2.7:e3} R \circ \Phi^{-1}(b) \circ R \subseteq \Phi^{-1}(b) \end{equation} for every \(b \in \Phi(X^{2})\). Since \(R\) is reflexive, the converse inclusion is also valid. Equality~\eqref{p2.7:e1} follows. \end{proof}
\begin{corollary}\label{c2.5} Let \(X\) be a nonempty set, let \(\Phi\) be a symmetric mapping with \(\dom \Phi = X^{2}\) and let \(a_0 \in \Phi(X^{2})\). Suppose \(R := \Phi^{-1}(a_0)\) is an equivalence relation on \(X\). Then the following conditions are equivalent. \begin{enumerate} \item[\((i)\)] \(\Phi\) is \(a_0\)-coherent. \item[\((ii)\)] \(\Phi^{-1}(b) = R \circ \Phi^{-1}(b) \circ R\) holds for every \(b \in \Phi(X^{2})\). \item[\((iii)\)] \(\Phi^{-1}(b) = R \circ \Phi^{-1}(b)\) holds for every \(b \in \Phi(X^{2})\). \item[\((iv)\)] \(\Phi^{-1}(b) = \Phi^{-1}(b) \circ R\) holds for every \(b \in \Phi(X^{2})\). \item[\((v)\)] For every \(b \in \Phi(X^{2})\), at least one of the equalities \[ \Phi^{-1}(b) = R \circ \Phi^{-1}(b), \quad \Phi^{-1}(b) = \Phi^{-1}(b) \circ R \] holds. \end{enumerate} \end{corollary}
\begin{proof} In what follows, for every \(b \in \Phi(X^{2})\), we write \(R_b = \Phi^{-1}(b)\) and, for every \(A \subseteq X^{2}\), define the inverse binary relation \(A^{T}\) by the rule: \begin{itemize} \item the membership \(\<x, y> \in A^{T}\) holds if and only if \(\<y, x> \in A\). \end{itemize}
Suppose \((v)\) is valid and we have \begin{equation}\label{c2.5:e1} R_b = R_b \circ R. \end{equation} It is trivial that a binary relation \(A\) is symmetric if and only if we have \(A^{T} = A\). Furthermore, the equality \[ (C \circ B)^{T} = B^{T} \circ C^{T} \] holds for all binary relations \(B\) and \(C\) defined on one and the same set (see, for example, \cite[p.~15]{How1976AP}). Consequently, from \eqref{c2.5:e1} it follows that \begin{align*} R_b &= (R_b)^{T} = (R_b \circ R)^{T} = R^{T} \circ R_b^{T} \\ & = R \circ R_b = R \circ (R_b \circ R) = R \circ R_b \circ R. \end{align*} Similarly, \(R_b = R \circ R_b\) implies \(R_b = R \circ R_b \circ R\). Thus, the implication \((v) \Rightarrow (ii)\) is valid.
If \((ii)\) holds, then we have \[ R_b = R \circ R_b \circ R \] for every \(b \in \Phi(X^{2})\). Since \(R\) is an equivalence relation, the equality \(R \circ R = R\) holds. Consequently, \begin{align*} R_b & = (R \circ R) \circ R_b \circ R = R \circ (R \circ R_b \circ R) = R \circ R_b. \end{align*} Thus, \((ii)\) implies \((iii)\). Analogously, \((ii)\) implies \((iv)\). The implications \((iii) \Rightarrow (v)\) and \((iv) \Rightarrow (v)\) are evidently valid. To complete the proof we recall that \((i)\) and \((ii)\) are equivalent by Proposition~\ref{p2.7}. \end{proof}
Let \(X\) be a nonempty set and \(P = \{X_j \colon j \in J\}\) be a set of nonempty subsets of \(X\). Then \(P\) is a \emph{partition} of \(X\) with the blocks \(X_j\) if \[ \bigcup_{j \in J} X_j = X \] and \(X_{j_1} \cap X_{j_2} = \varnothing\) holds for all distinct \(j_1\), \(j_2 \in J\).
There is a well-known one-to-one correspondence between equivalence relations and partitions.
If \(R\) is an equivalence relation on \(X\), then an \emph{equivalence class} is a subset \([a]_R\) of \(X\) having the form \begin{equation}\label{e1.1} [a]_R = \{x \in X \colon \<x, a> \in R\}, \quad a \in X. \end{equation} The \emph{quotient set} of \(X\) w.r.t. \(R\) is the set of all equivalence classes \([a]_R\), \(a \in X\).
\begin{proposition}\label{p2.5} Let \(X\) be a nonempty set. If \(P = \{X_j \colon j \in J\}\) is a partition of \(X\) and \(R_P\) is a binary relation on \(X\) defined as \begin{itemize} \item[] \(\<x, y> \in R_P\) if and only if \(\exists j \in J\) such that \(x \in X_j\) and \(y \in X_j\), \end{itemize} then \(R_P\) is an equivalence relation on \(X\) with the equivalence classes \(X_j\). Conversely, if \(R\) is an equivalence relation on \(X\), then the set \(P_R\) of all distinct equivalence classes \([a]_R\) is a partition of \(X\) with the blocks \([a]_R\). \end{proposition}
For the proof, see, for example, \cite[Chapter~II, \S{}~5]{KurMost}.
\begin{lemma}[{\cite[p.~9]{Kel1975S}}]\label{l2.6} Let \(X\) be a nonempty set. If \(R\) is an equivalence relation on \(X\) and \(P_R = \{X_j \colon j \in J\}\) is the corresponding partition of \(X\), then the equality \begin{equation*} R = \bigcup_{j \in J} X_j^2 \end{equation*} holds. \end{lemma}
For every partition \(P = \{X_j \colon j \in J\}\) of a nonempty set \(X\) we define a partition \(P \otimes P^1\) of \(X^{2}\) by the rule: \begin{itemize} \item A subset \(B\) of \(X^{2}\) is a block of \(P \otimes P^1\) if and only if either \[ B = \bigcup_{j \in J} X_{j}^{2} \] or there are \emph{distinct} \(j_1\), \(j_2 \in J\) such that \[ B = X_{j_1} \times X_{j_2}. \] \end{itemize}
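For instance, if \(X = \{1, 2, 3\}\) and \(P = \{\{1, 2\}, \{3\}\}\), then the partition \(P \otimes P^{1}\) of \(X^{2}\) consists of the three blocks
\[
\{1, 2\}^{2} \cup \{3\}^{2} = \{\<1,1>, \<1,2>, \<2,1>, \<2,2>, \<3,3>\}, \quad \{1, 2\} \times \{3\}, \quad \{3\} \times \{1, 2\}.
\]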
\begin{definition}\label{d2.8} Let \(X\) be a nonempty set and let \(P_1\) and \(P_2\) be partitions of \(X\). The partition \(P_{1}\) is \emph{finer} than the partition \(P_{2}\) if the inclusion \[ [x]_{R_{P_1}} \subseteq [x]_{R_{P_2}} \] holds for every \(x \in X\), where \(R_{P_1}\) and \(R_{P_2}\) are equivalence relations corresponding to \(P_1\) and \(P_2\) respectively. \end{definition}
If \(P_1\) is finer than \(P_2\), then we say that \(P_{1}\) is a \emph{refinement} of \(P_{2}\).
The following theorem gives us a new characterization of \(a_0\)-coherent mappings.
\begin{theorem}\label{p2.10} Let \(X\) be a nonempty set, \(\Phi\) be a mapping with \(\dom \Phi= X^{2}\) and let \(a_0 \in \Phi(X^{2})\). Then \(\Phi\) is \(a_0\)-coherent if and only if the fiber \[ R := \Phi^{-1}(a_0) \] is an equivalence relation on \(X\) and the partition \(P_R \otimes P_R^1\) of \(X^{2}\) is a refinement of the partition \(P_{\Phi^{-1}} := \{\Phi^{-1}(b) \colon b \in \Phi(X^{2})\}\), where \(P_R\) is a partition of \(X\) whose blocks are the equivalence classes of \(R\). \end{theorem}
\begin{proof} Let \(\Phi\) be \(a_0\)-coherent. Then, by Definition~\ref{d2.5}, \(R\) is an equivalence relation on \(X\). We claim that \(P_R \otimes P_R^{1}\) is a refinement of \(P_{\Phi^{-1}}\). It suffices to show that for every block \(B_0\) of \(P_R \otimes P_R^{1}\) there is \(b_0 \in \Phi(X^{2})\) such that \begin{equation}\label{p2.10:e1} B_0 \subseteq \Phi^{-1}(b_0). \end{equation} Suppose that \begin{equation}\label{p2.10:e2} B_0 = \bigcup_{j \in J} X_j^{2}, \end{equation} where \(X_j\), \(j \in J\), are the blocks of the partition corresponding to the equivalence relation \(\Phi^{-1}(a_0)\) on \(X\). By Lemma~\ref{l2.6}, we have the equality \[ \bigcup_{j \in J} X_j^{2} = \Phi^{-1}(a_0). \] The last equality and \eqref{p2.10:e2} imply \eqref{p2.10:e1} with \(b_0 = a_0\). If \(B_0\) is a block of \(P_R \otimes P_R^{1}\) but \eqref{p2.10:e2} does not hold, then there are two distinct \(j_1\), \(j_2 \in J\) such that \begin{equation}\label{p2.10:e3} B_0 = X_{j_1} \times X_{j_2}. \end{equation} Let \(x_1 \in X_{j_1}\) and \(x_2 \in X_{j_2}\) and let \(b_0 \in \Phi(X^{2})\) be such that \begin{equation}\label{p2.10:e4} \<x_1, x_2> \in \Phi^{-1}(b_0). \end{equation} We must show that \begin{equation}\label{p2.10:e5} X_{j_1} \times X_{j_2} \subseteq \Phi^{-1}(b_0). \end{equation} It follows from Proposition~\ref{p2.7} and Lemma~\ref{l2.6} that \begin{equation}\label{p2.10:e6} \Phi^{-1}(b_0) = \left(\bigcup_{j \in J} X_j^{2}\right) \circ \Phi^{-1}(b_0) \circ \left(\bigcup_{j \in J} X_j^{2}\right) \end{equation} holds. Inclusion~\eqref{p2.10:e5} holds if, for every \(x \in X_{j_1}\) and \(y \in X_{j_2}\), we have \[ \<x, y> \in \Phi^{-1}(b_0). \] Using \eqref{p2.10:e6} we obtain \begin{equation}\label{p2.10:e7} \Phi^{-1}(b_0) \supseteq X_{j_1}^{2} \circ \Phi^{-1}(b_0) \circ X_{j_2}^{2}. \end{equation} Since \(\<x, x_1> \in X_{j_1}^{2}\) and \(\<x_1, x_2> \in \Phi^{-1}(b_0)\) and \(\<x_2, y> \in X_{j_2}^{2}\), the definition of the composition \(\circ\) and \eqref{p2.10:e7} imply \(\<x, y> \in \Phi^{-1}(b_0)\). Thus, \(P_R \otimes P_R^{1}\) is a refinement of \(P_{\Phi^{-1}}\) if \(\Phi\) is \(a_0\)-coherent.
Conversely, suppose that \(R = \Phi^{-1}(a_0)\) is an equivalence relation on \(X\) and \(P_R \otimes P_R^{1}\) is finer than \(P_{\Phi^{-1}}\). By Proposition~\ref{p2.7}, the mapping \(\Phi\) is \(a_0\)-coherent if and only if the equality \[ R \circ \Phi^{-1}(b) \circ R = \Phi^{-1}(b) \] holds for every \(b \in \Phi(X^{2})\). The reflexivity of \(R\) implies that \[ R \circ \Phi^{-1}(b) \circ R \supseteq \Phi^{-1}(b). \] Consequently, to complete the proof it suffices to show that \begin{equation}\label{p2.10:e8} R \circ \Phi^{-1}(b) \circ R \subseteq \Phi^{-1}(b) \end{equation} holds for every \(b \in \Phi(X^{2})\). Inclusion~\eqref{p2.10:e8} holds if and only if \begin{equation}\label{p2.10:e9} R \circ \{\<x, y>\} \circ R \subseteq \Phi^{-1}(b) \end{equation} holds for every \(\<x, y> \in \Phi^{-1}(b)\), where \(\{\<x, y>\}\) is the one-point subset of \(X^{2}\) consisting of the point \(\<x, y>\) only. A simple calculation shows that \begin{equation}\label{p2.10:e10} B = R \circ B \circ R \end{equation} holds for every block \(B\) of the partition \(P_R \otimes P_R^{1}\). Since \(P_R \otimes P_R^{1}\) is a refinement of \(P_{\Phi^{-1}}\), equality \eqref{p2.10:e10} implies \eqref{p2.10:e9} for \(\<x, y> \in B\). \end{proof}
Let us consider now some examples.
\begin{proposition}\label{c2.2} Let \(X\) be a nonempty set and let \(d \colon X^{2} \to \mathbb{R}^{+}\) be a pseudoultrametric on \(X\). Then \(d^{-1}(0)\) is an equivalence relation on \(X\) and \(d\) is \(0\)-coherent. \end{proposition}
This proposition is a corollary of the corresponding result for pseudometrics \cite[Ch.~4, Th.~15]{Kel1975S}.
\begin{definition}\label{d2.9} Let \((X_1, d_1)\) and \((X_2, d_2)\) be pseudoultrametric spaces. A bijection \(\Phi \colon X_1 \to X_2\) is a \emph{weak similarity} if there is a strictly increasing bijective function \(f \colon d_2(X_{2}^{2}) \to d_1(X_{1}^{2})\) such that the equality \begin{equation}\label{d2.9:e1} d_1(x, y) = f(d_2(\Phi(x), \Phi(y))) \end{equation} holds for all \(x\), \(y \in X_1\). \end{definition}
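For instance, if \(d\) is a pseudoultrametric on a set \(X\), then \(d^{2}\) is also a pseudoultrametric on \(X\), and the identity mapping \(X \to X\) is a weak similarity of the spaces \((X, d)\) and \((X, d^{2})\): the equality
\[
d(x, y) = \sqrt{d^{2}(x, y)}
\]
holds for all \(x\), \(y \in X\), and \(t \mapsto \sqrt{t}\) is a strictly increasing bijection between the corresponding distance sets.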
\begin{remark}\label{r2.10} The weak similarities of semimetric spaces and ultrametric ones were studied in~\cite{DP2013AMH} and \cite{P2018(p-Adic)}. See also~\cite{KvL2014} and references therein for some results related to weak similarities of subsets of Euclidean finite-dimensional spaces. \end{remark}
\begin{proposition}\label{p2.11} Let \((X_1, d_1)\) and \((X_2, d_2)\) be pseudoultrametric spaces and \(\Phi \colon X_1 \to X_2\) be a weak similarity. Then \(\Phi\) is a combinatorial similarity for the pseudoultrametrics \(d_1\) and \(d_2\). \end{proposition}
\begin{proof} It follows directly from Definition~\ref{d2.9} and Definition~\ref{d2.17}. \end{proof}
\section{Combinatorial similarity for generalized ultrametrics}
First of all, we recall a combinatorial characterization of arbitrary pseudometrics.
\begin{theorem}[{\cite{Dov2019a}}]\label{ch2:p7} Let \(X\) be a nonempty set. The following conditions are equivalent for every mapping \(\Phi\) with \(\dom\Phi = X^{2}\). \begin{enumerate} \item\label{ch2:p7:s1} \(\Phi\) is combinatorially similar to a pseudometric.
\item\label{ch2:p7:s2} \(\Phi\) is symmetric, and \(|\Phi(X^{2})| \leqslant 2^{\aleph_{0}}\), and there is \(a_0 \in \Phi(X^{2})\) such that \(\Phi\) is \(a_0\)-coherent. \end{enumerate} \end{theorem}
\begin{corollary}[{\cite{Dov2019a}}]\label{c3.2}
Let \(X\) be a nonempty set and let \(\Phi\) be a mapping with \(\dom\Phi = X^{2}\). Then \(\Phi\) is combinatorially similar to a metric if and only if \(\Phi\) is symmetric, and \(|\Phi(X^{2})| \leqslant 2^{\aleph_{0}}\), and there is \(a_0 \in \Phi(X^{2})\) such that \(\Phi^{-1}(a_0) = \Delta_{X}\), where \(\Delta_{X}\) is the diagonal of \(X\). \end{corollary}
Consequently, if a mapping \(\Phi\), with \(\dom \Phi = X^{2}\), is combinatorially similar to a pseudoultrametric, then it satisfies condition \((ii)\) of Theorem~\ref{ch2:p7}.
Another necessary condition for combinatorial similarity of \(\Phi\) to a pseudoultrametric follows from the fact that \begin{itemize} \item all triangles are isosceles in every pseudoultrametric space. \end{itemize}
This fact can be written in the following form.
\begin{lemma}\label{l3.1} Let \(X\) be a nonempty set and let \(\Phi\) be a mapping with \(\dom \Phi = X^{2}\). If \(\Phi\) is combinatorially similar to a pseudoultrametric, then, \begin{enumerate} \item[\((i)\)] for every triple \(\<x_1, x_2, x_3>\) of points of \(X\), there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \(\Phi(x_{i_1}, x_{i_2}) = \Phi(x_{i_2}, x_{i_3})\). \end{enumerate} \end{lemma}
The following example shows that condition~\((i)\) is not sufficient for the existence of a pseudoultrametric \(d\) which is combinatorially similar to \(\Phi\), even if \(\Phi\) is a metric.
\begin{example}\label{ex3.4} Let \(X = \{x_1, x_2, x_3, x_4\}\) and let \(\rho \colon X^{2} \to \mathbb{R}^{+}\) be a symmetric mapping defined as \begin{equation}\label{ex3.4:e1} \rho(x, y) = \begin{cases} 0 & \text{if } x = y,\\ \frac{\pi}{2} & \text{if } \{x, y\} = \{x_1, x_2\} \text{ or } \{x, y\} = \{x_2, x_3\},\\ \pi & \text{otherwise}. \end{cases} \end{equation} It is easy to see that \(\rho\) is a metric on \(X\) such that every triangle is isosceles in \((X, \rho)\) (see Figure~\ref{fig1}). Suppose \(\rho\) is combinatorially similar to some pseudoultrametric \(d \colon Y^{2} \to \mathbb{R}^{+}\). Then, by Definition~\ref{d2.17}, there are bijections \(f \colon \rho(X^{2}) \to d(Y^{2})\) and \(g \colon Y \to X\) such that \[ d(x, y) = f(\rho(g(x), g(y))) \] for all \(x\), \(y \in Y\). The last equality and \eqref{ex3.4:e1} imply \[ d(g^{-1}(x_1), g^{-1}(x_2)) = d(g^{-1}(x_2), g^{-1}(x_3)) = f\left(\frac{\pi}{2}\right) \] and \begin{equation*} d(g^{-1}(x_1), g^{-1}(x_4)) = d(g^{-1}(x_4), g^{-1}(x_2)) = f\left(\pi\right). \end{equation*} Using these equalities and the strong triangle inequality (for the triples \(\<g^{-1}(x_1), g^{-1}(x_2), g^{-1}(x_3)>\) and \(\<g^{-1}(x_1), g^{-1}(x_4), g^{-1}(x_2)>\)) we obtain \[ f\left(\frac{\pi}{2}\right) \geqslant f\left(\pi\right) \text{ and } f\left(\frac{\pi}{2}\right) \leqslant f\left(\pi\right). \] Thus, \(f\left(\frac{\pi}{2}\right) = f\left(\pi\right)\) holds, contrary to the bijectivity of \(f\). \end{example}
\begin{figure}
\caption{The metric space \((X, \rho)\) is (up to isometry) a subspace of the metric space \(L\) consisting of the three rays \(\protect\overrightarrow{x_4x_1}\), \(\protect\overrightarrow{x_4x_2}\), \(\protect\overrightarrow{x_4x_3}\) and a unit circle (a circle with the radius \(1\)) passing through \(x_1\), \(x_2\) and \(x_3\) if we consider \(L\) endowed with the shortest path metric.}
\label{fig1}
\end{figure}
We want to describe the mappings which are combinatorially similar to pseudoultrametrics. For this goal we recall some definitions.
Let \(\gamma\) be a binary relation on a set \(X\). We will write \(\gamma^{1} = \gamma\) and \(\gamma^{n+1} = \gamma^{n}\circ \gamma\) for every integer \(n \geqslant 1\). The \emph{transitive closure} \(\gamma^{t}\) of \(\gamma\) is the relation \begin{equation}\label{e3.2} \gamma^{t} := \bigcup_{n=1}^{\infty} \gamma^{n}. \end{equation}
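For instance, if \(X = \{1, 2, 3\}\) and \(\gamma = \{\<1, 2>, \<2, 3>\}\), then \(\gamma^{2} = \{\<1, 3>\}\) and \(\gamma^{n} = \varnothing\) for every \(n \geqslant 3\), so that
\[
\gamma^{t} = \{\<1, 2>, \<2, 3>, \<1, 3>\}.
\]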
For every \(\beta \subseteq X^{2}\), the transitive closure \(\beta^{t}\) is transitive and the inclusion \(\beta \subseteq \beta^{t}\) holds. Moreover, if \(\tau \subseteq X^{2}\) is an arbitrary transitive binary relation for which \(\beta \subseteq \tau\), then we also have \(\beta^{t} \subseteq \tau\), i.e., \(\beta^{t}\) is the smallest transitive binary relation containing \(\beta\).
Recall that a reflexive and transitive binary relation \(\preccurlyeq_Y\) on a set \(Y\) is a \emph{partial order} on \(Y\) if, for all \(x\), \(y \in Y\), we have the \emph{antisymmetric property}, \[ \bigl(\<x, y> \in \preccurlyeq_Y \text{ and } \<y, x> \in \preccurlyeq_Y \bigr) \Rightarrow (x = y). \]
In what follows we use the formula \(x \preccurlyeq y\) instead of \(\<x, y> \in \preccurlyeq\) and write \(x\prec y\) instead of \[ x \preccurlyeq y \quad \text{and} \quad x \neq y. \]
Let \(\preccurlyeq_Y\) be a partial order on a set \(Y\). A pair \((Y, \preccurlyeq_Y)\) is called a \emph{poset} (a partially ordered set). A poset \((Y, \preccurlyeq_Y)\) is \emph{linear} (= \emph{totally ordered}) if, for all \(y_1\), \(y_2 \in Y\), we have \[ y_1 \preccurlyeq_Y y_2 \quad \text{or} \quad y_2 \preccurlyeq_Y y_1. \]
\begin{definition}\label{d3.5} Let \((Q, {\preccurlyeq}_Q)\) and \((L, {\preccurlyeq}_L)\) be posets. A mapping \(f \colon Q \to L\) is \emph{isotone} if, for all \(q_1\), \(q_2 \in Q\), we have \[ (q_1 \preccurlyeq_Q q_2) \Rightarrow (f(q_1) \preccurlyeq_L f(q_2)). \]
Let \(\Phi \colon X \to Y\) be an isotone mapping of posets \((X, {\preccurlyeq}_X)\) and \((Y, {\preccurlyeq}_Y)\). If \(\Phi\) is bijective and the inverse mapping \(\Phi^{-1} \colon Y \to X\) is also isotone, then we say that \((X, {\preccurlyeq}_X)\) and \((Y, {\preccurlyeq}_Y)\) are \emph{isomorphic} and \(\Phi\) is an (\emph{order}) \emph{isomorphism}. \end{definition}
If \((Y, \preccurlyeq_Y)\) is a poset, and \(Y_1 \subseteq Y\), and \(\preccurlyeq_{Y_1}\) is a partial order on \(Y_1\) such that, for all \(x\), \(y \in Y_1\), \[ (x \preccurlyeq_{Y_1} y) \Leftrightarrow (x \preccurlyeq_Y y), \] then we say that \((Y_1, \preccurlyeq_{Y_1})\) is a \emph{subposet} of the poset \((Y, \preccurlyeq_Y)\).
Write \(\mathbb{Q}^{+}\) for the set of all nonnegative rational numbers, \[ \mathbb{Q}^{+} = \mathbb{Q} \cap [0, +\infty), \] and let \(\leqslant\) be the usual ordering on \(\mathbb{Q}^{+}\).
\begin{lemma}[Cantor]\label{l3.2}
Let \((X, \preccurlyeq_X)\) be a totally ordered set and let \(|X| \leqslant \aleph_{0}\) hold. Then \((X, \preccurlyeq_X)\) is isomorphic to a subposet of \((\mathbb{Q}^{+}, \leqslant)\). \end{lemma}
The proof can be obtained directly from the classical Cantor's results (see, for example, \cite{Ros1982}, Chapter~2, Theorem~2.6 and Theorem~2.8).
We will also use the following Szpilrajn Theorem.
\begin{lemma}[Szpilrajn]\label{l3.3} Let \((X, \preccurlyeq_X)\) be a poset. Then there is a linear order \(\preccurlyeq\) on \(X\) such that \[ {\preccurlyeq_X} \subseteq {\preccurlyeq}. \] \end{lemma}
Informally speaking, this means that every partial order on a set can be extended to a linear order on the same set.
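For instance, the divisibility order on \(\{1, 2, 3\}\), in which \(1\) divides both \(2\) and \(3\) while \(2\) and \(3\) are incomparable, extends to the usual linear order \(1 \leqslant 2 \leqslant 3\), and also to the linear order in which \(1 \prec 3 \prec 2\).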
\begin{remark}\label{r3.9} This result was obtained by Edward Szpilrajn in~\cite{Szp1930FM}. Interesting reviews of Szpilrajn-type theorems can be found in~\cite{And2009} and \cite{BP1982}. \end{remark}
Let \(X\) be a nonempty set and let \(\Phi\) be a symmetric mapping with \(\dom\Phi = X^{2}\) and let \(Y := \Phi(X^{2})\). Let us define a binary relation \(u_{\Phi}\) by the rule: \(\<y_1, y_2> \in u_{\Phi}\) if and only if \(\<y_1, y_2> \in Y^{2}\) and there are \(x_1\), \(x_2\), \(x_3 \in X\) such that \begin{equation}\label{e2.16} y_1 = \Phi(x_1, x_3) \text{ and } y_2 = \Phi(x_1, x_2) = \Phi(x_2, x_3). \end{equation}
\begin{example}\label{ex3.20} Let \((X, d)\) be a nonempty ultrametric space. Recall that a subset \(B\) of \(X\) is a (closed) ball if there are \(x^{*} \in X\) and \(r^{*} \in \mathbb{R}^{+}\) such that \[ B = \{x \in X \colon d(x, x^{*}) \leqslant r^{*}\}. \] The diameter of \(B\), we denote it by \(\diam(B)\), is defined as \[ \diam(B) := \sup\{d(x,y) \colon x, y \in B\}. \] The following statements are equivalent for every \(\<r_1, r_2> \in \mathbb{R}^{+} \times \mathbb{R}^{+}\). \begin{itemize} \item \(\<r_1, r_2> \in u_d\). \item There are some balls \(B_1\) and \(B_2\) in \((X, d)\) such that \(B_1 \subseteq B_2\), and \(r_1 = \diam(B_1)\), and \(r_2 = \diam(B_2)\). \item There are some balls \(B_1\) and \(B_2\) in \((X, d)\) such that \(B_1 \cap B_2 \neq \varnothing\), and \(r_1 = \diam(B_1)\), \(r_2 = \diam(B_2)\), and \(r_1 \leqslant r_2\). \end{itemize} The interchangeability of these conditions is easy to justify using the known properties of balls in ultrametric spaces (see, for example, Proposition~1.2 and Proposition~1.6 in~\cite{Dov2019PNUAA}). \end{example}
\begin{theorem}\label{t3.7}
Let \(X\) be a nonempty set and let \(\Phi\) be a mapping with \(\dom \Phi = X^{2}\) and \(|\Phi(X^{2})| \leqslant \aleph_{0}\). Then the following conditions are equivalent. \begin{enumerate} \item[\((i)\)] \(\Phi\) is combinatorially similar to a pseudoultrametric \(d \colon X^{2} \to \mathbb{R}^{+}\) with \(d(X^{2}) \subseteq \mathbb{Q}^{+}\). \item[\((ii)\)] \(\Phi\) is combinatorially similar to a pseudoultrametric. \item[\((iii)\)] The mapping \(\Phi\) is symmetric, and the transitive closure \(u_{\Phi}^{t}\) of the binary relation \(u_{\Phi}\) is antisymmetric, and \(\Phi\) is \(a_0\)-coherent for a point \(a_0 \in \Phi(X^{2})\), and, for every triple \(\<x_1, x_2, x_3>\) of points of \(X\), there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \(\Phi(x_{i_1}, x_{i_2}) = \Phi(x_{i_2}, x_{i_3})\). \end{enumerate} \end{theorem}
\begin{proof} \((i) \Rightarrow (ii)\). This is trivially valid.
\((ii) \Rightarrow (iii)\). Suppose \(\Phi\) is combinatorially similar to a pseudoultrametric. Then \(\Phi\) is also combinatorially similar to a pseudometric. Consequently, by Theorem~\ref{ch2:p7}, \(\Phi\) is symmetric and there is \(a_0 \in \Phi(X^{2})\) such that \(\Phi\) is \(a_0\)-coherent. If \(\<x_1, x_2, x_3>\) is an arbitrary triple of points of \(X\), then, by Lemma~\ref{l3.1}, there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \(\Phi(x_{i_1}, x_{i_2}) = \Phi(x_{i_2}, x_{i_3})\). To complete the proof of validity of \((ii) \Rightarrow (iii)\) it suffices to show that the transitive closure \(u_{\Phi}^{t}\) of the binary relation \[ u_{\Phi} \subseteq Y^{2}, \quad Y = \Phi(X^{2}), \] is antisymmetric. Suppose, on the contrary, that there are distinct \(y_1\), \(y_2 \in Y\) such that \(\<y_1, y_2> \in u_{\Phi}^{t}\) and \(\<y_2, y_1> \in u_{\Phi}^{t}\). The definition of the transitive closure (see \eqref{e3.2}) and the definition of the composition of binary relations imply that there are a positive integer \(n_1\) and some points \[ y_{1}^{*},\ y_{2}^{*},\ \ldots,\ y_{n_1+1}^{*} \in Y \] with \begin{equation}\label{t3.7:e2} y_{1}^{*} = y_1 \quad \text{and} \quad y_{n_1+1}^{*} = y_2 \quad \text{and} \quad \<y_{i}^{*}, y_{i+1}^{*}> \in u_{\Phi} \end{equation} for \(i = 1\), \(\ldots\), \(n_1\). Since \(\Phi\) is combinatorially similar to a pseudoultrametric \(d \colon Z^2 \to \mathbb{R}^{+}\), there are bijections \[ g \colon Z \to X \text{ and } f \colon \Phi(X^{2}) \to d(Z^{2}) \] satisfying \[ d(z_1, z_2) = f(\Phi(g(z_1), g(z_2))) \] for all \(z_1\), \(z_2 \in Z\). Consequently, \[ d(g^{-1}(x_1), g^{-1}(x_2)) = f(\Phi(x_1, x_2)) \] holds for all \(x_1\), \(x_2 \in X\). As in Example~\ref{ex3.4}, the last equality, \eqref{e2.16}, \eqref{t3.7:e2}, and the strong triangle inequality imply \[ f(y_1) = f(y_{1}^{*}) \geqslant f(y_{2}^{*}) \geqslant \ldots \geqslant f(y_{n_1+1}^{*}) = f(y_{2}). \] Thus, the inequality \(f(y_1) \geqslant f(y_{2})\) holds. Similarly, we can obtain the inequality \(f(y_2) \geqslant f(y_{1})\).
Consequently, the equality \(f(y_1) = f(y_{2})\) holds, which contradicts the bijectivity of \(f\).
\((iii) \Rightarrow (i)\). Suppose \(\Phi\) satisfies condition \((iii)\). Let us define a binary relation \({\preccurlyeq}\) on \(Y = \Phi(X^{2})\) as \begin{equation}\label{t3.7:e3} {\preccurlyeq} := u_{\Phi}^{t} \cup \Delta_{Y}, \end{equation} where \(\Delta_{Y} = \{\<y, y> \colon y \in Y\}\). We claim that \({\preccurlyeq}\) is a partial order on \(Y\). Indeed, \eqref{t3.7:e3} implies that \({\preccurlyeq}\) is reflexive. By condition~\((iii)\), the transitive closure \(u_{\Phi}^{t}\) is antisymmetric. From this and \eqref{t3.7:e3} it follows that \({\preccurlyeq}\) is also antisymmetric. Moreover, using the transitivity of \(u_{\Phi}^{t}\) we obtain \begin{align*} (u_{\Phi}^{t} \cup \Delta_{Y})^2 &= (u_{\Phi}^{t} \circ u_{\Phi}^{t}) \cup (u_{\Phi}^{t} \circ \Delta_{Y}) \cup (\Delta_{Y} \circ u_{\Phi}^{t}) \cup (\Delta_{Y} \circ \Delta_{Y}) \\ & \subseteq u_{\Phi}^{t} \cup \Delta_{Y}. \end{align*} Consequently, \({\preccurlyeq}\) is transitive. Thus, \({\preccurlyeq}\) is a partial order as required.
By condition~\((iii)\), \(\Phi\) is \(a_0\)-coherent. We will show that \(a_0\) is the smallest element of the poset \((Y, {\preccurlyeq})\).
Let \(y_1\) be an arbitrary point of \(Y\). Then there are \(x_1\), \(x_2 \in X\) such that \(y_1 = \Phi(x_1, x_2)\). The mapping \(\Phi\) is symmetric. Thus, \begin{equation}\label{t3.7:e4} \Phi(x_1, x_2) = \Phi(x_2, x_1) \end{equation} holds. Since \(\Phi\) is \(a_0\)-coherent, we have \begin{equation}\label{t3.7:e5} \Phi(x_1, x_1) = a_0. \end{equation} Using~\eqref{t3.7:e4}, \eqref{t3.7:e5} and the definition of \(u_{\Phi}\) we obtain \(\<a_0, y_1> \in u_{\Phi}\) for every \(y_1 \in Y\), as required.
Write \(\preccurlyeq_{0}\) for the intersection of \({\preccurlyeq}\) with the set \(Y_0^{2}\), where \[ Y_0 = \{y \in Y \colon y \neq a_0\}. \] Then \(\preccurlyeq_{0}\) is a partial order on the set \(Y_0\). By Lemma~\ref{l3.3}, there is a linear order \(\preccurlyeq^{*}\) on \(Y_0\) such that \[ {\preccurlyeq_{0}} \subseteq {\preccurlyeq^{*}}. \]
The inequality \(|Y| \leqslant \aleph_{0}\) implies \(|Y_0| \leqslant \aleph_{0}\). Using Lemma~\ref{l3.2} we can find an injective mapping \(f^{*} \colon Y \to \mathbb{Q}^{+}\) such that \(f^{*}(a_0) = 0\) and \[ (y_1 \preccurlyeq^{*} y_2) \Leftrightarrow (f^{*}(y_1) \leqslant f^{*}(y_2)) \] for all \(y_1\), \(y_2 \in Y\). Then the function \(d \colon X^{2} \to \mathbb{R}^{+}\), \[ d(x_1, x_2) = f^{*}(\Phi(x_1, x_2)), \quad x_1, x_2 \in X, \] is a pseudoultrametric on \(X\) and \(d(X^{2}) \subseteq \mathbb{Q}^{+}\) holds. Since the function \(f^{*}\) is injective, the identical mapping \(X \xrightarrow{\operatorname{id}} X\) is a combinatorial similarity. \end{proof}
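The construction in the proof of \((iii) \Rightarrow (i)\) is effective for finite data. The following Python sketch is an illustration only: the set \(X\) and the values of the mapping \(\Phi\) are hypothetical, and \(u_{\Phi}\) is taken, as in the arguments above, to relate the odd value of each isosceles triple to its repeated value. The sketch builds \(u_{\Phi}\), checks the antisymmetry of its transitive closure, linearly extends the induced order on \(Y_0\), and verifies that the resulting rational-valued function is a pseudoultrametric.

```python
from fractions import Fraction
from itertools import product

# A toy symmetric, 'o'-coherent mapping Phi on X = {0,1,2,3}; the labels
# 'o','a','b','c' are hypothetical values, chosen so that condition (iii) holds.
X = range(4)
Phi = {(i, i): 'o' for i in X}
Phi[0,1] = Phi[1,0] = 'a'
for i in (0, 1):
    Phi[i,2] = Phi[2,i] = 'b'
for i in (0, 1, 2):
    Phi[i,3] = Phi[3,i] = 'c'
Y = set(Phi.values())

# u_Phi: <odd value, repeated value> over all triples with two equal sides.
u = {(Phi[x1,x3], Phi[x1,x2])
     for x1, x2, x3 in product(X, repeat=3) if Phi[x1,x2] == Phi[x2,x3]}

# Transitive closure (Warshall) and the antisymmetry test.
ut = set(u)
for k in Y:
    for a in Y:
        for b in Y:
            if (a, k) in ut and (k, b) in ut:
                ut.add((a, b))
antisym = all(not ((a, b) in ut and (b, a) in ut)
              for a in Y for b in Y if a != b)

# A linear extension of the induced order on Y0 = Y \ {'o'} (sorting by the
# number of predecessors works for this small chain-like example), followed
# by an injection into the nonnegative rationals with f('o') = 0.
Y0 = sorted(Y - {'o'}, key=lambda y: sum((z, y) in ut for z in Y))
f = {'o': Fraction(0)}
f.update({y: Fraction(k + 1) for k, y in enumerate(Y0)})

d = {(x, y): f[Phi[x,y]] for x, y in product(X, repeat=2)}

# d is a pseudoultrametric: symmetric, zero on the diagonal, strong triangle.
ok = all(d[x,y] == d[y,x] and d[x,x] == 0
         and d[x,z] <= max(d[x,y], d[y,z])
         for x, y, z in product(X, repeat=3))
print(antisym, ok)
```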
Using Theorem~\ref{t3.7} and Corollary~\ref{c3.2} we also obtain the following.
\begin{corollary}\label{c3.8}
Let \(X\) be a nonempty set. The following conditions are equivalent for every mapping \(\Phi\) with \(\dom \Phi = X^{2}\) and \(|\Phi(X^{2})| \leqslant \aleph_{0}\). \begin{enumerate} \item[\((i)\)] \(\Phi\) is combinatorially similar to an ultrametric \(d \colon X^{2} \to \mathbb{R}^{+}\) satisfying the inclusion \(d(X^{2}) \subseteq \mathbb{Q}^{+}\). \item[\((ii)\)] \(\Phi\) is combinatorially similar to an ultrametric. \item[\((iii)\)] \(\Phi\) is symmetric, and the transitive closure \(u_{\Phi}^{t}\) of the binary relation \(u_{\Phi}\) is antisymmetric, and the equality \[ \Phi^{-1}(a_0) = \Delta_{X} \] holds for some \(a_0 \in \Phi(X^{2})\), and, for every triple \(\<x_1, x_2, x_3>\) of points of \(X\), there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \(\Phi(x_{i_1}, x_{i_2}) = \Phi(x_{i_2}, x_{i_3})\). \end{enumerate} \end{corollary}
\begin{example}\label{ex3.9} A four-point metric space \((X, d)\) is called a \emph{pseudolinear quadruple} (see~\cite{Blu1953CP} for instance) if, for a suitable enumeration of points of \(X\), we have \begin{multline}\label{ex3.9:e1} d(x_1, x_2) = d(x_3, x_4) = s, \quad d(x_2, x_3) = d(x_4, x_1) = t, \\ d(x_2, x_4) = d(x_3, x_1) = s + t, \end{multline} with some positive reals \(s\) and \(t\). For a pseudolinear quadruple \((X, d)\), Corollary~\ref{c3.8} implies that the metric \(d \colon X^{2} \to \mathbb{R}^{+}\) is combinatorially similar to an ultrametric if and only if \((X, d)\) is ``equilateral'', i.e., \eqref{ex3.9:e1} holds with \(s= t\) (see Figure~\ref{fig2}). \begin{figure}
\caption{Each equilateral, pseudolinear quadruple is (up to similarity) a subspace \(\{x_1, x_2, x_3, x_4\}\) of the unit circle endowed with the shortest path metric.}
\label{fig2}
\end{figure} \end{example}
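The dichotomy described in this example can be tested mechanically. The sketch below (in Python; the point labels \(0,\dots,3\) stand for \(x_1,\dots,x_4\)) checks, for pseudolinear quadruples with \(s = t\) and with \(s \neq t\), whether every triple of points has at least two equal pairwise distances, which is the isosceles condition required by Corollary~\ref{c3.8}.

```python
from itertools import combinations

def quadruple(s, t):
    # Pseudolinear quadruple on points 0..3 with parameters s and t.
    d = {}
    vals = {(0,1): s, (2,3): s, (1,2): t, (3,0): t, (1,3): s+t, (2,0): s+t}
    for (i, j), v in vals.items():
        d[i,j] = d[j,i] = v
    for i in range(4):
        d[i,i] = 0
    return d

def isosceles_everywhere(d, X=range(4)):
    # Every triple must contain at least two equal pairwise distances.
    return all(len({d[a,b], d[b,c], d[c,a]}) < 3
               for a, b, c in combinations(X, 3))

print(isosceles_everywhere(quadruple(1, 1)))   # equilateral case
print(isosceles_everywhere(quadruple(1, 2)))   # s != t
```

For \(s \neq t\) every triple has the three pairwise distinct distances \(s\), \(t\), \(s+t\), so the condition fails.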
\begin{remark}\label{r3.10} The pseudolinear quadruples appeared for the first time in the paper of Menger~\cite{Men1928MA}. According to Menger, the pseudolinear quadruples are characterized as the metric spaces which are not isometric to any subset of \(\mathbb{R}\), but such that every triple of whose points embeds isometrically into \(\mathbb{R}\). There is also an elementary proof of this fact \cite{DD2009UMZ}. It is interesting to note that the equilateral, pseudolinear quadruples are the ``most non-Ptolemaic'' metric spaces~\cite{DP2011SMJ}. \end{remark}
For what follows we need a specification of the concept of ultrametric distances introduced above in Definition~\ref{d1.3}.
\begin{definition}\label{d3.11} Let \((Q, \preccurlyeq_Q)\) be a poset with a smallest element \(q_0\) and let \(X\) be a nonempty set. A mapping \(d \colon X^2 \to Q\) is a \(\preccurlyeq_Q\)-\emph{pseudo\-ultra\-metric} if \(d\) is symmetric and \(d(x, x) = q_0\) holds for every \(x \in X\) and, in addition, for every triple \(\<x_1, x_2, x_3>\) of points of \(X\), there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \begin{equation}\label{d3.11:e1} d(x_{i_1}, x_{i_3}) \preccurlyeq_Q d(x_{i_1}, x_{i_2}) \quad \text{and} \quad d(x_{i_1}, x_{i_2}) = d(x_{i_2}, x_{i_3}). \end{equation} For \(\preccurlyeq_{Q}\)-pseudoultrametric \(d\), satisfying \(d(x, y) = q_0\) if and only if \(x = y\), we say that \(d\) is a \(\preccurlyeq_{Q}\)-\emph{ultrametric}. \end{definition}
If there is no ambiguity in the choice of the order \(\preccurlyeq_{Q}\) we write ``\(d\) is a \(Q\)-pseudoultrametric'' instead of ``\(d\) is a \(\preccurlyeq_Q\)-pseudoultrametric''.
\begin{remark}\label{r3.13} It is easy to prove that every ultrametric is a \(\leqslant\)-ultra\-metric for \((\mathbb{R}^{+}, \leqslant)\). Moreover, every \({\preccurlyeq}_{Q}\)-ultrametric is an ultrametric distance with the same \((Q, {\preccurlyeq}_Q)\) but not conversely (see, in particular, Example~\ref{ex3.26} at the end of the present section). For all totally ordered sets \(Q\), the ultrametric distances coincide with \(Q\)-ultrametrics, and with generalized ultrametrics defined by Priess-Crampe \cite{Pri1990RiM}. \end{remark}
The following proposition is an extension of Proposition~\ref{c2.2} to the case of an arbitrary \(Q\)-pseudoultrametric.
\begin{proposition}\label{p3.12} Let \(X\) be a nonempty set and \((Q, \preccurlyeq_Q)\) be a poset with the smallest element \(q_0\) and let \(d \colon X^2 \to Q\) be a \(Q\)-pseudo\-ultra\-metric on \(X\). Then \(d^{-1}(q_0)\) is an equivalence relation on \(X\) and the mapping \(d\) is \(q_0\)-coherent. \end{proposition}
\begin{proof} It follows directly from Definition~\ref{d3.11} that \(d^{-1}(q_0)\) is reflexive. To prove that \(d^{-1}(q_0)\) is symmetric it suffices to note that the mapping \(d \colon X^{2} \to Q\) is symmetric, because, for each mapping \(\Phi\) with \[ \dom \Phi = X^{2}, \] \(\Phi\) is symmetric if and only if \(\Phi^{-1}(b)\) is a symmetric binary relation for every \(b \in \Phi(X^{2})\). Thus, \(d^{-1}(q_0)\) is an equivalence relation if and only if \(d^{-1}(q_0)\) is transitive.
Let \(\<x_1, x_2>\) and \(\<x_2, x_3>\) belong to \(X^{2}\) and let \begin{equation}\label{p3.12:e1} d(x_1, x_2) = d(x_2, x_3) = q_0. \end{equation} We claim that \(d(x_1, x_3) = q_0\) holds. Indeed, by Definition~\ref{d3.11}, there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that~\eqref{d3.11:e1} holds. From~\eqref{p3.12:e1} and \eqref{d3.11:e1} it follows that \begin{equation}\label{p3.12:e2} d(x_{i_1}, x_{i_2}) = d(x_{i_2}, x_{i_3}) = q_0. \end{equation} Using \eqref{d3.11:e1} again we see that \eqref{p3.12:e2} implies \begin{equation}\label{p3.12:e3} d(x_{i_1}, x_{i_3}) \preccurlyeq_Q q_0. \end{equation} Since \(q_0\) is the smallest element of \((Q, \preccurlyeq_Q)\), inequality \eqref{p3.12:e3} implies \begin{equation}\label{p3.12:e4} d(x_{i_1}, x_{i_3}) = q_0. \end{equation} The equality \(d(x_1, x_3) = q_0\) follows from \eqref{p3.12:e4} and \eqref{p3.12:e2}. Thus, \(d^{-1}(q_0)\) is transitive.
Now we need to prove that \(d\) is \(q_0\)-coherent. The mapping \(d\) is symmetric. Hence, by Corollary~\ref{c2.5}, it suffices to show that \begin{equation}\label{p3.12:e5} d^{-1}(q_1) = d^{-1}(q_1) \circ d^{-1}(q_0) \end{equation} for every \(q_1 \in d(X^{2})\). Let \(q_1 \in d(X^{2})\). We have \[ d^{-1}(q_1) \subseteq d^{-1}(q_1) \circ d^{-1}(q_0), \] because \(d^{-1}(q_0)\) is reflexive. The converse inclusion \begin{equation}\label{p3.12:e6} d^{-1}(q_1) \supseteq d^{-1}(q_1) \circ d^{-1}(q_0) \end{equation} holds if and only if, for all \(x_1\), \(x_2\), \(x_3 \in X\), we have \begin{equation}\label{p3.12:e7} \<x_1, x_3> \in d^{-1}(q_1) \end{equation} whenever \(\<x_1, x_2> \in d^{-1}(q_1)\) and \(\<x_2, x_3> \in d^{-1}(q_0)\). If \(q_1 = q_0\), then \eqref{p3.12:e6} holds, since \(d^{-1}(q_0)\) is an equivalence relation. Suppose \[ q_1 \neq q_0. \] Write \(q_2 := d(x_1, x_3)\). If \(q_2 = q_1\), then \eqref{p3.12:e7} follows from \(\<x_1, x_3> \in d^{-1}(q_2)\). Consequently, if \eqref{p3.12:e7} is false, then we have \begin{equation}\label{p3.12:e8} q_2 \neq q_1 \neq q_0. \end{equation} The equality \(q_2 = q_0\) implies \begin{equation}\label{p3.12:e9} \<x_1, x_3> \in d^{-1}(q_0). \end{equation} Since \(d^{-1}(q_0)\) is an equivalence relation, \eqref{p3.12:e9} and \(\<x_2, x_3> \in d^{-1}(q_0)\) imply \(\<x_1, x_2> \in d^{-1}(q_0)\), i.e., \(q_1 = q_0\), contrary to \eqref{p3.12:e8}. Thus, \(q_0\), \(q_1\) and \(q_2\) are pairwise distinct, which contradicts~\eqref{d3.11:e1}. \end{proof}
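Proposition~\ref{p3.12} admits a direct finite verification. The following Python sketch uses a hypothetical \(Q\)-pseudoultrametric (with \(Q\) totally ordered for simplicity, the values \(q_0 < q_1 < q_2\) encoded as \(0 < 1 < 2\)) and checks both conclusions: that \(d^{-1}(q_0)\) is an equivalence relation and that \(d^{-1}(q_1) = d^{-1}(q_1) \circ d^{-1}(q_0)\) for every value \(q_1\).

```python
from itertools import product

# A small Q-pseudoultrametric on X = {0,1,2,3}; points 0 and 1 lie at
# "distance" q0, so d^{-1}(q0) is a nontrivial equivalence relation.
X = range(4)
d = {(i, i): 0 for i in X}
d[0,1] = d[1,0] = 0
for i in (0, 1):
    d[i,2] = d[2,i] = 1
    d[i,3] = d[3,i] = 2
d[2,3] = d[3,2] = 2

def preim(q):
    return {(x, y) for x, y in product(X, repeat=2) if d[x,y] == q}

E = preim(0)
reflexive  = all((x, x) in E for x in X)
symmetric  = all((y, x) in E for x, y in E)
transitive = all((x, z) in E for x, y in E for y2, z in E if y == y2)

def compose(R, S):
    # Composition of binary relations: (x,z) with (x,y) in R and (y,z) in S.
    return {(x, z) for x, y in R for y2, z in S if y == y2}

# q0-coherence: d^{-1}(q1) = d^{-1}(q1) o d^{-1}(q0) for every value q1.
coherent = all(preim(q) == compose(preim(q), E) for q in set(d.values()))
print(reflexive and symmetric and transitive, coherent)
```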
\begin{corollary}\label{c3.17} Let \(X\) be a nonempty set and \((Q, \preccurlyeq_Q)\) be a poset and let \(d \colon X^{2} \to Q\) be a \(Q\)-pseudoultrametric (\(Q\)-ultrametric) on \(X\). Then the following statements are valid. \begin{enumerate}
\item [\((i)\)] If \(|d(X^2)| \leqslant 2^{\aleph_{0}}\) holds, then \(d\) is combinatorially similar to a usual pseudometric (metric).
\item [\((ii)\)] If \(|d(X^2)| \leqslant \aleph_{0}\) holds, then \(d\) is combinatorially similar to a usual pseudoultrametric (ultrametric). \end{enumerate} \end{corollary}
\begin{proof} Suppose first that \(d\) is a \(Q\)-pseudoultrametric.
\((i)\). If \(|d(X^2)| \leqslant 2^{\aleph_{0}}\) holds, then Definition~\ref{d3.11} and Proposition~\ref{p3.12} imply condition \((ii)\) of Theorem~\ref{ch2:p7}. Thus, \((i)\) is valid by Theorem~\ref{ch2:p7}.
\((ii)\). Analogously, using Definition~\ref{d3.11} we can show that condition \((iii)\) of Theorem~\ref{t3.7} is valid for \(\Phi = d\). Thus, \((ii)\) follows from Theorem~\ref{t3.7}.
The case when \(d\) is a \(Q\)-ultrametric can be considered similarly. \end{proof}
The next theorem is a partial generalization of Theorem~\ref{t3.7}.
\begin{theorem}\label{t3.15} Let \(X\) be a nonempty set and let \(\Phi\) be a mapping with \(\dom \Phi = X^{2}\). Then the following conditions are equivalent. \begin{enumerate} \item[\((i)\)] There is a totally ordered set \(Q\) such that \(\Phi\) is combinatorially similar to a \(Q\)-pseudoultrametric. \item[\((ii)\)] There is a poset \(Q\) such that \(\Phi\) is combinatorially similar to a \(Q\)-pseudoultrametric. \item[\((iii)\)] The mapping \(\Phi\) is symmetric, and the transitive closure \(u_{\Phi}^{t}\) of the binary relation \(u_{\Phi}\) is antisymmetric, and there is \(a_0 \in \Phi(X^{2})\) for which \(\Phi\) is \(a_0\)-coherent, and, for every triple \(\<x_1, x_2, x_3>\) of points of \(X\), there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \(\Phi(x_{i_1}, x_{i_2}) = \Phi(x_{i_2}, x_{i_3})\). \item[\((iv)\)] There is \(b_0 \in \Phi(X^{2})\) such that \(\Phi(x,x) = b_0\) holds for every \(x \in X\), and the binary relation \begin{equation}\label{t3.15:e1} {\preccurlyeq}_{\Phi} := u_{\Phi}^{t} \cup \Delta_{\Phi(X^{2})} \end{equation} is a partial order on \(\Phi(X^{2})\), and \(b_0\) is the smallest element of \((\Phi(X^{2}), {\preccurlyeq}_{\Phi})\), and \(\Phi\) is a \( {\preccurlyeq}_{\Phi}\)-pseudo\-ultra\-metric on \(X\). \end{enumerate} \end{theorem}
\begin{proof} The implication \((i) \Rightarrow (ii)\) is trivially valid. The validity of \((ii) \Rightarrow (iii)\) can be verified by repeating the first part of the proof of Theorem~\ref{t3.7} with the reference to Theorem~\ref{ch2:p7} replaced by a reference to Proposition~\ref{p3.12}. It should be noted that Lemma~\ref{l3.1} remains valid if \(\Phi\) is combinatorially similar to an arbitrary \(Q\)-pseudoultrametric.
\((iii) \Rightarrow (iv)\). Let \((iii)\) hold. Then \(u_{\Phi}^{t}\) is antisymmetric and transitive. Consequently, the relation \({\preccurlyeq}_{\Phi}\) is reflexive, antisymmetric and transitive, i.e., \({\preccurlyeq}_{\Phi}\) is a partial order on \(\Phi(X^{2})\). Since \(\Phi\) is \(a_0\)-coherent, the equality \(\Phi(x,x) = a_0\) holds for every \(x \in X\).
The point \(a_0\) is the smallest element of \((\Phi(X^{2}), {\preccurlyeq}_{\Phi})\) if and only if the inequality \begin{equation}\label{t3.15:e2} a_0 \preccurlyeq_{\Phi} \Phi(x, y) \end{equation} holds for all \(x\), \(y \in X\). To prove \eqref{t3.15:e2} we consider the triple \(\<y, x, y>\) and note that \(\Phi(x, y) = \Phi(y,x)\). Consequently, \(\<\Phi(x,x), \Phi(x,y)>\) belongs to \(u_{\Phi}\). Now \eqref{t3.15:e2} follows from \eqref{t3.15:e1}.
By condition \((iii)\), \(\Phi(x,x) = a_0\) holds for every \(x \in X\) and, for every triple \(\<x_1, x_2, x_3>\) of points of \(X\), there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \(\Phi(x_{i_1}, x_{i_2}) = \Phi(x_{i_2}, x_{i_3})\). The mapping \(\Phi\) is symmetric. Hence, \(\Phi\) is a \({\preccurlyeq}_{\Phi}\)-pseudoultrametric on \(X\) as required.
\((iv) \Rightarrow (i)\). Let \((iv)\) hold. Then \(\Phi\) is a \({\preccurlyeq}_{\Phi}\)-pseudoultrametric. By Lemma~\ref{l3.3} (Szpilrajn) the partial order \({\preccurlyeq}_{\Phi}\) can be extended to a linear order \({\preccurlyeq}\) on \(\Phi(X^{2})\). It is easy to see that the smallest element \(a_0\) of \((\Phi(X^{2}), {\preccurlyeq}_{\Phi})\) is also the smallest element of \((\Phi(X^{2}), {\preccurlyeq})\). Thus, \(\Phi\) is also a \({\preccurlyeq}\)-pseudoultrametric. Condition \((i)\) follows. \end{proof}
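Theorem~\ref{t3.15} can be traced on the equilateral pseudolinear quadruple of Example~\ref{ex3.9}. In the Python sketch below (an illustration, with \(u_{\Phi}\) taken, as above, to relate the odd value of an isosceles triple to its repeated value) the metric values are \(0\), \(1\), \(2\); under the order \({\preccurlyeq}_{\Phi}\) of condition \((iv)\) the value \(2\) lies below \(1\), and \(\Phi\) becomes a \({\preccurlyeq}_{\Phi}\)-pseudoultrametric although it is not an ultrametric in the usual sense.

```python
from itertools import product

# The equilateral pseudolinear quadruple (s = t = 1) as a mapping Phi.
X = range(4)
Phi = {(i, i): 0 for i in X}
for (i, j), v in {(0,1): 1, (2,3): 1, (1,2): 1, (3,0): 1,
                  (1,3): 2, (2,0): 2}.items():
    Phi[i,j] = Phi[j,i] = v
Y = set(Phi.values())

u = {(Phi[a,c], Phi[a,b])
     for a, b, c in product(X, repeat=3) if Phi[a,b] == Phi[b,c]}
order = set(u) | {(y, y) for y in Y}          # <=_Phi = u^t ∪ Δ
for k in Y:                                    # transitive closure
    order |= {(a, b) for a, k1 in order for k2, b in order
              if k1 == k and k2 == k}

smallest = all((0, y) in order for y in Y)     # 0 is the smallest element
inversion = (2, 1) in order and (1, 2) not in order

def ok(a, b, c):
    # In every triple two values coincide and the odd one is below in <=_Phi.
    vals = [Phi[a,b], Phi[b,c], Phi[c,a]]
    return any(vals[(i+1) % 3] == vals[(i+2) % 3]
               and (vals[i], vals[(i+1) % 3]) in order
               for i in range(3))

pseudoultra = all(ok(a, b, c) for a, b, c in product(X, repeat=3))
print(smallest, inversion, pseudoultra)
```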
\begin{corollary}\label{c3.19} Let \(X\) be a nonempty set and let \(\Phi\) be a mapping with \(\dom \Phi = X^{2}\). Then the following conditions are equivalent. \begin{enumerate} \item[\((i)\)] There is a totally ordered set \(Q\) such that \(\Phi\) is combinatorially similar to a \(Q\)-ultra\-metric. \item[\((ii)\)] There is a poset \(Q\) such that \(\Phi\) is combinatorially similar to a \(Q\)-ultra\-metric. \item[\((iii)\)] The mapping \(\Phi\) is symmetric, and the transitive closure \(u_{\Phi}^{t}\) of the binary relation \(u_{\Phi}\) is antisymmetric, and there is \(a_0 \in \Phi(X^{2})\) for which \(\Phi^{-1}(a_0) = \Delta_{X}\) holds, and, for every triple \(\<x_1, x_2, x_3>\) of points of \(X\), there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \(\Phi(x_{i_1}, x_{i_2}) = \Phi(x_{i_2}, x_{i_3})\). \item[\((iv)\)] There is \(b_0 \in \Phi(X^{2})\) such that \(\Phi^{-1}(b_0) = \Delta_{X}\) holds, and the binary relation \[ {\preccurlyeq}_{\Phi} := u_{\Phi}^{t} \cup \Delta_{\Phi(X^{2})} \] is a partial order on \(\Phi(X^{2})\), and \(b_0\) is the smallest element of \((\Phi(X^{2}), {\preccurlyeq}_{\Phi})\), and \(\Phi\) is a \( {\preccurlyeq}_{\Phi}\)-ultra\-metric on \(X\). \end{enumerate} \end{corollary}
The next corollary follows from Corollary~\ref{c3.8} and Corollary~\ref{c3.19}.
\begin{corollary}\label{c3.20}
Let \((Q, {\preccurlyeq}_{Q})\) be a poset with a smallest element, let \(X\) be a nonempty set and let \(d \colon X^{2} \to Q\) be an ultrametric distance in the sense of Priess-Crampe and Ribenboim. If the inequality \(|Q| \leqslant \aleph_{0}\) holds, then the following conditions are equivalent. \begin{enumerate} \item [\((i)\)] The mapping \(d\) is a \(Q\)-ultra\-metric. \item [\((ii)\)] There is a usual ultrametric \(\rho \colon X^{2} \to \mathbb{R}^{+}\) such that \(d\) and \(\rho\) are combinatorially similar. \end{enumerate} \end{corollary}
The following proposition guarantees, for a given \(Q\)-pseudo\-ultra\-metric \(d\), the existence of the weakest partial order on \(Q\) under which \(d\) remains a \(Q\)-pseudo\-ultra\-metric.
\begin{proposition}\label{p3.17} Let \(X\) be a nonempty set, \((Q, {\preccurlyeq}_{Q})\) be a poset and let \(d \colon X^{2} \to Q\) be a \({\preccurlyeq}_{Q}\)-pseudoultrametric. Then there is a unique partial order \({\preccurlyeq}_{Q}^{0}\) on \(Q\) such that \(d\) is a \({\preccurlyeq}_{Q}^{0}\)-pseudoultrametric and the inclusion \[ {\preccurlyeq}_{Q}^{0} \subseteq {\preccurlyeq} \] holds whenever \({\preccurlyeq}\) is a partial order on \(Q\) for which \(d\) is a \({\preccurlyeq}\)-pseudo\-ultrametric. \end{proposition}
\begin{proof} The uniqueness of \({\preccurlyeq}_{Q}^{0}\) satisfying the desired conditions is clear. For the proof of existence of \({\preccurlyeq}_{Q}^{0}\), let \(\mathcal{F} = \{{\preccurlyeq}_i \colon i \in I\}\) be the family of all partial orders \({\preccurlyeq}_i\) on \(Q\) for which \(d\) is a \({\preccurlyeq}_i\)-pseudoultrametric. The family \(\mathcal{F}\) is non-void because \({\preccurlyeq}_{Q} \in \mathcal{F}\). Let us define a binary relation \({\preccurlyeq}_{Q}^{0}\) as the intersection of all \({\preccurlyeq}_i\), i.e., for \(p\), \(q \in Q\), \[ (\<p,q> \in {\preccurlyeq}_{Q}^{0}) \Leftrightarrow (p \preccurlyeq_i q \text{ holds for every } i \in I). \] Then \({\preccurlyeq}_{Q}^{0}\) is a partial order on \(Q\). Since \(d\) is a \({\preccurlyeq}_{Q}\)-pseudoultrametric, the poset \((Q, {\preccurlyeq}_{Q})\) has a smallest element \(q_0\) by definition. It is easy to prove that \(q_0\) is the common smallest element of all posets \((Q, {\preccurlyeq}_i)\), \(i \in I\).
Indeed, since \(d\) is a \({\preccurlyeq}_{Q}\)-pseudoultrametric, we have \(d(x, x) = q_0\). In addition, since, for arbitrary \(i^{*} \in I\), the mapping \(d\) is a \({\preccurlyeq}_{i^*}\)-pseudo\-ultrametric, we also have \[ d(x, x) = q_0^{*}, \] where \(q_0^{*}\) is the smallest element of \((Q, {\preccurlyeq}_{i^*})\). That implies \(q_0^{*} = q_0\).
Consequently, \(q_0\) is the smallest element of \((Q, {\preccurlyeq}_{Q}^{0})\).
Hence, to prove that \(d\) is a \({\preccurlyeq}_{Q}^{0}\)-pseudoultrametric it suffices to show that for every triple \(\<x_1, x_2, x_3>\) of points of \(X\) there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \begin{equation}\label{p3.17:e0} d(x_{i_1}, x_{i_3}) \preccurlyeq_{Q}^{0} d(x_{i_1}, x_{i_2}) \text{ and } d(x_{i_1}, x_{i_2}) = d(x_{i_2}, x_{i_3}). \end{equation} Condition~\eqref{p3.17:e0} evidently holds if \begin{equation}\label{p3.17:e1} d(x_1, x_2) = d(x_2, x_3) = d(x_3, x_1). \end{equation} If \eqref{p3.17:e1} does not hold, then we may set, for definiteness, that \begin{equation}\label{p3.17:e2} d(x_1, x_2) = d(x_2, x_3) \neq d(x_1, x_3). \end{equation} (The case when \(d(x_1, x_2)\), \(d(x_2, x_3)\) and \(d(x_1, x_3)\) are pairwise distinct is impossible because \(d\) is a \({\preccurlyeq}_Q\)-pseudoultrametric.) Using \eqref{p3.17:e2} and \eqref{d3.11:e1} we obtain \begin{equation}\label{p3.17:e3} d(x_1, x_3) \preccurlyeq_i d(x_1, x_2) \text{ and } d(x_1, x_2) = d(x_2, x_3) \end{equation} for every \(i \in I\), that, together with the equality \[ {\preccurlyeq}_{Q}^{0} = \bigcap_{i \in I} {\preccurlyeq}_{i}, \] implies \[ d(x_1, x_3) \preccurlyeq_{Q}^{0} d(x_1, x_2) \text{ and } d(x_1, x_2) = d(x_2, x_3). \qedhere \] \end{proof}
\begin{lemma}\label{l3.18} Let \(X\) be a nonempty set, \((Q, {\preccurlyeq}_{Q})\) be a poset and let \(d \colon X^{2} \to Q\) be a \({\preccurlyeq}_{Q}\)-pseudo\-ultrametric with \(d(X^{2}) = Q\). Then the equality \begin{equation}\label{l3.18:e1} {\preccurlyeq}_{Q}^{0} = (u_d^t \cup \Delta_{Q}) \end{equation} holds, where \(\Delta_{Q} := \{\<q,q> \colon q \in Q\}\). \end{lemma}
\begin{proof} As in the second part of the proof of Theorem~\ref{t3.7} we see that \(u_{d}^{t} \cup \Delta_{Q}\) is reflexive and transitive. Using \({\preccurlyeq}_{Q}\) instead of \(\leqslant\) and arguing as in the first part of that proof we obtain the antisymmetry of \(u_{d}^{t} \cup \Delta_{Q}\). Consequently, \(u_{d}^{t} \cup \Delta_{Q}\) is a partial order on \(Q\).
Let \({\preccurlyeq}\) be an arbitrary partial order on \(Q\) for which \(d\) is a \({\preccurlyeq}\)-pseudo\-ultrametric. Then, using Definition~\ref{d3.11} and the definition of \(u_d\), we see that \[ u_{d} \subseteq {\preccurlyeq}. \] The last inclusion implies \[ (u_{d}^{t} \cup \Delta_{Q}) \subseteq ({\preccurlyeq}^{t} \cup \Delta_{Q}) = {\preccurlyeq}. \] Consequently, \({\preccurlyeq}_{Q}^{0} \supseteq (u_{d}^{t} \cup \Delta_{Q})\) holds.
From the definition of the relation \(u_{d}\), Definition~\ref{d3.11} and the fact that \(d\) is \({\preccurlyeq}_{Q}\)-pseudo\-ultrametric it follows that \(d\) is a \((u_{d}^{t} \cup \Delta_{Q})\)-pseudo\-ultra\-metric. Thus, equality~\eqref{l3.18:e1} holds. \end{proof}
\begin{remark}\label{r3.18} Equality~\eqref{l3.18:e1} does not hold if \(d(X^{2}) \neq Q\). Indeed, if \(q_1 \in Q \setminus d(X^{2})\), then no pair of \(u_d^t\) contains \(q_1\), which implies \[ \<q, q_1> \notin (u_d^t \cup \Delta_{Q}) \] for every \(q \in Q \setminus \{q_1\}\). Consequently, the poset \((Q, u_d^t \cup \Delta_{Q})\) does not have any smallest element. The last statement contradicts \eqref{l3.18:e1}, because the smallest element \(q_0 \in d(X^{2})\) of \((Q, {\preccurlyeq}_Q)\) is also the smallest element of \((Q, {\preccurlyeq}_Q^{0})\). \end{remark}
Results of the present section are based on the fact that, for all posets \((Q, {\preccurlyeq}_Q)\) and \((L, {\preccurlyeq}_L)\) with the smallest elements \(q_0 \in Q\) and \(l_0 \in L\), for every isotone injection \(f \colon Q \to L\) satisfying the condition \(f(q_0) = l_0\), and for each \(Q\)-pseudoultrametric \(d\), the mappings \(d\) and \(f \circ d\) are combinatorially similar. Moreover, in this case the transformation \(d \mapsto f \circ d\) converts the \(Q\)-pseudoultrametrics into \(L\)-pseudoultrametrics.
\begin{proposition}\label{p3.23} Let \((Q, {\preccurlyeq}_Q)\) and \((L, {\preccurlyeq}_L)\) be posets with the smallest elements \(q_0 \in Q\) and \(l_0 \in L\). The following conditions are equivalent for every mapping \(f \colon Q \to L\). \begin{enumerate} \item [\((i)\)] \(f \circ d\) is an \(L\)-pseudoultrametric whenever \(d\) is a \(Q\)-pseudo\-ultra\-metric. \item [\((ii)\)] \(f \circ d\) is an \(L\)-pseudoultrametric whenever \(d\) is a \(Q\)-ultra\-metric. \item [\((iii)\)] \(f\) is isotone and \(f(q_0) = l_0\) holds. \end{enumerate} \end{proposition}
\begin{proof} \((i) \Rightarrow (ii)\). This is evidently valid.
\((ii) \Rightarrow (iii)\). Suppose statement \((ii)\) is valid. Then, for every \(Q\)-ultrametric space \((X, d)\) and for every \(x \in X\), the equalities \[ f(q_0) = f(d(x, x)) = l_0 \] hold. Let \(q_1\), \(q_2 \in Q\) be such that \(q_1 \preccurlyeq_Q q_2\). We must prove the inequality \begin{equation}\label{p3.23:e1} f(q_1) \preccurlyeq_L f(q_2). \end{equation} This is trivial if \(f(q_1) = f(q_2)\). It is also trivial if \(q_1 = q_0\), because in this case \(f(q_1) = l_0\) is the smallest element of \((L, {\preccurlyeq}_L)\). Suppose \(f(q_1) \neq f(q_2)\), \(q_1 \neq q_0\) and \(X = \{x_1, x_2, x_3\}\). Let us define \(d \colon X^{2} \to Q\) as \begin{equation}\label{p3.23:e2} d(x_1, x_2) = d(x_2, x_3) = q_2, \quad \text{and} \quad d(x_1, x_3) = q_1, \end{equation} and \(d(x_1, x_1) = d(x_2, x_2) = d(x_3, x_3) = q_0\). Then \(d\) is a \(Q\)-ultrametric and \(f \circ d\) is an \(L\)-pseudo\-ultra\-metric. Inequality \eqref{p3.23:e1} follows from \(f(q_1) \neq f(q_2)\), \eqref{p3.23:e2} and \eqref{d3.11:e1}.
\((iii) \Rightarrow (i)\). The validity of this implication follows directly from the definition of isotone mappings and the definition of poset-valued pseudoultrametrics. \end{proof}
\begin{corollary}\label{c3.24} Let \((Q, {\preccurlyeq}_Q)\) and \((L, {\preccurlyeq}_L)\) be posets with the smallest elements \(q_0 \in Q\) and \(l_0 \in L\). Then the following conditions are equivalent for every mapping \(f \colon Q \to L\). \begin{enumerate} \item [\((i)\)] \(f \circ d\) is a \(L\)-ultrametric whenever \(d\) is a \(Q\)-ultrametric. \item [\((ii)\)] \(f\) is isotone and the equivalence \begin{equation}\label{c3.24:e1} (f(q) = l_0) \Leftrightarrow (q = q_0) \end{equation} is valid for every \(q \in Q\). \end{enumerate} \end{corollary}
\begin{proof} \((i) \Rightarrow (ii)\). Let \((i)\) hold. Then, by Proposition~\ref{p3.23}, \(f\) is isotone and \(f(q_0) = l_0\) holds. Thus, to prove \((ii)\) it suffices to show that \(f(q) = l_0\) implies \(q = q_0\). Suppose, on the contrary, that there is \(q_1 \in Q\) such that \(q_1 \neq q_0\) and \(f(q_1) = l_0\).
Let \(X\) be an arbitrary set with \(|X| \geqslant 2\). The function \(d \colon X^{2} \to Q\), defined as \begin{equation}\label{c3.24:e2} d(x, y) = \begin{cases} q_0 & \text{if } x = y,\\ q_1 & \text{if } x \neq y, \end{cases} \end{equation} is a \(Q\)-ultrametric on \(X\). The equalities \(f(q_0) = l_0\), \(f(q_1) = l_0\) and \eqref{c3.24:e2} imply \(f(d(x, y)) = l_0\) for all \(x\), \(y \in X\). Hence, \(f \circ d\) is not an \(L\)-ultra\-metric on \(X\), which contradicts condition~\((i)\).
\((ii) \Rightarrow (i)\). Suppose \((ii)\) holds, but there are a set \(X\) and a \(Q\)-ultra\-metric \(d \colon X^{2} \to Q\) such that \(f \circ d\) is not an \(L\)-ultrametric. Then we evidently have \(|X| \geqslant 2\). Moreover, Proposition~\ref{p3.23} implies that \(f \circ d\) is an \(L\)-pseudoultrametric. Consequently, there are \(x_1\), \(x_2 \in X\) such that \(x_1 \neq x_2\) and \begin{equation}\label{c3.24:e3} f(d(x_1, x_2)) = l_0. \end{equation} Since \(d\) is a \(Q\)-ultrametric, \begin{equation}\label{c3.24:e4} d(x_1, x_2) \neq q_0 \end{equation} holds. From \eqref{c3.24:e3} and \eqref{c3.24:e4} it follows that \eqref{c3.24:e1} is false with \(q = d(x_1, x_2)\), contrary to condition \((ii)\). \end{proof}
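Proposition~\ref{p3.23} and Corollary~\ref{c3.24} can be illustrated together. In the Python sketch below (the ultrametric \(d\) is a hypothetical example with values \(0 < 1 < 2 < 3\)) the isotone map \(f_1\) with \(f_1^{-1}(0) = \{0\}\) preserves the ultrametric property, while the isotone map \(f_2\), which also glues the value \(1\) to \(0\), yields only a pseudoultrametric.

```python
from itertools import product

# A hypothetical ultrametric d on X = {0,1,2,3}.
X = range(4)
d = {(i, i): 0 for i in X}
d[0,1] = d[1,0] = 1
for i in (0, 1):
    d[i,2] = d[2,i] = 2
for i in (0, 1, 2):
    d[i,3] = d[3,i] = 3

def is_pseudoultrametric(rho):
    return all(rho[x,y] == rho[y,x] and rho[x,x] == 0
               and rho[x,z] <= max(rho[x,y], rho[y,z])
               for x, y, z in product(X, repeat=3))

def is_ultrametric(rho):
    return is_pseudoultrametric(rho) and \
        all((rho[x,y] == 0) == (x == y) for x, y in product(X, repeat=2))

f1 = lambda q: min(q, 2)          # isotone, only 0 is mapped to 0
f2 = lambda q: max(q - 1, 0)      # isotone, but f2(1) = 0 as well

fd1 = {(x, y): f1(d[x,y]) for x, y in product(X, repeat=2)}
fd2 = {(x, y): f2(d[x,y]) for x, y in product(X, repeat=2)}

# f1 ∘ d stays an ultrametric; f2 ∘ d is only a pseudoultrametric.
print(is_ultrametric(fd1), is_pseudoultrametric(fd2), is_ultrametric(fd2))
```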
The following example shows that we cannot replace statement \((i)\) of Corollary~\ref{c3.24} by the statement \begin{itemize} \item \(f \circ d\) is an ultrametric distance w.r.t.\ \((L, {\preccurlyeq}_L)\) whenever \(d\) is an ultrametric distance w.r.t.\ \((Q, {\preccurlyeq}_Q)\) \end{itemize} leaving statement \((ii)\) unchanged.
\begin{example}\label{ex3.26}
Let \(P\) and \(Q\) be sets with \(|P| = |Q| \geqslant 4\), let \(q_0\) be a marked point of \(Q\), and let \({\preccurlyeq}_{P}\) be a linear order on \(P\) with a smallest element \(p_0\). Let us define a binary relation \({\preccurlyeq}_{Q}\) on \(Q\) by the rule: \begin{equation}\label{ex3.26:e1} (\<q_1, q_2> \in {\preccurlyeq}_{Q}) \Leftrightarrow (q_1 = q_2 \text{ or } q_1 = q_0). \end{equation} Then \({\preccurlyeq}_{Q}\) is a partial order on \(Q\) with the smallest element \(q_0\) and, for a set \(X = \{x_1, x_2, x_3\}\), a mapping \(d \colon X^{2} \to Q\) is an ultrametric distance w.r.t. \((Q, {\preccurlyeq}_{Q})\) if and only if \(d\) is symmetric and \[ (d(x, y) = q_0) \Leftrightarrow (x = y) \]
holds for all \(x\), \(y \in X\). Since \(|Q| \geqslant 4\) holds, there is an ultrametric distance \(d^{*} \colon X^{2} \to Q\) such that \(d^{*}(x_1, x_2)\), \(d^{*}(x_2, x_3)\), \(d^{*}(x_3, x_1)\) are pairwise distinct. It follows directly from \eqref{ex3.26:e1} and Definition~\ref{d3.5} that a function \(f \colon Q \to P\) is isotone if and only if \(f(q_0) = p_0\). Now, using the equality \(|P| = |Q|\) we can find an isotone bijection \(f^{*} \colon Q \to P\) such that \[ (f^{*}(q) = p_0) \Leftrightarrow (q = q_0) \] is valid for every \(q \in Q\). Since \((P, {\preccurlyeq}_{P})\) is totally ordered, and \(f^{*}\) is bijective, and \(d^{*}(x_1, x_2)\), \(d^{*}(x_2, x_3)\), \(d^{*}(x_3, x_1)\) are pairwise distinct, we can find a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] for which \[ f^{*}(d^{*}(x_{i_1}, x_{i_2})) \prec_{P} f^{*}(d^{*}(x_{i_2}, x_{i_3})) \prec_{P} f^{*}(d^{*}(x_{i_1}, x_{i_3})). \] From Definition~\ref{d1.3} it follows that the mapping \[ X^{2} \xrightarrow{d^{*}} Q \xrightarrow{f^{*}} P \] is not an ultrametric distance w.r.t. \((P, {\preccurlyeq}_{P})\). \end{example}
\begin{remark}\label{r3.25} For the case of standard ultrametrics and pseudoultrametrics Proposition~\ref{p3.23} and Corollary~\ref{c3.24} are known. In particular, Proposition~\ref{p3.23} is a generalization of Proposition~2.4 \cite{Dov2019v2} and, respectively, Corollary~\ref{c3.24} is a generalization of Theorem~9 \cite{PTAbAppAn2014}. \end{remark}
\section{From weak similarities to combinatorial similarities and back}
Let us expand the notion of weak similarity to the case of poset-valued pseudoultrametrics.
\begin{definition}\label{d3.13} Let \((Q_i, {\preccurlyeq}_{Q_i})\) be a poset, and \((X_i, d_{i})\) be a \(Q_i\)-pseudo\-ultra\-metric space, and let \(Y_i := d_{i}(X_i^2)\), \(i = 1\), \(2\). A bijection \(\Phi \colon X_1 \to X_2\) is a \emph{weak similarity} for \(d_1\) and \(d_2\) if there is an isomorphism \(f \colon Y_2 \to Y_1\) of the subposet \((Y_2, {\preccurlyeq}_{Y_2})\) of the poset \((Q_2, {\preccurlyeq}_{Q_2})\) and the subposet \((Y_1, {\preccurlyeq}_{Y_1})\) of the poset \((Q_1, {\preccurlyeq}_{Q_1})\) such that \begin{equation}\label{d3.13:e1} d_1(x, y) = f(d_2(\Phi(x), \Phi(y))) \end{equation} for all \(x\), \(y \in X_1\). \end{definition}
\begin{remark}\label{r4.2} For every totally ordered set \((P_1, \preccurlyeq_{P_1})\) and arbitrary poset \((P_2, \preccurlyeq_{P_2})\), every isotone bijection \(f \colon P_1 \to P_2\) is an isomorphism of \((P_1, \preccurlyeq_{P_1})\) and \((P_2, \preccurlyeq_{P_2})\). Thus, Definition~\ref{d2.9} and Definition~\ref{d3.13} are equivalent for the case when \((Q_1, \preccurlyeq_{Q_1})\) and \((Q_2, \preccurlyeq_{Q_2})\) coincide with \((\mathbb{R}^{+}, \leqslant)\). \end{remark}
The following is a generalization of Proposition~\ref{p2.11}.
\begin{proposition}\label{p3.16} Let \((Q_i, {\preccurlyeq}_{Q_i})\) be a poset and \((X_i, d_{i})\) be a \(Q_i\)-pseudoultrametric space, \(i=1\), \(2\). Then every weak similarity for \(d_1\) and \(d_2\) is a combinatorial similarity for \(d_{1}\) and \(d_{2}\). \end{proposition}
\begin{proof} The proposition can be derived directly from the definitions. We just notice that if \(Y_1 := d_{1}(X_1^2)\) and \(Y_2 := d_{2}(X_2^2)\), and \(f \colon Y_2 \to Y_1\) is an isomorphism of the subposet \((Y_2, {\preccurlyeq}_{Y_2})\) of \((Q_2, {\preccurlyeq}_{Q_2})\) and the subposet \((Y_1, {\preccurlyeq}_{Y_1})\) of \((Q_1, {\preccurlyeq}_{Q_1})\), and~\eqref{d3.13:e1} holds for all \(x\), \(y \in X_1\), then we have \(q_1 = f(q_2)\), where \(q_i \in d_{i}(X_i^2)\) is the smallest element of \((Q_i, {\preccurlyeq}_{Q_i})\), \(i = 1\), \(2\), which agrees with Proposition~\ref{p3.12} and the second statement of Proposition~\ref{p2.4}. \end{proof}
\begin{theorem}\label{t4.3} Let \(X_i\) be a nonempty set and let \(\Phi_i\) be a mapping with \(\dom \Phi_i = X_i^{2}\), \(i = 1\), \(2\). Suppose \begin{equation}\label{t4.3:e1} {\preccurlyeq}_{1} := u_{\Phi_1}^{t} \cup \Delta_{\Phi_1(X_1^{2})} \quad \text{and} \quad {\preccurlyeq}_{2} := u_{\Phi_2}^{t} \cup \Delta_{\Phi_2(X_2^{2})} \end{equation} are partial orders on \(\Phi_1(X_1^{2})\) and, respectively, on \(\Phi_2(X_2^{2})\). If \(\Phi_i\) is a \({\preccurlyeq}_{i}\)-pseudoultrametric, \(i = 1\), \(2\), then the following conditions are equivalent for every mapping \(g \colon X_1 \to X_2\). \begin{enumerate} \item [\((i)\)] \(g\) is a weak similarity for \(\Phi_1\) and \(\Phi_2\). \item [\((ii)\)] \(g\) is a combinatorial similarity for \(\Phi_1\) and \(\Phi_2\). \end{enumerate} \end{theorem}
\begin{proof} Suppose \(\Phi_i\) is a \({\preccurlyeq}_{i}\)-pseudoultrametric, \(i = 1\), \(2\).
\((i) \Rightarrow (ii)\). This is valid by Proposition~\ref{p3.16}.
\((ii) \Rightarrow (i)\). Let \(g \colon X_1 \to X_2\) be a combinatorial similarity. We must prove that \(g\) is a weak similarity for \(\Phi_1\) and \(\Phi_2\). Since \(g\) is a combinatorial similarity, there is a bijection \(f \colon \Phi_2(X_2^2) \to \Phi_1(X_1^2)\) such that \begin{equation}\label{t4.3:e2} \Phi_1(x, y) = f(\Phi_2(g(x), g(y))) \end{equation} holds for all \(x\), \(y \in X_1\). In accordance with Definition~\ref{d3.13}, it suffices to show that \(f\) is an isomorphism of the posets \((\Phi_1(X_1^2), {\preccurlyeq}_1)\) and \((\Phi_2(X_2^2), {\preccurlyeq}_2)\). Using \eqref{t4.3:e1} we see that if \begin{equation}\label{t4.3:e3} \bigl(\<a, b> \in u_{\Phi_2}\bigr) \Leftrightarrow \bigl(\<f(a), f(b)> \in u_{\Phi_1}\bigr) \end{equation} is valid for all \(a\), \(b \in \Phi_2(X_2^2)\), then \(f\) is an isomorphism of these posets. Condition~\eqref{t4.3:e3} follows directly from \eqref{t4.3:e2} and the definitions of \(u_{\Phi_1}\) and \(u_{\Phi_2}\). \end{proof}
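The key step \eqref{t4.3:e3} of the proof can be checked directly on a toy example: if \(\Phi_2\) is obtained from \(\Phi_1\) by renaming points and values, the value bijection \(f\) matches the relations \(u_{\Phi_1}\) and \(u_{\Phi_2}\). A Python sketch (the labels and the renamings are hypothetical, and \(u_{\Phi}\) is again taken to relate the odd value of each isosceles triple to its repeated value):

```python
from itertools import product

# Phi1: an isosceles three-point pattern with values 'o', 'a', 'b'.
X1 = range(3)
Phi1 = {(i, i): 'o' for i in X1}
Phi1[0,1] = Phi1[1,0] = 'a'
Phi1[0,2] = Phi1[2,0] = Phi1[1,2] = Phi1[2,1] = 'b'

g = {0: 2, 1: 0, 2: 1}                 # point bijection X1 -> X2
f = {'O': 'o', 'A': 'a', 'B': 'b'}     # value bijection Phi2-values -> Phi1-values
Phi2 = {(g[x], g[y]): {'o': 'O', 'a': 'A', 'b': 'B'}[Phi1[x,y]]
        for x, y in product(X1, repeat=2)}

def u(Phi, X):
    return {(Phi[a,c], Phi[a,b])
            for a, b, c in product(X, repeat=3) if Phi[a,b] == Phi[b,c]}

u1, u2 = u(Phi1, X1), u(Phi2, range(3))
# <a,b> in u_{Phi2}  iff  <f(a),f(b)> in u_{Phi1}, as in (t4.3:e3).
print(all(((a, b) in u2) == ((f[a], f[b]) in u1) for a in f for b in f))
```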
\begin{corollary}\label{c4.3} Let \(X\) and \(Y\) be nonempty sets and let \((Q, {\preccurlyeq}_{Q})\) and \((L, {\preccurlyeq}_{L})\) be posets. Suppose \(d_Q \colon X^2 \to Q\) and \(d_L \colon Y^2 \to L\) are a \(Q\)-pseudo\-ultrametric and an \(L\)-pseudoultrametric, respectively. If \(d_{Q}(X^{2}) = Q\), and \(d_{L}(Y^{2}) = L\), and \({\preccurlyeq}_{Q} = {\preccurlyeq}_{Q}^{0}\), and \({\preccurlyeq}_{L} = {\preccurlyeq}_{L}^{0}\), then the following conditions are equivalent for every mapping \(\Phi \colon X \to Y\). \begin{enumerate} \item [\((i)\)] \(\Phi\) is a weak similarity for \(d_Q\) and \(d_L\). \item [\((ii)\)] \(\Phi\) is a combinatorial similarity for \(d_Q\) and \(d_L\). \end{enumerate} \end{corollary}
In what follows we will use the next modification of Corollary~\ref{c4.3}.
\begin{lemma}\label{l4.7} Let \((Q, {\preccurlyeq}_{Q})\) be a totally ordered set and let \(d \colon Q^{2} \to Q\) be a \(Q\)-pseudoultrametric such that \(d(Q^{2}) = Q\) and \({\preccurlyeq}_{Q}^{0} = {\preccurlyeq}_{Q}\). Then, for every poset \((L, {\preccurlyeq}_{L})\) having a smallest element and for each \(L\)-pseudoultrametric \(d_L \colon X^{2} \to L\) with \(d_L(X^2) = L\), the following statement holds. If \(d_L\) is combinatorially similar to \(d\), then the corresponding combinatorial similarity is a weak similarity for \(d\) and \(d_L\). \end{lemma}
\begin{proof} Let \((L, {\preccurlyeq}_{L})\) be a poset with a smallest element and let \(d_L\) be a pseudoultrametric on a set \(X\) with \(d_L(X^2) = L\). Suppose \(d\) and \(d_L\) are combinatorially similar. Then there are bijections \[ g \colon X \to Q \quad \text{and} \quad f \colon Q \to L \] such that the diagram \begin{equation}\label{l4.7:e1} \ctdiagram{ \ctv 0,25:{Q^{2}} \ctv 100,25:{X^{2}} \ctv 0,-25:{Q} \ctv 100,-25:{L} \ctet 100,25,0,25:{g\otimes g} \ctet 0,-25,100,-25:{f} \ctel 0,25,0,-25:{d} \cter 100,25,100,-25:{d_L} } \end{equation} is commutative. If \(f\) is an isomorphism of \((Q, {\preccurlyeq}_Q)\) and \((L, {\preccurlyeq}_L)\), then \(g\) is a weak similarity. Since \((Q, {\preccurlyeq}_Q)\) is totally ordered and \(f\) is bijective, to prove that \(f\) is an isomorphism it suffices to show that the implication \begin{equation}\label{l4.7:e2} (q_1 \preccurlyeq_Q q_2) \Rightarrow (f(q_1) \preccurlyeq_L f(q_2)) \end{equation} is valid for all \(q_1\), \(q_2 \in Q\). The inclusion \({\preccurlyeq}_L^0 \subseteq {\preccurlyeq}_L\) (see Proposition~\ref{p3.17}) implies that \eqref{l4.7:e2} is valid if \begin{equation}\label{l4.7:e3} (q_1 \preccurlyeq_Q q_2) \Rightarrow (f(q_1) \preccurlyeq_L^0 f(q_2)). \end{equation} By Lemma~\ref{l3.18}, the equalities \(d(Q^2) = Q\) and \(d_L(X^2) = L\) imply \begin{equation}\label{l4.7:e4} {\preccurlyeq}_Q^0 = u_d^t \cup \Delta_{Q} \quad \text{and} \quad {\preccurlyeq}_L^0 = u_{d_L}^t \cup \Delta_{L}. \end{equation} Using \eqref{l4.7:e4} we see that \eqref{l4.7:e3} is valid whenever \[ (\<q_1, q_2> \in u_d) \Rightarrow (\<f(q_1), f(q_2)> \in u_{d_L}), \] which follows directly from the commutativity of \eqref{l4.7:e1} and the definitions of \(u_d\) and \(u_{d_L}\). \end{proof}
\begin{proposition}\label{c4.4} Let \((Q, {\preccurlyeq}_{Q})\) be a totally ordered set with a smallest element \(q_0\). Then there is a \({\preccurlyeq}_{Q}\)-ultrametric \(d \colon Q^2 \to Q\) such that \[ d(Q^2) = Q \quad \text{and} \quad {\preccurlyeq}_{Q}^{0} = {\preccurlyeq}_{Q}. \] \end{proposition}
\begin{proof} Let us define a mapping \(d \colon Q^{2} \to Q\) by the rule: \begin{equation}\label{c4.4:e1} d(p, q) := \begin{cases} q_0 & \text{if } p=q,\\ p & \text{if } q \prec_{Q} p,\\ q & \text{if } p \prec_{Q} q. \end{cases} \end{equation} It is clear that \(d\) is symmetric and the equality \(d(p, q) = q_0\) holds if and only if \(p=q\).
Now let \(\<q_1, q_2, q_3>\) be a triple of points of \(Q\). Suppose these points are pairwise different. Since \((Q, {\preccurlyeq}_{Q})\) is totally ordered, there is a permutation \[ \begin{pmatrix} q_1 & q_2 & q_3\\ q_{i_1} & q_{i_2} & q_{i_3} \end{pmatrix} \] such that \begin{equation}\label{c4.4:e3} q_{i_1} \prec_{Q} q_{i_3} \prec_{Q} q_{i_2}. \end{equation} From \eqref{c4.4:e1} and \eqref{c4.4:e3} it follows that \[ d(q_{i_1}, q_{i_3}) = q_{i_3} \prec_{Q} q_{i_2} = d(q_{i_1}, q_{i_2}) = d(q_{i_2}, q_{i_3}). \] Thus, \begin{equation}\label{c4.4:e9} d(q_{i_1}, q_{i_3}) \preccurlyeq d(q_{i_1}, q_{i_2}) = d(q_{i_2}, q_{i_3}) \end{equation} holds. Analogously, if the number of different points in \(\<q_1, q_2, q_3>\) is two, we can find a permutation such that \(q_{i_1} = q_{i_3} \neq q_{i_2}\). Hence, \[ d(q_{i_1}, q_{i_3}) = q_{0} \prec_{Q} d(q_{i_1}, q_{i_2}) = d(q_{i_2}, q_{i_3}), \] that implies \eqref{c4.4:e9}. For the case when \(q_1 = q_2 = q_3\) holds, \eqref{c4.4:e9} is trivially valid for every permutation \[ \begin{pmatrix} q_1 & q_2 & q_3\\ q_{i_1} & q_{i_2} & q_{i_3} \end{pmatrix}. \] Hence, \(d\) is a \({\preccurlyeq}_{Q}\)-ultrametric on \(Q\).
It follows from \eqref{c4.4:e1} that \(d(q_0, q) = q\) holds for every \(q \in Q\). Thus, we have \begin{equation}\label{c4.4:e4} d(Q^{2}) = Q. \end{equation} To complete the proof it suffices to show that \begin{equation}\label{c4.4:e5} {\preccurlyeq}_{Q}^{0} = {\preccurlyeq}_{Q}. \end{equation}
By definition of \({\preccurlyeq}_{Q}^{0}\), equality \eqref{c4.4:e5} holds if \begin{equation}\label{c4.4:e6} {\preccurlyeq}_{Q}^{0} \supseteq {\preccurlyeq}_{Q}. \end{equation} Lemma~\ref{l3.18} and \eqref{c4.4:e4} imply the equality \({\preccurlyeq}_{Q}^{0} = (u_d^t \cup \Delta_{Q})\). Consequently, \eqref{c4.4:e6} is valid if and only if \begin{equation}\label{c4.4:e7} (u_d^t \cup \Delta_{Q}) \supseteq {\preccurlyeq}_{Q}. \end{equation}
Let \(q_1\) and \(q_2\) be some points of \(Q\) and let \(q_1 \preccurlyeq_{Q} q_2\). If there is \(q_3 \in Q\) such that \begin{equation}\label{c4.4:e8} q_1 = d(q_1, q_3) \quad \text{and} \quad q_2 = d(q_1, q_2) = d(q_2, q_3), \end{equation} then \(\<q_1, q_2> \in u_d\) holds. If we set \(q_3\) equal to \(q_0\), the smallest element of \((Q, {\preccurlyeq}_{Q})\), then \eqref{c4.4:e8} follows from \(q_1 \preccurlyeq_{Q} q_2\) and \eqref{c4.4:e1}. Thus, the inclusion \(u_d \supseteq {\preccurlyeq}_{Q}\) holds, which implies \eqref{c4.4:e7}. \end{proof}
\begin{remark}\label{r4.8} If \(Q\) is finite, \(Q = \{0, 1, \ldots, n\}\), and \({\preccurlyeq}_Q = {\leqslant}\) hold, then the mapping \(d\) defined by \eqref{c4.4:e1} is an ultrametric on \(Q\) for which the ultrametric space \((Q, d)\) is ``as rigid as possible''. Some extremal properties of such spaces and related graph-theoretical characterizations were found in \cite{DPT(Howrigid)}. \end{remark}
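For the finite case mentioned in Remark~\ref{r4.8}, the rule \eqref{c4.4:e1} reduces to \(d(p, q) = \max(p, q)\) for \(p \neq q\) and \(d(p, p) = 0\). The following Python sketch (with an arbitrary illustrative choice of \(n\)) verifies the ultrametric axioms and the equality \(d(Q^2) = Q\) exhaustively.

```python
# For Q = {0, 1, ..., n} with the usual order and smallest element 0,
# the rule (c4.4:e1) becomes d(p, q) = max(p, q) for p != q, d(p, p) = 0.

def d(p, q):
    return 0 if p == q else max(p, q)

n = 6
Q = range(n + 1)

# d(p, q) = 0 iff p = q, and d is symmetric.
assert all((d(p, q) == 0) == (p == q) for p in Q for q in Q)
assert all(d(p, q) == d(q, p) for p in Q for q in Q)

# Strong triangle inequality: d(p, r) <= max(d(p, q), d(q, r)).
assert all(d(p, r) <= max(d(p, q), d(q, r))
           for p in Q for q in Q for r in Q)

# The range of d is all of Q, matching d(Q^2) = Q in (c4.4:e4).
assert {d(p, q) for p in Q for q in Q} == set(Q)
print("ok")
```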
\begin{example}\label{ex4.6} Let us denote by \(\mathbb{R}_{0}\) the Cartesian product of \(\mathbb{R}^{+}\) and the two-points set \(\{0, 1\}\), \(\mathbb{R}_{0} := \mathbb{R}^{+} \times \{0, 1\}\), and let \({\preccurlyeq}_{\mathbb{R}_{0}}\) be the \emph{lexicographical} order on \(\mathbb{R}_{0}\), \begin{equation}\label{ex4.6:e1} \bigl(\<a,b> \preccurlyeq_{\mathbb{R}_{0}} \<c,d>\bigr) \Leftrightarrow \bigl((a < c) \text{ or } (a = c \text{ and } b = 0 \text{ and } d = 1)\bigr), \end{equation} where \(<\) is the standard strict order on \(\mathbb{R}^{+}\). The poset \((\mathbb{R}_{0}, {\preccurlyeq}_{\mathbb{R}_{0}})\) is totally ordered. By Proposition~\ref{c4.4}, the mapping \(d \colon \mathbb{R}_{0}^{2} \to \mathbb{R}_{0}\), defined by formula~\eqref{c4.4:e1}, is a \({\preccurlyeq}_{\mathbb{R}_{0}}\)-ultrametric and \begin{equation}\label{ex4.6:e2} d(\mathbb{R}_{0}^{2}) = \mathbb{R}_{0} \quad \text{and} \quad {\preccurlyeq}_{\mathbb{R}_{0}}^{0} = {\preccurlyeq}_{\mathbb{R}_{0}} \end{equation} hold.
Suppose that there is an ultrametric space \((X, \rho)\) such that \(d\) and \(\rho\) are combinatorially similar. From the definition of combinatorial similarity it follows that there are bijections \(f \colon \rho(X^{2}) \to d(\mathbb{R}_{0}^{2})\) and \(g \colon \mathbb{R}_{0} \to X\) such that \(d(x, y) = f(\rho(g(x), g(y)))\) holds for all \(x\), \(y \in \mathbb{R}_{0}\). Let us consider now the poset \((\rho(X^{2}), {\preccurlyeq}_{\rho})\), where \begin{equation}\label{ex4.6:e3} {\preccurlyeq}_{\rho} := u_{\rho}^{t} \cup \Delta_{\rho(X^{2})}. \end{equation} By Theorem~\ref{t3.15}, \(\rho\) is a \({\preccurlyeq}_{\rho}\)-ultrametric on \(X\). Moreover, using Lemma~\ref{l3.18} and Theorem~\ref{t4.3} we obtain that \(g \colon \mathbb{R}_{0} \to X\) is a weak similarity for \(d\) and \(\rho\). Hence, \(f \colon \rho(X^{2}) \to \mathbb{R}_{0}\) is an isomorphism of \((\mathbb{R}_{0}, {\preccurlyeq}_{\mathbb{R}_{0}})\) and \((\rho(X^{2}), {\preccurlyeq}_{\rho})\). Proposition~\ref{p3.17}, Lemma~\ref{l3.18} and \eqref{ex4.6:e3} imply \begin{equation}\label{ex4.6:e4} (q_1 \prec_{\mathbb{R}_{0}} q_2) \Leftrightarrow (f^{-1}(q_1) < f^{-1}(q_2)) \end{equation} for all \(q_1\), \(q_2 \in \mathbb{R}_{0}\).
Let us consider now the points \[ q_i^x := \<x, i> \quad \text{and} \quad q_i^y := \<y, i>, \quad i = 0, 1, \quad x, y \in \mathbb{R}^{+}. \] It follows directly from \eqref{ex4.6:e1} that if \(x < y\), then \[ q_0^x \prec_{\mathbb{R}_{0}} q_1^x \prec_{\mathbb{R}_{0}} q_0^y \prec_{\mathbb{R}_{0}} q_1^y. \] Consequently, \begin{equation}\label{ex4.6:e5} f^{-1}(q_0^x) < f^{-1}(q_1^x) < f^{-1}(q_0^y) < f^{-1}(q_1^y). \end{equation} Since \(\mathbb{Q}^{+} = \mathbb{R}^{+} \cap \mathbb{Q}\) is a dense subset of \(\mathbb{R}^{+}\), for every \(x \in \mathbb{R}^{+}\) there is \(p^x \in \mathbb{Q}^{+}\) such that \begin{equation}\label{ex4.6:e6} f^{-1}(q_0^x) < p^x < f^{-1}(q_1^x). \end{equation} From \eqref{ex4.6:e5} and \eqref{ex4.6:e6} it follows that the mapping \[ \mathbb{R}^{+} \ni x \mapsto p^x \in \mathbb{Q}^{+} \]
is injective, contrary to the equalities \(|\mathbb{R}^{+}| = 2^{\aleph_{0}}\) and \(|\mathbb{Q}^{+}| = \aleph_{0}\). Thus, there are no ultrametrics which are combinatorially similar to \(d\). \end{example}
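The lexicographical order \eqref{ex4.6:e1} from the example above can be tested on finitely many sample points; the comparator and the sample values in this Python sketch are illustrative only.

```python
# A sketch of the lexicographical order (ex4.6:e1) on R+ x {0, 1}:
# compare the real coordinates first, break ties by 0 < 1.

def leq(ab, cd):
    """Non-strict lexicographic order on pairs (a, b) with b in {0, 1}."""
    (a, b), (c, d) = ab, cd
    return ab == cd or a < c or (a == c and b == 0 and d == 1)

pts = [(x, i) for x in (0.0, 0.5, 1.0) for i in (0, 1)]

# Totality: any two points are comparable.
assert all(leq(p, q) or leq(q, p) for p in pts for q in pts)
# Antisymmetry and transitivity on the sample points.
assert all(not (leq(p, q) and leq(q, p)) or p == q
           for p in pts for q in pts)
assert all(not (leq(p, q) and leq(q, r)) or leq(p, r)
           for p in pts for q in pts for r in pts)

# The chain q_0^x < q_1^x < q_0^y < q_1^y for x < y, as in (ex4.6:e5).
x, y = 0.5, 1.0
chain = [(x, 0), (x, 1), (y, 0), (y, 1)]
assert all(leq(chain[i], chain[i + 1]) and chain[i] != chain[i + 1]
           for i in range(3))
print("ok")
```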
\begin{remark}\label{r4.9} An interesting topological property of the poset \((\mathbb{R}_{0}, {\preccurlyeq}_{\mathbb{R}_{0}})\) was found by F.~S.~Cater \cite{Cat1999/2000RAE}. We will return to it later in Theorem~\ref{t4.19}. \end{remark}
Example~\ref{ex4.6} shows that, after replacing \(\aleph_{0}\) by \(2^{\aleph_{0}}\) and \(\mathbb{Q}^{+}\) by \(\mathbb{R}^{+}\), Theorem~\ref{t3.7} becomes false. In particular, we have the following proposition.
\begin{proposition}\label{p4.8}
Let \(X\) be a set with \(|X| = 2^{\aleph_{0}}\). Then there is a metric \(d^{*} \colon X^{2} \to \mathbb{R}^{+}\) such that: \begin{enumerate} \item [\((i)\)] If \(\rho\) is an arbitrary ultrametric, then \(\rho\) and \(d^{*}\) are not combinatorially similar;
\item [\((ii)\)] For every \(X_1 \subseteq X\) with \(|X_1| \leqslant \aleph_{0}\), the restriction \(d^{*}|_{X_1^2}\) of \(d^{*}\) is combinatorially similar to an ultrametric. \end{enumerate} \end{proposition}
\begin{proof} Let \(d \colon \mathbb{R}_{0}^{2} \to \mathbb{R}_{0}\) be the \({\preccurlyeq}_{\mathbb{R}_{0}}\)-ultrametric defined in Example~\ref{ex4.6}. The equalities \begin{equation}\label{p4.8:e1}
|X| = 2^{\aleph_{0}} \quad \text{and} \quad 2^{\aleph_{0}} = |\mathbb{R}_{0}| \end{equation} imply the existence of a bijection \(g \colon X \to \mathbb{R}_{0}\). Let \(d_1 \colon X^{2} \to \mathbb{R}_{0}\) be a \({\preccurlyeq}_{\mathbb{R}_{0}}\)-ultrametric defined as \[ d_1(x, y) = d(g(x), g(y)), \quad x, y \in X. \]
From~\eqref{p4.8:e1} it follows that \(|d_1(X^{2})| \leqslant 2^{\aleph_{0}}\). Consequently, by statement \((i)\) of Corollary~\ref{c3.17}, there is an ordinary metric \(d_2\) such that \(d_1\) and \(d_2\) are combinatorially similar. It follows directly from the definition of combinatorial similarity that there is a metric \(d^{*} \colon X^{2} \to \mathbb{R}^{+}\) which is combinatorially similar to \(d_2\). Thus, \(d^{*}\) and \(d\) are combinatorially similar.
It is easy to prove that \(d^*\) satisfies conditions \((i)\) and \((ii)\). Indeed, condition \((ii)\) follows from statement \((ii)\) of Corollary~\ref{c3.17}. Furthermore, it was shown in Example~\ref{ex4.6} that there are no ultrametrics which are combinatorially similar to \(d \colon \mathbb{R}_{0}^{2} \to \mathbb{R}_{0}\). Consequently, \((i)\) also holds. \end{proof}
Let \((Q, {\preccurlyeq}_Q)\) be a totally ordered set, and let \(A\), \(B\) be nonempty subsets of \(Q\). We write \(A \prec_{Q} B\) when \(a \prec_{Q} b\) holds for all \(a \in A\) and \(b \in B\).
The sets \(A\) and \(B\) are \emph{neighboring} if \(A \prec_{Q} B\) or, respectively, \(B \prec_{Q} A\) and there is no \(q \in Q\) such that \[ A \prec_{Q} \{q\} \quad \text{and} \quad \{q\} \prec_{Q} B \] or, respectively, \[ B \prec_{Q} \{q\} \quad \text{and} \quad \{q\} \prec_{Q} A. \]
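For finite chains of integers (an illustrative special case, with the usual order standing in for \({\preccurlyeq}_Q\)), the notion of neighboring subsets can be tested directly.

```python
# A and B are neighboring if A < B elementwise (or B < A) and no
# element of Q lies strictly between them.  Q here is a finite set
# of integers with the usual order, an illustrative choice.

def neighboring(A, B, Q):
    def adjacent(lo, hi):
        return (all(a < b for a in lo for b in hi)
                and not any(max(lo) < q < min(hi) for q in Q))
    return adjacent(A, B) or adjacent(B, A)

Q = set(range(10))
assert neighboring({0, 1, 2}, {3, 4}, Q)   # nothing lies between 2 and 3
assert not neighboring({0, 1}, {5, 6}, Q)  # 2, 3, 4 lie between
assert not neighboring({0, 3}, {2, 5}, Q)  # neither A < B nor B < A
print("ok")
```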
\begin{definition}\label{d4.9} A totally ordered set \(Q\) is an \(\eta_1\)-set if it has no neighboring subsets which both have cardinality strictly less than \(\aleph_1\). \end{definition}
Let \((Q, {\preccurlyeq}_Q)\) and \((L, {\preccurlyeq}_L)\) be posets. An injection \(f \colon Q \to L\) is an \emph{embedding} of \((Q, {\preccurlyeq}_Q)\) in \((L, {\preccurlyeq}_L)\) if \[ \bigl(q_1 \preccurlyeq_Q q_2\bigr) \Leftrightarrow \bigl(f(q_1) \preccurlyeq_L f(q_2)\bigr) \] is valid for all \(q_1\), \(q_2 \in Q\).
A totally ordered set \(L\) is \(\aleph_1\)-\emph{universal} if every totally ordered set \(Q\) with \(|Q| \leqslant \aleph_1\) can be embedded into \(L\).
\begin{lemma}\label{l4.10} Every \(\eta_1\)-set is \(\aleph_1\)-universal. \end{lemma}
For a detailed proof of the lemma see, for example, Theorem~20 in~\cite{Ada2018}.
\begin{remark}\label{r4.11} The above definition of \(\aleph_1\)-universal sets can be naturally extended to an arbitrary infinite cardinal number \(\aleph\). The construction of \(\aleph\)-universal posets has been studied by many mathematicians (see, for example, \cite{Joh1956PA, Hed1969JoA} and the references therein). \end{remark}
In the proof of the following theorem we will use the Continuum Hypothesis.
\begin{theorem}\label{t4.11}
Let \(X\) be a nonempty set, let \(\Phi\) be a mapping with \(\dom \Phi = X^{2}\) and \(|\Phi(X^{2})| \leqslant 2^{\aleph_{0}}\), and let \((Q, {\preccurlyeq}_Q)\) be an \(\eta_1\)-set with a smallest element \(q_0\). Then the following conditions are equivalent. \begin{enumerate} \item[\((i)\)] \(\Phi\) is combinatorially similar to a \({\preccurlyeq}_Q\)-pseudoultrametric. \item[\((ii)\)] The mapping \(\Phi\) is symmetric, and the transitive closure \(u_{\Phi}^{t}\) of the binary relation \(u_{\Phi}\) is antisymmetric, and \(\Phi\) is \(a_0\)-coherent for a point \(a_0 \in \Phi(X^{2})\), and, for every triple \(\<x_1, x_2, x_3>\) of points of \(X\), there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \(\Phi(x_{i_1}, x_{i_2}) = \Phi(x_{i_2}, x_{i_3})\). \end{enumerate}
\begin{proof} The validity of \((i) \Rightarrow (ii)\) follows from Theorem~\ref{t3.15}.
Suppose that \((ii)\) holds. Using Theorem~\ref{t3.15} we obtain that \(\Phi\) is a \({\preccurlyeq}_{\Phi}\)-pseudoultrametric for the partial order \[ {\preccurlyeq}_{\Phi} := u_{\Phi}^{t} \cup \Delta_{\Phi(X^{2})} \] defined on \(\Phi(X^{2})\).
By Lemma~\ref{l3.3} (Szpilrajn), there is a linear order \({\preccurlyeq}_{1}\) on \(\Phi(X^{2})\) such that \({\preccurlyeq}_{\Phi} \subseteq {\preccurlyeq}_{1}\). Consequently, \(\Phi\) is also a \({\preccurlyeq}_{1}\)-pseudoultrametric. By hypothesis, the inequality \(|\Phi(X^{2})| \leqslant 2^{\aleph_{0}}\) holds. This inequality and the Continuum Hypothesis, \(2^{\aleph_{0}} = \aleph_1\), imply \(|\Phi(X^{2})| \leqslant \aleph_1\). By Lemma~\ref{l4.10}, the \(\eta_1\)-set \((Q, {\preccurlyeq}_Q)\) is \(\aleph_1\)-universal. It is easy to prove that there is an embedding \(f \colon \Phi(X^{2}) \to Q\) of \((\Phi(X^{2}), {\preccurlyeq}_{1})\) in \((Q, {\preccurlyeq}_{Q})\) such that \(f(a_0) = q_0\). Then the mapping \[ X^2 \xrightarrow{\Phi} \Phi(X^{2}) \xrightarrow{f} Q \] is a \({\preccurlyeq}_{Q}\)-pseudoultrametric and this mapping is combinatorially similar to \(\Phi\). \end{proof}
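The two order-theoretic steps used in the proof above, passing from \(u_{\Phi}\) to its transitive closure \(u_{\Phi}^{t}\) and extending the resulting partial order to a linear order (Szpilrajn's lemma; for finite posets an ordinary topological sort suffices), can be sketched for finite relations as follows. The sample relation is a hypothetical choice.

```python
# Transitive closure of a finite relation, computed by repeatedly
# adding the pairs forced by transitivity.
def transitive_closure(u):
    closure = set(u)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Kahn-style topological sort: a linear extension of a finite
# partial order (the finite case of Szpilrajn's lemma).
def linear_extension(order, elems):
    remaining, out = set(elems), []
    while remaining:
        # pick a minimal element: nothing else in `remaining` is below it
        m = next(e for e in sorted(remaining)
                 if not any((f, e) in order and f != e for f in remaining))
        out.append(m)
        remaining.remove(m)
    return out

elems = ["p", "q", "r", "s"]
u = {("p", "q"), ("q", "r")}
ut = transitive_closure(u)
assert ("p", "r") in ut            # gained by transitivity

ext = linear_extension(ut, elems)
# the linear extension respects every pair of the closure
assert all(ext.index(a) < ext.index(b) for (a, b) in ut)
print("ok")
```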
The following definition can be found in \cite[pp.~57--58]{Kel1975S}.
\begin{definition}\label{d4.13}
Let \((Q, {\preccurlyeq}_{Q})\) be a totally ordered set with \(|Q| > 1\). A topology \(\tau\) with a subbase consisting of all sets of the form \[ \{q \in Q \colon q \prec_Q a\} \quad \text{or} \quad \{q \in Q \colon a \prec_Q q\} \] for some \(a \in Q\) is the order topology on \(Q\). In this case we say that \(\tau\) is the \({\preccurlyeq}_{Q}\)-topology for short. \end{definition}
Recall that a topological space is second countable if it has a countable or finite base.
\begin{lemma}\label{l4.14}
Let \((Q, {\preccurlyeq}_{Q})\) be a totally ordered set with \(|Q| > 1\). Then the following conditions are equivalent. \begin{enumerate} \item [\((i)\)] The \({\preccurlyeq}_{Q}\)-topology is second countable. \item [\((ii)\)] The poset \((Q, {\preccurlyeq}_{Q})\) is isomorphic to a subposet of \((\mathbb{R}^{+}, \leqslant)\). \end{enumerate} \end{lemma}
This lemma is a simple modification of Theorem~II from the paper~\cite{Cat1999/2000RAE} by F.~S.~Cater.
\begin{theorem}\label{t4.15}
Let \((Q, {\preccurlyeq}_{Q})\) be a totally ordered set satisfying \(|Q| > 1\) and having the smallest element \(q_0\). Then the following conditions are equivalent. \begin{enumerate} \item [\((i)\)] The \({\preccurlyeq}_{Q}\)-topology is second countable. \item [\((ii)\)] For every \({\preccurlyeq}_{Q}\)-pseudoultrametric \(d\) there is a pseudoultrametric \(\rho\) such that \(d\) and \(\rho\) are weakly similar. \item [\((iii)\)] For every \({\preccurlyeq}_{Q}\)-pseudoultrametric \(d\) there is a pseudoultrametric \(\rho\) such that \(d\) and \(\rho\) are combinatorially similar. \end{enumerate} \end{theorem}
\begin{proof}
It is easy to see that \((i)\), \((ii)\) and \((iii)\) are equivalent if \(|Q| = 2\). Suppose \(|Q| \geqslant 3\) holds.
\((i) \Rightarrow (ii)\). Let the \({\preccurlyeq}_{Q}\)-topology be second countable, let \(X\) be a nonempty set and let \(d \colon X^{2} \to Q\) be a \({\preccurlyeq}_{Q}\)-pseudoultrametric. Write \(Q_0 := Q \setminus \{q_0\}\) and \({\preccurlyeq}_{Q_0} := Q_0^2 \cap {\preccurlyeq}_{Q}\). The inequality \(|Q| \geqslant 3\) implies \(|Q_0| > 1\). The \({\preccurlyeq}_{Q_0}\)-topology coincides with the topology induced on \(Q_0\) by \({\preccurlyeq}_{Q}\)-topology. Consequently, the \({\preccurlyeq}_{Q_0}\)-topology is also second countable. Hence, by Lemma~\ref{l4.14}, there is an isomorphism \(f \colon Q_0 \to A_0\) of the posets \((Q_0, {\preccurlyeq}_{Q_0})\) and \((A_0, \leqslant)\), where \(A_0 \subseteq (0, \infty)\) and \(\leqslant\) is the standard order on \(\mathbb{R}\). Write \(A := A_0 \cup \{0\}\). The function \(f^{*} \colon Q \to A\), \[ f^{*}(q) = \begin{cases} 0 & \text{if } q = q_0,\\ f(q) & \text{if } q\neq q_0, \end{cases} \] is an isomorphism of \((Q, {\preccurlyeq}_{Q})\) and \((A, \leqslant)\). Let \(\rho \colon X^{2} \to \mathbb{R}^{+}\) be defined as \[ \rho(x, y) = f^{*}(d(x, y)), \quad x, y \in X. \] Then \(\rho\) is a pseudoultrametric on \(X\) and the identical mapping \hbox{\(X \xrightarrow{\operatorname{id}} X\)} is a weak similarity for \(d\) and \(\rho\).
\((ii) \Rightarrow (iii)\). The validity of this implication follows from Proposition~\ref{p3.16}.
\((iii) \Rightarrow (i)\). Suppose condition \((iii)\) holds. By Proposition~\ref{c4.4}, there is a \({\preccurlyeq}_{Q}\)-ultrametric \(d \colon Q^2 \to Q\) satisfying the equalities \(d(Q^2) = Q\) and \({\preccurlyeq}_{Q}^{0} = {\preccurlyeq}_{Q}\).
Let \(\rho \colon X^{2} \to \mathbb{R}^{+}\) be a pseudoultrametric such that \(\rho\) and \(d\) are combinatorially similar. Write \(L := \rho(X^2)\) and \({\preccurlyeq}_{L} := {\leqslant} \cap L^2\). Then the \(L\)-pseudoultrametric \(\rho_L \colon X^{2} \to L\), \[ \rho_L(x, y) = \rho(x, y), \quad x, y \in X, \] is also combinatorially similar to \(d\). By Lemma~\ref{l4.7}, \(d\) and \(\rho_L\) are weakly similar. Using Definition~\ref{d3.13} we obtain that \((Q, {\preccurlyeq}_{Q})\) is isomorphic to the subposet \((L, {\preccurlyeq}_{L})\) of \((\mathbb{R}^{+}, {\leqslant})\). Hence, by Lemma~\ref{l4.14} (Cater), the \({\preccurlyeq}_{Q}\)-topology is second countable. \end{proof}
Recall that a topological space \((X, \tau)\) is said to be separable if there is a set \(A \subseteq X\) such that \(|A| \leqslant \aleph_{0}\) and \(A \cap U \neq \varnothing\) for every nonempty set \(U \in \tau\).
In what follows we denote by \((\mathbb{R}_{0}, {\preccurlyeq}_{\mathbb{R}_{0}})\) the totally ordered set constructed in Example~\ref{ex4.6}.
The next lemma is a part of Theorem~III \cite{Cat1999/2000RAE}.
\begin{lemma}[Cater]\label{l4.18}
Let \((Q, {\preccurlyeq}_{Q})\) be a totally ordered set with \(|Q| > 1\). Then the following conditions are equivalent. \begin{enumerate} \item [\((i)\)] The \({\preccurlyeq}_{Q}\)-topology is separable. \item [\((ii)\)] The poset \((Q, {\preccurlyeq}_{Q})\) is isomorphic to a subposet of \((\mathbb{R}_{0}, {\preccurlyeq}_{\mathbb{R}_{0}})\). \end{enumerate} \end{lemma}
\begin{theorem}\label{t4.19}
Let \((Q, {\preccurlyeq}_{Q})\) be a totally ordered set having a smallest element and satisfying the inequality \(|Q| > 1\). Then the following conditions are equivalent. \begin{enumerate} \item [\((i)\)] The \({\preccurlyeq}_{Q}\)-topology is separable. \item [\((ii)\)] For every \({\preccurlyeq}_{Q}\)-pseudoultrametric \(d\) there is a \({\preccurlyeq}_{\mathbb{R}_{0}}\)-pseudo\-ultra\-metric \(\rho\) such that \(d\) and \(\rho\) are weakly similar. \item [\((iii)\)] For every \({\preccurlyeq}_{Q}\)-pseudoultrametric \(d\) there is a \({\preccurlyeq}_{\mathbb{R}_{0}}\)-pseudo\-ultra\-metric \(\rho\) such that \(d\) and \(\rho\) are combinatorially similar. \end{enumerate} \end{theorem}
Using Lemma~\ref{l4.18} instead of Lemma~\ref{l4.14}, we can prove this theorem in the same way as Theorem~\ref{t4.15}.
The following theorem gives some necessary and sufficient conditions under which a mapping is combinatorially similar to a pseudoultrametric; it can be considered the main result of this section.
\begin{theorem}\label{t4.20} Let \(X\) be a nonempty set and let \(\Phi\) be a mapping with \(\dom \Phi = X^{2}\). Then the following conditions are equivalent. \begin{enumerate} \item [\((i)\)] \(\Phi\) is combinatorially similar to a pseudoultrametric. \item [\((ii)\)] There is \(b_0 \in \Phi(X^{2})\) such that \(\Phi(x,x) = b_0\) holds for every \(x \in X\), and the binary relation \begin{equation}\label{t4.20:e1} {\preccurlyeq}_{\Phi} := u_{\Phi}^{t} \cup \Delta_{\Phi(X^{2})} \end{equation} is a partial order on \(\Phi(X^{2})\), and \(b_0\) is the smallest element of \((\Phi(X^{2}), {\preccurlyeq}_{\Phi})\), and \(\Phi\) is a \( {\preccurlyeq}_{\Phi}\)-pseudo\-ultra\-metric on \(X\), and there is a linear order \({\preccurlyeq}\) on \(\Phi(X^{2})\) such that \begin{equation}\label{t4.20:e2} {\preccurlyeq}_{\Phi} \subseteq {\preccurlyeq} \end{equation} holds, and \((\Phi(X^{2}), {\preccurlyeq})\) is isomorphic to a subposet of \((\mathbb{R}^{+}, \leqslant)\). \item [\((iii)\)] The mapping \(\Phi\) is symmetric, and there is \(a_0 \in \Phi(X^{2})\) for which \(\Phi\) is \(a_0\)-coherent, and, for every triple \(\<x_1, x_2, x_3>\) of points of \(X\), there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \(\Phi(x_{i_1}, x_{i_2}) = \Phi(x_{i_2}, x_{i_3})\), and there is a linear order \({\preccurlyeq}\) on \(\Phi(X^{2})\) such that \(a_0\) is the smallest element of \((\Phi(X^{2}), {\preccurlyeq})\) and \(u_{\Phi} \subseteq {\preccurlyeq}\) holds, and \((\Phi(X^{2}), {\preccurlyeq})\) is isomorphic to a subposet of \((\mathbb{R}^{+}, \leqslant)\). \end{enumerate} \end{theorem}
\begin{proof} \((i) \Rightarrow (ii)\). Let \((i)\) hold. Then using Theorem \ref{t3.15} we see that condition \((ii)\) is valid whenever there is a linear order \(\preccurlyeq\) on \(\Phi(X^{2})\) such that \eqref{t4.20:e2} holds and \((\Phi(X^{2}), {\preccurlyeq})\) is isomorphic to a subposet of \((\mathbb{R}^{+}, \leqslant)\).
By condition~\((i)\), there are a set \(Y\) and a pseudoultrametric \(\rho \colon Y^{2} \to \mathbb{R}^{+}\) such that \(\Phi\) and \(\rho\) are combinatorially similar. Write \begin{equation}\label{t4.20:e3} {\preccurlyeq}_{\rho} := u_{\rho}^{t} \cup \Delta_{\rho(Y^{2})}. \end{equation} From Lemma~\ref{l3.18} it follows that \(\rho\) is a \({\preccurlyeq}_{\rho}\)-pseudo\-ultra\-metric. Since \(\Phi\) and \(\rho\) are combinatorially similar, there exists a bijection \(g \colon X \to Y\) such that \(g\) is combinatorial similarity for \(\Phi\) and \(\rho\). Now using Theorem~\ref{t4.3}, and \eqref{t4.20:e1}, and \eqref{t4.20:e3} we see that \(g\) is a weak similarity for \(\Phi\) and \(\rho\). Consequently, there is an order isomorphism \[ f \colon \Phi(X^{2}) \to \rho(Y^{2}) \] of posets \((\Phi(X^{2}), {\preccurlyeq}_{\Phi})\) and \((\rho(Y^{2}), {\preccurlyeq}_{\rho})\). By Proposition~\ref{p3.17} and Lemma~\ref{l3.18}, we obtain that \[ (\gamma_1 \preccurlyeq_{\rho} \gamma_2) \Rightarrow (\gamma_1 \leqslant \gamma_2) \] is valid for all \(\gamma_1\), \(\gamma_2 \in \rho(Y^{2})\).
Let us define a binary relation \(\preccurlyeq\) by the rule: \[ (\<g_1, g_2> \in {\preccurlyeq}) \Leftrightarrow (\<g_1, g_2> \in \Phi(X^{2}) \times \Phi(X^{2}) \text{ and } (f(g_1) \leqslant f(g_2))). \] Then \({\preccurlyeq}\) is a linear order satisfying all the desired conditions.
\((ii) \Rightarrow (i)\). Suppose \((ii)\) holds. Then \(\Phi\) is a \({\preccurlyeq}_{\Phi}\)-pseudo\-ultra\-metric on \(X\) and there is an injection \(f \colon \Phi(X^{2}) \to \mathbb{R}^{+}\) such that \[ (b_1 \preccurlyeq_{\Phi} b_2) \Rightarrow (f(b_1) \leqslant f(b_2)) \] holds for all \(b_1\), \(b_2 \in \Phi(X^{2})\). Since \(b_0\) is the smallest element of the poset \((\Phi(X^{2}), {\preccurlyeq}_{\Phi})\), the function \(f^{*} \colon \Phi(X^{2}) \to \mathbb{R}^{+}\) defined as \[ f^{*}(b) = f(b) - f(b_0) \] is nonnegative and isotone, and satisfies the condition \[ (f^{*}(b) = 0) \Leftrightarrow (b = b_0) \] for every \(b \in \Phi(X^{2})\). Proposition~\ref{p3.23} implies that \(f^{*} \circ \Phi\) is a pseudoultrametric on \(X\). From Definition~\ref{d2.17} it directly follows that \(\Phi\) and \(f^{*} \circ \Phi\) are combinatorially similar.
The validity of the equivalence \((ii) \Leftrightarrow (iii)\) follows from Theorem~\ref{t3.15}. We only note that \(u_{\Phi}^{t}\) is antisymmetric if and only if there is a partial order \({\preccurlyeq}'\) such that \({\preccurlyeq}' \supseteq u_{\Phi}\). \end{proof}
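Two of the purely combinatorial requirements in condition \((iii)\), symmetry and the existence, for every triple, of a permutation with \(\Phi(x_{i_1}, x_{i_2}) = \Phi(x_{i_2}, x_{i_3})\), can be checked by brute force on a finite \(\Phi\). The three-point table below is an illustrative ultrametric-style example, not data from the text.

```python
from itertools import permutations

def symmetric(Phi, X):
    return all(Phi[(x, y)] == Phi[(y, x)] for x in X for y in X)

def isosceles_triples(Phi, X):
    # every triple admits a reordering (a, b, c) with
    # Phi(a, b) == Phi(b, c)
    return all(any(Phi[(a, b)] == Phi[(b, c)]
                   for (a, b, c) in permutations((x1, x2, x3)))
               for x1 in X for x2 in X for x3 in X)

# Illustrative table: value 0 on the diagonal, "uv" close, "uw" and
# "vw" far, so every triple is isosceles with the two largest sides equal.
X = ["u", "v", "w"]
Phi = {(a, b): 0 if a == b
       else {"uv": 1, "uw": 2, "vw": 2}["".join(sorted(a + b))]
       for a in X for b in X}

assert symmetric(Phi, X)
assert isosceles_triples(Phi, X)
print("ok")
```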
The proof of the following corollary is similar to the proof of Theorem~\ref{t4.20}.
\begin{corollary}\label{c4.22} Let \(X\) be a nonempty set and let \(\Phi\) be a mapping with \(\dom \Phi = X^{2}\). Then the following conditions are equivalent. \begin{enumerate} \item [\((i)\)] \(\Phi\) is combinatorially similar to an ultrametric. \item [\((ii)\)] There is \(b_0 \in \Phi(X^{2})\) such that \(\Phi^{-1}(b_0) = \Delta_{X}\), and the binary relation \[ {\preccurlyeq}_{\Phi} := u_{\Phi}^{t} \cup \Delta_{\Phi(X^{2})} \] is a partial order on \(\Phi(X^{2})\), and \(b_0\) is the smallest element of \((\Phi(X^{2}), {\preccurlyeq}_{\Phi})\), and \(\Phi\) is a \( {\preccurlyeq}_{\Phi}\)-ultra\-metric on \(X\), and there is a linear order \({\preccurlyeq}\) on \(\Phi(X^{2})\) such that \[ {\preccurlyeq}_{\Phi} \subseteq {\preccurlyeq} \] holds, and \((\Phi(X^{2}), {\preccurlyeq})\) is isomorphic to a subposet of \((\mathbb{R}^{+}, \leqslant)\). \item [\((iii)\)] The mapping \(\Phi\) is symmetric, and there is \(a_0 \in \Phi(X^{2})\) for which \(\Phi^{-1}(a_0) = \Delta_{X}\) holds, and, for every triple \(\<x_1, x_2, x_3>\) of points of \(X\), there is a permutation \[ \begin{pmatrix} x_1 & x_2 & x_3\\ x_{i_1} & x_{i_2} & x_{i_3} \end{pmatrix} \] such that \(\Phi(x_{i_1}, x_{i_2}) = \Phi(x_{i_2}, x_{i_3})\), and there is a linear order \({\preccurlyeq}\) on \(\Phi(X^{2})\) such that \(a_0\) is the smallest element of \((\Phi(X^{2}), {\preccurlyeq})\) and \(u_{\Phi} \subseteq {\preccurlyeq}\) holds, and \((\Phi(X^{2}), {\preccurlyeq})\) is isomorphic to a subposet of \((\mathbb{R}^{+}, \leqslant)\). \end{enumerate} \end{corollary}
In connection with Theorem \ref{t4.20} and Corollary \ref{c4.22}, the following problem naturally arises.
\begin{problem}\label{pr4.21} Describe (up to order-isomorphism) the partially ordered sets \((Q, {\preccurlyeq}_{Q})\) which admit extensions to totally ordered sets \((Q, {\preccurlyeq})\) such that \((Q, {\preccurlyeq})\) is order-isomorphic to a subposet of \((\mathbb{R}^{+}, \leqslant)\). \end{problem}
We do not discuss this problem in detail but formulate the following conjecture.
\begin{conjecture}\label{con4.24} The following conditions are equivalent. \begin{enumerate} \item [\((i)\)] A poset \((Q, {\preccurlyeq}_{Q})\) admits an extension to a totally ordered set \((Q, {\preccurlyeq})\) such that \((Q, {\preccurlyeq})\) is order-isomorphic to a subposet of \((\mathbb{R}^{+}, \leqslant)\).
\item [\((ii)\)] The inequality \(|Q| \leqslant 2^{\aleph_{0}}\) holds and every totally ordered subposet of \((Q, {\preccurlyeq}_{Q})\) can be embedded into \((\mathbb{R}^{+}, \leqslant)\). \end{enumerate} \end{conjecture}
\end{document} |
\begin{document}
\renewcommand\Authfont{\small} \renewcommand\Affilfont{\itshape\footnotesize} \title{Kinematic and Dynamic Vortices in a Thin Film Driven by an Applied Current and Magnetic Field}
\author[1]{Lydia Peres Hari\footnote{ lydia@fermat.technion.ac.il}} \author[1]{Jacob Rubinstein\footnote{koby@techunix.technion.ac.il}} \author[2]{Peter Sternberg\footnote{Corresponding author, sternber@indiana.edu}} \affil[1]{Department of Mathematics, Israel Institute of Technology, Haifa 32000, Israel} \affil[2]{Department of Mathematics, Indiana University, Bloomington, IN 47405}
\maketitle \begin{abstract} Using a Ginzburg-Landau model, we study the vortex behavior of a rectangular thin film superconductor subjected to an applied current fed into a portion of the sides and an applied magnetic field directed orthogonal to the film.
Through a center manifold reduction we develop a rigorous bifurcation theory for the appearance of periodic solutions in certain parameter regimes near the normal state. The leading order dynamics yield in particular a motion law for kinematic vortices moving up and down the center line of the sample. We also present computations that reveal the co-existence and periodic evolution of kinematic and magnetic vortices. \end{abstract} \vskip.2in \noindent {\bf Keywords:} Ginzburg-Landau, electric current, magnetic field, kinematic vortex
\section{Introduction}
We consider a thin superconducting sample occupying a rectangle. The sample is subjected to normal electric current that enters through a lead on one side and leaves through another lead at the opposite side. In addition the sample is subjected to a magnetic field oriented in the direction perpendicular to the rectangle's plane. Our main interest is in the regime in the parameter space where different physical quantities, such as the total current in the sample and the order parameter, are time-periodic. We pay special attention to the formation and motion of vortices in the sample.
The problem above is a natural generalization of the simpler case of a finite superconducting one-dimensional wire subjected to normal current that is fed into one of its endpoints. This problem has received considerable attention since it is a canonical case of co-existence of normal current and superconducting current. Moreover, it is known that in this setting there exists a regime of prescribed current $I$ and temperature $T$, where the sample's behavior is time-periodic. In addition, in this regime the superconducting order parameter $\psi(x,t)$ vanishes at the wire's center at specific points in time that are separated by a fixed period. Such zeros of $\psi$ are called phase slip centers (PSC). These phenomena and others have been studied, numerically and experimentally, by a number of authors including \cite{krba,krwa,laam}, and many of the results are summarized by Ivlev and Kopnin \cite{ivko}.
A recent study in \cite{rsm} and \cite{rsz} presents a comprehensive theory that explains the different patterns observed in this wire setting. The key idea is that the underlying system of equations enjoys a PT symmetry, that is, symmetry under complex conjugation and the transformation $x \rightarrow -x$. This symmetry enabled the authors to perform a rigorous bifurcation study of the problem, and to deduce that, under certain conditions, the order parameter bifurcates, as the temperature is lowered beyond a critical value, from the normal state $\psi \equiv 0$ to a nontrivial state. Moreover, this bifurcation is shown to be of Hopf type, and this explains the periodic nature of the solution and the periodic appearance of isolated zeros of $\psi$.
In the present study we look at a more realistic geometry of a finite strip. Moreover, we consider not just forced electric current, but also the effect of an external magnetic field. The problem is analyzed numerically in \cite{bmp}. The authors observe, just as in the one-dimensional setting, a periodic behavior of a number of physical quantities, including periodic appearance and motion of vortices. Therefore, our goal is to derive a theory that explains the observed patterns and vortex motion.
The issue of vortices is of particular interest. They are defined as isolated zeros of the order parameter, and they are characterized by their topological degree in the $(x,y)$ plane. The appearance of vortices in superconducting samples subjected to an applied magnetic field is of course well-known. Therefore, we expect them in our setting even in the absence of forced electric currents. What makes the present problem interesting is that the opposite is also true: vortices form, for an appropriate range of values of $I$ and $T$, even if no magnetic field is applied. Therefore, one can classify the vortices here into magnetic vortices, generated by the applied magnetic field in the absence of any applied current, and kinematic vortices, generated, as will be shown below, by the forced electric field and the special symmetry of the problem in the absence of any magnetic field.
When both applied magnetic and electric fields are present, it is not as clear how to make a distinction between the two kinds of vortices. As will be shown below, the symmetry of the problem implies that some vortices are formed and move time-periodically on the center line of the rectangle for large enough $I$ and for a range of applied magnetic fields $h$. We term them kinematic vortices. In certain cases, as presented below, these kinematic vortices collide and move off the center line. We term such vortices, ``born'' from kinematic vortices, `kinematic' as well.
Other vortex phenomena, not usually observed in more standard Ginzburg-Landau settings, include the time-periodic emergence of vortex pairs that are of opposite degrees, which we term vortex/anti-vortex pairs: typically, stable configurations of Ginzburg-Landau vortices involve only vortices of the same degree, but in this periodic setting that is not always the case. We also show that for some range of $I$ and $h$, vortices of {\em the same degree} move towards each other and collide before moving away from each other. Again, this is a process that is atypical of more familiar Ginzburg-Landau settings.
The analysis of the present problem follows to some extent the lines of the one-dimensional wire problem. Namely, we construct a proper framework that enables us to use the Center Manifold Theorem to study the bifurcation picture, and thereby establish the existence of a Hopf bifurcation. Moreover, in both cases a key role is played by the spectrum of the underlying linear Schr\"odinger operator. However, there are a few important differences between the one-dimensional case and the problem considered here. First, the construction of the center manifold requires certain a priori estimates on the solutions to the underlying differential equations. These estimates are harder to obtain in the two-dimensional setting. A second important difference relates to the vortex motion. While in the one-dimensional case the PSCs just appear momentarily at a fixed periodic sequence of instances, the kinematic vortices in the two-dimensional problem are present for periodic finite time intervals. Moreover, they tend to move along or near the $y$-axis (which is the center line of the rectangle), as we shall show.
It is interesting to note that implications of PT-symmetry seem to appear in a number of quite different physical problems. For instance, we mention applications to quantum mechanics \cite{bebo}, \cite{cdv}, to hydrodynamic instability \cite{shk1}, \cite{shk2}, and to optics \cite{zia1}, \cite{zia2}. We also mention several recent rigorous studies within Ginzburg-Landau theory that incorporate magnetic effects along with applied currents in a variety of asymptotic regimes, \cite{Almog,AHP,dwz,ST,Tice}. One aspect of our investigation that distinguishes it from others, however, is that it is to our knowledge the first to capture a motion law for Ginzburg-Landau vortices that is not based on the assumption of large Ginzburg-Landau parameter.
In the next section we formulate the problem and the underlying equations. The bifurcation analysis is performed in section 3. In particular we establish there the existence of a center manifold for a certain regime in the $(I,T)$ plane. In section 4 we consider the formation of vortices and their motion and discuss some computational work on the problem.
\label{intro} \section{Formulation of the Problem} We consider a superconducting material occupying a thin rectangular box with dimensions $-L<x<L,\;-K<y<K$ and say $0<z<\eta$ where $\eta$ is assumed to be much smaller than the coherence length or penetration depth, allowing us to work within the thin film 2d approximation of Ginzburg-Landau. In this approximation, we take the complex-valued order parameter $\Psi=\Psi(x,y,t)$ and the real-valued electric potential $\phi=\phi(x,y,t)$ to be defined on $\mathcal{R}\times[0,\infty)$ where we denote $\mathcal{R}:=[-L,L]\times[-K,K]$ and we ignore any induced magnetic field. Within this rectangular geometry, we assume the presence of leads forcing in electric current of magnitude $I$ through the sides $x=\pm L$, along the subinterval $-\delta<y<\delta$ for some positive $\delta<K.$ Additionally we assume that the thin film is subjected to an applied magnetic field of size $h$ oriented perpendicular to the rectangular cross-section. See Figure \ref{rectangle}.
\begin{figure}
\caption{A thin film superconductor subjected to applied current and magnetic field.}
\label{rectangle}
\end{figure}
Introducing the applied magnetic potential $A_0:=(-y,0)$, this applied field is then given by $h\nabla\times A_0$ and the thin film limit of Ginzburg-Landau takes the nondimensionalized form \begin{eqnarray} & \Psi_t+i\phi\Psi=\left(\nabla-ihA_0\right)^2\Psi+(\Gamma-\abs{\Psi}^2)\Psi\quad\mbox{for}\;(x,y)\in\mathcal{R},\;t>0,\label{psieqn}\\ &\Delta \phi=\nabla\cdot\bigg(\frac{i}{2}\{\Psi\nabla\Psi^*-\Psi^*\nabla \Psi\}-\abs{\Psi}^2hA_0\bigg)\quad\mbox{for}\;(x,y)\in\mathcal{R},\;t>0,\label{phieqn} \end{eqnarray} subject to the boundary conditions \begin{eqnarray} &\Psi(\pm L,y,t)=0\;\mbox{for}\;\abs{y}<\delta,\label{bc1}\\ &\Psi_x(\pm L,y,t)+ihy\Psi(\pm L,y,t)=0\;\mbox{for}\;\delta<\abs{y}\leqslant K,\label{bc2}\\ &\Psi_y(x,\pm K,t)=0\;\mbox{for}\;\abs{x}\leqslant L,\label{bc4}\\ &\phi_x(\pm L,y,t)=\left\{\begin{matrix}-I&\;\mbox{for}\;\abs{y}<\delta,\\ 0&\;\mbox{for}\;\delta<\abs{y}<K\end{matrix}\right.\label{bc5}\\ &\phi_y(x,\pm K,t)=0\;\mbox{for}\;\abs{x}\leqslant L.\label{bc6} \end{eqnarray} along with the initial condition $\Psi(x,y,0)=\Psi_{init}(x,y).$ As a convenient normalization we take $\int_{\mathcal{R}}\phi=0.$ The system of equations above is collectively known as the Time-Dependent Ginzburg-Landau (TDGL) model.
The parameter $\Gamma$ is proportional to $T-T_c$ where $T$ is temperature and $T_c$ is the critical temperature below which the normal (zero) state loses stability in the absence of any applied fields. Note that \eqref{phieqn} is simply the requirement of conservation of total (normal $+$ superconducting) current. Also, we remark that the boundary conditions \eqref{bc2}-\eqref{bc4} are the standard superconductor/vacuum conditions on $\Psi$, while $\eqref{bc1}$ reflects the presence of the normal leads.
There are numerous investigations of current-driven superconducting wires and thin films that utilize a model based on Ginzburg-Landau theory, and these have generally reported reasonable agreement between theory and experiment. Regarding \eqref{psieqn}, we should mention that some studies on this problem such as \cite{bepv,bmp,mmpvp} replace the left-hand side with the modification \[\frac{u}{\sqrt{1+\gamma^2\abs{\Psi}^2}}\bigg(\frac{\partial}{\partial t}+i\phi+\frac{\gamma^2}{2}\frac{\partial\abs{\Psi}^2}{\partial t}\bigg)\Psi,\] where $u$ and $\gamma$ are material parameters. However, we believe that the standard and simpler evolution equation \eqref{psieqn} corresponding to the choices $u=1$ and $\gamma=0$ captures the main features of the problem, an opinion shared by the authors of \cite{agkns} whose computational comparisons suggest that the modification does not have a large effect in this setting. What is more, in what follows, we shall concentrate on bifurcation from the normal state $\Psi\equiv 0$, so the smallness of the amplitude should mute the effect of this modification even more.
\section{Analysis of the model}
For a given $\Psi$, and a given value $h\geqslant 0$, let us decompose the solution $\phi$ to \eqref{phieqn}, \eqref{bc5}, \eqref{bc6} as \[\phi=I\phi^0+\tilde{\phi}\] where $\phi^0$ is harmonic and satisfies the boundary conditions \begin{eqnarray} &\phi^0_x(\pm L,y)=\left\{\begin{matrix}-1&\;\mbox{for}\;\abs{y}<\delta,\\ 0&\;\mbox{for}\;\delta<\abs{y}<K,\end{matrix}\right.\label{bcx}\\ &\phi^0_y(x,\pm K)=0\;\mbox{for}\;\abs{x}\leqslant L,\label{bcy} \end{eqnarray} while $\tilde{\phi}$ satisfies \eqref{phieqn} subject to homogeneous Neumann boundary conditions on all portions of the rectangular boundary. We note that $\tilde{\phi}$ depends on $\Psi$, so we will often write $\tilde{\phi}$ as $\tilde{\phi}[\Psi]$ to emphasize this dependence.
We will occasionally make use of the properties \begin{equation} \phi^0(-x,y)=-\phi^0(x,y)\quad\mbox{and}\quad \phi^0(x,-y)=\phi^0(x,y),\label{oddeven} \end{equation} which are easy to check.
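The check is short enough to record. The following sketch, which relies only on uniqueness for the Neumann problem, verifies the first identity in \eqref{oddeven}; the second follows the same way from the evenness in $y$ of the boundary data \eqref{bcx}-\eqref{bcy}.

```latex
% Sketch of the verification of \eqref{oddeven}.
% Set w(x,y) := -\phi^0(-x,y). Then w is harmonic, has mean zero, and
%   w_x(\pm L, y) = \phi^0_x(\mp L, y),
% which is -1 for |y| < \delta and 0 otherwise, i.e. the same data
% \eqref{bcx} that \phi^0 satisfies (the prescribed flux is identical at
% the two leads x = \pm L). Also w_y(x, \pm K) = -\phi^0_y(-x, \pm K) = 0,
% matching \eqref{bcy}. Uniqueness for the Neumann problem (modulo the
% mean-zero normalization) then forces w = \phi^0, that is,
\[
-\phi^0(-x,y)=\phi^0(x,y).
\]
% Repeating the argument with v(x,y) := \phi^0(x,-y), whose boundary data
% coincide with those of \phi^0 since \eqref{bcx} is even in y, gives
% \phi^0(x,-y) = \phi^0(x,y).
```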
The normal state in this setting corresponds to $\Psi\equiv 0$ and $\phi\equiv I\phi^0$. We will pursue a bifurcation analysis about this normal state, and therefore a crucial role will be played by the linear eigenvalue problem \begin{equation}
\mathcal{L}[u]:=\big(\nabla -ih A_0)^2u-iI\phi^0 u=-\lambda u\quad\mbox{for}\;\abs{x}<L,\;\abs{y}<K,\label{evpm} \end{equation} subject to the boundary conditions \eqref{bc1}--\eqref{bc4}. Though we do not indicate it in our notation, it is understood that $\mathcal{L}$ and therefore all of its eigenvalues and eigenfunctions depend on the parameters $L,K,\delta,h$ and $I$. We summarize below the key properties of the corresponding eigenvalues and eigenfunctions that will be needed in the analysis to follow. \begin{lemma}\label{evproblem} The spectrum of $\mathcal{L}$ consists only of point spectrum, denoted by $\{\lambda_j\}$ with corresponding eigenfunctions $\{u_j\}.$ If $(\lambda_j,u_j)$ is an eigenpair satisfying \eqref{evpm} then \begin{equation} {\rm{Re}}\,\lambda_j>0,\quad\mbox{and}\quad \abs{{\rm{Im}}\,\lambda_j}< \norm{\phi^0}_{L^{\infty}}I.\label{ReIm}\end{equation} Thus, in particular we may order the eigenvalues $\lambda_1,\lambda_2,\ldots$ according to the size of their real part, with $0<{\rm{Re}}\,\lambda_1\leqslant {\rm{Re}}\,\lambda_2\leqslant \ldots.$ The PT-symmetry of the operator is reflected in the fact that if $(\lambda_j,u_j)$ is an eigenpair then so is $(\lambda^*_j,u_j^{\dagger})$ where $u_j^{\dagger}(x,y):=u_j^*(-x,y).$ \end{lemma}
When working with $u_1$ and $u_2$, we will choose the normalization \begin{equation} \int_{\mathcal{R}} u_1^2=1=\int_{\mathcal{R}} u_2^2. \label{normal1} \end{equation} Also, as a matter of convention, when $\lambda_1$ is non-real, we associate $u_1$ with the eigenvalue $\lambda_1$ having positive imaginary part, and $u_2$ with $\lambda_2^*$.
\begin{proof} The fact that the spectrum consists solely of eigenvalues follows from standard compact operator theory. The conditions in \eqref{ReIm} follow from multiplication of the equation $\mathcal{L}[u]=-\lambda u$ by $u^*$ and integration over the rectangle. This leads to the identity \begin{equation} \lambda=\frac{\int_{\mathcal{R}}\abs{\left(\nabla - i h A_0\right) u}^2}{\int_{\mathcal{R}}\abs{u}^2}+ iI\frac{\int_{\mathcal{R}} \phi^0 \abs{u}^2}{\int_{\mathcal{R}} \abs{u}^2},\label{lamid} \end{equation} implying \eqref{ReIm}.
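In slightly more detail, the implication from \eqref{lamid} to \eqref{ReIm} can be spelled out as follows (a sketch):

```latex
% Real and imaginary parts of \eqref{lamid}:
\[
{\rm{Re}}\,\lambda
 =\frac{\int_{\mathcal{R}}\abs{\left(\nabla-ihA_0\right)u}^2}
       {\int_{\mathcal{R}}\abs{u}^2}\geqslant 0,
\qquad
\abs{{\rm{Im}}\,\lambda}
 =I\,\frac{\abs{\int_{\mathcal{R}}\phi^0\abs{u}^2}}
          {\int_{\mathcal{R}}\abs{u}^2}
 \leqslant\norm{\phi^0}_{L^{\infty}}\,I.
\]
% The real part is strictly positive: were the numerator to vanish, then
% (\nabla - ihA_0)u \equiv 0, so \abs{u} would be a nonzero constant on
% \mathcal{R}, contradicting the Dirichlet condition \eqref{bc1} at the
% leads. Strictness for the imaginary part follows since \phi^0 is
% non-constant.
```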
The final claim of the lemma follows by noting that $\mathcal{L}[u^{\dagger}]=\mathcal{L}[u].$ \end{proof}
A numerical analysis of the eigenvalue problem \eqref{evpm} indicates that, fixing all other parameters (i.e. $K,L,\delta$ and $h$) and varying only the current $I$, there exists a critical value $I_c$, depending on these other parameters, such that $\lambda_1$ is real for $I\leqslant I_c$ and non-real for $I>I_c$. This complexification of $\lambda_1$ arises through a collision with another eigenvalue: the two eigenvalues emerge from this collision, that is, for $I>I_c$, as a complex conjugate pair. Examples of eigenvalue collisions are shown in Figures \ref{eig_val}a and \ref{eig_val}b. In both cases the simulated geometry is $L=1, K=2/3, \delta = 4/15$. In Figure \ref{eig_val}a we took $h=0$, while in Figure \ref{eig_val}b we took $h=7.5$.
\begin{figure}
\caption{The real part of the spectrum of $\mathcal{L}$. The first 4 eigenvalues are drawn for the parameters $L=1, K=2/3, \delta = 1/6$. In (a) $h=0$, while in (b) $h=7.5$}
\label{aaa1}
\label{aaa2}
\label{eig_val}
\end{figure}
We note in passing that in light of \eqref{oddeven} and \eqref{lamid}, the evenness of $\abs{u_j}$ evidently implies reality of $\lambda_j$ so the passage to complex eigenvalues with increased values of $I$ carries with it a certain symmetry breaking of the corresponding eigenfunction. This phenomenon has been established rigorously for the one-dimensional version of \eqref{evpm}--that is, for the problem depending only on $x$ and with $h=0$ in \cite{shk1,shk2}. The reality of the spectrum for $I$ positive and sufficiently small in this two-dimensional setting should follow by the type of perturbation analysis to be found in \cite{cgs}, since for $I=0$, the spectrum is clearly real, cf. \eqref{ReIm}. However, since here we are interested in the regime where the first eigenvalue is complex, we do not pursue this point further.
Regarding eigenvalue collisions, we also wish to note another new phenomenon for this two-dimensional problem with the incorporation of magnetic effects, one that is not observed for the one-dimensional problem. Our computations reveal that for certain large enough values of the applied magnetic field $h$, as $I$ increases through the regime $0<I<I_c$, the second and third (still real) eigenvalues pass through each other, and it is ultimately what was originally labeled as the third eigenvalue that collides with $\lambda_1$ at $I_c$. We say the third eigenvalue ``passes through'' the second rather than ``collides'' with it because as $I$ varies through the point where $\lambda_2=\lambda_3$, both eigenvalues remain simple and the corresponding eigenfunctions vary smoothly without incident. This, for example, is the scenario for the parameter values $L=1, K=2/3, \delta = 4/15, h=20, I=25$. See Figure \ref{var_beta1}. We will return to this set of parameter values at the end of the article to discuss anomalous vortex behavior as well.
\begin{figure}
\caption{The real parts of the four leading eigenvalues for the parameter values $L=1, K=2/3, \delta = 4/15, h=20, I=25$. Note how $\lambda_3$ passes through $\lambda_2$ and then collides with $\lambda_1$ at $I=I_c\approx 20$. }
\label{var_beta1}
\end{figure}
At this point we fix any values of $K,L,\delta,h$ and then pick $I$ sufficiently large so that ${\rm{Im}}\,\lambda_1\not= 0$. We then consider our problem \eqref{psieqn}--\eqref{bc6} with $\Gamma$ given by $\Gamma={\rm{Re}}\,\lambda_1+\varepsilon$ where $\varepsilon$ is small and positive. Note that given the temperature dependence of $\Gamma$, this corresponds to lowering the temperature just below the value where the normal state becomes linearly unstable.
We now introduce the linear operator $\mathcal{L}_1[u]:=\mathcal{L}[u]+({\rm{Re}}\,\lambda_1)\, u$. Recalling Lemma \ref{evproblem} in the scenario where $\lambda_1$ is not real, the corresponding first eigenfunctions, say $u_1$ and $u_2$, satisfy \begin{equation} u_2=u_1^{\dagger}\quad \mbox{with}\quad\mathcal{L}_1[u_1]=-i\,{\rm{Im}}\,\lambda_1\,u_1,\quad \mathcal{L}_1[u_2]=i\,{\rm{Im}}\,\lambda_1\,u_2. \label{u1u2}\end{equation} For later use, we also note that if we introduce the adjoint operator $\mathcal{L}_1^{\star}$ satisfying \[ \int_{\mathcal{R}} \mathcal{L}_1[u]v=\int_{\mathcal{R}} \mathcal{L}_1^{\star}[v]u,\] then we can readily identify $\mathcal{L}_1^{\star}[v]$ as simply the operator given by $\big(\nabla +ih A_0)^2v-iI\phi^0v+({\rm{Re}}\,\lambda_1)\,v$. Furthermore, the fact that $\phi^0$ is even in $y$ (cf. \eqref{oddeven}) reveals that the functions $u^{\star}_j(x,y):=u_j(x,-y)$ for $j=1,2$ satisfy the equations \[ \mathcal{L}_1^{\star}[u^{\star}_1]=-i\,{\rm{Im}}\,\lambda_1\,u^{\star}_1,\quad \mathcal{L}_1^{\star}[u^{\star}_2]=i\,{\rm{Im}}\,\lambda_1\,u^{\star}_2.\] Consequently, we see that \begin{equation*} -i\,{\rm{Im}}\,\lambda_1\int_{\mathcal{R}} u^{\star}_1u_2=\int_{\mathcal{R}} \mathcal{L}_1^{\star}[u^{\star}_1]u_2=\int_{\mathcal{R}} \mathcal{L}_1[u_2]u^{\star}_1=i{\rm{Im}}\,\lambda_1\int_{\mathcal{R}} u^{\star}_1u_2,\end{equation*} with a similar relation holding between $u^{\star}_2$ and $u_1$. Hence, $\int_{\mathcal{R}} u^{\star}_1u_2=0=\int_{\mathcal{R}} u^{\star}_2u_1.$ Clearly, this orthogonality holds between any two eigenfunctions $u_j$ and $u_k$ of the operator $\mathcal{L}_1$ corresponding to distinct eigenvalues, namely \begin{equation} \int_{\mathcal{R}} u^{\star}_ju_k=0\quad\mbox{for}\;j\not=k.\label{orthog} \end{equation}
Of course, in the special case of no applied magnetic field, i.e. $h=0$, we have $\mathcal{L}=\mathcal{L}^{\star}$ and $u_1=u^{\star}_1$ and through \eqref{oddeven} we see then that each eigenfunction is even in $y$.
Let us now return to \eqref{psieqn}-\eqref{phieqn} with the choice $\Gamma={\rm{Re}}\,\lambda_1+\varepsilon$ and re-express the system as a single nonlinear, non-local equation for the complex-valued order parameter $\Psi$: \begin{equation} \Psi_t=\mathcal{L}_1[\Psi]+\varepsilon \Psi+\mathcal{N}(\Psi),\label{pde}\end{equation} where \begin{equation} \mathcal{N}(\Psi):=-\abs{\Psi}^2\Psi-i\tilde{\phi}[\Psi]\Psi.\label{nonlinear} \end{equation}
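Unpacking the definitions confirms that \eqref{pde} is just \eqref{psieqn} rewritten; with $\Gamma={\rm{Re}}\,\lambda_1+\varepsilon$ and $\phi=I\phi^0+\tilde{\phi}[\Psi]$ one has (a routine check):

```latex
\begin{eqnarray*}
\mathcal{L}_1[\Psi]+\varepsilon\Psi+\mathcal{N}(\Psi)
&=&\left(\nabla-ihA_0\right)^2\Psi-iI\phi^0\Psi
   +({\rm{Re}}\,\lambda_1+\varepsilon)\Psi
   -\abs{\Psi}^2\Psi-i\tilde{\phi}[\Psi]\Psi\\
&=&\left(\nabla-ihA_0\right)^2\Psi+(\Gamma-\abs{\Psi}^2)\Psi-i\phi\Psi,
\end{eqnarray*}
% so \Psi_t = \mathcal{L}_1[\Psi] + \varepsilon\Psi + \mathcal{N}(\Psi) is
% precisely \eqref{psieqn} with the electric potential term i\phi\Psi
% moved to the right-hand side.
```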
When expressed in this form, the problem can be rigorously solved for $\varepsilon\ll 1$ via a center manifold reduction. The analysis is similar to that carried out for the one-dimensional (thin wire) problem in \cite{rsz}, Prop. 6.8, so we will only mention the key steps and those parts of the calculation where there are changes.
As regards the linear part of \eqref{pde}, the key point is that the operator $-\mathcal{L}_1$ is sectorial, in light of \eqref{ReIm}, cf. \cite{Henry}. Regarding estimates on the cubic, nonlocal nonlinearity $\mathcal{N}$, the analysis differs from that in \cite{rsz} in that for the one-dimensional problem it is easy to check that $\mathcal{N}$ is a bounded map from $H^1$ to $H^1$ (cf. \cite{rsz}, Lemma 6.3), while in the present two-dimensional setting this is no longer true--it just barely misses. One way to overcome this obstacle is by viewing $\mathcal{N}$ as a mapping from an interpolation space between $H^1$ and $H^2$ into $L^2$. The details of pursuing this strategy can be found in section 3.4 of \cite{mw}, where the authors execute a center manifold construction relevant to bifurcation from the normal state without applied electric current. Alternatively, one can view $\mathcal{N}$ as a map from $H^2(\mathcal{R})$ into $H^1(\mathcal{R})$. We describe how to make the necessary estimates for this latter approach.
Writing $\mathcal{N}=\mathcal{N}_1+\mathcal{N}_2$ with $\mathcal{N}_1(\Psi):=-\abs{\Psi}^2\Psi$ and $\mathcal{N}_2(\Psi):=-i\tilde{\phi}[\Psi]\Psi,$ it is an easy application of H\"older's inequality to make the estimate \begin{equation} \norm{\mathcal{N}_1(\Psi)}_{H^1(\mathcal{R})}\leqslant C\norm{\Psi}^3_{H^2(\mathcal{R})}. \label{N1est} \end{equation} To make a similar estimate on $\mathcal{N}_2$, let us first write the PDE coming from \eqref{phieqn} for $\tilde{\phi}(\Psi)$ as \begin{equation} \Delta \tilde{\phi}=\nabla\cdot \bigg(j(\Psi)-\abs{\Psi}^2A_0\bigg),\label{newtilde}\end{equation} subject to homogeneous Neumann conditions and the normalization condition of mean zero, where we have introduced the notation $j(\Psi):=\frac{i}{2}\{\Psi\nabla\Psi^*-\Psi^*\nabla \Psi\}.$ Fixing any $p\in (1,2)$ we may use H\"older's inequality and Sobolev imbedding to make the estimate \begin{eqnarray*} && \int_{\mathcal{R}} \abs{Dj(\Psi)}^p\leqslant C\int_{\mathcal{R}} \bigg(\abs{\Psi}^p\abs{D^2\Psi}^p+\abs{D\Psi}^{2p}\bigg)\\
&& \leqslant C\bigg(\norm{\Psi}^p_{H^2(\mathcal{R})}\norm{\Psi}^p_{L^{2p/(2-p)}(\mathcal{R})}+\norm{\Psi}^{2p}_{W^{1,2p}(\mathcal{R})}\bigg)\leqslant
C\norm{\Psi}^{2p}_{H^2(\mathcal{R})}. \end{eqnarray*} Hence, $\norm{Dj(\Psi)}_{L^p(\mathcal{R})}\leqslant C\norm{\Psi}^2_{H^2(\mathcal{R})}.$ It is also easy to estimate \[\norm{j(\Psi)}_{L^p(\mathcal{R})}+\norm{\abs{\Psi}^2A_0}_{W^{1,p}(\mathcal{R})}\leqslant C\norm{\Psi}^2_{H^2(\mathcal{R})},\] so we conclude that \[\norm{\nabla\cdot\big(j(\Psi)-\abs{\Psi}^2A_0\big)}_{L^{p}(\mathcal{R})}\leqslant C\norm{\Psi}^2_{H^2(\mathcal{R})}.\] Then appealing to the Calderon-Zygmund inequality (cf. \cite{W}, Chapter 2), equation \eqref{newtilde} implies that \[ \norm{\tilde{\phi}(\Psi)}_{W^{2,p}(\mathcal{R})}\leqslant C\norm{\Psi}^2_{H^2(\mathcal{R})}.\] From the Sobolev imbedding theorem and H\"older's inequality it then follows easily that $\norm{\mathcal{N}_2(\Psi)}_{H^1(\mathcal{R})}\leqslant C\norm{\Psi}^3_{H^2(\mathcal{R})}$. Combining this last estimate with \eqref{N1est} we arrive at the estimate on the nonlinearity: \begin{equation} \norm{\mathcal{N}(\Psi)}_{H^1(\mathcal{R})}\leqslant C\norm{\Psi}^3_{H^2(\mathcal{R})}.\label{nonlinearest} \end{equation} The upshot is that for each small $\varepsilon$, one can construct a center manifold $\mathcal{M}_\varepsilon$ as a graph $v\mapsto\Phi(v,\varepsilon)$ in $H^2({\mathcal{R}})$ over the center subspace $\mathcal{S}:={\rm{span}}\,\{u_1,u_2\}$, applying for example, the version of the Center Manifold Theorem to be found in \cite{HI}, Theorem 2.9. More precisely, there exist positive constants $\delta_0$ and $\varepsilon_0$, such that for any $\varepsilon$ satisfying $\abs{\varepsilon}\leqslant \varepsilon_0$ one can define $ \mathcal{M}_{\varepsilon}:=\{\Phi(v,\varepsilon):\,v\in \mathcal{S},\;\norm{v}_{H^2(\mathcal{R})}<\delta_0\},$ and $\mathcal{M}_\varepsilon$ enjoys the following properties:\\
\noindent (i) The center manifold is locally invariant for the flow \eqref{pde} in the sense that if $\abs{\varepsilon}<\varepsilon_0$ and the initial data $\psi_0$ lies on $\mathcal{M}_\varepsilon$, then so does the solution $\psi^{\varepsilon}$ to \eqref{pde} so long as $\norm{\psi^\varepsilon(\cdot,t)}_{H^2(\mathcal{R})}$ stays sufficiently small. Hence, for such initial data, one can describe the resulting solution $\psi^{\varepsilon}(t)=\psi^{\varepsilon}(\cdot,t)$ through two maps $\alpha^{\varepsilon}_1,\,\alpha^{\varepsilon}_2:[0,\infty)\to\mathbb{C}$ via $\psi^{\varepsilon}(t)=\Phi(\alpha^{\varepsilon}_1(t)u_1+\alpha^{\varepsilon}_2(t)u_2,\varepsilon).$ Since $u_1$ and $u_2$ are fixed throughout, we will often write simply \begin{equation}\psi^{\varepsilon}(t)=\Phi(\alpha^{\varepsilon}_1(t),\alpha^{\varepsilon}_2(t),\varepsilon)\label{cmr}\end{equation} for the sake of brevity.
\noindent (ii) The center manifold is PT-symmetric, i.e. $\psi\in\mathcal{M}_\varepsilon\implies\;\psi^{\dagger}\in \mathcal{M}_\varepsilon$. Furthermore, if $v=v^{\dagger}$, then $\Phi(v,\varepsilon)=\Phi^{\dagger}(v,\varepsilon).$ Since $u_2=u_1^{\dagger}$, this implies that if $\alpha^{\varepsilon}_2(0)=\alpha^{\varepsilon}_1(0)^*$, and one solves \eqref{pde} subject to the PT-symmetric initial conditions $\psi^{\varepsilon}(0)=\Phi(\alpha^{\varepsilon}_1(0),\alpha^{\varepsilon}_2(0),\varepsilon)$, then the resulting functions $\alpha^{\varepsilon}_1(t)$ and $\alpha^{\varepsilon}_2(t)$ describing the solution $\psi^{\varepsilon}$ at any positive time $t$ will remain complex conjugates. \\
\noindent (iii) $\mathcal{M}_\varepsilon$ contains all nearby bounded solutions of \eqref{pde}, and in particular, it contains any nearby steady-state or time-periodic solutions.\\
\noindent (iv) Through an appeal to \eqref{nonlinearest}, the discrepancy between the center manifold and the center subspace can be expressed through the estimate \begin{equation} \norm{\Phi(v,\varepsilon)-v}_{H^2(\mathcal{R})} \leqslant C_1\left(\norm{v}_{H^2(\mathcal{R})}^3+\abs{\varepsilon}\norm{v}_{H^2(\mathcal{R})}\right)\label{verytang} \end{equation} which holds for any pair $(v,\varepsilon)$ such that $v\in \mathcal{S}$ with $\norm{v}_{H^2(\mathcal{R})}<\delta_0$ and $\abs{\varepsilon}<\varepsilon_0$, where $C_1$ is a positive constant independent of $v$ and $\varepsilon$.
Armed with these properties of the center manifold, one can then fix any sufficiently small complex numbers $\alpha^{\varepsilon}_1(0)$ and $\alpha^{\varepsilon}_2(0)$ as in (i) above, solve \eqref{pde} and then use the projection $\Pi_c$ onto the center subspace $\mathcal{S}$ to obtain a reduced system of O.D.E.'s governing the evolution of $\alpha^{\varepsilon}_1$ and $\alpha^{\varepsilon}_2.$ In light of \eqref{orthog} we can write an arbitrary function $f$ as $f=c_1u_1+c_2u_2+u^{\perp}$ where $u^{\perp}\in \big({\rm{span}}\,\{u^{\star}_1,u^{\star}_2\}\big)^{\perp}$. Hence the projection onto the center subspace of an arbitrary function $f$ is given by \[ \Pi_c(f)=\bigg(\frac{\int_{\mathcal{R}} u^{\star}_1f}{\int_{\mathcal{R}} u^{\star}_1u_1}\bigg)u_1+\bigg(\frac{\int_{\mathcal{R}} u^{\star}_2f}{\int_{\mathcal{R}} u^{\star}_2u_2}\bigg)u_2.\] Thus, substituting the reduction \eqref{cmr} into \eqref{pde} and projecting, we obtain the following system through the use of \eqref{u1u2}: \begin{eqnarray} &&\dot{\alpha}^{\varepsilon}_1u_1+\dot{\alpha}^{\varepsilon}_2u_2=(\varepsilon-i\,{\rm{Im}}\,\lambda_1)\alpha^{\varepsilon}_1u_1+(\varepsilon+i\,{\rm{Im}}\,\lambda_1)\alpha^{\varepsilon}_2u_2\nonumber\\ &&+\Pi_c\bigg(\mathcal{N}(\alpha^{\varepsilon}_1u_1+\alpha^{\varepsilon}_2 u_2)\bigg)+\bigg\{\Pi_c\bigg(\mathcal{N}(\Phi(\alpha^{\varepsilon}_1,\alpha^{\varepsilon}_2,\varepsilon))\bigg) -\Pi_c\bigg(\mathcal{N}(\alpha^{\varepsilon}_1u_1+\alpha^{\varepsilon}_2 u_2)\bigg)\bigg\},\nonumber\\ \label{reduced} \end{eqnarray} where $\dot{}$ denotes a time derivative.
Invoking \eqref{verytang}, one
finds that the last expression above involving the difference of nonlinear terms is lower order so one is justified
in initially ignoring it, solving the resulting simplified
system of O.D.E.'s and then arguing that the behavior of solutions
persists for the full system \eqref{reduced}. Again the details of this portion of the
argument can be found in \cite{rsz}. In an abuse of notation, we will persist in using the
notation $\alpha^{\varepsilon}_j$ to denote the solution to the truncated system in which the last expression is dropped.
Now using \eqref{orthog}, we integrate first against $u^{\star}_1$ and then against $u^{\star}_2$ to arrive at the system: \begin{eqnarray} &&\dot{\alpha}^{\varepsilon}_1=(\varepsilon-i\,{\rm{Im}}\,\lambda_1)\alpha^{\varepsilon}_1 +\frac{\int_{\mathcal{R}} u^{\star}_1\,\mathcal{N}(\alpha^{\varepsilon}_1u_1+\alpha^{\varepsilon}_2 u_2)}{\int_{\mathcal{R}} u^{\star}_1u_1},\label{redone}\\ &&\dot{\alpha}^{\varepsilon}_2=(\varepsilon+i\,{\rm{Im}}\,\lambda_1)\alpha^{\varepsilon}_2 +\frac{\int_{\mathcal{R}} u^{\star}_2\,\mathcal{N}(\alpha^{\varepsilon}_1u_1+\alpha^{\varepsilon}_2 u_2)}{\int_{\mathcal{R}} u^{\star}_2u_2}.\label{redtwo} \end{eqnarray}
In order to argue that \eqref{redone}-\eqref{redtwo} exhibits a Hopf bifurcation to a periodic state, we now restrict the flow to the PT symmetric portion of the center subspace, and hence, in light of item (ii) above, to the PT symmetric subset of the center manifold. This amounts to the restriction $\alpha^{\varepsilon}_2=(\alpha^{\varepsilon}_1)^*$ and allows us to only work with \eqref{redone}. At this juncture, we make a change of variables of the form \[ a^{\varepsilon}:=\alpha^{\varepsilon}_1+c_1(\alpha^{\varepsilon}_1)^3+ c_2\abs{\alpha^{\varepsilon}_1}^2(\alpha^{\varepsilon}_1)^*+c_3((\alpha^{\varepsilon}_1)^*)^3, \] in order to convert the problem to its normal form, cf. \cite{HI}, chapter 3. Writing the cubic nonlinear term in \eqref{redone} as \[ n_1(\alpha^{\varepsilon}_1)^3+n_2\abs{\alpha^{\varepsilon}_1}^2(\alpha^{\varepsilon}_1)^*+ n_3((\alpha^{\varepsilon}_1)^*)^3+n_4\abs{\alpha^{\varepsilon}_1}^2\alpha^{\varepsilon}_1, \] a tedious but direct calculation yields that with the choices $c_1=\frac{n_1}{2{\rm{Im}}\,\lambda_1}\,i,\;c_2=-\frac{n_2}{2{\rm{Im}}\,\lambda_1}\,i,$ and $c_3=-\frac{n_3}{4{\rm{Im}}\,\lambda_1}\,i$, the new variable $a^{\varepsilon}$ satisfies the simpler differential equation \begin{equation} \dot{a}^{\varepsilon}=(\varepsilon-i\,{\rm{Im}}\,\lambda_1)a^{\varepsilon}+n_4\abs{a^{\varepsilon}}^2 a^{\varepsilon},\label{aeqn} \end{equation} with the coefficient $n_4$ given by \begin{equation} n_4=\frac{- \int_{\mathcal{R}}\big(\abs{u_1}^2 u_1u^{\star}_1+2\abs{u_2}^2 u_1u^{\star}_1\big)-i\int_{\mathcal{R}} \big((\phi_{11}+\phi_{22})u_1u^{\star}_1+\phi_{12}u^{\star}_1 u_2\big)}{\int_{\mathcal{R}} u_1u^{\star}_1}. \label{n4} \end{equation} Here we have introduced the notation $\phi_{ij}$ to denote the solution to the equation
\[\Delta \phi_{ij}=\nabla\cdot\big(\frac{i}{2}[u_i\nabla u_j^*-u_j^*\nabla u_i]-u_i^*u_jhA_0\big)\quad\mbox{for}\;i,j=1,2\] subject to homogeneous Neumann boundary conditions on $\partial\mathcal{R}$ and zero mean on $\mathcal{R}$, so that $\tilde{\phi}(a^{\varepsilon}u_1+(a^{\varepsilon})^*u_2)$ in the nonlocal contribution $\mathcal{N}_2$ to the nonlinearity $\mathcal{N}$ takes the form $\abs{a^{\varepsilon}}^2(\phi_{11}+\phi_{22})+(a^{\varepsilon})^2\phi_{21}+((a^{\varepsilon})^*)^2\phi_{12}.$
Provided ${\rm{Re}}\,n_4<0$, it is then easy to check from \eqref{aeqn} that the system undergoes a supercritical Hopf bifurcation to a periodic solution given by \begin{equation} a^{\varepsilon}(t)=\bigg(\frac{\varepsilon}{\abs{{\rm{Re}}\,n_4}}\bigg)^{1/2}e^{-i\big({\rm{Im}}\,\lambda_1+\gamma\varepsilon\big)t}\quad\mbox{where}\;\gamma:=\frac{{\rm{Im}}\,n_4}{{\rm{Re}}\,n_4}.\label{aepsform} \end{equation} We have verified the condition ${\rm{Re}}\,n_4<0$ numerically for a wide range of parameter values. See Figure \ref{n4graph}.
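The periodic solution \eqref{aepsform} can be read off from \eqref{aeqn} by passing to polar coordinates; the following sketch records the computation.

```latex
% Write a^{\varepsilon} = re^{i\theta}. Substituting into \eqref{aeqn} and
% separating modulus and phase gives
\[
\dot{r}=\varepsilon r+({\rm{Re}}\,n_4)\,r^3,
\qquad
\dot{\theta}=-{\rm{Im}}\,\lambda_1+({\rm{Im}}\,n_4)\,r^2.
\]
% Since {\rm{Re}}\,n_4 < 0, for \varepsilon > 0 the radial equation has the
% stable nonzero equilibrium
\[
r_{\varepsilon}=\bigg(\frac{\varepsilon}{\abs{{\rm{Re}}\,n_4}}\bigg)^{1/2},
\]
% on which the phase rotates at the constant rate
\[
\dot{\theta}
 =-{\rm{Im}}\,\lambda_1+{\rm{Im}}\,n_4\,\frac{\varepsilon}{\abs{{\rm{Re}}\,n_4}}
 =-\big({\rm{Im}}\,\lambda_1+\gamma\varepsilon\big),
\qquad
\gamma=\frac{{\rm{Im}}\,n_4}{{\rm{Re}}\,n_4},
\]
% using {\rm{Re}}\,n_4 = -\abs{{\rm{Re}}\,n_4}. This recovers \eqref{aepsform}.
```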
\begin{figure}\label{n4graph}
\end{figure}
Summarizing the analysis above, we have shown:\\ \begin{thm} \label{periodic} Fix a choice of parameters $K,L,\delta,h$ and $I$ such that the first eigenvalue of $\mathcal{L}$ in \eqref{evpm} satisfies ${\rm{Im}}\,\lambda_1\not=0$ and $n_4$ given by \eqref{n4} satisfies ${\rm{Re}}\,n_4<0$. Then taking $\Gamma={\rm{Re}}\,\lambda_1+\varepsilon$ in \eqref{psieqn}, there exists a value $\varepsilon_0>0$ such that for all positive $\varepsilon<\varepsilon_0$, the system \eqref{psieqn}-\eqref{bc6} undergoes a supercritical Hopf bifurcation to a periodic state $(\psi_{\varepsilon},\phi_{\varepsilon})$. Applying \eqref{verytang} to this solution, we see that \begin{equation} \norm{\psi_{\varepsilon}-\bigg(a^{\varepsilon}(t)u_1+a^{\varepsilon}(t)^*u_1^{\dag}\bigg)}_{H^2(\mathcal{R})}\leqslant C\varepsilon^{3/2}\label{psiest} \end{equation} with $a^{\varepsilon}$ given by \eqref{aepsform}. \end{thm} \begin{rmrk} \label{stability} Though we do not present the analysis here, one can show that in fact this periodic solution is asymptotically attracting. Since there is no unstable subspace associated with $\mathcal{L}_1$, nearby points off of the center manifold $\mathcal{M}_{\varepsilon}$ are exponentially attracted to $\mathcal{M}_{\varepsilon}$. Then one argues that nearby non-PT symmetric points on $\mathcal{M}_{\varepsilon}$ are attracted exponentially to the PT symmetric part of $\mathcal{M}_{\varepsilon}.$ This type of argument was carried out in \cite{rsz} and no doubt the same type of argument will work here though we have not checked the details. \end{rmrk} \begin{rmrk}\label{stationary} One can also carry out the bifurcation analysis in the regime where $\lambda_1$ is real, which in particular would correspond to parameter regimes where $I$ is sufficiently small. As our primary goal in this article is to address issues related to periodic phenomena raised in \cite{agkns}, \cite{bepv} and \cite{bmp} we did not pursue it here. 
In this case the center subspace is simply spanned by $u_1$ and a stable stationary state emerges for $\varepsilon>0$ of the form \begin{equation} \psi_{\varepsilon}\sim C\varepsilon^{1/2}u_1\label{statu} \end{equation} for some computable constant $C$. In the case where $h$ is positive and sufficiently large, while $\lambda_1$ is real, one expects this stationary state to have magnetic vortices. Such a result might be compared to the single vortex stationary solution found in \cite{dwz} for a similar model. \end{rmrk}
\section{Vortex formation}
As we mentioned in the Introduction, vortices form in this problem due to two separate effects. One type of vortex, that we term a magnetic vortex, is well-known.
Magnetic vortices form as a result of the applied magnetic field, and can appear even when $I=0$. We present two examples of such vortices in Figure \ref{magnetic}. In both cases we used a rectangle with parameters $L=1, K=2/3, \delta = 4/15$. In Figure \ref{magnetic}a the applied field is $h=15$, while in Figure \ref{magnetic}b the applied field is $h=17$. In both figures the current is $I=10$ which is below the critical current $I_c$ for this geometry and these values of $h$. Hence, the first eigenvalue of the linearized problem is real and so the center manifold here is one-dimensional. This is the regime discussed in Remark \ref{stationary}. There are other examples, not shown, where vortices form even when $I=0$.
\begin{figure}
\caption{Stationary magnetic vortices. The curves represent level sets of the phase of the first eigenfunction $u_1$. The geometry is $L=1, K=2/3, \delta = 4/15$. In (a) we used $h=15$, while in (b) we used $h=17$. In both cases we took $I=10$ which is below the critical value $I_c$, so that a stationary vortex solution emerges to the full problem \eqref{psieqn}-\eqref{bc6} with leading profile given by $u_1$, cf. Remark \ref{stationary}}
\label{bbb1}
\label{bbb2}
\label{magnetic}
\end{figure}
What makes the present problem unusual is the formation of {\em kinematic} vortices, that is, vortices that are created even in the absence of magnetic fields. Physically, these vortices, just as the magnetic vortices, are points in space-time where the order parameter vanishes, the order parameter has a nonzero degree around these points, and large phase gradients occur near them. The formation of such kinematic vortices was extensively studied by Berdiyorov et al. \cite{bmp} using numerical simulations of the TDGL equations. The authors report on an unusual effect, where vortices appear periodically in pairs along the center line $x=0$ of the rectangle, and move along it. They also found that, depending on the parameter values in the problem, vortices can either form at opposite sides of the boundary and annihilate inside, or else nucleate together at an internal point on the center line and move away from each other towards the boundary. When $h = 0$, both vortices appear at the same time and they vanish, either by crossing the boundary simultaneously, or by annihilating each other, symmetrically about the line $y=0$. When $h \neq 0$ the $y$-symmetry is broken: the creation of these kinematic vortices can take place at different times, and their motion is not symmetric with respect to the line $y=0$.
We will use the theory developed in the preceding section to give a simple explanation for the formation and motion of kinematic vortices. Our analysis also gives a simpler means to compute when and where they form. In fact, we derive an explicit equation of motion for the kinematic vortices, fully based on the leading eigenfunction of the operator $ \mathcal{L}$ defined in equation (\ref{evpm}). After deriving the equation of motion below, we demonstrate the different types of vortex creation and motion. One benefit of the new theory is that it allows us to easily detect additional types of kinematic vortex patterns, not observed in \cite{bmp}.
Mathematically, the formation and motion of the kinematic vortices are a consequence of the PT-symmetry of the problem. To see how they are created and move about, we consider the leading order term in the center manifold (cf. \eqref{psiest}): $$\psi = a^{\varepsilon}(t)u_1+a^{\varepsilon}(t)^*u_1^{\dag}.$$ It is convenient to introduce the notation $$a^{\varepsilon} = \xi e^{-i \chi t},\;\; u_1(0,y) = g(y) e^{i \beta(y)},$$ where $\xi$ and $\chi$ take the values provided in equation (\ref{aepsform}). Therefore, along the rectangle's central line the order parameter is given (at leading order) by \begin{equation} \psi(0,y,t) = 2 \xi g(y) \cos\left(-\chi t + \beta(y)\right). \label{v1} \end{equation} Hence, the order parameter vanishes on the central line $x=0$ whenever the equation \begin{equation} \chi t = \beta(y) +\pi/2 + n \pi,\;\; n=0, \pm 1, \pm 2, ...\label{v3} \end{equation} holds. Equation (\ref{v3}) is the equation of both motion {\em and} creation of kinematic vortices. \begin{rmrk}\label{normal2} In the computation performed in this section we replaced the normalization condition (\ref{normal1}) with the normalization $u_1(0,0)=1$. This condition has the advantage that for all parameters we have $\beta(0)=0$, and therefore it is graphically easy to compare different $\beta$ functions. \end{rmrk} The boundary conditions on $u_1$ imply $\beta'(\pm K)=0$. Therefore, three simple shapes for $\beta$ might be expected: upward hump, downward hump, or a monotone shape. However, our numerical study shows that, while indeed each of these shapes can occur, depending on the problem's parameters, other shapes are also present. Also, when $h=0$, the problem is symmetric with respect to the $y$ axis, and therefore $\beta$ is either an even function, as in Figure \ref{varbeta}a, or an odd function, if it is monotone. On the other hand, when the magnetic field is turned on, and $h \neq 0$, the symmetry of $\beta$ is broken.
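Once $\beta$ and $\chi$ are known, the vortex positions predicted by \eqref{v3} can be found numerically. The following Python sketch (with $\beta$, $\chi$ and $K$ as caller-supplied stand-ins, not the quantities computed from the eigenvalue problem) samples $[-K,K]$ for sign changes of $\beta(y)+\pi/2+n\pi-\chi t$ and bisects each bracketed root.

```python
import math

def vortex_positions(beta, chi, t, K, n_range=range(-3, 4), samples=2000):
    """Solve chi*t = beta(y) + pi/2 + n*pi for y in [-K, K].

    beta : phase of the leading eigenfunction along the center line
           (here an arbitrary stand-in callable).
    Returns the sorted list of roots, i.e. the kinematic vortex
    positions on the center line x = 0 at time t.
    """
    roots = []
    ys = [-K + 2 * K * i / samples for i in range(samples + 1)]
    for n in n_range:
        def f(y, n=n):
            return beta(y) + math.pi / 2 + n * math.pi - chi * t
        for y0, y1 in zip(ys[:-1], ys[1:]):
            if f(y0) == 0.0:
                roots.append(y0)
            elif f(y0) * f(y1) < 0:          # sign change -> bisect
                a, b = y0, y1
                for _ in range(60):
                    m = 0.5 * (a + b)
                    if f(a) * f(m) <= 0:
                        b = m
                    else:
                        a = m
                roots.append(0.5 * (a + b))
    return sorted(roots)
```

For a hump-shaped $\beta$ the number of roots changes by two as $\chi t$ crosses the value of $\beta+\pi/2 \pmod{\pi}$ at an interior extremum, which is precisely the pair creation and annihilation mechanism described in the cases below.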
\begin{figure}
\caption{Different shapes for the function $\beta(y)$. The geometry is $L=1, K=2/3, \delta = 4/15$. In (a) we set $h=0$, while in (b) we set $h=0.05$. }
\label{ccc1}
\label{ccc2}
\label{varbeta}
\end{figure}
To demonstrate some of the different possible shapes of $\beta(y)$ and their dependence on the parameters in the problem, we refer to Figure \ref{varbeta}. In all parts of this figure we took $L=1, K=2/3, \delta= 4/15$. In Figure \ref{varbeta}a we set $h=0$. Here $\beta$ appears as an even function of $y$. Note that it changes its concavity depending on the values of the given current $I$. In Figure \ref{varbeta}b we observe symmetry breaking in the shape of $\beta$ under the influence of a nonzero applied field $h=0.05$. In particular we point out that for some $I$ levels the function $\beta(y)$ becomes a monotone function. We are still pursuing an explanation for the dramatic change in $\beta$ as $I$ varies from 100 to 105.\\
We now analyze four examples to illustrate the effect of the shape of the function $\beta$ on vortex creation and motion.
\vskip 0.2cm \noindent {\bf Case 1:} We refer to Figure \ref{beta1}. We use the same geometric parameters as before, namely, $L=1, K=2/3, \delta= 4/15$. We also use $I=25, h=0.05$. The function $\beta$ is an asymmetric downward hump, as depicted in Figure \ref{beta1}a. The first solution of equation (\ref{v3}) occurs when $t$ is large enough so that $\chi t$ reaches $\beta(K) + \pi/2$ and a vortex emerges from the boundary. Then, as $t$ increases the solution moves to lower values of $y$. When $t$ is sufficiently large so that $\chi t = \beta(-K) + \pi/2$, a second vortex forms at the lower end $y=-K$. As $t$ grows further, both vortices move towards each other. When $t$ reaches the level where $\chi t = \beta(y_m) + \pi/2$, where $y_m$ is the location of the maximal point of $\beta$, the vortices collide and annihilate. This is an example of annihilation of a vortex/anti-vortex pair, where we use `anti-vortex' to refer to a vortex of negative degree. Then, there is a time interval where equation (\ref{v3}) does not hold for any $y$, and therefore there are no kinematic vortices at those times. This scenario repeats itself when $\chi t = \beta (-K) + 3\pi/2$ and so on. The creation, motion and annihilation of vortices are shown in Figure \ref{beta1}b for half of a single period.
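The event sequence of Case 1 can be read off from \eqref{v3} once the values $\beta(\pm K)$ and $\beta(y_m)$ are known. As a minimal numerical illustration, we take a made-up asymmetric hump for $\beta$ and a stand-in value for $\chi$ (the true profile and rotation rate must come from the eigenfunction $u_1$ and \eqref{aepsform}); the ordering of the three events is then automatic whenever $\beta(K) < \beta(-K) < \beta(y_m)$.

```python
import math

# Stand-ins for the Case 1 data: the true beta is the phase of the
# eigenfunction u_1 along x = 0 and chi comes from (aepsform).
K = 2.0 / 3.0
y_m = -0.2                      # hypothetical interior maximum of beta

def beta(y):                    # hypothetical phase profile
    return 1.0 - (y - y_m) ** 2

chi = 1.0                       # hypothetical rotation rate

# Event times read off from chi * t = beta(y) + pi/2 (the n = 0 branch):
t_enter_upper = (beta(K) + math.pi / 2) / chi    # vortex enters at y = K
t_enter_lower = (beta(-K) + math.pi / 2) / chi   # vortex enters at y = -K
t_annihilate = (beta(y_m) + math.pi / 2) / chi   # pair annihilates at y_m
```

The same events then repeat on the $n=1$ branch shifted by $\pi/\chi$, which is the period of the vortex cycle in this approximation.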
\begin{figure}
\caption{Creation and motion of kinematic vortices. (a) The function $\beta(y)$ for the parameters $L=1, K=2/3, \delta = 4/15, h=0.05, I=25$. (b) The circles describe the location of the vortices in the $(y,t)$ plane. }
\label{ddd1}
\label{ddd2}
\label{beta1}
\end{figure}
\vskip 0.1cm \noindent {\bf Case 2:} Using the same parameters as in the previous example, except increasing the current $I$ to take the value $I=110$, gives rise to a different shape for $\beta(y)$. As depicted in Figure \ref{beta2}a, it is now a distorted U shape. Therefore, denoting the location of the minimum of $\beta$ by $y_m$, when $t$ reaches the value where $\chi t = \beta(y_m) + \pi/2$, a vortex/anti-vortex pair is created inside the sample. As $t$ increases, equation (\ref{v3}) is satisfied at two locations, until a point of time where $\chi t = \beta(K) + \pi/2$. After that, only one vortex remains in the rectangle, and this vortex eventually leaves the domain when $\chi t = \beta(-K) + \pi/2$. The motion of these kinematic vortices is depicted in Figure \ref{beta2}b.
\begin{figure}
\caption{ Creation and motion of kinematic vortices. (a) The function $\beta(y)$ for the parameters $L=1, K=2/3, \delta = 4/15, h=0.05, I=110$. (b) The circles describe the location of the vortices in the $(y,t)$ plane. }
\label{eee1}
\label{eee2}
\label{beta2}
\end{figure}
\vskip 0.1cm \noindent {\bf Case 3:} For the third example we maintain the same parameters as in the second example above, except that we increase $h$ to take the value $h=0.2$. For this choice of parameters $\beta(y)$ is a monotone function as depicted in Figure \ref{beta3}a. Now, the vortex is first created when $\chi t = \beta(K) + \pi/2$. It then travels to the lower end of the center line until $\chi t = \beta(-K) + \pi/2$. This motion is depicted in Figure \ref{beta3}b.
\begin{figure}
\caption{ Creation and motion of kinematic vortices. (a) The function $\beta(y)$ for the parameters $L=1, K=2/3, \delta = 4/15, h=0.2, I=110$. (b) The circles describe the location of the vortices along the central line $x=0$ in the $(y,t)$ plane. }
\label{ggg1}
\label{ggg2}
\label{beta3}
\end{figure}
\begin{figure}
\caption{ Creation and motion of kinematic vortices. (a) The function $\beta(y)$ for the parameters $L=1, K=2/3, \delta = 4/15, h=20, I=25$. (b) The circles describe the location of the vortices in the $(y,t)$ plane. }
\label{kkk1}
\label{kkk2}
\label{beta4}
\end{figure}
\vskip 0.1cm \noindent {\bf Case 4:} In the fourth example we present a case where $\beta(y)$ has both a local maximum {\em and} a local minimum. The function $\beta(y)$ for the parameters $h=20, \; I=25$ is shown in Figure \ref{beta4}a. This is the parameter choice of Figure \ref{var_beta1} as well. The geometry is the same as in the preceding examples. Following the scenarios above, if we look along the center line $x=0$ we expect to see a first vortex emerging at $y=K$ at a time that we denote $t_1$. Then, after this first vortex appears, a pair of vortices (of the same degree) appear at a later time, say $t_2$, where $\chi t_2=\beta(\bar{y})+\pi/2$ and $\bar{y}$ is the location of the interior local minimum of $\beta$. They move away from each other, until the one moving upward collides at a time $t_3$ with the first vortex moving downward. Finally, the remaining vortex that moves downward reaches the boundary $y=-K$ and exits the rectangle. This vortex creation and motion is indeed verified in Figure \ref{beta4}b.
\begin{figure}
\caption{Contour plots of the phase of the order parameter for the parameters of Case 4 at four distinct times during a period of the motion. In (a) we see five vortices, one of them at the upper end of the center line. In (b) two vortices that were earlier off the center line move into it and meet there. Then they separate and start moving away from each other along the center line in (c), with the upper one moving upward to approach a south-moving vortex along the center line. In (d) these two vortices seem to collide and then veer away from each other. In these figures, the darkest lines are not significant in that they only represent a $-\pi$ to $\pi$ jump in the phase. The tips of these lines, however, represent the vortices. }
\label{fff1}
\label{fff2}
\label{fff3}
\label{fff4}
\label{5vortex_1}
\end{figure}
This example, however, has several peculiar features. In Cases 1 and 2 vortices formed or disappeared in vortex-antivortex pairs on the center line. In Case 4, on the other hand, the picture is different. We refer to a sequence of snapshots in Figure \ref{5vortex_1}. In Figure \ref{5vortex_1}a we observe five vortices, with only one on the center line. The two vortices far from the center line barely move throughout the period of the evolution. We view these two ``sluggish" vortices as magnetic vortices. The vortex {\em on} the center line is the kinematic vortex that formed at $y=K$ at $t=t_1$ as explained above. The two vortices located on either side of the center line, and near it, which are of the same degree, move towards each other. Eventually they meet at $t=t_2$ on the center line, giving rise to the two kinematic vortices that were discussed above, and are shown in Figure \ref{beta4}b as well as Figure \ref{5vortex_1}b. These two vortices move as described above until $t=t_3$. Then, the middle vortex on the center line meets the upper vortex on the center line. This is shown also in Figure \ref{5vortex_1}c. Then, as shown in Figure \ref{5vortex_1}d, this pair of vortices splits away from the center line. This new pair moves away from the center line and upward, and the entire process repeats itself periodically.
The picture we just outlined indicates that kinematic vortices can move away from the center line. We therefore extend the term kinematic to vortices that spend part of a period on and part of a period off the center line, since their formation and motion follow directly from the PT symmetry and the structure it imposes on the center manifold.
\section{Discussion}
All of the examples from the previous section illustrate that, at least near the normal state, there is a dichotomy in vortex behavior when both applied currents and applied magnetic fields are present. The expansion based on center manifold reduction gives a partial explanation for this phenomenon, with the kinematic vortices arising in part due to the PT symmetry of the problem. In any event, it is clear that the variety of possible vortex behavior in this system is far more extensive than that seen in models capturing only magnetic effects. In particular, the motion law \eqref{v3}, based on small amplitude asymptotics rather than large Ginzburg-Landau parameter asymptotics as is more common in the literature, allows for a wide range of effects including boundary and interior nucleation, collision of like-signed vortices and periodicity of these events. Of course, all of the rigorous analysis we conduct necessarily involves small amplitude solutions since it is based on a bifurcation from the normal state. One would imagine that an even richer array of vortex behavior is possible for this system if one looks far from the normal state though an analytical approach would clearly require different tools.
\noindent {\bf Acknowledgments.} L. Peres Hari and J. Rubinstein were generously supported by an ISF grant. P. Sternberg was generously supported by NSF grant DMS-1101290 and a Simons Foundation Collaboration Grant.
\end{document}
\begin{document}
\title{Automated Generation of Triangle Geometry Theorems}
\author{Alexander Skutin\thanks{This work was supported by the Ministry of Education and Science of the Russian Federation as part of the program of the Moscow Center for Fundamental and Applied Mathematics under the agreement no. 075-15-2022-284, by the scholarship of Theoretical Physics and Mathematics Advancement Foundation “BASIS” (grant No 21-8-3-2-1) and by the Russian Science Foundation, project no. 22-11-00075.}}
\date{} \maketitle \begin{abstract}
In this article, we introduce an algorithm for automatic generation and categorization of triangle geometry theorems. \end{abstract}
\section{Introduction}\label{sc3}
Plane geometry is a vast field of research in which many theorems have been obtained and new results are still being discovered. Over the past few decades, a lot of effort has been spent on creating algorithms designed to automatically generate theorems in plane geometry; some of these algorithms can be found in \cite{1, 2, 8, 3, 4}.
In this paper, we concretize the problem of automatic generation of plane geometry theorems to the case of triangle geometry theorems, that is, theorems about a triangle $ABC$ that are invariant with respect to permutations of the vertices of $ABC$. We provide a new algorithm that generates and categorizes triangle geometry theorems. It is expected that this algorithm is able to generate almost all of the theorems from the articles \cite{cos, cos1}. The main idea of our algorithm can be described as follows:\\The algorithm has an inductive form, and at each new step $t$ \begin{enumerate}
\item it considers the set of theorems obtained at the previous step and constructs a new set of theorems by adding at most one new object to each already existing theorem and formulating new theorems about the resulting configurations, \item it replaces the set of obtained theorems with some of its ``maximal generalizations''.
\end{enumerate}
The definition of ``maximally general'' (complete) sets of theorems will be presented in this article.
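The inductive loop above can be sketched schematically as follows. This is only a skeleton: `extend_with_one_object` and `maximal_generalizations` are hypothetical placeholders for the operations made precise in the rest of the article, not implementations of them.

```python
def generate(theorems0, steps, extend_with_one_object, maximal_generalizations):
    """Inductive skeleton of the algorithm: at each step, extend every
    known theorem by at most one new object and formulate new theorems
    about the resulting configurations (step 1), then replace the
    collected set by some of its maximal generalizations (step 2)."""
    theorems = set(theorems0)
    for _ in range(steps):
        extended = set()
        for thm in theorems:
            extended |= extend_with_one_object(thm)   # step 1
        theorems = maximal_generalizations(extended)  # step 2
    return theorems
```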
\subsection{Notation}\label{circ}
The arity $\text{ar}(f)$ of a function $f$ is the number of arguments that $f$ takes. Further, by $\wedge, \Rightarrow, \Leftrightarrow$ we will denote the logical operators `and', `implies' and `is equivalent to'. We will use the standard set-theoretic notation $\{x \:\vert\: \text{statement about x}\}$, which is read as ``the set of all $x$ such that the statement about $x$ is true.''
\subsection{Structure of the paper} The paper is organized as follows. In Sections 2, 3 we introduce $\hbox{\scalebox{0.75}{$\triangle$}}$-objects and define the set $S_{\triangle}^{7}$. In Section 4, we develop an algorithm for automatic generation of triangle geometry theorems based on $S_{\triangle}^{7}$. Section 5 contains some propositions that simplify the computation of $S_{\triangle}^{7}$. Appendix A contains lists of objects which are used in the article.
\subsection{Triangle centers and lines} \begin{definition}[C. Kimberling, \cite{ki}]\label{d5} By a {\em triangle center} $X$ we mean a point $X(A, B, C)$ which is defined for each triple of points $A$, $B$, $C$ in the plane $\mathbb{R}^2$. \end{definition}
\begin{lis}\label{l1} The complete list of triangle centers $X_i$, $1\leq i\leq 13$, used in this article can be found in the Appendix (see List \ref{ltc}). Some of the centers in use, with their corresponding numbers: \begin{enumerate}
\item \text{In(ex)center} $I$, $I(A, B, C)$ -- the incenter of $ABC$ if $A$, $B$, $C$ are placed clockwise on the plane $\mathbb{R}^2$ (or the $A$-excenter of $ABC$ if $A$, $B$, $C$ are placed anti-clockwise on $\mathbb{R}^2$).
\item \text{Centroid} $G$, $G(A, B, C)$ -- the centroid of $ABC$.
\item \text{Circumcenter} $O$, $O(A, B, C)$ -- the circumcenter of $ABC$.
\item \text{Orthocenter} $H$, $H(A, B, C)$ -- the orthocenter of $ABC$.
\item \text{Nine-point center} $N$, $N(A, B, C)$ -- the nine-point center of $ABC$.
\item[7.] \text{First(second) Fermat point} $F$, $F(A, B, C)$ -- the first Fermat point of $ABC$ if $A$, $B$, $C$ are placed clockwise on the plane $\mathbb{R}^2$ (or the second Fermat point of $ABC$ if $A$, $B$, $C$ are placed anti-clockwise on $\mathbb{R}^2$).
\item[9.] \text{Inner(outer) Feuerbach point} $F_e$, $F_e(A, B, C)$ -- the inner Feuerbach point of $ABC$ if $A$, $B$, $C$ are placed clockwise on the plane $\mathbb{R}^2$ (or the $A$-external Feuerbach point of $ABC$ if $A$, $B$, $C$ are placed anti-clockwise on $\mathbb{R}^2$).
\item[12.] \text{Inner(outer) Morley point} $M$, $M(A, B, C)$ -- the $A$-vertex of the inner Morley triangle of $ABC$ if $A$, $B$, $C$ are placed clockwise on the plane $\mathbb{R}^2$ (or the $A$-vertex of the outer Morley triangle of $ABC$ if $A$, $B$, $C$ are placed anti-clockwise on $\mathbb{R}^2$).
\end{enumerate} \end{lis}
\section{Definitions of \texorpdfstring{$\hbox{\scalebox{0.75}{$\triangle$}}$}{t}-objects}
\begin{definition}\label{d10} Denote by a {\em $\hbox{\scalebox{0.75}{$\triangle$}}$-point} any 6-tuple of points lying on the plane $\mathbb{R}^2$. \end{definition}
\begin{remark}
Similarly, one can introduce $\hbox{\scalebox{0.75}{$\triangle$}}$-lines, $\hbox{\scalebox{0.75}{$\triangle$}}$-circles and other $\hbox{\scalebox{0.75}{$\triangle$}}$-curves, but we omit them in this article for simplicity. \end{remark}
\begin{definition} For each $\hbox{\scalebox{0.75}{$\triangle$}}$-point $x = (x_1, x_2, x_3, x_4, x_5, x_6)$ define $$x_{bc} = x_1,\:\: x_{cb} = x_2,\:\: x_{ca} = x_3,\:\: x_{ac} = x_4,\:\: x_{ab} = x_5,\:\: x_{ba} = x_6.$$ \end{definition}
\begin{example}\label{lam}
Consider the Van Lamoen configuration (see \cite{van}) -- a triangle $ABC$ with the centroid $G$, the cevian triangle $A'B'C'$ of $G$ wrt $ABC$ and the circumcenters $O_{bc} = O(GBC'),\ldots, $\\$O_{ba} = O(GBA')$ of $GBC',\ldots, GBA'$. In this configuration it is possible to define the following $\hbox{\scalebox{0.75}{$\triangle$}}$-points $x = (A, A, B, B, C, C)$, $y = (G, \ldots , G)$, $z = (A', A', B', B', C', C')$, $t = (O_{bc}, \ldots , O_{ba})$. Thus, $x_{bc} = A,\ldots, x_{ba} = C$, $y_{bc} = G,\ldots, y_{ba} = G$, $z_{bc} =A',\ldots, z_{ba} = C'$, $t_{bc} = O_{bc},\ldots, t_{ba} = O_{ba}$. \end{example}
\begin{definition}\label{d15}
Denote by a {\em $\hbox{\scalebox{0.75}{$\triangle$}}$-function} any function $f$ which assigns a non-empty set of $\hbox{\scalebox{0.75}{$\triangle$}}$-points to each $\text{ar}(f)$-tuple of $\hbox{\scalebox{0.75}{$\triangle$}}$-points and is one of the functions $f_{\triangle, i}$ listed in the Appendix of this article (see List \ref{ltf}).\\Some of the $\hbox{\scalebox{0.75}{$\triangle$}}$-functions in use, with their corresponding numbers (these are the $\hbox{\scalebox{0.75}{$\triangle$}}$-functions used in the definitions and examples below):\begin{enumerate} \item[1.] $f_{\triangle, 1} = $ the set of all $\hbox{\scalebox{0.75}{$\triangle$}}$-points ($f_{\triangle, 1}$ has arity $0$ and, thus, is a set of $\hbox{\scalebox{0.75}{$\triangle$}}$-points. The same can be said about $f_{\triangle, i}$, $1\leq i\leq 8$). \item[2.] $f_{\triangle, 2} = \{x\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point} \:\vert\: x_{bc} = x_{cb}, x_{ca} = x_{ac}, x_{ab} = x_{ba}\}$.
\item[8.] $f_{\triangle, 8} = \{x\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\:\vert\: x_{bc},\ldots, x_{ba}\text{ lie on a circle}\}$.
\item[11.] $f_{\triangle, 11}( x) = \{y\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\:\vert\: y_{bc} = x_{bc},\ldots, y_{ba} = x_{ba}\text{ i.e. }y = x\}$.
\item[17.] $f_{\triangle, 17}( x, y, z) = \left\{t\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} t_{bc},\ldots, t_{ba}\text{ are the projections of}\\x_{bc}, \ldots, x_{ba}\text{ on }y_{bc}z_{bc},\ldots, y_{ba}z_{ba}\end{array}\right.\right\}$.
\item[19.] $f_{\triangle, {19, i}}( x, y, z) = \left\{t\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} t_{bc} = X_i(x_{bc}, y_{bc}, z_{bc}), t_{cb} = X_i(x_{cb}, z_{cb}, y_{cb}),
\\ t_{ca} = X_i(z_{ca}, x_{ca}, y_{ca}), t_{ac} = X_i(y_{ac}, x_{ac}, z_{ac}),
\\ t_{ab} = X_i(y_{ab}, z_{ab}, x_{ab}), t_{ba} = X_i(z_{ba}, y_{ba}, x_{ba})\end{array}\right.\right\}$,\\where $1\leq i\leq 13$, $ X_i$ denotes the $i$-th center from the list \ref{l1}.
\item[20.] $f_{\triangle, {20}}( x, y, z, t) = \left\{v\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} v_{bc} = x_{bc}y_{bc}\cap z_{bc}t_{bc},\ldots,\\ v_{ba} = x_{ba}y_{ba}\cap z_{ba}t_{ba}
\end{array}\right.\right\}$.
\item[25.] Functions of the form $f_{\triangle, n, \alpha, \beta, \gamma}(x, y, z) := f_{\triangle, n}(x^{\alpha}, y^{\beta}, z^{\gamma})$, $1\leq n\leq 24$, where $\alpha, \beta, \gamma$ are any symbols from the set $\{bc, cb, ca, ac, ab, ba\}$ and for each $\hbox{\scalebox{0.75}{$\triangle$}}$-point $x$,\\$x^{bc} := (x_{bc}, x_{cb}, x_{ca}, x_{ac}, x_{ab}, x_{ba}),\quad x^{ac} := (x_{ac}, x_{ca}, x_{cb}, x_{bc}, x_{ba}, x_{ab}),$\\$x^{cb} := (x_{cb}, x_{bc}, x_{ba}, x_{ab}, x_{ac}, x_{ca}),\quad x^{ba} := (x_{ba}, x_{ab}, x_{ac}, x_{ca}, x_{cb}, x_{bc}),$\\$x^{ab} := (x_{ab}, x_{ba}, x_{bc}, x_{cb}, x_{ca}, x_{ac}),\quad x^{ca} := (x_{ca}, x_{ac}, x_{ab}, x_{ba}, x_{bc}, x_{cb})$,\\ denotes the orbit of $x$.
\end{enumerate} \end{definition}
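The six orbit maps $x\mapsto x^{\alpha}$ of item 25 are fixed permutations of the slots $(bc, cb, ca, ac, ab, ba)$ and can be tabulated directly. The following Python sketch (an illustration, not part of the formal development) encodes them as index permutations; they are induced by the six permutations of the triangle labels $a, b, c$ and hence form a group isomorphic to $S_3$.

```python
# Slots of a triangle-point in the fixed order (bc, cb, ca, ac, ab, ba).
SLOTS = ("bc", "cb", "ca", "ac", "ab", "ba")

# x^alpha as an index permutation: ORBIT[alpha][i] is the slot of x
# placed in position i of x^alpha, read off from item 25 above.
ORBIT = {
    "bc": (0, 1, 2, 3, 4, 5),
    "cb": (1, 0, 5, 4, 3, 2),
    "ca": (2, 3, 4, 5, 0, 1),
    "ac": (3, 2, 1, 0, 5, 4),
    "ab": (4, 5, 0, 1, 2, 3),
    "ba": (5, 4, 3, 2, 1, 0),
}

def apply(alpha, x):
    """Return the triangle-point x^alpha as a 6-tuple."""
    return tuple(x[i] for i in ORBIT[alpha])
```

Closure under composition (the group property) is easy to check mechanically by composing any two of the six permutations and verifying the result is again one of them.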
\begin{definition}\label{N}
Consider the sequence $x_1, x_2, x_3, \ldots$ of free variables which can be any $\hbox{\scalebox{0.75}{$\triangle$}}$-points. Denote by a {\em $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration} any logical statement about the sequence $x_1, x_2, x_3,\ldots$, which has the form $$\bigwedge_{i = 1}^N[x_{a_i}\in f_i(x_{b_{i,1}}, x_{b_{i, 2}}, \ldots, x_{b_{i,\text{ar} (f_i)}})], $$where\begin{enumerate}
\item $N$ is a natural number
\item $a_1 < a_2 < \ldots < a_N$ is a strictly increasing sequence of natural numbers
\item for each $1\leq i\leq N$, $f_i$ is a $\hbox{\scalebox{0.75}{$\triangle$}}$-function
\item for each $1\leq i\leq N$, $b_{i, 1}, b_{i, 2},\ldots, b_{i,\text{ar} (f_i)}$ is an $\text{ar} (f_i)$-tuple of natural numbers, each less than $a_i$.
\end{enumerate}
\end{definition} Since statements of the form $[x_i\in f_{\triangle, 1}]$ do not carry any additional information, we will omit such terms within $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations (i.e. we need not consider the $\hbox{\scalebox{0.75}{$\triangle$}}$-function $f_{\triangle, 1}$).
\begin{definition}
For any $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $c$, denote by $\text{deg}(c)$ and $\text{height}(c)$ the values of $N$ and $a_N$, respectively, from Definition \ref{N} associated with $c$. \end{definition}
\begin{example}\label{s}
Consider the following $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $c$, which is related to Example \ref{lam}:$$c = [x_1\in f_{\triangle, 2}]\wedge [x_2\in f_{\triangle, 19, 2}(x_1, x_1^{ab}, x_1^{ac})]\wedge [x_3\in f_{\triangle, 20}(x_1, x_2, x_1^{ab}, x_1^{ac})]\wedge [x_5\in f_{\triangle, 19, 3}(x_1, x_2, x_3^{ab})].$$Here $\text{deg}(c) = 4$, $\text{height}(c) = 5$. \end{example}
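One convenient machine representation (an illustrative choice, not prescribed by the formal development) encodes a $\triangle$-configuration as a list of terms $(a_i, f_i, \text{arguments})$; $\text{deg}$ and $\text{height}$ are then immediate. Below, the configuration of Example \ref{s} is transcribed into this form, with the $\triangle$-functions named by hypothetical string labels.

```python
# A triangle-configuration as a list of terms (a_i, f_i, args), where
# args names the variables (with optional orbit superscript) fed to f_i.
def deg(c):
    """deg(c) = N, the number of terms."""
    return len(c)

def height(c):
    """height(c) = a_N, the largest variable index used."""
    return max(a for a, _f, _args in c)

# The configuration c of Example \ref{s}:
c = [
    (1, "f_2", ()),
    (2, "f_19_2", ("x1", "x1^ab", "x1^ac")),
    (3, "f_20", ("x1", "x2", "x1^ab", "x1^ac")),
    (5, "f_19_3", ("x1", "x2", "x3^ab")),
]
```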
\begin{definition}
Let $C_{\triangle}$ denote the set of all $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations. Also, for each natural $n$, let $C_{\triangle}^n$ denote the set of all $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations $c$ with $\text{height}(c)\leq n$. We will say that $c, d\in C_{\triangle}$ are {\em equivalent} if there exists a permutation of variables $\sigma : x_1, x_2, x_3,\ldots\to x_1, x_2, x_3,\ldots$ which sends $c$ to $d$, i.e. $\sigma(c) = d$. We will write $c\simeq d$ for equivalent $c, d\in C_{\triangle}$. \end{definition}
\begin{definition}
For each $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $$c = \bigwedge_{i = 1}^N[x_{a_i}\in f_i(x_{b_{i,1}}, x_{b_{i, 2}}, \ldots, x_{b_{i,\text{ar} (f_i)}})], $$let $\text{terms}(c)$ denote the set of $\text{deg} = 1$ $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations $$\text{terms}(c) := \{[x_{a_i}\in f_i(x_{b_{i,1}}, x_{b_{i, 2}}, \ldots, x_{b_{i,\text{ar} (f_i)}})]\:\vert\: 1\leq i\leq N\}.$$Also we will say that $d\in C_{\triangle}$ is a {\em predecessor} of $c$ if $$d = \bigwedge_{i = 1}^M[x_{a_i}\in f_i(x_{b_{i,1}}, x_{b_{i, 2}}, \ldots, x_{b_{i,\text{ar} (f_i)}})]$$for some $1\leq M\leq N$. \end{definition}
\begin{definition}
For $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations $c, c_1, c_2,\ldots, c_l$ we say that $c = \cup_{i = 1}^lc_i = c_1\cup\ldots\cup c_l$ if $\text{terms}(c) = \cup_{i = 1}^l\text{terms}(c_i) = \text{terms}(c_1)\cup\ldots\cup\text{terms}(c_l)$. Also, for $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations $c, d$ we say that $c \subseteq d$ if $\text{terms}(c) \subseteq \text{terms}(d)$, and $c \subsetneq d$ if $\text{terms}(c) \subsetneq \text{terms}(d)$. \end{definition}
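If a $\triangle$-configuration is encoded as a list of terms $(a_i, f_i, \text{arguments})$ (an illustrative representation, not prescribed by the formal development), then $\text{terms}(c)$ is simply a set of tuples, and the union and inclusion relations above reduce to ordinary set operations:

```python
def terms(c):
    """The set of deg-1 sub-configurations (terms) of c."""
    return frozenset(c)

def union(*cs):
    """c = c_1 ∪ ... ∪ c_l, characterized by terms(c) = ∪ terms(c_i)."""
    out = frozenset()
    for c in cs:
        out |= terms(c)
    return out

def subset(c, d):         # c ⊆ d
    return terms(c) <= terms(d)

def proper_subset(c, d):  # c ⊊ d
    return terms(c) < terms(d)
```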
\begin{definition}
For $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations $c, d$ we say that $c \leq d$ if there exist $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations $c'\simeq c$ and $d'\simeq d$ such that $c'$ is a predecessor of $d'$.
\end{definition}
\begin{definition}\label{conf}
Denote by a {\em $\hbox{\scalebox{0.75}{$\triangle$}}$-theorem} any valid implication of the form $c \Rightarrow r$, $c, r\in C_{\triangle}$, where $\text{deg}(r) = 1$. \end{definition}
\begin{example}
Consider the $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $c$ from Example \ref{s}, and let $r = [x_5\in f_{\triangle, 8}]$. Then the Van Lamoen theorem (see \cite{van}) implies that $c \Rightarrow r$ is a $\hbox{\scalebox{0.75}{$\triangle$}}$-theorem. \end{example}
\begin{definition}\label{abc}
Consider a triangle $ABC$ lying on the plane $\mathbb{R}^2$ in general position. For each $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $$c = \bigwedge_{i = 1}^N[x_{a_i}\in f_i(x_{b_{i,1}}, x_{b_{i, 2}}, \ldots, x_{b_{i,\text{ar} (f_i)}})]\in C_{\triangle}, $$ let $c(ABC)$ denote\begin{enumerate}
\item the set of $\hbox{\scalebox{0.75}{$\triangle$}}$-points $\{x_{a_1}, x_{a_2},\ldots, x_{a_N}\}$ satisfying the system of equations $$\left\{\begin{array}{cl}x_{a_1} = (A, A, B, B, C, C)\in f_1(x_{b_{1,1}}, x_{b_{1, 2}}, \ldots, x_{b_{1,\text{ar} (f_1)}})
\\ x_{a_2}\in f_2(x_{b_{2,1}}, x_{b_{2, 2}}, \ldots, x_{b_{2,\text{ar} (f_2)}})
\\\ldots
\\ x_{a_N}\in f_N(x_{b_{N,1}}, x_{b_{N, 2}}, \ldots, x_{b_{N,\text{ar} (f_N)}})
\end{array}\right.$$if $f_1 = f_{\triangle, 2}$, $a_i = i$ ($1\leq i\leq N$), and this system of equations has the unique solution
\item $c(ABC) = \varnothing$, otherwise.
\end{enumerate}
\end{definition}
\begin{example}
Consider the following $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $c'$$$c' = [x_1\in f_{\triangle, 2}]\wedge [x_2\in f_{\triangle, 19, 2}(x_1, x_1^{ab}, x_1^{ac})]\wedge [x_3\in f_{\triangle, 20}(x_1, x_2, x_1^{ab}, x_1^{ac})]\wedge [x_4\in f_{\triangle, 19, 3}(x_1, x_2, x_3^{ab})].$$So $c'$ is equivalent to $c$ from the example \ref{s}, and $c'(ABC) \not= \varnothing$. \end{example}
\begin{definition} For each triangle $ABC$ in general position and each set $S\subseteq C_{\triangle}$, denote $$S(ABC) := \bigcup_{c\in S}c(ABC).$$ \end{definition}
\section{Construction of \texorpdfstring{$S_{\triangle}^7$}{T7S7}}
\begin{definition}
For a $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $c$ denote by $\normalfont{\text{Gen}(c)}$ the set of {\em generalizations of $c$}, where $$\normalfont{\text{Gen}(c)} \!:=\! \left\{d\in C_{\triangle}\left\vert\begin{array}{cl} \text{there exist }\hbox{\scalebox{0.75}{$\triangle$}}\text{-configurations }c'\simeq c, d'\simeq d,\text{ which are equivalent to }c, d\\\text{respectively and are such that: }
\\\:1.\:\: c'\Rightarrow d'\text{ is a valid implication, and}
\\\:2.\:\: c'\not\Leftrightarrow d'
\end{array}\right.\right\}$$ \end{definition}
\begin{remark}
It is also possible to implement a larger set of generalizations of $c$ by further considering cases when $d' = \sigma(d)$ for some surjective (and not necessarily bijective) mapping of variables $\sigma : x_1, x_2, x_3,\ldots\to x_1, x_2, x_3,\ldots$, and adding some additional condition 3 (see for example the generalization of Gergonne theorem in \cite[Theorem 9.1(1), p.13]{SNT}). However, we will omit such generalizations for simplicity. \end{remark}
\begin{definition}
A set $S\subseteq C_{\triangle}$ is called {\em complete} (we will also call such a set ``maximally general'') if for each $d\in S$ the set of all $\hbox{\scalebox{0.75}{$\triangle$}}$-theorems of the form $c\Rightarrow r$, $c\in S$, $r\in C_{\triangle}$ can't be deductively derived\footnote{Here by ``can be deductively derived'' we mean ``can be derived using the set of inference rules:
\begin{equation} \inference{(a, b, c\text{ are any logical statements}) & a \Rightarrow b & b \Rightarrow c} {a \Rightarrow c} \end{equation}
\begin{equation} \inference{(a, b, c\text{ are any logical statements})} {a\Leftrightarrow a\wedge a\qquad a\wedge b \Rightarrow a\qquad a\wedge b \Rightarrow b\qquad a\wedge b \Leftrightarrow b\wedge a\qquad (a\wedge b)\wedge c \Leftrightarrow a\wedge (b\wedge c)} \end{equation}
\begin{equation} \inference{(a_1, a_2, b_1, b_2\text{ are any logical statements}) & a_1 \Rightarrow b_1 & a_2 \Rightarrow b_2} {a_1\wedge a_2 \Rightarrow b_1\wedge b_2} \end{equation}
\begin{equation} \inference{(c, d\in C_{\triangle}, \sigma : x_1, x_2, x_3,\ldots\to x_1, x_2, x_3,\ldots\text{ is a surjective mapping of variables}) & c \Rightarrow d } {\sigma (c) \Rightarrow \sigma (d)} \end{equation}
\begin{equation} \inference{(c_1, c_2, d\in C_{\triangle}) & c_1\Rightarrow d & c_2\text{ is a predecessor of } c_1 & \text{height}(d)\leq\text{height}(c_2)} {c_2 \Rightarrow d}. \end{equation}
} from the set of $\hbox{\scalebox{0.75}{$\triangle$}}$-theorems of the form $c\Rightarrow r$, $c\in (S\setminus\{d\})\cup\text{Gen}(d)$, $r\in C_{\triangle}$. \end{definition}
\begin{definition}
Consider a natural number $n$. A set $S\subseteq C_{\triangle}$ is called {\em $n$-complete} if it is complete and each $\hbox{\scalebox{0.75}{$\triangle$}}$-theorem of the form $c\Rightarrow r$, $c\in C_{\triangle}^n$, $r\in C_{\triangle}$ can be deductively derived$^1$ from the set of $\hbox{\scalebox{0.75}{$\triangle$}}$-theorems of the form $c\Rightarrow r$, $c\in S$, $r\in C_{\triangle}$. \end{definition}
Next, we will be interested in computing $7$-complete sets; however, none of these sets can be computed in practice, so we finish this section by constructing a computable analogue, $S_{\triangle}^7$.
\begin{definition}\label{cgen}
A $\hbox{\scalebox{0.75}{$\triangle$}}$-theorem $c \Rightarrow r$ is called {\em computably generalizable} if there exists a set of $\hbox{\scalebox{0.75}{$\triangle$}}$-theorems of the form $\{c_i \Rightarrow r_i, d \Rightarrow r\:\vert\: 1\leq i\leq l\}$, such that $l\geq 1$, $c_i\subsetneq c$ ($1\leq i\leq l$), $d = \cup_{i = 1}^l r_i$, and $d\not\Leftrightarrow c$. Obviously each computably generalizable $\hbox{\scalebox{0.75}{$\triangle$}}$-theorem $c\Rightarrow r$ can be deductively derived$^1$ from the set of $\hbox{\scalebox{0.75}{$\triangle$}}$-theorems $\{c_i \Rightarrow r_i, d \Rightarrow r\:\vert\: 1\leq i\leq l\}$ and, thus, from the set of $\hbox{\scalebox{0.75}{$\triangle$}}$-theorems of the form $c'\Rightarrow r'$, $c'\in\text{Gen}(c)$, $r'\in C_{\triangle}$. \end{definition}
\begin{definition}\label{cccgen}
Consider a triangle $ABC$ in general position. For a $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $c$ denote by $\normalfont{\text{Gen}_{\triangle}(c)}$ the set of {\em triangular computable generalizations of $c$}, where $$\normalfont{\text{Gen}_{\triangle}(c)} := \{d\in C_{\triangle}\:\vert\: d(ABC)\not=\varnothing, d\leq c,\text{ and } d\not\simeq c\}.$$Also for a $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $c$, define \begin{equation*}
\text{CGen}_{\triangle}(c) := \begin{cases}\text{Gen}_{\triangle}(c), &\text{if each }\hbox{\scalebox{0.75}{$\triangle$}}\text{-theorem of the form }c \Rightarrow r, r\in C_{\triangle} \text{ with}\\&\text{height}(r)\leq \text{height}(c)\text{ is computably generalizable},\\\{c\}, &\text{otherwise}.\end{cases}\end{equation*}Additionally, for a set $S\subseteq C_{\triangle}$, define the sets\begin{itemize}
\item $\text{CGen}_{\triangle}(S) := \cup_{c\in S}\text{CGen}_{\triangle}(c)$,
\item $\text{CGen}_{\triangle}^1(S) := \text{CGen}_{\triangle}(S)$,
\item $\text{CGen}_{\triangle}^{i + 1}(S) := \text{CGen}_{\triangle}(\text{CGen}_{\triangle}^i(S))$, $i = 1, 2, 3,\ldots$,
\item $\text{MaxCGen}_{\triangle}(S) := \text{CGen}_{\triangle}^d(S)$, where $d$ is the minimal natural number such that $\text{CGen}_{\triangle}^d(S) = \text{CGen}_{\triangle}^{d + 1}(S)$.
\end{itemize} \end{definition}
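Computationally, $\text{MaxCGen}_{\triangle}(S)$ is an ordinary fixed-point iteration of $\text{CGen}_{\triangle}$. A minimal Python sketch, in which the callable \texttt{cgen} is a hypothetical stand-in for $\text{CGen}_{\triangle}$ acting on a single configuration:

```python
def cgen_set(cgen, S):
    """CGen applied to a set: the union of cgen(c) over all c in S."""
    return frozenset().union(*[cgen(c) for c in S])

def max_cgen(cgen, S):
    """Iterate S -> CGen(S) until the first fixed point is reached,
    as in the definition of MaxCGen; `cgen` maps one configuration
    to a set of configurations."""
    current = frozenset(S)
    while True:
        new = cgen_set(cgen, current)
        if new == current:
            return current
        current = new
```

Termination corresponds to the minimal $d$ with $\text{CGen}_{\triangle}^d(S) = \text{CGen}_{\triangle}^{d+1}(S)$ in the definition; the sketch assumes such a $d$ exists, as the definition does.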
\begin{remark}
The sets $\text{CGen}_{\triangle}(S)$, $\text{CGen}_{\triangle}^i(S)$, $\text{MaxCGen}_{\triangle}(S)$ can be calculated in practice with the help of Propositions \ref{jjjj}, \ref{jj} from Section 5. \end{remark}
\begin{definition}\label{d} Define the sets $S_{\triangle}^n\subseteq C_{\triangle}^n$, $n\geq 1$ inductively. Let $S_{\triangle}^1 := \{[x_1\in f_{\triangle, 2}]\}$. Assume that for a natural $t\geq 1$ the set $S_{\triangle}^t\subseteq C_{\triangle}^t$ is already constructed. Consider the sets \begin{enumerate}
\item[] $J := \left\{c\in C_{\triangle}^{t + 1}\left\vert\begin{array}{cl} c = c_1\cup c_2\in C_{\triangle}^{t + 1}\text{ for some}
\\c_1\in S_{\triangle}^t, c_2\in C_{\triangle}^{t + 1}\text{ with }\text{deg}(c_2) = 1, \text{height}(c_2) = t + 1\end{array}\right.\right\},$
\item[] $S_{\triangle}^{t + 1} := S_{\triangle}^t\cup\text{MaxCGen}_{\triangle}(J).$
\end{enumerate}From this inductive process construct the sets $S_{\triangle}^1\subseteq S_{\triangle}^2\subseteq \ldots\subseteq S_{\triangle}^n\subseteq\ldots$. It is easy to see that $S_{\triangle}^n = \text{MaxCGen}_{\triangle}(S_{\triangle}^n)$ for each $n\geq 1$. \end{definition}
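The inductive construction above can be sketched directly. Here `base` plays the role of $S_{\triangle}^1$, while `extend_by_one` and `max_cgen` are hypothetical stand-ins for the enumeration of $J$ and for $\text{MaxCGen}_{\triangle}$:

```python
def build_S(n, base, extend_by_one, max_cgen):
    """Build the chain S^1, S^2, ..., S^n inductively, as in the
    definition.

    `extend_by_one(S, t)` should enumerate the set J of configurations
    c1 joined with c2, where c1 is in S^t and c2 has degree 1 and
    height t + 1; `max_cgen` maps a set J to MaxCGen(J).
    """
    S = set(base)
    for t in range(1, n):
        J = extend_by_one(S, t)
        S |= max_cgen(J)
    return S
```

The sketch only mirrors the shape of the induction; both helpers hide the actual combinatorial work described in Section 5.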
In what follows, we will be interested in computing the set $S_{\triangle}^7$. The set $S_{\triangle}^7$ can be seen as a computable analogue and an approximation of $7$-complete sets.
\begin{remark}\label{rlc}
Note that when calculating $S_{\triangle}^7$, at each new step we do not need to list those $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations that have already been listed. \end{remark} The set $S_{\triangle}^{7}$ can be computed in practice from its definition with the help of Remark \ref{rlc} and Propositions \ref{jjjj}, \ref{jj} from Section 5.
\section{Automated generation of theorems based on \texorpdfstring{$S_{\triangle}^{7}$}{S7}} In this section, we introduce an algorithm for a computer that generates and categorizes triangle geometry theorems based on the set $S_{\triangle}^{7}$.
\begin{definition}
For each $\hbox{\scalebox{0.75}{$\triangle$}}$-point $a\in S_{\triangle}^{7}(ABC)$ let $f_1(a), f_2(a), \ldots, f_{\gamma(a)}(a)$ denote the sequence of $\hbox{\scalebox{0.75}{$\triangle$}}$-functions which are used for the definition of $a$ and are ordered according to their appearance. Also denote by $\Gamma(a)$ the sequence $(f_1(a), f_2(a),\ldots, f_{\gamma(a)}(a))$ after excluding those $f_k(a), 1\leq k\leq \gamma(a)$ which do not have the form $f_{\triangle, 19, i}$ for some $1\leq i\leq 13$, and then replacing each uniform segment of the remaining sequence of $\hbox{\scalebox{0.75}{$\triangle$}}$-functions with the single $\hbox{\scalebox{0.75}{$\triangle$}}$-function of the same type (for example, if $a\in S_{\triangle}^{7}(ABC)$ is such that $(f_1(a), f_2(a), \ldots, f_{\gamma(a)}(a)) = (f_{\triangle, 2}, f_{\triangle, 19, 1}, f_{\triangle, 3}, f_{\triangle, 19,1}, f_{\triangle, 19, 9})$, then $\Gamma(a) = (f_{\triangle, 19, 1}, f_{\triangle, 19, 9})$). \end{definition}
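The map $a\mapsto\Gamma(a)$ is a filter-then-collapse pass over the defining sequence. A sketch in Python, where encoding $f_{\triangle,19,i}$ as the string "f_19_i" is an illustrative convention, not the paper's:

```python
def gamma(seq):
    """Compute Gamma(a) from the sequence of triangle-functions used
    to define a: keep only entries of the form f_19_i, then collapse
    each uniform (constant) run into a single entry."""
    kept = [f for f in seq if f.startswith("f_19_")]
    collapsed = []
    for f in kept:
        # start a new run only when the function type changes
        if not collapsed or collapsed[-1] != f:
            collapsed.append(f)
    return tuple(collapsed)
```

On the worked example from the definition, gamma(("f_2", "f_19_1", "f_3", "f_19_1", "f_19_9")) returns ("f_19_1", "f_19_9").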
The next algorithm generates triangle theorems based on the computation of $S_{\triangle}^{7}(ABC)$. Also it produces an intuitive categorization of theorems, the same as in the articles \cite{cos, cos1}.
\begin{al}\label{d425}
The computer program takes as input a sequence $X_{i_1}, X_{i_2}, \ldots, X_{i_d}$, $1\leq i_1, i_2,\ldots, i_d\leq 13$, $d\geq 1$, of triangle centers from the list \ref{l1} with no uniform segments (i.e. $i_k\not=i_{k + 1}$, $1\leq k < d$), and then produces its output by following the steps below.
\begin{enumerate}
\item For a triangle $ABC$ in general position, compute the set $S_{\triangle}^7(ABC)$.
\item In the ``Objects $X_{i_1}- X_{i_2}- \ldots- X_{i_d}$'' section, print the definitions and notations of all $\hbox{\scalebox{0.75}{$\triangle$}}$-points $a\in S_{\triangle}^{7}(ABC)$ with $\Gamma(a) = (f_{\triangle, 19, i_1}, f_{\triangle, 19, i_2},\ldots, f_{\triangle, 19, i_d})$. As in the ETC \cite{etc}, we can label objects from the section ``Objects $X_{i_1}- X_{i_2}- \ldots- X_{i_d}$'' as $(X_{i_1}- X_{i_2}- \ldots- X_{i_d})_1$, $(X_{i_1}- X_{i_2}- \ldots- X_{i_d})_2$, $(X_{i_1}- X_{i_2}- \ldots- X_{i_d})_3$, $\ldots$.
\item Compute and print in the ``Properties $X_{i_1}- X_{i_2}- \ldots- X_{i_d}$'' section all correct statements of the form $[a\in f(a_1, a_{2}, \ldots, a_{\text{ar} (f)})]$, where $f$ is any $\hbox{\scalebox{0.75}{$\triangle$}}$-function and $a$, $a_{1}$, $a_{2}$, $\ldots$, $a_{\text{ar} (f)}$ are any $\hbox{\scalebox{0.75}{$\triangle$}}$-points from the section ``Objects $X_{i_1}- X_{i_2}- \ldots- X_{i_d}$''.
\end{enumerate} \end{al} \begin{remark} Note that in the section ``Objects $X_{i_1}- X_{i_2}- \ldots- X_{i_d}$'' from Algorithm \ref{d425} for each object $x$ it is possible to leave only the representative $(x_{bc}, \ldots, x_{ba})$ of the orbit of elements$$x = x^{bc} := (x_{bc}, x_{cb}, x_{ca}, x_{ac}, x_{ab}, x_{ba}),\quad x^{ac} := (x_{ac}, x_{ca}, x_{cb}, x_{bc}, x_{ba}, x_{ab}),$$$$x^{cb} := (x_{cb}, x_{bc}, x_{ba}, x_{ab}, x_{ac}, x_{ca}),\quad x^{ba} := (x_{ba}, x_{ab}, x_{ac}, x_{ca}, x_{cb}, x_{bc}),$$$$x^{ab} := (x_{ab}, x_{ba}, x_{bc}, x_{cb}, x_{ca}, x_{ac}),\quad x^{ca} := (x_{ca}, x_{ac}, x_{ab}, x_{ba}, x_{bc}, x_{cb}).$$We also need to replace all sequences $x, y,\ldots, z$ of objects from ``Objects $X_{i_1}- X_{i_2}- \ldots- X_{i_d}$'' that have the same coordinates as $\hbox{\scalebox{0.75}{$\triangle$}}$-points (i.e. are such that $x = y = \ldots = z$) with the single object $x$, and to list, in the definition of $x$, all descriptions coming from $x, y, \ldots, z$. \end{remark}
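The six orbit elements listed in the remark are fixed permutations of the component order $(x_{bc}, x_{cb}, x_{ca}, x_{ac}, x_{ab}, x_{ba})$, so deduplication up to orbit reduces to a table lookup. A sketch, with the index table transcribed from the tuples above and the lexicographic choice of representative being one possible convention, not the paper's:

```python
# Component order: (bc, cb, ca, ac, ab, ba); each entry gives the
# index permutation producing x^alpha from x, per the remark above.
ORBIT = {
    "bc": (0, 1, 2, 3, 4, 5),
    "ac": (3, 2, 1, 0, 5, 4),
    "cb": (1, 0, 5, 4, 3, 2),
    "ba": (5, 4, 3, 2, 1, 0),
    "ab": (4, 5, 0, 1, 2, 3),
    "ca": (2, 3, 4, 5, 0, 1),
}

def orbit_element(x, alpha):
    """Return x^alpha for a triangle-point x given as a 6-tuple."""
    return tuple(x[i] for i in ORBIT[alpha])

def orbit_representative(x):
    """Pick the lexicographically least element of the orbit of x
    (one way to keep a single representative per orbit)."""
    return min(orbit_element(x, a) for a in ORBIT)
```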
\subsection{Relation to the articles [8, 9]}We expect that a computer program based on the algorithm \ref{d425} will be able to generate almost all of the theorems from the articles \cite{cos, cos1}.
\section{Practical implementation}\label{sss}
\begin{definition} Consider any $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $$c = \bigwedge_{i = 1}^N[x_{a_i}\in f_i(x_{b_{i,1}}, x_{b_{i, 2}}, \ldots, x_{b_{i,\text{ar} (f_i)}})]\in C_{\triangle}. $$ For each $l\geq 1$, denote by $O_l(c)$ the set of all $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations $c'$ which have the following form $$c' = \left(\bigwedge_{\substack{1\leq i\leq N \: :\: a_i< l}}[x_{a_i}\in f_i(x_{b_{i,1}}, x_{b_{i, 2}}, \ldots, x_{b_{i,\text{ar} (f_i)}})]\right)\wedge$$$$\wedge [x_{l}\in f(x_{b_{1}}, x_{b_{ 2}}, \ldots, x_{b_{\text{ar} (f)}})]\wedge$$$$\wedge\left(\bigwedge_{\substack{1\leq i\leq N \: :\: a_i > l}}[x_{a_i}\in f_i(x_{b_{i,1}}, x_{b_{i, 2}}, \ldots, x_{b_{i,\text{ar} (f_i)}})]\right)$$for some $\text{deg} = 1$, $\text{height} = l$ $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $[x_{l}\in f(x_{b_{1}}, x_{b_{ 2}}, \ldots, x_{b_{\text{ar} (f)}})]$. \end{definition}
The following Propositions \ref{jjjj}, \ref{jj} can be used for computing $\text{Gen}_{\triangle}(\cdot)$, $\text{CGen}_{\triangle}(\cdot)$, \\$\text{MaxCGen}_{\triangle}(\cdot)$, and $S_{\triangle}^7$ from Definitions \ref{cccgen}, \ref{d}.
For a $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $c\in C_{\triangle}^7$, the set $\{e\in C_{\triangle}^7\:\vert\: e\leq c, e\not\simeq c,\text{ and } e(ABC)\not=\varnothing\}$ can be easily computed in practice by a brute-force method; thus, to compute $\text{Gen}_{\triangle}(c)$ it is enough to develop a method for calculating the set $\{d\in C_{\triangle}^7\:\vert\: c\Leftrightarrow d\}$. Proposition \ref{jjjj} below describes such a method.
\begin{proposition}\label{jjjj} Consider any $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $c\in C_{\triangle}^7$. The set $\{d\in C_{\triangle}^7\:\vert\: c\Leftrightarrow d\}$ can be computed by the following steps:\begin{enumerate}
\item consider the set $D_1$ of all $d_1\in O_{7}(c)$ with $d_1 \Leftrightarrow c$
\item consider the set $D_2$ of all $d_2\in \cup_{d_1\in D_1}O_{6}(d_1)$ with $d_2 \Leftrightarrow c$
\item repeat step 2 with $D_2$ in place of $D_1$ and finish with the set $D_3 = \{d_3\in \cup_{d_2\in D_2}O_{5}(d_2)\:\vert\: d_3 \Leftrightarrow c\}$
\item repeat step 3 for $D_3, D_4,\ldots$ until we finish with the set $D_7$ which satisfies $D_7 = \{d\in C_{\triangle}^7\:\vert\: c\Leftrightarrow d\}$. \end{enumerate}
\end{proposition}
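The sweep of the proposition is generic: start from the top height and repeatedly replace one term per level, keeping only candidates equivalent to $c$. A Python sketch, where the callables `O(d, l)` (enumerating $O_l(d)$) and `equivalent` are hypothetical stand-ins for the machinery in the text:

```python
def equivalence_class(c, O, equivalent, height=7):
    """Compute {d : c <=> d} by the level-by-level sweep of the
    proposition: D_1 is filtered from O_height(c), and each D_{j+1}
    is filtered from the sets O_{height-j}(d) with d in D_j."""
    D = {d for d in O(c, height) if equivalent(d, c)}
    for l in range(height - 1, 0, -1):
        D = {d2 for d1 in D for d2 in O(d1, l) if equivalent(d2, c)}
    return D
```

Real configurations are not hashable integers, of course; the sketch only illustrates the iteration order (heights $7, 6, \ldots, 1$).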
To compute the sets $\text{CGen}_{\triangle}(\cdot)$, $\text{MaxCGen}_{\triangle}(\cdot)$, we need to use the method of computation of $\text{Gen}_{\triangle}(c)$, which was described previously, and also to develop a method for checking whether a given $\hbox{\scalebox{0.75}{$\triangle$}}$-theorem $c \Rightarrow r$ with $c, r\in C_{\triangle}^7$ is computably generalizable. Proposition \ref{jj} below describes such a method.
\begin{proposition}\label{jj} Consider any $\hbox{\scalebox{0.75}{$\triangle$}}$-theorem $c\Rightarrow r$ such that $c, r\in C_{\triangle}^7$. Then to decide whether $c \Rightarrow r$ is computably generalizable we perform the following steps:\begin{enumerate} \item Compute the set $$U_c := \{d\in C_{\triangle}^7\:\vert\: \text{deg}(d) = 1\text{ and }c' \Rightarrow d,\text{ for some }c'\subsetneq c\}$$
\item consider the set $D_1$ of all $d_1\in O_{7}(c)$ with $d_1 \Rightarrow r$ and $\text{terms}(d_1)\subseteq U_c$
\item if there exists $d_1\in D_1$ with $d_1\not\Leftrightarrow c$, then finish with the string ``$c \Rightarrow r$ is computably generalizable''. Otherwise consider the set $D_2$ of all $d_2\in \cup_{d_1\in D_1}O_{6}(d_1)$ with $d_2 \Rightarrow r$ and $\text{terms}(d_2)\subseteq U_c$
\item repeat step 3 for $d_2\in D_2$ instead of $d_1\in D_1$ and finish either with the string ``$c \Rightarrow r$ is computably generalizable'', or with the set $$D_3 = \{d_3\in \cup_{d_2\in D_2}O_{5}(d_2)\:\vert\: d_3 \Rightarrow r, \text{terms}(d_3)\subseteq U_c\}$$
\item repeat step 3 for $D_3, D_4,\ldots$ until we finish either with the string ``$c \Rightarrow r$ is computably generalizable'', or with the set $D_7$, and in the latter case return the string ``$c \Rightarrow r$ is not computably generalizable''. \end{enumerate}
\end{proposition}
Propositions \ref{jjjj}, \ref{jj} are trivial consequences of the following Proposition \ref{j}.
\begin{proposition}\label{j} Consider any $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations $$c = \bigwedge_{i = 1}^N[x_{a_i}\in f_i(x_{b_{i,1}}, x_{b_{i, 2}}, \ldots, x_{b_{i,\text{ar} (f_i)}})]\in C_{\triangle}, $$ $$c' = \bigwedge_{i = 1}^{N'}[x_{a_i'}\in f_i'(x_{b_{i,1}'}, x_{b_{i, 2}'}, \ldots, x_{b_{i,\text{ar} (f_i')}'})]\in C_{\triangle}, $$ with $c\Rightarrow c'$. Then we have that for each natural $l$, the $\hbox{\scalebox{0.75}{$\triangle$}}$-configurations\begin{enumerate}
\item $c(l) = \displaystyle{\bigwedge_{\substack{i = 1\\ a_i\leq l}}^N[x_{a_i}\in f_i(x_{b_{i,1}}, x_{b_{i, 2}}, \ldots, x_{b_{i,\text{ar} (f_i)}})]}$
\item $c'(l) = \displaystyle{\bigwedge_{\substack{i = 1\\ a_i'\leq l}}^{N'}[x_{a_i'}\in f_i'(x_{b_{i,1}'}, x_{b_{i, 2}'}, \ldots, x_{b_{i,\text{ar} (f_i')}'})]}$
\item $\displaystyle{c''(l) = \bigwedge_{\substack{i = 1\\ a_i'> l}}^{N'}[x_{a_i'}\in f_i'(x_{b_{i,1}'}, x_{b_{i, 2}'}, \ldots, x_{b_{i,\text{ar} (f_i')}'})]}$
\item $d(l) = c(l)\wedge c''(l)$
\end{enumerate} are such that $c(l)\Rightarrow c'(l)$, $c\Rightarrow d(l)\Rightarrow c'$.
\end{proposition}\begin{proof}Proposition \ref{j} follows from the fact that for each $l\geq 1$ and each $\hbox{\scalebox{0.75}{$\triangle$}}$-configuration $$c = \bigwedge_{i = 1}^N[x_{a_i}\in f_i(x_{b_{i,1}}, x_{b_{i, 2}}, \ldots, x_{b_{i,\text{ar} (f_i)}})]\in C_{\triangle}, $$the variables $x_{i}$ inside $c$ with $1\leq i\leq l$ are independent of the variables $x_j$ inside $c$ with $j > l$.\end{proof}
\section{Appendix} This appendix contains the complete lists of triangle centers and $\hbox{\scalebox{0.75}{$\triangle$}}$-functions that we use in this article.
\begin{lis}\label{ltc}
The list of triangle centers
\begin{enumerate}
\item \text{In(ex)center} $I$, $I(A, B, C)$ -- the incenter of $ABC$ if $A$, $B$, $C$ are placed clockwise on the plane $\mathbb{R}^2$ (or the $A$-excenter of $ABC$ if $A$, $B$, $C$ are placed anti-clockwise on $\mathbb{R}^2$).
\item \text{Centroid} $G$, $G(A, B, C)$ -- the centroid of $ABC$.
\item \text{Circumcenter} $O$, $O(A, B, C)$ -- the circumcenter of $ABC$.
\item \text{Orthocenter} $H$, $H(A, B, C)$ -- the orthocenter of $ABC$.
\item \text{Nine-point center} $N$, $N(A, B, C)$ -- the nine-point center of $ABC$.
\item \text{Symmedian point} $S$, $S(A, B, C)$ -- the symmedian point of $ABC$.
\item \text{First(second) Fermat point} $F$, $F(A, B, C)$ -- the first Fermat point of $ABC$ if $A$, $B$, $C$ are placed clockwise on the plane $\mathbb{R}^2$ (or the second Fermat point of $ABC$ if $A$, $B$, $C$ are placed anti-clockwise on $\mathbb{R}^2$).
\item \text{First(second) Isodynamic point} $I_s$, $I_s(A, B, C)$ -- the first isodynamic point of $ABC$ if $A$, $B$, $C$ are placed clockwise on the plane $\mathbb{R}^2$ (or the second isodynamic point of $ABC$ if $A$, $B$, $C$ are placed anti-clockwise on $\mathbb{R}^2$).
\item \text{Inner(outer) Feuerbach point} $F_e$, $F_e(A, B, C)$ -- the inner Feuerbach point of $ABC$ if $A$, $B$, $C$ are placed clockwise on the plane $\mathbb{R}^2$ (or the $A$-external Feuerbach point of $ABC$ if $A$, $B$, $C$ are placed anti-clockwise on $\mathbb{R}^2$).
\item \text{Euler reflection point} $E$, $E(A, B, C)$ -- the Euler reflection point of $ABC$.
\item \text{Inner(outer) Apollonian point} $A_p$, $A_p(A, B, C)$ -- the $A$-vertex of the inner Apollonian triangle of $ABC$ if $A$, $B$, $C$ are placed clockwise on the plane $\mathbb{R}^2$ (or the $A$-vertex of the outer Apollonian triangle of $ABC$ if $A$, $B$, $C$ are placed anti-clockwise on $\mathbb{R}^2$).
\item \text{Inner(outer) Morley point} $M$, $M(A, B, C)$ -- the $A$-vertex of the inner Morley triangle of $ABC$ if $A$, $B$, $C$ are placed clockwise on the plane $\mathbb{R}^2$ (or the $A$-vertex of the outer Morley triangle of $ABC$ if $A$, $B$, $C$ are placed anti-clockwise on $\mathbb{R}^2$).
\item \text{Isogonal point} $Iso$, $Iso(A, B, C, D)$ -- the isogonal conjugate of $D$ with respect to $ABC$.
\item Other similar triangle centers and lines.
\end{enumerate}
\end{lis}
\begin{lis}\label{ltf}
The list of $\hbox{\scalebox{0.75}{$\triangle$}}$-functions
\begin{enumerate} \item $f_{\triangle, 1} = $ the set of all $\hbox{\scalebox{0.75}{$\triangle$}}$-points ($f_{\triangle, 1}$ has arity $0$ and, thus, is a set of $\hbox{\scalebox{0.75}{$\triangle$}}$-points. Same can be said about $f_{\triangle, i}$, $1\leq i\leq 8$). \item $f_{\triangle, 2} = \{x\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point} \:\vert\: x_{bc} = x_{cb}, x_{ca} = x_{ac}, x_{ab} = x_{ba}\}$. \item $f_{\triangle, 3} = \left\{x\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point} \left\vert\begin{array}{cl} x_{bc} = x_{cb}, x_{ca} = x_{ac}, x_{ab} = x_{ba}
\\\text{and the triangle }x_{bc}x_{ca}x_{ab}\text{ is equilateral}\end{array}\right.\right\}$. \item $f_{\triangle, 4} = \{x\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\:\vert\: x_{bc} = x_{ca} = x_{ab}, x_{cb} = x_{ac} = x_{ba}\}$. \item $f_{\triangle, 5} = \{x\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\:\vert\: x_{bc} = \ldots = x_{ba}\}$.
\item $f_{\triangle, 6} = \{x\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\:\vert\:\text{points }x_{bc},\ldots, x_{ba}\text{ are collinear}\}$.
\item $f_{\triangle, 7} = \{x\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\:\vert\: x_{bc},\ldots, x_{ba}\text{ lie on a conic}\}$.
\item $f_{\triangle, 8} = \{x\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\:\vert\: x_{bc},\ldots, x_{ba}\text{ lie on a circle}\}$.
\item $f_{\triangle, 9}( x) = \{y\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\:\vert\: \text{lines }x_{bc}y_{bc}, \ldots, x_{ba}y_{ba}\text{ are concurrent}\}$. \item $f_{\triangle, 10}( x) = \left\{y\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} \text{the midpoints of segments}
\\x_{bc}y_{bc}, \ldots, x_{ba}y_{ba}\text{ are collinear}
\end{array}\right.\right\}$.
\item $f_{\triangle, 11}( x) = \{y\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\:\vert\: y_{bc} = x_{bc},\ldots, y_{ba} = x_{ba}\text{ i.e. }y = x\}$.
\item $f_{\triangle, 12}( x) = \left\{y\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} y_{bc} = y_{cb}, y_{ca} = y_{ac}, y_{ab} = y_{ba}\text{ and}
\\\text{the triangle }x_{bc}x_{cb}\cap x_{ca}x_{ac}\cap x_{ab}x_{ba}\\\text{is perspective to }y_{bc}y_{ca}y_{ab}
\end{array}\right.\right\}$.
\item $f_{\triangle, 13}( x) = \left\{y\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} \text{the triangle }x_{bc}x_{cb}\cap x_{ca}x_{ac}\cap x_{ab}x_{ba}\\\text{is orthologic to }y_{bc}y_{ca}y_{ab}
\end{array}\right.\right\}$.
\item $f_{\triangle, 14}( x) = \left\{y\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} y_{bc} = \ldots = y_{ba}\text{ and }y_{bc}\text{ lies on the}\\\text{circumcircle of the triangle}
\\x_{bc}x_{cb}\cap x_{ca}x_{ac}\cap x_{ab}x_{ba}
\end{array}\right.\right\}$.
\item $f_{\triangle, 15}( x, y) = \left\{z\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} z_{bc},\ldots, z_{ba}\text{ coincides with the}
\\\text{midpoints of }x_{bc}y_{bc},\ldots, x_{ba}y_{ba}\text{ resp.}
\end{array}\right.\right\}$. \item $f_{\triangle, 16}( x, y) = \left\{z\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} z_{bc},\ldots, z_{ba}\text{ lie on the lines}\\x_{bc}y_{bc},\ldots, x_{ba}y_{ba},\text{ resp.}
\end{array}\right.\right\}$.
\item $f_{\triangle, 17}( x, y, z) = \left\{t\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} t_{bc},\ldots, t_{ba}\text{ are the projections}
\\\text{of }x_{bc}, \ldots, x_{ba}\text{ on }y_{bc}z_{bc},\ldots, y_{ba}z_{ba}\end{array}\right.\right\}$.
\item $f_{\triangle, 18}( x, y, z) = \left\{t\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} t_{bc},\ldots, t_{ba}\text{ are the reflections}
\\\text{of }x_{bc},\ldots, x_{ba}\text{ wrt }y_{bc}z_{bc},\ldots, y_{ba}z_{ba}\end{array}\right.\right\}$.
\item $f_{\triangle, {19, i}}( x, y, z) =\left\{t\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} t_{bc} = X_i(x_{bc}, y_{bc}, z_{bc}), t_{cb} = X_i(x_{cb}, z_{cb}, y_{cb}),
\\ t_{ca} = X_i(z_{ca}, x_{ca}, y_{ca}), t_{ac} = X_i(y_{ac}, x_{ac}, z_{ac}),
\\ t_{ab} = X_i(y_{ab}, z_{ab}, x_{ab}), t_{ba} = X_i(z_{ba}, y_{ba}, x_{ba})\end{array}\right.\right\}$,\\where $1\leq i\leq 12$ and $X_i$ denotes the $i$-th center from the list \ref{l1}.
\item $f_{\triangle, {20}}( x, y, z, t) = \left\{v\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} v_{bc} = x_{bc}y_{bc}\cap z_{bc}t_{bc},\ldots,\\ v_{ba} = x_{ba}y_{ba}\cap z_{ba}t_{ba}
\end{array}\right.\right\}$.
\item $f_{\triangle, 21}( x, y, z, t) = \left\{\begin{array}{cl}v\text{ is a}\\\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\end{array}\left\vert\begin{array}{cl} v_{bc} = X_{13}(x_{bc}, y_{bc}, z_{bc}, t_{bc}), v_{cb} = X_{13}(x_{cb}, z_{cb}, y_{cb}, t_{cb}),
\\v_{ca} = X_{13}(z_{ca}, x_{ca}, y_{ca}, t_{ca}), v_{ac} = X_{13}(y_{ac}, x_{ac}, z_{ac}, t_{ac}), \\v_{ab} = X_{13}(y_{ab}, z_{ab}, x_{ab}, t_{ab}), v_{ba} = X_{13}(z_{ba}, y_{ba}, x_{ba}, t_{ba})
\end{array}\right.\right\}$,\\where $X_{13}$ denotes the $13$-th center from the list \ref{l1}.
\item $f_{\triangle, 22}(x, y, z, t) = \left\{v\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl}\text{points }v_{bc},\ldots, v_{ba}\text{ lie on the pivotal}
\\\text{isocubics of triangles}
\\x_{bc}y_{bc}z_{bc},\ldots, x_{ba}y_{ba}z_{ba}\text{ with pivots}\\t_{bc},\ldots, t_{ba},\text{ respectively}
\end{array}\right.\right\}$.
\item $f_{\triangle, 23}(x, y, z) = \left\{t\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\:\vert\: x_{bc}y_{bc}z_{bc}t_{bc},\ldots, x_{ba}y_{ba}z_{ba}t_{ba}\text{ are cyclic}\right\}$.
\item $f_{\triangle, 24}( x, y) = \left\{z\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} z_{bc}\text{ lies on the rectangular hyperbola}\\\text{passing through the vertices of the}
\\\text{triangle }x_{bc}x_{cb}\cap x_{ca}x_{ac}\cap x_{ab}x_{ba}
\\\text{and the point }y_{bc},
\\\text{and similarly for }z_{cb},\ldots, z_{ba}
\end{array}\right.\right\}$.
\item Functions of the form $f_{\triangle, n, \alpha, \beta, \gamma}(x, y, z) := f_{\triangle, n}(x^{\alpha}, y^{\beta}, z^{\gamma})$, $1\leq n\leq 24$, where $\alpha, \beta, \gamma$ are any symbols from the set $\{bc, cb, ca, ac, ab, ba\}$ and for each $\hbox{\scalebox{0.75}{$\triangle$}}$-point $x$,\\$x^{bc} := (x_{bc}, x_{cb}, x_{ca}, x_{ac}, x_{ab}, x_{ba}),\quad x^{ac} := (x_{ac}, x_{ca}, x_{cb}, x_{bc}, x_{ba}, x_{ab}),$\\$x^{cb} := (x_{cb}, x_{bc}, x_{ba}, x_{ab}, x_{ac}, x_{ca}),\quad x^{ba} := (x_{ba}, x_{ab}, x_{ac}, x_{ca}, x_{cb}, x_{bc}),$\\$x^{ab} := (x_{ab}, x_{ba}, x_{bc}, x_{cb}, x_{ca}, x_{ac}),\quad x^{ca} := (x_{ca}, x_{ac}, x_{ab}, x_{ba}, x_{bc}, x_{cb})$,\\ denotes the orbit of $x$.
\item Other similar functions $f_{\triangle, i}$; for example, one can consider
$f_{\triangle, i} = \left\{x\text{ is a }\hbox{\scalebox{0.75}{$\triangle$}}\text{-point}\left\vert\begin{array}{cl} x_{bc}x_{ca}x_{ab}\text{ is similar}\\\text{(perspective, orthologic) to }x_{cb}x_{ac}x_{ba}\end{array}\right.\right\}$.
\end{enumerate}
\end{lis}
\addcontentsline{toc}{section}{Bibliography}
\end{document} |
\begin{document}
\title{\textbf{On the Well-posedness for the Chen-Lee equation in periodic Sobolev spaces}}
\maketitle
\begin{abstract} We prove that the initial value problem associated with a perturbation of the Benjamin-Ono equation known as the Chen-Lee equation, $u_t+uu_x+\beta \mathcal{H}u_{xx}+\eta (\mathcal{H}u_x - u_{xx})=0$, where $x\in \mathbb{T}$, $t> 0$, $\eta >0$ and $\mathcal{H}$ denotes the usual Hilbert transform, is locally and globally well-posed in the Sobolev spaces $H^s(\mathbb{T})$ for any $s>-\frac{1}{2}$. We also establish some ill-posedness results when $s<-1$. \end{abstract}
\textit{Keywords:} Cauchy problem, local and global well-posedness, Benjamin-Ono equation.
\section{Introduction}\label{section}
The goal in this paper is to establish well-posedness results for the Cauchy problem \begin{equation}\label{cl} CL \left\{ \begin{aligned} u_t+uu_x+\beta \mathcal{H}u_{xx} + \eta (\mathcal{H}u_x-u_{xx})&=0 \qquad t>0, \quad x\in \mathbb{T},\\ u(x,0)&=\phi(x), \end{aligned} \right. \end{equation} where $\beta,\,\eta >0$ are constants. In the equation, $\mathcal{H}$ denotes the usual Hilbert transform given by \begin{equation*} \mathcal{H}f(x)=\frac{1}{2\pi}\text{p.v.}\int_{-\pi}^{\pi} \cot\Bigl(\frac{x-y}{2}\Bigr)f(y) \ dy=\frac{i}{2}(\text{sgn}(k)\widehat{f}(k))^{\vee}(x);\;\,k \in \mathbb{Z},\, f\in \mathcal{P}. \end{equation*} This equation was first introduced by H. H. Chen and Y. C. Lee in \cite{CL} to describe fluid and plasma turbulence. It is worth remarking that the fourth and fifth terms represent, respectively, instability and dissipation. Several authors have studied this equation from a numerical standpoint. For example, H. H. Chen, Y. C. Lee and S. Qian \cite{clq, clq1}, and B.-F. Feng and T. Kawahara \cite{FeKa}, investigated the initial value problem as well as stationary solitary and periodic waves of the equation. Also, R. Pastr\'an \cite{P} proved, using the Fourier restriction norm method, that the initial value problem $CL$ is locally well-posed in the Sobolev spaces $H^s(\mathbb{R})$ for any $s>-1/2$, globally well-posed in $H^s(\mathbb{R})$ when $s\geq 0$, and that one cannot solve the Cauchy problem by a Picard iterative method implemented on the integral formulation of $CL$ for initial data in the Sobolev space $H^s(\mathbb{R})$, $s<-1$.
We say that the Cauchy problem or initial value problem (\ref{cl}) is \textit{locally well-posed} in $H^s(\mathbb{T})$ if for any $\phi^* \in H^s(\mathbb{T})$ there exist a time $T>0$, an open ball $B$ in $H^s(\mathbb{T})$ containing $\phi^*$, and a subset $\mathfrak{X}_T$ of $C([0,T]; H^s(\mathbb{T}))$ such that for each $\phi \in B$ there exists a unique solution $u\in \mathfrak{X}_T$ of the integral equation associated with the Cauchy problem, and furthermore the map $\phi \mapsto u$ is continuous from $B$ to $\mathfrak{X}_T$. If we can take $T$ arbitrarily large, we say that the initial value problem is \textit{globally well-posed}.
We show that the initial value problem $CL$ is locally and globally well-posed in the Sobolev spaces $H^s(\mathbb{T})$ for any $s>-1/2$. Since the dissipation of the Chen-Lee equation is in some sense ``stronger'' than the dispersion, we will use the purely dissipative methods of Dix for the KdV-B equation \cite{Dix}, see also Duque \cite{Duque}, Esfahani \cite{Amin} and Pilod \cite{KDV}, which consist in applying a fixed point theorem to the integral equation associated with $CL$ in an adequate space $X_T^s$ (see (\ref{spacexts}) for the exact definition). We also prove that one cannot solve the Cauchy problem by a Picard iterative method implemented on the integral formulation of $CL$ for initial data in the Sobolev space $H^s(\mathbb{T})$, $s<-1$. In particular, the methods introduced by Bourgain \cite{Bourgain} and Kenig, Ponce and Vega \cite{KPV} for the KdV equation cannot be used for $CL$ with initial data in the Sobolev space $H^s(\mathbb{T})$ for $s<-1$. This kind of ill-posedness result is weaker than the loss of uniqueness proved by Dix in the case of the Burgers equation.
\subsection{Definitions and Notations}
Given $a$, $b$ positive numbers, $a\lesssim b$ means that there exists a positive constant $C$ such that $a\leq C b$. And we denote $a\sim b$ when, $a \lesssim b$ and $b \lesssim a$. We will also denote $a\lesssim_{\lambda} b$ or $b\lesssim_{\lambda} a$, if the constant involved depends on some parameter $\lambda$. Given a Banach space $X$, we denote by $\nor{\cdot}{X}$ the norm in $X$. We will understand $\langle \cdot \rangle = (1+|\cdot|^2)^{1/2}$.
$\mathcal{P}=C^{\infty}(\mathbb{T})$ denotes the space of all infinitely differentiable $2\pi$-periodic functions and $\mathcal{P}'$ will denote the space of periodic distributions, i.e., the topological dual of $\mathcal{P}$. For $f\in \mathcal{P}'$ we denote by $\widehat{f}$ or $\mathcal{F}(f)$ the Fourier transform of $f$, $\Hat{f}=\left(\Hat{f}(k)\right)_{k\in\mathbb{Z}}$, where $\Hat{f}(k)=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-ik\cdot z}f(z)\,dz,$ for all $k\in\mathbb{Z}$. We will use the Sobolev spaces $H^s(\mathbb{T})$ equipped with the norm $$\nor{\phi}{H^s}=(2\pi)^{\frac{1}{2}} \nor{(1+k^2)^{s/2}\Hat{\phi}(k)}{l^2(\mathbb{Z})}.$$ We will denote by $\widehat{u}(k,t)$, $k\in\mathbb{Z}$, the Fourier coefficient of $u(t)$ with respect to the variable $x$. Let $U$ be the unitary group in $H^s(\mathbb{T})$, $s\in \mathbb{R}$, generated by the skew-symmetric operator $-\beta \mathcal{H}\partial_x^2 $, which defines the free evolution of the Benjamin-Ono equation, that is, \begin{equation}\label{unitarygroup} U(t)=\exp (itq(D_x)), \;\; U(t)f= \Bigl(e^{itq(\xi)}\Hat{f}\Bigr)^{\vee}\; \text{with}\;\, f\in H^s(\mathbb{T}), \, t\in \mathbb{R}, \end{equation}
where $q(D_x)$ is the Fourier multiplier with symbol $q(k)=\beta \,k \,|k|$, for all $k\in \mathbb{Z}$. Since the linear symbol of the equation in (\ref{cl}) is $iq(k)+p(k)$, where $p(k)=\eta \,(k^2-|k|)$ for all $k \in \mathbb{Z}$, we also denote by $S(t)=e^{-(\beta \mathcal{H}\partial_x^2 +\eta (\mathcal{H}\partial_x-\partial_x^2))t}$, for all $t\geq 0$, the semigroup in $H^s(\mathbb{T})$ generated by the operator $-(\beta \mathcal{H}\partial_x^2 +\eta (\mathcal{H}\partial_x-\partial_x^2))$, i.e., \begin{align} S(t)f=\Bigl( e^{i\,q(\xi)\,t -p(\xi)\,t}\Hat{f} \Bigr)^{\vee}\quad \text{for}\quad f\in H^s(\mathbb{T}),\;t\geq 0. \label{semigrupos} \end{align} We define the following Banach spaces, which are inspired by an adaptation, made by Esfahani in \cite{Amin}, of the spaces originally presented by Dix in \cite{Dix}. \begin{defin} Let $0\leq T\leq 1$ and $s < 0$. We consider $X_T^s$ as the class of all the functions $u\in C\left([0,T];H^s(\mathbb{T})\right)$ such that \begin{equation}\label{spacexts}
\left\|u\right\|_{X_T^s}:=\sup_{t\in(0,T]}\left(\left\|u(t)\right\|_{H^s}+t^{|s|/2}\left\|u(t)\right\|_{L^2}\right) < \infty. \end{equation} \end{defin}
\subsection{ Main Results} We will mainly work on the integral formulation of the $CL$ equation, \begin{equation}\label{intequation} u(t)=S(t)\phi - \int_0^tS(t-t')[u(t')u_x(t')]\,dt' ,\quad t\geq 0. \end{equation} \begin{teorem}[Local well-posedness]\label{mainresult} Let $\beta \geq 0$, $\eta >0$ and $s>-1/2$. Then for any $\phi \in H^s(\mathbb{T})$ there exists $T=T(\nor{\phi}{H^s})>0$ and a unique solution $u$ of the integral equation (\ref{intequation}) satisfying \begin{align*} &u\in C([0,T],H^s(\mathbb{T}))\cap C((0,T],H^{\infty}(\mathbb{T})). \end{align*} Moreover, the flow map $\phi \mapsto u(t)$ is smooth from $H^s(\mathbb{T})$ to $C([0,T],H^s(\mathbb{T}))\cap C((0,T],H^{\infty}(\mathbb{T}))\cap \mathfrak{X}_T^s$. \end{teorem} \begin{teorem}[Global well-posedness]\label{globalresult}
Let $s>-1/2$ and $\phi \in H^s(\mathbb{T})$. Then the supremum of all $T>0$ for which all the assertions of Theorem \ref{mainresult} hold is infinity. \end{teorem} It is known that Banach's Fixed Point Theorem cannot be applied to the Benjamin-Ono equation \cite{MolSauTzv}. Here we prove that, for $s<-1$, there does not exist a $T>0$ such that (\ref{cl}) admits a unique local solution defined on the interval $[0,T]$ whose flow-map data-solution $\phi \mapsto u(t)$, $t\in [0,T]$, is $C^2$ differentiable at the origin from $H^s(\mathbb{T})$ to $H^s(\mathbb{T})$. As a consequence, we cannot solve the Cauchy problem for the $CL$ equation by a Picard iterative method implemented on the integral formulation (\ref{intequation}), at least in the Sobolev spaces $H^s(\mathbb{T})$ with $s<-1$. \begin{teorem}\label{malpuestodos} Fix $s<-1$. Then there does not exist a $T>0$ such that (\ref{cl}) admits a unique local solution defined on the interval $[0,T]$ and such that the flow-map data-solution \begin{equation} \phi \longmapsto u(t), \qquad t\in [0,T], \end{equation} for (\ref{cl}) is $C^2$ differentiable at zero from $H^s(\mathbb{T})$ to $H^s(\mathbb{T})$. \end{teorem} A direct corollary of Theorem \ref{malpuestodos} is the following statement. \begin{teorem}\label{illposed} The flow map data-solution for the Chen-Lee equation is not $C^2$ from $H^s(\mathbb{T})$ to $H^s(\mathbb{T})$, if $s<-1$. \end{teorem} The paper is organized as follows: Section $2$ presents some linear estimates. Section $3$ is devoted to establishing a bilinear estimate in the space $X_{T}^s$. Theorems \ref{mainresult} and \ref{globalresult} will be proved in Section $4$, and finally, the proof of Theorem \ref{malpuestodos} will be given in Section $5$.
\setcounter{equation}{0}
\section{ Linear Estimates } We start giving the following estimates.
\begin{lema}\label{LLE0} Let $\lambda>0$, $\eta>0$ and $t>0$ be given. Then
$$\left\||tk^2|^{\lambda} e^{\eta\left(|k|-k^2 \right)t}\right\|_{l^{\infty}(\mathbb{Z})}\lesssim_{\lambda}\left(t^{\lambda}+\eta^{-\lambda}\right) e^{\frac{\eta}{8}\left(t+t^{\frac{1}{2}}\sqrt{t+\frac{16\lambda}{\eta}}\right)}.$$ \end{lema} \begin{proof} We have the following inequality
$$|tk^2|^{\lambda} e^{\eta\left(|k|-k^2 \right)t}\leq \sup_{x\in \mathbb{R}} |x|^{2\lambda} e^{\eta\left(|x|t^{1/2}-x^2 \right)}, \qquad \forall k\in \mathbb{Z}.$$ Let $w_t(x)= x^{2\lambda} e^{\eta\left(xt^{1/2}-x^2 \right)}$, for all $x\geq 0$. Note that $w_t(x)$ tends to $0$ as $x\to \infty$, and $$w_t'(x_1)=0 \qquad \Longleftrightarrow \qquad x_{1}=\frac{1}{4}\left(t^{\frac{1}{2}}+\sqrt{t+\frac{16\lambda}{\eta}}\right).$$ Therefore, the maximum of $w_t$ is attained at $x_1$. So, we obtain $$w_t(x_1)\lesssim_{\lambda} \left(t^{\lambda}+\eta^{-\lambda}\right) e^{\frac{\eta}{8}\left(t+t^{\frac{1}{2}}\sqrt{t+\frac{16\lambda}{\eta}}\right)}.$$ This inequality completes the proof. \end{proof} \begin{lema}\label{LLE1} Let $\lambda\geq 0$, $\eta>0$ and $t>0$ be given. Then
$$\left\||k|^{\lambda} e^{\eta\left(|k|-k^2 \right)t}\right\|_{l^2(\mathbb{Z})}\lesssim_{\lambda} \Upsilon_{\eta}^{\lambda}(t),$$ where $$\Upsilon_{\eta}^{\lambda}(t):=1+\frac{1}{(\eta t)^{\frac{\lambda}{2}}}+\frac{1}{(\eta t)^{\frac{1+2\lambda}{4}}}.$$ \end{lema} \begin{proof}
From the fact that $\eta\left(|k|-k^2\right)t\leq -\frac{\eta k^2t}{2}$, for all $k\in \mathbb{Z}$ with $|k|\geq 2$, we deduce \begin{equation}\label{LE1}
\left\||k|^{\lambda} e^{\eta\left(|k|-k^2 \right)t}\right\|_{l^2(\mathbb{Z})}^2=\sum_{k=-\infty}^{\infty} |k|^{2\lambda} e^{2\eta \left(|k|-k^2\right)t}\lesssim 1+\sum_{k=2}^{\infty} k^{2\lambda} e^{-\eta k^2t}. \end{equation} Let $h\left(x\right):=x^{2\lambda} e^{-\eta x^2t}$ for all $x > 0$. We observe that \begin{equation} h'\left(x\right)= 2\left(\lambda-\eta t x^2\right)x^{2\lambda-1}e^{-\eta x^2 t}. \label{LE2} \end{equation} Thus, $h(x)$ attains its maximum value at $x_{\max}=\sqrt{\frac{\lambda}{\eta t}}$. If $x_{\max}\leq 1$, \eqref{LE2} implies that $h$ is a nonincreasing function on the interval $[1,\infty )$. Hence, with the change of variable $u=\eta x^2t$, we obtain \begin{align} \sum_{k=2}^{\infty}k^{2\lambda} e^{-\eta k^2t}\leq \int_{1}^{\infty} x^{2\lambda}e^{-\eta x^2 t} dx &= \frac{1}{2}\left(\frac{1}{\eta t}\right)^{\frac{1+2\lambda}{2}}\int_{\eta t}^{\infty} u^{\frac{2\lambda-1}{2}}e^{-u} du \nonumber \\
& \leq \frac{1}{2} \left(\frac{1}{\eta t}\right)^{\frac{1+2\lambda}{2}}\Gamma\left(\frac{1+2\lambda}{2}\right). \label{LE3} \end{align}
On the other hand, if $x_{\max}>1$, we have from \eqref{LE2} that $h(x)$ is a nondecreasing function on the interval $[1,x_{\max})$ and nonincreasing on the interval $\left(x_{\max},\infty \right)$, but this implies arguing as above that \begin{align} \sum_{k=2}^{\infty} k^{2\lambda} e^{-\eta k^2t} &\leq \left(\frac{\lambda}{\eta t}\right)^{\lambda}e^{-\lambda}+\int_{2}^{\infty} x^{2\lambda}e^{-\eta x^2 t}dx \nonumber\\
& \leq \left(\frac{\lambda}{\eta t}\right)^{\lambda}e^{-\lambda}+\left(\frac{1}{\eta t}\right)^{\frac{1+2\lambda}{2}}\Gamma\left(\frac{1+2\lambda}{2}\right). \label{LE4} \end{align} Combining \eqref{LE3} and \eqref{LE4}, and taking the square root of the resulting expression, we conclude the lemma. \end{proof} We now establish the following linear estimates. \begin{prop} \label{PROP1} Let $0< T\leq 1$, $\eta>0$, $s \in \mathbb{R}$ and $\phi\in H^s(\mathbb{T})$. Then \begin{equation}\label{LE5}
\sup_{t\in[0,T]}\left\|S(t)\phi\right\|_{H^s}\leq \left\|\phi\right\|_{H^s}. \end{equation} Moreover, if $s< 0$, \begin{equation}\label{LE6}
\sup_{t\in[0,T]}t^{\frac{|s|}{2}}\left\|S(t)\phi\right\|_{L^2}\lesssim_{s}f_{s,\eta}(T)\left\|\phi\right\|_{H^s}, \end{equation}
where $$f_{s,\eta}(t)=1+ \left(t^{\frac{|s|}{2}}+\eta^{-\frac{|s|}{2}}\right) e^{\frac{\eta}{8}\left(t+t^{\frac{1}{2}}\sqrt{t+\frac{8|s|}{\eta}}\right)},$$ is a nondecreasing function on $[0,1]$. \end{prop}
\begin{proof}
Since $\eta \left(|k|-k^2\right)t\leq 0$ for every $k\in \mathbb{Z}$ and $t\geq 0$, we see that \begin{equation}\label{LE6a}
\left\|S(t)\phi\right\|_{H^s} = (2\pi)^{\frac{1}{2}}\left\|\langle k \rangle^s e^{\eta(|k|-k^2)t}\widehat{\phi}(k)\right\|_{l^2(\mathbb{Z})}\leq \left\|\phi\right\|_{H^s}. \end{equation} \eqref{LE6a} implies inequality \eqref{LE5}. To prove \eqref{LE6}, we assume that $s< 0$. Since $0 \leq T \leq 1$, we have
$$t\leq \frac{(1+k^2t)}{(1+k^2)}, \text{ for all }k\in \mathbb{Z}, \ t\in [0,T].$$ So, it follows that
\begin{equation}\label{LE7}
t^{|s|/2}\left\|S(t)\phi\right\|_{L^2} \leq \left\|\left\langle t^{1/2}k\right\rangle^{|s|} e^{\eta\left(|k|-k^2\right)t}\right\|_{l^{\infty}(\mathbb{Z})} \left\|\phi\right\|_{H^s}. \end{equation} Using Lemma \ref{LLE0} we obtain \begin{align}\label{LE8}
\left\langle t^{1/2}k\right\rangle^{|s|} e^{\eta\left(|k|-k^2\right)t} & \lesssim_{s} 1+(tk^2)^{\frac{|s|}{2}} e^{\eta\left(|k|-k^2\right)t} \nonumber \\
& \lesssim_{s} 1+ \left(t^{\frac{|s|}{2}}+\eta^{-\frac{|s|}{2}}\right) e^{\frac{\eta}{8}\left(t+t^{\frac{1}{2}}\sqrt{t+\frac{8|s|}{\eta}}\right)}. \end{align} Therefore, we conclude \eqref{LE6} from \eqref{LE7} and \eqref{LE8}. \end{proof}
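The bound of Lemma \ref{LLE0}, which drives the smoothing estimate \eqref{LE6}, is easy to sanity-check numerically. The following Python sketch compares the supremum over integer frequencies with the right-hand side of the lemma; the sample values of $\lambda$, $\eta$, $t$ are arbitrary, and the implicit constant is taken as $1$ (which happens to suffice for these values).

```python
import math

def lhs_max(lam, eta, t, kmax=2000):
    """sup over integers k of |t k^2|^lam * exp(eta (|k| - k^2) t)."""
    return max((t * k * k) ** lam * math.exp(eta * (k - k * k) * t)
               for k in range(kmax + 1))          # expression depends only on |k|

def rhs_bound(lam, eta, t):
    """Right-hand side of Lemma LLE0, with the implicit constant taken as 1."""
    return (t ** lam + eta ** (-lam)) * math.exp(
        (eta / 8.0) * (t + math.sqrt(t) * math.sqrt(t + 16.0 * lam / eta)))

for lam, eta, t in [(0.25, 1.0, 0.5), (1.0, 1.0, 0.5), (2.0, 0.5, 0.1)]:
    assert lhs_max(lam, eta, t) <= rhs_bound(lam, eta, t)
```

The truncation `kmax=2000` is harmless here, since the Gaussian-type factor makes the tail vanish numerically.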
\section{Bilinear estimate}
In this section, we establish the crucial bilinear estimates.
\begin{prop}\label{PROP2} Let $0< T\leq 1$ and $-\frac{1}{2}<s < 0$. Then \begin{equation}
\left\|\int_{0}^t S(t-t')\partial_x(uv)(t') \ dt'\right\|_{X_T^s} \lesssim_{s,\eta} T^{\frac{1+2s}{4}}\left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s}, \end{equation} for all $u,v\in X_T^s$. \end{prop}
\begin{proof}
Since $s< 0$, it follows that $(1+k^2)^{ \frac{s}{2}}\leq |k|^s$, for every nonzero integer $k$. So, we deduce that \begin{equation}\label{BE0} \begin{aligned}
& \left\|\int_{0}^t S(t-t')\partial_x(uv)(t') \ dt'\right\|_{H^s} \\
& \hspace{30pt} \leq (2\pi)^{1/2} \int_{0}^t \left\|\left\langle k \right\rangle^s e^{\eta\left(|k|-k^2\right)(t-t')} \left(\partial_x(uv)(t')\right)^{\wedge}(k) \right\|_{l^2(\mathbb{Z})} \ dt' \\
& \hspace{30pt} \leq (2\pi)^{1/2} \int_{0}^t \left\||k|^{1+s}e^{\eta\left(|k|-k^2\right)(t-t')}\right\|_{l^2(\mathbb{Z})}\left\| \widehat{u(t')}\ast \widehat{v(t')}(k) \right\|_{l^{\infty}(\mathbb{Z})} \ dt'. \end{aligned} \end{equation} The Young inequality implies that \begin{equation}\label{BE1}
\left\| \widehat{u(t')}\ast\widehat{v(t')}(k) \right\|_{l^{\infty}(\mathbb{Z})}\leq \frac{1}{2\pi}\left( \frac{\left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s}}{|t'|^{|s|}} \right), \end{equation} hence, we obtain \begin{equation}\label{BE2} \begin{aligned}
& \left\|\int_{0}^t S(t-t')\partial_x(uv)(t') \ dt'\right\|_{H^s}\\
&\hspace{30pt}\lesssim \int_{0}^t \frac{\left\||k|^{1+s}e^{\eta\left(|k|-k^2\right)( t-t')}\right\|_{l^2(\mathbb{Z})}}{|t'|^{|s|}}\ dt' \;\left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s} . \end{aligned} \end{equation} To estimate the integral on the right-hand side of \eqref{BE2}, we have from Lemma \ref{LLE1} that \begin{equation}
\left\||k|^{s+1}e^{\eta\left(|k|-k^2\right)t}\right\|_{l^2(\mathbb{Z})}\lesssim_{s}\left(1+\frac{1}{(\eta t)^{\frac{s+1}{2}}}+\frac{1}{(\eta t)^{\frac{2s+3}{4}}}\right), \forall t>0. \end{equation} So, from \eqref{BE1}, \eqref{BE2} and taking $z=t'/t$ we get that \begin{equation}\label{BE3} \begin{aligned}
& \left\|\int_{0}^t S(t-t')\partial_x(uv)(t') \ dt'\right\|_{H^s} \\
& \quad \lesssim_{s}\left(\frac{t^{1+s}}{1+s}+\frac{t^{\frac{1+s}{2}}}{{\eta^{\frac{1+s}{2}}}}\int_{0}^1 z^s|1-z|^{-(\frac{1+s}{2})}dz+
\frac{t^{\frac{1+2s}{4}}}{{\eta^{\frac{3+2s}{4}}}}\int_{0}^1 z^s|1-z|^{-(\frac{3+2s}{4})}dz \right)\\
&\quad \quad \quad \cdot \left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s} \\
& \quad \lesssim_{s}\left(1+\frac{1}{\eta^{\frac{1+s}{2}}}+\frac{1}{\eta^{\frac{3+2s}{4}}}\right)T^{\frac{1+2s}{4}}\left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s}. \end{aligned} \end{equation} Arguing in a similar way as above, we have for all $0\leq t\leq T$ that \begin{equation}\label{BE4} \begin{aligned}
&t^{|s|/2} \int_{0}^t \left\| S(t-t')\partial_x(uv)(t') \right\|_{L^2(\mathbb{T})} \ dt' \\
& \quad \lesssim T^{|s|/2} \int_{0}^t \frac{\left\||k|e^{\eta\left(|k|-k^2\right)(t-t')}\right\|_{l^2(\mathbb{Z})}}{|t'|^{|s|}}\ dt' \; \left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s} \\
& \quad \lesssim_s T^{|s|/2} \int_{0}^t \left(\frac{1}{|t'|^{|s|}}+\frac{1}{(\eta (t- t'))^{1/2}|t'|^{|s|}} +\frac{1}{(\eta (t- t'))^{3/4}|t'|^{|s|}}\right) \,dt'\\
&\quad \quad \quad \cdot \left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s} \\
&\quad \lesssim_{s} \left(1+\frac{1}{\eta^{1/2}} + \frac{1}{\eta^{3/4}}\right)T^{\frac{1+2s}{4}}\left\|u\right\|_{X_T^s}\left\|v\right\|_{X_T^s}. \end{aligned} \end{equation} This completes the proof. \end{proof}
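The key ingredient above is the $l^2$ bound of Lemma \ref{LLE1} with $\lambda=1+s\in(\frac{1}{2},1)$. A rough numerical check in Python (sample parameter values only; the factor $10$ is an ad hoc stand-in for the implicit constant, which this sketch does not track):

```python
import math

def l2_weight(lam, eta, t, kmax=2000):
    """l^2 norm over Z of |k|^lam * exp(eta (|k| - k^2) t)."""
    total = sum((k ** lam) ** 2 * math.exp(2.0 * eta * (k - k * k) * t)
                for k in range(1, kmax + 1))
    return math.sqrt(2.0 * total)     # symmetric in k; k = 0 contributes nothing

def upsilon(lam, eta, t):
    """Upsilon_eta^lam(t) from Lemma LLE1."""
    return 1.0 + (eta * t) ** (-lam / 2.0) + (eta * t) ** (-(1 + 2 * lam) / 4.0)

# lam = 1 + s with s in (-1/2, 0); 10 is an arbitrary margin for the constant
for lam, eta, t in [(0.6, 1.0, 0.2), (0.9, 1.0, 0.05), (1.0, 0.5, 0.5)]:
    assert l2_weight(lam, eta, t) <= 10.0 * upsilon(lam, eta, t)
```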
\begin{rem} Consider $s'>s>-\frac{1}{2}$. Then, modifying the space $X_{T}^{s'}$ to
$$\tilde{X}_{T}^{s'}=\left\{u\in X_{T}^{s'}: \left\|u\right\|_{\tilde{X}_{T}^{s'}}<\infty \right\},$$ where
$$ \left\|u\right\|_{\tilde{X}_{T}^{s'}}= \left\|u\right\|_{X_{T}^{s'}}+\sup_{t\in(0,T]}t^{|s|/2} \left\|(1-\partial_x^2)^{\frac{ s'-s}{2}} u(t)\right\|_{L^2}$$ and using that $$(1+k^2)^{s'/2}\lesssim (1+k^2)^{s/2}(1+j^2)^{(s'-s)/2}+(1+k^2)^{s/2}\left(1+(k-j)^2\right)^{(s'-s)/2},$$ for all $k,j\in \mathbb{Z}$, we deduce arguing as in Proposition \ref{PROP2} that
$$\left\|\int_{0}^t S(t-t')\partial_x(uv)(t') \ dt'\right\|_{\tilde{X}_T^{s'}} \lesssim_{s,\eta} T^{\frac{1+2s}{4}}\left(\left\|u\right\|_{\tilde{X}_T^{s'}}\left\|v\right\|_{X_T^s}+\left\|u\right\|_{X_T^s}\left\|v\right\|_{\tilde{X}_T^{s'}}\right).$$ \end{rem}
\begin{prop}\label{PROP3} Let $0< T \leq 1$, $s \in (-\frac{1}{2},0)$ and $\delta\in [0,s+\frac{1}{2})$. Then the map $$t\mapsto \int_{0}^t S(t-t')\partial_x(u^2)(t')\ dt'$$ belongs to $C\left([0,T];H^{s+\delta}(\mathbb{T})\right)$, for every $u\in X_T^s$. \end{prop} \begin{proof} Let $t,\tau \in [0,T]$ be fixed with $t<\tau$. Then, by the Minkowski inequality, we see that \begin{equation}
\left\|\int_{0}^{\tau} S(\tau-t')\partial_x(u^2)(t')dt'-\int_{0}^t S(t-t')\partial_x(u^2)(t')dt'\right\|_{H^{s+\delta}} \leq \mathbb{I}(t,\tau)+\mathbb{II}(t,\tau), \end{equation} where \begin{align*}
\mathbb{I}(t,\tau)&:=\int_{0}^t \left\|\left( S(\tau-t')-S(t-t')\right)\partial_x(u^2)(t')\right\|_{H^{s+\delta}}dt' \nonumber \\ \intertext{and}
\mathbb{II}(t,\tau)&:=\int_{t}^{\tau} \left\|S(\tau-t')\partial_x(u^2)(t')\right\|_{H^{s+\delta}}d t'. \nonumber \end{align*} Following the same ideas of the proof of Proposition \ref{PROP2} we obtain \begin{align} \label{BE5}
&\mathbb{II}(t,\tau) \lesssim \int_{t}^{\tau} \left\|\left\langle k \right\rangle^{s+\delta}e^{\eta\left(|k|-k^2\right)(\tau-t')}\left(\partial_{x}u^2(t')\right)^{\wedge}(k)\right\|_{l^2(\mathbb{Z})} \ dt' \nonumber\\
& \lesssim_{s,\delta,\eta} \int_{t}^{\tau}\bigl((t')^s+(t')^s|\tau-t'|^{-\frac{1+s+\delta}{2}}+(t')^s|\tau-t'|^{-\frac{3+2( s+\delta)}{4}}\bigr) \ dt' \;\left\|u\right\|_{X_{T}^s}^2 \nonumber \\
& \lesssim_{s,\delta,\eta} \left(\frac{(\tau-t)^{1+s}}{1+s}+\int_{t}^{\tau} \bigl(|t'-t|^s|\tau-t'|^{-\frac{1+s+\delta}{2}}+|t'-t|^s|\tau-t'|^{-\frac{3+2( s+\delta)}{4}}\bigr) dt'\right) \nonumber \\
& \quad \quad \quad \cdot \left\|u\right\|_{X_{T}^s}^2. \end{align} But with the change of variable $z=\frac{t'-t}{\tau-t}$, we have that \begin{align}\label{BE6}
&\int_{t}^{\tau}|t'-t|^s|\tau-t'|^{-\frac{1+s+\delta}{2}} \ dt' = (\tau-t)^{\frac{1+s-\delta}{2}}\int_{0}^{1}z^s|1-z|^{-\frac{1+s+\delta}{2}} \ dz, \nonumber \\
&\int_{t}^{\tau}|t'-t|^s|\tau-t'|^{-\frac{3+2( s+\delta)}{4}} \ dt' =(\tau-t)^{\frac{1+2(s-\delta)}{4}} \int_{0}^1 z^s|1-z|^{-\frac{3+2( s+\delta)}{4}} \ dz. \end{align} Therefore, combining \eqref{BE5}, \eqref{BE6} and the hypothesis, we deduce that \\ $\lim_{\tau \to t}\mathbb{II}(t,\tau)=0$. To estimate $\mathbb{I}(t,\tau)$, we observe that \begin{equation}\label{BE7}
\mathbb{I}(t,\tau) \lesssim \int_{0}^{t} \frac{\left\||k|^{1+s+\delta}\left(e^{\left(iq(k)-p(k)\right) (\tau-t')}-e^{\left(iq(k)-p(k)\right)(t-t')}\right)\right\|_{l^2(\mathbb{Z})}}{|t'|^{|s|}} \, dt' \, \left\|u\right\|_{X_{T}^s}^2 . \end{equation} Applying Lemma \ref{LLE1}, for all $t'\in [0,t)$, we have that \begin{align}\label{BE8}
&\left\||k|^{1+s+\delta}\left(e^{\left(iq(k)-p(k)\right)(\tau-t')}-e^{\left(iq(k)-p(k)\right)(t-t')}\right)\right\|_{l^2(\mathbb{Z})} \nonumber\\
&\hspace{30pt} \lesssim \left\||k|^{1+s+\delta}e^{\eta\left(|k|-k^2\right) (t-t')}\right\|_{l^2(\mathbb{Z})} \nonumber \\ &\hspace{30pt} \lesssim_{s,\delta}\Upsilon_{\eta}^{1+s+\delta}(t-t'), \end{align} where $$\Upsilon_{\eta}^{1+s+\delta}(t)=1+\frac{1}{(\eta t)^{\frac{1+s+\delta}{2}}}+\frac{1}{(\eta t)^{\frac{3+2(s+\delta)}{4}}}, \quad \forall t>0.$$ Then, it follows from \eqref{BE8} and Weierstrass M-test that \begin{equation}\label{BE9}
\lim_{\tau \to t} \ \left\||k|^{1+s+\delta}\left(e^{\left(iq(k)-p(k)\right)(\tau-t')}-e^{\left(iq(k)-p(k)\right)(t-t')}\right)\right\|_{l^2(\mathbb{Z})}=0, \end{equation}
for all $t'\in [0,t)$. Moreover, since $(t')^s \Upsilon_{\eta}^{1+s+\delta}(t-t')$ is in $L^1_{t'}(0,t)$, we deduce from \eqref{BE9} and the Lebesgue dominated convergence theorem that $\lim_{\tau \to t}\mathbb{I}(t,\tau)=0$. This completes the proof. \end{proof}
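The singular integrals appearing in \eqref{BE6} are Beta functions, $\int_0^1 z^s(1-z)^{-a}\,dz=B(1+s,\,1-a)$, which are finite precisely because $\delta<s+\frac{1}{2}$ keeps both exponents $a$ below $1$. A crude midpoint-rule check in Python, with sample values of $s$ and $\delta$:

```python
import math

def beta(x, y):
    """Euler Beta function via math.gamma."""
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def midpoint_integral(s, a, n=200000):
    """Midpoint rule for the singular integral  int_0^1 z^s (1-z)^(-a) dz."""
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** s * (1.0 - (i + 0.5) * h) ** (-a)
                   for i in range(n))

s, delta = -0.3, 0.1                           # sample values with delta < s + 1/2
for a in [(1 + s + delta) / 2.0, (3 + 2 * (s + delta)) / 4.0]:
    exact = beta(1 + s, 1 - a)                 # closed form of the integral
    assert abs(midpoint_integral(s, a) - exact) / exact < 0.02
```

The loose 2\% tolerance accounts for the slow convergence of the midpoint rule near the integrable endpoint singularities.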
The next lemma is an adaptation of Lemma 2.3.1 in \cite{D} to the periodic case. This result allows us to adapt the above propositions to $C\left([0,T];H^s(\mathbb{T})\right)$, when $s\geq 0$ and $0<T\leq 1$. For the sake of completeness, we will sketch a proof.
\begin{lema}\label{LBE1} Suppose $a>0$, $r\geq 0$ are real numbers and $\phi,\psi \in H^r(\mathbb{T})$. Then \begin{equation}
\left\|\left\langle ak \right\rangle^r(\phi \psi)^{\wedge}(k)\right\|_{l^{\infty}(\mathbb{Z})}\leq 2^{\frac{r}{2}} \left\|\left\langle ak \right\rangle^{r}\widehat{\phi}(k)\right\|_{l^2(\mathbb{Z})}\left\|\left\langle ak \right\rangle^{r}\widehat{\psi}(k)\right\|_{l^2(\mathbb{Z})}. \end{equation} \end{lema}
\begin{proof} Since $r\geq 0$, $H^r(\mathbb{T}) \hookrightarrow L^2(\mathbb{T})$. Therefore, $\phi,\psi \in L^2(\mathbb{T})$ and the convolution formula holds, $$(\phi\psi)^{\wedge}(k)=\widehat{\phi}\ast \widehat{\psi}(k)= \sum_{j=-\infty}^{\infty}\widehat{\phi}(j)\widehat{\psi}(k-j), \quad \forall k\in \mathbb{Z}.$$ To prove this lemma we will use Peetre's inequality, which states that
$$\left(1+|x|^2\right)^{\rho}\leq 2^{|\rho|}\left(1+|x-y|^2\right)^{|\rho|}\left(1+|y|^2\right)^{\rho}, \quad \forall x,y,\rho\in\mathbb{R}.$$ Thus, $$\left\langle ak \right\rangle^r \leq 2^{\frac{r}{2}}\left\langle a(k-j)\right\rangle^r\left\langle aj \right\rangle^r, \quad \forall k,j \in \mathbb{Z}.$$ So, we have by Young's convolution inequality that \begin{align*}
\left|\left\langle ak \right\rangle^r(\phi \psi)^{\wedge}(k)\right| &\leq 2^{\frac{r}{2}} \sum_{j=-\infty}^{\infty} \left|\left\langle aj \right\rangle^r \widehat{\phi}(j)\left\langle a(k-j)\right\rangle^r \widehat{\psi}(k-j)\right|\ \\
& \leq 2^{\frac{r}{2}}\left\|\left\langle ak \right\rangle^{r}\widehat{\phi}(k)\right\|_{l^2(\mathbb{Z})}\left\|\left\langle ak \right\rangle^{r}\widehat{\psi}(k)\right\|_{l^2(\mathbb{Z})}. \end{align*} \end{proof}
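Unlike the previous estimates, the inequality of Lemma \ref{LBE1} carries the explicit constant $2^{r/2}$, so it can be verified directly on finitely supported Fourier data. A small Python check with randomly chosen coefficients (the values of $a$, $r$ and the supports are arbitrary):

```python
import math, random

def weighted_l2(a, r, f):
    """||<ak>^r f(k)||_{l^2} for a dict f of Fourier coefficients."""
    return math.sqrt(sum(((1 + (a * k) ** 2) ** (r / 2) * abs(v)) ** 2
                         for k, v in f.items()))

def lbe1_holds(a, r, phi, psi):
    """Lemma LBE1: <ak>^r |(phi psi)^(k)| <= 2^(r/2) ||<ak>^r phi^|| ||<ak>^r psi^||."""
    rhs = 2 ** (r / 2) * weighted_l2(a, r, phi) * weighted_l2(a, r, psi)
    for k in range(-40, 41):
        conv = sum(phi.get(j, 0.0) * psi.get(k - j, 0.0) for j in range(-40, 41))
        if (1 + (a * k) ** 2) ** (r / 2) * abs(conv) > rhs + 1e-9:
            return False
    return True

random.seed(0)
phi = {k: random.uniform(-1, 1) for k in range(-10, 11)}   # finitely supported spectra
psi = {k: random.uniform(-1, 1) for k in range(-10, 11)}
assert lbe1_holds(a=0.7, r=1.5, phi=phi, psi=psi)
assert lbe1_holds(a=1.0, r=0.0, phi=phi, psi=psi)          # r = 0: plain Young l2*l2 -> linf
```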
\begin{rem}\label{RBE1} Assuming that $s\geq 0$ and $0<T\leq 1$, we have a result similar to the one obtained in Proposition \ref{PROP2} for the space $C\left([0,T];H^s(\mathbb{T})\right)$. In fact, we have that
$$\left\|\int_{0}^t S(t-t')\partial_x(uv)(t') \ dt'\right\|_{L^{\infty}_tH^s_x} \lesssim_{s,\eta} T^{\frac{1}{4}}\left\|u\right\|_{L^{\infty}_tH^s_x}\left\|v\right\|_{L^{\infty}_tH^s_x},$$ for all $u,v\in C\left([0,T];H^s(\mathbb{T})\right).$ To see this, we can use Lemma \ref{LBE1} with $a=1$ and Lemma \ref{LLE1}. \begin{align}
\int_{0}^{t} &\left\|S(t-t')\partial_x(uv)(t')\right\|_{H^s}dt' \nonumber \\
&\lesssim \int_{0}^t \left\| |k| e^{\eta\left(|k|-k^2\right)(t-t')} \right\|_{l^2(\mathbb{Z})} \left\|\left\langle k \right\rangle^s \left(uv(t')\right)^{\wedge}(k)\right\|_{l^{\infty}(\mathbb{Z})} \ dt' \nonumber \\
& \lesssim_s \int_{0}^{t} \left\||k| e^{\eta\left(|k|-k^2\right)(t-t')}\right\|_{l^2(\mathbb{Z})}\left\|u(t')\right\|_{H_x^s}\left\|v(t')\right\|_{H^s_x}\ dt' \nonumber \\
& \lesssim_{s,\eta} \int_{0}^{t} \Bigl(1+\frac{1}{(t-t')^{1/2}}+\frac{1}{(t-t')^{3/4}}\Bigr)\, dt' \;\left\|u\right\|_{L^{\infty}_tH^s_x}\left\|v\right\|_{L^{\infty}_tH^s_x} \nonumber \\
& \lesssim_{s,\eta} T^{\frac{1}{4}}\left\|u\right\|_{L^{\infty}_tH^s_x}\left\|v\right\|_{L^{\infty}_tH^s_x}. \end{align} \end{rem}
\begin{rem}\label{RBE2} Let $s\geq 0$ and $0<T\leq 1$. We have the same result given in Proposition \ref{PROP3}, changing $X_{T}^s$ by $C\left([0,T];H^s(\mathbb{T})\right)$ and taking $\delta\in[0,\frac{1}{2})$. In fact, considering $t,\tau\in [0,T]$ fixed with $t<\tau$, we define the terms $\mathbb{I}(t,\tau)$ and $\mathbb{II}(t,\tau)$ as in Proposition \ref{PROP3}. Then Lemma \ref{LLE1} implies that \begin{align}
\mathbb{II}(t,\tau) & \lesssim \int_{t}^{\tau} \left\||k| \left\langle k \right\rangle^{\delta}e^{\eta\left(|k|-k^2\right)(\tau-t')}\right\|_{l^2(\mathbb{Z})}\left\|\left\langle k \right\rangle^{s}[u^2(t')]^{\wedge}(k)\right\|_{l^{\infty}(\mathbb{Z})}\ dt' \nonumber \\
& \lesssim_{s,\eta} \int_{t}^{\tau} \left\||k|^{1+\delta}e^{\eta(|k|-k^2)(\tau-t')}\right\|_{l^2(\mathbb{Z})} \left\|u(t')\right\|_{H^s_x}^2\ dt' \nonumber \\
& \lesssim_{s,\eta,\delta} \int_{t}^{\tau} \Bigl(1+\frac{1}{(\tau-t')^{\frac{1+\delta}{2}}}+\frac{1}{(\tau-t')^{\frac{3+2\delta}{4}}}\Bigr) \, dt' \;\left\|u\right\|_{L^{\infty}_tH^s_x}^2 \nonumber \\
& \lesssim_{s,\eta,\delta} \left((\tau-t)+(\tau-t)^{\frac{1-\delta}{2}}+(\tau-t)^{\frac{1}{4}(1-2\delta)} \right)\left\|u \right\|_{L^{\infty}_tH^s_x}^2, \end{align} so it is clear that $\lim_{\tau \to t} \mathbb{II}(t,\tau)=0$. Also, we observe that \begin{equation}
\mathbb{I}(t,\tau) \lesssim_{s,\delta}\int_{0}^{t} \nor{|k|^{1+\delta}\Bigl(e^{\left(iq(k)-p(k)\right)(\tau-t')}-e^{\left(iq(k)-p(k)\right)(t-t')}\Bigr)}{l^2(\mathbb{Z})}dt' \nora{u}{L^{\infty}_tH^s_x}{2}, \end{equation} and again Lemma \ref{LLE1} implies that \begin{align*}
&\left\||k|^{1+\delta}\left(e^{\left(iq(k)-p(k)\right)(\tau-t')}-e^{\left(iq(k)-p(k)\right)(t-t')}\right)\right\|_{l^2(\mathbb{Z})} \\ &\qquad \lesssim_{s,\delta}\left(1+\frac{1}{\left(\eta(t-t')\right)^{\frac{1+\delta}{2}}}+\frac{1}{\left(\eta(t-t')\right)^{\frac{3+2\delta}{4}}}\right), \ \forall t>0. \end{align*} Therefore, using the above inequalities, we can argue as in the proof of Proposition \ref{PROP3} and conclude that $\lim_{\tau \to t} \mathbb{I}(t,\tau)=0$. \end{rem}
\section{Well-Posedness}
In this section we show that the Cauchy problem \eqref{cl} is locally and globally well-posed in $H^s(\mathbb{T})$ for $s > -\frac{1}{2}$. To prove the local existence result, we construct a contraction based on the integral formulation \eqref{intequation}. The main ingredient is the bilinear estimates of Proposition \ref{PROP2} and Remark \ref{RBE1} for the nonlinear term $\partial_x(u^2)$ in the appropriate $\mathfrak{X}_T^s$ spaces. We begin by giving a proof of Theorem \ref{mainresult}.
\begin{proof}[Proof of Theorem \ref{mainresult}] For $T\in(0,1]$, we consider the space $\mathfrak{X}_T^s=X_T^s$, if $-\frac{1}{2}< s < 0$, and if $s\geq 0$, we take $\mathfrak{X}_T^s=C\left([0,T];H^s(\mathbb{T})\right)$. We divide the proof into four steps.\\
\emph{1. Existence}. Let $\phi \in H^s(\mathbb{T})$, $s> -\frac{1}{2}$. We define the application $$\Psi(u)=S(t)\phi-\frac{1}{2}\int_{0}^t S(t-t')\partial_x(u^2(t')) \ dt', \text{ for each } u \in \mathfrak{X}_T^s.$$ By Proposition \ref{PROP1}, together with Proposition \ref{PROP2} when $s< 0$, or Remark \ref{RBE1} when $s\geq 0$, there exists a positive constant $C=C(\eta,s)$, independent of $\beta$, such that for all $u, v \in \mathfrak{X}_T^s$ and $0<T\leq 1$ \begin{align}
\left\|\Psi(u)\right\|_{\mathfrak{X}_T^s} &\leq C\left(\left\|\phi\right\|_s+T^{g(s)}\left\|u\right\|_{\mathfrak{X}_T^s}^2\right), \label{WP1} \\
\left\|\Psi(u)-\Psi(v)\right\|_{\mathfrak{X}_T^s} & \leq C T^{g(s)}\left\|u-v\right\|_{\mathfrak{X}_T^s}\left\|u+v\right\|_{\mathfrak{X}_T^s}, \label{WP2} \end{align}
where $g(s)=\frac{1}{4}(1+2s)$, for all $s\in (-\frac{1}{2},0)$, and $g(s)=\frac{1}{4}$, if $s\geq 0$. Then, let $E_{T}(\gamma)=\left\{u\in \mathfrak{X}_T^s : \left\|u \right\|_{\mathfrak{X}_T^s} \leq \gamma \right\}$, where $\gamma=2C\left\|\phi \right\|_s$ and \\ $0<T \leq \min \left\{1,\left(4C\gamma\right)^{-\frac{1}{g(s)}} \right\}$. The estimates \eqref{WP1} and \eqref{WP2} imply that $\Psi$ is a contraction on the complete metric space $E_T(\gamma)$. Therefore, we deduce by the Fixed Point Theorem that there exists a unique solution $u$ of the integral equation \eqref{intequation} in $E_T(\gamma)$ with initial data $u(0)=\phi$. \\
\emph{2. Continuous dependence}. Let $\phi_1,\phi_2 \in H^s(\mathbb{T})$ and $u_1\in \mathfrak{X}_{T_1}^s$, $u_2\in \mathfrak{X}_{T_2}^s$ be the respective solutions of the Chen-Lee equation constructed in Step 1 above. We recall that the solutions and the times of existence satisfy \begin{align*}
&\left\|u_i\right\|_{\mathfrak{X}_{T_i}^s} \leq 2C\left\|\phi_i\right\|_{H^s}, \\
&0<T_i\leq \min\left\{1, \left(8C^2\left\|\phi_i\right\|_{H^s}\right)^{-\frac{1}{g(s)}}\right\}, \end{align*} for $i=1,2$, and $C=C(\eta,s)$. Therefore, by Proposition \ref{PROP1}, together with Proposition \ref{PROP2} when $s<0$, or by Remark \ref{RBE1} when $s\geq 0$, we have that for all $T\in \left(0, \min\left\{T_1,T_2\right\}\right]$
\begin{align*}
\left\|u_1-u_2\right\|_{\mathfrak{X}_T^s} & \leq C\left\|\phi_1-\phi_2\right\|_{H^s}+ C T^{g(s)}\left\|u_1+u_2\right\|_{\mathfrak{X}_T^s}\left\|u_1-u_2\right\|_{\mathfrak{X}_T^s} \\
& \leq C\left\|\phi_1-\phi_2\right\|_{H^s}+ \frac{\left(\left\|\phi_1\right\|_{H^s}+\left\|\phi_2\right\|_{H^s}\right)}{4\max_{i=1,2}\left\{\left\|\phi_i\right\|_{H^s}\right\}}\left\|u_1-u_2\right\|_{\mathfrak{X}_T^s} \\
&\leq C\left\|\phi_1-\phi_2\right\|_{H^s}+ \frac{1}{2}\left\|u_1-u_2\right\|_{\mathfrak{X}_T^s}, \end{align*} but this implies that
$$\left\|u_1(t)-u_2(t)\right\|_{H^s}\leq \left\|u_1-u_2\right\|_{\mathfrak{X}_T^s}\leq 2C\left\|\phi_1-\phi_2\right\|_{H^s}, \, \text{for all } t \in [0,T].$$
\emph{3. Uniqueness}. We shall prove the uniqueness of solutions to the integral equation \eqref{intequation} in the space $\mathfrak{X}_T^s$, where $T$ is defined as in Step 1. Let $u,v \in \mathfrak{X}_T^s$ be solutions of the integral equation \eqref{intequation} on the time interval $[0,T]$ with the same initial data $\phi$. Arguing as in the proof of Proposition \ref{PROP2} for $-\frac{1}{2}<s< 0$ or as in Remark \ref{RBE1} when $s\geq 0$, there exists $C=C(\eta,s)$ such that for all $0<T_1\leq T$ \begin{equation}\label{WP3}
\left\|u-v\right\|_{\mathfrak{X}_{T_1}^s}\leq C K T_1^{g(s)}\left\|u-v\right\|_{\mathfrak{X}_{T_1}^s}, \end{equation}
where $K:=\left\|u\right\|_{\mathfrak{X}_{T}^s}+\left\|v\right\|_{\mathfrak{X}_{T}^s}$. Taking $T_1 \in \bigl(0,(CK)^{-\frac{1}{g(s)}}\bigr)$, we deduce from \eqref{WP3} that $u \equiv v$ on $[0,T_1]$. Thus, iterating this argument, we extend the uniqueness result to the whole interval $[0,T]$.\\
\emph{4. The solution $ u\in C\left((0,T],H^{\infty}(\mathbb{T})\right)$ }. Using Lemma \ref{LLE0} and arguing as in the proof of Proposition 2.2 in \cite{BI}, we have that the map $t\mapsto S(t)\phi$ is continuous in the interval $(0,T]$ with respect to the topology of $H^{\infty}(\mathbb{T})$. Since our solution $u$ is in $\mathfrak{X}_T^s$, we deduce from Proposition \ref{PROP3} or Remark \ref{RBE2} that there exists $\lambda>0$ such that $$u\in C\left([0,T];H^s(\mathbb{T})\right)\cap C\left((0,T];H^{s+\lambda}(\mathbb{T})\right).$$ Therefore we can iterate this argument, using the uniqueness result and the fact that the time of existence of solutions depends only on the $H^s(\mathbb{T})$-norm of the initial data. Thus we deduce that $$u\in C\left([0,T];H^s(\mathbb{T})\right)\cap C\left((0,T];H^{\infty}(\mathbb{T})\right).$$
\end{proof}
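The contraction scheme of Step 1 can also be mimicked numerically: discretize the Duhamel formulation \eqref{intequation} on a Fourier grid, apply the map $\Psi$ repeatedly, and watch the successive differences contract. The Python sketch below does this for a small smooth datum; the grid sizes, the trapezoid quadrature in $t'$ and the datum are ad hoc choices, not part of the proof.

```python
import numpy as np

N, M, T = 64, 40, 0.1                    # Fourier modes, time steps, final time (ad hoc)
beta, eta = 1.0, 1.0
k = np.fft.fftfreq(N, d=1.0 / N)         # integer frequencies
sym = 1j * beta * k * np.abs(k) - eta * (k ** 2 - np.abs(k))   # iq(k) - p(k)
dt = T / M
ts = dt * np.arange(M + 1)

phi_hat = np.zeros(N, dtype=complex)
phi_hat[1], phi_hat[-1] = 0.05, 0.05     # datum 0.1*cos(x)

def nonlin(u_hat):
    """Fourier coefficients of (1/2) d/dx (u^2), computed pseudo-spectrally."""
    u = np.fft.ifft(u_hat) * N           # grid values of u
    return 0.5j * k * np.fft.fft(u * u) / N

def trapezoid(vals, h):
    """Composite trapezoid rule along axis 0 (vals has at least two rows)."""
    return h * (0.5 * (vals[0] + vals[-1]) + vals[1:-1].sum(axis=0))

def Psi(U):
    """One Picard iteration for the Duhamel formulation of the CL equation."""
    V = np.empty_like(U)
    V[0] = phi_hat
    for n in range(1, M + 1):
        t = ts[n]
        integrand = np.array([np.exp(sym * (t - ts[m])) * nonlin(U[m])
                              for m in range(n + 1)])
        V[n] = np.exp(sym * t) * phi_hat - trapezoid(integrand, dt)
    return V

U = np.tile(phi_hat, (M + 1, 1))         # zeroth iterate: constant in time
diffs = []
for _ in range(3):
    Unew = Psi(U)
    diffs.append(float(np.abs(Unew - U).max()))
    U = Unew
assert diffs[2] < diffs[1] < diffs[0]    # successive differences contract
```

For small data and short times the differences decay geometrically, in line with the contraction estimate \eqref{WP2}.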
Next, we will give a proof of Theorem \ref{globalresult}.
\begin{proof}[Proof of Theorem \ref{globalresult}] We divide the proof into two steps.\\
\emph{1.} Let $s\geq 0$ and let $T^*=T^*(\left\|\phi\right\|_s)$ be defined as $$T^*=\sup\left\{T>0: \exists !\text{ solution of \eqref{intequation} in } C\left([0,T];H^s(\mathbb{T})\right)\right\}.$$ Let $u \in C\left([0,T^*);H^s(\mathbb{T})\right)\cap C\left((0,T^*);H^{\infty}(\mathbb{T})\right)$ be the local solution of the integral equation \eqref{intequation} defined on the maximal time interval $[0,T^*)$. We will show that $T^*<\infty$ leads to a contradiction. Since $u$ is smooth by Theorem \ref{mainresult}, $u$ solves \eqref{cl} in the classical sense; therefore we can take the $L^2(\mathbb{T})$ inner product of $u$ with equation \eqref{cl} to obtain \begin{align*}
\frac{1}{2}\frac{d}{dt}\left\|u(t)\right\|_0^2&=(u,u_t)_0 \\ &=-(u,uu_x)_0-\beta(u,\mathcal{H}u_{xx})_0-\eta(u,\mathcal{H}u_x)_0+\eta(u,u_{xx})_0 \\
&=2\pi\eta\sum_{k=-\infty}^{\infty}\left(|k|-k^2\right)|\Hat{u}(k,t)|^2 \\ &\leq 0, \end{align*}
where we have used that $(|k|-k^2)\leq 0$ for all integers $k$. Thus we obtain \begin{equation}
\left\|u(t)\right\|_0 \leq \left\|\phi\right\|_0 \, \text{for all } t \in [0,T^*). \end{equation}
Since the time of existence $T(\cdot)$ is a nonincreasing function of the norm of the initial data, there exists a time $\widetilde{T}>0$ such that for all $\psi \in H^s(\mathbb{T})$ with $\left\|\psi \right\|_{0}\leq \left\|\phi\right\|_{0}$, there exists a function $\tilde{u}\in C\left([0,\widetilde{T}];H^s(\mathbb{T})\right)$ solving \eqref{intequation} with $\widetilde{u}(0)=\psi$. Let $0<\epsilon <\widetilde{T}$. Applying the above result to $\psi=u(T^*-\epsilon)$, we define \begin{equation} v(t)=\begin{cases}
u(t), & \text{ when } \, 0 \leq t \leq T^*-\epsilon, \\
\tilde{u}(t-T^*+\epsilon), &\text{ when } \, T^*-\epsilon \leq t \leq T^*+\widetilde{T}-\epsilon \\
\end{cases} \end{equation} Then $v(t)$ is a solution of the integral equation \eqref{intequation} in $[0,T^*+\widetilde{T}-\epsilon]$, but this contradicts the definition of $T^*$, since $T^*+\widetilde{T}-\epsilon >T^*$. We have concluded the global result when $s\geq 0$.
\emph{2.} Let $s\in (-1/2,0)$, $\phi\in H^s(\mathbb{T})$ and $u\in X_{T}^s$ be the solution of the Cauchy problem \eqref{cl} given in Theorem \ref{mainresult}. Let $T'\in (0,T)$ be fixed; we have that $$
\left\|u\right\|_{X_{T'}^s}=M_{T',s}<\infty. $$ Since $u\in C\left((0,T];H^{\infty}(\mathbb{T})\right)$, it follows that $u(T')\in L^2(\mathbb{T})$. Thus, Step 1 above implies that the solution $\tilde{u}$ of the integral equation \eqref{intequation} with initial data $u(T')$ is global in time. Moreover, uniqueness implies that $\tilde{u}(t)=u(T'+t)$ for all $t\in [0,T-T']$. Therefore, we deduce that \begin{align*}
\left\|u\right\|_{X_{T}^s} & \leq \left\|u\right\|_{X_{T'}^s}+\left\|u(T'+\cdot)\right\|_{X_{T-T'}^s} \\
&\leq M_{T',s}+\left\|\tilde{u}\right\|_{X_{T-T'}^s} \\
&= M_{T',s}+ \sup_{t\in [0,T-T']} \left\{\left\|\tilde{u}(t)\right\|_{s}+t^{|s|/2}\left\|\tilde{u}(t)\right\|_{L^2(\mathbb{T})}\right\} \\
& \leq M_{T',s}+ \left(1+(T-T') ^{|s|/2}\right)\sup_{t\in [0,T-T']}\left\|\tilde{u}(t)\right\|_{L^2(\mathbb{T})}. \end{align*} The global result follows from the above estimate when $s\in (-1/2,0)$. \end{proof}
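The energy computation in Step 1 rests on two elementary facts: the cancellation $(u,uu_x)_0=\frac{1}{3}\int_{\mathbb{T}}\partial_x(u^3)\,dx=0$ and the sign condition $|k|-k^2\leq 0$. Both can be confirmed in a few lines of Python; the trigonometric polynomial below is an arbitrary test function.

```python
import numpy as np

N = 128
x = 2 * np.pi * np.arange(N) / N
u = 0.3 * np.cos(x) + 0.1 * np.sin(3 * x)          # a real trigonometric polynomial

k = np.fft.fftfreq(N, d=1.0 / N)
ux = np.fft.ifft(1j * k * np.fft.fft(u)).real      # spectral derivative

# (u, u u_x)_0 = (1/3) int_T d/dx (u^3) dx = 0 on the torus
inner = (2 * np.pi / N) * np.sum(u * u * ux)
assert abs(inner) < 1e-12

# sum (|k| - k^2) |u^(k)|^2 <= 0, so the L^2 norm cannot increase
u_hat = np.fft.fft(u) / N
assert np.sum((np.abs(k) - k ** 2) * np.abs(u_hat) ** 2) <= 0.0
```

The quadrature is exact here because the integrand is a trigonometric polynomial of degree well below the grid size.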
\section{ Ill-posedness result}
From Theorem \ref{mainresult}, it is known that $CL$ is locally well-posed for data $\phi \in H^s(\mathbb{T})$, $s>-1/2$; in fact, the data-solution map turns out to be smooth. In this section we will prove that one cannot solve the Cauchy problem \eqref{cl} by applying a Picard iterative method to the integral equation \eqref{intequation}, at least in the Sobolev spaces $H^s(\mathbb{T})$ with $s<-1$. We first prove the next theorem.
\begin{teorem}\label{TIP1} Let $s<-1$, $\beta,\eta>0$ and $T>0$. Then there does not exist a space $B_T^s$ continuously embedded in $C\left([0,T],H^s(\mathbb{T})\right)$, i.e.,
$$\left\|u\right\|_{L_{t}^{\infty}H^s}\lesssim \left\|u\right\|_{B_T^s}, \, \forall u\in B_T^s$$ and such that \begin{equation}\label{IP1}
\left\|S(t)\phi\right\|_{B_T^s}\lesssim \left\|\phi \right\|_{H^s}, \qquad \forall \phi \in H^s(\mathbb{T}), \end{equation} and \begin{equation}\label{IP2}
\left\|\int_0^t S(t-t')[u(t')u_x(t')]\ dt'\right\|_{B_T^s}\lesssim \left\|u \right\|_{B_T^s}^2, \, \forall u\in B_T^s. \end{equation} \end{teorem}
\begin{proof} Let $s<-1$, $\beta,\eta>0$ and $T>0$. Suppose that there exists a space $B_T^s$, which satisfies the conditions given in the theorem. Take $\phi \in H^s(\mathbb{T})$ and $u(t)=S(t)\phi$, then \eqref{IP2} implies that \begin{equation}
\left\|\int_0^t S(t-t')[(S(t')\phi)(S(t')\phi_x)]\ dt'\right\|_{B_T^s}\lesssim \left\|S(t)\phi \right\|_{B_T^s}^2. \end{equation} Since $B_T^s$ is continuously embedded in $C\left([0,T],H^s(\mathbb{T})\right)$, we obtain using \eqref{IP1} that for each $t\in [0,T]$ \begin{equation}\label{IP3}
\left\|\int_0^t S(t-t')[(S(t')\phi)(S(t')\phi_x)]\ dt'\right\|_{H^s}\lesssim \left\|\phi \right\|_{H^s}^2. \end{equation} We will show that \eqref{IP3} fails for an appropriate function $\phi$. Take $\phi$ defined by its Fourier transform as \[
\widehat{\phi}(k)=
\begin{cases}
N^{-s} & \text{if } k=N \text{ or } k=1-N, \\
0 & \text{otherwise }
\end{cases} \]
where $N>1$ is a positive integer. It is easy to see that $\left\|\phi\right\|_s^2 \sim_s 1$. From the definition of the group $\left(S(t)\right)_{t\geq 0}$, we have that \begin{align*} \int_0^t S(t-t')&[(S(t')\phi)(S(t')\phi_x)]\ dt' \\ &= \int_0^t \sum_{k}e^{\left(iq(k)-p(k)\right)(t-t')}e^{ikx}(ik)\left(\widehat{S(t')\phi}\ast \widehat{S(t')\phi}\right)(k) \ dt'\\ &= \int_0^t \sum_{k}e^{\left(iq(k)-p(k)\right)(t-t')}e^{ikx}(ik) \\ &\quad \quad \quad \cdot\Bigl(\sum_{j}e^{\left(iq(j)-p(j)\right)t'}e^{\left(iq(k-j)-p(k-j)\right)t'}\widehat{\phi}(j)\widehat{\phi}(k-j)\Bigr) \ dt' \\ &= \sum_{k,j} e^{\left(iq(k)-p(k)\right)t+ikx}(ik)\widehat{\phi}(j)\widehat{\phi}(k-j)\int_0^t e^{t'\left[i\psi(k,j)-\sigma(k,j)\right]} dt' \end{align*} where
$$\psi(k,j)=\beta\left[ (k-j)|k-j|-|k|k+|j|j \right]$$ and
$$\sigma(k,j)=\eta\left[(k-j)^2-|k-j|-|k|^2+|k|+|j|^2-|j| \right].$$ Thus the above argument and the definition of $\phi$ imply \begin{align*} &\left( \int_0^t S(t-t')[(S(t')\phi)(S(t')\phi_x)]\ dt' \right)^{\wedge}(1) \\ &\hspace{30pt}= e^{\left(iq(1)-p(1)\right)t}(i)N^{-2s}\int_0^t e^{t'\left[i\psi(1,N)-\sigma(1,N)\right]} dt'+\\ &\hspace{30pt}\quad \quad \quad+e^{\left(iq(1)-p(1)\right)t}(i)N^{-2s}\int_0^t e^{t'\left[i\psi(1,1-N)-\sigma(1,1-N)\right]} dt' \\ &\hspace{30pt}= 2 e^{\left(iq(1)-p(1)\right)t}(i) N^{-2s}\int_0^t e^{t'\left[i\psi(1,N)-\sigma(1,N)\right]} dt'. \end{align*} Here we have used that $\psi(1,N)=\psi(1,1-N)=2\beta(N-1)$ and $\sigma(1,N)=\sigma(1,1-N)=2\eta (N^2-2N+1)$. Hence, it follows that \begin{align}
&\left\|\int_0^t S(t-t')[(S(t')\phi)(S(t')\phi_x)]\ dt'\right\|_{H^s}^2 \nonumber \\
&\hspace{30pt}\gtrsim_s \left|\left( \int_0^t S(t-t')[(S(t')\phi)(S(t')\phi_x)]\ dt' \right)^{\wedge}(1)\right|^2 \nonumber \\
&\hspace{30pt} \sim_s \left| e^{\left(iq(1)-p(1)\right)t}N^{-2s}\frac{e^{t\left[i\psi(1,N)-\sigma(1,N)\right]}-1}{i\psi(1,N)-\sigma(1,N)}\right|^2 \nonumber \\
&\hspace{30pt} \gtrsim_s \left| N^{-2s}\Re\left(\frac{e^{t\left[i\psi(1,N)-\sigma(1,N)\right]}-1}{i\psi(1,N)-\sigma(1,N)}\right)\right|^2. \label{IP5} \end{align} Since $\sigma(1,N)\sim \eta N^2$ and $\psi(1,N)\sim \beta N$ we deduce that \begin{align*} \sigma(1,N)\left(1- e^{-t\sigma(1,N)}\cos\left(t\psi(1,N)\right)\right) &\gtrsim \eta N^2\left(1- e^{-t\eta N^2}\right), \\ \psi(1,N)e^{-t\sigma(1,N)}\sin\left(t\psi(1,N)\right) &\gtrsim -\beta N e^{-t\eta N^2} \end{align*} and
$$|\sigma(1,N)|^2+|\psi(1,N)|^2 \sim N^2\left(\eta^2N^2+\beta^2\right),$$ hence \begin{equation}\label{IP6} \Re\left(\frac{e^{t\left[i\psi(1,N)-\sigma(1,N)\right]}-1}{i\psi(1,N)-\sigma(1,N)}\right) \gtrsim \frac{\left(\eta-(\eta+\beta)e^{-t\eta N^2}\right)}{(\eta^2+\beta^2)N^2}. \end{equation}
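For concreteness, the resonance identities used above can be checked numerically; the following sketch uses arbitrary illustrative values of $\beta$, $\eta$, $N$ and $t$, and the expanded form of the real part is our own algebra, not printed in the text.

```python
import cmath
import math

# Spot-check of psi(1,N)=psi(1,1-N) and sigma(1,N)=sigma(1,1-N) used above;
# beta, eta, N, t are arbitrary illustrative values.
beta, eta = 1.0, 0.5

def psi(k, j):
    # psi(k,j) = beta * [ (k-j)|k-j| - k|k| + j|j| ]
    return beta * ((k - j) * abs(k - j) - k * abs(k) + j * abs(j))

def sigma(k, j):
    # sigma(k,j) = eta * [ (k-j)^2 - |k-j| - k^2 + |k| + j^2 - |j| ]
    return eta * ((k - j) ** 2 - abs(k - j) - k ** 2 + abs(k) + j ** 2 - abs(j))

N = 50
assert psi(1, N) == psi(1, 1 - N) == 2 * beta * (N - 1)
assert sigma(1, N) == sigma(1, 1 - N) == 2 * eta * (N - 1) ** 2

# The real part of (e^{t(i psi - sigma)} - 1)/(i psi - sigma), expanded by
# hand, matches a direct complex evaluation:
t, p, s = 0.3, psi(1, N), sigma(1, N)
direct = ((cmath.exp(t * complex(-s, p)) - 1) / complex(-s, p)).real
expanded = (s * (1 - math.exp(-t * s) * math.cos(t * p))
            + p * math.exp(-t * s) * math.sin(t * p)) / (s ** 2 + p ** 2)
assert abs(direct - expanded) < 1e-12
```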
Therefore, from \eqref{IP5} and \eqref{IP6} \begin{equation}
\left\|\int_0^t S(t-t')[(S(t')\phi)(S(t')\phi_x)]\ dt'\right\|_{H^s} \gtrsim_s \frac{\left(\eta-(\eta+\beta)e^{-t\eta N^2}\right)}{(\eta^2+\beta^2)}N^{-2(s+1)}, \end{equation}
but this contradicts \eqref{IP3} for $N$ large enough, since $\left\|\phi\right\|_s^2\sim_s 1$ and $s<-1$. \end{proof} As a consequence of Theorem \ref{TIP1} we can obtain Theorem \ref{malpuestodos}. \begin{proof}[Proof of Theorem \ref{malpuestodos}] Let $s<-1$ and suppose that there exists $T>0$ such that the Cauchy problem \eqref{cl} is locally well-posed in $H^s(\mathbb{T})$ on the time interval $[0,T]$ and such that the flow map $\Phi:H^s(\mathbb{T})\rightarrow C\left([0,T];H^s(\mathbb{T})\right)$ is $C^2$ at the origin. When $\phi \in H^s(\mathbb{T})$, we will denote by $u_{\phi}(t)=\Phi(t)\phi$ the solution of the Cauchy problem \eqref{cl} with initial data $\phi$. This means that $u_{\phi}$ is a solution of the integral equation
$$u_{\phi}(t)=S(t)\phi-\frac{1}{2}\int_{0}^t S(t-t')\partial_x(u_{\phi})^2(t')dt'.$$
By computing the Fr\'echet derivative of $\Phi(t)$ at $\phi$ in the direction $\psi$, we obtain \begin{equation}\label{IP7} d_{\phi}\Phi(t)(\psi)=S(t)\psi-\int_{0}^t S(t-t')\partial_x\left(u_{\phi}(t') d_{\phi}\Phi(t')(\psi)\right)dt'. \end{equation} Since the Cauchy problem \eqref{cl} is assumed to be well-posed, uniqueness gives $\Phi(t)(0)=0$, so we deduce from \eqref{IP7} that \begin{equation}\label{IP8} d_{0}\Phi(t)(\psi)=S(t)\psi. \end{equation} Using \eqref{IP7} and \eqref{IP8} we can compute the second Fr\'echet derivative at the origin in the direction $(\phi,\psi)$ $$d^2_0\Phi(t)(\phi,\psi)=-\int_0^t S(t-t')\partial_x[(S(t')\phi)(S(t')\psi)]\ dt'.$$ The assumption of $C^2$ regularity implies that $d^2_0\Phi(t)\in \mathcal{B}\left(H^s(\mathbb{T})\times H^s(\mathbb{T}),H^s(\mathbb{T})\right)$, which leads to the following inequality \begin{equation}\label{IP9}
\left\|d^2_0\Phi(t)(\phi,\psi)\right\|_{H^s}\lesssim \left\|\phi\right\|_{H^s}\left\|\psi\right\|_{H^s}, \forall \phi,\psi \in H^s(\mathbb{T}). \end{equation} But inequality \eqref{IP9} is equivalent to \eqref{IP3}, which has been shown to fail in the proof of Theorem \ref{TIP1}. \end{proof}
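To illustrate the contradiction concretely, one can evaluate the lower bound numerically: for fixed $t$ and $s<-1$, the quantity $N^{-2s}\bigl|\Re\bigl((e^{t[i\psi-\sigma]}-1)/(i\psi-\sigma)\bigr)\bigr|$ grows without bound in $N$. The following sketch uses the illustrative values $\beta=\eta=1$, $t=1$, $s=-3/2$.

```python
import cmath

# Growth of the lower bound from (IP5)-(IP6) when s < -1; beta, eta, t, s
# are illustrative, psi and sigma are evaluated at (k,j) = (1,N).
beta = eta = 1.0
t, s = 1.0, -1.5

def lower_bound(N):
    p = 2.0 * beta * (N - 1)            # psi(1, N)
    sg = 2.0 * eta * (N - 1) ** 2       # sigma(1, N)
    z = complex(-sg, p)
    return abs(N ** (-2 * s) * ((cmath.exp(t * z) - 1) / z).real)

vals = [lower_bound(N) for N in (10, 100, 1000)]
# the sequence increases without bound, contradicting the uniform bound (IP3)
assert vals[0] < vals[1] < vals[2]
```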
\end{document}
\begin{document}
\begin{frontmatter}
\title{A global observer for attitude and gyro biases from vector measurements} \thanks[footnoteinfo]{This work was supported by the French Agence Nationale de la Recherche through the ANR ASTRID SCAR project ``Sensory Control of Aerial Robots'' (ANR-12-ASTR-0033).}
\author[First]{Philippe Martin} \author[First]{Ioannis Sarras}
\address[First]{Centre Automatique et Systèmes, MINES ParisTech, PSL Research University, Paris, France\\
(e-mail: \{philippe.martin,ioannis.sarras\}@mines-paristech.fr).}
\begin{keyword} Attitude estimation; nonlinear observer; guidance; navigation systems. \end{keyword}
\begin{abstract}
We consider the classical problem of estimating the attitude and gyro biases of a rigid body from vector measurements and a triaxial rate gyro. We propose a simple ``geometry-free'' nonlinear observer with guaranteed uniform global asymptotic convergence and local exponential convergence; the stability analysis, which relies on a strict Lyapunov function, is rather simple. The excellent behavior of the observer is illustrated through a detailed numerical simulation. \end{abstract}
\end{frontmatter}
\section{Introduction}\label{sec:intro}
Estimating the attitude of a rigid body from vector measurements has been for decades a problem of interest, because of its importance for a variety of technological applications such as satellites or unmanned aerial vehicles.
The attitude of the body can be described by the rotation matrix $R\in\mathrm{SO(3)}$ from inertial to body axes. The measurement vectors $u_1,\cdots,u_n\in\Rset^3$ correspond to the expression in body axes of known vectors $U_1,\cdots,U_n\in\Rset^3$ which are constant in inertial axes, i.e., $u_k(t)=R^T(t)U_k$. The goal is to reconstruct the attitude at time~$t$ using only the knowledge of the measurement vectors until~$t$. The problem would be very easy if the measurements were perfect: indeed, using for instance only the two vectors $u_1(t)$ and $u_2(t)$ and noticing $R^T(x\times y)=R^Tx\times R^Ty$ since $R$ is a rotation matrix, we readily find \begin{IEEEeqnarray*}{rCl}
R^T(t) &=& R^T(t)\cdot\begin{pmatrix}U_1& U_2& U_1\times U_2\end{pmatrix}\cdot\begin{pmatrix}U_1& U_2& U_1\times U_2\end{pmatrix}^{-1}\\
&=& \begin{pmatrix}u_1(t)& u_2(t)& u_1(t)\times u_2(t)\end{pmatrix}\cdot\begin{pmatrix}U_1& U_2& U_1\times U_2\end{pmatrix}^{-1}. \end{IEEEeqnarray*}
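The algebra above is easily checked numerically; in the following sketch the rotation $R$ and the inertial vectors $U_1,U_2$ are arbitrary illustrative choices.

```python
import numpy as np

# Numerical spot-check of the "perfect measurement" reconstruction: with
# noise-free body-frame measurements u1 = R^T U1, u2 = R^T U2, the matrix
# R^T is recovered algebraically.
def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R = rot_z(0.7) @ rot_x(0.3)                    # some attitude
U1 = np.array([0.0, 0.0, 1.0])                 # e.g. gravity direction
U2 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)  # e.g. magnetic direction
u1, u2 = R.T @ U1, R.T @ U2                    # perfect measurements

M_body = np.column_stack([u1, u2, np.cross(u1, u2)])
M_inertial = np.column_stack([U1, U2, np.cross(U1, U2)])
RT = M_body @ np.linalg.inv(M_inertial)        # equals R^T
assert np.allclose(RT, R.T)
```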
But in real situations, the measurement vectors are always corrupted at least by noise. Moreover, the $U_k$'s may not be strictly constant: for instance a triaxial magnetometer measures the (locally) constant Earth magnetic field, but is easily perturbed by ferromagnetic masses and electromagnetic perturbations; similarly, a triaxial accelerometer can be considered as measuring the direction of gravity provided it is not undergoing a substantial acceleration (see e.g. \cite{MartiS2010ICRA} for a detailed discussion of this assumption and its consequences in the framework of quadrotor UAVs). That is why, despite the additional cost, it may be interesting to use a triaxial rate gyro to supplement the possibly not so good vector measurements.
The literature on attitude estimation from vector measurements can be broadly divided into three categories: i) optimization-based methods; ii) stochastic filtering; iii) nonlinear observers. Details on the various approaches can be found e.g. in the surveys \cite{CrassMC2007JGCD,ZamanTM2015arXiv} and the references therein.
The first category sets the problem as the minimization of a cost function, and is usually referred to as Wahba's problem. The attitude is algebraically recovered at time~$t$ using only the measurements at time~$t$. No filtering is performed, and possibly available velocity information from rate gyros is not exploited.
The second category mainly hinges on Kalman filtering and its variants. Despite their many qualities, the drawback of those designs is that convergence cannot in general be guaranteed except for mild trajectories. Moreover the tuning is not completely obvious, and the computational cost may be too high for small embedded processors.
The third, and more recent, approach proposes nonlinear observers with a large guaranteed domain of convergence and a rather simple tuning through a few constant gains. These observers can be designed: a) directly on $\mathrm{SO(3)}$ (or the unit quaternion space), see e.g.~\cite{MahonHP2008TAC,MartiS2010CEP,VascoSO2008IFAC,GripFJS2012TAC}; b) or more recently, on $\Rset^{3\times3}$, i.e., deliberately ``forgetting'' the underlying geometry~\cite{BatisSO2012CEP,BatisSO2014AUT,GripFJS2015AUT}. Probably the best-known design is the so-called nonlinear complementary filter of~\cite{MahonHP2008TAC}; as noticed in~\cite{MartiS2007CDC}, it is a special case of so-called invariant observers~\cite{BoMaRo_tac09}.
In this paper, we propose a new observer of attitude and gyro biases from gyro measurements and (at least) two measurement vectors. It also ``forgets'' the geometry of~$\mathrm{SO(3)}$, which yields a very simple structure with a straightforward proof of its uniform global asymptotic convergence (notice the observer of~\cite{MahonHP2008TAC} is only quasi-globally convergent, with moreover a much more involved proof). It is easy to tune and implement, and demonstrates excellent performance in simulation. This observer can be seen as a modification and simplification of the linear cascaded observer proposed in~\cite{BatisSO2012CEP}, along with a much simpler proof of convergence based on a strict Lyapunov function. Compared to the global exponential stability result of~\cite{BatisSO2012CEP}, we only prove uniform global asymptotic stability plus local exponential stability. However, in the case where our observer reduces to the observer of~\cite{BatisSO2012CEP} by using directly non-filtered measurements in certain terms, our Lyapunov function again serves as a strict Lyapunov function, which provides a simpler and more direct alternative to the non-trivial stability analysis of~\cite{BatisSO2012CEP}.
The paper runs as follows: the model used to design the observer is described in section~\ref{sec:Models}; the observer is presented in section~\ref{sec:Observer}, and its convergence is proved; finally, section~\ref{sec:Simulations} illustrates the excellent behavior of the observer on a detailed numerical simulation.
\section{The design model}\label{sec:Models} We consider a moving rigid body subjected to the angular velocity~$\omega$ (in body axes). Its orientation (from inertial to body axes) matrix $R\in\mathrm{SO(3)}$ is related to~$\omega$ by \begin{IEEEeqnarray}{rCl}
\dot R &=& R\omega_\times\label{eq:R}, \end{IEEEeqnarray} where the skew-symmetric matrix $\omega_\times$ is defined by $\omega_\times x:=\omega\times x$ whatever the vector~$x$.
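The hat map $\omega\mapsto\omega_\times$ admits the usual closed form, which the following sketch verifies on illustrative numerical values.

```python
import numpy as np

# Minimal sketch of the skew-symmetric ("hat") map: skew(omega) @ x equals
# omega x (cross product) for every x; the numerical values are arbitrary.
def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

omega = np.array([0.3, -1.2, 0.5])
x = np.array([1.0, 2.0, -0.7])
assert np.allclose(skew(omega) @ x, np.cross(omega, x))
assert np.allclose(skew(omega).T, -skew(omega))   # skew symmetry
```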
The rigid body is equipped with a triaxial rate gyro measuring the angular velocity~$\omega$, and two additional triaxial sensors (for example accelerometers, magnetometers or sun sensors) providing the measurements of two vectors $\alpha$ and~$\beta$. These two vectors correspond to the expression in body axes of two known independent vectors $\alpha_i$ and $\beta_i$ which are constant in inertial axes. In other words, \begin{IEEEeqnarray*}{rCl}
\alpha &:=& R^T\alpha_i\\
\beta &:=& R^T\beta_i. \end{IEEEeqnarray*} Since $\alpha_i$, $\beta_i$ are constant, we readily find \begin{IEEEeqnarray*}{rCl}
\dot\alpha &=& \alpha\times\omega\\
\dot\beta &=& \beta\times\omega. \end{IEEEeqnarray*} As any sensor, the rate gyro is biased, and rather provides the measurement \begin{IEEEeqnarray*}{rCl}
\omega_m &:=& \omega+b, \end{IEEEeqnarray*} where $b$ is a slowly-varying (for instance with temperature) unknown bias. The effect of this bias on attitude estimation may be important when the observer gains are small, hence it is worth determining its value. But since it is not exactly constant, it cannot be calibrated offline and must be estimated online together with the attitude.
Our objective is to design an estimation scheme that can reconstruct online the orientation matrix $R(t)$ and the bias $b(t)$, using i) the measurements of the gyro and of the two vector sensors; ii) the knowledge of the constant vectors $\alpha_i$ and~$\beta_i$. The model on which the design will be based therefore consists of the dynamics \begin{IEEEeqnarray}{rCl}
\dot\alpha &=& \alpha\times\omega\label{eq:alpha}\\
\dot\beta &=& \beta\times\omega\\
\dot b &=& 0, \end{IEEEeqnarray} together with the measurements \begin{IEEEeqnarray}{rCl}
\omega_m &:=& \omega+b\label{eq:omega}\\
\alpha_m &:=& \alpha\\
\beta_m &:=& \beta.\label{eq:betam} \end{IEEEeqnarray}
\section{The observer}\label{sec:Observer} We show that the state of the design system~\eqref{eq:alpha}--\eqref{eq:betam} can be estimated by the observer \begin{IEEEeqnarray}{rCl}
\dot{\hat\alpha} &=& \hat\alpha\times(\omega_m-\hat b)-k_\alpha(\hat\alpha-\alpha_m)\label{eq:alphahat}\\
\dot{\hat\beta} &=& \hat\beta\times(\omega_m-\hat b)-k_\beta(\hat\beta-\beta_m)\\
\dot{\hat b} &=& l_\alpha\hat\alpha\times\alpha_m+l_\beta\hat\beta\times\beta_m,\label{eq:bhat} \end{IEEEeqnarray} where $k_\alpha,k_\beta,l_\alpha,l_\beta>0$ are strictly positive constants. Notice this observer is very similar to the so-called bias observer of~\cite{BatisSO2012CEP}. The main difference is the use of the filtered terms $\hat\alpha\times(\omega_m-\hat b)$ and $\hat\beta\times(\omega_m-\hat b)$ instead of their unfiltered versions, which avoids injecting too much measurement noise when the gains are small; as a consequence the error system below is no longer linear in the measured quantities $\omega_m,\alpha_m,\beta_m$, and the technique of proof of~\cite{BatisSO2012CEP} is no longer applicable.
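A possible discrete-time implementation is a plain explicit-Euler step of the observer; the following sketch is only illustrative (the discretization, step size and default gains are our choices, not part of the continuous-time analysis).

```python
import numpy as np

# Hedged sketch of one explicit-Euler step of the observer; the scheme and
# the default gain values are illustrative assumptions.
def observer_step(a_hat, b_hat, bias_hat, wm, am, bm, dt,
                  ka=10.0, kb=10.0, la=0.15, lb=0.15):
    w_est = wm - bias_hat                                  # debiased rate
    da = np.cross(a_hat, w_est) - ka * (a_hat - am)
    db = np.cross(b_hat, w_est) - kb * (b_hat - bm)
    dbias = la * np.cross(a_hat, am) + lb * np.cross(b_hat, bm)
    return a_hat + dt * da, b_hat + dt * db, bias_hat + dt * dbias

# With exact estimates and zero angular velocity (wm = bias), the step is an
# equilibrium:
a = np.array([0.0, 0.0, 1.0])
b = np.array([1.0, 0.0, 0.0])
bias = np.array([0.01, -0.02, 0.03])
a2, b2, bias2 = observer_step(a, b, bias, wm=bias, am=a, bm=b, dt=0.01)
assert np.allclose(a2, a) and np.allclose(b2, b) and np.allclose(bias2, bias)
```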
Defining the error vectors $e_\alpha:=\hat\alpha-\alpha$, $e_\beta:=\hat\beta-\beta$ and $e_b:=\hat b-b$, the error system reads \begin{IEEEeqnarray}{rCl}
\dot e_\alpha &=& e_\alpha\times\omega-(\alpha+e_\alpha)\times e_b-k_\alpha e_\alpha\label{eq:ealpha}\\
\dot e_\beta &=& e_\beta\times\omega-(\beta+e_\beta)\times e_b-k_\beta e_\beta\label{eq:ebeta}\\
\dot e_b &=& l_\alpha e_\alpha\times\alpha+l_\beta e_\beta\times\beta.\label{eq:eb} \end{IEEEeqnarray} It will be convenient to use the rotated variables $E_\alpha:=Re_\alpha$, $E_\beta:=Re_\beta$, $E_b:=Re_b$, and~$\Omega:=R\omega$. In terms of the rotated variables, the error system reads \begin{IEEEeqnarray}{rCl}
\dot E_\alpha &=& E_b\times(\alpha_i+E_\alpha)-k_\alpha E_\alpha\label{eq:Ealpha}\\
\dot E_\beta &=& E_b\times(\beta_i+E_\beta)-k_\beta E_\beta\label{eq:Ebeta}\\
\dot E_b &=& \Omega\times E_b+l_\alpha E_\alpha\times\alpha_i+l_\beta E_\beta\times\beta_i.\label{eq:Eb} \end{IEEEeqnarray} Indeed, we find e.g. \eqref{eq:Ealpha} by writing \begin{IEEEeqnarray*}{rCl}
\dot E_\alpha &=& \dot Re_\alpha+R\dot e_\alpha\\
&=& R(\omega\times e_\alpha)\\
&& +\, R(e_\alpha\times\omega)-(R\alpha+Re_\alpha)\times Re_b-k_\alpha Re_\alpha\\
&=& -(\alpha_i+E_\alpha)\times E_b-k_\alpha E_\alpha, \end{IEEEeqnarray*} where we have used \eqref{eq:R}, \eqref{eq:ealpha} and $R(x\times y)=Rx\times Ry$ since $R$ is a rotation matrix.
It is of course equivalent to work with this rotated error system; notice also $\abs{\Omega}=\abs{\omega}$.
\begin{thm}\label{th:conv2vec} Assume $k_\alpha,k_\beta,l_\alpha,l_\beta>0$ and $\omega$ bounded.
Then the equilibrium point $(\bar E_\alpha,\bar E_\beta,\bar E_b):=(0,0,0)$ of the error system~\eqref{eq:Ealpha}--\eqref{eq:Eb} is uniformly globally asymptotically stable and locally exponentially stable. \end{thm}
\begin{pf} Consider the candidate Lyapunov function \begin{IEEEeqnarray*}{rCl}
V &:=& \sigma_1V_1+\sigma_2V_1^2+V_3\label{eq:LyapFct}, \end{IEEEeqnarray*} where the coefficients $\sigma_1,\sigma_2>0$ are yet to be defined, and \begin{IEEEeqnarray*}{rCl}
V_1(E_\alpha,E_\beta,E_b) &:=& \frac{1}{2}\bigl(l_\alpha\abs{E_\alpha}^2 + l_\beta\abs{E_\beta}^2 + \abs{E_b}^2\bigr)\\
V_3(E_\alpha,E_\beta,E_b) &:=& \frac{1}{2}\abs{E_b - \frac{l_\alpha}{k_\alpha}\alpha_i\times E_\alpha - \frac{l_\beta}{k_\beta}\beta_i\times E_\beta}^2. \end{IEEEeqnarray*} Clearly, $V$ is positive definite and radially unbounded.
We now compute the time derivatives of its pieces along the trajectories of the error system~\eqref{eq:Ealpha}--\eqref{eq:Eb}. \begin{IEEEeqnarray*}{rCl}
\dot V_1&=& l_\alpha\inner{E_\alpha}{E_b\times\alpha_i+E_b\times E_\alpha - k_\alpha E_\alpha}\\
&& +\> l_\beta\inner{E_\beta}{E_b\times\beta_i+E_b\times E_\beta - k_\beta E_\beta}\\
&& +\> \inner{E_b}{\Omega\times E_b + l_\alpha E_\alpha\times\alpha_i + l_\beta E_\beta\times\beta_i}\\
&=& -k_\alpha l_\alpha\abs{E_\alpha}^2 - k_\beta l_\beta\abs{E_\beta}^2, \end{IEEEeqnarray*} where we have used $\inner{x}{x\times y}=0$.
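This cancellation can be spot-checked numerically: evaluating $\dot V_1$ along the error dynamics \eqref{eq:Ealpha}--\eqref{eq:Eb} at arbitrary (illustrative) values reproduces $-k_\alpha l_\alpha\abs{E_\alpha}^2-k_\beta l_\beta\abs{E_\beta}^2$ exactly.

```python
import numpy as np

# Numerical spot-check (fixed illustrative values) that along the rotated
# error dynamics dV1/dt reduces to -ka*la*|E_alpha|^2 - kb*lb*|E_beta|^2.
ka, kb, la, lb = 10.0, 10.0, 0.15, 0.15
ai = np.array([0.0, 0.0, 1.0])
bi = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
Om = np.array([0.2, -0.4, 0.9])
Ea = np.array([0.3, -0.1, 0.25])
Eb = np.array([-0.2, 0.15, 0.1])
Ebias = np.array([0.05, 0.07, -0.03])

# rotated error dynamics
dEa = np.cross(Ebias, ai + Ea) - ka * Ea
dEb = np.cross(Ebias, bi + Eb) - kb * Eb
dEbias = np.cross(Om, Ebias) + la * np.cross(Ea, ai) + lb * np.cross(Eb, bi)

dV1 = la * Ea @ dEa + lb * Eb @ dEb + Ebias @ dEbias
assert np.isclose(dV1, -ka * la * Ea @ Ea - kb * lb * Eb @ Eb)
```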
\begin{IEEEeqnarray*}{rCl}
\frac{d}{dt}V_1^2 &=& -k_\alpha l_\alpha^2\abs{E_\alpha}^4 - k_\beta l_\beta^2\abs{E_\beta}^4
- k_\alpha l_\alpha\abs{E_\alpha}^2\abs{E_b}^2\\
&&-\> k_\beta l_\beta\abs{E_\beta}^2\abs{E_b}^2 - (k_\alpha+k_\beta)l_\alpha l_\beta\abs{E_\alpha}^2\abs{E_\beta}^2. \end{IEEEeqnarray*}
Setting $u_{\alpha\beta}:=\frac{l_\alpha}{k_\alpha}\alpha_i\times E_\alpha + \frac{l_\beta}{k_\beta}\beta_i\times E_\beta$, \begin{IEEEeqnarray*}{rCl}
\IEEEeqnarraymulticol{3}{l}{ \inner{E_b}{\dot E_b-\dot u_{\alpha\beta}} }\\ ~~~
&=& \inner{E_b}{\Omega\times E_b}+\inner{E_b}{l_\alpha E_\alpha\times\alpha_i+l_\beta E_\beta\times\beta_i}\\
&& -\> \inner{E_b}{\frac{l_\alpha}{k_\alpha}\alpha_i\times(E_b\times\alpha_i) + \frac{l_\beta}{k_\beta}\beta_i\times(E_b\times\beta_i)}\\
&& -\> \inner{E_b}{\frac{l_\alpha}{k_\alpha}\alpha_i\times(E_b\times E_\alpha) + \frac{l_\beta}{k_\beta}\beta_i\times(E_b\times E_\beta)}\\
&& +\> \inner{E_b}{\frac{l_\alpha}{k_\alpha}\alpha_i\times(k_\alpha E_\alpha) + \frac{l_\beta}{k_\beta}\beta_i\times(k_\beta E_\beta)}\\
&=& \inner{E_b}{\Mab E_b} \\
&& -\> \inner{E_b}{\frac{l_\alpha}{k_\alpha}\alpha_i\times(E_b\times E_\alpha)
+\frac{l_\beta}{k_\beta}\beta_i\times(E_b\times E_\beta)}, \end{IEEEeqnarray*} where we have used $\inner{x}{x\times y}=0$ and $\inner{x}{y\times z}=\inner{z}{x\times y}$.
\begin{IEEEeqnarray*}{rCl}
\IEEEeqnarraymulticol{3}{l}{ \inner{u_{\alpha\beta}}{\dot E_b-\dot u_{\alpha\beta}} }\\ ~~~
&=& \inner{u_{\alpha\beta}}{\Omega\times E_b} + \inner{u_{\alpha\beta}}{\Mab E_b} \\
&& -\> \inner{u_{\alpha\beta}}{\frac{l_\alpha}{k_\alpha}\alpha_i\times(E_b\times E_\alpha)
+\frac{l_\beta}{k_\beta}\beta_i\times(E_b\times E_\beta)}\\
&=& \inner{E_b}{u_{\alpha\beta}\times\Omega} + \inner{E_b}{\Mab u_{\alpha\beta}} \\
&& -\> \inner{E_b}{\frac{l_\alpha}{k_\alpha}\alpha_i\times(u_{\alpha\beta}\times E_\alpha)
+\frac{l_\beta}{k_\beta}\beta_i\times(u_{\alpha\beta}\times E_\beta)}, \end{IEEEeqnarray*} where we have used $\inner{x}{y\times z}=\inner{z}{x\times y}$ several times.
We next bound the last two expressions. Since $\alpha_i,\beta_i$ are independent and $k_\alpha,l_\alpha,k_\beta,l_\beta>0$, the matrix \begin{IEEEeqnarray*}{C}
-\Mab \end{IEEEeqnarray*} is positive definite, see~\cite[Lemma~2]{TayebRB2013TAC}. As a consequence, there exists $\mu>0$ such that \begin{IEEEeqnarray*}{rCl}
\inner{E_b}{\Bigl(\frac{l_\alpha}{k_\alpha}\alpha_{i\times}^2 +\frac{l_\beta}{k_\beta}\beta_{i\times}^2\Bigr)E_b}
&\le& -\mu\abs{E_b}^2. \end{IEEEeqnarray*}
Using then Young's inequality $\abs{\inner{x}{y}}\leq \frac{\epsilon\abs{x}^2}{2}+\frac{\abs{y}^2}{2\epsilon}$ and $\bigabs{\sum_{i=1}^nx_i}^2\le n\sum_{i=1}^n\abs{x_i}^2$ yields \begin{IEEEeqnarray*}{rCl}
\IEEEeqnarraymulticol{3}{l}{ \inner{E_b}{\dot E_b-\dot u_{\alpha\beta}} }\\ ~
&\le& -\mu\abs{E_b}^2 + \frac{\varepsilon}{2}\abs{E_b}^2\\
&& +\>\frac{1}{\varepsilon}\Bigabs{ \frac{l_\alpha}{k_\alpha}\alpha_i\times(E_b\times E_\alpha) }^2
+\frac{1}{\varepsilon}\Bigabs{ \frac{l_\beta}{k_\beta}\beta_i\times(E_b\times E_\beta) }^2\\
&\le& \Bigl(\frac{\varepsilon}{2}-\mu\Bigr)\abs{E_b}^2\\
&& +\> \frac{l_\alpha^2\abs{\alpha_i}^2}{\varepsilon k_\alpha^2}\abs{E_\alpha}^2\abs{E_b}^2
+\frac{l_\beta^2\abs{\beta_i}^2}{\varepsilon k_\beta^2}\abs{E_\beta}^2\abs{E_b}^2. \end{IEEEeqnarray*} Similarly, \begin{IEEEeqnarray*}{rCl}
\IEEEeqnarraymulticol{3}{l}{ \bigabs{\inner{u_{\alpha\beta}}{\dot E_b-\dot u_{\alpha\beta}}} }\\ ~
&\le& \frac{\varepsilon}{2}\abs{E_b}^2 + \frac{2}{\varepsilon}\abs{ u_{\alpha\beta}\times\Omega }^2
+ \frac{2}{\varepsilon}\Bigabs{ \Mab u_{\alpha\beta} }^2\\
&& +\> \frac{2}{\varepsilon}\Bigabs{ \frac{l_\alpha}{k_\alpha}\alpha_i\times(u_{\alpha\beta}\times E_\alpha) }^2
+\frac{2}{\varepsilon}\Bigabs{ \frac{l_\beta}{k_\beta}\beta_i\times(u_{\alpha\beta}\times E_\beta) }^2\\
&\le& \frac{\varepsilon}{2}\abs{E_b}^2 + \frac{2c_\omega^2}{\varepsilon}\abs{u_{\alpha\beta}}^2
+ \frac{4}{\varepsilon}\Bigl(\frac{l_\alpha^2}{k_\alpha^2}\abs{\alpha_i}^4 + \frac{l_\beta^2}{k_\beta^2}\abs{\beta_i}^4\Bigr)\abs{ u_{\alpha\beta} }^2\\
&& +\> \frac{2l_\alpha^2\abs{\alpha_i}^2}{\varepsilon k_\alpha^2}\abs{ E_\alpha }^2\abs{ u_{\alpha\beta} }^2
+ \frac{2l_\beta^2\abs{\beta_i}^2}{\varepsilon k_\beta^2}\abs{ E_\beta }^2\abs{ u_{\alpha\beta} }^2, \end{IEEEeqnarray*} where $\abs{\Omega}=\abs{\omega}\le c_\omega$ and \begin{IEEEeqnarray*}{rCl}
\abs{u_{\alpha\beta}}^2 &\le& \frac{2l_\alpha^2\abs{\alpha_i}^2}{k_\alpha^2}\abs{ E_\alpha }^2
+ \frac{2l_\beta^2\abs{\beta_i}^2}{k_\beta^2}\abs{ E_\beta }^2. \end{IEEEeqnarray*}
Collecting all the pieces, we eventually find \begin{IEEEeqnarray*}{rCl}
\dot V &\le& -\mu'\abs{E_b}^2 - \sigma_{1\alpha}\abs{ E_\alpha }^2 - \sigma_{1\beta}\abs{ E_\beta }^2\\
&&-\> \sigma_{2\alpha}\abs{ E_\alpha }^4 - \sigma_{2\beta}\abs{ E_\beta }^4 - \sigma_{2\alpha\beta}\abs{ E_\alpha }^2\abs{ E_\beta }^2\\
&&-\> \sigma'_{2\alpha}\abs{E_\alpha}^2\abs{E_b}^2 - \sigma'_{2\beta}\abs{E_\beta}^2\abs{E_b}^2, \end{IEEEeqnarray*} where \begin{IEEEeqnarray*}{rCl}
\mu' &:=& \mu-\varepsilon\\
\sigma_{1\alpha} &:=& \sigma_1k_\alpha l_\alpha - \frac{4}{\varepsilon}\Bigl(c_\omega^2+\frac{2l_\alpha^2}{k_\alpha^2}\abs{\alpha_i}^4
+ \frac{2l_\beta^2}{k_\beta^2}\abs{\beta_i}^4\Bigr)\frac{l_\alpha^2\abs{\alpha_i}^2}{k_\alpha^2}\\
\sigma_{1\beta} &:=& \sigma_1k_\beta l_\beta - \frac{4}{\varepsilon}\Bigl(c_\omega^2+\frac{2l_\alpha^2}{k_\alpha^2}\abs{\alpha_i}^4
+ \frac{2l_\beta^2}{k_\beta^2}\abs{\beta_i}^4\Bigr)\frac{l_\beta^2\abs{\beta_i}^2}{k_\beta^2}\\
\sigma_{2\alpha} &:=& \sigma_2k_\alpha l_\alpha^2 - \frac{4l_\alpha^4\abs{\alpha_i}^4}{\varepsilon k_\alpha^4}\\
\sigma_{2\beta} &:=& \sigma_2k_\beta l_\beta^2 - \frac{4l_\beta^4\abs{\beta_i}^4}{\varepsilon k_\beta^4}\\
\sigma_{2\alpha\beta} &:=& \sigma_2(k_\alpha+k_\beta)l_\alpha l_\beta
- \frac{8l_\alpha^2l_\beta^2\abs{\alpha_i}^2\abs{\beta_i}^2}{\varepsilon k_\alpha^2k_\beta^2}\\
\sigma'_{2\alpha} &:=& \sigma_2k_\alpha l_\alpha - \frac{l_\alpha^2\abs{\alpha_i}^2}{\varepsilon k_\alpha^2}\\
\sigma'_{2\beta} &:=& \sigma_2k_\beta l_\beta - \frac{l_\beta^2\abs{\beta_i}^2}{\varepsilon k_\beta^2}. \end{IEEEeqnarray*} All the above coefficients are strictly positive if $\varepsilon$ is chosen small enough and $\sigma_1,\sigma_2$ large enough. This means $V$ is a strict Lyapunov function, which proves uniform global asymptotic stability. Notice the bound $c_\omega$ need not be known, since $\sigma_1$ can always be chosen large enough to achieve $\sigma_{1\alpha},\sigma_{1\beta}>0$.
To establish local exponential stability, we consider the linearized error system around the equilibrium point $(\bar E_\alpha,\bar E_\beta,\bar E_b):=(0,0,0)$; it reads \begin{IEEEeqnarray*}{rCl}
\delta\dot E_\alpha &=& \delta E_b\times\alpha_i-k_\alpha\delta E_\alpha\\
\delta\dot E_\beta &=& \delta E_b\times\beta_i-k_\beta\delta E_\beta\\
\delta\dot E_b &=& \Omega\times\delta E_b+l_\alpha\delta E_\alpha\times\alpha_i+l_\beta\delta E_\beta\times\beta_i. \end{IEEEeqnarray*} This is a linear time-varying system (due to the dependence on $\Omega$); its origin is uniformly locally asymptotically stable, since the origin of the nonlinear system is uniformly globally asymptotically stable. Exponential stability follows from the fact that for linear (time-varying) systems, uniform asymptotic stability and exponential stability are equivalent, see~\cite[Theorem 4.11]{khalil2002book}.\qed \end{pf}
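As a sanity check of local exponential stability, one can freeze $\Omega$ (here at $0$, an illustrative choice) and verify numerically that the resulting constant $9\times9$ system matrix is Hurwitz; gains and inertial vectors below are also illustrative.

```python
import numpy as np

# Assemble the linearized error system (Omega frozen at zero) as a 9x9
# matrix and check that all eigenvalues have negative real part.
def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

ka, kb, la, lb = 10.0, 10.0, 0.15, 0.15
ai = np.array([0.0, 0.0, 1.0])
bi = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
I3, Z3 = np.eye(3), np.zeros((3, 3))

# state (dE_alpha, dE_beta, dE_b), using dE_b x v = -skew(v) dE_b
A = np.block([
    [-ka * I3, Z3, -skew(ai)],
    [Z3, -kb * I3, -skew(bi)],
    [-la * skew(ai), -lb * skew(bi), Z3],   # Omega = 0
])
assert np.linalg.eigvals(A).real.max() < 0.0   # Hurwitz
```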
\begin{rem}
$V_1$ alone is a (non strict) Lyapunov function for the error system~\eqref{eq:Ealpha}--\eqref{eq:Eb}; using repeatedly Barbalat's lemma, (non uniform) global asymptotic stability can be very easily established, see~\cite{MartiS2016arXiv}. On the other hand, the more complicated Lyapunov function $V$ used in the proof is strict, hence yields uniform stability. \end{rem}
\begin{rem}
If in the observer~\eqref{eq:alphahat}--\eqref{eq:bhat} the filtered terms $\hat\alpha\times(\omega_m-\hat b)$ and $\hat\beta\times(\omega_m-\hat b)$ are replaced by their unfiltered versions $\hat\alpha\times\omega_m-\alpha_m\times\hat b$ and $\hat\beta\times\omega_m-\beta_m\times\hat b$, we recover the first part of the linear cascaded observer of~\cite{BatisSO2012CEP}. In this case, the strict Lyapunov function $V$ used in the proof can also be used to establish uniform global asymptotic stability, which provides a simpler and more constructive proof than the ``abstract'' proof of~\cite{BatisSO2012CEP} based on uniform observability arguments. \end{rem}
\begin{rem}
The result obviously also holds if the scalar gains $k_\alpha,k_\beta,l_\alpha,l_\beta$ are replaced by $3\times3$ symmetric positive definite matrices; these added degrees of freedom for the tuning might be useful in practice if the components of the measurement vectors are produced by sensors with very different characteristics.
It is also clear that more than two vectors can be used with an obvious generalization of the proposed structure. \end{rem}
\begin{rem}
The observer does not use the knowledge of the constant vectors $\alpha_i$ and $\beta_i$. This may be an interesting feature in some applications where, for example, those vectors are not precisely known and/or (slowly) vary. \end{rem}
We then have the following obvious but important corollary, which gives an estimate of the true orientation matrix~$R$ by using the knowledge of $\alpha_i$ and $\beta_i$. Notice it is considerably simpler than the approach proposed in~\cite{BatisSO2012CEP}, where the estimated orientation matrix is obtained through an additional observer of dimension~$9$.
\begin{cor} Under the assumptions of Theorem~\ref{th:conv2vec}, the matrix $\tilde R$ defined by
\begin{IEEEeqnarray*}{rCl}
\tilde R^T &:=& \begin{pmatrix}\frac{\hat\alpha}{\abs{\alpha_i}}& \frac{\hat\alpha\times\hat\beta}{\abs{\alpha_i\times\beta_i}} & \frac{\hat\alpha\times(\hat\alpha\times\hat\beta)}{\abs{\alpha_i\times(\alpha_i\times\beta_i)}}\end{pmatrix}\cdot R_i^T\\
R_i &:=& \begin{pmatrix}\frac{\alpha_i}{\abs{\alpha_i}}& \frac{\alpha_i\times\beta_i}{\abs{\alpha_i\times\beta_i}} & \frac{\alpha_i\times(\alpha_i\times\beta_i)}{\abs{\alpha_i\times(\alpha_i\times\beta_i)}}\end{pmatrix}
\end{IEEEeqnarray*}
uniformly globally asymptotically converges to~$R$. \end{cor}
\begin{pf}
By Theorem~\ref{th:conv2vec}, $\hat\alpha\to\alpha$ and $\hat\beta\to\beta$. Hence,
\begin{IEEEeqnarray*}{rCl}
\tilde R^T &\to& \begin{pmatrix}\frac{\alpha}{\abs{\alpha_i}}& \frac{\alpha\times\beta}{\abs{\alpha_i\times\beta_i}} & \frac{\alpha\times(\alpha\times\beta)}{\abs{\alpha_i\times(\alpha_i\times\beta_i)}}\end{pmatrix}\cdot R_i^T\\
&=& \begin{pmatrix}\frac{R^T\alpha_i}{\abs{\alpha_i}}& \frac{R^T\alpha_i\times R^T\beta_i}{\abs{\alpha_i\times\beta_i}} & \frac{R^T\alpha_i\times(R^T\alpha_i\times R^T\beta_i)}{\abs{\alpha_i\times(\alpha_i\times\beta_i)}}\end{pmatrix}\cdot R_i^T\\
&=& R^TR_iR_i^T\\
&=& R^T,
\end{IEEEeqnarray*}
where we have used $R^T(u\times v)=R^Tu\times R^Tv$ since $R$ is a rotation matrix.\qed \end{pf}
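The corollary can be spot-checked numerically: with exact estimates $\hat\alpha=R^T\alpha_i$, $\hat\beta=R^T\beta_i$, the reconstruction returns $R^T$ exactly (the rotation and the inertial vectors below are illustrative choices).

```python
import numpy as np

# Reconstruction Rtilde^T from exact estimates equals R^T; the denominators
# use the inertial norms, as in the corollary.
def frame(a, b, norms):
    cols = [a, np.cross(a, b), np.cross(a, np.cross(a, b))]
    return np.column_stack([c / n for c, n in zip(cols, norms)])

ai = np.array([0.0, 0.0, 1.0])
bi = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
norms = [np.linalg.norm(ai),
         np.linalg.norm(np.cross(ai, bi)),
         np.linalg.norm(np.cross(ai, np.cross(ai, bi)))]
Ri = frame(ai, bi, norms)

cz, sz = np.cos(0.8), np.sin(0.8)
cx, sx = np.cos(0.5), np.sin(0.5)
R = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]]) \
    @ np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])

a_hat, b_hat = R.T @ ai, R.T @ bi              # exact estimates
Rtilde_T = frame(a_hat, b_hat, norms) @ Ri.T
assert np.allclose(Rtilde_T, R.T)
```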
Of course, $\tilde R^T$ has no reason to be a rotation matrix (it is only asymptotically so); it is nevertheless the product of a matrix with orthogonal (possibly zero) columns by a rotation matrix. If a bona fide rotation matrix is required at all times, a natural idea is to project $\tilde R$ on the ``closest'' rotation matrix~$\hat R$, thanks to a polar decomposition. Because of the particular form of $\tilde R$, the expression of $\hat R$ can readily be found without using the standard but computationally heavy projection algorithm based on singular value decomposition. For details about the polar decomposition and related matters, see e.g.~\cite[Chapter~$8$]{Higham2008book}. \begin{prop}
Consider the polar decomposition of $\tilde R^T$
\begin{IEEEeqnarray*}{rCl}
\tilde R^T &=& \hat R^T(\tilde R\tilde R^T)^\frac{1}{2}.
\end{IEEEeqnarray*}
$\hat R$, which is by construction the best approximation of~$\tilde R$ among all orthogonal matrices, is a rotation matrix that uniformly globally asymptotically converges to~$R$. When $\hat\alpha$ and $\hat\beta$ are not collinear, $\hat R$ is uniquely defined by
\begin{IEEEeqnarray*}{rCl}
\hat R^T &:=& \begin{pmatrix}\frac{\hat\alpha}{\abs{\hat\alpha}}& \frac{\hat\alpha\times\hat\beta}{\abs{\hat\alpha\times\hat\beta}} & \frac{\hat\alpha\times(\hat\alpha\times\hat\beta)}{\abs{\hat\alpha\times(\hat\alpha\times\hat\beta)}}\end{pmatrix}\cdot R_i^T.
\end{IEEEeqnarray*} \end{prop}
\begin{pf}
Since $\tilde R$ is the product of a matrix with orthogonal columns by a rotation matrix,
\begin{IEEEeqnarray*}{rCl}
(\tilde R\tilde R^T)^\frac{1}{2}
&=& \begin{pmatrix}R_i
\begin{pmatrix}\frac{\abs{\hat\alpha}^2}{\abs{\alpha_i}^2}& 0&0\\
0& \frac{\abs{\hat\alpha\times\hat\beta}^2}{\abs{\alpha_i\times\beta_i}^2} & 0\\
0&0& \frac{\abs{\hat\alpha\times(\hat\alpha\times\hat\beta)}^2}{\abs{\alpha_i\times(\alpha_i\times\beta_i)}^2}\end{pmatrix}
R_i^T\end{pmatrix}^\frac{1}{2}\\
&=& R_i
\begin{pmatrix}\frac{\abs{\hat\alpha}}{\abs{\alpha_i}}& 0&0\\
0& \frac{\abs{\hat\alpha\times\hat\beta}}{\abs{\alpha_i\times\beta_i}} & 0\\
0&0& \frac{\abs{\hat\alpha\times(\hat\alpha\times\hat\beta)}}{\abs{\alpha_i\times(\alpha_i\times\beta_i)}}\end{pmatrix}
R_i^T.
\end{IEEEeqnarray*}
When $\hat\alpha$ and $\hat\beta$ are not collinear, the expression for $\hat R$ follows at once from $\hat R^T=\tilde R^T(\tilde R\tilde R^T)^{-\frac{1}{2}}$. When $\hat\alpha=0$, one may choose $\hat R^T:=R_i^T$; when $\hat\alpha\neq0$ but $\hat\alpha\times\hat\beta=0$, one may choose $\hat R^T:=(\frac{\hat\alpha}{\abs{\hat\alpha}},\hat E_2,\hat E_3)\cdot R_i^T$, where $\frac{\hat\alpha}{\abs{\hat\alpha}}$, $\hat E_2$ and $\hat E_3$ form a direct orthonormal frame.\qed \end{pf}
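The closed form can be cross-checked against the generic SVD-based polar factor; in the sketch below $\hat\alpha,\hat\beta$ are arbitrary (noisy, non-unit) illustrative values.

```python
import numpy as np

# Compare the closed-form projection of the proposition with the orthogonal
# polar factor of Rtilde^T computed via SVD.
def unit_frame(a, b):
    cols = [a, np.cross(a, b), np.cross(a, np.cross(a, b))]
    return np.column_stack([c / np.linalg.norm(c) for c in cols])

ai = np.array([0.0, 0.0, 1.0])
bi = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
Ri = unit_frame(ai, bi)
inorms = [np.linalg.norm(ai),
          np.linalg.norm(np.cross(ai, bi)),
          np.linalg.norm(np.cross(ai, np.cross(ai, bi)))]

a_hat = np.array([0.1, -0.2, 0.9])   # noisy, non-unit estimates
b_hat = np.array([0.8, 0.1, 0.6])

# Closed form of the proposition:
Rhat_T = unit_frame(a_hat, b_hat) @ Ri.T

# Generic polar factor of Rtilde^T via SVD:
cols = [a_hat, np.cross(a_hat, b_hat),
        np.cross(a_hat, np.cross(a_hat, b_hat))]
Rtilde_T = np.column_stack([c / n for c, n in zip(cols, inorms)]) @ Ri.T
U, _, Vt = np.linalg.svd(Rtilde_T)
assert np.allclose(U @ Vt, Rhat_T)
```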
\begin{figure}
\caption{Components of true $\omega$ (red) and measured $\omega_m$ (blue).}
\label{fig:omega}
\end{figure}
\begin{figure}
\caption{Components of ``constant'' vector~$\beta_i$.}
\label{fig:betai}
\end{figure}
\begin{figure}
\caption{Components of true $\alpha$ (red), measured $\alpha_m$ (blue) and estimated~$\hat\alpha$ (orange).}
\label{fig:alpha}
\end{figure}
\begin{figure}
\caption{Components of true $\beta$ (red), measured $\beta_m$ (blue) and estimated~$\hat\beta$ (orange).}
\label{fig:beta}
\end{figure}
\section{Simulations}\label{sec:Simulations} The good behavior of the observer is now illustrated in simulation. The system starts in the initial state $R(0):=\mathrm{I}$, i.e. $\bigl(\phi(0),\theta(0),\psi(0)\bigr):=(0,0,0)$, and then undergoes the angular velocity~$\omega(t)$ displayed in Fig.~\ref{fig:omega}. The constant vectors $\alpha_i$ and $\beta_i$ are respectively set to the nominal values $(0,0,1)^T$ and $(\frac{1}{\sqrt2},0,\frac{1}{\sqrt2})^T$, which mimic the gravity and magnetic vectors. Moreover, $\beta_i$ is subjected to violent disturbances for $t\in[500,700]$, see Fig.~\ref{fig:betai}; of course only the nominal values of $\alpha_i$ and $\beta_i$ are known to the observer.
The observer is fed with the measured signals $\omega_m,\alpha_m$ and $\beta_m$, see Fig.~\ref{fig:omega}-\ref{fig:alpha}-\ref{fig:beta}; the measured velocity is affected by the (unknown) slowly drifting bias~$b$, see Fig.~\ref{fig:omega}-\ref{fig:biases}. All the measurement signals are corrupted by band-limited independent Gaussian white noises with sample time \num{e-3} and rather large noise powers (\num{2e-6} for the components of~$\alpha_m,\beta_m$, and \num{2e-7} for those of~$\omega_m$). The tuning gains are set to $(k_\alpha,k_\beta,l_\alpha,l_\beta)=(10,10,0.15,0.15)$.
The observer is initialized with no error, but suddenly reinitialized to zero at~$t=100$. The convergence of the estimated state $\hat\alpha,\hat\beta$ and $\hat b$ after the reinitialization is, as anticipated, excellent: very fast for $\hat\alpha,\hat\beta$ and slower for $\hat b$, in accordance with the choice of gains, see Fig.~\ref{fig:alpha}-\ref{fig:beta}-\ref{fig:biases}. The convergence is also very good for $t\in[500,700]$ when $\beta_i$ is violently disturbed: the error $\hat\beta-\beta$ exhibits only small spikes, $\hat b$ is only slightly affected, and $\hat\alpha$ is not affected at all at this scale; this illustrates the (desirable) independence between $\hat\alpha$ and $\hat\beta$, which are only slightly coupled through~$\hat b$. Notice also that the disturbances on $\beta_i$ are interpreted by the observer as a variation of~$b$.
We insist that the observer does not need the knowledge of the ``constant'' vectors $\alpha_i$ and $\beta_i$, which is why it can converge to the true state even when $\alpha_i,\beta_i$ vary not too fast (with respect to the gains $k_\alpha,k_\beta$). These vectors are needed only when the rotation matrix $R$ (or Euler angles, or quaternions) must be reconstructed. Fig.~\ref{fig:phi}-\ref{fig:theta}-\ref{fig:psi} show the reconstruction of the Euler angles $\phi,\theta,\psi$ (in radians) from $\hat\alpha,\hat\beta$ using the nominal values of $\alpha_i,\beta_i$. Notice the pitch angle $\theta$ and roll angle~$\phi$ (which are paramount for the control of, for instance, UAVs) are perfectly estimated even during the period where $\beta_i$ is violently disturbed (the fast oscillations of $\hat\theta$ just after $t=100$ are due to the angle determinacy which switches between $0$ and $\pi$, and have nothing to do with the observer); inevitably the estimated yaw angle~$\hat\psi$ is severely affected, since a completely wrong value of $\beta_i$ is used. This scenario illustrates an interesting practical feature of the observer, not enjoyed by e.g. the observer in~\cite{MahonHP2008TAC}: even if the magnetic sensor (i.e., $\beta_m$) is easily disturbed, it is nevertheless worth using to help the accelerometer (i.e., $\alpha_m$) in the estimation of the gyro biases, hence in the reconstruction of the pitch and roll angles, whatever the excitation (or absence thereof) provided by the angular velocity.
\begin{figure}
\caption{Components of true $b$ (red) and estimated $\hat b$ (blue).}
\label{fig:biases}
\end{figure}
\begin{figure}
\caption{True $\phi$ (red) and estimated $\hat\phi$ (blue).}
\label{fig:phi}
\end{figure}
\begin{figure}
\caption{True $\theta$ (red) and estimated $\hat\theta$ (blue).}
\label{fig:theta}
\end{figure}
\begin{figure}
\caption{True $\psi$ (red) and estimated $\hat\psi$ (blue).}
\label{fig:psi}
\end{figure}
\section{Conclusion}\label{sec:conclusions} We have presented a simple nonlinear ``geometry-free'' observer for attitude and gyro bias estimation, with guaranteed uniform global convergence and local exponential convergence. Simulations demonstrate that it performs very well, even with noisy measurements and not-so-constant inertial vectors $\alpha_i$ and~$\beta_i$ and bias~$b$. It can be seen as an interesting alternative to the $\mathrm{SO(3)}$-based observer of~\cite{MahonHP2008TAC} or to the more complicated ``geometry-free'' observer of~\cite{BatisSO2012CEP}.
\ack We thank our colleague Laurent Praly for enlightening discussions on the construction of strict Lyapunov functions.
\end{document}
\begin{document}
\title{A Proof of the Pumping Lemma for Context-Free Languages Through Pushdown
Automata}
\begin{abstract} The pumping lemma for context-free languages is a result about pushdown automata which is strikingly similar to the well-known pumping lemma for regular languages. However, though the lemma for regular languages is simply proved by applying the pigeonhole principle to deterministic automata, the lemma for context-free languages is usually proven through the equivalence with context-free grammars and through the more powerful Ogden's lemma. We present here a proof of the pumping lemma for context-free languages which relies on pushdown automata instead of context-free grammars. \end{abstract}
\section{Setting}
The pumping lemma for regular languages is the following well-known result: \begin{theorem}
Let $L$ be a regular language over an alphabet $\Sigma$. There exists some integer $p \geq 1$ such that, for every $w \in L$ such that $|w| > p$, there exists a decomposition $w = x y z$ such that: \begin{enumerate}
\item $|xy| \leq p$
\item $|y| \geq 1$ \item $\forall n \geq 0, x y^n z \in L$ \end{enumerate} \end{theorem}
The pumping lemma for context-free languages~\cite{BarHillelPerlesShamir61Formal}, also known as the Bar-Hillel lemma, is the following similar result:
\begin{theorem}
Let $L$ be a context-free language over an alphabet $\Sigma$. There exists some integer $p \geq 1$ such that, for every $w \in L$ such that $|w| > p$, there exists a decomposition $w = u v x y z$ such that: \begin{enumerate}
\item $|vxy| \leq p$
\item $|vy| \geq 1$ \item $\forall n \geq 0, u v^n x y^n z \in L$ \end{enumerate} \end{theorem}
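As a quick illustration of how the statement is used (an example of ours, not part of the original note), one can check the pumping property on the standard context-free language $\{a^k b^k : k \geq 0\}$, where a valid decomposition pumps one $a$ against one $b$:

```python
def in_L(w):
    """Membership in the context-free language {a^k b^k : k >= 0}."""
    k = len(w) // 2
    return len(w) % 2 == 0 and w == "a" * k + "b" * k

p = 5                      # an illustrative pumping length
w = "a" * p + "b" * p      # a word of L with |w| > p
# Decomposition w = u v x y z with |vxy| <= p and |vy| >= 1:
u, v, x, y, z = "a" * (p - 1), "a", "", "b", "b" * (p - 1)
pumped = [u + v * n + x + y * n + z for n in range(6)]
```

Every pumped word $u v^n x y^n z = a^{p-1+n} b^{p-1+n}$ stays in the language, as the lemma requires.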
One would expect the classical proofs of these results to be similar. However, this is not the case. The pumping lemma for regular languages~\cite{Hopcroft79} is usually proved through the equivalence between regular languages and finite automata, by picking a deterministic automaton $A$ which recognizes the language $L$; we can then use the fact that the accepting run of any word $w$ longer than the number of states of $A$ must pass through the same state twice (by the pigeonhole principle), yielding the points at which we can decompose $w$. The pumping lemma for context-free languages, however, is usually derived from Ogden's lemma~\cite{Ogden68}, which is itself proved by examining context-free grammars (CFGs) and not pushdown automata (using the equivalence of these two formalisms).
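For concreteness, the pigeonhole argument for the regular case can be written directly as code. This is an illustrative sketch of ours, with the DFA given as a transition dictionary:

```python
def pump_decomposition(delta, q0, w):
    """Run the DFA (transition dict (state, char) -> state) on w and
    return a decomposition w = x y z where the run visits the same
    state just before reading y and just after it; by the pigeonhole
    principle this happens within the first |states| steps whenever
    w is longer than the number of states."""
    seen = {q0: 0}                       # state -> first position seen
    q = q0
    for pos, ch in enumerate(w, start=1):
        q = delta[(q, ch)]
        if q in seen:                    # state repeated: the loop reads y
            i, j = seen[q], pos
            return w[:i], w[i:j], w[j:]
        seen[q] = pos
    return None                          # w shorter than the number of states
```

For the two-state DFA over $\{a,b\}$ accepting words with an even number of $a$'s, the word $aab$ yields $x=\epsilon$, $y=aa$, $z=b$, and $x y^n z$ is accepted for every $n$.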
It seems reasonable to hope that the pumping lemma for context-free languages can be proved directly from the properties of pushdown automata, with no reference to CFGs. In the next section, we propose such a proof. Though the underlying ideas that we introduce in this proof are apparently part of the folklore, we are not aware of any attempt to prove the pumping lemma directly through pushdown automata. The most relevant existing work that we know of is a weaker form of the result~\cite{Kartzow12}.
Analogous techniques to the one used below can be used to obtain a proof of Ogden's lemma. However, it seems that the most natural way to do so is very similar to a combination of the usual pushdown system encoding to CFGs and the usual proof of Ogden's lemma. These further efforts (not included in this note) suggest that the proof below, though it does not mention CFGs on the surface, may not differ very much from a CFG-based argument after all.
\section{Proof}
Let $L$ be a context-free language over an alphabet $\Sigma$. Let $A$ be a pushdown automaton which recognizes $L$, with stack alphabet $\Gamma$. We denote by $|A|$ the number of states of $A$. To simplify the reasoning, we will impose the following condition on $A$ (denoted by (*)): all transitions of $A$ pop the topmost symbol of the stack and either push no symbol on the stack or push on the stack the previous topmost symbol and some other symbol. It is easy to see that any pushdown automaton which pushes arbitrary sequences of symbols on the stack can be rewritten in this fashion by replacing each of its transitions by an initial pop transition followed by a sequence of $\epsilon$-transitions pushing the appropriate symbols on the stack. (However, keep in mind that because of this translation, $|A|$ in what follows does not refer to the number of states of the original automaton recognizing $L$ but to that of its translation by this process.)
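The normalization just described can be made concrete. The following sketch (our illustration; the step representation and state naming are our own) expands one transition that replaces the stack top by an arbitrary string into a chain of steps obeying (*). Note that the first push must branch on whichever symbol is exposed after the initial pop, which is one reason the rewriting increases the number of states.

```python
def normalize(q_in, q_out, top, gamma, Gamma, fresh):
    """Expand the transition 'in state q_in, on stack top `top`, go to
    q_out and replace `top` by the string gamma (bottom to top)' into
    steps obeying (*).  Each step is (state, popped_top, pushed, state),
    where pushed is () or (popped_top, new_symbol).  `fresh` generates
    fresh intermediate state names."""
    if not gamma:
        return [(q_in, top, (), q_out)]          # a pure pop
    q = fresh()
    steps = [(q_in, top, (), q)]                 # pop `top`
    # push gamma[0]: branch on the symbol now exposed below
    q2 = q_out if len(gamma) == 1 else fresh()
    for s in Gamma:
        steps.append((q, s, (s, gamma[0]), q2))
    q = q2
    for i in range(1, len(gamma)):
        nq = q_out if i == len(gamma) - 1 else fresh()
        steps.append((q, gamma[i - 1], (gamma[i - 1], gamma[i]), nq))
        q = nq
    return steps
```

Simulating the emitted chain on a stack whose top is `top` leaves the stack with `top` replaced by `gamma`, with the automaton ending in `q_out`.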
We define $p' = |A|^2 |\Gamma|$ and define the pumping length to be $p = |A|
(|\Gamma|+1)^{p'}$. We will now show that all $w \in L$ such that $|w| > p$ have a decomposition of the form $w = u v x y z$ such that $|vxy| \leq p$, $|vy| \geq 1$ and $\forall n \geq 0, u v^n x y^n z \in L$.
Let $w \in L$ such that $|w| > p$. Let $\pi$ be an accepting path of minimal length for $w$ (represented as a sequence of transitions of $A$); we denote its length by $|\pi|$. We can define, for $0 \leq i < |\pi|$, $s_i$, the size of the stack at position $i$ of the accepting path. For all $N > 0$, we will define an \textbf{$N$-level} over $\pi$ as a set of three indices $i, j, k$ with $0 \leq i < j < k \leq p$ such that the stack grows by $N$ symbols between $i$ and $j$ and shrinks by $N$ symbols between $j$ and $k$. Formally, we require that:
\begin{enumerate}
\item $s_i = s_k, s_j = s_i + N$
\item for all $n$ such that $i \leq n \leq j$, $s_i \leq s_n \leq s_j$
\item for all $n$ such that $j \leq n \leq k$, $s_k \leq s_n \leq s_j$. \end{enumerate}
We define the level $l$ of $\pi$ as the maximal $N$ such that $\pi$ has an
$N$-level. This definition is motivated by the following observation: if the size of the stack over a path $\pi$ becomes larger than its level $l$, then the stack symbols more than $l$ levels deep will never be popped. Formally, we define the \textbf{configurations} of $A$ as the couples of a state of $A$ and a sequence of $l$ stack symbols (where stacks of size less than $l$ are represented by padding them to $l$ with a special blank symbol, which is why we use $|\Gamma| + 1$ when defining $p$). By definition, there are $|A| (|\Gamma| + 1)^l$ such configurations. Essentially, $A$ acts as a finite automaton without stack between the configurations.
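The definitions of $N$-levels and of the level of a path lend themselves to a direct, if inefficient, computation. The following brute-force sketch (our illustration only; it ignores the $k \leq p$ bound for simplicity) searches a stack-size profile for $N$-levels:

```python
def find_level(s, N):
    """Search the stack-size profile s (s[i] = stack size at step i of
    the accepting path) for an N-level: indices i < j < k with
    s[i] == s[k], s[j] == s[i] + N, the stack staying within
    [s[i], s[j]] on [i, j] and within [s[k], s[j]] on [j, k]."""
    n = len(s)
    for i in range(n):
        for j in range(i + 1, n):
            if s[j] != s[i] + N:
                continue
            if any(not (s[i] <= s[m] <= s[j]) for m in range(i, j + 1)):
                continue
            for k in range(j + 1, n):
                if s[k] == s[i] and all(s[k] <= s[m] <= s[j]
                                        for m in range(j, k + 1)):
                    return i, j, k
    return None

def level(s):
    """The level of the path: the largest N admitting an N-level."""
    N = 0
    while find_level(s, N + 1) is not None:
        N += 1
    return N
```

For instance, the profile $[0,1,2,1,0]$ has level $2$, attained by the $2$-level $(0,2,4)$.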
We can now distinguish two cases: either the level is low and the number of configurations is small, or the level is high. Formally:
\begin{enumerate}
\item $l < p'$ and, by the pigeonhole principle, the same configuration is
encountered twice in the first $p+1$ steps of $\pi$,
\item $l \geq p'$ and, by the pigeonhole principle, we will prove that a
certain notion of \emph{full state} is repeated for two different stack
sizes in any $l$-level of $w$. \end{enumerate}
\paragraph{Case 1.} $l < p'$. In this case, the number of configurations is less than $p$. Hence, in the first $p+1$ steps of $\pi$, the same configuration is encountered twice at two different positions, say $i < j$. Denote by $\widehat{i}$ (resp. $\widehat{j}$) the position of the last letter of $w$ read at step $i$ (resp. $j$) of $\pi$. We have $\widehat{i} \leq \widehat{j}$. Hence, we can factor $w = u v x y z$ with $y = z = \epsilon$, $u = w_{0 \cdots \widehat{i}}$, $v = w_{\widehat{i} \cdots \widehat{j}}$, $x = w_{\widehat{j}
\cdots |w|}$. (By $w_{x \cdots y}$ we denote the letters of $w$ from $x$
inclusive to $y$ exclusive.) By construction, $|vxy| \leq p$.
We also have to show that $\forall n \geq 0, u v^n x y^n z = u v^n x \in L$, but this follows from our observation above: stack symbols deeper than $l$ are never popped, so there is no way to distinguish configurations which are equal according to our definition, and an accepting path for $u v^n x$ is built from that of $w$ by repeating the steps between $i$ and $j$, $n$ times.
Finally, we also have $|v| > 0$, because if $v = \epsilon$, then, because we have the same configuration at steps $i$ and $j$ in $\pi$, $\pi' = \pi_{0 \cdots i} \pi_{j \cdots |\pi|}$ would be an accepting path for $w$, contradicting the minimality of $\pi$.
\begin{figure}
\caption{Illustration of the construction for Case 2. To simplify the drawing,
the distinction between path positions and word positions is omitted.}
\label{fig}
\end{figure}
\paragraph{Case 2.} $l \geq p'$. Let $i, j, k$ be a $p'$-level. To any stack size $h$, $s_i \leq h \leq s_j$, we associate the \textbf{last push}
$\lp(h) = \max(\{y \leq j | s_y = h\})$ and the \textbf{first pop}
$\fp(h) = \min(\{y \geq j | s_y = h\})$. By definition, $i \leq \lp(h) \leq j$ and $j \leq \fp(h) \leq k$. We say that the \textbf{full state} of a stack size $h$ is the triple formed by:
\begin{enumerate}
\item the automaton state at position $\lp(h)$
\item the topmost stack symbol at position $\lp(h)$ (which, by construction,
is also the topmost stack symbol at position $\fp(h)$)
\item the automaton state at position $\fp(h)$ \end{enumerate}
(Observe that there is a link between this definition and what is known as ``Ginsburg triples'' when encoding pushdown systems in CFGs.)
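The quantities $\lp(h)$ and $\fp(h)$, as well as the pigeonhole step that follows, are easy to read off from the stack-size profile. A small sketch (our illustration only, reusing the profile notation $s$ from above):

```python
def last_push_first_pop(s, j, k, h):
    """lp(h) = max{y <= j : s[y] == h} and
    fp(h) = min{j <= y <= k : s[y] == h},
    for a stack size h attained on the level."""
    lp = max(y for y in range(j + 1) if s[y] == h)
    fp = min(y for y in range(j, k + 1) if s[y] == h)
    return lp, fp

def repeated_full_state(full_state_of, sizes):
    """Pigeonhole: among the given stack sizes, return g < h whose full
    states coincide (full_state_of is assumed to map a stack size to
    its full-state triple)."""
    seen = {}
    for h in sizes:
        fs = full_state_of(h)
        if fs in seen:
            return seen[fs], h
        seen[fs] = h
    return None
```

For the profile $[0,1,2,3,2,1,0]$ with $j=3$, $k=6$, stack size $1$ is last pushed at position $1$ and first popped at position $5$.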
There are $p'$ possible full states, and $p' + 1$ stack sizes between $s_i$ and $s_j$, so, by the pigeonhole principle, there exist two stack sizes $g, h$ with $s_i \leq g < h \leq s_j$ such that the full states at $g$ and $h$ are the same. As in Case 1, we denote by $\lpp(g)$, $\lpp(h)$, $\fpp(h)$ and $\fpp(g)$ the positions of the last letters of $w$ read at the corresponding positions in $\pi$. We factor $w = u v x y z$ where $u = w_{0 \cdots \lpp(g)}$, $v = w_{\lpp(g) \cdots \lpp(h)}$, $x = w_{\lpp(h) \cdots \fpp(h)}$,
$y = w_{\fpp(h) \cdots \fpp(g)}$, and $z = w_{\fpp(g) \cdots |w|}$.
This factorization ensures that $|vxy| \leq p$ (because $k \leq p$ by our definition of levels).
We also have to show that $\forall n \geq 0, u v^n x y^n z \in L$. To do so, observe that each time that we repeat $v$, we start from the same state and the same stack top and we do not pop below our current position in the stack (otherwise we would have to push again at the current position, violating the maximality of $\lp(g)$), so we can follow the same path in $A$ and push the same symbol sequence on the stack. By the maximality of $\lp(h)$ and the minimality of $\fp(h)$, while reading $x$, we do not pop below our current position in the stack, so the path followed in the automaton is the same regardless of the number of times we repeated $v$. Now, if we repeat $y$ as many times as we repeat $v$, since we start from the same state, since we have pushed the same symbol sequence on the stack with our repeats of $v$, and since we do not pop more than what $v$ has pushed by minimality of $\fp(g)$, we can follow the same path in $A$ and pop the same symbol sequence from the stack. Hence, an accepting path for $u v^n x y^n z$ can be constructed from the accepting path for $w$.
Finally, we also have $|vy| \geq 1$ because, as in Case 1, if $v = \epsilon$ and $y = \epsilon$, we could build a shorter accepting path for $w$ by removing $\pi_{\lp(g)\cdots\lp(h)}$ and $\pi_{\fp(h)\cdots\fp(g)}$, contradicting the minimality of $\pi$.\\
Hence, we have an adequate factorization in both cases, and the result is proved.
\end{document}
\begin{document}
\begin{abstract}
In analogy with the geometric situation, we study real calculi over projective modules and show that they can be realized as projections of free real calculi. Moreover, we consider real calculi over matrix algebras and discuss several aspects of the classification problem for real calculi in this case, leading to the concept of quasi-equivalence of matrix representations. We also use matrix algebras to give concrete examples of real calculi where the module is projective, and show that the existence of a Levi-Civita connection depends on the eigenvectors of specific anti-hermitian matrices in this case. \end{abstract}
\title{Projective Real Calculi over Matrix Algebras}
\section{Introduction}\label{sec:intro}
\noindent In the rapidly developing and conceptually rich field of study that is noncommutative geometry, it is important to discuss the subject from several points of view in order to understand the strengths and weaknesses of different perspectives. For instance, while spectral triples enable the use of powerful analytic tools to study the properties of noncommutative spaces, it can be somewhat difficult to see whether the metric, implicitly given by the Dirac operator, can be realized as a bilinear form over a module. Therefore it can be helpful to take an alternative approach and study the subject from a more direct point of view, where both the metric and the module on which it is realized as a bilinear form are explicitly given, much in the spirit of classical Riemannian geometry.
In \cite{aw:cgb.sphere, aw:curvature.three.sphere}, the framework of pseudo-Riemannian calculi was introduced in order to discuss the existence (and uniqueness) of a Levi-Civita connection as well as curvature in noncommutative spaces. Although the existence of a Levi-Civita connection cannot be guaranteed in the context of real calculi, it was shown that it is unique whenever it exists. Note that the framework of real calculi is not the only approach to metrics and Levi-Civita connections in noncommutative geometry, and there are several others that are different but share some conceptual similarities (see e.g. \cite{lm:lie.extensions,dvm:connections.central.bimodules,fgr:supersymmetric.noncommutative, bm:starCompatibleConnections,r:leviCivita,mw:quantum.koszul,a:2020cartan.and.LC.connection}).
In \cite{aw:curvature.three.sphere} the Riemannian curvature of the noncommutative 3-sphere was studied, and in \cite{aw:cgb.sphere} a Gauss-Chern-Bonnet theorem was formulated for the noncommutative 4-sphere. As an example of a noncompact noncommutative manifold with nontrivial features, the noncommutative cylinder was featured in \cite{al:noncommutative.cylinder} and a pseudo-Riemannian calculus was constructed in order to explicitly calculate the Levi-Civita connection and the corresponding curvature. In \cite{atn:nc.minimal.embeddings} algebraic aspects of real calculi were discussed with the introduction of real calculus homomorphisms, and these were interpreted geometrically to develop a noncommutative theory of submanifolds. In general, it is interesting to see examples of real calculi where the module used over the algebra to form a real calculus is projective, but in all of the examples given in the above-mentioned articles the modules are even more well-behaved in the sense that they are free. In this paper we shall consider real calculi over projective modules, and in analogy with projective modules being projections of free modules we show that projective real calculi can be realized as ``projections'' of free real calculi.
Although the study of projective real calculi is an important part of this text, the main focus will lie on real calculi over $\mathcal{A}=\Mat{N}$ and how isomorphisms of real calculi are characterized in this case; in particular, we give a complete classification of real calculi over $\mathcal{A}=\Mat{N}$ in the case where the Lie algebra $\mathfrak{g}$ acting on $\mathcal{A}$ as a set of derivations is 1-dimensional and the module is $\mathbb{C}^N$. Real calculi over this algebra are also used to give concrete examples of real calculi where the associated $\mathcal{A}$-module $M=(\mathbb{C}^{N})^n$ is projective (but not free whenever $n\neq N$). From a more general perspective, the study of finite-dimensional algebras is not only done for the purpose of highlighting interesting aspects of noncommutative geometry (for instance, a relatively rich differential geometric structure using $\Omega_D(\Mat{N})$ as differential forms can be developed on matrix algebras despite the fact that derivations are inner, as described in \cite{dvkm:nc.diff.geom.matrix.algebras}), but these algebras have an important role in their own right from a theoretical point of view (see for instance the spectral Standard Model described in \cite{cc:why.nc.standard.model}, where the finite-dimensional algebra $\mathcal{A}_F=\mathbb{C}\oplus\mathbb{H} \oplus\Mat{3}$ encodes the symmetries of the standard model). Hence, it is important to study how real calculi over matrix algebras behave, in order to understand how the framework of real calculi fits into the larger body of work in noncommutative geometry as a whole.
The paper is organized as follows. First, in Section \ref{sec:prelims} we recall basic definitions and results regarding real calculi and their morphisms. Then, in Section \ref{sec:free.and.projective} we introduce the concept of projective real calculi, and discuss how these structures can be understood as projections of free real calculi. We describe affine connections on the projective module and how they are derived from a free module before we turn our attention to metrics and projective real calculi. In Section \ref{sec:matrix.algebra} we treat isomorphisms of real calculi over $\mathcal{A}=\Mat{N}$, both when the module $M$ is free and when $M=(\mathbb{C}^N)^n$. Finally, in Sections \ref{subsec:ex.dim.1} and \ref{subsec:ex.dim.n} we show that the existence of a Levi-Civita connection when $\mathfrak{g}$ is abelian and $M=(\mathbb{C}^N)^n$ depends on the map $\varphi:\mathfrak{g}\rightarrow (\mathbb{C}^N)^n$ and the eigenvectors of specific traceless and anti-hermitian matrices representing hermitian derivations. \section{Preliminaries}\label{sec:prelims}
We begin by recalling the basic definitions and results regarding real calculi (see \cite{aw:curvature.three.sphere}) and their morphisms (see \cite{atn:nc.minimal.embeddings}) which make up the framework used throughout this text. The framework takes a derivation-based approach where a module over the algebra representing the (noncommutative) space in question acts as a generalization of the concept of vector fields. However, since the set $\operatorname{Der}(\mathcal{A})$ generally lacks a module structure when $\mathcal{A}$ is noncommutative, the correspondence between vector fields and derivations in the framework is given as an explicit linear map which is not assumed to be an isomorphism. We make the notion of a real calculus precise below.
\begin{definition}[Real calculus]
Let $\mathcal{A}$ be a unital $^*$-algebra, and let $\mathfrak{g}_D$ denote a real Lie algebra together with a faithful representation $D:\mathfrak{g}\rightarrow\text{Der}(\mathcal{A})$ such that $D(\partial)$ is a hermitian derivation for all $\partial\in\mathfrak{g}$. Moreover, let $M$ be a right $\mathcal{A}$-module and let $\varphi:\mathfrak{g}\rightarrow M$ be an $\mathbb{R}$-linear map such that $M$ is generated by $\varphi(\mathfrak{g})$. Then
$C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ is called a \textit{real calculus} over $\mathcal{A}$. \end{definition} \noindent In the following we will write $\partial(a)$ instead of the more cumbersome $D(\partial)(a)$ for an element $\partial\in\mathfrak{g}$ and $a\in\mathcal{A}$ as long as there is no risk of confusion. Moreover, if $\mathfrak{g}\subseteq \operatorname{Der}(\mathcal{A})$ and the representation $D$ is left unspecified, this is to be interpreted as $D$ being the identity map. \begin{definition}
Let $\mathcal{A}$ be a $^*$-algebra and let $M$ be a right $\mathcal{A}$-module. A \textit{hermitian form} on $M$ is a map $h:M\times M\rightarrow\mathcal{A}$ with the following properties:
\begin{itemize}
\item[$h1$.] $h(m_1,m_2+m_3)=h(m_1,m_2)+h(m_1,m_3)$
\item[$h2$.] $h(m_1,m_2a)=h(m_1,m_2)a$
\item[$h3$.] $h(m_1,m_2)=h(m_2,m_1)^*$
\end{itemize}
for every $m_1,m_2,m_3\in M$ and $a\in\mathcal{A}$. Moreover, if $h(m_1,m_2)=0$ for every $m_2\in M$ implies that $m_1=0$ then $h$ is said to be \textit{nondegenerate}, and in this case we say that $h$ is a \textit{metric} on $M$. The pair $(M,h)$ is called a \textit{(right) hermitian} $\mathcal{A}$-\textit{module}, and if $h$ is a metric on $M$ we say that $(M,h)$ is a \textit{(right) metric} $\mathcal{A}$-\textit{module}.
Finally, we say that $h$ is \emph{invertible} if the map $\hat{h}:M\rightarrow M^*$, defined by $\hat{h}(m)(n)=h(m,n)$, is a bijection. \end{definition}
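As a concrete instance of these axioms (an illustration of ours, not an example from the text), one may take $\mathcal{A}=\Mat{N}$ acting on the right on row vectors $M=\mathbb{C}^N$ and the candidate form $h(m_1,m_2)=m_1^\dagger m_2$, an $N\times N$ matrix. Properties $h1$--$h3$ can then be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3

def h(m1, m2):
    """Candidate Mat_N-valued hermitian form on row vectors in C^N:
    h(m1, m2) = m1^dagger m2, an N x N matrix."""
    return np.outer(m1.conj(), m2)

m1, m2, m3 = rng.standard_normal((3, N)) + 1j * rng.standard_normal((3, N))
a = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

ok_h1 = np.allclose(h(m1, m2 + m3), h(m1, m2) + h(m1, m3))   # additivity
ok_h2 = np.allclose(h(m1, m2 @ a), h(m1, m2) @ a)            # right A-linearity
ok_h3 = np.allclose(h(m1, m2), h(m2, m1).conj().T)           # hermitian property
```

This form is moreover nondegenerate: $m_1^\dagger m_2 = 0$ for all $m_2$ forces $m_1 = 0$, so $(M,h)$ is a metric $\mathcal{A}$-module in the sense above.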
\begin{definition}[Real metric calculus]
Let $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ be a real calculus over $\mathcal{A}$ and let $(M,h)$ be a (right) metric $\mathcal{A}$-module. If
\begin{equation*}
h(\varphi(\partial_1),\varphi(\partial_2))^*=h(\varphi(\partial_1),\varphi(\partial_2))
\end{equation*}
for every $\partial_1,\partial_2\in\mathfrak{g}$ then the pair $(C_{\mathcal{A}},h)$ is called a \textit{real metric calculus}. \end{definition} \noindent Since free modules are the simplest kind of modules, real calculi $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ where the module $M$ is free are interesting to study. However, the map $\varphi$ can be degenerate, leading to complications in the analysis of real calculi over a free module. Therefore we shall state additional regularity conditions that $\varphi$ needs to satisfy before the real calculus $C_{\mathcal{A}}$ can be considered free.
\begin{definition}[Free real (metric) calculus]
Let $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ be a real calculus and suppose that $M$ is a free module. If there exists a basis $\{\partial_1,...,\partial_n\}$ of $\mathfrak{g}$ such that $\{\varphi(\partial_1),...,\varphi(\partial_n)\}$ is a basis of $M$, then $C_{\mathcal{A}}$ is a \textit{free real calculus}. Moreover, if $(C_{\mathcal{A}},h)$ is a real metric calculus and $h$ is an invertible metric, then $(C_{\mathcal{A}},h)$ is a \textit{free real metric calculus}. \end{definition}
\begin{definition}
Let $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ be a real calculus over $\mathcal{A}$. An \textit{affine connection} on $(M,\mathfrak{g})$ is a map $\nabla:\mathfrak{g}\times M\rightarrow M$ satisfying
\begin{enumerate}
\item $\nabla_{\partial}(m+n)=\nabla_{\partial}m+\nabla_{\partial}n$
\item $\nabla_{\lambda\partial+\partial'}m=\lambda\nabla_{\partial}m+\nabla_{\partial'}m$
\item $\nabla_{\partial}(ma)=(\nabla_{\partial}m)a+m\partial(a)$
\end{enumerate}
for $m,n\in M$, $\partial,\partial'\in\mathfrak{g}$, $a\in\mathcal{A}$ and $\lambda\in\mathbb{R}$. \end{definition}
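For intuition (again an illustration of ours): on the free rank-one module $M=\mathcal{A}$ over $\mathcal{A}=\Mat{N}$, the derivation itself, $\nabla_{\partial}m=\partial(m)$ with $\partial=[d,\,\cdot\,]$ for an anti-hermitian matrix $d$, defines an affine connection; conditions 1 and 2 are immediate, and the Leibniz rule (condition 3) can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
d = X - X.conj().T                       # anti-hermitian: d^dagger = -d

def der(a):
    """Inner derivation a -> [d, a]; hermitian since d is anti-hermitian."""
    return d @ a - a @ d

nabla = der                              # connection on M = A (free, rank 1)

m = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
a = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
# Leibniz rule (condition 3): nabla(m a) = nabla(m) a + m der(a)
leibniz_ok = np.allclose(nabla(m @ a), nabla(m) @ a + m @ der(a))
```

One can also check numerically that $\partial(a^*)=\partial(a)^*$, i.e. that the derivation is hermitian, which is the role played by anti-hermitian matrices later in the text.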
\begin{definition}[Real connection calculus]
Let $(C_{\mathcal{A}},h)$ be a real metric calculus and let $\nabla$ be an affine connection on $(M,\mathfrak{g})$. Then $(C_{\mathcal{A}},h,\nabla)$ is called a \textit{real connection calculus} if
\begin{equation*}
h(\nabla_{\partial}\varphi(\partial_1),\varphi(\partial_2))=h(\nabla_{\partial}\varphi(\partial_1),\varphi(\partial_2))^*
\end{equation*}
for every $\partial,\partial_1,\partial_2\in\mathfrak{g}$. \end{definition}
\begin{definition}[Pseudo-Riemannian calculus]
Let $(C_{\mathcal{A}},h,\nabla)$ be a real connection calculus. We say that $(C_{\mathcal{A}},h,\nabla)$ is \textit{metric} if
\begin{equation*}
\partial(h(m,n))=h(\nabla_{\partial}m,n)+h(m,\nabla_{\partial}n)
\end{equation*}
for every $\partial\in\mathfrak{g}$ and $m,n\in M$, and \textit{torsion-free} if
\begin{equation*}
\nabla_{\partial_1}\varphi(\partial_2)-\nabla_{\partial_2}\varphi(\partial_1)-\varphi([\partial_1,\partial_2])=0
\end{equation*}
for every $\partial_1,\partial_2\in\mathfrak{g}$. A metric and torsion-free real connection calculus is called a \textit{pseudo-Riemannian calculus}. \end{definition}
A connection fulfilling the conditions for a pseudo-Riemannian calculus is called a \textit{Levi-Civita connection}. In the quite general setup of real metric calculi the existence of a Levi-Civita connection cannot be guaranteed. However, it is unique whenever it exists, and if we additionally assume that the real metric calculus is free, existence can be guaranteed as well.
\begin{theorem}[Levi-Civita connection]\label{thm:existence.and.uniqueness.of.LC} Let $(C_{\mathcal{A}},h)$ be a real metric calculus. Then there exists at most one affine connection $\nabla$ such that $(C_{\mathcal{A}},h,\nabla)$ is a pseudo-Riemannian calculus (\cite{aw:curvature.three.sphere}). Moreover, if $(C_{\mathcal{A}},h)$ is a free real metric calculus, then there exists a unique connection $\nabla$ such that $(C_{\mathcal{A}},h,\nabla)$ is pseudo-Riemannian (\cite{atn:nc.minimal.embeddings}). \end{theorem}
Given the existence of a Levi-Civita connection, a noncommutative analogue of the classical Koszul formula can be used to find it in the case of a free real metric calculus. In the context of real connection calculi we state it as follows.
\begin{proposition}[\cite{aw:curvature.three.sphere}] Let $(C_{\mathcal{A}},h,\nabla)$ be a pseudo-Riemannian calculus and assume that $\partial_1,\partial_2,\partial_3\in\mathfrak{g}$. Then it holds that \begin{multline}\label{eqn:Koszul}
2h(\nabla_{\partial_1} e_2,e_3)=\partial_1h(e_2,e_3)+\partial_2h(e_1,e_3)-\partial_3h(e_1,e_2)\\
-h(e_1,\varphi([\partial_2,\partial_3]))+h(e_2,\varphi([\partial_3,\partial_1]))+h(e_3,\varphi([\partial_1,\partial_2])), \end{multline} where $e_i=\varphi(\partial_i)$ for $i=1,2,3$. Conversely, if $(C_{\mathcal{A}},h)$ is a real metric calculus and $\nabla$ is a connection satisfying Koszul's formula (\ref{eqn:Koszul}) for every $\partial_1,\partial_2,\partial_3\in\mathfrak{g}$, then $(C_{\mathcal{A}},h,\nabla)$ is a pseudo-Riemannian calculus. \end{proposition}
\begin{example} We shall briefly describe the construction of real calculi over the noncommutative torus $T^2_{\theta}$ and the noncommutative 3-sphere $S^3_{\theta}$. Since these are well-known examples of noncommutative manifolds it is interesting to see how they fit in the framework of real calculi. Both $T^2_{\theta}$ and $S^3_{\theta}$ are closely related to their respective classical counterparts $T^2$ and $S^3$ which are both parallelizable manifolds; this is reflected in the real calculi $C_{T^2_{\theta}}$ and $C_{S^3_{\theta}}$ constructed below being free. For more details about these constructions, see \cite{aw:curvature.three.sphere}.
The noncommutative torus $T^2_{\theta}$ is the $^*$-algebra with unitary generators $U,V$ satisfying the relation $VU=qUV$, where $q=e^{2\pi i\theta}$. We let $\mathfrak{g}'$ be the (real) Lie algebra generated by the hermitian derivations $\delta_1,\delta_2$, defined by
\begin{align*}
&\delta_1(U)=iU&\delta_2(U)&=0\\
&\delta_1(V)=0&\delta_2(V)&=iV,
\end{align*} which implies that $[\delta_1,\delta_2]=0$. In analogy with the classical torus $T^2$ being parallelizable, we let $M'$ be a free module of rank 2, with basis elements $e'_1,e'_2\in (T^2_{\theta})^4$. With the map $\varphi':\delta_i\mapsto e'_i$ ($i=1,2$), we have the free real calculus $C_{T^2_{\theta}}=(T^2_{\theta},\mathfrak{g}'_{D'},M',\varphi')$ over $T^2_{\theta}$ representing the noncommutative torus.
The noncommutative 3-sphere is the unital $^*$-algebra with generators $Z,Z^*,W,W^*$ subject to the relations
\begin{align*}
&WZ=qZW &W^*Z&=\bar{q}ZW^* &WZ^*&=\bar{q}Z^*W\\
&W^*Z^*=qZ^*W^* &Z^*Z&=ZZ^* &W^*W&=WW^* \\
&WW^*=\mathbbm{1}-ZZ^*.&&&&
\end{align*} We let $\mathfrak{g}$ be the Lie algebra of dimension 3 generated by the hermitian derivations $\partial_1,\partial_2,\partial_3$:
\begin{align*}
&\partial_1(Z)=iZ, &\partial_2(Z)&=0, &\partial_3(Z)&=ZWW^*\\
&\partial_1(W)=0 &\partial_2(W)&=iW &\partial_3(W)&=-WZZ^*,
\end{align*} which implies that $[\partial_i,\partial_j]=0$ for every $i,j\in\{1,2,3\}$. In analogy with the 3-sphere $S^3$ being parallelizable, we let $M$ be a free module of rank 3 with generators $e_1,e_2,e_3\in (S^3_{\theta})^4$ constituting a basis of $M$. With the map $\varphi:\partial_i\mapsto e_i$ ($i=1,2,3$), we have the free real calculus $C_{S^3_{\theta}}=(S^3_{\theta},\mathfrak{g}_{D},M,\varphi)$ over $S^3_{\theta}$ representing the noncommutative 3-sphere.
Given invertible metrics $h':M'\times M'\rightarrow T^2_{\theta}$ and $h:M\times M\rightarrow S^3_{\theta}$ such that $(C_{T^2_{\theta}},h')$ and $(C_{S^3_{\theta}},h)$ are real metric calculi, Theorem~\ref{thm:existence.and.uniqueness.of.LC} guarantees that there are unique affine connections $\nabla$ and $\nabla'$ such that $(C_{S^3_{\theta}},h,\nabla)$ and $(C_{T^2_{\theta}},h',\nabla')$ are pseudo-Riemannian. Since $C_{S^3_{\theta}}$ and $C_{T^2_{\theta}}$ are free, $\nabla$ and $\nabla'$ can be stated in terms of their respective Christoffel symbols, which can be derived from Koszul's formula. \end{example}
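As a quick sanity check of the torus relation $VU=qUV$, the following sketch (an illustration of my own, not part of the construction above) uses the standard fact that for rational $\theta=1/N$ the relation is realized by the $N\times N$ clock and shift matrices:

```python
import numpy as np

# Illustration (not from the text): for rational theta = 1/N the relation
# V U = q U V, with q = e^{2 pi i theta}, is realized by the N x N clock
# and shift matrices.
N = 5
q = np.exp(2j * np.pi / N)

U = np.diag(q ** np.arange(N))     # clock matrix: U e_k = q^k e_k
V = np.roll(np.eye(N), 1, axis=1)  # shift matrix: V[i, (i+1) mod N] = 1

assert np.allclose(V @ U, q * U @ V)           # V U = q U V
assert np.allclose(U @ U.conj().T, np.eye(N))  # U is unitary
assert np.allclose(V @ V.conj().T, np.eye(N))  # V is unitary
```

For irrational $\theta$ no such finite-dimensional representation exists, so this only probes the defining relation, not the full algebra.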
Given an algebra $\mathcal{A}$, a real Lie algebra $\mathfrak{g}$ (isomorphic to a Lie subalgebra of $\operatorname{Der}(\mathcal{A})$ consisting of hermitian derivations) and a (finitely generated) right $\mathcal{A}$-module $M$, it is interesting to ask how the choice of representation $D:\mathfrak{g}\rightarrow\operatorname{Der}(\mathcal{A})$ and linear map $\varphi:\mathfrak{g}\rightarrow M$ affects the overall structure of the resulting real calculus $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$. To answer such questions, a concept of real calculus homomorphism is needed in order to determine when the structures of two real calculi are essentially the same, i.e., when they are isomorphic.
\begin{definition}\label{def:rc.morphism}
Let $C_{\mathcal{A}}$ and $C_{\mathcal{A}'}$ be real calculi and let $\phi:\mathcal{A}\rightarrow\mathcal{A}'$ be a $^*$-algebra homomorphism. A Lie algebra homomorphism $\psi:\mathfrak{g}'\rightarrow\mathfrak{g}$ is said to be compatible with $\phi$ if it satisfies the condition
\begin{equation*}
\phi(\psi(\delta)(a))=\delta(\phi(a))
\end{equation*}
for every $\delta\in\mathfrak{g}'$ and every $a\in\mathcal{A}$.
If $\psi$ is compatible with $\phi$, then one sets $\Psi=\varphi\circ\psi$ and lets $M_{\Psi}$ denote the submodule of $M$ generated by $\Psi(\mathfrak{g}')$.
Furthermore, if there exists a map $\hat{\psi}:M_{\Psi}\rightarrow M'$ that satisfies the conditions
\begin{enumerate}
\item $\hat{\psi}(m_1+m_2)=\hat{\psi}(m_1)+\hat{\psi}(m_2)$ for all $m_1,m_2\in M_{\Psi}$,\label{def:rc.morphism.psih.cond1}
\item $\hat{\psi}(ma)=\hat{\psi}(m)\phi(a)$ for all $m\in M_{\Psi}, a\in\mathcal{A}$,\label{def:rc.morphism.psih.cond2}
\item $\hat{\psi}(\Psi(\partial))=\varphi'(\partial)$ for all $\partial\in\mathfrak{g}'$,\label{def:rc.morphism.psih.cond3}
\end{enumerate}
then $\hat{\psi}$ is said to be compatible with $\phi$ and $\psi$, and we say that $(\phi,\psi,\hat{\psi})$ is a real calculus homomorphism from $C_{\mathcal{A}}$ to $C_{\mathcal{A}'}$; if $\phi$ and $\psi$ are isomorphisms and $\hat{\psi}$ is a bijection, then $(\phi,\psi,\hat{\psi}):C_{\mathcal{A}}\rightarrow C_{\mathcal{A}'}$ is called a real calculus isomorphism. \end{definition}
Real calculus homomorphisms were originally developed in \cite{atn:nc.minimal.embeddings}, giving an important tool in the study of real calculi as algebraic structures. However, in the aforementioned article the main emphasis was placed on a geometric interpretation leading to a noncommutative theory of submanifolds. In what follows, the main focus is instead placed on algebraic aspects of real calculi, and we begin to explore the nuances involved in determining when two real calculi are essentially the same structure.
\section{Free and projective real metric calculi}\label{sec:free.and.projective}
We shall consider real calculi $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ where $M$ is projective, and we call such real calculi \emph{projective}. Given the importance of projective modules in algebra and geometry, it is interesting to know what general statements can be made about real calculi of this kind and in what way one should view them. Much like how every projective module can be realized as the projection of a free module, we show that a viable way to realize these structures is as ``projections'' of free real calculi. However, when we introduce metrics in an effort to construct a projective real metric calculus as a projection of a free real metric calculus the situation becomes more complicated, and additional conditions on the metric and projection in question must be satisfied.
In general, it is easy to generate projective real calculi from a given free real calculus. \begin{proposition}\label{prop:derived.projective.calculus}
Let $\tilde{C}_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,\mathcal{A}^n,\tilde{\varphi})$ be a free real calculus and let $P:\mathcal{A}^n\rightarrow \mathcal{A}^n$ be a projection. Then $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,P(\mathcal{A}^n),P\circ\tilde{\varphi})$ is a projective real calculus. \end{proposition} \begin{proof}
All that needs to be checked is that the map $P\circ\tilde{\varphi}$ generates $P(\mathcal{A}^n)$ as a module over $\mathcal{A}$. But this is trivial, since $\tilde{\varphi}(\mathfrak{g})$ generates $\mathcal{A}^n$ and $P(m\cdot a)=P(m)\cdot a$, i.e., $P$ is a homomorphism. \end{proof} In light of the above proposition, it is natural to ask whether every projective real calculus can be derived from a free real calculus in this way. This is indeed the case, as the next proposition shows. \begin{proposition}\label{prop:projective.rc.realized.as.projection}
Let $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ be a projective real calculus where $\dim \mathfrak{g}=n$. Then there exists an $\mathbb{R}$-linear map $\tilde{\varphi}:\mathfrak{g}\rightarrow \mathcal{A}^n$ and a projection $P:\mathcal{A}^n\rightarrow\mathcal{A}^n$ such that
\begin{equation*}
(\mathcal{A},\mathfrak{g}_D,P(\mathcal{A}^n),P\circ\tilde{\varphi})\simeq C_{\mathcal{A}}.
\end{equation*} \end{proposition} \begin{proof} Let $\partial_1,...,\partial_n$ be a basis of $\mathfrak{g}$ and let $e_k:=\varphi(\partial_k)$ for $k=1,...,n$. Furthermore, let $\hat{e}_1,...,\hat{e}_n$ be a basis of $\mathcal{A}^n$ and let $\tilde{\varphi}$ be the $\mathbb{R}$-linear map defined by $\tilde{\varphi}(\partial_k):=\hat{e}_k$ for $k=1,...,n$. Define the module homomorphism $\rho:\mathcal{A}^n\rightarrow M$ by the formula $\rho(\hat{e}_kA^k)=e_kA^k$. Since $\varphi(\mathfrak{g})$ generates $M$, it follows that $\rho$ is an epimorphism. Since $M$ is projective, it follows that $\rho$ splits, implying that there is a monomorphism $\nu:M\rightarrow \mathcal{A}^n$ such that $\rho\circ\nu=\text{id}_M$. Let $P:=\nu\circ\rho$. Then it is clear that $P$ is a projection, since \begin{equation*}
P^2=(\nu\circ\rho)\circ(\nu\circ\rho)=\nu\circ(\rho\circ\nu)\circ\rho=\nu\circ\text{id}_M\circ\rho=\nu\circ\rho=P. \end{equation*}
Let $\hat{\psi}=P\circ \nu$. Then $\hat{\psi}:M\rightarrow P(\mathcal{A}^n)$ is an isomorphism, with inverse $\hat{\psi}^{-1}=\rho|_{P(\mathcal{A}^n)}$.
We now verify that $(\id{\mathcal{A}},\id{\mathfrak{g}},\hat{\psi}):C_{\mathcal{A}}\rightarrow (\mathcal{A},\mathfrak{g}_D,P(\mathcal{A}^n),P\circ\tilde{\varphi})$ is indeed a real calculus isomorphism. That $\text{id}_{\mathcal{A}}$ and $\text{id}_{\mathfrak{g}}$ are compatible is trivial to check. Also, since $\hat{\psi}$ is an isomorphism and since the algebra homomorphism $\phi=\text{id}_{\mathcal{A}}$ is the identity map, $\hat{\psi}$ trivially satisfies conditions~(\ref{def:rc.morphism.psih.cond1}) and (\ref{def:rc.morphism.psih.cond2}) from Definition~\ref{def:rc.morphism}.
Finally, we have that \begin{equation*}
\hat{\psi}(\varphi(\text{id}_{\mathfrak{g}}(\partial_k)))=\hat{\psi}(e_k)=\hat{\psi}(\rho(\hat{e}_k))=P\circ(\nu\circ\rho)(\hat{e}_k)=P^2(\hat{e}_k)=P\circ\tilde{\varphi}(\partial_k) \end{equation*} for $k=1,...,n$, verifying that condition~(\ref{def:rc.morphism.psih.cond3}) for $\hat{\psi}$ is satisfied as well. We conclude that $(\id{\mathcal{A}},\id{\mathfrak{g}},\hat{\psi}):C_{\mathcal{A}}\rightarrow (\mathcal{A},\mathfrak{g}_D,P(\mathcal{A}^n),P\circ\tilde{\varphi})$ is a real calculus isomorphism. \end{proof} \noindent Thus, the above result says that any projective real calculus can be realized as a projection of a free real calculus $\tilde{C}_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,\mathcal{A}^n,\tilde{\varphi})$ in the sense that it is isomorphic to $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,P(\mathcal{A}^n),P\circ\tilde{\varphi})$, where $P$ is a projection.
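To make the splitting $P=\nu\circ\rho$ concrete, here is a small numerical sketch (a toy example of my own, not taken from the text): over $\mathcal{A}=\operatorname{Mat}_2(\mathbb{C})$ the row-vector module $\mathbb{C}^2$ is projective but not free, and the maps $\rho$, $\nu$, $P$ below play the roles from the proof.

```python
import numpy as np

# Toy instance (my own) of the splitting in the proof: A = Mat_2(C),
# M = C^2 viewed as the right A-module of row vectors.  rho picks the
# first row, nu embeds a row vector as the first row of a matrix, and
# P = nu o rho is the resulting projection on the free module A.
rng = np.random.default_rng(0)

def rho(X):  # epimorphism A -> M: first row of X
    return X[0]

def nu(v):   # splitting monomorphism M -> A with rho o nu = id_M
    return np.vstack([v, np.zeros_like(v)])

def P(X):    # P = nu o rho; concretely P(X) = E_{11} X
    return nu(rho(X))

X = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
A0 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

assert np.allclose(P(P(X)), P(X))         # P is idempotent
assert np.allclose(P(X @ A0), P(X) @ A0)  # P is a right-module homomorphism
assert np.allclose(rho(nu(X[0])), X[0])   # rho o nu = id on M
```

The image $P(\mathcal{A})$ consists of matrices supported on the first row, which is exactly the copy of $\mathbb{C}^2$ inside the free rank-1 module.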
Before discussing metric aspects of projective real calculi, we shall construct affine connections on these structures using the underlying structure of a free real calculus. As in differential geometry, there is a link between affine connections and \emph{bilinear maps}, i.e., maps $\Delta:\mathfrak{g}\times M\rightarrow M$ that satisfy \begin{align}
&\Delta(\lambda\delta_1+\delta_2,m)=\lambda\Delta(\delta_1,m)+\Delta(\delta_2,m),\quad \delta_1,\delta_2\in\mathfrak{g}, \lambda\in\mathbb{R},\text{ and } m\in M,\label{eq:bilinear.map.1}\\
&\Delta(\delta,m_1a+m_2)=\Delta(\delta,m_1)a+\Delta(\delta,m_2),\quad \delta\in\mathfrak{g}, a\in\mathcal{A},\text{ and } m_1,m_2\in M.\label{eq:bilinear.map.2} \end{align} The following lemma connecting bilinear maps and affine connections is a well-known result, and here we adapt it to our particular setting. \begin{lemma}\label{lem:AffineConn.and.BilinForm}
Let $\nabla^1,\nabla^2:\mathfrak{g}\times M\rightarrow M$ be affine connections. Then $\Delta:=\nabla^1-\nabla^2$ is a bilinear map in the sense of (\ref{eq:bilinear.map.1}) and (\ref{eq:bilinear.map.2}). Conversely, if $\Delta:\mathfrak{g}\times M\rightarrow M$ is a bilinear map and $\nabla:\mathfrak{g}\times M\rightarrow M$ is an affine connection, then $\nabla':=\nabla+\Delta$ is an affine connection. \end{lemma} \begin{proof}
For the first part, the only nontrivial condition to check is that $\Delta(\delta,ma)=\Delta(\delta,m)a$, but this readily follows from the Leibniz property:
\begin{equation*}
\Delta(\delta,ma)=\nabla^1_{\delta}(ma)-\nabla^2_{\delta}(ma)=(\nabla^1_{\delta}m-\nabla^2_{\delta}m)a+m\delta(a)-m\delta(a)=\Delta(\delta,m)a.
\end{equation*}
For the second part, the only nontrivial condition to check is the Leibniz condition. But this is a direct consequence of $\Delta$ being a bilinear map:
\begin{align*}
\nabla'_{\delta}(ma)&=\nabla_{\delta}(ma)+\Delta(\delta,ma)\\
&=(\nabla_{\delta}m+\Delta(\delta,m))a+m\delta(a)=(\nabla'_{\delta}m)a+m\delta(a),
\end{align*}
and the proof is complete. \end{proof}
We are now ready to state a standard result for connections on projective modules. \begin{proposition}\label{prop:connection.on.projective.module}
Let $P:\mathcal{A}^n\rightarrow\mathcal{A}^n$ be a projection. If $\tilde{\nabla}:\mathfrak{g}\times\mathcal{A}^n\rightarrow \mathcal{A}^n$ is an affine connection, then the map $\nabla:\mathfrak{g}\times P(\mathcal{A}^n)\rightarrow P(\mathcal{A}^n)$, defined by
\begin{equation*}
\nabla= P\circ\tilde{\nabla}
\end{equation*}
is an affine connection.
Conversely, for every affine connection $\nabla:\mathfrak{g}\times P(\mathcal{A}^n)\rightarrow P(\mathcal{A}^n)$ there is an affine connection $\tilde{\nabla}:\mathfrak{g}\times\mathcal{A}^n\rightarrow \mathcal{A}^n$ such that $\nabla=P\circ \tilde{\nabla}$ on $\mathfrak{g}\times P(\mathcal{A}^n)$. \end{proposition} \begin{proof}
For the first statement one needs to check that $\nabla$ as defined is an affine connection, and since $P$ is a module homomorphism one only needs to check the Leibniz condition. Let $m\in P(\mathcal{A}^n)$, $\partial\in\mathfrak{g}$ and $a\in \mathcal{A}$. Then
\begin{equation*}
\nabla_{\partial} (ma)=P(\tilde{\nabla}_{\partial}(ma))=P((\tilde{\nabla}_{\partial}m)a+m\partial(a))=(\nabla_{\partial}m)a+m\partial(a)
\end{equation*}
since $P(m)=m$, which shows that $\nabla$ is indeed an affine connection.
For the second statement, let $e_1,...,e_n$ be a basis of $\mathcal{A}^n$ and let $\tilde{\nabla}^0:\mathfrak{g}\times \mathcal{A}^n\rightarrow \mathcal{A}^n$ be the flat connection with respect to this basis, i.e., $\tilde{\nabla}^0_{\partial} (e_i a^i)=e_i\partial(a^i)$. Next, let $\nabla^0:\mathfrak{g}\times P(\mathcal{A}^n)\rightarrow P(\mathcal{A}^n)$ be the affine connection defined by
\begin{equation*}
\nabla^0_{\partial}m=P(\tilde{\nabla}^0_{\partial}m).
\end{equation*}
Let $\nabla:\mathfrak{g}\times P(\mathcal{A}^n)\rightarrow P(\mathcal{A}^n)$ be an arbitrary affine connection. Then, by Lemma~\ref{lem:AffineConn.and.BilinForm}, it follows that $\Delta:=\nabla-\nabla^0$ is a bilinear map. Define the map $\tilde{\Delta}:\mathfrak{g}\times\mathcal{A}^n\rightarrow\mathcal{A}^n$ by the formula
\begin{equation*}
\tilde{\Delta}(\partial,m):=\Delta(\partial,P(m));
\end{equation*}
then $\tilde{\Delta}$ is a bilinear map, since $P(ma)=P(m)a$.
This, together with Lemma~\ref{lem:AffineConn.and.BilinForm}, implies that the map $\tilde{\nabla}:=\tilde{\nabla}^0+\tilde{\Delta}$ is an affine connection on $\mathfrak{g}\times\mathcal{A}^n$. One quickly checks that $\nabla$ is indeed the projection of $\tilde{\nabla}$:
\begin{align*}
P(\tilde{\nabla}_{\partial}m)&=P(\tilde{\nabla}^0_{\partial}m+\tilde{\Delta}(\partial,m))=\nabla^0_{\partial}m+\Delta(\partial,m)\\
&=\nabla^0_{\partial}m+(\nabla_{\partial} m-\nabla^0_{\partial}m)=\nabla_{\partial}m,
\end{align*}
where the identity $P(\tilde{\Delta}(\partial,m))=\tilde{\Delta}(\partial,m)=\Delta(\partial,P(m))=\Delta(\partial,m)$, $m\in P(\mathcal{A}^n)$, is used in the second equality above.
This completes the proof. \end{proof}
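The mechanism of Proposition~\ref{prop:connection.on.projective.module} can be checked numerically in the toy setting $\mathcal{A}=\operatorname{Mat}_2(\mathbb{C})$ (a sketch of my own, with an inner derivation standing in for a general $\partial\in\mathfrak{g}$):

```python
import numpy as np

# Toy check (my own) of nabla = P o nabla~: A = Mat_2, free module A itself
# (rank 1, basis e_1 = I), inner derivation d(A) = [Dh, A], projection
# P(X) = E_{11} X onto the row submodule, flat connection nabla~_d(X) = d(X).
# We verify the Leibniz rule for nabla_d = P o nabla~_d on P(A).
rng = np.random.default_rng(1)
comm = lambda X, Y: X @ Y - Y @ X

Dh = rng.standard_normal((2, 2))
Dh = Dh - (np.trace(Dh) / 2) * np.eye(2)  # trace-free generator
d = lambda A: comm(Dh, A)                 # the derivation

E11 = np.diag([1.0, 0.0])
P = lambda X: E11 @ X                     # idempotent module homomorphism
nabla = lambda m: P(d(m))                 # nabla_d = P o nabla~_d

m = P(rng.standard_normal((2, 2)))        # element of P(A), so P(m) = m
a = rng.standard_normal((2, 2))

# Leibniz rule: nabla_d(m a) = (nabla_d m) a + m d(a)
assert np.allclose(nabla(m @ a), nabla(m) @ a + m @ d(a))
```

The assertion succeeds precisely because $P(m)=m$ for $m$ in the image of $P$, mirroring the step $P(m\partial(a))=m\partial(a)$ in the proof.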
Let $\tilde{C}_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,\mathcal{A}^n,\tilde{\varphi})$ be a free real calculus and let $(\tilde{C}_{\mathcal{A}},\tilde{h})$ be a free real metric calculus. If $P:\mathcal{A}^n\rightarrow\mathcal{A}^n$ is a projection it has already been established in Proposition~\ref{prop:derived.projective.calculus} that we may generate the projective real calculus $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,P(\mathcal{A}^n),P\circ\tilde{\varphi})$ from $\tilde{C}_{\mathcal{A}}$. It is interesting to see under which conditions it is possible to generate a projective real metric calculus from $(\tilde{C}_{\mathcal{A}},\tilde{h})$, and if we assume that $P$ is orthogonal with respect to $\tilde{h}$ it is possible to make a few general statements. \begin{proposition}\label{prop:restriction.of.metric}
Let $\tilde{C}_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,\mathcal{A}^n,\tilde{\varphi})$ be a free real calculus, and let the projection $P$ be orthogonal with respect to the metric $\tilde{h}$ on $\mathcal{A}^n$, i.e., $\tilde{h}(P(m_1),m_2)=\tilde{h}(m_1,P(m_2))$ for all $m_1,m_2\in\mathcal{A}^n$. Then
\begin{enumerate}
\item The map $h(m_1,m_2)=\tilde{h}(m_1,m_2)$, $m_1,m_2\in P(\mathcal{A}^n)$, is a metric on $P(\mathcal{A}^n)$.
\item If $\tilde{\nabla}:\mathfrak{g}\times \mathcal{A}^n\rightarrow \mathcal{A}^n$ is an affine connection that is compatible with the metric $\tilde{h}$ then $\nabla=P\circ\tilde{\nabla}$ is compatible with $h$.
\end{enumerate} \end{proposition} \begin{proof}
For the first statement, all that is needed is to show that $h$ is nondegenerate. Assume that $m_1\in P(\mathcal{A}^n)$ and that $h(m_1,m_2)=0$ for every $m_2\in P(\mathcal{A}^n)$. Then
\begin{equation*}
0=h(m_1,P(\tilde{m}))=\tilde{h}(m_1,P(\tilde{m}))=\tilde{h}(P(m_1),\tilde{m})=\tilde{h}(m_1,\tilde{m}),
\end{equation*}
where $\tilde{m}\in \mathcal{A}^n$ is arbitrary. Since $\tilde{h}$ is a metric it follows that $m_1=0$.
For the second statement, Proposition~\ref{prop:connection.on.projective.module} ensures that $\nabla$ is an affine connection and if $\tilde{\nabla}$ is metric with respect to $\tilde{h}$ then a standard computation verifies that $\nabla$ is compatible with $h$ as well:
\begin{align*}
\partial(h(m_1,m_2))&=\partial(\tilde{h}(m_1,m_2))=\tilde{h}(\tilde{\nabla}_{\partial} m_1,m_2)+\tilde{h}(m_1,\tilde{\nabla}_{\partial}m_2)\\
&=\tilde{h}(\tilde{\nabla}_{\partial} m_1,P(m_2))+\tilde{h}(P(m_1),\tilde{\nabla}_{\partial}m_2)\\
&=\tilde{h}(\nabla_{\partial} m_1,m_2)+\tilde{h}(m_1,\nabla_{\partial}m_2)=h(\nabla_{\partial} m_1,m_2)+h(m_1,\nabla_{\partial}m_2),
\end{align*}
completing the proof. \end{proof}
A natural question to ask is what conditions on $\tilde{h}$ and $P$ are needed to make sure that the projective real calculus $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,P(\mathcal{A}^n),P\circ\tilde{\varphi})$ together with the restriction $h$ of $\tilde{h}$ to $P(\mathcal{A}^n)$ constitute a real metric calculus. Although Proposition~\ref{prop:restriction.of.metric} ensures that $h$ is a metric on $P(\mathcal{A}^n)$, it is not guaranteed that $h$ is symmetric on $\varphi(\mathfrak{g})$, i.e., $h(\varphi(\partial_1),\varphi(\partial_2))=h(\varphi(\partial_2),\varphi(\partial_1))$ for every $\partial_1,\partial_2\in\mathfrak{g}$. Before presenting the next result we introduce some notation for the sake of clarity. Let $\{\partial_1,...,\partial_n\}$ be a basis of $\mathfrak{g}$, and let $\hat{e}_i=\tilde{\varphi}(\partial_i)$ and $e_i=(P\circ\tilde{\varphi})(\partial_i)$ denote the generators of $\mathcal{A}^n$ and $P(\mathcal{A}^n)$, respectively. We introduce $\tilde{h}_{ij}$ and $h_{ij}$ to denote the components of the metric: \begin{equation*}
\tilde{h}_{ij}=\tilde{h}(\hat{e}_i,\hat{e}_j),\quad h_{ij}=h(e_i,e_j). \end{equation*} Moreover, we introduce the coefficients $p^j_k\in\mathcal{A}$ as the unique elements such that \begin{equation*}
P(\hat{e}_k)=\hat{e}_j p^j_k; \end{equation*} since $P$ is a projection we have that $p^j_k=p^j_l p^l_k$. \begin{proposition}
Let $\tilde{h}$ be a metric on $\mathcal{A}^n$ such that $\tilde{h}_{ij}=\tilde{h}_{ji}$ with respect to the basis $\{\hat{e}_k\}_1^n$, and let $P$ be an orthogonal projection with respect to $\tilde{h}$. If $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,P(\mathcal{A}^n),\varphi)$ is a projective real calculus such that $\varphi(\partial_k)=P(\hat{e}_k)=\hat{e}_l p^l_k$ and $h$ is the restriction of $\tilde{h}$ to $P(\mathcal{A}^n)$, then $(C_{\mathcal{A}},h)$ is a real metric calculus if and only if
\begin{equation*}
\tilde{h}_{jk}p^k_i=\tilde{h}_{ik}p^k_j
\end{equation*}
for any pair of indices $i$ and $j$. \end{proposition} \begin{proof}
From the orthogonality of $P$ it follows that
\begin{equation}\label{eqn:orthogonal.metric.components}
(p^k_i)^*\tilde{h}_{kj}=\tilde{h}(P(\hat{e}_i),\hat{e}_j)=\tilde{h}(\hat{e}_i,P(\hat{e}_j))=\tilde{h}_{ik}p^k_j.
\end{equation}
Using this, we see that
\begin{equation*}
h(\varphi(\partial_i),\varphi(\partial_j))=\tilde{h}(P(\hat{e}_i),P(\hat{e}_j))=(p^k_i)^*\tilde{h}_{kl}p^l_j\overset{(\ref{eqn:orthogonal.metric.components})}{=}\tilde{h}_{ik}p^k_lp^l_j,
\end{equation*}
and since $P$ is a projection (i.e., $P^2=P$), $\tilde{h}_{ik}p^k_lp^l_j$ can be simplified further to $\tilde{h}_{ik}p^k_j$.
Thus $h(\varphi(\partial_i),\varphi(\partial_j))=h(\varphi(\partial_j),\varphi(\partial_i))$ is equivalent to $\tilde{h}_{ik}p^k_j=\tilde{h}_{jk}p^k_i$, and the statement follows. \end{proof} \section{Real calculi over matrix algebras}\label{sec:matrix.algebra}
Given two real calculi $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ and $C_{\mathcal{A}'}=(\mathcal{A}',\mathfrak{g}'_{D'},M',\varphi')$ it follows straight from the definition of real calculus isomorphisms that a necessary condition for $C_{\mathcal{A}}\simeq C_{\mathcal{A}'}$ is that $\mathcal{A}$ is isomorphic to $\mathcal{A}'$ and that $\mathfrak{g}$ is isomorphic to $\mathfrak{g}'$. Moreover, it was shown in \cite{atn:nc.minimal.embeddings} that the modules $M$ and $M'$ must also be isomorphic when viewed as modules over the same algebra (via the isomorphism between $\mathcal{A}$ and $\mathcal{A}'$). These basic observations lead to the following natural question: given a unital $^*$-algebra $\mathcal{A}$, a Lie algebra $\mathfrak{g}$ and a right $\mathcal{A}$-module $M$, how many nonisomorphic real calculi of the form $(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ are there? Going forward, we shall refer to this as the classification problem for real calculi.
The classification problem for real calculi over general $^*$-algebras $\mathcal{A}$ is complicated, but the compatibility conditions between the maps $\phi$, $\psi$ and $\hat{\psi}$ constituting an isomorphism $(\phi,\psi,\hat{\psi}):(\mathcal{A},\mathfrak{g}_D,M,\varphi)\rightarrow (\mathcal{A},\mathfrak{g}_{D'},M,\varphi')$ imply that for a given $^*$-isomorphism $\phi:\mathcal{A}\rightarrow\mathcal{A}$ there is at most one choice of $\psi$ and $\hat{\psi}$ such that $(\phi,\psi,\hat{\psi})$ is a real calculus homomorphism (cf. \cite{atn:nc.minimal.embeddings}); in general, the existence of a Lie algebra isomorphism $\psi:\mathfrak{g}\rightarrow\mathfrak{g}$ compatible with $\phi$ depends on the representations $D$ and $D'$, and the existence of a bijective map $\hat{\psi}:M\rightarrow M$ that is compatible with $\phi$ and $\psi$ then depends on the relationship between the maps $\varphi$ and $\varphi'$. As a starting point we consider real calculi over matrix algebras, i.e., the case when $\mathcal{A}=\Mat{N}$. By the Skolem-Noether theorem all automorphisms of $\Mat{N}$ are inner, i.e., conjugations by a matrix $U\in\GL{N}$, and in the following we shall write $\phi_U$ to denote conjugation by $U\in\GL{N}$, i.e., $\phi_U(A)=U^{-1}AU$ for $A\in\Mat{N}$.
Another important property is that every derivation on $\Mat{N}$ is inner, i.e., any derivation $\partial\in\text{Der}(\mathcal{A})$ can be identified with the commutator of a unique trace-free matrix $\hat{D}\in\Mat{N}$: \begin{equation*}\label{eqn:derivation.is.commutator.of.matrix}
\partial=[\hat{D},\cdot]. \end{equation*} It follows that for any given representation $D:\mathfrak{g}\rightarrow\text{Der}(\mathcal{A})$ there is a unique matrix representation $\hat{D}:\mathfrak{g}\rightarrow\Mat{N}$ such that $\hat{D}(\partial)$ is trace-free and $D(\partial)=[\hat{D}(\partial),\cdot]$ for every $\partial\in\mathfrak{g}$. We call $\hat{D}$ the matrix representation of $\mathfrak{g}$ associated with $D$, and note that since $D$ is faithful it follows that $\hat{D}$ is also a faithful representation of $\mathfrak{g}$. Moreover, since every derivation $D(\delta)$ is assumed to be hermitian, it follows that $\hat{D}(\delta)$ is anti-hermitian, implying that $\mathfrak{g}$ is isomorphic to a Lie subalgebra of $\su{N}$. When discussing whether two real calculi $C_{\mathcal{A}}$ and $C'_{\mathcal{A}}$ over $\mathcal{A}=\Mat{N}$ are isomorphic, one has to take into account that any real calculus isomorphism involves a Lie algebra isomorphism $\psi:\mathfrak{g}\rightarrow\mathfrak{g}$. This affects the relationship between the matrix representations $\hat{D}$ and $\hat{D}'$, and hence the concept of quasi-equivalent matrix representations becomes relevant.
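The normalization to a trace-free representative is easy to see concretely; the following sketch (my own illustration of this standard fact) shows that shifting the generating matrix by a multiple of the identity does not change the derivation:

```python
import numpy as np

# Illustration (my own): the inner derivation A -> [D0, A] on Mat_N is
# unchanged when D0 is shifted by a multiple of the identity, so subtracting
# tr(D0)/N * I yields the unique trace-free matrix inducing the derivation.
rng = np.random.default_rng(2)
N = 3
comm = lambda X, Y: X @ Y - Y @ X

D0 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Dhat = D0 - (np.trace(D0) / N) * np.eye(N)  # trace-free representative

A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

assert abs(np.trace(Dhat)) < 1e-12              # Dhat is trace-free
assert np.allclose(comm(D0, A), comm(Dhat, A))  # same inner derivation
```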
\begin{definition}
Let $\hat{D}$ and $\hat{D}'$ be matrix representations of a Lie algebra $\mathfrak{g}$. Then $\hat{D}$ and $\hat{D}'$ are said to be quasi-equivalent if there is a matrix $U\in\GL{N}$ and a Lie algebra automorphism $\psi\in\text{Aut}(\mathfrak{g})$ such that
\begin{equation*}
\hat{D}'(\partial)=\phi_U(\hat{D}(\psi(\partial)))
\end{equation*}
for every $\partial\in\mathfrak{g}$. The pair $(\phi_U,\psi)$ is called a \emph{realization} of the quasi-equivalence. \end{definition} If the representations $\hat{D}$ and $\hat{D}'$ are faithful, then the automorphism $\psi$ in the above definition is unique since \begin{align*}
\phi_U\paraa{\hat{D}(\psi(\partial))}=\phi_U\paraa{\hat{D}(\tilde{\psi}(\partial))}\Leftrightarrow \hat{D}(\psi(\partial))=\hat{D}(\tilde{\psi}(\partial))\Leftrightarrow \psi(\partial)=\tilde{\psi}(\partial). \end{align*} Therefore, in this case we define $\psi_U$ to be the unique automorphism satisfying \begin{equation*}
\hat{D}'(\partial)=\phi_U(\hat{D}(\psi_U(\partial))) \end{equation*} for every $\partial\in\mathfrak{g}$ whenever such an automorphism exists. In terms of real calculi over $\mathcal{A}=\Mat{N}$, quasi-equivalence of the matrix representations $\hat{D}$ and $\hat{D}'$ (associated with $D$ and $D'$ respectively) is the relevant condition for the existence of compatible automorphisms $\phi$ and $\psi$ (Definition~\ref{def:rc.morphism}), as can be seen in the following lemma.
\begin{lemma}\label{lem:Lie.automorphism.compatibility}
Let $C_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_D,M,\varphi)$ and $C'_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_{D'},M,\varphi')$ be real calculi and suppose that $\phi\in\text{Aut}(\mathcal{A})$ and $\psi\in\text{Aut}(\mathfrak{g})$. Then $\phi$ and $\psi$ are compatible (i.e., for every $a\in\mathcal{A}$ and $\partial\in\mathfrak{g}$, $\partial(\phi(a))=\phi(\psi(\partial)(a))$) if and only if $(\phi,\psi)$ is a realization of a quasi-equivalence between $\hat{D}$ and $\hat{D}'$. \end{lemma} \begin{proof}
To prove sufficiency,
assume that $\hat{D}$ and $\hat{D}'$ are quasi-equivalent.
Then there is a nonsingular matrix $U$ such that $\hat{D}'(\partial)=\phi_U(\hat{D}(\psi_U(\partial)))$ for every $\partial\in\mathfrak{g}$. One readily checks that $\phi_U$ and $\psi_U$ are compatible:
\begin{align*}
D'(\partial)(\phi_U(A))&=[\hat{D}'(\partial),\phi_U(A)]=[\phi_U(\hat{D}(\psi_U(\partial))),\phi_U(A)]\\
&=\phi_U([\hat{D}(\psi_U(\partial)),A])=\phi_U(D(\psi_U(\partial))(A)).
\end{align*}
To prove necessity, assume that $\phi:\Mat{N}\rightarrow \Mat{N}$ and $\psi:\mathfrak{g}\rightarrow\mathfrak{g}$ are compatible automorphisms. Then $\phi=\phi_U$ for a matrix $U\in\GL{N}$, and from the compatibility condition
\begin{equation*}
\phi_U\paraa{D(\psi(\partial))(A)}=D'(\partial)(\phi_U(A))
\end{equation*}
one obtains the following:
\begin{align*}
[\phi_U(\hat{D}(\psi(\partial))),\phi_U(A)]&=U^{-1}([\hat{D}(\psi(\partial)),A])U=\phi_U(\psi(\partial)(A))\\
&=\partial(\phi_U(A))=[\hat{D}'(\partial),\phi_U(A)],\quad \partial\in \mathfrak{g};
\end{align*}
since $\phi=\phi_U$ is an automorphism (and since $\hat{D}$ and $\hat{D}'$ are trace-free representations), this is equivalent to $\hat{D}'(\partial)=\phi_U(\hat{D}(\psi(\partial)))$ for all $\partial\in \mathfrak{g}$, implying that $\psi=\psi_U$, as desired. \end{proof} \noindent In particular, the above lemma implies that if $C_{\mathcal{A}}$ and $C'_{\mathcal{A}}$ are isomorphic then the representations $\hat{D}$ and $\hat{D}'$ are quasi-equivalent.
Generally speaking, if only free real calculi $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ are considered where $\mathcal{A},\mathfrak{g}$ and $M$ are fixed and $D$ and $\varphi$ are allowed to vary, then the particular choice of $\varphi$ does not affect the overall structure of $C_{\mathcal{A}}$ in an essential way. This holds true not only for matrix algebras, but for general free real calculi. We state it as follows. \begin{lemma}\label{lem:isomorphism.of.free.real.calculi}
Let $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ and $C'_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_{D'},M,\varphi')$ be free real calculi and suppose that $\phi\in\text{Aut}(\mathcal{A})$ and $\psi\in\text{Aut}(\mathfrak{g})$ are compatible in the sense of Definition~\ref{def:rc.morphism}. Then $C_{\mathcal{A}}$ and $C'_{\mathcal{A}}$ are isomorphic as real calculi. \end{lemma} \begin{proof}
Let $\{\delta_k\}_1^n$ be a basis of $\mathfrak{g}$. Then $\{\partial_k\}_1^n$, where $\partial_k=\psi(\delta_k)$, is also a basis of $\mathfrak{g}$. Moreover, since $C_{\mathcal{A}}$ and $C'_{\mathcal{A}}$ are free real calculi it follows that
$\{\varphi(\partial_k)\}_1^n$ and $\{\varphi'(\delta_k)\}_1^n$ are bases of $M$.
Let $\hat{\psi}:M\rightarrow M$ and $\tilde{\psi}:M\rightarrow M$ be defined by the following:
\begin{align*}
&\hat{\psi}(\varphi(\partial_k)A^k):=\varphi'(\delta_k)\phi(A^k)\\
&\tilde{\psi}(\varphi'(\delta_k)A^k):=\varphi(\partial_k)\phi^{-1}(A^k).
\end{align*}
It is trivial to check that $(\phi,\psi,\hat{\psi}):C_{\mathcal{A}}\rightarrow C'_{\mathcal{A}}$ and $(\phi^{-1},\psi^{-1},\tilde{\psi}):C'_{\mathcal{A}}\rightarrow C_{\mathcal{A}}$ are real calculus homomorphisms, and that $\tilde{\psi}=\hat{\psi}^{-1}$. \end{proof} This result, together with Lemma \ref{lem:Lie.automorphism.compatibility}, can be used to describe how the structure of a free real calculus $C_{\mathcal{A}}=(\mathcal{A},\mathfrak{g}_D,M,\varphi)$ is completely determined by the representation $D$ (and its associated matrix representation $\hat{D}$) when $\mathcal{A}=\Mat{N}$ and $\mathfrak{g}$ and $M$ are fixed.
\begin{theorem}\label{thm:free.real.calculi}
Let
\begin{equation*}
C_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_D,M,\varphi)\quad\text{and}\quad C'_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_{D'},M,\varphi')
\end{equation*}
be free real calculi. Then $C_{\mathcal{A}}\simeq C'_{\mathcal{A}}$ if and only if the matrix representations $\hat{D}$ and $\hat{D}'$ (associated with $D$ and $D'$, respectively) are quasi-equivalent. \end{theorem} \begin{proof}
From Lemma \ref{lem:isomorphism.of.free.real.calculi} it follows that $C_{\mathcal{A}}\simeq C'_{\mathcal{A}}$ if and only if there are compatible automorphisms $\phi:\mathcal{A}\rightarrow\mathcal{A}$ and $\psi:\mathfrak{g}\rightarrow\mathfrak{g}$, and Lemma \ref{lem:Lie.automorphism.compatibility} states that such $\phi$ and $\psi$ exist if and only if $\hat{D}$ and $\hat{D}'$ are quasi-equivalent. \end{proof}
Let $C_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_D,M,\varphi)$ and $C'_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_{D'},M,\varphi')$ be real calculi where $\text{dim }\mathfrak{g}=n$. Since Lemma \ref{lem:Lie.automorphism.compatibility} states that quasi-equivalence between the matrix representations $\hat{D}$ and $\hat{D}'$ is a necessary and sufficient condition for the existence of compatible automorphisms $\phi$ and $\psi$, this may be assumed when studying the role that the maps $\varphi$ and $\varphi'$ play in distinguishing the structures of $C_{\mathcal{A}}$ and $C'_{\mathcal{A}}$ from one another. In the case of free real calculi, Theorem \ref{thm:free.real.calculi} implies that the relationship between $\varphi$ and $\varphi'$ is irrelevant in order to determine whether two real calculi are isomorphic, but for general projective real calculi the situation changes. As an example we consider the case when $M=(\mathbb{C}^N)^n$, so that neither $C_{\mathcal{A}}$ nor $C'_{\mathcal{A}}$ is free whenever $n\neq N$.
Before moving forward we shall explain, for the sake of clarity, how the module $M=(\mathbb{C}^N)^n$ is represented. Since we are considering right modules, vectors in $\mathbb{C}^N$ are written as row vectors rather than column vectors, and $v\in (\mathbb{C}^N)^n$ is seen as a vector in $\mathbb{C}^{Nn}$ (in most cases written in the form $v=(v_1,...,v_n)\in(\mathbb{C}^N)^n$, with each $v_i\in\mathbb{C}^N$), with matrices $A\in\Mat{N}$ acting on $(\mathbb{C}^N)^n$ in the following way: \begin{equation*}
v\cdot A=(v_1,...,v_n)\cdot A:=(v_1 A,...,v_n A)=v\left(\bigoplus_1^n A\right), \end{equation*} where $\oplus$ denotes the direct sum of matrices. In general, the notation $v\cdot A$ above is used to distinguish the module action from a regular matrix multiplication when $n > 1$. Moreover, by $A\otimes B$ we will denote the Kronecker product of matrices. \begin{proposition}\label{prop:projective.rc.iso}
Let
\begin{equation*}
C_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_D,(\mathbb{C}^N)^n,\varphi)\quad\text{and}\quad C'_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_{D'},(\mathbb{C}^N)^n,\varphi')
\end{equation*} be real calculi. Then $C_{\mathcal{A}}$ and $C'_{\mathcal{A}}$ are isomorphic if and only if there are matrices $X\in\GL{n}$ and $U\in\GL{N}$ such that
\begin{enumerate}
\item $\hat{D}$ and $\hat{D}'$ are quasi-equivalent, with $\hat{D}'(\partial)=\phi_U\paraa{\hat{D}(\psi_U(\partial))}$,
\label{prop:projective.rc.iso.cond1}
\item $\varphi'(\partial)=\varphi(\psi_U(\partial))(X\otimes U)$\label{prop:projective.rc.iso.cond2}
\end{enumerate}
for all $\partial\in\mathfrak{g}$. \end{proposition} \begin{proof} Assume that $(\phi,\psi,\hat{\psi}):C_{\mathcal{A}}\rightarrow C'_{\mathcal{A}}$ is a real calculus isomorphism. Then Lemma \ref{lem:Lie.automorphism.compatibility} implies $\hat{D}$ and $\hat{D}'$ are quasi-equivalent and that there is a matrix $U\in\GL{N}$ such that $\phi=\phi_U$ and $\psi=\psi_U$. By the linearity condition together with the compatibility condition with $\phi$, $\hat{\psi}$ is a linear mapping from $(\mathbb{C}^N)^n$ to itself. Thus there is a unique matrix $\tilde{X}\in\Mat{Nn}$ such that $\hat{\psi}(v)=v\tilde{X}$ for all $v\in (\mathbb{C}^N)^n$; for notational purposes, we identify $\tilde{X}$ with an $n$-by-$n$ matrix whose entries $\tilde{X}_{ij}$ are in $\Mat{N}$. Moreover, the compatibility condition between $\hat{\psi}$ and $\phi$, $\hat{\psi}(v\cdot A)=\hat{\psi}(v)\cdot\phi_U(A)=(v\cdot A)\tilde{X}$, states that: \begin{equation*}
(v\cdot A)\tilde{X}=(v\tilde{X})\cdot (U^{-1}AU)\Leftrightarrow v\left[\tilde{X}\left(\bigoplus_1^n U^{-1}\right),\bigoplus_1^n A\right]=0 \end{equation*} for all $v\in (\mathbb{C}^N)^n$ and $A\in\Mat{N}$. This is equivalent to $[\tilde{X}_{ij}U^{-1},A]=0$ for each $i,j=1,...,n$, implying that each $\tilde{X}_{ij}=x_{ij}U$ for a constant $x_{ij}\in\mathbb{C}$; by setting $X:=(x_{ij})\in\Mat{n}$ we get the identity $\tilde{X}=X\otimes U$ and, since $\hat{\psi}$ is a bijection, $X$ is invertible. Thus, $\hat{\psi}:v\mapsto v(X\otimes U)$, and the compatibility condition between $\hat{\psi}$ and $\psi_U$ states that \begin{equation*}
\varphi'(\partial)=\hat{\psi}(\varphi(\psi_U(\partial)))=\varphi(\psi_U(\partial))(X\otimes U) \end{equation*} for every $\partial\in\mathfrak{g}$, proving necessity of the stated conditions.
For sufficiency, we simply note that given matrices $X$ and $U$ satisfying the stated conditions we may define the bijective map $\hat{\psi}:v\mapsto v(X\otimes U)$ on $(\mathbb{C}^N)^n$, and by the previous calculations it is clear that $\hat{\psi}$ is compatible with both $\phi_U$ and $\psi_U$, implying that $(\phi_U,\psi_U,\hat{\psi})$ is a real calculus isomorphism from $(\Mat{N},\mathfrak{g}_D,(\mathbb{C}^N)^n,\varphi)$ to $(\Mat{N},\mathfrak{g}_{D'},(\mathbb{C}^N)^n,\varphi')$. \end{proof}
A direct consequence of Proposition \ref{prop:projective.rc.iso} is that the classification task of real calculi $(\Mat{N},\mathfrak{g}_D,(\mathbb{C}^N)^n,\varphi)$ where $D$ is given can be simplified in some cases by replacing $D$ with a quasi-equivalent representation $D'$ which is easier to work with. We state this as a lemma.
\begin{lemma}\label{lem:diagonal.form.of.matrix.representation} Let $C_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_D,(\mathbb{C}^N)^n,\varphi)$ be a real calculus. Then, for any representation $D'$ that is quasi-equivalent with $D$, there is a $\mathbb{R}$-linear map $\varphi':\mathfrak{g}\rightarrow (\mathbb{C}^N)^n$ such that $C_{\mathcal{A}}\simeq C'_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_{D'},(\mathbb{C}^N)^n,\varphi')$. \end{lemma} \begin{proof}
For any pair of quasi-equivalent representations $D'$ and $D$ there is a matrix $U\in \GL{N}$ such that $\hat{D}'(\partial)=\phi_U(\hat{D}(\psi_U(\partial)))$, where $(\phi_U,\psi_U)$ is a realization of the quasi-equivalence.
Choosing $\varphi':\mathfrak{g}\rightarrow (\mathbb{C}^N)^n$ to be defined by the formula $\varphi'(\partial)=\varphi(\psi_U(\partial))\cdot U$, it follows that $\varphi'$ is a linear map that generates $(\mathbb{C}^N)^n$ as a module over $\Mat{N}$. Moreover, since $\varphi'(\partial)=\varphi(\psi_U(\partial))\cdot U=\varphi(\psi_U(\partial))(\mathbbm{1}_n\otimes U)$, Proposition \ref{prop:projective.rc.iso} states that $C_{\mathcal{A}}\simeq C'_{\mathcal{A}}$, as desired. \end{proof}
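As an illustrative numerical aside (not part of the formal development), the compatibility computation underlying Proposition \ref{prop:projective.rc.iso} can be checked directly: for any $X\in\GL{n}$, $U\in\GL{N}$ and $A\in\Mat{N}$, the map $\hat{\psi}:v\mapsto v(X\otimes U)$ satisfies $\hat{\psi}(v\cdot A)=\hat{\psi}(v)\cdot\phi_U(A)$. The following Python/NumPy sketch, with randomly generated test matrices (invertible almost surely) and variable names of our own choosing, verifies this identity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 2, 3

# arbitrary test matrices: U in GL(N), X in GL(n), A in Mat(N)
U = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
v = rng.normal(size=N * n) + 1j * rng.normal(size=N * n)  # v in (C^N)^n, flattened

act = lambda w, B: w @ np.kron(np.eye(n), B)   # module action w.B = w(direct sum of B)
psi_hat = lambda w: w @ np.kron(X, U)          # psi_hat: w -> w(X tensor U)
phi_U = lambda B: np.linalg.inv(U) @ B @ U     # phi_U: B -> U^{-1} B U

# compatibility: psi_hat(v.A) = psi_hat(v).phi_U(A)
assert np.allclose(psi_hat(act(v, A)), act(psi_hat(v), phi_U(A)))
```

Note that NumPy's `np.kron(X, U)` matches the block convention for $X\otimes U$ used here, so the module action $v\cdot A=v(\bigoplus_1^n A)$ is simply multiplication by $\mathbbm{1}_n\otimes A$.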
\subsection{The 1-dimensional case}\label{subsec:ex.dim.1} When studying real calculi \begin{equation*}
C_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_{D},(\mathbb{C}^N)^n,\varphi) \end{equation*} it is natural to begin with the simplest such case, i.e., when $n=\dim \mathfrak{g}=1$. Although this may seem trivial at first glance, a more detailed survey of this case yields considerable insight into what we can expect to find when the dimension of $\mathfrak{g}$ is higher, especially in relation to metrics and connections. Moreover, due to the simple structure of $\operatorname{Aut}(\mathfrak{g})\simeq \R\setminus\{0\}$ in the one-dimensional case, it becomes possible to give a comprehensive discussion of the classification problem when the module is $\mathbb{C}^N$.
The analysis of isomorphism classes of real calculi $C_{\mathcal{A}}$ with a 1-dimensional Lie algebra $\mathfrak{g}=\gen{\partial}$ and a fixed representation $D:\mathfrak{g}\rightarrow\operatorname{Der}(\Mat{N})$ is greatly simplified by the fact that all Lie algebra automorphisms of $\mathfrak{g}$ are of the form $\psi_{\mu}(\partial)=\mu\partial$, where $\mu\in\R\setminus\{0\}$. Moreover, when $\mathfrak{g}$ is 1-dimensional, $\hat{D}$ is equivalent to a diagonal representation $\hat{D}'$, since $\hat{D}(\partial)$ is skew-hermitian and thus diagonalizable, with all eigenvalues purely imaginary. Let \begin{equation*}
(\Mat{N},\gen{\partial}_D,\mathbb{C}^N,\varphi)\quad\text{and}\quad (\Mat{N},\gen{\partial}_D,\mathbb{C}^N,\varphi') \end{equation*} be real calculi. When analysing whether these are isomorphic, Proposition~\ref{prop:projective.rc.iso} implies that it is necessary to find every pair $U\in\GL{N}$ and $\mu\in\R\setminus\{0\}$ such that $\hat{D}(\partial)=\phi_U(\hat{D}(\psi_{\mu}(\partial)))=\mu\phi_U(\hat{D}(\partial))$. To simplify this task, we may diagonalize $\hat{D}(\partial)$ in the form \begin{equation*}
\hat{D}(\partial)=\bigoplus_{j=1}^k \lambda_j I_{n_j}, \end{equation*}
where the eigenvalues $\lambda_j$ are sorted in descending order (with respect to their imaginary parts); by Lemma~\ref{lem:diagonal.form.of.matrix.representation}, we may assume that $\hat{D}(\partial)$ is of this diagonal form to begin with. In this particular case it is feasible to determine the possible pairs $U\in\GL{N}$ and $\mu\in\R\setminus\{0\}$ such that $\hat{D}(\partial)=\mu\phi_U(\hat{D}(\partial))$ directly. Since this condition implies that $\hat{D}(\partial)$ and $\mu\hat{D}(\partial)$ are similar, and since $\hat{D}(\partial)$ is nonzero, it follows that $|\mu|=1$, with the special case $\mu=-1$ being possible if and only if $\hat{D}(\partial)$ and $-\hat{D}(\partial)$ are similar; in what follows we will refer to matrices $D$ such that $D$ and $-D$ are similar as \emph{anti-selfsimilar}. Furthermore, since $\hat{D}(\partial)$ is diagonal with its eigenvalues sorted in descending order, the condition $\hat{D}(\partial)=\mu\phi_U(\hat{D}(\partial))$ implies that $U$ is of the form \begin{equation*}
U_{+}=\begin{pmatrix}
U_1 & & \\
& \ddots & \\
& & U_k
\end{pmatrix}
\quad\text{or}\quad
U_{-}=\begin{pmatrix}
& & U_1 \\
& \reflectbox{$\ddots$} & \\
U_k & &
\end{pmatrix}, \end{equation*} where $U_j\in\GL{n_j}$ for $j=1,...,k$; we note in particular that the latter case where $U=U_{-}$ and $\mu=-1$ is possible only when $\hat{D}(\partial)$ is anti-selfsimilar.
\begin{lemma}\label{lem:diag.representation.isomorphism.criteria}
Let
\begin{equation*}
C_{\mathcal{A}}=(\Mat{N},\gen{\partial}_D,\mathbb{C}^N,\varphi)\quad\text{and}\quad C'_{\mathcal{A}}=(\Mat{N},\gen{\partial}_{D'},\mathbb{C}^N,\varphi')
\end{equation*}
be real calculi such that
\begin{equation*}
\hat{D}(\partial)=\bigoplus_{j=1}^k \lambda_j I_{n_j}\quad\text{and}\quad \hat{D}'(\partial)=\mu_0\hat{D}(\partial), \quad\mu_0\in\R\setminus\{0\},
\end{equation*}
where the eigenvalues of $\hat{D}(\partial)$ are sorted in descending order (with respect to their imaginary parts)
and where $n_j$ is the dimension of the eigenspace corresponding to the distinct eigenvalue $\lambda_j$ of $\hat{D}(\partial)$. Moreover,
write
$\varphi(\partial)=(v_1, v_2,..., v_k)$ and $\varphi'(\partial)=(v'_1, v'_2,..., v'_k)$
with $v_j,v'_j\in \mathbb{C}^{n_j}$ for $j=1,2,...,k$. If $\hat{D}(\partial)$ is anti-selfsimilar, then
$C_{\mathcal{A}}$ is isomorphic to $C'_{\mathcal{A}}$ if and only if one of the two conditions hold:
\begin{enumerate}
\item $v_j=0\Leftrightarrow v'_j=0$ for every $j=1,2,...,k$,
\item $v_j=0\Leftrightarrow v'_{k+1-j}=0$ for every $j=1,...,k$.
\end{enumerate}
If $\hat{D}(\partial)$ is not anti-selfsimilar, then $C_{\mathcal{A}}$ is isomorphic to $C'_{\mathcal{A}}$ if and only if $v_j=0\Leftrightarrow v'_j=0$ for every $j=1,2,...,k$. \end{lemma} \begin{proof}
Since $\hat{D}$ and $\hat{D}'$ are quasi-equivalent and $\gen{\partial}$ is 1-dimensional, it is clear that every quasi-equivalence $(\phi_U,\psi_U)$ between $\hat{D}$ and $\hat{D}'$ is such that $\psi_U:\partial\mapsto \mu\partial$. Thus, since $\hat{D}'(\partial)=\mu_0\hat{D}(\partial)=\mu\phi_U(\hat{D}(\partial))$, it follows that $U\in\GL{N}$ satisfies $\phi_U(\hat{D}(\partial))=\hat{D}(\partial)$ (corresponding to $\mu=\mu_0$) or $\phi_U(\hat{D}(\partial))=-\hat{D}(\partial)$ (corresponding to $\mu=-\mu_0$); due to the specific diagonal form of $\hat{D}(\partial)$, every such matrix $U$ can be written as
\begin{equation*}
U_{+}=\begin{pmatrix}
U_1 & & \\
& \ddots & \\
& & U_k
\end{pmatrix}
\quad\text{or}\quad
U_{-}=\begin{pmatrix}
& & U_1 \\
& \reflectbox{$\ddots$} & \\
U_k & &
\end{pmatrix},
\end{equation*}
where $U_j\in\GL{n_j}$ for $j=1,...,k$.
From this, together with Proposition \ref{prop:projective.rc.iso}, it follows that $C_{\mathcal{A}}\simeq C'_{\mathcal{A}}$ if and only if there is a matrix $U\in \GL{N}$ and $x\in\R\setminus\{0\}$ such that $v'=\mu x vU$ and such that $U$ is either of the form $U_{+}$ or $U_{-}$ as described above. This is equivalent to either
\begin{enumerate}
\item $v'_j=\mu_0 x v_j U_j$ for some $U_j\in \GL{n_j}$ for $j=1,2,...,k$, or
\item $v'_j=-\mu_0 x v_{k+1-j} U_{k+1-j}$ for some $U_{k+1-j}\in \GL{n_{k+1-j}}$ for $j=1,2,...,k$;
\end{enumerate}
if $\hat{D}(\partial)$ is anti-selfsimilar, then both of the above cases are possible, and if $\hat{D}(\partial)$ is not, then only case (1) is possible. Since $U_j\in\GL{n_j}$ is arbitrary for $j=1,...,k$ in (1) and (2), the statement follows. \end{proof}
Lemmas~\ref{lem:diagonal.form.of.matrix.representation} and~\ref{lem:diag.representation.isomorphism.criteria} imply the following corollary, which gives the number of isomorphism classes of calculi of the form $(\Mat{N},\gen{\partial}_D,\mathbb{C}^N,\varphi)$ when $D$ is given. This number depends only on the eigenvalues of $\hat{D}(\partial)$. \begin{corollary}
Let $D:\mathfrak{g}\rightarrow\operatorname{Der}(\Mat{N})$ be a faithful representation of $\mathfrak{g}=\gen{\partial}$, let $k$ be the number of distinct eigenvalues of $\hat{D}(\partial)$ and let $|C_D|$ denote the number of pairwise nonisomorphic real calculi of the form $(\Mat{N},\mathfrak{g}_D,\mathbb{C}^N,\varphi)$. Then \begin{enumerate}
\item if $\hat{D}(\partial)$ is not anti-selfsimilar, then $|C_D|=2^k-1$,
\item if $\hat{D}(\partial)$ is anti-selfsimilar and $k$ is odd, then $|C_D|=2^{(k-1)/2}(1+2^{(k-1)/2})-1$,
\item if $\hat{D}(\partial)$ is anti-selfsimilar and $k$ is even, then $|C_D|=2^{k/2-1}(1+2^{k/2})-1$. \end{enumerate} \end{corollary} \begin{proof}
By Lemma~\ref{lem:diagonal.form.of.matrix.representation} we may assume that $\hat{D}(\partial)$ is diagonal, i.e.,
\begin{equation*}
\hat{D}(\partial)=\bigoplus_{j=1}^k \lambda_j I_{n_j}.
\end{equation*}
The only thing left to specify is $\varphi:\mathfrak{g}\rightarrow M$, which must be chosen such that the image of $\mathfrak{g}$ under $\varphi$ generates $M$; this means that the only restriction on $\varphi$ is that
$\varphi(\partial)=(v_1, v_2,...,v_k)\neq 0$. This restriction simply requires that at least one of $v_1,v_2,...,v_k$ be nonzero. For $v=(v_1, v_2,...,v_k)\in\mathbb{C}^N$, let $a(v)=(a_1,a_2,...,a_k)\in \{0,1\}^k$ be such that $a_j=1$ if $v_j\neq 0$ and $a_j=0$ if $v_j=0$, and set $\tilde{a}(v)=(\tilde{a}_1,...,\tilde{a}_k)=(a_k,...,a_1)$.
If $\hat{D}(\partial)$ is not anti-selfsimilar, Lemma \ref{lem:diag.representation.isomorphism.criteria} states that
the choices $\varphi(\partial)=v$ and $\varphi'(\partial)=v'$
yield isomorphic real calculi if and only if
$a(v')=a(v)\in\{0,1\}^k$, and since there are exactly $2^k-1$ nonzero elements in $\{0,1\}^k$, $|C_D|=2^k-1$ in this case.
If $\hat{D}(\partial)$ is anti-selfsimilar one gets two relevant cases.
Suppose that $k=2m+1$. Then there are exactly $2^{m+1}$ elements $b\in\{0,1\}^k$ such that $b=(b_1,...,b_k)=(b_k,...,b_1)$, implying that there are exactly $2^k-2^{m+1}$ elements $c\in\{0,1\}^k$ such that $c=(c_1,...,c_k)\neq (c_k,...,c_1)$. By Lemma \ref{lem:diag.representation.isomorphism.criteria},
the choices $\varphi(\partial)=v$ and $\varphi'(\partial)=v'$
yield isomorphic real calculi if and only if either
$a(v')=a(v)\in\{0,1\}^k$ or $a(v')=\tilde{a}(v)\in\{0,1\}^k$; this distinction does not matter if $a(v)=\tilde{a}(v)$, and as a result from the above we have
that the number of pairwise nonisomorphic real calculi of the given form is
\begin{equation*}
\frac{2^k-2^{m+1}}{2}+(2^{m+1}-1)=2^{2m}-2^m+2\cdot 2^m-1=2^m(2^m+1)-1,
\end{equation*}
since $a(v)=(0,...,0)$ does not correspond to a valid choice of $v\in\mathbb{C}^N$.
If, instead, $k=2m$, then there are exactly $2^m$ elements $b\in\{0,1\}^k$ such that $b=(b_1,...,b_k)=(b_k,...,b_1)$, implying that there are exactly $2^k-2^m$ elements $c\in\{0,1\}^k$ such that $c=(c_1,...,c_k)\neq (c_k,...,c_1)$. Thus, the number of pairwise nonisomorphic real calculi of the given form is
\begin{equation*}
\frac{2^k-2^m}{2}+(2^m-1)=2^{2m-1}-2^{m-1}+2\cdot 2^{m-1}-1=2^{m-1}(2^m+1)-1
\end{equation*}
in this case, completing the proof. \end{proof}
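The counting argument above can be checked by brute force: isomorphism classes correspond to nonzero patterns $a(v)\in\{0,1\}^k$, identified with their reversals in the anti-selfsimilar case. The following Python sketch (an illustrative aside of our own, not part of the formal content) enumerates the classes and compares the counts with the formulas of the corollary:

```python
from itertools import product

def count_classes(k, anti_selfsimilar):
    # Zero patterns a(v) in {0,1}^k of (v_1,...,v_k), excluding the zero
    # pattern; two patterns yield isomorphic calculi iff they are equal or,
    # in the anti-selfsimilar case, reverses of each other.
    patterns = [a for a in product((0, 1), repeat=k) if any(a)]
    classes = {min(a, a[::-1]) if anti_selfsimilar else a for a in patterns}
    return len(classes)

for k in range(1, 10):
    assert count_classes(k, False) == 2**k - 1
    if k % 2 == 1:
        m = (k - 1) // 2
        assert count_classes(k, True) == 2**m * (2**m + 1) - 1
    else:
        m = k // 2
        assert count_classes(k, True) == 2**(m - 1) * (2**m + 1) - 1
```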
Next, we wish to study real metric calculi and connections, and in the spirit of Section~\ref{sec:free.and.projective} we shall do this by realizing the real calculus $C_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_D,\mathbb{C}^N,\varphi)$ as a projection of a free real calculus. By Proposition~\ref{prop:projective.rc.realized.as.projection} this amounts to finding a projection $P:\mathcal{A}\rightarrow\mathcal{A}$ and a map $\tilde{\varphi}:\mathfrak{g}\rightarrow\mathcal{A}$ such that $P(\mathcal{A})\simeq \mathbb{C}^N$ and such that $\varphi$ can be identified with $P\circ\tilde{\varphi}$. If we let $P:\mathcal{A}\rightarrow\mathcal{A}$ be the projection that maps a matrix $A$ to the matrix $A'$ whose first row is equal to that of $A$ and whose other rows are zero, and let $\hat{e}$ be an invertible matrix whose first row is $v_0=\varphi(\partial)$, we may set $\tilde{\varphi}(\partial)=\hat{e}$; by construction, these choices of projection $P$ and map $\tilde{\varphi}$ yield a free real calculus $\tilde{C}_{\mathcal{A}}$ from which we may realize $C_{\mathcal{A}}$ as a projection. For the sake of simplicity, $\hat{e}$ is chosen so that its rows are pairwise orthogonal, and for the sake of clarity we make the following identification between $\mathbb{C}^N$ and $P\paraa{\Mat{N}}$: \begin{equation*}
(x_1,...,x_N)\simeq \theta\paraa{(x_1,...,x_N)}:=\begin{pmatrix}
x_1 & \cdots & x_N \\
0 & \cdots & 0\\
\vdots & \ddots & \vdots \\
0 & \cdots & 0
\end{pmatrix}, \end{equation*} where the map $\theta:\mathbb{C}^N\rightarrow P(\Mat{N})$ defined above is an obvious isomorphism. Since $P$ is going to be used with respect to the basis of $\mathcal{A}$ given by $\hat{e}$ rather than the standard basis, a more detailed description of $P$ is warranted. When viewed in the basis $\{\hat{e}\}$, $P$ can be regarded as the module homomorphism satisfying \begin{equation*}
P(\hat{e})=\hat{e}p, \end{equation*}
where $p=\frac{1}{||v_0||^2}v_0^{\dagger}v_0$. Since the rows of $\hat{e}$ are chosen to be pairwise orthogonal and since $p=p^2$ is the matrix that projects vectors in $\mathbb{C}^N$ onto $v_0$, it is clear that $P$ is indeed the projection described earlier, with $\varphi(\partial)=v_0\simeq\theta(v_0)=P(\hat{e})=(P\circ\tilde{\varphi})(\partial)$.
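As a quick numerical illustration of the preceding discussion (an aside of our own, with randomly generated data), one can check that $p=\frac{1}{||v_0||^2}v_0^{\dagger}v_0$ is a hermitian idempotent that fixes $v_0$ and maps any row vector to a multiple of $v_0$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
v0 = rng.normal(size=N) + 1j * rng.normal(size=N)

# p = v0^dagger v0 / ||v0||^2 as a matrix acting on row vectors from the right
p = np.outer(v0.conj(), v0) / np.vdot(v0, v0).real

assert np.allclose(p @ p, p)              # idempotent
assert np.allclose(p, p.conj().T)         # hermitian
assert np.allclose(v0 @ p, v0)            # fixes v0

w = rng.normal(size=N) + 1j * rng.normal(size=N)
c = (w @ v0.conj()) / np.vdot(v0, v0).real
assert np.allclose(w @ p, c * v0)         # projects any row vector onto v0
```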
Next, let us show that every metric on $\mathbb{C}^N$ is of the form $h(u,v)=x\cdot u^{\dagger}v\in\Mat{N}$.
\begin{proposition}\label{prop:metrics.on.CN}
Let $h$ be a metric on the right $\Mat{N}$-module $\mathbb{C}^N$. Then there exists $x\in\R\setminus\{0\}$ such that $h=h_x$, where
\begin{equation*}
h_x(u,v)= x \cdot u^\dagger v\in\Mat{N}
\end{equation*}
for $u,v\in\mathbb{C}^N$. \end{proposition} \begin{proof} Let $h$ be a metric on $\mathbb{C}^N$, and let $e_1=(1,0,...,0)\in\mathbb{C}^N$. Then we may set $H:=h(e_1,e_1)$ and use the identity $v=e_1\theta(v)$ (where $\theta$ is the isomorphism between $\mathbb{C}^N$ and $P(\Mat{N})$ described earlier) to calculate $h(u,v)$ for general vectors in $\mathbb{C}^N$: \begin{align*}
h(u,v)&=h(e_1\theta(u),e_1\theta(v))=\theta(u)^{\dagger} h(e_1,e_1)\theta(v)=\theta(u)^{\dagger} H\theta(v)\\
&=\begin{pmatrix}
\overline{u}_1 & 0 &\cdots & 0\\
\overline{u}_2 & 0 &\cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
\overline{u}_N & 0 &\cdots & 0\\
\end{pmatrix}
\begin{pmatrix}
h_{11} & h_{12} &\cdots & h_{1N}\\
h_{21} & h_{22} &\cdots & h_{2N}\\
\vdots & \vdots & \ddots & \vdots\\
h_{N1} & h_{N2} &\cdots & h_{NN}\\
\end{pmatrix}
\begin{pmatrix}
v_1 & v_2 &\cdots & v_N\\
0 & 0 &\cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 &\cdots & 0\\
\end{pmatrix}\\
&=\begin{pmatrix}
\overline{u}_1 \\
\overline{u}_2\\
\vdots \\
\overline{u}_N \\
\end{pmatrix}
(h_{11}, h_{12},\cdots,h_{1N})
\begin{pmatrix}
v_1 & v_2 &\cdots & v_N\\
0 & 0 &\cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 &\cdots & 0\\
\end{pmatrix}=h_{11}\cdot u^{\dagger}v. \end{align*} If $h_{11}$ were zero, then $h(u,v)=0$ for every $u,v\in\mathbb{C}^N$, contradicting nondegeneracy; moreover, since $H=h(e_1,e_1)$ satisfies $H^{\dagger}=H$, $h_{11}$ is real. Thus, $h_{11}\in\R\setminus\{0\}$ and the statement follows. \end{proof}
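The collapse of $\theta(u)^{\dagger}H\theta(v)$ to $h_{11}\cdot u^{\dagger}v$ in the proof above is a purely mechanical matrix identity, and can be confirmed numerically; the following sketch (illustrative only, with a random matrix playing the role of $H$) assumes nothing beyond the definitions:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # plays the role of h(e_1, e_1)

def theta(v):
    # theta: C^N -> P(Mat(N)); v becomes the first row of an otherwise zero matrix
    T = np.zeros((N, N), dtype=complex)
    T[0, :] = v
    return T

u = rng.normal(size=N) + 1j * rng.normal(size=N)
v = rng.normal(size=N) + 1j * rng.normal(size=N)

# theta(u)^dagger H theta(v) collapses to h11 * u^dagger v, as in the proof
h_uv = theta(u).conj().T @ H @ theta(v)
assert np.allclose(h_uv, H[0, 0] * np.outer(u.conj(), v))
```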
We now move on to affine connections $\nabla$ on $\mathbb{C}^N$; by Proposition \ref{prop:connection.on.projective.module} every such connection is the composition of the projection $P$ (as defined earlier with respect to the basis $\hat{e}=\tilde{\varphi}(\partial)$ of $\Mat{N}$) with a connection $\tilde{\nabla}$ on the free module $\mathcal{A}$ and thus we may use the structure of the free calculus $\tilde{C}_{\mathcal{A}}$ to define $\nabla$. On the free module it is straightforward to define an affine connection in terms of its Christoffel symbol $\tilde{\Gamma}\in\mathcal{A}$ (with respect to the basis $\hat{e}=\tilde{\varphi}(\partial)$): \begin{equation*}
\tilde{\nabla}_{\partial} \hat{e}=\hat{e}\tilde{\Gamma}, \end{equation*} and for the connection $\nabla=P\circ\tilde{\nabla}$ on $P(\Mat{N})$ we have \begin{equation*}
\nabla_{\partial} P(\hat{e})=P(\tilde{\nabla}_{\partial} P(\hat{e}))=P(\tilde{\nabla}_{\partial} \hat{e}p)=P(\hat{e}\tilde{\Gamma})p+P(\hat{e})\partial(p)=P(\hat{e})(\tilde{\Gamma}p+\partial(p)), \end{equation*} where we recall that \begin{equation*}
p=\frac{1}{||v_0||^2}v_0^{\dagger}v_0 \end{equation*} is the matrix projecting vectors in $\mathbb{C}^N$ onto $v_0=\varphi(\partial)$. Thus, through the identification $v_0\simeq \theta(v_0)=P(\hat{e})$, we see that any connection $\nabla$ on $\mathbb{C}^N$ is defined by \begin{equation}\label{eqn:connection.on.CN.first.form}
\nabla_{\partial} v_0=v_0(\tilde{\Gamma} p+\partial(p)), \end{equation} where $\tilde{\Gamma}\in\Mat{N}$ can be chosen arbitrarily. This can be simplified somewhat since the product \begin{equation*}
v_0\tilde{\Gamma} p=\left(\frac{1}{||v_0||^2}v_0\tilde{\Gamma} v_0^{\dagger}\right)v_0 \end{equation*} is equal to $\lambda v_0$ for a unique $\lambda\in\mathbb{C}$ for every $\tilde{\Gamma}\in\mathcal{A}$, implying that we may define $\nabla$ by \begin{equation}\label{eqn:connection.on.CN}
\nabla_{\partial} v_0=v_0(\lambda\mathbbm{1}_N+\partial(p)) \end{equation} for an arbitrary $\lambda\in\mathbb{C}$. Note that an arbitrary connection could also be characterized by \begin{equation*}
\nabla_{\partial} v_0=v_0(\tilde{\lambda}\mathbbm{1}_N-\hat{D}(\partial)) \end{equation*} for an arbitrary $\tilde{\lambda}\in\mathbb{C}$, since equation~(\ref{eqn:connection.on.CN.first.form}) is equivalent to \begin{align*}
\nabla_{\partial} v_0&=v_0(\tilde{\Gamma} p+\partial(p))=v_0(\tilde{\Gamma} p+[\hat{D}(\partial),p])=v_0((\tilde{\Gamma}+\hat{D}(\partial))p-p\hat{D}(\partial))\\
&=v_0(\tilde{\lambda}\mathbbm{1}_N-\hat{D}(\partial)). \end{align*}
At this point we have managed to define a connection on the projective module, and by Proposition \ref{prop:metrics.on.CN} we know that any metric $h$ on $\mathbb{C}^N$ is given by $h=h_x:(u,v)\mapsto x\cdot u^{\dagger}v$. One easily checks that the pair $(C_{\mathcal{A}},h_x)$ forms a real metric calculus for any real $x\neq 0$, and so we may use the above characterization of connections on $\mathbb{C}^N$ to determine the possible real connection calculi of the form $(C_{\mathcal{A}},h_x,\nabla)$. In general, the existence of a real connection calculus $(C_{\mathcal{A}},h_x,\nabla)$ relies upon $v_0=\varphi(\partial)$ being an eigenvector of $\hat{D}(\partial)$, as can be seen in the following proposition.
\begin{proposition}\label{prop:v0.is.eigenvector.and.character.of.nabla}
Let $\hat{D}=\hat{D}(\partial)$ be the matrix representation of $\partial\in\mathfrak{g}$. Then, for any $x\in\R\setminus\{0\}$, there exists a connection $\nabla$ such that
$(C_{\mathcal{A}},h_x,\nabla)$ is a real connection calculus if and only if $v_0$ is an eigenvector of $\hat{D}(\partial)$. In this case, if $\nabla^0$ is a given connection on $\mathbb{C}^N$, then $(C_{\mathcal{A}},h_x,\nabla^0)$ is a real connection calculus if and only if $\nabla^0_{\partial}v_0=\lambda v_0$ for some $\lambda\in\mathbb{R}$. \end{proposition}
\begin{proof} Assume that $(C_{\mathcal{A}},h_x,\nabla)$ is a real connection calculus. Using equation~(\ref{eqn:connection.on.CN}), we have that $\nabla_{\partial}v_0=v_0(\lambda\mathbbm{1}_N+\partial(p))$ for some $\lambda\in\mathbb{C}$. Thus, since $h_x(v_0,\nabla_{\partial}v_0)=x\cdot v_0^{\dagger}v_0(\lambda\mathbbm{1}_N+\partial(p))$, we find that the symmetry condition for real connection calculi, $h_x(v_0,\nabla_{\partial}v_0)=h_x(v_0,\nabla_{\partial}v_0)^{\dagger}$, is equivalent to \begin{equation*}
x\cdot v_0^{\dagger}v_0(\lambda\mathbbm{1}_N+\partial(p))=x\cdot(\bar{\lambda}\mathbbm{1}_N+\partial(p))v_0^{\dagger}v_0,
since $\partial$ is a hermitian derivation and $p=p^{\dagger}$. Using $v_0^{\dagger}v_0=||v_0||^2 p$, one gets \begin{align*}
(\lambda-\bar{\lambda})v_0^{\dagger}v_0&=\partial(p)v_0^{\dagger}v_0-v_0^{\dagger}v_0\partial(p)\\
&=||v_0||^2([\hat{D},p]p-p[\hat{D},p])=||v_0||^2(\hat{D}p+p\hat{D}-2p\hat{D}p). \end{align*} Multiplying the above expression by $v_0$ from the left and cancelling the factor $||v_0||^2$, we note that $v_0$ is a (left) eigenvector of $\hat{D}p+p\hat{D}-2p\hat{D}p$ with eigenvalue $(\lambda-\bar{\lambda})$, meaning that \begin{align*}
(\lambda-\bar{\lambda})v_0&=v_0(\hat{D}p+p\hat{D}-2p\hat{D}p)=v_0\hat{D}(\mathbbm{1}_N-p), \end{align*} since $v_0 p=v_0$. Multiplying this expression from the right by $v_0^{\dagger}$, we find that \begin{equation*}
(\lambda-\bar{\lambda})||v_0||^2=v_0\hat{D}(v_0^{\dagger}-v_0^{\dagger})=0, \end{equation*} which in turn implies that \begin{equation*}
v_0\hat{D}(\mathbbm{1}_N-p)=0. \end{equation*} Hence, $v_0\hat{D}=v_0\hat{D}p$, which is proportional to $v_0$ since $p$ projects onto $v_0$; this is the same as saying that $v_0$ is an eigenvector of $\hat{D}$.
To complete the proof, let $v_0$ be an eigenvector of $\hat{D}$ with eigenvalue $i \mu$ (where $\mu\in\mathbb{R}$ since $\hat{D}$ is anti-hermitian), and let $\nabla^0_{\partial} v_0=v_0(\lambda\mathbbm{1}_N+\partial(p))$, where $\lambda\in\mathbb{C}$. Since $p=\frac{1}{||v_0||^2}v_0^{\dagger}v_0$ and $\hat{D}^{\dagger}=-\hat{D}$, we have that \begin{equation*}
\hat{D}p=-\frac{1}{||v_0||^2}(v_0\hat{D})^{\dagger}v_0=-\frac{1}{||v_0||^2}(i\mu v_0)^{\dagger}v_0 =i\mu p, \end{equation*} which implies that $\partial(p)=\hat{D}p-p\hat{D}=i\mu p-i\mu p=0$. Thus, $\nabla^0_{\partial} v_0=\lambda v_0$, and the symmetry condition $h_x(v_0,\nabla^0_{\partial}v_0)=h_x(v_0,\nabla^0_{\partial}v_0)^{\dagger}$ then becomes equivalent to \begin{equation*}
x\lambda\cdot v_0^{\dagger}v_0=x\bar{\lambda}\cdot v_0^{\dagger}v_0. \end{equation*} This is true if and only if $\lambda\in\mathbb{R}$. \end{proof}
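The key computational facts used in the proof, namely $\hat{D}p=i\mu p$ and $\partial(p)=[\hat{D},p]=0$ whenever $v_0$ is an eigenvector of $\hat{D}$, are easy to check numerically. A small sketch (our own test data, with a diagonal anti-hermitian $\hat{D}$ and $v_0$ chosen in the eigenspace of the eigenvalue $2i$):

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([2.0, 2.0, -1.0, 3.0])
D = 1j * np.diag(mu)                      # anti-hermitian D with imaginary eigenvalues

v0 = np.zeros(4, dtype=complex)
v0[:2] = rng.normal(size=2) + 1j * rng.normal(size=2)  # v0 in the eigenspace of 2i

p = np.outer(v0.conj(), v0) / np.vdot(v0, v0).real

assert np.allclose(v0 @ D, 2j * v0)       # v0 is a (row) eigenvector of D
assert np.allclose(D @ p, 2j * p)         # D p = i mu p
assert np.allclose(D @ p - p @ D, 0)      # hence d(p) = [D, p] = 0
```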
We are now ready to state a general result regarding the existence of a Levi-Civita connection given a real metric calculus $(C_{\mathcal{A}},h_x)$.
\begin{proposition}
Let $C_{\mathcal{A}}=(\Mat{N},\gen{\partial}_D,\mathbb{C}^N,\varphi)$ and let $(C_{\mathcal{A}},h_x)$ be a real metric calculus.
Then there exists a unique connection $\nabla$ such that $(C_{\mathcal{A}},h_x,\nabla)$ is pseudo-Riemannian if and only if $v_0=\varphi(\partial)$ is an eigenvector of $\hat{D}(\partial)$ with eigenvalue $\lambda_{\partial}$. In this case, the Levi-Civita connection $\nabla$ is given by
\begin{equation*}
\nabla_{\partial} v=\lambda_{\partial}v-v\hat{D}(\partial),
\end{equation*}
for $v\in\mathbb{C}^N$. \end{proposition} \begin{proof} Assume that $(C_{\mathcal{A}},h_x,\nabla)$ is a real connection calculus. Then Proposition~\ref{prop:v0.is.eigenvector.and.character.of.nabla} immediately implies that $v_0$ is an eigenvector of $\hat{D}$ with eigenvalue $\lambda_{\partial}$. Using the fact that $\nabla_{\partial} v_0=\lambda v_0$ for some $\lambda\in\mathbb{R}$, and noting that $\partial(h_x(v_0,v_0))=0$ (since $h_x(v_0,v_0)$ is proportional to $p$, and since $\partial(p)=0$), the metric condition for $\nabla$ becomes \begin{align*}
0=\partial(h_x(v_0,v_0))=h_x(v_0,\nabla_{\partial}v_0)+h_x(\nabla_{\partial}v_0,v_0)=(\lambda+\bar{\lambda}) h_x(v_0,v_0). \end{align*} This is satisfied if and only if $\lambda+\bar{\lambda}=0$, and since $\lambda\in\mathbb{R}$ it follows that $\lambda=0$. In the case of a one-dimensional Lie algebra the torsion condition is trivial, which means that the real connection calculus $(C_{\mathcal{A}},h_x,\nabla)$ is pseudo-Riemannian if and only if $\nabla$ is given by \begin{equation*}
\nabla_{\partial} v=\nabla_{\partial} (v_0 B) =v_0\partial(B)=v_0[\hat{D}(\partial),B]=\lambda_{\partial} v_0 B-(v_0 B)\hat{D}(\partial)=\lambda_{\partial} v-v\hat{D}(\partial), \end{equation*} where $v=v_0B$ is an arbitrary vector in $\mathbb{C}^N$. \end{proof}
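As an illustration, the metric condition $\partial(h_x(u,v))=h_x(\nabla_{\partial}u,v)+h_x(u,\nabla_{\partial}v)$ for the connection $\nabla_{\partial}v=\lambda_{\partial}v-v\hat{D}(\partial)$ can be verified numerically for arbitrary vectors. The following sketch uses random test data of our own; note that the $\lambda_{\partial}$ terms cancel precisely because $\lambda_{\partial}$ is purely imaginary:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 3
mu = rng.normal(size=N)
D = 1j * np.diag(mu)                      # D(d), anti-hermitian
lam = 1j * mu[0]                          # lambda_d: the eigenvalue of D for v0 = e_1

x = 1.7                                   # the metric h_x(u,v) = x * u^dagger v
h = lambda u, v: x * np.outer(u.conj(), v)
d = lambda M: D @ M - M @ D               # the derivation acting on Mat(N)
nabla = lambda v: lam * v - v @ D         # nabla v = lambda_d v - v.D

u = rng.normal(size=N) + 1j * rng.normal(size=N)
v = rng.normal(size=N) + 1j * rng.normal(size=N)

# metric condition: d(h_x(u,v)) = h_x(nabla u, v) + h_x(u, nabla v)
assert np.allclose(d(h(u, v)), h(nabla(u), v) + h(u, nabla(v)))
```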
\subsection{General abelian Lie algebras}\label{subsec:ex.dim.n} Let $C_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_D,(\mathbb{C}^N)^n,\varphi)$ and assume that $\mathfrak{g}$ is abelian with a basis $\{\partial_1,...,\partial_n\}$. Some of the results from the previous section can be generalized to higher dimensions without great effort, and to this end we will consider the special case where each $e_i:=\varphi(\partial_i)$ is of the following form: \begin{equation*}
e_i=(0,...,0,\alpha_i v_0,0,...,0)\in(\mathbb{C}^{N})^n, \end{equation*} where $v_0\in\mathbb{C}^{N}$ is a unit vector and $\alpha_i\in\R\setminus\{0\}$. The effect this has is that for any metric $h$ on $(\mathbb{C}^{N})^n$ we have that $h(e_i,e_j)=\mu_{ij}\alpha_i\alpha_j v_0^\dagger v_0=\tilde{\mu}_{ij}v_0^\dagger v_0$ where $\mu_{ij}\in\mathbb{R}$ (implying that $\tilde{\mu}_{ij}\in\mathbb{R}$ as well); this can be most easily seen by noting that $h$ can be broken down as the sum of general bilinear forms over $\mathbb{C}^N$, after which we can apply the exact same reasoning used to prove Proposition \ref{prop:metrics.on.CN} to reach this conclusion. Moreover, the symmetry condition $h(\varphi(\partial_i),\varphi(\partial_j))=h(\varphi(\partial_j),\varphi(\partial_i))$ implies that $\tilde{\mu}_{ij}=\tilde{\mu}_{ji}\in\mathbb{R}$ for every pair $i,j$ and the metric $h$ being nondegenerate is reflected in the fact that the real and symmetric $n\times n$ matrix $\tilde{M}=(\tilde{\mu}_{ij})$ defining the metric $h$ is invertible.
To realize $(C_{\mathcal{A}},h)$ as a projection of a free real metric calculus we need to determine a suitable projection $P:\mathcal{A}^n\rightarrow\mathcal{A}^n$ and invertible metric $\tilde{h}$ on $\mathcal{A}^n$, and to this end we let $A$ be a unitary matrix whose first row is equal to $v_0$. We then set $A_i=\alpha_i A$ and let $\{\hat{e}_i\}_1^n$ be the basis of $\mathcal{A}^n$ given by \begin{equation*}
\hat{e}_i=(0,...,0,A_i,0,...,0)\in\mathcal{A}^n. \end{equation*} As a projection we use the module homomorphism $P:\mathcal{A}^n\rightarrow\mathcal{A}^n$ given by \begin{equation*}
P(\hat{e}_j)=\hat{e}_kp^k_j, \end{equation*} where \begin{equation*}
p^k_j=\begin{cases}
v_0^{\dagger}v_0=p &\text{if }j=k,\\
0 &\text{otherwise}.
\end{cases} \end{equation*} Since $p$ is the matrix that projects vectors in $\mathbb{C}^N$ onto $v_0$, and since each $A_i=\alpha_i A$ where $A$ is a unitary matrix whose first row is $v_0$, the product $A_i p$ is the matrix whose first row is $\alpha_i v_0$ and whose other rows are all zero. Using the map $\theta:\mathbb{C}^N\rightarrow\Mat{N}$ from the previous section, given by \begin{equation*}
\theta\paraa{(x_1,...,x_N)}:=\begin{pmatrix}
x_1 & \cdots & x_N \\
0 & \cdots & 0\\
\vdots & \ddots & \vdots \\
0 & \cdots & 0
\end{pmatrix}, \end{equation*} we define the map $\Theta:(\mathbb{C}^N)^n\rightarrow P(\mathcal{A}^n)$ as \begin{equation*}
\Theta(v_1,...,v_n)=(\theta(v_1),...,\theta(v_n)), \end{equation*} where each $v_i\in\mathbb{C}^N$. It is easy to check that $\Theta$ is an isomorphism between $(\mathbb{C}^N)^n$ and $P(\mathcal{A}^n)$, so that vectors $v\in (\mathbb{C}^N)^n$ can be identified with $\Theta(v)\in P(\mathcal{A}^n)$. Using this identification we note that $\varphi(\partial_i)=e_i\simeq\Theta(e_i)=P(\hat{e}_i)$ for $i=1,...,n$, and thus we define the map $\tilde{\varphi}:\mathfrak{g}\rightarrow \mathcal{A}^n$ by $\tilde{\varphi}(\partial_i)=\hat{e}_i$ for $i=1,...,n$. This yields the free real calculus $\tilde{C}_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_D,(\Mat{N})^n,\tilde{\varphi})$ from which $C_{\mathcal{A}}$ can be realized as a projection.
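The identification $e_i\simeq\Theta(e_i)=P(\hat{e}_i)$ rests on the fact that $A_ip$ is the matrix whose first row is $\alpha_iv_0$ and whose other rows vanish; for a unitary $A$ with first row $v_0$ this is a one-line numerical check (illustrative sketch of our own, with a random unitary obtained via QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 3
# a unitary matrix A with orthonormal rows; v0 := first row of A
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
A = Q.T                                   # Q unitary, so the rows of A are orthonormal
v0 = A[0]

p = np.outer(v0.conj(), v0)               # ||v0|| = 1, so p = v0^dagger v0

Ap = A @ p
assert np.allclose(Ap[0], v0)             # first row of A.p is v0
assert np.allclose(Ap[1:], 0)             # all other rows vanish
```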
Next, we choose the metric $\tilde{h}$ to be given by \begin{equation*}
\tilde{h}(\hat{e}_i,\hat{e}_j)=\tilde{h}_{ij}:=\tilde{\mu}_{ij}A^{\dagger}A=\tilde{\mu}_{ij}\mathbbm{1}_N. \end{equation*} One quickly verifies that this is an invertible metric by calculating $\tilde{h}^{ij}\tilde{h}_{jk}$, where $\tilde{h}^{ij}=\tilde{\mu}^{ij}\mathbbm{1}_N$. It is straightforward to verify that $P$ is orthogonal with respect to $\tilde{h}$ and that $h(e_i,e_j)=\tilde{h}(P(\hat{e}_i),P(\hat{e}_j))$, which means that $(C_{\mathcal{A}},h)$ can be realized as the projection of the free real metric calculus $(\tilde{C}_{\mathcal{A}},\tilde{h})$.
We are interested in affine connections $\nabla$ on $(\mathbb{C}^{N})^n$, and since every such connection is the projection of a connection $\tilde{\nabla}$ on the free module, we may use the structure on the free module to perform calculations. On the free module, it is trivial to define an affine connection $\tilde{\nabla}$ in terms of its Christoffel symbols $\tilde{\Gamma}^k_{ij}$, where \begin{equation*}
\tilde{\nabla}_i \hat{e}_j=\hat{e}_k\tilde{\Gamma}^k_{ij}.
\nabla_i e_j&=P(\tilde{\nabla}_i P(\hat{e}_j))=P(\tilde{\nabla}_i\hat{e}_k p^k_j)=P(\tilde{\nabla}_i\hat{e}_j p)=P(\hat{e}_k\tilde{\Gamma}^k_{ij}p+\hat{e}_j\partial_i(p))\\
&=e_k\tilde{\Gamma}^k_{ij}p+e_j\partial_i(p)=e_k(\tilde{\Gamma}^k_{ij}p+\delta^k_j\partial_i(p)), \end{align*} where $p=v_0^{\dagger}v_0$ is the matrix projecting vectors in $\mathbb{C}^N$ onto $v_0$. Thus, if we define $\nabla$ by \begin{equation*}
\nabla_i e_j:=e_k\Gamma^k_{ij}, \end{equation*} where each $\Gamma^k_{ij}=\tilde{\Gamma}^k_{ij}p+\delta^k_j\partial_i(p)$ for a $\tilde{\Gamma}^k_{ij}\in\mathcal{A}$, then $\nabla$ is a well-defined connection on $(\mathbb{C}^N)^n$. We are, however, only interested in connections such that $(C_{\mathcal{A}},h,\nabla)$ is a real connection calculus, and for this to be the case it is necessary that we assume that $v_0$ is an eigenvector of every $\hat{D}(\partial_i)$; this requirement is proven in the same way that Proposition~\ref{prop:v0.is.eigenvector.and.character.of.nabla} is proved in the 1-dimensional case. Since we are only considering abelian Lie algebras $\mathfrak{g}$, such a vector $v_0$ always exists (the commuting skew-hermitian matrices $\hat{D}(\partial_i)$ are simultaneously diagonalizable), and as in the 1-dimensional case this has the consequence that $\partial_i(p)=\partial_i(h_{kj})=0$. Moreover, we note that for every matrix $\tilde{\Gamma}^k_{ij}$ there is a unique vector $v^k_{ij}:=v_0(\tilde{\Gamma}^k_{ij})^{\dagger} \in\mathbb{C}^N$ such that $\Gamma^k_{ij}=\tilde{\Gamma}^k_{ij}p+\delta^k_j\partial_i(p)=(v^k_{ij})^{\dagger}v_0$ (the second term vanishing since $\partial_i(p)=0$), leading to the characterization of $\nabla$ being simplified through the use of the standard scalar product on $\mathbb{C}^N$: \begin{equation*}
e_k\Gamma^k_{ij}=(0,...,\alpha_kv_0,0,...,0)\cdot (v^k_{ij})^{\dagger}v_0=(0,...,\alpha_k (v_0 (v^k_{ij})^{\dagger})v_0,0,...,0)=e_k(v_0 (v^k_{ij})^{\dagger}), \end{equation*} and if we let $\lambda^k_{ij}=v_0 (v^k_{ij})^{\dagger}\in\mathbb{C}$ we have that \begin{equation*}
\nabla_i e_j=e_k\Gamma^k_{ij}=e_k\lambda^k_{ij}; \end{equation*} since the matrix $\tilde{\Gamma}^k_{ij}$ that yields $\lambda^k_{ij}$ is arbitrary, the above expression defines an affine connection for any choice of $\lambda^k_{ij}\in\mathbb{C}$.
With this simplification, the hermiticity condition for $h(e_i,\nabla_j e_k)$ becomes \begin{equation*}
h(e_i,\nabla_j e_k)=(\tilde{\mu}_{il}\lambda^l_{jk})v_0^{\dagger}v_0=(\tilde{\mu}_{il}\bar{\lambda}^l_{jk})v_0^{\dagger}v_0=h(e_i,\nabla_j e_k)^{\dagger}, \end{equation*} and multiplying by $\tilde{\mu}^{mi}$ from the left, we see that this is true if and only if $\lambda^l_{jk}\in\mathbb{R}$ for every $j,k,l$.
Since the torsion condition is not trivial if $\dim \mathfrak{g}>1$, we may use Koszul's formula to find constants $\lambda^k_{ij}$ satisfying the metric and torsion-free conditions for $\nabla$ directly, if they exist. We find that the connection must satisfy \begin{multline*}
2h(\nabla_i e_j,e_k)=\partial_i h(e_j,e_k)+\partial_j h(e_i,e_k)-\partial_k h(e_i,e_j)\\
-h(e_i,\varphi([\partial_j,\partial_k]))+h(e_j,\varphi([\partial_k,\partial_i]))+h(e_k,\varphi([\partial_i,\partial_j]))=0, \end{multline*} since the Lie bracket is zero and $\partial_i (h_{jk})=0$ for every $i,j,k$. On the other hand, $h(\nabla_i e_j,e_k)$ may be calculated explicitly: \begin{equation*}
0=2h(\nabla_i e_j,e_k)=2h(e_l\lambda^l_{ij},e_k)=2\tilde{\mu}_{lk}\bar{\lambda}^l_{ij}; \end{equation*} multiplying by $\tilde{\mu}^{mk}$ from the right, one finds that this happens if and only if $\bar{\lambda}^k_{ij}=\lambda^k_{ij}=0$ for every $i,j,k$. In other words, for an arbitrary vector $v=e_kB^k\in (\mathbb{C}^N)^n$ and $\partial\in\mathfrak{g}$, \begin{equation*}
\nabla_{\partial} v=\nabla_{\partial} (e_k B^k)=e_k\partial(B^k)=e_k[\hat{D},B^k]=\lambda_{\partial} e_k B^k-(e_k B^k)\cdot \hat{D}=\lambda_{\partial} v-v\cdot \hat{D}, \end{equation*} where $\hat{D}=\hat{D}(\partial)$ and $\lambda_{\partial}$ is the eigenvalue of $\hat{D}$ associated with $v_0$. We summarize the above discussion as follows.
\begin{proposition}
Let $\mathfrak{g}$ be an abelian Lie algebra with basis $\{\partial_1,...,\partial_n\}$ and let $(C_{\mathcal{A}},h)$ be a real metric calculus such that
\begin{equation*}
C_{\mathcal{A}}=(\Mat{N},\mathfrak{g}_D,(\mathbb{C}^N)^n,\varphi)
\end{equation*}
and $\varphi(\partial_i)=(0,...,0,\alpha_i v_0,0,...,0)\in(\mathbb{C}^{N})^n$,
where $\alpha_i\in\R\setminus\{0\}$ and $v_0$ is a unit vector.
Then there exists a unique connection $\nabla$ such that $(C_{\mathcal{A}},h,\nabla)$ is pseudo-Riemannian if and only if $v_0$ is an eigenvector of $\hat{D}(\partial_i)$ with eigenvalue $\lambda_i$ for $i=1,...,n$. In this case, the Levi-Civita connection $\nabla$ is given by
\begin{equation*}
\nabla_{\partial_i} v=\lambda_i v-v\cdot\hat{D}(\partial_i),
\end{equation*}
for $v\in (\mathbb{C}^N)^n$. \end{proposition}
\end{document}
\begin{document}
\title{Rank Polynomials of Fence Posets are Unimodal} \begin{abstract}
We prove a conjecture of Morier-Genoud and Ovsienko that says that rank polynomials of the distributive lattices of lower ideals of fence posets are unimodal. We do this by introducing a related class of \emph{circular} fence posets and proving a stronger version of the conjecture due to McConville, Sagan and Smyth. We show that the rank polynomials of circular fence posets are symmetric and conjecture that unimodality holds except in some particular cases. We also apply the recent work of Elizalde, Plante, Roby and Sagan on rowmotion on fences and show many of their homomesy results hold for the circular case as well.
\end{abstract} \section{Introduction} Fence posets are a natural class of posets that appear in the study of cluster algebras, quiver representations and other areas of enumerative combinatorics, see \cite{Saganpaper} for an overview. Let $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_s)$ be a composition of $n$. The fence poset of $\alpha$, denoted $F(\alpha)$, is the poset on $x_1,x_2,\ldots,x_{n+1}$ with the order relations: \begin{equation*} x_1\preceq x_2 \preceq \cdots\preceq x_{\alpha_1+1}\succeq x_{\alpha_1+2}\succeq \cdots\succeq x_{\alpha_1+\alpha_2+1}\preceq x_{\alpha_1+\alpha_2+2}\preceq\cdots\preceq x_{\alpha_1+\alpha_2+\alpha_3+1}\succeq \cdots \end{equation*} The relations describe a poset with $n+1$ nodes, where $n = \alpha_1 + \ldots + \alpha_s$ is the \emph{size} of $\alpha$, schematically depicted in Figure~\ref{fig:first} below.
\begin{figure}
\caption{The fence poset $F(\alpha)$}
\label{fig:first}
\end{figure}
We call the $s$ maximal chains of this poset corresponding to parts of $\alpha$ its \emph{segments}. Lower order ideals of $F(\alpha)$ ordered by inclusion give a distributive lattice which we denote by $J(\alpha)$. The lattice $J(\alpha)$ is ranked by the size of the ideals, with a generating polynomial $R(\alpha;q)= \sum_{I \in J(\alpha)} q^{|I|}$, called the \emph{rank polynomial}. We will use $r(\alpha)$ to denote the corresponding \emph{rank sequence}, given by the coefficients of $R(\alpha;q)$.
\begin{example} \label{ex:2113} The fence poset for $\alpha=(2,1,1,3)$ is given in the left part of Figure \ref{fig:2113}. Note that the ideals of maximal and minimal rank are unique. Ideals of rank $1$ and rank $7$ are given by minima and complements of maxima respectively, and there are five ideals of rank $2$, depicted in Figure \ref{fig:2113}, right. The full rank sequence is $(1,3,5,6,6,5,3,2,1)$. \begin{figure}
\caption{The fence poset $F(2,1,1,3)$ (left) and its five ideals of rank $2$ (right).}
\label{fig:2113}
\end{figure} \end{example}
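The rank sequence in this example can be reproduced by brute-force enumeration. The following sketch is ours, not part of the paper: it encodes the cover relations of $F(\alpha)$ (odd-numbered segments ascending, even-numbered segments descending) and tests every subset of nodes, which is feasible only for small fences.

```python
def fence_covers(alpha):
    """Cover relations (lower, upper) of the fence F(alpha) on nodes
    0..n (n = sum(alpha)): segments alternately ascend and descend,
    matching x_1 <= ... <= x_{a_1+1} >= ... ."""
    covers, i = [], 0
    for k, part in enumerate(alpha):
        for _ in range(part):
            covers.append((i, i + 1) if k % 2 == 0 else (i + 1, i))
            i += 1
    return covers

def rank_sequence(alpha):
    """Number of lower ideals of F(alpha) of each size, by checking
    every subset of nodes (exponential, but fine for small fences)."""
    n = sum(alpha) + 1                 # number of nodes of F(alpha)
    covers = fence_covers(alpha)
    seq = [0] * (n + 1)
    for mask in range(1 << n):         # subsets encoded as bitmasks
        # S is a lower ideal iff each upper endpoint forces its lower one
        if all(mask >> lo & 1 or not mask >> hi & 1 for lo, hi in covers):
            seq[bin(mask).count("1")] += 1
    return seq

print(rank_sequence((2, 1, 1, 3)))     # [1, 3, 5, 6, 6, 5, 3, 2, 1]
```

The same routine also reproduces the long flat runs discussed below, e.g. for compositions of the form $(a,1,1,1)$.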
The rank sequences of fence posets were used by Morier-Genoud and Ovsienko in \cite{originalconj} in their recent work defining $q$-analogues of the rational numbers. Their $q$-rationals are defined by the ratio of the rank polynomials for two compositions given by the continued fraction expression of the rationals considered and enjoy several interesting properties including a type of convergence which allows one to extend their definition to obtain $q$-real numbers. They also proposed the following conjecture in their paper, the proof of which is the main result in this paper. \begin{thm}[Conjecture $1.4$ in \cite{originalconj}]\label{thm:main0} The rank polynomials of fence posets are unimodal. \end{thm}
While there was no a priori reason for the authors to expect that this conjecture holds, there was ample numerical evidence. Results predating the conjecture itself were given by Salvi and Munarini \cite{crown}, who considered the case where all parts are equal to $1$. Claussen \cite{claussen2020expansion} showed that the conjecture holds when the composition has at most $4$ parts. Further partial progress was made by McConville, Sagan and Smyth \cite{Saganpaper}, who proved the conjecture in the case where the first segment is larger than the sum of the others and proposed the following strengthening of this conjecture. The various interlacing properties referred to in the next theorem are defined in the next section.
\begin{thm}[Conjecture 1.4 in \cite{Saganpaper}]\label{thm:main} Suppose $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_s)$. \begin{enumerate}
\item[(a)] If $s=1$ then $r(\alpha) = (1,1,\ldots,1)$ is symmetric. \item[(b)] If $s$ is even, then $r(\alpha)$ is bottom interlacing. \item[(c)] If $s\ge 3$ is odd we have:
\begin{enumerate}
\item[(i)] If $\alpha_1>\alpha_s$ then $r(\alpha)$ is bottom interlacing.
\item[(ii)] If $\alpha_1<\alpha_s$ then $r(\alpha)$ is top interlacing.
\item[(iii)] If $\alpha_1=\alpha_s$ then $r(\alpha)$ is symmetric, bottom interlacing, or top interlacing depending on whether
$r(\alpha_2,\alpha_3,\ldots,\alpha_{s-1})$ is symmetric, top interlacing, or bottom interlacing, respectively. \end{enumerate} \end{enumerate} \end{thm}
One of the challenges in proving the above theorem comes from the feature that there are fence posets whose rank sequences can have long flat parts. \begin{example} The rank sequence of the composition $(a, 1, 1, 1)$ where $a > 2$ is \[r(\alpha) = (1, 3, 4, \overbrace{5, \ldots, 5}^{a-2}, 4, 3, 2, 1).\] \end{example} We will describe the main ideas in our proof later, but it is noteworthy that our proof is purely combinatorial and essentially constructive, in that we can effectively describe injections that realize the desired unimodality. Unimodality of combinatorial sequences is often deduced by first proving stronger properties of the sequence such as log concavity, ultra log concavity or even real rootedness, but for this problem, none of these stronger properties need hold. Indeed, to see that even log concavity need not hold, note that for the fence poset $F(\alpha) = F(2, 1, 1, 3)$ described in Example \ref{ex:2113}, we have \[9 = r(\alpha)[6]^2 < r(\alpha)[5]\, r(\alpha)[7] = 5\cdot 2 = 10,\] where we use the notation $r(\alpha)[k]$ to refer to the number of rank $k$ ideals of the fence poset $F(\alpha)$.
The key idea in our proof is to navigate between the properties of fence posets and those of the closely related class of \emph{circular fence posets}. For a composition $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_{2s})$ of $n$ we define the \emph{circular fence poset} of $\alpha$, denoted $\overline{F}(\alpha)$, as the fence poset of $\alpha$ where $x_{n+1}$ and $x_1$ are taken to be equal, so we get a circular poset with $n$ nodes.
\begin{example} The circular fence poset $\bar{F}(2, 1, 1,3)$ is obtained from the regular fence poset $F(2, 1, 1, 3)$ (see Figure \ref{fig:2113}) by identifying the vertices $x_1$ and $x_8$, yielding a poset on $7$ elements. Referring once again to Figure \ref{fig:2113}, given that we have identified $x_1$ and $x_8$, two of the five ideals of size two become identical in the circular version and the ideal $\{x_1, x_8\}$ does not appear. Thus, the number of rank $2$ ideals in $\bar{F}(2, 1, 1,3)$ is $3$. The full rank sequence for $\bar{F}(2, 1, 1,3)$ is $(1, 2, 3, 4, 4, 3, 2, 1)$. \end{example}
We will use $\bar{J}(\alpha)$ to refer to the lattice of lower ideals of $\bar{F}(\alpha)$, $\bar{R}(\alpha; q)$ to refer to the rank polynomial of $\bar{J}(\alpha)$ and $\bar{r}(\alpha)$ to refer to the rank sequence. Rank polynomials for circular fence posets behave slightly differently from those for regular fence posets; there are examples where they fail to be unimodal, see Section $4$ for a discussion and a characterization. However, they do satisfy a highly convenient property.
\begin{thm} Rank polynomials of circular fence posets are \emph{symmetric}. \end{thm}
Given a fence poset, there are several naturally related circular fence posets. Our proof consists of relating the rank polynomials of these various posets and inductively proving a number of ancillary results. One of the byproducts of our proof is the following result that might be of independent interest. \begin{thm} Let $\alpha = (\alpha_1, \ldots, \alpha_{2s})$ be a composition with an even number of parts and consider any cyclic shift of $\alpha$, $\beta = (\alpha_k, \alpha_{k+1}, \ldots, \alpha_{2s}, \alpha_1, \alpha_2, \ldots, \alpha_{k-1})$. Then \[\bar{R}(\alpha;q) = \bar{R}(\beta;q).\] In other words, the rank polynomial of a circular fence poset is well defined over \emph{circular} compositions. \end{thm}
As mentioned above, when it comes to circular fence posets, unimodality need not hold. \begin{example} Let $\alpha = (1, a, 1, a)$ be a composition. A direct calculation shows that the rank sequence is \[\bar{r}(\alpha) = (1, 2, \ldots, a, a+1, a, a+1, a, a-1, \ldots, 1).\] This sequence has a dip at the middle term and is not unimodal. \end{example}
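These phenomena are easy to check computationally. The sketch below (an illustrative helper of ours, not from the paper) builds the circular fence by wrapping the last cover relation around, then counts lower ideals of each size by brute force; for $a=2$ the dip at the middle term is visible, while the sequence is still symmetric, and cyclically shifting the composition leaves it unchanged.

```python
def circular_rank_sequence(alpha):
    """Rank sequence of the circular fence F-bar(alpha): cover
    relations on nodes 0..n-1 (n = sum(alpha), identifying x_{n+1}
    with x_1), then count lower ideals of each size by brute force."""
    n = sum(alpha)
    covers, i = [], 0
    for k, part in enumerate(alpha):
        for _ in range(part):
            j = (i + 1) % n            # wrap the last edge back to node 0
            covers.append((i, j) if k % 2 == 0 else (j, i))
            i += 1
    seq = [0] * (n + 1)
    for mask in range(1 << n):
        if all(mask >> lo & 1 or not mask >> hi & 1 for lo, hi in covers):
            seq[bin(mask).count("1")] += 1
    return seq

r = circular_rank_sequence((1, 2, 1, 2))   # the a = 2 instance above
print(r)                                   # [1, 2, 3, 2, 3, 2, 1]: a dip
print(r == r[::-1])                        # True: symmetric nonetheless
```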
\section{Notation and Terminology}
Let $P$ be a finite poset. A subset $I$ of $P$ is said to be a lower order ideal (resp. upper order ideal) if when $x \in I$, any $y \preceq x$ (resp. any $y \succeq x$) lies in $I$ as well. We will use the word ``ideal'' to denote a lower order ideal, unless stated otherwise, and use the notation $I \trianglelefteq P$. Ideals (or upper order ideals) of a poset $P$ ordered by inclusion give the structure of a distributive lattice $J(P)$, ranked by the number of elements. See \cite{Stanley} Chapter 3.4 for a detailed discussion. For the purposes of this work, we will use the word ``rank'' exclusively to refer to the rank structure of the order ideal lattice. Note that taking the setwise complement of an ideal gives an upper order ideal of complementary rank.
We will be interested in the case where $P$ is a fence, or a circular fence, and consider the corresponding rank sequence and rank polynomial. The fences are defined to start with an up step, but as flipping a fence vertically only reverses the rank sequence, their structure can be inferred easily. Fences that start with a down step will come up at a few instances in our proofs, but instead of developing a separate notation for upside down fences, we will allow the first part of the composition to be zero in those instances.
A sequence $(a_0,a_1,\ldots,a_n)$ is called \emph{unimodal} if there exists an index $m$ such that $$ a_0\le a_1 \le \cdots \le a_m \ge a_{m+1}\ge \cdots\ge a_{n}.$$ It was conjectured in \cite{originalconj} that the rank sequence of $J(\alpha)$ is unimodal. A more specific conjecture about the behaviour of the coefficients was given in \cite{Saganpaper}. A sequence is called {\em top interlacing} if
$$ a_0\le a_n \le a_1\le a_{n-1} \le \ldots\le a_{\ce{n/2}}$$
where $\ce{\cdot}$ is the ceiling function. Similarly, the sequence is {\em bottom interlacing} if $$ a_n\le a_0 \le a_{n-1} \le a_1 \le \ldots \le a_{\fl{n/2}} $$ with $\fl{\cdot}$ being the floor function. Note that top interlacing as well as bottom interlacing sequences are unimodal.
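The two interlacing conditions are easy to test mechanically: read the sequence alternately from both ends and check that the resulting chain is nondecreasing. The helper names below are ours, chosen for illustration.

```python
def _interleaved(a, from_top):
    """Read a alternately from both ends: a_0, a_n, a_1, a_{n-1}, ...
    (from_top=True) or a_n, a_0, a_{n-1}, a_1, ... (from_top=False)."""
    lo, hi, out = 0, len(a) - 1, []
    while lo <= hi:
        first, second = (a[lo], a[hi]) if from_top else (a[hi], a[lo])
        out.append(first)
        if lo != hi:
            out.append(second)
        lo, hi = lo + 1, hi - 1
    return out

def is_top_interlacing(a):
    # a_0 <= a_n <= a_1 <= a_{n-1} <= ... <= a_{ceil(n/2)}
    s = _interleaved(a, from_top=True)
    return all(x <= y for x, y in zip(s, s[1:]))

def is_bottom_interlacing(a):
    # a_n <= a_0 <= a_{n-1} <= a_1 <= ... <= a_{floor(n/2)}
    s = _interleaved(a, from_top=False)
    return all(x <= y for x, y in zip(s, s[1:]))

r = [1, 3, 5, 6, 6, 5, 3, 2, 1]   # the rank sequence of F(2,1,1,3)
print(is_bottom_interlacing(r), is_top_interlacing(r))   # True False
```

Note that a symmetric sequence whose interleaved reading is monotone is both top and bottom interlacing, consistent with part (c)(iii) of the theorem above.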
To prove Theorem \ref{thm:main}, we will define a circular version of the fence poset, in which the first and last nodes are related.
\section{Circular Fences} For a composition $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_{2s})$ of $n$ we define the \emph{circular fence poset} of $\alpha$, denoted $\overline{F}(\alpha)$, to be the fence poset of $\alpha$ with the additional relation $x_{n+1}=x_1$, so that we end up with a circular poset with $n$ nodes. We will denote the corresponding order ideal lattice, rank polynomial and rank sequence by $\overline{J}(\alpha), \overline{R}(\alpha;q)$ and $\overline{r}(\alpha)$ respectively. We will call the nodes that correspond to $\alpha_i$ the $i$th \emph{segment} of $\overline{F}(\alpha)$.
Circular fences have substantial intrinsic symmetry. Shifting the parts cyclically by two steps gives the same object and reversing the order of the parts preserves the rank sequence. In the special case when all the segments are of size $1$, the object we obtain is called a \emph{crown}. Crowns were previously studied in \cite{crown}, where it was shown that the corresponding rank polynomials are symmetric, and that they are unimodal when the number of segments is different from $4$. Examining the one-step shift allows us to conclude directly that symmetry also holds when one of the segments is larger. This will serve as the basis to prove that, in fact, for any circular fence poset we get a rank symmetric lattice.
\begin{lemma} Shifting the parts of $\alpha$ cyclically by one step reverses the rank sequence $\overline{r}(\alpha)$. In particular \label{lemma:basis}$\overline{R}((k,1,1,1,\ldots,1);q)$ where the number of segments is even is symmetric for any $k\in \mathbb{N}$. \end{lemma} \begin{proof} This follows as a cyclic shift of one step on a circular fence is equivalent to reversing the order relation or flipping the poset upside down. Making a cyclic shift of one step followed by reversing the parts of $(k,1,1,1,\ldots,1)$ gives $(1,1,1,1,\ldots,1,k,1)$ which has the exact same structure but a reversed rank sequence.
\end{proof}
In general, rank polynomials for circular fences are no easier to calculate than their non-circular counterparts and we only have formulas for a limited number of cases. The case when $\alpha=(1,a,1,a,\ldots,1,a)$ was considered in \cite{crown2}. They were able to formulate the rank polynomial in terms of Chebyshev polynomials of the first kind, defined recursively by $T_0(q)=1$, $T_1(q)=q$ and $T_{n+2}(q) = 2q \,T_{n+1}(q) - T_n(q)$.
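The recursion just stated is straightforward to implement on coefficient lists; the helper below is a sketch of ours, not taken from \cite{crown2}.

```python
def chebyshev_T(n):
    """Coefficient list (increasing powers of q) of the n-th Chebyshev
    polynomial of the first kind, via the recursion T_0 = 1, T_1 = q,
    T_{k+2} = 2q T_{k+1} - T_k."""
    T = [[1], [0, 1]]                          # T_0 and T_1
    for _ in range(2, n + 1):
        prev, last = T[-2], T[-1]
        new = [0] + [2 * c for c in last]      # multiply T_{k+1} by 2q
        for i, c in enumerate(prev):           # subtract T_k
            new[i] -= c
        T.append(new)
    return T[n]

print(chebyshev_T(3))   # [0, -3, 0, 4], i.e. T_3(q) = 4q^3 - 3q
```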
\begin{prop}[\cite{crown2}] We have $$\displaystyle \overline{R}((1,a,1,a,\ldots,1,a);q)=2q^{({(a-1)s})/{2}}\,{T}_{a-1}\left(\frac{1+q+q^2+\cdots+q^s}{2q^{{s}/{2}}}\right)$$ where $2s$ is the number of segments of $(1,a,1,a,\ldots,1,a)$. \end{prop}
Note that when $s=2$, we get the polynomial $1+2q+3q^2+\cdots+(a+1)q^a+aq^{a+1}+(a+1)q^{a+2}+aq^{a+3}+\cdots+2q^{2a+1}+q^{2a+2}$, which is not unimodal.
Some other small cases that can be easily calculated by hand are listed in Table \ref{tab:smallcases} below. \begin{table}[ht]
\centering
\begin{tabular}{||c c c||}
\hline
$\alpha$ & Ideal Count & Rank Polynomial \\ [0.5ex]
\hline\hline
$(a,b)$ & $ab+2$ & $1+q[a]_q[b]_q+q^{a+b}$ \\ [0.5ex]
\hline
$(a,1,b,1)$ & $ab+2a+2b+2$ & $[a+2]_q[b+2]_q-q^{a+1}-q^{b+1}$ \\[0.5ex]
\hline
$(a,b,c,d)$ & $abcd+ab+cd+ad+bc+2$ & \begin{tabular}{c}
$1+q[a]_q[d]_q+q[b]_q[c]_q+q^{a+b+1}[c]_q[d]_q$ \\
$ +q^{c+d+1}[a]_q[b]_q+q^{a+b+c+d}$
\end{tabular} \\
\hline
$(a,a,a,a)$ & $a^4+4a^2+2$ & $1+([a]_q)^4+(2q^{2a+1}+2q)([a]_q)^2+q^{4a}$ \\[0.5ex]
\hline \end{tabular}
\caption{Ideal count and rank polynomial for small cases}
\label{tab:smallcases} \end{table}
The cases of $(a,b)$ and $(1,a,1,b)$ are indeed quite straightforward. The lattice formed by the ideals of $\overline{F}(a,b)$ is formed by the direct product of two chains of lengths $a$ and $b$, with an added minimum element (for the empty ideal) and maximum element (for the full ideal): $\hat{0} \oplus C_{a}\times C_{b} \oplus \hat{1}$. Here, the position on $C_{a}$ corresponds to the number of unshared elements in the segment of size $a$, whereas the position on $C_{b}$ describes the number of unshared elements in the segment of size $b$. The natural symmetric chain decomposition on $C_{a}\times C_{b}$ can easily be extended to accommodate the two added nodes, as seen in Figure \ref{fig:48latticechains} for the example of $(5,8)$. We get $ab+2$ ideals with the corresponding rank polynomial $\overline{R}((a,b);q)=1+q[a]_q[b]_q+q^{a+b}$.
\begin{figure}
\caption{The lattice $J((5,8))$ (left) has a natural symmetric chain decomposition (right)}
\label{fig:48latticechains}
\end{figure}
When we have $(1,a,1,b)$, any ideal of size $k$ corresponds to a way of writing $k$ as a sum of two parts $p_1\leq a+1$ and $p_2\leq b+1$ such that $p_1=a+1 \Rightarrow p_2 \neq 0$ and $p_2=b+1 \Rightarrow p_1\neq 0$. The lattice we obtain can be visualised as $C_{a+2} \times C_{b+2}$ with the two opposite corners deleted. When $a\neq b$ this also has a natural symmetric chain decomposition. When $a=b$ however, we have no such decomposition as the resulting rank polynomial is not unimodal, see Figure \ref{fig:1417}. We have $(a+2)(b+2)-2$ ideals, with $\overline{R}((1,a,1,b);q)=[a+2]_q[b+2]_q-q^{a+1}-q^{b+1}$. \begin{figure}
\caption{The lattice $J((1,3,1,6))$ (left) has a natural symmetric chain decomposition (middle) whereas $J((1,4,1,4))$ (right) can not be decomposed into symmetric chains.}
\label{fig:1417}
\end{figure}
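The closed form $\overline{R}((a,b);q)=1+q[a]_q[b]_q+q^{a+b}$ from the first row of Table \ref{tab:smallcases} can be checked numerically. In the sketch below (helper names ours), $q$-integers are represented as coefficient lists and the brute-force ideal count is repeated so that the snippet stands alone.

```python
def circular_rank_sequence(alpha):
    """Rank sequence of the circular fence F-bar(alpha), by brute force."""
    n = sum(alpha)
    covers, i = [], 0
    for k, part in enumerate(alpha):
        for _ in range(part):
            j = (i + 1) % n
            covers.append((i, j) if k % 2 == 0 else (j, i))
            i += 1
    seq = [0] * (n + 1)
    for mask in range(1 << n):
        if all(mask >> lo & 1 or not mask >> hi & 1 for lo, hi in covers):
            seq[bin(mask).count("1")] += 1
    return seq

def qint(a):                       # [a]_q = 1 + q + ... + q^{a-1}
    return [1] * a

def pmul(p, r):                    # product of coefficient lists
    out = [0] * (len(p) + len(r) - 1)
    for i, c in enumerate(p):
        for j, d in enumerate(r):
            out[i + j] += c * d
    return out

def formula_ab(a, b):              # 1 + q [a]_q [b]_q + q^{a+b}
    out = [0] * (a + b + 1)
    out[0] += 1
    out[a + b] += 1
    for i, c in enumerate(pmul(qint(a), qint(b))):
        out[i + 1] += c
    return out

print(circular_rank_sequence((3, 5)) == formula_ab(3, 5))   # True
```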
\section{An Example} \label{sec:example}
In this section, we will consider ways of closing up a fence poset to get a rounded fence through the example of $\alpha=(2,1,1,3)$. The ideas illustrated here will be the backbone of the proofs that will be given in the upcoming sections.
\begin{method}Letting $x_1=x_8$. \end{method} \begin{figure}
\caption{The ideals of $\overline{F}(2,1,1,3)$ that contain $x_1$ $\Longleftrightarrow$ Ideals of $F(1,1,1,2)$}
\end{figure}
This natural choice of setting $x_1=x_8$ gives us the rounded fence poset $\overline{F}(\alpha)$, which has the disadvantage of having only $7$ nodes, so that we do not have all the structure of our original poset included in this circular version. In particular, we lose the ideals that contain only one of $x_1$ and $x_8$.
For the ideals of $F(\alpha)$ that contain $x_1$ but not $x_8$, any node above $x_8$ is also not included, and there is no effect on nodes above $x_1$, so we get a bijection with ideals of $F(1,1)$ as depicted in Figure \ref{fig:2113left} below. \begin{figure}
\caption{The ideals of ${F}(2,1,1,3)$ that contain $x_1$ but not $x_8$ $\Longleftrightarrow$ Ideals of $F(1,1)$}
\label{fig:2113left}
\end{figure}
Similarly, ideals that contain $x_8$ but not $x_1$ are in bijection with ideals of $F(1,2)$; see Figure \ref{fig:2113right}.
\begin{figure}
\caption{The ideals of ${F}(2,1,1,3)$ that contain $x_8$ but not $x_1$ $\Longleftrightarrow$ Ideals of $F(1,2)$}
\label{fig:2113right}
\end{figure}
The connection between the rank polynomials, consequently, is a bit tricky. The ideals of $F(2,1,1,3)$ that do not contain $x_1$ or $x_8$ do not contain any node above them, so we have only two such ideals, the empty one and the one that consists of just $x_4$. Subtracting all these gives us the contribution of the ideals that contain both $x_1$ and $x_8$, which can also be calculated via adding $2$ nodes to each ideal of $F(1,1,1,2)$. \begin{eqnarray*} q^2 R(1,1,1,2)&=& R(2,1,1,3)-q R(1,1) - q R(1,2) - (1+q)\\ &=& (1+3q+5q^2+ 6q^3 + 6q^4 + 5q^5 + 3q^6 + 2q^7 + q^8)-q(1+2q+q^2+q^3)\\&&-q(1+2q+2q^2+q^3+q^4)-(1+q)\\ &=& q^2+3q^3+4q^4+4q^5+3q^6+2q^7+q^8. \end{eqnarray*} These ideals are shifted by $q^{-1}$ to give the ideals of $\overline{F}(2,1,1,3)$ that contain $x_1=x_8$. The two ideals that do not contain $x_1=x_8$ contribute $1+q$, so that we get the following rank symmetric polynomial: \begin{eqnarray*} \overline{R}(2,1,1,3) &=& (q^{-1})(q^2+3q^3+4q^4+4q^5+3q^6+2q^7+q^8)+1+q\\ &=& 1+2q+3q^2+4q^3+4q^4+3q^5+2q^6+q^7. \end{eqnarray*}
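The bookkeeping identity above can be verified mechanically. The sketch below (helper names ours, not from the paper) computes the rank polynomials as coefficient lists by brute-force ideal enumeration and checks the identity term by term.

```python
def rank_sequence(alpha):
    """Rank sequence of the fence F(alpha), by brute force over subsets."""
    n = sum(alpha) + 1
    covers, i = [], 0
    for k, part in enumerate(alpha):
        for _ in range(part):
            covers.append((i, i + 1) if k % 2 == 0 else (i + 1, i))
            i += 1
    seq = [0] * (n + 1)
    for mask in range(1 << n):
        if all(mask >> lo & 1 or not mask >> hi & 1 for lo, hi in covers):
            seq[bin(mask).count("1")] += 1
    return seq

def shift(p, k):                      # multiply by q^k
    return [0] * k + list(p)

def minus(p, r):                      # coefficientwise difference
    out = list(p) + [0] * max(0, len(r) - len(p))
    for i, c in enumerate(r):
        out[i] -= c
    return out

# q^2 R(1,1,1,2) = R(2,1,1,3) - q R(1,1) - q R(1,2) - (1+q)
lhs = shift(rank_sequence((1, 1, 1, 2)), 2)
rhs = minus(minus(minus(rank_sequence((2, 1, 1, 3)),
                        shift(rank_sequence((1, 1)), 1)),
                  shift(rank_sequence((1, 2)), 1)),
            [1, 1])
print(lhs == rhs)   # True
```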
Adding the relation that $x_1$ is above (or below) $x_8$ allows us to get a circular fence with the same number of nodes.
\begin{method} \label{method:connect} Connecting $x_1$ and $x_8$.\end{method}
\begin{figure}
\caption{The circular fence $\overline{F}(3,1,1,3)$ given by assuming $x_1$ is above $x_8$}
\end{figure}
Note that the ideals of $\overline{F}(3,1,1,3)$ give all ideals $I$ of $F(2,1,1,3)$ satisfying $x_1 \in I \Rightarrow x_8 \in I$. The ones that are left over are exactly the ones that contain $x_1$ but not $x_8$, which correspond to the ideals of $F(1,1)$ as discussed above (see Figure \ref{fig:2113left}). The corresponding rank polynomials are also related: \begin{eqnarray*} R(2,1,1,3)&=& \overline{R}(3,1,1,3)+ qR(1,1)\\ &=& (1+2q+3q^2+ 5q^3 + 5q^4 + 5q^5 + 3q^6 + 2q^7 + q^8)+q(1+2q+q^2+q^3)\\ &=& 1+3q+5q^2+ 6q^3 + 6q^4 + 5q^5 + 3q^6 + 2q^7 + q^8. \end{eqnarray*}
Alternatively we can add a new node $x_0$ to complete the cycle.
\begin{method}Adding a new $x_0$ above $x_1$ and $x_8$. \end{method} \begin{figure}
\caption{The ideals of $\overline{F}(2,1,1,3,1,1)$ that contain $x_0$ $\Longleftrightarrow$ Ideals of $F(1,1,1,2)$}
\end{figure}
Adding $x_0$ gives us the rounded fence poset $P_T:=\overline{F}(2,1,1,3,1,1)$ with 9 nodes. Ideals of our original poset are exactly the ideals of $P_T$ that do not contain $x_0$. Any ideal that contains $x_0$ also contains $x_1$ and $x_8$ but puts no other restrictions on the inclusion of the other nodes, so these ideals are in bijection with those of $F(1,1,1,2)$. On the rank polynomials side, we get the identity:
\begin{eqnarray*} &&R(2,1,1,3)= \overline{R}(2,1,1,3,1,1)-q^3 R(1,1,1,2)\\ &&= (1+3q+5q^2+ 7q^3 + 9q^4 + 9q^5 + 7q^6 + 5q^7 + 3q^8+q^9 )-q^3(1+3q+4q^2+4q^3+3q^4+2q^5+q^6)\\ &&= 1+3q+5q^2+ 6q^3 + 6q^4 + 5q^5 + 3q^6 + 2q^7 + q^8. \end{eqnarray*}
\begin{method}Adding a new $x_0$ below $x_1$ and $x_8$.\end{method}
\begin{figure}
\caption{The ideals of $\overline{F}(3,1,1,4)$ that do not contain $x_0$ $\Longleftrightarrow$ Ideals of $F(1)$}
\end{figure}
Adding $x_0$ below gives us the rounded fence poset $P_B:=\overline{F}(3,1,1,4)$ with 9 nodes. Ideals that contain $x_0$ are in bijection with our original poset. Any ideal that does not contain $x_0$ can not contain anything above either, and there are only two such ideals, the empty ideal and the rank $1$ ideal that only contains $x_4$:
\begin{eqnarray*} R(2,1,1,3)&=& (q^{-1})( \overline{R}(3,1,1,4)- R(1))\\ &=& (q^{-1}) ((1+2q+3q^2+ 5q^3 + 6q^4 + 6q^5 + 5q^6 + 3q^7 + 2q^8+q^9 )-(1+q))\\ &=& 1+3q+5q^2+ 6q^3 + 6q^4 + 5q^5 + 3q^6 + 2q^7 + q^8. \end{eqnarray*}
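The identities produced by Methods 2--4 can all be checked by the same brute-force enumeration. The sketch below (helper names ours) handles ordinary and circular fences uniformly: taking the covers modulo $n=\sum\alpha_i$ wraps the last edge around, while $n=\sum\alpha_i+1$ leaves the fence open.

```python
def _ideal_counts(covers, n):
    """Count lower ideals of each size for a poset on nodes 0..n-1."""
    seq = [0] * (n + 1)
    for mask in range(1 << n):
        if all(mask >> lo & 1 or not mask >> hi & 1 for lo, hi in covers):
            seq[bin(mask).count("1")] += 1
    return seq

def _covers(alpha, n):
    """Fence covers on n nodes; n = sum(alpha) wraps the last edge
    around (circular fence), n = sum(alpha)+1 leaves it open."""
    covers, i = [], 0
    for k, part in enumerate(alpha):
        for _ in range(part):
            covers.append((i, (i + 1) % n) if k % 2 == 0
                          else ((i + 1) % n, i))
            i += 1
    return covers

def R(alpha):                         # ordinary fence F(alpha)
    n = sum(alpha) + 1
    return _ideal_counts(_covers(alpha, n), n)

def Rbar(alpha):                      # circular fence F-bar(alpha)
    n = sum(alpha)
    return _ideal_counts(_covers(alpha, n), n)

def combine(p, r, sign=1, k=0):       # p + sign * q^k * r, zeros stripped
    out = list(p) + [0] * max(0, k + len(r) - len(p))
    for i, c in enumerate(r):
        out[i + k] += sign * c
    while out and out[-1] == 0:
        out.pop()
    return out

target = R((2, 1, 1, 3))
print(combine(Rbar((3, 1, 1, 3)), R((1, 1)), 1, 1) == target)                # Method 2
print(combine(Rbar((2, 1, 1, 3, 1, 1)), R((1, 1, 1, 2)), -1, 3) == target)   # Method 3
print(combine(Rbar((3, 1, 1, 4)), [1, 1], -1, 0) == combine([], target, 1, 1))  # Method 4
```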
\section{Rank Symmetry in Circular Fences}
In this section, we will prove that the rank polynomial for circular fences is always symmetric.
\begin{thm} \label{thm:sym} For any composition $\alpha$ of $n$ with an even number of segments, the rank polynomial of $\alpha$ is symmetric with center of symmetry at $n/2$. \end{thm}
As we have already shown that symmetry holds in the case $(k,1,1,\ldots,1)$ for any $k$, for any given number of beads there is a case where the rank polynomial is known to be symmetric, so it suffices to show that moving beads around does not break the symmetry. The constructions given in the previous section will be our main tool; with them we will go back and forth between the circular and the non-circular cases. Consider the following statements:
\begin{itemize} \setlength\itemsep{3mm}
\item[ ]
\item[ \textbf{A(n):}] Given a composition $\beta=(\beta_1,\beta_2,\ldots,\beta_{2s})$ of $n-1$, let $\mathfrak{I}_L$ be the set of ideals of $F(\beta)$ that include the leftmost node $x_1$, but not the rightmost node $x_n$. Similarly let $\mathfrak{I}_R$ be the set of ideals of $F(\beta)$ that include the rightmost node but not the leftmost. The polynomial $$ \displaystyle \sum_{I\in \mathfrak{I}_L }q^{|I|}-\sum_{J\in \mathfrak{I}_R }q^{|J|}$$ is symmetric with center of symmetry $n/2$.
\item[ \textbf{B(n):}] Given a composition $\beta=(\beta_1,\beta_2,\ldots,\beta_{2s})$ of $n-1$, where $\beta_1$ and $\beta_{2s}$ are allowed to be $0$ with the convention that when $\beta_1$ is $0$, we get a fence that starts with a down step instead of an up step. Then \[R((\beta_1+1,\beta_2,\ldots,\beta_{2s});q)-R((\beta_1,\beta_2,\ldots,\beta_{2s}+1);q)\] is symmetric around $n/2$.
\item [\textbf{C(n):}] Given a composition $\beta=(\beta_1,\beta_2,\ldots,\beta_{2s})$ of $n$, the following difference is symmetric around $(n+1)/2$
\[\bar{R}((\beta_1+1,\beta_2,\ldots,\beta_{2s});q)-\bar{R}((\beta_1,\beta_2,\ldots,\beta_{2s}+1);q).\]
\item[ \textbf{D(n):}] Given a composition $\beta = (\beta_1, \ldots, \beta_{2s})$ of $n$, the rank polynomial of the associated circular fence poset $\bar{R}(\beta)$ is symmetric. \end{itemize}
We will prove the rank symmetry of circular fences by showing that $\mathbf{D(n)} \Rightarrow \mathbf{A(n)} \Rightarrow \mathbf{B(n)} \Rightarrow \mathbf{C(n+1)}$, which in turn implies $\mathbf{D(n+2)}$. Note that as a byproduct we also obtain the statements $\mathbf{A(n)}$ and $\mathbf{B(n)}$ about the structure of non-circular fences.
\begin{proof}[Proof of Theorem \ref{thm:sym}] We will use induction on the size of the composition. If $\alpha$ is a composition of size at most $3$ with an even number of parts, we have only three choices, each of which gives a symmetric rank polynomial: \[\overline{R}((1,1);q)=1+q+q^2, \qquad \overline{R}((2,1);q)=\overline{R}((1,2);q)=1+q+q^2 + q^3.\]
Now, let us assume that $\mathbf{D(n)}$ holds, that is, for any composition $\alpha = (\alpha_1, \ldots, \alpha_{2s})$ of $n$, the rank polynomial $\bar{R}(\alpha)$ is symmetric.
\begin{claim}\label{claim1} $\mathbf{A(n)}$ holds. \end{claim}
\begin{claimproof} We will consider two natural circular fences related to $\beta$: $\overline{F}((\beta_1+1,\beta_2,\ldots,\beta_{2s}))$, given by adding the relation $x_{n}\preceq x_1$ to the fence of $\beta$, and $\overline{F}((\beta_1,\beta_2,\ldots,\beta_{2s}+1))$, given by adding the relation $x_{n}\succeq x_1$ to the fence of $\beta$. Let us denote their rank polynomials by $\overline{\mathfrak{R}}_L(q)$ and $\overline{\mathfrak{R}}_R(q)$ respectively.
Note that \begin{eqnarray*}
R(\beta;q)&=&\overline{\mathfrak{R}}_L(q)+\sum_{\substack{I\trianglelefteq F(\beta)\\x_{1} \in I,\, x_n \notin I} }q^{|I|}=\overline{\mathfrak{R}}_R(q)+\sum_{\substack{J\trianglelefteq F(\beta)\\x_{n} \in J,\, x_{1} \notin J} }q^{|J|}. \end{eqnarray*} Consequently \begin{eqnarray*}
\sum_{I\in \mathfrak{I}_L }q^{|I|}-\sum_{J\in \mathfrak{I}_R }q^{|J|}&=&\overline{\mathfrak{R}}_R(q)-\overline{\mathfrak{R}}_L(q). \end{eqnarray*} Both rank polynomials belong to circular fences with $n$ nodes, which are symmetric around $n/2$ by our hypothesis. \end{claimproof}
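Since all posets involved are tiny, both the identity $\sum_{I\in \mathfrak{I}_L }q^{|I|}-\sum_{J\in \mathfrak{I}_R }q^{|J|}=\overline{\mathfrak{R}}_R(q)-\overline{\mathfrak{R}}_L(q)$ and the symmetry asserted in $\mathbf{A(n)}$ can be verified by brute force for small $\beta$. The following Python sketch is an illustration only, with our own encoding (nodes numbered $0,1,\ldots$ from the left, lower ideals as bitmasks), not part of the proof.

```python
def fence_covers(alpha):
    """Cover pairs (a, b), meaning a < b, of the fence F(alpha);
    nodes are 0..sum(alpha), segments alternately go up and down."""
    covers, pos, up = [], 0, True
    for seg in alpha:
        for _ in range(seg):
            covers.append((pos, pos + 1) if up else (pos + 1, pos))
            pos += 1
        up = not up
    return covers

def circular_covers(alpha):
    """Covers of the circular fence; nodes 0..sum(alpha)-1, the last
    segment wraps back to node 0."""
    n = sum(alpha)
    covers, pos, up = [], 0, True
    for seg in alpha:
        for _ in range(seg):
            a, b = pos % n, (pos + 1) % n
            covers.append((a, b) if up else (b, a))
            pos += 1
        up = not up
    return covers

def ideals(covers, m):
    """All lower ideals of a poset on m nodes, encoded as bitmasks."""
    return [I for I in range(1 << m)
            if all(I >> a & 1 or not (I >> b & 1) for a, b in covers)]

def rank_seq(covers, m):
    seq = [0] * (m + 1)
    for I in ideals(covers, m):
        seq[bin(I).count("1")] += 1
    return seq

def a_difference(beta):
    """Coefficients of the polynomial in A(n): ideals of F(beta)
    containing x_1 but not x_n, minus those containing x_n but not x_1."""
    n = sum(beta) + 1          # F(beta) has n nodes when beta has size n - 1
    d = [0] * (n + 1)
    for I in ideals(fence_covers(beta), n):
        lo, hi = I & 1, I >> (n - 1) & 1
        if lo != hi:
            d[bin(I).count("1")] += 1 if lo else -1
    return d

for beta in [(1, 1), (2, 1), (1, 2), (2, 2), (1, 1, 1, 1), (2, 1, 1, 2)]:
    n = sum(beta) + 1
    d = a_difference(beta)
    assert d == d[::-1]                  # symmetric around n/2
    left = (beta[0] + 1,) + beta[1:]     # circular fence giving R_L
    right = beta[:-1] + (beta[-1] + 1,)  # circular fence giving R_R
    diff = [y - x for x, y in zip(rank_seq(circular_covers(left), n),
                                  rank_seq(circular_covers(right), n))]
    assert d == diff                     # the identity proved above
```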
\begin{claim} \label{claim2} $\mathbf{B(n)}$ holds.
\end{claim} \begin{claimproof} Let $\mathfrak{F}_L=F((\beta_1+1,\beta_2,\ldots,\beta_{2s}))$ be the fence with $n+1$ nodes given by increasing the (possibly empty) leftmost segment of $\beta$ by $1$. Similarly, let $\mathfrak{F}_R=F((\beta_1,\beta_2,\ldots,\beta_{2s}+1))$.
We want to show that the following polynomial is symmetric around $n/2$:
$$ \displaystyle \sum_{I \trianglelefteq \mathfrak{F}_L }q^{|I|}-\sum_{J \trianglelefteq \mathfrak{F}_R }q^{|J|}.$$ We will make use of the circular fence $\overline{\mathfrak{RF}}$ for $(\beta_1+1,\beta_2,\ldots,\beta_{2s}+1)$. Note that we can obtain $\overline{\mathfrak{RF}}$ from $\mathfrak{F}_L$ by adding the relation $x_1 \preceq x_{n+1}$ (see Method~\ref{method:connect} from Section~\ref{sec:example} for an example) so that:
$$\displaystyle \sum_{I\trianglelefteq \overline{\mathfrak{RF}} } q^{|I|} =\sum_{I\trianglelefteq \mathfrak{F}_L }q^{|I|}-\sum_{\substack{I\trianglelefteq \mathfrak{F}_L\\x_{n+1} \in I,\, x_1 \notin I} }q^{|I|}.$$ Similarly we have:
$$\displaystyle \sum_{I\trianglelefteq \overline{\mathfrak{RF}} } q^{|I|} =\sum_{J\trianglelefteq \mathfrak{F}_R }q^{|J|}-\sum_{\substack{J\trianglelefteq \mathfrak{F}_R\\x_1 \in J,\, x_{n+1} \notin J} }q^{|J|}.$$ This yields
\begin{eqnarray*}
\displaystyle \sum_{I \trianglelefteq \mathfrak{F}_L }q^{|I|}-\sum_{J \trianglelefteq \mathfrak{F}_R }q^{|J|} &=&\sum_{\substack{J\trianglelefteq \mathfrak{F}_R\\x_1 \in J,\, x_{n+1} \notin J} }q^{|J|}-\sum_{\substack{I\trianglelefteq \mathfrak{F}_L\\x_{n+1} \in I,\, x_1 \notin I} }q^{|I|} \end{eqnarray*} We observe that
\[\sum_{\substack{I\trianglelefteq \mathfrak{F}_L\\x_{n+1} \in I,\, x_1 \notin I} }q^{|I|} = \sum_{\substack{I\trianglelefteq F(\beta)\\x_{n} \in I,\, x_1 \notin I} }q^{|I|}. \qquad \sum_{\substack{J\trianglelefteq \mathfrak{F}_R\\x_1 \in J,\, x_{n+1} \notin J} }q^{|J|} = \sum_{\substack{J\trianglelefteq F(\beta)\\x_1 \in J,\, x_{n} \notin J} }q^{|J|}.\] This is simply because if $x_1 \not\in I \trianglelefteq \mathfrak{F}_L$, then no nodes from the first segment can be in the ideal and this sets up a bijection between the two sets of ideals in the left equation above. The second equation may be similarly justified. We conclude that \begin{eqnarray*}
\sum_{\substack{J\trianglelefteq \mathfrak{F}_R\\x_1 \in J,\, x_{n+1} \notin J} }q^{|J|}-\sum_{\substack{I\trianglelefteq \mathfrak{F}_L\\x_{n+1} \in I,\, x_1 \notin I} }q^{|I|} = \sum_{\substack{J\trianglelefteq F(\beta)\\x_1 \in J,\, x_{n} \notin J} }q^{|J|}-\sum_{\substack{I\trianglelefteq F(\beta)\\x_{n} \in I,\, x_1 \notin I} }q^{|I|}. \end{eqnarray*}
By Claim \ref{claim1}, this difference is symmetric with center of symmetry at $n/2$. \end{claimproof}
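The claim can likewise be verified computationally for small $\beta$, including the convention that a leading $0$ segment starts the fence with a down step. A hedged Python sketch (our own bitmask encoding, purely illustrative):

```python
def fence_rank_seq(alpha):
    """Rank sequence of the fence F(alpha) on sum(alpha)+1 nodes;
    a leading 0 segment makes the fence start with a down step."""
    m = sum(alpha) + 1
    covers, pos, up = [], 0, True
    for seg in alpha:
        for _ in range(seg):
            covers.append((pos, pos + 1) if up else (pos + 1, pos))
            pos += 1
        up = not up
    seq = [0] * (m + 1)
    for I in range(1 << m):
        if all(I >> a & 1 or not (I >> b & 1) for a, b in covers):
            seq[bin(I).count("1")] += 1
    return seq

def b_difference(beta):
    """R((b1+1, ..., b2s)) - R((b1, ..., b2s+1)) as a coefficient list."""
    left = (beta[0] + 1,) + beta[1:]
    right = beta[:-1] + (beta[-1] + 1,)
    return [x - y for x, y in zip(fence_rank_seq(left),
                                  fence_rank_seq(right))]

for beta in [(1, 1), (2, 1), (1, 2), (2, 2), (0, 2, 1, 1), (1, 1, 1, 1)]:
    n = sum(beta) + 1
    d = b_difference(beta)                 # indices 0..n+1
    assert d[0] == 0 and d[n + 1] == 0     # empty and full ideals cancel
    assert d[:n + 1] == d[:n + 1][::-1]    # symmetric around n/2
```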
\begin{claim} \label{claim3} $\mathbf{C(n+1)}$ holds.
\end{claim} \begin{claimproof} Let $\alpha = (\alpha_1, \ldots, \alpha_{2s})$ be a composition of $n+1$ and let $\alpha_L = (\alpha_1+1, \ldots, \alpha_{2s})$ and $\alpha_R = (\alpha_1, \ldots, \alpha_{2s}+1)$. Let $\mathfrak{I}_L$ be the set of ideals of $\overline{F}(\alpha_L)$ and $\mathfrak{I}_R$ be the set of ideals of $\overline{F}(\alpha_R)$. The ideals that do not contain $x_1=x_{n+3}$ are in bijection, as they contain no nodes from the first or last segments. That means we can limit our attention to the ideals that include $x_1=x_{n+3}$, and these can be seen as ideals of regular fences. Let $\tilde{\alpha}_L = (\alpha_1, \ldots, \alpha_{2s}-1)$ and $\tilde{\alpha}_R = (\alpha_1-1, \ldots, \alpha_{2s})$. We claim that
\[ \sum_{\substack{J\trianglelefteq \bar{\mathfrak{F}}(\alpha_L)\\x_1 = x_{n+3} \in J} }q^{|J|} = q\,\sum_{J\trianglelefteq \mathfrak{F}(\tilde{\alpha}_L) }q^{|J|}, \qquad \sum_{\substack{J\trianglelefteq \bar{\mathfrak{F}}(\alpha_R)\\x_1 = x_{n+3} \in J} }q^{|J|} = q\,\sum_{J\trianglelefteq \mathfrak{F}(\tilde{\alpha}_R) }q^{|J|}. \]
This is because the ideals of $\overline{F}(\alpha_L)$ that contain $x_1$ correspond exactly to ideals of $F((\alpha_1,\alpha_2,\ldots,\alpha_{2s}-1))$, only shifted by $q$. The other equality is similarly justified. Here, we slightly abuse notation to permit the first or last segments to possibly be $0$, something that claim $\mathbf{B(n)}$ permits us to do. Consequently \begin{eqnarray*}
\sum_{J\trianglelefteq \bar{\mathfrak{F}}(\alpha_L)}q^{|J|} - \sum_{J\trianglelefteq \bar{\mathfrak{F}}(\alpha_R) }q^{|J|} =& \sum_{\substack{J\trianglelefteq \bar{\mathfrak{F}}(\alpha_L)\\x_1 = x_{n+3} \in J} }q^{|J|} - \sum_{\substack{J\trianglelefteq \bar{\mathfrak{F}}(\alpha_R)\\x_1 = x_{n+3} \in J} }q^{|J|} \\
=& q\left(\sum_{J\trianglelefteq \mathfrak{F}(\tilde{\alpha}_L) }q^{|J|} - \sum_{J\trianglelefteq \mathfrak{F}(\tilde{\alpha}_R) }q^{|J|}\right). \end{eqnarray*}
In the final expressions, we have compositions of $n$, so by Claim \ref{claim2} the difference between the generating polynomials of their ideals is symmetric around $n/2$. Shifting by $q$ gives a rank sequence symmetric around $(n+2)/2$ as desired. \end{claimproof}
\begin{claim} \label{claim4} $\mathbf{D(n+2)}$ holds.
\end{claim} \begin{claimproof} Let $\alpha = (\alpha_1, \ldots, \alpha_{2s})$ be a composition of $n+2$. Claim \ref{claim3} says that moving an element across a valley (as long as the number of parts does not change) preserves symmetry. Taking the vertical reflection of the poset yields another circular fence poset, corresponding to a composition we denote $\alpha^{\ast}$, whose rank polynomial is the reflection of the original rank polynomial: \begin{eqnarray}\label{flip} \bar{R}(\alpha) = \sum_0^{n+2} r_k q^k \Longrightarrow \bar{R}(\alpha^{\ast}) = \sum_0^{n+2} r_{n+2-k} q^k.\end{eqnarray} This is because for any $k$, taking the set complement gives a bijection between lower ideals of size $k$ of $\bar{F}(\alpha)$ and lower ideals of size $n+2-k$ of $\bar{F}(\alpha^{\ast})$.
Claim \ref{claim3} combined with this reflection then shows that moving an element of $\alpha$ across a peak preserves symmetry as well. Applying these operations consecutively, we may transform $\alpha$ to a composition of the form $(k, 1, \ldots, 1)$ with $2s$ parts. By Lemma \ref{lemma:basis}, this has a symmetric rank polynomial. \end{claimproof}
As noted above, $\mathbf{D(2)}$ and $\mathbf{D(3)}$ are true by direct computations and the implications \[\mathbf{D(n)} \implies \mathbf{A(n)} \implies \mathbf{B(n)} \implies \mathbf{C(n+1)} \implies \mathbf{D(n+2)},\]
yield our theorem for all values of $n$.
\end{proof}
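As an illustration (not a substitute for the induction), the theorem can be confirmed by brute force for small compositions. The Python sketch below uses our own encoding: nodes of $\overline{F}(\alpha)$ are numbered $0,\ldots,n-1$ around the cycle and lower ideals are bitmasks.

```python
def circular_rank_seq(alpha):
    """Rank sequence of the circular fence poset of alpha; nodes 0..n-1
    sit on a cycle and the last (down) segment wraps back to node 0."""
    n = sum(alpha)
    covers, pos, up = [], 0, True
    for seg in alpha:
        for _ in range(seg):
            a, b = pos % n, (pos + 1) % n
            covers.append((a, b) if up else (b, a))  # (lower, upper)
            pos += 1
        up = not up
    seq = [0] * (n + 1)
    for I in range(1 << n):   # brute force over subsets: fine for small n
        if all(I >> a & 1 or not (I >> b & 1) for a, b in covers):
            seq[bin(I).count("1")] += 1
    return seq

# the base cases of the induction
assert circular_rank_seq((1, 1)) == [1, 1, 1]
assert circular_rank_seq((2, 1)) == [1, 1, 1, 1]
assert circular_rank_seq((1, 2)) == [1, 1, 1, 1]

# symmetry for some larger compositions with an even number of segments
for alpha in [(2, 2), (3, 1), (2, 1, 1, 2), (1, 2, 1, 3),
              (2, 1, 3, 1), (1, 1, 1, 1, 1, 1)]:
    seq = circular_rank_seq(alpha)
    assert seq == seq[::-1]
```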
\begin{corollary} The polynomial $\overline{R}(\alpha;q)$ is invariant under cyclic shifts of segments of $\alpha$, so it is well defined over cyclic compositions. \label{cor:cyclicshift} \end{corollary} \begin{proof} If $\alpha = (\alpha_1, \ldots, \alpha_{2s})$ is a composition of $n$ and $\beta = (\alpha_2, \ldots, \alpha_{2s}, \alpha_1)$, then their rank polynomials are mirror images, \[\bar{R}(\alpha) = \sum_0^n r_k q^k \Longrightarrow \bar{R}(\beta) = \sum_0^n r_{n-k} q^k,\] as noted above in (\ref{flip}). Theorem \ref{thm:sym} yields the result.
\end{proof} \section{Proof of Main Theorem}
Given a rank sequence $(r_0,r_1,\ldots,r_{n+1})$, the properties of being top interlacing, bottom interlacing, or symmetric and unimodal are determined by the relationship between elements whose indices are equidistant from $(n+1)/2$, which we will call $\text{mid}(\alpha)$. In all three cases, if $|j-\text{mid}(\alpha)|>|i-\text{mid}(\alpha)|$, then $r_j\leq r_i$.
To this end, we will partition the inequalities that correspond to interlacing into two parts; the part that holds for both bottom and top interlacing sequences and the one that separates bottom and top interlacing sequences.
\begin{eqnarray*} \text{(ineqA)} & & r_0 \le r_{n},\, r_1 \le r_{n-1},\ldots \qquad \quad r_{n+1} \le r_{1},\, r_{n} \le r_{2},\ldots\\ \text{(ineqB)}& & r_0\ge r_{n+1},\, r_1\ge r_{n},\, \ldots\\ \text{(ineqT)}& & r_0\le r_{n+1},\,r_1\le r_{n},\, \ldots \end{eqnarray*}
Bottom interlacing sequences are ones that satisfy (ineqA) and (ineqB), top interlacing sequences are ones that satisfy (ineqA) and (ineqT), and symmetric unimodal ones are the ones that satisfy all three sets of inequalities.
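These inequality families translate directly into predicates on a rank sequence $(r_0,\ldots,r_{n+1})$. The Python sketch below is one such encoding (with $n$ recovered as the length of the sequence minus $2$), used here purely as an illustration.

```python
def ineq_a(r):
    """(ineqA): r_k <= r_{n-k} and r_{n+1-k} <= r_{1+k}."""
    n = len(r) - 2
    return (all(r[k] <= r[n - k] for k in range((n + 1) // 2)) and
            all(r[n + 1 - k] <= r[1 + k] for k in range((n + 1) // 2)))

def ineq_b(r):
    """(ineqB): r_k >= r_{n+1-k}."""
    n = len(r) - 2
    return all(r[k] >= r[n + 1 - k] for k in range((n + 2) // 2))

def ineq_t(r):
    """(ineqT): r_k <= r_{n+1-k}."""
    n = len(r) - 2
    return all(r[k] <= r[n + 1 - k] for k in range((n + 2) // 2))

def is_bottom(r):
    return ineq_a(r) and ineq_b(r)

def is_top(r):
    return ineq_a(r) and ineq_t(r)

def is_symmetric_unimodal(r):
    return ineq_a(r) and ineq_b(r) and ineq_t(r)

# [1, 2, 1, 1] is the rank sequence of F((1,1)): bottom but not top
assert is_bottom([1, 2, 1, 1]) and not is_top([1, 2, 1, 1])
assert is_top([1, 1, 2, 1]) and not is_bottom([1, 1, 2, 1])
assert is_symmetric_unimodal([1, 2, 2, 1])
```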
\begin{proof}[Proof of Theorem \ref{thm:main}] Assume that the theorem holds for all compositions of length at most $n-1$. Let $\alpha$ be a composition of size $n$. \begin{claim} The rank sequence $r(\alpha)=(r_0,r_1,\ldots,r_{n+1})$ satisfies (ineqA). \end{claim} \begin{claimproof} Following Methods $3$ and $4$ from Section \ref{sec:example}, we will consider two circular fences obtained by adding a new node to the fence of $\alpha$. Let $\overline{F}(\alpha_T)$ be given by adding a node $x_{0}$ lying above both $x_1$ and $x_{n+1}$. Let $(t_0,t_1,\ldots,t_{n+2})$ be the corresponding rank sequence, which is symmetric by Theorem \ref{thm:sym}.
Note that the ideals of the fence poset of $\alpha$ correspond exactly to the ideals of $\overline{F}(\alpha_T)$ that do not contain $x_0$. The ideals that contain $x_0$ also contain $x_1, x_{n+1}$ and anything that is lying below them.
\begin{eqnarray*}\displaystyle \overline{R}(\alpha_T;q)&=&{R}(\alpha;q) +\sum_{\substack{I\trianglelefteq \overline{F}(\alpha_T)\\x_{0} \in I}}q^{|I|},\\ {R}(\alpha;q)&=&\overline{R}(\alpha_T;q)-q^k R(\beta;q), \end{eqnarray*} where $\beta$ is obtained from $\alpha$ by deleting $x_1$, $x_{n+1}$ and anything below them, and $k$ is the number of nodes deleted plus $1$. Note that by the induction hypothesis, $R(\beta;q)$ is bottom or top interlacing, with $n+2-k$ nodes. For each symmetric pair $t_i$ and $t_{n+2-i}$, as ${n+2-i}$ is closer to the shifted center $k+(n+2-k)/2$, the amount subtracted from $t_{n+2-i}$ is at least as large as the amount subtracted from $t_i$, implying $r_i \geq r_{n+2-i}$ for $1\leq i \leq \lceil n/2\rceil$.
Similarly we can add a node $x_0$ that lies below both $x_1$ and $x_{n+1}$. Let $\overline{F}(\alpha_B)$ be the corresponding circular fence poset with rank sequence $(b_0,b_1,\ldots,b_{n+2})$. By the same reasoning as above we get:
\begin{eqnarray*}\displaystyle \overline{R}(\alpha_B;q)&=&q{R}(\alpha;q) +\sum_{\substack{I\trianglelefteq \overline{F}(\alpha_B)\\x_{0} \notin I}}q^{|I|},\\ q{R}(\alpha;q)&=&\overline{R}(\alpha_B;q)- R(\beta;q), \end{eqnarray*} where $\beta$ is obtained from $\alpha$ by deleting $x_1$, $x_{n+1}$ and anything above them. Now the center is shifted left, so that the amount subtracted from $b_{n+2-i}$ is less than or equal to the amount subtracted from $b_i$. As we shifted by $q$, this means $r_{i-1}\leq r_{n+1-i}$ for $1\leq i \leq \lceil n/2\rceil$. \end{claimproof}
To finish our proof, we will determine whether the rank sequence is bottom interlacing, top interlacing, or symmetric. To this end, we will add the new relation $x_1 \succeq x_{n+1}$, as in Method $2$ from Section \ref{sec:example}. \begin{claim} If $\alpha$ has an even number of segments, $R(\alpha;q)$ is bottom interlacing. \end{claim}
\begin{claimproof} Let us add the relation $x_1\succeq x_{n+1}$ to $\alpha$. The resulting circular fence contains all ideals of ${F}(\alpha)$ satisfying $x_1 \in I \Rightarrow x_{n+1} \in I$. The ones that are left over are exactly the ones that contain $x_1$ but not $x_{n+1}$. The inclusion of $x_1$ is equivalent to deleting the node $x_1$ and shifting by $q$ and not including $x_{n+1}$ is equivalent to deleting the node $x_{n+1}$ as well as anything above it. What we are left with is the rank polynomial of a smaller composition $\beta$ shifted by $q$. As we have an even number of parts, there is at least one node above $x_{n+1}$ (See Figure \ref{fig:2113left} for an example) which means $\beta$ has at most $n-2$ nodes. So $\text{mid}(\beta)$, even when shifted by $q$ lies strictly to the left of $\frac{n+1}{2}$.
Let $(c_0,c_1,\ldots,c_{n+1})$ be the rank sequence of the circular fence, symmetric by Theorem \ref{thm:sym}. In particular, for each $i\leq \frac{n+1}{2}$, $c_i=c_{n+1-i}$. Adding the rank sequence for $\beta$ to this gives the rank sequence of $\alpha$. But $\text{mid}(\beta)$ lying strictly to the left of $\frac{n+1}{2}$ means for each $i$, what we add to $c_i$ is at least as large as what we add to $c_{n+1-i}$, giving us $r_i\geq r_{n+1-i}$. \end{claimproof}
\begin{claim} If $\alpha$ has an odd number of segments, $R(\alpha;q)$ is bottom interlacing (respectively top interlacing) if and only if $R(\alpha';q)$ is bottom interlacing (respectively top interlacing) where $\alpha'=(\alpha_1-1,\alpha_2,\alpha_3,\ldots,\alpha_{s-1},\alpha_s -1)$ is the composition of $n-2$ obtained from $\alpha$ by subtracting $1$ from first and last segments (the fence of $\alpha'$ starts with a downwards segment if the first part is zero).
\end{claim} \begin{claimproof} Again, we consider the circular fence obtained by adding the relation $x_1\succeq x_{n+1}$ to $\alpha$. The ideals of the circular fence are in bijection with ideals of ${F}(\alpha)$ satisfying $x_1 \in I \Rightarrow x_{n+1} \in I$. We will count the ones containing $x_1$ but not $x_{n+1}$ separately. As we have an odd number of parts, $x_1$ is below $x_2$ and $x_{n+1}$ is above $x_{n}$, so that the ideals containing $x_1$ but not $x_{n+1}$ are in bijection with the ideals of $\alpha'$ described above, with the rank sequence shifted by one. Deleting two nodes and shifting by one means that $\text{mid}(\alpha')=\text{mid}(\alpha)=\frac{n+1}{2}$. The rank polynomial of the circular fence is symmetric around $\frac{n+1}{2}$. Adding a bottom interlacing (respectively top interlacing) polynomial with the same $\text{mid}$ value gives us a bottom interlacing (resp.\ top interlacing) polynomial. \end{claimproof}
Note that in the case of odd parts, if $\alpha_1>\alpha_s$ then removing pairs from both ends eventually gives us a fence with an even number of parts that is bottom interlacing, so $r(\alpha)$ is bottom interlacing. If $\alpha_1<\alpha_s$, looking at $\alpha^r=(\alpha_s,\alpha_{s-1},\ldots,\alpha_1)$ reverses the rank sequence, so $r(\alpha)$ is top interlacing. When $\alpha_1=\alpha_s$, removing pairs of nodes from both ends eventually gives us the fence for $(\alpha_2,\alpha_3,\ldots,\alpha_{s-1})$ turned upside down, whose rank sequence is the reverse of $r(\alpha_2,\alpha_3,\ldots,\alpha_{s-1})$. \end{proof}
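The case analysis above can be spot-checked by brute force on small fences. In the Python sketch below (our own encoding: nodes numbered from the left, ideals as bitmasks), even-part compositions test as bottom interlacing, while odd-part ones follow the comparison between $\alpha_1$ and $\alpha_s$; the interlacing predicates implement (ineqA), (ineqB), (ineqT) from the beginning of the section.

```python
def fence_rank_seq(alpha):
    """Rank sequence of the fence F(alpha) on sum(alpha)+1 nodes."""
    m = sum(alpha) + 1
    covers, pos, up = [], 0, True
    for seg in alpha:
        for _ in range(seg):
            covers.append((pos, pos + 1) if up else (pos + 1, pos))
            pos += 1
        up = not up
    seq = [0] * (m + 1)
    for I in range(1 << m):
        if all(I >> a & 1 or not (I >> b & 1) for a, b in covers):
            seq[bin(I).count("1")] += 1
    return seq

def ineq_a(r):
    n = len(r) - 2
    return (all(r[k] <= r[n - k] for k in range((n + 1) // 2)) and
            all(r[n + 1 - k] <= r[1 + k] for k in range((n + 1) // 2)))

def is_bottom(r):
    n = len(r) - 2
    return ineq_a(r) and all(r[k] >= r[n + 1 - k]
                             for k in range((n + 2) // 2))

def is_top(r):
    n = len(r) - 2
    return ineq_a(r) and all(r[k] <= r[n + 1 - k]
                             for k in range((n + 2) // 2))

for alpha in [(1, 1), (2, 1, 1, 2), (1, 2, 1, 1)]:  # even number of parts
    assert is_bottom(fence_rank_seq(alpha))
for alpha in [(2, 1, 1), (3, 1, 2)]:                # odd parts, a_1 > a_s
    assert is_bottom(fence_rank_seq(alpha))
for alpha in [(1, 1, 2), (2, 1, 3)]:                # odd parts, a_1 < a_s
    assert is_top(fence_rank_seq(alpha))
```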
\section{Rank Unimodality of Circular Fences}\label{sect:rankuni}
Unlike the regular case, the rank polynomial of circular fences is not always unimodal. In the case of $\alpha=( 1 , k , 1 , k )$, we get the rank sequence $[1, 2, 3, 4,\ldots, k, k+1, k, k+1, k,\ldots, 3, 2, 1]$, which makes a slight dip in the middle (refer to Figure \ref{fig:1417} for the rank lattice of $(1,5,1,5)$). We will next see that this issue can only happen when we have an even number of nodes, and a dip can only happen in the middle term of the rank sequence.
\begin{prop} \label{prop:unimod} If $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_{2s})$ has an odd number of nodes, then $\overline{R}(\alpha;q)$ is unimodal. If $\alpha$ is of size $2t$ for some $t \in \mathbb{N}$, then we have $r_i\geq r_{i-1}$ for all $i<t$.\end{prop} \begin{proof} Take a composition $\alpha$ of $n$ and let $T$ be a maximal node in $\overline{F}(\alpha)$. We can partition the ideals of $\overline{F}(\alpha)$ into two parts: those that contain $T$ (and necessarily anything below it), and those that do not contain $T$. As deleting $T$ does not place any restrictions on other nodes, the ones that do not contain $T$ correspond to a regular fence of a composition $\beta$ with $n-1$ nodes, unimodal with $\text{mid}(\beta)=\frac{n-1}{2}$ by Theorem \ref{thm:main}. The ones that contain $T$ also contain the $k\geq 2$ nodes that lie below $T$ in $\overline{F}(\alpha)$, and they are in bijection with the ideals of $F(\gamma)$ obtained from $\overline{F}(\alpha)$ by deleting those nodes. The rank polynomial for $\overline{F}(\alpha)$ satisfies: \begin{eqnarray*} \overline{R}(\alpha;q)&=&{R}(\beta;q)+q^{k+1}R(\gamma;q). \end{eqnarray*}
Denote the rank sequences of $\alpha$ and $\beta$ by $(r_0,r_1,\ldots,r_n)$ and $(b_0,b_1,\ldots,b_{n-1})$ respectively. We have $b_{n-i}\geq b_{n-i+1}$ for all $1\leq i\leq\frac{n-1}{2}$ by unimodality (we take $b_n=0$). As $R(\gamma;q)$ is also rank unimodal, and the center of $q^{k+1}R(\gamma;q)$ lies strictly to the right of $\text{mid}(\beta)=\frac{n-1}{2}$, the value we add to $b_{n-i}$ is at least as large as the value we add to $b_{n-i+1}$, giving us $r_{n-i}\geq r_{n-i+1}$ and by symmetry $r_i\geq r_{i-1}$ for all $i$ satisfying $1\leq i\leq\frac{n-1}{2}$.
If $n$ is odd, this means unimodality. If $n=2t$ is even, we do not get any information about the ordering of $r_{t-1}$ and $r_{t}$, so it is possible to have a dip in the middle, which indeed happens for $\alpha = ( 1 , k , 1 , k )$ and $( k , 1 , k, 1)$ for $k \in \mathbb{N}$. \end{proof}
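Both halves of the proposition, and the middle dip of $(1,k,1,k)$, are easy to observe computationally. A brute-force Python sketch (same bitmask encoding of ideals as in the earlier sketches):

```python
def circular_rank_seq(alpha):
    """Rank sequence of the circular fence of alpha, nodes 0..n-1."""
    n = sum(alpha)
    covers, pos, up = [], 0, True
    for seg in alpha:
        for _ in range(seg):
            a, b = pos % n, (pos + 1) % n
            covers.append((a, b) if up else (b, a))
            pos += 1
        up = not up
    seq = [0] * (n + 1)
    for I in range(1 << n):
        if all(I >> a & 1 or not (I >> b & 1) for a, b in covers):
            seq[bin(I).count("1")] += 1
    return seq

def is_unimodal(seq):
    peak = seq.index(max(seq))
    return (all(seq[i] <= seq[i + 1] for i in range(peak)) and
            all(seq[i] >= seq[i + 1] for i in range(peak, len(seq) - 1)))

# odd number of nodes: unimodal
for alpha in [(2, 1, 3, 1), (1, 2, 1, 3), (2, 2, 2, 1)]:
    assert sum(alpha) % 2 == 1
    assert is_unimodal(circular_rank_seq(alpha))

# alpha = (1, k, 1, k): a dip in the middle term only
assert circular_rank_seq((1, 2, 1, 2)) == [1, 2, 3, 2, 3, 2, 1]
assert not is_unimodal(circular_rank_seq((1, 2, 1, 2)))
```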
\begin{conj} For any $\alpha \neq ( 1 , k , 1 , k )$ or $( k , 1 , k, 1)$ for some $k$, the rank polynomial $\overline{R}(\alpha;q)$ is unimodal. \end{conj}
If the segments were fully independent, we would naturally end up with a unimodal polynomial. The connections at maximal and minimal elements add some additional relations, so that a relatively small number of configurations are disallowed. What this conjecture is saying is that, when we look at a larger number of parts, the disallowed configurations are not sufficient to offset the underlying unimodality. Though we were unable to prove this in full generality, the next result shows that if there are exceptions, they are indeed very rare.
\begin{lemma} Let $T$ be a maximal node in the cyclic fence $\overline{F}(\alpha)$, and let $F_{T^-}$ be the (possibly upside down) fence obtained by deleting $T$. If the rank polynomial $R_{T^-}(q)$ corresponding to $F_{T^-}$ is top interlacing, then $\overline{R}(\alpha;q)$ is rank unimodal. \end{lemma}
\begin{proof} We have already shown unimodality when the number of nodes is odd, so let us focus on the case $\alpha$ is a composition of $2t$. Let $F_{T^+}$ denote the fence obtained by deleting $T$ and any node below $T$ with the corresponding rank polynomial $R_{T^+}(q)$ so that we have: $$\overline{R}(\alpha;q)= R_{T^-}(q)+q^{k+1}R_{T^+}(q) $$ where $k$ is the number of nodes below $T$ in $\overline{F}(\alpha)$.
As $F_{T^-}$ is top interlacing, its rank sequence $(r_0,r_1,\ldots,r_{2t-1})$ satisfies $r_{t-1}\leq r_t$. The rank sequence of $q^{k+1}R_{T^+}(q)$ is unimodal with the largest entry falling strictly to the right of position $t$, so that the number we add to $r_{t-1}$ to obtain the $(t-1)$st entry of the rank sequence of $\overline{F}(\alpha)$ is at most as large as the number we add to $r_{t}$. As we already showed in Proposition \ref{prop:unimod} that the only issue might be in the middle, we are done. \end{proof}
\begin{corollary} If $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_{2s} )$ has two consecutive segments larger than one, or $3$ consecutive segments $k, 1, l$ with $|k-l|>1$, then $\overline{R}(\alpha;q)$ is unimodal. \end{corollary}
\begin{proof} If $\alpha$ has two consecutive segments larger than one, we can assume without loss of generality, by Corollary \ref{cor:cyclicshift}, that they meet at a top bead $T$. Deleting $T$ gives an upside down fence of an odd number of parts, so it is top interlacing. Similarly, in the case where we have consecutive segments $k, 1, l$ with $|k-l|>1$ by symmetry we can assume that $k$ is larger and $k$ and $1$ meet in a top node $T$. Deleting $T$ gives a fence with an even number of parts, first of which is $l$ and the last is $k-1$. As $k-1$ is strictly larger than $l$, the corresponding rank polynomial is top interlacing. \end{proof}
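For small compositions, the corollary can be confirmed directly. In the Python sketch below (same conventions as the earlier sketches), $(2,2,1,1)$, $(2,3,1,1)$ and $(3,1,2,2)$ have two consecutive segments larger than one, while $(3,1,1,1)$ contains consecutive segments $3,1,1$ with $|3-1|>1$.

```python
def circular_rank_seq(alpha):
    """Rank sequence of the circular fence of alpha, nodes 0..n-1."""
    n = sum(alpha)
    covers, pos, up = [], 0, True
    for seg in alpha:
        for _ in range(seg):
            a, b = pos % n, (pos + 1) % n
            covers.append((a, b) if up else (b, a))
            pos += 1
        up = not up
    seq = [0] * (n + 1)
    for I in range(1 << n):
        if all(I >> a & 1 or not (I >> b & 1) for a, b in covers):
            seq[bin(I).count("1")] += 1
    return seq

def is_unimodal(seq):
    peak = seq.index(max(seq))
    return (all(seq[i] <= seq[i + 1] for i in range(peak)) and
            all(seq[i] >= seq[i + 1] for i in range(peak, len(seq) - 1)))

for alpha in [(2, 2, 1, 1), (2, 3, 1, 1), (3, 1, 2, 2), (3, 1, 1, 1)]:
    assert is_unimodal(circular_rank_seq(alpha))
```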
The leftover cases can be fully analysed when we have a small number of parts. For example, if we have four parts, the only cases that are not covered are of the forms $(1,k,1,k)$ and $(1,k,1,k+1)$. If we have $6$ parts, possible counterexamples to unimodality must be of one of these forms: $(1,k,1,k,1,k)$, $(1,k,1,k,1,k+1)$, $(1,k,1,k+1,1,k+1)$, $(1,1,2,1,1,2)$.
\section{Rowmotion on Circular Fences}
We can identify the ideals of a fence with antichains on that fence, as any ideal is uniquely described by its maximal elements. Rowmotion acts on ideals by taking an ideal $I$ to the ideal $\rho(I)$ corresponding to the antichain given by the minimal elements of the complement of $I$. In their recent paper \cite{rowmotion}, Elizalde, Plante, Roby and Sagan explored rowmotion on fences, and gave homomesy and orbomesy results, many of which hold for the circular case as well.
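This description of rowmotion is straightforward to implement. The Python sketch below (our own bitmask encoding of ideals, purely illustrative) computes orbits on a circular fence; as a sanity check, it recovers the orbit sizes of Example \ref{ex:3131} below.

```python
def circular_covers(alpha):
    """Cover pairs (a, b) with a < b of the circular fence on nodes 0..n-1."""
    n = sum(alpha)
    covers, pos, up = [], 0, True
    for seg in alpha:
        for _ in range(seg):
            x, y = pos % n, (pos + 1) % n
            covers.append((x, y) if up else (y, x))
            pos += 1
        up = not up
    return n, covers

def rowmotion(I, covers, n):
    """Ideal generated by the minimal elements of the complement of I.
    v is minimal in the complement iff v is not in I and every lower
    cover of v is in I."""
    J = 0
    for v in range(n):
        if not (I >> v & 1) and all(I >> a & 1 for a, b in covers if b == v):
            J |= 1 << v
    changed = True
    while changed:                      # close downward under the covers
        changed = False
        for a, b in covers:
            if J >> b & 1 and not (J >> a & 1):
                J |= 1 << a
                changed = True
    return J

def orbit_sizes(alpha):
    n, covers = circular_covers(alpha)
    todo = {I for I in range(1 << n)
            if all(I >> a & 1 or not (I >> b & 1) for a, b in covers)}
    sizes = []
    while todo:
        I, size = next(iter(todo)), 0
        while I in todo:                # rowmotion is a bijection: a cycle
            todo.remove(I)
            size += 1
            I = rowmotion(I, covers, n)
        sizes.append(size)
    return sorted(sizes)

assert orbit_sizes((2, 2)) == [2, 4]            # 6 ideals in two orbits
assert orbit_sizes((3, 1, 3, 1)) == [5, 9, 9]   # as in Example \ref{ex:3131}
```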
In particular they gave a bijection between the orbits of rowmotion on $F(\alpha)$ and an object called an $\alpha$-tiling. Here, we introduce a natural analogue, the class of \emph{circular} $\alpha$-tilings: \begin{defn} For a composition $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_{2s})$, a circular $\alpha$-tiling is a tiling of a rectangle $R_{2s}$ with $2s$ rows labeled $1,2,\ldots,2s$ from top to bottom and an infinite number of columns with yellow $1 \times 1$ tiles, red $2 \times 1$ tiles which are allowed to wrap around and black $1\times (\alpha_i-1)$ tiles in row $i$ satisfying the following properties: \begin{enumerate}[label=\textbf{(\alph*)}]
\item If there is at least one black tile in a row, then when the red tiles are ignored, the black and yellow tiles alternate in that row.
\item If $i$ is odd, there is a red tile in a column covering rows $i$ and $i+1$ if and only if the next column contains two yellow tiles in those two rows.
\item If $i$ is even, there is a red tile covering rows $i$ and $i+1$ (if $i<2s$), or wrapping around to cover rows $2s$ and $1$ (if $i=2s$), if and only if the previous column contains two yellow tiles in those rows. \end{enumerate} \end{defn}
We say that a red tile \emph{starts} at row $i$ if it covers $i, i+1$ or $i=2s$ and it covers $2s$ and $1$. Though it is by no means clear from the definition, the connection with rowmotion orbits which we will prove next in Lemma~\ref{lem:rowmotionbijection} implies that all such tilings are periodic. The period of an orbit $\mathcal{O}$ will be called the \emph{size} of $\mathcal{O}$, denoted $|\mathcal{O}|$. We will visually represent tilings by drawing one such period and identify tilings that are cyclic shifts of each other horizontally.
Let the map $\overline{\phi}$ take an ideal $I$ of $\overline{F}(\alpha)$, $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_{2s})$, to a $2s\times 1$ rectangle where box $i$ is colored yellow if the $i$th segment contains no maximal elements of $I$, red if it contains a shared maximal element, and black if it contains an unshared maximal element. $\overline{\phi}$ can be seen as a map taking orbits of rowmotion to infinite rectangles of $2s$ rows by seeing each iteration of the rowmotion operation as a new column (see Figure \ref{fig:rowmotion2113} for an example). The following result directly follows from the proof of the corresponding Lemma 2.2 in \cite{rowmotion} and contains no new ideas. The proof is therefore omitted.
\begin{lemma}\label{lem:rowmotionbijection} The map $\overline{\phi}$ is a bijection between orbits of rowmotion on $\overline{F}(\alpha)$ and circular $\alpha$-tilings. \end{lemma}
For the following discussion, we will identify each tiling with its corresponding orbit and use the two interchangeably. Note that the placement of red tiles uniquely determines an orbit as long as there is at least one red tile in a row, as yellow and black tiles alternate in the leftover spaces. When describing all orbits of a particular fence, we will often talk about the placement of the red tiles, leaving it up to the reader to verify that the construction indeed gives a valid orbit.
\begin{figure}
\caption{A circular $(2,1,1,3)$-tiling and the corresponding orbit of rowmotion on $\overline{F}(2,1,1,3)$}
\label{fig:rowmotion2113}
\end{figure}
A statistic $\text{st}$ is said to be $d$-mesic (with respect to a group operation) if its average is $d$ on every orbit, and it is said to be homomesic if it is $d$-mesic for some $d$. We will consider the following statistics on orbits of rowmotion on cyclic fences:
\begin{eqnarray*} \overline{\mathcal{M}}_x(\mathcal{O})=\text{number of times $x$ occurs as a maximal element in } \mathcal{O},\qquad &&\overline{\mathcal{M}}(\mathcal{O})=\sum_x\overline{\mathcal{M}}_x(\mathcal{O}),\\ \overline{\chi}_x(\mathcal{O})=\text{number of times $x$ occurs in } \mathcal{O},\qquad &&\overline{\chi}(\mathcal{O})=\sum_x\overline{\chi}_x(\mathcal{O}). \end{eqnarray*}
We can read the values of the statistics described directly from the tiling. Let $b_i$ and $w_i$ denote the number of black and yellow tiles on row $i$ in one period of $\mathcal{O}$ respectively, and let $r_i$ denote the number of red tiles starting in row $i$. Note that as black tiles only occur alternating with yellow tiles, $b_i=w_i$ in any row where $b_i$ is non-zero. So, for any row $i$, the total $w_i\,\alpha_i+r_i+r_{i-1}$ is equal to the period of $\mathcal{O}$. An unshared element $x$ on segment $i$ occurs as a maximal element once per black tile on row $i$, and the occurrences of shared elements correspond to red tiles.
If $\alpha_i\geq 2$ and $x$ is the $j$th smallest unshared element on segment $i$, we get: \begin{eqnarray*} \overline{\mathcal{M}}_x(\mathcal{O})= b_i, \qquad \overline{\chi}_x(\mathcal{O})=\begin{cases} b_i(\alpha_i-j)+r_i& \text{if $i$ is odd}\\ b_i(\alpha_i-j)+r_{i-1}& \text{if $i$ is even}. \end{cases} \end{eqnarray*}
Similarly for a maximal element $T$ lying between segments $2i+1$ and $2i+2$, and a minimal element $B$ lying between segments $2i$ and $2i+1$ (cyclically), we have:
\begin{eqnarray*} \overline{\chi}_T(\mathcal{O})&=& r_{2i+1}=\overline{\mathcal{M}}_T(\mathcal{O}), \quad \quad
\overline{\mathcal{M}}_B(\mathcal{O})=r_{2i}, \\ \overline{\chi}_B(\mathcal{O})&=&|\mathcal{O}|-r_{2i}=r_{2i-1}+w_{2i}\,\alpha_{2i}=r_{2i+1}+w_{2i+1}\,\alpha_{2i+1}. \end{eqnarray*}
By summing up these values over all nodes of the fence, we get the following formulas (using the convention $\alpha_{2s+1}=\alpha_1$): \begin{eqnarray} \overline{\mathcal{M}}(\mathcal{O})&=&\sum_{i\leq2s}\left(b_i(\alpha_i-1)+r_i\right),\label{eq:maxorbit}\\
\overline{\chi}(\mathcal{O})&=&s|\mathcal{O}|+\sum_{i\leq2s}b_i\binom{\alpha_i}{2}+\sum_{i\leq s}\left(r_{2i-1}(\alpha_{2i-1}+\alpha_{2i}-1)-r_{2i}\right)\\
&=&m/2\,|\mathcal{O}|-\sum_{i\leq2s}(-1)^i r_i(\alpha_i+\alpha_{i+1})/2,\label{eq:sumorbit} \end{eqnarray} where $m$ denotes the size of $\alpha$.
We have shown in Theorem \ref{thm:sym} that the rank polynomial for circular fences is always symmetric. This means that if the statistic $\overline{\chi}$ is homomesic, it is necessarily $m\slash 2$-mesic. So in a way the last part of Equation \ref{eq:sumorbit} describes how far from a homomesy an orbit is. If $\alpha_i+\alpha_{i+1}$ is the same for all $i$, as in the example of $(3,1,3,1)$ below, then $\overline{\chi}$ is $m\slash 2$-mesic if and only if $\sum_{i\leq s} (r_{2i}-r_{2i-1})=0$ for all orbits.
\begin{example}[$\overline{F}(3,1,3,1)$.]\label{ex:3131} On the small case $(3,1,3,1)$, rowmotion has $3$ orbits, one of size $5$ and two of size $9$.
\begin{tikzpicture}[scale=.45] \draw(0,0)grid(5,-4); \rrec{1}{1} \rrec{3}{1} \rrec{2}{3} \rrrec{4}{3} \yrec{1}{2} \yrec{2}{2} \yrec{3}{2} \yrec{4}{2} \brec{1}{4}{2} \brec{3}{4}{2} \yrec{2}{4} \yrec{2}{5} \yrec{4}{4} \yrec{4}{5} \draw node at (2.5,-4.5) {$\mathcal{O}_1$}; \end{tikzpicture} \qquad \qquad \begin{tikzpicture}[scale=.45] \draw(0,0)grid(9,-4); \rrec{1}{1}\rrrec{4}{3}\rrec{2}{4}\rrec{1}{6}\rrec{3}{7} \yrec{4}{1}\yrec{4}{2}\yrec{4}{5}\yrec{4}{6}\yrec{4}{4} \yrec{4}{8} \yrec{2}{2}\yrec{2}{3}\yrec{2}{5}\yrec{2}{7}\yrec{2}{8} \yrec{1}{2}\yrec{1}{7}\yrec{3}{3}\yrec{3}{8}\brec{1}{4}{2}\brec{1}{8}{2}\brec{3}{1}{2} \brec{3}{5}{2} \yrec{2}{9} \yrec{4}{9} \rrec{2}{9} \draw node at (4.5,-4.5) {$\mathcal{O}_2$}; \end{tikzpicture} \qquad \qquad \begin{tikzpicture}[scale=.45] \draw(0,0)grid(9,-4); \rrec{3}{1}\rrec{2}{3}\rrrec{4}{4}\rrec{3}{6}\rrec{1}{7} \yrec{2}{1}\yrec{2}{2}\yrec{2}{5}\yrec{2}{6}\yrec{2}{4} \yrec{2}{8} \yrec{4}{2}\yrec{4}{3}\yrec{4}{5}\yrec{4}{7}\yrec{4}{8} \yrec{3}{2}\yrec{3}{7}\yrec{1}{3}\yrec{1}{8}\brec{3}{4}{2}\brec{3}{8}{2}\brec{1}{1}{2} \rrrec{4}{9} \brec{1}{5}{2} \yrec{2}{9}
\draw node at (4.5,-4.5) {$\mathcal{O}_3$}; \end{tikzpicture} \centering
\begin{flushleft}
We can calculate the values of the $\overline{\chi}$ and $\overline{\mathcal{M}}$ statistics via Equations \ref{eq:maxorbit}--\ref{eq:sumorbit}: \end{flushleft} \begin{eqnarray*}
\overline{\mathcal{M}}(\mathcal{O})=2(b_1+b_3)+(r_1+r_2+r_3+r_4),& \quad \quad &\overline{\chi}(\mathcal{O})=4|\mathcal{O}|+2(r_1-r_2+r_3-r_4),\\ \overline{\mathcal{M}}(\mathcal{O}_1)=2(2)+4=8,& \quad \quad & \overline{\mathcal{M}}(\mathcal{O}_2)=\overline{\mathcal{M}}(\mathcal{O}_3)=2(4)+6=14,\\ \overline{\chi}(\mathcal{O}_1)=4(5)=20,& \quad \quad & \overline{\chi}(\mathcal{O}_2)=\overline{\chi}(\mathcal{O}_3)=4(9)=36. \end{eqnarray*} \begin{flushleft}
Note that the second $9$-orbit can be obtained from the first by shifting rows cyclically by $2$, so it makes sense that they have the same statistics. The statistic $\overline{\chi}$ is $4$-mesic. \end{flushleft} \end{example}
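The orbit data in this example can be checked by brute force. The following Python sketch (an illustration only; the $0$-indexed encoding of $\overline{F}(3,1,3,1)$ and all helper names are ours, not the paper's) enumerates the lower ideals and applies rowmotion, computed as the ideal generated by the minimal elements of the complement:

```python
def fence_covers(alpha):
    # Cover relations (lower, upper) of the circular fence poset on 0..n-1:
    # the segments of the composition alternately ascend and descend on a cycle.
    n, covers, pos, up = sum(alpha), [], 0, True
    for part in alpha:
        for _ in range(part):
            edge = (pos, (pos + 1) % n)
            covers.append(edge if up else edge[::-1])
            pos += 1
        up = not up
    return n, covers

def rowmotion_orbits(alpha):
    n, covers = fence_covers(alpha)
    leq = {(i, i) for i in range(n)} | set(covers)
    while True:                        # reflexive-transitive closure of covers
        new = {(a, d) for a, b in leq for c, d in leq if b == c} - leq
        if not new:
            break
        leq |= new
    ideals = [frozenset(i for i in range(n) if m >> i & 1)
              for m in range(1 << n)
              if all(m >> lo & 1 or not m >> hi & 1 for lo, hi in covers)]

    def row(I):    # ideal generated by the minimal elements of the complement
        comp = set(range(n)) - I
        mins = {x for x in comp if not any(y != x and (y, x) in leq for y in comp)}
        return frozenset(z for z in range(n) if any((z, x) in leq for x in mins))

    todo, orbits = set(ideals), []
    while todo:
        I, orbit = next(iter(todo)), []
        while I in todo:
            todo.remove(I)
            orbit.append(I)
            I = row(I)
        orbits.append(orbit)
    return orbits

orbits = rowmotion_orbits((3, 1, 3, 1))
print(sorted(len(o) for o in orbits))                  # [5, 9, 9]
print(sorted(sum(len(I) for I in o) for o in orbits))  # [20, 36, 36]
```

The printed orbit sizes and per-orbit $\overline{\chi}$ values match the example above.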
Applying the formulas for the $\overline{\mathcal{M}}$ and $\overline{\chi}$ statistics, we see that many homomesy results for the non-circular fences also apply to the circular ones:
\begin{prop}\label{prop:homomesy} For a composition $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_{2s})$, the rowmotion operation on the circular fence $\overline{F}(\alpha)$ has the following properties: \begin{enumerate}
\item If $x$ and $y$ are unshared elements on the same segment, $\overline{\mathcal{M}}_x-\overline{\mathcal{M}}_y$ is $0$-mesic.
\item For an unshared element $x$ of segment $i$ that lies between a maximal element $T$ and a minimal element $B$, $\overline{\mathcal{M}}_x \alpha_i+\overline{\mathcal{M}}_{T}+\overline{\mathcal{M}}_{B}$ is $1$-mesic.
\item For a maximal element $T$ lying between segments $2i+1$ and $2i+2$, and a minimal element $B$ lying between segments $2j$ and $2j+1$ (cyclically), if $r_{2i+1}=r_{2j}$ for all orbits $\mathcal{O}$, then $\overline{\chi}_T+\overline{\chi}_B$ is $1$-mesic.
\item If $\alpha_i=2$ for all $i$, then $\overline{\mathcal{M}}$ is $s$-mesic. \end{enumerate} \end{prop}
We have previously noted that taking setwise complements maps the ideals of $\overline{F}(\alpha)$ to ideals of $\overline{F}(\text{sh}(\alpha))$, the circular fence of the cyclic shift of $\alpha$ by one step. We can also see $\kappa$ as the map taking a circular $\alpha$-tiling, doing a vertical cyclic shift of one step and a horizontal flip to get a circular $\text{sh}(\alpha)$-tiling. Figure \ref{fig:shrowmotion2113} shows the action of $\kappa$ on the orbit seen in Figure \ref{fig:rowmotion2113}. As rowmotion is defined via the complement operation, it is quite well behaved under this map.
\begin{lemma} \label{lem:rowmotionflip} Let $\kappa$ denote the complement map between ideals of $\overline{F}(\alpha)$ and $\overline{F}(\text{sh}(\alpha))$. Then for any ideal $I$ we have $\kappa(\partial(I))=\partial^{-1}(\kappa(I))$, meaning $\kappa$ maps orbits to orbits. In particular, if $|\alpha|=n$, for any orbit $\mathcal{O}$ of rowmotion on $\overline{F}(\alpha)$ we have: \begin{eqnarray*} \overline{\mathcal{M}}(\mathcal{O})&=&\overline{\mathcal{M}}(\kappa(\mathcal{O}))\\
\overline{\chi}(\mathcal{O})+ \overline{\chi}(\kappa(\mathcal{O}))&=&{n}|\mathcal{O}|. \end{eqnarray*} \end{lemma} \begin{proof} As $\overline{\chi}(I)+\overline{\chi}(\kappa(I))=n$ for any ideal $I$, the second statement is trivial. The first is slightly more complicated, as we do not necessarily have $\overline{\mathcal{M}}(I)=\overline{\mathcal{M}}(\kappa(I))$: for example, if $I$ is the empty ideal, $\overline{\mathcal{M}}(I)=0$ whereas $\overline{\mathcal{M}}(\kappa(I))=s$, where $2s$ is the length of $\alpha$. However, as the total number of red and black tiles remains unchanged under $\kappa$, the result follows by Equation \ref{eq:maxorbit}. \end{proof}
\begin{figure}
\caption{A circular $(3,2,1,1)$-tiling and the corresponding orbit}
\label{fig:shrowmotion2113}
\end{figure}
When we have only two parts of sizes $a$ and $b$, the rank lattice becomes an $a\times b$ lattice with added minimum and maximum elements (see Figure \ref{fig:48latticechains} for an example). The bijection with tilings allows us to easily describe the orbits in terms of the lcm and gcd of the two segment sizes.
\begin{prop} Let $d=\text{gcd}(a,b)$ and $m=\text{lcm}(a,b)$. Then, rowmotion on $\overline{F}(a,b)$ has a unique orbit of size $m+2$ and $d-1$ orbits of size $m$, where $\overline{\mathcal{M}}$ takes the values $2(a+b)(m+2)/d$ and $2m-(a+b)/d$ respectively. The statistic $\overline{\chi}$ is $(a+b)/2$-mesic. \end{prop}
\begin{proof} There is a unique orbit whose tiling (of size $m+2$) contains red tiles: one starting on the first row of column $1$, and the other starting at the third row of column $3$. The rest of the orbits contain no red tiles, and are given by the $d-1$ ways of placing the white tiles so that they always fall on different columns (see Figure \ref{fig:48latticeorbits} for an example). In all orbits $w_1=b/d$ and $w_2=a/d$, and plugging these values into Equations \ref{eq:maxorbit} and \ref{eq:sumorbit} allows us to calculate $\overline{\mathcal{M}}$ and $\overline{\chi}$. \end{proof}
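The proposition can be checked by brute force on small cases, e.g. $(a,b)=(4,8)$ and $(3,5)$. The sketch below (illustrative Python; the poset encoding and helper names are ours, not the paper's) enumerates ideals and rowmotion orbits directly:

```python
from math import gcd

def fence_covers(alpha):
    # Cover relations (lower, upper) of the circular fence poset on 0..n-1:
    # the segments of the composition alternately ascend and descend on a cycle.
    n, covers, pos, up = sum(alpha), [], 0, True
    for part in alpha:
        for _ in range(part):
            edge = (pos, (pos + 1) % n)
            covers.append(edge if up else edge[::-1])
            pos += 1
        up = not up
    return n, covers

def rowmotion_orbits(alpha):
    n, covers = fence_covers(alpha)
    leq = {(i, i) for i in range(n)} | set(covers)
    while True:                        # reflexive-transitive closure of covers
        new = {(a, d) for a, b in leq for c, d in leq if b == c} - leq
        if not new:
            break
        leq |= new
    ideals = [frozenset(i for i in range(n) if m >> i & 1)
              for m in range(1 << n)
              if all(m >> lo & 1 or not m >> hi & 1 for lo, hi in covers)]

    def row(I):    # ideal generated by the minimal elements of the complement
        comp = set(range(n)) - I
        mins = {x for x in comp if not any(y != x and (y, x) in leq for y in comp)}
        return frozenset(z for z in range(n) if any((z, x) in leq for x in mins))

    todo, orbits = set(ideals), []
    while todo:
        I, orbit = next(iter(todo)), []
        while I in todo:
            todo.remove(I)
            orbit.append(I)
            I = row(I)
        orbits.append(orbit)
    return orbits

for a, b in [(4, 8), (3, 5)]:
    d, m = gcd(a, b), a * b // gcd(a, b)
    orbs = rowmotion_orbits((a, b))
    assert sorted(len(o) for o in orbs) == sorted([m + 2] + [m] * (d - 1))
    # chi-bar is (a+b)/2-mesic on every orbit
    assert all(2 * sum(len(I) for I in o) == (a + b) * len(o) for o in orbs)
    print((a, b), sorted(len(o) for o in orbs))
```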
Another way to visualise the orbits in this case is to think of them as walks on the rank lattice using the moves $(1,1)$, $(a,-1)$, $(-1,b)$ and the special move that connects the maximum and the minimum; see Figure \ref{fig:48latticeorbits} for the example of $\overline{F}(4,8)$. \begin{figure}
\caption{The four orbits of rowmotion on $\overline{F}(4,8)$ as paths on $\overline{J}(4,8)$ and corresponding tilings.}
\label{fig:48latticeorbits}
\end{figure}
\subsection{Other Cases With Few Segments}
When the number of segments is small, it is often possible to build all orbits by \emph{dilating} orbits for partitions of smaller size, where we add new columns that lengthen the black tiles without creating problems. Figure \ref{fig:4dilation} shows an example where we build the orbits for larger partitions by adding new columns in the marked spaces. In this section, we will use ``dilation'' arguments to fully describe the action of rowmotion on $\overline{F}(1,1,a,1)$ and $\overline{F}(a,1,a,1)$. The idea can be extended to use the orbits for $(k,1,a,1)$ to build the orbits of $(k,1,a+k+3,1)$ for general $k$.
\begin{figure}
\caption{We can add black-yellow-black-yellow columns to the $6$-orbit for $\overline{F}(4,1,4,1)$ to get orbits for $\overline{F}(a,1,a,1)$, $a\geq4$.}
\label{fig:4dilation}
\end{figure}
\begin{thm} For $a\geq 2$, rowmotion on $\overline{F}(a,1,a,1)$ has $a-2$ small orbits $\mathcal{O}_s$ of size $a+2$ with $\overline{\mathcal{M}}(\mathcal{O}_s)=2a+2$ and $2$ large orbits $\mathcal{O}_l$ of size $2a+3$ with $\overline{\mathcal{M}}(\mathcal{O}_l)=4a+2$. The statistic $\overline{\chi}$ is $(a+1)$-mesic. \end{thm} \begin{proof} The orbits for the case $a=3$ were already examined in Example \ref{ex:3131}. For larger $a$, all small orbits have one red tile starting at each row. For any $0\leq k\leq a-4$ we get a unique $(a+2)$-orbit with red tiles starting on rows $1,2,3,4$ placed on columns $1$, $6+k$, $4+k$ and $3$ respectively. All these orbits can be obtained by dilating the $6$-orbit of rowmotion on $\overline{F}(4,1,4,1)$ shown on the left of Figure \ref{fig:4dilation}, by adding columns with alternating black and yellow tiles. The remaining small orbit has red tiles starting on rows $1$ and $3$ on column $1$, and red tiles starting on rows $2$ and $4$ on column $3$.
The larger orbits can similarly be obtained by dilating the $9$-orbits shown in Example \ref{ex:3131}. The starting positions of the red tiles on one of the large orbits are: columns $1$ and $a+3$ in row $1$, column $4$ in row $2$, column $a+4$ in row $3$ and column $3$ in row $4$. The other orbit is obtained by shifting rows cyclically by $2$, so the positions for rows $1$ and $2$ are flipped with the positions for rows $3$ and $4$ respectively.
As all $(a+2)^2-2$ ideals are represented, there can be no more orbits. Equations \ref{eq:maxorbit}-\ref{eq:sumorbit} give the data about the statistics. \end{proof}
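The $a=4$ instance of this theorem is small enough to confirm exhaustively. The sketch below (illustrative Python; the poset encoding and helper names are ours, not the paper's) rebuilds the circular fence, enumerates its ideals, and computes the rowmotion orbits:

```python
def fence_covers(alpha):
    # Cover relations (lower, upper) of the circular fence poset on 0..n-1:
    # the segments of the composition alternately ascend and descend on a cycle.
    n, covers, pos, up = sum(alpha), [], 0, True
    for part in alpha:
        for _ in range(part):
            edge = (pos, (pos + 1) % n)
            covers.append(edge if up else edge[::-1])
            pos += 1
        up = not up
    return n, covers

def rowmotion_orbits(alpha):
    n, covers = fence_covers(alpha)
    leq = {(i, i) for i in range(n)} | set(covers)
    while True:                        # reflexive-transitive closure of covers
        new = {(a, d) for a, b in leq for c, d in leq if b == c} - leq
        if not new:
            break
        leq |= new
    ideals = [frozenset(i for i in range(n) if m >> i & 1)
              for m in range(1 << n)
              if all(m >> lo & 1 or not m >> hi & 1 for lo, hi in covers)]

    def row(I):    # ideal generated by the minimal elements of the complement
        comp = set(range(n)) - I
        mins = {x for x in comp if not any(y != x and (y, x) in leq for y in comp)}
        return frozenset(z for z in range(n) if any((z, x) in leq for x in mins))

    todo, orbits = set(ideals), []
    while todo:
        I, orbit = next(iter(todo)), []
        while I in todo:
            todo.remove(I)
            orbit.append(I)
            I = row(I)
        orbits.append(orbit)
    return orbits

orbs = rowmotion_orbits((4, 1, 4, 1))
stats = sorted((len(o), sum(len(I) for I in o)) for o in orbs)
print(stats)   # sizes 6, 6, 11, 11 with chi-bar equal to 5|O| on each orbit
```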
\begin{wrapfigure}{r}{0.35\linewidth}\centering \begin{tikzpicture}[scale=.45] \draw (-4,0) rectangle(4,-4); \draw (-4,0) grid (-3,-4); \draw (-1,0)grid(4,-4); \rrec{1}{2}\rrrec{4}{4}\yrec{1}{3}\yrec{2}{1}\yrec{2}{3}\yrec{2}{4}\yrec{4}{1}\yrec{2}{-3}\yrec{4}{-3}\yrec{2}{0}\yrec{4}{0} \yrec{4}{2}\yrec{4}{3} \brec{3}{-.5}{5.5}\brec{1}{-.5}{2.5} \brec{3}{-3}{3.5}\brec{1}{-3}{3.5} \draw[ultra thick,dashed](-3,-1.5)--(-1,-1.5)(-3,-3.5)--(-1,-3.5);\draw[thick,yellow,dashed](-3,-1.5)--(-1,-1.5)(-3,-3.5)--(-1,-3.5); \draw [
thick,
decoration={
brace,
mirror,
},
decorate ] (-4,-4.2) -- (1,-4.2); \node at (-1.5,-4.8) {$k-1$}; \end{tikzpicture} \end{wrapfigure} For a fixed value of $k$, we can obtain all orbits of $\overline{F}(k,1,a+k+2,1)$ from orbits of $\overline{F}(k,1,a,1)$ by adding new pieces that extend the black tiles on the third row, while adding an extra black tile to row two. The addition of the $4\times (k+2)$ piece on the right, extending each black tile in row $3$ and shifting cyclically if necessary, achieves exactly this. We will now use this process to calculate the orbits of rowmotion on $\overline{F}(1,1,a,1)$.
\begin{thm} If $a\equiv 0$ or $2$ mod $3$, rowmotion on $\overline{F}(a,1,1,1)$ has a unique orbit $\mathcal{O}$ of size $3a+4$ and $\overline{\chi}$ is homomesic. If $a\equiv 1$ mod $3$, then rowmotion has $3$ orbits, of sizes $a+2$, $a+1$ and $a+1$, with $\overline{\chi}$ values $(a+2)(a+3)/2$, $(a+2)(a+3)/2$ and $a(a+3)/2$ respectively. \end{thm}
\begin{proof} For the cases when $a\equiv 0$ or $2$ mod $3$, it is possible to extend the unique orbits for $\overline{F}(1,1,2,1)$ and $\overline{F}(1,1,3,1)$ (see Figure~\ref{fig:rowmotionona111}) to get the orbits for $\overline{F}(1,1,2+3t,1)$ and $\overline{F}(1,1,3+3t,1)$, $t\in\mathbb{N}$. Considering the sizes shows us that no other orbits exist. Similarly, the three orbits of $\overline{F}(1,1,1,1)$ can be extended to get orbits for $\overline{F}(1,1,1+3t,1)$ as seen in Figure~\ref{fig:rowmotion2ona111}. From Table~\ref{tab:smallcases} we can see that $\overline{F}(1,1,a,1)$ has a total of $3a+4$ ideals, so these are all the orbits. To calculate the $\overline{\chi}$ values, we can use Equation~\ref{eq:sumorbit}, which simplifies to $\overline{\chi}(\mathcal{O})=(a+3)|\mathcal{O}|/2 +(r_1-r_4)+\frac{a+1}{2}(r_3-r_2)$ for the particular case of $\overline{F}(1,1,a,1)$. \end{proof}
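Both congruence cases of the theorem can be confirmed on small instances, say $a=3$ and $a=4$. The sketch below (illustrative Python; the poset encoding and helper names are ours, not the paper's) computes the orbits of $\overline{F}(a,1,1,1)$ directly:

```python
def fence_covers(alpha):
    # Cover relations (lower, upper) of the circular fence poset on 0..n-1:
    # the segments of the composition alternately ascend and descend on a cycle.
    n, covers, pos, up = sum(alpha), [], 0, True
    for part in alpha:
        for _ in range(part):
            edge = (pos, (pos + 1) % n)
            covers.append(edge if up else edge[::-1])
            pos += 1
        up = not up
    return n, covers

def rowmotion_orbits(alpha):
    n, covers = fence_covers(alpha)
    leq = {(i, i) for i in range(n)} | set(covers)
    while True:                        # reflexive-transitive closure of covers
        new = {(a, d) for a, b in leq for c, d in leq if b == c} - leq
        if not new:
            break
        leq |= new
    ideals = [frozenset(i for i in range(n) if m >> i & 1)
              for m in range(1 << n)
              if all(m >> lo & 1 or not m >> hi & 1 for lo, hi in covers)]

    def row(I):    # ideal generated by the minimal elements of the complement
        comp = set(range(n)) - I
        mins = {x for x in comp if not any(y != x and (y, x) in leq for y in comp)}
        return frozenset(z for z in range(n) if any((z, x) in leq for x in mins))

    todo, orbits = set(ideals), []
    while todo:
        I, orbit = next(iter(todo)), []
        while I in todo:
            todo.remove(I)
            orbit.append(I)
            I = row(I)
        orbits.append(orbit)
    return orbits

# a = 3 (0 mod 3): one orbit of size 3a+4 = 13, chi-bar homomesic.
orbs3 = rowmotion_orbits((3, 1, 1, 1))
print([len(o) for o in orbs3], sum(len(I) for I in orbs3[0]))
# a = 4 (1 mod 3): orbits of sizes 6, 5, 5 with chi-bar 21, 21, 14.
orbs4 = rowmotion_orbits((4, 1, 1, 1))
print(sorted((len(o), sum(len(I) for I in o)) for o in orbs4))
```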
\begin{figure}
\caption{Orbits for $\overline{F}(1,1,2,1)$ (left) and $\overline{F}(1,1,3,1)$ (middle) can be extended via the piece on the right to get orbits for $\overline{F}(1,1,2+3t,1)$ and $\overline{F}(1,1,3+3t,1)$.}
\label{fig:rowmotionona111}
\end{figure}
\begin{figure}
\caption{Orbits for $\overline{F}(1,1,1,1)$ (left) can be extended to get orbits for $\overline{F}(1,1,1+3t,1)$.}
\label{fig:rowmotion2ona111}
\end{figure} A similar process shows that for $a\geq 2$, the number of orbits of rowmotion on $\overline{F}(2,1,a,1)$ depends on the residue of $a$ modulo $4$. If $a\equiv 1$ we get a unique orbit of size $4a+6$. If $a \equiv 3$, we get three orbits, two of size $a+1$ and one of size $2a+4$. If $a$ is even, we get two orbits of sizes $a+2$ and $3a+4$ respectively. The statistics for these orbits can be found in Table~\ref{tab:rowmotion}. As mentioned before, a similar process can be used to characterize orbits of fences of type $(k,1,a,1)$, where even more cases would be involved.
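The three congruence cases can be confirmed for $a=3,4,5$ by the same kind of brute-force computation (illustrative Python; the poset encoding and helper names are ours, not the paper's):

```python
def fence_covers(alpha):
    # Cover relations (lower, upper) of the circular fence poset on 0..n-1:
    # the segments of the composition alternately ascend and descend on a cycle.
    n, covers, pos, up = sum(alpha), [], 0, True
    for part in alpha:
        for _ in range(part):
            edge = (pos, (pos + 1) % n)
            covers.append(edge if up else edge[::-1])
            pos += 1
        up = not up
    return n, covers

def rowmotion_orbits(alpha):
    n, covers = fence_covers(alpha)
    leq = {(i, i) for i in range(n)} | set(covers)
    while True:                        # reflexive-transitive closure of covers
        new = {(a, d) for a, b in leq for c, d in leq if b == c} - leq
        if not new:
            break
        leq |= new
    ideals = [frozenset(i for i in range(n) if m >> i & 1)
              for m in range(1 << n)
              if all(m >> lo & 1 or not m >> hi & 1 for lo, hi in covers)]

    def row(I):    # ideal generated by the minimal elements of the complement
        comp = set(range(n)) - I
        mins = {x for x in comp if not any(y != x and (y, x) in leq for y in comp)}
        return frozenset(z for z in range(n) if any((z, x) in leq for x in mins))

    todo, orbits = set(ideals), []
    while todo:
        I, orbit = next(iter(todo)), []
        while I in todo:
            todo.remove(I)
            orbit.append(I)
            I = row(I)
        orbits.append(orbit)
    return orbits

# Expected orbit sizes for F(2,1,a,1) according to the three cases of a mod 4.
expected = {3: [4, 4, 10], 4: [6, 16], 5: [26]}
for a, sizes in expected.items():
    orbs = rowmotion_orbits((2, 1, a, 1))
    print(a, sorted(len(o) for o in orbs))
    assert sorted(len(o) for o in orbs) == sizes
```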
\begin{table}\center
\begin{tabular}{||c|c|c|c|c||}
\hline Composition& Orbit Count& $|\mathcal{O}|$& $\overline{\mathcal{M}}(\mathcal{O})$&$\overline{\chi}(\mathcal{O})$ \\ [0.5ex]
\hline\hline \multirow{2}{*}{\begin{tabular}{c} $(a,b)$: \\ lcm$(a,b)=m$
\end{tabular}}& $1$&$m+2$&$2m(a+b)(m+2)/ab$&$|\mathcal{O}|n/2$\\\cline{2-5}
& $(ab/m) -1$ &$m$&$2m-(a+b)m/ab$&$|\mathcal{O}|n/2$\\\hline
\multirow{1}{*}{$(a\neq3t+1,1,1,1)$}& $1$&$3a+4$&$5a+6$&$|\mathcal{O}|n/2$\\\hline
\multirow{3}{*}{$(a=3t+1,1,1,1)$}& $1$&$a+2$&$5t+4$&$|\mathcal{O}|n/2$\\\cline{2-5}
& $1$&$a+1$&$5t+2$&$(|\mathcal{O}|+1)n/2$\\\cline{2-5}
& $1$&$a+1$&$5t+2$&$(|\mathcal{O}|-1)n/2$\\\hline
\multirow{2}{*}{$(a=4t-2,1,2,1)$}& $1$&$a+2$&$7t-1$&$|\mathcal{O}|n/2$\\\cline{2-5}& $1$&$3a+4$&$21t-7$&$|\mathcal{O}|n/2$\\\hline
\multirow{3}{*}{$(a=4t-1,1,2,1)$}& $1$&$a+1$&$7t-1$&$|\mathcal{O}|n/2-(a+1)/2$\\\cline{2-5}&
$1$&$a+1$&$7t-1$&$|\mathcal{O}|n/2+(a+1)/2$\\\cline{2-5}&
$1$&$2a+4$&$14t+1$&$|\mathcal{O}|n/2$\\\hline
\multirow{2}{*}{$(a=4t,1,2,1)$}& $1$&$a+2$&$7t+2$&$|\mathcal{O}|n/2$\\\cline{2-5}&
$1$&$3a+4$&$21t+4$&$|\mathcal{O}|n/2$\\\hline
\multirow{1}{*}{$(a=4t+1,1,2,1)$}& $1$&$4a+6$&$7a+6$&$|\mathcal{O}|n/2$\\\hline
\multirow{2}{*}{$(a,1,a,1)$}& $a-2$&$a+2$&$2a+2$&$|\mathcal{O}|n/2$\\\cline{2-5}
& $2$&$2a+3$&$4a+2$&$|\mathcal{O}|n/2$\\\hline
\multirow{5}{*}{$(a,a,a,a)$}& $a^3-4a^2+6a-3$&$a$&$4a-4$&$|\mathcal{O}|n/2$\\\cline{2-5}
& $a$&$a+1$&$4a-2$&$(|\mathcal{O}|-1)n/2$\\\cline{2-5}
& $a$&$a+1$&$4a-2$&$(|\mathcal{O}|+1)n/2$\\\cline{2-5}
& $1$&$a+2$&$4a$&$|\mathcal{O}|n/2$\\\cline{2-5} & $2a-2$&$2a^2$&$8a^2-10a+4$&varied \\\hline \end{tabular} \caption{The behaviour of rowmotion on circular fences for small examples, where $n$ denotes the size of $\alpha$} \label{tab:rowmotion} \end{table}
Looking at Table~\ref{tab:rowmotion}, we see that even when we focus on the cases with at most $4$ parts, it is difficult to predict the numbers and lengths of the orbits in general. The connection to modular arithmetic is prevalent, but it is not always as simple as looking at the gcd of the parts, as in the case of two parts. When we consider partitions of type $(a,1,k,1)$ with $a>k$ for example, the structure seems to depend on $\text{gcd}(a,k+2)$ instead. Note that the statistic $\overline{\chi}$ is quite well behaved, with orbits or pairs of orbits averaging out to $|\mathcal{O}|n/2$ where $n=|\alpha|$.
\subsection{The case of $(a,a,a,a)$ and a note on orbomesy}
When a statistic has the same average on all orbits of the same size, it is called \emph{orbomesic}. Extending the idea of homomesy, the orbomesy phenomenon is introduced in \cite{rowmotion} and illustrated through a number of cases it applies to. Rowmotion on fences is a periodic operation and, except when we get shared elements, it acts on each segment independently according to its own size. The orbit structure therefore is determined by how often we get shared elements, i.e.\ how in sync the actions on the different segments are. In the examples of orbomesy given in \cite{rowmotion}, we often get groups of orbits that, though not isomorphic in a well-defined sense, are structurally equivalent and are formed by picking different pairings of moduli that are out of sync. As a result, they naturally have the same length, $\overline{\mathcal{M}}$ value and $\overline{\chi}$ value, resulting in an apparent orbomesy. In the circular case, we see that the orbomesy of $\overline{\chi}$ breaks down completely, in that we either get a full homomesy or we get pairs of orbits of the same size with different $\overline{\chi}$ values. We will now look at the case of $\overline{F}(a,a,a,a)$, where this is especially visible. \begin{thm} Rowmotion on $\overline{F}(a,a,a,a)$ has: \begin{itemize}
\item $a^3-4 a^2 +6a -3$ orbits of size $a$, satisfying $\overline{\mathcal{M}}(\mathcal{O})=4a-4$, $\overline{\chi}(\mathcal{O})=2a|\mathcal{O}|$,
\item $2a$ orbits of size $a+1$ with $\overline{\mathcal{M}}(\mathcal{O})=4a-2$, $a$ of them satisfying $\overline{\chi}(\mathcal{O})=2a(|\mathcal{O}|+1)$, the other $a$ satisfying $\overline{\chi}(\mathcal{O})=2a(|\mathcal{O}|-1)$.
\item $1$ orbit of size $a+2$, $\overline{\mathcal{M}}(\mathcal{O})=4a$, $\overline{\chi}(\mathcal{O})=2a|\mathcal{O}|$, \item $2a-2$ orbits of size $2a^2$ with $\overline{\mathcal{M}}(\mathcal{O})=8a^2-10a+4$, where for each $r \in \{0,1,\ldots,a-2\}$ we get two orbits whose $\overline{\chi}$ value is equal to $4a^3-2a^2+4a+4ra$. \end{itemize}
\end{thm}
\begin{proof} We will describe each of these orbits. As the total number of ideals represented, $a^4+4a^2+2$, matches $\overline{R}((a,a,a,a);1)$, there can be no other orbits. After describing the orbits, the statistics can be calculated via the simplified formulas: $$\overline{\mathcal{M}}(\mathcal{O})=\sum_{i\leq4}\left(b_i(a-1)+r_i\right),\qquad \overline{\chi}(\mathcal{O})=a(2|\mathcal{O}|+r_1-r_2+r_3-r_4).$$ \begin{itemize} \item The size $a$ orbits are the ones that contain no red tile. Each row contains one white and one black tile, where white tiles on consecutive rows fall on different columns, including rows $1$ and $4$. Placing the white tile on position $1$ of the first row, the rest of the white tiles can be placed in $(a-1)^3-(a-1)(a-2)=a^3-4 a^2 +6a -3$ ways. \item The size $a+1$ orbits have two red tiles that lie in different columns, which we can choose in $a$ ways. They either start on rows $1$ and $3$ or rows $2$ and $4$, giving us $2a$ such orbits in total. \item The unique size $a+2$ orbit contains $4$ red tiles: starting at rows $1$ and $3$ of column $1$ and rows $2$ and $4$ of column $3$. \item The largest orbits can be indexed with $r \in \{0,1,\ldots,a-2\}$, where we get a pair of orbits $\mathcal{O}^1_r$ and $\mathcal{O}^2_r$ of size $2a^2$ for each choice of $r$. $\mathcal{O}^1_r$ has the following positions for the red tiles:
\begin{minipage}{\dimexpr\textwidth-3cm} \begin{itemize}
\item [ROW 1:] $1+(a+1)t$ for $0\leq t\leq r$,
\item [ROW 2:] $a^2-(a+1)t$ for $1\leq t\leq a-1-r$,
\item [ROW 3:] $a^2+(a+1)t$ for $0\leq t\leq r$,
\item [ROW 4:] $2a^2-(a+1)t$ for $1\leq t\leq a-1-r$. \end{itemize} \xdef\tpd{\the\prevdepth} \end{minipage} $\mathcal{O}^2_r$ has the values for rows $2$ and $4$ flipped. \end{itemize}\end{proof}
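The theorem can be confirmed exhaustively for $a=2$ and $a=3$. The sketch below (illustrative Python; the poset encoding and helper names are ours, not the paper's) computes the orbits of $\overline{F}(a,a,a,a)$ and their statistics:

```python
def fence_covers(alpha):
    # Cover relations (lower, upper) of the circular fence poset on 0..n-1:
    # the segments of the composition alternately ascend and descend on a cycle.
    n, covers, pos, up = sum(alpha), [], 0, True
    for part in alpha:
        for _ in range(part):
            edge = (pos, (pos + 1) % n)
            covers.append(edge if up else edge[::-1])
            pos += 1
        up = not up
    return n, covers

def rowmotion_orbits(alpha):
    n, covers = fence_covers(alpha)
    leq = {(i, i) for i in range(n)} | set(covers)
    while True:                        # reflexive-transitive closure of covers
        new = {(a, d) for a, b in leq for c, d in leq if b == c} - leq
        if not new:
            break
        leq |= new
    ideals = [frozenset(i for i in range(n) if m >> i & 1)
              for m in range(1 << n)
              if all(m >> lo & 1 or not m >> hi & 1 for lo, hi in covers)]

    def row(I):    # ideal generated by the minimal elements of the complement
        comp = set(range(n)) - I
        mins = {x for x in comp if not any(y != x and (y, x) in leq for y in comp)}
        return frozenset(z for z in range(n) if any((z, x) in leq for x in mins))

    todo, orbits = set(ideals), []
    while todo:
        I, orbit = next(iter(todo)), []
        while I in todo:
            todo.remove(I)
            orbit.append(I)
            I = row(I)
        orbits.append(orbit)
    return orbits

stats = {}
for a in (2, 3):
    orbs = rowmotion_orbits((a, a, a, a))
    assert sum(len(o) for o in orbs) == a**4 + 4 * a**2 + 2
    stats[a] = sorted((len(o), sum(len(I) for I in o)) for o in orbs)
    print(a, stats[a])   # (orbit size, chi-bar) pairs
```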
The value $\overline{\mathcal{M}}$ seems to be orbomesic in this example; in fact that is the case in all examples listed in Table~\ref{tab:rowmotion}. That could be indicative of a general pattern, or could be because we are only looking at a very limited sample of examples. For $(a,a,a,a)$, the $\overline{\chi}$ values of orbits of size $2a^2$ are not just paired up but actually follow an arithmetic progression centered around $|\mathcal{O}|n/2$. It would be interesting to see if this trend continues in larger examples. \section{Comments, Questions and Future Directions}\label{sec:further} We list some questions and observations here that are of natural interest. \begin{itemize} \item \textbf{Bijective proofs:} The original starting point of this work was finding a bijective proof for the symmetry of the rank sequences of lower ideals of circular fences. The setwise complement of a size $k$ lower ideal is a size $n-k$ upper ideal. The tantalizingly simple notion of taking this setwise complement and then letting the beads (nodes belonging to the ideal) fall with gravity unfortunately did not work. When there are filled or empty sections, the algorithm cannot be described locally section by section, which makes for both a tricky description and a complicated proof. It is possible, however, that another perspective on the objects might lead to a more natural approach to the proof.
Fence posets are in bijective correspondence with a variety of combinatorial objects. An example is given by perfect matchings in snake graphs. Circular fence posets, similarly, can be viewed as a circular analogue of snake graphs, with the two ends identified. It is possible to get a bijection if the two ends are identified in a \emph{parity reversing} way to avoid any extra matchings forming, or by disallowing matchings that do not work in the non-circular case as done in \cite{bandgraphs}. The natural symmetries of this object are different and can possibly provide new insight.
\item \textbf{Rowmotion orbits under shifting:} We have seen in Lemma~\ref{lem:rowmotionflip} that the setwise complement map $\kappa$ gives a natural bijection between orbits of $\overline{F}(\alpha)$ and $\overline{F}(\text{sh}(\alpha))$ that takes $\overline{\chi}$ to $n|\mathcal{O}|-\overline{\chi}$, while fixing orbit length and $\overline{\mathcal{M}}$. The pairing up of $\overline{\chi}$ statistics seen in Table~\ref{tab:rowmotion} suggests that it might be possible to find a bijection that also \emph{fixes} $\overline{\chi}$. That would be exciting on two levels. It would confirm that in the circular case $\overline{\chi}$ is never truly orbomesic: either we have a true homomesy or we have orbits of the same size with different $\overline{\chi}$ values. It would also provide a bijective proof for the symmetry of the rank polynomial, possibly paving the way to a bijective proof of unimodality in the non-circular case.
\item \textbf{A Polyhedral Perspective: } A related and probably simpler question that we were unable to answer goes as follows. Given a composition $\alpha$ of $n$, consider the polytope $\overline{P}_{\alpha} \subset \mathbb{R}^n$ given by the indicator vectors of $\overline{J}(\alpha)$, the set of all lower ideals of the associated circular fence poset. Consider the sections of the polytope: \[\overline{P}_{\alpha}^t = \overline{P}_{\alpha} \cap \{x\in \mathbb{R}^n, \, \sum_{i} x_i = t\}.\] We have observed that the function $t \rightarrow \operatorname{Vol} \, \overline{P}_{\alpha}^t$ is symmetric about the point $n/2$, that is, \[\operatorname{Vol}(\overline{P}_{\alpha}^t) = \operatorname{Vol}(\overline{P}_{\alpha}^{n-t}),\qquad 0 \leq t \leq n.\] Interestingly, these polytopes are not necessarily combinatorially equivalent. The special case of compositions $(\alpha_1, 1, \ldots, \alpha_s, 1)$ of $n$ is of special interest (at least to the authors), as these are potentially the only compositions where we are yet to settle the unimodality problem for rank sequences. In this case, we can parse the question as follows. Define the polytope $H_{\alpha}$ by \[H_{\alpha} = \{x \in \prod_{i = 1}^{s} [0, \alpha_i + 1], \,\,\, x_i - x_{i+1\, (\operatorname{mod}\,s)\,} \leq \alpha_i, \, i = 1, \ldots, s\}.\] Then the above conjecture in this special case reduces to the claim that \[\operatorname{Vol}(\overline{H}_{\alpha}^t) = \operatorname{Vol}(\overline{H}_{\alpha}^{n-t}), \qquad 0 \leq t \leq n,\] where these polytopes are defined similarly to above. Again, we have equality of volumes despite the polytopes not necessarily being isomorphic. A natural explanation of this would be interesting. \item \textbf{Refinements of Unimodality: } In their paper \cite{Saganpaper}, McConville, Sagan and Smyth investigated the existence of chain decompositions as a possible method of proving unimodality.
Though unable to make progress in this direction, the examples we considered led us to believe that for circular fences (apart from the case $\alpha = (a, 1, a, 1)$, see Figure~\ref{fig:1417}) the associated lattices admit \emph{symmetric chain decompositions} and are thus \emph{strongly Sperner}. A resolution of this would be satisfying and would go some way towards elucidating the structure of fence and circular fence posets.
\item\textbf{Skew Young-Posets:} The boxes on the Ferrers diagram of a partition $\lambda$ have a natural poset structure, where ideals are in bijection with partitions whose Ferrers diagrams fit inside $\lambda$. From this viewpoint, fence posets can be viewed as the posets for certain skew-diagrams $\lambda\backslash \mu$ corresponding to maximal border strips. The unimodality of the corresponding rank polynomial in the non-skew case was studied previously by Stanton in 1990 (see \cite{stanton}), where he conjectured that self-dual partitions give rise to unimodal polynomials. He also provided several examples where unimodality fails. These examples are of a similar flavor to the examples in the circular case, in that the two largest entries are separated by a slightly smaller entry which provides the only violation of unimodality.
Progress towards a general classification has been limited since then. Zbarsky showed in \cite{nearrectangular} that when the partition $\lambda$ satisfies certain properties ensuring the parts are of similar sizes, the rank polynomial is unimodal. He also provided further examples where unimodality fails, and conjectured that in any example where unimodality fails, the rank polynomial is bimodal with the two modes separated by one entry only.
The tricky part about the case of the (possibly skew) diagrams, compared to the fences, is that the position of the mode (or modes) is harder to determine and does not necessarily lie around $q^{|\lambda|/2}$. Nevertheless, there is enough similarity to warrant a new look at this problem through the lens of skew-diagrams and possible circular analogues.
\item \textbf{Extremal Ranks: } For a fixed size, how does changing the shape of the fence affect the resulting rank sequence? It is simple to see that any maximum is achieved at a composition with parts $\leq 2$: \begin{prop} Let $\alpha'$ be obtained from a composition $\alpha$ by replacing a part of size $t\geq3$ by parts $t-2,1,1$. Then $\overline{R}(\alpha';q)-\overline{R}(\alpha;q)$ has non-negative integer coefficients. \end{prop} \begin{proof} As the rank polynomial is invariant under cyclic shifts, we can assume that the part of size $t$ is the first part. Let $x\preceq y \preceq z$ be the top nodes of the first segment. The ideals of $\alpha'$ can be obtained by replacing these relations with the weaker set $ y\succeq x \preceq z$, where $x$ and $z$ are incomparable. \end{proof}
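The proposition can be verified directly on a small instance, say $\alpha=(3,1)$ and $\alpha'=(1,1,1,1)$ (illustrative Python; the poset encoding and helper names are ours, not the paper's):

```python
def fence_covers(alpha):
    # Cover relations (lower, upper) of the circular fence poset on 0..n-1:
    # the segments of the composition alternately ascend and descend on a cycle.
    n, covers, pos, up = sum(alpha), [], 0, True
    for part in alpha:
        for _ in range(part):
            edge = (pos, (pos + 1) % n)
            covers.append(edge if up else edge[::-1])
            pos += 1
        up = not up
    return n, covers

def rank_vector(alpha):
    # Coefficient list of the rank polynomial R(alpha; q): the number of
    # lower ideals of each size, found by checking all subsets.
    n, covers = fence_covers(alpha)
    v = [0] * (n + 1)
    for m in range(1 << n):
        if all(m >> lo & 1 or not m >> hi & 1 for lo, hi in covers):
            v[bin(m).count("1")] += 1
    return v

old, new = rank_vector((3, 1)), rank_vector((1, 1, 1, 1))
print(old)   # [1, 1, 1, 1, 1]
print(new)   # [1, 2, 1, 2, 1]
assert all(x <= y for x, y in zip(old, new))   # pointwise inequality
```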
Our experiments suggest that the following is true. \begin{itemize}
\item Given any composition $\alpha$ of $n$, we conjecture that $r(\alpha) \leq r(1^n)$, where the inequalities are pointwise, i.e.\ we believe that for every $k$, the number of rank $k$ down ideals of $F(\alpha)$ is at most the number of rank $k$ down ideals of $F(1^n)$. \item More generally, we conjecture that for any fixed $k$ and $n$ where $k$ divides $n$ and any composition of $n$ with $k$ parts, we have that $r(\alpha) \leq r(n/k, \ldots, n/k)$. \end{itemize}
\end{itemize}
\end{document} |
\begin{document}
\date{}
\title{Depth in Bingo Closure}
\begin{abstract}
Bingo is played on a $5\times 5$ grid. Take the 25 squares to be the ground set of a closure system in
which square $s$ is {\em dependent} on a set $S$ of squares iff $s$ completes
a {\em line} --- a row, column, or diagonal --- with squares that are already in $S$.
The closure of a set $S$ is obtained via an iterative process in
which, at each stage, the squares dependent upon the current state are added.
In this paper we establish for the $n \times n$ Bingo board the
maximum number of steps required in this closure process.
\end{abstract}
\section{Introduction} \label{intro}
If you are playing a game of Bingo and you find that you already have 4 squares on any row, column, or
diagonal, then the fifth square on that line takes on special interest.
If you get that fifth square, it {\em completes} a line and you
win. Convexity, and more generally closure, is based on the idea
of filling gaps, closing holes, completing some set. A common and
convenient way to express this is in terms of {\em dependency}.
Here linear algebra is a model: a subspace is a set of vectors
which contains all vectors dependent on it.
\begin{figure}
\caption{Bingo Closure.}
\label{Bingen}
\end{figure}
Bingo closure, of course, can be considered on any $n\times n$ grid.
A square $s$ is {\em dependent} on a set $S$ of squares
iff $s$ completes a {\em line} --- a row, column, or diagonal --- with squares that are already in $S$.
Let $X$ denote the $n^2$ squares on an $n \times n$ grid.
Dependency defines a set map $\varphi: \textsf{Pow}(X) \rightarrow \textsf{Pow}(X)$ where
$\varphi(S)$ is the set of all squares dependent on $S$.
For example, in Board 1 of Figure \ref{Bingen}, let $S$ be the set of 12 squares marked by solid black dots.
Each of the 5 squares marked by open circles completes a line with
squares in $S$. Thus $\varphi(S)$ is the set $A$ of squares marked by open
circles. Our definition of dependency has the undesirable
peculiarity that a point {\em in} a set may fail to be dependent on
that set. In particular, it is the case in Board 1 that no square in $S$ is dependent on $S$. However, if
we look at $S \cup A$, both diagonals belong to $S \cup A$ and are
also dependent on $S \cup A$.
A set map is {\em isotone} provided $\varphi(A) \subseteq \varphi(B)$ whenever $A \subseteq B \subseteq X$.
A set map is {\em expansive} provided $A \subseteq \varphi(A)$ for all $A \subseteq X$.
A set map over a finite set $X$ that is isotone and expansive is {\em dolmatic\footnote{From the Turkish verb ``dolmak''
which means ``to fill''.}}. In general, the first step in computing the $\varphi$-closure is to produce
the dolmatic extension $\varphi_*(A)$ of $\varphi$:
\[
\varphi_*(A) := A \cup \bigcup \{\varphi(S): S \subseteq A \}.
\]
In the case of Bingo closure the dependency map $\varphi$, as defined above,
is already isotone, so we can get its dolmatic extension as $\varphi_*(A) := A \cup \varphi(A)$.
The importance of having a dolmatic function is that its iterates
are always increasing and hence, in the finite case, eventually
stabilize at the closure. Here a set is {\em closed} iff it
contains all points dependent on it, and the {\em closure} of a set
$S$ is the smallest closed set containing $S$. These ideas are
discussed in a rather different setting in \cite{JN} and are
treated in abstract generality in \cite{Haus, Jam}.
In this paper, we answer the question for Bingo closure:
\textsf{What is the maximum number of times the dolmatic map
must be applied to obtain the closure of a set?}
\section{Maximum Depth for the Bingo Closure}
Throughout the rest of this paper $X$ will denote the set of $n^2$ squares on an $n \times n$ grid.
The closure is the Bingo closure described above. The Bingo
closure of a set $S$ will be denoted by $\mathscr{C}(S)$.
The {\em depth} of a set $S$ is the number of iterations of the
dolmatic dependency map $\varphi_*$ required to obtain the closure $\mathscr{C}(S)$ of $S$.
In other words, the {\em depth} of $S$ is the smallest $d$ such that $\varphi_*^{d+1}(S) = \varphi_*^d(S)$.
A set $S$ {\em spans} $X$ provided its closure is $X$ --- that is, $\mathscr{C}(S) = X$.
The little lemma below helps to show that the maximum depth can be
achieved only by a spanning set.
\begin{lemma}
\label{prop}
If $K$ is closed but not all of $X$, then there are at least 4
points of $X$ not in $K$ and at least 4 lines not contained in $K$.
\end{lemma}
\begin{proof}
Suppose $p$ is not in $K$. Since $K$ is closed, $K$ must be missing another point on the row containing $p$ and another on its column. Say $r$ and $c$ are the points
missing from $K$ in the row and column of $p$, respectively. Since $r$ is
missing from $K$, there is another point $q$ missing from its column.
Clearly $q \ne c$ since $p$ and $r$ are different points in the
same row and hence lie in different columns. Thus 4 points are
missing. The rows through $p$ and $c$ are not contained in $K$, nor are the
columns through $p$ and $r$. These are 4 lines not contained in $K$.
\end{proof}
\begin{figure}
\caption{A $9\times9$ set of maximum depth}
\label{odd}
\end{figure}
\begin{theorem}
\label{mainthm}
a) The depth of any non-spanning set $S$ on an $n\times n$ board is at most $2n-2$.
b) The depth of a spanning set $S$ is at most $2n$.
\end{theorem}
\begin{proof}
Let $K = \mathscr{C}(S)$ be the Bingo closure of $S$ and suppose the depth of $S$ is $d$.
Each iteration of the dependency map completes at least one line in $K$. Obviously, once a line
is completed, it cannot be completed again. Thus the number of steps required to obtain the closure $K$
is bounded by the number $\lambda$ of lines contained in $K$. That is, $d \le \lambda$.
The number of lines in $X$ is $2n+2$, so if $S$ is not spanning, then by Lemma \ref{prop} the number of lines in $K$ is at most
$2n-2$. Therefore (a) is established, so we can assume $S$ is
spanning and $K = X$.
Now consider the last element $z$ added in forming the closure $\mathscr{C}(S) = X$ of $S$. If $z$ is to be last,
then it must complete all lines in $X$ through $z$. There are at least
two such lines (a row and a column) that are not complete before $z$ is added.
Thus two lines are used in the same iteration, giving an upper bound of $2n+1$.
Now, in this case the last element added in forming the closure comes from two
lines simultaneously. Hence two elements must also be added
to the closure in the penultimate iteration. This also uses two lines in the same
iteration, and so the upper bound becomes $2n$.
\end{proof}
Board 2 of Figure \ref{Bingen} shows a spanning set of depth 10 in
the traditional Bingo board with $n = 5$. The spanning set $S$
consists of those squares marked with black dots. The numbers in
the other squares indicate the order in which they are picked up by
the dolmatic dependency map in forming the closure.
\begin{figure}
\caption{A $10\times10$ set of maximum depth}
\label{even}
\end{figure}
\begin{theorem}
For $n\geq 5$, the Bingo board of side $n$ has a spanning set $S$ of depth at least $2n$.
\end{theorem}
\begin{proof}
The proof is by construction of such a set of depth $2n$ for each $n\geq 5$.
We proceed by induction, constructing the larger cases from smaller ones.
We must split into the cases of even and odd $n$. The base
case for the odd construction is given by Board 2 in Figure \ref{Bingen}.
Larger boards can be obtained by spiraling around the base case as
shown in Figure \ref{odd}. The base case is in the $5 \times 5$ rectangle
enclosed by double lines. The numbers inside the base case have
been increased by 8 since those squares appear 8 steps later in the
closure process. The 4 new rows and 4 new columns get used
first.
The situation is analogous in the even case, with the base case given in Board S of Figure \ref{small}.
Figure \ref{even} shows the construction for $n = 10$. Again the base case is
in the $6 \times 6$ rectangle enclosed by double lines.
For larger grids the same spiraling technique can be used.
\end{proof}
For $n<5$, the maximum depth is less than $2n$. For $n=1$ and $n=2$, the maximum depth is
clearly just one. For $n=3$, the maximum depth is 4. This is given
by the construction below in Board 3 of Figure \ref{small}. This could be seen
by merely enumerating all possible $3\times 3$ boards.
Instead, however, let us consider again the last piece filled in.
The set of the last four squares to be filled is
significant and is the primary technique used in this paper in
analyzing the maximum depth of small grids. The proof technique of Theorem \ref{mainthm} shows that these last four squares form the corners of a
rectangle. Hence they will be called the {\em final rectangle}.
The comments above show that the last piece filled in comes from a
rectangle of four, and that this rectangle must have depth at most three.
Now to have depth three there must be a diagonal line which
is on precisely one of the squares on the rectangle of four.
No such rectangle exists on a $3\times 3$ board, so the depth of the final rectangle is at most 2. Hence because each
rectangle lies on both diagonals, there are at least $4$ wasted lines.
Hence the maximum depth is indeed $2\cdot 3+2-4=4$.
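The closure computation is easy to carry out mechanically, so the small cases above can be verified by brute force. The sketch below is my own check, not part of the paper; it assumes the dependency map adds, in each iteration, every square that is the unique missing square of some line (the lines being the $n$ rows, $n$ columns, and the two diagonals). An exhaustive search then reproduces the maximum depths quoted for $n=2$ and $n=3$.

```python
from itertools import combinations

def lines(n):
    """The 2n+2 lines of an n-by-n Bingo board: rows, columns, two diagonals."""
    ls = [frozenset((r, c) for c in range(n)) for r in range(n)]
    ls += [frozenset((r, c) for r in range(n)) for c in range(n)]
    ls.append(frozenset((i, i) for i in range(n)))
    ls.append(frozenset((i, n - 1 - i) for i in range(n)))
    return ls

def step(s, ls):
    """One iteration of the dependency map: fill every square that is
    the unique missing square of some line."""
    return s | {next(iter(l - s)) for l in ls if len(l - s) == 1}

def depth(s, n):
    """Smallest d such that applying step d+1 times equals applying it d times."""
    ls, d = lines(n), 0
    while (t := step(s, ls)) != s:
        s, d = t, d + 1
    return d

def max_depth(n):
    """Maximum depth over all subsets of the n-by-n board (exhaustive search)."""
    squares = [(r, c) for r in range(n) for c in range(n)]
    return max(depth(frozenset(sub), n)
               for k in range(len(squares) + 1)
               for sub in combinations(squares, k))
```

For example, the spanning set $\{(0,0),(0,1),(1,2)\}$ of the $3\times3$ board closes in four iterations, matching the stated maximum.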
\begin{figure}
\caption{Small cases}
\label{small}
\end{figure}
For $n=4$ the analysis is a little more tricky. This is because of the necessary overlap of the diagonals and the final rectangle. There are several cases to worry about though.
If a rectangle is left empty, four squares are left empty and there are at least $5$ wasted lines (two horizontal, two vertical, and at least one diagonal), giving a maximum depth of $2\cdot 4 + 2 - 5=5$. If, on the other hand, we are to generate all $16$ tiles, then the final rectangle must overlap either both diagonals or a single diagonal twice. If it meets a single diagonal twice, the final rectangle cannot be completed via a diagonal, and thus cannot be completed at all. This leaves only the case that it contains both diagonals. Hence one of the squares must be filled from two lines, one diagonal and one horizontal or vertical. This gives additional wasted lines: the diagonal, and whatever line filled the square that used the diagonal. This gives a maximum depth of $2\cdot 4 + 2 - 2 - 1 - 1=6$. Such a construction is given below in Board 4 of Figure \ref{small}.
\end{document} |
\begin{document}
\title{Teleportation of qubit states through dissipative channels:\\ Conditions for surpassing the no-cloning limit}
\author{\c{S}ahin Kaya \"Ozdemir$^*$}
\affiliation{SORST Research Team for Interacting Carrier Electronics, 4-1-8 Honmachi, Kawaguchi, Saitama 331-0012, Japan} \affiliation{CREST Research Team for Photonic Quantum Information, 4-1-8 Honmachi, Kawaguchi, Saitama 331-0012, Japan} \affiliation{Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531, Japan}
\author{Karol Bartkiewicz\footnote{These authors have made equal contribution.}} \affiliation{Institute of Physics, Adam Mickiewicz University, 61-614 Pozna\'n, Poland}
\author{Yu-xi Liu} \affiliation{Frontier Research System, Institute of Physical and Chemical Research (RIKEN), Wako-shi 351-0198, Japan} \affiliation{CREST, Japan Science and Technology Agency (JST), Kawaguchi, Saitama 332-0012, Japan}
\author{Adam Miranowicz} \affiliation{Institute of Physics, Adam Mickiewicz University, 61-614 Pozna\'n, Poland} \affiliation{SORST Research Team for Interacting Carrier Electronics, 4-1-8 Honmachi, Kawaguchi, Saitama 331-0012, Japan} \affiliation{Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531, Japan}
\begin{abstract} We investigate quantum teleportation through dissipative channels and calculate teleportation fidelity as a function of damping rates. It is found that the average fidelity of teleportation and the range of states to be teleported depend on the type and rate of the damping in the channel. Using the fully entangled fraction, we derive two bounds on the damping rates of the channels: one is to beat the classical limit and the second is to guarantee the non-existence of any other copy with better fidelity. The effect of the initially distributed maximally entangled state on the process is presented, and the concurrence and the fully entangled fraction of the shared states are discussed. We show that prior information on the dissipative channel and the range of qubit states to be teleported is helpful for the evaluation of the success of teleportation, where success is defined as surpassing the fidelity limit imposed by the fidelity of the 1-to-2 optimal cloning machine for the specific range of qubits. \end{abstract} \pacs{03.67.Hk, 03.65.Ud, 03.65.Ta} \date{\today} \pagestyle{plain} \pagenumbering{arabic} \maketitle
\section{Introduction}
The quantum state of a system can be transmitted from a location to a distant one using only classical information provided that a quantum channel exists between the sender and the receiver. Sharing entangled states between the two parties opens the necessary quantum channel \cite{HorodeckiReview}. Research in quantum state transfer \cite{cloning}, especially quantum teleportation \cite{Bennett}, has emerged as one of the major research areas of theoretical and experimental quantum mechanics. Various discussions and criteria have appeared about the evaluation of state transfer under ideal and imperfect conditions \cite{Holger}. In a perfect scheme, the shared entangled state is a maximally entangled state (MES) enabling perfect quantum state transfer. However, in practice, entanglement is susceptible to local interactions with the environment, which can result in loss of coherence. In this article, we study the teleportation of qubits through damping channels.
We consider quantum state transfer as an operation, such as cloning and teleportation, which beats the classical limits on measurement and transmission. The resemblance of two quantum states and the properties of quantum state transfer (teleportation and cloning) are quantified by the fidelity $F(|\psi_{{\rm in}}\rangle)=\langle\psi_{{\rm in}}|\hat{\rho}_{{\rm out}}|\psi_{{\rm in}}\rangle$, which measures the overlap of the states $|\psi_{\rm in}\rangle$ to be teleported (cloned) and the output state with the density operator $\hat{\rho}_{{\rm out}}$.
A qubit state to be teleported $|\psi_{{\rm in}}\rangle=\alpha|0\rangle+\beta|1\rangle$ with
$|\alpha|^{2}+|\beta|^{2}=1$ can be represented on a Bloch sphere as \begin{equation}\label{ktyu}
|\psi_{\rm in} \rangle=\cos(\delta/2)\E^{i\gamma}|0 \rangle
+\sin(\delta/2)|1\rangle, \end{equation} where $\delta$ and $\gamma$ are the polar and azimuthal angles, respectively. Since this state is generally unknown, it is more appropriate to calculate the average of the fidelity
$F(|\psi_{{\rm in}}\rangle)$ over all possible states $|\psi_{{\rm in}}\rangle$ to quantify the process. This average fidelity
$F=\overline{\langle\psi_{{\rm in}}|\hat{\rho}_{{\rm out}}|\psi_{{\rm in}}\rangle}$ \cite{Popescu} can be calculated as \begin{eqnarray}\label{a1} F=\frac{1}{4\pi}\int_{0}^{2\pi}d\gamma\int_{0}^{\pi}d\delta F(\delta,\gamma)\sin\delta, \end{eqnarray} where $4\pi$ is the full solid angle.
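As a numerical illustration of this averaging (a sketch of my own, not part of the original derivation): for the classical strategy of measuring the qubit in a fixed basis and resending the outcome, $F(\delta,\gamma)=\cos^{4}(\delta/2)+\sin^{4}(\delta/2)$, and the integral above evaluates to the well-known classical limit $2/3$.

```python
import math

def average_fidelity(f, n=400):
    """Midpoint-rule evaluation of (1/4pi) * Int f(delta, gamma) sin(delta)
    d(delta) d(gamma) over the whole Bloch sphere."""
    dd, dg, total = math.pi / n, 2 * math.pi / n, 0.0
    for i in range(n):
        delta = (i + 0.5) * dd
        for j in range(n):
            gamma = (j + 0.5) * dg
            total += f(delta, gamma) * math.sin(delta)
    return total * dd * dg / (4 * math.pi)

# Fixed-basis measure-and-resend: F(delta, gamma) = cos^4(d/2) + sin^4(d/2).
classical = average_fidelity(lambda d, g: math.cos(d / 2) ** 4
                             + math.sin(d / 2) ** 4)
```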
The relation between the teleportation fidelity and the degree of entanglement shared by the parties has been studied by many researchers (e.g. in \cite{Bennett,Holger,Horodecki,Popescu,Grosshans,Band,Carlo,Gordon} and others cited in \cite{HorodeckiReview}) and it has been shown that (i) less entangled quantum channel reduces the fidelity and the range of states, which can be teleported \cite{Bennett}, (ii) for the standard teleportation scheme, the maximum attainable average fidelity is simply related to the fully entangled fraction of a bipartite entangled state \cite{Horodecki}, and (iii) some mixed states, which do not violate the Bell inequalities, can still be used for teleportation \cite{Popescu}. On the other hand, only a few studies are directed to the relation between the fidelity of teleportation and the type and strength of the damping in the quantum channel. That is the topic of the present study.
According to the definition of teleportation as stated by Bennett {\em et al.} \cite{Bennett}, in the process of quantum teleportation, one can construct an exact replica of the original unknown quantum state at the cost of destroying the original state. Therefore, to call a quantum state transfer operation quantum teleportation, the process should not only generate output states with better qualities than what can be done classically but also obey the no-cloning theorem \cite{Zurek}. Defining a teleportation operator $\hat{U}_{\rm tel}$, which can be implemented in a standard quantum circuit (see, e.g., \cite{Nielsen}) with an input state $\hat{\rho}_{\rm in}=\hat{\rho}_{a}$ and a shared entangled state $\hat{\rho}_{\rm ent}=\hat{\rho}_{b,c}$, the output state $\hat{\rho}_{\rm out}$ is written as \begin{equation}\label{cc1} \hat{\rho}_{\rm out}={\rm Tr}_{\rm in,a}[\hat{U}_{\rm tel}\hat{\rho}_{\rm in} \otimes\hat{\rho}_{\rm ent} \hat{U}^{\dagger}_{\rm tel}]. \end{equation} If the teleportation process is ideal then $\hat{\rho}_{\rm out}= \hat{\rho}_{\rm in} $, implying a fidelity value of unity. However, in practical applications, this is not the case due to the presence of noise, which may be due to (i) noisy sources of $\hat{\rho}_{\rm in}$ and $\hat{\rho}_{\rm ent}$, (ii) a noisy entanglement distribution channel, (iii) noisy measurements and unitary operators, and (iv) an eavesdropper who attempts to clone $\hat{\rho}_{\rm in}$. Since, in general, one cannot be sure which of the above is the reason, all the noise in the process should be attributed to an eavesdropper in order to assess the security whenever quantum teleportation is to be used as a means of secure communication. This assessment to quantify the process should be done according to the definition of teleportation given above. That is, one should check to see whether $F$ in Eq. (\ref{a1}) satisfies the conditions of (i) beating the classical limit, and (ii) obeying the no-cloning theorem.
The linearity of quantum mechanics forbids the exact cloning of an unknown quantum state, however, if one allows discrepancies between the original quantum state and its copy, then it is possible to devise a scheme that can produce clones and copies of a given unknown state with the highest resemblance to the original one \cite{Buzek,Gisin,Buzek1,Bruss1} (for reviews see \cite{cloning}). This is known as the {\em optimal cloning}, where with the increasing number of clones (copies), the resemblance to the original state decreases. It has been shown that for a state-independent universal cloning machine the relation between the optimum fidelity $F$ of each copy and the number $M$ of copies is given by $F=(2M+1)/(3M)$. In classical situations, one can make infinite number of copies ($M\rightarrow\infty$) of a given state resulting in a fidelity $F=2/3$, which is the best one can do with classical operations. On the other hand, when $M=2$, the universal cloning machine has an optimum fidelity of $F=5/6$ \cite{Buzek,Gisin,Buzek1,Bruss1}.
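The quoted values follow directly from the cloning-fidelity formula; as a trivial numerical check (a sketch, not from the paper):

```python
def optimal_clone_fidelity(m):
    """Fidelity of each copy produced by the 1 -> M universal optimal cloner,
    F = (2M + 1) / (3M)."""
    return (2 * m + 1) / (3 * m)

# M = 2 gives the no-cloning bound 5/6; M -> infinity tends to the classical 2/3.
```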
Combining the above information on teleportation and cloning, one can infer that a teleportation process beats the classical limit if $F>2/3$, and obeys the no-cloning requirement if $F>5/6$ \cite{Buzek,Gisin,Buzek1,Bruss1}. If this is assured, then there is no other copy of the output state with better fidelity; therefore, the teleportation process is secure. It is noteworthy that this is true if and only if the quantum state $\hat{\rho}_{\rm in}$ is completely unknown to the eavesdropper. In some cases, $\hat{\rho}_{\rm in}$ may be prepared in a state that is selected from a known ensemble of states. If the eavesdropper has this a priori knowledge about $\hat{\rho}_{\rm in}$, a state-dependent cloner which can perform better than the optimal universal one can be constructed. Thus, the fidelity constraint imposed on teleportation due to the no-cloning condition will become much stricter.
Quality of the shared entangled state is a good criterion to quantify the reliability of the quantum teleportation. Bennett {\em et al. } \cite{Bennett2} and, in general case, Horodecki {\em et al. } \cite{Horodecki,Badziag} (for a review see \cite{HorodeckiReview}) have shown that for a shared bipartite entangled state $\hat{\rho}_{\rm ent}$ to be useful for quantum teleportation, its fully entangled fraction $f_{{\rm ent}}$, defined by \cite{Bennett1} \begin{equation}\label{dd1}
f_{{\rm ent}}={\max_{\Phi}}~\langle\Phi|\hat{\rho}_{{\rm ent}}|\Phi\rangle, \end{equation}
must be greater than $1/2$. In Eq. (\ref{dd1}), maximum is taken over all MES $|\Phi\rangle$. It has also been shown that, the maximum achievable teleportation fidelity $F$ is related to $f_{{\rm ent}}$ by \cite{Horodecki} \begin{equation}\label{ee1}
F=\frac{2f_{{\rm ent}}+1}{3}. \end{equation} States with $f_{{\rm ent}}\leq1/2$ cannot be used directly for teleportation unless they are enhanced through filtering to satisfy $f_{{\rm ent}}>1/2$. Choosing the boundary value of $f_{{\rm ent}}=1/2$ gives a teleportation fidelity of $F=2/3$, which is the boundary between classical and quantum state transfer. That is, if $f_{{\rm ent}}\leq1/2$, and hence $F\leq2/3$, then the same operation can be done classically. According to this definition, if in a process $F>2/3$ is achieved then it can be called quantum teleportation. On the other hand, the discussion on cloning in the former paragraphs implies that one can make an infinite number of copies of a qubit with a fidelity of $2/3$, which violates the original definition of quantum teleportation given by Bennett {\em et al. } \cite{Bennett}. Here again arises the question of the achievable teleportation fidelity, which guarantees better-than-classical teleportation and surpasses the no-cloning limit.
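Via this relation the two benchmarks translate into thresholds on the fully entangled fraction: $f_{\rm ent}=1/2$ sits exactly on the classical boundary $F=2/3$, and $f_{\rm ent}=3/4$ sits exactly on the no-cloning boundary $F=5/6$. A one-line numerical check (sketch):

```python
def max_teleport_fidelity(f_ent):
    """Maximum average teleportation fidelity, F = (2 f_ent + 1) / 3."""
    return (2 * f_ent + 1) / 3
```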
The problem studied in this paper can be formulated as follows: Alice and Bob are far from each other, and they share an entangled quantum state $\hat{\rho}_{\rm ent}$. The entangled state is prepared either by a third party, say Claire, and delivered to Alice and Bob (scenario 1: two-qubit affected scenario) or prepared by Alice, with one of the qubits sent to Bob and the other kept with her (scenario 2: one-qubit affected scenario)
as shown in Fig.~\ref{fig1}. The only manipulations that Alice and Bob are allowed to perform are local quantum operations and classical communication. Now suppose that Alice wants to transfer the quantum state represented by the qubit state $|\psi_{\rm in}\rangle$ to Bob and the entangled state is distributed through a dissipating channel. Then, how does the dissipation of the channel affect the entanglement properties of the distributed entangled state, and hence what is its effect on the transferred quantum state? What is the allowable amount of dissipation that does not affect the security of quantum state transfer?
In this paper, we derive the damping rates of quantum channels at which quantum state transfer overcoming its classical counterpart can be realized. In the same way, conditions that guarantee secure quantum teleportation are also derived. We study the effect of noise on the range of qubits that can be teleported accurately. The noisy channels, including the amplitude damping, phase damping, and depolarizing channels, and the effects of these noisy channels on the distributed entanglement and the teleportation process are studied in Secs. II and III. Finally, Sec. IV includes a brief summary and conclusion of this study. \begin{figure}\label{fig1}
\end{figure}
\section{Effect of Damping Channels on Entanglement and Teleportation}
We consider the two scenarios shown in Fig.~\ref{fig1}. In the first scenario, the qubits of the initial MES are distributed through two channels, which may or may not have the same damping properties. On the other hand, in the second scenario, only one of the qubits of the initial MES is distributed through the damping channel. In the following, we give analytical expressions, which show how a given state is affected when transmitted through noisy channels causing amplitude damping, phase damping, or depolarization. The initial MES considered in this study are the Bell states \begin{eqnarray}
|\psi^{\pm}\rangle&=&\frac{1}{\sqrt{2}}(|01\rangle\pm|10\rangle),
\nonumber \\
|\phi^{\pm}\rangle&=&\frac{1}{\sqrt{2}}(|00\rangle\pm|11\rangle). \label{N1} \end{eqnarray} We derive the bounds for the damping rate of the channel to satisfy the quantum teleportation conditions discussed in the previous section. In the following, we assume that there is no a priori information on $\hat{\rho}_{\rm in}$, therefore the optimal universal cloning machine which imposes $F>5/6$ is considered.
\subsection{Amplitude Damping Channel}
The evolution of environment (denoted by subscript $e$) and a system (subscript $a$ or, equivalently, $b$) with the states
$|0\rangle$ and $|1\rangle$ is defined by the following transformation in the presence of the amplitude damping channel (ADC) \cite{Preskill}: \begin{eqnarray}\label{N01}
|0\rangle_{a}|0\rangle_{e}&&\rightarrow|0\rangle_{a}|0\rangle_{e}\nonumber\\
|1\rangle_{a}|0\rangle_{e}&&\rightarrow\sqrt{q}
|1\rangle_{a}|0\rangle_{e}+\sqrt{p}|0\rangle_{a}|1\rangle_{e} \end{eqnarray} where $q\equiv1-p$. This transformation implies that a system in the excited state makes a transition to the ground state with probability $p$, emitting a photon into the environment, which makes a transition to its excited state. When the system is initially in the ground state, there is no transition.
\subsubsection{Input Bell states $|\psi^{\pm}\rangle$}
If both of the qubits in the Bell states $|\psi^{\pm}\rangle$ are transmitted through an ADC (scenario 1), then using Eq. (\ref{N01}), we can write the state at the output of the channel as \begin{eqnarray}\label{N02}
|\Psi^{\pm}\rangle_{abe_{1}e_{2}}
&=&\frac{1}{\sqrt{2}}[(\sqrt{q_{b}}|01\rangle _{ab}\pm\sqrt{q_{a}}|10\rangle _{ab})|00\rangle _{e_{1}e_{2}}~~~\nonumber\\
&&+(\sqrt{p_{b}}|01\rangle _{e_{1}e_{2}}\pm\sqrt{p_{a}}|10\rangle _{e_{1}e_{2}})|00\rangle _{ab}], \end{eqnarray} where we assumed that channels have different damping rates denoted by $p_{a}$, $p_{b}$ and, for simplicity, we denote $q_a\equiv 1-p_a$, $q_b\equiv 1-p_b$. If we assume $p_{a}=p_{b}=p$ and the environment is not monitored (unwatched channel), the shared state between Alice and Bob at the outputs of the channels can be found by tracing out the environment variables resulting in
$\hat{\rho}^{\pm}_{ab}=q|\psi^{\pm}\rangle_{ab\;ab}\langle\psi^{\pm}|
+p|00\rangle_{ab\;ab}\langle00|$. It is seen that the MES survives with a probability of $q$. On the other hand, if the environment is monitored, Alice and Bob proceed with the protocol if no photon is detected in the environment, implying that they have a MES, and they do nothing when a photon is detected in the environment.
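This reduced state can be reproduced numerically. The sketch below (my own check, not part of the paper) writes the ADC in Kraus form, $K_{0}={\rm diag}(1,\sqrt{q})$ and $K_{1}=\sqrt{p}\,|0\rangle\langle1|$, applies it independently to both qubits of $|\psi^{+}\rangle$ with $p_{a}=p_{b}=p$, and recovers $q|\psi^{+}\rangle\langle\psi^{+}|+p|00\rangle\langle00|$:

```python
import numpy as np

def adc_kraus(p):
    """Kraus operators of the amplitude damping channel with rate p."""
    return [np.array([[1, 0], [0, np.sqrt(1 - p)]]),
            np.array([[0, np.sqrt(p)], [0, 0]])]

def two_qubit_channel(rho, kraus_a, kraus_b):
    """Apply independent single-qubit channels to the two halves of rho."""
    return sum(np.kron(ka, kb) @ rho @ np.kron(ka, kb).conj().T
               for ka in kraus_a for kb in kraus_b)

p = 0.3
psi_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)
rho_out = two_qubit_channel(np.outer(psi_plus, psi_plus),
                            adc_kraus(p), adc_kraus(p))

ket00 = np.eye(4)[0]
expected = (1 - p) * np.outer(psi_plus, psi_plus) + p * np.outer(ket00, ket00)
```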
If only one of the qubits (say, that for Bob) is sent through the channel (scenario 2), the damping in the channel affects only that part. If the channel is watched and no photon is detected in the environment, the state that is shared between Alice and Bob becomes \begin{eqnarray}\label{N03}
|\Psi^{\pm}\rangle_{ab}=\frac{1}{\sqrt{2-p_{b}}}(\pm\sqrt{q_{b}}
|01\rangle _{ab}+|10\rangle _{ab}). \end{eqnarray} For an unwatched channel, the shared state is given as \begin{equation}\label{N04}
\hat{\rho}^{\pm}_{ab}=\frac{1}{2}[(2-p_{b})|\Psi^{\pm}\rangle_{ab\,ab}
\langle\Psi^{\pm}| +p_{b}|00\rangle_{ab\,ab}\langle00|]. \end{equation} It is clearly seen that if only one qubit of the initial MES is sent through the ADC, the shared state between the parties is no longer a MES.
Using Eq. (\ref{N02}), one can find that for scenario 1, the fully entangled fraction is \cite{Band} \begin{eqnarray}\label{gg1} f^{\pm}_{\rm ent,1}=\frac{1}{4}(\sqrt{q_{a}}+\sqrt{q_{b}})^{2} \end{eqnarray} if $p_{a}\le \frac12 (q_{b}+\sqrt{1+2p_{b}-3p_{b}^2})$ is satisfied, otherwise it becomes \begin{eqnarray}\label{gg1a} f^{\pm}_{\rm ent,2}&=&\frac{1}{4}(p_{a}+p_{b}). \end{eqnarray} If we assume that both channels have the same damping properties, that is, $p_{a}=p_{b}=p$, the fully entangled fraction is found as \begin{equation}\label{gg1abc}
f^\pm_{{\rm ent}}= \left\{ \begin{array}{cc}
q ~~~~~~~~~~~~~&\text{if $p\le 2/3$;} \\
& \\
p/2 ~~~~~~~~~~~~~& \text{if $p> 2/3$. } \end{array}\right. \end{equation} For scenario 2, where $p_{a}=0$, $f^{\pm}_{\rm ent}$ becomes $f^{\pm}_{\rm ent}=\frac{1}{4}(1+\sqrt{q_{b}})^{2}$ for all $p_{b}$.
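These piecewise expressions are easy to confirm numerically. The sketch below (my own check, not from the paper) evaluates $\langle\Phi|\hat{\rho}|\Phi\rangle$ over the four Bell states, on which the maximum happens to be attained for these particular states, for $\hat{\rho}=q|\psi^{+}\rangle\langle\psi^{+}|+p|00\rangle\langle00|$:

```python
import numpy as np

s = 1 / np.sqrt(2)
BELL = [np.array([0, s, s, 0]),    # |psi+> = (|01> + |10>)/sqrt(2)
        np.array([0, s, -s, 0]),   # |psi->
        np.array([s, 0, 0, s]),    # |phi+>
        np.array([s, 0, 0, -s])]   # |phi->

def f_ent_bell(rho):
    """Fully entangled fraction, maximized over the four Bell states only."""
    return max(float(np.real(b.conj() @ rho @ b)) for b in BELL)

def shared_state(p):
    """Unwatched-ADC output q|psi+><psi+| + p|00><00| (equal damping rates)."""
    ket00 = np.eye(4)[0]
    return (1 - p) * np.outer(BELL[0], BELL[0]) + p * np.outer(ket00, ket00)

def predicted(p):
    """Piecewise formula from the text: q for p <= 2/3, else p/2."""
    return 1 - p if p <= 2 / 3 else p / 2
```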
Imposing the condition $f_{{\rm ent}}>1/2$, which assures that a quantum state operation beats the classical limit, gives the relation $\sqrt{q_{a}}+\sqrt{q_{b}}>\sqrt{2}$ for scenario 1 \cite{Band}. Taking $p_{b}$ as a variable, it can be found that $p_{a}<2(\sqrt{2}-1)$ and $p_{b}\leq p_{a}-2+2\sqrt{2q_{a}}$ must be satisfied simultaneously \cite{Band}. Similarly, for the case $p_{a}=p_{b}=p$, one can find that the classical limit can be beaten only when $p<1/2$. For scenario 2, it can easily be shown that $p_{b}<2(\sqrt{2}-1)$ must be satisfied. If the channels are watched but no photon is detected, then $f_{{\rm ent}}>1/2$ can always be satisfied provided that $p_{a}<1\wedge p_{b}<1$ and $p_{b}<1$ for scenarios 1 and 2, respectively.
If the conditions given in the above paragraph for unwatched channels are satisfied, one can only be sure that the operation is a quantum one with fidelity $F>2/3$, however, cannot be sure about the security of the process, which requires $F>5/6$ according to $1\rightarrow2$ cloning condition. Then solving Eq. (\ref{ee1}) for $f_{{\rm ent}}$ to satisfy $F>5/6$, we find that \begin{eqnarray}\label{gg1b} f_{{\rm ent}}>\frac{3}{4} \end{eqnarray} must be satisfied for the fully entangled fraction. Imposing this condition on the shared entangled state between the two parties results in a much tighter condition on the channel damping rates, which can be summarized as follows \begin{eqnarray} && f^{\pm}_{\rm ent}>\frac{3}{4}~ \text{if}\nonumber\\ \nonumber\\ &&\left\{ \begin{array}{cc} p<\frac{1}{4}&\text{scenario 1} \\
& \text{for } p_{b}=p_{a}=p,\\
& \\ p_{b}<p_{a}-3+2\sqrt{3q_{a}}&\text{scenario 1}\\ p_{a}\leq 2\sqrt{3}-3\text{ or vice versa}&\text{for $p_{b}\neq p_{a}$},\\ & \\ p_{b}< 2\sqrt{3}-3& \text{scenario 2}. \end{array}\right.
\end{eqnarray}
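Each of these rates is exactly the point where the corresponding expression for $f^{\pm}_{\rm ent}$ crosses its threshold; a numerical sketch of the boundary cases (my own check):

```python
import math

def f_scenario1_equal(p):
    """f_ent = q for the unwatched ADC with p_a = p_b = p <= 2/3."""
    return 1 - p

def f_scenario2(p_b):
    """f_ent = (1 + sqrt(q_b))^2 / 4 when only one qubit is damped."""
    return (1 + math.sqrt(1 - p_b)) ** 2 / 4

# No-cloning condition f_ent > 3/4 is tight exactly at the quoted rates:
boundary1 = f_scenario1_equal(1 / 4)              # -> 3/4
boundary2 = f_scenario2(2 * math.sqrt(3) - 3)     # -> 3/4
# Classical condition f_ent > 1/2 at p = 1/2 and p_b = 2(sqrt(2) - 1):
classical1 = f_scenario1_equal(1 / 2)             # -> 1/2
classical2 = f_scenario2(2 * (math.sqrt(2) - 1))  # -> 1/2
```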
\subsubsection{Input Bell states $|\phi^{\pm}\rangle$}
In scenario 1, when both qubits of $|\phi^{\pm}\rangle$ are sent through the damping channels, the shared states between Alice and Bob for the watched and unwatched channels are found as \begin{eqnarray}\label{N05}
|\Phi^{\pm}\rangle_{ab}=\frac{|00\rangle _{ab}\pm\sqrt{q_{a}q_{b}}|11\rangle _{ab}}{\sqrt{1+q_{a}q_{b}}}, \end{eqnarray} and \begin{eqnarray}\label{N06} \hat{\rho}^{\pm}_{ab}=&&\frac{1}{2}
\left[(1+q_{a}q_{b})|\Phi^{\pm}\rangle_{ab\;ab}
\langle\Phi^{\pm}|\right. \nonumber\\
&&\left. +q_{b}p_{a}|01\rangle_{ab\;ab}\langle01|
+p_{b}q_{a}|10\rangle_{ab\;ab}\langle10|\right. \nonumber\\
&&\left. +p_{a}p_{b}|00\rangle_{ab\;ab}\langle00| \right], \end{eqnarray} respectively, where we have considered that no photon is detected in the environment for the watched channel case. From these equations, it is seen that the shared state remains a MES iff $p_{a}=p_{b}=0$.
When only one of the qubits (say again, Bob's qubit) of the MES is propagated through the ADC, the shared state between Alice and Bob is not maximally entangled unless $p_{b}=0$, as can be seen in the following expressions given, respectively, for the watched and unwatched channels \begin{eqnarray}\label{N07}
|\Phi^\pm\rangle_{ab}=\frac{1}{\sqrt{2-p_{b}}}(|00\rangle _{ab}\pm\sqrt{q_{b}}|11\rangle _{ab}), \end{eqnarray} and \begin{equation}\label{N08}
\hat{\rho}^{\pm}_{ab}=\frac{1}{2}\left[(2-p_{b})|\Phi^\pm\rangle_{ab\;ab}
\langle\Phi^\pm|+ p_{b}|10\rangle_{ab\;ab}\langle 10|\right]. \end{equation} Then the fully entangled fraction of the shared state, when the channel is not watched, is found as \begin{equation}\label{gg1abd}
f^{\pm}_{\rm ent}=\frac{1}{4}[p_{a}p_{b}+(1+\sqrt{q_{a}q_{b}})^{2}], \end{equation} which reduces to \begin{equation}\label{gg1abe}
f^{\pm}_{\rm ent}=\frac{1}{4} \left\{ \begin{array}{cc}
2(p^{2}-2p+2)& \text{scenario 1,}\\ & \text{for } p_{a}=p_{b}\equiv p,\\
& \\
(1+\sqrt{q_{b}}~)^{2}&\text{scenario 2}. \\ \end{array}\right.
\end{equation} It can easily be found from Eq. (\ref{gg1abe}) that the condition $f_{{\rm ent}}>1/2$ is satisfied for any $p$ in the range $p<1$ when both channels have the same damping rates in scenario 1; and for $p_{b}<2(\sqrt{2}-1)$ in scenario 2. When the channels have different damping rates, we can write using Eq. (\ref{gg1abd}) that $p_{a}p_{b}+(1+\sqrt{q_{a}q_{b}})^{2}>2$ must be satisfied to beat the classical limit. The analytical solution is too lengthy to give here. Instead, to give an idea of the relation between $p_{b}$ and $p_{a}$ satisfying the condition $f_{{\rm ent}}>1/2$, we give some numerical values: when $p_{b}=1/2$, $p_{a}<7/8$; and when $p_{b}=1/4$, $p_{a}$ must satisfy $p_{a}<(6\sqrt{6}-13)/2$ to beat the classical limit. As it has been pointed out by Bandyopadhyay \cite{Band}, scenario 1 can be made to have higher $f_{{\rm ent}}$ than scenario 2 such that
$f_{{\rm ent}}>1/2$ is satisfied. This, in turn, implies that for the state $|\phi^{\pm}\rangle$, one can let one of the qubits undergo a controlled dissipation if information on the dissipation of the other qubit in the other channel is available.
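The quoted numerical values are precisely the points where $p_{a}p_{b}+(1+\sqrt{q_{a}q_{b}})^{2}$ equals $2$; a quick check (a sketch of my own):

```python
import math

def lhs(p_a, p_b):
    """p_a p_b + (1 + sqrt(q_a q_b))^2, compared against the classical limit 2."""
    return p_a * p_b + (1 + math.sqrt((1 - p_a) * (1 - p_b))) ** 2

# p_b = 1/2: boundary at p_a = 7/8.
b1 = lhs(7 / 8, 1 / 2)
# p_b = 1/4: boundary at p_a = (6 sqrt(6) - 13)/2.
b2 = lhs((6 * math.sqrt(6) - 13) / 2, 1 / 4)
```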
Looking at the condition $f_{{\rm ent}}>3/4$ for quantum teleportation to surpass the no-cloning limit, we find the following constraints on the damping rates of the ADC: \begin{eqnarray} && f^\pm_{{\rm ent}}>\frac{3}{4}~ \text{if}\nonumber\\ \nonumber\\ &&\left\{ \begin{array}{cc}
p<1-\frac{\sqrt{2}}{2}&\text{scenario 1} \\
& \text{for~} p_{a}=p_{b}\equiv p,\\
& \\ p_{a}\leq 2\sqrt{3}-3&\text{scenario 1}\\ p_{b}\leq g(p_{a}) \text{ or vice versa}&\text{for $p_{a}\neq p_{b}$}{,}\\ & \\ p_{b}< 2\sqrt{3}-3& \text{scenario 2,} \end{array}\right.
\end{eqnarray} where $g(x)=(1-2x)^{-2}[-3+x(3+2x)+2\sqrt{(1-x)(2x^2-6x+3)}]. $ Contrary to the above case, a controlled dissipation cannot increase $f_{{\rm ent}}$ above $3/4$.
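Along the boundary curve $p_{b}=g(p_{a})$ the fully entangled fraction equals exactly $3/4$, which can be confirmed numerically (a sketch of my own, not part of the paper):

```python
import math

def f_ent(p_a, p_b):
    """f_ent = [p_a p_b + (1 + sqrt(q_a q_b))^2] / 4 for |phi+-> through the ADC."""
    return (p_a * p_b + (1 + math.sqrt((1 - p_a) * (1 - p_b))) ** 2) / 4

def g(x):
    """Boundary damping rate g(x) from the text."""
    return (-3 + x * (3 + 2 * x)
            + 2 * math.sqrt((1 - x) * (2 * x * x - 6 * x + 3))) / (1 - 2 * x) ** 2

# Sample the curve; every point should give f_ent = 3/4.
vals = [f_ent(p_a, g(p_a)) for p_a in (0.0, 0.1, 0.2, 0.3, 0.4)]
```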
\subsection{Phase Damping Channel}
A phase damping channel (PDC) affects an input state with the following transformations \cite{Preskill} \begin{eqnarray}\label{N05a}
|0\rangle_{a}|0\rangle_{e}&&\rightarrow\sqrt{q}|0\rangle_{a}|0\rangle_{e}
+\sqrt{p}|0\rangle_{a}|1\rangle_{e},\nonumber\\
|1\rangle_{a}|0\rangle_{e}&&\rightarrow\sqrt{q}|1\rangle_{a}|0\rangle_{e}
+\sqrt{p}|1\rangle_{a}|2\rangle_{e}. \end{eqnarray} In this channel, the energy of the information carrier is conserved (no losses to the environment); however, the state of the carrier is decohered.
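Equivalently, the PDC can be written with Kraus operators $K_{0}=\sqrt{q}\,I$, $K_{1}=\sqrt{p}\,|0\rangle\langle0|$, $K_{2}=\sqrt{p}\,|1\rangle\langle1|$, so populations are untouched while coherences are multiplied by $q$. A numerical sketch (my own illustration, not from the paper):

```python
import numpy as np

def pdc_kraus(p):
    """Kraus operators of the phase damping channel with rate p."""
    q = 1 - p
    return [np.sqrt(q) * np.eye(2),
            np.sqrt(p) * np.diag([1.0, 0.0]),
            np.sqrt(p) * np.diag([0.0, 1.0])]

def apply_channel(rho, kraus):
    """rho -> sum_k K_k rho K_k^dagger."""
    return sum(k @ rho @ k.conj().T for k in kraus)

p = 0.4
rho = np.array([[0.6, 0.2 + 0.1j], [0.2 - 0.1j, 0.4]])
out = apply_channel(rho, pdc_kraus(p))
# Populations are unchanged; coherences shrink by q = 1 - p.
```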
\subsubsection{Input Bell states $|\psi^{\pm}\rangle$}
Bell states $|\psi^{\pm}\rangle$ evolve into \begin{eqnarray}\label{N09} \hat{\rho}^{\pm}_{ab}&=&\frac{1}{2}(1-q_{a}q_{b})
(|01\rangle_{ab\;ab}\langle01|+|10\rangle_{ab\;ab}\langle10|)\nonumber\\
&&~~~~~~~~~~~+q_{a}q_{b}|\psi^{\pm}\rangle_{ab\;ab}\langle\psi^{\pm}|, \end{eqnarray} when both qubits are sent through the unwatched channel. For the limiting case, $p_{a}=1\vee p_{b}=1$, off-diagonal components of the density matrix vanish resulting in a mixed state. For a watched channel with no photon detected in the environment, there is a probability of $q_{a}q_{b}$ that the state observed is
$|\psi^{\pm}\rangle$.
On the other hand, when only one qubit is sent (scenario 2), the probability that the MES survives becomes $q_{b}$ when the channel is watched. When the channel is not watched, then the output state, which is mixed and not a MES, can be found from Eq. (\ref{N09}) by substituting $p_{a}=0$.
When the $f^{\pm}_{\rm ent}$ of the output state at the end of the unwatched channels are calculated it is seen that \begin{equation}\label{hh1abe}
f^{\pm}_{\rm ent}=\frac{1}{2} \left\{ \begin{array}{cc}
1+q_{a}q_{b}& \text{scenario 1}\\ & \text{for~} p_{a}\neq p_{b},\\
& \\ p^{2}-2p+2& \text{scenario 1,}\\ & \text{for~} p_{a}=p_{b}\equiv p,\\
& \\
2-p_{b}&\text{scenario 2}. \\ \end{array}\right.
\end{equation} Then we find that $f^{\pm}_{\rm ent}$ is always greater than $1/2$ provided that $p_{a}\neq 1$ and $p_{b}\neq 1$ (and $p\neq 1$ for equal rates) in scenario 1, and $p_{b}\neq 1$ in scenario 2. Moreover, we find that scenario 1 cannot be made to have $f^{\pm}_{\rm ent}$ larger than scenario 2.
The no-cloning limit imposes the following conditions on the allowable PDC rate: \begin{eqnarray} && f^\pm_{{\rm ent}}>\frac{3}{4}~ \text{if}\nonumber\\ \nonumber\\ &&\left\{ \begin{array}{cc} p<1-\frac{\sqrt{2}}{2}&\text{scenario 1} \\
& \text{for~} p_{a}=p_{b}\equiv p,\\
& \\
p_{b}<(1-2p_{a})/(2q_{a})&\text{scenario 1}\\
p_{a}<1/2 \text{ or vice versa}&\text{for $p_{a}\neq p_{b}$}{,}\\ & \\
p_{b}< 1/2& \text{{scenario} 2{. }} \end{array}\right.
\end{eqnarray}
\subsubsection{Input Bell states $|\phi^{\pm}\rangle$}
When the input is $|\phi^{\pm}\rangle$, then the state at the output of the channels becomes \begin{eqnarray}\label{N10} \hat{\rho}^{\pm}_{ab}&&=\frac{1}{2}(1-q_{a}q_{b})
(|00\rangle_{ab\;ab}\langle00|+|11\rangle_{ab\;ab}\langle11|)\nonumber\\
&&~~~~~~~~~~~+q_{a}q_{b}|\phi^{\pm}\rangle_{ab\;ab}\langle\phi^{\pm}| \end{eqnarray} for an unwatched channel for scenario 1. A comparison of this output state with Eq. (\ref{N09}) reveals that the same discussions and the conditions on the channel damping properties are valid here, too.
\subsection{Depolarizing Channel}
When a qubit is sent through a depolarizing channel (DC), it remains intact with probability $q\equiv 1-p$, while with probability $p$ an error (a bit flip, a phase flip, or both) occurs. The transformation that characterizes this channel is \cite{Preskill} \begin{eqnarray}\label{N10a}
|0\rangle_{a}|0\rangle_{e}&&\rightarrow\sqrt{1-\frac{3p}{4}}~
|0\rangle_{a}|0\rangle_{e} \nonumber\\
&& +\sqrt{\frac{p}{4}}~\big(|1\rangle_{a}|1\rangle_{e}
+i|1\rangle_{a}|2\rangle_{e}+|0\rangle_{a}|3\rangle_{e}\big), \nonumber\\
|1\rangle_{a}|0\rangle_{e}&&\rightarrow\sqrt{1-\frac{3p}{4}}|1\rangle_{a}
|0\rangle_{e} \\ &&
+\sqrt{\frac{p}{4}}~\big(|0\rangle_{a}|1\rangle_{e}
-i|0\rangle_{a}|2\rangle_{e}-|1\rangle_{a}|3\rangle_{e}\big). \nonumber \end{eqnarray}
In the DC, any given state $|\varphi\rangle$ evolves to an ensemble of the four states $|\varphi\rangle$,
$\hat\sigma_{x}|\varphi\rangle$, $\hat \sigma_{y}|\varphi\rangle$
and $\hat\sigma_{z}|\varphi\rangle$, where $\hat\sigma_{k}$ denotes a Pauli operator. $p=1$ corresponds to complete depolarization, where each of the four states occurs with equal probability.
If the input state to the channel is $|\psi^{\pm}\rangle$ or $|\phi^{\pm}\rangle$ and both qubits are sent through the channel, then with probability $(4-3p_{a})(4-3p_{b})/16$ this state is preserved at the output of the channels if the channel is watched and no photon is detected. For an unwatched channel with an input $|\eta^{\pm}_{1,2}\rangle \in \{|\psi^{\pm}\rangle,|\phi^{\pm}\rangle\}$ for indices 1 and 2, respectively, the output state can be written as \begin{eqnarray}\label{N11} \hat{\rho}^{\pm}_{ab}&=&
\frac{1-q_{a}q_{b}}{4}(|\eta_{1,2}^{\mp}\rangle
\langle\eta_{1,2}^{\mp}|+|\eta_{2,1}^{-}\rangle\langle\eta_{2,1}^{-}|+
|\eta_{2,1}^{+}\rangle\langle\eta_{2,1}^{+}|) \nonumber\\
&& + \frac{1+3q_{a}q_{b}}{4} |\eta_{1,2}^{\pm}\rangle\langle\eta_{1,2}^{\pm}|, \end{eqnarray} which becomes a mixture of Bell states with equal probability $1/4$ when $p=1$. The effect of this channel on the input state when only one of the qubits is sent through can be found simply by substituting $p_{a}=0$. Imposing the criteria $f_{{\rm ent}}>1/2$ and $f_{{\rm ent}}>3/4$ on the state at the output of the channel for both scenarios, we find the following ranges for damping rate of the DC \begin{eqnarray}\label{yyzccc} && f^{\pm}_{\rm ent}>\frac{1}{2}~ \text{if}\nonumber\\ \nonumber\\ &&\left\{ \begin{array}{cc} p<1-\sqrt{3}/3&\text{scenario 1} \\
& \text{for}~ p_{b}=p_{a}=p,\\
& \\ p_{a}<\frac{2-3p_{b}}{3q_{b}};~~ p_{b}<2/3 &\text{scenario 1}\\
&\text{for~ $p_{b}\neq p_{a}$},\\ & \\ p_{b}< 2/3& \text{scenario 2} \end{array}\right.
\end{eqnarray} and \begin{eqnarray}\label{xc12} && f^{\pm}_{\rm ent}>\frac{3}{4}~ \text{if}\nonumber\\ \nonumber\\ &&\left\{ \begin{array}{cc} p<1-\sqrt{6}/3&\text{scenario 1} \\
& \text{for}~ p_{a}=p_{b}\equiv p,\\
& \\ p_{a}<\frac{1-3p_{b}}{3q_{b}};~~ p_{b}<1/3 &\text{scenario 1}\\
&\text{for~ $p_{b}\neq p_{a}$},\\ & \\ p_{b}< 1/3& \text{scenario 2}. \end{array}\right.
\end{eqnarray}
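The same kind of numerical check (a Python/NumPy sketch with our own function names, not part of the original analysis) confirms the DC thresholds, using $f^{\pm}_{\rm ent}=(1+3q_{a}q_{b})/4$, i.e., the weight of the correct Bell state in Eq. (\ref{N11}):

```python
import numpy as np

def f_ent_dc(pa, pb):
    """Fully entangled fraction after two depolarizing channels:
    the weight of the correct Bell state in Eq. N11, (1 + 3*qa*qb)/4."""
    qa, qb = 1 - pa, 1 - pb
    return 0.25 * (1 + 3 * qa * qb)

# Equal rates: f > 1/2 iff p < 1 - sqrt(3)/3, and f > 3/4 iff p < 1 - sqrt(6)/3.
p_half = 1 - np.sqrt(3) / 3
p_nc = 1 - np.sqrt(6) / 3
assert np.isclose(f_ent_dc(p_half, p_half), 0.5)
assert np.isclose(f_ent_dc(p_nc, p_nc), 0.75)

# Unequal rates: pa = (2 - 3*pb)/(3*qb) is exactly the f = 1/2 boundary.
pb = 0.4
pa_star = (2 - 3 * pb) / (3 * (1 - pb))
assert np.isclose(f_ent_dc(pa_star, pb), 0.5)

# Scenario 2 (pa = 0): f > 1/2 iff pb < 2/3.
assert np.isclose(f_ent_dc(0.0, 2 / 3), 0.5)
```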
\subsection{Concurrence and fully entangled fraction}
Fully entangled fraction $f_{{\rm ent}}$, given by (\ref{dd1}), can be regarded as a measure of entanglement when the quantum channel is in a pure state and it is related to the Wootters concurrence $C$ \cite{Wootters} through the relation $f_{{\rm ent}}=(1+C)/2$. However, when the quantum channel is in a mixed state, $f_{{\rm ent}}$ can no longer be used as a measure of entanglement. This is due to the fact that entanglement cannot be increased by local quantum operations and classical communications, but the fully entangled fraction $f_{{\rm ent}}$ can be increased as shown by Bandyopadhyay \cite{Band} and Badzi\c{a}g {\em et al. } \cite{Badziag}.
In the following, we present the dependence of concurrence on the properties of the unwatched damping channels in both scenarios introduced previously and discuss the relation between $f_{{\rm ent}}$ and concurrence, so that we can ensure a quantum state transfer and a secure quantum teleportation. As defined by Wootters \cite{Wootters}, the concurrence of a mixed state $\varrho$ is given by $C=\max(0,\lambda_{1}-\lambda_{2} -\lambda_{3}-\lambda_{4})$, where $\{\lambda_{i}\}$ are the square roots of the eigenvalues, in decreasing order, of the non-Hermitian matrix $\varrho\tilde{\varrho}$ with $\tilde{\varrho}= \hat \sigma_{y}\otimes\hat \sigma_{y}\varrho^{*}\hat \sigma_{y} \otimes\hat \sigma_{y}$, where the Pauli matrices act on Alice's and Bob's qubits, respectively, and $(*)$ stands for complex conjugation.
For the ADC, we observe that, in the first scenario, the relations between the concurrence and the damping parameter are different for the initial entangled states $|\psi^{\pm}\rangle$ and
$|\phi^{\pm}\rangle$. The concurrence of the shared state at the output of the channels for the input state $|\psi^{\pm}\rangle$ is given as $C'=\sqrt{q_{a}q_{b}}$, and the concurrence for $|\phi^{\pm}\rangle$ becomes $C''=(1-\sqrt{p_{a}p_{b}})C'$. When both channels have the same damping rate $p_{a}=p_{b}\equiv p$, it is seen that while $C'$ decreases linearly with $p$, $C''$ decreases with $p^{2}$. On the other hand, for scenario 2, both initial states show the same tendency, which is given as $C'=C''=\sqrt{q}$.
In the cases of the PDC and DC, $C \equiv C'=C''$. For the PDC, concurrence is found as $C=q_{a}q_{b}$ for the first scenario. The expression for the second scenario can be found by taking $p_{a}=0$ and $p_{b}=p$. Although the expressions found for concurrence for the ADC and PDC are valid for all values of $p_{a}$ and $p_{b}$ in the range of $[0,1]$, the expressions for concurrence in the case of the DC are valid only for a limited range of damping rates. For example, for the second scenario, concurrence is found as $C=1-3p/2$ provided that $p<2/3$, otherwise it is zero implying a separable state. For the first scenario when both DCs have the same damping rate, concurrence is given as $C=1+3p(p-2)/2$ when $p\leq 1-\sqrt{3}/3$, otherwise $C=0$. When the damping rates are different, we find that $C=(3q_{a}q_{b}-1)/2$ provided that $p_{b}< 2/3$ and $p_{a}<1-1/(3 q_{b})$ are satisfied simultaneously, otherwise $C=0$. It is seen from Fig.~\ref{fig2} that $f_{{\rm ent}}$ is always $\leq 1/2$ when $C=0$, and that even a very small amount of entanglement shifts the process from the classical to the quantum regime. \begin{figure}\label{fig2}
\end{figure}
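The Wootters construction above is straightforward to implement numerically. The following Python/NumPy sketch (our own illustration, not from the original analysis) reproduces the quoted DC scenario-1 concurrence $C=1+3p(p-2)/2$ for equal damping rates:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence C = max(0, l1 - l2 - l3 - l4), where l_i are the
    square roots of the eigenvalues of rho * (sy x sy) rho^* (sy x sy)."""
    sy = np.array([[0, -1j], [1j, 0]])
    syy = np.kron(sy, sy)
    rho_tilde = syy @ rho.conj() @ syy
    # eigenvalues of the (non-Hermitian) product are real and non-negative
    ev = np.sort(np.real(np.linalg.eigvals(rho @ rho_tilde)))[::-1]
    l = np.sqrt(np.clip(ev, 0, None))
    return max(0.0, l[0] - l[1] - l[2] - l[3])

# Bell states in the computational basis
psi_m = np.array([0, 1, -1, 0]) / np.sqrt(2)
psi_p = np.array([0, 1, 1, 0]) / np.sqrt(2)
phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)
phi_m = np.array([1, 0, 0, -1]) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())

def rho_dc(p):
    """Bell-diagonal state after both qubits cross a DC with rate p (Eq. N11)."""
    q = 1 - p
    w = (1 - q * q) / 4
    return ((1 + 3 * q * q) / 4) * proj(psi_m) + w * (proj(psi_p) + proj(phi_p) + proj(phi_m))

for p in [0.0, 0.1, 0.2]:  # all below the threshold 1 - sqrt(3)/3
    C_formula = max(0.0, 1 + 1.5 * p * (p - 2))  # C = 1 + 3p(p-2)/2
    assert np.isclose(concurrence(rho_dc(p)), C_formula, atol=1e-10)
```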
\section{Range of qubits for accurate teleportation}
In this section, we analyze the effect of the noise in the system on the range of qubits that can be teleported with a desired fidelity value. In order to show how the a priori information on the ensemble from which $\hat{\rho}_{{\rm in}}$ is prepared affects the fidelity criterion on secure teleportation, we will consider an optimal one-to-two phase-covariant cloning machine (PCCM) \cite{Bruss2,Fiurasek} in comparison with the universal cloning machine. We will assume that the states to be teleported are chosen from the whole set of qubit states with a fixed and specified polar angle $\delta$ in the Bloch sphere. An eavesdropper, who knows $\delta$, can use the optimal (one-to-two) PCCM for which the cloning fidelity is given by \cite{Fiurasek,Du}: \begin{eqnarray}
F'(\delta) = \frac12 \sin^2\frac{\delta+\kappa\pi}{2}
+\cos^4\frac{\delta-\kappa\pi}{2}
+\frac{\sqrt{2}}4 \sin^2\delta \hspace{8mm}
\\
= \frac18\left[5+\sqrt{2}+2\cos(\delta+\kappa\pi)-(\sqrt{2}-1)\cos(2\delta)\right],
\nonumber \label{new1} \end{eqnarray} where $\kappa=[[2\delta/\pi]]$, i.e., $\kappa=0$ for $0\le\delta<\frac{\pi}2$ and $\kappa=1$ for $\frac{\pi}2\le\delta\le\pi$. Note that the fidelity $F'(\delta)$ for any $\delta$ is greater than the fidelity of the optimal universal cloning machine \cite{Buzek}, given by $F=5/6$. For the qubit states on the equator of the Bloch sphere ($\delta=\pi/2$), the optimal PCCM prepares clones with fidelity $F'(\pi/2)=\frac14(2+\sqrt{2})$. On the other hand, when the states are close to the poles, that is, in the neighborhood of
$|1\rangle$ or $|0\rangle$ in the Bloch sphere, i.e., for a fixed angle $\delta=\pi-\Delta\delta$ or $\delta=0+\Delta\delta$ with $\Delta\delta\ll 1$, Eq. (\ref{new1}) simplifies to \begin{eqnarray}
F'(\delta) = \frac18\left[ 5+\sqrt{2}
+2\cos\Delta\delta-(\sqrt{2}-1)\cos(2\Delta\delta)\right] \nonumber \\
= 1-\frac{3-2\sqrt{2}}{8}(\Delta\delta)^2+ {\cal O}(\Delta\delta)^4.\hspace{21mm}
\label{new2} \end{eqnarray}
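As a consistency check (an illustrative Python/NumPy sketch, not part of the derivation), one can verify numerically that the trigonometric and closed forms of $F'(\delta)$ agree, and that the quoted equatorial value and near-pole expansion follow:

```python
import numpy as np

def F_pccm(delta):
    """Optimal 1->2 phase-covariant cloning fidelity, trigonometric form."""
    k = 0 if delta < np.pi / 2 else 1
    return (0.5 * np.sin((delta + k * np.pi) / 2) ** 2
            + np.cos((delta - k * np.pi) / 2) ** 4
            + (np.sqrt(2) / 4) * np.sin(delta) ** 2)

def F_pccm_closed(delta):
    """Closed form: (1/8)[5 + sqrt(2) + 2*cos(delta + k*pi) - (sqrt(2)-1)*cos(2*delta)]."""
    k = 0 if delta < np.pi / 2 else 1
    return (5 + np.sqrt(2) + 2 * np.cos(delta + k * np.pi)
            - (np.sqrt(2) - 1) * np.cos(2 * delta)) / 8

# The two forms agree on [0, pi] and always beat the universal limit 5/6.
for d in np.linspace(0, np.pi, 201):
    assert np.isclose(F_pccm(d), F_pccm_closed(d))
    assert F_pccm(d) >= 5 / 6 - 1e-12

# Equatorial qubits: F'(pi/2) = (2 + sqrt(2))/4.
assert np.isclose(F_pccm(np.pi / 2), (2 + np.sqrt(2)) / 4)

# Near the pole |1>: F' ~ 1 - (3 - 2*sqrt(2))/8 * (d_delta)^2.
dd = 1e-3
assert np.isclose(F_pccm(np.pi - dd), 1 - (3 - 2 * np.sqrt(2)) / 8 * dd ** 2, atol=1e-10)
```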
In a teleportation process, measurement of Alice results in four possible outcomes $m_{i}$ where $i=0,1,2,3$ with
$m_{0}=|00\rangle\langle00|$, $m_{1}=|01\rangle\langle01|$,
$m_{2}=|10\rangle\langle10|$, and $m_{3}=|11\rangle\langle11|$. Then the state at Bob's side conditioned on Alice's measurement can be written as $\hat{\rho}'(m_{i})$. In the standard teleportation protocol with shared MES, upon receiving the classical information $i$, Bob can make the appropriate unitary operations on his qubit $\hat{\rho}'(m_{i})$ to obtain the teleported state $\hat{\rho}_{{\rm out}}=\hat{\rho}_{{\rm in}}$. We discuss how the measurement result affects this process in the presence of noise.
Since the entanglement distribution channel is noisy, the state at the output of the teleportation process given in Eq. (\ref{cc1}) can be rewritten as $\hat{\rho}_{{\rm out}}={\rm Tr}_{\rm in,a}[\hat{U}_{{\rm tel}}\hat{\rho}_{{\rm in}}\otimes\hat{\rho}^{s}_{{\rm ent}} \hat{U}^{\dagger}_{{\rm tel}}]$ where $\hat{\rho}^{s}_{{\rm ent}}$ is the noisy entangled state. We can say that the fidelity is a function of $\delta$, $\gamma$ and the noise introduced into the system, and we can represent it as $F(\delta,\gamma)\equiv F(|\psi_{{\rm in}}\rangle)$. We observe that $F(\delta,\gamma)$ is independent of $\gamma$, and therefore denote it simply by $F(\delta)\equiv F(\delta,\gamma)$.
\subsection{Amplitude damping channel}
\subsubsection{Input Bell states $|\psi^\pm\rangle$}
For the ADC, in scenario 1, let us assume that $p_{b}=p_{a}=p$ and Alice made a measurement, obtained the outcome $m_{1}$ and then sent the classical information $k=1$ to Bob. The output density operator conditioned on $m_{1}$ becomes
$\hat{\rho}'(m_{1})=N[q\hat{\rho}_{{\rm in}}+2p\sin^{2}(\delta/2)|0\rangle\langle0|]$ with $N$ being the renormalization constant defined as $N^{-1}=1-p\cos\delta$ and $\hat{\rho}_{{\rm in}}$ is the density operator of the state to be teleported. Bob cannot rotate this $\hat{\rho}'(m_{1})$ to the desired state without the prior knowledge of $\delta$ and
$\gamma$. Since $|\psi_{{\rm in}}\rangle$ is supposed to be unknown, the standard teleportation protocol fails to reproduce the desired state at Bob's side. This conclusion is valid for all $m_{i}$. Interestingly, the output states at Bob's side can be grouped into two sets, $\chi_{0}=\{\hat{\rho}'(m_{0}),\hat{\rho}'(m_{2})\}$ and $\chi_{1}=\{\hat{\rho}'(m_{1}),\hat{\rho}'(m_{3})\}$. Although the output states within one of these groups can be rotated into each other by using a Z-gate, or first an X- and then a Z-gate, states belonging to different groups cannot be rotated into each other. This problem is caused by the ADC, which reduces the degree of entanglement and introduces the additional terms
$2p\cos^{2}(\delta/2)|0\rangle\langle0|$ and
$2p\sin^{2}(\delta/2)|0\rangle\langle0|$, respectively, for $\chi_{0}$ and $\chi_{1}$. We observed that for teleportation in the presence of this ADC, if Alice's measurement yields $m_1$, Bob does not need to do anything. For other measurement results $m_0,m_2$ and $m_3$, Bob should apply $\hat \sigma_x$, $\hat
\sigma_y$ and $\hat \sigma_z$, respectively. In this way, he rotates his qubit into the output state $\hat{\rho}_{\rm out}(m_{k})=N_k[q\hat{\rho}_{{\rm in}}+p(1+(-1)^{k}\cos\delta)|k\oplus 1\rangle\langle k\oplus 1|]$ with $N_k$ being the renormalization constant defined as $N^{-1}_k=1+(-1)^k p\cos\delta$ and $\oplus$ stands for addition modulo 2. Then the state-dependent fidelity becomes \begin{eqnarray}\label{hytr1} F_{m_{k}}(\delta)&=&1-\frac{p~(1+(-1)^{k}\cos\delta)^2} {2(1+(-1)^{k}p\cos\delta)}, \end{eqnarray} where $k=0,1,2,3$. When $p\rightarrow 1$, the limiting values are calculated as $F_{m_{1,3}}(\delta)=\cos^2(\delta/2)$ and $F_{m_{0,2}}(\delta)= \sin^2(\delta/2)$.
For $p\leq 1/11$ and $p< 1/5$, all states can be teleported, respectively, with $F>5/6$ and $F>2/3$, independent of Alice's measurement result. For the equatorial qubits $\delta=\pi/2$, we find that as far as $p<1-\sqrt{2}/2$, teleportation fidelity will surpass that of the PCCM regardless of the measurement outcome. On the other hand, if the qubits are chosen at the neighborhood of
$|1\rangle$, then for even $k$ the channel damping rate should be bounded as $0\leq p\leq 0.162$.
Although state-dependent teleportation fidelity is mainly determined by Alice's measurement result, the average fidelity calculated using Eq. (\ref{a1}) is the same for all measurement results and given as \begin{eqnarray}\label{hytr1ad} \nonumber\\ F&=&\frac{1}{4 p^{2}}\left(2p+q^2\ln\frac{q}{1+p}\right), \end{eqnarray} which takes the minimum and maximum values of $1/2$ and $1$ for $p=1$ and $p=0$, respectively.
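Both the worst-case thresholds and the closed-form average can be checked numerically. The following Python/NumPy sketch (our own illustration, not part of the original analysis) evaluates Eq. (\ref{hytr1}) on a grid and compares its Bloch-sphere average against Eq. (\ref{hytr1ad}):

```python
import numpy as np

def F_mk(delta, p, k):
    """State-dependent fidelity for outcome m_k, ADC scenario 1 (Eq. hytr1)."""
    s = (-1) ** k
    c = np.cos(delta)
    return 1 - p * (1 + s * c) ** 2 / (2 * (1 + s * p * c))

def F_avg_closed(p):
    """Closed-form Bloch-sphere average (Eq. hytr1ad)."""
    q = 1 - p
    return (2 * p + q ** 2 * np.log(q / (1 + p))) / (4 * p ** 2)

delta = np.linspace(0, np.pi, 20001)

# Worst case is at the poles: all states beat 2/3 for p < 1/5 and 5/6 for p < 1/11.
assert np.isclose(min(F_mk(delta, 1 / 5, k).min() for k in range(4)), 2 / 3)
assert np.isclose(min(F_mk(delta, 1 / 11, k).min() for k in range(4)), 5 / 6)

# The uniform Bloch-sphere average of F_mk reproduces Eq. (hytr1ad) for every k.
p = 0.4
weight = 0.5 * np.sin(delta)
for k in range(4):
    vals = F_mk(delta, p, k) * weight
    numeric = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(delta))  # trapezoid rule
    assert np.isclose(numeric, F_avg_closed(p), atol=1e-6)

# Limiting value: the average tends to 1/2 as p -> 1.
assert np.isclose(F_avg_closed(1 - 1e-9), 0.5, atol=1e-6)
```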
For the second scenario, the output density operator elements $\rho^{(jl)}_{\rm out}(m_{k})$ are found in terms of the input density operator elements $\rho^{(jl)}_{\rm in}$ as follows: $\rho^{(00)}_{\rm out}(m_{k})=q\rho^{(00)}_{\rm in}+(1-(-1)^k)p/2$, $\rho^{(01)}_{\rm out}(m_{k})=\sqrt{q}\rho^{(01)}_{\rm in} $, $\rho^{(10)}_{\rm out}(m_{k})=\sqrt{q}\rho^{(10)}_{\rm in} $ and $\rho^{(11)}_{\rm out}(m_{k})=q\rho^{(11)}_{\rm in}+(1+(-1)^k)p/2$. When Alice measures $m_{k}$, Bob applies the appropriate unitary transformation to get the highest fidelity for the process; i.e., when he receives the information that $k=3$ or $k=1$ for
$|\psi^+\rangle$ and $|\psi^-\rangle$ respectively, he can use a Z-gate to rotate the state on his side to obtain the above state. Then state-dependent fidelity can be found as \begin{eqnarray}\label{hytr2} F_{m_{k}}(\delta)&=&1-\frac12[p(1+(-1)^k\cos{\delta})-x], \end{eqnarray} where $x=(\sqrt{q}-q)\sin^2\delta$. In the limit of $p\rightarrow 1$, we get the same functions as those obtained from Eq. (\ref{hytr1}).
It is clearly seen from the above results that the range of qubits that can be teleported correctly depends not only on the strength of the ADC but also on Alice's measurement result.
The results imply that some states can be teleported with much better fidelity than others, depending on $m_{k}$. This enables Alice and Bob, in a communication protocol, to choose their qubits from a range of states with higher fidelity when a certain measurement result, say $\{m_{0},m_{2}\}$, is obtained, as seen in Figs. \ref{fig3} and \ref{fig4}. In these figures we have shaded the regions where all the states can be teleported with $F>5/6$ regardless of Alice's outcome. Note that the states with
$\delta=\pi/2\mp\Delta$ can tolerate much higher damping rates than the ones located around the poles of the Bloch sphere. It is also seen that in scenario 1 it is advantageous to rotate the initial entangled state into $|\phi\rangle$, because it is more immune to the ADC and therefore provides a larger parameter space for $F>5/6$ teleportation.
\begin{figure}\label{fig3}
\end{figure} \begin{figure}\label{fig4}
\end{figure}
Let us assume that qubits chosen from a range defined by $\Lambda$ have higher fidelity when Alice measures $\{m_{0},m_{2}\}$, while qubits chosen from $\lambda$ have higher fidelity when Alice measures $\{m_{1},m_{3}\}$. Then, in a teleportation protocol, Alice first mixes her state chosen from $\Lambda$ with her part of the entangled state and makes a measurement. When she obtains $\{m_{0},m_{2}\}$, she sends the other qubit of the entangled state to Bob together with the classical information, and Bob applies a unitary transformation to get the desired state. When she gets $\{m_{1},m_{3}\}$, she sends either nothing or a dummy state. In this way, they can increase the fidelity of the process. If they decide to abort the protocol whenever Alice measures $\{m_{1},m_{3}\}$, then the efficiency of the process is low.
If Alice and Bob decide to keep all measurement results then the fidelity of the process can be written as \begin{eqnarray}\label{pro1} F(\delta)&=&\sum_{k=0}^{3}p_{m_{k}}F_{m_{k}}=1-\frac{1}{2}\left(p-x\right) \end{eqnarray} where $x$ is defined as in Eq. (\ref{hytr2}), $F_{m_{k}}$ is the fidelity of the output state to the teleported state when Alice obtains $m_{k}$, and $p_{m_{k}}$ is the probability of obtaining this result. Moreover, if they do that for any $(\delta,\gamma)$, they end up with $F=2/3+(2\sqrt{q}-p)/6$. From Eq. (\ref{hytr2}), it can be seen that for a fixed $p$ of the channel, if the state to be teleported is chosen such that $\delta<\pi/2$, then the set $\{m_{1},m_{3}\}$ gives higher teleportation fidelity for that state; otherwise, the set $\{m_{0},m_{2}\}$ yields higher fidelity. Let us assume that Alice randomly chooses a state to be teleported from the upper hemisphere of the Bloch sphere, $(\delta<\pi/2)$; their preferred measurement set is therefore $\{m_{1},m_{3}\}$, which occurs with a probability of $1/2$. When the measurement result is $\{m_{0},m_{2}\}$, she sends nothing according to the protocol described above. In this way, the fidelity of the process increases to $F(\delta)= F_{m_{1}}(\delta)=F_{m_{3}}(\delta)$ and the average fidelity becomes $F=2/3+(4\sqrt{q}+p)/12$.
In the same way, if the entangled state is distributed by a third party and both qubits undergo damping, Alice proceeds as explained above. If Alice and Bob decide to keep all measurement results then the fidelity of the process becomes \begin{eqnarray}\label{pro1a} F(\delta)&=&\sum_{k=0}^{3}p_{m_{k}}F_{m_{k}} =\frac{2-p(1+\cos^2\delta)} {2(1-p^2\cos^2\delta)}, \end{eqnarray} where $F_{m_{k}}$ is the fidelity of the output state to the teleported state when Alice obtains $m_{k}$, and $p_{m_{k}}$ is the probability of obtaining this result. Moreover, if they do that for any $(\delta,\gamma)$, they end up with $F=\frac{1}{4p^2}[2p+q^2\ln(\frac{q}{1+p})]$. From Eq. (\ref{hytr1}), it can be seen that for a fixed $p$ of the channel, if the state to be teleported is chosen such that $\delta>\pi/2$, then the set $\{m_{0},m_{2}\}$ gives higher fidelity; otherwise, the set $\{m_{1},m_{3}\}$ does. Let us assume that Alice randomly chooses the state to be teleported from the lower hemisphere of the Bloch sphere, $(\delta>\pi/2)$; their preferred measurement set is therefore $\{m_{0},m_{2}\}$.
\subsubsection{Input states $|\phi^\pm\rangle$} The output density operators for Alice's outcomes $m_{k=0,1,2,3}$ can be written as \begin{eqnarray}\nonumber \widehat{\rho }_{out}\left( m_{k}\right)&=&N[ q\widehat{ \rho }_{in}+p\left( 1+(-1)^{k}p\cos \delta \right) \hat \sigma _{x}^{k}\left\vert 0\right\rangle \left\langle 0\right\vert \hat \sigma _{x}^{k} \\ && +(-1)^{k}pq\cos\delta \hat \sigma _{x}^{k}\left\vert 1\right\rangle \left\langle 1\right\vert \hat \sigma _{x}^{k}] \end{eqnarray} with $N^{-1}=1+(-1)^{k}p\cos \delta $ from which the state dependent fidelity is derived as \begin{equation} F_{m_{k}}\left( \delta \right) =1-\frac{p\left[ 3-2p-(2p-1)\cos (2\delta ) \right] }{4\left( 1+(-1)^{k}p\cos \delta \right) } \end{equation} with the limiting values of $F_{m_{0,2}}(\delta )=\cos ^{2}\left( \delta /2\right) $ and $F_{m_{1,3}}(\delta )=\sin ^{2}\left( \delta /2\right) $ for $p$ approaching $1.$ It is easy to see that as $p$ approaches $0$, $ F_{m_{k}}\left( \delta \right) \rightarrow 1.$ For $p<1/6$, all states can be teleported with $F>5/6$ regardless of the outcome. Average values of teleportation fidelity for these two cases are the same as given in Eq. (\ref{hytr1ad}).
For the second scenario, contrary to the first one, the output state that Bob obtains after the proper application of the quantum gates, and its fidelity to the desired state, are the same as in the case where the initial MES is $|\psi\rangle$. When only one of the qubits of the MES goes through the ADC, distributing either $|\psi\rangle$ or $|\phi\rangle$ does not give any advantage to the parties.
\subsection{Phase damping channel}
When the channel is the PDC, given by (\ref{N05a}), only the off-diagonal elements are affected by the damping. The fidelity of the teleportation process when the initial MES is subjected to the PDC is independent of Alice's measurement result, because, contrary to the ADC case, Bob can use X and Z gates to rotate all the possible outcomes to each other and to the state with the highest fidelity to the input one. The elements of the density matrix can be written as $\rho^{(00)}_{\rm out}=\rho^{(00)}_{\rm in}$, $\rho^{(01)}_{\rm out}=q^{2}\rho^{(01)}_{\rm in}$, $\rho^{(10)}_{\rm out}=q^{2}\rho^{(10)}_{\rm in}$ and $\rho^{(11)}_{\rm out}=\rho^{(11)}_{\rm in}$ for the first scenario. In the case of the second scenario, the elements of the density matrix are the same as above with $q^{2}$ replaced by $q$. Then the fidelity of the teleportation process for scenario 1 can be written as \begin{eqnarray}\label{hytr3} F(\delta)&=&1-\frac{1}{2}p(2-p)\sin^{2}\delta,\nonumber\\ F&=&1-\frac{1}{3}p(2-p), \end{eqnarray} and for scenario 2 as \begin{eqnarray}\label{hytr4} F(\delta)&=&1-\frac{1}{2}p\sin^{2}\delta,\nonumber\\ F&=&1-\frac{1}{3}p. \end{eqnarray}
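These averages follow from $\langle\sin^{2}\delta\rangle=2/3$ over the uniform Bloch-sphere measure; a short Python/NumPy check (our own illustration, not part of the original analysis) confirms the averages quoted in Eqs. (\ref{hytr3}) and (\ref{hytr4}):

```python
import numpy as np

delta = np.linspace(0, np.pi, 100001)
w = 0.5 * np.sin(delta)  # uniform measure on the Bloch sphere

def bloch_avg(f):
    """Trapezoid-rule average of f(delta) over the Bloch sphere."""
    vals = f * w
    return np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(delta))

# <sin^2 delta> = 2/3 on the Bloch sphere.
assert np.isclose(bloch_avg(np.sin(delta) ** 2), 2 / 3, atol=1e-8)

p = 0.3
F1 = 1 - 0.5 * p * (2 - p) * np.sin(delta) ** 2  # scenario 1, Eq. (hytr3)
F2 = 1 - 0.5 * p * np.sin(delta) ** 2            # scenario 2, Eq. (hytr4)
assert np.isclose(bloch_avg(F1), 1 - p * (2 - p) / 3, atol=1e-8)
assert np.isclose(bloch_avg(F2), 1 - p / 3, atol=1e-8)
```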
The effect of the PDC on the teleportation fidelity is the same for both initial MES $|\phi\rangle$ and $|\psi\rangle$. The qubit states with $\delta=0$ and $\delta=\pi$, which are located at the poles of the Bloch sphere, are always teleported with $F=1$, because these states correspond to $|0\rangle$ and $|1\rangle$, which do not carry relative phase information and hence are not affected by the PDC. Indeed, these results show that if Alice chooses the states to be teleported from around the poles, then they can have a better teleportation fidelity (see Fig.~\ref{fig5}). On the other hand, the states with $\delta=\pi/2$, which lie on the equator of the Bloch sphere, are the most affected.
If Eve does not have the information on the region from which the qubits are chosen, then the best she can do is to use the optimal universal quantum cloning machine of Bu\v{z}ek {\em et al. } \cite{Buzek}. Then we find that any qubit state satisfying $\sin^{2}\delta<\frac{1}{3p(2-p)}$ and $\sin^{2}\delta<\frac{1}{3p}$, respectively for the first and second scenarios, can be teleported in the presence of the PDC with higher fidelity than that of the cloning machine of the eavesdropper. It is apparent that if there is no eavesdropper and the parties just want to beat the classical limit, the range of qubits at a fixed $p$ is much larger.
Now, let us assume that the states to be teleported are chosen with fixed $\delta$ but varying $\gamma$, and the information on $\delta$ may be leaked to an eavesdropper. Since the eavesdropper may use the optimal PCCM, to speak about secure teleportation, its fidelity should exceed the PCCM fidelity given in Eq. (\ref{new2}). Comparing Eq. (\ref{new2}) with the state-dependent teleportation fidelities for the PDC given in Eqs. (\ref{hytr3}) and (\ref{hytr4}), we find $\cos\delta<-1+1/x'$ where $x'=\sqrt{2}-1+2p(2-p)$ and $\cos\delta>1-1/x''$ where $x''=\sqrt{2}-1+2p$, respectively for scenarios 1 and 2. If
$\delta$ is chosen in the neighborhood of $|1\rangle$, the damping rate of the channel should satisfy $0\leq p<(3-2\sqrt{2})/4$ and $0\leq p<1-(\sqrt{1+2\sqrt{2}})/2$, respectively for the first and second scenarios. On the other hand, if the states to be teleported are chosen from the equatorial qubit states, the damping rate of the channel should satisfy $p<1-1/2^{1/4}$ and $p<1-1/\sqrt{2}$, respectively, for the first and second scenarios. These requirements are obviously stricter than those for the universal CM. We see in Fig. \ref{fig5} that while for the universal cloning machine the constraint on $p$ relaxes as we approach the poles of the Bloch sphere, for the PCCM it becomes tighter. This is because as we approach the poles, the fidelity of the clones from the PCCM gets closer to one, requiring a PDC with damping rates approaching zero.
\begin{figure}\label{fig5}
\end{figure}
\subsection{Depolarizing channel}
When the channel is the DC, the elements of the density matrix can be written as $\rho^{(00)}_{\rm out}=\xi\rho^{(00)}_{\rm in}+\mu\chi$, $\rho^{(01)}_{\rm out}=\xi\rho^{(01)}_{\rm in}$, $\rho^{(10)}_{\rm out}=\xi\rho^{(10)}_{\rm in}$ and $\rho^{(11)}_{\rm out}=\xi\rho^{(11)}_{\rm in}+\mu\chi$ for the first scenario, where we have used $\mu=(1-q)/2$, $\chi=1+q$ and $\xi=q^{2}$. In the case of the second scenario, the elements of the density matrix are the same as above with $\chi=1$ and
$\xi=q$. Then the fidelity of the teleportation process for the first and second scenarios can be written, respectively, as \begin{eqnarray}\label{hytr5} F(\delta)=F=\frac12+\frac12 q^2, \end{eqnarray} \begin{eqnarray}\label{hytr6} F(\delta)=F=\frac12+\frac12 q \end{eqnarray}
from which we see that the fidelity is independent of $|\psi_{{\rm in}}\rangle$. For the DC too, contrary to the ADC but similar to the PDC, Bob can use quantum gates to rotate all possible outcomes to each other. Therefore, the fidelity is independent of the input state, of Alice's measurement result, and of the initially distributed MES. The parties in the protocol can choose their qubits from the whole Bloch sphere and an eavesdropper may use a universal quantum cloning machine; in that case the damping rates of the channels should satisfy $p<1-\sqrt{6}/3$ and $p<1/3$ to surpass the no-cloning limit. In the case of an eavesdropper with the PCCM, the relation between the qubit states that can be teleported securely and the damping rate of the channel becomes $\cos\delta<-(1+\sqrt{2})\left(-1+[1+4(\sqrt{2}-1)(2p(2-p)-1)]^{1/2}\right)/2$ and $\cos\delta<\left(-1+[17-12\sqrt{2}+8p(\sqrt{2}-1)]^{1/2}\right)/(2\sqrt{2}-2)$ for the first and second scenarios.
\subsection{Direct transmission: Noisy state $+$ shared MES}
For Bob to whom Alice wants to teleport the unknown state
$|\psi_{{\rm in}}\rangle$, it is difficult to distinguish whether $|\psi_{{\rm in}}\rangle$ was a noisy state to begin with or the quantum channel is responsible for the noise. The state to be teleported might be subjected to noise, lose its coherence and become a mixed state before it is teleported.
Let us assume that Alice and Bob share a MES, which they have obtained using entanglement distillation and purification protocols. In this section, we assume that the qubit is influenced by the ADC, PDC and DC, and discuss the outcome of the teleportation process. We assume that only the qubit to be teleported is subjected to noise and the shared entangled state is any of the Bell states. Indeed, this is similar to a direct transmission scheme in which the original state $|\psi_{{\rm in}}\rangle$ is sent directly to Bob through a noisy channel.
If $|\psi_{{\rm in}}\rangle$ is subjected only to the ADC, the elements of the output density matrix become $\rho^{(00)}_{\rm out}=\rho^{(00)}_{\rm in}+p~\rho^{(11)}_{\rm in}$, $\rho^{(01)}_{\rm out}=\sqrt{q}~\rho^{(01)}_{\rm in}$, $\rho^{(10)}_{\rm out}=\sqrt{q}~\rho^{(10)}_{\rm in}$ and $\rho^{(11)}_{\rm out}=q\rho^{(11)}_{\rm in}$ where $\rho^{(kl)}_{\rm in}$ are the elements of the density matrix of
$|\psi_{\rm in}\rangle$. Then fidelity is found as
\begin{equation}\label{bhy123} F(\delta)=1-\frac{1}{2}[2p\sin^{2}(\delta/2)-(\sqrt{q}-q)\sin^{2}\delta]. \end{equation} Averaging this over all possible input states, average fidelity is found as \begin{equation}\label{bhy123a} F=\frac{2}{3}+\frac{1}{6}(2\sqrt{q}-p), \end{equation} which is the same as for scenario 2, when the entangled state is distributed through the ADC. We see that depending on the damping parameter, the range of qubits that can be teleported with a desired fidelity changes (see Fig.~\ref{fig6}). For example when
$p=0.8$ and only $|\psi_{\rm in}\rangle$ is subjected to noise, the states with $\delta<0.5436\pi$ and $\delta<0.4021\pi$ can be teleported, respectively, with $F>2/3$ and $F>5/6$. For the
$|\psi_{\rm in}\rangle$ damped case, all the states satisfying $\delta<0.2677\pi$ can be teleported with $F>5/6$. In Fig. \ref{fig6}, we have depicted the fidelity of the PCCM, from which we see that when $p=1/2$, the teleportation fidelity and the PCCM fidelity are equal for the qubits $0\leq\delta\leq \pi/2$. In this range of qubits, a secure teleportation is possible for damping rates $p<1/2$. As $\delta$ approaches $\pi$, the damping rate $p$ must approach zero to achieve secure teleportation.
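The quoted thresholds and the average in Eq. (\ref{bhy123a}) can be verified numerically from Eq. (\ref{bhy123}); the following Python/NumPy sketch (our own illustration, not part of the original analysis) does so for $p=0.8$:

```python
import numpy as np

def F_direct(delta, p):
    """Fidelity when only the teleported qubit crosses the ADC (Eq. bhy123)."""
    q = 1 - p
    return 1 - 0.5 * (2 * p * np.sin(delta / 2) ** 2
                      - (np.sqrt(q) - q) * np.sin(delta) ** 2)

p = 0.8
# Quoted thresholds at p = 0.8: F > 2/3 up to delta ~ 0.5436*pi,
# and F > 5/6 up to delta ~ 0.4021*pi.
assert np.isclose(F_direct(0.5436 * np.pi, p), 2 / 3, atol=2e-3)
assert np.isclose(F_direct(0.4021 * np.pi, p), 5 / 6, atol=2e-3)

# The Bloch-sphere average reproduces Eq. (bhy123a): F = 2/3 + (2*sqrt(q) - p)/6.
delta = np.linspace(0, np.pi, 100001)
vals = F_direct(delta, p) * 0.5 * np.sin(delta)
numeric = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(delta))  # trapezoid rule
assert np.isclose(numeric, 2 / 3 + (2 * np.sqrt(1 - p) - p) / 6, atol=1e-8)
```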
\begin{figure}\label{fig6}
\end{figure}
When the qubit is subjected only to the PDC, the output density operator becomes $\hat{\rho}_{{\rm out}} =q \hat{\rho}_{{\rm in}}
+p[\cos^{2}(\delta/2) |0\rangle
\langle0|+\sin^{2}(\delta/2)|1\rangle\langle 1|]$, resulting in a fidelity $F(\delta)=1-\frac{1}{2}p\sin^{2}\delta$, from which the average fidelity can be written as $F=1-p/3$. Comparing these equations with Eq.
(\ref{hytr4}), it is seen that when the PDC affects only the qubit to be teleported, the fidelity is the same as in scenario 2 when the distributed entangled states undergo the PDC. We observe the same similarity if only the qubit $|\psi_{\rm in}\rangle$ is subjected to DC. In this case the fidelity expression is given as in Eq. (\ref{hytr6}).
In the analysis of the security of a particular damping channel, the fidelities of optimal cloning machines were taken as a reference: either (i) the optimal universal cloning machine if no a priori information about a teleported state is given or (ii) the optimal phase-covariant cloning machine if prior partial information about the state is available. Clearly, a channel is secure if it provides a better fidelity than the optimal cloning. This is the lowest fidelity bound for security of any channel assuming that Alice sends her qubit through a damping channel, while an eavesdropper copies the qubit at Alice's site and does not send it (or sends it through a perfect channel). Otherwise, the action of the channel will restrict the quality of the cloning consistent with the channel and, thus, less demanding security conditions can be given.
\begin{figure}\label{fig7}
\end{figure}
\section{CONCLUSION}
We have examined the problem of teleportation fidelity in the presence of various types of noise during the entanglement distribution of the teleportation process. Using the fully entangled fraction and concurrence, we derived the bounds on the damping parameters of channels so that the average fidelity (i) exceeds the classical limit, and (ii) satisfies the security condition for teleportation. Moreover, we derived the range of states that can be teleported accurately with a desired fidelity value and studied how this range is affected by noise. For the security condition, we considered eavesdroppers with universal and phase-covariant cloning machines, where the former has no information on the qubit to be teleported, while the latter knows $\delta$ but not the relative phase $\gamma$.
For the ADC, although the bounds on $p$ for one-qubit affected case are the same for both $|\psi^{\pm}\rangle$ and
$|\phi^{\pm}\rangle$ as the source entangled state, for the two-qubit affected case we find that the bounds are different and much tighter for $|\psi^{\pm}\rangle$. This implies that if one is given $|\psi^{\pm}\rangle$, instead of distributing this state directly, it is better to first locally convert to
$|\phi^{\pm}\rangle$ and distribute it. In that case the effect of damping is less pronounced. We observe that only for the ADC do these bounds change with the initial MES to be distributed. We have found that, contrary to the case of the ADC, in the presence of the PDC and DC the two-qubit affected case cannot be made to have a higher entangled fraction than the one-qubit affected case. Hence, the average fidelity cannot be increased by subjecting one of the qubits to controlled dissipation. As seen in Fig.~\ref{fig7}, the average fidelity depends on the type and strength of damping in the channel. For the PDC, the fidelity is always larger than $2/3$ if $p\neq 1$; on the other hand, for the ADC and DC the average fidelity decreases below $2/3$, down to $1/2$, depending on the damping rate.
We have also discussed the direct transmission case. We observe that the results obtained for direct transmission and for teleportation with one-qubit affected entanglement distribution (scenario 2) are the same in the cases of the DC and PDC. However, discrepancies appear for the ADC: the average fidelity of scenario 2 is more immune to damping than that of direct transmission.
This study shows that information on the noise affecting the teleportation process during the phases of entanglement distribution and qubit preparation can be helpful in increasing the fidelity. Moreover, it is important to note that if the source of noise in the process is not known, then all of it should be attributed to an eavesdropper. Thus, the criterion on the teleportation fidelity should be re-formulated taking into account the set of states from which the state to be teleported is chosen and the optimal cloning machine for that set.
\begin{acknowledgments} We thank Prof. Nobuyuki Imoto and Prof. Masato Koashi for their useful comments. AM was supported by the Polish Ministry of Science and Higher Education under Grant No. 1 P03B 064 28. \end{acknowledgments}
\end{document}
\begin{document}
\title{Large violation of Bell inequalities using both particle and wave measurements}
\author{Daniel Cavalcanti} \email{dcavalcanti@gmail.com} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science drive 2, Singapore 117543} \author{Nicolas Brunner} \affiliation{H.H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol, BS8 1TL, United Kingdom} \author{Paul Skrzypczyk} \affiliation{H.H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol, BS8 1TL, United Kingdom} \author{Alejo Salles} \affiliation{Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen, Denmark} \author{Valerio Scarani} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science drive 2, Singapore 117543} \affiliation{Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542}
\begin{abstract} When separated measurements on entangled quantum systems are performed, the theory predicts correlations that cannot be explained by any classical mechanism: communication is excluded because the signal should travel faster than light; pre-established agreement is excluded because \textit{Bell inequalities} are violated. All optical demonstrations of such violations have involved discrete degrees of freedom and are plagued by the detection-efficiency loophole. A promising alternative is to use continuous variables combined with highly efficient homodyne measurements. However, all the schemes proposed so far use states or measurements that are extremely difficult to achieve, or produce very weak violations. We present a simple method to generate large violations for feasible states using both photon counting and homodyne detections. The present scheme can also be used to obtain nonlocality from easy-to-prepare Gaussian states ({\it{e.g.~}} two-mode squeezed state).
\end{abstract}
\maketitle
The violation of Bell inequalities has played a crucial role in the foundations of quantum physics, since it provides a testable criterion to rule out classical mechanisms as the origin of quantum correlations \cite{bell64}. Moreover, it is also an important test for future applications, since it provides device-independent assessment of the performance of some quantum tasks like key distribution \cite{diqkd} or randomness generation \cite{dirandom}.
In experiments, violations have been demonstrated so far only for discrete-outcome measurements \cite{genovese}. The countless optical realizations have used several encodings, the most frequent ones being polarization \cite{aspect,weihs} or time-bins \cite{tittel}. Light can easily be sent at large distances, so the locality loophole can be closed; but the detection loophole \cite{pearle} remains open due to the joint effect of losses (both in the coupling between the source and the optical link, and in the link itself) and of limited efficiency of the photon counters. When energy levels of ions and atoms are used, fluorescence measurements are very efficient but slow: the detection loophole can be closed \cite{rowe,dirandom}, but it is practically impossible to think of separating these systems far enough to close the locality loophole. Entanglement swapping between light and atoms was proposed several years ago in order to combine the best of both worlds \cite{simon}, but its full implementation has yet to be reported \cite{rosenfeld}.
Another path towards a loophole-free Bell test consists in using only light, but measuring rather continuous degrees of freedom, exploiting the high efficiency of homodyne measurements \cite{reid}. However, this path has proved harder than expected: no experimental violation of Bell inequalities (let alone loophole-free ones) involving homodyne measurements has been reported to date. One of the main problems is that for the simplest states that can be produced (having positive, usually Gaussian, Wigner functions), homodyne measurements produce statistics that do not violate any Bell inequality. Some theoretical schemes have shown that violations are indeed possible; however, they require either measurements \cite{bana,stobi} or states \cite{munro,wenger,ecava,acin} that are practically unfeasible. Only in 2004 was a proposal put forth~\cite{grangier1,nhacar} in which homodyne measurements on a feasible state, followed by suitable data processing, lead to a violation $S\approx 2.046$ of the Clauser-Horne-Shimony-Holt (CHSH) inequality $S\leq 2$ \cite{chsh}. Such a small violation, however, is hardly observable in the presence of imperfections, and has indeed not yet been achieved experimentally.
The main goal of the present paper is to demonstrate that large violations of Bell inequalities can indeed be achieved with feasible setups involving homodyne measurements. We study schemes in which both Alice and Bob alternate between photon counting and homodyne measurements, then locally post-process their data to extract bits and check the CHSH inequality. We show that a significant violation $S\approx 2.25$ can be achieved by the state \begin{eqnarray}\ket{\Psi_2}=\frac{\ket{2}_A\ket{0}_B+\ket{0}_A\ket{2}_B}{\sqrt{2}}, \label{noonstate}\end{eqnarray} where $\ket{0}$ and $\ket{2}$ again refer to states of well-defined photon number. This state can be created by having two heralded single photons from down-conversion sources bunch on a beam-splitter, in a Hong-Ou-Mandel setup \cite{hom}.
Our scheme was motivated by a recent result by Ji and coworkers in the search for Bell tests for easy-to-prepare quantum states \cite{koreans}. However, the inequalities they used are not Bell inequalities in the most general sense, since they rule out only a particular class of local models. Thus they cannot be used for any device-independent assessment---as required for demonstrating nonlocality---since they can be violated by a local model \cite{comment}.
\textbf{Ideal case.--} The setup under study is sketched in Fig.~\ref{fignice}. Alice and Bob can perform two measurements each: one is the photon number $N$; the other is the $X$ quadrature. The measurement results are then processed to obtain bits $a,b\in\{-1,+1\}$, where $a$ and $b$ label Alice and Bob's outcomes respectively. We describe these binning procedures for the case of Alice, those of Bob are identical. When measuring $N$, Alice sets $a=+1$ if the result is $N>0$ and $a=-1$ if the result is $N=0$: this binning is simply the direct outcome of a perfect threshold detector. As for the $X$ measurement, Alice divides the real axis in two disjoint regions and sets $a=+1$ if $x\in {\cal A}^+$ and $a=-1$ if $x\in {\cal A}^-=\mathbb{R}\setminus {\cal A}^+$. These sets can still be quite complicated in general; here it will be sufficient to consider very simple sets, namely ${\cal A}^+={\cal B}^+=[-z,z]$, where $z$ remains to be chosen.
\begin{figure}
\caption{{(Color online) Sketch of the setup.} {\bf A.} A source sends a photonic entangled state to two space-like separated locations. In these locations each subsystem is subjected to one of two measurements: number of photons (photon counting) or quadrature (homodyning) measurements. In this way both ``wave'' and ``particle'' characteristics of the systems are tested. {\bf B.} The state $(\ket{0}\ket{2}+\ket{2}\ket{0})/\sqrt{2}$ violates the CHSH Bell inequality in the previous scenario and can be created as follows: two pairs of photons are created in different non-linear crystals by parametric down conversion. The detection of one photon of each pair at detectors D1 and D2 heralds the presence of the other two photons, which are sent to a beam splitter. The Hong-Ou-Mandel interference in the beam splitter makes the photons bunch, resulting in the desired two-photon state.}
\label{fignice}
\end{figure}
Using these measurements, we focus on the CHSH inequality, which reads \begin{eqnarray} S&=&E_{XX}+E_{XN}+E_{NX}-E_{NN}\leq2,\label{chshXN}
\end{eqnarray} where $E_{jk}=P(a=b|jk)-P(a\neq b|jk)$ is the expectation value of the measurements $j$ and $k$ after the binning. Now we are going to show that this inequality can be violated by measuring the state \eqref{noonstate}. The statistics of the four pairs of measurements are easy to write down. In fact, when both Alice and Bob measure $N$, their bits are always different, hence $E_{NN}=-1$. When Alice measures $N$ and Bob measures $X$: if $a=+1$, Bob's state is $\ket{0}$, whence his measurement of $X$ is described by the density function $|\langle x|0\rangle|^2=|\phi_0(x)|^2$ where $\phi_0(x)=\frac{1}{\pi^{1/4}}e^{-x^2/2}$; similarly, if $a=-1$, Bob's statistics are described by the density $|\langle x|2\rangle|^2=|\phi_2(x)|^2$ where $\phi_2(x)=\frac{1}{(4\pi)^{1/4}}(2x^2-1)e^{-x^2/2}$. The case when Alice measures $X$ and Bob measures $N$ is symmetric. Finally, when both Alice and Bob measure $X$, their statistics are described by $|\langle x_A,x_B|\Psi_2\rangle|^2=|\Psi_2(x_A,x_B)|^2$ where $\Psi_2(x_A,x_B)$ is obtained by replacing the state $\ket{k}$ with $\phi_k(x)$ in (\ref{noonstate}). All in all, the probabilities are given by the following expressions: \begin{eqnarray} \begin{array}{lcl}
P(a,b|NN)&=&(1-ab)/4\,;\\
P(a,b|XN)&=&\frac{1}{2}\int_{{\cal A}^a}dx|\phi_{m(b)}(x)|^2\,;\\
P(a,b|NX)&=&\frac{1}{2}\int_{{\cal B}^b}dx|\phi_{m(a)}(x)|^2\,;\\
P(a,b|XX)&=&\int_{{\cal A}^a} dx\int_{{\cal B}^b}dy |\Psi_2(x,y)|^2,\end{array}\label{qprobs} \end{eqnarray} where $m(+1)=0$ and $m(-1)=2$. Substituting these statistics into (\ref{chshXN}), one obtains a value of $S$ for any choice of $z$. The maximal violation of the CHSH inequality is $S\approx 2.25$ for $z\approx 0.83$ (see Fig.~\ref{figst}).
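The value of $S$ as a function of $z$ can be reproduced numerically from \eqref{qprobs}. The sketch below is our own illustration (helper names are ours, not from the paper): it evaluates the one-dimensional integrals of $|\phi_0|^2$, $|\phi_2|^2$ and $\phi_0\phi_2$ over $[-z,z]$, from which all four correlators follow.

```python
import numpy as np

def trap(f, a, b, n=4000):
    # plain trapezoidal rule for a smooth integrand on [a, b]
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y.sum() - 0.5 * (y[0] + y[-1]))

phi0 = lambda x: np.pi ** -0.25 * np.exp(-x**2 / 2)                         # <x|0>
phi2 = lambda x: (4 * np.pi) ** -0.25 * (2 * x**2 - 1) * np.exp(-x**2 / 2)  # <x|2>

def chsh(z):
    I0 = trap(lambda x: phi0(x)**2, -z, z)         # prob. of x in [-z,z] given |0>
    I2 = trap(lambda x: phi2(x)**2, -z, z)         # prob. of x in [-z,z] given |2>
    J  = trap(lambda x: phi0(x) * phi2(x), -z, z)  # cross term from |Psi_2|^2
    E_XN = I0 - I2                                 # = E_NX by symmetry
    # expanding |Psi_2(x,y)|^2 gives P(++|XX) = I0*I2 + J^2 and
    # P(--|XX) = (1-I0)*(1-I2) + J^2 (phi0 and phi2 are orthogonal on R)
    E_XX = 2 * (I0 * I2 + (1 - I0) * (1 - I2) + 2 * J**2) - 1
    return E_XX + 2 * E_XN + 1                     # E_NN = -1

zs = np.linspace(0.5, 1.2, 141)
S = [chsh(z) for z in zs]
best = zs[int(np.argmax(S))]
# the maximum is about 2.25, attained near z = 0.83, as quoted in the text
```

Running this scan reproduces the quoted maximal violation $S\approx 2.25$ around $z\approx 0.83$.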
\begin{figure}
\caption{{(Color online) Value of the CHSH expression $S$ as a function of the parameter $z$ for ideal detectors.} The full line is for the state $\ket{\Psi_2}$ given in (\ref{noonstate}). The dashed lines are for $\rho$ given in (\ref{eqrho}), describing a lossy line with transmissivity $t=0.95$, $0.9$ and $0.85$ from top to bottom respectively. Inset: the density functions $|\phi_0(x)|^2$ (full blue line) and $|\phi_2(x)|^2$ (dashed purple line), with the choice $z\approx 0.83$ (dotted vertical lines) for the maximal violation $S\approx 2.25$; notice that this value of $z$ allows one to discriminate the density functions with high probability. This is important to attain a high violation of CHSH since it maximizes the correlations between the $X$ and $N$ measurements.}
\label{figst}
\end{figure}
\textbf{Non-ideal case.--} So far, we have proved that an ideal realization of the state (\ref{noonstate}) would lead to a large violation of CHSH for ideal detectors. Let us now introduce two deviations from the ideal case and study the robustness of the result (for simplicity, all the parameters below are supposed to be the same for Alice and Bob).
First, we introduce the \textit{transmission} $t$ of the optical paths between the sources and the detectors. This parameter includes the coupling from the source into the transmitted mode and the subsequent possible losses in the channel. The ideal state $\ket{\Psi_2}$ reaches the detectors with probability $t^2$. With probability $2t(1-t)$, one of the two photons is lost. In this case, the state at the detector becomes $\rho_1=\frac{1}{2}(\ket{10}\bra{10}+\ket{01}\bra{01})$, because the photon lost in the environment would identify the path. Finally, with probability $(1-t)^2$, both photons are lost and the state at the detector is just $\ket{00}$. The final state measured is therefore \begin{eqnarray} \rho= t^2\ket{\Psi_2}\bra{\Psi_2}\,+\,2t(1-t)\,\rho_1\,+\,(1-t)^2\ket{00}\bra{00}. \label{eqrho} \end{eqnarray}
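As a quick sketch, the state \eqref{eqrho} can be written down in a truncated two-mode Fock basis and checked to be a valid mixture; the basis ordering and variable names below are our own, not from the paper.

```python
import numpy as np

t = 0.9                                   # transmission of each optical path

def ket(i, dim=5):
    v = np.zeros(dim)
    v[i] = 1.0
    return v

# basis ordering (ours): 0:|00>, 1:|01>, 2:|10>, 3:|02>, 4:|20>
psi2 = (ket(3) + ket(4)) / np.sqrt(2)     # ideal state (|02> + |20>)/sqrt(2)
rho1 = 0.5 * (np.outer(ket(1), ket(1)) + np.outer(ket(2), ket(2)))
rho = (t**2 * np.outer(psi2, psi2)
       + 2 * t * (1 - t) * rho1
       + (1 - t)**2 * np.outer(ket(0), ket(0)))

assert abs(np.trace(rho) - 1.0) < 1e-12          # unit trace
assert np.linalg.eigvalsh(rho).min() > -1e-12    # positive semidefinite
```

The vacuum weight $\rho_{00,00}=(1-t)^2$ and the single-photon weight $2t(1-t)$ appear directly on the diagonal.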
Second, while keeping the measurement $X$ fully efficient, we attribute a \textit{quantum efficiency} $\eta<1$ to the threshold detector used to perform the measurement $N$. We stress that no post-selection will be performed on the data: each event in which the threshold detector does not fire will be counted as $a=-1$, respectively $b=-1$. The final result is shown in Fig.~\ref{figetat} (see also Methods). Our scheme is more sensitive to losses on the line than to losses on the threshold detector: this was expected, since the former affect both measurements while the latter affect only the $N$ measurements. For a transmission of $t=90\%$, a detection efficiency of $\eta \approx 86\%$ can be tolerated. Though these are demanding features, they are within reach of current technology \cite{pittman,rosenberg}. These numbers are also comparable to the most favorable feasible schemes known to date for discrete variables, where the figure of merit is $\eta t$ \cite{vpb}. In contrast, here the losses correspond to the imperfections of the state (since they act on the same degree of freedom as the measurements) while for discrete variables the imperfections of the state are an additional problem.
\begin{figure}
\caption{{(Color online) Quantum efficiency of the threshold detectors ($\eta$) vs. transmission of the optical links ($t$) for violation of the CHSH inequality.} The curve assumes that, for each $t$, the optimal choice of $z$ for the binning is made. If there are no losses in the line, the detector efficiency can be as low as $\eta\approx 71.1\%$; conversely, for perfectly efficient detectors, one can tolerate a transmission $t\approx 84\%$.}
\label{figetat}
\end{figure}
\textbf{Experimental considerations.--} Let us make some considerations about the experimental implementation of our scheme. Homodyne measurements require a sufficiently long coherence time of the signal. So, if the state $\ket{\Psi_2}$ is implemented using down-conversion sources as we propose, the bandwidth of the down-converted photons must be narrow enough. Fortunately, the Hong-Ou-Mandel effect between photons coming from different crystals has been demonstrated using continuous pumping \cite{halder}; thus pulsed pumping is not a hidden requirement (notice that this fact has another positive consequence: four-photon processes in down-conversion can indeed be neglected). Homodyne measurements on one- and two-photon states coming from down-conversion have also already been reported \cite{lvo}. It seems therefore that the experiment is feasible with current technology, though certainly challenging.
\textbf{Other quantum states.--} The combination of counting and homodyne measurements can be applied to many more scenarios. A natural question is whether \textit{other states}, among those that are feasible in laboratories today, violate the CHSH inequality. It turns out that the two-mode squeezed state \begin{eqnarray} \ket{\psi}&=&\sqrt{1-\lambda^2}\sum_{n}\lambda^n \ket{n}\ket{n} \label{TMSS} \end{eqnarray} violates a version of CHSH for some values of $\lambda$, provided (say) Bob's homodyne measurement is in the complementary quadrature $P$. Although the violation found is small ($S\approx 2.05$ for $\lambda\approx 0.83$ and $z\approx0.86$), it is remarkable, since this state is Gaussian and easily producible in the lab. Note also that the amount of violation is similar to the best value previously reported with a feasible state \cite{grangier1,nhacar}. The latter, however, used a more complicated state, obtained from \eqref{TMSS} by photon subtraction in each arm. We could not find any violation for the states (here unnormalized) $\ket{1}\ket{0}+\ket{0}\ket{1}$ \cite{0110} and $\ket{\alpha}\ket{-\alpha}+\ket{-\alpha}\ket{\alpha}$ ($\ket{\alpha}$ being a coherent state of amplitude $\alpha$) \cite{gran}.
\textbf{Discussion.--} The use of efficient homodyne measurements and photonic continuous degrees of freedom in Bell tests has attracted much attention in the past years. Although this appears as an interesting path towards a loophole-free nonlocality test, no result so far had indicated that this method could actually work in practice. All the results reported previously suffered from using impractical quantum states and measurements, or achieved very small violations. Our main goal was to overcome these problems and present a feasible scheme to observe large violations of Bell inequalities with continuous-variable measurements. The key element was to combine both photon counting and homodyne measurements in the same Bell test.
Although the implementation of our explicit scheme is still challenging, we believe that our method opens up new possibilities for designing a loophole-free Bell test. From the theoretical point of view, considering other quantum states and/or more sophisticated Bell inequalities could lead to larger and more robust violations. From the experimental point of view, simplifying the creation of the states described here and progress on the experimental considerations we discussed are certainly fruitful directions for further research.
Finally, we believe that a proof-of-principle experiment in which the experimental data is post-processed in order to take into account the inefficiencies in the experiment (similarly to the fair-sampling assumption in the discrete case) is interesting in its own right. Such an experimental demonstration would reinforce the usefulness of homodyne measurements in Bell tests and could be realized with current technology.
\textbf{Appendix.--} In order to study the effect of the limited efficiency $\eta$, we rewrite the CHSH inequality in the Clauser-Horne form \cite{CH}, which is equivalent for no-signaling distributions: \begin{eqnarray}\label{CHineq}
&-&p(a_X=+)-p(b_X=+)+p(++|XX)\\&+&p(++|NX)+p(++|XN)-p(++|NN)\leq0.\nonumber
\end{eqnarray} Here, $p$ describes the \textit{observed} statistics. Now, $p(++|NN)=0$ because one of the modes is always empty (we are neglecting spurious counts here). The first line can be re-written as $p(--|XX)-1$ and there is no effect of $\eta$, so one just has to compute this quantity for $\rho$ along the same lines as we did for $\ket{\Psi_2}$ above. Finally, consider $p(++|NX)$, the case for $p(++|XN)$ being symmetric. If the state is $\ket{\Psi_2}$, one has $p(++|NX,\Psi_2)=[1-(1-\eta)^2]\,P(++|NX)$ where $P(++|NX)$ is given in (\ref{qprobs}), because there are two photons reaching the detector. If the state is $\rho_1$, one has $p(++|NX,\rho_1)=\eta\,P(++|NX)$: indeed, Alice finds $a_N=+1$ with probability $\frac{\eta}{2}$ and prepares the state $\phi_0(x)$ on Bob's side. When the state is $\ket{00}$, Alice never finds $a_N=+1$. All in all, $p(++|NX)=t\eta(2-t\eta)\,P(++|NX)$. Thus the condition for \eqref{CHineq} to be violated becomes
\begin{eqnarray} t\eta&\geq&1-\sqrt{1-\frac{1-p(--|XX)}{P(++|NX)+P(++|XN)}}\,.
\end{eqnarray} Note that $t$ enters in the r.h.s. of this equation through $p(--|XX)$ evaluated for $\rho$. So, contrary to the schemes using discrete variables, the effects of $t$ and $\eta$ are not identical. Ultimately, one has to resort to numerical evaluation to find the best value of $z$ for each case.
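In the lossless-line limit $t=1$, where the state reaching the detectors is $\ket{\Psi_2}$ itself, the threshold on $\eta$ can be evaluated directly from the probabilities in \eqref{qprobs}, using $P(++|NX)+P(++|XN)=\int_{-z}^{z}|\phi_0|^2\,dx$ and $p(--|XX)=P(--|XX)$ for $\ket{\Psi_2}$. The sketch below is our own numerical illustration of this special case.

```python
import numpy as np

def trap(f, a, b, n=4000):
    # plain trapezoidal rule for a smooth integrand on [a, b]
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y.sum() - 0.5 * (y[0] + y[-1]))

phi0 = lambda x: np.pi ** -0.25 * np.exp(-x**2 / 2)
phi2 = lambda x: (4 * np.pi) ** -0.25 * (2 * x**2 - 1) * np.exp(-x**2 / 2)

def eta_threshold(z):
    # with t = 1: p(--|XX) = (1-I0)(1-I2) + J^2 and P(++|NX) = P(++|XN) = I0/2
    I0 = trap(lambda x: phi0(x)**2, -z, z)
    I2 = trap(lambda x: phi2(x)**2, -z, z)
    J  = trap(lambda x: phi0(x) * phi2(x), -z, z)
    p_mm = (1 - I0) * (1 - I2) + J**2
    return 1 - np.sqrt(1 - (1 - p_mm) / I0)

zs = np.linspace(0.6, 1.0, 81)
eta_min = min(eta_threshold(z) for z in zs)
# eta_min is about 0.711, matching the detector efficiency quoted in the text
```

Minimizing over the binning parameter $z$ reproduces the quoted $\eta\approx 71.1\%$ for perfect transmission.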
AS, NB and PS acknowledge hospitality from the National University of Singapore. DC thanks A. Ac\'in and A. Ferraro for many discussions about non-locality in continuous-variable systems along the years. We thank C. Kurtsiefer and D. Tasca for useful discussions. This work was supported by the National Research Foundation and the Ministry of Education, Singapore, the UK EPSRC, EU STREP COQUIT under FET-Open grant number 233747, and QESSENCE.
\end{document}
\begin{document}
\setcounter{page}{1}
\title{Commutators of automorphic composition operators with adjoints}
\author{Liangying Jiang}
\address{Department of Applied Mathematics, Shanghai Finance University, Shanghai 201209, P. R. China}
\email{\textcolor[rgb]{0.00,0.00,0.84}{liangying1231@163.com, jiangly@shfc.edu.cn}}
\subjclass[2010]{Primary 47B33; Secondary 32A35, 32A36}
\keywords{Composition operator; commutator; automorphism; Hardy space; weighted Bergman space; Dirichlet space}
\date{ May 12th, 2014 \newline \indent Supported by the National Natural Science Foundation of China (No.11101279)}
\begin{abstract} In this paper, we investigate the compactness of the commutator $[C_\psi^{\ast}, C_\varphi]$ on the Hardy space $H^2(B_N)$ or the weighted Bergman space $A^2_s(B_N)$ ($s>-1$), when $\varphi$ and $\psi$ are automorphisms of the unit ball $B_N$. We obtain that $[C_\psi^{\ast}, C_\varphi]$ is compact if and only if $\varphi$ and $\psi$ commute and they are both unitary. This generalizes the corresponding result in one variable. Moreover, our technique is different and simpler. In addition, we also discuss the commutator $[C_\psi^{\ast}, C_\varphi]$ on the Dirichlet space $\mathcal{D}(B_N)$, where $\varphi$ and $\psi$ are linear fractional self-maps or both automorphisms of $B_N$. \end{abstract} \maketitle
\section{Introduction}
Let $B_N$ denote the unit ball of $\mathbb{C}^N$ and let $\varphi$ be a holomorphic self-map of $B_N$. We define the composition operator $C_\varphi$
by $C_\varphi(f)=f\circ \varphi$, where $f$ is analytic in $B_N$.
In this paper, we are interested in characterizing the compactness of the commutator $[C_\psi^{\ast}, C_\varphi]=C_\psi^{\ast} C_\varphi- C_\varphi C_\psi^{\ast}$ on some classical function spaces, when $\varphi$ and $\psi$ are automorphisms of $B_N$. The motivation comes from a recent work of Clifford et al. \cite{CLN12}. They considered when the commutator $[C_\psi^{\ast}, C_\varphi]$ is non-trivially compact on the Hardy space $H^2(D)$ for linear fractional self-maps $\varphi$ and $\psi$ of the unit disk $D$. Here, non-trivially compact means that $[C_\psi^{\ast}, C_\varphi]$ is compact but nonzero and, moreover, neither $C_\psi^{\ast} C_\varphi$ nor $C_\varphi C_\psi^{\ast}$ is compact. In particular, when $\varphi$ and $\psi$ are automorphisms of $D$, they showed that $[C_\psi^{\ast}, C_\varphi]$ is non-trivially compact if and only if both maps are rotations. All these results were extended by MacCluer et al. \cite{MNW13} to the weighted Bergman space $A^2_s(D)$ ($s>-1$).
In Section 3, we first investigate the commutator $[C_\psi^{\ast}, C_\varphi]$ on the Hardy space $H^2(B_N)$
and the weighted Bergman space $A^2_s(B_N)$ ($s>-1$), where $\varphi$ and $\psi$ are automorphisms of $B_N$ and neither $\varphi$ nor $\psi$ is the identity. In this case, we will prove that $[C_\psi^{\ast}, C_\varphi]$ is compact if and only if $\varphi$ and $\psi$ commute and both maps are unitary. This generalizes the result for the automorphism case from the unit disk to the unit ball. In order to deduce that
$\varphi$ and $\psi$ commute when $[C_\psi^{\ast}, C_\varphi]$ is compact on $A^2_s(D)$, MacCluer et al. \cite{MNW13} carried out many complicated calculations, and it is difficult to adapt their approach to higher dimensions. In Section 3 we therefore develop a different and simpler technique, which relies on the semi-multiplication property of Toeplitz operators.
Furthermore, in Section 4 we extend the discussion to the compactness of $[C_\psi^{\ast}, C_\varphi]$ on the Dirichlet space $\mathcal{D}(B_N)$. Based on the adjoint formula for $C_\psi$ on $\mathcal{D}(B_N)$, we obtain a necessary and sufficient condition for $[C_\psi^{\ast}, C_\varphi]$ to be compact in terms of the linear fractional self-maps $\varphi$ and $\psi$ of $B_N$. As an immediate consequence, for automorphisms $\varphi$ and $\psi$ of $B_N$, we find that $[C_\psi^{\ast}, C_\varphi]$ is compact on $\mathcal{D}(B_N)$ if and only if $\varphi$ and $\psi$ commute. More specifically, when
$\varphi$ and $\psi$ are linear fractional self-maps of $D$, the condition for $[C_\psi^{\ast}, C_\varphi]$ to be compact on the Dirichlet space $\mathcal{D}(D)$ is the same as that on the Hardy space $H^2(D)$ and the weighted Bergman space $A^2_s(D)$ ($s>-1$).
\section{ Preliminaries}
Here we collect some necessary background information.
\subsection{Analytic function spaces} Let $\partial B_N$ denote the boundary of the unit ball $B_N$. The Hardy space $H^2(B_N)$ is defined by
$$H^2(B_N)=\{f\ \mbox{analytic in}\ B_N: ||f||^2\equiv \sup\limits_{0<r<1}\int_{\partial B_N}|f(r\zeta)|^2 d\sigma(\zeta)<\infty\},$$ where $d\sigma$ denotes the normalized surface measure on $\partial B_N$. The weighted Bergman space $A^2_s(B_N)$, for $s>-1$, is defined by
$$A^2_s(B_N)=\{f\ \mbox{analytic in}\ B_N: ||f||_s^2\equiv \int_{B_N}|f(z)|^2 dv_s(z)<\infty\},$$
where $dv_s(z)=\frac{\Gamma(N+s+1)}{N!\Gamma(s+1)}(1-|z|^2)^sdv(z)$ and $dv$ denotes the normalized volume measure on $B_N$. In this paper, we will often use $\mathcal{H}$ to denote the Hardy space $H^2(B_N)$ or the weighted Bergman space $A^2_s(B_N)$. It is well known that both the Hardy space and the weighted Bergman space are the reproducing kernel Hilbert spaces, where the reproducing kernel is given by
$K_w(z)=(1-<z, w>)^{-t}$ for $z, w \in B_N$, with $t=N$ for $H^2(B_N)$ and $t=N+s+1$ for $A^2_s(B_N)$. So the normalized reproducing kernel is given by $$k_w(z)=\frac{K_w(z)}{||K_w||_{\mathcal{H}}}=\frac{(1-|w|^2)^{t/2}}{(1-<z, w>)^t}.$$
A multi-index $\alpha=(\alpha_1,\ldots,\alpha_N)$ is an $N$-tuple of non-negative integers $\alpha_i$. The total order of a multi-index
is given by $|\alpha|=\alpha_1+\cdots+\alpha_N$. Let $\alpha!=\alpha_1!\cdots \alpha_N!$ and $z^\alpha=z_1^{\alpha_1}\cdots z_N^{\alpha_N}$
for $z=(z_1,\ldots,z_N)\in \mathbb{C}^N$. An analytic function $f$ in $B_N$ has a power series representation $$f(z)=\sum\limits_\alpha c_\alpha z^\alpha,$$ where the sum is over all multi-indices.
The Dirichlet space $\mathcal{D}(B_N)$ is defined as
$$\mathcal{D}(B_N)=\{f(z)=\sum\limits_\alpha c_\alpha z^\alpha\, \mbox{analytic in}\, B_N: ||f||_{\mathcal{D}^0}^2\equiv \sum\limits_\alpha |c_\alpha |^2|\alpha|\frac{\alpha!}{|\alpha|!}<\infty\},$$
where the quantity $||\cdot||_{\mathcal{D}^0}$ defines a semi-norm on $\mathcal{D}(B_N)$. We equip it with the norm $$||f||_{\mathcal{D}}^2=|f(0)|^2+||f||_{\mathcal{D}^0}^2$$ and the inner product $<\cdot , \cdot>_{\mathcal{D}}$. So the reproducing kernel for $\mathcal{D}(B_N)$ is given by
$$K_w(z)=1+\log \frac{1}{1-<z,w>},\quad z, w\in B_N.$$
When acting on the reproducing kernel, composition operators and Toeplitz operators have the following adjoint property: $$C_\varphi^\ast T_h^\ast K_w=\overline{h(w)}K_{\varphi(w)}, \quad w\in B_N$$ for all analytic self-maps $\varphi$ of $B_N$ and all bounded analytic functions $h$ on $B_N$.
\subsection{Adjoint formula}
Given a bounded measurable complex-valued function $b$ on $\partial B_N$ (or $B_N$), the Toeplitz operator $T_b$ on $H^2(B_N)$ (or $A^2_s(B_N)$) is defined by $$T_b f=P(bf),$$ where $P$ is the orthogonal projection from $L^2(\partial B_N)$ (or $L^2(B_N, dv_s)$) onto $H^2(B_N)$ (or $A^2_s(B_N)$). If $b$ is analytic, then $T_b$ is a multiplication by $b$.
In this paper, we will often use the semi-multiplication property for Toeplitz operators mod $\mathcal{K}$, where $\mathcal{K}$ denotes the ideal of compact operators. That is, if $b\in L^\infty(\partial B_N)$ (or $ L^2(B_N)$) and $h\in C(\overline{B_N})$, then $T_bT_h-T_{bh}$ is compact on $H^2(B_N)$ (or $A^2_s(B_N)$). This result on the Hardy space $H^2(B_N)$ is Proposition 1.4 in \cite{McD77}; the Bergman space version comes from the proof of Theorem 1 in \cite{Cob73}, and by a similar argument this property can also be extended to the weighted Bergman space $A^2_s(B_N)$ ($s>-1$). In the unit disk $D$, this fact has been well described on page 73 of \cite{MNW13}. \\ \\ {\bf Theorem A.} {\it Suppose that $$\varphi(z)=\frac{az+b}{cz+d}$$ is a linear fractional self-map of $D$ and $ad-bc\ne 0$. Then the adjoint of $C_\varphi$ acting on the Hardy space $H^2(D)$ or the weighted Bergman space $A^2_s(D)$ ($s>-1$) is given by $$C_{\varphi}^{\ast}=T_g C_{\sigma} T_h^{\ast},$$ where $$\sigma(z)=\frac{\overline{a}z-\overline{c}}{-\overline{b}z+\overline{d}}$$ is the Krein adjoint of $\varphi$, $g(z)=(-\overline{b}z+ \overline{d})^{-t}$ and $h(z)=(cz+ d)^t$ are in $H^\infty$ with $t=1$ for $H^2(D)$ and $t=s+2$ for $A^2_s(D)$.} \\ \par This adjoint formula of $C_\varphi$ was first established on $H^2(D)$ by Cowen \cite{Co88} and was generalized to $A^2_s(D)$ ($s>-1$) by Hurst \cite{Hu97}. In the unit ball $B_N$, Cowen and MacCluer \cite{CM00} obtained the following similar adjoint formula for $C_\varphi$ when acting on $H^2(B_N)$ or $A^2_s(B_N)$ ($s>-1$). \\ \\ {\bf Theorem B.} {\it Let $$\varphi(z)=\frac{Az+B}{<z, C>+d}$$ be a linear fractional self-map of $B_N$, where $A$ is an $N\times N$ matrix, $B$ and $C$
are $N\times 1$ matrices, and $d$ is a scalar. Then on the space $\mathcal{H}$, $$C_{\varphi}^{\ast}=T_g C_{\sigma} T_h^{\ast},$$ where $$\sigma(z)=\frac{A^\ast z-C}{<z, -B>+ \bar{d}},$$ $g(z)=(<z, -B>+ \bar{d})^{-t}$ and $h(z)=(<z, C>+ d)^t$ with $t=N$ when $\mathcal{H}=H^2(B_N)$ and $t=N+s+1$ when $\mathcal{H}=A^2_s(B_N)$ ($s>-1$).} \\ \par We will refer to the functions $g, h$ and $\sigma$ as the auxiliary functions of $\varphi$ when they are connected by the equation $C_{\varphi}^{\ast}=T_g C_{\sigma} T_h^{\ast}$. We will also frequently use the following property: if $\varphi\in\mbox{Aut}(B_N)$ (or $\mbox{Aut}(D)$) then $\sigma=\varphi^{-1}$, where $\mbox{Aut}(B_N)$ (or $\mbox{Aut}(D)$) denotes the set of automorphisms of $B_N$ (or $D$).
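The adjoint formula of Theorem A can be checked numerically in the disk case $t=1$ by letting both sides act on reproducing kernels: $C_\varphi^{\ast}K_w=K_{\varphi(w)}$, while $T_g C_{\sigma} T_h^{\ast}K_w=\overline{h(w)}\,g\cdot(K_w\circ\sigma)$. The sketch below (our own illustration, with an arbitrary self-map of $D$ chosen by us) compares the two sides pointwise.

```python
import numpy as np

# phi(z) = (az + b)/(cz + d): an arbitrary example, chosen so that phi maps
# the unit disk D into itself (|0.5 z + 0.2| <= 0.7 < 0.8 <= |0.2 z + 1| on |z| = 1)
a, b, c, d = 0.5, 0.2, 0.2, 1.0
phi   = lambda z: (a * z + b) / (c * z + d)
sigma = lambda z: (np.conj(a) * z - np.conj(c)) / (-np.conj(b) * z + np.conj(d))
g     = lambda z: (-np.conj(b) * z + np.conj(d)) ** -1.0   # t = 1 for H^2(D)
h     = lambda z: (c * z + d) ** 1.0

rng = np.random.default_rng(0)
pts = 0.7 * (rng.random(20) + 1j * rng.random(20) - 0.5 - 0.5j)  # points in D
for z in pts:
    for w in pts:
        lhs = 1.0 / (1.0 - z * np.conj(phi(w)))                  # K_{phi(w)}(z)
        rhs = np.conj(h(w)) * g(z) / (1.0 - sigma(z) * np.conj(w))
        assert abs(lhs - rhs) < 1e-12
```

For $t=1$ the two sides agree as rational identities, so the comparison holds to machine precision.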
Moreover, we have a different adjoint formula for $C_{\varphi}$ on the Dirichlet space $\mathcal{D}(B_N)$ as the following. \\ \\ {\bf Theorem C.} (Theorem 7 in \cite{Po11}) {\it Let $\varphi$ be a linear fractional self-map of $B_N$ and $K_w$ be the reproducing kernel of $\mathcal{D}(B_N)$. Then for $f\in \mathcal{D}(B_N)$, we have
$$C_{\varphi}^{\ast}f=f(0)K_{\varphi(0)}+C_\sigma f-f(\sigma(0)),$$ where $\sigma$ is the Krein adjoint of $\varphi$.} \\ \par This means that the adjoint of $C_{\varphi}$ on $\mathcal{D}(B_N)$ can be identified as another composition operator and a rank $2$ operator. For the unit disk case, this adjoint formula on the Dirichlet space $\mathcal{D}(D)$ was given by Gallardo-Guti\'{e}rrez and Montes-Rodr\'{i}guez (see Theorem 3.3 in \cite{GM03}).
It is well known that when $\varphi$ is a linear fractional self-map of $B_N$, $C_\varphi$ is compact on the spaces mentioned in this paper if and only if $||\varphi||_\infty<1$. Let $\varphi$ be a linear fractional self-map of $D$ with a fixed point $\omega\in\partial D$. If $\omega$ is the only fixed point of $\varphi$, we say that $\varphi$ is parabolic; if $\varphi$ has an additional fixed point, we call $\varphi$ hyperbolic.
\subsection{Julia-Carath\'{e}odory theorem} Given $\zeta\in \partial B_N$, a continuous function $\gamma:[0, 1]\rightarrow B_N$ with $\lim\limits_{t\to 1}\gamma(t)=\zeta$ is said to be a restricted $\zeta$-curve if
$$\lim\limits_{t\to 1} \frac{|\gamma(t)-<\gamma(t),\zeta>\zeta|^2}{1-|<\gamma(t),\zeta>|^2}=0
\quad \mbox{and} \quad \sup\limits_{0\le t<1}\frac{|\zeta-<\gamma(t),\zeta>\zeta|}{1-|<\gamma(t),\zeta>|}<\infty.$$ If $\lim\limits_{t\to 1}f(\gamma(t))=f(\zeta)$ for every restricted $\zeta$-curve $\gamma$, we say that $f: B_N\rightarrow \mathbb{C}$
has restricted limit at $\zeta$ and write $R\lim\limits_{z\to\zeta}f(z)=f(\zeta)$.
Let $\varphi$ be a holomorphic self-map of $B_N$. We say that $\varphi$ has finite angular derivative at $\zeta\in \partial B_N$, if there exists a point $\eta\in \partial B_N$ so that
$$A_\varphi(\zeta)=R\lim\limits_{z\to\zeta}\frac{1-<\varphi(z), \eta>}{1-<z, \zeta>}$$ exists. We write $\varphi_\eta:=<\varphi, \eta>$ and $D_\zeta=\frac{\partial}{\partial \zeta}$ for the directional derivative in the direction of $\zeta$, and we put $$d_{\varphi}(\zeta)=\liminf\limits_{z\to \zeta}\frac{1-|\varphi(z)|}{1-|z|}.$$
The following is the Julia-Carath\'{e}odory theorem for the ball (see Theorem 8.5.6 in \cite{Rudin} or Theorem 2.2 in \cite{CKP14}). \\ \\ {\bf Theorem D.} {\it Let $\varphi$ be a holomorphic self-map of $B_N$ and $\zeta\in \partial B_N$. The following statements are equivalent: \\ (1) $\varphi$ has finite angular derivative at $\zeta$. \\ (2) $d_{\varphi}(\zeta)<\infty$. \\ (3) $\varphi$ has restricted limit $\eta\in \partial B_N$ at $\zeta$ and $D_{\zeta}\varphi_{\eta}(z)=<\varphi'(z)\zeta, \eta>$ has finite restricted
limit at $\zeta$. \\ Moreover, when these conditions hold, the following statements hold: \\ (4) $D_{\zeta}\varphi_{\eta}(z)=<\varphi'(z)\zeta, \eta>$ has restricted limit at $\zeta$ with $D_{\zeta}\varphi_{\eta}(\zeta)=d_{\varphi}(\zeta)$. \\ (5) $A_\varphi(\zeta)=d_{\varphi}(\zeta)$. \\ (6) $\frac{\varphi_{\eta^\bot}(z)}{1-<z, \zeta>}$ has restricted limit $0$ at $\zeta$ for any $\eta^\bot\in \partial B_N$ orthogonal to $\eta$.}
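To fix ideas, the following standard one-variable computation (our illustration, not taken from the references above) exhibits the quantities appearing in Theorem D for a hyperbolic disk automorphism.

```latex
% A hyperbolic automorphism of the unit disk D (the case N = 1):
% for 0 < a < 1, let \varphi(z) = (z+a)/(1+az), which fixes 1 and -1 on \partial D.
\varphi'(z)=\frac{(1+az)-a(z+a)}{(1+az)^{2}}=\frac{1-a^{2}}{(1+az)^{2}},
\qquad
\varphi'(1)=\frac{1-a^{2}}{(1+a)^{2}}=\frac{1-a}{1+a}.
% Thus d_\varphi(1)=A_\varphi(1)=(1-a)/(1+a)<1, with \eta=\zeta=1,
% in accordance with statements (2) and (5) of Theorem D.
```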
\section { The commutator on the Hardy space and the weighted Bergman space}
In this section, we discuss the compactness of the commutator $[C_\psi^{\ast}, C_\varphi]$ on the Hardy space $H^2(B_N)$
and the weighted Bergman space $A^2_s(B_N)$ ($s>-1$), when $\varphi$ and $\psi$ are automorphisms of $B_N$. We will show the following main theorem.
\begin{theorem} \label{the 3.1} Let $\varphi$ and $\psi$ be automorphisms of $B_N$, neither of which is the identity. The commutator $[C_\psi^{\ast}, C_\varphi]$ is compact on $H^2(B_N)$ or $A^2_s(B_N)$ ($s>-1$) if and only if $\varphi$ and $\psi$ commute and both maps are unitary. \end{theorem}
Before we give a proof of Theorem 3.1, we need some useful lemmas. The first lemma is analogous to Lemma 3.2 of \cite{CLN12} and Lemma 4.3 of \cite{MNW13}, but improves on those results. Recall that for $w\in B_N$, the function $k_w$ is the normalized reproducing kernel given by $k_w=K_w/||K_w||_{\mathcal{H}}$.
\begin{lemma}\label{lem 3.2} Assume that $\varphi$ and $\psi$ are holomorphic self-maps of $B_N$. Suppose that there exist points $\zeta_1$ and $\zeta_2$ on $\partial B_N$ such that $\varphi(\zeta_1)=\psi(\zeta_2)=\omega\in \partial B_N$ and $A_\varphi(\zeta_1)$ and $A_\psi(\zeta_2)$ exist. Then on the space $\mathcal{H}$, $$\lim\limits_{r\to 1}<C_{\psi}^{\ast} k_{r\zeta_2}, C_{\varphi}^{\ast} k_{r\zeta_1}>=\biggl(\frac{2}{d_\varphi(\zeta_1)+d_\psi(\zeta_2)}\biggr)^t>0,$$ where $t=N$ when $\mathcal{H}=H^2(B_N)$ and $t=N+s+1$ when $\mathcal{H}=A^2_s(B_N)$ ($s>-1$). \end{lemma}
\proof Let $U$ be a unitary map of $B_N$ so that $U\omega=e_1$. Then $U\varphi(\zeta_1)=U\psi(\zeta_2)=U\omega=e_1$. Writing $\phi=U\varphi$ and $\rho=U\psi$, we see that $$<C_{\rho}^{\ast} k_{r\zeta_2}, C_{\phi}^{\ast} k_{r\zeta_1}>=<C_{U}^{\ast}C_{\psi}^{\ast} k_{r\zeta_2}, C_{U}^{\ast}C_{\varphi}^{\ast} k_{r\zeta_1}>=<C_{\psi}^{\ast} k_{r\zeta_2}, C_{\varphi}^{\ast} k_{r\zeta_1}>.$$
Thus we may assume $\omega=e_1$. Note that $$\frac{1}{<C_{\psi}^{\ast} k_{r\zeta_2}, C_{\varphi}^{\ast} k_{r\zeta_1}>}=\frac{||K_{r\zeta_1}||_{\mathcal{H}}||K_{r\zeta_2}||_{\mathcal{H}}}{<K_{\psi(r\zeta_2)}, K_{\varphi(r\zeta_1)}>}=\biggl(\frac{1-<\varphi(r\zeta_1), \psi(r\zeta_2)>}{1-r^2}\biggr)^t$$ and{\setlength\arraycolsep{2pt} \begin{eqnarray*}\frac{1-<\varphi(r\zeta_1), \psi(r\zeta_2)>}{1-r^2}
&=&\frac{1-|\psi(r\zeta_2)|^2}{1-r^2}+\frac{|\psi(r\zeta_2)|^2-<\varphi(r\zeta_1), \psi(r\zeta_2)>}{1-r^2}
\\ &=& \frac{1-|\psi(r\zeta_2)|^2}{1-r^2}+\frac{<\psi(r\zeta_2)-\varphi(r\zeta_1), \psi(r\zeta_2)>}{1-r^2}. \end{eqnarray*}}Now, we calculate that{\setlength\arraycolsep{2pt} \begin{eqnarray*} && \frac{<\psi(r\zeta_2)-\varphi(r\zeta_1), \psi(r\zeta_2)>}{1-r^2} \\ &=& \frac{\psi_1(r\zeta_2)-\varphi_1(r\zeta_1)}{1-r^2}\cdot\overline{\psi_1(r\zeta_2)}+ \sum_{j=2}^N\frac{\psi_j(r\zeta_2)-\varphi_j(r\zeta_1)}{1-r^2}\cdot\overline{\psi_j(r\zeta_2)} \\ &=&\biggl(\frac{1-<\varphi(r\zeta_1), e_1>}{1-<r\zeta_1, \zeta_1>} -\frac{1-<\psi(r\zeta_2), e_1>}{1-<r\zeta_2, \zeta_2>}\biggr)\frac{\overline{\psi_1(r\zeta_2)}}{1+r} \\ && + \sum_{j=2}^N \biggl(\frac{\psi_j(r\zeta_2)}{1-<r\zeta_2, \zeta_2>} -\frac{\varphi_j(r\zeta_1)}{1-<r\zeta_1, \zeta_1>}\biggr)\frac{\overline{\psi_j(r\zeta_2)}}{1+r}. \end{eqnarray*}}Since $\varphi$ and $\psi$ have finite angular derivatives respectively at $\zeta_1$ and $\zeta_2$, by Theorem D(5), we get that $$\lim\limits_{r\to 1}\frac{1-<\varphi(r\zeta_1), e_1>}{1-<r\zeta_1, \zeta_1>}=A_\varphi(\zeta_1)=d_\varphi(\zeta_1)$$ and $$\lim\limits_{r\to 1}\frac{1-<\psi(r\zeta_2), e_1>}{1-<r\zeta_2, \zeta_2>}=A_\psi(\zeta_2)=d_\psi(\zeta_2).$$ Moreover, by Theorem D(6), $$\lim\limits_{r\to 1}\frac{\varphi_j(r\zeta_1)}{1-<r\zeta_1, \zeta_1>}=0 \qquad \mbox{and} \qquad \lim\limits_{r\to 1}\frac{\psi_j(r\zeta_2)}{1-<r\zeta_2, \zeta_2>}=0$$ hold for $2\le j\le N$. Therefore, {\setlength\arraycolsep{2pt} \begin{eqnarray*}&& \lim\limits_{r\to 1}\frac{1}{<C_{\psi}^{\ast} k_{r\zeta_2}, C_{\varphi}^{\ast} k_{r\zeta_1}>}=\lim\limits_{r\to 1}\biggl(\frac{1-<\varphi(r\zeta_1), \psi(r\zeta_2)>}{1-r^2}\biggr)^t
\\ &=& \lim\limits_{r\to 1} \biggl(\frac{1-|\psi(r\zeta_2)|^2}{1-r^2}+\frac{<\psi(r\zeta_2)-\varphi(r\zeta_1), \psi(r\zeta_2)>}{1-r^2}\biggr)^t \\ &=&\biggl[d_\psi(\zeta_2)+\frac{1}{2}(d_\varphi(\zeta_1)-d_\psi(\zeta_2))\biggr]^t \\ &=&\biggl(\frac{d_\varphi(\zeta_1)+d_\psi(\zeta_2)}{2}\biggr)^t. \end{eqnarray*}}Thus we obtain the desired conclusion. \ \ $\Box$
\begin{lemma}\label{lem 3.3} Assume that $\varphi$ and $\psi$ are holomorphic self-maps of $D$. Suppose that there exist points $\zeta_1$ and $\zeta_2$ on $\partial D$ such that $\varphi(\zeta_1)=\psi(\zeta_2)=\omega\in \partial D$ and the angular derivatives $\varphi'(\zeta_1)$ and $\psi'(\zeta_2)$ exist. Then on $H^2(D)$ or $A^2_s(D)$ ($s>-1$),
$$\lim\limits_{r\to 1}<C_{\psi}^{\ast} k_{r\zeta_2}, C_{\varphi}^{\ast} k_{r\zeta_1}>=\biggl(\frac{2}{|\varphi'(\zeta_1)|+|\psi'(\zeta_2)|}\biggr)^t>0,$$
where $$k_w(z)=\biggl(\frac{\sqrt{1-|w|^2}}{1-z\overline{w}}\biggr)^t,\qquad z, w\in D$$ is the normalized reproducing kernel with $t=1$ for $H^2(D)$ and $t=s+2$ for $A^2_s(D)$. \end{lemma}
This is an easy corollary of Lemma 3.2. In fact, Lemma 3.3 can also be obtained immediately if we notice that $\zeta_1\varphi'(\zeta_1)\overline{\omega}=|\varphi'(\zeta_1)|$ and $\zeta_2\psi'(\zeta_2)\overline{\omega}=|\psi'(\zeta_2)|$ from the Julia-Carath\'{e}odory theorem for the unit disk (see Theorem 2.44 of \cite{CM1}). Indeed, Lemma 3.2 of \cite{CLN12} and Lemma 4.3 of \cite{MNW13} give that
$$\lim\limits_{r\to 1}<C_{\psi}^{\ast} k_{r\zeta_2}, C_{\varphi}^{\ast} k_{r\zeta_1}>=\biggl(\frac{2\omega}{2\omega|\psi'(\zeta_2)|+[\zeta_1\varphi'(\zeta_1)-\zeta_2\psi'(\zeta_2)]}\biggr)^t,$$ so we only need to multiply the numerator and the denominator of the above fraction by $\overline{\omega}$ to deduce the limit in Lemma 3.3.
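As a quick consistency check of Lemma 3.3 (our own, not drawn from \cite{CLN12} or \cite{MNW13}), take $\psi=\varphi$ and $\zeta_1=\zeta_2=\zeta$, so that the inner product becomes the square of a norm:

```latex
% With \psi = \varphi and \zeta_1 = \zeta_2 = \zeta, using C_\varphi^\ast K_w = K_{\varphi(w)},
\lim_{r\to 1}\langle C_{\varphi}^{\ast}k_{r\zeta}, C_{\varphi}^{\ast}k_{r\zeta}\rangle
 =\lim_{r\to 1}\biggl(\frac{1-r^{2}}{1-|\varphi(r\zeta)|^{2}}\biggr)^{t}
 =\frac{1}{|\varphi'(\zeta)|^{t}},
% which agrees with the value (2/(|\varphi'(\zeta)|+|\varphi'(\zeta)|))^t from Lemma 3.3,
% since (1-|\varphi(r\zeta)|)/(1-r) -> |\varphi'(\zeta)| by the Julia-Caratheodory theorem.
```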
We now show that when $\varphi$ and $\psi$ are automorphisms of $D$, in order for the commutator $[C_\psi^{\ast}, C_\varphi]$ to be compact, the inducing maps $\varphi$ and $\psi$ must commute. On the Hardy space $H^2(D)$ this result is Lemma 5.1 of \cite{CLN12}, and on the weighted Bergman space $A^2_s(D)$ it is Theorem 4.6 of \cite{MNW13}. The techniques there are similar, but on $A^2_s(D)$ some of the calculations involved are very complex, and it is almost impossible to carry out similar calculations in higher dimensions. We therefore need another method that works on $H^2(D)$ and $A^2_s(D)$ simultaneously and that can also be extended to the unit ball.
\begin{lemma}\label{lem 3.4} Assume that $\varphi$ and $\psi$ are automorphisms of $D$. If the commutator $[C_\psi^{\ast}, C_\varphi]$ is compact on $H^2(D)$ or $A^2_s(D)$ ($s>-1$), then $\varphi$ and $\psi$ commute. \end{lemma}
\proof This is Lemma 5.1 in \cite{CLN12} and Theorem 4.6 in \cite{MNW13}. Here we give a simpler proof, focusing only on the Hardy space $H^2(D)$. Set $$\varphi(z)=\frac{a_1z+b_1}{c_1z+d_1}\qquad \mbox{and} \qquad \psi(z)=\frac{a_2z+b_2}{c_2z+d_2}$$ with normalization $a_1d_1-c_1b_1=1$ and $a_2d_2-c_2b_2=1$. By Theorem A, we have $$C_{\varphi}=T_{h_1} C^\ast_{\sigma_1}T_{g_1}^\ast \qquad \mbox{and} \qquad C_{\psi}=T_{h_2} C^\ast_{\sigma_2}T_{g_2}^\ast,$$ where $g_1, h_1, \sigma_1$ and $g_2, h_2, \sigma_2$ are respectively the auxiliary functions for $\varphi$ and $\psi$ in Theorem A. Since
$\varphi$ and $\psi$ are automorphisms of $D$, we have $\sigma_1=\varphi^{-1}$ and $\sigma_2=\psi^{-1}$.
For $w\in D$, let $K_w$ denote the reproducing kernel and $k_w$ be the normalized reproducing kernel respectively given by $$K_w(z)=\frac{1}{1-z\overline{w}} \qquad \mbox{and} \qquad k_w(z)=\frac{K_w(z)}{||K_w||}=\frac{\sqrt{1-|w|^2}}{1-z\overline{w}},$$ where $||\cdot||$ denotes the norm of $H^2(D)$. Now, using the formula $C_\phi^{\ast}T^\ast_b K_w=\overline{b(w)}K_{\phi(w)}$, we get that{\setlength\arraycolsep{2pt} \begin{eqnarray*} <C_\varphi K_z, C_\psi K_w> &=& <T_{h_1} C^\ast_{\sigma_1}T_{g_1}^\ast K_z, T_{h_2} C^\ast_{\sigma_2}T_{g_2}^\ast K_w> \\ &=& g_1(z)\overline{g_2(w)}<T_{h_1} K_{\sigma_1(z)}, T_{h_2} K_{\sigma_2(w)}> \\ &=& g_1(z)\overline{g_2(w)}< K_{\sigma_1(z)}, T^\ast_{h_1}T_{h_2} K_{\sigma_2(w)}>. \end{eqnarray*}}
Since $h_1(z)=c_1z+ d_1$ and $h_2(z)=c_2z+ d_2$ are in $ H^\infty$, using the semi-multiplicative property for Toeplitz operators mod $\mathcal{K}$ as mentioned in Section 2, we see $$T^\ast_{h_1}T_{h_2}=T_{\overline{h_1}}T_{h_2}=T_{\overline{h_1}\, h_2}=T_{h_2\, \overline{h_1}}=T_{h_2}T_{\overline{h_1}}+L=T_{h_2}T^\ast_{h_1}+L,$$ where $L$ is a compact operator on $H^2(D)$. It follows that{\setlength\arraycolsep{2pt} \begin{eqnarray*}&& <C_\varphi K_z, C_\psi K_w> = g_1(z)\overline{g_2(w)}< K_{\sigma_1(z)}, T^\ast_{h_1}T_{h_2} K_{\sigma_2(w)}> \\ &=& g_1(z)\overline{g_2(w)}< K_{\sigma_1(z)}, T_{h_2}T^\ast_{h_1} K_{\sigma_2(w)}>+ g_1(z)\overline{g_2(w)}< K_{\sigma_1(z)}, L K_{\sigma_2(w)}> \\ &=& g_1(z)\overline{g_2(w)}< T_{h_2}^\ast K_{\sigma_1(z)},T^\ast_{h_1} K_{\sigma_2(w)}>+ g_1(z)\overline{g_2(w)}< K_{\sigma_1(z)}, L K_{\sigma_2(w)}> \\ &=& g_1(z)\overline{g_2(w)}h_2\circ\sigma_1(z) \overline{h_1\circ\sigma_2(w)}< K_{\sigma_1(z)}, K_{\sigma_2(w)}> \\ &&+ g_1(z)\overline{g_2(w)}< K_{\sigma_1(z)}, L K_{\sigma_2(w)}>. \end{eqnarray*}}
Fix $\omega\in \partial D$, since $\varphi$ and $\psi$ are automorphisms, there exist $\zeta_1$ and $\zeta_2$ on $\partial D$ such that $\varphi(\zeta_1)=\psi(\zeta_2)=\omega$. Setting $z=r\zeta_2$ and $w=r\zeta_1$, then{\setlength\arraycolsep{2pt}
\begin{eqnarray*} &&\lim\limits_{r\to 1^-}<C_\varphi k_{r\zeta_2}, C_\psi k_{r\zeta_1}> = \lim\limits_{r\to 1^-}<C_\varphi \frac{K_{r\zeta_2}}{||K_{r\zeta_2}||}, C_\psi \frac{K_{r\zeta_1}}{||K_{r\zeta_1}||}> \\ &=& \lim\limits_{r\to 1^-}(1-r^2)<C_\varphi K_{r\zeta_2}, C_\psi K_{r\zeta_1}> \\ &=& \lim\limits_{r\to 1^-}(1-r^2) g_1(r\zeta_2)\,\overline{g_2(r\zeta_1)}\,h_2\circ\sigma_1(r\zeta_2)\, \overline{h_1\circ\sigma_2(r\zeta_1)}< K_{\sigma_1(r\zeta_2)}, K_{\sigma_2(r\zeta_1)}> \\ && + \lim\limits_{r\to 1^-}(1-r^2)g_1(r\zeta_2)\,\overline{g_2(r\zeta_1)}< K_{\sigma_1(r\zeta_2)}, L K_{\sigma_2(r\zeta_1)}> \\ &=&: I+II.
\end{eqnarray*}}Note that $k_w\to 0$ weakly as $|w|\to 1$ with $||k_w||=1$ and that $L$ is compact, so
$$\lim\limits_{|w|\to 1}\sqrt{1-|w|^2}||LK_w||=\lim\limits_{|w|\to 1}||Lk_w||=0,$$ which gives that{\setlength\arraycolsep{2pt}
\begin{eqnarray*}&&\lim\limits_{r\to 1^-}\sqrt{1-r^2}||L K_{\sigma_2(r\zeta_1)}||
\\ &=& \lim\limits_{r\to 1^-}\sqrt{1-|\sigma_2(r\zeta_1)|^2}||L K_{\sigma_2(r\zeta_1)}||\cdot\sqrt{\frac{1-r^2}{1-|\sigma_2(r\zeta_1)|^2}}
\\ &=& 0\cdot\frac{1}{|\sigma_2'(\zeta_1)|^{1/2}}=0. \end{eqnarray*}} Hence, {\setlength\arraycolsep{2pt}
\begin{eqnarray*} &&\lim\limits_{r\to 1^-}(1-r^2)|g_1(r\zeta_2)\,\overline{g_2(r\zeta_1)}< K_{\sigma_1(r\zeta_2)}, L K_{\sigma_2(r\zeta_1)}>|
\\ &\le & \lim\limits_{r\to 1^-}(1-r^2)|g_1(r\zeta_2)\,\overline{g_2(r\zeta_1)}|\cdot || K_{\sigma_1(r\zeta_2)}||\cdot ||L K_{\sigma_2(r\zeta_1)}||
\\ &= & \lim\limits_{r\to 1^-}|g_1(r\zeta_2)\,\overline{g_2(r\zeta_1)}| \sqrt{\frac{1-r^2}{1-|\sigma_1(r\zeta_2)|^2}}\cdot \sqrt{1-r^2} ||L K_{\sigma_2(r\zeta_1)}||
\\ &= & |g_1(\zeta_2)\,\overline{g_2(\zeta_1)}|\frac{1}{|\sigma_1'(\zeta_2)|^{1/2}} \cdot 0=0 \end{eqnarray*}}and so $II=0$.
Now, using $\sigma_1=\varphi^{-1}$ and $\sigma_2=\psi^{-1}$, we calculate that{\setlength\arraycolsep{2pt} \begin{eqnarray*} I &=& \lim\limits_{r\to 1^-}(1-r^2) g_1(r\zeta_2)\,\overline{g_2(r\zeta_1)}\,h_2\circ\sigma_1(r\zeta_2)\, \overline{h_1\circ\sigma_2(r\zeta_1)}< K_{\sigma_1(r\zeta_2)}, K_{\sigma_2(r\zeta_1)}> \\ &=& \lim\limits_{r\to 1^-} \overline{h_1\circ\psi^{-1}(r\zeta_1)}\ g_1(r\zeta_2)\ h_2\circ\varphi^{-1}(r\zeta_2)\ \overline{g_2(r\zeta_1)}\ \frac{1-r^2}{1-<\psi^{-1}(r\zeta_1), \varphi^{-1}(r\zeta_2)>}. \end{eqnarray*}}It is easy to see that $$\lim\limits_{r\to 1^-}\frac{1-r^2}{1-<\psi^{-1}(r\zeta_1), \varphi^{-1}(r\zeta_2)>}=0$$ unless $\varphi^{-1}(\zeta_2)=\psi^{-1}(\zeta_1)$. Next, we assume $\varphi^{-1}(\zeta_2)=\psi^{-1}(\zeta_1)$. By Lemma 3.3,
we get{\setlength\arraycolsep{2pt} \begin{eqnarray*}\lim\limits_{r\to 1}\frac{1-r^2}{1-<\psi^{-1}(r\zeta_1), \varphi^{-1}(r\zeta_2)>}&=&\lim\limits_{r\to 1}<C_{\varphi^{-1}}^{\ast} k_{r\zeta_2}, C_{\psi^{-1}}^{\ast} k_{r\zeta_1}>
\\ &=&\frac{2}{|(\psi^{-1})'(\zeta_1)|+|(\varphi^{-1})'(\zeta_2)|}. \end{eqnarray*}}On the other hand, an easy computation shows that $$h_1(z)\overline{g_1\circ\varphi(z)}=\frac{c_1z+d_1}{\overline{a_1-c_1\varphi(z)}}
=\frac{c_1z+d_1}{\overline{a_1-c_1\frac{a_1z+b_1}{c_1z+d_1}}}=\frac{|c_1z+d_1|^2}{\overline{a_1d_1-c_1b_1}}=|c_1z+d_1|^2$$ and similarly
$$h_2(z)\overline{g_2\circ\psi(z)}=|c_2z+d_2|^2.$$ This fact has been proved by MacCluer and Pons \cite{MP06} for the unit ball case. Combining this with $\varphi^{-1}(\zeta_2)=\psi^{-1}(\zeta_1)$, we obtain{\setlength\arraycolsep{2pt} \begin{eqnarray*}\overline{h_1\circ\psi^{-1}(\zeta_1)}\ g_1(\zeta_2)&=& \overline{h_1\circ\varphi^{-1}(\zeta_2)}\ g_1\circ\varphi\circ\varphi^{-1}(\zeta_2)
\\ &=& |c_1\varphi^{-1}(\zeta_2)+d_1|^2= \biggl|c_1\frac{d_1\zeta_2-b_1}{-c_1\zeta_2+a_1}+d_1\biggr|^2
\\ &=& \frac{1}{|a_1-c_1\zeta_2|^2}=|(\varphi^{-1})'(\zeta_2)| \end{eqnarray*}} and{\setlength\arraycolsep{2pt} \begin{eqnarray*}h_2\circ\varphi^{-1}(\zeta_2)\ \overline{g_2(\zeta_1)}&=&h_2\circ\psi^{-1}(\zeta_1)\ \overline{g_2\circ\psi\circ\psi^{-1}(\zeta_1)}
\\ &=& |c_2\psi^{-1}(\zeta_1)+d_2|^2=|(\psi^{-1})'(\zeta_1)|.
\end{eqnarray*}}As a result, we have shown that $$I=|(\varphi^{-1})'(\zeta_2)||(\psi^{-1})'(\zeta_1)|\frac{2}{|(\psi^{-1})'(\zeta_1)|+|(\varphi^{-1})'(\zeta_2)|}$$ if $\varphi^{-1}(\zeta_2)=\psi^{-1}(\zeta_1)$; otherwise $I=0$. Moreover, since $\varphi(\zeta_1)=\psi(\zeta_2)=\omega\in\partial D$, if $\varphi^{-1}(\zeta_2)=\psi^{-1}(\zeta_1)$ then we have $$\psi^{-1}\circ\varphi^{-1}(\omega)=\varphi^{-1}\circ\psi^{-1}(\omega).$$ Since $\omega\in\partial D$ was arbitrary, this means that $\varphi^{-1}$ and $\psi^{-1}$ commute, and hence $\varphi$ and $\psi$ commute. Immediately, we get $$(\psi^{-1})'(\varphi^{-1}(\omega))(\varphi^{-1})'(\omega)=(\varphi^{-1})'(\psi^{-1}(\omega))(\psi^{-1})'(\omega),$$ i.e. $$(\psi^{-1})'(\zeta_1)\frac{1}{\varphi'(\zeta_1)}=(\varphi^{-1})'(\zeta_2)\frac{1}{\psi'(\zeta_2)}.$$ Thus, the above discussion shows that{\setlength\arraycolsep{2pt} \begin{eqnarray*}&&\lim\limits_{r\to 1^-}<C_\varphi k_{r\zeta_2}, C_\psi k_{r\zeta_1}> =:I+II
\\ &=& |(\varphi^{-1})'(\zeta_2)| |(\psi^{-1})'(\zeta_1)|\frac{2}{|(\psi^{-1})'(\zeta_1)|+|(\varphi^{-1})'(\zeta_2)|}+0
\\ &=& \biggl|(\psi^{-1})'(\zeta_1)\frac{\psi'(\zeta_2)}{\varphi'(\zeta_1)}\biggr| |(\psi^{-1})'(\zeta_1)|\frac{2}{|(\psi^{-1})'(\zeta_1)|+ \biggl|(\psi^{-1})'(\zeta_1)\frac{\psi'(\zeta_2)}{\varphi'(\zeta_1)}\biggr| }
\\ &=&|(\psi^{-1})'(\zeta_1)\psi'(\zeta_2)| \frac{2}{|\varphi'(\zeta_1)|+|\psi'(\zeta_2)|} \end{eqnarray*}}under the condition $\varphi^{-1}(\zeta_2)=\psi^{-1}(\zeta_1)$; otherwise, the limit is zero.
At last, by our hypothesis that $[C_\psi^{\ast}, C_\varphi]$ is compact on $H^2(D)$, we see that{\setlength\arraycolsep{2pt}
\begin{eqnarray*}0 &=& \lim\limits_{r\to 1^-}|| [C_\psi^{\ast}, C_\varphi]k_{r\zeta_2}||\ge \lim\limits_{r\to 1^-}|<[C_\psi^{\ast}, C_\varphi]k_{r\zeta_2}, k_{r\zeta_1}>| \\ &=&
\lim\limits_{r\to 1^-}|<C_\varphi k_{r\zeta_2}, C_\psi k_{r\zeta_1}> -<C_{\psi}^{\ast} k_{r\zeta_2}, C_{\varphi}^{\ast} k_{r\zeta_1}>|.
\end{eqnarray*}}Using Lemma 3.3 again, we know that $$\lim\limits_{r\to 1^-}<C_{\psi}^{\ast} k_{r\zeta_2}, C_{\varphi}^{\ast} k_{r\zeta_1}>=\frac{2}{|\varphi'(\zeta_1)|+|\psi'(\zeta_2)|}.$$ It follows that $$\lim\limits_{r\to 1^-}<C_\varphi k_{r\zeta_2}, C_\psi k_{r\zeta_1}> =\lim\limits_{r\to 1^-}<C_{\psi}^{\ast} k_{r\zeta_2}, C_{\varphi}^{\ast} k_{r\zeta_1}>\ne 0.$$
Combining this with the previous conclusion, we must have $\varphi^{-1}(\zeta_2)=\psi^{-1}(\zeta_1)$. Moreover, the above equality implies $$|(\psi^{-1})'(\zeta_1)\psi'(\zeta_2)|=1.$$ (We do not use this identity below, but it may be of independent interest.) Note that we have already obtained that $\varphi$ and $\psi$ commute from
$\varphi^{-1}(\zeta_2)=\psi^{-1}(\zeta_1)$. Therefore, the compactness of $[C_\psi^{\ast}, C_\varphi]$ implies that $\varphi$ and $\psi$ commute. \ \ $\Box$ \\ \par In the proof of Lemma 3.4, replacing Lemma 3.3 by Lemma 3.2, we can deduce the analogous result for the unit ball. We state it as follows and omit the proof.
\begin{lemma}\label{lem 3.5} If $\varphi, \psi\in \mbox{Aut} (B_N)$ and $[C_\psi^{\ast}, C_\varphi]$ is compact on $H^2(B_N)$ or $A^2_s(B_N)$ ($s>-1$), then $\varphi$ and $\psi$ commute. \end{lemma}
Next, we give a complete proof of our main theorem. That is, we will show that when $\varphi$ and $\psi$ are automorphisms of $B_N$, the compactness of $[C_\psi^{\ast}, C_\varphi]$ implies that both $\varphi$ and $\psi$ are unitary. For the corresponding one-variable results, the technique used on $A^2_s(D)$ (see \cite{MNW13}) is different from that used on $H^2(D)$ (see \cite{CLN12}): on $A^2_s(D)$, the polar decomposition of $C_\varphi$ was used. This approach appears to extend to the unit ball as well, so a similar idea could be used to complete the proof. However, in order to exhibit the special properties of the composition operator $C_\varphi$ when $\varphi$ is an automorphism of $B_N$, we instead use the following interesting lemmas to prove this result.
\begin{lemma}\label{lem 3.6} Suppose that $\varphi$ is an automorphism of $B_N$. Then on the space $\mathcal{H}$,
$$C_{\varphi}^{\ast}=T_f C_{\varphi}^{-1}=T_f C_{\varphi^{-1}}, $$ where $T_f$ is the Toeplitz operator with symbol $$f(z)=\biggl(\frac{1-|\varphi(0)|^2}{|1-<z, \varphi(0)>|^2}\biggr)^t$$ with $t=N$ when $\mathcal{H}=H^2(B_N)$ and $t=N+s+1$ when $\mathcal{H}=A^2_s(B_N)$ ($s>-1$). \end{lemma}
This result on the Hardy space $H^2(B_N)$ and the Bergman space $A^2(B_N)$ was established by Bourdon and MacCluer \cite{BM07}. The extension to the weighted space $A^2_s(B_N)$ can be obtained similarly by using the change of variables formula in Proposition 1.13 of \cite{Zhu}. For the case of the unit disk, please see Theorem 4.2 of \cite{MNW13}.
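As a sanity check of Lemma 3.6 in the simplest setting (our computation; the disk, with $\mathcal{H}=H^2(D)$ and $t=1$), one can test the identity $C_{\varphi}^{\ast}=T_f C_{\varphi^{-1}}$ on the constant function $1=K_0$, writing $a=\varphi(0)$:

```latex
% On the one hand, C_\varphi^\ast K_0 = K_{\varphi(0)} = K_a.
% On the other hand, C_{\varphi^{-1}} 1 = 1 and, on the unit circle,
f(z)=\frac{1-|a|^{2}}{|1-z\bar a|^{2}}=(1-|a|^{2})\,K_a(z)\,\overline{K_a(z)},
\qquad \overline{K_a(z)}=\sum_{n\ge 0}a^{n}\bar z^{\,n}.
% Since the Szego projection P satisfies P(\bar z^{\,n}K_a)=\bar a^{\,n}K_a, we get
T_f 1=P(f)=(1-|a|^{2})\sum_{n\ge 0}a^{n}\bar a^{\,n}K_a
     =(1-|a|^{2})\cdot\frac{K_a}{1-|a|^{2}}=K_a,
% so both sides of the identity in Lemma 3.6 agree on K_0.
```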
\begin{lemma}\label{lem 3.7} Let $\varphi$ be an automorphism of $B_N$. If $f$ is continuous on $\partial B_N$ or $\overline{B_N}$, then
$$C_{\varphi}T_f- T_{f\circ \varphi} C_{\varphi} $$ is compact when acting on $H^2(B_N)$ or $A^2_s(B_N)$ ($s>-1$). \end{lemma}
\proof Let $\varphi$ be an automorphism of $B_N$ and $a=\varphi^{-1}(0)$. By Theorem 2.2.5 of \cite{Rudin}, the identity
$$1-<\varphi(z), \zeta>=1-<\varphi(z), \varphi\circ\varphi^{-1}(\zeta)>=\frac{(1-|a|^2)(1-<z, \varphi^{-1}(\zeta)>)}{(1-<z, a>)(1-<a, \varphi^{-1}(\zeta)>)}$$ holds for all $z\in \overline{B_N}$ and $\zeta\in \partial B_N$. Now, using this identity, for any $g\in H^2(B_N)$, we get that{\setlength\arraycolsep{2pt} \begin{eqnarray*}&& (C_{\varphi}T_f \, g)(z)=(T_f g)(\varphi(z))=\int_{\partial B_N} f(\zeta) g(\zeta)\frac{1}{(1-<\varphi(z), \zeta>)^N}d\sigma(\zeta)
\\ &=& \int_{\partial B_N} f(\zeta) g (\zeta)\frac{(1-<z, a>)^N(1-<a, \varphi^{-1}(\zeta)>)^N}{(1-|a|^2)^N(1-<z, \varphi^{-1}(\zeta)>)^N}d\sigma(\zeta)
\\ &=& \int_{\partial B_N} f\circ\varphi(\eta) g\circ\varphi(\eta)\frac{(1-<z, a>)^N(1-<a, \eta>)^N}{(1-|a|^2)^N(1-<z, \eta>)^N}\cdot\frac{(1-|a|^2)^N}{|1-<\eta, a>|^{2N}}d\sigma(\eta) \\ &=& (1-<z, a>)^N \int_{\partial B_N} f\circ\varphi(\eta) g\circ\varphi(\eta)\frac{1}{(1-<\eta, a>)^N}\cdot \frac{1}{(1-<z, \eta>)^N}d\sigma(\eta) \\ &=& (1-<z, a>)^N (T_{f\circ\varphi\cdot K_a}\, g\circ\varphi)(z) \\ &=& (T_{1/K_a}T_{f\circ\varphi} T_{ K_a} C_\varphi\, g)(z), \end{eqnarray*}}where we have used the change of variables formula (see Corollary 4.4 of \cite{Zhu}) and the fact that the kernel function $$K_a(z)=\frac{1}{(1-<z, a>)^N}$$ and the function $1/K_a$ are analytic on $\overline{B_N}$. Hence, $$C_{\varphi}T_f =T_{1/K_a}T_{f\circ\varphi} T_{ K_a} C_\varphi.$$
Since $f$ is continuous on $\partial B_N$, by the semi-multiplicative property for Toeplitz operators mod $\mathcal{K}$ in Section 2, we know that $$T_{1/K_a}T_{f\circ\varphi}= T_{1/K_a\cdot f\circ\varphi}+L=T_{f\circ\varphi \cdot 1/K_a}+L= T_{f\circ\varphi} T_{1/K_a}+L,$$ where $L$ is a compact operator on $H^2(B_N)$. Therefore,
$$C_{\varphi}T_f =T_{1/K_a}T_{f\circ\varphi} T_{ K_a} C_\varphi=(T_{f\circ\varphi} T_{1/K_a}+L)T_{ K_a} C_\varphi=T_{f\circ\varphi} C_\varphi+L', $$ where $L'$ is compact. Applying a similar technique, we can obtain the analogous result for the weighted Bergman space, and so we complete the proof. \ \ $\Box$ \\ \par Based on these lemmas, we will use another technique to prove Theorem 3.1, which generalizes Theorem 5.2 of \cite{CLN12} and Theorem 5.1 of \cite{MNW13} to the unit ball. \\ \\ {\bf Proof of Theorem 3.1.} We only need to prove the ``only if'' part, and we only give a proof for the Hardy space case.
Assume that $[C_\psi^{\ast}, C_\varphi]$ is compact. Since $\psi$ is an automorphism of $B_N$, by Lemma 3.6, we have $$C_{\psi}^{\ast}=T_f C_{\psi^{-1}} $$ with $$f(z)=\biggl(\frac{1-|\psi(0)|^2}{|1-<z, \psi(0)>|^2}\biggr)^N.$$ Thus, $$[C_\psi^{\ast}, C_\varphi]=C_\psi^{\ast} C_\varphi- C_\varphi C_\psi^{\ast}=T_f C_{\psi^{-1}}C_\varphi- C_\varphi T_f C_{\psi^{-1}}.$$ Now, by Lemma 3.5, the compactness of $[C_\psi^{\ast}, C_\varphi]$ implies that $\varphi$ and $\psi$ commute, i.e. $$\varphi\circ\psi=\psi\circ\varphi.$$ This gives $\varphi=\psi\circ\varphi\circ\psi^{-1}$ and{\setlength\arraycolsep{2pt} \begin{eqnarray*}[C_\psi^{\ast}, C_\varphi]C_\psi &=&(T_f C_{\psi^{-1}}C_\varphi- C_\varphi T_f C_{\psi^{-1}})C_\psi \\ &=& T_f C_{\psi^{-1}}C_\varphi C_\psi- C_\varphi T_f C_{\psi^{-1}} C_\psi \\ &=& T_f C_{\psi\circ\varphi\circ\psi^{-1}}- C_\varphi T_f \\ &=& T_f C_{\varphi}- C_\varphi T_f. \end{eqnarray*}}
Note that $f$ is continuous on $\overline{B_N}$ and $\varphi$ is an automorphism of $B_N$. It follows from Lemma 3.7 that $$C_{\varphi}T_f\equiv T_{f\circ \varphi} C_{\varphi} \quad (\mbox{mod} \ \mathcal{K}).$$ Therefore, {\setlength\arraycolsep{2pt} \begin{eqnarray*}[C_\psi^{\ast}, C_\varphi]C_\psi &=& T_f C_{\varphi}- C_\varphi T_f \\ &\equiv & T_f C_{\varphi}- T_{f\circ \varphi} C_{\varphi}\quad (\mbox{mod} \ \mathcal{K}) \\ &= & T_{f - f\circ \varphi} C_{\varphi}\quad (\mbox{mod}\ \mathcal{K}). \end{eqnarray*}}Finally, since $C_\psi$ and $C_\varphi$ are invertible, we see that $[C_\psi^{\ast}, C_\varphi]$ is compact if and only if $[C_\psi^{\ast}, C_\varphi]C_\psi$ is compact, which is equivalent to the compactness of $T_{f - f\circ \varphi}$. Applying Lemma 2 of \cite{Cob73}, we see that $f - f\circ \varphi\equiv 0$ on $\overline{B_N}$. Since $\varphi$ is not the identity, the explicit form of $f$ forces $f$ to be constant. Thus, we get $\psi(0)=0$.
As we know, an automorphism $\phi$ of $B_N$ is a unitary transformation of $\mathbb{C}^N$ if and only if $\phi(0)=0$ (see Lemma 1.1 of \cite{Zhu}). Combining this with the above discussion, we see that $\psi$ must be unitary. On the other hand, the compactness of $[C_\psi^{\ast}, C_\varphi]=C_\psi^{\ast} C_\varphi- C_\varphi C_\psi^{\ast}$ implies that $$(C_\psi^{\ast} C_\varphi- C_\varphi C_\psi^{\ast})^\ast=C_\varphi^{\ast} C_\psi- C_\psi C_\varphi^{\ast}=[C_\varphi^{\ast}, C_\psi]$$ is compact, so similar arguments show that $\varphi$ is also unitary. Consequently, if $[C_\psi^{\ast}, C_\varphi]$ is compact, then $\varphi$ and $\psi$ are both unitary and they commute. \ \ $\Box$
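For completeness, the omitted ``if'' direction of Theorem 3.1 can be sketched as follows (our sketch): for commuting unitary maps, the commutator in fact vanishes.

```latex
% If U is unitary on C^N, then C_U is a unitary operator on H^2(B_N) and A^2_s(B_N)
% (the measures involved are invariant under unitary maps), so C_U^\ast = C_U^{-1} = C_{U^{-1}}.
% Hence, for commuting unitary maps \varphi and \psi,
[C_{\psi}^{\ast}, C_{\varphi}]
 = C_{\psi^{-1}}C_{\varphi}-C_{\varphi}C_{\psi^{-1}}
 = C_{\varphi\circ\psi^{-1}}-C_{\psi^{-1}\circ\varphi}=0,
% since \varphi\circ\psi=\psi\circ\varphi implies \varphi\circ\psi^{-1}=\psi^{-1}\circ\varphi,
% and the zero operator is trivially compact.
```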
\section {The commutator on the Dirichlet space}
In this section, we try to characterize the compactness of $[C_\psi^{\ast}, C_\varphi]$ on the Dirichlet space $\mathcal{D}(B_N)$, where $\varphi$ and $\psi$ are linear fractional self-maps of $B_N$. The following lemma is about compact difference of linear fractional composition operators on $\mathcal{D}(B_N)$, which will prove useful for our result.
\begin{lemma}\label{lem 4.1} Suppose that $\varphi$ and $\psi$ are linear fractional self-maps of $B_N$. Then $C_\varphi-C_\psi$ is compact on $\mathcal{D}(B_N)$ if and only if $\varphi=\psi$ or both $C_\varphi$ and $C_\psi$ are compact. \end{lemma}
\proof A special case of this result was pointed out by Pons \cite{Po11} without proof. For completeness, we give a simple proof.
In \cite{Po11}, for real $s$, the weighted Dirichlet space $\mathcal{D}_s(B_N)$ is defined by
$$\mathcal{D}_s(B_N)=\{f(z)=\sum\limits_\alpha c_\alpha z^\alpha\ \mbox{analytic in}\ B_N: \sum\limits_\alpha(|\alpha|+1)^{1-s}|c_\alpha|^2\omega_\alpha<\infty\},$$
where $$\omega_\alpha=||z^\alpha||^2=\frac{(N-1)!\alpha!}{(N-1+|\alpha|)!}.$$
On the other hand, let $\beta(k)=(k+1)^t$ for real $t$. If $f(z)=\sum\limits_\alpha c_\alpha z^\alpha=\sum\limits_{k=0}^\infty f_k(z)$ is analytic in $B_N$, then $f$ belongs to the weighted Hardy space $H^2(\beta, B_N)$ if and only if $$\sum\limits_{k=0}^\infty||f_k||^2 \beta(k)^2=\sum\limits_\alpha(|\alpha|+1)^{2t}|c_\alpha|^2\omega_\alpha<\infty.$$ Thus, the weighted Dirichlet space $\mathcal{D}_s(B_N)$ is in fact the weighted Hardy space $H^2(\beta, B_N)$ with the weight $\beta(k)=(k+1)^{(1-s)/2}$. Using Proposition 2.4 of \cite{JC14}, we see that all linear fractional self-maps induce bounded composition operators on $\mathcal{D}_s(B_N)$.
Suppose $s_1<s<s_2<\infty$. The complex interpolation theorem for the weighted Dirichlet spaces tells us that $$[\mathcal{D}_{s_1}, \mathcal{D}_{s_2}]_\theta=\mathcal{D}_s$$ with $s=(1-\theta)s_1+\theta s_2$ for $\theta\in (0, 1)$ (see Proposition 1 of \cite{Po11}). Choosing $s_1=-n$ and $s_2=2$, then for linear fractional self-maps $\varphi$ and $\psi$ of $B_N$, the operator $C_\varphi-C_\psi$ is bounded on $\mathcal{D}_{-n}(B_N)$ and $\mathcal{D}_2(B_N)=A^2(B_N)$. Now, suppose $C_\varphi-C_\psi$ is compact on $\mathcal{D}(B_N)=\mathcal{D}_{1-n}(B_N)$. Since $1-n=(1-\theta)(-n)+\theta\cdot 2$ with $\theta=\frac{1}{n+2}$, the compactness theorem for interpolated operators (see Theorem 2.1 in \cite{Cwi92}) shows that $C_\varphi-C_\psi$ is also compact on $H^2(B_N)=\mathcal{D}_1(B_N)$, which corresponds to $1=(1-\theta)(-n)+\theta\cdot 2$ for $\theta=\frac{n+1}{n+2}$. Here, all the above spaces are identified up to equivalent norms. Applying Theorem 2 of \cite{HMW11} or Theorem 3.1 of \cite{JO11}, we get that $\varphi=\psi$ or both $C_\varphi$ and $C_\psi$ are compact. This proves one direction; the other direction is obvious, and the proof is complete. \ \ $\Box$ \\ \par When $\varphi$ is a linear fractional self-map of $B_N$, the adjoint of the composition operator $C_\varphi$ on $\mathcal{D}(B_N)$ is mainly determined by another composition operator. Combining this adjoint property with Lemma 4.1, we give the following condition for $[C_\psi^{\ast}, C_\varphi]$ to be compact on $\mathcal{D}(B_N)$.
\begin{theorem} \label{the 4.2} Let $\varphi$ and $\psi$ be linear fractional self-maps of $B_N$. If $[C_\psi^{\ast}, C_\varphi]\ne 0$, then $[C_\psi^{\ast}, C_\varphi]$ is non-trivially compact on $\mathcal{D}(B_N)$ if and only if $||\psi||_\infty=||\varphi||_\infty=1$ and $\varphi\circ\sigma=\sigma\circ\varphi$, where $\sigma$ is the Krein adjoint of $\psi$. \end{theorem}
\proof For $w\in B_N$, let $K_w$ denote the reproducing kernel of $\mathcal{D}(B_N)$. Theorem C gives $$C_{\psi}^{\ast}f=f(0)K_{\psi(0)}+C_\sigma f-f(\sigma(0))$$ for any $f\in \mathcal{D}(B_N)$, where $\sigma$ is the Krein adjoint of $\psi$. Thus, {\setlength\arraycolsep{2pt} \begin{eqnarray*}[C_\psi^{\ast}, C_\varphi]f &=& (C_\psi^{\ast} C_\varphi- C_\varphi C_\psi^{\ast}) f=C_\psi^{\ast} C_\varphi f- C_\varphi C_\psi^{\ast} f \\ &=& f\circ\varphi(0) K_{\psi(0)}+C_\sigma C_\varphi f-f\circ\varphi\circ\sigma(0) \\ && - [f(0)K_{\psi(0)}\circ\varphi+ C_\varphi C_\sigma f-f(\sigma(0))] \\ &=& (C_\sigma C_\varphi-C_\varphi C_\sigma )f+f\circ\varphi(0) K_{\psi(0)}+f(\sigma(0)) \\ && -f\circ\varphi\circ\sigma(0)-f(0)K_{\psi(0)}\circ\varphi. \end{eqnarray*}}This implies that $[C_\psi^{\ast}, C_\varphi]$ is compact on $\mathcal{D}(B_N)$ if and only if $$[C_\varphi, C_\sigma]=C_\varphi C_\sigma-C_\sigma C_\varphi=C_{\sigma\circ\varphi}-C_{\varphi\circ\sigma}$$ is compact.
It is easy to see that $C_\psi^{\ast} C_\varphi$ is compact if and only if $C_\sigma C_\varphi =C_{\varphi\circ\sigma}$ is compact, and the compactness of $C_\varphi C_\psi^{\ast}$ is equivalent to the compactness of $C_\varphi C_\sigma=C_{\sigma\circ\varphi}$. Since $[C_\psi^{\ast}, C_\varphi]\ne 0$, we get that $[C_\psi^{\ast}, C_\varphi]$ is non-trivially compact on $\mathcal{D}(B_N)$ if and only if
$C_{\sigma\circ\varphi}-C_{\varphi\circ\sigma}$ is also non-trivially compact. By Lemma 4.1, this is equivalent to $\varphi\circ\sigma=\sigma\circ\varphi$ and $||\varphi\circ\sigma||_\infty=||\sigma\circ\varphi||_\infty=1$, i.e. $||\psi||_\infty=||\varphi||_\infty=1$. This completes the proof. \ \ $\Box$ \\ \par Note that if $\psi$ is an automorphism of $B_N$, then $\sigma=\psi^{-1}$. Thus, $\varphi\circ\sigma=\sigma\circ\varphi$ gives that $\varphi\circ\psi^{-1}=\psi^{-1}\circ\varphi$, which is the same as $\varphi\circ\psi=\psi\circ\varphi$. Immediately, as a corollary of Theorem 4.2, we obtain the following result.
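The equivalence invoked here is immediate; our one-line check:

```latex
\varphi\circ\psi^{-1}=\psi^{-1}\circ\varphi
\;\Longleftrightarrow\;
\psi\circ\varphi\circ\psi^{-1}=\varphi
\;\Longleftrightarrow\;
\psi\circ\varphi=\varphi\circ\psi,
% composing with \psi on the left and then on the right.
```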
\begin{theorem} \label{the 4.3} Let $\varphi$ and $\psi$ be automorphisms of $B_N$. Then $[C_\psi^{\ast}, C_\varphi]$ is compact on $\mathcal{D}(B_N)$ if and only if $\varphi$ and $\psi$ commute. \end{theorem}
Using Theorem 4.2 and arguments similar to those in the proof of Theorem 3.1 in \cite{MNW13}, we also obtain on the Dirichlet space $\mathcal{D}(D)$ the following analogue of the results on the Hardy space $H^2(D)$ and the weighted Bergman space $A^2_s(D)$ ($s>-1$).
\begin{theorem} \label{the 4.4} Suppose that $\varphi$ and $\psi$ are linear fractional self-maps of $D$, one of which is not an automorphism. The commutator $[C_\psi^{\ast}, C_\varphi]$ is non-trivially compact on $\mathcal{D}(D)$ if and only if one of the following is true. \\ (i)\ $\varphi$ and $\psi$ are both parabolic with the same boundary fixed point, \\ (ii)\ $\varphi$ and $\psi$ are hyperbolic such that the fixed points of $\varphi$ and $\psi$ are $(\zeta, a)$ and $(\zeta, 1/\overline{a})$ respectively with $\zeta\in \partial D$. \end{theorem}
\noindent {\bf Remark.} We first consider the following example. Let {\setlength\arraycolsep{2pt} \begin{eqnarray*} \varphi(z)=\left( \begin{array}{cc}
1 & 0 \\ 0 & 1/2 \end{array}\right)\left( \begin{array}{c} z_1 \\ z_2 \end{array}\right)=(z_1, z_2/2), \qquad z=(z_1, z_2)\in B_2 \end{eqnarray*}}and{\setlength\arraycolsep{2pt} \begin{eqnarray*} \psi(z)=\left( \begin{array}{cc}
1 & 0 \\ 0 & 1/3 \end{array}\right)\left( \begin{array}{c} z_1 \\ z_2 \end{array}\right)=(z_1, z_2/3), \qquad z=(z_1, z_2)\in B_2. \end{eqnarray*}}Thus, {\setlength\arraycolsep{2pt} \begin{eqnarray*} \sigma(z)=\left( \begin{array}{cc}
1 & 0 \\ 0 & 1/3 \end{array}\right)^\ast\left( \begin{array}{c} z_1 \\ z_2 \end{array}\right)=\left( \begin{array}{cc}
1 & 0 \\ 0 & 1/3 \end{array}\right)\left( \begin{array}{c} z_1 \\ z_2 \end{array}\right) \end{eqnarray*}}and so{\setlength\arraycolsep{2pt} \begin{eqnarray*} \varphi\circ\sigma(z)=\left( \begin{array}{cc}
1 & 0 \\ 0 & 1/6 \end{array}\right)\left( \begin{array}{c} z_1 \\ z_2 \end{array}\right)=\sigma\circ\varphi(z). \end{eqnarray*}}Moreover, it is easy to check that{\setlength\arraycolsep{2pt} \begin{eqnarray*}[C_\psi^{\ast}, C_\varphi]f &=& (C_\varphi C_\sigma-C_\sigma C_\varphi )f+f\circ\varphi(0) K_{\psi(0)}+f(\sigma(0)) \\ && -f\circ\varphi\circ\sigma(0)-f(0)K_{\psi(0)}\circ\varphi \\ &=& 0
\end{eqnarray*}}for any $f\in \mathcal{D}(B_2)$. As a consequence, when $\varphi\circ\sigma=\sigma\circ\varphi$ and $||\psi||_\infty=||\varphi||_\infty=1$, it may happen that $[C_\psi^{\ast}, C_\varphi]=0$. Hence, in Theorem 4.2, we need to assume that $[C_\psi^{\ast}, C_\varphi]\ne 0$. However, from the proof of Theorem 3.1 in \cite{MNW13}, this condition is not necessary for the unit disk. \\ \\ \\ \centerline{ ACKNOWLEDGEMENTS } \\ \par The College at Brockport, State University of New York, provided a good environment for working on this paper during the author's visit, and the Shanghai Municipal Education Commission provided financial support during that visit. The author would like to express her gratitude to both.
\end{document} |
\begin{document}
\title{\LARGE\bf An alternative representation of the Vi\'{e}te\text{'}s formula for pi by Chebyshev polynomials of the first kind}
\author{ \normalsize\bf S. M. Abrarov\footnote{\scriptsize{Dept. Earth and Space Science and Engineering, York University, Toronto, Canada, M3J 1P3.}}\, and B. M. Quine$^{*}$\footnote{\scriptsize{Dept. Physics and Astronomy, York University, Toronto, Canada, M3J 1P3.}}}
\date{September 19, 2016} \maketitle
\begin{abstract} Several reformulations of Vi\'{e}te\text{'}s formula for pi have been reported in the modern literature. In this paper we show another analog of Vi\'{e}te\text{'}s formula for pi, expressed in terms of Chebyshev polynomials of the first kind.
\\ \noindent {\bf Keywords:} Chebyshev polynomials, sinc function, cosine infinite product, Vi\'{e}te\text{'}s formula, constant pi
\end{abstract}
\section{Introduction} The sinc function, also known as the cardinal sine function, is defined as \cite{Gearhart1990, Kac1959} \[ \text{sinc}\left( t \right)=\left\{ \begin{aligned} \frac{\sin \left( t \right)}{t}, &\qquad\qquad t \ne 0 \\ 1, &\qquad\qquad t = 0. \\ \end{aligned} \right. \] The sinc function finds many applications in sampling, spectral methods, differential equations and numerical integration \cite{Gearhart1990, Stenger2011, Rybicki1989, Lether1998, Quine2013, Abrarov2015, Ortiz-Gracia2016}.
More than four centuries ago the French lawyer and amateur mathematician Fran\c{c}ois Vi\'{e}te found a fabulous relation showing how the sinc function can be represented elegantly as an infinite product of cosines \cite{Gearhart1990, Kac1959} \begin{equation}\label{eq_1} \text{sinc}\left( t \right)=\prod\limits_{m=1}^{\infty }{\cos \left( \frac{t}{{{2}^{m}}} \right)}. \end{equation} Since $$ \text{sinc}\left( \frac{\pi }{2} \right)=\frac{2}{\pi }, $$ we may substitute the argument $t=\pi /2$ into the right side of equation \eqref{eq_1}. Thus, repeatedly using for each $m$ the double-angle identity for the cosine $$ \cos \left( 2\theta_m \right)=2{{\cos }^{2}}\left( \theta_m \right)-1 $$ or $$ \cos \left( \theta_m \right)=2{{\cos }^{2}}\left( \theta_{m+1} \right)-1 \Leftrightarrow \cos \left( {{\theta }_{m+1}} \right)=\sqrt{\frac{\cos \left( {{\theta }_{m}} \right)+1}{2}}, $$ where $$ \theta_m =\frac{\pi /2}{{{2}^{m}}}, \quad \theta_{m+1} = \frac{\theta_{m}}{2}, $$ and taking into account that $$ \cos \left( \theta_{1} \right) = \cos \left( \frac{\pi /2}{{{2}^{1}}} \right)=\frac{\sqrt{2}}{2}, $$ we can find the following sequence $$ \cos \left( \frac{\pi /2}{{{2}^{2}}} \right)=\frac{\sqrt{2+\sqrt{2}}}{2}, $$ $$ \cos \left( \frac{\pi /2}{{{2}^{3}}} \right)=\frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}, $$ $$ \vdots $$ \begin{equation}\label{eq_2} \cos \left( \frac{\pi /2}{{{2}^{m}}} \right)=\frac{\overbrace{\sqrt{2+\sqrt{2+\sqrt{2+\cdots +\sqrt{2}}}}}^{m\,\,\text{square}\,\,\text{roots}}}{2}. \end{equation} Therefore, from equations \eqref{eq_1} and \eqref{eq_2} we obtain Vi\'{e}te\text{'}s infinite product formula for the constant pi (in radicals consisting of square roots and twos only) \cite{Osler1999, Servi2003, Levin2005, Levin2006, Kreminski2008} \[ \begin{aligned} \text{sinc}\left( \frac{\pi}{2} \right) &=\cos \left( \frac{\pi /2}{{{2}^{1}}} \right)\cos \left( \frac{\pi /2}{{{2}^{2}}} \right)\cos \left( \frac{\pi /2}{{{2}^{3}}} \right)\cdots \\
& =\frac{\sqrt{2}}{2}\frac{\sqrt{2+\sqrt{2}}}{2}\frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}\cdots = \frac{2}{\pi } \end{aligned} \] that can be conveniently rewritten as \begin{equation}\label{eq_3} \frac{2}{\pi }=\underset{M\to \infty }{\mathop{\lim }}\,\prod\limits_{m=1}^{M}{\frac{{{a}_{m}}}{2}}, \end{equation} where ${{a}_{m}}=\sqrt{2+{{a}_{m-1}}}$ and ${{a}_{1}}=\sqrt{2}$.
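As a quick numerical sanity check of the recurrence ${a}_{m}=\sqrt{2+{a}_{m-1}}$, the following Python sketch iterates the nested radicals directly (the depth $M=20$ is an illustrative choice) and compares the resulting product with $2/\pi$:

```python
import math

def viete_product(M):
    """Approximate 2/pi by the Viete product prod_{m=1}^{M} a_m / 2,
    where a_1 = sqrt(2) and a_m = sqrt(2 + a_{m-1})."""
    a, prod = 0.0, 1.0
    for _ in range(M):
        a = math.sqrt(2.0 + a)   # first pass gives a_1 = sqrt(2)
        prod *= a / 2.0
    return prod

print(abs(viete_product(20) - 2.0 / math.pi))  # error is already tiny
```

The error shrinks rapidly with $M$, since each additional factor $\cos(\pi/2^{m+1})$ differs from unity only at order $4^{-m}$.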
Several reformulations of Vi\'{e}te\text{'}s formula \eqref{eq_3} for pi have been reported in the modern literature \cite{Osler1999, Servi2003, Levin2005, Levin2006, Kreminski2008}. Notably, Osler has shown, by a \text{``}double product\text{''} generalization, a direct relationship between the classical Vi\'{e}te\text{'}s and Wallis\text{'}s infinite products for pi (see equation (3) in \cite{Osler1999}). In this work we derive another equivalent of Vi\'{e}te\text{'}s formula for pi, expressed in terms of Chebyshev polynomials of the first kind.
\section{Derivation}
The Chebyshev polynomials ${{T}_{m}}\left( x \right)$ of the first kind can be defined by the following recurrence relation \cite{Press1992, Mathews1999, Zwillinger2012} $$ {{T}_{m}}\left( x \right)=2x{{T}_{m-1}}\left( x \right)-{{T}_{m-2}}\left( x \right), $$ where ${{T}_{1}}\left( x \right)=x$ and ${{T}_{0}}\left( x \right)=1$. It should be noted that the recurrence procedure is not required in computation since these polynomials can also be determined directly by using, for example, a simple identity \[ T_{m}\left( x \right)=x^{m}\sum\limits_{n=0}^{\left\lfloor m/2 \right\rfloor } \binom{m}{2n}{{\left( {1 - {x}^{-2}} \right)}^{n}}. \]
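Both definitions are straightforward to check numerically; the following Python sketch (with illustrative values of $m$ and $x$ only) evaluates $T_m(x)$ by the recurrence and by the direct identity above:

```python
import math

def cheb_T(m, x):
    """Chebyshev polynomial of the first kind via the recurrence
    T_m(x) = 2x T_{m-1}(x) - T_{m-2}(x), with T_0 = 1 and T_1 = x."""
    t_prev, t = 1.0, x
    if m == 0:
        return t_prev
    for _ in range(m - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

def cheb_T_direct(m, x):
    """Direct evaluation T_m(x) = x^m sum_n C(m, 2n) (1 - x^{-2})^n,
    valid for x != 0."""
    return x**m * sum(math.comb(m, 2 * n) * (1.0 - x**-2)**n
                      for n in range(m // 2 + 1))

# The two definitions agree, e.g. for T_5(0.7):
print(cheb_T(5, 0.7), cheb_T_direct(5, 0.7))
```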
Due to a remarkable property of the Chebyshev polynomials $$ \cos \left( m \alpha \right) = {{T}_{m}}\left( \cos \left( \alpha \right) \right), $$ making the change of variable $$ \alpha = \frac{t}{{{2}^{M}}} $$ (with $m$ replaced by $2m-1$) results in \begin{equation}\label{eq_4} \cos \left( \frac{2m-1}{{{2}^{M}}}t \right)={{T}_{2m-1}}\left( \cos \left( \frac{t}{{{2}^{M}}} \right) \right). \end{equation} Consequently, substituting equation \eqref{eq_4} into the following product-to-sum identity \cite{Quine2013, Abrarov2015, Ortiz-Gracia2016} \begin{equation}\label{eq_5} \prod\limits_{m=1}^{M}{\cos \left( \frac{t}{{{2}^{m}}} \right)}=\frac{1}{{{2}^{M-1}}}\sum\limits_{m=1}^{{{2}^{M-1}}}{\cos \left( \frac{2m-1}{{{2}^{M}}}t \right)} \end{equation} yields \begin{equation}\label{eq_6} \prod\limits_{m=1}^{M}{\cos \left( \frac{t}{{{2}^{m}}} \right)}=\frac{1}{{{2}^{M-1}}}\sum\limits_{m=1}^{{{2}^{M-1}}}{{{T}_{2m-1}}\left( \cos \left( \frac{t}{{{2}^{M}}} \right) \right)}. \end{equation} It can be shown that the right side of equation \eqref{eq_6} can be further simplified and represented by a single Chebyshev polynomial of the second kind (see {\it{Appendix A}}).
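The product-to-sum identity \eqref{eq_5} is exact for every finite $M$, which is easy to confirm numerically. A minimal Python check, using $T_{2m-1}(\cos\theta)=\cos((2m-1)\theta)$ for the right-hand side (the values of $t$ and $M$ are illustrative):

```python
import math

def lhs(t, M):
    """Left side of equation (5): the finite product of cosines."""
    p = 1.0
    for m in range(1, M + 1):
        p *= math.cos(t / 2**m)
    return p

def rhs(t, M):
    """Right side of equation (5), using T_{2m-1}(cos th) = cos((2m-1) th)."""
    theta = t / 2**M
    return sum(math.cos((2 * m - 1) * theta)
               for m in range(1, 2**(M - 1) + 1)) / 2**(M - 1)

t, M = 1.3, 6
print(abs(lhs(t, M) - rhs(t, M)))  # agrees to machine precision
```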
Comparing equations \eqref{eq_1} and \eqref{eq_5} we can see that the infinite product of cosines for the sinc function can be transformed into an infinite sum of cosines \cite{Abrarov2015} \begin{equation}\label{eq_7} \text{sinc}\left( t \right)=\underset{M\to \infty }{\mathop{\lim }}\,\frac{1}{{{2}^{M-1}}}\sum\limits_{m=1}^{{{2}^{M-1}}}{\cos \left( \frac{2m-1}{{{2}^{M}}}t \right)}. \end{equation} Since the right side of equation \eqref{eq_5} represents a truncation of the limit \eqref{eq_7} at the finite upper summation limit ${{2}^{M-1}}$, it is simply the incomplete cosine expansion of the sinc function. Indeed, if the condition ${{2}^{M-1}}\gg 1$ is satisfied, then the incomplete cosine expansion quite accurately approximates the original sinc function as given by \cite{Abrarov2015, Ortiz-Gracia2016} $$ \frac{1}{{{2}^{M-1}}}\sum\limits_{m=1}^{{{2}^{M-1}}}{\cos \left( \frac{2m-1}{{{2}^{M}}}t \right)}\approx \text{sinc}\left( t \right). $$
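A short numerical sketch shows how small the approximation error of the incomplete cosine expansion is over a moderate interval (the depth $M=8$ and the sampling grid are illustrative choices):

```python
import math

def sinc(t):
    """Cardinal sine function."""
    return math.sin(t) / t if t != 0.0 else 1.0

def incomplete_cosine(t, M):
    """Truncated expansion (1/2^{M-1}) sum_m cos((2m-1) t / 2^M)."""
    return sum(math.cos((2 * m - 1) * t / 2**M)
               for m in range(1, 2**(M - 1) + 1)) / 2**(M - 1)

M = 8
err = max(abs(incomplete_cosine(t, M) - sinc(t))
          for t in [0.1 * k for k in range(-100, 101)])
print(err)  # small since 2^{M-1} = 128 >> 1
```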
It is interesting to note that comparing equations \eqref{eq_1} and \eqref{eq_6} we can also write \footnotesize $$ \text{sinc}\left( t \right)=\underset{M\to \infty }{\mathop{\lim }}\,\frac{1}{{{2}^{M-1}}}\sum\limits_{m=1}^{{{2}^{M-1}}}{{{T}_{2m-1}}\left( \cos \left( \frac{t}{{{2}^{M}}} \right) \right)}=\underset{M\to \infty }{\mathop{\lim }}\,\frac{1}{{{2}^{M-1}}}\sum\limits_{m=1}^{{{2}^{M-1}}}{{{T}_{2m-1}}\left( \cos \left( \frac{t/2}{{{2}^{M-1}}} \right) \right)} $$ \normalsize or $$ \text{sinc}\left( t \right)=\underset{L\to \infty }{\mathop{\lim }}\,\frac{1}{L}\sum\limits_{\ell =1}^{L}{{{T}_{2\ell -1}}\left( \cos \left( \frac{t}{2L} \right) \right)}, $$ where $L={{2}^{M-1}}$.
At $t=\pi /2$ the equation \eqref{eq_6} provides \begin{equation}\label{eq_8} \prod\limits_{m=1}^{M}{\cos \left( \frac{\pi /2}{{{2}^{m}}} \right)}=\frac{1}{{{2}^{M-1}}}\sum\limits_{m=1}^{{{2}^{M-1}}}{{{T}_{2m-1}}\left( \cos \left( \frac{\pi /2}{{{2}^{M}}} \right) \right)}. \end{equation} Applying equation \eqref{eq_2} again for each $m$ repeatedly, the product-to-sum identity \eqref{eq_8} can be rearranged in form \footnotesize \[ \begin{aligned} \frac{\sqrt{2}}{2}\frac{\sqrt{2+\sqrt{2}}}{2}\frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2} & \ldots \frac{\overbrace{\sqrt{2+\sqrt{2+\sqrt{2+\cdots +\sqrt{2}}}}}^{M\,\,\text{square}\,\,\text{roots}}}{2}\\ & = \frac{1}{{{2}^{M-1}}}\sum\limits_{m=1}^{{{2}^{M-1}}}{{{T}_{2m-1}}\left( \frac{\overbrace{\sqrt{2+\sqrt{2+\sqrt{2+\cdots +\sqrt{2}}}}}^{M\,\,\text{square}\,\,\text{roots}}}{2} \right)} \end{aligned} \] \normalsize or \begin{equation}\label{eq_9} \prod\limits_{m=1}^{M}{\frac{{{a}_{m}}}{2}}=\frac{1}{{{2}^{M-1}}}\sum\limits_{m=1}^{{{2}^{M-1}}}{{{T}_{2m-1}}\left( \frac{{a}_{M}}{2} \right)}. \end{equation}
As the integer $M$ increases, the product of cosines on the left side of equation \eqref{eq_9} approaches the value $2/\pi$ more closely. Consequently, the right side of equation \eqref{eq_9} also tends to $2/\pi $ as $M$ increases, and this leads to $$ \frac{2}{\pi }=\underset{M\to \infty }{\mathop{\lim }}\,\frac{1}{{{2}^{M-1}}}\sum\limits_{m=1}^{{{2}^{M-1}}}{{{T}_{2m-1}}\left( \frac{\overbrace{\sqrt{2+\sqrt{2+\sqrt{2+\cdots +\sqrt{2}}}}}^{M\,\,\text{square}\,\,\text{roots}}}{2} \right)} $$ or \begin{equation}\label{eq_10} \frac{2}{\pi }=\underset{M\to \infty }{\mathop{\lim }}\,\frac{1}{{{2}^{M-1}}}\sum\limits_{m=1}^{{{2}^{M-1}}}{{{T}_{2m-1}}\left( \frac{{a}_{M}}{2} \right)}. \end{equation}
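Equation \eqref{eq_10} can be checked numerically; in the sketch below the polynomials are evaluated stably as $T_k(x)=\cos(k\arccos x)$ for $|x|\le 1$, and the depth $M=12$ is an illustrative choice:

```python
import math

def viete_chebyshev(M):
    """Right side of equation (10): (1/2^{M-1}) times the sum of
    odd-index Chebyshev polynomials evaluated at a_M / 2, where a_M
    is the nested radical with M square roots."""
    a = 0.0
    for _ in range(M):
        a = math.sqrt(2.0 + a)          # a_M after M iterations
    theta = math.acos(a / 2.0)          # T_k(a_M/2) = cos(k * theta)
    return sum(math.cos((2 * m - 1) * theta)
               for m in range(1, 2**(M - 1) + 1)) / 2**(M - 1)

print(abs(viete_chebyshev(12) - 2.0 / math.pi))  # very small
```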
Equation \eqref{eq_10} is equivalent to the Vi\'{e}te infinite product \eqref{eq_3} for the constant pi. Since the relation \eqref{eq_8} remains valid for any integer $M$, equation \eqref{eq_10} can be regarded as a product-to-sum transformation of Vi\'{e}te\text{'}s formula for pi.
It should be noted that the equation \eqref{eq_10} can be readily rearranged as a single Chebyshev polynomial of the second kind (see {\it{Appendix B}}).
\section{Conclusion} We have shown a new analog of Vi\'{e}te\text{'}s formula for pi, represented in terms of Chebyshev polynomials of the first kind. This approach is based on a product-to-sum transformation of Vi\'{e}te\text{'}s formula.
\section*{Acknowledgments}
This work is supported by National Research Council Canada, Thoth Technology Inc. and York University.
\section*{Appendix A}
The Chebyshev polynomials ${{U}_{m}}\left( x \right)$ of the second kind can also be defined by the recurrence relation. Specifically, we can write \cite{Zwillinger2012} $$ {{U}_{m}}\left( x \right)=2x{{U}_{m-1}}\left( x \right)-{{U}_{m-2}}\left( x \right), $$ where ${{U}_{1}}\left( x \right)=2x$ and ${{U}_{0}}\left( x \right)=1$.
There is a simple relation for the sum of the odd-index Chebyshev polynomials of the first kind \[ \tag{A.1}\label{A.1} {{U}_{K}}\left( x \right)=2\sum\limits_{k\,\,\text{odd}}^{K}{{{T}_{k}}\left( x \right)}, \] where $K$ is an odd integer. Consequently, applying relation \eqref{A.1} to the sum in equation \eqref{eq_6} provides\footnote{The subscript $2^M-1$ should not be confused with the notation $2^{M-1}$ used in some equations earlier.} \[ \prod\limits_{m=1}^{M}{\cos \left( \frac{t}{{{2}^{m}}} \right)}=\frac{1}{{{2}^{M}}}{{U}_{{{2}^{M}}-1}}\left( \cos \left( \frac{t}{{{2}^{M}}} \right) \right). \] According to equation \eqref{eq_1}, letting $M$ tend to infinity leads to the limit \[ \text{sinc}\left( t \right)=\underset{M\to \infty }{\mathop{\lim }}\,\frac{1}{{{2}^{M}}}{{U}_{{{2}^{M}}-1}}\left( \cos \left( \frac{t}{{{2}^{M}}} \right) \right) \] or \[ \tag{A.2}\label{A.2} \text{sinc}\left( t \right)=\underset{N\to \infty }{\mathop{\lim }}\,\frac{1}{N}{{U}_{N-1}}\left( \cos \left( \frac{t}{N} \right) \right), \] where $N = {{2}^{M}}$. Obviously, for $N \gg 1$ one can truncate equation \eqref{A.2} to approximate the sinc function by a single Chebyshev polynomial of the second kind as \[ \text{sinc}\left( t \right) = \frac{1}{N}{{U}_{N-1}}\left( \cos \left( \frac{t}{N} \right) \right)+\epsilon \left( t \right), \] where $\epsilon \left( t \right)$ is the error term. For example, taking $N=16$ results in \[ \begin{aligned} \text{sinc}\left(t\right) = & \,\, 2048 \cos ^{15}\left(\frac{t}{16}\right)-7168 \cos ^{13}\left(\frac{t}{16}\right)+9984 \cos ^{11}\left(\frac{t}{16}\right)\\ &-7040 \cos ^9\left(\frac{t}{16}\right)+2640 \cos ^7\left(\frac{t}{16}\right)-504 \cos ^5\left(\frac{t}{16}\right)\\ &+42 \cos ^3\left(\frac{t}{16}\right)-\cos \left(\frac{t}{16}\right) + \epsilon \left( t \right), \end{aligned} \]
where within the range $-10 \leq t \leq 10$ the error term satisfies $\left| \epsilon \left( t \right) \right| < 0.006$. As we can see, this approach quite accurately approximates the sinc function even if the integer $N$ in the limit \eqref{A.2} is not very large.
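The stated error bound can be checked numerically. The sketch below evaluates $U_{15}$ by the recurrence on a grid over $-10\le t\le 10$ (the grid spacing is an assumption on our part):

```python
import math

def cheb_U(n, x):
    """Chebyshev polynomial of the second kind via the recurrence
    U_n(x) = 2x U_{n-1}(x) - U_{n-2}(x), with U_0 = 1 and U_1 = 2x."""
    u_prev, u = 1.0, 2.0 * x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0 * x * u - u_prev
    return u

def sinc(t):
    return math.sin(t) / t if t != 0.0 else 1.0

N = 16
err = max(abs(cheb_U(N - 1, math.cos(t / N)) / N - sinc(t))
          for t in [0.05 * k for k in range(-200, 201)])
print(err)  # stays below the bound 0.006 quoted above
```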
\section*{Appendix B}
Applying relation \eqref{A.1} to equation \eqref{eq_10}, we can express Vi\'{e}te\text{'}s formula for pi by a single Chebyshev polynomial of the second kind as given by \[ \frac{2}{\pi}=\underset{M\to \infty }{\mathop{\lim }}\,\frac{1}{{{2}^{M}}}{{U}_{{{2}^{M}}-1}}\left( \frac{\overbrace{\sqrt{2+\sqrt{2+\sqrt{2+\cdots +\sqrt{2}}}}}^{M\,\,\text{square}\,\,\text{roots}}}{2} \right) \] or \[ \tag{B.1}\label{B.1} \frac{2}{\pi}=\underset{M\to \infty }{\mathop{\lim }}\,\frac{1}{{{2}^{M}}}{{U}_{{{2}^{M}}-1}}\left( \frac{{{a}_{M}}}{2} \right). \]
Although equation \eqref{B.1} is simpler, equation \eqref{eq_10} explicitly reflects the product-to-sum transformation of Vi\'{e}te\text{'}s formula \eqref{eq_3} for the constant $\pi$.
\end{document} |
\begin{document}
\title{Generating entangled coherent state of two cavity modes in three-level $\Lambda$-type atomic system} \author{Qing-Xia Mu, Yong-Hong Ma, L. Zhou } \affiliation{ \\School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024, China
}
\date{\today}
\begin{abstract} In this paper, we present a scheme to generate an entangled coherent state by considering a three-level $\Lambda$-type atom interacting with a two-mode cavity driven by classical fields. The two-mode entangled coherent state can be obtained under the large detuning condition. Taking the cavity decay into account, an analytical solution is deduced. \end{abstract}
\pacs{ 03.67.Mn, 42.50.Dv} \maketitle \subsection{I. Introduction} Entanglement between quantum systems is recognized nowadays as a key ingredient for testing quantum mechanics versus local hidden-variable theory \cite{Hybrid-32}. Entanglement as a valuable resource has been used in quantum information processing such as quantum computation \cite{Hybrid-1}, entanglement swapping and teleportation \cite{Hybrid-2}. As macroscopic nonclassical states, Schr\"{o}dinger cat states and entangled coherent states have always been an attractive topic. In quantum optics, these two kinds of states are described as superpositions of different coherent states and superpositions of two-mode coherent states, respectively. It has been shown that such superposition states have many practical applications in quantum information processing \cite{Hybrid-4}. So far, a variety of physical systems presenting entangled coherent states have been investigated \cite {Hybrid-7,Hybrid-24,Hybrid-9,Hybrid-10,Hybrid-25,Hybrid-29,Hybrid-12}. Sanders \cite{Hybrid-7} presented a method for generating an entangled coherent state with equal weighting factors by using a nonlinear Kerr medium placed in one arm of a nonlinear Mach-Zehnder interferometer. Wielinga \emph{et al.} \cite{Hybrid-24} modified this scheme via an optical tunnelling device instead of the Kerr medium to generate entangled coherent states with a variable weighting factor. Schemes have also been proposed for generating such entangled coherent states using trapped ions \cite{Hybrid-9} by controlling the quantized ion motion precisely. \begin{figure}
\caption{Schematic diagram of a three-level $\Lambda -$type atom interacting with two cavity modes and two classical fields with detunings $\Delta $ and $ \Delta ^{\prime }$, respectively.}
\end{figure}
On the other hand, cavity QED, with Rydberg atoms interacting with an electromagnetic field inside a cavity, has also proved to be a promising environment for generating quantum states. In the context of cavity QED, several schemes have been proposed to generate such superposition coherent states \cite {Hybrid-10,Hybrid-25,Hybrid-29,Hybrid-12}. Ref. \cite {Hybrid-25} showed that entangled coherent states can be generated by a state-selective measurement on a two-level atom interacting with a two-mode field. Recently, Wang and Duan \cite{Hybrid-29} studied the generation of multipartite and multidimensional cat states by reflecting coherent pulses successively from a single-atom cavity. Solano \emph{et al.} \cite{Hybrid-12} proposed a method for generating entangled coherent states by considering a two-level atom in cavity QED driven by a strong classical field. However, the two cavity modes in this scheme interact with the same atomic transitions, and thus cannot be easily manipulated.
In our research, we present an alternative method to prepare two cavity modes in an entangled coherent state within the context of cavity QED. Based on the nonresonant interaction of a three-level $\Lambda$-type atom with two cavity modes and two classical fields, we can obtain entangled coherent states. Compared with Ref. \cite{Hybrid-12}, the two cavity modes in our scheme interact with different atomic transitions, so they can be easily distinguished and manipulated. Furthermore, we work under the large detuning condition, so the decoherence induced by spontaneous emission from the excited level $|c\rangle $ can be ignored. Our scheme can also be generalized to generate multidimensional entangled coherent states with the assistance of another two-level atom in a two-photon process.
\subsection{II. The theoretical model and calculation}
The system we consider is a three-level atom in $\Lambda $
configuration placed inside a two-mode field cavity. The level structure of the atom is depicted in Fig.1, where the two atomic transitions $|c\rangle \leftrightarrow |e\rangle $ and $|c\rangle
\leftrightarrow |g\rangle $ interact with the two cavity modes with the same detuning $\Delta $ but with different coupling constants $g_{1}$ and $g_{2}$, respectively. The two atomic transitions
$|c\rangle \leftrightarrow |e\rangle $ and $|c\rangle
\leftrightarrow |g\rangle $ are also driven by two classical fields with detuning $\Delta ^{\prime }$, and $\Omega _{1}$ and $\Omega _{2}$ are the Rabi frequencies of the two classical fields. The Hamiltonian for the system can be written as \begin{eqnarray}
H &=&\hbar w_{e}|e\rangle \langle e|+\hbar w_{c}|c\rangle \langle c|+\hbar w_{1}a_{1}^{\dagger }a_{1}+\hbar w_{2}a_{2}^{\dagger }a_{2} \nonumber \\
&&+\hbar g_{1}(a_{1}^{\dagger }|e\rangle \langle c|+a_{1}|c\rangle
\langle e|)+\hbar g_{2}(a_{2}^{\dagger }|g\rangle \langle c|+a_{2}|c\rangle \langle g|) \nonumber \\
&&+\hbar \Omega _{1}(e^{-i(w_{c}-w_{e}-\Delta ^{\prime })t}|c\rangle
\langle e|+H.c.)+\hbar \Omega _{2}(e^{-i(w_{c}-\Delta ^{\prime
})t}|c\rangle \langle g|+H.c.), \end{eqnarray} where $a_{i}^{\dagger }$ and $a_{i}$ are the creation and annihilation operators for the cavity fields of frequencies
$w_{i}$ (i=1,2), while $w_{c}$ and $w_{e}$ are the Bohr frequencies associated with the two atomic transitions $|c\rangle
\leftrightarrow |g\rangle $ and $|e\rangle \leftrightarrow |g\rangle $, respectively.
We consider the large detuning domain \begin{eqnarray} \left( \frac{\Omega _{1}}{\Delta^{\prime } },\frac{\Omega _{2}}{ \Delta ^{\prime }},\frac{g_{1}}{\Delta }, \frac{g_{2}}{\Delta }\right) \ll 1. \end{eqnarray}
After adiabatically eliminating the excited level $|c\rangle $, we derive the effective Hamiltonian as follows \cite{Hybrid-14} \begin{eqnarray} H_{eff}{=}-\hbar g_{eff}(a_{1}^{\dagger }a_{2}\sigma ^{\dagger }+a_{1}a_{2}^{\dagger }\sigma )-\hbar \Omega _{eff}(\sigma ^{\dagger }+\sigma ), \end{eqnarray} where $g_{eff}{=}\frac{g_{1}g_{2}}{\Delta }$, $\Omega _{eff}{=}\frac{\Omega _{1}\Omega _{2}}{\Delta ^{\prime }}$; $\sigma
^{\dagger }{=}\left| e\right\rangle \left\langle g\right| $ and
$\sigma {=}\left| g\right\rangle \left\langle e\right| $ are raising and lowering atomic operators, respectively. In Eq.(3) we have assumed that the Stark shifts can be corrected by retuning the laser frequencies \cite{Hybrid-22}.
In the strong driving regime $\Omega _{eff}{\gg }g_{eff}$, we choose $ H_{eff}^{0}{=}-\hbar \Omega _{eff}(\sigma ^{\dagger }+\sigma )$ and $ H_{eff}^{I}{=}-\hbar g_{eff}(a_{1}^{\dagger }a_{2}\sigma ^{\dagger }+a_{1}a_{2}^{\dagger }\sigma )$. Performing the unitary transformation $U{=}e^{-\frac{i}{\hbar }H_{eff}^{0}t}$ on $H_{eff}^{I}$ and neglecting the terms that oscillate with high frequencies, the Hamiltonian reads \begin{eqnarray} H_{eff}^{int}=-\frac{\hbar g_{eff}}{2}(a_{1}^{\dagger }a_{2}+a_{1}a_{2}^{\dagger })(\sigma ^{\dagger }+\sigma ). \end{eqnarray} We recognize that the field part of the Hamiltonian, $-\frac{\hbar g_{eff}}{2}(a_{1}^{\dagger }a_{2}+a_{1}a_{2}^{\dagger })$, is the generator of the SU(2) coherent state \cite{Hybrid-27}. Here, we are interested in using the Hamiltonian of Eq.(4) to entangle the two cavity modes through their interaction with the atom. For this purpose we consider the case where the atom is initially prepared in the ground state $|g\rangle $, while the two cavity fields are in coherent states $|\alpha \rangle $ and $|\beta \rangle $, respectively. Thus the initial state of the system is \begin{eqnarray}
|\Psi (0)\rangle =|g\rangle \otimes |\alpha ,\beta \rangle . \end{eqnarray}
In the basis of $|\pm \rangle =\frac{1}{\sqrt{2}}(|g\rangle \pm|e\rangle )$, the eigenstates of $\sigma +\sigma ^{\dagger }$ with eigenvalues $ \pm 1$, the time evolution of the system is given by \begin{eqnarray}
&|\Psi (t)\rangle &=e^{\frac{-i}{\hbar }H_{eff}^{int}t}|\Psi (0)\rangle \nonumber \\
&&=\frac{1}{\sqrt{2}}e^{\frac{ig_{eff}t}{2}(K_{+}+K_{-})}|+,\alpha ,\beta \rangle
+\frac{1}{\sqrt{2}}e^{\frac{-ig_{eff}t}{2}(K_{+}+K_{-})}|-,\alpha ,\beta \rangle , \end{eqnarray} where $K_{+}=a_{1}^{\dagger }a_{2}$, $K_{-}=a_{1}a_{2}^{\dagger }$. These operators satisfy the SU(2) commutation relations, i.e. $ [K_{-},K_{+}]=-2K_{0}$, $[K_{0},K_{+}]=K_{+}$, $[K_{0},K_{-}]=-K_{-}$, with $ K_{0}=\frac{1}{2}(a_{1}^{\dagger }a_{1}-a_{2}^{\dagger }a_{2})$. Thus we can use the SU(2) Lie algebra \cite{Hybrid-15} to expand the unitary evolution operator $e^{\pm \frac{ig_{eff}t}{2}(K_{+}+K_{-})}$ as \begin{equation} e^{\pm \frac{ig_{eff}t}{2}(K_{+}+K_{-})}=e^{\pm x_{+}K_{+}}e^{K_{0}\ln {x_{0} }}e^{\pm x_{-}K_{-}}, \end{equation} in which \begin{eqnarray*} x_{0} &=&\{\cosh {\frac{ig_{eff}t}{2}}\}^{-2}, \\ x_{+} &=&x_{-}=\tanh {\frac{ig_{eff}t}{2}}. \end{eqnarray*} Using Eq.(7) we can conveniently derive the evolution of the system as \begin{equation}
|\Psi (t)\rangle =\frac{1}{\sqrt{2}}|+\rangle |\tilde{\alpha},\tilde{\beta}
\rangle +\frac{1}{\sqrt{2}}|-\rangle |\tilde{\alpha}^{\ast },\tilde{\beta} ^{\ast }\rangle , \end{equation} with \begin{eqnarray*} \tilde{\alpha} &=&\alpha \cos {\frac{g_{eff}t}{2}}+i\beta \sin {\frac{ g_{eff}t}{2}}, \\ \tilde{\beta} &=&\beta \cos {\frac{g_{eff}t}{2}}+i\alpha \sin {\frac{g_{eff}t }{2}}. \end{eqnarray*} We now change the basis back to original atomic states \begin{eqnarray}
|\Psi (t)\rangle {=}\frac{1}{2}|g\rangle (|\tilde{\alpha},\tilde{\beta}
\rangle +|\tilde{\alpha}^{\ast },\tilde{\beta}^{\ast }\rangle )+\frac{1}{2}
|e\rangle (|\tilde{\alpha},\tilde{\beta}\rangle -|\tilde{\alpha}^{\ast },
\tilde{\beta}^{\ast }\rangle ). \end{eqnarray} When the atom comes out from the two-mode cavity, we can use level-selective ionizing counters to detect the atomic state. If the internal state of atom is detected to be in the state $|g\rangle $ or $|e\rangle $, Eq.(9) will project the two-mode cavity into \begin{equation}
|\Psi _{f}(t)\rangle {=}\frac{1}{\sqrt{M}}(|\tilde{\alpha},\tilde{\beta}
\rangle \pm |\tilde{\alpha}^{\ast },\tilde{\beta}^{\ast }\rangle ), \end{equation} where $M$ is normalization factor such that \begin{eqnarray}
M=2\pm \lbrack exp(-|\tilde{\alpha}|^{2}-|\tilde{\beta}|^{2}+\tilde{\alpha}
^{\ast ^{2}}+\tilde{\beta}^{\ast ^{2}})+exp(-|\tilde{\alpha}|^{2}-|\tilde{
\beta}|^{2}+\tilde{\alpha}^{2}+\tilde{\beta}^{2})]. \end{eqnarray}
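The field evolution in Eq.(8) is a beam-splitter-type transformation of the two coherent amplitudes, which can be verified numerically in a truncated Fock space. The following sketch (illustrative amplitudes and interaction angle; NumPy and SciPy are assumed available) compares the exact propagator $e^{i\theta(K_++K_-)}$ with the predicted coherent-state image:

```python
import math
import numpy as np
from scipy.linalg import expm

d = 20                                      # Fock-space truncation (assumed)
a = np.diag(np.sqrt(np.arange(1, d)), 1)    # single-mode annihilation operator
I = np.eye(d)
a1, a2 = np.kron(a, I), np.kron(I, a)

def coherent(amp):
    """Truncated coherent state |amp> in a d-dimensional Fock space."""
    v = np.array([amp**k / math.sqrt(math.factorial(k)) for k in range(d)],
                 dtype=complex)
    return v * math.exp(-abs(amp)**2 / 2.0)

alpha, beta, theta = 0.8, 0.6, 0.5          # theta = g_eff * t / 2 (assumed)
G = a1.conj().T @ a2 + a1 @ a2.conj().T     # K_+ + K_-
psi = expm(1j * theta * G) @ np.kron(coherent(alpha), coherent(beta))

# image amplitudes predicted by equation (8)
at = alpha * math.cos(theta) + 1j * beta * math.sin(theta)
bt = beta * math.cos(theta) + 1j * alpha * math.sin(theta)
overlap = abs(np.vdot(np.kron(coherent(at), coherent(bt)), psi))
print(overlap)   # close to 1, up to truncation error
```

Note that $|\tilde{\alpha}|^2+|\tilde{\beta}|^2=|\alpha|^2+|\beta|^2$, i.e. the transformation conserves the total photon number, as expected for the generator $K_++K_-$.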
In this way we obtain a superposition of two two-mode coherent states. It is interesting to note that, under certain conditions on the amplitudes of the two coherent states, such a superposition state can exhibit nonclassical effects such as violation of the Cauchy-Schwarz inequality and two-mode squeezing \cite{Hybrid-16}. On the other hand, the interaction time of the atom in the cavity can be set to $m\pi /g_{eff}$ by using a velocity selector, where $m$ is an odd number. Then we obtain two-mode even and odd coherent states $|\Psi _{f}(t)\rangle {=}\frac{1}{\sqrt{M}}(|i\beta ,i\alpha \rangle \pm |-i\beta ,-i\alpha \rangle )$ \cite{Hybrid-16}. It has been shown that these even and odd coherent states exhibit strong correlations between the two modes.
We now estimate the entanglement of the state (10). Recently, different entanglement criteria for two-mode systems have been proposed in \cite{Hybrid-20,Hybrid-18,Hybrid-19}. Here, we construct a normalized, orthogonal basis and then use the concurrence to evaluate the entanglement, as proposed in \cite{Hybrid-20,Hybrid-26}. According to Ref. \cite{Hybrid-26}, the concurrence of the state (10) is given by \begin{eqnarray}
C=\frac{2}{|M|}\sqrt{(1-|p_{1}|^{2})(1-|p_{2}|^{2})}. \end{eqnarray}
where $p_{1}=e^{-|\tilde{\alpha}|^{2}+\tilde{\alpha}^{\ast ^{2}}}$ and $p_{2}=e^{-|\tilde{\beta}|^{2}+\tilde{\beta}^{\ast ^{2}}}$. \begin{figure}
\caption{The time evolution of the degree of the entanglement with $ g_{eff}=2.5$, $\protect\alpha =1$, $\protect\beta =1.5$.}
\end{figure}
Fig.2 shows the time evolution of the concurrence. Here the positive sign has been chosen in Eq.(10). We see that, for this choice of parameters of the two modes, the concurrence oscillates periodically in time. From Eq.(10) it is easy to see that the state is entangled at all times except when $\tilde{\alpha}$ and $\tilde{\beta}$ are real, namely at $t=n\pi /g_{eff}$ (where $n$ is an even number).
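The concurrence of Eq.(12) is straightforward to evaluate. The following sketch uses the parameter values quoted for Fig.2 (the clipping of tiny negative arguments under the square root is a numerical safeguard we add, not part of the formula):

```python
import cmath
import math

def concurrence(t, g_eff=2.5, alpha=1.0, beta=1.5, sign=+1):
    """Concurrence of the projected state (10), equations (11)-(12)."""
    th = g_eff * t / 2.0
    at = alpha * math.cos(th) + 1j * beta * math.sin(th)   # alpha-tilde
    bt = beta * math.cos(th) + 1j * alpha * math.sin(th)   # beta-tilde
    p1 = cmath.exp(-abs(at)**2 + at.conjugate()**2)
    p2 = cmath.exp(-abs(bt)**2 + bt.conjugate()**2)
    M = 2 + sign * (cmath.exp(-abs(at)**2 - abs(bt)**2
                              + at.conjugate()**2 + bt.conjugate()**2)
                    + cmath.exp(-abs(at)**2 - abs(bt)**2 + at**2 + bt**2))
    # clip tiny negative values caused by floating-point rounding
    w1 = max(0.0, 1.0 - abs(p1)**2)
    w2 = max(0.0, 1.0 - abs(p2)**2)
    return 2.0 * math.sqrt(w1 * w2) / abs(M)

# C vanishes where alpha-tilde and beta-tilde are real (t = n*pi/g_eff,
# n even) and is large near the odd multiples:
print(concurrence(2.0 * math.pi / 2.5), concurrence(math.pi / 2.5))
```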
\subsection{III. Analytical solution including cavity decay}
Due to the large detuning, the excited atomic level $|c\rangle $ does not participate in the interaction; therefore, spontaneous emission from this level can be ignored. We now discuss the time evolution of the system in the presence of cavity losses. For simplicity, we assume the losses of the two cavity modes are equal. Including the cavity damping terms in the equation of motion for the density operator, the master equation can be written as \begin{eqnarray} \dot{\rho}=\frac{-i}{\hbar }[H_{eff},\rho ]+L_{1}\rho +L_{2}\rho , \end{eqnarray} where $L_{i}=\frac{k}{2}(2a_{i}\rho a_{i}^{\dagger }-a_{i}^{\dagger }a_{i}\rho -\rho a_{i}^{\dagger }a_{i})$ for $i=1,2$.
This equation can be solved by Lie algebras \cite{Hybrid-15} and superoperator technique \cite{Hybrid-23}. When the initial state is prepared in $|g,\alpha,\beta\rangle$, we can obtain the analytical solution of the system as follows \begin{eqnarray}
\rho&=&\frac{1}{2}|+,\tilde{\alpha}e^{\frac{-kt}{2}},\tilde{\beta}e^{\frac{ -kt}{2}}\rangle \langle +,\tilde{\alpha}e^{\frac{-kt}{2}},\tilde{\beta}e^{
\frac{-kt}{2}}|+\frac{1}{2}|-,\tilde{\alpha}^*e^{\frac{-kt}{2}},\tilde{\beta} ^*e^{\frac{-kt}{2}}\rangle \langle-,\tilde{\alpha}^*e^{\frac{-kt}{2}},\tilde{
\beta}^*e^{\frac{-kt}{2}}| \nonumber \\
&&+\frac{1}{2}\eta |+,\tilde{\alpha}e^{\frac{-kt}{2}},\tilde{\beta}e^{\frac{ -kt}{2}}\rangle \langle -,\tilde{\alpha}^*e^{\frac{-kt}{2}},\tilde{\beta}
^*e^{\frac{-kt}{2}}|+\frac{1}{2}\eta^* |-,\tilde{\alpha}^*e^{\frac{-kt}{2}}, \tilde{\beta}^*e^{\frac{-kt}{2}}\rangle\langle +,\tilde{\alpha}e^{\frac{-kt}{
2}},\tilde{\beta}e^{\frac{-kt}{2}}|, \nonumber \\ \end{eqnarray} where \begin{eqnarray}
\eta{=}exp[-4\lambda_1\tilde{\alpha}\tilde{\beta}+(|\tilde{\alpha}|^2 +|
\tilde{\beta}|^2)(e^{-kt}-1)+2\lambda_2(\tilde{\alpha}^2+\tilde{\beta}^2)], \nonumber \end{eqnarray} \begin{eqnarray} \lambda_1{=}\frac{kg_{eff}\cos(g_{eff}t)-k^2\sin(g_{eff}t)-kg_{eff}e^{-kt}}{ 2i(k^2+g_{eff}^2)}, \nonumber \end{eqnarray} \begin{eqnarray} \lambda_2{=}\frac{k^2\cos(g_{eff}t)+kg_{eff}\sin(g_{eff}t)-k^2e^{-kt}}{ 2(k^2+g_{eff}^2)}. \end{eqnarray}
We then measure the atomic state in the bare basis $\{|g\rangle,|e\rangle\}$. If the atom is detected in the ground state $|g\rangle$, the field will be projected onto the state \begin{eqnarray}
\rho_f&=&\frac{1}{N}[|\tilde{\alpha}e^{\frac{-kt}{2}},\tilde{\beta}e^{\frac{ -kt}{2}}\rangle \langle \tilde{\alpha}e^{\frac{-kt}{2}},\tilde{\beta}e^{
\frac{-kt}{2}}| \nonumber \\
&&+\eta |\tilde{\alpha}e^{\frac{-kt}{2}},\tilde{\beta}e^{\frac{-kt}{2} }\rangle \langle \tilde{\alpha}^*e^{\frac{-kt}{2}},\tilde{\beta}^*e^{\frac{
-kt}{2}}| \nonumber \\
&&+\eta^*|\tilde{\alpha}^*e^{\frac{-kt}{2}},\tilde{\beta}^*e^{\frac{-kt}{2} }\rangle\langle \tilde{\alpha}e^{\frac{-kt}{2}},\tilde{\beta}e^{\frac{-kt}{2}
}| \nonumber \\
&&+|\tilde{\alpha}^*e^{\frac{-kt}{2}},\tilde{\beta}^*e^{\frac{-kt}{2} }\rangle \langle\tilde{\alpha}^*e^{\frac{-kt}{2}},\tilde{\beta}^*e^{\frac{-kt
}{2}}|], \end{eqnarray} where $N$ is the normalization coefficient \begin{eqnarray}
N=2+\eta \exp[(-|\tilde{\alpha}|^2-|\tilde{\beta} |^2 +\tilde{\alpha}^2+\tilde{\beta}
^2)e^{-kt}]+\eta^*\exp[(-|\tilde{\alpha}|^2-|\tilde{\beta}|^2+ \tilde{\alpha}^{*^2}+\tilde{\beta}^{*^2})e^{-kt}]. \end{eqnarray}
The time-dependent factors $\eta $ and $\eta ^{\ast }$ are particularly important and interesting here: they contain the information about how fast the density matrix becomes an incoherent mixed state. We again use the concurrence to estimate the entanglement. The normalized, orthogonal basis is defined as \begin{eqnarray*}
\text{For cavity mode }1,|0\rangle &{=}&|\tilde{\alpha}e^{\frac{-kt}{2}
}\rangle ,|1\rangle {=}\frac{|\tilde{\alpha}^{\ast }e^{\frac{-kt}{2}}\rangle
-p_{1}|\tilde{\alpha}e^{\frac{-kt}{2}}\rangle }{M_{1}}, \\
\text{For cavity mode }2,|0\rangle &{=}&|\tilde{\beta}e^{\frac{-kt}{2}
}\rangle ,|1\rangle {=}\frac{|\tilde{\beta}^{\ast }e^{\frac{-kt}{2}}\rangle
-p_{2}|\tilde{\beta}e^{\frac{-kt}{2}}\rangle }{M_{2}}. \end{eqnarray*}
with $p_{1}{=}\exp [(-|\tilde{\alpha}|^{2}+\tilde{\alpha}^{\ast
^{2}})e^{-kt}]$, $M_{1}=\sqrt{1-|p_{1}|^{2}}$, $p_{2}{=}\exp [(-|\tilde{\beta
}|^{2}+\tilde{\beta}^{\ast ^{2}})e^{-kt}]$,
$M_{2}=\sqrt{1-|p_{2}|^{2}}$.
After calculation, the entanglement of system $\rho _{f}$ has the form \begin{eqnarray}
C=\frac{2M_{1}M_{2}}{N}|\eta |. \end{eqnarray} \begin{figure}
\caption{The time evolution of the entanglement when considering cavity decay with $g_{eff}$=1, $\protect\alpha =1$, $\protect\beta =1.5$. From top to bottom, $k=0.1,0.2,0.5$, respectively.}
\end{figure} Fig.~3 displays the entanglement of the two cavity modes, measured by the concurrence, for $k=0.1,0.2,0.5$, respectively. It is observed that the amplitude of the concurrence decreases as $k$ increases: cavity loss destroys the entanglement. Thus a high-$Q$ two-mode cavity is preferred.
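The long-time behaviour provides a simple check on the expression $C=2M_{1}M_{2}|\eta |/N$: since the exponents in $p_{1}$ and $p_{2}$ are proportional to $e^{-kt}$, we have $p_{1},p_{2}\rightarrow 1$ and hence $M_{1},M_{2}\rightarrow 0$ as $t\rightarrow \infty $, so that \begin{eqnarray*} C=\frac{2M_{1}M_{2}}{N}|\eta |\rightarrow 0. \end{eqnarray*} That is, cavity decay shrinks the coherent-state branches toward the vacuum, they become indistinguishable, and the entanglement is lost asymptotically, consistent with the decay of the concurrence seen in Fig.~3.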
Furthermore, our method can also be extended to generate multidimensional entangled coherent states. In order to do this, we first send a two-level atom with a virtual intermediate level \cite{Hybrid-30}, initially in the ground state $|g\rangle $, through a two-mode cavity. The atom interacts dispersively with one of the cavity modes (e.g., the mode with annihilation (creation) operators $a_{1}$ $(a_{1}^{\dagger })$), where the two-photon process takes place. The effective Hamiltonian acting on the state $|g\rangle $ is $H=-\hbar \lambda a_{1}^{\dagger }a_{1}(a_{1}^{\dagger }a_{1}-1)$ \cite{Hybrid-31}. If the cavity mode is initially in a coherent state, this nonlinear interaction is equivalent to that of a Kerr medium \cite{Hybrid-28}. When the two-level atom flies out of the cavity, a three-level atom in the $\Lambda $ configuration is sent into it. Performing the same operations discussed in Section II, we find that the total evolution operator of the field part has the same form as Eq.~(4) of Ref.~\cite{Hybrid-28}. Following the methods of Ref.~\cite{Hybrid-28}, we can derive the multidimensional entangled coherent state after a projective measurement of the atomic state in the basis $\{|\pm \rangle \}$.
\subsection{IV. Conclusion}
In conclusion, we have presented a scheme to generate a two-mode entangled coherent state in a cavity QED system, in which a three-level $\Lambda $-configuration atom interacts with two cavity modes and two classical fields in the large-detuning regime. When we perform a measurement on the atomic state, the two-mode field collapses into an entangled coherent state, provided the two cavity modes are initially in coherent states. In our scheme the two cavity modes couple to two distinct atomic transitions, so they are easy to control. Moreover, taking cavity decay into account, we study the system evolution and give an analytical solution. With the assistance of another two-level atom with an intermediate level, our scheme can also be generalized to generate multidimensional entangled coherent states.
\begin{references} \bibitem{Hybrid-32} Bell J S 1964 Physics (Long Island City, N.Y.) {\bf 1} 195 \bibitem{Hybrid-1} Ekert A and Jozsa R 1996 Rev. Mod. Phys. {\bf 68} 733
Knill E, Laflamme R and Milburn G J 2001 Nature {\bf 409} 46 \bibitem{Hybrid-2} Bennett C H, Brassard G, Crepeau C, Jozsa R,
Peres A and Wootters W K 1993 Phys. Rev. Lett. {\bf 70} 1895
Wang X 2001 Phys. Rev. A {\bf 64} 022302
\bibitem{Hybrid-4} Munro W J, Milburn G J and Sanders B C 2000 Phys. Rev. A {\bf 62} 052108
van Enk S J and Hirota O 2001 Phys. Rev. A {\bf 64} 022313
Nguyen Ba An 2004 Phys. Rev. A {\bf 69} 022315 \bibitem{Hybrid-7} Sanders B C 1992 Phys. Rev. A {\bf 45} 6811
\bibitem{Hybrid-24} Wielinga B and Sanders B C 1993 J. Mod. Optics {\bf 40} 1923 \bibitem{Hybrid-9} Gerry C C 1997 Phys. Rev. A {\bf 55} 2478
Zou Xu Bo, Pahlke K and Mathis W 2002 Phys. Rev. A {\bf 65} 064303
Paternostro M, Kim M S and Ham B S 2003 Phys. Rev. A {\bf 67} 023811
Zheng S B 2004 Phys. Rev. A {\bf 69} 055801 \bibitem{Hybrid-10} Davidovich L, Brune M, Raimond J M and Haroche S 1996 Phys. Rev. A {\bf 53} 1295
\bibitem{Hybrid-25} Guo G C and Zheng S B 1996 Opt. Commun. {\bf 133} 142 \bibitem{Hybrid-29} Wang B and Duan L M 2005 Phys. Rev. A {\bf 72} 022320 \bibitem{Hybrid-12} Solano E, Agarwal G S and Walther H 2003 Phys. Rev. Lett. {\bf 90} 027903
\bibitem{Hybrid-14} Lougovski P, Solano E and Walther H 2005 Phys. Rev. A {\bf 71} 013811 \bibitem{Hybrid-22} Biswas A and Agarwal G S 2004 Phys. Rev. A {\bf 69} 062306 \bibitem{Hybrid-27} Gerry C C and Grobe R 1997 J. Mod. Optics {\bf 44} 41
\bibitem{Hybrid-15} Lu H X, Yang J, Zhang Y D and Chen Z B 2003 Phys. Rev. A {\bf 67} 024101 \bibitem{Hybrid-16} Chai C L 1992 Phys. Rev. A {\bf 46} 7187 \bibitem{Hybrid-20} Wang X 2002 J. Phys. A {\bf 35} 165 \bibitem{Hybrid-18} Shchukin E and Vogel W 2005 Phys. Rev. Lett. {\bf 95} 230502 \bibitem{Hybrid-19} Hillery M and Zubairy M S 2006 Phys. Rev. Lett. {\bf 96} 050503
\bibitem{Hybrid-26} Zhou L, Xiong H and Zubairy M S, to be published.
\bibitem{Hybrid-23} Peixoto de Faria J G and Nemes M C 1999 Phys. Rev. A {\bf 59} 3918 \bibitem{Hybrid-30} Fang M F and Liu X 1996 Phys. Lett. A {\bf 210} 11 \bibitem{Hybrid-31} Zhou L, Song H S, Luo Y X and Li C 2001 Phys. Lett. A {\bf 284} 156 \bibitem{Hybrid-28} van Enk S J 2003 Phys. Rev. Lett. {\bf 91} 017902
\end{references}
\end{document}
\begin{document}
\title[On subharmonic and entire functions]{On subharmonic and entire functions of small order: after Kjellberg}
\author{P. J. Rippon} \address{School of Mathematics and Statistics \\ The Open University \\
Walton Hall\\
Milton Keynes MK7 6AA\\
UK} \email{phil.rippon@open.ac.uk}
\author{G. M. Stallard} \address{School of Mathematics and Statistics \\ The Open University \\
Walton Hall\\
Milton Keynes MK7 6AA\\
UK} \email{gwyneth.stallard@open.ac.uk}
\thanks{2010 {\it Mathematics Subject Classification.}\; Primary 30D15, secondary 30F45, 31A05.\\Both authors were supported by the EPSRC grant EP/R010560/1.}
\begin{abstract} We give a general method for constructing examples of transcendental entire functions of given small order, which allows precise control over the size and shape of the set where the minimum modulus of the function is relatively large. Our method involves developing a new technique to give an upper bound for the growth of a positive harmonic function defined in a certain type of multiply connected domain, giving a sharp estimate for the growth in many cases. \end{abstract} \maketitle
\section{Introduction}\label{intro} \setcounter{equation}{0}
This paper concerns transcendental entire functions of small order. Such functions have been studied extensively in classical complex analysis, ever since Wiman \cite{aW05} observed that such functions have properties which, in some ways, resemble those of polynomials. Subsequently, powerful results such as the version of the cos\,$\pi \rho$ theorem due to Barry \cite{Ba63} showed that, for such functions, the minimum modulus of the function on many circles centred at the origin is comparable in size to the maximum modulus of the function.
More recently, these properties have led to functions of small order playing a key role in two major conjectures in complex dynamics: Baker's conjecture explicitly concerns such functions (see \cite{NRS18} for recent progress on this conjecture) and they were also shown (in \cite{RS09a} and \cite{RS12}) to have an unexpected link with Eremenko's conjecture \cite{E}, one of the main drivers of research in transcendental dynamics, arising from the fact that for functions of small order the escaping set of the function is often connected; see \cite{NRS19} and \cite{RS12}.
In this paper we give a very general method for constructing examples of functions of given small order, including order~0, which allows precise control over the size and shape of the set where the minimum modulus of the function is relatively large. The original motivation for this work was related to further progress on Eremenko's conjecture, which we report on in forthcoming work \cite{NRS20}, but our new results here have wider applications.
For a transcendental entire function~$f$ the {\it maximum modulus} and {\it minimum modulus} of $f$ are denoted by \[
M(r) = M(r,f)=\max_{|z|=r} |f(z)| \quad\text{and}\quad m(r) = m(r,f)=\min_{|z|=r} |f(z)|, \] respectively. The {\it order} of~$f$ is \[ \limsup_{r \to \infty} \frac{\log \log M(r)}{\log r} \] and we say that a function has {\it small order} if it has order less than~1/2. Functions of order 1/2, {\it minimal type}, that is, functions of order 1/2 with $\log M(r) = o(r^{1/2})$ as $r \to \infty$, are sometimes included in this class, since they also have many circles on which the minimum modulus is relatively large (though far fewer such circles than for functions of order less than~$1/2$).
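For instance, the entire function $f(z)=\cos \sqrt{z}$ satisfies $M(r,f)=\cosh \sqrt{r}$, so that \[ \log M(r,f)=r^{1/2}+O(1)\;\text{ as } r\to\infty, \] and hence $f$ has order exactly $1/2$; since $\log M(r)/r^{1/2}\to 1$, it is of mean type rather than minimal type, and so just fails to belong to the class described above.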
Here we give a considerable generalisation of Kjellberg's construction \cite{bK48} of {transcendental entire function}s with orders in the range $(0,1/2)$, which he used to show that various theorems about the minimum modulus of {transcendental entire function}s of order less than 1/2 are in a certain sense best possible.
Kjellberg's construction approximates continuous subharmonic functions with specified properties by functions of the form $\log |f|$, where~$f$ is a {transcendental entire function}, with similar properties. His construction is in two stages: \begin{itemize} \item[1.] a continuous subharmonic function~$u$ with the required properties is specified by using a positive harmonic function defined on a domain whose complement is a union of radial slits, on which~$u$ vanishes; \item[2.]
the Riesz measure of~$u$ is discretised to produce an entire function~$f$ such that $\log |f|$ is close to~$u$ away from the zeros of~$f$. \end{itemize} In Kjellberg's original construction, the radial slits were chosen to lie on a ray from the origin in such a way that the union of the slits is invariant under a scaling of the plane. This invariance was a key property to allow various parts of his reasoning to succeed.
Here we give a method which allows the slits to be chosen much more flexibly. This enables us to construct examples of entire functions with prescribed order~$\rho$, for each $\rho$ in the {\it closed} interval $[0,1/2]$, which also have bounded minimum modulus on as large a set as possible given the order. To achieve the necessary control, we introduce a new technique for estimating the growth of certain positive harmonic functions from above; this technique may well have applications beyond our current purpose.
We introduce a class of subharmonic functions named ${\mathcal K}$, after Kjellberg.
\begin{definition} A subharmonic function~$u$ is in the class ${\mathcal K}$ if $u$ is continuous in ${\mathbb C}$ and positive harmonic in $D={\mathbb C}\setminus E$, where $E\subset (-\infty,0]$ is a closed set on which $u$ vanishes. We assume that each point of~$E$ is regular for the Dirichlet problem in~$D$.
We denote by $u_{\alpha,\beta}$ the function in ${\mathcal K}$ corresponding to the set \[ E=\{0\}\cup\bigcup_{n\in {\mathbb Z}}[-\alpha\beta^n,-\beta^n], \] where $1<\alpha<\beta$, and $u_{\alpha,\beta}(1)=1$. \end{definition}
{\it Remarks}\; 1. For each closed subset~$E$ of the negative real axis, there is exactly one corresponding function $u\in {\mathcal K}$, up to positive scalar multiples, by a result of Benedicks \cite[Theorem~4]{Ben}, and such a~$u$ is unbounded and symmetric with respect to the real axis.
2. The functions $u_{\alpha,\beta}$ correspond to the functions considered by Kjellberg in his original construction \cite[Chapter 3]{bK48}. His set~$E$ was of the form $E=\{0\}\cup\bigcup_{n\in {\mathbb Z}}[\beta^n,\alpha\beta^n]$, where $1<\alpha<\beta$, whereas for us it is on balance more convenient to have the set~$E$ contained in the negative real axis.
To state our results, we recall that for a set $S\subset {\mathbb R}^+$ and $r>1$, the {\it upper logarithmic density} of~$S$ is \[ \overline{\Lambda}(S) = \limsup_{r\to \infty}\frac{1}{\log r}\int_{S\cap(1,r)}\frac{dt}{t}, \] and the {\it lower logarithmic density} of~$S$ is \[ \underline{\Lambda}(S) = \liminf_{r\to \infty}\frac{1}{\log r}\int_{S\cap(1,r)}\frac{dt}{t}. \] When $\overline{\Lambda}(S)=\underline{\Lambda}(S)$ we speak of the {\it logarithmic density} of $S$, denoted by $\Lambda(S)$.
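As a simple illustration, consider the set \[ S=\bigcup_{n\ge 0}[\beta^n,\alpha\beta^n], \quad\text{where } 1<\alpha<\beta, \] whose intersection with $(1,\infty)$ coincides with that of the set $E^*$ associated with the functions $u_{\alpha,\beta}$. For $\beta^m\le r<\beta^{m+1}$ we have \[ m\log \alpha\le\int_{S\cap(1,r)}\frac{dt}{t}\le (m+1)\log \alpha, \] while $\log r$ lies between $m\log\beta$ and $(m+1)\log\beta$, so \[ \Lambda(S)=\frac{\log \alpha}{\log \beta}. \] In particular, Kjellberg's lower bound $\rho(u_{\alpha,\beta})\ge \log\alpha/(2\log \beta)$, recalled below, is exactly $\tfrac12\Lambda(E^*)$.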
We also define, for a continuous subharmonic function~$u$ in ${\mathbb C}$, \[
A(r)=A(r,u)=\min_{|z|=r} u(z)\quad\text{and}\quad B(r)=B(r,u)=\max_{|z|=r} u(z), \] and the {\it order} and {\it lower order} of $u$, \[ \rho(u)=\limsup_{r\to\infty}\frac{\log B(r)}{\log r}\quad \text{and}\quad \lambda(u)=\liminf_{r\to\infty}\frac{\log B(r)}{\log r}, \] respectively.
First, we give some basic properties of {\it all} functions in the class ${\mathcal K}$. \begin{theorem}\label{basic-props} Let $u\in {\mathcal K}$, with $E$ the corresponding closed subset of the negative real axis. Then $u$ has the following properties. \begin{itemize} \item[(a)] Monotonicity properties: for all $r>0$, \[ u(re^{i\theta})\;\;\text{is decreasing as a function of } \theta, \text{ for } 0\le \theta \le \pi, \] so, in particular, $B(r,u)=u(r)$ and $A(r,u)=u(-r)$ for all $r>0$. Also, \[ \frac{u(r)}{r^{1/2}}=\frac{B(r)}{r^{1/2}}\;\;\text{is decreasing for } r>0, \] so, in particular, $\rho(u)\le 1/2$. \item[(b)] Bounds for order and lower order: \[ \rho(u)\ge \tfrac12 \overline{\Lambda}(E^*)\quad {\text and} \quad \lambda(u)\ge \tfrac12 \underline{\Lambda}(E^*), \]
where $E^*=\{|x|:x\in E\}$. \end{itemize} \end{theorem} It is natural to ask whether equality holds for the lower bounds in part~(b). For the functions $u_{\alpha,\beta}$, $1<\alpha<\beta$, Kjellberg proved that \[ \frac{\log \alpha}{2\log \beta}\le \rho(u_{\alpha,\beta})=\lambda(u_{\alpha,\beta})<\frac12. \] Here the lower bound is the one given by Theorem~\ref{basic-props}, part~(b) and Kjellberg showed that equality does not hold in general; see the discussion below, after Theorem~\ref{exactorder}. He obtained the strict upper bound of~$1/2$ by using the invariant nature of his set~$E$; see Section~\ref{min-type-result}, Remark~2 for a proof of this upper bound for the functions $u_{\alpha,\beta}$.
In our main result, which follows, we show that for many functions $u\in{\mathcal K}$, corresponding to sets~$E$ that are unions of closed intervals, the order and lower order of~$u$ can be expressed explicitly in terms of the geometric properties of the set~$E$. We do this by developing a new technique to give upper bounds for the growth of positive harmonic functions defined in multiply connected domains, using a result about the relationship between the Harnack metric and the hyperbolic metric in such a domain \cite{dH87,hBwS}, together with the Beardon--Pommerenke estimate for the density of the hyperbolic metric \cite{aBcP}. \begin{theorem}\label{exactorder} Suppose that $u\in{\mathcal K}$ and \[ E=\bigcup_{n\ge 0}[-b_n,-a_n], \] where $0\le a_0<b_0<a_1<b_1< \cdots,$ and $a_n \to \infty$ as $n\to\infty$. If \begin{equation}\label{an-cond} a_n^{1/n} \to \infty\;\text{ as }n\to \infty, \end{equation} then \begin{equation}\label{order-exact} \rho(u)=\tfrac12\overline{\Lambda}(E^*)\quad {\text and} \quad \lambda(u)= \tfrac12 \underline{\Lambda}(E^*). \end{equation} \end{theorem} We note that some condition such as \eqref{an-cond} is essential here. Indeed, as mentioned earlier, Kjellberg showed that the functions $u_{\alpha,\beta}\in{\mathcal K}$ do not in general satisfy the identities in \eqref{order-exact}. He did this by proving that if the parameters~$\alpha$ and~$\beta$ tend to 1 while $\frac12\log \alpha/\log \beta$ remains constant (so the intervals in the set~$E$ and their complementary intervals become ever slimmer while the logarithmic density of~$E^*$ remains fixed), then the order of $u_{\alpha,\beta}$ must tend to~1/2.
By using a precise harmonic measure estimate in multiply connected domains due to Sodin \cite{Sodin}, we can construct an even more extreme example to demonstrate this phenomenon.
\begin{example}\label{extreme-ex} There exists a function $u\in {\mathcal K}$ with the set $E$ of the form \[ E=\bigcup_{n\ge 0}[-b_n,-a_n], \] where $0\le a_0<b_0<a_1<b_1< \cdots,$ and $a_n \to \infty$ as $n\to\infty$, such that $\overline{\Lambda}(E^*)=0$, \[ \rho(u)= \lambda(u)=1/2\quad\text{and moreover}\quad \lim_{r\to\infty}\frac{u(r)}{r^{1/2}}>0. \] \end{example}
In forthcoming work \cite{NRS20} we will use the results in this paper to construct entire functions of order 1/2, minimal type, with dynamically interesting properties related to their minimum modulus. Our next theorem is useful in any situation where we need to construct examples of order 1/2, minimal type. \begin{theorem}\label{min-type} Let $u\in {\mathcal K}$, with $E$ the corresponding closed subset of the negative real axis. If \[E^c\supset\bigcup_{n\ge 0}(-d_n,-c_n),\] where $0\le c_0<d_0<c_1<d_1< \cdots,$ and $\limsup_{n\to\infty} d_{n}/c_n>1$, then \[ \frac{u(r)}{r^{1/2}}\to 0\;\;\text{as }r\to\infty. \] \end{theorem}
Our final theorem is the result needed in the second stage of Kjellberg's process, which shows how to approximate a function $u\in{\mathcal K}$ by $\log |f|$, where $f$ is entire. This generalises the result given by Kjellberg in \cite[Chapter~4]{bK48} for his particular type of set $E$. Due to the much greater generality of the set~$E$ considered here, the proof is significantly more delicate, so we include full details. \begin{theorem}\label{discretise} Suppose that $u\in{\mathcal K}$ and \[ E=\bigcup_{n\ge 0}[-b_n,-a_n], \] where $0\le a_0<b_0<a_1<b_1< \cdots,$ and $a_n \to \infty$ as $n\to\infty$. Put \[ D_1={\mathbb C}\setminus \{z: \text{{\rm dist}}(z,E)\le 1\}. \] Then there exists a {transcendental entire function}~$f$ with only negative zeros, all lying in the set~$E$, such that \begin{equation}\label{R-est}
\log|f(z)|-u(z)=O\left(\log|z|\right) \;\;\text{as }z\to \infty, \qfor z\in D_1. \end{equation} Moreover, if we also have \begin{equation}\label{d-cond} b_n/a_n \ge d >1,\qfor n\ge 0, \end{equation} then there exists $R=R(u)>0$ such that \begin{equation}\label{f-upper-est}
\log|f(z)|\le u(z)+4\log |z|, \qfor |z|\ge R. \end{equation} \end{theorem} Since the work of Kjellberg there have been many results on the approximation of subharmonic functions by logarithms of moduli of entire functions; see, for example, \cite[Chapter~10]{wH89} for subharmonic functions whose Riesz measure lies on a finite number of unbounded curves, \cite{Y} and \cite{dD01} for functions subharmonic in $\mathbb C$ of finite order, and \cite{LM01} and \cite{FR14} for functions subharmonic in $\mathbb C$ of infinite order. These works all give estimates of the form \eqref{R-est} either outside certain exceptional sets or on average in a certain sense, with various error bounds, but we are not aware of earlier results that provide the type of control of the entire function~$f$ given in \eqref{R-est} and \eqref{f-upper-est} simultaneously.
Indeed, the two estimates in Theorem~\ref{discretise} enable us to use any subharmonic function $u \in{\mathcal K}$ to obtain an entire function $f$ with the same order, lower order and type class as~$u$, and also with the property that~$\log|f|$ is uniformly bounded by~$u$, provided \eqref{d-cond} holds.
Finally, we recall that one aim of Kjellberg's work in \cite{bK48} was to show that a certain density estimate appearing in an early version of the $\cos \pi \rho$ theorem is best possible. Recall the strong version of the $\cos \pi \rho$ theorem due to Barry (see \cite{Ba63} or \cite[Theorem 6.13]{wH89}), which states that if~$u$ is a non-constant subharmonic function of order $\rho\in [0,1)$ and $\rho<\alpha<1$, then \begin{equation}\label{Barry-order} \underline{\Lambda}(\{r: A(r,u)>\cos (\pi\alpha) B(r,u)\}) \ge 1-\rho/\alpha. \end{equation} Kjellberg's examples in \cite{bK48} show that if $0\le \rho <\alpha =1/2$, then the logarithmic density of the set in \eqref{Barry-order} can be arbitrarily close to the quantity $1-\rho/\alpha= 1-2\rho$, thus demonstrating that the inequality in \eqref{Barry-order} for the lower logarithmic density is best possible in the case $\alpha=1/2$.
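To make this concrete, consider the instance $\rho=1/4$, $\alpha=1/2$: then $\cos(\pi\alpha)=0$, and \eqref{Barry-order} reads \[ \underline{\Lambda}(\{r: A(r,u)>0\})\ge 1-\frac{1/4}{1/2}=\frac12, \] so $u$ is positive on the whole circle $\{z:|z|=r\}$ for a set of~$r$ of lower logarithmic density at least~$1/2$, and Kjellberg's examples show that the constant $1/2$ here cannot be increased.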
Theorem~\ref{exactorder} shows that for a given $\rho<1/2$ and $\alpha=1/2$ the value $1-\rho/\alpha=1-2\rho$ for the logarithmic density can in fact be {\it attained} by a subharmonic function~$u$ of order~$\rho$. Moreover, Theorem~\ref{discretise} allows us to use such a subharmonic function to construct an entire function with the same properties.
\begin{corollary}\label{entire-density} For each $\rho$, $0\le \rho<1/2$, there is a {transcendental entire function} $f$ of order $\rho$ such that \[
\underline{\Lambda}(\{r: A(r,\log|f|)>0\}) = \overline{\Lambda}(\{r: A(r,\log|f|)>0\}) = 1-2\rho. \] \end{corollary}
The structure of the paper is as follows. In Section~\ref{prelim} we state a number of key results that are needed in our proofs. Then we prove Theorem~\ref{basic-props} in Section~\ref{lower-estimate-proof}, Theorem~\ref{exactorder} in Section~\ref{upper-estimate-proof}, Example~\ref{extreme-ex} in Section~\ref{example-proof}, Theorem~\ref{min-type} in Section~\ref{min-type-result}, and finally Theorem~\ref{discretise} and Corollary~\ref{entire-density} in Section~\ref{Kjellbergproof}.
{\it Acknowledgements}\; The authors thank Dan Nicks and Ian Short for helpful comments.
\section{Preliminary results}\label{prelim} \setcounter{equation}{0}
Our results depend on two entirely different techniques that will enable us to estimate the growth of functions in class ${\mathcal K}$ from below and from above. The first is a lemma of Beurling~\cite[page~95]{aB33}, which gives estimates of growth from below and was a key tool in~\cite{bK48}, and in many other papers. Recall that for any subharmonic function~$u$ we write \[
B(r,u)=\max_{|z|=r}u(z),\;\text{where } r>0. \]
\begin{lemma}\label{Beur} If~$u$ is subharmonic in~${\mathbb C}$, $0<r_1<r_2$, and \[
E(r_1,r_2)=\{r\in [r_1,r_2]:\inf_{|z|=r}u(z)\le 0\}, \] then \begin{equation}\label{Beur-est} B(r_2,u)> \frac12 \exp\left(\frac12\int_{E(r_1,r_2)} \frac{dt}{t}\right)B(r_1,u). \end{equation} In particular, if $E(r_1,r_2)=[r_1,r_2]$, then \[ B(r_2,u) > \frac12 \left(\frac{r_2}{r_1}\right)^{1/2} B(r_1,u). \] \end{lemma} The second technique, which will give us estimates from above, needs more preparation; we are not aware of this technique being used previously to estimate the growth of positive harmonic functions from above in multiply connected domains.
The {\it Harnack metric} is defined in a domain $G$ by \begin{equation}\label{Har-metric}
d_G(z_1,z_2)=\sup\{|\log (u(z_2)/u(z_1))|: u \text{ is positive and harmonic in } G\}, \end{equation}
where $z_1,z_2\in G$. This concept was introduced by Bear \cite{hB65} and named the Harnack metric by K\"onig \cite{hK69}. The Harnack metric in~$G$ has a close relationship with the hyperbolic metric in $G$, which we denote by $\rho_G$. Indeed, if $G$ is simply connected then these two metrics are identical, provided that $\rho_G$ is normalised so that the hyperbolic density in the unit disc ${\mathbb D}$ is $d\rho_{{\mathbb D}}(z)=2/(1-|z|^2)$, or equivalently $d\rho_{{\mathbb H}}(z)=1/\Re (z)$ in the right half-plane ${\mathbb H}$.
In \cite{dH87}, and also \cite{hBwS}, the relationship between the two metrics when $G$ is multiply connected was investigated and, amongst other results, the following was obtained; see \cite[Theorem~6]{dH87} and \cite[Theorem~1.1]{hBwS}.
\begin{lemma}\label{BearSmith} Let $G$ be a domain with Harnack metric $d_G$. Then \[ d_G(z_1,z_2)\le \rho_G(z_1,z_2),\qfor z_1,z_2\in G. \] \end{lemma} It follows that we can make good estimates for the growth of a positive harmonic function in a domain whenever we can obtain good estimates for the hyperbolic metric in that domain. To do this we shall use the following result of Beardon and Pommerenke; see \cite[Theorem~1]{aBcP}. \begin{lemma}\label{BeardPomm} Let~$G$ be a domain in ${\mathbb C}$ that omits at least two finite points. Then the hyperbolic density in $G$ satisfies \[ d\rho_G(z) \le \frac{\pi/2}{{\rm dist}\,(z,\partial G)\beta_G(z)}, \] where \[
\beta_G(z)=\inf\{\left|\log|z-a|/|b-a|\right|:a,b\in \partial G,|z-a|={\rm dist}\,(z,\partial G)\}. \] \end{lemma} Note that the constant $\pi/2$ in Lemma~\ref{BeardPomm} appears as $\pi/4$ in \cite[Theorem~1]{aBcP} due to the different normalisation of the hyperbolic density in \cite{aBcP}.
In our situation, where we are estimating the growth of functions in class~${\mathcal K}$, the domains where the positive harmonic functions are defined always have their boundaries lying entirely in the negative real axis, so we can make use of the following special case of a deep result of Weitsman; see \cite[Theorem~9.16]{wH89}. \begin{lemma}\label{Weit}
Let $G$ be a domain in ${\mathbb C}$ whose complement contains at least two points, and suppose that $G$ has the symmetry property that for each $r>0$ the set $\{z\in G:|z|=r\}$ is either a circle or is of the form $\{re^{i\theta}: |\theta|< \pi\}$. Then $d\rho_G(re^{i\theta})$ is an increasing function of $|\theta|$ for $0\le |\theta|<\pi$. \end{lemma} It follows easily from Lemma~\ref{Weit} that for such a domain $G$ the hyperbolic geodesic joining any two points on the positive real axis is the line segment joining those two points.
In order to prove Example~\ref{extreme-ex} we need two further results. The first is due to Sodin \cite[Lemma~4]{Sodin}, who sharpened an earlier result of this type due to Benedicks~\cite{Ben}. In this result, cap\,$(.)$ denotes logarithmic capacity. \begin{lemma}\label{sodin} Let $Q_z(h)$ be the open square in the complex plane with center $z$ and sidelength $h>0$, and let $Q(h)=Q_0(h)$. Let $E\subset Q(r) \cap {\mathbb R}$ be a closed set such that, for some $\delta\in (0,1)$ and $h\in (0,r)$, \[
{\rm cap}\left( \frac{1}{2h}(E\cap Q_x(h))\right)\ge \delta, \qfor |x| < r-h. \] Then \[ \omega(0) \le \frac{Ch}{r}\log(1/\delta), \] where $\omega$ denotes the harmonic measure in $Q(r)\setminus E$ of $\partial Q(r)$, and $C>0$ is an absolute constant. \end{lemma} Finally, we need a special case of a beautiful theorem of Kjellberg~\cite[Theorem~6.7]{wH89}, which generalised and sharpened earlier results of Wiman~\cite{aW05} and Heins~\cite{mH48}.
\begin{lemma}\label{Wiman} Let~$u$ be a non-constant subharmonic function in ${\mathbb C}$ of order~1/2. Then either \[ \limsup_{r\to\infty}A(r,u)=\infty, \] or \[ \lim_{r\to\infty}B(r,u)/r^{1/2}=\alpha\;\text{ as } r \to\infty, \quad\text{where } 0<\alpha<\infty. \] \end{lemma}
\section{Proof of Theorem~\ref{basic-props}}\label{lower-estimate-proof} \setcounter{equation}{0} The proof of Theorem~\ref{basic-props} is reasonably straightforward. The first statement in part~(a) follows easily from the fact that any subharmonic function $u\in{\mathcal K}$ can be represented as a potential of the form \[
u(z)=u(0)+\int_0^{\infty} \log|1+z/t|\,d\mu(t), \]
where $\mu(t)$ denotes the Riesz measure of~$u$ in the disc $\{z:|z| \le t\}$, which is clearly entirely supported in the set~$E$; see \cite[Theorem~2.1]{mH48}, for example.
To prove the second statement in part~(a) we use the fact that the function $U(z)=u(z^2)$ is positive harmonic in the right half-plane with continuous boundary values, so it has the Poisson integral representation \begin{equation}\label{Poisson-half}
U(z)=cx+\frac{x}{\pi}\int_{-\infty}^{\infty}\frac{U(it)}{|z-it|^2}\,dt,\qfor x=\Re (z)>0, \end{equation} where $c\ge 0$; see, for example, \cite[Theorem 7.26]{AB}. It follows that $U(x)/x$ is decreasing for $x\in (0,\infty)$ and hence that $u(r)/r^{1/2}$ is decreasing for $r>0$.
To prove part~(b) we use Lemma~\ref{Beur}. Since $B(r,u)=u(r)$ for $r>0$, by Theorem~\ref{basic-props}, part~(a), this lemma gives \[ u(r)> \frac 12 \exp \left(\frac 12 \int_{E^*\cap(1,r)} \frac 1t\,dt\right)u(1),\qfor r>1, \] and hence \[ \frac{\log u(r)}{\log r}> \frac{1}{2\log r}\int_{E^*\cap(1,r)} \frac 1t\,dt+\frac{\log u(1)-\log 2}{\log r},\qfor r>1, \] from which both statements in part~(b) follow immediately.
{\it Remark}\;\; The above proof can be adapted easily to show that if the set~$E$ is a sufficiently `thick' subset of the negative real axis, then \begin{equation}\label{mean-type} u(r)\ge cr^{1/2}, \qfor r>1, \end{equation} where~$c$ is a positive constant. For example, suppose that the sequences $(a_n)$ and $(b_n)$ satisfy $a_0=1$ and the recurrence relations \[ b_n=2a_n\quad \text{and} \quad a_{n+1}=b_n+b^p_n,\qfor n \ge 0, \] where $0<p<1$, and $u\in{\mathcal K}$ corresponds to the set $E=\bigcup_{n=0}^{\infty} [-b_n,-a_n]$. Then Lemma~\ref{Beur} gives, for $n\ge 1$, \begin{align*} u(b_n)&> \frac 12 \exp\left(\frac 12 \int_{E^*\cap(b_0,b_n)} \frac 1t\,dt\right)u(b_0)\\ &=\frac12 \prod_{j=1}^n\left(\frac{b_j}{a_j}\right)^{1/2} u(b_0)\\ &=\frac12 \prod_{j=1}^n\left(\frac{b_j}{b_{j-1}+b^p_{j-1}}\right)^{1/2} u(b_0)\\ &=\frac12 \left(\frac{b_n}{b_0}\right)^{1/2}\prod_{j=1}^n\left(\frac{1}{1+b^{p-1}_{j-1}}\right)^{1/2} u(b_0), \end{align*} from which \eqref{mean-type} follows, since $b_n \ge 2^n$ for $n\ge 0$ and $u(r)/r^{1/2}$ is decreasing for $r>0$.
\section{Proof of Theorem~\ref{exactorder}}\label{upper-estimate-proof} \setcounter{equation}{0}
In this section we give the proof of Theorem~\ref{exactorder}. First, however, we give a basic technical lemma that will be needed in the proof. Here we use the notation $\log^+ x=\max\{\log x,0\}$ for $x>0$.
\begin{lemma}\label{tech} Let $x_n>0$ for $n=1,2,\ldots$, and suppose that \[ \frac1n \sum_{j=1}^n x_j \to \infty \;\text{ as } n\to \infty. \] Then \[ \sum_{j=1}^n \log^+ x_j = o\left(\sum_{j=1}^n x_j\right) \;\text{ as } n\to \infty. \] \end{lemma} \begin{proof} For $x\ge X\ge e$, we have $(\log x)/x \le (\log X)/X$. Therefore, for $\eps>0$ and $X\ge e$, we have \begin{align*} \sum_{j=1}^n \log^+ x_j &= \sum_{x_j\le X} \log^+ x_j+ \sum_{x_j>X} \log x_j\\ &\le n\log X+ \frac{\log X}{X}\sum_{j=1}^n x_j. \end{align*} Hence, by further taking~$X$ so large that $(\log X)/X \le \frac12 \eps$ and then $n$ so large that $n\log X \le \frac12 \eps \sum_{j=1}^n x_j$, we obtain \[ \sum_{j=1}^n \log^+ x_j\le \eps \sum_{j=1}^n x_j, \] as required. \end{proof} In order to prove Theorem~\ref{exactorder}, given the results of Theorem~\ref{basic-props}, part~(b), we need only show that, under the given hypotheses, \begin{equation}\label{upper-estimates} \rho(u)\le \tfrac12\overline{\Lambda}(E^*)\quad \text{and} \quad \lambda(u)\le \tfrac12 \underline{\Lambda}(E^*). \end{equation} By Theorem~\ref{basic-props}, part~(a), we have \[ \rho(u)=\limsup_{r\to\infty}\frac{\log u(r)}{\log r}\quad \text{and} \quad \lambda(u)=\liminf_{r\to\infty}\frac{\log u(r)}{\log r}. \] Therefore, by Lemma~\ref{BearSmith} and the definition of the Harnack metric in~\eqref{Har-metric}, it is sufficient to show that \begin{equation}\label{order-hyp-estimates} \limsup_{r\to\infty}\frac{\rho_D(1,r)}{\log r} \le \tfrac12\overline{\Lambda}(E^*) \quad \text{and} \quad \liminf_{r\to\infty}\frac{\rho_D(1,r)}{\log r} \le \tfrac12\underline{\Lambda}(E^*), \end{equation} where $D={\mathbb C}\setminus E^*$ as usual.
In view of the remark after Lemma~\ref{Weit}, we have \begin{equation}\label{segment} \rho_D(1,r)= \int_1^r d\rho_D(t)\,dt. \end{equation} Therefore, to prove \eqref{order-hyp-estimates} we need to obtain good upper estimates for the hyperbolic density $d\rho_D(t)$, for $t>1$.
First, we have the basic hyperbolic density estimate \begin{equation}\label{hyp-est1} d\rho_D(t) \le d\rho_{{\mathbb C}\setminus (-\infty,0]}(t) = \frac{1}{2t}, \qfor t>0, \end{equation} which follows from the standard monotonicity property of the hyperbolic metric, using the fact that $D\subset {\mathbb C}\setminus (-\infty,0]$, together with an evaluation of $d\rho_{{\mathbb C}\setminus (-\infty,0]}(t)$, for $t>0$, by conformal mapping from the right half-plane to the cut plane.
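The evaluation just referred to is the following routine computation. The map $z=w^2$ is a conformal bijection of the right half-plane ${\mathbb H}$ onto ${\mathbb C}\setminus (-\infty,0]$, and hyperbolic densities transform by \[ d\rho_{{\mathbb C}\setminus (-\infty,0]}(w^2)\,\left|\frac{dz}{dw}\right| = d\rho_{{\mathbb H}}(w). \] For $t>0$ we take $w=\sqrt t$, so that $|dz/dw|=2\sqrt t$ and $d\rho_{{\mathbb H}}(\sqrt t)=1/\Re(\sqrt t)=1/\sqrt t$, giving \[ d\rho_{{\mathbb C}\setminus (-\infty,0]}(t)=\frac{1}{\sqrt t}\cdot\frac{1}{2\sqrt t}=\frac{1}{2t}, \] as in \eqref{hyp-est1}.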
Second, we have the Beardon--Pommerenke estimate in Lemma~\ref{BeardPomm}. To apply this it is convenient to assume, as we may by the monotonicity property of the hyperbolic metric, that $a_0=0$ and $b_0\ge 1$; that is, the first interval of $E$ contains $[-1,0]$. With this assumption on~$E$ the closest point of $\partial D$ to any positive number~$t$ is 0, so we can apply Lemma~\ref{BeardPomm} to obtain the estimate \begin{equation}\label{hyp-est2} d\rho_D(t) \le \frac{\pi/2}{t\beta_D(t)}, \qfor t>0, \end{equation} where \[
\beta_D(t)=\inf\{\left|\log (t/|b|)\right|:b\in \partial D\}, \qfor t>0. \] The Beardon--Pommerenke estimate is more effective than \eqref{hyp-est1} when~$t$ lies well inside an interval of the form $[b_n,a_{n+1}]$. Indeed, putting $s_n=\sqrt{b_na_{n+1}}$, $n\ge 0$, we deduce from the definition of $\beta_D$ that, for $n\ge 0$, \begin{equation}\label{hyp-est3} \beta_D(t) = \begin{cases} \log t/b_n, & b_n <t\le s_n,\\ \log a_{n+1}/t, & s_n\le t<a_{n+1}. \end{cases} \end{equation} To take advantage of this better estimate, we shall apply \eqref{hyp-est1} for values of~$t$ lying in intervals of the form \[ [a'_n,b'_n], \quad \text{where } a'_n=\tfrac12 a_n \; \text{and} \; b'_n=2b_n, \qfor n\ge 0, \] and the estimate \eqref{hyp-est2} in the complementary intervals.
Together with \eqref{segment}, this gives \begin{equation}\label{up-bound} \rho_D(1,r) \leq \frac12\int_{E'\cap (1,r)}\frac{dt}{t} + \frac{\pi}{2}\int_{(1,r) \setminus E'} \frac{dt}{t\beta_D(t)}, \end{equation} where \[ E' = \bigcup_{n=0}^{\infty}[a'_n,b'_n]. \] Observe that the complementary intervals of $E'$ are of the form $(b'_n,a'_{n+1})$ in the cases where $b'_n<a'_{n+1}$. Also, for such~$n$, we have $s_n\in (b'_n,a'_{n+1})$.
We now consider each of the two integrals in \eqref{up-bound} separately.
{\bf Claim 1} \[ \frac12\int_{E'\cap (1,r)}\frac{dt}{t} \leq \tfrac12\overline{\Lambda}(E^*)\log r \,(1 + o(1))\; \text{ as } r \to \infty. \] \begin{proof} We note that, for $a'_n \leq r \leq a'_{n+1}$, where $n \geq 0$, we have \[ \frac12\int_{E'\cap (1,r)}\frac{dt}{t} \leq \frac12\int_{E^*\cap (1,r)}\frac{dt}{t} + n \log 2. \] Claim~1 now follows from the fact that \[ \overline{\Lambda}(E^*)=\limsup_{r \to\infty}\frac{1}{\log r}\int_{E^*\cap(1,r)}\frac{dt}{t}, \] together with the fact that $n = o(\log r)$ as $r \to \infty$, since $r \geq a_n/2$ and $a_n^{1/n} \to \infty$ as $n \to \infty$, by \eqref{an-cond}. \end{proof}
Obtaining an upper bound for the second integral requires more work.
{\bf Claim 2} \[ \frac{\pi}{2}\int_{(1,r) \setminus E'} \frac{dt}{t\beta_D(t)}= o(\log r)\; \text{ as } r \to \infty. \] \begin{proof} We consider the case that $a'_n \leq r \leq a'_{n+1}$, where $n \geq 1$. It follows from~\eqref{hyp-est2} and~\eqref{hyp-est3} that \begin{align*} \frac{\pi}{2}\int_{(1,a'_n) \setminus E'} \frac{dt}{t\beta_D(t)} & =\frac{\pi}{2} \sum_{\substack{j=0\\ b'_j<a'_{j+1}}}^{n-1}\left( \int_{b'_j}^{s_j} \frac{dt}{t \log t/b_j} + \int_{s_j}^{a'_{j+1}} \frac{dt}{t \log a_{j+1}/t} \right) \\ & = \frac{\pi}{2} \sum_{\substack{j=0\\ b'_j<a'_{j+1}}}^{n-1}\left( \log \frac{\log s_j/b_j}{\log 2} + \log \frac{\log a_{j+1}/s_j}{\log 2} \right)\\ & = \pi \sum_{\substack{j=0\\ b'_j<a'_{j+1}}}^{n-1} \log \frac{\log a_{j+1}/s_j}{\log 2}\\ & < \pi \sum_{j=0}^{n-1} \log^+ \log a_{j+1}/a_j, \end{align*} since $s_j/b_j=a_{j+1}/s_j=a^{1/2}_{j+1}/b^{1/2}_j\le a^{1/2}_{j+1}/a^{1/2}_j$.
In view of condition \eqref{an-cond}, we can now apply Lemma~\ref{tech} with $x_j = \log a_{j+1}/a_j$ to give \[ \frac{\pi}{2}\int_{(1,a'_n) \setminus E'} \frac{dt}{t\beta_D(t)} = o\left( \sum_{j=1}^{n-1}\log a_{j+1}/a_j \right) = o(\log a_n) = o(\log r)\; \text{ as } r \to \infty. \] This proves Claim 2 in the case that $a'_n \leq r \leq b'_n$.
It remains to show that, if $b'_n \leq r \leq a'_{n+1}$, then \begin{equation}\label{end} \frac{\pi}{2}\int_{b'_n}^r \frac{dt}{t \beta_D(t)} = o(\log r)\; \text{ as } r \to \infty. \end{equation}
We split this into two cases. First, if $b'_n \leq r \leq s_n$, then it follows from~\eqref{hyp-est3} that \begin{equation}\label{case1} \frac{\pi}{2}\int_{b'_n}^r \frac{dt}{t \beta_D(t)} = \frac{\pi}{2}\int_{b'_n}^r \frac{dt}{t \log t/b_n} = \frac{\pi}{2} \log \frac{\log r/b_n}{\log 2}= o(\log r)\; \text{ as } r \to \infty. \end{equation} Second, if $s_n < r \leq a'_{n+1}$, then it follows from~\eqref{hyp-est3} together with~\eqref{case1} that \begin{align*} \frac{\pi}{2}\int_{b'_n}^r \frac{dt}{t \beta_D(t)} & \leq \frac{\pi}{2}\int_{b'_n}^{s_n} \frac{dt}{t \log t/b_n} + \frac{\pi}{2}\int_{s_n}^{a'_{n+1}} \frac{dt}{t \log a_{n+1}/t} \\ & = \frac{\pi}{2} \log \frac{\log s_n/b_n}{\log 2} + \frac{\pi}{2} \log \frac{\log a_{n+1}/s_n}{\log 2}\\ & = \pi \log \frac{\log s_n/b_n}{\log 2} < \pi \log \frac{\log r}{\log 2} = o(\log r)\; \text{ as } r \to \infty. \end{align*} Together with~\eqref{case1}, this shows that~\eqref{end} is true. This completes the proof of Claim 2. \end{proof}
It follows from Claim 1 and Claim 2 together with~\eqref{up-bound} that \[ \rho_D(1,r) \leq \tfrac12\overline{\Lambda}(E^*)\log r (1 + o(1))\; \text{ as } r \to \infty. \] The first estimate in~\eqref{order-hyp-estimates} now follows.
A similar but somewhat simpler argument can be used to prove the second statement in~\eqref{order-hyp-estimates}. First recall that \[ \underline{\Lambda}(E^*)=\liminf_{r \to\infty}\frac{1}{\log r}\int_{E^*\cap(1,r)}\frac{dt}{t}. \] It is easy to check that, for $n\ge 0$, \[ \min_{b_n\le r\le b_{n+1}}\frac{1}{\log r}\int_{E^*\cap(1,r)}\frac{dt}{t} \] occurs at $r=a_{n+1},$ so there must be a subsequence $a_{n_k}, k=1,2,\ldots,$ such that \[ \underline{\Lambda}(E^*)=\lim_{k \to\infty}\frac{1}{\log a_{n_k}}\int_{E^*\cap (1,a_{n_k})}\frac{dt}{t}.
\] Hence, by similar reasoning to that used to prove Claim~1 and by Claim~2 (in the special case when $r=a_{n_k}, k=1,2,\ldots$), we deduce that \begin{align*} \rho_D(1,a_{n_k}) &\le \frac12\int_{E^*\cap (1,a_{n_k})}\frac{dt}{t} + n_k\log 2+ \frac{\pi}{2}\int_{(1,a_{n_k}) \setminus E'} \frac{dt}{t\beta_D(t)}\\ &\le \tfrac12\underline{\Lambda}(E^*)\log a_{n_k}(1+o(1)) + o\left( \log a_{n_k} \right)\\ &\le \tfrac12\underline{\Lambda}(E^*)\log a_{n_k} (1 +o(1)) \;\text{ as } k \to \infty. \end{align*} The second estimate in~\eqref{order-hyp-estimates} now follows.
This completes the proof of Theorem~\ref{exactorder}.
\section{Proof of Example~\ref{extreme-ex}}\label{example-proof} \setcounter{equation}{0} Our example is a function $u\in{\mathcal K}$ where \[ E=\bigcup_{n\ge 1}[-b_n,-a_n], \quad\text{with } a_n=n, b_n=n+1/n,\; n=1,2, \ldots, \] normalised so that $u(1)=1$. It is straightforward to check that $\overline{\Lambda}(E^*)=0$ in this case.
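The claim $\overline{\Lambda}(E^*)=0$ can also be seen numerically: here $E^*=\bigcup_{n\ge1}[n,n+1/n]$, each interval contributes $\log(1+1/n^2)$ to $\int_{E^*\cap(1,r)}dt/t$, and this series converges. A rough sketch (a sanity check only; counting just the intervals wholly inside $(1,r)$, which affects the integral by $O(1/r)$):

```python
import math

def log_measure(r):
    """Integral of dt/t over E* ∩ (1, r), E* = union of [n, n + 1/n];
    only intervals wholly contained in (1, r) are counted."""
    total, n = 0.0, 1
    while n + 1.0 / n <= r:
        total += math.log1p(1.0 / n**2)   # = log((n + 1/n) / n)
        n += 1
    return total

# The ratio below tends to 0, consistent with upper log. density 0.
for r in (1e2, 1e4, 1e6):
    print(r, log_measure(r) / math.log(r))
```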
We shall show that, for this function, \begin{equation}\label{o(1)} u(-r)=o(1)\;\text{ as } r\to \infty. \end{equation} It follows, by Barry's theorem (see the end of Section~\ref{intro}), that~$u$ must have order~$1/2$ and then, by Lemma~\ref{Wiman}, that~$u$ cannot be of order~1/2, minimal type. Hence $\lim_{r\to\infty}u(r)/r^{1/2}>0$, by Theorem~\ref{basic-props}, part~(a); in particular,~$u$ has order and lower order~$1/2$.
To prove \eqref{o(1)}, we first apply Lemma~\ref{sodin} to the part of $E$ that lies in the square box of the form \[
Q_{z_r}(\tfrac12 r)=\{z:-2r <\Re z<-r,\;|\Im z|<\tfrac12 r\},\quad\text{where } z_r=-\tfrac32 r, r>1. \] From now on we assume that $r>4$. Then \[ \max\{n\in{\mathbb N}: -b_n\in Q_{z_r}(\tfrac12 r)\}\le 2r, \] so, for $r<a_n<b_n<2r$, \[ b_n-a_n = \frac1n \ge \frac{1}{2r}. \]
We shall apply Lemma~\ref{sodin} with $h=1$. In this case, for $|x-z_r| < \tfrac12r-h$, \[ E\cap Q_x(h)\text{ contains at least one interval of } E, \] so \[
{\rm cap}\left( \frac{1}{2h}(E\cap Q_x(h))\right)\ge \frac14\cdot \frac12\cdot \frac{1}{2r}=\frac{1}{16r}, \qfor |x-z_r| < \tfrac12 r-h, \]
since cap\,$(I) \ge \frac14 |I|$ for any interval $I$ on the real line; see~\cite[Corollary~9.10]{Pomm}, for example. Therefore, by Lemma~\ref{sodin}, with $h=1$ and $\delta=1/(16r)$, we have \begin{equation}\label{omega} \omega(z_r) \le \frac{C}{r}\log(16r), \qfor r>4, \end{equation} where $\omega$ denotes the harmonic measure in $Q_{z_r}(\tfrac12 r)\setminus E$ of $\partial Q_{z_r}(\frac12r)$ and $C>0$ is an absolute constant.
Now note that, by Theorem~\ref{basic-props}, part~(a), \[ \max\{u(z): z\in \partial Q_{z_r}(\tfrac12r)\}\le u(3r) \le (3r)^{1/2},\qfor r>0. \] It follows, by applying the maximum principle to~$u$ in $Q_{z_r}(\tfrac12 r)\setminus E$ and using \eqref{omega} that, for $r>4$, \begin{align*} u(z_r)&\le \max\{u(z): z\in \partial Q_{z_r}(\tfrac12r)\} \omega(z_r)\\ &\le \frac{C(3r)^{1/2}}{r}\log(16r)\\ &=o(1) \;\text{ as } r \to \infty, \end{align*} as required.
\section{Proof of Theorem~\ref{min-type}}\label{min-type-result} \setcounter{equation}{0} For the proof of Theorem~\ref{min-type}, we need the following result on positive harmonic functions defined in annuli; see \cite[Theorem~3.1]{BRS11} for a variation on Lemma~\ref{Har}.
\begin{lemma}\label{Har}
Let~$u$ be positive and harmonic in $\{z:r_1<|z|<r_2\}$ and suppose that $r_1<s_1\le s_2<r_2$. Then there is a positive constant $K$ depending only on $\mu:=\min\{\log(s_1/r_1), \log(r_2/s_2)\}$ such that \begin{equation}\label{annulus-est}
u(z')\le Ku(z), \qfor |z'|=|z|\in [s_1,s_2]. \end{equation}
\end{lemma} \begin{proof} The positive harmonic function $u(e^t)$ is defined in the infinite strip $S=\{t:\log r_1<\Re (t)<\log r_2\}$. We can apply Harnack's inequality to this function in any disc of radius~$\mu$ whose centre lies in the rectangle \[ R=\{t:\log s_1\le \Re (t)\le \log s_2,-\pi\le \Im (t)\le \pi\}\subset S, \] to deduce that there is a positive constant $K=K(\mu)$ such that \[ u(e^{t'})\le Ku(e^t),\qfor t,t'\in R,\; \Re (t')=\Re (t)\in [\log s_1, \log s_2], \] and this gives \eqref{annulus-est}. \end{proof}
Theorem~\ref{min-type} states that if the function $u\in {\mathcal K}$, with corresponding closed subset $E$ of the negative real axis, satisfies \[E^c\supset\bigcup_{n\ge 0}(-d_n,-c_n),\] where $0\le c_0<d_0<c_1<d_1< \cdots,$ and $\limsup_{n\to\infty} d_{n}/c_n>1$, then \[ \frac{u(r)}{r^{1/2}}\to 0\;\;\text{as }r\to\infty. \] We give two proofs, the first based on Lemma~\ref{Wiman} and the other a direct one using the Poisson integral formula.
\begin{proof}[First proof of Theorem~\ref{min-type}] By the hypotheses on $(c_n)$ and $(d_n)$, we can assume that there exists $d>1$ such that \begin{equation}\label{dncn} d_n/c_n \ge d, \qfor n\ge 0. \end{equation} We will apply Lemma~\ref{Har} with \[ r_1=c_n,\quad s_1=c_n^{3/4}d_n^{1/4},\quad s_2=c_n^{1/4}d_n^{3/4},\quad r_2= d_n. \] Then, by~\eqref{dncn}, \[ \frac{s_1}{r_1}=\frac{r_2}{s_2}=\left(\frac{d_n}{c_n}\right)^{1/4}\ge d^{\,1/4},\qfor n\ge 0. \] We deduce that there is a constant $K=K(d)>0$ such that \begin{equation}\label{u(s)-est} u(r) \le Ku(-r),\qfor c_n^{3/4}d_n^{1/4}\le r \le c_n^{1/4}d_n^{3/4},\;n\ge 0, \end{equation} and, in particular, $A(r)=u(-r), r>0,$ is unbounded, by Theorem~\ref{basic-props}, part~(a).
The fact that $u(r)=B(r) = o(r^{1/2})$ as $r\to\infty$ now follows immediately from Lemma~\ref{Wiman} and Theorem~\ref{basic-props}, part~(a), in the case that $\rho(u)=1/2$ and is trivial if $\rho(u)<1/2$. \end{proof}
\begin{proof}[Second proof of Theorem~\ref{min-type}] The alternative direct argument uses the estimate \begin{equation}\label{u(r)-est} u(r) \ge \frac{r^{1/2}}{\pi}\int_0^{\infty} \frac{u(-s)}{s^{1/2} (s+r)}\,ds \ge \sum_{n= 0}^{\infty} \frac{r^{1/2}}{\pi}\int_{c_{n}}^{d_n}\frac{u(-s)}{s^{1/2} (s+r)}\,ds, \end{equation} which follows from the convergence of the integral in \eqref{Poisson-half} after the change of variable $z\mapsto \sqrt z$; note that $u\in{\mathcal K}$ is symmetric with respect to the real axis.
Now let $\alpha_n=c_n^{3/4}d_n^{1/4}$ and $\beta_n=c_n^{1/4}d_n^{3/4}$. We deduce from \eqref{u(r)-est} with $r=1$, together with \eqref{dncn}, \eqref{u(s)-est} and the fact that $u(r)/r^{1/2}$ is decreasing (by Theorem~\ref{basic-props}, part~(a)) that \begin{align} \pi u(1)&>\sum_{n\ge 0} \int_{c_n}^{d_{n}} \frac{u(-s)}{2s^{3/2}}\,ds \ge \frac1K\sum_{n\ge 0} \int_{\alpha_n}^{\beta_n} \frac{u(s)}{2s^{3/2}}\,ds\notag\\ & \ge \frac{1}{2K}\sum _{n\ge 0} \frac{u(\beta_n)}{\beta_n^{1/2}}\log \left(\frac{\beta_n}{\alpha_n}\right)\ge \frac{\log d^{\,1/2}}{2K} \sum _{n\ge 0} \frac{u(\beta_n)}{\beta_n^{1/2}}.\notag \end{align} Hence $u(\beta_n)/\beta_n^{1/2}\to 0$ as $n\to \infty$, so $u(r)/r^{1/2}\to 0$ as $r\to \infty$, as required. \end{proof}
{\it Remarks}\; 1. The example in the remark following the proof of Theorem~\ref{basic-props} and also Example~\ref{extreme-ex} show that we cannot hope to significantly weaken the condition $\limsup_{n\to\infty} d_{n}/c_n>1$ in Theorem~\ref{min-type}.
2. Theorem~\ref{min-type} can be used to show that the functions $u_{\alpha,\beta}$ considered by Kjellberg have order strictly less than~1/2. Indeed, for these functions the gaps between the intervals in the set~$E$ satisfy the hypotheses of Theorem~\ref{min-type}, so we certainly have $u_{\alpha,\beta}(r)=o(r^{1/2})$ as $r\to\infty$.
However, in view of the fact that the set~$E$ is invariant under scaling by $z\mapsto \beta z$ and the uniqueness property of functions $u\in {\mathcal K}$, we have $u_{\alpha,\beta}(\beta z)=Cu_{\alpha,\beta}(z)$ for all $z\in {\mathbb C}$, where $C=C(\alpha,\beta)>0$, and this property is incompatible with $u_{\alpha,\beta}$ having order~1/2, minimal type.
\section{Proofs of Theorem~\ref{discretise} and Corollary~\ref{entire-density}}\label{Kjellbergproof} \setcounter{equation}{0} The proofs of Theorem~\ref{discretise} and Corollary~\ref{entire-density} follow the structure of the reasoning in \cite[Chapter~4]{bK48} but require significant additional arguments due to the much greater generality of the sets~$E$ considered here. \begin{proof}[Proof of Theorem~\ref{discretise}] Once again, we express the subharmonic function $u\in{\mathcal K}$ as a potential of the form \[
u(z)=u(0)+\int_0^{\infty} \log|1+z/t|\,d\mu(t), \]
where $\mu(t)$ denotes the total Riesz measure of the subharmonic function~$u$ in the disc $\{z:|z| \le t\}$; this measure is clearly supported in the set~$E$. It is well known (see \cite[Chapter~3]{HK76}) that \begin{equation}\label{mu-u} \mu(r)=rI'(r),\quad\text{where } I(r)= \frac{1}{2\pi}\int_0^{2\pi} u(re^{i\theta})\,d\theta,\qfor r\ge 0. \end{equation}
As noted earlier, the idea of Kjellberg's method is to discretise the measure $\mu$ and use the resulting discrete measure to construct a subharmonic function~$u_1$ which is close to~$u$ in much of the plane, and for which $u_1(z)=\log |f(z)|$, where~$f$ is an entire function.
Indeed, writing \[
u_1(z)=u(0)+ \int_0^{\infty} \log|1+z/t|\,d[\mu(t)] = u(0)+\sum_{n=1}^{\infty} \log |1+z/x_n|, \]
where the sequence $(x_n)$ is positive and increasing, and $-x_n\in E$, we see that $u_1(z)=\log |f(z)|$, where $f$ is the entire function \[ f(z)=C\prod_{n=1}^{\infty}(1+z/x_n),\quad C=e^{u(0)}. \] Now recall from the statement of Theorem~\ref{discretise} that \[ D_1={\mathbb C}\setminus \{z: \text{{\rm dist}}(z,E)\le 1\}. \] We shall show that there exists a positive constant $R_0=R_0(u)$ such that \begin{equation}\label{u-u1}
|u(z)-u_1(z)|\le 4\log|z|, \qfor z\in D_1, |z|\ge R_0, \end{equation} which gives \eqref{R-est}.
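Kjellberg's discretisation $d[\mu(t)]$ can be sketched in code: one unit mass is placed at the point $x_n$ where the mass function $\mu$ first reaches the integer $n$, and $u_1$ is the corresponding logarithmic potential. This is an illustration only; the function names and the toy mass function are ours.

```python
import numpy as np

def discretise(t_grid, mu_grid):
    """Place one unit mass at each point x_n where the increasing mass
    function mu(t) (sampled on a grid) first reaches the integer n."""
    xs, n = [], 1
    for t, m in zip(t_grid, mu_grid):
        while m >= n:
            xs.append(t)
            n += 1
    return np.array(xs)

def u1(z, xs, u0=0.0):
    """u_1(z) = u(0) + sum_n log|1 + z/x_n|, i.e. log|f(z)| for the
    canonical product f(z) = e^{u(0)} prod_n (1 + z/x_n)."""
    return u0 + np.sum(np.log(np.abs(1.0 + z / xs)))

# Toy mass function mu(t) = t on [0, 10]: the unit masses land near the
# integers x_n = n, n = 1, ..., 10.
t = np.linspace(0.0, 10.0, 10001)
xs = discretise(t, t)
print(len(xs), xs[0])
```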
To do this, we write \begin{align}
u(z)-u_1(z)&=\int_0^{\infty} \log|1+z/t|\,d(\mu(t)-[\mu(t)])\notag\\
&= \left(\int_0^{x_1} + \int_{x_1}^{\infty}\right)\log|1+z/t|\,d(\mu(t)-[\mu(t)])\notag\\ &= I_1(z)+I_2(z), \end{align} say, and estimate $I_1(z)$ and $I_2(z)$ in turn, for $z\in D_1$.
Since $\mu(x_1)=1$, we have \begin{align}\label{I1-est1}
I_1(z)&=\int_0^{x_1} \log|1+z/t|\,d\mu(t)-\log|1+z/x_1|\notag\\
&=\int_0^{x_1} \log|t+z|\,d\mu(t)-\int_0^{x_1} \log t\,d\mu(t)-\log|1+z/x_1|, \end{align} provided that the two integrals in the latter expression are convergent. This is the first point at which our proof requires different reasoning from that in \cite{bK48}.
To prove this convergence, we use the fact that $I(r)$, defined in \eqref{mu-u}, is a non-negative increasing convex function of $\log r$, that is, $\phi(s)=I(e^s)=I(r), s\in{\mathbb R},$ is a non-negative increasing convex function. Hence $rI'(r)\log r=s\phi'(s)\to 0$ as $s\to -\infty$, by a simple argument. Therefore, by \eqref{mu-u}, we deduce that the integral \begin{align*} \int_0^{x_1} \log t\,d\mu(t)&=[tI'(t)\log t]^{x_1}_0-\int_0^{x_1}I'(t)\, dt\\ &= x_1I'(x_1)\log x_1-I(x_1)+I(0) \end{align*} is convergent.
Since $|z+t|>1$ for $z\in D_1$ and $-t\in E$, we deduce that, for $z\in D_1$, \begin{equation}\label{I1-est2}
0<\int_0^{x_1} \log|t+z|\,d\mu(t)\le \log(x_1+|z|)\int_0^{x_1}\,d\mu(t) = \log(x_1+|z|). \end{equation} Hence, by \eqref{I1-est1} and \eqref{I1-est2}, there exists $R_1>0$ and $C_1>0$ such that \begin{equation}\label{I1-est3}
\left|I_1(z)\right| \le C_1, \qfor z\in D_1, |z|\ge R_1. \end{equation}
Next we estimate $|I_2(z)|$ for $z\in D_1$, and for this second integral we need to give considerably more detail than was given in \cite{bK48}. Integrating by parts, we obtain \begin{align}\label{int-by-parts}
I_2(z)&=\int_{x_1}^{\infty} \log|1+z/t|\,d(\mu(t)-[\mu(t)])\notag\\
&=-\int_{x_1}^{\infty}(\mu(t)-[\mu(t)])\,d\log|1+z/t|, \end{align} since $\mu(x_1)=[\mu(x_1)]$, so \begin{equation}\label{I2-est1}
|I_2(z)|\le\int_{x_1}^{\infty}\left|d\log|1+z/t|\right|.
\end{equation}
Now, for $z$ in the right half-plane, the function $t\mapsto \log|1+z/t|$, $t>0$, is decreasing, so \begin{equation}\label{I2-est2}
\int_{x_1}^{\infty}\left|d\log|1+z/t|\right|=-\int_{x_1}^{\infty}d\log|1+z/t|=\log|1+z/x_1|. \end{equation} Now we assume that $z=x+iy$ lies in the left half-plane, and observe that \begin{equation}\label{inc-dec}
t\mapsto \log|1+z/t| \text{ is } \begin{cases}
\text{decreasing for } 0< t\le |z|^2/|x|,\\
\text{increasing for } t\ge |z|^2/|x|; \end{cases} \end{equation}
the change from decreasing to increasing occurs at the value $t=|z|^2/|x|$ where $1+z/t$ is orthogonal to~$z$. Hence, for~$z$ in the left half-plane with $|z|\ge x_1$, \begin{align*}
\int_{x_1}^{\infty}\left|d\log|1+z/t|\right|&=-\int_{x_1}^{|z|^2/|x|}d\log|1+z/t|+\int^{\infty}_{|z|^2/|x|}d\log|1+z/t|\\
&=\log|1+z/x_1|-2\log\left|1+z|x|/|z|^2\right|\\
&=\log|1+z/x_1|-2\log|y|/|z|. \end{align*}
It follows from this estimate, and \eqref{I2-est1} and \eqref{I2-est2}, that if $z\in D_1\setminus \{z=x+iy:x< 0,|y|< 1/2\}$ and $ |z|\ge x_1$, then \begin{equation}\label{I2-est3}
|I_2(z)|\le \left|\log|1+z/x_1|\right|+2\log |z|+2\log 2. \end{equation}
Finally, suppose that $z\in D_1\cap \{z=x+iy:x < -b_0, |y|< 1/2\}$. Then $z$ lies in a set of the form \[
\Omega_n=\{z:|z+b_n|>1,|z+a_{n+1}|>1, |y|<1/2\},\;\text{ where } n\ge 0, \] and \eqref{int-by-parts} can be written as \[
I_2(z)=-\int_{x_1}^{b_n}(\mu(t)-[\mu(t)])\,d\log|1+z/t|-\int_{a_{n+1}}^{\infty}(\mu(t)-[\mu(t)])\,d\log|1+z/t|, \] since the measure $\mu$ is supported in $E$.
Also, for $n$ sufficiently large and $z\in \Omega_n$, we have $b_n<|z|^2/|x|<a_{n+1}$, so in this case, \begin{align}\label{I2-est4}
|I_2(z)|&\le\int_{x_1}^{b_n}\left|d\log|1+z/t|\right|+\int_{a_{n+1}}^{\infty}\left|d\log|1+z/t|\right|\notag\\
&=\log|1+z/x_1|-\log\left|1+z/b_n\right|-\log\left|1+z/a_{n+1}\right|, \end{align} in view of \eqref{inc-dec}.
Next we observe that, if $z\in \Omega_n$, for $n\ge 0$, then \begin{equation}\label{z+bn}
|1+z/b_n|\ge 1/|z|\quad \text{and} \quad |1+z/a_{n+1}| \ge 1/(2|z|). \end{equation}
The first estimate is immediate since $|z|\ge b_n$ and $|z+b_n|>1$ for $z\in \Omega_n$. The second estimate holds because $|z||a_{n+1}+z| \ge |x|(a_{n+1}-|x|)>a_{n+1}/2$, for $z=x+iy \in \Omega_n$, by an elementary calculation.
Therefore, by \eqref{I2-est4} and \eqref{z+bn}, we deduce that \eqref{I2-est3} holds for all values of $z\in D_1$ as long as $|z|$ is large enough.
Combining \eqref{I1-est3} and \eqref{I2-est3}, we deduce that there exist $R_2>0$ and $C_2>0$ such that \begin{equation}\label{u-u1-est}
|u(z)-u_1(z)|\le 3\log |z|+C_2,\qfor z\in D_1, |z|\ge R_2, \end{equation} which gives \eqref{u-u1}. This proves the first part of Theorem~\ref{discretise}.
To prove the second part, we first claim that if \eqref{d-cond} holds, that is, $b_n/a_n \ge d>1$ for $n\ge 0$, then there exists $C_3>0$ such that \begin{equation}\label{B-est} u(z)\le C_3,\;\; \text{whenever }\text{{\rm dist}}(z,E)\le 1. \end{equation} We use Lemma~\ref{Beur} again. First, let~$n$ be so large that $a_n(1-1/\sqrt d)>1$, and consider a general point $-s\in [-b_n,-a_n]$. We have $s(1-1/\sqrt d)>1$ and $u=0$ on at least one of the intervals $[-s\sqrt d,-s]$ or $[-s, -s/\sqrt d\,]$. Thus, if we apply Lemma~\ref{Beur} with the origin moved to $-s$ and the radii $r_1=1$, $r_2=s-s/\sqrt d>1$, then we obtain \[
\max_{|z+s|\le s-s/\sqrt d}u(z)\ge \frac12\left(\frac{r_2}{r_1}\right)^{1/2}\max_{|z+s|\le 1}u(z) = \frac12s^{1/2}\left(1-1/\sqrt d\right)^{1/2}\max_{|z+s|\le 1}u(z). \] Also, by Theorem~\ref{basic-props}, part~(a), \[
u(z) \le |z|^{1/2}u(1)\le \left(s\sqrt d\right)^{1/2}u(1),\qfor |z| \le s\sqrt d, \]
so the claim \eqref{B-est} follows, since $\{z:|z+s|\le s-s/\sqrt d\,\}\subset \{z:|z|\le s\sqrt d\,\}$.
Now let $v(z)=u_1(z)-(3\log|z|+C_2)$. By \eqref{u-u1-est} and \eqref{B-est}, we deduce that \[ v(z)\le u(z) \le C_3, \] when $z$ lies on the boundary of any complementary component of $D_1$. Since~$v$ is subharmonic in ${\mathbb C}\setminus \{0\}$, we deduce by the maximum principle that $v(z)\le C_3$ in each complementary component of $D_1$, except possibly one that contains 0, and hence there exists $R=R(u)>0$ such that \begin{equation}\label{fu2}
\log|f(z)|=u_1(z) \le u(z)+4\log |z|, \qfor |z|\ge R. \end{equation} This completes the proof of Theorem~\ref{discretise}. \end{proof}
\begin{proof}[Proof of Corollary~\ref{entire-density}] We shall assume that the required order $\rho$ lies in $(0,1/2)$ and consider the function $u\in{\mathcal K}$ with $E=\cup_{n=0}^{\infty}[-b_n,-a_n]$, where \[ b_n=\exp(n^2/(4\rho)) \quad \text{and}\quad a_n=b_n e^{-n}, \qfor n\ge 0, \] and also $u(0)=1$. The proof for the case $\rho=0$ is similar, but with $b_n=\exp(n^3)$.
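As a sanity check on this construction (not part of the proof), the logarithmic density of $E^*$ can be evaluated along $r=b_N$: the $n$-th interval contributes $\log(b_n/a_n)=n$ to $\int_{E^*\cap(1,r)}dt/t$, and $\log b_N=N^2/(4\rho)$, so overflow is avoided by working with these logarithms directly. A minimal sketch with the illustrative choice $\rho=1/4$:

```python
rho = 0.25  # target order, assumed in (0, 1/2)

def density_at(N):
    """(1/log b_N) * integral of dt/t over E* ∩ (1, b_N), for the
    construction a_n = b_n e^{-n}, b_n = exp(n^2/(4 rho)); the n-th
    interval contributes log(b_n/a_n) = n, and log b_N = N^2/(4 rho)."""
    return sum(range(N + 1)) / (N**2 / (4 * rho))

# Tends to 2*rho = 0.5 as N grows.
for N in (10, 100, 1000):
    print(N, density_at(N))
```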
Then $a_n^{1/n} \to \infty$ as $n\to \infty$ and it is easy to check that the logarithmic density of~$E$ is $2\rho$, so the order and lower order of~$u$ are both~$\rho$, by Theorem~\ref{exactorder}. Also, \[ b_n/a_n\to \infty \;\text{ as } n\to \infty, \] so we can apply Theorem~\ref{discretise} to obtain an entire function of the form \[ f(z)=\prod_{n=1}^{\infty}(1+z/x_n), \] where the sequence $(x_n)$ is positive and increasing, and $-x_n\in E$, such that~$f$ has order and lower order $\rho$ and \[
\log|f(z)|\le u(z)+4\log |z|, \qfor |z|\ge R, \] where $R=R(u)>0$. If we then redefine~$f$ to be \[ f(z)=\prod_{n=6}^{\infty}(1+z/x_n), \] then~$f$ again has order~$\rho$ and \[
\log|f(z)|\le u(z), \qfor |z|\ge R, \] for some possibly larger $R$.
Since $u=0$ on the set $E$, we deduce that $\{r: A(r,\log|f|)>0\}$ is a subset of $(E^*)^c \cup (0,R]$, and so has lower logarithmic density at most $1-2\rho$, and hence exactly $1-2\rho$, by Barry's theorem \eqref{Barry-order}, applied with $\alpha=1/2$.
The set $\{r: A(r,\log|f|)>0\}$ has upper logarithmic density at most $1-2\rho$ also. Hence it has upper logarithmic density exactly $1-2\rho$ by another theorem of Barry \cite{Ba64}, applied with $\alpha=1/2$ again, which states that if~$u$ is a non-constant subharmonic function of lower order $\lambda\in [0,1)$ and $\lambda<\alpha<1$, then \begin{equation}\label{Barry-lowerorder} \overline{\Lambda}(\{r: A(r,u)>\cos (\pi\alpha) B(r,u)\}) \ge 1-\lambda/\alpha. \end{equation} This completes the proof of Corollary~\ref{entire-density}. \end{proof}
\end{document}
\begin{document}
\title{Photon statistics characterization of a single photon source}
\author{R.~All\'eaume\dag, F.~Treussart\dag\footnote[3]{To whom correspondence should be addressed.}, J.-M.~Courty\ddag \ and J.-F.~Roch\dag}
\address{\dag Laboratoire de Photonique Quantique et Mol\'eculaire\footnote[4]{Laboratoire du CNRS, UMR 8537, associ\'e \`a l'\'Ecole Normale Sup\'erieure de Cachan}, ENS Cachan, 61 avenue du pr\'esident Wilson, 94235 Cachan cedex, France}
\address{\ddag Laboratoire Kastler Brossel\footnote[5]{UMR 8552, Unit\'e mixte de recherche de l'\'Ecole Normale Sup\'erieure, du CNRS, et de l'Universit\'e Pierre et Marie Curie}, UPMC case 74, 4 place Jussieu, 75252 Paris cedex 05, France}
\ead{francois.treussart@physique.ens-cachan.fr}
\begin{abstract} In a recent experiment, we reported the time-domain intensity noise measurement of a single photon source relying on single molecule fluorescence control. In this article we present the data processing, starting from the photocount timestamps. The theoretical analytical expression of the time-dependent Mandel parameter $Q(T)$ of an intermittent single photon source is derived from its ON$\leftrightarrow$OFF dynamics. Finally, the source intensity noise analysis using the Mandel parameter is quantitatively compared to the usual approach relying on the time autocorrelation function, both methods yielding the same molecular dynamical parameters. \end{abstract}
\pacs{ {33.50.-j},{42.50.Dv},{03.67.Dd} }
\submitto{\NJP}
\maketitle
\section{Introduction} \label{intro} Optical experiments at the level of single quantum emitters allow one to produce specific quantum states of light with photon statistics that deviate strongly from classical distributions~\cite{Kimble_77,Diedrich_PRL87}. Despite the experimental challenges of producing single photon states~\cite{Yamamoto_PRL94, Yamamoto_Nature99}, recent developments of quantum information theory have intensified interest in single photon sources. Realization of an efficient single-photon source (SPS) is, for instance, a key problem in quantum cryptography, and such a source could more generally be applied to quantum information processing \cite{Knill_NAT}.
Recent experiments reported quantum key distribution (QKD) with polarisation encoding on single photons~\cite{Alexios_PRL02,Waks_NAT02}. They revealed the potential gain of such sources over systems relying on strongly attenuated laser pulses. However, in these experiments, the actual performance of QKD is intrinsically linked to the photon statistics of the single photon source~\cite{Lutkenhaus_PRA}.
Following the proposal of De Martini et al.~\cite{DeMartini96,Rosa_PRA00}, we recently realized a SPS based upon pulsed excitation of a single molecule~\cite{FMT_PRL02}. Among various experimental realizations of single photon sources~\cite{Brunel_PRL99,Lounis00,Michler_Science_00,Alexios_EPJD,Santori_PRL_01,Moreau_APL_01,Yuan_02}, a molecular-based SPS presents several advantages. First, it can be driven at room temperature with a relatively simple setup, which achieves a global efficiency exceeding 5~$\%$ for single photon production and detection. Secondly, since the molecular fluorescence lifetime is a few nanoseconds, a high repetition rate can potentially be used. Finally, the background-emitted photon intensity level is extremely low for carefully prepared samples.
At the single pulse timescale, the figure of merit of a SPS can be characterized by the efficiency of delivering triggered photons to the target and by the ratio of single photon to multiphoton pulses~\cite{Rosa_PRA00}. In reference~\cite{FMT_PRL02}, we extended this analysis to the measurement of SPS noise properties over a wide range of integration timescales. In the detection scheme, complete statistical information is extracted from the ``photocount by photocount'' record. We showed that the measured photon statistics strongly deviates from the Poisson law, therefore clearly exhibiting non-classical features.
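The statistics extracted from such a photocount-by-photocount record are conveniently summarised by the Mandel parameter $Q(T)=(\langle \Delta N^2\rangle-\langle N\rangle)/\langle N\rangle$ over counting windows of duration $T$. A minimal sketch of this estimator (our own helper, run here on simulated timestamps; the analysis in the paper works from the recorded timestamps):

```python
import numpy as np

def mandel_Q(timestamps, T):
    """Estimate Q(T) = (Var N - <N>)/<N> by slicing a sorted timestamp
    record into consecutive counting windows of duration T."""
    t = np.asarray(timestamps, dtype=float)
    n_bins = int((t[-1] - t[0]) // T)
    counts, _ = np.histogram(t, bins=n_bins, range=(t[0], t[0] + n_bins * T))
    mean = counts.mean()
    return counts.var() / mean - 1.0

rng = np.random.default_rng(0)
# Poissonian arrivals: Q(T) fluctuates around 0.
poisson_stamps = np.cumsum(rng.exponential(1.0, size=200_000))
# Perfectly regular pulse train: strongly sub-Poissonian, Q(T) near -1.
regular_stamps = np.arange(0.0, 10_000.0, 1.0)
print(mandel_Q(poisson_stamps, 50.0), mandel_Q(regular_stamps, 50.0))
```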
In this article, we detail the steps of this work, from realization of a molecular-based SPS to extensive statistical analysis of detected photons.
\section{Single photon emission from a single molecule} \label{SPS} \subsection{Principle of the experiment} \label{principle} As fluorescence light of a 4-level single emitter is antibunched for timescale on the order of the excited state radiative lifetime~\cite{Alexios_EPJD,Kurtsiefer_00,Fleury_PRL00,FMT_OL01}, such systems can simply produce single photons on demand~\cite{DeMartini96,Rosa_PRA00,Lounis00}. As summarized in figure~\ref{mol_levels}, the molecule is pumped into a vibrational excited state by a short excitation pulse. It then quickly decays to the first electronic excited state by a non-radiative process~\cite{Atkins_book}. The emission of a single fluorescence photon then coincides with radiative de-excitation toward the ground state vibrational multiplicity, followed once again by fast non-radiative decay (figure~\ref{mol_levels}). To emit more than one photon at a time, a molecule has to undergo a full excitation, emission and reexcitation cycle within the same excitation pulse. The probability of this occurrence is extremely small when the pulse duration is much shorter than the excited state lifetime~\cite{DeMartini96,Rosa_PRA00}. Following the theoretical analysis of reference~\cite{Rosa_PRA00}, we
chose a pulse duration of $\simeq150$~fs, which makes this probability less than $5 \times 10^{-5}$. This value is negligible in comparison to the one associated with parasitic light, such as residual fluorescence from the molecular host matrix. To obtain one fluorescence photon per excitation pulse, the repetition period must also be much longer than the excited state lifetime, so as to ensure relaxation into the ground state before the next excitation pulse is applied.
\begin{figure}
\caption{(a) Single-photon generation by pulsed excitation of a single 4-level molecular system from the ground singlet state $S_{0}$ to a vibrationally excited sublevel of the singlet state $S_{1}$. Solid arrows correspond to optical transitions, whereas dashed arrows depict fast (ps) non-radiative de-excitation. In order to emit a single photon per excitation pulse, the pulse duration $\delta t$ must be much shorter than the radiative lifetime $1/\Gamma$. (b) Absorption and fluorescence spectra of DiIC$_{18}$(3) dye embedded in a thin polymer film, measured respectively with an absorption spectrometer and a spectrofluorimeter at 514~nm excitation wavelength. Note that in the SPS experiment the 532~nm excitation wavelength is well separated from the dye's fluorescence emission, which is centered around 570~nm.}
\label{mol_levels}
\end{figure}
\subsection{Experimental setup} \begin{figure}
\caption{Experimental setup for characterization of the triggered SPS. Single molecules are excited by a frequency-doubled femtosecond Ti:Sa laser at 532~nm. The laser is followed by PP: pulse picker; C: LiIO$_3$ nonlinear $\chi^{(2)}$ crystal; EO: ADP (NH$_4$H$_2$PO$_4$) electro-optic cell; P: linear polarizer; PZT: piezoelectric translation stage; Obj: oil immersion microscope objective ($\times 60$, NA=1.4); DM: dichroic mirror; PH: pinhole for confocal detection (30~$\mu$m diameter); NF: notch filter centered at 532~nm; BS: non-polarizing beamsplitter; SPAD: single-photon silicon avalanche photodiode; TAC: time-to-amplitude converter; MA: multichannel analyser; TIA: time interval analyser (GuideTech, Model GT653) and PC: computer.}
\label{exp_setup}
\end{figure}
We use standard confocal microscopy techniques to perform selective excitation and detection of single-molecule light emission at room temperature~\cite{FMT_OL01}. This setup allows one to readily achieve two features required for the observation of non-classical photon statistics, namely a good collection efficiency for the emitted photons and a high rejection of the optical background noise.
The laser source, used for fluorescence excitation, is a femtosecond tunable Titanium:Sapphire (Ti:Sa) laser, frequency doubled by single-pass propagation through a LiIO$_3$ nonlinear crystal. The initial repetition rate of 82~MHz is reduced by a pulse-picker to 2.05~MHz (pulsed excitation repetition period $\tau_{\rm rep}=488$~ns), so as not to exceed the maximum counting rate of the electronics.
The excitation light, centered at 532~nm, is reflected by a dichroic mirror into an inverted microscope. It is focused on the sample with an oil-immersion high-numerical-aperture objective, leading to a spot diameter of $\simeq350$~nm FWHM. The fluorescence light, redshifted with respect to the excitation, is collected by the same objective, then transmitted through the dichroic mirror, and finally focused onto a pinhole for the confocal configuration. After recollimation, residual excitation light is removed by a holographic notch filter.
The samples used in our experiment consist of cyanine molecules DiIC$_{18}$(3). This dye was chosen for its fluorescence efficiency and photostability, with an emission spectrum well suited for detection by silicon avalanche photodiodes (see figure~\ref{mol_levels}(b)). The dye molecules are embedded in a thin layer of PMMA deposited on a microscope coverslip by spin coating. The emitters are randomly distributed within the PMMA layer (thickness $\simeq30$~nm) at an approximate surface density of one molecule per 10~$\mu{\rm m}^{2}$.
To ensure localization of a single emitter in the detection volume, we use a standard Hanbury Brown and Twiss setup \cite{Loudon}. It consists of two single-photon-counting avalanche photodiodes (SPADs) placed on the two output ports of a 50/50 nonpolarizing beamsplitter. A Start--Stop technique with a time-to-amplitude converter allows us to build a coincidence histogram as a function of the time delay between two consecutive photodetections on each side of the beamsplitter. Following the textbook experiment of P.~Grangier and A.~Aspect on quantum properties of single photon states \cite{Grangier_EPL86}, the absence of coincidences at zero delay gives clear evidence of single photon emission \cite{Brunel_PRL99}.
We hence apply a simple three-step procedure, as explained in reference~\cite{FMT_PRL02}. We first raster-scan the sample at low energy per pulse ($\simeq0.5$~pJ) so as to map the fluorescence intensity and locate efficient emitters. We then park the excitation beam on a given emitter and measure the autocorrelation function at low excitation energy, which determines whether a single fluorophore or an aggregate of several molecules is excited. Once such preliminary identification has been achieved, the single molecule is excited at much higher power so as to ensure saturated emission~\cite{FMT_PRL02}.
\subsection{Data acquisition} \label{data_acquisition} Once a single emitter is located, we switch the detection procedure from the Start--Stop method to a complete recording of photon arrival times. The properly normalized interval function $c(\tau)$ measured by the Start--Stop technique corresponds to the intensity autocorrelation function $g^{(2)}(\tau)$ in the limit of short timescales and low detection efficiency~\cite{Reynaud_these_etat}. However, to characterize the statistical properties of the photon stream more completely, one needs to test for correlations on timescales much longer than the excitation repetition period $\tau_{\rm rep}$. In that case, the relationship between $g^{(2)}(\tau)$ and $c(\tau)$ becomes more complicated~\cite{Reynaud_these_etat,Fleury_PRL00}. Instead of solely inferring $g^{(2)}(\tau)$ from $c(\tau)$ measurements, we have chosen to keep track of the full range of dynamics by recording every photodetection time with a Time Interval Analyser (TIA) computer board. From this set of photocount arrival times (which we call timestamps), the detected-photon statistics can then be analyzed directly over a wide range of timescales. Such a procedure avoids any mathematical bias in the photon statistics analysis.
The total number of fluorescence photons that can be produced by a single molecule at room temperature is limited by its photostability \cite{Eggeling98}. Under weak CW excitation, a molecule of DiIC$_{18}$(3) typically undergoes $10^{6}$ excitation cycles before irreversible photobleaching occurs \cite{Veerman_PRL99}. In our experiment, the energy of the excitation pulses is progressively ramped up to a maximum value of 5.6~pJ that ensures saturation of the $S_{0} \rightarrow S_{1}$ transition \cite{FMT_PRL02}. This energy ramp, realized using an electro-optic modulator, consists of a 50~ms linear rise, followed by a plateau lasting 300~ms and a linear decrease (figure~\ref{ramp+int}). We experimentally found that applying such a procedure substantially improves molecular photostability, compared to an abrupt excitation. We then select the timestamps of photocounts that occurred during the plateau of the excitation. These events correspond to saturated emission of our molecular SPS. Our ``photocount by photocount'' analysis then relies on determining the number of detected photons in gated windows synchronized on the Ti:Sa excitation pulses.
\begin{figure}
\caption{Black solid line (right scale): laser pulse energy $E_{p}$ vs. time during single molecule excitation. Maximum laser pulse energy $E_{p}^{\rm max}$ of 5.6~pJ saturates the molecular transition. Gray solid line (left scale): number of fluorescent photons detected during 50~$\mu$s integration duration. Photobleaching of the dye occurs 162~ms after excitation at the maximum energy per pulse begins, as delimited by two vertical dashed lines.}
\label{ramp+int}
\end{figure}
The data are first pre-processed over a discrete time grid. The excitation time base is reconstructed from the ensemble of timestamps $\lbrace t_i \rbrace$ by applying a time filtering procedure described in \ref{analysistechnique}. The gate duration is 30~ns, more than ten times the typical radiative lifetime of the molecule in the PMMA layer. All records outside the time gates are rejected, slightly improving the signal-to-background ratio. Each timestamp $t_i$ is then attributed to a pulse $p_i$ in the time grid, and to each excitation pulse $p$, a number $n_p^{({\rm d})} = 0,1,2$ of detected photons is finally associated. The probability distribution of the number of detected photons per pulse is deduced from $\{n_p^{({\rm d})} \}$, as summarized in table~\ref{StatResults}.
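The binning procedure just described can be illustrated by a minimal Python sketch; the function name and the synthetic timestamps are ours, while the 488~ns period and 30~ns gate are taken from the text:

```python
TAU_REP = 488.0   # pulse repetition period in ns (from the text)
GATE = 30.0       # gate duration in ns (from the text)

def pulse_statistics(timestamps):
    """Attribute each photodetection time to an excitation pulse and
    build the per-pulse photocount record {n_p}; events falling outside
    the 30 ns gate following a pulse are rejected."""
    n_p = {}
    for t in timestamps:
        p = int(t // TAU_REP)            # pulse index on the time grid
        if t - p * TAU_REP <= GATE:      # time filtering: keep in-gate events
            n_p[p] = n_p.get(p, 0) + 1
    return n_p

# synthetic example: photons 5-6 ns after pulses 0 and 2, one stray count
n_p = pulse_statistics([5.0, 200.0, 2 * 488.0 + 5.0, 2 * 488.0 + 6.0])
# the 200 ns event falls outside the gate and is rejected
```

The probability distribution of table~\ref{StatResults} is then just the histogram of the values stored in this record.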
We next analyse the photon statistics of a data set extracted from the SPS emission displayed in figure~\ref{ramp+int}. We have selected photocounts recorded between the beginning of the plateau and molecular photobleaching, the latter clearly identified by a sudden drop in fluorescence emission. During this time, the molecule was excited at constant maximum pumping energy, yielding 15332 photodetection events for 325313 excitation pulses. The time filtering mentioned above is then applied, keeping 15138 synchronous photocounts.
\section{Single pulse photon statistics} \label{SPPS} \begin{table} \caption{\label{StatResults}Single-pulse statistics of a molecular single photon source, as obtained after numerical synchronization (see \ref{analysistechnique}). This data set will be referred to as (S). The total number of excitation pulses in the sequence is 325313, leading to a total of 15138 detected photons. The mean number of photons detected per pulse is $\langle n \rangle=0.04653$. No events with $n^{\rm (d)}>2$ are observed, due to deadtime in each detection channel.} \begin{indented} \item[]\begin{tabular}{@{}llll} \br $n^{\rm (d)}$ (number of & 0 & 1 & 2 \\
detected photons) &&&\\ \mr $n^{\rm (d)}$-photon & 310190 & 15108 & 15 \\ event number &&&\\ $n^{\rm (d)}$-photon & 0.95351 & 0.04644 & $4.6\times10^{-5}$ \\ event probability &&&\\ \br \end{tabular} \end{indented} \end{table}
The single-pulse statistics presented in table~\ref{StatResults} are the direct outcome of the photocount acquisition. They correspond to the molecular emission displayed in figure~\ref{ramp+int}. While our SPS photon statistics appear to differ from a classical Poissonian distribution, the influence of our experimental setup on these measurements must be considered for an accurate interpretation of these numbers and for comparison with the Poisson shotnoise reference.
\subsection{Influence of deadtime} \label{deadtime} In the following, we make a distinction between the distribution of the photons produced by the source, denoted by the script notation ($\mathcal{P}$), and the photocount statistics, for which we keep the usual notation ($P$).
Due to the existence of a $\simeq280$-ns deadtime in each detection channel, a nonlinear relationship exists between the detected photon statistics and the source photon statistics. Indeed, for a given excitation pulse, the number of detected photons in a 30-ns gated time window cannot exceed two, since we use two avalanche photodiodes (APDs) operating in the photon-counting regime required for our experiment.
Denoting by $\mathcal{P}^{\mathrm{in}}(n)$ the photon number probability distribution of the light incoming on the detection setup, the nonlinear transformation relating this probability to the \emph{detected} photon probability $P(n=0,1,2)$ is simply computed for ``ideal APDs''. By ``ideal APD'', we mean that each photodiode clicks with 100~$\%$ efficiency immediately upon receiving a photon, but that no more than one click can occur in a given repetition period. In the approach developed here, we consider the ideal APD case for the following reasons: \begin{itemize} \item{the limited quantum efficiency of the APD (65~$\%$ in our experiment) is included in an overall linear loss coefficient, along with the other linear losses of the detection chain;} \item{the deadtime is shorter than the repetition period $\tau_{\mathrm{rep}}$ and much longer than the pulse duration.} \end{itemize} For detection with a single ideal APD, the relationship between the photocount and incoming light statistics is \begin{equation} P(0) = \mathcal{P}^{\mathrm{in}}(0)\; {\rm and}\; P(1) = \sum_{n \geq 1}^{\infty} \,\mathcal{P}^{\mathrm{in}}(n). \, \label{eq:1NL1} \end{equation}
With our experimental detection scheme, random splitting of photons on two sides of 50/50 beamsplitter gives \begin{eqnarray} &P(0) =& \mathcal{P}^{\mathrm{in}}(0) \label{eq:NL0} \\ &P(1) =& \sum_{n \geq 1}^{\infty} \,\mathcal{P}^{\mathrm{in}}(n) \, \frac{1}{2^{n-1}} \label{eq:NL1} \\ &P(2) =& \sum_{n \geq 2}^{\infty} \, \mathcal{P}^{\mathrm{in}}(n) \, \left(1-\frac{1}{ 2^{n-1}}\right) \label{eq:NL2} \end{eqnarray}
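These transformations can be checked with a short Python sketch (the function name and the numerical truncation are ours). Applied to a perfect SPS, it confirms the point made below: deadtime alone introduces no bias when at most one photon arrives per pulse.

```python
def detected_stats(p_in, nmax=50):
    """Photocount probabilities P(0), P(1), P(2) behind a 50/50 beamsplitter
    followed by two ideal APDs; p_in(n) is the incoming photon-number
    distribution, summed up to a numerical cutoff nmax."""
    P0 = p_in(0)
    P1 = sum(p_in(n) * 0.5 ** (n - 1) for n in range(1, nmax))
    P2 = sum(p_in(n) * (1.0 - 0.5 ** (n - 1)) for n in range(2, nmax))
    return P0, P1, P2

# perfect SPS: exactly one photon per pulse reaches the detection setup
P0, P1, P2 = detected_stats(lambda n: 1.0 if n == 1 else 0.0)
# -> P0 = 0, P1 = 1, P2 = 0
```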
The relationship between $P(n)$ and the photon statistics $\mathcal{P}(n)$ of the SPS emission comes from accounting for the linear attenuation between the SPS and the detection. We call $\eta$ the overall detection efficiency, which includes all linear propagation losses and the photodetector quantum efficiency. $\mathcal{P}^{\mathrm{in}}$ is then related to $\mathcal{P}$ by the following binomial law \begin{equation} \mathcal{P}^{\rm in}(n) = \sum_{m=n}^{\infty} \binom{m}{n} \eta^{n} (1-\eta)^{m-n} \, {\mathcal{P}}(m). \label{eq:Bernoulli} \end{equation}
Combination of equations (\ref{eq:NL0}), (\ref{eq:NL1}), (\ref{eq:NL2}) and (\ref{eq:Bernoulli}) leads to a direct analytical relation between $\mathcal{P}(n)$ and $P(n=0,1,2)$.
Note that the existence of such a saturation limit, due to detection deadtime, has no influence on the photocount statistics of a perfect SPS, for which $\mathcal{P}(n\ge 2)=0$, as long as the excitation repetition period is longer than the electronics deadtime. On the contrary, for a ``real'' source with background light, the number of detected multi-photon pulses is systematically underestimated, leading to statistics artificially squeezed in comparison to the shotnoise reference.
\subsection{Calibration with a coherent source} \label{calibration}
\subsubsection{Coherent beam photocount statistics} SPS performance can be directly evaluated by comparing its single pulse photon statistics with those of a coherent source. This calibration takes into account the linear and nonlinear effects of our detection setup and permits an accurate measurement of the reduction in multi-photon event probability between single photon and Poissonian sources.
The photon number probability distribution for a coherent pulsed beam (C) is given by a Poisson law. According to equation~(\ref{eq:Bernoulli}), linear loss between the source and the APDs changes the mean photon number per pulse from $\alpha$ to $\eta \alpha$ while the photon statistics remain Poissonian. The expected photocount statistics can then be calculated by applying the nonlinear transformations (equations (\ref{eq:NL0}), (\ref{eq:NL1}) and (\ref{eq:NL2})) to a Poissonian distribution of parameter $\eta \alpha$ \begin{eqnarray} &P_{\rm C}(0) &=e^{-\eta \alpha} \label{eq:PC0} \\ &P_{\rm C}(1) &=2 e^{-\eta \alpha/2} (1 - e^{-\eta \alpha/2}) \label{eq:PC1} \\ &P_{\rm C}(2) &= (1- e^{-\eta \alpha/2})^{2}, \label{eq:PC2} \end{eqnarray} this distribution being denoted $P_{\rm C}(n)$ in table \ref{tablestat}.
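These closed forms can be verified numerically against the direct beamsplitter sums; a short stdlib sketch, where the value of $\eta\alpha$ is an arbitrary test value of ours:

```python
import math

ea = 0.1  # test value for the mean detected photon number eta*alpha
poisson = lambda n: math.exp(-ea) * ea ** n / math.factorial(n)

# direct sums for P(1) and P(2) behind the 50/50 beamsplitter
P1_num = sum(poisson(n) * 0.5 ** (n - 1) for n in range(1, 60))
P2_num = sum(poisson(n) * (1.0 - 0.5 ** (n - 1)) for n in range(2, 60))

# closed forms of the two coherent-beam photocount probabilities
P1_cf = 2 * math.exp(-ea / 2) * (1 - math.exp(-ea / 2))
P2_cf = (1 - math.exp(-ea / 2)) ** 2
```

Both routes agree to machine precision, since each APD independently sees a Poisson beam of mean $\eta\alpha/2$.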
\subsubsection{Experimental calibration} A strongly attenuated pulsed laser beam is used as an experimental reference to mimic a pulsed coherent source. It is obtained by slightly detuning the Ti:Sa wavelength from the notch filter rejection band, resulting in the detection of residual pump light reflected from the sample. The photocount statistics of this experimental reference (R) are then compared both with the experimental single photon source (S) and with the calculated photocount distribution expected from a Poissonian source (C). To establish a valid comparison, the calculated and experimental calibrations are determined for an (almost) identical mean number of photons detected per pulse.
\begin{table} \caption{\label{tablestat}Photocount probabilities $P(n)$ for SPS (S), reference experimental coherent source (R), and theoretical coherent source (C), for which photocount statistics are affected by detection. This table also displays the mean number $\langle n\rangle$ of detected photons per pulse.} \begin{indented} \item[]\begin{tabular}{@{}llll} \br & $n=1$ & $n=2$ & $\langle n\rangle$ \\ \mr $P_{\mathrm{S}}(n)$ & 0.04644 & $\lineup{4.6\times10^{-5}}$ & 0.04653 \\ $P_{\mathrm{R}}(n)$ & 0.04520 & $50\times10^{-5}$ & 0.04620\\ $P_{\mathrm{C}}(n)$ & 0.04514 & $53\times10^{-5}$ & 0.04620 \\ \br \end{tabular} \end{indented} \end{table}
Table~\ref{tablestat} shows that the theoretical predictions for the coherent source are in good agreement with the experimental calibration, proving that our detection model accounts for all significant biases. We can therefore confidently interpret the molecular SPS photon statistics we measure. For our SPS, the probability of two-photon pulses is 10 times smaller than the corresponding probability for a Poissonian source. As mentioned earlier, residual multi-photon pulses mostly result from background fluorescence light triggered by the Ti:Sa excitation. Indeed, the wavelength of this parasitic light lies within the molecule's fluorescence band and therefore cannot be filtered out. Careful optimization of the purity of the substrate, as well as of the chemicals used in sample fabrication, can likely lower the background fluorescence. \subsection{Molecular SPS efficiency} \label{efficiency} Our molecular SPS emission can be modeled as the superposition of a perfect SPS and a coherent state of light. In this model, all sources of linear loss (production~+~collection~+~detection) are gathered into an overall efficiency $\eta$. For this perfect SPS (perf. SPS), the photon probability distribution of the light impinging on the APDs is then given by \begin{eqnarray} &\mathcal{P}^{\rm in}_{\rm perf. SPS}(0) &= 1-\eta \nonumber \\ & \mathcal{P}^{\rm in}_{\rm perf. SPS}(1) &= \eta \label{sources_1} \\ & \mathcal{P}^{\rm in}_{\rm perf. SPS}(n\ge2) &=0. \nonumber \end{eqnarray} Background (backgnd.) emission is modeled by a coherent state of light with a mean number $\eta\gamma$ of detected photons per pulse. The corresponding photon probability distribution is then \begin{equation} \mathcal{P}^{\rm in}_{\mathrm{backgnd.}}(n) = {e^{- \eta \gamma}(\eta \gamma)^{n}\over{n !}},\:{\rm for } \:n\ge 0. \label{sources_2} \end{equation}
Applying equations (\ref{eq:NL0}) to (\ref{eq:NL2}) to the (perf. SPS + backgnd.) probability distribution leads to the following analytical expressions for the real single photon source (S) \emph{photocounts} statistics~: \begin{eqnarray} &P_{\mathrm{S}}(0) &= e^{-\eta \gamma}\, (1-\eta) \nonumber \\ & P_{\mathrm{S}}(1) &= 2\,( e^{-\eta \gamma/2} - e^{- \eta \gamma})+ \eta \, (2 e^{-\eta \gamma} - e^{- \eta \gamma/2}) \label{mol+fond} \\ & P_{\mathrm{S}}(2) &= (1-e^{-\eta \gamma/2})^{2} + \eta \,(e^{-\eta \gamma/2} - e^{- \eta \gamma}). \nonumber \end{eqnarray}
Values for the collection efficiency $\eta$ and the signal-to-background ratio $1/ \gamma$ can be inferred from the measured photocount statistics $P_{\rm S}$ (see table \ref{tablestat}). Using equations (\ref{mol+fond}) with the experimental values of $P_{\mathrm{S}}(1)$ and $P_{\mathrm{S}}(2)$, we find $\eta\simeq0.04456$ and $\eta \gamma\simeq2.02\times 10^{-3}$. This leads to a signal-to-background ratio of 22, in good agreement with that measured by sample raster scan.
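This inversion can be reproduced with a short stdlib-only sketch: since $\eta$ enters equations (\ref{mol+fond}) linearly, it can be eliminated using $P_{\mathrm{S}}(1)$, and the remaining one-dimensional root in $g=\eta\gamma$ found by bisection (variable names, bracketing interval and iteration count are our choices):

```python
import math

P1, P2 = 0.04644, 4.6e-5   # measured photocount probabilities from the text

def eta_and_residual(g):
    """For a trial background parameter g = eta*gamma, eliminate eta using
    the P_S(1) equation, then return the residual of the P_S(2) equation
    together with the corresponding eta."""
    A = 2 * (math.exp(-g / 2) - math.exp(-g))
    B = 2 * math.exp(-g) - math.exp(-g / 2)
    eta = (P1 - A) / B
    C = (1 - math.exp(-g / 2)) ** 2
    D = math.exp(-g / 2) - math.exp(-g)
    return C + eta * D - P2, eta

lo, hi = 1e-8, 0.05            # bracketing interval for g
for _ in range(100):           # plain bisection on the residual
    mid = 0.5 * (lo + hi)
    if eta_and_residual(mid)[0] > 0:
        hi = mid
    else:
        lo = mid
g = 0.5 * (lo + hi)
eta = eta_and_residual(g)[1]
# recovers eta close to 0.0446 and eta*gamma close to 2.0e-3
```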
\subsection{Single-pulse Mandel parameter} \label{Mandel} From a statistical point of view, there exist two main differences between an experimental and an ideal SPS: a source overall efficiency lower than unity, and a finite ratio of single-photon to multi-photon pulses. Light produced by an ideal SPS consists of the periodic emission and detection of single photons with $100 \%$ efficiency, its intensity fluctuations then being perfectly squeezed. On the other hand, a real SPS yields less squeezing~\cite{Yamamoto_PRL94}.
It is then meaningful to assess SPS performance by measuring its intensity noise on the timescale of the excitation repetition period $\tau_{\mathrm{rep}}$~\cite{FMT_PRL02}. Such analysis requires the evaluation of the single pulse Mandel parameter $Q$~\cite{Short_Mandel_PRL83}. This parameter characterizes the deviation of photon statistics from Poissonian statistics, for which $Q=0$. Subpoissonian (resp. superpoissonian) statistics correspond to $Q<0$ (resp. $Q>0$). For the distribution $\{n^{\rm (d)}_p\}$ of detected photon numbers, the Mandel parameter is defined by \begin{equation} Q \equiv \frac{\langle n^2 \rangle - {\langle n \rangle}^{2}}{\langle n\rangle} - 1 \equiv {\langle (\Delta n)^2\rangle\over{\langle n \rangle}}-1, \label{defMandel} \end{equation} where $\langle n\rangle$ stands for the average value of $\{ {n^{\rm (d)}_p} \}$ calculated over the ensemble $\{p\}$ of excitation pulses. Note that an ideal SPS would yield $Q=-1$. Moreover, for any statistical distribution, the effect of linear attenuation can be straightforwardly evaluated: after linear attenuation $\eta$, a Mandel parameter $Q_0$ is changed into $\eta Q_0 $. This means that every statistical distribution converges towards Poissonian statistics under attenuation. This sensitivity of non-zero Mandel parameter measurements to loss is similar to the sensitivity observed in squeezing experiments that measure reduced photocurrent noise spectra with respect to the shotnoise reference.
For our molecular SPS, the Mandel parameter $Q$ of the \emph{photocount} statistics can be computed directly from the single-pulse photocount probabilities \begin{equation} Q = \left[P(1) + 2 P(2) \right] \, \left\{ \frac{2 P(2) }{[ P(1) + 2 P(2) ]^{2} } - 1 \right\}. \label{MandelExp} \end{equation} From the data of table~\ref{tablestat} we infer a Mandel parameter $Q_{\rm S}= - 0.04455$ for the SPS. This negative value of $Q$ confirms that our SPS indeed exhibits subpoissonian statistics on the timescale $\tau_{\mathrm{rep}}$. Since very few multi-photon events are observed, the value of $Q_{\rm S}$ is almost entirely limited by the collection efficiency, which imposes a lower limit on $Q$: $Q_{\rm limit} = - \eta = -0.04456$.
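As a direct numerical check, equation (\ref{MandelExp}) evaluated with the measured photocount probabilities reproduces this value in three lines:

```python
P1, P2 = 0.04644, 4.6e-5                  # single-pulse photocount probabilities
n_mean = P1 + 2 * P2                      # mean detected photon number per pulse
Q = n_mean * (2 * P2 / n_mean ** 2 - 1)   # Mandel parameter of the photocounts
# Q is about -0.0446, i.e. subpoissonian
```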
Our measurement of $Q_{\rm S}$ can then be compared to a Poissonian reference measurement. Here again, the statistical bias introduced by the APD deadtime must be taken into account. From equations~(\ref{eq:PC0})-(\ref{eq:PC2}) and (\ref{MandelExp}), we can derive the Mandel parameter of the detected photons for a coherent source (C) of parameter $\alpha$. Noticing that $\langle n\rangle_{\rm C}= 2 \, (1-e^{-\alpha/2} )$, we have \begin{equation} Q_{\rm C}= \langle n\rangle_{\rm C}\, \left[ \frac{2 P_{\rm C}(2) }{\langle n\rangle_{\rm C}^{2}}-1 \right] = - \frac{\langle n\rangle_{\rm C}}{2}. \label{MandelCoherent} \end{equation} As a consequence of the photodetector deadtimes, a coherent source gives a subpoissonian distribution of photodetection events. In our case, a coherent source with the same mean number $\langle n\rangle_{\rm C}=\langle n \rangle=0.04653$ of detected photons per pulse as the SPS would then yield $Q_{\rm C}=-0.02327>Q_{\rm S}$. Despite this detection bias, our direct measurement of the Mandel parameter still yields a value of $Q_{\rm S}$ that clearly departs from Poissonian statistics. This \emph{measured} Mandel parameter is larger (in absolute value), by more than one order of magnitude, than values obtained in previous measurements~\cite{Short_Mandel_PRL83,Diedrich_PRL87,Fleury_PRL00}.
\section{Single photon source intensity fluctuations} \label{intermittency} Emission intermittency has been observed with most single photon sources realized so far~\cite{Lounis00,Alexios_EPJD,Santori_PRL_01}. This effect decreases the source efficiency and constitutes an additional source of noise. A better understanding of the physical processes responsible for intermittency would likely lead to significant improvement of current SPS devices.
To characterize the intermittency of our molecular SPS, we have investigated its influence on the photon statistics recorded with the time-resolved photon counting system. For a periodically triggered SPS, this analysis is equivalent to studying the source intensity noise over a wide range of timescales, which is usually done in the frequency domain for squeezing experiments, using a radio-frequency spectrum analyzer.
\subsection{Measuring intensity fluctuations : Time-varying Mandel parameter $Q(T)$}
The analysis performed on single-pulse photon statistics (see section~\ref{Mandel}) can be extended to multi-pulse timescales, allowing characterization of intensity fluctuations at any timescale greater than the pulse repetition period. To do this, we analyze fluctuations of the total number $N\left( T\right)$ of photons detected during an integration time $T\equiv\mathcal{M}\cdot \tau _{\rm rep}$, which is a multiple of the repetition period. This analysis therefore corresponds to studying the statistics of the number of photocounts recorded during $\mathcal{M}$ successive pulses.
We then introduce the time-varying Mandel parameter $Q(T)$~\cite{Mandel_OL79}, defined similarly to the single-pulse Mandel parameter. To perform the statistical analysis, we extend the procedure used in section~\ref{SPPS}. More precisely, the complete data $\{n^{\rm(d)}_p\}_{p=1,\dots,\mathcal{N}}$ corresponding to photocounts recorded during $\mathcal{N}$ consecutive excitation pulses is split into successive samples, each lasting $T$. We then obtain $\mathcal{N}_{\mathrm{sample}}=E\left[ \mathcal{N}/\mathcal{M}\right] $ samples. We call $N_{k}\left( T\right) $ the number of photocounts recorded during the $k^{\rm th}$ sample. We then have \begin{equation} N_{k}\left( T\right) \equiv\int_{kT}^{\left( k+1\right) T} I(t) dt =\sum_{p\,=k \mathcal{M}}^{\left(k+1\right)\mathcal{M} -1}n_{p}. \end{equation} The statistical average over these samples of duration $T$ is denoted $\langle \;\rangle _{T}$, and we hence have \begin{equation} \left\langle N\right\rangle _{T}=\frac{1}{\mathcal{N}_{\mathrm{sample}}} \sum_{k=0}^{\mathcal{N}_{\mathrm{sample}}-1}N_{k}\left( T\right). \label{def_mean_N_T} \end{equation} With these notations, the time-dependent Mandel parameter is given by \begin{equation} Q(T)\equiv \frac{\langle (\Delta N)^{2}\rangle _{T}}{\langle N\rangle _{T}}-1, \label{QdeT} \end{equation} which allows a direct comparison of the SPS noise properties with those of a Poissonian light beam.
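The estimator of $Q(T)$ can be sketched in a few lines of Python (the function name is ours). As a sanity check, an intermittency-free source detected with probability $\mu$ per pulse gives binomial block sums, hence $Q(T)=-\mu$ at every timescale:

```python
import random

def mandel_q(counts, M):
    """Q(T) for T = M * tau_rep, estimated from the per-pulse photocount
    record: split into blocks of M pulses, sum each block, then take the
    variance-to-mean ratio minus one."""
    ns = len(counts) // M
    N = [sum(counts[k * M:(k + 1) * M]) for k in range(ns)]
    mean = sum(N) / ns
    var = sum((x - mean) ** 2 for x in N) / ns
    return var / mean - 1

# intermittency-free test source: i.i.d. detection, one photon with prob mu
random.seed(1)
mu = 0.05
counts = [1 if random.random() < mu else 0 for _ in range(200000)]
# mandel_q(counts, M) is close to -mu for any block size M
```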
\subsection{Intensity noise and intermittency in the molecular fluorescence: the ON-OFF model} \label{model} To analyze our experimental results and link them to physical parameters of molecular fluorescence, we use a simple analytical model of molecular intermittency, in which we assume that the SPS can be ON or OFF. We call $p$ the ON to OFF transition rate and $q$ the OFF to ON one. These rates correspond to lifetimes $\tau_{\rm on}=1/p$ and $\tau_{\rm off}=1/q$, respectively.
\subsubsection{Physical interpretation of the ON-OFF model} For the molecular SPS~\cite{FMT_PRL02} and other SPSs relying on the fluorescence of a single emitter (e.g. single NV centers in diamond nanocrystals~\cite{Alexios_EPJD}), ON-OFF intermittency stems from the presence of a metastable non-fluorescent excited state in the energy level structure. The dynamics of the ON-OFF behavior can then be computed from the three-level structure shown in figure~\ref{ON:OFF} \begin{itemize} \item The ON $\rightarrow $ OFF transition corresponds to relaxation from the optical excited state $S_{1}$ to the triplet state $T_1$. For each excitation cycle, the probability $\mathcal{P}_{\rm ISC}$ of this intersystem crossing process is very small in the case of the DiIC$_{18}$(3) molecule used in our experiment ($\mathcal{P}_{\rm ISC}\simeq10^{-4}$). Moreover, since singlet-triplet transitions occur exclusively from the excited state $S_1$, the excitation repetition period must be considered in defining the source ON state lifetime $\tau _{\rm on}$, which is then $\tau _{\rm on}=\tau _{\rm rep}/\mathcal{P}_{\rm ISC}$, assuming a saturated excitation regime. \item The OFF $\rightarrow $ ON transition consists simply of non-radiative decay from the triplet $T_1$ to the ground $S_0$ state. Note that the triplet level is metastable since selection rules forbid a direct optical transition to the ground state. The triplet state lifetime $\tau _{\rm T}=\tau_{\rm off}=1/q$ is therefore usually much longer than a typical fluorescence lifetime (in the case of DiIC$_{18}$(3), $\tau _{\rm T}\simeq200~\mu$s~\cite{Veerman_PRL99}). \end{itemize}
\begin{figure}
\caption{Three states energy level structure and the corresponding ON and OFF states in the SPS intermittency model. In the ON state, the molecule undergoes fluorescent cycles between the ground $S_0$ and excited $S_1$ singlet states. In the OFF state, the molecule is trapped in the dark metastable triplet $T_1$ state. Coupling from $S_1$ to $T_1$ occurs at each excitation pulse with the intersystem crossing probability $\mathcal{P}_{\rm ISC}$, yielding a transition ON$\rightarrow$OFF rate $p=\mathcal{P}_{\rm ISC}/\tau_{\rm rep}$. The reverse transition OFF$\rightarrow$ON occurs at rate $q=1/\tau_{\rm T}$, where $\tau_{\rm T}$ is the triplet state lifetime. }
\label{ON:OFF}
\end{figure}
\subsubsection{Dynamics of the ON-OFF system} Under periodic pulsed excitation, the ON-OFF dynamics can be described using a discrete time model. As transitions between ON and OFF states are random, we introduce a stochastic variable $r_{k}$ to account for the source state at instants $t_{k}=k \tau_{\rm rep}$. This parameter has value $r_{k}=1$ (resp. $r_{k}=0$) if the source is in the ON (resp. OFF) state at time $t_{k}$.
We then call $u_{k}$ the probability for the source to be in the ON state at time $t_{k}$. As the SPS emits photons exclusively from the ON state and never from the OFF state, $u_{k}$ also corresponds to the photoemission probability at time $t_{k}$. We assume that the lifetimes $\tau_{\rm on}$ and $\tau_{\rm off}$ of the ON-OFF states are much longer than the repetition period $\tau_{\rm rep}$. Then, the ON$\rightarrow$OFF transition probability per pulse is $p\tau_{\rm rep}$ and the OFF$\rightarrow$ON transition probability is $q\tau_{\rm rep}$. It follows that the state of the emitter at pulse $k+1$ depends only on its state at pulse $k$. The recursion relation for the probability $u_{k+1}$ of the source to be ON at time $t_{k+1}$ is \begin{equation} u_{k+1}=\left( 1-p\tau _{\rm rep}\right) u_{k}+q\tau _{\rm rep}\,\left( 1-u_{k}\right), \end{equation} which leads to the general solution \begin{equation} u_{k}=\left( u_{0}-\frac{q}{p+q}\right) \,\left( 1-p\,\tau _{\rm rep}-q\,\tau_{\rm rep}\right) ^{k}+\frac{q}{p+q}. \label{recur:2} \end{equation}
Stationary probabilities for the molecule to be either ON or OFF are then \begin{eqnarray} &P_{\rm on}&={q\over{(p+q)}}\label{Pon} \\ &P_{\rm off}&=1-P_{\rm on}={p\over{(p+q)}}.\label{Poff} \end{eqnarray}
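A few lines of Python confirm that the recursion reproduces the closed form (\ref{recur:2}) and converges to these stationary probabilities; the rates below are the values fitted later in the paper, used here only as plausible inputs:

```python
TAU = 488e-9                        # repetition period in seconds
p, q = 2.1e-4 / TAU, 1.0 / 250e-6   # assumed ON->OFF and OFF->ON rates (1/s)
u0, K = 1.0, 500                    # start in the ON state, iterate K pulses

u = u0
for _ in range(K):                  # recursion for the ON probability u_k
    u = (1 - p * TAU) * u + q * TAU * (1 - u)

# closed-form solution after K pulses; q/(p+q) is the stationary ON probability
u_closed = (u0 - q / (p + q)) * (1 - p * TAU - q * TAU) ** K + q / (p + q)
```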
\subsubsection{Source intensity and Mandel parameter vs. time} \label{intensity_Q_T} According to our model, light emitted by the source is a succession of single photon pulses emitted at time $t_{k}=k\tau _{\rm rep}$ with probability $u_{k}$, corresponding to intensity \begin{equation} I(t)=\sum_{k\,=\,-\infty }^{+\infty }\delta (t-k \tau _{\rm rep})\times r_{k},\; {\rm with}\; r_k=0\;{\rm or}\; 1. \label{int} \end{equation}
The recursive relation~(\ref{recur:2}) for the ON-OFF model permits computation of the statistical properties of the source intensity $I(t)$. In particular, we can derive the time-dependent Mandel parameter from the variance of the number $N(T)$ of photons emitted by the intermittent source during $T=\mathcal{M}\,\tau_{\mathrm{rep}}$. Details of this calculation are given in \ref{CalcQdeT}.
The analytical expression of the Mandel parameter, given by equation~(\ref{eq:QdeTcomplet}), can be simplified in the regime for which $\beta =(p+q)\tau _{\rm rep}\ll1$, leading to the following expression of the Mandel parameter for a ``perfect'' SPS with intermittency \begin{equation} Q_{\rm perf. SPS}(\mathcal{M}\tau _{\rm rep})=\frac{2p\times \tau _{\rm rep}}{\beta ^{2}} \left\{1- \frac{1}{\mathcal{M}\beta }\left[1-(1-\beta )^{\mathcal{M}}\right]\right\} -1. \label{eq:QdeTsimplif} \end{equation}
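Two limits of this expression are easily checked numerically (the parameter values below are the fitted ones, used as plausible inputs): for a single pulse, at most one photon is emitted, so $Q(\tau_{\rm rep})=-1$ exactly, while at long times triplet blinking produces a large positive, superpoissonian $Q$:

```python
p_tau = 2.1e-4             # assumed P_ISC = p * tau_rep
q_tau = 488e-9 / 250e-6    # assumed tau_rep / tau_T
beta = p_tau + q_tau

def Q_perf(M):
    """Mandel parameter of a perfect intermittent SPS over M pulses."""
    return (2 * p_tau / beta ** 2) * (1 - (1 - (1 - beta) ** M) / (M * beta)) - 1

# Q_perf(1) = -1 (single-pulse limit); Q_perf(M >> 1/beta) -> 2*p_tau/beta**2 - 1
```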
Experimental measurements of the Mandel parameter are also affected by the overall efficiency $\eta $ being smaller than unity (see section \ref{efficiency}). Taking into account this limitation, which is equivalent to linear loss, the Mandel parameter of the real source $Q_{\rm S}(T)$ is given by \begin{equation} Q_{\rm S}(T)=\eta \,Q_{\rm perf. SPS}(T). \label{Q_linear_losses} \end{equation}
\subsubsection{Experimental data analysis} \label{expMandelT} As shown in figure~\ref{figmandel}, our experimental data are well-fitted by equations~(\ref{eq:QdeTsimplif}) and (\ref{Q_linear_losses}) over more than four orders of magnitude in time. Setting the measured efficiency to $\eta =0.04456$, the fit yields $p \tau _{\mathrm{rep}}=\mathcal{P}_{\rm ISC}=2.1\times 10^{-4}$ and $\tau _{\mathrm{T}}=250\,\mu$s, for the remaining two free parameters. These values are in good agreement with values given in reference \cite{Veerman_PRL99}.
\begin{figure}
\caption{Direct measurement of Mandel parameter $Q(T)$ over short integration time $T$. The dashed horizontal line shows $Q(T)$ for the equivalent coherent source (C), taking into account detection dead time. Inset shows $Q(T)$ for longer integration time. The solid curve is a fit given by the model accounting for intermittency in SPS emission.}
\label{figmandel}
\end{figure}
Figure~\ref{figmandel} clearly shows that the source photon statistics differ on short and long timescales. On timescales shorter than $\simeq8\tau _{\mathrm{rep}}$, the Mandel parameter of the source $Q(T)$ is smaller than $Q_{\rm C}$, the theoretical value of the Mandel parameter for Poissonian light including the detection deadtime (horizontal dashed line in figure~\ref{figmandel}). On this short timescale, the SPS photocount statistics are those of non-classical light. On timescales longer than $\simeq10~\mu$s, fluorescence intermittency due to the triplet state influences the photocount statistics by introducing excess noise, resulting in a positive value of the Mandel parameter.
The model developed here for a perfect intermittent SPS fits our experimental data with good accuracy. Indeed, apart from detection loss, other imperfections can be ignored or handled as follows: \begin{itemize} \item since the repetition period $\tau _{\mathrm{rep}}$ is much longer than the photodetection deadtime, and since multi-photon events are extremely rare with our SPS, the APD deadtime does not significantly alter the photocount statistics. The detection can be considered effectively linear, and equation~(\ref{Q_linear_losses}) remains valid in the presence of detection deadtimes. \item the high signal-to-background ratio means that background light does not contribute significantly to the photocount statistics. It can therefore be neglected, as done implicitly in the model developed in this section. It can moreover be shown that the addition of uncorrelated Poissonian background light of intensity $B$ to the perfect SPS signal $S$ is equivalent to loss. If we model the real source by the superposition of fluorescence background and light from a perfect SPS, then, introducing $\rho\equiv {S}/(S+B)$, the Mandel parameter of the real source is simply given by $Q_{\mathrm{S+B}}=\rho Q_{\mathrm{S}}$. \end{itemize}
\subsubsection{SPS intensity autocorrelation function} As a consistency check for our study, the time dependent Mandel parameter analysis can be compared to a different approach using the intensity autocorrelation function $g^{(2)}$~\cite{Orrit_JCP93}, the measurement of which is at the heart of fluorescence correlation spectroscopy~\cite{Chu_91}.
\begin{figure}\label{fig:g2}
\end{figure}
In terms of discrete time variables, a discrete time autocorrelation function for time delay $\Delta \times \tau_{\rm rep}$ is given by \begin{equation} G^{(2)}(\Delta)\equiv\frac{\langle n_{i} n_{i+\Delta}\rangle}{\langle n_{i}\rangle^{2}}, \label{eq:g2} \end{equation} where $n_{i}$ is the number of detected photons in the $i^{\rm th}$ excitation pulse, and $\Delta$ is an integer.
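As an illustration of definition~(\ref{eq:g2}), a direct numerical estimator can be written in a few lines. This is a sketch in Python, not the code used for the analysis, and the alternating toy sequence is purely illustrative:

```python
# Sketch of equation (eq:g2): discrete intensity autocorrelation
# G2(Delta) = <n_i n_{i+Delta}> / <n_i>^2, estimated over all
# available pairs of excitation pulses.
def discrete_g2(counts, delta):
    mean = sum(counts) / len(counts)
    pairs = [counts[i] * counts[i + delta] for i in range(len(counts) - delta)]
    return (sum(pairs) / len(pairs)) / mean ** 2

# toy per-pulse sequence with strict alternation: full anticorrelation
# at odd delays, correlation at even delays
counts = [1, 0] * 500
print(discrete_g2(counts, 1))  # -> 0.0
print(discrete_g2(counts, 2))  # -> 2.0
```

For real data the input would be the per-pulse photocount list $\{n^{(d)}_p\}$ built in the appendix.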
This discrete correlation function is directly related to the intensity autocorrelation function $g^{(2)}(\tau) \equiv\langle I(t)I(t+\tau)\rangle/\langle I(t) \rangle ^{2}$ usually measured with Start-Stop techniques~\cite{Brunel_PRL99}. It can be shown that the normalized area of the $k^{\rm th}$ peak (with the $0^{\rm th}$ reference peak corresponding to $\tau=0$) of $g^{(2)}(\tau)$ over a period $\tau_{\rm rep}$ is equal to $G^{(2)}(\Delta=k)$.
The ON-OFF model developed in section~\ref{intermittency} can be applied to calculate $G^{(2)}(\Delta)$ \begin{equation} G^{(2)}(\Delta) =1+\frac{p}{q}\, e^{-(p+q) \Delta\times\tau_{\rm rep}}, \label{eq:g2Exp} \end{equation} which coincides with the formula given in reference~\cite{Santori_PRL_01}.
From our data $\{n^{(d)}_{p}\}$, we numerically compute $G^{(2)}(\Delta)$, varying $\Delta$ from 1 to 1000. Note that the latter value is chosen because blinking occurs on timescales of order $\simeq1000\times \tau_{\rm rep}$. Results of this $G^{(2)}(\Delta)$ calculation are displayed in figure~\ref{fig:g2}. The experimental curve is fitted with equation (\ref{eq:g2Exp}), providing another way of measuring the dynamical parameters of an intermittent molecular SPS. This fit yields $\mathcal{P}_{\rm ISC}= 1.6 \times 10^{-4}$ and $\tau_{\rm T}=180~\mu$s, which are in good agreement with the values obtained in section~\ref{expMandelT} using the Mandel parameter analysis.
Note that, on short timescales, the statistical noise is higher on $G^{(2)}$ than on $Q$. This is because $G^{(2)}$ is computed over fewer, but larger, statistical samples.
\section{Conclusion} We have realized an efficient triggered single photon source relying on the temporal control of a single molecule fluorescence. After comparison with Poissonian coherent light pulses of the same mean number of photons per pulse, we have characterized the intensity noise properties of this SPS in the time domain and in the photon-counting regime.
From the record of every photocount timestamp, we calculate the second order correlation function $G^{(2)}$ or, equivalently, the time-dependent Mandel parameter $Q(T)$. Observed negative $Q(T)$ values signify non-classical photocount statistics.
This time-domain analysis is complementary to fluorescence correlation spectroscopy techniques for investigating photochemical properties at the single-emitter level. More specifically, we have modeled fluorescence intermittency by a two-state ON$\leftrightarrow$OFF dynamical process. By fitting a theoretical analytical expression of the Mandel parameter for an intermittent SPS, we obtained quantitative values for the relevant molecular photodynamical parameters. Such a direct time-domain statistical analysis could give insight into molecular properties such as conformational changes~\cite{Xie_ChemPhys02,Mukamel_JCP02}, resonant energy transfer \cite{Mabuchi_PRL02} or collective emission effects in multichromophoric systems~\cite{Basche_PRL03}.
With expected applications to quantum cryptography, higher overall efficiency within a given emission spectral band should be reached so that single photon sources can exhibit advantages over attenuated laser pulses~\cite{Alexios_PRL02}. In recent experiments we coupled the fluorescence of a single emitter (a colored center in a diamond nanocrystal) to the single mode of a planar microcavity and observed a significant increase in the spectral density of the emitted photons. These preliminary results are a promising step toward the realization of an efficient single photon source well-suited for open-air quantum key distribution.
\ack{ The authors are grateful to V.~Le Floc'h, L.T.~Xiao and C.~Grossman for their contributions at various points in the experiment. We also thank P.~Grangier and J.~Zyss for fruitful discussions, and Robin Smith for her valuable remarks on the manuscript. The experimental setup was built thanks to the great technical assistance of A.~Clouqueur, J.-P.~Madrange and C.~Ollier. This work was supported by an ``ACI Jeune chercheur'' grant from Minist\`ere de la Recherche, and by a France T\'el\'ecom R\&D grant (``CTI T\'el\'ecom Quantique'').}
\appendix
\section{General analysis technique of a set of photocounts} \label{analysistechnique} A set of data consists of a list of timestamps $\{t_i\}$ recorded by the Time Interval Analyser computer board. In this appendix we describe the protocol developed to process raw data. This procedure allows us first to postsynchronize the timestamps on an excitation timebase and then to build the set $\{n^{(d)}_p\}$ of the number of detected photons for each excitation pulse $p$.
The pulsed excitation laser acts as a periodic trigger of emitted photons with repetition period $\tau _{ \mathrm{rep}}\simeq488~$ns. An excitation laser pulse is emitted at time $t_{\mathrm{start} }+p\times \tau _{\mathrm{rep}},$ where the pulse is indexed by the integer $p$. The parameter $t_{\mathrm{start}}$ is the emission time of the first pulse ($p=0$) of the data record. For each data set, $t_{\mathrm{start}}$ and $\tau _{\mathrm{rep}}$ must be determined, because the repetition period of the Ti:Sa femtosecond laser can fluctuate slightly between acquisitions. However, the laser repetition rate is stable over the typical acquisition duration (under one second), and $\tau _{\mathrm{rep}}$ can therefore be taken as constant for a given data record.
Single photon emission by the molecule occurs at each excitation pulse after a random time delay related to the molecule's excited-state lifetime. Non-synchronous photocounts due to APD dark counts are rare, so almost all recorded photocounts are triggered by photons emitted by the molecule, with a few due to photons from the residual fluorescence background. For these reasons, $t_{\mathrm{start}}$ and $\tau_{\mathrm{rep}}$ can be determined directly from the recorded data.
The $i^{\rm th}$ photocount timestamp $t_{i}$ can be expressed as \begin{equation} t_{i}=t_{\mathrm{start}}+ (p_{i} \times \tau_{\mathrm{rep}}) + \delta \tau_{i}, \label{ti} \end{equation} where the integer $p_{i} \in \{ 1,\dots,\cal{N} \}$ indexes the laser pulse preceding detection of the $i^{\rm th}$ photon, the data to be analysed lasting $\cal{N}$ repetition periods; $\delta \tau _{i}$ is the time delay between the excitation pulse and the photocount timestamp ($0\leq \delta \tau _{i}< \tau _{\mathrm{rep}}$). Given a set of timestamps $\{t_{i}\}$, the relevant information can be equivalently represented by the lists $\{p_{i}\}$ and $\{\delta \tau_{i}\}$, provided the value of $\tau _{\mathrm{rep}}$ is known accurately enough. Since the fluorescence lifetime of the molecule is much shorter than the laser repetition period, one has $\delta \tau _{i}\ll \tau_{\mathrm{rep}}$ whenever photocount $i$ is not a dark count, and dark counts are rare.
\begin{figure}\label{ecarts}
\end{figure}
As the laser period $\tau _{\mathrm{rep}}$ is not known precisely, we first attempt to synchronize the data on the excitation timebase, considered as a clock of period $\tau_{\mathrm{clock}}$ close to the expected laser period. We introduce a delay function, parametrized by $t_{\mathrm{start}}$ and $\tau _{\mathrm{clock}}$, that gives for each timestamp $t_i$ the time delay between this timestamp and the corresponding top of the clock\footnote{$E()$ stands for the integer part function.} \begin{equation} \mathrm{Delay}_{t_{\mathrm{start}},\tau_{\mathrm{clock}}}(t_{i})= t_{i}-t_{\mathrm{start}}-E\left(\frac{t_{i}-t_{\mathrm{start}}}{\tau_{\mathrm{clock}}}\right)\times \tau_{\mathrm{clock}}. \end{equation} If the clock period differs from the laser period, the baseline of the time delay drifts linearly with the pulse index $p_i$, as can be seen in figure~\ref{ecarts}(a) \begin{equation} \mathrm{Delay}_{t_{\mathrm{start}},\tau_{\mathrm{clock}}}(t_{i}) \simeq p_i (\tau _{\mathrm{rep}}-\tau _{\mathrm{clock}}) \simeq\frac{t_{i}}{\tau_{\mathrm{rep}}}\left( \tau _{\mathrm{rep}}-\tau _{\mathrm{clock}}\right). \end{equation} From the slope of the delay function baseline, we infer a new value for $\tau _{ \mathrm{clock}}$. Note that the first guess for $\tau_{\rm clock}$ is usually so far from the laser period that the delay function takes a saw-toothed shape, each jump corresponding to the delay reaching a multiple of $\tau_{\rm clock}$. As a consequence, only a linear fraction of the sample, corresponding to a single saw tooth, can be used at first. In further steps, the estimate of $\tau _{\mathrm{rep}}$ improves and fewer jumps occur. Longer samples, corresponding to higher fit precision, can then be processed. This procedure is repeated until the whole data set is used, leading to the situation of figure~\ref{ecarts}(b).
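One refinement step of this loop can be sketched as follows. This is illustrative Python, not the acquisition code; the synthetic timestamps, the initial guess and the single-pass least-squares fit are our assumptions:

```python
# One step of the post-synchronization loop: compute the delay function
# and refine the clock period from the slope of its baseline.
def delay(t, t_start, tau_clock):
    k = int((t - t_start) // tau_clock)   # E(): integer part
    return t - t_start - k * tau_clock

def refine_tau(timestamps, t_start, tau_clock):
    """Least-squares slope of Delay(t_i) versus estimated pulse index,
    valid on a jump-free (single saw-tooth) fraction of the sample."""
    xs = [(t - t_start) / tau_clock for t in timestamps]  # ~ pulse index p_i
    ys = [delay(t, t_start, tau_clock) for t in timestamps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return tau_clock + slope              # slope ~ tau_rep - tau_clock

# synthetic photocounts: true period 488.0 ns, fixed 2.5 ns delay,
# initial clock guess off by 0.02 ns
ts = [p * 488.0 + 2.5 for p in range(200)]
tau1 = refine_tau(ts, 0.0, 487.98)
print(abs(tau1 - 488.0) < 1e-4)  # refined estimate close to the true period
```

In practice the step is iterated, each pass using a longer jump-free fraction of the data.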
It corresponds to the same fraction of data as in figure~\ref{ecarts}(a), for which $\tau_{\mathrm{rep}}$ is now determined with a relative precision better than $10^{-9}$.
Once $\tau_{\mathrm{rep}}$ and $t_{\mathrm{start}}$ values are known, calculation of the lists $\{p_{i}\}$ and $\{\delta \tau_{i}\}$ is straightforward using \begin{equation} p_{i} = E\left({t_i-t_{\rm start}\over\tau_{\mathrm{rep}}}\right)\; {\rm and}\; \delta\tau_i = \mathrm{Delay}_{t_{\rm start},\tau_{\mathrm{rep}}}(t_i). \end{equation}
A time filtering procedure is then used to eliminate all photocounts with time delay much longer than the molecule excited state lifetime. To implement this filter, we use a time window of duration $\Delta T_{\mathrm{window}}$. From the set $\{p_{i}\}$, we calculate the number $n_{i}$ of photons detected by the two photodiodes in the time interval $[p_{i}\tau_{\mathrm{rep}},\,p_{i}\tau _{\mathrm{rep} }+\Delta T_{\mathrm{window}}]$. The time window duration $\Delta T_{\mathrm{window}}$ must be shorter than the laser period and much longer than the molecular excited state lifetime $1/\Gamma$, so that the probability of discarding a ``real'' photodetection event is negligible. The chosen time window duration $\Delta T_{\mathrm{window}}=30~$ns meets these two conditions, considering $1/\Gamma\simeq2.5~$ns for DiIC$_{18}$(3) dye. Note that the choice of a time window significantly shorter than the laser period has the advantage of removing from our data the majority of non-synchronous background photocounts, such as APD dark counts.
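The decomposition into $(p_i,\delta\tau_i)$ and the time-window filter can be sketched as follows (illustrative Python; the toy timestamps are assumptions, not data from the experiment):

```python
TAU_REP = 488.0   # ns, repetition period (approximate value used here)
WINDOW = 30.0     # ns, Delta T_window

def split_timestamp(t, t_start, tau_rep=TAU_REP):
    """Return (p_i, delta tau_i) as in equation (ti)."""
    p = int((t - t_start) // tau_rep)     # E((t - t_start)/tau_rep)
    return p, t - t_start - p * tau_rep

def window_filter(timestamps, t_start, window=WINDOW):
    """Keep only photocounts whose delay falls inside the time window;
    late photocounts are mostly non-synchronous dark counts."""
    kept = []
    for t in timestamps:
        p, dt = split_timestamp(t, t_start)
        if dt <= window:
            kept.append(p)
    return kept

# two synchronous photocounts and one late (dark) count
ts = [3.0, 488.0 + 2.0, 2 * 488.0 + 250.0]
print(window_filter(ts, 0.0))  # -> [0, 1]
```

The surviving pulse indices are then histogrammed per pulse to build $\{n^{\rm (d)}_p\}$.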
The processed data, now expressed as the table $\{n_{i},p_{i}\}$, and shortened to $\{n^{\rm (d)}_p\}$ in the body of this article, allows us to characterize the statistics of our source on timescales from $ \tau _{\mathrm{rep}}\simeq500$ ns to milliseconds.
\section{Statistical characterisation of an intermittent SPS} \label{CalcQdeT} In this Appendix, we derive a general analytical expression for the ``perfect'' intermittent SPS Mandel parameter $Q(T)$ defined by equation~(\ref{QdeT}), using the ON-OFF model introduced in section~\ref{model}. We also retrieve the approximate expression~(\ref{eq:QdeTsimplif}) for $Q(T)$. We assume that the source emission has reached its steady state at time $t=0$.
The total number $N\left( T\right)$ of photocounts recorded during an integration time $T=\mathcal{M}\cdot \tau _{\rm rep}$, corresponding to $\mathcal{M}$ consecutive excitation pulses, is related to the stochastic photocount variable $r_k$ associated with the $k^{\rm th}$ excitation pulse (see section~\ref{intensity_Q_T}) by \begin{equation} N\left( T\right) =\sum_{k\,=0}^{\mathcal{M}-1}r_{k}. \end{equation} Calculating $Q(T)$ is equivalent to evaluating the variance ${\langle N^{2}\rangle }_{T}-{\langle N\rangle _{T}}^{2}$, where mean values $\langle\;\rangle_T$ are defined by equation~(\ref{def_mean_N_T}). Both $\langle N\rangle_T$ and $\langle N^2\rangle_T$ should then be evaluated. The mean value of $N(T)$ is given by \begin{equation} \left\langle N\right\rangle _{T}=\sum_{k\,=0}^{\mathcal{M}-1}\left\langle r_{k}\right\rangle =\mathcal{M}\frac{q}{p+q}, \label{mean_N} \end{equation} where the steady state expression $\langle r_k\rangle=q/(p+q)$ comes from the definition of $r_k$ and the recursive law (\ref{recur:2}). Similarly, $\langle N^2\rangle_T$ follows from \begin{equation} \langle N^{2}\rangle _{T}=\sum_{k=0}^{\mathcal{M}-1}\sum_{k\,^{\prime }=0}^{ \mathcal{M}-1}\left\langle r_{k}\,r_{k^{\prime }}\right\rangle. \end{equation}
The index change $\ell=|k-k'|$ in the previous equation, together with $r_k^2=r_k$, then yields \begin{eqnarray} \langle N^{2}\rangle _{T} &=&\sum_{k\,=0}^{\mathcal{M}-1}\left\langle r_{k}^{2}\right\rangle +2\sum_{k\,=0}^{\mathcal{M}-1}\sum_{\ell\,=1}^{ \mathcal{M}-1-k}\left\langle r_{k}\,r_{k+\ell}\right\rangle \nonumber \\ &=&\sum_{k\,=0}^{\mathcal{M}-1}\left\langle r_{k}\right\rangle +\mathcal{M}(\mathcal{M}-1)\left\langle r_{k}\right\rangle^{2} +2\sum_{\ell \,=1}^{\mathcal{M}-1}\left( \mathcal{M}-\ell \right) C\left( \ell \right),\label{mean_N2} \end{eqnarray} where we have introduced the discrete time \emph{correlation function} $C(\ell)$ defined as \begin{equation} C(\ell) =\langle r_{k}r_{k+\ell}\rangle -\langle r_{k}\rangle \langle r_{k+\ell}\rangle. \end{equation}
Calculation of $Q(T)$ now relies on the evaluation of $C(\ell)$. We recall that, in the model of section~\ref{model}, the source dynamics is described by the sequence $u_k$ of probabilities for the molecular system to be in the ON state at pulse $k$. The general expression of $u_{k+\ell}$ follows from the recursive law~(\ref{recur:2}) \begin{equation} u_{k+\ell}=\left(u_{k}-\frac{q}{p+q}\right)\,(1-p\,\tau _{\rm rep}-q\,\tau _{\rm rep})^{\ell}+\frac{q}{p+q}. \label{eq:evolution2} \end{equation}
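The recursive law just used can be checked numerically. The sketch below assumes the pulse-to-pulse update $u_{k+1}=u_k(1-p\,\tau_{\rm rep})+(1-u_k)\,q\,\tau_{\rm rep}$, which is the form consistent with equation~(\ref{eq:evolution2}); the rate values are illustrative:

```python
# ON-OFF update of the ON-state probability between consecutive pulses,
# and a check that q/(p+q) is its fixed point (the steady state used
# in equation (mean_N)).
def step(u, p, q, tau_rep):
    # ON decays at rate p, OFF recovers at rate q (per unit time)
    return u * (1 - p * tau_rep) + (1 - u) * q * tau_rep

p_, q_, tau = 5.0e3, 4.0e3, 488e-9   # illustrative rates (s^-1), period (s)
u_star = q_ / (p_ + q_)
print(abs(step(u_star, p_, q_, tau) - u_star) < 1e-12)  # -> True
```

Iterating `step` from any $u_0$ reproduces the geometric relaxation of equation~(\ref{eq:evolution2}) toward $q/(p+q)$.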
The stochastic variable $r_{k}$ equals 1 if the state is ON, with probability $u_{k}$, and 0 if the state is OFF, with probability $1-u_k$. The product $r_{k}r_{k+\ell}$ is thus equal to 1 only if both $r_{k}$ and $r_{k+\ell}$ are simultaneously equal to unity, and is otherwise equal to zero. This can be summarized as \begin{eqnarray}
&\langle r_k r_{k+\ell} \rangle =P(r_k=1) P(r_{k+\ell}=1|r_k=1) \\ &\langle r_k \rangle \langle r_{k+\ell}\rangle=P(r_k=1) P(r_{k+\ell}=1). \end{eqnarray}
Note that $P(r_k=1)$ is the probability that $r_k=1$, and $P(r_{k+\ell}=1|r_k=1)$ is the conditional probability that $r_{k+\ell}=1$ given $r_k=1$. This latter condition $r_k=1$ is enforced by setting $u_k=1$ as the initial value in~(\ref{eq:evolution2}). Moreover, by definition, the steady state probability is $P( r_{k}=1)=q/(p+q)$, so that we have \begin{equation}
P(r_{k+\ell}=1|r_k=1) =\frac{p}{p+q}(1-p\tau_{\rm rep}-q\tau_{\rm rep})^{\ell}+\frac{q}{p+q}, \end{equation} and, as a consequence, \begin{equation} C(\ell) =\frac{pq}{( p+q)^2}(1-p\tau_{\rm rep}-q\tau _{\rm rep})^\ell. \end{equation} This value $C(\ell)$ is then introduced in equation~(\ref{mean_N2}), and the expression for the variance follows from (\ref{mean_N2}) and (\ref{mean_N}). \begin{equation} \langle N^{2}\rangle _{T}-\langle N\rangle _{T}^{2}=\frac{pq}{(p+q)^{2}} \left[ \mathcal{M}\frac{1+\alpha }{1-\alpha }-2\alpha \frac{1-\alpha ^{ \mathcal{M}}}{(1-\alpha )^{2}}\right], \end{equation} where $\alpha \equiv\left( 1-p\,\tau _{\rm rep}-q\,\tau _{\rm rep}\right).$
The general analytical expression of the ``perfect'' intermittent SPS Mandel parameter finally reads \begin{eqnarray} &&Q_{\rm perf.SPS}(\mathcal{M} \tau _{\rm rep})={{\langle N^{2}\rangle_{T}-\langle N\rangle _{T}^{2}}\over{\langle N\rangle _{T}}}-1 \\ &=&\frac{p}{p+q}\left(\frac{2-\beta}{\beta}-\frac{2(1-\beta)}{\mathcal{M}} \cdot \frac{1-(1-\beta )^{\mathcal{M}}}{\beta^2}\right) -1, \label{eq:QdeTcomplet} \end{eqnarray} where $\beta \equiv \left( p+q\right)\,\tau _{\rm rep}$. In the limit $\beta\ll1$, which is the case for the molecular system dynamics considered in the body of this article, we retrieve expression (\ref{eq:QdeTsimplif}).
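As a self-contained numerical sanity check (a sketch with illustrative rate values, not data from the experiment), the closed-form variance and equation~(\ref{eq:QdeTcomplet}) can be verified against a direct evaluation of the sums above:

```python
# Direct-sum variance Var N = M<r>(1-<r>) + 2*sum_l (M-l) C(l), with
# <r> = q/(p+q) and C(l) = p*q/(p+q)**2 * alpha**l, compared with the
# closed form, and Q = Var(N)/<N> - 1 compared with (eq:QdeTcomplet).
def variance_direct(p, q, tau, M):
    r, a = q / (p + q), 1 - (p + q) * tau
    corr = sum((M - l) * p * q / (p + q) ** 2 * a ** l for l in range(1, M))
    return M * r * (1 - r) + 2 * corr

def variance_closed(p, q, tau, M):
    a = 1 - (p + q) * tau
    return p * q / (p + q) ** 2 * (M * (1 + a) / (1 - a)
                                   - 2 * a * (1 - a ** M) / (1 - a) ** 2)

def q_mandel(p, q, tau, M):   # equation (eq:QdeTcomplet)
    b = (p + q) * tau
    return (p / (p + q)) * ((2 - b) / b
            - (2 * (1 - b) / M) * (1 - (1 - b) ** M) / b ** 2) - 1

p_, q_, tau, M = 5.0e3, 4.0e3, 488e-9, 2000
mean_n = M * q_ / (p_ + q_)
vd, vc = variance_direct(p_, q_, tau, M), variance_closed(p_, q_, tau, M)
print(abs(vd - vc) < 1e-6 * vc)                                 # -> True
print(abs(vc / mean_n - 1 - q_mandel(p_, q_, tau, M)) < 1e-6)   # -> True
```

Both identities hold to rounding error, confirming the algebra leading from~(\ref{mean_N2}) to~(\ref{eq:QdeTcomplet}).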
\Bibliography{30}
\bibitem{Kimble_77} Kimble~H~J, Dagenais~M and Mandel~L 1977 {\it Phys. Rev. Lett.} {\bf 39} 691
\bibitem{Diedrich_PRL87} Diedrich~F and Walther~H 1987 {\it Phys. Rev. Lett.} {\bf 58} 203
\bibitem{Yamamoto_PRL94} Imamo\v{g}lu ~A and Yamamoto~Y 1994 {\it Phys. Rev. Lett.} {\bf 72} 210
\bibitem{Yamamoto_Nature99} Kim~J, Benson~O, Kan~H and Yamamoto~Y 1999 {\it Nature} {\bf 397} 500
\bibitem{Knill_NAT} Knill~E, Laflamme~R and Milburn~G~J 2001 {\it Nature} {\bf 409} 46
\bibitem{Alexios_PRL02} Beveratos~A, Brouri~R, Gacoin~T, Villing~A, Poizat~J-P and Grangier~P 2002 {\it Phys. Rev. Lett.} {\bf 89} 187901
\bibitem{Waks_NAT02} Waks~E, Inoue~K, Santori~C, Fattal~D, Vu\v{c}kovi\'c~J, Solomon~G and Yamamoto~Y 2002 {\it Nature} {\bf 420} 762
\bibitem{Lutkenhaus_PRA} L\"{u}tkenhaus~N 2000 {\it Phys. Rev. A} {\bf 61} 052304
\bibitem{DeMartini96} De Martini~F, Di Giuseppe~G and Marrocco~M 1996 {\it Phys. Rev. Lett.} {\bf 76} 900
\bibitem{Rosa_PRA00} Brouri~R, Beveratos~A, Poizat~J-P and Grangier~P 2000 {\it Phys. Rev. A} {\bf 62} 063814
\bibitem{FMT_PRL02} Treussart~F, All\'eaume~R, Le Floc'h~V, Xiao~L~T, Courty~J-M and Roch~J-F 2002 {\it Phys. Rev. Lett.} {\bf 89} 093601
\bibitem{Brunel_PRL99} Brunel~C, Lounis~B, Tamarat~P and Orrit~M 1999 {\it Phys. Rev. Lett.} {\bf 83} 2722
\bibitem{Lounis00} Lounis~B and Moerner~W~M 2000 {\it Nature} {\bf 407} 491
\bibitem{Michler_Science_00} Michler~P, Kiraz~A, Becker~C, Schoenfeld~W~V, Petroff~P~M, Zhang~L, Hu~E and Imamo\v{g}lu~A 2000 {\it Science} {\bf 290} 2282
\bibitem{Alexios_EPJD} Beveratos~A, K\"uhn~S, Brouri~R, Gacoin~T, Poizat~J~P and Grangier~P 2002 {\it Eur. Phys. J. D} {\bf 18} 191
\bibitem{Santori_PRL_01} Santori~C, Pelton~M, Solomon~G, Dale~Y and Yamamoto~Y 2001 {\it Phys. Rev. Lett.} {\bf 86} 1502
\bibitem{Moreau_APL_01} Moreau~E, Robert~I, G\'erard~J-M, Abram~I, Manin~L and Thierry-Mieg~V 2001 {\it Appl. Phys. Lett.} {\bf 79} 2865
\bibitem{Yuan_02} Yuan~Z, Kardynal~B~E, Stevenson~R~M, Shields~A~J, Lobo~C~J, Cooper~K, Beattie~N~S, Ritchie~D~A and Pepper~M 2002 {\it Science} {\bf 295} 102
\bibitem{Kurtsiefer_00} Kurtsiefer~C, Mayer~S, Zarda~P and Weinfurter~H 2000 {\it Phys. Rev. Lett.} {\bf 85} 290
\bibitem{Fleury_PRL00} Fleury~L, Segura~J~M, Zumofen~G, Hecht~B and Wild~U 2000 {\it Phys. Rev. Lett.} {\bf 84} 1148
\bibitem{FMT_OL01} Treussart~F, Clouqueur~A, Grossman~C and Roch~J~F 2001 {\it Opt. Lett.} {\bf 26} 1504
\bibitem{Atkins_book} Atkins~P~W and Friedman~R~S 1997 {\it Molecular Quantum Mechanics} (Oxford: Oxford University Press)
\bibitem{Loudon} Loudon~R 2000 {\it The Quantum Theory of Light} (Oxford: Oxford University Press)
\bibitem{Grangier_EPL86} Grangier~P, Roger~G and Aspect~A 1986 {\it Europhys. Lett.} {\bf 1} 173
\bibitem{Reynaud_these_etat} Reynaud~S 1983 {\it Ann. Phys. Fr.} {\bf 8} 315
\bibitem{Rosa00} Brouri~R, Beveratos~A, Poizat~J-P and Grangier~P 2000 {\it Opt. Lett.} {\bf 25} 1294
\bibitem{Eggeling98} Eggeling~C, Widengren~J, Rigler~R and Seidel~C 1998 {\it Anal. Chem.} {\bf 70} 2651
\bibitem{Veerman_PRL99} Veerman~J~A, Garcia-Parajo~M~F, Kuipers~L and Van Hulst~N~F 1999 {\it Phys. Rev. Lett.} {\bf 83} 2155
\bibitem{Short_Mandel_PRL83} Short~R and Mandel~L 1983 {\it Phys. Rev. Lett.} {\bf 51} 384
\bibitem{Mandel_OL79} Mandel~L 1979 {\it Opt. Lett.} {\bf 4} 205
\bibitem{Orrit_JCP93} Bernard~J, Fleury~L, Talon~H, and Orrit~M 1993 {\it J. Chem. Phys.} {\bf 98} 850
\bibitem{Chu_91} Chu~B 1991 {\it Laser Light Scattering. Basic Principles and Practice} (Boston: Academic Press)
\bibitem{Xie_ChemPhys02} Yang~H and Xie~X~S 2002 {\it Chem. Phys.} {\bf 284} 423\nonum Yang~H, Luo~G, Karnchanaphanurach~P, Louie~T-M, Rech~I, Cova~S, Xun~L, and Xie~X~S 2003 {\it Science} {\bf 302} {262-266}
\bibitem{Mukamel_JCP02} Barsegov~V and Mukamel~S 2002 {\it J. Chem. Phys.} {\bf 116} 9802
\bibitem{Mabuchi_PRL02} Berglund~A, Doherty~A and Mabuchi~H 2002 {\it Phys. Rev. Lett.} {\bf 89} 068101
\bibitem{Basche_PRL03} H\"ubner~C~G, Zumofen~G, Renn~A, Herrmann~A, M\"ullen~K and Basch\'e~T 2003 {\it Phys. Rev. Lett.} {\bf 91} 093903
\endbib
\end{document}
\begin{document}
\begin{abstract} In this paper we introduce the notion of generalized quasi--Einstein manifold, which generalizes the concepts of Ricci soliton, Ricci almost soliton and quasi--Einstein manifold. We prove that a complete generalized quasi--Einstein manifold with harmonic Weyl tensor and with zero radial Weyl curvature is locally a warped product with $(n-1)$--dimensional Einstein fibers. In particular, this implies a local characterization of locally conformally flat gradient Ricci almost solitons, similar to that proved for gradient Ricci solitons. \end{abstract}
\maketitle
\section{Introduction}
In recent years, much attention has been given to the classification of Riemannian manifolds admitting an Einstein--like structure. In this paper we will define a class of Riemannian metrics which naturally generalizes the Einstein condition. More precisely, we say that a complete Riemannian manifold $(M^{n},g)$, $n\geq 3$, is a {\em generalized quasi--Einstein manifold} (GQE manifold, for short), if there exist three smooth functions $f,\mu,\lambda$ on $M$, such that \begin{equation}\label{gqe} {\mathrm {Ric}} + \nabla^{2} f - \mu\, df \otimes df = \lambda g \,. \end{equation} Natural examples of GQE manifolds are given by Einstein manifolds (when $f$ and $\lambda$ are constant), gradient Ricci solitons (when $\lambda$ is constant and $\mu=0$), gradient Ricci almost solitons (when $\mu=0$, see~\cite{pigrimsett1}) and quasi--Einstein manifolds (when $\mu$ and $\lambda$ are constant, see~\cite{caseshuwei}~\cite{mancatmazz1}~\cite{HePetWylie}). We will call a GQE manifold {\em trivial} if the function $f$ is constant; this clearly implies that $g$ is an Einstein metric.
The Riemann curvature operator of a Riemannian manifold $(M^n,g)$ is defined as in~\cite{gahula} by $$ \mathrm{Riem}(X,Y)Z=\nabla_{Y}\nabla_{X}Z-\nabla_{X}\nabla_{Y}Z+\nabla_{[X,Y]}Z\,. $$ In a local coordinate system the components of the $(3,1)$--Riemann curvature tensor are given by ${\mathrm R}^{d}_{abc}\tfrac{\partial}{\partial x^{d}}=\mathrm{Riem}\big(\tfrac{\partial}{\partial x^{a}},\tfrac{\partial}{\partial x^{b}}\big)\tfrac{\partial}{\partial x^{c}}$ and we denote by ${\mathrm R}_{abcd}=g_{de}{\mathrm R}^{e}_{abc}$ its $(4,0)$--version.
{\em In all the paper the Einstein convention of summing over the repeated indices will be adopted.}
With this choice, for the sphere ${{\mathbb S}}^n$ we have ${\mathrm{Riem}}(v,w,v,w)={\mathrm R}_{abcd}v^aw^bv^cw^d>0$. The Ricci tensor is obtained by the contraction ${\mathrm R}_{ac}=g^{bd}{\mathrm R}_{abcd}$ and ${\mathrm R}=g^{ac}{\mathrm R}_{ac}$ will denote the scalar curvature. The so called Weyl tensor is then defined by the following decomposition formula (see~\cite[Chapter~3, Section~K]{gahula}) in dimension $n\geq 3$, \begin{eqnarray*} {\mathrm W}_{abcd}=&\,{\mathrm R}_{abcd}+\frac{{\mathrm R}}{(n-1)(n-2)}(g_{ac}g_{bd}-g_{ad}g_{bc}) - \frac{1}{n-2}({\mathrm R}_{ac}g_{bd}-{\mathrm R}_{ad}g_{bc} +{\mathrm R}_{bd}g_{ac}-{\mathrm R}_{bc}g_{ad})\,. \end{eqnarray*}
We recall that a Riemannian metric has {\em harmonic Weyl tensor} if the divergence of ${\mathrm W}$ vanishes. In dimension three this condition is equivalent to local conformal flatness. Nevertheless, when $n\geq 4$, having harmonic Weyl tensor is a strictly weaker condition, since local conformal flatness is equivalent to the vanishing of the whole Weyl tensor.
In this paper we will give a local characterization of generalized quasi--Einstein manifolds with harmonic Weyl tensor and such that ${\mathrm W}(\nabla f,\cdot, \cdot, \cdot)=0$. As we have seen, this class includes the case of locally conformally flat manifolds.
\begin{teo}\label{main} Let $(M^{n},g)$, $n\geq 3$, be a generalized quasi--Einstein manifold with harmonic Weyl tensor and ${\mathrm W}(\nabla f,\cdot, \cdot, \cdot)=0$. Then, around any regular point of $f$, the manifold $(M^{n},g)$ is locally a warped product with $(n-1)$--dimensional Einstein fibers. \end{teo}
\begin{rem} We notice that the hypothesis ${\mathrm W}(\nabla f,\cdot, \cdot, \cdot)=0$ cannot be removed. Indeed, if we consider the gradient shrinking soliton on $M={\mathbb R}^{k}\times \mathbb{S}^{n-k}$, for $n\geq 4$ and $k\geq 2$, defined by the product metric $g=dx^{1}\otimes dx^{1}+\cdots+dx^{k}\otimes dx^{k}+g_{\mathbb{S}^{n-k}}$ and the potential function $$
f \,=\, \tfrac{1}{2}\,\big( \,|x^{1}|^{2}+\dots+|x^{k}|^{2} \,\big) \,, $$ it is easy to verify that $(M^{n},g)$ has harmonic Weyl tensor, since it is the product of two Einstein metrics, whereas the radial part of the Weyl tensor ${\mathrm W}(\nabla f,\cdot, \cdot, \cdot)$ does not vanish. \end{rem}
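For the reader's convenience, let us sketch why this example is indeed a gradient shrinking soliton (here we normalize $g_{\mathbb{S}^{n-k}}$ so that its Einstein constant equals $1$; this normalization is a choice made only for the sketch). On the flat factor one has ${\mathrm{Ric}}=0$ and $\nabla^{2}f=\sum_{i=1}^{k}dx^{i}\otimes dx^{i}$, while on the spherical factor $\nabla^{2}f=0$ and ${\mathrm{Ric}}=g_{\mathbb{S}^{n-k}}$, so that $$ {\mathrm{Ric}}+\nabla^{2}f \,=\, g\,, $$ i.e. equation~\eqref{gqe} holds with $\mu=0$ and $\lambda=1$.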
\begin{rem} Theorem~\ref{main} generalizes the results obtained for gradient Ricci solitons (see~\cite{caochen} and~\cite{mancat1}) and, recently, for quasi--Einstein manifolds (see~\cite{mancatmazz1}). \end{rem}
As an immediate corollary, we have that a locally conformally flat generalized quasi--Einstein manifold is, locally, a warped product with $(n-1)$--dimensional fibers of constant sectional curvature. In particular, we can prove a local characterization for locally conformally flat Ricci almost solitons (which have been introduced in~\cite{pigrimsett1}), similar to the one for Ricci solitons (\cite{caochen}~\cite{mancat1}).
\begin{cor} Let $(M^{n},g)$, $n\geq 3$, be a locally conformally flat gradient Ricci almost soliton. Then, around any regular point of $f$, the manifold $(M^{n},g)$ is locally a warped product with $(n-1)$--dimensional fibers of constant sectional curvature. \end{cor}
If $n=4$, since a three dimensional Einstein manifold has constant sectional curvature, we get the following
\begin{cor} Let $(M^{4},g)$, be a four dimensional generalized quasi--Einstein manifold with harmonic Weyl tensor and ${\mathrm W}(\nabla f,\cdot, \cdot, \cdot)=0$. Then, around any regular point of $f$, the manifold $(M^{4},g)$ is locally a warped product with three dimensional fibers of constant sectional curvature. In particular, if it is nontrivial, then $(M^{4},g)$ is locally conformally flat. \end{cor}
Now, using the classification of locally conformally flat gradient steady Ricci solitons (see again~\cite{caochen} and \cite{mancat1}), we obtain
\begin{cor} Let $(M^{4},g)$, be a four dimensional gradient steady Ricci soliton with harmonic Weyl tensor and ${\mathrm W}(\nabla f,\cdot, \cdot, \cdot)=0$. Then $(M^{4},g)$ is either Ricci flat or isometric to the Bryant soliton. \end{cor}
\section{Proof of Theorem~\ref{main}}
Let $(M^{n},g)$, $n\geq 3$, be a generalized quasi--Einstein manifold with harmonic Weyl tensor and satisfying ${\mathrm W}(\nabla f,\cdot, \cdot, \cdot)=0$. If $n=3$, we have that $g$ is locally conformally flat, while if $n\geq 4$, one has \begin{align*} 0=&\,\nabla^d{\mathrm W}_{abcd}\\ =&\,\nabla^d\Bigl({\mathrm R}_{abcd}+\frac{{\mathrm R}}{(n-1)(n-2)}(g_{ac}g_{bd}-g_{ad}g_{bc}) - \frac{1}{n-2}({\mathrm R}_{ac}g_{bd}-{\mathrm R}_{ad}g_{bc} +{\mathrm R}_{bd}g_{ac}-{\mathrm R}_{bc}g_{ad})\Bigr)\\ =&\,-\nabla_a{\mathrm R}_{bc}+\nabla_b{\mathrm R}_{ac} +\frac{\nabla_b{\mathrm R}}{(n-1)(n-2)}g_{ac} -\frac{\nabla_a{\mathrm R}}{(n-1)(n-2)}g_{bc}\\ &\,- \frac{1}{n-2}(\nabla_b{\mathrm R}_{ac}-\nabla^d{\mathrm R}_{ad}g_{bc} +\nabla^d{\mathrm R}_{bd}g_{ac}-\nabla_a{\mathrm R}_{bc})\\ =&\,-\frac{n-3}{n-2}(\nabla_a{\mathrm R}_{bc}-\nabla_b{\mathrm R}_{ac}) +\frac{\nabla_b{\mathrm R}}{(n-1)(n-2)}g_{ac} -\frac{\nabla_a{\mathrm R}}{(n-1)(n-2)}g_{bc}\\ &\,+\frac{1}{2(n-2)}(\nabla_a{\mathrm R}\, g_{bc} -\nabla_b{\mathrm R}\, g_{ac})\\ =&\,-\frac{n-3}{n-2}\Bigl[ \nabla_a{\mathrm R}_{bc}-\nabla_b{\mathrm R}_{ac}-\frac{(\nabla_a{\mathrm R}\, g_{bc}-\nabla_b{\mathrm R}\, g_{ac})}{2(n-1)}\Bigr]\\ =&\,-\frac{n-3}{n-2}{\mathrm C}_{cba} \,, \end{align*} where in the fourth equality we have used the contracted second Bianchi identity $\nabla^{d}{\mathrm R}_{ad}=\tfrac{1}{2}\nabla_{a}{\mathrm R}$, and ${\mathrm C}$ is the Cotton tensor $$ {\mathrm C}_{abc} \,=\, \nabla_{c} {\mathrm R}_{ab} - \nabla_{b} {\mathrm R}_{ac} - \tfrac{1}{2(n-1)} \big( \nabla_{c} {\mathrm R} \, g_{ab} - \nabla_{b} {\mathrm R} \, g_{ac} \big)\,. $$ Hence, if $n\geq 3$, harmonic Weyl tensor is equivalent to the vanishing of the Cotton tensor.
Now, the condition ${\mathrm W}(\nabla f,\cdot, \cdot, \cdot)=0$ implies that the conformal metric $$ \widetilde{g} \,= \,e^{-\frac{2}{n-2}f}g $$ has harmonic Weyl tensor. Indeed, from the conformal transformation law for the Cotton tensor (see Appendix), one has that, if $n\geq 4$, then $$ (n-2)\,\widetilde{{\mathrm C}}_{abc} \, = \, (n-2)\,{\mathrm C}_{abc}+\tfrac{1}{n-2}{\mathrm W}_{abcd}\nabla^{d}f = 0 \,, $$ whereas $\widetilde{{\mathrm C}}_{abc}={\mathrm C}_{abc}=0$ in three dimensions. Hence, from the definition of the Cotton tensor, we can observe that the Schouten tensor of $\widetilde{g}$ defined by $$ {\mathrm S}_{\widetilde{g}} \, = \, \tfrac{1}{n-2}\big(\,{\mathrm {Ric}}_{\widetilde{g}}-\tfrac{1}{2(n-1)}\,{\mathrm R}_{\widetilde{g}}\,\,\widetilde{g} \,\big) $$ is a Codazzi tensor, i.e. it satisfies the equation $$ (\nabla_{X}{\mathrm S})\,Y \,=\, (\nabla_{Y}{\mathrm S})\,X\,, \quad\,\hbox{for all}\,\, X,Y\in TM\,. $$ (see~\cite[Chapter~16, Section~C]{besse} for a general overview on Codazzi tensors).
Moreover, from the structural equation of generalized quasi--Einstein manifolds~\eqref{gqe}, the expression of the Ricci tensor of the conformal metric $\widetilde{g}$ takes the form \begin{eqnarray*}
{\mathrm {Ric}}_{\widetilde{g}}&=&{\mathrm {Ric}}_{g} +\nabla^{2} f +\tfrac{1}{n-2} df \otimes df +\tfrac{1}{n-2} \big(\Delta f - |\nabla f|^{2} \big) g\\
&=& \, \big(\mu + \tfrac{1}{n-2}\big) df \otimes df + \tfrac{1}{n-2}\big(\Delta f - |\nabla f|^{2}+(n-2)\lambda\big) \, e^{\frac{2}{n-2}f}\,\widetilde{g}\,. \end{eqnarray*}
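In the first equality we have used the standard transformation law of the Ricci tensor under a conformal change $\widetilde{g}=e^{2\varphi}g$ (here $\Delta={\mathrm{tr}}\,\nabla^{2}$), $$ {\mathrm{Ric}}_{\widetilde{g}} \,=\, {\mathrm{Ric}}_{g}-(n-2)\big(\nabla^{2}\varphi-d\varphi\otimes d\varphi\big)-\big(\Delta\varphi+(n-2)|\nabla\varphi|^{2}\big)\,g\,, $$ applied with $\varphi=-f/(n-2)$: the Hessian term gives $\nabla^{2}f+\tfrac{1}{n-2}\,df\otimes df$ and the trace term gives $\tfrac{1}{n-2}\big(\Delta f-|\nabla f|^{2}\big)g$.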
Then, at every regular point $p$ of $f$, the Ricci tensor of $\widetilde{g}$ either has a unique eigenvalue or has two distinct eigenvalues $\eta_{1}$ and $\eta_{2}$, of multiplicity $1$ and $(n-1)$ respectively. In both cases, $\nabla f/|\nabla f|_{\widetilde{g}}$ is an eigenvector of the Ricci tensor of $\widetilde{g}$. At every point of $\Omega=\{p\in M \,|\, p \,\,\hbox{is a regular point of}\,\, f \,\,\hbox{and}\,\, \eta_{1}(p)\neq \eta_{2}(p)\}$, the Schouten tensor ${\mathrm S}_{\widetilde{g}}$ also has two distinct eigenvalues, $\sigma_{1}$ of multiplicity one and $\sigma_{2}$ of multiplicity $(n-1)$, with the same eigenspaces as $\eta_{1}$ and $\eta_{2}$ respectively. Splitting results for Riemannian manifolds admitting a Codazzi tensor with only two distinct eigenvalues were obtained by Derdzinski~\cite{derdz3} and Hiepko--Reckziegel~\cite{hiepkoreck} (see again~\cite[Chapter~16, Section~C]{besse} for further discussion).
From Proposition 16.11 in~\cite{besse} (see also~\cite{derdz3}) we know that the tangent bundle of a neighborhood of $p$ splits as the orthogonal direct sum of two integrable eigendistributions: a line field $V_{\sigma_{1}}$ and a codimension one distribution $V_{\sigma_{2}}$ with totally umbilic leaves, in the sense that the second fundamental form $\ti{h}$ of each leaf is proportional to the metric $\widetilde{g}$ (with a slight abuse of notation, we denote by $\widetilde{g}$ also the induced metric on the leaves of $V_{\sigma_{2}}$). We will denote by $\ti{\nabla}$ the Levi--Civita connection of the metric $\widetilde{g}$ on $M$ and by $\ti{\nabla}^{\sigma_{2}}$ the Levi--Civita connection of the induced metric $\widetilde{g}$ on the leaves of $V_{\sigma_{2}}$. In a suitable local chart $x^{1},x^{2},\dots,x^{n}$ with $\partial/\partial x^{1} \in V_{\sigma_{1}}$ and $\partial/\partial x^{i} \in V_{\sigma_{2}}$ (in the sequel $i,j,k$ will range over $2,\dots,n$), we have $\widetilde{g}_{1i}=0$. Since $V_{\sigma_{2}}$ is totally umbilic, we have \begin{equation}\label{umbilic} \ti{h}_{ij} \,= \,-\big\langle \ti{\nabla}_{\tfrac{\partial}{\partial x^{i}}} \tfrac{\partial}{\partial x^{j}}, \tfrac{\partial}{\partial x^{1}} \big\rangle = -\ti{\Gamma}^{1}_{ij} \,\,\widetilde{g}_{11} = \tfrac{\ti{{\mathrm H}}}{n-1} \ti{g}_{ij} \,, \end{equation} where $\ti{{\mathrm H}}$ denotes the mean curvature function.
We recall that, from the Codazzi--Mainardi equation (see Theorem 1.72 in~\cite{besse}), one has \begin{equation}\label{codman} \big(\ti{\nabla}^{\sigma_{2}}_{\tfrac{\partial}{\partial x^{i}}}\ti{h}\big) \big(\tfrac{\partial}{\partial x^{j}},\tfrac{\partial}{\partial x^{k}}\big) - \big(\ti{\nabla}^{\sigma_{2}}_{\tfrac{\partial}{\partial x^{j}}}\ti{h}\big) \big(\tfrac{\partial}{\partial x^{i}},\tfrac{\partial}{\partial x^{k}}\big) \, = \, \big\langle \,\ti{{\mathrm {Rm}}} \big(\tfrac{\partial}{\partial x^{i}},\tfrac{\partial}{\partial x^{j}}\big) \tfrac{\partial}{\partial x^{k}}, \tfrac{\partial}{\partial x^{1}} \, \big\rangle \,. \end{equation} On the other hand, tracing with the metric $\widetilde{g}$, and using the umbilic property~\eqref{umbilic}, we get \begin{eqnarray*} \big(\ti{\nabla}^{\sigma_{2}}_{\tfrac{\partial}{\partial x^{i}}}\ti{h}\big) \big(\tfrac{\partial}{\partial x^{j}},\tfrac{\partial}{\partial x^{i}}\big) - \big(\ti{\nabla}^{\sigma_{2}}_{\tfrac{\partial}{\partial x^{j}}}\ti{h}\big) \big(\tfrac{\partial}{\partial x^{i}},\tfrac{\partial}{\partial x^{i}}\big) \,=\, \tfrac{1}{n-1}\,\partial_{j}\ti{{\mathrm H}} - \partial_{j}\ti{{\mathrm H}} \,=\, \tfrac{2-n}{n-1} \, \partial_{j}\ti{{\mathrm H}} \,. \end{eqnarray*} Using equation~\eqref{codman}, we obtain $$ \tfrac{2-n}{n-1} \, \partial_{j}\ti{{\mathrm H}} \,=\, {\mathrm {Ric}}_{\widetilde{g}} \big(\tfrac{\partial}{\partial x^{j}},\tfrac{\partial}{\partial x^{1}} \big) \,=\, 0 \,, $$ which implies that the mean curvature $\ti{{\mathrm H}}$ is constant on each leaf of $V_{\sigma_{2}}$. Now, from Proposition 16.11 (ii) in~\cite{besse}, one has that $$ \ti{{\mathrm H}} = \tfrac{1}{\sigma_{1}-\sigma_{2}}\, \partial_{1} \,\sigma_{2} \,. $$ The facts that both $\ti{{\mathrm H}}$ and $\sigma_{2}$ are constant on each leaf of $V_{\sigma_{2}}$ imply that $\partial_{j} \, \sigma_{1}=0$, for every $j=2,\dots,n$.
This is equivalent to saying that $V_{\sigma_{1}}$ is a geodesic line distribution, which clearly implies $\ti{\Gamma}^{j}_{11}=0$, i.e. $\partial_{j} \,\widetilde{g}_{11} =0$. Equation~\eqref{umbilic} yields $$ \partial_{1} \widetilde{g}_{ij} \,=\, -2 \,\widetilde{g}_{11}\,\ti{\Gamma}^{1}_{ij} = 2\, \tfrac{\ti{{\mathrm H}}}{n-1}\, \widetilde{g}_{ij} \,. $$ Since $\ti{{\mathrm H}}$ is constant along the leaves of $V_{\sigma_{2}}$, one has $$ \partial_{1} \widetilde{g}_{ij}(x^{1},\dots,x^{n}) \,=\, \varphi(x^{1}) \,\widetilde{g}_{ij}(x^{1},\dots,x^{n}) $$ for some function $\varphi$ depending only on the $x^{1}$ variable. Choosing a function $\psi=\psi(x^{1})$, such that $\tfrac{d\,\psi}{dx^{1}} = \varphi$, we have $\partial_{1} (e^{-\psi}\,\widetilde{g}_{ij})=0$, which means that $$ \widetilde{g}_{ij}(x^{1},\dots,x^{n})\,=\,e^{\psi(x^{1})} \, G_{ij}(x^{2},\dots,x^{n}) \,, $$ for some $G_{ij}$. This implies that the manifold $(M^{n},\widetilde{g})$, locally around every regular point of $f$, has a warped product representation with $(n-1)$--dimensional fibers. By the structure of the conformal deformation, this conclusion also holds for the original Riemannian manifold $(M^{n}, g)$. Now, the fact that $g$ has harmonic Weyl tensor implies that the $(n-1)$--dimensional fibers are Einstein manifolds (this computation is carried out in several papers; see for instance~\cite{geba}).
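Collecting the computations above, in the chart $x^{1},\dots,x^{n}$ the metric locally takes the form (a summary of the argument just given, not an additional claim):
$$ \widetilde{g} \,=\, \widetilde{g}_{11}(x^{1})\,(dx^{1})^{2} \,+\, e^{\psi(x^{1})}\,G_{ij}(x^{2},\dots,x^{n})\,dx^{i}\,dx^{j}\,, $$
so that, after a reparametrization of the $x^{1}$ variable, $\widetilde{g}=dt^{2}+w(t)^{2}\,\bar{g}$ for a warping function $w$ and a metric $\bar{g}$ on the $(n-1)$--dimensional fibers.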
This completes the proof of Theorem~\ref{main}.
\section*{Appendix}\label{appendix}
\setcounter{section}{1} \setcounter{teo}{0}
\noindent{\bf Lemma.} \,{\em The Cotton tensor ${\mathrm C}_{abc}$ is pointwise conformally invariant in dimension three, whereas if $n\geq 4$, for $\ti{g}=e^{-2u}g$, we have $$ (n-2)\,\ti{{\mathrm C}}_{abc} \,= \, (n-2)\,{\mathrm C}_{abc}+{\mathrm W}_{abcd}\nabla^{d}u \,. $$}
\begin{proof} The proof is a straightforward computation. Let $\ti{g}=e^{-2u}\,g$, then for the Schouten tensor ${\mathrm S} = \frac{1}{n-2} \big({\mathrm {Ric}} - \frac{1}{2(n-1)}{\mathrm R} \,g \big)$ we have the conformal transformation rule \begin{equation}\label{schconf}
\ti{{\mathrm S}} \,=\, {\mathrm S} + \nabla^{2} u + du \otimes du - \tfrac{1}{2} |\nabla u|^{2} g \,. \end{equation} The Cotton tensor of the metric $\ti{g}$ is defined by $$ (n-2)\, \ti{{\mathrm C}}_{abc} \,=\, \ti{\nabla}_{c} \ti{{\mathrm S}}_{ab} - \ti{\nabla}_{b} \ti{{\mathrm S}}_{ac} \,. $$ Moreover one can see that \begin{eqnarray*} \ti{\nabla}_{c} \ti{{\mathrm S}}_{ab} &=& \nabla_{c} {\mathrm S}_{ab} + \nabla_{c} \nabla_{a} \nabla_{b} u +\nabla_{c} \nabla_{a} u \, \nabla_{b} u + \nabla_{c} \nabla_{b} u \, \nabla_{a} u - \nabla_{c} \nabla_{d} u\, \nabla_{d} u \, g_{ab}+\\ && + \, \ti{{\mathrm S}}_{bc} \nabla_{a} u + \ti{{\mathrm S}}_{ac} \nabla_{b} u + \ti{{\mathrm S}}_{ab} \nabla_{c} u - \ti{{\mathrm S}}_{bd} \nabla_{d} u \, g_{ac} - \ti{{\mathrm S}}_{ad} \nabla_{d} u \, g_{bc} \,. \end{eqnarray*} Computing in the same way the term $\ti{\nabla}_{b} \ti{{\mathrm S}}_{ac}$, substituting in the previous formula $\ti{{\mathrm S}}$ with~\eqref{schconf} and using the fact that \begin{eqnarray*} \nabla_{c} \nabla_{b} \nabla_{a} u - \nabla_{b} \nabla_{c} \nabla_{a} u &=& {\mathrm R}_{cbad} \nabla^{d} u \,\,\, = \,\,\, {\mathrm R}_{abcd} \nabla^{d} u \\ &=& {\mathrm W}_{abcd} \nabla^{d} u + {\mathrm S}_{ac} \nabla_{b} u - {\mathrm S}_{cd} \nabla_{d} u \, g_{ab} +{\mathrm S}_{bd} \nabla_{d} u \, g_{ac} - {\mathrm S}_{ab} \nabla_{c} u \,, \end{eqnarray*} (we recall that ${\mathrm W}$ is zero in dimension three) one obtains the result. \end{proof}
\begin{ackn} The author is partially supported by the Italian project FIRB--IDEAS ``Analysis and Beyond''. The author wishes to thank Sara Rizzi for helpful remarks and discussions. \end{ackn}
\end{document}
\begin{document}
\title{Compact endomorphisms of $H^{\infty}(D)$}
\maketitle
Let $D$ be the open unit disc and, as usual, let $H^{\infty}(D)$ be the algebra of bounded analytic functions on $D$
with $\|f\|=\sup_{z \in D} |f(z)|.$ With pointwise addition and multiplication, $H^{\infty}(D)$ is a well known uniform algebra. In this note we characterize the compact endomorphisms of $H^{\infty}(D)$ and determine their spectra.
We show that although not every endomorphism $T$ of $H^{\infty}(D)$ has the form $T(f)(z)=f(\phi(z))$ for some analytic $\phi$ mapping $D$ into itself, if $T$ is compact, there is an analytic function $\psi:D \rightarrow D$ associated with $T$. In the case where $T$ is compact, the derivative of $\psi$ at its fixed point determines the spectrum of $T.$
The structure of the maximal ideal space $M_{H^{\infty}}$ is well known. Evaluation at a point $z \in D$ gives rise to an element of $M_{H^{\infty}}$ in the natural way. The remainder of $M_{H^{\infty}}$ consists of singleton Gleason parts and Gleason parts which are analytic discs. An analytic disc, $P(m)$, containing a point $m\in M_{H^{\infty}}$, is a subset of $M_{H^{\infty}}$ for which there exists a continuous bijection $L_m:D \rightarrow P(m)$ such that $L_m(0)=m$ and $\hat {f}(L_m(z))$ is analytic on $D$ for each $f \in H^{\infty}(D)$. Moreover, the map $L_m$ has the form $\displaystyle L_m(z)=w^*\lim \frac{z+z_{\alpha}} {1 + \overline{z_{\alpha}}z}$ for some net $z_{\alpha} \rightarrow m$ in the w* topology, whence
$\displaystyle \hat{f}(L_m(z))=\lim f(\frac{z+z_{\alpha}}{1+\overline{z_{\alpha}}z})$ for all $f \in H^{\infty}(D)$. A fiber $M_{\lambda}$ over some $\lambda \in \overline{D}\setminus D$ is the zero set in $M_{H^{\infty}}$ of the function $z-\lambda.$ Each part, distinct from $D$, is contained in exactly one fiber $M_{\lambda}$. With no loss of generality we let $\lambda = 1.$ We recall, too, that two elements $n_1$ and $n_2$ are in the same part if, and only if,
$\|n_1 - n_2\| < 2$, where $\|\cdot\|$ is the norm in the dual space $H^{\infty}(D)^*.$
Now let $T$ be an endomorphism of $H^{\infty}(D)$, i.e. $T$ is a (necessarily) bounded linear map of $H^{\infty}(D)$ to itself with $T(fg)=T(f)T(g)$ for all $f, g \in H^{\infty}(D)$. For a given $T$, either $T$ has the form $Tf(z)=f(\omega(z))$ for some analytic map $\omega:D \rightarrow D$, or $Tf=\hat{f}(n)1$ for some $n \in M_{H^{\infty}},$ or there exists an $m \in M_{H^{\infty}}$, a net $z_{\alpha} \rightarrow m$ in the w* topology and an analytic function $\tau:D \rightarrow D$, with $\tau(0)=0$ for which $Tf(z)=\hat{f}(L_m(\tau(z)))$ \cite{gar}. Further, on general principles, if $T$ is an endomorphism of $H^{\infty}(D)$ there exists a w* continuous map $\phi:M_{H^{\infty}} \rightarrow M_{H^{\infty}}$ with $\widehat{Tf}(n)=\hat{f}(\phi(n))$ for all $n \in M_{H^{\infty}}$. In the last case, $\phi(z)=L_m(\tau(z))$ for $z \in D$.
For a given endomorphism $T$, if the induced map $\phi$ maps $D$ to itself, then $T$ is commonly called a {\em composition operator}. Compact composition operators on $H^{\infty}$ were completely characterized in \cite{sw}. However, in general, $L_m(\tau(z))$ need not be in $D$, and so not every endomorphism of $H^{\infty}(D)$ is a composition operator. It is these endomorphisms that we discuss here. Trivially, for any $n \in M_{H^{\infty}} \setminus D$, the map $T$ defined by $Tf(z)=\hat{f}(n)1$ is a compact endomorphism of $H^{\infty}(D)$ which is not a composition operator.
Now let $P(m)$ be an analytic part and let $T$ be an endomorphism defined by $Tf(z)=\hat{f}(L_m(\tau(z)))$ as discussed above. Also suppose that $\displaystyle \phi:M_{H^{\infty}} \rightarrow M_{H^{\infty}}$ is such that $\widehat{Tf}=\hat{f} \circ \phi.$ Assume that $T$ is compact. We claim that $\overline{\tau(D)}$ is a compact subset of $D$ in the Euclidean topology. Indeed, if we regard the endomorphism $T$ as an operator from $H^{\infty}(D)$ into $C(M_{H^{\infty}})$, then $T$ is compact if, and only if, $\phi$ is w* to norm continuous on $M_{H^{\infty}}$ \cite{ds}. Since $M_{H^{\infty}}$ is itself compact and connected (in the w* topology), $\phi(M_{H^{\infty}})$ must be compact and connected in the norm topology on $M_{H^{\infty}}$ and so $\phi$ maps $M_{H^{\infty}}$ into a norm compact connected subset of $P(m).$ Therefore the range $\phi(D)=L_m(\tau(D))$ is contained in a norm compact subset of $P(m)$, and further, since $L_m^{-1}$ is an isometry in the Gleason norms on $P(m)$ and $D$ \cite{jff}, $\tau(D)=L_m^{-1}(\phi(D))$ is contained in a compact subset of $D$ in the norm topology on $D$. Since the norm, Euclidean and w* topologies on $D$ coincide, $\overline{\tau(D)}$ is a compact subset of $D$ in these three topologies. As a consequence, $\hat{\tau}(M_{H^{\infty}})\subset D.$
Next consider two maps of $H^{\infty}(D)$ into itself. The first, $C_{L_m}$ is defined by $C_{L_m}(f)(z)=\hat{f}(L_m(z))$, and the second $C_{\tau}$ by $C_{\tau}(f)(z)=f(\tau(z)).$ Then $(C_{L_m} \circ C_{\tau})(f)(z)=C_{L_m}(f \circ \tau)(z)=\widehat{f \circ \tau} (L_m(z))$ and $(C_{\tau} \circ C_{L_m})(f)(z)=\hat{f}(L_m(\tau(z)))=Tf(z).$ But if $B$ is a Banach space and $S_1$ and $S_2$ are any two bounded linear maps from $B \rightarrow B$, the spectrum $\sigma(S_1S_2)\setminus\{0\}=\sigma(S_2S_1)\setminus\{0\}$. Thus we see that
$\sigma(T)\setminus\{0\}= \sigma(C_{L_m} \circ C_{\tau})\setminus\{0\}.$
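The identity $\sigma(S_1S_2)\setminus\{0\}=\sigma(S_2S_1)\setminus\{0\}$ invoked above can be checked numerically; the following is a minimal sketch with random finite-dimensional operators (the matrices are illustrative stand-ins, not the operators of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
S1 = rng.standard_normal((3, 5))   # S1 : R^5 -> R^3
S2 = rng.standard_normal((5, 3))   # S2 : R^3 -> R^5

ev12 = np.linalg.eigvals(S1 @ S2)  # 3 x 3 product
ev21 = np.linalg.eigvals(S2 @ S1)  # 5 x 5 product: same nonzero spectrum, extra zeros

nz12 = np.sort_complex(ev12[np.abs(ev12) > 1e-8])
nz21 = np.sort_complex(ev21[np.abs(ev21) > 1e-8])
assert len(nz12) == len(nz21)      # S2 S1 only gains zero eigenvalues
assert np.allclose(nz12, nz21)     # nonzero spectra coincide
```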
Since $f$ is analytic on a neighborhood of the range of $\hat{\tau}$, which is a subset of $D$, a standard functional calculus argument gives $\widehat{f \circ \tau }(L_m(z))=f(\hat{\tau}(L_m(z)))$. If we let $\psi(z)=\hat{\tau}(L_m(z))$ we see that $C_{L_m} \circ C_{\tau}$ is a compact composition operator in the usual sense, and so if $z_0 \in D$ is the unique fixed point of $\psi$, and ${\bf N}$ is the set of positive integers, then $\sigma(T)=\{(\psi'(z_0))^n:n \in {\bf N}\} \cup \{0,1\}.$
To summarize, we have shown the following.
Theorem:\ If $T$ is a compact endomorphism of $H^{\infty}(D)$, then either $T$ has one dimensional range, i.e. $Tf=\hat{f}(n)1$ for some $n \in M_{H^{\infty}},$ or $T$ is a composition operator in the usual sense, or $T$ has the form $Tf(z)=\hat{f}(L_m(\tau(z)))$ where $\tau$ is described above. In the last case, there is a compact composition operator $C_{\psi}$, such that $\sigma(T)=\sigma(C_{\psi})=\{(\psi'(z_0))^n:n \in {\bf N}\} \cup \{0,1\}$ where $z_0 \in D$ is the unique fixed point of $\psi.$
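The spectral formula can be illustrated concretely. Take the hypothetical choice $\psi(z)=az+b$ with $|a|+|b|<1$ (so $\psi(D)$ is relatively compact in $D$), fixed point $z_0=b/(1-a)$ and $\psi'(z_0)=a$; on the monomial basis the matrix of $C_{\psi}$ is triangular, and its truncations already exhibit the spectrum $\{a^{n}\}\cup\{1\}$:

```python
import numpy as np
from math import comb

a, b = 0.5, 0.1                    # psi(z) = a z + b; fixed point z0 = b/(1-a), psi'(z0) = a
N = 8                              # truncate to the basis 1, z, ..., z^(N-1)
M = np.zeros((N, N))
for n in range(N):                 # column n = coefficients of psi(z)^n = (a z + b)^n
    for k in range(n + 1):
        M[k, n] = comb(n, k) * a**k * b**(n - k)

eig = np.sort(np.linalg.eigvals(M).real)[::-1]
assert np.allclose(eig, [a**n for n in range(N)])   # spectrum {psi'(z0)^n : n >= 0}
```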
We conclude with two examples showing differences between composition operators and general endomorphisms.
(a) With the same terminology and symbols, suppose $\hat{\tau}$ is constant on $P(m)$, i.e. $\hat{\tau}(P(m))=\{\hat{\tau}(m)\}.$ Since $T$ is compact, $\hat{\tau}(m)\in D.$ Then using $C_{\tau}$ and $C_{L_m}$ as before, we show that $T^2f=\hat{f}(n)1$ for some $n \in P(m).$ Indeed, $(C_{L_m} \circ C_{\tau})f=f(t_0)1$ where $t_0=\hat{\tau}(m) \in D$. Then we see that \[T^2 f=[(C_{\tau} \circ C_{L_m}) \circ (C_{\tau} \circ C_{L_m})]f=\] \[[C_{\tau} \circ (C_{L_m} \circ C_{\tau}) \circ C_{L_m}]f= [C_{\tau} \circ (C_{L_m} \circ C_{\tau})] (\hat{f} \circ L_m)= C_{\tau}(\hat{f} (L_m(t_0)) 1)=\hat{f}(L_m(t_0)) 1.\] Letting $n=L_m(t_0)$ gives the result.
One way to have $\hat{\tau}$ constant on $P(m)$ is for $\tau$ to be continuous at $1$ in the usual sense.
A more interesting example, perhaps, is to define $\tau$ by $\displaystyle \tau(z)= \frac{1}{2} z e^{\frac{z+1}{z-1}}$, and $m \in M_{H^{\infty}}$ as a w* limit of a real net $x_{\alpha}$ approaching $1$. Then $\displaystyle \hat{\tau}(L_m(z))=\lim_{\alpha} \tau(\frac{z+x_{\alpha}}{1+\overline{x_{\alpha}}z})=0$, and so $T^2 f=\hat{f}(m)1$ for all $f\in H^{\infty}(D)$.
In both cases, $\sigma(T)=\{0,1\}$.
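The vanishing of $\hat{\tau}\circ L_m$ in the second example can also be seen numerically: for a fixed $z\in D$, $\tau\big(\frac{z+x}{1+xz}\big)\rightarrow 0$ as the real parameter $x\rightarrow 1^-$. A sketch (the point $z$ and the values of $x$ are arbitrary choices):

```python
import cmath

def tau(z):
    return 0.5 * z * cmath.exp((z + 1) / (z - 1))

z = 0.3 + 0.4j                          # a fixed point of the disc, |z| = 0.5
vals = [abs(tau((z + x) / (1 + x * z))) for x in (0.5, 0.9, 0.99)]
assert all(v2 < v1 for v1, v2 in zip(vals, vals[1:]))   # decreasing along the net
assert vals[-1] < 1e-50                                 # tau(L_m(z)) = 0 in the limit
```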
(b) Finally, let $\{z_n\}$ be an interpolating Blaschke sequence approaching $1$, $z_1=0$, with $m$ in the w* closure of $\{z_n\}$ and $B$ the corresponding Blaschke product. If $\displaystyle \tau(z)=\frac{1}{2}B(z)$, then it is well known \cite{gar} that $\displaystyle (\hat{\tau} \circ L_m)'(0)=\frac{1}{2}(\hat{B} \circ L_m)'(0) \neq 0.$ This, then, is an example of a compact endomorphism of $H^{\infty}(D)$ which is not a composition operator but whose spectrum properly contains $\{0,1\}.$
{\sf School of Mathematical Sciences
University of Nottingham
Nottingham NG7 2RD, England
email: Joel.Feinstein@nottingham.ac.uk
and
Department of Mathematics
University of Massachusetts at Boston
100 Morrissey Boulevard
Boston, MA 02125-3393
email: hkamo@cs.umb.edu}
{\sf This research was supported by EPSRC grant GR/M31132}
\end{document}
\begin{document}
\title{A Unified View of Quantum Correlations and Quantum Coherence} \author{Tan Kok Chuan Bobby, Hyukjoon Kwon, Chae-Yeun Park, and Hyunseok Jeong}
\affiliation{Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742, Korea} \date{\today}
\begin{abstract} In this paper, we argue that quantum coherence in a bipartite system can be contained either locally or in the correlations between the subsystems. The portion of quantum coherence contained within correlations can be viewed as a kind of quantum correlation which we call correlated coherence. We demonstrate that the framework provided by correlated coherence allows us to retrieve the same concepts of quantum correlations as defined by the asymmetric and symmetrized versions of quantum discord as well as quantum entanglement, thus providing a unified view of these correlations. We also prove that correlated coherence can be formulated as an entanglement monotone, thus demonstrating that entanglement may be viewed as a specialized form of coherence. \end{abstract}
\pacs{} \maketitle
\section{Introduction} Quantum mechanics admits a superposition between different physical states. A superposed quantum state is described by a pure state and is completely different in nature from a classical stochastic mixture of states, otherwise called a mixed state. In the parlance of quantum mechanics, the former is usually referred to as a coherent superposition, the latter as an incoherent classical mixture.
A particularly illuminating example of quantum coherence in action is the classic double slit experiment. In the quantum version, single electrons, passing through a double slit one at a time, form an interference fringe upon emerging, despite not interacting with one another. An explanation of this phenomenon requires a coherent superposition of two travelling waves emerging from both slits. Such an effect is impossible to explain using only incoherent classical mixtures. Following the birth of quantum theory, physical demonstrations of quantum coherence arising from superpositions of many different quantum systems such as electrons, photons, atoms, mechanical modes, and hybrid systems have been achieved \cite{Hornberger12, Wineland13, Aspelmeyer14}.
Recent developments in our understanding of quantum coherence have come from the burgeoning field of quantum information science. One important area of study that quantum information researchers concern themselves with is the understanding of quantum correlations. It turns out that in a multipartite setting, quantum mechanical effects allow remote laboratories to collaborate and perform tasks that would otherwise be impossible using classical physics \cite{NielsenChuang}. Historically, the most well studied quantum correlation is quantum entanglement \cite{EPR1935,Horodecki2001, Werner1989}. Subsequent developments of the idea led to the formulation of quantum discord \cite{Ollivier2001, Henderson2001}, and its symmetrized version \cite{Oppenheim2002, Modi2010, Luo2008}, as more generalized forms of quantum correlations that include quantum entanglement. The development of such ideas of the quantumness of correlations has led to a plethora of quantum protocols such as quantum cryptography \cite{Ekert91}, quantum teleportation \cite{Bennett1993}, quantum superdense coding \cite{Bennett2001}, quantum random access codes \cite{Chuan2013}, remote state preparation \cite{Dakic2012}, random number generation \cite{Pironio2010}, and quantum computing \cite{Raussendorf2001, Datta2008}, amongst others. Quantum correlations have also proven useful in the study of macroscopic quantum objects \cite{Jeong2015}.
Meanwhile, quantitative theories of entanglement \cite{Plenio07, Vedral98} have been formulated by characterizing and quantifying entanglement as a resource to achieve certain tasks that are otherwise impossible classically. Building upon this, Baumgratz et al. \cite{Baumgratz14} recently proposed a resource theory of quantum coherence. Recent developments have since unveiled interesting connections between quantum coherence and correlation, such as their interconversion with each other \cite{Ma15, Streltsov15} and trade-off relations \cite{Xi15}.
In this paper, we demonstrate that quantum correlation can be understood in terms of the coherence contained solely between subsystems. In contrast to previous studies which established indirect relationships between quantum correlation and coherence \cite{Ma15, Streltsov15, Xi15}, our study establishes a more direct connection between the two and provides a unified view of quantum correlations which includes quantum discord and entanglement using the framework of quantum coherence.
\section{Preliminaries} \subsection{Bipartite system and local basis} In this paper, we will frequently refer to a bipartite state which we denote $\rho_{AB}$, where $A$ and $B$ refer to local subsystems held by different laboratories. Following convention, we say the subsystems $A$ and $B$ are held by Alice and Bob respectively. The local state of Alice is obtained by performing a partial trace on $\rho_{AB}$, and is denoted by $\rho_A = \mathrm{Tr}_B(\rho_{AB})$, and $\{ \ket{i}_A \}$ is a complete local basis of Alice's system. Bob's local state and local basis are similarly defined. In general, the systems Alice and Bob hold may be composite, such that $A=A_1 A_2 \cdots A_N$ and $B = B_1 B_2 \cdots B_M$, so the total state may identically be denoted by $\rho_{A_1A_2\cdots A_N B_1 B_2 \cdots B_M}$.
\subsection{Quantum coherence} We will adopt the axiomatic approach for coherence measures as shown in Ref.~\cite{Baumgratz14}. For a fixed basis set $\{ \ket{i} \}$, the set of incoherent states $\cal I$ is the set of quantum states with diagonal density matrices with respect to this basis. Then a reasonable measure of quantum coherence $\mathcal{C}$ should satisfy the following properties: (C1) $C(\rho) \geq 0$ for any quantum state $\rho$, and equality holds if and only if $\rho \in \cal I$. (C2a) Non-increasing under incoherent completely positive and trace preserving maps (ICPTP) $\Phi$, i.e., $C(\rho) \geq C(\Phi(\rho))$. (C2b) Monotonicity for average coherence under selective outcomes of ICPTP: $C(\rho) \geq \sum_n p_n C(\rho_n)$, where $\rho_n = \hat{K}_n \rho \hat{K}_n^\dagger/p_n$ and $p_n = \mbox{Tr} [\hat{K}_n \rho \hat{K}^\dagger_n ]$ for all $\hat{K}_n$ with $\sum_n \hat{K}^\dagger_n \hat{K}_n = \mathbb 1$ and $\hat{K}_n {\cal I} \hat{K}_n^\dagger \subseteq \cal I$. (C3) Convexity, i.e., $\lambda C(\rho) + (1-\lambda) C(\sigma) \geq C(\lambda \rho + (1-\lambda) \sigma)$, for any density matrices $\rho$ and $\sigma$ with $0\leq \lambda \leq 1$. In this paper, we will employ the $l_1$-norm of coherence, which is defined by $\coh{\rho} \coloneqq \sum _{i\neq j} \abs{ \bra{i} \rho \ket{j}}$ for any given basis set $\{ \ket{i} \}$ (otherwise called the reference basis). It can be shown that this definition satisfies all the properties mentioned \cite{Baumgratz14}.
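The $l_1$-norm of coherence is straightforward to evaluate from a density matrix in the reference basis; a minimal numerical sketch (the function name is ours):

```python
import numpy as np

def l1_coherence(rho):
    """C_{l1}(rho): sum of |off-diagonal entries| of rho in the reference basis."""
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

plus = np.array([1.0, 1.0]) / np.sqrt(2)                    # |+> = (|0> + |1>)/sqrt(2)
assert np.isclose(l1_coherence(np.outer(plus, plus)), 1.0)  # maximally coherent qubit
assert np.isclose(l1_coherence(np.eye(2) / 2), 0.0)         # incoherent mixture
```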
\subsection{Local operations and classical communication (LOCC)} In addition, we will also reference local operations and classical communication (LOCC) protocols in the context of the resource theory of entanglement. LOCC protocols allow for two different types of operation. First, Alice and Bob are allowed to perform quantum operations, but only locally on their respective subsystems. Second, they are also allowed classical, but otherwise unrestricted communication between them. LOCC operations are especially important in the characterization of quantum entanglement, which typically does not increase under such operations. Measures of entanglement satisfying this are referred to as LOCC monotones \cite{Vidal00}.
\section{Maximal Coherence Loss}
Before establishing the connection between quantum correlation and coherence, we first consider the measurement that leads to the maximal coherence loss in the system of interest. For a monopartite system, the solution to this is trivial. For any quantum state $\rho = \sum_{i,j} \rho_{i,j} \ketbra{i}{j}$ with a reference basis $\{\ket{i}\}$, it is clear that the measurement that maximally removes coherence from the system is the projective measurement $\Pi(\rho) = \sum_{i} \ketbra{i}{i} \rho \ketbra{i}{i}$. This measurement leaves behind only the diagonal terms of $\rho$, so $\coh{\Pi(\rho )} = 0$, which is the minimum coherence any state can have.
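The dephasing map $\Pi$ can be sketched directly: it keeps only the diagonal of $\rho$, so the resulting state is trace-one and has zero $l_1$-coherence (a sketch on a randomly generated density matrix):

```python
import numpy as np

def dephase(rho):
    # Pi(rho) = sum_i |i><i| rho |i><i| keeps only the diagonal of rho
    return np.diag(np.diag(rho))

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
rho = X @ X.conj().T
rho /= np.trace(rho).real                      # a random 3 x 3 density matrix

out = dephase(rho)
assert np.isclose(np.trace(out).real, 1.0)     # trace preserving
off = np.abs(out).sum() - np.abs(np.diag(out)).sum()
assert np.isclose(off, 0.0)                    # no coherence left after Pi
```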
A less obvious result for a bipartite state is the following:
\begin{proposition} \label{maxCoh} For any bipartite state $\rho_{AB} = \sum_{i,j,k,l}\rho_{i,j,k,l} \ket{i,j}_{AB}\bra{k,l}$ where the coherence is measured with respect to the local reference bases $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$, the projective measurement on subsystem $B$ that induces maximal coherence loss is $\Pi_B(\rho_{AB}) = \sum_{j} \left( \openone_A \otimes \ket{j}_B\bra{j} \right) \rho_{AB} \left( \openone_A \otimes \ket{j}_B\bra{j} \right)$. \end{proposition}
\begin{proof}
We begin by using the spectral decomposition of a general bipartite quantum state $\rho_{AB} = \sum_n p_n \ket{\psi^n}_{AB} \bra{\psi^n}$. Assume that the subsystems have local reference bases $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$ such that $\rho_{AB}= \sum_n \sum_{i,j,k,l} p_n \psi^n_{i,j} (\psi^n_{k,l})^*\ket{i,j}_{AB}\bra{k,l}$. The coherence of the system is measured with respect to these bases. To reduce clutter, we remove the subscripts pertaining to the subsystems $AB$ for the remainder of the proof. Unless otherwise stated, it should be clear from the context which subsystem every operator belongs to.
Consider some complete basis on $B$, $\{ \ket{\lambda_m} \}$, and the corresponding projective measurement $\Pi_B(\rho) = \sum_m (\openone\otimes \ketbra{\lambda_m}{\lambda_m}) \,\rho\, (\openone\otimes \ketbra{\lambda_m}{\lambda_m})$. Computing the matrix elements, we get:
\begin{align*} \braket{i,j}{\Pi_B(\rho)}{k,l} &\coloneqq \left[ \Pi_B(\rho) \right]_{i,j,k,l} \\ &= \sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{j}{\lambda_m \rangle\langle \lambda_m}{l} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}. \end{align*}
Note that minimizing the absolute sum of all the matrix elements will also minimize the coherence, since the diagonal elements of any density matrix sum to 1 and are non-negative. Consider the absolute sum of all the matrix elements of $\Pi_B(\rho)$:
\begin{align*} \sum_{i,j,k,l}\abs{\left[ \Pi_B(\rho) \right]_{i,j,k,l}} &= \sum_{i,j,l,k} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{j}{\lambda_m \rangle\langle \lambda_m}{l} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\ &= \left( \sum_{ \substack{i,k \\ j=l}} + \sum_{\substack{i,k \\ j\neq l}} \right) \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{j}{\lambda_m \rangle\langle \lambda_m}{l} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\ &\geq \sum_{\substack{i,k \\ j=l}} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{j}{\lambda_m \rangle\langle \lambda_m}{l} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\ &= \sum_{i,k,j} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{j}{\lambda_m \rangle\langle \lambda_m}{j} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\ &\geq \sum_{i,k} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \sum_j \braket{j}{\lambda_m \rangle\langle \lambda_m}{j} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\ &= \sum_{i,k} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\ &= \sum_{i,k} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\delta_{q,p}} \displaybreak \\ &= \sum_{i,k} \abs{\sum_n p_n \sum_{p} \psi^n_{i,p} (\psi^n_{k,p})^*} \end{align*}
The first inequality comes from omitting non-negative terms in the sum, while the second inequality comes from moving a summation inside the absolute value function. Note that the final equality is exactly the absolute sum of the elements when $\ket{\lambda_m} = \ket{m}$ since:
$$ \sum_{j} \left( \openone_A \otimes \ket{j}_B\bra{j} \right) \rho_{AB} \left( \openone_A \otimes \ket{j}_B\bra{j} \right) = \sum_{i,j,k} \sum_n p_n \psi^n_{i,j} (\psi^n_{k,j})^* \ketbra{i,j}{k,j}. $$ This proves the proposition. \end{proof} Since any $N$-partite state $\rho_{A_1A_2\ldots A_N}$ admits a bipartition $\rho_{A_1A_2\ldots A_N} = \rho_{A'A_N}$ where $A' = A_1\ldots A_{N-1}$, we also get the following corollary:
\begin{pcorollary} For any $N$-partite state $\rho_{A_1A_2\ldots A_N}$ where the coherence is measured with respect to the local reference bases $\{\ket{i}_{A_k}\}$, $k = 1,2, \ldots, N$, the projective measurement on subsystem $A_k$ that induces maximal coherence loss is the projective measurement onto the local basis $\{\ket{i}_{A_k}\}$. \end{pcorollary}
\section{Local and Correlated Coherence}
Now consider a bipartite state $\rho_{AB}$, with total coherence $\coh{\rho_{AB}}$ with respect to local reference bases $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$. Then $\coh{\rho_A}$ can be interpreted as the coherence that is local to $A$. Similarly, $\coh{\rho_B}$ is the portion of the coherence that is local to $B$. In general, the sum of the local coherences is not necessarily the same as the total coherence in the system. It is therefore reasonable to suppose that a portion of the quantum coherence is stored not locally, but within the correlations of the system itself.
\begin{definition} [Correlated Coherence] With respect to local reference bases $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$, Correlated Coherence for a bipartite quantum system is given by subtracting local coherences from the total coherence:
$$ \cc{\rho_{AB}} \coloneqq \coh{\rho_{AB}}- \coh{\rho_{A}} - \coh{\rho_{B}} $$
where $\rho_{A}$ and $\rho_{B}$ are the reduced density matrices of $A$ and $B$ respectively. \end{definition}
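A numerical sketch of the definition on a maximally entangled state, for which the reduced states are maximally mixed and all the coherence is therefore correlated (helper names ours):

```python
import numpy as np

def l1_coh(rho):
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

def correlated_coherence(rho, dA, dB):
    rt = rho.reshape(dA, dB, dA, dB)
    rho_A = np.trace(rt, axis1=1, axis2=3)     # partial trace over B
    rho_B = np.trace(rt, axis1=0, axis2=2)     # partial trace over A
    return l1_coh(rho) - l1_coh(rho_A) - l1_coh(rho_B)

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)             # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell)
assert np.isclose(correlated_coherence(rho, 2, 2), 1.0)   # all coherence is correlated
```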
Further reinforcing the idea that the local coherences form only a portion of the total coherence present in a quantum system, we prove the following property:
\begin{theorem} For any bipartite quantum state $\rho_{AB}$, $\cc{\rho_{AB}} \geq 0 $ (i.e. Correlated Coherence is always non-negative). \end{theorem}
\begin{proof} Let $\rho_{AB}= \sum_n \sum_{i,j,k,l} p_n \psi^n_{i,j} (\psi^n_{k,l})^*\ket{i,j}_{AB}\bra{k,l}$, then
\begin{align*} \cc{\rho_{AB}} &= \coh{\rho_{AB}}- \coh{\rho_{A}} - \coh{\rho_{B}} \\ &= \sum_{\substack{(i,j) \\ \neq (k,l)}} \abs{\sum_n p_n \psi^n_{i,j} (\psi^n_{k,l})^* } - \sum_{i \neq k } \abs{\sum_n p_n\sum_{j} \psi^n_{i,j} (\psi^n_{k,j})^* } -\sum_{j \neq l} \abs{\sum_n p_n \sum_i \psi^n_{i,j} (\psi^n_{i,l})^* } \\ &\geq \sum_{\substack{(i,j) \\ \neq (k,l)}} \abs{\sum_n p_n \psi^n_{i,j} (\psi^n_{k,l})^* } - \sum_{\substack{j \\ i \neq k }} \abs{\sum_n p_n \psi^n_{i,j} (\psi^n_{k,j})^* } -\sum_{\substack{ i \\ j \neq l}} \abs{\sum_n p_n \psi^n_{i,j} (\psi^n_{i,l})^* } \\ &= \left( \sum_{\substack{(i,j) \\ \neq (k,l)}} - \sum_{\substack{j = l \\ i \neq k }}-\sum_{\substack{ i = k \\ j \neq l}} \right ) \abs{\sum_n p_n \psi^n_{i,j} (\psi^n_{k,l})^* }. \end{align*}
The inequality comes from moving a summation outside of the absolute value function. Since $\sum_{\substack{(i,j) \\ \neq (k,l)}} = \sum_{\substack{j \neq l \\ i \neq k }}+\sum_{\substack{j = l \\ i \neq k }}+\sum_{\substack{ i = k \\ j \neq l}}$, the final equality above is always a sum of non-negative values, which completes the proof.
\end{proof}
\section{Correlated Coherence and Quantum Discord}
Of particular interest to the study of quantum correlations is the idea that certain correlations are quantum and certain correlations are classical. In this section, we will demonstrate that Correlated Coherence is able to unify many of these concepts of quantumness under the same framework.
First, note that in our definition of Correlated Coherence, the choice of reference bases is not unique, while most definitions of quantum correlations are independent of specific basis choices. However, we can retrieve basis independence via a very natural choice of local bases. For every bipartite state $\rho_{AB}$, the reduced density matrices $\rho_A$ and $\rho_B$ have eigenbases $\{ \ket{\alpha_i} \}$ and $\{\ket{\beta_j}\}$ respectively. By choosing these local bases, $\rho_A$ and $\rho_B$ are both diagonal and the local coherences are zero. The implication of this is that for such a choice, \textit{the coherence in the system is stored entirely within the correlations}. Since this can be done for any $\rho_{AB}$, Correlated Coherence with respect to these bases becomes a state dependent property as required. For the rest of the paper, unless otherwise stated, we will assume that the choice of local bases for the calculation of Correlated Coherence will always be the local eigenbases of Alice and Bob.
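The choice of local eigenbases can be implemented by conjugating the state with $U_A\otimes U_B$, where the columns of $U_A$, $U_B$ diagonalize the reduced states; the local coherences then vanish and the total coherence equals the Correlated Coherence (a sketch on a random two-qubit state):

```python
import numpy as np

def l1_coh(rho):
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

def reduced(rho):
    rt = rho.reshape(2, 2, 2, 2)
    return np.trace(rt, axis1=1, axis2=3), np.trace(rt, axis1=0, axis2=2)

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = X @ X.conj().T
rho /= np.trace(rho).real                      # random two-qubit state

rho_A, rho_B = reduced(rho)
_, UA = np.linalg.eigh(rho_A)                  # columns: local eigenbasis of A
_, UB = np.linalg.eigh(rho_B)
U = np.kron(UA, UB)
rho_loc = U.conj().T @ rho @ U                 # same state, written in local eigenbases

rA, rB = reduced(rho_loc)
assert np.isclose(l1_coh(rA), 0.0) and np.isclose(l1_coh(rB), 0.0)  # no local coherence
assert l1_coh(rho_loc) >= -1e-12               # total = correlated coherence, >= 0
```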
We first consider the definition of a quantum correlation in the symmetrized version of quantum discord. Under the framework of symmetric discord, a state contains quantum correlations when it cannot be expressed in the form $\rho_{AB} = \sum_{i,j} p_{i,j}\ket{i}_A\bra{i}\otimes \ket{j}_B \bra{j}$, where $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$ are sets of orthonormal vectors. Any such state has zero symmetric discord by definition.
We prove the following theorem:
\begin{theorem} [Correlated Coherence and Symmetric Quantum Discord] \label{symDisc} For a given state $\rho_{AB}$, $\cqc{\rho_{AB}} = 0$ iff $\rho_{AB} = \sum_{i,j} p_{i,j}\ket{i}_A\bra{i}\otimes \ket{j}_B \bra{j}$. \end{theorem}
\begin{proof}
If $\{ \ket{i}_A \}$ and $\{\ket{j}_B\}$ are the eigenbases of $\rho_A$ and $\rho_B$, then $\cqc{\rho_{AB}} = 0$ implies $\coh{\rho_{AB}} = 0$, which implies that $\rho_{AB}$ has only diagonal terms, so $\rho_{AB} = \sum_{i,j} p_{i,j}\ket{i}_A\bra{i}\otimes \ket{j}_B \bra{j}$. Therefore, $\cqc{\rho_{AB}} =0 \implies \rho_{AB} = \sum_{i,j} p_{i,j}\ket{i}_A\bra{i}\otimes \ket{j}_B \bra{j}$.
Conversely, if $\rho_{AB} = \sum_{i,j} p_{i,j}\ket{i}_A\bra{i}\otimes \ket{j}_B \bra{j}$, then the state clearly has zero coherence, which implies $\cqc{\rho_{AB}} =0$, so the converse is also true. This proves the theorem. \end{proof}
This establishes a relationship between Correlated Coherence and symmetric discord. We now consider the asymmetric version of quantum discord. Under this framework, a state contains quantum correlations when it cannot be expressed in the form $\rho_{AB} = \sum_{i} p_{i}\ket{i}_A\bra{i}\otimes \rho_B^i$, where $\rho_B^i$ is some normalized density matrix.
We prove the following:
\begin{theorem} [Correlated Coherence and Asymmetric Quantum Discord] \label{AsymDisc} For a given state $\rho_{AB}$, let $\{ \ket{i}_A \}$ and $\{\ket{j}_B\}$ be the eigenbases of $\rho_A$ and $\rho_B$ respectively. Define the measurement on $A$ onto the local basis as $\Pi_A(\rho_{AB}) \coloneqq \sum_i ( \ket{i}_A\bra{i} \otimes \openone_B) \rho_{AB} ( \ket{i}_A\bra{i} \otimes \openone_B)$. Then, with respect to these local bases, $\cqc{\rho_{AB}} - \cqc{\Pi_A(\rho_{AB})}= 0$ iff $\rho_{AB} = \sum_{i} p_{i}\ket{i}_A\bra{i}\otimes \rho_B^i$, where $\rho_B^i$ is some normalized density matrix and $\{\ket{i}_A\}$ is some set of orthonormal vectors. \end{theorem}
\begin{proof} First, we write the state in the form $\rho_{AB}= \sum_{i,j,k,l}\rho_{ijkl}\ket{i,j}_{AB}\bra{k,l}$. We can always write the state in block matrix form such that $\rho_{AB}= \sum_{i,k}\ket{i}_A\bra{k} \otimes \rho_B^{i,k}$ where $\rho_B^{i,k} \coloneqq \sum_{j,l}\rho_{ijkl}\ket{j}_B \bra{l}$. If $\{ \ket{i}_A \}$ and $\{\ket{j}_B\}$ are the eigenbases of $\rho_A$ and $\rho_B$, then $\cqc{\rho_{AB}} - \cqc{\Pi_A(\rho_{AB})}= 0$ implies that when $i \neq k$, $\rho_B^{i,k} = 0$. This implies $\rho_{AB}= \sum_{i}\ket{i}_A\bra{i} \otimes \rho_B^{i,i}$. By defining $\rho_B^i = \rho_B^{i,i}/p_i$ where $p_i \coloneqq \mathrm{Tr}(\rho_B^{i,i})$, we get $\rho_{AB} = \sum_{i} p_{i}\ket{i}_A\bra{i}\otimes \rho_B^i$. Therefore, $\cqc{\rho_{AB}} - \cqc{\Pi_A(\rho_{AB})}= 0 \implies \rho_{AB} = \sum_{i} p_{i}\ket{i}_A\bra{i}\otimes \rho_B^i$.
For the converse, if $\rho_{AB} = \sum_{i} p_{i}\ket{i}_A\bra{i}\otimes \rho_B^i$, then clearly, $\Pi_A(\rho_{AB})= \rho_{AB}$, so $\cqc{\rho_{AB}} - \cqc{\Pi_A(\rho_{AB})}= 0$. This completes the proof. \end{proof}
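The characterization above can be checked numerically. The sketch below (again assuming the $l_1$-norm measure of coherence; all helper names are ours) verifies that the difference $\cqc{\rho_{AB}} - \cqc{\Pi_A(\rho_{AB})}$ vanishes for a classical-quantum state of the form $\sum_i p_i \ket{i}\bra{i}\otimes\rho_B^i$ and is strictly positive for a maximally entangled state.

```python
import numpy as np

rng = np.random.default_rng(0)

def l1_coh(rho):
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

def to_local_eigenbasis(rho_ab, dA, dB):
    """Rewrite rho_AB in the product of the marginal eigenbases."""
    R = rho_ab.reshape(dA, dB, dA, dB)
    _, Ua = np.linalg.eigh(np.trace(R, axis1=1, axis2=3))
    _, Ub = np.linalg.eigh(np.trace(R, axis1=0, axis2=2))
    U = np.kron(Ua, Ub)
    return U.conj().T @ rho_ab @ U

def dephase_A(rho, dA, dB):
    """Pi_A: projective measurement of A in the current (local) basis."""
    R = rho.reshape(dA, dB, dA, dB).copy()
    for i in range(dA):
        for k in range(dA):
            if i != k:
                R[i, :, k, :] = 0.0      # kill blocks off-diagonal in A
    return R.reshape(dA * dB, dA * dB)

def cqc_difference(rho_ab, dA, dB):
    loc = to_local_eigenbasis(rho_ab, dA, dB)
    return l1_coh(loc) - l1_coh(dephase_A(loc, dA, dB))

def rand_dm(d):
    M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

# classical-quantum state sum_i p_i |i><i| x rho_B^i: the difference vanishes
rho_cq = np.zeros((4, 4), dtype=complex)
rho_cq[:2, :2] = 0.7 * rand_dm(2)
rho_cq[2:, 2:] = 0.3 * rand_dm(2)
diff_cq = cqc_difference(rho_cq, 2, 2)

# maximally entangled state: the difference is strictly positive
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
diff_bell = cqc_difference(np.outer(bell, bell), 2, 2)
```

Here `diff_cq` is zero up to rounding, while `diff_bell` equals 1 for the $l_1$ norm, since the measurement on $A$ destroys all of the Bell state's coherence.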
Note that the above relationship with asymmetric quantum discord is expressed as a \textit{difference} between the Correlated Coherence of $\rho_{AB}$ and that of the post-measurement state $\Pi_A(\rho_{AB})$. While this characterization of quantum correlations may at first appear to diverge from the one given in Theorem~\ref{symDisc}, they are actually similar, since $\cqc{\Pi_A \Pi_B (\rho_{AB})} = 0$ implies $\cqc{\rho_{AB}}= \cqc{\rho_{AB}} - \cqc{\Pi_A \Pi_B (\rho_{AB})}$. It is therefore possible to interpret quantum discord as the Correlated Coherence that is lost when either party performs a maximally coherence destroying measurement on their own subsystem (see Proposition~\ref{maxCoh}). When the projective measurement is performed only on one side, one retrieves the asymmetric version of quantum discord, and the symmetrized version is obtained when the coherence destroying measurement is performed by both parties.
\section{Correlated Coherence and Entanglement}
Under the framework of entangled correlations, a state contains quantum correlations when it cannot be expressed as a convex combination of product states $\sum_{i}p_i \ket{\alpha_i}_A \bra{\alpha_i} \otimes \ket{\beta_i}_B \bra{\beta_i}$, where $\ket{\alpha_i}$ and $\ket{\beta_i}$ are normalized but not necessarily orthogonal vectors that can repeat. It is also possible to extend our methodology to entangled quantum states. In order to do this, we consider extensions of the quantum state $\rho_{AB}$. We say that a state $\rho_{ABC}$ is an extension of $\rho_{AB}$ if $\mathrm{Tr}_C(\rho_{ABC}) = \rho_{AB}$. For our purpose, we will consider extensions of the form $\rho_{AA'BB'}$.
\begin{theorem} \label{separability} Let $\rho_{AA'BB'}$ be some extension of a bipartite state $\rho_{AB}$ and choose the local bases to be the eigenbases of $\rho_{AA'}$ and $\rho_{BB'}$ respectively. Then with respect to these local bases, $\min \cqc{\rho_{AA'BB'}} = 0$ iff $\rho_{AB} = \sum_{i}p_i \ket{\alpha_i}_A \bra{\alpha_i} \otimes \ket{\beta_i}_B \bra{\beta_i}$ for some set of normalized vectors $\ket{\alpha_i}$ and $\ket{\beta_i}$ that are not necessarily orthogonal and may repeat. The minimization is over all possible extensions of $\rho_{AB}$ of the form $\rho_{AA'BB'}$. \end{theorem}
\begin{proof} If $\min \cqc{\rho_{AA'BB'}} = 0$, then $\rho_{AA'BB'}$ must have the form $\sum_{i,j}p_{i,j}\ket{\mu_i}_{AA'}\bra{\mu_i}\otimes \ket{\nu_j}_{BB'}\bra{\nu_j}$ (see Thm.~\ref{symDisc}). Since $\rho_{AA'BB'}$ is an extension, $\mathrm{Tr}_{A'}\mathrm{Tr}_{B'}(\rho_{AA'BB'}) = \sum_{i,j}p_{i,j}\mathrm{Tr}_{A'}(\ket{\mu_i}_{AA'}\bra{\mu_i})\otimes \mathrm{Tr}_{B'} (\ket{\nu_j}_{BB'}\bra{\nu_j}) = \rho_{AB}$. Let $\rho^i_A \coloneqq \mathrm{Tr}_{A'}(\ket{\mu_i}_{AA'}\bra{\mu_i})$ and $\rho^j_B \coloneqq \mathrm{Tr}_{B'}(\ket{\nu_j}_{BB'}\bra{\nu_j})$. Then, $\rho_{AB} = \sum_{i,j}p_{i,j} \rho^i_A\otimes \rho^j_B$. This is equivalent to saying $\rho_{AB}=\sum_{i}p_i \ket{\alpha_i}_A \bra{\alpha_i} \otimes \ket{\beta_i}_B \bra{\beta_i}$, for some set of (not necessarily orthogonal) vectors $\{\ket{\alpha_i}\}$ and $ \{\ket{\beta_i}\}$. This proves $\min \cqc{\rho_{AA'BB'}} = 0 \implies \rho_{AB} = \sum_{i}p_i \ket{\alpha_i}_A \bra{\alpha_i} \otimes \ket{\beta_i}_B \bra{\beta_i}$.
For the converse, suppose $\rho_{AB} = \sum_{i}p_i \ket{\alpha_i}_A \bra{\alpha_i} \otimes \ket{\beta_i}_B \bra{\beta_i}$ and consider the purification of $\rho_{AB}$ of the form $\ket{\psi}_{ABA'B'C'} = \sum_i \sqrt{p_i}\ket{\alpha_i}_A \ket{\beta_i}_B \ket{i}_{A'}\ket{i}_{B'}\ket{i}_{C'}$. Since this is a purification, $\rho_{AA'BB'} = \mathrm{Tr}_{C'}(\ket{\psi}_{ABA'B'C'}\bra{\psi})$ is clearly an extension of $\rho_{AB}$. Furthermore, the eigenbases of $\rho_{AA'}$ and $\rho_{BB'}$ are $\{ \ket{\alpha_i}_A \ket{i}_{A'}\}$ and $\{ \ket{\beta_i}_B \ket{i}_{B'}\}$ respectively. Since $\rho_{AA'BB'} = \sum_i p_i \ket{\alpha_i}_A \ket{i}_{A'} \bra{\alpha_i}_A \bra{i}_{A'} \otimes \ket{\beta_i}_B \ket{i}_{B'}\bra{\beta_i}_B \bra{i}_{B'}$, $\cqc{\rho_{AA'BB'}} = 0$ with respect to the eigenbases of $\rho_{AA'}$ and $\rho_{BB'}$. Therefore, $\min \cqc{\rho_{AA'BB'}} = 0$, which completes the proof. \end{proof}
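The converse construction can be reproduced numerically: build the extension $\sum_i p_i \ket{\alpha_i,i}\bra{\alpha_i,i}\otimes\ket{\beta_i,i}\bra{\beta_i,i}$ for a separable state with nonorthogonal $\ket{\alpha_i}$, $\ket{\beta_i}$ and check that it carries no coherence in the eigenbases of $\rho_{AA'}$ and $\rho_{BB'}$. The sketch below assumes the $l_1$-norm measure of coherence and distinct weights $p_i$ (so the nonzero eigenvectors are unique up to phase); both are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
dA = dB = 2; n = 3                       # three nonorthogonal product terms

def rand_state(d):
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    return v / np.linalg.norm(v)

p = np.array([0.5, 0.3, 0.2])            # distinct weights: nondegenerate spectrum
alphas = [rand_state(dA) for _ in range(n)]
betas = [rand_state(dB) for _ in range(n)]

def emb(v, i):
    """|v>_X |i>_X' : tag the local state with an orthonormal flag."""
    e = np.zeros(n); e[i] = 1.0
    return np.kron(v, e)

# extension rho_{AA'BB'} = sum_i p_i |alpha_i, i><alpha_i, i| x |beta_i, i><beta_i, i|
vecs = [np.kron(emb(alphas[i], i), emb(betas[i], i)) for i in range(n)]
rho_ext = sum(p[i] * np.outer(vecs[i], vecs[i].conj()) for i in range(n))

dim = dA * n
R = rho_ext.reshape(dim, dim, dim, dim)
_, Ua = np.linalg.eigh(np.trace(R, axis1=1, axis2=3))   # eigenbasis of rho_AA'
_, Ub = np.linalg.eigh(np.trace(R, axis1=0, axis2=2))   # eigenbasis of rho_BB'
U = np.kron(Ua, Ub)
rho_loc = U.conj().T @ rho_ext @ U
c_cqc = np.abs(rho_loc).sum() - np.abs(np.diag(rho_loc)).sum()
```

Because $\{\ket{\alpha_i}\ket{i}\}$ and $\{\ket{\beta_i}\ket{i}\}$ are orthonormal, the extension is diagonal in the product of the marginal eigenbases, and `c_cqc` is zero to machine precision.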
\section{Coherence as an Entanglement Monotone} We now construct an entanglement monotone using the Correlated Coherence of a quantum state. In order to do this, we first define symmetric extensions of a given quantum state:
\begin{definition} [Unitarily Symmetric Extensions] \label{unitSym} Let $\rho_{AA'BB'}$ be an extension of a bipartite state $\rho_{AB}$. The extension $\rho_{AA'BB'}$ is said to be unitarily symmetric if it remains invariant up to local unitaries on $AA'$ and $BB'$ under a system swap between Alice and Bob.
More formally, let $\{\ket{i}_{AA'}\}$ and $\{\ket{i}_{BB'}\}$ be complete local bases on $AA'$ and $BB'$ respectively. Define the swap operator through $U_{\mathrm{swap}} \ket{i,j}_{AA'BB'} \coloneqq \ket{j,i}_{AA'BB'}$. Then $\rho_{AA'BB'}$ is unitarily symmetric if there exist local unitary operations $U_{AA'}$ and $U_{BB'}$ such that $ U_{AA'}\otimes U_{BB'} \left(U_{\mathrm{swap}} \rho_{AA'BB'} U_{\mathrm{swap}}^\dag \right) U_{AA'}^\dag \otimes U_{BB'}^\dag = \rho_{AA'BB'}$. \end{definition}
Following from the observation that the minimization of coherence over all extensions is closely related to the separability of a quantum state, we define the following:
\begin{definition} Let $\rho_{AA'BB'}$ be some extension of a bipartite state $\rho_{AB}$ and choose the local bases to be the eigenbases of $\rho_{AA'}$ and $\rho_{BB'}$ respectively. Then the entanglement of coherence (EOC) is defined to be:
$$ E_{cqc}(\rho_{AB}) \coloneqq \min \cqc{\rho_{AA'BB'}} $$
The minimization is over all possible unitarily symmetric extensions of $\rho_{AB}$ of the form $\rho_{AA'BB'}$. \end{definition}
It remains to be proven that $E_{cqc}(\rho_{AB})$ is a valid measure of entanglement (i.e. it is an entanglement monotone). But first, we prove the following elementary properties.
\begin{proposition} [EOC of Separable States] \label{separableEcqc} If a bipartite quantum state $\rho_{AB}$ is separable, $\ecqc{\rho_{AB}} = 0$. \end{proposition}
\begin{proof} The proof is identical to Thm.~\ref{separability}, with the additional observation that $\rho_{AA'BB'} = \sum_{i,j}p_{i,j}\ket{\mu_i}_{AA'}\bra{\mu_i}\otimes \ket{\nu_j}_{BB'}\bra{\nu_j}$ is unitarily symmetric. To see this, define $U_{AA'} \coloneqq \sum_{i} \ket{\nu_i}_{AA'}\bra{\mu_i}$ and $U_{BB'} \coloneqq \sum_{i} \ket{\mu_i}_{BB'}\bra{\nu_i}$. It is easy to verify that it satisfies
$$U_{AA'}\otimes U_{BB'} \left(U_{\mathrm{swap}} \rho_{AA'BB'} U_{\mathrm{swap}}^\dag \right) U_{AA'}^\dag \otimes U_{BB'}^\dag = \rho_{AA'BB'}$$
where $U_{\mathrm{swap}}$ is the same as in Def.~\ref{unitSym}, so the extension is unitarily symmetric. \end{proof}
\begin{proposition} [Invariance under local unitaries] \label{localU} For a bipartite quantum state $\rho_{AB}$, $E_{cqc}(\rho_{AB})$ is invariant under local unitary operations on $A$ and $B$. \end{proposition}
\begin{proof} Without loss of generality, we only need to prove invariance under local unitary operations on $A$.
For some bipartite state $\rho_{AB}$, let $\rho^*_{AA'BB'} $ be the optimal unitarily symmetric extension such that $ \ecqc{\rho_{AB}}= \cqc {\rho^*_{AA'BB'}}$. Let $\{\ket{i}_{AA'}\}$ and $\{\ket{j}_{BB'}\}$ be the eigenbases of $\rho^*_{AA'}$ and $\rho^*_{BB'}$ respectively. With respect to these bases, $\rho^*_{AA'BB'} = \sum_{ijkl}\rho_{ijkl} \ket{i,j}_{AA'BB'}\bra{k,l}$.
Suppose we perform a unitary $U = U_A\otimes \openone_{A'BB'}$ on $A$ such that $U \ket{i,j} = \ket{\alpha_i, j}$, where $\{\ket{\alpha_i}\}$ is an orthonormal set. Since $U \rho^*_{AA'BB'} U^\dag = \sum_{ijkl}\rho_{ijkl} \ket{\alpha_i ,j}_{AA'BB'}\bra{\alpha_k,l}$, the off-diagonal matrix elements are unchanged with respect to the new bases $\ket{\alpha_i, j}_{AA'BB'}$, so $\cqc{U \rho^*_{AA'BB'} U^\dag } = \cqc{\rho^*_{AA'BB'}}$ and hence $\ecqc{U_A \rho_{AB} U_A^\dag} = \ecqc{\rho_{AB}}$, which proves the proposition. \end{proof}
\begin{proposition} [Convexity] $\ecqc{\rho_{AB}}$ is convex and decreases under mixing:
$$ \lambda \ecqc{\rho_{AB}} + (1-\lambda)\ecqc{\sigma_{AB}} \geq \ecqc{\lambda \rho_{AB} + (1-\lambda) \sigma_{AB}} $$
for any two bipartite quantum states $\rho_{AB}$ and $\sigma_{AB}$, and any $\lambda \in [0,1]$. \end{proposition}
\begin{proof} Let $\rho^*_{AA'BB'}$ and $\sigma^*_{AA'BB'}$ be the optimal unitarily symmetric extensions for $\rho_{AB}$ and $\sigma_{AB}$ respectively such that $\ecqc{\rho_{AB}}= \cqc {\rho^*_{AA'BB'}}$ and $\ecqc{\sigma_{AB}}= \cqc {\sigma^*_{AA'BB'}}$.
Consider the state $\tau_{AA'A''BB'B''} \coloneqq \lambda \rho^*_{AA'BB'}\otimes \ket{0,0}_{A''B''}\bra{0,0}+ (1-\lambda) \sigma^*_{AA'BB'}\otimes \ket{1,1}_{A''B''}\bra{1,1}$ for $\lambda \in [0,1]$. Direct computation will verify that with respect to the eigenbases of $\tau_{AA'A''}$ and $\tau_{BB'B''}$, $\cqc{\tau_{AA'A''BB'B''}} = \lambda \cqc{\rho^*_{AA'BB'}} + (1-\lambda) \cqc{\sigma^*_{AA'BB'}} = \lambda \ecqc{\rho_{AB}} + (1-\lambda)\ecqc{\sigma_{AB}}$. However, as $\mathrm{Tr}_{A'A''B'B''}(\tau_{AA'A''BB'B''}) = \lambda \rho_{AB} + (1-\lambda) \sigma_{AB}$, it is an extension of $\lambda \rho_{AB} + (1-\lambda) \sigma_{AB}$.
It remains to be proven that the extension above is also unitarily symmetric. Let $\Xi^{\mathrm{swap}}_{ X \leftrightarrow Y}$ denote the swap operation between $X$ and $Y$. Let the operators $U_{AA'}$, $U_{BB'}$, $V_{AA'}$, $V_{BB'}$ satisfy $\rho^*_{AA'BB'} = U_{AA'}\otimes U_{BB'} \Xi^{\mathrm{swap}}_{ AA' \leftrightarrow BB'}(\rho_{AA'BB'}^*)U_{AA'}^\dag\otimes U_{BB'}^\dag $ and $\sigma^*_{AA'BB'} = V_{AA'}\otimes V_{BB'} \Xi^{\mathrm{swap}}_{ AA' \leftrightarrow BB'}(\sigma_{AA'BB'}^*)V_{AA'}^\dag\otimes V_{BB'}^\dag $ respectively. It can be verified that the local unitary operators $W_{AA'A''} \coloneqq U_{AA'}\otimes \ket{0}_{A''}\bra{0} + V_{AA'}\otimes \ket{1}_{A''}\bra{1}$ and $W_{BB'B''} \coloneqq U_{BB'}\otimes \ket{0}_{B''}\bra{0} + V_{BB'}\otimes \ket{1}_{B''}\bra{1}$ satisfy $\tau_{AA'A''BB'B''} = W_{AA'A''}\otimes W_{BB'B''} \Xi^{\mathrm{swap}}_{ AA'A'' \leftrightarrow BB'B''}(\tau_{AA'A''BB'B''})W_{AA'A''}^\dag\otimes W_{BB'B''}^\dag $, so it is also unitarily symmetric.
Since $E_{cqc}$ is a minimization over all unitarily symmetric extensions, we have $ \lambda \ecqc{\rho_{AB}} + (1-\lambda)\ecqc{\sigma_{AB}} = \cqc{\tau_{AA'A''BB'B''}} \geq \ecqc{\lambda \rho_{AB} + (1-\lambda) \sigma_{AB}}$ which completes the proof. \end{proof}
\begin{proposition} [Contraction under partial trace] \label{partTrace} Consider the bipartite state $\rho_{AB}$ where $A=A_1A_2$ is a composite system. Then the entanglement of coherence is non-increasing under a partial trace:
$$ \ecqc{\rho_{A_1A_2B}} \geq \ecqc{\mathrm{Tr}_{A_1}(\rho_{A_1A_2B})} $$
\end{proposition}
\begin{proof} Let $\rho^*_{A_1A_2A'BB'}$ be the optimal unitarily symmetric extension of $\rho_{A_1A_2B}$ such that $\ecqc{\rho_{A_1A_2B}} = \cqc{\rho^*_{A_1A_2A'BB'}}$. It is clear that $\mathrm{Tr}_{A_1A'B'}(\rho^*_{A_1A_2A'BB'}) = \mathrm{Tr}_{A_1}(\rho_{A_1A_2B}) = \rho_{A_2B}$, so $\rho^*_{A_1A_2A'BB'}$ is a unitarily symmetric extension of $ \mathrm{Tr}_{A_1}(\rho_{A_1A_2B})$. Since $E_{cqc}$ is a minimization over all such extensions, $\ecqc{\rho_{A_1A_2B}} \geq \ecqc{\mathrm{Tr}_{A_1}(\rho_{A_1A_2B})}$. \end{proof}
\begin{proposition} [Contraction under local projections]
Let $\pi^i_A$ be a complete set of rank 1 projectors on subsystem $A$ such that $\sum_i \pi^i_A = \openone_A$, and define the local projection $\Pi_A(\rho_A) \coloneqq \sum_i \pi^i_A \rho_A \pi^i_A$. The entanglement of coherence is contractive under local projections:
$$ \ecqc{\rho_{AB}} \geq \ecqc{\Pi_A(\rho_{AB})} $$
or, if $A = A_1A_2$ is a composite system,
$$ \ecqc{\rho_{AB}} \geq \ecqc{\Pi_{A_1}(\rho_{A_1A_2B})} $$
\end{proposition}
\begin{proof} First, we observe that any projective measurement can be performed via a CNOT type operation with an ancilla, followed by tracing out the ancilla:
$$\mathrm{Tr}_X\left(U^{\mathrm{CNOT}}_{XY} \left( \ket{0}_X\bra{0} \otimes \sum_{i,j} \rho_{ij}\ket{i}_Y\bra{j} \right) (U^{\mathrm{CNOT}}_{XY})^\dag \right) = \sum_{i} \rho_{ii}\ket{i}_Y\bra{i}.$$
The unitary performs the operation $U^{\mathrm{CNOT}}_{XY} \ket{0,i}_{XY} = \ket{i,i}_{XY}$. Since adding an uncorrelated ancilla does not increase $E_{cqc}$, we have $\ecqc{\ket{0}_{A_3}\bra{0} \otimes \rho_{A_1A_2B}} = \ecqc{\rho_{A_1A_2B}}$. As $E_{cqc}$ is invariant under local unitaries (Prop.~\ref{localU}) and contractive under partial trace (Prop.~\ref{partTrace}), this proves the proposition. \end{proof}
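The CNOT-plus-trace identity used in this proof admits a quick numerical check. The shift unitary below is one concrete choice (our own, for illustration) of a CNOT-type operation satisfying $U^{\mathrm{CNOT}}_{XY}\ket{0,i}_{XY} = \ket{i,i}_{XY}$ in dimension $d$:

```python
import numpy as np

d = 3
# CNOT-type unitary on ancilla X and system Y (both dimension d):
# U|x, i> = |x + i mod d, i>, so in particular U|0, i> = |i, i>.
U = np.zeros((d * d, d * d))
for x in range(d):
    for i in range(d):
        U[((x + i) % d) * d + i, x * d + i] = 1.0

rng = np.random.default_rng(0)
M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho_Y = M @ M.conj().T
rho_Y /= np.trace(rho_Y)                          # a random density matrix

ancilla = np.zeros((d, d)); ancilla[0, 0] = 1.0   # |0><0| on X
joint = U @ np.kron(ancilla, rho_Y) @ U.conj().T
post = np.trace(joint.reshape(d, d, d, d), axis1=0, axis2=2)  # trace out X
```

Here `post` equals the fully dephased `np.diag(np.diag(rho_Y))`: the CNOT copies the index to the ancilla, and tracing the ancilla removes exactly the off-diagonal terms.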
\begin{proposition} [Invariance under classical communication] \label{classCom} For a bipartite state $\rho_{AB}$, suppose that on Alice's side, $A = A_1A_2$ is a composite system and $A_1$ is a classical registry storing classical information. Then $E_{cqc}$ remains invariant if a copy of $A_1$ is created on Bob's side.
More formally, let $\rho_{A_1A_2B_2}= \sum_i p_i \ket{i}_{A_1} \bra{i} \otimes \ket{\psi_i}_{A_2B_2}\bra{\psi_i}$ be the initial state, and let $\sigma_{A_1A_2B_1B_2}= \sum_i p_i \ket{i}_{A_1} \bra{i} \otimes \ket{\psi_i}_{A_2B_2}\bra{\psi_i} \otimes \ket{i}_{B_1}\bra{i}$ be the state after Alice communicates a copy of $A_1$ to Bob, then
$$ \ecqc{\rho_{A_1A_2B_2}} = \ecqc{\sigma_{A_1A_2B_1B_2}} $$ \end{proposition}
\begin{proof} Let $\Xi^{\mathrm{swap}}_{ X \leftrightarrow Y}$ denote the swap operation between $X$ and $Y$. Let $\rho^*_{A_1A_2A'B_2B'}$ be the optimal unitarily symmetric extension of $\rho_{A_1A_2B_2}$ such that $\ecqc{\rho_{A_1A_2B_2}} = \cqc{\rho^*_{A_1A_2A'B_2B'}}$. Note that $\cqc{\rho^*_{A_1A_2A'B_2B'}} = \cqc{ \ket{0}_{A''}\bra{0} \otimes \rho^*_{A_1A_2A'B_2B'} \otimes \ket{0}_{B''}\bra{0}}$. Define a $\mathrm{CNOT}$ type operation between $A_1$ and $B''$ such that $U^{\mathrm{CNOT}}_{A_1B''} \ket{0,i}_{A_1B''} = \ket{i,i}_{A_1B''}$. Ordinarily, such an operation cannot be performed by Bob locally unless he has access to subsystem $A_1$ on Alice's side. However, since $\rho^*_{A_1A_2A'B_2B'}$ is unitarily symmetric, there exist local unitaries $U_{A_1A_2A'}$ and $U_{B_2B'}$ such that $\Xi^{\mathrm{swap}}_{A_1A_2A' \leftrightarrow B_2B'} (\rho^*_{A_1A_2A'B_2B'}) = U_{A_1A_2A'} \otimes U_{B_2B'}\rho^*_{A_1A_2A'B_2B'} U^\dag_{A_1A_2A'} \otimes U^\dag_{B_2B'}$. This implies that Bob can perform $U^{\mathrm{CNOT}}_{A_1B''}$ locally by first performing the swap operation through local unitaries, gaining access to the information in $A_1$, copying it to $B''$ with a local $\mathrm{CNOT}$ operation, and then undoing the swap via another set of local unitary operations.
This means there must exist $V_{A_1A_2A'A''}$ and $V_{B_2B'B''}$ such that:
$$V_{A_1A_2A'A''} \otimes V_{B_2B'B''} \left( \ket{0}_{A''}\bra{0} \otimes \rho^*_{A_1A_2A'B_2B'} \otimes \ket{0}_{B''}\bra{0} \right) V_{A_1A_2A'A''}^\dag \otimes V_{B_2B'B''}^\dag$$
is a unitarily symmetric extension of $U^{\mathrm{CNOT}}_{A_1B''} \left( \rho_{A_1A_2B_2} \otimes \ket{0}_{B''}\bra{0} \right) (U^{\mathrm{CNOT}}_{A_1B''})^\dag$. However, because this state is equivalent to $\sigma_{A_1A_2B_1B_2}$ as defined previously, it is also a unitarily symmetric extension of $\sigma_{A_1A_2B_1B_2}$. Since $\mathrm{C}_{cqc}$ is invariant under local unitary operations, we have
\begin{align*}
\ecqc{\rho_{A_1A_2B_2}} &= \cqc{\rho^*_{A_1A_2A'B_2B'}} \\
&= \cqc{\ket{0}_{A''}\bra{0} \otimes \rho^*_{A_1A_2A'B_2B'} \otimes \ket{0}_{B''}\bra{0}} \\ &= \cqc{V_{A_1A_2A'A''} \otimes V_{B_2B'B''} \left( \ket{0}_{A''}\bra{0} \otimes \rho^*_{A_1A_2A'B_2B'} \otimes \ket{0}_{B''}\bra{0} \right) V_{A_1A_2A'A''}^\dag \otimes V_{B_2B'B''}^\dag} \\ &\geq \ecqc{\sigma_{A_1A_2B_1B_2}}, \end{align*} where the last inequality comes from the fact that the entanglement of coherence is a minimization over all unitarily symmetric extensions. On the other hand, a unitarily symmetric extension of $\sigma_{A_1A_2B_1B_2}$ is also a unitarily symmetric extension of $\rho_{A_1A_2B_2}$, so $\ecqc{\rho_{A_1A_2B_2}} \leq \ecqc{\sigma_{A_1A_2B_1B_2}}$. Therefore $\ecqc{\rho_{A_1A_2B_2}} = \ecqc{\sigma_{A_1A_2B_1B_2}}$, which completes the proof. \end{proof}
\begin{proposition} [Contraction under LOCC] \label{LOCC} For any bipartite state $\rho_{AB}$, let $\Lambda_{\mathrm{LOCC}}$ be any LOCC protocol performed between $A$ and $B$. Then $E_{cqc}$ is non-increasing under such operations:
$$\ecqc{\rho_{AB}} \geq \ecqc{\Lambda_{\mathrm{LOCC}}(\rho_{AB})}$$ \end{proposition}
\begin{proof} We consider the scenario where Alice performs a POVM on her subsystems and communicates classical information about her measurement outcomes to Bob, who then performs a separate operation on his subsystem based on this measurement information.
Suppose Alice and Bob begin with the state $\rho_{A_1B_1}$. By Naimark's theorem, any POVM can be performed through a unitary interaction between the state of interest and an uncorrelated pure state ancilla, followed by a projective measurement on the ancilla and finally tracing out the ancillary systems. To allow Alice and Bob to perform such quantum operations, we add uncorrelated ancillas to the state, which does not change the entanglement of coherence, so $\ecqc{\rho_{A_1B_1}}= \ecqc{\ket{0,0}_{M_AA_2}\bra{0,0} \otimes \rho_{A_1B_1}\otimes \ket{0,0}_{M_BB_2}\bra{0,0}}$. For Alice's procedure, we will assume the projection is performed on $M_A$, so $M_A$ is a classical register storing classical measurement outcomes.
In the beginning, Alice performs a unitary operation on subsystems $M_AA_1A_2$, followed by a projection on $M_A$ which makes it classical. We denote the composite of these two operations, which constitutes Alice's local operation, by $\Omega_A$. Since $E_{cqc}$ is invariant under local unitaries (Prop.~\ref{localU}) but contractive under a projection (Prop.~\ref{partTrace}), $\Omega_A$ is a contractive operation.
The next part of the procedure is the communication of classical bits to Bob. This procedure is equivalent to copying the state of the classical register $M_A$ to the register $M_B$, and $E_{cqc}$ is invariant under such communication (Prop.~\ref{classCom}). We represent this operation as $\Gamma_{A\rightarrow B}$. The next step requires Bob to perform an operation on his quantum system based on the communicated bits. He can achieve this by performing a unitary operation on subsystems $M_BB_1B_2$. We represent this operation with $\Omega_B$, which does not change $E_{cqc}$. The final step of the procedure requires tracing out the ancillas, $\mathrm{Tr}_{M_AA_2M_BB_2}$, which is again contractive (Prop.~\ref{partTrace}).
Since every step is either contractive or invariant, we have the following inequality:
\begin{align*} \ecqc{\rho_{A_1B_1}} &= \ecqc{\ket{0,0}_{M_AA_2}\bra{0,0} \otimes \rho_{A_1B_1}\otimes \ket{0,0}_{M_BB_2}\bra{0,0}} \\ &\geq \ecqc{\mathrm{Tr}_{M_AA_2M_BB_2} \circ \Omega_B \circ\Gamma_{A\rightarrow B} \circ \Omega_A \left[\ket{0,0}_{M_AA_2}\bra{0,0} \otimes \rho_{A_1B_1}\otimes \ket{0,0}_{M_BB_2}\bra{0,0}\right]}. \end{align*} Any LOCC protocol is a series of such procedures from Alice to Bob or from Bob to Alice, so we must have $\ecqc{\rho_{AB}} \geq \ecqc{\Lambda_{\mathrm{LOCC}}(\rho_{AB})}$, which completes the proof. \end{proof}
The following theorem shows that the entanglement of coherence is a valid measure of the entanglement of the system.
\begin{theorem} [Entanglement monotone] The entanglement of coherence $E_{cqc}$ is an entanglement monotone in the sense that it satisfies:
\begin{enumerate}
\item $\ecqc{\rho_{AB}} = 0$ iff $\rho_{AB}$ is separable.
\item $\ecqc{\rho_{AB}}$ is invariant under local unitaries on $A$ and $B$.
\item $\ecqc{\rho_{AB}}\geq \ecqc{\Lambda_{\mathrm{LOCC}}(\rho_{AB})}$ for any LOCC procedure $\Lambda_{\mathrm{LOCC}}$. \end{enumerate} \end{theorem}
\begin{proof} Property 1 follows from Prop.~\ref{separableEcqc} together with Thm.~\ref{separability}; properties 2 and 3 follow from Prop.~\ref{localU} and Prop.~\ref{LOCC} respectively. \end{proof}
\section{Conclusion} To conclude, we defined the Correlated Coherence of quantum states as the total coherence without local coherences, which can be interpreted as the portion of the coherence that is shared between two quantum subsystems. The framework of Correlated Coherence allows us to identify the same concepts of non-classicality of correlations as those of (both symmetric and asymmetric) quantum discord and quantum entanglement. Finally, we proved that the minimization of the Correlated Coherence over all unitarily symmetric extensions of a quantum state is an entanglement monotone, showing that quantum entanglement may be interpreted as a specialized form of coherence. Our results suggest that quantum correlations in general may be understood from the viewpoint of coherence, thus possibly opening new ways of understanding both.
\section*{Acknowledgment} This work was supported by the National Research Foundation of Korea (NRF) through a grant funded by the Korea government (MSIP) (Grant No. 2010-0018295).
\end{document} |
\begin{document}
\author{Dmitry V. Savostyanov \thanks{University of Southampton, Department of Chemistry, Highfield Campus, Southampton SO17 1BJ, United Kingdom
({\tt dmitry.savostyanov@gmail.com})} \thanks{Institute of Numerical Mathematics of Russian Academy of Sciences, Gubkina 8, Moscow 119333, Russia} } \title{Quasioptimality of maximum--volume cross interpolation of tensors \thanks{Partially supported by
RFBR grants 11-01-00549-a, 12-01-33013, 12-01-00546-a, 12-01-91333-nnio-a,
Rus. Fed. Gov. project 16.740.12.0727
at INM RAS
and EPSRC grant EP/H003789/1 at the University of Southampton.
}} \date{June 15, 2013} \maketitle
\begin{abstract} We consider a cross interpolation of high--dimensional arrays in the tensor train format. We prove that the maximum--volume choice of the interpolation sets provides quasioptimal interpolation accuracy, which differs from the best possible accuracy by a factor that does not grow exponentially with the dimension. For nested interpolation sets we prove the interpolation property and propose greedy cross interpolation algorithms. We justify the theoretical results and test the speed and accuracy of the proposed algorithm with convincing numerical experiments. \par {\it Keywords:} high--dimensional problems, tensor train format, maximum--volume principle, cross interpolation. \par {\it AMS:} 15A69, 15A23, 65D05, 65F99. \end{abstract}
\section{Introduction} As demand for big data analysis grows, high--dimensional data and algorithms have become increasingly important in scientific computing. The total number of entries in a \emph{tensor} (an array with $d$ indices) grows exponentially with dimension $d.$ Even for a moderate $d,$ it is impossible to process, store or compute all elements of a tensor by standard methods. This issue is known in numerical analysis and related areas as the \emph{curse of dimensionality}. Different techniques are used to relax or to overcome this problem, e.g. low--parametrical representation on \emph{sparse grids}~\cite{smolyak-1963,griebel-sparsegrids-2004}, (Markov chain) Monte Carlo sampling in statistics~\cite{hastings-mcmc-1970}, model/dimensionality reduction, etc. Significant progress has been made in the development and understanding of the \emph{tensor product} methods (see reviews~\cite{kolda-review-2009,khor-survey-2011,hackbusch-2012,larskres-survey-2013}).
The tensor product methods implement the \emph{separation of variables} at the discrete level, which in the two--dimensional case is known as the \emph{low rank decomposition} of a matrix. Several approaches have been developed to generalize rank--structured low--parametrical models to tensors (see~\cite{kolda-review-2009} for details), and a particularly simple and efficient~\emph{tensor train} (TT) format has been proposed recently~\cite{osel-tt-2011}. It is equivalent to the \emph{matrix product states} (MPS) introduced in the quantum physics community to represent the quantum states of the many--body systems~\cite{fannes-mps-1992,klumper-mps-1993}. The optimization algorithms for the MPS include \emph{alternating least squares} (ALS) algorithm, which works with the fixed tensor structure, and the \emph{density matrix renormalization group} (DMRG) algorithm~\cite{white-dmrg-1993,ostlund-dmrg-1995}, which adaptively changes the \emph{ranks} of the tensor format, and manifests much faster convergence in numerical experiments. When the TT format was re-discovered in the numerical linear algebra community, both the ALS and DMRG schemes were adapted for other high--dimensional problems and novel algorithms were proposed. As a result we can use the TT/MPS format to approximate high--dimensional data and perform algebraic operations~\cite{schollwock-dmrg-mps-2010} (cf. \cite{osel-tt-2011,Os-mvk2-2011}), solve linear systems~\cite{jeckelmann-dmrgsolve-2002,holtz-ALS-DMRG-2012,DoOs-dmrg-solve-2011,ds-amr1-2013,ds-amr2-2013}, compute the multidimensional Fourier transform~\cite{dks-ttfft-2012} and discrete convolution~\cite{khkaz-conv-2011}. With these algorithms in hand, high--dimensional scientific computations become possible as soon as all data are somehow translated into the TT format.
It is crucial, therefore, to develop algorithms which construct the approximation of a given high--dimensional array in the tensor format. For some function--related tensors, the TT representation is written explicitly (see e.g.~\cite{khor-qtt-2011,osel-constr-2013}). In general, although every entry of a tensor can be computed \emph{on demand} (by a formula or as a solution of a feasible problem, e.g. PDE in three dimensions), all elements cannot be computed in a reasonable time. The question arises naturally whether a tensor can be \emph{reconstructed} or \emph{interpolated} in the TT format from a few elements, also known as \emph{samples}.
For matrices, i.e. $2$--tensors, this question is well studied. We know that a rank--$r$ matrix is recovered from a \emph{cross} of $r$ rows and columns if the submatrix on their intersection is nonsingular. When data are not exactly represented by the low--rank model, the accuracy of the \emph{cross interpolation} depends crucially on the chosen cross. A notable choice is the~\emph{maximum volume} cross, which has the $r\times r$ submatrix with the maximum determinant in modulus on the intersection. For this cross, the interpolation accuracy differs from the accuracy of the best possible approximation by the factor $\mathcal{O}(r^2),$ i.e. is \emph{quasioptimal}~\cite{schneider-cross2d-2010,gt-skel-2011}.
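The matrix interpolation property can be illustrated with a short numpy sketch (our own illustration, not the paper's algorithm). It recovers an exactly rank-$r$ matrix from $r$ rows and columns, choosing pivots greedily by the largest residual modulus (full-pivoting adaptive cross approximation), which is a cheap surrogate for, not an implementation of, the maximum-volume principle:

```python
import numpy as np

def cross_approx(A, r):
    """Cross (skeleton) approximation A ~ C G^{-1} R from r rows/columns.
    Pivots are picked greedily on the residual (full-pivoting ACA)."""
    Res = A.astype(float).copy()
    rows, cols = [], []
    for _ in range(r):
        i, j = np.unravel_index(np.argmax(np.abs(Res)), Res.shape)
        if Res[i, j] == 0:
            break                      # residual rank exhausted early
        rows.append(i); cols.append(j)
        Res = Res - np.outer(Res[:, j], Res[i, :]) / Res[i, j]
    G = A[np.ix_(rows, cols)]          # submatrix on the cross intersection
    return A[:, cols] @ np.linalg.solve(G, A[rows, :])

# exact recovery of a rank-3 matrix from 3 rows and 3 columns
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 50))
```

For this exactly rank-3 matrix, `cross_approx(A, 3)` reproduces `A` up to rounding, since the greedy pivots yield a nonsingular $3\times 3$ intersection submatrix.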
For tensors in the TT format an analog of the cross interpolation formula is given in~\cite{ot-ttcross-2010}. It reconstructs a tensor from a few samples under mild non-singularity conditions, if the TT representation is exact. For the approximate case, an ALS-type algorithm is suggested in~\cite{ot-ttcross-2010}, which searches for better crosses in order to improve the approximation accuracy. The rank--adaptive DMRG--like version of this algorithm is proposed in~\cite{so-dmrgi-2011proc}. These algorithms are~\emph{heuristic}, as are the interpolation algorithms developed for other tensor formats, e.g. the Tucker~\cite{ost-tucker-2008,ost-chem-2010} and the \emph{hierarchical Tucker} (HT) format~\cite{lars-htcross-2013,bebe-maca-2013pre}.
The accuracy of the cross interpolation of tensors has not been well studied yet. For the $3$--dimensional Tucker model the quasioptimality with the factor $\mathcal{O}(r^3)$ is shown in~\cite{ost-tucker-2008}. In $d$ dimensions we can expect an excessively large coefficient $\mathcal{O}(r^d),$ cf. $\mathcal{O}(r^{2d})$ for the HT format~\cite{lars-htcross-2013}. The main result of this paper is more optimistic. The quasioptimality of the maximum volume cross interpolation is generalized to the TT format with the coefficient $(2r+\kappa r+1)^{\lceil \log_2 d \rceil+2}$ that \emph{does not necessarily grow exponentially} with $d.$
The paper is organized as follows. Sec.~\ref{sec:def} presents notation and definitions. In Sec.~\ref{sec:mvol} the quasioptimality of the maximum--volume cross interpolation is proven. In Sec.~\ref{sec:emb} the interpolation on \emph{nested} sets is considered, which reduces the search space, but results in the larger quasioptimality constant. In Sec.~\ref{sec:emb2} the interpolation property for the nested sets is shown. In Sec.~\ref{sec:alg} practical cross interpolation algorithms for matrices are recalled and similar algorithms for tensor trains are proposed. In Sec.~\ref{sec:num} the coefficient of the quasioptimality is measured for randomly generated tensors, and the speed and accuracy of the proposed algorithm are demonstrated with numerical experiments.
\section{Notation, definitions and preliminaries} \label{sec:def} The \emph{tensor train} (TT) decomposition of a tensor $A=\left[A(i_1,\ldots,i_d)\right]$ is written as follows \begin{equation}\label{eq:tt}
\begin{split}
A(i_1,\ldots,i_d) & = \sum_{\mathbf{s}} X^{(1)}(i_1,s_1) X^{(2)}(s_1,i_2,s_2) \ldots X^{(d-1)}(s_{d-2},i_{d-1},s_{d-1}) X^{(d)}(s_{d-1},i_d)
\\ & = \sum_{\mathbf{s}} \prod_{k=1}^d X^{(k)}(s_{k-1},i_k,s_k).
\end{split} \end{equation} In this equation $i_k=1,\ldots,n_k,$ $k=1,\ldots,d,$ are \emph{mode} or physical indices, and $s_k=1,\ldots, r_k$ are auxiliary \emph{rank} indices. Values $n_k$ are referred to as~\emph{mode sizes} of a tensor, and $r_k$ are~\emph{tensor train ranks} or TT--ranks. Summation over $\mathbf{s}=(s_1,\ldots,s_{d-1})$ means summation over all auxiliary indices $s_1,\ldots,s_{d-1},$ where each index runs through all possible values.
We use elementwise notation, i.e. assume that all equations hold for all possible values of \emph{free} indices. Therefore, Eq.~\eqref{eq:tt} represents every entry of a tensor by the product of matrices, where each $X^{(k)}(i_k)=[X^{(k)}_{s_{k-1},s_k}(i_k)]$ has size $r_{k-1}\times r_k$ and depends on the~\emph{parameter} $i_k.$ The three--dimensional array $X^{(k)}=\left[X^{(k)}(s_{k-1},i_k,s_k)\right]$ is referred to as \emph{TT--core}. To unify the notation, we introduce the virtual \emph{border ranks} $r_0=r_d=1$ and consider $[X^{(1)}(i_1,s_1)]=[X^{(1)}(s_0,i_1,s_1)]$ and $[X^{(d)}(s_{d-1},i_d)]=[X^{(d)}(s_{d-1},i_d,s_d)]$ as $3$--tensors.
The elementwise notation allows us to reshape tensors into vectors or matrices simply by moving indices. We have done this to present the TT--core $\left[X^{(k)}(s_{k-1},i_k,s_k)\right]$ as the parameter--dependent matrix $[X^{(k)}_{s_{k-1},s_k}(i_k)].$ More complicated transformations can be expressed by \emph{index grouping}, which combines indices $i_1,\ldots,i_d$ in the single multi--index $\overline{i_1\ldots i_d}.$ \footnote{ The multi--index is usually defined by either the \emph{big--endian} convention $\overline{i_1\ldots i_d}=i_d+(i_{d-1}-1)n_d +\ldots+(i_1-1)n_2\ldots n_d$ or the \emph{little--endian} convention $\overline{i_1\ldots i_d}=i_1+(i_2-1)n_1 +\ldots+(i_d-1)n_1\ldots n_{d-1}.$ The big--endian notation is similar to numbers written in the positional system, while the {little--endian} notation is used in numerals in the Arabic scripts and is consistent with the \textsc{Fortran} style of indexing.
The exact formula which maps indices to the multi--index is not essential in this paper. }
For example, the $k$--th \emph{unfolding} of a tensor is the $(n_1\ldots n_k) \times (n_{k+1}\ldots n_d)$ matrix with elements \begin{equation}\nonumber A^{\{k\}}(i_{\leq k},i_{>k}) = A^{\{k\}}(\overline{i_1\ldots i_k}, \overline{i_{k+1}\ldots i_d}) = A(i_1,\ldots,i_d). \end{equation} Here and further we use the following shortcuts to simplify the notation $$ i_{\leq k} = \overline{i_1\ldots i_k}, \quad i_{>k} = \overline{i_{k+1}\ldots i_d}, \qquad\mbox{and}\qquad i_{b:c}=\overline{i_b\ldots i_c}. $$ For $A$ in the TT--format~\eqref{eq:tt} it holds $\mathop{\mathrm{rank}}\nolimits A^{\{k\}}=r_k.$ In~\cite{osel-tt-2011} the reverse is proven: for any tensor $A$ there exists the representation~\eqref{eq:tt} with TT--ranks $r_k=\mathop{\mathrm{rank}}\nolimits A^{\{k\}}.$ This gives the term \emph{TT--rank} the definite algebraic meaning.
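The index grouping above is easy to realize in practice: with C-ordered memory, a reshape of the tensor yields the $k$--th unfolding directly. A minimal numpy sketch (the helper name \texttt{unfolding} is ours, not a library routine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = (2, 3, 4, 5)                  # mode sizes n_1, ..., n_4
A = rng.standard_normal(n)        # a random 4-dimensional tensor

def unfolding(A, k):
    """k-th unfolding: i_1..i_k form the row multi-index, i_{k+1}..i_d
    the column multi-index; numpy's C-ordered reshape realizes the
    big-endian convention."""
    n = A.shape
    return A.reshape(int(np.prod(n[:k])), int(np.prod(n[k:])))

A2 = unfolding(A, 2)              # (n_1 n_2) x (n_3 n_4) matrix
print(A2.shape)                   # (6, 20)

# For a generic random tensor the unfolding has full rank; for a tensor
# given in the TT format its rank would equal the TT-rank r_k.
print(np.linalg.matrix_rank(A2))  # 6
```

The reshape copies no data for a C-contiguous array, so forming unfoldings is essentially free.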
For an $m\times n$ matrix $A=\left[A(i,j)\right]$ the \emph{cross} (or \emph{skeleton}) interpolation is written as follows \begin{equation}\label{eq:im}
A(i,j) \approx \tilde A(i,j) = \sum_{s,t} A(i,\mathcal{J}_t) \left[A(\mathcal{I}_s,\mathcal{J}_t)\right]^{-1} A(\mathcal{I}_s,j).
\end{equation} Here the sets $\mathcal{I}=\{\mathcal{I}_1,\ldots,\mathcal{I}_r\}$ and $\mathcal{J}=\{\mathcal{J}_1,\ldots,\mathcal{J}_r\}$ define the positions of the interpolation rows and columns, respectively. The summation over $s,t=1,\ldots,r$ ties the pairs of subsets together, similarly to the pairs of indices in~\eqref{eq:tt}. In the matrix form the right-hand side of~\eqref{eq:im} is the product of the $m \times r$ matrix of columns, the inverse of the $r\times r$ submatrix at the intersection and the $r\times n$ matrix of rows. The essential property of the interpolation is that~\eqref{eq:im} is~\emph{exact} on its cross \begin{equation}\label{eq:ip}
A(i,j) = \tilde A(i,j) = \sum_{s,t} A(i,\mathcal{J}_t) \left[A(\mathcal{I}_s,\mathcal{J}_t)\right]^{-1} A(\mathcal{I}_s,j), \qquad
\mbox{if}\quad i\in\mathcal{I} \mbox{ or } j\in\mathcal{J}. \end{equation}
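Both properties are easy to check numerically. In the following sketch the index sets are a hypothetical choice of ours (the leading rows and columns), not the maximum--volume ones; for an exact rank-$r$ matrix any sets with a nonsingular intersection recover the matrix, and the exactness on the cross holds for an arbitrary matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 8, 10, 3

# An exact rank-r matrix A = U V.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Hypothetical interpolation sets: leading rows and columns.
I, J = [0, 1, 2], [0, 1, 2]
A_box = A[np.ix_(I, J)]                       # r x r intersection submatrix
A_tilde = A[:, J] @ np.linalg.solve(A_box, A[I, :])
print(np.allclose(A, A_tilde))                # True: exact recovery

# Even for a full-rank matrix the interpolation is exact on its cross:
B = rng.standard_normal((m, n))
B_tilde = B[:, J] @ np.linalg.solve(B[np.ix_(I, J)], B[I, :])
print(np.allclose(B[I, :], B_tilde[I, :]))    # True: rows I reproduced exactly
```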
When $A$ is not exactly a rank-$r$ matrix, the choice of the interpolation sets $\mathcal{I},\mathcal{J}$ may affect the interpolation accuracy significantly. A good choice of $A_\Box=\left[A(\mathcal{I},\mathcal{J})\right]$ is the~\emph{maximum--volume} $r\times r$ submatrix, such that $\mathop{\mathrm{vol}}\nolimits A_\Box = |\mathop{\mathrm{det}}\nolimits A_\Box|$ is maximal over all possible choices of $\mathcal{I}$ and $\mathcal{J}.$ Assuming that the ranks (sizes of submatrices) are defined \emph{a priori}, we denote this choice by \begin{equation}\nonumber
\left[\mathcal{I},\mathcal{J}\right] = \arg\max_{\mathcal{I}',\mathcal{J}'}\mathop{\mathrm{vol}}\nolimits [A(\mathcal{I}',\mathcal{J}')], \qquad\mbox{or}\qquad [\mathcal{I},\mathcal{J}] = \mathop{\mathrm{maxvol}}\nolimits A. \end{equation} For $\mathcal{I},\mathcal{J}$ chosen by the maximum--volume principle, the following \emph{quasioptimality} statements are proven in~\cite{gt-maxvol-2001} and~\cite{schneider-cross2d-2010,gt-skel-2011}, respectively. \begin{equation}\label{eq:qm}
\begin{split}
\| A - \tilde A \|_C & \leq (r+1)^{\phantom{2}} \min\nolimits_{\mathop{\mathrm{rank}}\nolimits X=r} \| A - X \|_2, \\
\| A - \tilde A \|_C & \leq (r+1)^2 \min\nolimits_{\mathop{\mathrm{rank}}\nolimits X=r} \| A - X \|_C.
\end{split} \end{equation} Another important property of the maximum--volume submatrix is that it is \emph{dominant} (see~\cite{gostz-maxvol-2010} for more details) in the rows and columns which it occupies, i.e. \begin{equation}\label{eq:dom}
\left| \sum_{t} \left[ A(\mathcal{I}_t,\mathcal{J}_s) \right]^{-1} A(\mathcal{I}_t,j) \right| \leq 1,
\qquad
\left| \sum_{s} A(i,\mathcal{J}_s) \left[ A(\mathcal{I}_t,\mathcal{J}_s) \right]^{-1} \right| \leq 1. \end{equation}
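The dominance property~\eqref{eq:dom} can be verified numerically. The sketch below finds the maximum--volume submatrix of a tiny random matrix by exhaustive search (our stand-in for the practical algorithms recalled in Sec.~\ref{sec:alg}) and checks that all interpolation coefficients stay within $1$ in modulus:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
m, n, r = 6, 7, 2
A = rng.standard_normal((m, n))

# Brute-force search of the maximum-volume r x r submatrix
# (feasible only for tiny matrices).
best, I, J = -1.0, None, None
for I_ in combinations(range(m), r):
    for J_ in combinations(range(n), r):
        vol = abs(np.linalg.det(A[np.ix_(I_, J_)]))
        if vol > best:
            best, I, J = vol, list(I_), list(J_)

A_box = A[np.ix_(I, J)]

# Dominance: the coefficients expressing columns (resp. rows) of A
# through the chosen cross are bounded by 1 in modulus.
C_cols = np.linalg.solve(A_box, A[I, :])
C_rows = A[:, J] @ np.linalg.inv(A_box)
print(np.abs(C_cols).max() <= 1 + 1e-12)   # True
print(np.abs(C_rows).max() <= 1 + 1e-12)   # True
```

If some coefficient exceeded one, swapping the corresponding row or column into the cross would increase the volume, contradicting maximality.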
In~\cite{ot-ttcross-2010} it is shown that if a tensor $A$ is exactly given by~\eqref{eq:tt} with TT--ranks $r_k,$ it is recovered from $\mathcal{O}(d n r^2)$ tensor entries \footnote{We always assume $n_1=n_2=\ldots=n_d=n$ and $r_1=\ldots=r_{d-1}=r$ in complexity estimates} by the following formula. \begin{equation}\label{eq:ii}
\begin{split}
A(i_1,\ldots,i_d) & = \sum_{\mathbf{s},\mathbf{t}} A(i_1,\mathcal{I}_{t_1}^{> 1}) \left[ A(\mathcal{I}_{s_1}^{\leq 1},\mathcal{I}_{t_1}^{>1})\right]^{-1} A(\mathcal{I}_{s_1}^{\leq 1},i_2,\mathcal{I}^{>2}_{t_2}) \ldots A(\mathcal{I}^{\leq d-1}_{s_{d-1}},i_d)
\\ & = \sum_{\mathbf{s},\mathbf{t}} \prod_{k=1}^d A(\mathcal{I}^{\leq k-1}_{s_{k-1}},i_k,\mathcal{I}^{>k}_{t_k}) \left[A(\mathcal{I}^{\leq k}_{s_k},\mathcal{I}^{>k}_{t_k})\right]^{-1},
\end{split} \end{equation} where $\mathcal{I}^{\leq k}_{s_k}$ and $\mathcal{I}^{>k}_{t_k}$ denote the positions of $r_k$ rows and columns in the $k$--th unfolding $A^{\{k\}}.$ To unify the notation, we introduce the empty border sets $\mathcal{I}^{\leq 0}=\emptyset$ and $\mathcal{I}^{>d}=\emptyset.$ We denote submatrices on the intersection of the interpolation crosses as follows \begin{equation}\nonumber
\left[A(\mathcal{I}^{\leq k}_{s_k},\mathcal{I}^{>k}_{t_k})\right]_{t_k,s_k=1}^{r_k} = \left[A(\mathcal{I}^{\leq k},\mathcal{I}^{>k})\right] = A_k, \qquad A_k^{-1} = B^{[k]}. \end{equation} Throughout the paper we assume that the TT--ranks of~\eqref{eq:tt} and~\eqref{eq:ii} are the same, i.e., the sets $\mathcal{I}^{\leq k}=\{\mathcal{I}^{\leq k}_{1},\ldots,\mathcal{I}^{\leq k}_{r_k}\}$ and $\mathcal{I}^{>k}=\{\mathcal{I}^{>k}_{1},\ldots,\mathcal{I}^{>k}_{r_k}\}$ have $r_k$ elements each and $A_k=\left[A(\mathcal{I}^{\leq k},\mathcal{I}^{>k})\right]$ is an $r_k \times r_k$ matrix, where $r_1,\ldots,r_{d-1}$ are the TT--ranks of~\eqref{eq:tt}. When a~\emph{choice} of $\mathcal{I}^{\leq k},\mathcal{I}^{>k}$ is considered, it means that we choose $r_k$ `left' and `right' multi--indices $i_{\leq k}\in\mathcal{I}^{\leq k},$ $i_{>k}\in\mathcal{I}^{>k}.$
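For a tensor with exact TT--ranks, formula~\eqref{eq:ii} indeed recovers every entry. A minimal check for a rank-one $3$--tensor, with interpolation sets of a single multi--index each (an arbitrary choice of ours; any choice with nonsingular intersections works in the exact-rank case):

```python
import numpy as np

rng = np.random.default_rng(3)
# A rank-one 3-tensor A(i1,i2,i3) = a(i1) b(i2) c(i3); all TT-ranks are 1.
a = rng.uniform(1, 2, 4)
b = rng.uniform(1, 2, 5)
c = rng.uniform(1, 2, 6)
A = np.einsum('i,j,k->ijk', a, b, c)

# Sets of one multi-index each:
# I<=1 = {i0}, I>1 = {(j0,k0)}, I<=2 = {(i0,j0)}, I>2 = {k0}.
i0, j0, k0 = 1, 2, 3
A1 = A[i0, j0, k0]   # 1x1 intersection matrix A(I<=1, I>1)
A2 = A[i0, j0, k0]   # 1x1 intersection matrix A(I<=2, I>2)

# The interpolation formula: three crossing fibers and two inverses.
A_tilde = np.einsum('i,j,k->ijk',
                    A[:, j0, k0] / A1,
                    A[i0, :, k0] / A2,
                    A[i0, j0, :])
print(np.allclose(A, A_tilde))   # True: recovered from 3 fibers of entries
```

Only $n_1+n_2+n_3$ entries of $A$ are touched, in agreement with the $\mathcal{O}(dnr^2)$ count above.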
In~\eqref{eq:qm}, $\| \,\cdot\, \|_2$ denotes the~\emph{spectral} norm of a matrix, and $\|\,\cdot\,\|_C$ denotes the~\emph{Chebyshev} norm, also known as \emph{uniform}, \emph{supremum}, $\|\,\cdot\,\|_{\infty}$--norm, or the maximum entry in modulus. For tensors Chebyshev and Frobenius norms are defined as follows $$
|A| = \|A\|_C = \max_{i_1,\ldots,i_d}|A(i_1,\ldots,i_d)|, \qquad
\|A\|^2 = \|A\|_F^2 = \sum_{i_1,\ldots,i_d}|A(i_1,\ldots,i_d)|^2. $$
\section{Maximum--volume principle in higher dimensions} \label{sec:mvol} We consider a tensor $A$ which is approximated by the TT format as follows \begin{equation}\label{eq:aa}
\begin{split}
A(i_1,\ldots,i_d) & \approx X(i_1,\ldots, i_d) = \sum_{\mathbf{s}} X^{(1)}(i_1,s_1) X^{(2)}(s_1,i_2,s_2) \ldots X^{(d)}(s_{d-1},i_d),
\\ & \qquad |A-X| \leq E_C, \qquad \|A-X\| \leq E_F,
\end{split} \end{equation} where $E_C$ and $E_F$ are known or estimated from computations or theoretical properties of $A.$ We apply~\eqref{eq:im} to the $k$--th unfolding and write the cross interpolation \begin{equation}\nonumber
\begin{split}
A^{\{k\}}(i_{\leq k},i_{>k}) & \approx
\tilde A^{\{k\}}(i_{\leq k},i_{>k})
=
\sum_{s_k,t_k} A^{\{k\}}(i_{\leq k},\mathcal{I}^{>k}_{t_k})
\left[A(\mathcal{I}^{\leq k}_{s_k},\mathcal{I}^{>k}_{t_k})\right]^{-1}
A^{\{k\}}(\mathcal{I}^{\leq k}_{s_k},i_{>k}).
\end{split} \end{equation} For $\left[\mathcal{I}^{\leq k},\mathcal{I}^{>k}\right] = \mathop{\mathrm{maxvol}}\nolimits A^{\{k\}}$ the accuracy is estimated by~\eqref{eq:qm} as follows \begin{equation}\nonumber
\begin{split}
\| A^{\{k\}} - \tilde A^{\{k\}} \|_C & \leq (r_k+1)^{\phantom{2}} \| A^{\{k\}} - X^{\{k\}} \|_2 \leq (r_k+1)^{\phantom{2}} \| A^{\{k\}} - X^{\{k\}} \|_F, \\
\| A^{\{k\}} - \tilde A^{\{k\}} \|_C & \leq (r_k+1)^2 \| A^{\{k\}} - X^{\{k\}} \|_C.
\end{split} \end{equation}
We can safely omit the superscript for unfoldings when we use the pointwise notation, since the grouping of indices clearly defines the shape of the resulting matrix. The equation for the unfolding is recast for the tensor as follows \begin{equation}\label{eq:qt}
\begin{split}
A(i_1,\ldots,i_d) & = \sum_{s_k,t_k} A(i_1,\ldots,i_k,\mathcal{I}^{>k}_{t_k})
B^{[k]}_{t_k,s_k}
A(\mathcal{I}^{\leq k}_{s_k},i_{k+1},\ldots,i_d)
+ E(i_1,\ldots,i_d), \\ & \quad
| E | \leq (r_k+1)^{\phantom{2}} E_F, \qquad
| E | \leq (r_k+1)^2 E_C, \qquad
B^{[k]} = A_k^{-1}.
\end{split} \end{equation} The interpolation step splits a $d$--tensor into a `product' of two tensors, which have $k$ and $d-k$ free indices, respectively. The same splitting is done in~\cite{ot-tt-2009}, where a \emph{Tree--Tucker} format (later recast as the tensor train format) has been proposed to break the curse of dimensionality. In~\cite{ot-tt-2009} the quasioptimality of the approximations computed by the proposed TT--SVD algorithm is shown. Similarly, we estimate the accuracy of the interpolation--based formula~\eqref{eq:ii}.
\begin{lemma}\label{lem1} If a tensor $A$ satisfies~\eqref{eq:aa}, then for any $k=1,\ldots,d-1$ it holds \begin{equation}\label{eq:q1}
\begin{split}
A(\mathcal{I}^{\leq k-1}, i_k,i_{k+1},\mathcal{I}^{> k+1})
& = \sum_{s_k,t_k} A(\mathcal{I}^{\leq k-1},i_k,\mathcal{I}^{>k}_{t_k})
B^{[k]}_{t_k,s_k}
A(\mathcal{I}^{\leq k}_{s_k},i_{k+1},\mathcal{I}^{> k+1})
\\ & + E(\mathcal{I}^{\leq k-1}, i_k,i_{k+1},\mathcal{I}^{> k+1}),
\\ & \quad
| E | \leq (r_k+1)^{\phantom{2}} E_F, \qquad
| E | \leq (r_k+1)^2 E_C.
\end{split} \end{equation} \end{lemma} \begin{proof} In~\eqref{eq:qt} we restrict the free indices $i_{\leq k-1}$ to the subset $\mathcal{I}^{\leq k-1}$ and similarly $i_{>k+1}$ to $\mathcal{I}^{>k+1}.$ \end{proof}
\begin{lemma}\label{lem2}
If a tensor $A$ satisfies~\eqref{eq:aa}, and for some $1\leq p<k<q\leq d$ the subtensors
$$
A_\triangleleft=\left[A(\mathcal{I}^{\leq p-1}, i_{p:k},\mathcal{I}^{> k})\right], \qquad
A_\triangleright=\left[A(\mathcal{I}^{\leq k}, i_{k+1:q},\mathcal{I}^{> q})\right],
$$
satisfy $A_\triangleleft=T_\triangleleft + E_\triangleleft$ and $A_\triangleright=T_\triangleright + E_\triangleright$ with $|E_\triangleleft|\leq \varepsilon |A|$ and $|E_\triangleright| \leq \varepsilon |A|,$ then
\begin{equation}\label{eq:q2}
\begin{split}
A(\mathcal{I}^{\leq p-1}, i_{p:q},\mathcal{I}^{> q})
& = \sum_{s_k,t_k} T_\triangleleft(\mathcal{I}^{\leq p-1}, i_{p:k},\mathcal{I}^{> k}_{t_k})
B^{[k]}_{t_k,s_k}
T_\triangleright(\mathcal{I}^{\leq k}_{s_k}, i_{k+1:q},\mathcal{I}^{> q})
\\ & + \mathcal{E}(\mathcal{I}^{\leq p-1}, i_{p:q},\mathcal{I}^{> q}),
\\
\frac{|\mathcal{E}|}{|A|} \leq (2 + \varepsilon\kappa_k) \varepsilon r_k & + \frac{|E|}{|A|}, \qquad \kappa_k = r_k |A| |A_k^{-1}|, \quad A_k= \left[ A(\mathcal{I}^{\leq k},\mathcal{I}^{>k}) \right],
\end{split}
\end{equation}
where $|E|$ is estimated by~\eqref{eq:q1}. \end{lemma} \begin{proof} As in the previous lemma, by taking the subtensor in~\eqref{eq:qt} we obtain \begin{equation}\nonumber
\begin{split}
A(\mathcal{I}^{\leq p-1}, i_{p:q},\mathcal{I}^{> q})
& = \sum_{s_k,t_k} A_\triangleleft(\mathcal{I}^{\leq p-1}, i_{p:k},\mathcal{I}^{> k}_{t_k})
B^{[k]}_{t_k,s_k}
A_\triangleright(\mathcal{I}^{\leq k}_{s_k}, i_{k+1:q},\mathcal{I}^{> q})
+ E(\mathcal{I}^{\leq p-1}, i_{p:q},\mathcal{I}^{> q}),
\end{split}
\end{equation}
where $|E|$ is estimated by~\eqref{eq:q1}. We have \begin{equation}\nonumber
\begin{split} A_\triangleleft B^{[k]} A_\triangleright & = (T_\triangleleft + E_\triangleleft) B^{[k]} (T_\triangleright + E_\triangleright)
\\ & = T_\triangleleft B^{[k]} T_\triangleright + A_\triangleleft B^{[k]} E_\triangleright + E_\triangleleft B^{[k]} A_\triangleright - E_\triangleleft B^{[k]} E_\triangleright.
\end{split} \end{equation}
Since $A_k=\left[ A(\mathcal{I}^{\leq k},\mathcal{I}^{>k}) \right]$ is the maximum--volume submatrix in $A^{\{k\}}=\left[A(i_{\leq k},i_{>k})\right],$ it dominates by~\eqref{eq:dom} in the corresponding rows and columns of the unfolding and \emph{a fortiori} in $A_\triangleleft$ and $A_\triangleright,$ i.e. $|A_\triangleleft A_k^{-1}| \leq 1$ and $|A_k^{-1} A_\triangleright| \leq 1.$ With $B^{[k]}=A_k^{-1}$ we have the following estimates $$
\begin{array}{c}
|A_\triangleleft B^{[k]} E_\triangleright| \leq r_k |A_\triangleleft B^{[k]} | \, |E_\triangleright| \leq r_k \varepsilon |A|,
\qquad
|E_\triangleleft B^{[k]} A_\triangleright|\leq r_k\varepsilon |A|, \qquad \mbox{and}
\\[1.1ex]
| E_\triangleleft B^{[k]} E_\triangleright | \leq r_k^2 \varepsilon^2 | B^{[k]} | \, | A |^2 = r_k^2 \varepsilon^2 | A_k^{-1} | \, | A |^2 = r_k \kappa_k \varepsilon^2 |A|,
\end{array} $$ which completes the proof. \end{proof}
\begin{figure}
\caption{Interpolation steps on the balanced dimension tree for Thm.~\ref{thm1}.}
\label{fig:t1}
\end{figure}
\begin{theorem}\label{thm1}
If a tensor $A$ satisfies~\eqref{eq:aa}, and $E_F$ and/or $E_C$ are sufficiently small, then $\tilde A$ given by~\eqref{eq:ii} with $\left[\mathcal{I}^{\leq k},\mathcal{I}^{>k}\right]=\mathop{\mathrm{maxvol}}\nolimits A^{\{k\}}$ provides the accuracy
\begin{equation}\label{eq1}
\begin{split}
|A-\tilde A| & \leq (2r+\kappa r+1)^{\lceil \log_2 d \rceil} (r+1)^{\phantom{2}} E_F,
\\
|A-\tilde A| & \leq (2r+\kappa r+1)^{\lceil \log_2 d \rceil} (r+1)^2 E_C,
\end{split}
\end{equation}
where $r=\max r_k,$ $\kappa=\max \kappa_k.$
By `sufficiently small' we mean such values of $E_F$ and/or $E_C$ that the corresponding estimate provides $|A-\tilde A|/|A| < 1.$
\end{theorem} \begin{proof} We will use the \emph{dimension tree} suggested in~\cite{ot-tt-2009}, see Fig.~\ref{fig:t1}. The interpolation step~\eqref{eq:qt} splits a given group of indices $i_p,\ldots,i_q$ in two parts $i_p,\ldots,i_k$ and $i_{k+1},\ldots,i_q,$ and introduces the auxiliary summation over the sets $\mathcal{I}^{\leq k}$ and $\mathcal{I}^{> k}$ at the point of splitting. No more than two auxiliary sets appear in each subtensor when the decomposition goes from the whole tensor down to the leaves $\left[A(\mathcal{I}^{\leq k-1},i_k,\mathcal{I}^{>k})\right],$ which constitute~\eqref{eq:ii}. The leaves consist of the original entries of $A,$ therefore we have zero error at the ground level. The interpolation error at level $1$ is estimated by~\eqref{eq:q1} as follows $$
\varepsilon_1 = \frac{|E|}{|A|} = \min\left\{(r+1)\frac{E_F}{|A|}, \, (r+1)^2 \frac{E_C}{|A|} \right\}. $$ When we move up by one level of the dimension tree, the error is amplified as shown by~\eqref{eq:q2}, and the relative error in the Chebyshev norm propagates as follows $$ \varepsilon_{m+1}=(2+\varepsilon_m \kappa) \varepsilon_m r + \varepsilon_1 \leq (2+\kappa)\varepsilon_m r + \varepsilon_m = (2r+\kappa r+1)\varepsilon_m. $$ Here we use the inequality $\varepsilon_m < 1$ provided by the assumption that $E_F$ and $E_C$ are sufficiently small. Clearly, $\varepsilon_l=(2r+\kappa r+1)^{l-1}\varepsilon_1.$ For the balanced tree $2^l \leq 2d$ and $l \leq \lceil \log_2 d \rceil + 1,$ which completes the proof. \end{proof} \begin{remark}
We are tempted to call $\kappa_k=r_k |A| |A_k^{-1}|$ the \emph{condition number} of the submatrix $A_k$ w.r.t. the Chebyshev norm.
Technically this is not correct, since in general $|A| \geq |A_k|$ and $\kappa_k \geq r_k |A_k| |A_k^{-1}| = \kappa_C(A_k).$
However, in~\cite{gostz-maxvol-2010} it is shown that the ratio of the Chebyshev norms of a matrix and its maximum--volume submatrix is bounded as follows $|A|/|A_k| \leq 2r_k^2+r_k$ and often does not grow with rank.
Therefore, $\kappa_k \leq (2r_k^2+r_k)\kappa_C(A_k),$ and usually $\kappa_k \simeq \kappa_C(A_k).$
A similar formula with spectral norms appears in the pioneering paper on the cross interpolation~\cite[Eq. $(1.5)$]{gtz-psa-1997}. \end{remark}
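To get a feeling for the bound~\eqref{eq1}, one can tabulate the per-level error recursion from the proof for hypothetical values of $r,$ $\kappa$ and $\varepsilon_1$ (our choice, for illustration only):

```python
import math

# Hypothetical parameters: rank r, kappa, and the relative error eps1 of
# a single interpolation step, chosen small enough that the bound stays
# below 1 as the theorem requires.
r, kappa, eps1 = 5, 2.0, 1e-14
amp = 2 * r + kappa * r + 1        # per-level amplification factor

for d in (4, 64, 1024):
    levels = math.ceil(math.log2(d)) + 1
    bound = amp ** (levels - 1) * eps1
    print(d, bound)   # grows like a power of log2(d), not of d
```

Even for $d=1024$ only $11$ tree levels are traversed, so the amplification factor enters with exponent $10$ rather than $d-1.$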
The splitting of indices in the balanced dimension tree was used to estimate the accuracy of the interpolation in the HT format~\cite{lars-htcross-2013}. The upper bound for the quasioptimality constant in the HT format is $\mathcal{O}(r^{2d}),$ where $r$ is the maximum representation rank. Note that the upper bounds in~\eqref{eq1} do not \emph{necessarily} grow exponentially with~$d.$ A stricter statement is possible if $\kappa$ remains bounded or grows moderately with $d$ as well, which certainly depends on the properties of the \emph{sequence} of $d$--tensors considered for $d=1,2,3,\ldots$ Such rigorous analysis is very important, but is beyond the scope of this paper.
The result of Thm.~\ref{thm1} can be interpreted as the \emph{existence} of a sufficiently good TT approximation computed from a few entries of a tensor by formula~\eqref{eq:ii}, provided that an accurate representation in the TT format~\eqref{eq:tt} is possible. The coefficient $\mathcal{O}(r^{\lceil \log_2d \rceil + 2})$ can also be understood as an upper bound for the ratio of the accuracy of the~\emph{best cross} interpolation~\eqref{eq:ii} to the \emph{best possible} accuracy of the approximation~\eqref{eq:tt} with the same TT--ranks. Thm.~\ref{thm1} is \emph{constructive} and prescribes the choice of the interpolation sets $\mathcal{I}^{\leq k}, \mathcal{I}^{>k}$ that achieves the quasioptimal accuracy. However, the actual~\emph{computation} of the maximum--volume sets in the unfoldings $A^{\{k\}}$ is infeasible due to their prohibitively large sizes. In the next sections we consider the nested choice of the interpolation sets, which reduces the search space.
\section{Nested maximum volume indices} \label{sec:emb} In this section we switch to the ultimately unbalanced dimension tree, which splits the indices one by one, see Fig.~\ref{fig:t2}. In~\cite{osel-tt-2011} this tree has been used to develop the TT--SVD algorithm, which approximates a given $d$--tensor by the TT format. We apply the same algorithm, substituting the SVD approximation steps by the interpolation. As in the previous section, we estimate the accuracy of the resulting approximation w.r.t. the best possible approximation of the same TT--ranks.
Given a tensor $A=\left[A(i_1,\ldots,i_d)\right]$ that is approximated by the tensor train~\eqref{eq:aa}, we apply the interpolation formula~\eqref{eq:qt} and separate the rightmost index from the others as follows \begin{equation}\nonumber
A(i_1,\ldots,i_d) = \sum_{\substack{s_{d-1}\\t_{d-1}}} A(i_{\leq d-1},\mathcal{I}^{> d-1}_{t_{d-1}})
B^{[d-1]}_{t_{d-1},s_{d-1}}
A(\mathcal{I}^{\leq d-1}_{s_{d-1}},i_d)
+ E_{d-1}(i_1,\ldots,i_d),
\end{equation}
where $\left[\mathcal{I}^{\leq d-1},\mathcal{I}^{>d-1}\right] = \mathop{\mathrm{maxvol}}\nolimits \left[A(i_{\leq d-1},i_d)\right].$ Then we interpolate the subtensor with $d-1$ free indices and separate the rightmost free index as follows \begin{equation}\nonumber
\begin{split}
A(i_{\leq d-1},\mathcal{I}^{> d-1}) & = \sum_{\substack{{s_{d-2}}\\{t_{d-2}}}}
A(i_{\leq d-2},\mathcal{I}^{> d-2}_{t_{d-2}})
B^{[d-2]}_{t_{d-2},s_{d-2}}
A(\mathcal{I}^{\leq d-2}_{s_{d-2}},i_{d-1},\mathcal{I}^{>d-1})
+ E_{d-2}(i_{\leq d-1},\mathcal{I}^{> d-1}),
\end{split} \end{equation} where $\left[\mathcal{I}^{\leq d-2},\mathcal{I}^{>d-2}\right] = \mathop{\mathrm{maxvol}}\nolimits \left[A(i_{\leq d-2},i_{d-1}\mathcal{I}^{>d-1})\right].$ The elements of $\mathcal{I}^{>d-2}$ are now chosen not from all possible values of the bi--index $\overline{i_{d-1}i_d}$ but from the reduced set $\overline{i_{d-1}\mathcal{I}^{>d-1}},$ where the index $i_d$ is restricted to the $r_{d-1}$ elements of $\mathcal{I}^{>d-1}.$ Hereinafter we omit the overline for the sake of clarity, since the use of the comma in the pointwise notation is sufficient to show which indices are grouped together. The maximum--volume subsets $\mathcal{I}^{>d-1}$ and $\mathcal{I}^{>d-2}$ are \emph{right--nested} (cf.~\cite{ot-ttcross-2010}), which means that $i_{>d-2} \in \mathcal{I}^{>d-2}$ implies $i_{>d-1} \in \mathcal{I}^{>d-1}.$ As the interpolation develops further, it holds \begin{equation}\label{eq:emb}
i_{>k} \in \mathcal{I}^{>k} \:\Rightarrow\:
i_{>k+1} \in \mathcal{I}^{>k+1},
\qquad k=d-1,\ldots,1.
\end{equation}
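The right-to-left sweep described above can be sketched for a small $3$--tensor with exact TT--ranks. The exhaustive \texttt{maxvol\_bruteforce} helper below is ours and stands in for the practical algorithms of Sec.~\ref{sec:alg}; the final list shows that every selected right multi--index is assembled from an element of the previously chosen set, i.e. the sets are right--nested:

```python
import numpy as np
from itertools import combinations

def maxvol_bruteforce(M, r):
    """Exhaustive maximum-volume r x r submatrix (tiny matrices only)."""
    best, sets = -1.0, (None, None)
    for I in combinations(range(M.shape[0]), r):
        for J in combinations(range(M.shape[1]), r):
            v = abs(np.linalg.det(M[np.ix_(I, J)]))
            if v > best:
                best, sets = v, (list(I), list(J))
    return sets

rng = np.random.default_rng(4)
n1, n2, n3, r = 4, 4, 4, 2
# A tensor with exact TT-ranks (r, r), built from random TT-cores.
A = np.einsum('ia,ajb,bk->ijk',
              rng.standard_normal((n1, r)),
              rng.standard_normal((r, n2, r)),
              rng.standard_normal((r, n3)))

# Step k = 2: maxvol in the full last unfolding gives I<=2 and I>2.
Ile2, Igt2 = maxvol_bruteforce(A.reshape(n1 * n2, n3), r)

# Step k = 1: columns are searched only in the reduced set (i2, I>2).
M = A[:, :, Igt2].reshape(n1, n2 * r)
Ile1, cols = maxvol_bruteforce(M, r)

# Each selected column c encodes the nested multi-index (i2, i3):
Igt1 = [(c // r, Igt2[c % r]) for c in cols]
print(Igt1)   # every i3 component belongs to I>2
```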
\begin{figure}
\caption{Interpolation steps on the unbalanced dimension tree for Thm.~\ref{thm2}, cf. Fig.~\ref{fig:t1}.}
\label{fig:t2}
\end{figure}
\begin{theorem}\label{thm2}
If a tensor $A$ satisfies~\eqref{eq:aa}, then $\tilde A$ given by~\eqref{eq:ii} with
$$
\left[\mathcal{I}^{\leq k},\mathcal{I}^{>k}\right]=\mathop{\mathrm{maxvol}}\nolimits \left[A(i_{\leq k},i_{k+1}\mathcal{I}^{>k+1})\right], \qquad k=d-1,\ldots,1,
$$
provides the following accuracy
\begin{equation}\label{eq2}
\begin{split}
|A-\tilde A| & \leq \frac{r^{d-1}-1}{r-1} (r+1)^{\phantom{2}} E_F,
\\
|A-\tilde A| & \leq \frac{r^{d-1}-1}{r-1} (r+1)^{2} E_C.
\end{split}
\end{equation} \end{theorem} \begin{proof} At the first level of the dimension tree the interpolation reads as follows \begin{equation}\nonumber A(i_1,i_2\mathcal{I}^{>2}) = \sum_{s_1,t_1} A(i_1,\mathcal{I}^{>1}_{t_1})
B^{[1]}_{t_1,s_1}
A(\mathcal{I}^{\leq1}_{s_1},i_2\mathcal{I}^{>2}) + E_1(i_1,i_2\mathcal{I}^{>2}), \end{equation} and since $\left[\mathcal{I}^{\leq1},\mathcal{I}^{>1}\right]=\mathop{\mathrm{maxvol}}\nolimits\left[A(i_1,i_2\mathcal{I}^{>2})\right],$ it holds \begin{equation}\nonumber
|E_1| \leq (r_1+1)^{\phantom{2}} E_F, \qquad |E_1| \leq (r_1+1)^2 E_C, \end{equation} which proves the statement of the theorem for $d=2.$ Suppose at the level $k$ of the tree it holds \begin{equation}\nonumber
\begin{split}
A(i_{\leq k},\mathcal{I}^{>k}) & = \sum_{\substack{s_1\ldots s_{k-1}\\t_1\ldots t_{k-1}}}
A(i_1,\mathcal{I}^{>1}_{t_1})
B^{[1]}_{t_{1},s_{1}}
\ldots
B^{[k-1]}_{t_{k-1},s_{k-1}}
A(\mathcal{I}^{\leq k-1}_{s_{k-1}},i_k,\mathcal{I}^{>k})
+ \mathcal{E}_k(i_{\leq k},\mathcal{I}^{>k}),
\end{split} \end{equation}
where $|\mathcal{E}_k|\leq\frac{r^{k-1}-1}{r-1} |E_1|.$ The interpolation at the next level reads as follows \begin{equation}\nonumber
\begin{split}
A(i_{\leq k},i_{k+1}\mathcal{I}^{>k+1})
& = \sum_{s_k,t_k} A(i_{\leq k},\mathcal{I}^{>k}_{t_k})
B^{[k]}_{t_{k},s_{k}}
A(\mathcal{I}^{\leq k}_{s_k},i_{k+1}\mathcal{I}^{>k+1})
+ E_k(i_{\leq k+1},\mathcal{I}^{>k+1}).
\end{split} \end{equation} Using the previous equation we obtain \begin{equation}\nonumber
\begin{split}
A(i_{\leq k+1},\mathcal{I}^{>k+1}) & = \sum_{\substack{s_1\ldots s_k\\t_1\ldots t_k}}
A(i_1,\mathcal{I}^{>1}_{t_1})
B^{[1]}_{t_{1},s_{1}}
\ldots
B^{[k]}_{t_{k},s_{k}}
A(\mathcal{I}^{\leq k}_{s_k},i_{k+1},\mathcal{I}^{>k+1})
\\
& + \underbrace{\sum_{s_k,t_k} \mathcal{E}_k(i_{\leq k},\mathcal{I}^{>k}_{t_k})
B^{[k]}_{t_{k},s_{k}}
A(\mathcal{I}^{\leq k}_{s_k},i_{k+1},\mathcal{I}^{>k+1})
+ E_k(i_{\leq k+1},\mathcal{I}^{>k+1})}_{\mathcal{E}_{k+1}(i_{\leq k+1},\mathcal{I}^{>k+1})}.
\end{split} \end{equation} Since $\mathcal{I}^{\leq k},\mathcal{I}^{>k}$ are chosen by the maximum--volume principle, we have \begin{equation}\nonumber
|E_k| \leq (r_1+1)^{\phantom{2}} E_F, \quad |E_k| \leq (r_1+1)^2 E_C, \qquad
\left| \sum_{s_k} B^{[k]}_{t_k,s_k} A(\mathcal{I}^{\leq k}_{s_k},i_{k+1}\mathcal{I}^{>k+1}) \right| \leq 1, \end{equation} and it follows that \begin{equation}\nonumber
|\mathcal{E}_{k+1}| \leq r |\mathcal{E}_k| + |E_k| \leq r \frac{r^{k-1}-1}{r-1} |E_1| + |E_1| = \frac{r^k-1}{r-1} |E_1|. \end{equation} Substituting $k=d-1$ completes the proof. \end{proof}
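The price of nestedness becomes visible if the two quasioptimality coefficients are tabulated side by side for hypothetical $r$ and $\kappa$ (our choice, for illustration only):

```python
import math

r, kappa = 5, 2.0   # hypothetical rank and kappa

def coeff_balanced(d):
    """Coefficient of the balanced-tree bound (maxvol in full unfoldings)."""
    return (2 * r + kappa * r + 1) ** math.ceil(math.log2(d)) * (r + 1)

def coeff_nested(d):
    """Coefficient of the nested-set bound (unbalanced tree)."""
    return (r ** (d - 1) - 1) // (r - 1) * (r + 1)

for d in (2, 4, 8, 16):
    print(d, coeff_balanced(d), coeff_nested(d))
# The nested-set coefficient is smaller for very few dimensions but
# grows exponentially with d, while the balanced-tree coefficient grows
# only with log2(d), at the price of an intractable search space.
```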
\begin{lemma}\label{lemi}
If $\tilde A$ is given by~\eqref{eq:ii} and the interpolation sets are right--nested as shown by~\eqref{eq:emb}, then for all $k=1,\ldots,d-1$ it holds
$$
\tilde A(i_1,\ldots,i_k,\mathcal{I}^{>k}) = \sum_{\substack{s_1\ldots s_{k-1}\\t_1\ldots t_{k-1}}}
A(i_1,\mathcal{I}^{>1}_{t_1})
B^{[1]}_{t_1,s_1}
\ldots
B^{[k-1]}_{t_{k-1},s_{k-1}}
A(\mathcal{I}^{\leq k-1}_{s_{k-1}},i_k,\mathcal{I}^{>k}).
$$ \end{lemma} \begin{proof}
To prove the statement of the lemma for $k=d-1,$ in~\eqref{eq:ii} we restrict $i_{>d-1}$ to $\mathcal{I}^{>d-1}.$
The last core reduces to
$$
\left[A(\mathcal{I}^{\leq d-1},i_d)\right]_{i_d\in\mathcal{I}^{>d-1}} = \left[A(\mathcal{I}^{\leq d-1},\mathcal{I}^{>d-1})\right] = A_{d-1},
$$
and cancels out with the neighboring matrix $B^{[d-1]}.$
Suppose that the statement holds for $k=p+1,$ i.e.
$$
\tilde A(i_1,\ldots,i_{p+1},\mathcal{I}^{>p+1}) = \sum_{\substack{s_1\ldots s_{p}\\t_1\ldots t_{p}}}
A(i_1,\mathcal{I}^{>1}_{t_1})
B^{[1]}_{t_1,s_1}
\ldots
B^{[p]}_{t_{p},s_{p}}
A(\mathcal{I}^{\leq p}_{s_p},i_{p+1},\mathcal{I}^{>p+1}).
$$
Consider this equation for $i_{>p}\in\mathcal{I}} \def\J{\mathcal{J}} \def\K{\mathcal{K}^{>p},$ that by~\eqref{eq:emb} assumes $i_{>p+1}\in\mathcal{I}} \def\J{\mathcal{J}} \def\K{\mathcal{K}^{>p+1}.$
The rightmost core reduces as follows
$$
\left[A(\mathcal{I}} \def\J{\mathcal{J}} \def\K{\mathcal{K}^{\leq p},i_{p+1},\mathcal{I}} \def\J{\mathcal{J}} \def\K{\mathcal{K}^{>p+1})\right]_{i_{>p}\in\mathcal{I}} \def\J{\mathcal{J}} \def\K{\mathcal{K}^{>p}} = \left[A(\mathcal{I}} \def\J{\mathcal{J}} \def\K{\mathcal{K}^{\leq p},\mathcal{I}} \def\J{\mathcal{J}} \def\K{\mathcal{K}^{>p})\right] = A_p,
$$
and cancels out with $B^{[p]}.$
This proves the statement for $k=p,$ and the lemma follows by induction. \end{proof}
Since $\left[\mathcal{I}^{\leq k},\mathcal{I}^{>k}\right]=\mathop{\mathrm{maxvol}}\nolimits\left[A(i_{\leq k},i_{k+1}\mathcal{I}^{>k+1})\right],$ the quasioptimal estimate~\eqref{eq:qt} holds for the entries of this subtensor only. However, $A_k=\left[A(\mathcal{I}^{\leq k},\mathcal{I}^{>k})\right]$ is nonsingular and we can interpolate the whole unfolding $A^{\{k\}}$ by the cross based on $A_k$ with some (presumably worse) accuracy estimate \begin{equation}\label{eq:ihat}
A(i_1,\ldots,i_d) = \sum_{s_k,t_k} A(i_1,\ldots,i_k,\mathcal{I}^{>k}_{t_k})
B^{[k]}_{t_k,s_k}
A(\mathcal{I}^{\leq k}_{s_k},i_{k+1},\ldots,i_d)
+ \hat E_k(i_1,\ldots,i_d).
\end{equation}
The following theorem estimates the accuracy of the same interpolation $\tilde A$ as in the previous theorem w.r.t. the errors $|\hat E_k|$ in~\eqref{eq:ihat}.
\begin{theorem}\label{thm3}
Under the conditions of Thm.~\ref{thm2} assume additionally that the interpolation~\eqref{eq:ihat} provides a sufficiently small error $\hat\eps = \max_k |\hat E_k|/|A|.$ Then
\begin{equation}\label{eq3}
|A-\tilde A| \leq \frac{dr\hat\eps}{1-d\kappa r \hat\eps} |A|,
\end{equation} where $\kappa$ is defined in~\eqref{eq1}. By `sufficiently small' here we mean an $\hat\eps$ for which the denominator in~\eqref{eq3} stays away from zero. \end{theorem} \begin{proof} The interpolation sets~\eqref{eq:emb} have been constructed from right to left according to the dimension tree on Fig.~\ref{fig:t2}. In order to estimate the accuracy we separate the indices one by one with the interpolation~\eqref{eq:ihat}, proceeding from left to right. We begin with \begin{equation}\nonumber
A(i_1,\ldots,i_d) = \sum_{s_1,t_1}
A(i_1,\mathcal{I}^{>1}_{t_1})
B^{[1]}_{t_1,s_1}
A(\mathcal{I}^{\leq 1}_{s_1},i_{>1}) + \hat E_1(i_1,\ldots,i_d), \end{equation}
and $|\mathcal{E}_1|=|\hat E_1|\leq\hat\eps |A|.$ On the second step we write \begin{equation}\nonumber
A(i_1,\ldots,i_d) = \sum_{s_2,t_2}
A(i_1,i_2,\mathcal{I}^{>2}_{t_2})
B^{[2]}_{t_2,s_2}
A(\mathcal{I}^{\leq 2}_{s_2},i_{>2}) + \hat E_2(i_1,\ldots,i_d). \end{equation}
We restrict $i_1$ to $\mathcal{I}^{\leq 1}$ and substitute the result into the previous equation. \begin{equation}\nonumber
\begin{split}
A(i_1,\ldots,i_d) & = \sum_{\substack{s_1,s_2\\t_1,t_2}}
A(i_1,\mathcal{I}^{>1}_{t_1})
B^{[1]}_{t_1,s_1}
A(\mathcal{I}^{\leq1}_{s_1},i_2,\mathcal{I}^{>2}_{t_2})
B^{[2]}_{t_2,s_2}
A(\mathcal{I}^{\leq 2}_{s_2},i_{>2})
\\ & + \underbrace{\sum_{s_1,t_1}
A(i_1,\mathcal{I}^{>1}_{t_1})
B^{[1]}_{t_1,s_1}
\hat E_2(\mathcal{I}^{\leq1}_{s_1},i_{>1}) + \mathcal{E}_1(i_1,\ldots,i_d)}_{\mathcal{E}_2(i_1,\ldots,i_d)}.
\end{split} \end{equation} Since $\left[\mathcal{I}^{\leq1},\mathcal{I}^{>1}\right]=\mathop{\mathrm{maxvol}}\nolimits\left[A(i_1,i_2\mathcal{I}^{>2})\right],$ the submatrix $A_1$ dominates in the corresponding rows $
\left|\sum_{t_1} A(i_1,\mathcal{I}^{>1}_{t_1}) B^{[1]}_{t_1,s_1}\right|\leq1, $
and therefore $|\mathcal{E}_2|\leq (r+1)\hat\eps |A|.$
The third interpolation step reads as follows \begin{equation}\nonumber
A(i_1,\ldots,i_d) = \sum_{s_3,t_3}
A(i_{\leq2},i_3,\mathcal{I}^{>3}_{t_3})
B^{[3]}_{t_3,s_3}
A(\mathcal{I}^{\leq 3}_{s_3},i_{>3}) + \hat E_3(i_1,\ldots,i_d). \end{equation} Again, we restrict $i_{\leq 2}$ to $\mathcal{I}^{\leq 2}$ and substitute the result into the previous equation. \begin{equation}\nonumber
\begin{split}
A(i_1,\ldots,i_d) & = \sum_{\substack{s_1,s_2,s_3\\t_1,t_2,t_3}}
A(i_1,\mathcal{I}^{>1}_{t_1})
B^{[1]}_{t_1,s_1}
\ldots
B^{[3]}_{t_3,s_3}
A(\mathcal{I}^{\leq 3}_{s_3},i_{>3})
+ \mathcal{E}_3(i_1,\ldots,i_d),
\\
\mathcal{E}_3(i_1,\ldots,i_d) & = \sum_{\substack{s_1,s_2\\t_1,t_2}}
\underbrace{A(i_1,\mathcal{I}^{>1}_{t_1})
B^{[1]}_{t_1,s_1}
A(\mathcal{I}^{\leq1}_{s_1},i_2,\mathcal{I}^{>2}_{t_2})}_{\tilde A(i_1,i_2,\mathcal{I}^{>2}_{t_2})}
B^{[2]}_{t_2,s_2}
\hat E_3(\mathcal{I}^{\leq2}_{s_2},i_{>2}) + \mathcal{E}_2(i_1,\ldots,i_d).
\end{split} \end{equation}
We need to estimate the norm of the matrix in front of $\hat E_3,$ avoiding the exponential amplification of the coefficient. To do this, we replace the `piece' of the interpolation train with the subtensor of $A.$ Since $A=\tilde A+\mathcal{E},$ the same holds for the subtensors $A(i_1,i_2,\mathcal{I}^{>2})=\tilde A(i_1,i_2,\mathcal{I}^{>2})+\mathcal{E}(i_1,i_2,\mathcal{I}^{>2}),$ and using Lemma~\ref{lemi} we write $$
\sum_{s_1,t_1} A(i_1,\mathcal{I}^{>1}_{t_1})
B^{[1]}_{t_1,s_1}
A(\mathcal{I}^{\leq1}_{s_1},i_2,\mathcal{I}^{>2})
= A(i_1,i_2,\mathcal{I}^{>2}) - \mathcal{E}(i_1,i_2,\mathcal{I}^{>2}). $$ Substituting this into the previous equation, we use the domination of the maximum--volume submatrix $A_2$ to write $
\left| \sum_{t_2} A(i_{\leq 2},\mathcal{I}^{>2}_{t_2}) B^{[2]}_{t_2,s_2}\right| \leq 1 $ and obtain $$
|\mathcal{E}_3|\leq (2r+1) \hat\eps |A| + \kappa r \hat\eps |\mathcal{E}|. $$ In further interpolation steps the error accumulates similarly. Finally, $$
|\mathcal{E}| = |\mathcal{E}_{d-1}|
\leq dr \hat\eps |A| + d \kappa r \hat\eps |\mathcal{E}|, $$ which completes the proof. \end{proof}
Theorems~\ref{thm2} and~\ref{thm3} estimate the accuracy of the interpolation formula~\eqref{eq:ii} with the same interpolation sets. In Thm.~\ref{thm2} the quasioptimality result is proven with the coefficient $\mathcal{O}(r^d),$ which is much larger than the one in~\eqref{eq1}, cf. the coefficient~$\mathcal{O}(r^{2d})$ in~\cite{lars-htcross-2013}. Since the coefficient in~\eqref{eq2} grows exponentially with the dimension, it can hardly be used in practical estimates.
The result of Thm.~\ref{thm3} improves the estimate of Thm.~\ref{thm2} provided the errors $|\hat E_k|$ in~\eqref{eq:ihat} do not grow exponentially with $d.$
In general we cannot provide such an upper bound for $|\hat E_k|.$
The estimate~\eqref{eq3} is useful in special cases when the theoretical or numerical estimates available for the errors $|\hat E_k|$ are bounded or grow moderately with $d.$
Note that the nestedness of the interpolation sets is essential in the proof of Thm.~\ref{thm3}. The result of Thm.~\ref{thm3} cannot be generalized to the `fully' maximum--volume case described in Thm.~\ref{thm1}.
\section{Two--side nestedness and the interpolation property} \label{sec:emb2} In this section we consider the interpolation~\eqref{eq:ii} where both left and right interpolation sets are nested, i.e. for all valid $k$ it holds \begin{equation}\label{eq:emb2}
i_{>k} \in \mathcal{I}^{>k} \:\Rightarrow\: i_{>k+1} \in \mathcal{I}^{>k+1}, \qquad
i_{\leq k} \in \mathcal{I}^{\leq k} \:\Rightarrow\: i_{\leq k-1} \in \mathcal{I}^{\leq k-1}. \end{equation}
The naive way to construct such sets is to run the right--to--left interpolation pass explained in Sec.~\ref{sec:emb} and keep the right sets $\mathcal{I}^{>k}$ only. The left sets $\mathcal{I}^{\leq k}$ are computed by the left--to--right interpolation pass which separates the index $i_1,$ then $i_2,$ etc. We obtain \begin{equation}\nonumber
\left[\mathcal{J}^{\leq k},\mathcal{I}^{>k}\right] = \mathop{\mathrm{maxvol}}\nolimits\left[A(i_{\leq k},i_{k+1}\mathcal{I}^{>k+1})\right], \quad
\left[\mathcal{I}^{\leq k},\mathcal{J}^{>k}\right] = \mathop{\mathrm{maxvol}}\nolimits\left[A(\mathcal{I}^{\leq k-1}i_k,i_{>k})\right]. \end{equation} Note that $\left[A(\mathcal{I}^{\leq k},\mathcal{I}^{>k})\right]$ is not necessarily the maximum--volume submatrix in the subtensor $\left[A(i_{\leq k},i_{k+1}\mathcal{I}^{>k+1})\right],$ in $\left[A(\mathcal{I}^{\leq k-1}i_k,i_{>k})\right],$ or even in their intersection $\left[A(\mathcal{I}^{\leq k-1}i_k,i_{k+1}\mathcal{I}^{>k+1})\right].$ Therefore, we cannot use~\eqref{eq:qm} to estimate the accuracy of~\eqref{eq:ii} with these interpolation sets. Due to the prohibitive sizes, the computation of the maximum--volume submatrix is infeasible even when nestedness is imposed. To make the problem tractable, we should further reduce the search space --- the practical recipes will be discussed in the next section.
If both left and right interpolation sets are nested, Eq.~\eqref{eq:ii} is indeed the \emph{cross interpolation formula}, as shown by the following theorem. \begin{theorem}\label{thmi}
For a tensor $A,$ the approximation $\tilde A$ given by~\eqref{eq:ii} with indices $\mathcal{I}^{\leq k},\mathcal{I}^{>k}$ satisfying~\eqref{eq:emb2} is exact at the positions of all evaluated entries of the tensor,
\begin{equation}\label{eqi}
A(\mathcal{I}^{\leq k-1},i_k,\mathcal{I}^{>k}) = \tilde A(\mathcal{I}^{\leq k-1},i_k,\mathcal{I}^{>k}), \qquad k=1,\ldots,d.
\end{equation} \end{theorem} \begin{proof}
It is sufficient to repeat the arguments from the proof of Lemma~\ref{lemi} for the left and right interpolation sets. \end{proof}
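As a numerical sanity check, the structure of formula~\eqref{eq:ii} can be assembled directly for $d=3$. The following \textsc{NumPy} sketch (sizes, seeds, and variable names are ours; the index sets are drawn at random, so it illustrates the exact--rank case of Thm.~\ref{thm1} rather than nestedness) builds a tensor with exact TT--ranks $(r,r)$ and reconstructs it from three cross subtensors and the inverses of the intersection matrices $A_k$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 3

# a d = 3 tensor with exact TT-ranks (r, r), built from random cores
G1 = rng.standard_normal((n, r))
G2 = rng.standard_normal((r, n, r))
G3 = rng.standard_normal((r, n))
A = np.einsum('ia,ajb,bk->ijk', G1, G2, G3)

# unfoldings after modes 1 and 2
A1 = A.reshape(n, n * n)
A2 = A.reshape(n * n, n)

# index sets: I_le1, I_gt2 index single modes; I_gt1, I_le2 index
# the pairs (i2, i3) and (i1, i2) as flat multi-indices
I_le1 = rng.choice(n, r, replace=False)
I_gt1 = rng.choice(n * n, r, replace=False)
I_le2 = rng.choice(n * n, r, replace=False)
I_gt2 = rng.choice(n, r, replace=False)

# intersection matrices A_k and their inverses B^[k]
B1 = np.linalg.inv(A1[np.ix_(I_le1, I_gt1)])
B2 = np.linalg.inv(A2[np.ix_(I_le2, I_gt2)])

# interpolation cores: A(i1, I>1), A(I<=1, i2, I>2), A(I<=2, i3)
C1 = A1[:, I_gt1]                           # n x r
C2 = A[np.ix_(I_le1, np.arange(n), I_gt2)]  # r x n x r
C3 = A2[I_le2, :]                           # r x n

A_tilde = np.einsum('it,ts,sja,ab,bk->ijk', C1, B1, C2, B2, C3)
print(np.max(np.abs(A - A_tilde)))  # roundoff-level for exact TT-ranks
```

Since the tensor has exact TT--ranks and the randomly drawn $A_k$ are (generically) nonsingular, the reconstruction agrees with $A$ up to roundoff.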
An $m\times n$ matrix $A$ of rank $r$ is defined by $(m+n)r-r^2$ parameters, e.g. by the $mr+nr+r$ elements of the SVD decomposition $A=USV^*$ minus the $r(r+1)$ normalization constraints $U^* U=I,$ $V^* V=I.$ The cross interpolation formula~\eqref{eq:im} recovers a rank--$r$ matrix from $(m+n)r-r^2$ entries, if the submatrix $\left[A(\mathcal{I},\mathcal{J})\right]$ is nonsingular. If $\mathop{\mathrm{rank}}\nolimits A>r,$ formula~\eqref{eq:im} provides the approximation $\tilde A,$ which is exact on $(m+n)r-r^2$ positions of the matrix. This fact is generalized to the tensor case by the following theorem. \begin{theorem}
A tensor $A$ with mode sizes $n_1,\ldots,n_d$ and TT--ranks $r_1,\ldots,r_{d-1}$ is defined by
$$
s=\sum_{k=1}^d r_{k-1} n_k r_k - \sum_{k=1}^{d-1} r_k^2
$$
parameters.
If the left and right interpolation sets satisfy~\eqref{eq:emb2}, and the matrices $A_k,$ $k=1,\ldots,d-1,$ are nonsingular, formula~\eqref{eq:ii} recovers $A$ from exactly $s$ entries.
If a tensor $A$ is not given by~\eqref{eq:tt} exactly, formula~\eqref{eq:ii} interpolates it on at least $s$ positions. \end{theorem} \begin{proof}
The first statement is proven in~\cite[Prop. A.3]{ushmaev-tt-2013}.
Taking into account the result of Thm.~\ref{thmi}, the second and the third statements require calculating the total number of tensor entries in all subtensors in~\eqref{eqi}.
Each block $\left[A(\mathcal{I}^{\leq k-1},i_k,\mathcal{I}^{>k})\right]$ consists of $r_{k-1}n_kr_k$ elements of a tensor, but some entries contribute to more than one block.
For example, if~\eqref{eq:emb2} holds, subtensors $\left[A(i_1,\mathcal{I}^{>1})\right]$ and $\left[A(\mathcal{I}^{\leq 1},i_2,\mathcal{I}^{>2})\right]$ intersect by the submatrix $A_1=\left[A(\mathcal{I}^{\leq 1},\mathcal{I}^{>1})\right],$ which has $r_1^2$ elements.
Similarly, $\left[A(\mathcal{I}^{\leq k-1},i_k,\mathcal{I}^{>k})\right]$ and $\left[A(\mathcal{I}^{\leq k},i_{k+1},\mathcal{I}^{>k+1})\right]$ have $r_k^2$ common elements in the submatrix $A_k.$
The common elements of $\left[A(\mathcal{I}^{\leq k},i_{k+1},\mathcal{I}^{>k+1})\right]$ and $\left[A(\mathcal{I}^{\leq p-1},i_p,\mathcal{I}^{>p})\right]$ are described by the following conditions
$$
i_{\leq p-1}\in\mathcal{I}^{\leq p-1}, \quad i_{\leq k} \in \mathcal{I}^{\leq k}, \qquad i_{>p}\in\mathcal{I}^{>p}, \quad i_{>k+1}\in\mathcal{I}^{>k+1}.
$$
If $p<k,$ they are reduced by~\eqref{eq:emb2} to $i_{\leq k} \in \mathcal{I}^{\leq k}$ and $i_{>p}\in\mathcal{I}^{>p},$ and it holds
$$
\{i_{\leq k} \in \mathcal{I}^{\leq k}, i_{>p}\in\mathcal{I}^{>p}\} \subset
\{i_{\leq k} \in \mathcal{I}^{\leq k}, i_{>k}\in\mathcal{I}^{>k}\}.
$$
Therefore, all common entries of $\left[A(\mathcal{I}^{\leq k},i_{k+1},\mathcal{I}^{>k+1})\right]$ and $\left[A(\mathcal{I}^{\leq p-1},i_p,\mathcal{I}^{>p})\right]$ belong to $A_k$ for $p=1,\ldots,k-1.$
The total number of entries which belong to more than one block equals $\sum_{k=1}^{d-1} r_k^2,$ which completes the proof. \end{proof}
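The counting argument can be verified directly for $d=3$ with small, explicitly nested index sets (a sketch; the toy sets below are ours and satisfy~\eqref{eq:emb2}): the union of the three evaluated blocks contains exactly $s$ distinct positions.

```python
n, r1, r2 = 4, 2, 2

# nested index sets for d = 3
I_le1 = [0, 1]              # values of i1
I_le2 = [(0, 0), (1, 1)]    # pairs (i1, i2) with i1 in I_le1
I_gt2 = [0, 1]              # values of i3
I_gt1 = [(0, 0), (1, 1)]    # pairs (i2, i3) with i3 in I_gt2

# the three blocks of evaluated entries A(I<=k-1, i_k, I>k)
block1 = {(i, j, k) for i in range(n) for (j, k) in I_gt1}
block2 = {(a, j, c) for a in I_le1 for j in range(n) for c in I_gt2}
block3 = {(a, b, k) for (a, b) in I_le2 for k in range(n)}

evaluated = block1 | block2 | block3
s = n * r1 + r1 * n * r2 + r2 * n - r1**2 - r2**2
print(len(evaluated), s)  # both equal 24
```

The pairwise overlaps are exactly the submatrices $A_1$ and $A_2$ ($r_1^2$ and $r_2^2$ positions), in agreement with the proof.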
\section{Interpolation algorithms for matrices and tensors} \label{sec:alg}
We start this section with a short overview of the cross interpolation algorithms for matrices. The idea of reconstruction and approximation of a matrix from several columns and rows by the~\emph{skeleton decomposition}~\eqref{eq:im} or the \emph{pseudoskeleton decomposition} $\tilde A=CGR,$ $C=\left[A(i,\mathcal{J})\right],$ $R=\left[A(\mathcal{I},j)\right],$ has been suggested by Goreinov and Tyrtyshnikov~\cite{gt-psa-1995}. In~\cite{gtz-psa-1997} the accuracy of the pseudoskeleton approximation has been studied and it has been pointed out that a good cross should intersect in a well--bounded submatrix. The connection with the maximum--volume submatrix has been mentioned in~\cite{gtz-psa-1997}, and the maximum--volume principle has been presented in more detail in~\cite{gt-maxvol-2001}.
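The greedy row--swapping idea behind the maximum--volume principle admits a very short prototype (a sketch of ours, not the algorithm of the cited works verbatim): starting from any nonsingular $r\times r$ submatrix of a tall matrix, swap in the row holding the largest entry of the coefficient matrix $A\,\hat A^{-1}$ until all entries are bounded by $1+\delta$ in modulus.

```python
import numpy as np

def maxvol(A, delta=1e-2, max_iter=1000):
    """Greedy search for r rows of a tall n x r matrix A whose
    submatrix has (locally) maximal volume.  Each swap multiplies
    the volume by at least 1 + delta, so the loop terminates."""
    n, r = A.shape
    rows = np.arange(r)                    # initial guess: first r rows
    for _ in range(max_iter):
        B = A @ np.linalg.inv(A[rows])     # coefficients, B[rows] = identity
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if np.abs(B[i, j]) <= 1.0 + delta:
            break                          # dominance achieved
        rows[j] = i                        # swap: volume grows by |B[i, j]|
    return rows

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 5))
rows = maxvol(A)
print(np.max(np.abs(A @ np.linalg.inv(A[rows]))))  # <= 1 + delta
```

A row already selected can never be chosen for a swap, since its coefficient row is a unit vector; the procedure therefore keeps the $r$ rows distinct.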
\begin{algorithm}[t]
\caption{Greedy cross interpolation algorithm for tensor trains} \label{algg}
\begin{algorithmic}[1]
\REQUIRE Function to compute entries of a tensor $A=\left[A(i_1,\ldots,i_d)\right]$
\ENSURE Cross interpolation~\eqref{eq:ii} with the nested interpolation sets~\eqref{eq:emb2}
\STATE $\mathcal{I}^{\leq k}=\emptyset,$ $\mathcal{I}^{>k}=\emptyset,$ $k=1,\ldots,d,$ $\tilde A=0,$ $E=A$
\WHILE{$|A-\tilde A|$ is not sufficiently small}
\STATE Find a pivot $i^\star=(i^{\star}_1,\ldots,i^{\star}_d)$ s.t. $|E(i^\star_1,\ldots,i^\star_d)| \simeq |A-\tilde A|$
\STATE Add $i^\star_{\leq k}$ to $\mathcal{I}^{\leq k},$ and $i^\star_{>k}$ to $\mathcal{I}^{>k},$ $k=1,\ldots,d-1$
\STATE Update the interpolation $\tilde A$ by~\eqref{eq:ii}
\ENDWHILE
\end{algorithmic} \end{algorithm}
The search for the maximum--volume submatrix \emph{per se} is an NP--hard problem~\cite{bartholdi-1982}. For practical computations, it is necessary to find a \emph{sufficiently good} submatrix reasonably fast. The alternating direction algorithm has been proposed in~\cite{tee-cross-2000}, which adaptively increases the size of the interpolation cross following the maximum--volume principle at each step, and computes the approximation of a matrix in linear time w.r.t. the size. The greedy algorithm of such kind, equivalent to the Gaussian elimination with partial pivoting, was then suggested by Bebendorf~\cite{bebe-2000}. Due to its particular simplicity, it has become widely known as the \emph{adaptive cross approximation} (ACA). In practical computations, ACA and similar methods with minimal information are liable to breakdowns, i.e. they may quit when a good approximation is not yet obtained. A cheap remedy proposed in~\cite{sav-2006} is to check the accuracy on the random set of entries and restart the algorithm if necessary.\footnote{Published in English later as~\cite[Alg. 3]{ost-chem-2010}} Another well--known sampling method is the $CUR$ algorithm of Mahoney et al~\cite{mahoney-cur-2006}, which is the pseudoskeleton $CGR$ decomposition where positions of the rows and columns are chosen randomly.
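The greedy cross idea for matrices can be sketched as follows (for clarity we pivot on the full residual, i.e. Gaussian elimination with full pivoting; the actual ACA locates pivots by the much cheaper partial pivoting, and all names below are ours):

```python
import numpy as np

def greedy_cross(A, max_rank, tol=1e-12):
    """Rank-revealing cross approximation: repeatedly subtract the
    rank-1 cross through the largest residual entry."""
    R = np.array(A, dtype=float, copy=True)   # residual
    A_tilde = np.zeros_like(R)
    for _ in range(max_rank):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if np.abs(R[i, j]) < tol:             # rank exhausted
            break
        cross = np.outer(R[:, j], R[i, :]) / R[i, j]
        A_tilde += cross                      # add the cross through (i, j)
        R -= cross                            # eliminate row i and column j
    return A_tilde

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 60))  # rank 4
A_tilde = greedy_cross(A, max_rank=4)
print(np.max(np.abs(A - A_tilde)))  # roundoff-level: r steps suffice
```

Each step with a nonzero pivot reduces the rank of the residual by exactly one, so an exact rank--$r$ matrix is recovered in $r$ steps; for higher ranks the procedure returns an approximation.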
The accuracy of the maximum--volume cross approximation is estimated for any matrix~\cite{gt-maxvol-2001,schneider-cross2d-2010,gt-skel-2011}. Algorithms which use a few elements (e.g. ACA) are \emph{heuristic} and construct the approximation which can be arbitrarily bad for other matrix elements. The accuracy of such algorithms can be estimated in special cases, e.g. for matrices generated by asymptotically smooth functions on quasi--uniform grids, see~\cite{bebe-2000,tee-kron-2004}, cf.~\cite{tee-tensor-2003} in many dimensions.
The existing cross interpolation algorithms for tensors can be classified similarly. The skeleton decomposition is generalized to the tensor case in~\cite{ot-ttcross-2010} by formula~\eqref{eq:ii}, where the submatrices $A_k=\left[A(\mathcal{I}^{\leq k},\mathcal{I}^{>k})\right]$ play the same role as $\left[A(\mathcal{I},\mathcal{J})\right]$ in~\eqref{eq:im}. The `existence result' is generalized from the matrix case~\cite{gt-maxvol-2001,schneider-cross2d-2010,gt-skel-2011} to the TT case by Thm.~\ref{thm1}. The algorithm proposed in~\cite{ot-ttcross-2010} approximates the maximum--volume positions in the ALS fashion, similarly to the one from~\cite{tee-cross-2000}.
A greedy cross interpolation algorithm for the TT format can be suggested similarly to the matrix case, see e.g. Alg.~\ref{algg}. Similarly to the ACA, Alg.~\ref{algg} relies on the interpolation property for the tensor trains, established by Thm.~\ref{thmi}. On each step Alg.~\ref{algg} searches for a pivot $i^\star$ where the error of the current approximation is (quasi)maximum in modulus. Then it adds the indices of $i^\star=\overline{i_{\leq k}^\star i_{>k}^\star}$ to all subsets $\mathcal{I}^{\leq k}$ and $\mathcal{I}^{>k},$ $k=1,\ldots,d-1,$ to maintain the two--side nestedness~\eqref{eq:emb2}. The updated interpolation is exact on all \emph{lines} $(i^{\star}_1,\ldots,i^\star_{k-1},i_k,i^\star_{k+1},\ldots,i_d^\star),$ $i_k=1,\ldots,n_k,$ $k=1,\ldots,d.$
The full pivoting in higher dimensions is impossible due to the curse of dimensionality, and we need cheaper alternatives to find a new pivot and estimate the accuracy for the stopping criterion. Following the tensor--CUR algorithm of Mahoney et al~\cite{mahoney-simax-2008} we can choose indices randomly. Another approach is to choose the maximum in modulus element of the current residual among a randomly sampled set. The third option is to choose the pivot from a \emph{restricted set} similarly to the ACA approach, check the accuracy of the approximation over a random set of entries, and restart if necessary, see~\cite[Alg. 3]{ost-chem-2010}.
The restricted pivoting set can naturally arise from the \emph{locality} requirement. By this we mean that with a new pivot we should modify only a few interpolation sets $\mathcal{I}^{\leq k}$ and $\mathcal{I}^{>k}$ and increase only a few TT--ranks of the approximation, not all of them. To put it differently, a pivoting algorithm should update only a few TT--cores of~\eqref{eq:ii} at each step, similarly to the ALS and DMRG algorithms introduced in quantum physics.
Following the DMRG algorithm, we choose a new pivot $i^\star$ in the DMRG \emph{supercore} $A'=\left[A(\mathcal{I}^{\leq k-1},i_k, i_{k+1},\mathcal{I}^{>k+1})\right].$ This choice provides $i^\star_{\leq k-1}\in\mathcal{I}^{\leq k-1}$ and by~\eqref{eq:emb2} $i^\star_{\leq p}\in\mathcal{I}^{\leq p}$ for $p\leq k-1.$ Similarly, $i^\star_{>k+1}\in\mathcal{I}^{>k+1}$ and by nestedness $i^\star_{>p}\in\mathcal{I}^{>p}$ for $p\geq k+1.$ When we add $i^\star_{\leq k}$ to $\mathcal{I}^{\leq k}$ and $i^\star_{>k}$ to $\mathcal{I}^{>k},$ the two--side nestedness~\eqref{eq:emb2} is preserved \emph{ipso facto.}
\begin{algorithm}[t]
\caption{Greedy restricted cross interpolation algorithm for tensor trains} \label{algi}
\begin{algorithmic}[1]
\REQUIRE Function to compute entries of a tensor $A=\left[A(i_1,\ldots,i_d)\right]$
\ENSURE Cross interpolation~\eqref{eq:ii} with the nested interpolation sets~\eqref{eq:emb2}
\STATE Choose $\mathcal{I}^{\leq k},$ $\mathcal{I}^{>k},$ $k=1,\ldots,d,$ which satisfy~\eqref{eq:emb2}, and compute $\tilde A$ by~\eqref{eq:ii}
\WHILE{stopping criterion is not satisfied}
\FOR[Left--to--right half--sweep]{$k=1,\ldots,d-1$}
\STATE Apply the cross interpolation (e.g.~\cite[Alg. 3]{ost-chem-2010}) to the DMRG supercore matrix $\left[A(\mathcal{I}^{\leq k-1}i_k, i_{k+1}\mathcal{I}^{>k+1})\right],$
using sets $\mathcal{I}^{\leq k},\mathcal{I}^{>k}$ as the initial guess, and compute~\eqref{eq:dmrgs}
with $\mathcal{I}^{\leq k} \subset \mathcal{J}^{\leq k}$ and $\mathcal{I}^{> k} \subset \mathcal{J}^{> k}$
\STATE Substitute $\mathcal{I}^{\leq k}$ and $\mathcal{I}^{>k}$ by the expanded sets $\mathcal{J}^{\leq k}$ and $\mathcal{J}^{>k}$
\ENDFOR
\STATE Perform the right--to--left half--sweep in the same way
\ENDWHILE
\end{algorithmic} \end{algorithm}
The greedy algorithm with pivoting in $A'$ can be implemented as a simple modification of the cross interpolation algorithm TT--RC from~\cite{so-dmrgi-2011proc}. The TT--RC algorithm is of the DMRG type, which means that it updates two neighboring TT--cores at a step, computing the matrix $A'$ in full. The proposed Alg.~\ref{algi} substitutes this step with the cross interpolation and approximates \begin{equation}\label{eq:dmrgs}
A(\mathcal{I}^{\leq k-1}i_k, i_{k+1}\mathcal{I}^{>k+1})
\approx \sum_{s_k,t_k} A(\mathcal{I}^{\leq k-1}i_k, \mathcal{J}_{t_k}^{>k})
\left[A(\mathcal{J}_{s_k}^{\leq k},\mathcal{J}_{t_k}^{>k})\right]^{-1}
A(\mathcal{J}_{s_k}^{\leq k}, i_{k+1}\mathcal{I}^{>k+1}), \end{equation} where $\mathcal{J}^{\leq k}$ and $\mathcal{J}^{>k}$ are computed by the matrix cross interpolation algorithm, s.t. \begin{equation}\nonumber
\left[\mathcal{J}^{\leq k},\mathcal{J}^{>k}\right] \simeq \arg\max_{\substack{\mathcal{I}^{\leq k-1} \\ \mathcal{I}^{>k+1}}} \mathop{\mathrm{vol}}\nolimits\left[A(\mathcal{I}^{\leq k-1}i_k,i_{k+1}\mathcal{I}^{>k+1})\right]. \end{equation} The resulting algorithm requires $\mathcal{O}(dnr^2)$ evaluations of tensor elements and $\mathcal{O}(dnr^3)$ additional operations, i.e. it scales linearly in the mode size and very moderately in the TT--rank. The algorithm is rank--revealing, i.e. it will not increase the TT--ranks of the approximation~\eqref{eq:ii} over the TT--ranks of the given tensor.
Greedy algorithms are not always satisfactory in practice, since the positions chosen at early steps may approximate the maximum--volume submatrices inaccurately and should be removed once the interpolation sets are sufficiently large. Only a slight modification of Alg.~\ref{algi} is required to develop a non--greedy version.
\section{Numerical experiments} \label{sec:num} The numerical results have been obtained using the Iridis3 High Performance Computing Facility at the University of Southampton.\footnote{Iridis3 is based on Intel $2.4$ GHz processors, for more specifications see \href{http://cmg.soton.ac.uk/iridis}{cmg.soton.ac.uk/iridis}.} Cross interpolation and auxiliary tensor train subroutines are written in \textsc{Fortran$90$} by the author. The code was compiled using the Intel Composer and linked with \textsc{Lapack}/\textsc{Blas} subroutines provided with the {MKL} library.
In the experiments we use a very simple version of Alg.~\ref{algi}. On each step (Line~4) we improve the current approximation by adding only one cross to $[\mathcal{I}^{\leq k},\mathcal{I}^{>k}]$. The position of the new cross is computed as follows. First, a random sampling is performed on $r_{k-1}n_k+n_{k+1}r_{k+1}$ entries of the matrix $\left[A(\mathcal{I}^{\leq k-1}i_k, i_{k+1}\mathcal{I}^{>k+1})\right],$ and an element is chosen where the error of the current interpolation is maximum in modulus. Then the residual for the row or column (for the left and right half--sweep, resp.) which contains this element is evaluated, and the pivot $i^\star$ is chosen among its entries. If the pivot is nonzero up to machine precision, the obtained cross is added to the interpolation sets $\mathcal{I}^{\leq k},\mathcal{I}^{>k}.$ If the pivot is zero to machine precision, the rank $r_k$ is not increased.
The interpolation sets are always initialized by the index $(1,1,\ldots,1).$
\subsection{The quasioptimality coefficient} \begin{figure}
\caption{Distribution of $\log_2 (|A-\tilde A|/|A-X|)$ for randomly generated tensors $A$ given by~\eqref{eq:rnd} and the cross interpolation $\tilde A,$ computed by Alg.~\ref{algi} (dashed lines), and additionally improved by the TT-RC algorithm from~\cite{so-dmrgi-2011proc} (solid lines). Dimension $d=16,$ mode size $n=2,$ noise level $\mu=10^{-7},$ rank $r=5,$ unless another value is shown on the graph.}
\label{fig:q}
\end{figure}
For a number of randomized experiments we measure the ratio between the accuracy of the approximation in the TT format~\eqref{eq:tt} and the cross interpolation~\eqref{eq:ii} with the same TT--ranks. Given dimension $d,$ mode size $n=2,$ mode ranks $r$ and \emph{noise level} $\mu,$ we consider the tensor \begin{equation}\label{eq:rnd}
A = X + \mu R, \qquad |X|=1, \quad |R|=1, \end{equation} where $R$ is random and $X$ is given by the TT format~\eqref{eq:tt} with TT--ranks $r$ and random TT--cores. All random elements are independently and uniformly distributed on the unit interval, and we seed them using the internal pseudorandom generator provided with the compiler.
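Test tensors of the form~\eqref{eq:rnd} can be reproduced, e.g., as follows (a \textsc{NumPy} sketch of ours, using the max--norm for $|\cdot|$ and a fixed seed in place of the compiler's internal generator):

```python
import numpy as np

def random_tt_full(d, n, r, rng):
    """Full array of a tensor with random TT-cores and TT-ranks <= r
    (feasible only for small n**d)."""
    T = rng.random((1, n, r))                # first core
    for k in range(1, d):
        rk = r if k < d - 1 else 1           # last core closes the train
        G = rng.random((r, n, rk))
        T = np.tensordot(T, G, axes=(-1, 0)) # contract the rank index
    return T.reshape((n,) * d)

rng = np.random.default_rng(42)
d, n, r, mu = 8, 2, 3, 1e-7
X = random_tt_full(d, n, r, rng)
X /= np.max(np.abs(X))                       # |X| = 1
Rn = rng.random((n,) * d)
Rn /= np.max(np.abs(Rn))                     # |R| = 1
A = X + mu * Rn

# the middle unfolding of X indeed has rank <= r
print(np.linalg.matrix_rank(X.reshape(n**4, n**4)))
```

By construction every unfolding of $X$ has rank at most $r$, so $A$ is an $\mathcal{O}(\mu)$ perturbation of a TT--rank--$r$ tensor.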
We apply Alg.~\ref{algi} to compute the initial cross interpolation $\tilde A_{\mathrm{greedy}}$ with TT--ranks not larger than $r.$ Then we run $10$ additional sweeps of the DMRG--like TT--RC algorithm~\cite{so-dmrgi-2011proc} to improve the positions of the interpolation crosses and obtain $\tilde A_{\mathrm{DMRG}}.$ Density distributions of the logarithm of the quasioptimality coefficient for $\tilde A_{\mathrm{greedy}}$ and $\tilde A_{\mathrm{DMRG}}$ are shown on Fig.~\ref{fig:q}. The number of tests for each density distribution curve is at least $2^{20}.$
We note that for the randomly generated tensors, the quasioptimality coefficient is not very large. For example, the top left graph on Fig.~\ref{fig:q} corresponds to $d=16$ and $r=5.$ The estimate of Thm.~\ref{thm1} provides the upper bound for the quasioptimality coefficient $(2r+\kappa r+1)^{\lceil \log_2d \rceil} (r+1)^2 \geq 16^4 6^2 \geq 2^{21}.$ The computed value is $$
\log_2(|A-\tilde A_{\mathrm{greedy}}|/|A-X|)=3.2\pm2.1, \qquad
\log_2(|A-\tilde A_{\mathrm{DMRG}}|/|A-X|)=1.3\pm0.6. $$
Therefore, for the considered experiment the upper bound $2^{21}$ provided by Thm.~\ref{thm1} overestimates the actual value by a factor $\geq 2^{19.5}.$
It is important how the accuracy of the interpolation depends on the dimension $d$ and the TT--rank $r.$ The results of these experiments are shown in the right column of Fig.~\ref{fig:q}. We see that the coefficient grows with the rank and dimension slower than the upper bound~\eqref{eq1}. For example, for $r=32$ the upper bound is $\gtrsim 2^{58},$ assuming $\kappa=1.$ The actual coefficient computed in the numerical experiments is of the order $2^3$ for $\tilde A_{\mathrm{greedy}}$ and $2^4$ for $\tilde A_{\mathrm{DMRG}}.$ Note that in this case the interpolation improved by the DMRG--like algorithm has worse accuracy than the interpolation returned by~Alg.~\ref{algi}. This may be explained by the fact that the TT--RC algorithm has a truncation step which reduces TT--ranks and introduces a perturbation to the tensor. The double--side nestedness~\eqref{eq:emb2} is not preserved during this step, which may result in the loss of the interpolation property and deteriorate the accuracy. This emphasizes the importance of the interpolation property given by Thm.~\ref{thmi}.
Finally, we analyze how the accuracy of the cross interpolation depends on the noise level $\mu.$ The bottom left graph of Fig.~\ref{fig:q} shows that this parameter does not change the distribution significantly. When $\mu\leq10^{-5},$ a further reduction of the noise level has no effect on the distribution of the quasioptimality coefficient.
We summarize that for random tensors the accuracy of the computed cross interpolation is much better than the upper bound in~\eqref{eq1} suggests.
\subsection{Speed and accuracy of the greedy interpolation algorithm}
\begin{table}[p]
\def\eee#1{{\cdot}10^{-#1}}
\def\p#1#2#3{\begin{tabular}{c} $#1$ \\ $#2$ \\ $#3$ \end{tabular}}
\begin{center}
\begin{tabular}{c|ccc|ccc}
& \multicolumn{3}{c|}{$d=2^4$} & \multicolumn{3}{c}{$d=2^5$} \\
$r$ & $n=2^5$ & $n=2^7$ & $n=2^9$ & $n=2^5$ & $n=2^7$ & $n=2^9$ \\ \hline
$ 6$ & \p{8\eee3 }{7\eee3 }{0.007} & \p{1\eee1 }{1\eee1 }{0.035} & & \p{1\eee1 }{6\eee2 }{0.021} & & \\ \hline
$12$ & \p{2\eee5 }{3\eee6 }{0.033} & \p{7\eee4 }{2\eee4 }{0.14} & \p{2\eee2 }{7\eee3 }{0.59} & \p{6\eee5 }{9\eee6 }{0.097} & \p{3\eee3 }{1\eee3 }{0.37} & \p{8\eee2 }{4\eee2 }{1.56} \\ \hline
$18$ & \p{8\eee9 }{2\eee9 }{0.081} & \p{2\eee6 }{3\eee6 }{0.34} & \p{7\eee5 }{1\eee5 }{1.45} & \p{2\eee8 }{5\eee9 }{0.23} & \p{2\eee5 }{2\eee6 }{0.90} & \p{1\eee4 }{5\eee4 }{3.97} \\ \hline
$24$ & \p{2\eee12}{1\eee12}{0.156} & \p{1\eee8 }{5\eee9 }{0.65} & \p{1\eee6 }{7\eee7 }{2.88} & \p{5\eee12}{2\eee12}{0.47} & \p{5\eee8 }{1\eee8 }{1.82} & \p{5\eee6 }{1\eee6 }{8.06} \\ \hline
$30$ & & \p{3\eee11}{2\eee11}{1.05} & \p{2\eee8 }{7\eee9 }{4.93} & \p{1\eee12}{2\eee13}{0.60} & \p{9\eee11}{3\eee11}{2.80} & \p{3\eee8 }{6\eee9 }{12.6} \\ \hline
$36$ & & \p{1\eee12}{8\eee13}{1.53} & \p{3\eee10}{1\eee10}{8.09} & & \p{3\eee12}{1\eee12}{4.11} & \p{6\eee10}{1\eee10}{20.8} \\ \multicolumn{7}{c}{} \\
& \multicolumn{3}{c|}{$d=2^6$} & \multicolumn{3}{c}{$d=2^7$} \\
$r$ & $n=2^5$ & $n=2^7$ & $n=2^9$ & $n=2^5$ & $n=2^7$ & $n=2^9$ \\ \hline
$ 9$ & \p{6\eee3 }{5\eee4 }{0.15} & \p{2\eee1 }{1\eee1 }{0.70} & & \p{4\eee3 }{5\eee4 }{0.50} & & \\ \hline
$15$ & \p{2\eee5 }{1\eee6 }{0.45} & \p{3\eee3 }{2\eee4 }{1.85} & \p{7\eee2 }{3\eee2 }{7.33} & \p{3\eee6 }{4\eee7 }{1.45} & \p{2\eee3 }{6\eee4 }{5.85} & \p{1\eee1 }{6\eee2 }{23.6} \\ \hline
$21$ & \p{5\eee10}{1\eee10}{1.01} & \p{3\eee6 }{2\eee7 }{3.70} & \p{2\eee4 }{4\eee5 }{15.1} & \p{1\eee9 }{1\eee10}{2.90} & \p{1\eee6 }{2\eee7 }{11.8} & \p{1\eee3 }{3\eee4 }{48.0} \\ \hline
$27$ & \p{3\eee12}{7\eee13}{1.45} & \p{3\eee9 }{3\eee10}{6.27} & \p{2\eee6 }{4\eee7 }{26.7} & \p{1\eee11}{1\eee12}{4.56} & \p{5\eee9 }{6\eee10}{19.9} & \p{4\eee6 }{5\eee7 }{82.8} \\ \hline
$33$ & & \p{1\eee11}{3\eee12}{9.97} & \p{4\eee9 }{1\eee9 }{42.9} & & \p{3\eee11}{5\eee12}{29.7} & \p{1\eee8 }{2\eee9 }{128} \\ \hline
$39$ & & \p{7\eee12}{2\eee12}{13.1} & \p{2\eee10}{2\eee11}{64.2} & & & \p{3\eee10}{6\eee11}{186} \\ \hline
\end{tabular}
\end{center}
\caption{Accuracy and the CPU time for Alg.~\ref{algi} applied for the interpolation of tensor~\eqref{eq:1r} with dimension $d,$ mode size $n$ and TT--ranks $r.$
Each cell contains the estimates of the relative error in the Chebyshev norm $|A-\tilde A|_{\sim}/|A|_{\sim}$ and
in the Frobenius norm $\|A-\tilde A\|_{\sim}/\|A\|_{\sim},$
and the computation time in seconds.
}
\label{tab} \end{table}
We apply Alg.~\ref{algi} to the tensor $A=[A(i_1,\ldots,i_d)]$ with elements \begin{equation}\label{eq:1r}
A(i_1,\ldots,i_d) = {1}/{\sqrt{i_1^2 + \ldots + i_d^2}}. \end{equation}
This example is the standard test considered in e.g.~\cite{ost-tucker-2008,ot-ttcross-2010,lars-htcross-2013}. We test the algorithm for large mode sizes $n$ and dimensions $d,$ where the exact evaluation of the accuracy $|A-\tilde A|$ is impossible due to the prohibitively large number of entries. We replace the exact evaluation with estimates computed on a large number of randomly distributed elements, as follows $$
| A |_{\sim} = \max_{i \in \mathcal{I}} |A(i_1,\ldots,i_d)|, \qquad \|A\|_{\sim}^2 = \frac{n_1\ldots n_d}{\#\mathcal{I}}\sum_{i \in \mathcal{I}} |A(i_1,\ldots,i_d)|^2, $$ where indices $i=(i_1,\ldots,i_d) \in \mathcal{I}$ are chosen randomly, and $\#\mathcal{I}$ denotes the number of elements in the random set $\mathcal{I}.$ In our tests $\#\mathcal{I} \geq 2^{30}.$
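These sampled norm estimates are straightforward to implement. The sketch below is our own helper (the `entry_fn` interface is a hypothetical convention, not from the paper's code) and mirrors the formulas above, including the $n_1\cdots n_d/\#\mathcal{I}$ rescaling of the Frobenius estimate.

```python
import numpy as np

def sampled_norms(entry_fn, dims, n_samples=10000, seed=0):
    # Estimate the Chebyshev norm |A|_~ (max over the sample) and the
    # Frobenius norm ||A||_~ (rescaled sum of squares) from randomly
    # chosen entries, following the formulas in the text.
    rng = np.random.default_rng(seed)
    idx = np.stack([rng.integers(1, n + 1, size=n_samples) for n in dims], axis=1)
    vals = np.array([entry_fn(tuple(int(k) for k in i)) for i in idx])
    cheb = np.abs(vals).max()
    frob = np.sqrt(np.prod(dims, dtype=float) / n_samples * np.sum(vals ** 2))
    return cheb, frob

# the test tensor above: A(i_1,...,i_d) = 1 / sqrt(i_1^2 + ... + i_d^2)
entry = lambda i: 1.0 / np.sqrt(sum(k * k for k in i))
cheb, frob = sampled_norms(entry, dims=(32,) * 4, n_samples=5000)
```

For this tensor the largest entry is $A(1,\ldots,1)=1/\sqrt{d}$, so the sampled Chebyshev estimate is a lower bound on that value.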
The results are collected in Tab.~\ref{tab}. The scaling is clearly linear w.r.t. the mode size $n.$ The scaling in dimension is between $\mathcal{O}(d)$ and $\mathcal{O}(d^2),$ since the algorithm requires $\mathcal{O}(d)$ evaluations of tensor elements, and each tensor element depends on $d$ indices. The scaling in TT--rank is almost quadratic, which indicates that the evaluation of tensor elements dominates the other operations.
For large ranks, the relative accuracy of the interpolation computed by Alg.~\ref{algi} drops almost to the machine precision threshold and does not stagnate at the level of $10^{-8}$ or $10^{-9},$ cf.~\cite{ot-ttcross-2010,lars-htcross-2013}. Alg.~\ref{algi} also appears to be very fast: using one core of the Iridis3 cluster, it is two to three times faster than the HT cross interpolation algorithm~\cite{lars-htcross-2013} applied to the same problem.
\section{Conclusions and future work} We have generalized two results on matrix cross interpolation to the tensor case, using the cross interpolation formula~\eqref{eq:ii} proposed by Oseledets and Tyrtyshnikov~\cite{ot-ttcross-2010} for the tensor train format. First, we have shown that the maximum--volume cross interpolation is quasioptimal, i.e. its accuracy in the Chebyshev norm differs from the best possible accuracy by a factor which does not grow exponentially with dimension. This generalizes the matrix result of Goreinov and Tyrtyshnikov~\cite{gt-skel-2011}. Second, we have shown that for nested interpolation indices, formula~\eqref{eq:ii} computes the $\sum_{k=1}^d r_{k-1}n_kr_k - \sum_{k=1}^{d-1}r_k^2$ parameters of the TT format by inspecting exactly the same number of tensor entries, and that on these elements the interpolation is exact. This generalizes the classical result on the skeleton approximation of matrices to the TT case.
In the tensor case, the maximum--volume interpolation sets in general are not nested, and we cannot have the quasioptimality and the interpolation property simultaneously. It would be interesting to find the nested interpolation sets which provide a moderate coefficient of the quasioptimality.
Using the interpolation property, we have proposed a fast and simple greedy cross interpolation algorithm, which provides very accurate results for the standard test and is several times faster than other methods. Many variants of this algorithm can be developed, taking into account the interpolation property and the available information on the error of the interpolation for different entries of a tensor. It is easy to overcome breakdowns, if they occur, simply by taking random pivots in larger subtensors or in the whole tensor, as suggested in Alg.~\ref{algg}. In our experiments we never had a breakdown using the restricted pivoting in Alg.~\ref{algi}.
The theoretical and experimental results of this paper show that the curse of dimensionality cannot stop us from developing fast and reliable cross interpolation methods in higher dimensions. Cross interpolation allows one to convert a given high--dimensional data array into the tensor train format, for which many operations essential for scientific computing are already available. For many high--dimensional problems we can try to replace randomized (Monte Carlo) sampling by cross interpolation in order to benefit from its adaptivity. This is a subject of further work.
\section*{Acknowledgments} {\small The theoretical results of this paper were obtained when the author was with the Institute of Numerical Mathematics RAS, Moscow. The author is grateful to Prof.~Eugene Tyrtyshnikov and Dr.~Ivan Oseledets for fruitful discussions. The author appreciates the use of the Iridis High Performance Computing Facility, and the associated support services at the University of Southampton, which proved essential to carry out the extensive numerical experiments reported in this paper. The author acknowledges the hospitality of SAM ETH Z\"urich, where most of the manuscript was drafted. }
\end{document}
\begin{document}
\begin{abstract} We generalize the concept of localization of a star operation to flat overrings; subsequently, we investigate the possibility of representing the set $\mathrm{Star}(R)$ of star operations on $R$ as the product of $\mathrm{Star}(T)$, as $T$ ranges in a family of overrings of $R$ with special properties. We then apply this method to study the set of star operations on a Pr\"ufer domain $R$, in particular the set of stable star operations and the star-class groups of $R$. \end{abstract}
\title{Jaffard families and localizations of star operations}
\section{Introduction}
Recently, the study of star operations, initiated by the works of Krull \cite{krull_idealtheorie} and Gilmer \cite[Chapter 32]{gilmer}, has focused on the whole set $\mathrm{Star}(R)$ of star operations on $R$, and in particular on its cardinality. Using as a starting point the characterization of domains with $|\mathrm{Star}(R)|=1$ due to Heinzer \cite{heinzer_d=v}, Houston, Mimouni and Park have devoted a series of papers \cite{twostar,houston_noeth-starfinite,hmp_finite,starnoeth_resinfinito} to this study, obtaining, among other results, a characterization of Pr\"ufer domains with two star operations \cite[Theorem 3.3]{twostar} and the precise determination of $|\mathrm{Star}(R)|$ for some classes of one-dimensional Noetherian domains \cite{houston_noeth-starfinite,starnoeth_resinfinito}. Their work is based -- at least partly -- on the concept of \emph{localization} of finite-type star operations to localizations of the ring.
The purpose of this paper is to generalize the concept of localization of a star operation $\ast$, by avoiding (when possible) the hypothesis that $\ast$ is of finite type and by considering, instead of localizations, flat overrings of the base ring $R$. In particular, we will prove that, if $R$ admits a family of overrings with certain properties (precisely, a \emph{Jaffard family} \cite[Section 6.3]{fontana_factoring}), then $\mathrm{Star}(R)$ can be represented as a cartesian product of $\mathrm{Star}(T)$, as $T$ ranges in this family, and that this representation preserves the main properties of the star operations.
We then specialize to the case of Pr\"ufer domains, where this approach is complemented by the possibility, in certain cases, of linking star operations on $R$ with star operations on a quotient of $R$. This method allows one to obtain a better grasp of several properties, like being a stable operation (Proposition \ref{prop:prufer-stab}), and to describe the star-class group of $R$ in terms of the class groups of some localizations of $R$.
\section{Preliminaries and notation} Let $R$ be an integral domain with quotient field $K$, and denote by $\mathcal{F}(R)$ the set of fractional ideals of $R$. A \emph{star operation} on $R$ is a map $\ast:\mathcal{F}(R)\longrightarrow\mathcal{F}(R)$, $I\mapsto I^\ast$ such that, for every $I,J\in\mathcal{F}(R)$ and $x\in K$, \begin{enumerate}[(a)] \item $I\subseteq I^\ast$; \item $(I^\ast)^\ast=I^\ast$; \item if $I\subseteq J$, then $I^\ast\subseteq J^\ast$; \item $R^\ast=R$; \item $(xI)^\ast=x\cdot I^\ast$. \end{enumerate} The set of star operations on $R$ is denoted by $\mathrm{Star}(R)$. An ideal $I$ is a \emph{$\ast$-ideal} if $I=I^\ast$.
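A minimal worked example, using only axioms (d) and (e) above (this observation is classical and not specific to the present paper):

```latex
\begin{ex}
If $R$ is a principal ideal domain, then every $I\in\mathcal{F}(R)$ has the
form $I=xR$ for some $x\in K\setminus\{0\}$; hence, by (d) and (e),
\begin{equation*}
I^\ast=(xR)^\ast=x\cdot R^\ast=xR=I
\end{equation*}
for every $\ast\in\mathrm{Star}(R)$. Therefore, the identity is the only star
operation on a principal ideal domain, i.e., $|\mathrm{Star}(R)|=1$.
\end{ex}
```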
Similarly, a \emph{semistar operation} on $R$ is a map $\ast:\mathbf{F}(R)\longrightarrow\mathbf{F}(R)$ (where $\mathbf{F}(R)$ is the set of $R$-submodules of $K$) satisfying the previous properties, except for $R^\ast=R$; if $\ast$ verifies also the latter, then it is said to be a \emph{(semi)star operation}. We indicate the sets of semistar and (semi)star operations by $\mathrm{SStar}(R)$ and $\mathrm{(S)Star}(R)$, respectively. A \emph{semiprime operation} is a map $c$, from the set of integral ideals of $R$ to itself, that satisfies the first four properties of star operations and, moreover, is such that $xI^c\subseteq(xI)^c$ for every $x\in R$.
A star operation is said to be: \begin{itemize} \item \emph{of finite type} if, for every $I$, \begin{equation*} I^\ast=\bigcup\{J^\ast\mid J\subseteq I,~J\text{~is finitely generated}\}; \end{equation*}
\item \emph{semifinite} if any proper $\ast$-ideal $I$ is contained in a prime $\ast$-ideal;
\item \emph{stable} if $(I\cap J)^\ast=I^\ast\cap J^\ast$ for all ideals $I,J$;
\item \emph{spectral} if it is in the form $I^\ast=\bigcap\{IR_P\mid P\in\Delta\}$ for some $\Delta\subseteq\mathrm{Spec}(R)$; equivalently, $\ast$ is spectral if and only if it is stable and semifinite \cite[Theorem 4]{anderson_overrings_1988};
\item \emph{endlich arithmetisch brauchbar} (\emph{eab} for short) if, for every nonzero finitely generated ideals $F,G,H$ such that $(FG)^\ast\subseteq(FH)^\ast$, we have $G^\ast\subseteq H^\ast$; if this property holds for arbitrary nonzero fractional ideals $G,H$ (but $F$ still finitely generated) then $\ast$ is said to be \emph{arithmetisch brauchbar} (\emph{ab} for short);
\item \emph{Noetherian} if any nonempty set $\{I_\alpha\mid \alpha\in A\}$ of proper $\ast$-ideals has a maximal element, or equivalently if every ascending chain of $\ast$-closed ideals stabilizes. (More commonly, under this hypothesis $R$ is said to be \emph{$\ast$-Noetherian} \cite{zaf_ACCstar}.) \end{itemize}
The set of star operations has a natural order, such that $\ast_1\leq\ast_2$ if and only if $I^{\ast_1}\subseteq I^{\ast_2}$ for every ideal $I$, or equivalently if and only if every $\ast_2$-closed ideal is also $\ast_1$-closed. Under this order, $\mathrm{Star}(R)$ becomes a complete lattice, where the minimum is the identity (usually denoted by $d$) and the maximum the $v$-operation (or \emph{divisorial closure}) $I\mapsto(R:(R:I))$.
If $R$ is an integral domain, an \emph{overring} of $R$ is a ring $T$ contained between $R$ and its quotient field $K$. A family $\Theta$ of overrings of $R$ is \emph{locally finite} (or \emph{of finite character}) if every $x\in K\setminus\{0\}$ (or, equivalently, every $x\in R\setminus\{0\}$) is a nonunit in only finitely many $T\in\Theta$. The ring $R$ itself is said to be of finite character if $\{R_M:M\in\mathrm{Max}(R)\}$ is a family of finite character.
A flat overring of $R$ is an overring that is flat as an $R$-module. If $T$ is a flat overring, then $(I_1\cap\cdots\cap I_n)T=I_1T\cap\cdots\cap I_nT$ for every $I_1,\ldots,I_n\in\mathbf{F}(R)$, and $(I:J)T=(IT:JT)$ for every $I,J\in\mathbf{F}(R)$ with $J$ finitely generated \cite[Theorem 7.4]{matsumura} (see also \cite[Proposition 2]{compact-intersections}).
\section{Extendable star operations}\label{sect:extension} The starting point is the notion of localization of a star operation, originally defined in \cite{twostar}. We shall adopt a more general and more abstract approach. \begin{defin}\label{defin:starloc} Let $R$ be an integral domain and $T$ a flat overring of $R$. We say that a star operation $\ast\in\mathrm{Star}(R)$ is \emph{extendable to $T$} if the map \begin{equation} \begin{aligned} \ast_T\colon \mathcal{F}(T) & \longrightarrow \mathcal{F}(T)\\ IT & \longmapsto I^\ast T \end{aligned} \end{equation} is well-defined (where $I$ is a fractional ideal of $R$). \end{defin}
\begin{oss}\label{oss:def} ~\begin{enumerate} \item If $T$ is flat over $R$, then every fractional ideal of $T$ is an extension of a fractional ideal of $R$ (since, if $J$ is an integral ideal of $T$, $J=(J\cap R)T$); therefore, $\ast_T$ is (potentially) defined on all of $\mathcal{F}(T)$. \item\label{oss:def:primi} If $T$ is flat over $R$ and $P$ is a prime of $R$ such that $PT\neq T$, then $PT$ is a prime ideal of $T$. Indeed, let $Q$ be a minimal prime of $PT$. By the previous point, $Q=(Q\cap R)T$; suppose $P\subsetneq Q\cap R$. By \cite[Theorem 2]{richamn_generalized-qr}, $T_Q=R_{Q\cap R}$, and thus $QT_Q=(Q\cap R)T_Q$ is not minimal over $PT_Q$, a contradiction. Note that the equality $T_Q=R_{Q\cap R}$ also shows that there is at most one $Q\in\mathrm{Spec}(T)$ over any $P\in\mathrm{Spec}(R)$. \item When $T=S^{-1}R$ is a localization of $R$ and $\ast$ is of finite type, Definition \ref{defin:starloc} coincides with the definition of $\ast_S$ given in \cite[Proposition 2.4]{twostar}. \item When $T=R_P$ for some $P\in\mathrm{Spec}(R)$, we will sometimes denote $\ast_T$ by $\ast_P$. \end{enumerate} \end{oss}
The following proposition shows the basic properties of extendability. \begin{prop}\label{prop:starloc:basic} Let $R$ be an integral domain, let $\ast\in\mathrm{Star}(R)$ and let $T$ be a flat overring of $R$. \begin{enumerate}[(a)] \item\label{prop:starloc:basic:star} If $\ast$ is extendable to $T$, then $\ast_T$ is a star operation. \item\label{prop:starloc:basic:equiv} $\ast$ is extendable to $T$ if and only if $I^\ast T=J^\ast T$ whenever $IT=JT$. \item\label{prop:starloc:basic:id} The identity star operation $d$ is always extendable, and $d_T$ is the identity on $T$. \item\label{prop:starloc:basic:ft} If $\ast$ is of finite type, then it is extendable to $T$, and $\ast_T$ is of finite type. \end{enumerate} \end{prop}
Note that, if $T$ is a localization of $R$, point \ref{prop:starloc:basic:ft} is proved in \cite[Proposition 2.4]{twostar}.
\begin{proof} \ref{prop:starloc:basic:star} and \ref{prop:starloc:basic:id} are obvious, while \ref{prop:starloc:basic:equiv} is just a reformulation of Definition \ref{defin:starloc}.
For \ref{prop:starloc:basic:ft}, by symmetry it is enough to show that $J^\ast T\subseteq I^\ast T$, or equivalently that $1\in(I^\ast T:J^\ast T)$. Since $\ast$ is of finite type, \begin{equation*} (I^\ast T:J^\ast T)=\left(I^\ast T:\left(\sum_{\substack{L\subseteq J\\ L\text{~finitely generated}}}L^\ast\right)T\right)= \left(I^\ast T:\sum_{\substack{L\subseteq J\\ L\text{~fin. gen.}}}L^\ast T\right)=\end{equation*}\begin{equation*} =\bigcap_{\substack{L\subseteq J\\ L\text{~fin. gen.}}}(I^\ast T:L^\ast T)\supseteq\bigcap_{\substack{L\subseteq J\\ L\text{~fin. gen.}}}(I^\ast:L^\ast)T. \end{equation*} By properties of star operations, $(I^\ast:L^\ast)=(I^\ast:L)$; since $L$ is finitely generated and $T$ is flat, it follows that, for every $L$, \begin{equation*} (I^\ast:L^\ast)T=(I^\ast:L)T=(I^\ast T:LT) \end{equation*} which contains 1 since $LT\subseteq JT=IT\subseteq I^\ast T$. Hence, $1\in(I^\ast T:J^\ast T)$, as requested. \end{proof}
\begin{ex}\label{ex:ad} Not every star operation is extendable: let $R$ be an almost Dedekind domain which is not Dedekind (i.e., a one-dimensional non-Noetherian domain such that $R_M$ is a discrete valuation ring for every $M\in\mathrm{Max}(R)$), and let $P$ be a non-finitely generated prime ideal of $R$. Then $P$ is not divisorial \cite[Lemma 4.1.8]{fontana_libro}, and thus the $v$-operation is not extendable to $R_P$, since otherwise $(PR_P)^{v_P}=P^vR_P=R_P$, while the unique star operation on $R_P$ is the identity. \end{ex}
Beside being of finite type, extension preserves the main properties of a star operation. \begin{prop}\label{prop:estensione-prop} Let $R$ be a domain and $T$ be a flat overring of $R$; suppose $\ast\in\mathrm{Star}(R)$ is extendable to $T$. If $\ast$ is stable (respectively, spectral, Noetherian) then so is $\ast_T$. \end{prop} \begin{proof} Suppose $\ast$ is stable, and let $I_1:=J_1T$, $I_2:=J_2T$ be ideals of $T$, where $J_1$ and $J_2$ are ideals of $R$. Then, \begin{equation*} \begin{array}{rcl} (I_1\cap I_2)^{\ast_T} & = & (J_1T\cap J_2T)^{\ast_T}=[(J_1\cap J_2)T]^{\ast_T}=\\ & = & (J_1\cap J_2)^\ast T=(J_1^\ast\cap J_2^\ast)T=J_1^\ast T\cap J_2^\ast T=I_1^{\ast_T}\cap I_2^{\ast_T} \end{array} \end{equation*} and thus $\ast_T$ is stable.
If $\ast$ is spectral, it is stable, and thus so is $\ast_T$. Let now $I$ be a proper $\ast_T$-closed ideal of $T$, and let $J:=I\cap R$; then, $JT=(I\cap R)T=I$, and thus $J^\ast\subseteq I^{\ast_T}\cap R=I\cap R=J$, so that $J$ is a $\ast$-ideal. By definition, there is a $\Delta\subseteq\mathrm{Spec}(R)$ such that $\ast=\ast_\Delta$; hence, \begin{equation*} J=J^\ast=\bigcap_{P\in\Delta}JR_P=\bigcap_{P\in\Delta}(I\cap R)R_P=\bigcap_{P\in\Delta}IR_P\cap R_P. \end{equation*} In particular, there is a $P\in\Delta$ such that $1\notin IR_P=ITR_P$; hence, there is a $Q\in\mathrm{Spec}(TR_P)$ such that $ITR_P\subseteq Q$. We claim that $Q_0:=Q\cap T$ is a prime $\ast_T$-ideal containing $I$. Indeed, $I\subseteq ITR_P\cap T\subseteq Q\cap T=Q_0$; moreover, since $Q\cap R=Q_0\cap R\subseteq P$, $Q_0=LT$ for some prime ideal $L$ of $T$ contained in $P$ (Remark \ref{oss:def}\ref{oss:def:primi}), and thus \begin{equation*} Q_0^{\ast_T}=L^\ast T\subseteq (LR_P\cap R)T=LT=Q_0. \end{equation*} Therefore, $\ast_T$ is also semifinite, and by \cite[Theorem 4]{anderson_overrings_1988} it is spectral.
Suppose $\ast$ is Noetherian, and let $\{I_\alpha T\mid\alpha\in A\}$ be an ascending chain of $\ast_T$-ideals. Then, $\{I_\alpha^\ast\mid\alpha\in A\}$ is an ascending chain of $\ast$-ideals, which has to stabilize at $I_{\overline{\alpha}}$. Hence, the original chain stabilizes at $I_{\overline{\alpha}}T$, and $\ast_T$ is Noetherian. \end{proof}
Extendability works well with the order structure of $\mathrm{Star}(R)$. \begin{prop}\label{prop:starloc:ordine} Let $R$ be an integral domain and $T$ be a flat overring of $R$. Let $\ast_1,\ast_2,\{\ast_\lambda\mid\lambda\in\Lambda\}$ be star operations that are extendable to $T$. \begin{enumerate}[(a)] \item\label{prop:starloc:ordine:leq} If $\ast_1\leq\ast_2\in\mathrm{Star}(R)$, then $(\ast_1)_T\leq(\ast_2)_T$. \item\label{prop:starloc:ordine:wedge} $\ast_1\wedge\ast_2$ is extendable to $T$ and $(\ast_1\wedge\ast_2)_T=(\ast_1)_T\wedge(\ast_2)_T$. \item\label{prop:starloc:ordine:supft} If each $\ast_\lambda$ is of finite type, then $\sup_\lambda\ast_\lambda$ is extendable to $T$ and $(\sup_\lambda\ast_\lambda)_T=\sup_\lambda(\ast_\lambda)_T$. \end{enumerate} \end{prop} \begin{proof} \ref{prop:starloc:ordine:leq} If $\ast_1\leq\ast_2$, then $I^{\ast_1}\subseteq I^{\ast_2}$ for every fractional ideal $I$, and thus $(I^{\ast_1}T)\subseteq(I^{\ast_2}T)$. Using the definition of $\ast_T$, we get $(\ast_1)_T\leq(\ast_2)_T$.
\ref{prop:starloc:ordine:wedge} Let $I$ be an ideal of $R$. By definition, $I^{\ast_1\wedge\ast_2}=I^{\ast_1}\cap I^{\ast_2}$, so that \begin{equation*} (IT)^{(\ast_1\wedge\ast_2)_T}=(I^{\ast_1\wedge\ast_2})T=(I^{\ast_1}\cap I^{\ast_2})T=\end{equation*}\begin{equation*} =I^{\ast_1}T\cap I^{\ast_2}T=(IT)^{(\ast_1)_T}\cap (IT)^{(\ast_2)_T}=(IT)^{(\ast_1)_T\wedge(\ast_2)_T} \end{equation*} and thus $(\ast_1\wedge\ast_2)_T=(\ast_1)_T\wedge(\ast_2)_T$.
\ref{prop:starloc:ordine:supft} Let $\ast:=\sup_\lambda\ast_\lambda$. Since each $\ast_\lambda$ is of finite type, so is $\ast$ \cite[p.1628]{anderson_examples}, and thus $\ast$ is extendable to $T$ by Proposition \ref{prop:starloc:basic}\ref{prop:starloc:basic:ft}. Moreover, again by \cite[p.1628]{anderson_examples}, $I^\ast=\sum I^{\ast_1\circ\cdots\circ\ast_n}$, as $(\ast_1,\ldots,\ast_n)$ ranges among the finite strings of elements of $\{\ast_\lambda\mid\lambda\in\Lambda\}$ (here $\ast_1\circ\cdots\circ\ast_n$ indicates simply the composition of $\ast_1,\ldots,\ast_n$); therefore, \begin{equation*} I^\ast T=\left(\sum I^{\ast_1\circ\cdots\circ\ast_n}\right)T=\sum I^{\ast_1\circ\cdots\circ\ast_n}T. \end{equation*} We claim that $I^{\ast_1\circ\cdots\circ\ast_n}T=(IT)^{(\ast_1)_T\circ\cdots\circ(\ast_n)_T}$; we proceed by induction. The case $n=1$ is just the definition of the extension; suppose the claim holds for $m<n$. Then, \begin{equation*} I^{\ast_1\circ\cdots\circ\ast_n}T=(I^{\ast_1})^{\ast_2\circ\cdots\circ\ast_n}T= (I^{\ast_1}T)^{(\ast_2)_T\circ\cdots\circ(\ast_n)_T}=(IT)^{(\ast_1)_T\circ\cdots\circ(\ast_n)_T} \end{equation*} as claimed. Thus, \begin{equation*} I^\ast T=\sum(IT)^{(\ast_1)_T\circ\cdots\circ(\ast_n)_T}=(IT)^{\sup_\lambda(\ast_\lambda)_T} \end{equation*} the last equality coming from \cite[p.1628]{anderson_examples} and Proposition \ref{prop:starloc:basic}\ref{prop:starloc:basic:ft}. Hence, $\ast=\sup_\lambda(\ast_\lambda)_T$. \end{proof}
Extendability is also transitive: \begin{prop}\label{prop:ext-transitivo} Let $R$ be a domain and $T_1\subseteq T_2$ be two flat overrings of $R$. If $\ast\in\mathrm{Star}(R)$ is extendable to $T_1$ and $\ast_{T_1}$ is extendable to $T_2$, then $\ast$ is extendable to $T_2$, and $\ast_{T_2}=(\ast_{T_1})_{T_2}$. \end{prop} \begin{proof} Note first that if $T_2$ is flat over $R$ then it is flat over $T_1$, and thus it makes sense to speak of the extendability of $\ast_{T_1}$. For every ideal $I$ of $R$, we have \begin{equation*} I^\ast T_2=(I^\ast T_1)T_2=(IT_1)^{\ast_{T_1}}T_2=(IT_1T_2)^{(\ast_{T_1})_{T_2}}=(IT_2)^{(\ast_{T_1})_{T_2}} \end{equation*} and thus if $IT_2=JT_2$ then $I^\ast T_2=J^\ast T_2$, so that $\ast$ is extendable to $T_2$. The previous calculation also shows that $\ast_{T_2}=(\ast_{T_1})_{T_2}$. \end{proof}
\begin{prop} Let $R$ be an integral domain and $T$ be a flat overring of $R$. Let $\Delta:=\{M\cap R\mid M\in\mathrm{Max}(T)\}$. If $\ast\in\mathrm{Star}(R)$ is extendable to $R_P$, for every $P\in\Delta$, then it is extendable to $T$. \end{prop} \begin{proof} Let $I,J$ be ideals of $R$ such that $IT=JT$. Let $P\in\Delta$ and let $M$ be the (necessarily unique -- see Remark \ref{oss:def}(\ref{oss:def:primi})) maximal ideal of $T$ such that $M\cap R=P$. Then, $T_M=R_P$, and since $\ast$ is extendable to $R_P$ we have $I^\ast R_P=J^\ast R_P$. It follows that \begin{equation*} I^\ast T=\bigcap_{P\in\Delta}I^\ast R_P=\bigcap_{P\in\Delta}J^\ast R_P=J^\ast T, \end{equation*} and thus $\ast$ is extendable to $T$. \end{proof}
\begin{cor}\label{cor:def-extstar} Let $R$ be a domain, and let $\ast\in\mathrm{Star}(R)$. The following are equivalent: \begin{enumerate}[(i)] \item $\ast$ is extendable to $R_P$, for every $P\in\mathrm{Spec}(R)$; \item $\ast$ is extendable to every flat overring of $R$. \end{enumerate} \end{cor}
Note that condition (i) of the above corollary cannot be replaced by the version that considers only maximal ideals of $R$: indeed, if $(R,M)$ is local, then clearly every star operation is extendable to $R_M$, but it is not true in general that every star operation is extendable to every localization. We can build an explicit counterexample by slightly tweaking \cite[Remark 2.5(3)]{twostar}. Let $R:=\ins{Z}_{p\ins{Z}}+X\ins{Q}(\sqrt{2})[[X]]$ (where $p$ is a prime number). Then, $R$ is a two-dimensional local domain, with maximal ideal $M:=p\ins{Z}_{p\ins{Z}}+X\ins{Q}(\sqrt{2})[[X]]$; let $P:=X\ins{Q}(\sqrt{2})[[X]]$. We claim that the $v$-operation is not extendable to $R_P=\ins{Q}+P$. Let $A:=X(\ins{Q}+P)$ and $B:=XR$: then, $AR_P=BR_P=A$, but $A^vR_P=P$ while $B^vR_P=BR_P\neq P$.
\section{Jaffard families}\label{sect:Jaffard} The concept of Jaffard family was introduced and studied in \cite[Section 6.3]{fontana_factoring}. \begin{defin} Let $R$ be a domain and $\Theta$ be a set of overrings of $R$ such that the quotient field of $R$ is not in $\Theta$. We say that $\Theta$ is a \emph{Jaffard family on $R$} if, for every integral ideal $I$ of $R$, \begin{itemize} \item $R=\bigcap_{T\in\Theta}T$; \item $\Theta$ is locally finite; \item $I=\prod_{T\in\Theta}(IT\cap R)$; \item if $T\neq S$ are in $\Theta$, then $(IT\cap R)+(IS\cap R)=R$. \end{itemize}
We say that an overring $T$ of $R$ is a \emph{Jaffard overring} of $R$ if $T$ belongs to a Jaffard family of $R$. \end{defin}
Note that, by the second axiom, if $I\neq(0)$ then $IT=T$ for all but finitely many $T\in\Theta$, so that the product $I=\prod_{T\in\Theta}(IT\cap R)$ is finite.
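A concrete instance of the definition (a standard example, stated here with the notation above; it is an instance of the $h$-local case discussed in the next section):

```latex
\begin{ex}
Let $R=\ins{Z}$ and $\Theta=\{\ins{Z}_{(p)}\mid p\text{~prime}\}$. Then
$\bigcap_p\ins{Z}_{(p)}=\ins{Z}$; every nonzero integer is a nonunit in only
finitely many $\ins{Z}_{(p)}$, so $\Theta$ is locally finite; and, for
$I=n\ins{Z}$, we have $I\ins{Z}_{(p)}\cap\ins{Z}=p^{v_p(n)}\ins{Z}$, so that
$I=\prod_p(I\ins{Z}_{(p)}\cap\ins{Z})$ with pairwise coprime factors. Hence,
$\Theta$ is a Jaffard family on $\ins{Z}$; the same construction works, more
generally, for any Dedekind domain.
\end{ex}
```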
The next propositions collect the properties of Jaffard families that we will be using. \begin{prop}[{\protect\cite[Theorem 6.3.1]{fontana_factoring}}]\label{prop:jaffard:basic} Let $R$ be an integral domain with quotient field $K$, and let $\Theta$ be a Jaffard family on $R$. For each $T\in\Theta$, let $\ortog{\Theta}(T):=\bigcap\{U\in\Theta\mid U\neq T\}$. \begin{enumerate}[(a)] \item\label{prop:jaffard:basic:complete} $\Theta$ is complete (i.e., $I=\bigcap\{IT\mid T\in\Theta\}$ for every ideal $I$ of $R$). \item\label{prop:jaffard:basic:partitionSpec} For each $P\in\mathrm{Spec}(R)$, $P\neq(0)$, there is a unique $T\in\Theta$ such that $PT\neq T$. \item\label{prop:jaffard:basic:flat} For each $T\in\Theta$, both $T$ and $\ortog{\Theta}(T)$ are flat over $R$. \item\label{prop:jaffard:basic:complindip} For each $T\in\Theta$, we have $T\cdot\ortog{\Theta}(T)=K$.
\end{enumerate} \end{prop}
\begin{prop}\label{prop:caratt-jaffard} Let $\Theta$ be a family of flat overrings of the domain $R$, and let $K$ be the quotient field of $R$. Then, $\Theta$ is a Jaffard family if and only if it is complete, locally finite and $TS=K$ for all $T,S\in\Theta$, $T\neq S$. \end{prop} \begin{proof} If $\Theta$ is a Jaffard family, the properties follow by the definition and Proposition \ref{prop:jaffard:basic}. Conversely, suppose $\Theta$ verifies the three properties, let $I\neq(0)$ be an ideal of $R$ and let $T\neq S$ be members of $\Theta$. If $IT\cap R$ and $IS\cap R$ are not coprime, then there would be a prime $P$ of $R$ containing both; since $\Theta$ is complete, it would follow that both $IT\cap R$ and $IS\cap R$ survive in some $A\in\Theta$. In particular, without loss of generality, $A\neq T$; however, \begin{equation*} (IT\cap R)A=ITA\cap A=IK\cap A=A, \end{equation*} a contradiction. Therefore, $(IT\cap R)+(IS\cap R)=R$. Moreover, $I=\bigcap\{IT\cap R\mid T\in\Theta\}=(IT_1\cap R)\cap\cdots\cap(IT_n\cap R)$ by local finiteness; since the $IT_i\cap R$ are coprime, their intersection is equal to their product, and thus $I=(IT_1\cap R)\cdots(IT_n\cap R)$. \end{proof}
\begin{oss}\label{oss:Matlis} Any Jaffard family $\Theta$ defines a partition of $\mathrm{Max}(R)$, where each class is composed of the $M\in\mathrm{Max}(R)$ such that $MT\neq T$ for some fixed $T\in\Theta$. In particular, $T=\bigcap R_M$, as $M$ ranges in the class relative to $T$; hence, different Jaffard families define different partitions. In particular, a local domain has only one Jaffard family, namely $\{R\}$, and a semilocal domain has only a finite number of Jaffard families.
However, not every partition of $\mathrm{Max}(R)$ can arise in this way. For example, let $\Theta$ be a Jaffard family and let $M,N\in\mathrm{Max}(R)$; by Proposition \ref{prop:jaffard:basic}\ref{prop:jaffard:basic:partitionSpec}, there are unique overrings $T,U\in\Theta$ such that $MT\neq T$ and $NU\neq U$. If there is a nonzero prime $P\subseteq M\cap N$, then $PT\neq T$ and $PU\neq U$; therefore, again by Proposition \ref{prop:jaffard:basic}\ref{prop:jaffard:basic:partitionSpec}, it must be $T=U$. \end{oss}
An \emph{$h$-local domain} is an integral domain $R$ such that $\mathrm{Max}(R)$ is locally finite and such that every prime ideal $P$ is contained in only one maximal ideal. In this case, $\{R_M\mid M\in\mathrm{Max}(R)\}$ is a Jaffard family of $R$; conversely, if $\{R_M\mid M\in\mathrm{Max}(R)\}$ is a Jaffard family, then $\mathrm{Max}(R)$ is locally finite (by definition) and each prime is contained in only one maximal ideal (by Proposition \ref{prop:jaffard:basic}\ref{prop:jaffard:basic:partitionSpec}), and thus $R$ is $h$-local. Many properties of Jaffard families can be seen as generalizations of the corresponding properties of $h$-local domains; the following proposition is an example (compare \cite[Proposition 3.1]{olberding_globalizing}). \begin{prop}\label{prop:integintersect} Let $R$ be a domain and $T$ be a Jaffard overring of $R$. Then: \begin{enumerate}[(a)] \item\label{prop:integintersect:a} for every family $\{X_\alpha:\alpha\in A\}$ of $R$-submodules of $K$ with nonzero intersection, we have $\left(\bigcap_{\alpha\in A}X_\alpha\right)T=\bigcap_{\alpha\in A}X_\alpha T$; \item\label{prop:integintersect:b} if $\{I_\alpha:\alpha\in A\}$ is a family of integral ideals of $R$ with nonzero intersection such that $\left(\bigcap_{\alpha\in A}I_\alpha\right)T\neq T$, then $I_{\overline{\alpha}}T\neq T$ for some $\overline{\alpha}\in A$. \end{enumerate} \end{prop} \begin{proof} \ref{prop:integintersect:a} Let $\Theta$ be a Jaffard family of $R$ such that $T\in\Theta$.
Then, by the flatness of $T$, \begin{equation*} \left(\bigcap_{\alpha\in A}X_\alpha\right)T= \left(\bigcap_{\alpha\in A}\bigcap_{U\in\Theta}X_\alpha U\right)T= \left(\bigcap_{U\in\Theta}\bigcap_{\alpha\in A}X_\alpha U\right)T=\end{equation*}\begin{equation*} =\left(\bigcap_{U\in\Theta\setminus\{T\}}\bigcap_{\alpha\in A}X_\alpha U\right)T\cap \bigcap_{\alpha\in A}X_\alpha T=K\cap\bigcap_{\alpha\in A}X_\alpha T \end{equation*} since $\bigcap_{U\in\Theta\setminus\{T\}}\bigcap_{\alpha\in A}X_\alpha U$ is a $\ortog{\Theta}(T)$-module, and thus its product with $T$ is equal to $K$ by Proposition \ref{prop:jaffard:basic}\ref{prop:jaffard:basic:complindip}.
\ref{prop:integintersect:b} Suppose $\left(\bigcap_{\alpha\in A}I_\alpha\right)T\neq T$. Since $\left(\bigcap_{\alpha\in A}I_\alpha\right)T\subseteq T$, this means that $1$ is not contained in the left-hand side. By \ref{prop:integintersect:a}, $1$ is not contained in $\bigcap_{\alpha\in A}I_\alpha T$, i.e., there is an $\overline{\alpha}$ such that $1\notin I_{\overline{\alpha}} T$, and thus $I_{\overline{\alpha}} T\neq T$. \end{proof}
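To fix ideas, we illustrate Proposition \ref{prop:integintersect}\ref{prop:integintersect:a} in the simplest $h$-local setting; the computation is only an illustration and is not used in the sequel. Take $R=\ins{Z}$, which is $h$-local, so that $\Theta=\{\ins{Z}_{(p)}\mid p\text{~prime}\}$ is a Jaffard family; for $T=\ins{Z}_{(2)}$ and the ideals $2\ins{Z}$ and $3\ins{Z}$, we have \begin{equation*} (2\ins{Z}\cap 3\ins{Z})T=6\ins{Z}_{(2)}=2\ins{Z}_{(2)}=2\ins{Z}_{(2)}\cap\ins{Z}_{(2)}=(2\ins{Z})T\cap(3\ins{Z})T, \end{equation*} since $3$ is a unit in $\ins{Z}_{(2)}$.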
\section{Jaffard families and star operations}\label{sect:jaff-star} The reason why we introduced Jaffard families is that they provide a way to decompose $\mathrm{Star}(R)$ as a product of spaces of star operations of overrings of $R$. Before reaching this objective (Theorem \ref{teor:star-jaffard}), we show that weaker properties can already lead to a decomposition of at least a subset of $\mathrm{Star}(R)$.
\begin{prop}\label{prop:indip-rhoinj} Let $R$ be an integral domain with quotient field $K$. Let $\Theta$ be a set of flat overrings of $R$ such that $R=\bigcap\{T\mid T\in\Theta\}$ and such that $AB=K$ whenever $A,B\in\Theta$ and $A\neq B$. Then, there is an injective order-preserving map \begin{equation*} \begin{aligned} \rho_\Theta\colon\prod_{T\in\Theta}\mathrm{Star}(T) & \longrightarrow\mathrm{Star}(R) \\ (\ast^{(T)})_{T\in\Theta} & \longmapsto\bigwedge_{T\in\Theta}\ast^{(T)}, \end{aligned} \end{equation*} where $\bigwedge_{T\in\Theta}\ast^{(T)}$ is the map such that \begin{equation*} I\mapsto\bigcap_{T\in\Theta}(IT)^{\ast^{(T)}} \end{equation*} for every fractional ideal $I$ of $R$. \end{prop} \begin{proof} Let $(\ast^{(T)})_{T\in\Theta}\in\prod_{T\in\Theta}\mathrm{Star}(T)$, and let $\ast:=\rho_\Theta((\ast^{(T)})_{T\in\Theta})$. Since $\bigcap_{T\in\Theta}T=R$, the map $\ast$ is a star operation; moreover, it is clear that if $\ast_1^{(T)}\leq\ast_2^{(T)}$ for all $T$ then $\rho_\Theta(\ast_1^{(T)})\leq\rho_\Theta(\ast_2^{(T)})$. Hence, $\rho_\Theta$ is well-defined and order-preserving; we need to show that it is injective.
Suppose it is not; then, $\ast:=\rho_\Theta(\ast_1^{(T)})=\rho_\Theta(\ast_2^{(T)})$ for some families of star operations such that $\ast_1^{(U)}\neq\ast_2^{(U)}$ for some $U\in\Theta$. There is an integral ideal $J$ of $U$ such that $J^{\ast_1^{(U)}}\neq J^{\ast_2^{(U)}}$; let $I:=J\cap R$. Since $U$ is flat, for both $i=1$ and $i=2$ we have \begin{equation*} I^\ast U=\left[\bigcap_{T\in\Theta}(IT)^{\ast_i^{(T)}}\right]U=(IU)^{\ast_i^{(U)}}U\cap \left[\bigcap_{T\in\Theta\setminus\{U\}}(IT)^{\ast_i^{(T)}}\right]U. \end{equation*}
If $T\neq U$, then, since $T$ is flat, \begin{equation*} (IT)^{\ast_i^{(T)}}=((J\cap R)T)^{\ast_i^{(T)}}=(JT\cap T)^{\ast_i^{(T)}}. \end{equation*} However, $JT=JUT=K$ since $UT=K$ (by hypothesis); therefore, $(IT)^{\ast_i^{(T)}}=T$, and (since $I\subseteq U$) \begin{equation*} I^\ast U=(IU)^{\ast_i^{(U)}}U\cap\left[\bigcap_{T\in\Theta\setminus\{U\}}T\right]U= (IU)^{\ast_i^{(U)}}U\cap\left[\bigcap_{T\in\Theta}T\right]U=\end{equation*}\begin{equation*} =(IU)^{\ast_i^{(U)}}\cap RU=(IU)^{\ast_i^{(U)}}=J^{\ast_i^{(U)}} \end{equation*} for both $i=1$ and $i=2$. However, this contradicts the choice of $J$; hence, $\rho_\Theta$ is injective. \end{proof}
If $\Theta$ is a Jaffard family, the previous proposition can be strengthened. We need two lemmas. \begin{lemma}\label{lemma:intersez-ritorno} Let $R$ be a domain with quotient field $K$, and let $\Theta$ be a Jaffard family on $R$. For every $U\in\Theta$, let $J_U$ be a $U$-submodule of $K$, and define $J:=\bigcap_{U\in\Theta}J_U$. If $J\neq(0)$, then for every $T\in\Theta$ we have $JT=J_T$. \end{lemma} \begin{proof} By Proposition \ref{prop:integintersect}\ref{prop:integintersect:a}, we have \begin{equation*} JT=\left(\bigcap_{U\in\Theta}J_U\right)T= \bigcap_{U\in\Theta}J_UT. \end{equation*} If $U\neq T$, then $J_UT=J_UUT=J_UK=K$; therefore, $JT=J_TT=J_T$. \end{proof}
The next lemma can be seen as a generalization of \cite[Theorem 6.2.2(2)]{fontana_factoring} and \cite[Lemma 2.3]{warfield}. \begin{lemma}\label{lemma:Tcolon} Let $R$ be an integral domain, $T$ be a Jaffard overring of $R$, and let $I,J\in\mathbf{F}(R)$ such that $(I:J)\neq(0)$. Then, $(I:J)T=(IT:JT)$. \end{lemma} \begin{proof} It is enough to note that $(I:J)=\bigcap_{j\in J}j^{-1}I\neq(0)$, and apply Proposition \ref{prop:integintersect}\ref{prop:integintersect:a}. \end{proof}
\begin{teor}\label{teor:star-jaffard} Let $R$ be an integral domain and let $\Theta$ be a Jaffard family on $R$. Then, every $\ast\in\mathrm{Star}(R)$ is extendable to every $T\in\Theta$, and the maps \begin{equation*} \begin{aligned} \lambda_\Theta\colon \mathrm{Star}(R) & \longrightarrow \prod_{T\in\Theta}\mathrm{Star}(T)\\ \ast & \longmapsto (\ast_T)_{T\in\Theta} \end{aligned} \quad\text{and}\quad \begin{aligned} \rho_\Theta\colon \prod_{T\in\Theta}\mathrm{Star}(T) & \longrightarrow \mathrm{Star}(R)\\ (\ast^{(T)})_{T\in\Theta} & \longmapsto \bigwedge_{T\in\Theta}\ast^{(T)} \end{aligned} \end{equation*} (where $\bigwedge_{T\in\Theta}\ast^{(T)}$ is defined as in Proposition \ref{prop:indip-rhoinj}) are order-preserving bijections between $\mathrm{Star}(R)$ and $\prod\{\mathrm{Star}(T)\mid T\in\Theta\}$. \end{teor} \begin{proof} We first show that every $\ast\in\mathrm{Star}(R)$ is extendable. Let $T\in\Theta$ and let $I,J$ be ideals of $R$ such that $IT=JT$. Then, using Lemma \ref{lemma:Tcolon}, we have \begin{equation*} (I^\ast T:J^\ast T)=(I^\ast:J^\ast)T=(I^\ast:J)T=(I^\ast T:JT) \end{equation*} and, since $JT=IT\subseteq I^\ast T$, we have $1\in(I^\ast T:J^\ast T)$, so that $J^\ast T\subseteq I^\ast T$. Symmetrically, $I^\ast T\subseteq J^\ast T$, and hence $J^\ast T=I^\ast T$. By Proposition \ref{prop:starloc:basic}\ref{prop:starloc:basic:equiv}, $\ast_T$ is well-defined, and $\ast$ is extendable to $T$; in particular, $\lambda_\Theta$ is well-defined.
Moreover, for every $\ast\in\mathrm{Star}(R)$, we have \begin{equation*} I^\ast=\bigcap_{T\in\Theta}I^\ast T=\bigcap_{T\in\Theta}(IT)^{\ast_T} \end{equation*} using the completeness of $\Theta$ in the first equality and the definition of extension in the second. Thus, $\ast=\rho_\Theta\circ\lambda_\Theta(\ast)$, i.e., $\rho_\Theta\circ\lambda_\Theta$ is the identity. It follows that $\lambda_\Theta$ is injective and $\rho_\Theta$ is surjective. But $\rho_\Theta$ is injective by Proposition \ref{prop:indip-rhoinj}, so $\lambda_\Theta$ and $\rho_\Theta$ must be bijections. \end{proof}
The second part of the following corollary is a generalization of \cite[Theorem 2.3]{houston_noeth-starfinite}. \begin{cor}\label{cor:1dim} Let $R$ be a one-dimensional integral domain. \begin{enumerate}[(a)]
\item $|\mathrm{Star}(R)|\geq\prod\{|\mathrm{Star}(R_M)|:M\in\mathrm{Max}(R)\}$;
\item if $R$ is of finite character (for example, if $R$ is Noetherian), then $|\mathrm{Star}(R)|=\prod\{|\mathrm{Star}(R_M)|:M\in\mathrm{Max}(R)\}$. \end{enumerate} \end{cor} \begin{proof} If $M\neq N$ are maximal ideals of $R$, then $R_MR_N=K$, since both $M$ and $N$ have height 1. By Proposition \ref{prop:indip-rhoinj}, there is an injective map from the product $\prod\mathrm{Star}(R_M)$ to $\mathrm{Star}(R)$, which in particular implies the inequality in (a).
If, moreover, $R$ is of finite character, then $\{R_M\mid M\in\mathrm{Max}(R)\}$ is a Jaffard family, and the claim follows by applying Theorem \ref{teor:star-jaffard}. \end{proof}
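A well-known special case, recorded only as an illustration of Corollary \ref{cor:1dim}: if $R$ is a Dedekind domain, then each $R_M$ is a discrete valuation ring, whose maximal ideal is principal and hence divisorial, so that $|\mathrm{Star}(R_M)|=1$ for every $M$; the corollary then gives \begin{equation*} |\mathrm{Star}(R)|=\prod_{M\in\mathrm{Max}(R)}|\mathrm{Star}(R_M)|=1, \end{equation*} i.e., on a Dedekind domain the $v$-operation coincides with the identity.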
The bijections $\rho_\Theta$ and $\lambda_\Theta$ respect the properties of star operations; see the following Proposition \ref{prop:jaffard-eab} for the eab case. \begin{teor}\label{teor:jaffard-corresp} Let $R$ be a domain and $\Theta$ be a Jaffard family on $R$, and let $\ast\in\mathrm{Star}(R)$. Then, $\ast$ is of finite type (respectively, semifinite, stable, spectral, Noetherian) if and only if $\ast_T$ is of finite type (resp., semifinite, stable, spectral, Noetherian) for every $T\in\Theta$. \end{teor} \begin{proof} By Propositions \ref{prop:starloc:basic}\ref{prop:starloc:basic:ft} and \ref{prop:estensione-prop}, if $\ast$ is of finite type, stable, spectral or Noetherian, so is $\ast_T$. If $\ast$ is semifinite, let $I$ be a proper $\ast_T$-closed ideal of $T$, and let $J:=I\cap R$. Then $JT=I$, and $J^\ast\subseteq I^{\ast_T}\cap R=J$; since $\ast$ is semifinite, there is a prime ideal $Q\supseteq J$ such that $Q^\ast=Q$. For every $U\in\Theta$, $U\neq T$, we have $JU=U$; hence $QU=U$, and thus $QT\neq T$; moreover, since $T$ is flat, $QT$ is prime (Remark \ref{oss:def}(\ref{oss:def:primi})). Therefore, $(QT)^{\ast_T}=Q^\ast T=QT$ is a proper prime $\ast_T$-ideal containing $I$, and $\ast_T$ is semifinite.
Let now $\ast:=\rho_\Theta((\ast^{(T)})_{T\in\Theta})$.
If each $\ast^{(T)}$ is of finite type, then $\ast$ is of finite type by \cite{anderson_examples}.
Suppose each $\ast^{(T)}$ is semifinite and $I=I^\ast$ is a proper ideal of $R$. Then, $1\notin I$, so there is a $T\in\Theta$ such that $(IT)^{\ast^{(T)}}\neq T$, and thus there is a prime ideal $P$ of $T$ containing $IT$ such that $P=P^{\ast^{(T)}}$. If $Q:=P\cap R$, then \begin{equation*} Q^\ast\subseteq(QT)^{\ast^{(T)}}\cap R\subseteq P^{\ast^{(T)}}\cap R=Q, \end{equation*} so that $Q$ is a $\ast$-prime ideal of $R$ containing $I$.
If each $\ast^{(T)}$ is stable, then, given ideals $I,J$ of $R$, we have \begin{equation*} (I\cap J)^\ast=\bigcap_{T\in\Theta}((I\cap J)T)^{\ast^{(T)}}=\bigcap_{T\in\Theta}(IT)^{\ast^{(T)}}\cap \bigcap_{T\in\Theta}(JT)^{\ast^{(T)}}=I^\ast\cap J^\ast. \end{equation*} Hence, $\ast$ is stable. The case of spectral star operations follows, since $\ast$ is spectral if and only if it is stable and semifinite \cite[Theorem 4]{anderson_overrings_1988}.
Suppose now $\ast^{(T)}$ is Noetherian for every $T\in\Theta$ and let $\{I_\alpha:\alpha\in A\}$ be an ascending chain of $\ast$-ideals. If $I_\alpha=(0)$ for every $\alpha$ we are done. Otherwise, there is an $\overline{\alpha}$ such that $I_{\overline{\alpha}}\neq(0)$, and thus $I_{\overline{\alpha}}$ (and, consequently, every $I_\alpha$ for $\alpha>\overline{\alpha}$) survives in only a finite number of elements of $\Theta$, say $T_1,\ldots,T_n$. For each $i\in\{1,\ldots,n\}$, the set $\{I_\alpha T_i\}$ is an ascending chain of $\ast^{(T_i)}$-ideals, and thus there is an $\alpha_i$ such that $I_\alpha T_i=I_{\alpha_i}T_i$ for every $\alpha\geq\alpha_i$.
Let thus $\widetilde{\alpha}:=\max\{\overline{\alpha},\alpha_i: 1\leq i\leq n\}$. For every $\beta\geq\widetilde{\alpha}$, we have $I_\beta T_i=I_{\alpha_i}T_i=I_{\widetilde{\alpha}}T_i$, while, if $T\neq T_i$ for every $i$, then $I_\beta T=T=I_{\widetilde{\alpha}}T$ since $\beta\geq\overline{\alpha}$. Therefore, $I_\beta=\bigcap_{T\in\Theta}I_\beta T=\bigcap_{T\in\Theta}I_{\widetilde{\alpha}}T=I_{\widetilde{\alpha}}$ and the chain $\{I_\alpha\}$ stabilizes. \end{proof}
\begin{cor}\label{cor:trasfnoeth} Let $R$ be a domain and $\Theta$ be a Jaffard family on $R$. If every $T\in\Theta$ is Noetherian, so is $R$. \end{cor} \begin{proof} A domain $A$ is Noetherian if and only if the identity star operation $d^{(A)}$ is Noetherian. If every $T\in\Theta$ is Noetherian, each $d^{(T)}$ is a Noetherian star operation, and thus (by Theorem \ref{teor:jaffard-corresp}) $\rho_\Theta((d^{(T)})_{T\in\Theta})$ is Noetherian. However, by Theorem \ref{teor:star-jaffard}, $\rho_\Theta((d^{(T)})_{T\in\Theta})=d^{(R)}$, and thus $R$ is a Noetherian domain. \end{proof}
\begin{lemma}\label{lemma:IcapRJcapR} Let $R$ be an integral domain and let $T$ be a Jaffard overring of $R$. For all nonzero integral ideals $I,J$ of $T$, \begin{equation*} (I\cap R)(J\cap R)=IJ\cap R. \end{equation*} \end{lemma} \begin{proof} Let $\Theta$ be a Jaffard family containing $T$. Since $\Theta$ is complete, it is enough to show that the two sides become equal when extended to every $U\in\Theta$. We have \begin{equation*} (I\cap R)(J\cap R)U=(IU\cap U)(JU\cap U)=\begin{cases} IJ & \text{~if~}U=T\\ U & \text{~if~}U\neq T \end{cases} \end{equation*} while \begin{equation*} (IJ\cap R)U=IJU\cap U=\begin{cases} IJ & \text{~if~}U=T\\ U & \text{~if~}U\neq T \end{cases} \end{equation*} and thus $(I\cap R)(J\cap R)=IJ\cap R$. \end{proof}
\begin{lemma}\label{lemma:IcapR} Let $R$ be an integral domain, $T$ a Jaffard overring of $R$, and let $I$ be a finitely generated integral ideal of $T$. Then, $I\cap R$ is finitely generated (over $R$). \end{lemma} \begin{proof} Let $S:=\ortog{\Theta}(T)$, where $\Theta$ is a Jaffard family to which $T$ belongs. Then, by Proposition \ref{prop:jaffard:basic}, $(I\cap R)S=IS\cap S=ITS\cap S=S$, and thus there are $i_1,\ldots,i_n\in I\cap R$, $s_1,\ldots,s_n\in S$ such that $1=i_1s_1+\cdots+i_ns_n$; let $I_0:=(i_1,\ldots,i_n)$.
Let $x_1,\ldots,x_m$ be generators of $I$ in $T$. Since $(I\cap R)T=IT=I$, for every $x_i$ there are $j_{1i},\ldots,j_{n_ii}\in I\cap R$, $t_{1i},\ldots,t_{n_ii}\in T$ such that $x_i=j_{1i}t_{1i}+\cdots+j_{n_ii}t_{n_ii}$; let $I_i:=(j_{1i},\ldots,j_{n_ii})$. Then, $J:=I_0+I_1+\cdots+I_m$ is a finitely generated ideal contained in $I\cap R$ (since it is generated by elements of $I\cap R$) such that $(I\cap R)T\subseteq JT$ and $(I\cap R)S\subseteq JS$; moreover, if $U\in\Theta\setminus\{T\}$, then $S\subseteq U$ and so $JU=U\supseteq(I\cap R)U$. By completeness, $I\cap R\subseteq J$. Therefore, $I\cap R=J$ is finitely generated, as claimed. \end{proof}
\begin{prop}\label{prop:jaffard-eab} Let $R$ be an integral domain and let $\Theta$ be a Jaffard family on $R$. A $\ast\in\mathrm{Star}(R)$ is eab (resp., ab) if and only if $\ast_T$ is eab (resp., ab) for every $T\in\Theta$. \end{prop} \begin{proof} $(\Longrightarrow)$. Suppose $(IJ)^{\ast_T}\subseteq(IL)^{\ast_T}$ for some finitely generated ideals $I,J,L$ of $T$ (which we can suppose contained in $T$). Since \begin{equation*} (IJ\cap R)^\ast T=((IJ\cap R)T)^{\ast_T}=(IJ)^{\ast_T} \end{equation*} (and the same happens for $IL$), we have $(IJ\cap R)^\ast T\subseteq (IL\cap R)^\ast T$, and so \begin{equation*} (IJ\cap R)^\ast T\cap R\subseteq (IL\cap R)^\ast T\cap R. \end{equation*} However, both $IJ\cap R$ and $IL\cap R$ survive (among the ideals of $\Theta$) only in $T$, so that \begin{equation*} (IJ\cap R)^\ast T\cap R=(IJ\cap R)^\ast=((I\cap R)(J\cap R))^\ast \end{equation*} by Lemma \ref{lemma:IcapRJcapR}, and thus \begin{equation*} ((I\cap R)(J\cap R))^\ast\subseteq ((I\cap R)(L\cap R))^\ast. \end{equation*} Since $I$ is finitely generated, by Lemma \ref{lemma:IcapR} so is $I\cap R$; the same happens for $J\cap R$ and $L\cap R$. Hence, since $\ast$ is eab, $(J\cap R)^\ast\subseteq(L\cap R)^\ast$, and thus \begin{equation*} J^{\ast_T}=(J\cap R)^\ast T\subseteq(L\cap R)^\ast T=L^{\ast_T}. \end{equation*} Hence, $\ast_T$ is eab.
$(\Longleftarrow)$. Suppose $(IJ)^\ast\subseteq(IL)^\ast$. Then, $(IJ)^\ast T\subseteq(IL)^\ast T$, i.e., $(IJT)^{\ast_T}\subseteq(ILT)^{\ast_T}$ for every $T\in\Theta$. Since $\ast_T$ is eab, this implies that $(JT)^{\ast_T}\subseteq(LT)^{\ast_T}$ for every $T\in\Theta$; since $H^\ast=\bigcap_{T\in\Theta}(HT)^{\ast_T}$, it follows that $J^\ast\subseteq L^\ast$, and $\ast$ is eab.
The same reasoning applies for the ab case. \end{proof}
Following \cite{hhp_m-canonical}, we say that an ideal $A$ is $m$-canonical if $I=(A:(A:I))$ for every fractional ideal $I$ of $R$. The following proposition can be seen as a generalization of \cite[Theorem 6.7]{hhp_m-canonical} to domains that are not necessarily integrally closed. \begin{prop}\label{prop:starloc:mcan}
Let $R$ be a domain. Then, $R$ admits an $m$-canonical ideal if and only if $R$ is $h$-local, $R_M$ admits an $m$-canonical ideal for every $M\in\mathrm{Max}(R)$, and $|\mathrm{Star}(R_M)|\neq 1$ for only a finite number of maximal ideals $M$ of $R$. \end{prop} \begin{proof} Suppose $A$ is $m$-canonical. Then, $R$ is $h$-local by \cite[Proposition 2.4]{hhp_m-canonical}; moreover, if $I$ is an $R_M$-fractional ideal, then $I=JR_M$ for some fractional ideal $J$ of $R$, and thus \begin{equation*} (AR_M:(AR_M:I))=(AR_M:(AR_M:JR_M))=\end{equation*}\begin{equation*} =(AR_M:(A:J)R_M)=(A:(A:J))R_M=JR_M=I \end{equation*}
applying Lemma \ref{lemma:Tcolon} (which is applicable since $R$ being $h$-local implies that $R_M$ is a Jaffard overring of $R$). If $AR_M=R_M$, it follows that $R_M$ is an $m$-canonical ideal of $R_M$, and thus that the $v$-operation on $R_M$ is the identity, or equivalently that $|\mathrm{Star}(R_M)|=1$; hence, if $|\mathrm{Star}(R_M)|\neq 1$, then $AR_M\neq R_M$. But this can happen only for a finite number of $M$, since $R$ is $h$-local and thus of finite character.
Conversely, suppose that the three hypotheses hold. For every $M\in\mathrm{Max}(R)$, let $J_M$ be an $m$-canonical ideal of $R_M$, and define \begin{equation*} I_M:=\begin{cases}
R_M & \text{~if~}|\mathrm{Star}(R_M)|=1\\
J_M & \text{~if~}|\mathrm{Star}(R_M)|>1 \end{cases} \end{equation*}
Note that, if $|\mathrm{Star}(R_M)|=1$, then $R_M$ is $m$-canonical for $R_M$, and thus $I_M$ is $m$-canonical for every $M$.
The ideal $J:=\bigcap_{P\in\mathrm{Max}(R)}I_P$ of $R$ is nonzero, and by Lemma \ref{lemma:intersez-ritorno} $JR_M=I_M$ for every maximal ideal $M$. If $L$ is an ideal of $R$ then, for every maximal ideal $M$, \begin{equation*} (J:(J:L))R_M=(JR_M:(JR_M:LR_M))=(I_M:(I_M:LR_M))=LR_M, \end{equation*} so that \begin{equation*} (J:(J:L))=\bigcap_{M\in\mathrm{Max}(R)}(J:(J:L))R_M=\bigcap_{M\in\mathrm{Max}(R)}LR_M=L. \end{equation*} Therefore, $J$ is an $m$-canonical ideal of $R$. \end{proof}
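As a sanity check for Proposition \ref{prop:starloc:mcan} (purely as an illustration), consider a principal ideal domain $R$: every nonzero fractional ideal has the form $xR$, and \begin{equation*} (R:(R:xR))=(R:x^{-1}R)=xR, \end{equation*} so that $A:=R$ is an $m$-canonical ideal of $R$. Accordingly, $R$ is $h$-local (being a Dedekind domain), every $R_M$ admits the $m$-canonical ideal $R_M$, and $|\mathrm{Star}(R_M)|=1$ for every maximal ideal $M$, in agreement with the proposition.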
\begin{oss} The results in Sections \ref{sect:extension} and \ref{sect:jaff-star} can be generalized in two different directions.
On the one hand, we can consider, instead of star operations, other classes of closure operations, for example semiprime or semistar operations. In both cases, the definitions of extendability and the results in Section \ref{sect:extension} carry over without modifications, noting that the equalities $(I^c:J^c)=(I^c:J)$ and $(I^\ast:J^\ast)=(I^\ast:J)$ hold when $c$ and $\ast$ are, respectively, a semiprime or a semistar operation.
However, the behaviour of these two classes differs when we come to Jaffard families. In one case there is no problem: with the obvious modifications, all results of Section \ref{sect:jaff-star} hold for the set $\mathrm{Sp}(R)$ of semiprime operations. For example, this means that we can analyze the structure of the semiprime operations on a Dedekind domain $D$ almost directly from the structure of $\mathrm{Sp}(V)$, for $V$ a discrete valuation ring, shortening the analysis done in \cite[Section 3]{vassilev_structure_2009}.
The case of semistar operations is much more delicate: indeed, the result corresponding to Theorem \ref{teor:star-jaffard} is \emph{not} true for $\mathrm{SStar}(R)$, meaning that a semistar operation on $R$ may not be extendable to a Jaffard overring $T$ of $R$. For example, let $\ast$ be the semistar operation defined by \begin{equation*} I^\ast=\begin{cases} I & \text{if~}I\in\mathcal{F}(R)\\ K & \text{otherwise}. \end{cases} \end{equation*} If $T\neq R$ is a Jaffard overring of $R$, then it is not a fractional ideal of $R$ (for otherwise $T\cdot\ortog{\Theta}(T)=K$ would imply $\ortog{\Theta}(T)=K$); however, we have $RT=TT$, while \begin{equation*} R^\ast T=T\neq K=T^\ast T. \end{equation*} Hence, $\ast$ is not extendable to $T$. The exact point in which the proof of Theorem \ref{teor:star-jaffard} fails is the possibility of using Lemma \ref{lemma:Tcolon}, because the equality $IT=JT$ does not imply that $(I:J)\neq(0)$. However, if we restrict to finite-type semistar operations, the analogue of Theorem \ref{teor:star-jaffard} does hold: indeed, a proof analogous to the one of Proposition \ref{prop:starloc:basic}\ref{prop:starloc:basic:ft} shows that finite-type operations are extendable, and thus the proof of Theorem \ref{teor:star-jaffard} continues without problems.
A second way of generalizing these results is by considering, beyond the order structure, also a \emph{topological} structure on $\mathrm{Star}(R)$: mimicking the definition of the Zariski topology on $\mathrm{SStar}(R)$ given in \cite{topological-cons}, we can define a topology on $\mathrm{Star}(R)$ by declaring open the sets of the form \begin{equation*} V_I:=\{\ast\in\mathrm{Star}(R)\mid 1\in I^\ast\}, \end{equation*} as $I$ ranges among the fractional ideals of $R$. In particular, Theorem \ref{teor:star-jaffard} can be interpreted at the topological level: if $\Theta$ is a Jaffard family of $R$, then $\lambda_\Theta$ and $\rho_\Theta$ are homeomorphisms between $\mathrm{Star}(R)$ and the space $\prod_{T\in\Theta}\mathrm{Star}(T)$ endowed with the product topology. \end{oss}
\section{Application to Pr\"ufer domains}\label{sect:starloc:prufer} Theorem \ref{teor:star-jaffard} allows one to split the study of the set $\mathrm{Star}(R)$ of star operations on $R$ into the study of the sets $\mathrm{Star}(T)$, as $T$ ranges among the members of a Jaffard family $\Theta$. Obviously, this result isn't quite useful if we don't know how to find Jaffard families, or if studying $\mathrm{Star}(T)$ is as complex as studying $\mathrm{Star}(R)$. The purpose of this section is to show that, in the case of (some classes of) Pr\"ufer domains, we can resolve the first question, and we can at least make some progress on the second, proving more explicit results on $\mathrm{Star}(R)$. We shall employ a method similar to the one used in \cite[Sections 3-5]{hmp_finite}
Let now $R$ be a Pr\"ufer domain with quotient field $K$. We say that two maximal ideals $M,N$ are \emph{dependent} if $R_MR_N\neq K$, or equivalently if $M\cap N$ contains a nonzero prime ideal. Since the spectrum of $R$ is a tree, being dependent is an equivalence relation; we indicate the equivalence classes by $\Delta_\lambda$, as $\lambda$ ranges in $\Lambda$. We also define $T_\lambda:=\bigcap\{R_P\mid P\in\Delta_\lambda\}$. We call the set $\Theta:=\{T_\lambda\mid \lambda\in\Lambda\}$ the \emph{standard decomposition} of $R$.
\begin{lemma}\label{lemma:dimfin} Let $R$ be a finite-dimensional Pr\"ufer domain. Then, $\Delta\subseteq\mathrm{Max}(R)$ is an equivalence class with respect to dependence if and only if $\Delta=V(P)\cap\mathrm{Max}(R)$ for some height-one prime $P$ of $R$. \end{lemma} \begin{proof} Suppose $\Delta=V(P)\cap\mathrm{Max}(R)$. If $M,N\in\Delta$, then $P\subseteq M\cap N$, and thus $M$ and $N$ are dependent; conversely, if $M\in\Delta$, $N\in\mathrm{Max}(R)$ and $Q\subseteq M\cap N$ is a nonzero prime, then $P\subseteq Q\subseteq N$ (since the spectrum of $R$ is a tree and $P$ has height 1), and thus $N\in\Delta$. Hence, $\Delta$ is an equivalence class.
On the other hand, suppose $\Delta=\Delta_\lambda$ for some $\lambda$, and let $M,N\in\Delta$. Since $\mathrm{Spec}(R)$ is a tree and $\dim(R)<\infty$, both $M$ and $N$ contain a unique height-one prime, respectively (say) $P_M$ and $P_N$; if $P_M\neq P_N$, then $M\cap N$ cannot contain a nonzero prime, and thus $M$ and $N$ are not dependent, against the hypothesis $M,N\in\Delta$. Therefore, the height-1 prime contained in the members of $\Delta$ is unique, and $\Delta=V(P)\cap\mathrm{Max}(R)$. \end{proof}
\begin{prop}\label{prop:Tlambda} Let $R$ be a Pr\"ufer domain, and suppose that \begin{enumerate}[(a)] \item\label{prop:Tlambda:noeth} $\mathrm{Max}(R)$ is a Noetherian space; or \item\label{prop:Tlambda:semiloc} $R$ is semilocal. \end{enumerate} Then, the standard decomposition $\Theta$ of $R$ is a Jaffard family of $R$. \end{prop} \begin{proof} Since $R$ is Pr\"ufer, every overring of $R$ is flat \cite[Theorem 1.1.1]{fontana_libro}, and this in particular applies to the $T\in\Theta$.
We claim that, under both hypotheses, if $T=T_\lambda\in\Theta$, then $\mathrm{Spec}(T)=\{PT\mid P\subseteq M\text{~for some~}M\in\Delta_\lambda\}$. Indeed, in both cases every $\Delta_\lambda$ is compact: if $\mathrm{Max}(R)$ is Noetherian this is immediate, while if $R$ is semilocal the classes are finite and thus compact. Hence, writing $\Delta:=\Delta_\lambda$, the semistar operation $\ast_\Delta$ is of finite type \cite[Corollary 4.6]{localizing-semistar}, and $R^{\ast_\Delta}=T$; since the unique finite-type (semi)star operation on a Pr\"ufer domain is the identity (since all finitely generated ideals are invertible), it follows that $\ast_\Delta$ is just the map $I\mapsto IT$, and thus $QT=T$ if $Q$ is not contained in any $M\in\Delta$. Therefore, no prime ideal $P$ of $R$ survives in two different members of $\Theta$; in particular, if $\lambda\neq\mu$, then no prime ideal of $R$ survives in $T_\lambda T_\mu$, and hence $T_\lambda T_\mu=K$.
We need to show that $\Theta$ is locally finite. If $R$ is semilocal then $\Theta$ is finite, and in particular locally finite; suppose $\mathrm{Max}(R)$ is Noetherian. For every $x\in R$, $x\neq 0$, the ideal $xR$ has only a finite number of minimal primes (this follows, for example, from the proof of \cite[Chapter 4, Corollary 3, p.102]{bourbaki_ac} or \cite[Chapter 6, Exercises 5 and 7]{atiyah}); in particular, since each prime survives in only one $T\in\Theta$, the family $\Theta$ is of finite character.
Hence, in both cases $\Theta$ is a Jaffard family by Proposition \ref{prop:caratt-jaffard}. \end{proof}
\begin{oss}\label{oss:prufjaff} ~\\ \begin{enumerate} \item If $R$ is a Pr\"ufer domain that is both of finite character and finite-dimensional, then $\mathrm{Spec}(R)$ (and so $\mathrm{Max}(R)$) is Noetherian. Indeed, if $I$ is a nonzero radical ideal of $R$, then $V(I)$ is finite, and thus every ascending chain of radical ideals must stop; by \cite[Chapter 6, Exercise 5]{atiyah}, this implies that $\mathrm{Spec}(R)$ is a Noetherian space. \item\label{oss:prufjaff:finerJaff} The standard decomposition $\Theta$ of $R$ is the ``finest'' Jaffard family of $R$, in the sense that the partition of $\mathrm{Max}(R)$ determined by $\Theta$ (see Remark \ref{oss:Matlis}) is the finest partition that can be induced by a Jaffard family; this follows exactly from the definition of the dependence relation. \item\label{oss:prufjaff:nonminimalbranch} In general, the standard decomposition of $R$ need not be a Jaffard family of $R$. For example, let $R$ be an almost Dedekind domain which is not Dedekind. Since $R$ is one-dimensional, no two maximal ideals are dependent, and thus each $T_\lambda$ has the form $R_M$ for some maximal ideal $M$. However, $\Theta$ is not a Jaffard family, since it is not locally finite (if it were, $R$ would be a Dedekind domain). Indeed, Example \ref{ex:ad} shows that not every star operation is extendable to every $R_M$. \end{enumerate} \end{oss}
\subsection{Cutting the branch} Let $R$ be a finite-dimensional Pr\"ufer domain whose standard decomposition $\Theta$ is a Jaffard family. By Lemma \ref{lemma:dimfin}, every $T\in\Theta$ will have a nonzero prime ideal $P$ contained in all its maximal ideals; moreover, by Remark \ref{oss:prufjaff}(\ref{oss:prufjaff:finerJaff}), $T$ does not admit a further decomposition. On the other hand, it may be possible that $T/P$ has a nontrivial standard decomposition that is still a Jaffard family; thus, if we could relate $\mathrm{Star}(T)$ with $\mathrm{Star}(T/P)$, we could (in principle) simplify the study of $\mathrm{Star}(T)$.
\begin{lemma}\label{lemma:prufer-jac} Let $R$ be a Pr\"ufer domain whose Jacobson radical $\mathrm{Jac}(R)$ contains a nonzero prime ideal. Then, there is a prime ideal $Q\subseteq\mathrm{Jac}(R)$ such that $\mathrm{Jac}(R/Q)$ does not contain nonzero prime ideals. \end{lemma} \begin{proof} Let $\Delta:=\{P\in\mathrm{Spec}(R)\mid P\subseteq\mathrm{Jac}(R)\}$. By hypothesis, $\Delta$ contains nonzero prime ideals. Let $Q:=\bigcup_{P\in\Delta}P$.
Since $R$ is treed, $\Delta$ is a chain; hence, $Q$ is itself a prime ideal, and it is contained in every maximal ideal of $R$. Suppose $\mathrm{Jac}(R/Q)$ contains a nonzero prime ideal $\overline{Q}$. Then, $\overline{Q}=Q'/Q$ for some prime ideal $Q'$ of $R$, and $Q'$ is contained in every maximal ideal of $R$. It follows that $Q\subsetneq Q'\subseteq\mathrm{Jac}(R)$, against the construction of $Q$. \end{proof}
Suppose now that $R$ is a Pr\"ufer domain with quotient field $K$, and suppose there is a nonzero prime ideal $P$ contained in every maximal ideal of $R$. Then, we have a quotient map $\phi:R_P\longrightarrow R_P/PR_P=k$ that, for every star operation $\ast$ on $R$, induces a semistar operation $\ast_\phi$ on $D:=R/P$ defined by \begin{equation*} I^{\ast_\phi}:=\phi\left(\phi^{-1}(I)^\ast\right), \end{equation*} such that $D^{\ast_\phi}=D$. Conversely, if $\sharp$ is a star operation on $D$, then we can construct a star operation $\sharp^\phi$ on $R$: indeed, if $I$ is a fractional ideal of $R$, then $I$ is either divisorial (and so we define $I^{\sharp^\phi}:=I$) or there is an $\alpha\in K$ such that $R\subseteq \alpha I\subseteq R_P$ \cite[Proposition 2.2(5)]{hmp_finite}: in the latter case, we define \begin{equation*} I^{\sharp^\phi}:=\alpha^{-1}\phi^{-1}\left(\phi(\alpha I)^\sharp\right). \end{equation*}
\begin{prop}\label{prop:star-semistar} Let $R,P,D,\phi$ be as above. Then, the maps \begin{equation*} \begin{aligned} \mathrm{Star}(R) & \longrightarrow \mathrm{(S)Star}(R/P)\\ \ast & \longmapsto \ast_\phi \end{aligned} \quad\text{~and~}\quad \begin{aligned} \mathrm{(S)Star}(R/P) & \longrightarrow \mathrm{Star}(R)\\ \ast & \longmapsto \ast^\phi \end{aligned} \end{equation*} are well-defined order-preserving bijections. \end{prop} \begin{proof} The fact that they are well-defined bijections follows from \cite[Lemmas 2.3 and 2.4]{hmp_finite}; the fact that they are order-preserving is immediate from the definitions. \end{proof}
\subsection{$h$-local Pr\"ufer domains}\label{sect:hloc-prufer} If $R$ is both a Pr\"ufer domain and a $h$-local domain, then its standard decomposition $\Theta:=\{R_M\mid M\in\mathrm{Max}(R)\}$ is composed by valuation domains, and star operations behave particularly well. We start by re-proving \cite[Theorem 3.1]{twostar} using our general theory. \begin{prop}\label{prop:hloc-prufer}
Let $R$ be an $h$-local Pr\"ufer domain, and let $\mathcal{M}$ be the set of nondivisorial maximal ideals of $R$. Then, $|\mathrm{Star}(R)|=2^{|\mathcal{M}|}$. \end{prop} \begin{proof}
By Theorem \ref{teor:star-jaffard}, there is an order-preserving bijection between $\mathrm{Star}(R)$ and $\prod\{\mathrm{Star}(R_M)\mid M\in\mathrm{Max}(R)\}$, and a maximal ideal $M$ is divisorial (in $R$) if and only if $MR_M$ is divisorial (in $R_M$). Since $R_M$ is a valuation domain, $|\mathrm{Star}(R_M)|$ is equal to 1 if $MR_M$ is divisorial, and to 2 if $MR_M$ is not; the claim follows. \end{proof}
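For example (purely as an illustration of Proposition \ref{prop:hloc-prufer}): if $R$ is a Dedekind domain, every maximal ideal is invertible, hence divisorial, so that $\mathcal{M}=\emptyset$ and \begin{equation*} |\mathrm{Star}(R)|=2^{|\mathcal{M}|}=2^0=1; \end{equation*} if instead exactly one maximal ideal of an $h$-local Pr\"ufer domain $R$ is nondivisorial, then $|\mathrm{Star}(R)|=2$, and since $d\leq\ast\leq v$ for every $\ast\in\mathrm{Star}(R)$, this forces $\mathrm{Star}(R)=\{d,v\}$.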
It is noted in the proof of \cite[Theorem 3.10]{olberding_globalizing} that, if $R$ is an $h$-local Pr\"ufer domain and $I,J$ are divisorial ideals of $R$, then $I+J$ is also divisorial. We can extend this result to arbitrary star operations; we shall see a similar result in Proposition \ref{prop:pruf-somma-invt}. \begin{prop}\label{prop:hloc-pruf-somma} Let $R$ be an $h$-local Pr\"ufer domain, let $\ast\in\mathrm{Star}(R)$ and let $I,J$ be $\ast$-closed ideals. Then, $I+J$ is $\ast$-closed. \end{prop} \begin{proof} Since $R$ is $h$-local, $I+J$ is $\ast$-closed if and only if $(I+J)R_M$ is $\ast_M$-closed for every $M\in\mathrm{Max}(R)$. However, since $R_M$ is a valuation domain, either $IR_M\subseteq JR_M$ or $JR_M\subseteq IR_M$; hence, $(I+J)R_M=IR_M+JR_M$ is equal either to $IR_M$ or to $JR_M$, both of which are $\ast_M$-closed. \end{proof}
This result does not hold if we drop the hypothesis that $R$ is $h$-local: in fact, let $R=\ins{Z}+X\ins{Q}[[X]]$ and let $R_p:=\ins{Z}[1/p]+X\ins{Q}[[X]]$ for each prime number $p$. Consider the star operation \begin{equation*} \ast:I\mapsto(R:(R:I))\cap(R_2:(R_2:I))\cap(R_3:(R_3:I)). \end{equation*} Then, $R_2$ and $R_3$ are $\ast$-closed; we claim that $R_2+R_3$ is not. Indeed, if $T$ is equal to $R$, $R_2$ or $R_3$, then $(T:(R_2+R_3))=X\ins{Q}[[X]]$, and thus $(R_2+R_3)^\ast=\ins{Q}[[X]]$; however, $R_2+R_3=(\ins{Z}[1/2]+\ins{Z}[1/3])+X\ins{Q}[[X]]$ contains no rational number whose denominator is divisible by a prime different from 2 and 3 (for example, $1/5\notin R_2+R_3$), and thus $R_2+R_3\neq\ins{Q}[[X]]$.
The following can be seen as a sort of converse to Proposition \ref{prop:hloc-pruf-somma}. \begin{prop}\label{prop:sommaintersez-prufer} Let $R$ be a Pr\"ufer domain and suppose that $R$ is either: \begin{enumerate}[(a)] \item semilocal; or \item locally finite and finite-dimensional. \end{enumerate} Then, the following are equivalent: \begin{enumerate}[(i)] \item $R$ is $h$-local; \item for every $\ast\in\mathrm{Star}(R)$, $I\in\mathcal{F}(R)\setminus\mathcal{F}^\ast(R)$ and $J\in\mathcal{F}(R)$, at least one of $I\cap J$ and $I+J$ is not $\ast$-closed; \item for every $I\in\mathcal{F}(R)\setminus\mathcal{F}^v(R)$ and $J\in\mathcal{F}(R)$, at least one of $I\cap J$ and $I+J$ is not divisorial. \end{enumerate} \end{prop} \begin{proof} (i $\Longrightarrow$ ii) For every $M\in\mathrm{Max}(R)$, $(I+J)R_M=IR_M+JR_M=\max\{IR_M,JR_M\}$, while $(I\cap J)R_M=IR_M\cap JR_M=\min\{IR_M,JR_M\}$. Since $I$ is not $\ast$-closed, and $\{R_M\mid M\in\mathrm{Max}(R)\}$ is a Jaffard family of $R$, there is a maximal ideal $N$ such that $IR_N$ is not $\ast_N$-closed; however, at least one of $(I+J)R_N$ and $(I\cap J)R_N$ is equal to $IR_N$, and thus at least one is not $\ast_N$-closed. Therefore, at least one of $I+J$ and $I\cap J$ is not $\ast$-closed.
(ii $\Longrightarrow$ iii) is obvious.
(iii $\Longrightarrow$ i) Consider the standard decomposition $\Theta$ of $R$; then, (iii) holds for every member of $\Theta$ but, if $R$ is not $h$-local, there must be a $T\in\Theta$ that is not local. By Lemma \ref{lemma:prufer-jac}, there is a prime ideal $P$ of $T$ such that $\mathrm{Jac}(T/P)$ does not contain nonzero primes. Let $\Lambda$ be the standard decomposition of $D:=T/P$, let $Z\in\Lambda$ and define $Z':=\bigcap_{W\in\Lambda\setminus\{Z\}}W=\ortog{\Lambda}(Z)$. We have $Z\cap Z'=D$ and, for every maximal ideal $M$ of $D$, either $ZD_M=K$ or $Z'D_M=K$. Therefore, $Z+Z'=\bigcap_{M\in\mathrm{Max}(D)}(Z+Z')D_M=K$.
By Proposition \ref{prop:star-semistar}, the $v$-operation on $T$ corresponds to a (semi)star operation on $D$ such that $A^\ast=K$ if $A$ is not a fractional ideal of $D$; therefore, neither $\phi^{-1}(Z)$ nor $\phi^{-1}(Z')$ is divisorial, but both $\phi^{-1}(Z\cap Z')=T$ and $\phi^{-1}(Z+Z')=T_P$ are (where $\phi:T\longrightarrow D$ is the quotient map). This is a contradiction, and $R$ must be $h$-local. \end{proof}
\subsection{Stability} Recall that a star operation $\ast$ is \emph{stable} if it distributes over finite intersections, i.e., if $(I\cap J)^\ast=I^\ast\cap J^\ast$. In this section, we study stable operations on Pr\"ufer domains; we start with an analogue of Proposition \ref{prop:star-semistar}. \begin{prop}\label{prop:star-semistar_stab} Preserve the notation and the hypotheses of Proposition \ref{prop:star-semistar}. There is a bijection between $\insstar_{st}(R)$ and $\insstar_{st}(R/P)$. \end{prop} \begin{proof} We first show that the bijections of Proposition \ref{prop:star-semistar} become bijections on the subsets of stable operations; thus, let $\ast$ be a semistar operation in the first set and let $\sharp$ be the corresponding operation on $\mathrm{(S)Star}(R/P)$. Let $\phi:R\longrightarrow R/P$ be the quotient map.
Suppose that $\ast$ is stable and let $I,J\in\mathbf{F}(R/P)$. Then, since $\phi$ induces a bijection between the ideals contained between $P$ and $R_P$ and $\mathbf{F}(R/P)$, \begin{equation*} \begin{array}{rcl} (I\cap J)^\sharp & = & \phi\left[\phi^{-1}(I\cap J)^\ast\right]= \phi\left[\left(\phi^{-1}(I)\cap\phi^{-1}(J)\right)^\ast\right]=\\ & = & \phi\left[\phi^{-1}(I)^\ast\cap\phi^{-1}(J)^\ast\right]= \phi\left(\phi^{-1}(I)^\ast\right)\cap\phi\left(\phi^{-1}(J)^\ast\right)=\\ & = & I^\sharp\cap J^\sharp. \end{array} \end{equation*} Therefore, $\sharp$ is stable.
Conversely, suppose $\sharp$ is stable and let $I,J\in\mathcal{F}(R)$. If $I$ and $J$ are divisorial, so is $I\cap J$; hence, $(I\cap J)^\ast=I\cap J=I^\ast\cap J^\ast$. Suppose (without loss of generality) that $I\neq I^v$. Then, there is an $\alpha$ such that $P\subseteq \alpha I\subseteq R_P$. Moreover, since $R$ is Pr\"ufer and $P$ is contained in every maximal ideal of $R$, every fractional ideal must be comparable with both $P$ and $R_P$: more precisely, if $\mathbf{v}$ is the valuation relative to $R_P$, and $L$ is an ideal, then either $\inf\mathbf{v}(L)=0$ (so that $P\subseteq L\subseteq R_P$), $\inf\mathbf{v}(L)$ exists and is nonzero (if positive, $L\subseteq P$; if negative, $R_P\subseteq L$), or $\mathbf{v}(L)$ has no infimum (so that, if $\mathbf{v}(L)$ contains negative values, then $R_P\subseteq L$, while $L\subseteq P$ in the other case). Therefore, we can distinguish three cases: \begin{itemize} \item $\alpha J\subseteq P$: then, $\alpha J\subseteq \alpha I$, and thus $(I\cap J)^\ast=J^\ast=I^\ast\cap J^\ast$; \item $R_P\subseteq\alpha J$: then, $\alpha I\subseteq\alpha J$, and thus $(I\cap J)^\ast=I^\ast=I^\ast\cap J^\ast$;
\item $P\subseteq\alpha J\subseteq R_P$. Let $I_0:=\alpha I$ and $J_0:=\alpha J$. Then, \begin{equation*} \begin{array}{rcl} (I_0\cap J_0)^\ast & = &\phi^{-1}\left(\phi(I_0\cap J_0)^\sharp\right)= \phi^{-1}\left(\phi(I_0)^\sharp\cap\phi(J_0)^\sharp\right)=\\ & = & \phi^{-1}(\phi(I_0)^\sharp)\cap\phi^{-1}(\phi(J_0)^\sharp)=I_0^\ast\cap J_0^\ast. \end{array} \end{equation*} Hence, \begin{equation*} \begin{array}{rcl} (I\cap J)^\ast & = & \alpha^{-1}(\alpha(I\cap J)^\ast)=\alpha^{-1}(I_0\cap J_0)^\ast=\\ & = & \alpha^{-1}(I_0^\ast\cap J_0^\ast)=\alpha^{-1}I_0^\ast\cap\alpha^{-1}J_0^\ast=I^\ast\cap J^\ast. \end{array} \end{equation*} \end{itemize} In all cases, $\ast$ distributes over finite intersections, and thus $\ast$ is stable.
Therefore, there is an order-preserving bijection between $\insstar_{st}(R)$ and $\mathrm{(S)Star}_{\mathrm{st}}(R/P)$. However, for every domain $D$, the restriction map $\mathrm{(S)Star}_{\mathrm{st}}(D)\longrightarrow\insstar_{st}(D)$ is a bijection (see \cite[Discussion after Proposition 3.10]{surveygraz} or \cite[Proposition 3.4]{spettrali-eab}), and thus $\insstar_{st}(R)$ corresponds bijectively with $\insstar_{st}(R/P)$. The claim follows. \end{proof}
We say that a star (or semistar) operation $\ast$ \emph{distributes over arbitrary intersections} if, whenever $\{I_\alpha\}_{\alpha\in A}$ is a family of ideals with nonzero intersection, we have $\left(\bigcap_{\alpha\in A}I_\alpha\right)^\ast=\bigcap_{\alpha\in A}I_\alpha^\ast$.
\begin{lemma}\label{lemma:intersez-valuation} If $V$ is a valuation domain, the $v$-operation distributes over arbitrary intersections. \end{lemma} \begin{proof} Let $\mathcal{A}:=\{I_\alpha\}_{\alpha\in A}$ be a family of ideals of $V$ with nonzero intersection. If $\mathcal{A}$ has a minimum $I_{\overline{\alpha}}$, then $I_{\overline{\alpha}}^v\subseteq I_\beta^v$ for every $\beta\in A$, and thus $\left(\bigcap_{\alpha\in A}I_\alpha\right)^v=I_{\overline{\alpha}}^v=\bigcap_{\alpha\in A}I_\alpha^v$.
Suppose $\mathcal{A}$ does not have a minimum: since $\left(\bigcap_{\alpha\in A}I_\alpha\right)^v\subseteq I_\alpha^v$ for every $\alpha\in A$, we have $\left(\bigcap_{\alpha\in A}I_\alpha\right)^v\subseteq\bigcap_{\alpha\in A}I_\alpha^v$.
Let $x\in\bigcap_{\alpha\in A}I_\alpha^v$: if $x\in\bigcap_{\alpha\in A}I_\alpha$ then $x\in\left(\bigcap_{\alpha\in A}I_\alpha\right)^v$. On the other hand, if $x\notin\bigcap_{\alpha\in A}I_\alpha$, then there is an $\overline{\alpha}$ such that $x\in I_{\overline{\alpha}}^v\setminus I_{\overline{\alpha}}$, i.e., $\mathbf{v}(x)=\inf \mathbf{v}(I_{\overline{\alpha}})$ (where $\mathbf{v}$ is the valuation associated to $V$ and $\mathbf{v}(J):=\{\mathbf{v}(j)\mid j\in J\}$). However, since $\mathcal{A}$ has no minimum, there are $\beta,\gamma\in A$ such that $I_{\overline{\alpha}}\supsetneq I_\beta\supsetneq I_\gamma$; picking $y\in I_\beta\setminus I_\gamma$, we have $\mathbf{v}(y)>\mathbf{v}(x)$ and $I_\gamma\subseteq yV$, so that $I_\gamma^v\subseteq yV$ and thus $x\notin I_\gamma^v$, which is absurd. Therefore, $x\in\bigcap_{\alpha\in A}I_\alpha$. \end{proof}
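A concrete instance of the lemma (with an assumed value group, for illustration only): let $V$ be a valuation domain whose valuation $\mathbf{v}$ is onto $\ins{R}$, and consider the decreasing family without minimum
\begin{equation*}
I_n:=\{x\in V\mid \mathbf{v}(x)\geq 1-\tfrac{1}{n}\},\qquad n\geq 1.
\end{equation*}
Each $I_n$ is divisorial (its infimum is attained), and
\begin{equation*}
\Bigl(\bigcap_{n\geq 1} I_n\Bigr)^{\!v}=\{x\in V\mid \mathbf{v}(x)\geq 1\}=\bigcap_{n\geq 1} I_n^v,
\end{equation*}
in accordance with Lemma \ref{lemma:intersez-valuation}.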
The following proposition may also be proved, in a slightly more general setting, using a different, more direct approach; see \cite{stable_prufer}. \begin{prop}\label{prop:prufer-stab} Let $R$ be a Pr\"ufer domain and suppose that $R$ is either: \begin{enumerate}[(a)] \item semilocal; or \item locally finite and finite-dimensional. \end{enumerate} Then, every stable star operation $\ast$ on $R$ is of the form \begin{equation}\label{eq:stablepruf} I\mapsto\bigcap_{P\in\mathrm{Max}(R)}(IR_P)^{\ast^{(P)}}, \end{equation} where each $\ast^{(P)}\in\mathrm{Star}(R_P)$. In particular, $\insstar_{st}(R)$ is order-isomorphic to $\prod\{\mathrm{Star}(R_P)\mid P\in\mathrm{Max}(R)\}$. \end{prop} \begin{proof} For any ring $A$, let $\mathcal{M}_A$ be the set of maximal ideals of $A$ that are not divisorial.
Suppose first that $R$ is semilocal, and let $\Delta$ be the set of star operations defined as in \eqref{eq:stablepruf}. By Lemma \ref{lemma:intersez-valuation}, every star operation in $\Delta$ is stable; moreover, a maximal ideal $P$ is $\ast$-closed if and only if $\ast^{(P)}$ is the identity, and thus $|\Delta|=2^{|\mathcal{M}_R|}$. Since $\mathrm{Star}(R)$ is finite \cite[Theorem 5.3]{hmp_finite}, it is enough to show that the cardinalities of $\Delta$ and $\insstar_{st}(R)$ are equal.
We proceed by induction on $n:=|\mathrm{Max}(R)|$; if $n=1$ the claim follows from Lemma \ref{lemma:intersez-valuation}. Suppose it holds up to $n-1$.
Let $\Theta$ be the standard decomposition of $R$. If $\Theta$ is not trivial, then by the inductive hypothesis the claim holds for every member of $\Theta$; by Theorem \ref{teor:star-jaffard}, $M\in\mathrm{Max}(R)$ is divisorial over $R$ if and only if $MT$ is divisorial over $T$ (where $T\in\Theta$ is such that $MT\neq T$), and thus $|\mathcal{M}_R|=\sum_{T\in\Theta}|\mathcal{M}_T|$. Since, by Theorem \ref{teor:jaffard-corresp}, we have $\insstar_{st}(R)\simeq\prod\{\insstar_{st}(T)\mid T\in\Theta\}$, it follows that the claim holds also for $R$.
Suppose $\Theta$ is trivial: then, $\mathrm{Jac}(R)$ must contain a nonzero prime ideal $P$ (and, by Lemma \ref{lemma:prufer-jac}, we can suppose $P$ is maximal with these properties). By Proposition \ref{prop:star-semistar_stab}, $|\insstar_{st}(R)|=|\insstar_{st}(R/P)|$; moreover, by Proposition \ref{prop:star-semistar}, $\mathcal{M}_R$ and $\mathcal{M}_{R/P}$ have the same cardinality. By the maximality of $P$, $R/P$ has a nontrivial standard decomposition; by induction, the claim holds for every member of the decomposition, and thus, with the same reasoning as above, we see that $|\insstar_{st}(R/P)|=2^{|\mathcal{M}_{R/P}|}$. Putting everything together, we have $|\insstar_{st}(R)|=2^{|\mathcal{M}_R|}$, and so $\insstar_{st}(R)=\Delta$ for every semilocal Pr\"ufer domain.
If $R$ is locally finite and finite-dimensional, then $\insstar_{st}(R)=\prod\{\insstar_{st}(T)\mid T\in\Theta\}$, where $\Theta$ is the standard decomposition of $R$. Each $T\in\Theta$ is semilocal, and thus we can apply the previous part of the proof; the claim follows. \end{proof}
\begin{prop}\label{prop:intersez-prufer} Let $R$ be a Pr\"ufer domain and suppose that $R$ is either: \begin{enumerate}[(a)] \item semilocal; or \item locally finite and finite-dimensional. \end{enumerate} Then, the following are equivalent: \begin{enumerate}[(i)] \item $R$ is $h$-local; \item every star operation on $R$ distributes over arbitrary intersections; \item every star operation on $R$ distributes over finite intersections; \item the $v$-operation on $R$ distributes over arbitrary intersections; \item the $v$-operation on $R$ distributes over finite intersections; \item for every fractional ideal $I$ of $R$, $I^v=\bigcap\{(IR_M)^{v^{(R_M)}}\mid M\in\mathrm{Max}(R)\}$. \end{enumerate} \end{prop} \begin{proof} (i $\Longrightarrow$ ii) follows from Theorem \ref{teor:star-jaffard}, Lemma \ref{lemma:intersez-ritorno} and Lemma \ref{lemma:intersez-valuation}, since $\{R_M\mid M\in\mathrm{Max}(R)\}$ is a Jaffard family if $R$ is $h$-local. (ii $\Longrightarrow$ iii $\Longrightarrow$ v) and (ii $\Longrightarrow$ iv $\Longrightarrow$ v) are clear, while (v $\iff$ vi) follows from Proposition \ref{prop:prufer-stab}; we only have to show that (v $\Longrightarrow$ i).
Suppose (v) holds and let $\Theta$ be the standard decomposition of $R$. If $R$ is not $h$-local, then a branch $T\in\Theta$ is not local; the hypotheses on $R$ guarantee that there is a nonzero prime ideal of $T$ contained in every maximal ideal. Therefore, we can apply Lemma \ref{lemma:prufer-jac} and find a prime ideal $Q$ such that $\mathrm{Jac}(T/Q)$ contains no nonzero prime ideals. By Proposition \ref{prop:star-semistar}, there is an order-preserving bijection between $\mathrm{Star}(T)$ and $\mathrm{(S)Star}(T/Q)$, where the $v$-operation on $T$ corresponds to the semistar operation $\ast$ which is the trivial extension of the $v$-operation on $T/Q$.
Since $\mathrm{Jac}(T/Q)$ does not contain nonzero primes, $T/Q$ admits a nontrivial Jaffard family $\Lambda$; let $Z\in\Lambda$, and define $Z':=\bigcap_{W\in\Lambda\setminus\{Z\}}W=\ortog{\Lambda}(Z)$. Then, $Z$ and $Z'$ are not fractional ideals of $T/Q$, and thus $Z^\ast=Z'^\ast=F$, where $F$ is the quotient field of $T/Q$; on the other hand, $Z\cap Z'=T/Q$ and thus $(Z\cap Z')^\ast=T/Q$.
If $\pi:T_Q\longrightarrow T_Q/QT_Q$ is the canonical quotient, it follows that $\pi^{-1}(Z)^v=\pi^{-1}(Z')^v=T_Q$, while $\pi^{-1}(Z\cap Z')^v=\pi^{-1}(T/Q)^v=T^v=T$. Since $T$ is not local, $T\neq T_Q$, and thus $v$ does not distribute over finite intersections, contradicting the hypothesis. \end{proof}
\section{The class group}\label{sect:jaff-InvCl} Let $\ast$ be a star operation on $R$. An ideal $I$ of $R$ is \emph{$\ast$-invertible} if $(I(R:I))^\ast=R$; the set of $\ast$-invertible $\ast$-ideals, denoted by $\mathrm{Inv}^\ast(R)$, is a group under the natural ``$\ast$-product'' $(I,J)\mapsto(IJ)^\ast$ \cite{jaffard_systeme,griffin_vmultiplication_1967,zafrullah_tinvt,halterkoch_libro}. Any $\ast$-invertible $\ast$-ideal is divisorial \cite[Theorem 1.1 and Observation C]{zafrullah_tinvt} and, if $\ast_1\leq\ast_2$, there is a natural inclusion $\mathrm{Inv}^{\ast_1}(R)\subseteq\mathrm{Inv}^{\ast_2}(R)$. \begin{prop}\label{prop:jaffard-invt} Let $R$ be an integral domain and $\Theta$ be a Jaffard family on $R$. The map \begin{equation*} \begin{aligned} \Gamma\colon \mathrm{Inv}^\ast(R) & \longrightarrow \bigoplus_{T\in\Theta}\mathrm{Inv}^{\ast_T}(T)\\ I & \longmapsto (IT)_{T\in\Theta} \end{aligned} \end{equation*} is well-defined and a group isomorphism. \end{prop} \begin{proof} Define a map \begin{equation*} \begin{aligned} \widehat{\Gamma}\colon \mathcal{F}(R) & \longrightarrow \prod_{T\in\Theta}\mathcal{F}(T)\\ I & \longmapsto (IT)_{T\in\Theta} \end{aligned} \end{equation*} For every $\ast$-ideal $I$, $\widehat{\Gamma}(I)=(IT)$ is a sequence such that $IT$ is $\ast_T$-closed. Moreover, if $I$ is $\ast$-invertible, then $(I(R:I))^\ast=R$ and thus $(I(R:I)T)^{\ast_T}=T$, so that $IT$ is $\ast_T$-invertible. Thus $\widehat{\Gamma}(\mathrm{Inv}^\ast(R))\subseteq\prod_{T\in\Theta}\mathrm{Inv}^{\ast_T}(T)$, and indeed $\widehat{\Gamma}(\mathrm{Inv}^\ast(R))\subseteq\bigoplus_{T\in\Theta}\mathrm{Inv}^{\ast_T}(T)$ since $\Theta$ is locally finite, by Theorem \ref{teor:star-jaffard}. Hence, $\Gamma$ is well-defined, since it is the restriction of $\widehat{\Gamma}$ to $\mathrm{Inv}^\ast(R)$.
It is straightforward to verify that $\Gamma$ is a group homomorphism, and since $I=\bigcap_{T\in\Theta}IT$, we have that $\Gamma$ (or even $\widehat{\Gamma}$) is injective.
We need only to show that $\Gamma$ is surjective. Let $(I_T)\in\bigoplus_{T\in\Theta}\mathrm{Inv}^{\ast_T}(T)$, and define $I:=\bigcap I_T$. Since $I_T=T$ for all but a finite number of elements of $\Theta$, say $T_1,\ldots,T_n$, there are $d_1,\ldots,d_n\in R$ such that $d_iI_{T_i}\subseteq T_i$. Defining $d:=d_1\cdots d_n$, we have $dI_T\subseteq T$ for every $T$, and thus $dI\subseteq\bigcap_{T\in\Theta}T=R$, so that $I$ is indeed a fractional ideal of $R$. Moreover, since $I_T$ is $\ast_T$-closed, $I_T\cap R$ is $\ast$-closed, and thus $I$, being the intersection of a family of $\ast$-closed ideals, is $\ast$-closed. It is also $\ast$-invertible, since \begin{equation*} (I(R:I))^\ast=\bigcap_{T\in\Theta}(I(R:I)T)^{\ast_T}=\bigcap_{T\in\Theta}(IT(T:IT))^{\ast_T}=\bigcap_{T\in\Theta}T=R. \end{equation*} Therefore, $(I_T)=\Gamma(I)\in\Gamma(\mathrm{Inv}^\ast(R))$, and thus $\Gamma$ is an isomorphism. \end{proof}
The set of nonzero principal fractional ideals forms a subgroup of $\mathrm{Inv}^\ast(R)$, denoted by $\mathrm{Prin}(R)$. The quotient between $\mathrm{Inv}^\ast(R)$ and $\mathrm{Prin}(R)$ is called the \emph{$\ast$-class group} of $R$ \cite{anderson_generalCG_1988}, and it is denoted by $\mathrm{Cl}^\ast(R)$. If $\ast_1\leq\ast_2$, there is an injective homomorphism $\mathrm{Cl}^{\ast_1}(R)\longrightarrow\mathrm{Cl}^{\ast_2}(R)$. Of particular interest are the class group of the identity star operation (usually called the \emph{Picard group} of $R$, denoted by $\mathrm{Pic}(R)$) and the $t$-class group, which is linked to the factorization properties of the domain (see for example \cite{samuel_factoriel,bouvier_zaf_1988,zafrullah_tinvt}). The quotient between $\mathrm{Cl}^\ast(R)$ and $\mathrm{Pic}(R)$ is called the \emph{$\ast$-local class group} of $R$, and it is indicated by $G_\ast(R)$ \cite{anderson_generalCG_1988}. \begin{teor}\label{teor:jaffard-clgroup} Let $R$ be an integral domain and let $\Theta$ be a Jaffard family on $R$. Then, the map \begin{equation*} \begin{aligned} \Lambda\colon G_\ast(R) & \longrightarrow \bigoplus_{T\in\Theta}G_{\ast_T}(T)\\ [I] & \longmapsto ([IT])_{T\in\Theta} \end{aligned} \end{equation*} is well-defined and a group isomorphism. \end{teor} \begin{proof} By Proposition \ref{prop:jaffard-invt}, there are two isomorphisms $\Gamma^\ast:\mathrm{Inv}^\ast(R)\longrightarrow \bigoplus_{T\in\Theta}\mathrm{Inv}^{\ast_T}(T)$ and $\Gamma^d:\mathrm{Inv}^d(R)\longrightarrow \bigoplus_{T\in\Theta}\mathrm{Inv}^{d_T}(T)$.
Consider the chain of maps \begin{equation*} \mathrm{Inv}^\ast(R)\xrightarrow{\Gamma^\ast}\bigoplus_{T\in\Theta}\mathrm{Inv}^{\ast_T}(T) \xrightarrow{\pi}\bigoplus_{T\in\Theta}\frac{\mathrm{Inv}^{\ast_T}(T)}{\mathrm{Inv}^{d_T}(T)} \end{equation*} where $\pi$ is the componentwise quotient; then, the kernel of $\pi$ is exactly $\bigoplus_{T\in\Theta}\mathrm{Inv}^{d_T}(T)$. However, $\Gamma^\ast$ and $\Gamma^d$ coincide on $\mathrm{Inv}^d(R)\subseteq\mathrm{Inv}^\ast(R)$; hence, \begin{equation*} \ker(\pi\circ\Gamma^\ast)=(\Gamma^\ast)^{-1}(\ker\pi)=\mathrm{Inv}^d(R). \end{equation*} Therefore, there is an isomorphism $\displaystyle{\frac{\mathrm{Inv}^\ast(R)}{\mathrm{Inv}^d(R)}\simeq\bigoplus_{T\in\Theta}\frac{\mathrm{Inv}^{\ast_T}(T)}{\mathrm{Inv}^{d_T}(T)}}$. However, for an arbitrary domain $A$ and an arbitrary $\sharp\in\mathrm{Star}(A)$, we have $\mathrm{Prin}(A)\subseteq\mathrm{Inv}^d(A)\subseteq\mathrm{Inv}^\sharp(A)$, and thus \begin{equation*} \frac{\mathrm{Inv}^\sharp(A)}{\mathrm{Inv}^d(A)}\simeq\frac{\mathrm{Inv}^\sharp(A)/\mathrm{Prin}(A)}{\mathrm{Inv}^d(A)/\mathrm{Prin}(A)}\simeq\frac{\mathrm{Cl}^\sharp(A)}{\mathrm{Pic}(A)}=G_\sharp(A) \end{equation*} so that $\Lambda$ becomes an isomorphism between $G_\ast(R)$ and $\bigoplus_{T\in\Theta}G_{\ast_T}(T)$, as claimed. \end{proof}
\subsection{The class group of a Pr\"ufer domain}\label{sect:prufer-Cl}
If $\ast$ is a (semi)star operation, we can define the $\ast$-class group by mirroring the definition of the case of star operations: we say that $I$ is $\ast$-invertible if $(I(R:I))^\ast=R$, and we define $\mathrm{Cl}^\ast(R)$ as the quotient between the group of the $\ast$-invertible $\ast$-ideals (endowed with the $\ast$-product) and the subgroup of principal ideals. Since $(R:I)=(0)$ if $I\in\mathbf{F}(R)\setminus\mathcal{F}(R)$, every $\ast$-invertible ideal is a fractional ideal, and thus $\mathrm{Cl}^\ast(R)$ coincides with $\mathrm{Cl}^{\ast'}(R)$, where $\ast':=\ast|_{\mathcal{F}(R)}$ is the restriction of $\ast$.
The first result of this section is that Proposition \ref{prop:star-semistar} can be extended to the class group. \begin{prop}\label{prop:star-semistar-clgroup} Let $R$ be a Pr\"ufer domain and let $P$ be a nonzero prime ideal of $R$ contained in every maximal ideal. Suppose also that $P\notin\mathrm{Max}(R)$. Let $\ast\in\mathrm{Star}(R)$ and let $\sharp$ be the corresponding (semi)star operation on $D:=R/P$. Then, $\mathrm{Cl}^\ast(R)$ is naturally isomorphic to $\mathrm{Cl}^\sharp(D)$. \end{prop} \begin{proof} Let $\pi:R_P\longrightarrow F=Q(D)$ be the quotient map, and let $I$ be a fractional ideal of $R$ contained between $P$ and $R_P$. We claim that $\pi((R:I))=(D:\pi(I))$. In fact, if $y\in\pi((R:I))$ then $y=\pi(x)$ for some $x\in(R:I)$, and thus $y\pi(I)=\pi(x)\pi(I)=\pi(xI)\subseteq\pi(R)=D$, whence $y\in(D:\pi(I))$. Conversely, if $y\in(D:\pi(I))$ and $y=\pi(x)$ then $y\pi(I)\subseteq D$, i.e., $\pi(xI)\subseteq D$. By the correspondence between $R$-submodules of $R_P$ and $D$-submodules of $F$ we have $xI\subseteq R$ and $y\in\pi((R:I))$.
Let $J=\pi(I)$ be a $\sharp$-invertible ideal of $D$. Then, $(J(D:J))^\sharp=D$, and thus \begin{equation*} \begin{array}{rcl} R & = & \pi^{-1}\left((J(D:J))^\sharp\right)=\pi^{-1}(J(D:J))^\ast=\\ & = & \left(\pi^{-1}(J)\pi^{-1}(D:J)\right)^\ast=(I(R:I))^\ast. \end{array} \end{equation*} Therefore, $I$ is $\ast$-invertible, and there is an injective map $\theta:\mathrm{Inv}^\sharp(D)\longrightarrow \mathrm{Inv}^\ast(R)$. It is also straightforward to see that $\theta$ is a group homomorphism.
The well-definedness of the map $\ast\mapsto\ast_\phi$ implies that, if $J,J'$ are $D$-submodules of $F$, and $I:=\pi^{-1}(J)$, $I':=\pi^{-1}(J')$, then $J=zJ'$ for some $z\in F$ if and only if $I=wI'$ for some $w\in K$. Therefore, $\theta$ induces an injective map $\overline{\theta}:\mathrm{Cl}^\sharp(D)\longrightarrow \mathrm{Cl}^\ast(R)$, which is clearly a group homomorphism.
Let now $I$ be a $\ast$-invertible ideal of $R$. Then, $I$ is $v$-invertible, and thus $(I:I)=R$ \cite[Proposition 34.2(2)]{gilmer}. In particular, $I$ is not an $R_P$-module, and thus the set $\mathbf{v}(I)$ has an infimum $\alpha$, where $\mathbf{v}$ is the valuation associated to $R_P$. If $a$ is an element of valuation $\alpha$, then $P\subsetneq a^{-1}I\subsetneq R_P$; hence, $a^{-1}I=\pi^{-1}(\pi(a^{-1}I))$ and $[I]=\overline{\theta}([\pi(a^{-1}I)])$, and in particular $[I]$ is in the image of $\overline{\theta}$. Since $I$ was arbitrary, $\overline{\theta}$ is surjective and $\mathrm{Cl}^\sharp(D)\simeq\mathrm{Cl}^\ast(R)$. \end{proof}
\begin{teor}\label{teor:clgroup-prufer} Let $R$ be a Pr\"ufer domain, and suppose that $R$ is either: \begin{enumerate}[(a)] \item semilocal; or \item locally finite and finite-dimensional. \end{enumerate} Consider a star operation $\ast$ on $R$. Then, \begin{equation*} G_\ast(R)\simeq\bigoplus_{\substack{M\in\mathrm{Max}(R)\\ M\neq M^\ast}}\mathrm{Cl}^v(R_M). \end{equation*} \end{teor} \begin{proof} We start by considering the case of $R$ semilocal, and we proceed by induction on the number $n$ of maximal ideals of $R$. Note that, in this case, $\mathrm{Pic}(R)=(0)$ and so $G_\ast(R)=\mathrm{Cl}^\ast(R)$. If $n=1$, the conclusion is trivial, since $\ast\neq v$ if and only if $M\neq M^\ast$.
Suppose $n>1$ and let $\Theta$ be the standard decomposition of $R$ (which is a Jaffard family by Proposition \ref{prop:Tlambda}). By Theorem \ref{teor:jaffard-clgroup}, and using the fact that $\mathrm{Pic}(R)=(0)=\mathrm{Pic}(T)$ for every $T\in\Theta$, we have $\mathrm{Cl}^\ast(R)\simeq\bigoplus_{T\in\Theta}\mathrm{Cl}^{\ast_T}(T)$. Moreover, since a maximal ideal $M$ of $R$ is $\ast$-closed if and only if $MT$ is $\ast_T$-closed, by induction it suffices to prove the theorem when the standard decomposition of $R$ is $\{R\}$.
In this case, $\mathrm{Jac}(R)$ contains nonzero primes, and by Lemma \ref{lemma:prufer-jac} we can find a prime ideal $Q\subseteq\mathrm{Jac}(R)$ such that $\mathrm{Jac}(R/Q)$ does not contain nonzero prime ideals. Let $A:=R/Q$.
The standard decomposition $\Theta'$ of $A$ is nontrivial, and thus every $B\in\Theta'$ is a semilocal Pr\"ufer domain with fewer than $n$ maximal ideals. Moreover, by Proposition \ref{prop:star-semistar-clgroup}, $\mathrm{Cl}^\ast(R)\simeq\mathrm{Cl}^\sharp(A)$, where $\sharp$ is the restriction to $\mathcal{F}(A)$ of the (semi)star operation corresponding to $\ast$. Therefore, by the inductive hypothesis, \begin{equation*} \mathrm{Cl}^\sharp(A)\simeq\bigoplus_{B\in\Theta'}\mathrm{Cl}^{\sharp_B}(B)\simeq \bigoplus_{B\in\Theta'}\bigoplus_{\substack{N\in\mathrm{Max}(B)\\ N\neq N^{\sharp_B}}}\mathrm{Cl}^v(B_N)\simeq\bigoplus_{\substack{N\in\mathrm{Max}(A)\\ N\neq N^\sharp}}\mathrm{Cl}^v(A_N). \end{equation*} Thus, \begin{equation*} \mathrm{Cl}^\ast(R)\simeq\mathrm{Cl}^\sharp(A)\simeq\bigoplus_{\substack{N\in\mathrm{Max}(A)\\ N\neq N^\sharp}}\mathrm{Cl}^v(A_N). \end{equation*} However, if $M$ is the maximal ideal of $R$ which corresponds to the maximal ideal $N$ of $A$, then $R_M/QR_M\simeq A_N$, and thus by \cite[Theorem 3.5]{afz_vclass} we have $\mathrm{Cl}^v(R_M)\simeq\mathrm{Cl}^v(A_N)$; the claim follows.
Suppose now $R$ is finite-dimensional and of finite character, and let $\Theta$ be the standard decomposition of $R$. By Lemma \ref{lemma:dimfin}, there is a bijective correspondence between $\Theta$ and the height 1 prime ideals of $R$, and every $T\in\Theta$ is semilocal. Hence, by Proposition \ref{prop:Tlambda} and by the previous case, \begin{equation*} G_\ast(R)\simeq\bigoplus_{T\in\Theta}G_{\ast_T}(T)\simeq \bigoplus_{T\in\Theta}\mathrm{Cl}^{\ast_T}(T)\simeq \bigoplus_{T\in\Theta}\bigoplus_{\substack{M\in\mathrm{Max}(T)\\ M\neq M^{\ast_T}}}\mathrm{Cl}^v(T_M). \end{equation*} The conclusion now follows since $T_M=R_N$ (where $N:=M\cap R$) and $N=N^\ast$ if and only if $M=M^{\ast_T}$. \end{proof}
\begin{cor}\label{cor:clgroup-bezout} Let $R$ be a Bézout domain, and suppose that $R$ is either: \begin{enumerate}[(a)] \item semilocal; or \item finite-dimensional and of finite character. \end{enumerate} Let $\ast$ be a star operation on $R$. Then, \begin{equation*} \mathrm{Cl}^\ast(R)\simeq\bigoplus_{\substack{M\in\mathrm{Max}(R)\\ M\neq M^\ast}}\mathrm{Cl}^v(R_M). \end{equation*} \end{cor} \begin{proof} It is enough to note that $\mathrm{Pic}(R)=0$ if $R$ is a Bézout domain, so that $G_\ast(R)=\mathrm{Cl}^\ast(R)$ for every $\ast\in\mathrm{Star}(R)$, and then apply Theorem \ref{teor:clgroup-prufer}. \end{proof}
\begin{cor} Let $R$ be a Bézout domain, and suppose that $R$ is either \begin{enumerate}[(a)] \item semilocal; or \item finite-dimensional and of finite character. \end{enumerate} Let $S$ be a multiplicatively closed subset of $R$. Then, there is a natural surjective group homomorphism $\mathrm{Cl}^v(R)\longrightarrow \mathrm{Cl}^v(S^{-1}R)$, $[I]\mapsto[S^{-1}I]$. \end{cor} \begin{proof} Let $\Delta:=\{M\in\mathrm{Max}(R):M\cap S=\emptyset\}$. Then, for every $M\in\Delta$, $R_M=(S^{-1}R)_{S^{-1}M}$, and thus the isomorphism of Theorem \ref{teor:clgroup-prufer} reduces to a surjective map $\mathrm{Cl}^v(R)\longrightarrow \bigoplus_{M\in\Delta}\mathrm{Cl}^v(R_M)\simeq\mathrm{Cl}^v(S^{-1}R)$, where the last isomorphism comes from the fact that the maximal ideals of $S^{-1}R$ are the extensions of the ideals belonging to $\Delta$. \end{proof}
Therefore, under each case of Theorem \ref{teor:clgroup-prufer}, the determination of $G_\ast(R)$ is reduced to the calculation of $\mathrm{Cl}^v(V)$, where $V$ is a valuation domain. In the case where the maximal ideal $M$ of $V$ is \emph{branched} (that is, if there is an $M$-primary ideal of $V$ different from $M$, or equivalently if there is a prime ideal $P\subsetneq M$ such that there is no prime ideal properly contained between $P$ and $M$ \cite[Theorem 17.3]{gilmer}), this group has been calculated in \cite[Corollaries 3.6 and 3.7]{afz_vclass}. Indeed, if $P$ is the prime ideal directly below $M$, and $H$ is the value group of $V/P$ (represented as a subgroup of $\ins{R}$), then \begin{equation*} \mathrm{Cl}^v(V)\simeq\begin{cases} 0 & \text{~if~}H\simeq\ins{Z}\\ \ins{R}/H & \text{~otherwise}. \end{cases} \end{equation*}
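As a quick illustration of this formula (assuming, for simplicity, that $V$ has rank one, so that $P=(0)$ and $H$ is the whole value group of $V$):
\begin{equation*}
H\simeq\ins{Z}\ \Longrightarrow\ \mathrm{Cl}^v(V)=0,\qquad H\simeq\ins{Q}\ \Longrightarrow\ \mathrm{Cl}^v(V)\simeq\ins{R}/\ins{Q},
\end{equation*}
so a discrete rank-one valuation domain has trivial $v$-class group, while in the nondiscrete rational case $\mathrm{Cl}^v(V)$ is an uncountable divisible group.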
In particular, we have the following. \begin{cor} Let $R$ be a Bézout domain, and suppose that $R$ is either: \begin{enumerate}[(a)] \item semilocal; or \item finite-dimensional and of finite character. \end{enumerate} For every $\ast\in\mathrm{Star}(R)$, $\mathrm{Cl}^\ast(R)$ is an injective group (equivalently, an injective $\ins{Z}$-module). \end{cor} \begin{proof} By Corollary \ref{cor:clgroup-bezout} and the previous discussion, $\mathrm{Cl}^\ast(R)\simeq\bigoplus\ins{R}/H_\alpha$, for a family $\{H_\alpha:\alpha\in A\}$ of additive subgroups of $\ins{R}$. Each $\ins{R}/H_\alpha$ is a divisible group, and thus so is their direct sum; however, a divisible group is injective, and thus so is $\mathrm{Cl}^\ast(R)$. \end{proof}
We end with a result similar in spirit to Proposition \ref{prop:hloc-pruf-somma}. \begin{prop}\label{prop:pruf-somma-invt} Let $R$ be a Pr\"ufer domain and suppose that $R$ is either: \begin{enumerate}[(a)] \item semilocal; or \item finite-dimensional and of finite character. \end{enumerate} Let $\ast\in\mathrm{Star}(R)$. If $I,J\in\mathrm{Inv}^\ast(R)$, then $I+J\in\mathrm{Inv}^\ast(R)$. \end{prop} \begin{proof}
Suppose first that $R$ is semilocal, and proceed by induction on $n:=|\mathrm{Max}(R)|$. If $n=1$, then $R$ is a valuation domain and $I+J$ is equal either to $I$ or to $J$, and the claim is proved.
Suppose the claim is true up to rings with $n-1$ maximal ideals, let $|\mathrm{Max}(R)|=n$ and consider the standard decomposition $\Theta$ of $R$. By Proposition \ref{prop:jaffard-invt}, $I+J\in\mathrm{Inv}^\ast(R)$ if and only if $(I+J)T\in\mathrm{Inv}^{\ast_T}(T)$ for every $T\in\Theta$; therefore, if $\Theta$ is not trivial, we can use the inductive hypothesis. Suppose $\Theta$ is trivial: then, $\mathrm{Jac}(R)$ contains nonzero prime ideals, and by Lemma \ref{lemma:prufer-jac} there is a nonzero prime ideal $Q\subseteq\mathrm{Jac}(R)$ such that $\mathrm{Jac}(R/Q)$ does not contain nonzero primes. By Proposition \ref{prop:star-semistar-clgroup}, $I/Q$ and $J/Q$ are $\sharp$-invertible $\sharp$-ideals of $R/Q$ (where $\sharp$ is the (semi)star operation induced by $\ast$), and in particular $I/Q$ and $J/Q$ are fractional ideals of $R/Q$.
By construction, $R/Q$ admits a nontrivial Jaffard family $\Lambda$: for every $U\in\Lambda$, $(I/Q)U$ and $(J/Q)U$ are $\sharp_U$-invertible $\sharp_U$-ideals, and thus by the inductive hypothesis so is $(I/Q)U+(J/Q)U=((I+J)/Q)U$. Hence $(I+J)/Q$ is a $\sharp$-invertible $\sharp$-ideal, and so $I+J$ is a $\ast$-invertible $\ast$-ideal, i.e., $I+J\in\mathrm{Inv}^\ast(R)$.
If now $R$ is locally finite and finite-dimensional, we see that if $\Theta$ is the standard decomposition of $R$ then every $T\in\Theta$ is semilocal. The ideal $I+J$ is $\ast$-invertible if and only if $(I+J)T$ is $\ast_T$-invertible for every $T\in\Theta$; however, since $IT$ and $JT$ are $\ast_T$-invertible $\ast_T$-ideals, the previous part of the proof shows that so is $IT+JT=(I+J)T$. Therefore, $I+J\in\mathrm{Inv}^\ast(R)$. \end{proof}
\end{document}
\begin{document}
\title{Real single-loop cyclic three-level configuration of chiral molecules} \author{Chong Ye$^{1}$}
\author{Quansheng Zhang$^{1}$}
\author{Yong Li$^{1,2}$}\email{liyong@csrc.ac.cn}
\affiliation{$^1$Beijing Computational Science Research Center, Beijing 100193, China} \affiliation{$^2$Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China} \date{\today} \begin{abstract}
Single-loop cyclic three-level ($\Delta$-type) configurations of chiral molecules have been used for enantio-separation in many theoretical works. When the effect of molecular rotation is considered, this simple single-loop configuration is generally replaced by a complicated multiple-loop configuration containing multiple degenerate magnetic sub-levels, and the efficiency of the enantio-separation methods is suppressed. For chiral asymmetric-top molecules, we propose a scheme to construct a real single-loop $\Delta$-type configuration with no connections to other states by applying three microwave fields with appropriate polarization vectors and frequencies. With our scheme, the previous theoretical proposals for enantio-separation based on single-loop $\Delta$-type configurations can be experimentally realized even when the molecular rotation is considered.
\end{abstract} \pacs{33.80.-b, 33.15.Bh, 42.50.Hz} \maketitle \section{Introduction}\label{INTR} A chiral molecule cannot be superimposed on its mirror image through pure translation and/or rotation. The handedness of the two mirror images (enantiomers) is fundamentally important for the enantiomer selectivity of pharmacological effects~\cite{D1,D2,D3,D4}, biological processes~\cite{B1}, the homo-chirality of life~\cite{HCL}, and even fundamental physics, where chiral molecules represent systems with broken parity~\cite{P1}. Accordingly, enantio-separation is an important and challenging task in chemistry and medicine~\cite{PC1,PC2,PC3,PC4,PC5}.
Purely optical enantio-separation methods have also been investigated theoretically~\cite{OC1,OC2,OC3,OC4,OC5,IS1,IS2,IS3,RS1,RS2}. One interesting class of methods~\cite{IS1,IS2,IS3,RS1,RS2} among them is based on a system with a cyclic three-level ($\Delta$-type) configuration~\cite{3loop,3loop1}. In general, such a configuration driven by electric-dipole transitions is forbidden in natural atoms, but can exist in chiral molecules and other symmetry-broken systems~\cite{SCP,SCP1,SCP2,SCP3,SCP4,SCP5,SCP6}. For chiral molecules, Kr\'{a}l \textit{et al}.~\cite{IS1,IS2} proposed a system with the $\Delta$-type configuration to realize enantio-separation in the inner-state space by applying three optical (microwave) fields that invoke both one-photon and two-photon processes in the lowest three vibrational levels. The product of the three Rabi frequencies changes sign with enantiomer, and the interference of the one-photon and two-photon processes therefore makes the dynamics chirality-dependent. After adiabatic (or diabatic) processes, molecules of different chirality which are initially in their respective ground states are transferred to final levels at different energies~\cite{IS1}. With the $\Delta$-type configuration, the inner-state enantio-separation can also be realized by a purely dynamic transfer process via applying optical ultrashort $\pi/2$ and $\pi$ pulses~\cite{IS3}. Based on the $\Delta$-type configuration, one can also realize spatial enantio-separation via a chirality-dependent generalized Stern-Gerlach effect~\cite{RS1,RS2}.
However, a real gas molecule involves the subspace of rotational states, and each rotational state may have degenerate magnetic sub-levels. Thus, the ideal single-loop $\Delta$-type configuration~\cite{IS1,IS2,IS3,RS1,RS2,Hirota} is replaced by a multiple-loop $\Delta$-type configuration~\cite{JCP.137.044313}. For the spatial enantio-separation~\cite{RS1,RS2}, this effect of the molecular rotation leads to a corresponding reduction of the chirality-dependent generalized Stern-Gerlach effect~\cite{JCP.137.044313}. Very recently, some experimental groups have utilized the multiple-loop $\Delta$-type configuration to realize the inner-state enantio-separation~\cite{PRL.118.123002} as well as enantio-discrimination~\cite{Nature.497.475,PRL.111.023008, JPCL.6.196,Angew.Chem.10.1002,PCCP.16.11114,ACI,KK,JPCL.7.341,JCP.142.214201} in gas phase samples. It was pointed out~\cite{PRL.118.123002} that one of the factors limiting the experimental efficiency is the appearance of multiple loops. With the multiple-loop $\Delta$-type configuration, it is not possible to achieve perfect enantio-separation as theoretically proposed in Refs.~\cite{IS1,IS2,IS3}. Therefore, constructing a real single-loop $\Delta$-type configuration, with no connections to other states, is strongly demanded for enantio-separation.
For chiral symmetric top molecules, it was theoretically pointed out in Ref.~\cite{JCP.137.044313} that, once molecular rotation is taken into account, the real single-loop $\Delta$-type configuration is prohibited by the selection rules. However, many kinds of chiral molecules are asymmetric tops, whose selection rules differ from those of symmetric tops. In this paper, we present a scheme to construct the real single-loop $\Delta$-type configuration for chiral asymmetric top molecules (instead of the symmetric top ones considered in Ref.~\cite{JCP.137.044313}) under the consideration of molecular rotation. In order to elucidate our scheme, we assume all the states are in the vibrational ground state and choose the working states to be the rotational ground state and two other higher-energy rotational states. Three electromagnetic (optical, microwave, or radio frequency) fields are used to invoke three electric-dipole-allowed transitions among them. With the help of the electric-dipole selection rules, we can realize a real single-loop $\Delta$-type configuration of three single states by appropriately choosing the polarization vectors and the frequencies of the three electromagnetic fields. We also demonstrate that the product of the three corresponding Rabi frequencies changes sign with enantiomer, which guarantees the chirality-dependency of the configuration.
\section{Electric-dipole selection rules for rotational transitions of asymmetric top}\label{SRPL}
Typical chiral molecules such as $1,2$-propanediol, $1,3$-butanediol, carvone, and menthone are asymmetric tops. For an asymmetric top molecule, the rotational eigenfunctions are $|J,\tau,M\rangle$ with the angular momentum quantum number $J$, the magnetic quantum number $M$, and $\tau$ running from $-J$ to $J$ in unit steps in the order of increasing energy~\cite{AM}. They can be written as a linear combination of prolate symmetric top eigenfunctions $|J,K,M)$~\cite{AM}: \begin{align}
|J,\tau,M\rangle=\sum_{K=-J}^{J}A^{J}_{K,\tau}|J,K,M). \end{align}
The coefficients $A^{J}_{K,\tau}$ are given by solving the static Schr\"{o}dinger equation of the asymmetric top in the basis of prolate symmetric top eigenfunctions~\cite{AM}. The total wavefunction of the molecule can be described as $|\alpha\rangle=|v_\alpha\rangle|J_\alpha,\tau_\alpha,M_\alpha\rangle$ with the vibrational wavefunction $|v_\alpha\rangle$. For clarity, we have used $|...\rangle$ and $|...)$ to distinguish the asymmetric top and symmetric top eigenfunctions.
We consider a linearly $Z$ polarized or a circularly polarized (in the $X$-$Y$ plane) electromagnetic field in the space-fixed frame \begin{align}\label{EqE} \bm{E}^{s}_{\sigma}=\mathrm{Re} \{\bm{\varepsilon}^{s}_{\sigma}E^{s}_{\sigma} e^{-i(2\pi\nu t+\varphi_\sigma)}\}. \end{align} Any electromagnetic field can be written as a linear combination of them. Here ``$s$'' indicates the space-fixed frame, and $\bm{\varepsilon}^{s}_{\sigma}$ ($\sigma=0,\pm1$) is the polarization vector of the electromagnetic field. $E^{s}_{\sigma}$, $\nu$, and $\varphi_{\sigma}$ are, respectively, the field amplitude, the frequency, and the initial phase of the electromagnetic field. Specifically, $\sigma=0$ corresponds to a linearly $Z$ polarized electromagnetic field with the polarization vector $\bm{\varepsilon}^{s}_{0}=\bm{e}^{s}_Z$; $\sigma=1$ ($\sigma=-1$) corresponds to a circularly polarized electromagnetic field rotating about the $Z$-axis in the right-hand (left-hand) sense with the polarization vector $\bm{\varepsilon}^{s}_{1}=(\bm{e}^{s}_X+i\bm{e}^{s}_Y)/\sqrt{2}$ [$\bm{\varepsilon}^{s}_{-1}=-(\bm{e}^{s}_X- i\bm{e}^{s}_Y)/\sqrt{2}$]~\cite{Book1}.
Considering only the electric-dipole-allowed transition between an upper level $|\alpha\rangle$ and a lower level $|\beta\rangle$, the Hamiltonian $\bm{\hat{\mu}}\cdot \bm{E}^{s}_{\sigma}$ in the interaction picture is given as \begin{align} {H}^{s}_{\sigma}&=\frac{1}{2}\Omega^{\sigma}_{\alpha\beta}
e^{i[2\pi(f_{\alpha\beta}-\nu)t]}|\alpha\rangle\langle\beta|+h.c., \label{Hs} \end{align} where $f_{\alpha\beta} \equiv f_{\alpha}-f_\beta$ ($>0$) is the transition frequency with energies of the two states $hf_{\alpha}$ and $hf_{\beta}$ ($h$ is the Planck constant), and the Rabi frequency is \begin{equation}\label{RB1}
\Omega^{\sigma}_{\alpha\beta}= {E^{s}_{\sigma}}e^{i\varphi_\sigma}\langle\alpha|\bm{\hat{\mu}}\cdot\bm{\varepsilon}^{s}_{\sigma}|\beta\rangle. \end{equation} Here $\bm{\hat{\mu}}$ is the electric dipole operator consisting of a sum over all the (nuclear and electronic) charges, weighted by their position vectors measured from a common origin~\cite{AM}. We have assumed that $f_{\alpha\beta}$ is close to $\nu$. With this, the counter-rotating terms like ${\Omega}^{\prime\sigma}_{\alpha\beta}e^{i2\pi(f_{\alpha\beta}+\nu)t}
|\alpha\rangle\langle \beta|$ and the permanent-dipole terms like $(\Omega^{\sigma}_{\alpha\alpha}e^{i2\pi \nu t}+\Omega^{\prime\sigma}_{\alpha\alpha}e^{-i2\pi \nu t})
|\alpha\rangle\langle \alpha|$ have been ignored in Eq.~(\ref{Hs}) since they oscillate rapidly and have little effect on the dynamics of the system.
Here we have defined $\Omega^{\sigma}_{jj}= {E^{s}_{\sigma}}e^{i\varphi_\sigma}\langle j|\bm{\hat{\mu}}\cdot\bm{\varepsilon}^{s}_{\sigma}|j\rangle$ and
${\Omega}^{\prime\sigma}_{jj^{\prime}}= {E^{s}_{\sigma}}e^{-i\varphi_\sigma}\langle j|
\bm{\hat{\mu}}\cdot(\bm{\varepsilon}^{s}_{\sigma})^{\ast}|j^{\prime}\rangle $ with $j,j^{\prime}=\alpha,\beta$.
The components of the electric dipole in the space-fixed frame $\hat{\mu}_{\sigma}^{s}\equiv\bm{\hat{\mu}}\cdot\bm{\varepsilon}^{s}_{\sigma}$ can be obtained by a rotation from the molecular frame~\cite{AM,JCP.137.044313} \begin{align}\label{MU1} \hat{\mu}_{\sigma}^{s}=\sum_{\sigma^{\prime}=\pm1,0}[D^{1}_{\sigma\sigma^{\prime}} (\psi, \theta,\phi)]^{\ast}\hat{\mu}^{m}_{\sigma^{\prime}}. \end{align} The notation ``$m$'' indicates the molecular frame. $D^{1}$ is the rotation matrix in three dimensions. ``$\ast$'' denotes complex conjugation. $\psi$, $\theta$, $\phi$ are the Euler angles connecting the molecular frame and the space-fixed frame. $\hat{\mu}^{m}_{\sigma^{\prime}}$ are the components of the electric dipole in the molecular frame with $\hat{\mu}^{m}_{0}=\hat{\mu}^{m}_z$, $\hat{\mu}^{m}_{+}= ({\hat{\mu}}^{m}_x + i{\hat{\mu}}^{m}_y)/\sqrt{2}$, and $\hat{\mu}^{m}_{-}= -({\hat{\mu}}^{m}_x - i{\hat{\mu}}^{m}_y)/\sqrt{2}$. Here $x$, $y$, $z$ are the principal axes of the molecule in the molecular frame. We have used $(X,Y,Z)$ and $(x,y,z)$ to distinguish the coordinates in the space-fixed frame and those in the molecular frame. With Eq.~(\ref{MU1}), the Rabi frequency is \begin{align}\label{OME2} \Omega^{\sigma}_{\alpha\beta} =(-1)^{M_{\beta}+\sigma} {E}^{s}_{\sigma}e^{i\varphi_\sigma}W^{(\sigma)}_{J_\alpha M_\alpha,J_\beta M_\beta}\Gamma_{J_\alpha\tau_\alpha,J_\beta\tau_\beta}, \end{align} where the reduced matrix element is \begin{align}\label{OM3} &\Gamma_{J_\alpha\tau_\alpha,J_\beta\tau_\beta}=\sqrt{(2J_\alpha+1)(2J_\beta+1)}
\sum_{\sigma^{\prime}=\pm1,0}\langle v_\alpha|\hat{\mu}^{m}_{\sigma^{\prime}}|v_{\beta}\rangle\times\nonumber\\ &\sum^{J_\alpha}_{K_\alpha=-J_\alpha}\sum^{J_\beta}_{K_\beta=-J_\beta} (-1)^{-K_{\beta}+\sigma^{\prime}}(A^{J_\alpha}_{K_\alpha,\tau_\alpha})^{\ast} A^{J_\beta}_{K_\beta,\tau_\beta} W^{(\sigma^{\prime})}_{J_\alpha K_\alpha,J_\beta K_\beta} \end{align} and $W^{(\sigma^{\prime\prime})}_{J M,J^{\prime} M^{\prime}}=\left(
\begin{array}{ccc}
J & 1 & J^{\prime}\\
M & -\sigma^{\prime\prime} & -M^{\prime} \\
\end{array} \right)$ for $\sigma^{\prime\prime}=0,\pm1$ are 3$j$-symbols. We note that the process to achieve Eq.~(\ref{OME2}) is similar to that in Ref.~\cite{JCP.137.044313} except that we consider the case of asymmetric tops instead of the case of symmetric tops in Ref.~\cite{JCP.137.044313}.
Obviously, the 3$j$-symbols play a central role in determining the electric-dipole selection rules. For the selection rules of $J$, we have $\Delta J=J_{\alpha}-J_{\beta}=0,\pm1$ according to the 3$j$-symbols in both Eq.~(\ref{OME2}) and Eq.~(\ref{OM3}). The selection rules of $M$ are directly related to the polarization vectors of the electromagnetic field, as demonstrated by the 3$j$-symbol in Eq.~(\ref{OME2}), which gives $\Delta M=M_{\alpha}-M_{\beta}=\sigma$.
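These selection rules can be illustrated numerically. A brief sketch, assuming Python with sympy is available (not part of the original work):

```python
# Illustrative check of the J and M selection rules via 3j-symbols (sympy assumed).
from sympy.physics.wigner import wigner_3j

# Delta J = 0, +-1: the triangle rule kills |J_alpha - J_beta| >= 2,
# e.g. (2 1 0; 0 0 0) vanishes
assert wigner_3j(2, 1, 0, 0, 0, 0) == 0

# Delta M = sigma: the m-entries (M_alpha, -sigma, -M_beta) must sum to zero,
# e.g. M_alpha = 1, sigma = 0, M_beta = 0 gives a vanishing symbol
assert wigner_3j(1, 1, 1, 1, 0, 0) == 0

# an allowed combination (Delta J = 1, sigma = 0, M_alpha = M_beta = 0) is nonzero
assert wigner_3j(1, 1, 0, 0, 0, 0) != 0
```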
In principle, using the Wigner-Eckart theorem, one can write the Rabi frequency in the form of Eq.~(\ref{OME2}) and easily obtain the selection rules of $J$ and $M$. However, the reduced matrix element~(\ref{OM3}) is fundamentally important for constructing the real single-loop $\Delta$-type configuration. This is seen clearly when we specialize to the case of symmetric tops, for which Eq.~(\ref{OM3}) reduces to \begin{align}\label{OM4} &\Gamma_{J_\alpha K_\alpha,J_\beta K_\beta}=\sqrt{(2J_\alpha+1)(2J_\beta+1)}\times\nonumber\\
&\sum_{\sigma^{\prime}=\pm1,0}(-1)^{-K_{\beta}+\sigma^{\prime}}\langle v_\alpha|\hat{\mu}^{m}_{\sigma^{\prime}}|v_{\beta}\rangle W^{(\sigma^{\prime})}_{J_\alpha K_\alpha,J_\beta K_\beta}. \end{align} The 3$j$-symbol $W^{(\sigma^{\prime})}_{J_\alpha K_\alpha,J_\beta K_\beta}$ here establishes the selection rules of $K$. This is one of the reasons preventing the formation of the single-loop $\Delta$-type configuration for chiral symmetric top molecules~\cite{JCP.137.044313}. For the case of asymmetric tops, the sum over $K_{\alpha}$ and $K_{\beta}$ in Eq.~(\ref{OM3}) relaxes the selection rules of $K$ and thus offers the possibility of forming the closed single-loop $\Delta$-type configuration for chiral asymmetric top molecules.
\section{Real single-loop $\Delta$-type configuration}
Our task is to establish a scheme to form the real chirality-dependent single-loop $\Delta$-type configuration for chiral asymmetric top molecules with the help of the discussions in Sec.~\ref{SRPL}. For simplicity, we assume all the states are in the vibrational ground state $|v_g\rangle$ (the case of different vibrational states yields similar results). With this, we only take the rotational subspace into consideration and shorten $\langle v_g|\hat{\mu}^{m}_{\sigma^{\prime}}|v_g\rangle$ to $\mu^{m}_{\sigma^{\prime}}$ in further discussions.
\subsection{General formula} A natural starting state of the single-loop $\Delta$-type configuration is the rotational ground state
$|J_a,\tau_a\rangle=|0,0\rangle$ which has no magnetic degeneracy. We apply three electromagnetic fields to resonantly couple, respectively, with three cyclic transitions $|0,0\rangle\rightarrow|J_b,\tau_b\rangle\rightarrow|J_c,\tau_c\rangle\rightarrow|0,0\rangle$. They can be written as $\bm{E}_{1}=\mathrm{Re}\{\sum_{\sigma=0,\pm1}E_{1,\sigma}\bm{\varepsilon}^{s}_{\sigma}e^{-i(2\pi\nu_1 t+\varphi_{1,\sigma})}\}$, $\bm{E}_{2}=\mathrm{Re}\{\sum_{\sigma=0,\pm1}E_{2,\sigma}\bm{\varepsilon}^{s}_{\sigma}e^{-i(2\pi\nu_2 t+\varphi_{2,\sigma})}\}$, and $\bm{E}_{3}=\mathrm{Re}\{\sum_{\sigma=0,\pm1}E_{3,\sigma}\bm{\varepsilon}^{s}_{\sigma}e^{-i(2\pi\nu_3 t+\varphi_{3,\sigma})}\}$ with $\nu_{1}=f_{ba}$, $\nu_{2}=f_{cb}$, and $\nu_{3}=f_{ca}$. According to the selection rules of $J$, we have $J_b=J_c=1$. Ignoring all the transitions that are off-resonantly coupled with the three fields, the total Hamiltonian in the interaction picture in the rotating-wave approximation can be arranged as \begin{align}\label{Eq9}
H_{total}=(\frac{\Omega_{1}}{2}|b\rangle\langle a|+\frac{\Omega_{2}}{2}|c\rangle\langle b|+\frac{\Omega_{3}}{2}|c\rangle\langle a|+h.c.) +H^{\prime}, \end{align}
where $|a\rangle=|0,0,0\rangle$. $H^{\prime}$ in Eq.~(\ref{Eq9}) is uncoupled with $|a\rangle$, when we choose \begin{align}\label{Eqb}
&|b\rangle=\sin\theta_1\cos\phi_1e^{i\varphi_{1,1}}|1,\tau_b,1\rangle +\sin\theta_1\sin\phi_1e^{i\varphi_{1,0}}\nonumber\\
&\times|1,\tau_b,0\rangle
+\cos\theta_1e^{i\varphi_{1,-1}}|1,\tau_b,-1\rangle, \end{align} and \begin{align}\label{Eqc}
&|c\rangle=\sin\theta_3\cos\phi_3e^{i\varphi_{3,1}}|1,\tau_c,1\rangle +\sin\theta_3\sin\phi_3e^{i\varphi_{3,0}}\nonumber\\
&\times|1,\tau_c,0\rangle
+\cos\theta_3e^{i\varphi_{3,-1}}|1,\tau_c,-1\rangle. \end{align} Here $\sin\theta_\lambda\cos\phi_\lambda=E_{\lambda,1}/E_{\lambda}$, $\sin\theta_\lambda\sin\phi_\lambda=E_{\lambda,0}/E_{\lambda}$, and $\cos\theta_\lambda=E_{\lambda,-1}/E_{\lambda}$ with $E_{\lambda}=\sqrt{E^2_{\lambda,1}+E^2_{\lambda,0}+E^2_{\lambda,-1}}$ ($\lambda=1,2,3$).
With these, $H^{\prime}$ denotes all the other resonant coupling terms between $|J_b,\tau_b\rangle$ and $|J_c,\tau_c\rangle$. Since $J_b=J_c=1$, both $|b\rangle$ and $| c\rangle$ have two degenerate partner states, labeled respectively as
$|b^{\prime}\rangle$, $|b^{\prime\prime}\rangle$, $|c^{\prime}\rangle$, and $|c^{\prime\prime}\rangle$. If the conditions \begin{align}\label{C1}
&\langle c^{\prime}|H_{total}|b\rangle=0,~~
\langle c^{\prime\prime}|H_{total}|b\rangle=0,\nonumber\\
&\langle c|H_{total}|b^{\prime}\rangle=0,~~
\langle c|H_{total}|b^{\prime\prime}\rangle=0, \end{align} are satisfied, $H^{\prime}$
is decoupled from the subspace spanned by $\{|a\rangle,|b\rangle,|c\rangle\}$, and the evolution of a system initially prepared in this subspace will be governed by the single-loop $\Delta$-type configuration with Hamiltonian \begin{align}\label{HSL}
H_{sl}=\frac{1}{2}(\Omega_{1}|b\rangle\langle a|+\Omega_{2}|c\rangle\langle b|+\Omega_{3}|c\rangle\langle a|+H.c.). \end{align} The Rabi frequencies $\Omega_{1}=-{\Gamma_{1\tau_b,00}}E_{1}/{\sqrt{3}}$ and $\Omega_{3}=-{\Gamma_{1\tau_c,00}}E_{3}/{\sqrt{3}}$ are nonzero when $\Gamma_{1\tau_b,00}\ne 0$ and $\Gamma_{1\tau_c,00}\ne 0$. The Rabi frequency $\Omega_{2}$ is proportional to $\Gamma_{1\tau_c,1\tau_b}$, with the ratio between them determined by the polarization vectors of the three fields. From the results in Sec.~\ref{SRPL}, we know that $\Gamma_{1\tau_b,00}$, $\Gamma_{1\tau_c,00}$, and $\Gamma_{1\tau_c,1\tau_b}$ are independent of the polarization vectors of the three fields. In order to have a closed $\Delta$-type configuration, we first ensure in the following that $\Gamma_{1\tau_b,00}$, $\Gamma_{1\tau_c,00}$, and $\Gamma_{1\tau_c,1\tau_b}$ are nonzero.
\subsection{$(J,\tau)$-level structure and chirality-dependency }\label{JT} As demonstrated previously, $J_b=J_c=1$. In the $J=1$ subspace, there are three rotational states
$|J=1,\tau=-1\rangle=|J=1,K=0)$, $|1,0\rangle=[|1,1)-|1,-1)]/{\sqrt{2}}$, and $|1,1\rangle=[|1,1)+|1,-1)]/{\sqrt{2}}$~\cite{AM}. According to the rotational Hamiltonian $H_{rot}=h(AJ^2_z+BJ^2_x+CJ^2_y)$ (for an asymmetric top molecule with the rotational constants $A>B>C$)~\cite{AM}, the eigenenergies for the $J=0$ and $J=1$ rotational states are $hf_{J=0,\tau=0}=0$, $hf_{1,-1}=h(B+C)$, $hf_{1,0}=h(A+C)$, and $hf_{1,1}=h(A+B)$.
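These $J=1$ eigenenergies can be verified by diagonalizing $H_{rot}$ in the prolate basis. A minimal symbolic sketch, assuming Python with sympy (the $J=1$ matrices below are the standard angular-momentum representation, not code from the paper):

```python
# Symbolic sketch (sympy assumed): H_rot = h(A Jz^2 + B Jx^2 + C Jy^2) in the
# J=1 prolate basis |1,K), K = -1, 0, 1, has eigenenergies h(B+C), h(A+C), h(A+B).
import sympy as sp

A, B, C = sp.symbols('A B C', positive=True)

# standard J=1 angular-momentum matrices, basis ordered K = (-1, 0, 1), hbar = 1
Jz = sp.diag(-1, 0, 1)
Jp = sp.Matrix([[0, 0, 0], [sp.sqrt(2), 0, 0], [0, sp.sqrt(2), 0]])  # J_+|1,K) = sqrt(2)|1,K+1)
Jx = (Jp + Jp.T) / 2
Jy = (Jp - Jp.T) / (2 * sp.I)

H = A * Jz**2 + B * Jx**2 + C * Jy**2  # energies in units of h

# asymmetric-top eigenstates |1,tau> written in the prolate basis (see text)
eigenpairs = [
    (B + C, sp.Matrix([0, 1, 0])),                # |1,-1> = |1,K=0)
    (A + C, sp.Matrix([-1, 0, 1]) / sp.sqrt(2)),  # |1,0>  = [|1,1)-|1,-1)]/sqrt(2)
    (A + B, sp.Matrix([1, 0, 1]) / sp.sqrt(2)),   # |1,1>  = [|1,1)+|1,-1)]/sqrt(2)
]
for energy, vec in eigenpairs:
    assert (H * vec - energy * vec).expand() == sp.zeros(3, 1)
```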
\begin{figure}
\caption{ $(J,\tau)$-level structure for the (multiple-loop) $\Delta$-type configuration starting from the rotational ground state with cyclic transitions: (a) $|J=0,\tau=0\rangle\rightarrow|1,-1\rangle\rightarrow|1,1\rangle\rightarrow|0,0\rangle$;
(b) $|0,0\rangle\rightarrow|1,-1\rangle\rightarrow|1,0\rangle\rightarrow|0,0\rangle$;
(c) $|0,0\rangle\rightarrow|1,0\rangle\rightarrow|1,1\rangle\rightarrow|0,0\rangle$. The states are in the $|J,\tau\rangle$ notation. $\mu^{m}_{x}$, $\mu^{m}_y$, and $\mu^{m}_z$ are nonzero components of the electric dipole in the molecular frame along the principal axes, and are respectively proportional to the related reduced matrix elements~(\ref{OM3}).
}
\label{Fig2}
\end{figure}
Three candidates of $(J,\tau)$-level structures for constructing the closed $\Delta$-type configuration are shown in Fig.~\ref{Fig2}. Take the first one [Fig.~\ref{Fig2}(a)] as an example. It is formed by the three cyclic transitions $|0,0\rangle\rightarrow|1,-1\rangle\rightarrow|1,1\rangle\rightarrow|0,0\rangle$ in the $|J,\tau\rangle$ notation. With Eq.~(\ref{OM3}), we have the nonzero reduced matrix elements~\cite{N1} \begin{align}\label{RME} &\Gamma_{1-1,00}=\sqrt{3}\mu^{m}_{0}W^{(0)}_{10,00} \propto\mu^{m}_{z},\nonumber\\ &\Gamma_{11,1-1}=-\frac{3}{\sqrt{2}}(\mu^{m}_{1} W^{(1)}_{11,10}+ \mu^{m}_{-1}W^{(-1)}_{1-1,10})\propto\mu^{m}_{x},\nonumber\\ &\Gamma_{11,00}=-\sqrt{\frac{3}{2}} (\mu^{m}_{1}W^{(1)}_{11,00} +\mu^{m}_{-1}W^{(-1)}_{1-1,00})\propto\mu^{m}_{y}. \end{align} Here $\mu^{m}_x$, $\mu^{m}_y$, and $\mu^{m}_z$ are the nonzero components of the electric dipole in the molecular frame along the respective principal axes. Thus, the $(J,\tau)$-level structure in Fig.~\ref{Fig2}~(a) is suitable for forming the closed $\Delta$-type configuration.
Moreover, the chirality-dependency of the $\Delta$-type configuration is also reflected in the $(J,\tau)$-level structure. It is known that the sign of $\mu^{m}_x\mu^{m}_y\mu^{m}_z$ fully determines the chirality of an enantiomer~\cite{Nature.497.475,PRL.111.023008, JPCL.6.196,PRL.118.123002,Angew.Chem.10.1002,PCCP.16.11114,ACI}. The sign of any two of the three dipole-moment components is arbitrary and changes with the choice of axes, whereas the sign of the combined quantity $\mu^{m}_x\mu^{m}_y\mu^{m}_z$ is axis-independent and changes sign with enantiomer. Combining this with Eq.~(\ref{OME2}), the product of the three reduced matrix elements in Eq.~(\ref{RME}), and hence the product of the three Rabi frequencies in Eq.~(\ref{HSL}), changes sign with enantiomer. This guarantees the chirality-dependency of the $\Delta$-type configuration. Applying similar analyses to the other two candidates shown in Fig.~\ref{Fig2}~(b) and Fig.~\ref{Fig2}~(c), we find that they can also form the closed and chirality-dependent $\Delta$-type configuration.
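The proportionalities in Eq.~(\ref{RME}) and the enantiomer sign flip of the product can be checked symbolically. A sketch under the conventions of Sec.~\ref{SRPL} for $\mu^{m}_{\pm1}$ and $W^{(\sigma)}$ (Python with sympy assumed; the variable names are ours, not the paper's):

```python
# Symbolic verification sketch (sympy assumed; not the authors' code) of the
# reduced matrix elements in Eq. (RME) and the sign flip of their product.
import sympy as sp
from sympy.physics.wigner import wigner_3j

mux, muy, muz = sp.symbols('mu_x mu_y mu_z', real=True)
mu = {1: (mux + sp.I * muy) / sp.sqrt(2),    # mu^m_{+1}
      -1: -(mux - sp.I * muy) / sp.sqrt(2),  # mu^m_{-1}
      0: muz}                                # mu^m_0

def W(s, J, M, Jp, Mp):
    """W^(s)_{JM,J'M'} = (J 1 J'; M -s -M')."""
    return wigner_3j(J, 1, Jp, M, -s, -Mp)

G1 = sp.sqrt(3) * mu[0] * W(0, 1, 0, 0, 0)                                    # Gamma_{1-1,00}
G2 = -(3 / sp.sqrt(2)) * (mu[1] * W(1, 1, 1, 1, 0)
                          + mu[-1] * W(-1, 1, -1, 1, 0))                      # Gamma_{11,1-1}
G3 = -sp.sqrt(sp.Rational(3, 2)) * (mu[1] * W(1, 1, 1, 0, 0)
                                    + mu[-1] * W(-1, 1, -1, 0, 0))            # Gamma_{11,00}

assert sp.simplify(G1 + muz) == 0                # G1 proportional to mu_z
assert sp.simplify(G2 + sp.sqrt(6) * mux / 2) == 0   # G2 proportional to mu_x
assert sp.simplify(G3 + sp.I * muy) == 0         # G3 proportional to mu_y

prod = sp.simplify(G1 * G2 * G3)
# mirroring the molecule flips one dipole component, hence the product's sign
assert sp.simplify(prod.subs(mux, -mux) + prod) == 0
```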
\subsection{$M$-level structure with $Z$ and circularly polarized electromagnetic fields}\label{MSP}
Starting from the rotational ground state, so far we have given three kinds of
$(J,\tau)$-level structures for forming the closed and chirality-dependent $\Delta$-type configuration. However, due to the magnetic degeneracy of $|J_b=1,\tau_b\rangle$ and $|J_c=1,\tau_c\rangle$, such a $\Delta$-type configuration generally still has multiple loops when the polarizations of the electromagnetic fields are not appropriately chosen. In this subsection, we turn to the conditions~(\ref{C1}), which dictate the choice of appropriate polarization vectors of the three electromagnetic fields for achieving the single-loop $\Delta$-type configuration with the Hamiltonian~({\ref{HSL}}).
We consider the situation where only the linearly $Z$ ($\sigma=0$) or circularly polarized electromagnetic field ($\sigma=\pm 1$) is applied to resonantly couple with each transition in the $\Delta$-type configuration. The three electromagnetic fields are $\bm{E}_{1}=\mathrm{Re}\{E_{1,\sigma_1}\bm{\varepsilon}^{s}_{\sigma_1}e^{-i(2\pi\nu_1 t+\varphi_{1,\sigma_1})}\}$, $\bm{E}_{2}=\mathrm{Re}\{E_{2,\sigma_2}\bm{\varepsilon}^{s}_{\sigma_2}e^{-i(2\pi\nu_2 t+\varphi_{2,\sigma_2})}\}$, and $\bm{E}_{3}=\mathrm{Re}\{E_{3,\sigma_3}\bm{\varepsilon}^{s}_{\sigma_3}e^{-i(2\pi\nu_3 t+\varphi_{3,\sigma_3})}\}$. Here, $\sigma_1$, $\sigma_2$, and $\sigma_3$ stand for their polarization vectors.
In this case, according to the selection rules of $M$, $\bm{E}_1$ and $\bm{E}_3$ only evoke the transitions
$|0,0,0\rangle\rightarrow|1,\tau_b,M_b\rangle$ and $|0,0,0\rangle\rightarrow|1,\tau_c,M_c\rangle$, respectively. Thus, we have $|b\rangle =|J_b=1,\tau_b,M_b\rangle$ and $|c\rangle =|J_c=1,\tau_c,M_c\rangle$. If $\bm{E}_2$
can evoke the transition $|b\rangle\rightarrow|c\rangle$, the conditions~(\ref{C1}) are satisfied according to the selection rules of $M$ and we can form the real single-loop $\Delta$-type configuration with the Hamiltonian~(\ref{HSL}).
All possible $M$-level structures for constructing the single-loop $\Delta$-type configuration are listed in Table~\ref{Tab1}. \begin{table}[h] \begin{tabular}{rrrrrr}
\hline
\rowcolor{blue!10} $~~~~~M_a$ & $~~~~~M_b$ & $~~~~~M_c$ & $~~~~~\sigma_{1}$ & $~~~~~\sigma_{2}$ & $~~~~~\sigma_{3}~~~~~$ \\
\hline $0$~~ & $1$~~ & 0~~ & $ 1 $~ & $-1$~ & $0$~~~~~~ \\
\rowcolor{gray!10} 0~~ &$-1$~~ & 0~~ & $-1$~ & $1$~ & $0$~~~~~~ \\
0~~ &0~~ & 1~~ & $0$~ & $1$~ & $1$~~~~~~~\\
\rowcolor{gray!10} 0~~ &0~~ & $-1$~~ & $0$~ & $-1$~ &$-1$~~~~~~~\\
0~~ &$-1$~~ & $-1$~~ & $-1$~ & $0$~ & $-1$~~~~~~~\\
\rowcolor{gray!10} 0~~ &1~~ & 1~~ & $1$~ & $0$~ & $1$~~~~~~~\\
\hline \end{tabular}
\caption{$M$-level structures to form the real single-loop $\Delta$-type configuration starting from the rotational ground state. $\sigma_{1}$, $\sigma_{2}$, and $\sigma_{3}$ label the polarization vectors of the electromagnetic fields $\bm{E}_1$, $\bm{E}_2$, and $\bm{E}_3$. Here we only choose the $Z$ polarized ($\sigma=0$) electromagnetic field and circularly polarized ($\sigma=\pm1$) electromagnetic field rotating about the $Z$-axis. The ($J,\tau$)-level structure of the configuration could be any one of the three cases in Fig.~\ref{Fig2}.
}\label{Tab1} \end{table}
We would like to note that, according to the selection rules of $J$ and $M$, it seems that the closed single-loop $\Delta$-type configuration among $|0,0,0\rangle$, $|J_{b}=1,\tau_{b},M_b=0\rangle$, and $|J_{c}=1,\tau_{c},M_c=0\rangle$ could be constructed with three linearly $Z$ polarized electromagnetic fields. However, such a configuration fails since the transition $|J_{b}=1,\tau_{b},M_b=0\rangle \rightarrow|1,\tau_c,0\rangle$ is forbidden by $W^{(0)}_{10,10}=0$.
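The vanishing 3$j$-symbol invoked here is easy to check; a short sketch (Python with sympy assumed, not part of the original work):

```python
# Quick check (sympy assumed): with three linearly Z-polarized fields the
# |1,tau_b,0> -> |1,tau_c,0> step would require W^(0)_{10,10} = (1 1 1; 0 0 0),
# which vanishes identically, while the J=1 <-> J=0 steps remain allowed.
from sympy.physics.wigner import wigner_3j

assert wigner_3j(1, 1, 1, 0, 0, 0) == 0   # forbidden J=1 <-> J=1, all-M=0 step
assert wigner_3j(1, 1, 0, 0, 0, 0) != 0   # allowed J=1 <-> J=0, Z-polarized step
```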
So far, we have formed a closed and chirality-dependent single-loop $\Delta$-type configuration described by the Hamiltonian~(\ref{HSL}) for chiral asymmetric top molecules. The $(J,\tau)$-level and $M$-level structures for constructing the single-loop $\Delta$-type configuration (as well as the polarizations of the electromagnetic fields) are given by Fig.~{\ref{Fig2}} and Table~\ref{Tab1}, respectively. In addition, we note that changing the axis of quantization (i.e., changing $Z$ to $X$ or $Y$) yields a configuration equivalent to the one formed above.
\subsection{$M$-level structure with linearly polarized electromagnetic fields}\label{LPF} In the recent experiment~\cite{PRL.111.023008} of enantio-separation based on the (multiple-loop) $\Delta$-type configuration, the three electromagnetic fields are linearly polarized. In this subsection, we consider this situation of purely linearly polarized fields and, with the help of the conditions~(\ref{C1}), prove that the single-loop configuration can be formed only when the polarization vectors of the three electromagnetic fields are mutually perpendicular.
Without loss of generality, we can set $\bm{E}_{1}$ as a linearly $Z$ polarized field. This gives
$|b\rangle=|1,\tau_b,0\rangle$ and \begin{align}\label{E15} \sin\theta_{1}\sin\phi_1=\pm 1. \end{align}
Combining this with the condition $\langle c^{\prime}|H_{total}|b\rangle=0$
and the condition $\langle c|H_{total}|b^{\prime}\rangle=0$, we can prove that both $\bm{E}_2$ and $\bm{E}_3$ are in the $X$-$Y$ plane and perpendicular to $\bm{E}_1$.
Generally, we can set $\bm{E}_{2}$ as a linearly $X$ polarized field. This gives \begin{align}\label{E20} E_{2,1}e^{i\varphi_{2,1}}=-E_{2,-1}e^{i\varphi_{2,-1}}. \end{align}
With the condition $\langle c^{\prime\prime}|H_{total}|b\rangle=0$, we can prove that $\bm{E}_3$ is a linearly $Y$ polarized field and perpendicular to $\bm{E}_2$.
Changing the definition of the coordinates in the space-fixed frame does not alter the physical properties. Thus, for a real single-loop configuration coupled with three linearly polarized fields, we have proven that the polarization vectors of the fields must be mutually perpendicular.
\section{Experimental realization for $1,2$-propanediol}
Now we take $1,2$-propanediol as an example to construct real single-loop $\Delta$-type configurations. The rotational constants and the components of the electric dipole in the molecule frame for $1,2$-propanediol are $A=8,572.05$\,MHz, $B=3,640.10$\,MHz, $C=2,790.96$\,MHz, $|\mu^{m}_x|=1.916$ Debye, $|\mu^{m}_y|=0.365$ Debye, and $|\mu^{m}_z|=1.201$ Debye with
$1~\mathrm{Debye}=3.33564\times10^{-30} \mathrm{C\cdot m}$ \cite{MST}. The $(J,\tau)$-level structure of the single-loop $\Delta$-type configurations is formed by $|J_a,\tau_a\rangle=|0,0\rangle$, $|J_b,\tau_b\rangle=|1,-1\rangle$, and $|J_c,\tau_c\rangle=|1,1\rangle$ as shown in Fig.~\ref{Fig2} (a). Three microwave fields are applied to couple resonantly with the transitions among them, respectively, with the corresponding frequencies $\nu_{1}=f_{ba}=6,431.06$\,MHz, $\nu_{2}=f_{cb}=5,781.09$\,MHz, and $\nu_{3}=f_{ca}=12,212.15$\,MHz. In the current related experiment~\cite{PRL.118.123002}, the coupling strengths (about $10$\,MHz) are much less than the detunings (about $1$\,GHz). All the other transitions are far off-resonant from these three microwave fields and can therefore be ignored. Thus, one can form the chirality-dependent $\Delta$-type configurations for $1,2$-propanediol. \subsection{Single-loop $\Delta$-type configuration with $Z$ and circularly polarized electromagnetic fields}\label{SZC} \begin{figure}
\caption{Real single-loop $\Delta$-type configuration starting from the rotational ground state for $1,2$-propanediol. Three microwave fields with frequencies $\nu_{1}=6,431.06$\,MHz, $\nu_{2}=5,781.09$\,MHz, $\nu_{3}=12,212.15$\,MHz and
polarization vectors labeled with $\sigma_{1}=1$, $\sigma_{2}=-1$, $\sigma_{3}=0$ are applied to resonantly couple, respectively, with cyclic electric-dipole-allowed transitions $|a\rangle\rightarrow|b\rangle\rightarrow|c
\rangle\rightarrow|a\rangle$. The dashed arrow denotes the transition $|1,-1,0\rangle\rightarrow|1,1,-1\rangle$, which is coupled with the $\sigma_{2}=-1$
polarized electromagnetic field but is not involved in the single-loop $\Delta$-type configuration based on $|a\rangle$, $|b\rangle$, and $|c\rangle$.
}
\label{ST}
\end{figure}
In this subsection, we show one of the single-loop $\Delta$-type configurations by choosing the first case of the $M$-level structure in Table~\ref{Tab1}. The working states are $|a\rangle=|J=0,\tau=0,M=0\rangle$, $|b\rangle=|1,-1,1\rangle$, and $|c\rangle=|1,1,0\rangle$. The polarization vectors of the three microwave fields are chosen according to Table~\ref{Tab1}, labeled with $\sigma_{1}=1$, $\sigma_{2}=-1$, and $\sigma_{3}=0$. Thus the three microwave fields are $\bm{E_{1}}=\mathrm{Re} \{\bm{\varepsilon}^{s}_{1}E_{1,1} e^{-i2\pi\nu_1 t}\}$, $\bm{E_{2}}=\mathrm{Re} \{\bm{\varepsilon}^{s}_{-1}E_{2,-1} e^{-i2\pi\nu_2 t}\}$, and $\bm{E_{3}}=\mathrm{Re} \{\bm{\varepsilon}^{s}_{0}E_{3,0} e^{-i2\pi\nu_3 t}\}$ with $E_{1,1}>0$, $E_{2,-1}>0$, and $E_{3,0}>0$. For simplicity, we have set their initial phases to zero.
For clarity, we show in Fig.~\ref{ST} all the magnetic sub-levels in the $\{|J=0,\tau=0\rangle,|1,-1\rangle,|1,1\rangle\}$ subspace and all the electric-dipole-allowed transitions that are coupled resonantly with the three microwave fields. The rotational states $|1,1,1\rangle$ and $|1,-1,-1\rangle$ are decoupled from the chosen microwave fields. Note that the transition $|1,-1,0\rangle\rightarrow|1,1,-1\rangle$ (the dashed arrow in Fig.~\ref{ST}) is also coupled with the field $\bm{E}_2$. However, it is not involved in the single-loop $\Delta$-type configuration constructed by $|a\rangle=|0,0,0\rangle$, $|b\rangle=|1,-1,1\rangle$, and $|c\rangle=|1,1,0\rangle$ in Fig.~\ref{ST}. The corresponding Rabi frequencies are $\Omega_{1}=-{\Gamma_{1-1,00}}{E}_1/{\sqrt{3}}$, $\Omega_{2}=-{\Gamma_{11,1-1}}{E}_2/{\sqrt{6}}$, and $\Omega_{3}=-{\Gamma_{11,00}}{E}_3/{\sqrt{3}}$, where ${E}_1=E_{1,1}$, ${E}_2=E_{2,-1}$, and ${E}_3=E_{3,0}$ are the amplitudes of the three microwave fields.
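As a quick consistency check, the three microwave frequencies used here follow directly from the rotational constants via the $J=0$ and $J=1$ eigenenergies of Sec.~\ref{JT}; a numerical sketch (not the authors' code):

```python
# Numerical sketch: the three resonance frequencies for 1,2-propanediol follow
# from f(1,-1) = B+C, f(1,0) = A+C, f(1,1) = A+B and f(0,0) = 0.
A, B, C = 8572.05, 3640.10, 2790.96  # rotational constants in MHz (values from the text)

nu1 = (B + C) - 0.0      # f_ba: |0,0> -> |1,-1>
nu2 = (A + B) - (B + C)  # f_cb: |1,-1> -> |1,1>, i.e. A - C
nu3 = (A + B) - 0.0      # f_ca: |0,0> -> |1,1>
print(round(nu1, 2), round(nu2, 2), round(nu3, 2))  # 6431.06 5781.09 12212.15
```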
\subsection{Single-loop $\Delta$-type configuration with linearly polarized electromagnetic fields} \begin{figure}
\caption{Real single-loop $\Delta$-type configuration coupled with three linearly polarized electromagnetic fields for $1,2$-propanediol. Their frequencies are $\nu_{1}=5,781.09$\,MHz, $\nu_{2}=6,431.06$\,MHz, $\nu_{3}=12,212.15$\,MHz.
Their polarization vectors are $Z$, $X$, and $Y$ respectively. They are resonantly coupled respectively with cyclic electric-dipole-allowed transitions $|a\rangle\rightarrow|b\rangle\rightarrow|c
\rangle\rightarrow|a\rangle$. Here $|a\rangle=|0,0,0\rangle$, $|b\rangle=|1,-1,0\rangle$, and $|c\rangle=(|1,1,1\rangle+|1,1,-1\rangle)/\sqrt{2}$.
The dashed arrows denote the transitions $|b^{\prime}\rangle\rightarrow|c^{\prime}\rangle$ and
$|b^{\prime\prime}\rangle\rightarrow|c^{\prime}\rangle$, which are coupled with the $X$ polarized field but are not involved in the single-loop $\Delta$-type configuration based on $|a\rangle$, $|b\rangle$, and $|c\rangle$. Here $|b^{\prime}\rangle=|1,-1,1\rangle$, $|b^{\prime\prime}\rangle=-|1,-1,-1\rangle$, $|c^{\prime}\rangle=-|1,1,0\rangle$, and
$|c^{\prime\prime}\rangle=(|1,1,1\rangle-|1,1,-1\rangle)/\sqrt{2}$.}
\label{Fig3}
\end{figure} In this subsection, we show an example of the real single-loop configurations resonantly coupled with three linearly polarized electromagnetic fields (as demonstrated in Fig.~\ref{Fig3}) according to the discussions in Sec.~\ref{LPF}. Here, $\bm{E_{1}}=\mathrm{Re} \{\bm{\varepsilon}^{s}_{0}E_{1,0} e^{-i2\pi\nu_1 t}\}$ with $E_{1,0}>0$, $\bm{E_{2}}=\mathrm{Re} \{(\bm{\varepsilon}^{s}_{1}E_{2,1}+\bm{\varepsilon}^{s}_{-1}E_{2,-1}) e^{-i2\pi\nu_2 t}\}$ with $E_{2,1}=-E_{2,-1}>0$, and $\bm{E_{3}}=\mathrm{Re} \{(\bm{\varepsilon}^{s}_{1}E_{3,1}+\bm{\varepsilon}^{s}_{-1}E_{3,-1})e^{-i2\pi\nu_3 t}\}$ with $E_{3,1}=E_{3,-1}>0$ are linearly $Z$, $X$, and $Y$ polarized electromagnetic fields, respectively. For simplicity, we have set the initial phases of all three fields to zero.
With Eq.~(\ref{Eqb}) and Eq.~(\ref{Eqc}), we have the three working states $|a\rangle=|0,0,0\rangle$, $|b\rangle=|1,-1,0\rangle$, and $|c\rangle=(|1,1,1\rangle+|1,1,-1\rangle)/\sqrt{2}$. The corresponding Rabi frequencies are $\Omega_{1}=-{\Gamma_{1-1,00}}{E}_1/{\sqrt{3}}$, $\Omega_{2}=-{\Gamma_{11,1-1}}{E}_2/{\sqrt{6}}$, and $\Omega_{3}=-{\Gamma_{11,00}}{E}_3/{\sqrt{3}}$, where ${E}_1=E_{1,0}$, ${E}_2=\sqrt{2}E_{2,1}$, and ${E}_3=\sqrt{2}E_{3,1}$ are the intensities of the three microwave fields. Generally, the three Rabi frequencies should be comparable to ensure that the dynamics of each transition can affect the global dynamics of the three-level process. In principle, this can be achieved by adjusting the intensities of the involved microwave fields and/or choosing an appropriate set of three working levels. Here the absolute values of the three transition dipole moments are, respectively, given as
$|\Gamma_{1-1,00}/\sqrt{3}|=|\mu^{m}_{z}|/\sqrt{3}\simeq0.693~\mathrm{Debye}$,
$|\Gamma_{11,1-1}/\sqrt{6}|=|\mu^{m}_{x}|/2\simeq0.958~\mathrm{Debye}$, and
$|\Gamma_{11,00}/\sqrt{3}|=|\mu^{m}_{y}|/\sqrt{3}\simeq0.211~\mathrm{Debye}$. The three Rabi frequencies can be comparable under current experimental conditions~\cite{PRL.118.123002}, where the ratio of the intensities of the three microwave fields is $1:0.75:2.75$, which gives comparable Rabi frequencies $|\Omega_1|:|\Omega_2|:|\Omega_3|\simeq 1:1.04:0.84$. This argument also applies to the example in Sec.~\ref{SZC}, where the Rabi frequencies have the same forms as those given here.
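The quoted ratio can be cross-checked with a few lines of arithmetic. The sketch below (not part of the text's derivation) assumes only that $|\Omega_i|$ is proportional to the product of the effective transition dipole moment and the field intensity, with the numbers given above:

```python
# Cross-check of |Omega_1| : |Omega_2| : |Omega_3| from the numbers quoted above.
# Effective transition dipole moments (in Debye).
dipoles = [0.693, 0.958, 0.211]
# Quoted ratio of the intensities of the three microwave fields.
fields = [1.0, 0.75, 2.75]

# |Omega_i| is proportional to dipole_i * E_i; normalize to |Omega_1|.
omegas = [d * e for d, e in zip(dipoles, fields)]
ratio = [round(w / omegas[0], 2) for w in omegas]
print(ratio)  # -> [1.0, 1.04, 0.84]
```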
We also give the four states
$|b^{\prime}\rangle=|1,-1,1\rangle$, $|b^{\prime\prime}\rangle=-|1,-1,-1\rangle$, $|c^{\prime}\rangle=-|1,1,0\rangle$, and
$|c^{\prime\prime}\rangle=(|1,1,1\rangle-|1,1,-1\rangle)/\sqrt{2}$. The transitions $|b^{\prime}\rangle\rightarrow|c^{\prime}\rangle$ and
$|b^{\prime\prime}\rangle\rightarrow|c^{\prime}\rangle$ are coupled with the $X$ polarized field. However, they are not involved in the single-loop $\Delta$-type configuration.
\section{Summary and Discussion}
In conclusion, by appropriately choosing the frequencies and polarization vectors of three applied electromagnetic fields, we have established a scheme to form a closed, chirality-dependent, real single-loop $\Delta$-type configuration starting from the rotational ground state for chiral asymmetric top molecules, using only electric-dipole-allowed rotational transitions and fully accounting for molecular rotation.
With our scheme, we have overcome the impediment to enantio-separation caused by averaging over the degenerate magnetic sub-levels: by applying previous theoretical proposals~\cite{IS1,IS2,IS3}, an inner state will be occupied by only one of the enantiomers. However, there are other practical impediments to enantio-separation, such as finite temperature and phase mismatching. At finite temperature, the system is initially in a thermal equilibrium state, and the population in the upper states ($|b\rangle$ and $|c\rangle$) will execute the cycle ``in reverse''~\cite{JPCL.6.196}. Extending our results to cases where different vibrational states are involved, we can achieve a higher population difference between the initially driven states and thus increase the enantio-separation. Since the wave-vectors ($\bm{k}_1$, $\bm{k}_2$, and $\bm{k}_3$)
of the three electromagnetic fields cannot be parallel, there is inevitably phase mismatching in practice, which impedes the enantio-separation~\cite{JPCL.6.196}. In our discussion, we have $|\bm{k}_1|>|\bm{k}_2|>|\bm{k}_3|$. In order to minimize the effect of the phase mismatching, we should take $\bm{k}_1$ and $\bm{k}_2$ to be parallel and $\bm{k}_3$ to be perpendicular to them~\cite{KK}.
In addition, systems with $\Delta$-type configurations are also used in enantio-discrimination experiments~\cite{Nature.497.475,PRL.111.023008, JPCL.6.196,Angew.Chem.10.1002,PCCP.16.11114,ACI,JPCL.7.341,JCP.142.214201}. The $\Delta$-type configuration used in the experiment of Ref.~\cite{Nature.497.475} also starts from the rotational ground state and is thus similar to the case we consider here. However, the configuration in that experiment~\cite{Nature.497.475} is not a single-loop $\Delta$-type one: the upper two levels are off-resonantly coupled by a time-varying electric field, which couples other levels to the configuration, and these couplings cannot be ignored. Using our scheme to form a real single-loop $\Delta$-type configuration may help to improve the enantio-discrimination efficiency in experiments.
\section{Calculation of $|b\rangle$ and $|c\rangle$ } \label{AP1}
For the transition $|J_a,\tau_a\rangle\rightarrow|J_b,\tau_b\rangle$ coupled with $\bm{E}_1$, we have \begin{align}\label{APH1} &H_{1}=\frac{1}{2}\Gamma_{1\tau_b,00}(-
{E}_{1,1}e^{i\varphi_{1,1}}W^{(1)}_{11,00}|J_b,\tau_b,1\rangle\langle0,0,0|\nonumber\\
&+{E}_{1,0}e^{i\varphi_{1,0}}W^{(0)}_{1 0,0 0}|J_b,\tau_b,0\rangle\langle0,0,0|-{E}_{1,-1}e^{i\varphi_{1,-1}}W^{(-1)}_{1-1,00}\nonumber\\
&\times|J_b,\tau_b,-1\rangle\langle0,0,0|)+h.c.\nonumber\\ &=-\frac{1}{2}\frac{\Gamma_{1\tau_b,00}}{\sqrt{3}}(
{E}_{1,1}e^{i\varphi_{1,1}}|J_b,\tau_b,1\rangle\langle a|+{E}_{1,0}e^{i\varphi_{1,0}}|J_b,\tau_b,0\rangle\langle a|\nonumber\\
&+{E}_{1,-1}e^{i\varphi_{1,-1}}|J_b,\tau_b,-1\rangle\langle a|)+h.c.\nonumber\\
&=\frac{1}{2}\Omega_{1}|b\rangle\langle a|+h.c. \end{align} Here $\Omega_1=-\frac{\Gamma_{1\tau_b,00}}{\sqrt{3}}E_{1}$ and \begin{align}\label{beq}
&|b\rangle=
\frac{{E}_{1,1}e^{i\varphi_{1,1}}}{E_{1}}|J_b,\tau_b,1\rangle
+\frac{{E}_{1,0}e^{i\varphi_{1,0}}}{E_{1}}|J_b,\tau_b,0\rangle\nonumber\\
&+\frac{{E}_{1,-1}e^{i\varphi_{1,-1}}}{E_{1}}|J_b,\tau_b,-1\rangle. \end{align}
Setting $\sin\theta_1\cos\phi_1=E_{1,1}/E_{1}$, $\sin\theta_1\sin\phi_1=E_{1,0}/E_{1}$, and $\cos\theta_1=E_{1,-1}/E_{1}$ (which requires $E_{1}=\sqrt{E_{1,1}^{2}+E_{1,0}^{2}+E_{1,-1}^{2}}$), we have \begin{align}
&|b\rangle=\sin\theta_1\cos\phi_1e^{i\varphi_{1,1}}|J_b,\tau_b,1\rangle +\sin\theta_1\sin\phi_1e^{i\varphi_{1,0}}\nonumber\\
&\times|J_b,\tau_b,0\rangle
+\cos\theta_1e^{i\varphi_{1,-1}}|J_b,\tau_b,-1\rangle, \end{align} \begin{align}\label{EqB1}
&|b^{\prime}\rangle=\sin\phi_1e^{i\varphi_{1,1}}|J_b,\tau_b,1\rangle
-\cos\phi_1e^{i\varphi_{1,0}}|J_b,\tau_b,0\rangle, \end{align} and \begin{align}\label{EqB2}
&|b^{\prime\prime}\rangle=\cos\theta_1\cos\phi_1e^{i\varphi_{1,1}}|J_b,\tau_b,1\rangle +\cos\theta_1\sin\phi_1e^{i\varphi_{1,0}}\nonumber\\
&\times|J_b,\tau_b,0\rangle
-\sin\theta_1e^{i\varphi_{1,-1}}|J_b,\tau_b,-1\rangle. \end{align}
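As a numerical sanity check (a sketch outside the derivation), the parametrized states $|b\rangle$, $|b^{\prime}\rangle$, and $|b^{\prime\prime}\rangle$ above, written as coefficient vectors over $(|J_b,\tau_b,1\rangle,|J_b,\tau_b,0\rangle,|J_b,\tau_b,-1\rangle)$, form an orthonormal basis for arbitrary angles and phases:

```python
import cmath
import math
import random

# Coefficient vectors of b, b', b'' for random angles theta_1, phi_1 and
# random phases varphi_{1,sigma}, following the parametrization above.
random.seed(0)
th, ph = random.uniform(0, math.pi), random.uniform(0, 2 * math.pi)
e1, e0, em = (cmath.exp(1j * random.uniform(0, 2 * math.pi)) for _ in range(3))

b = [math.sin(th) * math.cos(ph) * e1, math.sin(th) * math.sin(ph) * e0, math.cos(th) * em]
bp = [math.sin(ph) * e1, -math.cos(ph) * e0, 0]
bpp = [math.cos(th) * math.cos(ph) * e1, math.cos(th) * math.sin(ph) * e0, -math.sin(th) * em]

def inner(u, v):
    """Hermitian inner product <u|v>."""
    return sum(x.conjugate() * y for x, y in zip(u, v))

print(all(abs(inner(u, u) - 1) < 1e-12 for u in (b, bp, bpp)))                    # -> True
print(all(abs(inner(u, v)) < 1e-12 for u, v in ((b, bp), (b, bpp), (bp, bpp))))  # -> True
```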
For the transition $|J_a,\tau_a\rangle\rightarrow|J_c,\tau_c\rangle$ coupled with $\bm{E}_3$, we have \begin{align} &H_{3}=\frac{1}{2}\Gamma_{1\tau_c,00}(
-{E}_{3,1}e^{i\varphi_{3,1}}W^{(1)}_{1 1,00}|J_c,\tau_c,1\rangle\langle0,0,0|\nonumber\\
&+{E}_{3,0}e^{i\varphi_{3,0}}W^{(0)}_{1 0,00}|J_c,\tau_c,0\rangle\langle0,0,0|\nonumber\\
&-{E}_{3,-1}e^{i\varphi_{3,-1}}W^{(-1)}_{1-1,0 0}|J_c,\tau_c,-1\rangle\langle0,0,0|)+h.c. \end{align} This can be arranged as
$H_{3}=\frac{1}{2}\Omega_{3}|c\rangle\langle a|+h.c.$ with $\Omega_3=-\frac{\Gamma_{1\tau_c,00}}{\sqrt{3}}E_{3}$ and \begin{align}\label{ceq}
&|c\rangle=
\frac{{E}_{3,1}e^{i\varphi_{3,1}}}{E_{3}}|J_c,\tau_c,1\rangle
+\frac{{E}_{3,0}e^{i\varphi_{3,0}}}{E_{3}}|J_c,\tau_c,0\rangle\nonumber\\
&+\frac{{E}_{3,-1}e^{i\varphi_{3,-1}}}{E_{3}}|J_c,\tau_c,-1\rangle. \end{align}
Setting $\sin\theta_3\cos\phi_3=E_{3,1}/E_{3}$, $\sin\theta_3\sin\phi_3=E_{3,0}/E_{3}$, and $\cos\theta_3=E_{3,-1}/E_{3}$ (with $E_{3}=\sqrt{E_{3,1}^{2}+E_{3,0}^{2}+E_{3,-1}^{2}}$), we have \begin{align}\label{EqC}
&|c\rangle=\sin\theta_3\cos\phi_3e^{i\varphi_{3,1}}|J_c,\tau_c,1\rangle +\sin\theta_3\sin\phi_3e^{i\varphi_{3,0}}\nonumber\\
&\times|J_c,\tau_c,0\rangle
+\cos\theta_3e^{i\varphi_{3,-1}}|J_c,\tau_c,-1\rangle, \end{align} \begin{align}\label{EqC1}
&|c^{\prime}\rangle=\sin\phi_3e^{i\varphi_{3,1}}|J_c,\tau_c,1\rangle
-\cos\phi_3e^{i\varphi_{3,0}}|J_c,\tau_c,0\rangle, \end{align} and \begin{align}\label{EqC2}
&|c^{\prime\prime}\rangle=\cos\theta_3\cos\phi_3e^{i\varphi_{3,1}}|J_c,\tau_c,1\rangle +\cos\theta_3\sin\phi_3e^{i\varphi_{3,0}}\nonumber\\
&\times|J_c,\tau_c,0\rangle
-\sin\theta_3e^{i\varphi_{3,-1}}|J_c,\tau_c,-1\rangle. \end{align}
\section{Specific expression of $\langle c|H_{total}|b^{\prime}\rangle$ etc.}\label{AP2}
In order to calculate the specific expression of $\langle c|H_{total}|b^{\prime}\rangle$ etc., we first give the Hamiltonian for the transition $|J_b,\tau_b\rangle\rightarrow|J_c,\tau_c\rangle$ coupled with $\bm{E}_2$. It can be expressed as \begin{align} H_{2}=\sum_{\sigma=0,\pm1}H_{2,\sigma}. \end{align} Here \begin{align}
&H_{2,1}=\frac{1}{2}E_{2,1}e^{i\varphi_{2,1}}\Gamma_{1\tau_c,1\tau_b}(W^{(1)}_{1 0,1 -1}|J_c,\tau_c,0\rangle\langle J_b,\tau_b,-1|\nonumber\\
&-W^{(1)}_{1 1,1 0}|J_c,\tau_c,1\rangle\langle J_b,\tau_b,0|)+h.c.\nonumber\\
&=-\frac{E_{2,1}e^{i\varphi_{2,1}}\Gamma_{1\tau_c,1\tau_b}}{2\sqrt{6}}(|J_c,\tau_c,0\rangle\langle J_b,\tau_b,-1|\nonumber\\
&+|J_c,\tau_c,1\rangle\langle J_b,\tau_b,0|)+h.c., \end{align} \begin{align}
&H_{2,0}=-\frac{1}{2}E_{2,0}e^{i\varphi_{2,0}}\Gamma_{1\tau_c,1\tau_b}(W^{(0)}_{1 -1,1 -1}|J_c,\tau_c,-1\rangle\langle J_b,\tau_b,-1|\nonumber\\
&+W^{(0)}_{1 1,1 1}|J_c,\tau_c,1\rangle\langle J_b,\tau_b,1|)+h.c.\nonumber\\
&=-\frac{E_{2,0}e^{i\varphi_{2,0}}\Gamma_{1\tau_c,1\tau_b}}{2\sqrt{6}}(|J_c,\tau_c,-1\rangle\langle J_b,\tau_b,-1|\nonumber\\
&-|J_c,\tau_c,1\rangle\langle J_b,\tau_b,1|)+h.c., \end{align} and \begin{align}
&H_{2,-1}=\frac{1}{2}{E}_{2,-1}e^{i\varphi_{2,-1}}\Gamma_{1\tau_c,1\tau_b}(-W^{(-1)}_{1 -1,1 0}|J_c,\tau_c,-1\rangle\langle J_b,\tau_b,0|\nonumber\\
&+W^{(-1)}_{1 0,1 1}|J_c,\tau_c,0\rangle\langle J_b,\tau_b,1|)+h.c.\nonumber\\
&=\frac{{E}_{2,-1}e^{i\varphi_{2,-1}}\Gamma_{1\tau_c,1\tau_b}}{2\sqrt{6}}(|J_c,\tau_c,-1\rangle\langle J_b,\tau_b,0|\nonumber\\
&+|J_c,\tau_c,0\rangle\langle J_b,\tau_b,1|)+h.c. \end{align} Thus, we have \begin{align}\label{bceq} &H_{2}=\frac{\Gamma_{1\tau_c,1\tau_b}E_{2}}{2\sqrt{6}}(-\sin\theta_2\cos\phi_2e^{i\varphi_{2,1}}
|J_c,\tau_c,0\rangle\langle J_b,\tau_b,-1|\nonumber\\
&-\sin\theta_2\cos\phi_2e^{i\varphi_{2,1}}|J_c,\tau_c,1\rangle\langle J_b,\tau_b,0|\nonumber\\
&-\sin\theta_2\sin\phi_2e^{i\varphi_{2,0}}|J_c,\tau_c,-1\rangle\langle J_b,\tau_b,-1|\nonumber\\
&+\sin\theta_2\sin\phi_2e^{i\varphi_{2,0}}|J_c,\tau_c,1\rangle\langle J_b,\tau_b,1|\nonumber\\
&+\cos\theta_2e^{i\varphi_{2,-1}}|J_c,\tau_c,-1\rangle\langle J_b,\tau_b,0|\nonumber\\
&+\cos\theta_2e^{i\varphi_{2,-1}}|J_c,\tau_c,0\rangle\langle J_b,\tau_b,1|)+h.c. \end{align} Here, we use $\sin\theta_2\cos\phi_2=E_{2,1}/E_{2}$, $\sin\theta_2\sin\phi_2=E_{2,0}/E_{2}$, and $\cos\theta_2=E_{2,-1}/E_{2}$. We have \begin{align}\label{F1}
&\langle c|H_{total}|b\rangle=\langle c|H_{2}|b\rangle=\frac{\Gamma_{1\tau_c,1\tau_b}E_{2}}{2\sqrt{6}}\times\nonumber\\ &[-\cos\theta_1\sin\theta_2\cos\phi_2\sin\theta_3\sin\phi_3e^{i(\varphi_{1,-1}+\varphi_{2,1}-\varphi_{3,0})}\nonumber\\ &-\cos\theta_1\sin\theta_2\sin\phi_2\cos\theta_3e^{i(\varphi_{1,-1}+\varphi_{2,0}-\varphi_{3,-1})}\nonumber\\ &-\sin\theta_1\sin\phi_1\sin\theta_2\cos\phi_2\sin\theta_3\cos\phi_3e^{i(\varphi_{1,0}+\varphi_{2,1}-\varphi_{3,1})}\nonumber\\ &+\sin\theta_1\sin\phi_1\cos\theta_2\cos\theta_3e^{i(\varphi_{1,0}+\varphi_{2,-1}-\varphi_{3,-1})}\nonumber\\ &+\sin\theta_1\cos\phi_1\sin\theta_2\sin\phi_2\sin\theta_3\cos\phi_3e^{i(\varphi_{1,1}+\varphi_{2,0}-\varphi_{3,1})}\nonumber\\ &-\sin\theta_1\cos\phi_1\cos\theta_2\sin\theta_3\sin\phi_3e^{i(\varphi_{1,1}+\varphi_{2,-1}-\varphi_{3,0})}]\nonumber\\ &=\frac{1}{2}\Omega_2, \end{align} \begin{align}\label{F2}
&\langle c^{\prime}|H_{total}|b\rangle=\langle c^{\prime}|H_{2}|b\rangle=\frac{\Gamma_{1\tau_c,1\tau_b}E_{2}}{2\sqrt{6}}\times\nonumber\\ &[\cos\theta_1\sin\theta_2\cos\phi_2\cos\phi_3e^{i(\varphi_{1,-1}+\varphi_{2,1}-\varphi_{3,0})}\nonumber\\ &-\sin\theta_1\sin\phi_1\sin\theta_2\cos\phi_2\sin\phi_3e^{i(\varphi_{1,0}+\varphi_{2,1}-\varphi_{3,1})}\nonumber\\ &+\sin\theta_1\cos\phi_1\sin\theta_2\sin\phi_2\sin\phi_3e^{i(\varphi_{1,1}+\varphi_{2,0}-\varphi_{3,1})}\nonumber\\ &-\sin\theta_1\cos\phi_1\cos\theta_2\cos\phi_3e^{i(\varphi_{1,1}+\varphi_{2,-1}-\varphi_{3,0})}] \nonumber\\ &=0, \end{align} \begin{align}\label{F3}
&\langle c^{\prime\prime}|H_{total}|b\rangle=\langle c^{\prime\prime}|H_{2}|b\rangle=\frac{\Gamma_{1\tau_c,1\tau_b}E_{2}}{2\sqrt{6}}\times\nonumber\\ &[-\cos\theta_1\sin\theta_2\cos\phi_2\cos\theta_3\sin\phi_3e^{i(\varphi_{1,-1}+\varphi_{2,1}-\varphi_{3,0})}\nonumber\\ &+\cos\theta_1\sin\theta_2\sin\phi_2\sin\theta_3e^{i(\varphi_{1,-1}+\varphi_{2,0}-\varphi_{3,-1})}\nonumber\\ &-\sin\theta_1\sin\phi_1\sin\theta_2\cos\phi_2\cos\theta_3\cos\phi_3e^{i(\varphi_{1,0}+\varphi_{2,1}-\varphi_{3,1})}\nonumber\\ &-\sin\theta_1\sin\phi_1\cos\theta_2\sin\theta_3e^{i(\varphi_{1,0}+\varphi_{2,-1}-\varphi_{3,-1})}\nonumber\\ &+\sin\theta_1\cos\phi_1\sin\theta_2\sin\phi_2\cos\theta_3\cos\phi_3e^{i(\varphi_{1,1}+\varphi_{2,0}-\varphi_{3,1})} \nonumber\\ &+\sin\theta_1\cos\phi_1\cos\theta_2\cos\theta_3\sin\phi_3e^{i(\varphi_{1,1}+\varphi_{2,-1}-\varphi_{3,0})}] \nonumber\\ &=0, \end{align} \begin{align}\label{F4}
&\langle c|H_{total}|b^{\prime}\rangle=\langle c|H_{2}|b^{\prime}\rangle=\frac{\Gamma_{1\tau_c,1\tau_b}E_{2}}{2\sqrt{6}}\times\nonumber\\ &[\cos\phi_1\sin\theta_2\cos\phi_2\sin\theta_3\cos\phi_3e^{i(\varphi_{1,0}+\varphi_{2,1}-\varphi_{3,1})}\nonumber\\ &-\cos\phi_1\cos\theta_2\cos\theta_3e^{i(\varphi_{1,0}+\varphi_{2,-1}-\varphi_{3,-1})}\nonumber\\ &+\sin\phi_1\sin\theta_2\sin\phi_2\sin\theta_3\cos\phi_3e^{i(\varphi_{1,1}+\varphi_{2,0}-\varphi_{3,1})}\nonumber\\ &-\sin\phi_1\cos\theta_2\sin\theta_3\sin\phi_3e^{i(\varphi_{1,1}+\varphi_{2,-1}-\varphi_{3,0})}] \nonumber\\ &=0, \end{align} \begin{align}\label{F5}
&\langle c|H_{total}|b^{\prime\prime}\rangle=\langle c|H_{2}|b^{\prime\prime}\rangle=\frac{\Gamma_{1\tau_c,1\tau_b}E_{2}}{\sqrt{6}}\times\nonumber\\ &[\sin\theta_1\sin\theta_2\cos\phi_2\sin\theta_3\sin\phi_3e^{i(\varphi_{1,-1}+\varphi_{2,1}-\varphi_{3,0})}\nonumber\\ &+\sin\theta_1\sin\theta_2\sin\phi_2\cos\theta_3e^{i(\varphi_{1,-1}+\varphi_{2,0}-\varphi_{3,-1})}\nonumber\\ &-\cos\theta_1\sin\phi_1\sin\theta_2\cos\phi_2\sin\theta_3\cos\phi_3e^{i(\varphi_{1,0}+\varphi_{2,1}-\varphi_{3,1})}\nonumber\\ &+\cos\theta_1\sin\phi_1\cos\theta_2\cos\theta_3e^{i(\varphi_{1,0}+\varphi_{2,-1}-\varphi_{3,-1})}\nonumber\\ &+\cos\theta_1\cos\phi_1\sin\theta_2\sin\phi_2\sin\theta_3\cos\phi_3e^{i(\varphi_{1,1}+\varphi_{2,0}-\varphi_{3,1})}\nonumber\\ &-\cos\theta_1\cos\phi_1\cos\theta_2\sin\theta_3\sin\phi_3e^{i(\varphi_{1,1}+\varphi_{2,-1}-\varphi_{3,0})}] \nonumber\\ &=0. \end{align}
\section{$M$-level structure with linearly polarized electromagnetic fields}\label{AP3}
With $\langle c^{\prime}|H_{total}|b\rangle=0$, we have \begin{align}\label{E16} \sin\theta_2\cos\phi_2\sin\phi_3=0. \end{align} This gives $E_{2,1}\propto\sin\theta_2\cos\phi_2=0$ or $E_{3,0}\propto\sin\phi_3=0$. If $E_{2,1}=0$, then $E_{2,-1}$ must also vanish; otherwise,
$\bm{E}_2$ would not be a linearly polarized field. In this case, we would have $|c\rangle=|1,\tau_c,0\rangle$. However, the transition $|1,\tau_b,0\rangle\rightarrow
|1,\tau_c,0\rangle$ is forbidden. Therefore, we have to choose \begin{align}\label{E17} E_{3,0}\propto\sin\phi_3=0,~{E}_{2,1}\propto\sin\theta_2\cos\phi_2\ne 0. \end{align} That means $\bm{E}_3$ should lie in the $X-Y$ plane.
With the above results and the condition $\langle c|H_{total}|b^{\prime}\rangle=0$ in Eq. (\ref{F4}), we have \begin{align}\label{E18} \sin\theta_2\sin\phi_2\sin\theta_3=0. \end{align} If $\sin\theta_{3}=0$, we have $E_{3,1}=0$; together with $E_{3,0}=0$ [see Eq. (\ref{E17})], this would leave $\bm{E}_3$ with only its $E_{3,-1}$ component, so $\bm{E}_3$ could not be a linearly polarized field. Thus we have to make \begin{align}\label{E19} E_{2,0}\propto\sin\theta_2\sin\phi_2=0,~E_{3,1}\propto\sin\theta_{3}\ne0. \end{align}
That means $\bm{E}_2$ is also in the $X-Y$ plane. With $\sin\phi_3=0$ and $\sin\theta_2\sin\phi_2=0$, the condition $\langle c|H_{total}|b^{\prime\prime}\rangle=0$ in Eq. (\ref{F5}) is satisfied.
Now, we have proven that $\bm{E}_2$ and $\bm{E}_3$ are in the $X-Y$ plane. Thus they are perpendicular to $\bm{E}_1$. Generally, we can set $\bm{E}_{2}$ as a linearly $X$ polarized field. This gives \begin{align}\label{E20} E_{2,1}e^{i\varphi_{2,1}}=-E_{2,-1}e^{i\varphi_{2,-1}}. \end{align}
With the above results and $\langle c^{\prime\prime}|H_{total}|b\rangle=0$, we have \begin{align}\label{F33} E_{3,-1}e^{i\varphi_{3,-1}}=E_{3,1}e^{i\varphi_{3,1}}. \end{align} Therefore, $\bm{E}_3$ is a linearly $Y$ polarized field and perpendicular to $\bm{E}_{2}$.
\end{document} |
\begin{document}
\title{Group Evacuation on a Line by Agents with Different Communication Abilities\thanks{This research is supported by NSERC, Canada.}}
\maketitle
\begin{abstract} We consider evacuation of a group of $n \geq 2$ autonomous mobile agents (or robots) from an unknown exit on an infinite line. The agents are initially placed at the origin of the line, can move in any direction at any speed up to the maximum speed $1$, and can all communicate when they are co-located. However, the agents have different wireless communication abilities: while some are fully wireless and can send and receive messages at any distance, a subset of the agents are {\em senders}, who can only transmit messages wirelessly, and the rest are {\em receivers}, who can only receive messages wirelessly. The agents start at the same time and their communication abilities are known to each other from the start. Starting at the origin of the line, the goal of the agents is to collectively find a target/exit at an unknown location on the line while minimizing the {\em evacuation time}, defined as the time when the last agent reaches the target.
We investigate the impact of such a mixed communication model on evacuation time on an infinite line for a group of cooperating agents. In particular, we provide evacuation algorithms and analyze the resulting competitive ratio ($CR$) of the evacuation time for such a group of agents. If the group has two agents of two different types, we give an {\em optimal} evacuation algorithm with competitive ratio $CR=3+2 \sqrt{2}$. If there is a single sender or fully wireless agent, and multiple receivers we prove that $CR \in [2+\sqrt{5},5]$, and if there are multiple senders and a single receiver or fully wireless agent, we show that $CR \in [3,5.681319]$. Any group consisting of only senders or only receivers requires competitive ratio 9, and any other combination of agents has competitive ratio 3.
\noindent {\bf Keywords and phrases:} Agent, Communication, Evacuation, Mobile, Receiver, Search, Sender. \end{abstract}
\section{Introduction}
Search by a group of cooperating autonomous mobile robots for a target in a given domain is a fundamental topic in theoretical computer science. In the search problem one is interested in finding a target at an unknown location as soon as possible. In the related {\em evacuation problem} one is interested in optimizing the time it takes the last robot in the group to find the target, often called the {\em exit}. There has been a lot of interest in understanding the impact of communication between agents on search and evacuation time in distributed computing. The design of optimal robot trajectories leading to tight bounds depends not only on the fault-tolerant characteristics of the agents but also on the communication model employed (see \cite{isaacCzyzowiczGKKNOS16,PODC16}). In previous works, agents are assumed either to have wireless communication abilities, i.e., they can both transmit and receive messages across any distance \cite{chrobak2015group} or across a limited distance \cite{Bagheri2019}, or to have no wireless communication abilities, so that they can only communicate when they are {\em face-to-face (F2F)}, i.e., co-located. In terms of communication abilities, the agents are identical.
The present work considers evacuation on an infinite line by a group $G$ of cooperating robots (initially located at the origin) whose wireless communication abilities differ, which compels them to employ a {\em mixed} communication model. At a {\em rudimentary} level they can always communicate reliably F2F. However, some agents in $G$ are {\em senders}, which can transmit messages wirelessly at any distance but only receive F2F; others are {\em receivers}, which can receive messages wirelessly from any distance but can transmit only F2F; and the remaining agents are fully wireless, and can both send and receive messages wirelessly. This situation might occur because it is cheaper to build agents with limited wireless capabilities, or because the sender or receiver module failed in receiver or sender robots, respectively. Further, we assume the capabilities of the robots are known to each other in advance and remain the same for the duration of an evacuation algorithm. Robots can move at any speed up to the maximum speed $1$. We give upper and lower bounds on the competitive ratio of evacuation algorithms, depending on the number of senders and receivers among the agents.
If there are at least two fully wireless agents in the group, then the optimal competitive ratio is 3, see \cite{chrobak2015group}. By pairing up a sender and a receiver we can simulate a fully wireless agent. Consequently, if there is one fully wireless agent, one sender agent, and one receiver agent, the competitive ratio is 3. Consider now the case when there is one fully wireless agent, and one or more senders. Since the sender agents cannot receive wireless transmissions, the sending capabilities of the fully wireless agent are useless, and it is equivalent to a receiver agent. Similarly if there is one fully wireless agent, and one or more receivers, the receiving module in the fully wireless agent is useless, and it is equivalent to a sender agent.
Thus we no longer consider fully wireless agents, and only consider sender and receiver agents.
When all of the agents are senders, or all of them are receivers, the only possible mode of communication between agents is F2F; in this case, it has previously been demonstrated that the optimal competitive ratio of evacuation for $n$ F2F agents is $9$. If there are at least two sender agents and two receiver agents, by pairing up sender and receiver agents, we obtain a competitive ratio of 3.
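For intuition, the competitive ratio $9$ for F2F agents can be reproduced numerically with the classical single-searcher doubling (zig-zag) strategy. The sketch below uses turning points at distances $2^k$, the standard textbook construction (an assumption for illustration, not something specific to our model):

```python
# Worst case for the doubling strategy: the searcher turns around at
# distances 1, 2, 4, ..., alternating sides; the adversary places the target
# just beyond turning point 2**k on the side the searcher has just left,
# so it is found only two passes later.
def worst_ratio(k, eps=1e-9):
    d = 2 ** k + eps                                # target distance
    wasted = 2 * sum(2 ** j for j in range(k + 2))  # back-and-forth before the find
    return (wasted + d) / d

print(round(worst_ratio(30), 6))  # -> 9.0 (tends to 9 from below as k grows)
```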
It follows that the
only interesting cases to consider are when there is exactly one sender and one receiver; one sender and several receivers; one receiver and several senders. These are the cases investigated in detail in this paper.
\subsection{Model and preliminaries}
We consider the problem of evacuation by $n \geq 2$ mobile agents beginning at the origin of the infinite line. All agents are assumed to have maximum speed 1 and can move in either the positive direction (referred to as moving to the right) or the negative direction (referred to as moving to the left). The agents may change their speed and the direction of motion instantaneously and arbitrarily often. Moreover, the robots can choose any speed as long as it does not exceed the maximum speed 1.
All agents have the ability to communicate F2F, however the wireless communication abilities of the agents are limited and are not all the same. Indeed the group of $n$ agents consists of a subset of agents that can only send wireless messages, called {\em senders}, and a subset that can only receive wireless messages, called {\em receivers}. We represent by $n_s \geq 0$ and $n_r = n-n_s$ the number of senders and receivers respectively.
The cost of an algorithm for the evacuation problem on a given instance of the problem is the time the {\em last} agent reaches the target, called the {\em evacuation time}. We denote by $E(\mathcal{A},x)$ the evacuation time of algorithm $\mathcal{A}$ when the target is at location $x$. Note that an offline algorithm in which agents know the position of the target can reach it in time $|x|$. The goal is to minimize the {\em competitive ratio}, denoted by $\textsc{CR}$, defined as the supremum, over all target locations at distance greater than $1$ from the origin, of the normalized cost $E(\mathcal{A},x)/|x|$, i.e.,
$\textsc{CR}(\mathcal{A}) := \sup_{|x| > 1}\frac{E(\mathcal{A},x)}{|x|}.$
An evacuation algorithm can be primarily viewed as a set of trajectories, one for each agent. The trajectory of an agent specifies where the agent should be located at any given time. More specifically, the trajectory of an agent is a continuous mapping from the non-negative reals (i.e. time) to the reals (i.e., position on the line). In general, we will represent the trajectory of an agent using the notation $X = X(t)$ with the interpretation that the agent with trajectory $X$ will be located at position $X(t)$ at time $t$.
Due to our assumption that the agents have maximum unit speed, an agent trajectory $X$ must satisfy
$|X(t')-X(t)| \leq t'-t,\quad \forall t' \geq t \geq 0.$
Agents are assumed to begin their search at the origin and so we must also have
$X(0) = 0.$
Taken together, these equations imply that
$|X(t)| \leq t,\ \forall t \geq 0.$
We assume that the agents are labelled so that we may assign a specific trajectory to a specific agent. Each agent is assumed to know the trajectory of all other agents.
All agents follow their assigned trajectories until they either find the target or are otherwise notified of the target's location. What an agent does in the event that it finds the target depends on the communication ability of the finder and of the other agents. For example, if the finder is a sender and all other agents are receivers, then the sender can immediately notify the other agents, who can then proceed to move at full speed to the target. On the other hand, if the finder is a receiver, then it must move to notify one or more other agents of the target's location. In any case, the cost of the algorithm will depend both on the trajectories assigned to the agents to search for the target and on the subsequent phase of informing all agents and having the agents travel to the target.
\subsection{Related work}
Search by a single agent on the infinite line was initiated independently by Beck \cite{beck1964linear,beck1965more,beck1970yet} and Bellman \cite{bellman1963optimal}. These seminal papers proved the competitive ratio $9$ for search on an infinite line and also gave the impetus for additional studies, including those by Gal \cite{gal1972general}, which proposes a minimax solution for a game in which player I chooses a real number and player II seeks it by choosing a trajectory represented by a positive function, Fristedt~\cite{fristedt1977hide}, Fristedt and Heath~\cite{fristedt_heath_1974}, and Baeza-Yates et al.~\cite{baezayates1993searching,baezayates1995parallel}, where search by agents in domains other than the line was proposed, e.g. the ``Lost Cow'' problem in the plane or at the origin of $w$ concurrent rays. Additional information on search games and rendezvous can also be found in the book~\cite{alpern2006theory}.
Group search on an infinite line has been researched in several papers and under various models. Evacuation by multiple cooperating robots was proposed in \cite{chrobak2015group}, for the case where the robots can communicate only F2F. More recently, search on the line was considered for two robots which have distinct speeds in \cite{bampas2019linear} and in \cite{demaine2006online} when turning costs are taken into account. In addition, in two papers \cite{CGKKKLNOS19wireless,czyzowicz2021time} the authors are concerned with minimizing the energy consumed during the search.
There are several types of robot communication models in the literature. The most restricted type of robot communication is F2F in which robots may exchange messages only when they are co-located. At the other extreme is wireless in which robots may communicate regardless of how far apart they are~\cite{georMAC}. A model where the wireless communication range is limited has been explored for the equilateral triangle domain in \cite{Bagheri2019}.
Within these communication models researchers have considered search with crash~\cite{PODC16} and Byzantine~\cite{isaacCzyzowiczGKKNOS16} faults. The former are innocent faults caused by robot sensor malfunctioning, leaving the robots unable to communicate and/or perform their tasks. The latter, however, are malicious faults (intentional or otherwise), in that the robots may lie and communicate wrong information. Lower bounds for search in the crash fault model are proved in~\cite{kupavskii2018lower} and for Byzantine faults in~\cite{Sun2020}. The competitive ratio for search and evacuation in the near-majority case (of $n=2f+1$ robots with $f$ faulty) is a notoriously hard problem, and additional results can be found in the recent paper~\cite{ryanSIAM}. Additional information and results can also be found in the recent PhD thesis~\cite{phdthesisryan}.
In our paper we investigate evacuation time by agents with different wireless communication abilities; as stated earlier, some of them can only transmit wirelessly but not receive, and the others can only receive messages sent wirelessly but not transmit. To our knowledge, in all previous works, the communication abilities of agents were identical; group search or evacuation by agents of different communication abilities has not been studied before.
\subsection{Results} As mentioned above, we need to consider three cases: when there is exactly one sender and one receiver; one sender and several receivers; one receiver and several senders.
In Section~\ref{sec:upper_bounds} we give evacuation algorithms and analyze upper bounds on the competitive ratios for each of these three cases. When we have one receiver and one sender agent, i.e., $n_s=n_r=1$, we give an evacuation algorithm whose competitive ratio is at most $3+2\sqrt{2}$. In the case
of one sender and several receivers, i.e., $n_s=1$ and $n_r>1$, our evacuation algorithm has competitive ratio at most $5$, and when $n_s > 1$ and $n_r=1$ we specify an algorithm with competitive ratio at most $\approx 5.681319$. These results can be found in Theorems~\ref{thm:1s_1r_ub}, \ref{thm:1s_2r_ub}, and \ref{thm:2s_1r_ub} respectively.
In Section~\ref{sec:lower_bounds} we consider lower bounds on the competitive ratio of evacuation algorithms for only the cases with only one sender agent, i.e., $n_s=1$. In particular, we prove a lower bound matching our upper bound for the case of $n_s=n_r=1$, which proves the optimality of our algorithm. For the case of $n_s=1$ and $n_r>1$ we demonstrate that the evacuation cannot be completed with a competitive ratio less than $2+\sqrt{5}$. We conclude the paper with a discussion of open problems in Section~\ref{sec:conclusions}.
\section{Evacuation Algorithms and their Competitive Ratios}
\label{sec:upper_bounds}
In this section we give evacuation algorithms for our communication model and investigate their competitive ratios. We consider the cases separately: first a single sender and a single receiver, then a single sender and multiple receivers, and finally multiple senders and a single receiver.
\subsection{One sender, one receiver}
\begin{theorem}\label{thm:1s_1r_ub}
When $n_s=n_r=1$ there exists an evacuation algorithm with competitive ratio $3+2\sqrt{2}$.
\end{theorem}
\begin{proof}
The proof is constructive and based on the following algorithm: the receiver moves to the left at unit speed and the sender moves to the right with speed $\sqrt{2}-1$. If the sender finds the target first then it notifies the receiver (wirelessly) and the receiver moves at unit speed to the target. If the receiver finds the target first then it moves at unit speed to the right until it reaches the sender at which time both agents will move at full speed back to the target. We illustrate this algorithm in Figure~\ref{fig:1s1r_ub} using a space-time diagram which plots an agent's position on the $x$-axis, and uses the $y$-axis to indicate the flow of time.
\begin{figure}
\caption{The trajectories of the agents when the target is at location $+x$ (left) and $-x$ (right). The sender is colored red and the receiver is blue. A dashed line indicates when an agent deviates from its assigned search trajectory. Significant times and positions are indicated.}
\label{fig:1s1r_ub}
\end{figure}
Suppose that the target is at location $x > 1$. The sender will find this target first and will do so at the time $\frac{x}{\sqrt{2}-1} = (1+\sqrt{2})x$. The sender immediately notifies the receiver which is at location $-(1+\sqrt{2})x$. Moving at unit speed, the receiver will travel distance $x + (1+\sqrt{2})x=(2+\sqrt{2})x$ to reach the target and will arrive at time $(1+\sqrt{2})x+(2+\sqrt{2})x = (3+2\sqrt{2})x$. The competitive ratio when $x>1$ is thus $3+2\sqrt{2}$.
Suppose now that the target is at location $-x < -1$. The receiver will find the target first and will do so at the time $x$. The receiver must move to notify the sender who is located at $(\sqrt{2}-1)x$ at time $x$. Hence, the distance between the agents is $x+(\sqrt{2}-1)x = \sqrt{2}x$ and the receiver will need to cross this distance with a relative speed of $1 - (\sqrt{2}-1) = 2-\sqrt{2}$. The receiver will thus take time $\frac{\sqrt{2}x}{2 - \sqrt{2}} = \frac{x}{\sqrt{2}-1} = (1+\sqrt{2})x$, and both agents will take an additional time $(1+\sqrt{2})x$ to reach the target. The time to evacuate is thus $x + 2(1+\sqrt{2})x = (3+2\sqrt{2})x$, and, evidently, the competitive ratio in this case is also $3+2\sqrt{2}$.
\end{proof} Notice that in the algorithm of Theorem \ref{thm:1s_1r_ub} it is essential that the sender moves initially at speed less than~1.
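As a sanity check on the case analysis above, the following short Python sketch (ours, for illustration only; the function name is not from the paper) replays both cases of the algorithm and confirms that the evacuation time is $(3+2\sqrt{2})|x|$ in each:

```python
import math

SENDER_SPEED = math.sqrt(2) - 1  # the sender searches right at speed sqrt(2)-1

def evacuation_time(x):
    """Evacuation time of the one-sender/one-receiver algorithm for a target at x, |x| >= 1."""
    if x > 0:
        # The sender reaches the target at time x/(sqrt(2)-1) and notifies the
        # receiver wirelessly; the receiver then walks from -x/(sqrt(2)-1) to x.
        t_found = x / SENDER_SPEED
        return t_found + (x + t_found)
    # The receiver finds the target at time |x|, chases the sender at relative
    # speed 2-sqrt(2), and then both walk back to the target at unit speed.
    d = -x
    gap = d + SENDER_SPEED * d          # separation at the time of discovery
    t_chase = gap / (1 - SENDER_SPEED)  # time to catch the sender
    return d + 2 * t_chase              # the walk back takes exactly t_chase again

for x in (1.5, 2.0, 7.0, -1.5, -3.0):
    assert abs(evacuation_time(x) / abs(x) - (3 + 2 * math.sqrt(2))) < 1e-9
```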
\subsection{One sender, multiple receivers}
\begin{theorem}\label{thm:1s_2r_ub}
When $n_s=1$ and $n_r > 1$ there exists an evacuation algorithm with competitive ratio $5$.
\end{theorem}
\begin{proof}
The proof is constructive and based on the following algorithm: one receiver moves to the left at unit speed and one receiver moves to the right at unit speed. When one of the receivers finds the target it immediately moves to notify the sender (and all other agents) at the origin. The sender immediately notifies the remaining receiver, and all agents proceed to the target at unit speed. An illustration of this algorithm is provided in Figure~\ref{fig:1s2r_ub} for the case that the target is at location $-x < -1$. The situation is symmetric when $x > 1$.
\begin{figure}
\caption{The trajectories of the agents when the target is at location $-x$. The sender is in red and the receivers are in blue. A dashed line indicates when an agent deviates from its assigned search trajectory. Significant times and positions are indicated.}
\label{fig:1s2r_ub}
\end{figure}
Suppose that the target is at location $-x < -1$. The left receiver will find this target first and will do so at the time $x$. It then immediately moves to the origin to notify the sender, arriving at time $2x$. The sender notifies the right receiver who is at location $2x$. The right receiver then moves at unit speed to the target arriving at time $5x$. The competitive ratio is thus $5$. The case when $x > 1$ is totally symmetric and also yields a competitive ratio of 5.
\end{proof}
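The $5x$ accounting in the proof can likewise be checked mechanically; the sketch below (illustrative only, not part of the paper) tracks the three legs of the schedule:

```python
def evac_time_1s2r(x):
    """Evacuation time of the one-sender/multiple-receiver algorithm, target at -x (x >= 1)."""
    t_found = x           # the left receiver reaches the target at time x
    t_notify = 2 * x      # it returns to the origin, where the sender broadcasts
    far_receiver = 2 * x  # the right receiver's position at time 2x
    return t_notify + (far_receiver + x)  # unit-speed trip from +2x to the target at -x

for x in (1.0, 2.5, 10.0):
    assert evac_time_1s2r(x) == 5 * x
```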
\subsection{Multiple senders, one receiver}
\begin{theorem}\label{thm:2s_1r_ub}
When $n_s > 1$ and $n_r = 1$ there exists an algorithm $\mathcal{A}$ with competitive ratio $\textsc{CR}(\mathcal{A}) < 5.681319$. More exactly, the competitive ratio is upper bounded by
\begin{equation}
\textsc{CR}(\mathcal{A}) \le 1 + \frac{1+v_r}{1-v_r}\left(\frac{1+4v_r-v_r^2}{v_r(3-v_r)}\right)
\end{equation}
with $v_r$ chosen to be the root of the equation $v_r^4 - 16v_r^3 + 26v_r^2 + 8v_r - 3 = 0$ satisfying $0 \leq v_r < 1$.
\end{theorem}
The proof of this result is much more involved than the previous two cases.
When there are more than two senders, all but two of the senders will remain at the origin until they are notified of the target (at which time they move to the announced location). Thus, in the rest of this section we will present our algorithm for the specific case of two senders and one receiver, i.e., $n_s=2$ and $n_r=1$.
\begin{figure}
\caption{Trajectories of the agents for the evacuation algorithm $\textsc{EVAC}_\textsc{Rays}(v_r)$. The sender trajectories are red, and the receiver trajectory is blue. }
\label{fig:2s1r_ub}
\end{figure}
\emph{High level idea.} The robots jointly maintain an interval around the origin of positions that have already been explored by at least one robot. They wish to expand this interval at a fast pace while maintaining the ability to notify all robots quickly in case an exit is found by at least one robot. The idea behind our algorithm is to make one sender responsible for extending the right end of the searched interval, and another sender responsible for extending the left end of the searched interval. The receiver zig-zags around the origin (with the lengths of zigs and zags increasing in rounds), so that if one of the senders finds an exit, the receiver is ``close'' to the other sender and can quickly notify it via F2F. In order for this idea to work, the senders cannot simply move away from the origin at full speed, but instead they perform zig-zags of their own (however, unlike the receiver their zig-zags are drifting away from the origin). One can think of a sender as first extending the searched region for a while and then coming back partway towards the origin to get notified by the receiver about what's happening on the other side of the origin. This strategy is illustrated in Figure~\ref{fig:2s1r_ub}. When the exit is found by one of the senders, the receiver goes to intercept the other sender and they both move towards the exit.
An interesting feature of the algorithm is that the zig-zag trajectory of the receiver nontrivially overlaps with the zig-zag trajectories of the senders, i.e., it does not simply touch them. For example, take a particular time when the receiver meets the right sender for the first time during one zig-zag round. Then the receiver and the right sender travel to the right together for some time. During this time the left sender extends the searched region on the left. If an exit is found by the left sender at this point, this is good -- both the right sender and receiver will learn about it instantaneously and will start moving towards the exit. However, the right sender and receiver cannot keep travelling together for very long, since the trajectory needs to have certain symmetries, lest the left sender get too far. Thus, at some point the receiver and the right sender part ways with the receiver moving towards the left sender and the right sender continuing to the right. At precisely this point, the left sender stops extending the search interval and starts to move towards the receiver (this situation is indicated by dashed lines in Figure~\ref{fig:2s1r_ub}). Intuitively, this is a good timing for the left sender to switch direction, because otherwise if it finds an exit soon after the receiver and right sender part ways then the receiver would not be able to catch up with the right sender for quite a while (until the right sender's next ``zag'').
Formalizing and analyzing this algorithm takes a lot of work and careful calculations.
We begin by introducing a class of search trajectories that are parameterized by a four-tuple $[\eta,v_0,v_1,\gamma]$ where: $\eta=\pm 1$, and $v_0$, $v_1$, and $\gamma$ are real numbers satisfying $0 \leq v_0 \leq 1$, $-1 \leq v_1 < v_0$, and $0 < \gamma \leq 1$.
{
\makeatletter
\renewcommand{\fname@algorithm}{Trajectory}
\makeatother
\begin{algorithm}[!h] \caption{$\textsc{Rays}(\eta,v_0,v_1,\gamma)$} \label{alg:2s1r_traj}
\begin{algorithmic}[1]
\Begin Move to location $\eta \gamma$ and wait until time $\frac{\gamma}{v_0}$.
\MRepeat
\State Move in direction $-\eta$ at unit speed until my position $x$ at time $t$ satisfies $\frac{x}{t} = \eta v_1$;
\State Move in direction $\eta$ at unit speed until my position $x$ at time $t$ satisfies $\frac{x}{t} = \eta v_0$;
\EndRepeat
\End
\end{algorithmic}
\end{algorithm}
}
\begin{figure}
\caption{Example of the trajectory $\textsc{Rays}(1,v_0,v_1,\gamma)$. }
\label{fig:cone_alg}
\end{figure}
An example of this type of search trajectory is illustrated in Figure~\ref{fig:cone_alg}. As one can observe, after an initial setup phase, the trajectory $\textsc{Rays}(\eta,v_0,v_1,\gamma)$ will bounce back and forth between two space-time rays with slopes $\frac{\eta}{v_0}$ and $\frac{\eta}{v_1}$. The parameter $\gamma$ dictates the ``beginning'' position on the ray with slope $\frac{\eta}{v_0}$, and $\eta$ is a symmetry parameter in the sense that trajectories $\textsc{Rays}(\pm 1,v_0,v_1,\gamma)$ are reflections of each other about the time-axis. Although the above specification of the trajectory $\textsc{Rays}(\eta,v_0,v_1,\gamma)$ is simple to understand, it will be more convenient to express these trajectories in terms of their {\em turning-points} -- space-time points at which an agent changes its travel direction, and between which an agent moves at constant unit speed. One can observe from Figure~\ref{fig:cone_alg} that the turning-points of $\textsc{Rays}(\eta,v_0,v_1,\gamma)$ are precisely the points where the trajectory bounces off the rays with slopes $\frac{\eta}{v_0}$ and $\frac{\eta}{v_1}$. The next lemma provides expressions for the turning-points of the trajectory $\textsc{Rays}(\eta,v_0,v_1,\gamma)$.
\begin{lemma}\label{lm:tp_general}
The turning-points $(D_j,T_j)$, $j=0,1,\ldots$, of the trajectory $\textsc{Rays}(\eta,v_0,v_1,\gamma)$ are given by
\[D_j = \eta \gamma \left[\frac{(1-v_1)(1+v_0)}{(1+v_1)(1-v_0)} \right]^{\floor{\frac{j}{2}}}\begin{cases}
1,& \mbox{even }j\\
\frac{v_1(1+v_0)}{v_0(1+v_1)},& {\mbox{odd }j}
\end{cases},\qquad T_j = \frac{D_j}{\eta}\begin{cases}
\frac{1}{v_0},& \mbox{even }j\\
\frac{1}{v_1},& \mbox{odd }j.
\end{cases}\]
\end{lemma}
\begin{proof}
We will derive the turning-points when $\eta=1$. The turning-points for $\eta=-1$ are just reflections about the time axis.
The first turning-point is $P_0 = (\gamma,\gamma/v_0)$ which is evident from the description of $\textsc{Rays}(1,v_0,v_1,\gamma)$. This turning-point lies on the ray of slope $\frac{1}{v_0}$ and so the next turning-point will lie on the ray with slope $\frac{1}{v_1}$. The turning-points for larger $j$ will then alternate between these two rays. It follows that the turning-times $T_j$ can be expressed in terms of the turning-positions $D_j$ as follows:
\[T_j = D_j\begin{cases}
\frac{1}{v_0},& \mbox{even }j\\
\frac{1}{v_1},& \mbox{odd }j.
\end{cases}\]
We can therefore focus on finding the turning positions $D_j$.
The agents travel at unit speed between turning-points $P_{j-1}$ and $P_{j}$, and an agent will be moving to the right/left when $j$ is even/odd. When $j$ is even we have
\[1 = \frac{T_{j}-T_{j-1}}{D_{j}-D_{j-1}} = \frac{\frac{D_j}{v_0}-\frac{D_{j-1}}{v_1}}{D_{j}-D_{j-1}} \quad\rightarrow\quad \left(\frac{1}{v_0}-1\right)D_{j} = \left(\frac{1}{v_1}-1\right)D_{j-1}\]
and finally
\[D_j = \frac{v_0(1-v_1)}{v_1(1-v_0)}D_{j-1},\ \mbox{even }j.\]
When $j$ is odd we find in a similar manner that
\[D_j = \frac{v_1(1+v_0)}{v_0(1+v_1)}D_{j-1},\ \mbox{odd }j.\]
Combining these results yields, for even or odd $j$,
\[D_j = \frac{(1-v_1)(1+v_0)}{(1+v_1)(1-v_0)} D_{j-2}.\]
Unrolling this recursion then gives
\[D_j = \left[\frac{(1-v_1)(1+v_0)}{(1+v_1)(1-v_0)} \right]^{\floor{\frac{j}{2}}} \begin{cases}
D_0,& \mbox{even }j\\
D_1,& \mbox{odd }j
\end{cases} = \gamma \left[\frac{(1-v_1)(1+v_0)}{(1+v_1)(1-v_0)} \right]^{\floor{\frac{j}{2}}} \begin{cases}
1,& \mbox{even }j\\
\frac{v_1(1+v_0)}{v_0(1+v_1)},& \mbox{odd }j
\end{cases}\]
where we have used the fact that $D_0 = \gamma$, and our expression for $D_j$ when $j$ is odd.
\end{proof}
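The closed form of Lemma~\ref{lm:tp_general} admits a direct numerical check: the turning-points it produces must be joined by unit-speed segments, and must lie alternately on the two space-time rays. The following Python sketch (with illustrative parameter values of our choosing) performs this check:

```python
def turning_point(j, eta, v0, v1, gamma):
    """Closed-form turning-point (D_j, T_j) of Rays(eta, v0, v1, gamma) from the lemma."""
    r = (1 - v1) * (1 + v0) / ((1 + v1) * (1 - v0))
    D = eta * gamma * r ** (j // 2)
    if j % 2 == 1:
        D *= v1 * (1 + v0) / (v0 * (1 + v1))
    T = (D / eta) / (v0 if j % 2 == 0 else v1)
    return D, T

for eta in (1, -1):
    pts = [turning_point(j, eta, 0.6, 0.25, 1.0) for j in range(12)]
    for (D0, T0), (D1, T1) in zip(pts, pts[1:]):
        # consecutive turning-points are joined by a unit-speed segment ...
        assert abs(abs(D1 - D0) - (T1 - T0)) < 1e-9
    for j, (D, T) in enumerate(pts):
        # ... and lie alternately on the space-time rays x = eta*v0*t and x = eta*v1*t
        assert abs(D - eta * (0.6 if j % 2 == 0 else 0.25) * T) < 1e-9
```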
We will describe our evacuation algorithm in terms of the trajectories $\textsc{Rays}(1,v_0,v_1,\gamma)$. To this end, we represent by $X_\pm(t)$ the trajectories of the senders and we refer to the sender with trajectory $X_+$ (resp. $X_-$) as the {\em right-sender} (resp. {\em left-sender}). We use $X_r(t)$ to represent the trajectory of the receiver. The turning-points of the trajectories $X_\pm$ will be represented by $(D^\pm_j,T^\pm_j)$, and the turning-points of the trajectory $X_r$ will be represented by $(D^r_j, T^r_j)$. With this notation our evacuation algorithm can be expressed as in Algorithm~\ref{alg:2s1r_ub}. We refer to this algorithm by $\textsc{Evac}_\textsc{Rays}(v_r)$.
\begin{algorithm}[!] \caption{$\textsc{Evac}_\textsc{Rays}(v_r)$, $0 \leq v_r < 1$} \label{alg:2s1r_ub}
\begin{algorithmic}[1]
\State \begin{equation}\label{eq:2s1r_alg}
X_\pm(t) = \textsc{Rays}(\pm 1,v_0,v_1,\gamma_\pm),\quad X_r(t) = \textsc{Rays}(1,v_r,-v_r,1).
\end{equation}
\begin{equation}\label{eq:2s1r_pars}
v_0 = \frac{v_r(3-v_r)}{1+v_r},\quad v_1 = \frac{v_r(1-v_r)}{1+3v_r},\quad \gamma_+ = \frac{3-v_r}{1-v_r},\quad \gamma_- = \frac{3-v_r}{1-v_r} \frac{1+v_r}{1-v_r}
\end{equation}
\end{algorithmic}
\end{algorithm}
Figure~\ref{fig:2s1r_ub} illustrates the trajectories of the agents for the algorithm $\textsc{Evac}_\textsc{Rays}(v_r)$. The choices of $v_0,v_1,\gamma_\pm$ in \eqref{eq:2s1r_pars} ensure that the trajectories enjoy a number of important properties, some of which are evident in Figure~\ref{fig:2s1r_ub}. One immediately obvious property is the fact that the right/left-sender spends all of its time to the right/left of the origin (and hence the naming convention). Some other properties that are evident in Figure~\ref{fig:2s1r_ub} are given in Observation~\ref{obs:2s1r_alg}.
\begin{observation}\label{obs:2s1r_alg}
For all $k=0,1,2,\ldots$ the following properties hold for the algorithm $\textsc{Evac}_\textsc{Rays}(v_r)$:
\begin{enumerate}
\item the receiver reaches its turning point $2k+1$ (resp. $2k+2$) at the same time the right-sender (resp. left-sender) reaches its turning-point $2k$.
\item the receiver and right-sender (resp. left-sender) are co-located at all times in the interval $[T^+_{2k+1},T^r_{2k+2}]$ (resp. $[T^-_{2k+1},T^r_{2k+3}]$),
\end{enumerate}
\end{observation}
In order to establish these properties, we carefully calculate the turning points of all agents in terms of the parameter $v_r$. The following lemmas summarize the calculations.
Equipped with these formulas, Observation~\ref{obs:2s1r_alg} follows.
\begin{lemma}\label{lm:tp_alg_r}
The turning-points of the receiver are
\[D^r_j = (-1)^j \left(\frac{1+v_r}{1-v_r} \right)^{j},\qquad T^r_j = \frac{1}{v_r}\left(\frac{1+v_r}{1-v_r} \right)^{j}.\]
\end{lemma}
\begin{proof}
With $X_r(t) = \textsc{Rays}(1,v_r,-v_r,1)$ it follows from Lemma~\ref{lm:tp_general} that for even $j$ we have
\begin{align*}
D^r_j &= \left[\frac{(1+v_r)(1+v_r)}{(1-v_r)(1-v_r)} \right]^{\floor{\frac{j}{2}}} = \left( \frac{1+v_r}{1-v_r} \right)^{2\floor{\frac{j}{2}}} = \left( \frac{1+v_r}{1-v_r} \right)^j.
\end{align*}
and for odd $j$ we similarly find that $D^r_j = -\left( \frac{1+v_r}{1-v_r} \right)^{j}$. Thus, for $j$ even or odd we have $D^r_j = (-1)^j\left(\frac{1+v_r}{1-v_r} \right)^{j}$. The times $T^r_j$ are
\[T^r_j = D^r_j \begin{cases}
\frac{1}{v_r},& \mbox{even }j\\
-\frac{1}{v_r},& \mbox{odd }j.
\end{cases} = (-1)^j \frac{D^r_j}{v_r} = \frac{1}{v_r}\left(\frac{1+v_r}{1-v_r} \right)^{j}.\]
\end{proof}
We note the following identities concerning the turning-points $(D^r_j,T^r_j)$, which we will use often and without reference.
\[T^r_j = \frac{1+v_r}{1-v_r}T^r_{j-1} = \frac{1-v_r}{1+v_r}T^r_{j+1},\qquad D^r_j = -\frac{1+v_r}{1-v_r}D^r_{j-1} = -\frac{1-v_r}{1+v_r}D^r_{j+1}.\]
\begin{lemma}\label{lm:tp_alg_right}
The turning-points of the right-sender are
\[D^+_j =v_r T^r_j \begin{cases}
\frac{3-v_r}{1-v_r},& \mbox{even }j\\
\frac{1-v_r}{1+v_r},& {\mbox{odd }j}
\end{cases},\qquad T^+_j = T^r_j\begin{cases}
\frac{1+v_r}{1-v_r},& \mbox{even }j\\
\frac{1+3v_r}{1+v_r},& {\mbox{odd }j}
\end{cases}\]
\end{lemma}
\begin{proof}
With $X_+(t) = \textsc{Rays}(1,v_0,v_1,\gamma_+)$ it follows from Lemma~\ref{lm:tp_general} that
\[D^+_j = \gamma_+ \left[\frac{(1-v_1)(1+v_0)}{(1+v_1)(1-v_0)} \right]^{\floor{\frac{j}{2}}}\begin{cases}
1,& \mbox{even }j\\
\frac{v_1(1+v_0)}{v_0(1+v_1)},& {\mbox{odd }j}
\end{cases},\qquad T^+_j = D^+_j\begin{cases}
\frac{1}{v_0},& \mbox{even }j\\
\frac{1}{v_1},& \mbox{odd }j.
\end{cases}\]
With $v_0$ and $v_1$ given by \eqref{eq:2s1r_pars} we have
\begin{align*}
\frac{(1-v_1)(1+v_0)}{(1+v_1)(1-v_0)} &= \frac{\left(1-\frac{v_r(1-v_r)}{1+3v_r}\right)\left(1+\frac{v_r(3-v_r)}{1+v_r}\right)}{\left(1+\frac{v_r(1-v_r)}{1+3v_r}\right)\left(1-\frac{v_r(3-v_r)}{1+v_r}\right)}\\
&= \frac{\left(1+2v_r+v_r^2\right)\left(1+3v_r+v_r(1-v_r)\right)}{\left(1+3v_r+v_r(1-v_r)\right)\left(1-2v_r+v_r^2\right)} = \left(\frac{1+v_r}{1-v_r}\right)^2
\end{align*}
We also observe that
\begin{align*}
\frac{v_1(1+v_0)}{v_0(1+v_1)} &= \frac{\frac{v_r(1-v_r)}{1+3v_r}\left(1+\frac{v_r(3-v_r)}{1+v_r}\right)}{\frac{v_r(3-v_r)}{1+v_r}\left(1+\frac{v_r(1-v_r)}{1+3v_r}\right)} \\ &= \frac{v_r(1-v_r)(1+v_r+v_r(3-v_r))}{v_r(3-v_r)(1+3v_r+v_r(1-v_r))} = \frac{1-v_r}{3-v_r} = \frac{1}{\gamma_+}.
\end{align*}
Substituting these last two results into our expression for $D^+_j$ then yields
\[D^+_j = \left(\frac{1+v_r}{1-v_r}\right)^{2\floor{\frac{j}{2}}}\begin{cases}
\gamma_+,& \mbox{even }j\\
1,& {\mbox{odd }j}
\end{cases} = \begin{cases}
\gamma_+\left(\frac{1+v_r}{1-v_r}\right)^{j},& \mbox{even }j\\
\left(\frac{1+v_r}{1-v_r}\right)^{j-1},& {\mbox{odd }j}
\end{cases} = v_r T^r_j \begin{cases}
\frac{3-v_r}{1-v_r},& \mbox{even }j\\
\frac{1-v_r}{1+v_r},& {\mbox{odd }j}
\end{cases}\]
as required.
For $T^+_j$ we have
\[T^+_j = D^+_j\begin{cases}
\frac{1}{v_0},& \mbox{even }j\\
\frac{1}{v_1},& \mbox{odd }j.
\end{cases} = v_r \begin{cases}
\frac{\gamma_+}{v_0} T^r_{j},& \mbox{even }j\\
\frac{1}{v_1}T^r_{j-1},& {\mbox{odd }j}
\end{cases}.\]
We observe that
\[\frac{\gamma_+}{v_0} = \frac{\frac{3-v_r}{1-v_r}}{\frac{v_r(3-v_r)}{1+v_r}} = \frac{1}{v_r}\left(\frac{1+v_r}{1-v_r}\right)\]
and thus
\[T^+_j = v_r\begin{cases}
\frac{1}{v_r}\left(\frac{1+v_r}{1-v_r}\right)T^r_{j},& \mbox{even }j\\
\frac{1+3v_r}{v_r(1-v_r)}T^r_{j-1},& {\mbox{odd }j}
\end{cases} = \begin{cases}
T^r_{j+1},& \mbox{even }j\\
\frac{1+3v_r}{1-v_r}T^r_{j-1},& {\mbox{odd }j}
\end{cases} = T^r_j\begin{cases}
\frac{1+v_r}{1-v_r},& \mbox{even }j\\
\frac{1+3v_r}{1+v_r},& {\mbox{odd }j}
\end{cases}.\]
This completes the proof.
\end{proof}
\begin{lemma}\label{lm:tp_alg_left}
The turning-points of the left-sender are
\[D^-_j = -v_r T^r_{j+1} \begin{cases}
\frac{3-v_r}{1-v_r},& \mbox{even }j\\
\frac{1-v_r}{1+v_r},& {\mbox{odd }j}
\end{cases},\qquad T^-_j = T^r_{j+1}\begin{cases}
\frac{1+v_r}{1-v_r},& \mbox{even }j\\
\frac{1+3v_r}{1+v_r},& {\mbox{odd }j}
\end{cases}.\]
\end{lemma}
\begin{proof}
The proof is essentially identical to the proof of Lemma~\ref{lm:tp_alg_right}.
\end{proof}
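The closed forms of Lemmas~\ref{lm:tp_alg_r} and \ref{lm:tp_alg_right} can be cross-checked numerically against Lemma~\ref{lm:tp_general} with the parameter choices of \eqref{eq:2s1r_pars}. A short Python sketch (the parameter value $v_r=0.25$ is an arbitrary illustration; any $0<v_r<1$ should work):

```python
def rays_tp(j, eta, v0, v1, gamma):
    """Turning-point (D_j, T_j) of Rays(eta, v0, v1, gamma), per Lemma tp_general."""
    r = (1 - v1) * (1 + v0) / ((1 + v1) * (1 - v0))
    D = eta * gamma * r ** (j // 2)
    if j % 2 == 1:
        D *= v1 * (1 + v0) / (v0 * (1 + v1))
    T = (D / eta) / (v0 if j % 2 == 0 else v1)
    return D, T

vr = 0.25
v0 = vr * (3 - vr) / (1 + vr)     # parameters of eq. (2s1r_pars)
v1 = vr * (1 - vr) / (1 + 3 * vr)
gp = (3 - vr) / (1 - vr)          # gamma_+

for j in range(10):
    # Receiver turning-points (Lemma tp_alg_r) from Rays(1, vr, -vr, 1).
    Dr, Tr = rays_tp(j, 1, vr, -vr, 1.0)
    assert abs(Dr - (-1) ** j * ((1 + vr) / (1 - vr)) ** j) < 1e-9
    assert abs(Tr - (1 / vr) * ((1 + vr) / (1 - vr)) ** j) < 1e-9
    # Right-sender turning-points (Lemma tp_alg_right) from Rays(1, v0, v1, gamma_+).
    Dp, Tp = rays_tp(j, 1, v0, v1, gp)
    if j % 2 == 0:
        assert abs(Dp - vr * Tr * (3 - vr) / (1 - vr)) < 1e-9
        assert abs(Tp - Tr * (1 + vr) / (1 - vr)) < 1e-9
    else:
        assert abs(Dp - vr * Tr * (1 - vr) / (1 + vr)) < 1e-9
        assert abs(Tp - Tr * (1 + 3 * vr) / (1 + vr)) < 1e-9
```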
We are now ready to prove our previous observations about this algorithm.
\begin{proof}[Proof of Observation~\ref{obs:2s1r_alg}] We will prove the properties for the right-sender only. Those for the left-sender follow in a nearly identical manner.
The first statement we want to prove is: ``the receiver reaches its turning point $2k+1$ at the same time the right-sender reaches its turning-point $2k$''. The receiver reaches its turning-point $2k+1$ at time $T^r_{2k+1}$. The right-sender reaches its turning-point $2k$ at time $T^+_{2k}$ and by Lemma~\ref{lm:tp_alg_right} we have $T^+_{2k} = \frac{1+v_r}{1-v_r}T^r_{2k} = T^r_{2k+1}$, which proves the statement.
The second statement we want to prove is: ``the receiver and right-sender are co-located at all times in the interval $[T^+_{2k+1},T^r_{2k+2}]$''. During the interval $[T^+_{2k+1},T^+_{2k+2}]$ the right-sender will be moving to the right along the space-time line
\[t = x-D^+_{2k+1}+T^+_{2k+1} = x - v_r\frac{1-v_r}{1+v_r}T^r_{2k+1} + \frac{1+3v_r}{1+v_r}T^r_{2k+1} = x + \frac{1+2v_r+v_r^2}{1+v_r}T^r_{2k+1}\]
and finally
\begin{equation}\label{eq:2s1r_right_move}
t = x + (1+v_r)T^r_{2k+1}.
\end{equation}
During the interval $[T^r_{2k+1},T^r_{2k+2}]$ the receiver will be moving to the right along the space-time line
\[t = x - D^r_{2k+2}+T^r_{2k+2} = x - v_r T^r_{2k+2}+T^r_{2k+2} = x + (1-v_r)T^r_{2k+2} = x + (1+v_r)T^r_{2k+1}.\]
We can thus conclude that the right-sender and receiver will be travelling along the same space-time line and will be co-located during the interval $[T^+_{2k+1},T^+_{2k+2}] \cap [T^r_{2k+1},T^r_{2k+2}] = [T^+_{2k+1},T^r_{2k+2}]$.
\end{proof}
The next theorem provides an expression for the competitive ratio of $\textsc{Evac}_\textsc{Rays}(v_r)$ as a function of $v_r$.
\begin{theorem}\label{thm:2s1r_cr}
The competitive ratio of algorithm $\textsc{Evac}_\textsc{Rays}(v_r)$ satisfies
\[\textsc{CR} \le 1 + \frac{1+v_r}{1-v_r}\left(\frac{1+4v_r-v_r^2}{v_r(3-v_r)}\right).\]
\end{theorem}
\begin{proof}
Due to the symmetry between the right/left-senders, we may assume without loss of generality that the target is found by the right-sender. Moreover, the sequence of intervals $(D^+_{2k},D^+_{2k+2}]$, $k=0,1,2,\ldots$, collectively covers the entire line extending from $D^+_0$ to $+\infty$ and so we may assume without loss of generality that the target is at location $x_* \in (D^+_{2k},D^+_{2k+2}]$, for some fixed value of $k \geq 0$.
\begin{figure}
\caption{Setup for the proof of Theorem~\ref{thm:2s1r_cr}. The turning-point $2k$ of each agent is indicated. On the left the target is found at time $t_* \leq T^-_{2k+1}$ and on the right the target is found at time $t_* > T^-_{2k+1}$.}
\label{fig:2s1r_cr}
\end{figure}
The right-sender will reach $x_*$ while travelling to the right between its turning points $2k+1$ and $2k+2$, and, we demonstrated in the proof of Observation~\ref{obs:2s1r_alg} that while doing so this sender will be moving along the space-time line with equation \eqref{eq:2s1r_right_move}. Thus, the time $t_*$ at which the right-sender reaches the target is
\begin{align*}
t_* &= x_* + (1+v_r)T^r_{2k+1}.
\end{align*}
After reaching the target the right-sender will wirelessly notify the receiver and the receiver will move to notify the left-sender. There are two cases to consider, each of which is illustrated in Figure~\ref{fig:2s1r_cr}. In the first case -- left side of Figure~\ref{fig:2s1r_cr} -- the target is found at location $x_*$ such that $t_* \leq T^-_{2k+1}$. We know from Observation~\ref{obs:2s1r_alg} that the receiver will be co-located with the left-sender at all times within the interval $[T^-_{2k+1},T^r_{2k+3}]$, and before time $T^-_{2k+1}$ the receiver and left-sender will be moving towards each other, each at unit speed. Thus, the earliest time that the left-sender could be notified of the target is at the time $T^-_{2k+1}$. Evidently, the evacuation time for this case is
\begin{align*}
E &= T^-_{2k+1} + |x_* - D^-_{2k+1}| = x_* + \frac{1+3v_r}{1+v_r} T^r_{2k+2} + v_r \frac{1-v_r}{1+v_r}T^r_{2k+2}\\
&= x_* + \frac{1+3v_r+v_r(1-v_r)}{1+v_r} T^r_{2k+2}.
\end{align*}
and the competitive ratio is
\begin{align*}
\textsc{CR} = \frac{E}{x_*} &= 1 + \frac{\frac{1+3v_r+v_r(1-v_r)}{1+v_r} T^r_{2k+2}}{x_*}.
\end{align*}
The competitive ratio increases with decreasing $x_*$, and with $x_* > D^+_{2k} = v_r \frac{3-v_r}{1-v_r} T^r_{2k}$ we get
\begin{align*}
\textsc{CR} &\le 1 + \frac{1+3v_r+v_r(1-v_r)}{1+v_r}\frac{T^r_{2k+2}}{v_r \frac{3-v_r}{1-v_r} T^r_{2k}}\\
&= 1 + \frac{1+3v_r+v_r(1-v_r)}{1+v_r} \cdot \frac{1-v_r}{v_r(3-v_r)} \cdot \frac{(1+v_r)^2}{(1-v_r)^2}\\
&= 1 + \frac{1+v_r}{1-v_r}\left(\frac{1+3v_r+v_r(1-v_r)}{v_r(3-v_r)}\right)
= 1 + \frac{1+v_r}{1-v_r}\left(\frac{1+4v_r-v_r^2}{v_r(3-v_r)}\right).
\end{align*}
and finally
\begin{equation}\label{eq:2s1r_cr}
\textsc{CR} \le 1 + \frac{1+v_r}{1-v_r}\left(\frac{1+4v_r-v_r^2}{v_r(3-v_r)}\right).
\end{equation}
The second case -- the right side of Figure~\ref{fig:2s1r_cr} -- occurs when the target is found at a time $t_* \in (T^-_{2k+1},T^+_{2k+2}] = (T^-_{2k+1},T^r_{2k+3}]$. The left-sender and receiver are co-located during the time interval $(T^-_{2k+1},T^r_{2k+3}]$, and so the left-sender will be notified of the target at time $t_*$. By referring to Figure~\ref{fig:2s1r_cr} one can observe that the evacuation time for this case will be $2(t_*-T^-_{2k+1})$ more than the evacuation time of the previous case, i.e.,
\[E = x_* + \frac{1+3v_r+v_r(1-v_r)}{1+v_r} T^r_{2k+2} + 2(t_*-T^-_{2k+1}).\]
Since $t_* = x_* + (1+v_r)T^r_{2k+1} = x_* + (1-v_r)T^r_{2k+2}$ and $T^-_{2k+1} = \frac{1+3v_r}{1+v_r}T^r_{2k+2}$ we get
\begin{align*}
E &= 3x_* + \frac{1+3v_r+v_r(1-v_r)}{1+v_r} T^r_{2k+2} + 2\left((1-v_r)-\frac{1+3v_r}{1+v_r}\right)T^r_{2k+2}\\
&= 3x_* + \frac{1+3v_r+v_r(1-v_r)+2(1+v_r)(1-v_r)-2(1+3v_r)}{1+v_r} T^r_{2k+2}\\
&= 3x_* + \frac{-(1+3v_r)+(2+3v_r)(1-v_r)}{1+v_r} T^r_{2k+2}\\
&= 3x_* + \frac{1-v_r(2+3v_r)}{1+v_r} T^r_{2k+2}
\end{align*}
and the competitive ratio is
\begin{align*}
\textsc{CR} = 3 + \frac{1-v_r(2+3v_r)}{1+v_r} \frac{T^r_{2k+2}}{x_*}
\end{align*}
When $v_r(2+3v_r) \geq 1$ the competitive ratio is $\leq 3$. When $v_r(2+3v_r) < 1$ the competitive ratio is $> 3$ and increases with decreasing $x_*$, or, equivalently, with decreasing $t_*$. Thus, we should take $t_*$ arbitrarily close to $T^-_{2k+1}$. However, $t_* = T^-_{2k+1}$ gave the best-case evacuation time for the case that $t_* \leq T^-_{2k+1}$. We can thus conclude that a worst-case competitive ratio can be achieved when $t_* \leq T^-_{2k+1}$ and the competitive ratio of the algorithm is upper bounded by \eqref{eq:2s1r_cr}.
\end{proof}
Now that we have an expression for a bound on the competitive ratio we can finally prove Theorem~\ref{thm:2s_1r_ub}.
\begin{proof}[Proof of Theorem~\ref{thm:2s_1r_ub}]
We need to optimize the competitive ratio with respect to $v_r$ and so we need to compute the derivative of the right hand side of \eqref{eq:2s1r_cr}. This is most easily done with the aid of a computer. We find that
\begin{align*}
\frac{d}{dv_r}\left[1 + \frac{1+v_r}{1-v_r}\left(\frac{1+4v_r-v_r^2}{v_r(3-v_r)}\right)\right] = \frac{v_r^4 - 16v_r^3 + 26v_r^2 + 8v_r - 3}{v_r^2(3-v_r)^2(1-v_r)^2}
\end{align*}
and so the optimum choice of $v_r$ is a root of the quartic equation $v_r^4 - 16v_r^3 + 26v_r^2 + 8v_r - 3 = 0$ satisfying $0 \leq v_r < 1$. Numerically solving this equation for $v_r$ yields $v_r \approx 0.228652$. For this choice of $v_r$ one can confirm that $\textsc{CR} < 5.681319$.
\end{proof}
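The computer-aided step in the proof above is easy to reproduce. The following Python sketch (ours, for illustration) bisects for the root of the quartic in $(0,1)$ and evaluates the bound \eqref{eq:2s1r_cr} at it:

```python
def cr_bound(v):
    """Right-hand side of the competitive-ratio bound (eq. 2s1r_cr)."""
    return 1 + (1 + v) / (1 - v) * (1 + 4 * v - v * v) / (v * (3 - v))

def quartic(v):
    """Numerator of d(cr_bound)/dv: v^4 - 16 v^3 + 26 v^2 + 8 v - 3."""
    return v ** 4 - 16 * v ** 3 + 26 * v ** 2 + 8 * v - 3

# Bisect for the root in (0, 1): quartic(0.2) < 0 < quartic(0.3).
lo, hi = 0.2, 0.3
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if quartic(mid) < 0 else (lo, mid)
v_star = (lo + hi) / 2

assert abs(v_star - 0.228652) < 1e-5
assert abs(cr_bound(v_star) - 5.681319) < 1e-4
```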
\section{Lower bounds}\label{sec:lower_bounds}
In this section we investigate lower bounds on the competitive ratio of evacuation in our communication model. Our goal is the proof of the following theorem:
\begin{theorem}\label{thm:lb}
Let $\mathcal{A}$ be an evacuation algorithm for one sender and $n_r \geq 1$ receivers.
\[\textsc{CR}(\mathcal{A}) \ge \begin{cases}
3+2\sqrt{2},& n_r=1\\
2+\sqrt{5},& n_r>1
\end{cases}.\]
\end{theorem}
We will need to introduce a number of concepts and definitions. The first definition concerns the knowledge that is available to an agent at a given time.
\begin{definition}
An agent is said to {\em know of a location $x$ at time $t$} if it has direct or indirect knowledge of $x$ at time $t$. An agent with direct knowledge of $x$ at time $t$ has visited location $x$ at a time $t' \leq t$. An agent has indirect knowledge of $x$ at time $t$ if it can be notified of $x$ at a time $t' \leq t$.
\end{definition}
The direct knowledge of an agent depends only on its own trajectory, whereas an agent's indirect knowledge depends on both its own and the other agents' trajectories. We define the direct knowledge set $K_X^D(t)$ as the set of all locations that the agent with trajectory $X$ has direct knowledge of at time $t$. We similarly define the indirect knowledge set $K^I_X(t;\mathcal{A})$. The (total) knowledge set is the set $K_X(t;\mathcal{A}) = K^D_X(t) \cup K^I_X(t;\mathcal{A})$. We make the following simple observation which results from the unit speed assumption.
\begin{observation}
$K^D_X(t) \subseteq [X(t)-t,X(t)+t]$.
\end{observation}
We can use the knowledge set of an agent to lower bound the competitive ratio.
\begin{lemma}\label{lm:cr_general}
For any evacuation algorithm $\mathcal{A}$, any $X \in \mathcal{A}$, and any time $t > 0$ we have
\[\textsc{CR}(\mathcal{A}) \geq \sup_{x \not\in K_X(t;\mathcal{A})} \frac{|X(t)-x|+t}{|x|}.\]
\end{lemma}
Thus, we can derive a lower bound on the competitive ratio by bounding the size of an agent's knowledge set.
We define the functional $\mu(X)$, which maps a search trajectory to a non-negative real number:
\begin{equation}
\mu(X) := \limsup_{t \rightarrow \infty} \frac{|X(t)|}{t}.
\end{equation}
The quantity $\mu(X)$ can be thought of as an upper bound on the average rate at which the direct knowledge of an agent with trajectory $X$ grows. We naturally extend the definition of $\mu$ to take as input an evacuation algorithm:
\begin{equation}
\mu(\mathcal{A}) := \max_{X \in \mathcal{A}} \mu(X).
\end{equation}
We establish several properties of $\mu(X)$ and $K_X$ (and relationships between them) for trajectories $X$ in evacuation algorithms. These are used in the proofs of the following theorems, from which Theorem~\ref{thm:lb} follows.
\begin{lemma}\label{lm:KD}
Let $X$ be a search trajectory.
If $\mu(X) < 1$ then for all $0 < \epsilon < 1-\mu(X)$ there exists a time $T > 0$ such that
\[K^D_X(t) \subseteq \left[-\frac{(\mu(X)+\epsilon)(t-X(t))}{1+\mu(X)+\epsilon},\ \frac{(\mu(X)+\epsilon)(t+X(t))}{1+\mu(X)+\epsilon}\right],\ \forall t > T.\]
In the case of $\mu(X) = 1$ the parameter $\epsilon$ can be taken to be $0$ in the above expression.
\end{lemma}
\begin{proof}
By the definition of $\mu(X)$, it follows that for all $\epsilon > 0$ there exists a time $T'>0$ such that
\begin{equation}\label{eq:1}
-(\mu(X)+\epsilon)t \leq X(t) \leq (\mu(X)+\epsilon)t,\ \forall t > T'.
\end{equation}
Moreover, when $\mu(X)=1$ the $\epsilon$ can be taken to be $0$, since $|X(t)| \le t$.
In order to have direct knowledge of location $x$ at time $t > T'$ there must exist a time $t' \leq t$ such that $X(t') = x$. The unit speed of the agents implies that $|X(t)-x| \leq t-t'$ or
\begin{equation}\label{eq:2}
t'-t+X(t) \leq x \leq t+X(t)-t'.
\end{equation}
Assume that $t' > T'$. Then we can combine \eqref{eq:1} and \eqref{eq:2} to get
\begin{equation}\label{eq:3}
\max\{-(\mu(X)+\epsilon)t',\ t'-t+X(t)\} \leq x \leq \min\{(\mu(X)+\epsilon)t',\ t+X(t)-t'\}.
\end{equation}
On the left, the first term in the max decreases with $t'$ and the second term increases with $t'$. Thus, the best lower bound is achieved when the two terms are equal. This will occur when
\[t' = \frac{t-X(t)}{1+\mu(X)+\epsilon}.\]
For this value of $t'$ we get
\[x \geq -\frac{(\mu(X)+\epsilon)(t-X(t))}{1+\mu(X)+\epsilon}.\]
At time $t$ we have $X(t) \leq (\mu(X)+\epsilon)t$ and thus
\[t' = \frac{t-X(t)}{1+\mu(X)+\epsilon} \geq \frac{t-(\mu(X)+\epsilon)t}{1+\mu(X)+\epsilon} = \frac{1-\mu(X)-\epsilon}{1+\mu(X)+\epsilon}t.\]
Hence, we will have $t' > T'$ for all
\begin{equation}\label{eq:4}
t > T = \frac{1+\mu(X)+\epsilon}{1-\mu(X)-\epsilon}T'.
\end{equation}
When $\mu(X) = 1$ we can take $\epsilon = 0$ and $T' = 0$, so that the condition $t' > T'$ holds trivially.
In a similar manner, we get from the right side of \eqref{eq:3} that
\[x \leq \frac{(\mu(X)+\epsilon)(t+X(t))}{1+\mu(X)+\epsilon}\]
for all $t$ satisfying \eqref{eq:4}. This completes the proof.
\end{proof}
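The choice of $t'$ above can be confirmed with a short numeric computation. The following Python snippet (a sanity check with illustrative parameter values, not part of the argument) verifies that the equalizing $t'$ makes the two terms of the max in \eqref{eq:3} coincide with the lemma's lower bound.

```python
# Sanity check: at t' = (t - X_t)/(1 + mu + eps) the two candidate
# lower bounds in the max of (3) coincide, giving the bound of the lemma.
# mu, eps, t, X_t are illustrative values, not taken from the paper.
mu, eps, t, X_t = 0.5, 0.01, 100.0, 30.0

tp = (t - X_t) / (1 + mu + eps)   # the equalizing time t'
lb1 = -(mu + eps) * tp            # first term of the max
lb2 = tp - t + X_t                # second term of the max

assert abs(lb1 - lb2) < 1e-9
# Both equal the lemma's lower bound -(mu+eps)(t - X_t)/(1+mu+eps).
assert abs(lb1 - (-(mu + eps) * (t - X_t) / (1 + mu + eps))) < 1e-9
```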
In a similar manner we can bound the total knowledge available to an agent that can only receive messages face-to-face.
\begin{lemma}\label{lm:Kf2f}
Let $\mathcal{A}$ be an evacuation algorithm and let $X_{f2f} \in \mathcal{A}$ represent the trajectory of an agent that can only receive messages face-to-face.
If $\mu(X_{f2f})<1$ then for all $0 < \epsilon < 1-\mu(X_{f2f})$ there exists a time $T > 0$ such that
\[K_{X_{f2f}}(t;\mathcal{A}) \subseteq \left[-\frac{(\mu(\mathcal{A})+\epsilon)(t-X_{f2f}(t))}{1+\mu(\mathcal{A})+\epsilon},\ \frac{(\mu(\mathcal{A})+\epsilon)(t+X_{f2f}(t))}{1+\mu(\mathcal{A})+\epsilon}\right],\ \forall t > T.\]
In the case of $\mu(X_{f2f}) = 1$ the parameter $\epsilon$ can be taken to be $0$ in the above expression.
\end{lemma}
\begin{proof}
When $\mu(X_{f2f}) < 1$ it follows from the definitions of $\mu(\mathcal{A})$ and $\mu(X)$ that there exists a time $T'>0$ such that for any $X \in \mathcal{A}$ we have
\begin{equation}\label{eq:5}
-(\mu(\mathcal{A})+\epsilon)t \leq X(t) \leq (\mu(\mathcal{A})+\epsilon)t,\ \forall t > T',\ \forall X \in \mathcal{A}.
\end{equation}
In order to have direct knowledge of location $x$ at time $t > T'$ there must exist a time $t' \leq t$ such that $X_{f2f}(t') = x$. The unit speed of the agents implies that $|X_{f2f}(t)-x| \leq t-t'$ or that \eqref{eq:2} must be satisfied by the trajectory $X_{f2f}$ at time $t$.
In order to have indirect knowledge of location $x$ at time $t$ there must exist another agent that visits $x$ at time $t' \leq t$ and can reach location $X_{f2f}(t)$ by time $t$. Indeed, this agent must be able to catch the agent with trajectory $X_{f2f}$ at or before time $t$, and the agent with trajectory $X_{f2f}$ will be at location $X_{f2f}(t)$ at time $t$. Thus, in order to have indirect knowledge of $x$, the unit speed condition implies again that $|X_{f2f}(t)-x| \leq t-t'$ or that \eqref{eq:2} is satisfied. To complete the proof we follow the same steps of the proof of Lemma~\ref{lm:KD} except with \eqref{eq:5} used in place of \eqref{eq:1}.
\end{proof}
We will now focus on the case that there is only a single sender involved in the evacuation. The sender can only be communicated with face-to-face and so Lemma~\ref{lm:Kf2f} applies in this case. We can use it to get the following result.
\begin{lemma}\label{lm:1s_1}
Let $\mathcal{A}$ be an evacuation algorithm with one sender and let $S \in \mathcal{A}$ represent the trajectory of this sender. If $\mu(S)=1$ then $\textsc{CR}(\mathcal{A})$ is unbounded. If $\mu(S)<1$ then we have
\[\textsc{CR}(\mathcal{A}) \ge 1 + \frac{(1+\mu(\mathcal{A}))(1+\mu(S))}{\mu(\mathcal{A})(1-\mu(S))}.\]
\end{lemma}
\begin{proof}
If $\mu(S)=1$ then the previous lemma tells us that $K_{S}(t;\mathcal{A}) \subseteq [-\frac{t-S(t)}{2},\frac{t+S(t)}{2}]$. Suppose without loss of generality that $S(t) = t$. Then $K_S(t;\mathcal{A}) \subseteq [0,t]$ and by Lemma~\ref{lm:cr_general}, choosing $x = -1 \not\in K_S(t;\mathcal{A})$, we have
\[\textsc{CR}(\mathcal{A}) \geq \sup_{x \not\in K_S(t;\mathcal{A})} \frac{|S(t)-x|+t}{|x|} \geq \frac{(t+1)+t}{1} = 2t+1.\]
Since the above holds for arbitrarily large values of $t$, we conclude that $\textsc{CR}(\mathcal{A})$ is unbounded.
Suppose now that $\mu(S)<1$. Then for all $0 < \epsilon < 1-\mu(S)$ there exists a time $T$ such that
\[K_{S}(t;\mathcal{A}) \subseteq \left[-\frac{(\mu(\mathcal{A})+\epsilon)(t-S(t))}{1+\mu(\mathcal{A})+\epsilon},\ \frac{(\mu(\mathcal{A})+\epsilon)(t+S(t))}{1+\mu(\mathcal{A})+\epsilon}\right],\ \forall t > T.\]
Moreover, from the definition of $\mu(X)$ it follows that for any $\Delta > 0$ there exists a time $\tau > \Delta$ such that $|S(\tau)| = \mu(S)\tau \pm o(\tau)$. Take $\Delta > T$ and assume without loss of generality that $S(\tau) = \mu(S) \tau \pm o(\tau)$. Then
\[K_{S}(\tau;\mathcal{A}) \subseteq \left[-\frac{(\mu(\mathcal{A})+\epsilon)(1-\mu(S))\tau}{1+\mu(\mathcal{A})+\epsilon},\ \frac{(\mu(\mathcal{A})+\epsilon)(1+\mu(S))\tau}{1+\mu(\mathcal{A})+\epsilon}\right]\]
and
\begin{align*}
\textsc{CR}(\mathcal{A}) &\geq \sup_{x \not\in K_S(\tau;\mathcal{A})} \frac{|S(\tau)-x|+\tau}{|x|} \\ &\geq 1 + \sup_{\epsilon' > 0} \frac{(1+\mu(S))\tau}{\frac{(\mu(\mathcal{A})+\epsilon)(1-\mu(S))\tau}{1+\mu(\mathcal{A})+\epsilon}+\epsilon'} = 1 + \frac{(1+\mu(\mathcal{A})+\epsilon)(1+\mu(S))}{(\mu(\mathcal{A})+\epsilon)(1-\mu(S))}\\
&> 1 + \frac{(1+\mu(\mathcal{A}))(1+\mu(S))}{\mu(\mathcal{A})(1-\mu(S))}\left[\frac{1}{1+\frac{\epsilon}{\mu(\mathcal{A})}}\right].
\end{align*}
The term in square brackets approaches 1 from below as $\epsilon \rightarrow 0$ and thus for any fixed $\delta>0$ we can choose $\epsilon>0$ small enough that
\begin{align*}
\textsc{CR}(\mathcal{A}) &> 1 - \delta + \frac{(1+\mu(\mathcal{A}))(1+\mu(S))}{\mu(\mathcal{A})(1-\mu(S))}.
\end{align*}
\end{proof}
\begin{corollary}
Let $\mathcal{A}$ be an evacuation algorithm with one sender and let $S \in \mathcal{A}$ represent the trajectory of this sender. If $\mu(S) = \mu(\mathcal{A})$ we have $\textsc{CR}(\mathcal{A}) \ge 9$.
\end{corollary}
\begin{proof}
With $\mu(S) = \mu(\mathcal{A})$ we have from Lemma~\ref{lm:1s_1} that
\[\textsc{CR}(\mathcal{A}) \ge 1 + \frac{(1+\mu(S))^2}{\mu(S)(1-\mu(S))}.\]
Let $g(u) = \frac{(1+u)^2}{u (1-u)}$ and observe that
\begin{align*}
\frac{dg(u)}{du} &= \frac{2(1+u)}{u(1-u)} - \frac{(1+u)^2(1-2u)}{u^2(1-u)^2} = (1+u)\left[\frac{2u(1-u) - (1+u)(1-2u)}{u^2(1-u)^2}\right]\\
&= (1+u)\left[\frac{2u-2u^2 - 1 + 2u - u + 2u^2}{u^2(1-u)^2}\right]
= \frac{(1+u)(3u-1)}{u^2(1-u)^2}.
\end{align*}
From this last expression it is clear that $u = 1/3$ is the only critical point of $g$ in $(0,1)$ and that it is a minimizer. When $u=1/3$ we find $g(1/3) = \frac{(1+\frac{1}{3})^2}{\frac{1}{3}(1-\frac{1}{3})} = 8$ and thus we can conclude that $\textsc{CR}(\mathcal{A}) \ge 1 + 8 = 9$, as required.
\end{proof}
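The one-variable minimization above is easy to confirm numerically. The Python snippet below (a sanity check, not part of the proof) verifies that $g(u) = \frac{(1+u)^2}{u(1-u)}$ attains its minimum $8$ at $u=1/3$ on $(0,1)$.

```python
# Numeric check: g(u) = (1+u)^2 / (u(1-u)) has minimum 8 at u = 1/3,
# confirming the bound CR >= 1 + 8 = 9 of the corollary.
g = lambda u: (1 + u) ** 2 / (u * (1 - u))

grid = [i / 10000 for i in range(1, 10000)]  # grid over (0,1)
u_min = min(grid, key=g)

assert abs(u_min - 1 / 3) < 1e-3
assert abs(g(1 / 3) - 8) < 1e-9
assert all(g(u) >= 8 - 1e-9 for u in grid)
```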
In the next lemma we consider the knowledge set of the receivers.
\begin{lemma}\label{lm:Kreceivers}
Let $\mathcal{A}$ be an evacuation algorithm with one sender and at least one receiver. Let $S$ be the trajectory of the sender and suppose that $\mu(S) < \mu(\mathcal{A})$. Let $R$ be the trajectory of a receiver with $\mu(R) = \mu(\mathcal{A})$.
If $\mu(R) < 1$ then for all $0 < \epsilon < 1-\mu(R)$ there exists a time $T>0$ such that
\[K_{R}(t;\mathcal{A}) \subseteq K_S(t;\mathcal{A}) \cup \left[-\frac{(\mu(R)+\epsilon)(t-R(t))}{1+\mu(R)+\epsilon},\ \frac{(\mu(R)+\epsilon)(t+R(t))}{1+\mu(R)+\epsilon}\right],\ \forall t > T.\]
In the case of $\mu(R) = 1$ the parameter $\epsilon$ can be taken to be $0$ in the above expression.
\end{lemma}
\begin{proof}
The receivers can receive wireless messages from the sender and so at any time they know what the sender knows. If we exclude knowledge from the sender, a receiver can only possess direct knowledge, or receive knowledge indirectly from a different receiver. However, receivers can't send messages and so communication between receivers is face-to-face. Thus, to complete the proof, we only need to invoke Lemma~\ref{lm:Kf2f}.
\end{proof}
\begin{lemma}\label{lm:cr_r}
Let $\mathcal{A}$ be an evacuation algorithm with one sender and at least one receiver. Let $S \in \mathcal{A}$ represent the trajectory of this sender; let $R \in \mathcal{A}$ represent the trajectory of the receiver with the largest value of $\mu(R)$; and define $\mathcal{A}' = \mathcal{A} \setminus \{R\}$. Then, we have
\begin{align*}
\textsc{CR}(\mathcal{A}) &\ge 1 + \frac{(1+\mu(\mathcal{A}'))(1+\mu(R))}{\mu(\mathcal{A}')(1+\mu(S))}.
\end{align*}
\end{lemma}
\begin{proof}
We make use of Lemma~\ref{lm:Kreceivers}. If $\mu(R)=1$ then
\[K_{R}(t;\mathcal{A}) \subseteq K_S(t;\mathcal{A}) \cup \left[-\frac{t-R(t)}{2},\frac{t+R(t)}{2}\right].\]
If $\mu(R) < 1$ then for all $0 < \epsilon < 1-\mu(R)$ there exists a time $T_R>0$ such that
\[K_{R}(t;\mathcal{A}) \subseteq K_S(t;\mathcal{A}) \cup \left[-\frac{(\mu(R)+\epsilon)(t-R(t))}{1+\mu(R)+\epsilon},\ \frac{(\mu(R)+\epsilon)(t+R(t))}{1+\mu(R)+\epsilon}\right],\ \forall t > T_R.\]
Moreover, for any $\Delta > 0$ there exists a time $\tau > \Delta$ such that $|R(\tau)| = \mu(R)\tau \pm o(\tau)$. Assume without loss of generality that $R(\tau) = \mu(R)\tau \pm o(\tau)$. Then
$K_{R}(\tau;\mathcal{A}) \subseteq K_S(\tau;\mathcal{A}) \cup \left[0,\tau\right]$ if $\mu(R)=1$ and for $\mu(R)<1$ we can take any $\Delta > T_R$ to get
\[K_{R}(\tau;\mathcal{A}) \subseteq K_S(\tau;\mathcal{A}) \cup \left[-\frac{(\mu(R)+\epsilon)(1-\mu(R))\tau}{1+\mu(R)+\epsilon},\ \frac{(\mu(R)+\epsilon)(1+\mu(R))\tau}{1+\mu(R)+\epsilon}\right].\]
In light of the proof of Lemma~\ref{lm:1s_1} and its corollary, it is clear that unless the sender can increase the lower bound of the receiver's knowledge, we will find that $\textsc{CR}(\mathcal{A})$ is unbounded when $\mu(R)=1$, and when $\mu(R)<1$ we will get $\textsc{CR}(\mathcal{A}) > 9-\delta$ for all $\delta > 0$. Thus, we assume that the sender can increase the lower bound of the receiver's knowledge.
Consider the knowledge set $K_S(t;\mathcal{A})$ of the sender. Since this must extend the knowledge of the receiver with trajectory $R$ we can exclude this receiver from the computation of $K_S(t;\mathcal{A})$. Thus, if we take $\mathcal{A}' = \mathcal{A} \setminus \{R\}$ we can invoke Lemma~\ref{lm:Kf2f} with respect to $\mathcal{A}'$ to conclude that for all $0 < \epsilon < 1-\mu(S)$ there exists a time $T_S > 0$ such that
\[K_{S}(t;\mathcal{A}') \subseteq \left[-\frac{(\mu(\mathcal{A}')+\epsilon)(t-S(t))}{1+\mu(\mathcal{A}')+\epsilon},\ \frac{(\mu(\mathcal{A}')+\epsilon)(t+S(t))}{1+\mu(\mathcal{A}')+\epsilon}\right],\ \forall t > T_S.\]
We can take $\Delta > \max\{T_R,T_S\}$ so that $\tau > T_S$ and as a result
\[K_{S}(\tau;\mathcal{A}') \subseteq \left[-\frac{(\mu(\mathcal{A}')+\epsilon)(\tau-S(\tau))}{1+\mu(\mathcal{A}')+\epsilon},\ \frac{(\mu(\mathcal{A}')+\epsilon)(\tau+S(\tau))}{1+\mu(\mathcal{A}')+\epsilon}\right].\]
By definition of $\mu(S)$, at any time $t > T_S$ we have $|S(t)| \leq (\mu(S)+\epsilon)t$ and thus
\[K_{S}(\tau;\mathcal{A}') \subseteq \left[-\frac{(\mu(\mathcal{A}')+\epsilon)(1+\mu(S)+\epsilon)\tau}{1+\mu(\mathcal{A}')+\epsilon},\ \frac{(\mu(\mathcal{A}')+\epsilon)(1+\mu(S)+\epsilon)\tau}{1+\mu(\mathcal{A}')+\epsilon}\right].\]
By Lemma~\ref{lm:cr_general} we then have
\begin{align*}
\textsc{CR}(\mathcal{A}) &\geq \sup_{x \notin K_{R}(\tau;\mathcal{A})} \frac{|R(\tau)-x|+\tau}{|x|} \\ &\geq 1 + \sup_{\epsilon'>0}\frac{(1+\mu(R))\tau}{\frac{(\mu(\mathcal{A}')+\epsilon)(1+\mu(S)+\epsilon)\tau}{1+\mu(\mathcal{A}')+\epsilon}+\epsilon'} = 1 + \frac{(1+\mu(\mathcal{A}')+\epsilon)(1+\mu(R))}{(\mu(\mathcal{A}')+\epsilon)(1+\mu(S)+\epsilon)}\\
&> 1 + \frac{(1+\mu(\mathcal{A}'))(1+\mu(R))}{\mu(\mathcal{A}')(1+\mu(S))}\left[\frac{1}{1+\frac{\epsilon}{\mu(\mathcal{A}')}}\right]\left[\frac{1}{1+\frac{\epsilon}{\mu(S)}}\right].
\end{align*}
It is clear that $\mu(S) > 0$ (and, thus, also $\mu(\mathcal{A}')>0$) since otherwise the sender would not extend the receiver's knowledge. Then both of the terms in square brackets approach 1 from below as $\epsilon \rightarrow 0$ and so we can choose $\epsilon>0$ small enough that for any fixed $\delta > 0$ we have
\begin{align*}
\textsc{CR}(\mathcal{A}) &> 1 - \delta + \frac{(1+\mu(\mathcal{A}'))(1+\mu(R))}{\mu(\mathcal{A}')(1+\mu(S))}
\end{align*}
as required.
\end{proof}
\begin{theorem}\label{thm:cr_11}
Let $\mathcal{A}$ be an evacuation algorithm for one sender and one receiver. Then $\textsc{CR}(\mathcal{A}) \ge 3+2\sqrt{2}$.
\end{theorem}
\begin{proof}
Let $\mathcal{A} = \{S,R\}$ with $S$ and $R$ the trajectories of the sender and receiver respectively. We must have $\mu(S)<\mu(\mathcal{A})=\mu(R)$ since otherwise the competitive ratio is at least $9$. Then, Lemma~\ref{lm:1s_1} states that
\[\textsc{CR}(\mathcal{A}) \ge 1 + \frac{(1+\mu(R))(1+\mu(S))}{\mu(R)(1-\mu(S))}\]
and from Lemma~\ref{lm:cr_r} we have
\begin{align*}
\textsc{CR}(\mathcal{A}) &\ge 1 + \frac{(1+\mu(S))(1+\mu(R))}{\mu(S)(1+\mu(S))} = 1 + \frac{1+\mu(R)}{\mu(S)}
\end{align*}
where we have used the fact that $\mathcal{A}' = \{S\}$ when there is only one receiver. We therefore have
\[\textsc{CR}(\mathcal{A}) \ge 1 + \max\left\{\frac{(1+\mu(R))(1+\mu(S))}{\mu(R)(1-\mu(S))},\ \frac{1+\mu(R)}{\mu(S)}\right\}.\]
The first term in the max decreases with $\mu(R)$ and the second term increases with $\mu(R)$, and so our best lower bound is achieved when $\mu(R)$ is such that the two terms are equal. We find that we need
\[\mu(R) = \frac{\mu(S)(1+\mu(S))}{1-\mu(S)}.\]
We then find that
\[\textsc{CR}(\mathcal{A}) \ge 1 +\frac{1+\frac{\mu(S)(1+\mu(S))}{1-\mu(S)}}{\mu(S)} = 1 + \frac{1}{\mu(S)}+ \frac{1+\mu(S)}{1-\mu(S)}.\]
Let $g(u) = \frac{1}{u}+ \frac{1+u}{1-u}$ and observe that
\begin{align*}
\frac{dg(u)}{du} &= -\frac{1}{u^2} + \frac{1}{1-u} + \frac{1+u}{(1-u)^2} \\ &= -\frac{1}{u^2} + \frac{2}{(1-u)^2} = \frac{(1-u)^2-2u^2}{u^2(1-u)^2}
= \frac{(1-u-\sqrt{2}u)(1-u+\sqrt{2}u)}{u^2(1-u)^2}\\
&= \frac{[1-(1+\sqrt{2})u][1-(1-\sqrt{2})u]}{u^2(1-u)^2}.
\end{align*}
From this last expression it is clear that $g(u)$ is minimized when $u=\frac{1}{1+\sqrt{2}} = \sqrt{2}-1$. The minimum is
\[g(\sqrt{2}-1) = \sqrt{2}+1 + \frac{\sqrt{2}}{2-\sqrt{2}} = 2+2\sqrt{2}\]
and we can conclude that
\[\textsc{CR}(\mathcal{A}) \ge 3+2\sqrt{2}.\]
\end{proof}
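The minimization of $g$ above can likewise be confirmed numerically. The Python snippet below (a sanity check, not part of the proof) verifies that $g(u) = \frac{1}{u} + \frac{1+u}{1-u}$ is minimized on $(0,1)$ at $u = \sqrt{2}-1$ with minimum $2+2\sqrt{2}$, so that the bound $3+2\sqrt{2}$ follows.

```python
import math

# Numeric check: g(u) = 1/u + (1+u)/(1-u) is minimized on (0,1) at
# u = sqrt(2)-1 with g(sqrt(2)-1) = 2 + 2*sqrt(2), so CR >= 3 + 2*sqrt(2).
g = lambda u: 1 / u + (1 + u) / (1 - u)
u_star = math.sqrt(2) - 1

assert abs(g(u_star) - (2 + 2 * math.sqrt(2))) < 1e-9

grid = [i / 10000 for i in range(1, 10000)]  # grid over (0,1)
assert abs(min(grid, key=g) - u_star) < 1e-3
assert all(1 + g(u) >= 3 + 2 * math.sqrt(2) - 1e-9 for u in grid)
```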
\begin{corollary} The evacuation algorithm for one sender and one receiver given in the proof of Theorem \ref{thm:1s_1r_ub} is optimal. \end{corollary}
\begin{theorem}\label{thm:cr_12}
Let $\mathcal{A}$ be an evacuation algorithm for one sender and $n_r>1$ receivers. Then $\textsc{CR}(\mathcal{A}) \ge 2+\sqrt{5}$.
\end{theorem}
\begin{proof}
Let $\mathcal{A} = \{S,R,R',\ldots\}$ with $S$ the trajectory of the sender, $R$ the trajectory of the receiver with largest value of $\mu(R)$, and $R'$ the trajectory of the receiver with second largest $\mu(R')$. We must have $\mu(S)<\mu(\mathcal{A})=\mu(R)$ since otherwise the competitive ratio is at least $9$. Then, Lemma~\ref{lm:1s_1} states that
\[\textsc{CR}(\mathcal{A}) \ge 1 + \frac{(1+\mu(R))(1+\mu(S))}{\mu(R)(1-\mu(S))}\]
and from Lemma~\ref{lm:cr_r} we have
\begin{align*}
\textsc{CR}(\mathcal{A}) &\ge 1 + \frac{(1+\mu(\mathcal{A}'))(1+\mu(R))}{\mu(\mathcal{A}')(1+\mu(S))}.
\end{align*}
If $\mu(\mathcal{A}') = \mu(S)$ then it follows from Theorem~\ref{thm:cr_11} that we will have $\textsc{CR}(\mathcal{A}) \ge 3+2\sqrt{2}$. Thus, we assume that $\mu(S) < \mu(\mathcal{A}')$. Then $\mu(R') = \mu(\mathcal{A}')$ and we have
\[\textsc{CR}(\mathcal{A}) \ge 1 + \max\left\{\frac{(1+\mu(R))(1+\mu(S))}{\mu(R)(1-\mu(S))},\ \frac{(1+\mu(R'))(1+\mu(R))}{\mu(R')(1+\mu(S))}\right\}.\]
The second term in the max increases with decreasing $\mu(R')$ and the first term does not depend on $\mu(R')$. Thus, we set $\mu(R')$ as large as possible, i.e., we take $\mu(R') = \mu(R)$. Then
\[\textsc{CR}(\mathcal{A}) \ge 1 + \max\left\{\frac{(1+\mu(R))(1+\mu(S))}{\mu(R)(1-\mu(S))},\ \frac{(1+\mu(R))^2}{\mu(R)(1+\mu(S))}\right\}.\]
Now both terms in the max increase with decreasing $\mu(R)$ and so we take $\mu(R)$ as large as possible, i.e., $\mu(R)=1$. Then
\[\textsc{CR}(\mathcal{A}) \ge 1 + 2\max\left\{\frac{1+\mu(S)}{1-\mu(S)},\ \frac{2}{1+\mu(S)}\right\}.\]
The first term in the max increases with $\mu(S)$ and the second term decreases with $\mu(S)$ and so our best lower bound is achieved when $\mu(S)$ is such that the two terms are equal. We find that we need $(1+\mu(S))^2 = 2(1-\mu(S))$ or
\[\mu(S)^2+4\mu(S)-1=0.\]
The only non-negative solution to this quadratic equation is
\[\mu(S) = \frac{-4 + \sqrt{20}}{2} = \sqrt{5}-2\]
and we can conclude that
\[\textsc{CR}(\mathcal{A}) \ge 1 + \frac{4}{\sqrt{5}-1} = 1 + \frac{4(\sqrt{5}+1)}{4} = 2+\sqrt{5}\]
as required.
\end{proof}
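The arithmetic in this proof can be checked directly. The Python snippet below (a sanity check, not part of the proof) verifies that $\mu(S) = \sqrt{5}-2$ solves the quadratic, equalizes the two terms of the max, and yields the bound $2+\sqrt{5}$.

```python
import math

# Numeric check for the proof above: mu(S) = sqrt(5)-2 solves
# mu^2 + 4*mu - 1 = 0, makes (1+mu)/(1-mu) = 2/(1+mu), and gives
# the bound 1 + 2 * 2/(1+mu) = 2 + sqrt(5).
mu = math.sqrt(5) - 2

assert abs(mu**2 + 4 * mu - 1) < 1e-12

term1 = (1 + mu) / (1 - mu)
term2 = 2 / (1 + mu)
assert abs(term1 - term2) < 1e-12
assert abs(1 + 2 * term2 - (2 + math.sqrt(5))) < 1e-12
```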
Our upper bound for the case that $n_s=1$ and $n_r>1$ was $5 > 2+\sqrt{5}$ and so either the lower bound is not tight and/or the upper bound must come down. If one refers to the proof of Theorem~\ref{thm:cr_12}, one can observe that our best lower bound was achieved when $\mu(S) = \sqrt{5}-2$ and $\mu(R)=\mu(R')=1$. However, one can easily confirm that any algorithm with $\mu(R)=\mu(R')=1$ has a competitive ratio of at least 5, and so it is evident that, at least, the lower bound is not tight. Thus, in order to make progress on this problem, we believe a different lower bounding technique will be required.
\section{Conclusions}\label{sec:conclusions}
We have introduced a novel communication model that puts an interesting twist on the classic linear group search problem. We provide upper bounds on the competitive ratio of evacuation for the three interesting combinations of agents -- one sender and one receiver, one sender and multiple receivers, and multiple senders and one receiver. We demonstrate that our algorithm for the case of one sender and one receiver is optimal by providing a lower bound matching our upper bound. For the case of one sender and multiple receivers we provide a non-trivial lower bound of $2+\sqrt{5}$, which compares to our upper bound of 5. We do not provide any non-trivial lower bounds for the case of multiple senders and one receiver, and we believe this is the most difficult case in which to do so (indeed, the upper bound for this case was considerably more complex than for the other two cases).
The most immediate open problems concern the lower bounds for the cases of multiple senders and multiple receivers. For the multiple receiver case we provided arguments demonstrating that the lower bound presented here cannot be tight and so in order to close the gap between the lower and upper bounds a different lower bounding technique will be required. Of course, it can also be the case that the upper bound must come down as well (although this does not seem likely). We did not attempt to provide a lower bound for the case of multiple senders (there is a trivial lower bound of 3 which can be derived by considering the first time any agent reaches location $\pm x$).
Our upper bounds on the competitive ratio suggest that it is better to ``listen'' than it is to ``speak'', since our upper bound for the case of multiple receivers is 5 and for multiple senders it is $\approx 5.681319$ (and we do not believe that these can be improved). Closing the gap between the upper and lower bounds would be interesting even just from the standpoint of answering the question of whether or not it is better to ``listen'' than it is to ``speak''.
\end{document} |
\begin{document}
\title{Spin-structures on real Bott manifolds}
\author{A. G\c{a}sior\footnote{Author is supported by the Polish National Science Center grant DEC-2013/09/B/ST1/04125}}
\maketitle
\hskip5mm \section{Introduction} Let $M^n$ be a flat manifold of dimension $n$, i.e. a compact connected Riemannian manifold without boundary with zero sectional curvature. From the theorem of Bieberbach (\cite{Ch}, \cite{S3}) the fundamental group $\pi_{1}(M^{n}) = \Gamma$ determines a short exact sequence: \begin{equation}\label{ses} 0 \rightarrow {\mathbb Z}^{n} \rightarrow \Gamma \stackrel{p}\rightarrow G \rightarrow 0, \end{equation} where ${\mathbb Z}^{n}$ is a torsion free abelian group of rank $n$ and $G$ is a finite group which is isomorphic to the holonomy group of $M^{n}.$ The universal covering of $M^{n}$ is the Euclidean space ${\mathbb R}^{n}$ and hence $\Gamma$ is isomorphic to a discrete cocompact subgroup of the isometry group $\operatorname{Isom}({\mathbb R}^{n}) = \operatorname{O}(n)\ltimes{\mathbb R}^n = E(n).$ In that case $p:\Gamma\to G$ is the restriction to $\Gamma$ of the projection of $\operatorname{O}(n)\ltimes \mathbb R^n$ onto the first component, and $\pi_1(M^n)=\Gamma$ is a subgroup of $\operatorname{O}(n)\ltimes \mathbb R^n$. Conversely, given a short exact sequence of the form (\ref{ses}), it is known that the group $\Gamma$ is (isomorphic to) the fundamental group of a flat manifold if and only if $\Gamma$ is torsion free. In this case $\Gamma$ is called a Bieberbach group. We can define a holonomy representation $\phi:G\to \operatorname{GL}(n,{\mathbb Z})$ by the formula: \begin{equation}\label{holonomyrep} \forall e\in\mathbb Z^n\;\forall g\in G,\ \phi(g)(e) = \tilde{g}e(\tilde{g})^{-1}, \end{equation} where $p(\tilde{g})=g.$ In this article we shall consider Bieberbach groups of rank $n$ with holonomy group ${\mathbb Z}_{2}^{k}$, $1\leq k\leq n-1$, and $\phi({\mathbb Z}_{2}^{k})\subset D\subset \operatorname{GL}(n,{\mathbb Z})$. Here $D$ is the group of diagonal matrices with $\pm1$ on the diagonal.
Let \begin{equation}\label{tower} M_{n}\stackrel{{\mathbb R} P^1}\to M_{n-1}\stackrel{{\mathbb R} P^1}\to...\stackrel{{\mathbb R} P^1}\to M_{1}\stackrel{{\mathbb R} P^1}\to M_0 = \{ \bullet\} \end{equation} be a sequence of real projective bundles such that $M_i\to M_{i-1}$, $i=1,2,\ldots,n$, is the projectivization of the Whitney sum of a real line bundle $L_{i-1}$ and the trivial line bundle over $M_{i-1}$. The sequence (\ref{tower}) is called the real Bott tower and the top manifold $M_n$ is called the real Bott manifold, \cite{CMO}.
Let $\gamma_i$ be the canonical line bundle over $M_i$ and set $x_i = w_1(\gamma_i)$ ($w_1$ is the first Stiefel-Whitney class). Since $H^1(M_{i-1},{\mathbb Z}_2)$ is additively generated by $x_1,x_2,\ldots,x_{i-1}$ and $L_{i-1}$ is a line bundle over $M_{i-1},$ we can uniquely write \begin{equation}\label{w1} w_1(L_{i-1}) = \sum_{k=1}^{i-1} a_{ki}x_k \end{equation} where $a_{ki}\in {\mathbb Z}_2$ and $i = 2,3,...,n.$
From above we obtain the matrix $A = [a_{ki}]$ which is an $n\times n$ strictly upper triangular matrix whose diagonal entries are $0$ and remaining entries are either $0$ or $1.$ One can observe (see \cite{KM}) that the tower (\ref{tower}) is completely determined by the matrix $A$ and therefore we may denote the real Bott manifold $M_n$ by $M(A)$. From \cite[Lemma 3.1]{KM} we can consider $M(A)$ as the orbit space $M(A) = {\mathbb R}^n/\Gamma(A),$ where $\Gamma(A)\subset E(n)$ is generated by elements \begin{equation}\label{gener} s_{i} = \left(\left[ \begin{matrix} 1&0&0&.&.&...&0\\ 0&1&0&.&.&...&0\\ .&.&.&.&.&...&\\ 0&...&0&1&0&...&0\\ 0&...&0&0&(-1)^{a_{i,i+1}}&...&0\\ .&.&.&.&.&...&\\ 0&...&0&0&0&...&(-1)^{a_{i,n}} \end{matrix}\right], \begin{pmatrix} 0\\ .\\ 0\\ \frac{1}{2}\\ 0\\ .\\ 0\\ 0 \end{pmatrix}\right)\in E(n), \end{equation} where $(-1)^{a_{i,i+1}}$ is in the $(i+1, i+1)$ position and $\frac{1}{2}$ is the $i$-th coordinate of the column, $i = 1,2,...,n-1.$ $s_{n} = \left(I,\left(0,0,...,0,\frac12\right)\right)\in E(n).$ From \cite[Lemma 3.2, 3.3]{KM} $s_{1}^{2},s_{2}^{2},...,s_{n}^{2}$ commute with each other and generate a free abelian subgroup ${\mathbb Z}^n.$ In other words $M(A)$ is a flat manifold with holonomy group ${\mathbb Z}_2^k$ of diagonal type. Here $k$ is the number of nonzero rows of the matrix $A$.
We have the following two lemmas.
\begin{lm}[\cite{KM}, Lemma 2.1]
The cohomology ring $H^*(M(A),\mathbb Z_2)$ is generated by degree one elements $x_1,\ldots,x_n$ as a graded ring with $n$ relations
$$x_j^2=x_j\sum_{i=1}^na_{ij}x_i,$$
for $j=1,\ldots,n$. \end{lm} \begin{lm}[\cite{KM}, Lemma 2.2]\label{lemma12}
The real Bott manifold $M(A)$ is orientable if and only if the sum of the entries in each row of the matrix $A$ is $0\ (\operatorname{mod}\ 2)$. \end{lm}
There are a few ways to decide whether there exists a Spin-structure on an oriented flat manifold $M^n$. We start with \begin{defi}[\cite{Fr}]
An oriented flat manifold $M^n$ has a Spin-structure if and only if there exists a homomorphism $\epsilon\colon\Gamma\to\operatorname{Spin}(n)$ such that $\lambda_n\epsilon=p$, where $\lambda_n:\operatorname{Spin}(n)\to\operatorname{SO}(n)$ is the covering map. \end{defi} There is an equivalent condition for the existence of a Spin-structure: it is well known (\cite{Fr}) that a closed oriented differentiable manifold $M$ has a Spin-structure if and only if its second Stiefel-Whitney class vanishes.
The $k$-th Stiefel-Whitney class \cite[page 3, (2.1)]{LS} is given by the formula \begin{equation} w_k(M(A)) = (B(p))^{\ast}\sigma_{k}(y_1,y_2,...,y_{n})\in H^{k}(M(A);{\mathbb Z}_2) , \end{equation} where $\sigma_k$ is the $k$-th elementary symmetric function, $B(p)$ is the map induced by $p$ on the classifying space and \begin{equation} y_i : = w_1(L_{i-1})\label{y}\end{equation} for $i=2,3,\ldots,n$. Hence, \begin{equation}\label{sw1} w_{2}(M(A)) = \sum_{1< i< j\leq n} y_{i}y_{j}\in H^{2}(M(A);{\mathbb Z}_2). \end{equation} \begin{defi} {(\cite{CMO}, page 4)}
A binary square matrix $A$ is a Bott matrix if $A = PBP^{-1}$ for a permutation
matrix $P$ and a strictly upper triangular binary matrix $B.$ \end{defi}
Our paper is a sequel to \cite{GS1}, where some conditions for the existence of Spin-structures are given. \begin{theo}{\rm{(\cite{GS1}, page 1021)}}
Let $A$ be a matrix of an orientable real Bott manifold $M(A)$.
\begin{enumerate}
\item Let $l\in\mathbb N$ be an odd number. If there exist $1\leq i<j\leq n$ and rows $A_{i,*}$, $A_{j,*}$ such that
$$\sharp\{m:a_{i,m}=a_{j,m}=1\}=l$$
and
$$a_{ij}=0$$
then $M(A)$ has no Spin-structure.
\item If there exist $1\leq i<j\leq n$ with $a_{ij}=1$ and rows
$$\begin{aligned}
A_{i,*}&=(0,\ldots,0,a_{i,i_1},\ldots,a_{i,i_{2k}},0,\ldots,0),\\
A_{j,*}&=(0,\ldots,0,a_{j,i_{2k+1}},\ldots,a_{j,i_{2k+2l}},0,\ldots,0)
\end{aligned}$$
such that $a_{i,i_1}=\ldots=a_{i,i_{2k}}=1$, $a_{i,m}=0$ for $m\not\in\{i_1,\ldots,i_{2k}\}$, $a_{j,i_{2k+1}}=\ldots=a_{j,i_{2k+2l}}=1$, $a_{j,r}=0$ for $r\not\in\{i_{2k+1},\ldots,i_{2k+2l}\}$ and $l$, $k$ are odd then $M(A)$ has no Spin-structure.
\end{enumerate} \end{theo}
In this paper we extend this theorem and formulate necessary and sufficient conditions for the existence of a Spin-structure on real Bott manifolds. Here is our main result, for Bott manifolds with holonomy group ${\mathbb Z}_2^k$, $k$ even. \begin{theo}\label{theolast}
Let $A$ be a Bott matrix with $k$ nonzero rows, where $k$ is an even number. Then the real Bott manifold $M(A)$ has a Spin-structure if and only if for all $1\leq i<j\leq n$ the manifolds $M(A_{ij})$ have a Spin-structure, where $A_{ij}$ is the matrix whose only nonzero rows are the $i$-th and $j$-th rows of $A$. \end{theo}
The structure of the paper is as follows. In Section 2 we give three lemmas. The first of them gives a decomposition of the $n\times n$ integer matrix $A$ into $n\times n$ integer matrices $A_{ij}$ with the $i$-th and $j$-th rows nonzero. In Lemmas 2.2 and 2.3 we examine the dependence of $y_i$ and $w_2$ of a real Bott manifold $M(A)$ on the values $y_i^{jk}$ and $w_2(M(A_{jk}))$ of the manifolds $M(A_{jk})$. The proof of Theorem 1.2 then follows from Lemmas 2.2 and 2.3. Section 3 has a very technical character. In it we give a complete characterization of the existence of Spin-structures on the manifolds $M(A_{ij})$, $1\leq i<j\leq n$. Almost all statements in Sections 2 and 3 are illustrated by examples.
The author is grateful to Andrzej Szczepa\'{n}ski for his valuable suggestions and help.
\section{Proof of the Main Theorem}
At the beginning we give a formula for the decomposition of a real Bott matrix $A$ into a sum of real Bott matrices with two nonzero rows.
\begin{lm}
Let $A$ be an $n\times n$ Bott matrix and let $A_{ij}$, $1\leq i<j\leq n$, be the $n\times n$ matrices whose only nonzero rows are the $i$-th and $j$-th rows of $A$. Then, if $k$ is even, we have the following decomposition
\begin{equation}A=\sum_{1\leq i<j\leq n}A_{ij}.\label{rozklad}\end{equation} \end{lm}
\noindent {\bf Proof.} Let $A$ be an $n\times n$ Bott matrix with $k$ nonzero rows, where $k$ is an even number. Without loss of generality we can assume that the nonzero rows have numbers from 1 to $k$. We consider the matrix $A$ as a sum of the matrices $A_{ij}$, $1\leq i<j\leq n$. The number of matrices $A_{ij}$ equals $\binom{k}{2}$. For $1\leq i\leq k$ there are $k-1$ two-element subsets of $\{1,2,\ldots,k\}$ containing $i$, so each nonzero row of $A$ appears in $k-1$ of the matrices $A_{ij}$. Thus, summing the matrices $A_{ij}$, we obtain \begin{equation} (k-1)\cdot A=\sum_{1\leq i<j\leq n}A_{ij}.\label{rozklad_1} \end{equation} Since the entries of $A$ lie in ${\mathbb Z}_2$ and $k-1$ is odd, we get the formula (\ref{rozklad}).
\hskip 142mm $\Box$
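The counting argument in the proof can be illustrated computationally. The Python snippet below (a sanity check, not part of the proof) sums the matrices $A_{ij}$ over ${\mathbb Z}_2$ for a $6\times 6$ Bott matrix with $k=4$ nonzero rows (the matrix used in the example below) and recovers $A$.

```python
# Decomposition check: k = 4 nonzero rows, each row appears in
# k-1 = 3 of the matrices A_ij, and 3*A = A mod 2, as in the lemma.
A = [[0,1,1,0,0,0],
     [0,0,1,1,0,0],
     [0,0,0,1,1,0],
     [0,0,0,0,1,1],
     [0,0,0,0,0,0],
     [0,0,0,0,0,0]]
n = len(A)
nonzero = [r for r in range(n) if any(A[r])]  # rows 0..3 (0-based)

total = [[0] * n for _ in range(n)]
for a in range(len(nonzero)):
    for b in range(a + 1, len(nonzero)):
        i, j = nonzero[a], nonzero[b]
        for c in range(n):            # add the matrix A_ij, row by row
            total[i][c] += A[i][c]
            total[j][c] += A[j][c]

assert [[v % 2 for v in row] for row in total] == A
```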
\begin{ex}\label{ex1}
Let
$$A=\left[
\begin{matrix}
0&1&1&0&0&0\\
0&0&1&1&0&0\\
0&0&0&1&1&0\\
0&0&0&0&1&1\\
0&0&0&0&0&0\\
0&0&0&0&0&0
\end{matrix}\right]$$
Thus $n=6$, $k=4$, so we have
$$\begin{aligned}
A=&
\underbrace{\left[
\begin{matrix}
0&1&1&0&0&0\\
0&0&1&1&0&0\\
0&0&0&0&0&0\\
0&0&0&0&0&0\\
0&0&0&0&0&0\\
0&0&0&0&0&0
\end{matrix}\right]}_{A_{12}}
+\underbrace{\left[
\begin{matrix}
0&1&1&0&0&0\\
0&0&0&0&0&0\\
0&0&0&1&1&0\\
0&0&0&0&0&0\\
0&0&0&0&0&0\\
0&0&0&0&0&0
\end{matrix}\right]}_{A_{13}}
+\underbrace{\left[
\begin{matrix}
0&1&1&0&0&0\\
0&0&0&0&0&0\\
0&0&0&0&0&0\\
0&0&0&0&1&1\\
0&0&0&0&0&0\\
0&0&0&0&0&0
\end{matrix}\right]}_{A_{14}}\\
&+\underbrace{\left[
\begin{matrix}
0&0&0&0&0&0\\
0&0&1&1&0&0\\
0&0&0&1&1&0\\
0&0&0&0&0&0\\
0&0&0&0&0&0\\
0&0&0&0&0&0
\end{matrix}\right]}_{A_{23}}
+\underbrace{\left[
\begin{matrix}
0&0&0&0&0&0\\
0&0&1&1&0&0\\
0&0&0&0&0&0\\
0&0&0&0&1&1\\
0&0&0&0&0&0\\
0&0&0&0&0&0
\end{matrix}\right]}_{A_{24}}
+\underbrace{\left[
\begin{matrix}
0&0&0&0&0&0\\
0&0&0&0&0&0\\
0&0&0&1&1&0\\
0&0&0&0&1&1\\
0&0&0&0&0&0\\
0&0&0&0&0&0
\end{matrix}\right]}_{A_{34}}
\end{aligned}$$ \end{ex} \vskip5mm Before we start the proof of the main theorem we give an example.
\begin{ex}
For the manifold $M(A)$ from Example \ref{ex1} we get
$$
y_2=x_1,\;
y_3=x_1+x_2,\;
y_4=x_2+x_3,\;
y_5=x_3+x_4,\;
y_6=x_4.$$
Hence
$$w_2(M(A))=x_1x_3+x_2x_4.$$
We compute the second Stiefel-Whitney classes of the real Bott manifolds $M(A_{ij})$ from Example \ref{ex1}. For this purpose we put
$y_l^{ij}=w_1(L_{l-1})$
for manifolds $M(A_{ij})$ and we obtain
$$\begin{array}{lllllll}
y_2^{12}=x_1&y_2^{13}=x_1&y_2^{14}=x_1&y_2^{23}=0&y_2^{24}=0&y_2^{34}=0\\
y_3^{12}=x_1+x_2&y_3^{13}=x_1&y_3^{14}=x_1&y_3^{23}=x_2&y_3^{24}=x_2&y_3^{34}=0\\
y_4^{12}=x_2&y_4^{13}=x_3&y_4^{14}=0&y_4^{23}=x_2+x_3&y_4^{24}=x_2&y_4^{34}=x_3\\
y_5^{12}=0&y_5^{13}=x_3&y_5^{14}=x_4&y_5^{23}=x_3&y_5^{24}=x_4&y_5^{34}=x_3+x_4\\
y_6^{12}=0&y_6^{13}=0&y_6^{14}=x_4&y_6^{23}=0&y_6^{24}=x_4&y_6^{34}=x_4
\end{array}$$
With the above notation we get
$$\begin{aligned}
\sum_{1\leq i<j\leq k}y_2^{ij}&=3x_1=x_1\Rightarrow \sum_{1\leq i<j\leq k}y_2^{ij}=y_2,\\
\sum_{1\leq i<j\leq k}y_3^{ij}&=3x_1+3x_2=x_1+x_2\Rightarrow \sum_{1\leq i<j\leq k}y_3^{ij}=y_3,\\
\sum_{1\leq i<j\leq k}y_4^{ij}&=3x_2+3x_3=x_2+x_3\Rightarrow\sum_{1\leq i<j\leq k}y_4^{ij}=y_4,\\
\sum_{1\leq i<j\leq k}y_5^{ij}&=3x_3+3x_4=x_3+x_4\Rightarrow\sum_{1\leq i<j\leq k}y_5^{ij}=y_5,\\
\sum_{1\leq i<j\leq k}y_6^{ij}&=3x_4=x_4\Rightarrow\sum_{1\leq i<j\leq k}y_6^{ij}=y_6
\end{aligned}$$
and the second Stiefel-Whitney classes of the manifolds $M(A_{ij})$ are as follows
$$\begin{aligned}
w_2(M(A_{12}))&=0,\\
w_2(M(A_{13}))&=x_1x_3,\\
w_2(M(A_{14}))&=0,\\
w_2(M(A_{23}))&=0,\\
w_2(M(A_{24}))&=x_2x_4,\\
w_2(M(A_{34}))&=0.
\end{aligned}$$
Hence
$$
\sum_{1\leq i<j\leq 4}w_2(M(A_{ij}))=x_1x_3+x_2x_4=w_2(M(A)).
$$
\end{ex}
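The computations of this example can be automated. The Python snippet below (a sanity check, not part of the paper; indices are 0-based) encodes the relations $x_j^2=x_j\sum_i a_{ij}x_i$ of Lemma 1.1, computes $w_2$ over ${\mathbb Z}_2$, and confirms both $w_2(M(A))=x_1x_3+x_2x_4$ and the decomposition identity $w_2(M(A))=\sum_{i<j}w_2(M(A_{ij}))$ for this matrix.

```python
from itertools import combinations

A = [[0,1,1,0,0,0],
     [0,0,1,1,0,0],
     [0,0,0,1,1,0],
     [0,0,0,0,1,1],
     [0,0,0,0,0,0],
     [0,0,0,0,0,0]]
n = len(A)

def product(a, b, M):
    """x_a * x_b as a set of square-free degree-2 monomials mod 2,
    using the relations x_j^2 = x_j * sum_i M[i][j] x_i."""
    if a != b:
        return {frozenset((a, b))}
    out = set()
    for i in range(n):
        if M[i][a]:
            out ^= product(i, a, M)  # i != a since diag(M) = 0
    return out

def w2(M):
    y = [[M[r][c] for r in range(n)] for c in range(n)]  # y_l = column l
    poly = set()
    for l, r in combinations(range(n), 2):
        for a in range(n):
            for b in range(n):
                if y[l][a] and y[r][b]:
                    poly ^= product(a, b, M)  # addition mod 2
    return poly

def restrict(M, i, j):
    """The matrix A_ij: keep only rows i and j of M."""
    return [M[r] if r in (i, j) else [0] * n for r in range(n)]

# w_2(M(A)) = x1*x3 + x2*x4 (0-based index pairs (0,2) and (1,3))
assert w2(A) == {frozenset((0, 2)), frozenset((1, 3))}

total = set()
for i, j in combinations([0, 1, 2, 3], 2):  # the nonzero rows of A
    total ^= w2(restrict(A, i, j))
assert total == w2(A)
```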
Following the method described in the above example we have the following lemmas.
\begin{lm}\label{lemat1}
Let $A$ be an $n\times n$ Bott matrix with $k>3$ nonzero rows, where $k$ is an even number.
Then
\begin{equation}y_l=\sum_{1\leq i<j\leq k}y_l^{ij},\label{wzor_y}\end{equation}
where $y_l=w_1(L_{l-1}(M(A)))$ and $y_l^{ij}=w_1(L_{l-1}(M(A_{ij})))$. \end{lm}
\noindent {\bf Proof.} We have $$y_l=w_1(L_{l-1})=\sum_{k=1}^{l-1}a_{kl}x_k=x\cdot A^l$$ where $x=[x_1,\ldots,x_n]$, $A=[a_{ij}]$, $A^l$ is the $l-$th column of the matrix $A$ and $\cdot$ is multiplication of matrices. Let us multiply (\ref{rozklad})
on the left by $x$ $$\begin{aligned} x\cdot A&=\sum_{1\leq i<j\leq k} x\cdot A_{ij}.\\ \end{aligned}$$ Since $x\cdot A=[y_1,y_2,\ldots,y_n]$ and $x\cdot A_{ij}=[y^{ij}_1,y^{ij}_2,\ldots,y^{ij}_n]$, we get (\ref{wzor_y}).
\hskip 142mm $\Box$
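The mod-$2$ bookkeeping in this proof can also be checked mechanically: each nonzero row of $A$ occurs in $k-1$ of the pairs $(i,j)$, and $k-1$ is odd when $k$ is even, so the two-row restrictions $A_{ij}$ sum to $A$ over $\mathbb{Z}_2$. A minimal computational sketch (an illustration, not part of the paper; it assumes $A_{ij}$ is $A$ with every row other than $i$ and $j$ replaced by zeros):

```python
def two_row_restriction(A, i, j):
    """A_ij: keep rows i and j of A (0-indexed), zero out every other row."""
    return [row[:] if r in (i, j) else [0] * len(row)
            for r, row in enumerate(A)]

def mod2_pair_sum(A, nonzero_rows):
    """Sum the restrictions A_ij over all pairs of nonzero rows, modulo 2."""
    n = len(A)
    total = [[0] * n for _ in range(n)]
    rows = sorted(nonzero_rows)
    for a in range(len(rows)):
        for b in range(a + 1, len(rows)):
            R = two_row_restriction(A, rows[a], rows[b])
            for r in range(n):
                for c in range(n):
                    total[r][c] = (total[r][c] + R[r][c]) % 2
    return total

# A 6x6 Bott matrix with k = 4 nonzero rows (rows 0..3); since k is even,
# each nonzero row is counted k - 1 = 3 times, so the pair sum equals A mod 2.
A = [[0, 1, 1, 0, 0, 0],
     [0, 0, 1, 1, 0, 0],
     [0, 0, 0, 1, 1, 0],
     [0, 0, 0, 0, 1, 1],
     [0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0]]
```

With $k$ odd the same sum would give the zero matrix instead, which is one way to see why the argument needs $k$ to be even.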
\begin{lm}\label{lemat2}
Let $A$ be an $n\times n$ Bott matrix with $k$ nonzero rows, where $k\geq4$ is an even number.
Then
$$w_2(M(A))=\sum_{1\leq i<j\leq k}w_2(M(A_{ij})).$$ \end{lm}
\noindent {\bf Proof.} From (\ref{sw1}) and (\ref{wzor_y}) $$\begin{aligned} \omega_2(M(A))&=\sum_{l<r}y_ly_r\\ &=\sum_{l<r}\left[\left(\sum_{i<j}y_l^{ij}\right)\right]\left[\left(\sum_{i<j}y_r^{ij}\right)\right] =\sum_{l<r}\left(\sum_{i<j}y_l^{ij}y_r^{ij}\right)\\ &=\sum_{i<j}\left(\sum_{l<r}y_l^{ij}y_r^{ij}\right) =\sum_{i<j}\omega_2(M(A_{ij})). \end{aligned}$$ \hskip 142mm $\Box$
From the proofs of Lemma \ref{lemat1} and Lemma \ref{lemat2} we obtain a proof of the Main Theorem \ref{theolast}.
\noindent {\bf Proof of Theorem \ref{theolast}} Let us recall that a manifold $M$ has a Spin-structure if and only if $w_2(M)=0$. Assume first that, for each pair $1\leq i<j\leq k$, we have $w_2(M(A_{ij}))=0$. Then from Lemma \ref{lemat2} we have $$w_2(M(A))=\sum_{1\leq i<j\leq k}w_2(M(A_{ij}))=0,$$ so the real Bott manifold $M(A)$ has a Spin-structure.
Conversely, suppose that the manifold $M(A)$ admits a Spin-structure. Then $$0=w_2(M(A))=\sum_{1\leq i<j\leq k}w_2(M(A_{ij})).$$ The second Stiefel-Whitney classes of the manifolds $M(A_{ij})$ cannot cancel one another, so $$\forall_{1\leq i<j\leq k}\;w_2(M(A_{ij}))=0.$$ \hskip 142mm $\Box$
\begin{rem}
We do not know how to prove the main theorem for odd $k$.
On the other hand, we are not sure whether it can be formulated as a conjecture
in this case. \end{rem}
In the next section of our paper we concentrate on calculations of the Spin-structure on the manifolds $M(A_{ij})$.
\vskip 4mm \section{Existence of Spin-structure on manifolds $M(A_{ij})$}
From now on, let $A$ be a matrix of an orientable real Bott manifold $M(A)$ of dimension $n$ with two non-zero rows. From Lemma \ref{lemma12} we have that the number of entries 1, in each row, is an odd number, and we have the following three cases: \newline {\bf CASE I.} There are no columns with double entries 1. \newline{\bf CASE II.} The number of columns with double entries 1 is an odd number. \newline {\bf CASE III.} The number of columns with double entries 1 is an even number. \vskip5mm \noindent We give conditions for the existence of the Spin-structure on $M(A_{ij})$. In the remainder of the paper we adopt the notation $0_{p}=(\underbrace{0,\ldots,0}_{p\text{ - times}})$. By definition, the rows numbered $i$ and $j$ correspond to generators $s_i,s_j$ which define a finite index abelian subgroup $H\subset\pi_1(M(A))$ (see \cite{GK}).
\begin{theo}\label{theo1}
Let $A$ be a matrix of an orientable real Bott manifold $M(A)$ from the above case I. If there exist $1\leq i<j\leq n$ such that
\noindent {\bf 1.}
$$\begin{matrix}
A_{i,\ast} &=(0_{i_1},a_{i,i_{1}+1},\ldots,a_{i,i_{1}+2k},0_{i_{2l}},0_{i_{p}})\\
A_{j,\ast} &=(0_{i_1},0_{i_{2k}},a_{j,i_{1}+2k+1},\ldots,a_{j,i_{1}+2k+2l},0_{i_{p}}),\end{matrix}$$
where $a_{i,i_{1}+1} = \ldots = a_{i,i_{1}+2k} = 1, a_{i,m} = 0$ for $m\notin\{i_1,\ldots,i_{1}+2k\}$,
$a_{j,i_{1}+2k+1}= \ldots = a_{j,i_{1}+2k+2l} = 1, a_{j,r} = 0$ for
$r\notin\{i_{1}+2k+1,\ldots,i_{1}+2k+2l\}$.
\newline Then $M(A)$ admits the Spin-structure if and only if either $l$ is an even number or $l$ is an odd number and
$j\notin\{i_1+1,\ldots,i_{1}+2k\}$.
\noindent
{\bf 2.}
$$\begin{matrix}
A_{i,\ast} &=(0_{i_{1}},0_{i_{2k}},a_{i,i_{2k}+1},\ldots,a_{i,i_{2k}+2l},0_{i_{p}})\\
A_{j,\ast} &=(0_{i_{1}},a_{j,i_{1}+1},\ldots,a_{j,i_{1}+2k},0_{i_{2l}},0_{i_{p}}),\end{matrix}$$
where
$a_{j,i_{1}+1} = \ldots = a_{j,i_{1}+2k} = 1, a_{j,m} = 0$ for $m\notin\{i_1,\ldots,i_{1}+2k\}$,
$a_{i,i_{2k}+1} = \ldots = a_{i,i_{2k}+2l} = 1, a_{i,r} = 0$ for
$r\notin\{i_{2k}+1,\ldots,i_{2k}+2l\}$,
then $M(A)$ has the Spin-structure. \end{theo}
\noindent {\bf Proof.} {\bf 1.} From (\ref{y}) we have $$\begin{aligned} y_{i_1+1}&=\ldots=y_{i_{1}+2k}=x_i,\\ y_{i_{1}+2k+1}&=\ldots=y_{i_{1}+2k+2l}=x_j.\end{aligned}$$ Using (\ref{sw1}) and $x_i^2=x_i\sum_{j=1}^na_{ji}x_j$ we get $$\begin{aligned}w_2(M(A))&=k(2k-1)x_i^2+4klx_ix_j+l(2l-1)x_j^2\\ &=k(2k-1)x_i^2+l(2l-1)x_j^2=l(2l-1)x_j^2=lx_j^2. \end{aligned}$$ Summing up, we have to consider the following cases \begin{enumerate}
\item if $l=2b$, then $w_2(M(A))=2bx_j^2=0$. Hence $M(A)$ has a Spin-structure,
\item if $l=2b+1$, then
$$\begin{aligned}
w_2(M(A))&=(2b+1)x_j^2=x_j^2\\
&=\begin{cases}0,&\text{if }j\notin\{i_1+1,\ldots,i_{1}+2k\},M(A)\text{ has a Spin-structure,}\\x_ix_j,&\text{if }j\in\{i_1+1,\ldots,i_{1}+2k\},M(A)\text{ has no Spin-structure}.\end{cases}
\end{aligned}$$ \end{enumerate} {\bf 2.} From {\rm(}\ref{y}{\rm)} $$\begin{aligned} y_{i_1+1}&=\ldots=y_{i_{1}+2k}=x_j\\ y_{i_{1}+2k+1}&=\ldots=y_{i_{1}+2k+2l}=x_i. \end{aligned}$$ Moreover, from (\ref{sw1}) and since $i_1>j>i$ $$\begin{aligned} w_2(M(A))&=k(2k-1)x_j^2+4klx_ix_j+l(2l-1)x_i^2\\ &=k(2k-1)\underbrace{x_j^2}_{=0}+l(2l-1)\underbrace{x_i^2}_{=0}=0.\end{aligned}$$ Hence $M(A)$ has the Spin-structure.
\hskip 142mm $\Box$
\begin{theo}\label{theo2} Let $A$ be a matrix of an orientable real Bott manifold $M(A)$ from the above case II. If there exist $1\leq i<j\leq n$ such that
\newline{\bf 1.}
$$\begin{footnotesize}\begin{matrix}
A_{i,\ast} = (0_{i_1},a_{i,i_{1}+1},\ldots,a_{i,i_{1}+2k},a_{i,i_{1}+2k+1},\ldots,a_{i,i_{1}+2k+2l},0_{i_{2m}},0_{i_p})\\
A_{j,\ast} = (0_{i_1},0_{i_{2k}},a_{j,i_{1}+2k+1},\ldots,a_{j,i_{1}+2k+2l},a_{j,i_{1}+2k+2l+1},\ldots,a_{j,i_{1}+2k+2l+2m},0_{i_p}),\end{matrix}\end{footnotesize}$$
where
$a_{i,i_{1}+1} = \ldots = a_{i,i_{1}+2k} =a_{i,i_{1}+2k+1}=\ldots=a_{i,i_{1}+2k+2l}=1, a_{i,r} = 0$ for $r\notin\{i_1+1,\ldots,i_{1}+2k+2l\}$,
$a_{j,i_{1}+2k+1} = \ldots= a_{j,i_{1}+2k+2l+2m} = 1, a_{j,s} = 0$ for
$s\notin\{i_{1}+2k+1,\ldots,i_{1}+2k+2l+2m\}$.
Then $M(A)$ has the Spin-structure if and only if either $l$ and $m$ are numbers of the same parity or $l$ and $m$ are numbers of different parity and
$j\notin\{i_1+1,\ldots,i_{1}+2k\}$.
\noindent {\bf 2.}
$$\begin{footnotesize}\begin{matrix} A_{i,\ast} &=(0_{i_1},0_{i_{1}+2k},a_{i,i_{1}+2k+1},\ldots,a_{i,i_{1}+2k+2l},a_{i,i_{1}+2k+2l+1},\ldots,a_{i,i_{1}+2k+2l+2m},0_{i_p}) ,\\
A_{j,\ast} &=(0_{i_1},a_{j,i_{1}+1},\ldots,a_{j,i_{1}+2k},a_{j,i_{1}+2k+1},\ldots,a_{j,i_{1}+2k+2l},0_{i_{2m}},0_{i_p}) \end{matrix}\end{footnotesize}$$
where
$a_{j,i_{1}+1}= \ldots = a_{j,i_{1}+2k}=a_{j,i_{1}+2k+1}=\ldots=a_{j,i_{1}+2k+2l} = 1, a_{j,m} = 0$ for $m\notin\{i_1+1,\ldots,i_{1}+2k+2l\}$,
$a_{i,i_{1}+2k+1}= \ldots = a_{i,i_{1}+2k+2l}=a_{i,i_{1}+2k+2l+1}=\ldots=a_{i,i_{1}+2k+2l+2m} = 1, a_{i,r} = 0$ for
$r\notin\{i_{1}+2k+1,\ldots,i_{1}+2k+2l+2m\}$, then $M(A)$ has the Spin-structure. \end{theo}
\noindent{\bf Proof.} {\bf 1.} From (\ref{y}) we have $$\begin{aligned} y_{i_1+1}&=\ldots=y_{i_{1}+2k}=x_i,\\ y_{i_{1}+2k+1}&=\ldots=y_{i_{1}+2k+2l}=x_i+x_j\\ y_{i_{1}+2k+2l+1}&=\ldots=y_{i_{1}+2k+2l+2m}=x_j.\end{aligned}$$ From (\ref{sw1}) and $x_i^2=x_i\sum_{j=1}^na_{ji}x_j$ we get $$\begin{aligned}w_2(M(A))&=k(2k-1)x_i^2+4klx_i(x_i+x_j)+l(2l-1)(x_i+x_j)^2+m(2m-1)x_j^2\\ &=l(2l-1)x_j^2+m(2m-1)x_j^2=(l+m)x_j^2. \end{aligned}$$ We have to consider the following cases: \begin{enumerate}
\item If $l+m$ is an even number then $w_2(M(A))=0.$ Hence $M(A)$ has a Spin-structure.
\item If $l+m$ is an odd number then
$$\begin{aligned}
w_2(M(A))&=x_j^2\\
&=\begin{cases}0,&\text{if }j\notin\{i_1+1,\ldots,i_{1}+2k\},M(A)\text{ has a Spin-structure}\\x_ix_j,&\text{if }j\in\{i_1+1,\ldots,i_{1}+2k\},M(A)\text{ has no Spin-structure}.\end{cases}
\end{aligned}$$ \end{enumerate} {\bf 2.} Using (\ref{y}) we get $$\begin{aligned} y_{i_1+1}&=\ldots=y_{i_{1}+2k}=x_j\\ y_{i_{1}+2k+1}&=\ldots=y_{i_{1}+2k+2l}=x_i+x_j\\ y_{i_{1}+2k+2l+1}&=\ldots=y_{i_{1}+2k+2l+2m}=x_i. \end{aligned}$$ Moreover, from {\rm(}\ref{sw1}{\rm)} and since $i_1>j>i$ $$\begin{aligned} &w_2(M(A))=k(2k-1)x_j^2+4klx_j(x_i+x_j)+4kmx_ix_j\\ &+4lmx_i(x_i+x_j)+l(2l-1)(x_i+x_j)^2+m(2m-1)x_i^2\\ &=k(2k-1)\underbrace{x_j^2}_{=0}+l(2l-1)\underbrace{x_i^2}_{=0}+l(2l-1)\underbrace{x_j^2}_{=0}+m(2m-1)\underbrace{x_i^2}_{=0}=0. \end{aligned}$$ Hence $M(A)$ has a Spin-structure.
\hskip 142mm $\Box$
\begin{theo}\label{theo3} Let $A$ be a matrix of an orientable real Bott manifold $M(A)$ from the above case III. If there exist $1\leq i<j\leq n$ such that
\newline {\bf 1.}
$$\begin{footnotesize}\begin{matrix} A_{i,\ast} &= (0_{i_1},a_{i,i_{1}+1},\ldots,a_{i,i_{1}+2k+1},a_{i,i_{1}+2k+2},\ldots,a_{i,i_{1}+2k+2l+2},0_{i_{2m+1}},0_{i_p})\\
A_{j,\ast} &= (0_{i_1},0_{i_{2k+1}},a_{j,i_{2k+2}},\ldots,a_{j,i_{1}+2k+2l+2},a_{j,i_{1}+2k+2l+3},\ldots,a_{j,i_{1}+2k+2l+2m+3},0_{i_p}),\end{matrix}\end{footnotesize}$$
where
$a_{i,i_{1}+1} = \ldots = a_{i,i_{1}+2k} =\ldots=a_{i,i_{1}+2k+2l+2}=1, a_{i,r} = 0$ for $r\notin\{i_1+1,\ldots,i_{1}+2k+2l+2\}$,
$a_{j,i_{1}+2k+2} = \ldots= a_{j,i_{1}+2k+2l+2m+3} = 1, a_{j,s} = 0$ for
$s\notin\{i_{1}+2k+2,\ldots,i_{1}+2k+2l+2m+3\}$.
Then $M(A)$ admits the Spin-structure if and only if $l$ and $m$ are numbers of the same parity and
$j\in\{i_1+1,\ldots,i_{1}+2k+2\}$ .
{\bf 2.}
$$\begin{footnotesize}\begin{matrix} A_{i,\ast} &=(0_{i_1},0_{i_{2l+1}},a_{i,i_{1}+2k+2},\ldots,a_{i,i_{1}+2k+2l+2},a_{i,i_{1}+2k+2l+3},\ldots,a_{i,i_{1}+2k+2l+2m+3},0_{i_p})\\
A_{j,\ast} &=(0_{i_1},a_{j,i_{1}+1},\ldots,a_{j,i_{1}+2k+1},a_{j,i_{1}+2k+2},\ldots,a_{j,i_{1}+2k+2l+2},0_{i_{2m}},0_{i_p}) \end{matrix}\end{footnotesize}$$
where
$a_{j,i_{1}+1}= \ldots = a_{j,i_{1}+2k}=a_{j,i_{1}+2k+1}=\ldots=a_{j,i_{1}+2k+2l+2} = 1, a_{j,m} = 0$ for $m\notin\{i_1+1,\ldots,i_{1}+2k+2l+2\}$,
$a_{i,i_{1}+2k+2}= \ldots = a_{i,i_{1}+2k+2l+2}=a_{i,i_{1}+2k+2l+3}=\ldots=a_{i,i_{1}+2k+2l+2m+3} = 1, a_{i,r} = 0$ for
$r\notin\{i_{1}+2k+2,\ldots,i_{1}+2k+2l+2m+3\}$.
Then $M(A)$ has no Spin-structure. \end{theo}
\noindent {\bf Proof.} {\bf 1.} From (\ref{y}) $$\begin{aligned} y_{i_1+1}&=\ldots=y_{i_{1}+2k+1}=x_i,\\ y_{i_{1}+2k+2}&=\ldots=y_{i_{1}+2k+2l+2}=x_i+x_j\\ y_{i_{1}+2k+2l+3}&=\ldots=y_{i_{1}+2k+2l+2m+3}=x_j.\end{aligned}$$ From (\ref{sw1}) and $x_i^2=x_i\sum_{j=1}^na_{ji}x_j$ we obtain $$\begin{aligned}w_2(M(A))&=k(2k+1)x_i^2+(2k+1)(2l+1)x_i(x_i+x_j)+(2k+1)(2m+1)x_ix_j\\ &+l(2l+1)(x_i+x_j)^2+(2l+1)(2m+1)x_j(x_i+x_j)+m(2m+1)x_j^2\\ &=(l+m+1)x_j^2+(2l+1)(2m+1)x_ix_j=(l+m+1)x_j^2+x_ix_j. \end{aligned}$$ Now, if $l$ and $m$ are numbers of the same parity, we have $$\begin{aligned}&w_2(M(A))=x_ix_j+x_j^2\\&= \begin{cases}x_ix_j,&\text{ if } j\notin\{i_1+1,\ldots,i_{1}+2k+2\}, M(A)\text { has no Spin-structure},\\ 0,&\text{ if } j\in\{i_1+1,\ldots,i_{1}+2k+2\}, M(A)\text { has a Spin-structure}.\end{cases} \end{aligned}$$ If $l$ and $m$ are numbers of different parity, then $w_2(M(A))=x_ix_j\neq0$, so $M(A)$ has no Spin-structure.
{\bf 2.} From {\rm(}\ref{y}{\rm)} $$\begin{aligned} y_{i_1+1}&=\ldots=y_{i_{1}+2k+1}=x_j\\ y_{i_{1}+2k+2}&=\ldots=y_{i_{1}+2k+2l+2}=x_i+x_j\\ y_{i_{1}+2k+2l+3}&=\ldots=y_{i_{1}+2k+2l+2m+3}=x_i. \end{aligned}$$ From (\ref{sw1}) and since $i_1>j>i$ we get $$\begin{aligned} w_2(M(A))&=k(2k+1)x_j^2+m(2m+1)x_i^2+(2k+1)(2l+1)x_j(x_i+x_j)\\ &+(2k+1)(2m+1)x_ix_j+l(2l+1)(x_i+x_j)^2\\ &+(2l+1)(2m+1)x_i(x_i+x_j)+m(2m-1)x_i^2\\ &=k(2k+1)\underbrace{x_j^2}_{=0}+l(2l+1)\underbrace{(x_i+x_j)^2}_{=0}+m(2m+1)\underbrace{x_i^2}_{=0}\\ &+x_j(x_i+x_j)+x_ix_j+x_i(x_i+x_j)=x_ix_j\ne0, \end{aligned}$$ so $M(A)$ has no Spin-structure.
\hskip 142mm $\Box$
Now, we give examples which illustrate Theorems \ref{theo1}--\ref{theo3}. \begin{ex}
{\bf 1.} Let
$$A=\left[\begin{matrix}
0&0&0&0&0&0&0&0\\
0&0&1&1&0&0&0&0\\
0&0&0&0&1&1&1&1\\
0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0
\end{matrix}\right].$$
Here $2l=4\Rightarrow l=2$. Hence from Theorem \ref{theo1}, part 1.1, the manifold $M(A)$ has the Spin-structure.
\newline{\bf 2.}
$$A=\left[\begin{matrix}
0&0&0&0&0&0\\
0&0&1&1&0&0\\
0&0&0&0&1&1\\
0&0&0&0&0&0\\
0&0&0&0&0&0\\
0&0&0&0&0&0
\end{matrix}\right].$$
Here
$
l=1,\ \{i_1+1,\ldots,i_1+2k\}=\{3,4\},\ j=3\in\{3,4\}.$
Hence, from Theorem \ref{theo1}, part 1.2, the real Bott manifold $M(A)$ has no Spin-structure.
\newline{\bf 3.}
$$A=\left[\begin{matrix}
0&1&1&1&1&0&0&0&0\\
0&0&0&1&1&1&1&1&1\\
0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0
\end{matrix}\right].$$
From Theorem \ref{theo2}, part 1.2, and since $l=1, m=2, \{i_1+1,\ldots,i_{1}+2k\}=\{2,3\}, j=2\in\{2,3\}$,
the real Bott manifold has no Spin-structure.
\newline{\bf 4.}
$$A=\left[\begin{array}{ccccccccccccc}
0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&1&1&1&1&1&1&0&0&0&0&0\\
0&0&0&0&0&1&1&1&1&1&1&1&1\\
0&0&0&0&0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0&0&0&0
\end{array}\right].$$
In this case $l=1$, $m=2$,
and from Theorem \ref{theo3} we have that $M(A)$ has no Spin-structure. \end{ex}
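The $w_2$ computations in these examples can be automated. The sketch below (an illustration, not from the paper) encodes a $\mathbb{Z}_2$-polynomial as a set of square-free monomials, and uses $y_l=\sum_k a_{kl}x_k$, the formula $w_2=\sum_{l<r}y_ly_r$ and the relation $x_i^2=x_i\sum_k a_{ki}x_k$ from the earlier sections:

```python
def w2_monomials(A):
    """w_2(M(A)) over Z/2 as a set of square-free monomials {i, k};
    M(A) has a Spin-structure exactly when the result is empty."""
    n = len(A)
    # support of y_l = sum_k a_{kl} x_k  (0-indexed rows/columns)
    y = [[k for k in range(n) if A[k][l]] for l in range(n)]
    w2 = set()  # symmetric difference acts as addition modulo 2
    for l in range(n):
        for r in range(l + 1, n):
            for i in y[l]:
                for k in y[r]:
                    if i == k:
                        # reduce the square: x_i^2 = sum_m a_{mi} x_i x_m
                        for m in range(n):
                            if A[m][i]:
                                w2 ^= {frozenset((i, m))}
                    else:
                        w2 ^= {frozenset((i, k))}
    return w2

# The first two example matrices above, 0-indexed:
# example 1 (8x8) has a Spin-structure, example 2 (6x6) does not.
A1 = [[0] * 8 for _ in range(8)]
A1[1][2] = A1[1][3] = 1
A1[2][4] = A1[2][5] = A1[2][6] = A1[2][7] = 1
A2 = [[0] * 6 for _ in range(6)]
A2[1][2] = A2[1][3] = 1
A2[2][4] = A2[2][5] = 1
```

For $A_1$ every monomial cancels modulo 2, while for $A_2$ the single pair of equal classes leaves the monomial $x_2x_3$, matching the conclusions of the first two examples.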
\vskip 2mm \noindent Maria Curie-Sk{\l}odowska University,\\ Institute of Mathematics\\ pl. Marii Curie-Sk{\l}odowskiej 1\\ 20-031 Lublin, Poland\\ E-mail: anna.gasior@poczta.umcs.lublin.pl
\end{document}
\begin{document}
\title[On Stable embeddability of partitions]{On stable embeddability of partitions} \author{Dongseok KIM} \address{Department of Mathematics \\ Kyungpook National University \\ Taegu 702-201 Korea } \email{dskim@math.ucdavis.edu, dongseok86@yahoo.co.kr} \thanks{} \author{Jaeun Lee} \address{Department of Mathematics\\ Yeungnam University\\ Kyongsan, 712-749, Korea } \email{julee@yu.ac.kr} \thanks{The first author was supported in part by KRF Grant M02-2004-000-20044-0. The second author was supported in part by Com$^2$Mac-KOSEF} \subjclass[2000]{Primary 05A17; Secondary 94A99} \begin{abstract} Several natural partial orders on integral partitions, such as the embeddability, the stable embeddability, the bulk embeddability and the supermajorization, arise in quantum computation, bin-packing and matrix analysis. We find the implications between these partial orders. For integral partitions whose entries are all powers of a fixed number $p$, we show that the embeddability is completely determined by the supermajorization order and we find an algorithm to determine the stable embeddability. \end{abstract}
\maketitle
\section{Introduction}
A \emph{partition} $\lambda$ is a finite nonincreasing sequence of positive real numbers, denoted by $\lambda=[\lambda_1, \lambda_2, \ldots, \lambda_n]$ where $\lambda_i\ge \lambda_j$ for all $i\le j$. $\lambda_i$ is called an \emph{entry} of $\lambda$. A partition $\lambda$ is an \emph{integral} partition if all $\lambda_i \in \mathbb{N}$. Throughout the article, we assume all partitions are integral unless stated otherwise. Let $\lambda=[\lambda_1, \lambda_2, \ldots, \lambda_m]$, $\mu=[\mu_1,\mu_2, \ldots, \mu_n]$ be two partitions. We can naturally define an \emph{addition} of two partitions, $\lambda + \mu$ by a reordered juxtaposition, a \emph{product} of two partitions, $\lambda\times\mu$ by $[\lambda_i\cdot \mu_j]$ and a scalar multiplication, $\alpha\lambda$ by $[\alpha\cdot \lambda_i]$. We denote $\overset{n}{\overbrace{\lambda\times\lambda\times \ldots\times\lambda}}$ by $\lambda^{\times n}$. We recall definitions of partial orders on partitions. For more terms and notations, we refer to \cite{bhatia:gtm, Stanley:enumerative2}. A partition $\lambda$ \emph{supermajorizes} a partition $\mu$, or $\lambda \succcurlyeq_S \mu$, if for every $x \in \mathbb{N}$
$$\sum_{\lambda_i\ge x} \lambda_i \ge \sum_{\mu_j\ge x} \mu_j.$$ A partition $\lambda$ \emph{embeds} into $\mu$ if there exists a map $\varphi : \{1, 2, \ldots, m\} \rightarrow \{1, 2, \ldots, n\}$ such that
$$\sum_{i\in\varphi^{-1}(j)} \lambda_i \le \mu_j$$ for all $j$, denoted by $\lambda\hookrightarrow\mu$. This embedding problem can be interpreted as a \emph{bin-packing problem} by replacing the entries of a partition $\lambda$ by the sizes of the blocks and the entries of a partition $\mu$ by the sizes of the bins. It is well known that the question of whether $\lambda$ embeds into $\mu$ is computable but NP-hard.
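Both orders are directly computable from the definitions. The following sketch (illustrative helper names, not from the paper) checks supermajorization at every relevant threshold and decides embeddability by exhaustive search over assignment maps $\varphi$, which is only feasible for very small partitions since the problem is NP-hard:

```python
from itertools import product

def supermajorizes(lam, mu):
    """lam >=_S mu: for every threshold x, the entries of lam that are
    at least x sum to at least the corresponding sum for mu.  It suffices
    to test x at entry values, where both step functions change."""
    for x in set(lam) | set(mu):
        if sum(a for a in lam if a >= x) < sum(b for b in mu if b >= x):
            return False
    return True

def embeds(lam, mu):
    """Brute-force test of lam -> mu: try every map phi from entries of
    lam to bins of mu and check the resulting load of each bin."""
    for phi in product(range(len(mu)), repeat=len(lam)):
        loads = [0] * len(mu)
        for i, j in enumerate(phi):
            loads[j] += lam[i]
        if all(loads[j] <= mu[j] for j in range(len(mu))):
            return True
    return False
```

For instance, $[3,3,3]$ supermajorizes $[2,2,2,2]$, yet $[2,2,2,2]$ does not embed into $[3,3,3]$, since each bin of size $3$ holds at most one block of size $2$.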
Kuperberg found an interesting embeddability, $\lambda$ \emph{bulk-embeds} into $\mu$, or $\lambda {\overset{b}\hookrightarrow} \mu$, if for every rational $\epsilon > 0$, there exists an $N$ such that $\lambda^{\times N} \hookrightarrow \mu^{\times N(1+\epsilon)}$~\cite{Kuperberg:hybrid}. He showed the following theorem.
\begin{thm}\rm{~\cite{Kuperberg:hybrid}} Let $\lambda$ and $\mu$ be two partitions. Then $\lambda {\overset{b}\hookrightarrow} \mu$ if and only if
$$||\lambda||_p \le ||\mu||_p$$ for all $p \in [1,\infty]$. \label{th:embed} \end{thm}
He also showed the following implications,
\begin{align} \lambda \hookrightarrow \mu \Longrightarrow\lambda \preccurlyeq_S \mu \Longrightarrow \lambda {\overset{b}\hookrightarrow} \mu, \nonumber\\ \lambda{\overset{b}\hookrightarrow} \mu \not\Longrightarrow\lambda \preccurlyeq_S \mu \not\Longrightarrow \lambda\hookrightarrow\mu. \label{kup} \end{align}
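The norm condition of Theorem~\ref{th:embed} can be probed numerically. This is a sketch only: a finite sample of exponents cannot certify the inequality for every $p\in[1,\infty]$, but it quickly exposes failures:

```python
def lp_norm(lam, p):
    """The l^p norm of a partition; p = float('inf') gives the largest entry."""
    if p == float('inf'):
        return float(max(lam))
    return sum(a ** p for a in lam) ** (1.0 / p)

def bulk_embed_evidence(lam, mu, grid=200):
    """Sample ||lam||_p <= ||mu||_p on a grid of exponents in [1, 21)
    together with p = infinity.  Numerical evidence, not a proof."""
    ps = [1 + 0.1 * k for k in range(grid)] + [float('inf')]
    return all(lp_norm(lam, p) <= lp_norm(mu, p) + 1e-9 for p in ps)
```

For the partitions $\lambda_2$ and $\mu_3$ of Example~\ref{counterexample} below, the sampled inequality holds, with equality near $p=\log_2(1+\sqrt5)$, consistent with $\lambda_2{\overset{b}\hookrightarrow}\mu_3$.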
One can consider a partition as the capacity of a quantum memory \cite{NV:majorization}. Kuperberg introduced a stable embeddability in the presence of an auxiliary memory \cite{Kuperberg:hybrid}. A partition $\lambda$ \emph{stably embeds} into a partition $\mu$ if there exists a partition $\nu$ such that $\lambda\times\nu\hookrightarrow\mu\times\nu$, denoted by $\lambda{\overset{s}\hookrightarrow} \mu$. He then asked about the relation between the stable embeddability and the supermajorization order. We answer the question and compare these embeddabilities in section \ref{compa}. A complete classification of the stable embeddability is unknown. Since the sizes of classical memories are all powers of $2$, it is natural to study the case where all entries of the partitions are powers of a fixed positive integer $p$. For these partitions, we find that the embeddability is completely determined by the supermajorization order. Also we find an algorithm to determine the stable embeddability in section~\ref{stable}.
\section{Comparison of embeddabilities}
For partitions $\lambda, \mu$, we establish the following diagram of implications between these embeddabilities.
$$ \begin{matrix} \lambda\hookrightarrow\mu & \Longrightarrow & \lambda\preccurlyeq_S\mu \\ \Downarrow & & \Downarrow \\ \lambda{\overset{s}\hookrightarrow}\mu & \Longrightarrow & \lambda{\overset{b}\hookrightarrow}\mu \end{matrix} $$ The converses of all implications are false. We provide counterexamples in Example~\ref{counterexample}. Moreover, there is no relation between the stable embeddability and the supermajorization order, which addresses the question raised in~\cite{Kuperberg:hybrid}. For these counterexamples, we need to show a few facts about these embeddabilities. One can see that if
$\lambda\hookrightarrow\mu$, then $||\lambda||_p\le ||\mu||_p$ for all $p\in [1,\infty].$
\begin{thm} Let $\lambda, \mu$ be partitions. If $\lambda\neq\mu$
and $\lambda \hookrightarrow\mu$, then $||\lambda||_p < ||\mu||_p$ for all $1<p<\infty$. \label{st} \end{thm} \begin{proof} Let $$\lambda=[a_1, a_2, \ldots, a_l],~~ \mu = [b_1, b_2, \ldots, b_m].$$
We will prove it by contradiction. Suppose $\lambda\hookrightarrow \mu$,
$\lambda\neq\mu$ and $||\lambda||_p = ||\mu||_p$ for some $1<p<\infty$. Then there exists a map $\varphi : \{1, 2, \ldots, l\} \rightarrow \{1, 2, \ldots, m\}$ presenting the embedding. We divide cases by the sizes of $l, m$. If $l>m$, then there exist $i_1, i_2$ and $j$ such that $\{i_1, i_2\}\subset \varphi^{-1}(j)$. Since $$\alpha^p+\beta^p < (\alpha+\beta)^p$$ for all $p> 1$ and nonzero $\alpha, \beta$, we have
\begin{align} a_{i_1} +a_{i_2} \le b_j \Longrightarrow a_{i_1}^p+a_{i_2}^p < b_j^p
\Longrightarrow ||\lambda||_p^p < ||\mu||_p^p . \label{inject} \end{align}
If $l<m$, then there is $k$ such that $\varphi^{-1}(k)=\emptyset$ and hence
\begin{align} \sum_i (a_{i})^p \le (\sum_j (b_j)^p) - (b_k)^p \Longrightarrow
||\lambda||_p^p < ||\mu||_p^p .\label{surject} \end{align}
If $l=m$, then there exists $j$ such that $a_k=b_k$ for all $k<j$ and $a_j\neq b_j$ because $\lambda\neq\mu$. Obviously we know $a_j < b_j$. Since $\lambda\hookrightarrow \mu,$ either two or more boxes embed into the box of size $b_j$ or a part of the box of size $b_j$ has not been used. If two or more boxes of $\lambda$ embed into the box of size $b_j$, then we find a contradiction by equation~\ref{inject}. If a part of the box of size $b_j$ has not been used, then we find a contradiction by equation~\ref{surject}. \end{proof}
The following corollary shows the essentiality of $\epsilon$ in Theorem~\ref{th:embed}.
\begin{cor} Let $\lambda, \mu$ be partitions. If $||\lambda||_p = ||\mu||_p$ for some $1<p<\infty$ and $\lambda\neq \mu$, then $\lambda^{\times n}{\not\hookrightarrow} \mu^{\times n}$ for all $n$. \label{strict} \end{cor}
\begin{proof} Suppose $\lambda^{\times n}\hookrightarrow \mu^{\times n}$ for some $n$. Since $\lambda\neq \mu$, we find $\lambda^{\times n}\neq \mu^{\times n}$. By Theorem \ref{st} if $\lambda^{\times n}\neq\mu^{\times n}$ and $\lambda^{\times n}\hookrightarrow \mu^{\times n}$ for some $n$, then
$||\lambda^{\times n}||_p < ||\mu^{\times n}||_p$ for all $1<p<\infty$. But one can observe that for any partition $\lambda$,
$$||\lambda^{\times n}||_p=(||\lambda||_p)^n.$$ Thus we find a contradiction that for all $1<p<\infty$,
$$||\lambda||_p < ||\mu||_p.$$ \end{proof}
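The identity $||\lambda^{\times n}||_p=(||\lambda||_p)^n$ used above is easy to confirm numerically, since the entries of $\lambda^{\times n}$ are exactly the $n$-fold products of entries of $\lambda$ (an illustrative check, not part of the paper):

```python
from itertools import product
from math import prod, isclose

def partition_power(lam, n):
    """The n-fold product lam^{x n}: all entrywise products, sorted."""
    return sorted((prod(t) for t in product(lam, repeat=n)), reverse=True)

def lp_norm(lam, p):
    """The l^p norm of a partition for finite p."""
    return sum(a ** p for a in lam) ** (1.0 / p)
```

For example, $[2,1]^{\times 2}=[4,2,2,1]$, and the $\ell^2$ norm of $[2,1]^{\times 3}$ agrees with $(\sqrt5)^3$.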
\begin{cor} Let $\lambda, \mu$ be two partitions. If $||\lambda||_p = ||\mu||_p$ for some $1<p<\infty$ and $\lambda\neq \mu$, then $\lambda{\overset{s}{\not\hookrightarrow}} \mu$. \label{cor:proper} \end{cor} \begin{proof} Suppose $\lambda{\overset{s}\hookrightarrow} \mu$. There exists a partition $\nu$ such that $\lambda\times \nu \hookrightarrow \mu\times\nu$. Since $\lambda\times \nu \neq \mu\times\nu$, by Theorem~\ref{st}, for all $1<p<\infty$
$$||\lambda\times \nu||_p < ||\mu\times\nu||_p.$$
One can easily see that $||\lambda||_p = ||\mu||_p$ for some $1<p<\infty$ implies that for the same $p$,
$$||\lambda\times \nu||_p = ||\lambda||_p||\nu||_p = ||\mu||_p||\nu||_p =||\mu\times\nu||_p.$$ Therefore, $\lambda{\overset{s}{\not\hookrightarrow}}\mu$. \end{proof}
\begin{exa} Let $\lambda_1=[2,2,2,2]$, $\lambda_2=[8, 8, 8, 8, 4, 4, 4, 4]$, $\lambda_3=[4,2,2]$, $\mu_1=[4, \overset{8}{\overbrace{1, 1, \ldots, 1}}]$, $\mu_2=[3,3,3]$, $\mu_3=[16,\overset{16}{\overbrace{2, 2, \ldots, 2}},\overset{16}{\overbrace{1, 1, \ldots, 1}}]$ and $\mu_4=[5,3]$. Then $\lambda_1{\overset{s}\hookrightarrow}\mu_1$ but $\lambda_1\not\preccurlyeq_S\mu_1$ and $\lambda_1{\not\hookrightarrow}\mu_1$. $\lambda_1$$\preccurlyeq_S$$\mu_2$ but $\lambda_1 {\not\hookrightarrow} \mu_2$. $\lambda_2{\overset{b}\hookrightarrow}\mu_3$ but $\lambda_2{\overset{s}{\not\hookrightarrow}}\mu_3$ and $\lambda_2\not\preccurlyeq_S\mu_3$. $\lambda_3\preccurlyeq_S\mu_4$ but $\lambda_3{\overset{s}{\not\hookrightarrow}}\mu_4$.\label{counterexample} \end{exa} \begin{proof} If we set $\nu=[2,1,1]$, we get $$\lambda_1\times\nu=[4,4,4,4,\overset{8}{\overbrace{2, 2, \ldots, 2}}]~~ \mathrm{and} ~~ \mu_1\times\nu=[8,4,4,\overset{8}{\overbrace{2, 2, \ldots, 2}}, \overset{16}{\overbrace{1, 1, \ldots, 1}}].$$ Then one can see that $\lambda_1{\overset{s}\hookrightarrow}\mu_1$. Since $$\sum_{(\lambda_1)_i\ge 2} (\lambda_1)_i =8 > 4 =\sum_{(\mu_1)_j\ge 2} (\mu_1)_j,$$ we see $\lambda_1\not\preccurlyeq_S\mu_1$. It is clear that $\lambda_1{\not\hookrightarrow}\mu_1$. To show $\lambda_2{\overset{b}\hookrightarrow}\mu_3$, one can check that
$$||\lambda_2||_p \le ||\mu_3||_p$$ for all $p\in[1,\infty]$ and the equality holds at $$p=\frac{\mathrm{Ln}(1 + \sqrt{5})}{\mathrm{Ln}(2)}>1.$$ Since $\lambda_2 \neq \mu_3$, we find that $\lambda_2 {\overset{s}{\not\hookrightarrow}} \mu_3$ by Corollary~\ref{cor:proper}. Clearly $\lambda_3\preccurlyeq_S\mu_4$. Suppose $\lambda_3{\overset{s}\hookrightarrow}\mu_4$; then there exists a partition $\nu=[\nu_1, \nu_2, \ldots, \nu_n]$ such that $\lambda_3\times\nu \hookrightarrow \mu_4\times\nu$. Let $p$ be the power of $2$ in the prime factorization of the greatest common divisor $(\nu_1, \nu_2, \ldots, \nu_n)$ of $\nu_1, \nu_2, \ldots, \nu_n$. First we look at the entries of $\lambda_3\times\nu$: all these entries are multiples of $2^{p+1}$. Since
$||\lambda_3\times\nu||_1=||\mu_4\times\nu||_1$, there will be no space in $\mu_4\times \nu$ which was not used in the embedding, i.e., for all $j$, $$\sum_{i\in\varphi^{-1}(j)} (\lambda_3\times\nu)_i = (\mu_4\times\nu)_j.$$ Therefore, all entries of $\mu_4\times\nu$ have to be multiples of $2^{p+1}$. Since all entries of $\mu_4$ are odd numbers, $\nu_i$ has to be a multiple of $2^{p+1}$ and so does the greatest common divisor of $\nu_1, \nu_2, \ldots, \nu_n$. This contradicts the choice of $p$. All other claims are straightforward. \end{proof} \label{compa}
\section{Stable embeddability}
Let $\lambda, \mu$ be two partitions. Let us consider the following algorithm, which is called a \emph{first fit} algorithm~\cite{AM:firstrandom}. Starting from $\lambda_1$, place it into any entry of $\mu$ in which it fits. Then repeat this step for $\lambda_2$ and so on. Usually this is not an efficient algorithm~\cite{johnson:bestvsfirst}. It is obvious that if the first fit algorithm works, then $\lambda\hookrightarrow \mu$. The converse is not true in general. But with some conditions on $\lambda$ we can show that it determines the embeddability of $\lambda$ into $\mu$.
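A sketch of the first fit procedure just described (entries are processed in nonincreasing order, which is how partitions are written):

```python
def first_fit(lam, mu):
    """Place each entry of lam, largest first, into the first bin of mu
    with enough remaining room.  Returns the list of chosen bin indices,
    or None when some entry does not fit."""
    remaining = list(mu)
    assignment = []
    for a in sorted(lam, reverse=True):
        for j, room in enumerate(remaining):
            if a <= room:
                remaining[j] -= a
                assignment.append(j)
                break
        else:  # no bin had room for entry a
            return None
    return assignment
```

Success always certifies $\lambda\hookrightarrow\mu$; failure is inconclusive in general, but under the divisibility hypothesis of the next theorem it is conclusive.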
\begin{thm} Let $\lambda=[\lambda_1,\lambda_2, \ldots, \lambda_s], \mu =[\mu_1$, $\mu_2$, $\ldots $, $\mu_t]$ be partitions with
$\lambda_i|\lambda_j$ for all $i\ge j$. If $\lambda\hookrightarrow \mu$, then the first fit algorithm works. \label{dividable} \end{thm} \begin{proof} Let us induct on $s$. It is trivial for $s=1$ because this is the first step of the algorithm. Suppose this is true for $s=n$, we look at the case $\lambda=[\lambda_1,\lambda_2, \ldots,\lambda_{n+1}]$. Since $\lambda\hookrightarrow \mu$, there is a map $\varphi : \{ 1, 2, \ldots , n+1\} \rightarrow \{1, 2, \ldots, t\}$ which represents the embedding of $\lambda$ into $\mu$, let us denote $\varphi(1)=j$. Then we will construct another embedding representing map $\psi$ after we decide where we put $\lambda_1$, say $\mu_k$, $i.e.,$ $\psi (1)=k$. To construct $\psi$, let us compare $\varphi(1)$ and $\psi(1)$. If $\varphi(1)=\psi(1)$, we pick $\psi = \varphi$. If $\varphi(1)=j\neq k=\psi(1)$, first we need to prove that there exists a subset $P$ of $\varphi^{-1}(k) =\{\lambda_{i_1}, \lambda_{i_2}, \ldots , \lambda_{i_l}\}$ such that $$\sum_{j\in P} \lambda_j \le \lambda_1 \hskip 1cm \mathrm{and} \hskip 1cm \sum_{j\in P^c} \lambda_j \le \mu_k-\lambda_1.$$ First we divide cases by the sizes of $\lambda_1$ and $\lambda_2$. If $\lambda_1=\lambda_2$, then we pick $P=\{2\}$. If $\lambda_1 \neq\lambda_2$, then $\lambda_1>\lambda_2$. If $$\sum_{j=2}^{l} \lambda_j \le \lambda_1,$$ we can pick $P=\{2, 3, \ldots , l\}$. Otherwise, there exist an integer $m$ such that $$ \sum_{j=2}^{m} \lambda_j\le \lambda_1 < \sum_{j=2}^{m+1}\lambda_j = (\sum_{j=2}^{m}\lambda_j) + \lambda_{m+1}.$$ If we divide by $\lambda_{m+1}$, we have $$ \sum_{j=2}^{m} \frac{\lambda_j}{\lambda_{m+1}}\le \frac{\lambda_1}{\lambda_{m+1}} < \sum_{j=2}^{m}\frac{\lambda_j}{\lambda_{m+1}} + 1.$$ Since
$\lambda_i|\lambda_j$ for all $i\ge j$, all these three numbers are integers, so the first two have to be the same. We choose $P=\{2, 3, \ldots , m\}$. Once we have such a $P$, we can define $$\psi(i) = \left\{ \begin{array}{cl}
k & ~~\mathrm{if}~~ i=1, \\
j & ~~\mathrm{if}~~ i \in \varphi^{-1}(k)\cap P, \\
\varphi (i) & ~~\mathrm{if}~~ i\not\in\varphi^{-1}(k)
\cup\{1\} ~~\mathrm{or}~~ i\in \varphi^{-1}(k)\cap P^{c}. \end{array} \right. $$
Then $\psi|_{\tilde\lambda}$ shows $\tilde\lambda=[\lambda_2,\ldots, \lambda_{n+1}] \hookrightarrow \tilde\mu=[\mu_1, \ldots, \mu_k-\lambda_1,\ldots, \mu_t]$. By the induction hypothesis, the first fit algorithm works. \end{proof}
Let $\mathcal{P}$ be the set of all partitions whose entries are all powers of a fixed number $p$. For these partitions, we can show that the supermajorization order completely determines the embeddability. Instead of the standard notation, we can use $$\lambda=[a_0, a_1, a_2, \ldots, a_s]_p$$ where $a_i$ is the number of entries $p^i$.
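Converting a partition in $\mathcal{P}$ into the $[a_0, a_1, \ldots, a_s]_p$ notation is mechanical; a small helper (a sketch with a hypothetical name, which requires every entry to be an exact power of $p$):

```python
from collections import Counter

def to_counts(lam, p):
    """[a_0, ..., a_s]_p notation: a_i counts the entries equal to p**i."""
    exps = []
    for entry in lam:
        e = 0
        while p ** e < entry:
            e += 1
        if p ** e != entry:
            raise ValueError(f"{entry} is not a power of {p}")
        exps.append(e)
    counts = Counter(exps)
    return [counts[i] for i in range(max(exps) + 1)]
```

For example, $[8,8,4,1]$ becomes $[1,0,1,2]_2$: one entry $2^0$, no entry $2^1$, one entry $2^2$ and two entries $2^3$.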
\begin{thm} Let $\lambda, \mu$ be partitions in $\mathcal{P}$. $\lambda\hookrightarrow\mu$ if and only if $\lambda\preccurlyeq_S \mu$. \label{superpp} \end{thm} \begin{proof} We only need to show that if $\lambda\preccurlyeq_S\mu$, then $\lambda\hookrightarrow\mu$, because of equation \ref{kup}. Suppose $\lambda\preccurlyeq_S\mu$. Let $$\lambda=[ a_0, a_1,a_2, \ldots, a_s]_p, ~~ \mu=[b_0, b_1, b_2, \ldots, b_t]_p.$$ Without loss of generality, we assume $a_s\neq 0\neq b_t$. Obviously $s\le t$. We induct on the number of the boxes of $\lambda$, say $k$. If $k=1$, then $a_s=1$ and $$ 1p^s=\lambda_{\ge p^s} \le \mu_{\ge p^s}=\sum_{j=s}^{t}b_jp^{j} $$ implies $\lambda\hookrightarrow\mu$. For nonzero $a_s$, we pick a box of size $p^s$ and put it into a box of size $p^t$ in $\mu$. Then for $\lambda$ we subtract $1$ from $a_s$ and for $\mu$, we subtract $1$ from $b_t$ and distribute the remainder $p^t-p^s$ in base $p$ into $\mu$. One can observe that all these numbers which have been distributed are bigger than or equal to $p^s$. Thus the resulting partitions still satisfy the supermajorization order. By the induction hypothesis, we find an embedding of $\lambda'=[a_0, a_1, \ldots, a_{s-1}, a_s-1]_p$ into $\mu'=[b_0', b_1', \ldots, b_{t-1}', b_t-1]_p$. But, it is easy to recover an embedding of $\lambda$ into $\mu$. \end{proof}
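The induction in this proof is effectively a greedy procedure: place the largest remaining box $p^s$ into a largest available bin $p^t$ and return the base-$p$ digits of $p^t-p^s$ to the pool of bins. A sketch in the counts notation (a hypothetical helper following the proof; by the theorem it decides $\lambda\hookrightarrow\mu$ for partitions in $\mathcal{P}$):

```python
def embeds_powers(lam_counts, mu_counts, p):
    """Greedy test of lam -> mu for partitions with all entries powers of p,
    given as counts [a_0, ..., a_s]_p.  Each step places one box p^s into a
    bin p^t with t maximal and splits the remainder p^t - p^s, whose base-p
    digits are p - 1 at positions s, ..., t-1, back into the bins."""
    lam = list(lam_counts)
    mu = list(mu_counts) + [0] * max(0, len(lam_counts) - len(mu_counts))
    while any(lam):
        s = max(i for i, a in enumerate(lam) if a > 0)
        fits = [j for j, b in enumerate(mu) if b > 0 and j >= s]
        if not fits:
            return False  # the largest box fits in no remaining bin
        t = max(fits)
        lam[s] -= 1
        mu[t] -= 1
        for i in range(s, t):  # p^t - p^s = (p-1) * (p^s + ... + p^{t-1})
            mu[i] += p - 1
    return True
```

Success certifies an embedding, and the proof's induction shows the method cannot fail when $\lambda\preccurlyeq_S\mu$.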
Now we look at the stable embeddability for partitions in $\mathcal{P}$.
\begin{thm} Let $\lambda, \mu$ be partitions in $\mathcal{P}$. If $\lambda{\overset{s}\hookrightarrow} \mu$, then there exists a partition $\nu$ in $\mathcal{P}$ such that $\lambda\times\nu \hookrightarrow \mu\times\nu$. \label{Cpower} \end{thm}
\begin{proof} Suppose $\lambda{\overset{s}\hookrightarrow} \mu$, then there is a partition $\nu$ such that $\lambda\times\nu \hookrightarrow \mu\times\nu$ and $\nu=[c_1, c_2, \ldots, c_k].$ We can uniquely rewrite $c_j$ in the base $p$ such as $$c_j= c_{j,0} p^0 + c_{j,1}p^1 + c_{j,2} p^{2} + \ldots + c_{j,l(j)} p^{l(j)}$$ where $c_{j,i}$ are nonnegative integers less than $p$ and $ c_{j,l(j)}\neq 0$. Using these expressions we can subdivide $\nu$ to get a refinement $$\tilde\nu=[\sum_{j} c_{j,0}, \sum_{j} c_{j,1}, \ldots , \sum_j c_{j,i}, \ldots, \sum_{j} c_{j, t}]_p$$ where the sum runs over all nonzero $c_{j,i}$ for each $i$. If the boxes $\sum[ c_{i_k}\times p^{j_k}]$ of $\lambda\otimes\nu$ were embedded into $[c_m\times p^{m'}]$ in $\mu\otimes\nu$, We can show that the refinement of $\sum[ c_{i_k}\times p^{j_k}]$ can be embedded in the refinement of $[c_m\times p^{m'}]$. Precisely if $$p^{j_1}c_{i_1}+ p^{j_2}c_{i_2} + \ldots + p^{j_n}c_{i_n} \le p^{m'}c_m$$ where $j_1\le j_2 \le \ldots \le j_n$, $c_{j_t}\neq 0$ and $$c_{i_t}=c_{i_t,0}p^{0} + c_{i_t,1}p^{1} + \ldots + c_{i_t,l(i_t)} p^{l(i_t)}$$ for all $t$, then $$\sum_{\alpha =1}^{n}\sum_{\beta =0}^{l(\beta)} [c_{i_{\alpha},\beta}\times p^{i_{\alpha}+\beta}]\hookrightarrow \sum_{\gamma}^{l(m)}[c_{m,\gamma}\times p^{m'+\gamma}].$$
First we look at the case, $n = 1$. If $p^{j_1}c_{i_1} \le p^{m'} c_m$, one can easily see that $$\sum_{\beta}[c_{i_1,\beta}\times p^{j_1+\beta}]\preccurlyeq_S \sum_{\gamma}[c_{m,\gamma}\times p^{m'+\gamma}]$$ because we are comparing two integers in base $p$. By Theorem~\ref{superpp}, $$\sum_{\beta}[c_{i_1,\beta}\times p^{j_1+\beta}]\hookrightarrow \sum_{\gamma}[c_{m,\gamma}\times p^{m'+\gamma}].$$
For the case $n>1$, we look at the integer $$\sum_{\alpha =1}^{n}\sum_{\beta =0}^{l(\beta)} c_{i_{\alpha},\beta}\times p^{i_{\alpha}+\beta}$$ as a sum of integers $$\sum_{\beta =0}^{l(\beta)} c_{i_{\alpha},\beta}\times p^{i_{\alpha}+\beta}$$ in base $p$. Then this returns to the case $n=1$. If we keep on tracking the addition, we can recover the embedding of $$\sum_{\alpha =1}^{n}\sum_{\beta =0}^{l(\beta)} [c_{i_{\alpha},\beta}\times p^{i_{\alpha}+\beta}]\hookrightarrow \sum_{\gamma}^{l(m)}[c_{m,\gamma}\times p^{m'+\gamma}].$$
Moreover, this process does not involve any other terms. Therefore, we can refine $\nu$ into the desired shape. \end{proof}
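The base-$p$ refinement used in this proof is purely mechanical; it can be sketched in a few lines of code. This is our own toy illustration (the list-of-counts representation and the function names are not taken from the text): each part of $\nu$ is expanded in base $p$, and the digits are then collected level by level to form $\tilde\nu$.

```python
# Sketch of the refinement step: expand each part c_j of nu in base p and
# collect the digits level by level, so counts[i] = sum_j c_{j,i}.

def base_p_digits(c, p):
    """Digits of c in base p, least significant first."""
    digits = []
    while c > 0:
        c, r = divmod(c, p)
        digits.append(r)
    return digits

def refine(nu, p):
    """Counts [sum_j c_{j,0}, sum_j c_{j,1}, ...] of the refinement of nu."""
    counts = []
    for c in nu:
        for i, d in enumerate(base_p_digits(c, p)):
            while len(counts) <= i:
                counts.append(0)
            counts[i] += d
    return counts
```

Note that the refinement preserves the total size of the partition, since $\sum_i (\sum_j c_{j,i})\, p^i = \sum_j c_j$.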
\begin{cor} Let $\lambda=[a_i]_p, \mu=[b_i]_p$ be partitions in $\mathcal{P}$ and fix an index $i$. Set $c=\mathrm{Min}\{ a_i, b_i\}$ and let $\tilde\lambda$, $\tilde\mu$ be the partitions obtained from $\lambda$, $\mu$ by replacing $a_i$, $b_i$ with $a_i-c$, $b_i-c$, respectively (so that at least one of the two entries becomes $0$). Then $\lambda{\overset{s}\hookrightarrow} \mu$ if and only if $\tilde\lambda{\overset{s}\hookrightarrow} \tilde\mu$. \label{shape} \end{cor}
\begin{proof} We assume $a_i\le b_i$ for a fixed $i$. Suppose $\lambda{\overset{s}\hookrightarrow} \mu$. By Theorem~\ref{Cpower}, we can find
$$\nu=[c_0, c_{1}, \ldots, c_n]_p$$ whose entries are all powers of the fixed number $p$, where $c_k$ is the number of boxes of size $p^k$. Now $\lambda\otimes\nu\hookrightarrow \mu\otimes\nu$, and since $\lambda\otimes\nu, \mu\otimes\nu$ satisfy the hypothesis of Theorem~\ref{dividable}, we can use the first-fit algorithm. We first place all boxes whose sizes are bigger than $p^i\times p^n$. Then we consider the $a_i\cdot c_n$ boxes of size $p^i\times p^n$ in $\lambda\otimes\nu$. Since none of the boxes of size $p^i\times p^n$ in $\mu\otimes\nu$ was used in the previous steps, we can place these into boxes of size $p^i\times p^n$ of $\mu\otimes\nu$. Then we finish the rest of the boxes of size $p^{n+i}$. We repeat the same process for the next $a_i\cdot c_{n-1}$ boxes of size $p^i\times p^{n-1}$ in $\lambda\otimes\nu$. This embedding sends all boxes $p^{i}\otimes \nu$ of $\lambda\otimes\nu$ into boxes $p^{i}\otimes \nu$ of $\mu\otimes\nu$; removing them yields $\tilde\lambda\otimes\nu\hookrightarrow \tilde\mu\otimes\nu$, hence $\tilde\lambda{\overset{s}\hookrightarrow} \tilde\mu$. The converse is obvious. \end{proof}
\subsection{An algorithm to determine the stable embeddability} We introduce an algorithm that decides stable embeddability by constructing a suitable $\nu$. Let $\lambda, \mu$ be partitions in $\mathcal{P}$, i.e., $\lambda=[a_0, a_1, \ldots, a_n]_p$ and $\mu=[b_0, b_{1}, \ldots, b_m]_p$, where $a_i, b_i$ are the numbers of boxes of size $p^i$ in $\lambda, \mu$, respectively. By Theorem~\ref{superpp}, we can decide whether $\lambda$ can be embedded into $\mu$ or not. Before we apply the algorithm, we modify the shapes of $\lambda, \mu$ by Corollary~\ref{shape} so that for each $i$ at most one of $a_i, b_i$ is nonzero. If $a_n\neq 0\neq b_m$ and $m<n$, then $\lambda$ cannot be stably embedded into $\mu$. For convenience, we will assume that $p$ is $2$, that $\nu$ is a rational partition whose entries are non-positive powers of $2$, and that $c_k$ is the number of boxes in $\nu$ of size $2^{-k}$.
Initially, we start with $c_0=1$. There are $a_n$ boxes of size $2^n$ in $\lambda\times[c_0 \times 1]$ and no blocks of size $2^n$ in $\mu\times[c_0\times 1]$. However, there is room for
$$b_m \times 2^{m-n} +b_{m-1} \times 2^{m-n-1} + \ldots + b_{n+1}\times 2$$ boxes of size $2^n$ in $\mu\times[c_0\times 1]$. If
$$b_m \times 2^{m-n} + b_{m-1} \times 2^{m-n-1} + \ldots + b_{n+1}\times 2 \ge a_n,$$ we set $c_1$ to zero and carry the difference, say $M$, over to the next step. Otherwise we set $$c_1 =\lceil \frac{a_n - (b_m \times 2^{m-n} + b_{m-1} \times 2^{m-n-1} + \ldots + b_{n+1}\times 2)}{b_m} \rceil$$ and $M=0$, where $\lceil x \rceil$ is the smallest integer that is greater than or equal to $x$. Then we look at $\lambda\times[c_0\times 1, c_1\times\frac{1}{2}], \mu\times[c_0\times 1,c_1\times\frac{1}{2}]$. We have $a_{n-1}\cdot c_0 + a_n\cdot c_1$ boxes of size $2^{n-1}$ in $\lambda\times[c_0\times 1, c_1\times\frac{1}{2}]$, which we compare with
$$ 2\times M+ c_1\times ( b_m\times 2^{m-n} + b_{m-1} \times 2^{m-n-1} + \ldots + b_{n+1}\times 2) + b_{n-1}\cdot c_0, $$ and we repeat exactly the same process. For $N\ge m$, we find $c_{N+1}$ by comparing the two quantities $$\alpha= b_{m-1}\times c_{N+m-1} + b_{m-2}\times c_{N+m-2} + \ldots + b_0\times c_{N} + 2\times M$$ and $$\beta=a_{n}\times c_{N+n} + a_{n-1}\times c_{N+n-1} + \ldots + a_{0}\times c_{N},$$ because these numbers count exactly how many blocks of size $2^{-N}$ occur in the products $$\lambda\times[c_0\times 1, c_1\times\frac{1}{2}, \ldots, c_N\times\frac{1}{2^{N}}]$$ and $$\mu\times[c_0\times1, c_1\times\frac{1}{2}, \ldots, c_N\times\frac{1}{2^N}],$$ where $M$ is the number of boxes that were left over in the previous step. Then $c_{N+1}=\lceil (\beta-\alpha)/b_m \rceil$ and we set $M=0$ if $\beta-\alpha > 0$; otherwise $c_{N+1}=0$ and we set $M=\alpha-\beta$. Then we compare the next biggest boxes. We stop once we get $n$ consecutive $0$'s among the $c_i$. Let $N$ be the largest integer such that $c_N$ is nonzero. We repeat the process starting with $c_0=(b_m)^{N+1}$. One can easily see that we no longer have to use
the ceiling function, because $(b_m)^{N+1-k}\mid c_k$ for all $0\le k \le N+1$. Finally, we multiply by $2^N$ to make $\nu$ an integral partition.
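The core update rule of the algorithm can be transcribed schematically; the sketch below is our own illustration of just this one step (it is not a full implementation of the algorithm): given the capacity $\alpha$, the demand $\beta$, and the count $b_m$ of largest boxes of $\mu$, it returns the next entry $c_{N+1}$ and the leftover $M$.

```python
# Schematic transcription of the update rule: c_{N+1} = ceil((beta-alpha)/b_m)
# with M = 0 if beta > alpha, and c_{N+1} = 0 with M = alpha - beta otherwise.
import math

def next_step(alpha, beta, b_m):
    """Return (c_next, M_next) for one comparison step of the algorithm."""
    if beta - alpha > 0:
        return math.ceil((beta - alpha) / b_m), 0
    return 0, alpha - beta
```

Iterating this rule, while recomputing $\alpha$ and $\beta$ from the current list of $c_k$'s as described above, reproduces the loop of the algorithm.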
To compare the optimality of such $\nu$'s, we define the \emph{length} of $\nu=[c_0, c_1, \ldots, c_n]_p$ to be $n+1$, where $c_0\neq 0 \neq c_n$. For the given $\lambda, \mu$ we collect all $\nu \in \mathcal{P}$ with $\lambda\otimes \nu\hookrightarrow \mu\otimes \nu$ into a set ${\mathcal T}$. Then we define a partial order on ${\mathcal T}$ by the lexicographic order on $$( \mathrm{length}\hskip .2cm \mathrm{of}\hskip .2cm \widehat\lambda(\nu), \frac{c_1}{c_0}, \ldots, \frac{c_n}{c_0}).$$ Moreover, ${\mathcal T}$ is closed under addition, tensor product, and scalar multiplication.
\begin{thm} 1) The algorithm terminates in finitely many steps if and only if $\lambda{\overset{s}\hookrightarrow}\mu$.
2) Let $D$ be a partition which is obtained from the algorithm. Then $D$ is a minimal element with respect to the partial order we defined on ${\mathcal T}$. \label{optimal} \end{thm}
\begin{proof} We want to show that if $\lambda{\overset{s}\hookrightarrow}\mu$, then the algorithm must stop after finitely many steps and the partition it produces has the smallest length. Since $\lambda{\overset{s}\hookrightarrow}\mu$, ${\mathcal T}$ is nonempty and we can find a minimal element in ${\mathcal T}$, say $${\mathfrak t}=[t_0, t_1, \ldots, t_l]_p.$$ First we assume existence, i.e., that the algorithm gives us an integral partition $$\nu=[c_0, c_1, \ldots, c_m]_p.$$ By minimality, we have $l\le m$, while the process itself guarantees $l\ge m$; hence $l=m$. We compare $$c_0{\mathfrak t}=[c_0\cdot t_0, c_0\cdot t_1, \ldots, c_0\cdot t_l]_p$$ and $$t_0 \nu=[t_0\cdot c_0, t_0\cdot c_1, \ldots, t_0\cdot c_m]_p.$$ Suppose $c_0{\mathfrak t}\neq t_0\nu$. Then there is a $j$ such that $$c_0\cdot t_j < t_0\cdot c_j.$$ But this obviously contradicts the process of the algorithm. Therefore, $c_0{\mathfrak t} = t_0\nu$, which also proves the existence. \end{proof}
\label{stable}
\section{Discussions} \label{discussion}
\subsection{Algebraic embeddabilities.} Let ${\mathcal A}$ be a finite dimensional semisimple algebra over an algebraically closed field $K$. By a simple application of the Wedderburn--Artin theorem, we can decompose ${\mathcal A}$ into a direct sum of matrix algebras. From a direct sum of matrix algebras ${\mathcal A}$, we can extract a unique integral partition $\lambda$, denoted by $\lambda({\mathcal A})$. Conversely, to an integral partition $\lambda$ one can assign a direct sum of matrix algebras $${\mathcal A}(\lambda) = \bigoplus_{i=1}^{m} {\mathcal M}_{\lambda_i},$$ where ${\mathcal M}_{\lambda_i}$ is the set of all $\lambda_i$ by $\lambda_i$ matrices over $K$. For integral partitions, one can see that $\lambda\hookrightarrow\mu$ if and only if ${\mathcal A}(\lambda)$ embeds into ${\mathcal A}(\mu)$ as a $K$-algebra. All the other partial orders can be naturally defined for direct sums of matrix algebras. The question of the embeddability between algebraic objects such as groups, rings, and modules is a long-standing difficult question. For some algebraic objects, such as sets and vector spaces, the question is straightforward. The embeddability between modules over a complex simple Lie algebra is completely determined by the Littlewood-Richardson formula and Schur's lemma. The authors have made some progress on stable embeddability, where the product is replaced by the tensor product, between modules over a complex simple Lie algebra \cite{lie}. The stable embeddability between other algebraic objects should be an interesting question.
\subsection{Analytic embeddabilities.}
Let $\lambda, \mu$ be partitions in $\mathcal{P}$. The algorithm we defined in section \ref{stable} brings us a new embeddability: $\lambda$ \emph{weakly stably embeds} into $\mu$, denoted by $\lambda{\overset{w. s}{\hookrightarrow}}\mu$, if there exists a rational partition $\nu$ of infinite length such that all entries of $\nu$ are nonpositive powers of the fixed number $p$, $$\sum_{i=0}^{\infty} c_i p^{-i} <\infty,$$ where $c_i$ is the number of the entries $p^{-i}$, and $\lambda\otimes\nu\hookrightarrow\mu\otimes\nu$. One can see that
\begin{eqnarray} \begin{matrix} \lambda{\overset{s}\hookrightarrow}\mu & \Longrightarrow & \lambda{\overset{w. s}{\hookrightarrow}}\mu & \Longrightarrow & \lambda{\overset{b}\hookrightarrow}\mu \\ & & & & \Updownarrow \\
& & ||\lambda||_p < ||\mu||_p, ~ \forall p\in(1,\infty) &
\Longrightarrow & ||\lambda||_p \le ||\mu||_p,~ \forall p\in[1,\infty] \end{matrix} \label{ws} \end{eqnarray}
It is not known whether the converses of the implications in the first row of equation
\ref{ws} hold for partitions in $\mathcal{P}$. The authors have written a program performing the algorithm described in section~\ref{stable} in order to test whether $||\lambda||_p < ||\mu||_p$ for all $p\in(1,\infty)$, together with equality for $p=1$ and $p=\infty$, implies $\lambda{\overset{s}\hookrightarrow}\mu$. We have not found an answer yet.
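The norm comparison in the first row of equation \ref{ws} is easy to probe numerically. The sketch below is ours, and it rests on an assumption not stated in this excerpt: that $||\lambda||_s$ denotes the $\ell^s$-norm of the multiset of part sizes of $\lambda$, i.e., $||\lambda||_s=(\sum_i \lambda_i^{\,s})^{1/s}$.

```python
# Exploratory check of ||lam||_s < ||mu||_s on a grid of exponents s in (1, oo).
# The partitions are represented as lists of part sizes (our convention).

def norm(parts, s):
    """Assumed l^s-norm of the multiset of part sizes."""
    return sum(x ** s for x in parts) ** (1.0 / s)

def strictly_smaller_on_open_interval(lam, mu, samples=200):
    """Test ||lam||_s < ||mu||_s on a grid of s > 1."""
    return all(norm(lam, 1 + 0.25 * k) < norm(mu, 1 + 0.25 * k)
               for k in range(1, samples))
```

For example, $\lambda$ with parts $(1,1,1,1)$ and $\mu$ with parts $(2,2)$ have equal $\ell^1$-norms, while $||\lambda||_s<||\mu||_s$ for every $s>1$.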
\end{document}
\begin{document}
\title{ Measurement-induced chaos and quantum state discrimination in an iterated Tavis-Cummings scheme }
\author{Juan Mauricio Torres} \affiliation{Institut f\"{u}r Angewandte Physik, Technische Universit\"{a}t Darmstadt, D-64289, Germany} \affiliation{Instituto de F\'isica, Benem\'erita Universidad Aut\'onoma de Puebla, Apdo. Postal J-48, Puebla, Pue. 72570, M\'exico} \author{J\'ozsef Zsolt Bern\'ad} \affiliation{Institut f\"{u}r Angewandte Physik, Technische Universit\"{a}t Darmstadt, D-64289, Germany} \author{Gernot Alber} \affiliation{Institut f\"{u}r Angewandte Physik, Technische Universit\"{a}t Darmstadt, D-64289, Germany} \author{Orsolya K\' alm\' an} \affiliation{ Institute for Solid State Physics and Optics, Wigner Research Centre, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest, Hungary} \author{Tam\'as Kiss} \affiliation{ Institute for Solid State Physics and Optics, Wigner Research Centre, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest, Hungary} \date{\today} \begin{abstract}
A cavity quantum electrodynamical scenario is proposed for implementing a
Schr\"odinger microscope capable of amplifying differences between non
orthogonal atomic quantum states. The scheme involves an ensemble of
identically prepared two-level atoms interacting pairwise with a single mode
of the radiation field as described by the Tavis-Cummings model. By repeated
measurements of the cavity field and of one atom within each pair a
measurement-induced nonlinear quantum transformation of the relevant atomic
states can be realized. The intricate dynamical properties of this nonlinear
quantum transformation, which exhibits measurement-induced chaos, allow
approximate orthogonalization of atomic states by purification after a few
iterations of the protocol, and thus the application of the scheme for
quantum state discrimination. \end{abstract} \maketitle \section{Introduction} Consistent with the no-cloning theorem, nonorthogonal quantum states cannot be distinguished perfectly. However, for purposes of quantum communication, for example, it is necessary to be able to distinguish between two information-carrying quantum states even if they have become nonorthogonal after passing through a channel. Therefore, quantum processes capable of distinguishing between nonorthogonal quantum states in an optimal way offer interesting perspectives for applications in quantum information science. Prominent examples of such processes are the Helstrom measurement \cite{Helstrom}, which minimizes errors, and the Ivanovic-Dieks-Peres measurement \cite{Ivanovic,Dieks,Peres}, which distinguishes pure quantum states in an unambiguous way.
Alternatively, nonorthogonal quantum states can also be distinguished with the help of nonlinear quantum state transformations \cite{Gisin}. Quantum state purification protocols \cite{Bennett97,Deutsch98,Macchiavello98,Alber2001,Torres2016b} are early examples of such nonlinear quantum state transformations. Thereby, identically prepared quantum systems are subjected to an entangling unitary transformation and a subsequent selective measurement performed on parts of the system. Iterating these operations typically results in a strong dependence of the final state on the initial conditions and in measurement-induced complex chaos \cite{Kiss2006,Kiss2011}. Recently, it has been demonstrated \cite{Gilyen2016} that the resulting strong sensitivity to initial conditions can in principle be used to amplify small initial differences of quantum states, thus realizing a Schr\"odinger microscope, a term originally suggested by Lloyd and Slotine \cite{Lloyd2000}, capable of distinguishing nonorthogonal quantum states. Although Helstrom and Ivanovic-Dieks-Peres measurements have already been realized experimentally, both optically \cite{Clarke, Dada} and in solid-state systems \cite{Waldherr}, a Schr\"odinger microscope based on nonlinear quantum state transformations has not yet been realized.
Motivated by these developments, the purpose of this paper is twofold, namely to propose an experimental scenario in which iterated nonlinear dynamics can be realized with atomic qubits and to explore the characteristic features of the underlying nonlinear quantum state transformation in order to present a Schr\"odinger microscope and to demonstrate its applicability for quantum state discrimination. In view of its possibilities to measure and control the interaction of individual atoms with a single mode of the quantized radiation field with high precision, the area of cavity quantum electrodynamics offers interesting perspectives for future experimental implementations in this direction \cite{Reimann2015,Neuzner2016}. Inspired by recent experimental advances which realize the Tavis-Cummings model \cite{Neuzner2016}, in our proposal an ensemble of identically prepared two-level atoms (qubits) is considered which interact pairwise with a single mode of the radiation field. Afterwards, one member of each pair and the corresponding cavity field are measured. Conditioned on these measurement results the unmeasured atoms are kept or discarded. In practice, this may be implemented with the help of a single cavity and a pair of optical conveyor belts \cite{Reimann2015}, for example. Subsequently, the atoms are moved through the cavity by the conveyor belts in such a way that only one pair of atoms interacts with the cavity mode at a time, and the cavity is re-initialized after each interaction. The remaining atoms form a new identically prepared ensemble of smaller size. Similarly to entanglement distillation protocols, the state changes of the remaining two-level atoms are described by an iterated nonlinear quantum transformation.
We analyze the emerging nonlinear quantum state transformation and show that it exhibits measurement-induced complex chaos. We characterize the different parameter regimes, the possible stable fixed points and fixed cycles of the dynamics and the regions of convergence as well as non-converging sets of initial states, forming the so-called Julia set. Based on this analysis we identify a case where the two stable fixed points correspond to orthogonal quantum states of the atom and the Julia set forms a line, separating the two regions of stability. Here the system can be utilized as a Schr\"odinger microscope capable of amplifying the distinguishability of nonorthogonal quantum states. In the presented setting, two-level atoms with small excitation amplitudes can be discriminated according to the sign of the real part of their excitation amplitudes. Thus it is also suitable to discriminate noisy nonorthogonal quantum states.
This paper is organized as follows. In Sec. \ref{Model} the dynamical equations of the two-atom Tavis-Cummings model are solved, and exact and approximate analytical solutions are presented, facilitating our subsequent treatment. Furthermore, the atomic postselection scheme is discussed, which is used in Sec. \ref{protocol} to construct our protocol for implementing a nonlinear map of atomic probability amplitudes. In Sec. \ref{analysis} the fractal structure of the resulting nonlinear map is analyzed. In Sec. V our proposal for implementing a Schr\"odinger microscope is presented. Finally, in Sec. VI some aspects concerning possible experimental realizations of our proposal with present-day technology are discussed.
\section{The two-atom Tavis-Cummings model} \label{Model} The two-atom Tavis-Cummings model describes the resonant interaction between two atoms, say $A$ and $B$, and a single mode of the radiation field \cite{Tavis}. The atoms have ground states $\ket{0}_i$ and excited states $\ket{1}_i$ ($i \in\{A,B\}$) separated by an energy difference of $\hbar \omega$ which matches the energy of a photon inside the empty cavity. In the interaction picture the Hamiltonian can be expressed in the following form \begin{align}
\hat{H}&= \hbar g\sum_{i=A,B}\left( \hat{\sigma}^+_i\hat{a}+
\hat{\sigma}^-_i\hat{a}^\dagger\right) \label{Hamilton} \end{align} where $\hat{\sigma}^+_i=\ket{1}\bra{0}_i$ and $\hat{\sigma}^-_i=\ket{0}\bra{1}_i$ are the atomic raising and lowering operators ($i \in\{A,B\}$), and $\hat{a}$ ($\hat{a}^\dagger$) is the annihilation (creation) operator of the single-mode field. The interaction picture is taken with respect to the reference Hamiltonian \begin{equation} \hat{H}_0=\hbar\omega (\hat{a}^\dagger \hat{a}+\ketbra{1}{1}_A+\ketbra{1}{1}_B)
\label{H0} \end{equation} which is a constant of motion as it commutes with the interaction Hamiltonian $\hat H$. For this reason both operators, $\hat H$ and $\hat H_0$, can be diagonalized simultaneously. In fact, there is a set of common eigenvectors with eigenvalue zero, namely $\{\ket{\Psi^-}\ket n\}_{n=0}^\infty$. These states are written in terms of the Fock states $\ket n$ of the field and the atomic states $\ket{i,j}=\ket{i}_A\ket j_B$ ($i,j \in \{0,1\}$) together with the atomic Bell states \begin{align}
\ket{\Psi^\pm}&=\frac{1}{\sqrt2}\left(\ket{0,1}\pm\ket{1,0}\right).
\label{bellstates} \end{align} The evaluation of the rest of the eigenvectors can be simplified by realizing that
$\hat H$ has a block-diagonal form in the basis \begin{align}
&
\{\ket{0,0}\ket0\}\oplus
\{\ket{\Psi^+}\ket{0},\ket{0,0}\ket{1}\}\oplus
\nonumber\\&
\{\ket{1,1}\ket{n-2},
\ket{\Psi^+}\ket{n-1}
,\ket{0,0}\ket{n}\}_{n=2}^\infty,
\label{basisAB} \end{align} with blocks given by the following matrices \begin{align}
&H^{(0)}=0,\quad
H^{(1)}=\hbar g\left(
\begin{array}{cc}
0& \sqrt{2}\\
\sqrt{2}& 0
\end{array}
\right),
\nonumber\\
&H^{(n\ge 2)}=
\hbar g\left(
\begin{array}{ccc}
0 &\sqrt{2(n-1)}&0\\
\sqrt{2(n-1)}&0&\sqrt{2n}\\
0&\sqrt{2n}& 0
\end{array}
\right).
\label{Hblocks1} \end{align} The eigenvalues of these matrices are given by $\{0\}$ for $n=0$, $\{ -\sqrt2\hbar g,\sqrt2\hbar g\}$ for $n=1$, and $\{0,-\hbar\omega_n,\hbar\omega_n\}$ for $n\ge 2$, with \begin{align}
\omega_n=g\sqrt{4n-2}.
\label{} \end{align} The transformations that diagonalize each of the blocks $H^{(n)}$ are given by \begin{align}
O^{(1)}&=\frac{1}{\sqrt2}\left(
\begin{array}{cc}
1&1\\
-1&1
\end{array}
\right),
\\
O^{(n\ge2)}&=\frac{1}{\sqrt{4n-2}}\left(
\begin{array}{ccc}
-\sqrt{2n}&\sqrt{n-1}&\sqrt{n-1}\\
0&-\sqrt{2n-1}&\sqrt{2n-1}\\
\sqrt{2n-2}&\sqrt n&\sqrt n
\end{array}
\right).\nonumber
\label{} \end{align} These matrices are the blocks of the orthogonal transformation $\hat O$ that diagonalizes the Hamiltonian $\hat H$ as $\hat O^\dagger \hat H \hat O$.
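As a quick numerical sanity check (ours, not part of the paper), one can verify that the columns of $O^{(n)}$ are indeed eigenvectors of $H^{(n)}$ with eigenvalues $\{0,-\hbar\omega_n,+\hbar\omega_n\}$; the sketch below uses units with $\hbar=g=1$.

```python
# Verify H^(n) O^(n)_col = lambda * O^(n)_col for the blocks given in the text,
# with eigenvalues {0, -omega_n, +omega_n}, omega_n = sqrt(4n - 2) (h_bar = g = 1).
from math import sqrt

def H_block(n):
    a, b = sqrt(2 * (n - 1)), sqrt(2 * n)
    return [[0, a, 0], [a, 0, b], [0, b, 0]]

def O_block(n):
    s = sqrt(4 * n - 2)
    return [[-sqrt(2 * n) / s, sqrt(n - 1) / s, sqrt(n - 1) / s],
            [0, -sqrt(2 * n - 1) / s, sqrt(2 * n - 1) / s],
            [sqrt(2 * n - 2) / s, sqrt(n) / s, sqrt(n) / s]]

def eigen_residual(n):
    """Max |H v_k - lambda_k v_k| over the three columns of O^(n)."""
    H, O = H_block(n), O_block(n)
    lams = [0.0, -sqrt(4 * n - 2), sqrt(4 * n - 2)]
    res = 0.0
    for k, lam in enumerate(lams):
        v = [O[r][k] for r in range(3)]
        Hv = [sum(H[r][c] * v[c] for c in range(3)) for r in range(3)]
        res = max(res, max(abs(Hv[r] - lam * v[r]) for r in range(3)))
    return res
```

The residual vanishes up to floating-point precision for every $n\ge 2$.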
\subsection{Exact solution} Having solved the eigenvalue problem for $\hat H$, it is now possible to evaluate the time-dependent state vector \begin{align}
\ket{\Psi_t}= e^{-i\hat H t/\hbar}\ket{\Psi_0}
\label{} \end{align} for any given initial pure state $\ket{\Psi_0}$. In this work we consider as initial condition a normalized product state of the two atoms and the single-mode field that can be expressed as \begin{align}
\ket{\Psi_0}=&\ket{\Psi_0^{\rm at}}\ket\alpha,
\nonumber\\
\ket{\Psi^{\rm at}_0}=&
c_0\ket{0,0}+
c_-\ket{\Psi^-}+
c_+\ket{\Psi^+}+
c_1\ket{1,1}.
\label{initial} \end{align} We have considered a general pure state $\ket{\Psi^{\rm at}_0}$ of the atoms with probability amplitudes $c_\pm$, $c_0$ and $c_1$. For the single mode of the radiation field we have chosen a coherent state \begin{align}
\ket{\alpha}=\sum_{n=0}^\infty
e^{-\frac{|\alpha|^2}{2}}
\frac{\alpha^n}{\sqrt{n!}}
\ket n,
\quad\alpha=\sqrt{\overline n}\,e^{i\phi},
\label{coherentstate} \end{align} with mean photon number $\bar n$. Using the eigenbasis of $\hat H$, the exact solution of the time-dependent state vector can be written as \begin{equation}
\ket{\Psi_t}=
\ket{0,0}\ket{\chi_t^{-1}}
+\ket{\Psi^+}\ket{\chi^0_t}
+\ket{1,1}\ket{\chi^{1}_t}
+c_-\ket{\Psi^-}\ket{\alpha}
\label{psi} \end{equation} with the relevant photonic states \begin{align}
\ket{\chi^{-1}_t}&=
c_0\,p_0\ket{0}+
\sum_{n=1}^\infty
\frac{
\sqrt{n}
\left(
\xi_{n,t}^-
-\xi_{n,t}^+\right)
+\sqrt{n-1}\xi_{n}
}{\sqrt{2n-1}}
\ket{n},
\nonumber\\
\ket{\chi^0_t}&=
\sum_{n=1}^\infty
\left(
\xi_{n,t}^-+\xi_{n,t}^+
\right)
\ket{n-1},
\label{fieldstates}
\\
\ket{\chi^1_t}&=\sum_{n=2}^\infty
\frac{
\sqrt{n-1}
\left(
\xi_{n,t}^-
-\xi_{n,t}^+\right)
-\sqrt{n}\xi_{n}
}{\sqrt{2n-1}}
\ket{n-2},
\nonumber \end{align} and with the aid of the following abbreviations \begin{align}
\label{abbr}
&\xi_{n,t}^\pm
=
\frac{e^{\pm i \omega_n t}}{2}
\left(
c_+\mp
\frac{\,c_0p_n+\sqrt{n-1}\,c_1p_{n-2}}{\sqrt{2n-1}}
\right),
\\
&\xi_{n}
=
\frac{\sqrt{n-1}\,c_0 p_n-\sqrt{n}\,c_1 p_{n-2}}{\sqrt{2n-1}},\quad
p_n=\alpha^n\sqrt{e^{-|\alpha|^2}/n!}.
\nonumber \end{align}
\subsection{Coherent-state approximation} The time-dependent solution of the state vector can be significantly simplified in the case of high values of the mean photon number, i.e., $\bar n\gg 1$. In this limit
$\bar n\gg \sqrt{\bar n}$, i.e., the mean of the Poisson distribution $\bar n$ is much larger than the standard deviation $\sqrt{\bar n}$. Therefore we approximate $\sqrt{(n-1)/(2n-1)}$ and $\sqrt{n/(2n-1)}$ by $1/\sqrt2$ and we also use the approximations \begin{align}
p_n=&\sqrt{\frac{\bar n}{n}}e^{i\phi}p_{n-1}\approx e^{i\phi}p_{n-1},
\nonumber\\
\omega_n/g&\approx\sqrt{4 \bar n+2}+2\frac{n-\bar n-1}{\sqrt{4\bar n+2}}.
\label{apps} \end{align} The last line is obtained from the first-order Taylor expansion in $n$ of the frequencies around $\bar n+1$. This is valid whenever the product between the second-order contribution times the interaction time $t$ remains small, a condition that is satisfied when $gt\ll\bar n$ \cite{Torres2014,Torres2016}. With these considerations and by introducing the abbreviations \begin{align}
\eta_\pm=\frac{1}{2}\left(
c_+\mp d^+_\phi
\right),\quad d_\phi^\pm=
\frac{e^{i\phi} c_0\pm e^{-i\phi}c_1}{\sqrt2},
\label{} \end{align} the photonic states can be simplified to \begin{align}
\ket{\chi^k_t}&\approx
\frac{
e^{ik\phi}}{\sqrt{1+|k|}}
\left(
\eta_-
\ket{F_{k,t}^-}
+(-1)^k\eta_+
\ket{F_{k,t}^+}
-kd_\phi^-\ket\alpha
\right),
\nonumber\\
k&\in\{-1,0,1\},
\label{fieldstates2} \end{align} where we have introduced the field states \begin{align}
\ket{F_{k,t}^\pm}=e^{\pm i 2gt\frac{1+k(\bar n+1)}{\sqrt{4 \bar n+1}}}\ket{\alpha e^{\frac{\pm i2gt}{\sqrt{4\bar n+1}}}},
\quad k\in\{-1,0,1\},
\label{Fstates} \end{align} which are coherent states up to an additional phase.
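The linearization of $\omega_n$ in Eq. \eqref{apps}, which underlies these field states, can be checked numerically; the following snippet is our own illustration, comparing the exact frequency $\omega_n/g=\sqrt{4n-2}$ with its first-order expansion around $n=\bar n+1$.

```python
# Compare omega_n/g = sqrt(4n - 2) with the linearization
# sqrt(4 nbar + 2) + 2 (n - nbar - 1)/sqrt(4 nbar + 2), valid for nbar >> 1.
from math import sqrt

def omega_exact(n):
    return sqrt(4 * n - 2)

def omega_approx(n, nbar):
    return sqrt(4 * nbar + 2) + 2 * (n - nbar - 1) / sqrt(4 * nbar + 2)
```

For $n$ within a few standard deviations of $\bar n$, the error is of second order in $n-\bar n-1$ and shrinks as $\bar n$ grows, consistent with the validity condition $gt\ll\bar n$ quoted above.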
\subsection{Atomic postselection} The description in terms of coherent states allows a simpler analysis of the dynamics. Our aim is an atomic postselection scenario in which the atoms are prepared conditioned on a successful projection of the field onto the initial coherent state $\ket\alpha$; here we consider a simplified and ideal implementation. In such a case, one would have to consider the following overlaps \begin{align}
\braket{\alpha}{\chi_t^{k}}\approx-k
e^{ik\phi}
\frac{e^{i\phi} c_0 - e^{-i\phi}c_1}{2}.
\label{18} \end{align} This result can be obtained by noting that the overlap between coherent states is given by \begin{align}
\left|\braket{\alpha}{\alpha e^{\frac{\pm i2gt}{\sqrt{4\bar n+1}}}}\right|^2
=&\left|\exp{\left[-\bar n\left(1-e^{\frac{\pm i2gt}{\sqrt{4\bar n+1}}}\right)\right]}\right|^2
\nonumber\\
\approx&\, e^{- g^2t^2},
\nonumber
\label{} \end{align} which can be neglected if $ gt\gg1$. Therefore, after the interaction with the resonator and projection onto state $\ket\alpha$, both atoms are left in the state \begin{align}
\frac{c_-}{Q_1}\ket{\Psi^-}+
\frac{e^{i\phi} c_0- e^{-i\phi}c_1}{2 Q_1}
\left(
e^{-i\phi}\ket{0,0}-
e^{i\phi}\ket{1,1}
\right),
\label{atpost} \end{align}
with success probability $Q_1^2=|c_-|^2+|e^{i\phi} c_0- e^{-i\phi}c_1|^2/2$. The final state is actually a superposition of two states with probability amplitudes proportional to the initial ones. Therefore, the atomic postselection can be understood as a projection of the atomic state with the following rank-two projector \begin{equation}
\hat M=
\ketbra{\Psi^-}{\Psi^-}+\ketbra{\Phi^-_\phi}{\Phi^-_\phi},
\label{Moperator} \end{equation} where we have introduced the state $\ket{\Phi^-_\phi}=\left(
e^{-i\phi}\ket{0,0}- e^{i\phi}\ket{1,1} \right)/\sqrt2$. The operation $\hat M$ represents the effective
description of the interaction of the atoms with the resonator and the postselection via measurement of the field.
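The neglect of the overlaps between $\ket\alpha$ and the rotated coherent states, which justifies replacing the measurement by the projector $\hat M$, can be illustrated numerically. The sketch below is ours; it evaluates the exact overlap modulus quoted above and confirms that it becomes negligible once $gt\gg1$.

```python
# Exact |<alpha | alpha e^{i delta}>| for |alpha|^2 = nbar, with the phase
# delta = 2 g t / sqrt(4 nbar + 1) of the rotated coherent-state components.
from math import cos, exp, sqrt

def overlap_modulus(nbar, gt):
    """|<alpha|alpha e^{i delta}>| = exp(-nbar (1 - cos(delta)))."""
    delta = 2 * gt / sqrt(4 * nbar + 1)
    return exp(-nbar * (1 - cos(delta)))
```

For a mean photon number of $\bar n=10^4$, the overlap drops from $1$ at $gt=0$ to below $10^{-5}$ already at $gt=5$.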
\subsection{Atomic postselection by balanced homodyne detection} Considering the projection onto a coherent state is an idealization that provides a convenient simplified picture. In practice, however, it is sufficient to project onto a state with vanishing overlap with the time-dependent field components $\ket{F_{k,t}^\pm}$ and with finite overlap with $\ket\alpha$. A typical experimental setting able to achieve this goal is a balanced homodyne measurement \cite{Torres2014}. The basic idea is to use a $50/50$ beam splitter to combine the field to be measured with a reference coherent field parametrized by its phase $\theta$. Photons from the two outputs of the beam splitter are collected using photodetectors. In the limit of a strong reference field and assuming ideal photodetectors \cite{Raymer}, the probability of measuring a photocurrent difference between the detectors is proportional to the projection of the field onto the eigenstate $\ket{q_{\theta}}$ of the field quadrature $\hat q_\theta=(\hat a e^{-i\theta}+\hat a^\dagger e^{i\theta})/\sqrt2$. This probability density for a coherent state $\ket\alpha$ is given by \begin{equation}
|\braket{q_\theta}{\alpha}|^2=\frac{1}{\sqrt{\pi}}
\exp\left\{-[ q_\theta-\tilde q_\theta]^2\right\}
\label{probcoh1} \end{equation} with $\tilde q_\theta =(\alpha e^{-i\theta}+\alpha^\ast e^{i\theta})/\sqrt2$. This overlap can approach its maximum value by choosing the phase in such a way that $\tilde q_\theta=0$ and restricting values of $q_\theta$ close to zero. The square of the overlap with the other field components, that are also coherent states, can be evaluated as \begin{equation}
|\braket{q_\theta}{F_{k,t}^\pm}|^2=\frac{1}{\sqrt{\pi}}
\exp\left\{-[ q_\theta-\tilde q_{\Theta_t^\pm}]^2\right\},
\label{probcoh2} \end{equation} with $\Theta_t^\pm=\theta\mp 2gt/\sqrt{4\bar n +1}$. By choosing an appropriate interaction time $t$, these overlaps can be made exponentially small.
\section{A nonlinear map of pure atomic states} \label{protocol}
\label{onestep} \begin{figure}\label{scheme}
\end{figure}
In this section we use the atomic postselection scheme of the two-atom Tavis-Cummings model in order to implement an entangling quantum operation which by iteration leads to a nonlinear mapping of atomic probability amplitudes. The protocol is depicted schematically in Fig. \ref{scheme}. We consider the two two-level atoms initially prepared in a product state of the form ($z\in\mathbb{C}$) \begin{align}
\ket{\Psi^{\rm at}_0}=\ket{\psi_0}_A\otimes\ket{\psi_0}_B, \quad
\ket{\psi_0}=\frac{\ket{0}+ze^{i\phi}\ket{1}}{\sqrt{1+|z|^2}}.
\label{atom-state} \end{align} For later convenience we have included the phase $\phi$ of the coherent state. Before interacting with the optical resonator, a unitary gate $\hat U^B_\varphi$ is applied to atom $B$. We choose the following gate \begin{align}
\hat U_\varphi=\left(
\begin{array}{cc}
e^{i\varphi}&0\\
0&-e^{-i\varphi}
\end{array}
\right),
\label{gate} \end{align} which can be implemented by driving the atomic transition with a resonant classical electromagnetic field and properly controlling the coupling and duration of the interaction \cite{Nielsen,Meschede2006,Raimond}. After the application of $\hat U_\varphi^B$ and before entering the resonator we get the following atomic probability amplitudes \begin{align}
c_0&=\bra{0,0}\hat U^B_\varphi\ket{\Psi^{\rm at}_0}=-e^{-i\varphi}/(1+|z|^2)
\nonumber\\
c_1&=\bra{1,1}\hat U^B_\varphi\ket{\Psi^{\rm at}_0}=z^2e^{i(\varphi-2\phi)}/(1+|z|^2)
\nonumber\\
c_-&=\bra{\Psi^-}\hat U^B_\varphi\ket{\Psi^{\rm at}_0}=\sqrt2 z e^{i\phi} \cos \varphi/(1+|z|^2).
\label{} \end{align} The probability amplitude $c_+$ does not need to be specified, as the resulting quantum operation projects the atoms onto a subspace orthogonal to $\ket{\Psi^+}$ as can be noted from Eq. \eqref{atpost}. With these initial conditions, both atoms interact with the electromagnetic field inside a cavity prepared in a coherent state $\ket\alpha$. After the interaction a projection $P_{\ket\alpha}$ of the field onto the initial coherent state $\ket\alpha$ is performed and the atoms are left in the state \begin{align*}
&\frac{\sqrt2ze^{i\phi}\cos\varphi}{(1+|z|^2)Q_1}\ket{\Psi^-}
-\frac{ e^{-i\varphi}+z^2 e^{i\varphi}}{2(1+|z|^2)Q_1}
\left(\ket{0,0}-e^{i2\phi}\ket{1,1}\right). \end{align*} The success probability of this projection is \begin{equation}
Q_1^2=\frac{1+|z|^4+4|z|^2\cos^2\varphi+(z^2e^{i2\varphi}+{\rm c.c.})}{2(1+|z|^2)^2}.
\label{Q1} \end{equation} Afterwards, a projection $P_{\ket{0}}$ onto the ground state of atom $B$ is implemented, leaving atom $A$ in the state \begin{align}
&-\frac{ze^{i\phi}\cos\varphi}{(1+|z|^2)Q_1Q_2}\ket{1}
-\frac{ e^{-i\varphi}+z^2 e^{i\varphi}}{2(1+|z|^2)Q_1Q_2}
\ket{0}.
\label{} \end{align} This event occurs with success probability
$Q_2^2=1/2$. The overall success probability of the postselections is then given by \begin{equation}
P_{\rm s}=Q_2^2Q_1^2=Q_1^2/2\ge \frac{\cos^2\varphi}{4}.
\label{psuccess} \end{equation} The last inequality follows from analyzing Eq. \eqref{Q1} and noting that $Q_1$
attains its minimum value when $|z|^2=1$ and ${\rm Re} [z^2e^{i2\varphi}]=-1$. Up to normalization the final state is given by \begin{align}
\ket{0}+\frac{2 z\cos\varphi}{e^{-i\varphi}+z^2 e^{i\varphi}}e^{i\phi}\ket{1}.
\label{} \end{align} By iterating this procedure we attain a scheme implementing the following quantum map for the $(n+1)$th step \begin{align}
\frac{\ket{0}+f^{ n}_\varphi(z)e^{i\phi}\ket{1}}{\sqrt{1+|f_\varphi^{n}(z)|^2}}
\rightarrow
\frac{\ket{0}+f^{ n+1}_\varphi(z)e^{i\phi}\ket{1}}{\sqrt{1+|f_\varphi^{n+1}(z)|^2}}
\label{nmap} \end{align} with the complex functions \begin{align} &f_\varphi(z)=\frac{ 2z\cos\varphi}{e^{-i\varphi}+z^2 e^{i\varphi}}, \nonumber\\ &f^{n+1}_\varphi(z)=f_\varphi(f_\varphi^n(z)),\quad f^0_\varphi(z)=z. \label{map} \end{align} The map is independent of the parameter $\phi$, as one can note that the phase factor $e^{i\phi}$ appears in the probability amplitude of state $\ket{1}$ in the same manner as in the initial state $\ket{\psi_0}$ of Eq. \eqref{atom-state}.
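The iterated map of Eq. \eqref{map} and the success-probability bound of Eq. \eqref{psuccess} are straightforward to explore numerically. The following sketch is our own illustration (the variable names are not from the paper); it iterates $f_\varphi$ and evaluates $P_{\rm s}=Q_1^2/2$ with $Q_1^2$ taken from Eq. \eqref{Q1}.

```python
# Iterate f_phi(z) = 2 z cos(phi) / (e^{-i phi} + z^2 e^{i phi}) and evaluate
# the postselection success probability P_s = Q_1^2 / 2 of Eq. (Q1).
import cmath, math

def f(z, phi):
    """One step of the nonlinear map."""
    return 2 * z * math.cos(phi) / (cmath.exp(-1j * phi) + z ** 2 * cmath.exp(1j * phi))

def iterate(z, phi, n):
    for _ in range(n):
        z = f(z, phi)
    return z

def success_probability(z, phi):
    """P_s = Q_1^2 / 2 with Q_1^2 from Eq. (Q1)."""
    r2 = abs(z) ** 2
    q1sq = (1 + r2 ** 2 + 4 * r2 * math.cos(phi) ** 2
            + 2 * (z ** 2 * cmath.exp(2j * phi)).real) / (2 * (1 + r2) ** 2)
    return q1sq / 2

# For phi = 0 the map reduces to f(z) = 2z/(1+z^2), whose superattracting
# fixed points z = +1 and z = -1 correspond to orthogonal atomic states;
# small real amplitudes converge to +1 or -1 according to their sign.
```

The bound $P_{\rm s}\ge\cos^2\varphi/4$ is saturated, e.g., at $z=i$, $\varphi=0$, in agreement with the minimization condition stated below Eq. \eqref{psuccess}.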
We note that the iteration of the map involves repeated action of the protocol on an ensemble of atoms. The protocol acts on a pair of identically prepared atoms from the ensemble and prepares one atom probabilistically. The other atom becomes useless from the point of view of the protocol, as a result of the projective measurement on it. After acting on all the atoms of the ensemble, one arrives at an ensemble less than half the original size. Rapid downscaling of the ensemble size is an unavoidable condition for any quantum dynamics truly sensitive to initial conditions \cite{Gilyen2016}. In practice, realizing many steps of the protocol would require an exponentially large initial ensemble, which is not realistic. Another practical aspect is that employing more than one cavity would be challenging with today's experimental possibilities. On the other hand, as we will demonstrate in the next sections, already a few steps can be enough to make highly overlapping initial quantum states almost orthogonal. Furthermore, we will outline an experimental proposal in Sec. \ref{Section-experimental} with currently available technology by applying an optical conveyor belt and a single cavity.
\section{Basic properties of the nonlinear map} \label{analysis}
The dynamics within the approximations we have made is fully described by the iterative complex function in Eq. (\ref{map}). This is a quadratic rational map \cite{MilnorGD}, similar to the maps occurring in the measurement-induced nonlinear quantum dynamical schemes first described in \cite{Gisin,Kiss2006,Gilyen2016}. In the following, we first carry out an analysis of the general properties of the iterated map $f_{\varphi}$ of Eq. (\ref{map}) by using concepts from the theory of complex dynamical maps \cite{Milnorbook}. Then we compare its behavior to the numerical solution of the complete iterated dynamics, based on the Hamiltonian of Eq. (\ref{Hamilton}) and the subsequent selective measurements.
\subsection{Stable cycles} \begin{figure}
\caption{Stability of the fixed cycles of $f_{\varphi}$ as a function of the parameter $\varphi$. Blue corresponds to the one-cycles $z^{(1)}=\pm 1$, red corresponds to $z^{(1)}=0$. Dots, lines, and circles represent superattractive, attractive, and neutral cycles, respectively. Numerical investigation of the enlarged regions between the neutral one-cycles shows two different attractive 4-cycles (orange lines) and a single attractive 6-cycle (green lines) close to the two ends of the region. The central part of the enlarged regions contains ``islands'' of attractive $n\geq 60$ cycles. The dotted circles indicate that it is hard to identify the border of different regions.}
\label{Fig2}
\end{figure}
The periodic orbits or fixed cycles of the map $f_{\varphi}$ can be determined from the relation $f^{n}_{\varphi}(z)=z$. The one-cycles or fixed points as well as the 2-cycles can be determined analytically. For $n=1$ we find \begin{equation} z^{(1)}_{j}=j, \quad j\in\{-1,0,1\}. \end{equation} For $n=2$, in addition to the above one-cycles, one can find two more points which are transformed into each other by $f_{\varphi}$. These form the single nontrivial two-cycle \begin{equation} z^{(2)}_{k}=(-1)^{k}i\sqrt{1+2e^{-2i\varphi}}, \quad k\in\{1,2\}. \label{2cyc} \end{equation}
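The fixed-point and two-cycle formulas above can be checked numerically (a sketch; the value of $\varphi$ is an arbitrary test choice):

```python
import numpy as np

def f(z, phi):
    # the map f_phi of Eq. (map)
    return 2 * z * np.cos(phi) / (np.exp(-1j * phi) + z**2 * np.exp(1j * phi))

phi = 0.7  # arbitrary test value

# the one-cycles z = -1, 0, 1 are fixed for every phi
for z1 in (-1.0, 0.0, 1.0):
    assert abs(f(z1, phi) - z1) < 1e-12

# the two points z = +/- i sqrt(1 + 2 exp(-2 i phi)) of Eq. (2cyc)
# are swapped by the map, f(z) = -z, hence form a two-cycle
z2 = 1j * np.sqrt(1 + 2 * np.exp(-2j * phi))
assert abs(f(z2, phi) + z2) < 1e-10
assert abs(f(f(z2, phi), phi) - z2) < 1e-10
```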
The stability of the fixed cycles can be checked by calculating the multiplier $\lambda=\left(f^{n}_\varphi\right)'(z_{j})= f'_{\varphi}(z_{1})f'_{\varphi}(z_{2})\cdots f'_{\varphi}(z_{n})$. A fixed cycle is repelling, neutral, attractive, or superattractive if $\left| \lambda \right|>1$, $\left| \lambda \right|=1$, $\left| \lambda \right|<1$, or $\left| \lambda \right|=0$, respectively. Such an analysis can be carried out analytically for the one- and two-cycles; however, for $n\geq 3$ it is a nontrivial task. The analysis of the multipliers shows that for each of the one-cycles there are certain parameter regions where they are attractive. On the other hand, the two-cycle given by Eq. (\ref{2cyc}) is repelling for any value of $\varphi$.
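The multiplier test can be sketched in code (the finite-difference step and the test values of $\varphi$ are our own choices):

```python
import numpy as np

def f(z, phi):
    return 2 * z * np.cos(phi) / (np.exp(-1j * phi) + z**2 * np.exp(1j * phi))

def df(z, phi, h=1e-6):
    # central finite difference for the complex derivative f'_phi(z)
    return (f(z + h, phi) - f(z - h, phi)) / (2 * h)

def multiplier(cycle, phi):
    # lambda = f'(z_1) f'(z_2) ... f'(z_n) along a fixed cycle
    lam = 1.0 + 0j
    for z in cycle:
        lam *= df(z, phi)
    return lam

# phi = 0: z = 1 is superattractive (lambda ~ 0), z = 0 is repelling
print(abs(multiplier([1.0], 0.0)))
print(abs(multiplier([0.0], 0.0)))

# the two-cycle of Eq. (2cyc) is repelling; here phi = 0.7 as a sample
z2 = 1j * np.sqrt(1 + 2 * np.exp(-2j * 0.7))
assert abs(multiplier([z2, f(z2, 0.7)], 0.7)) > 1
```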
For the determination of the longer ($n\geq 3$) attractive cycles we can use the method based on the iteration of the critical points of the map. The critical points of $f_{\varphi}$ are those which solve the equation $f'_{\varphi}(z)=0$. In this case, there are two critical points: \begin{equation} z_{c\pm}=\pm e^{-i\varphi}. \label{crit} \end{equation} A general theorem on iterated rational maps states that a rational map of degree $d$ can have at most $2d-2$ attractive cycles. Following the orbits of the critical points one can find all stable cycles of the iterated map (in this case at most $2$).
Fig.~\ref{Fig2} shows where, according to the analytical calculations, the one-cycles are superattractive (dots), attractive (lines), and neutral (circles) as a function of the parameter $\varphi$. The numerical iteration of the critical points in the regions between the neutral one-cycles shows that there are two different attractive 4-cycles (orange lines) and a single 6-cycle (green lines) close to the two ends of the regions. The actual $z$ values belonging to the attractive 4- and 6-cycles depend on the parameter $\varphi$. In between these regions, it is numerically hard to rule out the existence of very long stable periodic orbits. The precision of our numerical simulation made it possible to identify a few ``islands'' of attractive fixed cycles of $n\geq 60$. The remaining part of this region may belong to maps without any stable periodic orbit, which means that all initial states belong to the Julia set. The dotted circles indicate that the border between different regions is hard to determine numerically, which is an indication of the fractal nature of the regions. Let us note that for $\varphi=\pi/2$ and $3\pi/2$ the map is actually not a genuine complex map since $f_{\varphi}\equiv 0$ in these cases.
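A minimal sketch of the critical-orbit search described above (the burn-in length, tolerance, and maximal detected period are our own numerical choices):

```python
import numpy as np

def f(z, phi):
    return 2 * z * np.cos(phi) / (np.exp(-1j * phi) + z**2 * np.exp(1j * phi))

def attractive_cycle_length(phi, burn_in=2000, max_period=64, tol=1e-8):
    # Follow the orbit of the critical point z_c = exp(-i phi); if it
    # settles onto an attractive cycle, return the cycle length.
    z = np.exp(-1j * phi)
    for _ in range(burn_in):
        z = f(z, phi)
    w = z
    for n in range(1, max_period + 1):
        w = f(w, phi)
        if abs(w - z) < tol:
            return n
    return None  # no short attractive cycle detected

# At phi = 0 the critical orbit lands on the superattractive fixed
# point z = 1, i.e. a cycle of length 1.
print(attractive_cycle_length(0.0))  # → 1
```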
\subsection{Nature of the iterated map}
The fractal nature of the map is more apparent when one determines the Julia set of $f_{\varphi}$, i.e., the set of points which do not converge to an attractive cycle for a given $\varphi$. One way of numerically finding points of the Julia set is to iterate the map backwards, starting from a point belonging to a repelling cycle of the map. We show in Fig.~\ref{Fig3} the Julia set of $f_{\varphi}$ for $\varphi=1.666\pi$. In this case, the Julia set is totally disconnected; all other initial points converge to the single attractive cycle $z=0$, or physically speaking to the state $\ket{0}$. The analysis of the orbits of the critical points reveals important properties of the Julia set. In this case both critical points converge to the same attractive fixed point, and consequently the Julia set is totally disconnected, similarly to the well-known Cantor set \cite{MilnorGD}. \begin{figure}
\caption{ The Julia set of the map $f_{\varphi}$ for $\varphi=1.666\pi$. }
\label{Fig3}
\end{figure} Another important case is when the two critical points converge to two distinct fixed points, then the Julia set is connected. This case is illustrated by the map at parameter value $\varphi=0.95\pi/4$ shown in Fig.~\ref{cplane1}. For quadratic rational maps a general theorem ensures that the Julia set is either totally disconnected, or connected \cite{Milnorbook}.
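The backward iteration can be sketched as follows: a preimage $w$ of $z$ solves the quadratic $ze^{i\varphi}w^2-2\cos\varphi\, w+ze^{-i\varphi}=0$, and picking one of the two roots at random samples the Julia set (the random seed and number of samples below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
phi = 1.666 * np.pi  # the parameter of Fig. 3

def f(z):
    return 2 * z * np.cos(phi) / (np.exp(-1j * phi) + z**2 * np.exp(1j * phi))

def preimage(z):
    # a randomly chosen root w of z e^{i phi} w^2 - 2 cos(phi) w + z e^{-i phi} = 0,
    # i.e. a solution of f(w) = z
    s = np.sqrt(np.cos(phi)**2 - z**2 + 0j)
    return (np.cos(phi) + rng.choice([-1, 1]) * s) / (z * np.exp(1j * phi))

# Backward-iterate from the repelling fixed point z = 1; the visited
# points accumulate on the Julia set.
z = 1.0 + 0j
julia = []
for _ in range(500):
    z = preimage(z)
    julia.append(z)
```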
\begin{figure}
\caption{ Complex plane after $97$ iterations of the map in Eq. \eqref{nmap} for $\varphi=0.95\pi/4$. Two amplification levels are shown, confirming the fractal structure of the Julia set separating the regions whose points converge to the attractive fixed points $1$ (grey) and $-1$ (black). The region indicated by the square in the middle of the left figure is magnified in the right figure. }
\label{cplane1}
\end{figure}
\subsection{Iteration of the complete dynamics}
In order to investigate the real performance of the two-atom Tavis-Cummings model without the approximations of Sec. \ref{Model}, we compute a numerically exact version of the operator $\hat M$ in Eq. \eqref{Moperator}. The matrix elements are evaluated as \begin{align}
M_{j,k}=\bra\alpha\bra{e_j}e^{-i \hat H t/\hbar} \ket{e_k}\ket\alpha
\label{qmap} \end{align} where we considered the atomic basis $\ket{e_j}\in\{\ket{1,1},\ket{1,0},\ket{0,1},\ket{0,0}\}$. The interaction time $t$ and coupling strength $g$ satisfy the relation $gt=\pi\sqrt{\bar n}/2$. Each iteration of the map is then evaluated by renormalizing the outcome $\bra{0}_B\hat M\hat U_B\ket{\Psi_0^{\rm at}}$ for qubit $A$. In Fig.~\ref{cplane2} we plot the real part of the $97$th iteration for two different values of the mean photon number $\bar n$, namely $100$ and $10$. To a precision of two (one) decimal places, the two fixed points converge to $+1$ and $-1$ in the case of $\bar n=100$ ($\bar n=10$). Both figures reveal a fractal structure which resembles the ideal case more closely for larger values of $\bar n$.
\begin{figure}
\caption{
The same as the left part of Fig. \ref{cplane1} for the numerically exact quantum map and two values of the mean photon number $\bar n$: $100$ (left) and $10$ (right). }
\label{cplane2}
\end{figure}
\section{Application of the protocol for state discrimination}
The number of atoms needed by a protocol based on a nonlinear transformation grows exponentially with the number of iterations even in an ideal case, which follows from the quantum magnification bound \cite{Gilyen2016}. In a realistic experiment, one can expect that only a few steps of the iteration can be carried out. On the other hand, a useful aspect of nonlinear quantum state transformations is that small initial differences between two similar quantum states can be amplified, enabling one to distinguish them, realizing a Schr\"odinger microscope \cite{Lloyd2000}. Nonlinear quantum state transformations in an ideal case saturate the quantum magnification bound \cite{Gilyen2016}, thereby providing an optimal quantum state discrimination protocol, according to Helstrom \cite{Helstrom}. Here we show that our protocol provides a practical state discrimination procedure, transforming initially very close states into almost perfectly orthogonal ones in as few as three steps.
In the simple case when $\varphi=0$, the nonlinear map reads $f_{\varphi=0}=2z/(z^{2}+1)$ and the unitary of Eq. (\ref{gate}) is the well-known Z gate. In Fig.~\ref{numit} we show the plane of initial states colored according to the number of iterations needed to reach either of the fixed points $1$ or $-1$ with a precision of $0.1$. Complex numbers with a positive (negative) real part converge to the
fixed point $1$ ($-1$). The two regions are separated by the Julia set of the map, which is indicated by the yellow region on the figure, coinciding with the imaginary axis. If we choose two initial quantum states close to each other in the form of Eq. (\ref{atom-state}) with $z_{1}=-0.2$, and $z_{2}=0.2$ (with an overlap close to unity $|\braket{\psi^{1}_{0}}{\psi^{2}_{0}}|\sim 0.92$), then the two states will become almost orthogonal (with a scalar product of $\sim 0.08$) after three steps of the iteration. \begin{figure}
\caption{ The plane of initial states colored according to the number of iterations needed for a complex number $z$ to reach either of the fixed points $1$ or $-1$ with a precision of $0.1$.}
\label{numit}
\end{figure}
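The overlap figures quoted above can be reproduced with the ideal map at $\varphi=0$ (a sketch; the states are taken as $\ket{0}+z\ket{1}$ up to normalization):

```python
import numpy as np

def f(z):
    # the ideal map at phi = 0: f(z) = 2z / (z^2 + 1)
    return 2 * z / (z**2 + 1)

def overlap(z1, z2):
    # |<psi_1|psi_2>| for normalized |psi_i> ~ |0> + z_i |1>
    return abs(1 + np.conj(z1) * z2) / np.sqrt((1 + abs(z1)**2) * (1 + abs(z2)**2))

z1, z2 = -0.2, 0.2
print(round(overlap(z1, z2), 2))  # → 0.92
for _ in range(3):
    z1, z2 = f(z1), f(z2)
print(round(overlap(z1, z2), 2))  # → 0.08
```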
The overlap of the above-mentioned two initial states converges rapidly to zero, as we show in Fig.~\ref{scalprod}. To account for possible imperfections in the preparation of the initial states we assumed a Gaussian uncertainty with a standard deviation of $\sigma=0.03$ in both the real and imaginary parts of the initial values $z_{1}=-0.2$ and $z_{2}=0.2$. We note that this value of $\sigma$ ensures that we sample from a distribution of quantum states which have either positive or negative real part of the amplitude of state $\ket{1}$. Fig.~\ref{scalprod_a} shows that due to the nonlinear transformation the resulting uncertainty (represented by the error bars) in the value of the scalar product grows in the first and second step, but then decreases, and eventually becomes much smaller than its initial value (the error bars cannot be seen at the resolution of the figure for $n\geq 4$). Thus our procedure effectively discriminates between two different phases of small excitation amplitudes of the atoms. The evolution of the overlap of the above two initial states is not modified significantly when using the complete solution for the map, as can be seen in Fig.~\ref{scalprod_b}. Mean photon numbers of $\bar n = 10$ and $\bar n =100$ lead to behavior essentially similar to that of the idealized map (\ref{map}). Interestingly, the low-photon-number case leads to a faster decrease in the overlap during the first few steps of the iteration, but then converges to a larger value compared to the ideal map. \begin{figure}
\caption{(color online) (a) The overlap of the states
$\left|\psi^{1}_{0}\right>=0.98\left(\left|0\right>-0.2\left|1\right>\right)$ and
$\left|\psi^{2}_{0}\right>=0.98\left(\left|0\right>+0.2\left|1\right>\right)$ after $n$ iterations of the ideal map, when there is an uncertainty described by a Gaussian distribution of standard deviation $\sigma=0.03$ around both the real and imaginary parts of the initial values $z_{1}=-0.2$ and $z_{2}=0.2$. The error bars represent the root-mean-square deviation from the mean (black squares) of the possible values of the scalar product. (b) The overlap of the states after $n$ iterations of the ideal map (blue squares), and the complete map with mean photon number 10 (red circles) and 100 (black crosses).}
\label{scalprod_a}
\label{scalprod_b}
\label{scalprod}
\end{figure}
\section{Experimental considerations} \label{Section-experimental}
Our basic protocol involves atomic and photonic postselection and therefore there is always a finite probability of failure. This means that in order to implement several iterations of the map, one requires several copies of the initial qubit pair. The procedure explained in Sec. \ref{onestep} has to be applied to every single copy of the ensemble. The number $N$ of qubit pairs required to achieve $n$ iterations can be bounded from below by taking into account the success probability $P_{\rm s}$ in \eqref{psuccess}. In addition, one has to take into account that half of the atoms in the ensemble are lost after being measured. Therefore, the number of pairs scales exponentially as $N=(2/P_{\rm s})^n=(8/\cos^2\varphi)^n$.
At first thought one might naively consider the use of $N$ optical cavities for $N$ atomic pairs. However, there is another simpler solution motivated by current experimental implementations \cite{Khudaverdyan,Brakhane}, where a standing-wave dipole trap or ``optical conveyor belt'' is used to coherently transport neutral atoms into an optical resonator.
Using this setting, two conveyor belts are required to transport atoms into the cavity. In an initial stage, $N$ atoms are prepared in the minima of the two optical traps and are aligned as depicted in Fig. \ref{schemebelt}. For convenience, we number the atoms from left to right. The unitary gate $\hat U_\varphi$ is applied at this preparatory stage to atoms labeled with an even (odd) number in the upper (lower) conveyor belt; we call these the marked atoms. The two conveyor belts are moved forward into the direction of the cavity until the first pair reaches the other side of the cavity. Then, the conveyor belts stop in order to allow the measurement of the first marked atom and the field inside the cavity. Afterwards, the cavity is reset to the state $\ket\alpha$ and the conveyor belts move again, repeating the process. After all atoms have interacted with the cavity, the marked atoms are blacklisted as they are no longer useful. They are depicted in gray in Fig. \ref{schemebelt}. In order to pair only the useful atoms, the lower conveyor belt is shifted one period to the left, leaving the first marked atom without a partner. In this way, the potentially successfully prepared atoms are aligned. The process is repeated with both conveyor belts moving to the opposite side to start the second iteration. In the aforementioned implementation of the second iteration we have ignored the possibility of failure in the postselection. In order to overcome this problem, one has to keep track of successfully prepared atoms and then shift the conveyor belts in order to align useful pairs before transporting them into the cavity.
Finally, it is worth noting that for the sake of simplicity we have only considered the dynamics generated by the Hamiltonian $\hat H$ [see Eq. \eqref{Hamilton}] in the interaction picture with respect to the reference Hamiltonian $\hat H_0$ [see Eq. \eqref{H0}]. In the Schr\"odinger picture, or lab reference frame, the only differences are due to the free time evolution resulting from the Hamiltonian of Eq. (\ref{H0}). They lead to a relative phase in the atomic state and they have to be taken into account in the field measurements (compare with Eq. \eqref{18}, for example). The one-atom state after a step of the protocol and up to normalization can be written as $\ket{0}+f_\varphi(z)e^{i\phi-i\omega(t+t_1+t_2)}\ket{1}$. Here we have considered a free evolution with time $t_1$ ($t_2$) before (after) the interaction which takes place for a time $t$. To prove this it suffices to note that $\hat H$ commutes with $\hat H_0$ and therefore one can split the evolution operator in the Schr\"odinger picture as \begin{align}
\hat{\mathcal U}&=
e^{-i \hat H_0 t_2/\hbar}
e^{-i (\hat H+\hat H_0) t/\hbar}
e^{-i \hat H_0 t_1/\hbar}\nonumber\\
&=
e^{-i \hat H_0(t+t_1+t_2)/\hbar}
e^{-i \hat H t/\hbar}.
\label{} \end{align} After the evolution, field and atom $B$ are projected into pure states yielding for atom $A$ an evolution operator $\exp{[-i\omega\ketbra{1}{1}_A(t+t_1+t_2)]}$ which generates the mentioned phase. In order to keep the same form of the map in Eq. \eqref{map}, one could adjust the times in such a way that $t+t_1+t_2=2\pi/\omega$. Alternatively, one could eliminate this phase by driving the atoms with a classical electromagnetic field in a similar way as we proposed to implement the gate $\hat U_\varphi$ in Eq. \eqref{gate}.
\begin{figure}
\caption{ Possible implementation of the protocol using neutral atoms coherently transported using optical conveyor belts. }
\label{schemebelt}
\end{figure}
\section{Conclusion}
We have proposed a nonlinear map of qubit states in a cavity quantum electrodynamical scenario where the qubits are encoded in two-level atoms. The core step requires the interaction of two equally prepared atoms with the field inside an optical resonator according to the Tavis-Cummings model. By subsequent field detection and selective measurement of one of the atoms, the unmeasured atom is postselected into a state nonlinearly depending on its initial state.
From a mathematical point of view, we have studied the complex function describing this mapping of pure qubit states, where we have exploited the fact that any pure state of a qubit can be described by a complex parameter. We have performed an analysis of stable cycles under the iteration of the function and studied the behavior in the complex plane. In particular, we have numerically investigated the Julia set which changes from connected to disconnected for different parameters of the system. Thus, our study offers a demonstration of chaotic behavior in a quantum mechanical setting involving sequences of unitary transformations and postselective measurements. From a physical perspective, we have proposed the realization of this scheme using an ensemble of equally prepared atoms in two optical conveyor belts that are coherently transported and interact in pairs with a single optical resonator. We have estimated the number of atoms required for each iteration of the protocol taking into account the success probability of the measurements involved. Although possible realizations of this nonlinear qubit map require cutting-edge quantum technological developments, such as optical conveyor belts and controlled two-qubit interactions with a single-mode radiation field, in view of the rapid experimental advances in cavity quantum electrodynamics its realization is within reach of present-day technology.
The presented scheme provides an alternative approach to already established quantum state discrimination protocols \cite{Paris}. We suggested an effective implementation of the Schr\"odinger microscope in which two initially close pure quantum states can be discriminated by amplifying the distance between them and thus effectively orthogonalizing them. We have shown that initial states of the two-level atoms with high overlap will become almost perfectly orthogonal after a few iterations of the scheme. Let us note that the orthogonalization procedure has a slightly different flavor than previous quantum state discrimination procedures. First, it is deterministic in the sense that, although the whole process succeeds only with some probability, upon success the resulting quantum state is fully determined by the initial state. Second, it does not directly measure the orthogonalized systems, but rather prepares them in a non-demolition sense and therefore these systems can be used for further processing. Third, it is a purification process as well, which naturally accounts for initial noise, and effectively discriminates mixed nonorthogonal quantum states. Measurement-induced nonlinear evolution in quantum mechanics is a concept which could be used by other physical realizations of qubits to implement a Schr\"odinger microscope.
\begin{acknowledgments} This work was supported by the Hungarian Academy of Sciences (Lend\"ulet Program, LP2011-016) and the National Research, Development and Innovation Office (K115624, NN109651, PD120975) and by the Deutscher Akademischer Austauschdienst (M\"OB-DAAD project no. 65049). O. K. acknowledges support from the J\' anos Bolyai Research Scholarship of the Hungarian Academy of Sciences.
\end{acknowledgments}
\end{document}
\begin{document}
\title[]{On trivialities of Chern classes} \author{Aniruddha C. Naolekar} \address{Indian Statistical Institute, 8th Mile, Mysore Road, RVCE Post, Bangalore 560059, INDIA.} \email{ani@isibang.ac.in}
\author{ Ajay Singh Thakur} \address{Department of Mathematics, University of Haifa, Mount Carmel, Haifa 31905, ISRAEL.} \email{thakur@math.haifa.ac.il} \keywords{$C$-trivial, $W$-trivial, Chern class, Stiefel-Whitney class, stunted projective space.}
\begin{abstract} A finite $CW$-complex $X$ is $C$-trivial if for every complex vector bundle $\xi$ over $X$, the total Chern class $c(\xi)=1$. In this note we completely determine when each of the following spaces are $C$-trivial: suspensions of stunted real projective spaces, suspensions of stunted complex projective spaces and suspensions of stunted quaternionic projective spaces. \end{abstract}
\subjclass[2010] {57R20 (55R50, 57R22).}
\date{} \maketitle
\section{Introduction}
A $CW$-complex $X$ is said to be $C$-trivial if for any complex vector bundle $\xi$ over $X$, the total Chern class $c(\xi)=1$.
A related notion is that of $W$-triviality. A $CW$-complex $X$ is said to be $W$-trivial if for any real vector bundle $\eta$ over $X$, the total Stiefel-Whitney class $w(\eta)=1$. A central result in this direction is a theorem of Atiyah-Hirzebruch (\cite{atiyahirz}, Theorem\,2) which states that for any finite $CW$-complex $X$, the $9$-fold suspension $\Sigma^9X$ of $X$ is always $W$-trivial. In particular, the spheres $S^k$ are all $W$-trivial for $k\geq 9$. In fact, a sphere $S^k$ is $W$-trivial if and only if $k\neq 1,2,4,8$ (see \cite{atiyahirz}, Theorem\,1).
Understanding which spaces are $W$-trivial has been of some interest in recent times (see \cite{aniajay}, \cite{tanaka}, \cite{ajay} and the references therein). In \cite{tanaka}, the author has completely determined which suspensions $\Sigma^k \mathbb F\mathbb P^n$ are $W$-trivial. Here $\mathbb F$ denotes either the field $\mathbb R$ of real numbers, the field $\mathbb C$ of complex numbers or the skew-field $\mathbb H$ of quaternions and $\mathbb F\mathbb P^n$ denotes the appropriate projective space. In \cite{ajay}, the second named author has determined, in most cases, which suspensions of the Dold manifolds are $W$-trivial. In \cite{aniajay}, the authors have completely determined which suspensions of the stunted real projective spaces are $W$-trivial.
In this note we study the notion of $C$-triviality and determine whether some of the familiar spaces and their suspensions are $C$-trivial. To begin with, it is well known that there is no analogue of the Atiyah-Hirzebruch theorem for Chern classes. Indeed, by the Bott integrality theorem (see Theorem\,\ref{bott} below for the precise statement) the even-dimensional spheres are not $C$-trivial. Thus if $d>4$, then $S^{2d}$ is $W$-trivial but not $C$-trivial. The circle $S^1$ is $C$-trivial but not $W$-trivial. We give other examples in the sequel.
There are, however, sufficient conditions under which one notion implies the other; we point these out below.
In this note we completely determine when the suspension of a stunted real projective space is $C$-trivial (see Theorem\,\ref{secondtheorem} and Theorem\,\ref{thirdtheorem} below). We also completely determine which suspensions of the stunted complex and quaternionic projective spaces are $C$-trivial (see Corollary\,\ref{complex} below).
We now state the main results of this paper.
The following theorem completely describes which suspensions $\Sigma^k\mathbb R\mathbb P^n$ of the real projective spaces are $C$-trivial. Since $S^1 = \mathbb R \mathbb P^1$ is $C$-trivial and $\mathbb R \mathbb P^n$ is not $C$-trivial for $n >1$, we shall assume $k>0$.
\begin{Thm} \label{secondtheorem} Let $X^k_n=\Sigma^k \mathbb R\mathbb P^n$ with $k,n>0$. Then $X^k_n$ is not $C$-trivial if and only if one of the following conditions is satisfied. \begin{enumerate} \item $k,n$ are both odd.
\item $k=2,4$ and $n\geq k$.
\end{enumerate} \end{Thm}
Next we look at the suspensions of the stunted real projective spaces. To state the results we introduce the following notations. Let $X_{m,n}$ denote the stunted real projective space $$X_{m,n}=\mathbb R\mathbb P^m/\mathbb R\mathbb P^n$$ and $X^k_{m,n}$ the $k$-fold suspension $$X^k_{m,n}=\Sigma^k\left(\mathbb R\mathbb P^m/\mathbb R\mathbb P^n\right).$$
\begin{Thm}\label{thirdtheorem} Let $X^k_{m,n}$ be as above with $k\geq 0$ and $0<n<m$. \begin{enumerate} \item If $k$ is odd and $m$ is even, then $X^k_{m,n}$ is $C$-trivial.
\item If $k,m$ are odd, then $X^k_{m,n}$ is not $C$-trivial. \item If $n=2t$, then $X_{m,n}$ is $C$-trivial if and only if $m<2^{t+1}$. \item If $k,n$ are even and $k\geq 2$, then $X^k_{m,n}$ is $C$-trivial.
\item If $k$ is even and $n$ is odd, then $X^k_{m,n}$ is not $C$-trivial.
\end{enumerate} \end{Thm}
The paper is organized as follows. In Section 2 we prove some general facts about $C$-triviality. In Section 3 we determine which suspensions of stunted complex and quaternionic projective spaces are $C$-trivial and prove the main theorems.
{\em Conventions.} By a space we mean a finite connected $CW$-complex. Given a map $\alpha:X\longrightarrow Y$ between spaces the induced homomorphism in $K$-theory and singular cohomology will again be denoted by $\alpha$.
\section{Generalities}
In this section we prove some general facts about $C$-triviality. We give sufficient conditions under which $W$-triviality implies $C$-triviality and $C$-triviality implies $W$-triviality.
To begin, note that a finite $CW$-complex $X$ is $C$-trivial if the reduced complex $\widetilde{K}$-group $\widetilde{K}(X)=0$.
\begin{Lem} \label{prelim} For any space $X$, we have the following. \begin{enumerate} \item If $H^{2s}(X;\mathbb Z)=0$ for all $s> 0$, then $X$ is $C$-trivial. \item If $X$ has cells only in odd dimensions, then $X$ is $C$-trivial. \item If $X$ is $C$-trivial, then $H^2(X;\mathbb Z)=0$.
\item If $X$ is $W$-trivial and the mod-$2$ reduction homomorphism $\rho_2:H^{2i}(X;\mathbb Z)\longrightarrow H^{2i}(X;\mathbb Z_2)$ is monomorphic for all $i>0$, then $X$ is $C$-trivial.
\end{enumerate} \end{Lem} {\bf Proof.} We omit the easy proofs of (1)-(3). The claim (4) follows from the fact that for a complex bundle $\xi$ we have $\rho_2(c_i(\xi))=w_{2i}(\xi_{\mathbb R})$. Here $\xi_{\mathbb R}$ denotes the underlying real bundle of $\xi$. \qed
Thus the real and complex projective spaces $\mathbb R\mathbb P^n$ ($n>1$) and $\mathbb C\mathbb P^n$ are not $C$-trivial as their second integral cohomology group is non-zero.
The following observations are straightforward and we omit their easy proofs.
\begin{Lem}\label{prelim1} Let $f:X\longrightarrow Y$ be a map between spaces. \begin{enumerate} \item If $f:\widetilde{K}(Y)\longrightarrow \widetilde{K}(X)$ is onto and $Y$ is $C$-trivial, then $X$ is $C$-trivial. \item Suppose $f:H^{2i}(Y;\mathbb Z)\longrightarrow H^{2i}(X;\mathbb Z)$ is a monomorphism for all $i>0$. Then if $X$ is $C$-trivial so is $Y$. \qed \end{enumerate} \end{Lem}
We shall use the above observations in the sequel sometimes without an explicit reference. We have already noted that the even dimensional spheres $S^{2d}$ with $d>4$ are examples of spaces that are $W$-trivial but not $C$-trivial and also that $S^1$ is $C$-trivial but not $W$-trivial.
We give some more examples below.
\begin{Exm} Let $X=\Sigma\mathbb R\mathbb P^2$ be the suspension of the real projective space $\mathbb R\mathbb P^2$. Then $X$ has non-zero integral cohomology only in degree $3$ and hence $X$ must be $C$-trivial. It is known that $X$ is not $W$-trivial (\cite{tanaka}, Theorem 1.4). \end{Exm}
\begin{Exm} Let $X=M(\mathbb Z_3,1)$ be the Moore space of type $(\mathbb Z_3,1)$. This is a $2$-dimensional $CW$-complex. Since the second (integral) cohomology of $X$ is non-zero, $X$ is not $C$-trivial. However, as $H^i(X;\mathbb Z_2)=0$ for $i>0$, $X$ is $W$-trivial. \end{Exm}
We now state a necessary condition for a space to be $C$-trivial.
\begin{Lem} Let $X$ be a $C$-trivial space. Then for any real bundle $\xi$ over $X$ we have $w_i^2(\xi)=0$ for all $i>0$. \end{Lem} {\bf Proof.} Let $\eta$ denote the underlying real bundle of the complexification $\xi\otimes \mathbb C$ of $\xi$. As a real bundle, $\xi\otimes\mathbb C\cong \xi\oplus\xi$, so by the Whitney formula the total Stiefel-Whitney class of $\eta$ is given by $$w(\eta)=w(\xi)^2=1+w_1^2+w_2^2+\cdots$$ where $w_i=w_i(\xi)$ and the cross terms $2w_iw_j$ vanish mod $2$. If $w(\eta)\neq 1$, then clearly $c(\xi\otimes \mathbb C)\neq 1$. This completes the proof. \qed
In particular, if there exists a real bundle $\xi$ over $X$ with $w_i^2(\xi)\neq 0$ for some $i>0$, then $X$ cannot be $C$-trivial. The above lemma, in particular, implies that the quaternionic projective space $\mathbb H\mathbb P^n$ is not $C$-trivial for $n>1$. This is because for the canonical line bundle $\xi$ over $\mathbb H\mathbb P^n$ we have $w_4^2(\xi)\neq 0$. That $\mathbb H\mathbb P^1=S^4$ is not $C$-trivial is clear. The (even dimensional) spheres show that the converse of the above lemma is not true.
For a space $X$, we have the realification homomorphism $r:\widetilde{K}(X)\longrightarrow \widetilde{KO}(X)$. The following lemma gives a sufficient condition for a $C$-trivial space to be $W$-trivial.
\begin{Lem} Suppose that $r:\widetilde{K}(X)\longrightarrow \widetilde{KO}(X)$ is onto. If $X$ is $C$-trivial, then $X$ is $W$-trivial. \end{Lem} {\bf Proof.} The surjectivity of $r$ implies that every real vector bundle $\xi$ over $X$ is stably equivalent to the underlying real bundle $\eta_{\mathbb R}$ of a complex vector bundle $\eta$ over $X$. Now $w(\xi) =w(\eta_{\mathbb R})=1$. This completes the proof. \qed
We mention that the converse to the above lemma is not true. Indeed, if $X=S^6$, then as $\widetilde{KO}(S^6)=0$, $X$ is $W$-trivial but $X$ is not $C$-trivial.
Some of our proofs depend upon the following important observation which gives a sufficient condition for a $W$-trivial space to be $C$-trivial.
\begin{Prop}\label{onetytwo} Let $X$ be a space. Assume that $H_*(X;\mathbb Z)$ is concentrated in odd degrees and is a direct sum of copies of $\mathbb Z_2$. If $X$ is $W$-trivial, then $X$ is $C$-trivial. \end{Prop} {\bf Proof.} Given the assumptions on the integral homology of $X$, it follows from the universal coefficients theorem that the integral cohomology of $X$ is concentrated in even degrees and is a direct sum of copies of $\mathbb Z_2$. Again, by the universal coefficients theorem, it follows that $H^{2i}(X;\mathbb Z_2)$ is a direct sum of copies of $\mathbb Z_2$ with the same number of $\mathbb Z_2$ summands as in the integral cohomology. As $X$ has no integral cohomology in odd degrees, the mod-$2$ reduction map $$\rho_2:H^{2i}(X;\mathbb Z)\longrightarrow H^{2i}(X;\mathbb Z_2)$$ is surjective, and hence an isomorphism as both groups are direct sums of the same number of copies of $\mathbb Z_2$. The proposition now follows from Lemma\,\ref{prelim} (4). This completes the proof. \qed
\begin{Rem} We remark that the converse of the above proposition is not true. Consider the space $X^6_2=\Sigma^6\mathbb R\mathbb P^2$. Then $X^6_2$ satisfies the conditions of the above proposition. By Theorem\,\ref{secondtheorem}, $X^6_2$ is $C$-trivial. However $X^6_2$ is not $W$-trivial, by Theorem\,1.4 (3) of \cite{tanaka}. \end{Rem}
We end this section by noting that the second suspension of a $C$-trivial space is $C$-trivial. The idea of the proof is the same as that of Theorem\,1 in \cite{atiyahirz} and Theorem\,1.1 of \cite{tanaka}.
\begin{Thm}\label{hahaha} Let $X$ be a $C$-trivial space. Then the second suspension $\Sigma^2X$ of $X$ is $C$-trivial. \end{Thm} {\bf Proof.} Let $\pi_1,\pi_2$ denote the projection maps of $S^2\times X$ to the first and second factor respectively and $p:S^2\times X\longrightarrow \Sigma^2X$ the quotient map. Let $\nu$ denote the Hopf bundle over $S^2$. Given a bundle $\xi$ over $\Sigma^2X$ there exists, by Bott periodicity, a bundle $\theta$ of rank $n$ (say) over $X$ such that $p^*\xi$ is stably isomorphic to the tensor product $$(\pi_1^*\nu-1)\otimes (\pi_2^*\theta-n).$$ Thus $$c(p^*\xi)=c(\pi_1^*\nu\otimes\pi_2^*\theta)c(\pi_1^*\nu)^{-n}c(\pi_2^*\theta)^{-1}.$$ Since $X$ is $C$-trivial we have $c(\pi_2^*\theta)=1$. It is clear that $c(\pi_1^*\nu)=1+t\times 1$ where $t\in H^2(S^2;\mathbb Z)$ is a generator. Finally, one checks that $$c(\pi_1^*\nu\otimes\pi_2^*\theta)=(1+t\times 1)^n$$ so that $$c(p^*\xi)=1.$$ But as $p^*:H^*(\Sigma^2X;\mathbb Z)\longrightarrow H^*(S^2\times X;\mathbb Z)$ is a monomorphism it follows that $c(\xi)=1$. This completes the proof. \qed
\section{Proof of Theorems\,\ref{secondtheorem} and \ref{thirdtheorem}}
In this section we prove the main theorems and derive some consequences. We first state the Bott integrality theorem which we shall use in the sequel.
\begin{Thm} {\rm(Bott integrality theorem)}\label{bott} {\rm(\cite{hus}, Chapter 20, Corollary\,$9.8$)} Let $a\in H^{2n}(S^{2n};\mathbb Z)$ be a generator. For any complex vector bundle $\xi$ over $S^{2n}$, the Chern class $c_n(\xi)$ is divisible by $(n-1)!a$. For each $m$ divisible by $(n-1)!$, there exists a unique $\xi\in\widetilde{K}(S^{2n})$ with $c_n(\xi)=ma$. \qed \end{Thm}
The Bott integrality theorem implies that if $\xi$ is a complex vector bundle over the even dimensional sphere $S^{2n}$ with $n \geq 3$, then $c_n(\xi) \in H^{2n}(S^{2n};\mathbb Z)$ is an even multiple of a generator.
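Concretely (this is just the theorem for small $n$): over $S^4$ we have $(n-1)!=1$, so every class $ma\in H^4(S^4;\mathbb Z)$ is realized as $c_2$ of some bundle, while over $S^6$ we have $(n-1)!=2$, so
$$c_3(\xi)\in 2\mathbb Z\cdot a\quad\text{for every complex bundle $\xi$ over } S^6,$$
and the generator $a$ itself is never a top Chern class; there is, however, a bundle with $c_3(\xi)=2a$, which is why $S^6$ is not $C$-trivial.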
The following observation is now immediate from the Bott integrality theorem.
\begin{Prop}\label{firstcor} Let $X$ be a finite $CW$-complex. Assume that there is a map $\alpha:X\longrightarrow S^d$ such that the induced homomorphism $\alpha^*:H^d(S^d;\mathbb Z)\longrightarrow H^d(X;\mathbb Z)$ is injective. Then $\Sigma^kX$ is not $C$-trivial whenever $k+d$ is even. \end{Prop} {\bf Proof.} The map $\Sigma^k\alpha$ induces a monomorphism in integral cohomology in degree $(k+d)$ for all $k$. If $(k+d)$ is even, then as there exists a vector bundle $\xi$ over $S^{k+d}$ with $c(\xi)\neq 1$, we must have $c((\Sigma^k\alpha)^*\xi)\neq 1$. \qed
If $X$ is a connected closed orientable manifold we have a degree one map $f:X\longrightarrow S^{\mathrm{dim}(X)}$. This induces an isomorphism in top integral cohomology. Thus we have the following.
\begin{Cor}\label{secondcor} Suppose $X$ is a connected closed orientable manifold. Then $\Sigma^k X$ is not $C$-trivial whenever $\mathrm{dim}(X)+k$ is even. \qed \end{Cor}
If $k$ is odd then the suspensions $\Sigma^k (\mathbb C\mathbb P^m/\mathbb C\mathbb P^n)$ and $\Sigma^k (\mathbb H\mathbb P^m/\mathbb H\mathbb P^n)$ have cells only in odd dimensions and hence are $C$-trivial. The following observation is now immediate from the above noted facts.
\begin{Cor}\label{complex} Let $\mathbb F= \mathbb C$ or $\mathbb H$. Let $0\leq n<m$. Then $\Sigma^k(\mathbb F\mathbb P^m/\mathbb F\mathbb P^n)$ is $C$-trivial if and only if $k$ is odd. \qed \end{Cor}
\begin{Cor} The product of two connected closed orientable manifolds is not $C$-trivial. \end{Cor} {\bf Proof.} If the product $M\times N$ is even dimensional, the claim follows from Corollary\,\ref{secondcor}. If $M\times N$ is odd dimensional, assume without loss of generality that $M$ is even dimensional. Then $M$ is not $C$-trivial, and since the composition $$M\stackrel{i}\longrightarrow M\times N\stackrel{\pi_1}{\longrightarrow} M$$ where $i(x)=(x,y)$ for a fixed $y\in N$ and $\pi_1$ is the projection to the first factor, is the identity, it follows that $M\times N$ is not $C$-trivial. This completes the proof. \qed
In particular, a product of spheres is not $C$-trivial. We make some more observations before proving the main theorems.
\begin{Lem}\label{howdoyoudo} Suppose $k$ is even. \begin{enumerate} \item If $X^k_n$ is $W$-trivial, then $X^k_n$ is $C$-trivial. \item If $n$ is even and $X^k_{m,n}$ is $W$-trivial, then $X^k_{m,n}$ is $C$-trivial. \end{enumerate} \end{Lem} {\bf Proof.} We prove (1); the proof of (2) is similar. As $k$ is even, the integral cohomology is zero in odd degrees except in degree $(k+n)$ when $n$ is odd, in which case it is infinite cyclic. The integral cohomology in even degrees is cyclic of order two. The mod-$2$ reduction map in even degrees is readily seen to be an isomorphism. By Lemma\,\ref{prelim} (4), the proof is complete. \qed
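For later reference, the cohomology bookkeeping in the above proof is the standard suspension computation: for $k$ even,
$$\widetilde H^{i}(X^k_n;\mathbb Z)\cong\widetilde H^{\,i-k}(\mathbb R\mathbb P^n;\mathbb Z)\cong\begin{cases}\mathbb Z_2 & \text{$i-k$ even and } 0<i-k\leq n,\\ \mathbb Z & i-k=n \text{ and } n \text{ odd},\\ 0 & \text{otherwise},\end{cases}$$
so the only possible odd-degree class is the infinite cyclic group in degree $k+n$ when $n$ is odd.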
\begin{Lem}\label{newlemma} Let $k,m,n$ be even. If $X^k_{m+1,n}$ is not $C$-trivial, then $X^k_{m,n}$ is not $C$-trivial. \end{Lem} {\bf Proof.} The lemma follows from the fact that the obvious map $$j:X_{m,n}\longrightarrow X_{m+1,n}$$ induces an isomorphism in integral cohomology in even degrees. Hence so does the map $\Sigma^kj$. \qed
{\em Proof of Theorem\,\ref{secondtheorem}.} If both $k$ and $n$ are odd, then as $\mathbb R\mathbb P^n$ is orientable, it follows from Corollary\,\ref{secondcor} that $X^k_n$ is not $C$-trivial. Next assume that $k$ is odd and $n$ is even. Then as $H^i(X^k_n;\mathbb Z)=0$ if $i>0$ is even it follows that $X^k_n$ is $C$-trivial in this case. This proves the theorem when $k$ is odd.
Next we assume that $k$ is even. We first show that $X^4_n$ is not $C$-trivial if and only if $n\geq 4$.
We know that $X^4_n$ for $n \leq 3$ is $W$-trivial (\cite{tanaka}, Theorem\,1.4). Hence it follows from Lemma\,\ref{howdoyoudo} that $X^4_n$ is $C$-trivial for $n \leq 3$. So assume that $n\geq 4$. Let $\xi$ be a complex $2$-plane bundle over $S^4$ with $c_2(\xi)$ a generator and let $\eta$ be a complex line bundle over $\mathbb R\mathbb P^n$ with $c_1(\eta)=t\in H^2(\mathbb R\mathbb P^n;\mathbb Z)\cong \mathbb Z_2$ the non-zero element. The cofiber sequence $$S^4\vee \mathbb R\mathbb P^n\stackrel{j}\longrightarrow S^4\times \mathbb R\mathbb P^n\stackrel{\alpha}\longrightarrow\Sigma^4\mathbb R\mathbb P^n$$ gives rise to an exact sequence $$0\rightarrow \widetilde{K}(\Sigma^4\mathbb R\mathbb P^n)\stackrel{\alpha}\longrightarrow\widetilde{K}(S^4\times\mathbb R\mathbb P^n) \stackrel{j}\longrightarrow\widetilde{K}(S^4\vee \mathbb R\mathbb P^n)\rightarrow 0.$$ We compute (see, for example, Lemma\,2.1, \cite{tanaka}) $$\begin{array}{rcl} c((\pi_1^*\xi-2)\otimes (\pi_2^*\eta-1)) & = & 1+ c_2(\xi)\times ((1+t)^{-2}-1)\\ & = & 1+ c_2(\xi)\times (-2t+3t^2-4t^3+\cdots)\\ & \neq & 1 \end{array}$$ as $c_2(\xi)\neq 0$ is a generator and $3t^2\neq 0$. Now as $j((\pi_1^*\xi-2)\otimes (\pi_2^*\eta-1))=0$, there exists a bundle $\theta\in \widetilde{K}(\Sigma^4\mathbb R\mathbb P^n)$ with $$\alpha(\theta)=(\pi_1^*\xi-2)\otimes (\pi_2^*\eta-1).$$ Clearly, $c(\theta)\neq 1$. This completes the proof that $X^4_n$ is not $C$-trivial if and only if $n\geq 4$.
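The power series used above is the standard geometric-series manipulation
$$(1+t)^{-2}=\sum_{k\geq 0}(-1)^k(k+1)\,t^k=1-2t+3t^2-4t^3+\cdots,$$
and since $t$ is $2$-torsion the even coefficients $-2,-4,\dots$ annihilate their terms, so that $(1+t)^{-2}-1$ reduces to $t^2+t^4+\cdots$; the hypothesis $n\geq 4$ enters exactly to guarantee $t^2\neq 0$ in $H^4(\mathbb R\mathbb P^n;\mathbb Z)$.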
We now look at the case $k = 2$. Note that $X^2_1=S^3$ is $C$-trivial. We now check that $X^2_n$ is not $C$-trivial if $n \geq 2$.
Let $\xi$ be a complex bundle over $S^{2}$ with $c(\xi)\neq 1$ and $c_1(\xi)$ a generator. Let $\eta$ denote the non-trivial complex line bundle over $\mathbb R\mathbb P^n$. Let $\pi_1,\pi_2$ be the two projections of $S^{2}\times\mathbb R\mathbb P^n$ onto the first and the second factor respectively. Then, $$\begin{array}{rcl} c((\pi_1^*\xi-1)\otimes (\pi_2^*\eta-1)) & = & 1+ c_1(\xi)\times ((1+t)^{-1}-1)\\ & = & 1+ c_1(\xi)\times (-t+t^2-t^3+\cdots)\\ & \neq & 1 \end{array}$$ as $c_1(\xi)$ is a generator. Here $t\in H^2(\mathbb R\mathbb P^n;\mathbb Z)=\mathbb Z_2$ is the unique non-zero element. Then, arguing as in the above case, it follows that there must exist a bundle $\theta\in \widetilde{K}(\Sigma^2\mathbb R\mathbb P^n)$ with $c(\theta)\neq 1$. Thus $X^2_n$ is not $C$-trivial for $n\geq 2$.
To complete the proof of the theorem we finally show that $X^6_n$ is $C$-trivial for all $n>0$. This will imply, by Theorem\,\ref{hahaha}, that $X^k_n$ is $C$-trivial for all even $k\geq 6$. First note that $X^6_1=S^7$ is $C$-trivial. That $X^6_2$ is $C$-trivial follows from the $C$-triviality of $X^4_2$ and Theorem\,\ref{hahaha}. By Theorem\,1.4 of \cite{tanaka}, $X^6_n$ is $W$-trivial whenever $n>3$.
Hence by Lemma\,\ref{howdoyoudo}, $X^6_n$ is $C$-trivial when $n>3$. Finally, we look at $X^6_3$. The long exact sequence of the pair $(\mathbb R\mathbb P^3,\mathbb R\mathbb P^2)$ shows that the inclusion map $i:\mathbb R\mathbb P^2\longrightarrow \mathbb R\mathbb P^3$ induces an isomorphism $i^*:H^2(\mathbb R\mathbb P^3;\mathbb Z)\longrightarrow H^2(\mathbb R\mathbb P^2;\mathbb Z)$. Hence the map $\Sigma^6i:\Sigma^6\mathbb R\mathbb P^2\longrightarrow \Sigma^6\mathbb R\mathbb P^3$ induces an isomorphism in integral cohomology in degree $8$. Since the only non-zero cohomology in even degree (for both $X^6_2$ and $X^6_3$) is in degree $8$ and $X^6_2$ is $C$-trivial, it follows by Lemma\,\ref{prelim1} (2) that $X^6_3$ is $C$-trivial. This completes the proof that $X^6_n$ is $C$-trivial.
This takes care of all the cases and completes the proof of the theorem. \qed
We now come to the proof of Theorem\,\ref{thirdtheorem}. First note that if $m$ is odd then the stunted real projective space $X_{m,m-2}$ admits a splitting $$X_{m,m-2}=S^m\vee S^{m-1}$$ and if $m$ is even then $$X_{m,m-2}=\Sigma^{m-2}\mathbb R\mathbb P^2.$$
We now prove Theorem\,\ref{thirdtheorem}.
{\em Proof of Theorem\,\ref{thirdtheorem}.} We first prove (1). If $k$ is odd and $m$ is even, then the integral cohomology of $X^k_{m,n}$ is trivial in even degrees and hence $X^k_{m,n}$ is $C$-trivial in this case proving (1).
Next, if $k,m$ are both odd there exists a map $\alpha:X^k_{m,n}\longrightarrow S^{k+m}$ inducing an isomorphism in top integral cohomology. By Proposition\,\ref{firstcor}, $X^k_{m,n}$ is not $C$-trivial. This proves (2).
Next we prove (3). By Theorem\,7.3 of \cite{adams}, the projection $j:\mathbb R\mathbb P^m \rightarrow X_{m,n}$ induces a homomorphism $j^*$ mapping $\widetilde{K}(X_{m,n})$ isomorphically onto the subgroup of $\widetilde{K}(\mathbb R\mathbb P^m)$ generated by the class of $2^t\nu$, where $\nu = \xi \otimes \mathbb C$ is the complexification of the canonical line bundle $\xi$ over $\mathbb R \mathbb P^m$. Let $\alpha \in \widetilde{K}(X_{m,n})$ be the generator such that $j^*(\alpha) = 2^t\nu$. If $z \in H^2(\mathbb R \mathbb P^m;\mathbb Z)$ is the unique non-zero element, then the total Chern class $$c(2^t\nu) = c(\nu)^{2^t} = (1+z)^{2^t} = 1+z^{2^t}.$$ Since $j^*:H^k(X_{m,n};\mathbb Z) \rightarrow H^k(\mathbb R \mathbb P^m;\mathbb Z)$ is injective for $0 \leq k \leq m$, we have $c(\alpha) = 1$ if and only if $m < 2^{t+1}$. As $\alpha$ is a generator, we conclude that $X_{m,n}$ is $C$-trivial if and only if $m< 2^{t+1}$.
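The middle equality in the display uses the parity of binomial coefficients: by Lucas' theorem $\binom{2^t}{k}$ is even for $0<k<2^t$, and $z$ is $2$-torsion, so
$$(1+z)^{2^t}=\sum_{k=0}^{2^t}\binom{2^t}{k}z^k=1+z^{2^t}.$$
Thus $c(\alpha)\neq 1$ exactly when $z^{2^t}\neq 0$, that is, when $2^{t+1}\leq m$.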
We now prove (4).
We shall only prove the $C$-triviality for the case $k=2$. Then in view of Theorem\,\ref{hahaha}, $X^k_{m,n}$ will be $C$-trivial for all even $k\geq 4$. By Theorem\,1.3 of \cite{aniajay}, $X^2_{m,n}$ is $W$-trivial if $m \neq 6,7$. By Lemma\,\ref{howdoyoudo} we see that for $n$ even, $X^2_{m,n}$ is $C$-trivial if $m \neq 6,7$. We are now left to prove the $C$-triviality of $X^2_{6,n}$ and $X^2_{7,n}$ for $n$ even. We prove these as follows.
We first prove that $X^2_{6,n}$ is $C$-trivial for $n$ even. As $X^2_{6,4}=\Sigma^6\mathbb R\mathbb P^2$, it follows from Theorem\,\ref{secondtheorem} that $X^2_{6,4}$ is $C$-trivial. We next look at the case $X^2_{6,2}$ and let $\xi$ be a complex vector bundle over $X^2_{6,2}$. We shall show that $c_3(\xi)=0=c_4(\xi)$. We first claim that $c_3(\xi)=0$. For if $c_3(\xi)\neq 0$, then $w_6(\xi_{\mathbb R})\neq 0$ as the mod-$2$ reduction homomorphism is an isomorphism. Now observe that $w_i(\xi_{\mathbb R})=0$ for $1\leq i\leq 5$. This contradicts the well-known fact that for a real bundle the first non-zero Stiefel-Whitney class appears in degree a power of two. Thus $c_3(\xi)=0$. To see that $c_4(\xi)=0$, consider the exact sequence $$\cdots\rightarrow \widetilde{K}^{-2}(X_{8,6})\stackrel{\alpha}\longrightarrow\widetilde{K}^{-2}(X_{8,2})\stackrel{j}\longrightarrow \widetilde{K}^{-2}(X_{6,2})\longrightarrow\widetilde{K}^{-1}(X_{8,6})\rightarrow \cdots.$$ Using the Atiyah-Hirzebruch spectral sequence in complex $K$-theory it is easy to see that if $m$ is even, then $$\widetilde{K}^{-1}(X_{m,n})=0.$$
Thus the homomorphism $j$ is epimorphic. Hence, $X^2_{6,2}$ is $C$-trivial as $X^2_{8,2}$ is $C$-trivial.
Next we prove that $X^2_{7,n}$ is $C$-trivial for $n$ even. Clearly, $X^2_{7,6}=S^9$ is $C$-trivial. That $X^2_{7,2}$ and $X^2_{7,4}$ are $C$-trivial follows from the $C$-triviality of $X^2_{6,2}$ and $X^2_{6,4}$ and Lemma\,\ref{newlemma}. This completes the proof of (4).
We now prove (5). Here $k$ is even and $n$ is odd. First we assume $m$ is even. We look at the cofiber sequence $$X_{n+1,n}\stackrel{j}\longrightarrow X_{m,n}\stackrel{\alpha}\longrightarrow X_{m,n+1}$$ and the associated exact sequence $$\cdots\rightarrow \widetilde{K}^{-k}(X_{m,n+1})\stackrel{\alpha}\longrightarrow\widetilde{K}^{-k}(X_{m,n})\stackrel{j}\longrightarrow \widetilde{K}^{-k}(X_{n+1,n})\longrightarrow\widetilde{K}^{-k+1}(X_{m,n+1})\rightarrow\cdots.$$
As noted above, since the last group in the above exact sequence is zero the homomorphism $j$ is epimorphic. Since $\Sigma^kX_{n+1,n}$ is an even dimensional sphere, and therefore not $C$-trivial, we conclude that $X^k_{m,n}$ is not $C$-trivial. Note that, in particular, there exists a complex vector bundle $\xi$ over $X^k_{m,n}$ with $$c_{\frac{k+n+1}{2}}(\xi)\neq 0.$$
Next we assume $m$ is odd. Consider the obvious map $$j: X_{m,n}\longrightarrow X_{m+1,n}.$$ The homomorphism $j$ in integral cohomology is an isomorphism in degree $(n+1)$. The homomorphism $\Sigma^kj$ is an isomorphism in integral cohomology in degree $(k+n+1)$. Now if $\xi$ is a complex vector bundle over $X^k_{m+1,n}$ with $$c_{\frac{k+n+1}{2}}(\xi)\neq 0,$$ we have that $$c_{\frac{k+n+1}{2}}(j^*\xi)\neq 0.$$ Thus $X^k_{m,n}$ is not $C$-trivial when $k$ is even and $m,n$ are odd. This completes the proof of (5) and the theorem. \qed
We remark that the fact that $$\widetilde{K}^{-k}(X_{m,n})=0$$ whenever $k$ is odd and $m$ is even also gives another proof of Theorem\,\ref{thirdtheorem} (1).
\end{document} |
\begin{document}
\begin{abstract}
This paper describes the complete list of all 205,822 exceptional
Dehn fillings on the 1-cusped hyperbolic 3-manifolds that have ideal
triangulations with at most 9 ideal tetrahedra. The data is
consistent with the standard conjectures about Dehn filling and
suggests some new ones. \end{abstract} \title{A census of exceptional Dehn fillings}
\section{Introduction}
\subsection{Dehn filling} Suppose $M$ is a compact orientable \3-manifold with $\partial M$ a torus. A \emph{slope} on $\partial M$ is an unoriented isotopy class of simple closed curve, or equivalently a primitive element of $H_1(\partial M; \Z)$ modulo sign. The set of all slopes will be denoted $\mathit{Sl}(M)$, which can be viewed as the rational points in the projective line $P^1\big(H_1(\partial M; \R)\big) \cong \RP^1$. The Dehn fillings of $M$ are parameterized by $\alpha \in \mathit{Sl}(M)$, with $M(\alpha)$ being the Dehn filling where $\alpha$ bounds a disk in the attached solid torus. When the interior of $M$ admits a hyperbolic metric of finite volume, it is called a \emph{\1-cusped hyperbolic
\3-manifold}. For such hyperbolic $M$, Thurston showed that all but finitely many $M(\alpha)$ are also hyperbolic \cite{ThurstonsNotes}. The nonhyperbolic Dehn fillings are called \emph{exceptional}, and the corresponding slopes the \emph{exceptional slopes}. Understanding the possible exceptional fillings has been a major topic in the study of 3-manifolds over the past 40 years; see the surveys \cite{Gordon1998,
Gordon1999, Gordon2003, Gordon2012} for further background.
This paper gives a census of all exceptional Dehn fillings on a certain collection of \1-cusped hyperbolic \3-manifolds. Specifically, let $\cC_t$ be the set of all orientable \1-cusped hyperbolic \3-manifolds that have ideal triangulations with at most $t$ ideal tetrahedra. For $t \leq 9$, the set $\cC_t$ has been enumerated by \cite{HildebrandWeeks1989, CallahanHildebrandWeeks1999,
Thistlethwaite2010, Burton2014} and is included with SnapPy \cite{SnapPy}, whose nomenclature for these manifolds (e.g.~$m004$, $s011$, $v1002$, $t12345$, and $o9_{60000}$) I will use freely throughout. Each manifold $M$ in $\cC_t$ has a preferred basis for $H_1(\partial M; \Z)$, and so I will denote slopes in $\mathit{Sl}(M)$ by elements in $\Z^2$. See Figure~\ref{fig:cusped} for some basic statistics on the $59{,}107$ manifolds in $\cC_9$. The main result of this paper is: \begin{theorem}\label{thm:except}
There are precisely 205{,}822 exceptional Dehn fillings on the
manifolds in $\cC_9 $, that is, pairs $(M, \alpha)$ where
$M(\alpha)$ is not hyperbolic, of the types listed in
Table~\ref{table:exceptsum} and distributed as in
Figure~\ref{fig:numexcep}. \end{theorem} The list of these exceptional $(M, \alpha)$ together with the precise topology of each $M(\alpha)$ is available at \cite{ExcepPaperData}. Here, in addition to describing the proof of Theorem~\ref{thm:except} in Section~\ref{sec:proof}, I will give summaries of this data as it relates to known results and open questions about Dehn filling in Sections \ref{sec:exconj} and \ref{sec:new}.
\input tables/cusped_census_fancy
\subsection{Prior work}
In the 1990s, Hodgson and Weeks studied the exceptional Dehn fillings on the 286 manifolds in $\cC_5$; this work was never published but is referred to extensively in \cite{Gordon1998} and provided many key examples in the subject. The series of papers \cite{MartelliPetronio2006, MartelliPetronioRoukema2014, Martelli2018} classified all exceptional fillings on an important series of chain links with as many as 7 components; as noted in \cite[\S 3]{Martelli2018}, this determines the exceptional fillings on more than 95\% of the 4{,}587 manifolds in $\cC_7$. John Berge (personal communication) independently did a search for exceptional fillings on $\cC_9$ using a new version of his program Heegaard \cite{Heegaard3}, and found more than $99.3\%$ of the exceptional fillings included in Theorem~\ref{thm:except}.
\section{Background and conventions} \label{sec:conven}
\input tables/excepvenn
I first review the different types of nonhyperbolic \3-manifolds to establish my conventions on the kinds of exceptional Dehn fillings one can study. Sources vary slightly on the latter point, and here I use a relatively fine-grained division, which is illustrated in Figure~\ref{fig:excepvenn}. The summary for experts, who may safely skip this section, is that here atoroidal means geometrically atoroidal, the manifold $S^2 \times S^1$ is neither reducible nor an honorary lens space, and the term Seifert fibered will not include $\RP^3 \mathop{\#} \RP^3$. I will assume familiarity with basic 3-manifold topology, the Geometrization Theorem, and the resulting general structure of \3-manifolds, see e.g.~\cite{Hatcher3Manifolds,
Scott1983, Bonahon2002} for details. Throughout, all \3-manifolds will be compact, orientable, and be either closed or have boundary that is a union of tori; the symbol $M$ will always refer to such a manifold.
Our first two kinds of nonhyperbolic \3-manifolds are those containing certain spheres and tori. An embedded \2-sphere in $M$ is \emph{essential} if it does not bound a \3-ball. If there are no essential spheres then $M$ is \emph{irreducible}, and this includes all hyperbolic $M$. Those $M$ containing \emph{separating} essential spheres are called \emph{connected sums}; here, I avoid the more common term reducible for this, as for some authors reducible is simply the negation of irreducible and so includes $S^2 \times S^1$, whose only essential sphere is nonseparating. When $M$ is not a connected sum it is \emph{prime}. An $M$ is \emph{toroidal} when it contains an embedded essential torus $T$, that is, one where $\pi_1 T \to \pi_1 M$ is injective and $T$ is not isotopic to a component of $\partial M$; this is sometimes called \emph{geometrically toroidal}. When $M$ is not toroidal it is \emph{atoroidal}. All hyperbolic $M$ are atoroidal.
When $M$ has a foliation by circles it is \emph{Seifert fibered} and called a \emph{Seifert fibered space}; I will shortly revise this definition to exclude a particularly unusual such manifold. The Seifert fibered manifolds are exactly those admitting these six of the eight possible geometries: $S^3$, $\E^3$, $\H^2 \times \R$, $S^2 \times \R$, $\Nil$, and $\PSLRtilde$ and in particular are not hyperbolic. Those with spherical geometry are precisely the \3-manifolds with finite $\pi_1$, including $S^3$ itself and the lens spaces $L(p, q)$ which are all quotients of $S^3$ by a cyclic group. There are only two $M$ with an $S^2 \times \R$ geometry, namely $S^2 \times S^1$ and $\RP^3 \mathop{\#} \RP^3$, which are both rather special. First, $S^2 \times S^1$ is the only closed \3-manifold with Heegaard genus one that is not a lens space, and also the only one whose fundamental group is infinite cyclic. Second, $\RP^3 \mathop{\#} \RP^3$ is the unique Seifert fibered manifold that is a connected sum. I henceforth adopt the nonstandard convention that $\RP^3 \mathop{\#} \RP^3$ is not Seifert fibered; this way, all Seifert fibered spaces are prime.
Any irreducible $M$ has a collection of disjoint essential tori that cut it up into pieces that are either Seifert fibered or hyperbolic. The minimal such collection is unique up to isotopy and gives the \emph{JSJ decomposition} of $M$. A \emph{graph manifold} is one where all the pieces in the JSJ decomposition are Seifert fibered. So Seifert fibered manifolds are graph manifolds as are those that admit the $\Sol$ geometry; the latter are virtually torus bundles over the circle with Anosov monodromy. An irreducible $M$ that is neither hyperbolic nor a graph manifold has a nontrivial JSJ decomposition where at least one piece is hyperbolic; such $M$ have a \emph{hyperbolic piece}.
Figure~\ref{fig:excepvenn} summarizes all the different types of nonhyperbolic \3-manifolds. Of course, many manifolds satisfy several of these conditions, and in certain tables I will want each nonhyperbolic manifold to have a single type. In such cases, the type used will be the most restricted possible in Figure~\ref{fig:excepvenn}; for example, the type of $L(3, 1)$ will be a lens space, even though it also has finite $\pi_1$, is Seifert fibered, and is a graph manifold. (This always makes sense because I set up the various definitions to minimize overlaps that are not containments.) This more restrictive convention is used in Tables~\ref{table:exceptsum} and \ref{table:exlens} only; Table~\ref{table:distance} is correct with either convention.
\section{Evidence for standard conjectures} \label{sec:exconj}
A great deal has been proven about the possibilities for exceptional Dehn fillings; with regards to the gaps in our knowledge, the fillings of Theorem~\ref{thm:except} are consistent with the standard conjectures as I now describe.
\subsection{Knots in the 3-sphere} I start with the 1{,}267 manifolds in $\cC_9$ that are exteriors of knots in $S^3$, which collectively have some 2{,}615 additional exceptional fillings.
\begin{enumerate}[labelindent=-0.2em, labelsep=0.2em, leftmargin=*] \item \label{item:cable} There are no Dehn fillings that are connected
sums, consistent with the Cabling Conjecture \cite[\S
2.2]{Gordon2003}.
\item The Berge Conjecture \cite[\S 3.2]{Gordon2003} holds for the 178
nontrivial lens space fillings.
\item \label{item:intsur}
All 1{,}143 nontrivial Seifert fibered fillings are along
integral slopes and have the form $S^2(q_1, q_2, q_3)$ or
$\RP^2(q_1, q_2)$; compare \cite[\S 3.3]{Gordon2003}. In particular,
all fillings with finite fundamental group are integral.
\end{enumerate} We now turn to considering all of the manifolds in $\cC_9$.
\subsection{Distances between exceptional slopes}
A key topological invariant of a pair of slopes $\alpha$ and $\beta$ on a torus is their geometric intersection number $\Delta(\alpha, \beta)$. When $\alpha$ and $\beta$ are exceptional slopes for a particular $M$, then $\Delta(\alpha, \beta) \leq 8$ by \cite{LackenbyMeyerhoff2013}. Gordon conjectured there are only four possible $M$ with exceptional slopes where $ \Delta \geq 5$ \cite[Conjecture~3.4]{Gordon1998}, and this holds for $\cC_9$. Much of the work on exceptional fillings has focused on understanding the maximum possible $\Delta(\alpha, \beta)$ where $M(\alpha)$ and $M(\beta)$ are particular types of exceptional fillings. I summarize what is observed for $\cC_9$ and how it relates to the known upper bounds on $\Delta(\alpha, \beta)$ in Table~\ref{table:distance}. In all cases, the maximum value of $\Delta(\alpha, \beta)$ for $\cC_9$ is the same as that already found in the literature, compare with \cite{Gordon1998, Gordon1999, Gordon2003, Gordon2012} and also page 971 and Section A.2 of \cite{MartelliPetronio2006}. I think it very likely that all possible maximum values of $\Delta(\alpha, \beta)$ have been observed at this point.
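In SnapPy's preferred bases a slope is a primitive pair $(p, q) \in \Z^2$ up to sign, and $\Delta$ is the absolute value of a $2\times 2$ determinant. The computation can be sketched in a few lines of Python (the function names here are mine, for illustration, and are not part of SnapPy's API):

```python
from math import gcd

def is_slope(alpha):
    """A slope is a primitive class in H_1 of the boundary torus: gcd(p, q) = 1."""
    p, q = alpha
    return gcd(p, q) == 1

def distance(alpha, beta):
    """Geometric intersection number Delta(alpha, beta) = |p*s - q*r|."""
    p, q = alpha
    r, s = beta
    return abs(p * s - q * r)

# For the figure-eight knot exterior m004, the slopes (4, 1) and (-4, 1)
# are both exceptional and realize the Lackenby-Meyerhoff bound of 8:
assert is_slope((4, 1)) and is_slope((-4, 1))
assert distance((4, 1), (-4, 1)) == 8
```
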
\subsection{Atoroidal Seifert fibered and finite $\pi_1$ fillings}
There are two cases of $M$ in $\cC_9$ with slopes $\alpha$ and $\beta$ with $\Delta(\alpha, \beta) = 4$ where $M(\alpha)$ is an atoroidal Seifert fibered space with $\pi_1\big(M(\alpha)\big)$ infinite and $\pi_1\big(M(\beta)\big)$ is finite and noncyclic. These are already contained in \cite{MartelliPetronio2006}, but this aspect is not highlighted there and so is worth describing here. The first example is $m007$ where $m007(-2, 1)$ is the Seifert fibered space $S^2\big((2,1), (3,1), (9,-7)\big)$ and $m007(2, 1)$ is $S^2\big((2,1), (3,2), (3,-1)\big)$ which has nonabelian fundamental group of order 120; this example is $M3_2$ with slopes $-4$ and $0$ in Table A.3 of \cite{MartelliPetronio2006}. The second is $m034$ with $m034(2, 1) = S^2\big((2,1), (3,1), (11,-9)\big)$ and $m034(-2, 1) = S^2\big((2, 1), (3, 2), (5, -3)\big)$ where the latter has nonabelian fundamental group of order 2{,}040; it is the example described in Table A.8 \cite{MartelliPetronio2006}, with $r/s = 2$ and slopes $-4$ and $0$. Here, my conventions for describing Seifert fibered spaces follow Regina \cite{Regina}.
\subsection{Many exceptional fillings} For a 1-cusped manifold $M$, let $e(M)$ denote the number of exceptional fillings. The distribution of $e(M)$ is shown in Figure~\ref{fig:numexcep}. There are only 11 manifolds in $\cC_9$ where $e(M) \geq 7$, namely $m003$, $m004$, $m006$, $m007$, $m009$, $m016$, $m017$, $m023$, $m035$, $m038$, and $m039$. According to Gordon \cite[pages 136--7]{Gordon1998}, these 11 were first noticed by Hodgson when he examined the 286 manifolds in $\cC_5$. Gordon writes there that ``In view of this data it is tempting to believe that these eleven manifolds are the only ones with $e(M) \geq 7$'', and it was later shown \cite {LackenbyMeyerhoff2013} that one always has $e(M) \leq 10$. These 11 are also the only manifolds with $e(M) \geq 7$ among all Dehn fillings on the magic manifold \cite{MartelliPetronio2006}. In light of the additional data here, it is safe to promote this temptation to a conjecture.
\subsection{Connected sums}
The connected sums in this census are all built of quite simple pieces. Specifically, the summands all have finite $\pi_1$ or are $S^2 \times S^1$; there are only two summands in all but three cases: the filling $o9_{39343}(1, 0)$ is $\RP^3 \mathop{\#} \RP^3 \mathop{\#} \RP^3$ and both $o9_{41447}(1, 0)$ and $o9_{43255}(1, 0)$ are the manifold $L(3, 1) \mathop{\#} \RP^3 \mathop{\#} \RP^3$. While there are infinite families with two connected sum fillings \cite{Eudave-MunozWu1999}, it is an open question whether there is a manifold with three such fillings, see \cite[\S 4]{HoffmanMatignon2003}. In $\cC_9$ there are only 14 manifolds with two distinct Dehn fillings that are connected sums, and none with more than two. Another question from \cite[\S 4]{HoffmanMatignon2003} is when there are two such fillings, must both have at least one summand that is $\RP^3 = L(2, 1)$, $L(3, 1)$, or $L(4, 1)$? The answer is yes for the 14 such manifolds in $\cC_9$.
\input tables/distance
\section{New observations} \label{sec:new}
Here are some interesting patterns that I couldn't find in the existing literature. I encourage you to download the complete data at \cite{ExcepPaperData} and find others that I have missed.
\subsection{Finite nonabelian fillings} The maximum number of fillings on $M$ in $\cC_9$ where the fundamental group is finite and nonabelian is three. There are only four such $M$, namely $m011$, $s757$, $v2702$, and $v2797$. I conjecture that these are the only four manifolds with this property.
\subsection{Toroidal fillings} The maximum number of toroidal fillings on $M$ in $\cC_9$ is 4, and there are only 27 such $M$, namely $s772$, $s778$, $s911$, $v2640$, $t08282$, $t11538$, $t12033$, $t12035$, $t12036$, $t12041$, $t12043$, $t12045$, $t12050$, $t12548$, $t12648$, $o9_{35259}$, $o9_{36732}$, $o9_{37030}$, $o9_{38039}$, $o9_{39094}$, $o9_{40054}$, $o9_{41000}$, $o9_{41004}$, $o9_{41006}$, $o9_{41007}$, $o9_{41008}$, $o9_{43799}$. Are there infinitely many such examples? Perhaps we should expect so, since the previous list includes manifolds with 6, 7, 8, and 9 ideal tetrahedra. None of the examples with 4 toroidal Dehn fillings is the exterior of a knot in $S^3$, consistent with a conjecture of \cite[Page 60]{Eudave-Munoz1997}.
\input tables/exlens \input tables/longslopes
\subsection{Lengths of exceptional slopes} \label{sec:slopelength} Hoffman and Purcell \cite{HoffmanPurcell2017} studied the length of exceptional slopes $\alpha$ in the horotorus cutting off a maximal cusp for $M$. By the $6$-Theorem, the length $\ell(\alpha)$ of such $\alpha$ is at most $6$. Table~\ref{table:exlens} details the longest exceptional slopes of each type observed in $\cC_9$; compare with Table 1 of \cite{HoffmanPurcell2017}. The new feature is slopes of length more than 4 yielding manifolds with finite fundamental group; these are listed in Table~\ref{table:longslopes}. Can some of these be made into an infinite family of finite exceptional slopes with $\ell(\alpha) \to 5$ analogous to Proposition~4.2 of \cite{HoffmanPurcell2017}?
The referee kindly pointed out that one can use a covering trick to create even longer slopes for some types starting with the examples in Table~\ref{table:longslopes}. Specifically, the extreme example for lens spaces comes from $M = o9_{18855}$ and the slope $\alpha = (1, 1)$ where $M(\alpha) = L(39,16)$ and $\ell(\alpha) \approx 3.92794$. The core curve of the Dehn filling turns out to generate $H_1(M(\alpha); \Z) \cong \Z/39\Z$, and consequently one can take a $39$-fold cyclic cover of $M$ to get the exterior of a knot in $S^3$ whose meridian $\mu$ also has $\ell(\mu) \approx 3.92794$. As discussed in \cite{HoffmanPurcell2017}, it is conjectured that for an $S^3$ filling one always has $\ell(\mu) \leq 4$, and there are several families of such where $\ell(\mu) \to 4$ from below.
One can apply the same trick to $s546(-1, 1)$ from Table~\ref{table:longslopes} to produce a hyperbolic knot in the Poincar\'e homology sphere where the meridian has length about $4.442966$; specifically, take the 17-fold cyclic cover of $s546$ corresponding to the kernel of the map $\pi_1\big(s546(-1, 1)\big) \to H_1\big(s546(-1, 1); \Z\big) \cong \Z/17\Z$.
\subsection{A cabling conjecture for $S^2 \times S^1$} As per Table~\ref{table:distance}, there are no known hyperbolic knot exteriors in $S^2 \times S^1$ with a Dehn filling that is a connected sum. Thus, as in Section \ref{sec:exconj}(\ref{item:cable}) for the case of $S^3$, I conjecture that none exist, i.e.~that no hyperbolic knot in $S^2 \times S^1$ has a Dehn surgery yielding a connected sum.
\section{Outline of the proof of Theorem~\ref{thm:except}} \label{sec:proof}
I turn now to the proof of Theorem~\ref{thm:except}. Initially, I found a candidate $\cE$ for the list of all exceptional fillings as a byproduct of another project. However, the proof of the correctness of $\cE$ follows the approach of \cite{MartelliPetronioRoukema2014}. The list $\cE$, related data, and the code used in the proof can all be obtained from \cite{ExcepPaperData}; to run the code, which requires using several software packages together in concert, the Docker image \cite{KitchenSink} may be helpful.
\begin{proof}[Proof of Theorem~\ref{thm:except}]
The set $\cE$ consists of 205{,}822 pairs $(M, \alpha)$ where
$M \in \cC_9$ and $\alpha \in \mathit{Sl}(M)$. There are two things to
show: that every $M(\alpha)$ is not hyperbolic of the type
claimed in Table~\ref{table:exceptsum} and that all other fillings
on $M \in \cC_9$ are hyperbolic.
For the latter task, for each $M \in \cC_9$ I found an embedded cusp
neighborhood so that I could measure the lengths of slopes in its
horotorus boundary; this was done rigorously in SnapPy \cite{SnapPy}
running inside \cite{SageMath} using the approach of
\cite{HIKMOT2016} and \cite[\S 3.6]{DunfieldHoffmanLicata2015}. By
the 6-Theorem of \cite{Agol2000, Lackenby2000}, it suffices to
examine all slopes $\beta \in \mathit{Sl}(M)$ where
$\ell(\beta) \leq 6$. For the cusp neighborhoods I used, overall
there were some 355{,}128 such slopes. For the 149{,}306 pairs
$(M, \beta)$ that were not in $\cE$, I checked that $M(\beta)$ was
hyperbolic using the method of \cite{HIKMOT2016} as reimplemented in
SnapPy. As in the proof of Theorem 5.2 of \cite{HIKMOT2016}, it was
sometimes necessary to search around for a triangulation that
could be used to certify the existence of a hyperbolic structure.
This completes the proof that filling along any slope not in $\cE$
yields a hyperbolic manifold.
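The slope-enumeration step above can be sketched in a few lines of pure Python: given the two translations of a cusp torus, list all primitive slopes of length at most 6. This is only an illustration of the 6-Theorem bookkeeping, not the rigorous interval-arithmetic computation used in the actual proof; the cusp translations in the example are made-up numbers, not those of any manifold in $\cC_9$.

```python
from math import gcd

def short_slopes(m, l, max_len=6.0):
    """List primitive slopes (p, q) with |p*m + q*l| <= max_len, where m and l
    are the complex translations of the two chosen curves on the flat cusp torus."""
    area = abs((m.conjugate() * l).imag)        # area of the cusp torus
    # |p| and |q| are bounded via the standard lattice estimate.
    p_bound = int(max_len * abs(l) / area) + 1
    q_bound = int(max_len * abs(m) / area) + 1
    slopes = []
    for p in range(-p_bound, p_bound + 1):
        for q in range(-q_bound, q_bound + 1):
            if gcd(p, q) != 1:
                continue                        # keep only primitive slopes
            if q < 0 or (q == 0 and p < 0):
                continue                        # (p, q) and (-p, -q) are the same slope
            if abs(p * m + q * l) <= max_len:
                slopes.append((p, q))
    return slopes

# A made-up square cusp torus has finitely many slopes of length at most 6:
print(len(short_slopes(complex(1, 0), complex(0, 1))))
```

In the proof itself the translations must be computed from a verified embedded cusp neighborhood, which is what the cited interval-arithmetic methods provide.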
In the other direction, to show that each $M(\alpha)$ in $\cE$
is not hyperbolic, I primarily used Regina \cite{Regina},
specifically its combinatorial recognition methods \cite[\S
4]{Burton2013}. These work when the input triangulation has the
very particular form associated to a standard triangulation of a
Seifert fibered space or graph manifold. Of course, there are many
triangulations of such manifolds which do not have this structure,
so I generated many different 1-vertex triangulations of each
$M(\alpha)$ and fed them into Regina until it succeeded in
recognizing the topology. This worked for all but 2{,}890 of the
$M(\alpha)$. Of those remaining, in 680 cases the Recognizer
program of \cite{Matveev1998, Spine} showed that they were graph
manifolds. (Currently, Regina can only identify graph manifolds
where the graph in question is either a segment with two or three
vertices or a loop with one vertex. These 680 all have slightly more
complicated graphs, for example a loop with either two or three
vertices.) For each of the remaining 2{,}210 manifolds, Regina
found at least one essential normal torus. Cutting along a suitable
collection of such essential tori gave pieces that always included a
cusped hyperbolic 3-manifold with an ideal triangulation with at
most 6 ideal tetrahedra; in particular, each of these 2{,}210
manifolds is nonhyperbolic with a non-trivial JSJ decomposition with
a hyperbolic piece. Thus every $M(\alpha)$ in $\cE$ is not
hyperbolic. This completes the proof that $\cE$ is precisely the
list of exceptional fillings on the manifolds in $\cC_9$.
To prove the correctness of Table~\ref{table:exceptsum}, the hard
part is ensuring that nothing listed as a (proper) graph manifold is
actually Seifert fibered. Everything else can be read off from the
Seifert/graph descriptions found in the previous step, though I
double-checked much of it in other ways. For example, I used Magma
\cite{Magma} to give an independent check that the 59{,}200
spherical manifolds had the claimed type of fundamental group (this
could also be done with GAP \cite{GAP}). I also had Regina compute
directly which manifolds are toroidal using normal surface
techniques and this matched what follows from the Seifert/graph
descriptions. As mentioned, Regina identifies structure in the
given triangulation, which might well be a graph manifold structure
that can be simplified after the fact. For example, it will
sometimes return graph manifolds where one of the nodes is a solid
torus. In such instances, additional triangulations were examined
until a more concise description was found. To certify a graph
description as minimal, I just checked that all Seifert pieces have
incompressible boundary (i.e.~no solid tori) and that no two Seifert
pieces are glued together so that the fibers match up; here, I took
care to consider the possibility of switching the Seifert fibration
for the exceptional piece which is both
$D^2\big( (2, 1), (2, 1)\big)$ and the twisted circle bundle over
the M\"obius band. \end{proof}
\end{document}
\begin{document}
\newtheorem{lemma}{Lemma} \newtheorem{proposition}[lemma]{Proposition} \newtheorem{theorem}[lemma]{Theorem} \newtheorem{definition}[lemma]{Definition}
\newtheorem{hypothesis}[lemma]{Hypothesis} \newtheorem{conjecture}[lemma]{Conjecture} \newtheorem{remark}[lemma]{Remark} \newtheorem{example}[lemma]{Example} \newtheorem{property}[lemma]{Property} \newtheorem{corollary}[lemma]{Corollary} \newtheorem{algorithm}[lemma]{Algorithm}
\title{On the polycirculant conjecture} \author{Aleksandr Golubchik\\
\small (Osnabrueck, Germany)}
\maketitle
\begin{abstract} In this paper the foundations of $k$-orbit theory are developed. The theory opens a new and simple way to investigate groups and multidimensional symmetries.
Relations between the combinatorial symmetry properties of a $k$-orbit and its automorphism group are established, and a local property of $k$-orbits is found. The difference between $2$-closed groups and $m$-closed groups for $m>2$ is discovered, and the specific property of the $n$-orbit of the Petersen graph automorphism group is explained. It is shown that every non-trivial primitive group contains a transitive imprimitive subgroup, and as a result it is proved that the automorphism group of a vertex-transitive graph (a $2$-closed group) contains a regular element (the polycirculant conjecture).
Using methods of $k$-orbit theory, different possibilities for the permutation representation of a finite group are considered, and it is shown that the most informative representation, with respect to describing the structure of a finite group, is the permutation representation of lowest degree. Using this representation, a simple proof of the theorem of W. Feit and J.~G. Thompson on the solvability of groups of odd order is obtained. The rather simple structure of the lowest-degree representation of finite groups is described, and a way to construct a simple full invariant of a finite group is found.
Finally, using methods of $k$-orbit theory, one possible polynomial solution of the graph isomorphism problem is obtained.
\end{abstract}
\section{Introduction} A permutation group $G$ on an $n$-element set $V$ is called regular if every one of its stabilizers (a subgroup that fixes some element $v\in V$) is trivial. Every permutation of a regular group can be decomposed into cycles of the same length.
A permutation group containing a regular subgroup is called semiregular.
Let $G$ be a permutation group on an $n$-element set $V$, let $V^k$ be the $k$-th Cartesian power of $V$ and let $V^{(k)}\subset V^k$ be the non-diagonal part of $V^k$, i.e. every $k$-tuple of $V^{(k)}$ has $k$ different values of its coordinates. The action of $G$ on $V$ induces a partition of $V^{(k)}$ into classes of $k$-tuples related to $G$. This partition is called the system of $k$-orbits of $G$ on $V^{(k)}$, and we write it as $Orb_k(G)$. If $\langle v_1\ldots v_k\rangle\in V^{(k)}$, then $G\langle v_1\ldots v_k\rangle\equiv \{\langle gv_1\ldots gv_k\rangle : g\in G\}$ is a $k$-orbit from $Orb_k(G)$.
For the tasks considered here, the object of interest is the maximal subgroup of $Aut(Orb_k(G))$ that preserves the $k$-orbits from $Orb_k(G)$. We shall denote this subgroup by $aut(Orb_k(G))$. Thus $aut(Orb_k(G))\equiv\cap_{X_k\in Orb_k(G)} Aut(X_k)$ and $G\leq aut(Orb_k(G))$.
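For small degrees these definitions can be checked by brute force. The following sketch (the helper names are mine, not the paper's) encodes a permutation as a tuple $g$ with $g[i]$ the image of point $i$, numbers the points from $0$, and computes $Orb_k(G)$ and $aut(Orb_k(G))$ directly:

```python
from itertools import permutations

def k_orbits(G, n, k):
    """Partition the non-diagonal part V^(k) of [0..n-1]^k into k-orbits of G."""
    orbits, seen = [], set()
    for tup in permutations(range(n), k):   # all k-tuples with distinct coordinates
        if tup in seen:
            continue
        orbit = frozenset(tuple(g[v] for v in tup) for g in G)
        seen |= orbit
        orbits.append(orbit)
    return orbits

def aut_orbits(orbits, n):
    """aut(Orb_k(G)): all permutations of [0..n-1] preserving every k-orbit setwise."""
    return [g for g in permutations(range(n))
            if all(frozenset(tuple(g[v] for v in t) for t in O) == O for O in orbits)]

# The cyclic group C_3 on 3 points has two 2-orbits (the two directed triangles),
# and only C_3 itself preserves both, so C_3 is 2-defined:
C3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
orbs = k_orbits(C3, 3, 2)
assert len(orbs) == 2 and len(aut_orbits(orbs, 3)) == 3
```

The brute-force search over all of $S_n$ in `aut_orbits` is of course only feasible for tiny $n$; it is meant as a definition check, not an algorithm.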
\begin{definition} We call a permutation group $G$ a \emph{$k$-defined} group, if $G=aut(Orb_k(G))$.
\end{definition}
A $k$-defined group has the following obvious property:
\begin{proposition} If a group $G$ is $k$-defined, then it is $(k+1)$-defined.
\end{proposition} {\bfseries Proof:\quad} If a group $G$ is $k$-defined, then, on the one hand, $aut(Orb_{k+1}(G))<aut(Orb_k(G))=G$ and, on the other hand, $G<aut(Orb_{k+1}(G))$. $\Box$
A $k$-defined group is called $k$-closed if it is not $(k-1)$-defined.
P. Cameron \cite{Cameron} described the conjecture of M. Klin that every $2$-closed transitive group is semiregular, and the similar polycirculant conjecture of D. Maru\v{s}i\v{c} that every vertex-transitive finite graph has a regular automorphism.
We shall prove these conjectures in the following reformulation:
\begin{definition}\label{tr.pr} In the conventional definition of primitivity and imprimitivity of permutation groups we single out the case of a cyclic group of prime order. We shall say that such groups are \emph{trivial primitive} and \emph{trivial imprimitive}. The reason for this distinction will become clear below.
\end{definition}
\begin{theorem}\label{2cl.impr->reg} The $2$-closure of a transitive, imprimitive permutation group contains a regular element.
\end{theorem}
\begin{lemma}\label{tr.impr<tr.pr} A non-trivial primitive permutation group contains a transitive, imprimitive subgroup.
\end{lemma}
In order to prove these statements we shall study the symmetry properties of $k$-orbits.\footnote{Objects that generalize the symmetry properties of $k$-orbits were applied by the author to a polynomial solution of the graph isomorphism problem. Part of those investigations is used in \url{http://arXiv.org/find/math/1/AND+au:+Golubchik_Aleksandr+ti:+AND+polynomial+algorithm/0/1/0/2002/0/1}}
\section{$k$-Orbits} A $k$-orbit $X_k$ is a set of $k$-tuples with the property $X_k=Aut(X_k)\alpha_k$ for a $k$-tuple $\alpha_k\in X_k$. Such $k$-sets we shall call \emph{automorphic} $k$-sets.
Everything written below becomes easier to understand if one represents a $k$-orbit as a matrix whose rows are $k$-tuples and whose columns are the values of the coordinates of the $k$-tuples. A $k$-orbit can be represented by various matrices that differ by a permutation of rows. Different row orders in these matrices exhibit different symmetry properties of the $k$-orbit. For example, the $3$-orbit of the symmetric group $S_3$ can be represented as
$$ \begin{array}{lcr}
\begin{array}{||ccc||} \hline 1 & 2 & 3 \\ 2 & 3 & 1 \\ 3 & 1 & 2 \\ \hline 1 & 3 & 2 \\ 3 & 2 & 1 \\ 2 & 1 & 3 \\ \hline \end{array} & \mbox{ or } &
\begin{array}{||cc|c||} \hline 1 & 2 & 3 \\ 2 & 1 & 3 \\ \hline 1 & 3 & 2 \\ 3 & 1 & 2 \\ \hline 2 & 3 & 1 \\ 3 & 2 & 1 \\ \hline \end{array}\,. \end{array} $$
In order to indicate the number of $k$-tuples in a $k$-orbit $X_k$ of power $l$, we shall call it an $(l,k)$-orbit or write it as $X_{lk}$.
$k$-Orbits have the following general counting property:
\begin{proposition}\label{homo} Let $X_k$ be a $k$-orbit and $l\leq k$, then all $l$-tuples with the same coordinates $i_1,\ldots,i_l\in [1,k]$ form a homogeneous multiset (i.e. all $l$-tuples in this multiset have the same multiplicity).
\end{proposition} {\bfseries Proof:\quad} Let two $k$-subsets $Y_k(u_1,u_2,\ldots,u_l),Z_k(v_1,v_2,\ldots,v_l) \subset X_k$ consist of all $k$-tuples that have their $l$ coordinates $i_1,\ldots,i_l$ equal to $u_1,u_2,\ldots,u_l$ and $v_1,v_2,\ldots,v_l$ respectively. Let $g\in Aut(X_k)$ be a permutation such that $\langle v_1\ldots v_l\rangle =\langle gu_1\ldots gu_l\rangle$; then $Z_k=gY_k$. $\Box$
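Proposition \ref{homo} is easy to check numerically for small $k$-orbits. The sketch below (helper name mine) collects the multiset of projections with Python's `collections.Counter` and verifies that all multiplicities coincide:

```python
from collections import Counter
from itertools import permutations

def projection_multiset(X_k, coords):
    """Multiset of l-tuples read off the given coordinates of the k-tuples in X_k."""
    return Counter(tuple(t[i] for i in coords) for t in X_k)

# The 3-orbit of S_3 consists of all 6 tuples of distinct points; projecting onto
# any choice of coordinates gives a homogeneous multiset:
X3 = list(permutations(range(3)))
for coords in [(0,), (1,), (0, 1), (0, 2)]:
    counts = projection_multiset(X3, coords)
    assert len(set(counts.values())) == 1   # all multiplicities are equal
```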
The following constructions simplify the study of $k$-orbits. We call a $k$-orbit a \emph{cyclic $k$-orbit}, or simply a \emph{$k$-cycle}, if it is generated by a single permutation. A $k$-cycle that consists of $l$ $k$-tuples we write as an $(l,k)$-cycle. The order of the permutation generating a $k$-cycle can differ from the number of $k$-tuples in the $k$-cycle. The structure of $k$-cycles is simple enough and can be represented with four structural elements:
\begin{example}\label{simple-struc} $$
\begin{array}{||cc|cc||} \hline 1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \\ \hline \end{array}\,, \
\begin{array}{||cc|cc||} \hline 1 & 2 & 3 & 4 \\ 2 & 1 & 3 & 4 \\ \hline \end{array}\,, \
\begin{array}{||cc|cc||} \hline 1 & 2 & 3 & 4 \\ 2 & 1 & 5 & 6 \\ \hline \end{array}\,, \
\begin{array}{||cccc|cc||} \hline 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 3 & 4 & 1 & 6 & 5 \\ 3 & 4 & 1 & 2 & 5 & 6 \\ 4 & 1 & 2 & 3 & 6 & 5 \\ \hline \end{array}\,. $$
\end{example}
The first example shows a $(2,4)$-cycle that is a \emph{concatenation} of two $(2,2)$-cycles. A $(k,k)$-cycle that is the $k$-orbit of a cycle of length $k$ we shall call a \emph{$k$-rcycle}. This term abbreviates ``right cycle'' and indicates that such a $(k,k)$-cycle is invariant under cyclic permutation not only of its $k$-tuples but also of the coordinates of the $k$-tuples, i.e. that the $(k,k)$-cycle is invariant not only under the left but also under the right action of a permutation (see below).
The second is a $(2,4)$-cycle with fixed points. It is represented by the concatenation of the $2$-rcycle and the trivial $(2,2)$-multiorbit consisting of a single repeated $2$-tuple. Such a $k$-multiorbit we shall call a \emph{$k$-multituple} or an $(l,k)$-multituple.
The third example is the concatenation of the $2$-rcycle and a $2$-orbit that consists of two $2$-tuples with disjoint sets of coordinate values. This kind of $(l,k)$-orbit we shall call an \emph{$S_l^k$-orbit}. The name indicates that such an $(l,k)$-orbit involves
$lk$ elements of $V$ and that its automorphism group is a subdirect product of symmetric groups $S_l(B_i)$, where $B_i\subset V$, $i\in [1,k]$, $|B_i|=l$ and $B_i\cap B_j=\emptyset$ for $i\neq j$. From this definition it follows that any $(l,1)$-orbit (an $l$-element set) is an $S_l^1$-orbit.
The fourth example shows a possible structure of a $k$-cycle whose length is not prime. It is seen that the fourth case can be represented through the first three cases, so these three cases are fundamental for the construction of any $k$-orbit of any finite group.
One of our tasks is the study of permutation actions on $k$-orbits. There are indeed different possibilities for a permutation to act on $k$-orbits, which arise from their different symmetry properties.
We start by considering permutation actions on an $n$-orbit $X_n$ of a group $G$ of degree $n$.
\subsection{The actions of permutations on $n$-sets}
An $n$-orbit $X_n$ of a group $G$ is a set of all $n$-tuples, any pair of which defines a permutation from $G$. So we can represent any $n$-tuple $\alpha_n=\langle u_1\ldots u_n\rangle$ as a permutation $$ g_{\alpha_n}= \left( \begin{array}{ccc} v_1 & \ldots & v_n \\ u_1 & \dots & u_n \end{array} \right), $$ where the $n$-tuple $\langle v_1\ldots v_n\rangle$ corresponds to the unit of $G$ and will be called the \emph{initial} $n$-tuple. Of course, any $n$-tuple from $X_n$ can be chosen as the initial one. The defining property of the initial $n$-tuple is that the value of each of its coordinates equals the position of that coordinate. Here it is assumed that the set of coordinate values and the set of coordinate positions coincide, and that a linear order on this set is fixed (whenever necessary) for ordering the coordinates. The next example shows two different coordinate orders of the same $2$-orbit:
$$ \begin{array}{lcr}
\begin{array}{||cc||} \hline 1 & 2 \\ \hline 1 & 2 \\ 2 & 1 \\ \hline \end{array}\,, &&
\begin{array}{||cc||} \hline 2 & 1 \\ \hline 1 & 2 \\ 2 & 1 \\ \hline \end{array}\,. \end{array} $$
In the first case the initial $2$-tuple is $\langle 12\rangle$, in the second it is $\langle 21\rangle$.
From now on we adopt the following rule for permutation multiplication: $$ \left( \begin{array}{ccc} v_1 & \dots & v_n \\ u_1 & \dots & u_n \end{array} \right) \left( \begin{array}{ccc} v_1 & \dots & v_n \\ w_1 & \dots & w_n \end{array} \right) = \left( \begin{array}{ccc} w_1 & \dots & w_n \\ x_1 & \dots & x_n \end{array} \right) \left( \begin{array}{ccc} v_1 & \dots & v_n \\ w_1 & \dots & w_n \end{array} \right). $$
From this rule it follows that the left action of the permutation
$$ \left( \begin{array}{ccc} v_1 & \dots & v_n \\ u_1 & \dots & u_n \end{array} \right) $$ on the $n$-tuple $\alpha_n=\langle w_1\dots w_n\rangle$ gives the $n$-tuple $\beta_n=\langle x_1\dots x_n\rangle$ that can be considered as:
\begin{enumerate} \item the changing of (number) values of coordinates of the $n$-tuple $\alpha_n$;
\item the mapping of $n$-tuple $\alpha_n$ coordinate-wise on a $n$-tuple $\beta_n$.
\end{enumerate}
The right action of the permutation
$$ \left( \begin{array}{ccc} v_1 & \ldots & v_n \\ w_1 & \dots & w_n \end{array} \right) $$
on the $n$-tuple $\alpha_n=\langle u_1\dots u_n\rangle$ gives the $n$-tuple $\beta_n=\langle x_1\dots x_n\rangle$ that can be interpreted as:
\begin{enumerate} \item the permutation of coordinates of the $n$-tuple $\alpha_n$;
\item the mapping of $n$-tuple $\alpha_n$ coordinate-wise on a $n$-tuple $\beta_n$.
\end{enumerate}
Each time we shall choose whichever interpretation of the permutation action is more convenient.
If a $n$-orbit $X_n$ contains a $n$-tuple $\alpha_n=\langle v_1\ldots v_n\rangle$, then $X_n=\{\langle gv_1\ldots gv_n\rangle : g\in G\}$ and
$|X_n|=|G|$. Here we have used the first method of permutation action on an $n$-tuple, namely: a permutation $g$ changes the values of the coordinates of $\alpha_n$, or acts on the permutation $g_{\alpha_n}$ from the left ($gg_{\alpha_n}$). We shall also say that a permutation $g$ acts from the left on the $n$-tuple $\alpha_n$ and write this action as $g\alpha_n$.
The second method gives $X_n=\{\langle v_{g1}\ldots v_{gn}\rangle: g\in G\}$. It is an action of a permutation $g$ on the order of the coordinates of the $n$-tuple $\alpha_n$, or the action $g_{\alpha_n}g$ of $g$ on $g_{\alpha_n}$ from the right. In this case we shall say that $g$ acts on the $n$-tuple $\alpha_n$ from the right and write this action as $\alpha_ng$.
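The two actions can be written down directly for permutations encoded as tuples ($g[i]$ is the image of $i$, points numbered from $0$); the function names below are mine, not the paper's:

```python
from itertools import permutations

def left_action(g, tup):
    """g changes the values of the coordinates: <v_1...v_n> -> <g v_1 ... g v_n>."""
    return tuple(g[v] for v in tup)

def right_action(tup, g):
    """g permutes the coordinates: coordinate i of the result is v_{g(i)}."""
    return tuple(tup[g[i]] for i in range(len(tup)))

t, c = (1, 0, 2), (1, 2, 0)                 # a transposition and a 3-cycle
assert left_action(t, c) == (0, 2, 1)       # act on the values
assert right_action(c, t) == (2, 1, 0)      # act on the coordinate order

# Both actions recover the whole n-orbit of S_3 from the initial tuple:
S3, e = list(permutations(range(3))), (0, 1, 2)
assert {left_action(g, e) for g in S3} == {right_action(e, g) for g in S3} == set(S3)
```

Note that the two actions applied to the same pair give different results in general, as the transposition/3-cycle example shows.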
\paragraph{$n$-Orbits of left, right cosets of a subgroup.}
Let $A$ be a subgroup of $G$, let $Y_n$ be an $n$-orbit of $A$ and let $g\in G$. Then $gY_n$ is a subset of $X_n$ that represents the permutations from a left coset of $A$ in $G$, and the subset $Y_ng$ represents the permutations from a right coset of $A$ in $G$.
\begin{definition}\label{GA,AG} For convenience we shall write the sets of left $G\backslash A$ and right $G/A$ cosets of $A$ in $G$ as $GA\equiv \{gA: g\in G\}$ and $AG\equiv \{Ag: g\in G\}$.
This notation is easily distinguished from group multiplication, because the product of a group with its own subgroup is always trivial (it is the group itself). The same reasoning will be used in similar formulas for the action of a group on $k$-orbits.
In accordance with this remark we write $GY_n=\{gY_n: g\in G\}$, $Y_nG=\{Y_ng: g\in G\}$, $AX_n=\{A\alpha_n: \alpha_n\in X_n\}$ and $X_nA=\{\alpha_nA: \alpha_n\in X_n\}$.
\end{definition}
From this definition and the notions of left and right cosets of a subgroup we obtain:
\begin{lemma}\label{GY_n=X_nA} Let the $n$-orbit $Y_n$ of a subgroup $A<G$ contain the initial $n$-tuple. Then the partitions of $X_n$ into $n$-subsets of left and right cosets of $A$ in $G$ are $L_n=GY_n=X_nA$ and $R_n=Y_nG=AX_n$.
\end{lemma}
Let $Y_n$ be an $n$-orbit of a subgroup $A$ of a group $G$ and let $g\in G$.
\begin{lemma}\label{Y_ng} The $n$-subset $Y_ng$ is also an $n$-orbit of the subgroup $A$.
\end{lemma} {\bfseries Proof:\quad} The right action of a permutation on an $n$-orbit changes the order of the coordinates of its $n$-tuples. The permutation defined by any pair of $n$-tuples does not depend on the order of the coordinates; hence every permutation defined by a pair of $n$-tuples from $Y_ng$ belongs to $A$. $\Box$
\begin{lemma}\label{gY_n} The $n$-subset $gY_n$ is an $n$-orbit of the subgroup $B=gAg^{-1}$ conjugate to $A$.
\end{lemma} {\bfseries Proof:\quad} The $n$-subsets $gY_n$ and $gY_ng^{-1}$ define, as in lemma \ref{Y_ng}, the same sets of permutations from $G$. But the set of $n$-tuples $gY_ng^{-1}$ is by definition equivalent to the set of permutations $B=gAg^{-1}$. $\Box$
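Lemma \ref{gY_n} can be observed concretely. In the sketch below (helper names are mine) the $n$-orbit of $A=\langle(0\,1)\rangle\le S_3$ containing the initial tuple is the set $A$ itself written as tuples; translating it by $g$ on the left and reading off the permutations defined by pairs of its $n$-tuples gives exactly $gAg^{-1}$:

```python
def compose(p, q):
    """(p q)[i] = p[q[i]] for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def perm_between(a, b):
    """The permutation taking the n-tuple a coordinate-wise to the n-tuple b."""
    p = [0] * len(a)
    for x, y in zip(a, b):
        p[x] = y
    return tuple(p)

def group_of(Y):
    """Permutations defined by pairs of n-tuples of an automorphic n-set Y."""
    base = min(Y)
    return {perm_between(base, t) for t in Y}

A = {(0, 1, 2), (1, 0, 2)}                  # n-orbit of <(0 1)>, with the initial tuple
g = (1, 2, 0)
gY = {tuple(g[v] for v in y) for y in A}    # the left translate g Y_n
assert group_of(gY) == {compose(g, compose(a, inverse(g))) for a in A}
```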
\begin{proposition}\label{Yn'=gYn''f-1} $n$-Subsets of left and right cosets of a subgroup $A<G$ are connected by elements of $G$.
\end{proposition} {\bfseries Proof:\quad} Let $Y_n$ be an $n$-orbit of $A$, let $Y_n'$ be the $n$-subset of a left coset of $A$ and let $Y_n''$ be the $n$-subset of a right coset of $A$. Then there exist permutations $g,f\in G$ such that $Y_n'=gY_n$ and $Y_n''=Y_nf$. Hence $Y_n'=gY_n''f^{-1}$. $\Box$
From now on we shall say ``$n$-orbit of a coset'' instead of ``$n$-subset of a coset'' in order to emphasize that this $n$-subset is an $n$-orbit. The same will apply to a $k$-subset, if it is a $k$-orbit.
\begin{proposition}\label{H->Yn=gYng-1} Let $H$ be a normal subgroup of $G$. Then the sets of $n$-orbits of left and right cosets of $H$ coincide, and if an arbitrary $n$-tuple from the $n$-orbit $Y_n$ of an arbitrary coset of $H$ is chosen as the initial one, then $Y_n=gY_ng^{-1}$ for every permutation $g\in G$.
\end{proposition} {\bfseries Proof:\quad} The sets of $n$-orbits of left and right cosets of $H$ coincide, because the sets of left and right cosets of $H$ coincide.
Further, $Y_n=gY_ng^{-1}$ for every permutation $g\in G$, because the choice of the initial $n$-tuple determines an equivalence between $Y_n$ and $H$ relative to the action of $G$ on its $n$-orbit defined above. $\Box$
\begin{proposition}\label{Yn=gYng-1->N} Let $A$ be a subgroup of a group $G$, let $Y_n$ be an $n$-orbit of $A$ and let $Y_n'$, $Y_n''$ be $n$-orbits of left and right cosets of $A$ respectively. If $Y_n'=Y_n''\neq Y_n$, then $A$ has a non-trivial normalizer $N_G(A)$.
\end{proposition} {\bfseries Proof:\quad} There exist permutations $g,f\in G\setminus A$ such that $Y_n'=gY_n=Y_nf=Y_n''$. From this equality and lemma~\ref{Y_ng} it follows that $gY_ng^{-1}$ is an $n$-orbit of $A$. As $Y_n$ contains the initial $n$-tuple, $gY_ng^{-1}\cap Y_n\neq \emptyset$ and hence $gY_ng^{-1}=Y_n$. So $gAg^{-1}=A$. $\Box$
\begin{proposition}\label{g*an*g-1,f*Yn*f-1} Let $A$ be a subgroup of $G$ and let $g,f\in G$ be such that $gag^{-1}=a$ for every $a\in A$, $faf^{-1}\neq a$ for some $a\in A$ and $fAf^{-1}=A$. Let $Y_n$ be an $n$-orbit of $A$ and let $\overrightarrow{Y_n}$ be the set $Y_n$ with an arbitrary fixed order. Then $g\overrightarrow{Y_n}=\overrightarrow{Y_n}g$, $fY_n=Y_nf$, but $f\overrightarrow{Y_n}\neq\overrightarrow{Y_n}f$.
\end{proposition}
\begin{lemma}\label{L_n} Let $X_n$ be an $n$-orbit of a group $G$ and let $L_n$ be a partition of $X_n$. If the left action of $G$ on $X_n$ preserves $L_n$, then the classes of $L_n$ are $n$-orbits of left cosets of some subgroup of $G$.
\end{lemma} {\bfseries Proof:\quad} Let $Y_n\in L_n$. Since $L_n$ is a partition, the left action of $Aut(Y_n)$ on $Y_n$ is transitive and hence $Y_n$ is an $n$-orbit. $\Box$
In the same way we have
\begin{lemma}\label{R_n} Let $X_n$ be an $n$-orbit of a group $G$ and let $R_n$ be a partition of $X_n$. If the right action of $G$ on $X_n$ preserves $R_n$, then the classes of $R_n$ are $n$-orbits of right cosets of some subgroup of~$G$.
\end{lemma}
\paragraph{Intersections and unions of left- and right-automorphic partitions.}
\begin{proposition}\label{Y_n*Z_n} Let $Y_n$ and $Z_n$ be $n$-orbits, then $T_n=Y_n\cap Z_n$ is an $n$-orbit and $Aut(T_n)=Aut(Y_n)\cap Aut(Z_n)$.
\end{proposition} {\bfseries Proof:\quad} It is sufficient to choose an initial $n$-tuple from $T_n$. $\Box$
\begin{definition}\label{covering} Let $M$ be a set and let $A$ be a system of subsets of $M$. We say that $A$ is a covering of $M$ if the classes of $A$ contain all elements of $M$ and have non-empty intersections. If all the intersections are empty, then we say that (the covering) $A$ is a partition of $M$. Thus we say that $A$ is a covering if it is not a partition; we also say that $A$ is a covering if we do not know whether it is a partition.
\end{definition}
\begin{definition}\label{automorphic} Let $X_n$ be an $n$-orbit of a group $G$ and let $Y_n$ be an arbitrary subset of $X_n$. Then we say that $L_n=GY_n$ is a \emph{left-automorphic} and $R_n=Y_nG$ a \emph{right-automorphic} covering of $X_n$. This definition will also be applied to the corresponding coverings of a $k$-orbit $X_k$ of a group $G$ for $k<n$.
\end{definition}
\begin{corollary}\label{autom.-part.} Let $L_n$ and $R_n$ be the partitions of an $n$-orbit of a group $G$ into $n$-orbits of left and right cosets of a subgroup $A<G$. Then $L_n$ and $R_n$ are left-automorphic and right-automorphic partitions. \end{corollary}
\begin{definition}\label{part. op.} Let $M$ be a set and let $P,Q$ be partitions of $M$. We write:
\begin{itemize} \item $P\sqsubset Q$ if for every $A\in P$ there exists $B\in Q$ such that $A\subset B$.
\item $P\sqcap Q$ for the partition of $M$ that consists of the intersections of classes from $P$ and $Q$.
\item $P\sqcup Q$ for the partition of $M$ whose classes are the unions of intersecting classes from $P$ and $Q$.
\end{itemize} \end{definition}
\begin{proposition}\label{GA,GB;AG,BG} Let $A,B<G$, then $GA\sqcap GB=G(A\cap B)$, $AG\sqcap BG=(A\cap B)G$, $GA\sqcup GB=Ggr(A,B)$ and $AG\sqcup BG=gr(A,B)G$.
\end{proposition} {\bfseries Proof:\quad} \begin{itemize} \item Since $GA\sqcap GB=G(GA\sqcap GB)$ and $A\cap B\in GA\sqcap GB$, we have $GA\sqcap GB=G(A\cap B)$. Analogously $AG\sqcap BG=(A\cap B)G$.
\item Let $\{A_i:\,i\in [1,l]\}\subset GA$, $\{B_j:\,j\in [1,m]\}\subset GB$, $A_1=A$, $B_1=B$ and $U=\cup_{(i=1,l)} A_i=\cup_{(j=1,m)} B_j$, so that $U\in GA\sqcup GB$ and $e\in U$. Let $f\in A_i$, $g\in A_k$, and let $B_j$ have non-empty intersections with $A_i$ and $A_k$; then $f=gaba'$ for some elements $a,a'\in A$ and $b\in B$. This shows that every element of $U$ can be represented as a product of elements from $A$ and $B$. Hence $U=gr(A,B)$ and $GA\sqcup GB=Ggr(A,B)$. In the same way $AG\sqcup BG=gr(A,B)G$.~$\Box$
\end{itemize}
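The first claim of proposition \ref{GA,GB;AG,BG} can be verified directly for subgroups of $S_3$ (a brute-force sketch; the function names are mine):

```python
from itertools import permutations

def compose(p, q):
    """(p q)[i] = p[q[i]] for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def left_cosets(G, A):
    """The partition GA of G into left cosets gA."""
    return {frozenset(compose(g, a) for a in A) for g in G}

def meet(P, Q):
    """P ⊓ Q: the non-empty pairwise intersections of classes of P and Q."""
    return {c & d for c in P for d in Q if c & d}

G = set(permutations(range(3)))
A = {(0, 1, 2), (1, 0, 2)}                  # <(0 1)>
B = {(0, 1, 2), (2, 1, 0)}                  # <(0 2)>
# GA ⊓ GB = G(A ∩ B); here A ∩ B = {e}, so the meet is the partition into singletons:
assert meet(left_cosets(G, A), left_cosets(G, B)) == left_cosets(G, A & B)
```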
From this proposition it follows:
\begin{lemma}\label{L_n,L_n;R_n,R_n} Let $A,B<G$ and $X_n\in Orb_n(G)$, then $X_nA\sqcap X_nB=X_n(A\cap B)$, $AX_n\sqcap BX_n=(A\cap B)X_n$, $X_nA\sqcup X_nB=X_ngr(A,B)$ and $AX_n\sqcup BX_n=gr(A,B)X_n$.
\end{lemma}
\paragraph{Intersection and union of left-automorphic partition with right-automorphic partition.}
Let $L_n=X_nA$ and $R_n=AX_n$. First we see that $L_n$ and $R_n$ have at least one common class: the $n$-orbit of $A$ containing the initial $n$-tuple. Further, from proposition \ref{Yn=gYng-1->N} we know that if $L_n$ and $R_n$ have more than one common class, then $A$ has a non-trivial normalizer in $G$.
\begin{lemma}\label{L_n,R_n,B<A}
Let $L_n\sqcap R_n$ be non-trivial, i.e. let it contain a class $Z_n$ of power $l$ with $1<l<|A|$. Then subgroups conjugate to $A$ have non-trivial intersections and $Z_n$ is an $n$-orbit of some subgroup $B<A$.
\end{lemma} {\bfseries Proof:\quad} Let $U_n\in L_n$, $W_n\in R_n$ and $Z_n=U_n\cap W_n$. Then $U_n$ is an $n$-orbit of some subgroup $D$ conjugate to $A$, and $W_n$ is an $n$-orbit of $A$. Taking into account that we can choose the initial $n$-tuple from $Z_n$, we obtain that $Z_n$ is an $n$-orbit of the subgroup $B=A\cap D$. $\Box$
\begin{corollary}\label{L_n-sqcap-R_n,p} Let $A$ be a cyclic group of prime order, then $L_n\sqcap R_n$ is trivial.
\end{corollary}
The union $L_n\sqcup R_n$ can contain non-automorphic classes: $$
\begin{array}{|c|} \hline 123 \\ 213 \\ \hline 132 \\ 312 \\ \hline 231 \\ 321 \\ \hline \end{array}\ \sqcup\
\begin{array}{|c|} \hline 123 \\ 213 \\ \hline 132 \\ 231 \\ \hline 312 \\ 321 \\ \hline \end{array}\ =\
\begin{array}{|c|} \hline 123 \\ 213 \\ \hline 132 \\ 231 \\ 312 \\ 321 \\ \hline \end{array} $$ and is therefore of less interest for investigation; nevertheless, the symmetry properties of this union can give information about the structure of the group $G$ under study and help to find subgroups of $G$ that are supergroups of $A$.
\subsection{The actions of permutations on $k$-sets}
In order to consider the actions of permutations on $k$-sets we shall need some special operations, which we introduce first.
\subsubsection{Operations on $k$-sets}
\paragraph{Projecting and multiprojecting operators.}
Let $\alpha _k=\langle v_1\ldots v_k\rangle$ be a $k$-tuple, $l\leq k$ and $i_1,i_2,\ldots ,i_l$ be $l$ different coordinates from $[1,k]$. Then $\beta_l=\langle v_{i_1}\ldots v_{i_l}\rangle$ is an $l$-tuple that we call a \emph{projection} of the $k$-tuple $\alpha_k$ on the ordered set of coordinates $I_l=\{i_1<i_2<\ldots<i_l\}$. We introduce a projecting operator $\hat{p}$ and write this projection as $\beta_l=\hat{p}(I_l)\alpha_k$. The projection of all $k$-tuples of a $k$-set $X_k$ on $I_l$ gives an $l$-set $X_l=\hat{p}(I_l)X_k$.
The projection of all $k$-tuples of the $k$-set $X_k$ on $I_l$ that keeps equal $l$-tuples distinct is a multiset, which we call a \emph{multiprojection} of $X_k$ on $I_l$ and denote by
$\uplus_{X_k}X_l$, or simply $\uplus X_l$ if it is clear from the context which multiprojection we consider. By definition $|\uplus X_l|=|X_k|$. Using a multiprojecting operator $\hat{p}_{\uplus}$, we shall write $\uplus X_l=\hat{p}_{\uplus}(I_l)X_k$.
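The operators $\hat{p}$ and $\hat{p}_{\uplus}$ translate directly into code (a sketch with my own names; coordinates are numbered from $0$):

```python
from collections import Counter
from itertools import permutations

def project(X_k, I):
    """p-hat(I) X_k: the set of projections of the k-tuples of X_k onto coordinates I."""
    return {tuple(t[i] for i in I) for t in X_k}

def multiproject(X_k, I):
    """p-hat_union(I) X_k: the same projection as a multiset (equal l-tuples kept apart)."""
    return Counter(tuple(t[i] for i in I) for t in X_k)

X3 = list(permutations(range(3)))           # the 3-orbit of S_3
assert project(X3, (0, 1)) == set(permutations(range(3), 2))
assert sum(multiproject(X3, (0,)).values()) == len(X3)   # |⊎ X_l| = |X_k|
```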
\paragraph{Concatenating operation.}
Let $\beta_l=\langle v_1\ldots v_l\rangle$ and $\gamma_m=\langle u_1\ldots u_m\rangle$ be an $l$- and an $m$-tuple. Then the $(l+m)$-tuple $\langle v_1v_2\ldots v_lu_1u_2\ldots u_m\rangle$ we call the concatenation of $\beta_l$ and $\gamma_m$ and write it as $\beta _l\circ \gamma _m$.
It will also be convenient to use the concatenation of intersecting tuples. We shall consider such a concatenation as a multiset of coordinates, for example $\langle 12\rangle\circ \langle 23\rangle=\langle 1223\rangle$.
We shall use this concatenating operation also for multisets of tuples, in the following way:
Let $\uplus Y_l$ and $\uplus Z_m$ be multisets with the same number of tuples and let $\phi:\uplus Y_l\leftrightarrow \uplus Z_m$. Then $\uplus Y_l\stackrel{\phi}{\circ}\uplus Z_m$ is the $(l+m)$-multiset that consists of the concatenations of the $l$-tuples of $Y_l$ with the $m$-tuples of $Z_m$ according to the map $\phi$. We shall not write the map $\phi$ if it is clear from the context.
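Concatenation is a one-liner; with a positional pairing playing the role of the map $\phi$, the first matrix of the $k$-cycle example above (the $(2,4)$-cycle) is recovered as the concatenation of two $(2,2)$-cycles. Helper names here are mine:

```python
def concat(b, c):
    """b ∘ c: concatenation of two tuples; repeated values are kept, as in a multiset."""
    return tuple(b) + tuple(c)

def concat_lists(Y, Z):
    """Concatenate two equally long lists of tuples position-wise
    (a concrete choice of the pairing map phi)."""
    assert len(Y) == len(Z)
    return [concat(y, z) for y, z in zip(Y, Z)]

assert concat((1, 2), (2, 3)) == (1, 2, 2, 3)   # <12> ∘ <23> = <1223>
# The (2,4)-cycle built from two (2,2)-cycles:
assert concat_lists([(1, 2), (2, 1)], [(3, 4), (4, 3)]) == [(1, 2, 3, 4), (2, 1, 4, 3)]
```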
\paragraph{Operation properties.}\label{Op}
\begin{lemma}\label{pX_k} An $l$-projection of a $k$-orbit is an $l$-orbit.
\end{lemma}
Let $g$ be a permutation, $\alpha_n=\langle v_1\ldots v_n\rangle$ be an $n$-tuple and $I_k$ be a $k$-element set of coordinates.
\begin{lemma}\label{pgan=gpan} $g\hat{p}(I_k)\alpha_n=\hat{p}(I_k)g\alpha_n$.
\end{lemma} {\bfseries Proof:\quad} It is sufficient to show the equality for $I_k=[1,k]$. $$g\hat{p}(I_k)\alpha_n=g\langle v_1\ldots v_k\rangle
=\langle gv_1\ldots gv_k\rangle$$ and $$\hat{p}(I_k)g\alpha_n=\hat{p}(I_k)\langle gv_1\ldots gv_n\rangle
=\langle gv_1\ldots gv_k\rangle\!.\;\Box$$
\begin{lemma}\label{p.ang=pan.g} $\hat{p}(I_k)(\alpha_ng)=\hat{p}(gI_k)\alpha_n$.
\end{lemma} {\bfseries Proof:\quad} $$\hat{p}(I_k)(\alpha_ng)=\hat{p}([1,k])\langle v_{g1}\ldots v_{gn}\rangle=\langle v_{g1}\ldots v_{gk}\rangle$$ and $$\hat{p}(gI_k)\alpha_n=\hat{p}(g[1,k])\langle v_1\ldots v_n\rangle
=\hat{p}(\{g1<\ldots<gk\})\langle v_1\ldots v_n\rangle =\langle v_{g1}\ldots v_{gk}\rangle.\ \Box $$
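Both lemmas admit a direct mechanical check. In the following sketch (ours), a permutation is a dict $v\mapsto gv$ and a tuple a Python tuple; we read the image coordinates $gI_k$ in the order induced by $I_k$, under which the second identity holds coordinate by coordinate (with the sorted-set reading it holds up to the order of coordinates):

```python
from itertools import permutations

def proj(I, t):
    # \hat{p}(I_k): keep coordinates of t listed in I, in the given order (1-based)
    return tuple(t[i - 1] for i in I)

def left(g, t):
    # left action: g<v1...vn> = <g v1 ... g vn>
    return tuple(g[v] for v in t)

def right(t, g):
    # right action: <v1...vn> g = <v_{g1} ... v_{gn}>
    return tuple(t[g[i] - 1] for i in range(1, len(t) + 1))

alpha = (2, 4, 1, 3)
Ik = [1, 3]
for p in permutations(range(1, 5)):
    g = dict(zip(range(1, 5), p))
    # g p(I_k) a_n = p(I_k) g a_n
    assert left(g, proj(Ik, alpha)) == proj(Ik, left(g, alpha))
    # p(I_k)(a_n g) = p(g I_k) a_n
    assert proj(Ik, right(alpha, g)) == proj([g[i] for i in Ik], alpha)
print("both identities hold for all g in S_4")
```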
The equality $\hat{p}(I_k)(\alpha_ng)=(\hat{p}(I_k)\alpha_n)g=\alpha_k(I_k,\alpha_n)g$ is of no independent interest, because the corresponding right permutation action on the $k$-tuple $\alpha_k$ cannot be decoupled from the $n$-tuple $\alpha_n$, as is possible for the left permutation action on the $k$-tuple $\alpha_k$. But for convenience we shall write $\alpha_kg$ instead of $\hat{p}(I_k)(\alpha_ng)$ where it will not lead to misunderstanding. Similarly, we consider a $k$-projection of the equalities $GY_n=X_nA$ and $AX_n=Y_nG$ (lemma \ref{GY_n=X_nA}).
From $\hat{p}(I_k)GY_n=\hat{p}(I_k)X_nA$ it follows that $G\hat{p}(I_k)Y_n=\hat{p}(AI_k)X_n$, where by definition \ref{GA,AG} $$
\hat{p}(AI_k)X_n\equiv \{\hat{p}(AI_k)\alpha_n:\,\alpha_n\in X_n\}
\mbox{ and }\hat{p}(AI_k)\alpha_n=\{\hat{p}(aI_k)\alpha_n:\,a\in A\}, $$ so we can write the $k$-projection of this equality as $GY_k(I_k)=\hat{p}(AI_k)X_n$ or, understood correctly, simply as $GY_k=X_kA$.
For the second equality we have $\hat{p}(I_k)AX_n=A\hat{p}(I_k)X_n=AX_k(I_k)$ and $$\hat{p}(I_k)Y_nG=\hat{p}(GI_k)Y_n\equiv \{\hat{p}(gI_k)Y_n:\, g\in G\}.$$ For convenience we shall write the $k$-projection of the second equality simply as $AX_k=Y_kG$.
\begin{proposition}\label{p(I_k)P_k*Q_k} Let $X_n$ be an $n$-set and $P_n$, $Q_n$ be two partitions of $X_n$. Let $P_k=\hat{p}(I_k)P_n$ and $Q_k=\hat{p}(I_k)Q_n$ be partitions of $X_k=\hat{p}(I_k)X_n$. In general the equality $\hat{p}(I_k)(P_n\sqcap Q_n)=\hat{p}(I_k)P_n\sqcap \hat{p}(I_k)Q_n$ does not hold.
\end{proposition} {\bfseries Proof:\quad} A counterexample: $X_n=\{15,26,36,45\}$, $P_n=\{\{15,26\},\{36,45\}\}$ and $Q_n=\{\{15,36\},\{26,45\}\}$.~$\Box$
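The counterexample can be checked by direct computation, assuming $\sqcap$ denotes the meet of two partitions, i.e. the nonempty pairwise intersections of classes (our reading); projecting on the second coordinate gives the inequality:

```python
def meet(P, Q):
    # P meet Q: nonempty pairwise intersections of classes (common refinement)
    return {frozenset(a & b) for a in P for b in Q if a & b}

def proj_part(i, P):
    # coordinate-i projection (0-based) of a partition of a set of pairs
    return {frozenset(t[i] for t in cls) for cls in P}

Pn = [frozenset({(1, 5), (2, 6)}), frozenset({(3, 6), (4, 5)})]
Qn = [frozenset({(1, 5), (3, 6)}), frozenset({(2, 6), (4, 5)})]

lhs = proj_part(1, meet(Pn, Qn))                # projection of the meet
rhs = meet(proj_part(1, Pn), proj_part(1, Qn))  # meet of the projections
print(lhs, rhs)   # {{5}, {6}} versus {{5, 6}}: not equal
```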
\subsubsection{Some additional definitions and auxiliary statements}
\begin{definition} The notation $GY_k=\{gY_k:g\in G\}$ that we use for the action of a group $G$ on a $k$-subset $Y_k$ we shall also apply to the action of $G$ on a system of $k$-subsets $P_k$: $GP_k=\{gP_k:g\in G\}$ and $gP_k=\{gY_k:Y_k\in P_k\}$.
\end{definition} \begin{definition} Let $M$ be a set and $Q$ be a set of subsets of $M$, then we write $\cup Q\equiv \cup_{(U\in Q)}U$. From this definition it follows that for $GP_k$ we can consider two kinds of unions: $\cup GP_k$ and $\cup\cup GP_k$. For example, for the two sets $\{\{1,2\},\{2,3\}\}$ and $\{\{2,3\},\{4,5\}\}$, the first union is the set $\{\{1,2\},\{2,3\},\{4,5\}\}$ and the second is $\{1,2,3,4,5\}$.
The symbol $\sqcup$ we shall apply to the union of intersecting classes of a system of sets, as in the example: $\sqcup \{\{1,2\},\{2,3\},\{4,5\}\}=\{\{1,2,3\},\{4,5\}\}$.
\end{definition}
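These conventions can be illustrated in code (our sketch; \texttt{flat\_union} plays the role of $\cup$ and \texttt{sq\_union} of $\sqcup$, both names being ours):

```python
def flat_union(Q):
    # cup Q: the union of the members of Q
    out = set()
    for U in Q:
        out |= set(U)
    return out

def sq_union(classes):
    # squp: repeatedly merge intersecting classes of a system of sets
    merged = [set(c) for c in classes]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if merged[i] & merged[j]:
                    merged[i] |= merged.pop(j)
                    changed = True
                    break
            if changed:
                break
    return {frozenset(c) for c in merged}

A = {frozenset({1, 2}), frozenset({2, 3})}
B = {frozenset({2, 3}), frozenset({4, 5})}
print(flat_union([A, B]))               # first union: {{1,2},{2,3},{4,5}}
print(flat_union(flat_union([A, B])))   # second union: {1,2,3,4,5}
print(sq_union([{1, 2}, {2, 3}, {4, 5}]))   # {{1,2,3},{4,5}}
```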
\begin{definition} Let $\alpha_k$ be a $k$-tuple. We shall write the set of coordinates of $\alpha_k$ as $Co(\alpha_k)$. We shall use this notation also for a $k$-set $Y_k$, where $Co(Y_k)=\{Co(\alpha_k):\alpha_k\in Y_k\}$.
\end{definition}
\begin{lemma}\label{rm} Let $M$ be an $m$-element set and $\uplus M$ be a homogeneous multiset with multiplicity $k$. Let $P_m$ be a multipartition of $\uplus M$ into $k$-element multisubsets of $M$; then $P_m$ is a union of $k$ distributions of the $m$ elements of the set $M$ between the $m$ $k$-element classes of $P_m$.
\end{lemma} {\bfseries Proof:\quad} We use induction on $m$. Let us represent $P_m$ as an $m\times k$ matrix whose rows are the classes of $P_m$. For $m=1$ the statement is evidently correct. Let $m>1$ and let the first $l\leq k$ rows contain all $k$ occurrences of an element $u\in M$. By permuting elements within these $l$ rows we can place the element $u$ in all $k$ columns. Now, by permuting elements within the $k$ columns, we can move the element $u$ to the first row. Thus we obtain an $(m-1)\times k$ matrix (without the first row) that, by the induction hypothesis, can be transformed (by permuting elements within rows) so that each column contains $m-1$ different elements. It remains only to undo the permutation, exchanging the element $u$ from the first row with the corresponding elements in the other $l-1$ rows. $\Box$
From this lemma follows
\begin{corollary}\label{mMmM->MM} Let $M$ be an $m$-element set and $M_2=\uplus M\circ \uplus M$, where $\uplus M$ is homogeneous, then $M_2$ can be partitioned into $m$-element subsets that are concatenations $M\circ M$.
\end{corollary}
{\bfseries Proof:\quad} Let $|\uplus M|/|M|=k$, let $M_2$ be associated with the space $I_2=I_1^1\circ I_1^2$ and let $L_2$ be the partition of $M_2$ into $k$-element classes such that $\hat{p}(I_1^1)L_2=M$; then $\hat{p}(I_1^2)L_2=P_m$ by lemma \ref{rm}. $\Box$
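On a small instance the rearrangement in the proof of lemma \ref{rm} can be checked exhaustively; the sketch below (ours, with $m=3$, $k=2$ and the classes of $P_m$ as rows) searches over within-row reorderings for an arrangement in which every column is a permutation of $M$:

```python
from itertools import permutations, product

def column_distributions(P):
    # brute force: reorder each row of P (a list of k-element classes) so
    # that every column contains pairwise different elements of M
    for rows in product(*(set(permutations(r)) for r in P)):
        if all(len(set(col)) == len(col) for col in zip(*rows)):
            return rows
    return None

# m = 3 elements with multiplicity k = 2: each of 1, 2, 3 occurs twice
P = [(1, 1), (2, 3), (2, 3)]
res = column_distributions(P)
print(res)
```

The search is exponential and only meant to make the statement concrete; the proof itself gives an explicit polynomial rearrangement.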
\begin{lemma}\label{mMmM} Let $M_2$ be a set of pairs that is associated with the space $I_2=I_1^1\circ I_1^2$. Let $\hat{p}_{\uplus}(I_1^1)M_2= \hat{p}_{\uplus}(I_1^2)M_2$, then $M_2$ can be partitioned into cycles.
\end{lemma} {\bfseries Proof:\quad} Let $\langle u_1u_2\rangle\in M_2$, then there exists a pair $\langle u_2u_3\rangle\in M_2$. Continuing, we obtain a first cycle $C_2=\{\langle u_1u_2\rangle,\langle u_2u_3\rangle,\ldots, \langle u_ru_1\rangle\}$. The set $M_2\setminus C_2$ retains the property of the set $M_2$. $\Box$
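The proof is effectively a greedy algorithm and can be sketched directly (our illustration):

```python
def cycle_partition(M2):
    # partition a list of pairs, whose two coordinate multiprojections are
    # equal as multisets, into cycles <u1 u2>, <u2 u3>, ..., <ur u1>
    pairs = list(M2)
    cycles = []
    while pairs:
        cycle = [pairs.pop(0)]
        while cycle[-1][1] != cycle[0][0]:
            # the equal-multiprojection hypothesis guarantees a continuation
            nxt = next(i for i, p in enumerate(pairs) if p[0] == cycle[-1][1])
            cycle.append(pairs.pop(nxt))
        cycles.append(cycle)
    return cycles

M2 = {(1, 2), (2, 3), (3, 1), (4, 5), (5, 4)}
cycles = cycle_partition(sorted(M2))
print(cycles)   # one 3-cycle and one 2-cycle
```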
\begin{definition} Let $Y_k$ be a $k$-subset defined on the subset $U\subset V$, then under $Aut(Y_k)$ we shall understand the permutation group on the set $U$. An extension of $Aut(Y_k)$ on the set $V$ we shall write as $Aut(Y_k;V)$.
\end{definition}
\begin{definition}\label{sdeco} Let $c_1c_2\ldots c_l$ be the cycle decomposition of a permutation $g$. A product $a$ of some cycles from this decomposition we shall call a {\em subdecomposition} of $g$ and write this as $a\subset g$.
Let $a$ be a subdecomposition of an automorphism $g\in G$. We shall call the permutation $a$ a \emph{subautomorphism} of $G$ and write this as $a\subset\in G$. Let $B$ be an intransitive subgroup of $G$ and $A(U)$ be a transitive component of $B$ on the subset $U\subset V$. We shall write this fact as $A(U)<<G$ and say that $A(U)$ is a projection of $B$. It is clear that $A(U)$ is generated by some subautomorphisms of $G$.
We can consider an action of a subautomorphism $a$ on a $k$-orbit $X_k$ of a group $G$, extending it to an action of some automorphism $g\in G$.
\end{definition}
\begin{definition} Let $X_k$ be a $k$-orbit of a group $G$ and $Y_k\subset X_k$ be a $k$-orbit of a subgroup of $G$, then we say that $Y_k$ is a $k$-suborbit of $G$.
\end{definition}
\begin{definition} Let $G$ be a group, $X_n\in Orb_n(G)$, $I_k$ be a $k$-subspace and $Co(I_k)$ be a $1$-orbit of some subgroup $A<G$, then we say that number $k$ is automorphic, $I_k$ is an \emph{automorphic subspace} and $X_k=\hat{p}(I_k)X_n$ is a \emph{right-automorphic $k$-orbit} or a \emph{$k$-rorbit}.
\end{definition}
\subsubsection{The left action of permutations on $k$-sets}
The left action of a permutation on a $k$-set is the same as its action on an $n$-set. The right action of a permutation on a $k$-set follows from its action on an $n$-set. The right action is not as combinatorially transparent as the left action, so we begin with the left action.
We say that two sets of $k$-tuples $Y_k$ and $Y_k'$ are {\em $S_n$-isomorphic}, or simply isomorphic, if there exists a permutation $g\in S_n$ such that $Y_k'=gY_k$; for example $Y_2=\{\langle 12\rangle,\langle 21\rangle\}$ and $Y_2'=\{\langle 13\rangle,\langle 31\rangle\}$. We shall say that $Y_k$ and $Y_k'$ are $G$-isomorphic if $g\in G<S_n$ and we study invariants of $G$. We shall not indicate the group relative to which we consider the symmetry if it is clear from context. From this definition follows:
\begin{proposition} Let $X_k$ be a $k$-orbit and $Y_k$ be an arbitrary $k$-subset of $X_k$, then $Aut(X_k)Y_k$ is a covering of $X_k$ by classes isomorphic to $Y_k$.
\end{proposition}
and
\begin{corollary} $n$-Orbits of left cosets of a subgroup $A$ of a group $G$ are isomorphic.
\end{corollary}
The $n$-orbits of right cosets of a subgroup $A$ are in general not isomorphic. An example is given by the $3$-orbits of the subgroup $A=gr((12))<S_3$:
$$
\begin{array}{||ccc||} \hline 1 & 2 & 3 \\ 2 & 1 & 3 \\ \hline \end{array}\ \mbox{ and }\
\begin{array}{||ccc||} \hline 1 & 3 & 2 \\ 2 & 3 & 1 \\ \hline \end{array}\,. $$ The same is valid for $k$-orbits of left and right cosets of a subgroup $A$ for $k<n$.
\begin{proposition} Let $G$ be a group, $X_n\in Orb_n(G)$, $X_k\in Orb_k(G)$ and $A<G$, then \begin{itemize} \item $n$-Orbits of left cosets of $A$ form a partition of $X_n$. \item The $G$-isomorphic $k$-orbits of left cosets of $A$ belong to the same $k$-orbit $X_k$ and form a covering of $X_k$.
\end{itemize} \end{proposition} {\bfseries Proof:\quad} The first statement is evident. Let $Y_k\in Orb_k(A)$ be a subset of $X_k$; then the covering $L_k=GY_k$ of $X_k$ contains all $k$-orbits that are $G$-isomorphic to $Y_k$. An example of such a covering is $L_1=\{\{1,2\},\{2,3\},\{1,3\}\}$. $\Box$
A $k$-orbit $X_k$ of a group $G$ can have different representations through $k$-orbits of the same subgroup $A<G$, because $X_k$ can contain non-isomorphic $k$-orbits of $A$. For example, the $1$-orbit of the symmetric group $S_n$ can be represented, on the one hand, as a covering by $1$-orbits of left cosets of $gr((12\ldots (n-1)))$, which are $(n-1)$-element subsets of $V$, and, on the other hand, as a partition into $1$-orbits of left cosets of this subgroup, which are $1$-element subsets of $V$.
\begin{lemma}\label{Yk->Un} Let $A<G$, $X_n\in Orb_n(G)$, $Y_n\subset X_n$ be a $n$-orbit of $A$ and $Y_k=\hat{p}(I_k)Y_n$. Let $U_n\subset X_n$ be the union of such classes of $GY_n$ whose $I_k$-projection is $Y_k$, then $U_n$ is a $n$-orbit.
\end{lemma} {\bfseries Proof:\quad} $Aut(U_n)=Aut(Y_k;V)\cap G$. $\Box$
The subset $U_n$ need not contain all $n$-tuples whose $I_k$-projections belong to $Y_k$. An example is given by the group $S_3$, $U_3=Y_3=\{123,213\}$ and $k=1$:
$$
\begin{array}{||c|cc||} \hline 1 & 2 & 3 \\ 2 & 1 & 3 \\ \hline 1 & 3 & 2 \\ 3 & 1 & 2 \\ \hline 2 & 3 & 1 \\ 3 & 2 & 1 \\ \hline \end{array}\,. $$ In this case $U_1=\{1,2\}$ and the intersections $\{1,2\}\cap \{1,3\}$ and $\{1,2\}\cap \{2,3\}$ are nonempty.
\paragraph{Now we consider when a subgroup $A<G$ forms a partition of a $k$-orbit $X_k\in Orb_k(G)$.}
Let a $k$-set $X_k$ be a $k$-projection of a $(k+1)$-set $X_{k+1}$, then we shall say that $X_{k+1}$ is an \emph{extension} of $X_k$ on a $(k+1)$-subspace or is a $(k+1)$-extension of $X_k$.
\begin{lemma}\label{Xk-X(k+1)} Let $X_k\in Orb_k(G)$, $A<G$ and $X_kA=GY_k$ be a partition. Let $X_{k+1}\in Orb_{k+1}(G)$ be an extension of $X_k$, then $X_kA$ generates a partition $X_{k+1}A$.
\end{lemma} {\bfseries Proof:\quad} Let $Y_{k+1}\subset X_{k+1}$ and $Y_k=\hat{p}(I_k)Y_{k+1}$. If $GY_{k+1}$ is not a partition, then evidently $GY_k$ contains intersected classes. $\Box$
\begin{lemma}\label{XAUXB=XAB} Let $X_k\in Orb_k(G)$ and let $A,B<G$ be non-conjugate subgroups, or let $A,B$ be conjugate subgroups and $Y_k,Z_k\subset X_k$ be non-isomorphic $k$-orbits of $A$ and $B$ correspondingly. Let $Y_k,Z_k$ have nonempty intersection and let $X_kA=GY_k,X_kB=GZ_k$ be partitions; then $X_kA\sqcup X_kB=X_kgr(A,B)$.
\end{lemma} {\bfseries Proof:\quad} $G(GY_k\sqcup GZ_k)=GY_k\sqcup GZ_k$. Then $X_kA\sqcup X_kB=GT_k=X_kC$, where $C=gr(A,B)$ according to lemma \ref{L_n,L_n;R_n,R_n}. $\Box$
Without the condition $Y_k\cap Z_k\neq\emptyset$ the equality $X_kA\sqcup X_kB=X_kgr(A,B)$ can lead to misunderstanding for conjugate subgroups $A,B$. Consider an example:
$$
\begin{array}{||cc||} \hline 1 & 2 \\ 2 & 1 \\ \hline 1 & 3 \\ 3 & 1 \\ \hline 2 & 3 \\ 3 & 2 \\ \hline \end{array}\,,\
\begin{array}{||cc||} \hline 1 & 3 \\ 2 & 3 \\ \hline 1 & 2 \\ 3 & 2 \\ \hline 2 & 1 \\ 3 & 1 \\ \hline \end{array}\,. $$ \begin{itemize} \item The subgroups $A=gr((12))$ and $B=gr((13))$ are conjugate in $G=S_3$, so they determine the same partitions of the $2$-orbits of $G$ if we consider isomorphic (non-intersecting) $2$-orbits of these subgroups. Therefore in this case $X_kA\sqcup X_kB=X_kA\neq X_kgr(A,B)$.
\item If we consider non-isomorphic and non-intersecting $2$-orbits of the same subgroup $A=gr((12))$, then we have two different partitions $L_1,L_2$ of the $2$-orbit $X_2$ of $S_3$. So it can be seen that $X_2A\sqcup X_2A\neq X_2gr(A,A)=X_2A$. This misunderstanding follows from interpreting $X_2A$, first, as $S_3Y_2'$, $Y_2'=\{12,21\}$, and, second, as $S_3Y_2''$, $Y_2''=\{13,23\}$, where $Y_2'\cap Y_2''=\emptyset$. If we take $Y_2''=\{12,32\}$, then $A=gr((12))$, $B=gr((13))$ and the formula $X_kA\sqcup X_kB=X_kgr(A,B)$ gives the correct result.
\end{itemize}
\begin{lemma}\label{Co(Y_k)=Co(alpha_k)} Let $X_k\in Orb_k(G)$, $Y_k\subset X_k$, $\alpha_k\in Y_k$ and let $Y_k$ be a maximal subset with the property $Co(Y_k)=Co(\alpha_k)$; then $L_k=GY_k$ is a partition of $X_k$.
\end{lemma} {\bfseries Proof:\quad} By the hypothesis, the permutations of coordinates of $Y_k$ that preserve $Y_k$ also preserve each class of $L_k=GY_k$. $\Box$
\paragraph{In the next statements we shall study the reverse question.} Namely, when does a subset of a $k$-orbit of a group $G$ generate a subgroup of $G$?
\begin{lemma}\label{L_k} Let $X_k\in Orb_k(G)$ and $L_k$ be a partition of $X_k$. If the left action of $G$ on $X_k$ maintains $L_k$, then classes of $L_k$ are $k$-orbits of left cosets of some subgroup $A<G$.
\end{lemma} {\bfseries Proof:\quad} Let $Y_k\in L_k$, then $L_k=GY_k$, hence the subgroup $A=Aut(Y_k;V)\cap G$ acts transitively on $Y_k$. $\Box$
\begin{remark} It can be seen that the partitioning of $X_n$ (and hence of $X_k$) into $G$-isomorphic classes is not sufficient for this partition to be automorphic. This is shown by the following partition $L_n$ of an $n$-orbit $X_n$ of the group $G=C_6$:
$$
\begin{array}{||cccccc||} \hline 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 3 & 4 & 5 & 6 & 1 \\ \hline 3 & 4 & 5 & 6 & 1 & 2 \\ 4 & 5 & 6 & 1 & 2 & 3 \\ \hline 5 & 6 & 1 & 2 & 3 & 4 \\ 6 & 1 & 2 & 3 & 4 & 5 \\ \hline \end{array}\, . $$
The classes of $L_n$ are connected by the permutation $(135)(246)$, but are not automorphic. Let $Y_n\in L_n$; then in this case $GY_n$ is a covering of $X_n$.
\end{remark}
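The remark can be verified by direct computation (our sketch; $C_6$ acts on values by cyclic shifts, and $Y_n$ is the first class of $L_n$):

```python
# the 6 rows of X_n: cyclic shifts of (1, 2, 3, 4, 5, 6)
Xn = [tuple((j + i) % 6 + 1 for i in range(6)) for j in range(6)]
# C_6 as value shifts v -> v + t (mod 6), represented as dicts
G = [{v: (v - 1 + t) % 6 + 1 for v in range(1, 7)} for t in range(6)]

def act(g, Y):
    # left action of g on a class Y of rows
    return frozenset(tuple(g[v] for v in row) for row in Y)

Yn = frozenset({Xn[0], Xn[1]})    # the first class of L_n
GYn = {act(g, Yn) for g in G}
sizes_ok = all(len(Y) == 2 for Y in GYn)
overlap = any(A != B and A & B for A in GYn for B in GYn)
print(len(GYn), overlap)   # 6 classes with overlaps: a covering, not a partition
```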
From lemmas \ref{Co(Y_k)=Co(alpha_k)} and \ref{L_k} follows
\begin{corollary}\label{Y_k-A} Let $X_k\in Orb_k(G)$, $Y_k\subset X_k$, $\alpha_k\in Y_k$ and $Y_k$ be a maximal subset with property: $Co(Y_k)=Co(\alpha_k)$, then $Y_k$ is a $k$-suborbit of $G$.
\end{corollary}
Consider a generalization of lemma \ref{L_k}.
\begin{theorem}\label{|Y_k||GY_k|} Let $X_k\in Orb_k(G)$, $Y_k\subset X_k$ and $L_k=GY_k$ be a covering of
$X_k$. If $|Y_k||L_k|$ divides $|G|$, then $Y_k$ is a $k$-orbit of some subgroup $A<G$.
\end{theorem}
{\bfseries Proof:\quad} By hypothesis, there exists a partition $L_n$ of an $n$-orbit $X_n\in Orb_n(G)$ such that $\hat{p}(I_k)L_n=L_k$ and $|L_n|=|L_k|$. Then there exists $Y_n\in L_n$ such that $\hat{p}(I_k)Y_n=Y_k$ and $G\hat{p}(I_k)Y_n=\hat{p}(I_k)GY_n=GY_k$. Hence $L_n=GY_n$ is a partition of $X_n$ and we can apply lemma \ref{L_n}. $\Box$
\begin{corollary}\label{Qk} Let $X_k\in Orb_k(G)$, $Y_k\subset X_k$ and $L_k=GY_k$ be a covering of $X_k$. Suppose the classes of $L_k$ can be assembled into isomorphic partitions of
$X_k$. Let $Q_k$ be the system of these partitions and let $|Q_k||X_k|$ divide
$|X_n|$. Let $L_k'\in Q_k$ and $Y_k\in L_k'$; then $X_k$ is a $k$-orbit of the subgroup $B=Aut(L_k')\cap G$, $L_k'=BY_k$ and $Y_k$ is a $k$-orbit of some subgroup $A<B$.
\end{corollary}
{\bfseries Proof:\quad} Let $X_n\in Orb_n(G)$ and $X_k=\hat{p}(I_k)X_n$. Since $|Q_k||X_k|$
divides $|X_n|$, there exists a partition $L_n$ of $X_n$ such that $\hat{p}(I_k)L_n=X_k$ and permutations of the classes of $L_n$ correspond to permutations of the partitions from $Q_k$. So the classes of $L_n$ are $n$-orbits of subgroups of $G$ and hence $X_k$ is a $k$-orbit of the subgroup $B=Aut(L_k')\cap G$. Since $BY_k=L_k'$ is a partition of $X_k$ and $B$ acts transitively on $L_k'$, the subgroup $A=Aut(Y_k;V)\cap B$ acts transitively on $Y_k$. $\Box$
An example of such a $G$-isomorphic system of partitions is
$$ Q_1= \{\
\begin{array}{||c||} \hline 1 \\ 2 \\ \hline 3 \\ 4 \\ \hline \end{array}\,,\
\begin{array}{||c||} \hline 1 \\ 3 \\ \hline 2 \\ 4 \\ \hline \end{array}\,,\
\begin{array}{||c||} \hline 1 \\ 4 \\ \hline 2 \\ 3 \\ \hline \end{array}\ \}, $$ where $Q_1$ is formed by $1$-orbits of left cosets of the subgroup $gr((12)(34))<A_4$.
Let $Q_k$ be a set of $S_n$-isomorphic partitions that therefore do not belong to the same $k$-orbit $X_k$; then the action of $G$ on $X_k$ preserves all partitions $L_k^i$ simultaneously, as in the previous example, where now $Q_1$ is formed by $1$-orbits of the non-conjugate subgroups $gr((12)(34))$, $gr((13)(24))$ and $gr((14)(23))$ of the group $S_2\otimes S_2$.
\begin{corollary}\label{rCk-gr} Let $X_k$ be a $k$-rorbit of a group $G$ that contains a $k$-rcycle $rC_k$, then $rC_k$ is a $k$-orbit of some subgroup $A<G$.
\end{corollary} {\bfseries Proof:\quad} It is a special case of the corollary \ref{Y_k-A}. $\Box$
The order of the subgroup $A$ in the corollary can differ from $k$. An example is given by the subgroup $A=gr((1234)(56))<S_6$ and $rC_2=\{\langle 56\rangle,\langle 65\rangle\}$.
Projections of $k$-rcycles from $X_k$ on an $l$-subspace ($l<k$) can have non-trivial intersections, as in the example:
$$ \begin{array}{lcr}
\begin{array}{||cc|c||} \hline 1 & 2 & 3 \\ \hline 2 & 3 & 1 \\ 3 & 1 & 2 \\ \hline \end{array}& \mbox{ and }&
\begin{array}{||cc|c||} \hline 1 & 2 & 4 \\ \hline 2 & 4 & 1 \\ 4 & 1 & 2 \\ \hline \end{array}\,. \end{array} $$ So these projections form a covering of $X_l$.
\subsubsection{The special left action of permutations on $k$-sets}
There exists a left action of permutations on a $k$-orbit $X_k$ of a group $G$ that forms a partition $R_k$ of $X_k$ on $k$-orbits of right cosets of a subgroup $A<G$. It is a partition $R_k=AX_k$.
Classes of $AX_k$, like classes of $AX_n$, are in the general case not isomorphic. Moreover, even if the classes of $AX_n$ have the same order, the classes of $AX_k$ do not always have this property. For example $R_1=gr((12))\{1,2,3\}=\{\{1,2\},\{3\}\}$.
$k$-Orbits of left cosets of a subgroup $A$ can have intersections; $k$-orbits of right cosets of a subgroup $A$ have no intersection.
$k$-Orbits of left cosets of a subgroup $A$ are $k$-orbits of subgroups that are conjugate to $A$; $k$-orbits of right cosets of a subgroup $A$ are $k$-orbits of the subgroup $A$ itself. We assemble these properties in the following statements:
\begin{lemma}\label{conjugate-n} Let $A,B$ be conjugate subgroups of a group $G$ and $X_n$ be a $n$-orbit of $G$, then
\begin{enumerate} \item The partitions of $G$ on left (right) cosets of subgroups $A,B$ are not equal.
\item The partitions of $X_n$ on $n$-orbits of left cosets of subgroups $A,B$ are equal and this partition consists of isomorphic classes.
\item The partitions of $X_n$ on $n$-orbits of right cosets of subgroups $A,B$
are not equal and each partition consists of not isomorphic classes of power $|A|$.
\end{enumerate} \end{lemma} {\bfseries Proof:\quad} The first statement is a fact from group theory, the second repeats lemma \ref{gY_n} and the third follows from lemma \ref{Y_ng}. $\Box$
\begin{corollary}\label{conjugate-k} Let $A,B$ be conjugate subgroups of a group $G$ and $X_k$ be a $k$-orbit of $G$, then
\begin{enumerate} \item The coverings of $X_k$ on isomorphic $k$-orbits of left cosets of subgroups $A,B$ are equal.
\item The partitions of $X_k$ on $k$-orbits of right cosets of subgroups $A,B$ are not equal; each partition consists of non-isomorphic classes, which can differ in power.
\end{enumerate} \end{corollary}
\paragraph{$k$-orbit property of normal subgroups.}
\begin{lemma}\label{AX_k=X_kA} Let $X_k\in Orb_k(G)$, $A<G$ and $R_k=AX_k=X_kA=L_k$. If $A$ is a maximal subgroup, then $A\lhd G$.
\end{lemma}
{\bfseries Proof:\quad} Let $X_n\in Orb_n(G)$, then $R_n=AX_n=X_nA=L_n$, because $L_n$ is the only partition of $X_n$ with property $\hat{p}(I_k)L_n=L_k$ and $|L_n|=|L_k|$, hence $A\lhd G$. $\Box$
\begin{corollary}\label{fix-tuple} Let $Y_k$ be a maximal subset of a $k$-orbit $X_k$ so that $Co(Y_k)=Co(\alpha_k)$ for some $k$-tuple $\alpha_k\in Y_k$, then a stabilizer $Stab(\alpha_k)\lhd Stab(Y_k)$.
\end{corollary}
\begin{lemma}\label{A<|G} Let $A\lhd G$ and $X_k\in Orb_k(G)$, then $R_k=AX_k=X_kA=L_k$.
\end{lemma} {\bfseries Proof:\quad} It is given that $R_n=AX_n=X_nA=L_n$; then $\hat{p}(I_k)AX_n=\hat{p}(I_k)X_nA$, i.e. $AX_k=X_kA$.~$\Box$
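Lemma \ref{A<|G}, and its failure for non-normal subgroups, can be observed in the smallest example: in $S_3$ the left and right orbit partitions coincide for the normal subgroup $A_3$ but differ for $gr((12))$. A sketch (ours; the $n$-orbit $X_n$ of $S_3$ is represented by all $3$-tuples):

```python
from itertools import permutations

Xn = [tuple(p) for p in permutations((1, 2, 3))]

def left(a, t):
    # left action on values: a<v1 v2 v3> = <a(v1) a(v2) a(v3)>
    return tuple(a[v - 1] for v in t)

def right(t, a):
    # right action on positions: <v1 v2 v3> a = <v_{a1} v_{a2} v_{a3}>
    return tuple(t[a[i] - 1] for i in range(3))

def part(A, act):
    # the partition of X_n into orbits of A under the given action
    return {frozenset(act(a, t) for a in A) for t in Xn}

A3  = [(1, 2, 3), (2, 3, 1), (3, 1, 2)]   # normal subgroup A_3 < S_3
A12 = [(1, 2, 3), (2, 1, 3)]              # non-normal subgroup gr((12))

normal_eq    = part(A3,  left) == part(A3,  lambda a, t: right(t, a))
nonnormal_eq = part(A12, left) == part(A12, lambda a, t: right(t, a))
print(normal_eq, nonnormal_eq)   # True False
```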
So we have
\begin{theorem}\label{s-gr} A group $G$ is a simple group if and only if $AX_k\neq X_kA$ for arbitrary $k$, arbitrary $X_k\in Orb_k(G)$ and each subgroup $A<G$.
\end{theorem}
\paragraph{Intersections and unions of $k$-orbits}
Above we have seen (proposition \ref{Y_n*Z_n}) that the intersection of $n$-orbits is an $n$-orbit. The same holds for $k$-orbits:
\begin{lemma}\label{Y_k*Z_k} Let $Y_k$ and $Z_k$ be $k$-orbits, then $T_k=Y_k\cap Z_k$ is a $k$-orbit of $Aut(T_k)=Aut(Y_k)\cap Aut(Z_k)$.
\end{lemma} {\bfseries Proof:\quad} It is sufficient to consider the intersection of $n$-orbits of $Aut(Y_k)$ and $Aut(Z_k)$ and then their corresponding $k$-projections. $\Box$
\begin{corollary}\label{R_k*R_k'} Let $Y_k$ and $Z_k$ be $k$-suborbits of a $k$-orbit $X_k$, $A=Aut(Y_k)\cap Aut(X_k)$ and $B=Aut(Z_k)\cap Aut(X_k)$, then $(A\cap B)X_k=AX_k\sqcap BX_k$.
\end{corollary}
For subgroups $G<Aut(Y_k)$ and $G'<Aut(Z_k)$ with $k$-orbits $Y_k\in Orb_k(G)$ and $Z_k\in Orb_k(G')$ the similar relation is not correct. Let us give a counterexample.
The partition $R_1=AX_1=AV$ is the system of orbits of $A$ on $V$ in the conventional meaning. Let $A=gr((12))<S_3$ and $B=gr((123))<S_3$, then $AV=\{\{1,2\},\{3\}\}$, $BV=\{\{1,2,3\}\}$, $AV\sqcap BV=\{\{1,2\},\{3\}\}$ and $(A\cap B)V=\{\{1\},\{2\},\{3\}\}$.
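The counterexample can be reproduced by computing the orbits directly; in the sketch below (ours) permutations are dicts and an orbit is the closure of a point under the generators:

```python
def orbit(v, gens):
    # orbit of v under the group generated by gens (permutations as dicts)
    seen, frontier = {v}, [v]
    while frontier:
        u = frontier.pop()
        for g in gens:
            w = g[u]
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return frozenset(seen)

V = [1, 2, 3]
a = {1: 2, 2: 1, 3: 3}     # the transposition (12)
b = {1: 2, 2: 3, 3: 1}     # the 3-cycle (123)
AV = {orbit(v, [a]) for v in V}                  # {{1,2},{3}}
BV = {orbit(v, [b]) for v in V}                  # {{1,2,3}}
meet_AB = {frozenset(x & y) for x in AV for y in BV if x & y}
capV = {orbit(v, []) for v in V}                 # A ∩ B is trivial here
print(meet_AB == capV)   # False: the meet of orbit partitions is coarser
```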
This example, lemma \ref{Y_k*Z_k} and corollary \ref{R_k*R_k'} determine the relation between intersections of groups, their $k$-orbits and the corresponding systems of $k$-orbits of right cosets of these groups (subgroups). The same is valid for the intersection of systems of $k$-orbits of left cosets.
The union of partitions of $X_k\in Orb_k(G)$ on $k$-orbits of left cosets of subgroups $A,B<G$ we have considered in lemma \ref{XAUXB=XAB}.
For the union of partitions of $X_k\in Orb_k(G)$ on $k$-orbits of right cosets of subgroups $A,B<G$ we have:
\begin{lemma}\label{AV*BV,AV+BV} $AX_k\sqcup BX_k=gr(A,B)X_k$.
\end{lemma}
\paragraph{The condition of automorphism of a subspace $I_k\subset V$.}
\begin{lemma}\label{k-rorbit} Let $X_n$ be an $n$-orbit of a transitive group $G$, $X_k=\hat{p}(I_k)X_n$,
$|X_n|/|X_k|>1$ and let $I_k$ contain all elements of $V$ that are fixed by $Stab(I_k)<G$; then $I_k$ is automorphic.
\end{lemma} {\bfseries Proof:\quad} Let $Co(I_k)=\{v_i:\, i\in [1,k]\}$, $T_k^1\in Orb_k(Stab(v_1))$, $T_k^1\subset X_k$, $\hat{p}(v_1)T_k^1=v_1$ and $L_k'=GT_k^1$. Let $T_k^i\in Orb_k(Stab(v_i))$ be a class of $L_k'$, then $Stab(I_k)T_k^i=T_k^i$. It follows that all $T_k^i$ consist of $k$-orbits of right cosets of $Stab(I_k)$, and that the systems of these $k$-orbits of right cosets of $Stab(I_k)$ for different $i$ are isomorphic. Hence each $T_k^i$ contains a fixed $k$-tuple $\alpha_k^i$ for which $Co(\alpha_k^i)=Co(I_k)$. The union $\cup_{(i=1,k)} \alpha_k^i$ is a $k$-orbit of the normalizer $N_G(Stab(I_k))$ (corollary \ref{fix-tuple}) that acts transitively on the subset $Co(I_k)$. Hence $I_k$ is automorphic. $\Box$
\subsubsection{The right action of permutations on $k$-sets} \paragraph{Right action isomorphism.} Under the right action of a permutation $g$ on a $k$-tuple $\alpha_k$ we understand the mapping of $\alpha_k$ to the $k$-tuple $\beta_k\equiv \alpha_kg$ that occupies the position of the coordinates of $\alpha_k$ in the same $n$-tuple $\alpha_n$ under the right action of $g$ on $\alpha_n$. If $\alpha_k=\langle v_1v_2\ldots v_k\rangle$, then by definition we write $\alpha_kg=\langle v_{g1}v_{g2}\ldots v_{gk}\rangle$. Thus we consider the $k$-tuple $\alpha_k\subset \alpha_n$ together with its certain position in $\alpha_n$, which we define by the $k$-subspace of coordinates $I_k$.
Let $X_n\in Orb_n(G)$, $Y_n\subset X_n$, $Y_k=\hat{p}(I_k)Y_n$, $g\in S_n$ and $Y_k'=Y_kg=\hat{p}(gI_k)Y_n$; then we say that $Y_k$ and $Y_k'$ are right $S_n$-related. If $g\in G$, then $Y_k$ and $Y_k'$ are right $G$-related. In the general case the image $Y_k'$ is not isomorphic to its original $Y_k$. We shall study when the right action of a permutation transforms a $k$-subset $Y_k$ into an isomorphic $k$-subset $Y_k'$.
\paragraph{Two kinds of right action isomorphism.} \begin{definition}\label{bYk=Yka} If we study $k$-orbits of a group $G$, $X_k\in Orb_k(G)$, $Y_k\subset X_k$, $a,b\in S_n$ and a $k$-subset $Y_k'=Y_ka=bY_k$, then we say that $Y_k'$ and $Y_k$ are right $S_n$-isomorphic (as in first case of example \ref{simple-struc}). If $a,b\in G$, then we say that $Y_k'$ and $Y_k$ are right $G$-isomorphic.
\end{definition}
Let $Y_k=\hat{p}(I_k)Y_n\subset X_k$, $X_k'\in Orb_k(G)$, $Y_k'=\hat{p}(I_k')Y_n\subset X_k'$ and $Y_k'=Y_ka=bY_k$. If $a\in G$ then $X_k'=X_k$, and $Y_k'\subset X_k$ too. Now we shall determine when $a\in G$ implies $b\in G$.
\begin{proposition}\label{SnYk-not-Aut(Xk)Yk} Let $Y_k$ and $Y_k'$ be arbitrary $S_n$-isomorphic subsets of a $k$-orbit $X_k$; then $Y_k$ and $Y_k'$ are not necessarily $Aut(X_k)$-isomorphic.
\end{proposition} {\bfseries Proof:\quad} The $2$-subsets $\{\langle 12\rangle,\langle 23\rangle\}$ and $\{\langle 12\rangle,\langle 24\rangle\}$ from the $2$-orbit $X_2'$ (page \pageref{X_2'}) are $S_n$-isomorphic, but not $Aut(X_2')$-isomorphic. $\Box$
\begin{proposition}\label{SnYk->Aut(Xk)Yk} Let $Y_k$ and $Y_k'$ be arbitrary right $S_n$-isomorphic $k$-subsets of a $k$-orbit $X_k$ of a group $G$, then $Y_k$ and $Y_k'$ are $G$-isomorphic.
\end{proposition}
{\bfseries Proof:\quad} Let $Y_k=\{I_k(i):i\in [1,|Y_n|]\}$ and $Y_k'=\{I_k'(i):i\in [1,|Y_n|]\}$, then $Y_k=\hat{p}(I_k(i))Y_n$ and $Y_k'=\hat{p}(I_k'(i))Y_n$. The maps $I_k(i)\rightarrow I_k'(i)$ are restrictions of automorphisms. Hence the map $\cup I_k(i)\rightarrow \cup I_k'(i)$ is also a restriction of an automorphism. $\Box$
\paragraph{General properties of right permutation action.}
\begin{proposition}\label{Yk'=gYk''f-1} $k$-Orbits of left, right cosets of a subgroup $A$ are connected with elements of~$G$.
\end{proposition} {\bfseries Proof:\quad} From proposition \ref{Yn'=gYn''f-1} we obtain $Y_k'(I_k)=\hat{p}(I_k)Y_n'= \hat{p}(I_k)gY_n''f^{-1} =g\hat{p}(I_k)Y_n''f^{-1} = gY_k''(I_k)f^{-1} = gY_k''(f^{-1}I_k)$. $\Box$
\begin{proposition}\label{H->r.aut} The right action of any automorphism $g\in G$ on a $k$-orbit of a normal subgroup $H\lhd G$ is isomorphic.
\end{proposition} {\bfseries Proof:\quad} Accordingly to proposition \ref{H->Yn=gYng-1} $\hat{p}(I_k)gY_n=\hat{p}(I_k)Y_ng$ or $gY_k=Y_kg$. $\Box$
Now we give a generalization of proposition \ref{Yn=gYng-1->N}.
\begin{lemma}\label{Lk*Rk} Let $A<G$, $X_k\in Orb_k(G)$, $L_k=X_kA=GY_k$, $R_k=AX_k$ and $Q_k=L_k\cap R_k\neq \emptyset$. Let $A$ be a maximal subgroup; then $A$ has a non-trivial normalizer $B=N_G(A)$.
\end{lemma} {\bfseries Proof:\quad} Let $Z_k=\cup Q_k$, then $Aut(Y_k;V)\cap Aut(Z_k;V)\lhd Aut(Z_k;V)$, where $Z_k$ is not necessarily automorphic. Hence $A=Aut(Y_k;V)\cap Aut(Z_k;V)\cap G\lhd Aut(Z_k;V)\cap G=B$. $\Box$
\begin{proposition}\label{gakg-1,fYkf-1} Let $A$ be a subgroup of $G$, $g,f\in G$, $gag^{-1}=a$ for every $a\in A$ and $faf^{-1}\neq a$ for some $a\in A$, but $fAf^{-1}=A$. Let $Y_k$ be a $k$-orbit of $A$ and $\overrightarrow{Y_k}$ be an arbitrary ordering of the set $Y_k$; then $g\overrightarrow{Y_k}=\overrightarrow{Y_k}g$ and $fY_k=Y_kf$, but $f\overrightarrow{Y_k}\neq\overrightarrow{Y_k}f$.
\end{proposition} {\bfseries Proof:\quad} It follows from proposition \ref{g*an*g-1,f*Yn*f-1}. $\Box$
\paragraph{Right permutation action on $k$-rcycles.} \begin{lemma}\label{C_k-conc} Let $C_{lk}$ be a $(l,k)$-cycle, then $C_{lk}=\uplus C_{l_1p_1}\circ \uplus C_{l_2p_2}\circ\ldots \circ \uplus C_{l_qp_q}$, where
$\sum_{i=1}^qp_i=k$, $l_i$ divides $|\uplus C_{l_ip_i}|=l$, $C_{l_ip_i}$ is a $p_i$-projection of a $l_i$-rcycle and different $l_i$-rcycles have no intersection on $V$.
\end{lemma} {\bfseries Proof:\quad} From definition it follows that $C_{lk}=gr(g)\alpha_k$ for some permutation $g$ and $k$-tuple $\alpha_k$. Let $n$-tuple $\beta_n=\beta_{l_1}\circ \beta_{l_2}\circ \ldots\circ \beta_{l_q}$, $g=(\beta_{l_1})(\beta_{l_2})\ldots (\beta_{l_q})$ and $m=\sum_{i=1}^ql_i$, then $g$ generates $(l,m)$-cycle $C_{lm}=\uplus rC_{l_1}\circ \uplus rC_{l_2}\circ\ldots\circ \uplus rC_{l_q}$. The $(l,k)$-cycle $C_{lk}$ is a $k$-projection of the $(l,m)$-cycle $C_{lm}$.~$\Box$
The $(l,k)$-cycle $C_{lk}$ can be represented as a concatenation of $(l,p_i)$-multiorbits whose $p_i$-projections are either $p_i$-tuples, or $S_{l_i}^{p_i}$-orbits, or $p_i$-projections of $l_i$-rcycles. This is obtained from lemma \ref{C_k-conc} by singling out the concatenation of fixed $1$-tuples and reassembling the cycles into $S_{l_i}^{p_i}$-orbits, as in the example:
$$
\begin{array}{||cc|c|c|cc|cc||} \hline 1 & 2 & 3 & 4 & 5 & 7 & 6 & 8 \\ 2 & 1 & 3 & 4 & 7 & 5 & 8 & 6 \\ \hline \end{array}=
\begin{array}{||cc|cc|cc|cc||} \hline 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ 2 & 1 & 3 & 4 & 7 & 8 & 5 & 6 \\ \hline \end{array}\,. $$
The difference in representations of type
$$
\begin{array}{||cc|cc||} \hline 5 & 7 & 6 & 8 \\ 7 & 5 & 8 & 6 \\ \hline \end{array}\ \mbox{ and }\
\begin{array}{||cc|cc||} \hline 5 & 6 & 7 & 8 \\ 7 & 8 & 5 & 6 \\ \hline \end{array} $$ can be important if rcycles $\{\langle 57\rangle,\langle 75\rangle\}$ and $\{\langle 68\rangle,\langle 86\rangle\}$ are not $G$-isomorphic.
There exist cases where the partitioning of a $(l,k)$-cycle into right $G$-related concatenation components cannot be represented through projections of the three base types of $p$-orbits. The simplest such case is given by the $3$-orbit of the subgroup $gr((12))<S_3$. For this case we have the following right $G$-related $2$-orbits of $gr((12))$:
$$
\begin{array}{||cc||} \hline 1 & 2 \\ 2 & 1 \\ \hline \end{array}\,,\
\begin{array}{||cc||} \hline 1 & 3 \\ 2 & 3 \\ \hline \end{array}\ \mbox{ and }\
\begin{array}{||cc||} \hline 2 & 3 \\ 1 & 3 \\ \hline \end{array}\,. $$
The existence of such a decomposition of a $(n,k)$-cycle under the condition $k|n$ leads to some intricate structures such as, for example, the automorphism group of the Petersen graph (see below).
\paragraph{Finite group permutation representation.} Let us consider some examples with different properties of the right permutation action. The $2$-orbit of the subgroup $gr((12))<S_3$ shows the existence of cases with no non-trivial isomorphic right permutation action for $k$-orbits of a non-normal subgroup. The example \ref{S2*S2} (see below) shows the existence of the right $G$-isomorphism for the $2$-orbits $\{\langle 12\rangle,\langle 21\rangle\}$ and $\{\langle 34\rangle,\langle 43\rangle\}$ of the normal subgroup $gr((12)(34))$ of the group $S_2\otimes S_2$, which follows from proposition \ref{H->r.aut}. The next example is an $n$-orbit of a group $G$ that is the regular permutation representation of $S_3$ in two assemblies.
\begin{example} \label{S3(6)} $$ \begin{array}{lcr}
\begin{array}{||cc|cc|cc||} \hline 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 2 & 1 & 5 & 6 & 3 & 4 \\ \hline 3 & 4 & 1 & 2 & 6 & 5 \\ \hline 4 & 3 & 6 & 5 & 1 & 2 \\ \hline 5 & 6 & 2 & 1 & 4 & 3 \\ \hline 6 & 5 & 4 & 3 & 2 & 1 \\ \hline \end{array}& \mbox{and}&
\begin{array}{||cc|cc|cc||} \hline 1 & 2 & 3 & 5 & 4 & 6 \\ 2 & 1 & 5 & 3 & 6 & 4 \\ \hline 3 & 4 & 1 & 6 & 2 & 5 \\ 4 & 3 & 6 & 1 & 5 & 2 \\ \hline 5 & 6 & 2 & 4 & 1 & 3 \\ 6 & 5 & 4 & 2 & 3 & 1 \\ \hline \end{array}\,. \end{array} $$
\end{example}
The first table is partitioned relative to $G$-isomorphic $2$-subspaces and the second relative to $S_n$-isomorphic $2$-subspaces. The example shows that there is no right $G$-isomorphism for the $2$-rcycle $\{12,21\}$, but there is a right $S_6$-isomorphism for this $2$-rcycle. This fact can be explained with the following argument: the subgroup defined by the $2$-rcycle $\{12,21\}$ has trivial normalizer, and hence its $2$-orbits of left cosets need not be $G$-isomorphic to its $2$-orbits of right cosets, on the one hand; but $G$ is regular and hence guarantees the existence of the isomorphic right action, on the other hand.
This example shows the difference in the properties of the right permutation action in various permutation representations of a finite group. To examine this difference, we first recall some facts from finite group theory.
Let $F$ be a finite group, $A<F$, $|F|/|A|=n$, and let $\overrightarrow{L_n}$, $\overrightarrow{R_n}$ be the ordered partitions $FA$ and $AF$. It is known that every transitive permutation representation of $F$ is equivalent to the representation of $F$ given by the $n$-orbits $X_n'=\{f\overrightarrow{L_n}: f\in F\}$ or $X_n''=\{\overrightarrow{R_n}f: f\in F\}$. It is also known that $F$ maps homomorphically onto its image $Aut(X_n')$ (respectively $Aut(X_n'')$), the kernel of the homomorphism being the maximal normal subgroup of $F$ contained in $A$. Further we always assume that a finite group is isomorphic to its representation.
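The coset action just described can be sketched in code (our own illustration, not part of the text's formalism; the helper names compose, closure, and coset_action are ours, and permutations are stored as 0-based tuples):

```python
def compose(a, b):
    # (a * b)(i) = a(b(i)): apply b first, then a
    return tuple(a[b[i]] for i in range(len(a)))

def closure(gens):
    # the permutation group generated by gens (breadth-first closure)
    group = {tuple(range(len(gens[0])))}
    frontier = list(group)
    while frontier:
        new = []
        for g in frontier:
            for h in gens:
                p = compose(g, h)
                if p not in group:
                    group.add(p)
                    new.append(p)
        frontier = new
    return group

def coset_action(G, A):
    # right cosets A g; an element h acts by A g -> A g h
    cosets, index = [], {}
    for g in sorted(G):
        c = frozenset(compose(a, g) for a in A)
        if c not in index:
            index[c] = len(cosets)
            cosets.append(c)
    return {tuple(index[frozenset(compose(x, h) for x in c)] for c in cosets)
            for h in G}

# F = S3 and A = gr((12)): the degree is |F|/|A| = 3, and the action is
# faithful because A contains no non-trivial normal subgroup of S3
S3 = closure([(1, 0, 2), (1, 2, 0)])
A = closure([(1, 0, 2)])
rep = coset_action(S3, A)
print(len(rep))  # 6 distinct degree-3 permutations
```

Since the kernel of the action is the maximal normal subgroup of $F$ contained in $A$, the six images are distinct, illustrating the statement above.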
A subgroup $A$ of a finite group $F$ that is maximal by inclusion among the subgroups containing no non-trivial normal subgroup of $F$ we call an \emph{md-stabilizer} of $F$, and the corresponding representation of $F$ we call an \emph{md-representation}. An md-stabilizer $A$ of a finite group $F$ defines a minimal degree permutation representation of $F$ in the family of permutation representations of $F$ defined by the subgroups of $A$. A maximal degree permutation representation of $F$ is correspondingly the permutation representation defined by the trivial subgroup consisting of the unit of the group; this representation is called the regular representation of $F$.
A finite group can have several (non-conjugate) md-stabilizers. For example, $S_5$ contains a transitive md-stabilizer of order $20$ generated by the permutations $(12345)$ and $(1243)$, and an intransitive md-stabilizer of order $12$ generated by the permutations $(123)(45)$ and $(23)$. The $n$-orbits of these md-stabilizers are, respectively:
\begin{example}\label{S_5,md} $$ \begin{array}{lcr}
\begin{array}{||ccccc||} \hline 1 & 2 & 3 & 4 & 5 \\ 2 & 3 & 4 & 5 & 1 \\ 3 & 4 & 5 & 1 & 2 \\ 4 & 5 & 1 & 2 & 3 \\ 5 & 1 & 2 & 3 & 4 \\ \hline 5 & 4 & 3 & 2 & 1 \\ 4 & 3 & 2 & 1 & 5 \\ 3 & 2 & 1 & 5 & 4 \\ 2 & 1 & 5 & 4 & 3 \\ 1 & 5 & 4 & 3 & 2 \\ \hline 1 & 3 & 5 & 2 & 4 \\ 3 & 5 & 2 & 4 & 1 \\ 5 & 2 & 4 & 1 & 3 \\ 2 & 4 & 1 & 3 & 5 \\ 4 & 1 & 3 & 5 & 2 \\ \hline 4 & 2 & 5 & 3 & 1 \\ 2 & 5 & 3 & 1 & 4 \\ 5 & 3 & 1 & 4 & 2 \\ 3 & 1 & 4 & 2 & 5 \\ 1 & 4 & 2 & 5 & 3 \\ \hline \end{array}& \mbox{ and }&
\begin{array}{||ccc|cc||} \hline 1 & 2 & 3 & 4 & 5 \\ 2 & 3 & 1 & 4 & 5 \\ 3 & 1 & 2 & 4 & 5 \\ \hline 1 & 2 & 3 & 5 & 4 \\ 2 & 3 & 1 & 5 & 4 \\ 3 & 1 & 2 & 5 & 4 \\ \hline 1 & 3 & 2 & 4 & 5 \\ 2 & 1 & 3 & 4 & 5 \\ 3 & 2 & 1 & 4 & 5 \\ \hline 1 & 3 & 2 & 5 & 4 \\ 2 & 1 & 3 & 5 & 4 \\ 3 & 2 & 1 & 5 & 4 \\ \hline \end{array}\,. \end{array} $$ \end{example}
The first md-stabilizer is a representation of the group $C_5C_4=C_4C_5$. The representation of $S_5$ with this md-stabilizer has degree $6$, equal to the maximal order of an element of $S_5$. The second md-stabilizer is a representation of the group $C_6\otimes C_2$. The representation of $S_5$ with the second md-stabilizer is (as we shall see) the automorphism group of the Petersen graph.
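The stated orders of the two md-stabilizers are easy to confirm computationally; the following sketch (ours, not part of the text's formalism) generates each subgroup from the cited permutations:

```python
def compose(a, b):
    # (a * b)(i) = a(b(i)); permutations are tuples over 0..n-1
    return tuple(a[b[i]] for i in range(len(a)))

def closure(gens):
    # the permutation group generated by gens
    group = {tuple(range(len(gens[0])))}
    frontier = list(group)
    while frontier:
        new = []
        for g in frontier:
            for h in gens:
                p = compose(g, h)
                if p not in group:
                    group.add(p)
                    new.append(p)
        frontier = new
    return group

def from_cycles(n, cycles):
    # build a 0-based permutation tuple from 1-based disjoint cycles
    img = list(range(n))
    for cyc in cycles:
        for i, v in enumerate(cyc):
            img[v - 1] = cyc[(i + 1) % len(cyc)] - 1
    return tuple(img)

# transitive md-stabilizer gr((12345), (1243)) and
# intransitive md-stabilizer gr((123)(45), (23)) of S5
T = closure([from_cycles(5, [(1, 2, 3, 4, 5)]), from_cycles(5, [(1, 2, 4, 3)])])
I = closure([from_cycles(5, [(1, 2, 3), (4, 5)]), from_cycles(5, [(2, 3)])])
print(len(T), len(I))  # 20 12
```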
Because of the property given in proposition \ref{p<n}, special interest attaches to an md-stabilizer of a finite group of maximal order, which we call a \emph{least degree stabilizer} or an \emph{ld-stabilizer} of the finite group. The corresponding representation of the finite group we call a transitive least degree representation or \emph{tld-representation}. That property also urges us to consider a least degree intransitive representation or \emph{ild-representation} of a finite group, which for some groups, for example for $C_6$, has degree less than the degree of the tld-representation. So by a lowest degree representation or \emph{ld-representation} we shall understand the representation of smallest degree among the tld- and ild-representations.
This consideration raises a question: \emph{does there exist a finite group with two non-similar ld-representations?} If there exist no two non-similar ld-representations, then the ld-representation is a complete invariant of a finite group, and hence the study of the numerical invariants of a finite group could be reduced to the study of the numerical invariants of its ld-representation.
For non-minimal degree representations, a simple example of non-similar permutation representations of the same degree is given by the representations of $S_4$ on the sets of right cosets of the subgroups $gr((12))$ and $gr((12)(34))$. The first representation contains a stabilizer of a $2$-tuple on a $12$-element set $V$ and the second contains a stabilizer of a $4$-tuple on $V$.
Let $\phi:F\rightarrow S_n(V)$. We shall denote the image $\phi(F)$ of the group $F$ under the representation $\phi$ by $F(V)$ and also call $F(V)$ a representation of $F$. For convenience, the $n$-orbit of $F(V)$ will also be called a representation of $F$.
\paragraph{One possible reformulation of the polycirculant conjecture.} Let $F$ be a finite group and $A<F$ be an md-stabilizer. If $F$ contains a subgroup $P$ of prime order $p$ that is conjugate to no subgroup of $A$, then $P$ is a regular subgroup of the representation $F(AF)$ and hence $p$ divides $n=|AF|$.
So the polycirculant conjecture states that if $F(AF)$ is a $2$-closed representation of a finite group $F$, then $F$ contains a corresponding subgroup $P$.
To all appearances this approach cannot be successful, because it lies outside the inner structure of an $n$-orbit.
\paragraph{Some properties of ld-representations.} \begin{lemma}\label{A_mxB_l} Let $A_m$ and $B_l$ be ld-representations; then the ld-representation of the group $A_m\otimes B_l$ has degree $n=m+l$.
\end{lemma}
The simplest example is
$$
\begin{array}{||cc|cc||} \hline 1 & 2 & 3 & 4 \\ 1 & 2 & 4 & 3 \\ \hline 2 & 1 & 3 & 4 \\ 2 & 1 & 4 & 3 \\ \hline \end{array}. $$
\begin{theorem}\label{ild=ld} The ld-representation of a finite group $F$ is an ild-representation if and only if $F$ is a direct product.
\end{theorem} {\bfseries Proof:\quad} If $F$ is a direct product, then the statement follows from lemma \ref{A_mxB_l}. Conversely, let $X_n=X_{m+l}=X_m\circ X_l$ be an ild-representation of $F$, where $X_m$ and $X_l$ are transitive; then $|X_n|/|X_m|$ and $|X_n|/|X_l|$ are greater than $1$. It follows that a stabilizer of an $m$-tuple from $X_m$ and a stabilizer of an $l$-tuple from $X_l$ are normal subgroups of $F$ (corollary \ref{fix-tuple}) that commute elementwise. $\Box$
\begin{lemma}\label{ld=regular} The ld-representation of a group $F$ is regular if and only if $F$ is not a direct product and has a trivial ld-stabilizer.
\end{lemma}
\begin{corollary}\label{C_(p^m)} The regular representation of a cyclic $p$-group is its ld-representation.
\end{corollary}
\begin{corollary}\label{C_n} Let $C_n$ be the ld-representation of a cyclic group $C$ of order $p_1^{m_1}\cdots p_q^{m_q}$, where the $p_i$ are distinct primes and $q>1$; then $C_n$ is intransitive and $n=p_1^{m_1}+\ldots +p_q^{m_q}$.
\end{corollary}
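The corollary gives a simple formula: the ld-degree of a cyclic group of order $n=p_1^{m_1}\cdots p_q^{m_q}$ is the sum of the maximal prime-power divisors $p_i^{m_i}$ of $n$. A minimal sketch computing it from $n$ (the function name is ours):

```python
def ld_degree_cyclic(n):
    # sum of the maximal prime-power divisors p_i^{m_i} of n:
    # one cycle of length p_i^{m_i} per prime, on disjoint point sets
    total, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            pk = 1
            while n % d == 0:
                n //= d
                pk *= d
            total += pk
        d += 1
    if n > 1:
        total += n
    return total

print(ld_degree_cyclic(6), ld_degree_cyclic(8), ld_degree_cyclic(12))  # 5 8 7
```

For a single prime power the formula returns the order itself, the degree of the regular representation, in agreement with corollary \ref{C_(p^m)}; for $C_6$ it gives $2+3=5<6$, the situation mentioned earlier for ild-representations.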
\begin{lemma}\label{any gr.-st.} Any finite group is an ld-stabilizer of some finite group.
\end{lemma} {\bfseries Proof:\quad} Let $A$ be a finite group; then $A$ is an ld-stabilizer of the group $F=gr(A\otimes B,d)$, where the group $B$ is isomorphic to $A$, $d$ is an involution and $dA\otimes B=A\otimes Bd$. $\Box$
The corresponding example is given by the representation of the dihedral group $D_4$:
$$
\begin{array}{||cc|cc||} \hline 1 & 2 & 3 & 4 \\ 1 & 2 & 4 & 3 \\ \hline 2 & 1 & 3 & 4 \\ 2 & 1 & 4 & 3 \\ \hline 3 & 4 & 1 & 2 \\ 3 & 4 & 2 & 1 \\ \hline 4 & 3 & 1 & 2 \\ 4 & 3 & 2 & 1 \\ \hline \end{array}. $$
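As a quick sanity check (our own sketch, not part of the construction), the eight rows of this table, read as 0-based permutation tuples, are closed under composition and hence form a group of order $8$:

```python
def compose(a, b):
    # (a * b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(len(a)))

# the eight rows of the table above, 0-based: images of the points 1,2,3,4
rows = [(0, 1, 2, 3), (0, 1, 3, 2), (1, 0, 2, 3), (1, 0, 3, 2),
        (2, 3, 0, 1), (2, 3, 1, 0), (3, 2, 0, 1), (3, 2, 1, 0)]
G = set(rows)

# closure under composition confirms the rows form a group of order 8
assert all(compose(g, h) in G for g in G for h in G)
print(len(G))  # 8
```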
\paragraph{The following statements make the object that we study more visible.}
\begin{proposition}\label{not-intersect} Let $F$ be a finite group and let $F(F)$, $F(V)$ be two of its images. Let $\alpha_k(V),\beta_k(V)$ be $k$-tuples from $V^{(k)}$ and let the $l$-tuples $\alpha_l(F),\beta_l(F)$ be their images in $F^{(l)}$; then $\alpha_k(V)$ and $\beta_k(V)$ have no intersection on $V$ if and only if $\alpha_l(F)$ and $\beta_l(F)$ have no intersection on $F$.
\end{proposition}
{\bfseries Proof:\quad} Let $X_n$ be an $n$-orbit of $F(V)$ and $Y_m$ ($m=|F|$) be an $m$-orbit of $F(F)$. The $k$-tuples $\alpha_k(V)$ and $\beta_k(V)$ are situated in $X_n$ and the $l$-tuples $\alpha_l(F)$ and $\beta_l(F)$ are situated in $Y_m$. The statement follows from the method of reconstruction of $Y_m$ into $X_n$, namely the substitution of certain $(m/n)$-tuples of $Y_m$ that do not intersect on $F$ by certain distinct elements of $V$. $\Box$
From this proposition we directly obtain
\begin{corollary}\label{Lk-Stabilizer} Let $F$ be a finite group, $A<F$ be a md-stabilizer, $B<A$, $V=FB$ and
$G=F(V)$. Let $|G|/|A|=l$, $|A|/|B|=k$, $kl=n$, $X_n\in Orb_n(G)$, $Y_n\subset X_n$ be a $n$-orbit of the subgroup $A(V)<G$ and $L_n=GY_n=X_nA(V)$. Let $I_k=AB\subset V$, $X_k=\hat{p}(I_k)X_n$ and $Y_k=\hat{p}(I_k)Y_n$.
\begin{enumerate} \item Let $L_k=X_kA=GY_k=\hat{p}(I_k)L_n$, then classes of $L_k$ have no intersection on $V$ and hence $L_k$ is a partition of $X_k$.
\item Let $Y_n'\in L_n$, $Y_k'=\hat{p}(I_k)Y_n'$ and $Y_{n-k}'= \hat{p}(I_{n-k})Y_n'$, then $Y_k'$ and $Y_{n-k}'$ have no intersection on $V$.
\end{enumerate} \end{corollary}
\begin{proposition}\label{p<n}
Let $F$ be a finite group and $p$ be a prime divisor of $|F|$. Let $\{n_1,\,n_2,\,\ldots\}$ be the degrees of the transitive components of the ld-representation of $F$; then $p\leq \max(n_i)$.
\end{proposition} {\bfseries Proof:\quad} The group $F$ contains a subgroup of order $p$, and any permutation $g\in S_n$ of prime order $p$ decomposes into cycles of length either $p$ or $1$; hence some transitive component has degree at least $p$. $\Box$
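The cycle-type fact used in the proof can be verified exhaustively for a small symmetric group (an illustrative sketch; the helper cycle_lengths is ours):

```python
from itertools import permutations
from math import lcm

def cycle_lengths(p):
    # lengths of the disjoint cycles of a permutation given as a tuple
    n, seen, out = len(p), set(), []
    for i in range(n):
        if i not in seen:
            l, j = 0, i
            while j not in seen:
                seen.add(j)
                j = p[j]
                l += 1
            out.append(l)
    return out

# every element of S5 of prime order p splits into cycles of length p or 1,
# so p cannot exceed the largest transitive-component degree
for p in permutations(range(5)):
    ls = cycle_lengths(p)
    o = lcm(*ls)  # the order of the permutation
    if o in (2, 3, 5):
        assert set(ls) <= {o, 1}
print("checked all of S5")
```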
From the definition of a minimal degree permutation representation there follows
\begin{proposition}\label{reg->reg} If a minimal degree permutation representation contains a regular element, then a subordinate non-minimal degree permutation representation contains a regular element too.
\end{proposition}
\paragraph{$n$-Orbits containing $S_n$-isomorphic $\mathbf{k}$-orbits.}
\begin{proposition}\label{Sn-iso-intr} Let $X_n\in Orb_n(G)$, $A<G$, $Y_n\in Orb_n(A)$ and let $I_k,J_k$ be $k$-subspaces. Let $\hat{p}(I_k)Y_n$ be $S_n$-isomorphic to $\hat{p}(J_k)Y_n$; then $\hat{p}(I_k)X_n$ is not necessarily $S_n$-isomorphic to $\hat{p}(J_k)X_n$.
\end{proposition} {\bfseries Proof:\quad} An intransitive example is simple to construct: $$
\begin{array}{||cc|cc|cc||} \hline 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 1 & 4 & 3 & 6 & 5 \\ \hline 3 & 4 & 1 & 2 & 5 & 6 \\ 4 & 3 & 2 & 1 & 6 & 5 \\ \hline \end{array}\,. $$ Here the $2$-suborbits $\{\langle 12\rangle,\langle 21\rangle\}$ and $\{\langle 56\rangle,\langle 65\rangle\}$ are $S_n$-isomorphic, but the corresponding $2$-orbits have different cardinalities. A transitive example is not evident; one is presented in the $n$-orbit of the automorphism group of the Petersen graph (see below). $\Box$
\begin{theorem}\label{Sn-iso} A non-minimal degree permutation representation of a finite group $F$ contains $S_n$-isomorphic $k$-orbits.
\end{theorem}
{\bfseries Proof:\quad} Let $B<A<F$, $|FB|=n$, $|FA|=m$ and $|AB|=k$. Let $F(FA)$ be minimal and $F(FB)$ be non-minimal degree permutation representations of a finite group $F$. Let $X_n$ be a $n$-orbit of $F(FB)$ and $X_m'$ be a $m$-orbit of $F(FA)$. Let $Y_n\subset X_n$ be a $n$-orbit of $A(FB)$ and $Y_m'\subset X_m'$ be a $m$-orbit of $A(FA)$. Let $Z_n\subset Y_n$ be a $n$-orbit of $B(FB)$ and $Z_m'\subset Y_m'$ be a $m$-orbit of $B(FA)$. Let $P<B$ be a subgroup of a prime order $p$. Let $T_n\subset Z_n$ be a $n$-orbit of $P(FB)$ and $T_m'\subset Z_m'$ be a $m$-orbit of $P(FA)$.
Let $Y_k$, $Z_k\subset Y_k$ and $T_k\subset Z_k$ be $k$-orbits of $A(AB)$, $B(AB)$ and $P(AB)$ correspondingly and let $T_{n-k}\circ T_k=T_n$.
The $m$-orbit $T_m'$, the $n$-orbit $T_n$ and the $k$-orbit $T_k$ can each be represented as a concatenation of $p$-rcycles and multituples.
A $p$-rcycle from $T_m'$ generates $k$ $p$-rcycles in $T_{n-k}$ that are associated with a cyclic permutation of $k$ right cosets of $A$. But a multituple from $T_m'$ generates $p$-rcycles and a multituple in $T_k$. The latter $p$-rcycles exist because $P<B<A$ and hence the action of $P$ on $T_k$ permutes right cosets of $B<A$.
A $p$-rcycle from $T_n$ that is generated by a $p$-rcycle from $T_m'$ is evidently not $F(FB)$-isomorphic to a $p$-rcycle from $T_n$ that is generated by a multituple from $T_m'$. $\Box$
This situation is demonstrated in example \ref{S3(6)}.
This property can also emerge in an md-representation of a finite group $F$. An example is given by the group $F=C_6\otimes C_2$ in the following representation:
\begin{example}\label{C6*C2} $$
\begin{array}{||cc|cc|cc||} \hline 1 & 6 & 2 & 5 & 3 & 4 \\ 6 & 1 & 5 & 2 & 4 & 3 \\ \hline 2 & 1 & 3 & 6 & 4 & 5 \\ 1 & 2 & 6 & 3 & 5 & 4 \\ \hline 3 & 2 & 4 & 1 & 5 & 6 \\ 2 & 3 & 1 & 4 & 6 & 5 \\ \hline 4 & 5 & 3 & 6 & 2 & 1 \\ 5 & 4 & 6 & 3 & 1 & 2 \\ \hline 3 & 4 & 2 & 5 & 1 & 6 \\ 4 & 3 & 5 & 2 & 6 & 1 \\ \hline 5 & 6 & 4 & 1 & 3 & 2 \\ 6 & 5 & 1 & 4 & 2 & 3 \\ \hline \end{array}\,. $$ \end{example}
It is seen that the $2$-rcycle $\{\langle 25\rangle,\langle 52\rangle\}$ does not belong to the $2$-orbit of $G$ containing the $2$-rcycle $\{\langle 16\rangle,\langle 61\rangle\}$. This matrix is an md-representation of the finite group $F$, but not an ld-representation, and it contains a submatrix that is a non-minimal degree representation of the subgroup $S_3<F$.
In this example the $p$-subgroup does not belong to a stabilizer, but, using the construction given in lemma \ref{any gr.-st.}, we obtain the property for a $p$-subgroup (of order $p$) of a stabilizer of~$F$.
The considered example suggests the next property of $n$-orbits.
\begin{proposition}\label{nmd Zn} Let $I=\{I_k^i:i\in [1,l]\}$ be a partition of $V$ on $G$-isomorphic $k$-subspaces and $W=\{Co(I_k): I_k\in I\}$.
Let $Z_n$ be a maximal subset of $X_n\in Orb_n(G)$ so that $Co(\hat{p}(I_k^i)Z_n)=W$ for each $i\in [1,l]$.
Let $A<Aut(Z_n)$, $Y_n\subset Z_n$ be a $n$-orbit of $A$, $Y_k^1=\hat{p}(I_k^1)Y_n$ be the representation $A(I_k^1)$ and
$Y_k^2=\hat{p}(I_k^2)Y_n$ be an $S_{|A|}^k$-orbit; then $Z_n$ is a non-minimal degree representation.
\end{proposition} {\bfseries Proof:\quad} $Z_n$ is automorphic, because $GZ_n$ is a partition of $X_n$. From proposition \ref{H->r.aut} it follows that $A$ contains no normal subgroup of $Aut(Z_n)$. Hence $Z_n$, which is defined on $V$, is isomorphic to an $l$-orbit $Z_l'$ that is defined on $W$ and obtained by the evident reduction of $Z_n$. $\Box$
The next example of a minimal degree representation of the group $S_5\otimes S_2$ contains $S_n$-isomorphic $p$-orbits in the case $(p,n)=1$.
\begin{example}\label{S5*S2} $$
\begin{array}{||c|cc|cc||} \hline 1 & 2 & 5 & 3 & 4 \\ 1 & 5 & 2 & 4 & 3 \\ \hline 2 & 3 & 1 & 4 & 5 \\ 2 & 1 & 3 & 5 & 4 \\ \hline 3 & 4 & 2 & 5 & 1 \\ 3 & 2 & 4 & 1 & 5 \\ \hline 4 & 5 & 3 & 1 & 2 \\ 4 & 3 & 5 & 2 & 1 \\ \hline 5 & 1 & 4 & 2 & 3 \\ 5 & 4 & 1 & 3 & 2 \\ \hline \end{array}\,. $$ \end{example}
This example shows the existence of an $n$-orbit of a subgroup of order $p$ that contains $S_n$-isomorphic $p$-rcycles, but no $S_p^p$-orbit $G$-isomorphic to a $p$-rcycle. Here the $2$-rcycles $\{\langle 25\rangle,\langle 52\rangle\}$ and $\{\langle 34\rangle,\langle 43\rangle\}$ are $S_n$-isomorphic, the $S_p^p$-orbits $\langle\langle 23\rangle\langle 54\rangle\rangle$ and $\langle\langle 54\rangle\langle 23\rangle\rangle$ are equal and hence $G$-isomorphic, and the $2$-rcycle $\{\langle 25\rangle,\langle 52\rangle\}$ is $G$-isomorphic to the $2$-orbit $\{\langle 13\rangle,\langle 14\rangle\}$. We shall see that the properties of this $5$-orbit give rise to unconventional properties of the $10$-orbit of the Petersen graph automorphism group.
\begin{theorem}\label{(p,n)=1,mdr}
Let a prime $p$ divide $|G|$ but not $n$; then $G$ is an md-representation.
\end{theorem} {\bfseries Proof:\quad} Let $(\alpha_p)$ be a cycle, $X_p=G\alpha_p$, $L_p=G(\alpha_p)\alpha_p$ and $L_1=\hat{p}(I_1)L_p$; then $L_1=Co(L_p)$ is a covering of $V$ by $G$-isomorphic $p$-subsets. We consider the two possible cases.
\begin{enumerate} \item Let $\sqcup L_1=V$; then $G$ is a primitive group and hence there exists no partition $Q$ of $V$ into $G$-isomorphic subsets for which $G(Q)$ would be a representation of $G(V)$. Indeed, if $g\in G$ is a permutation of order $p$, then $gQ\sqcap Q\neq Q$ for any partition $Q$. This contradicts proposition~\ref{not-intersect}.
\item Let now $\sqcup L_1=Q$, where $Q$ is a partition of $V$; then $Q$ consists of $G$-isomorphic classes. But in this case, because of the transitivity of $G$ and $(n,p)=1$, there exists an $n$-orbit of a $p$-subgroup whose projections on the subspaces from $Q$ are $G$-isomorphic, and hence $X_n$ does not contain
$S_n$-isomorphic $(n/|Q|)$-orbits. Thus, according to theorem \ref{Sn-iso}, $X_n$ is a minimal degree permutation representation. $\Box$
\end{enumerate}
An example of the second case of a representation of a finite group $F$ can be obtained from lemma~\ref{any gr.-st.} by taking $A=A_4$ and $p=3$. The consideration of this example for $A=S_3$ and $p=2$ shows that the condition that $p$ does not divide $n$ is not essential for the second case of theorem~\ref{(p,n)=1,mdr}. We shall see that precisely this situation takes place in the $10$-orbit of the Petersen graph automorphism group.
\paragraph{Conditions of $k$-closure and properties of $k$-closed groups.}
\begin{proposition}\label{Xk,Yk Aut} Let $Y_k\in Orb_k(Aut(X_k))$; then it does not follow that $Aut(Y_k)=Aut(X_k)$.
\end{proposition} {\bfseries Proof:\quad} An example: $X_2=\{\langle 12\rangle,\langle 23\rangle,\langle 34\rangle,\langle 41\rangle\}$, $Y_2=\{\langle 13\rangle,\langle 31\rangle,\langle 24\rangle,\langle 42\rangle\}$. $\Box$
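This counterexample can be confirmed by brute force (our own sketch; aut is a hypothetical helper computing the automorphisms of a set of ordered pairs):

```python
from itertools import permutations

def aut(arcs, n=4):
    # automorphisms of a set of ordered pairs (a "2-orbit") under S_n
    arcs = set(arcs)
    return [g for g in permutations(range(n))
            if {(g[u], g[v]) for u, v in arcs} == arcs]

X2 = [(0, 1), (1, 2), (2, 3), (3, 0)]  # the directed 4-cycle <12>,<23>,<34>,<41>
Y2 = [(0, 2), (2, 0), (1, 3), (3, 1)]  # <13>,<31>,<24>,<42>

# Y2 is a 2-orbit of Aut(X2): the orbit of (0,2) under Aut(X2) is exactly Y2
assert {(g[0], g[2]) for g in aut(X2)} == set(Y2)
print(len(aut(X2)), len(aut(Y2)))  # 4 8: the two automorphism groups differ
```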
\begin{proposition}\label{Xk,Yk iso} Let the $k$-orbits $Y_k$ and $X_k$ be isomorphic and $Aut(X_k)=Aut(Y_k)$; then it does not follow that $Y_k=X_k$.
\end{proposition} {\bfseries Proof:\quad} An example: $X_2=\{\langle 13\rangle,\langle 24\rangle\}$, $Y_2=\{\langle 14\rangle,\langle 23\rangle\}$. $\Box$
\begin{proposition}\label{iso-rorbits} Let $X_k$ and $Y_k$ be isomorphic $k$-rorbits with the same automorphism group; then it does not follow that $X_k=Y_k$.
\end{proposition} {\bfseries Proof:\quad} The $2$-orbits $X_2=\hat{p}(\langle 23\rangle)X_5$ and $Y_2=\hat{p}(\langle 45\rangle)X_5$ from example \ref{S5*S2} represent such a case. $\Box$
\begin{lemma}\label{k.cl.gr->k.cl.subgr} Let $X_n$ be a $n$-orbit of a $k$-closed group $G$, $A<G$ and $Y_n\subset X_n$ be a $n$-orbit of $A$. Let $Y_k(I_k)=\hat{p}(I_k)Y_n$, $B=\cap_{(I_k\subset I_n)}Aut(Y_k(I_k))$, $P_n=G\,(\cup BY_n)$ and classes of $P_n$ have no intersections, then $A$ is $k$-closed.
\end{lemma} {\bfseries Proof:\quad} Let $X_k(I_k)=\hat{p}(I_k)X_n$. It is given that $G=\cap_{(I_k\subset I_n)}Aut(X_k(I_k))$. Further we have: $Aut(Y_n)<B$, $Y_n\in BY_n$, $Y_n\subset\cup BY_n$ and $X_n=\cup GY_n\subseteq \cup G\cup BY_n=\cup P_n$. Suppose $Y_n$ is not $k$-closed; then every class of $GY_n$ is not $k$-closed. As the classes of $P_n$ have no intersections, $X_n\subset \cup P_n$ and hence $X_n$ is not $k$-closed. Contradiction. $\Box$
\begin{theorem}\label{Sn.iso->2-cl} Let a transitive $n$-orbit $X_n$ contain $S_n$-isomorphic $k$-projections ($k\geq 2$); then $X_n$ is $2$-closed.
\end{theorem} {\bfseries Proof:\quad} We have two possibilities:
\begin{enumerate} \item There exists a subspace $I_4=\langle 1234\rangle$ so that $4$-orbit $X(1234)=\hat{p}(\langle 1234\rangle)X_n$ is not $2$-closed and $2$-orbits $X(12)=\hat{p}(\langle 12\rangle)X_n$ and $X(34)=\hat{p}(\langle 34\rangle)X_n$ are $S_n$-isomorphic. Then $3$-orbits $X(123)=\hat{p}(\langle 123\rangle)X(1234)$ and $X(234)=\hat{p}(\langle 234\rangle)X(1234)$ are also not $2$-closed.
Let $A(ijk\ldots)=Aut(X(ij))\cap Aut(X(ik))\cap Aut(X(jk))\cap\,\ldots\,$; then $\cup Aut(123)X(1234)$ and $\cup Aut(234)X(1234)$ have to be $2$-closed and equal, i.e. $Aut(123)=Aut(234)=Aut(1234)$. But such an equality (for the transitive $4$-orbit $X(1234)$) is impossible, because $Aut(123)$ and $Aut(234)$ are conjugate subgroups of $S_n$ and hence are not equal.
\item There exists a subspace $I_6=\langle 123456\rangle$ such that the $6$-orbit $X(123456)=\hat{p}(\langle 123456\rangle)X_n$ is not $2$-closed and the $3$-orbits $X(123)=\hat{p}(\langle 123\rangle)X_n$ and $X(456)=\hat{p}(\langle 456\rangle)X_n$ are $S_n$-isomorphic and not $2$-closed. Then $\cup Aut(123)X(123456)$ and $\cup Aut(456)X(123456)$ have to be $2$-closed and equal, i.e. $Aut(123)=Aut(456)=Aut(123456)$. This equality is also impossible for the same reason. $\Box$
\end{enumerate}
Let $X_n$, $Y_m$ and $Z_l$ be three representations of a finite group $F$ with $n>m>l$. Of interest is the relation between the $k$-closure properties of $Z_l$, $Y_m$ and $X_n$.
It is evident that if $Y_m$ is not $1$-closed, then $X_n$ is not $1$-closed either, while $Z_l$ can be $1$-closed. And if $Y_m$ is $k$-closed for $k>1$, then $X_n$ is $k$-closed too, while $Z_l$ can fail to be $k$-closed.
\paragraph{Unconventional cyclic structure on $k$-orbits.} Now we shall consider one interesting property of the right permutation action on $k$-orbits that has no analogue in group theory. The right automorphism action on an $n$-orbit $X_n$ of a group $G$ maps a $k$-subspace $I_k$ onto an isomorphic $k$-subspace $I_k'$, and so $X_k=\hat{p}(I_k)X_n=\hat{p}(I_k')X_n=X_k'$. The latter equality generates an unconventional cyclic structure on a $k$-orbit $X_k$, which one can see in the next examples:
\begin{example}\label{S2*S2} $$ \left(
\begin{array}{cc|cc} 1 & 2 & 3 & 4 \\ 2 & 1 & 4 & 3 \\ \hline 3 & 4 & 1 & 2 \\ 4 & 3 & 2 & 1 \\ \end{array} \right) \rightarrow \left(
\begin{array}{c|c} B & B' \\ \hline B'& B \\ \end{array} \right) \mbox{ \emph{and} } \left( \begin{array}{ccc} 1 & 2 & 3 \\ 2 & 1 & 3 \\ 1 & 3 & 2 \\ 3 & 1 & 2 \\ 2 & 3 & 1 \\ 3 & 2 & 1 \\ \end{array} \right) \rightarrow \left(
\begin{array}{cc|cc} 1 & 2 & 2 & 3 \\ 2 & 1 & 1 & 3 \\ \hline 2 & 3 & 3 & 1 \\ 1 & 3 & 3 & 2 \\ \hline 3 & 1 & 1 & 2 \\ 3 & 2 & 2 & 1 \\ \end{array} \right) \rightarrow \left(
\begin{array}{c|c} B_1 & B_2 \\ \hline B_2 & B_3 \\ \hline B_3 & B_1 \\ \end{array} \right). $$
\end{example}
We have introduced the right permutation action on $n$-orbits as a permutation of the coordinates of $n$-tuples. Of course, we can consider this action as a permutation of the $n$-tuples themselves, with the same result. Such an interpretation of the right permutation action leads to the next
\begin{lemma}\label{Rk=Yn} Let $X_k$ be a $k$-orbit of a group $G$, $A<G$, $R_k=AX_k$ be a partition of $X_k$ on $k$-orbits of right cosets of $A$ in $G$ and $Y_n$ be $n$-orbit of $A$. Then for any $k$-orbit $Y_k\in R_k$ there exists a subspace $I_k$ so that $Y_k=\hat{p}(I_k)Y_n$.
\end{lemma} {\bfseries Proof:\quad} It follows from $A\alpha_n=Y_n$ for $\alpha_n\in Y_n$ and $A\alpha_k\in R_k$ for $\alpha_k\in X_k$. $\Box$
\begin{theorem}\label{rr} Let $X_n$ be an $n$-orbit of a group $G$, let $I_k,I_k'$ be isomorphic subspaces, and let $X_{2k}=\hat{p}(I_k\circ I_k')X_n=\uplus X_k\circ \uplus X_k$. Let $A<G$, $R_{2k}=AX_{2k}$ and $R_k=AX_k$; then $R_{2k}$ can be partitioned into cycles over the classes of $R_k$.
\end{theorem} {\bfseries Proof:\quad} It follows from lemmas \ref{Rk=Yn} and \ref{mMmM}. $\Box$
It is precisely this property that we can see in the examples.
\begin{remark}\label{rem} Let $I_k^1$, $I_k^2$ and $I_k^3$ be isomorphic subspaces of an $n$-orbit $X_n$; then $\hat{p}(I_k^1)X_n=\hat{p}(I_k^2)X_n=\hat{p}(I_k^3)X_n$. But it does not follow that $\hat{p}(I_k^1\circ I_k^2)X_n=\hat{p}(I_k^2\circ I_k^3)X_n$.
Hence the corresponding cycle structure on the $lk$-orbits of right cosets of a subgroup does not necessarily exist for $l>2$. This fact represents the difference between $2$-closed groups and $m$-closed groups for $m>2$.
\end{remark}
\begin{proposition}\label{rlrl} Let $X_{2k}\in Orb_{2k}(G)$, $A<G$, $R_{2k}=AX_{2k}$, let $Y_k^1$ and $Y_k^2$ be isomorphic $k$-orbits of the subgroup $A$, $Y_{2k}=Y_k^1\circ Y_k^2\in R_{2k}$ and let $C_{2k}\subset R_{2k}$ be a cycle containing $Y_{2k}$. Let the set $Z_{2k}=\cup C_{2k}$ be a $(2k)$-orbit of some subgroup $B<G$; then $A$ is a normal subgroup of $B$.
\end{proposition} {\bfseries Proof:\quad} Since $Y_k^1$ and $Y_k^2$ are isomorphic, the $k$-orbits $Y_k^i$ that form the cycle $C_{2k}$ are $k$-orbits of left and right cosets of $A$ in $B$. The statement then follows from lemma \ref{Lk*Rk}. $\Box$
\section{Correspondence between $k$-orbits and their automorphism\\ groups}
\begin{lemma} Every cycle $c$ of a permutation from $G$ of length $l\geq k$ corresponds to an $(l,k)$-cycle $C_{lk}$ of some $k$-orbit $X_k\in Orb_k(G)$.
\end{lemma} {\bfseries Proof:\quad} If $c=(\alpha_l)$, then the $l$-orbit $X_l=G\alpha_l$ contains the $l$-rcycle $rC_l=gr(c)\alpha_l$. Then $X_k$ and $C_{lk}$ are the corresponding $k$-projections of $X_l$ and $rC_l$. $\Box$
For $k>l$ a counterexample is given by the cycle $(56)$ in the fourth case of example \ref{simple-struc}, where there exists no $(2,3)$-cycle for the subautomorphism $(56)$.
The reverse statement for $k$-orbits of not $k$-closed groups is not correct. An example is a $2$-orbit of $A_4$ that contains a $(4,2)$-cycle related to no subautomorphism of $A_4$.
For $k$-closed groups the reverse statement is also not correct. This is shown by an example of a $2$-closed group defined by the $2$-orbit $X_2=\{14,25,36,41,52,63\}$. The group $Aut(X_2)$ has a $2$-orbit $X_2'=\{12,13,15,16,21,24,23,26,32,35,31,34,42,45,43,46,51,54,53, 56,62,65,61,64\}$.\label{X_2'} The automorphism group of the $2$-suborbit $\{12,13,15,16\}\subset X_2'$ contains a cycle $(2536)$ that does not belong to $Aut(X_2')$. We can see that the possibility of constructing this counterexample is given by a concatenation of an $S_4^1$-orbit with a $(4,1)$-multituple. But $X_2'$ also contains the suborbit $\{12,24,43,31\}$, which is a projection of a $4$-rcycle and likewise is not a $2$-orbit of a subgroup of $Aut(X_2')$. In the latter case the length $l$ of the cycle $(1243)$, first, is not prime and, second, does not divide the degree $n$. For a prime $l$ that does not divide $|Aut(X_2')|=48$ we have the next counterexample: the automorphic $2$-subset $\{13,32,24,45,51\}\subset X_2'$.
\subsection{The local property of $k$-orbits} The trivial case of the reverse statement we obtain from corollary \ref{Y_k-A}. To disclose a non-trivial local property of the $k$-orbits of an automorphism group, we have to consider the case where a $(p,k)$-subset of a $k$-orbit $X_k$ is a $k$-projection of a $p$-rcycle for $p$ a prime divisor of $|Aut(X_k)|$. We shall further write a $k$-projection of an $l$-rcycle for $k\leq l$ as an \emph{$(l,k)$-rcycle} $rC_{lk}$.
\begin{theorem}\label{rC_pk->A}
Let $X_k$ be automorphic, $p\geq k$ be a prime divisor of $|Aut(X_k)|$ and $rC_{pk}\subset X_k$ be a $(p,k)$-rcycle, then $Aut(rC_{pk})<<Aut(X_k)$.
\end{theorem}
{\bfseries Proof:\quad} Let $L_k=Aut(X_k)rC_{pk}$. If $|L_k|p$ divides $|Aut(X_k)|$, then the statement follows from theorem \ref{|Y_k||GY_k|}. If $|L_k|p>|Aut(X_k)|$, then $|L_k|=|Aut(X_k)|$ and hence $L_k$ can be partitioned into subsets $L_k^i$, $i\in [1,p]$, so that $|L_k^i|p=|Aut(X_k)|$. Since the subsets $L_k^i$ have no intersections, there exist subgroups $A_i<Aut(X_k)$ so that $A_iL_k^i=L_k^i$. It follows that $|L_k|p$ divides $|Aut(X_k)|$ and hence $|L_k|p=|Aut(X_k)|$. $\Box$
Theorem \ref{rC_pk->A} and lemma \ref{XAUXB=XAB} give a possibility of reconstructing the subautomorphisms of a $k$-orbit from its symmetry properties.
The next statement gives the relation between the automorphism group of a $k$-orbit and the automorphism group of its $k$-suborbit.
\begin{proposition}\label{Aut(Y_k)<<Aut(X_k)} Let $X_k$ be a $k$-orbit and $Y_k\subset X_k$ be a $k$-orbit of a subgroup $A=Aut(Y_k;V)\cap Aut(X_k)$, then $Aut(Y_k)<<A<Aut(X_k)$ if and only if $Aut(Y_k)<<\cap_{(Z_k\in AX_k)}Aut(Z_k;V)$.
\end{proposition} {\bfseries Proof:\quad} The statement follows from evident equality $A=\cap_{(Z_k\in AX_k)}Aut(Z_k;V)$. $\Box$
\section{Primitivity and imprimitivity.}
Throughout this section the group $G$ is transitive.
Let $X_k$ be a $k$-rorbit, $Y_k\subset X_k$ be a $k$-subrorbit, $\alpha_k\in Y_k$ and $Co(Y_k)=Co(\alpha_k)$; then $L_k=GY_k$ is a partition of $X_k$ (lemma~\ref{Co(Y_k)=Co(alpha_k)}), but the classes of $L_k$ can intersect on $V$. We shall call a $k$-rorbit $X_k$ for $k<n$ \emph{$V$-coherent} if $\sqcup Co(X_k)=V$ and \emph{$V$-incoherent} if $\sqcup Co(X_k)$ is a partition of $V$. We shall write simply coherent and incoherent, instead of $V$-coherent and $V$-incoherent, when it is clear which set we consider.
\begin{proposition}\label{Aut-incoherent} The automorphism group of an incoherent $k$-rorbit is imprimitive.
\end{proposition}
\begin{corollary}\label{impr-gr} A group $G$ is imprimitive if and only if it contains an incoherent $k$-rorbit.
\end{corollary}
\begin{corollary}\label{nmd-imprimit} Non-minimal degree representations are imprimitive.
\end{corollary}
The automorphism group of a coherent $k$-rorbit can be imprimitive. An example is the $2$-rorbit $\{13,31,24,42,14,41,23,32\}$.
If a coherent (incoherent) $k$-rorbit $X_k$ contains no $V$-coherent and no $V$-incoherent $k$-subrorbit, then $X_k$ is called an \emph{elementary coherent} (\emph{elementary incoherent}) $k$-rorbit.
\begin{lemma}\label{el.coherent-primitive} The automorphism group of an elementary coherent $k$-rorbit is primitive.
\end{lemma}
\begin{corollary}\label{primitive-el.coherent} The group $G$ is primitive if and only if it contains an elementary $V$-coherent $k$-subrorbit.
\end{corollary}
A maximal $k$-subrorbit $Y_k$ of a $k$-rorbit $X_k$ that is a structural element of coherent (incoherent) $k$-subrorbits we call a \emph{$k$-block}.
Let $Y_k$ be a $k$-block of an incoherent $k$-rorbit $X_k$; then $U=Co(Y_k)$ is a $1$-block, or $k$-element block, of an imprimitive group $G$ in the conventional definition.
Let us give some examples of coherent and incoherent $k$-rorbits:
\begin{enumerate} \item A $2$-orbit $X_2$ of $S_3$ is elementary coherent and a $2$-rcycle from $X_2$ is a $2$-block.
\item A $2$-orbit of $C_5\otimes C_2$ is elementary coherent. This group contains $S_5$-isomorphic elementary coherent $2$-rorbits.
\item A $2$-orbit of $A_5$ is coherent but not elementary coherent. It contains an elementary coherent $2$-orbit of $C_5\otimes C_2$. A $3$-orbit of $A_5$ is elementary coherent and contains elementary coherent suborbits on $4$-element subsets of $V$.
\item All $2$-orbits of $C_2\otimes C_2$ and two of the six $2$-orbits of $D_4$ are elementary incoherent. The other four $2$-orbits of $D_4$ are coherent.
\end{enumerate}
\begin{proposition}\label{el.impr->base.typr} Let $p$ be prime and $X_p$ be an elementary incoherent $p$-rorbit; then there exists a subgroup $A<Aut(X_p)$ of order $p$ for which the classes of the partition $R_p=AX_p$ are $p$-orbits of the base type, i.e. either $p$-rcycles, $S_p^p$-orbits or $p$-tuples.
\end{proposition} {\bfseries Proof:\quad} An elementary incoherent $p$-rorbit consists of $p$-rcycles that do not intersect on $V$. $\Box$
The reverse statement:
\begin{lemma}\label{base.typ->impr} Let $p$ be prime, let $X_p$ be a $p$-rorbit and let $rC_p\subset X_p$ be a $p$-rcycle that is a $p$-orbit of a subgroup $A<Aut(X_p)$ of order $p$. Let the classes of $R_p=AX_p$ be $p$-orbits of the base type; then $Aut(X_p)$ is imprimitive.
\end{lemma} {\bfseries Proof:\quad} Suppose the statement is not correct and $Aut(X_p)$ is primitive; then $L_p=Aut(X_p)rC_p$ contains classes intersecting on $V$, and hence there exists a class of $R_p$ that has a $(p,m<p)$-multituple as a concatenation component. Contradiction. $\Box$
In the following example, $L_1=\{\{124\},\{235\},\{346\},\{451\},\{562\},\{613\}\}$, we see that $\sqcup L_1=V=[1,6]$, where $k=3$ divides $n=6$, but the corresponding $3$-set $X_3=\{124,241,421,\ldots\}$ is not automorphic.
\begin{theorem}\label{not k|n}
Let $X_k$ be an elementary coherent $k$-rorbit, then $\neg (k|n)$.
\end{theorem} {\bfseries Proof:\quad} The statement is correct when $n$ is prime, because $k<n$, so we assume that $n$ is not prime. Suppose the statement is false for some $k$; then it is also false for a prime divisor $p$ of $k$, so we may assume that $k=p$ is prime. Let $Y_p\subset X_p$ be a $p$-rcycle that is a $p$-orbit of a subgroup $A<Aut(X_p)$ of order $p$, and let $R_p=AX_p$ and $L_p=Aut(X_p)Y_p$. Since by hypothesis the classes of $L_p$ intersect on $V$, the partition $R_p$ contains classes that have a $(p,l<p)$-multituple as a concatenation component. Let $X_n\in Orb_n(Aut(X_p))$ and let $Y_n\subset X_n$ be an $n$-orbit of $A$; then $Y_n$ is a concatenation of (not necessarily $Aut(X_p)$-isomorphic) $p$-rcycles and a $(p,pr)$-multituple. Let $I_{pr}$ be the subspace defining the $(p,pr)$-multituple; then, according to lemma \ref{k-rorbit}, the $pr$-orbit $X_{pr}=\hat{p}(I_{pr})X_n$ is a $pr$-rorbit and hence $l|pr$. On the other hand, the subspace $I_{pr}$ is a concatenation of $m=pr/l$ $Aut(X_p)$-isomorphic subspaces $I_l^i$ ($i\in [1,m]$). It follows that $R_p$ contains $m$ $Aut(X_p)$-isomorphic classes and that $Aut(X_p)$ contains a subgroup $B$ that acts transitively on the system of these $m$ classes. Hence $R_p$, and therefore $X_p$, can be partitioned into $m$ $Aut(X_p)$-isomorphic classes. Let $L_p'=L_p\sqcup R_p$ be this partition of $X_p$ into $m$ $Aut(X_p)$-isomorphic classes; then the classes of $L_p'$ are $p$-orbits of a normal subgroup of $Aut(X_p)$ and cannot intersect on $V$. Hence $X_p$ is not an elementary coherent $p$-rorbit. Contradiction. $\Box$
\section{Petersen graph}
Here we consider properties of $n$-orbits that are not visible from group theory and therefore have hindered the solution of the polycirculant conjecture. These are the cases where $p|n$, but there exists no transitive, imprimitive subgroup of order $p$, and therefore no subgroup of order $p$ whose $n$-orbits could be represented as a concatenation of $(p,p)$-orbits of base type.
The simplest example is a $(n=6)$-orbit of a group $G=S_3\otimes C_2$.
\begin{example}\label{S3xC2} $$
\begin{array}{||cc|c|cc|c||} \hline 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 1 & 3 & 5 & 4 & 6 \\ \hline 1 & 3 & 2 & 4 & 5 & 6 \\ 3 & 1 & 2 & 5 & 4 & 6 \\ \hline 2 & 3 & 1 & 5 & 6 & 4 \\ 3 & 2 & 1 & 6 & 5 & 4 \\ \hline 4 & 5 & 6 & 1 & 2 & 3 \\ 5 & 4 & 6 & 2 & 1 & 3 \\ \hline 4 & 5 & 6 & 1 & 3 & 2 \\ 5 & 4 & 6 & 3 & 1 & 2 \\ \hline 5 & 6 & 4 & 2 & 3 & 1 \\ 6 & 5 & 4 & 3 & 2 & 1 \\ \hline \end{array}\,. $$
\end{example}
It is seen that the pair $\langle 12\rangle$ is $G$-isomorphic to $\langle 45\rangle$ but not $G$-isomorphic to $\langle 36\rangle$, so the $6$-orbit of the subgroup $gr((12)(45))$ cannot be represented as a concatenation of $p$-orbits of base type. We shall say that this subgroup of prime order and its $n$-orbit are \emph{undecomposable}. Nevertheless, the given group contains a subgroup $gr((14)(25)(36))$ that is \emph{decomposable} (into the $(p,p)$-orbits of the base type), where $\{14\},\{25\},\{36\}$ are incoherent $2$-blocks of a corresponding imprimitive subgroup of $G$.
The Petersen graph gives an example where, for the least prime divisor $p=2$ of the degree $n=10$, there exists no decomposable subgroup with incoherent $2$-blocks. From this follow the unconventional properties of the automorphism group of this graph. The automorphism group of the Petersen graph is a representation $G$ of $S_5$ on a $10$-element set $V$. It is the representation of $S_5$ by right (left) cosets of the subgroup of order $12$ shown in example \ref{S_5,md}. This representation can also be obtained from the action of $S_5$ on unordered pairs from the set $V'=\{1,2,3,4,5\}$, where $V=\{1=\{1,2\},2=\{1,3\},\ldots,0=\{4,5\}\,\}$.
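The pair representation just described can be checked directly in code. A minimal Python sketch (our illustration; the adjacency rule, joining two pairs when they are disjoint, is the standard Kneser-graph $K(5,2)$ construction of the Petersen graph and is assumed here, since the text only fixes the vertex labelling):

```python
from itertools import combinations

# Vertices: the ten unordered pairs of V' = {1,...,5}, as in the text.
# Adjacency (assumed, standard Kneser K(5,2) rule): join disjoint pairs.
vertices = list(combinations(range(1, 6), 2))
edges = [(u, v) for u, v in combinations(vertices, 2) if not set(u) & set(v)]

degree = {v: 0 for v in vertices}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# The Petersen graph: 10 vertices, 15 edges, 3-regular.
print(len(vertices), len(edges), set(degree.values()))
```

The counts (10 vertices, 15 edges, valency 3) match the Petersen graph.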
The following matrix is a $10$-orbit $Y_{10}$ of a transitive, imprimitive subgroup of $G$ (isomorphic to the first subgroup from example \ref{S_5,md}).
\begin{example} $$
\begin{array}{||ccccc|ccccc||c||} \hline 12 & 23 & 34 & 45 & 15 & 14 & 24 & 25 & 35 & 13 & \\ \hline \hline 1 & 5 & 8 & 0 & 4 & 3 & 6 & 7 & 9 & 2 & e \\ 5 & 8 & 0 & 4 & 1 & 7 & 9 & 2 & 3 & 6 & (12345) \\ 8 & 0 & 4 & 1 & 5 & 2 & 3 & 6 & 7 & 9 & (13524) \\ 0 & 4 & 1 & 5 & 8 & 6 & 7 & 9 & 2 & 3 & (14253) \\ 4 & 1 & 5 & 8 & 0 & 9 & 2 & 3 & 6 & 7 & (15432) \\ \hline 3 & 6 & 7 & 9 & 2 & 4 & 0 & 8 & 5 & 1 & (2453) \\ 6 & 7 & 9 & 2 & 3 & 8 & 5 & 1 & 4 & 0 & (1435) \\ 7 & 9 & 2 & 3 & 6 & 1 & 4 & 0 & 8 & 5 & (1254) \\ 9 & 2 & 3 & 6 & 7 & 0 & 8 & 5 & 1 & 4 & (1523) \\ 2 & 3 & 6 & 7 & 9 & 5 & 1 & 4 & 0 & 8 & (1342) \\ \hline 4 & 0 & 8 & 5 & 1 & 2 & 9 & 7 & 6 & 3 & (25)(34) \\ 0 & 8 & 5 & 1 & 4 & 7 & 6 & 3 & 2 & 9 & (15)(24) \\ 8 & 5 & 1 & 4 & 0 & 3 & 2 & 9 & 7 & 6 & (23)(14) \\ 5 & 1 & 4 & 0 & 8 & 9 & 7 & 6 & 3 & 2 & (45)(13) \\ 1 & 4 & 0 & 8 & 5 & 6 & 3 & 2 & 9 & 7 & (12)(35) \\ \hline 2 & 9 & 7 & 6 & 3 & 1 & 5 & 8 & 0 & 4 & (2354) \\ 9 & 7 & 6 & 3 & 2 & 8 & 0 & 4 & 1 & 5 & (1325) \\ 7 & 6 & 3 & 2 & 9 & 4 & 1 & 5 & 8 & 0 & (1534) \\ 6 & 3 & 2 & 9 & 7 & 5 & 8 & 0 & 4 & 1 & (1243) \\ 3 & 2 & 9 & 7 & 6 & 0 & 4 & 1 & 5 & 8 & (1452) \\ \hline \end{array} $$ \end{example} and the reassembling of this example: \begin{example} $$
\begin{array}{||cc|cc|cc|cc|cc||c||} \hline 12 & 15 & 14 & 13 & 23 & 35 & 34 & 45 & 24 & 25 & \\ \hline \hline 1 & 4 & 3 & 2 & 5 & 9 & 8 & 0 & 6 & 7 & e \\ 4 & 1 & 2 & 3 & 0 & 6 & 8 & 5 & 9 & 7 & (25)(34) \\ \hline 5 & 1 & 7 & 6 & 8 & 3 & 0 & 4 & 9 & 2 & (12345) \\ 1 & 5 & 6 & 7 & 4 & 9 & 0 & 8 & 3 & 2 & (12)(35) \\ \hline 8 & 5 & 2 & 9 & 0 & 7 & 4 & 1 & 3 & 6 & (13524) \\ 5 & 8 & 9 & 2 & 1 & 3 & 4 & 0 & 7 & 6 & (45)(13) \\ \hline 0 & 8 & 6 & 3 & 4 & 2 & 1 & 5 & 7 & 9 & (14253) \\ 8 & 0 & 3 & 6 & 5 & 7 & 1 & 4 & 2 & 9 & (23)(14) \\ \hline 4 & 0 & 9 & 7 & 1 & 6 & 5 & 8 & 2 & 3 & (15432) \\ 0 & 4 & 7 & 9 & 8 & 2 & 5 & 1 & 6 & 3 & (15)(24) \\ \hline 3 & 2 & 4 & 1 & 6 & 5 & 7 & 9 & 0 & 8 & (2453) \\ 2 & 3 & 1 & 4 & 9 & 0 & 7 & 6 & 5 & 8 & (2354) \\ \hline 6 & 3 & 8 & 0 & 7 & 4 & 9 & 2 & 5 & 1 & (1435) \\ 3 & 6 & 0 & 8 & 2 & 5 & 9 & 7 & 4 & 1 & (1452) \\ \hline 7 & 6 & 1 & 5 & 9 & 8 & 2 & 3 & 4 & 0 & (1254) \\ 6 & 7 & 5 & 1 & 3 & 4 & 2 & 9 & 8 & 0 & (1243) \\ \hline 9 & 7 & 0 & 4 & 2 & 1 & 3 & 6 & 8 & 5 & (1523) \\ 7 & 9 & 4 & 0 & 6 & 8 & 3 & 2 & 1 & 5 & (1534) \\ \hline 2 & 9 & 5 & 8 & 3 & 0 & 6 & 7 & 1 & 4 & (1342) \\ 9 & 2 & 8 & 5 & 7 & 1 & 6 & 3 & 0 & 4 & (1325) \\ \hline \end{array} $$ \end{example}
It can be seen that there exists no partition of the $2$-projection $\hat{p}(\langle 14\rangle)Y_{10}$ into classes that do not intersect on $V$, but this covering of $V$ with $2$-tuples can be partitioned into two non-intersecting coverings, $\{14,15,58,40,08\}$ and $\{23,36,29,79,67\}$, which form elementary coherent $2$-orbits on the corresponding two $5$-element subsets of $V$. This case is similar to the one we saw in example \ref{S5*S2}.
The following example of a $(2,10)$-orbit
$$
\begin{array}{||cc||cc|cc|cc|cc||} \hline 24 & 35 & 12 & 14 & 13 & 15 & 23 & 25 & 45 & 34 \\ \hline \hline 6 & 9 & 1 & 3 & 2 & 4 & 5 & 7 & 0 & 8 \\ 9 & 6 & 3 & 1 & 4 & 2 & 7 & 5 & 8 & 0 \\ \hline \end{array} $$ contains the $S_{10}$-isomorphic $2$-rcycles $\{\langle 69\rangle,\langle 96\rangle\}$ and $\{\langle 13\rangle,\langle 31\rangle\}$, but the corresponding two projections of $X_{10}$ are not $S_{10}$-isomorphic, because the $2$-orbit $X_2(\langle 69\rangle)$ consists of $30$ pairs while the $2$-orbit $X_2(\langle 13\rangle)$ consists of $60$ pairs. This is a transitive realization of the property from the example in proposition \ref{Sn-iso-intr}.
\begin{remark} One can see that the given properties of $X_n$ cannot be obtained within group theory, because they are properties of the internal structure of $X_n$. Of course, the internal structure of $X_n$ characterizes the group, and therefore its properties are also group properties. But these properties of a group lie outside the group algebra that characterizes $X_n$ as a whole.
\end{remark}
\section{The proof of the polycirculant conjecture}
\subsection{The proof of lemma \ref{tr.impr<tr.pr}}
The proof of lemma \ref{tr.impr<tr.pr} follows from
\begin{lemma}\label{qGDn} Let $q$ be the greatest prime divisor of $n$, then $G$ contains a transitive, imprimitive subgroup with incoherent $q$-blocks.
\end{lemma} {\bfseries Proof:\quad} Suppose the statement is false; then every $q$-rorbit $X_q(I_q)$ contains a coherent $q$-subrorbit $Y_q(I_q)$ on some automorphic $k$-subspace $I_k$, where $k$ is a divisor of $n$ greater than $q$, $I_q\subset I_k$ and $X_k(I_k)$ is an incoherent $k$-rorbit. Let $p$ be a prime divisor of $k$; it follows that $p<q$ and hence there exists an automorphic $p$-subspace $I_p\subset I_q$. Therefore there exists a coherent $p$-subrorbit $Y_p(I_p)$ of a $p$-rorbit $X_p(I_p)$ on the $q$-subspace $I_q$. Since $q|n$, the classes of the partition $L_p=GY_p(I_p)$ of $X_p(I_p)$ do not intersect on $V$, and hence $X_q(I_q)$ cannot contain a coherent $q$-subrorbit $Y_q(I_q)$ on the $k$-subspace $I_k$. Contradiction. $\Box$
\subsection{The proof of theorem \ref{2cl.impr->reg}}
Let $G$ be a transitive group; then it contains a transitive, imprimitive subgroup, so we can assume that $G$ is imprimitive. Then for some prime divisor $p$ of $n$ there exists a partition $I=\{I_p^i:\, i\in [1,l]\}$ of $V$ into $G$-isomorphic, automorphic $p$-subspaces. Let $X_n$ be an $n$-orbit of $G$, $X_p=\hat{p}(I_p^i)X_n$ and $X_{2p}^{ij}=\hat{p}(I_p^i\circ I_p^j)X_n=\uplus X_p\circ \uplus X_p$. According to corollary \ref{mMmM->MM}, $X_{2p}^{ij}=\cup_t (X_p\stackrel{\phi_t(i,j)}{\circ} X_p)\equiv\cup L_p(i,j)$ for some maps $\{\phi_t(i,j)\}$. It follows that the action of any cycle of length $p$ generated by some $p$-rcycle from $X_p$ on the partitions $L_p(i,j)$ generates cycles of length $p$ that are again connected with $p$-rcycles from $X_p$. Hence $G$ contains a regular permutation of order $p$.
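The conclusion of this proof, the existence of a regular (fixed-point-free, all cycles of the same prime length $p$) permutation, can be illustrated on the Petersen graph discussed earlier. The following Python sketch (our illustration, not the author's construction) searches the action of $S_5$ on unordered pairs for semiregular elements: no element works for $p=2$, matching the earlier remark, while every $5$-cycle of $S_5$ induces one for $p=5$:

```python
from itertools import combinations, permutations

pairs = list(combinations(range(1, 6), 2))   # the 10 points of V

def induced(g):
    """Action on unordered pairs induced by a permutation g of {1..5},
    given as a tuple with g[x-1] the image of x."""
    return {p: tuple(sorted((g[p[0] - 1], g[p[1] - 1]))) for p in pairs}

def is_semiregular(action, p):
    """True if the action is fixed-point-free with all cycles of length p."""
    seen = set()
    for start in pairs:
        if start in seen:
            continue
        length, x = 0, start
        while True:                 # walk the cycle containing `start`
            x = action[x]
            length += 1
            seen.add(x)
            if x == start:
                break
        if length != p:
            return False
    return True

# Every 5-cycle of S_5 induces a semiregular permutation of order 5
# on the ten pairs (two cycles of length 5, no fixed pair) ...
found = [g for g in permutations(range(1, 6)) if is_semiregular(induced(g), 5)]
# ... while no element of S_5 induces a semiregular action with p = 2.
none2 = [g for g in permutations(range(1, 6)) if is_semiregular(induced(g), 2)]
print(len(found), len(none2))
```

Since the action on pairs is faithful, only the $24$ five-cycles of $S_5$ can appear in `found`, and `none2` is empty.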
\section{Some applications of $k$-orbit theory}
\subsection{Solvability of groups of odd order}
Now we shall show that the $k$-orbit theory gives a simple proof of the theorem of W. Feit and J.G. Thompson on the solvability of groups of odd order \cite{G}.
In this section we do not distinguish between a finite group and its $ld$-representation. We also assume that a finite group is not a direct product.
\begin{lemma}\label{Aut(incoh)-not-simple} Let $X_k\in Orb_k(G)$ be an incoherent $k$-orbit, then $G$ is not a simple group.
\end{lemma} {\bfseries Proof:\quad} Let $Y_k\subset X_k$ be a $k$-block and $L_k=Aut(X_k)Y_k$; then it is evident that a concatenation of classes from $L_k$ is an $n$-orbit of a normal subgroup $H\lhd Aut(X_k)=Aut(L_k)$ and that every transitive subgroup $G<Aut(L_k)$ has a non-trivial intersection with $H$. $\Box$
\begin{corollary}\label{impr-not.simple} Let $G$ be a (non-cyclic) simple group; then it is (non-trivially) primitive.
\end{corollary}
\begin{theorem}\label{prim-invol.} Any primitive group $G$ contains an involution.
\end{theorem} {\bfseries Proof:\quad} Let $n$ be odd; then there exist odd numbers $k<l\leq n$ such that $(k,l)=1$ and there exist automorphic subspaces $I_k\subset I_l$. If $r=[l/k]$ is odd, then $m=l-rk$ is even and there exists an automorphic subspace $I_m$ (lemma \ref{k-rorbit}). If $r$ is even, then $m$ is odd and, by the primitivity of $G$, we can choose $l:=k$ and $k:=m$ if $(k,m)=1$, or else $l:=l$ and $k:=m$. $\Box$
\begin{corollary}\label{odd-solv} Let $G$ be a (non-cyclic) simple group, then it contains an involution.
\end{corollary}
\begin{corollary}\label{odd.od-impr} Let $G$ be a group of odd order; then it is an imprimitive and hence solvable group.
\end{corollary}
\subsection{A full invariant of a finite group}
Here we shall discuss the problem of a full invariant of a finite group $F$ and assume that $F$ is not a direct product.
At first we note that, if the $ld$-representation $G$ of $F$ is unique up to similarity, then a full invariant of $F$ is defined by a full invariant of $G$. Then we have two cases: $G$ is primitive or $G$ is imprimitive.
\subsubsection{Let $G$ be primitive}
\begin{proposition}\label{k=n-l[n/l]}
Let $\neg(l|n)$, $I_l$ be an automorphic subspace and $k=n-l[n/l]$, then
\begin{enumerate} \item\label{p^m<n}
$k$ divides $|G|$;
\item there exists an automorphic $k$-subspace $I_k$;
\item a subgroup $Stab(I_k)$ has non-trivial normalizer in $G$.
\end{enumerate}
\end{proposition} {\bfseries Proof:\quad} The statements follow directly from lemmas \ref{k-rorbit} and \ref{fix-tuple}. $\Box$
\begin{proposition}\label{V-coherent}
Let $\neg(k|n)$ and $X_k$ be a $k$-rorbit, then $X_k$ contains an elementary $V$-coherent $k$-suborbit.
\end{proposition} {\bfseries Proof:\quad} The statement follows from the definition of an elementary $V$-coherent $k$-orbit. $\Box$
So we see that $|G|$ and $n$ are highly dependent, and in general the degree $n$ allows one to determine whether there can exist a group $G$ of order $\nu$. Also, an elementary $U$-coherent $k$-suborbit on every possible automorphic subset $U\subset V$ is unique up to similarity.
All these facts suggest the hypothesis that $|G|$ and $n$ could form a full invariant of $G$ in the considered case.
\subsubsection{Let $G$ be imprimitive}
Let $k$ be a maximal automorphic divisor of $n$; then there exists an incoherent $k$-rorbit $X_k$ of $G$. Let $Y_k\subset X_k$ be a $k$-block and $L_k=Aut(X_k)Y_k=Aut(L_k)Y_k$; then $G<Aut(L_k)$, $L_k=GY_k$, and $Y_k$ is a $k$-orbit of a maximal normal subgroup $H\lhd G$, by lemma \ref{Aut(incoh)-not-simple}.
Let us assume that we know $Y_k$, $|G|$ and $n$. This gives us the following information: $|L_k|=n/k$, $|X_k|=|Y_k||L_k|$, $|H|=|G|/|L_k|$ and $|Stab(I_k)|=|G|/|X_k|=|H|/|Y_k|$. In addition we know that the $n$-orbit
$X_n$ of $G$ is a $|L_k|\times |L_k|$ matrix $M$ whose elements are multi-classes of $L_k$ and which gives a regular representation of the factor group $\Phi=G/H$. Since $H$ is a maximal normal subgroup, $\Phi$ is a simple group.
Suppose $\Phi$ is not a cyclic group; then it is not an $ld$-representation. But then it follows that $G$ is not an $ld$-representation either. This contradiction shows that $\Phi$ is always a cyclic group.
In order to give a full description of the group $G$ we have to find the elements of the matrix $M$. Let $L_k=\{Y_k^i:\,i\in [1,p]\}$ and $\Phi=gr((1\ldots p))$; then we know that $M_{ij}=Y_k^r$, where $r=i+j-1 \pmod{p}$. One possible construction of $M$ is obtained as follows. Every element $M_{ij}$ is obtained from the element $M_{1j}$ by a permutation of columns, and every element $M_{1j}$ is obtained from the element $M_{11}$ by a permutation of lines. The columns of the element $M_{ij}$ are permuted relative to the columns of the element $M_{1j}$ by automorphisms of $M_{1j}$ that are similar to automorphisms of $M_{11}$. The lines of the element $M_{1j}$ are permuted relative to the lines of the element $M_{11}$ subject to maintaining the automorphism property of $M_{11}$.
So, to obtain a full number invariant in this imprimitive case, one needs to find the number invariants that allow calculating the corresponding $p^2-1$ permutations.
Now we consider the properties of $Y_k$.
\begin{lemma}\label{Y_k-prim} $Aut(Y_k)$ is imprimitive.
\end{lemma} {\bfseries Proof:\quad} Suppose $Aut(Y_k)$ is primitive; then $Y_k$ contains an elementary coherent $q$-subrorbit for some prime $q<k$. From this it follows that the permutations generating the elements of the matrix $M$ are trivial and hence $G$ is a direct product. Contradiction. $\Box$
\begin{corollary}\label{Y_k-reg} $Aut(Y_k)$ is regular.
\end{corollary}
\begin{corollary}\label{p-group} $G$ is a $p$-group.
\end{corollary}
So we can formulate
\begin{hypothesis}\label{full.inv}
The full invariant of a group that is not a $p$-group is defined by $|G|$ and $n$, and the full invariant of a $p$-group of order $p^m$ is defined by at most $mp^2$ permutations, or by corresponding numbers that define these permutations.
\end{hypothesis}
\subsection{The polynomial algorithm of graph isomorphism testing}
The graph isomorphism problem has a polynomial solution if the problem of separating the orbits of the automorphism group of a graph has a polynomial solution. So we want to find the partition $O_2=Aut(X_2)X_2\subset Orb_2(Aut(X_2))$ of a $2$-set $X_2\subset V^{(2)}$ polynomially in $n$.
Let $X_k\subset V^{(k)}$ be a $k$-set. We say that $X_k$ is \emph{transitive} if all $1$-projections of $X_k$ are equal. We say that $X_k$ is \emph{regular} if it satisfies the following two conditions:
\begin{enumerate} \item Every $l$-multiprojection of $X_k$ for $l\in [1,k]$ is homogeneous.
\item All $l$-projections of $X_k$, containing the same $l$-tuple, are equal.
\end{enumerate}
\begin{lemma}\label{X_2=X_1}
Let $X_2$ be a regular $2$-set and let $X_1^1,X_1^2$ be its $1$-projections, with $|X_2|=|X_1^1|$. If $X_1^1\neq X_1^2$, then $X_2$ is automorphic. If $X_1^1=X_1^2$, then the partition $O_2=Aut(X_2)X_2$ is detected directly.
\end{lemma}
So the problem remains if $|X_2|/|X_1^1|>1$ and $|X_2|/|X_1^2|>1$. Let $X_2\subset V^{(2)}$ be a regular $2$-set with $\cup Co(X_2)=V$; then $Aut(X_2)$ is a (generally intransitive) group of degree $n$ and all prime cycles from $Aut(X_2)$ have length not greater than $n$. Let $X_2$ be automorphic and let $Y_2\subset X_2$ be a $2$-orbit of a subgroup $A<Aut(X_2)$; then $R_2=AX_2$ is a partition of $X_2$ into $2$-orbits of $A$ and hence the classes of $R_2$ are regular $2$-sets.
Theorem \ref{rr} plays the fundamental role in the polynomial solution of the considered problem. One can see that the cyclic structure described for transitive $(2k)$-orbits also exists on intransitive $(2k)$-orbits, but the direction (left, right, left, right,\,\ldots) must be changed to (left, right, right, left, left,\,\ldots). We can also note that, in general, if we have a regular $2$-set $X_2\subset V^{(2)}$, then we also have a partition $P_2$ of $V^{(2)}$ into regular $2$-sets invariant relative to $Aut(X_2)$. Thus, given an intransitive regular $2$-set $X_2$, we can also find transitive, regular, $Aut(X_2)$-invariant $2$-sets of $P_2$.
\begin{algorithm}\label{R_2,reg} Let $X_2$ now be an arbitrary regular $2$-set whose automorphisms we try to find; then we follow the next steps:
\begin{enumerate} \item Find an automorphic $2$-subset $Y_2\subset X_2$ that is expected to be a $2$-suborbit of $Aut(X_2)$.
\item Construct partitions $R_2^i$, $i=1,2,\ldots$, by iterating the process that follows from theorem \ref{rr}. At each iteration, verify the classes of $R_2^i$ for regularity and subpartition them if they are not regular.
\end{enumerate} \end{algorithm}
This process leads to an automorphism, possibly trivial.
Finding $Y_2$ is not difficult. First, one can take a subset $Y_2(u)$ of $X_2$ whose $1$-projection $\hat{p}(I_1^1)Y_2$ is an element $u$ of $V$. Thus we can determine whether $Stab(u)$ is trivial. If it is trivial, then it follows that $X_2$ is incoherent and can be partitioned into coherent $2$-subsets.
If $Aut(X_2)$ is trivial, then the given algorithm detects this triviality in at most $n$ steps, assuming that in each step only one element of $V$ is separated.
Using lemma \ref{AV*BV,AV+BV} we can, given automorphic partitions $R_k'$ and $R_k''$ of $X_k$, obtain a new, larger partition $R_k=R_k'\sqcup R_k''$.
To simplify the process, at the $i$-th iteration one can choose for partitioning the most suitable $2$-set $X_2^i$ from the partition $P_2^i$ of $V^{(2)}$ into regular $2$-sets invariant under $Aut(X_2)$; by partitioning $X_2^i$, the whole partition $P_2^i$ can be further partitioned into regular classes $P_2^{i+1}$ and used in the next iteration.
\section*{Conclusion}
This work was initiated by the polycirculant conjecture, described by P. Cameron in his text \cite{Cameron} and presented on the site (\url{http://www.maths.qmw.ac.uk/~pjc/homepage.html}).
The use of $k$-orbits for the polynomial solution of the graph isomorphism problem was begun by the author in 1984. The generalization of $k$-orbits, regular $k$-sets, was used for describing the structure of strongly regular graphs and their generalization to dimensions greater than two. This approach revealed the difference between the structure of strongly symmetrical but not automorphic partitions of $V^{(k)}$ and automorphic partitions of $V^{(k)}$.
From this point of view the polycirculant conjecture seemed simple enough. Nevertheless, finding a correct proof was very difficult, and only the analysis of two examples of permutation groups (one elusive group of order 72 and degree 12, and the automorphism group of the Petersen graph), which were presented to the author by P. Cameron, led to the discovery of the specific properties of $k$-orbits, not detectable with group theory, that brought a proof of the conjecture.
In 1997 the author understood the connection between the graph isomorphism problem and the problem of a full invariant of a finite group and made some attempts to obtain this full invariant by constructing appropriate group representations. This work gave a better understanding of the problem but did not bring the expected result. With the construction of the $k$-orbit theory it was of interest to consider a finite group in a new representation, and this time the result was obtained.
Also, the specific symmetry properties of $k$-orbits, which are not visible in most other algebraic theories, made possible a simple polynomial solution of the graph isomorphism problem.
\end{document}
\begin{document}
\title{Hidden Subgroup States are Almost Orthogonal}
\begin{abstract} It~is well known that quantum computers can efficiently find a hidden subgroup~$H$ of a finite Abelian group~$G$. This implies that after only
a polynomial (in~$\log |G|$) number of calls to the oracle function, the states corresponding to different candidate subgroups have exponentially small inner product. We~show that this is true for noncommutative groups also. We~present a quantum algorithm which identifies a hidden subgroup of
an arbitrary finite group~$G$ in only a linear (in~$\log |G|$) number of calls to the oracle function. This is exponentially better than the best classical algorithm. However our quantum algorithm requires an exponential amount of time, as in the classical case. \end{abstract}
\section{Introduction} A~function~$f$ on a finite group (with an arbitrary range) is called {\em H-periodic\/} if $f$ is constant on left cosets of~$H$. If~$f$ also takes distinct values on distinct cosets we say $f$ is {\em strictly} $H$-periodic. Furthermore we call $H$ the {\em hidden subgroup\/} of $f$. Throughout we assume $f$ is efficiently computable. Utilizing the quantum Fourier transform a quantum computer can identify $H$ in time
polynomial in~$\log |G|$. The question has been repeatedly raised as to whether this may be accomplished for finite non-Abelian groups~\cite{EH99,Kitaev95,RB98}.
This time bound implies that only a polynomial (in~$\log |G|$) number of calls to the oracle function are necessary to identify~$H$. The main result of this paper is that this more limited result is also true for non-Abelian groups. In other words there exists a quantum algorithm which {\em information-theoretically\/} determines the hidden subgroup efficiently. One may also view this as a quantum state distinguishability problem, and from this perspective one may say that this quantum algorithm efficiently distinguishes among the given possible states. This result may be seen as a generalization of the results presented in~\cite{EH99}, although in that work the unitary transform was also efficiently implementable. The quantum algorithm presented here requires exponential time. An important open question is whether this may be improved. Even if a time efficient quantum algorithm does exist, one must also inquire as to the complexity of postprocessing the resulting information. For example, in the case of the dihedral group presented in~\cite{EH99}, although the hidden subgroup is information-theoretically determined in a polynomial number of calls to the oracle, it is not known how to efficiently postprocess the resulting information to identify~$H$.
\begin{theorem}[Main]\label{thm:main} Let $G$ be a finite group, $f$ an oracle function on~$G$ which is strictly $H$-periodic for some subgroup $H \leqslant G$. Then there exists a quantum algorithm that calls the oracle function
$4 \log|G| +2$ times and outputs a subset $X \subseteq G$
such that $X=H$ with probability at least \mbox{$1 - 1/|G|$}. \end{theorem}
\section{The Quantum Algorithm} Let $G$ be a finite group, $H \leqslant G$ a subgroup, and $f$ a function on $G$ which is strictly $H$-periodic. For any subset $X = \{x_1,\dots,x_m\} \subseteq G$, let $\ket{X}$ denote the normalized superposition $\frac{1}{\sqrt{m}}\big(\ket{x_1} + \cdots + \ket{x_m}\big)$.
The Hilbert Space~$\mathcal{H}$ in which we work has dimension $|G|^m$ and has an orthonormal basis indexed by the elements of the \mbox{$m$-fold} direct product $\{\ket{(g_1,\dots,g_m)} \mid g_i \in G\}$. The first step in our quantum algorithm is to prepare the state \begin{equation}\label{eq:initial}
\frac{1}{\sqrt{|G|^m}} \sum_{g_1,\dots,g_m \in G} {\ket{g_1,\dots,g_m} \,\ket{f(g_1),\dots,f(g_m)}}. \end{equation}
We show below that picking $m = 4 \log |G| +2$ allows us to identify~$H$ with exponentially small error probability.
By observing the second register we obtain a state~\ket{\Psi} which is a tensor product of random left cosets corresponding to the hidden subgroup~$H$. Let $\ket{\Psi} = \ket{a_{1}H} \otimes \ket{a_{2}H} \otimes \cdots \otimes \ket{a_{m}H}$ where $\{a_1,\dots,a_m\} \subseteq G$. Further, for any subgroup $K \leqslant G$ and any subset $\{b_1,\dots,b_m\} \subseteq G$, define \begin{equation} \ket{\Psi(K,\{b_i\})} = \ket{b_{1}K} \otimes \ket{b_{2}K} \otimes \cdots \otimes \ket{b_{m}K}. \end{equation} The key lemma, stated formally below, is that if $K \not\leqslant H$ then $\braket{\Psi}{\Psi(K,\{g_i\})}$ is exponentially small.
Let $\mathcal{H}_K$ be the subspace of $\mathcal{H}$ spanned by all the vectors of the form $\ket{\Psi(K,\{b_i\})}$ for all subsets $\{b_i\}.$ Let $P_K$ be the projection operator onto $\mathcal{H}_K$
and let $P_{K}^\perp$ be the projection operator onto the orthogonal complement of $\mathcal{H}_K$ in $\mathcal{H}.$ Define the observable $A_K = P_K - P_K^\perp$. Choose an ordering of the elements of $G$, say, $g_1,g_2,\dots,g_{|G|}.$
The algorithm mentioned in Theorem~\ref{thm:main} works as follows. We~first apply $A_\cs{g_1}$ to $\ket{\Psi}$, where $\cs{g} \leqslant G$ denotes the cyclic subgroup generated by \mbox{$g \in G$}. If~the outcome is~$-1$ then we know that $g_1 \not\in H$, and if the outcome is~$+1$ then we know that $g_1 \in H$ with high probability. We~then apply $A_\cs{g_2}$ to the state resulting from the first measurement. Continuing in this manner we test each element of $G$ for membership in $H$ by sequentially applying $A_\cs{g_2}$, $A_\cs{g_3}$ and so on to the resulting states of the previous measurements. (Of~course if we discover $g \in H$ then we know that, say, $g^2 \in H$ and we can omit the test $A_\cs{g^2}$.) We~prove below that each measurement alters the state insignificantly with high probability,
implying that by the application of the final operator $A_\cs{g_{|G|}}$ we have, with high probability, identified exactly which elements of~$G$ are in~$H$ and which are not.
\begin{lemma}\label{lm:key} Let $K \leqslant G$. If $K \not\leqslant H$
then $\langle \Psi|P_K |\Psi \rangle \leq \frac{1}{2^m}$. If~$K \leqslant H$ then $\langle \Psi|P_K |\Psi \rangle = 1$. \end{lemma}
\begin{proof}
Let $|H \cap K| = d$. Notice that for all
$g_1,g_2 \in G$ we have either $|g_1H \cap g_2K| = d$ or $|g_1H \cap g_2K| = 0$. This implies that if $|g_1H \cap g_2K| = d$ then
$\braket{g_1H}{g_2K} = d/\sqrt{|H||K|}$. Therefore \begin{equation*} \braket{\Psi}{\Psi(K,\{b_i\})} =
\begin{cases}\Big(\frac{d}{\sqrt{|H||K|}}\Big)^m
& \mbox{ if $|H \cap K| = d$}\\ 0 & \mbox{ if $|H \cap K| = 0$.} \end{cases} \end{equation*}
There exist exactly $(|H|/d)^m$ vectors of the form $\ket{\Psi(K,\{b_i\})}$ such that $\braket{\Psi}{\Psi(K,\{b_i\})}$
is nonzero. Hence, $\langle \Psi |P_K | \Psi \rangle =
\big(\frac{|H|}{d}\big)^m \big(\frac{d^2}{|H||K|}\big)^m
= \big(\frac{d}{|K|}\big)^m$. If $K \not\leqslant H$ then $d/|K| \leq 1/2$, and if
$K \leqslant H$ then \mbox{$d = |K|$}. \end{proof}
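The identity $\langle \Psi|P_K |\Psi \rangle = (d/|K|)^m$ can be checked numerically for a small example. Since the states $\ket{\Psi(K,\{b_i\})}$ for distinct tuples of cosets are orthonormal, $\langle \Psi|P_K|\Psi\rangle$ is the sum of $|\braket{\Psi}{\Psi(K,\{b_i\})}|^2$ over all coset tuples. A Python sketch (our choice of $G=\mathbb{Z}_6$, $H=\{0,3\}$, $K=\{0,2,4\}$, purely for illustration):

```python
from itertools import product
from math import sqrt, prod, isclose

n, m = 6, 3                        # G = Z_6 (additive), m tensor factors
H = {0, 3}                         # the hidden subgroup
K = {0, 2, 4}                      # a candidate subgroup with K not <= H
d = len(H & K)                     # |H ∩ K| = 1 here

def coset(S, a):
    return frozenset((a + s) % n for s in S)

K_cosets = {coset(K, b) for b in range(n)}   # the two cosets of K

def inner(c1, c2):
    # <c1|c2> for normalized uniform superpositions over two cosets
    return len(c1 & c2) / sqrt(len(c1) * len(c2))

# |Psi> = |a_1 H> x ... x |a_m H>; the |Psi(K,{b_i})> for distinct
# coset tuples are orthonormal, so <Psi|P_K|Psi> is a sum of squares.
psi = [coset(H, a) for a in (0, 1, 2)]       # arbitrary H-cosets, m = 3
val = sum(
    prod(inner(psi[i], kc[i]) for i in range(m)) ** 2
    for kc in product(K_cosets, repeat=m)
)
print(val, (d / len(K)) ** m)                # both equal (1/3)^3
```

Here every overlap equals $1/\sqrt{6}$, and the $2^m$ coset tuples sum to $(1/3)^m$, as the lemma predicts.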
Let $\ket{\Psi_0} = \ket{\Psi}$. For $1 \leq i \leq |G|$, define the unnormalized states \begin{equation*} \ket{\Psi_i} = \begin{cases} \,P_\cs{g_i} \;\ket{\Psi_{i-1}} & \mbox{ if $g_i \in H$}\\ \,P_\cs{g_i}^\perp \;\ket{\Psi_{i-1}} & \mbox{ if $g_i \not\in H$.} \end{cases} \end{equation*} Then $\braket{\Psi_i}{\Psi_i}$ equals the probability that the algorithm given above answers correctly whether $g_j \in H$ for all $1 \leq j \leq i$.
Now, for all $0 \leq i \leq |G|$, let $\ket{E_i} = \ket{\Psi} - \ket{\Psi_i}$.
\begin{lemma}\label{lm:error}
For all $0 \leq i \leq |G|$, we have $\braket{E_i}{E_i} \leq \frac{i^2}{2^m}$. \end{lemma}
\begin{proof} We~prove this by induction on~$i$. Since $\ket{\Psi_0} = \ket{\Psi}$ by definition, $\ket{E_0} = 0$. Now, suppose that $\braket{E_i}{E_i} \leq \frac{i^2}{2^m}$. On~the one hand, if $g_{i+1} \in H$, then $\ket{\Psi_{i+1}} = P_{\cs{g_{i+1}}} \big(\ket{\Psi} - \ket{E_{i}}\big) = \ket{\Psi} - P_{\cs{g_{i+1}}} \ket{E_{i}}$. Hence $\braket{E_{i+1}}{E_{i+1}} \leq \braket{E_i}{E_i} \leq \frac{i^2}{2^m}$. On~the other hand, if $g_{i+1} \not\in H$, then $\ket{\Psi_{i+1}} = P_{\cs{g_{i+1}}}^\perp \big(\ket{\Psi} - \ket{E_{i}}\big) = \ket{\Psi} - P_{\cs{g_{i+1}}} \ket{\Psi} - P_{\cs{g_{i+1}}}^\perp \ket{E_{i}}$. By~Lemma~\ref{lm:key}, we then have $\braket{E_{i+1}}{E_{i+1}}^{1/2} \leq \frac{1}{2^{m/2}} + \braket{E_i}{E_i}^{1/2} \leq \frac{i+1}{2^{m/2}}$. \end{proof}
Since $\ket{\Psi_{|G|}} = \ket{\Psi} - \ket{E_{|G|}}$
and $\braket{E_{|G|}}{E_{|G|}} \leq \frac{|G|^2}{2^m}$ by the above lemma, we obtain the following lower bound for correctly determining all the elements of~$H$.
\begin{lemma}
$\braket{\Psi_{|G|}}{\Psi_{|G|}} \,\geq 1 - \frac{2 |G|}{2^{m/2}}$. \end{lemma}
By choosing $m = 4 \log|G| +2$, the main theorem follows directly.
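For completeness, the arithmetic behind this choice (with logarithms to base~$2$, which we assume as in the statement of Theorem~\ref{thm:main}):

```latex
\[
  m = 4\log|G| + 2
  \quad\Longrightarrow\quad
  2^{m/2} = 2^{\,2\log|G| + 1} = 2\,|G|^{2}
  \quad\Longrightarrow\quad
  \frac{2|G|}{2^{m/2}} = \frac{1}{|G|},
\]
```

so the preceding lemma gives $\braket{\Psi_{|G|}}{\Psi_{|G|}} \geq 1 - 1/|G|$, the success probability claimed in Theorem~\ref{thm:main}.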
\section{Conclusion}
We have shown that there exists a quantum algorithm that discovers a hidden subgroup of an arbitrary finite group in $O(\log|G|)$ calls to the oracle function. This is possible due to the geometric fact that the possible pure states corresponding to different possible subgroups are almost orthogonal, i.e. they have exponentially small inner product. Equivalently stated, there exists a measurement, a POVM, that distinguishes among the possible states in a Hilbert space of dimension $|G|^m$ where $m = O(\log|G|)$. The open question remains whether there exists a POVM that not only distinguishes among the states but is also efficiently implementable, with the resulting information efficiently postprocessable.
\end{document}
\begin{document}
\title[Dihedral symmetry of periodic chain]{Dihedral symmetry
of periodic chain: \\ quantization and coherent states}
\author{P Luft, G Chadzitaskos and J Tolar} \address{Department of Physics\\ Faculty of Nuclear Sciences and Physical Engineering \\ Czech Technical University \\ B\v rehov\'a 7, CZ - 115 19 Prague, Czech Republic}
\ead{jiri.tolar@fjfi.cvut.cz}
\begin{abstract}
Our previous work on quantum kinematics and coherent states over finite configuration spaces is extended: the configuration space is, as before, the cyclic group $\mathbf{Z_{n}}$ of arbitrary order $n=2,3,\ldots$, but a larger group --- the non-Abelian dihedral group $\mathbf{D_{n}}$ --- is taken as its symmetry group. The corresponding group-related coherent states are constructed and their overcompleteness is proved. Our approach based on geometric symmetry can be used as a kinematic framework for matrix methods in quantum chemistry of ring molecules. \end{abstract}
\pacs{03.65.Fd, 31.15.-p, 31.15.Hz}
\submitto{J. Phys. A: Math. Theor.}
\noindent Keywords: dihedral group, periodic chain, Mackey quantization, finite-dimensional Hilbert space, coherent states
\section{Introduction}
The mathematical arena for ordinary quantum mechanics is, due to Heisenberg's commutation relations, the infinite-dimensional Hilbert space. A useful model for quantum mechanics in a Hilbert space of finite dimension $n$ is due to H. Weyl \cite{Weyl}. Its geometric interpretation as the simplest quantum kinematic on a finite discrete configuration space formed by a periodic chain of $n$ points, was elaborated by J.~Schwinger \cite{Schwinger}. In \cite{Tolar, StovTolar} we proposed a group theoretical formulation of this quantum model in terms of Mackey's quantization \cite{Mackey, HDDTolar}. It is based on Mackey's system of imprimitivity which represents a group theoretical generalization of Heisenberg's commutation relations.
The geometrical picture behind the group theoretical approach is the following \cite{HDDStovTolar}: one has a discrete or continuous configuration space together with a geometrical symmetry group acting transitively on it, i.e. the configuration space is a homogeneous space of the group. In particular, Weyl's model is based on configuration space $\mathbf{Z_{n}}$ (where $\mathbf{Z_{n}}$ is the cyclic group of order $n=2,3,\ldots $) with symmetry $\mathbf{Z_{n}}$ acting on the periodic chain $\mathbf{Z_{n}}$ by discrete translations. In this paper our formulation of Weyl's model is generalized by extending the Abelian symmetry group $\mathbf{Z_{n}}$ of the periodic chain to the dihedral group $\mathbf{D_{n}}$ --- the non-Abelian symmetry group of a regular $n$-sided polygon.
Coherent states belong to the most important tools in many applications of quantum physics. They have found numerous applications in quantum optics, quantum field theory, condensed matter physics, atomic physics etc. There are various definitions of and approaches to coherent states, depending on the author and application. Our main reference is \cite{Perelomov}, where the systems of coherent states related to Lie groups are described. The basic feature of such systems is that they are overcomplete. As shown for instance in \cite{TCh}, Perelomov's method can be equally well applied to discrete groups. Starting with irreducible systems of imprimitivity we shall construct irreducible sets of generalized Weyl operators, whose action on properly chosen vacuum states will produce the resulting families of coherent states.
In section 2 after recalling Mackey's Imprimitivity Theorem for finite groups \cite{Coleman} the construction of systems of imprimitivity is described. Then necessary notations for the dihedral groups are introduced in section 3. Section 4 is devoted to the construction of the two irreducible systems of imprimitivity for $\mathbf{D_{n}}$ based on $\mathbf{Z_{n}}$, each consisting of a projection--valued measure and an induced unitary representation. From them, the corresponding quantum position and momentum observables are constructed in section 5. This is the starting point for construction of the set of generalized Weyl operators and generalized coherent states in section 6. We apply the method of paper \cite{TCh}, where quantization on $\mathbf{Z}_{n}$ with Abelian symmetry group $\mathbf{Z}_{n}$ and the corresponding coherent states were investigated. Concluding section 7 contains remarks concerning the replacement of the Abelian cyclic symmetry group $\mathbf{Z}_{n}$ by the non-Abelian dihedral group $\mathbf{D}_{n}$ as the group of motions of the configuration space $\mathbf{Z}_{n}$. The interesting feature of our construction is the fact that, even if the group property of the set of Weyl operators is lost, the families of coherent states still possess the required overcompleteness property.
\section{Systems of imprimitivity for finite groups}
We consider the case when the configuration space $\mathbf{M}$ and its symmetry group $\mathbf{G}$ are finite. Our configuration space will be a finite set $\mathbf{M}= \{m_{1}, m_{2},...,m_{n}\}$,
$n=|\mathbf{M}|$. Let $\mathbf{G}$ be a finite group acting transitively on $\mathbf{M}$, and let $\mathbf{H}$ be the stability subgroup. Let $\mathbf{L}$ be an irreducible unitary representation of subgroup $\mathbf{H}$ on Hilbert space $\mathcal{H}^{\mathbf{L}}$.
A system of imprimitivity is a pair $(\mathbf{V},\mathbf{E})$, where $\mathbf{E}$ is a projection-valued measure on the configuration space $\mathbf{G}/\mathbf{H}$ and $\mathbf{V}$ is a unitary representation of the symmetry group $\mathbf{G}$ such that \begin{equation} \mathbf{V}(g)\mathbf{E}(S)\mathbf{V}(g)^{-1}=\mathbf{E}(g.S) \quad \text{for all} \quad g\in\mathbf{G}, S \subset \mathbf{G}/\mathbf{H}. \end{equation} In a finite-dimensional Hilbert space $\mathcal{H}= \mathbb{C}^{n}$ the standard projection-valued measure is given by finite sums of diagonal matrices \begin{equation}
\mathbf{E}(m_{i}) := \text{diag}(0,0,...,1,...,0), \; i =1,2,...,n. \end{equation}
The Imprimitivity Theorem for finite groups has the following form \cite{Coleman}: \\
{\bf Theorem :} {\it A unitary representation $\mathbf{V}$ of a finite group $\mathbf{G}$ in Hilbert space $\mathcal{H}$ belongs to the imprimitivity system $(\mathbf{V},\mathbf{E})$ with standard projection-valued measure based on $\mathbf{G}/\mathbf{H}$, if and only if $\mathbf{V}$ is equivalent to an induced representation $Ind_{\mathbf{H}}^{\mathbf{G}}(\mathbf{L})$ for some unitary representation $\mathbf{L}$ of subgroup $\mathbf{H}$. The system of imprimitivity is irreducible, if and only if $\mathbf{L}$ is irreducible.}
Thus a unitary representation $\mathbf{V}$ for a system of imprimitivity is constructed directly as an induced representation. Let $\mathbf{G}$ be a finite group of order $r$, $\mathbf{H}$ its subgroup of order $s$. Suppose that $\mathbf{L}$ is a representation of the subgroup $\mathbf{H}$. Let us decompose the group $\mathbf{G}$ into left cosets \begin{equation}\label{cosets}
\mathbf{G}=\{\bigcup_{j=1}^{r/s}t_{j}\cdot\mathbf{H}
\; \vert \; t_{j} \in \mathbf{G}, \; t_{1}=e \}. \end{equation} Group elements $t_{j}$ are arbitrarily chosen representatives of left cosets. If the dimension of the representation $\mathbf{L}$ is $l$, then the induced representation $\mathbf{V}$ of $\mathbf{G}$ is given by
\begin{eqnarray} (\mathbf{V}(g))_{ij}& = & \mathbf{L}(h) \quad \text{ if } \quad t^{-1}_{i}\cdot g \cdot t_{j} = h \quad \text{for some} \; h \in \mathbf{H}, \\ & = & 0 \quad \text{ otherwise }; \end{eqnarray} here $(\mathbf{V}(g))_{ij}$ are $l \times l$ matrices which serve as building blocks for \begin{equation}
\mathbf{V}(g) = Ind_{\mathbf{H}}^{\mathbf{G}}(\mathbf{L}) \end{equation} and the subscript $ij$ denotes the position of the block in $\mathbf{V}(g)$.
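The block construction above can be checked numerically. The following Python sketch (our own illustration, anticipating the dihedral group of the next sections; the tuple encoding of group elements is an assumption of this snippet, not part of the paper) builds $Ind_{\mathbf{H}}^{\mathbf{G}}(\mathbf{L})$ for $\mathbf{G}=\mathbf{D_{3}}$ with $\mathbf{H}\cong\mathbf{Z_{2}}$ and the trivial one-dimensional representation $\mathbf{L}$, and verifies that the resulting matrices indeed form a representation.

```python
import numpy as np
from itertools import product

# Sketch of the induced-representation rule (V(g))_{ij} = L(t_i^{-1} g t_j)
# when that product lies in H, else 0, for G = D_3 with H = {R_0, M_0} ~ Z_2
# and L the trivial representation T_1.

n = 3
# elements of D_n encoded as (k, s): rotation R_k for s = +1, mirror M_k for s = -1
def mul(a, b):
    (i, x), (j, y) = a, b
    return ((i + j) % n, y) if x == +1 else ((i - j) % n, -y)

def inv(a):
    (i, x) = a
    return ((-i) % n, +1) if x == +1 else (i, -1)   # mirrors are involutions

H = [(0, +1), (0, -1)]                  # stability subgroup ~ Z_2
L = {(0, +1): 1.0, (0, -1): 1.0}        # trivial representation T_1
reps = [(m, +1) for m in range(n)]      # coset representatives t_m = R_m

def induced(g):
    V = np.zeros((n, n))
    for i, j in product(range(n), repeat=2):
        h = mul(mul(inv(reps[i]), g), reps[j])
        if h in H:
            V[i, j] = L[h]
    return V

# homomorphism check: V(g1) V(g2) = V(g1 * g2) for all pairs in D_3
G = [(k, s) for k in range(n) for s in (+1, -1)]
ok = all(np.allclose(induced(g1) @ induced(g2), induced(mul(g1, g2)))
         for g1, g2 in product(G, G))
print(ok)
```

For the rotation $g=(1,+1)$ the construction reproduces the cyclic shift matrix, consistent with the explicit formulas derived later for $\mathbf{V_{1}}$.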
\section{Structure of dihedral groups}
The dihedral group $\mathbf{D_{n}}$, where $n=2,3,\ldots$, is a non-Abelian finite group of order $2n$ with the structure of a semidirect product of two cyclic groups: \begin{equation}\label{semi}
\mathbf{D_{n}} = \mathbf{Z_{n}} \triangleright \mathbf{Z_{2}}. \end{equation} It arises as the symmetry group of a regular polygon and is generated by discrete rotations and reflections. The elements of the subgroups $\mathbf{Z_{2}}$ and $\mathbf{Z_{n}}$ will be denoted \begin{equation}\label{ZN}
\mathbf{Z_{2}} = \{+1,-1\}; \quad \mathbf{Z_{n}} =
\{e=r_{0},r_{1},...,r_{n-1}\}. \end{equation}
The group operation in $\mathbf{Z_{2}}$ is multiplication; in $\mathbf{Z_{n}}$ it is $r_{i}\cdot r_{j}=r_{i+j \pmod{n}}$.
The multiplication law of the semidirect product \eref{semi} is determined by a fixed homomorphism $f$ from $\mathbf{Z_{2}}$ to the group of all automorphisms of the group $\mathbf{Z_{n}}$, $ f:\mathbf{Z_{2}} \rightarrow Aut(\mathbf{Z_{n}})$: \begin{equation}\label{multlaw}
(r_{i},x)\cdot (r_{j},y) = (r_{i}\cdot f(x)(r_{j}),x\cdot y), \; x,y \in \mathbf{Z_{2}}, \; r_{i},r_{j} \in \mathbf{Z_{n}}. \end{equation} Under this multiplication law, $\mathbf{Z_{n}}$ is a normal subgroup. Specifically for $\mathbf{D_{n}}$, the mapping $f$ is simply \begin{equation}\label{}
f:+1 \mapsto Id, \qquad f:-1 \mapsto Inv, \end{equation} where $Id$ is the identical mapping on $\mathbf{Z_{n}}$, $Inv$ is an automorphism of $\mathbf{Z_{n}}$ which maps an element of $\mathbf{Z_{n}}$ into its inverse: \begin{equation}\label{}
Inv: r_{k}\mapsto r_{k}^{-1}=r_{-k \pmod{n}}, \qquad r_k \in \mathbf{Z_{n}}. \end{equation} We shall need the explicit form of the multiplication law: \begin{equation}\label{nasa}
(r_{i},+1)\cdot (r_{j},x) = (r_{i}\cdot r_{j},x) =
(r_{i+j \pmod{n}},x), \end{equation} \begin{equation}\label{nasb}
(r_{i},-1)\cdot (r_{j},x) = (r_{i}\cdot r_{j}^{-1},-x) =
(r_{i-j \pmod{n}},-x). \end{equation}
Thus the elements of $\mathbf{D_{n}}$ can be divided in two disjoint subsets: \begin{enumerate} \item The subset $ \{(r_{k},+1), \, k = 0,1,...,n-1 \}$ forms the subgroup isomorphic to $\mathbf{Z_{n}}$ and the elements $(r_{k},+1)$ have the geometrical meaning of integral multiples of a clockwise rotation of an $n$-sided regular polygon through an angle $2\pi /n$. \item The subset $ \{(r_{k},-1), \, k = 0,1,...,n-1 \}$ consists of mirror symmetries with respect to axes in the $n$--sided polygon: if $n$ is odd, then all axes of mirror symmetries pass through vertices of the $n$--sided polygon; if $n$ is even, then only one half of mirror symmetries have axes passing through opposite vertices, the remaining axes are symmetry axes of two opposite sides of the polygon. \end{enumerate}
Summarizing, the group $\mathbf{D_{n}}$ consists of $n$ rotation symmetries $\mathbf{R}_{k} = (r_{k},+1)$ and $n$ mirror symmetries $\mathbf{M}_{k} = (r_{k},-1)$ obeying the following multiplication rules (with $i,j = 0,1,...,n-1$): \begin{equation}\label{nas1}
\mathbf{R}_{i}\cdot\mathbf{R}_{j} = \mathbf{R}_{i+j \pmod{n}},
\qquad
\mathbf{R}_{i}\cdot\mathbf{M}_{j} = \mathbf{M}_{i+j \pmod{n}}, \end{equation} \begin{equation}\label{nas2}
\mathbf{M}_{i}\cdot\mathbf{R}_{j} = \mathbf{M}_{i-j \pmod{n}},
\qquad
\mathbf{M}_{i}\cdot\mathbf{M}_{j} = \mathbf{R}_{i-j \pmod{n}}. \end{equation}
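The four multiplication rules can be verified with a concrete matrix realization. The following sketch (our illustration, not from the paper) realizes $\mathbf{R}_{k}$ as the $2\times 2$ rotation by $2\pi k/n$ and $\mathbf{M}_{k} = \mathbf{R}_{k}F$ with $F=\mathrm{diag}(1,-1)$, and checks all four rules for $n=7$.

```python
import numpy as np

# Check the dihedral multiplication rules using 2x2 rotation/reflection
# matrices: R_k = rotation by 2*pi*k/n, M_k = R_k @ F with F = diag(1, -1).

n = 7
def R(k):
    t = 2 * np.pi * k / n
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

F = np.diag([1.0, -1.0])
def M(k):
    return R(k) @ F

for i in range(n):
    for j in range(n):
        assert np.allclose(R(i) @ R(j), R((i + j) % n))   # R_i R_j = R_{i+j}
        assert np.allclose(R(i) @ M(j), M((i + j) % n))   # R_i M_j = M_{i+j}
        assert np.allclose(M(i) @ R(j), M((i - j) % n))   # M_i R_j = M_{i-j}
        assert np.allclose(M(i) @ M(j), R((i - j) % n))   # M_i M_j = R_{i-j}
print("all dihedral relations hold")
```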
\section{Quantization on $\mathbf{Z_{n}}$ with $\mathbf{D_{n}}$ as a symmetry group}
The configuration space $\mathbf{Z_{n}}$ will be identified with the set of vertices of a regular $n$--sided polygon. We have seen that $\mathbf{D_{n}}$ acts on $\mathbf{Z_{n}}$ transitively as a group of discrete rotations and mirror symmetries. The stability subgroup $\mathbf{H_{n}}$ of $\mathbf{D_{n}}$ is $\mathbf{Z_{2}}$ for all $n$, hence we can write $ \mathbf{Z_{n}} \cong \mathbf{D_{n}}/\mathbf{Z_{2}}$.
The stability subgroup $\mathbf{Z_{2}}$ is independent of the order of symmetry group $\mathbf{D_{n}}$ and it has exactly two inequivalent irreducible unitary representations, the trivial representation \begin{equation}\label{}
\mathbf{T_{1}}:\mathbf{Z_{2}}\rightarrow \mathbb{C}:\pm 1 \mapsto 1, \end{equation} and the alternating representation \begin{equation}\label{}
\mathbf{T_{2}}:\mathbf{Z_{2}}\rightarrow \mathbb{C}:+1 \mapsto +1,
\quad -1 \mapsto -1. \end{equation}
Now the inequivalent quantum kinematics on the configuration space $\mathbf{Z_{n}}$ are determined by inequivalent systems of imprimitivity on $\mathbf{Z_{n}}$ with the symmetry group $\mathbf{D_{n}}$. We require irreducibility of systems of imprimitivity in order that the corresponding kinematical observables act irreducibly in the Hilbert space. There will be exactly two inequivalent irreducible systems of imprimitivity $(\mathbf{V_1},\mathbf{E_1})$ and $(\mathbf{V_2},\mathbf{E_2})$ with representations induced from irreducible unitary representations $\mathbf{T_{1}}$ and $\mathbf{T_{2}}$.
In both cases the Hilbert space $\mathcal H$ of quantum mechanics is the space of complex functions on the configuration space $\mathbf{Z_{n}}$ and it is isomorphic to $n$--dimensional complex vector space $\mathbb{C}^{n}$ with standard inner product \begin{equation}\label{} <z_1,z_2>=\sum_{i=0}^{n-1}\bar{z}_{1i}z_{2i}. \end{equation} The standard projection-valued measure $\mathbf{E}$ is common to both systems of imprimiti\-vi\-ty $(\mathbf{V_1},\mathbf{E})$ and $(\mathbf{V_2},\mathbf{E})$. It is diagonal and generated by sums of one-dimensional orthogonal projectors on $\mathbb{C}^{n}$ of the form \begin{equation}\label{proj1}
\mathbf{E}(r_{i}) = \mathrm{diag}(0,\ldots,0,\underset{i}{1},0,\ldots,0), \quad i=0,1,...,n-1; \end{equation} i.e. the only nonzero entry is a unit in the $i$-th position. The measure of the empty set in $\mathbf{Z_{n}}$ is the zero operator on $\mathbb{C}^{n}$, and the measure of the whole configuration space is the unit operator.
In order to obtain the two irreducible systems of imprimitivity, we shall construct the representations induced from $\mathbf{T_{1}}$ and $\mathbf{T_{2}}$ on $\mathbb{C}^{n}$, \begin{equation}\label{}
\mathbf{V_{1}} =
Ind_{\mathbf{Z_{2}}}^{\mathbf{D_{n}}}(\mathbf{T_{1}}), \qquad
\mathbf{V_{2}} =
Ind_{\mathbf{Z_{2}}}^{\mathbf{D_{n}}}(\mathbf{T_{2}}). \end{equation} According to \eref{cosets} the symmetry group $\mathbf{D_{n}}$ is decomposed into left cosets, \begin{equation}\label{}
\mathbf{D_{n}}=
\{\bigcup_{m=0}^{n-1}t_{m}\cdot\mathbf{Z_{2}} \; \vert \; t_{m}\in \mathbf{D_{n}}, \; t_{0}=e\}. \end{equation} In our case we have $\mathbf{Z_{2}} = \{\mathbf{R}_{0},\, \mathbf{M}_{0}\}$; with the choice of coset representatives $t_{m} = \mathbf{R}_{m}, \; m=0,1,...,n-1$, we obtain the decomposition \begin{equation}\label{}
\mathbf{D_{n}} = \{ \{\mathbf{R}_{0}, \mathbf{M}_{0}\} \cup \,
\{\mathbf{R}_{1}, \mathbf{M}_{1}\} \cup
\,... \cup \{\mathbf{R}_{n-1}, \mathbf{M}_{n-1}\} \}. \end{equation}
Matrices of induced representations are then constructed in block form: dimensions of both representations $\mathbf{V_{1}}$ and $\mathbf{V_{2}}$ are equal to $n$, \begin{equation}\label{}
\text{dim}(\mathbf{V_{l}}) =
\frac{|\mathbf{D_{n}}|}{\mathbf{|Z_{2}|}}\cdot
\text{dim}(\mathbf{T_{l}})=\frac{2n}{2}\cdot 1 = n, \quad l=1,2, \end{equation} and matrix elements ($1\times 1$--blocks) have the following form:
\begin{eqnarray}\label{ind1} \mathbf{V_{l}}(g)_{ij}&=&
\mathbf{T_{l}}(h) \quad \text{if}\quad
t^{-1}_{i}\cdot g \cdot t_{j} = h \quad
\text{for some}\quad h \in \mathbf{Z_{2}}, \cr
&=& 0 \quad \text{otherwise}.
\end{eqnarray} In our case $t_{i} = \mathbf{R}_{i}$, so the matrix element $(\mathbf{V_{l}}(g))_{ij}$ does not vanish if and only if \begin{equation}\label{ind2} \mathbf{R}_{-i \pmod{n}}\cdot g \cdot \mathbf{R}_{j} \in \{\mathbf{R}_{0}, \, \mathbf{M}_{0}\}. \end{equation}
To construct the induced representation $\mathbf{V_{1}}$ --- first for the subgroup of discrete rotations $g=\mathbf{R}_{k}$ --- condition \eref{ind2} \begin{equation}\label{}
\mathbf{R}_{-i \pmod{n}}\cdot \mathbf{R}_{k} \cdot \mathbf{R}_{j}
= \mathbf{R}_{-i+j+k \pmod{n}}\in \{\mathbf{R}_{0}, \, \mathbf{M}_{0}\}
\end{equation} is equivalent to $i=j+k {\pmod{n}}$, hence matrix elements \eref{ind1} of discrete rotations are \begin{equation}\label{v1op}
\ (\mathbf{V_{1}}(\mathbf{R}_{k}))_{ij}=\delta_{i,j+k \pmod{n}}. \end{equation} So the entire matrix is \begin{equation}\label{v1rk}
\mathbf{V_{1}}(\mathbf{R}_{k})=
\left(\begin{array}{cc}
0 & \mathbb{I}_{k} \\
\mathbb{I}_{n-k} & 0
\end{array}\right), \end{equation} where $\mathbb{I}_{m}$ denotes the $m\times m$ unit matrix, i.e. $\mathbf{V_{1}}(\mathbf{R}_{k})$ is the cyclic shift by $k$ positions.
For the representation $\mathbf{V_{1}}$ of mirror symmetries $g=\mathbf{M}_{k}$ condition (\ref{ind2}) acquires the form \begin{equation}\label{}
\mathbf{R}_{-i \pmod{n}}\cdot \mathbf{M}_{k} \cdot
\mathbf{R}_{j} = \mathbf{M}_{-i-j+k \pmod{n}} \in \{\mathbf{R}_{0}, \, \mathbf{M}_{0}\} \Leftrightarrow i = k-j \end{equation} due to (\ref{nas1}) - (\ref{nas2}), so the matrix elements (\ref{ind1}) of mirror symmetries are \begin{equation}\label{}
(\mathbf{V_{1}}(\mathbf{M}_{k}))_{ij} = \delta_{i,k-j \pmod{n}}. \end{equation} The matrix $\mathbf{V_{1}}(\mathbf{M}_{k})$ has the explicit form \begin{equation}\label{v1mk}
\mathbf{V_{1}}(\mathbf{M}_{k}) =
\left(\begin{array}{cc}
\mathbb{J}_{k+1} & 0 \\
0 & \mathbb{J}_{n-k-1}
\end{array}\right), \end{equation} where $\mathbb{J}_{m}$ denotes the $m\times m$ matrix with units on the anti-diagonal and zeros elsewhere; equivalently, $\mathbf{V_{1}}(\mathbf{M}_{k})$ has unit entries exactly at the positions $(i,j)$ with $i+j \equiv k \pmod{n}$.
The second representation $\mathbf{V_{2}}$ is obtained similarly via (\ref{ind1}) as the representation induced from $\mathbf{T_{2}}$ with the result \begin{equation}\label{jd}
\mathbf{V_{2}}(\mathbf{R}_{k}) = \mathbf{V_{1}}(\mathbf{R}_{k}),
\quad \mathbf{V_{2}}(\mathbf{M}_{k}) =
-\mathbf{V_{1}}(\mathbf{M}_{k}). \end{equation} The representations $\mathbf{V_{1}}$ and $\mathbf{V_{2}}$ are unitary, reducible and inequivalent; as could be expected, the two systems of imprimitivity differ only on reflections in $\mathbf{D_{n}}$.
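The matrix formulas $(\mathbf{V_{1}}(\mathbf{R}_{k}))_{ij}=\delta_{i,j+k}$ and $(\mathbf{V_{1}}(\mathbf{M}_{k}))_{ij}=\delta_{i,k-j}$ (indices mod $n$), together with the sign flip defining $\mathbf{V_{2}}$, can be checked to reproduce the $\mathbf{D_{n}}$ multiplication rules. The following sketch (our own verification) does this for $n=6$.

```python
import numpy as np

# Verify that V_1 (delta formulas) and V_2 (sign flipped on mirrors) both
# satisfy the four D_n multiplication rules.

n = 6
V1R = lambda k: np.array([[1.0 if i == (j + k) % n else 0.0
                           for j in range(n)] for i in range(n)])
V1M = lambda k: np.array([[1.0 if i == (k - j) % n else 0.0
                           for j in range(n)] for i in range(n)])
V2R, V2M = V1R, (lambda k: -V1M(k))

for V_R, V_M in ((V1R, V1M), (V2R, V2M)):
    for i in range(n):
        for j in range(n):
            assert np.allclose(V_R(i) @ V_R(j), V_R((i + j) % n))
            assert np.allclose(V_R(i) @ V_M(j), V_M((i + j) % n))
            assert np.allclose(V_M(i) @ V_R(j), V_M((i - j) % n))
            assert np.allclose(V_M(i) @ V_M(j), V_R((i - j) % n))
print("V_1 and V_2 are representations of D_n")
```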
\section{Quantum observables}
The basic quantum observables --- position and momentum operators --- defining quantum kinematics on a configuration space have natural definition if a system of imprimitivity is given.
Classical position observable is a Borel mapping from the configuration space, in our case from $\mathbf{Z_{n}}$, to the set of real numbers. For the classical position observable counting the points in $\mathbf{Z_{n}}$, \begin{equation}\label{}
f:\mathbf{Z_{n}}\rightarrow \mathbb{R}:r_{k}\mapsto k, \quad
k=0,1,...,n-1, \end{equation} the corresponding {\it quantized position operator} $\widehat{\mathbf{Q}}$ is expressed in terms of the projection-valued measure (\ref{proj1}) as follows \cite{HDDStovTolar}: \begin{equation}\label{posop}
\widehat{\mathbf{Q}}:=\sum_{k=0}^{n-1}k\cdot\mathbf{E}(f^{-1}(k))
=\sum_{k=0}^{n-1}k\cdot\mathbf{E}(r_{k})
= \text{diag}(0,1,\ldots,n-1). \end{equation}
Note that the position operator is the same for both systems of imprimitivity constructed in the previous section, i.e. in both quantum kinematics.
In the continuous case, {\it quantized momentum operators} are obtained from unitary representation $\mathbf{V}$ by means of Stone's theorem \cite{BEH}: {\it To each one-parameter subgroup $\gamma(t)$ of a symmetry group there exists a self-adjoint operator
$\widehat{\mathbf{P}}$ such that} \begin{equation}\label{}
\mathbf{V}(\gamma(t))=\exp(-it\widehat{\mathbf{P}}), \;\; t \in
\mathbb{R}. \end{equation} However, this is not possible in the discrete case. One has to look for self--adjoint operators $\widehat{\mathbf{P_l}}_{g}$ on $\mathbb{C}^{n}$ such that \begin{equation}\label{exp}
\mathbf{V_{l}}(g)=\exp(-i\widehat{\mathbf{P_l}}_{g}),
\qquad l=1,2, \quad g \in \mathbf{D_{n}}. \end{equation} One may try to compute the operators $\widehat{\mathbf{P_l}}_{g}$ by inverting the exponential (\ref{exp}), \begin{equation}\label{ln}
\widehat{\mathbf{P_l}}_{g}= i\cdot \ln(\mathbf{V_{l}}(g)), \end{equation} but one then has to face the problem that the complex exponential is not injective, so the operators $\widehat{\mathbf{P_l}}_{g}$ are not determined uniquely.
Computation of functions of matrices is possible via the Lagrange--Sylvester theorem (see the Appendix). However, the spectral data needed there have their own physical importance in quantum mechanics, so they will be determined below for the operators $\mathbf{V_{1}}(\mathbf{R_{k}})$ and $\mathbf{V_{1}}(\mathbf{M}_{k})$, $k=0,1,...,n-1$. Because of (\ref{jd}) they are applicable to the other system of imprimitivity, too.
Let us start with discrete rotations. The eigenvalues of operator $\mathbf{V_{1}}(\mathbf{R_{1}})$ are solutions of the secular equation \begin{equation}\label{det}
\det(\lambda\mathbb{I}-\mathbf{V_{1}}(\mathbf{R_{1}}))=0 \qquad \text{or} \qquad \lambda^{n}-1=0, \end{equation} hence the spectrum is \begin{equation}\label{}
\sigma(\mathbf{V_{1}}(\mathbf{R}_{1}))=\{\lambda_{j}=e^{\frac{2\pi
ij}{n}}|j=0,1,...,n-1\}. \end{equation} Then the eigenvalues of operators $\mathbf{V_{1}}(\mathbf{R_{k}})$ are simply the powers of those of $\mathbf{V_{1}}(\mathbf{R_{1}})$, \begin{equation}\label{spek1}
\sigma(\mathbf{V_{1}}(\mathbf{R}_{k}))=
\sigma(\mathbf{V_{1}}((\mathbf{R}_{1})^{k}))=
\{\lambda_{j}^k=e^{\frac{2\pi ijk}{n}}|j=0,1,...,n-1\}. \end{equation} Similarly the spectra of operators $\mathbf{V_{1}}(\mathbf{M}_{k})$ for mirror symmetries are obtained by solving \begin{equation}\label{det1}
\det(\lambda\mathbb{I}-\mathbf{V_{1}}(\mathbf{M}_{k}))=0, \end{equation} but here two cases should be distinguished. \begin{enumerate}
\item If $n$ is {\it odd}, then (\ref{det1}) becomes \begin{equation}\label{spm}
(1-\lambda)(\lambda^{2}-1)^{\frac{n-1}{2}}=0 \qquad \Rightarrow \qquad
\sigma(\mathbf{V_{1}}(\mathbf{M}_{k})) = \{+1,-1\} \end{equation} and the multiplicities of eigenvalues $\pm 1$ are $\frac{n\pm 1}{2}$.
\item If $n$ is {\it even}, then the characteristic polynomial of operator $\mathbf{V_{1}}(\mathbf{M}_{k})$ depends, in addition to the dimension $n$, also on the parameter $k$. At this point we must also distinguish whether $k$ is even or odd: in the geometric picture, the axis of the mirror symmetry $\mathbf{M}_{k}$ either passes through opposite vertices of the $n$--sided regular polygon ($k$ even), or is a symmetry axis of two opposite sides of the polygon ($k$ odd). So if $n$ is even, then (\ref{det1}) has the following form: \begin{eqnarray}\label{}
0&=&
(1-\lambda)^{\frac{n}{2}+1}(1+\lambda)^{\frac{n}{2}-1}
\text{ if $k$ is even } , \\
0&=& (1-\lambda)^{\frac{n}{2}}(1+\lambda)^{\frac{n}{2}}
\text{ if $k$ is odd }. \end{eqnarray} The spectra for both cases are the same as for odd $n$, but the multiplicities of eigenvalues are different. If $k$ is even, the multiplicity of eigenvalue $+1$ is $\frac{n}{2}+1$, the multiplicity of eigenvalue $-1$ is $\frac{n}{2}-1$; if $k$ is odd, then the multiplicity of both eigenvalues is $\frac{n}{2}$. \end{enumerate}
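The stated multiplicities can be confirmed numerically. The sketch below (our illustration) counts the $\pm 1$ eigenvalues of $\mathbf{V_{1}}(\mathbf{M}_{k})$, which is a real symmetric permutation matrix, for odd $n$ and for even $n$ with $k$ even and odd.

```python
import numpy as np

# Count eigenvalue multiplicities of the mirror matrices V_1(M_k),
# (V_1(M_k))_{ij} = delta_{i,(k-j) mod n}, a symmetric permutation matrix.

def V1M(n, k):
    return np.array([[1.0 if i == (k - j) % n else 0.0
                      for j in range(n)] for i in range(n)])

def multiplicities(n, k):
    ev = np.linalg.eigvalsh(V1M(n, k))   # real spectrum, only +1 and -1
    plus = int(np.sum(np.isclose(ev, 1.0)))
    minus = int(np.sum(np.isclose(ev, -1.0)))
    return plus, minus

assert multiplicities(7, 3) == (4, 3)   # n odd: (n+1)/2 and (n-1)/2
assert multiplicities(8, 2) == (5, 3)   # n even, k even: n/2+1 and n/2-1
assert multiplicities(8, 3) == (4, 4)   # n even, k odd: n/2 and n/2
print("multiplicities as predicted")
```

The counts match the three cases above: the fixed points of the permutation $j \mapsto k-j \pmod{n}$ contribute $+1$ eigenvalues, while each two-cycle contributes one $+1$ and one $-1$.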
The evaluation of operators $\widehat{\mathbf{P_1}}_{\mathbf{R}_{k}}$ for discrete rotations can be done using the fact that rotations $\mathbf{R}_{k}$ form an Abelian subgroup $\mathbf{Z_{n}}$ of $\mathbf{D_{n}}$. Thus we have simply \begin{equation}\label{}
\exp(-i\widehat{\mathbf{P}}_{\mathbf{R}_{k}})=
\mathbf{V_{1}}(\mathbf{R}_{k})=(\mathbf{V_{1}}(\mathbf{R}_{1}))^{k}
= \exp(-ik\widehat{\mathbf{P}}) \end{equation} where $ \widehat{\mathbf{P}}=\widehat{\mathbf{P_1}}_{\mathbf{R}_{1}}$
can be interpreted as self--adjoint momentum operator. The spectrum (\ref{spek1}) of $\mathbf{V_{1}}(\mathbf{R}_{1})$ has $n$ different simple eigenvalues $\lambda_{k}=e^{\frac{2\pi ik}{n}}$, so it remains to find the corresponding one--dimensional spectral projectors \begin{equation}\label{proj2}
\mathbb{P}_{k}=|k\rangle \langle k|. \end{equation}
Here $|k\rangle$ are normalized eigenvectors of operator $\mathbf{V_{1}}(\mathbf{R}_{1})$ belonging to eigenvalues $\lambda_{k}$ \cite{StovTolar}: \begin{equation}\label{}
|k\rangle = \frac{1}{\sqrt{n}}\left(\begin{array}{c}
\lambda_{k}^{n-1} \\
\lambda_{k}^{n-2} \\
\cdot \\
\cdot \\
\lambda_{k} \\
1
\end{array}\right). \end{equation} Using (\ref{proj2}), the matrix elements of $\mathbb{P}_{k}$ can be written as \begin{equation}\label{}
(\mathbb{P}_{k})_{lm}=
\frac{1}{n}\lambda_{k}^{n-l}\overline{\lambda_{k}^{n-m}}=
\frac{1}{n}e^{\frac{2\pi ik(m-l)}{n}}. \end{equation} Then, using (\ref{ln}) for simple eigenvalues, we have \begin{equation}\label{}
(\widehat{\mathbf{P}})_{lm}=
i (\ln \mathbf{V_{1}}(\mathbf{R}_{1}))_{lm}
= i \sum_{j=0}^{n-1}\ln (\lambda_j)(\mathbb{P}_{j})_{lm}, \end{equation} hence the matrix elements of the momentum operator are obtained: \begin{eqnarray}\label{gr}
(\widehat{\mathbf{P}})_{lm}
&=& \frac{2\pi}{n}\frac{1}{1-e^{\frac{2\pi i(m-l)}{n}}} \quad m \neq l, \\
&=& -\pi\frac{n-1}{n} \quad m = l. \end{eqnarray} Note that this result was obtained in \cite{TCh} by finite Fourier transform of the position operator. For the analysis of operators of mirror symmetries see the Appendix. From the physical point of view unitary operators $\mathbf{V_{1,2}}(\mathbf{M}_{k})$ play the role of parity operators.
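The closed-form matrix elements (\ref{gr}) can be checked against the spectral construction. The sketch below (our verification) assembles $\widehat{\mathbf{P}} = i\sum_j \ln(\lambda_j)\mathbb{P}_{j}$ from the eigenvectors of $\mathbf{V_{1}}(\mathbf{R}_{1})$ and compares it with the closed form.

```python
import numpy as np

# Build P = i * sum_j ln(lambda_j) P_j from the spectral projectors of
# V_1(R_1) and compare with the closed-form matrix elements:
# (2*pi/n)/(1 - exp(2*pi*i*(m-l)/n)) off the diagonal, -pi*(n-1)/n on it.

n = 5
w = np.exp(2j * np.pi / n)
V1R1 = np.array([[1.0 if i == (j + 1) % n else 0.0
                  for j in range(n)] for i in range(n)])

# normalized eigenvectors |k> with components lambda_k^{n-l} / sqrt(n)
ket = lambda k: np.array([w ** (k * (n - l)) for l in range(n)]) / np.sqrt(n)
for k in range(n):
    assert np.allclose(V1R1 @ ket(k), (w ** k) * ket(k))

# i * ln(lambda_k) = i * (2*pi*i*k/n) = -2*pi*k/n
P = sum(1j * (2j * np.pi * k / n) * np.outer(ket(k), ket(k).conj())
        for k in range(n))

Pc = np.empty((n, n), dtype=complex)
for l in range(n):
    for m in range(n):
        Pc[l, m] = (2 * np.pi / n) / (1 - w ** (m - l)) if m != l \
                   else -np.pi * (n - 1) / n
assert np.allclose(P, Pc)
print("closed-form momentum matrix confirmed")
```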
\section{Coherent states parametrized by
$\mathbf{Z_{n}}\times \mathbf{D_{n}}$}
In this section generalized coherent states will be determined for each of the two quantum kinematics.
{\it A family of generalized coherent states of type $\{\Gamma(g), \vert\psi_{0}\rangle\}$ in~the sense of Perelomov \cite{Perelomov} is defined for a~representation $\Gamma(g)$ of a~group $\mathbf{G}$ as a~family of states $\{\vert\psi_{g}\rangle\}$, $\vert\psi_{g}\rangle=\Gamma(g) \vert\psi_{0}\rangle$, where $g$ runs over the whole group $\mathbf{G}$ and $\vert\psi_{0}\rangle$ is the `vacuum' vector.}
First take quantum kinematics defined by the system of imprimitivity $(\mathbf{V_1},\mathbf{E})$. To construct group--related coherent states of Perelomov type parametrized by $(a,g) \in \mathbf{Z_{n}}\times \mathbf{D_{n}}$, we define generalized Weyl operators \begin{equation}\label{}
\widehat{\mathbf{W_1}}(a,g)=
\exp(\frac{2\pi ia}{n}\widehat{\mathbf{Q}})
\exp(-i\widehat{\mathbf{P_1}}_{g})=
e^{\frac{2\pi ia}{n}\widehat{\mathbf{Q}}}\mathbf{V_{1}}(g);
\quad a\in \mathbf{Z_{n}},\, g\in \mathbf{D_{n}}. \end{equation} Here \begin{equation}\label{eiaq}
(e^{\frac{2\pi ia}{n}\widehat{\mathbf{Q}}})_{jk}=
\delta_{j,k}e^{\frac{2\pi iaj}{n}},\quad
\exp(\frac{2\pi ia}{n}\widehat{\mathbf{Q}})=
\left( \begin{array}{ccccc}
1 & & & & \\
& e^{2\pi \frac{ia}{n}} & & & \\
& & \cdot & & \\
& & & \cdot & \\
& & & & e^{\frac{2\pi ia(n-1)}{n}} \
\end{array}\right). \end{equation} Note that, if the system of imprimitivity is irreducible, also the set of generalized Weyl operators defined above acts irreducibly in the Hilbert space $\cal H$.
Restricting $g$ to the subgroup $\mathbf{Z_{n}}$ of discrete rotations, the unitary operators satisfy \begin{equation}\label{xyz}
e^{\frac{2\pi ia}{n}\widehat{\mathbf{Q}}}e^{im\widehat{\mathbf{P}}}=
e^{\frac{2\pi iam}{n}}e^{im\widehat{\mathbf{P}}}
e^{\frac{2\pi ia}{n}\widehat{\mathbf{Q}}} \end{equation} and operators $\widehat{\mathbf{W_1}}(a,g)$ form the well--known projective unitary representation of the group $\mathbf{Z_{n}}\times \mathbf{Z_{n}}$, which acts irreducibly in the Hilbert space
${\cal H} =\mathbb{C}^{n}$ \cite{Weyl,StovTolar}.
Unfortunately, if we want to derive a relation similar to \eref{xyz} for operators $\widehat{\mathbf{P_1}}_{\mathbf{M}_{k}}$, by performing the same computation as for $\widehat{\mathbf{P}}$ we obtain \begin{equation}\label{prus}
(e^{\frac{2\pi ia}{n}\widehat{\mathbf{Q}}}
e^{i\widehat{\mathbf{P_1}}_{\mathbf{M}_{m}}})_{jk}=
e^{\frac{2\pi ia}{n}(2m-2k)}(e^{i\widehat{\mathbf{P_1}}_{\mathbf{M}_{m}}}
e^{\frac{2\pi ia}{n}\widehat{\mathbf{Q}}})_{jk}. \end{equation} Here the multiplier is $k$--dependent, hence there is neither an operator equality similar to (\ref{xyz}) nor a projective representation property of operators $\widehat{\mathbf{W_1}}(a,g)$.
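The contrast between rotations and mirrors can be seen numerically. The following sketch (our illustration; the conjugation test is our own way of exhibiting the failure, not a formula from the paper) checks that conjugating $\mathbf{V_{1}}(\mathbf{R}_{m})$ by the diagonal phase $e^{\frac{2\pi ia}{n}\widehat{\mathbf{Q}}}$ produces only a global phase, while for $\mathbf{V_{1}}(\mathbf{M}_{m})$ the resulting multiplier is entry-dependent.

```python
import numpy as np

# Rotations: D V_1(R_m) = exp(2*pi*i*a*m/n) V_1(R_m) D (projective relation).
# Mirrors: the group commutator D V_1(M_m) D^{-1} V_1(M_m)^{-1} is NOT a
# scalar multiple of the identity, so no analogous relation exists.

n = 5
a = 2
jj = np.arange(n)
D = np.diag(np.exp(2j * np.pi * a * jj / n))   # exp((2*pi*i*a/n) Q)
V1R = lambda m: np.array([[1.0 if i == (j + m) % n else 0.0
                           for j in range(n)] for i in range(n)])
V1M = lambda m: np.array([[1.0 if i == (m - j) % n else 0.0
                           for j in range(n)] for i in range(n)])

m = 3
assert np.allclose(D @ V1R(m), np.exp(2j * np.pi * a * m / n) * V1R(m) @ D)

C = D @ V1M(m) @ np.linalg.inv(D) @ np.linalg.inv(V1M(m))
assert not np.allclose(C, C[0, 0] * np.eye(n))
print("projective relation holds for rotations only")
```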
To construct the system of coherent states in $\mathbb{C}^{n}$, besides the system of operators $\widehat{\mathbf{W_1}}(a,g)$
a properly defined `vacuum' vector $|0\rangle$ is needed. Then generalized coherent states of type
$\{\widehat{\mathbf{W_1}}(a,g),|0\rangle\}$ are given by \begin{equation}\label{dks}
|a,g\rangle_1 = \widehat{\mathbf{W_1}}(a,g)|0\rangle, \quad a \in
\mathbf{Z}_{n}, \, g \in \mathbf{D}_{n}, \end{equation}
and $|0\rangle= |0,e\rangle_1$. In analogy with the continuous case, where the coherent states are eigenvectors of the annihilation operator and the vacuum vector belongs to the eigenvalue $0$, one would like to have a similar condition \cite{TCh} \begin{equation}\label{en1}
e^{\frac{2\pi}{n}\widehat{\mathbf{Q}}}
e^{i\widehat{\mathbf{P}}}|0\rangle=|0\rangle. \end{equation} But (\ref{en1}) cannot hold true since $1$ is not an eigenvalue of the operator. So our admissible vacuum vectors are required to satisfy (\ref{en1}) up to a non--zero multiplier \cite{TCh}, \begin{equation}\label{en2}
e^{\frac{2\pi}{n}\widehat{\mathbf{Q}}}
e^{i\widehat{\mathbf{P}}}|0\rangle= \lambda|0\rangle. \end{equation} For $n$ spectral values \begin{equation}\label{}
\sigma(e^{\frac{2\pi}{n}\widehat{\mathbf{Q}}}e^{i\widehat{\mathbf{P}}})
= \{\lambda_{k}=e^{\frac{\pi (n-1)}{n}}e^{\frac{2\pi ik}{n}}\;
\vert \; k=0,1,..,n-1\} \end{equation} we obtain a system of $n$ admissible (normalized) vacuum vectors
$|0\rangle^{(k)}$ labeled by $k=0,1,..,n-1$, \begin{equation}\label{}
|0\rangle^{(k)}=\mathcal{A}_{n} \left(
\begin{array}{c}
1 \\
e^{\frac{\pi(3-n)}{n}}e^{\frac{-2\pi ik}{n}}\\
\cdot \\
\cdot \\
e^{\frac{\pi(n-1)}{n}}e^\frac{-2\pi ik(n-1)}{n}
\end{array}\right); \end{equation} here the $j$--th component \begin{equation}\label{g}
(|0\rangle^{(k)})_j=g_{j}^{(k)}= \mathcal{A}_{n}
e^{\frac{\pi j(j-n+2)}{n}}e^{-j\frac{2\pi ik}{n}}, \end{equation} where $j=0,1,\ldots,n-1$ and $\mathcal{A}_{n}$ is the normalization constant \begin{equation}\label{A}
\mathcal{A}_{n}=\frac{1}{\sqrt{\sum_{j=0}^{n-1}e^{\frac{2\pi}{n}j(j-n+2)}}}. \end{equation}
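The admissible vacuum vectors can be verified directly. In the sketch below (our illustration; we realize $e^{i\widehat{\mathbf{P}}}$ as the cyclic shift $\mathbf{V_{1}}(\mathbf{R}_{1})$, a sign-convention choice of this snippet) the vectors $|0\rangle^{(k)}$ are checked to be normalized eigenvectors of $e^{\frac{2\pi}{n}\widehat{\mathbf{Q}}}e^{i\widehat{\mathbf{P}}}$ with the stated eigenvalues.

```python
import numpy as np

# Vacuum vectors with components A_n * exp(pi*j*(j-n+2)/n) * exp(-2*pi*i*k*j/n)
# and the eigenvalue condition with lambda_k = exp(pi*(n-1)/n) * exp(2*pi*i*k/n).

n = 5
jj = np.arange(n)
S = np.array([[1.0 if i == (j + 1) % n else 0.0
               for j in range(n)] for i in range(n)])   # cyclic shift V_1(R_1)
A = 1.0 / np.sqrt(np.sum(np.exp(2 * np.pi * jj * (jj - n + 2) / n)))

def vacuum(k):
    return A * np.exp(np.pi * jj * (jj - n + 2) / n) \
             * np.exp(-2j * np.pi * k * jj / n)

Op = np.diag(np.exp(2 * np.pi * jj / n)) @ S   # exp((2*pi/n) Q) exp(i P)
for k in range(n):
    v = vacuum(k)
    lam = np.exp(np.pi * (n - 1) / n) * np.exp(2j * np.pi * k / n)
    assert np.isclose(np.linalg.norm(v), 1.0)
    assert np.allclose(Op @ v, lam * v)
print("vacuum vectors verified")
```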
Now we are able to construct $n$ families of coherent states in the first quantum kinematics which are labeled by parameter $k$. Applying (\ref{dks}) for $\mathbf{R}_{m}$, we obtain \begin{eqnarray}\label{vacr}
(|a,\mathbf{R}_{m}\rangle^{(k)}_{1})_{j} & = &
(\widehat{\mathbf{W_{1}}}(a,\mathbf{R}_{m})|0\rangle^{(k)})_{j} =\\
\nonumber = (e^{\frac{2\pi ia}{n}\widehat{\mathbf{Q}}}\mathbf{V_{1}}
(\mathbf{R}_{m})|0\rangle^{(k)})_{j} & = & e^{\frac{2\pi iaj}{n}}
g^{(k)}_{j-m \pmod{n}}; \end{eqnarray} for $\mathbf{M}_{m}$ we obtain \begin{eqnarray}\label{vacm}
(|a,\mathbf{M}_{m}\rangle^{(k)}_{1})_{j}& = &
(\widehat{\mathbf{W_{1}}}(a,\mathbf{M}_{m})|0\rangle^{(k)})_{j} =\\
\nonumber = (e^{\frac{2\pi ia}{n}\widehat{\mathbf{Q}}}\mathbf{V_{1}}
(\mathbf{M}_{m})|0\rangle^{(k)})_{j} & = & e^{\frac{2\pi iaj}{n}}g^{(k)}_{m-j \pmod{n}}. \end{eqnarray}
Coherent states for the second quantum kinematics with representation $\mathbf{V_{2}}$ are equivalent to those of the first one because they differ on $\mathbf{M}_{m}$ only by an unessential phase factor $-1$: \begin{equation}
|a,\mathbf{R}_{m}\rangle^{(k)}_{2}=|a,\mathbf{R}_{m}\rangle^{(k)}_{1}, \qquad
|a,\mathbf{M}_{m}\rangle^{(k)}_{2}=-|a,\mathbf{M}_{m}\rangle^{(k)}_{1}. \end{equation}
\section{Properties of coherent states}
One of the most important properties of coherent states is their overcompleteness expressed by a resolution of unity \begin{equation}\label{}
\sum_{(a,g)\in \mathbf{Z_{n}}\times \mathbf{D_{n}}}
|a,g\rangle^{(k)} \langle a,g|^{(k)}=c_{k}\widehat{\mathbb{I}}, \end{equation} where $c_{k}$ is some non--zero complex number. Let us check this property for our coherent states. From (\ref{vacr}) and (\ref{vacm}) we get \begin{equation*}
\sum_{(a,g)\in \mathbf{Z_{n}}\times \mathbf{D_{n}}}
|a,g\rangle^{(k)}_{1,2} \langle a,g|^{(k)}_{1,2}
=\sum_{a\in \mathbf{Z_{n}},m = 0,..,n-1}
|a,\mathbf{R}_{m}\rangle^{(k)}_{1} \langle a,\mathbf{R}_{m}|^{(k)}_{1}
\end{equation*}
\begin{equation}\label{rou}
+ \sum_{a \in \mathbf{Z_{n}},m=0,..,n-1}
|a,\mathbf{M}_{m}\rangle^{(k)}_{1} \langle a,\mathbf{M}_{m}|^{(k)}_{1}. \end{equation} Matrix element of the first sum on the right--hand side of (\ref{rou}) is, due to (\ref{g}) and (\ref{A}), \begin{equation*}
(\sum_{a,m}|a,\mathbf{R}_{m}\rangle^{(k)}_{1} \langle
a,\mathbf{R}_{m}|^{(k)}_{1})_{jl}
=\sum_{a,m}(|a,\mathbf{R}_{m}\rangle^{(k)}_{1})_{j} (\langle
a,\mathbf{R}_{m}|^{(k)}_{1})_{l} = \end{equation*}
\begin{equation}\label{rour}
=\sum_{a,m}e^{\frac{2\pi ia}{n}(j-l)}
g^{(k)}_{j-m \pmod{n}}\overline{g^{(k)}_{l-m \pmod{n}}}
= n\delta_{j,l}\langle 0|0\rangle^{(k)}=n\delta_{j,l}. \end{equation} Exactly the same result is obtained for the second sum on the right--hand side of (\ref{rou}): \begin{eqnarray}\label{roum}
\nonumber (\sum_{a,m}|a,\mathbf{M}_{m}\rangle^{(k)}_{1} \langle
a,\mathbf{M}_{m}|^{(k)}_{1})_{jl} =\sum_{a,m}e^{\frac{2\pi ia}{n}(j-l)}
g^{(k)}_{m-j \pmod{n}}\overline{g^{(k)}_{m-l \pmod{n}}} = \\
= n\delta_{j,l}\sum_{m}g^{(k)}_{m-j \pmod{n}}
\overline{g^{(k)}_{m-l \pmod{n}}} =n\delta_{j,l}. \end{eqnarray} So we have proved that the resolution of unity is fulfilled: \begin{equation}\label{rouf}
\sum_{(a,g)\in \mathbf{Z_{n}}\times \mathbf{D_{n}}}|a,g\rangle^{(k)}_{1,2}
\langle a,g|^{(k)}_{1,2}=2n\widehat{\mathbb{I}} \end{equation} and this result holds for both representations $\mathbf{V_{1}}$ and $\mathbf{V_{2}}$.
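The resolution of unity (\ref{rouf}) can also be checked numerically. The following Python sketch (an illustration, not part of the paper) uses an arbitrary normalized vector $g$ in place of the admissible vacuum vectors $g^{(k)}$ of (\ref{g}), since the derivation only uses $\sum_{a}e^{\frac{2\pi ia}{n}(j-l)}=n\delta_{j,l}$ and the normalization of the vacuum; the component form of the states is an assumption consistent with the matrix elements in (\ref{rour}) and (\ref{roum}):

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)

# any normalized vector works here, standing in for the admissible vacuum g^(k)
g = rng.normal(size=n) + 1j * rng.normal(size=n)
g /= np.linalg.norm(g)
j = np.arange(n)

def state_R(a, m):
    # (|a, R_m>)_j = exp(2*pi*i*a*j/n) * g_{(j-m) mod n}
    return np.exp(2j * np.pi * a * j / n) * g[(j - m) % n]

def state_M(a, m):
    # (|a, M_m>)_j = exp(2*pi*i*a*j/n) * g_{(m-j) mod n}
    return np.exp(2j * np.pi * a * j / n) * g[(m - j) % n]

S = np.zeros((n, n), dtype=complex)
for a in range(n):
    for m in range(n):
        for psi in (state_R(a, m), state_M(a, m)):
            S += np.outer(psi, psi.conj())

# resolution of unity: the sum over all 2n^2 coherent states gives 2n * identity
assert np.allclose(S, 2 * n * np.eye(n))
```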
For the inner product (overlap) of two coherent states we have the formulae \begin{eqnarray}\label{sumsum}
\langle
a,\mathbf{R}_{p}|b,\mathbf{R}_{q}\rangle^{(k)}_{1,2}
&=&\sum_{j=1}^{n}e^{\frac{2\pi ij}{n}(b-a)}
\overline{g^{(k)}_{j-p \pmod{n}}}g^{(k)}_{j-q \pmod{n}},\\
\nonumber
\langle a,\mathbf{M}_{p}|b,\mathbf{M}_{q}\rangle^{(k)}_{1,2}
&=&\sum_{j=1}^{n}e^{\frac{2\pi ij}{n}(b-a)}
\overline{g^{(k)}_{p-j \pmod{n}}}g^{(k)}_{q-j \pmod{n}},\\
\nonumber \langle
a,\mathbf{R}_{p}|b,\mathbf{M}_{q}\rangle^{(k)}_{1,2}
&=&\sum_{j=1}^{n}e^{\frac{2\pi ij}{n}(b-a)}
\overline{g^{(k)}_{j-p \pmod{n}}}g^{(k)}_{q-j \pmod{n}}. \end{eqnarray}
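The closed-form sums (\ref{sumsum}) can be compared directly with the inner products of the states. In the sketch below (not from the paper) the component form of the states is an assumption consistent with the matrix elements in (\ref{rour}) and (\ref{roum}), and $g$ stands in for a normalized vacuum vector:

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
g = rng.normal(size=n) + 1j * rng.normal(size=n)
g /= np.linalg.norm(g)
j = np.arange(n)

def R(a, p):
    return np.exp(2j * np.pi * a * j / n) * g[(j - p) % n]

def M(a, p):
    return np.exp(2j * np.pi * a * j / n) * g[(p - j) % n]

a, b, p, q = 2, 5, 1, 4
phase = np.exp(2j * np.pi * j * (b - a) / n)

# closed-form overlap sums versus direct inner products <x|y> (np.vdot conjugates x)
assert np.isclose(np.vdot(R(a, p), R(b, q)),
                  np.sum(phase * np.conj(g[(j - p) % n]) * g[(j - q) % n]))
assert np.isclose(np.vdot(M(a, p), M(b, q)),
                  np.sum(phase * np.conj(g[(p - j) % n]) * g[(q - j) % n]))
assert np.isclose(np.vdot(R(a, p), M(b, q)),
                  np.sum(phase * np.conj(g[(j - p) % n]) * g[(q - j) % n]))
```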
Note that the inner products yield the reproducing kernel $ \langle x \vert x' \rangle = K(x,x')$ \cite{AAG}.
If the system is prepared in the coherent state
$|a,g\rangle^{(k)}_{1,2}$, then the probability of measuring the eigenvalue $j$ of the position operator is given by $\vert \langle j
|a,g\rangle^{(k)}_{1,2}\vert^2$. It is independent of $k$ and is the same in both quantum kinematics, namely,
\begin{eqnarray}
\vert\langle j|a,\mathbf{R}_{m}\rangle^{(k)}_{1,2}\vert^2 &=&
\mathcal{A}_{n}^{2} e^{\frac{2\pi}{n}(j-m)(j-m-n+2)}, \cr
\vert\langle j|a,\mathbf{M}_{m}\rangle^{(k)}_{1,2}\vert^2 &=&
\mathcal{A}_{n}^{2}e^{\frac{2\pi}{n}(m-j)(m-j-n+2)}.
\end{eqnarray}
\section{Concluding remarks} In this paper we have constructed systems of imprimitivity on the finite configuration space $\mathbf{Z}_{n}$ considered as a homogeneous space of the dihedral group $\mathbf{D}_{n}$. We have shown that there exist two inequivalent irreducible systems of imprimitivity $(\mathbf{V_{1}},\mathbf{E})$ and $(\mathbf{V_{2}},\mathbf{E})$. The unitary representations $\mathbf{V_{1}}$ and $\mathbf{V_{2}}$ have the clear physical significance of symmetry transformations.
Using these systems of imprimitivity, we have constructed the corresponding families of group-related coherent states in the sense of Perelomov. They are connected with the group $\mathbf{Z}_{n}\times \mathbf{D}_{n}$ acting on the discrete phase space $\mathbf{Z}_{n}\times \mathbf{Z}_{n}$. Unfortunately, due to (\ref{prus}) we have lost the group property of the set of operators $\widehat{\mathbf{W}}(a,g)$, i.e., these operators do not form a projective unitary representation of the group $\mathbf{Z}_{n} \times \mathbf{D}_{n}$. In spite of this fact, $n$ families of coherent states were obtained for the first system of imprimitivity, generated from the $n$ admissible vacuum vectors (\ref{g}). It turned out that the coherent states for the second system of imprimitivity differ from those of the first only by an unessential phase factor, i.e., they are physically equivalent. For all $n$ families of coherent states the overcompleteness property was demonstrated. We have also evaluated the overlaps of pairs of coherent states in the form of finite sums (\ref{sumsum}). The only physical difference between the two quantum kinematics can be observed in the difference between the unitary representations $\mathbf{V_{1}}$ and $\mathbf{V_{2}}$ on mirror symmetries, which have the meaning of parity operators.
Let us note that in quantum optics, discrete phase space $\mathbf{Z}_{n}\times \mathbf{Z}_{n}$ is employed in connection with the quantum description of phase conjugated to number operator \cite{PeggBarnett}. Our approach can also provide a suitable starting point for the approximate solution of the continuous Schr\"odinger equation. In this connection we found instructive the paper \cite{Digernes} on finite approximation of continuous Weyl systems inspired by an approximation scheme due to J. Schwinger \cite{Schwinger*}.
Another interesting application is offered by quantum chemistry, viz. H\"uckel's treatment of delocalized $\pi$-electrons and its generalizations in various kinds of molecules, where molecular orbitals are expressed as linear combinations of atomic orbitals \cite{Rouvray,Ruedenberg}. In this respect our approach seems especially suitable for the treatment of ring molecules with $n$ equivalent carbon atoms called annulenes. In our notation, the set of atomic orbitals would correspond to the standard basis in $\mathcal{H} = \mathbb{C}^n$ and the unitary representations $\mathbf{V_{1}}$ and $\mathbf{V_{2}}$ realize the geometric symmetry transformations.
\section*{Appendix} For the computation of matrix functions, the Lagrange--Sylvester theorem is useful:
\noindent {\bf Theorem} \cite{LSF}. {\it Let $\mathbb{A}$ be an $n\times n$ matrix with spectrum $\sigma(\mathbb{A})=\{ \lambda_{1},\lambda_{2},...,\lambda_{s}\}$, $s\leq n$. Let $q_{j}$ be the multiplicity of eigenvalue $\lambda_{j}$, $j=1,2,...,s$. Let $\Omega \subset \mathbb{C}$ be an open subset of the complex plane such that $\sigma(\mathbb{A})\subset \Omega$. Then the formula \begin{equation}\label{LSF}
f(\mathbb{A}) =
\sum_{j=1}^{s}\sum_{k=0}^{q_{j}-1}\frac{f^{(k)}(\lambda_{j})}{k!}
(\mathbb{A}-\lambda_{j}\mathbb{I})^{k}\mathbb{P}_{j} \end{equation} holds for every function $f$ holomorphic on $\Omega$. Here $\mathbb{P}_{j}$ is the orthogonal projector onto the subspace of $\mathbb{C}^{n}$ which is spanned by the set of all eigenvectors with eigenvalue $\lambda_{j}$: \begin{equation}\label{proj}
\mathbb{P}_{j}:=\prod_{l=1,l\neq
j}^{s}\frac{\lambda_{l}\mathbb{I}-\mathbb{A}}{\lambda_{l}-\lambda_{j}}. \end{equation} }
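As a concrete illustration of the theorem (not from the paper), the following Python sketch evaluates $f(\mathbb{A})$ via the spectral projectors (\ref{proj}) for a diagonalizable involution, for which only the $k=0$ terms of (\ref{LSF}) survive:

```python
import numpy as np

# reflection-type matrix: an involution with eigenvalues +1 and -1 (diagonalizable,
# so only the k = 0 terms of the Lagrange--Sylvester sum contribute)
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])
eigvals = np.array([1.0, -1.0])   # sigma(A)

def projector(lam):
    # P_j = prod_{l != j} (lambda_l I - A) / (lambda_l - lambda_j)
    P = np.eye(3)
    for mu in eigvals:
        if mu != lam:
            P = P @ (mu * np.eye(3) - A) / (mu - lam)
    return P

def f_of_A(f):
    # Lagrange--Sylvester formula, diagonalizable case
    return sum(f(lam) * projector(lam) for lam in eigvals)

# check f = exp against the eigendecomposition of the symmetric matrix A
w, U = np.linalg.eigh(A)
expA_direct = U @ np.diag(np.exp(w)) @ U.T
assert np.allclose(f_of_A(np.exp), expA_direct)
assert np.allclose(projector(1.0) + projector(-1.0), np.eye(3))  # P_+ + P_- = I
```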
The formula (\ref{LSF}) can be applied to equation (\ref{ln}) to evaluate operators $\widehat{\mathbf{P_1}}_{g}$ for mirror symmetries. Since the multiplicities of spectral values $\pm 1$ have already been determined, we have only to find the spectral projectors $\mathbb{P}_{k}$ for each representation element $\mathbf{V_{1}}(\mathbf{M}_{k})$. From equation (\ref{ln}) \begin{equation}\label{}
\widehat{\mathbf{P_1}}_{\mathbf{M}_{k}}= i\cdot
\ln(\mathbf{V_{1}}(\mathbf{M}_{k})), \end{equation} we get, using the Lagrange--Sylvester formula (\ref{LSF}) with spectrum (\ref{spm}), the spectral decomposition \begin{eqnarray}\label{ppp}\nonumber
\widehat{\mathbf{P_1}}_{\mathbf{M}_{k}}=i\cdot
\sum_{j=0}^{q_{(+)}-1}\frac{\ln^{(j)}(+1)}{j!}(\mathbf{V_{1}}
(\mathbf{M}_{k})-\mathbb{I})^{j}\widehat{\mathbb{P}}_{+1} \\
+i\cdot \sum_{j=0}^{q_{(-)}-1}\frac{\ln^{(j)}(-1)}{j!}(\mathbf{V_{1}}
(\mathbf{M}_{k})+\mathbb{I})^{j}\widehat{\mathbb{P}}_{-1}, \end{eqnarray} where $q_{(\pm)}$ are multiplicities of eigenvalues $\pm 1$. Strictly said the assumption of the Lagrange--Sylvester formula (\ref{LSF}) is not satisfied since the complex logarithm is not holomorphic on the non--positive part of the real axis and $-1$ belongs to the spectrum of $\mathbf{V_{1}}(\mathbf{M}_{k})$. We will express $\widehat{\mathbf{P}}_{\mathbf{M}_{k}}$ in a formal way and verify (\ref{exp}) using (\ref{LSF}), where function $\exp$ is holomorphic.
Using formula \eref{proj} for the projectors onto the $q_{(\pm)}$-dimensional subspaces of $\mathbb{C}^n$, \begin{equation}\label{}
\widehat{\mathbb{P}}_{+1} =
\frac{(\mathbf{V_{1}}(\mathbf{M}_{k})+\mathbb{I})}{2}, \qquad \widehat{\mathbb{P}}_{-1} =
-\frac{(\mathbf{V_{1}}(\mathbf{M}_{k})-\mathbb{I})}{2}, \end{equation} and the property \begin{equation}\label{}
(\mathbf{V_{1}}(\mathbf{M}_{k})-\mathbb{I})
(\mathbf{V_{1}}(\mathbf{M}_{k})+\mathbb{I})=
(\mathbf{V_{1}}(\mathbf{M}_{k}))^{2}-\mathbb{I}=\widehat{0}, \end{equation} all elements in the sum \eref{ppp} vanish except $j=0$: \begin{equation}\label{gz}
\widehat{\mathbf{P_1}}_{\mathbf{M}_{k}} = i\cdot
(\frac{\ln(+1)}{2}(\mathbf{V_{1}}(\mathbf{M}_{k})+
\mathbb{I})-\frac{\ln(-1)}{2}(\mathbf{V_{1}}(\mathbf{M}_{k})-\mathbb{I})). \end{equation} Taking the principal value $i\pi$ for $\ln (-1)$, we obtain \begin{equation} \widehat{\mathbf{P_1}}_{\mathbf{M}_{k}} = \frac{\pi}{2}(\mathbf{V_{1}}(\mathbf{M}_{k})-\mathbb{I}); \end{equation} a similar calculation leads to \begin{equation}\label{}
\widehat{\mathbf{P_2}}_{\mathbf{M}_{k}}=
\frac{\pi}{2}(\mathbf{V_{2}}(\mathbf{M}_{k})-\mathbb{I}). \end{equation} Note that the momentum operators are not uniquely determined; this is caused by the exponential mapping not being one-to-one.
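These final formulae can be verified numerically, assuming the relation $\mathbf{V_{1}}(\mathbf{M}_{k})=\exp(-i\widehat{\mathbf{P_1}}_{\mathbf{M}_{k}})$ implied by (\ref{ln}), and taking for illustration a mirror symmetry acting on the standard basis as $j \mapsto (k-j) \bmod n$ (an assumption about the concrete form of the representation):

```python
import numpy as np

n, k = 5, 2
# assumed concrete form of the mirror symmetry: permutation j -> (k - j) mod n
V = np.zeros((n, n))
for jj in range(n):
    V[(k - jj) % n, jj] = 1.0
assert np.allclose(V @ V, np.eye(n))     # involution: spectrum contained in {+1, -1}

P = (np.pi / 2) * (V - np.eye(n))        # formal momentum operator

# verify V = exp(-iP) via the eigendecomposition of the symmetric matrix P
w, U = np.linalg.eigh(P)
expmiP = U @ np.diag(np.exp(-1j * w)) @ U.T
assert np.allclose(expmiP, V)
```

Any other branch of the logarithm shifts the eigenvalues of $P$ by multiples of $2\pi$ and leaves $\exp(-iP)$ unchanged, which is the non-uniqueness noted above.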
\section*{References}
\end{document}
\begin{document}
\title{ Nonlinear interaction effects in a three-mode cavity optomechanical system }
\author{Jing Qiu}
\affiliation{Beijing Computational Science Research Center, Beijing 100193, China}
\author{Li-Jing Jin}
\affiliation{Institute for Quantum Computing, Baidu Research, Beijing 100193, China} \affiliation{Beijing Computational Science Research Center, Beijing 100193, China}
\author{Stefano Chesi} \email{stefano.chesi@csrc.ac.cn} \affiliation{Beijing Computational Science Research Center, Beijing 100193, China} \affiliation{Department of Physics, Beijing Normal University, Beijing 100875, China}
\author{Ying-Dan Wang} \email{yingdan.wang@itp.ac.cn} \affiliation{CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, P.O. Box 2735, Beijing 100190, China} \affiliation{School of Physical Sciences, University of Chinese Academy of Sciences, No.19A Yuquan Road, Beijing 100049, China} \affiliation{Synergetic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha 410081, China}
\begin{abstract}
We investigate the resonant enhancement of nonlinear interactions in a three-mode cavity optomechanical system with two mechanical oscillators. Using the Keldysh Green's function technique, we find that nonlinear effects on the cavity density of states can be greatly enhanced by the resonant scattering of two phononic polaritons, due to their small effective dissipation. In the large detuning limit, and taking into account an upper bound on the achievable dressed coupling, the optimal point for probing the nonlinear effect is obtained, showing that such a three-mode system can exhibit prominent nonlinear features also for relatively small values of $g/\kappa$.
\end{abstract}
\date{\today}
\maketitle
\section{Introduction}
Optomechanical systems \cite{Aspelmeyer2014RMP} have witnessed remarkable progress in controlling the quantum state of the coupled photonic and mechanical modes. Some highlights are the demonstration of mechanical ground-state cooling \cite{teufel2011sideband,chan2011laser,clark2017sideband}, generation of strongly squeezed light \cite{safavi2013squeezed,purdy2013strong} and mechanical states \cite{wollman2015quantum,PhysRevLett.117.100801}, coherent transduction \cite{andrews2014bidirectional}, and entanglement of remote mechanical oscillators \cite{riedinger2018remote,ockeloen2018stabilized}. All these applications are based on a linearized interaction under strong optical drive, when the optomechanical coupling is greatly enhanced by the large number of intracavity photons. Continuous technical progress has allowed the dressed coupling $G$ to enter and even surpass the strong-coupling regime \cite{groblacher2009observation,Teufel2011Nature,Verhagen2012Nature,Peterson2019PRL}.
On the other hand, nonlinear interactions are necessary for the generation of non-classical states and a variety of interesting effects were predicted \cite{rabl2011photon, Komar2013PRA, Ludwig2012PRL, Liu2013PRL, Xu2015PRA, Borkje2013PRL, Lemonde2013PRL, Lemonde2015PRA, Lemonde2016NC, Jin2018PRA}. For these nonlinear signatures the relevant energy scale is the single-photon optomechanical coupling $g$ which, unfortunately, remains much smaller than both the mechanical frequency $\omega_m$ and cavity damping $\kappa$ in virtually all setups with solid-state oscillators.
Some proposals for effectively enhancing the single-photon coupling strength consider modifying the type of drive, e.g., by introducing a squeezed optical input or a mechanical parametric drive \cite{Lemonde2016NC,lu2015squeezed,yin2017nonlinear}. Recently, it was also shown that multi-mode setups can allow for a large enhancement of nonlinear effects \cite{Jin2018PRA}. An attractive feature of the latter scheme is that it relies on an optomechanical chain which is very close to existing experimental setups. In particular, it is essentially equivalent to four-mode optomechanical systems developed for efficient nonreciprocity \cite{peterson2017demonstration, bernier2017nonreciprocal}.
The aim of the present work is to explore if the optomechanical chain of Ref.~\cite{Jin2018PRA} can be further simplified, while preserving a large enhancement factor of the nonlinear signatures with respect to the two-mode system. In an optomechanical cavity, the largest nonlinear effects on the optical density of states (DOS) are due to a resonant scattering process between polaritons (i.e., the coupled eigenmodes of the linearized system)~\cite{Borkje2013PRL, Lemonde2013PRL, Lemonde2015PRA}. The main advantage of multi-mode setups is that two of these polaritons (instead of one) can be mechanical modes weakly hybridized with the optical cavities. Although the nonlinear coupling between phonon-like polaritons is much smaller than $g$, the reduction of the scattering amplitude is compensated by the exceptional coherence properties of the polaritons, whose lifetime is only limited by the mechanical damping $\gamma \ll \kappa$ \cite{Jin2018PRA}.
The above discussion makes intuitively clear that a single cavity interacting with two mechanical oscillators (see Fig.~\ref{fig:Three_mode_cavity}) is the minimal setup where this physics can take place. Indeed, we find that in a three-mode setup the typical figure of merit $(g/\kappa)^2$ of nonlinear effects can be enhanced by a large factor which, quite naturally, depends on the ratio $\kappa/\gamma$. The enhancement is large also far from the optomechanical instability, and can be optimized with respect to the ratio of the two mechanical frequencies. Doing so, we find that the largest enhancement is proportional to $(G/\omega_{m})^2$, thus is particularly interesting in view of the recent success in achieving the ultra-strong coupling regime \cite{Peterson2019PRL}.
The outline of our paper is as follows: In Sec.~\ref{Sec: the system} we introduce the model and in Sec.~\ref{Sec: Polariton eigenmodes} we diagonalize the linear part of the Hamiltonian. The approach to include nonlinear effects is described in Sec.~\ref{sec:method} and applied numerically in Sec.~\ref{Sec: Resonant enhancement}. Physical understanding of the results, together with an approximate analytical treatment in the most relevant regime of large detuning, is provided in Sec.~\ref{Sec:Result_2}. Finally, we conclude in Sec.~\ref{Sec: summary} and give some technical details in Appendices~\ref{appendix_equal_wm} and \ref{Appendix_V}.
\begin{figure}
\caption{Schematic illustration of the three-mode optomechanical system. $\omega_{c}$ is the cavity frequency and $\kappa$ is the cavity damping rate. $\omega_{mi}$ are the mechanical frequencies and $\gamma_{i}$ are the mechanical damping rates. $\omega_{l}$ is the frequency of the laser drive, represented by the red arrow. }
\label{fig:Three_mode_cavity}
\end{figure}
\section{Model}\label{Sec: the system}
As shown in Fig.~\ref{fig:Three_mode_cavity}, we consider a driven optomechanical cavity with two mechanical oscillators. The system is described by the Hamiltonian $H=H_{0}+H_{\mathrm{diss}}$, with: \begin{align} \label{H0} H_{0} = & \omega_{c}a^{\dagger}a+ \left(\alpha e^{-i\omega_{l}t}a^{\dagger}+\mathrm{H.c.} \right) \nonumber \\
& +\sum_{i=1,2}\left( \omega_{mi}b_{i}^{\dagger}b_{i} +g_{i}a^{\dagger}a(b_{i}+b_{i}^{\dagger})\right). \end{align} Here, $a$ is the annihilation operator for cavity mode, $\omega_{c}$ is the cavity frequency, $b_{i}$ $(i=1,2)$ are the annihilation operators of the mechanical modes, $\omega_{mi}$ are the mechanical frequencies (we take $\omega_{m2} \geq \omega_{m1}$), $g_{i}$ is the single-photon optomechanical coupling, and $\alpha$ is proportional to the amplitude of a classical drive at frequency $\omega_{l}$. $H_{\mathrm{diss}}$ describes the dissipation of photons and phonons by independent baths: \begin{align} H_{\mathrm{diss}}=& \sum_{i=1,2}\sum_{j}\omega_{mi,j}f_{mi,j}^{\dagger}f_{mi,j} + \sum_{j}\omega_{c,j}f_{c,j}^{\dagger}f_{c,j} \nonumber \\ &-i\sum_{i=1,2}\sum_{j}\sqrt{\frac{\gamma_{i}}{2\pi\rho_{mi}}}(f_{mi,j}-f_{mi,j}^{\dagger})(b_{i}+b_{i}^{\dagger}) \nonumber \\ &-i\sum_{j}\sqrt{\frac{\kappa}{2\pi\rho_{c}}}(f_{c,j}-f_{c,j}^{\dagger})(a+a^{\dagger}), \label{eq:dissipation term} \end{align} where $f_{c,j}$ is the annihilation operator for cavity-bath mode $j$ (frequency $\omega_{c,j}$), $\kappa$ is the damping rate of photons inside the cavity, and $\rho_{c}$ is the cavity-bath density of states. Furthermore, $f_{mi,j}$ is the annihilation operator for bath mode $j$ of mechanical resonator $i=1,2$ (frequency $\omega_{mi,j}$), $\gamma_{i}$ are the two mechanical damping rates, and $\rho_{mi}$ are the mechanical density of states. Because we consider a Markovian bath, we take $\kappa$, $\rho_{c}$, $\gamma_{i}$, and $\rho_{mi}$ to be frequency independent.
After transforming the cavity mode to a frame rotating at the laser frequency $\omega_{l}$, and performing a standard displacement transformation $a=\overline{a}+d$ (where $\overline{a}$ is the classical cavity amplitude induced by the laser drive), the Hamiltonian of the system takes the form $H_{l}+H_{nl}$ where \begin{equation} H_{l} = -\Delta d^{\dagger}d+\sum_{i=1,2}\left(\omega_{mi}b_{i}^{\dagger}b_{i}+G_{i}(d+d^{\dagger})(b_{i}+b_{i}^{\dagger})\right). \label{eq:linear term} \end{equation} Here, $\Delta=\omega_{l}-\omega_{c}$ is the detuning and $G_{1,2}=g_{1,2}\overline{a}$ are the dressed couplings which, for definiteness, we take as real. The average number of photons in the cavity is $N=\bar{a}^2$. While $H_{l}$ describes the linear interactions between photon modes and mechanical modes, $H_{nl}$ includes the intrinsically nonlinear interaction of the system: \begin{equation}\label{eq:nonlinear term} H_{nl} = g_{1}d^{\dagger}d(b_{1}+b_{1}^{\dagger})+g_{2}d^{\dagger}d(b_{2}+b_{2}^{\dagger}). \end{equation} Throughout this work we focus on a red-detuned laser (i.e., $\Delta<0$), which allows one to avoid optomechanical instabilities in a large range of parameters. Finally, $H_{\mathrm{diss}}$ is transformed in a similar way. In a frame rotating at $\omega_l$ for the phonon bath modes and after a rotating-wave approximation, the final form is similar to Eq.~(\ref{eq:dissipation term}) except for the replacements $\omega_{c,j} \to \Delta_{c,j} = \omega_{c,j} -\omega_l $ and $(f_{c,j}-f_{c,j}^{\dagger})(a+a^{\dagger}) \to (f_{c,j}d^\dag-f_{c,j}^{\dagger}d)$.
\section{Polariton eigenmodes}\label{Sec: Polariton eigenmodes}
As a first step, we consider the diagonalization of the linear problem via a Bogoliubov transformation (where $T$ indicates the transpose): \begin{equation} \left(\begin{array}{ccc} b_{1} & b_{2} & d \end{array}\right) ^T = V\left(\begin{array}{cccccc} c_{1} & c_{2} & c_{3} & c_{1}^{\dagger} & c_{2}^{\dagger} & c_{3}^{\dagger}\end{array}\right)^{T}, \label{eq:trans_relat} \end{equation} leading to $H_{l}=\sum_{i=1,2,3}\omega_{i}c_{i}^{\dagger}c_{i}$. We impose \begin{equation} \omega_{3}\geq\omega_{2}\geq\omega_{1} \geq 0,\label{eq:frequencies_relation} \end{equation}
where the requirement of positive frequencies is to ensure the stability of the linear problem (this condition neglects the effect of the small damping rates $\kappa,\gamma_i$). The $c_i$ are polariton modes, given by linear combinations of the cavity and mechanical modes. The matrix $V$ can be most easily found in the coordinate representation, in which the quadratures $x_i, p_i$ ($i=1,2,3$) are defined through $b_i = (\omega_{mi}x_i+ip_i)/\sqrt{2\omega_{mi}}$ for $i=1,2$ and $d = (|\Delta|x_3+ip_3)/\sqrt{2|\Delta|}$. With this notation, the linear Hamiltonian $H_l$ takes the form: \begin{equation} H_l = \sum_{i}\frac{p_i^2}{2} + \frac12 \sum_{i,j} x_i M_{i,j} x_j , \end{equation} where \begin{align} \label{M_matrix} M & =\left(\begin{array}{ccc}
\omega_{m1}^{2} & 0 & 2G_{1}\sqrt{\left|\Delta\right|\omega_{m1}} \\
0 & \omega_{m2}^{2} & 2G_{2}\sqrt{\left|\Delta\right|\omega_{m2}} \\
2G_{1}\sqrt{\left|\Delta\right|\omega_{m1}} & 2G_{2}\sqrt{\left|\Delta\right|\omega_{m2}}& \Delta^{2} \end{array}\right). \end{align} $M$ is diagonalized by a (orthogonal) matrix $U$. Explicitly, $(U^T MU)_{i,j} = \omega_i^2 \delta_{i,j}$. Then $V$ can be written in block-matrix form: \begin{equation}\label{V_blocks} V = (V_+ \, V_-), \end{equation} where $V_\pm$ are related to $U$ as follows: \begin{equation}\label{Vpm} V_\pm = \left(\begin{array}{ccc} U_{11}f_\pm \left(\frac{\omega_{m1}}{\omega_{1}}\right) & U_{12}f_\pm \left(\frac{\omega_{m1}}{\omega_{2}}\right) & U_{13}f_\pm \left(\frac{\omega_{m1}}{\omega_{3}}\right) \\ U_{21}f_\pm \left(\frac{\omega_{m2}}{\omega_{1}}\right) & U_{22}f_\pm \left(\frac{\omega_{m2}}{\omega_{2}}\right) & U_{23}f_\pm \left(\frac{\omega_{m2}}{\omega_{3}}\right) \\
U_{31}f_\pm \left(\frac{\left|\Delta\right|}{\omega_{1}}\right) & U_{32}f_\pm \left(\frac{\left|\Delta\right|}{\omega_{2}}\right) & U_{33}f_\pm \left(\frac{\left|\Delta\right|}{\omega_{3}}\right) \end{array}\right), \end{equation} with $f_\pm (x)=(\sqrt{x}\pm \sqrt{1/x})/2 $. Clearly, all matrix elements of $V$ are real.
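The construction of $V$ from Eqs.~(\ref{M_matrix})-(\ref{Vpm}) can be sketched numerically (parameter values are assumptions for illustration); the final assertion checks that the resulting Bogoliubov transformation preserves the bosonic commutation relations:

```python
import numpy as np

# assumed illustrative parameters (units of omega_m1)
wm = np.array([1.0, 1.5]); G = np.array([0.1, 0.1]); Delta = -8.0
aD = abs(Delta)

c = 2*G*np.sqrt(aD*wm)             # off-diagonal elements of M
M = np.array([[wm[0]**2, 0, c[0]],
              [0, wm[1]**2, c[1]],
              [c[0], c[1], Delta**2]])

w2, U = np.linalg.eigh(M)          # U^T M U = diag(omega_i^2), ascending order
assert np.all(w2 > 0)              # stability of the linear problem
w = np.sqrt(w2)                    # omega_1 <= omega_2 <= omega_3

# build V = (V_+  V_-) with f_pm(x) = (sqrt(x) +- 1/sqrt(x))/2
Omega = np.array([wm[0], wm[1], aD])
X = np.outer(Omega, 1/w)           # X_ij = Omega_i / omega_j
Vp = U*(np.sqrt(X) + 1/np.sqrt(X))/2
Vm = U*(np.sqrt(X) - 1/np.sqrt(X))/2

# the Bogoliubov transformation must preserve the bosonic commutators
assert np.allclose(Vp @ Vp.T - Vm @ Vm.T, np.eye(3))
```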
Unfortunately, analytic expressions of $U$ and $V$ are not available in general. This is at variance with the optomechanical ring treated in Ref.~\cite{Jin2018PRA}, where translational invariance allows one to transform the multi-mode problem into independent 2-mode systems. Here, only the special case $\omega_{m1}=\omega_{m2}$ can be easily treated as a 2-mode system, by introducing a `dark' and `bright' mechanical mode as discussed in Appendix~\ref{appendix_equal_wm}. Although $M$ is easily diagonalized, the dark mode is completely decoupled from the cavity (even after including the nonlinear interaction) and scattering between phonon-like modes is not allowed.
Since the nonlinear effects at $\omega_{m1}=\omega_{m2}$ are of the same order of a simple optomechanical cavity, we should consider the general case $\omega_{m1}\neq \omega_{m2}$. We will be able to obtain analytical expressions in the relevant regime of large detuning, based on a perturbative treatment. For this approach we require $|\Delta| \gg \omega_{mi},G_i$, to have the off-diagonal elements of $M$ smaller than the gap $\Delta^2$, see Eq.~(\ref{M_matrix}). Within this approach we obtain the eigenfrequencies as follows (see Appendix~\ref{Appendix_V}): \begin{align}\label{w12} \omega^2_{1,2} \simeq & \frac12 \bigg( \omega_{m1}^{2}+ \omega_{m2}^{2}- B^2_{11}- B^2_{22} \nonumber \\ & \mp \sqrt{\left(\omega_{m1}^{2}- \omega_{m2}^{2}-B^2_{11}+ B^2_{22}\right)^2 + 4B_{12}^{4}} \bigg), \\ \omega^2_{3} \simeq & \Delta^2 + B^2_{11}+ B^2_{22} , \end{align} where we defined \begin{equation}\label{B_def}
B^2_{ij} = \frac{4}{|\Delta|}G_{i}G_j \sqrt{\omega_{mi}\omega_{mj}}. \end{equation} The sign in Eq.~(\ref{w12}) is chosen to satisfy $\omega_2 \geq \omega_1$. Using $\omega_1^2 \geq 0$ we get the stability condition: \begin{equation}\label{eq:stability}
4G_{1}^{2}\omega_{m2}+4G_{2}^{2}\omega_{m1}\leq\left|\Delta\right|\omega_{m1}\omega_{m2}. \end{equation}
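The perturbative spectrum (\ref{w12}) and the stability condition (\ref{eq:stability}) can be checked against exact diagonalization of $M$; the following sketch uses assumed illustrative parameters with $|\Delta| \gg \omega_{mi}, G_i$:

```python
import numpy as np

# assumed illustrative parameters with |Delta| >> omega_mi, G_i
wm1, wm2, G1, G2, Delta = 1.0, 1.3, 0.2, 0.2, -20.0
aD = abs(Delta)

# exact spectrum from the matrix M of Eq. (M_matrix)
c1, c2 = 2*G1*np.sqrt(aD*wm1), 2*G2*np.sqrt(aD*wm2)
M = np.array([[wm1**2, 0, c1], [0, wm2**2, c2], [c1, c2, Delta**2]])
w2_exact = np.sort(np.linalg.eigvalsh(M))

# perturbative frequencies, Eq. (w12), with B^2_ij from Eq. (B_def)
B11 = 4*G1**2*wm1/aD                        # B^2_11
B22 = 4*G2**2*wm2/aD                        # B^2_22
B12sq = 4*G1*G2*np.sqrt(wm1*wm2)/aD         # B^2_12
rad = np.sqrt((wm1**2 - wm2**2 - B11 + B22)**2 + 4*B12sq**2)
w1sq = 0.5*(wm1**2 + wm2**2 - B11 - B22 - rad)
w2sq = 0.5*(wm1**2 + wm2**2 - B11 - B22 + rad)
w3sq = Delta**2 + B11 + B22

assert np.allclose([w1sq, w2sq, w3sq], w2_exact, rtol=1e-3)
# stability condition, Eq. (eq:stability), satisfied for these parameters
assert 4*G1**2*wm2 + 4*G2**2*wm1 <= aD*wm1*wm2
```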
\section{General formalism}\label{sec:method}
To characterize the effects of the nonlinear interaction, we follow the treatment developed in Ref.~\cite{Lemonde2013PRL} for the two-mode system and extended to multi-mode optomechanical chains in Ref.~\cite{Jin2018PRA}. Within this approach, the retarded photon Green's function $ G^{R}[d,d^{\dagger};\omega]=-i\int_{-\infty}^{+\infty}dte^{i\omega t}\left\langle [d(t),d^{\dagger}(0) ]\right\rangle \theta(t)$ (where $\theta(t)$ is the Heaviside step function) is computed with the Keldysh diagrammatic technique by including the nonlinear interaction $H_{nl}$ through a dominant second-order correction to the self-energy. This approach is justified by the smallness of the nonlinear interaction.
Several observable quantities can be extracted from $G^{R}[d,d^{\dagger};\omega]$ and we will focus on the cavity DOS $\rho_{d}(\omega)$, defined as follows: \begin{equation}\label{eq:DOS} \rho_{d}(\omega)=-\frac{1}{\pi}\mathrm{Im}G^{R}[d,d^{\dagger};\omega]. \end{equation} The modification of the optomechanically induced transparency (OMIT) signal can also be easily extracted from $G^R[d,d^{\dagger};\omega]$, and is directly related to $\rho_d(\omega)$. The relation between the polaritons and photon Green's functions immediately follows from Eq.~(\ref{eq:trans_relat}), leading to the following expression of $\rho_d$ in terms of the polariton retarded Green's functions: \begin{align}\label{eq:rho_d} \rho_{d}(\omega) = & -\frac{1}{\pi}{\rm Im}\bigg\{ \sum_{i=1}^3 \left( V_{3,i}^{2} G^{R}[c_i,c_i^{\dagger};\omega] \right. \nonumber \\ & \left. + V_{3,i+3}^{2}G^{R}[c_i^{\dagger},c_i;\omega]\right) \bigg\}, \end{align} where we have neglected the contribution from the small off-diagonal components $ G^{R}[c_i,c_j^{\dagger};\omega] $, with $ i \neq j$ (which is justified where the nonlinear interaction and polariton dampings are much smaller than the differences between polariton frequencies). Considering the nonlinear interaction, the Green's functions entering Eq.~(\ref{eq:rho_d}) are: \begin{equation}\label{GR} G^{R}[c_i,c_i^{\dagger};\omega]=\frac{1}{\omega-\omega_{i}+i\frac{\kappa_{i}}{2}-\Sigma_i^{R}(\omega)}, \end{equation} and $G^{R}[c_i^{\dagger},c_i;\omega]=(G^{R}[c_i,c_i^{\dagger};-\omega])^*$. In Eq.~(\ref{GR}), $\kappa_{i}$ is the effective dissipation of polariton $i$ and $\Sigma_i^{R}(\omega)$ is the retarded self-energy. The explicit form of these quantities is discussed below.
First we consider the effect of $H_{\rm diss}$, giving the following damping rates of the polaritons: \begin{equation}\label{kappa_i} \kappa_{i} = \kappa V_{3,i}^{2}-\kappa V_{3,i+3}^{2}+\sum_{j=1,2}\gamma_{j}\left(V_{j,i}+V_{j,i+3}\right)^{2}, \end{equation} where we can recognize three distinct contributions (from the cavity and two phonon baths). The different form in which the matrix elements $V_{i,j}$ enter the photon- and photon-bath contributions is due to the quantum heating induced by the drive on the cavity mode, leading to bath modes with negative frequency in the rotating frame ($\Delta_{c,i}>-\omega_l$). The unperturbed retarded Green's function is simply given by $ G_{0}^{R}[c_i,c_i^{\dagger};\omega]=1/\left(\omega-\omega_{i}+i\kappa_{i}/2\right)$. In the perturbative calculation of $\Sigma_i^{R}(\omega)$, it is also necessary to consider the Keldysh Green's function, $ G_{0}^{K}[c_i,c_i^{\dagger};\omega]= 2i(2n_i+1){\rm Im}G_{0}^{R}[c_i,c_i^{\dagger};\omega]$. Here $n_i$ are the occupation numbers of the free polaritons, which can also be found from $H_{\rm diss}$: \begin{equation} \label{n_i} n_{i} = \frac{\kappa}{\kappa_{i}} V_{3,i+3}^{2}+\sum_{j=1,2}\frac{\gamma_{j}}{\kappa_{i}}\left(V_{j,i}+V_{j,i+3}\right)^{2}n_{B}(\omega_{i}), \end{equation} where $n_{B}(\omega_{i})=1/(e^{\beta\omega_i}-1)$ is the Bose-Einstein distribution function, evaluated at the frequency of polariton $c_{i}$ and the (physical) temperature of the two mechanical baths. In the above expression we assume the optical cavity bath to be effectively at zero temperature.
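Eqs.~(\ref{kappa_i}) and (\ref{n_i}) can be evaluated numerically (assumed illustrative parameters, with $\gamma_1=\gamma_2=\gamma$); at large detuning the two lower polaritons indeed inherit the small mechanical damping, while the third is photon-like:

```python
import numpy as np

# assumed illustrative parameters (units of omega_m1)
wm = np.array([1.0, 1.3]); G = np.array([0.2, 0.2])
Delta, kappa, gamma = -20.0, 1e-2, 1e-5      # gamma_1 = gamma_2 = gamma
aD = abs(Delta)

c = 2*G*np.sqrt(aD*wm)
M = np.array([[wm[0]**2, 0, c[0]], [0, wm[1]**2, c[1]], [c[0], c[1], Delta**2]])
w2, U = np.linalg.eigh(M)
w = np.sqrt(w2)
X = np.outer(np.array([wm[0], wm[1], aD]), 1/w)
Vp = U*(np.sqrt(X) + 1/np.sqrt(X))/2         # V_{i,j}
Vm = U*(np.sqrt(X) - 1/np.sqrt(X))/2         # V_{i,j+3}

# Eq. (kappa_i): effective polariton dampings
kappa_i = kappa*(Vp[2]**2 - Vm[2]**2) + gamma*np.sum((Vp[:2] + Vm[:2])**2, axis=0)
# Eq. (n_i) with the mechanical baths at zero temperature (n_B = 0)
n_i = kappa*Vm[2]**2/kappa_i

# polaritons 1,2 are phonon-like (damping ~ gamma); polariton 3 is photon-like
assert kappa_i[0] < 10*gamma and kappa_i[1] < 10*gamma
assert abs(kappa_i[2]/kappa - 1) < 0.05
assert np.all(n_i < 1)                       # small quantum-heating occupations
```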
Now we consider the self-energy induced by the nonlinear interaction Eq.~(\ref{eq:nonlinear term}), which it is useful to rewrite in the polariton basis as: \begin{align}\label{H_nl_4terms} H_{nl} = & \big( g_{322} c_{3}c_{2}^{\dagger}c_{2}^{\dagger} + g_{311} c_{3}c_{1}^{\dagger}c_{1}^{\dagger}+ g_{321} c_{3}c_{2}^{\dagger}c_{1}^{\dagger} \nonumber \\ & + g_{211} c_{2}c_{1}^{\dagger}c_{1}^{\dagger} + {\rm H.c.} \big) + \ldots, \end{align} where we rely on the fact that each nonlinear term can only contribute appreciably if the system is close to a resonant condition. Therefore, we dropped contributions which obviously cannot be resonant, e.g., terms $\propto c_{i}^{\dagger}c_{j}^{\dagger}c_{k}^{\dagger}$. Among the terms $\propto c_{i}c_{j}^{\dagger}c_{k}^{\dagger}$, the only important ones are the four explicitly written in Eq.~(\ref{H_nl_4terms}), after taking into account the requirement $\omega_i \simeq \omega_j + \omega_k$ and our conventional ordering of eigenfrequencies~(\ref{eq:frequencies_relation}).
The various contributions to the self-energy arising from Eq.~(\ref{H_nl_4terms}) can be computed in a relatively straightforward way following the discussion of 2-mode and 4-mode systems \cite{Lemonde2013PRL, Jin2018PRA}. However, as a further simplification, we will consider the regime of low-energy polaritons $i=1,2$ with predominantly mechanical character. In this case, it is expected that the effect of the scattering process $g_{211} c_{2}c_{1}^{\dagger}c_{1}^{\dagger}$ is greatly enhanced, due to the long lifetime of the polariton modes \cite{Jin2018PRA}. From Eq.~(\ref{M_matrix}) we see that a simple condition to quench the mixing of optical and mechanical modes is $|\Delta| \gg \omega_{mi},G_i$, i.e., the perturbative regime already mentioned. Since $\omega_3 \simeq |\Delta| \gg \omega_{1,2}$, it is quite clear that the condition $\omega_3 = \omega_j +\omega_k$ cannot be realized, which justifies neglecting the first line of Eq.~(\ref{H_nl_4terms}).
In summary, in the rest of the paper we will focus on a parameter regime where the nonlinear interaction can be approximated as: \begin{equation}\label{H_nl_211} H_{nl} \simeq g_{211} ( c_{2}c_{1}^{\dagger}c_{1}^{\dagger}+ c_{1}c_{1} c_{2}^\dagger ). \end{equation} The explicit expression of $g_{211}$ is \begin{align} g_{211} = \sum_{i=1,2} g_{i}& \left[\left(V_{3,1}V_{3,2}+V_{3,4}V_{3,5}\right)\left(V_{i ,1}+V_{i, 4}\right)\right.\nonumber \\
& \left.+V_{3,1}V_{3,4}\left(V_{i,2}+V_{i,5}\right)\right]. \end{align} Since the nonlinear problem is effectively simplified to a two-mode system, we can rely on previous analysis to write the relevant polariton self-energies as follows: \begin{align} \Sigma_1^{R}(\omega) & = 4g_{211}^{2} \frac{n_1-n_2}{\omega+\omega_{1}-\omega_{2}+i(\kappa_{1}+\kappa_{2})/2}, \label{SR1}\\ \Sigma_2^{R}(\omega) & = 4g_{211}^{2} \frac{n_1+1/2}{\omega-2\omega_{1}+i\kappa_{1}}, \label{SR2} \end{align} while $\Sigma_3^{R}(\omega) \simeq 0$.
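The expression for $g_{211}$ can be evaluated numerically (assumed illustrative parameters); as anticipated, the polariton coupling is strongly reduced with respect to the bare couplings $g_i$, which is compensated by the small dampings $\kappa_{1,2}$:

```python
import numpy as np

# assumed illustrative parameters (units of omega_m1)
wm = np.array([1.0, 2.0]); G = np.array([0.2, 0.2])
g = np.array([0.01, 0.01])                   # single-photon couplings g_1, g_2
Delta = -20.0; aD = abs(Delta)

c = 2*G*np.sqrt(aD*wm)
M = np.array([[wm[0]**2, 0, c[0]], [0, wm[1]**2, c[1]], [c[0], c[1], Delta**2]])
w2, U = np.linalg.eigh(M)
w = np.sqrt(w2)
X = np.outer(np.array([wm[0], wm[1], aD]), 1/w)
Vp = U*(np.sqrt(X) + 1/np.sqrt(X))/2         # V_{i,j},   j = 1,2,3
Vm = U*(np.sqrt(X) - 1/np.sqrt(X))/2         # V_{i,j+3}

# polariton-basis coupling g_211 (sum over the two mechanical rows i = 1,2)
g211 = np.sum(g*((Vp[2,0]*Vp[2,1] + Vm[2,0]*Vm[2,1])*(Vp[:2,0] + Vm[:2,0])
                 + Vp[2,0]*Vm[2,0]*(Vp[:2,1] + Vm[:2,1])))

# nonzero, but much smaller than the bare single-photon coupling
assert 0 < abs(g211) < 0.1*g[0]
```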
\section{Resonant enhancement of nonlinear effects}\label{Sec: Resonant enhancement}
\begin{figure}\label{fig:Boundary_resonance}
\end{figure}
Examples quantifying nonlinear effects through the formalism described above are shown in Fig.~\ref{fig:Boundary_resonance}(a-c). There, we have evaluated $\rho_d$ using Eqs.~(\ref{eq:rho_d}), (\ref{GR}) and the perturbative self-energies Eqs.~(\ref{SR1}), (\ref{SR2}). More precisely, we plot (in logarithmic scale) the following quantity: \begin{equation}\label{I_def}
\mathcal{I} = \max_{\omega}\left|\rho_{d}(\omega)-\rho_{d}^{0}(\omega)\right|, \end{equation} where $\rho_{d}^{0}(\omega)$ is the DOS without nonlinearity, i.e., assuming $H_{nl}=0$. $\mathcal{I}$ gives the maximum deviation of $\rho_{d}(\omega)$ induced by the nonlinear interactions over the whole spectrum. Considering $\mathcal{I}$ as a function of the drive strength and detuning at fixed values of $\omega_{m1}$, $\omega_{m2}$, and $g_{1,2}$, the most prominent feature within the stability region is the presence of a sharp line at which the nonlinear effects are enhanced. This condition corresponds to the resonance $\omega_2 = 2\omega_1$, which can be determined from the spectrum of Eq.~(\ref{M_matrix}). We show in Fig.~\ref{fig:Boundary_resonance}(d) how the resonant curves are modified by the ratio $\omega_{m2}/\omega_{m1}$. The resonant lines found from the spectrum match well the enhancement observed in panels (a-c).
Without any restrictions on the maximum photon number $N$, the most favorable regime is the one of large detuning $|\Delta| \gg \omega_{m1}$ and smaller ratios $\omega_{m2}/\omega_{m1} \gtrsim 1$. Similarly to the four-mode optomechanical ring discussed in Ref.~\onlinecite{Jin2018PRA}, a large value of $|\Delta|$ leads to enhanced nonlinear effects by inducing phonon-like polaritons with very small damping. Furthermore, as seen in Fig.~\ref{fig:Boundary_resonance}, a smaller value of $\omega_{m2}/\omega_{m1} $ allows the resonant curve to approach the boundary of the unstable regime. We note, however, that the optimization of $\omega_{m2}/\omega_{m1}$ is nontrivial since for $\omega_{m2}/\omega_{m1}=1$ nonlinear effects drop to small values (see Appendix~\ref{appendix_equal_wm}).
In the spectral domain, the DOS is characterized by three nearly Lorentzian polariton peaks. As it turns out, the largest changes in the DOS occur at the two lower peaks, with frequencies $\omega_{1,2}$. The high-frequency polariton is not involved in the resonant scattering process and the corresponding peak at $\omega_3$ is hardly affected by nonlinear effects. To quantify the change of the DOS at $\omega_{1,2}$ we can evaluate Eq.~(\ref{eq:rho_d}) at the relevant polariton frequencies and keep only the dominant contribution $\propto G^{R}[c_j,c_j^{\dagger};\omega_j]$, thus obtaining the approximate expressions: \begin{equation} \label{rho_d_approx_Ceff} \rho_{d}(\omega_{1})\approx \frac{2V_{31}^2}{\pi\kappa_{1}}\frac{1}{1+C_{\mathrm{eff,1}}}, \quad \rho_{d}(\omega_{2})\approx \frac{2V_{32}^2}{\pi\kappa_{2}}\frac{1}{1+C_{\mathrm{eff,2}}}, \end{equation} where the effective cooperativities appearing in the denominators are given by \begin{equation} \label{eq:C_eff} C_{\mathrm{eff,1}}=\frac{16g_{211}^{2}(n_{1}-n_{2})}{\kappa_{1}(\kappa_{1}+\kappa_{2})}, \qquad C_{\mathrm{eff,2}}=\frac{4g_{211}^{2}(1+2n_1)}{\kappa_{1}\kappa_{2}}. \end{equation}
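These peak-height formulas translate directly into code. The following minimal Python sketch evaluates Eq.~(\ref{eq:C_eff}) and the suppressed peak DOS of Eq.~(\ref{rho_d_approx_Ceff}); the occupations, couplings, and matrix elements used here are placeholder values.

```python
# Hedged sketch of Eqs. (rho_d_approx_Ceff) and (eq:C_eff): nonlinear
# suppression of the DOS at the two lower polariton peaks.
# All numerical inputs are illustrative placeholders.
from math import pi

def c_eff(g211, n1, n2, k1, k2):
    """Effective cooperativities C_eff,1 and C_eff,2."""
    c1 = 16 * g211**2 * (n1 - n2) / (k1 * (k1 + k2))
    c2 = 4 * g211**2 * (1 + 2 * n1) / (k1 * k2)
    return c1, c2

def dos_peaks(V31, V32, k1, k2, c1, c2):
    """Peak DOS, reduced by 1/(1 + C_eff,j) relative to the linear value."""
    rho1 = 2 * V31**2 / (pi * k1) / (1 + c1)
    rho2 = 2 * V32**2 / (pi * k2) / (1 + c2)
    return rho1, rho2

c1, c2 = c_eff(g211=1e-4, n1=0.5, n2=0.25, k1=1e-3, k2=2e-3)
rho1, rho2 = dos_peaks(0.1, 0.1, 1e-3, 2e-3, c1, c2)
```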
The $C_{\mathrm{eff},j}$ are directly related to the relative changes of the DOS induced by the nonlinear interaction, since $\left|\rho_{d}(\omega)-\rho_{d}^{0}(\omega)\right|/\rho_d(\omega_j) \simeq C_{\mathrm{eff},j}$ (usually, $C_{\mathrm{eff},j}\ll 1$, due to the weakness of nonlinear interactions). In the following we will mostly discuss $C_{\mathrm{eff,2}}$, which is more directly comparable to the two-mode system (in the two-mode system, $n_{i} \ll 1$ and the nonlinear effects at the lower polariton are small). However, we will show at the end of Sec.~\ref{Sec:Result_2} that for the three-mode system the two cooperativities are similar at the optimal point, so the behavior of $C_{\mathrm{eff,2}}$ is also representative of $C_{\mathrm{eff,1}}$ (see also Fig.~\ref{fig:DOS}).
\begin{figure}\label{fig:C_eff_resonance_total}
\end{figure}
Examples of the numerically evaluated $C_{\mathrm{eff,2}}$ as a function of $|\Delta|$ along the resonant curves are shown in Fig.~\ref{fig:C_eff_resonance_total}. The dependence of $C_{\mathrm{eff,2}}$ is non-monotonic, similarly to the four-mode chain \cite{Jin2018PRA}. The decrease of $C_{\mathrm{eff,2}}$ at large $|\Delta|$ is due to the saturation of the polariton damping $\kappa_{1,2}$ to the bare mechanical dissipation rates, $\gamma_{1,2}$ (a more detailed discussion is provided later on). We also see that, in agreement with the previous discussion, the curves with a smaller $\omega_{m2}/\omega_{m1}$ lead to larger values of $C_{\mathrm{eff,2}}$. However, this advantageous behavior does not take into account any practical limitation on the maximum achievable dressed coupling strength.
Since it is difficult to realize values of $G_{1,2}$ approaching the mechanical frequency (the regime of ultrastrong coupling \cite{Peterson2019PRL}), we also consider the effect of an upper cutoff $G_{1,2}\leq G_{\rm max}$, where $G_{\rm max}$ is smaller than $\omega_{m1}$. This restriction is illustrated by the dashed curves of Fig.~\ref{fig:C_eff_resonance_total}. As seen, a system with a smaller value of $\omega_{m2}/\omega_{m1}$ suffers a strong reduction in the maximum value of $|\Delta|$, marked by a blue star for each solid curve (considering $G_{\rm max}=0.5 \omega_{m1}$). Therefore, one has to strike a compromise between the generally advantageous effect of reducing $\omega_{m2}/\omega_{m1}$ and the more restrictive range of $|\Delta|$. The optimal choice of $\omega_{m2}/\omega_{m1}$ is generally slightly below $2$. For example, we see that the purple curve with $\omega_{m2}/\omega_{m1} = 1.97$ hits the upper dashed boundary close to its maximum, and thus represents the optimal choice when $G_{\rm max}=0.5 \omega_{m1}$.
\begin{figure}\label{fig:Fig_4_C_eff_Vs_g1_omega_m2}
\end{figure}
We also show in Fig.~\ref{fig:Fig_4_C_eff_Vs_g1_omega_m2} a density plot of the maximum $C_{\mathrm{eff,2}}$ (i.e., after optimizing over $\Delta$) as a function of $g_1/g_2$ and $\omega_{m2}/\omega_{m1}$, for two different choices of $G_{\rm max}$. The largest value is marked by a white dot and satisfies $g_1=g_2$. Instead, the optimal value of $\omega_{m2}/\omega_{m1}$ is always slightly below 2 but depends on $G_{\rm max}$.
Finally, we note that in Fig.~\ref{fig:C_eff_resonance_total} the largest values of $C_{\rm eff,2}$ (the maxima of the dashed curves) are attained in the regime of large detunings $|\Delta|\gg \omega_{m1}$. In the next Sec.~\ref{Sec:Result_2} we explore this limit more explicitly, which allows us to obtain analytical expressions through a perturbative treatment and to identify the most relevant parametric dependences.
\section{Large detuning limit}\label{Sec:Result_2}
By assuming $|\Delta| \gg \omega_{m1}, \omega_{m2}$ and relatively small dressed couplings, $G_{1,2}\lesssim \omega_{m1}$, we can diagonalize Eq.~(\ref{M_matrix}) perturbatively and simplify the expressions of the linear transformation matrices $U$ and $V$ (see Appendix~\ref{Appendix_V} for details). Under these conditions, the two lower polariton modes are phonon-like, i.e., they are weakly mixed with the optical cavity. Their dampings take the approximate form: \begin{equation}
\kappa_{i} \simeq \frac{4G_{i}^{2}\omega_{mi}}{\left|\Delta\right|^{3}} \kappa+\gamma_{i}, \label{pert_theory_kappa} \end{equation} where $i=1,2$. The total decay rate is the sum of an optical contribution, due to the small mixing to the optical cavity, and the regular mechanical damping. With the same approach we also obtain the relevant occupation number and effective interaction as follows: \begin{equation}
n_{1} \simeq \left( \frac{\gamma_1 \Delta^2}{\kappa G_1^2} +4 \frac{\omega_{m1}}{|\Delta|}\right)^{-1} , ~~ g_{211} \simeq 3g_1 \frac{G_{1}G_{2}}{\Delta^{2}}. \label{pert_theory_n_g} \end{equation} These expressions can be substituted in Eq.~(\ref{eq:C_eff}), giving: \begin{equation}
C_{\mathrm{eff,2}}\simeq\frac{72 \kappa g^2 G_2^6 |\Delta|^3 \left(1+\frac{\gamma_1 \Delta^2}{2\kappa G_2^2} +2 \frac{\omega_{m1}}{|\Delta|} \right) }
{\left( 4G_2^2\omega_{m1}\kappa +\gamma_1 |\Delta|^3\right)^2\left( 8G_2^2\omega_{m1}\kappa +\gamma_2 |\Delta|^3\right)}, \label{eq:C_eff_2} \end{equation} where we used that, as in Fig.~\ref{fig:Fig_4_C_eff_Vs_g1_omega_m2}, the optimal point occurs for $g_1=g_2 \equiv g$ (thus $G_1=G_2$) and $\omega_{m2} \simeq 2 \omega_{m1}$. Note that the expression of $C_{\mathrm{eff,2}}$ is derived at resonance, implying that $G_2$ and $\Delta$ in Eq.~(\ref{eq:C_eff_2}) must satisfy the resonant condition. From the expressions of $\omega_{1,2}$ in Eq.~(\ref{w12}), and taking into account $g_2/g_1=1$ and $\omega_{m2}/\omega_{m1}\simeq 2$, we obtain the following approximate relationship: \begin{equation}\label{eq:Resonance condition_text}
G_2 \simeq \sqrt{\left(\omega_{m1}- \frac{\omega_{m2}}{2}\right) |\Delta|}. \end{equation}
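The large-detuning expressions above can be collected into a short numerical sketch. The Python snippet below implements Eqs.~(\ref{pert_theory_kappa}), (\ref{pert_theory_n_g}) and (\ref{eq:Resonance condition_text}); the parameter values in the consistency checks are placeholders chosen only to exercise the formulas.

```python
# Hedged sketch of the large-detuning perturbative expressions,
# Eqs. (pert_theory_kappa), (pert_theory_n_g), and the resonance
# condition (eq:Resonance condition_text). All numbers are placeholders.
from math import sqrt

def kappa_i(G, wm, kappa, gamma, Delta):
    """Polariton damping: induced optical part plus bare mechanical rate."""
    return 4 * G**2 * wm * kappa / abs(Delta)**3 + gamma

def n_1(G1, wm1, kappa, gamma1, Delta):
    """Occupation of the lower phonon-like polariton."""
    return 1.0 / (gamma1 * Delta**2 / (kappa * G1**2) + 4 * wm1 / abs(Delta))

def g_211(g1, G1, G2, Delta):
    """Effective interaction between the two lower polaritons."""
    return 3 * g1 * G1 * G2 / Delta**2

def G2_resonant(wm1, wm2, Delta):
    """Dressed coupling enforcing omega_2 = 2*omega_1 (needs wm2 < 2*wm1)."""
    return sqrt((wm1 - wm2 / 2) * abs(Delta))
```

As a consistency check, for vanishing mechanical damping the occupation reduces to $n_1 \simeq |\Delta|/(4\omega_{m1})$.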
We now proceed to optimize Eq.~(\ref{eq:C_eff_2}) and set $G_2=G_{\rm max}$. For simplicity we consider equal mechanical dampings $\gamma_{1,2}=\gamma$ (extension to unequal dampings is straightforward) and work under the assumption that $1+\gamma_1 \Delta^2/(2\kappa G_1^2) +2 \omega_{m1}/|\Delta| \simeq 1$ in the numerator of Eq.~(\ref{eq:C_eff_2}). The latter approximation implies: \begin{equation} R = \left(\frac{G_{\mathrm{max}}}{\omega_{m1}}\right)^{2} \frac{\kappa}{\gamma} \gg 1. \label{eq:C_eff_max_1} \end{equation} As it will become clear in the following, $R$ is an important parameter controlling the enhancement of $C_{\mathrm{eff,2}}$ with respect to a two-mode system. Therefore, having a large value of $R$ is desirable. Under these assumptions, Eq.~(\ref{eq:C_eff_2}) simplifies to: \begin{equation}
C_{\mathrm{eff,2}}\simeq\frac{72 \kappa g^2 G_{\rm max}^6 |\Delta|^3 }
{\left( 4G_{\rm max}^2\omega_{m1}\kappa +\gamma |\Delta|^3\right)^2\left( 8G_{\rm max}^2\omega_{m1}\kappa +\gamma |\Delta|^3\right)}, \label{eq:C_eff_2_simplified} \end{equation} where the resonant condition reads: \begin{equation} \label{Delta_resonant}
|\Delta| \simeq \frac{G_{\rm max}^2}{\omega_{m1}}\left(1-\frac{\omega_{m2}}{2\omega_{m1}} \right)^{-1}. \end{equation}
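A compact numerical form of the simplified cooperativity and of the resonant detuning is given below (Python; all concrete numbers are placeholders). As a sanity check, for $\gamma \to 0$ the expression reduces to a pure cubic growth, $C_{\mathrm{eff,2}} \simeq (9/16)(g/\kappa)^2(|\Delta|/\omega_{m1})^3$, a limit discussed further below.

```python
# Hedged sketch of Eqs. (eq:C_eff_2_simplified) and (Delta_resonant).
# Parameter values in any example call are illustrative placeholders.

def c_eff2(G, Delta, g, kappa, gamma, wm1):
    """Simplified C_eff,2 at resonance, equal couplings and dampings."""
    num = 72 * kappa * g**2 * G**6 * abs(Delta)**3
    den = ((4 * G**2 * wm1 * kappa + gamma * abs(Delta)**3)**2
           * (8 * G**2 * wm1 * kappa + gamma * abs(Delta)**3))
    return num / den

def delta_resonant(Gmax, wm1, wm2):
    """|Delta| fixed by omega_2 = 2*omega_1 (valid for wm2 < 2*wm1)."""
    return Gmax**2 / wm1 / (1 - wm2 / (2 * wm1))

# gamma -> 0 limit: c_eff2 -> (9/16) (g/kappa)^2 (|Delta|/wm1)^3
c_check = c_eff2(G=0.3, Delta=20.0, g=1e-4, kappa=1e-2, gamma=0.0, wm1=1.0)
```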
\begin{figure}\label{fig:The-comparison}
\end{figure}
Equations~(\ref{eq:C_eff_2_simplified}) and (\ref{Delta_resonant}) give the maximum $C_{\mathrm{eff,2}}$ at given mechanical frequencies. A comparison of the approximate result and the numerical evaluation is provided in Fig.~\ref{fig:The-comparison}(a), showing good agreement. As noted previously in Fig.~\ref{fig:Fig_4_C_eff_Vs_g1_omega_m2}, there is an optimal value of the ratio $\omega_{m2}/\omega_{m1}$, giving the largest nonlinear effect. To find this value we further optimize Eq.~(\ref{eq:C_eff_2_simplified}) with respect to $\omega_{m2}$ and obtain: \begin{equation} \label{eq:C_eff_optimal} \widetilde{C}_{\mathrm{\mathrm{eff,2}}} \simeq (c_1R+c_2R^{2/3} + \ldots) \left(\frac{g}{\kappa}\right)^{2} , \end{equation} where the coefficients are $c_1 = 9(5\sqrt{5}-11)/4 \approx 0.41$ and $c_2 = 9(7-3\sqrt{5})/\sqrt[3]{16(\sqrt5-1)} \approx 0.97$. The numerical prefactor in Eq.~(\ref{eq:C_eff_optimal}) is expressed through powers of the large enhancement factor $R$ and the fully optimized cooperativity $\widetilde{C}_{\mathrm{\mathrm{eff,2}}} $ exhibits a monotonic dependence on $G_{\rm max}$, due to the quadratic increase of $R$ with $G_{\rm max}$ [see Eq.~(\ref{eq:C_eff_max_1})]. In Fig.~\ref{fig:The-comparison}(b) we show that the increase of $\widetilde{C}_{\mathrm{eff,2}}$ is well described by the approximate Eq.~(\ref{eq:C_eff_optimal}).
Finally, we find that the maximum of $\widetilde{C}_{\mathrm{\mathrm{eff,2}}}$ occurs at \begin{equation}\label{Delta_max_precise} \frac{\omega_{m2}}{\omega_{m1}} \simeq 2- c_0 \left(\frac{\gamma}{\kappa}\right)^{1/3}\left(\frac{G_{\rm max}}{\omega_{m1}}\right)^{4/3}, \end{equation} where $c_0= (\sqrt{5}+1)^{1/3} \simeq 1.5$. Since usually $\gamma \ll \kappa$, the second term of Eq.~(\ref{Delta_max_precise}) represents a small deviation from $\omega_{m2} = 2\omega_{m1}$. The inset of Fig.~\ref{fig:The-comparison}(a) shows that a larger dressed optomechanical coupling $G_{\rm max}$ causes the optimal ratio $\omega_{m2}/\omega_{m1}$ to move farther away from $\omega_{m2}=2\omega_{m1}$, besides allowing for more prominent nonlinear effects, which is in agreement with Eq.~(\ref{Delta_max_precise}).
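The leading coefficient $c_1$ of Eq.~(\ref{eq:C_eff_optimal}) can be checked independently by a brute-force maximization of Eq.~(\ref{eq:C_eff_2_simplified}) over the detuning (equivalent, through the resonance condition, to varying $\omega_{m2}$ at fixed $G_{\rm max}$). The Python sketch below does this on a simple grid; the parameter values are placeholders chosen so that $R \gg 1$.

```python
# Hedged numerical check: maximizing the simplified cooperativity over
# |Delta| recovers the leading term c1*R*(g/kappa)^2 of Eq. (eq:C_eff_optimal).
# Parameter values are illustrative placeholders.
from math import sqrt

def c_eff2(G, Delta, g, kappa, gamma, wm1):
    num = 72 * kappa * g**2 * G**6 * abs(Delta)**3
    den = ((4 * G**2 * wm1 * kappa + gamma * abs(Delta)**3)**2
           * (8 * G**2 * wm1 * kappa + gamma * abs(Delta)**3))
    return num / den

g, kappa, gamma, wm1, G = 1e-4, 1e-2, 1e-6, 1.0, 0.5
R = (G / wm1)**2 * kappa / gamma          # enhancement factor, Eq. (eq:C_eff_max_1)

# brute-force grid search over the detuning (grid covers the maximum)
best = max(c_eff2(G, 10.0 + 0.01 * i, g, kappa, gamma, wm1)
           for i in range(20000))

c1 = 9 * (5 * sqrt(5) - 11) / 4           # ~0.41, leading coefficient
```

The grid maximum agrees with $c_1 R (g/\kappa)^2$ to well below a tenth of a percent, consistent with the analytic optimization.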
\subsection{Comparison to a two-mode system}\label{Appendix_parameters}
We now perform a more specific comparison to the two-mode setup, where the optimal point is at $\Delta \simeq -2\omega_{m} $, leading to~\cite{Jin2018PRA}: \begin{equation} C_{\rm eff} \simeq \frac{45}{8} \frac{g^2}{\kappa^2}. ~~~{\rm (two~modes)} \end{equation} As a reference we consider parameters from a very recent electromechanical setup using a 3D superconducting cavity, which made it possible to achieve ultrastrong parametric couplings of order $G_{\rm max} \sim 0.4 \omega_m$ \cite{Peterson2019PRL}. With the parameters listed in Table~\ref{table_parameters}, we estimate that the ultrastrong-coupling regime would allow for a potentially large enhancement factor of order $R \simeq 6\times 10^3$. In fact, from Eq.~(\ref{eq:C_eff_optimal}), the figure of merit for the nonlinear effects would be a more accessible $\widetilde{C}_{{\rm eff},2} \simeq 0.5 \times 10^{-4}$ in the three-mode system, instead of $C_{\rm eff} \simeq 10^{-7}$ for the two-mode setup.
\begin{table} \centering \caption{Parameters from two specific setups.} \label{table_parameters} \begin{tabularx}{0.49\textwidth}{XXX} \hline \hline
& Peterson \emph{et al.}~\cite{Peterson2019PRL} & Teufel \emph{et al.}~\cite{Teufel2011Nature} \\ \hline $\omega_{c}/2\pi$ & 6.506 GHz & 7.47 GHz \\ $\kappa/2\pi$ & 1.2 MHz & 170 kHz \\ $\omega_{m}/2\pi$ & 9.696 MHz & 10.69 MHz \\ $\gamma/2\pi$ & $31\pm 1$ Hz & 30 Hz \\ $g/2\pi$ & $167\pm 2$ Hz & 230 Hz \\ $G_{\rm max}/2\pi $ & 3.83 MHz & 0.5 MHz \\ \hline \hline \end{tabularx} \end{table}
It is also instructive to consider parameters from an electromechanical setup with lumped elements (second column of Table~\ref{table_parameters}) and a much smaller $G_{\rm max} \sim 0.05 \omega_m $. Here, also due to the weaker cavity damping, we only have $R \simeq 12$. Indeed, we estimate that the three-mode system could achieve $\widetilde{C}_{{\rm eff},2}
\simeq 2 \times 10^{-5}$, similar to $C_{\rm eff} \simeq 10^{-5}$ of the two-mode setup.
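The order-of-magnitude estimates quoted above follow from Eq.~(\ref{eq:C_eff_optimal}) and the parameters of Table~\ref{table_parameters}. They can be reproduced with the short Python sketch below (frequencies entered in Hz; the common $2\pi$ factors cancel in all ratios).

```python
# Hedged sketch: reproduce the quoted estimates from Table parameters
# via Eq. (eq:C_eff_optimal) and the two-mode result 45/8*(g/kappa)^2.
from math import sqrt

C1 = 9 * (5 * sqrt(5) - 11) / 4                            # ~0.41
C2 = 9 * (7 - 3 * sqrt(5)) / (16 * (sqrt(5) - 1))**(1 / 3)  # ~0.97

def three_mode(g, kappa, gamma, wm, Gmax):
    """Optimized C_eff,2 of the three-mode system, Eq. (eq:C_eff_optimal)."""
    R = (Gmax / wm)**2 * kappa / gamma
    return (C1 * R + C2 * R**(2 / 3)) * (g / kappa)**2

def two_mode(g, kappa):
    """Reference two-mode cooperativity."""
    return 45 / 8 * (g / kappa)**2

# Peterson et al. (3D cavity) and Teufel et al. (lumped elements)
peterson = three_mode(g=167, kappa=1.2e6, gamma=31, wm=9.696e6, Gmax=3.83e6)
teufel = three_mode(g=230, kappa=170e3, gamma=30, wm=10.69e6, Gmax=0.5e6)
```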
The final values for $\widetilde{C}_{{\rm eff},2} $ in the two scenarios are similar, despite the great difference in $(g/\kappa)^2$. In the first example, the value of $\widetilde{C}_{{\rm eff},2} $ suffers from the relatively large damping of the microwave cavity. Improving that parameter to the $\sim 100$~kHz range would result in a much larger value of $(g/\kappa)^2$, thus approaching $\widetilde{C}_{{\rm eff},2} \simeq 10^{-3}$ in the three-mode system. We also note that the working point of Ref.~\cite{Peterson2019PRL} is at $\Delta = -\omega_m$, when the dressed optomechanical coupling is limited by the optomechanical instability to $G_{\rm max} < 0.5 \omega_m$. However, here we consider large values of $|\Delta|$ and the onset of the instability is less restrictive on $G_{\rm max}$. This is potentially beneficial to the enhancement of nonlinear effects, due to the strong dependence of $R\propto (G_{\rm max}/\omega_m)^2$.
\subsection{Physical origin of the enhancement}
We have seen that in a two-mode optomechanical system $C_{\rm eff} \sim (g/\kappa)^{2}$ \cite{Lemonde2013PRL, Lemonde2015PRA, Borkje2013PRL}. Therefore, in Eq.~(\ref{eq:C_eff_optimal}) we can identify $c_1 R$ as the approximate enhancement factor induced by the three-mode setup. As shown in the previous section, a large value of $R$ can be realized for relatively small values of $G_{\rm max}$, due to the typical smallness of $\gamma/\kappa$.
To understand the physical origin of the enhancement, it is useful to first examine the non-monotonic dependence of $C_{\mathrm{eff,2}}$ as function of $\Delta$, shown in Fig.~\ref{fig:C_eff_resonance_total}. As it turns out, the maximum of $C_{\mathrm{eff,2}}$ occurs approximately when the mechanical damping $\gamma$ becomes comparable to the induced optical damping $\kappa_i^{\rm opt}$ [i.e., the first term of Eq.~(\ref{pert_theory_kappa})]. By taking into account the resonant condition, one has: \begin{equation}\label{kappa_reduced_approx} \kappa_1^{\rm opt}\simeq \left(1-\frac{\omega_{m2}}{2\omega_{m1}} \right)\frac{4 \omega_{m1}^2}{\Delta^2}\kappa, \end{equation} while $\kappa_2^{\rm opt}\simeq 2 \kappa_1^{\rm opt}$. Then, imposing $\kappa_i^{\rm opt} \sim \gamma$ we estimate that the maximum of $C_{\mathrm{eff,2}}$ occurs at: \begin{equation}\label{Delta_maximum}
|\Delta^*|\sim \sqrt{\frac{\kappa}{\gamma}\left(1-\frac{\omega_{m2}}{2\omega_{m1}} \right)}\omega_{m1}. \end{equation}
To see that this value of $\Delta$ is reasonable we first assume $|\Delta|\ll |\Delta^*|$, when the mechanical damping can be neglected. Setting $\gamma_{1,2}=0$ in Eq.~(\ref{eq:C_eff_2}) and also neglecting $2\omega_{m1}/|\Delta|$ in the numerator (since $|\Delta| \gg \omega_{m1}$), we obtain a monotonically increasing function: \begin{equation}\label{eq:C_eff_2_small_Delta}
C_{\mathrm{eff,2}}\simeq\frac{9}{16} \left(\frac{g}{\kappa}\right)^2 \left(\frac{|\Delta|}{\omega_{m1} } \right)^3 . \end{equation}
The cubic dependence of $C_{\mathrm{eff,2}}$ on $\Delta$ has the following origin: in Eq.~(\ref{eq:C_eff}), the product $\kappa_1 \kappa_2$ of damping rates contributes an enhancement factor $\sim (|\Delta|/\omega_{m1})^4$ to the effective cooperativity. The occupation grows as $n_1 \sim |\Delta|/(4\omega_{m1})$, which can also be attributed to the improved coherence of the polariton modes [i.e., to the small $\kappa_i$ denominator in Eq.~(\ref{n_i})]. On the other hand, as expected, the effective interaction $g_{211}$ is suppressed as $|\Delta|^{-1}$. The final result is the cubic enhancement factor $(|\Delta|/\omega_{m1})^3$.
The initial growth of $C_{\mathrm{eff,2}}$ with $|\Delta|$ is due to the reduction of the hybridization with the optical modes, which improves the coherence properties of the $i=1,2$ polaritons. This mechanism clearly breaks down when $|\Delta|\gg |\Delta^*|$ and $\kappa_i \simeq \gamma_i$ is a constant. In that regime, the behavior of $C_{\mathrm{eff,2}}$ is dominated by the decreasing interaction strength $g_{211}$ between almost purely phononic polaritons, resulting in the non-monotonic dependence.
A rough estimate of the maximum value of $C_{\mathrm{eff,2}}$ is obtained by evaluating Eq.~(\ref{eq:C_eff_2_small_Delta}) at $\Delta=\Delta^*$: \begin{equation}\label{C_eff_max_estimate} C_{\mathrm{eff,2}} \lesssim \left(\frac{g}{\kappa}\right)^2 \left[\frac{\kappa}{\gamma}\left(1-\frac{\omega_{m2}}{2\omega_{m1}}\right) \right]^{3/2} . \end{equation} One can see from the presence of the factor $(1-\frac{\omega_{m2}}{2\omega_{m1}})^{3/2}$ that, in principle, it is advantageous to decrease the ratio $\omega_{m2}/\omega_{m1}$ away from $\omega_{m2} = 2 \omega_{m1}$. However, as discussed, one should take into account practical limitations on the achievable $G_{1,2}$.
With $G_{1,2}\leq G_{\rm max}$, the maximum allowed value of $|\Delta|$ is given by Eq.~(\ref{Delta_resonant}), where the factor $1-\frac{\omega_{m2}}{2\omega_{m1}}$ appears in the denominator (i.e., the allowed range shrinks by reducing $\omega_{m2}/\omega_{m1}$). An approximate criterion to estimate the optimal $\omega_{m2}/\omega_{m1}$ is to impose that the range of allowed values of $\Delta$ extends roughly up to the maximum in $C_{\mathrm{eff,2}}$. Equating Eqs.~(\ref{Delta_resonant}) and (\ref{Delta_maximum}) yields: \begin{equation}\label{estimate_w12_ratio} 1-\frac{\omega_{m2}}{2\omega_{m1}} \sim \left(\frac{\gamma}{\kappa}\right)^{1/3}\left(\frac{G_{\rm max}}{\omega_{m1}}\right)^{4/3}, \end{equation} and substituting this estimate in Eq.~(\ref{C_eff_max_estimate}): \begin{equation}\label{eq:C_eff_max_estimate} \widetilde{C}_{\mathrm{eff,2}}\sim \left(\frac{g}{\kappa}\right)^2 \frac{\kappa}{\gamma}\left(\frac{G_{\rm max}}{\omega_{m1}}\right)^2 . \end{equation}
The above Eqs.~(\ref{estimate_w12_ratio}) and (\ref{eq:C_eff_max_estimate}) are in agreement with the more precise Eqs.~(\ref{Delta_max_precise}) and~(\ref{eq:C_eff_optimal}), respectively. Through this discussion, we see that the optimal values arise from a competition between the reduction in the effective optical damping at large $|\Delta|$ and the presence of a residual mechanical damping, together with practical restrictions in achieving sufficiently large dressed optomechanical couplings.
\begin{figure}\label{fig:DOS}
\end{figure}
\subsection{Lineshape and lower polariton}
We conclude this section by discussing the qualitative change of lineshape induced by nonlinear effects. Since here the damping rates of the two polaritons are comparable [see, for example, the discussion below Eq.~(\ref{kappa_reduced_approx}), where we obtained $\kappa_2\simeq 2\kappa_1$], the spectral lineshape is not modified qualitatively at small $g$. This behavior is illustrated by the $g=0.01 \kappa$ curves of Fig.~\ref{fig:DOS}(a) and is distinct from what happens in a two-mode system, where a sharp dip can be induced at the higher polariton peak for very small values of $g$ \cite{Lemonde2013PRL, Lemonde2015PRA, Borkje2013PRL}. Therefore, similarly to the four-mode optomechanical ring \cite{Jin2018PRA}, the nonlinear effects could be more easily demonstrated by tuning external parameters like $\Delta$ and $G_i$ across the resonant condition $\omega_2=2\omega_1$. As shown in Fig.~\ref{fig:DOS}(b), this will induce a sharp feature in the dependence of the density of states (or a related observable, e.g., the OMIT signal \cite{Lemonde2015PRA, Jin2018PRA}).
Instead, if the optomechanical coupling can be made larger, one enters a regime where two distinct resonances appear, as illustrated in Fig.~\ref{fig:DOS} by assuming $g_{1,2}=0.1\kappa$. The splittings of the $\omega_1,\omega_2$ polariton peaks are respectively given by: \begin{equation} \delta_1 \simeq 4 g_{211}\sqrt{ n_1-n_2}, \qquad \delta_2 \simeq 4 g_{211} \sqrt{ n_1+1/2}, \label{splittings} \end{equation} and they are resolved when $\delta_{1,2} \gtrsim \kappa_{1,2}$.
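The splittings of Eq.~(\ref{splittings}) and the resolution criterion can be sketched as follows (Python; all numerical values, including the comparison rate $\kappa_{1,2}$, are placeholders).

```python
# Hedged sketch of Eq. (splittings) and the criterion delta_i >~ kappa_i.
# All numerical values are illustrative placeholders.
from math import sqrt

def splittings(g211, n1, n2):
    """Splittings of the omega_1 and omega_2 polariton peaks."""
    d1 = 4 * g211 * sqrt(n1 - n2)
    d2 = 4 * g211 * sqrt(n1 + 0.5)
    return d1, d2

d1, d2 = splittings(g211=5e-4, n1=1.0, n2=0.5)
kappa_12 = 1e-3                              # placeholder polariton damping
resolved = (d1 > kappa_12, d2 > kappa_12)    # both splittings resolved here
```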
Figure~\ref{fig:DOS}(a) also shows that the nonlinear effects at the upper ($\omega_2$) and lower ($\omega_1$) polaritons are comparable. This fact can be checked from our previous analytical expressions: In the regime of negligible $\gamma_{1,2}$ we have $\kappa_2/ \kappa_1 \simeq n_1/n_2 \simeq 2$, leading to $C_{\mathrm{eff,1}} \simeq \frac 2 3 C_{\mathrm{eff,2}}$ [based on Eq.~(\ref{eq:C_eff})]. On the other hand, around the maximum of the effective cooperativity we can estimate $\kappa_2/ \kappa_1 \simeq n_1/n_2 \simeq 3/2$, leading to $C_{\mathrm{eff,1}} \simeq \frac 2 5 C_{\mathrm{eff,2}}$.
\section{Conclusion}\label{Sec: summary}
In this paper, we investigate the nonlinear interaction effects in a three-mode cavity optomechanical system with one cavity mode and two mechanical modes. To take full advantage of the two mechanical modes, we concentrate on a regime where resonant scattering of phonon-like polaritons takes place.
Because of the very small polariton dissipation rates, the nonlinear effects on the cavity density of states and related observables can be greatly enhanced compared to a regular optomechanical system. In the large detuning limit, and considering an upper bound on the largest achievable dressed coupling, we obtain the optimal value of the nonlinear effects. Our analytic expressions for the optimal value indicate that the nonlinear effects can be enhanced by a parameter which is typically large, since it is proportional to the ratio $\kappa/\gamma$.
Although with small single-photon optomechanical couplings $g_{1,2}$ the nonlinear effects only induce a slight modification of the spectral lineshape, it would still be possible to observe sharp features by tuning system parameters across the resonant condition. On the other hand, if a regime of sufficiently large $g_{1,2}$ can be reached, the splittings of the $\omega_1,\omega_2$ polariton peaks become clearly resolved.
\begin{acknowledgments}
S.C. acknowledges support from the National Key Research and Development Program of China (Grant No.~2016YFA0301200), NSAF (Grant No. U1930402), NSFC (Grants No. 11974040 and No. 1171101295), and a Cooperative Program by the Italian Ministry of Foreign Affairs and International Cooperation (No. PGR00960). Y.-D. Wang acknowledges support from NSFC (Grant No. 11947302) and MOST (Grant No. 2017FA0304500). L.J.J. acknowledges support from NSFC (Grant No. 11804020). It is our pleasure to acknowledge helpful discussions with A. A. Clerk.
\end{acknowledgments}
\appendix
\section{Equal mechanical frequencies}\label{appendix_equal_wm}
If the two mechanical resonators have the same frequency, i.e., $\omega_{m1}=\omega_{m2}=\omega_{m}$, the system becomes equivalent to a two-mode optomechanical cavity. This is easily seen by introducing the mechanical dark mode $b_{-}$ and the bright mode $b_{+}$: \begin{align} b_{-} & = \frac{G_{1}b_{2}-G_{2}b_{1}}{\widetilde{G}},\label{eq:dark mode}\\ b_{+} & = \frac{G_{1}b_{1}+G_{2}b_{2}}{\widetilde{G}},\label{eq:bright mode} \end{align} with $\widetilde{G}=\sqrt{G_{1}^{2}+G_{2}^{2}}$. Hence, we rewrite the Hamiltonian as follows: \begin{align} H_{l}+H_{nl} = -\Delta d^{\dagger}d+\omega_{m}b_{+}^{\dagger}b_{+}+\omega_{m}b_{-}^{\dagger}b_{-} \nonumber \\
+\widetilde{G}(d+d^{\dagger})(b_{+}+b_{+}^{\dagger})+ \widetilde{g}d^{\dagger}d(b_{+}+b_{+}^{\dagger}), \end{align} where $\widetilde{g}=\widetilde{G}/\sqrt{N}$. We see that only the bright mode interacts with the cavity and the optomechanical interaction has the standard form.
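The decoupling of the dark mode can be verified numerically: for arbitrary amplitudes, the linear coupling $G_1 b_1 + G_2 b_2$ depends only on $b_+$, and the transformation of Eqs.~(\ref{eq:dark mode})-(\ref{eq:bright mode}) is orthogonal. The Python sketch below checks both properties with placeholder (classical) amplitudes.

```python
# Hedged check of Eqs. (eq:dark mode)-(eq:bright mode): the coupling
# G1*b1 + G2*b2 equals Gtilde*b_plus, so b_minus decouples from the cavity.
# Couplings and amplitudes are illustrative placeholders.
from math import hypot

G1, G2 = 0.3, 0.4
Gt = hypot(G1, G2)                 # Gtilde = sqrt(G1^2 + G2^2)

def modes(b1, b2):
    """Dark (b_minus) and bright (b_plus) mechanical modes."""
    b_minus = (G1 * b2 - G2 * b1) / Gt
    b_plus = (G1 * b1 + G2 * b2) / Gt
    return b_minus, b_plus

b1, b2 = 1.7, -0.9                 # arbitrary placeholder amplitudes
bm, bp = modes(b1, b2)
coupling = G1 * b1 + G2 * b2       # equals Gt*bp, independent of bm
```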
\section{Approximate form of $V$}\label{Appendix_V}
In this Appendix, we present the approximate form of $V$ in the large detuning limit $|\Delta| \gg \omega_{mi},G_i$. We first perform a block-diagonalization of $M$ using quasi-degenerate perturbation theory: \begin{align} \label{M_blocks}
e^{-S}M e^S \simeq \left(\begin{array}{ccc} \omega_{m1}^2 - B_{11}^2& -B_{12}^2& 0\\ -B_{21}^2 & \omega_{m2}^2- B_{22}^2 & 0\\ 0 & 0 & \Delta^2 + B^2_{11} +B^2_{22} \end{array}\right), \end{align} where the $B_{ij}^2$, given by Eq.~(\ref{B_def}), are the second-order correction with respect to the unperturbed matrix $M_{i,j}^{(0)}=\Delta^2 \delta_{3,i}\delta_{3,j}$. For easier reference, we repeat here their expression: \begin{equation}\label{B_def_repeated}
B^2_{ij} = \frac{4}{|\Delta|}G_{i}G_j \sqrt{\omega_{mi}\omega_{mj}}. \end{equation} To lowest order, the transformation matrix $S$ is given by: \begin{align}
S & \simeq \frac{1}{|\Delta|} \left(\begin{array}{ccc} 0 & 0 & B_{11} \\ 0 & 0 & B_{22} \\
-B_{11} & -B_{22} & 0 \\ \end{array}\right). \end{align} The eigenvalues of Eq.~(\ref{M_blocks}) are the normal mode frequencies $\omega_{i}$ presented in Eq.~(\ref{w12}). Diagonalization of Eq.~(\ref{M_blocks}) is simply through a rotation by an angle $\theta$. Finally, combining $e^S$ and the rotation by $\theta$, we find that $M$ is diagonalized by: \begin{equation}\label{eq:transformation matrix_1-1} U \simeq \left(\begin{array}{ccc}
\cos\theta & -\sin\theta & \frac{B_{11}}{|\Delta|}\\
\sin\theta & \cos\theta & \frac{B_{22}}{|\Delta|} \\
-\frac{B_{11}\cos\theta+B_{22}\sin{\theta}}{|\Delta|} & \frac{B_{11}\sin\theta-B_{22}\cos{\theta}}{|\Delta|} & 1 \end{array}\right), \end{equation} where $\tan 2\theta =2 B_{12}^2/(\omega_{m2}^2-\omega_{m1}^2+B_{11}^2-B_{22}^2)$.
The above Eq.~(\ref{eq:transformation matrix_1-1}) can be inserted in Eqs.~(\ref{V_blocks}) and (\ref{Vpm}) to get the desired approximate form of $V$. For example, in the calculation of $\kappa_{i}$ ($i=1,2$) we need the following quantities: \begin{align}\label{Vmatrix_simplified} V_{3,i}^2-V_{3,i+3}^2 &= U_{3,i}^2, \nonumber \\ (V_{1,i}+V_{1,i+3})^2 & = U_{1,i}^2 \frac{\omega_{m1}}{\omega_i},\nonumber \\ (V_{2,i}+V_{2,i+3})^2 &= U_{2,i}^2 \frac{\omega_{m2}}{\omega_i}, \end{align} which are readily obtained from Eq.~(\ref{eq:transformation matrix_1-1}). Furthermore, in the main text we impose the additional restriction $G_i \ll \omega_{m1},\omega_{m2}$. Then, it is not difficult to show that the rotation angle $\theta$ is small. In Eq.~(\ref{Vmatrix_simplified}) we can approximate $U_{1,1}= U_{2,2} =1$ and $U_{1,2}= U_{2,1} = 0$, giving $\kappa_{1,2}$ as in Eq.~(\ref{pert_theory_kappa}). Equation~(\ref{pert_theory_n_g}) for $n_i,g_{211}$ is obtained in a similar way.
\begin{thebibliography}{29} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Aspelmeyer}\ \emph {et~al.}(2014)\citenamefont
{Aspelmeyer}, \citenamefont {Kippenberg},\ and\ \citenamefont
{Marquardt}}]{Aspelmeyer2014RMP}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Aspelmeyer}}, \bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont
{Kippenberg}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Marquardt}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Cavity
optomechanics},}\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages}
{1391} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Teufel}\ \emph
{et~al.}(2011{\natexlab{a}})\citenamefont {Teufel}, \citenamefont {Donner},
\citenamefont {Li}, \citenamefont {Harlow}, \citenamefont {Allman},
\citenamefont {Cicak}, \citenamefont {Sirois}, \citenamefont {Whittaker},
\citenamefont {Lehnert},\ and\ \citenamefont
{Simmonds}}]{teufel2011sideband}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont
{Teufel}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Donner}},
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {J.~W.}\ \bibnamefont {Harlow}}, \bibinfo {author}
{\bibfnamefont {M.~S.}\ \bibnamefont {Allman}}, \bibinfo {author}
{\bibfnamefont {K.}~\bibnamefont {Cicak}}, \bibinfo {author} {\bibfnamefont
{A.~J.}\ \bibnamefont {Sirois}}, \bibinfo {author} {\bibfnamefont {J.~D.}\
\bibnamefont {Whittaker}}, \bibinfo {author} {\bibfnamefont {K.~W.}\
\bibnamefont {Lehnert}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~W.}\
\bibnamefont {Simmonds}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Sideband cooling of micromechanical motion to the quantum ground state},}\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf
{\bibinfo {volume} {475}},\ \bibinfo {pages} {359--363} (\bibinfo {year}
{2011}{\natexlab{a}})}\BibitemShut {NoStop}
\end{thebibliography}
\end{document}
\begin{document}
\title{Epistemic Modality and Coordination under Uncertainty\thanks{This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No 758540) within the project \textit{EXPRESS: From the Expression of Disagreement to New Foundations for Expressivist Semantics}.}}
\date{}
\maketitle
\begin{abstract} Communication facilitates coordination, but coordination might fail if there's too much uncertainty. I discuss a scenario in which vagueness-driven uncertainty undermines the possibility of publicly sharing a belief. I then show that asserting an epistemic modal sentence, `Might $\phi$', can reveal the speaker's uncertainty, and that this may improve the chances of coordination despite the lack of a common epistemic ground. This provides a game-theoretic rationale for epistemic modality. The account draws on a standard relational semantics for epistemic modality, Stalnaker's theory of assertion as informative update, and a Bayesian framework for reasoning under uncertainty. \end{abstract}
\noindent Shiv and Logan want to spend time together over the coming weekend. They prefer to go to the beach if it will be sunny and to a caf\'e if it will be raining, but they will only go to either place if the other goes. Their predicament is the familiar one of a coordination problem \cite{lew69}. In a variant known as the signalling game, Shiv and Logan coordinate by sending a signal, i.e.\ an utterance that reveals information that is initially available to only one of the players \cite{rabin90,RabinFarrell1996,stalnaker2006-cheaptalk,sky10}.
It's relatively well-understood how Shiv and Logan coordinate if the relevant information (it will be sunny, it will be raining) is public. Roughly, $q$ is public information within a group just in case all members of the group believe that $q$, all believe that all believe that $q$, all believe that all believe that all believe that $q$, and so on. Sometimes, however, a belief may fail to be public. For example: Shiv thinks that it will be raining, but she's not very confident, and indeed she expects that, reasonably, Logan thinks that it will be sunny. In this case Shiv does not believe that she and Logan share the belief that it will be raining. The belief fails to be public. Is there still a way for Shiv and Logan to coordinate for the weekend, despite the uncertainty?
In this paper, I provide a rational reconstruction of the use of epistemic modals in a signalling game. I will present a game-theoretic rationale for epistemic possibility talk: revealing one's uncertainty to the interlocutors can improve one's expected utility despite lack of public information. I will employ a general Bayesian framework for reasoning under uncertainty \cite{franke11,fjr12,Goodman2016-GOOPLI-2,Lassiter2017-LASAVI}. The model will show that, in conditions of uncertainty specified below, rational agents can improve their chances of coordination by means of sentences that make an epistemic hedge.
My focus is on sentences of the form `Might $\phi$', where an epistemic possibility operator takes wide scope. Epistemic possibility modal auxiliaries and adverbs, and expressions of close kin (\textit{might, perhaps, maybe, possibly, probably}) are natural resources to employ in case coordination is challenged by the lack of an epistemic common ground. If Shiv uttered (\ref{epist1}) in the story above, for example, she would be naturally understood as suggesting to go to a caf\'e for the weekend---almost as if she simply asserted that it will be raining, while at the same time hedging that assertion. \begin{enumerate}
\item\label{epist1} It might be raining.
\begin{enumerate}
\item\label{e1a} Let's stay in.
\item\label{e1b} Let's go out.
\end{enumerate} \end{enumerate} In her context, it is very natural for Shiv to continue (\ref{epist1}) as in (\ref{e1a}), and much less natural (indeed, scarcely coherent) to continue as in (\ref{e1b}). Therefore, even if the speaker is not in a position to assert outright that it will rain, the interlocutors might still coordinate on going to a caf\'e by means of something less committal than that assertion.
\section{Failures of Coordination}\label{failure} A game consists of a set of players $I$ together with, for each player $i$ in $I$, a set of actions $x_i$ and a utility function $u_i$. In a coordination game between two players, each having two actions, $a$ and $b$, the players' utility functions are summarized in Table \ref{coord}. Shiv (the Column player $S$) prefers to stay in, $a$, if Logan (the Row player $L$) stays in, and prefers to go out, $b$, if Logan goes out. The same holds for Logan.
\begin{table}[htb!]
\centering
\begin{tabular}{l c c c}
& & \multicolumn{2}{c}{$S$} \\ \cline{2-4}
& \multicolumn{1}{|l}{} & $a$ & $b$ \\
\multirow{2}{*}{$L$} & \multicolumn{1}{|l}{$a$} & $1,1$ & $0,0$ \\
& \multicolumn{1}{|l}{$b$} & $0,0$ & $1,1$
\end{tabular}
\caption{Coordination Game}
\label{coord} \end{table}
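The equilibrium structure of this game can be checked mechanically. The following Python sketch (a toy illustration of my own; the dictionary encoding and function names are not part of the paper's formal setup) verifies that the two coordination profiles, and only those, are pure-strategy Nash equilibria of the payoffs in Table \ref{coord}:

```python
# Illustrative sketch: the 2x2 coordination game of Table 1, with a
# brute-force check of its pure-strategy Nash equilibria.

ACTIONS = ["a", "b"]

# payoffs[(row_action, col_action)] = (u_L, u_S), as in Table 1
payoffs = {
    ("a", "a"): (1, 1),
    ("a", "b"): (0, 0),
    ("b", "a"): (0, 0),
    ("b", "b"): (1, 1),
}

def is_nash(row, col):
    """A profile is a Nash equilibrium iff neither player gains
    by unilaterally switching actions."""
    u_row, u_col = payoffs[(row, col)]
    row_ok = all(payoffs[(r, col)][0] <= u_row for r in ACTIONS)
    col_ok = all(payoffs[(row, c)][1] <= u_col for c in ACTIONS)
    return row_ok and col_ok

equilibria = [(r, c) for r in ACTIONS for c in ACTIONS if is_nash(r, c)]
print(equilibria)  # [('a', 'a'), ('b', 'b')]
```

The multiplicity of equilibria is the point: the payoffs alone do not tell the players which of the two profiles to play, which is why a signal is needed.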
This particular assignment of preferences makes the agents \textit{indifferent} as to whether they stay in or go out so long as each does what the other does. How do they coordinate? Either could commit to an action, and then inform the other about their commitment. But if they are indeed indifferent, they might as well toss a (fair) coin, and resolve to stay in if and only if it lands heads. Suppose furthermore that only one of them, $S$, can see the coin: she will then report to $L$ the outcome of the coin toss by sending a signal (`It's tails!'). Coordination is then easily achieved \cite{rabin90,stalnaker2006-cheaptalk}.
Shiv and Logan can coordinate by ``pivoting'' on the weekend being rainy, rather than the coin landing tails, as well as on any other proposition. Let $q$ and $\bar{q}$ be two mutually exclusive propositions. The agents mutually know that they prefer $a$ if $q$ and $b$ if $\bar{q}$. If Shiv believes that $q$, she may signal so, and thereby share her belief with Logan. Neither Shiv nor Logan has reason to deceive the other, since either receives a positive payoff just in case the other does (as shown in Table \ref{coord}). Therefore, once a belief is shared by signalling, it typically becomes public: both believe it, both believe that both believe it, and so on. In a coordination game, if $q$ is public between Shiv and Logan, the rational (i.e.\ utility-maximizing) choice for both is $a$.\footnote{This is a version of the traditional signalling game described by David Lewis \cite{lew69}, with a couple of qualifications. (i) Lewis talked about common knowledge, rather than common belief, but the stronger condition doesn't add much at this stage: people can coordinate on something false, so long as enough people believe it. (ii) Lewis worked with the notion of a Nash equilibrium, but the idea of solving the game by reference to an event with an independent prior probability (the coin lands heads, the weekend will be rainy) leads in fact to a generalization known as correlated equilibrium, with which I work in the current paper \cite{aumann1987,Vanderschraaf1995-VANCAC-4}. Games of this kind have played an important role in our understanding of language \cite{sky10}.}
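The coin-toss solution can likewise be sketched numerically. In the toy Python snippet below (illustrative names and numbers of my own, assuming a fair coin and the payoffs of Table \ref{coord}), following the signal is strictly better than ignoring it, provided the other player follows it too:

```python
# Illustrative sketch: the coin toss as a correlating device.
# Both players (via the signal) condition on a fair coin and play
# a on heads, b on tails.

P_HEADS = 0.5

def u(my_action, other_action):
    # coordination payoffs from Table 1
    return 1 if my_action == other_action else 0

def follow(outcome):
    return "a" if outcome == "heads" else "b"

def expected_utility(strategy):
    """Expected payoff against a partner who follows the signal."""
    return sum(p * u(strategy(o), follow(o))
               for o, p in [("heads", P_HEADS), ("tails", 1 - P_HEADS)])

eu_follow = expected_utility(follow)           # follow the signal
eu_always_a = expected_utility(lambda o: "a")  # ignore it, always play a
print(eu_follow, eu_always_a)  # 1.0 0.5
```

No unilateral deviation from the signal improves a player's expected payoff, which is what makes conditioning on the publicly reported coin a (correlated) equilibrium.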
The crucial idea is that signalling turns a belief into public belief. Sometimes, however, beliefs fail to be public. This may be for a number of reasons. In more accidental cases, people are distracted, uncurious, or unintelligent. The impasse here is ``solved'' by a sleight of idealization. Let's assume that Shiv and Logan are Bayesian agents who do not suffer from such accidental shortcomings of rationality. Their credences are mathematically coherent, and they update by Bayes' rule. Still, there are complex cases of failure of public belief known to the literature.
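The Bayesian updating assumed of idealized agents can be made concrete. The sketch below (with arbitrary illustrative numbers, not taken from the paper) computes a posterior credence by Bayes' rule:

```python
def bayes_update(prior, likelihood_given_q, likelihood_given_not_q):
    """Posterior credence in q after observing evidence e,
    by Bayes' rule: p(q|e) = p(e|q) p(q) / p(e)."""
    p_e = likelihood_given_q * prior + likelihood_given_not_q * (1 - prior)
    return likelihood_given_q * prior / p_e

# e.g. a 0.5 prior that it will rain, with evidence twice as likely
# under rain as under sun, yields a posterior of 2/3
posterior = bayes_update(0.5, 0.8, 0.4)
print(posterior)  # ~0.667
```

Coherence (credences forming a probability distribution) plus conditionalization of this kind is all the idealization requires; it rules out the accidental failures above without ruling out the structural ones discussed next.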
In the following scenario, Shiv and Logan are planning for dinner. However, they are trying to coordinate on something vague. The model of vagueness below is discussed in \cite{DeJaegher2003-KRIAGR}, and inspired by the well-known case of the two generals failing to coordinate an attack \cite{Fagin1995-FAGRAK-2,moseshalpern1986}. \begin{quote}
\textbf{\textit{Vagueness}}. Shiv and Logan just moved to a new town. They have been told about a great restaurant. If the restaurant is close, they would like to go there for dinner, but since they are quite tired after a day of moving and unpacking, they prefer to eat in if the restaurant is far. They can either go out or stay in, but if either goes while the other doesn't, both will eat alone and be miserable. \end{quote} In Vagueness, Shiv and Logan are facing a Lewisian coordination problem. Sometimes, however, a restaurant is neither definitely close, nor definitely far. There is no sharp boundary between close and far, and, in the ``borderline area'', each may think that the restaurant is close while the other thinks that it's far. They will coordinate only if a particular belief is public, but in borderline cases, neither believes that they believe the same thing. Vagueness undermines the possibility of sharing a belief in public.
It would be natural for Shiv and Logan to be uncertain about what to do, in their situation. Vagueness has often been linked to uncertainty \cite{Edgington1992-EDGVUA,MacFarlane2016-MACIVA-4}. A common way to describe uncertainty is in terms of degrees of confidence. Let's say that an agent $i$ \textit{thinks} that $q$ just in case $i$ has some positive degree of confidence that $q$ is the case. Then $i$ thinks that $q$ just in case $p(q_i)>1-p(q_i)$, i.e.\ $i$ expects that $q$ is more likely than not. In the borderline area, Shiv may think that the restaurant is close, although her confidence remains low: indeed, below a relevant threshold. Above the threshold, Shiv thinks that the restaurant is definitely close, or, as I shall say, she \textit{believes} that it is close.
What is a confidence threshold? In ordinary life, many factors contribute to an agent's confidence level. In the context of the game, a qualitative characterization helps: an agent believes that $q$ just in case she thinks that $q$, and thinks that others think that $q$ as well. That is, one believes that $q$ just in case one is confident enough that $q$ is the case to think that others think that $q$ as well. And so someone who thinks that the restaurant is close is uncertain so long as she has a reasonable expectation that others are not of the same opinion.\footnote{Of course, this is not an analysis of \textit{think} and \textit{believe}, but a stipulation for describing a probability distribution over propositions. The stipulation plausibly fits with at least some informal uses of \textit{think} and \textit{believe}. The terminology, however, may sound misleading. Does belief imply certainty? Not if \textit{certainty} means `lack of Cartesian doubt', but it might if it means `having high-ish confidence about what others think'. I mean the latter. Likewise, there is a sense in which one could be confident that something is the case while believing that others disagree: but this just shows that there are other characterizations of what confidence thresholds are, besides what I offered. That's fine. It's worth keeping in mind that Shiv and Logan are supposedly rational epistemic peers: if they think that $q$ expecting that someone else does not think that $q$, who is equally rational and in the same epistemic position, then they should be less confident about their judgement. I assume that they are. That's the relevant sense of \textit{confidence}.}
By assuming that confidence thresholds and shared attitudes line up, the failures of coordination in Vagueness are failures to share a belief about what others think. In other words, coordination fails because a belief isn't public. To make this point precise, let there be at least three discrete states $w_1, w_2, w_3$ in the agents' environment. In $w_1$ the restaurant is close ($q$), in $w_3$ it is far ($\bar{q}$), and in $w_2$ it is neither close nor far. As far as the agents know, prior to the interaction, any of these worlds might be theirs.
Crucially, the agents' doxastic states are not aligned in the borderline area. For concreteness, let's assume that Shiv does not distinguish $w_1$ and $w_2$, in which the restaurant looks close to her, and Logan does not distinguish $w_2$ and $w_3$, in which the restaurant looks far to him. (The converse possibility is analogous, and omitted.) Thus, the agents partition the logical space differently. A partition is a set of jointly exhaustive and mutually exclusive subsets of the logical space $W$. Each agent $i$ has their own partition $\Pi_i$, which represents how they distinguish possibilities. Let $\pi_i(w)\in\Pi_i$ be the cell of $\Pi_i$ to which $w$ belongs. An agent $i$ `fails to distinguish' $w$ from any $w'$ that belongs to $\pi_i(w)$. I will say that $i$ thinks that $q$ at $w$ just in case $\pi_i(w)\subseteq q$. Figure \ref{gamesignals} represents the agents' doxastic states with respect to the three possibilities that are salient to them at the beginning of the interaction. Furthermore, I assume that Figure \ref{gamesignals} is how the agents understand their own beliefs as well as those of the others.
\begin{figure}
\caption{Signalling game under uncertainty}
\label{gamesignals}
\end{figure}
Partitions in Figure \ref{gamesignals} are generated as follows. Let's consider a Sorites series \texttt{S} of states $t_1,\ldots,t_n$. The restaurant is definitely close in $t_1$ (it is downstairs), it is definitely far in $t_n$ (it is two time zones away), and the remaining states form a linearly ordered progression from near to far. Shiv and Logan go through a `forced march' \cite{Horgan1994-HORRVA}. For all states in \texttt{S}, they judge whether the restaurant is close, $q$, or far, $\bar{q}$. The judgment has to be made even if the agents hesitate. Both Shiv and Logan think that $q$ in $t_1$ since the restaurant is definitely close. For some $x$ between 1 and $n$, Shiv will presumably flip and think that $\bar{q}$ in $t_x$. For some $y$ between 1 and $n$, Logan will flip too. There comes a point during the forced march at which they both judge `It's far', not necessarily the same point. Thus, both Shiv and Logan think that $q$ in all $t<\textnormal{min}(t_x,t_y)$, and both think that $\bar{q}$ in all $t>\textnormal{max}(t_x,t_y)$. Therefore, we may pool \texttt{S} into three ``uber'' states $w_1,w_2,w_3$ without loss of generality, and continue working with uber states (or worlds). \begin{align*}
w_1 &=\{t\in\texttt{S}:t<\textnormal{min}(t_x,t_y)\}\\
w_2 &=\{t\in\texttt{S}:\textnormal{min}(t_x,t_y)\leq t\leq\textnormal{max}(t_x,t_y)\}\\
w_3 &=\{t\in\texttt{S}:\textnormal{max}(t_x,t_y)<t\} \end{align*} The picture could be complicated by adding more players, each drawing the line between $q$ and $\bar{q}$ at a different point. It would still be possible to pool all states in \texttt{S} into those in which $q$ is true for every player, those in which $\bar{q}$ is true for every player, and the rest. Thus, there is a supervaluational description of the Vagueness game environment, in which players take the place of `precisifications' \cite{DeJaegher2003-KRIAGR}.
Finally, let's suppose that Shiv and Logan have two signals, $\phi$ and $\neg\phi$: `The restaurant is close' and `The restaurant is far'. For the semantics, let $\phi$ be true at $w_1$ and false at $w_3$, so $\neg\phi$ is false at $w_1$ and true at $w_3$. I prefer to remain neutral on further details of the semantics of vague terms, such as `close' and `far'. In order to be more specific, one could say that $\phi$ and $\neg\phi$ are truth-valueless at $w_2$. Alternatively, one could keep classical logic, following \cite{Williamson1994-WILV}. My discussion does not depend on the logic of vague terms.
Sending signals reveals to the signal receiver what the signal sender thinks, and this is how coordination is ordinarily reached, if it is. For if they both think the same then, through signalling, they both come to believe that they both think the same. But uncertainty can undermine coordination. Suppose that the agents know that their interaction is as depicted in Figure \ref{gamesignals}. Suppose first that Shiv thinks that the restaurant is far. Then she sends $\neg\phi$. Upon receiving $\neg\phi$, Logan thinks that Shiv thinks that the restaurant is far, for Shiv has no reason to deceive Logan. Therefore, a signal $\neg\phi$ is how Logan can distinguish a possibility in which Logan thinks that the restaurant is far and Shiv does too, from one in which he thinks that the restaurant is far but Shiv doesn't. But Logan thinks that the restaurant is far in any circumstance in which Shiv does, hence if $\neg\phi$ is sent, Shiv and Logan believe that it's far. Moreover, since they can reason to this point, they believe that they believe that it's far, and so on. Therefore, if $\neg\phi$ is sent, the belief that the restaurant is far becomes public, and they coordinate on staying in.
If instead Shiv thinks that the restaurant is close, she sends $\phi$, but public belief might fail. For Shiv fails to distinguish a possibility in which both think that the restaurant is close from another possibility, $w_2$, in which she thinks that the restaurant is close but Logan doesn't. These cases can be interpreted as those in which Shiv wrongly guesses what Logan thinks. For Shiv's signal $\phi$ (`The restaurant is close!') would come across to Logan as an error of judgement: Logan would think that the restaurant is far, and realize by Shiv's signal that she thinks that it's close. Since Shiv is aware of this, if she thinks that the restaurant is close, she may not think that Logan thinks it is.
Coordination equilibria still exist, although within the limits given by the agents' uncertainty. Let $\gamma$ be the agents' shared prior concerning the chance that they give different judgments as to whether the restaurant is close or far, and let $\delta$ be a real number between 0 and 1. The relevant possibilities can then be evaluated as follows. $$p(w_1)=\delta(1-\gamma) ~ \quad ~ p(w_2)=\gamma ~\quad ~ p(w_3)=(1-\delta)(1-\gamma)$$ That is, with chance $\gamma$ the agents guess that they don't think what the other thinks, i.e., they are in $w_2$, and they assign complementary probabilities to the rest of the cases by splitting them over $\delta$ and $1-\delta$.\footnote{Probabilities are only assigned to sets of possible worlds, in order to uniformly represent an agent's credal state. Therefore, $p(w)$ is strictly speaking a function from the singleton $\{w\}$ to a real number, not a function of a world.} Under the plausible assumption that an agent $i$ chooses $a$ only if $i$ thinks $q$, we can calculate the expected utility of $a$ for $i$. Let $u_i$ be $i$'s utility function, and $j$ be $i$'s opponent. $$eu_i(a)=p(q_i~\&~q_j)\cdot u_i(a,a)+p(q_i~\&~\bar{q}_j)\cdot u_i(a,b)$$ By Table \ref{coord}, $u_i(a,b)=0$ for both agents, hence by Bayes' rule,
$$eu_i(a)=p(q_i)\cdot p(q_j|q_i)\cdot u_i(a,a)$$
Consider $S$ first. The probability $p(q_S)$ that $S$ thinks that $q$ is $\delta(1-\gamma)+\gamma$, and the conditional probability $p(q_L|q_S)$ that $L$ thinks that $q$ while $S$ thinks that $q$ is just the proportion of cases in which $L$ thinks $q$ out of those in which $S$ does: $\frac{\delta(1-\gamma)}{\delta(1-\gamma)+\gamma}$. Hence $eu_S(a)=\delta(1-\gamma)$.
Consider $L$. The probability that $L$ thinks that $q$ is $p(w_1)=\delta(1-\gamma)$, and the probability that $S$ thinks that $q$ given that $L$ thinks that $q$ is just $1$, for all cases in which $L$ thinks that $q$ are cases in which $S$ does too. It follows that $eu_L(a)=\delta(1-\gamma)=eu_S(a)$. Parallel reasoning shows that $eu_S(b)=eu_L(b)=(1-\delta)(1-\gamma)$. Consequently, the coordination equilibrium $(a,a)$ obtains only if $eu_S(a)>eu_S(b)$ and $eu_L(a)>eu_L(b)$, hence only if $\delta>1-\delta$. Similarly, the $(b,b)$ equilibrium obtains only if $1-\delta>\delta$.
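As a sanity check, the expected utilities above can be recomputed directly from the world probabilities. The following Python sketch uses illustrative values for $\gamma$ and $\delta$ (my choice, not fixed by the text) and normalizes the payoff $u_i(a,a)$ to 1.

```python
# Sanity check of the expected-utility derivation, in Python.
# gamma, delta are illustrative values (not fixed by the text);
# the payoff u_i(a, a) is normalized to 1.
gamma, delta = 0.2, 0.6

# Prior probabilities of the three worlds, as defined in the text.
p_w1 = delta * (1 - gamma)        # both think q
p_w2 = gamma                      # S thinks q, L thinks q-bar
p_w3 = (1 - delta) * (1 - gamma)  # both think q-bar
assert abs(p_w1 + p_w2 + p_w3 - 1) < 1e-12

# S thinks q in w1 and w2; L thinks q only in w1.
p_qS = p_w1 + p_w2
p_qL_given_qS = p_w1 / p_qS  # proportion of S's q-cases in which L agrees

eu_S_a = p_qS * p_qL_given_qS     # = delta * (1 - gamma)
eu_L_a = p_w1 * 1.0               # p(q_S | q_L) = 1

assert abs(eu_S_a - delta * (1 - gamma)) < 1e-12
assert abs(eu_S_a - eu_L_a) < 1e-12
```

The cancellation in $eu_S(a)=p(q_S)\cdot p(q_L|q_S)$ is what makes both agents' expected utilities coincide at $\delta(1-\gamma)$.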
I have assumed in the preceding paragraph that $\gamma\not=1$. This seems reasonable, since $\gamma$ represents the chance that an agent doesn't think as the other does. In other words, so long as doxastic misalignment isn't inevitable, coordination equilibria exist under the conditions just derived. While reasonable, this conclusion is not very strong. For even if the agents are rational, and know by the proof above that coordination equilibria exist, it doesn't follow that they will coordinate. The uncertainty may still be too great for them to take action.
An upper bound on $\gamma$ would help. Earlier I assumed that a necessary condition for $i$ to choose $a$ is that $i$ thinks that $q$. It seems plausible to say that a sufficient condition for $i$ to choose $a$ is that both think that $q$, i.e., that both consider $q$ more likely than not. That is, $a$ is a best response for both if $p(q_i~\&~q_j)>1/2$. Then $a$ is played if $\delta(1-\gamma)>1/2$, i.e., if \begin{align*}
\tag{Confidence Threshold for $a$} & \gamma<1-\frac{1}{2\delta} \end{align*} By similar reasoning, $b$ is played if \begin{align*}
\tag{Confidence Threshold for $b$} & \gamma<1-\frac{1}{2(1-\delta)} \end{align*} These inequalities are the confidence thresholds for coordination on $(a,a)$ and $(b,b)$ respectively. They are derived assuming that $\delta$ is neither 0 nor 1, but the generality of the conclusion is not lessened. Such values would trivialize the interaction, for $\delta=0$ would mean that the agents do not consider world $w_1$ a genuine possibility, and on the other hand $\delta=1$ would mean that $w_3$ is not a genuine possibility, given how $p(w_1)$ and $p(w_3)$ were defined above.
The value of $1/2$ as tipping point for action has been chosen somewhat arbitrarily, but the conclusion is representative of a general point. Two conditions characterize the existence of a coordination equilibrium in conditions of uncertainty. For the $(a,a)$ outcome, it must be that $\delta>1-\delta$ and $\gamma<1-\frac{1}{2\delta}$; for $(b,b)$, that $1-\delta>\delta$ and $\gamma<1-\frac{1}{2(1-\delta)}$. Uncertainty undermines the agents' confidence that something is the case, so that a belief fails to be public. Nevertheless, if the chance $\gamma$ that their thinking differently is not too high, coordination may still obtain. The inequalities CT$a$ and CT$b$ specify what ``not too high'' means.
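The algebra behind the two thresholds is elementary but easy to check mechanically. A minimal Python sketch follows; the grid values are my own, chosen away from the exact boundaries to avoid floating-point edge cases.

```python
# Check that the raw condition for playing a, delta*(1-gamma) > 1/2,
# is equivalent to the threshold form gamma < 1 - 1/(2*delta),
# and similarly for b. Grid values are illustrative.
def plays_a(delta, gamma):
    return delta * (1 - gamma) > 0.5

def plays_a_threshold(delta, gamma):
    return gamma < 1 - 1 / (2 * delta)

def plays_b(delta, gamma):
    return (1 - delta) * (1 - gamma) > 0.5

def plays_b_threshold(delta, gamma):
    return gamma < 1 - 1 / (2 * (1 - delta))

for d in (0.15, 0.35, 0.55, 0.75, 0.95):
    for g in (0.01, 0.05, 0.1, 0.2, 0.4):
        assert plays_a(d, g) == plays_a_threshold(d, g)
        assert plays_b(d, g) == plays_b_threshold(d, g)
```

Note that for $\delta\leq 1/2$ the threshold $1-\frac{1}{2\delta}$ is non-positive, so $(a,a)$ is never played, matching the requirement $\delta>1-\delta$.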
\section{Coordination in Times of Uncertainty}\label{assertion} Confronted with a failure to coordinate beliefs, rational agents could change their mind, of course. However, revising judgements doesn't eliminate vagueness: Shiv and Logan would simply go one step further in the forced march. This may be as good as it gets, if the agents' common language is indeed limited to $\phi$ and $\neg\phi$. Shiv and Logan will then have to learn to live with the occasional failures of coordination. On the other hand, if the agents' language includes epistemic vocabulary, then they could make their uncertainty manifest, and this potentially matters for their attempt to coordinate. To characterize this idea, I will begin with a standard relational semantics for \textit{might}.
The idea that more is communicated in conversation than the semantic content of what the interlocutors say goes back to H. P. Grice \cite{Grice1975-GRILA-2}. Gricean reasoning has a strategic nature, and an appreciation of this point has led to a more systematic game-theoretic understanding of it \cite{clark1996,franke11, fjr12,djvr14,benz2018,Parikh2019-PARCAC-14}. Furthermore, recent work has emphasized the connection between Gricean reasoning and more general Bayesian models of inference under uncertainty that have wide applications in the study of human cognition \cite{Goodman2013-GOOKAI,Goodman2016-GOOPLI-2,Lassiter2017-LASAVI}. The result is a framework for probabilistic inference and back-and-forth reasoning whose outline I will follow in the next sections.
Suppose that, besides the two signals $\phi$ and $\neg\phi$, the agents' language includes epistemic vocabulary. They can utter sentences such as `The restaurant might be close' and `The restaurant might be far', namely $\Diamond\phi$ and $\Diamond\neg\phi$, respectively. The general idea is that a sentence $\Diamond\phi$ is true just in case $\phi$ is compatible with the information some agent has. Roughly, such information is an agent's evidence, or doxastic mental state. For simplicity we may take the relevant agents to be the participants in the game, though of course this would be implausible for the purposes of natural language semantics. If so, then $\Diamond\phi$ is true at a world $w$ in the game model of Figure \ref{gamesignals} just in case there is some agent $i$ who thinks that $\phi$ in $w$. By this light, in $w_2$ it is true to say, `It might be that $\phi$ and it might be that $\neg\phi$'. This seems the right thing to say when one is uncertain, as Shiv and Logan are in $w_2$.
More formally, we define a semantic model over the game of Figure \ref{gamesignals}. The model $(I,W,\Pi,\llbracket\cdot\rrbracket^{c,g})$ includes a set $I$ of players, a set $W$ of worlds, an interpretation function $\llbracket\cdot\rrbracket^{c,g}$ relative to the context and a variable assignment (superscripts henceforth omitted), and a set of partitions $\Pi=\{\Pi_i:i\in I\}$ of $W$, one partition for each player. A rough but standard Kratzerian semantics for $\Diamond$ can be given in terms of $\Pi$ \cite{kra12,vonFintel2011-VONMMR}. \begin{align*}
\llbracket \Diamond\phi\rrbracket &=\lambda w.\exists i\in I.\exists w'\in \pi_i(w): w'\in \llbracket\phi\rrbracket \end{align*} Intuitively, $\Diamond\phi$ is true at $w$ iff there is a $w'$ accessible from $w$ such that $\phi$ is true at $w'$. A world is accessible from another just in case they belong to the same cell of some agent's partition. Thus, epistemically accessible worlds are those that some agent finds indistinguishable on the basis of their doxastic perspective prior to communication. It is straightforward to check that accessibility, thus defined in terms of doxastic partitions, is reflexive, symmetric, and not transitive.
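The truth clause can be model-checked directly on the three-world game. The Python sketch below encodes the partitions of Figure \ref{gamesignals}; the encoding (names, set representation) is mine, and I take the extension of $\phi$ to be $\{w_1\}$, i.e., $\phi$ is treated as not holding at the borderline $w_2$, consistently with the neutrality of the text on the borderline case.

```python
# Model-checking the clause for Diamond-phi on the three-world game.
# Encoding is illustrative; phi is true at w1, false at w3, and
# here taken not to hold at the borderline world w2.
partitions = {
    "Shiv":  [{"w1", "w2"}, {"w3"}],   # Shiv can't tell w1 from w2
    "Logan": [{"w1"}, {"w2", "w3"}],   # Logan can't tell w2 from w3
}
phi = {"w1"}

def cell(agent, w):
    """pi_i(w): the cell of agent i's partition that contains w."""
    return next(c for c in partitions[agent] if w in c)

def might(prop, w):
    """Diamond-prop holds at w iff some agent's cell at w meets prop."""
    return any(cell(i, w) & prop for i in partitions)

# Matches the text: 'might phi' is true at w1 and w2 but not at w3.
assert might(phi, "w1") and might(phi, "w2") and not might(phi, "w3")
```

In particular, at $w_2$ both `might $\phi$' and `might $\neg\phi$' come out true, which is the intended verdict for the borderline case.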
In order to calculate the pragmatic effects of manifesting uncertainty by asserting that $\Diamond\phi$, let's refer to the set $\{w_1,w_2,w_3\}$ as Shiv and Logan's \textit{common ground} at time 0, $cg(0)$: the worlds the interlocutors jointly consider to be possible, at the beginning of their interaction. Following Stalnaker \cite{sta78,sta99,sta02}, conversation is a cooperative enterprise whereby interlocutors narrow down the common ground. The task for the listener is to figure out which world is actual, given what the speaker said. A simple hypothesis is that worlds in the common ground, at any time, have equal chances of being actual. On the basis of this hypothesis, base-rate probabilities may be easily calculated for any time $t$.
$$\textnormal{For all times $t$ and for all $w$ in }cg(t): p(w)=\frac{1}{|cg(t)|}$$ Therefore, once the agents narrow down the common ground to $\{w\}$, the probability that $w$ is the actual world is 1. In $cg(0)$, we have $p(w_1)=p(w_2)=p(w_3)=1/3$.
Suppose for illustration that Shiv thinks the restaurant is close, but can't tell if Logan thinks so as well. For all she knows (we could say, semantically ascending), the actual world is $w_1$ or $w_2$: after all, she cannot distinguish these two possibilities. In both $w_1$ and $w_2$, $\Diamond\phi$ is true, as for both worlds there is an agent, namely Shiv, who thinks that the restaurant is close at those worlds. Therefore (semantically descending), Shiv believes that $\Diamond\phi$ is true. Thus, she asserts so. An assertion is a proposal to update the common ground, by eliminating possibilities that are incompatible with the semantic content of the assertion \cite{sta78}. By the standard semantics I assumed above, and the Stalnakerian dynamics of assertion, an assertion of $\Diamond\phi$ rules out $w_3$, which is incompatible with the truth of the assertion. After Shiv's assertion that $\Diamond\phi$, the possibility that the restaurant is definitely far is no longer relevant for either Shiv or Logan.
Logan may now reason that if Shiv had meant to suggest that $w_1$ is the actual world, she would have sent $\phi$ (`The restaurant is close!') right away, for $w_1$ is the world in which both think that the restaurant is close. But she didn't send $\phi$: she wasn't confident enough for that. So, she doesn't think that the restaurant is definitely close. Since $w_2$ is the only other possibility left, $w_2$ must be the actual world according to the speaker. Thus the agents become aware of a distinction between confidence levels by using epistemic language.
This reasoning can be formalized in a Bayesian framework. At time $1$, after the update, the listener $L$ has equal priors for the worlds in $cg(1)$, i.e.\ $p(w_1)=p(w_2)=1/2$. Moreover, $L$ expects $S$ to be truthful. Since a truthful speaker could send only $\phi$ and $\Diamond\phi$ in $cg(1)$, $L$ holds even priors for the events that these signals are sent, i.e.\ $p(\phi)=p(\Diamond\phi)=1/2$. Finally, $L$ expects $S$ to send $\phi$ in $w_1$, not in $w_2$. For an assertion that $\phi$ reveals the speaker's belief that the restaurant is close, but the speaker believes that the restaurant is close only in $w_1$. Therefore, $L$'s conditional probability for the event that $\phi$ is sent given that $w_1$ is the actual world is at least nearly 1. Conversely for $\Diamond\phi$. \begin{align*}
p(\phi|w_1)\approx 1&\qquad p(\Diamond\phi|w_1)\approx 0\\
p(\phi|w_2)\approx 0&\qquad p(\Diamond\phi|w_2)\approx 1 \end{align*} The last step is for $L$ to update by Bayes' rule. The posterior probability that a world is actual is calculated by the listener by conditionalizing on the evidence, namely the observation that $\Diamond\phi$ was sent.
$$p'(w_2)=\frac{p(\Diamond\phi|w_2)\cdot p(w_2)}{p(\Diamond\phi)}\approx\frac{1\cdot 1/2}{1/2}\approx 1$$ From the observation that `The restaurant might be close' was uttered, with the semantics it has, and given what else could have been uttered, the listener draws a conclusion about the speaker's confidence level: in the actual world the restaurant is neither definitely close nor definitely far, and in particular the speaker thinks that it's close but she is not confident. The listener's inference is a defeasible one, and not a semantic entailment. Like ordinary pragmatic reasoning, its conclusion is not packaged in the semantic content of the sentence that was uttered by the speaker. Thus, the sentence `The restaurant might be close' is not about the speaker's credal state (or anybody else's, for that matter). Yet it supports a Bayesian inference to a conclusion about the speaker's credences.
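The posterior computation can be replayed as a generic Bayes-rule update. A minimal Python sketch, replacing the near-1/near-0 likelihoods with exact 1 and 0 for readability:

```python
# The listener's update after observing the signal 'might phi',
# over the common ground cg(1) = {w1, w2}. Likelihoods are
# idealized to exact 0/1, as in the approximate table above.
priors = {"w1": 0.5, "w2": 0.5}
likelihood = {                       # p(signal | world)
    ("phi", "w1"): 1.0, ("might_phi", "w1"): 0.0,
    ("phi", "w2"): 0.0, ("might_phi", "w2"): 1.0,
}

def posterior(signal):
    """Conditionalize the priors on the observed signal."""
    evidence = sum(likelihood[signal, w] * priors[w] for w in priors)
    return {w: likelihood[signal, w] * priors[w] / evidence for w in priors}

post = posterior("might_phi")
assert post == {"w1": 0.0, "w2": 1.0}
```

The same function run on the plain signal $\phi$ puts all posterior mass on $w_1$, which is the contrast driving the listener's inference.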
\section{Strategic Hedging}
How do rational agents react to someone's assertion that the restaurant might be close? Earlier I assumed that an agent goes out if they think that the restaurant is close, and stays in if they think that it's far. In $w_2$ the restaurant is neither close nor far, and Shiv thinks that it's close while Logan thinks that it's far. Consequently, they don't coordinate. The assumption is rather crude, however, for we might want to say that uncertainty comes with indecision \cite{MacFarlane2016-MACIVA-4}.
If Shiv says `The restaurant might be close', she signals her uncertainty to Logan, who infers it as a good Bayesian. Moreover, Shiv might expect this inference to be of some consequence. Logan would have to take Shiv's uncertainty into account. At the very least, Shiv might expect that Logan hesitates before taking action, once `Might $\phi$' is asserted. I will assume that she does. I will now show that, as a consequence, it's reasonable for Logan to go out, when he is told that the restaurant might be close, even if he thinks that it's far. That is, the chances of coordination improve despite failure of public belief.
Shiv's expectation about Logan's reaction to her utterance kick-starts an expectation-building process. For she will have higher-order expectations about what her reaction will be to what she expects Logan's reaction to her utterance to be, and so on. The result is essentially an instance of iterated reasoning between speaker and listener. Taking notice of each other, the interlocutors adjust their propensity to act.\footnote{There are two slightly different frameworks one could use to reconstruct this process: \textit{iterated best response} models of pragmatic reasoning \cite{franke11}, and \textit{rational speech act} models \cite{frank2017}. The discussion in this section is inspired mainly by the latter, but could be carried out in the former setting with some adjustments.}
Shiv and Logan's reasoning about each others' actions takes place under the assumption that their mental states are incompatible. In other words, we are in $w_2$, and the agents correctly inferred this by Bayesian reasoning as above. Since the agents don't change their mind concerning the restaurant's location as they go through the expectation-building process below, probabilities are normalized at each step. Thus, their mental states remain incompatible throughout, insofar as beliefs can be surmised by dispositions to act. Nevertheless, we will see that Shiv and Logan's expected utilities increase. I indicate with $p_i(x)$ the probability that agent $i$ performs action $x$, and break down the reasoning in several steps.\footnote{More precisely, $p_i(x)$ is short for $p_i(x|q_S~\&~\bar{q}_L)$: the conditional probability that $i$ does $x$ given that $S$ thinks that $q$ and $L$ thinks that $\bar{q}$. We are holding fixed that we are in $w_2$, in which the condition $q_S~\&~\bar{q}_L$ holds.} \begin{itemize}
\item[] \textbf{Step 0}: prior to the use of epistemic modals. $S$ thinks that $q$, so she chooses $a$. This fixes the speaker's prior, which is $p_S$ at step 0. At the same time, $L$ thinks that $\bar{q}$, so he chooses $b$. Coordination at this stage inevitably fails. $$p^0_S(a)=1\qquad p^0_S(b)=0$$ $$p^0_L(a)=0\qquad p^0_L(b)=1$$
\item[] \textbf{Step 1}: using epistemic modals. $S$ signals $\Diamond\phi$ and expects that $L$ hesitates. $L$'s expected hesitation is a matter of randomly choosing $a$ or $b$. This fixes the listener's prior, which is $p_L$ at step 1.
$$p^1_L(a)=p^1_L(b)=0.5$$
\item[] \textbf{Step 2}: expectation-building. $S$ reflects on her action in response to the listener's prior. Each next step from now on is obtained by normalizing an agent's prior with the other player's.
$$p^1_S(a)=\frac{p^0_S(a)}{\Sigma_{i\in I}p_i(a)}=\frac{p^0_S(a)}{p^0_S(a)+p^1_L(a)}=\frac{1}{1+0.5}\approx0.666$$
\item[] \textbf{Step 3}: as in the previous step. $$p^2_L(a)=\frac{p^1_L(a)}{p^1_L(a)+p^1_S(a)}=\frac{0.5}{0.5+0.666}\approx0.428$$ \end{itemize} By proceeding in this way, the probability that $S$ chooses $a$ in $w_2$ tends approximately to $0.6$, and the probability that $L$ does so tends approximately to $0.4$. Conversely for $b$. The step-wise process can be summarized by a system of equations. This is an inductive definition of a function $f_a(n)$ that maps a number $n$ that counts the steps, to the probability that an agent takes action $a$ at step $n$. The probability of doing $a$ for the speaker is given by $f_a(2n)$, i.e., for steps indexed by even numbers, whereas the probability of doing $a$ for the listener is given by $f_a(2n+1)$. A similar series can be defined for $b$. \begin{align*}
f_a(0) &=1\\ f_a(1) &=1/2\\ f_a(n) &=\frac{f_a(n-2)}{f_a(n-1)+f_a(n-2)} \end{align*} $f_a$ defines a divergent sequence of probabilities, oscillating between approximately $0.4$ and approximately $0.6$. This can be observed by simple calculation. An analytic proof is quite involved, and is left out of the paper.
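While the analytic treatment is left out, the oscillation is easy to observe numerically. Below is a Python transcription of the recurrence (my own encoding of the definition above):

```python
# First terms of f_a: f_a(0) = 1, f_a(1) = 1/2,
# f_a(n) = f_a(n-2) / (f_a(n-1) + f_a(n-2)).
def f_a(n_max):
    vals = [1.0, 0.5]
    for k in range(2, n_max + 1):
        vals.append(vals[k - 2] / (vals[k - 1] + vals[k - 2]))
    return vals

seq = f_a(20)
speaker = seq[2::2]   # even steps: speaker's probability of doing a
listener = seq[1::2]  # odd steps: listener's probability of doing a

# The two subsequences settle near 0.6 and 0.4 respectively.
assert all(0.58 < x < 0.67 for x in speaker)
assert all(0.40 < x < 0.51 for x in listener)
```

The even and odd subsequences each converge, but to different values summing to 1, which is why the full sequence diverges by oscillation.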
While they go through the (first few steps of the) stepwise process, the agents' dispositions to act are incompatible throughout, as an effect of normalizing probabilities. However, the margin by which such incompatibility causes failures of coordination is reduced with each step. So, they still share no public belief, but their expected utility is higher. Recall that the value of an agent's expected utility for action $a$, as calculated above, was: $$eu_i(a)=p(q_i~\&~q_j)\cdot u_i(a,a)=\delta(1-\gamma)$$ However, this equation assumes that one gets a payoff for $a$ just in case both think that $q$. But one's payoff for $a$ increases, via $f_a$, also in proportion to the probability that $S$ thinks that $q$, $L$ thinks that $\bar{q}$, but both do $a$. So we revise the notion of expected utility, indexing it to the number of steps. \begin{align*}
eu^n_i(a) &= eu_i(a)+p(q_S~\&~\bar{q}_L)\cdot p^n_S(a)\cdot p^n_L(a)\cdot u_i(a,a)\\
&= \delta(1-\gamma) + \gamma\cdot p^n_S(a)\cdot p^n_L(a)\cdot u_i(a,a) \end{align*} At Step 0, the listener doesn't think that $q$, thus $p_L^0(a)=0$. Therefore, the overall expected utility of $a$ at 0 is simply $eu_i(a)$, as above. Assuming instead that we are at Step 3: \begin{align*}
eu^3_i(a) &=\delta(1-\gamma)+\gamma\cdot 0.666\cdot 0.428=\delta(1-\gamma)+\gamma\cdot 0.285 \end{align*} More generally, for all actions $x$, and for all $n\geq 0$, the agents' expected utility never falls below its value at Step 0. $$eu_i^0(x)\leq eu_i^n(x)$$ This argument is fairly abstract, but it's a mathematical reconstruction of a plausible conclusion. A reasoning process can be defined on the basis of the agents' expectations, in reaction to the uncertainty manifested by an assertion of `Might $\phi$'. The base step of the induction is the intuitively plausible idea that the speaker, having signalled her uncertainty, expects the listener to hesitate before acting. Based on this, the speaker reflects on how to react to the listener's hesitation, on how the listener would react to her reaction, and so on. The agents need not have perfect powers of reasoning. They need not follow the induction to infinity. It suffices that one or two steps are taken, and already the use of $\Diamond\phi$ leads to higher expected utility.
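To see the bound numerically, one can recompute $eu^n_i(a)$ along the iteration. The Python sketch below uses illustrative $\gamma$, $\delta$ (my choice) and takes $u_i(a,a)=1$, matching the simplification already built into $eu_i(a)=\delta(1-\gamma)$.

```python
# eu^n_i(a) = delta*(1-gamma) + gamma * p^n_S(a) * p^n_L(a),
# with u_i(a, a) = 1. gamma, delta are illustrative values.
gamma, delta = 0.2, 0.6

def propensities(n):
    """Latest (speaker, listener) probabilities of doing a after n >= 1 steps."""
    vals = [1.0, 0.5]
    for k in range(2, n + 1):
        vals.append(vals[k - 2] / (vals[k - 1] + vals[k - 2]))
    s = n if n % 2 == 0 else n - 1   # speaker: latest even-indexed step
    l = n if n % 2 == 1 else n - 1   # listener: latest odd-indexed step
    return vals[s], vals[l]

def eu_a(n):
    p_s, p_l = propensities(n)
    return delta * (1 - gamma) + gamma * p_s * p_l

# At Step 0 the listener's p_L(a) is 0, so eu^0 reduces to the base value.
base = delta * (1 - gamma)
assert all(eu_a(n) >= base for n in range(1, 15))
assert abs(eu_a(3) - (base + gamma * (2/3) * (0.5 / (0.5 + 2/3)))) < 1e-9
```

Note that the added term $\gamma\, p^n_S(a)\, p^n_L(a)$ is largest at the first steps and then settles; what holds at every step is the displayed lower bound relative to Step 0, not step-to-step growth.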
If the restaurant is neither close nor far, going out is reasonable not only for Shiv (who thinks with little confidence that the restaurant is close), but also for Logan (who thinks with little confidence that it's far), in response to the speaker's assertion that it might be close. This choice is \textit{reasonable} in the very concrete sense of expected utility maximization. Thus, by hedging one's assertion in conditions of uncertainty via the use of epistemic possibility modals, the chances of coordination improve although public belief fails.
\section{Conclusion}
In this paper, I presented a ``proof of concept'' for the use of epistemic modal expressions in signalling games in which uncertainty (about what another player thinks) undermines coordination. Vagueness may trigger uncertainty of this kind, since it undermines the belief that others think in the same way as we do. However, by using `Might $\phi$', we hedge our assertions and make uncertainty manifest. This can be seen by a straightforward application of Bayes' rule, on the basis of a standard semantics for \textit{might} and the Stalnakerian pragmatics of assertion as informative update. In turn, a manifestation of uncertainty may lead interlocutors to accommodate their actions to what they expect the others' actions will be, even though their doxastic mental states remain incompatible throughout. Coordination under uncertainty is facilitated by the strategic assertion of `Might $\phi$'.
By necessity, the view I presented applies only to particular contexts, formalized as particular kinds of games. By no means do I suggest that the interaction I described is the only effect epistemic possibility modals have in an interactive setting. The semantics for epistemic modality I adopted is somewhat rough but standard, and could be fine-tuned for the purposes of natural language semantics. The rational speech act model I adopted is an abstract formalization of the computational import of epistemic signalling, but could be understood as an element of a cognitively plausible picture of bounded rationality in interaction.
\end{document} |
\begin{document}
\title[On a smoothness result]{On Smoothness of the elements of some integrable Teichm\"uller spaces} \author{Vincent Alberge and Melkana Brakalova}
\begin{abstract} In this paper we focus on the integrable Teichm\"uller spaces $\tei{p}$ ($p>0$), which are subspaces of the symmetric subspace of the universal Teichm\"uller space. We prove that any element of $\tei{p}$ for $0<p\leq 1$ is a $\mathcal{C}^1$-diffeomorphism. \end{abstract}
\subjclass[2010]{30C62, 30C99, 30F60} \keywords{Integrable Teichm\"uller spaces, module, reduced module, symmetric and quasisymmetric mappings}
\maketitle
\section{Introduction}
The universal Teichm\"uler space $\utei$ is the space of \emph{quasisymmetric} homeomorphisms of the unit circle $\mathbb{S}^1$ fixing $1$, $i$, and $-1$. A mapping $f: \mathbb{S}^1\rightarrow \mathbb{S}^1$ is said to be quasisymmetric if there exists $M>0$ such that $$
\forall \theta\in\mathbb{R},\forall t>0,\; \frac{1}{M}\leq\left|\frac{f(e^{i(\theta+t)})-f(e^{i\theta})}{f(e^{i\theta})-f(e^{i(\theta-t)})}\right|\leq M . $$ Due to a well-known result by Ahlfors and Beurling \cite{beurling&ahlfors} one can give an equivalent description of $\utei$. More precisely, the universal Teichm\"uller space can be defined as the set of \emph{Teichm\"uller equivalence classes} of \emph{quasiconformal mappings} of the unit disc $\dd$ fixing $1$, $i$, and $-1$ where two such mappings are Teichm\"uller equivalent if they coincide on $\mathbb{S}^1$. A mapping $F:D \rightarrow F(D),$ where $D\subset \mathbb C$ is a domain, is called quasiconformal (or q.c. for short) if it is an orientation-preserving homeomorphism and if its distributional derivatives $\partial_z F$ and $\partial_{\overline{z}}F$ can be represented by locally square integrable functions (also denoted by $\partial_z F$ and $\partial_{\overline{z}}F$) on $D$ such that $$
\left\Vert \frac{\partial_{\overline{z}}F}{\partial_z F} \right\Vert_{\infty} = \underset{z\in D}{\textrm{ ess.sup}}\left| \frac{\partial_{\overline{z}}F\left(z\right)}{\partial_z F\left( z \right)} \right| <1. $$ We also recall that for $z=x+iy$, $\partial_{\overline{z}}=\tfrac{1}{2}(\partial_{x}+i\partial_{y})$ and $\partial_{z}=\tfrac{1}{2}(\partial_{x}-i\partial_{y})$. Furthermore, if $F$ is a quasiconformal mapping, the function $\mu_F =\tfrac{\partial_{\overline{z}}F}{\partial_z F},$ defined a.e., is called the \emph{Beltrami coefficient} associated with $F$. By the measurable Riemann mapping theorem, if a measurable function $\mu$ on $D$ is such that $\left\Vert \mu \right\Vert_{\infty}<1$, then it is the Beltrami coefficient of some quasiconformal mapping, which we will denote here by $F^{\mu}$.
Let us now introduce an important subspace of $\utei$, namely, the \emph{symmetric Teichm\"uller space} denoted here by $\stei$. Following a terminology introduced by Gardiner and Sullivan \cite{gardiner&sullivan}, it is the space of \emph{symmetric} homeomorphisms of $\mathbb{S}^1$ fixing $1$, $i$, and $-1$. One recalls that $f:\mathbb{S}^1\rightarrow \mathbb{S}^1$ is symmetric if it is an orientation-preserving homeomorphism of $\mathbb{S}^1$ such that \begin{equation}\label{eq:0} \frac{f(e^{i(\cdot+t)})-f(e^{i\cdot})}{f(e^{i\cdot})-f(e^{i(\cdot-t)})} \underset{t\rightarrow 0^+}{\longrightarrow} 1, \end{equation}
with respect to the uniform convergence on $\mathbb{R}$. As for the universal Teichm\"uller space one has an equivalent description of such a space that involves quasiconformal mappings. Indeed, Gardiner and Sullivan proved (see Theorem 2.1 in \cite{gardiner&sullivan}) that $\stei$ corresponds to the space of Teichm\"uller equivalence classes of quasiconformal mappings of $\dd$ fixing $1$, $i$, and $-1$ admitting a representative which is \emph{asymptotically conformal} on $\mathbb{S}^1$. Let us recall that a quasiconformal mapping $F: \dd \rightarrow \dd$ is said to be asymptotically conformal on $\mathbb{S}^1$ if for every $\epsilon>0$, there exists a compact subset $K_{\epsilon}$ of $\dd$ such that for any $z\in \dd\setminus K_{\epsilon}$, $\left| \mu_{F}(z)\right| <\epsilon$.
Here we focus on some interesting infinite dimensional subspaces of $\utei$, the $p$-\emph{integrable Teichm\"uller spaces}, which we define for each $p>0$ as the set $$ \tei{p}=\left\lbrace f\in\utei\mid \exists F:\dd\rightarrow \dd, \textrm{ q.c. such that } F_{\vert_{\mathbb{S}^1}} = f \textrm{ and }\mu_F \in\inthyp{p} \right\rbrace, $$
where $\sigma$ is the hyperbolic measure on $\dd$, that is, for any $z=x+iy \in\dd$, $d\sigma (z) = (1-\left| z \right|^2)^{-2}dxdy$. It is elementary to observe from such a definition that if $q>p>0$, then $\tei{p} \subset \tei{q}$ (indeed, since $\left|\mu_F\right|<1$ a.e., one has $\left|\mu_F\right|^q\leq \left|\mu_F\right|^p$ pointwise). The spaces $\tei{p}$, $p\geq 2$, were first introduced by Guo \cite{guo} through an equivalent description involving univalent functions. At about the same time, Cui \cite{cui} studied the case $p=2$ and gave a few important characterizations of the elements of $\tei{2}$. In particular, he proved that the Beltrami coefficient associated with the \emph{Douady--Earle extension} (see \cite{douady&earle}) of any element of $\tei{2}$ belongs to $\inthyp{2}$. Later on, Takhtajan and Teo \cite{takhtajan&teo} introduced a Hilbert manifold structure on the universal Teichm\"uller space that makes the space $\tei{2}$ the connected component of the identity mapping $\id_{\mathbb{S}^1}$. With respect to such a structure, they proved that the so-called \emph{Weil--Petersson} metric is a Riemannian metric on $\utei$. Following Takhtajan and Teo's work, the space $\tei{2}$ is now referred to as the \emph{Weil--Petersson Teichm\"uller space}. For further results on $\tei{2}$ we refer to \cite{shen}. Let us point out that one can obtain $\tei{2}\subset\stei$ by combining \cite[Theorem 2 and Lemma 2]{cui} and \cite[Theorem 4]{earle&markovic&saric}, see \cite[Section 3]{fan&hu} for a more detailed explanation. One can also mention the paper \cite{tang} by Tang where, in particular, Cui's result concerning the Douady--Earle extension is extended to all spaces $\tei{p}$ with $p\geq 2$. Recently, the second author of this paper proved in \cite{brakalovaxx} that $\tei{2}\subset\stei$ using an approach based on module techniques and the so-called \emph{Teichm\"uller's Modulsatz} (see \cite[\S 4]{T13}), and later on using a different method she proved that for any $p>0$, $\tei{p}\subset \stei$ (see \cite{brakalova18}).
In this paper we only deal with $\tei{p}$ for $0<p\leq 1$ and we give a proof of the following result: \begin{theorem}\label{main-result} Let $0<p\leq 1$. Then, any element of $\tei{p}$ is a $\mathcal{C}^1$-diffeomorphism. \end{theorem}
The strategy of the proof takes advantage of an approach used by the second author of this paper and J. A. Jenkins \cite{brakalova&jenkins02}, modified to the case of the unit disc. We first use the \emph{Teichm\"uller--Wittich--Bellinski\u{\i} theorem} to show that each element of $\tei{1}$ has a non-vanishing derivative at each point of $\mathbb{S}^1$. Then, we use properties of the \emph{reduced module} of a simply-connected domain to show that the derivatives of the elements of $\tei{1}$ are continuous. As mentioned earlier, since $\tei{p} \subset \tei{1}$ for $0<p\leq 1$, it follows immediately that for $0<p\leq 1$, any element in $\tei{p}$ is continuously differentiable with non-vanishing derivative.
\section{Background}
In this section we recall some classic notions from geometric function theory. Such notions are most notably and thoroughly investigated in Teichm\"uller's \emph{Habilitationsschrift} (Habilitation Thesis) \cite{T13}.
\subsection{Module of a doubly-connected domain} Let $D$ be a (non-degenerate) doubly-connected domain of the extended complex plane, that is, the complement of $D$ is a union of two disjoint simply-connected domains, each bounded by a Jordan curve. It is well known (see \cite{lehto&virtanen,T13}) that there exists a biholomorphic function that maps $D$ onto an annulus of inner radius $r_2$ and outer radius $r_1$ for some $0<r_2<r_1<\infty$. The \emph{module} $\Mod{D}$ of $D$ is $\ln\left( \tfrac{r_1}{r_2}\right)$. It is a \emph{conformal invariant}, namely, if $\Psi : D \rightarrow \Psi(D)$ is a biholomorphic function, then $\Mod{D}=\Mod{\Psi(D)}$.
It is also well known (see \cite{lehto&virtanen,T13}) that the module is \emph{superadditive}. More precisely, if $D_1$ and $D_2$ are two disjoint doubly-connected subdomains of a doubly-connected domain $D_3$, where each separates some $z_0 \in\mathbb{C}$ from $\infty$, then \begin{equation}\label{eq:21} \Mod{D_1}+\Mod{D_2}\leq \Mod{D_3}. \end{equation} In saying that a doubly-connected domain separates $z_0$ from $\infty$, we mean that one component of its complement contains $z_0$ in its interior while the other component contains $\infty$.
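A simple sanity check, included here only for illustration: for $0<r_1<r_2<r_3$, the annuli $\left\lbrace r_1<\left| z\right| <r_2\right\rbrace$ and $\left\lbrace r_2<\left| z\right| <r_3\right\rbrace$ are disjoint doubly-connected subdomains of $\left\lbrace r_1<\left| z\right| <r_3\right\rbrace$, each separating $0$ from $\infty$, and $$ \ln\left(\frac{r_2}{r_1}\right)+\ln\left(\frac{r_3}{r_2}\right)=\ln\left(\frac{r_3}{r_1}\right), $$ so in this case (\ref{eq:21}) holds with equality.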
Let us now recall two inequalities that will be used in the proof of the main result. For $0<r_2<r_1$ and $\zeta \in\mathbb{C}$ we set $A_{\zeta,r_2 , r_1}=\left\lbrace z \mid r_2 < \left| z -\zeta \right| < r_1\right\rbrace$. Let $F:A_{\zeta,r_2 , r_1} \rightarrow F\left(A_{\zeta,r_2 , r_1} \right)$ be a quasiconformal mapping. Then setting $z=\zeta+re^{i\theta}, r_2<r<r_1$ we have \begin{equation}\label{eq:22}
\Mod{F\left(A_{\zeta , r_2 , r_1}\right)}\leq \frac{1}{2\pi}\iint_{A_{\zeta , r_2 , r_1}}{\frac{1+\left| \mu_{F}(z)\right|}{1-\left| \mu_{F}(z)\right|}\cdot\frac{dxdy}{\left| z-\zeta\right|^2}}, \end{equation} and \begin{equation}\label{eq:23}
2\pi\int_{r_2}^{r_1}{\frac{1}{\int_{0}^{2\pi}{\frac{1+\left| \mu_{F}\left(z \right)\right|}{1-\left| \mu_{F}\left(z \right)\right|}d\theta}}\cdot\frac{dr}{r}}\leq \Mod{F\left(A_{\zeta , r_2 , r_1}\right)}. \end{equation} These estimates could be obtained following Teichm\"uller's approach based on the \emph{length-area method} in \cite[\S 6.3]{T13}, where he arrived at weaker versions of (\ref{eq:22}) and (\ref{eq:23}). Estimates equivalent to (\ref{eq:22}) and (\ref{eq:23})---some proved under more general assumptions and different methods---can be found in \cite{reich&walczak,gutlyanskii&martio,brakalova10} and others.
\subsection{Reduced module of a simply-connected domain} Let $\Omega$ be a simply-connected domain of the complex plane different from $\mathbb{C}$. Let $\zeta \in \Omega$. For $r>0$, let $D(\zeta,r)$ denote the disc of radius $r$ centered at $\zeta$ and let $0<r_2<r_1$ be small enough so that $D(\zeta,r_1)\subset \Omega$. From (\ref{eq:21}) it follows that $$ \Mod{\Omega\setminus D(\zeta,r_1)} + \ln\left( \frac{r_1}{r_2}\right) \leq \Mod{\Omega\setminus D(\zeta,r_2)}, $$ and therefore $$ \Mod{\Omega\setminus D(\zeta,r_1)} +\ln\left( r_1\right) \leq \Mod{\Omega\setminus D(\zeta,r_2)} +\ln\left( r_2\right). $$
One defines the reduced module $\modred{\Omega}{\zeta}$ of $\Omega$ at $\zeta$ as $\lim_{r\rightarrow 0}\left( \Mod{\Omega \setminus D(\zeta,r)}+\ln(r)\right)$. Using, for example, the \emph{Koebe distortion theorem} one can show that this limit is finite and $\modred{\Omega}{\zeta} = \ln\left( \left| \Psi^{\prime}(0)\right|\right)$, where $\Psi : \dd \rightarrow \Omega$ is a biholomorphic function mapping $0$ onto $\zeta$. A detailed proof can be found in \cite[\S 1.6]{T13}. From here it follows directly that $\zeta\mapsto \modred{\Omega}{\zeta}$ is continuous.
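As a simple illustration, take $\Omega = D(\zeta,R)$ for some $R>0$. Then $\Mod{D(\zeta,R)\setminus D(\zeta,r)}=\ln\left(\frac{R}{r}\right)$, so $$ \modred{D(\zeta,R)}{\zeta}=\lim_{r\rightarrow 0}\left( \ln\left(\frac{R}{r}\right)+\ln(r)\right)=\ln(R), $$ in agreement with the formula $\ln\left(\left| \Psi^{\prime}(0)\right|\right)$ applied to the biholomorphic function $\Psi(z)=\zeta+Rz$.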
Before concluding this subsection let us add one more property of the reduced module that we will use later.
If $F:\mathbb{C}\rightarrow \mathbb{C}$ is a homeomorphism then, for any $r>0,$ the function $\zeta\mapsto \modred{F\left( D(\zeta,r) \right)}{F(\zeta)}$ is continuous. Indeed, if $\zeta_n \underset{n\rightarrow \infty}{\longrightarrow}\zeta$, then by applying the homeomorphisms $z\mapsto F(z+\zeta_n-\zeta)-F(\zeta_n)+F(\zeta), z\in D(\zeta,r),$ one obtains a sequence of domains $D_n,$ which are all images of $D(\zeta,r)$. Since $F$ is a homeomorphism, it follows that $D_n \underset{n\rightarrow \infty}{\longrightarrow} F\left( D(\zeta,r)\right)$ (with respect to the topology induced by the Hausdorff distance on the set of subsets of $\mathbb{C}$). Consider the sequence of biholomorphic functions $\Psi_n : \mathbb{D} \rightarrow D_n$ mapping $0$ onto $F(\zeta),$ normalized by $\Psi_n'(0)>0$. Then for any $n$, $\ln\left( \Psi_{n}^{\prime}(0)\right)=\modred{D_n}{F(\zeta)}=\modred{F\left( D(\zeta_n,r) \right)}{F(\zeta_n)}$ since a translation does not change the reduced module. Furthermore, the sequence of functions $\Psi_n$ forms a normal family and thus, up to a subsequence, $\Psi_n$ converges uniformly (on any compact subset of $\mathbb{D}$) to a biholomorphic function $\Psi_{\infty}: \mathbb{D}\rightarrow F\left( D(\zeta,r) \right)$ mapping $0$ onto $F(\zeta)$. This implies \begin{align*} \modred{F\left(D(\zeta,r)\right)}{F(\zeta)} & =\ln\left( \Psi_{\infty}^{\prime}(0)\right) \\ & = \lim_{n\rightarrow \infty}\ln\left( \Psi_{n}^{\prime}(0) \right) \\ & =\lim_{n\rightarrow \infty}\modred{F\left( D(\zeta_n,r) \right)}{F(\zeta_n)}, \end{align*} and thus we have continuity.
\subsection{Teichm\"uller--Wittich--Bellinski\u{\i} theorem} First, let us recall that a mapping $F:\mathbb{C}\rightarrow\mathbb{C}$ is said to be \emph{conformal} at $z_0 $ if $\lim_{z\rightarrow z_0}{\tfrac{F(z)-F(z_0)}{z-z_0}}$ exists and is different from $0$. Following \cite[Chapter V, Theorem 6.1]{lehto&virtanen} the well-known Teichm\"uller--Wittich--Bellinski\u{\i} theorem can be stated as follows:
\begin{theorem}\label{theorem:1} Let $D$ be a domain of the complex plane and let $z_0\in D$. Let $F:D\rightarrow F(D)$ be a quasiconformal mapping. If there exists a neighborhood $\mathcal{U}$ of $z_0$ contained in $D$ such that $$
\iint_{\mathcal{U}}{\frac{\left|\mu_{F}(z)\right|}{\left| z-z_0 \right|^2}dxdy}<\infty, $$ then $F$ is conformal at $z=z_0$. \end{theorem} The history of this theorem and its extensions is rather long and we may refer the curious reader to some of the following papers \cite{belinskii,reich&walczak,drasin,brakalova&jenkins94,gutlyanskii&martio,brakalova10,shishikura} and to \cite{alberge&brakalova&papadopoulos}.
\section{Proof of the main result}\label{sec:3}
Let $f\in \tei{1}$. By definition, there exists a quasiconformal extension $F$ of $f$ to the closed unit disc such that \begin{equation}\label{eq:3}
\iint_{\dd}{\left|\mu_{F}(z)\right| d\sigma(z)}<\infty. \end{equation}
Let $\widetilde{\mu}$ be a function defined on the extended complex plane which coincides with $\mu_{F}$ on $\dd$ and which is identically $0$ outside the disc. Let $F^{\widetilde{\mu}}$ be the unique quasiconformal mapping of the complex plane with Beltrami coefficient $\widetilde{\mu}$ that fixes $1$, $i$, and $-1$. Therefore, we have $F^{\widetilde{\mu}}_{\vert_{\dd}}=F$ and $F^{\widetilde{\mu}}_{\vert_{\mathbb{S}^1}}=f$.
\begin{claim}\label{claim1} The quasiconformal mapping $F^{\widetilde{\mu}}$ is conformal at any point of $\mathbb{S}^1$. Therefore, $f$ is a diffeomorphism of $\mathbb{S}^1$. \end{claim}
We apply Theorem \ref{theorem:1} to derive the conformality of $F^{\widetilde{\mu}}$.
\begin{proof}[Proof of Claim \ref{claim1}] Let $\zeta_0\in\mathbb{S}^1$. Because of (\ref{eq:3}), one can find a compact subset $K$ of $\dd$ such that \begin{equation}\label{eq:4}
\iint_{\dd\setminus K}{\left|\mu_{F}(z)\right| d\sigma(z)}<1. \end{equation} Let $r>0$ be such that $\dd \setminus D(\zeta_0, r)\subset \dd\setminus K$. One first observes that \begin{align*}
\forall z \in D(\zeta_0, r)\cap \dd, \; \left(1-\left| z\right|^2 \right)^2& = \left( 1-\left|z\right| \right)^2 \cdot \left( 1 +\left| z\right|\right)^2 \\ & \leq \left| \zeta_0 -z \right|^2 \cdot \left(1+\left| z \right|\right)^2 \\
&< 4\cdot \left| \zeta_0 -z \right|^2, \end{align*} and therefore \begin{equation}\label{eq:5}
\forall z \in D(\zeta_0, r)\cap \dd, \, \frac{1}{\left| z-\zeta_0\right|^2}< 4\cdot \frac{1}{\left( 1-\left|z\right|^2 \right)^2}. \end{equation} It follows that \begin{align*}
\iint_{D(\zeta_0,r)}{\frac{\left| \widetilde{\mu} (z)\right|}{\left| z-\zeta_0 \right|^2}dxdy}& = \iint_{D(\zeta_0,r)\cap{\dd}}{\frac{\left| \mu_{F} (z)\right|}{\left| z-\zeta_0 \right|^2}dxdy} \\ & \leq 4 \iint_{D(\zeta_0,r)\cap{\dd}}{\left| \mu_{F} (z)\right| d\sigma(z)}\\ & < 4, \end{align*} by (\ref{eq:4}).
We deduce, by Theorem \ref{theorem:1}, that $F^{\widetilde{\mu}}$ is conformal at $z=\zeta_0$, which proves that $f$ is differentiable at $\zeta_0$ and $\left| f^{\prime}(\zeta_0)\right|>0$. Since this is true for any $\zeta_0 \in \mathbb{S}^1,$ it follows that $f$ is a diffeomorphism of $\mathbb{S}^1$. \end{proof}
The following two additional results will be needed in the proof of the continuity of $f^{\prime}$ on $\mathbb{S}^1$.
\begin{claim}\label{claim2} Let $\epsilon >0$. Then, there exists $r_{\epsilon}>0$ such that $$
\forall \zeta\in\mathbb{S}^1, \forall 0<\rho_2 <\rho_1\leq r_{\epsilon},\; \left| \Mod{F^{\widetilde{\mu}}\left( A_{\zeta,\rho_2 , \rho_1}\right)} - \ln\left(\frac{\rho_{1}}{\rho_2} \right)\right| < \epsilon. $$ \end{claim}
\begin{claim}\label{claim3} Let $\zeta\in\mathbb{S}^1$ and $r>0$. Then, $$
\lim_{\rho\rightarrow 0}{\Mod{F^{\widetilde{\mu}}\left( A_{\zeta,\rho,r}\right)}+\ln\left( \left| f^{\prime}(\zeta)\right|\rho\right)} = \modred{F^{\widetilde{\mu}}\left( D\left( \zeta, r\right)\right)}{f(\zeta)}. $$ \end{claim}
\begin{proof}[Proof of Claim \ref{claim2}] Let $\zeta\in\mathbb{S}^1$ and $0<\rho_2 < \rho_1$. On the one hand, by applying (\ref{eq:22}) one gets \begin{align}
\Mod{F^{\widetilde{\mu}}\left( A_{\zeta,\rho_2 , \rho_1}\right)} - \ln\left(\frac{\rho_{1}}{\rho_2} \right) & \leq \frac{1}{2\pi}\iint_{A_{\zeta , \rho_2 , \rho_1}}{\frac{1+\left| \widetilde{\mu}(z)\right|}{1-\left| \widetilde{\mu}(z)\right|}\cdot\frac{dxdy}{\left| z-\zeta\right|^2}} - \ln\left(\frac{\rho_{1}}{\rho_2} \right) \nonumber\\
& = \frac{1}{2\pi}\iint_{A_{\zeta , \rho_2 , \rho_1}}{\left(\frac{1+\left| \widetilde{\mu}(z)\right|}{1-\left| \widetilde{\mu}(z)\right|}-1\right)\cdot\frac{dxdy}{\left| z-\zeta\right|^2}} \nonumber\\
& \leq \frac{1}{\pi \left( 1-\left\Vert \mu_F\right\Vert_{\infty} \right)}\iint_{A_{\zeta , \rho_2 , \rho_1}\cap\dd}{\left| \mu_{F}(z)\right|\cdot \frac{dxdy}{\left|z-\zeta \right|^2}}.\label{eq:31} \end{align} On the other hand since $$
\int_{0}^{2\pi}{\frac{1+\left| \widetilde{\mu}\left(z \right)\right|}{1-\left| \widetilde{\mu}\left(z \right)\right|}d\theta}\geq 2\pi , $$
by means of (\ref{eq:23}) one obtains \begin{align}
\Mod{F^{\widetilde{\mu}}\left( A_{\zeta,\rho_2 , \rho_1}\right)} - \ln\left(\frac{\rho_{1}}{\rho_2} \right) & \geq 2\pi\int_{\rho_2}^{\rho_1}{\frac{1}{\int_{0}^{2\pi}{\frac{1+\left| \widetilde{\mu}\left(z \right)\right|}{1-\left| \widetilde{\mu}\left(z \right)\right|}d\theta}}\cdot\frac{dr}{r}} -\ln\left(\frac{\rho_{1}}{\rho_2} \right) \nonumber \\ & = \int_{\rho_2}^{\rho_1}{\frac{2\pi-\int_{0}^{2\pi}{\frac{1+\left| \widetilde{\mu}\left(z \right)\right|}{1-\left| \widetilde{\mu}\left(z \right)\right|}d\theta}}{\int_{0}^{2\pi}{\frac{1+\left| \widetilde{\mu}\left(z \right)\right|}{1-\left| \widetilde{\mu}\left(z \right)\right|}d\theta}}\cdot\frac{dr}{r}} \nonumber \\
& = \int_{\rho_2}^{\rho_1}{\frac{\int_{0}^{2\pi}{\frac{-2\left| \widetilde{\mu}\left(z \right)\right|}{1-\left| \widetilde{\mu}\left(z \right)\right|}d\theta}}{\int_{0}^{2\pi}{\frac{1+\left| \widetilde{\mu}\left(z \right)\right|}{1-\left| \widetilde{\mu}\left(z \right)\right|}d\theta}}\cdot\frac{dr}{r}} \nonumber\\
& \geq \frac{-1}{\pi} \iint_{A_{\zeta , \rho_2 , \rho_1}\cap\dd}{\frac{\left| \mu_{F}(z)\right|}{1-\left| \mu_{F}\left( z \right)\right|}\cdot \frac{dxdy}{\left|z-\zeta \right|^2}}\nonumber \\
& \geq -\frac{1}{\pi \left( 1-\left\Vert \mu_F\right\Vert_{\infty} \right)}\iint_{A_{\zeta , \rho_2 , \rho_1}\cap\dd}{\left| \mu_{F}(z)\right|\cdot \frac{dxdy}{\left|z-\zeta \right|^2}}.\label{eq:32} \end{align} Let $\epsilon>0$. Still because of (\ref{eq:3}) there exists a compact set $K_{\epsilon}$ of $\dd$ such that \begin{equation}\label{eq:6}
\iint_{\dd\setminus K_{\epsilon}}{\left| \mu_{F}(z)\right| d\sigma(z)} <\frac{\pi(1-\left\Vert \mu_{F} \right\Vert_{\infty})}{4}\epsilon. \end{equation} Let $r_{\epsilon}>0$ be the distance between $\mathbb{S}^1$ and $K_{\epsilon}$. Thus, for any $0<\rho_2 < \rho_1\leq r_{\epsilon}$ one obtains by combining (\ref{eq:31}), (\ref{eq:32}), (\ref{eq:5}) and (\ref{eq:6}) $$ \forall \zeta\in\mathbb{S}^1,\; -\epsilon < \Mod{F^{\widetilde{\mu}}\left( A_{\zeta,\rho_2 , \rho_1}\right)} - \ln\left(\frac{\rho_{1}}{\rho_2} \right) < \epsilon, $$ and therefore Claim \ref{claim2} follows. \end{proof}
\begin{proof}[Proof of Claim \ref{claim3}] Let $\zeta\in\mathbb{S}^1$ and let $r>0$. For any $0<\rho<r$, let
$$m(\rho)=\min_{\left| z -\zeta\right|=\rho}{\left| F^{\widetilde{\mu}}(z)-f(\zeta)\right|} \textrm{ and } M(\rho)=\max_{\left| z -\zeta \right|=\rho}{\left| F^{\widetilde{\mu}}(z)-f(\zeta)\right|}. $$ Since $F^{\widetilde{\mu}}$ is conformal at $\zeta$ one has \begin{equation}\label{eq7}
\lim_{\rho\rightarrow 0}\frac{\left| f^{\prime}(\zeta)\right|\rho}{M(\rho)}=\lim_{\rho\rightarrow 0}\frac{\left| f^{\prime}(\zeta)\right|\rho}{m(\rho)}= 1. \end{equation} Furthermore, since $D(f(\zeta), m(\rho))\subset F^{\widetilde{\mu}}\left( D(\zeta,\rho)\right) \subset D(f(\zeta), M(\rho))$, the monotonicity of the module gives \begin{align*} \Mod{F^{\widetilde{\mu}}\left( D(\zeta,r )\right) \setminus D(f(\zeta), M(\rho))} & \leq \Mod{F^{\widetilde{\mu}}\left( A_{\zeta, \rho, r} \right)} \\ & \hphantom{dsds} \leq \Mod{F^{\widetilde{\mu}}\left( D(\zeta,r )\right)\setminus D(f(\zeta), m(\rho))}. \end{align*}
Therefore, by adding $\ln\left(\left| f^{\prime}(\zeta)\right|\rho\right)$, using (\ref{eq7}), and letting $\rho \rightarrow 0$ it follows that $$
\lim_{\rho\rightarrow 0} \Mod{F^{\widetilde{\mu}}\left( A_{\zeta, \rho, r} \right)}+\ln\left(\left| f^{\prime}(\zeta)\right|\rho\right) = \modred{F^{\widetilde{\mu}}\left( D\left( \zeta, r\right)\right)}{f(\zeta)}, $$ which proves Claim \ref{claim3}. \end{proof}
We have now all the ingredients necessary to complete the proof of our main Theorem \ref{main-result}.
Let $\zeta_0 \in\mathbb{S}^1$. Let $\epsilon>0$. Let $r_{\frac{\epsilon}{5}}>0$ be as in Claim \ref{claim2}. By the continuity of the reduced module discussed earlier one can find a $\delta_{\frac{\epsilon}{5}}>0$ such that if $\zeta\in\mathbb{S}^1$ and $\left|\zeta -\zeta_0\right|<\delta_{\frac{\epsilon}{5}}$ then \begin{equation}\label{eq8}
\left| \modred{F^{\widetilde{\mu}}\left( D(\zeta, r_{\frac{\epsilon}{5}})\right)}{f(\zeta)}-\modred{F^{\widetilde{\mu}}\left( D(\zeta_0, r_{\frac{\epsilon}{5}})\right)}{f(\zeta_0)}\right| <\frac{\epsilon}{5}. \end{equation}
Let $\zeta \in \mathbb{S}^1$ be such that $\left|\zeta -\zeta_0\right|<\delta_{\frac{\epsilon}{5}}$. By Claim \ref{claim3} there exist $r_{\zeta_0,1}, r_{\zeta,1}<r_{\frac{\epsilon}{5}}$ such that for any $\rho\leq r_{\zeta_0,1}$ \begin{equation}\label{eq9}
\left| \Mod{F^{\widetilde{\mu}}\left( A_{\zeta_0, \rho, r_{\frac{\epsilon}{5}}} \right)}+\ln\left(\left| f^{\prime}(\zeta_0)\right|\rho\right) - \modred{F^{\widetilde{\mu}}\left( D\left( \zeta_0, r_{\frac{\epsilon}{5}}\right)\right)}{f(\zeta_0)} \right|<\frac{\epsilon}{5} , \end{equation} and for any $\rho\leq r_{\zeta,1}$ \begin{equation}\label{eq10}
\left| \Mod{F^{\widetilde{\mu}}\left( A_{\zeta, \rho, r_{\frac{\epsilon}{5}}} \right)}+\ln\left(\left| f^{\prime}(\zeta)\right|\rho\right) - \modred{F^{\widetilde{\mu}}\left( D\left( \zeta, r_{\frac{\epsilon}{5}}\right)\right)}{f(\zeta)} \right|<\frac{\epsilon}{5} . \end{equation}
Thus, from the triangle inequality, Claim \ref{claim2}, and Inequalities (\ref{eq8}), (\ref{eq9}), and (\ref{eq10}) we obtain \begin{align*}
& \left| \ln\left( \left| f^{\prime}(\zeta) \right|\right)- \ln\left( \left| f^{\prime}(\zeta_0) \right|\right) \right| \\ &= \left| \ln\left( \left| f^{\prime}(\zeta) \right| r_{\zeta,1}\right)-\ln\left(r_{\zeta,1} \right)- \ln\left( \left| f^{\prime}(\zeta_0) \right|r_{\zeta_0,1}\right) +\ln\left( r_{\zeta_0,1}\right)\right| \\
& \leq \left| \ln\left( \left| f^{\prime}(\zeta) \right| r_{\zeta,1}\right) +\Mod{F^{\widetilde{\mu}}\left( A_{\zeta, r_{\zeta,1}, r_{\frac{\epsilon}{5}}} \right)}- \modred{F^{\widetilde{\mu}}\left( D\left( \zeta, r_{\frac{\epsilon}{5}}\right)\right)}{f(\zeta)}\right| \\
& \hphantom{g}+\left| \ln\left( \left| f^{\prime}(\zeta_0) \right| r_{\zeta_0,1}\right) +\Mod{F^{\widetilde{\mu}}\left( A_{\zeta_0, r_{\zeta_0,1}, r_{\frac{\epsilon}{5}}} \right)}- \modred{F^{\widetilde{\mu}}\left( D\left( \zeta_0, r_{\frac{\epsilon}{5}}\right)\right)}{f(\zeta_0)}\right| \\
& \hphantom{ghg}+\left| \modred{F^{\widetilde{\mu}}\left( D\left( \zeta, r_{\frac{\epsilon}{5}}\right)\right)}{f(\zeta)}-\modred{F^{\widetilde{\mu}}\left( D\left( \zeta_0, r_{\frac{\epsilon}{5}}\right)\right)}{f(\zeta_0)}\right| \\
& \hphantom{gh}+\left| -\Mod{F^{\widetilde{\mu}}\left( A_{\zeta, r_{\zeta,1}, r_{\frac{\epsilon}{5}}} \right)}-\ln\left( r_{\zeta,1}\right)+ \Mod{F^{\widetilde{\mu}}\left( A_{\zeta_0, r_{\zeta_0,1}, r_{\frac{\epsilon}{5}}} \right)}+\ln\left( r_{\zeta_0,1}\right)\right| \\
& \leq 3\frac{\epsilon}{5} +\left| -\Mod{F^{\widetilde{\mu}}\left( A_{\zeta, r_{\zeta,1}, r_{\frac{\epsilon}{5}}} \right)}+\ln\left( \frac{r_{\frac{\epsilon}{5}}}{r_{\zeta,1}}\right) \right| \\
& \hphantom{dsdsdsdd}+ \left|\Mod{F^{\widetilde{\mu}}\left( A_{\zeta_0, r_{\zeta_0,1}, r_{\frac{\epsilon}{5}}} \right)}-\ln\left( \frac{r_{\frac{\epsilon}{5}}}{r_{\zeta_0,1}}\right)\right| \\ & \leq \epsilon. \end{align*}
This shows the continuity of $\left| f^{\prime} \right|$ at any $\zeta_0\in \mathbb{S}^1,$ thus $f$ is continuously differentiable on $\mathbb{S}^1$ and, since the derivative is never $0$, any element $f\in \tei{1}$ is a $\mathcal{C}^1$-diffeomorphism of $\mathbb{S}^1$. Since $ \tei{p}\subset \tei{1}$ ($0<p\leq 1$) we have shown that Theorem \ref{main-result} holds.
Since every differentiable quasisymmetric function $f$ on $\mathbb{S}^1 $ is symmetric in the sense of (\ref{eq:0}), the following already known property follows from Theorem \ref{main-result}. \begin{corollary} Let $0<p\leq 1$. Then, $\tei{p}\subset \stei$. \end{corollary}
Let us point out that although $\tei{1}\subset \stei$, the quasiconformal extension $F$ of $f$ we were working with may not necessarily be asymptotically conformal on $\mathbb{S}^1$ and Claim \ref{claim2} is not obvious. However, for $p\geq 2$, if one specifically employs the Douady--Earle extension, then Claim \ref{claim2} holds. It seems natural to ask: \begin{question} Let $f\in\tei{p}$ (with $0<p\leq 2$). Is there a quasiconformal asymptotically conformal extension $F$ of $f$ to the closed unit disc for which $\mu_{F} \in \inthyp{p}$? \end{question}
Furthermore, since we obtain smoothness properties for the elements of $\tei{p}$ (for $p\leq 1$), we suggest that one can show higher and higher orders of smoothness for $p<1,$ as $p$ gets smaller and smaller. If this is the case we would like to find sharp results on how the order of smoothness depends on $p$, a question that seems to be similar to finding a characterization of $\tei{p}$ using Sobolev spaces for $p\geq 2$. In addition, we pose the following question: \begin{question} What is $\bigcap_{p>0}\tei{p}$? \end{question}
\textsc{Vincent Alberge, Fordham University, Department of Mathematics, 441 East Fordham Road, Bronx, NY 10458, USA}
\textit{E-mail address:} { \href{mailto:valberge@fordham.edu}{\tt valberge@fordham.edu} }
\textsc{Melkana Brakalova, Fordham University, Department of Mathematics, 441 East Fordham Road, Bronx, NY 10458, USA}
\textit{E-mail address:} { \href{mailto:brakalova@fordham.edu}{\tt brakalova@fordham.edu} }
\end{document} |
\begin{document}
\title{Simple proof of Chebotar\"ev's theorem on roots of unity\footnote{Keywords and phrases: roots of unity, non-zero minors} \footnote{2000 Mathematics Subject classification: 11T22 (primary), 42A99, 11C20 (secondary)}} \author{P. E. Frenkel}
\date{}
\maketitle
\begin{abstract} We give a simple proof of Chebotar\"ev's theorem: Let $p$ be a prime and $\omega $ a primitive $p$th root of unity. Then all minors of the matrix $\left(\omega^ {ij}\right)_{i,j=0}^{p-1}$ are non-zero. \end{abstract}
Let $p$ be a prime and $\omega $ a primitive $p$th root of unity. We write $\mathbf F_p$ for the field with $p$ elements. In 1926, Chebotar\"ev proved the following theorem (see \cite{SL}):
\begin{theorem} For any sets $I,J\subseteq \mathbf F_p$ with equal cardinality, the matrix $(\omega^{ij})_{i\in I,j\in J}$ has non-zero determinant. \end{theorem}
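For orientation (this remark is not used in the proof), consider $p=3$ and $I=J=\{0,1\}$: the corresponding minor is $\omega^{0\cdot 0}\,\omega^{1\cdot 1}-\omega^{0\cdot 1}\,\omega^{1\cdot 0}=\omega - 1\neq 0$. The primality of $p$ is essential: for the primitive $4$th root of unity $\omega=i$, the choice $I=J=\{0,2\}$ gives the minor $i^{0\cdot 0}\, i^{2\cdot 2}-i^{0\cdot 2}\, i^{2\cdot 0}=1-1=0$.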
Several independent proofs have been
given, including ones by
Dieudonn\'e \cite{D}, Evans and Isaacs \cite{EI}, and Terence Tao \cite{T}. Tao points out that the theorem is equivalent to the inequality $|{\mathrm {supp}} f|+|{\mathrm {supp}} \hat f|\geq p+1$ holding for any function $0\not\equiv f:\mathbf F_p\to \mathbf C$ and its Fourier transform $\hat f$, a fact also discovered independently by Andr\'as Bir\'o. Bir\'o posed this as Problem 3 of the 1998 Schweitzer Competition. The proof I gave in the competition (the one in the present article) is published in Hungarian in \cite[pp. 53--54.]{matlap}. It was also discovered (as part of a more general investigation) by Daniel Goldstein, Robert M. Guralnick and I. M. Isaacs \cite[Section 6]{GGI}.
The proof is based on the following two lemmas. Lemma~\ref{1} is covered by \cite[Chapter 1]{W}, but we include a proof for the sake of completeness.
\begin{lemma}\label{1} $\mathbf Z[\omega]/(1-\omega)=\mathbf F_p.$ \end{lemma}
\begin{proof} Let $\Omega$ be an indeterminate and let $\Phi_p(\Omega)=1+\Omega+\dots+\Omega^{p-1}$ be the minimal polynomial of the algebraic integer $\omega$. Consider the surjective ring homomorphisms $$\mathbf Z[\Omega]\to\mathbf Z[\Omega]/(\Phi_p(\Omega))=\mathbf Z[\omega], \qquad \Omega\mapsto \omega$$ and $$\mathbf Z[\Omega]\to\mathbf Z[\Omega]/(1-\Omega,p)=\mathbf F_p, \qquad\Omega\mapsto 1.$$
The latter kernel
contains the former one
since $\Phi_p(\Omega)\equiv p \mod (1-\Omega)$. Therefore, the latter homomorphism
factors through the former one
via a surjective homomorphism $\mathbf Z[\omega]\to\mathbf F_p$ whose kernel is
the ideal $$(1-\Omega, p)/(\Phi_p(\Omega))=(1-\omega, p)=(1-\omega),$$ the last equality following from $p\equiv\Phi_p(\omega)=0\mod (1-\omega)$. \end{proof}
\begin{lemma}\label{2}
Let
$0\not\equiv g(x)\in \mathbf F_p[x]$ be a polynomial of degree $<p$. Then the multiplicity of any element $0\neq a\in \mathbf F_p$ as a root of $g(x)$ is strictly less than the number of non-zero coefficients of $g(x)$. \end{lemma}
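Let us remark (though it is not needed in what follows) that the bound of Lemma~\ref{2} is sharp: for $1\leq k<p$, the polynomial $g(x)=(x-1)^k$ vanishes at $a=1$ with multiplicity exactly $k$, and its $k+1$ coefficients are, up to sign, the binomial coefficients $k!/(j!\,(k-j)!)$ for $0\leq j\leq k$; since $k<p$, the factorial $k!$ is not divisible by $p$, hence neither are these coefficients, so all of them are non-zero in $\mathbf F_p$.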
\begin{proof} For $g(x)$ constant, the lemma is obviously true. Assume that it is true for any $g(x)$ of degree $<k$, with some fixed $1\leq k<p$, and take $g(x)$ of degree $k$. If $g(0)=0$, then $g(x)$ has the same number of non-zero coefficients and the same multiplicity of vanishing at $a$ as $g(x)/x$ does, so the lemma is true for $g(x)$. If $g(0)\neq 0$, then the number of non-zero coefficients exceeds the corresponding number for
the derivative $g'(x)$ by 1, and the multiplicity of vanishing at $a$ exceeds that of $g'(x)$ by at most 1. Now $g'(x)\not\equiv 0$ since $g(x)$ is of positive degree $k<p$, so the inequality of the lemma holds for $g'(x)$ and therefore also for $g(x)$. \end{proof}
\begin{proofof} The theorem is equivalent to saying that if numbers $a_j\in \mathbf Q(\omega)$ $(j\in J)$ satisfy $\sum_{j\in J} a_j\omega^{ij}=0$ for all $i\in I$, then all $a_j$ must be zero. In fact, we may clearly assume that $a_j\in \mathbf Z[\omega]$. The above equalities mean that the polynomial $$g(x)=\sum_{j\in J} a_j x^j\in \mathbf Z[\omega][x]$$ vanishes at $\omega^i$ for all $i\in I$. So $g(x)$ is divisible by $\prod_{i\in I}(x-\omega^i)$. Applying the homomorphism $\mathbf Z[\omega]\to \mathbf Z[\omega]/(1-\omega)=\mathbf F_p$ to the coefficients of $g(x)$ we get a polynomial $\bar g(x)\in \mathbf F_p[x]$
that is divisible by $(x-1)^{|I|}$. On the other hand,
$\bar g(x)$ has at most $|J|$ non-zero coefficients.
As $|I|=|J|$, we deduce from Lemma~\ref{2} that $\bar g(x)\equiv 0$. This means that all $a_j$ are divisible by $1-\omega$. We may divide all of them by $1-\omega$ and iterate the argument. This leads to {\it descente infinie} unless all $a_j$ are zero. \end{proofof}
\noindent {\bf Address.} Mathematics Institute, Budapest University of Technology and Economics, Egry J. u. 1., Budapest, 1111 Hungary. E-mail: frenkelp@renyi.hu
\end{document} |
\begin{document}
\title{Li-Yau inequality under $CD(0,n)$ on graphs} \author{
Florentin M\"unch\footnote{MPI MiS Leipzig, muench@mis.mpg.de} } \date{\today} \maketitle
\begin{abstract} We introduce a modified non-linear heat equation $\partial_t u = \Delta u + \Gamma u$ as a substitute for $\log P_t f$ where $P_t$ is the heat semigroup. We prove an exponential decay of $\Gamma u$ under the Bakry-Emery curvature condition $CD(K,\infty)$ and prove the Li-Yau inequality $-\Delta u_t \leq \frac{n}{2t}$ under the Bakry-Emery curvature condition $CD(0,n)$. From this, we deduce the volume doubling property which solves a major open problem in discrete Ricci curvature. As an application, we show that there exist no expander graphs satisfying $CD(0,n)$. \end{abstract}
\tableofcontents
\section{Introduction}
The celebrated Li-Yau inequality on $n$-dimensional Riemannian manifolds with non-negative Ricci curvature states \[ -\Delta \log P_t f \leq \frac{n}{2t} \] for all positive $f$ where $P_t = e^{\Delta t}$ denotes the heat semigroup \cite{li1986parabolic}.
Tremendous effort has been invested to establish a Li-Yau inequality on graphs with non-negative Ricci curvature \cite{bauer2015li,horn2014volume, munch2014li,dier2017discrete,gong2018li}. To this end, several non-linear modifications of the Bakry-Emery curvature have been introduced. These modified curvature notions are hard to compute or estimate in practice. Additionally, non-negative curvature in the modified sense is strictly stronger than non-negative curvature in the classical Bakry-Emery sense \cite{munch2017remarks,munch2014li}. In contrast, computing the Bakry-Emery curvature itself is simple since it reduces to a semidefinite programming problem \cite{cushing2016bakry}.
Naturally, the question arises whether the Li-Yau inequality and its consequences also hold under non-negative Bakry-Emery curvature. Due to the lack of a chain rule for the Laplacian in discrete settings, this turned out to be a hard question, and there was no significant progress in this direction for more than five years. There even seemed to be a consensus that strong curvature assumptions are necessary for proving strong analytical and geometrical results such as the Li-Yau inequality, Harnack inequality, volume doubling or Gaussian heat kernel estimates, as there is a growing number of articles investigating the stronger curvature notions \cite{bauer2015li,horn2014volume, horn2019spacial,dier2017discrete, munch2017remarks,munch2014li, gong2017equivalent, gong2018li,fathi2015curvature, lin2015equivalent,lin2017existence, hua2016curvature,gao2016curvature,gao2016one, Lv2019,lippner2016li,Wang2017,qian2017remarks, wu2018nonexistence,lin2019ultracontractivity, wang2018eigenvalue,lin2017log}.
In this paper, we establish a Li-Yau inequality by modifying the heat equation instead of the curvature condition. The motivation is that the discrete Bakry Emery curvature seems not to be compatible with the logarithm, so we aim to replace $\log P_t f$ by $u_t$ which satisfies a suitable differential equation. Assuming a chain rule for the Laplacian, one has \begin{align}\label{eq:dtu} \partial_t u_t = \partial_t \log P_t f=\frac{\Delta P_t f}{P_t f} = \frac{\Delta e^{u_t}}{e^{u_t}} = \Delta u_t + \Gamma u_t \end{align}
where $\Gamma u_t = |\nabla u_t|^2$. The main idea in this paper is to take \eqref{eq:dtu} as a definition in the discrete setting and use it to approximate $\log P_t f$.
By Picard-Lindel\"of, we have uniqueness and short-time existence of the initial value problem. Long-time existence cannot be guaranteed in general since the non-linear $\Gamma$ term can make the differential equation explode. However, we will prove long time existence via a gradient estimate under non-negative Bakry Emery curvature (see Theorem~\ref{thm:GradEst}). Having established Li-Yau inequality (see Theorem~\ref{thm:LiYau}), we deduce a parabolic Harnack inequality (see Theorem~\ref{thm:Harnack}) which will be used to prove volume doubling (see Theorem~\ref{thm:volDoubling}). By this, we will show that there are no expander graphs satisfying the Bakry-Emery curvature condition $CD(0,n)$ with finite dimension $n$ (see Corollary~\ref{cor:Expanders}).
Let us briefly recall some known consequences of lower bounds on the Bakry Emery curvature. Eigenvalue estimates in terms of the curvature and diameter are given in \cite{chung2017curvature,lin2010ricci}. Buser's inequality has been proven in \cite{liu2014eigenvalue,liu2018buser, liu2015curvature,klartag2015discrete}. Gaussian concentration has been proven in \cite{schmuckenschlager1998curvature, fathi2015curvature}. Optimal diameter bounds under positive curvature and their rigidity have been investigated in \cite{liu2017rigidity,liu2016bakry, fathi2015curvature}. Graphs with non-constant lower bounds on the Bakry Emery curvature have been investigated in \cite{munch2018perpetual,liu2017distance}.
\subsection{Setup and Notation} In this paper, we restrict ourselves to finite graphs. A finite graph $G=(V,q)$ consists of a finite set $V$ and a function $q:V^2 \to [0,\infty)$ with $q(x,x)= 0$ for all $x \in V$. The function $q$ corresponds to the jump rates of the corresponding continuous time Markov chain. A graph is called reversible or undirected if there exists a function $m: V\to (0,\infty)$ s.t. \[ q(x,y)m(x)=q(y,x)m(y). \] The function $m$ is called the reversible measure, following the terminology of Markov chains. We write $x \sim y$ if $q(x,y)>0$ or $q(y,x)>0$. We remark that this definition is slightly non-standard in the non-reversible case as it might happen that $q(x,y)>0=q(y,x)$. We define the minimum jump rate \[ q_{\min}:=\min \{ q(x,y): x,y \in V, \quad q(x,y)>0 \}. \] In fact, most of the results of this article do not require reversibility. We denote the maximum vertex degree by \[ D:=\max_x \sum_y q(x,y). \] We write $C(V) = \mathbb{R}^V$. The negative semidefinite Laplacian $\Delta: C(V) \to C(V)$ is given by \[ \Delta f(x):=\sum_{y} q(x,y)(f(y)-f(x)). \] The Laplacian generates the heat semigroup $P_t=e^{t\Delta}$, giving the unique solution to the heat equation $\partial_t P_t f = \Delta P_t f$ with $P_0 f = f$. The combinatorial graph distance for $x,y \in V$ is given by \[ d(x,y):= \inf\{n: x=x_0 \sim \ldots \sim x_n=y\}. \] We write $B_r(x):=\{y \in V: d(x,y) \leq r\}$.
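The setup above is concrete enough to compute with. A minimal numerical sketch (the 3-vertex path and unit jump rates are illustrative choices, not taken from the text) of the Laplacian as a matrix and the heat semigroup $P_t = e^{t\Delta}$ via the matrix exponential:

```python
# Illustrative sketch (not from the paper): the graph Laplacian
# Delta f(x) = sum_y q(x,y)(f(y)-f(x)) as a matrix on a 3-vertex path
# with unit jump rates, and the heat semigroup P_t = exp(t*Delta).
import numpy as np
from scipy.linalg import expm

q = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])   # symmetric, so m = 1 is a reversible measure
L = q - np.diag(q.sum(axis=1))    # matrix of the Laplacian Delta

def P(t, f):
    """Heat semigroup P_t f = exp(t*Delta) f, solving the heat equation."""
    return expm(t * L) @ f

f = np.array([1.0, 0.0, 0.0])
# On a reversible graph, P_t preserves the total mass sum_x f(x) m(x).
mass = P(1.0, f).sum()
```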
For reversible graphs with reversible measure $m$, we can define the $\ell_p$ norms. For $f \in C(V)$, the $\ell_p$ norm w.r.t. $m$ is given by $\|f\|_p := \left(\sum_x |f(x)|^p m(x)\right)^{1/p}$ for $p \in [1,\infty)$ and $\|f\|_\infty := \sup_{x \in V} |f(x)|$. The $\ell_\infty$-norm is also well defined in the non-reversible case.
We recall the Bakry Emery calculus introduced in \cite{BakryEmery85} and first adopted to discrete settings in \cite{schmuckenschlager1998curvature, lin2010ricci}. The operator $\Gamma_k : C(V) \times C(V) \to C(V)$ is inductively given by $\Gamma_0(f,g) := fg$ and \[ 2\Gamma_{k+1}(f,g) := \Delta \Gamma_k(f,g) - \Gamma_k(f,\Delta g) - \Gamma_k(\Delta f,g). \] We write $\Gamma_k f := \Gamma_k(f,f)$ and $\Gamma := \Gamma_1$. A graph is said to satisfy the curvature dimension inequality $CD(K,n)$ if for all $f \in C(V)$, \[ \Gamma_2 f \geq \frac 1 n (\Delta f)^2 + K \Gamma f. \] The parameter $K$ can be seen as a lower Ricci curvature bound and $n$ as a local upper dimension bound.
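The iterated operators are straightforward to implement. A short sketch (again on an illustrative 3-vertex path with unit rates) computing $\Gamma$ and $\Gamma_2$ from the inductive definition and checking that $\Gamma f$ agrees with the squared-gradient formula $\Gamma f(x) = \frac 1 2 \sum_y q(x,y)(f(y)-f(x))^2$:

```python
# Illustrative sketch (3-vertex path, unit rates) of the Bakry-Emery
# operators: Gamma_0(f,g) = f*g and
# 2*Gamma_{k+1}(f,g) = Delta Gamma_k(f,g) - Gamma_k(f, Delta g) - Gamma_k(Delta f, g).
import numpy as np

q = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
L = q - np.diag(q.sum(axis=1))

def Delta(f):
    return L @ f

def Gamma(f, g):
    # k = 0 step of the induction, with Gamma_0(f,g) = f*g
    return 0.5 * (Delta(f * g) - f * Delta(g) - g * Delta(f))

def Gamma2(f, g):
    # k = 1 step of the induction
    return 0.5 * (Delta(Gamma(f, g)) - Gamma(f, Delta(g)) - Gamma(Delta(f), g))

# Gamma f coincides with the squared gradient (1/2) sum_y q(x,y)(f(y)-f(x))^2.
f = np.array([2.0, -1.0, 0.5])
direct = 0.5 * np.array([q[x] @ (f - f[x]) ** 2 for x in range(3)])
```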
In the paper, we will assume throughout that $(u_t)_{t \in [0,T)} \in C(V)$ is a solution to \[ \partial_t u_t = \Delta u_t + \Gamma u_t. \] Since this equation has a unique solution by Picard Lindel\"of, we use the semigroup notation and write \[ L_t f := u_t \] when assuming $u_0 = f$.
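Since the right-hand side $\Delta + \Gamma$ is an explicit polynomial map, $L_t f$ can be computed with any standard ODE integrator. An illustrative sketch (graph and initial datum are arbitrary choices satisfying $\Gamma u_0 \leq q_{\min}/2$, not taken from the paper):

```python
# Illustrative sketch of solving  d/dt u_t = Delta u_t + Gamma u_t  with a
# standard ODE integrator; graph and initial datum are arbitrary choices
# with Gamma u_0 <= q_min/2 = 1/2.
import numpy as np
from scipy.integrate import solve_ivp

q = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
L = q - np.diag(q.sum(axis=1))

def rhs(t, u):
    # Delta u + Gamma u with Gamma u(x) = (1/2) sum_y q(x,y)(u(y)-u(x))^2
    Gu = 0.5 * np.array([q[x] @ (u - u[x]) ** 2 for x in range(len(u))])
    return L @ u + Gu

u0 = np.array([0.3, 0.0, -0.3])
sol = solve_ivp(rhs, (0.0, 1e-3), u0, rtol=1e-10, atol=1e-12)
u_small = sol.y[:, -1]
# for small t, L_t u0 is close to the explicit Euler step u0 + t*(Delta u0 + Gamma u0)
```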
\subsection{Basics on ordinary differential equations}
As we will employ a modified heat equation, we recall some basic facts about autonomous ordinary differential equations for the convenience of the reader. This review is mainly based on \cite{grigor2007ODE}. Let $A:\mathbb{R}^n \to \mathbb{R}^n$ be locally Lipschitz, i.e., for every $R>0$ there exists $c(R)$ s.t. for all $x,y \in \mathbb{R}^n$ with $\|x\|_2, \|y\|_2 \leq R$, one has $\|A(x)-A(y)\|_2 \leq c(R) \|x-y\|_2$. Let $u_0 \in \mathbb{R}^n$. Then by the Picard-Lindel\"of theorem, there exist an interval $(a,b)$ with $0 \in (a,b)$ and a solution $(U_t)_{t\in(a,b)}$ of $U_0=u_0$ and $\partial_t U_t = A(U_t)$ for $t \in (a,b)$, such that every other solution $\widetilde U_t$ on an interval $I$ containing zero satisfies $I \subset (a,b)$ and $\widetilde U_t = U_t$ on $I$. Moreover, the unique solution $U_t$ depends continuously on $u_0$ for fixed $t$. Furthermore, if $b < \infty$, then $U_t$ leaves every compact set, i.e., the set $\{U_t:t \in [0,b)\}$ is unbounded.
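The blow-up alternative is easy to see in the scalar case: $u' = u^2$, $u(0)=1$ has the explicit solution $u(t) = 1/(1-t)$, which exists only for $t < 1$ and leaves every compact set as $t \to 1$. An illustrative numerical check:

```python
# Illustrative check of the blow-up alternative: u' = u^2 with u(0) = 1 has
# the explicit solution u(t) = 1/(1-t) on (-inf, 1), so the maximal existence
# time is b = 1 and u_t leaves every compact set as t -> 1.
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, u: u ** 2, (0.0, 0.9), [1.0], rtol=1e-10, atol=1e-12)
u_end = sol.y[0, -1]   # close to 1/(1-0.9) = 10
```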
\section{A modified heat equation}
In this section, we give fundamental properties of the modified heat equation. Particularly, we prove gradient estimates, long-time existence, monotonicity, and we establish a precise comparison to the linear heat semigroup $P_t$.
\subsection{Gradient estimate}
The following theorem is vital to ensure long time existence of $u_t$. It states that the gradient of $u_t$ decays with rate $K$ assuming $CD(K,\infty)$.
We will assume $K\geq 0$ since otherwise we cannot ensure long time existence of $u_t$.
\begin{theorem}[Gradient estimate]\label{thm:GradEst} Let $G=(V,q)$ be a finite graph satisfying
$CD(K,\infty)$ for some $K\geq 0$. Suppose $\|\Gamma u_0\|_\infty \leq q_{\min}/2$. Then for all $t\geq 0$, the solution $u_t$ exists and satisfies \[
\|\Gamma u_t\|_\infty \leq e^{-2Kt} \|\Gamma u_0\|_\infty. \] In particular,
$|u_t(y)-u_t(x)| \leq 1$ for all $x\sim y$ and all $t\geq 0$. \end{theorem} In case of negative curvature, the same statement can be proven under a stronger assumption on the gradient of $u_0$, however, existence of $u_t$ can only be guaranteed until a fixed time depending on the curvature and the initial gradient bound. \begin{proof}
We prove the theorem by contradiction. By the Picard Lindel\"of theorem, one has short time existence and smoothness of the solution since the right side $\Delta + \Gamma$ is smooth. By continuity of the solution in the initial state $u_0$, we can assume $\|\Gamma u_0\|_\infty < q_{\min}/2$ without loss of generality. Suppose $\|\Gamma u_t\|_\infty > e^{-2Kt} \|\Gamma u_0\|_\infty$ for some $t >0$.
Then $\|\Gamma u_t\|_\infty - \varepsilon t \geq e^{-2Kt} \|\Gamma u_0\|_\infty$ for sufficiently small $\varepsilon>0$. Let \[
t_0 := \inf\{t>0: \|\Gamma u_t\|_\infty \geq e^{-2Kt} \|\Gamma u_0\|_\infty + \varepsilon t\}. \]
Then $t_0 >0$ as $\|\Gamma u_t\|_\infty$ is continuous in $t$ for small $t\geq 0$. If $\varepsilon$ is small enough, then $\|\Gamma u_{t_0}\|_\infty \leq q_{\min}/2$ since $\|\Gamma u_0\|_\infty < q_{\min}/2$ by assumption. We write $f:= u_{t_0}$ and observe that \[
\inf_{x \colon \Gamma f(x)=\|\Gamma u_{t_0}\|_\infty} \partial_t \Gamma u_{t_0}(x) = \partial_t^- \|\Gamma u_{t_0}\|_\infty \geq -2K e^{-2Kt_0} \|\Gamma u_{0}\|_\infty +\varepsilon \geq -2K \|\Gamma f\|_\infty + \varepsilon \] where $\partial_t^-$ is the left derivative and the first inequality follows from $t_0>0$. On the other hand, \begin{align*} \partial_t \Gamma u_{t_0} = 2\Gamma(u_{t_0},\partial_t u_{t_0}) = 2\Gamma(f, \Delta f + \Gamma f) = -2\Gamma_2 f + \Delta \Gamma f + 2 \Gamma(f,\Gamma f) \leq -2K\Gamma f + \Delta \Gamma f + 2 \Gamma(f,\Gamma f) \end{align*} where we used $CD(K,\infty)$ in the last estimate. If $\Gamma f$ attains its maximum in $x$, then
$-2K \Gamma f(x) = -2K\|\Gamma f\|_\infty$ and \begin{align*} \Delta \Gamma f(x) + 2\Gamma(f,\Gamma f)(x) &= \sum_{y\sim x} q(x,y)(\Gamma f(y)-\Gamma f(x)) +\sum_{y\sim x} q(x,y)(\Gamma f(y)-\Gamma f(x))(f(y)-f(x))\\ &=\sum_{y\sim x} q(x,y)(\Gamma f(y)-\Gamma f(x))(1 + f(y)-f(x)). \end{align*} Since $\Gamma f$ is maximal in $x$, we have $\Gamma f(y)-\Gamma f(x) \leq 0$ and since $\Gamma f(x) \leq q_{\min}/2$, we have $(f(y)-f(x))^2 \leq 1$ whenever $q(x,y)>0$, implying $1+f(y)-f(x) \geq 0$. In particular, \[ \Delta \Gamma f(x)+ 2 \Gamma(f,\Gamma f)(x) \leq 0. \] Putting everything together yields \[
-2K\|\Gamma f\|_\infty + \varepsilon \leq \partial_t^- \|\Gamma u_{t_0}\|_\infty = \inf_{x \colon \Gamma f(x)=\|\Gamma f\|_\infty} \partial_t \Gamma u_{t_0}(x) \leq -2K\|\Gamma f\|_\infty. \]
This is a contradiction proving the desired inequality for all $t$ for which $u_t$ exists. Finally, long time existence follows since $|\partial_t u_t| \leq 2D \|u_t\|_\infty + \|\Gamma u_0\|_\infty \leq 2D \|u_t\|_\infty + q_{\min}/2$, showing via Gr\"onwall's inequality that $u_t$ stays within a compact set on every finite time interval. This finishes the proof. \end{proof}
\begin{remark} One might be tempted to think that the above gradient estimate characterizes $CD(K,\infty)$. This, however, is not true, as the proof only uses maximality of the gradient. Particularly, the graph $G=(\{1,2,3\},q)$ with $q(1,2)=2$ and $q(2,1)=1$ and $q(2,3)=5$ and $q(3,2)=1$ and $q(1,3)=q(3,1)=0$ satisfies $CD(0,\infty)$ with $0$ as optimal curvature bound, but it satisfies the gradient estimate of the theorem with $K=1$, as an elementary computation shows. \end{remark}
\begin{remark} For comparison purposes, we recall similar gradient estimates. \begin{enumerate}[(i)] \item Lower bounds of the Bakry Emery curvature can be characterized via gradient estimates for the heat semigroup, see \cite{lin2015equivalent,keller2018gradient, hua2017stochastic,gong2017equivalent}. Particularly, $CD(K,\infty)$ is equivalent to \[ \Gamma P_t f \leq e^{-2Kt}P_t \Gamma f. \] \item Lower bounds on the Ollivier curvature (see \cite{ollivier2007ricci,ollivier2009ricci}) can also be characterized via a Lipschitz decay of the heat semigroup. Particularly by \cite{munch2017ollivier}, Ollivier curvature bounded from below by $K$ is equivalent to \[
\|\nabla P_t f \|_\infty \leq e^{-Kt}\|\nabla f \|_\infty. \]
where $\|\nabla f\|_\infty$ denotes the optimal Lipschitz constant w.r.t the canonical graph distance $d$. \item Lower bounds on a modified Ollivier curvature can be characterized via a Lipschitz decay of the logarithm of the heat semigroup. Particularly by \cite[Theorem~1.8]{kempton2019large}, the exponential Ollivier curvature bounded from below by $K$ is equivalent to \[
\|\nabla \log P_t f \|_\infty \leq e^{-Kt}\|\nabla \log f \|_\infty. \] \item By \cite{erbar2018poincare}, the entropic Ricci curvature bounded from below by $K$ for a reversible graph with reversible measure $m$ is equivalent to \[
\|\nabla P_t f\|_\rho^2 \leq e^{-2Kt}\|\nabla f\|^2_{P_t \rho} \] for all positive $f,\rho \in C(V)$ where \[
\|\nabla f\|^2_{\rho} := \sum_{y\sim x} q(x,y)m(x)(f(y)-f(x))^2 \frac{\rho(y)-\rho(x)}{\log \rho(y)- \log \rho(x)} \] and the latter term is set to be $\rho(x)$ if $\rho(y)=\rho(x)$. \end{enumerate}
\end{remark}
\subsection{Monotonicity} We recall that $L_t f$ is the unique maximal solution to $L_0 f = f$ and \[ \partial_t L_t f = \Delta L_t f + \Gamma L_t f. \] As the heat semigroup $P_t$ is monotone, we expect the same for $L_t$. This indeed holds true when assuming a small enough gradient of the initial value, as shown in the following theorem.
\begin{theorem}[Monotonicity]\label{thm:Monotonicity} Let $G=(V,q)$ be a finite graph satisfying
$CD(K,\infty)$ for some $K\geq 0$. Suppose $\|\Gamma f\|_\infty \leq q_{\min}/2$. Further suppose $g \geq f$. Then for all $t\geq 0$ for which $L_t g$ exists, \[ L_t g \geq L_t f. \] \end{theorem} \begin{remark}
The condition $\|\Gamma f\|_\infty \leq q_{\min}/2$ is necessary since $L_t f$ can explode otherwise while $L_t g$ can stay constant. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:Monotonicity}] Suppose $L_t g(x) <L_t f(x)$ for some $x$ and $t$. Let $\varepsilon>0$ be small enough s.t. \[ t_0 := \inf\{t:L_t g(x) +\varepsilon t < L_t f(x) \mbox{ for some }x\} \] is finite. Let $x_0$ be s.t. $L_{t_0} g(x_0) +\varepsilon t_0 = L_{t_0} f(x_0)$. We observe $$ f_y:=L_{t_0} f(y) - L_{t_0}f(x_0) \leq L_{t_0} g(y) - L_{t_0}g(x_0) =:g_y. $$ Thus, \[ \sum_{y}q(x_0,y) f_y\left(1+\frac{f_y}2 \right)= \partial_t L_t f(x_0)\Big|_{t=t_0} \geq \varepsilon +\partial_t L_t g(x_0)\Big|_{t=t_0} = \varepsilon + \sum_{y}q(x_0,y) g_y\left(1+\frac{g_y}2 \right). \] By Theorem~\ref{thm:GradEst}, we have $g_y \geq f_y \geq -1$ for all $y \sim x_0$ and thus, \[ f_y\left(1+\frac{f_y}2 \right) \leq g_y\left(1+\frac{g_y}2 \right) \] for all $y \sim x_0$. This contradicts the above inequality, which finishes the proof. \end{proof}
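The comparison of $f_y(1+f_y/2)$ with $g_y(1+g_y/2)$ for $g_y \geq f_y \geq -1$ rests on the map $s \mapsto s(1+s/2)$ being non-decreasing on $[-1,\infty)$, since its derivative is $1+s \geq 0$. An illustrative grid check:

```python
# Illustrative check: s -> s*(1+s/2) is non-decreasing on [-1, oo),
# since its derivative 1+s is non-negative there.
import numpy as np

s = np.linspace(-1.0, 3.0, 4001)
vals = s * (1 + s / 2)
monotone = bool(np.all(np.diff(vals) >= 0))
```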
The monotonicity gives another justification that the non-linear semigroup $L_t$ is a suitable substitute for $\log P_t$. However, the striking argument for the analogy between $L_t$ and $\log P_t$ is given in the next subsection.
\subsection{Semigroup and $\ell_1$-norm comparison}
We now compare $u_t$ with $\log P_t f$ in a precise way. Particularly, $e^{u_t}$ can be upper and lower bounded by $(P_t e^{\alpha u_0})^{1/\alpha}$ with suitable choices of $\alpha$, as shown in the following theorem.
\begin{theorem}[Semigroup comparison]\label{thm:SemigroupCompare} Let $G=(V,q)$ be a finite graph satisfying $CD(0,\infty)$. Suppose $\Gamma u_0 \leq q_{\min}/2$. Then, \[ P_t e^{\alpha u_0} \geq e^{\alpha u_t} \qquad \mbox{ for } \alpha \geq 1.60, \] and \[ P_t e^{\alpha u_0} \leq e^{\alpha u_t} \qquad \mbox{ for } \alpha \leq 0.76. \] \end{theorem} \begin{proof} Let \[ G(s):=P_{t-s}e^{\alpha u_s}. \] Then, \begin{align*} \partial_s G(s) = P_{t-s} \left(\partial_s e^{\alpha u_s} - \Delta e^{\alpha u_s} \right) = P_{t-s} \left(\alpha\partial_s(u_s) e^{\alpha u_s} - \Delta e^{\alpha u_s} \right) = P_{t-s} \left(\alpha(\Delta u_s +\Gamma u_s) e^{\alpha u_s} - \Delta e^{\alpha u_s} \right) \end{align*} We fix $x \in V$. With $t(y):=u_s(y)-u_s(x)$, we calculate \[ \left(\alpha(\Delta u_s +\Gamma u_s) e^{\alpha u_s} - \Delta e^{\alpha u_s} \right) (x) = e^{\alpha u_s}(x)\sum_y q(x,y) \left[\alpha t(y)\left(1+ \frac {t(y)}2\right) - \left(e^{\alpha t(y)}-1 \right) \right]. \]
By Theorem~\ref{thm:GradEst}, we have $|t(y)|\leq 1$ for all $y \sim x$. For $\alpha \geq 1.60$ and $|t|\leq 1$, one has \[ \alpha t(1+t/2) - (e^{\alpha t}-1) \leq 0 \] and for
$\alpha \leq 0.76$ and $|t|\leq 1$, one has \[ \alpha t(1+t/2) - (e^{\alpha t}-1) \geq 0. \] Thus, for $\alpha \geq 1.60$, we have $\partial_s G(s) \leq 0$ yielding $P_t e^{\alpha u_0} = G(0) \geq G(t) = e^{\alpha u_t}$. Analogously, we obtain $P_t e^{\alpha u_0} \leq e^{\alpha u_t}$ in the case $\alpha \leq 0.76$. This finishes the proof. \end{proof}
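The two elementary inequalities used in the proof can be verified numerically on a grid (illustrative; both constants are nearly sharp, at $t=-1$ for $\alpha = 1.60$ and at $t=1$ for $\alpha = 0.76$):

```python
# Illustrative grid check: for |t| <= 1,
# h_alpha(t) = alpha*t*(1+t/2) - (exp(alpha*t) - 1)
# is <= 0 for alpha = 1.60 and >= 0 for alpha = 0.76.
import numpy as np

t = np.linspace(-1.0, 1.0, 20001)

def h(alpha):
    return alpha * t * (1 + t / 2) - (np.exp(alpha * t) - 1)

upper_ok = bool(h(1.60).max() <= 1e-9)   # alpha = 1.60: h <= 0 on [-1,1]
lower_ok = bool(h(0.76).min() >= -1e-9)  # alpha = 0.76: h >= 0 on [-1,1]
```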
As the heat semigroup of a reversible graph preserves the $\ell_1$ norm, we can already conclude that $\|e^{\alpha u_t}\|_1$ is decreasing in $t$ for $\alpha \geq 1.60$ and increasing for $\alpha \leq 0.76$. The following theorem improves the bounds for $\alpha$ to $\log 3 \approx 1.1$ and $1$, respectively.
\begin{theorem}[$\ell_1$-comparison] \label{thm:ell1Compare} Let $G=(V,q)$ be a finite reversible graph satisfying $CD(0,\infty)$. Let $m$ be the reversible measure. Suppose $\Gamma u_0 \leq q_{\min}/2$. Then, \[
\|e^{\alpha u_0}\|_1 \geq \|e^{\alpha u_t}\|_1 \qquad \mbox{ for } \alpha \geq \log 3 , \] and \[
\| e^{\alpha u_0}\|_1 \leq \|e^{\alpha u_t}\|_1 \qquad \mbox{ for } \alpha \leq 1. \] \end{theorem}
\begin{proof} Let \[
G(s):=\|e^{\alpha u_s}\|_1. \] We write $w(x,y)=q(x,y)m(x)=q(y,x)m(y)$ Then, \begin{align*} \partial_s G(s) &= \langle \alpha \partial_s u_s, e^{\alpha u_s} \rangle = \alpha \langle \Delta u_s + \Gamma u_s, e^{\alpha u_s} \rangle = \frac\alpha 2 \sum_{y\sim x}w(x,y) e^{\alpha u_s(x)}(u_s(y) - u_s(x))\left(2+u_s(y)-u_s(x) \right) \\ &= \frac \alpha 4 \sum_{y\sim x} w(x,y) \left[ e^{\alpha u_s(x)}(u_s(y) - u_s(x))\left(2+u_s(y)-u_s(x) \right)\right] \\& \qquad\quad +w(x,y) \left[ e^{\alpha u_s(y)}(u_s(x) - u_s(y))\left(2+u_s(x)-u_s(y) \right)\right] \\ &= \frac \alpha 4 \sum_{y\sim x}w(x,y) \exp\left(\frac{\alpha(u_s(x)+u_s(y))}{2}\right) \cdot \left[ e^{-\alpha t/2}\cdot t (2+t) + e^{\alpha t/2} \cdot (-t) \cdot (2-t) \right] \end{align*} where $t=t(s,x,y)=u_s(y)-u_s(x) \in [-1,1 ]$ for all $y\sim x$ by Theorem~\ref{thm:GradEst}. An elementary calculation shows that for $t \in [-1,1]$, the expression \[ e^{-\alpha t/2}\cdot t (2+t) + e^{\alpha t/2} \cdot (-t) \cdot (2-t) \] is non-negative if $\alpha \leq 1$ and non-positive if $\alpha \geq \log 3$. Thus, $\partial_s G(s)$ is non-negative if $\alpha \leq 1$ and non-positive if $\alpha \geq \log 3$ which directly implies the claim of the theorem. \end{proof}
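The sign claim for the bracket can likewise be verified on a grid (illustrative). Note that the expression is even in $t$, and for $\alpha = \log 3$ it vanishes exactly at $t = \pm 1$:

```python
# Illustrative grid check: for t in [-1,1],
# g_a(t) = exp(-a*t/2)*t*(2+t) - exp(a*t/2)*t*(2-t)
# is non-negative for a = 1 and non-positive for a = log 3.
import numpy as np

t = np.linspace(-1.0, 1.0, 20001)

def g(a):
    return np.exp(-a * t / 2) * t * (2 + t) - np.exp(a * t / 2) * t * (2 - t)

nonneg_ok = bool(g(1.0).min() >= -1e-9)        # a = 1
nonpos_ok = bool(g(np.log(3)).max() <= 1e-9)   # a = log 3
```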
We remark that in both Theorem~\ref{thm:SemigroupCompare} and Theorem~\ref{thm:ell1Compare}, the bounds for $\alpha$ can be further improved when assuming that $\Gamma u_0$ is bounded by a smaller constant.
We will use the semigroup comparison together with the Harnack inequality to prove volume doubling. The Harnack inequality will be proven in the next section.
\section{Li-Yau-Hamilton-Harnack type inequalities}
The gradient estimate $|u_t(y)-u_t(x)| \leq 1$ for $y \sim x$ from Theorem~\ref{thm:GradEst} turns out to be essential for proving various Li-Yau-Hamilton-Harnack type inequalities.
\subsection{Li-Yau inequality}
Under several curvature assumptions which seem hard to verify, a Li-Yau inequality has been proven for the heat semigroup \cite{bauer2015li,horn2014volume,munch2014li, dier2017discrete,gong2018li}. Using the gradient estimate from Theorem~\ref{thm:GradEst}, we prove a Li-Yau inequality under $CD(0,n)$ for the modified heat equation \[ \partial_t u_t = \Delta u_t + \Gamma u_t. \] We recall that we consider $u_t$ as a substitute for $\log P_t f$.
\begin{theorem}[Li-Yau inequality]\label{thm:LiYau} Let $G=(V,q)$ be a finite graph satisfying
$CD(0,n)$ for some $n<\infty $. Suppose $\|\Gamma u_0\|_\infty \leq q_{\min}/2$. Then, \[ \Gamma u_t - \partial_t u_t = -\Delta u_t \leq \frac{n}{2t}. \] \end{theorem} \begin{proof} We fix $T>0$. Let $F=-t\Delta u$ on $V \times [0,T]$. Suppose $\max F > n/2$. Let the maximum be attained at $(x,t)$ which is fixed from now on. Then, $t>0$ and by $CD(0,n)$, \begin{align*} 0\leq \partial_t F(x,t) = -t \Delta \partial_t u_t(x) - \Delta u_t(x) &= - t\Big(\Delta \Delta u_t(x)+ \Delta \Gamma u_t(x)\Big) - \Delta u_t(x)\\ &\leq -t\left(\Delta \Delta u_t(x) + 2\Gamma(u_t,\Delta u_t)(x) + \frac 2 n(\Delta u_t(x))^2\right) - \Delta u_t(x). \end{align*} We observe \begin{align*} \Delta\Delta u_t(x) + 2\Gamma(u_t,\Delta u_t)(x) &= \sum_y q(x,y) (\Delta u_t(y)-\Delta u_t (x)) + \sum_{y}q(x,y)(\Delta u_t(y)-\Delta u_t (x))(u_t(y)-u_t(x))\\ &=\sum_{y}q(x,y)(\Delta u_t(y)-\Delta u_t (x))(1+u_t(y)-u_t(x)) \geq 0 \end{align*} since $\Delta u_t(y)-\Delta u_t (x) \geq 0$ by minimality of $\Delta u_t(x)$ and since $1+u_t(y)-u_t(x) \geq 0$ for $y \sim x$ by Theorem~\ref{thm:GradEst}. Thus, \begin{align*} 0\leq \partial_t F(x,t) \leq -\frac{2t}{n}(\Delta u_t(x))^2 - \Delta u_t(x). \end{align*} Rearranging gives $F(x,t)= -t\Delta u_t(x) \leq \frac n 2$, contradicting the assumption $\max F > n/2$. Hence $\max F \leq n/2$ on $V \times [0,T]$ for every $T>0$, i.e., $-\Delta u_t \leq \frac n{2t}$. This finishes the proof. \end{proof}
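The final rearrangement (if $a := -\Delta u_t(x) > 0$ satisfies $0 \leq -\frac{2t}{n}a^2 + a$, then $ta \leq n/2$) amounts to dividing the hypothesis by $a$. A randomized sanity check (illustrative):

```python
# Illustrative randomized check of the rearrangement: whenever
# a > 0 satisfies 0 <= -(2t/n)*a^2 + a, it follows that t*a <= n/2.
import numpy as np

rng = np.random.default_rng(1)
t, n, a = rng.uniform(0.1, 5.0, (3, 100000))
hypothesis = -(2 * t / n) * a ** 2 + a >= 0
violations = int(np.sum(hypothesis & (t * a > n / 2 + 1e-12)))
```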
\subsection{Harnack inequality}
One of the main applications of the Li-Yau inequality is the parabolic Harnack inequality, which can be considered as an integrated version of the Li-Yau inequality. The proof of the following theorem follows \cite{munch2014li} and \cite{bauer2015li}.
\begin{theorem}[Harnack inequality]\label{thm:Harnack} Let $G=(V,q)$ be a finite graph satisfying
$CD(0,n)$ for some $n<\infty $. Suppose $\|\Gamma u_0\|_\infty \leq q_{\min}/2$. Then, for $x,y\in V$ and $T_2>T_1 >0$, \[ u_{T_1}(x)-u_{T_2}(y) \leq \frac n 2 \log \left(\frac{T_2}{T_1}\right) + \frac{2d(x,y)^2}{q_{\min}(T_2-T_1)} \] \end{theorem}
\begin{proof} We first prove the claim for $x\sim y$, assuming first that $q(y,x)>0$. By the Li-Yau inequality (Theorem~\ref{thm:LiYau}), \begin{align*} u_{T_1}(x)-u_{T_2}(y) &= -\int_{T_1}^s \partial_t u_t(x)\, dt + (u_s(x)-u_s(y)) - \int_{s}^{T_2}\partial_t u_t(y)\, dt \\&\leq \int_{T_1}^{T_2} \frac n{2t}\, dt - \int_{s}^{T_2} \Gamma u_t(y)\, dt + (u_s(x)-u_s(y)) \\& \leq \frac n 2 \log \left(\frac{T_2}{T_1}\right) - \int_{s}^{T_2} \Gamma u_t(y)\, dt + \sqrt{\frac{2\Gamma u_s(y)}{q_{\min}}}. \end{align*} Minimizing over $s \in [T_1,T_2]$ and applying \cite[Lemma~5.3]{munch2014li} gives \[ u_{T_1}(x)-u_{T_2}(y) \leq \frac n 2 \log \left(\frac{T_2}{T_1}\right) + \frac{2}{q_{\min}(T_2-T_1)} \] as desired. In the case $q(x,y)>0=q(y,x)$, we keep the $\int_{T_1}^s\Gamma u_t(x)\, dt$ term instead of the $\int_{s}^{T_2} \Gamma u_t(y)\, dt$ term, and we estimate $u_s(x)-u_s(y)$ by the gradient at $x$ instead of $y$. By doing so, we end up with the same inequality for $u_{T_1}(x)-u_{T_2}(y)$.
The general case, i.e. when $x$ and $y$ are not adjacent, follows by subdividing the time interval $[T_1,T_2]$ in $d(x,y)$ many subintervals of the same size and by applying the above inequality successively. This finishes the proof. \end{proof}
\subsection{Hamilton type estimate} The Hamilton estimate (see \cite{hamilton1993matrix}) states that on Riemannian manifolds with non-negative Ricci curvature, one has for all positive functions $f$, \[
\Gamma \log P_t f \leq \frac 1 t \log \left(\frac{\|f\|_\infty}{P_t f} \right). \] A discrete version of this estimate under a modified Bakry Emery condition has been established in \cite{horn2019spacial}. Here, we prove the Hamilton type estimate for the modified heat equation under the $CD(K,\infty)$ condition. \begin{theorem}[Hamilton gradient estimate]\label{thm:Hamilton} Let $G=(V,q)$ be a finite graph satisfying $CD(K,\infty)$ for some $K\geq 0$. Suppose $u_0 \leq 0$ and $\Gamma u_0 \leq \frac {q_{\min}}2$. Then, \[ \Gamma u_t \leq -\frac{u_t}{\phi(t)}. \] with \[ \phi(t)= \begin{cases} \frac{e^{2Kt}-1}{2K} &: K>0 \\ t&: K=0. \end{cases} \] \end{theorem}
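We remark that $\phi$ is precisely the solution of $\phi'(t) = 2K\phi(t) + 1$ with $\phi(0)=0$, and that the case $K=0$ is the $K \to 0$ limit of the case $K>0$; this is what makes the final terms in the proof cancel. An illustrative symbolic check:

```python
# Illustrative symbolic check: phi solves phi' = 2*K*phi + 1 with phi(0) = 0,
# and phi(t) = t (the K = 0 case) is the K -> 0 limit.
import sympy as sp

K, t = sp.symbols('K t', positive=True)
phi = (sp.exp(2 * K * t) - 1) / (2 * K)

ode_residual = sp.simplify(sp.diff(phi, t) - 2 * K * phi - 1)  # 0
zero_limit = sp.limit(phi, K, 0)                               # t
```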
\begin{proof} W.l.o.g., we can assume that $\sup u_0 < 0$ giving $\sup_{t} u_t <0$ by Theorem~\ref{thm:Monotonicity}. Let \[ H(\cdot,t):=-\frac{\phi(t)\Gamma u_t}{u_t} \] be defined on $V\times [0,T]$ for some fixed $T>0$. Assume the maximum of $H$ is attained at $(x,t)$. Denote the maximum by $C$. W.l.o.g., we have $C>0$ and $t>0$ giving $\partial_t H(x,t) \geq 0$. Moreover, $\Gamma u(x)=-Cu(x)/\phi(t)$ and $\Gamma u \leq - Cu/\phi(t)$. We aim to show $C\leq 1$ which will be done by computing $\partial_t H$ in the maximum point and by systematically replacing or estimating $\Gamma u$ by $-Cu/\phi(t)$. By Theorem~\ref{thm:GradEst}, we have
$1+u(y)-u(x)\geq 0$ for $y\sim x$ and thus at the maximum point $(x,t)$, \begin{align*} 2K\Gamma u(x) + \partial_t \Gamma u(x) &= 2K\Gamma u(x)+ 2\Gamma(u,\Delta u + \Gamma u) (x) \\ &\leq \Delta \Gamma u(x) + 2\Gamma(u,\Gamma u)(x) \\&=
\sum_{y\sim x}q(x,y)(\Gamma u(y)-\Gamma u(x))(1+u(y)-u(x)) \\ &\leq -\frac C {\phi(t)} \sum_{y\sim x}q(x,y)(u(y)-u(x))(1+u(y)-u(x)) \\ &=-\frac C {\phi(t)} \left( \Delta u + 2 \Gamma u\right)(x)\\ &=-\frac C {\phi(t)} \partial_t u(x) + \frac{C^2}{{\phi(t)}^2}u(x). \end{align*} where we applied $CD(K,\infty)$ in the first estimate and the inequality $1+u(y)-u(x) \geq 0$ in the second estimate. Hence, \[ \partial_t \Gamma u(x) \leq -\frac C {\phi(t)} \partial_t u(x) + \frac{C^2}{{\phi(t)}^2}u(x) + \frac{2KC}{\phi(t)}u(x). \] We now estimate the time derivative of $H$ at $(x,t)$, \begin{align*} 0 \leq \partial_t H &= -\phi'(t)\frac{\Gamma u}{u} - \phi(t)\left(\frac{\partial_t \Gamma u}u - \frac {\Gamma u \partial_t u}{u^2}\right)\\ &=\frac {C\phi'(t)}{\phi(t)} - \phi(t) \left(\frac{\partial_t \Gamma u}u +\frac C {\phi(t)} \frac {\partial_t u}{u} \right) \\ &\leq \frac {C\phi'(t)}{\phi(t)} - \phi(t) \left(\frac{-\frac C {\phi(t)} \partial_t u + \frac{C^2}{\phi(t)^2}u + \frac{2KC}{\phi(t)}u}u +\frac C {\phi(t)} \frac {\partial_t u}{u} \right) \\ &=\frac{C(\phi'(t)-C-2K\phi(t))}{\phi(t)}\\ &=\frac{C(1-C)}{\phi(t)} \end{align*} which clearly gives $C \leq 1$. Hence, $H\leq 1$ which implies \[ \Gamma u_t \leq -\frac {u_t} {\phi(t)} \] as desired. This finishes the proof. \end{proof} By integrating the Hamilton estimate over space, we also get a corresponding Harnack inequality.
\begin{theorem}[Hamilton Harnack inequality]\label{thm:HamiltonHarnack} Let $G=(V,q)$ be a finite graph satisfying $CD(0,\infty)$. Suppose $q(x,y)>0$ for all $x\sim y$.
Suppose $u_0 \leq 0$ and $\Gamma u_0 \leq \frac {q_{\min}}2$. Then for all $x,y \in V$ and all $t>0$, \[
\left|\sqrt{-u_t(y)}- \sqrt{-u_t(x)} \right| \leq \frac{d(x,y)}{\sqrt{2tq_{\min}}}. \] \end{theorem} \begin{proof} For simplicity, we write $u$ instead of $u_t$.
We can assume $x\sim y$ and $-u(y) \geq -u(x)$ without loss of generality since we assumed $q(x,y)>0$ for all $x\sim y$. Thus, we have \begin{align*} \left(\sqrt{-u(y)}-\sqrt{-u(x)}\right)\left(\sqrt{-u(y)}+\sqrt{-u(x)}\right) =u(x)-u(y) \leq \sqrt{\frac{ 2\Gamma u(x)}{q_{\min}}} \leq \sqrt{\frac{ -2u(x)}{tq_{\min}}} \leq \frac{\sqrt{-u(y)}+\sqrt{-u(x)}}{\sqrt{2tq_{\min}}} \end{align*} where we applied Theorem~\ref{thm:Hamilton} in the second estimate. Dividing by $\left(\sqrt{-u(y)}+\sqrt{-u(x)}\right)$ gives \begin{align*}
\left| \sqrt{-u(y)}-\sqrt{-u(x)} \right| \leq \frac{1}{\sqrt{2tq_{\min}}} \end{align*} which finishes the proof. \end{proof}
\begin{remark} A similar estimate was shown in \cite[Theorem~2.5.2]{munch2019non} under non-negative Ollivier curvature, namely \[
|P_t f(y)-P_t f(x)| \leq \frac{d(x,y)}{\sqrt{tq_{\min}}} \cdot \|f\|_\infty. \] In case of non-negative Bakry Emery curvature, there is a corresponding gradient estimate stating \[ \Gamma P_t f \leq \frac{1}{2t}\left(P_t f^2 - (P_t f)^2 \right), \] see e.g. \cite{liu2014eigenvalue, klartag2015discrete,lin2015equivalent}. \end{remark}
\section{Applications}
As applications of Li-Yau inequality, we prove volume doubling and show that there exist no expander graphs satisfying $CD(0,n)$.
\subsection{Volume doubling}
One of the major questions regarding discrete Ricci curvature is whether non-negative Bakry-Emery curvature implies volume doubling. An affirmative answer was given for the special case of normalized birth-death chains in \cite{hua2017ricci}. However, the general case remained open. In this section, we prove volume doubling.
Let us briefly recall the proof idea from the Riemannian setting. We take \[ f:=1_{B_r(x)}. \] Then with $t = C r^2$, \[ P_t f(x) \geq \frac 1 2 \] and with $T=2t$ and the Harnack inequality, \[ P_T f(y) \geq P_t f(x) \cdot \left(\frac{t}{T}\right)^{n/2} \exp \left(-\frac{d(x,y)^2}{4(T-t)} \right) \geq c>0 \] for all $y \in B_{2r}(x)$ where $c$ is independent of $r$. Thus, \[
\vol(B_r(x))= \|f\|_1=\|P_T f\|_1 \geq \|c 1_{B_{2r}(x)}\|_1 = c \vol(B_{2r}(x)). \] Our proof of volume doubling follows the same idea; however, we take a detour via the modified heat equation to apply the Harnack inequality.
\begin{theorem}[Volume doubling]\label{thm:volDoubling} Let $G=(V,q)$ be a finite reversible graph satisfying $CD(0,n)$ for some $n<\infty$. Let $r \geq 4n^2 D/q_{\min}$ and $x \in V$. Then, \[ \frac{m(B_{2r}(x))}{m(B_{r}(x))} \leq \left( 9n \sqrt{\frac {D}{q_{\min}}}\right)^{3 n}. \] \end{theorem}
\begin{proof} Let $r\cdot \sqrt{\frac{q_{\min}}{D}} \geq C > 0$, let $x\in V$ and let $u_t \in C(V)$ be given by $\partial_t u_t = \Delta u_t + \Gamma u_t$ and
$$u_0:=-\frac{C}{r}d(x,\cdot) \vee (-C).$$ We remark that $\Gamma u_0 \leq \frac {q_{\min}}2$ by the assumption $C \leq r\sqrt{q_{\min}/D}$, so we can apply the theory established in this paper.
We have $ e^{\alpha u_0} \geq \alpha u_0 + 1 $ and thus, by Theorem~\ref{thm:SemigroupCompare}, we have \begin{align}\label{eq:LtPtExp} e^{\alpha u_t} \geq P_t{e^{\alpha u_0}} \geq \alpha P_t u_0 + 1 \end{align} for $\alpha = 0.76$. We know $ \Gamma u_0 \leq \frac{DC^2}{2r^2}. $ Thus, by \cite[Theorem~3.1]{lin2015equivalent}, \[ (\Delta P_t u_0)^2 \leq \frac{n}{2t} (P_t \Gamma u_0 - \Gamma P_t u_0) \leq
\frac{n}{2t}\|\Gamma u_0\|_\infty \leq \frac{nDC^2}{4tr^2} \] giving $
|P_t u_0 - u_0| \leq \frac{C\sqrt{nDt}}r $ and hence $ P_t u_0(x) \geq - \frac{C\sqrt{nDt}}r. $ Combining with \eqref{eq:LtPtExp}, we obtain
\[ u_t(x) \geq \frac 1 \alpha \log \left(1 - \alpha\frac{C\sqrt{nDt}}r\right). \] For $d(x,y) \leq R:=2r$ and $t<T$, we have by Theorem~\ref{thm:Harnack}, \[ u_T(y) - u_t (x) \geq -\frac n 2 \log\left(\frac{T}{t}\right) - \frac{2R^2}{q_{\min} (T-t)} \] We now aim to find $t$ and $T$ giving the optimal estimate for $u_T(y)$. We set \[ \sqrt{t}:= \frac{r}{\alpha C \sqrt{nD}} \cdot \frac{{\alpha n}}{\alpha n + 1} \] and get \begin{align*} \frac 1 \alpha \log \left(1 - \alpha\frac{C\sqrt{nDt}}r\right) + \frac n 2 \log{t} &= -\frac{1}\alpha \log(\alpha n + 1) + n \log\left( \frac{r}{\alpha C\sqrt{nD}} \right) - n\log\left(\frac{\alpha n+1}{\alpha n } \right) \\ &\geq -n-\frac 1 \alpha + n \log\left( \frac{r}{\alpha C\sqrt{nD}} \right) \end{align*}
When setting $T:=\frac{4R^2}{n q_{\min}}$ and choosing $C$ s.t. $T/2 \geq t$, we get \[ -\frac n 2 \log T - \frac{2R^2}{q_{\min} (T-t)} \geq -n \log \left( \frac{2R}{\sqrt{nq_{\min}}} \right) - n. \] Putting together gives \[ - u_T(y) \leq 2n + \frac 1 \alpha + n \log\left( \frac{2\alpha C R}{r} \sqrt{\frac D{q_{\min}}} \right) =: Q. \] By Theorem~\ref{thm:ell1Compare} for $\beta = \log 3$ we have \begin{align}\label{eq:ebetaLup}
\|e^{\beta u_T}\|_1 \leq \|e^{\beta u_0}\|_1 \leq \|e^{-\beta C} + (1-e^{-\beta C})1_{B_{r}(x)}\|_1 \leq e^{-\beta C}m(V) + m(B_r(x)). \end{align} On the other hand, \[ e^{\beta u_T} \geq e^{-\beta C} + (e^{-\beta Q}-e^{-\beta C}) 1_{B_R(x)} \] and thus, \begin{align}\label{eq:ebetaLdown}
\|e^{\beta u_T}\|_1 \geq e^{-\beta C}m(V) + (e^{-\beta Q}-e^{-\beta C}) m(B_R(x)) \end{align}
We set $C:= \gamma n$ with \[ \gamma:=\frac{2\alpha n R}{r} \sqrt{\frac{D}{q_{\min}}} \] and get \begin{align*} Q-C &= 2 n + \frac 1 {\alpha} + 2n\log \left(\gamma \right) - \gamma n \\ &\leq n(3 + 2\log \gamma - \gamma) \end{align*} We remark that the assumption $r \geq 4n^2 \sqrt{\frac{D}{q_{\min}}}$ ensures $\Gamma u_0 \leq \frac{q_{\min}} 2$ with our choice of $C$.
Since $\alpha=0.76$, $R/r=2$, $n \geq 2$ and $D/q_{\min} \geq 2$, we get $\gamma \geq 8.5$ and $n(3 + 2\log \gamma - \gamma) \leq - 2$. Thus, by combining \eqref{eq:ebetaLup} and \eqref{eq:ebetaLdown}, \[ \frac{m(B_r(x))}{m(B_R(x))} \geq e^{-\beta Q} - e^{-\beta C} = e^{-\beta Q} \cdot \left(1 - e^{\beta(Q-C)} \right) \geq e^{-\beta Q}(1-e^{-2\beta}) \geq 0.86 e^{-\beta Q}. \] Taking reciprocals gives \begin{align*} \frac{m(B_R(x))}{m(B_r(x))} &\leq 1.17 e^{2n\beta+\frac {\beta}{\alpha}} \left({\frac{2\alpha n R}{r} \sqrt{\frac {D}{q_{\min}}}}\right)^{2\beta n} \\& \leq 1.17 \cdot 4.26 \left({\frac{4.14 n R}{r} \sqrt{\frac {D}{q_{\min}}}}\right)^{2\beta n} \\ &\leq \left( 9n \sqrt{\frac {D}{q_{\min}}}\right)^{3 n} \end{align*} which finishes the proof. \end{proof}
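The elementary numerical facts used in this last step can be checked mechanically. The following sketch (not part of the proof; it simply re-evaluates the stated constants) verifies that $g(\gamma)=3+2\log\gamma-\gamma$ is decreasing and below $-1$ at $\gamma=8.5$, that $1-e^{-2\beta}\geq 0.86$ for $\beta=\log 3$, and that $e^{\beta/\alpha}\leq 4.26$ for $\alpha=0.76$.

```python
import math

# Constants from the proof
alpha, beta = 0.76, math.log(3)

# g(gamma) = 3 + 2*log(gamma) - gamma is decreasing for gamma > 2
# (g'(gamma) = 2/gamma - 1 < 0), so checking gamma = 8.5 suffices.
g = lambda gamma: 3 + 2 * math.log(gamma) - gamma
assert g(8.5) < -1                      # hence n*g(gamma) <= -2 for n >= 2
assert all(g(8.5 + k) < g(8.5) for k in range(1, 100))  # monotone decrease

# 1 - e^{-2*beta} = 1 - 1/9 = 8/9 >= 0.86
assert 1 - math.exp(-2 * beta) >= 0.86

# e^{beta/alpha} <= 4.26, the constant used when bounding e^{2n*beta + beta/alpha}
assert math.exp(beta / alpha) <= 4.26
```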
\begin{remark} Although the theorem only states the volume doubling for large radii, we get an a priori volume doubling constant for small radii in terms of the parameters $D$, $q_{\min}$ and the maximal radius. Particularly, the global volume doubling constant can be upper bounded only in terms of the dimension $n$, the maximal vertex degree $D$ and the minimum positive jump weight $q_{\min}$. \end{remark}
\subsection{Fast volume growth for small radii}
One might hope for a volume doubling constant that depends only on the dimension and is valid also for small radii. This, however, does not hold true, as the following example demonstrates.
\begin{example} Let $\varepsilon>0$ and $G_\varepsilon=(V,q)$ with $V=\{1,2,3\}$ and \begin{align*} q(1,2)&=\varepsilon \\ q(2,3)&=1 \\ q(2,1)&=4 \\ q(3,2)&=4 \\ q(1,3)=q(3,1)&=0. \end{align*} It can be verified e.g. via \cite[Proposition~2.1]{hua2017ricci} that $G_\varepsilon$ satisfies $CD(\frac 1 4,32)$ independently of the choice of $\varepsilon$. However, the reversible measure $m$ satisfies \[ \frac{m(1)}{m(2)}=\frac{q(2,1)}{q(1,2)} = \frac{4}{\varepsilon} \] and similarly, $\frac{m(2)}{m(3)}=4$ showing that
\[ \frac{m(B_2(3))}{m(B_1(3))} \longrightarrow \infty \qquad \mbox{ as } \varepsilon \to 0
\] although the dimension and the curvature in the $CD$ condition stay the same. This shows that a uniform volume doubling with a doubling constant depending only on the dimension cannot hold true, not even in the case of positive curvature. \end{example}
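The measure ratios in the example follow from detailed balance, $m(x)\,q(x,y)=m(y)\,q(y,x)$. A short sketch (illustrative code, not from the paper) reproduces the blow-up of the volume ratio as $\varepsilon\to 0$:

```python
def q(x, y, eps):
    """Jump rates of the example graph G_eps on V = {1, 2, 3}."""
    rates = {(1, 2): eps, (2, 3): 1.0, (2, 1): 4.0, (3, 2): 4.0}
    return rates.get((x, y), 0.0)

def reversible_measure(q, eps):
    """Solve detailed balance m(x) q(x,y) = m(y) q(y,x), normalised by m(3) = 1."""
    m = {3: 1.0}
    m[2] = m[3] * q(3, 2, eps) / q(2, 3, eps)   # m(2)/m(3) = q(3,2)/q(2,3) = 4
    m[1] = m[2] * q(2, 1, eps) / q(1, 2, eps)   # m(1)/m(2) = q(2,1)/q(1,2) = 4/eps
    return m

for eps in (1.0, 0.1, 0.01):
    m = reversible_measure(q, eps)
    # m(B_1(3)) = m(2) + m(3), m(B_2(3)) = m(1) + m(2) + m(3);
    # the ratio grows like 16/(5*eps) as eps -> 0.
    ratio = (m[1] + m[2] + m[3]) / (m[2] + m[3])
    assert m[1] == 16.0 / eps
```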
\subsection{No Expander graphs with $CD(0,n)$}
An expander graph family is a growing family of combinatorial undirected graphs $(G_i)$ of constant degree $D$ s.t. the first positive eigenvalue of $-\Delta_{G_i}$ is uniformly lower bounded by some $\lambda >0$. It is conjectured in \cite{cushing2016bakry} that there is no expander graph family satisfying $CD(0,\infty)$. Using the volume doubling property, we can treat the case of finite dimension.
\begin{corollary}\label{cor:Expanders} Let $n<\infty$. Then there is no expander graph family satisfying $CD(0,n)$. \end{corollary} \begin{proof} Suppose $(G_i)_i$ is an expander graph family satisfying $CD(0,n)$. By \cite[Proposition~2.5]{peled2013lipschitz}, we have \[
|B_r(x)| \geq \min \left[ \frac{|V_i|}2 , C ^{r} \right] \] where $C>1$ only depends on the spectral gap and the degree. In particular, balls have exponential volume growth. This contradicts the polynomial volume growth of balls following from Theorem~\ref{thm:volDoubling}. This finishes the proof. \end{proof}
The question about expanders satisfying $CD(0,\infty)$ does not seem to be tractable by purely analytic methods: the birth-death chain with constant jump rates to one side and constant but different jump rates to the other side has all the analytic properties one expects from expanders, and it satisfies $CD(0,\infty)$.
\printbibliography
\end{document}
\begin{document}
\begin{abstract} This paper proposes a definition of what it means for one system description language to encode another one, thereby enabling an ordering of system description languages with respect to expressive power. I compare the proposed definition with other definitions of encoding and expressiveness found in the literature, and illustrate it on a case study: comparing the expressive power of CCS and CSP. \end{abstract}
\section{Introduction}
This paper aims at answering the question of what it means for one language to encode another, and at making this definition applicable to ordering system description languages like CCS, CSP and the $\pi$-calculus with respect to their expressive power.
To this end it proposes a unifying concept of correct translation between two languages, and adapts it to translations \emph{up to} a semantic equivalence, for languages with a denotational semantics that interprets the operators and recursion constructs as operations on a set of values, called a \emph{domain}. Languages can be partially ordered by their expressiveness up to the chosen equivalence according to the existence of correct translations between them.
The concept of a [correct] translation between system description languages (or \emph{process calculi}) was first formally defined by Boudol \cite{Bo85}. There, and in most other related work in this area, the domain in which a system description language is interpreted consists of the closed expressions from the language itself. In \cite{vG94a} I have reformulated Boudol's definition, while dropping the requirement that the domain of interpretation is the set of closed terms. This allows (but does not enforce) a clear separation of syntax and semantics, in the tradition of universal algebra. Nevertheless, the definition employed in \cite{vG94a} only deals with the case that all (relevant) elements in the domain are denotable as the interpretations of closed terms. Examples~\ref{ex:numbers} and~\ref{ex:undenotable} herein will present situations where such a restriction is undesirable. In addition, both \cite{Bo85} and \cite{vG94a} require the semantic equivalence $\sim$ under which two languages are compared to be a congruence for both of them. This is too severe a restriction to capture some recent encodings.
The current paper aims to generalise the concept of a correct translation as much as possible, so that it is uniformly applicable in many situations, and not just in the world of process calculi. Also, it needs to be equally applicable to encodability and separation results, the latter saying that an encoding of one language in another does not exist. At the same time, it tries to derive this concept from a unifying principle, rather than collecting a set of criteria that justify a number of known, intuitively plausible encodability and separation results.
In Sections~\ref{sec:up to} and~\ref{sec:respects} I propose in fact two notions of encoding: \emph{correct} and \emph{valid} translations up to $\sim$. The former drops the restriction on denotability and $\sim$ being a congruence for the whole target language, but it requires $\sim$ to be a congruence for the source language, as well as the source's image within the target. The latter drops both congruence requirements, but at the expense of requiring denotability by closed terms. In situations where $\sim$ is a congruence for the source language's image within the target language \emph{and} all semantic values are denotable, the two notions agree. \advance\textheight 13.6pt
\section{Correct translations and expressiveness}
A language consists of \emph{syntax} and \emph{semantics}. The syntax determines the valid expressions in the language. The semantics is given by a mapping $\denote{\ \ }$ that associates with each valid expression its meaning, which can for instance be an object, concept or statement. This mapping determines the set ${\cal D}$ of all objects, concepts or statements that can be denoted in the language, namely as its image.
A correct translation of one language into another is a mapping from the valid expressions in the first language to those in the second that preserves their meaning, i.e.\ such that the meaning of the translation of an expression is the same as the meaning of the expression being translated. In order to formalise this, I represent a language ${\cal L}$ as a pair $(\mbox{\bbb T}_{{\cal L}},\denote{\ \ }_{\cal L})$ of a set $\mbox{\bbb T}_{{\cal L}}$ of valid expressions in ${\cal L}$ and a surjective mapping $\denote{\ \ }_{\cal L}:\mbox{\bbb T}_{\cal L}\rightarrow {\cal D}_{\cal L}$ from $\mbox{\bbb T}_{\cal L}$ onto some set of meanings ${\cal D}_{\cal L}$.
\begin{definition}{translation} A \emph{translation} from a language ${\cal L}$ into a language ${\cal L}'$ is a mapping ${\cal T}: \mbox{\bbb T}_{\cal L} \rightarrow \mbox{\bbb T}_{{\cal L}'}$. It is \emph{correct} when $\denote{{\cal T}(E)}_{{\cal L}'} = \denote{E}_{\cal L}$ for all $E\in \mbox{\bbb T}_{\cal L}$. Language ${\cal L}'$ is at least as \emph{expressive} as ${\cal L}$ if a correct translation exists. \end{definition} \begin{figure}
\caption{The essence of a correct translation}
\label{dog}
\end{figure} This fundamental notion is illustrated in Figure~\ref{dog}. It is not hard to see that a correct translation from ${\cal L}$ to ${\cal L}'$ exists if and only if anything that can be expressed in ${\cal L}$ can also be expressed in ${\cal L}'$, i.e.\ iff ${\cal D}_{\cal L} \subseteq {\cal D}_{{\cal L}'}$.
In this paper I will argue that this simple notion of a correct translation, when instantiated with appropriate proposals for $\denote{\ \ }$ and ${\cal D}$, is a suitable definition of an encoding from one system description language into another, and thereby a suitable basis for classifying such languages w.r.t.\ expressiveness.
\section{Dividing out a semantic equivalence}\label{sec:dividing out}
\begin{definition}{process graph} A \emph{process graph} over an alphabet $Act$ is a triple $(S,I,\rightarrow)$ with $S$ a set of \emph{states}, $I\in S$ the \emph{initial state}, and $\mathord{\rightarrow} \subseteq S\times Act \times S$ the \emph{transition relation}. \end{definition} In other words, a process graph is a labelled transition system equipped with an initial state.
One way to apply the above definition of a translation to system description languages like CCS and CSP would be to take variable-free (and hence recursion-free) versions of those languages, and to define the meaning $\denote{P}$ of a CCS or CSP expression $P$ to be the process graph $G_P:=(S,P,\rightarrow)$ with as set of states $S$ the set of all CCS/CSP expressions, as initial state the expression $P$, and $\rightarrow$ being the transition relation generated by the standard structural operational semantics of these languages. A variant of this idea is to reduce $S$ to the states that are \emph{reachable} from $P$ by following transitions.
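Restricting a process graph to its reachable part is a plain graph search. A minimal sketch (illustrative names, not from the paper) over transition triples $(s,a,s')$:

```python
from collections import deque

def reachable_part(states, init, transitions):
    """Restrict a process graph (S, I, ->) to the states reachable from I.
    transitions is a set of (source, action, target) triples."""
    reached, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        for (src, _act, tgt) in transitions:
            if src == s and tgt not in reached:
                reached.add(tgt)
                frontier.append(tgt)
    # Any transition with a reachable source also has a reachable target,
    # so filtering on the source suffices.
    return reached, init, {t for t in transitions if t[0] in reached}

# Example: state "z" is unreachable from "p" and gets pruned.
S = {"p", "q", "z"}
R = {("p", "a", "q"), ("z", "b", "p")}
states, init, trans = reachable_part(S, "p", R)
assert states == {"p", "q"}
assert trans == {("p", "a", "q")}
```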
Now it happens to be the case that the reachable part of each process graph that can be denoted by a CSP expression is \emph{isomorphic}, but in general not \emph{equal}, to one that can be denoted by a CCS expression. As an example consider the CCS and CSP constants for \emph{inaction}. In CCS this constant is called $0$ whereas in CSP it is called $\mbox{\sc stop}$. The operational semantics generates no outgoing transitions of either process. It is therefore tempting to translate the CSP constant $\mbox{\sc stop}$ into the CCS constant $0$. Yet, this is not a correct translation in the current set-up, as the process graph with initial state $0$ and no other states or transitions is different from the one with initial state $\mbox{\sc stop}$.
One way to deal with this anomaly is to relax \df{translation} by defining an appropriate semantic equivalence $\sim$ on ${\cal D}_{\cal L} \cup {\cal D}_{{\cal L}'}$ and merely requiring that the meanings of an expression and its translation are \emph{equivalent}.
\begin{definition}{translation up to} A translation ${\cal T}: \mbox{\bbb T}_{\cal L} \rightarrow \mbox{\bbb T}_{{\cal L}'}$ from a language ${\cal L}$ into a language ${\cal L}'$ is \emph{correct up to} a semantic equivalence $\sim$ on ${\cal D}_{\cal L} \cup{\cal D}_{{\cal L}'}$ when $\denote{{\cal T}(E)}_{{\cal L}'} \sim \denote{E}_{\cal L}$ for all $E\in \mbox{\bbb T}_{\cal L}$. \end{definition} In the example above, an appropriate candidate for $\sim$ could be isomorphism of reachable parts.
In some sense, introducing an appropriate semantic equivalence $\sim$, or maybe a preorder, appears to be the only reasonable way to allow intuitively correct translations, such as of 0 by \mbox{\sc stop}. Nevertheless, it need not be seen as a relaxation---and hence abandonment---of \df{translation}, but rather as an appropriate instantiation. Namely the meaning of a CCS or CSP expression $P$ is no longer a process graph $G$, but instead the equivalence class $[G]_\sim$ of all process graphs in ${\cal D}_{\rm CCS} \cup {\cal D}_{\rm CSP}$ that are equivalent to $G$.
\begin{observation}{translation up to} Let ${\cal L}=(\mbox{\bbb T}_{\cal L},\denote{\ \ }_{\cal L})$ and ${\cal L}'=(\mbox{\bbb T}_{{\cal L}'},\denote{\ \ }_{{\cal L}'})$ be two languages, and ${\cal T}: \mbox{\bbb T}_{\cal L} \rightarrow \mbox{\bbb T}_{{\cal L}'}$ a correct translation between them up to an equivalence $\sim$ on ${\cal D}_{\cal L} \cup{\cal D}_{{\cal L}'}$. Then ${\cal T}$ is a correct translation between the languages $(\mbox{\bbb T}_{\cal L},\denote{\ \ }_{\cal L}^\sim)$ and $(\mbox{\bbb T}_{{\cal L}'},\denote{\ \ }_{{\cal L}'}^\sim)$, where $\denote{E}_{\cal L}^\sim$ is defined to be $[\denote{E}_{\cal L}]_\sim$. \end{observation} Hence, correct translations up to some equivalence can be seen as special cases of correct translations. In doing so, it may appear problematic that the meaning $\denote{E}_{\cal L}^\sim$ of an expression $E\in\mbox{\bbb T}_{\cal L}$ becomes dependent on the semantic domain ${\cal D}_{{\cal L}'}$ of the other language, namely by $\denote{E}_{\cal L}^\sim$ being the class of all processes in ${\cal D}_{\cal L} \cup{\cal D}_{{\cal L}'}$ that are equivalent with $\denote{E}_{\cal L}$. This worry can be alleviated by using, instead of ${\cal D}_{\cal L} \cup{\cal D}_{{\cal L}'}$, a natural class of which both ${\cal D}_{\cal L}$ and ${\cal D}_{{\cal L}'}$ are subsets. In the example above this could for instance be the class of all process graphs (over a suitable alphabet).
\section{Translating operators}\label{sec:operators}
Up to isomorphism of reachable parts, so certainly up to coarser equivalences such as strong bisimilarity, the variable-free fragments of CSP and CCS with finitary choice are equally expressive. Namely each of them can express exactly the (equivalence classes of) finite process graphs. Here a process graph is finite if it has finitely many states and transitions, and no loops. In fact, these languages do not lose any expressiveness when omitting their parallel compositions, for parallel composition is not needed to denote any finite process graph.
Hence the treatment above does not address the question whether one of the \emph{operators} of one language, such as parallel composition, can be mimicked by an operator or combination of operators in the other. This is to be blamed on the absence of variables. Once we admit variables in the language, the CCS parallel composition corresponds to the CCS expression $X|Y$, where $X$ and $Y$ are process variables, and a correct translation to CSP ought to translate this expression to a valid CSP expression---a CSP context built from CSP operators and the variables $X$ and $Y$.
Henceforth, I consider single-sorted languages ${\cal L}$ in which \emph{expressions} or \emph{terms} are built from variables (taken from a set $\mathcal{X}$) by means of operators (including constants) and possibly recursion constructs.\footnote{In \sect{compositionality} two postulates will be presented that restrict the class of languages considered in this paper.} The semantics of such a language is given by a domain of values ${\bf D}$, and an interpretation of each $n$-ary operator $f$ of ${\cal L}$ as an $n$-ary operation $f^{\bf D}: {\bf D}^n\rightarrow {\bf D}$ on ${\bf D}$. Using the equations $$\denote{X}_{\cal L}(\rho) = \rho(X) \qquad \mbox{and} \qquad \denote{f(E_1,\ldots,E_n)}_{\cal L}(\rho) = f^{\bf D}(\denote{E_1}_{\cal L}(\rho),\ldots,\denote{E_n}_{\cal L}(\rho))$$ this allows an inductive definition of the meaning $\denote{E}_{\cal L}$ of an ${\cal L}$-expression $E$ as a function of type $(\mathcal{X}\!\!\rightarrow{\bf D})\rightarrow{\bf D}$, associating a value $\denote{E}_{\cal L}(\rho) \mathbin\in {\bf D}$ to $E$ that depends on the choice of a \emph{valuation} $\rho\!:\mathcal{X}\!\!\!\rightarrow\!{\bf D}$. The valuation associates a value from ${\bf D}$ with each variable. Moreover, $\denote{E}_{\cal L}(\rho)$ only depends on the restriction of $\rho$ to those variables that occur free in $E$. In this setting, the class ${\cal D}_{\cal L}$ of possible meanings of ${\cal L}$-expressions is a subclass of $(\mathcal{X}\!\!\rightarrow{\bf D})\rightarrow{\bf D}$. Hence, a translation ${\cal T}: \mbox{\bbb T}_{\cal L} \rightarrow \mbox{\bbb T}_{{\cal L}'}$ between two such languages ${\cal L}$ and ${\cal L}'$ that employ the same set $\mathcal{X}$ of variables and are interpreted in the same domain ${\bf D}$ is correct when $\denote{{\cal T}(E)}_{{\cal L}'}(\rho) = \denote{E}_{\cal L}(\rho)$ for all $E\in \mbox{\bbb T}_{\cal L}$ and all valuations $\rho:\mathcal{X}\rightarrow{\bf D}$.
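The inductive definition of $\denote{E}_{\cal L}(\rho)$ above can be rendered directly as a recursive evaluator. In the following sketch (representation choices are mine, not the paper's) a variable is a string, an operator application is a tuple $(f, E_1, \ldots, E_n)$, and the interpretation $f^{\bf D}$ of each operator is an ordinary function:

```python
def denote(term, interp, rho):
    """[[X]](rho) = rho(X);  [[f(E1,...,En)]](rho) = f^D([[E1]](rho), ..., [[En]](rho))."""
    if isinstance(term, str):           # a variable
        return rho[term]
    f, *args = term                     # an operator application
    return interp[f](*(denote(a, interp, rho) for a in args))

# A domain of natural numbers with addition and the constant 0.
interp = {"+": lambda x, y: x + y, "0": lambda: 0}

# X + (Y + 0) under the valuation X -> 3, Y -> 4
term = ("+", "X", ("+", "Y", ("0",)))
assert denote(term, interp, {"X": 3, "Y": 4}) == 7
```

Note that, as stated above, the result depends only on the valuation's values at the variables occurring free in the term.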
Since normally the names of variables are irrelevant and the cardinality of the set of variables satisfies only the requirement that it is ``sufficiently large'', no generality is lost by insisting that two (system description) languages whose expressiveness is being compared employ the same set of (process) variables. On the other hand, two languages ${\cal L}$ and ${\cal L}'$ may be interpreted in different domains of values ${\bf D}$ and ${\bf D}'$. Without dividing out a semantic equivalence, one must insist that ${\bf D}\subseteq {\bf D}'$; otherwise no correct translation from ${\cal L}$ into ${\cal L}'$ exists. When ${\bf D}\subseteq {\bf D}'$ also $(\mathcal{X}\rightarrow{\bf D}) \subseteq(\mathcal{X}\rightarrow{\bf D}')$, so any function $(\mathcal{X}\rightarrow{\bf D}')\rightarrow{\bf D}'$ restricts to a function $(\mathcal{X}\rightarrow{\bf D})\rightarrow{\bf D}'$. For the purpose of comparing the expressive power of ${\cal L}$ and ${\cal L}'$, the semantics of ${\cal L}'$ can be taken to be the mapping $\denote{\ \ }_{{\cal L}'}:\mbox{\bbb T}_{{\cal L}'} \rightarrow ((\mathcal{X}\rightarrow{\bf D})\rightarrow{\bf D}')$, where $\denote{E}_{{\cal L}'}(\rho)$ with $E\in\mbox{\bbb T}_{{\cal L}'}$ is considered for valuations $\rho:\mathcal{X}\rightarrow{\bf D}$ only. This restriction entails that when translating ${\cal L}$ into ${\cal L}'$ I compare the meaning of ${\cal L}$-expressions and their translations only under valuations within the domain ${\bf D}$ in which ${\cal L}$ is interpreted. A translation ${\cal T}: \mbox{\bbb T}_{\cal L} \rightarrow \mbox{\bbb T}_{{\cal L}'}$ from ${\cal L}$ to ${\cal L}'$ remains correct when $\denote{{\cal T}(E)}_{{\cal L}'}(\rho) = \denote{E}_{\cal L}(\rho)$ for all $E\in \mbox{\bbb T}_{\cal L}$ and all valuations $\rho:\mathcal{X}\rightarrow{\bf D}$.
\begin{example}{numbers} Let ${\cal L}$ be the language whose syntax consists of a binary operator $+$, interpreted as addition in the domain $\mbox{\bbb N}$ of the natural numbers. So $\mbox{\bbb T}_{\cal L}$ contains expressions such as $X+(Y+Z)$. ${\cal L}'$ is the language with unary operators $e^x$ and $\ln(x)$, interpreted as exponentiation and the natural logarithm on the reals $\mbox{\bbb R}$, as well as the binary operator $\times$ of multiplication. If you do not like partial functions, the domain $\mbox{\bbb R}$ can be extended with a special value $\bot$ to capture undefined outcomes. Note that $\mbox{\bbb N}\subset\mbox{\bbb R}$. Using that $\ln(e^x)=x$, the ${\cal L}$-expression $X+Y$ can be translated into the ${\cal L}'$-expression $\ln(e^X\times e^Y)$. Using this, a translation ${\cal T}:\mbox{\bbb T}_{\cal L} \rightarrow\mbox{\bbb T}_{{\cal L}'}$ is defined inductively by ${\cal T}(X):=X$ and ${\cal T}(E+F):=\ln(e^{{\cal T}(E)}\times e^{{\cal T}(F)})$. \end{example}
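The identity underlying this example, $\ln(e^x\times e^y)=x+y$, is easy to confirm numerically (up to floating-point rounding). A quick sketch, not part of the example itself:

```python
import math

def translate_plus(x, y):
    """The L'-counterpart of x + y: ln(e^x * e^y)."""
    return math.log(math.exp(x) * math.exp(y))

# Over the reals the translation agrees with addition (modulo rounding).
for (x, y) in [(0, 0), (1, 2), (3, 41)]:
    assert math.isclose(translate_plus(x, y), x + y)
```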
\section{Correct translations up to a congruence}\label{sec:up to}
This section aims at integrating the instantiations of the notion of a correct translation proposed in Sections~\ref{sec:dividing out} and~\ref{sec:operators}. Let ${\cal L}$ and ${\cal L}'$ be two languages of the type considered in \sect{operators}, with semantic mappings $\denote{\ \ }_{{\cal L}}:\mbox{\bbb T}_{{\cal L}} \rightarrow ((\mathcal{X}\rightarrow{\bf V})\rightarrow{\bf V})$ and $\denote{\ \ }_{{\cal L}'}:\mbox{\bbb T}_{{\cal L}'} \rightarrow ((\mathcal{X}\rightarrow{\bf V}')\rightarrow{\bf V}')$. Here ${\bf V}$ and ${\bf V}'$ are domains of interpretation prior to quotienting by an appropriate semantic equivalence; they might be sets of process graphs with as states closed CCS expressions and closed CSP expressions, respectively. In order to compare these languages w.r.t.\ their expressive power I need a semantic equivalence $\sim$ that is defined on a unifying domain of interpretation ${\bf Z}$, with ${\bf V},{\bf V}' \subseteq{\bf Z}$. Let ${\bf U}:=\{v'\in{\bf V}'\mid \exists v\in {\bf V}.~v'\sim v\}$.
\begin{definition}{equivalence of valuations} Two valuations $\eta,\rho:\mathcal{X}\rightarrow{\bf Z}$ are \emph{$\sim$-equivalent}, $\eta\sim\rho$, if $\eta(X)\sim\rho(X)$ for each $X\in \mathcal{X}$. \end{definition} In case there exists a $v\in {\bf V}$ for which there is no $\sim$-equivalent $v'\in {\bf V}'$, there is no correct translation from ${\cal L}$ into ${\cal L}'$ up to $\sim$. Namely, the semantics of ${\cal L}$ describes, among others, how any ${\cal L}$-operator evaluates the argument value $v$, and this aspect of the language has no counterpart in ${\cal L}'$. Therefore, I will require
\begin{equation}\label{related} \forall v\in {\bf V}.~ \exists v'\in {\bf V}'.~ v'\sim v. \end{equation} This implies that for any valuation $\rho:\mathcal{X}\rightarrow{\bf V}$ there is a valuation $\eta:\mathcal{X}\rightarrow{\bf V}'$ with $\eta\sim\rho$.
\begin{definition}{correct translation} A translation ${\cal T}$ from ${\cal L}$ into ${\cal L}'$ is \emph{correct up to $\sim$} iff (\ref{related}) holds and\\ $\denote{{\cal T}(E)}_{{\cal L}'}(\eta) \sim \denote{E}_{\cal L}(\rho)$ for all $E\in \mbox{\bbb T}_{\cal L}$ and all valuations $\eta:\mathcal{X}\rightarrow{\bf V}'$ and $\rho:\mathcal{X}\rightarrow{\bf V}$ with $\eta\sim\rho$. \end{definition} Note that a correct translation as defined in \sect{operators} is exactly a correct translation up to the identity relation. If a correct translation up to $\sim$ from ${\cal L}$ into ${\cal L}'$ exists, then $\sim$ must be a congruence for ${\cal L}$.
\begin{definition}{congruence} An equivalence relation $\sim$ is a \emph{congruence} for a language ${\cal L}$ interpreted in a semantic domain ${\bf V}$ if $\denote{E}_{\cal L}(\nu)\sim\denote{E}_{\cal L}(\rho)$ for any ${\cal L}$-expression $E$ and any valuations $\nu,\rho:\mathcal{X}\rightarrow {\bf V}$ with $\nu \sim \rho$. \end{definition}
\begin{proposition}{congruence} If a correct translation up to $\sim$ from ${\cal L}$ into ${\cal L}'$ exists, then $\sim$ is a congruence for ${\cal L}$. \end{proposition}
\begin{proof} Let ${\cal T}$ be a correct translation up to $\sim$ from ${\cal L}$ into ${\cal L}'$. Let $E\in\mbox{\bbb T}_{\cal L}$ and let $\nu,\rho:\mathcal{X}\!\rightarrow{\bf V}$ with $\nu\mathord\sim\rho$. By (\ref{related}) there is a valuation $\eta\!:\!\mathcal{X}\!\!\rightarrow\!{\bf V}'$ with $\eta\mathbin\sim\nu$. Hence $\denote{E}_{{\cal L}}(\nu) \mathbin\sim \denote{{\cal T}(E)}_{{\cal L}'}(\eta) \mathbin\sim \denote{E}_{\cal L}(\rho)$. \end{proof} The existence of a correct translation up to $\sim$ from ${\cal L}$ into ${\cal L}'$ does not imply that $\sim$ is a congruence for ${\cal L}'$. However, $\sim$ has the properties of a congruence for those expressions of ${\cal L}'$ that arise as translations of expressions of ${\cal L}$, when restricting attention to valuations into ${\bf U}$. I call this a \emph{congruence for} ${\cal T}({\cal L})$. \begin{definition}{weak congruence} Let ${\cal T}: \mbox{\bbb T}_{\cal L} \rightarrow \mbox{\bbb T}_{{\cal L}'}$ be a translation from ${\cal L}$ into ${\cal L}'$. An equivalence $\sim$ on $\mbox{\bbb T}_{{\cal L}'}$ is a \emph{congruence for} ${\cal T}({\cal L})$ if $\denote{{\cal T}(E)}_{{\cal L}'}(\nu)\mathbin\sim\denote{{\cal T}(E)}_{{\cal L}'}(\eta)$ for any $E\mathbin\in\mbox{\bbb T}_{\cal L}$ and $\nu,\eta\!:\!\mathcal{X}\!\!\!\rightarrow \!{\bf U}$ with $\nu \mathbin\sim \eta$. \end{definition}
\begin{proposition}{weak congruence} If a correct translation up to $\sim$ from ${\cal L}$ into ${\cal L}'$ exists, then $\sim$ is a congruence for ${\cal T}({\cal L})$. \end{proposition}
\begin{proof} Let ${\cal T}$ be correct up to $\sim$ from ${\cal L}$ into ${\cal L}'$. Let $E\in\mbox{\bbb T}_{\cal L}$ and let $\nu,\eta:\mathcal{X}\rightarrow{\bf U}$ with $\nu\sim\eta$. By definition of ${\bf U}$ there is a $\rho:\mathcal{X}\rightarrow{\bf V}$ with $\rho\sim\nu$. Hence $\denote{{\cal T}(E)}_{{\cal L}'}(\nu) \sim \denote{E}_{\cal L}(\rho) \sim\denote{{\cal T}(E)}_{{\cal L}'}(\eta)$. \end{proof} In the rest of this section I will show how the concept of a correct translation up to $\sim$ can be seen as an instantiation of the notion of correct translation, analogously to the situation in \sect{dividing out}. To this end I need to unify the types of the semantic mappings $\denote{\ \ }_{{\cal L}}$ and $\denote{\ \ }_{{\cal L}'}$, say as $\denote{\ \ }_{{\cal L}}:\mbox{\bbb T}_{{\cal L}} \rightarrow ((\mathcal{X}\rightarrow{\bf E})\rightarrow{\bf D})$ and $\denote{\ \ }_{{\cal L}'}:\mbox{\bbb T}_{{\cal L}'} \rightarrow ((\mathcal{X}\rightarrow{\bf E})\rightarrow{\bf D})$.\footnote{In fact, it suffices to obtain mappings
$\denote{\ \ }_{{\cal L}}:\mbox{\bbb T}_{{\cal L}} \rightarrow ((\mathcal{X}\rightarrow{\bf E})\rightarrow{\bf D})$
and
$\denote{\ \ }_{{\cal L}'}:\mbox{\bbb T}_{{\cal L}'} \rightarrow ((\mathcal{X}\rightarrow{\bf E}')\rightarrow{\bf D}')$
satisfying $((\mathcal{X}\rightarrow{\bf E})\rightarrow{\bf D}) \subseteq ((\mathcal{X}\rightarrow{\bf E}')\rightarrow{\bf D}')$,
and hence ${\bf E}'={\bf E}$ and ${\bf D}\subseteq{\bf D}'$. However, any
mapping $\denote{\ \ }_{{\cal L}}:\mbox{\bbb T}_{{\cal L}} \rightarrow ((\mathcal{X}\rightarrow{\bf E})\rightarrow{\bf D})$
is also a mapping
$\denote{\ \ }_{{\cal L}}:\mbox{\bbb T}_{{\cal L}} \rightarrow ((\mathcal{X}\rightarrow{\bf E})\rightarrow{\bf D}')$,
so one can just as well use ${\bf D}'$ for ${\bf D}$.} This unification process involves dividing out the semantic equivalence $\sim$, as well as changing the type of a semantic mapping without tampering with the essence of its meaning. Below I propose two methods for doing so. The first method applies when $\sim$ is a congruence for both ${\cal L}$ and ${\cal L}'$, whereas the second merely requires that it is a congruence for ${\cal L}$. In both cases, the semantic mappings $\denote{\ \ }_{{\cal L}}$ and $\denote{\ \ }_{{\cal L}'}$ can be understood to be of types $\mbox{\bbb T}_{{\cal L}} \rightarrow ((\mathcal{X}\rightarrow{\bf V})\rightarrow{\bf Z})$ and $\mbox{\bbb T}_{{\cal L}'} \rightarrow ((\mathcal{X}\rightarrow{\bf V}')\rightarrow{\bf Z})$, respectively. Dividing out $\sim$ yields the quotient domain ${\bf D}:={\bf Z}/\!\sim:=\{[z]_\sim\mid z\in {\bf Z}\}$, consisting of the $\sim$-equivalence classes of elements of ${\bf Z}$, together with the mappings $\denote{\ \ }^\sim_{\cal L}:\mbox{\bbb T}_{{\cal L}} \rightarrow ((\mathcal{X}\rightarrow{\bf V})\rightarrow{\bf D})$ and \plat{$\denote{\ \ }^\sim_{{\cal L}'}:\mbox{\bbb T}_{{\cal L}'} \rightarrow ((\mathcal{X}\rightarrow{\bf V}')\rightarrow{\bf D})$}, where $\denote{E}_{\cal L}^{\sim}(\rho) := [\denote{E}_{\cal L}(\rho)]_\sim$.
\subsection{Translations up to a congruence for both languages} \label{sec:A}
Let $\sim$ be a congruence for both ${\cal L}$ and ${\cal L}'$. Take ${\bf W} := \{v'' \in {\bf Z} \mid \exists v\in {\bf V}.~ v\sim v''\}$ and likewise ${\bf W}' := \{v'' \in {\bf Z} \mid \exists v'\in {\bf V}'.~ v'\sim v''\}$. Furthermore, ${\bf C}:={\bf W}/\!_\sim$ and ${\bf C}':={\bf W}'/\!_\sim$. By (\ref{related}), ${\bf W}\subseteq{\bf W}'$ and ${\bf C}\subseteq{\bf C}'\subseteq{\bf D}$.
Now $\denote{\ \ }_{\cal L}^\sim$ can be recast as a function of type $\mbox{\bbb T}_{{\cal L}} \rightarrow ((\mathcal{X}\rightarrow{\bf C})\rightarrow{\bf D})$; namely by defining $\denote{E}_{{\cal L}}^{\sim}(\theta)$ with $\theta:\mathcal{X}\rightarrow{\bf C}$ to be $\denote{E}_{{\cal L}}^{\sim}(\rho)$, for any valuation $\rho:\mathcal{X}\rightarrow{\bf V}$ such that $\theta(X)=[\rho(X)]_\sim$ for all $X\in \mathcal{X}$. The congruence property of $\sim$ ensures that the value \plat{$\denote{E}_{{\cal L}}^{\sim}(\theta)\in{\bf D}$} is independent of the choice of the representatives $\rho(X)$ in the equivalence classes $\theta(X)$.
Likewise, $\denote{\ \ }_{{\cal L}'}^\sim$ can be recast as a function of type $\mbox{\bbb T}_{{\cal L}'} \rightarrow((\mathcal{X}\rightarrow{\bf C}')\rightarrow{\bf D})$, which, as in \sect{operators}, can be restricted to a function of type $\mbox{\bbb T}_{{\cal L}'} \rightarrow((\mathcal{X}\rightarrow{\bf C})\rightarrow{\bf D})$. A translation ${\cal T}: \mbox{\bbb T}_{\cal L} \rightarrow \mbox{\bbb T}_{{\cal L}'}$ from ${\cal L}$ into ${\cal L}'$ can be defined to be \emph{correct up to} $\sim$ when (\ref{related}) holds and $\denote{{\cal T}(E)}_{{\cal L}'}^{\sim}(\theta) = \denote{E}_{\cal L}^{\sim}(\theta)$ for all $E\in \mbox{\bbb T}_{\cal L}$ and all valuations $\theta:\mathcal{X}\rightarrow{\bf C}$. It is not hard to check that this definition agrees with \df{correct translation}.
\subsection{Translations up to a congruence for the source language} \label{sec:B}
Let $\sim$ be a congruence for ${\cal L}$. Recast $\denote{\ \ }_{{\cal L}}^\sim$ as a function of type $\mbox{\bbb T}_{{\cal L}} \rightarrow ((\mathcal{X}\rightarrow{\bf U})\rightarrow{\bf D})$ by defining $\denote{E}_{{\cal L}}^{\sim}(\eta)$ with $\eta:\mathcal{X}\rightarrow{\bf U}$ to be $\denote{E}_{{\cal L}}^{\sim}(\rho)$, for any valuation $\rho:\mathcal{X}\rightarrow{\bf V}$ with $\rho\sim\eta$. The congruence property of $\sim$ ensures that the value $\denote{E}_{{\cal L}}^{\sim}(\eta)\in{\bf D}$ is independent of the choice of the representative valuation $\rho$.
Since ${\bf U}\subseteq {\bf V}$ also $(\mathcal{X}\rightarrow{\bf U}) \subseteq (\mathcal{X}\rightarrow{\bf V})$, and therefore any function $(\mathcal{X}\rightarrow{\bf V})\rightarrow{\bf D}$ restricts to a function $(\mathcal{X}\rightarrow{\bf U})\rightarrow{\bf D}$. This way, $\denote{\ \ }_{{\cal L}'}^\sim$ can be recast as a function of type $\mbox{\bbb T}_{{\cal L}'} \rightarrow ((\mathcal{X}\rightarrow{\bf U})\rightarrow{\bf D})$ as well, and unification is achieved. Now a translation ${\cal T}: \mbox{\bbb T}_{\cal L} \rightarrow \mbox{\bbb T}_{{\cal L}'}$ from ${\cal L}$ into ${\cal L}'$ can be defined to be \emph{correct up to} $\sim$ when (\ref{related}) holds and $\denote{{\cal T}(E)}_{{\cal L}'}^{\sim}(\eta) = \denote{E}_{\cal L}^{\sim}(\eta)$ for all $E\in \mbox{\bbb T}_{\cal L}$ and all valuations $\eta:\mathcal{X}\rightarrow{\bf U}$. It is straightforward that this definition agrees with \df{correct translation}.
\section{A hierarchy of expressiveness preorders}
An equivalence $\sim$ on a class ${\bf Z}$ is said to be \emph{finer}, \emph{stronger}, or \emph{more discriminating} than another equivalence $\approx$ on ${\bf Z}$ if $p \sim q \Rightarrow p\approx q$ for all $p,q\in{\bf Z}$. \begin{theorem}{hierarchy} Let ${\cal T}: \mbox{\bbb T}_{\cal L} \rightarrow \mbox{\bbb T}_{{\cal L}'}$ be a translation from ${\cal L}$ into ${\cal L}'$, and let $\sim,\approx$ be congruences for ${\cal T}({\cal L})$, with $\sim$ finer than $\approx$. If ${\cal T}$ is correct up to $\sim$, then it is also correct up to $\approx$. \end{theorem}
\begin{proof} Let ${\bf U}^\approx := \{v' \in {\bf V}' \mid \exists v\in {\bf V}.~ v\approx v'\}$. Let ${\cal T}$ be correct up to $\sim$. Then $\denote{{\cal T}(E)}_{{\cal L}'}(\eta) \sim \denote{E}_{\cal L}(\rho)$ for all $E\in \mbox{\bbb T}_{\cal L}$ and all $\eta:\mathcal{X}\!\!\rightarrow{\bf V}'$ and $\rho:\mathcal{X}\!\!\rightarrow{\bf V}$ with $\eta\sim\rho$. To establish that ${\cal T}$ also is correct up to $\approx$, let $E\in\mbox{\bbb T}_{\cal L}$, $\nu:\mathcal{X}\!\!\rightarrow{\bf V}'$ and $\rho:\mathcal{X}\!\!\rightarrow{\bf V}$ with $\nu\approx\rho$. Take $\eta:\mathcal{X}\rightarrow{\bf V}'$ with $\eta\sim\rho$---it exists by (\ref{related}). Then $\denote{{\cal T}(E)}_{{\cal L}'}(\eta) \sim \denote{E}_{\cal L}(\rho)$ and hence $\denote{{\cal T}(E)}_{{\cal L}'}(\eta) \approx \denote{E}_{\cal L}(\rho)$. By (\ref{related}) both $\eta$ and $\nu$ are of type $\mathcal{X}\!\!\!\rightarrow{\bf U}^\approx$. Since $\approx$ is a congruence for ${\cal T}({\cal L})$ and $\nu\mathbin\approx\eta$, $\denote{{\cal T}(E)}_{{\cal L}'}(\nu) \approx \denote{{\cal T}(E)}_{{\cal L}'}(\eta) \approx \denote{E}_{{\cal L}}(\rho)$. \end{proof} When it is necessary to divide out a semantic equivalence, the quality of a translation depends on the choice of this equivalence. In no way would I want to suggest that a language ${\cal L}'$ is at least as expressive as ${\cal L}$ when there is a correct translation of ${\cal L}$ up to \emph{some} equivalence---the equivalence does \emph{not} appear in the scope of an existential quantifier. In fact, this would make any two languages equally expressive, namely by using the universal equivalence, relating any two processes. Instead, the equivalence needs to be chosen carefully to match the intended applications of the languages under comparison. In general, as shown by \thm{hierarchy}, using a finer equivalence yields a stronger claim that one language can be encoded in another. 
On the other hand, when separating two languages ${\cal L}$ and ${\cal L}'$ by showing that ${\cal L}$ \emph{cannot} be encoded in ${\cal L}'$, a coarser equivalence generally yields a stronger claim.
The following corollary of \thm{hierarchy} is a powerful tool for proving the nonexistence of translations.
\begin{corollary}{congruence} If there is a correct translation up to $\sim$ from ${\cal L}$ into ${\cal L}'$, and $\approx$ is a congruence for ${\cal L}'$ that is coarser than $\sim$, then $\approx$ is a congruence for ${\cal L}$. \end{corollary}
\begin{proof} By combining \thm{hierarchy} and \pr{congruence}. \end{proof}
\begin{proposition}{identity} If $\sim$ is a congruence for a language ${\cal L}$, then the identity is a correct translation up to $\sim$ from ${\cal L}$ into itself. \end{proposition} \begin{proof} Immediate from Definitions~\ref{df:correct translation} and~\ref{df:congruence}. \end{proof}
\begin{theorem}{composition} If correct translations up to $\sim$ exist from ${\cal L}_1$ into ${\cal L}_2$ and from ${\cal L}_2$ into ${\cal L}_3$, then there is a correct translation up to $\sim$ from ${\cal L}_1$ into ${\cal L}_3$. \end{theorem}
\begin{proof} For $i=1,2,3$ let $\denote{\ \ }_{{\cal L}_i}:\mbox{\bbb T}_{{\cal L}_i} \rightarrow ((\mathcal{X}\rightarrow{\bf V}_i)\rightarrow{\bf V}_i)$, and for $k=1,2$ let ${\cal T}_k:\mbox{\bbb T}_{{\cal L}_k}\rightarrow\mbox{\bbb T}_{{\cal L}_{k+1}}$ be correct translations up to $\sim$ from ${\cal L}_k$ to ${\cal L}_{k+1}$. I will show that the translation ${\cal T}_2\circ{\cal T}_1:\mbox{\bbb T}_{{\cal L}_1}\rightarrow\mbox{\bbb T}_{{\cal L}_3}$ from ${\cal L}_1$ to ${\cal L}_3$, given by ${\cal T}_2\circ{\cal T}_1(E)={\cal T}_2({\cal T}_1(E))$, is correct up to $\sim$.
By assumption, $\denote{{\cal T}_1(E)}_{{\cal L}_2}(\eta) \sim \denote{E}_{{\cal L}_1}(\rho)$ for all $E\in \mbox{\bbb T}_{{\cal L}_1}$ and all $\eta:\mathcal{X}\rightarrow{\bf V}_2$ and $\rho:\mathcal{X}\!\rightarrow{\bf V}_1$ with $\eta\sim\rho$, and likewise $\denote{{\cal T}_2(F)}_{{\cal L}_3}(\nu) \sim \denote{F}_{{\cal L}_2}(\eta)$ for all $F\in \mbox{\bbb T}_{{\cal L}_2}$ and all $\nu:\mathcal{X}\!\rightarrow{\bf V}_3$ and $\eta:\mathcal{X}\!\rightarrow{\bf V}_2$ with $\nu\sim\eta$. Let $E\in \mbox{\bbb T}_{{\cal L}_1}$, $\nu:\mathcal{X}\!\rightarrow{\bf V}_3$ and $\rho:\mathcal{X}\!\rightarrow{\bf V}_1$ with $\nu\sim \rho$; I need to show that \plat{$\denote{{\cal T}_2\circ{\cal T}_1(E)}_{{\cal L}_3}(\nu) \sim \denote{E}_{{\cal L}_1}(\rho)$}.
Let $\eta:\mathcal{X}\rightarrow{\bf V}_2$ be a valuation with $\eta\sim\rho$---it exists by (\ref{related}). Then $\nu\sim\eta$. Taking $F:={\cal T}_1(E)$ one obtains $\denote{{\cal T}_2({\cal T}_1(E))}_{{\cal L}_3}(\nu) \sim \denote{{\cal T}_1(E)}_{{\cal L}_2}(\eta) \sim \denote{E}_{{\cal L}_1}(\rho)$. \end{proof}
\begin{definition}{expressiveness} A language ${\cal L}'$ \emph{can express} or \emph{is at least as
expressive as} a language ${\cal L}$ \emph{up to $\sim$}, if there exists a correct translation up to $\sim$ from ${\cal L}$ into ${\cal L}'$. \end{definition} \thm{composition} shows that this relation is transitive. Restricted to languages for which $\sim$ is a congruence, it is even a preorder.
\section{Compositionality}\label{sec:compositionality}
A substitution in ${\cal L}$ is a partial function $\sigma:\mathcal{X}\rightharpoonup\mbox{\bbb T}_{\cal L}$ from the variables to the ${\cal L}$-expressions. For a given ${\cal L}$-expression $E\in\mbox{\bbb T}_{\cal L}$, $E[\sigma]\in\mbox{\bbb T}_{\cal L}$ denotes the ${\cal L}$-expression $E$ in which each free occurrence of a variable $X\in{\it dom}(\sigma)$ is replaced by $\sigma(X)$, while renaming bound variables in $E$ so as to avoid a free variable $Y$ occurring in an expression $\sigma(X)$ ending up being bound in $E[\sigma]$. In general, a given expression $E\in\mbox{\bbb T}_{\cal L}$ can be written in several ways as $F[\sigma]$. For instance, if ${\cal L}$ features a binary operator $f$, a unary operator $g$ and a constant $c$, then the term $f(c,g(c))\in\mbox{\bbb T}_{\cal L}$ can be written as $F[\sigma]$ with \begin{itemize} \item $F=f(X,Y)$, $\sigma(X)=c$ and $\sigma(Y)=g(c)$, or \item $F=f(X,g(Y))$, $\sigma(X)=c$ and $\sigma(Y)=c$, or \item $F=f(c,g(X))$ and $\sigma(X)=c$. \end{itemize} Likewise, in case ${\cal L}$ contains a recursion construct $\textbf{fix}_XS$, where $S$ is a set of recursion equations $Y=E_Y$, then the expression $\textbf{fix}_X\{X=f(g(c),g(g(X)))\}$, in which the variable $X$ is bound, can be written as $F[\sigma]$ with $F=\textbf{fix}_X\{X=f(Y,g(g(X)))\}$ and $\sigma(Y)=g(c)$.
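The three decompositions above can be checked mechanically. Below is a minimal sketch, not part of the formalism: terms are encoded as nested Python tuples, \texttt{subst} performs naive first-order substitution, and binders and $\alpha$-renaming are deliberately not modelled.

```python
# Minimal sketch (illustration only): terms as nested tuples, naive
# first-order substitution; binders and alpha-renaming are not modelled.

def subst(term, sigma):
    """Replace each variable of term in sigma's domain by sigma[var]."""
    if isinstance(term, str):                  # a variable or constant name
        return sigma.get(term, term)
    op, *args = term                           # compound term (op, arg1, ...)
    return (op, *(subst(a, sigma) for a in args))

target = ('f', 'c', ('g', 'c'))                # the term f(c, g(c))

# the three decompositions F[sigma] from the text
decompositions = [
    (('f', 'X', 'Y'),        {'X': 'c', 'Y': ('g', 'c')}),
    (('f', 'X', ('g', 'Y')), {'X': 'c', 'Y': 'c'}),
    (('f', 'c', ('g', 'X')), {'X': 'c'}),
]
results = [subst(F, s) for F, s in decompositions]
```

Each decomposition yields the same term $f(c,g(c))$, illustrating that the representation of a term as $F[\sigma]$ is in general not unique.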
\begin{definition}{prefix} A term $E\mathbin\in\mbox{\bbb T}_{\cal L}$ is a \emph{prefix} of a term $F\!$, written $E\mathbin\leq F\!$, if $F\mathbin{\stackrel{\alpha}=}E[\sigma]$ for some substitution $\sigma$. Here $\mathrel{\plat{$\stackrel{\alpha}=$}}$ denotes \emph{$\alpha$-conversion}, renaming of bound variables while avoiding capture of free variables. \end{definition} Since $E[\textit{id}]=E$, where $\textit{id}:\mathcal{X}\rightarrow\mbox{\bbb T}_{\cal L}$ is the identity, and $E[\sigma][\xi]\mathrel{\plat{$\stackrel{\alpha}=$}} E[\xi\bullet\sigma]$, where the substitution $\xi\bullet\sigma$ is given by $(\xi\bullet\sigma)(X)=\sigma(X)[\xi]$, it follows that $\leq$ is reflexive and transitive, and hence a preorder. Write $\equiv$ for the kernel of $\leq$, i.e.\ $E\equiv F$ iff $E\leq F \wedge F\leq E$. If $E\equiv F$ then $E$ can be converted into $F$ by means of an injective renaming of its variables.
\begin{definition}{head} A term $H\in\mbox{\bbb T}_{\cal L}$ is a \emph{head} if $H$ is not a single variable and $E\leq H$ implies that $E$ is a single variable or $E\equiv H$. It is a \emph{head of} another term $F$ if it is a head, as well as a prefix of $F$. \end{definition} $f(X,Y)$ is a head of $f(c,g(c))$, and $\textbf{fix}_X\{X=f(Y,g(g(X)))\}$ is a head of $\textbf{fix}_X\{X=f(g(c),g(g(X)))\}$.
\begin{postulate}{head} Each expression $E$, if not a variable, has a head, which is unique up to $\equiv$. \end{postulate} This is easy to show for each common type of system description language, and I am not aware of any counterexamples. However, while striving for maximal generality, I consider languages with (recursion-like) constructs that are yet to be invented, and in view of those, this principle has to be postulated rather than derived. This means that here I consider only languages that satisfy this postulate. I also limit attention to languages where the meaning of an expression is invariant under $\alpha$-conversion. \begin{postulate}{alpha} If $E\mathrel{\plat{$\stackrel{\alpha}=$}} F$ then $\denote{E}_{\cal L}=\denote{F}_{\cal L}$. \end{postulate}
The semantic mapping $\denote{\ \ }_{{\cal L}}:\mbox{\bbb T}_{{\cal L}} \rightarrow ((\mathcal{X}\!\rightarrow{\bf V})\rightarrow{\bf V})$ extends to substitutions $\sigma$ by $\denote{\sigma}_{\cal L}(\rho)(X):=\denote{\sigma(X)}_{\cal L}(\rho)$ for all $X\mathbin\in\mathcal{X}$ and $\rho:\mathcal{X}\!\!\rightarrow{\bf V}$---here $\sigma$ is extended to a total function by $\sigma(Y):=Y$ for all $Y\not\in{\it dom}(\sigma)$. Thus $\denote{\sigma}_{\cal L}$ is of type $(\mathcal{X}\rightarrow{\bf V})\rightarrow(\mathcal{X}\rightarrow{\bf V})$, i.e.\ a map from valuations to valuations. The inductive nature of the semantic mapping $\denote{\ \ }_{\cal L}$ ensures that \begin{equation}\label{inductive meaning} \denote{E[\sigma]}_{\cal L}(\rho) = \denote{E}_{\cal L}(\denote{\sigma}_{\cal L}(\rho)) \end{equation} for all expressions $E\in\mbox{\bbb T}_{\cal L}$, substitutions $\sigma:\mathcal{X}\rightharpoonup\mbox{\bbb T}_{\cal L}$ and valuations $\rho:\mathcal{X}\rightarrow{\bf V}$. In case $E$ is $f(X_1,\ldots,X_n)$ this amounts to $\denote{f(E_1,\ldots,E_n)}_{\cal L}(\rho) = f^{\bf D}(\denote{E_1}_{\cal L}(\rho),\ldots,\denote{E_n}_{\cal L}(\rho))$, but (\ref{inductive meaning}) is more general and anticipates language constructs other than functions, such as recursion.
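Equation (\ref{inductive meaning}) can be illustrated on a toy language. The sketch below is a hypothetical stand-in for $\denote{\ \ }$, using arithmetic terms over the invented operators \texttt{add} and \texttt{mul}; it checks that evaluating $E[\sigma]$ under $\rho$ agrees with evaluating $E$ under the valuation $\denote{\sigma}(\rho)$.

```python
# Illustration of the substitution lemma (Eq. 1) for a toy term language:
# variables are strings, integer literals are leaves, compound terms tuples.

OPS = {'add': lambda x, y: x + y, 'mul': lambda x, y: x * y}

def denote(E, rho):
    """Evaluate term E under the valuation rho (variables -> integers)."""
    if isinstance(E, str):
        return rho[E]
    if isinstance(E, int):
        return E
    op, *args = E
    return OPS[op](*(denote(a, rho) for a in args))

def subst(E, sigma):
    """Syntactic substitution E[sigma]; no binders in this toy language."""
    if isinstance(E, str):
        return sigma.get(E, E)
    if isinstance(E, int):
        return E
    op, *args = E
    return (op, *(subst(a, sigma) for a in args))

E = ('add', 'X', ('mul', 'Y', 2))              # the term X + Y*2
sigma = {'X': ('mul', 'Y', 3), 'Y': 5}         # a substitution
rho = {'X': 7, 'Y': 11}                        # a valuation

# the valuation [[sigma]](rho), defined pointwise as in the text
den_sigma = {V: denote(sigma.get(V, V), rho) for V in rho}

lhs = denote(subst(E, sigma), rho)             # [[E[sigma]]](rho)
rhs = denote(E, den_sigma)                     # [[E]]([[sigma]](rho))
```

Both sides evaluate to the same integer, as (\ref{inductive meaning}) predicts; for a language with binders the same identity holds, but \texttt{subst} would have to rename bound variables.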
\begin{definition}{compositionality} A translation ${\cal T}$ from ${\cal L}$ to ${\cal L}'$ is \emph{compositional} if ${\cal T}(E[\sigma])\mathrel{\plat{$\stackrel{\alpha}=$}} {\cal T}(E)[{\cal T}\circ\sigma]$ for each $E\in\mbox{\bbb T}_{\cal L}$ and $\sigma:\mathcal{X}\rightharpoonup\mbox{\bbb T}_{\cal L}$, and moreover ${\cal T}(X)=X$ for each $X\in\mathcal{X}$. \end{definition} In case $E=f(t_1,\ldots,t_n)$ for certain $t_i\in\mbox{\bbb T}_{\cal L}$ this amounts to ${\cal T}(f(t_1,\ldots,t_n)) \mathrel{\plat{$\stackrel{\alpha}=$}} E_f({\cal T}(t_1),\ldots,{\cal T}(t_n))$, where $E_f:={\cal T}(f(X_1,\ldots,X_n))$ and $E_f(u_1,\ldots,u_n)$ denotes the result of the simultaneous substitution in this expression of the terms $u_i\in\mbox{\bbb T}_{{\cal L}'}$ for the free variables $X_i$, for $i=1,\ldots,n$. Again, \df{compositionality} is more general and anticipates language constructs other than functions, such as recursion.
\begin{theorem}{compositionality} If any correct translation from ${\cal L}$ to ${\cal L}'$ up to $\sim$ exists, then there exists a compositional translation that is correct up to $\sim$. \end{theorem}
\begin{proof} Pick a representative from each $\equiv$-equivalence class of terms. With \emph{the head of an expression} $E$ I mean the chosen representative out of the $\equiv$-equivalence class of heads of $E$. Now each term $E\notin\mathcal{X}$ can uniquely be written as $H[\sigma]$, with $H$ the head of $E$ and ${\it dom}(\sigma)$ the set of free variables of $H$.
Given a correct translation ${\cal T}_0$, define the translation ${\cal T}$ inductively by \begin{center} \begin{tabular}{ll} ${\cal T}(X) := X$ & for $X\in \mathcal{X}$\\ ${\cal T}(E) := {\cal T}_0(H)[{\cal T}\circ\sigma]$ & when $E\mathrel{\plat{$\stackrel{\alpha}=$}} H[\sigma]$ as stipulated above. \end{tabular} \end{center} First I show that ${\cal T}$ is compositional, using induction on $E$. So let $E\in\mbox{\bbb T}_{\cal L}$ and $\xi:\mathcal{X}\rightarrow\mbox{\bbb T}_{\cal L}$. I have to show that ${\cal T}(E[\xi])\mathrel{\plat{$\stackrel{\alpha}=$}} {\cal T}(E)[{\cal T}\circ\xi]$. The case $E\in\mathcal{X}$ is trivial, so let $E\mathrel{\plat{$\stackrel{\alpha}=$}} H[\sigma]$. For each free variable $X$ of $H$, $\sigma(X)$ is a proper subterm of $E$, so by the induction hypothesis ${\cal T}(\sigma(X)[\xi])\mathrel{\plat{$\stackrel{\alpha}=$}} {\cal T}(\sigma(X))[{\cal T}\circ\xi]$. Thus $\begin{array}[t]{@{}l@{~}ll@{}} ({\cal T}\!\circ(\xi\bullet\sigma))(X) &= {\cal T}((\xi\bullet\sigma)(X)) & \mbox{by definition of functional composition $\circ$} \\ &= {\cal T}(\sigma(X)[\xi]) & \mbox{by definition of the operation $\bullet$ on substitutions} \\ &\mathrel{\plat{$\stackrel{\alpha}=$}} {\cal T}(\sigma(X))[{\cal T}\circ\xi] & \mbox{by induction, derived above; trivial if $X\not\in{\it dom}(\sigma)$} \\ &= (({\cal T}\!\circ\xi)\bullet({\cal T}\!\circ\sigma))(X) & \mbox{by definition of the operations $\circ$ and $\bullet$.} \end{array}$\\ This shows that the substitutions ${\cal T}\!\circ(\xi\bullet\sigma)$ and $({\cal T}\!\circ\xi)\bullet({\cal T}\!\circ\sigma)$ are equal up to $\alpha$-conversion, from which it follows that $F[{\cal T}\circ(\xi\bullet\sigma)] \mathrel{\plat{$\stackrel{\alpha}=$}} (F[{\cal T}\circ\sigma])[{\cal T}\circ\xi]$ for all terms $F\in\mbox{\bbb T}_{{\cal L}'}$. 
\\ $\begin{array}[b]{@{}l@{~}ll@{}} \mbox{Hence}~ {\cal T}(E[\xi]) &\mathrel{\plat{$\stackrel{\alpha}=$}} {\cal T}(H[\sigma][\xi]) & \mbox{since $E\mathrel{\plat{$\stackrel{\alpha}=$}} H[\sigma]$.} \\ &\mathrel{\plat{$\stackrel{\alpha}=$}} {\cal T}(H[\xi\bullet\sigma]) & \mbox{by the identity used already in proving transitivity of $\leq$} \\ &= {\cal T}_0(H)[{\cal T}\circ(\xi\bullet\sigma)] & \mbox{by definition of ${\cal T}$} \\ &\mathrel{\plat{$\stackrel{\alpha}=$}} ({\cal T}_0(H)[{\cal T}\circ\sigma])[{\cal T}\circ\xi] & \mbox{derived above} \\ &= {\cal T}(H[\sigma])[{\cal T}\circ\xi] & \mbox{by definition of ${\cal T}$} \\ &\mathrel{\plat{$\stackrel{\alpha}=$}} {\cal T}(E)[{\cal T}\circ\xi] & \mbox{since $E\mathrel{\plat{$\stackrel{\alpha}=$}} H[\sigma]$.} \end{array}$
It remains to be shown that ${\cal T}$ is correct up to $\sim$, i.e.\ that $\denote{{\cal T}(E)}_{{\cal L}'}(\eta) \sim \denote{E}_{\cal L}(\rho)$ for all terms $E\in \mbox{\bbb T}_{\cal L}$ and all valuations $\eta:\mathcal{X}\rightarrow{\bf V}'$ and $\rho:\mathcal{X}\rightarrow{\bf V}$ with $\eta\sim\rho$. Let $\eta$ and $\rho$ be such valuations. I proceed with structural induction on $E$. When handling a term $E\mathrel{\plat{$\stackrel{\alpha}=$}} H[\sigma]$, $\sigma(X)$ is a proper subterm of $E$ for each free variable $X$ of $H$. So by the induction hypothesis $\denote{{\cal T}(\sigma(X))}_{{\cal L}'}(\eta) \sim \denote{\sigma(X)}_{\cal L}(\rho)$. The valuation $\denote{\sigma}_{\cal L}(\rho)$ is defined such that $\denote{\sigma}_{\cal L}(\rho)(X)=\denote{\sigma(X)}_{\cal L}(\rho)$ for each $X\in\mathcal{X}$. Likewise, $\denote{{\cal T}\circ\sigma}_{{\cal L}'}(\eta)(X)=\denote{{\cal T}(\sigma(X))}_{{\cal L}'}(\eta)$ for each $X\in\mathcal{X}$. Hence $\denote{{\cal T}\circ\sigma}_{{\cal L}'}(\eta) \sim \denote{\sigma}_{\cal L}(\rho)$.
(*) \begin{list}{$\bullet$}{\leftmargin 10pt} \item $\denote{{\cal T}(X)}_{{\cal L}'}(\eta) = \denote{X}_{{\cal L}'}(\eta) \begin{array}[t]{@{~}ll} =\eta(X) & \mbox{by definitions of ${\cal T}$ and $\denote{\ \ }_{{\cal L}'}$} \\ \sim\rho(X) & \mbox{since $\eta\sim\rho$} \\ =\denote{X}_{{\cal L}}(\rho) & \mbox{by definition of $\denote{\ \ }_{{\cal L}}$.} \end{array}$ \item $\denote{{\cal T}(H[\sigma])}_{{\cal L}'}(\eta) \begin{array}[t]{@{~}ll@{}} = \denote{{\cal T}_0(H)[{\cal T}\circ\sigma]}_{{\cal L}'}(\eta) & \mbox{by definition of ${\cal T}$} \\ = \denote{{\cal T}_0(H)}_{{\cal L}'}(\denote{{\cal T}\circ\sigma}_{{\cal L}'}(\eta)) & \mbox{by (\ref{inductive meaning})} \\ \sim \denote{H}_{\cal L}(\denote{\sigma}_{\cal L}(\rho)) & \mbox{by (*) above, as ${\cal T}_0$ is a correct translation\,~~~~~~~~~~~} \\ = \denote{H[\sigma]}_{\cal L}(\rho) & \mbox{by (\ref{inductive meaning}).}
\mbox{
$\Box$\global\@qededtrue} \end{array}$ \end{list} \end{proof}
\noindent Hence, for the purpose of comparing the expressive power of languages, correct translations between them can be assumed to be compositional.
\section{Comparing the expressive power of CCS and CSP}
As an application of my approach, in this section I quantify the degree to which the parallel composition of CSP can be expressed in CCS\@. It turns out that there exists a correct translation up to trace equivalence, but not up to the version of weak bisimilarity that takes divergence into account. This combination of an encoding and a separation result is typical when comparing system description languages. Here we see that for applications where divergence and branching time are a concern, the CSP parallel composition cannot be encoded in CCS; however, when linear time reasoning is all that matters, it can.
\subsection{CCS}
CCS \cite{Mi90ccs} is parametrised with a set ${\cal A}$ of {\em names}. The set $\bar{\cal A}$ of {\em co-names} is $\bar{\mathcal{A}}:=\{\bar{a} \mid a\in {\cal A}\}$, and $\mathcal{L}:=\mathcal{A} \cup \bar{\mathcal{A}}$ is the set of \emph{labels}. The function $\bar{\cdot}$ is extended to $\mathcal{L}$ by declaring $\bar{\bar{\mbox{$a$}}}=a$. Finally, \plat{$Act := \mathcal{L}\dcup \{\tau\}$} is the set of {\em actions}. Below, $a$, $b$, $c$, \ldots range over $\mathcal{L}$ and $\alpha$, $\beta$ over $Act$. A \emph{relabelling function} is a function $f:\mathcal{L}\rightarrow \mathcal{L}$ satisfying $f(\bar{a})=\overline{f(a)}$; it extends to $Act$ by $f(\tau):=\tau$. Let $\mathcal{X}$ be a set $X$, $Y$, \ldots of \emph{process variables}. The set $\mathcal{E}$ of CCS terms or \emph{process expressions} is the smallest set including: \begin{center} \begin{tabular}{lll} $\alpha.E$ & for $\alpha\in Act$ and $E\in\mathcal{E}$ & \emph{prefixing}\\ $\sum_{i\in I}E_i$ & for $I$ an index set and $E_i\in\mathcal{E}$ & \emph{choice} \\
$E|F$ & for $E,F\in\mathcal{E}$ & \emph{parallel composition} \\ $E\backslash L $ & for $L\subseteq\mathcal{L}$ and $E\in\mathcal{E}$ & \emph{restriction} \\ $E[f]$ & for $f$ a relabelling function and $E\in\mathcal{E}$ & \emph{relabelling} \\ $X$ & for $X\in\mathcal{X}$ & a \emph{process variable} \\ $\textbf{fix}_XS$ & for $S:\mathcal{X}\rightharpoonup \mathcal{E}$ and $X\in {\it dom}(S)$ & \emph{recursion}. \end{tabular} \end{center} One writes $E_1+E_2$ for $\sum_{i\in I}E_i$ with $I=\{1,2\}$, and $0$ for $\sum_{i\in \emptyset}E_i$. A partial function $S:\mathcal{X}\rightharpoonup \mathcal{E}$ is called a \emph{recursive specification}. The variables in its domain ${\it dom}(S)$ are called \emph{recursion variables} and the equations $Y=S(Y)$ for $Y\in{\it dom}(S)$ \emph{recursion equations}. A recursive specification $S:\mathcal{X}\rightharpoonup \mathcal{E}$ is traditionally written as $\{Y=S(Y)\mid Y\in {\it dom}(S)\}$.
\begin{table}[t] \begin{center} \framebox{$\begin{array}{ccc} \alpha.E \goto{\alpha} E & \displaystyle\frac{E_j \goto{\alpha} E_j'}{\sum_{i\in I}E_i \goto{\alpha} E_j'}\makebox[0pt][l]{~~($j\in I$)}
\\[4ex]
\displaystyle\frac{E\goto{\alpha} E'}{E|F \goto{\alpha} E'|F} &
\displaystyle\frac{E\goto{a} E' ,~ F \goto{\bar{a}} F'}{E|F \goto{\tau} E'| F'} &
\displaystyle\frac{F \goto{\alpha} F'}{E|F \goto{\alpha} E|F'}\\[4ex] \displaystyle\frac{E \goto{\alpha} E', ~\alpha\not\in L\cup\bar{L}}{E\backslash L \goto{\alpha} E'\backslash L} & \displaystyle\frac{E \goto{\alpha} E'}{E[f] \goto{f(\alpha)} E'[f]} & \displaystyle\frac{S(X)[\textbf{fix}_YS/Y]_{Y\in {\it dom}(S)} \goto{\alpha} E}{\textbf{fix}_XS\goto{\alpha}E} \end{array}$} \end{center} \caption{Structural operational semantics of CCS} \label{tab:CCS} \end{table}
CCS is traditionally interpreted in the domain ${\rm T}_{\rm CCS}$ of closed CCS expressions up to $\alpha$-conversion. Hence a valuation $\rho:\mathcal{X}\rightarrow{\rm T}_{\rm CCS}$, assigning to each variable a closed CCS expression, is just a closed substitution. The semantic mapping $\denote{\ \ }_{\rm CCS}$ is given by $\denote{E}_{\rm CCS}(\rho) := E[\rho]$---a CCS expression $E$ evaluates, under the valuation $\rho:\mathcal{X}\rightarrow{\rm T}_{\rm CCS}$, to the result of performing the substitution $\rho$ on $E$. In fact, this is a common way to provide many system description languages with a semantics. Consequently, the distinction between syntax and semantics can, to a large extent, be dropped. It is for this reason that the semantic interpretation function $\denote{\ \ }$ rarely occurs in papers on CCS-like languages.
The ``real'' semantics of CCS is given by the labelled transition relation $\mathord\rightarrow \subseteq {\rm T}_{\rm CCS}\times Act \times{\rm T}_{\rm CCS}$ between closed CCS expressions. The transitions \plat{$p\goto{\alpha}q$} with $p,q\in{\rm T}_{\rm CCS}$ and $\alpha\in Act$ are derived from the rules of \tab{CCS}. Formally a transition \plat{$p\goto{\alpha}q$} is part of the transition relation of CCS if there exists a well-founded, upwards branching tree (a \emph{proof} of the transition) of which the nodes are labelled by transitions, such that \begin{itemize}\itemsep 0pt \item the root is labelled by $p\goto{\alpha}q$, and \item if $\varphi$ is the label of a node $n$ and $K$ is the set of labels of the nodes
directly above $n$, then $\frac{K}{\varphi}$ is a rule from \tab{CCS}, with closed CCS
expressions substituted for the variables $E,F,\ldots$. \end{itemize}
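To make the rules of \tab{CCS} concrete, the following is a small executable sketch, an illustration only and not part of the formal development, of the transition relation for the finite fragment of CCS (prefixing, choice, parallel composition and restriction); recursion and relabelling are omitted. Names are plain strings and the co-name of $a$ is written \texttt{\~{}a}.

```python
# Sketch of the CCS transition rules of Table 1 for the finite fragment.
# Terms: ('pre', action, E), ('sum', [E1, ...]), ('par', E, F), ('res', L, E).

def bar(a):
    """The co-name of a label: bar('a') == '~a', bar('~a') == 'a'."""
    return a[1:] if a.startswith('~') else '~' + a

def steps(t):
    """Yield the (action, successor) pairs derivable from the SOS rules."""
    op = t[0]
    if op == 'pre':                        # alpha.E --alpha--> E
        yield t[1], t[2]
    elif op == 'sum':                      # a transition of any summand
        for s in t[1]:
            yield from steps(s)
    elif op == 'par':                      # interleaving plus communication
        E, F = t[1], t[2]
        for a, E2 in steps(E):
            yield a, ('par', E2, F)
        for a, F2 in steps(F):
            yield a, ('par', E, F2)
        for a, E2 in steps(E):             # synchronise complementary labels
            for b, F2 in steps(F):
                if a != 'tau' and b == bar(a):
                    yield 'tau', ('par', E2, F2)
    elif op == 'res':                      # E\L blocks labels in L and co-L
        for a, E2 in steps(t[2]):
            if a == 'tau' or (a not in t[1] and bar(a) not in t[1]):
                yield a, ('res', t[1], E2)

NIL = ('sum', [])                          # the empty sum 0
# (a.0 | ~a.0)\{a}: only the synchronisation tau remains
p = ('res', {'a'}, ('par', ('pre', 'a', NIL), ('pre', '~a', NIL)))
```

For the term $p=(a.0\,|\,\bar a.0)\backslash\{a\}$ the only derivable transition is the $\tau$ of the communication rule, while without the restriction the two interleavings are available as well.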
\subsection{CSP}
CSP \cite{BHR84,OH86,BR85,Ho85} is parametrised with a set ${\cal A}$ of {\em communications\/}; \plat{$Act := \mathcal{A}\dcup \{\tau\}$} is the set of {\em actions}. Below, $a$, $b$ range over $\mathcal{A}$ and $\alpha$, $\beta$ over $Act$. The set $\mathcal{E}$ of CSP terms is the smallest set including: \begin{center} \begin{tabular}{lll} $\mbox{\sc stop}$ && \emph{inaction} \\ $\mbox{\sc div}$ && \emph{divergence} \\ $(a\rightarrow E)$ & for $a\in \mathcal{A}$ and $E\in\mathcal{E}$ & \emph{prefixing}\\ $E \mathbin\Box F$ & for $E,F\in\mathcal{E}$ & \emph{external choice} \\ $E \sqcap F$ & for $E,F\in\mathcal{E}$ & \emph{internal choice} \\
$E\|_AF$ & for $E,F\in\mathcal{E}$ and $A\subseteq{\cal A}$ & \emph{parallel composition} \\ $E/ b$ & for $b\in\mathcal{A}$ and $E\in\mathcal{E}$ & \emph{concealment} \\ $f(E)$ & for $E\in\mathcal{E}$ and $f:Act\rightarrow Act$ with $f(\tau)=\tau$ and
$f^{-1}(a)$ finite& \emph{renaming} \\ $X$ & for $X\in\mathcal{X}$ & a \emph{process variable} \\ $\mu X\cdot E$ & for $E\in \mathcal{E}$ and $X\in \mathcal{X}$ & \emph{recursion}. \end{tabular} \end{center} As in \cite{OH86}, I here leave out the guarded choice $(x:B\rightarrow P(x))$ and the constant \mbox{\sc run} of \cite{BHR84}, and the inverse image and sequential composition operator, with constant \mbox{\sc skip}, of \cite{BHR84,BR85}. The semantics of CSP was originally given in quite a different way \cite{BHR84,BR85}, but \cite{OH86} provided an operational semantics of CSP in the same style as the one of CCS, and showed its consistency with the original semantics. It is this operational semantics I will use here; it is given by the rules in \tab{CSP}. Let $\mathcal{L}:=\mathcal{A}$. \begin{table}[htb] \begin{center} \framebox{$\begin{array}{ccccc} \mbox{\sc div}\goto{\tau}\mbox{\sc div} & (a\rightarrow E) \goto{a} E & E \sqcap F \goto{\tau} E & E \sqcap F \goto{\tau} F \\[2ex] \displaystyle\frac{E\goto{a} E'}{E\mathbin\Box F \goto{a} E'} & \displaystyle\frac{F\goto{a} F'}{E\mathbin\Box F \goto{a} F'} & \displaystyle\frac{E\goto{\tau} E'}{E\mathbin\Box F \goto{\tau} E'\mathbin\Box F} & \displaystyle\frac{F\goto{\tau} F'}{E\mathbin\Box F \goto{\tau} E\mathbin\Box F'} \\[4ex]
\displaystyle\frac{E\goto{\alpha} E'~~{\scriptstyle(\alpha\notin A)}}{E\|_AF \goto{\alpha} E'\|_AF} & \multicolumn{2}{c}{
\displaystyle\frac{E\goto{a} E'~~F\goto{a} F'~~{\scriptstyle(a\in A)}}{E\|_AF \goto{a} E'\|_AF'}} &
\displaystyle\frac{F\goto{\alpha} F'~~{\scriptstyle(\alpha\notin A)}}{E\|_AF \goto{\alpha} E\|_AF'} \\[4ex] \displaystyle\frac{E \goto{b} E'}{E/ b \goto{\tau} E'/ b} & \displaystyle\frac{E \goto{\alpha} E'~~{\scriptstyle(\alpha\neq b)}}{E/ b \goto{\alpha} E'/ b} & \displaystyle\frac{E \goto{\alpha} E'}{f(E) \goto{f(\alpha)} f(E')} & \multicolumn{2}{c}{\mu X\cdot E \goto{\tau} E[\mu X\cdot E/X]} \end{array}$} \end{center} \caption{Structural operational semantics of CSP} \label{tab:CSP} \end{table}
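The three rules for $\|_A$ in \tab{CSP} can also be illustrated in isolation. In the self-contained sketch below (an illustration, not part of the semantics), a process is given by a finite transition table mapping each state to its outgoing \texttt{(action, successor)} pairs, and a state of the composition is a pair of component states.

```python
# Sketch of the three ||_A rules of Table 2 on finite transition tables.

def par_steps(p, q, A, trans):
    """Transitions of p ||_A q: interleave outside A, synchronise on A."""
    out = []
    for a, p2 in trans.get(p, []):
        if a not in A:                     # left rule (also covers tau)
            out.append((a, (p2, q)))
    for a, q2 in trans.get(q, []):
        if a not in A:                     # right rule
            out.append((a, (p, q2)))
    for a, p2 in trans.get(p, []):
        for b, q2 in trans.get(q, []):
            if a == b and a in A:          # synchronisation rule
                out.append((a, (p2, q2)))
    return out

# (a -> STOP) ||_{a} (a -> STOP): the only transition is the shared a.
trans = {'P': [('a', 'STOP')], 'Q': [('a', 'STOP')]}
```

With $A=\{a\}$ the components must move together; with $A=\emptyset$ the same table yields the two interleavings instead.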
\subsection{Trace semantics and convergent weak bisimilarity}
I will compare the expressive power of CCS and CSP up to two semantic equivalences: a linear time and a branching time equivalence. For the former I take \emph{trace equivalence} \cite{Ho80} and for the latter a version of weak bisimilarity that takes divergence into account \cite{HP80,Sti87,Ab87,Wa90}---called \emph{convergent weak bisimilarity} in \cite{vG93}. Unlike the standard weak bisimilarity of \cite{Mi90ccs}, this relation is finer than the failures-divergences semantics of \cite{BHR84,OH86,BR85,Ho85}.
The relation $\mathord\Rightarrow \subseteq {\rm T}_{\rm CCS}\times {\cal L}^* \times{\rm T}_{\rm CCS}$ extends $\rightarrow$ to sequences of visible actions, abstracting from $\tau$-steps. Formally, $\dto{}$ is the reflexive-transitive closure of \plat{$\goto{\tau}$} and $p\dto{a_1\cdots a_n}q$ for $n\mathbin\geq 0$ holds iff there are $p_0,p_1,\ldots,p_n$ with \mbox{$p_0\mathbin=p$}, $p_{i-1} \dto{}\goto{a_i}p_i$ for $i=1,\ldots,n$, and $p_n\dto{} q$. Below, ${\rm T}$ is a set that contains ${\rm T}_{\rm CCS}$ and ${\rm T}_{\rm CSP}$.
\begin{definition}{traces} The set $T(p)\subseteq \mathcal{L}^*$ of \emph{traces} of a process $p\mathbin\in {\rm T}$ is given by $s\in T(p)$ iff $\exists p'.~p\dto{s} p'$. Two processes $p,q\in{\rm T}$ are \emph{trace equivalent} if $T(p)=T(q)$. \end{definition}
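For finite, acyclic transition systems the weak traces of \df{traces} can be computed directly. The following sketch (an illustration only; \texttt{tau} marks internal steps, which are absorbed) confirms that $b.0+b.c.0$ and $\tau.b.c.0+b.0$ are trace equivalent.

```python
# Sketch: weak traces of a finite, ACYCLIC labelled transition system.
# trans maps a state to its outgoing (action, successor) pairs.

def traces(s, trans):
    """The set of weak traces of state s, as strings of visible actions."""
    result = {''}                          # the empty trace is always present
    for a, t in trans.get(s, []):
        sub = traces(t, trans)
        if a == 'tau':                     # tau-steps contribute no symbol
            result |= sub
        else:
            result |= {a + w for w in sub}
    return result

# b.0 + b.c.0  versus  tau.b.c.0 + b.0
t1 = {'p': [('b', '0'), ('b', 'q')], 'q': [('c', '0')]}
t2 = {'r': [('tau', 's'), ('b', '0')], 's': [('b', 'q')], 'q': [('c', '0')]}
```

Both processes have the weak traces $\{\varepsilon, b, bc\}$, although they are not weakly bisimilar in general settings like this; trace equivalence ignores the branching.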
\begin{definition}{weak bisimilarity} A relation ${\cal B} \subseteq {\rm T}\times{\rm T}$ is a \emph{weak bisimulation} \cite{Mi90ccs} if \begin{itemize} \item for any $p,p',q\in{\rm T}$ and $s\in \mathcal{L}^*$ with $p{\cal B} q$ and
$p\dto{s}p'$, there is a $q'$ with $q\dto{s}q'$ and $p'{\cal B} q'$, \item for any $p,q,q'\in{\rm T}$ and $s\in \mathcal{L}^*$ with $p{\cal B} q$ and
$q\dto{s}q'$, there is a $p'$ with $p\dto{s}p'$ and $p'{\cal B} q'$. \end{itemize} Two processes $p,q\in{\rm T}$ are \emph{weakly bisimilar}, $p\bis{w} q$, if they are related by a weak bisimulation. \end{definition} All we need to know about the \emph{convergent} weak bisimilarity ($\bis[\downarrow]{w}$) is that a process that has a divergence cannot be related to a divergence-free process, and that restricted to divergence-free processes it coincides with weak bisimilarity. Here a process \emph{has a divergence} if it can do an infinite sequence of transitions that from some point onwards are all labelled $\tau$.
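For finite-state processes, "has a divergence" amounts to the reachability of a $\tau$-cycle. A self-contained sketch on finite transition tables (an illustration only, using the same encoding of an LTS as a dict of \texttt{(action, successor)} lists):

```python
# Sketch: a finite-state process has a divergence iff a tau-cycle is
# reachable from it (an infinite run that is eventually all tau).

def has_divergence(s, trans):
    """True iff a state on a tau-cycle is reachable from s (finite LTS)."""
    seen, stack = set(), [s]               # collect all reachable states
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(v for _, v in trans.get(u, []))
    # restrict to tau-edges and look for a cycle by depth-first search
    tau = {u: [v for a, v in trans.get(u, []) if a == 'tau'] for u in seen}
    def on_cycle(u, path):
        if u in path:
            return True
        return any(on_cycle(v, path | {u}) for v in tau.get(u, []))
    return any(on_cycle(v, {u}) for u in seen for v in tau[u])
```

Thus \textsc{div}, modelled as a single state with a $\tau$-loop, has a divergence, while $b.0$ does not; convergent weak bisimilarity never relates the two kinds of process.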
Trace equivalence and (convergent) weak bisimilarity are congruences for CSP\@. (Convergent) weak bisimilarity fails to be a congruence for the $+$ of CCS, a problem that is commonly solved by taking its congruence closure. I do not need to do this when translating CSP into CCS, because correctness up to an equivalence requires that equivalence to be a congruence only for the image of the translation, not for the whole target language.
Note that even when restricting CCS to just $0$, action prefixing and $+$, there is no correct translation of this language into CSP up to the congruence closure of $\bis[\downarrow]{w}$---this is a direct consequence of \cor{congruence}.
\subsection{A correct translation of CSP into CCS up to trace equivalence}\label{sec:CSP-CCS}
For any choice of a CSP set of communications $\mathcal{A}$, I create a CCS set of names $\mathcal{B}$ and construct a translation from CSP with communications from $\mathcal{A}$ into CCS with names from $\mathcal{B}$.
Let $\mathcal{B}:= \{a,a',a''\mid a\in \mathcal{A}\}$, consisting of 3 disjoint copies of $\mathcal{A}$. For $A\subseteq\mathcal{A}$, let $S_A$ be the recursive specification given by the single CCS equation $\displaystyle \{X\mathbin=\!\sum_{a\in A}\bar{a}.a'.a''.a'.X +
\!\!\!\!\sum_{a\in \mathcal{A}\!-A}\!\!\!\bar{a}.a''.X\}$ and $S'_A$ be the recursive specification given by the single CCS equation $\displaystyle \{X\mathbin=\!\sum_{a\in A}\bar{a}.\bar{a}'.\bar{a}'.X +
\!\!\!\!\sum_{a\in \mathcal{A}\!-A}\!\!\!\bar{a}.a''.X\}$. Now, up to trace equivalence, and assuming that $P$ features names from $\mathcal{A}$ only,
$(P|\textbf{fix}_XS_A)\backslash \mathcal{A}$ is a process that differs from $P$ by the replacement of each $a$-transition by a sequence of transitions $a' a'' a'$ if $a\in A$, and by the single transition $a''$ otherwise. Likewise, $(P|\textbf{fix}_XS'_A)\backslash \mathcal{A}$ differs from $P$ by the replacement of each $a$-transition by the sequence $\bar{a}'\,\bar{a}'$ if $a\in A$, and by $a''$ otherwise. Let $\mathcal{A}':=\{a' \mid a\in\mathcal{A}\}$, and let the relabelling function $f$ be such that $f(a'')=a$ for all $a\in\mathcal{A}$. Then the following is a correct translation of CSP into CCS up to trace equivalence.
${\cal T}(X)=X$
${\cal T}(\mu X\cdot E) = \textbf{fix}_X\{X={\cal T}(E)\}$
${\cal T}(a\rightarrow E)=a.{\cal T}(E)$
${\cal T}(\mbox{\sc stop})={\cal T}(\mbox{\sc div})=0$
${\cal T}(E\sqcap F) = {\cal T}(E\mathbin\Box F) = {\cal T}(E) + {\cal T}(F)$
${\cal T}(E/ b) = ({\cal T}(E) | \textbf{fix}_X\{X=\bar{b}.X\})\backslash \{b\}$
${\cal T}(f(E)) = {\cal T}(E)[f]$
$\displaystyle {\cal T}(E\|_AF) = \left(
\left( ({\cal T}(E)|\textbf{fix}_XS_A)\backslash \mathcal{A}
\big| ({\cal T}(F)|\textbf{fix}_XS'_A)\backslash \mathcal{A} \right)\backslash \mathcal{A}'\right)[f]$
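As a worked check of this translation (using the notation of $S_A$, $S'_A$, $\mathcal{A}'$ and $f$ above; this run is an illustration, not part of the definition), take $E=F=(a\rightarrow\mbox{\sc stop})$ and $A=\{a\}$. One execution of the translated term is

```latex
\[
\begin{array}{ll}
{\cal T}(E\|_{\{a\}}F) \goto{\tau}\goto{\tau}
  & \mbox{each $a.0$ synchronises its $a$ with the $\bar a$ of its
          adjacent $\textbf{fix}$-component,}\\
  & \mbox{unlocking $a'.a''.a'$ on the left and $\bar{a}'.\bar{a}'$
          on the right,}\\
\qquad\goto{\tau}
  & \mbox{the first $a'$ meets the first $\bar{a}'$ under the restriction
          $\backslash\mathcal{A}'$,}\\
\qquad\goto{a}
  & \mbox{the free $a''$ is renamed to $a$ by $[f]$,}\\
\qquad\goto{\tau}
  & \mbox{the second $a'$ meets the second $\bar{a}'$,}
\end{array}
\]
```

so the translated process has the weak traces $\{\varepsilon, a\}$, matching $E\|_{\{a\}}F$ up to trace equivalence.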
\subsection{The untranslatability of CSP into CCS up to convergent weak bisimilarity} \label{sec:no translation}
In this section I show that there is no translation of CSP into CCS up to convergent weak bisimilarity. Suppose that ${\cal T}$ is such a translation. Let $\rho:\mathcal{X}\rightarrow {\rm T}_{\rm CSP}$ and $\eta:\mathcal{X}\rightarrow {\rm T}_{\rm CCS}$ satisfy $\rho(X)=\rho(Y)=(b\rightarrow\mbox{\sc stop})\mathbin\Box (b\rightarrow(c\rightarrow\mbox{\sc stop}))$ and $\eta(X)=\eta(Y)=b.0+b.c.0$. Then \plat{$\rho\bis[\downarrow]{w}\eta$}. So
\[
{\cal T}(X\|_{\{b,c\}}Y)[\eta] = \denote{{\cal T}(X\|_{\{b,c\}}Y)}_{\rm CCS}(\eta)
\bis[\downarrow]{w} \denote{X\|_{\{b,c\}}Y}_{\rm CSP}(\rho) \bis[\downarrow]{w} b.0+b.c.0. \] Let $\nu:\mathcal{X}\rightarrow {\rm T}_{\rm CCS}$ satisfy $\nu(X)=\nu(Y)=b.0$. By the same reasoning as above
\[
{\cal T}(X\|_{\{b,c\}}Y)[\nu] \bis[\downarrow]{w} b.0. \]
Since $b.0$ has no divergence, neither does ${\cal T}(X\|_{\{b,c\}}Y)[\nu]$, so there must be a state $p\in{\rm T}_{\rm CCS}$ with
\plat{${\cal T}(X\|_{\{b,c\}}Y)[\nu] \dto{} p \gonotto{\tau}$}. By \cite[Proposition 7.1 (or 8)]{BFG04}, it follows from the operational semantics of CCS that if $E[\sigma] \goto{\alpha} q$ for $E\mathbin\in\mbox{\bbb T}_{\rm CCS}$, $\sigma:\mathcal{X}\rightarrow {\rm T}_{\rm CCS}$ and $q\in{\rm T}_{\rm CCS}$, then $q$ must have the form $F[\sigma']$ with $F\in\mbox{\bbb T}_{\rm CCS}$ and for each variable $W$ that occurs free in $F$ there is a variable $Z$ that occurs free in $E$, such that either $\sigma(Z)=\sigma'(W)$ or \plat{$\sigma(Z)\goto{\beta}\sigma'(W)$} for some $\beta\in Act$\footnote{In general
multiple occurrences of $Z$ in $E$ may give rise to different associated variables $W$
in $F$.}---moreover, $F$ depends on $E$ and on the existence of the $\beta$-transitions, but not any other property of $\sigma$. So, for some $n\geq 0$,
$${\cal T}(X\|_{\{b,c\}}Y)[\nu] \goto{\tau} E_1[\nu_1] \goto{\tau} E_2[\nu_2] \goto{\tau}\ldots \goto{\tau} E_n[\nu_n] \gonotto{\tau}$$ where, for any free variable $Z$ of $E_i$, $\nu_i(Z)$ is either $0$ or $b.0$. This execution path can be simulated by
$${\cal T}(X\|_{\{b,c\}}Y)[\eta] \goto{\tau} E_1[\eta_1] \goto{\tau} E_2[\eta_2] \goto{\tau}\ldots \goto{\tau} E_n[\eta_n] \gonotto{\tau}$$ where $\eta_i(Z)=b.0+b.c.0$ iff $\nu_i(Z)=b.0$ and $\eta_i(Z)=0$ iff $\nu_i(Z)=0$---i.e.\ always choosing $\eta(Z)\goto{b}0$ over \plat{$\eta(Z)\goto{b}c.0$}. By the properties of $\bis[\downarrow]{w}$, \plat{$E_n[\eta_n] \bis[\downarrow]{w} b.0+b.c.0$}. So there is a process $E_{n+1}[\eta_{n+1}]$ with \plat{$E_n[\eta_n]\goto{b}E_{n+1}[\eta_{n+1}]\dto{}\goto{c}$}. It must be that $E_{n+1}[\eta_{n+1}]\bis[\downarrow]{w}c.0$.
The only rule in the structural operational semantics of CCS that has multiple premises has a conclusion with label $\tau$. Furthermore, any rule with a $\tau$-labelled premise has a $\tau$-labelled conclusion. Hence, since the transition \plat{$E_n[\eta_n]\goto{b}E_{n+1}[\eta_{n+1}]$} is not labelled $\tau$, its proof has only one branch. This branch could stem from a transition from $\eta(X)$ or from $\eta(Y)$, but not both. W.l.o.g.\ I assume it does not stem from $\eta(X)$.
Let $\xi:\mathcal{X}\rightarrow {\rm T}_{\rm CCS}$ satisfy $\xi(X)=b.0$ and $\xi(Y)=b.0+b.c.0$. Since in the proofs of the transitions in the above path from ${\cal T}(X\|_{\{b,c\}}Y)[\eta]$ the transition \plat{$\eta(X)\goto{b}c.0$} is never used, that path can be simulated by
$${\cal T}(X\|_{\{b,c\}}Y)[\xi] \goto{\tau} E_1[\xi_1] \goto{\tau} E_2[\xi_2] \goto{\tau}\ldots \goto{\tau} E_n[\xi_n] \goto{b} E_{n+1}[\xi_{n+1}].$$
Note that \plat{${\cal T}(X\|_{\{b,c\}}Y)[\xi] \bis[\downarrow]{w} b.0$}. Due to the properties of $\bis[\downarrow]{w}$ the above derivation can be extended with
$$E_{n+1}[\xi_{n+1}] \goto{\tau} E_{n+2}[\xi_{n+2}] \goto{\tau}\ldots \goto{\tau} E_{n+k}[\xi_{n+k}]$$ ending in a \emph{deadlock} state, where no further transitions are possible. This derivation, in turn, can be simulated by
$$E_{n+1}[\eta_{n+1}] \goto{\tau} E_{n+2}[\eta_{n+2}] \goto{\tau}\ldots \goto{\tau} E_{n+k}[\eta_{n+k}],$$ still ending in a deadlock state. This contradicts $E_{n+1}[\eta_{n+1}]\bis[\downarrow]{w}c.0$.
$\Box$
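Since the key processes in the argument above contain no $\tau$-transitions, convergent weak bisimilarity coincides with strong bisimilarity on them. As a side illustration (not part of the proof), the following Python sketch implements a naive signature-refinement check of strong bisimilarity on finite process graphs; the state names and the encoding of the processes are invented for the example.

```python
# Naive strong-bisimilarity check by signature refinement
# (Kanellakis-Smolka style).  An LTS is a dict: state -> list of
# (action, target) pairs.  All names below are invented for the example.

def bisimilar(lts, p, q):
    """Return True iff states p and q are strongly bisimilar.
    The processes used here are tau-free, so on them strong and
    convergent weak bisimilarity coincide."""
    blocks = {s: 0 for s in lts}          # start from the trivial partition
    while True:
        # signature of a state: its set of (action, block-of-target) pairs
        sig = {s: frozenset((a, blocks[t]) for a, t in lts[s]) for s in lts}
        ids, new = {}, {}
        for s in lts:
            new[s] = ids.setdefault(sig[s], len(ids))
        if new == blocks:                  # partition stable: done
            return blocks[p] == blocks[q]
        blocks = new

# Process graphs of the terms appearing in the proof:
lts = {
    "P":  [("b", "0"), ("b", "c0")],   # eta(X) = b.0 + b.c.0
    "c0": [("c", "0")],
    "0":  [],
    "R":  [("b", "S"), ("b", "cS")],   # rho(X) = (b -> STOP) [] (b -> c -> STOP)
    "cS": [("c", "S")],
    "S":  [],                          # STOP
    "Q":  [("b", "0")],                # nu(X) = b.0
}
```

On this system \texttt{bisimilar(lts, "P", "R")} holds while \texttt{bisimilar(lts, "P", "Q")} fails, which is exactly the distinction between $\eta$ and $\nu$ that the proof exploits.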
\section{Valid translations up to a preorder}\label{sec:respects}
Let ${\cal L}$ and ${\cal L}'$ be languages with $\denote{\ \ }_{{\cal L}}:\mbox{\bbb T}_{{\cal L}} \rightarrow ((\mathcal{X}\rightarrow{\bf V})\rightarrow{\bf V})$ and $\denote{\ \ }_{{\cal L}'}:\mbox{\bbb T}_{{\cal L}'} \rightarrow ((\mathcal{X}\rightarrow{\bf V}')\rightarrow{\bf V}')$. In this section I explore an alternative to the notion of a correct translation up to an equivalence $\sim$. This alternative doesn't have a built-in requirement that $\sim$ must be a congruence for ${\cal L}$;\footnote{Moreover, it may be a preorder rather than an equivalence.} however, it only deals with semantic values denotable by closed terms.
Let ${\rm T}_{\cal L}$ be the set of closed ${\cal L}$-expressions, i.e.\ having no free variables. The meaning $\denote{P}_{\cal L}(\rho)$ of a closed term $P\in {\rm T}_{\cal L}$ is independent of the valuation $\rho:\mathcal{X}\rightarrow{\bf V}$, and hence denoted $\denote{P}_{\cal L}$.
\begin{definition}{respects} A translation ${\cal T}$ from ${\cal L}$ into ${\cal L}'$ \emph{respects} $\sim$ if (\ref{related}) holds and $\denote{{\cal T}(P)}_{{\cal L}'}(\eta) \sim \denote{P}_{\cal L}$ for all closed ${\cal L}$-expressions $P\in{\rm T}_{\cal L}$ and all valuations $\eta:\mathcal{X}\rightarrow{\bf U}$, with ${\bf U}:=\{v'\in{\bf V}'\mid \exists v\in {\bf V}.~v'\sim v\}$. \end{definition}
\begin{observation}{correct respects} If ${\cal T}$ is a correct translation from ${\cal L}$ into ${\cal L}'$ up to $\sim$, then it respects $\sim$. \end{observation} Usually one employs translations ${\cal T}$ with the property that for any $E\in\mbox{\bbb T}_{\cal L}$ any free variable of ${\cal T}(E)$ is also a free variable of $E$---I call these \emph{free-variable respecting translations}, or \emph{fvr-translations}. If there is at least one $Q\in{\rm T}_{{\cal L}'}$ with $\denote{Q}_{{\cal L}'}\in{\bf U}$, then any translation ${\cal T}$ from ${\cal L}$ into ${\cal L}'$ can be modified to an fvr-translation ${\cal T}^\circ$ from ${\cal L}$ into ${\cal L}'$, namely by substituting $Q$ for all free variables of ${\cal T}(E)$ that are not free in $E$. This modification preserves the properties of respecting $\sim$ and of being correct up to $\sim$. An fvr-translation ${\cal T}$ from ${\cal L}$ into ${\cal L}'$ \emph{respects} $\sim$ iff $\denote{{\cal T}(P)}_{{\cal L}'} \sim \denote{P}_{\cal L}$ for all closed ${\cal L}$-expressions $P\in{\rm T}_{\cal L}$.
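For a toy term syntax without binders, the modification of ${\cal T}$ into the fvr-translation ${\cal T}^\circ$ can be sketched as follows; the term representation, the example translation \texttt{T}, and the closed term $Q$ are all invented for the illustration.

```python
# Toy illustration of the modification T -> T° : substitute a closed term Q
# for the free variables of T(E) that are not free in E.  The term syntax
# (no binders), the translation T, and the term Q are invented for the example.

def free_vars(t):
    # terms: ("var", name)  or  ("op", opname, [subterms])
    if t[0] == "var":
        return {t[1]}
    return set().union(*(free_vars(a) for a in t[2]))

def subst(t, mapping):
    if t[0] == "var":
        return mapping.get(t[1], t)
    return (t[0], t[1], [subst(a, mapping) for a in t[2]])

def fvr(T, Q):
    """Turn a translation T into a free-variable respecting one T0."""
    def T0(E):
        extra = free_vars(T(E)) - free_vars(E)   # spurious free variables
        return subst(T(E), {x: Q for x in extra})
    return T0

# A translation that introduces a spurious free variable Z ...
T = lambda E: ("op", "par", [E, ("var", "Z")])
nil = ("op", "0", [])          # a closed term Q with its meaning in U
T0 = fvr(T, nil)
```

Here the example translation introduces a spurious free variable $Z$, which ${\cal T}^\circ$ replaces by the closed term $0$, so that the translated term has no free variables beyond those of its source.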
\begin{observation}{fvr} Let ${\cal T}: \mbox{\bbb T}_{\cal L} \rightarrow \mbox{\bbb T}_{{\cal L}'}$ be an fvr-translation from ${\cal L}$ into ${\cal L}'$, and let $\sim,\approx$ be equivalences (or preorders) on a class ${\bf Z} \subseteq {\bf V}\cup{\bf V}'$, with $\sim$ finer than $\approx$. If ${\cal T}$ respects $\sim$, then it also respects $\approx$.
The identity is a $\sim$-respecting fvr-translation from any language into itself.
If $\sim$-respecting fvr-translations exist from ${\cal L}_1$ into ${\cal L}_2$ and from ${\cal L}_2$ into ${\cal L}_3$, then there is a $\sim$-respecting fvr-translation from ${\cal L}_1$ into ${\cal L}_3$. \end{observation}
\noindent Respecting an equivalence or preorder is a very weak correctness requirement for translations. In spite of the separation result of \sect{no translation}, there trivially exists a translation from CSP to CCS that respects $\bis[\downarrow]{w}$, or even strong bisimilarity. This follows from the observation that---thanks to the arbitrary index sets $I$ and ${\it dom}(S)$ that may be used for choice and recursion---up to $\bis[\downarrow]{w}$ every process graph is denotable by a CCS expression. In particular, compositionality is in no way implied by respect for an equivalence. It therefore makes sense to add compositionality as a separate requirement. The following example shows that the notion of a compositional $\sim$-respecting translation is also a bit too weak.
\begin{example}{undenotable} Let ${\cal L}'$ be the language CCS without the recursion construct, but interpreted in a domain of arbitrary process graphs (similar to the graph model of ACP \cite{BW90}). Let ${\cal L}$ be the same language, but with an extra operator $\_\!\_/\mathcal{L}$ that relabels all transitions into $\tau$. The compositional translation ${\cal T}$ from ${\cal L}$ into ${\cal L}'$ with ${\cal T}(X/\mathcal{L}):=0$ respects \plat{$\bis[\downarrow]{w}$}. This is because the interpretation of any closed ${\cal L}$-expression is a process graph without infinite paths, and after relabelling all transitions into $\tau$ such a graph is equivalent to $0$. Yet, there are process graphs $G$---those with infinite paths---that cannot be denoted by closed ${\cal L}$-expressions, and for which $G/\mathcal{L} \not\!\bis[\downarrow]{w} 0$, demonstrating that ${\cal T}$ should not be seen as a valid translation. \end{example} Based on this, I add the denotability of all semantic values as a requirement of a valid translation.
\begin{definition}{valid} A translation ${\cal T}$ from ${\cal L}$ into ${\cal L}'$ is \emph{valid up to} $\sim$ if it is compositional and respects $\sim$, while ${\cal L}$ satisfies
\begin{equation}\label{denotable} \forall v\in{\bf V}.~\exists P\in{\rm T}_{\cal L}.~\denote{P}_{\cal L}=v\;. \end{equation} \end{definition} The following theorem (in combination with \thm{compositionality} and \obs{correct respects}) shows that this notion of a valid translation is consistent with the notion of a correct translation, and can be seen as extending that notion to situations where $\sim$ is not known to be a congruence.
\begin{theorem}{valid correct} Let ${\cal T}: \mbox{\bbb T}_{\cal L} \rightarrow \mbox{\bbb T}_{{\cal L}'}$ be a translation from ${\cal L}$ into ${\cal L}'$, and $\sim$ be a congruence for ${\cal T}({\cal L})$. If ${\cal T}$ is valid up to $\sim$, then it is correct up to $\sim$. \end{theorem}
\begin{proof} Suppose ${\cal T}$ is valid up to $\sim$. Then $\denote{{\cal T}(P)}_{{\cal L}'}(\eta) \sim \denote{P}_{\cal L}$ for all closed ${\cal L}$-expressions $P\in {\rm T}_{\cal L}$ and all valuations $\eta:\mathcal{X}\rightarrow{\bf U}$. To establish that ${\cal T}$ is correct up to $\sim$, let $E\in \mbox{\bbb T}_{\cal L}$ and let $\eta:\mathcal{X}\rightarrow{\bf V}'$ and $\rho:\mathcal{X}\rightarrow {\bf V}$ be valuations with $\eta\sim\rho$. So $\eta:\mathcal{X}\rightarrow{\bf U}$. I need to show that $\denote{{\cal T}(E)}_{{\cal L}'}(\eta) \sim \denote{E}_{\cal L}(\rho)$.
Let $\sigma:\mathcal{X}\rightarrow{\rm T}_{\cal L}$ be a substitution with $\denote{\sigma(X)}_{\cal L}=\rho(X)$ for all $X\in\mathcal{X}$---such a substitution exists by (\ref{denotable}). Furthermore, define $\nu:\mathcal{X}\rightarrow{\bf V}'$ by $\nu(X):=\denote{{\cal T}(\sigma(X))}_{{\cal L}'}(\eta)$ for all $X\in\mathcal{X}$. Since ${\cal T}$ respects $\sim$ I have $\nu(X)\sim\rho(X)$ for all $X\in\mathcal{X}$; thus $\eta\sim\rho\sim\nu$ and also $\nu:\mathcal{X}\rightarrow{\bf U}$.\\ $\begin{array}[b]{@{}l@{~}ll@{}} \mbox{Hence}~\denote{{\cal T}(E)}_{{\cal L}'}(\eta) & \sim \denote{{\cal T}(E)}_{{\cal L}'}(\nu) & \mbox{since $\sim$ is a congruence for ${\cal T}({\cal L})$}\\ & = \denote{{\cal T}(E)}_{{\cal L}'}(\denote{{\cal T}\circ\sigma}_{{\cal L}'}(\eta)) & \mbox{expanding the definition of $\nu$} \\ & = \denote{{\cal T}(E)[{\cal T}\circ\sigma]}_{{\cal L}'}(\eta) & \mbox{by (\ref{inductive meaning})}\\ & = \denote{{\cal T}(E[\sigma])}_{{\cal L}'}(\eta) & \mbox{by compositionality of ${\cal T}$} \\ & \sim \denote{E[\sigma]}_{{\cal L}} & \mbox{since ${\cal T}$ respects $\sim$}\\ & = \denote{E}_{\cal L}(\denote{\sigma}_{\cal L}) & \mbox{by (\ref{inductive meaning})} \\ & = \denote{E}_{\cal L}(\rho) & \mbox{by the choice of $\sigma$.} \end{array}$ \end{proof}
\section{Related work}
The greatest expressibility result presented so far is by De Simone \cite{dS85}, who showed that a wide class of languages, including CCS, SCCS, CSP and ACP, is expressible up to strong bisimulation equivalence in {\sc Meije}. Vaandrager \cite{Va93} established that this result crucially depends on the use of unguarded recursion, and its noncomputable consequences. {\em Effective} versions of CCS, SCCS, {\sc Meije} and ACP, not using unguarded recursion, are incapable of expressing all effective De Simone languages. Nevertheless, \cite{vG94a} isolated a \emph{primitive effective} dialect of ACP (featuring primitive recursive renaming operators) in which a large class of primitive effective languages, including primitive effective versions of CCS, SCCS, CSP and {\sc Meije}, can be encoded. All these results fall within the scope of the notion of translation and expressibility from \cite{Bo85} and \cite{vG94a}, and use strong bisimulation as the underlying equivalence.
In the last few years, a great number of encodability and separation results have appeared, comparing CCS, Mobile Ambients, and several versions of the $\pi$-calculus (with and without recursion; with mixed choice, separated choice or asynchronous) \cite{ Boreale98, Nestmann00, NestmannP00, Parrow00, CardelliG00, CardelliGG02, BusiGZ03, BusiGZ09, CarboneM03, Palamidessi03, BPV04, BPV05, Palamidessi05, Nestmann06, PalamidessiSVV06, PV06, CCP07, VPP07, CCAV08, HMP08, Parrow08, PV08, VBG09, PSN11, PN12}; see \cite{Gorla10b,Gorla10a} for an overview. Many of these results employ different and somewhat ad-hoc criteria on what constitutes a valid encoding, and thus are hard to compare with each other. Gorla \cite{Gorla10a} collected some essential features of these approaches and integrated them in a proposal for a valid encoding that justifies most encodings and some separation results from the literature.
Like Boudol \cite{Bo85} and the present paper, Gorla requires a compositionality condition for encodings. However, his criterion is weaker than mine (cf.\ \df{compositionality}) in that the expression $E_f$ encoding an operator $f$ may be dependent on the set of \emph{names} occurring freely in the expressions given as arguments of $f$. The reason for this weakening appears to be that it provides a method for freeing up names that need to be fresh because of the special r\^ole they play in the translation, but might otherwise occur in the expressions being translated.
To address the problem of freeing up names I advocate a slightly different approach, already illustrated in \sect{CSP-CCS}: Most languages with names are parametrised with the set of names that are allowed in expressions. So instead of the single language CCS, there is an incarnation CCS($\mathcal{A}$) for each choice of names $\mathcal{A}$. Likewise, there is an incarnation CSP($\mathcal{A}$) of CSP for each $\mathcal{A}$. A priori, these parameters need not be related. So rather than insisting that for every $\mathcal{A}$ the language CCS($\mathcal{A}$) encodes CSP($\mathcal{A}$), I merely require that for each $\mathcal{A}$ there exists a $\mathcal{B}$ such that CCS($\mathcal{B}$) encodes CSP($\mathcal{A}$). Now the translations obviously are also parametrised by the choice of $\mathcal{A}$, and they may use names in $\mathcal{B}-\mathcal{A}$ as names that are guaranteed to be fresh. It is an interesting topic for future research to see if there are any valid encodability results \`a la \cite{Gorla10a} that suffer from my proposed strengthening of compositionality.
The second criterion of \cite{Gorla10a} is a form of invariance under name-substitution. It serves to partially undo the effect of making the compositionality requirement name-dependent. In my setting I have not yet found the need for such a condition. This criterion as formalised in \cite{Gorla10a} is too restrictive. It forbids the translation of the input process $a(x).E$ from value-passing CCS \cite{Mi90ccs} into the CCS expression $\sum_{v\in \cal V}a_v.E[v/x]$, where $\cal V$ is a given (possibly infinite) set of data values. The problem is that a renaming of the single name $a$ occurring in an expression $E$ of value-passing CCS, say into $b$, would require renaming infinitely many names $a_v$ occurring in ${\cal T}(E)$ into $b_v$, which is forbidden in \cite{Gorla10a}. Yet this translation, from \cite{Mi90ccs}, appears entirely justified intuitively.
The remaining three requirements of Gorla might be seen as singling out a particular preorder $\sqsubseteq$ for comparing terms and their translations. Since in \cite{Gorla10a}, as in \cite{Bo85}, the domain of interpretation consists of the closed expressions, and $\sqsubseteq$ is generally not a congruence for the source or target languages, one needs to compare with the approach of \sect{respects}, where $\sim$ is allowed to be a preorder. The preorder presupposes a transition system with $\tau$-transitions (reduction), and a notion of a success state; and compares processes based on these attributes only.
Hence Gorla's criteria are very close to an instantiation of mine with a particular preorder. Further work is needed to sort out to what extent the two approaches have relevant differences when evaluating encoding and separation results from the literature. Another topic for future work is to sort out how dependent known encoding and separation results are on the chosen equivalence or preorder.
As a concluding remark, many separation results in the literature \cite{CarboneM03,Palamidessi03,PalamidessiSVV06,PV06,PV08,HMP08} are based on the assumption that parallel composition translates homomorphically, i.e.\ ${\cal T}(E|F)={\cal T}(E)|{\cal T}(F)$.\footnote{This assumption is often defended by the theory that non-homomorphic translations reduce the degree of concurrency of the source process---a theory I do not share. Note that my translation of CSP into CCS in \sect{CSP-CCS} is not homomorphic.} This applies for instance to the proof in \cite{HMP08} that there is no valid encoding from the asynchronous $\pi$-calculus into CCS\@. In \cite{Gorla10a} this assumption is relaxed, but the separation proof of \cite{Gorla10a} hinges crucially on the too restrictive form of Gorla's second criterion. Whether the asynchronous $\pi$-calculus is expressible in CCS is therefore still wide open.
\paragraph{Acknowledgement} My thanks to an EXPRESS/SOS referee for careful proofreading.
\end{document}
\begin{document}
\title[Discrepancies in the distribution of Gaussian primes]{Discrepancies in the distribution \\ of Gaussian primes}
\author{Lucile Devin}
\email{lucile.devin@univ-littoral.fr} \address{Univ. Littoral C\^ote d'Opale, UR 2597
LMPA, Laboratoire de Math\'ematiques Pures et Appliqu\'ees Joseph Liouville,
F-62100 Calais, France }
\keywords{Chebyshev's bias, Gaussian primes, Hecke characters}
\subjclass[2010]{11N05, 11K70}
\date\today
\begin{abstract}
Motivated by questions of Fouvry and Rudnick on the distribution of Gaussian primes, we develop a very general setting in which one can study inequities in the distribution of analogues of primes through analytic properties of infinitely many $L$-functions.
In particular, we give a heuristic argument for the following claim:
for more than half of the prime numbers that can be written as a sum of two squares, the odd square is the square of a positive integer congruent to $1 \bmod 4$.
\end{abstract}
\maketitle
\section{Introduction}
Let $p$ be a prime number with $p\equiv 1 \bmod 4$;
then $p$ can be written uniquely as $p = a^2 + 4b^2$ with integers $a,b >0$.
Here we distinguish the odd and the even square to have uniqueness, and we are interested in properties of the coordinates $a$ and $2b$.
For example, an open problem in this domain is to prove that there are infinitely many prime numbers of the form $p= 1 + 4b^2$.
Allowing a larger range for the coordinates, Fouvry and Iwaniec \cite{FouvryIwaniec_1997} have shown that there are infinitely many prime numbers $p= a^2 + 4b^2$ with $a$ itself a prime number, while Friedlander and Iwaniec \cite{FriedlanderIwaniec_1998} established the case where one of $a$ or $2b$ is a square.
More recent results restricting the values of $a$ and $b$ to thinner sets include \cite{HBL,LSX,Pratt,FriedlanderIwaniec_2018}.
In another direction, starting from the result of Hecke \cite{Hecke} ensuring that the angles defined by $\arctan(\tfrac{2b}{a})$ are equidistributed, it is of interest to study finer statistics of this distribution. Kubilius \cite{Kubilius50,Kubilius51} and Ankeny \cite{Ankeny} initiated the study of the distribution of these angles in shrinking sectors, and further developments followed \cite{Koval,Maknis,Coleman,HarmanLewis,HuangLiuRudnick,CKLMSSWWY}. Recently Rudnick and Waxman \cite{RudnickWaxman} studied the function field analogue and obtained statistical results that hold for almost all arcs of shrinking length.
In this paper we discuss the following two questions: \begin{enumerate}
\item how often is $a<2b$ compared to $a > 2b$?
\item how often is $a \equiv 1 \bmod 4$ compared to $a \equiv 3 \bmod 4$? \end{enumerate}
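Both questions are easy to explore numerically. The following Python sketch (an illustration, not part of the paper; all function names are invented) decomposes each prime $p\equiv 1 \bmod 4$ up to $x$ as $p=a^2+4b^2$ with $a,b>0$ and returns, for each question, the difference between the first count and the second.

```python
# Numerical exploration of the two questions (an illustration only).
# For each prime p = a^2 + 4 b^2 (p = 1 mod 4, a, b > 0, a odd) we tally
# question (1): a > 2b versus a < 2b, and question (2): a = 1 versus 3 mod 4.
from math import isqrt

def primes_up_to(n):
    s = bytearray([1]) * (n + 1)
    s[0] = s[1] = 0
    for i in range(2, isqrt(n) + 1):
        if s[i]:
            s[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if s[i]]

def decompose(p):
    # the unique a, b > 0 with a odd and p = a^2 + 4*b^2
    for a in range(1, isqrt(p) + 1, 2):
        r = p - a * a          # r = 0 mod 4 automatically, as a is odd
        b = isqrt(r // 4)
        if 4 * b * b == r:
            return a, b
    raise ValueError(p)

def races(x):
    d1 = d2 = 0
    for p in primes_up_to(x):
        if p % 4 == 1:
            a, b = decompose(p)
            d1 += 1 if a > 2 * b else -1   # ties a = 2b are impossible (a odd)
            d2 += 1 if a % 4 == 1 else -1
    return d1, d2
```

For instance \texttt{races(100)} returns \texttt{(-1, 5)}: already at $x=100$ the first difference is negative and the second positive, in the directions of the biases conjectured below.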
The results of Hecke \cite{Hecke} on the prime number theorem for $L$-functions associated to Hecke characters provide an answer to these two questions: asymptotically half of the time. Inspired by the letter of Chebyshev to Fuss \cite{ChebLetter}, and the rich literature that followed on Chebyshev's bias (see \cite{MartinScarfy} for a survey on the numerous contributions to the questions on ``prime number races''), we investigate the discrepancy in these equidistribution results. We denote the differences of counting functions $$D_1(x) = \lvert\lbrace p < x : p= a^2 + 4b^2, \lvert a\rvert > \lvert 2b \rvert\rbrace\rvert - \lvert\lbrace p < x : p= a^2 + 4b^2, \lvert a\rvert < \lvert 2b \rvert\rbrace\rvert,$$ $$D_2(x) = \lvert\lbrace p < x : p= a^2 + 4b^2, \lvert a\rvert \equiv 1\bmod{4} \rbrace\rvert - \lvert\lbrace p < x : p= a^2 + 4b^2, \lvert a\rvert \equiv 3\bmod{4} \rbrace\rvert.$$ Then, assuming the Generalized Riemann Hypothesis, the results of Hecke imply that, for $i=1$, $2$, $D_i(x) = O_{\epsilon}(x^{\frac12 + \epsilon})$. As observed in general for functions related to prime counting (see e.g. \cite{Win}), the logarithmic scale is the correct scale to study the functions $D_i$. We recall the definition of the upper and lower logarithmic densities of a set $\mathcal{P} \subset [1,\infty)$:
\[\overline{\delta}(\mathcal{P}) = \limsup_{Y\rightarrow\infty}\frac{1}{Y}\int_{0}^{Y}\mathbf{1}_{\mathcal{P}}(e^{y})\diff y \ \text{ and }
\ \underline{\delta}(\mathcal{P}) = \liminf_{Y\rightarrow\infty}\frac{1}{Y}\int_{0}^{Y}\mathbf{1}_{\mathcal{P}}(e^{y})\diff y,\]
where $\mathbf{1}_{\mathcal{P}}$ is the characteristic function of the set $\mathcal{P}$.
If these two densities are equal, we denote $\delta(\mathcal{P})$ their common value and call it the logarithmic density of the set~$\mathcal{P}$.
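These integrals are easy to approximate numerically. As a minimal Python sketch (the toy set and all names are invented for the illustration), the following estimates the logarithmic density of $\mathcal{P}=\bigcup_{k\geq 0}[4^k,2\cdot 4^k)$, for which $\delta(\mathcal{P})=\tfrac12$: in the variable $y=\log x$ the set is periodic with period $\log 4$ and fills half of each period.

```python
# Approximating a logarithmic density by discretizing the defining integral
# (midpoint rule).  The set P below is an invented toy example, not one of
# the sets studied in the paper.
import math

def log_density(indicator, Y, n=200000):
    """Approximate (1/Y) * integral_0^Y 1_P(e^y) dy."""
    h = Y / n
    return sum(indicator(math.exp((i + 0.5) * h)) for i in range(n)) / n

# P = union of [4^k, 2*4^k), k >= 0:  in the variable y = log x the set is
# periodic with period log 4 and fills half of each period, so delta(P) = 1/2.
def in_P(x):
    return (math.log(x) / math.log(4)) % 1.0 < 0.5

estimate = log_density(in_P, 50.0)     # close to 1/2
```

For sets of the form $\lbrace x : D_i(x) > 0\rbrace$ the same discretization applies, with the indicator evaluated from the counting functions.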
In this paper, we give a heuristic model for the distribution of the values of the functions $D_1$ and $D_2$. We formulate two conjectures.
\begin{conj}\label{Conj bias even odd}
There is a bias towards negative values in the distribution of the values of the function $D_1$.
That is to say that for more than half of the $x \in [2,\infty)$ in logarithmic scale, more than half of the primes below $x$ can be written as a sum of two squares with the even square larger than the odd square.
However, we have $D_1(x)= \Omega_{\pm}(\frac{\sqrt{x}}{\log x})$.
In particular, the function changes signs infinitely often.
\end{conj}
\begin{conj}\label{Conj bias A mod 4}
There is a complete bias towards positive values in the distribution of the values of the function $D_2$.
That is to say that, for almost all (in logarithmic scale) $x\in [2,\infty)$, more than half of the primes below $x$ can be written as a sum of two squares $p=a^2+4b^2$ with $\lvert a\rvert \equiv 1\bmod 4$.
Precisely, we have $D_2(x) \geq 0$ for almost all $x$ in logarithmic scale, and $D_2(x) = \Omega_{+}(\frac{\sqrt{x}}{\log x})$. \end{conj}
\begin{figure}
\caption{$D_1(x)$ for $x \in [2,5\cdot 10^9]$.}
\label{Fig race odd even}
\end{figure}
\begin{figure}
\caption{$D_2(x)$ for $x \in [2,5\cdot 10^9]$.}
\label{Fig race A mod 4}
\end{figure}
The $\Omega$ notation in the Conjectures is used in the following sense. For two functions~$f,g : [1,\infty) \rightarrow \mathbf{R}$, with~$g$ positive, we write $f(x) = \Omega_{+}(g)$ (resp. $ \Omega_{-}(g)$) if we have $\limsup_{x\rightarrow \infty} \frac{f(x)}{g(x)} >0$ (resp. $\liminf_{x\rightarrow \infty} \frac{f(x)}{g(x)}<0$).
Numerical data up to $5\cdot 10^9$ have been computed to support both conjectures and the graphs of the prime number races are presented in Figures~\ref{Fig race odd even} and~\ref{Fig race A mod 4}. Note in particular that no negative value of $D_2$ was found in the interval.
The paper is organized as follows. Section~\ref{sec General} is the technical heart of the paper where the setting and the precise statement of our main theoretical contribution (Theorem~\ref{Th_GeneDistLim}) are given. Here we emphasize the novel feature (compared to previous results e.g. \cite{RS,ANS,DevinChebyshev}) where we have managed to obtain distribution results involving infinitely many $L$-functions. In Section~\ref{sec cor Gaussian case}, we introduce the setting to study statistics about the distribution of Gaussian primes, and we state our results towards Conjecture~\ref{Conj bias even odd} and Conjecture~\ref{Conj bias A mod 4}: Theorem~\ref{Th race Gaussian primes} and Theorem~\ref{Th race Gaussian primes A mod 4}. These two results are proved in Section~\ref{Sec Hecke L functions} as consequences of Theorem~\ref{Th_GeneDistLim}. Building on the ideas underlying Theorem~\ref{Th race Gaussian primes} and Theorem~\ref{Th race Gaussian primes A mod 4}, we present in Section~\ref{Sec Heuristic} heuristics that led us to state Conjecture~\ref{Conj bias even odd} and Conjecture~\ref{Conj bias A mod 4}. Finally in Section~\ref{Section proof theo general}, we give the proofs of the theoretical contributions stated in Section~\ref{sec General}, notably Theorem~\ref{Th_GeneDistLim}.
\section{Chebyshev's bias using infinitely many $L$-functions}\label{sec General}
\subsection{Motivations and setting}
In \cite{MazurErrorTerm}, Mazur discussed prime number races for elliptic curves,
or more generally, for the Fourier coefficients of a modular form.
For example, he studied graphs of functions
\begin{align*}
x\mapsto \lvert\lbrace p\leq x : a_{p}(E) >0 \rbrace\rvert - \lvert\lbrace p\leq x : a_{p}(E) <0 \rbrace\rvert
\end{align*}
where $a_{p}(E) = p+1 - \lvert E(\mathbf{F}_{p})\rvert$,
for some elliptic curve $E$ defined over $\mathbf{Q}$,
and observed a bias towards negative values when the algebraic rank of the elliptic curve is large.
In \cite{SarnakLetter}, Sarnak commented and explained Mazur's observations.
His analysis of the prime number race involves the zeros of all the symmetric powers $L(\Sym^{n}E,s)$ of the Hasse--Weil $L$-function of $E/\mathbf{Q}$.
Sarnak notes that, in the case of an elliptic curve without complex multiplication, the corresponding distribution may have infinite variance and concludes that Mazur's race should be unbiased.
However, in the case of an elliptic curve with complex multiplication, even though an infinite number of $L$-functions is needed to understand Mazur's bias, Sarnak states that it would be possible to observe (and compute) an actual bias.
In \cite{CFJ}, Cha, Fiorilli and Jouve develop these ideas in the context of elliptic curves over function fields.
The heuristics for Conjecture~\ref{Conj bias even odd} (resp.~\ref{Conj bias A mod 4}) is the implementation of these ideas in the special case of the elliptic curve $y^2 = x^3 - x$ (resp. $y^2 = x^3 + x$) with complex multiplication by the ring of Gaussian integers~$\mathbf{Z}[i]$.
In the spirit of \cite{DevinChebyshev}, the main result of this article is stated in greater generality and deals with the case of prime number races using infinitely many real analytic $L$-functions.
Before stating our theorem, let us present some definitions and notation.
Our main result states the existence of a limiting logarithmic distribution for a suitable normalization of a counting function, and thus allows us to discuss the bias in the distribution of the values of this function.
\begin{defi}
Let $F:[1,\infty)\rightarrow\mathbf{R}$ be a real function.
We say that $F$ admits a limiting logarithmic distribution
if there exists a probability measure $\mu$ on the Borel sets in $\mathbf{R}$ such that
for any bounded Lipschitz continuous function $g$, we have
\begin{align*}
\lim_{Y\rightarrow\infty}\frac{1}{Y}\int_{0}^{Y}g(F(e^{y}))\diff y =
\int_{\mathbf{R}}g(t)\diff\mu(t).
\end{align*}
If $F$ admits a limiting logarithmic distribution $\mu$, we say that $\mu([0,\infty))$ is \emph{the bias} of $F$ towards non-negative values.
\end{defi}
Note that our definition of the bias differs from previous literature where it is usually defined as the logarithmic density of the set of $x$ such that $F(x)>0$. However, in the setting of Chebyshev's bias, it is expected (see \cite{KR}, also \cite{RS} assuming the Linear Independence and \cite{MartinNg,DevinChebyshev} under weaker hypotheses) that in general the distribution $\mu$ is continuous at $0$ and then the two definitions coincide.
We thus consider it as a sufficiently good approximation.
\subsection{Statement of the main result}
In this paper we use the notion of \emph{analytic $L$-function} defined in \cite[Def. 1.1]{DevinChebyshev}. Let us just recall that a vast majority of the $L$-functions used in analytic number theory (and all the $L$-functions referred to in this article) are proven (or at least conjectured) to be analytic $L$-functions.
We are interested in the prime number race associated to a sequence $\mathcal{S} =\lbrace L(f_m,\cdot) : m\geq 0 \rbrace$ of real analytic $L$-functions of degree $d_m$, and analytic conductor $\mathfrak{q}(f_m)$.
To this sequence we associate real coefficients $\underline{c} = (c_{m})_{m\geq 0}$ such that
the series $$\sum_{f_m\in \mathcal{S}}\lvert c_m \rvert d_m \log \mathfrak{q}(f_m)$$ is convergent.
Moreover, we assume that the Generalized Riemann Hypothesis is true for all the $L$-functions $ L(f_m,\cdot)$, $m\geq 0$ and their \emph{second moment} (see \cite[Def.~1.1.(iii)]{DevinChebyshev}).
That is to say that the non-trivial zeros of the functions $ L(f_m,\cdot)$, $ L(\Sym^2f_m,\cdot)$, and $ L(\wedge^2f_m,\cdot)$, $m\geq 0$ all have real part equal to $\tfrac12$.
We note that, in this case, the difficulty compared to previous results is to deal with the fact that there can be an infinite number of zeros of the functions $L(f_m,\cdot)$, $m\geq 0$ with bounded imaginary part.
Let us first fix the notation for the sets of zeros of the $L$-functions.
For any $\rho\in \mathbf{C}$, we denote by $\ord(\rho,m) = \ord_{s= \rho}(L(f_m,s))$ the order of the zero at $s=\rho$ of the function $L(f_m,s)$. For $M\geq0$, we denote
\begin{align*}
\ord_{\mathcal{S},\underline{c},M}(\rho) = \sum_{m\leq M} c_m \ord(\rho,m), \qquad
\ord_{\mathcal{S},\underline{c}}(\rho) = \sum_{m\geq 0} c_m \ord(\rho,m).
\end{align*}
One has (see e.g. \cite[Prop.~5.7~(1)]{IK}),
$$\ord(\rho,m) \ll \log(\mathfrak{q}(f_m)(\lvert\rho\rvert +3)^{d_m}), $$
with an absolute implicit constant. Thus, the condition on the coefficients~$c_m$ ensures that the series defining $\ord_{\mathcal{S},\underline{c}}(\rho)$ is convergent for each~$\rho$.
When we assume the Generalized Riemann Hypothesis, we omit the $\frac12$ in the notation and write for example, for $\gamma \in \mathbf{R}$,
$\ord_{\mathcal{S},\underline{c}}(\gamma)$ instead of $\ord_{\mathcal{S},\underline{c}}(\tfrac12 +i\gamma)$.
Then, for~$M\geq 0$ and $T >0$, we denote
\begin{align*}
\mathcal{Z}_{\mathcal{S},\underline{c}} = \lbrace \gamma>0 : \ord_{\mathcal{S},\underline{c}}(\gamma)\neq 0 \rbrace, \quad
& \quad \mathcal{Z}_{\mathcal{S},\underline{c}}(T) = \mathcal{Z}_{\mathcal{S},\underline{c}}\cap(0,T] \\
\mathcal{Z}_{\mathcal{S},\underline{c},M} = \lbrace \gamma>0 : \ord_{\mathcal{S},\underline{c},M}(\gamma)\neq 0\rbrace, \quad
& \quad \mathcal{Z}_{\mathcal{S},\underline{c},M}(T) = \mathcal{Z}_{\mathcal{S},\underline{c},M}\cap(0,T],
\end{align*}
the sets of positive imaginary parts of zeros of the product of the $L$-functions in the family. These sets do not take multiplicities into account.
We now have all the tools to state our main theorem.
\begin{theo}\label{Th_GeneDistLim}
Let $\mathcal{S} =\lbrace L(f_m,\cdot) : m\geq 0 \rbrace$ be a sequence of real analytic $L$-functions of degree $d_m$ and analytic conductor $\mathfrak{q}(f_m)$,
and let $\underline{c} = (c_{m})_{m\geq 0}$ be a sequence of real numbers such that
the series $$\sum_{f_m\in \mathcal{S}}\lvert c_m \rvert d_m \log \mathfrak{q}(f_m)$$ is convergent.
Assume the Riemann Hypothesis is satisfied for all $L(f_m,\cdot)$, $m\geq 0$.
Then the function
\[E_{\mathcal{S},\underline{c}}(x):=\frac{\log x}{\sqrt{x}}\Bigg(
\sum_{p\leq x}\sum_{m\geq 0}c_{m}\lambda_{f_m}(p) + \sum_{m\geq 0}c_{m}\ord_{s=1}(L(f_m,s)) \Li(x) \Bigg)\]
admits a limiting logarithmic distribution $\mu_{\mathcal{S},\underline{c}}$
with average value
\[\mathbb{E}(\mu_{\mathcal{S},\underline{c}}) = m_{\mathcal{S},\underline{c}} := \sum_{m\geq 0}c_{m}
\left(\ord_{s=1}(L(f_m^{(2)},s))
-2\ord_{s=1/2}(L(f_m,s))\right)\]
and variance
\[\Var(\mu_{\mathcal{S},\underline{c}}) = 2\sum_{\gamma \in \mathcal{Z}_{\mathcal{S},\underline{c}}} \frac{\lvert \ord_{\mathcal{S},\underline{c}}(\gamma)\rvert^{2}}{\frac14 +\gamma^{2}}.\]
Moreover, there exists a constant $a>0$ depending on $\mathcal{S}$ and $\underline{c}$, such that we have
\[ \mu_{\mathcal{S},\underline{c}}((-\infty,-R)\cup(R,\infty)) \ll \exp(-a\sqrt{R}).\]
\end{theo}
We prove this theorem in Section~\ref{Section proof theo general}.
A way to understand this statement is to think of the function $y\mapsto E_{\mathcal{S},\underline{c}}(e^y)$ as taking its values with some probability law of mean value $m_{\mathcal{S},\underline{c}}$.
In general (see \cite[Cor. 2.4]{DevinChebyshev}), we expect the probability law to be symmetric with respect to its mean value, so we think of the mean value as a good indicator of the behaviour of $y\mapsto E_{\mathcal{S},\underline{c}}(e^y)$, and the indication is more precise when the variance is relatively small.
In the case of \cite{SarnakLetter}, $E/\mathbf{Q}$ is an elliptic curve without complex multiplication, and we consider the $L$-functions $L(f_1,\cdot) = L(E,\cdot)$ and $L(f_m,\cdot) = L(\Sym^mE,\cdot)$ for $m\geq 2$. The degree of each $L$-function is $d_m = m+1$ and the analytic conductors satisfy $\log\mathfrak{q}(f_m) \ll m \log (m\mathfrak{q}(f_1)) $ (see \cite[(ii)]{SarnakLetter}). We observe that our condition on the convergence of $\sum_{f_m\in \mathcal{S}}\lvert c_m \rvert d_m \log \mathfrak{q}(f_m)$ corresponds to the condition stated in \cite[Cor. 2.9]{CFJ} --- precisely $c_m \ll m^{-3-\eta}$ for some $\eta >0$ --- to ensure the existence of the limiting distribution in the analogous question over function fields.
\subsection{Sign changes}
Most of the conditional results of \cite{RS,MartinNg,DevinChebyshev} giving more precise information on the properties of the limiting distribution $\mu_{\mathcal{S},\underline{c}}$ can be adapted to this case.
Let us present here some results concerning the support of $\mu_{\mathcal{S},\underline{c}}$, as they can help to answer the question ``does the function have infinitely many sign changes?'', and provide Omega-results.
Similarly to \cite[Th. 1.2]{RS}, we have a lower bound for the tails of the distribution, in case all the coefficients have the same sign.
\begin{prop}\label{Prop lower bound tails}
Let $\mathcal{S} =\lbrace L(f_m,\cdot) : m\geq 0 \rbrace$ be a sequence of real analytic $L$-functions of degree $d_m$, and analytic conductor $\mathfrak{q}(f_m)$,
and let $\underline{c} = (c_{m})_{m\geq 0}$ be a sequence of \emph{non-negative} real numbers such that
the series $$\sum_{f_m\in \mathcal{S}} c_m d_m \log \mathfrak{q}(f_m)$$ is convergent, and that at least one $c_m \neq 0$.
Assume the Riemann Hypothesis is satisfied for all $L(f_m,\cdot)$, $m\geq 0$.
Then, there exists a constant $b>0$, depending on $\mathcal{S}$ and $\underline{c}$, such that we have
$$ \min\big(\mu_{\mathcal{S},\underline{c}}((R,\infty)), \mu_{\mathcal{S},\underline{c}}((-\infty,-R))\big) \gg \exp(-\exp bR).$$
\end{prop}
As we may want to use coefficients of different signs, we now state a result inspired by \cite[Th.~1.5(b and c)]{MartinNg}, using the notion of \emph{self-sufficient zero} introduced by Martin and Ng.
\begin{defi}\label{Def_selfsuff} We say that an ordinate $\gamma\in\mathcal{Z}_{\mathcal{S},\underline{c}}$ is self-sufficient if it is not in the $\mathbf{Q}$-span of $\mathcal{Z}_{\mathcal{S},\underline{c}}\smallsetminus\lbrace \gamma\rbrace$.
\end{defi}
A priori, if there is no special reason for the imaginary parts of the zeros of some $L$-functions to be related, then we do not expect any relations between them. This general idea is called the General Simplicity Hypothesis, or the Linear Independence hypothesis (LI). It is used in particular in \cite{RS} to give more precise information on the limiting distribution in the original case of Chebyshev's bias, as well as to compute an explicit value of this bias. To state our next result, we do not need the full strength of this hypothesis: we show that if there are enough self-sufficient zeros in $\mathcal{Z}_{\mathcal{S},\underline{c}}$, then the distribution~$\mu_{\mathcal{S},\underline{c}}$ is supported on all of~$\mathbf{R}$.
\begin{prop}\label{Prop inclusive}
Let $\mathcal{S} =\lbrace L(f_m,\cdot) : m\geq 0 \rbrace$ be a sequence of real analytic $L$-functions of degree $d_m$ and analytic conductor $\mathfrak{q}(f_m)$,
and let $\underline{c} = (c_{m})_{m\geq 0}$ be a sequence of real numbers such that
the series $$\sum_{f_m\in \mathcal{S}} \lvert c_m\rvert d_m \log \mathfrak{q}(f_m)$$ is convergent.
Assume the Riemann Hypothesis is satisfied for all $L(f_m,\cdot)$, $m\geq 0$.
Let $\mathcal{Z}_{\mathcal{S},\underline{c}} = \lbrace \gamma>0 : \ord_{\mathcal{S},\underline{c}}(\gamma)\neq 0 \rbrace$
and $\mathcal{Z}_{\mathcal{S},\underline{c}}^{\mathrm{LI}}$ the set of self-sufficient elements in~$\mathcal{Z}_{\mathcal{S},\underline{c}}$.
Assume that $$\sum_{\gamma \in \mathcal{Z}_{\mathcal{S},\underline{c}}^{\mathrm{LI}}} \frac{\lvert \ord_{\mathcal{S},\underline{c} }(\gamma)\rvert}{\gamma}$$ diverges.
Then
$ \mathrm{supp}(\mu_{\mathcal{S},\underline{c}}) = \mathbf{R}$.
\end{prop}
In particular, under such conditions, we deduce that there cannot be a complete bias; that is to say, the function $E_{\mathcal{S},\underline{c}}$ changes sign infinitely many times, and we obtain Omega-results.
\begin{cor}\label{Cor Omega result}
Under the assumptions of Proposition~\ref{Prop lower bound tails} or of Proposition~\ref{Prop inclusive},
we have
\begin{align*}
\sum_{p\leq x}\sum_{m\geq 0}c_{m}\lambda_{f_m}(p) + \sum_{m\geq 0}c_{m}\ord_{s=1}(L(f_m,s)) \Li(x) = \Omega_{\pm}(\tfrac{\sqrt{x}}{\log x}).
\end{align*}
In particular, the function $x \mapsto \sum_{p\leq x}\sum_{m\geq 0}c_{m}\lambda_{f_m}(p) + \sum_{m\geq 0}c_{m}\ord_{s=1}(L(f_m,s)) \Li(x) $ has infinitely many sign changes.
\end{cor}
Finally, we observe that in the case of a non-real\footnote{The case of a real zero of maximal real part would give only one direction of Omega-result, as in \cite[Lem. 2.1]{FiorilliMartin}.} counter-example to the Generalized Riemann Hypothesis, oscillation results are more easily obtained thanks to Landau's Theorem.
\begin{prop}\label{Prop oscillation without GRH}
Let $\mathcal{S} =\lbrace L(f_m,\cdot) : m\geq 0 \rbrace$ be a sequence of real analytic $L$-functions of degree $d_m$ and analytic conductor $\mathfrak{q}(f_m)$,
and let $\underline{c} = (c_{m})_{m\geq 0}$ be a sequence of real numbers such that
the series $$\sum_{f_m\in \mathcal{S}} \lvert c_m\rvert d_m \log \mathfrak{q}(f_m)$$ is convergent.
Let $\Theta = \sup\lbrace \re(\rho) : \ord_{\mathcal{S},\underline{c}}(\rho) \neq 0\rbrace $. Assume that $\Theta>\frac12$ and that for each $m\geq 0$, one has $L(f_m,\Theta) \neq 0$. Then for any $\epsilon >0$, we have \begin{align*}
\sum_{p\leq x}\sum_{m\geq 0}c_{m}\lambda_{f_m}(p) + \sum_{m\geq 0}c_{m}\ord_{s=1}(L(f_m,s)) \Li(x) = \Omega_{\pm}(x^{\Theta -\epsilon}). \end{align*}
\end{prop}
\section{Distribution of the angles of Gaussian primes}\label{sec cor Gaussian case}
The representation of a prime number as a sum of two squares is governed by the ring of Gaussian integers $\mathbf{Z}[i]$, where the prime numbers $p\equiv 1 \bmod 4$ split as $p =(a + 2ib)(a - 2ib)$, while the prime numbers $p \equiv 3 \bmod 4$ remain inert.
The aim of Conjectures~\ref{Conj bias even odd} and~\ref{Conj bias A mod 4} is to understand fine statistics on the distribution of Gaussian primes in the plane.
To do so, we study the distribution of the angles of the Gaussian primes which are defined as follows.
For any prime ideal $\mathfrak p \neq (1+i)$ in $\mathbf{Z}[i]$, there exists a unique Gaussian integer $a + 2ib \equiv 1 \bmod (2+2i)$ generating $\mathfrak p$ (starting from $a,b \geq 0$, either $a + 2ib \equiv 1 \bmod (2+2i)$ or $-a - 2ib \equiv 1 \bmod (2+2i)$).
Then we say that the angle of $\mathfrak{p}$ is the argument of this uniquely determined generator,
\begin{equation}
\label{Equation def angle}
\theta_{\mathfrak{p}} = \arg(a + 2ib) \in (-\pi,\pi] \text{ where } \mathfrak{p} = (a+2ib), \quad a+2ib \equiv 1 \bmod (2+2i).
\end{equation}
Note that, if $\mathfrak{p}$ is generated by $a + 2ib \equiv 1 \bmod (2+2i)$, then one also has $a - 2ib \equiv 1 \bmod (2+2i)$ and this latter Gaussian integer generates $\overline{\mathfrak{p}}$.
So for a rational prime $p \equiv 1 \bmod 4$, one can define its Gaussian angle up to its sign, $\theta_p = \pm \theta_{\mathfrak{p}} \in [0,\pi]$, where $\mathfrak{p}\mid p$.
We also observe that this choice is natural: the number of $\mathbf{F}_p$-points of the elliptic curve with affine model $y^2 = x^3 -x$ is exactly given by $p+1 - 2\sqrt{p}\cos(\theta_{p})$.
Note finally that with this definition, the natural angle associated to an inert prime $p \equiv 3 \bmod 4$ is $\theta_p = \pi$.
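Since the congruence normalisation is completely explicit, the angles $\theta_p$ and the elliptic-curve identity above can be checked by direct computation. The following Python sketch is our own illustration (the helper names are hypothetical and the brute-force search over $b$ is not meant to be efficient):

```python
from math import atan2, cos, isqrt, sqrt

def congruent_1_mod_2_plus_2i(x, y):
    # x + iy == 1 mod (2+2i)  <=>  (x - 1 + iy)(2 - 2i)/8 lies in Z[i]
    u, v = x - 1, y
    return (u + v) % 4 == 0 and (v - u) % 4 == 0

def gaussian_angle(p):
    """theta_p in (0, pi) for a prime p == 1 mod 4, computed from the unique
    generator a + 2ib == 1 mod (2+2i) of a prime ideal above p."""
    assert p % 4 == 1
    for b in range(1, isqrt(p // 4) + 1):
        a2 = p - 4 * b * b
        a = isqrt(a2)
        if a * a == a2:
            for s in (a, -a):
                for t in (2 * b, -2 * b):
                    if congruent_1_mod_2_plus_2i(s, t):
                        return abs(atan2(t, s))   # theta_p = |theta_frak_p|
    raise ValueError("p is not of the form a^2 + 4b^2")

def count_points(p):
    # F_p-points (affine points plus the point at infinity) of y^2 = x^3 - x
    sq = {}
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    return 1 + sum(sq.get((x * x * x - x) % p, 0) for x in range(p))

# check #E(F_p) = p + 1 - 2*sqrt(p)*cos(theta_p) on small split primes
for p in (5, 13, 17, 29, 37, 41, 53):
    assert abs(count_points(p) - (p + 1 - 2 * sqrt(p) * cos(gaussian_angle(p)))) < 1e-9
```

For instance, the code finds $\theta_5 = \arg(-1+2i)$, since $-1+2i \equiv 1 \bmod (2+2i)$, matching the point count $\#E(\mathbf{F}_5) = 8$.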
Hecke proved in \cite{Hecke} that the angles of Gaussian primes are equidistributed on the circle.
In particular, for $0\leq \alpha < \beta \leq \pi$, one has
\begin{equation*}
\lim_{X\rightarrow\infty} \frac{\lvert\lbrace p \leq X : p\equiv 1 \bmod 4,\, \alpha < \theta_p < \beta \rbrace\rvert}{\lvert\lbrace p \leq X : p\equiv 1 \bmod 4 \rbrace\rvert} = \frac{\beta - \alpha}{\pi}.
\end{equation*}
The limit depends only on the length of the interval. In the context of Conjecture~\ref{Conj bias even odd}, we are interested in the error term in this result, and in particular to show how it depends on the interval $[\alpha,\beta]$. Our main heuristic for this conjecture is the following result.
\begin{theo}\label{Th race Gaussian primes}
Assume the Generalized Riemann Hypothesis.
Let $\phi$ be a $2\pi$-periodic even function on $\mathbf{R}$,
with Fourier coefficients $\hat{\phi}(m) := \frac{1}{2\pi}\int_{-\pi}^{\pi} \phi(t)e^{-imt} \diff t$ for $m \in \mathbf{Z}$,
satisfying $\hat{\phi}(m) \ll \lvert m\rvert^{-1 -\epsilon}$ for some $\epsilon >0$.
Then the function
\begin{align*}
E_{\phi}(x) := \frac{\log x}{\sqrt {x}}\Bigg(\sum_{\substack{p\leq x \\ p \equiv 1 \bmod{4}}} \phi(\theta_p) - \Li(x)\int_{0}^{\pi}\phi(t) \frac{\diff t}{2\pi} \Bigg)
\end{align*}
admits a limiting logarithmic distribution $\mu_{\phi}$ as $x\rightarrow\infty$.
Assume also that the order of vanishing at $s= \frac12$ of each Hecke $L$-function involved is exactly given by the sign of its functional equation.
Then
the average value of $\mu_{\phi}$ is equal to
$$ \frac{-\hat{\phi}(0)}{2} - \frac{\phi(0) + \phi(\pi)}{4} +\frac{1}{2\pi} \int_{-\pi}^{\pi} \phi(t)\frac{\cos(t)}{\cos(2t)} \diff t, $$
where the integral is understood as the Cauchy principal value.
\end{theo}
Theorem~\ref{Th race Gaussian primes} is just an $\epsilon$ away from Conjecture~\ref{Conj bias even odd}. Indeed, we would like to choose $\phi$ to be a step function but the $m$-th Fourier coefficient of such a function is of size $\lvert m\rvert^{-1}$.
However, as the function $\phi$ can be chosen very generally, it actually gives fine information on the distribution of the angles $\theta_p$. Figure~\ref{fig hist angles} shows the difference between a histogram of the distribution of the angles $\theta_{p}$ and equidistribution. We observe some irregularities in the otherwise relatively well equidistributed behaviour, happening at $\frac{\pi}{4}$ and $\frac{3\pi}{4}$ and corresponding to the poles of $\frac{\cos(t)}{\cos(2t)}$. This irregularity might only be due to our ``unfolding'' of the angle $\theta_p$ (observe that the angle $2\theta_p$, or even $4\theta_p$, is more usually studied in the literature \cite{Kubilius50,Coleman,RudnickWaxman} and does not exhibit such a bias). We note that our choice of $a + 2ib \equiv 1 \bmod (2+2i)$ implies that the only possibilities for $\lvert a\rvert$ and $\lvert 2b\rvert$ to be very close to each other are $a= \pm2b+1$, while $a = \pm 2b -1$ is excluded. This could explain the behaviour of the distribution around $\frac{\pi}{4}$ and $\frac{3\pi}{4}$.
\begin{figure}
\caption{Relative distribution of the angles $\theta_{p}$ for $p\leq 10^8$: we count the number of angles $\theta_p$ in $200$ subintervals of $[0,\pi]$ and subtract the mean value; in red, equidistribution; in blue, the ``secondary term'' $\frac{\cos x}{\cos 2x} - \frac12$.}
\label{fig hist angles}
\end{figure}
In the case where $\phi$ is $\pi$-periodic with mean value $0$, we find that the mean value of $\mu_{\phi}$ is equal to $\frac{- \phi(\pi)}{2}$; this indicates a bias in the distribution of the angles $\theta_p$, due to the fact that our sum defining $E_{\phi}$ does not include the inert primes $p\equiv 3 \bmod 4$.
Note finally that this is a case where the function field analogue differs from the original question. Indeed, \cite[Th. 1.8]{Perret-Gentil} shows that there is a bias in the distribution of the analogue of angles of Gaussian primes in function fields that is in the direction of sectors parametrized by non-squares. Such a phenomenon does not seem to appear here.\\
In the context of Conjecture~\ref{Conj bias A mod 4}, let us first note that if $p = a^2 + 4b^2$ with $a+2ib \equiv 1 \bmod (2+2i)$, then $p \equiv 1 \bmod 8 \Leftrightarrow a \equiv 1 \bmod 4$ and $p \equiv 5 \bmod 8 \Leftrightarrow a \equiv -1 \bmod 4$, so we have \begin{align*} \sum_{p = a^2 + 4b^2 \leq x} \mathbf{1}_{\lvert a\rvert \equiv 1 \bmod 4}(p) - \mathbf{1}_{\lvert a\rvert \equiv -1 \bmod 4}(p) &= \sum_{\substack{p\leq x \\ p \equiv 1 \bmod{8} }} \phi(\theta_p) - \sum_{\substack{p\leq x \\ p \equiv 5 \bmod{8} }} \phi(\theta_p) \\ &= \sum_{\substack{p\leq x \\ p \equiv 1 \bmod{4} }} \phi(\tilde{\theta}_p), \end{align*} where $\phi = \mathbf{1}_{(0,\frac{\pi}{2})}- \mathbf{1}_{(\frac{\pi}{2},\pi)}$ (so that for $a\equiv 1 \bmod 4$ one counts $+1$ when $a$ is positive and $-1$ when it is negative, and the other way round for $a\equiv -1 \bmod 4$) and we define $\tilde{\theta}_p = \begin{cases}
\theta_{p} & \text{ if } p \equiv 1 \bmod{8} \\
\pi - \theta_{p} & \text{ if } p \equiv 5 \bmod{8} \end{cases}$. Similarly to Theorem~\ref{Th race Gaussian primes}, we have an analogous result when $\phi$ is just slightly smoother than a step function.
\begin{theo}\label{Th race Gaussian primes A mod 4}
Assume the Generalized Riemann Hypothesis.
Let $\phi$ be a $2\pi$-periodic even function on $\mathbf{R}$
with Fourier coefficients $\hat{\phi}(m) := \frac{1}{2\pi}\int_{-\pi}^{\pi} \phi(t)e^{-imt} \diff t$ for $m \in \mathbf{Z}$,
satisfying $\hat{\phi}(m) \ll \lvert m\rvert^{-1 -\epsilon}$ for some $\epsilon >0$.
Then the function
\begin{align*}
F_{\phi}(x) := \frac{\log x}{\sqrt {x}}\Bigg(\sum_{\substack{p\leq x \\ p \equiv 1 \bmod{8} }} \phi(\theta_p) - \sum_{\substack{p\leq x \\ p \equiv 5 \bmod{8} }} \phi(\theta_p) \Bigg)
\end{align*}
admits a limiting logarithmic distribution $\nu_{\phi}$ as $x\rightarrow\infty$.
Assume also that the order of vanishing at $s= \frac12$ of each Hecke $L$-function involved is exactly given by the sign of its functional equation.
Then the average value of $\nu_{\phi}$ is equal to
$$\frac{-\hat{\phi}(0)}{2} -\frac{\phi(0) + \phi(\pi)}{4} +\frac{1}{2\pi} \int_{-\pi}^{\pi} \phi(t)\frac{1}{2\cos(t)} \diff t,$$
where the integral is understood as the Cauchy principal value. \end{theo}
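The congruence dichotomy used above, namely $p \equiv 1 \bmod 8$ exactly when $a \equiv 1 \bmod 4$ (and $p \equiv 5 \bmod 8$ exactly when $a \equiv -1 \bmod 4$) for the normalised generator $a+2ib \equiv 1 \bmod (2+2i)$, is easy to test numerically. A minimal Python sketch (our own illustration; `normalized_a` is a hypothetical helper using a brute-force search):

```python
from math import isqrt

def normalized_a(p):
    """The integer a with p = a^2 + 4b^2 and a + 2ib == 1 mod (2+2i)."""
    for b in range(1, isqrt(p // 4) + 1):
        a2 = p - 4 * b * b
        a = isqrt(a2)
        if a * a == a2:
            for s in (a, -a):
                for t in (2 * b, -2 * b):
                    # s + it == 1 mod (2+2i) <=> 4 | (s - 1 + t) and 4 | (t - s + 1)
                    if (s - 1 + t) % 4 == 0 and (t - s + 1) % 4 == 0:
                        return s
    raise ValueError("p is not of the form a^2 + 4b^2")

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

for p in filter(is_prime, range(5, 3000, 4)):    # primes p == 1 mod 4
    a = normalized_a(p)
    assert (p % 8 == 1) == (a % 4 == 1)          # and p == 5 mod 8 <=> a == 3 mod 4
```

For example, $5 = (-1)^2 + 4$ with normalised generator $-1+2i$, and indeed $5 \equiv 5 \bmod 8$ with $a = -1 \equiv -1 \bmod 4$.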
Taking $\phi$ such that $\phi(\pi-\theta)= - \phi(\theta)$ in Theorem~\ref{Th race Gaussian primes A mod 4} gives fine information on the distribution of the angles $\tilde{\theta}_p$. Figure~\ref{fig hist angles Amod4} shows the difference between a histogram of the distribution of the angles $\tilde{\theta}_{p}$ and equidistribution. We observe some irregularity in the otherwise relatively well equidistributed behaviour, happening at~$\frac{\pi}{2}$ and corresponding to the pole of $\frac{1}{\cos(t)}$. Note that along the prime numbers $p= 1+4b^2$ we have $\tilde{\theta}_{p} \rightarrow \frac{\pi}{2}^-$. The jump in the distribution is consistent with the idea that there are infinitely many such prime numbers. \begin{figure}
\caption{Relative distribution of the angles $\tilde{\theta}_{p}$ for $p\leq 10^8$: we count the number of angles $\tilde{\theta}_p$ in $200$ subintervals of $[0,\pi]$ and subtract the mean value; in red, equidistribution; in blue, the ``secondary term'' $\frac{1}{\cos x} - \frac12$.}
\label{fig hist angles Amod4}
\end{figure}
Theorems~\ref{Th race Gaussian primes} and~\ref{Th race Gaussian primes A mod 4} both follow from the decomposition of the function $\phi$ into its Fourier series and an application of Theorem~\ref{Th_GeneDistLim} to a sequence of $L$-functions associated to the powers of a given Hecke character, with coefficients the Fourier coefficients of $\phi$. We give the details on the precise Hecke characters in Section~\ref{Sec Hecke L functions}. We also comment there on the hypothesis that allows us to give a formula for the mean value of the limiting distribution.
\section{$L$-functions of Hecke characters on $\mathbf{Z}[i]$}\label{Sec Hecke L functions}
Let us first review some properties of Hecke characters that will be useful in this paper. Our references for this section are mostly \cite{Rohrlich}, \cite[Chap. VII.6]{Neukirch} and \cite[Chap. 3.8, Chap. 5.10]{IK}.
Let $\xi$ be a unitary primitive Hecke character on $\mathbf{Z}[i]$ of conductor $\mathfrak{f}$ and frequency $\ell$; we define the Hecke $L$-function associated to $\xi$ as the following Dirichlet series on the half-plane $\re(s)>1$:
\begin{equation}
L(s,\xi) = \sum_{\mathfrak n}\xi(\mathfrak n) N(\mathfrak n)^{-s} = \prod_{\mathfrak p}\left(
1 - \xi(\mathfrak p)N(\mathfrak{p})^{-s} \right)^{-1}
\end{equation}
where $\mathfrak{n}$ runs over all non-zero ideals of $\mathbf{Z}[i]$ and $\mathfrak{p}$ runs over its prime ideals.
Hecke proved that $L(s,\xi)$ extends meromorphically to the whole plane $\mathbf{C}$ with at most a simple pole at $s=1$ when $\xi$ is the trivial character.
Moreover, the completed $L$-function
$$\Lambda(s,\xi) := (4 N\mathfrak{f})^{s/2}\pi^{-s - (\lvert\ell\rvert + 1)/2}\Gamma\left( \frac{s + \frac{\lvert \ell \rvert}{2}}{2}\right)\Gamma\left( \frac{s + 1+\frac{\lvert \ell \rvert}{2}}{2}\right) L(s,\xi),$$
admits a functional equation
$\Lambda(s,\xi) = W(\xi)\Lambda(1-s,\overline{\xi})$,
where $W(\xi)$ is a complex number of modulus $1$ (the sign of the functional equation) that can be given explicitly via a Gauss sum \cite[(3.85)]{IK}.
In the case $\xi(\overline{\mathfrak{p}}) = \overline{\xi}(\mathfrak{p})$, one has $\Lambda(s,\xi) = \Lambda(s,\overline{\xi})$ and $W(\xi) = \pm 1$. Then, as it is believed to be true in $100\%$ of the cases (see for example \cite{Greenberg} and \cite{Waxman} for partial results in a family of $L$-functions associated to Hecke characters close to the ones we are considering in this paper), it is common to assume that the multiplicity of the zero at $s = \tfrac12$ of $L(s,\xi)$ is determined by the sign of the functional equation. Precisely, in Theorems~\ref{Th race Gaussian primes} and~\ref{Th race Gaussian primes A mod 4}, to give a formula for the mean value of the limiting distribution, we assume that $$\ord_{s= 1/2}(L(s,\xi)) = \frac{1 - W(\xi)}{2}$$ for each Hecke character $\xi$ involved. With this notation and these assumptions fixed, let us now prove Theorem~\ref{Th race Gaussian primes}.
\begin{proof}[Proof of Theorem~\ref{Th race Gaussian primes} as a consequence of Theorem~\ref{Th_GeneDistLim}]
Let us first write $\phi$ as the sum of its Fourier series. Since it is even of period $2\pi$, one has $\phi(\theta) = \sum_{m\geq 0} c_{m}(\phi) \cos (m\theta)$
where the Fourier coefficients are
\begin{align*}
c_0(\phi) &= \frac{1}{2\pi}\int_{-\pi}^{\pi} \phi(t) \diff t\\
c_m(\phi) &= \frac{1}{\pi}\int_{-\pi}^{\pi} \phi(t)\cos(mt) \diff t \text{, for } m\geq 1.
\end{align*}
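These coefficient formulas can be sanity-checked numerically. The following Python sketch is our own illustration (a midpoint rule on a uniform grid; the helper name is hypothetical):

```python
from math import cos, pi

def cos_coefficients(phi, M, N=4096):
    """c_0 = (1/2pi) * int phi, and c_m = (1/pi) * int phi(t) cos(mt) dt on [-pi, pi]."""
    ts = [-pi + (k + 0.5) * 2 * pi / N for k in range(N)]
    dt = 2 * pi / N
    coeffs = [sum(phi(t) for t in ts) * dt / (2 * pi)]
    for m in range(1, M + 1):
        coeffs.append(sum(phi(t) * cos(m * t) for t in ts) * dt / pi)
    return coeffs

# for phi(t) = 0.5 + cos(3t) we expect c_0 = 0.5, c_3 = 1 and c_1 = c_2 = c_4 = c_5 = 0
c = cos_coefficients(lambda t: 0.5 + cos(3 * t), 5)
assert abs(c[0] - 0.5) < 1e-9 and abs(c[3] - 1.0) < 1e-9
assert all(abs(c[m]) < 1e-9 for m in (1, 2, 4, 5))
```

Reconstructing $\phi(\theta)=\sum_{m\geq 0}c_m(\phi)\cos(m\theta)$ from these coefficients recovers the test function at any sample point.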
In equation~\eqref{Equation def angle}, the definition of the angle of Gaussian primes is given so that the function $\mathfrak{p} \mapsto e^{i\theta_{\mathfrak{p}}}$ comes from a Hecke character.
Precisely, let $\mathfrak{m} = (2+2i)$; there are exactly $4$ invertible elements in $\mathbf{Z}[i]/\mathfrak{m}$, corresponding to the $4$ units of $\mathbf{Z}[i]$: $\pm 1, \pm i$.
Thus, every ideal $\mathfrak{a} \subset \mathbf{Z}[i]$ co-prime to $\mathfrak{m}$ has a unique generator $\alpha \equiv 1 \bmod \mathfrak{m}$.
Let $\xi$ be the Hecke character on the multiplicative groups of fractional ideals of $\mathbf{Z}[i]$ modulo $\mathfrak{m}$ defined by
\begin{equation*}
\xi((\alpha)) = \begin{cases}
\frac{\alpha}{\lvert \alpha \rvert} &\text{ if } \alpha \equiv 1 \bmod \mathfrak{m} \\
0 &\text{ if } (\alpha, \mathfrak{m}) \neq 1.
\end{cases}
\end{equation*}
Thus, for any prime ideal $\mathfrak{p}$ co-prime to $2$, one has $\xi(\mathfrak{p}) = e^{i\theta_{\mathfrak{p}}}$.
Then $\xi$ is a unitary Hecke character of frequency $1$ and conductor $\mathfrak{m}$; its finite part is the Dirichlet character $\chi: u \mapsto u^{-1}$ for $u \in\lbrace \pm 1, \pm i\rbrace$ representing the invertible congruence classes modulo $\mathfrak{m}$.
Coming back to our decomposition, for any unramified splitting rational prime $(p) = \mathfrak{p}\overline{\mathfrak{p}}$, one has, for any $m \in \mathbf{N}$
\begin{align*}
2\cos(m\theta_{p}) = \xi^{m}(\mathfrak{p}) + \xi^{m}(\overline{\mathfrak{p}}).
\end{align*}
Moreover, for any integer $m \geq 1$, the character $\xi^{m}$ has frequency $m$ and finite part $\chi(u) = u^{-m}$ for $u \in\lbrace \pm 1, \pm i\rbrace$ (again seen as elements in $\mathbf{Z}[i]/\mathfrak{m}$). It is primitive of conductor $(2+2i)$ if $m$ is odd, primitive of conductor $(2)$ if $m \equiv 2 \bmod 4$, and non-primitive of conductor $(1)$ if $m\equiv 0 \bmod 4$. Let us denote by $\xi_m$ the primitive character associated to $\xi^m$,
\begin{align*}
\xi^m = \begin{cases}
\xi_m &\text{ if } m \not\equiv 0 \bmod 4, \\
\xi_m \chi_0 &\text{ if } m \equiv 0 \bmod 4
\end{cases}
\end{align*}
where $\chi_0$ is the principal character modulo $(1+i)$.
In particular,
\begin{align*}
\sum_{N\mathfrak{p} \leq x} \xi^m(\mathfrak{p}) = \sum_{N\mathfrak{p} \leq x} \xi_m(\mathfrak{p}) + O(1),
\end{align*}
so we only lose a small error term in considering $\xi_m$ instead of $\xi^m$.
Note that
$L(s,\xi_{0}) = \zeta_{\mathbf{Q}(i)}(s)$ and
for $m\geq 1$,
\begin{multline*}
L(s,\xi_{m}) = (1 - \xi_m(1+i)2^{-s})^{-1} \prod_{p \equiv 1 \bmod 4} (1 - 2\cos(m\theta_p) p^{-s} + p^{-2s})^{-1} \\ \times \prod_{p \equiv 3 \bmod 4} (1 - (-1)^m p^{-2s})^{-1}
\end{multline*}
are analytic $L$-functions in the sense of \cite[Def. 1.1]{DevinChebyshev} (see e.g. \cite[5.10]{IK}).
Hence, we apply Theorem~\ref{Th_GeneDistLim} with $\mathcal{S} = \mathcal{S}_1 = \lbrace L(s,\xi_{m}) : m\geq 0 \rbrace$, and $\underline{c} = \lbrace c_m(\phi): m\geq 0\rbrace$.
Each of these $L$-functions has degree $d_m=2$, and
for $m\geq 1$ we have
\begin{align*}
\mathfrak{q}(\xi^m) \leq 4\times N(2+2i) (\tfrac{m}{2} + 3)(\tfrac{m}{2} + 4) \ll m^2.
\end{align*}
The hypothesis on the Fourier coefficients of $\phi$ is exactly what ensures that the series $\sum_{m\geq 1} \lvert c_m(\phi) \rvert \log m^2$ is convergent.
Under the Riemann Hypothesis for the $L(s,\xi_m)$, $m\geq 0$, Theorem~\ref{Th_GeneDistLim} shows that the function
\begin{align*}
E_{\phi}(x) &=\frac{\log x}{2\sqrt{x}}\Bigg(
\sum_{\substack{p\leq x \\ p\equiv 1 \bmod 4}}\sum_{m\geq 0}c_{m}(\phi) 2\cos(m\theta_p) + \sum_{m\geq 0}c_{m}(\phi)\ord_{s=1}(L(s,\xi_m)) \Li(x) \Bigg) \\
&= \frac{\log x}{\sqrt{x}}\Bigg(
\sum_{\substack{p\leq x \\ p\equiv 1 \bmod 4}} \phi(\theta_p) - \frac{c_{0}(\phi) \Li(x)}{2} \Bigg)
\end{align*}
admits a limiting distribution $\mu_{\phi}$
(where we used the fact that for all $m\geq 1$, the function $L(\cdot,\xi_m)$ has neither a pole nor a zero at $s=1$).
Moreover, Theorem~\ref{Th_GeneDistLim} yields the following expression for the mean value of $\mu_{\phi}$:
\begin{align*}
\mathbb{E}(\mu_{\phi}) = \frac{1}{2}\sum_{m\geq 0}c_{m}(\phi)
\left(\ord_{s=1}(L(s,\xi_m^{(2)})) - 2 \ord_{s= 1/2}(L(s,\xi_{m})) \right).
\end{align*}
In the case $m=0$, we have $L(s,\xi_{0}) = \zeta_{\mathbf{Q}(i)}(s) = \zeta(s) L(s,\chi_4)$. Up to the factor at~$2$, we have $L(s,\chi_4^{(2)}) = \zeta(s)$; thus $\ord_{s=1}(L(s,\xi_0^{(2)})) = -2$.
For $m\geq 1$, we have that
\begin{align*}
L(s,\xi^{m (2)}) &= \prod_{p \equiv 1 \bmod 4} (1 - e^{i2m\theta_p} p^{-s})^{-1}(1 - e^{-i2m\theta_p} p^{-s})^{-1} \prod_{p \equiv 3 \bmod 4} (1 - (-1)^m p^{-s})^{-2} \\
&=\begin{cases}
L(s,\xi^{2m}) \frac{\zeta(s) }{L(s,\chi_4)} (1-2^{-s}) &\text{ for } m \text{ even, } \\
L(s,\xi^{2m}) \frac{L(s,\chi_4)}{\zeta(s) } (1-2^{-s})^{-1} &\text{ for } m \text{ odd. }
\end{cases}
\end{align*}
For each $m\geq 1$, $\ord_{s=1}(L(s,\xi^{2m})) = 0$, so the function $L(s,\xi_m^{(2)})$ has a pole of order $1$ at $s=1$ when $m$ is even and a zero of order $1$ at $s=1$ when $m$ is odd.
We have
\begin{align*}
\sum_{m\geq 0}c_{m}(\phi)\ord_{s=1}(L(s,\xi_{m}^{(2)})) = -c_0(\phi) - \sum_{m\geq 0}(-1)^mc_{m}(\phi) = -\hat{\phi}(0) - \phi(\pi).
\end{align*}
where we used that $c_0(\phi) = \hat{\phi}(0)$ and $\phi(\pi) = \sum_{m\geq 0}(-1)^m c_{m}(\phi)$.
Now, we need to determine the values of $\ord_{s= 1/2}(L(s,\xi_{m}))$.
The sign of the functional equation for $L(s,\xi_{m})$ depends on the congruence class of $m$ modulo $8$.
Precisely, the formula \cite[(3.85)]{IK} for the sign of the functional equation gives
\begin{align*}
W(\xi_m) = i^{-m} N(\mathfrak{f}_m)^{-\frac12} \xi_{m,\infty}(\gamma_m) \sum_{x \in \mathbf{Z}[i]/\mathfrak{f}_m}\xi_{m,\mathrm{fin}}(x) e^{2i\pi \tr(\tfrac{x}{\gamma_m})}
\end{align*}
where $\mathfrak{f}_m$ is the conductor of $\xi_m$ --- that is, $(2+2i)$ if $m$ is odd, $(2)$ if $m \equiv 2 \bmod 4$ and $(1)$ if $m \equiv 0 \bmod 4$ ---, $\xi_{m,\infty}$ and $\xi_{m,\mathrm{fin}}$ are respectively the infinite and finite parts of~$\xi_m$, and $\gamma_m \in \mathbf{Z}[i]$ is such that $(\gamma_m) = 2\mathfrak{f}_m$.
In particular for odd $m$,
\begin{align*}
W(\xi_m) &= i^{-m} 8^{-\frac12} e^{i\pi \frac{m}{4}} \sum_{x \in \lbrace \pm 1, \pm i\rbrace} x^{-m} e^{2i\pi \tr(\tfrac{x}{4+4i})} \\
&= \frac{e^{-i\pi \frac{m}{4}}}{2\sqrt{2}} (2i - 2i^{m+1}) = \begin{cases}
e^{i\pi \frac{1-m}{4}} &\text{ if } m\equiv 1 \bmod 4, \\
e^{i\pi \frac{3-m}{4}} &\text{ if } m\equiv 3 \bmod 4.
\end{cases}
\end{align*}
For $m \equiv 2 \bmod 4$, we have
\begin{align*}
W(\xi_m) = i^{-m} 4^{-\frac12} \sum_{x \in \lbrace 1,i\rbrace}x^{-m} e^{2i\pi \tr(\tfrac{x}{4})}
=1,
\end{align*}
and for $m \equiv 0 \bmod 4$,
\begin{align*}
W(\xi_m) = i^{-m} 1^{-\frac12} e^{2i\pi \tr(\tfrac{1}{2})} = 1.
\end{align*}
So that \begin{align*}
W(\xi_m) = \begin{cases}
1 &\text{ if } m \equiv 0, 1, 2, 3, 4, 6 \bmod 8, \\
-1 &\text{ if } m \equiv 5 , 7 \bmod 8.
\end{cases}
\end{align*}
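This evaluation of the Gauss sums can be double-checked numerically. The following Python sketch (our own sanity check, not part of the proof) evaluates the formula for $W(\xi_m)$ exactly as written above, using $\tr(z) = z + \bar{z} = 2\re(z)$, and recovers the sign pattern modulo $8$:

```python
import cmath
from math import pi

def sign_W(m):
    """Evaluate W(xi_m) numerically from the Gauss-sum formula; tr(z) = 2 Re(z)."""
    if m % 4 == 0:
        return 1 + 0j                       # conductor (1): W = 1
    if m % 2 == 1:                          # conductor (2+2i), gamma_m = 4 + 4i
        gamma, norm_f, classes = 4 + 4j, 8, (1, -1, 1j, -1j)
    else:                                   # m == 2 mod 4: conductor (2), gamma_m = 4
        gamma, norm_f, classes = 4 + 0j, 4, (1, 1j)
    infinite_part = (gamma / abs(gamma)) ** m
    gauss = sum(x ** (-m) * cmath.exp(2j * pi * 2 * (x / gamma).real)
                for x in classes)
    return 1j ** (-m) * norm_f ** (-0.5) * infinite_part * gauss

# the sign is +1 unless m == 5, 7 mod 8
for m in range(1, 25):
    expected = -1 if m % 8 in (5, 7) else 1
    assert abs(sign_W(m) - expected) < 1e-9
```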
Assuming that the sign of the functional equation determines the order of vanishing at $\tfrac12$ for each $L$-function, we obtain
\begin{align*}
\sum_{m\geq 0}c_{m}(\phi) \ord_{s= 1/2}(L(s,\xi_{m})) = \sum_{\substack{m \geq 0 \\ m\equiv 5,7 \bmod 8}} c_m(\phi).
\end{align*}
We conclude the proof by writing this sum as an integral.
Using Lemma~\ref{Lem exp sum} below, we have for smooth $\phi$ supported outside $\frac{\pi}{4}\mathbf{Z}$,
\begin{align*}
\sum_{\substack{m \geq 0 \\ m \equiv 5, 7 \bmod{8}}}(\hat{\phi}(m) + \hat{\phi}(-m))
&= \lim_{N\rightarrow\infty} \frac{1}{2\pi} \int_{-\pi}^{\pi} \phi(t)2\frac{\cos(t)}{\sin(4t)}(\sin((8N+10)t) - \sin(2t)) \diff t \\
&= -\frac{1}{2\pi} \int_{-\pi}^{\pi} \phi(t)\frac{\cos(t)}{\cos(2t)} \diff t.
\end{align*}
Moreover, the sum at $t \in \frac{\pi}{4}\mathbf{Z}$ gives the value $\frac{\phi(0) - \phi(\pi)}{4}$, which concludes the proof.
\end{proof}
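As a numerical sanity check of this last computation (our own illustration, not part of the proof), one can compare the series $\sum_{m\equiv 5,7 \bmod 8}\big(\hat{\phi}(m)+\hat{\phi}(-m)\big)$ with the integral $-\frac{1}{2\pi}\int_{-\pi}^{\pi}\phi(t)\frac{\cos t}{\cos 2t}\diff t$ for a smooth bump $\phi$ supported outside $\frac{\pi}{4}\mathbf{Z}$, where the support avoids the poles so that no principal value is needed:

```python
from math import cos, exp, pi

def phi(t):
    # even, 2pi-periodic, C-infinity bump supported in (0.9, 1.4), inside (pi/4, pi/2)
    t = abs((t + pi) % (2 * pi) - pi)          # fold t into [0, pi]
    lo, hi = 0.9, 1.4
    if not lo < t < hi:
        return 0.0
    u = (t - lo) / (hi - lo)
    return exp(4.0 - 1.0 / (u * (1.0 - u)))    # normalised so that the maximum is 1

N = 8000
ts = [-pi + (k + 0.5) * 2 * pi / N for k in range(N)]
dt = 2 * pi / N
phis = [phi(t) for t in ts]

def c(m):   # c_m(phi), so that hat(phi)(m) + hat(phi)(-m) = c_m(phi) for m >= 1
    return sum(p * cos(m * t) for p, t in zip(phis, ts)) * dt / pi

series = sum(c(m) for m in range(1, 320) if m % 8 in (5, 7))
integral = -sum(p * cos(t) / cos(2 * t) for p, t in zip(phis, ts) if p) * dt / (2 * pi)
assert abs(series - integral) < 1e-3
```

The rapid decay of the Fourier coefficients of the bump makes the truncation at $m < 320$ harmless.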
\begin{lem}\label{Lem exp sum}
Let $q,a \in \mathbf{N}$ with $q>0$, and let $t \notin \frac{2\pi}{q}\mathbf{Z}$.
One has
\begin{multline*}
\sum_{m = 0}^{N}(e^{i(qm +a)t} + e^{i(-qm -a)t}) = \frac{\sin((\tfrac{q}{2}(2N+1)+a)t) - \sin((a-\tfrac{q}{2})t)}{\sin(\tfrac{q}{2}t)}.
\end{multline*}
\end{lem}
\begin{proof}
For $t \notin \frac{2\pi}{q}\mathbf{Z}$ one has
\begin{align*}
\sum_{m = 0}^{N}(e^{i(qm +a)t} + e^{i(-qm -a)t}) &= e^{iat} \frac{ 1 - e^{iq(N+1)t}}{1 - e^{iqt}} + e^{-iat} \frac{ 1 - e^{-iq(N+1)t}}{1 - e^{-iqt}} \\
&= 2\cos((\tfrac{q}{2}N + a)t) \frac{\sin(\tfrac{q}{2}(N+1)t)}{\sin(\tfrac{q}{2}t)} \\
&= \frac{\sin((\tfrac{q}{2}(2N+1)+a)t) - \sin((a-\tfrac{q}{2})t)}{\sin(\tfrac{q}{2}t)}.
\end{align*}
\end{proof}
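The identity of Lemma~\ref{Lem exp sum} is elementary, and a quick numerical spot-check (our own illustration, writing $e^{i(qm+a)t}+e^{-i(qm+a)t} = 2\cos((qm+a)t)$) confirms it:

```python
from math import cos, sin

def left_side(q, a, N, t):
    # sum over m = 0..N of e^{i(qm+a)t} + e^{-i(qm+a)t} = 2 cos((qm+a)t)
    return sum(2 * cos((q * m + a) * t) for m in range(N + 1))

def right_side(q, a, N, t):
    h = q / 2
    return (sin((h * (2 * N + 1) + a) * t) - sin((a - h) * t)) / sin(h * t)

# a few parameter choices with t outside (2*pi/q)Z
for (q, a, N, t) in [(8, 5, 17, 0.3), (8, 7, 33, 1.234), (4, 3, 10, 2.2), (5, 2, 7, 0.77)]:
    assert abs(left_side(q, a, N, t) - right_side(q, a, N, t)) < 1e-9
```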
The proof of Theorem~\ref{Th race Gaussian primes A mod 4} follows the same lines; we only give the details on the differences.
\begin{proof}[Proof of Theorem~\ref{Th race Gaussian primes A mod 4} as a consequence of Theorem~\ref{Th_GeneDistLim}]
To separate the congruence classes of~$a$, we will use characters of larger modulus.
For each~$m\geq 0$ let~$\psi_m$ be the Hecke character on the multiplicative groups of fractional ideals of~$\mathbf{Z}[i]$ modulo~$(4)$ defined by
\begin{equation*}
\psi_m((\alpha)) = \begin{cases}
\big(\frac{\alpha}{\lvert \alpha \rvert}\big)^m &\text{ if } \alpha \equiv 1 \bmod (4) \\
- \big(\frac{\alpha}{\lvert \alpha \rvert}\big)^m &\text{ if } \alpha \equiv 3+2i \bmod (4) \\
0 &\text{ if } (\alpha, (4)) \neq 1.
\end{cases}
\end{equation*}
Then~$\psi_m$ is a primitive unitary Hecke character of frequency~$m$ and conductor~$(4)$; its finite part is the Dirichlet character
$$\psi_{m,\mathrm{fin}}: u \mapsto \begin{cases} u^{-m} &\text{ for } u \in\lbrace \pm 1, \pm i\rbrace \\ -\big(\frac{u}{3+2i}\big)^{-m} &\text{ for } u \in \lbrace 3+2i, -3-2i, -2 + 3i, 2 - 3i\rbrace
\end{cases}$$
representing the~$8$ invertible congruence classes modulo~$(4)$.
We have, for any unramified splitting rational prime~$(p) = \mathfrak{p}\overline{\mathfrak{p}}$, and for any~$m \in \mathbf{N}$
\begin{align*}
\psi_{m}(\mathfrak{p}) + \psi_{m}(\overline{\mathfrak{p}}) = \begin{cases}
2\cos(m\theta_{p}) &\text{ if } p \equiv 1 \bmod 8 \\
-2\cos(m\theta_{p}) &\text{ if } p \equiv 5 \bmod 8.
\end{cases}
\end{align*}
Thus, for $\phi$ even and $2\pi$-periodic, one has
$$\phi(\theta_{p})\mathbf{1}_{p\equiv 1 \bmod 8} - \phi(\theta_{p})\mathbf{1}_{p\equiv 5 \bmod 8} = \sum_{m\geq 0} c_m(\phi) \big( \psi_{m}(\mathfrak{p}) + \psi_{m}(\overline{\mathfrak{p}}) \big).$$
We apply Theorem~\ref{Th_GeneDistLim} with $\mathcal{S} = \mathcal{S}_2 = \lbrace L(s,\psi_{m}) : m\geq 0\rbrace$; each of these $L$-functions has degree $d_m =2$ and conductor $\mathfrak{q}(\psi_m) \ll (m+1)^2$ for $m\geq 0$, and finally we take $c_m = c_m(\phi)$ for $m\geq 0$.
This gives the existence of the limiting logarithmic distribution $\nu_{\phi}$ under the Riemann Hypothesis for the $L(s,\psi_{m}), m\geq 0$.
Moreover, its mean value is given by
\begin{align*}
\mathbb{E}(\nu_{\phi}) = \frac{1}{2}\sum_{m\geq 0}c_{m}(\phi)
\left(\ord_{s=1}(L(s,\psi_m^{(2)})) - 2 \ord_{s= 1/2}(L(s,\psi_{m})) \right).
\end{align*}
Note that
$L(s,\psi_{0}) = L(s,\chi)L(s,\chi')$, where $\chi$ and $\chi'$ are the primitive characters modulo $8$;
in particular, up to the factor at $2$, $L(s,\psi_{0}^{(2)}) = \zeta^2(s)$ has a pole of order~$2$ at $s=1$.
For $m\geq 1$, we have
\begin{align*}
L(s,\psi_{m}) = & \prod_{p \equiv 3 \bmod 4} (1 - (-1)^m p^{-2s})^{-1}\prod_{p \equiv 1 \bmod 8} (1 - 2\cos(m\theta_p) p^{-s} + p^{-2s})^{-1} \\ &\prod_{p \equiv 5\bmod 8} (1 + 2\cos(m\theta_p) p^{-s} + p^{-2s})^{-1}.
\end{align*} Thus, we have \begin{align*}
L(s,\psi_{m}^{(2)}) = L(s,\xi^{m (2)})
=\begin{cases}
L(s,\xi^{2m}) \frac{\zeta(s) }{L(s,\chi_4)} (1-2^{-s}) &\text{ for } m \text{ even, } \\
L(s,\xi^{2m}) \frac{L(s,\chi_4)}{\zeta(s) } (1-2^{-s})^{-1} &\text{ for } m \text{ odd. }
\end{cases} \end{align*} The function $L(s,\psi_m^{(2)})$ has a pole of order $1$ at $s=1$ when $m$ is even and a zero of order $1$ at $s=1$ when $m$ is odd. We deduce \begin{align*}
\sum_{m\geq 0}c_{m}(\phi)\ord_{s=1}(L(s,\psi_{m}^{(2)})) = -c_0(\phi) - \sum_{m\geq 0}(-1)^mc_{m}(\phi) = -\hat{\phi}(0) - \phi(\pi). \end{align*}
Now, we need to determine the values of $\ord_{s= 1/2}(L(s,\psi_{m}))$. As before, we assume that it is enough to compute the sign of the functional equation. We have
\begin{align*}
W(\psi_m) &= i^{-m} N(4)^{-\frac12} \psi_{m,\infty}(8) \sum_{x \in \mathbf{Z}[i]/(4)}\psi_{m,\mathrm{fin}}(x) e^{2i\pi \tr(\tfrac{x}{8})} \\
&= \frac{i^{-m}}{4} \sum_{x \in \lbrace \pm 1,\pm i \rbrace}x^{-m} (e^{2i\pi \tr(\tfrac{x}{8})} - e^{2i\pi \tr(\tfrac{x(3+2i)}{8})}) \\
&= \frac{1}{2}( (1 - (-1)^m)i^{1-m} + (1+(-1)^m)) \\
&=\begin{cases}
1 &\text{ if } m\equiv 0 \bmod2 \text{ or } m \equiv 1 \bmod4, \\
-1 &\text{ if } m \equiv 3 \bmod 4.
\end{cases} \end{align*} This gives that the mean value of $\nu_{\phi}$ is \begin{align*}
\frac{-\hat{\phi}(0) - \phi(\pi)}{2} -\sum_{m\equiv 3\bmod 4} (\hat{\phi}(m) + \hat{\phi}(-m)). \end{align*} We apply Lemma~\ref{Lem exp sum} to obtain for smooth $\phi$ supported outside $\frac{\pi}{2}\mathbf{Z}$, \begin{align*}
\sum_{\substack{m \geq 0 \\ m \equiv 3 \bmod{4}}}(\hat{\phi}(m) + \hat{\phi}(-m)) &= \lim_{N\rightarrow\infty} \frac{1}{2\pi} \int_{-\pi}^{\pi} \phi(t)\frac{\sin((4N+5)t) - \sin(t)}{\sin(2t)} \diff t \\
&= -\frac{1}{2\pi} \int_{-\pi}^{\pi} \phi(t)\frac{1}{2\cos(t)} \diff t. \end{align*} Then considering the sum at $t \in \frac{\pi}{2}\mathbf{Z}$ gives the value $\frac{\phi(0) - \phi(\pi)}{4}$, which concludes the proof of Theorem~\ref{Th race Gaussian primes A mod 4}.
\end{proof}
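The case analysis for $W(\psi_m)$ above can be double-checked numerically. The sketch below (a verification aid of our own, not part of the argument) evaluates the exponential sum over the units $x \in \lbrace \pm 1, \pm i\rbrace$ directly, using $\tr(z) = z + \bar{z}$ on $\mathbf{Z}[i]$, and compares it with the closed form.

```python
import cmath

def tr(z):
    # trace on Z[i]: tr(z) = z + conj(z) = 2 Re(z)
    return 2 * complex(z).real

def W(m):
    # W(psi_m) = i^{-m}/4 * sum_x x^{-m} (e(tr(x/8)) - e(tr(x(3+2i)/8)))
    e = lambda t: cmath.exp(2j * cmath.pi * t)
    total = sum(x**(-m) * (e(tr(x / 8)) - e(tr(x * (3 + 2j) / 8)))
                for x in (1, -1, 1j, -1j))
    return (1j)**(-m) * total / 4

def W_closed(m):
    # claimed case analysis: -1 iff m = 3 mod 4, +1 otherwise
    return -1 if m % 4 == 3 else 1
```

Running over the first few values of $m$ reproduces the sign pattern $+1, +1, +1, -1$ of period $4$.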
To conclude this section, let us come back to our hypothesis that the sign of the functional equation of an $L$-function determines its analytic rank. In the case of the $L$-functions used in this section, that is the~$L(\cdot,\xi_{m})$ and~$L(\cdot,\psi_m)$ for $m\geq 1$, we first observe that they correspond to the $L$-functions of certain cusp forms of level a power of~$2$ (the norm of the conductor of the character multiplied by $4$) and of weight $m+1$. Those $L$-functions are referenced in the $L$-functions and Modular Forms Database \cite{lmfdb}, at least for $m\leq 24$, where the analytic rank is calculated and satisfies our assumption.
\section{Heuristic for the conjectures}\label{Sec Heuristic}
Let us now develop our heuristic argument for Conjectures~\ref{Conj bias even odd} and~\ref{Conj bias A mod 4}.
The main idea is to replace $\phi$ in Theorems~\ref{Th race Gaussian primes} and~\ref{Th race Gaussian primes A mod 4} by a difference of indicator functions.
Precisely, we take $\phi_1 = \mathbf{1}_{[0,\tfrac{\pi}{4}]\cup[\frac{3\pi}{4},\pi]} - \mathbf{1}_{(\tfrac{\pi}{4},\frac{3\pi}{4})}$ in Theorem~\ref{Th race Gaussian primes} and $\phi_2 = \mathbf{1}_{[0,\tfrac{\pi}{2}]} - \mathbf{1}_{(\tfrac{\pi}{2},\pi]}$ in Theorem~\ref{Th race Gaussian primes A mod 4}, where the functions $\phi_i$ are defined on $[0,\pi]$ and we extend their definition to $\mathbf{R}$ so that they are even and $2\pi$-periodic.
Then, we have
$$c_m(\phi_1) = \begin{cases}
\frac{8}{m\pi} &\text{if } m \equiv 2 \bmod8 \\
-\frac{8}{m\pi} &\text{if } m \equiv 6 \bmod8 \\
0 &\text{otherwise}
\end{cases} \text{ and } c_m(\phi_2) = \begin{cases}
\frac{4}{m\pi} &\text{if } m \equiv 1 \bmod4 \\
-\frac{4}{m\pi} &\text{if } m \equiv 3 \bmod4 \\
0 &\text{otherwise.} \end{cases}$$ As observed earlier, these Fourier coefficients do not satisfy the hypothesis of decay needed to apply Theorem~\ref{Th race Gaussian primes} or~\ref{Th race Gaussian primes A mod 4}. Let us however continue the heuristic argument by ignoring this.
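These coefficients can be recovered by exact piecewise integration. A short sketch of this verification (assuming the normalisation $\phi(\theta) = c_0(\phi) + \sum_{m\geq 1}c_m(\phi)\cos(m\theta)$, so that $c_m(\phi) = \tfrac{2}{\pi}\int_0^\pi \phi(t)\cos(mt)\diff t$ for $m\geq 1$):

```python
import math

pi = math.pi

def cm(breaks, signs, m):
    # c_m = (2/pi) * int_0^pi phi(t) cos(mt) dt for a step function phi
    # equal to signs[k] on (breaks[k], breaks[k+1]); exact antiderivative
    total = 0.0
    for k, s in enumerate(signs):
        a, b = breaks[k], breaks[k + 1]
        total += s * (math.sin(m * b) - math.sin(m * a)) / m
    return 2 / pi * total

phi1 = ([0, pi/4, 3*pi/4, pi], [1, -1, 1])
phi2 = ([0, pi/2, pi], [1, -1])

def cm_claimed_phi1(m):
    if m % 8 == 2: return 8 / (m * pi)
    if m % 8 == 6: return -8 / (m * pi)
    return 0.0

def cm_claimed_phi2(m):
    if m % 4 == 1: return 4 / (m * pi)
    if m % 4 == 3: return -4 / (m * pi)
    return 0.0
```

Both closed forms agree with the exact integrals for all tested $m$.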
Then we would obtain logarithmic limiting distributions $\mu_{\phi_1}$ and $\nu_{\phi_2}$ with mean values equal to
\begin{align*}
\mathbb{E}(\mu_{\phi_1}) &= - \frac{\phi_1(0) + \phi_1(\pi)}{4} +\frac{1}{2\pi} \int_{-\pi}^{\pi} \phi_1(t)\frac{\cos(t)}{\cos(2t)} \diff t = \frac{-1}{2}\\ \text{and } \mathbb{E}(\nu_{\phi_2}) &= -\frac{\phi_2(0) + \phi_2(\pi)}{4} +\frac{1}{2\pi} \int_{-\pi}^{\pi} \phi_2(t)\frac{1}{2\cos(t)} \diff t = \infty. \end{align*} The variance of $\mu_{\phi_1}$ is given by \[\Var(\mu_{\phi_1}) = 2\sum_{\gamma \in \mathcal{Z}_{\mathcal{S}_1,\underline{c}}} \frac{\lvert \sum_{m \geq 0}c_m(\phi_1)\ord(\gamma,m)\rvert^{2}}{\frac14 +\gamma^{2}} \in \mathbf{R}\cup\lbrace \infty\rbrace, \] where $\mathcal{S}_1$ is the set of Hecke $L$-functions used in the proof of Theorem~\ref{Th race Gaussian primes}, and $\underline{c}= \lbrace c_m(\phi_1) : m\geq 0\rbrace$. We obtain the same formula for $\Var(\nu_{\phi_2})$ with $\phi_1$ replaced by $\phi_2$ and $\mathcal{S}_1$ replaced by $\mathcal{S}_2$, the set of Hecke $L$-functions used in the proof of Theorem~\ref{Th race Gaussian primes A mod 4}.
Let $\phi = \phi_1$ or $\phi_2$ and $\mathcal{S} = \mathcal{S}_1$ or $\mathcal{S}_2$. Let us assume that there exists $B>0$ such that for all $\gamma\in \mathcal{Z}_{\mathcal{S},\underline{c}}$, we have $\lvert \sum_{m \geq 0}c_m(\phi)\ord(\gamma,m)\rvert < B \max\lbrace \lvert c_m(\phi)\ord(\gamma,m) \rvert : m\geq 0 \rbrace$. That is, we assume that for each $\gamma >0$ there are not too many $L$-functions in $\mathcal{S}$ that vanish at $\tfrac12 + i \gamma$. This hypothesis is reminiscent of the bounded multiplicity hypothesis used by Fiorilli in \cite{Fiorilli_EC} and is supported by the general idea that zeros of $L$-functions should be independent, while being weaker than the Linear Independence hypothesis. Then we have \begin{align*}
\Var(\mu_{\phi_1}) &< B^2 \sum_{m \geq 0}\sum_{\gamma \in \mathcal{Z}_{m}} \frac{\lvert c_m(\phi)\ord(\gamma,m)\rvert^{2}}{\frac14 +\gamma^{2}} \\
&= B^2 \sum_{m \geq 0} \lvert c_m(\phi)\rvert^{2}\sum_{\gamma \in \mathcal{Z}_{m}} \frac{\lvert \ord(\gamma,m)\rvert^{2}}{\frac14 +\gamma^{2}}. \end{align*} Recall that $$\sum_{\gamma \in \mathcal{Z}_{m}} \frac{\lvert \ord(\gamma,m)\rvert^{2}}{\frac14 +\gamma^{2}} \ll (\log \mathfrak{q}_m)^3 \ll (\log m)^3,$$ and that $c_m(\phi) \ll \frac{1}{m}$. It yields $\Var(\mu_{\phi_1}) < \infty$ and similarly, $\Var(\nu_{\phi_2}) < \infty$.
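Since $c_m(\phi) \ll \frac{1}{m}$ and each zero-sum is $\ll (\log m)^3$, the variance is dominated by $\sum_m (\log m)^3/m^2$. A quick numerical sketch (a verification aid of our own) confirming that this series converges, with the tail controlled by the exact integral $\int_N^\infty (\log x)^3 x^{-2}\diff x = ((\log N)^3 + 3(\log N)^2 + 6\log N + 6)/N$:

```python
import math

def partial_sum(N):
    # partial sum of the dominating series sum_{m >= 2} (log m)^3 / m^2
    return sum(math.log(m)**3 / m**2 for m in range(2, N + 1))

def integral_tail(N):
    # exact value of int_N^infty (log x)^3 / x^2 dx, an upper bound for the
    # tail of the series since the terms are eventually decreasing
    L = math.log(N)
    return (L**3 + 3 * L**2 + 6 * L + 6) / N
```

The increments between successive partial sums are bounded by the integral tail, which tends to $0$.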
This concludes our heuristic for Conjecture~\ref{Conj bias even odd}: we found a limiting logarithmic distribution with negative mean value and bounded variance. This indicates a bias towards negative values in the distribution of the values of the function $D_1$. Moreover, if we assume that the set $\mathcal{Z}_{\mathcal{S}_1,\underline{c}}$ has many self-sufficient elements, as in Proposition~\ref{Prop inclusive} --- an assumption that is again supported by the fact that the zeros of $L$-functions should be independent --- then we deduce the Omega-result with the aid of Corollary~\ref{Cor Omega result}.
In the case of Conjecture~\ref{Conj bias A mod 4}, we obtained a limiting logarithmic distribution that has infinite mean value and bounded variance; this indicates a very strong bias in the direction of positive values. Let us approach this heuristic in another way and write $\phi_{2,N}(\theta) = \sum_{m \leq N} c_m(\phi_2) \cos(m\theta)$. By Chebyshev's inequality, as in \cite[Lem. 2.10]{Fiorilli_HighlyBiased} (see also \cite[Cor. 5.8]{DevinChebyshev}) we have $$ \nu_{\phi_{2,N}}([0,\infty)) \geq 1 - \frac{\Var(\nu_{\phi_{2,N}})}{\mathbb{E}(\nu_{\phi_{2,N}})} = 1 - O((\log N)^{-1}). $$ This heuristically indicates that, in the limit as $N\rightarrow\infty$, there is a complete bias, namely we expect $ \nu_{\phi_{2}}([0,\infty)) =1$, or in other terms, the function $D_2$ is almost always (in logarithmic scale) positive.
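The mechanism behind such Chebyshev-type bounds can be illustrated on a toy random variable (a sketch of our own, not the cited lemma): by the one-sided Chebyshev (Cantelli) inequality, a random variable with mean $\mu > 0$ and variance $\sigma^2$ satisfies $P(X \leq 0) \leq \sigma^2/(\sigma^2 + \mu^2)$, so a mean growing relative to the variance forces almost all of the mass onto the positive axis.

```python
import random

def cantelli_bound(mu, sigma2):
    # one-sided Chebyshev: P(X <= 0) <= sigma^2 / (sigma^2 + mu^2) when E[X] = mu > 0
    return sigma2 / (sigma2 + mu**2)

def prob_nonpositive(mu, sigma, n=200000, seed=1):
    # Monte Carlo estimate of P(X <= 0) for X ~ N(mu, sigma^2)
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.gauss(mu, sigma) <= 0) / n
```

As the mean grows with the variance fixed, the mass on $(-\infty,0]$ shrinks below the Cantelli bound.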
\section{Proof of Theorem~\ref{Th_GeneDistLim}}\label{Section proof theo general}
To prove Theorem~\ref{Th_GeneDistLim}, we follow the proof of \cite[Th. 1.2]{ANS} or \cite[Th. 2.1]{DevinChebyshev}, but keeping the dependency on the $L$-function explicit. We show that Theorem~\ref{Th_GeneDistLim} is a consequence of the following result.
\begin{prop}\label{Prop_LimOfDist}
Under the hypotheses of Theorem~\ref{Th_GeneDistLim},
for each $M >0$ and $T>2$, let
\[G_{\mathcal{S},\underline{c},M,T}(x) = m_{\mathcal{S},\underline{c}} -\sum_{\gamma\in\mathcal{Z}_{\mathcal{S},\underline{c},M}(T)}2\re\left(\ord_{\mathcal{S},\underline{c}}(\gamma)\frac{x^{i\gamma}}{\frac12 + i\gamma}\right).\]
The function $G_{\mathcal{S},\underline{c},M,T}$ admits a limiting logarithmic distribution $\mu_{\mathcal{S},\underline{c},M,T}$.
Moreover, there exists a function $M(T)$ satisfying $M(T)\rightarrow\infty$ as $T\rightarrow\infty$ such that for any bounded Lipschitz continuous function $g$, one has
\begin{align*}
\lim_{T\rightarrow\infty}\int_{\mathbf{R}}g(t)\diff\mu_{\mathcal{S},\underline{c},M(T),T}(t) =
\int_{\mathbf{R}}g(t)\diff\mu_{\mathcal{S},\underline{c}}(t).
\end{align*}
\end{prop}
For every $M,T$ fixed, the set $\mathcal{Z}_{\mathcal{S},\underline{c},M}(T)$ is finite.
Thus, by hypothesis, the function $G_{\mathcal{S},\underline{c},M,T}$ is well-defined and it admits a limiting logarithmic distribution as a consequence of the Kronecker--Weyl equidistribution theorem (see e.g. \cite[Th. 4.2]{DevinChebyshev}, \cite[Lem. 4.3]{Humphries}, or \cite[Lem. B.3]{MartinNg}).
The convergence of the measures requires a more careful estimation of the error terms.
Let us first recall the following precise form of \cite[Prop. 4.2]{ANS}, \cite[(4.5)]{DevinChebyshev}.
\begin{prop}\label{Prop for one L function}
Let $L(f,s)$ be an analytic $L$-function of degree $d$; we denote by~$L(f^{(2)},s)$ its second moment $L$-function (as in \cite[Def. 1.1.(iii)]{DevinChebyshev}).
Assume the Riemann Hypothesis holds for $L(f,s)$ and $L(f^{(2)},s)$.
Let $T >0$, and
\begin{align*}
G_{f,T}(x) = m_{f} -\sum_{\gamma\in\mathcal{Z}_{f}(T)}2\re\left(\ord(\gamma, L(f,s))\frac{x^{i\gamma}}{\frac12 + i\gamma}\right).
\end{align*}
We have the following estimate for all $x>0$:
\begin{align*}
E_f(x) :=&\frac{\log x}{\sqrt{x}}\left(\sum_{p\leq x} \lambda_{f}(p) + \ord_{s=1}(L(f,s))\Li(x)\right) \\
&= G_{f,T}(x)
- \epsilon_{f}(x,T)
+ O\left(d\frac{\log\mathfrak{q}(f)}{\log x} \right)
\end{align*}
where the function $\epsilon_{f}(x,T)$ satisfies
\begin{equation}\label{Bound_epsilon}
\int_{2}^{Y}\lvert \epsilon_{f}(e^{y},T)\rvert^{2} \diff y \ll Y\frac{\left(\log(\mathfrak{q}(f)T^d)\right)^2}{T} + \frac{\left(\log(\mathfrak{q}(f)T^d)\right)^2\log T}{T}
\end{equation}
with an absolute implicit constant.
\end{prop}
\begin{proof}
The proof is contained in \cite{DevinChebyshev}, where the dependency on the $L$-function is not always written explicitly.
In particular from \cite[Prop. 4.4]{DevinChebyshev} we have
\begin{align*}
\psi(f,x) + \ord_{s=1}(L(f,s))x =
- \sum_{\substack {L(f,\rho)=0 \\ \lvert\im(\rho)\rvert\leq T}}\frac{x^{\rho}}{\rho} - x^{1/2}\epsilon_{f}(x,T) +
O\left(\log(\mathfrak{q}(f)x^d)\log x \right)
\end{align*}
with an absolute implicit constant.
Then taking care of the sum over squares of primes, we
use the Ramanujan--Petersson Conjecture and the Prime Number Theorem to obtain:
\[\theta(f,x) :=\sum_{p\leq x}\lambda_{f}(p)\log p = \psi(f,x)
- \sum_{p^{2}\leq x}\left(\sum_{j=1}^{d} \alpha_{j}(p)^2\right) \log p
+ O(dx^{1/3}).\]
To evaluate the second term, we use the Riemann Hypothesis for the function $L(f^{(2)},s) = L(\Sym^2f,s) L(\wedge^2f,s)^{-1}$.
One has
$\frac{L'(f^{(2)},s)}{L(f^{(2)},s)} = \frac{L'(\Sym^2f,s)}{L(\Sym^2f,s)} - \frac{L'(\wedge^2f,s)}{L(\wedge^2f,s)}$, thus
\begin{align*}
\sum_{p^{2}\leq x}\left(\sum_{j=1}^{d} \alpha_{j}(p)^2\right) \log p
&= -\ord_{s=1}(L(\Sym^2f,s))x^{\frac{1}{2}} + O(x^{\frac{1}{4}}\log x \log (x^{d(d+1)/2}\mathfrak{q}(\Sym^2f))) \\
&\quad + \ord_{s=1}(L(\wedge^2f,s))x^{\frac{1}{2}} + O(x^{\frac{1}{4}}\log x \log (x^{d(d-1)/2}\mathfrak{q}(\wedge^2f))) + O(dx^{1/4}) \\
&= -\ord_{s=1}(L(f^{(2)},s))x^{\frac{1}{2}} + O(dx^{\frac{1}{4}}\log x \log(x^{d}\mathfrak{q}(f)) ).
\end{align*}
Finally, using a Stieltjes integral, we write $E_{f}(x) = \frac{\log x}{x^{1/2}} \int_{2}^{x} \frac{\diff (\theta(f,t) + \ord_{s=1}(L(f,s))t)}{\log t}$.
After integration by parts this yields
\begin{multline*}
E_{f}(x) = \frac{1}{\sqrt{x}}\big(\psi(f,x) + x\ord_{s=1}(L(f,s))\big) + \ord_{s=1}(L(f^{(2)},s)) \\ +
O\left( \frac{\log x}{\sqrt{x}} \int_{2}^{x} \frac{\psi(f,t) + t\ord_{s=1}(L(f,s)) + \sqrt{t}\ord_{s=1}(L(f^{(2)},s))}{t(\log t)^{2}}\diff t \right) \\
+ O(dx^{-1/6}\log x \log(\mathfrak{q}(f)x^d)).
\end{multline*}
Using the explicit formula
\begin{multline*}
\psi(f,x) + \ord_{s=1}(L(f,s))x = -\sum_{\substack {L(f,\rho)=0 \\ \lvert\im(\rho)\rvert\leq X}}\frac{x^{\rho}}{\rho}\\ +
O\left(d\log x + \frac{x}{X}\left(d(\log x)^2 + \log(\mathfrak{q}(f)X^{d})\right) + \log(\mathfrak{q}(f)X^{d})\log X\right),
\end{multline*}
and another integration by parts to evaluate the second term, we have
\begin{multline*}
\int_{2}^{x} \frac{\psi(f,t) + t\ord_{s=1}(L(f,s)) + \sqrt{t}\ord_{s=1}(L(f^{(2)},s))}{t(\log t)^{2}}\diff t \\
\ll \ord_{s=1}(L(f^{(2)},s))\frac{x^{1/2}}{(\log x)^2} + \sum_{\substack {L(f,\rho)=0 \\ \lvert\im(\rho)\rvert\leq x}}\frac{\lvert x^{\rho}\rvert }{\lvert\rho^2\rvert (\log x)^2} + \log(\mathfrak{q}(f)x^{d})\log x
\end{multline*}
where after the integration we take $X=x$.
The sum over the zeros is convergent and this concludes the proof of Proposition~\ref{Prop for one L function}.
\end{proof}
Summing over the $L$-functions, we obtain the following result.
\begin{prop}\label{Prop sum L functions}
Under the hypotheses of Theorem~\ref{Th_GeneDistLim},
there exists a function $M(T) = M_{\mathcal{S},\underline{c}}(T)$, with $M(T) \rightarrow\infty$ as $T \rightarrow\infty$,
such that
we have the following estimate for all $x>0$ and $T>0$.
\begin{equation*}
E_{\mathcal{S},\underline{c}}(x)= G_{\mathcal{S},\underline{c},M(T),T}(x) - \epsilon_{\mathcal{S},\underline{c}}(x,T) + O_{\mathcal{S},\underline{c}}\left( \frac{1}{\log x} + \frac{1}{T}\right),
\end{equation*}
where the function $\epsilon_{\mathcal{S},\underline{c}}(x,T)$ satisfies
\begin{equation}\label{Bound_epsilon total}
\int_{2}^{Y}\lvert \epsilon_{\mathcal{S},\underline{c}}(e^{y},T)\rvert^2 \diff y \ll_{\mathcal{S},\underline{c}} Y\frac{(\log T)^2}{T} + \frac{(\log T)^{3}}{T}.
\end{equation} \end{prop}
\begin{proof}[Proof of Proposition~\ref{Prop sum L functions}]
By definition, one has
\begin{equation*}
E_{\mathcal{S},\underline{c}}(x) = \sum_{m\geq 0} c_{m}E_{f_m}(x).
\end{equation*} Thus, using Proposition~\ref{Prop for one L function}, one has \begin{align*} E_{\mathcal{S},\underline{c}}(x) &= \sum_{m = 0}^{\infty} c_{m} \left(G_{f_m,T}(x) - \epsilon_{f_m}(x,T) + O\left(d_m\frac{\log\mathfrak{q}(f_m)}{\log x} \right) \right). \end{align*} For each $x$ and $T$, the three series are convergent; we separate \begin{multline*}
E_{\mathcal{S},\underline{c}}(x)= G_{\mathcal{S},\underline{c},M,T}(x) -
\sum_{m >M} \Bigg(c_m \sum_{\substack{\gamma\in\mathcal{Z}_{f_m}(T) \\ \gamma\notin\mathcal{Z}_{\mathcal{S},\underline{c},M}(T) }}2 \re\Big( \ord(\gamma,m) \frac{x^{i\gamma}}{\frac12 + i\gamma}\Big) \Bigg) \\
- \sum_{m = 0}^{\infty}c_{m}\epsilon_{f_m}(x,T)
+ O\left(\sum_{m = 0}^{\infty}\lvert c_{m}\rvert d_m\frac{\log\mathfrak{q}(f_m)}{\log x} \right). \end{multline*} For each $m>M$, one has \begin{align*} \sum_{\gamma\in\mathcal{Z}_{f_m}(T)\smallsetminus\mathcal{Z}_{\mathcal{S},\underline{c},M}(T)}2 \re\left( \ord(\gamma,m) \frac{x^{i\gamma}}{\frac12 + i\gamma}\right) \ll \log T \log(\mathfrak{q}(f_m)T). \end{align*} Thus \begin{align*} \sum_{m >M} c_m \sum_{\substack{\gamma\in\mathcal{Z}_{f_m}(T) \\ \gamma\notin\mathcal{Z}_{\mathcal{S},\underline{c},M}(T) }}2 \re\left( \ord(\gamma,m) \frac{x^{i\gamma}}{\frac12 + i\gamma}\right) \ll (\log T)^{2} \sum_{m >M} \lvert c_m\rvert \log(\mathfrak{q}(f_m)). \end{align*} For each $T$ the series is convergent, so there exists $M = M_{\mathcal{S},\underline{c}}(T)$ such that \begin{equation*}
\sum_{m >M} \lvert c_m\rvert \log(\mathfrak{q}(f_m)) \leq \frac{1}{(\log T)^{2}T}. \end{equation*} Let \begin{align*} \epsilon_{\mathcal{S},\underline{c}}(x,T) = \sum_{m = 0}^{\infty}c_{m}\epsilon_{f_m}(x,T). \end{align*} One has \begin{align} \label{Eq sum epsilon} \nonumber\int_{2}^{Y} \lvert \epsilon_{\mathcal{S},\underline{c}}(e^y,T) \rvert^2 \diff y &\leq \sum_{m = 0}^{\infty}\sum_{n =0}^{\infty} \lvert c_{m} \rvert \cdot \lvert c_n\rvert \int_{2}^{Y} \lvert \epsilon_{f_m}(e^y,T) \rvert \cdot \lvert \epsilon_{f_n}(e^y,T) \rvert \diff y \\ \nonumber &\ll \left\lbrace \sum_{m = 0}^{\infty} \lvert c_{m} \rvert \left(Y\frac{\left(\log(\mathfrak{q}(f_m)T^{d_m})\right)^2}{T} + \frac{\left(\log(\mathfrak{q}(f_m)T^{d_m})\right)^2\log T}{T}\right)^{1/2} \right\rbrace^2 \\ &\ll_{\mathcal{S},\underline{c}} Y\frac{(\log T)^2}{T} + \frac{(\log T)^{3}}{T}. \end{align} Finally, since the series \begin{align*}
\sum_{m \geq 0}\lvert c_{m}\rvert d_m\log\mathfrak{q}(f_m) \end{align*} is convergent, the proof is complete. \end{proof}
We can now come back to the proof of Proposition~\ref{Prop_LimOfDist}.
\begin{proof}[Proof of Proposition~\ref{Prop_LimOfDist}] By Proposition~\ref{Prop sum L functions}, $E_{\mathcal{S},\underline{c}}$ is a $B^2$-almost periodic function well approximated by the $G_{\mathcal{S},\underline{c},M(T),T}$'s. Thus Proposition~\ref{Prop_LimOfDist} follows from\footnote{Note that there is a misprint in the proof of \cite[Th.~2.9]{ANS}, (2.10) should read $\frac{1}{Y}\int_{0}^{Y}\lvert \vec{\phi}(y) - \vec{P}_M(y)\rvert \diff y < \epsilon$ for~$Y$ large enough, the constant $A_\epsilon$ may have to be enlarged to include smaller~$Y$'s, see also \cite[Th. 1.17]{Bailleul_Kronecker}, correcting this in more details.} \cite[Th.~2.9]{ANS}. \end{proof}
Then Theorem~\ref{Th_GeneDistLim} follows. \begin{proof}[Proof of Theorem~\ref{Th_GeneDistLim}]
The existence of the limiting logarithmic distribution $\mu_{\mathcal{S},\underline{c}}$ is stated in Proposition~\ref{Prop_LimOfDist}.
In the process of the proof, we used the fact that the function $E_{\mathcal{S},\underline{c}}$ is a $B^2$-almost periodic function, by \cite[Chap. II, \S 6, 4°]{Besicovitch55} it admits a mean value which is
\[\mathbb{E}(\mu_{\mathcal{S},\underline{c}}) = \lim_{T\rightarrow\infty}\mathbb{E}(\mu_{\mathcal{S},\underline{c},M(T),T}) = m_{\mathcal{S},\underline{c}}.\]
Then it follows from \cite[Chap. II, \S 9, 1°]{Besicovitch55} that it admits a second moment which is given via Parseval's identity.
The formula for the variance follows as in \cite[Th. 1.14]{ANS} and \cite[Lem. 2.5, 2.6]{Fiorilli_HighlyBiased}.
For the decay of the tails of the distribution, the proof is similar to the proof of \cite[Th. 1.2]{RS} (see also \cite[Lem. 4.8]{DevinChebyshev}) noting that the measure $\mu_{\mathcal{S},\underline{c},M(T),T}$ is supported inside an interval of the form $[-A(\log T)^2, A(\log T)^2]$, for a positive constant $A$ depending on $\mathcal{S}$ and $\underline{c}$.
\end{proof}
Let us now prove the results on the support of $\mu_{\mathcal{S},\underline{c}}$ and on sign changes, which depend on supplementary conditions.
\begin{proof}[Proof of Proposition~\ref{Prop lower bound tails}]
The proof is similar to the proof of~\cite[Th.~1.2]{RS}. Following the notation of~\cite[Sec. 2.2]{RS},
let $\epsilon >0$, $t \geq \log2 + \tfrac12\epsilon$, and
$$F_{\epsilon}(t) = \frac{1}{\epsilon} \int_{t -\frac{\epsilon}{2}}^{t +\frac{\epsilon}{2}} E_{\mathcal{S},\underline{c}}(e^y)\diff y.$$
Using Proposition~\ref{Prop sum L functions} and the bound
\begin{align*}
\lvert \epsilon_{\mathcal{S},\underline{c}}(x,T)\rvert \leq \sum_{m = 0}^{\infty}\lvert c_{m}\epsilon_{f_m}(x,T)\rvert
\ll \frac{\log x}{\sqrt{x}} + \frac{\sqrt{x}}{T}\Big((\log x)^2 + \log T\Big),
\end{align*} letting $T\rightarrow\infty$, we have \begin{align*}
F_{\epsilon}(t) = \frac{4}{\epsilon}\sum_{\gamma \in \mathcal{Z}_{\mathcal{S},\underline{c}}}\ord_{\mathcal{S},\underline{c}}(\gamma)\frac{\sin (t\gamma) \sin (\frac{\epsilon}{2}\gamma)}{\gamma^2} + O(1). \end{align*} The sum $\sum_{\gamma \in \mathcal{Z}_{\mathcal{S},\underline{c}}}\frac{\lvert \ord_{\mathcal{S},\underline{c}}(\gamma)\rvert}{\gamma^2}$ converges, so there exists $T= T(\epsilon)$ such that the function \begin{align*}
\tilde{F}_{\epsilon}(t) = \frac{4}{\epsilon}\sum_{\gamma \in \mathcal{Z}_{\mathcal{S},\underline{c},M(T)}(T)}\ord_{\mathcal{S},\underline{c}}(\gamma)\frac{\sin (t\gamma) \sin (\frac{\epsilon}{2}\gamma)}{\gamma^2} \end{align*} satisfies \begin{align*}
F_{\epsilon}(t) = \tilde{F}_{\epsilon}(t) + O(1). \end{align*} It is then enough to show that $\tilde{F}_{\epsilon}(t)$ is large on a large set. As this is a finite sum, the proof follows from the same argument as in~\cite[Sec. 2.2]{RS}, under the condition that $\ord_{\mathcal{S},\underline{c}}(\gamma)\geq 0$ for all $\gamma$. \end{proof}
\begin{proof}[Proof of Proposition~\ref{Prop inclusive}]
The proof is similar to the proof of~\cite[Th.~1.5(c)]{MartinNg}.
Using~\cite[Lem.~3.8 and Prop.~3.10]{MartinNg}, we write
$\mu_{\mathcal{S},\underline{c}} = \mu^{\mathrm{LI}}\ast\mu^{N}$
where $\hat{\mu}^{\mathrm{LI}}(\xi) = \prod_{\gamma \in \mathcal{Z}_{\mathcal{S},\underline{c}}^{\mathrm{LI}}} J_0\Bigg(\Big\lvert\frac{ 2 \ord_{\mathcal{S},\underline{c}}(\gamma)\xi}{\frac{1}{2} + i \gamma}\Big\rvert\Bigg)$ and $\mu^{N}$ has positive mass in a small interval centred at~$0$.
In particular the law of $\mu^{\mathrm{LI}}$ is the same as the law of $\sum_{\gamma \in \mathcal{Z}_{\mathcal{S},\underline{c}}^{\mathrm{LI}}} \Big\lvert\frac{ 2 \ord_{\mathcal{S},\underline{c}}(\gamma)}{\frac{1}{2} + i \gamma}\Big\rvert X_{\gamma}$ where the $X_{\gamma}$ are independent random variables each of which is uniformly distributed on the unit circle. Applying~\cite[Lem.~6.2]{MartinNg} with the assumption
$\sum_{\gamma\in \mathcal{Z}_{\mathcal{S},\underline{c}}^{\mathrm{LI}}} \Big\lvert\frac{ 2 \ord_{\mathcal{S},\underline{c}}(\gamma)}{\frac{1}{2} + i \gamma}\Big\rvert = \infty $, we conclude that $\mathrm{supp}(\mu^{\mathrm{LI}}) = \mathbf{R}$ and Proposition~\ref{Prop inclusive} follows. \end{proof}
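The Bessel factor $J_0$ appears because a random variable $X$ distributed uniformly on the unit circle satisfies $\mathbb{E}[e^{i\xi \re X}] = J_0(\xi)$. A quick numerical sketch of this classical fact (a verification aid only, not part of the argument):

```python
import math

def j0(x, terms=40):
    # power series J_0(x) = sum_k (-1)^k (x/2)^{2k} / (k!)^2
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -(x / 2)**2 / (k + 1)**2
    return s

def char_circle(xi, n=20000):
    # E[exp(i xi Re X)] for X uniform on the unit circle (real by symmetry),
    # computed by equally spaced quadrature over the angle
    return sum(math.cos(xi * math.cos(2 * math.pi * j / n)) for j in range(n)) / n
```

The angular average matches the power series to machine precision, since the trapezoidal rule is spectrally accurate for periodic analytic integrands.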
\begin{proof}[Proof of Proposition~\ref{Prop oscillation without GRH}]
The idea of the proof is similar to the proof of \cite[Th. 15.2]{MV-book} and is based on a theorem of Landau (see \cite[Th. 15.1]{MV-book}, also \cite{KP}).
Fix $\epsilon >0$, we consider the real functions
$$f_{\pm} : x \mapsto \sum_{n\leq x}\sum_{m\geq 0}c_m \frac{\Lambda_{f_m}(n)}{\log n} + \sum_{m\geq 0}c_{m}\ord_{s=1}(L(f_m,s)) \Li(x) \pm x^{\Theta - \epsilon},$$
where $\Theta$ is defined in the statement of Proposition~\ref{Prop oscillation without GRH} and for each $m\geq 0$, $\Lambda_{f_m}$ is the von Mangoldt function associated to $f_m$.
Precisely, one has
$$ \Lambda_{f_m}(n) = \begin{cases}
\sum_{j=1}^{d_m} \alpha_{j,m}(p)^k \log p &\text{ if } n = p^k \\
0 &\text{ if } n \text{ is not a prime power,}
\end{cases}$$ where the $\alpha_{j,m}$ are the local roots of $L(f_m,\cdot)$. In particular, using the Ramanujan--Petersson Conjecture, the Prime Number Theorem and the fact that the series $\sum_{m\geq 0}\lvert c_m\rvert d_m$ converges, we see that the functions $f_{\pm}$ are well-defined and we have that \begin{align}\label{Eq comp f and E} f_{\pm}(x) = \frac{\sqrt{x}}{\log x}E_{\mathcal{S},\underline{c}}(x) \pm x^{\Theta - \epsilon} + O_{\mathcal{S},\underline{c}}(x^{\frac12}) = O_{\mathcal{S},\underline{c},\epsilon'}(x^{\Theta + \epsilon'}), \end{align} with $\epsilon'>0$ arbitrarily small. For $\re(s) > \Theta$, write \begin{align*}
F_{\pm}(s) &= \int_{1}^{\infty} f_{\pm}(x) x^{-s-1} \diff x \\
&= \frac{1}{s}\sum_{m\geq 0} c_m \Big(\log(L(f_m,s)) - \ord_{s=1}(L(f_m,s))\log(s-1) \Big) + \frac{r_{\mathcal{S},\underline{c}}(s)}{s} \pm \frac{1}{s-\Theta+\epsilon} \end{align*} where we used absolute convergence to exchange the order of summation between the integral and the sum over $m\geq0$, and where the function~$r_{\mathcal{S},\underline{c}}$ is entire. The second expression gives an analytic continuation of~$F_{\pm}$ to a larger set avoiding lines at the left of points~$\beta + i \gamma$ with $\ord_{\mathcal{S},\underline{c}}(\beta + i\gamma) \neq 0$ where the functions have logarithmic singularities. In particular, by hypothesis, the functions~$F_{\pm}$ are regular at~$s= \Theta$, but are not regular in any half-plane~$\re(s)>\Theta - \epsilon'$ with $\epsilon'>0$. Landau's Theorem then implies that the functions~$f_{\pm}$ have infinitely many changes of signs. We deduce that there are infinitely many $x >0$ such that $f_{-}(x) >0$, using~\eqref{Eq comp f and E}, we obtain \begin{align*}
\sum_{p\leq x}\sum_{m\geq 0}c_{m}\lambda_{f_m}(p) + \sum_{m\geq 0}c_{m}\ord_{s=1}(L(f_m,s)) \Li(x) = \Omega_{+}(x^{\Theta -\epsilon}), \end{align*} and similarly~$f_{+}$ takes negative values infinitely many times, which then yields the~$\Omega_{-}$-result and concludes the proof. \end{proof}
\end{document}
\begin{document}
\title{Constraints on relaxation rates for $N$-level quantum systems} \author{S.~G.~Schirmer} \email{sgs29@cam.ac.uk} \affiliation{Department of Applied Maths and Theoretical Physics,
University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, UK} \affiliation{Department of Engineering, Division F, Control Group,
University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK} \author{A.~I.~Solomon} \email{a.i.solomon@open.ac.uk} \affiliation{Department of Physics and Astronomy, The Open University,
Milton Keynes, MK7 6AA, UK} \affiliation{LPTL, University of Paris VI, 75252 Paris, France} \date{\today}
\begin{abstract} We study the constraints imposed on the population and phase relaxation rates by the physical requirement of completely positive evolution for open $N$-level systems. The Lindblad operators that govern the evolution of the system are expressed in terms of observable relaxation rates, explicit formulas for the decoherence rates due to population relaxation are derived, and it is shown that there are additional, non-trivial constraints on the pure dephasing rates for $N>2$. Explicit, \emph{experimentally testable} inequality constraints for the decoherence rates are derived for three and four-level systems, and the implications of the results are discussed for generic ladder, $\Lambda$ and $V$ systems and transitions between degenerate energy levels. \end{abstract} \pacs{03.65.Yz,03.65.Ta,76.20.+q} \maketitle
\section{Introduction} \label{sec:intro}
Understanding the dynamics of open systems is crucial in many areas of physics including quantum optics~\cite{97Scully, 00Gardiner}, quantum measurement theory~\cite{83Kraus}, quantum state diffusion~\cite{98Percival}, quantum chaos~\cite{00Braun}, quantum information processing~\cite{00Nielsen} and quantum control~\cite{PRA64n012414,PRA65n010101,PRA65n042301}. Yet, despite many efforts to shed light on these issues~\cite{76Davies,83Kraus,87Alicki, 00Braun,02Breuer}, many important questions remain.
For instance, it was recognized early by Kraus~\cite{71Kraus}, Lindblad~\cite{75Lindblad,76Lindblad}, and Gorini, Kossakowski and Sudarshan~\cite{76Gorini} that the dynamical evolution of an open system must be completely positive~
\footnote{A map $\Lambda$ acting on the algebra of bounded operators $B({\cal H})$ on a finite-dimensional Hilbert space ${\cal H}$ is completely positive if the composite map $\Lambda \otimes I_n$ acting on $B({\cal H})\otimes M(n)$, where $M(n)$ is the algebra of complex $n \times n$ matrices and $I_n$ the identity in $M(n)$, is positive for any $n\ge 1$.}
to ensure that the state of the open system remains physically valid at all times. Unfortunately, if relaxation rates are introduced ad-hoc based on a phenomenological description of the system, the resulting equations often do not satisfy this condition. For example, the Agarwal/Redfield equations of motion for a damped harmonic oscillator have been shown to violate complete and even simple positivity for certain initial conditions~\cite{JCP107p5236}. Although such master equations may provide physical solutions in some cases, serious inconsistencies such as negative or imaginary probabilities, unbounded solutions and other problems may arise.
For two-level systems the implications of the complete positivity requirement have been studied extensively in the literature, for example by Gorini \textit{et al.} who first showed that there are constraints on the relaxation rates in the weak coupling limit~\cite{76Gorini,78Gorini}, Lendi who provided a comprehensive in-depth analysis of the dissipative optical Bloch equations ~\cite{87Alicki}, and more recently Kimura who extended earlier work by Gorini to the strong coupling regime~\cite{PRA66n062113}. Recently, there has also been considerable research activity on quantum Markov channels for two-level systems, motivated by their importance in quantum computing and communication. See, for instance, Ref.~\cite{PRA67n062312} for a comprehensive analysis. A few simple higher dimensional systems such as a three-level $V$-system with decay from two upper levels to a common ground state have also been studied, for instance by Lendi~\cite{87Alicki}.
In general, however, ensuring complete positivity of the evolution is often neglected for open systems with more than two (or degenerate) energy levels. For instance, the general expressions (6.A.11) in~\cite{95Mukamel} for the relaxation rates ensure complete positivity only for two-level systems, as we shall show. For higher-dimensional systems additional constraints must be imposed if complete positivity is to be maintained. One reason for this neglect of positivity constraints is that, although the general form of the admissible generators for quantum dynamical semi-groups is known, it can be difficult to verify whether a proposed dynamical law for an open system is consistent with positivity requirements. The main objective of this paper is to address this issue.
The paper is organized as follows. Starting with a purely phenomenological description of the interaction of an open system with its environment in terms of observable population and phase relaxation rates --- analogous to the $T_1$ and $T_2$ relaxation times for a two-level system --- we derive a general form for the dissipation superoperator in section~\ref{sec:liouv}. We then explicitly demonstrate with simple examples in section~\ref{sec:constraints} that the relaxation rates cannot be chosen arbitrarily if the evolution of the system is to be physical in the sense that it satisfies complete positivity. In particular, we show that the phase relaxation rates for $N>2$ are correlated even in the absence of population relaxation, i.e., there exist constraints on the phase relaxation rates that are independent of population decay. To understand the nature of these constraints we express the empirically derived relaxation superoperator in Lindblad form (Section~\ref{sec:standard}), and show that it can always be decomposed into two parts, one accounting for population relaxation and the other for pure phase relaxation processes (Section~\ref{sec:decomp}). This decomposition provides a general formula for the decoherence rates induced by population decay, which is consistent with physical expectations and evidence, and \emph{additional} positivity constraints on the decoherence rates resulting from pure phase relaxation for $N>2$.
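To make the nature of these additional constraints concrete, here is a minimal numerical sketch (an illustration of the phenomenon with rates chosen by us, not a computation from this paper). For $N=3$, suppose pure dephasing damps each coherence independently, $\rho_{jk}(t) = e^{-\gamma_{jk}t}\rho_{jk}(0)$, with $\gamma_{12}=\gamma_{13}=0$ and $\gamma_{23}=\gamma>0$. Starting from the pure state $(\ket{1}+\ket{2}+\ket{3})/\sqrt{3}$, the evolved matrix has negative determinant, hence a negative eigenvalue, so this combination of individually non-negative dephasing rates is unphysical:

```python
import math

def rho(t, gamma=1.0):
    # projector onto (|1> + |2> + |3>)/sqrt(3), with only the (2,3)
    # coherence damped (gamma_12 = gamma_13 = 0, gamma_23 = gamma)
    d = math.exp(-gamma * t)
    return [[1/3, 1/3, 1/3],
            [1/3, 1/3, d/3],
            [1/3, d/3, 1/3]]

def det3(m):
    # determinant of a 3x3 matrix; any density matrix must have det >= 0
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))
```

Indeed $\det\rho(t) = -(1-e^{-\gamma t})^2/27 < 0$ for $t>0$, so the pairwise dephasing rates cannot be prescribed independently; constraints of exactly this kind are derived for three and four-level systems in the sections cited above.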
In section~\ref{sec:dephasing} we study the implications of these additional constraints in depth for three and four-level systems. In particular, we use the abstract positivity constraints to derive explicit inequality constraints for the observable decoherence rates. Such explicit constraints are important from a theoretical and practical perspective because they allow us to make concrete, empirically verifiable predictions about the decoherence rates and the dynamics of the system, as we show in section~\ref{sec:examples} for several common, generic three and four-level systems such as ladder, $\Lambda$, $V$ and tripod systems, and transitions between doubly degenerate energy levels. Experimental data consistent with positivity constraints would be significant and validate the chosen model for the open system dynamics. On the other hand, if the observed relaxation rates for a system do not satisfy the constraints required to ensure complete positivity, it would be a strong indication that the model used is not sufficient to properly describe the dynamics of the system. This does not necessarily mean that the model is useless; it might well be adequate for some purposes, but there will be cases where the model makes unphysical predictions and better models, consistent with physical constraints, are required.
\section{Quantum Liouville Equation for Dissipative Systems} \label{sec:liouv}
The state of an $N$-level quantum system is usually represented by a density operator $\op{\rho}$ acting on a Hilbert space ${\cal H}$. If the system is closed then its evolution is given by the quantum Liouville equation \begin{equation} \label{eq:Liouville}
{\rm i}\hbar \frac{d}{dt} \op{\rho}(t) = \comm{\op{H}}{\op{\rho}(t)}, \end{equation} where $\op{H}$ is the Hamiltonian. Formally, the dynamics of an open system $S$ that is part of a closed supersystem $S+E$ (possibly the entire universe) is determined by the Hamiltonian dynamics (\ref{eq:Liouville}) of $S+E$, and the state of the subsystem $S$ can be obtained by taking the partial trace of the entire system's density operator $\op{\rho}_{S+E}$ over the degrees of freedom of the environment $E$. Often however, the evolution of the (closed) super-system is unknown or too complicated and we are interested only in the dynamics of $S$. It is therefore useful to define a density operator $\op{\rho}$ based on the degrees of freedom of $S$, and describe its non-unitary evolution by amending the quantum Liouville equation to account for the non-Hamiltonian dynamics resulting from the interaction of $S$ with the environment $E$.
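As a minimal numerical sketch of Eq.~(\ref{eq:Liouville}) (our own illustration; the two-level Hamiltonian below is an arbitrary choice), the closed-system solution $\op{\rho}(t) = e^{-{\rm i}\op{H}t/\hbar}\op{\rho}(0)e^{{\rm i}\op{H}t/\hbar}$ preserves the trace, Hermiticity and eigenvalues of $\op{\rho}$, so positivity can only be lost through non-Hamiltonian dissipative terms:

```python
import math

def mat_mul(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    # conjugate transpose of a 2x2 matrix
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

def evolve(rho0, t):
    # rho(t) = U rho0 U^dagger for H = sigma_x (hbar = 1), where
    # U = exp(-i H t) = cos(t) I - i sin(t) sigma_x
    c, s = math.cos(t), math.sin(t)
    U = [[c, -1j * s], [-1j * s, c]]
    return mat_mul(mat_mul(U, rho0), dagger(U))
```

For example, starting in the ground state, the population oscillates coherently and is fully transferred at $t=\pi/2$, while the trace and Hermiticity of $\op{\rho}$ are preserved exactly.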
In this paper we restrict our attention to the (common) case where the effect of the environment $E$ leads to population and phase relaxation (decay and decoherence, respectively) of the system $S$, and ultimately causes it to relax to an equilibrium state. To clearly define what we mean by the terms population and phase relaxation, note that given an $N$-dimensional quantum system we can choose a complete orthonormal basis $\{\ket{n}:n=1,2,\ldots,N\}$ for its Hilbert space and expand its density operator with respect to this basis: \begin{equation} \label{eq:rho1}
\op{\rho} = \sum_{n=1}^N \left[ \rho_{nn} \ket{n}\bra{n}
+ \sum_{n'>n} \rho_{nn'} \ket{n}\bra{n'} +
\rho_{nn'}^* \ket{n'}\bra{n} \right]. \end{equation} Although we can theoretically choose any Hilbert space basis, physically there is usually a preferred basis. Since the interaction with the environment usually causes the system to relax to an equilibrium state that is a statistical mixture of its energy eigenstates
~\footnote{An energy eigenstate of the system is a Hilbert space wavefunction $\ket{n}$ that satisfies $\op{H}\ket{n}=E_n\ket{n}$, where $\op{H}$ is the Hamiltonian of the system.},
it is sensible to choose a suitable basis of (energy) eigenstates of the system for modelling the relaxation process. In this setting the diagonal elements $\rho_{nn}$ in the expansion Eq.~(\ref{eq:rho1}) of $\op{\rho}$ determine the populations of the (energy) eigenstates $\ket{n}$, and the off-diagonal elements $\rho_{nn'}$ ($n\neq n'$) are called coherences, since they distinguish coherent superpositions of energy eigenstates $\ket{\Psi}= \sum_{n=1}^N c_n\ket{n}$ from statistical (incoherent) mixtures of energy eigenstates $\op{\rho}=\sum_{n=1}^N w_n \ket{n}\bra{n}$.
Population relaxation occurs when the populations of the energy eigenstates change, typically due to spontaneous emission or absorption of quanta of energy at random times. To account for population relaxation as a result of the interaction with an environment we must modify the system's quantum Liouville equation~(\ref{eq:Liouville}) to: \begin{equation}\label{eq:poptrans}
\dot{\rho}_{nn}(t) = -\frac{{\rm i}}{\hbar}(\comm{\op{H}}{\op{\rho}(t)})_{nn}
- \sum_{k\neq n} \gamma_{kn}\rho_{nn}(t)
+ \sum_{k\neq n} \gamma_{nk}\rho_{kk}(t), \end{equation} where $\op{H}$ represents the Hamiltonian dynamics of $S$, and $\gamma_{kn}$ is the rate of population relaxation from state $\ket{n}$ to state $\ket{k}$, which depends on the lifetime of state $\ket{n}$ and, in the case of multiple decay pathways, on the branching probability of the particular transition. The $\gamma_{kn}$ are thus by definition real and non-negative. Population relaxation necessarily induces phase relaxation, and we will later derive explicit expressions for the contribution of population relaxation to the phase relaxation rates.
In general, phase relaxation occurs when the interaction of the system with the environment destroys phase correlations between quantum states, and thus converts coherent superposition states into incoherent mixed states. Since coherence is determined by the off-diagonal elements in our expansion of the density operator, this effect can be modelled as decay of the off-diagonal elements of $\op{\rho}$: \begin{equation} \label{eq:dephasing}
\dot{\rho}_{kn}(t) = -\frac{{\rm i}}{\hbar}(\comm{\op{H}}{\op{\rho}(t)})_{kn}
-\Gamma_{kn} \rho_{kn}(t), \end{equation} where $\Gamma_{kn}$ (for $k \neq n$) is the dephasing rate of the transition $\ket{k} \leftrightarrow \ket{n}$.
Hence, population and phase relaxation change the evolution of the system and force us to rewrite its quantum Liouville equation as: \begin{equation} \label{eq:dLE}
\dot{\rho}(t) = -\frac{{\rm i}}{\hbar}\comm{\op{H}}{\op{\rho}(t)} + L_D[\op{\rho}(t)], \end{equation} where $L_D[\op{\rho}(t)]$ is the dissipation (super-)operator determined by the relaxation rates. It is convenient to note here that the $N \times N$ density matrix $\op{\rho}(t)$ can be rewritten as an $N^2$ (column) vector, which we denote as $\lket{\rho(t)}$, by stacking its columns. Since the commutator $\comm{\op{H}}{\op{\rho}(t)}$ and the dissipation (super-)operator $L_D[\op{\rho}(t)]$ are linear operators on the set of density matrices, we can write (\ref{eq:dLE}) in matrix form: \begin{equation}
\frac{d}{dt}\lket{\rho(t)} = \left(-\frac{{\rm i}}{\hbar}{\cal L}_H + {\cal L}_D\right)\lket{\rho(t)} \end{equation} where ${\cal L}_H$ and ${\cal L}_D$ are $N^2 \times N^2$ matrices representing the Hamiltonian and dissipative part of the dynamics, respectively. Comparison with equations (\ref{eq:poptrans}) and (\ref{eq:dephasing}) shows that the non-zero elements of ${\cal L}_D$ are: \begin{equation}
\begin{array}{rcll}
({\cal L}_D)_{(m,n),(m,n)} &=& -\Gamma_{mn} & \quad m\neq n \\
({\cal L}_D)_{(m,m),(m',m')} &=& +\gamma_{mm'} & \quad m\neq m' \\
({\cal L}_D)_{(m,m),(m,m)} &=& -\sum_{{k=1 \atop k\neq m}}^N \gamma_{km}
\end{array} \label{eq:LD} \end{equation} where the index $(m,n)$ should be interpreted as $m+(n-1)N$. Since $\Gamma_{mn}=\Gamma_{nm}$, we have $({\cal L}_D)_{(m,n),(m,n)}=({\cal L}_D)_{(n,m),(n,m)}$.
For a three-level system subject to population and phase relaxation, for instance, equation (\ref{eq:LD}) gives a dissipation super-operator of the form: \begin{widetext} \begin{equation} \label{eq:LD3}
{\cal L}_D = -\left[\begin{array}{*{9}{c}}
\gamma_{21}+\gamma_{31}&0&0&0&-\gamma_{12}&0&0&0&-\gamma_{13} \\
0&\Gamma_{12}&0&0&0&0&0&0&0 \\
0&0&\Gamma_{13}&0&0&0&0&0&0 \\
0&0&0&\Gamma_{12}&0&0&0&0&0 \\
-\gamma_{21}&0&0&0&\gamma_{12}+\gamma_{32}&0&0&0&-\gamma_{23} \\
0&0&0&0&0&\Gamma_{23}&0&0&0 \\
0&0&0&0&0&0&\Gamma_{13}&0&0 \\
0&0&0&0&0&0&0&\Gamma_{23}&0 \\
-\gamma_{31}&0&0&0&-\gamma_{32}&0&0&0&\gamma_{13}+\gamma_{23}
\end{array}\right] \end{equation} \end{widetext} where $\gamma_{kn}$ and $\Gamma_{kn}$ are the population and phase relaxation rates, respectively.
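For illustration, the assembly of ${\cal L}_D$ from the observable rates via Eq.~(\ref{eq:LD}) can be prototyped numerically. The following sketch (not part of the formal development; it assumes NumPy, 0-based indices, and column stacking of the density matrix, so that $\rho_{mn}$ sits at position $m+nN$) builds the dissipation matrix for arbitrary $N$:

```python
import numpy as np

def build_LD(gamma, Gamma):
    """Dissipation matrix with the structure of Eq. (LD) above.

    gamma[k, n] : population relaxation rate |n> -> |k>  (gamma[n, n] = 0)
    Gamma[m, n] : dephasing rate of the coherence rho_{mn} (m != n)
    Indices are 0-based; rho is column-stacked, so rho_{mn} sits at m + n*N.
    """
    N = gamma.shape[0]
    LD = np.zeros((N * N, N * N))
    idx = lambda m, n: m + n * N
    for m in range(N):
        # population block: loss out of |m>, gain from the other levels
        LD[idx(m, m), idx(m, m)] = -gamma[:, m].sum()
        for mp in range(N):
            if mp != m:
                LD[idx(m, m), idx(mp, mp)] = gamma[m, mp]
        # coherence block: plain exponential decay at rate Gamma_{mn}
        for n in range(N):
            if n != m:
                LD[idx(m, n), idx(m, n)] = -Gamma[m, n]
    return LD

# two-level decay |2> -> |1> at rate 1 with the minimal dephasing Gamma_12 = 1/2
gamma = np.array([[0.0, 1.0], [0.0, 0.0]])
Gamma = np.array([[0.0, 0.5], [0.5, 0.0]])
LD = build_LD(gamma, Gamma)
```

Feeding in the six $\gamma_{kn}$ and three $\Gamma_{kn}$ of a three-level system reproduces the $9\times 9$ matrix displayed above entry by entry.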
\section{Physical Constraints on the Dynamical Evolution} \label{sec:constraints}
Although (\ref{eq:LD}) gives a general form for the dissipation superoperator of a system subject to population and phase relaxation, not every superoperator ${\cal L}_D$ of this form is acceptable on physical grounds since the density operator $\op{\rho}(t)$ of the system must remain Hermitian with non-negative eigenvalues for all $t>0$, and its trace must be conserved~\footnote{
Trace conservation means that the sum of the populations of all basis states is preserved, and is equivalent to conservation of probability. This condition may be violated, for instance, if the total population of the system is not conserved, e.g., by atoms being ionized or mapped outside the subspace $S$ or particles being lost from a trap. However, this condition is in principle not restrictive since we can usually amend the Hilbert space ${\cal H}_S$ by adding a subspace $B$ that accounts for population losses from system $S$ so that the total population of $S+B$ is conserved under the open system evolution resulting from the interaction of $S+B$ with the environment.}.
It is easy to see that the relaxation parameters in (\ref{eq:LD}) cannot be chosen arbitrarily if we are to obtain a valid density operator. For instance, it is well known in quantum optics that a two-level atom with decay $\ket{2} \rightarrow\ket{1}$ at the rate $\gamma_{12}>0$ also experiences dephasing at a rate $\Gamma_{12}\ge\frac{1}{2}\gamma_{12}$ since the coherence $\rho_{12}$ must decay with the population of the upper level in order for $\op{\rho}(t)$ to remain positive, consistent with the constraints on the relaxation rates for two-level systems derived in~\cite{76Gorini,PRA66n062113}.
In higher dimensions we also expect population relaxation from state $\ket{n}$ to $\ket{k}$ at the rate $\gamma_{kn}$ to induce dephasing of this transition at the rate $\Gamma_{kn}\ge\frac{1}{2}\gamma_{kn}$. However, for $N>2$ the situation is more complicated. First, a single random decay $\ket{n}\rightarrow \ket{k}$ due to spontaneous emission, for instance, may affect other transitions involving the states $\ket{k}$ or $\ket{n}$. This is perhaps not too surprising, but since it is a crucial motivation for the following sections, we shall consider two concrete examples.
First, consider a three-level system subject to decay $\ket{2}\rightarrow \ket{1}$ at the rate $\gamma_{12}$ but no other relaxation. Suppose, for instance, we follow formula (6.A.11) in \cite{95Mukamel} and set $\Gamma_{12}=\frac{1}{2}\gamma_{12}$ and take all other relaxation rates to be zero. Then, assuming $\op{H}=0$ for convenience, the solution of Eq.~(\ref{eq:dLE}) for this dissipation super-operator leads to the density matrix \begin{equation} \label{eq:false1}
\op{\rho}(t) =
\left( \begin{array}{ccc}
\rho_{11} + (1-e^{-t\gamma_{12}})\rho_{22} &
e^{-t\gamma_{12}/2} \rho_{12} & \rho_{13} \\
e^{-t\gamma_{12}/2} \rho_{21} & e^{-t\gamma_{12}} \rho_{22} & \rho_{23} \\
\rho_{31} & \rho_{32} & \rho_{33}
\end{array} \right), \end{equation} which in general is not positive for $t>0$. For example, the superposition state \begin{equation} \label{eq:rho0}
\op{\rho}(0)= \ket{\Psi}\bra{\Psi}, \quad \ket{\Psi}=\frac{1}{\sqrt{3}}(1,1,1)^T \end{equation} evolves under the action of this dynamical generator to a ``state'' $\op{\rho}(t)$ which has a negative eigenvalue (i.e., negative populations) for all $t>0$ as shown in Fig.~\ref{fig:ev} and is thus physically unacceptable.
\begin{figure*}
\caption{Eigenvalues of Eq.~(\ref{eq:false1}), left, and Eq.~(\ref{eq:false2}),
right, for $\op{\rho}(0)$ as in Eq.~(\ref{eq:rho0}).}
\label{fig:ev}
\end{figure*}
Furthermore, population relaxation is not the only source of constraints on the decoherence rates for $N>2$. A perhaps more surprising observation is that even if there is no population relaxation at all, i.e., $\gamma_{kn}=0$ for all $k,n$, and the system experiences only pure dephasing, we cannot choose the decoherence rates $\Gamma_{kn}$ arbitrarily. For example, setting $\Gamma_{12}\neq 0$ and $\Gamma_{23}=\Gamma_{13}=0$ for our three-level system gives \begin{equation} \label{eq:false2}
\op{\rho}(t) =
\left( \begin{array}{ccc}
\rho_{11} & e^{-\Gamma_{12} t} \rho_{12} & \rho_{13} \\
e^{-\Gamma_{12} t} \rho_{21} & \rho_{22} & \rho_{23} \\
\rho_{31} & \rho_{32} & \rho_{33}
\end{array} \right). \end{equation} Choosing $\op{\rho}(0)$ as in Eq.~(\ref{eq:rho0}) we again obtain a density operator $\op{\rho}(t)$ with negative eigenvalues (see Fig.~\ref{fig:ev}). This shows that there must be additional constraints on the decoherence rates to ensure that the state of the system remains physical.
\section{Standard Form of Dissipation Super-Operators} \label{sec:standard}
Significant progress toward solving the problem of finding dynamical generators for open systems that ensure complete positivity of the evolution operator, and hence positivity of the system's density matrix, was made by Gorini, Kossakowski and Sudarshan \cite{76Gorini} who showed that the generator of a quantum dynamical semi-group can be expressed in standard form \begin{eqnarray}
L[\op{\rho}(t)] &=& -{\rm i} \comm{\op{H}}{\op{\rho}(t)} \nonumber \\
& & \displaystyle + \frac{1}{2}\sum_{k,k'=1}^{N^2-1} a_{kk'}
\left( \comm{\op{V}_k \op{\rho}(t)}{\op{V}_{k'}^\dagger}
+\comm{\op{V}_k}{\op{\rho}(t)\op{V}_{k'}^\dagger} \right)
\nonumber\\ \label{eq:L1} \end{eqnarray} where $\op{H}$ is the generator for the Hamiltonian part of the evolution and the $\op{V}_k$, $k=1,2,\ldots,N^2-1$, are trace-zero, orthonormal operators $(\op{V}_k,\op{V}_{k'}):=\mathop{\rm Tr}(\op{V}_k^\dagger\op{V}_{k'})=\delta_{kk'}$ that together with $\op{V}_{N^2}=\frac{1}{\sqrt{N}}\op{I}$ form a basis for the system's Liouville space. Furthermore, the resulting evolution operator is \emph{completely positive} if and only if the coefficient matrix $a=(a_{kk'})$ is positive.
Noting that a positive matrix $(a_{kk'})$ has real, non-negative eigenvalues $\gamma_k$ and can be diagonalized by a unitary transformation, we obtain the second standard representation of dissipative dynamical generator, which was first derived (independently) by Lindblad~\cite{76Lindblad}: \begin{eqnarray}
L[\op{\rho}(t)]
&=& -{\rm i} \comm{\op{H}}{\op{\rho}(t)} \nonumber\\
& & + \frac{1}{2}\sum_{k=1}^{N^2-1}
\gamma_k \left( \comm{\op{A}_k \op{\rho}(t)}{\op{A}_k^\dagger}+
\comm{\op{A}_k}{\op{\rho}(t)\op{A}_k^\dagger} \right). \nonumber\\
\label{eq:L2} \end{eqnarray} Yet, although the general expressions (\ref{eq:L1}) and (\ref{eq:L2}) have been known for more than two decades, it is often unknown whether a proposed generator for the dissipative dynamics for a particular model is completely positive, and some common dissipative generators have been shown not to satisfy this condition, as in the case of the Agarwal/Redfield equations mentioned earlier. In part this may be due to the fact that it is often very difficult in practice to express phenomenologically derived dissipation generators in either of the two standard forms, and hence to verify if a proposed generator satisfies complete positivity.
However, given a matrix representation for the relaxation super-operator of the form (\ref{eq:LD}), which was derived from a purely phenomenological model based on observable decay and decoherence rates, we can express it in standard form (\ref{eq:L1}) and transform abstract positivity requirements into concrete, easily verifiable constraints on the empirically observable relaxation rates. For this purpose we need a basis $\{\op{V}_k\}$ for the Liouville space of the system. A canonical choice is to define $N-1$ diagonal matrices \begin{equation} \label{eq:Vdiag}
\op{V}_{(m,m)} = \frac{1}{\sqrt{m+m^2}}
\left( \sum_{s=1}^m {\bf e}_{ss} - m \, {\bf e}_{m+1,m+1} \right) \end{equation} for $m=1,2,\ldots, N-1$, as well as $N^2-N$ off-diagonal matrices \begin{equation} \label{eq:Voffdiag}
\op{V}_{(m,n)} = {\bf e}_{mn}, \qquad m\neq n, \; m,n=1,2,\ldots, N, \end{equation} where ${\bf e}_{mn}$ is an $N\times N$ matrix whose entries are zero except for a one in the $m$th row, $n$th column position. It is quite easy to verify that the $N^2-1$ operators $\op{V}_{(m,n)}$ thus defined are trace-zero $N\times N$ matrices that satisfy the orthonormality condition $\mathop{\rm Tr}(\op{V}_k\op{V}_{k'}^\dagger)=\delta_{kk'}$ for any dimension $N$.
Having defined the basis operators $\op{V}_k$, we can now compute the generators \begin{equation} \label{eq:Lkk}
L_{kk'}[\op{\rho}(t)]
= \frac{1}{2}\left(\comm{\op{V}_k \op{\rho}(t)}{\op{V}_{k'}^\dagger}
+\comm{\op{V}_k}{\op{\rho}(t)\op{V}_{k'}^\dagger} \right) \end{equation} of the dissipation super-operator (\ref{eq:L1}) with respect to this basis, where $k,k'=1,2,\ldots,N^2-1$. Recalling that $L_{kk'}[\op{\rho}(t)]$ is equivalent to ${\cal L}_{kk'}\lket{\rho(t)}$, where each ${\cal L}_{kk'}$ is an $N^2 \times N^2$ matrix and $\lket{\rho(t)}$ is an $N^2$-column vector, we note that any (trace-preserving) dissipation superoperator ${\cal L}_D$ can be written as a linear combination of these dissipation generators: \begin{equation} \label{eq:LDexpand}
{\cal L}_D =\sum_{k,k'=1}^{N^2-1} a_{kk'} {\cal L}_{kk'}. \end{equation} To compute the coefficient matrix $a=(a_{kk'})$ we can rewrite the $N^2\times N^2$ matrices ${\cal L}_{kk'}$ and ${\cal L}_D$ as column vectors $\vec{{\cal L}}_{kk'}$ and $\vec{{\cal L}}_D$ of length $N^4$, and $a$ as a column vector $\vec{a}$ of length $(N^2-1)^2$, and solve the linear equation $\vec{{\cal L}}_D={\cal A}\vec{a}$ where ${\cal A}$ is an $N^4\times(N^2-1)^2$ matrix whose columns are the $\vec{{\cal L}}_{kk'}$. This matrix equation has a solution for any trace-zero Liouville operator since the columns of ${\cal A}$ span the space of trace-zero Liouville operators of the system. This procedure allows us in principle to express any (trace-preserving) dissipation super-operator in standard form (\ref{eq:L1}), and to verify whether it generates a completely positive evolution operator by checking the eigenvalues of the coefficient matrix $(a_{kk'})$. However, in practice this is not very efficient, especially for large $N$. Instead, we would like to be able to express the coefficients $a_{kk'}$ directly in terms of observable relaxation rates. This is the aim of the following sections.
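The procedure just described can be prototyped directly. The sketch below (a numerical illustration assuming NumPy and column stacking, not part of the formalism) builds the canonical basis, vectorizes the generators $L_{kk'}$, solves the linear system for the coefficient matrix $a$ by least squares, and extracts the block $b$ of diagonal-generator coefficients for a pure-dephasing three-level example:

```python
import numpy as np

N = 3
I = np.eye(N)

def unit(m, n):
    """Matrix unit e_{mn} (0-based indices)."""
    M = np.zeros((N, N)); M[m, n] = 1.0
    return M

# canonical trace-zero orthonormal basis: N-1 diagonal + N^2-N off-diagonal
V = [(sum(unit(s, s) for s in range(m + 1)) - (m + 1) * unit(m + 1, m + 1))
     / np.sqrt((m + 1) + (m + 1) ** 2) for m in range(N - 1)]
V += [unit(m, n) for m in range(N) for n in range(N) if m != n]

def gen(Vk, Vl):
    """Column-stacked matrix of L_{kk'}[rho] = Vk rho Vl+ - {Vl+ Vk, rho}/2,
    using vec(A X B) = (B^T kron A) vec(X)."""
    W = Vl.conj().T @ Vk
    return (np.kron(Vl.conj(), Vk)
            - 0.5 * np.kron(I, W) - 0.5 * np.kron(W.T, I))

K = N * N - 1
A = np.column_stack([gen(V[k], V[kp]).ravel()
                     for k in range(K) for kp in range(K)])

# pure dephasing: Gamma_12 = 1, Gamma_13 = 2, Gamma_23 = 3, all gamma = 0
G = np.zeros((N, N))
G[0, 1] = G[1, 0] = 1.0; G[0, 2] = G[2, 0] = 2.0; G[1, 2] = G[2, 1] = 3.0
LD = np.diag(np.array([-G[m, n] for n in range(N) for m in range(N)]))

a_vec, *_ = np.linalg.lstsq(A, LD.ravel(), rcond=None)
a = a_vec.reshape(K, K)
b = a[:N - 1, :N - 1]   # coefficients of the two diagonal generators
```

The residual of the least-squares solve vanishes (the generator lies in the span of the $L_{kk'}$), and the resulting $b$ reproduces the closed-form coefficients derived for three-level systems in the next sections; checking the eigenvalues of $a$ (here, of $b$) then tests complete positivity.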
\section{Decomposition of Relaxation Superoperator} \label{sec:decomp}
We now use Eq.~(\ref{eq:LDexpand}) to show that the relaxation super-operator of any $N$-level system subject to both population and phase relaxation processes can be decomposed into a part associated with population relaxation processes and another accounting for pure decoherence. To this end, we introduce two types of decoherence rates: $\Gamma_{mn}^p$ and $\Gamma_{mn}^d$ for decoherence due to population relaxation and pure phase relaxation (dephasing), respectively, and require that $\Gamma_{mn}=\Gamma_{mn}^p+\Gamma_{mn}^d$.
If we have population relaxation $\ket{n}\rightarrow\ket{m}$ at the rate $\gamma_{mn}\ge 0$ for $m,n=1,2,\ldots,N$ (with $\gamma_{mm}=0$), then setting $a_{kk}=\gamma_{mn}$ for $k=m+(n-1)N$ and $a_{kk'}=0$ otherwise in (\ref{eq:LDexpand}) leads to a dissipation superoperator \begin{equation} \label{eq:LDp}
{\cal L}_D^p = \sum_{m,n=1}^N \gamma_{mn}{\cal L}_{(m,n),(m,n)}. \end{equation} Inserting Eq.~(\ref{eq:Lkk}) for ${\cal L}_{(m,n),(m,n)}$ with $k=m+(n-1)N$ and $\op{V}_k$ as in Eqs~(\ref{eq:Vdiag})--(\ref{eq:Voffdiag}), we obtain \[
\begin{array}{lcll}
({\cal L}_D^p)_{(m,m),(m,m)} &=& -\sum_{k=1, k\neq m}^N \gamma_{km}, \\
({\cal L}_D^p)_{(m,m),(m',m')} &=& \gamma_{mm'}, & \quad m\neq m' \\
({\cal L}_D^p)_{(m,n),(m,n)} &=& -\frac{1}{2} \sum_{k=1}^N (\gamma_{km}+\gamma_{kn}),
& \quad m \neq n \end{array}\] which agrees with the general form Eq.~(\ref{eq:LD}) of the relaxation super-operator, yields the correct population relaxation rates, and suggests that the dephasing rates due to population relaxation are given by: \begin{equation} \label{eq:Gammap}
\Gamma_{mn}^p = \frac{1}{2} \sum_{k=1}^N (\gamma_{km}+\gamma_{kn}), \quad m\neq n, \end{equation} i.e., that the \emph{decay-induced decoherence} of the transition between states $\ket{n}$ and $\ket{m}$ is one half of the sum over all decay rates \emph{from} either of the two states $\ket{m}$ or $\ket{n}$ to any other state. Finally, inserting \begin{equation} \label{eq:Gamma}
\Gamma_{mn} = \Gamma_{mn}^d+\frac{1}{2} \sum_{k=1}^N (\gamma_{km}+\gamma_{kn}),
\quad m\neq n \end{equation} into Eq.~(\ref{eq:LD}) and solving Eq.~(\ref{eq:LDexpand}) shows that the dissipation super-operator ${\cal L}_D$ of the system decomposes, ${\cal L}_D={\cal L}_D^p+{\cal L}_D^d$, with $L_D^p$ given by Eq.~(\ref{eq:LDp}) and \begin{equation} \label{eq:LDd}
{\cal L}_D^d = \sum_{m,m'=1}^{N-1} a_{(m,m),(m',m')} {\cal L}_{(m,m),(m',m')}. \end{equation} Thus, given ${\cal L}_D$ and the population relaxation rates $\gamma_{mn}$ of the system, we can compute ${\cal L}_D^d={\cal L}_D-{\cal L}_D^p$ and determine the coefficients $b_{mm'}:= a_{(m,m),(m',m')}$ in (\ref{eq:LDd}) by rewriting the super-operators ${\cal L}_D^d$ and ${\cal L}_{(m,m),(m',m')}$ as column vectors $\vec{l}$ and $\vec{l}_k$, respectively, defining a matrix ${\cal B}$ whose columns are given by $\vec{l}_k$, and setting $\vec{b} ={\cal B}^{-1}\vec{l}$ where ${\cal B}^{-1}$ denotes the pseudo-inverse of ${\cal B}$. However, note that the matrix ${\cal B}$ only has $(N-1)^2$ instead of $(N^2-1)^2$ columns, and we can eliminate all zero rows. The resulting coefficient vector $\vec{b}$ can be rearranged into an $(N-1)\times (N-1)$ coefficient matrix $b=(b_{mm'})$ that depends only on the pure dephasing rates $\Gamma_{mn}^d$. Furthermore, the requirement of positivity of the coefficient matrix $(a_{kk'})$ in Eq.~(\ref{eq:L1}) now reduces to the (much simpler) requirement that the $(N-1)\times (N-1)$ matrix $(b_{mm'})$ be positive semi-definite.
It is important to note that our formula (\ref{eq:Gammap}) for the contribution of population relaxation to the overall decoherence rates, obtained solely by imposing the physical constraint of complete positivity on the evolution of the system, agrees with the expressions given, for instance, by Shore \cite{90Shore} for the general Bloch equations of $N$-level atoms subject to various dissipative processes, but our general expression for the dissipation super-operator covers systems subject to both population decay and pure dephasing processes, and implies the existence of non-trivial constraints on the pure dephasing rates of the system for $N>2$. In the following sections, we shall analyze these constraints in detail for $N=3$ and $N=4$.
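Eq.~(\ref{eq:Gammap}), which gives the decay-induced decoherence as half the total decay outflow from the two levels involved, reduces to a one-line computation. A minimal sketch (for illustration only; assumes NumPy and 0-based indices):

```python
import numpy as np

def decay_induced_dephasing(gamma):
    """Gamma^p_{mn} = (1/2) * sum_k (gamma_{km} + gamma_{kn}), m != n;
    gamma[k, n] is the |n> -> |k> population relaxation rate (0-based)."""
    out = gamma.sum(axis=0)                  # total decay rate out of each level
    Gp = 0.5 * (out[:, None] + out[None, :])
    np.fill_diagonal(Gp, 0.0)                # only coherences (m != n) dephase
    return Gp

# two-level decay |2> -> |1> at rate gamma_12 = 1 recovers Gamma_12^p = 1/2
Gp = decay_induced_dephasing(np.array([[0.0, 1.0], [0.0, 0.0]]))
```

The two-level case recovers the familiar minimal dephasing rate $\Gamma_{12}\ge\frac{1}{2}\gamma_{12}$ quoted in section~\ref{sec:constraints}.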
\section{Constraints on the Pure Dephasing Rates} \label{sec:dephasing}
\subsection{Three-level Systems} \label{sec:3level}
Expanding the relaxation super-operator (\ref{eq:LD3}) for a three-level system with respect to the basis \begin{equation} \label{eq:V3}
\begin{array}{rcl}
\op{V}_{(1,1)} &=& \frac{1}{\sqrt{2}}({\bf e}_{11}-{\bf e}_{22}), \\
\op{V}_{(2,2)} &=& \frac{1}{\sqrt{6}}({\bf e}_{11}+{\bf e}_{22}-2{\bf e}_{33})\\
\op{V}_{(m,n)} &=& {\bf e}_{mn}, \quad m,n=1,2,3, \; m\neq n
\end{array} \end{equation} where ${\bf e}_{mn}$ is the $3\times 3$ matrix whose entries are zero except for a 1 in the $m$th row, $n$th column position --- which corresponds to the canonical basis (\ref{eq:Vdiag})--(\ref{eq:Voffdiag}) for $N=3$ --- yields an $8\times 8$ coefficient matrix $(a_{kk'})$ whose non-zero entries are $a_{kk}=\gamma_{mn}$ for $k=m+3(n-1)$ and $m\neq n$, as well as $a_{(m,m),(m',m')}=b_{mm'}$, where \[
\left( \begin{array}{c}
b_{11} \\
b_{21} \\
b_{12} \\
b_{22}
\end{array} \right)
= {\underbrace{\left( \begin{array}{rrrr}
1 & -\frac{1}{6} \sqrt{3} & \frac{1}{6} \sqrt{3} & 0 \\
\frac{1}{4} & \frac{5}{12}\sqrt{3} & \frac{1}{12}\sqrt{3} & \frac{3}{4} \\
1 & \frac{1}{6} \sqrt{3} & -\frac{1}{6} \sqrt{3} & 0 \\
\frac{1}{4} & -\frac{5}{12}\sqrt{3} & -\frac{1}{12}\sqrt{3} & \frac{3}{4} \\
\frac{1}{4} & \frac{1}{12}\sqrt{3} & \frac{5}{12}\sqrt{3} & \frac{3}{4} \\
\frac{1}{4} & -\frac{1}{12}\sqrt{3} & -\frac{5}{12}\sqrt{3} & \frac{3}{4}
\end{array} \right)}_{{\cal B}}}^{-1}
\left( \begin{array}{c} \Gamma_{12}^d \\
\Gamma_{13}^d \\
\Gamma_{12}^d \\
\Gamma_{23}^d \\
\Gamma_{13}^d \\
\Gamma_{23}^d
\end{array} \right) \] and the pure dephasing rates $\Gamma_{mn}^d$ are defined by (\ref{eq:Gamma}).
Noting that the pseudo-inverse of the matrix ${\cal B}$ is \[
\left( \begin{array}{*{6}{r}}
\frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 & 0 \\
\frac{1}{2\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{2\sqrt{3}} &-\frac{1}{\sqrt{3}} &0& 0\\
\frac{1}{2\sqrt{3}} & 0&-\frac{1}{2\sqrt{3}} & 0 & \frac{1}{\sqrt{3}}&-\frac{1}{\sqrt{3}}\\
-\frac{1}{6} & \frac{1}{3} & -\frac{1}{6} & \frac{1}{3} & \frac{1}{3} & \frac{1}{3}
\end{array} \right) \] we obtain \begin{equation} \label{eq:b3}
\begin{array}{rcll}
b_{11} &=& \Gamma_{12}^d \\
b_{22} &=& (-\Gamma_{12}^d +2\Gamma_{13}^d+2\Gamma_{23}^d)/3\\
b_{12}=b_{21} &=& (\Gamma_{13}^d-\Gamma_{23}^d)/\sqrt{3}
\end{array} \end{equation} Therefore, the coefficient matrix $(a_{kk'})$ of the relaxation superoperator ${\cal L}_D$ will be positive semi-definite if and only if $\gamma_{mn}\ge 0$ and the real symmetric $2 \times 2$ matrix \[
\left( \begin{array}{cc} b_{11} & b_{12} \\
b_{21} & b_{22}
\end{array} \right) \] has non-negative eigenvalues. The second condition is equivalent to \begin{equation} \label{eq:b3cond}
b_{11}+b_{22} \ge 0, \quad b_{11} b_{22}\ge b_{12}^2. \end{equation} Substituting (\ref{eq:b3}), these conditions become \begin{equation} \label{eq:Gamma3cond}
0 \le \Gamma_{12}^d + \Gamma_{13}^d + \Gamma_{23}^d
\le 2 \sqrt{\Gamma_{12}^d \Gamma_{13}^d
+ \Gamma_{12}^d \Gamma_{23}^d + \Gamma_{13}^d\Gamma_{23}^d}. \end{equation} This double inequality provides upper and lower bounds for the pure dephasing rates of the system. As expected, we can also show that each dephasing rate must be non-negative: the second inequality is of the form $[a+(b+c)]^2 \le 4[a(b+c)+bc]$, which can be rewritten as $a^2+(b-c)^2\le 2a(b+c)$ and thus implies $a(b+c)\ge 0$, i.e., $a$ and $b+c$ have the same sign; combined with the first inequality, $a+(b+c)\ge 0$, this shows $a\ge0$ and $b+c\ge0$, and since any of the three rates can play the role of $a$, it follows that $\Gamma_{12}^d$, $\Gamma_{13}^d$ and $\Gamma_{23}^d$ must all be non-negative. Hence, we can rewrite (\ref{eq:Gamma3cond}) as follows: \begin{equation} \label{eq:Gamma3cond2}
(\sqrt{\Gamma_b^d}-\sqrt{\Gamma_c^d})^2 \le \Gamma_a^d \le
(\sqrt{\Gamma_b^d}+\sqrt{\Gamma_c^d})^2 \end{equation} where $\{a,b,c\}$ is any permutation of $\{12,13,23\}$.
Note that the choice $\Gamma_{12}^d>0$ and $\Gamma_{13}^d=\Gamma_{23}^d=0$, which corresponds to the second example in section~\ref{sec:constraints} if there is no population relaxation, clearly violates (\ref{eq:Gamma3cond2}), which explains why it results in non-physical evolution. We also see that allowed choices include, for instance, $\Gamma_{12}^d=\Gamma_{23}^d>0$ and $\Gamma_{13}^d=0$.
In general, (\ref{eq:Gamma3cond2}) shows that pure dephasing in a three-level system always affects more than one transition. Furthermore, if two of the pure dephasing rates are equal, say $\Gamma^d$, then the third rate must be between $0$ and $4\Gamma^d$. For instance, consider a triply degenerate atomic energy level with basis states $\ket{m=0}$ and $\ket{m=\pm 1}$. If $\Gamma_{-1,0}^d=\Gamma_{0,1}^d$ then $0\le\Gamma_{-1,1}^d\le4\Gamma_{0,1}^d$, i.e., the decoherence rate of the transition between the outer states can be at most $4\Gamma_{0,1}^d$. But note in particular that it could be zero even if the decoherence rate between adjacent states is non-zero.
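The admissibility test (\ref{eq:Gamma3cond}) is straightforward to apply to measured rates. A minimal sketch (the helper name is ours, for illustration; a small tolerance guards against round-off at the boundary):

```python
def dephasing_rates_allowed(g12, g13, g23, tol=1e-12):
    """Check the three-level pure-dephasing constraint:
    0 <= g12+g13+g23 <= 2*sqrt(g12*g13 + g12*g23 + g13*g23),
    squared to avoid the square root."""
    s = g12 + g13 + g23
    return s >= -tol and s * s <= 4 * (g12 * g13 + g12 * g23 + g13 * g23) + tol
```

For instance, a single non-zero rate is rejected, while two equal rates admit a third rate anywhere between $0$ and four times their common value, in line with the discussion above.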
\subsection{Four-level Systems} \label{sec:4level}
If we expand the relaxation superoperator ${\cal L}_D$ for a four-level system as discussed in section~\ref{sec:decomp} with respect to the standard basis (\ref{eq:Vdiag})--(\ref{eq:Voffdiag}), we again obtain a coefficient matrix $(a_{(m,n),(m',n')})$ whose non-zero entries are $a_{(m,n),(m,n)}=\gamma_{mn}$ for $m\neq n$, as well as $a_{(m,m),(m',m')}=b_{mm'}$ with $b_{11}$, $b_{12}$, $b_{21}$ and $b_{22}$ as in (\ref{eq:b3}), $b_{31}=b_{13}$, $b_{32}=b_{23}$ and \begin{equation} \label{eq:b4a}
\begin{array}{rcl}
b_{13}&=& \sqrt{6}(-\Gamma_{13}^d+3\Gamma_{14}^d
+\Gamma_{23}^d-3\Gamma_{24}^d)/12 \\
b_{23}&=& \sqrt{2}(-2\Gamma_{12}^d +\Gamma_{13}^d +3\Gamma_{14}^d
+\Gamma_{23}^d+3\Gamma_{24}^d-6\Gamma_{34}^d)/12 \\
b_{33}&=& (-\Gamma_{12}^d-\Gamma_{13}^d+3\Gamma_{14}^d
-\Gamma_{23}^d+3\Gamma_{24}^d+3\Gamma_{34}^d)/6
\end{array} \end{equation} Since the reduced coefficient matrix $b=(b_{mm'})$ is a real, symmetric $3 \times 3$ matrix, \emph{necessary and sufficient} conditions for it to be positive semidefinite are~\cite{92Marcus}: \begin{equation} \label{eq:b4cond}
b_{11} \ge 0, \quad
b_{11} b_{22} \ge b_{12}^2, \quad \det(b) \ge 0. \end{equation} The first two of these conditions are equivalent to (\ref{eq:b3cond}). Thus, the pure dephasing rates for a four-level system must satisfy (\ref{eq:Gamma3cond}) and (\ref{eq:Gamma3cond2}), and the new constraint $\det(b)\ge 0$, or: \begin{equation} \label{eq:det4}
b_{11} b_{22} b_{33} + 2 b_{12} b_{13} b_{23}
\ge b_{11} b_{23}^2 + b_{22} b_{13}^2 + b_{33} b_{12}^2. \end{equation} Unfortunately, inserting (\ref{eq:b3}) and (\ref{eq:b4a}) into this inequality does not yield a compact closed form for the additional constraint.
We can obtain a more symmetric form of the constraints by choosing a slightly different operator basis: \begin{equation} \label{eq:V4}
\begin{array}{rcl}
\op{V}_{(1,1)}' &=& \frac{1}{2}({\bf e}_{11}-{\bf e}_{22}+{\bf e}_{33}-{\bf e}_{44}) \\
\op{V}_{(2,2)}' &=& \frac{1}{2}({\bf e}_{11}-{\bf e}_{22}-{\bf e}_{33}+{\bf e}_{44}) \\
\op{V}_{(3,3)}' &=& \frac{1}{2}({\bf e}_{11}+{\bf e}_{22}-{\bf e}_{33}-{\bf e}_{44}) \\
\op{V}_{(m,n)}' &=& {\bf e}_{mn}, \quad m,n=1,2,3,4, \quad m\neq n
\end{array} \end{equation} The $\op{V}_{(m,n)}'$ are trace-zero matrices that differ from the standard operator basis only in the choice of the diagonal generators and also form an orthonormal basis for the trace-zero Liouville operators of the system. However, expanding ${\cal L}_D$ with respect to this basis (\ref{eq:V4}) gives a more symmetric coefficient matrix $b'$ with non-zero entries: \begin{equation} \label{eq:b4b}
\begin{array}{rcl}
b_{11}' &=& \Gamma_{tot}^d - (\Gamma_{13}^d+\Gamma_{24}^d) \\
b_{22}' &=& \Gamma_{tot}^d - (\Gamma_{14}^d+\Gamma_{23}^d) \\
b_{33}' &=& \Gamma_{tot}^d - (\Gamma_{12}^d+\Gamma_{34}^d) \\
b_{12}' = b_{21}' &=& (\Gamma_{12}^d-\Gamma_{34}^d)/2 \\
b_{13}' = b_{31}' &=& (\Gamma_{14}^d-\Gamma_{23}^d)/2 \\
b_{23}' = b_{32}' &=& (\Gamma_{13}^d-\Gamma_{24}^d)/2
\end{array} \end{equation} where $\Gamma_{tot}^d=\frac{1}{2}\sum_{n=2}^4\sum_{m=1}^{n-1}\Gamma_{mn}^d$ is half the sum of all pure dephasing rates. Since the eigenvalues of the coefficient matrix are independent of the operator basis, $b'$ has the same eigenvalues as $b$. Furthermore, \emph{necessary} conditions for $b'$ to have non-negative eigenvalues are~\footnote{
To see that these conditions are necessary note that $b_{11}\ge 0$ and $b_{11} b_{22}\ge b_{12}^2$ implies $b_{22}\ge 0$. Inserting $b_{12}=b_{23} =b_{13}=0$ into (\ref{eq:det4}) yields $b_{11}b_{22}b_{33} \ge 0$ and hence $b_{33}\ge 0$; inserting $b_{12}=b_{23}=0$ yields $b_{22}(b_{11}b_{33}-b_{13}^2) \ge 0$, and $b_{12}=b_{13}=0$ yields $b_{11}(b_{22}b_{33}-b_{23}^2)\ge 0$. However, these conditions are not sufficient since setting $b_{11}=b_{22}=b_{33}=b_{12}=b_{23} =1$ and $b_{13}=-1$, for instance, satisfies both (\ref{eq:b4cond2a}) and (\ref{eq:b4cond2b}) but gives $\det(b)=-4$ and thus violates (\ref{eq:b4cond}).}:
\begin{eqnarray}
b_{11} \ge 0, \quad &
b_{22} \ge 0, & \quad
b_{33} \ge 0, \label{eq:b4cond2a}\\
b_{11} b_{22} \ge b_{12}^2, \quad &
b_{11} b_{33} \ge b_{13}^2, & \quad
b_{22} b_{33} \ge b_{23}^2. \label{eq:b4cond2b} \end{eqnarray} Inserting (\ref{eq:b4b}) into (\ref{eq:b4cond2a}) and (\ref{eq:b4cond2b}) yields \begin{equation} \label{eq:pos2a}
\begin{array}{rcl}
\Gamma_{13}^d+\Gamma_{24}^d &\le& \Gamma_{12}^d+\Gamma_{14}^d+\Gamma_{23}^d+\Gamma_{34}^d\\
\Gamma_{14}^d+\Gamma_{23}^d &\le& \Gamma_{12}^d+\Gamma_{13}^d+\Gamma_{24}^d+\Gamma_{34}^d\\
\Gamma_{12}^d+\Gamma_{34}^d &\le& \Gamma_{13}^d+\Gamma_{14}^d+\Gamma_{23}^d+\Gamma_{24}^d \end{array} \end{equation} as well as \begin{equation} \label{eq:pos2b} \begin{array}{rcl}
(\Gamma_{14}^d+\Gamma_{23}^d-\Gamma_{13}^d-\Gamma_{24}^d)^2 &\le& 4\Gamma_{12}^d\Gamma_{34}^d\\
(\Gamma_{12}^d+\Gamma_{34}^d-\Gamma_{13}^d-\Gamma_{24}^d)^2 &\le& 4\Gamma_{14}^d\Gamma_{23}^d\\
(\Gamma_{12}^d+\Gamma_{34}^d-\Gamma_{14}^d-\Gamma_{23}^d)^2 &\le& 4\Gamma_{13}^d\Gamma_{24}^d \end{array} \end{equation} which can be written simply as \begin{equation} \label{eq:pos2c}
|b-c| \le a \le b+c, \quad (b-c)^2 \le 4xy \end{equation} if $\{a,b,c\}$ is a permutation of the set $\{\Gamma_{12}^d+\Gamma_{34}^d, \Gamma_{13}^d+\Gamma_{24}^d,\Gamma_{14}^d+\Gamma_{23}^d\}$ and we let $x$ and $y$ be the summands of $a$, e.g., if $a=\Gamma_{12}^d+\Gamma_{34}^d$ then $x=\Gamma_{12}^d$ and $y=\Gamma_{34}^d$. Setting $a=x+y$ shows in particular that $0\le x+y$ and $0\le 4xy$ and thus $x,y\ge 0$, from which we can conclude especially that $\Gamma_{mn}^d\ge0$ for all $m,n$.
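For four levels the full (necessary and sufficient) test is positive semi-definiteness of the $3\times 3$ matrix $b'$ of Eq.~(\ref{eq:b4b}). A numerical sketch (illustrative helper, assuming NumPy; a tolerance absorbs round-off for boundary cases):

```python
import numpy as np

def four_level_dephasing_ok(g12, g13, g14, g23, g24, g34, tol=1e-9):
    """Build the symmetric coefficient matrix b' of Eq. (b4b) from the six
    pure dephasing rates and test positive semi-definiteness."""
    gtot = 0.5 * (g12 + g13 + g14 + g23 + g24 + g34)
    b = np.array([
        [gtot - (g13 + g24), (g12 - g34) / 2,    (g14 - g23) / 2],
        [(g12 - g34) / 2,    gtot - (g14 + g23), (g13 - g24) / 2],
        [(g14 - g23) / 2,    (g13 - g24) / 2,    gtot - (g12 + g34)],
    ])
    return np.linalg.eigvalsh(b).min() >= -tol
```

As a check, the distance-dependent rates $\Gamma_{nk}^d=(n-k)^2\Gamma_1^d$ discussed below sit exactly on the boundary of the allowed region, while increasing $\Gamma_{14}^d$ beyond $9\Gamma_1^d$ is rejected.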
In certain cases these constraints can be simplified. For instance, if the pure dephasing rates for transitions between adjacent states are equal, i.e., $\Gamma_{12}^d=\Gamma_{23}^d=\Gamma_{34}^d=\Gamma_1^d$, as one might expect, for example, for a system consisting of the basis states of a four-fold degenerate energy level, and we set $\Gamma_2^d=\frac{1}{2}(\Gamma_{13}^d+ \Gamma_{24}^d)$ then (\ref{eq:pos2a}) yields the following bounds on the decoherence rate $\Gamma_{14}^d$: \begin{equation}
\max \{2 \Gamma_2^d-3\Gamma_1^d,\Gamma_1^d-2 \Gamma_2^d,0 \}
\le \Gamma_{14}^d \le \Gamma_1^d + 2 \Gamma_2^d \end{equation} and combined with the second inequality of (\ref{eq:pos2b}) we obtain $0\le \Gamma_2^d \le 4\Gamma_1^d$, and thus $\Gamma_{14}^d \le 9\Gamma_1^d$ and $\Gamma_{13}^d,\Gamma_{24}^d \le 8\Gamma_1^d$. If we further assume that the pure dephasing rates for transitions between next-to-nearest neighbor states are equal as well, i.e., $\Gamma_{13}^d=\Gamma_{24}^d=\Gamma_2^d$, then we obtain $\Gamma_{13}^d=\Gamma_{24}^d\le 4\Gamma_1^d$. Hence, for a system whose pure dephasing rates depend only on the ``distance'' between the states, the former are bounded above by \begin{equation} \label{eq:pos4eq}
\Gamma_{nk}^d \le (n-k)^2 \Gamma_1^d, \end{equation} where $\Gamma_1^d=\Gamma_{n,n+1}^d$ is the dephasing rate for transitions between adjacent sites.
In this special case we can compare the constraints obtained from our \emph{necessary} conditions with the \emph{necessary and sufficient}
conditions (\ref{eq:b4cond}). Inserting $\Gamma_{nk}^d=\Gamma_{|n-k|}^d$ into (\ref{eq:b4b}) yields \begin{equation}
\begin{array}{rcl}
b_{11}' &=& \frac{1}{2}(3\Gamma_1^d -2\Gamma_2^d+\Gamma_3^d) \\
b_{22}' &=& \frac{1}{2}(\Gamma_1^d+2\Gamma_2^d-\Gamma_3^d) \\
b_{33}' &=& \frac{1}{2}(-\Gamma_1^d+2\Gamma_2^d+\Gamma_3^d) \\
b_{12}' = b_{21}' &=& 0 \\
b_{13}' = b_{31}' &=& \frac{1}{2} (\Gamma_3^d-\Gamma_1^d) \\
b_{23}' = b_{32}' &=& 0
\end{array} \end{equation} and the necessary and sufficient conditions (\ref{eq:b4cond}) become: \begin{equation} \begin{array}{c}
\Gamma_3^d \ge 2 \Gamma_2^d-3\Gamma_1^d \\
\Gamma_3^d \le 2 \Gamma_2^d+\Gamma_1^d \\
\Gamma_1^d\Gamma_3^d \ge (\Gamma_1^d-\Gamma_2^d)^2 \end{array} \end{equation} If $\Gamma_1^d=0$ then $\Gamma_2^d=\Gamma_3^d=0$. Otherwise, we can multiply the second inequality by $\Gamma_1^d$ and combine it with the third, which leads to $\Gamma_1^d(2\Gamma_2^d+\Gamma_1^d) \ge(\Gamma_1^d-\Gamma_2^d)^2$ and simplifies to $\Gamma_2^d\le 4\Gamma_1^d$. Inserting this result in the second inequality gives $\Gamma_3^d\le 9\Gamma_1^d$, i.e., the necessary conditions (\ref{eq:pos4eq}) are also sufficient.
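These implications can be double-checked numerically. The sketch below (plain Python, illustrative only) samples random rate triples and confirms that the three conditions above automatically force $\Gamma_2^d\le 4\Gamma_1^d$ and $\Gamma_3^d\le 9\Gamma_1^d$:

```python
import random

def satisfies_conditions(g1, g2, g3):
    """The necessary and sufficient conditions on (Gamma_1^d, Gamma_2^d,
    Gamma_3^d) for the distance-dependent four-level dephasing model."""
    return (g3 >= 2 * g2 - 3 * g1 and
            g3 <= 2 * g2 + g1 and
            g1 * g3 >= (g1 - g2) ** 2)

random.seed(0)
for _ in range(100_000):
    g1, g2, g3 = (random.uniform(0.0, 10.0) for _ in range(3))
    if satisfies_conditions(g1, g2, g3):
        # the derived upper bounds must hold automatically
        assert g2 <= 4 * g1 + 1e-9 and g3 <= 9 * g1 + 1e-9
print("bounds confirmed")
```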
\section{Examples and Discussion} \label{sec:examples}
We now apply the results of the previous sections to several types of generic three and four-level atoms. The objective in each case is to derive a proper relaxation superoperator, which is consistent with both experimental data and positivity constraints, and to discuss the implications of the latter constraints. Though the emphasis is on atomic systems, the results generally apply to molecular or solid state systems with similar level structures as well.
\subsection{Generic three-level atoms}
Let us first consider the general case of a generic three-level system subject to arbitrary population and phase relaxation processes. Let $\gamma_{mn}$ denote the observed rate of population relaxation from state $\ket{n}$ to state $\ket{m}$ for $m,n=1,2,3$ and $m \neq n$, and let $\Gamma_{12}$, $\Gamma_{23}$ and $\Gamma_{13}$ be the observed decoherence rates for the $1 \leftrightarrow 2$, $2\leftrightarrow 3$ and $1 \leftrightarrow 3$ transitions, respectively. Then the pure dephasing rates of the system according to (\ref{eq:Gamma}) are: \begin{eqnarray} \Gamma_{12}^d &=& \Gamma_{12} - (\gamma_{21}+\gamma_{31}+\gamma_{12}+\gamma_{32})/2 \nonumber \\ \Gamma_{13}^d &=& \Gamma_{13} - (\gamma_{21}+\gamma_{31}+\gamma_{13}+\gamma_{23})/2 \label{eq:dephasing3}\\ \Gamma_{23}^d &=& \Gamma_{23} - (\gamma_{12}+\gamma_{32}+\gamma_{13}+\gamma_{23})/2 \nonumber \end{eqnarray} These dephasing rates must be non-negative and satisfy the inequality constraints (\ref{eq:Gamma3cond2}) for the evolution of the system to be completely positive. Experimental data for the observed relaxation rates that fails to satisfy these conditions should be considered a reason for concern, and might suggest that the physical system under investigation cannot be adequately modelled as a three-level system, for instance.
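In practice one would compute the pure dephasing rates directly from measured data. A small helper along the lines of (\ref{eq:dephasing3}) might look as follows (a sketch; the dictionary convention and the ladder-type rate values are illustrative, not taken from the text):

```python
def pure_dephasing_3level(Gamma, gamma):
    """Pure dephasing rates from observed decoherence rates Gamma[(m, n)]
    and population relaxation rates gamma[(m, n)] (decay from |n> to |m>)."""
    g = gamma
    return {
        (1, 2): Gamma[(1, 2)] - (g[(2, 1)] + g[(3, 1)] + g[(1, 2)] + g[(3, 2)])/2,
        (1, 3): Gamma[(1, 3)] - (g[(2, 1)] + g[(3, 1)] + g[(1, 3)] + g[(2, 3)])/2,
        (2, 3): Gamma[(2, 3)] - (g[(1, 2)] + g[(3, 2)] + g[(1, 3)] + g[(2, 3)])/2,
    }

# Ladder configuration: only downward decay; rate values purely illustrative
gamma = {(2, 1): 0.0, (3, 1): 0.0, (3, 2): 0.0,
         (1, 2): 0.2, (1, 3): 0.1, (2, 3): 0.3}
Gamma = {(1, 2): 0.2, (1, 3): 0.3, (2, 3): 0.4}
Gd = pure_dephasing_3level(Gamma, gamma)
print(Gd)   # each transition is left with a pure dephasing rate of about 0.1
```

Negative entries in the returned dictionary would signal exactly the kind of inconsistent data discussed above.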
If the dephasing rates do satisfy the necessary constraints then a physically valid representation of the relaxation superoperator $L_D$ for the system in terms of the observed relaxation rates is \begin{equation} \label{eq:LD3diag}
L_D[\op{\rho}(t)] = \sum_{m\neq n} \gamma_{mn} L_{mn}^p[\op{\rho}(t)]
+\delta_1 L_1^d[\op{\rho}(t)]+\delta_2 L_2^d[\op{\rho}(t)], \end{equation} where the elementary relaxation terms are \begin{eqnarray}
2L_{mn}^p[\op{\rho}(t)]
&=& \comm{V_{(m,n)}\op{\rho}(t)}{\op{V}_{(m,n)}^\dagger}
+\comm{V_{(m,n)}}{\op{\rho}(t)\op{V}_{(m,n)}^\dagger} \nonumber \\
2L_m^d[\op{\rho}(t)]
&=& \comm{A_m \op{\rho}(t)}{A_m}+\comm{A_m}{\op{\rho}(t) A_m} \nonumber\\ \label{eq:Lmn} \end{eqnarray} with $V_{(m,n)}$ as in (\ref{eq:V3}) and the diagonal ``pure dephasing'' generators \begin{eqnarray}
A_1 = \frac{1}{\sqrt{2x(x-\Delta_1)}}
\left[\sqrt{3} \Delta_2 V_{(1,1)} + (x-\Delta_1) V_{(2,2)} \right] && \nonumber \\
A_2 = \frac{-1}{\sqrt{2x(x+\Delta_1)}}
\left[\sqrt{3} \Delta_2 V_{(1,1)} - (x+\Delta_1) V_{(2,2)} \right], && \nonumber \\ \label{eq:A3} \end{eqnarray} and the ``effective dephasing'' rates are \begin{equation}
\delta_{1/2} = (\Gamma_{12}^d+\Gamma_{13}^d+\Gamma_{23}^d \pm x)/3, \end{equation} where $x=\sqrt{\Delta_1^2+3\Delta_2^2}$, $\Delta_1 = 2\Gamma_{12}^d-\Gamma_{13}^d- \Gamma_{23}^d$ and $\Delta_2 = \Gamma_{13}^d-\Gamma_{23}^d$.
\begin{figure}
\caption{Three-state atoms: ladder system (left), $\Lambda$-system (top right) and V-system (bottom right) with arrows indicating population decay pathways.}
\label{fig:3level}
\end{figure}
\subsubsection{Ladder configurations} For a three-level atom in a ladder configuration where the main source of population relaxation is spontaneous emission from the excited states to a stable ground state, as shown in Fig.~\ref{fig:3level}, we simply set $\gamma_{21}=\gamma_{31}=\gamma_{32} =0$ in (\ref{eq:dephasing3}) and (\ref{eq:LD3diag}), respectively, to obtain the correct pure dephasing rates \begin{eqnarray} \Gamma_{12}^d &=& \Gamma_{12} - \gamma_{12}/2 \nonumber \\ \Gamma_{13}^d &=& \Gamma_{13} - (\gamma_{13}+\gamma_{23})/2 \label{eq:3ladder}\\ \Gamma_{23}^d &=& \Gamma_{23} - (\gamma_{12}+\gamma_{13}+\gamma_{23})/2 \nonumber \end{eqnarray} and the corresponding relaxation superoperator $L_D$. The decay-induced decoherence rates $\Gamma_{mn}^p$ in this case satisfy the interesting equality $\Gamma_{12}^p+ \Gamma_{13}^p=\Gamma_{23}^p$.
This is another way of seeing that, if $\gamma_{12}>0$ as in example 1 considered in section \ref{sec:constraints}, then $\Gamma_{23}$ must be at least $\gamma_{12}/2$ --- recall that we showed explicitly that the naive guess $\Gamma_{12}=\gamma_{12}/2$ and $\Gamma_{13}=\Gamma_{23}=0$ leads to non-physical states with negative eigenvalues.
$\Gamma_{13}=0$, on the other hand, is possible even if $\gamma_{12}>0$ provided that state $\ket{3}$ is stable. It is interesting to note, however, that $\Gamma_{13}=0$ always implies $\Gamma_{12}=\Gamma_{23}$. If there is no pure dephasing then this is obvious since $\Gamma_{12}^p=\Gamma_{23}^p$, but it is true even if there is pure dephasing, since $\Gamma_{13}=0$ implies $\Gamma_{13}^d=0$ and thus the inequality constraint (\ref{eq:Gamma3cond2}) for the pure dephasing rates implies $\Gamma_{12}^d =\Gamma_{23}^d$.
\subsubsection{$\Lambda$ systems} For a $\Lambda$ system for which only the decay rates $\gamma_{12}$ and $\gamma_{32}$ are non-zero as shown in Fig.~\ref{fig:3level}, the pure dephasing rates are:
\begin{eqnarray}
\Gamma_{12}^d &=& \Gamma_{12} - (\gamma_{12}+\gamma_{32})/2 \nonumber\\
\Gamma_{23}^d &=& \Gamma_{23} - (\gamma_{12}+\gamma_{32})/2 \\
\Gamma_{13}^d &=& \Gamma_{13}. \nonumber \end{eqnarray} Moreover, if the lifetime of the excited state is $\gamma^{-1}$ and the system is symmetric, i.e., $\gamma_{12}=\gamma_{32}=\gamma/2$ and $\Gamma_{12}=\Gamma_{23}$, as is often the case, then we have $\Gamma_{12}=\Gamma_{23}=\Gamma^d+\gamma/2$, where $\Gamma^d:=\Gamma_{12}^d=\Gamma_{23}^d$ is the common pure dephasing rate. If $\Gamma^d=0$ then $\Gamma_{13}^d=0$ due to (\ref{eq:Gamma3cond}). Otherwise, setting $\Gamma_{13}^d=\alpha\Gamma^d$ gives $\Delta_1=(1-\alpha)\Gamma^d$, $\Delta_2=-(1-\alpha)\Gamma^d$, $x=2(1-\alpha)\Gamma^d$, and thus the relaxation superoperator is simply \begin{eqnarray*}
L_D[\op{\rho}(t)]
&=& (\gamma/2) \left\{L_{12}^p[\op{\rho}(t)]+L_{32}^p[\op{\rho}(t)] \right\} \\
& & +[(4-\alpha)\Gamma^d/3] L_1^d[\op{\rho}(t)]
+ \alpha\Gamma^d L_2^d[\op{\rho}(t)] \end{eqnarray*} where the diagonal pure dephasing generators (\ref{eq:A3}) are \begin{eqnarray}
A_1=A_1^\dagger &=&(-\sqrt{3}V_{(1,1)}+V_{(2,2)})/2 \nonumber \\
A_2=A_2^\dagger &=&(V_{(1,1)}+\sqrt{3}V_{(2,2)})/2. \label{eq:A3lambda} \end{eqnarray} Positivity requires $\gamma\ge0$, $4-\alpha\ge 0$ and $\alpha\ge 0$, or $0\le\Gamma_{13}^d\le 4\Gamma^d$, in accordance with (\ref{eq:Gamma3cond2}) and previous observations.
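As a cross-check of the symmetric $\Lambda$-system coefficients, the general effective dephasing rates $\delta_{1/2}=(\Gamma_{12}^d+\Gamma_{13}^d+\Gamma_{23}^d\pm x)/3$ can be evaluated numerically. In the sketch below (plain Python) the $\Delta$'s are built from the pure dephasing rates, which we take to be the intended reading of their definition; the computation reproduces the pair $\{(4-\alpha)\Gamma^d/3,\ \alpha\Gamma^d\}$ appearing in the superoperator above:

```python
import math

def effective_dephasing(G12d, G13d, G23d):
    """delta_{1/2} = (G12d + G13d + G23d +- x)/3 with x = sqrt(D1^2 + 3*D2^2),
    taking the Deltas to be built from the pure dephasing rates (assumption)."""
    D1 = 2*G12d - G13d - G23d
    D2 = G13d - G23d
    x = math.sqrt(D1**2 + 3*D2**2)
    s = G12d + G13d + G23d
    return (s + x)/3, (s - x)/3

Gd = 0.7
for alpha in (0.0, 0.5, 1.0, 2.5, 4.0):
    deltas = effective_dephasing(Gd, alpha*Gd, Gd)   # symmetric Lambda system
    expected = {round((4 - alpha)*Gd/3, 12), round(alpha*Gd, 12)}
    assert {round(d, 12) for d in deltas} == expected
print("Lambda-system coefficients reproduced")
```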
\subsubsection{$V$ systems} Similarly, for a $V$ system for which only the decay rates $\gamma_{21}$ and $\gamma_{23}$ are non-zero as shown in Fig.~\ref{fig:3level}, the pure dephasing rates are simply: \begin{eqnarray} \Gamma_{12}^d &=& \Gamma_{12} - \gamma_{21}/2 \nonumber\\ \Gamma_{23}^d &=& \Gamma_{23} - \gamma_{23}/2 \\ \Gamma_{13}^d &=& \Gamma_{13} - (\gamma_{21}+\gamma_{23})/2 \nonumber \end{eqnarray} If the system is symmetric, i.e., both excited states have the same lifetime, $\gamma_{21}=\gamma_{23}=\gamma$, and $\Gamma_{12}=\Gamma_{23}$, then setting $\Gamma_{13}^d=\alpha\Gamma^d$ with $\Gamma^d:=\Gamma_{12}^d=\Gamma_{23}^d$ leads to the relaxation superoperator \begin{eqnarray*}
L_D[\op{\rho}(t)]
&=& \gamma \left\{L_{21}^p[\op{\rho}(t)] + L_{23}^p[\op{\rho}(t)] \right\} \\
& & +[(4-\alpha)\Gamma^d/3] L_1^d[\op{\rho}(t)]
+ \alpha\Gamma^d L_2^d[\op{\rho}(t)] \end{eqnarray*} with $L_{mn}[\op{\rho}(t)]$ and $L_1[\op{\rho}(t)]$ as defined in (\ref{eq:Lmn}) and the generators $A_1$, $A_2$ as in (\ref{eq:A3lambda}).
\subsubsection{Comparison of $\Lambda$ and $V$ systems} If the excited states have the same lifetime $\gamma^{-1}$ and the pure dephasing rate $\Gamma^d$ for transitions between the upper and lower states is the same for the $\Lambda$ and $V$ configuration, then we have $\Gamma_{12}=\Gamma_{23}= \frac{1}{2}\gamma+\Gamma^d$ in both cases, i.e., the overall decoherence rate for transitions between ground and excited states is the same for both configurations. The main difference, as expected, is the decoherence rate of the $1\leftrightarrow 3$ transition, which is $\Gamma_{13}=\Gamma_{13}^d$ for the $\Lambda$-system, and $\Gamma_{13}=\gamma+\Gamma_{13}^d$ for the $V$-system.
Thus, if pure dephasing of the $1 \leftrightarrow 3$ transition is negligible then it will remain decoherence free for the $\Lambda$ system but not for the $V$ system. However, if pure dephasing is taken into account then the transition between the degenerate ground states of the $\Lambda$ system may not be decoherence free, and comparison of the decoherence rates for both systems, $\Gamma_{13}^\Lambda= (\Gamma_{13}^d)^\Lambda$ and $\Gamma_{13}^V = \gamma+(\Gamma_{13}^d)^V$, shows that $\Gamma_{13}^\Lambda$ could theoretically even be greater than $\Gamma_{13}^V$ if the pure dephasing rate of the transition between the degenerate ground states was greater than the decay rate $\gamma$ plus the pure dephasing rate $\Gamma_{13}^d$ for the $V$ system.
\subsection{Generic four-level atoms}
Again, we will first consider the general case of a generic four-level system subject to arbitrary population and phase relaxation processes. Let $\gamma_{mn}$ denote the observed rate of population relaxation from state $\ket{n}$ to state $\ket{m}$ for $m,n=1,2,3,4$ and $m \neq n$, and $\Gamma_{mn}$ be the observed decoherence rates for the $m \leftrightarrow n$ transitions, as usual. Then the pure dephasing rates of the system according to (\ref{eq:Gamma}) are: \begin{eqnarray} \Gamma_{12}^d &=& \Gamma_{12} - (\gamma_{21}+\gamma_{31}+\gamma_{41}
+\gamma_{12}+\gamma_{32}+\gamma_{42})/2 \nonumber \\ \Gamma_{13}^d &=& \Gamma_{13} - (\gamma_{21}+\gamma_{31}+\gamma_{41}
+\gamma_{13}+\gamma_{23}+\gamma_{43})/2 \nonumber\\ \Gamma_{14}^d &=& \Gamma_{14} - (\gamma_{21}+\gamma_{31}+\gamma_{41}
+\gamma_{14}+\gamma_{24}+\gamma_{34})/2 \nonumber\\ \Gamma_{23}^d &=& \Gamma_{23} - (\gamma_{12}+\gamma_{32}+\gamma_{42}
+\gamma_{13}+\gamma_{23}+\gamma_{43})/2 \nonumber\\ \Gamma_{24}^d &=& \Gamma_{24} - (\gamma_{12}+\gamma_{32}+\gamma_{42}
+\gamma_{14}+\gamma_{24}+\gamma_{34})/2 \nonumber\\ \Gamma_{34}^d &=& \Gamma_{34} - (\gamma_{13}+\gamma_{23}+\gamma_{43}
+\gamma_{14}+\gamma_{24}+\gamma_{34})/2 \nonumber\\ \label{eq:dephasing4} \end{eqnarray} These dephasing rates must satisfy the necessary and sufficient conditions (\ref{eq:b4cond}) for complete positivity, and in particular the inequality constraints (\ref{eq:pos2a}) and (\ref{eq:pos2b}). If the data for a given system does not appear to satisfy these conditions then (unless the data is unreliable) it should be assumed that the system cannot be \emph{properly} modelled as a four-level system subject to population and phase relaxation. As mentioned in the introduction, such models are quite common and may still be adequate for some purposes but can lead to non-physical results such as states with negative eigenvalues, etc.
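The pattern in the expressions above, namely that each pure dephasing rate equals the observed decoherence rate minus half the total decay rate out of the two states involved, admits a compact implementation. The following sketch (illustrative conventions; the Clebsch-Gordan rate values correspond to the degenerate-level example discussed below) confirms that the chosen observed rates are purely decay-induced:

```python
def pure_dephasing(Gamma, gamma, N=4):
    """Pure dephasing rates Gamma^d_{mn} = Gamma_{mn} - (out_m + out_n)/2,
    where out_k is the total decay rate out of state k and gamma[(m, n)]
    is the decay rate from state n to state m (missing entries are zero)."""
    out = {k: sum(gamma.get((m, k), 0.0) for m in range(1, N + 1) if m != k)
           for k in range(1, N + 1)}
    return {(m, n): Gamma[(m, n)] - (out[m] + out[n]) / 2
            for (m, n) in Gamma}

# Doubly degenerate levels with Clebsch-Gordan branching (total rate gamma = 1):
gamma = {(1, 2): 1/3, (3, 2): 2/3, (1, 4): 2/3, (3, 4): 1/3}
Gamma = {(1, 2): 0.5, (2, 3): 0.5, (1, 4): 0.5, (3, 4): 0.5,
         (2, 4): 1.0, (1, 3): 0.0}
Gd = pure_dephasing(Gamma, gamma)
print(all(abs(v) < 1e-12 for v in Gd.values()))   # True: purely decay-induced
```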
If the dephasing rates do satisfy the necessary constraints then a physically valid representation of the relaxation superoperator $L_D$ for the system in terms of the observed relaxation rates is \begin{equation} \label{eq:LD4}
L_D[\op{\rho}(t)] = \sum_{m\neq n} \gamma_{mn} L_{mn}^p[\op{\rho}(t)]
+\sum_{m,n=1}^3 b_{mn} L_{mn}^d[\op{\rho}(t)] \end{equation} where the elementary relaxation terms are \begin{eqnarray}
2L_{mn}^p[\op{\rho}(t)]
&=& \comm{V_{(m,n)}\op{\rho}(t)}{\op{V}_{(m,n)}^\dagger}
+\comm{V_{(m,n)}}{\op{\rho}(t)\op{V}_{(m,n)}^\dagger} \nonumber \\
2L_{mn}^d[\op{\rho}(t)]
&=& \comm{V_{(m,m)}'\op{\rho}(t)}{\op{V}_{(n,n)}'}
+\comm{V_{(m,m)}'}{\op{\rho}(t)\op{V}_{(n,n)}'} \nonumber \\ \end{eqnarray} with $V_{(m,n)}'$ as defined in (\ref{eq:V4}), $V_{(m,n)}=V_{(m,n)}'$ for $m\neq n$, and coefficients $b_{mn}$ as in (\ref{eq:b4b}). Note that --- unlike in the three-level case --- we chose \emph{not} to diagonalize the dephasing superoperator since the general expressions are quite complicated and do not confer a significant computational advantage.
\subsubsection{Transition between doubly-degenerate levels}
The results of the last paragraph apply, for instance, to a system consisting of two doubly degenerate energy levels subject to population relaxation as shown in Fig.~\ref{fig:4level} and general phase relaxation. The decay-induced decoherence rates according to (\ref{eq:Gammap}) are: \begin{equation} \label{eq:4atomic1}
\begin{array}{rcl}
\Gamma_{12}^p=\Gamma_{23}^p &=& (\gamma_{12}+\gamma_{32})/2 \\
\Gamma_{14}^p=\Gamma_{34}^p &=& (\gamma_{14}+\gamma_{34})/2 \\
\Gamma_{24}^p &=& (\gamma_{12}+\gamma_{32}+\gamma_{14}+\gamma_{34})/2\\
\Gamma_{13}^p &=& 0.
\end{array} \end{equation} Thus, if all decoherence is the result of population relaxation processes then the transition between the ground states remains decoherence free. However, if there is pure dephasing then (\ref{eq:pos2c}) implies $(b-c)^2\le 4xy$ for $a= x+y$ with $x=\Gamma_{13}^d$, $y=\Gamma_{24}^d$, $b=\Gamma_{12}^d+\Gamma_{34}^d$ and $c=\Gamma_{14}^d+\Gamma_{23}^d$. Hence, $x=\Gamma_{13}^d=0$ is possible if and only if $b=c$. Since $\Gamma_{14}^p+\Gamma_{23}^p=\Gamma_{12}^p+\Gamma_{34}^p$ according to (\ref{eq:4atomic1}), this is equivalent to $\Gamma_{12}+\Gamma_{34} =\Gamma_{14}+\Gamma_{23}$. Conversely, if $\Gamma_{12}+\Gamma_{34}\neq\Gamma_{14}
+\Gamma_{23}$ then $b\neq c$, and we have $0<|b-c|<x+y$ and $0<(b-c)^2\le 4xy$, which implies $x>0$, i.e., $\Gamma_{13}^d>0$.
Now suppose both excited states have the same $T_1$-relaxation time, i.e., the same spontaneous emission rate $\gamma$, and the relative probabilities for the possible decay pathways are given by the absolute value of the Clebsch-Gordan coefficients of the transition. Then $\gamma_{12}=\gamma_{34}=\gamma/3$ and $\gamma_{32}=\gamma_{14}=2\gamma/3$, and the decay-induced decoherence rates are $\Gamma_{13}^p=0$, $\Gamma_{12}^p=\Gamma_{23}^p=\Gamma_{14}^p=\Gamma_{34}^p =\gamma/2$ and $\Gamma_{24}^p=\gamma$, as one would reasonably expect.
Furthermore, if the dephasing rates satisfy $\Gamma_{12}^d=\Gamma_{34}^d=: \Gamma_1^d$ and $\Gamma_{14}^d=\Gamma_{23}^d=:\Gamma_2^d$, as one might expect due to symmetry for a typical system, then (\ref{eq:pos2c}) implies especially $(\Gamma_1^d-\Gamma_2^d)^2 \le 4\Gamma_{13}^d\Gamma_{24}^d$. Thus, if we have $\Gamma_{13}^d=0$ then we must also have $\Gamma_1^d=\Gamma_2^d$, and conversely, if $\Gamma_1^d\neq\Gamma_2^d$ then $\Gamma_{13}^d>0$, i.e., the transition between the two ground states can remain decoherence free only if $\Gamma_1^d =\Gamma_2^d$. This observation may seem trivial but it might be a convenient way of ascertaining if the transition between the ground states is decoherence free or not by simply measuring the decoherence of the transitions between the ground and excited states, and the decay rates of the excited states.
Moreover, if $\Gamma_{13}=0$, then we must have $\Gamma_{12}^d=\Gamma_{14}^d= \Gamma_{23}^d=\Gamma_{34}^d=:\Gamma^d$ according to our previous observations. If $\Gamma^d=0$ as well then $\Gamma_{24}^d=0$ due to (\ref{eq:pos2c}) and we have $L_D^d[\op{\rho}(t)]=0$, i.e., no dephasing takes place. Otherwise, setting $\Gamma_{24}^d=\alpha\Gamma^d$ leads to the simplified relaxation superoperator \begin{eqnarray}
L_D[\op{\rho}(t)]
&=& (\gamma/3) \left\{ L_{12}^p[\op{\rho}(t)] + L_{34}^p[\op{\rho}(t)]\right\} \nonumber\\
& & +(2\gamma/3)\left\{L_{32}^p[\op{\rho}(t)] + L_{14}^p[\op{\rho}(t)] \right\}\nonumber \\
& & +(4-\alpha)\Gamma^d \left(\comm{A_1 \op{\rho}(t)}{A_1}
+\comm{A_1}{\op{\rho}(t) A_1} \right)/4 \nonumber\\
& & +\alpha\Gamma^d \left(\comm{A_2 \op{\rho}(t)}{A_2}
+\comm{A_2}{\op{\rho}(t) A_2} \right)/2 \nonumber\\ \end{eqnarray} with $A_1=V_{(1,1)}'$, $A_2=(-V_{(2,2)}'+V_{(3,3)}')/\sqrt{2}$ and $L_{mn}^p [\op{\rho}(t)]$ as defined in (\ref{eq:Lmn}) and $V_{(m,n)}'$ as defined in (\ref{eq:V4}). Again positivity requires $0 \le \alpha \le 4$, i.e., $0 \le \Gamma_{24}^d\le 4\Gamma^d$, consistent with our previous observations, and thus provides an upper bound of $4\Gamma^d+\gamma$ on the total decoherence of the transition between the upper levels.
\begin{figure}
\caption{Four-state atoms: transition between doubly degenerate energy levels (top), tripod system (bottom left) and inverted tripod (bottom right) with arrows indicating population decay pathways.}
\label{fig:4level}
\end{figure}
\subsubsection{Tripod and Inverted Tripod Systems}
Another common type of four-level system is a tripod system, i.e., a transition between a triply degenerate ground state and a non-degenerate excited state $\ket{4}$. With population relaxation due to spontaneous emission as indicated in Fig.~\ref{fig:4level}, the decay-induced decoherence rates according to (\ref{eq:Gammap}) are: \begin{equation} \label{eq:4atomic2}
\begin{array}{rcl}
\Gamma_{12}^p = \Gamma_{13}^p = \Gamma_{23}^p &=& 0 \\
\Gamma_{14}^p = \Gamma_{24}^p = \Gamma_{34}^p
&=& (\gamma_{14}+\gamma_{24}+\gamma_{34})/2
\end{array} \end{equation} Assuming that the lifetime of the excited state is $\gamma^{-1}$ and all decay pathways are equally probable, we obtain $\gamma_{14}=\gamma_{24}=\gamma_{34}= \gamma/3$ and $\Gamma_{14}^p=\Gamma_{24}^p=\Gamma_{34}^p=\gamma/2$, as well as \begin{equation}
L_D^p[\op{\rho}(t)]
= (\gamma/3) (L_{14}^p[\op{\rho}(t)]+L_{24}^p[\op{\rho}(t)]+L_{34}^p[\op{\rho}(t)]) \end{equation} with $L_{mn}^p[\op{\rho}(t)]$ as defined in (\ref{eq:Lmn}).
For comparison, the decay-induced decoherence rates (\ref{eq:Gammap}) for an inverted tripod, i.e., a transition between a non-degenerate ground state $\ket{4}$ and a three-fold degenerate excited state with population relaxation as indicated in Fig.~\ref{fig:4level} are: \begin{equation} \label{eq:4atomic3}
\begin{array}{rcl}
\Gamma_{12}^p &=& \frac{1}{2}(\gamma_{41}+\gamma_{42}) \\
\Gamma_{13}^p &=& \frac{1}{2}(\gamma_{41}+\gamma_{43}) \\
\Gamma_{23}^p &=& \frac{1}{2}(\gamma_{42}+\gamma_{43}) \\
\Gamma_{14}^p &=& \frac{1}{2} \gamma_{41} \\
\Gamma_{24}^p &=& \frac{1}{2} \gamma_{42} \\
\Gamma_{34}^p &=& \frac{1}{2} \gamma_{43}.
\end{array} \end{equation} Assuming the lifetime of the excited states is $\gamma^{-1}$, we thus obtain $\gamma_{41}=\gamma_{42}=\gamma_{43}=\gamma$ and $\Gamma_{14}^p=\Gamma_{24}^p =\Gamma_{34}^p=\gamma/2$, $\Gamma_{12}^p=\Gamma_{23}^p=\Gamma_{13}^p=\gamma$ as well as \begin{equation}
L_D^p[\op{\rho}(t)]
= \gamma (L_{41}^p[\op{\rho}(t)]+L_{42}^p[\op{\rho}(t)]+L_{43}^p[\op{\rho}(t)]) \end{equation} with $L_{mn}^p[\op{\rho}(t)]$ as defined in (\ref{eq:Lmn}).
Interestingly, the decay-induced decoherence rates for transitions between the upper and lower sublevels are the same for both systems. In absence of pure dephasing, the only difference between the two cases is that the degenerate subspace remains decoherence-free for the tripod system, while the decoherence rates for the inverted tripod are equal to the spontaneous emission rate $\gamma$ for the upper levels. This basic situation does not change very much if we add pure dephasing since the tripod and inverted tripod system are equivalent as far as pure dephasing is concerned. However, there will be additional constraints, and we shall in particular study the case $\Gamma_{14}^d=\Gamma_{24}^d=\Gamma_{34}^d =\Gamma^d$ and $\Gamma_{12}^d=\Gamma_{23}^d=\Gamma_2^d$, which one might expect to occur in many systems for reasons of symmetry.
If $\Gamma^d=0$ then the necessary conditions (\ref{eq:pos2b}) can only be satisfied for $\Gamma_2^d=\Gamma_{13}^d=0$, i.e., if there is no pure dephasing for transitions between the upper and lower sublevels then all pure dephasing rates are zero and all decoherence in the system must be due to population relaxation. Hence, $L_D^d[\op{\rho}(t)]=0$. Otherwise, set $\Gamma_2^d= \alpha\Gamma^d$ and $\Gamma_{13}^d=\beta\Gamma^d$. Inserting these values into the coefficient matrix $b'$ [Eq.~(\ref{eq:b4b})] allows us to directly derive necessary and sufficient conditions for positivity of this matrix by computing its eigenvalues. \[
\det(b'/\Gamma^d-\lambda I) = (\beta-\lambda)(\lambda^2-p\lambda+q) \] where $p=3+2\alpha-\beta$ and $q=(4\alpha-\alpha^2-\beta)/2$, shows immediately that the eigenvalues of $b'$ are $\lambda_1=\Gamma_{13}^d$ and $\lambda_{2/3}= \Gamma^d(p\pm\sqrt{p^2-4q})/2$. Hence, necessary and sufficient conditions for positive semi-definiteness of $b'$ are $\Gamma_{13}^d\ge 0$, $p\ge 0$ and $q\ge 0$, and the dephasing superoperator can be written as: \begin{eqnarray}
L_D^d[\op{\rho}(t)]
&=& \Gamma_{13}^d \left(\comm{A_1 \op{\rho}(t)}{A_1}
+\comm{A_1}{\op{\rho}(t) A_1} \right)/2 \nonumber\\
& & +\lambda_2 \left(\comm{A_2 \op{\rho}(t)}{A_2}
+\comm{A_2}{\op{\rho}(t) A_2} \right)/2 \nonumber\\
& & +\lambda_3 \left(\comm{A_3 \op{\rho}(t)}{A_3}
+\comm{A_3}{\op{\rho}(t) A_3} \right)/2 \qquad \end{eqnarray} where we have \begin{eqnarray*}
A_1 &=& (V_{(2,2)}'-V_{(3,3)}')/\sqrt{2} \\
A_2 &=& [4\tilde{\alpha}V_{(1,1)}'+(\tilde{\beta}+x)V_{(2,2)}'
-(\tilde{\beta}+x)V_{(3,3)}']/(2x) \\
A_3 &=& [4\tilde{\alpha}V_{(1,1)}'+(\tilde{\beta}-x)V_{(2,2)}'
-(\tilde{\beta}-x)V_{(3,3)}']/(2x) \end{eqnarray*} with $V_{(m,n)}'$ as defined in (\ref{eq:V4}), and $\tilde{\alpha}=\alpha-1$, $\tilde{\beta}=\beta-1$ and $x=\sqrt{8(\alpha-1)^2+(\beta-1)^2}$. Furthermore, $p\ge 0$ is equivalent to $\beta\le3+2\alpha$ and $q\ge 0$ to $\beta\le\alpha(4-\alpha)$; together with $\beta\ge 0$ the latter implies $0\le\alpha\le 4$ and $\beta\le 4$.
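The equivalence between the sign conditions on $p$ and $q$ and non-negativity of the corresponding eigenvalues can be spot-checked numerically (a standalone sketch; the sampled $(\alpha,\beta)$ pairs are arbitrary and need not come from a physical $b'$):

```python
import math, random

def pq(alpha, beta):
    return 3 + 2*alpha - beta, (4*alpha - alpha**2 - beta)/2

def roots_nonnegative(alpha, beta):
    """True iff both roots of lambda^2 - p*lambda + q are >= 0 (up to
    rounding).  One can show p^2 - 4q > 0 for all real alpha, beta,
    so the roots are always real here."""
    p, q = pq(alpha, beta)
    smaller = (p - math.sqrt(p*p - 4*q))/2
    return smaller >= -1e-12

random.seed(1)
for _ in range(50_000):
    alpha, beta = random.uniform(-1, 6), random.uniform(-1, 6)
    p, q = pq(alpha, beta)
    assert roots_nonnegative(alpha, beta) == (p >= -1e-12 and q >= -1e-12)
print("p, q sign conditions confirmed")
```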
\section{Conclusions} \label{sec:conclusion}
Starting with very basic assumptions we defined a simple yet general relaxation superoperator, which should be adequate to describe a wide variety of open systems not too strongly coupled to their environment, solely in terms of experimentally observable quantities such as the population relaxation and decoherence rates of the system, without imposing any restrictions on the types of population and phase relaxation that can occur.
The advantage of a relaxation superoperator thus defined is that it can describe the observed dissipative dynamics of the system in principle as accurately as we can measure the relaxation rates. Unfortunately, however, there are several problems with this approach. One is that it can lead to relaxation superoperators that do not preserve complete or even simple positivity, as we have explicitly shown for several examples. Since any violation of positivity effectively means negative or even non-real probabilities, this is a serious problem.
To avoid such problems one must impose constraints on the relaxation rates. We have analyzed the nature of these basic constraints by expressing our relaxation superoperator in the standard form for dissipative generators of quantum dynamical semi-groups derived by Gorini, Kossakowski and Sudarshan. We have also shown that it is possible to decompose our generic relaxation superoperator into two distinct parts associated with population relaxation and pure dephasing processes, respectively, and that the coefficients of the Kossakowski generators for the population relaxation part can be identified (usually uniquely) with the observed population relaxation rates, the only restriction being the obvious one that the decay rates be non-negative. Most importantly, the expressions we obtain for the decoherence rates induced by population relaxation agree with similar expressions found in the literature.
However, population relaxation is usually not the only source of decoherence. To account for other sources of decoherence, we have introduced pure dephasing rates for each transition by subtracting the decoherence induced by population relaxation processes from the observed overall decoherence rates. These pure dephasing rates define the pure-phase-relaxation superoperator, and we express the coefficients of the Kossakowski generators for this part of the relaxation superoperator explicitly in terms of these pure dephasing rates for three and four-level systems. These expressions are more complicated than the general expressions for the coefficients of the population relaxation superoperator, and the requirement of complete positivity results in nontrivial constraints on the dephasing rates, which we have analyzed specifically for three- and four-level systems, although the same type of analysis can be performed for systems of higher dimension.
Finally, we have applied these general results to study their concrete implications for several simple but commonly used three- and four-level model systems such as $\Lambda$ and $V$ systems, tripod and inverted tripod systems and transitions between doubly degenerate energy levels. In each case we have attempted to make \emph{concrete} predictions about inequality constraints and correlations of the decoherence rates demanded by the requirement of complete positivity, which are \emph{experimentally verifiable}. Such experimental tests of the constraints could be useful in various ways. Confirmation of the correlations would vindicate the semi-group description of the dynamics. On the other hand, violation of the constraints required by complete positivity would suggest that our model of the system is not really adequate to capture its real dynamics although it may still be useful for certain purposes.
\section{Epilogue: Positive Matrices} \label{sec:epilogue}
There has been a great deal of mathematical work on the properties of positive matrices, and papers such as ``Some inequalities for Positive Definite Symmetric Matrices'' [SIAM J. Appl. Math. 19, 679--681 (1970)] by F.~T.~Man, ``The Space of Positive Definite Matrices and Gromov's invariant'' [Trans. Am. Math. Soc. 274, 239 (1982)] by R.~P.~Savage, or ``Positive Definite Matrices and Catalan numbers'' [Proc. Am. Math. Soc. 79, 177--181 (1980)] by F.~T.~Leighton and M.~Newman appear relevant to our problem at first glance. However, despite the connection to this work suggested by their titles, they really address rather different problems.
F.~T.~Man, for example, studies the problem of comparing positive definite symmetric matrices, in particular answering the question under what conditions $P>Q$, i.e. $P-Q$ positive, implies $P^2>Q^2$ for positive definite symmetric matrices $P$ and $Q$. Unfortunately, these results are not applicable to our problem. However, they may be relevant for issues such as the comparison of density matrices.
R.~P.~Savage considers the space of $n \times n$ positive definite matrices $X$ with $\det(X)=1$ under isometries $X \rightarrow A X A^T$ where $A \in SL(n,{\mathbb R})$ and shows that it has a collection of simplices preserved by the isometries and that the volume of the top-dimensional ones has a uniform upper bound. One could perhaps say that the density matrices $\op{\rho}$ of interest to us are positive, and that the dynamical Lie group of the system provides isometries of a sort, but our density matrices are positive matrices of trace one, which actually rules out $\det\op{\rho}=1$, since positivity requires the eigenvalues to be non-negative and $\mathop{\rm Tr}(\op{\rho})=1$ requires that the sum of these eigenvalues be one, which implies $\det(\op{\rho})<1$ unless $n=1$ and $\rho=1$.
Similarly, Leighton and Newman show that the number of $n \times n$ integral, tridiagonal matrices that are unimodular, positive definite and whose sub- and super-diagonal elements are all one, is the Catalan number ${2n \choose n}/(n+1)$, which is an interesting mathematical result but not relevant to our problem since our density matrices, although positive definite, are not usually tridiagonal, and even if they were, the elements on the sub and super diagonal (the coherences) would have to be less than one for normalized density matrices.
We thank the referee for bringing our attention to the rich mathematical literature on the subject of positive matrices.
\acknowledgments S.G.S thanks A.~Beige, D.~K.~L.~Oi and A.~K.~Ekert (Univ.\ of Cambridge) and A.~D.~Greentree (Univ.\ of Melbourne) for helpful discussions and suggestions, and acknowledges financial support from the Cambridge-MIT Institute, Fujitsu and IST grants RESQ (IST-2001-37559) and TOPQIP (IST-2001-39215).
\end{document}
\begin{document}
\sloppy \allowdisplaybreaks \title{New Cardinality Estimation Methods for HyperLogLog Sketches}
\author{Otmar Ertl} \affiliation{
\city{Linz}
\country{Austria} } \email{otmar.ertl@gmail.com}
\begin{abstract} This work presents new cardinality estimation methods for data sets recorded by HyperLogLog sketches. A simple derivation of the original estimator is presented that also gives insight into how to correct its deficiencies. The result is an improved estimator that is unbiased over the full cardinality range, is easily computable, and does not rely on empirically determined data as previous approaches do. Based on the maximum likelihood principle, a second unbiased estimation method is presented, which can also be extended to estimate cardinalities of unions, intersections, or relative complements of two sets that are both represented as HyperLogLog sketches. Experimental results show that this approach is more precise than the conventional technique using the inclusion-exclusion principle. \end{abstract}
\maketitle
\begin{acronym} \acro{HLL}{HyperLogLog} \acro{ML}{maximum likelihood} \acro{BFGS}{Broyden-Fletcher-Goldfarb-Shanno} \acro{RMSE}{root-mean-square error} \end{acronym}
\section{Introduction} Counting the number of distinct elements in a data stream or a large dataset is a common problem in big data processing. In principle, finding the number of distinct elements $\symCardinality$ with a maximum relative error $\symError$ in a data stream requires $\symBigO(\symCardinality)$ space \cite{Alon1999}. However, probabilistic algorithms that achieve the requested precision only with high probability are able to reduce the space requirements drastically. Many different probabilistic algorithms have been developed over the past two decades \cite{Metwally2008,Ting2014} until a theoretically optimal algorithm was finally found \cite{Kane2010}. Although this algorithm achieves the optimal space complexity of $\symBigO(\symError^{-2}+\log \symCardinality)$ \cite{Alon1999, Indyk2003}, it is not very efficient in practice \cite{Ting2014}.
More practicable and already widely used in many applications is the \ac{HLL} algorithm \cite{Flajolet2007} with a near-optimal space complexity of $\symBigO(\symError^{-2} \log\log\symCardinality +\log \symCardinality)$. A big advantage of the \ac{HLL} algorithm is that the corresponding sketches can be easily merged, which is a requirement for distributed environments. Unfortunately, the originally proposed estimation method is unable to guarantee the same accuracy over the full cardinality range. Therefore, a couple of variants have been developed that correct the original estimate by empirical means \cite{Heule2013,Rhodes2015,Sanfilippo2014}.
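To make the merge property concrete, here is a minimal register-level sketch (illustrative Python, not the implementation discussed in this paper; the SHA-256-derived 64-bit hash and the precision $p=12$ are arbitrary choices). Merging two sketches is an element-wise maximum of their registers:

```python
import hashlib

P = 12                      # precision: m = 2**P registers
M = 1 << P

def new_sketch():
    return [0] * M

def add(registers, item):
    # 64-bit hash: the first P bits select a register; the number of
    # leading zeros of the remaining bits, plus one, is the candidate value
    h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")
    idx = h >> (64 - P)
    rest = h & ((1 << (64 - P)) - 1)
    rank = (64 - P) - rest.bit_length() + 1
    registers[idx] = max(registers[idx], rank)

def merge(r1, r2):
    return [max(a, b) for a, b in zip(r1, r2)]

s1, s2 = new_sketch(), new_sketch()
for i in range(1000):
    add(s1, f"a{i}")
    add(s2, f"b{i}")
union = merge(s1, s2)
```

The merged sketch is identical to a sketch built from the concatenation of both streams, which is what makes the data structure attractive in distributed settings.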
An estimator for \ac{HLL} sketches that does not rely on empirical data and that significantly improves the estimation error is the historic inverse probability estimator \cite{Ting2014, Cohen2015}. It trades memory efficiency for mergeability: the estimator needs to be continuously updated while inserting elements, and the estimate depends on the insertion order. Moreover, the estimator cannot be used further after merging two sketches, which limits its application to single data streams. If this restriction is acceptable, the self-learning bitmap \cite{Chen2011}, which provides a similar trade-off and also needs less space than the original \ac{HLL} method, could be used alternatively.
Sometimes not only the number of distinct elements but also a sample of them is needed in order to allow later filtering according to some predicate and estimating the cardinalities of corresponding subsets. In this case the k-minimum values algorithm \cite{Beyer2007, Cohen2007} is the method of choice. It needs more space than the \ac{HLL} algorithm, but also allows set manipulations like construction of intersections, relative complements, or unions \cite{Dasgupta2015}. The latter operation is the only one that is natively supported by \ac{HLL} sketches. A sketch that represents the set operation result is not always needed. One approach to estimate the corresponding result cardinality directly is based on the inclusion-exclusion principle, which however can lead to large errors, especially if the result is small compared to the input set sizes \cite{Dasgupta2015}. Therefore, it was proposed to combine \ac{HLL} sketches with minwise hashing \cite{Pascoe2013, Cohen2016}, which improves the estimation error, albeit at the expense of significantly more space consumption. It was recently pointed out, without special focus on \ac{HLL} sketches, that the application of the \ac{ML} method to the joint likelihood function of two probabilistic data structures leads to better cardinality estimates for intersections \cite{Ting2016}.
\section{HyperLogLog Data Structure} \label{sec:hyperloglog_data_structure}
\begin{algorithm}[t] \caption{Insertion of a data element $\symDataItem$ into a \ac{HLL} sketch. All registers $\boldsymbol{\symRegValVariate} = (\symRegValVariate_1,\ldots,\symRegValVariate_\symNumReg)$ start from zero.} \label{alg:insert} \begin{algorithmic} \Procedure {InsertElement}{\symDataItem} \State $\langle \symBitRepA_1, \ldots, \symBitRepA_\symPrecision,\symBitRepB_1,\ldots,\symBitRepB_\symRegRange\rangle_2 \gets$ $(\symPrecision + \symRegRange)$-bit hash value of $\symDataItem$ \State $\symIndexI \gets 1+ \langle \symBitRepA_1, \ldots, \symBitRepA_\symPrecision\rangle_2$ \State $\symRegVal \gets \min(\{\symS\mid \symBitRepB_\symS = 1\}\cup {\{\symRegRange+1\}} )$ \If{$\symRegVal>\symRegValVariate_\symIndexI$} \State $\symRegValVariate_\symIndexI \gets\symRegVal$ \EndIf \EndProcedure \end{algorithmic} \end{algorithm}
The \ac{HLL} algorithm collects information about incoming elements in a very compact sketch data structure that finally allows estimating the number of distinct elements. The data structure consists of $\symNumReg = 2^\symPrecision$ registers. All registers start with an initial value of zero. The insertion of a data element into a \ac{HLL} data structure requires the calculation of a $(\symPrecision+\symRegRange)$-bit hash value. The leading $\symPrecision$ bits of the hash value are used to select one of the $2^\symPrecision$ registers. Among the following $\symRegRange$ bits, the position of the first 1-bit is determined, which is a value in the range $[1,\symRegRange+1]$. The value $\symRegRange+1$ is used if all $\symRegRange$ bits are zeros. If the position of the first 1-bit exceeds the current value of the selected register, the register value is replaced. The complete update procedure is shown in \cref{alg:insert}.
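The update procedure of \cref{alg:insert} can be sketched in Python as follows. This is a minimal illustration, not the implementation used for the experiments; the choice of SHA-256 as the (approximately) uniform hash function and the parameter values are our own assumptions, and the register index is 0-based here instead of the paper's 1-based convention.

```python
import hashlib

P, Q = 12, 20        # illustrative precision p and register range q
M = 1 << P           # m = 2^p registers

def insert(registers, element):
    """Insert an element into an HLL sketch, following Algorithm 1."""
    # Derive a (p+q)-bit hash value (SHA-256 here, assumed ~uniform).
    h = int.from_bytes(hashlib.sha256(element.encode()).digest(), "big")
    h &= (1 << (P + Q)) - 1
    i = h >> Q                 # leading p bits select a register (0-based)
    rest = h & ((1 << Q) - 1)  # remaining q bits
    # Position of the first 1-bit among the q bits, or q+1 if all are zero.
    k = next((s for s in range(1, Q + 1) if rest & (1 << (Q - s))), Q + 1)
    if k > registers[i]:
        registers[i] = k
```

Since a register only ever grows, inserting the same element twice leaves the sketch unchanged.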
A \ac{HLL} sketch can be characterized by the parameter pair $(\symPrecision, \symRegRange)$ where the precision parameter $\symPrecision$ controls the relative estimation error which scales like $1/\sqrt{\symNumReg}$ \cite{Flajolet2007} while $\symRegRange$ defines the possible range for registers which is $\lbrace 0, 1,\ldots,\symRegRange+1\rbrace$. The case $\symRegRange=0$ corresponds to a bit array and shows that the \ac{HLL} algorithm can be regarded as a generalization of linear counting \cite{Whang1990}. The number of consumed hash value bits $\symPrecision+\symRegRange$ defines the maximum cardinality that can be tracked. Obviously, if the cardinality reaches values in the order of $2^{\symPrecision+\symRegRange}$, hash collisions will become more apparent and the estimation error will increase.
\cref{alg:insert} has some properties which are especially useful for distributed data streams. First, the insertion order of elements has no influence on the final sketch state. Furthermore, any two \ac{HLL} sketches with same parameters $(\symPrecision, \symRegRange)$ representing two different sets can be easily merged. The sketch that represents the union of both sets can be constructed by taking the register-wise maximum values.
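The merge operation described above amounts to a single element-wise pass over the register arrays; a short sketch (variable names are our own):

```python
def merge(regs_a, regs_b):
    """Union of two HLL sketches with identical parameters (p, q):
    take the register-wise maximum."""
    if len(regs_a) != len(regs_b):
        raise ValueError("sketches must have the same number of registers")
    return [max(a, b) for a, b in zip(regs_a, regs_b)]
```

The operation is commutative, associative, and idempotent, which is why the result is independent of how a distributed data set is partitioned.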
The state of a \ac{HLL} sketch is described by the vector $\boldsymbol{\symRegValVariate} = (\symRegValVariate_1,\ldots,\symRegValVariate_\symNumReg)$. Under the assumption of a uniform hash function the inserted elements are distributed over all $\symNumReg$ registers according to a multinomial distribution with equal probabilities $1/\symNumReg$ \cite{Flajolet2007}. Therefore, any permutation of $\boldsymbol{\symRegValVariate}$ is equally likely for a given cardinality. Thus, the order of register values $\symRegValVariate_1,\ldots,\symRegValVariate_\symNumReg$ contains no information about the cardinality which makes the multiset $\lbrace\symRegValVariate_1,\ldots,\symRegValVariate_\symNumReg\rbrace$ a sufficient statistic for $\symCardinality$. Since the values of the multiset are all in the range $[0, \symRegRange+1]$, the multiset can also be written as $\lbrace\symRegValVariate_1,\ldots,\symRegValVariate_\symNumReg\rbrace = 0^{\symCountVariate_0}1^{\symCountVariate_1}\cdots\symRegRange^{\symCountVariate_{\symRegRange}}(\symRegRange+1)^{\symCountVariate_{\symRegRange+1}}$ where $\symCountVariate_\symRegVal$ is the multiplicity of value $\symRegVal$. As a consequence, the multiplicity vector $\boldsymbol{\symCountVariate} := (\symCountVariate_0,\ldots,\symCountVariate_{\symRegRange+1})$, which corresponds to the register value histogram, is also a sufficient statistic for the cardinality. By definition we have $\sum_{\symRegVal=0}^{\symRegRange+1}\symCountVariate_{\symRegVal}=\symNumReg$.
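The multiplicity vector, i.e. the register value histogram, is easily computed from the register array; a minimal sketch (names are our own):

```python
def multiplicity_vector(registers, q):
    """Register value histogram (C_0, ..., C_{q+1}), a sufficient
    statistic for the cardinality; entries sum to the register count m."""
    counts = [0] * (q + 2)
    for k in registers:
        counts[k] += 1
    return counts
```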
\subsection{Poisson Approximation} \label{sec:poisson_approximation} The multinomial distribution is the reason that register values are statistically dependent and that further analysis is difficult. For simplification a Poisson model can be used \cite{Flajolet2007}, which assumes that the cardinality itself is distributed according to a Poisson distribution $\symCardinality \sim \text{Poisson}(\symPoissonRate)$. Under the Poisson model the register values are independent and identically distributed according to \begin{equation} \label{equ:register_value_distribution} \symProbability(\symRegValVariate \leq \symRegVal\vert\symPoissonRate) = \begin{cases} 0 & \symRegVal < 0 \\ e^{-\frac{\symPoissonRate}{\symNumReg 2^\symRegVal}} & 0\leq \symRegVal \leq \symRegRange \\ 1 & \symRegVal > \symRegRange. \end{cases} \end{equation}
The Poisson approximation makes it easier to find an estimator $\symPoissonRateEstimate = \symPoissonRateEstimate(\boldsymbol{\symRegValVariate})$ for the Poisson rate $\symPoissonRate$ than for the cardinality $\symCardinality$ under the fixed-size model. Depoissonization finally allows translating the estimates back to the fixed-size model. Assume we have found an estimator $\symPoissonRateEstimate$ for the Poisson rate that is unbiased, i.e., $\symExpectation(\symPoissonRateEstimate\vert\symPoissonRate) = \symPoissonRate$ for all $\symPoissonRate\geq 0$. This implies $\symExpectation(\symPoissonRateEstimate\vert\symCardinality) = \symCardinality$ as it is the only solution of \begin{equation*} \symExpectation(\symPoissonRateEstimate\vert\symPoissonRate)
= \sum_{\symCardinality=0}^\infty \symExpectation(\symPoissonRateEstimate\vert\symCardinality) e^{-\symPoissonRate}\frac{\symPoissonRate^\symCardinality}{\symCardinality!} = \symPoissonRate \quad \text{for all}\ \symPoissonRate\geq 0. \end{equation*} Hence, the unbiased estimator $\symPoissonRateEstimate$ conditioned on $\symCardinality$ is also an unbiased estimator for $\symCardinality$, which motivates us to use $\symPoissonRateEstimate$ directly as estimator for the cardinality $\symCardinalityEstimate := \symPoissonRateEstimate$. As our results will show later, this Poisson approximation works well over the full cardinality range, even for estimators that are not exactly unbiased.
\section{Original Estimation Approach} \label{sec:cardinality_estimation} The original cardinality estimator \cite{Flajolet2007} is based on the idea that the number of distinct element insertions a register needs to reach the value $\symRegVal$ is proportional to $\symNumReg 2^{\symRegVal}$. Given that, a rough cardinality estimate can be obtained by averaging the values $\lbrace \symNumReg 2^{\symRegValVariate_1},\ldots,\symNumReg2^{\symRegValVariate_\symNumReg}\rbrace$. The harmonic mean was found to work best as it is less sensitive to outliers. The result is the so-called raw estimator given by \begin{equation} \label{equ:raw_estimator} \symCardinalityRawEstimate = \alpha_\symNumReg \frac{\symNumReg} {\frac{1}{\symNumReg 2^{\symRegValVariate_1}}+\ldots+\frac{1}{\symNumReg2^{\symRegValVariate_\symNumReg}}}
= \frac{\symAlpha_\symNumReg \symNumReg^2}{\sum_{\symRegVal=0}^{\symRegRange+1}\symCountVariate_\symRegVal 2^{-\symRegVal}}. \end{equation} Here $\alpha_\symNumReg$ is a bias correction factor \cite{Flajolet2007} which can be well approximated by $\symAlpha_\infty := \lim_{\symNumReg\rightarrow\infty} \symAlpha_\symNumReg = \frac{1}{2\log 2}$ in practice, because the additional bias is negligible compared to the overall estimation error.
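As an illustration, the raw estimator \eqref{equ:raw_estimator} computed from the multiplicity vector, with the asymptotic bias correction factor $\symAlpha_\infty$, can be sketched as:

```python
from math import log

ALPHA_INF = 1.0 / (2.0 * log(2.0))  # asymptotic bias correction factor

def raw_estimate(counts, m):
    """Raw HLL estimator from the multiplicity vector C_0, ..., C_{q+1}."""
    denom = sum(c * 2.0 ** -k for k, c in enumerate(counts))
    return ALPHA_INF * m * m / denom
```

Note that an empty sketch ($\symCountVariate_0 = \symNumReg$) yields $\symAlpha_\infty \symNumReg \approx 0.72\,\symNumReg$ rather than 0, which already hints at the small-cardinality bias discussed below.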
To investigate the estimation error of the raw estimator and other estimation approaches discussed in the following, we filled \num{10000} \ac{HLL} sketches with up to 50 billion unique elements. To speed up computation we assumed a uniform hash function whose values can be simulated by random numbers. We used the Mersenne Twister random number generator with a state size of \num{19937} bits from the C++ standard library.
\cref{fig:raw_estimate} shows the distribution of the relative error of the raw estimator as a function of the true cardinality for $\symPrecision=12$ and $\symRegRange=20$. Corrections for small and large cardinalities have been proposed to reduce the obvious bias. For small cardinalities the \ac{HLL} sketch can be interpreted as a bit array by distinguishing between registers with zero and nonzero values. This allows using the linear counting cardinality estimator \cite{Whang1990} \begin{equation} \label{equ:linear_counting_estimator} \symCardinalityEstimate_\text{small} = \symNumReg \log(\symNumReg/\symCountVariate_0). \end{equation} The corresponding estimation error is small for small cardinalities and is shown in \cref{fig:small_range_estimate}. It was proposed to use this estimator as long as $\symCardinalityRawEstimate \leq \frac{5}{2}\symNumReg$, where the factor $\frac{5}{2}$ was empirically determined \cite{Flajolet2007}. For large cardinalities on the order of $2^{\symPrecision+\symRegRange}$, for which a lot of registers are already in a saturated state, meaning that they have reached the maximum possible value $\symRegRange + 1$, the raw estimator underestimates cardinalities. For the 32-bit hash value case $(\symPrecision+\symRegRange=32)$, which was considered in \cite{Flajolet2007}, the following correction formula was proposed for $\symCardinalityRawEstimate>2^{32}/30\approx\num{1.43e8}$ to take these saturated registers into account \begin{equation} \label{equ:large_range_estimate} \symCardinalityEstimate_\text{large} = -2^{32}\log(1-\symCardinalityRawEstimate/2^{32}). \end{equation}
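The original estimation method with both corrections can be sketched as follows (a minimal illustration under the $\symPrecision+\symRegRange=32$ assumption of \cite{Flajolet2007}; function and variable names are our own):

```python
from math import log

def original_estimate(counts, m):
    """Original HLL estimation: raw estimator plus small-range (linear
    counting) and large-range corrections; assumes p + q = 32."""
    alpha_inf = 1.0 / (2.0 * log(2.0))
    e = alpha_inf * m * m / sum(c * 2.0 ** -k for k, c in enumerate(counts))
    if e <= 2.5 * m and counts[0] > 0:
        return m * log(m / counts[0])                    # small range
    if e > 2.0 ** 32 / 30.0:
        return -(2.0 ** 32) * log(1.0 - e / 2.0 ** 32)   # large range
    return e
```

As discussed below, the large-range branch can even fail for valid sketch states where $\symCardinalityRawEstimate > 2^{32}$, since the logarithm's argument then becomes negative.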
The relative estimation error of the original method that includes both corrections is shown in \cref{fig:original_estimate} for the case $\symPrecision=12$ and $\symRegRange=20$. Unfortunately, the ranges where the estimation error is small for $\symCardinalityRawEstimate$ and $\symCardinalityEstimate_\text{small}$ do not overlap, which causes the estimation error to be much larger near the transition region. To reduce the error for cardinalities close to this region, it was proposed to correct the bias of $\symCardinalityRawEstimate$. Empirically collected bias correction data can be either stored as set of interpolation points \cite{Heule2013}, as lookup table \cite{Rhodes2015}, or as best-fitting polynomial \cite{Sanfilippo2014}. However, all these empirical approaches treat the symptom and not the cause.
\begin{figure}
\caption{Relative error of the raw estimator for \boldmath$\symPrecision = 12$ and $\symRegRange=20$.}
\label{fig:raw_estimate}
\end{figure} \begin{figure}
\caption{Relative error of the linear counting estimator for bitmap size 4096 which corresponds to \boldmath$\symPrecision = 12$ and $\symRegRange=0$.}
\label{fig:small_range_estimate}
\end{figure} \begin{figure}
\caption{Relative estimation error of the original method for \boldmath$\symPrecision = 12$ and $\symRegRange=20$.}
\label{fig:original_estimate}
\end{figure}
The large range correction formula \eqref{equ:large_range_estimate} is not satisfying either, as it does not reduce the estimation error but makes it even worse. Instead of underestimating cardinalities, they are now overestimated. Another indication for the incorrectness of the proposed large range correction is the fact that it is not even defined for all possible states. For instance, consider a $(\symPrecision,\symRegRange)$-\ac{HLL} sketch with $\symPrecision+\symRegRange=32$ for which all registers are equal to the maximum possible value $\symRegRange+1$. The raw estimate would be $\symCardinalityRawEstimate = \symAlpha_\symNumReg 2^{33}$, which is greater than $2^{32}$ and outside of the domain of the large range correction formula. A simple approach to avoid the need for any large range corrections is to extend the operating range of the raw estimator to larger cardinalities. This can be easily accomplished by increasing $\symPrecision+\symRegRange$, which corresponds to using hash values with more bits. Each additional bit doubles the operating range, which scales like $2^{\symPrecision+\symRegRange}$. However, in case $\symRegRange \geq 31$ the number of possible register values, which are $\lbrace 0, 1, \ldots, \symRegRange+1\rbrace$, exceeds 32 and is no longer representable by 5 bits. Therefore, it was proposed to use 6 bits per register in combination with 64-bit hash values \cite{Heule2013}. Even larger hash values are unnecessary in practice, because it is unrealistic to encounter cardinalities of order $2^{64}$.
\subsection{Derivation of the Raw Estimator} \label{sec:derivation_raw_estimator} To better understand why the raw estimator fails for small and large cardinalities, we start with a brief and simple derivation without the restriction to large cardinalities ($\symCardinality\rightarrow\infty$) and without using complex analysis as in \cite{Flajolet2007}. Assume that the register values have following cumulative distribution function \begin{equation} \label{equ:assumed_register_val_distribution} \symProbability(\symRegValVariate \leq \symRegVal\vert\symPoissonRate) = e^{-\frac{\symPoissonRate}{\symNumReg 2^{ \symRegVal}}}. \end{equation} For now we ignore that this distribution has infinite support and differs from the register value distribution under the Poisson model \eqref{equ:register_value_distribution}, whose support is limited to $[0, \symRegRange+1]$. For a random variable $\symRegValVariate$ obeying \eqref{equ:assumed_register_val_distribution} the expectation of $2^{-\symRegValVariate}$ is given by \begin{equation} \label{equ:expectation_power_two} \symExpectation(2^{-\symRegValVariate}) =
\frac{1}{2} \sum_{\symRegVal = -\infty}^\infty 2^{-\symRegVal} e^{-\frac{\symPoissonRate}{\symNumReg 2^{\symRegVal}} } = \frac{\symAlpha_\infty\,\symNumReg\,\symPowerSeriesFunc\!\left(\log_2\!\left(\symPoissonRate/\symNumReg\right)\right)}{\symPoissonRate}, \end{equation} where the function \begin{equation*}
\symPowerSeriesFunc(\symX):= \log(2) \sum_{\symRegVal = -\infty}^\infty 2^{\symRegVal+\symX} e^{-2^{\symRegVal+\symX}} \end{equation*} is a smooth periodic oscillating function with mean 1 and an amplitude that can be bounded by $\symEpsPowerSeriesFunc:=\num{9.885e-6}$ as shown in \cref{fig:power_series_func}. This limit can also be found using Fourier analysis \cite{Ertl2017}. \begin{figure}
\caption{The deviation of $\symPowerSeriesFunc(\symX)$ from 1.}
\label{fig:power_series_func}
\end{figure} For a large ($\symNumReg\rightarrow \infty$) sample $\symRegValVariate_1,\ldots,\symRegValVariate_\symNumReg$, that is distributed according to \eqref{equ:assumed_register_val_distribution}, we asymptotically have \begin{equation*} \symExpectation\!\left( \frac{1}{2^ {-\symRegValVariate_1}\!+\ldots+2^{-\symRegValVariate_\symNumReg}} \right) \underset{\symNumReg\rightarrow \infty}{=} \frac{1}{\symExpectation (2^{-\symRegValVariate_1}+\ldots+2^{-\symRegValVariate_\symNumReg})} = \frac{1}{\symNumReg \symExpectation(2^{-\symRegValVariate})}. \end{equation*} Together with \eqref{equ:expectation_power_two} we obtain \begin{equation*} \symPoissonRate = \symExpectation\left( \frac{\symAlpha_\infty\,\symNumReg^2\,\symPowerSeriesFunc(\log_2( \symPoissonRate/\symNumReg))}{2^{-\symRegValVariate_1}+\ldots+2^{-\symRegValVariate_\symNumReg}} \right) \quad \text{for $\symNumReg\rightarrow \infty$}. \end{equation*} Therefore, the asymptotic relative bias of \begin{equation*}
\symPoissonRateEstimate = \frac{\symAlpha_\infty\,\symNumReg^2}{2^{-\symRegValVariate_1}+\ldots+2^{-\symRegValVariate_\symNumReg}} \end{equation*} is bounded by $\symEpsPowerSeriesFunc$, which makes this statistic a good estimator for the Poisson parameter. It also corresponds to the raw estimator \eqref{equ:raw_estimator}, if the Poisson parameter estimate is used as cardinality estimate as discussed in \cref{sec:poisson_approximation}.
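The periodic function $\symPowerSeriesFunc$ can be evaluated numerically by truncating the doubly infinite sum to the index range that contributes at double precision (the truncation bounds below are our own choice; remaining terms underflow):

```python
from math import exp, log

def xi(x):
    """xi(x) = log(2) * sum_k 2^(k+x) * exp(-2^(k+x)), truncated to
    indices whose terms are numerically non-negligible."""
    total = 0.0
    for k in range(-60, 60):
        t = 2.0 ** (k + x)
        total += t * exp(-t)
    return log(2.0) * total
```

Evaluating this function confirms that it oscillates around 1 with period 1 and a tiny amplitude, which justifies treating it as a constant in the raw estimator.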
\subsection{Limitations of the Raw Estimator} The raw estimator is based on two prerequisites. In practice, only the first one, requiring $\symNumReg$ to be sufficiently large, is satisfied. However, the second one, assuming that the distribution of register values \eqref{equ:register_value_distribution} can be approximated by \eqref{equ:assumed_register_val_distribution}, is not always true. A random variable $\symRegValVariate'$ with cumulative distribution function \eqref{equ:assumed_register_val_distribution} can be transformed into a random variable $\symRegValVariate$ with cumulative distribution function \eqref{equ:register_value_distribution} using \begin{equation} \label{equ:dist_transformation} \symRegValVariate = \min\!\left(\max\!\left(\symRegValVariate',0\right), \symRegRange+1\right). \end{equation} Therefore, register values $\symRegValVariate_1,\ldots,\symRegValVariate_\symNumReg$ can be seen as the result after applying this transformation to a sample $\symRegValVariate'_1,\ldots,\symRegValVariate'_\symNumReg$ from \eqref{equ:assumed_register_val_distribution}. If all register values are in the range $[1,\symRegRange]$, they must be identical to the values $\symRegValVariate'_1,\ldots,\symRegValVariate'_\symNumReg$. In other words, the observed register values are also a plausible sample of distribution \eqref{equ:assumed_register_val_distribution}. Hence, as long as all or at least most register values are in the range $[1,\symRegRange]$, which is the case if $2^\symPrecision \ll \symPoissonRate \ll 2^{\symPrecision +\symRegRange}$, the approximation of \eqref{equ:register_value_distribution} by \eqref{equ:assumed_register_val_distribution} is valid. This explains why the raw estimator works best for intermediate cardinalities. However, for small and large cardinalities many register values are equal to 0 or $\symRegRange+1$, respectively, which contradicts \eqref{equ:assumed_register_val_distribution} and results in the observed bias.
\section{Improved Estimator} If we knew the values $\symRegValVariate'_1,\ldots,\symRegValVariate'_\symNumReg$ for which transformation \eqref{equ:dist_transformation} led to the observed register values $\symRegValVariate_1,\ldots,\symRegValVariate_\symNumReg$, we would be able to use the raw estimator \begin{equation} \label{equ:raw_estimate_with_multiplicities} \symPoissonRateEstimate = \frac{\symAlpha_\infty\,\symNumReg^2}{\sum_{\symRegVal=-\infty}^\infty \symCountVariate'_\symRegVal 2^{-\symRegVal}} \end{equation} where $\symCountVariate'_\symRegVal := \vert\lbrace \symIndexI\vert \symRegVal = \symRegValVariate'_\symIndexI\rbrace\vert$ are the multiplicities of value $\symRegVal$ in $\lbrace\symRegValVariate'_1,\ldots,\symRegValVariate'_\symNumReg\rbrace$. Due to \eqref{equ:dist_transformation}, the multiplicities $\symCountVariate'_\symRegVal$ and the multiplicities $\symCountVariate_\symRegVal$ for the observed register values have the following relationships \begin{equation} \label{equ:multiplicity_transformation} \symCountVariate_0 = \textstyle\sum_{\symRegVal = -\infty}^{0} \symCountVariate'_\symRegVal,\ \symCountVariate_\symRegVal = \symCountVariate'_\symRegVal\ (1\leq\symRegVal\leq\symRegRange),\ \symCountVariate_{\symRegRange+1} = \textstyle\sum_{\symRegVal = \symRegRange + 1}^{\infty} \symCountVariate'_\symRegVal. \end{equation} The idea is now to find estimates $\hat{\symCount}'_\symRegVal$ for all $\symRegVal\in\mathbb{Z}$ and use them as replacements for $\symCountVariate'_\symRegVal$ in \eqref{equ:raw_estimate_with_multiplicities}. For $\symRegVal\in[1,\symRegRange]$ we can use the trivial estimators $\hat{\symCount}'_\symRegVal := \symCountVariate_\symRegVal$. 
Estimators for $\symRegVal\leq 0$ and $\symRegVal\geq\symRegRange+1$ can be found by considering the expectation of $\symCountVariate'_\symRegVal$ \begin{equation*} \symExpectation(\symCountVariate'_\symRegVal) = \symNumReg\, \symProbability(\symRegValVariate' = \symRegVal\vert\symPoissonRate) = \symNumReg\,e^{-\frac{\symPoissonRate}{\symNumReg 2^{\symRegVal}}}\left(1-e^{-\frac{\symPoissonRate}{\symNumReg 2^{\symRegVal}}}\right). \end{equation*} According to \eqref{equ:register_value_distribution} we have $\symExpectation(\symCountVariate_0/\symNumReg)= e^{-\frac{\symPoissonRate}{\symNumReg}}$ and $\symExpectation(1-\symCountVariate_{\symRegRange+1}/\symNumReg)= e^{-\frac{\symPoissonRate}{\symNumReg 2^\symRegRange}}$ which gives \begin{align*} \symExpectation(\symCountVariate'_\symRegVal) &= \symNumReg\, (\symExpectation(\symCountVariate_0/\symNumReg))^{2^{-\symRegVal}} \left(1-\left(\symExpectation(\symCountVariate_0/\symNumReg)\right)^{2^{-\symRegVal}}\right), \\ &= \symNumReg\, (\symExpectation(1-\symCountVariate_{\symRegRange+1}/\symNumReg))^{2^{\symRegRange-\symRegVal}} \left(1-(\symExpectation(1-\symCountVariate_{\symRegRange+1}/\symNumReg))^{2^{\symRegRange-\symRegVal}}\right). \end{align*} These two expressions for the expectation suggest using \begin{align*} \hat{\symCount}'_\symRegVal &= \symNumReg\, (\symCountVariate_0/\symNumReg)^{2^{-\symRegVal}} \left(1-(\symCountVariate_0/\symNumReg)^{2^{-\symRegVal}}\right)\quad\text{and} \\ \hat{\symCount}'_\symRegVal &= \symNumReg\, (1-\symCountVariate_{\symRegRange+1}/\symNumReg)^{2^{\symRegRange-\symRegVal}} \left(1-(1-\symCountVariate_{\symRegRange+1}/\symNumReg)^{2^{\symRegRange-\symRegVal}}\right) \end{align*} as estimators for $\symRegVal \leq 0$ and $\symRegVal \geq \symRegRange+1$, respectively. 
Both estimators conserve the mass of zero-valued and saturated registers, because \eqref{equ:multiplicity_transformation} is satisfied, if $\symCountVariate'_\symRegVal$ is replaced by $\hat{\symCount}'_\symRegVal$. Plugging all these estimators into \eqref{equ:raw_estimate_with_multiplicities} as replacements for $\symCountVariate'_\symRegVal$ finally gives \begin{equation} \label{equ:correctedestimator} \symPoissonRateEstimate =
\frac{\symAlpha_\infty\symNumReg^2} { \symNumReg\,\symSmallCorrectionFunc(\symCountVariate_0/\symNumReg) + \sum_{\symRegVal=1}^\symRegRange \symCountVariate_\symRegVal 2^{-\symRegVal} + \symNumReg\, \symLargeCorrectionFunc(1-\symCountVariate_{\symRegRange+1}/\symNumReg) 2^{-\symRegRange} } \end{equation} which we call the improved estimator. Here $\symNumReg\,\symSmallCorrectionFunc(\symCountVariate_0/\symNumReg)$ and $2\symNumReg\,\symLargeCorrectionFunc(1-\symCountVariate_{\symRegRange+1}/\symNumReg)$ are replacements for $\symCountVariate_0$ and $\symCountVariate_{\symRegRange+1}$ in the raw estimator \eqref{equ:raw_estimator}, respectively. The functions $\symSmallCorrectionFunc$ and $\symLargeCorrectionFunc$ are defined as \begin{align} \label{equ:sigma} \symSmallCorrectionFunc(\symX) &:= \symX + \sum_{\symRegVal=1}^\infty \symX^{2^\symRegVal} 2^{\symRegVal-1}, \\ \label{equ:tau} \symLargeCorrectionFunc(\symX) &:= \frac{1}{3} \left( 1-\symX - \sum_{\symRegVal=1}^\infty \left( 1- \symX^{2^{-\symRegVal}} \right)^{\!2} 2^{-\symRegVal} \right). \end{align} We can cross-check the new estimator for the linear counting case with $\symRegRange=0$. Using the identity \begin{equation} \label{equ:sigma_tau_relationship} \symSmallCorrectionFunc(\symX) + \symLargeCorrectionFunc(\symX) = \symAlpha_\infty\symPowerSeriesFunc(\log_2(\log(1/\symX)))/\log(1/\symX), \end{equation}
we get \begin{equation*} \symPoissonRateEstimate = \frac{\symAlpha_\infty\symNumReg}{ \symSmallCorrectionFunc(\symCountVariate_0/\symNumReg) + \symLargeCorrectionFunc(\symCountVariate_0/\symNumReg) } = \frac{\symNumReg\log\!\left(\symNumReg/\symCountVariate_0\right)}{\symPowerSeriesFunc\!\left(\log_2\!\left(\log\!\left(\symNumReg/\symCountVariate_0\right)\right)\right)} \end{equation*} which is as expected almost identical to the linear counting estimator \eqref{equ:linear_counting_estimator}, because $\symPowerSeriesFunc(\symX)\approx 1$.
The new estimator can be directly translated into an estimation algorithm that does not depend on magic numbers or special cases as previous approaches do. Since $\symCountVariate_\symRegVal\in\lbrace0,\ldots,\symNumReg\rbrace$, the values for $\symNumReg\,\symSmallCorrectionFunc(\symCountVariate_0/\symNumReg)$ and $\symNumReg\, \symLargeCorrectionFunc(1-\symCountVariate_{\symRegRange+1}/\symNumReg)$ can be precalculated and kept in lookup tables of size $\symNumReg+1$. In this way a complete branch-free cardinality estimation can be realized. However, on-demand calculation of $\symSmallCorrectionFunc$ and $\symLargeCorrectionFunc$ is very fast as well. The series \eqref{equ:sigma} converges quadratically for all $\symX\in[0,1)$ and its terms can be calculated recursively using elementary operations. The case $\symX=1$ needs special handling, because the series diverges, causing an infinite denominator in \eqref{equ:correctedestimator} and therefore a vanishing cardinality estimate. As this case only occurs if all registers are still in their initial state ($\symCountVariate_0=\symNumReg$), this is exactly what is expected. Apart from the trivial roots at $\symX=0$ and $\symX=1$, the calculation of $\symLargeCorrectionFunc$ is slightly more expensive, because it involves square root evaluations and its series converges only linearly. Since $1-\symX^{2^{-\symRegVal}}\leq -\log(\symX) 2^{-\symRegVal}$ for $\symX\in(0,1]$ its convergence speed is comparable to a geometric series with ratio $1/8$. It is also conceivable to calculate $\symLargeCorrectionFunc$ using the approximation $\symLargeCorrectionFunc(\symX)\approx \symAlpha_\infty/\log(1/\symX)-\symSmallCorrectionFunc(\symX)$ which can be obtained from \eqref{equ:sigma_tau_relationship} and $\symPowerSeriesFunc(\symX)\approx 1$. The advantage is that the calculation of $\symSmallCorrectionFunc$ and the additional logarithm is slightly faster than the direct calculation of $\symLargeCorrectionFunc$ using \eqref{equ:tau}. 
However, for arguments close to 1 both terms may become very large compared to their difference, which requires special care to avoid numerical cancellation. The calculation of $\symLargeCorrectionFunc$ can be omitted altogether, if the \ac{HLL} parameters are chosen such that $2^{\symPrecision+\symRegRange}$ is much larger than the expected cardinality. The number of saturated registers is negligible in this case ($\symCountVariate_{\symRegRange+1}\approx 0$) and therefore $\symLargeCorrectionFunc(1 - \symCountVariate_{\symRegRange+1}/\symNumReg)\approx\symLargeCorrectionFunc(1) = 0$.
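Putting the pieces together, the improved estimator \eqref{equ:correctedestimator} with on-demand evaluation of $\symSmallCorrectionFunc$ and $\symLargeCorrectionFunc$ from \eqref{equ:sigma} and \eqref{equ:tau} can be sketched as follows (the recursive term evaluation mirrors the series; the Horner-style evaluation of the denominator sum and all names are our own):

```python
from math import log

ALPHA_INF = 1.0 / (2.0 * log(2.0))

def sigma(x):
    """sigma(x) = x + sum_{k>=1} x^(2^k) * 2^(k-1); diverges for x = 1."""
    if x == 1.0:
        return float("inf")
    z_prev, y, z = -1.0, 1.0, x
    while z != z_prev:
        x = x * x
        z_prev = z
        z = z + x * y
        y = y + y
    return z

def tau(x):
    """tau(x) = (1/3) * (1 - x - sum_{k>=1} (1 - x^(2^-k))^2 * 2^-k)."""
    if x == 0.0 or x == 1.0:
        return 0.0
    z_prev, y, z = -1.0, 1.0, 1.0 - x
    while z != z_prev:
        x = x ** 0.5
        z_prev = z
        y = 0.5 * y
        z = z - (1.0 - x) ** 2 * y
    return z / 3.0

def improved_estimate(counts, m, q):
    """Improved estimator: Horner-style evaluation of the denominator."""
    z = m * tau(1.0 - counts[q + 1] / m)
    for k in range(q, 0, -1):
        z = 0.5 * (z + counts[k])   # accumulates sum_k C_k 2^-k
    z = z + m * sigma(counts[0] / m)
    if z == 0.0:
        return float("inf")         # all registers saturated
    return ALPHA_INF * m * m / z
```

The two boundary states behave as expected: an empty sketch ($\symCountVariate_0=\symNumReg$) yields the estimate 0 via the diverging $\symSmallCorrectionFunc$, and a fully saturated sketch yields an infinite estimate.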
\subsection{Results}
\begin{figure}
\caption{Relative error of the improved estimator for \boldmath$\symPrecision = 12$ and $\symRegRange=20$.}
\label{fig:raw_corrected_estimation_error_12_20}
\end{figure}
\begin{figure}
\caption{Standard deviations of the relative error of different cardinality estimators for \boldmath$\symPrecision = 12$ and $\symRegRange=20$.}
\label{fig:stdev_comparison}
\end{figure}
\label{sec:corrected_raw_estimation_error} \cref{fig:raw_corrected_estimation_error_12_20} shows the relative estimation error of the new improved estimator, again based on \num{10000} randomly generated \ac{HLL} sketches with parameters $\symPrecision=12$ and $\symRegRange=20$. The experimental results show that the new estimator is unbiased up to cardinalities of 10 billion, which is a clear improvement over the raw estimator (compare \cref{fig:raw_estimate}). The improved estimator beats the precision of existing methods that apply bias correction on the raw estimator \eqref{equ:raw_estimator} \cite{Heule2013,Rhodes2015,Sanfilippo2014}. Based on the simulated data we have empirically determined the bias correction function $\symBiasCorrectionFunc$ that satisfies $\symCardinality = \symExpectation(\symBiasCorrectionFunc(\symCardinalityRawEstimate)\vert\symCardinality)$ for all cardinalities. By definition, the estimator $\symCardinalityRawEstimate':=\symBiasCorrectionFunc(\symCardinalityRawEstimate)$ is unbiased and a function of the raw estimator. Its standard deviation is compared with that of the improved estimator in \cref{fig:stdev_comparison}. For cardinalities smaller than \num{e4} the empirical bias correction approach is not very precise. This is the reason why all previous approaches had to switch over to the linear counting estimator at some point. The standard deviation of the linear counting estimator is also shown in \cref{fig:stdev_comparison}. Obviously, the previous approaches cannot do better than given by the minimum of both curves for linear counting and raw estimator. In practice, the standard deviation of the combined approach is even larger, because the choice between both estimators must be made based on an estimate and not on the true cardinality, for which the intersection point of both curves, which is approximately \num{3e3} in \cref{fig:stdev_comparison}, would be the ideal transition point. 
In contrast, the improved estimator performs well over the entire cardinality range.
We also investigated the estimation error of the improved estimator for the case $\symPrecision=22$, $\symRegRange=10$ as shown in \cref{fig:raw_corrected_estimation_error_22_10}. The standard deviation is smaller by a factor of 32, in accordance with the $1/\sqrt{\symNumReg}$ error scaling law. Again, cardinalities up to order $2^{\symPrecision+\symRegRange}\approx\num{4e9}$ can be well estimated, because $\symPrecision+\symRegRange=32$ was kept the same for both investigated \ac{HLL} configurations. For $\symPrecision=22$ a small oscillating bias becomes apparent, which is caused by approximating the periodic function $\symPowerSeriesFunc$ by a constant (see \cref{sec:derivation_raw_estimator}).
\begin{figure}
\caption{Relative error of the improved estimator for \boldmath$\symPrecision = 22$ and $\symRegRange=10$.}
\label{fig:raw_corrected_estimation_error_22_10}
\end{figure}
\section{Maximum Likelihood Estimation} \label{sec:max_likelihood_estimation} We know from \cref{sec:poisson_approximation} that any unbiased estimator for the Poisson parameter is also an unbiased estimator for the cardinality. Moreover, we know that under suitable regularity conditions on the probability mass function the \ac{ML} estimator is asymptotically efficient \cite{Casella2002}. This means that if the number of registers $\symNumReg$ is sufficiently large, the estimator should be nearly unbiased.
The log-likelihood function for given register values $\boldsymbol{\symRegValVariate}$, which are assumed to be distributed according to \eqref{equ:register_value_distribution} under the Poisson model, is \begin{equation} \label{equ:log_likelihood_single} \log \mathcal{\symLikelihood}(\symPoissonRate\vert\boldsymbol{\symRegValVariate}) = \sum_{\symRegVal=1}^{\symRegRange+1} \log\left(1-e^{-\frac{\symPoissonRate}{\symNumReg 2^{\min(\symRegVal,\symRegRange)}}}\right) \symCountVariate_\symRegVal - \frac{\symPoissonRate}{\symNumReg}\sum_{\symRegVal=0}^{\symRegRange}\frac{\symCountVariate_\symRegVal}{2^\symRegVal} . \end{equation} After differentiation and multiplication by $\symPoissonRate$ we find that the \ac{ML} estimate $\symPoissonRateEstimate$ is the unique root of the monotone decreasing function \begin{equation*} \symFunc(\symPoissonRate) = \sum_{\symRegVal=1}^{\symRegRange+1} \frac{\frac{\symPoissonRate}{\symNumReg 2^{\min(\symRegVal,\symRegRange)}}}{e^{\frac{\symPoissonRate}{\symNumReg 2^{\min(\symRegVal,\symRegRange)}}}-1} \symCountVariate_\symRegVal - \frac{\symPoissonRate}{\symNumReg}\sum_{\symRegVal=0}^\symRegRange \frac{\symCountVariate_\symRegVal}{2^\symRegVal}. \end{equation*} Since $\symFunc(0)=\symNumReg-\symCountVariate_0\geq 0$ and $\symFunc$ is at least linearly decreasing as long as $\symCountVariate_{\symRegRange+1} < \symNumReg$, there exists a unique root. In the special case $\symCountVariate_{\symRegRange+1} = \symNumReg$, in which all registers have reached the maximum possible value, the \ac{ML} estimate is positive infinite.
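The score function $\symFunc$ is cheap to evaluate from the histogram of register values. The following Python sketch is purely illustrative (the names `ml_score`, `counts`, `m`, and `q` are ours, standing for $\symFunc$, $\symCountVariate_\symRegVal$, $\symNumReg$, and $\symRegRange$); it exhibits the two properties used above, namely $\symFunc(0)=\symNumReg-\symCountVariate_0$ and monotone decrease:

```python
import math

def ml_score(rate, counts, m, q):
    """Evaluate the score function f whose unique root is the ML estimate.

    counts[k] is the number of registers with value k (k = 0..q+1),
    m is the number of registers, q the maximal non-saturated value."""
    s = 0.0
    for k in range(1, q + 2):
        x = rate / (m * 2.0 ** min(k, q))
        s += counts[k] * x / math.expm1(x)  # the factor x / (e^x - 1)
    return s - (rate / m) * sum(counts[k] / 2.0 ** k for k in range(q + 1))
```

As `rate` approaches zero, each ratio $x/(e^x-1)$ tends to 1, so the score approaches $\symNumReg-\symCountVariate_0$; for growing `rate` it decreases.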
Using $\symFunc(\symPoissonRateEstimate)=0$ and $1-\frac{\symX}{2}\leq \frac{\symX}{e^\symX-1}\leq 1$ we obtain the following bounds for $\symPoissonRateEstimate$ \begin{equation} \label{equ:inequality} \frac{\symNumReg(\symNumReg-\symCountVariate_0)} {\symCountVariate_0+\frac{3}{2}\sum_{\symRegVal=1}^{\symRegRange}\frac{\symCountVariate_\symRegVal}{2^\symRegVal} + \frac{\symCountVariate_{\symRegRange+1}}{2^{\symRegRange+1}}} \leq \symPoissonRateEstimate \leq \frac{\symNumReg (\symNumReg-\symCountVariate_0)} {\sum_{\symRegVal=0}^{\symRegRange} \frac{\symCountVariate_\symRegVal}{2^\symRegVal}}. \end{equation} If the cardinality is in the intermediate range, where $\symCountVariate_0=\symCountVariate_{\symRegRange+1}=0$, the lower and the upper bound differ only by a constant factor, and both are proportional to the harmonic mean of $\lbrace \symNumReg 2^{\symRegValVariate_1},\ldots,\symNumReg 2^{\symRegValVariate_\symNumReg} \rbrace$. Hence, consistent application of the \ac{ML} method would have directly suggested a cardinality estimator proportional to the harmonic mean, without knowing the raw estimator \eqref{equ:raw_estimator} in advance. The history of the \ac{HLL} algorithm shows that the raw estimator was only found after several attempts based on the geometric mean \cite{Flajolet2007, Durand2003}.
For $\symRegRange=0$, which corresponds to the linear counting case, the \ac{ML} estimator can be found by analytical means. The result is exactly the linear counting estimator \eqref{equ:linear_counting_estimator}, which makes us optimistic that the \ac{ML} method in combination with the Poisson model also works well for the more general \ac{HLL} case $\symRegRange>0$. Since $\symFunc$ is convex, Newton-Raphson iteration and the secant method \cite{Press2007} will both converge to the root, provided that the function is positive at the chosen starting points. Even though the secant method converges more slowly, a single iteration is simpler to calculate, as it does not require evaluating the first derivative. Possible starting points are zero and the lower bound in \eqref{equ:inequality}. The iteration can be stopped if the relative increment is below a certain limit $\symStopDelta$. Since the expected estimation error scales according to $1/\sqrt{\symNumReg}$, it makes sense to choose $\symStopDelta = \symStopEpsilon/\sqrt{\symNumReg}$ with some constant $\symStopEpsilon$. For our results presented below we used $\symStopEpsilon = 10^{-2}$. In practice, only a handful of iterations are necessary to satisfy the stop criterion.
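To make the whole procedure concrete, the following self-contained Python sketch brackets the root with the bounds \eqref{equ:inequality} and then applies plain bisection, a deliberate simplification of the secant method described above; the stop criterion mirrors $\symStopDelta=\symStopEpsilon/\sqrt{\symNumReg}$. All names are ours.

```python
import math

def ml_cardinality(counts, m, q, eps=1e-2):
    """ML cardinality estimate from the register value histogram.

    counts[k] = number of registers with value k (k = 0..q+1).
    The root of the score function is bracketed by the bounds from
    the inequality above and located by bisection; iteration stops
    once the bracket's relative width drops below eps/sqrt(m)."""
    if counts[q + 1] == m:          # all registers saturated
        return math.inf

    def score(rate):
        s = 0.0
        for k in range(1, q + 2):
            x = rate / (m * 2.0 ** min(k, q))
            s += counts[k] * x / math.expm1(x)
        return s - (rate / m) * sum(counts[k] / 2.0 ** k
                                    for k in range(q + 1))

    num = m * (m - counts[0])
    hi = num / sum(counts[k] / 2.0 ** k for k in range(q + 1))
    lo = num / (counts[0]
                + 1.5 * sum(counts[k] / 2.0 ** k for k in range(1, q + 1))
                + counts[q + 1] / 2.0 ** (q + 1))
    while hi - lo > (eps / math.sqrt(m)) * lo:
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:        # score is monotone decreasing
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection converges more slowly than the secant method but needs no care about starting points, which keeps the sketch short.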
\subsection{Results} \label{sec:maximum_likelihood_estimation_error}
\begin{figure}
\caption{Relative error of the \ac{ML} estimator for \boldmath$\symPrecision = 12$ and $\symRegRange=20$.}
\label{fig:max_likelihood_estimation_error_12_20}
\end{figure} \begin{figure}
\caption{Relative error of the \ac{ML} estimator for \boldmath$\symPrecision = 22$ and $\symRegRange=10$.}
\label{fig:max_likelihood_estimation_error_22_10}
\end{figure}
We have investigated the estimation error of the \ac{ML} estimator for the same two \ac{HLL} configurations as for the improved estimator. \cref{fig:max_likelihood_estimation_error_12_20,fig:max_likelihood_estimation_error_22_10} look very similar to \cref{fig:raw_corrected_estimation_error_12_20,fig:raw_corrected_estimation_error_22_10}, respectively. For $\symPrecision=12$, $\symRegRange=20$ the \ac{ML} estimator has a somewhat smaller median bias for cardinalities around \num{200}. In addition, the standard deviation of the relative error is marginally better than that of the improved estimator for cardinalities between \num{e3} and \num{e4}, as shown in \cref{fig:stdev_comparison}. For $\symPrecision=22$, $\symRegRange=10$ we see that the \ac{ML} estimator does not show the small oscillating bias of the improved estimator. Since none of the observed improvements is very relevant in practice, the improved estimator should be preferred over the \ac{ML} method, because it leads to a simpler and faster algorithm.
\section{Joint Estimation} \label{sec:cardinality_estimation_set_intersections}
While the union of two sets that are represented by \ac{HLL} sketches can be straightforwardly obtained by taking the register-wise maximums, the computation of cardinalities for other set operations like intersections and relative complements is more challenging. The conventional approach uses the inclusion-exclusion principle \begin{equation} \label{equ:conventional_approach} \begin{aligned} &\left\vert\symSetS_1 \setminus \symSetS_2\right\vert = \left\vert\symSetS_1 \cup \symSetS_2\right\vert - \left\vert\symSetS_2\right\vert, \quad \left\vert\symSetS_2 \setminus \symSetS_1\right\vert = \left\vert\symSetS_1 \cup \symSetS_2\right\vert - \left\vert\symSetS_1\right\vert, \\ &\left\vert\symSetS_1 \cap \symSetS_2\right\vert = \left\vert\symSetS_1\right\vert + \left\vert\symSetS_2\right\vert - \left\vert\symSetS_1 \cup \symSetS_2\right\vert. \end{aligned} \end{equation} It allows one to express set operation cardinalities in terms of the union cardinality. However, this approach can lead to very large estimation errors, especially if the result is small compared to the operand cardinalities \cite{Dasgupta2015}. In the worst case, the estimate can even be negative unless it is artificially restricted to nonnegative values.
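The computation itself is trivial; the small Python helper below (a hypothetical function of our own naming) only serves to make the failure mode explicit:

```python
def inclusion_exclusion(card1, card2, card_union):
    """Set operation cardinalities from three individual estimates.

    Returns (|S1 \\ S2|, |S2 \\ S1|, |S1 & S2|).  Note that the
    intersection estimate is not guaranteed to be nonnegative."""
    a = card_union - card2
    b = card_union - card1
    x = card1 + card2 - card_union
    return a, b, x
```

Feeding in a union estimate that overshoots the sum of the operand estimates, e.g. `inclusion_exclusion(1000, 800, 1900)`, yields a negative intersection estimate of $-100$.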
Motivated by the good results we have obtained for a single \ac{HLL} sketch using the \ac{ML} method in combination with the Poisson approximation, we applied the same approach to the estimation of set operation result sizes. Assume two given \ac{HLL} sketches with register values $\boldsymbol{\symRegValVariate}_1=(\symRegValVariate_{11},\ldots,\symRegValVariate_{1\symNumReg})$ and $\boldsymbol{\symRegValVariate}_2=(\symRegValVariate_{21},\ldots,\symRegValVariate_{2\symNumReg})$ representing sets $\symSetS_1$ and $\symSetS_2$, respectively. The goal is to find estimates for the cardinalities of the pairwise disjoint sets $\symSetA = \symSetS_1\setminus\symSetS_2$, $\symSetB = \symSetS_2\setminus\symSetS_1$, and $\symSetX = \symSetS_1\cap\symSetS_2$. The Poisson approximation allows us to assume that pairwise distinct elements are inserted into the sketches representing $\symSetS_1$ and $\symSetS_2$ at rates $\symPoissonRate_\symSetASuffix$ and $\symPoissonRate_\symSetBSuffix$, respectively. In addition, we assume that other distinct elements are inserted into both sketches simultaneously at rate $\symPoissonRate_\symSetXSuffix$. We expect that good estimates $\symPoissonRateEstimate_\symSetASuffix$, $\symPoissonRateEstimate_\symSetBSuffix$, and $\symPoissonRateEstimate_\symSetXSuffix$ for the rates are also good estimates for the cardinalities $\vert\symSetA\vert$, $\vert\symSetB\vert$, and $\vert\symSetX\vert$.
\subsection{Joint Log-Likelihood Function} In order to apply the \ac{ML} method, we need to find the joint probability distribution of both sketches. Under the Poisson model the individual registers are independent and identically distributed. Therefore, we first derive the joint probability distribution for a single register that has value $\symRegValVariate_1$ in the first sketch, which represents $\symSetS_1$, and value $\symRegValVariate_2$ in the second, which represents $\symSetS_2$. The first sketch can be thought of as being constructed by merging two sketches representing $\symSetA$ and $\symSetX$, respectively. Analogously, the sketch for $\symSetS_2$ could have been obtained from sketches for $\symSetB$ and $\symSetX$. Let $\symRegValVariate_\symSetASuffix$, $\symRegValVariate_\symSetBSuffix$, and $\symRegValVariate_\symSetXSuffix$ be the values of the considered register in the sketches for $\symSetA$, $\symSetB$, and $\symSetX$, respectively. The corresponding values in the sketches for $\symSetS_1$ and $\symSetS_2$ are given by $\symRegValVariate_1 = \max(\symRegValVariate_\symSetASuffix, \symRegValVariate_\symSetXSuffix)$ and $\symRegValVariate_2 = \max(\symRegValVariate_\symSetBSuffix, \symRegValVariate_\symSetXSuffix)$. 
Since $\symSetA$, $\symSetB$, and $\symSetX$ are disjoint and therefore $\symRegValVariate_\symSetASuffix$, $\symRegValVariate_\symSetBSuffix$, and $\symRegValVariate_\symSetXSuffix$ are independent, the joint cumulative probability function of $\symRegValVariate_1$ and $\symRegValVariate_2$ is \begin{multline} \label{equ:joint_cdf} \symProbability( \symRegValVariate_1 \leq \symRegVal_1 \wedge \symRegValVariate_2 \leq \symRegVal_2 ) = \\ \symProbability( \symRegValVariate_\symSetASuffix \leq \symRegVal_1) \, \symProbability( \symRegValVariate_\symSetBSuffix \leq \symRegVal_2) \, \symProbability( \symRegValVariate_\symSetXSuffix \leq \min(\symRegVal_1, \symRegVal_2) ) = \\ \begin{cases} 0 & \symRegVal_1 < 0 \vee \symRegVal_2 < 0 \\ e^{ - \frac{\symPoissonRate_{\symSetASuffix}}{\symNumReg 2^{\symRegVal_1}} - \frac{\symPoissonRate_{\symSetBSuffix}}{\symNumReg 2^{\symRegVal_2}} - \frac{\symPoissonRate_{\symSetXSuffix}}{\symNumReg 2^{\min(\symRegVal_1, \symRegVal_2)}} } & 0\leq\symRegVal_1 \leq \symRegRange \wedge 0\leq\symRegVal_2\leq\symRegRange \\ e^{ - \frac{\symPoissonRate_{\symSetBSuffix} + \symPoissonRate_{\symSetXSuffix}}{\symNumReg 2^{\symRegVal_2}} } & 0\leq\symRegVal_2 \leq \symRegRange < \symRegVal_1 \\ e^{ - \frac{\symPoissonRate_{\symSetASuffix} + \symPoissonRate_{\symSetXSuffix}}{\symNumReg 2^{\symRegVal_1}} } & 0\leq\symRegVal_1 \leq \symRegRange < \symRegVal_2 \\ 1 & \symRegRange < \symRegVal_1 \wedge \symRegRange < \symRegVal_2. \end{cases} \end{multline} Here we used that $\symRegValVariate_\symSetASuffix$, $\symRegValVariate_\symSetBSuffix$, and $\symRegValVariate_\symSetXSuffix$ follow \eqref{equ:register_value_distribution} under the Poisson model. 
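The case analysis of the joint cumulative distribution translates directly into code. The Python sketch below (parameter names `la`, `lb`, `lx` for the three rates are ours) evaluates the cases above and, via the usual two-dimensional differencing, the probability that both registers take exact values:

```python
import math

def joint_cdf(k1, k2, la, lb, lx, m, q):
    """P(K1 <= k1 and K2 <= k2) under the Poisson model, by case analysis."""
    if k1 < 0 or k2 < 0:
        return 0.0
    if k1 > q and k2 > q:
        return 1.0
    if k1 > q:                       # only K2 is effectively constrained
        return math.exp(-(lb + lx) / (m * 2.0 ** k2))
    if k2 > q:                       # only K1 is effectively constrained
        return math.exp(-(la + lx) / (m * 2.0 ** k1))
    return math.exp(-la / (m * 2.0 ** k1) - lb / (m * 2.0 ** k2)
                    - lx / (m * 2.0 ** min(k1, k2)))

def joint_pmf(k1, k2, la, lb, lx, m, q):
    """P(K1 = k1 and K2 = k2) via two-dimensional differencing of the CDF."""
    c = lambda a, b: joint_cdf(a, b, la, lb, lx, m, q)
    return c(k1, k2) - c(k1 - 1, k2) - c(k1, k2 - 1) + c(k1 - 1, k2 - 1)
```

Summing the probability mass over all $(\symRegRange+2)^2$ register value pairs telescopes to the corner value of the cumulative distribution, i.e. to $1$, which is a convenient sanity check.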
The corresponding probability mass function is \begin{multline*} \symProbabilityMass(\symRegVal_1,\symRegVal_2) = \symProbability( \symRegValVariate_1 \leq \symRegVal_1 \wedge \symRegValVariate_2 \leq \symRegVal_2 ) - \symProbability( \symRegValVariate_1 \leq \symRegVal_1-1 \wedge \symRegValVariate_2 \leq \symRegVal_2 ) \\ -\symProbability( \symRegValVariate_1 \leq \symRegVal_1 \wedge \symRegValVariate_2 \leq \symRegVal_2-1 ) +\symProbability( \symRegValVariate_1 \leq \symRegVal_1-1 \wedge \symRegValVariate_2 \leq \symRegVal_2-1 ). \end{multline*} Since the values for different registers are independent under the Poisson model, the joint probability mass function for all registers in both sketches is $\symProbabilityMass(\boldsymbol{\symRegVal}_1,\boldsymbol{\symRegVal}_2) = \prod_{\symIndexI = 1}^{\symNumReg} \symProbabilityMass(\symRegVal_{1\symIndexI},\symRegVal_{2\symIndexI})$. The \ac{ML} estimates $\symPoissonRateEstimate_\symSetASuffix$,
$\symPoissonRateEstimate_\symSetBSuffix$, and $\symPoissonRateEstimate_\symSetXSuffix$ can be finally obtained by maximizing the log-likelihood function given by \begingroup \allowdisplaybreaks \begin{multline} \label{equ:log_likelihood_pair} \log \symLikelihood( \symPoissonRate_\symSetASuffix, \symPoissonRate_\symSetBSuffix, \symPoissonRate_\symSetXSuffix \vert \boldsymbol{\symRegValVariate}_1, \boldsymbol{\symRegValVariate}_2 ) = \sum_{\symIndexI = 1}^{\symNumReg} \log(\symProbabilityMass(\symRegValVariate_{1\symIndexI},\symRegValVariate_{2\symIndexI})) = \\ \sum_{\symRegVal=1}^{\symRegRange} \log\left(1-e^{-\frac{\symPoissonRate_\symSetASuffix+\symPoissonRate_\symSetXSuffix}{\symNumReg 2^{\symRegVal}}}\right) \symCountVariate^{<}_{1\symRegVal} + \log\left(1-e^{-\frac{\symPoissonRate_\symSetBSuffix+\symPoissonRate_\symSetXSuffix}{\symNumReg 2^{\symRegVal}}}\right) \symCountVariate^{<}_{2\symRegVal} \\ + \sum_{\symRegVal=1}^{\symRegRange+1} \log\left(1-e^{-\frac{\symPoissonRate_\symSetASuffix}{\symNumReg 2^{\min\left(\symRegVal,\symRegRange\right)}}}\right) \symCountVariate^{>}_{1\symRegVal} + \log\left(1-e^{-\frac{\symPoissonRate_\symSetBSuffix}{\symNumReg 2^{\min\left(\symRegVal,\symRegRange\right)}}}\right) \symCountVariate^{>}_{2\symRegVal} \\ + \sum_{\symRegVal=1}^{\symRegRange+1} \log\left( 1 -e^{-\frac{\symPoissonRate_\symSetASuffix+\symPoissonRate_\symSetXSuffix}{\symNumReg 2^{\min\left(\symRegVal,\symRegRange\right)}}} - e^{-\frac{\symPoissonRate_\symSetBSuffix+\symPoissonRate_\symSetXSuffix}{\symNumReg 2^{\min\left(\symRegVal,\symRegRange\right)}}} + e^{-\frac{\symPoissonRate_\symSetASuffix+\symPoissonRate_\symSetBSuffix+\symPoissonRate_\symSetXSuffix}{\symNumReg 2^{\min\left(\symRegVal,\symRegRange\right)}}} \right) \symCountVariate^{=}_{\symRegVal} \\ - \frac{\symPoissonRate_\symSetASuffix}{\symNumReg} \sum_{\symRegVal=0}^{\symRegRange} \frac{
\symCountVariate^{<}_{1\symRegVal}+
\symCountVariate^{=}_{\symRegVal}+
\symCountVariate^{>}_{1\symRegVal} }{2^{\symRegVal}} - \frac{\symPoissonRate_\symSetBSuffix}{\symNumReg} \sum_{\symRegVal=0}^{\symRegRange} \frac{
\symCountVariate^{<}_{2\symRegVal}+
\symCountVariate^{=}_{\symRegVal}+
\symCountVariate^{>}_{2\symRegVal} }{2^{\symRegVal}} \\ - \frac{\symPoissonRate_\symSetXSuffix}{\symNumReg} \sum_{\symRegVal=0}^{\symRegRange} \frac{
\symCountVariate^{<}_{1\symRegVal}+
\symCountVariate^{=}_{\symRegVal}+
\symCountVariate^{<}_{2\symRegVal} }{2^{\symRegVal}} \end{multline} \endgroup where $\symCountVariate^{<}_{1\symRegVal}$, $\symCountVariate^{>}_{1\symRegVal}$, $\symCountVariate^{<}_{2\symRegVal}$, $\symCountVariate^{>}_{2\symRegVal}$, and $\symCountVariate^{=}_{\symRegVal}$ are defined as \begin{equation} \label{equ:sufficient_joint_statistic} \begin{aligned}
\symCountVariate^{<}_{1\symRegVal}&:=\left|\lbrace\symIndexI\vert\symRegVal=\symRegValVariate_{1\symIndexI}<
\symRegValVariate_{2\symIndexI}\rbrace\right|,&
\symCountVariate^{>}_{1\symRegVal}&:=\left|\lbrace\symIndexI\vert\symRegVal=\symRegValVariate_{1\symIndexI}>
\symRegValVariate_{2\symIndexI}\rbrace\right|, \\
\symCountVariate^{<}_{2\symRegVal}&:=\left|\lbrace\symIndexI\vert\symRegVal=\symRegValVariate_{2\symIndexI}<
\symRegValVariate_{1\symIndexI}\rbrace\right|,&
\symCountVariate^{>}_{2\symRegVal}&:=\left|\lbrace\symIndexI\vert\symRegVal=\symRegValVariate_{2\symIndexI}>
\symRegValVariate_{1\symIndexI}\rbrace\right|, \\
\symCountVariate^{=}_{\symRegVal}&:=\left|\lbrace\symIndexI\vert\symRegVal=\symRegValVariate_{1\symIndexI}=
\symRegValVariate_{2\symIndexI}\rbrace\right| \end{aligned} \end{equation} for $0\leq \symRegVal \leq \symRegRange+1$. These $5(\symRegRange+2)$ values represent a sufficient statistic for estimating $\symPoissonRate_\symSetASuffix$, $\symPoissonRate_\symSetBSuffix$, and $\symPoissonRate_\symSetXSuffix$ and greatly reduce the number of terms and also the evaluation costs of the log-likelihood function. The derived formula is the generalization of \eqref{equ:log_likelihood_single} to two sketches and therefore has a similar structure.
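Extracting the sufficient statistic from two register arrays takes a single linear pass. A minimal Python sketch (function and variable names are ours):

```python
def joint_statistic(z1, z2, q):
    """One pass over two register arrays of equal length, yielding the
    count vectors C1<, C1>, C2<, C2>, C= indexed by k = 0..q+1."""
    size = q + 2
    c1_lt, c1_gt = [0] * size, [0] * size
    c2_lt, c2_gt = [0] * size, [0] * size
    c_eq = [0] * size
    for a, b in zip(z1, z2):
        if a < b:
            c1_lt[a] += 1   # sketch-1 value a is the smaller one
            c2_gt[b] += 1   # sketch-2 value b is the larger one
        elif a > b:
            c1_gt[a] += 1
            c2_lt[b] += 1
        else:
            c_eq[a] += 1
    return c1_lt, c1_gt, c2_lt, c2_gt, c_eq
```

Note that each register pair contributes to exactly two count vectors when the values differ and to one when they are equal.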
\subsection{Numerical Optimization} The \ac{ML} estimates $\symPoissonRateEstimate_\symSetASuffix$, $\symPoissonRateEstimate_\symSetBSuffix$, and $\symPoissonRateEstimate_\symSetXSuffix$ are obtained by maximizing \eqref{equ:log_likelihood_pair}. Since the three parameters are all nonnegative, this is a constrained optimization problem. The transformation $\symPoissonRate=e^{\symPhi}$ removes these constraints and also translates relative accuracy limits into absolute ones, because $\Delta\symPhi = \Delta\symPoissonRate/\symPoissonRate$. Many optimizer implementations allow the definition of absolute limits rather than relative ones. Quasi-Newton methods are commonly used to find the maximum of such multi-dimensional functions. They all require the computation of the gradient, which can be straightforwardly derived for \eqref{equ:log_likelihood_pair}. Among these methods the \ac{BFGS} algorithm \cite{Press2007} is very popular. We used the implementation provided by the Dlib C++ library \cite{King2009} for our experiments. Good initial guesses are important to ensure fast convergence for any optimization algorithm. An obvious choice is given by the cardinality estimates obtained by applying the inclusion-exclusion principle \eqref{equ:conventional_approach}. However, in order to ensure that their logarithms are all defined, we require that the initial values are not smaller than 1. The optimization loop is continued until the relative changes of $\symPoissonRate_\symSetASuffix$, $\symPoissonRate_\symSetBSuffix$, and $\symPoissonRate_\symSetXSuffix$ are all smaller than a predefined threshold. For the results presented below we again used $\symStopDelta=\symStopEpsilon/\sqrt{\symNumReg}$ with $\symStopEpsilon = 10^{-2}$. A few tens of iterations are typically necessary to satisfy this stop criterion for starting points determined by the inclusion-exclusion principle.
\subsection{Results} \begin{table*} \centering \caption{The cardinalities of \boldmath$\symSetA=\symSetS_1\setminus\symSetS_2$, $\symSetB=\symSetS_2\setminus\symSetS_1$, $\symSetX=\symSetS_1\cap\symSetS_2$, and $\symSetU=\symSetS_1\cup\symSetS_2$ have been estimated from 3000 different \ac{HLL} sketch pairs with $\symPrecision=16$ and $\symRegRange=16$ representing randomly generated sets $\symSetS_1$ and $\symSetS_2$ with fixed intersection and relative complement cardinalities. We have determined the \acs{RMSE} for the inclusion-exclusion principle and the \ac{ML} approach together with the corresponding improvement factor for 40 different cases.} \label{tbl:joint_estimation_results} \csvreader[before reading=\scriptsize, tabular=rrrrrrrrrrrrrrrr, table head= \toprule & \multicolumn{3}{c}{true cardinalities} & \multicolumn{4}{c}{\acs{RMSE} inclusion-exclusion} & \multicolumn{4}{c}{\acs{RMSE} maximum likelihood} & \multicolumn{4}{c}{\acs{RMSE} improvement factor} \\ \cmidrule(l){2-4} \cmidrule(l){5-8} \cmidrule(l){9-12} \cmidrule(l){13-16} &
$|\symSetA|$ &
$|\symSetB|$ &
$|\symSetX|$ &
$|\symSetA|$ &
$|\symSetB|$ &
$|\symSetX|$ &
$|\symSetU|$ &
$|\symSetA|$ &
$|\symSetB|$ &
$|\symSetX|$ &
$|\symSetU|$ &
$|\symSetA|$ &
$|\symSetB|$ &
$|\symSetX|$ &
$|\symSetU|$ \\\midrule, table foot=\bottomrule ] {joint_cardinality_calculation.csv}{ trueCardA=\trueCardA, trueCardB=\trueCardB, trueCardX=\trueCardX, trueJaccardIdx=\trueJaccardIdx, trueRatio=\trueRatio, avgNumFunctionEvaluations=\avgNumFunctionEvaluations, improvementRmseA=\improvementRmseA, improvementRmseB=\improvementRmseB, improvementRmseX=\improvementRmseX, improvementRmseABX=\improvementRmseABX, inclExclRMSEA=\inclExclRMSEA, maxLikeRMSEA=\maxLikeRMSEA, inclExclRMSEB=\inclExclRMSEB, maxLikeRMSEB=\maxLikeRMSEB, inclExclRMSEX=\inclExclRMSEX, maxLikeRMSEX=\maxLikeRMSEX, inclExclRMSEABX=\inclExclRMSEABX, maxLikeRMSEABX=\maxLikeRMSEABX}{ \thecsvrow & \num{\trueCardA} & \num{\trueCardB} & \num{\trueCardX} & \numformat{\inclExclRMSEA} & \numformat{\inclExclRMSEB} & \numformat{\inclExclRMSEX} & \numformat{\inclExclRMSEABX} & \numformat{\maxLikeRMSEA} & \numformat{\maxLikeRMSEB} & \numformat{\maxLikeRMSEX} & \numformat{\maxLikeRMSEABX} & \numformattwo{\improvementRmseA} & \numformattwo{\improvementRmseB} & \numformattwo{\improvementRmseX} & \numformattwo{\improvementRmseABX} } \end{table*}
To evaluate the estimation error of the new joint cardinality estimation approach we have randomly generated \num{3000} independent pairs of sketches for which the relative complement cardinalities $|\symSetA|$ and $|\symSetB|$ and the intersection cardinality $|\symSetX|$ are known. Each pair is constructed by randomly generating three \ac{HLL} sketches filled with $|\symSetA|$, $|\symSetB|$, and $|\symSetX|$ distinct elements, respectively. Then we merged the first with the third and the second with the third to get sketches for $\symSetS_1 = \symSetA\cup\symSetX$ and $\symSetS_2=\symSetB\cup\symSetX$, respectively.
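This construction exploits that merging \ac{HLL} sketches is a register-wise maximum. The toy Python sketch below uses a simplified register update with Python's built-in `hash` as a stand-in for the uniform hash function assumed throughout, not the paper's exact scheme; it reproduces the identity that the sketch of $\symSetA\cup\symSetX$ equals the merge of the sketches of $\symSetA$ and $\symSetX$:

```python
def hll_sketch(items, p=8, q=16):
    """Toy HLL sketch with m = 2**p registers and values capped at q+1."""
    m = 1 << p
    regs = [0] * m
    for it in items:
        h = hash(it) & ((1 << 64) - 1)
        idx = h & (m - 1)        # low p bits select the register
        w = h >> p               # remaining bits determine the rank
        rank = 1                 # position of the first one-bit, capped
        while rank <= q and not (w & 1):
            w >>= 1
            rank += 1
        regs[idx] = max(regs[idx], rank)
    return regs

def hll_merge(r1, r2):
    """The sketch of a union is the register-wise maximum."""
    return [max(a, b) for a, b in zip(r1, r2)]
```

Because every register stores a maximum over the inserted items, inserting the items of two disjoint sets into one sketch and merging two separate sketches yield identical register states.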
\cref{tbl:joint_estimation_results} shows the results for \ac{HLL} sketches with parameters $\symPrecision=16$ and $\symRegRange=16$. For different cardinality configurations $|\symSetA|$, $|\symSetB|$, and $|\symSetX|$ we have compared the conventional approach using the inclusion-exclusion principle with the new joint \ac{ML} approach. The considered cases also include cardinalities that are small compared to the number of registers, in order to demonstrate that the new approach covers the small cardinality range where many registers are still in their initial state. We have determined the relative \ac{RMSE} from the estimates for $|\symSetA| = |\symSetS_1\setminus\symSetS_2|$, $|\symSetB| = |\symSetS_2\setminus\symSetS_1|$, and $|\symSetX| = |\symSetS_1\cap\symSetS_2|$ for all \num{3000} generated examples. In addition, we investigated the estimation error of $\symPoissonRateEstimate_\symSetASuffix+\symPoissonRateEstimate_\symSetBSuffix+\symPoissonRateEstimate_\symSetXSuffix$ for the union cardinality $|\symSetU| = |\symSetS_1\cup\symSetS_2|$. We also calculated the improvement factor, which we defined as the \ac{RMSE} ratio between both approaches. Since we only observed values greater than one, the new \ac{ML} estimation approach improves the precision in all investigated cases. For some cases the improvement factor is even clearly greater than two. Due to the square root scaling of the error, this means that the conventional approach would need four times more registers to achieve the same error. As the results suggest, the new method works well over the full cardinality range without the need for special handling of small or large cardinalities. Obviously, the joint estimation algorithm is also able to reduce the estimation error for unions by a significant amount.
A reason why the \ac{ML} method performs better than the inclusion-exclusion method is that the latter only uses a fraction of the available information given by the sufficient statistic \eqref{equ:sufficient_joint_statistic}, because the corresponding estimator can be expressed as a function of just the three vectors $(\boldsymbol\symCountVariate^{<}_{1} + \boldsymbol\symCountVariate^{=} + \boldsymbol\symCountVariate^{>}_{1})$, $(\boldsymbol\symCountVariate^{<}_{2} + \boldsymbol\symCountVariate^{=} + \boldsymbol\symCountVariate^{>}_{2})$, and $(\boldsymbol\symCountVariate^{>}_{1} + \boldsymbol\symCountVariate^{=} + \boldsymbol\symCountVariate^{>}_{2})$. In contrast, the \ac{ML} method uses all the information as it incorporates each individual value of the sufficient statistic.
\section{Outlook} \label{sec:future_work}
As we have shown, the \ac{ML} method is able to improve the cardinality estimates for results of set operations between two \ac{HLL} sketches. Unfortunately, joint cardinality estimation is much more expensive than for a single sketch, because it requires maximization of a multi-dimensional function. Since we have found the improved estimator which is almost as precise as the \ac{ML} estimator for the single sketch case, we could imagine that there also exists a faster algorithm for joint cardinality estimation of two sketches. It is expected that such a new algorithm makes use of all the information given by the sufficient statistic \eqref{equ:sufficient_joint_statistic}.
The \ac{ML} method can also be used to estimate distance measures such as the Jaccard distance of two sets that are represented as \ac{HLL} sketches. This directly leads to the question whether the \ac{HLL} algorithm could be used for locality-sensitive hashing \cite{Leskovec2014, Wang2014}. The \ac{HLL} algorithm itself can be regarded as a hashing algorithm as it maps sets to register values. For sufficiently large cardinalities we can use the Poisson approximation and assume that the number of zero-valued \ac{HLL} registers can be ignored. Furthermore, if $\symPrecision+\symRegRange$ is chosen large enough, the number of saturated registers can be ignored as well. As a consequence, we can simplify \eqref{equ:joint_cdf} and assume that $\symRegValVariate_1$ and $\symRegValVariate_2$ are distributed according to \begin{equation*} \symProbability( \symRegValVariate_1 \leq \symRegVal_1 \wedge \symRegValVariate_2 \leq \symRegVal_2 ) = e^{ - \frac{\symPoissonRate_{\symSetASuffix}}{\symNumReg 2^{\symRegVal_1}} - \frac{\symPoissonRate_{\symSetBSuffix}}{\symNumReg 2^{\symRegVal_2}} - \frac{\symPoissonRate_{\symSetXSuffix}}{\symNumReg 2^{\min(\symRegVal_1, \symRegVal_2)}} } \end{equation*} which yields \begin{multline*} \symProbability(\symRegValVariate_1=\symRegValVariate_2) = \sum_{\symRegVal=-\infty}^\infty \symProbability(\symRegValVariate_1 = \symRegVal \wedge \symRegValVariate_2 = \symRegVal) = \\ \sum_{\symRegVal = -\infty}^{\infty} e^{-\frac{\symPoissonRate_\symSetASuffix + \symPoissonRate_\symSetBSuffix + \symPoissonRate_\symSetXSuffix} {\symNumReg 2^\symRegVal} } \left( 1 - e^{-\frac{\symPoissonRate_\symSetASuffix + \symPoissonRate_\symSetXSuffix} {\symNumReg 2^\symRegVal} } - e^{-\frac{\symPoissonRate_\symSetBSuffix + \symPoissonRate_\symSetXSuffix} {\symNumReg 2^\symRegVal} } + e^{-\frac{\symPoissonRate_\symSetASuffix + \symPoissonRate_\symSetBSuffix + \symPoissonRate_\symSetXSuffix} {\symNumReg 2^\symRegVal} } \right)
\end{multline*} for the probability that a register has the same value in both sketches. Using the approximation $\sum_{\symRegVal=-\infty}^\infty e^{-\frac{\symX}{2^\symRegVal}} - e^{-\frac{\symY}{2^\symRegVal}} \approx 2\symAlpha_\infty (\log(\symY)-\log(\symX))$, which is obtained by integrating $\symPowerSeriesFunc(\symX)\approx 1$ on both sides, we get \begin{equation} \symProbability(\symRegValVariate_1=\symRegValVariate_2) \approx 1 + 2\symAlpha_\infty \log\left( 1 - \textstyle\frac{1}{2} \symDistanceMeasure + \textstyle\frac{1}{4} \symDistanceMeasure^2 \textstyle\frac{ \symPoissonRate_\symSetASuffix \symPoissonRate_\symSetBSuffix }{\left( \symPoissonRate_\symSetASuffix+ \symPoissonRate_\symSetBSuffix \right)^2} \right) \end{equation} where $\symDistanceMeasure = \frac{\symPoissonRate_\symSetASuffix +\symPoissonRate_\symSetBSuffix} {\symPoissonRate_\symSetASuffix+\symPoissonRate_\symSetBSuffix+ \symPoissonRate_\symSetXSuffix}$
corresponds to the Jaccard distance. Since $ \frac{ \symPoissonRate_\symSetASuffix \symPoissonRate_\symSetBSuffix }{\left( \symPoissonRate_\symSetASuffix+ \symPoissonRate_\symSetBSuffix \right)^2} $ is always in the range $[0,\frac{1}{4}]$, the probability for equal register values can be bounded by $ 1 + 2\symAlpha_\infty \log( 1 - \textstyle\frac{1}{2} \symDistanceMeasure ) \lesssim \symProbability(\symRegValVariate_1=\symRegValVariate_2) \lesssim 1 + 2\symAlpha_\infty \log( 1 - \textstyle\frac{1}{2} \symDistanceMeasure + \textstyle\frac{1}{16} \symDistanceMeasure^2 ) $ as shown in \cref{fig:equal_register_probability}. \begin{figure}
\caption{The approximate probability range of equal register values as a function of the Jaccard distance.}
\label{fig:equal_register_probability}
\end{figure} This dependency on the Jaccard distance makes the \ac{HLL} algorithm an interesting candidate for locality-sensitive hashing. Furthermore, the method described in \cref{sec:cardinality_estimation_set_intersections} would allow a more detailed estimation of the Jaccard distance by using the estimates for intersection and union sizes. This could be used for additional more precise filtering when searching for similar items. Since \ac{HLL} sketches can be efficiently constructed, because only a single hash function evaluation is needed for each item, the preprocessing step would be very fast. In contrast, preprocessing is very costly for minwise hashing, because of the many required permutations \cite{Li2011}.
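The quality of the closed-form approximation for $\symProbability(\symRegValVariate_1=\symRegValVariate_2)$ is easy to check numerically by truncating the doubly infinite sum at a generous range and comparing with the formula involving the Jaccard distance, using $\symAlpha_\infty = 1/(2\log 2)$. A Python sketch of this check (the truncation range and all names are our choices; we set $\symNumReg=1$, which for $\symNumReg=2^\symPrecision$ merely shifts the summation index):

```python
import math

ALPHA_INF = 1.0 / (2.0 * math.log(2.0))

def prob_equal_numeric(la, lb, lx, kmin=-60, kmax=80):
    """Truncated evaluation of the doubly infinite sum for P(K1 = K2)."""
    p = 0.0
    for k in range(kmin, kmax):
        u = 2.0 ** k
        e = math.exp(-(la + lb + lx) / u)
        p += e * (1.0 - math.exp(-(la + lx) / u)
                      - math.exp(-(lb + lx) / u) + e)
    return p

def prob_equal_approx(la, lb, lx):
    """Closed-form approximation in terms of the Jaccard distance d."""
    d = (la + lb) / (la + lb + lx)
    r = la * lb / (la + lb) ** 2
    return 1.0 + 2.0 * ALPHA_INF * math.log(1.0 - d / 2.0
                                            + (d * d / 4.0) * r)
```

For moderate rates the two evaluations agree to well below one percentage point, consistent with the small oscillation amplitude of $\symPowerSeriesFunc$.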
\section{Conclusion} \label{sec:conclusion} Based on the Poisson approximation we have presented two new methods to estimate the number of distinct elements from \ac{HLL} sketches. Unlike previous approaches that use patchworks of different estimators or rely on empirically determined data, the methods presented here are inherently unbiased over the full cardinality range. The first extends the original estimator by theoretically derived correction terms in order to obtain an improved estimator that can be straightforwardly translated into a fast algorithm. Due to its simplicity we believe that it has the potential to become the standard textbook cardinality estimator for \ac{HLL} sketches. The second estimation approach is based on the \ac{ML} method and can also be applied to two sketches, which significantly improves cardinality estimates for the corresponding intersections, relative complements, and unions compared to the conventional approach using the inclusion-exclusion principle.
\end{document}
\begin{document}
\title{Biderivations, commuting mappings and (2-)local derivations of $\mathbb{N}$-graded Lie algebras of maximal class}
\begin{quotation} \small\noindent \textbf{Abstract}:
In Fialowski's classification of algebras of maximal class, there are three Lie algebras of maximal class with 1-dimensional homogeneous components: $\mathfrak{m}_0$, $L_1$ and $\mathfrak{m}_2$. In this paper, we study their biderivations by considering the embedding mappings into the derivation algebras. We then determine the commuting mappings on these algebras as an application of biderivations. Finally,
local and 2-local derivations of these three algebras are characterized with respect to the given gradings.
\noindent{\textbf{Keywords}}: $\mathbb{N}$-graded Lie algebras of maximal class; biderivations; commuting mappings; (2-)local derivations
\noindent{\textbf{Mathematics Subject Classification 2020}}: 17B40, 17B65, 17B70
\end{quotation} \setcounter{section}{-1} \section{Introduction} In the past decades, infinite-dimensional Lie algebras have played important roles in the study of Lie theory because of their applications in mathematical physics. Among infinite-dimensional Lie algebras, $\mathbb{N}$-graded ones have attracted much attention. The theory of $\mathbb{N}$-graded Lie algebras is closely related to that of nilpotent Lie algebras. For example, it is obvious that any finite-dimensional $\mathbb{N}$-graded Lie algebra is nilpotent. Infinite-dimensional ones are called residually nilpotent Lie algebras. Shalev and Zelmanov \cite{cc} introduced the definition of the coclass of a finitely generated and residually nilpotent Lie algebra $\mathfrak{g}$, in analogy with the case of (pro-)$p$-groups, as $cc(\mathfrak{g})=\sum_{i\geq1}(\mathrm{dim}(\mathfrak{g}^{i}/\mathfrak{g}^{i+1})-1)$ (possibly infinite), where $\mathfrak{g}^{i}$ denotes the $i$-th term of the lower central series of $\mathfrak{g}$. Lie algebras of coclass 1 are called Lie algebras of maximal class (also called narrow or thin Lie algebras). Fialowski \cite{fc} classified all infinite-dimensional $\mathbb{N}$-graded two-generated Lie algebras $\mathfrak{g}=\bigoplus_{i=1}^{\infty}\mathfrak{g}_i$ with 1-dimensional homogeneous components $\mathfrak{g}_i$. According to her classification, up to isomorphism there are only three $\mathbb{N}$-graded Lie algebras of maximal class with 1-dimensional homogeneous components: $\mathfrak{m}_0$, $L_1$ and $\mathfrak{m}_2$, where $L_1$ is the positive part of the Witt algebra $\mathrm{Der}(\mathbb{F}[x])$. This result was also rediscovered in \cite{cc}. Furthermore, the cohomology of these algebras with coefficients in the trivial module was studied in \cite{ft}. The adjoint cohomology and deformations were studied in \cite{m0,L1,m2,M1,M2}.
The theory of biderivations and commuting mappings was introduced in \cite{b} for associative algebras, and developed for Lie algebras in \cite{c2016,ccz2019,dt2019,e2021,jt2020} and for Lie superalgebras in \cite{ccc2021,tmc2020,ycc2021}. It happens quite often that all biderivations are inner; for example, see \cite{ex1,ex2,ex4,ccc2021,ccz2019,ycc2021,zcz2020}. In particular, it was proved in \cite{ex1} that all biderivations of a finite-dimensional complex simple Lie algebra, without the restriction of being skew-symmetric, are inner biderivations. However, compared with the finite-dimensional case, less work has been done for infinite-dimensional Lie algebras. In this paper,
biderivations and commuting mappings of $\mathfrak{m}_0$, $L_1$ and $\mathfrak{m}_2$ are studied. First, we determine the biderivations by considering embeddings of these algebras into their derivation algebras. Moreover, we characterize commuting mappings by means of biderivations.
Other generalizations of derivations in which we are interested are local and 2-local derivations. The concepts of local and 2-local derivations were introduced in \cite{K} and \cite{P}, respectively.
In recent years, local and 2-local derivations have aroused the interest of many authors, see \cite{AKR,l1,M0}. In particular, it was proved in \cite{AKR} that a finite-dimensional nilpotent Lie algebra $L$ with $\mathrm{dim}\ L\geq2$ always admits a 2-local derivation which is not a derivation. However, this does not hold for infinite-dimensional $\mathbb{N}$-graded Lie algebras. In fact, every 2-local derivation of $L_1$ is a derivation \cite{l1}. In this paper, we prove that every local and every 2-local derivation of $\mathfrak{m}_0$, $L_1$ and $\mathfrak{m}_2$ is a derivation.
\section{Preliminaries} Throughout this paper, the ground field $\mathbb{F}$ is an algebraically closed field of characteristic zero and all vector spaces and algebras are over $\mathbb{F}$. A Lie algebra over $\mathbb{F}$ is a skew-symmetric algebra whose multiplication satisfies the Jacobi identity. Lie algebras of coclass 1 are called \emph{Lie algebras of maximal class}. Up to isomorphism, there are only three $\mathbb{N}$-graded Lie algebras of maximal class with 1-dimensional homogeneous components (see \cite{fc}):
$\bullet$ $\mathfrak{m}_0$: the Lie algebra $\mathfrak{m}_0$ is an $\mathbb{N}$-graded Lie algebra $\mathfrak{m}_{0}=\bigoplus_{i=1}^{\infty} (\mathfrak{m}_0)_i$ with 1-dimensional graded components $(\mathfrak{m}_0)_i$ and generated by the components of degree 1 and 2. For a basis $e_i$ of $(\mathfrak{m}_0)_i$, the non-trivial brackets are $[e_1,e_i]=e_{i+1}$ for all $i\geq 2$.
$\bullet$ $L_1$: the Lie algebra $L_{1}$ is an $\mathbb{N}$-graded Lie algebra $L_{1}=\bigoplus_{i=1}^{\infty}L_{1}^{(i)}$ with 1-dimensional graded components $L_{1}^{(i)}$ and generated by the components of degree 1 and 2. For a basis $e_i$ of $L_{1}^{(i)}$, the non-trivial brackets are $[e_i,e_j]=(j-i)e_{i+j}$ for all $i,j\geq1$.
$\bullet$ $\mathfrak{m}_2$: the Lie algebra $\mathfrak{m}_2$ is an $\mathbb{N}$-graded Lie algebra $\mathfrak{m}_2=\bigoplus_{i=1}^{\infty}(\mathfrak{m}_2)_i$ with 1-dimensional graded components $(\mathfrak{m}_2)_i$ and generated by the components of degree 1 and 2. For a basis $e_i$ of $(\mathfrak{m}_2)_i$, the non-trivial brackets are $[e_1,e_i]=e_{i+1}$ for all $i\geq2$, $[e_2,e_j]=e_{j+2}$ for all $j\geq3$.
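As a quick computational sanity check (ours, not part of the paper), the three bracket tables above can be verified to satisfy the Jacobi identity on an initial segment of the basis; by multilinearity and skew-symmetry it suffices to test distinct basis triples. The following Python sketch encodes elements as dictionaries mapping basis indices to coefficients; all helper names are our own.

```python
# Structure constants of m_0, L_1 and m_2 on the basis {e_i : i >= 1}.
from itertools import combinations

def bk_m0(i, j):
    # m_0: [e_1, e_i] = e_{i+1} for i >= 2, extended skew-symmetrically
    if i == 1 and j >= 2: return {j + 1: 1}
    if j == 1 and i >= 2: return {i + 1: -1}
    return {}

def bk_L1(i, j):
    # L_1: [e_i, e_j] = (j - i) e_{i+j}
    return {i + j: j - i} if i != j else {}

def bk_m2(i, j):
    # m_2: [e_1, e_i] = e_{i+1} (i >= 2), [e_2, e_j] = e_{j+2} (j >= 3)
    if i == j: return {}
    if i > j:  return {k: -c for k, c in bk_m2(j, i).items()}
    if i == 1 and j >= 2: return {j + 1: 1}
    if i == 2 and j >= 3: return {j + 2: 1}
    return {}

def lie(bk, x, y):
    """Bilinear extension of a structure-constant table bk to dicts."""
    out = {}
    for i, a in x.items():
        for j, b in y.items():
            for k, c in bk(i, j).items():
                out[k] = out.get(k, 0) + a * b * c
    return {k: v for k, v in out.items() if v}

def jacobi_holds(bk, n=12):
    """Check [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0 on basis triples."""
    e = lambda i: {i: 1}
    for i, j, k in combinations(range(1, n + 1), 3):
        s = {}
        for x, y, z in ((i, j, k), (j, k, i), (k, i, j)):
            for m, v in lie(bk, e(x), lie(bk, e(y), e(z))).items():
                s[m] = s.get(m, 0) + v
        if any(s.values()):
            return False
    return True
```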
\begin{defn} A linear mapping $D$ of a Lie algebra $\mathfrak{g}$ is called a \emph{derivation} if it satisfies $$D([x,y])=[D(x),y]+[x,D(y)],$$ for all $x,y$ in $\mathfrak{g}$. Denote by $\mathrm{Der}(\mathfrak{g})$ the set of derivations of $\mathfrak{g}$. \end{defn} For $x\in \mathfrak{g}$, the mapping $\mathrm{ad}\ x: y\mapsto [x,y]$ is a derivation of $\mathfrak{g}$, called an \emph{inner derivation}. A standard fact is that $\mathrm{Der}(\mathfrak{g})$ and $\mathrm{ad}\ \mathfrak{g}$ are both Lie subalgebras of $\mathrm{Hom}(\mathfrak{g},\mathfrak{g})$ and that $\mathrm{ad}\ \mathfrak{g}$ is an ideal of $\mathrm{Der}(\mathfrak{g})$. Moreover, the first cohomology with coefficients in the adjoint module is given by $\mathrm{H}^{1}(\mathfrak{g},\mathfrak{g})=\mathrm{Der}(\mathfrak{g})/\mathrm{ad}\ \mathfrak{g}$ (see \cite{Fuks}). A bilinear mapping $f$ of $\mathfrak{g}$ is called \emph{skew-symmetric} if $$f(x,y)=-f(y,x),$$ for all $x,y$ in $\mathfrak{g}$. For a bilinear mapping $f$ of $\mathfrak{g}$ and an element $x$ in $\mathfrak{g}$, we define two linear mappings $L_{f,x}$ and $R_{f,x}$ of $\mathfrak{g}$ by $L_{f,x}(y)=f(x,y)$ and $R_{f,x}(y)=f(y,x)$ for $y\in\mathfrak{g}$. \begin{defn}\label{b} A skew-symmetric bilinear mapping $f$ of a Lie algebra $\mathfrak{g}$ is called a \emph{biderivation} of $\mathfrak{g}$ if $L_{f,x}$ and $R_{f,x}$ are both derivations of $\mathfrak{g}$ for any $x$ in $\mathfrak{g}$. Denote by $\mathrm{BDer}(\mathfrak{g})$ the set of biderivations of $\mathfrak{g}$. \end{defn} Suppose that the mapping $f_{\lambda}: \mathfrak{g}\times \mathfrak{g}\longrightarrow \mathfrak{g}$ is defined by $f_{\lambda}(x,y)=\lambda [x,y]$ for all $x,y\in\mathfrak{g}$, where $\lambda\in\mathbb{F}$. Then it is easy to check that $f_{\lambda}$ is a biderivation of $\mathfrak{g}$. Biderivations of this form are called \emph{inner biderivations}. Denote by $\mathrm{IBDer}(\mathfrak{g})$ the set of inner biderivations of $\mathfrak{g}$.
Suppose that $\mathfrak{g}=\bigoplus_{i=1}^{\infty} \mathfrak{g}_i$ is an $\mathbb{N}$-graded Lie algebra. For a homogeneous element $x\in\mathfrak{g}$, write $\|x\|$ for the degree of $x$.
If $\mathfrak{g}=\bigoplus_{i=1}^{\infty} \mathfrak{g}_i$ has a homogeneous basis $\{e_i\mid i\in\mathbb{N}\}$, we define $e_{i}^{j}\in\mathrm{Hom}(\mathfrak{g},\mathfrak{g})_{j-i}$ and $e^{i,j}_{k}\in\mathrm{Hom}(\mathfrak{g}\wedge\mathfrak{g},\mathfrak{g})_{k-i-j}$, $i<j$, by $e_{i}^{j}:e_i\mapsto e_j$ and $e^{i,j}_{k}:(e_i,e_j)\mapsto e_k$.
It is easy to see that $\mathrm{Der}(\mathfrak{g})$ and $\mathrm{BDer}(\mathfrak{g})$ are $\mathbb{Z}$-graded subspaces of $\mathrm{Hom}(\mathfrak{g},\mathfrak{g})$ and $\mathrm{Hom}(\mathfrak{g}\wedge\mathfrak{g},\mathfrak{g})$ respectively, where the homogeneous components of weight $k$ are given by $$\mathrm{Der}_{k}(\mathfrak{g})=\{\phi\in\mathrm{Der(\mathfrak{g})}\mid\phi(\mathfrak{g}_i)\subseteq \mathfrak{g}_{k+i},i\in\mathbb{N}\}$$ and $$\mathrm{BDer}_{k}(\mathfrak{g})=\{f\in\mathrm{BDer(\mathfrak{g})}\mid f(\mathfrak{g}_i,\mathfrak{g}_j)\subseteq \mathfrak{g}_{k+i+j},i,j\in\mathbb{N}\}.$$ \begin{defn} For a Lie algebra $\mathfrak{g}$, we denote the \emph{center} of $\mathfrak{g}$ by $$\mathrm{C}(\mathfrak{g})=\{x\in\mathfrak{g}\mid [x,\mathfrak{g}]=0\}.$$ The Lie algebra $\mathfrak{g}$ is called \emph{centerless} if $\mathrm{C}(\mathfrak{g})=0$. \end{defn} \begin{rem} It is easy to see that $\mathfrak{m}_0$, $L_1$ and $\mathfrak{m}_2$ are all centerless. \end{rem}
\begin{lem}\label{C} Suppose that $\mathfrak{g}$ is a centerless $\mathbb{N}$-graded Lie algebra. Then $\mathfrak{g}$ can be embedded into
a $\mathbb{Z}$-graded Lie algebra $\mathfrak{\widetilde{g}}$ as an ideal such that
$$\mathrm{Der}(\mathfrak{g})\cong \mathrm{ad}_\mathfrak{g}\ \mathfrak{\widetilde{g}}, \quad
\mathrm{Ann}_{\mathfrak{\widetilde{g}}}\ \mathfrak{g}=
\{x\in\mathfrak{\widetilde{g}}\mid [x,\mathfrak{g}]=0\}=0.$$
Moreover, for any $f\in \mathrm{BDer}_{k}(\mathfrak{g})$, $k\in\mathbb{Z}$, there exists a unique linear mapping $\varphi_f:\ \mathfrak{g}\rightarrow \mathfrak{\widetilde{g}}$ such that, for $x,y\in\mathfrak{g}$, $$f(x,y)=[\varphi_f(x),y]=-[\varphi_f(y),x],$$
where $\|f\|=\|\varphi_f\|$. \end{lem} \begin{proof} Since $\mathfrak{g}$ is centerless, we have $\mathfrak{g}\cong \mathrm{ad}\ \mathfrak{g}\triangleleft\mathrm{Der}(\mathfrak{g})$. Let $D\in \mathrm{Der}(\mathfrak{g})$. If $[D,\mathrm{ad}\ x]=\mathrm{ad}\ D(x)=0$ for all $x\in \mathfrak{g}$, then $D(x)\in \mathrm{C}(\mathfrak{g})=0$, so $D=0$. By identifying $\mathfrak{g}$ with $\mathrm{ad}\ \mathfrak{g}$, we get the isomorphism $\mathrm{Der}(\mathfrak{g})\cong \mathrm{ad}_\mathfrak{g}(\mathrm{Der}(\mathfrak{g}))$. Set $\mathfrak{\widetilde{g}}=\mathrm{Der}(\mathfrak{g})$. We get $\mathrm{Der}(\mathfrak{g})\cong \mathrm{ad}_\mathfrak{g}\ \mathfrak{\widetilde{g}}$ and $\mathrm{Ann}_{\mathfrak{\widetilde{g}}}\ \mathfrak{g}=0$. For $f\in \mathrm{BDer}(\mathfrak{g})$ and $x\in\mathfrak{g}$, we have $L_{f,x}\in\mathrm{Der}(\mathfrak{g})$ by Definition \ref{b}. Then there exists $y_x\in \mathfrak{\widetilde{g}}$ such that $L_{f,x}=\mathrm{ad}_\mathfrak{g}\ y_x$. This $y_x$ is unique since $\mathrm{Ann}_{\mathfrak{\widetilde{g}}}\ \mathfrak{g}=0$. So we get a unique linear mapping $\varphi_f:\ \mathfrak{g}\rightarrow\mathfrak{\widetilde{g}}$ defined by $\varphi_f(x)=y_x$. Then,
for any $x,y\in\mathfrak{g}$, we have $f(x,y)=-f(y,x)=L_{f,x}(y)=[\varphi_{f}(x),y]=-[\varphi_{f}(y),x].$ \end{proof}
\begin{defn} A linear mapping $\phi$ of a Lie algebra $\mathfrak{g}$ is called \emph{commuting} if $[\phi(x),x]=0$ for any $x$ in $\mathfrak{g}$. \end{defn} An important application of commuting mappings is to construct biderivations (for example, see \cite{ex1,ex2,ex4}), as shown in the following lemma. \begin{lem}\label{linear}\cite{ex4} Suppose that $\mathfrak{g}$ is a Lie algebra and $\phi$ is a commuting mapping of $\mathfrak{g}$. Then the bilinear mapping $\phi_f$, defined by $\phi_{f}(x,y)=[x,\phi(y)]$ for $x,y\in \mathfrak{g}$, is a biderivation of $\mathfrak{g}$. \end{lem} We now recall the notions of local and 2-local derivations of Lie algebras.
\begin{defn} A linear mapping $\Delta$ of a Lie algebra $\mathfrak{g}$ is called a \emph{local derivation} of $\mathfrak{g}$ if for every $x\in\mathfrak{g}$, there exists a derivation $D_{x}$ of $\mathfrak{g}$ (depending on $x$) such that $\Delta(x)=D_{x}(x)$. Denote by $\mathrm{LDer}(\mathfrak{g})$ the set of local derivations of $\mathfrak{g}$. \end{defn}
\begin{defn}\label{bl} A linear mapping $\Delta$ of a Lie algebra $\mathfrak{g}$ is called a \emph{ 2-local derivation} of $\mathfrak{g}$ if for every $x,y\in\mathfrak{g}$, there exists a derivation $D_{x,y}$ of $\mathfrak{g}$ (depending on $x,y$) such that $\Delta(x)=D_{x,y}(x)$ and $\Delta(y)=D_{x,y}(y)$. Denote by $\mathrm{BLDer}(\mathfrak{g})$ the set of 2-local derivations of $\mathfrak{g}$. \end{defn} \begin{rem} Obviously, a derivation is a 2-local derivation. By taking $x=y$ in Definition \ref{bl}, we get that a 2-local derivation is a local derivation automatically. Therefore, for a Lie algebra $\mathfrak{g}$, we have $$\mathrm{LDer}(\mathfrak{g})\supseteq\mathrm{BLDer}(\mathfrak{g})\supseteq\mathrm{Der}(\mathfrak{g}).$$ \end{rem}
\begin{lem}\label{ld} Suppose that $\mathfrak{g}=\bigoplus_{i=1}^{\infty} \mathfrak{g}_i$ is an $\mathbb{N}$-graded Lie algebra. For $k\in\mathbb{Z}$, set \begin{eqnarray*}
\mathrm{LDer}_k(\mathfrak{g})&=& \{\Delta\in\mathrm{Hom}(\mathfrak{g},\mathfrak{g})|\ \mathrm{for\ any}\ x\in\mathfrak{g}, \mathrm{there\ exists}\ D_{x;k}\in\mathrm{Der}_{k}(\mathfrak{g}), \mathrm{such\ that}\ \\
&& \Delta(x)=D_{x;k}(x)\}. \end{eqnarray*} Then $\mathrm{LDer}(\mathfrak{g})=\sum_{k\in\mathbb{Z}}\mathrm{LDer}_k(\mathfrak{g})$. \end{lem} \begin{proof} For a local derivation $\Delta\in\mathrm{LDer}(\mathfrak{g})$, it is sufficient to prove that $\Delta\in\sum_{k\in\mathbb{Z}}\mathrm{LDer}_k(\mathfrak{g})$. For any $x\in\mathfrak{g}$, by definition, there exists a derivation $D_x\in\mathrm{Der}(\mathfrak{g})$, with homogeneous components $D_{x;k}\in\mathrm{Der}_k(\mathfrak{g})$, such that $$\Delta(x)=D_x(x)=\sum_{k\in\mathbb{Z}}D_{x;k}(x).$$ Define a mapping $\Delta_k$ by $\Delta_k(x)=D_{x;k}(x)$ for any $x\in\mathfrak{g}$. Then $\Delta_k\in \mathrm{LDer}_k(\mathfrak{g})$ and $\Delta(x)=\sum_{k\in\mathbb{Z}}\Delta_k(x)$. Thus, $\Delta=\sum_{k\in\mathbb{Z}}\Delta_k\in\sum_{k\in\mathbb{Z}}\mathrm{LDer}_k(\mathfrak{g})$. \end{proof}
Next, we characterize all biderivations, linear commuting mappings and (2-)local derivations of $\mathfrak{m}_{0}$, $L_1$ and $\mathfrak{m}_{2}$ one by one.
\section{$\mathbb{N}$-graded Lie algebra $\mathfrak{m}_{0}$}
From the result on the first cohomology of $\mathfrak{m}_0$ with coefficients in the adjoint module \cite[Theorem 2]{m0}, together with the inner derivations, we obtain the derivations of $\mathfrak{m}_0$ in the following lemma. \begin{lem}\label{m0} $\mathrm{dim}\ \mathrm{Der}_{k}(\mathfrak{m}_0)=\left\{
\begin{array}{ll}
2, & \hbox{$k\geq0$;} \\
0, & \hbox{$k\leq-1$.}
\end{array}
\right. $ In particular,
(1) in $\mathrm{Der}_{k}(\mathfrak{m}_0)$ for $k\geq1$, a basis is $e^{1+k}_1$, $\sum_{i\geq2}e^{i+k}_i$;
(2) in $\mathrm{Der}_{0}(\mathfrak{m}_0)$, a basis is $\sum_{i\geq2}e^{i}_i$, $e^{1}_{1}+\sum_{i\geq3}(i-2)e^{i}_i$. \end{lem}
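The mappings listed in the lemma can be checked to be derivations of $\mathfrak{m}_0$ on finitely many basis pairs; since both the bracket and the mappings are defined for all indices, no truncation effects arise. The following Python sketch (our own code and names, with $k=3$ as a sample positive weight) performs this check.

```python
# Verify that the basis elements of Der_k(m_0) from the lemma act as
# derivations: D([x,y]) = [D(x),y] + [x,D(y)] on basis pairs (e_i, e_j).

def bk_m0(i, j):
    # non-trivial brackets of m_0: [e_1, e_i] = e_{i+1}, i >= 2
    if i == 1 and j >= 2: return {j + 1: 1}
    if j == 1 and i >= 2: return {i + 1: -1}
    return {}

def lie(x, y):
    out = {}
    for i, a in x.items():
        for j, b in y.items():
            for m, c in bk_m0(i, j).items():
                out[m] = out.get(m, 0) + a * b * c
    return {m: v for m, v in out.items() if v}

def lin(D, x):
    # extend a basis map D: index -> dict linearly to elements
    out = {}
    for i, a in x.items():
        for m, c in D(i).items():
            out[m] = out.get(m, 0) + a * c
    return {m: v for m, v in out.items() if v}

def is_derivation(D, n=15):
    e = lambda i: {i: 1}
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            lhs = lin(D, lie(e(i), e(j)))
            rhs = lie(lin(D, e(i)), e(j))
            for m, v in lie(e(i), lin(D, e(j))).items():
                rhs[m] = rhs.get(m, 0) + v
            if lhs != {m: v for m, v in rhs.items() if v}:
                return False
    return True

k = 3  # sample weight k >= 1
D1 = lambda i: {1 + k: 1} if i == 1 else {}   # e_1^{1+k}
D2 = lambda i: {i + k: 1} if i >= 2 else {}   # sum_{i>=2} e_i^{i+k}
D3 = lambda i: {i: 1} if i >= 2 else {}       # sum_{i>=2} e_i^i
D4 = lambda i: ({1: 1} if i == 1 else         # e_1^1 + sum_{i>=3} (i-2) e_i^i
                {i: i - 2} if i >= 3 else {})
```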
In order to describe the derivation algebra of $\mathfrak{m}_0$, we denote by $$\mathfrak{\widetilde{m}}_0=\mathfrak{m}_0\oplus\mathrm{span}\{x_{01},x_{02},x_i,i\geq1\}$$
the Lie algebra with brackets: $$[e_1,e_i]=e_{i+1},\ [x_{01},x_1]=-x_1,\ [x_{01},x_k]=kx_k,\ [x_{02},x_1]=x_1,\ [x_1,x_k]=-e_{k+1},$$ $$[x_{01},e_1]=e_1,\ [x_{01},e_i]=(i-2)e_i,\ [x_{02},e_i]=e_i,\ [x_1,e_1]=-e_2,\ [x_k,e_i]=e_{i+k},$$ where $k,i\geq2$. Obviously, $\mathfrak{\widetilde{m}}_0=\bigoplus^{\infty}_{i=0}(\mathfrak{\widetilde{m}}_0)_i$ is a $\mathbb{Z}$-graded Lie algebra with the graded component $(\mathfrak{\widetilde{m}}_0)_i=\left\{
\begin{array}{ll}
\mathrm{span}\{x_{01},x_{02}\}, & \hbox{$i=0$;} \\
\mathrm{span}\{e_{i},x_{i}\}, & \hbox{$i\geq 1$.}
\end{array}
\right.$ By Lemma \ref{m0}, $\mathrm{Der}(\mathfrak{m}_0)\cong\mathfrak{\widetilde{m}}_0$.
\begin{thm}\label{M01} $\mathrm{dim}\ \mathrm{BDer}_{k}(\mathfrak{m}_0)=\left\{
\begin{array}{ll}
1, & \hbox{$k\geq-1$;} \\
0, & \hbox{$k\leq-2$.}
\end{array}
\right. $ In particular, for $k\geq-1$, $\mathrm{BDer}_{k}(\mathfrak{m}_0)$ is spanned by $\sum_{i\geq2}e^{1,i}_{1+i+k}$. \end{thm}
\begin{proof} By Lemma \ref{C}, for $f\in \mathrm{BDer}_{k}(\mathfrak{m}_0)$, $k\in\mathbb{Z}$, there exists a linear mapping $\varphi_f:\ \mathfrak{m}_0\rightarrow \mathfrak{\widetilde{m}}_0$ such that, for $i,j\geq1$, \begin{equation}\label{mb} f(e_i,e_j)=[\varphi_f(e_i),e_j]=-[\varphi_f(e_j),e_i], \end{equation}
where $\|f\|=\|\varphi_f\|=k$. Now we determine $\varphi_f$ for each weight $k$.
\begin{flushleft} $\mathbf{Case\ 1.}\ k\leq-2.$ \end{flushleft} In this case, $\varphi_f(e_1)=\cdots=\varphi_f(e_{-k-1})=0$. Set $\varphi_f(e_{-k})=\alpha x_{01}+\beta x_{02}$ and $\varphi_f(e_{-k+i})=\lambda_ie_i+\mu_ix_i$ for $i\geq 1$. Taking $j=1$ in Eq. (\ref{mb}), we get $[\varphi_f(e_{i}),e_1]=0$. Thus, we have \begin{eqnarray*} &&[\varphi_f(e_{-k}),e_1]=[\alpha x_{01}+\beta x_{02},e_1]=\alpha e_1=0, \\ &&[\varphi_f(e_{-k+1}),e_1]=[\lambda_1e_1+\mu_1x_1,e_1]=-\mu_1e_2=0,\\ &&[\varphi_f(e_{-k+i}),e_1]=[\lambda_ie_i+\mu_ix_i,e_1]=-\lambda_ie_{i+1}=0, \end{eqnarray*} where $i\geq 2$. So $\alpha=\mu_1=\lambda_i=0$ for $i\geq 2$. Taking $i=j$ in Eq. (\ref{mb}), we get $[\varphi_f(e_{i}),e_i]=0$. Thus we have \begin{eqnarray*}
&&[\varphi_f(e_{-k}),e_{-k}]=[\beta x_{02},e_{-k}]=\beta e_{-k}=0, \\
&&[\varphi_f(e_{-k+1}),e_{-k+1}]=[\lambda_1e_1+\mu_1x_1,e_{-k+1}]=\lambda_1e_{-k+2}=0,\\ &&[\varphi_f(e_{-k+i}),e_{-k+i}]=[\lambda_ie_i+\mu_ix_i,e_{-k+i}]=\mu_ie_{-k+2i}=0, \end{eqnarray*} where $i\geq2$. So $\beta=\lambda_1=\mu_i=0$ for $i\geq 2$. Thus $f=\varphi_f=0$.
\begin{flushleft} $\mathbf{Case\ 2.}\ k=-1.$ \end{flushleft} In this case, we set $\varphi_f(e_1)=\alpha x_{01}+\beta x_{02}$ and $\varphi_f(e_i)=\lambda_{i-1}e_{i-1}+\mu_{i-1}x_{i-1}$ for $i\geq 2$. Taking $i=j$ in Eq. (\ref{mb}), we get $[\varphi_f(e_{i}),e_i]=0$. Thus we have \begin{eqnarray*}
&&[\varphi_f(e_{1}),e_{1}]=[\alpha x_{01}+\beta x_{02},e_{1}]=\alpha e_{1}=0, \\
&&[\varphi_f(e_{2}),e_{2}]=[\lambda_{1}e_{1}+\mu_{1}x_{1},e_{2}]=\lambda_1e_3=0,\\ &&[\varphi_f(e_{i}),e_{i}]=[\lambda_{i-1}e_{i-1}+\mu_{i-1}x_{i-1},e_{i}]=\mu_{i-1}e_{2i-1}=0, \end{eqnarray*} where $i\geq3$. So $\alpha=\lambda_1=\mu_{i-1}=0$ for $i\geq 3$. Taking $i=1$ in Eq. (\ref{mb}), we get $[\varphi_f(e_{1}),e_j]=-[\varphi_f(e_{j}),e_1]$. Thus, we have \begin{eqnarray*}
&&[\varphi_f(e_{1}),e_2]=-[\varphi_f(e_{2}),e_1]=\beta e_2=\mu_1e_2, \\
&&[\varphi_f(e_{1}),e_j]=-[\varphi_f(e_{j}),e_1]=\beta e_j=\lambda_{j-1}e_j, \end{eqnarray*} where $j\geq 3$. So $\beta=\mu_1=\lambda_{j-1}$ for $j\geq 3$. Thus, \begin{eqnarray*} &&f(e_1,e_i)=[\varphi_f(e_1),e_i]=[\beta x_{02},e_i]=\beta e_i,\ i\geq2, \\ &&f(e_2,e_i)=[\varphi_f(e_2),e_i]=[\beta x_1,e_i]=0 ,\ i\geq 3,\\ &&f(e_i,e_j)=[\varphi_f(e_i),e_j]=[\beta e_{i-1},e_j]=0 ,\ j>i\geq 3. \end{eqnarray*} That is,
$f=\beta\sum_{i\geq2}e^{1,i}_{i}$.
\begin{flushleft} $\mathbf{Case\ 3.}\ k=0.$ \end{flushleft} In this case, we set $\varphi_f(e_i)=\lambda_{i}e_{i}+\mu_{i}x_{i}$ for $i\geq 1$. Taking $i=j$ in Eq. (\ref{mb}), we get $[\varphi_f(e_{i}),e_i]=0$. Thus we have \begin{eqnarray*}
&&[\varphi_f(e_{1}),e_1]=[\lambda_{1}e_{1}+\mu_{1}x_{1},e_1]=-\mu_1 e_2=0, \\
&&[\varphi_f(e_{i}),e_i]=[\lambda_{i}e_{i}+\mu_{i}x_{i},e_i]=\mu_{i}e_{2i}=0, \end{eqnarray*} where $i\geq 2$. So $\mu_i=0$ for $i\geq 1$. Taking $i=1$ in Eq. (\ref{mb}), we get $[\varphi_f(e_1),e_j]=-[\varphi_f(e_j),e_1]$. Thus, we have \begin{equation*} [\varphi_f(e_1),e_j]=-[\varphi_f(e_j),e_1]=\lambda_1e_{j+1}=\lambda_je_{j+1}, \end{equation*} where $j\geq 2$. So $\lambda_1=\lambda_j$ for $j\geq 1$. Thus, \begin{eqnarray*} &&f(e_1,e_i)=[\varphi(e_1),e_i]=[\lambda_1e_1,e_i]=\lambda_1e_{i+1},\ i\geq2,\\ &&f(e_i,e_j)=[\varphi(e_i),e_j]=[\lambda_1e_{i},e_j]=0,\ j>i\geq2. \end{eqnarray*} That is, $f=\lambda_1\sum_{i\geq 2}e^{1,i}_{1+i}$.
\begin{flushleft} $\mathbf{Case\ 4.}\ k\geq1.$ \end{flushleft} In this case, we set $\varphi_f(e_i)=\lambda_{i}e_{i+k}+\mu_{i}x_{i+k}$ for $i\geq 1$. Taking $i=j$ in Eq. (\ref{mb}), we get $[\varphi_f(e_{i}),e_i]=0$. Thus we have \begin{eqnarray*}
&&[\varphi_f(e_{1}),e_1]=[\lambda_{1}e_{k+1}+\mu_{1}x_{k+1},e_1]=-\lambda_1 e_{k+2}=0, \\
&&[\varphi_f(e_{i}),e_i]=[\lambda_{i}e_{k+i}+\mu_{i}x_{k+i},e_i]=\mu_{i}e_{k+2i}=0, \end{eqnarray*} where $i\geq 2$. So $\lambda_1=\mu_i=0$ for $i\geq 2$. Taking $i=1$ in Eq. (\ref{mb}), we get $[\varphi_f(e_1),e_j]=-[\varphi_f(e_j),e_1]$. Thus, we have \begin{equation*} [\varphi_f(e_1),e_j]=-[\varphi_f(e_j),e_1]=\mu_1e_{k+j+1}=\lambda_je_{k+j+1}, \end{equation*} where $j\geq 2$. So $\mu_1=\lambda_j$ for $j\geq 2$. Thus, \begin{eqnarray*} &&f(e_1,e_i)=[\varphi(e_1),e_i]=[\mu_1e_1,e_i]=\mu_1e_{i+1},\ i\geq2,\\ &&f(e_i,e_j)=[\varphi(e_i),e_j]=[\mu_1e_{i+k},e_j]=0,\ j>i\geq2. \end{eqnarray*} That is, $f=\mu_1\sum_{i\geq 2}e^{1,i}_{1+i+k}$.
In conclusion, $$f=\left\{
\begin{array}{ll}
\lambda_k\sum_{i\geq 2}e^{1,i}_{1+i+k}, & \hbox{$\lambda_k\in\mathbb{F},\ k\geq-1$;} \\
0, & \hbox{$k\leq-2$.}
\end{array}
\right. $$ The proof is complete. \end{proof}
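The conclusion of the theorem can also be checked computationally (our own code, not part of the proof): for $k\geq-1$ the bilinear mapping $f=\sum_{i\geq2}e^{1,i}_{1+i+k}$ is skew-symmetric by construction, and $L_{f,x}$ is a derivation for every basis element $x$; since $R_{f,x}=-L_{f,x}$ by skew-symmetry, checking $L_{f,x}$ suffices. The sketch also confirms that this formula fails for $k=-2$, in line with Case 1 of the proof.

```python
# Check the biderivation property of f = sum_{i>=2} e^{1,i}_{1+i+k} on m_0.

def bk(i, j):  # m_0: [e_1, e_i] = e_{i+1}, i >= 2
    if i == 1 and j >= 2: return {j + 1: 1}
    if j == 1 and i >= 2: return {i + 1: -1}
    return {}

def f(i, j, k):  # f(e_i, e_j) in weight k; skew-symmetric by construction
    if i == 1 and j >= 2: return {1 + j + k: 1}
    if j == 1 and i >= 2: return {1 + i + k: -1}
    return {}

def lie(x, y):
    out = {}
    for i, a in x.items():
        for j, b in y.items():
            for m, c in bk(i, j).items():
                out[m] = out.get(m, 0) + a * b * c
    return {m: v for m, v in out.items() if v}

def lin(M, x):
    out = {}
    for i, a in x.items():
        for m, c in M(i).items():
            out[m] = out.get(m, 0) + a * c
    return {m: v for m, v in out.items() if v}

def check(k, n=12):
    """True iff L_{f,e_x} is a derivation for all basis x up to n."""
    e = lambda i: {i: 1}
    for x in range(1, n + 1):
        L = lambda i: f(x, i, k)
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                lhs = lin(L, lie(e(i), e(j)))
                rhs = lie(lin(L, e(i)), e(j))
                for m, v in lie(e(i), lin(L, e(j))).items():
                    rhs[m] = rhs.get(m, 0) + v
                if lhs != {m: v for m, v in rhs.items() if v}:
                    return False
    return True
```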
\begin{cor} $\mathrm{BDer}_{0}(\mathfrak{m}_0)=\mathrm{IBDer}(\mathfrak{m}_0)$. \end{cor}
\begin{thm} A linear mapping of $\mathfrak{m}_0$ is commuting if and only if it is a scalar multiple of the identity mapping $\sum_{i\geq1}e^{i}_{i}$. \end{thm} \begin{proof} The `if' direction is easy to verify. We now prove the `only if' direction. Suppose that $\phi$ is a linear commuting mapping of weight $k$, that is, $\phi(e_i)=a_ie_{i+k}$ for $i\geq1$. By Lemma \ref{linear}, $\phi$ defines a biderivation $\phi_{f}\in\mathrm{BDer}_{k}(\mathfrak{m}_0)$. If $k\leq-2$, then $\phi_{f}=0$ by Theorem \ref{M01}. So $[x,\phi(y)]=0$ for all $x,y\in\mathfrak{m}_0$. From $\phi(y)\in\mathrm{C}(\mathfrak{m}_0)=0$, we have $\phi=0$. If $k\geq-1$, then $\phi_{f}=\lambda\sum_{i\geq2}e^{1,i}_{1+i+k}$ for some $\lambda\in\mathbb{F}$. \begin{flushleft} $\mathbf{Case\ 1.}\ k\geq-1,\ k\neq 0.$ \end{flushleft} From $\phi_{f}(e_j,e_1)=\lambda\sum_{i\geq2}e^{1,i}_{1+i+k}(e_j,e_1)$ for $j\geq2$, we have $[e_j,a_1e_{k+1}]=-\lambda e_{1+j+k}$. Since $k\neq 0$, we have $[e_j,e_{k+1}]=0$. So $\lambda=0$. Then $\phi_{f}(x,y)=[x,\phi(y)]=0$ for all $x,y\in\mathfrak{m}_0$, so $\phi(y)\in \mathrm{C}(\mathfrak{m}_0)=0$ and $\phi=0$. \begin{flushleft} $\mathbf{Case\ 2.}\ k=0.$ \end{flushleft} From $\phi_{f}(e_1,e_j)=\lambda\sum_{i\geq2}e^{1,i}_{1+i}(e_1,e_j)$ for $j\geq2$, we have $[e_1,a_je_j]=\lambda e_{1+j}$. Thus $a_j=\lambda$ for $j\geq2$. Moreover, for $j\geq2$, from $\phi_{f}(e_j,e_1)=-\phi_{f}(e_1,e_j)=-\lambda e_{1+j}$, we have $[e_j,a_1e_1]=-\lambda e_{1+j}$. Thus $a_1=\lambda$. So $\phi=\lambda\sum_{i\geq1}e^{i}_{i}$. \end{proof} Next we characterize the local and 2-local derivations of $\mathfrak{m}_0$ using the description of its derivations.
\begin{thm} $\mathrm{LDer}(\mathfrak{m}_0)=\mathrm{BLDer}(\mathfrak{m}_0)=\mathrm{Der}(\mathfrak{m}_0).$ \end{thm} \begin{proof} By Lemmas \ref{ld} and \ref{m0}, it is sufficient to prove $\mathrm{LDer}_{k}(\mathfrak{m}_0)\subseteq\mathrm{Der}(\mathfrak{m}_0)$ for $k\geq0$. By Lemma \ref{m0}, we prove that in the following cases. \begin{flushleft} $\mathbf{Case\ 1.}\ k=0.$ \end{flushleft} Suppose that $\Delta\in\mathrm{LDer}_{0}(\mathfrak{m}_0)$ is a local derivation. For $\Delta(e_i)$, $i\geq1$, by definition of local derivations, there exist $a_i$, $b_i\in \mathbb{F}$, such that \begin{eqnarray}\label{44}
\Delta(e_1)&=&b_1e_{1}, \\
\Delta(e_2)&=&a_2e_2, \label{55}\\
\Delta(e_i)&=&(a_i+(i-2)b_i)e_i,\ i\geq 3. \label{66} \end{eqnarray} By linearity of $\Delta$, for $i\geq 3$, \begin{equation}\label{11} \Delta(e_1+e_2+e_i)=\Delta(e_1)+\Delta(e_2)+\Delta(e_i)=b_1e_{1}+a_2e_2+(a_i+(i-2)b_i)e_i. \end{equation} For $\Delta(e_1+e_2+e_i)$, by definition, there exist $a_{12i}$, $b_{12i}$, such that \begin{equation}\label{22} \Delta(e_1+e_2+e_i)=a_{12i}(e_2+e_i)+b_{12i}e_{1}+b_{12i}(i-2)e_i. \end{equation} Comparing Eqs. (\ref{11}) with (\ref{22}), we get $a_{2}+b_{1}(i-2)=a_i+(i-2)b_i$. From Eq. (\ref{66}), \begin{equation}\label{313} \Delta(e_i)=(a_{2}+b_{1}(i-2))e_i,\ i\geq 3. \end{equation} From Eqs. (\ref{44}), (\ref{55}) and (\ref{313}) and Lemma \ref{m0}, we have $$\Delta=a_2\sum_{i\geq2}e^{i}_i+b_1\bigg(e^{1}_{1}+\sum_{i\geq3}(i-2)e^{i}_i\bigg)\in\mathrm{Der}_0(\mathfrak{m}_0).$$
\begin{flushleft} $\mathbf{Case\ 2.}\ k\geq1.$ \end{flushleft} Suppose that $\Delta_k\in\mathrm{LDer}_{k}(\mathfrak{m}_0)$ is a local derivation. For $\Delta_k(e_i)$, $i\geq1$, by definition of local derivations, there exist $a_{k;i}$, $b_{k;i}\in \mathbb{F}$, such that \begin{equation*}
\Delta_k(e_1)=a_{k;1}e_{1+k},\quad \mathrm{and}\quad
\Delta_k(e_i)=b_{k;i}e_{i+k},\ i\geq 2. \end{equation*} For $\Delta_k(e_2+e_i)$, $i\geq 3$, there exist $b_{k;2,i}\in\mathbb{F}$, such that \begin{equation}\label{111}
\Delta_k(e_2+e_i)=b_{k;2,i}\sum_{j\geq2}e^{j+k}_j(e_2+e_i)=b_{k;2,i}(e_{2+k}+e_{i+k}). \end{equation} On the other hand, \begin{equation}\label{1111} \Delta_k(e_2+e_i)=\Delta_k(e_2)+\Delta_k(e_i)=b_{k;2}e_{2+k}+b_{k;i}e_{i+k}. \end{equation} Comparing Eqs. (\ref{111}) with (\ref{1111}), we have $b_{k;2}=b_{k;i}$ for $i\geq 3$. So $\Delta_k=a_{k;1}e_{1}^{1+k}+b_{k;2}\sum_{i\geq2}e^{i+k}_i\in\mathrm{Der}_k(\mathfrak{m}_0)$. \end{proof}
\begin{rem}
The 2-local derivations of $\mathfrak{m}_0$ were also studied in \cite{M0,l1}. In fact, there exist non-linear 2-local derivations of $\mathfrak{m}_0$ which are not derivations (see \cite{M0,l1}, for example). For $m,q\in \mathbb{N}$, $\lambda\in \mathbb{C}$ and $\theta=(\theta_2,\ldots,\theta_m)\in \mathbb{C}^{m-1}$, define the mapping $\Omega^{(q,m)}_{\theta,\lambda}$ of $\mathfrak{m}_0$ by $$\Omega^{(q,m)}_{\theta,\lambda}\left(\sum_{i=1}^{p}k_ie_i\right)= \left\{
\begin{array}{ll}
\sum\limits_{i=2}^{p}\sum\limits_{j=2}^{m}k_i\theta_je_{i+j-2}, & \hbox{if $k_1\neq 0$;} \\
\lambda k_qe_q, & \hbox{if $\sum_{i=1}^{p}k_ie_i=k_q e_q$ for some $q$ with $2<q\leq p$;} \\
0, & \hbox{others.}
\end{array} \right.$$ In \cite{M0}, the author proved that every 2-local derivation (not necessarily linear) $\Delta$ of $\mathfrak{m}_0$ is of the form $\Delta=D+\Omega^{(q,m)}_{\theta,\lambda}$ for some $D\in\mathrm{Der}(\mathfrak{m}_0)$ \cite[Theorem 4.2]{M0}.
Using this theorem, we can give another proof that if a 2-local derivation of $\mathfrak{m}_0$ is linear, then it is a derivation. Suppose that $\Delta=D+\Omega^{(q,m)}_{\theta,\lambda}$ is a 2-local derivation (not necessarily linear), where $D\in\mathrm{Der}(\mathfrak{m}_0)$ and $\Omega^{(q,m)}_{\theta,\lambda}$ is as above. Then $$\Omega^{(q,m)}_{\theta,\lambda}(e_1)=\Omega^{(q,m)}_{\theta,\lambda}(e_2)=0, \ \Omega^{(q,m)}_{\theta,\lambda}(e_q)=\lambda e_q.$$ If $\Delta$ is linear, then $\Omega^{(q,m)}_{\theta,\lambda}$ is additive. Thus, we have \begin{eqnarray*}
&&\Omega^{(q,m)}_{\theta,\lambda}(e_1+e_2)=\sum_{j=2}^{m}\theta_j e_j=0, \\
&& \Omega^{(q,m)}_{\theta,\lambda}(e_2+e_q)=0=\lambda e_q. \end{eqnarray*} Then $\theta_2=\cdots=\theta_m=\lambda=0$. That is, $\Omega^{(q,m)}_{\theta,\lambda}=0$ and $\Delta=D\in\mathrm{Der}(\mathfrak{m}_0)$. \end{rem}
\section{$\mathbb{N}$-graded Lie algebra $L_{1}$}
Unlike $\mathfrak{m}_0$ and $\mathfrak{m}_2$, the derivation algebra of $L_1$ has been described as the inner derivation algebra of $\mathbb{F} e_0\ltimes L_1$.
\begin{lem}\cite{l1}\label{L'} $\mathrm{Der}(L_1)=\mathrm{ad}_{L_1}(\mathbb{F} e_0\ltimes L_1)$, where $\mathbb{F} e_0\ltimes L_1$ has the brackets $[e_i,e_j]=(j-i)e_{i+j}$ for all $i,j\geq0$. \end{lem}
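As a computational aside (our own code, not part of the lemma), the extended bracket $[e_i,e_j]=(j-i)e_{i+j}$, now allowed for all $i,j\geq0$, still satisfies the Jacobi identity, so $\mathbb{F} e_0\ltimes L_1$ is indeed a Lie algebra. The following sketch checks this on an initial segment of the basis, which suffices for those triples since the bracket is defined for all indices.

```python
# Jacobi identity for the bracket of F e_0 \ltimes L_1 on basis triples.
from itertools import combinations

def bk(i, j):
    # [e_i, e_j] = (j - i) e_{i+j}, now for all i, j >= 0
    return {i + j: j - i} if i != j else {}

def lie(x, y):
    out = {}
    for i, a in x.items():
        for j, b in y.items():
            for m, c in bk(i, j).items():
                out[m] = out.get(m, 0) + a * b * c
    return {m: v for m, v in out.items() if v}

def jacobi_ok(n=12):
    e = lambda i: {i: 1}
    for i, j, k in combinations(range(0, n + 1), 3):
        s = {}
        for x, y, z in ((i, j, k), (j, k, i), (k, i, j)):
            for m, v in lie(e(x), lie(e(y), e(z))).items():
                s[m] = s.get(m, 0) + v
        if any(s.values()):
            return False
    return True
```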
\begin{thm}\label{l} $\mathrm{IBDer}(L_1)=\mathrm{BDer}(L_1)$. \end{thm} \begin{proof} Suppose that $f$ is a biderivation of $L_1$. By Lemmas \ref{C} and \ref{L'}, there exists a mapping $\varphi_f:\ L_1\rightarrow L_1\oplus\mathbb{F} e_0$ such that, for $i,j\geq 1$, \begin{equation}\label{L} f(e_i,e_j)=[\varphi_f(e_i),e_j]=-[\varphi_f(e_j),e_i]. \end{equation}
Suppose that $\|f\|=\|\varphi_f\|=k$. If $k\leq-1$, we set $\varphi_f(e_{-k+i})=\lambda_ie_{i}$, $i\geq 0$. If $k\geq0$, we set $\varphi_f(e_i)=\lambda_ie_{k+i}$, $i\geq 1$. Then, if $k\neq 0$, we have $\lambda_i=0$ by taking $i=j$ in Eq. (\ref{L}). So $f=\varphi_f=0$. Now we suppose that $k=0$. Then, from Eq. (\ref{L}), we get $\lambda_i=\lambda_j$ for $i,j\geq 1$. Set $\lambda_i=\lambda$ for $i\geq 1$. Then $f=f_\lambda$ is an inner biderivation. \end{proof}
\begin{thm} A linear mapping of $L_1$ is commuting if and only if it is a scalar multiple of the identity mapping. \end{thm} \begin{proof} The `if' direction is easy to verify. We now prove the `only if' direction. Suppose that $\phi$ is a linear commuting mapping of $L_1$. By Lemma \ref{linear}, $\phi$ defines a biderivation $\phi_{f}\in\mathrm{BDer}(L_1)$. By Theorem \ref{l}, for all $x,y\in L_1$, $\phi_{f}(x,y)=[x,\phi(y)]=\lambda[x,y]$ for some $\lambda\in\mathbb{F}$. Then $[x,\phi(y)-\lambda y]=0$. That means $\phi(y)-\lambda y\in \mathrm{C}(L_1)=0$. The proof is complete. \end{proof} From the result on the first cohomology of $L_1$ with coefficients in the adjoint module \cite{L1}, together with the inner derivations, we obtain the derivations of $L_1$ in the following lemma.
\begin{lem}\label{LL1} $\mathrm{dim}\ \mathrm{Der}_{k}(L_1)=\left\{
\begin{array}{ll}
0, & \hbox{$k\leq-1$;} \\
1, & \hbox{$k\geq0$.}
\end{array}
\right. $ In particular, in $\mathrm{Der}_{k}(L_1)$ for $k\geq0$, a basis is $\sum_{i\geq1,i\neq k}(i-k)e^{k+i}_i$. \end{lem}
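The basis element of the lemma acts on basis vectors by $D_k(e_i)=(i-k)e_{i+k}$ (the condition $i\neq k$ merely omits a zero term). A quick check that each $D_k$ is a derivation of $L_1$, on finitely many basis pairs, can be sketched as follows (our own code and names).

```python
# Verify D_k(e_i) = (i - k) e_{i+k} is a derivation of L_1 on basis pairs.

def bk(i, j):
    # L_1: [e_i, e_j] = (j - i) e_{i+j}, i, j >= 1
    return {i + j: j - i} if i != j else {}

def lie(x, y):
    out = {}
    for i, a in x.items():
        for j, b in y.items():
            for m, c in bk(i, j).items():
                out[m] = out.get(m, 0) + a * b * c
    return {m: v for m, v in out.items() if v}

def lin(D, x):
    out = {}
    for i, a in x.items():
        for m, c in D(i).items():
            out[m] = out.get(m, 0) + a * c
    return {m: v for m, v in out.items() if v}

def is_derivation(D, n=15):
    e = lambda i: {i: 1}
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            lhs = lin(D, lie(e(i), e(j)))
            rhs = lie(lin(D, e(i)), e(j))
            for m, v in lie(e(i), lin(D, e(j))).items():
                rhs[m] = rhs.get(m, 0) + v
            if lhs != {m: v for m, v in rhs.items() if v}:
                return False
    return True

# D_k from the lemma, with the zero term at i = k skipped
Dk = lambda k: (lambda i: {i + k: i - k} if i != k else {})
```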
Next we characterize the local and 2-local derivations of $L_1$ using the description of its derivations.
\begin{thm} $\mathrm{LDer}(L_1)=\mathrm{BLDer}(L_1)=\mathrm{Der}(L_1).$ \end{thm} \begin{proof} By Lemmas \ref{ld} and \ref{LL1}, it is sufficient to prove $\mathrm{LDer}_{k}(L_1)\subseteq\mathrm{Der}(L_1)$ for $k\geq0$. Suppose that $\Delta_k\in\mathrm{LDer}_k(L_1)$ is a local derivation. Fix $i_{0}\geq 1$ with $i_{0}\neq k$. For $\Delta_k(e_{i_{0}})$, by the definition of local derivations, there exists $a_{k;i_0}\in\mathbb{F}$ such that $$\Delta_k(e_{i_{0}})=a_{k;i_0}\sum_{i\geq1,i\neq k}((i-k)e^{k+i}_i)(e_{i_{0}})=a_{k;i_0}(i_0-k)e_{k+i_0}.$$ Similarly, for any $j\geq 1$ with $j\neq k, i_0$, there exists $a_{k;j}\in\mathbb{F}$ such that $$\Delta_k(e_j)=a_{k;j}\sum_{i\geq1,i\neq k}((i-k)e^{k+i}_i)(e_j)=a_{k;j}(j-k)e_{k+j}.$$ Moreover, \begin{equation}\label{x} \Delta_k(e_{i_{0}}+e_j)=\Delta_k(e_{i_{0}})+\Delta_k(e_j)=a_{k;i_0}(i_0-k)e_{k+i_0}+a_{k;j}(j-k)e_{k+j}. \end{equation} For $\Delta_k(e_{i_{0}}+e_j)$, by definition, there exists $a_{k;i_0,j}\in\mathbb{F}$ such that \begin{equation}\label{y}
\Delta_k(e_{i_{0}}+e_j)=a_{k;i_0,j}((i_0-k)e_{k+i_0}+(j-k)e_{k+j}). \end{equation} Comparing Eqs. (\ref{x}) with (\ref{y}), we have $a_{k;j}=a_{k;i_0}$ for any $j\geq1$ with $j\neq k$. So $\Delta_k=a_{k;i_0}\sum_{i\geq1,i\neq k}(i-k)e^{k+i}_i\in\mathrm{Der}_k(L_1).$ \end{proof} \begin{rem}
The 2-local derivations of $L_1$ were also studied in \cite{l1}. In particular, the authors showed that every 2-local derivation (not necessarily linear) of $L_1$ is a derivation. \end{rem}
\section{$\mathbb{N}$-graded Lie algebra $\mathfrak{m}_2$}
From the result on the first cohomology of $\mathfrak{m}_2$ with coefficients in the adjoint module \cite[Theorem 2]{m2}, together with the inner derivations, we obtain the derivations of $\mathfrak{m}_2$ in the following lemma. \begin{lem}\label{M21} $\mathrm{dim}\ \mathrm{Der}_{k}(\mathfrak{m}_2)=\left\{
\begin{array}{ll}
0, & \hbox{$k\leq-1$;} \\
1, & \hbox{$k=0,1$;} \\
2, & \hbox{$k\geq2$.}
\end{array}
\right. $ In particular,
(1) in $\mathrm{Der}_{0}(\mathfrak{m}_2)$, a basis is $\sum_{i\geq1}ie^{i}_i$;
(2) in $\mathrm{Der}_{1}(\mathfrak{m}_2)$, a basis is $\sum_{i\geq2}e^{i+1}_i$;
(3) in $\mathrm{Der}_{2}(\mathfrak{m}_2)$, a basis is $\sum_{i\geq2}e^{i+2}_i$, $e^{3}_{1}-\sum_{i\geq3}e^{i+2}_{i}$;
(4) in $\mathrm{Der}_{k}(\mathfrak{m}_2)$ for $k\geq3$, a basis is $e^{k+1}_1+e^{k+2}_2$, $-\frac{1}{2}e^{k+1}_1+\frac{1}{2}e^{k+2}_2+\sum_{i\geq3}e^{i+k}_i$. \end{lem}
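The bases listed in items (1)-(4) can be checked to be derivations of $\mathfrak{m}_2$ on finitely many basis pairs. The following Python sketch (our own code, with $k=4$ as a sample weight for item (4)) performs this check; exact rational arithmetic is used for the coefficients $\pm\frac{1}{2}$.

```python
# Verify the basis elements of Der_k(m_2) from the lemma are derivations.
from fractions import Fraction

def bk(i, j):
    # m_2: [e_1, e_i] = e_{i+1} (i >= 2), [e_2, e_j] = e_{j+2} (j >= 3)
    if i == j: return {}
    if i > j:  return {m: -c for m, c in bk(j, i).items()}
    if i == 1 and j >= 2: return {j + 1: 1}
    if i == 2 and j >= 3: return {j + 2: 1}
    return {}

def lie(x, y):
    out = {}
    for i, a in x.items():
        for j, b in y.items():
            for m, c in bk(i, j).items():
                out[m] = out.get(m, 0) + a * b * c
    return {m: v for m, v in out.items() if v}

def lin(D, x):
    out = {}
    for i, a in x.items():
        for m, c in D(i).items():
            out[m] = out.get(m, 0) + a * c
    return {m: v for m, v in out.items() if v}

def is_derivation(D, n=15):
    e = lambda i: {i: 1}
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            lhs = lin(D, lie(e(i), e(j)))
            rhs = lie(lin(D, e(i)), e(j))
            for m, v in lie(e(i), lin(D, e(j))).items():
                rhs[m] = rhs.get(m, 0) + v
            if lhs != {m: v for m, v in rhs.items() if v}:
                return False
    return True

half = Fraction(1, 2)
k = 4  # sample weight k >= 3 for item (4)
bases = [
    lambda i: {i: i},                                   # (1) sum i e_i^i
    lambda i: {i + 1: 1} if i >= 2 else {},             # (2) sum e_i^{i+1}
    lambda i: {i + 2: 1} if i >= 2 else {},             # (3) sum e_i^{i+2}
    lambda i: ({3: 1} if i == 1 else                    # (3) e_1^3 - sum e_i^{i+2}
               {i + 2: -1} if i >= 3 else {}),
    lambda i: ({1 + k: 1} if i == 1 else                # (4) e_1^{k+1} + e_2^{k+2}
               {2 + k: 1} if i == 2 else {}),
    lambda i: ({1 + k: -half} if i == 1 else            # (4) second basis element
               {2 + k: half} if i == 2 else {i + k: 1}),
]
```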
In order to describe the derivation algebra of $\mathfrak{m}_2$, we denote by $$\mathfrak{\widetilde{m}}_2=\mathfrak{m}_2\oplus\mathrm{span}\{x_{0},x_i,i\geq2\}$$
the Lie algebra with brackets: $$[e_1,e_i]=e_{i+1},\ [e_2,e_j]=e_{j+2},\ [x_0,x_i]=ix_i,\ [x_2,x_j]=\frac{1}{2}e_{j+2},\ [x_0,e_{i-1}]=(i-1)e_{i-1},$$ $$[x_2,e_i]=e_{i+2},\ [x_j,e_1]=-\frac{1}{2}e_{j+1},\ [x_j,e_2]=\frac{1}{2}e_{j+2},\ [x_i,e_j]=e_{i+j},$$ where $i\geq2$, $j\geq3$. Obviously, $\mathfrak{\widetilde{m}}_2=\bigoplus^{\infty}_{i=0}(\mathfrak{\widetilde{m}}_2)_i$ is a $\mathbb{Z}$-graded Lie algebra with the graded component $(\mathfrak{\widetilde{m}}_2)_i=\left\{
\begin{array}{ll}
\mathbb{F} x_{0}, & \hbox{$i=0$;} \\
\mathbb{F} e_{1}, & \hbox{$i=1$;} \\
\mathrm{span}\{e_{i},x_{i}\}, & \hbox{$i\geq 2$.}
\end{array}
\right.$ By Lemma \ref{M21}, $\mathrm{Der}(\mathfrak{m}_2)\cong\mathfrak{\widetilde{m}}_2$.
\begin{thm}\label{m2} $\mathrm{dim}\ \mathrm{BDer}_{k}(\mathfrak{m}_2)=\left\{
\begin{array}{ll}
1, & \hbox{$k\geq0$;} \\
0, & \hbox{$k\leq-1$.}
\end{array}
\right. $ In particular, for $k\geq0$, $\mathrm{BDer}_{k}(\mathfrak{m}_2)$ is spanned by $\sum_{i\geq2}e^{1,i}_{k+i+1}+\sum_{i\geq3}e^{2,i}_{k+i+2}$. \end{thm}
\begin{proof} By Lemma \ref{C}, for $f\in \mathrm{BDer}_{k}(\mathfrak{m}_2)$, $k\in\mathbb{Z}$, there exists a linear mapping $\varphi_f:\ \mathfrak{m}_2\rightarrow \mathfrak{\widetilde{m}}_2$ such that, for $i,j\geq1$, \begin{equation}\label{mb2} f(e_i,e_j)=[\varphi_f(e_i),e_j]=-[\varphi_f(e_j),e_i], \end{equation}
where $\|f\|=\|\varphi_f\|=k$. Now we determine $\varphi_f$ for each weight $k$.
\begin{flushleft} $\mathbf{Case\ 1.}\ k\leq-2.$ \end{flushleft} In this case, $\varphi_f(e_1)=\cdots=\varphi_f(e_{-k-1})=0$. Set $\varphi_f(e_{-k})=\alpha x_{0}$, $\varphi_f(e_{-k+1})=\beta e_1$ and $\varphi_f(e_{-k+i})=\lambda_ie_i+\mu_ix_i$ for $i\geq 2$. Taking $j=1$ in Eq. (\ref{mb2}), we get $[\varphi_f(e_{i}),e_1]=0$. Thus, we have \begin{eqnarray*} &&[\varphi_f(e_{-k}),e_1]=[\alpha x_{0},e_1]=\alpha e_1=0, \\ &&[\varphi_f(e_{-k+2}),e_1]=[\lambda_2e_2+\mu_2x_2,e_1]=-\lambda_2e_3=0,\\ &&[\varphi_f(e_{-k+i}),e_1]=[\lambda_ie_i+\mu_ix_i,e_1]=-(\lambda_i+\frac{1}{2}\mu_i)e_{i+1}=0, \end{eqnarray*} where $i\geq 3$. So $\alpha=\lambda_2=\lambda_i+\frac{1}{2}\mu_i=0$ for $i\geq 3$. Taking $i=j$ in Eq. (\ref{mb2}), we get $[\varphi_f(e_{i}),e_i]=0$. Thus, we have \begin{eqnarray*}
&&[\varphi_f(e_{-k+1}),e_{-k+1}]=[\beta e_1,e_{-k+1}]=\beta e_{-k+2}=0, \\
&&[\varphi_f(e_{-k+i}),e_{-k+i}]=[\lambda_ie_i+\mu_ix_i,e_{-k+i}]=\mu_ie_{-k+2i}=0, \end{eqnarray*} where $i\geq 2$. So $\beta=\mu_i=0$ for $i\geq 2$. Thus $f=\varphi_f=0$.
\begin{flushleft} $\mathbf{Case\ 2.}\ k=-1.$ \end{flushleft} In this case, we set $\varphi_f(e_1)=\alpha x_{0}$, $\varphi_f(e_2)=\beta e_1$ and $\varphi_f(e_i)=\lambda_{i-1}e_{i-1}+\mu_{i-1}x_{i-1}$ for $i\geq 3$. Taking $i=j$ in Eq. (\ref{mb2}), we get $[\varphi_f(e_{i}),e_i]=0$. Thus, we have \begin{eqnarray*}
&&[\varphi_f(e_{1}),e_{1}]=[\alpha x_{0},e_{1}]=\alpha e_{1}=0, \\
&&[\varphi_f(e_{2}),e_{2}]=[\beta e_{1},e_{2}]=\beta e_3=0,\\ &&[\varphi_f(e_{3}),e_{3}]=[\lambda_{2}e_{2}+\mu_{2}x_{2},e_{3}]=(\lambda_2+\mu_{2})e_{5}=0,\\ &&[\varphi_f(e_{i}),e_{i}]=[\lambda_{i-1}e_{i-1}+\mu_{i-1}x_{i-1},e_{i}]=\mu_{i-1}e_{2i-1}=0, \end{eqnarray*} where $i\geq4$. So $\alpha=\beta=\lambda_2+\mu_{2}=\mu_{i}=0$ for $i\geq 3$. Taking $j=1$ in Eq. (\ref{mb2}), we get $[\varphi_f(e_{i}),e_1]=-[\varphi_f(e_{1}),e_i]=0$. Thus, we have \begin{eqnarray*}
&&[\varphi_f(e_3),e_1]=[\lambda_2e_2+\mu_2x_2,e_1]=-\lambda_2e_3=0, \\
&&[\varphi_f(e_i),e_1]=[\lambda_{i-1}e_{i-1},e_1]=-\lambda_{i-1}e_i=0, \end{eqnarray*} where $i\geq 4$. So $\lambda_i=0$ for $i\geq 2$. Thus $f=\varphi_f=0$.
\begin{flushleft} $\mathbf{Case\ 3.}\ k=0.$ \end{flushleft} In this case, we set $\varphi_f(e_1)=\alpha e_{1}$ and $\varphi_f(e_i)=\lambda_{i}e_{i}+\mu_{i}x_{i}$ for $i\geq 2$. Taking $i=j$ in Eq. (\ref{mb2}), we get $[\varphi_f(e_{i}),e_i]=0$. Thus, we have \begin{equation*} [\varphi_f(e_{i}),e_{i}]=[\lambda_ie_i+\mu_ix_{i},e_{i}]=\mu_i e_{2i}=0, \end{equation*} where $i\geq2$. So $\mu_{i}=0$ for $i\geq 2$. Taking $i=1$ in Eq. (\ref{mb2}), we get $[\varphi_f(e_{1}),e_j]=-[\varphi_f(e_{j}),e_1]$. Thus, we have \begin{equation*} [\varphi_f(e_{1}),e_j]=-[\varphi_f(e_{j}),e_1]=\alpha e_{j+1}=\lambda_je_{j+1}, \end{equation*} where $j\geq 2$. So $\lambda_i=\alpha$ for $i\geq 2$. Thus, \begin{eqnarray*} &&f(e_1,e_i)=[\varphi_f(e_1),e_i]=[\alpha e_{1},e_i]=\alpha e_{i+1},\ i\geq2, \\ &&f(e_2,e_i)=[\varphi_f(e_2),e_i]=[\alpha e_2,e_i]=\alpha e_{i+2} ,\ i\geq 3,\\ &&f(e_i,e_j)=[\varphi_f(e_i),e_j]=[\alpha e_{i},e_j]=0 ,\ j>i\geq 3. \end{eqnarray*} That is,
$f=\alpha\left(\sum_{i\geq2}e^{1,i}_{i+1}+\sum_{i\geq3}e^{2,i}_{i+2}\right)$.
\begin{flushleft} $\mathbf{Case\ 4.}\ k=1.$ \end{flushleft} In this case, we set $\varphi_f(e_i)=\lambda_{i}e_{i+1}+\mu_{i}x_{i+1}$ for $i\geq 1$. Taking $i=j$ in Eq. (\ref{mb2}), we get $[\varphi_f(e_{i}),e_i]=0$. Thus, we have \begin{eqnarray*}
&&[\varphi_f(e_{1}),e_{1}]=[\lambda_{1}e_{2}+\mu_{1}x_{2},e_{1}]=-\lambda_1 e_{3}=0, \\
&&[\varphi_f(e_{2}),e_{2}]=[\lambda_{2}e_{3}+\mu_{2}x_{3},e_{2}]=(\frac{1}{2}\mu_2-\lambda_2) e_5=0,\\ &&[\varphi_f(e_{i}),e_{i}]=[\lambda_{i}e_{i+1}+\mu_{i}x_{i+1},e_{i}]=\mu_{i}e_{2i+1}=0, \end{eqnarray*} where $i\geq 3$. So $\lambda_2=\frac{1}{2}\mu_2$, $\lambda_1=\mu_i=0$ for $i\geq 3$. Taking $j=1$ in
Eq. (\ref{mb2}), we get $[\varphi_f(e_i),e_1]=-[\varphi_f(e_1),e_i]$. Thus, we have \begin{equation*} [\varphi_f(e_i),e_1]=-[\varphi_f(e_1),e_i]=-(\lambda_i+\frac{1}{2}\mu_i)e_{i+2}=-\mu_1e_{i+2}, \end{equation*} where $i\geq 2$. So $\lambda_i=\mu_1-\frac{1}{2}\mu_i$ for $i\geq2$. Thus, \begin{eqnarray*} &&f(e_1,e_i)=[\varphi_f(e_1),e_i]=[\mu_1 x_{2},e_i]=\mu_1 e_{i+2},\ i\geq2, \\ &&f(e_2,e_i)=[\varphi_f(e_2),e_i]=[\frac{1}{2}\mu_1 e_3+\mu_1x_3,e_i]=\mu_1 e_{i+3} ,\ i\geq 3,\\ &&f(e_i,e_j)=[\varphi_f(e_i),e_j]=[\mu_1 e_{i+1},e_j]=0 ,\ j>i\geq 3. \end{eqnarray*} That is,
$f=\mu_1\left(\sum_{i\geq2}e^{1,i}_{i+2}+\sum_{i\geq3}e^{2,i}_{i+3}\right)$.
\begin{flushleft} $\mathbf{Case\ 5.}\ k\geq2.$ \end{flushleft} In this case, we set $\varphi_f(e_i)=\lambda_{i}e_{i+k}+\mu_{i}x_{i+k}$ for $i\geq 1$. Taking $i=j$ in Eq. (\ref{mb2}), we get $[\varphi_f(e_{i}),e_i]=0$. Thus we have \begin{eqnarray*}
&&[\varphi_f(e_{1}),e_{1}]=[\lambda_{1}e_{k+1}+\mu_{1}x_{k+1},e_{1}]=-(\lambda_1+\frac{1}{2}\mu_1) e_{k+2}=0, \\
&&[\varphi_f(e_{2}),e_{2}]=[\lambda_{2}e_{k+2}+\mu_{2}x_{k+2},e_{2}]=(\frac{1}{2}\mu_2-\lambda_2) e_{k+4}=0,\\ &&[\varphi_f(e_{i}),e_{i}]=[\lambda_{i}e_{k+i}+\mu_{i}x_{k+i},e_{i}]=\mu_{i}e_{2i+k}=0, \end{eqnarray*} where $i\geq 3$. So $\lambda_1=-\frac{1}{2}\mu_1$, $\lambda_2=\frac{1}{2}\mu_2$, $\mu_i=0$ for $i\geq 3$. Taking $j=1$ in
Eq. (\ref{mb2}),
we get $[\varphi_f(e_i),e_1]=-[\varphi_f(e_1),e_i]$. Thus, we have \begin{eqnarray*}
&& [\varphi_f(e_2),e_1]=-[\varphi_f(e_1),e_2]=-\mu_2e_{k+3}=-\mu_1e_{k+3},\\
&& [\varphi_f(e_i),e_1]=-[\varphi_f(e_1),e_i]=-\lambda_ie_{k+i+1}=-\mu_1e_{k+i+1}, \end{eqnarray*} where $i\geq 3$. So $\mu_1=\mu_2=\lambda_i$ for $i\geq 3$. Thus, \begin{eqnarray*} &&f(e_1,e_i)=[\varphi_f(e_1),e_i]=[-\frac{1}{2}\mu_1e_{k+1}+\mu_1 x_{k+1},e_i]=\mu_1 e_{k+i+1},\ i\geq2, \\ &&f(e_2,e_i)=[\varphi_f(e_2),e_i]=[\frac{1}{2}\mu_1 e_{k+2}+\mu_1x_{k+2},e_i]=\mu_1 e_{k+i+2} ,\ i\geq 3,\\ &&f(e_i,e_j)=[\varphi_f(e_i),e_j]=[\mu_1 e_{k+i},e_j]=0 ,\ j>i\geq 3. \end{eqnarray*} That is,
$f=\mu_1\left(\sum_{i\geq2}e^{1,i}_{k+i+1}+\sum_{i\geq3}e^{2,i}_{k+i+2}\right)$.
In conclusion, $$f=\left\{
\begin{array}{ll}
\lambda_k\left(\sum_{i\geq2}e^{1,i}_{k+i+1}+\sum_{i\geq3}e^{2,i}_{k+i+2}\right), & \hbox{$\lambda_k\in\mathbb{F},\ k\geq0$;} \\
0, & \hbox{$k\leq-1$.}
\end{array}
\right. $$ The proof is complete. \end{proof}
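The weight-zero generator found in Case 3 can be checked independently. The following Python sketch (our illustration, not part of the original argument; the index bound $N=8$ is an arbitrary truncation) verifies on basis elements that $f=\sum_{i\geq2}e^{1,i}_{i+1}+\sum_{i\geq3}e^{2,i}_{i+2}$ is a derivation in its first argument; by skew-symmetry of $f$ the same then holds in the second argument:

```python
# Check that f = sum_{i>=2} e^{1,i}_{i+1} + sum_{i>=3} e^{2,i}_{i+2}
# (the weight-0 biderivation from Case 3, with alpha = 1) is a derivation
# in its first argument on the basis e_1, ..., e_N of m_2.
# Elements are represented as dicts {basis index: coefficient}.

N = 8  # arbitrary truncation bound for the basis indices we test

def bracket_basis(i, j):
    # [e_1, e_i] = e_{i+1} (i>=2), [e_2, e_j] = e_{j+2} (j>=3), else 0
    if i == j:
        return {}
    if i == 1 and j >= 2:
        return {j + 1: 1}
    if i == 2 and j >= 3:
        return {j + 2: 1}
    if j in (1, 2):                     # antisymmetry
        return {k: -c for k, c in bracket_basis(j, i).items()}
    return {}

def f_basis(i, j):
    # f(e_1, e_i) = e_{i+1} (i>=2), f(e_2, e_i) = e_{i+2} (i>=3), else 0
    if i == j:
        return {}
    if i == 1 and j >= 2:
        return {j + 1: 1}
    if i == 2 and j >= 3:
        return {j + 2: 1}
    if j in (1, 2):                     # f is skew-symmetric
        return {k: -c for k, c in f_basis(j, i).items()}
    return {}

def lift(op, u, v):
    # extend a bilinear operation given on basis elements
    out = {}
    for i, a in u.items():
        for j, b in v.items():
            for k, c in op(i, j).items():
                out[k] = out.get(k, 0) + a * b * c
    return {k: c for k, c in out.items() if c != 0}

def add(u, v):
    out = dict(u)
    for k, c in v.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c != 0}

def e(i):
    return {i: 1}

br = lambda u, v: lift(bracket_basis, u, v)
f = lambda u, v: lift(f_basis, u, v)

# derivation in the first argument: f([y,z],x) = [f(y,x),z] + [y,f(z,x)]
ok = all(
    f(br(e(i), e(j)), e(k)) == add(br(f(e(i), e(k)), e(j)),
                                   br(e(i), f(e(j), e(k))))
    for i in range(1, N + 1)
    for j in range(1, N + 1)
    for k in range(1, N + 1)
)
print(ok)  # True
```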
\begin{cor} $\mathrm{BDer}_{0}(\mathfrak{m}_2)=\mathrm{IBDer}(\mathfrak{m}_2)$. \end{cor} \begin{thm} A linear mapping of $\mathfrak{m}_2$ is commuting if and only if it is a scalar multiple of $\sum_{i\geq1}e^{i}_{i}$. \end{thm} \begin{proof} The `if' direction is easy to verify. We now prove the `only if' direction. Suppose that $\phi$ is a linear commuting mapping of weight $k$, that is, $\phi(e_i)=a_ie_{i+k}$ for $i\geq1$. By Lemma \ref{linear}, $\phi$ defines a biderivation $\phi_{f}\in\mathrm{BDer}_{k}(\mathfrak{m}_2)$. If $k\leq-1$, then $\phi_{f}=0$ by Theorem \ref{m2}. So $[x,\phi(y)]=0$ for all $x,y\in\mathfrak{m}_2$. Since $\phi(y)\in\mathrm{C}(\mathfrak{m}_2)=0$, we have $\phi=0$. If $k\geq0$, then $\phi_{f}=\lambda(\sum_{i\geq2}e^{1,i}_{k+i+1}+\sum_{i\geq3}e^{2,i}_{k+i+2})$ for some $\lambda\in\mathbb{F}$. \begin{flushleft} $\mathbf{Case\ 1.}\ k\geq2.$ \end{flushleft} From $\phi_{f}(e_j,e_1)=\lambda(\sum_{i\geq2}e^{1,i}_{k+i+1}+\sum_{i\geq3}e^{2,i}_{k+i+2})(e_j,e_1)$ for $j\geq3$, we have $[e_j,a_1e_{k+1}]=-\lambda e_{1+j+k}$. The left-hand side vanishes because $j,k+1\geq3$, so $\lambda=0$. We have $\phi_{f}(x,y)=[x,\phi(y)]=0$ for all $x,y\in\mathfrak{m}_2$. So $\phi(y)\in \mathrm{C}(\mathfrak{m}_2)=0$. Then $\phi=0$. \begin{flushleft} $\mathbf{Case\ 2.}\ k=1.$ \end{flushleft} From $\phi_{f}(e_2,e_1)=\lambda(\sum_{i\geq2}e^{1,i}_{k+i+1}+\sum_{i\geq3}e^{2,i}_{k+i+2})(e_2,e_1)$, we have $[e_2,a_1e_{2}]=-\lambda e_{4}=0$. So $\lambda=0$. We have $\phi_{f}(x,y)=[x,\phi(y)]=0$ for all $x,y\in\mathfrak{m}_2$. So $\phi(y)\in \mathrm{C}(\mathfrak{m}_2)=0$. Then $\phi=0$. \begin{flushleft} $\mathbf{Case\ 3.}\ k=0.$ \end{flushleft} From $\phi_{f}(e_1,e_j)=\lambda(\sum_{i\geq2}e^{1,i}_{k+i+1}+\sum_{i\geq3}e^{2,i}_{k+i+2})(e_1,e_j)$ for $j\geq2$, we have $[e_1,a_je_j]=\lambda e_{1+j}$. Thus $a_j=\lambda$, $j\geq2$. Moreover, for $j\geq2$, from $\phi_{f}(e_j,e_1)=-\phi_{f}(e_1,e_j)=-\lambda e_{1+j}$, we have $[e_j,a_1e_1]=-\lambda e_{1+j}$. Thus $a_1=\lambda$. So $\phi=\lambda\sum_{i\geq1}e^{i}_{i}$. \end{proof}
Here we characterize local and 2-local derivations of $\mathfrak{m}_2$ using the results on derivations.
\begin{thm} $\mathrm{LDer}(\mathfrak{m}_2)=\mathrm{BLDer}(\mathfrak{m}_2)=\mathrm{Der}(\mathfrak{m}_2).$ \end{thm}
\begin{proof} By Lemmas \ref{ld} and \ref{M21}, it is sufficient to prove $\mathrm{LDer}_{k}(\mathfrak{m}_2)\subseteq\mathrm{Der}(\mathfrak{m}_2)$ for $k\geq0$. By Lemma \ref{M21}, we prove this case by case. \begin{flushleft} $\mathbf{Case\ 1.}\ k=0.$ \end{flushleft} Suppose that $\Delta\in\mathrm{LDer}_{0}(\mathfrak{m}_2)$ is a local derivation. For $\Delta(e_i)$, $i\geq1$, by definition, there exists $a_i\in\mathbb{F}$ such that $$\Delta(e_i)=a_i\sum_{j\geq1}je^{j}_j(e_i)=ia_ie_i.$$ Then, for $i\geq 2$, \begin{equation}\label{23} \Delta(e_1+e_i)=\Delta(e_1)+\Delta(e_i)=a_1e_1+ia_ie_i. \end{equation} For $\Delta(e_1+e_i)$, $i\geq2$, by definition, there exists $a_{1,i}\in\mathbb{F}$ such that \begin{equation}\label{24}
\Delta(e_1+e_i)=a_{1,i}\sum_{j\geq1}je^{j}_j(e_1+e_i)=a_{1,i}(e_1+ie_i). \end{equation} Comparing Eqs. (\ref{23}) with (\ref{24}), we have $a_i=a_1$ for $i\geq 1$. So $\Delta=a_1\sum_{i\geq1}ie^{i}_i\in \mathrm{Der}_0(\mathfrak{m}_2)$. \begin{flushleft} $\mathbf{Case\ 2.}\ k=1.$ \end{flushleft} Suppose that $\Delta\in\mathrm{LDer}_{1}(\mathfrak{m}_2)$ is a local derivation. For $\Delta(e_i)$, $i\geq2$, by definition, there exists $a_i\in\mathbb{F}$ such that $$\Delta(e_i)=a_i\sum_{j\geq2}e^{j+1}_{j}(e_i)=a_ie_{i+1}.$$ Then, for $i\geq 3$, \begin{equation}\label{33} \Delta(e_2+e_i)=\Delta(e_2)+\Delta(e_i)=a_2e_3+a_ie_{i+1}. \end{equation} For $\Delta(e_2+e_i)$, $i\geq3$, by definition, there exists $a_{2,i}\in\mathbb{F}$ such that \begin{equation}\label{34}
\Delta(e_2+e_i)=a_{2,i}\sum_{j\geq2}e^{j+1}_j(e_2+e_i)=a_{2,i}(e_3+e_{i+1}). \end{equation} Comparing Eqs. (\ref{33}) with (\ref{34}), we have $a_i=a_2$ for $i\geq 2$. So $\Delta=a_2\sum_{i\geq2}e^{i+1}_i\in \mathrm{Der}_1(\mathfrak{m}_2)$.
\begin{flushleft} $\mathbf{Case\ 3.}\ k=2.$ \end{flushleft} Suppose that $\Delta\in\mathrm{LDer}_{2}(\mathfrak{m}_2)$ is a local derivation. For $\Delta(e_i)$, $i\geq1$, by definition, there exist $a_i,b_i\in\mathbb{F}$ such that \begin{eqnarray*}
\Delta(e_1) &=& b_1e_3, \\
\Delta(e_2) &=&a_2e_4, \\
\Delta(e_i) &=&(a_i-b_i)e_{i+2},\ i\geq 3. \end{eqnarray*} Then, for $i\geq 3$, \begin{equation}\label{43} \Delta(e_1+e_2+e_i)=\Delta(e_1)+\Delta(e_2)+\Delta(e_i)=b_1e_3+a_2e_4+(a_i-b_i)e_{i+2}. \end{equation} For $\Delta(e_1+e_2+e_i)$, $i\geq3$, by definition, there exist $a_{1,2,i},b_{1,2,i}\in\mathbb{F}$ such that \begin{equation}\label{44}
\Delta(e_1+e_2+e_i)=b_{1,2,i}e_3+a_{1,2,i}e_4+(a_{1,2,i}-b_{1,2,i})e_{i+2}. \end{equation} Comparing Eqs. (\ref{43}) with (\ref{44}), we have $a_i-b_i=a_2-b_1$ for $i\geq 3$. So $\Delta=a_2\sum_{i\geq2}e^{i+2}_i+b_1(e^{3}_{1}-\sum_{i\geq3}e^{i+2}_{i})\in \mathrm{Der}_2(\mathfrak{m}_2)$. \begin{flushleft} $\mathbf{Case\ 4.}\ k\geq 3.$ \end{flushleft} Suppose that $\Delta_{k}\in\mathrm{LDer}_{k}(\mathfrak{m}_2)$ is a local derivation. For $\Delta(e_i)$, $i\geq1$, by definition, there exist $a_{k;i},b_{k;i}\in\mathbb{F}$ such that \begin{eqnarray*}
\Delta(e_1) &=& (a_{k;1}-\frac{1}{2}b_{k;1})e_{1+k}, \\
\Delta(e_2) &=&(a_{k;2}+\frac{1}{2}b_{k;2})e_{2+k}, \\
\Delta(e_i) &=&b_{k;i}e_{i+k},\ i\geq 3. \end{eqnarray*} Then, for $i\geq 4$, \begin{equation}\label{53} \Delta(e_3+e_i)=\Delta(e_3)+\Delta(e_i)=b_{k;3}e_{3+k}+b_{k;i}e_{i+k}. \end{equation} For $\Delta(e_3+e_i)$, $i\geq4$, by definition, there exists $b_{k;3,i}\in\mathbb{F}$ such that \begin{equation}\label{54}
\Delta(e_3+e_i)=b_{k;3,i}(e_{3+k}+e_{i+k}). \end{equation} Comparing Eqs. (\ref{53}) with (\ref{54}), we have $b_{k;i}=b_{k;3}$ for $i\geq 3$. Similarly, for $\Delta(e_1+e_2+e_3)$ there exist $a_{k;1,2,3},b_{k;1,2,3}\in\mathbb{F}$ such that \begin{eqnarray*}
\Delta(e_1+e_2+e_3) &=&(a_{k;1,2,3}-\frac{1}{2}b_{k;1,2,3})e_{1+k}+(a_{k;1,2,3}+\frac{1}{2}b_{k;1,2,3})e_{2+k}+b_{k;1,2,3}e_{3+k}\\
&=& \Delta(e_1)+\Delta(e_2)+\Delta(e_3)\\
&=& (a_{k;1}-\frac{1}{2}b_{k;1})e_{1+k}+(a_{k;2}+\frac{1}{2}b_{k;2})e_{2+k}+
b_{k;3}e_{3+k}. \end{eqnarray*} So $(a_{k;2}+\frac{1}{2}b_{k;2})-(a_{k;1}-\frac{1}{2}b_{k;1})=b_{k;3}$. Moreover, $\Delta_k=(a_{k;1}-\frac{1}{2}b_{k;1}+\frac{1}{2}b_{k;3})(e^{k+1}_1+e^{k+2}_2)+b_{k;3}(-\frac{1}{2}e^{k+1}_1+\frac{1}{2}e^{k+2}_2+ \sum_{i\geq3}e^{i+k}_i)\in\mathrm{Der}_{k}(\mathfrak{m}_2)$. \end{proof}
\small\noindent \textbf{Acknowledgment}\\
The authors are supported by NSF of Jilin Province (No. YDZJ202201ZYTS589), NNSF of China (Nos. 12271085, 12071405, 12001141) and the Fundamental Research Funds for the Central Universities.
\end{document}
\begin{document}
\title{
Using Algebraic Geometry to Reconstruct a Darboux Cyclide from a Calibrated Camera Picture
}
\begin{abstract}
The task of recognizing an algebraic surface from a single apparent contour can be reduced to
recovering a homogeneous equation in four variables from its discriminant.
In this paper, we use the fact that Darboux cyclides have a singularity along the absolute conic in order to recognize them up to Euclidean similarity transformations.
\end{abstract}
\section{Introduction}
\label{intro}
Can we obtain complete spatial information about an algebraic surface in 3-space from a 2D picture?
It seems counter-intuitive at first.
If the surface is smooth, then it can be reconstructed from a single apparent contour up to a projective equivalence that fixes all lines passing through the camera location~\cite{Almeida,Forsyth}.
D'Almeida's algorithm has been generalized in \cite{GLSV} to surfaces with ordinary singularities (nodal curves, transversal triple points, and pinch points).
In this paper, we focus on the reconstruction from a single view by a calibrated camera of a Darboux cyclide. These are surfaces in 3-space that carry at least two circles passing through a generic point (see \cite{Niels,Lubbes}). In \cite{Fou:04}, special Darboux cyclides called Dupin cyclides are used to blend between quadratic surfaces.
A picture from a single view by a calibrated camera allows one to determine viewing angles (see \cite{Hartley_Zisserman}). It is easy to reconstruct the equation of a sphere from a single view: the viewing angle already determines everything.
For a ring torus, the situation is slightly more complicated but still doable -- see Figure~\ref{fig:torus} for a geometric construction.
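For the sphere, this claim can be made completely explicit. The following sympy sketch (our illustration, not from the original text; the radius $r=1$ and center distance $c=2$ are arbitrary sample values) computes the apparent contour as a discriminant and confirms that it is the viewing cone with half-angle $\arcsin(r/c)$:

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')

r, c = 1, 2  # sample radius and distance of the sphere center from the camera
# sphere of radius r centered at (0, 0, c), homogenized; camera at o = (0:0:0:1)
F = x**2 + y**2 + (z - c*w)**2 - r**2 * w**2

# the apparent contour is the discriminant of F with respect to w
disc = sp.expand(sp.discriminant(F, w))
assert sp.expand(disc - (4*z**2 - 12*x**2 - 12*y**2)) == 0

# disc = 0 is the cone z^2 = 3*(x^2 + y^2); its half-angle is
# arctan(1/sqrt(3)) = pi/6, which equals arcsin(r/c)
half_angle = sp.atan(1/sp.sqrt(3))
assert half_angle == sp.asin(sp.Rational(r, c))
print(half_angle)  # pi/6
```

So the viewing cone determines the ratio $r/c$, and only the overall scale of the sphere remains free, in accordance with the scaling ambiguity discussed below.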
Here, we give an algorithm that reconstructs a Darboux cyclide from the apparent contour of a single view with a calibrated camera,
assuming that the camera is in a general position.
Up to scaling, there are only finitely many solutions.
In particular, if we have one Darboux cyclide with given apparent contour, then we can construct a second one by inversion at a sphere centered at the camera position.
\begin{figure}
\caption{Reconstruction of a torus from a single picture. From the photo with a calibrated camera, the three angles $a,b,c$ are
determined and drawn in a plane. Next, we draw a circle tangential to the rays forming the angle $a$. Next, we draw a second circle
with equal radius tangential to the rays forming the angle $c$. Next, we rotate both circles around the symmetry line
of the two centers $p,q$ and obtain the torus.}
\label{fig:torus}
\end{figure}
Algorithmically, our results are strongly based on the ideas in \cite{Almeida}, and the theorems and algorithms in \cite{GLSV}.
The stepping stone is the contour in projective 3-space, including the non-reduced structure in the Euclidean absolute conic.
To compute it, we collect local contributions from special points in the apparent contour:
non-smooth points of the apparent contour that do not arise by projecting special singular points of the Darboux cyclide.
The local contributions are generators for the local conductor ideal.
In many cases, the formula for the conductor ideal is well-known; there is one case which is new (Lemma~\ref{lem:new}).
The non-uniqueness of the solution is explained by the following uncertainty for a non-smooth point $q$ of the apparent contour.
In order to proceed with the computation, we need to know -- or to assume -- whether $q$ is special or whether it is the image of a singular point of the surface.
Depending on the assumptions we make at this step, we get different solutions, or no solution in case the assumptions are inconsistent.
The contents of the paper are organized as follows. In Section~\ref{sec:1}, we give a mathematical definition of a pinhole camera
and the calibration we consider on a pinhole camera. Then we formulate accordingly the problem of reconstruction of an
algebraic surface in space from a single view.
In Section~\ref{sec:2}, we recall the definition of Darboux cyclides and their relationship with the absolute conic.
We investigate the
singularities of Darboux cyclides; this is needed later for the reconstruction procedure.
In Section~\ref{sec:3}, we give an analysis of the contour and apparent contour.
We especially discuss the singularities of both curves.
Section~\ref{sec:4} contains our algorithm for the construction of the contour from the given apparent contour.
Especially, this section contains formulas for the local contributions to the conductor mentioned above.
Section~\ref{sec:5} explains the last step of the algorithm, namely the construction of the equation of the surface
when the equation of the contour is already known.
\section{The Calibrated Camera}
\label{sec:1}
The mathematical model of a {\em pinhole camera} is a linear projective map $\pi:(\mathbb{P}^3\setminus\{o\})\to\mathbb{P}^2$ given by a $3\times 4$ matrix,
where $o\in\mathbb{P}^3$ is the position of the camera; its representation by homogeneous coordinates spans the kernel of the matrix.
In this paper, we study the recognition of an algebraic surface $\mathrm{S}:F(x,y,z,w)=0$ by a single view. So, we assume without loss of generality
that $o=(0:0:0:1)$ and that $\pi$ is the projection map $(x:y:z:w)\mapsto (x:y:z)$.
The {\em contour} of $\mathrm{S}$ is the space curve defined by $F=\partial_w F=0$ -- geometrically, this is the locus of all points
with a tangent passing through $o$.
The {\em apparent contour} of $\mathrm{S}$ is the image of the contour. It is an algebraic curve defined by the
discriminant $\Delta_w(F)$. Single view surface recognition is the task of reconstructing $F$ from its discriminant $\Delta_w(F)$.
For any $a,b,c\in\mathbb{R}$ and $d\in\mathbb{R}^\ast$,
the polynomials $F(x,y,z,w)$ and $F(x,y,z,ax+by+cz+dw)$ have the same discriminant up to a constant factor.
Therefore, single view recognition is at best only possible up to the group of projective transformations of the form
$(x:y:z:w)\mapsto (x:y:z:ax+by+cz+dw)$. These are the projective transformations fixing all lines passing through $o$ (and hence $o$ as well).
The camera is calibrated if and only if the image plane is an {\em elliptic plane}. This means that it comes with a metric, called
{\em elliptic distance}. For any two points $a,b\in\mathbb{P}^3\setminus\{p\}\to\mathbb{P}^2$, the elliptic distance of $\pi(a)$ and $\pi(b)$
is equal to the angle $\angle(apb)$. The metric is determined by the {\em absolute conic} $\mathrm{C}_E$ of the elliptic plane, an irreducible
conic without real points. Here, we will assume that $\mathrm{C}_E$ has equation $x^2+y^2+z^2=0$. The elliptic distance of two points $\pi(a)$ and
$\pi(b)$ can then be computed as half of the absolute value of the arctangent of $\frac{i(x^2-1)}{x^2+1}$, where $x$ is the cross ratio (see \cite[p.45]{Hartley_Zisserman}) of $\pi(a)$, $\pi(b)$, and the other two intersection points of the line $\pi(a)\pi(b)$ with $\mathrm{C}_E$, and $i$ is the imaginary unit.
\begin{remark} \rm A well-known way to ``implement'' a calibrated camera
is to mark the footpoint of $o$ on the image plane in the picture and to specify the elliptic distance of this footpoint to a single other
point. The footpoint has elliptic distance $\pi/2$ to all points on the line of infinity, and one can show that the position of the
footpoint, the position of the line at infinity, and a single distance between the footpoint and any finite point uniquely determines
the absolute conic of the elliptic plane.
\end{remark}
The consequence of our assumption that the camera given by the map $(x:y:z:w)\mapsto (x:y:z)$ is calibrated is the following:
the 3-space is not just a projective 3-space, but it also has the structure of a Euclidean space, i.e. there is a Euclidean metric
defined on the set of finite points. In terms of projective geometry, the Euclidean metric is defined up to scaling by the absolute
conic $\mathrm{C}_A$, a conic in the plane at infinity without real points. In this paper, $\mathrm{C}_A$ is the conic $x^2+y^2+z^2=w=0$.
In general, it is required that $\pi$ maps the Euclidean absolute conic to the absolute conic of the elliptic plane; and this is
obviously true in our setup.
A calibrated camera should, in principle, give more precise information than a camera which is not calibrated. For instance, we may
hope for an answer that is unique not just up to projective transformations of the form $(x:y:z:w)\mapsto (x:y:z:ax+by+cz+dw)$,
but more specifically unique up to scaling $(x:y:z:w)\mapsto (x:y:z:dw)$ (recall $d\ne 0$). The reason for such a hope is that
all maps of the form $(x:y:z:w)\mapsto (x:y:z:ax+by+cz+dw)$ that preserve the Euclidean absolute conic also preserve the plane
at infinity, and they must therefore be scalings.
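This last step can be verified symbolically. In the sketch below (our illustration), the absolute conic is rationally parametrized, and solving for the coefficients $a,b,c$ that keep its image on the conic forces $a=b=c=0$:

```python
import sympy as sp

a, b, c, d, t = sp.symbols('a b c d t')

# rational parametrization of the absolute conic x^2 + y^2 + z^2 = w = 0
p = (1 - t**2, sp.I*(1 + t**2), 2*t, 0)
assert sp.expand(p[0]**2 + p[1]**2 + p[2]**2) == 0

# w-coordinate of the image of p under (x:y:z:w) -> (x:y:z:ax+by+cz+dw)
last = sp.expand(a*p[0] + b*p[1] + c*p[2] + d*p[3])

# the image stays on the absolute conic iff this polynomial in t vanishes,
# which happens exactly for a = b = c = 0 (d is unconstrained)
sol = sp.solve(sp.Poly(last, t).all_coeffs(), [a, b, c], dict=True)
assert sol == [{a: 0, b: 0, c: 0}]
```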
The hope expressed above has a chance to be fulfilled only if the surface to be recognized has some relation to the Euclidean absolute conic.
\section{Darboux Cyclides}
\label{sec:2}
A \emph{Darboux cyclide} is an algebraic surface of degree four, obtained by intersecting the 3-dimensional M\"obius sphere $x^2+y^2+z^2+t^2=w^2$ with a quadratic surface in $\mathbb{P}^4$. To obtain a model in $\mathbb{P}^3$, we project stereographically from $(0:0:0:1:1)$. The three-dimensional image is defined by a polynomial of the form
\begin{align}
F(x,y,z,w) = A^2 + 2ALw + Q w^2,
\end{align}
where $A:=x^2+y^2+z^2$, $L$ is linear in $x,y,z$ and $Q$ is a homogeneous quadratic polynomial in $x,y,z,w$.
To avoid degenerate cases, we will assume that the polynomial $F$ is absolutely irreducible.
This excludes cubic and quadratic Darboux cyclides (see \cite{Pottmann:12,Zhao:19} for a discussion of these degenerate cases)
that would have the infinite plane $w=0$ as a second component.
Since $F$ is contained in the ideal $\langle A,w\rangle_{\mathbb{Q}[x,y,z,w]}^2$, the absolute conic $\mathrm{C}_A$
-- which is defined by $A=w=0$ -- is at least a double curve of the Darboux cyclide.
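For a concrete instance, the following sympy sketch (our illustration; the ring torus with sample radii $R=2$, $r=1$) confirms that the equation and all its first partial derivatives vanish along a parametrization of the absolute conic:

```python
import sympy as sp

x, y, z, w, t = sp.symbols('x y z w t')

A = sp.expand(x**2 + y**2 + z**2)
# ring torus with radii R = 2, r = 1, homogenized:
# (x^2 + y^2 + z^2 + (R^2 - r^2) w^2)^2 = 4 R^2 (x^2 + y^2) w^2
F = (A + 3*w**2)**2 - 16*(x**2 + y**2)*w**2
# F has the Darboux form A^2 + 2*A*L*w + Q*w^2 with L = 0
Q = 6*A + 9*w**2 - 16*(x**2 + y**2)
assert sp.expand(F - (A**2 + Q*w**2)) == 0

# parametrize the absolute conic A = w = 0
conic = {x: 1 - t**2, y: sp.I*(1 + t**2), z: 2*t, w: 0}

# F and all four first partials vanish identically along the conic
vanish = [sp.expand(sp.diff(F, v).subs(conic)) for v in (x, y, z, w)]
assert sp.expand(F.subs(conic)) == 0 and all(e == 0 for e in vanish)
print("C_A is (at least) a double curve of the torus")
```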
We need to make this statement a little bit more specific.
A {\em nodal curve} of a surface $\mathrm{S}$ is an irreducible curve $\mathrm{C}$ in the singular locus of $\mathrm{S}$ such that the intersection of $\mathrm{S}$ with a generic transversal plane $\mathrm{P}$ has an ordinary double point (also known as ``node'') at any intersection of $\mathrm{P}$ and $\mathrm{C}$.
A {\em cuspidal curve} of a surface $\mathrm{S}$ is an irreducible curve $\mathrm{C}$ in the singular locus of $\mathrm{S}$ such that the intersection of $\mathrm{S}$ with a generic transversal plane $\mathrm{P}$ has a cusp at any intersection of $\mathrm{P}$ and $\mathrm{C}$.
\begin{proposition}
Let $\mathrm{D}\subset\mathbb{P}^3$ be an irreducible Darboux cyclide.
Then the absolute conic $\mathrm{C}_A$ is either a nodal or a cuspidal curve of $\mathrm{D}$.
Also, the absolute conic is the only singular curve of $\mathrm{D}$.
\end{proposition}
\begin{proof}
A generic plane $\mathrm{P}$ intersects $\mathrm{D}$ in a plane quartic curve $\mathrm{Q}$.
By Bertini's Theorem \cite[p.137]{griffiths:14}, the quartic $\mathrm{Q}$ is irreducible.
Since $\mathrm{C}_A$ has no real points, the singularities of $\mathrm{Q}$ come in pairs of complex conjugates.
By the genus formula for plane algebraic curves (see \cite{Hartshorne:77}, exercise IV.1.8a),
and by the fact that the genus cannot be negative, the sum of all $\delta$-invariants cannot be bigger than $3$.
On the other hand, conjugate singularities have the same $\delta$-invariant.
Hence, we have exactly one pair of conjugated singularities, which have $\delta$-invariant equal to one.
The only singularities with $\delta$-invariant equal to one are nodes and cusps.
This shows that $\mathrm{C}_A$ is nodal or cuspidal.
Assume, indirectly, that there exists another singular curve $\mathrm{B}$.
It would intersect $\mathrm{P}$ in $\deg(\mathrm{B})$ many points, and these points would be singular points of $\mathrm{Q}$.
Since $\mathrm{Q}$ has at most three singular points, it follows that $\deg(\mathrm{B})=1$, i.e., $\mathrm{B}$ is a line.
The line $\mathrm{B}$ intersects the infinite plane in a single point $p$, which must be real because otherwise the intersection
would also contain the conjugate point.
On the other hand, the infinite plane intersects $\mathrm{D}$ only in the absolute conic, with multiplicity 2.
This is a contradiction because the absolute conic has no real points.
\end{proof}
\begin{proposition} \label{prop:no3}
An irreducible Darboux cyclide does not have triple or quadruple points.
\end{proposition}
\begin{proof}
If a Darboux cyclide $\mathrm{D}$ had a quadruple point, then it would be a cone, which is obviously not the case.
Assume, indirectly, that $\mathrm{D}$ has a triple point $p$. For any point $q\in \mathrm{C}_A$, the line $\mathrm{L}_{pq}$ must be contained in $\mathrm{D}$, since it meets $\mathrm{D}$ with multiplicity at least $3+2=5>4$.
Then $\mathrm{D}$ would contain a whole quadric cone, or a plane in case $p$ lies on the infinite plane. This contradicts irreducibility.
\end{proof}
\section{The contour and apparent contour}
\label{sec:3}
Assume that $\mathrm{D}$ is an irreducible Darboux cyclide given by the equation $F(x,y,z,w)=0$.
We consider the image of $\mathrm{D}$ by a generic projection.
Still, we want to assume that the projection is $\pi:\mathbb{P}^3\dashrightarrow\mathbb{P}^2$, $(x:y:z:w)\mapsto(x:y:z)$.
In order to achieve this, we translate the center of projection, which is a generic point in $\mathbb{P}^3$, to the point $o=(0:0:0:1)$.
The equation after the translation is again called $F$, admittedly a slight abuse of notation.
As has been pointed out by a reviewer, this translation is only possible if we have a central projection and not a parallel projection,
with center at infinity. In this paper, we do not deal with the problem of reconstructing a Darboux cyclide from an image under parallel projection.
The {\em contour} $\mathrm{R}$ is the common zero set of $F$ and $\partial_w F$.
Singular curves of $\mathrm{D}$ are always components of $\mathrm{R}$, with some multiplicity.
The notion of multiplicity for a space curve requires an explanation:
we consider $\mathrm{R}$ as the 1-cycle defined as the intersection of $\mathrm{D}$ and the cubic surface defined by $\partial_w F=0$.
As such, it is a formal sum of irreducible curves with some multiplicity.
The sum of degree times multiplicity over all components is equal to 12, by Bezout's Theorem.
In our situation, $\mathrm{C}_A$ is a component of $\mathrm{R}$ of degree~2.
Its multiplicity is $2$ if $\mathrm{C}_A$ is nodal and $3$ if $\mathrm{C}_A$ is cuspidal.
Of course, there must be another component.
\begin{proposition}
Besides $\mathrm{C}_A$, the contour has exactly one other component.
Its multiplicity is one.
Its degree is 8 if $\mathrm{C}_A$ is nodal and 6 if $\mathrm{C}_A$ is cuspidal.
\end{proposition}
\begin{proof}
The curve is a generic element in the linear system cut out by the partial derivatives of $F$.
By Bertini's Theorem, it is smooth outside the base locus, which is the singular locus of $\mathrm{D}$.
It follows that $\mathrm{C}_A$ is the only multiple component.
The {\em polar map}, defined by the partial derivatives, maps $\mathrm{D}$ to its dual (see \cite[Section~1.2]{cag}).
Since $\mathrm{D}$ is not a ruled surface, the dual is two-dimensional.
By Bertini's Theorem again, it follows that the preimage of a generic plane section is irreducible.
The degree of the second component can be computed by Bezout's Theorem.
It is $12-2\cdot 2=8$ in the nodal case, and $12-2\cdot 3=6$ in the cuspidal case.
\end{proof}
The {\em apparent contour} $\mathrm{B}\subset\mathbb{P}^2$ is defined as the image $\pi(\mathrm{R})$ of the contour.
The elliptic absolute conic $\mathrm{C}_E$ with equation $A=0$ is a double (in the nodal case) or a triple (in the cuspidal case) component.
The second component has degree $8$ or $6$. We denote it by $\mathrm{B}_1$.
This is the curve which is visible in the picture.
The second component of $\mathrm{R}$ -- the one that maps to $\mathrm{B}_1$ -- is denoted by $\mathrm{R}_1$.
The equation of $\mathrm{B}$ is the input of our reconstruction algorithm.
It is a polynomial $U\in\mathbb{Q}[x,y,z]$ of degree $12$ that factors into $U=U_1A^2$ or $U=U_1A^3$.
The first step of our algorithm is to reconstruct the contour $\mathrm{R}$.
It will be necessary to reconstruct $\mathrm{R}$ as a scheme, not just as a 1-cycle: we need the ideal of $\mathrm{R}$.
Note that we already know that the ideal of $\mathrm{R}$ is generated by a quartic and a cubic form in $\mathbb{Q}[x,y,z,w]$.
For any $q\in \mathrm{B}$, we say that the map $\pi|_\mathrm{R}:\mathrm{R}\to \mathrm{B}$ is locally an isomorphism over $q$ if
there is a neighborhood $U$ of $q$ such that $\pi|_\mathrm{R}$ maps the inverse image of $U$ isomorphically onto $U$.
This implies in particular that the preimage of $q$ has only a single point (but maybe with non-reduced scheme structure).
Also, we call a non-isolated singular point $q$ on a nodal curve {\em special} iff the singular point
obtained by intersection with a generic plane through $q$ is not a node (but more complicated). Similarly, we call a non-isolated singular point $q$ on a cuspidal curve {\em special} iff the singular point
obtained by intersection with a generic plane through $q$ is not a cusp.
\begin{example}\rm
The line $x=y=0$ is a nodal curve on the ``Whitney umbrella'' with equation $x^2-y^2z=0$.
The point $q=(0,0,0)$ is special: a generic plane section through $q$ has a cusp at $q$.
A special point with this property is also called a {\em pinch point}.
\end{example}
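The pinch point in this example can be seen computationally. The sketch below (our illustration; the plane $z=\alpha x+\beta y$ stands in for a generic plane through $q$) computes the tangent cone of the plane section and confirms that it is the double line $x^2=0$, so the section is not a node (for generic $\alpha,\beta$ it is in fact a cusp):

```python
import sympy as sp

x, y = sp.symbols('x y')
alpha, beta = sp.symbols('alpha beta')

# Whitney umbrella x^2 - y^2*z = 0 restricted to the plane z = alpha*x + beta*y
section = sp.expand(x**2 - y**2 * (alpha*x + beta*y))

# lowest-degree homogeneous part at the origin: the tangent cone of the section
poly = sp.Poly(section, x, y)
min_deg = min(sum(m) for m in poly.monoms())
tangent_cone = sum(coef * x**m[0] * y**m[1]
                   for m, coef in zip(poly.monoms(), poly.coeffs())
                   if sum(m) == min_deg)

# a node would have two distinct tangent lines; here the tangent cone is the
# double line x^2 = 0, so the origin is not a node of the section
assert sp.expand(tangent_cone - x**2) == 0
print(tangent_cone)  # x**2
```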
\begin{proposition} \label{prop:iso}
The map $\pi|_\mathrm{R}$ is locally an isomorphism over the following points in $\mathrm{B}$:
\begin{description}
\item[\rm a)] smooth points of $\mathrm{B}_1$ outside of $\mathrm{C}_E$;
\item[\rm b)] smooth points of $\mathrm{C}_E$ outside of $\mathrm{B}_1$;
\item[\rm c)] images of isolated singular points of $\mathrm{D}$;
\item[\rm d)] images of special non-isolated singular points of $\mathrm{D}$.
\end{description}
\end{proposition}
\begin{proof}
a) The map $\pi|_{\mathrm{R}_1}$ is generically injective, and it is locally injective for all smooth points in $\mathrm{B}_1$.
Outside of $\mathrm{C}_A$ it coincides with $\pi|_\mathrm{R}$.
c,d) Let $q\in \mathrm{R}$ be an isolated singularity of $\mathrm{D}$ or a special non-isolated singularity.
By Proposition~\ref{prop:no3}, $q$ is a double point of $\mathrm{D}$.
Since the projection is generic, the fiber of the image point, i.e., the line $qo$, intersects $\mathrm{D}$ in three points,
namely $q$ itself and two nonsingular points that are not in $\mathrm{R}$.
We introduce an analytic coordinate system $(x,y,z)$ around $q$ such that the projection is $(x,y,z)\mapsto (x,y)$ and $q=(0,0,0)$.
In this coordinate system, the equation of $\mathrm{D}$ is a quadratic Weierstrass polynomial in $z$, that is, a quadratic polynomial
$\bar{F}(x,y,z)=z^2+F_1(x,y)z+F_2(x,y)$ with $F_1,F_2\in\mathbb{C}[[x,y]]$
(Note that the coefficients might be complex because it could be that $q$ is not a real point).
The contour $\mathrm{R}$ is locally defined by the equations $\bar{F}=\partial_z\bar{F}=0$, which is equivalent to
$z^2+F_1z+F_2=2z+F_1=0$.
The apparent contour $\mathrm{B}$ is locally defined by the discriminant $F_1^2-4F_2=0$.
So, the projection restricted to $\mathrm{B}$ has local inverse $(x,y)\mapsto\left(x,y,-\frac{F_1(x,y)}{2}\right)$.
b) The proof of (c,d) is also valid for almost all points of $\mathrm{C}_E$, namely for those points $q$ such that the line $qo$ intersects $\mathrm{D}$ in three points.
If the line intersects $\mathrm{D}$ in only one or two points, then the order of the discriminant is larger than 2 (in the nodal case) or 3 (in the cuspidal case), respectively.
Since $\mathrm{C}_E$ is smooth, the order can only be higher if we have a point of intersection with $\mathrm{B}_1$.
\end{proof}
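The local computation in the proof of (c,d) can be replayed symbolically. In the sketch below (our illustration, with $F_1,F_2$ modelled as undetermined functions of $x,y$), the contour equation $\partial_z\bar{F}=0$ gives $z=-F_1/2$, and back-substitution recovers the local discriminant up to the factor $-\tfrac14$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F1 = sp.Function('F1')(x, y)
F2 = sp.Function('F2')(x, y)

# local Weierstrass form of the surface near the double point
Fbar = z**2 + F1*z + F2

# the contour is Fbar = d(Fbar)/dz = 0; the second equation gives z = -F1/2
z_contour = sp.solve(sp.diff(Fbar, z), z)[0]
assert z_contour == -F1/2

# substituting back into Fbar yields F2 - F1**2/4 = -(F1**2 - 4*F2)/4,
# so (x, y) -> (x, y, -F1/2) is a local inverse of the projection on the contour
residual = sp.simplify(Fbar.subs(z, z_contour))
assert sp.simplify(residual - (F2 - F1**2/4)) == 0
```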
Proposition~\ref{prop:iso} indicates that the unknown curve $\mathrm{R}$ is not so different from the known curve $\mathrm{B}$.
Now we describe the points more closely for which a difference occurs.
\begin{proposition} \label{prop:noniso}
Assume that $q\in \mathrm{B}$ is a point over which the map $\pi|_\mathrm{R}$ is not locally an isomorphism.
Then we have one of the following cases.
\begin{description}
\item[\rm a)] $q$ is a node on $\mathrm{B}_1$ and does not lie on $\mathrm{C}_E$.
The fiber $\pi^{-1}(q)$ intersects $\mathrm{R}_1$ in two smooth points.
\item[\rm b)] $q$ is a cusp on $\mathrm{B}_1$ and does not lie on $\mathrm{C}_E$.
The fiber $\pi^{-1}(q)$ intersects $\mathrm{R}$ in a unique smooth point with multiplicity~2.
\item[\rm c)] $q$ is a transversal intersection of $\mathrm{B}_1$ and $\mathrm{C}_E$.
The fiber $\pi^{-1}(q)$ intersects $\mathrm{R}_1$ in a smooth point and $\mathrm{C}_A$ in a different point.
The second point is not a special singularity.
\item[\rm d)]
$q$ is a tangential intersection of $\mathrm{B}_1$ and $\mathrm{C}_E$, with intersection multiplicity $2$ or $3$ depending on
whether $\mathrm{C}_A$ is nodal or cuspidal.
The fiber $\pi^{-1}(q)$ intersects $\mathrm{R}$ in a single point in the intersection of $\mathrm{R}_1$ and $\mathrm{C}_A$.
This point is not a special singularity and not a singular point of $\mathrm{R}_1$.
\end{description}
\end{proposition}
\begin{proof}
First, assume that $q$ does not lie on $\mathrm{C}_E$.
By Proposition~\ref{prop:iso}, the fiber $\pi^{-1}(q)$ intersects $\mathrm{R}_1$ only in smooth points of $\mathrm{D}$.
The apparent contour of a smooth surface under generic projections has only nodes and cusps (see \cite{Ciliberto_Flamini}). This
shows that we have (a) or (b).
Second, assume that $q$ does lie on $\mathrm{C}_E$.
The center of projection $o$ does not lie on the plane at infinity.
Therefore, the fiber $\pi^{-1}(q)$ (a line through $o$) intersects $\mathrm{C}_A$ only once, and the intersection is transversal.
By Proposition~\ref{prop:iso}, the preimage in $\mathrm{C}_A$ is not a special singularity of $\mathrm{D}$.
By Proposition~\ref{prop:iso}(b), the fiber also intersects $\mathrm{R}_1$.
By Lemma~2.3 and Lemma~2.4 in \cite{GLSV}, there is only a single intersection with $\mathrm{R}_1$, and it is transversal.
We distinguish three subcases.
Subcase 1: the preimages in $\mathrm{R}_1$ and in $\mathrm{C}_A$ are distinct. By Lemma~2.5 in \cite{GLSV}, the tangents at the two preimages to $\mathrm{R}_1$ and to $\mathrm{C}_A$
are not coplanar. We obtain (c).
Subcase 2: the fiber meets $\mathrm{R}_1$ and $\mathrm{C}_A$ in the same point, and $\mathrm{C}_A$ is nodal.
By \cite[Proposition 2.1]{GLSV}, the point $q$ is a smooth point in both curves $\mathrm{C}_E$ and $\mathrm{B}_1$, and the two curves meet with
intersection multiplicity 2.
Subcase 3: the fiber meets $\mathrm{R}_1$ and $\mathrm{C}_A$ in the same point $p$, and $\mathrm{C}_A$ is cuspidal. Locally around $p$, the surface $\mathrm{D}$ is
a cylinder over the cuspidal curve with equation $y^2-z^3=0$.
Since $p$ is also in $\mathrm{R}_1$, the center of projection lies in the plane of maximal contact $y=0$.
This holds only for $p$, not for other points on $\mathrm{C}_A$ close to $p$. If $p_x$ parametrizes $\mathrm{C}_A$ so that $p_0=p$, then the
plane of maximal contact changes with $x$ and passes through $o$ exactly at $x=0$. The situation is described in a different
local coordinate system as follows:
the projection map is $(x,y,z)\mapsto (x,y)$, and the surface $\mathrm{D}$ has equation $z^3-(y-xz)^2=0$. The cuspidal curve $\mathrm{C}_A$
has equation $x=z=0$, and it projects to the curve $\mathrm{C}_E$ with equation $y=0$, which appears as a triple component of the discriminant. The
curve $\mathrm{R}_1$ has parameterization $(x,y,z)=\left(x,\frac{4x^3}{27},\frac{4x^2}{9}\right)$, and its image $\mathrm{B}_1$ has equation
$4x^3-27y=0$ and intersects $\mathrm{C}_E$ with intersection multiplicity three.
\end{proof}
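The local model in subcase 3 can be verified by a short computer algebra session (the paper reports Maple computations; the sketch below uses SymPy instead): the claimed parameterization of $\mathrm{R}_1$ satisfies both $F=0$ and $\partial_zF=0$, and the discriminant of $F$ with respect to $z$ recovers the triple component $y^3$ times the equation $4x^3-27y$ of $\mathrm{B}_1$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = z**3 - (y - x*z)**2   # local equation of D in subcase 3

# The contour R_1 is claimed to be parameterized by (x, 4x^3/27, 4x^2/9):
par = {y: sp.Rational(4, 27)*x**3, z: sp.Rational(4, 9)*x**2}
assert sp.simplify(F.subs(par)) == 0
assert sp.simplify(sp.diff(F, z).subs(par)) == 0

# The discriminant of F with respect to z recovers the stated local equation
# of the discriminant curve: y^3 (triple component C_E) times 4x^3 - 27y (B_1).
assert sp.expand(sp.discriminant(F, z) - y**3*(4*x**3 - 27*y)) == 0
```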
\begin{remark} \label{rem:ambi} \rm
It is not always possible to infer the type of singularity of the contour from the type of singularity of the apparent contour.
For instance, a transversal intersection of $\mathrm{C}_E$ and $\mathrm{B}_1$ could be the image of two distinct points as in Proposition~\ref{prop:noniso} case (c), or the image of a pinch point.
As a consequence, the recognition does not always have a unique solution. We can say that the number of solutions, up to scalings, is finite, since for each singularity of the contour we only have a binary choice. In particular, the inversion of a Darboux cyclide at a sphere with center $o$ has the same apparent contour.
There are several suggestions how to obtain additional information in order to have only a unique answer:
we can assume that the solution is defined over $\mathbb{Q}$, so that the finite choice we have to make must be invariant under conjugation by the Galois group of the Galois closure of the residue field at the singularity of the contour; or we can use a second view from a different camera position.
It would be nice to have a theoretical result giving necessary criteria for uniqueness; unfortunately, we do not have such a result.
\end{remark}
\section{The conductor of the apparent contour in the contour}
\label{sec:4}
Let $q$ be a point in $\mathrm{B}$.
The local ring $E_q$ of $\mathrm{B}$ at $q$ is defined as the ring of all regular functions on $\mathrm{B}$ defined in some neighborhood of $q$.
Algebraically, it is the quotient of a two-dimensional regular local ring by a local equation of the discriminant.
Let $q_1,\dots,q_k\in \mathrm{R}$ be the points in the fiber (by now we know that $k=1$ or $k=2$).
We define $F_q$ as the ring of all regular functions on $\mathrm{R}$ defined in neighborhoods of $q_1,\dots,q_k$.
If $k=1$, then $F_q$ is the local ring of $\mathrm{R}$ at $q_1$; otherwise, $F_q$ is only semi-local.
The projection $\pi$ induces an injective ring homomorphism $E_q\hookrightarrow F_q$.
Note that the ring homomorphism is finite: over $E_q$, the ring $F_q$ is generated by a single element -- the vertical coordinate --
which satisfies an integral equation, namely a local equation for $\mathrm{D}$.
Recall that the total fraction ring of a ring is the localization at all elements that are not zero divisors.
Because the projection map is an isomorphism over almost all points, the ring inclusion $E_q\hookrightarrow F_q$ extends to an isomorphism of total fraction rings.
We denote the total fraction ring of $E_q$ by $K_q$; the total fraction ring of $F_q$ is also identified with $K_q$.
The {\em conductor} $C_q$ of $E_q$ in $F_q$ is defined as the set of all $a\in E_q$ such that $aF_q\subseteq E_q$.
It is an ideal in both rings $E_q$ and $F_q$.
Our interest in the conductor comes from the fact that the conductor is isomorphic to $F_q$ as an $E_q$-module.
Here is the precise statement; the proof can be found in \cite{GLSV}.
\begin{lemma} \label{lem:sheaves}
Let $\mathrm{X},\mathrm{Y}$ be complete intersections and let $f:\mathrm{X}\to \mathrm{Y}$ be a finite map which is an isomorphism over all generic points of $\mathrm{Y}$.
Let $\omega^0_\mathrm{X},\omega^0_\mathrm{Y}$ be the dualizing sheafs of $\mathrm{X},\mathrm{Y}$.
Let ${\cal C}\subset{\cal O}_\mathrm{Y}$ be the sheaf of conductors of ${\cal O}_\mathrm{Y}$ in $f_\ast{\cal O}_\mathrm{X}$.
Then $f_\ast\omega^0_\mathrm{X}\cong {\cal C}{\otimes}\omega^0_\mathrm{Y}$.
\end{lemma}
The rational map $\mathrm{B}\dashrightarrow \mathrm{R}$ which is inverse to $\pi|_\mathrm{R}$ is defined by the global sections of $(\pi|_\mathrm{R})_\ast{\cal O}_\mathrm{R}(1)$.
For a complete intersection in $\mathbb{P}^n$, the dualizing sheaf is the twisted structure sheaf with degree shift equal to the sum of the
degrees of the intersecting hypersurfaces minus $n+1$.
By Lemma~\ref{lem:sheaves}, we can replace the sheaf $(\pi|_\mathrm{R})_\ast{\cal O}_\mathrm{R}(1)$ by ${\cal C}(m)$, where
\[ m = (d(d-1)-3)-(d+d-1-4)+1 = d^2-3d+3 = 7 , \]
and $d=4$ is the degree of the Darboux cyclide.
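The arithmetic in this degree count can be checked mechanically; a minimal SymPy verification:

```python
import sympy as sp

d = sp.Symbol('d')
# degree shift: (shift for B, a plane curve of degree d(d-1)) minus
# (shift for R, a complete intersection of degrees d and d-1 in P^3) plus 1
m = (d*(d - 1) - 3) - (d + d - 1 - 4) + 1
assert sp.expand(m - (d**2 - 3*d + 3)) == 0
assert m.subs(d, 4) == 7
```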
\begin{remark} \label{rem:sh}
The isomorphism $(\pi|_\mathrm{R})_\ast{\cal O}_\mathrm{R}\cong{\cal C}(6)$ also implies that the global conductor ideal has a unique element of degree $6$, say $G_0$.
The elements of degree 7 form a vector space of dimension~4, because ${\cal O}_\mathrm{R}(1)$ has four global sections (see \cite[Lemma 3.2]{GLSV}).
We already know a three-dimensional subspace, namely $\langle xG_0,yG_0,zG_0\rangle$.
Let $G_1$ be an element of degree~7 that is not a multiple of $G_0$.
Then the map $\mathrm{B}\dashrightarrow \mathrm{R}$ that is a rational inverse to $\pi|_\mathrm{R}$ can be defined as
\[ (x:y:z) \mapsto \left(xG_0(x,y,z):yG_0(x,y,z):zG_0(x,y,z):G_1(x,y,z)\right) = \left(x:y:z:\frac{G_1(x,y,z)}{G_0(x,y,z)}\right) . \]
\end{remark}
Let us now compute the local conductor ideals for each of the points over which $\pi|_\mathrm{R}$ is not an isomorphism.
We will refer to these points as {\em special points} of the apparent contour.
\begin{lemma} \label{lem:nc}
If $q$ is a node or cusp of $\mathrm{B}_1$, and not the image of an isolated singular point, then $C_q$ is the maximal ideal.
\end{lemma}
\begin{proof}
This is well known (see \cite[Remark~7.7]{Boehm:17} or \cite[Lemma~A.1]{GLSV}).
\end{proof}
\begin{lemma} \label{lem:cross}
If $q\in \mathrm{B}_1\cap \mathrm{C}_E$ is the image of two distinct points $q_1\in \mathrm{R}_1$ and $q_2\in \mathrm{C}_A$, then $C_q$ is the sum of the ideal of $\mathrm{B}_1$ and the ideal of the multiple component, which is the square of the ideal of $\mathrm{C}_E$ in the nodal case and the cube of the ideal of $\mathrm{C}_E$ in the cuspidal case.
\end{lemma}
\begin{proof}
This is covered by \cite[Lemma~A.1]{GLSV}. In general, if $R$ is a UFD and if $F,G$ are coprime in $R$, then the
conductor of $R/\langle FG\rangle$ in $R/\langle F\rangle\times R/\langle G\rangle$ is $\langle F,G\rangle$.
\end{proof}
The remaining special points are intersections of $\mathrm{B}_1$ and $\mathrm{C}_E$.
A {\em covariant} is a function $F$ from the set of ideals of a regular local ring to itself such that
any local analytic automorphism of the local ring that sends $I$ to $I'$ also sends $F(I)$ to $F(I')$.
An example is the derivative ideal $\partial(I)$ of $I$, which is defined as the ideal generated by all partial derivatives
of elements in $I$.
Let $I$ and $J$ be two ideals of a regular local ring, and let $\lambda\in\mathbb{C}^\ast$ be a nonzero constant.
The {\em mixed derivative ideal} $\partial(I,J,\lambda)$ is defined as the ideal generated by all elements of the form
$\partial(F)G+\lambda F\partial(G)$ with $F\in I$, $G\in J$, and $\partial$ being any partial derivative.
The mixed derivative ideal is a covariant of $I$ and $J$.
If $I=\langle F\rangle$ and $J=\langle G\rangle$, then the mixed derivative ideal is generated
by $FG$ and all elements $\partial(F)G+\lambda F\partial(G)$, where $\partial$ is a partial derivative.
Let $I,J,\lambda$ be as above, and let $K$ be another ideal. The {\em mixed jacobian ideal} $\partial(I,J,\lambda,K)$
is defined as the ideal generated by all elements of the form
$\frac{\partial(F,H)}{\partial(x,y)}G+\lambda F\frac{\partial(G,H)}{\partial(x,y)}$ with $F\in I$, $G\in J$, $H\in K$,
and $x,y$ being any regular system of parameters.
The mixed jacobian ideal is a covariant of $I$, $J$, and $K$.
If $I=\langle F\rangle$ and $J=\langle G\rangle$, then the mixed jacobian ideal is generated
by the product of $FG$ and the jacobian ideal of $K$, the product of the mixed derivative ideal $\partial(I,J,\lambda)$ and $K$,
and all elements $\frac{\partial(F,H)}{\partial(x,y)}G+\lambda F\frac{\partial(G,H)}{\partial(x,y)}$ with $H\in K$.
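For principal ideals, the generators of the mixed derivative ideal can be listed mechanically. The following SymPy sketch does this for hypothetical local equations (a parabola tangent to the line $y=0$, standing in for $\mathrm{B}_1$ and $\mathrm{C}_E$); it only unwinds the definition and makes no claim about the actual conductor.

```python
import sympy as sp

x, y = sp.symbols('x y')

def mixed_derivative_gens(F, G, lam, params=(x, y)):
    """Generators of the mixed derivative ideal for principal ideals <F>, <G>."""
    gens = [sp.expand(F*G)]
    for v in params:
        gens.append(sp.expand(sp.diff(F, v)*G + lam*F*sp.diff(G, v)))
    return gens

# Hypothetical local equations: C_E given by y = 0, and B_1 tangent to it.
F = y - x**2   # stand-in for a local equation of B_1
G = y          # local equation of C_E
gens = mixed_derivative_gens(F, G, -4)
# gens = [y**2 - x**2*y, -2*x*y, 4*x**2 - 3*y]
```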
In the following, we will use the notation $\mathrm{Id}(\mathrm{C}_E)$ and $\mathrm{Id}(\mathrm{B}_1)$ for the local ideal of the curves $\mathrm{C}_E$ and $\mathrm{B}_1$.
\begin{lemma} \label{lem:touch}
Assume that $\mathrm{C}_A$ is a nodal curve in $\mathrm{D}$.
Assume that $q$ is the image of a point of intersection of $\mathrm{C}_A$ and $\mathrm{R}$.
Then the local conductor is
\[ C_q = \partial(\mathrm{Id} (\mathrm{B}_1),\mathrm{Id} (\mathrm{C}_E),-4) . \]
\end{lemma}
\begin{proof}
This is covered by the appendix of \cite{GLSV}; see Remark~\ref{rem:newproof} for a different proof.
\end{proof}
\begin{lemma} \label{lem:new}
Let $q$ be as in Lemma~\ref{lem:touch} above, but now assume that $\mathrm{C}_A$ is cuspidal.
Let $M$ be the maximal ideal at $q$. Then the local conductor is
\[ C_q = \partial(\mathrm{Id} (\mathrm{B}_1),\mathrm{Id} (\mathrm{C}_E),-9,M^2) . \]
\end{lemma}
\begin{proof}
We can choose local coordinates around $p$ as in the proof of Proposition~\ref{prop:noniso}, subcase three:
the projection map is $(x,y,z)\mapsto (x,y)$, and the surface $\mathrm{D}$ has equation $F:=z^3-(y-xz)^2=0$. The ideal of $\mathrm{B}$ is generated
by $y^3(4x^3-27y)$, and the ideal of $\mathrm{R}$ is $\langle F,\partial_z F\rangle$. Using the computer algebra system Maple,
we can compute the conductor and verify that it is indeed generated by the formula stated above.
To prove that the formula is true in general, it is essential that the formula above is a
covariant of $\mathrm{Id}(\mathrm{B}_1)$ and $\mathrm{Id}(\mathrm{C}_E)$.
Let $\alpha:\mathbb{C}[[x,y,z]]\to \mathbb{C}[[x,y,z]]$ be an analytic coordinate change that
transforms the local equations at $p$ to the equations above -- in particular, the subring $\mathbb{C}[[x,y]]$ is mapped to itself
because it corresponds to the projection map in both coordinate systems. Then $\alpha|_{\mathbb{C}[[x,y]]}$ maps the conductor ideal
in the coordinate system above to the conductor in the local system of coordinates at $q$. By the covariance of the formula, we may
use the formula above also for $q$.
\end{proof}
\begin{remark} \label{rem:newproof} \rm
The formula for $C_q$ in Lemma~\ref{lem:touch} is covariant as well.
Hence we can prove Lemma~\ref{lem:touch} in the same way as the proof of Lemma~\ref{lem:new} above.
This proof is shorter and simpler than the one given in \cite{GLSV}, where an analysis by blowing ups was used.
The Maple computations for the local conductor ideals, and the comparison with the formulas
stated above, are available at \url{https://www.risc.jku.at/people/jschicho/pub/darboux/condin}
and \url{https://www.risc.jku.at/people/jschicho/pub/darboux/condout}.
\end{remark}
The combination of all local conductor ideals into a global homogeneous conductor ideal works as follows:
For each special point $q$, compute the maximal homogeneous ideal $I_q$ such that the localization at $q$ is equal to $C_q$.
This can be done by taking any ideal such that the localization at $q$ is equal to $C_q$ (as computed by the formulas above) and saturating it by a sufficiently high power of the maximal ideal at $q$.
Then compute the intersection of all ideals $I_q$, where $q$ ranges over all special points.
The final result is an ideal which has one generator $G_0$ of degree~6 and one generator $G_1$ of degree $7$.
Indeed, the stalk of the sheaf of ideals given by the homogeneous conductor ideal at any point in $\mathbb{P}^2$ is generated by the germs of dehomogenizations of
the three polynomials $G_0$, $G_1$ and a local equation for $\mathrm{B}$.
Hence the homogeneous conductor ideal is equal to the saturation of $\langle G_0,G_1,B\rangle$ by the irrelevant ideal $\langle x,y,z\rangle$.
Once we have $G_0$ and $G_1$, we compute $\mathrm{R}$ as the image of the rational map $(x:y:z)\mapsto\left(x:y:z:\frac{G_1}{G_0}\right)$.
If we have done everything correctly
(recall from Remark~\ref{rem:ambi} that we need to guess, for a point in the intersection of $\mathrm{B}_1$ and $\mathrm{C}_E$, whether the point is special or a non-special image of a singular point),
then the ideal of the image is generated by a cubic and a quartic form.
\section{From the contour to the surface}
\label{sec:5}
We are left with the following task:
given a cubic form $H_0\in\mathbb{Q}[x,y,z,w]$ and a quartic form $H_1\in\mathbb{Q}[x,y,z,w]$ that generate the ideal of the contour, compute the equation $F$ of the Darboux cyclide.
Here is the information about $F$ that we can use at this point:
\begin{itemize}
\item $\partial_wF\in \langle H_0\rangle_{\mathbb{Q}}$
\item $F\in\langle xH_0,yH_0,zH_0,wH_0,H_1\rangle_{\mathbb{Q}}$
\end{itemize}
The equation $F$ is unique only up to scaling; we make it unique by strengthening the first condition to $\partial_wF=H_0$.
In order to use the second condition, we integrate $H_0$ formally, obtaining a quartic form $H_2$ such that $\partial_wH_2=H_0$.
Then we make an ansatz
\[ H_2-c_1xH_0-c_2yH_0-c_3zH_0-c_4wH_0-c_5H_1 \in \mathbb{Q}[x,y,z] \]
with indeterminate symbolic coefficients $c_1,\dots,c_5$.
This gives an inhomogeneous system of linear equations in the five indeterminates; we solve it.
Then we set $F_0:=c_1xH_0+c_2yH_0+c_3zH_0+c_4wH_0+c_5H_1$.
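The formal integration step can be sketched as follows, for a hypothetical cubic form standing in for $H_0$. Note that integration in $w$ determines $H_2$ only up to terms free of $w$, which is exactly the freedom absorbed by the ansatz.

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
# hypothetical cubic form standing in for H_0
H0 = 3*x*w**2 + 2*y*z*w + z**3 - x*y**2

# formal integration in w
H2 = sp.integrate(H0, w)
assert sp.expand(sp.diff(H2, w) - H0) == 0
assert sp.Poly(H2, x, y, z, w).is_homogeneous   # H_2 is again a form, of degree 4
```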
At this step, $F_0$ has the correct discriminant.
However, it is not yet the equation of a Darboux cyclide:
the map $\mathrm{B}\to \mathrm{R}$ we constructed is only unique up to projective transformations preserving all lines through $o$, and these do not preserve the plane at infinity.
So, the surface $\mathrm{S}_0$ defined by $F_0=0$ is projectively equivalent to a Darboux cyclide, but its singular conic is not yet in the infinite plane.
Let $w+ax+by+cz=0$ be the equation of the plane containing the singular conic of $\mathrm{S}_0$.
Then we define $F:=F_0(x,y,z,w-ax-by-cz)$.
Now, the two polynomials $F$ and $F_0$ have the same discriminant with respect to $w$.
Moreover, $F$ is the equation of a Darboux cyclide; we have solved the given problem.
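That the shear $w\mapsto w-ax-by-cz$ leaves the discriminant with respect to $w$ unchanged can be confirmed on a sample quartic; the form and the coefficients $a,b,c$ below are arbitrary choices for illustration.

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
a, b, c = 2, -1, 3   # hypothetical coefficients of the singular-conic plane

# a sample quartic form playing the role of F_0
F0 = w**4 + x*w**3 + y**2*z*w + x**2*y*z
F = F0.subs(w, w - a*x - b*y - c*z)

# the substitution is a translation in w (constant with respect to w), so the
# roots in w are shifted and their differences -- hence the discriminant -- stay fixed
assert sp.expand(sp.discriminant(F, w) - sp.discriminant(F0, w)) == 0
```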
In Algorithm~\ref{algorithm:}, there are two issues that require an explanation. First, in lines 4 and 9, a guess is required whether a point $q$ in the apparent contour
is the image of a special point or not. The computed result does depend on these guesses. Since we want to compute all possible Darboux cyclides that do have the given discriminant and that are in generic coordinate position, we test all possible combinations of guesses. Second, the lines 14, 15, 24, 27 are assertions that
are checked at this point.
If an assertion fails, then something is wrong, and this branch of the algorithm terminates with an error message. The failure could be caused by a wrong guess in line 4 or 9, so we proceed by trying the next guess.
\begin{proposition}
Algorithm~\ref{algorithm:} is correct in the following sense: if the given polynomial $U$ is the equation of the apparent contour of a Darboux cyclide $\mathrm{D}$ in
general position, then there is a guess that computes the equation of $\mathrm{D}$ up to scalings.
\end{proposition}
\begin{proof}
Let $F'$ be the equation of a Darboux cyclide $\mathrm{D}$ in the assumption. Let $U_1$ be the factor different from $A=x^2+y^2+z^2$
(as specified in the algorithm specification). Let $\mathrm{R}'$ be the zero set of $F'$ and $\partial_wF'$, i.e., the contour of $\mathrm{D}$.
Let $\mathrm{B}$ be the apparent contour, and let $f':\mathrm{R}'\to \mathrm{B}$ be the projection map.
Let ${\cal C}\subset{\cal O}_\mathrm{B}$ be the sheaf of conductors of ${\cal O}_{\mathrm{B}}$ in $(f')_\ast{\cal O}_{\mathrm{R}'}$.
Each singular point $q$ of $U_1$ is either the image of an isolated double point of $\mathrm{D}$ or not; we may assume that
we make the correct guesses in line 4. Then, by Lemma~\ref{lem:nc}, the stalk of ${\cal C}$ at $q$ is generated by the maximal ideal at $q$.
Similarly, each intersection point $q$ of $\mathrm{B}$ and $\mathrm{C}_E$ is either the image of a special point of $\mathrm{D}$ or not; we may assume that
we make the correct guess in line 9. Then, by Lemmas~\ref{lem:cross}, \ref{lem:touch}, and \ref{lem:new}, the stalk of ${\cal C}$ at $q$ is
generated by $C_q$ as computed in line 8. After the end of the for loop in line 11, the ideal $C$ is the ideal corresponding
to ${\cal C}$. By Remark~\ref{rem:sh}, $C$ has an element $G_0$ of degree $6$ and an element $G_1$ of degree~7 as asserted in lines 14 and 15.
By the same remark, the rational map in line 18 maps $\mathrm{B}$ to a curve $\mathrm{R}\subset\mathbb{P}^3$, and the two curves $\mathrm{R},\mathrm{R}'\subset\mathbb{P}^3$
are related by a projective isomorphism $\tau$ fixing the homogeneous coordinates $x,y,z$. In particular, the assertion in line 19 holds.
The zero set of the equation $F$ computed in lines 21--23 is a quartic surface $\mathrm{S}$ that has contour $\mathrm{R}$. Such a surface, if it exists,
is necessarily unique: otherwise there would be a linear pencil $F_{\lambda_1,\lambda_2}=\lambda_1 F_1+\lambda_2 F_2$ of quartics such that
$\partial_w F_{\lambda_1,\lambda_2}$ lies in the ideal of $\mathrm{R}$. However, this ideal contains only a single cubic form,
so there exists an element $\bar{F}$ in the pencil with $\partial_w\bar{F}=0$. But then the zero set of $\bar{F}$ is a cone with center $o$, and the projection of this cone would be
a quartic curve, contradicting the fact that the zero set of $U_1$ lies in this projection. So, there is only a single quartic with
contour $\mathrm{R}$. However, $\tau^{-1}(\mathrm{D})$ also has the
property that its contour is $\mathrm{R}$. This implies that $\tau$ necessarily transforms $\mathrm{S}$ to $\mathrm{D}$.
At this point, we see that the assertion in line 24 holds; moreover, the constants $a,b,c$ are unique, because a Darboux cyclide has only one
singular conic. The projective transformation
\[ \tau':\mathbb{P}^3\to\mathbb{P}^3, \ (x:y:z:w)\mapsto (x:y:z:w-ax-by-cz) \]
transforms the singular conic of $\mathrm{S}$ to the singular conic of $\mathrm{D}$. The last computation in line 26 applies the transformation to $\mathrm{S}$.
The result is a Darboux cyclide $\mathrm{D}'$ that is projectively equivalent to $\mathrm{D}$,
by the equivalence $\tau'\circ\tau^{-1}$ that fixes the coordinates $x,y,z$ and that fixes the infinite plane $w=0$, because the infinite plane contains the common singular curve of both Darboux cyclides.
But then this projective equivalence is a scaling.
\end{proof}
In order to test the method, one author took some examples of Darboux cyclides from \cite{Takeuchi:00} and applied a randomly generated Euclidean congruence transformation.
This author also computed the discriminant and gave it to the other authors (without revealing the surface equation).
The other two authors used the method in this paper, the computer algebra system Maple, and some interactive guesses of the most probable types of special points -- by Remark~\ref{rem:ambi}, there is a finite ambiguity that cannot be avoided.
In all cases, the secret equation could be recovered up to scaling, despite the predicted ambiguity. The Darboux cyclides that have the same apparent contour, in particular the inversion at the sphere centered at $o$, do not satisfy our assumptions on the genericity of the camera position.
The computing times were less than 3 CPU seconds on a Pentium with 1.6 GHz in every example. This is negligible in relation to the time needed for the analysis of the special points and the estimation of their types.
\begin{example}\rm
The input for the computation in the interactive Maple file \url{https://www.risc.jku.at/people/jschicho/pub/darboux/maplein} was computed by a random translation
of a random instance of an equation in \cite{Takeuchi:00}. Its discriminant factors into an octic $U_1$ times $A^2$,
which means that the absolute conic is nodal. To compute the singular points of $U_1$, we compute its discriminant with respect to $x$.
It factors into $D_1^3D_2^2D_3^2D_4$ with $\deg(D_1)=12$, $\deg(D_2)=4$, $\deg(D_3)=2$, and $\deg(D_4)=8$. So, we have 12 cusps,
and $D_1$ generates the elimination ideal of the set of cusps. A Darboux cyclide cannot have 12 isolated singularities, hence none of the 12 cusps
is the image of an isolated singularity. The radical ideal of the cusps -- which is the intersection of the maximal ideals at all cusps -- is computed
in one step by saturating the ideal generated by the partial derivatives of $U_1$ and $D_1$.
The curve defined by $U_1$ has six nodes, four projecting to the zero set of $D_2$ and two projecting to the zero set of $D_3$. It turns out
that the two projecting to the zero set of $D_3$ are also on $\mathrm{C}_E$. This shows that they are not images of isolated singularities,
because a generic projection does not project isolated singularities onto $\mathrm{C}_E$. But we have to guess whether the other four nodes
are images of isolated singularities or not.
To analyze the common zeroes of $U_1$ and $A$, we compute the resultant of these two polynomials. It factors into $R_1^2D_3^2R_3$,
with $\deg{R_1}=\deg{R_3}=4$. The zeroes of the squared factor $R_1$ correspond to tangential intersections,
the zeroes of $R_3$ correspond to transversal intersections, and the zeroes of $D_3$ correspond to the nodes of $U_1$ that also lie
on $\mathrm{C}_E$. These nodes are necessarily images of special points.
For the tangential intersections, we need to guess if they are special points projecting to tangential intersections.
For the transversal intersections, we have to guess if they are images of pinch points.
In total, there are 8 possible guesses.
We compute the conductor ideal for all 8 cases. The assertion in line 14 fails for 6 cases. The assertion in line 15 fails for one of the two
remaining cases. So, we have only a single case still to consider. In this case, we have no isolated singularities and two special points,
namely those projecting to nodes of $U_1$ that also lie on $A$.
The result is the Darboux cyclide that was used for constructing the input, up to scaling.
The output is shown in \url{https://www.risc.jku.at/people/jschicho/pub/darboux/mapleout}.
\end{example}
\begin{algorithm}[H]
\caption{}\label{algorithm:}
\begin{algorithmic}[1]
\Require A polynomial $U=U_1(x^2+y^2+z^2)^2$ or $U=U_1(x^2+y^2+z^2)^3$ in $\mathbb{Q}[x,y,z]$,
the equation of the apparent contour $\mathrm{B}\subset {\mathbb{P}^2}$.
\Ensure The equation $F\in\mathbb{Q}[x,y,z,w]$ of a Darboux cyclide $\mathrm{D}\subset {\mathbb{P}^3}$, such that its
discriminant is equal to $U$.
\Statex
\State {\bf Compute} the singular points of the zero set of $U_1$ and common zeroes of $U_1$ and $x^2+y^2+z^2$.
\State {\bf Initialize} the conductor ideal $C:=\langle 1\rangle_{\mathbb{Q}[x,y,z]}$.
\For{all singular points $q$ of the zero set of $U_1$}
\If{$q$ does not look like the image of an isolated double point}
\State $C:=C\ \cap$ maximal ideal at $q$
\EndIf
\EndFor
\For{all common zeroes $q$ of $U_1$ and $x^2+y^2+z^2$}
\If{$q$ looks like the image of a special point}
\State Compute $C_q$ by the formulas in Lemmas~\ref{lem:cross}, \ref{lem:touch}, and \ref{lem:new}
\State $C:=C\ \cap C_q$
\EndIf
\EndFor
\State {\bf Assert} that $C$ has a unique element $G_0$ in degree $6$ up to scalar multiplication
\State {\bf Assert} that $C$ has a unique element $G_1$ in degree $7$ up to scalar multiplication
\State $\ \ $ and up to addition of multiples of $G_0$
\State {\bf Compute} the image $\mathrm{R}$ of $\mathrm{B}$ under the rational map
\State $\ \ $ $(x:y:z)\mapsto (xG_0:yG_0:zG_0:G_1)$
\State {\bf Assert} that the ideal of $\mathrm{R}$ is generated by a cubic $H_0$ and a quartic $H_1$
\State {\bf Compute} $H_2:=\int_w H_0$ which is homogeneous
\State {\bf Find} $c_1,\dots,c_5$ such that
\State $\ \ $ $H_2-c_1xH_0-c_2yH_0-c_3zH_0-c_4wH_0-c_5H_1 \in \mathbb{Q}[x,y,z]$
\State $F_0:=c_1xH_0+c_2yH_0+c_3zH_0+c_4wH_0+c_5H_1$
\State {\bf Assert} that there exist $a,b,c\in\mathbb{Q}$ such that $w+ax+by+cz=0$
\State $\ \ $is the equation of the plane containing the singular conic of $F_0=0$.
\State $F:=F_0(x,y,z,w-ax-by-cz)$.
\State \Return $F$.
\end{algorithmic}
\end{algorithm}
\begin{figure}
\caption{Some Darboux cyclides with $\mathrm{C}_A$ being a nodal curve, together with the real part of their apparent contour, a curve of degree~8.
The complex part of the apparent contour is not visible: it is the elliptic absolute conic $\mathrm{C}_E$.
The equation of $\mathrm{C}_E$ can be assumed to be known because of the assumption that we have a calibrated camera.}
\label{fig:example1}
\end{figure}
\begin{figure}
\caption{Some Darboux cyclides with $\mathrm{C}_A$ being a cuspidal curve, together with the real part of their apparent contour, a curve of degree~6.}
\label{fig:example4}
\end{figure}
\small
\end{document} |
\begin{document}
\title{\uppercase{On The Prolongations of Homogeneous Vector Bundles}%
\footnote{2000 \textit{Mathematics Subject Classification}. Primary 53C30; Secondary 55R91.}%
\footnote{\textit{Key words and phrases:} Homogeneous Space, Fiber bundles, Prolongation, Homogeneous Vector Bundles.}}
\begin{abstract} In this paper, we introduce a study of prolongations of homogeneous vector bundles and give an alternative approach to the prolongation. For a given homogeneous vector bundle $E$, we obtain a new homogeneous vector bundle, and we derive its homogeneous structure and the corresponding representation. The prolongation of the induced representation, which is an infinite-dimensional linear representation, is also defined.
\end{abstract}
\section{Introduction}
\hspace{5mm} In this study, we continue our work on prolongations. In our previous work \cite{Myarticle}, we defined prolongations of finite-dimensional real representations of Lie groups and obtained faithful representations on tangent bundles of Lie groups. In this work, we use the prolongations of these representations to give an alternative method for prolonging a vector bundle, specifically a homogeneous vector bundle, which also carries group actions in its structure. We also give the definition of the prolongation of induced representations. In the literature, the well-known methods for prolongation are to use lifts (for example, vertical lifts or complete lifts) \cite{Yano} or to use jet prolongations \cite{Saunders}. For example, in \cite{Fisher}, Fisher and Laguer worked on second-order tangent bundles by using jets. For further information about jet manifolds, we refer to \cite{Cordero},\cite{Saunders}.
Homogeneous vector bundles have been studied because of their applications to cohomology and to complex analytic Lie groups. In 1957, Raoul Bott \cite{Bott} dealt with induced representations in the framework of complex analytic Lie groups. In 1964, Griffiths gave differential-geometric derivations of various properties of homogeneous complex manifolds, with applications to homogeneous vector bundles and to the study of sheaf cohomology \cite{Griffiths}. In 1988, Purohit \cite{Purohit} showed that there is a one-to-one correspondence between homogeneous vector bundles and linear representations. Various other studies of homogeneous vector bundles can be found in the literature (\cite{Boralevi},\cite{Harboush}).
Moreover, in 1972, {\it{R. W. Brockett and H. J. Sussmann}} described how the tangent bundle of a homogeneous space can be viewed as a homogeneous space \cite{Brockett}. They associated with every Lie group $G$ another Lie group $G^*=Lie(G)\times G$, constructed as a semidirect product with the group operation given by \begin{equation} (a,g).(a',g')=(a+ad(g)(a'),gg') \end{equation}
\noindent where $(a,g), (a',g') \in G^*$. They also showed that if $G$ acts on a manifold $X$, then there exists a left action of $G^*$ on $TX$ given by \begin{equation} (a,g).v= d\sigma_g (v)+\bar{a}(g.\pi(v))\quad \mbox{for all}\hspace{1mm} v\in TX \end{equation} \noindent Here $\pi$ denotes the natural projection from $TX$ onto $X$ (i.e., $\pi(v)=x$ if and only if $v\in T_xX$), and $\sigma_g:X \to X$ is the map $x \to gx$. Clearly, both $d\sigma_g(v)$ and $\bar{a}(g.\pi(v))$ belong to $T_{g.\pi(v)}X$, so the sum is well-defined.
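The group law on $G^*$ can be checked on a small example. The sketch below is a toy computation, not taken from \cite{Brockett}: it realizes $ad(g)$ as matrix conjugation for $2\times 2$ matrices and verifies associativity and the identity element $(0,I)$.

```python
import numpy as np

# Toy check of the semidirect-product law on G* = Lie(G) x G for a matrix
# group G, with ad(g)(a) = g a g^{-1} (conjugation), as in the text.

def ad(g, a):
    return g @ a @ np.linalg.inv(g)

def mul(p, q):
    (a, g), (ap, gp) = p, q
    return (a + ad(g, ap), g @ gp)

# some invertible 2x2 matrices and Lie-algebra elements (here: all of gl(2))
g1 = np.array([[1., 2.], [0., 1.]])
g2 = np.array([[0., -1.], [1., 0.]])
g3 = np.array([[2., 0.], [1., 1.]])
a1 = np.array([[0., 1.], [0., 0.]])
a2 = np.array([[1., 0.], [0., -1.]])
a3 = np.array([[0., 0.], [3., 0.]])

p1, p2, p3 = (a1, g1), (a2, g2), (a3, g3)

# the operation is associative (since ad(g g') = ad(g) ad(g')),
# with identity element (0, I)
lhs, rhs = mul(mul(p1, p2), p3), mul(p1, mul(p2, p3))
assert np.allclose(lhs[0], rhs[0]) and np.allclose(lhs[1], rhs[1])
e = (np.zeros((2, 2)), np.eye(2))
assert np.allclose(mul(e, p1)[0], p1[0]) and np.allclose(mul(p1, e)[1], p1[1])
```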
This paper is organized as follows. In section \ref{pre}, we give some basic definitions and theorems that we need for our proofs. In section \ref{Pro}, we give the homogeneous vector bundle structure of the prolonged bundle. Finally, in section \ref{conc}, we give the conclusion and future work.
\section{Preliminaries}\label{pre}
First of all, we give the definition of a homogeneous vector bundle.
\begin{definition} Let $G$ be a Lie group, let $F$ be an $n$-dimensional real vector space, and let $G$ act transitively on a manifold $M$. Let $H$ be the isotropy subgroup of $G$ at a fixed point $p_0 \in M$, so that $M$ becomes the coset space $G/H$. In addition, suppose $G$ acts on the vector bundle $E$ sitting over $G/H$ so that its action on the base agrees with the usual action of $G$ on cosets. Then such a structure $(E, \pi, M, F)$ is called a homogeneous vector bundle \cite{Purohit}. \end{definition}
\begin{theorem}\label{sigmaofE}
Homogeneous vector bundles over $G/H$ are in one-to-one correspondence with linear representations of $H$ \cite{Purohit}. \end{theorem} The above-mentioned representation is defined as follows:
Let $(E, \pi, M, F)$ be a homogeneous vector bundle where $M=G/H$, $H$ be the isotropy subgroup of $G$ at $p_0$, and $F=\pi^{-1}(p_0)$. Then, there exists a Lie group representation \\$\sigma:H \to Aut(F)$ with \begin{equation} \begin{gathered} \sigma(h):F \to F\\ \hspace{3cm} q \to \sigma(h)(q)=hq \end{gathered} \end{equation} where $h \in H$.
Conversely, if $G$ is a Lie group with isotropy subgroup $H$, $F$ is a finite dimensional real vector space and $\sigma:H \to Aut(F)$ is a representation, then there exists a homogeneous vector bundle $(E,\pi,G/H,F)$ where $E=G\times_H F$ \cite{Adams, Kobayashi}.
Homogeneous vector bundles also correspond to infinite-dimensional real representations. This correspondence is illustrated in the following proposition.
\begin{proposition}\cite{Purohit} Let $(E, \pi, M, F)$ be a homogeneous vector bundle, where $M=G/H$, and let $\Gamma(E)$ denote the (global) cross sections of the vector bundle $E$. For all $g \in G$, $\rho(g)$ can be defined by the following: \begin{equation} \begin{gathered} \rho(g):\Gamma(E) \to \Gamma(E)\\ \psi \to \rho(g)(\psi):M \to E\\
p \to (\rho(g)(\psi))(p)=g.\psi(g^{-1}p) \end{gathered} \end{equation} Clearly, $\rho$ is a representation of $G$ in $\Gamma(E)$ which is induced by the representation $\sigma$ defined in Theorem \ref{sigmaofE}. We call $\rho$ the induced representation of $E$. \end{proposition}
\begin{theorem} If a Lie group $G$ acts transitively and with maximal rank on a differentiable manifold $X$, then $G^*$ acts transitively and with maximal rank on the tangent bundle of $X$ \cite{Brockett}. \end{theorem} We have the following remarks on the above theorem: \begin{remark}
\begin{enumerate}
\item Clearly, the above result implies that the tangent bundle of a coset space $G/H$ is again a coset space and, moreover, is of the form $G^*/K$ for some closed subgroup $K$ of $G^*$. \item If $H$ is a closed subgroup of $G$, then $H^*$ can be identified, in an obvious way, with a closed subgroup of $G^*$. One verifies easily that the isotropy group of $0_x$ corresponding to the action of $G^*$ on $TX$ is precisely $H_x^*$, where $H_x$ is the isotropy group of $x$ corresponding to the action of $G$ on $X$. In particular, we have the diffeomorphism $T(G/H)\simeq G^*/H^*$. \end{enumerate} \end{remark}
\begin{definition} Let $\Phi$ be an $n$-dimensional Lie group representation of $G$. ``The prolongation of the representation $\Phi$'', denoted by $\widetilde{\Phi}$, is given by the following equation: \begin{eqnarray} \widetilde{\Phi}:TG \to GL(2n)\hspace{54mm} \nonumber \\
(a,v) \to \widetilde{\Phi}(a,v)=\
\begin{pmatrix}
\Phi(a) & 0\\
[(d(\Phi)_e(v))_j^i].\Phi(a) & \Phi(a)
\end{pmatrix}. \end{eqnarray} \end{definition}
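As a sanity check of this definition (our own, using names not in the paper), one can verify numerically that $\widetilde{\Phi}$ is a homomorphism for a concrete choice: the identity representation of $G = GL(2,\mathbb{R})$, so $\Phi(a)=a$ and $d(\Phi)_e(v)=v$, with $TG$ identified with pairs $(a,v)$ and the tangent-group product $(a,v).(b,w)=(ab,\, v + a w a^{-1})$, consistent with the $G^*$ law above.

```python
# Numerical sketch: for the identity representation of GL(2,R),
# Phi~(a,v) = [[Phi(a), 0], [dPhi_e(v).Phi(a), Phi(a)]] respects the
# tangent-group product (a,v).(b,w) = (ab, v + a w a^{-1}).  All names ours.

def mmul(A, B):
    """n x n matrix product (n inferred from A)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A))] for i in range(len(A))]

def minv2(A):
    """Inverse of a 2x2 matrix."""
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def prolong(a, v):
    """The 4x4 block matrix [[Phi(a), 0], [dPhi_e(v).Phi(a), Phi(a)]]."""
    va = mmul(v, a)
    return ([a[i] + [0, 0] for i in range(2)] +
            [va[i] + a[i] for i in range(2)])

def tmul(x, y):
    """Tangent-group product (a, v).(b, w) = (ab, v + a w a^{-1})."""
    (a, v), (b, w) = x, y
    return (mmul(a, b), madd(v, mmul(mmul(a, w), minv2(a))))

# Sample tangent elements; the base parts have determinant 1, so the
# computation stays exact in floating point.
a, v = [[2, 1], [1, 1]], [[1, 2], [3, 4]]
b, w = [[1, 1], [0, 1]], [[0, 1], [-1, 0]]
lhs = mmul(prolong(a, v), prolong(b, w))
ab, u = tmul((a, v), (b, w))
rhs = prolong(ab, u)
```

The lower-left blocks agree because $(va)b + a(wb) = (v + awa^{-1})ab$, the same identity that makes $\widetilde{\Phi}$ multiplicative.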
\section {\bf{Prolongation of a Homogeneous Vector Bundle}} \label{Pro}
In this section, we introduce a homogeneous vector bundle with a prolonged Lie group representation, which is defined in \cite{Myarticle}. \newline
\noindent{\bf{Main Result:}} \\
Let $(E,\pi, G/H, F)$ be the homogeneous vector bundle with the corresponding Lie group representation $\sigma: H\to Aut(F)$. Using the prolongation of the representation $\sigma$ defined in \cite{Myarticle}, we have \begin{equation} \tilde{\sigma}(dR_h(a))=\ \begin{pmatrix} \sigma(h) & 0\\ d(\sigma)_e(a).\sigma(h) & \sigma(h) \end{pmatrix} \end{equation}
where $\tilde{\sigma}: TH \to Aut(TF)$ is the prolongation of the representation $\sigma$. Taking the composition of $\tilde{\sigma}$ and $\Theta$, we define $\sigma^*:H^* \to Aut(TF)$ as follows:
\begin{equation} \sigma^*(a,h)=\ \begin{pmatrix} \sigma(h) & 0\\ d(\sigma)_e(a).\sigma(h) & \sigma(h) \end{pmatrix} \end{equation}
where $\Theta$ denotes the natural diffeomorphism $\Theta: H^* \to TH$. The following diagram summarizes the constructions to come. \begin{equation} E \longleftrightarrow \sigma\longrightarrow \sigma^* \longleftrightarrow E^* \end{equation} where $E=(E, \pi, G/H, F)$, $\sigma:H \to Aut(F)$, $\sigma^*:H^* \to Aut(TF)$ and $ E^*=(E^*, \pi^*, G^*/H^*, TF)$. We now define the new induced action which will be used for defining equivalence classes. \newline
\subsection{\bf{The Prolonged Action of $H^*$ on $G^*\times TF$:}}
Using the above representation $\sigma^*$ and the natural action of $G^*$ on its coset space $G^*/H^*$, we have the following induced action $\tilde{\alpha}:(G^* \times TF)\times H^* \to G^* \times TF$, which can be obtained from the coset space definition:
\begin{equation}
\tilde{\alpha}(a,g,v,b,h)=((a,g).(b,h),\sigma^*((b,h)^{-1})(v)).\label{complex} \end{equation}
\begin{proposition}
If $v=(\xi ,u)\in TF$ and $\sigma^*((b,h)^{-1})(v)=(\tilde{\xi},\tilde{u})$, then \begin{eqnarray}
\tilde{\alpha}(a,g,v,b,h)=(a+adj(g)(b), gh,\tilde{\xi},\tilde{u}). \end{eqnarray}
\end{proposition}
\begin{proof}
Since $(a,g).(b,h)=(a+adj(g)(b),gh)$ and $(b,h)^{-1}=(adj(h^{-1})(-b),h^{-1})$, we have \begin{equation} \sigma^*((b,h)^{-1})(v)=\sigma^*(adj(h^{-1})(-b),h^{-1})(\xi,u)\nonumber \end{equation} Using the definition of $\sigma^*$ we have \begin{equation}
=\
\begin{pmatrix}
\sigma(h^{-1}) & 0 \\
d(\sigma)_e(-adj(h^{-1})(b)) & \sigma(h^{-1})
\end{pmatrix}.\
\begin{pmatrix}
\xi \\
u
\end{pmatrix}.\nonumber \end{equation}
\begin{equation}
=(\sigma(h^{-1})(\xi), [(d\sigma)_e(-adj(h^{-1})(b))]_j^i \xi_ix_j +[\sigma(h^{-1})]_j^i u_i \dot{x}_j)\label{action} \end{equation} \begin{equation}
=(\tilde{\xi},\tilde{u})\label{sigma} \end{equation} where $v=(\xi,u) \in TF$ for all $(a,g) \in G^*$ and $(b,h) \in H^*$. Therefore, using equation (\ref{sigma}), we finish the proof. \end{proof}
Using the above action, it is possible to form an equivalence relation on $G^* \times TF$. \newline
\subsection{\bf{The Prolonged Equivalence Relation:}}
Using the above induced action, we have the following equivalence relation on $G^* \times TF$ by \cite{Adams, Kobayashi}: \newline
$((a,g),(\xi, u)) \simeq((a',g'),(\xi', u'))$ if and only if there exists $(b,h) \in H^*$ such that the following equation holds \begin{equation} ((a',g'),(\xi', u'))=((a,g),(\xi, u)).(b,h).\label{equiv} \end{equation}
The next corollary gives the simplified form of equivalence relation that we have defined above. \begin{corollary}
The equivalence relation defined by the equation (\ref{equiv}) can be given by the following. \begin{equation} ((a,g),(\xi, u))\simeq ((a',g'),(\xi', u')) \Leftrightarrow \left\{ \begin{array}{rcl} a'=a+adj(g)(b),\hspace{6cm}\\ g'=gh,\hspace{78mm} \\ \xi'=\sigma(h^{-1})(\xi),\hspace{66mm} \\ u'=[(d\sigma)_e(-adj(h^{-1})(b))]_j^i \xi_i x_j+[\sigma(h^{-1})]_j^i u_i \dot{x}_j.\hspace{12mm}\end{array}\right. \label{equi} \end{equation} \end{corollary}
\begin{proof} If $((a,g),(\xi, u))\simeq ((a',g'),(\xi', u'))$, then there exists $(b,h) \in H^*$ such that \begin{eqnarray} ((a',g'),(\xi', u'))&=&((a,g),(\xi, u)).(b,h)\nonumber\\
&=&\tilde{\alpha}(a,g,(\xi,u),b,h). \end{eqnarray} Using equation (\ref{action}), we have $\xi'=\sigma(h^{-1})(\xi)$ and $u'=[(d\sigma)_e(-adj(h^{-1})(b))]_j^i \xi_ix_j +[\sigma(h^{-1})]_j^i u_i \dot{x}_j. $
Moreover, since the $G^*$-component of $\tilde{\alpha}(a,g,(\xi,u),b,h)$ is $(a,g).(b,h)$, we have
\begin{eqnarray} (a',g')&=&(a,g).(b,h)\nonumber\\
&=&(a+adj(g)(b),gh) \end{eqnarray} which finishes the proof. \end{proof}
\begin{definition}
We denote by $E^*$ the set of equivalence classes arising from (\ref{equi}), i.e. $$E^*=G^*\times_{H^*}TF,$$ and we refer to $E^*$ as the {\it{Prolonged Vector Bundle}}. \end{definition} \begin{remark} The bundle projection of the prolonged bundle is \begin{eqnarray} \pi^*_{E^*}: G^* \times_{H^*} TF \to G^*/H^* \hspace{5cm}\nonumber \\ ((a,g),v)H^* \to \pi^*_{E^*}(((a,g),v)H^*)=(a,g)H^*\hspace{15mm} \nonumber \end{eqnarray} and the local trivialization of the prolonged bundle is \begin{eqnarray} \psi^* : (\pi^*_{E^*})^{-1}(V) \to V \times TF\hspace{55mm} \nonumber \\ ((a,g),v)H^* \to \psi^*(((a,g),v)H^* )=((a,gH),\sigma^*(b,h)(v)) \end{eqnarray} where $V \subset T(G/H)$ is an open subset. \end{remark}
So far, we have given the structures of the prolonged bundle. In the following, we define a new prolongation, obtained from the prolongation of homogeneous vector bundles. \newline
\begin{definition} Let $\rho:G \to Aut(\Gamma(E))$ be the induced representation of the homogeneous vector bundle $E$. Then $\rho^* :TG \to Aut(\Gamma(E^*))$ is called the prolongation of the induced representation $\rho$, where $E^*$ denotes the prolongation of $E$. \end{definition}
\section{Conclusion and Future Work}\label{conc} In this paper, we have introduced a study of prolongations of homogeneous vector bundles. We have used the one-to-one correspondence between homogeneous vector bundles and finite-dimensional Lie group representations to define prolongations of homogeneous vector bundles. We have introduced the geometric structures of this new bundle, such as the local trivialization of the bundle, the equivalence classes and the bundle projection. We have also defined the prolongations of induced representations, which are infinite-dimensional linear representations. In the future, we plan to study prolongations of infinite-dimensional representations by using the one-to-one correspondence of homogeneous vector bundles.
\end{document}
\begin{document}
\title{Distinguishing Tournaments with Small Label Classes}
\author{Antoni Lozano\thanks{Research Group on Combinatorics, Graph Theory, and Applications, Universitat Polit\`ecnica de Catalunya, Catalonia, {\tt antoni@lsi.upc.edu}. Research supported by projects TIN2014-57226-P (APCOM) and MTM2014-600127-P.}}
\maketitle
\begin{abstract}\noindent A {\em $d$-distinguishing vertex (arc) labeling} of a digraph is a vertex (arc) labeling using $d$ labels that is not preserved by any nontrivial automorphism. Let $\rho(T)$ ($\rho'(T)$) be the minimum size of a label class in a 2-distinguishing vertex (arc) labeling of a tournament $T$. Gluck's Theorem~\cite{G} implies that $\rho(T) \le \lfloor n/2 \rfloor$ for any tournament $T$ of order $n$. In this paper we construct a family of tournaments $\cal H$ such that $\rho(T) \ge \lfloor n/2 \rfloor$ for any order $n$ tournament in $\cal H$. Additionally, we prove that $\rho'(T) \le \lfloor 7n/36 \rfloor + 3$ for any tournament $T$ of order $n$ and $\rho'(T) \ge \lceil n/6 \rceil$ when $T \in {\cal H}$ and has order $n$. These results answer some open questions stated by Boutin~\cite{B1, B2}. \end{abstract}
\section{Introduction}\label{sec:int}
We follow the standard notation in graph theory. In particular, given a directed graph ({\em digraph} for short) $G$, $V(G)$ ($A(G)$) stands for its set of vertices (arcs) and $Aut(G)$ denotes the automorphism group of $G$. We refer to the identity automorphism in $Aut(G)$ as the {\em trivial} automorphism. A tournament is a complete oriented graph, that is, a digraph $T$ such that for every pair of distinct vertices $u,v \in V(T)$, either $uv \in A(T)$ or $vu \in A(T)$, but not both.
A {\em vertex (arc) labeling} of a digraph $G$ is a total function $\phi: V(G) \rightarrow L$ ($\phi: A(G) \rightarrow L$) which labels each vertex (arc) of $G$ with a label from the set $L$. Given a vertex labeling $\phi$ for a digraph $G$, we say that an automorphism $\sigma \in Aut(G)$ {\em preserves} $\phi$ if $\phi(\sigma(v)) = \phi(v)$ for every vertex $v \in V(G)$. Similarly, we say that $\sigma \in Aut(G)$ {\em preserves} an arc labeling $\phi$ if $\phi(uv) = \phi(\sigma(u)\sigma(v))$ for every arc $uv \in A(G)$. On the contrary, a vertex or arc labeling $\phi$ {\em breaks} an automorphism $\sigma \in Aut(G)$ if $\phi$ is not preserved by $\sigma$. A (vertex or arc) labeling $\phi$ of $G$ that breaks all nontrivial automorphisms in $Aut(G)$ is called {\em distinguishing} for $G$. Additionally, if $\phi$ uses $d$ labels, it is called $d$-distinguishing for $G$.
Albertson and Collins introduced the concept of {\em distinguishing number} in the seminal paper~\cite{AC1} as the instantiation of the idea of ``symmetry breaking'' in graphs. The {\em distin\-guishing number} $D(G)$ of a digraph $G$ is the least cardinal $d$ such that $G$ has a $d$-distinguishing vertex labeling. In recent years, this concept has been extended to the {\em distinguishing index} $D'(G)$, which is defined as the least cardinal $d$ such that $G$ has a $d$-distinguishing arc labeling. A {\em distinguishing vertex class (distinguishing arc class)} of $\phi$ in $G$ is any of the $d$ subsets of $V(G)$ ($A(G)$) having the same label under $\phi$. These notions have been studied in \cite{B1,B2,B3,BP,C,KPW,P}.
With respect to tournaments, Albertson and Collins \cite{AC2} conjectured that every tournament $T$ satisfies $D(T) \le 2$. As Godsil observed in 2002~\cite{Go}, since tournaments have odd order automorphism groups, the conjecture follows from Gluck's Theorem (\cite{G}, see also the shorter and self-contained proof in \cite{M}). In the following statement of Gluck's Theorem, given a permutation group $G$ on $\Omega$, $S \subseteq \Omega$ is a regular subset of $G$ if the setwise stabilizer $\{ g \in G \mid Sg = S\}$ only contains the identity. Therefore, a regular subset plays a similar role to that of a 2-distinguishing vertex class.
\begin{theorem} (Gluck's Theorem, \cite{G,M}). Let $G$ be a permutation group of odd order on a finite set $\Omega$. Then $G$ has a regular subset in $\Omega$. \end{theorem}
Given a tournament $T$, Gluck's Theorem shows the existence of a regular subset $S \subseteq \Omega = V(T)$ for $Aut(T)$. Define a labeling $\phi$ that assigns label 1 to the vertices in $S$ and label 2 to the vertices in $V(T) \setminus S$. Now, the definition of regular subset implies that the only automorphism in $Aut(T)$ preserving labeling $\phi$ is the identity. Therefore, $\phi$ constitutes a 2-distinguishing vertex labeling of the vertices of $T$ and the following fact can be claimed.
\begin{corollary}\cite{Go} If $T$ is a tournament, then $D(T) \le 2$. \end{corollary}
As an added consequence of Gluck's Theorem, we can observe that the distinguishing index of tournaments is also bounded by 2. Suppose $S$ is the regular subset, given by Gluck's Theorem, of the vertices of a tournament $T$. Clearly, vertices in $S$ can be singularized if the arcs lying inside $S$ are labeled with 1 and the rest are labeled with 2. This way, the orbit of a vertex in $S$ by any automorphism will lie inside $S$, and the previous arc labeling will be 2-distinguishing.
\begin{corollary} If $T$ is a tournament, then $D'(T) \le 2$. \end{corollary}
Some literature on the subject has focused on the minimum possible size of a distinguishing vertex class, which has been called {\em the cost of 2-distinguishing}. We define it here both for vertices and arcs. For a digraph $G$ such that $D(G) \le 2$, define $\rho(G)$ ($\rho'(G)$) as the minimum size of a distinguishing vertex (arc) class.
The article is organized as follows. Section~\ref{sec:pre} defines and studies an infinite class $\cal H$ of tournaments that gives rise to lower bounds for $\rho$ and $\rho'$. In Section~\ref{sec:d_num} we prove $\rho(T) \le \lfloor n/2 \rfloor$ for any tournament $T$ of order $n$ and show that this bound is exact for tournaments in $\cal H$. In Section~\ref{sec:d_ind} we show $\rho'(T) \le \lfloor 7n/36 \rfloor + 3$ for any tournament $T$ of order $n$ and prove a lower bound of $\lceil n/6 \rceil$ for tournaments of order $n$ in $\cal H$. Finally, some conclusions and open problems are discussed in Section~\ref{sec:con}.
\section{Class $\cal H$ and black and white labelings}\label{sec:pre}
We introduce here a class of tournaments that will be used in the succeeding sections to provide lower bounds for the cost of 2-distinguishing tournaments with vertices ($\rho$) and with arcs ($\rho'$).
By $\vec{C}_3$ we denote the directed triangle, that is, the tournament containing the vertices $x_1$, $x_2$, and $x_3$ and the arcs $x_1 x_2$, $x_2 x_3$, and $x_3 x_1$.
\begin{definition} The family ${\cal H} = \{ H_k \}_{k \ge 0}$ of tournaments is inductively defined as follows: \begin{itemize} \item $H_0$ is a single vertex tournament and \item $H_k$, for $k > 0$, is the tournament consisting of a copy of $\vec{C}_3$ in which every vertex $x_i$ in $\vec{C}_3$ is substituted by a copy of $H_{k-1}$, called {\em tertian} $T_i$, and an arc $x_i x_j \in A(\vec{C}_3)$ is substituted by all possible arcs from $T_i$ to $T_j$. \end{itemize} \end{definition}
\begin{observation}
For any $k \ge 0$, $|V(H_k)| = 3^k$. \end{observation}
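For concreteness, the recursive construction above can be sketched as follows; the vertex numbering (tertian $T_{i+1}$ occupying vertices $i\cdot 3^{k-1}, \dots, (i+1)\cdot 3^{k-1}-1$) is our own choice.

```python
# Sketch of the inductive definition of H_k as an explicit arc set.
# Vertices are 0..3^k - 1; arc (u, v) means u beats v.  Names are ours.

def build_H(k):
    """Arc set of H_k on vertices 0..3^k - 1."""
    if k == 0:
        return set()
    n, sub, arcs = 3 ** (k - 1), build_H(k - 1), set()
    for i in range(3):
        # copy of H_{k-1} inside tertian T_{i+1}
        arcs |= {(u + i * n, v + i * n) for (u, v) in sub}
        # all arcs from tertian T_{i+1} to the next tertian, mirroring C_3
        arcs |= {(u + i * n, v + ((i + 1) % 3) * n)
                 for u in range(n) for v in range(n)}
    return arcs

def is_tournament(arcs, n):
    """Exactly one of uv, vu must be an arc for every pair u != v."""
    return all(((u, v) in arcs) != ((v, u) in arcs)
               for u in range(n) for v in range(u + 1, n))
```

In particular, $H_1$ is exactly $\vec{C}_3$, and $H_k$ has $\binom{3^k}{2}$ arcs, consistent with the observation that $|V(H_k)| = 3^k$.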
A {\em module} in a tournament $T$ is a set $X$ of vertices such that each vertex in $V(T) \setminus X$ has a uniform relationship to all vertices in $X$, that is, for every vertex $v \in V(T) \setminus X$, either $uv \in A(T)$ for all $u \in X$ or $vu \in A(T)$ for all $u \in X$. Note that $T$ and sets $\{u\}$, where $u \in V(T)$, are modules. Furthermore, modularity is transitive: if $Y$ is a module in the subtournament $T[X]$ induced by module $X$, then $Y$ is a module in $T$.
According to the definition of $H_k$, each of its three tertians is a module. By transitivity of modularity, we can make the following observation.
\begin{observation}\label{ob:disjoint} For every $k \ge 1$, $H_k$ can be decomposed into $3^{k-1}$ pairwise disjoint modules isomorphic to $\vec{C}_3$. \end{observation}
We also need the following property on how vertices in $H_k$ can move in an automorphism.
\begin{proposition}\label{pr:module_move} Let $\sigma \in Aut(H_k)$ be an automorphism and let $T_1, T_2, T_3$ be the tertians of $H_k$. Then, any tertian is mapped by $\sigma$ into a tertian as a whole, that is, for any $u, v \in T_i$ we have $\sigma(u), \sigma(v) \in T_j$, for some $1 \le j \le 3$. \end{proposition} \begin{proof} For a tournament $T$ and two vertices $x, y \in V(T)$, define
\[ D_T(x,y) = |\{ z \in T \mid zx \in A(T) \Leftrightarrow yz \in A(T) \}|. \] That is, $D_T(x,y)$ is the number of vertices in $T$ having different relationships with $x$ and $y$. Now, suppose $u, v$ belong to the same tertian $T_i$ of $H_k$. Clearly, since all vertices outside $T_i$ have the same relationship with $u$ and $v$, only vertices in $T_i$ can have a different relationship with $u$ and $v$ and, then, $D_{H_k}(u,v) < 3^{k-1}$. However, if $\sigma(u)$ and $\sigma(v)$ belong to different tertians for an automorphism $\sigma \in Aut(H_k)$, then all the vertices in the remaining tertian will have a different relationship with $\sigma(u)$ and $\sigma(v)$ and, then, $D_{H_k}(\sigma(u),\sigma(v)) \ge 3^{k-1}$. Since an automorphism must preserve adjacencies, and hence the value of $D_{H_k}$, $\sigma$ cannot be an automorphism in $Aut(H_k)$, as we supposed. This contradiction shows that $\sigma(u)$ and $\sigma(v)$ must belong to the same tertian, say $T_j$ ($j$ being not necessarily different from $i$). \end{proof}
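The counting quantity $D_T(x,y)$ in the proof of Proposition~\ref{pr:module_move} is easy to check by machine on $H_2$; the construction and vertex numbering below (tertian $i$ is vertices $3i, 3i+1, 3i+2$) are our own encoding.

```python
# Check on H_2: D(u, v) < 3 for vertices in the same tertian and
# D(u, v) >= 3 for vertices in different tertians, as in the proof
# of Proposition pr:module_move.  Encoding is ours.

def build_H(k):
    """Arc set of H_k on vertices 0..3^k - 1; (u, v) means u beats v."""
    if k == 0:
        return set()
    n, sub, arcs = 3 ** (k - 1), build_H(k - 1), set()
    for i in range(3):
        arcs |= {(u + i * n, v + i * n) for (u, v) in sub}
        arcs |= {(u + i * n, v + ((i + 1) % 3) * n)
                 for u in range(n) for v in range(n)}
    return arcs

def D(arcs, n, x, y):
    """Number of vertices z whose relationship with x differs from y."""
    return sum(1 for z in range(n) if z not in (x, y)
               and ((z, x) in arcs) != ((z, y) in arcs))

arcs = build_H(2)
same = [D(arcs, 9, u, v) for u in range(9) for v in range(9)
        if u != v and u // 3 == v // 3]       # same-tertian pairs
diff = [D(arcs, 9, u, v) for u in range(9) for v in range(9)
        if u // 3 != v // 3]                  # cross-tertian pairs
```

For $k = 2$ the threshold is $3^{k-1} = 3$: every same-tertian pair stays below it, every cross-tertian pair reaches it.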
We consider labelings that play an important role in Section~\ref{sec:d_num}. In the discussion about 2-distinguishing labelings, although vertex and arc labels formally belong to the set $\{1,2\}$, from this point on we refer to label 1 as {\em white} and to label 2 as {\em black}.
\begin{definition}\label{def:H} We define black labelings and white labelings for $H_k$ as follows: \begin{itemize} \item For $H_0$, a {\em black} {\em (white)} labeling consists of labeling the unique vertex of $H_0$ black (white). \item For $H_k$, $k > 0$, a {\em black} {\em (white)} labeling contains two copies of $H_{k-1}$ with a black (white) labeling and one copy of $H_{k-1}$ with a white (black) labeling. \end{itemize} \end{definition}
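The recursive labelings of Definition~\ref{def:H} can be generated directly; the ordering of the two like-colored copies below is an arbitrary choice of ours. A short computation confirms that a white labeling of $H_k$ has exactly $(3^k-1)/2$ black vertices, the quantity that reappears in Section~\ref{sec:d_num}.

```python
# Sketch: generate the black/white labelings of Definition def:H and
# count black vertices.  'B'/'W' encode black/white; the placement of
# the odd-colored copy among the three is our choice.

def labeling(k, color):
    """Vertex colors of a `color` labeling of H_k (list of 'B'/'W')."""
    if k == 0:
        return [color]
    other = 'W' if color == 'B' else 'B'
    # two copies of the same color, one of the opposite color
    return (labeling(k - 1, color) + labeling(k - 1, color)
            + labeling(k - 1, other))
```

For instance, a white labeling of $H_3$ has $13 = (27-1)/2$ black vertices, while a black labeling has $14 = (27+1)/2$.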
\begin{figure}
\caption{From left to right, white labelings for tournaments $H_0$, $H_1$, and $H_2$. Tertians are shadowed in grey. One arc between two tertians implies all arcs between their respective nodes in the same direction.}
\end{figure}
\section{Small distinguishing vertex classes}\label{sec:d_num}
Just by observing that distinguishing vertex classes are closed under complementation, we obtain an upper bound for their size with the help of Gluck's Theorem.
\begin{theorem}\label{th:dn} For any tournament $T$ of order $n$, $\rho(T) \le \lfloor n/2 \rfloor$. \end{theorem} \begin{proof} Let $T$ be a tournament of order $n$ and let $S$ be a distinguishing vertex set given by Gluck's Theorem. Then, the set $V(T) \setminus S$ is also distinguishing and either $S$ or $V(T) \setminus S$ has size at most $\lfloor n/2 \rfloor$. \end{proof}
The following proposition shows that the bound given in Theorem~\ref{th:dn} is optimal for the family ${\cal H} = \{H_k\}_{k \ge 0}$ of tournaments introduced in Section~\ref{sec:pre}.
\begin{proposition}\label{pr:lower_dn} For every $k \ge 0$, $\rho(H_k) \ge \lfloor 3^k/2 \rfloor$. \end{proposition} \begin{proof} We use a more informative statement to prove the result.
\begin{claim} For any 2-distinguishing vertex labeling $\phi$ of $H_k$: \begin{enumerate} \item $H_k$ has at least $\frac{3^k-1}{2}$ black vertices in $\phi$. \item If $H_k$ has exactly $\frac{3^k-1}{2}$ black (white) vertices in $\phi$, then $\phi$ is a white (black) labeling for $H_k$. \end{enumerate} \end{claim} \begin{proof} We proceed by induction on $k$. If $k = 0$, both points 1 and 2 are trivially true. Suppose then that $k > 0$ and let $\phi$ be a 2-distinguishing vertex labeling for $H_k$. Then, since any automorphism in one of the three subtournaments of $H_k$ isomorphic to $H_{k-1}$ is also an automorphism of $H_k$, $\phi$ must be distinguishing for all three copies of $H_{k-1}$. By induction hypothesis, point 1 implies that any of the copies of $H_{k-1}$ must have at least $\frac{3^{k-1}-1}{2}$ black vertices in $\phi$. Therefore, $H_k$ must have at least \[ 3 \cdot \frac{3^{k-1}-1}{2} = \frac{3^k-3}{2} = \frac{3^k-1}{2}-1\] black vertices in $\phi$. If $H_k$ contains exactly $\frac{3^k-1}{2}-1$ black vertices, the three copies of $H_{k-1}$ must contain exactly $\frac{3^{k-1}-1}{2}$ black vertices each and, by induction hypothesis, point 2 implies that the restriction of $\phi$ to any of the copies of $H_{k-1}$ is a white labeling for it. Then, there is a nontrivial automorphism of $H_k$ consisting of a rotation of its subtournaments which respects the labeling $\phi$, which is a contradiction with the assumption that $\phi$ is distinguishing for $H_k$. Therefore, $H_k$ must contain at least $\frac{3^k-1}{2}$ black vertices, as we wanted to show in point 1.
As for point 2, if $H_k$ contains exactly $\frac{3^k-1}{2}$ black (white) vertices in $\phi$, since $\frac{3^k-1}{2} = 3 \cdot \frac{3^{k-1}-1}{2} + 1$ and all three copies of $H_{k-1}$ have at least $\frac{3^{k-1}-1}{2}$ black (white) vertices in $\phi$, it follows that two of the copies must have exactly $\frac{3^{k-1}-1}{2}$ black (white) vertices while the third one must have $\frac{3^{k-1}-1}{2}+1 = \frac{3^{k-1}+1}{2}$ black (white) vertices and $3^{k-1} - \frac{3^{k-1}+1}{2} = \frac{3^{k-1}-1}{2}$ white (black) vertices. By induction hypothesis, point 2 implies that two of the copies of $H_{k-1}$ have a white (black) labeling while the third one has a black (white) labeling. Consequently, $\phi$ is a white (black) labeling for $H_k$. \end{proof}
The previous claim implies that for any 2-distinguishing vertex labeling $\phi$ of $H_k$, there are at least $\frac{3^k-1}{2} = \lfloor 3^k/2 \rfloor$ black vertices. Therefore, $\rho(H_k) \ge \lfloor 3^k/2 \rfloor$. \end{proof}
\begin{theorem}\label{th:exact_bound} For every $k \ge 0$, there is a tournament $T$ of order $n = 3^k$ such that $\rho(T) = \lfloor n/2 \rfloor$. \end{theorem} \begin{proof} Given $k \ge 0$, we take $T = H_k$, which has order $n = 3^k$. From Theorem~\ref{th:dn} and Proposition~\ref{pr:lower_dn}, $\rho(T) = \lfloor 3^k/2 \rfloor = \lfloor n/2 \rfloor$. \end{proof}
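Theorem~\ref{th:exact_bound} can be confirmed exhaustively for $k = 2$: the sketch below (our encoding, with tertian $i$ on vertices $3i, 3i+1, 3i+2$) computes $Aut(H_2)$ by backtracking and then scans all $2^9$ vertex labelings.

```python
# Brute-force confirmation of rho(H_2) = floor(9/2) = 4.
# Construction, backtracking search, and encoding are ours.
from itertools import product

def build_H(k):
    """Arc set of H_k on vertices 0..3^k - 1; (u, v) means u beats v."""
    if k == 0:
        return set()
    n, sub, arcs = 3 ** (k - 1), build_H(k - 1), set()
    for i in range(3):
        arcs |= {(u + i * n, v + i * n) for (u, v) in sub}
        arcs |= {(u + i * n, v + ((i + 1) % 3) * n)
                 for u in range(n) for v in range(n)}
    return arcs

arcs, n = build_H(2), 9

def extend(p):
    """Backtracking search for arc-preserving permutations of H_2."""
    i = len(p)
    if i == n:
        yield tuple(p)
        return
    for x in range(n):
        if x not in p and all(((j, i) in arcs) == ((p[j], x) in arcs)
                              for j in range(i)):
            yield from extend(p + [x])

auts = list(extend([]))
nontrivial = [p for p in auts if p != tuple(range(n))]

# Minimum label-class size over all 2-distinguishing vertex labelings:
# a labeling breaks an automorphism p iff some vertex changes label under p.
rho = min(min(sum(lab), n - sum(lab))
          for lab in product((0, 1), repeat=n)
          if all(any(lab[p[v]] != lab[v] for v in range(n))
                 for p in nontrivial))
```

The automorphism group found has order $81 = 3^3 \cdot 3$ (independent rotations inside the three tertians combined with a rotation of the tertians), and the minimum class size is exactly 4.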
\section{Small distinguishing arc classes}\label{sec:d_ind}
To get an upper bound on the cost of 2-distinguishing tournaments with the arcs, we will use the concept of determining set. Given a digraph $G$, a subset $S \subseteq V(G)$ is a {\em determining set} of $G$ if for any $\varphi, \psi \in Aut(G)$ such that $\varphi(x) = \psi(x)$ for all $x \in S$, $\varphi = \psi$. Thus, the action of an automorphism on $S$ determines its action on $V(G)$. From the group theory perspective, the pointwise stabilizer of a determining set is trivial while the setwise stabilizer of a distinguishing set is trivial (and therefore, every distinguishing set is a determining set).
The {\em determining number} of a digraph $G$, denoted by $Det(G)$, is defined as the minimum size of a determining set for $G$. We will use Theorem 8 in \cite{L}.
\begin{theorem}\label{th:det}\cite{L} For every order $n$ tournament $T$, $Det(T) \le \lfloor n/3 \rfloor$. \end{theorem}
To get an upper bound for $\rho'(T)$, where $T$ is a tournament of order $n$, we start by considering a determining set $S \subseteq V(T)$ that, according to Theorem~\ref{th:det}, can be selected with size bounded by $\lfloor n/3 \rfloor$. We can now singularize the vertices in $S$ by coloring some of the arcs in the subtournament of $T$ induced by $S$, $T[S]$. An easy way to do it is coloring in black the arcs of a Hamiltonian path in $T[S]$, and coloring the rest of the arcs in $T$ in white. This way, all the vertices in $S$ will be at different distances (through the black arcs) from the beginning of the black path and, therefore, $S$ will be fixed pointwise, giving $\rho'(T) \le \lfloor n/3 \rfloor -1$. However, we can push the upper bound down by combining determining sets with distinguishing sets.
\begin{theorem}\label{th:di} For any order $n$ tournament $T$, $\rho'(T) \le \lfloor 7n/36 \rfloor + 3$. \end{theorem} \begin{proof}
Let $T$ be a tournament of order $n$. Then, by Theorem~\ref{th:det}, there exists a determining set $S \subseteq V(T)$ such that $|S| \le \lfloor n/3 \rfloor$. Consider the subtournament of $T$ induced by $S$, $T[S]$. By Theorem~\ref{th:dn}, $\rho(T[S]) \le \lfloor |S|/2 \rfloor \le \lfloor n/6 \rfloor$ and, therefore, there exists a distinguishing vertex class $R \subseteq S$ of $T[S]$ with $|R| \le \lfloor n/6 \rfloor$.
We will label some of the arcs in $T[S]$ black in such a way that every vertex in $S$ will be the extreme of some black arc. In the first place, we select the vertices in $S \setminus R$ by pairs and label the arcs joining the extremes of the selected pairs in black. Then, we select the vertices in $R$ by triples and, for each triple, we label two of its arcs in black. Note that the vertices from $S \setminus R$ which are incident to a black arc cannot be exchanged in any automorphism with the vertices in $R$ which are also incident to a black arc. In case $|S \setminus R|$ is not even or $|R|$ is not a multiple of 3, the previous method of grouping the vertices may leave at most 3 vertices which are not the extremes of any black arc: a maximum of one in $S \setminus R$ and two in $R$. We call this set $U = \{u_i \mid 1 \le i \le |U| \}$.
For every possible cardinality of $U$, we calculate how many additional black arcs are needed to avoid the exchange of vertices from $S \setminus R$ and $R$ (and vice versa) in a nontrivial automorphism: \begin{itemize}
\item $|U| = 0$. In this case, represented in Figure~\ref{fig:con}, all the vertices in $S \setminus R$ and $R$ are joined by black paths according to the above method. To complete the labeling, label all the remaining arcs in $T$ in white. Now, note that no vertex $u$ in $S \setminus R$ can map to a vertex in $R$ in an automorphism $\varphi$ because $u$ is the extreme of a black path of length 2 while $\varphi(u)$ is either the extreme of a black path of length 3 or its middle point. Therefore, vertices from the two parts of the partition of $S$ cannot be exchanged in any automorphism. Since $R$ is a distinguishing set for $S$, the whole of $S$ lacks a nontrivial automorphism after the labeling, that is, $S$ is rigid. Additionally note that every vertex in $S$ is the extreme of some black arc while all vertices in $V(T) \setminus S$ are only the extremes of white arcs; therefore, no automorphism can map a vertex in $S$ to a vertex in $V(T) \setminus S$. Since $S$ is a determining set for $T$, our labeling is distinguishing. As for the size of the black label class, observe that there is one black arc for every 2 vertices in $S \setminus R$ and two black arcs for every 3 vertices in $R$. Since we know that $|R| \le \lfloor |S|/2 \rfloor$, we have at most \begin{equation}\label{eq1}
\frac{|S \setminus R|}{2} + \frac{2 |R|}{3} \le
\frac{3 (|S \setminus R|+|R|)+|R|}{6} =
\frac{|S|}{2} + \frac{|R|}{6} \le \frac{\lfloor n/3 \rfloor}{2} + \frac{\lfloor n/6 \rfloor}{6} \le \Bigl\lfloor \frac{7n}{36} \Bigr\rfloor \end{equation} black arcs.
\begin{figure}
\caption{Example of black arcs in $S$ for Theorem~\ref{th:di}, case $|U| = 0$.}
\label{fig:con}
\end{figure}
\item $|U| = 1$. We complete our labeling by labeling an arc in $T[S]$ incident to $u_1$ in black in such a way that if we label the rest of the arcs in $T$ in white, all nontrivial automorphisms in $T[S]$ will be broken. To do so, consider the following subcases: \begin{enumerate}
\item $R = \emptyset$. Then, $|S \setminus R| > 0$ and we label an arc from a vertex in $S \setminus R$ to $u_1$ in black. \item $R \neq \emptyset$. Then, we label an arc from a vertex in $R$ to $u_1$ in black. \end{enumerate}
Note that in both of the above subcases, $u_1$ is joined to a black path which is unlike the rest of black paths (the only one of length 2 in subcase 1, the only one of length 3 in subcase 2). Then, $u_1$ cannot be mapped to any other vertex in an automorphism in $T[S]$ and, similarly to the previous case ($|U| = 0$), we conclude that our labeling is distinguishing. As for the size of the black label class, we have
\[ \Bigl\lfloor \frac{|S \setminus R|}{2} \Bigr\rfloor + \Bigl\lfloor \frac{2 |R|}{3} \Bigr\rfloor + 1 \le
\Bigl\lfloor \frac{|S \setminus R|}{2} + \frac{2 |R|}{3} \Bigr\rfloor + 1 \le \Bigl\lfloor \frac{7n}{36} \Bigr\rfloor + 1 \] black arcs (where the last inequality derives from Equation~\ref{eq1}).
\item $|U| = 2$. Similarly to the previous case, we complete the labeling by labeling arcs in $T[S]$ incident to the vertices in $U$ in black, and labeling the rest of the arcs in $T$ in white. We consider two subcases: \begin{enumerate} \item $S \setminus R = \emptyset$. Then, we just label an arc joining $u_1$ to $u_2$ in black. \item $S \setminus R \ne \emptyset$. Then, we label an arc from a vertex in $S \setminus R$ to $u_1$ and an arc joining $u_1$ to $u_2$ in black. \end{enumerate} Note that in both of the above subcases, $u_1$ and $u_2$ belong to a black path which is unlike the rest of black paths (the only one of length 1 in subcase 1, the only one of length 3 in subcase 2). Then, $u_1$ and $u_2$ cannot be mapped to any vertices in an automorphism in $T[S]$ and, similarly to the previous cases, we conclude that our labeling is distinguishing. The number of black arcs in our labeling can be obtained in a similar way to the previous case, being at most $\lfloor 7n/36 \rfloor + 2$ since here we may need to add two additional black arcs.
\item $|U| = 3$. Similarly to the two previous cases, we complete the labeling by labeling arcs in $T[S]$ incident to the vertices in $U$ in black, and labeling the rest of the arcs in $T$ in white. We consider two subcases: \begin{enumerate} \item $R = \emptyset$. Then, we label an arc joining $u_1$ to $u_2$ and an arc joining $u_2$ to $u_3$ in black. \item $R \neq \emptyset$. Then, as in the previous subcase, we color an arc joining $u_1$ to $u_2$ and an arc joining $u_2$ to $u_3$ in black. Additionally, we color an arc from a vertex in $R$ to $u_1$ in black. \end{enumerate} Note that in both of the above subcases, $u_1$, $u_2$, and $u_3$ belong to a black path which is unlike the rest of black paths (the only one of length 2 in subcase 1, the only one of length 4 in subcase 2). Then, $u_1$, $u_2$, and $u_3$ cannot be mapped to any vertices in an automorphism in $T[S]$ and, similarly to the previous cases, we conclude that our labeling is distinguishing. The number of black arcs in our labeling can be obtained in a similar way to the two previous cases, being at most $\lfloor 7n/36 \rfloor + 3$ since here we may need to add three additional black arcs. \end{itemize}
Therefore, in all cases our labeling for $T$ is distinguishing and proves that $\rho'(T) \le \lfloor 7n/36 \rfloor + 3$. \end{proof}
We now show that the family $\{H_k\}_{k \ge 0}$ introduced in Section~\ref{sec:d_num} provides a lower bound for the distinguishing index of tournaments. We call {\em basic module} any of the pairwise disjoint modules referred to in Observation~\ref{ob:disjoint}. Note that a nontrivial automorphism in any basic module trivially extends to $H_k$ as a consequence of the definition of module. This fact leads to the following lower bound for $\rho'(H_k)$.
\begin{proposition}\label{pr:lower_di} For every $k \ge 1$, $\rho'(H_k) \ge \lceil 3^{k-1}/2 \rceil$. \end{proposition} \begin{proof} Let $k \ge 1$. Since $\vec{C}_3$ is not rigid, all the $3^{k-1}$ basic modules of $H_k$ must contain an endpoint of some black arc if automorphisms in $H_k$ are to be broken. Since $3^{k-1}$ is odd, $\lfloor 3^{k-1}/2 \rfloor$ black arcs can have a maximum of $3^{k-1}-1$ endpoints, leaving at least one of the modules with a nontrivial automorphism. Therefore, $\rho'(H_k) \ge \lceil 3^{k-1}/2 \rceil$. \end{proof}
We show that the bound $\lceil 3^{k-1}/2 \rceil$ is also an upper bound for the family of tournaments $\{ H_k \}_{k \ge 1}$.
\begin{proposition}\label{pr:upper_di} For every $k \ge 1$, $\rho'(H_k) \le \lceil 3^{k-1}/2 \rceil$. \end{proposition} \begin{proof} We call {\em primitive} any arc whose endpoints belong to the same basic module in $H_k$. We now refine the statement by considering primitive black arcs.
\begin{figure}
\caption{Arc labeling implied by Claim~\ref{cl:upper} for tournament $H_3$. The five straight thick arcs represent the only black arcs.}
\label{fig:arc}
\end{figure}
\begin{claim}\label{cl:upper} For every $k \ge 2$, there is a 2-distinguishing arc labeling $\phi$ of $H_k$ with at most $\lceil 3^{k-1}/2 \rceil$ black arcs, at least one of which is primitive. \end{claim} \begin{proof} If $k = 2$, $H_k$ consists of three basic modules. Then, we define the labeling $\phi$ depicted in Figure~\ref{fig:arc}, which has a primitive black arc (the upper one in the figure) and a black arc going across the two remaining basic modules (the lower ones in the figure). Since each basic module contains vertices with unique properties and cannot be mapped into a different module in any automorphism, by Proposition~\ref{pr:module_move}, each tertian is mapped into itself. Since rotations inside the tertians are not possible either, $\phi$ is a 2-distinguishing arc labeling for $H_2$ satisfying the required conditions.
If $k > 2$, we know by the induction hypothesis that $\lceil 3^{k-2}/2 \rceil$ black arcs are enough to break all nontrivial automorphisms in each of the three tertians, and that every tertian contains a primitive black arc. Consider now the labeling $\phi$ consisting of the union of the labelings given by the induction hypothesis for the tertians, with a single modification: we select a primitive black arc in each of two tertians $T_i$ and $T_j$, $i \ne j$, relabel both of them white, and label one arc joining $T_i$ and $T_j$ black. There is still a primitive black arc in $H_k$ and, as a result of the relabeling, $\phi$ has at most \[ 3 \Bigl\lceil \frac{3^{k-2}}{2} \Bigr\rceil - 2 + 1 = 3 \cdot \frac{3^{k-2} + 1}{2} - 1 = \frac{3^{k-1} + 1}{2} = \Bigl\lceil \frac{3^{k-1}}{2} \Bigr\rceil \] black arcs, as claimed. Furthermore, it is clear that the labeling $\phi$ is 2-distinguishing for the tertians after the relabeling. According to Proposition~\ref{pr:module_move}, an automorphism moving vertices between two different tertians would need to move all the vertices, but every tertian has properties different from the rest: for a first tertian (the upper one in the figure), there is no black arc connecting it to the other tertians; for a second tertian there is a black arc coming from outside (lower left); and for a third one there is a black arc going out (lower right). Therefore, $\phi$ is 2-distinguishing for $H_k$. \end{proof}
Claim~\ref{cl:upper} proves the proposition for all $k \ge 2$. For $k = 1$, tournament $H_1$ can clearly be made rigid by labeling one of its arcs black and the other two white. Therefore, for all $k \ge 1$, $\rho'(H_k) \le \lceil 3^{k-1}/2 \rceil$ as claimed. \end{proof}
We now get the following result.
\begin{theorem} For every $k \ge 0$, there is a tournament $T$ of order $n = 3^k$ such that $\rho'(T) = \lceil n/6 \rceil$. \end{theorem} \begin{proof} Given $k \ge 0$, we take $T = H_k$, which has order $n = 3^k$. From Proposition~\ref{pr:lower_di} and Proposition~\ref{pr:upper_di}, $\rho'(T) = \lceil 3^{k-1}/2 \rceil = \lceil 3^k/6 \rceil = \lceil n/6 \rceil$. \end{proof}
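The last step of the proof rests on the identity $\lceil 3^{k-1}/2 \rceil = \lceil 3^k/6 \rceil$. A quick numerical sanity check (ours, not part of the proof) using exact integer arithmetic:

```python
def ceil_div(a, b):
    # Ceiling of a / b for positive integers, using exact integer arithmetic.
    return -(-a // b)

# Check ceil(3^(k-1)/2) == ceil(3^k/6) for the first several values of k.
for k in range(1, 15):
    assert ceil_div(3 ** (k - 1), 2) == ceil_div(3 ** k, 6), k
```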
\section{Conclusions and open questions}\label{sec:con}
In~\cite{B1}, Boutin proves that $\rho(Q_n) = {\cal O}(Det(Q_n))$, where $Q_n$ is the hypercube of dimension $n$, and asks in Question 9 whether this is also the case for other graph families. In relation to this question, she asks in Problem 4 of~\cite{B2} for graphs $G$ such that $\rho(G)$ is arbitrarily larger than $Det(G)$. We consider tournaments $T$ of order $n$ belonging to the family $\cal H$. From Section~\ref{sec:d_num} we have that $\rho(T) = \lfloor n/2 \rfloor$. On the other hand, it is clear that $Det(T) \le \lfloor n/3 \rfloor$ because each of the $n/3$ basic modules (isomorphic to $\vec{C}_3$ according to Observation~\ref{ob:disjoint}) needs to have either one or two black vertices in order to break the rotations. By Theorem~\ref{th:det}, we finally have $Det(T) = \lfloor n/3 \rfloor$. Therefore, $\rho(T)$ and $Det(T)$, for any $T \in {\cal H}$, are related by a factor of $3/2$ and we can answer both questions affirmatively.
We conclude with a couple of open questions.
\begin{question} Can the bound in Theorem~\ref{th:di} be improved? In particular, is $\rho'(T) \le \lceil \frac{n}{6} \rceil$ for any tournament $T$ of order $n$? \end{question}
\begin{question} What are the smallest values of $\rho(T)$ and $\rho'(T)$ for a cyclic tournament $T$? \end{question}
\end{document} |
\begin{document}
\title{Successive Ray Refinement and Its Application to Coordinate Descent for LASSO}
\author{ Jun Liu\thanks{Corresponding author. E-mail address: junliu.nt@gmail.com.}\\ SAS Institute Inc. \and Zheng Zhao\thanks{This work was done when Zheng Zhao was with SAS.}\\ Google Inc.\\ \and Ruiwen Zhang \\ SAS Institute Inc. }
\maketitle
\begin{abstract} Coordinate descent is one of the most popular approaches for solving Lasso and its extensions due to its simplicity and efficiency. When applying coordinate descent to solving Lasso, we update one coordinate at a time while fixing the remaining coordinates. Such an update, which is usually easy to compute, greedily decreases the objective function value. In this paper, we aim to improve its computational efficiency by reducing the number of coordinate descent iterations. To this end, we propose a novel technique called Successive Ray Refinement (SRR). SRR makes use of the following ray-continuation property of the successive iterations: for a particular coordinate, the value obtained in the next iteration almost always lies on a ray that starts at its previous iteration and passes through the current iteration. Motivated by this ray-continuation property, we propose that coordinate descent be performed not directly on the previous iteration but on a refined search point that has the following properties: on one hand, it lies on a ray that starts at a history solution and passes through the previous iteration, and on the other hand, it achieves the minimum objective function value among all the points on the ray. We propose two schemes for defining the search point and show that the refined search point can be efficiently obtained. Empirical results for real and synthetic data sets show that the proposed SRR can significantly reduce the number of coordinate descent iterations, especially for small Lasso regularization parameters. \end{abstract}
\section{Introduction}
Lasso~\cite{Tibshirani:Lasso:1996} is an effective technique for analyzing high-dimensional data. It has been applied successfully in various areas, such as machine learning, signal processing, image processing, medical imaging, and so on. Let $X =[\mathbf x_1, \mathbf x_2, \ldots, \mathbf x_p] \in \mathcal{R}^{n \times p}$ denote the data matrix composed of $n$ samples with $p$ variables, and let $\mathbf y \in \mathcal{R}^{n \times 1}$ be the response vector. In Lasso, we compute the $\bm \beta$ that optimizes \begin{equation}\label{eq:lasso:problem}
\min_{\bm \beta} f(\bm \beta) = \frac{1}{2} \| X \bm \beta - \mathbf y\|_2^2 + \lambda \|\bm \beta\|_1, \end{equation} where the first term measures the discrepancy between the prediction and the response and the second term controls the sparsity of $\bm \beta$ with $\ell_1$ regularization. The regularization parameter $\lambda$ is nonnegative, and a larger $\lambda$ usually leads to a sparser solution.
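As a concrete reference, the objective above can be transcribed directly; a minimal Python sketch (the function name is ours):

```python
import numpy as np

# f(beta) = 0.5 * ||X beta - y||_2^2 + lambda * ||beta||_1
def lasso_objective(X, y, beta, lam):
    residual = X @ beta - y
    return 0.5 * residual @ residual + lam * np.abs(beta).sum()
```

For instance, with $\lambda = 0$ and $X\bm\beta = \mathbf y$ the value is $0$.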
Researchers have developed many approaches for solving Lasso in Equation~\eqref{eq:lasso:problem}. Least Angle Regression (LARS)~\cite{Efron:LAR:2004} is one of the most well-known homotopy approaches for Lasso. LARS adds or drops one variable at a time, generating a piecewise linear solution path for Lasso. Unlike LARS, other approaches usually solve Equation~\eqref{eq:lasso:problem} according to some prespecified regularization parameters. These methods include the coordinate descent method~\cite{Friedman:coordinate:2010,Yuan:2012:l1:logistic:jmlr}, the gradient descent method~\cite{beck:2009:fast,wright:2009:sparse}, the interior-point method~\cite{Koh:sparse:logistic:2007}, the stochastic method~\cite{Shai:2009:icml:stochastic:method}, and so on. Among these approaches, coordinate descent is one of the most popular approaches due to its simplicity and efficiency. When applying coordinate descent to Lasso, we update one coordinate at a time while fixing the remaining coordinates. This type of update, which is easy to compute, can effectively decrease the objective function value in a greedy way.
To improve the efficiency of optimizing the Lasso problem in Equation~\eqref{eq:lasso:problem}, the screening technique has been extensively studied in~\cite{Ghaoui:2012,liu_j:14,Ogawa2013,Tibshirani:2012,Wang:2012:report,Xiang:2011}. Screening 1) identifies and removes the variables that have zero entries in the solution $\bm \beta$ and 2) solves Equation~\eqref{eq:lasso:problem} by using only the kept variables. When one is able to discard the variables that have zero entries in the final solution $\bm \beta$ and identify the signs of the nonzero entries, the Lasso problem in Equation~\eqref{eq:lasso:problem} becomes a standard quadratic programming problem. However, it is usually very hard to identify all the zero entries, especially when the regularization parameter is small. In addition, the computational cost of Lasso usually increases as the regularization parameter decreases. This increase in computational cost motivates us to develop an approach that can accelerate the computation of Lasso for small regularization parameters.
In this paper, we aim to improve the computational efficiency of coordinate descent by reducing its iterations. To this end, we propose a novel technique called Successive Ray Refinement (SRR). Our proposed SRR is motivated by an interesting ray-continuation property on the coordinate descent iterations: for a given coordinate, the value obtained in the next iteration almost always lies on a ray that starts at its previous iteration and passes through the current iteration. Figure~\ref{fig:cd:solution:alpha} illustrates the ray-continuation property by using the data specified in Section~\ref{s:traditional:lasso}. Motivated by this ray-continuation property, we propose that coordinate descent be performed not directly on the previous iteration but on a refined search point that has the following properties: on one hand, the search point lies on a ray that starts at a history solution and passes through the previous iteration, and on the other hand, the search point achieves the minimum objective function value among all the points on the ray. We propose two schemes for defining the search point, and we show that the refined search point can be efficiently computed. Experimental results on both synthetic and real data sets demonstrate that the proposed SRR can greatly accelerate the convergence of coordinate descent for Lasso, especially when the regularization parameter is small.
\begin{figure}
\caption{Illustration of the iterations of coordinate descent. For both plots, the x-axis corresponds to the iteration number $k$.
The y-axis of plot (a) denotes $\beta_i^k$, the value of the $i$th coordinate in the $k$th iteration.
The y-axis of plot (b) denotes $\alpha_i^k$, which is computed using the equation $\beta_i^{k+1} = (1-\alpha_i^k) \beta_i^{k-1} + \alpha^k_i \beta_i^k$.
Ray-continuation property: for a given coordinate $i$, the value obtained in the next iteration, denoted by $\beta_i^{k+1}$, almost always lies on a ray that starts at its previous iteration, $ \beta_i^{k-1}$, and passes through the current iteration, $\beta_i^k$.
For numerical details of plot (a) and plot (b), see Table~\ref{table:iteration:traditiona:cd} and Table~\ref{table:iteration:traditiona:alpha}.
}
\label{fig:cd:solution:alpha}
\end{figure}
\noindent \textbf{Organization } The rest of this paper is organized as follows. We introduce the traditional coordinate descent for Lasso and present the ray-continuation property that motivates this paper in Section~\ref{s:traditional:lasso}, propose the SRR technique in Section~\ref{s:sfg}, discuss the efficient computation of the refinement factor that is used in SRR in Section~\ref{s:one:d:search}, conduct an eigenvalue analysis on the proposed SRR in Section~\ref{s:sfg:eigen:analysis}, and compare SRR with related work in Section~\ref{s:related}. We report experimental results on both synthetic and real data sets in Section~\ref{s:experiment}, and we conclude this paper in Section~\ref{s:conclusion}.
\noindent \textbf{Notations } Throughout this paper, scalars are denoted by italic letters and vectors by bold face letters.
Let $\|\cdot\|_1$ denote the $\ell_1$ norm, let $\|\cdot\|_2$ denote the Euclidean norm, and let $\|\cdot\|_{\infty}$ denote the infinity norm. Let $\langle \mathbf x, \mathbf y \rangle$ denote the inner product between $\mathbf x$ and $\mathbf y$.
Let a superscript denote the iteration number, and let a subscript denote the index of the variable or coordinate. We assume that $X$ does not contain a zero column; that is, $\|\mathbf x_i\|_2 \neq 0, \forall i$.
\section{Coordinate Descent For Lasso}\label{s:traditional:lasso}
In this section, we first review the coordinate descent method for solving Lasso, and then analyze the adjacent iterations to motivate the proposed SRR technique.
Let $\beta_i^k$ denote the $i$th element of $\bm \beta$, which is obtained at the $k$th iteration of coordinate descent. In coordinate descent, we compute $\beta_i^k$ while fixing $\beta_j= \beta_j^k, 1 \le j < i $, and $\beta_j= \beta_j^{k-1}, i < j \le p$. Specifically, $\beta_i^k$ is computed as the minimizer to the following univariate optimization problem: \begin{equation*}
\beta_i^k= \arg \min_{\beta} f([ \beta_1^k,\ldots,\beta_{i-1}^k,\beta,\beta_{i+1}^{k-1},\ldots,\beta_{p}^{k-1} ]^T). \end{equation*}
It can be computed in a closed form as: \begin{equation}\label{eq:beta:i:kplus1}
\beta_i^k = \frac{ S(\mathbf x_i^T \mathbf y - \sum_{j<i} \mathbf x_i^T \mathbf x_j \beta_j^k - \sum_{j>i} \mathbf x_i^T \mathbf x_j \beta_j^{k-1} , \lambda) }{\|\mathbf x_i\|_2^2}, \end{equation} where $S(\cdot, \cdot)$ is the shrinkage function \begin{equation}\label{eq:shrinkage:operator}
S(x, \lambda) =
\left\{
\begin{array}{cc} x -\lambda & x > \lambda \\ x +\lambda & x < -\lambda \\
0 & |x | \le\lambda. \end{array} \right. \end{equation} Let \begin{equation}\label{eq:residual}
\mathbf r_i^k = \mathbf y - X [ \beta_1^k,\ldots,\beta_{i-1}^k,\beta_i^k,\beta_{i+1}^{k-1},\ldots,\beta_{p}^{k-1} ]^T \end{equation} denote the residual obtained after updating $\beta_i^{k-1}$ to $\beta_i^k$. With Equation~\eqref{eq:residual}, we can rewrite Equation~\eqref{eq:beta:i:kplus1} as \begin{equation}\label{eq:beta:i:kplus1:residual}
\beta_i^k = S( \beta_i^{k-1} + \frac{\mathbf x_i^T \mathbf r_{i-1}^k}{\|\mathbf x_i\|_2^2} , \frac{\lambda} {\|\mathbf x_i\|_2^2}). \end{equation} In addition, with the updated $\beta_i^k$, we can update the residual from $\mathbf r_{i-1}^k$ to $\mathbf r_i^k$ as \begin{equation}\label{eq:residual:udpated}
\mathbf r_i^k = \mathbf r_{i-1}^k + \mathbf x_i (\beta_i^{k-1} - \beta_i^k). \end{equation}
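A minimal Python sketch of one such coordinate step, combining the shrinkage operator $S$ with the residual-based update above (helper names are ours):

```python
import numpy as np

def shrink(x, lam):
    # The shrinkage (soft-thresholding) operator S(x, lam).
    return np.sign(x) * max(abs(x) - lam, 0.0)

def update_coordinate(i, beta, r, X, lam):
    # One coordinate-descent step: update beta[i] in place while keeping
    # the residual r = y - X beta consistent.
    xi = X[:, i]
    nrm2 = xi @ xi
    old = beta[i]
    beta[i] = shrink(old + xi @ r / nrm2, lam / nrm2)
    r += xi * (old - beta[i])
```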
Algorithm~\ref{algorithm:traditional} illustrates solving Lasso via coordinate descent. Since the non-smooth $\ell_1$ penalty in Equation~\eqref{eq:lasso:problem} is separable, the algorithm is guaranteed to converge~\cite{Tseng01convergenceof}.
\begin{algorithm} \caption{Coordinate Descent for Lasso} \label{algorithm:traditional} \begin{algorithmic}[1]
\REQUIRE $X$, $\mathbf y$, $\lambda$
\ENSURE $\bm \beta^k$
\STATE $k=0$, $\bm \beta^0 = \mathbf 0$, $\mathbf r^0 = \mathbf y$
\REPEAT
\STATE Set $k=k+1$, $\mathbf r^k = \mathbf r^{k-1}$
\FOR{$i=1$ to $p$}
\STATE Compute $\beta_i^k = S( \beta_i^{k-1} + \frac{\mathbf x_i^T \mathbf r^k}{\|\mathbf x_i\|_2^2} , \frac{\lambda} {\|\mathbf x_i\|_2^2})$
\STATE Update residual $\mathbf r^k = \mathbf r^k + \mathbf x_i (\beta_i^{k-1} - \beta_i^k)$
\ENDFOR
\UNTIL{convergence criterion satisfied} \end{algorithmic} \end{algorithm}
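A direct Python transcription of Algorithm~\ref{algorithm:traditional} may read as follows; the tolerance-based stopping rule is our assumption, since the algorithm leaves the convergence criterion unspecified:

```python
import numpy as np

def coordinate_descent_lasso(X, y, lam, max_iter=1000, tol=1e-10):
    # Coordinate descent for Lasso; stops when no coordinate moves more
    # than tol in a full sweep (our choice of convergence criterion).
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                       # residual y - X beta
    norms2 = (X ** 2).sum(axis=0)      # ||x_i||_2^2 for each column
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(p):
            old = beta[i]
            z = old + X[:, i] @ r / norms2[i]
            beta[i] = np.sign(z) * max(abs(z) - lam / norms2[i], 0.0)
            r += X[:, i] * (old - beta[i])
            max_change = max(max_change, abs(beta[i] - old))
        if max_change < tol:
            break
    return beta
```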
\begin{table} \caption{Applying coordinate descent in Algorithm~\ref{algorithm:traditional} to solving Lasso in Equation~\eqref{eq:lasso:problem} with $\lambda=0$. Since $\lambda=0$ and $X$ is invertible, the optimal function value is 0.} \begin{center}
\begin{tabular}{ccccccc}
\hline
$k$ & $\beta_1^k$ & $\beta_2^k$ & $\beta_3^k$ & $\beta_4^k$ & $\beta_5^k$ & $f(\bm \beta^k)$\\
\hline\hline 1
& 0.048912 & 0.034041 & 0.407960 & 0.055687 & 0.160413
& 0.052449 \\ \hline 2
& 0.057182 & -0.033692 & 0.465254 & 0.027810 & 0.171740
& 0.017591 \\ \hline 3
& 0.036909 & -0.079955 & 0.463604 & -0.000612 & 0.177708
& 0.008085 \\ \hline 4
& 0.019050 & -0.108954 & 0.458440 & -0.017618 & 0.182115
& 0.003933 \\ \hline 5
& 0.005418 & -0.126712 & 0.455218 & -0.026135 & 0.185698
& 0.002304 \\ \hline 6
& -0.005122 & -0.137403 & 0.453694 & -0.029295 & 0.188740
& 0.001653 \\ \hline 7
& -0.013567 & -0.143688 & 0.453262 & -0.029210 & 0.191398
& 0.001358 \\ \hline 8
& -0.020585 & -0.147239 & 0.453491 & -0.027216 & 0.193771
& 0.001187 \\ \hline 9
& -0.026604 & -0.149101 & 0.454104 & -0.024144 & 0.195923
& 0.001060 \\ \hline 10
& -0.031899 & -0.149927 & 0.454929 & -0.020507 & 0.197895
& 0.000950 \\ \hline ... \\ \hline 28
& -0.081090 & -0.142355 & 0.467964 & 0.030835 & 0.217715
& 0.000106 \\ \hline 29
& -0.082490 & -0.142044 & 0.468368 & 0.032406 & 0.218288
& 0.000093 \\ \hline 30
& -0.083806 & -0.141752 & 0.468749 & 0.033883 & 0.218827
& 0.000082 \\ \hline ... \\ \hline 100
& -0.103999 & -0.137267 & 0.474584 & 0.056543 & 0.227099
& 1.3349e-08 \\ \hline 101
& -0.104015 & -0.137264 & 0.474589 & 0.056561 & 0.227105
& 1.1785e-08 \\ \hline 102
& -0.104030 & -0.137261 & 0.474593 & 0.056577 & 0.227111
& 1.0403e-08 \\ \hline 103
& -0.104044 & -0.137258 & 0.474597 & 0.056593 & 0.227117
& 9.1839e-09 \\ \hline 104
& -0.104057 & -0.137255 & 0.474601 & 0.056608 & 0.227122
& 8.1074e-09 \\ \hline 105
& -0.104069 & -0.137252 & 0.474604 & 0.056621 & 0.227127
& 7.1571e-09 \\ \hline \end{tabular}
\end{center}
\label{table:iteration:traditiona:cd} \end{table}
We demonstrate Algorithm~\ref{algorithm:traditional} using the following randomly generated $X$ and $\mathbf y$:
\begin{equation}\label{eq:X} X = \left[ \begin{array}{ccccc} -0.204708 & 0.478943 & -0.519439 & -0.555730 & 1.965781 \\ 1.393406 & 0.092908 & 0.281746 & 0.769023 & 1.246435 \\ 1.007189 & -1.296221 & 0.274992 & 0.228913 & 1.352917 \\ 0.886429 & -2.001637 & -0.371843 & 1.669025 & -0.438570 \\ -0.539741 & 0.476985 & 3.248944 & -1.021228 & -0.577087 \end{array}
\right], \end{equation}
\begin{equation}\label{eq:y}
\mathbf y = [0.124121, 0.302614, 0.523772, 0.000940, 1.343810]^T. \end{equation}
We show the iterations of coordinate descent for Lasso with $\lambda=0$ in Table~\ref{table:iteration:traditiona:cd} and Figure~\ref{fig:cd:solution:alpha} (a). We set $\lambda=0$ to facilitate the eigenvalue analysis in Section~\ref{s:sfg:eigen:analysis}. Note that the results reported here also generalize to Lasso, because if we know the sign of the optimal solution $\bm \beta^*$, the nonzero entries of $\bm \beta^*$ can be solved by the following equivalent convex smooth problem: \begin{equation}\label{eq:lasso:problem:lambda:0}
\min_{\bm \beta} \frac{1}{2} \| \sum_{i: s_i \neq 0} \mathbf x_i \beta_i- \mathbf y\|_2^2 + \lambda \sum_{i: s_i \neq 0} \beta_i s_i, \end{equation} where $s_i =0$ if $\beta^*_i =0$, $s_i =1$ if $\beta^*_i >0$, and $s_i =-1$ if $\beta^*_i <0$.
It can be observed from the results in Table~\ref{table:iteration:traditiona:cd} and Figure~\ref{fig:cd:solution:alpha} (a) that we can obtain an approximate solution with a small objective function value within a few iterations. However, achieving a solution with high precision takes quite a few iterations for this example. More interestingly, for a particular coordinate, the value obtained in the next iteration almost always lies on a ray that starts at its previous iteration and passes through the current iteration. To show this, we compute $\alpha^k_i$ that satisfies the following equation: \begin{equation}
\beta_i^{k+1} = (1-\alpha_i^k) \beta_i^{k-1} + \alpha^k_i \beta_i^k. \end{equation} Table~\ref{table:iteration:traditiona:alpha} and Figure~\ref{fig:cd:solution:alpha} (b) show the values of $\alpha_i^k$ for different iterations. It can be observed that the values of $\alpha_i^k$ are almost always positive; the only exception for this example is $\alpha_1^2$. In addition, most of the values of $\alpha_i^k$ are larger than 1. We tried quite a few synthetic data sets and observed a similar phenomenon.
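Given three successive iterates of a coordinate, the factor $\alpha_i^k$ tabulated below can be recovered by solving the linear relation for $\alpha$; a one-line sketch (the helper name is ours):

```python
def ray_factor(beta_prev, beta_cur, beta_next):
    # Solve beta_next = (1 - alpha) * beta_prev + alpha * beta_cur for alpha,
    # the convention matching the tabulated factors and the SRR search point.
    return (beta_next - beta_prev) / (beta_cur - beta_prev)
```

For example, the $\beta_1$ column of Table~\ref{table:iteration:traditiona:cd} gives ray_factor(0.048912, 0.057182, 0.036909) $\approx -1.4515$, the value of $\alpha_1^2$.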
\begin{table} \caption{Illustration of the ray-continuation property $\beta_i^{k+1} = (1-\alpha_i^k) \beta_i^{k-1} + \alpha^k_i \beta_i^k$ based on the results obtained in Table~\ref{table:iteration:traditiona:cd}. All the $\alpha_i^k$ are positive except $\alpha_1^2$.} \begin{center}
\begin{tabular}{cccccc}
\hline
$k$ & $\alpha_1^k$ & $\alpha_2^k$ & $\alpha_3^k$ & $\alpha_4^k$ & $\alpha_5^k$ \\
\hline\hline 2
& -1.451503 & 1.683019 & 0.971191 & 2.019562 & 1.526899 \\ \hline 3
& 1.880924 & 1.626844 & 4.128185 & 1.598324 & 1.738268 \\ \hline 4
& 1.763341 & 1.612353 & 1.624041 & 1.500861 & 1.813212 \\ \hline 5
& 1.773143 & 1.602012 & 1.473119 & 1.370962 & 1.849048 \\ \hline 6
& 1.801288 & 1.587906 & 1.283008 & 0.973180 & 1.873809 \\ \hline 7
& 1.830983 & 1.564952 & 0.469896 & 24.529117 & 1.892594 \\ \hline 8
& 1.857679 & 1.524402 & 3.679312 & 2.540794 & 1.906640 \\ \hline 9
& 1.879781 & 1.443912 & 2.346272 & 2.183873 & 1.916832 \\ \hline 10
& 1.897021 & 1.240529 & 2.128716 & 2.068764 & 1.924043 \\ \hline ... \\ \hline 28
& 1.939534 & 1.939949 & 1.939646 & 1.939628 & 1.939556 \\ \hline 29
& 1.939545 & 1.939821 & 1.939620 & 1.939608 & 1.939560 \\ \hline 30
& 1.939552 & 1.939736 & 1.939602 & 1.939594 & 1.939562 \\ \hline \end{tabular}
\end{center}
\label{table:iteration:traditiona:alpha} \end{table}
For a particular iteration number $k$, if $\alpha_i^k=\alpha, \forall i$, we could obtain $\bm \beta^{k+1} = (1-\alpha) \bm \beta^{k-1} + \alpha \bm \beta^k$ directly, without performing any coordinate descent iteration. This observation motivated the successive ray refinement technique discussed in the next section.
\section{Successive Ray Refinement}\label{s:sfg}
In the proposed SRR technique, we make use of the ray-continuation property shown in Figure~\ref{fig:cd:solution:alpha}, Table~\ref{table:iteration:traditiona:cd}, and Table~\ref{table:iteration:traditiona:alpha}. Our idea is as follows: To obtain $\bm \beta^{k+1}$, we perform coordinate descent based on a refined search point $\mathbf s^k$ rather than on its previous solution $\bm \beta^{k}$. We propose setting the refined search point as: \begin{equation}\label{eq:SGR:scheme}
\mathbf s^k = (1- \alpha^k) \mathbf h^k + \alpha^k \bm \beta^k, \end{equation} where $ \mathbf h^k$ is a properly chosen history solution, $\bm \beta^k$ is the current solution, and $\alpha^k$ is an optimal refinement factor that optimizes the following univariate optimization problem: \begin{equation}\label{eq:one:d:search}
\alpha ^k= \arg \min_{\alpha} \{ g(\alpha) = f( (1-\alpha) \mathbf h^k + \alpha \bm \beta^k ) \}. \end{equation} The setting of $ \mathbf h^k$ to one of the history solutions is based on the following two considerations. First, we aim to use the ray-continuation property to reduce the number of iterations. Second, we need to ensure that the univariate optimization problem in Equation~\eqref{eq:one:d:search} can be efficiently computed. We discuss the computation of Equation~\eqref{eq:one:d:search} in Section~\ref{s:one:d:search}.
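To make the one-dimensional problem concrete before Section~\ref{s:one:d:search}: for the smooth case $\lambda = 0$ (our simplification), $g(\alpha)$ is a quadratic in $\alpha$, and the refinement factor has a closed form. A hedged sketch (function name is ours):

```python
import numpy as np

# For lambda = 0 (our simplification), g(alpha) =
# 0.5 * ||(1 - alpha) X h + alpha X beta - y||^2 is quadratic in alpha,
# so setting g'(alpha) = 0 gives the minimizer in closed form.
def refinement_factor(X, y, h, beta):
    d = X @ (beta - h)            # direction of the ray in the output space
    if d @ d == 0.0:              # X h == X beta: g is constant in alpha
        return 1.0
    return d @ (y - X @ h) / (d @ d)
```

For general $\lambda$ the additional piecewise-linear $\ell_1$ term must be handled, which is the subject of Section~\ref{s:one:d:search}.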
\begin{figure}
\caption{The proposed SRR technique.
The search point $\mathbf s^k$ lies on the ray that starts from a properly chosen history solution $\mathbf h^k$ and passes through
the current solution $\bm \beta^k$, and meanwhile it achieves the minimum objective function value among all the points on the ray, optimizing Equation~\eqref{eq:one:d:search}.}
\label{fig:sor:diagram}
\end{figure}
Figure~\ref{fig:sor:diagram} illustrates the proposed SRR technique. When $ \alpha^k=1$, we have $\mathbf s^k=\bm \beta^k$; that is, the refined search point becomes the current solution $\bm \beta^k$. When $ \alpha^k=0$, we have $\mathbf s^k= \mathbf h^k$; that is, the refined search point becomes the specified history solution $\mathbf h^k$. However, our next theorem shows that $\mathbf s^k \neq \mathbf h^k$ because $\alpha^k$ is always positive. In other words, the search point always lies on a ray that starts at the history point $\mathbf h^k$ and passes through the current solution $\bm \beta^k$.
\begin{theorem}\label{theorem:non:negative} Assume that the history point $\mathbf h^k$ satisfies \begin{equation}\label{eq:cd:non:increasing}
f( \mathbf h^k ) > f(\bm \beta^k). \end{equation} Then, $ \alpha^k$ that minimizes Equation~\eqref{eq:one:d:search} is positive. In addition, if $X \mathbf h^k \neq X \bm \beta^k$, $ \alpha^k$ is unique. \end{theorem} \noindent \textbf{Proof } It is easy to verify that $g(\alpha)$ is convex; hence Equation~\eqref{eq:one:d:search} admits at least one minimizer $\alpha^k$. Equation~\eqref{eq:cd:non:increasing} leads to \begin{equation}\label{eq:g:1:0}
g(1) < g(0). \end{equation} Therefore, the global refinement factor $ \alpha^k \neq 0$. Next, we show that $ \alpha^k$ cannot be negative.
If $ \alpha ^k<0$, due to the convexity of $g(\alpha)$, we have \begin{equation}
g((1-\theta) \alpha^k + \theta) \le (1-\theta) g( \alpha ^k) + \theta g(1), \forall \theta \in [0,1]. \end{equation} Setting $\theta = \frac{ \alpha^k}{ \alpha^k -1}$, we have \begin{equation}
g(0) \le \frac{-1}{ \alpha^k -1} g( \alpha^k ) + \frac{ \alpha^k}{ \alpha^k -1} g(1). \end{equation} Since $\alpha^k < 0$, the two coefficients on the right-hand side are positive and sum to 1; making use of Equation~\eqref{eq:g:1:0}, we then have $g(1)< g( \alpha^k)$. This contradicts the fact that $ \alpha^k$ minimizes Equation~\eqref{eq:one:d:search}. Therefore, $ \alpha^k$ is always positive.
If $X \mathbf h^k \neq X \bm \beta^k$, $g(\alpha)$ is strongly convex and thus $ \alpha^k$ is unique. This ends the proof of this theorem.
$\Box$
For coordinate descent, the condition in Equation~\eqref{eq:cd:non:increasing} always holds, because the objective function value keeps decreasing. The selection of an appropriate $\mathbf h^k$ is key to the success of the proposed SRR, and the following theorem says that if $\mathbf h^k$ is good enough, the refined search solution $\mathbf s^k$ is an optimal solution to Equation~\eqref{eq:lasso:problem}. \begin{theorem}\label{theorem:optimal:beta} Let $\bm \beta^*$ be an optimal solution to Equation~\eqref{eq:lasso:problem}. If \begin{equation}\label{eq:refinement:optimal}
\bm \beta^* - \mathbf h^k = \gamma ( \bm \beta^k - \mathbf h^k), \end{equation} for some positive $\gamma$, $\mathbf s^k$ achieved by SRR in Equation~\eqref{eq:SGR:scheme} satisfies $f(\mathbf s^k)=f(\bm \beta^* )$. \end{theorem} \noindent \textbf{Proof } Under the assumption in Equation~\eqref{eq:refinement:optimal}, setting $\alpha=\gamma$ in Equation~\eqref{eq:SGR:scheme} yields the point $(1-\gamma)\mathbf h^k + \gamma \bm \beta^k = \bm \beta^*$. Since $\alpha^k$ minimizes Equation~\eqref{eq:one:d:search}, we have $f(\mathbf s^k) = g(\alpha^k) \le g(\gamma) = f(\bm \beta^*)$, and the optimality of $\bm \beta^*$ gives the reverse inequality. Therefore, with the SRR technique, we obtain a refined solution $\mathbf s^k$ that is an optimal solution to Equation~\eqref{eq:lasso:problem}.
$\Box$
In the following subsections, we discuss two schemes for choosing the history solution $\mathbf h^k$.
\subsection{Successive Ray Refinement Chain}
\begin{figure}
\caption{Illustration of the proposed SRRC technique. $\mathbf s^k$ is the refined search point and $\bm \beta^{k+1}$
is the point that is obtained by applying coordinate descent (CD) based on the refined search point $\mathbf s^k$. In this illustration,
it is assumed that the optimal refinement factor $\alpha^k$ is larger than 1. When $\alpha^k \in (0,1)$, $\mathbf s^k$ lies
between $\mathbf s^{k-1}$ and $\bm \beta^k$.}
\label{fig:sorc:diagram}
\end{figure}
In the first scheme, we set \begin{equation}
\mathbf h^k = \mathbf s^{k-1}. \end{equation} That is, the history point is set to the most recent refined search point. Figure~\ref{fig:sorc:diagram} demonstrates this scheme. Since the generated points follow a chain structure, we call this scheme the Successive Ray Refinement Chain (SRRC). In SRRC, $\mathbf s^{k-1}$, $\bm \beta^k$, and $\mathbf s^k$ lie on the same line. In addition, coordinate descent (CD) controls the direction of the chain. In this illustration, it is assumed that the optimal refinement factor $\alpha^k$ is larger than 1 in each step. According to Theorem~\ref{theorem:non:negative}, $\alpha^k >0$. When $\alpha^k \in (0,1)$, $\mathbf s^k$ lies between $\mathbf s^{k-1}$ and $\bm \beta^k$. When $\alpha^k =1$, $\mathbf s^k$ coincides with $\bm \beta^k$.
In Algorithm~\ref{algorithm:SGR:outer}, we apply the proposed SRRC to coordinate descent for Lasso. Compared with the traditional coordinate descent in Algorithm~\ref{algorithm:traditional}, the coordinate update is based on the search point $\mathbf s^{k-1}$ rather than on the previous solution $\bm \beta^{k-1}$. When $\alpha^k$ in line 9 of Algorithm~\ref{algorithm:SGR:outer} is set to 1, Algorithm~\ref{algorithm:SGR:outer} becomes identical to Algorithm~\ref{algorithm:traditional}.
\begin{algorithm} \caption{Coordinate Descent plus SRRC (CD+SRRC) for Lasso} \label{algorithm:SGR:outer} \begin{algorithmic}[1]
\REQUIRE $X$, $\mathbf y$, $\lambda$
\ENSURE $\bm \beta^k$
\STATE Set $k=0$, $\mathbf s^0 = \mathbf 0$, $\mathbf r_s^0 = \mathbf y$
\REPEAT
\STATE Set $k=k+1$, $\mathbf r^k = \mathbf r_s^{k-1}$
\FOR{$i=1$ to $p$}
\STATE Compute $\beta_i^k = S( s_i^{k-1} + \frac{\mathbf x_i^T \mathbf r^k}{\|\mathbf x_i\|_2^2} , \frac{\lambda} {\|\mathbf x_i\|_2^2})$
\STATE Obtain $\mathbf r^k = \mathbf r^k + \mathbf x_i (s_i^{k-1} - \beta_i^k)$
\ENDFOR
\IF{convergence criterion not satisfied}
\STATE Set $\alpha^k=\arg \min_{\alpha} f( (1-\alpha) \mathbf s^{k-1} + \alpha \bm \beta^k)$
\STATE Set $\mathbf s^k = (1- \alpha^k) \mathbf s^{k-1} + \alpha^k \bm \beta^k$
\STATE Set $\mathbf r_s^k = (1- \alpha^k) \mathbf r_s^{k-1} + \alpha^k \mathbf r^k$
\ENDIF
\UNTIL{convergence criterion satisfied} \end{algorithmic} \end{algorithm}
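A runnable sketch of Algorithm~\ref{algorithm:SGR:outer} under the simplification $\lambda = 0$ (ours; the line search in line 9 then has a closed form, and the convergence criterion, left unspecified in the algorithm, is a tolerance on the objective):

```python
import numpy as np

def cd_srrc_lasso(X, y, max_iter=500, tol=1e-16):
    # CD + SRRC for the case lambda = 0 (our simplification).
    n, p = X.shape
    s = np.zeros(p)                  # search point s^{k-1}
    rs = y.copy()                    # residual at the search point, y - X s
    norms2 = (X ** 2).sum(axis=0)
    beta = s.copy()
    for _ in range(max_iter):
        beta, r = s.copy(), rs.copy()
        for i in range(p):           # coordinate sweep from the search point
            old = beta[i]
            beta[i] = old + X[:, i] @ r / norms2[i]   # no shrinkage: lambda = 0
            r += X[:, i] * (old - beta[i])
        if 0.5 * r @ r < tol:
            break
        # Line 9: alpha minimizing f((1 - alpha) s + alpha beta); closed form
        # since the residual along the ray is rs - alpha * (rs - r).
        d = rs - r                   # equals X (beta - s)
        alpha = (d @ rs) / (d @ d) if d @ d > 0.0 else 1.0
        s = (1 - alpha) * s + alpha * beta            # line 10
        rs = (1 - alpha) * rs + alpha * r             # line 11
    return beta
```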
\begin{table} \caption{Illustration of coordinate descent plus SRRC for solving the same problem as in Table~\ref{table:iteration:traditiona:cd}. Note that the optimal function value is 0.} \begin{center} \begin{small} \begin{tabular}{cccccccc}
\hline
$k$ & $\beta_1^k$ & $\beta_2^k$ & $\beta_3^k$ & $\beta_4^k$ & $\beta_5^k$ & $f(\bm \beta^k)$ & $ \alpha^k$ \\
\hline\hline 1
& 0.048912 & 0.034041 & 0.407960 & 0.055687 & 0.160413
& 0.052449 \\ \hline 2
& 0.058130 & -0.041464 & 0.471828 & 0.024612 & 0.173040
& 0.016773
& 1.114740 \\ \hline 3
& 0.022324 & -0.108065 & 0.459034 & -0.018702 & 0.181180
& 0.004209
& 1.520601 \\ \hline 4
& -0.000996 & -0.137517 & 0.452455 & -0.033262 & 0.187045
& 0.001791
& 1.610933 \\ \hline 5
& -0.010602 & -0.144776 & 0.452250 & -0.033032 & 0.190141
& 0.001452
& 1.114831 \\ \hline 6
& -0.029911 & -0.153851 & 0.453133 & -0.026748 & 0.196740
& 0.001091
& 2.700667 \\ \hline 7
& -0.047531 & -0.149751 & 0.458275 & -0.006750 & 0.203971
& 0.000632
& 3.936469 \\ \hline 8
& -0.052347 & -0.148709 & 0.459668 & -0.001392 & 0.205944
& 0.000530
& 1.398237 \\ \hline 9
& -0.058803 & -0.147299 & 0.461526 & 0.005839 & 0.208586
& 0.000407
& 2.059226 \\ \hline 10
& -0.064683 & -0.145997 & 0.463220 & 0.012415 & 0.210994
& 0.000308
& 2.134921 \\ \hline ... \\ \hline 13
& -0.078615 & -0.142905 & 0.467249 & 0.028058 & 0.216701
& 0.000129
& 1.100414 \\ \hline 14
& -0.093529 & -0.139593 & 0.471552 & 0.044782 & 0.222808
& 0.000023
& 9.617055 \\ \hline 15
& -0.094167 & -0.139451 & 0.471743 & 0.045510 & 0.223071
& 0.000020
& 0.997764 \\ \hline 16
& -0.104249 & -0.137213 & 0.474657 & 0.056823 & 0.227201
& 3.3020e-11
& 16.530123 \\ \hline ... \\ \hline 28
& -0.104260 & -0.137210 & 0.474660 & 0.056835 & 0.227205
& 6.3207e-15
& 1.197748 \\ \hline 29
& -0.104260 & -0.137210 & 0.474660 & 0.056835 & 0.227205
& 5.7081e-15
& 0.890278 \\ \hline 30
& -0.104260 & -0.137210 & 0.474660 & 0.056835 & 0.227205
& 2.0807e-15
& 6.945350 \\ \hline \end{tabular} \end{small} \end{center}
\label{table:iteration:SGR:1} \end{table}
Table~\ref{table:iteration:SGR:1} illustrates Algorithm~\ref{algorithm:SGR:outer} with the same input $X$ and $\mathbf y$ that are used in Table~\ref{table:iteration:traditiona:cd}. Comparing Table~\ref{table:iteration:SGR:1} with Table~\ref{table:iteration:traditiona:cd}, we can see that the number of iterations can be significantly reduced with the usage of the SRRC technique. Specifically, to achieve a function value below $10^{-3}$, the traditional coordinate descent takes 10 iterations, whereas the one with the SRRC technique takes 7 iterations; to achieve a function value below $10^{-4}$, the traditional coordinate descent takes 29 iterations, whereas the one with the SRRC technique takes 14 iterations; and to achieve a function value below $10^{-8}$, the traditional coordinate descent takes 103 iterations, whereas the one with the SRRC technique takes 16 iterations.
As can be seen from Figure~\ref{fig:sorc:diagram}, we generate two sequences: $\{\mathbf s^k\}$ and $\{\bm \beta^k\}$. At iteration $k$, the SRRC technique is very greedy in that it constructs the search point $\mathbf s^k$ by using the two existing points $\mathbf s^{k-1}$ and $\bm \beta^k$ to achieve the lowest objective function value. If the search point $\mathbf s^{k-1}$ is dense at some iteration number $k$ and $\alpha^k \neq 1$, it can be shown that $\mathbf s^k$ is also dense. This is not good for Lasso, which usually has a sparse solution. Interestingly, our empirical simulations show that Algorithm~\ref{algorithm:SGR:outer} can set $\alpha^k = 1$ in some iterations, leading to a sparse search point.
\subsection{Successive Ray Refinement Triangle}
In the second scheme, we set \begin{equation}
\mathbf h^k = \bm \beta^{k-1}. \end{equation} Figure~\ref{fig:sort:diagram} demonstrates this scheme. Since the generated points follow a triangle structure, we call this scheme the Successive Ray Refinement Triangle (SRRT). SRRT is less greedy than SRRC because $f(\bm \beta^{k-1}) \ge f(\mathbf s^{k-1})$; that is, the history point $\bm \beta^{k-1}$ has an objective function value no lower than that of $\mathbf s^{k-1}$. However, SRRT can sometimes outperform SRRC in solving Lasso.
\begin{figure}
\caption{Illustration of the proposed SRRT technique. $\mathbf s^k$ is the refined search point and $\bm \beta^{k+1}$
is the point obtained by applying coordinate descent (CD) based on the refined search point $\mathbf s^k$. In this illustration,
it is assumed that the optimal refinement factor $\alpha^k$ is larger than 1. When $\alpha^k \in (0,1)$, $\mathbf s^k$ lies
between $\bm \beta^{k-1}$ and $\bm \beta^k$.}
\label{fig:sort:diagram}
\end{figure}
Algorithm~\ref{algorithm:orrt} shows the application of the proposed SRRT technique to coordinate descent for Lasso. Similar to Algorithm~\ref{algorithm:SGR:outer}, if $\alpha^k$ in line 9 is set to 1, Algorithm~\ref{algorithm:orrt} reduces to the traditional coordinate descent in Algorithm~\ref{algorithm:traditional}. Table~\ref{table:iteration:orrt} illustrates Algorithm~\ref{algorithm:orrt}. Similar to SRRC, SRRT greatly reduces the number of iterations used in coordinate descent for Lasso.
\begin{algorithm} \caption{Coordinate Descent plus SRRT (CD+SRRT) for Lasso} \label{algorithm:orrt} \begin{algorithmic}[1]
\REQUIRE $X$, $\mathbf y$, $\lambda$
\ENSURE $\bm \beta^k$
\STATE Set $k=0$, $\mathbf s^0 = \mathbf 0$, $\mathbf r_s^0 = \mathbf y$
\REPEAT
\STATE Set $k=k+1$, $\mathbf r^k = \mathbf r_s^{k-1}$
\FOR{$i=1$ to $p$}
\STATE Compute $\beta_i^k = S( s_i^{k-1} + \frac{\mathbf x_i^T \mathbf r^k}{\|\mathbf x_i\|_2^2} , \frac{\lambda} {\|\mathbf x_i\|_2^2})$
\STATE Obtain $\mathbf r^k = \mathbf r^k + \mathbf x_i (s_i^{k-1} - \beta_i^k)$
\ENDFOR
\IF{convergence criterion not satisfied}
\STATE Set $\alpha^k= \arg \min_{\alpha} f( (1-\alpha) \bm \beta^{k-1} + \alpha \bm \beta^k)$
\STATE Set $\mathbf s^k = (1- \alpha^k) \bm \beta^{k-1} + \alpha^k \bm \beta^k$
\STATE Set $\mathbf r_s^k = (1- \alpha^k) \mathbf r^{k-1} + \alpha^k \mathbf r^k$
\ENDIF
\UNTIL{convergence criterion satisfied} \end{algorithmic} \end{algorithm}
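The only change relative to CD+SRRC is the refinement step in lines 9--11, which now mixes $\bm \beta^{k-1}$ and $\bm \beta^k$. A minimal sketch of that step follows; the helper names are ours, and a grid search again stands in for the exact refinement-factor computation.

```python
import numpy as np

def lasso_obj(X, y, lam, b):
    """Lasso objective f(b) = 0.5*||X b - y||_2^2 + lam*||b||_1."""
    return 0.5 * np.sum((X @ b - y) ** 2) + lam * np.sum(np.abs(b))

def srrt_search_point(X, y, lam, beta_prev, beta_cur, grid=None):
    """SRRT refinement: s^k = (1 - a) beta^{k-1} + a beta^k, with a
    chosen to minimize the Lasso objective along the ray. A grid
    search (containing a = 0 and a = 1) stands in for the exact
    computation described in the paper."""
    if grid is None:
        grid = np.linspace(0.0, 20.0, 2001)
    cand = beta_prev[None, :] + grid[:, None] * (beta_cur - beta_prev)[None, :]
    vals = np.array([lasso_obj(X, y, lam, c) for c in cand])
    k = int(np.argmin(vals))
    return cand[k], grid[k]
```

Because the grid contains both endpoints of the ray, the returned search point is never worse than either $\bm \beta^{k-1}$ or $\bm \beta^k$.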
\begin{table} \caption{Illustration of coordinate descent plus SRRT for solving the same problem as in Table~\ref{table:iteration:traditiona:cd}. Note that the optimal function value is 0.} \begin{center} \begin{small} \begin{tabular}{cccccccc}
\hline
$k$ & $\beta_1^k$ & $\beta_2^k$ & $\beta_3^k$ & $\beta_4^k$ & $\beta_5^k$ & $f(\bm \beta^k)$ & $ \alpha^k$ \\
\hline\hline 1
& 0.048912 & 0.034041 & 0.407960 & 0.055687 & 0.160413
& 0.052449 \\ \hline 2
& 0.058130 & -0.041464 & 0.471828 & 0.024612 & 0.173040
& 0.016773
& 1.114740 \\ \hline 3
& 0.032838 & -0.089244 & 0.463272 & -0.006319 & 0.178907
& 0.006746
& 1.077199 \\ \hline 4
& -0.010078 & -0.154209 & 0.449373 & -0.043957 & 0.189153
& 0.001610
& 2.336008 \\ \hline 5
& -0.015176 & -0.152741 & 0.450482 & -0.038189 & 0.191151
& 0.001435
& 0.960038 \\ \hline 6
& -0.087427 & -0.134480 & 0.471220 & 0.044793 & 0.220728
& 0.000061
& 15.373834 \\ \hline 7
& -0.098214 & -0.134199 & 0.474324 & 0.054977 & 0.225125
& 0.000019
& 1.138984 \\ \hline 8
& -0.104044 & -0.135210 & 0.475348 & 0.058970 & 0.227325
& 0.000005
& 1.492143 \\ \hline 9
& -0.106491 & -0.136042 & 0.475553 & 0.060121 & 0.228188
& 0.000002
& 1.414778 \\ \hline 10
& -0.106739 & -0.136361 & 0.475482 & 0.059962 & 0.228250
& 0.000002
& 1.141313 \\ \hline ... \\ \hline 16
& -0.104212 & -0.137315 & 0.474613 & 0.056673 & 0.227176
& 1.2569e-08
& 1.246072 \\ \hline 17
& -0.104115 & -0.137272 & 0.474607 & 0.056637 & 0.227143
& 6.9462e-09
& 1.481390 \\ \hline 18
& -0.104106 & -0.137256 & 0.474611 & 0.056648 & 0.227141
& 5.5802e-09
& 1.169237 \\ \hline ... \\ \hline 28
& -0.104268 & -0.137209 & 0.474662 & 0.056843 & 0.227209
& 1.1516e-11
& 1.808904 \\ \hline 29
& -0.104264 & -0.137210 & 0.474660 & 0.056839 & 0.227207
& 3.9831e-12
& 4.426169 \\ \hline 30
& -0.104262 & -0.137210 & 0.474660 & 0.056836 & 0.227206
& 1.1488e-12
& 1.651176 \\ \hline \end{tabular} \end{small} \end{center}
\label{table:iteration:orrt} \end{table}
\subsection{Convergence of CD plus SRR}
In this subsection, we show that both the combination of CD and SRRC (CD+SRRC) and the combination of CD and SRRT (CD+SRRT) are guaranteed to converge.
\begin{theorem}\label{theorem:convergence} For the sequence $\mathbf s^0, \bm \beta^1 ,\mathbf s^1, \bm \beta^2, \mathbf s^2, \bm \beta^3 ,\ldots$ generated by CD+SRRC and CD+SRRT, the objective function value is monotonically decreasing until convergence; that is, \begin{equation}
f(\mathbf s^{k-1}) \ge f(\bm \beta^k) \ge f(\mathbf s^k) \ge f(\bm \beta^{k+1}). \end{equation} In addition, if $f(\mathbf s^{k-1}) = f(\bm \beta^k)$, we have $\mathbf s^{k-1}=\bm \beta^k$ and $\bm \beta^k$ is an optimal solution; that is, \begin{equation}
f(\bm \beta^k) = \min_{\bm \beta} f(\bm \beta). \end{equation} Therefore, we have \begin{equation}\label{eq:convergence:mononotone}
\lim_{k \rightarrow \infty} f(\bm \beta^k) = \min_{\bm \beta} f(\bm \beta). \end{equation} \end{theorem}
\noindent \textbf{Proof } $\bm \beta^k$ is computed by applying coordinate descent based on $\mathbf s^{k-1}$; that is, \begin{equation}
\beta_i^k= \arg \min_{\beta} f([ \beta_1^k,\ldots,\beta_{i-1}^k,\beta,s_{i+1}^{k-1},\ldots,s_{p}^{k-1} ]^T), \end{equation} or equivalently \begin{equation} \label{eq:soft:prove}
\beta_i^k = \frac{ S(\mathbf x_i^T \mathbf y - \sum_{j<i} \mathbf x_i^T \mathbf x_j \beta_j^k - \sum_{j>i} \mathbf x_i^T \mathbf x_j s_j^{k-1} , \lambda) }{\|\mathbf x_i\|_2^2}. \end{equation} Therefore, we have \begin{equation}\label{eq:f:b:s} \begin{aligned}
& f([ \beta_1^k,\ldots,\beta_{i-1}^k,\beta_i^k,s_{i+1}^{k-1},\ldots,s_{p}^{k-1} ]^T) \\
& \le f([ \beta_1^k,\ldots,\beta_{i-1}^k,s_i^{k-1},s_{i+1}^{k-1},\ldots,s_{p}^{k-1} ]^T) \end{aligned} \end{equation} for all $i$.
Since $\|\mathbf x_i\|_2 \neq 0$, $f([ \beta_1^k,\ldots,\beta_{i-1}^k,\beta,s_{i+1}^{k-1},\ldots,s_{p}^{k-1} ]^T)$ is strongly convex in $\beta$. As a result, if the equality in Equation~\eqref{eq:f:b:s} holds, we have $\beta_i^k=s_i^{k-1}$. Recursively applying Equation~\eqref{eq:f:b:s}, we have the following two facts: $f(\bm \beta^k) \le f(\mathbf s^{k-1})$ and if $f(\bm \beta^k) = f(\mathbf s^{k-1})$, then $\mathbf s^{k-1}=\bm \beta^k$.
If $\mathbf s^{k-1}=\bm \beta^k$, it follows from Equation~\eqref{eq:soft:prove} that \begin{equation} \begin{aligned}
\|\mathbf x_i\|_2^2 \beta_i^k & = S(\mathbf x_i^T \mathbf y - \sum_{j \neq i} \mathbf x_i^T \mathbf x_j \beta_j^k , \lambda) \\
& = S(\mathbf x_i^T \mathbf y - \mathbf x_i^T X \bm \beta^k + \|\mathbf x_i\|_2^2 \beta_i^k , \lambda), \end{aligned} \end{equation} which leads to \begin{equation}\label{eq:beta:k:optimal}
\mathbf x_i^T \mathbf y - \mathbf x_i^T X \bm \beta^k \in \mbox{SGN}(\beta_i^k), \end{equation} where \begin{equation} \mbox{SGN}(t)= \left\{ \begin{array}{cc}
\{ 1 \}, & t >0 \\
\{ -1 \}, & t <0 \\
\left[-1,1\right], & t =0. \\ \end{array} \right. \end{equation} Since $\bm \beta^*$ is an optimal solution to Equation~\eqref{eq:lasso:problem} if and only if \begin{equation}
\mathbf x_i^T \mathbf y - \mathbf x_i^T X \bm \beta^* \in \mbox{SGN}(\beta_i^*), \forall i, \end{equation} it follows from Equation~\eqref{eq:beta:k:optimal} that $\bm \beta^k$ is an optimal solution to Equation~\eqref{eq:lasso:problem}.
The relationship $f(\bm \beta^k) \ge f(\mathbf s^k)$ is guaranteed by the univariate optimization problem in Equation~\eqref{eq:one:d:search}. Therefore, the sequence $\{f(\bm \beta^k)\}$ is decreasing. Meanwhile, the sequence $\{f(\bm \beta^k)\}$ has a lower bound $\min_{\bm \beta} f(\bm \beta)$. According to the well-known monotone convergence theorem, we have Equation~\eqref{eq:convergence:mononotone}.
This completes the proof of this theorem.
$\Box$
\section{Efficient Refinement Factor\\ Computation}\label{s:one:d:search}
In this section, we discuss how to efficiently compute the refinement factor $\alpha^k$ in Equation~\eqref{eq:one:d:search}. The function $g(\alpha)$ can be written as: \begin{equation} \begin{aligned}
g(\alpha) & = \frac{1}{2} \left\| X ((1-\alpha) \mathbf h^k + \alpha \bm \beta^k) - \mathbf y \right\|_2^2 + \lambda \|(1-\alpha) \mathbf h^k + \alpha \bm \beta^k\|_1 \\
& = \frac{1}{2} \left\| \mathbf r^k_h- \alpha ( \mathbf r^k_h- \mathbf r^k) \right\|_2^2 + \lambda \| \mathbf h^k-\alpha ( \mathbf h^k - \bm \beta^k)\|_1, \end{aligned} \end{equation} where $\mathbf r^k_h = \mathbf y - X \mathbf h^k$ and $\mathbf r^k=\mathbf y - X \bm \beta^k$ are the residuals that correspond to $ \mathbf h^k$ and $\bm \beta^k$, respectively. Note that 1) $\mathbf r^k_h = \mathbf r^{k-1}_s$ for SRRC and $\mathbf r^k_h = \mathbf r^{k-1}$ for SRRT, and 2) both $\mathbf r^k_h$ and $\mathbf r^k$ have been obtained before line 8 of Algorithm~\ref{algorithm:SGR:outer} and Algorithm~\ref{algorithm:orrt}. Before convergence, we have $\mathbf r^k_h \neq \mathbf r^k$. Therefore, $g(\alpha)$ is strongly convex in $\alpha$, and $\alpha^k$, the minimizer of Equation~\eqref{eq:one:d:search}, is unique.
When $\lambda=0$, Equation~\eqref{eq:one:d:search} has a nice closed form solution, \begin{equation}
\alpha^k = \frac{ \langle \mathbf r^k_h , \mathbf r^k_h - \mathbf r^k \rangle }{\| \mathbf r^k_h - \mathbf r^k \|_2^2}. \end{equation}
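This closed form can be verified numerically; the residual vectors below are synthetic stand-ins rather than outputs of the algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
r_h = rng.normal(size=50)   # residual at the history point h^k (synthetic)
r = rng.normal(size=50)     # residual at beta^k (synthetic)
d = r_h - r

# closed-form refinement factor for lambda = 0:
# alpha = <r_h, r_h - r> / ||r_h - r||_2^2
alpha = (r_h @ d) / (d @ d)

def g(a):
    """The objective 0.5*||r_h - a*(r_h - r)||_2^2 (lambda = 0 case)."""
    return 0.5 * np.sum((r_h - a * d) ** 2)
```

Since $g$ is a strictly convex quadratic in $\alpha$, the closed-form value beats any nearby $\alpha$.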
Next, we discuss the case $\lambda >0$. The subgradient of $ g(\alpha)$ with regard to $\alpha$ can be computed as \begin{equation} \begin{aligned}
\partial g(\alpha) & = \alpha \| \mathbf r^k_h - \mathbf r^k\|_2^2 - \langle \mathbf r^k_h , \mathbf r^k_h - \mathbf r^k \rangle \\
& + \lambda \sum_{i=1}^p (\beta^k_i - h_i) \mbox{SGN} (h_i-\alpha ( h_i - \beta^k_i)). \end{aligned} \end{equation} Computing $\alpha^k$ is a root-finding problem. According to Theorem~\ref{theorem:non:negative}, we have $\alpha^k >0$, so we restrict attention to $\alpha>0$ for $\partial g(\alpha)$ and consider the following three cases: \begin{itemize}
\item[1.] If $h_i =0$, we have $$(\beta^k_i - h_i) \mbox{SGN} (h_i-\alpha ( h_i - \beta^k_i)) = \{ |\beta^k_i| \}.$$
\item[2.] If $h_i (\beta_i^k - h_i) >0$, we have $$(\beta^k_i - h_i) \mbox{SGN} (h_i-\alpha ( h_i - \beta^k_i)) = \{ |\beta^k_i - h_i| \}.$$
\item[3.] If $h_i (\beta_i^k - h_i) <0$, we let
\begin{equation}
w_i = \frac{h_i}{ h_i - \beta^k_i },
\end{equation}
and we have
\begin{equation}
\begin{aligned}
& (\beta^k_i - h_i) \mbox{SGN} (h_i-\alpha ( h_i - \beta^k_i)) = \\
& \left\{
\begin{array}{ll}
\{ -|\beta^k_i - h_i| \} & \alpha \in (0, w_i) \\
\{ |\beta^k_i - h_i| \} & \alpha \in (w_i, +\infty) \\
|\beta^k_i - h_i| \left[-1,1\right] & \alpha =w_i. \\
\end{array} \right.
\end{aligned}
\end{equation} \end{itemize} For the first two cases, the set $\mbox{SGN} (h_i-\alpha ( h_i - \beta^k_i))$ is deterministic. For the third case, $\mbox{SGN} (h_i-\alpha ( h_i - \beta^k_i))$ is deterministic when $\alpha \neq w_i$. Define \begin{equation}
\Omega(\mathbf h^k, \bm \beta^k) = \{i: h_i (\beta_i^k - h_i) <0 \}. \end{equation} Figure~\ref{fig:sfg:computation} illustrates the function $\partial g(\alpha), \alpha >0$. It can be observed that $\partial g(\alpha)$ is a piecewise linear function. If $\Omega(\mathbf h^k, \bm \beta^k)$ is empty, $\partial g(\alpha)$ is continuous; otherwise, $\partial g(\alpha)$ is not continuous at $\alpha=w_i, i \in \Omega(\mathbf h^k, \bm \beta^k)$.
\begin{figure}
\caption{Illustration of $\partial g(\alpha)$. When $\lambda >0$, it is a discontinuous, piecewise linear, monotonically increasing function. The intersection between $\partial g(\alpha)$
and the horizontal axis gives $\alpha^k$, the solution to Equation~\eqref{eq:one:d:search}.}
\label{fig:sfg:computation}
\end{figure}
\subsection{An Algorithm Based on Sorting}
To compute the refinement factor, one approach is to sort $w_i$ as follows:
First, we sort $w_i, i \in \Omega(\mathbf h^k, \bm \beta^k)$, and assume $w_{i_1} \le w_{i_2} \le \ldots \le w_{i_m}$, where $m = |\Omega(\mathbf h^k, \bm \beta^k)|$.
Second, for $j=1, 2, \ldots, m$, we evaluate $\partial g(\alpha)$ at $\alpha = w_{i_j}$ with the following three cases: \begin{itemize}
\item[1.] If $ 0 \in \partial g(w_{i_j})$, we have $\alpha^k = w_{i_j}$ and terminate the search.
\item[2.] If an element in $\partial g(w_{i_j})$ is positive, $\alpha^k$ lies on the linear segment starting at $\alpha=w_{i_{j-1}}$ (with $w_{i_0}:=0$) and ending at $\alpha=w_{i_j}$, and it can be computed analytically.
\item[3.] If all elements in $\partial g(w_{i_j})$ are negative, we set $j=j+1$ and continue the search. \end{itemize}
Finally, if all elements in $\partial g(w_{i_j})$ are still negative when $j=m$, $\alpha^k$ lies on the linear segment that starts at $\alpha=w_{i_m}$. Thus, $\alpha^k$ can be computed analytically.
With a careful implementation, this sorting-based approach can be completed in $O(p + m\log(m))$ time, where $m = |\Omega(\mathbf h^k, \bm \beta^k)| $. In Lasso, the solution is usually sparse, and thus $m$ is much smaller than $p$, the number of variables.
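A sketch of the sorting-based computation follows, under our own naming. It assumes $\mathbf r^k_h \neq \mathbf r^k$ (pre-convergence), as in the text, and exploits that the subgradient of $g$ is piecewise linear and non-decreasing between the breakpoints $w_i$.

```python
import numpy as np

def refine_alpha(r_h, r, h, beta, lam):
    """Minimize g(a) = 0.5*||r_h - a*(r_h - r)||_2^2
                      + lam*||h - a*(h - beta)||_1 over a > 0
    by scanning the breakpoints w_i = h_i/(h_i - beta_i) of the
    piecewise-linear subgradient (the sorting-based scheme).
    Assumes r_h != r, i.e., the method has not yet converged."""
    d = r_h - r
    u = h - beta
    quad = d @ d                      # slope of the smooth part of g'
    lin = -(r_h @ d)                  # constant of the smooth part of g'
    w = sorted({h[i] / u[i] for i in range(len(u))
                if u[i] != 0 and h[i] / u[i] > 0})
    edges = [0.0] + list(w) + [np.inf]
    for lo, hi in zip(edges[:-1], edges[1:]):
        mid = lo + 1.0 if np.isinf(hi) else 0.5 * (lo + hi)
        signs = np.sign(h - mid * u)  # fixed on the open interval (lo, hi)
        dconst = lin + lam * np.sum(-u * signs)
        a = -dconst / quad            # root of g'(a) = a*quad + dconst
        if a <= lo:                   # g' already positive: minimum at the kink
            return lo
        if a <= hi:                   # root falls inside this interval
            return a
```

By convexity of $g$, a point where the one-sided values of $g$ do not decrease is the global minimizer over $\alpha > 0$; the test below checks exactly that against a numerical perturbation.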
\subsection{An Algorithm Based on Bisection}
A second approach is to make use of the improved bisection proposed in~\cite{liu_j:09}. The idea is to 1) determine an initial guess of the interval $[\alpha_1, \alpha_2]$ to which the root belongs, where all elements in $\partial g(\alpha_1)$ are negative and all elements in $\partial g(\alpha_2)$ are positive, 2) evaluate $\partial g(\alpha)$ at $\alpha= \frac{\alpha_1+\alpha_2}{2}$ and update the interval to $[\alpha_1, \alpha)$ if all the elements in $\partial g(\alpha)$ are positive or to $[\alpha, \alpha_2)$ if all the elements in $\partial g(\alpha)$ are negative, 3) set the value of $\alpha$ to the largest $w_i$ that satisfies $w_i < \alpha$ if all the elements in $\partial g(\alpha)$ are positive, or to the smallest $w_i$ that satisfies $w_i > \alpha$ if all the elements in $\partial g(\alpha)$ are negative, and 4) repeat 2) and 3) until the root of $\partial g(\alpha)$ is found. With an implementation similar to that in~\cite{liu_j:09}, the improved bisection approach has a time complexity of $O(p)$.
\section{An Eigenvalue Analysis on \\the Proposed SRR}\label{s:sfg:eigen:analysis}
Let \begin{equation}\label{eq:LDU:XTX} A= X^T X = L + D + U, \end{equation} where $D$ is $A$'s diagonal part, $L$ is $A$'s strictly lower triangular part, and $U$ is $A$'s strictly upper triangular part. It is easy to see that \begin{equation} L_{ij}=
\left\{ \begin{array}{cc}
\mathbf x_i^T \mathbf x_j & i>j \\
0 & i \le j ,\\ \end{array} \right. \end{equation} \begin{equation} D_{ij}=
\left\{ \begin{array}{cc}
\mathbf x_i^T \mathbf x_i & i=j \\
0 & i \neq j ,\\ \end{array} \right. \end{equation} \begin{equation} U_{ij}=
\left\{ \begin{array}{cc}
\mathbf x_i^T \mathbf x_j & i<j \\
0 & i \ge j. \\ \end{array} \right. \end{equation}
We can rewrite Equation~\eqref{eq:beta:i:kplus1} as \begin{equation}\label{eq:beta:i:kplus1:reformulation} D_{ii} \beta_i^k = S(\mathbf x_i^T \mathbf y - L_{i:} \bm \beta^k - U_{i:}\bm \beta^{k-1}, \lambda), \end{equation} where $L_{i:}$ and $U_{i:}$ denote the $i$th row of $L$ and $U$, respectively. Therefore, we can write coordinate descent iteration as: \begin{equation}\label{eq:beta:i:kplus1:reformulation:2} D \bm \beta^k = S(X^T \mathbf y - L \bm \beta^k - U \bm \beta^{k-1}, \lambda). \end{equation}
When $\lambda=0$, Equation~\eqref{eq:beta:i:kplus1:reformulation:2} becomes \begin{equation}
(L +D) \bm \beta^{k} = X^T \mathbf y - U \bm \beta^{k-1}, \end{equation} which is the Gauss-Seidel method for solving \begin{equation}\label{eq:gradient:zero}
X^TX \bm \beta = (L+D+U) \bm \beta = X^T \mathbf y. \end{equation} Equation~\eqref{eq:gradient:zero} is also the optimality condition for Equation~\eqref{eq:lasso:problem} when $\lambda=0$. The discussion that follows focuses on the case $\lambda=0$, where the iterations can be written as linear systems.
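The equivalence between one coordinate-descent sweep with $\lambda=0$ and one Gauss-Seidel step can be verified numerically; the data and variable names below are our own synthetic setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 6
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

A = X.T @ X
L = np.tril(A, -1)            # strictly lower triangular part of A
D = np.diag(np.diag(A))       # diagonal part of A
U = np.triu(A, 1)             # strictly upper triangular part of A

beta_prev = rng.normal(size=p)

# one sweep of coordinate descent with lambda = 0
beta_cd = beta_prev.copy()
r = y - X @ beta_cd
for i in range(p):
    old = beta_cd[i]
    beta_cd[i] = old + X[:, i] @ r / A[i, i]
    r += X[:, i] * (old - beta_cd[i])

# the same sweep as a Gauss-Seidel step:
# (L + D) beta = X^T y - U beta_prev
beta_gs = np.linalg.solve(L + D, X.T @ y - U @ beta_prev)
```

The two updates agree to machine precision.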
Denote \begin{equation}\label{eq:G:def}
G = - (L +D)^{-1} U. \end{equation} Let $G$ have the following eigendecomposition: \begin{equation}
G = P \Delta P^{-1}, \end{equation} where $\Delta =\mbox{diag}(\delta_1, \delta_2, \ldots, \delta_p)$ is a diagonal matrix consisting of its eigenvalues. \begin{lemma}
The magnitudes of the eigenvalues of $G$ are all less than or equal to 1; that is, \begin{equation}
|\delta_i| \le 1, \forall i. \end{equation} \end{lemma} \noindent \textbf{Proof } Let \begin{equation}
G \mathbf z = \sigma \mathbf z, \end{equation} where $\sigma$ is an eigenvalue of $G$ with the corresponding eigenvector being $\mathbf z$. Note that $\sigma$ and the entries in $\mathbf z$ can be complex. Using Equation~\eqref{eq:LDU:XTX} and Equation~\eqref{eq:G:def}, we have \begin{equation}
(L+D - X^TX) \mathbf z = - U \mathbf z = (L+D) \sigma \mathbf z, \end{equation} which leads to \begin{equation}
(L+D) (1- \sigma) \mathbf z = X^TX \mathbf z. \end{equation} If $\sigma =1$, the corresponding eigenvector $\mathbf z$ is in the null space of $ X^TX$. If $\sigma \neq 1$, we have \begin{equation}\label{eq:proof:less:than:1:1}
(L+D) \mathbf z = \frac{1}{(1- \sigma)} X^TX \mathbf z. \end{equation} Premultiplying Equation~\eqref{eq:proof:less:than:1:1} by $\mathbf z^H$, the conjugate transpose of $\mathbf z$, we have \begin{equation}\label{eq:proof:less:than:1:2}
\mathbf z^H (L+D) \mathbf z = \frac{1}{(1- \sigma)} \mathbf z^H X^TX \mathbf z. \end{equation} Taking the conjugate transpose of Equation~\eqref{eq:proof:less:than:1:2}, we have \begin{equation}\label{eq:proof:less:than:1:3}
\mathbf z^H (U+D) \mathbf z = \frac{1}{(1- \bar{\sigma})} \mathbf z^H X^TX \mathbf z, \end{equation} where $\bar {\sigma}$ denotes the conjugate of $\sigma$. Adding Equation~\eqref{eq:proof:less:than:1:2} and Equation~\eqref{eq:proof:less:than:1:3} and subtracting $\mathbf z^H X^TX \mathbf z$, we have \begin{equation}\label{eq:proof:less:than:1:4}
\mathbf z^H D \mathbf z = \left(\frac{1}{(1- \bar{\sigma})} + \frac{1}{(1- \sigma)} -1 \right )\mathbf z^H X^TX \mathbf z. \end{equation} Since $\mathbf z^H D \mathbf z >0$ and $\mathbf z^H X^TX \mathbf z \ge 0$, we have \begin{equation}
0 < \frac{1}{(1- \bar{\sigma})} + \frac{1}{(1- \sigma)} -1 = \frac{1- |\sigma|^2}{|1-\sigma|^2}. \end{equation}
Therefore, we have $1- |\sigma|^2 >0$ or equivalently $|\sigma| < 1$.
This ends the proof of this lemma.
$\Box$
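The lemma can be sanity-checked numerically. On a random full-column-rank design (our own synthetic setup), the null space of $X^TX$ is trivial, so all eigenvalue magnitudes of $G$ are in fact strictly below 1.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 8
X = rng.normal(size=(n, p))   # n > p, so X^T X is positive definite a.s.

A = X.T @ X
L = np.tril(A, -1)            # strictly lower triangular part
D = np.diag(np.diag(A))       # diagonal part
U = np.triu(A, 1)             # strictly upper triangular part

# G = -(L + D)^{-1} U
G = -np.linalg.solve(L + D, U)
eigs = np.linalg.eigvals(G)
```

The eigenvalues can be complex, so the check is on their magnitudes.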
\subsection{An Eigenvalue Analysis on CD+SRRC}\label{ss:srrc:analysis}
For CD+SRRC in Algorithm~\ref{algorithm:SGR:outer}, when $\lambda =0$ we have \begin{equation}
\bm \beta^k = (L+D)^{-1}[X^T \mathbf y - U \mathbf s^{k-1}] \end{equation} \begin{equation}\label{eq:sgr:iteration}
\mathbf s^k = (1- \alpha^k)\mathbf s^{k-1} + \alpha^k \bm \beta^k. \end{equation} It can be shown that \begin{equation}
\mathbf s^k - \mathbf s^{k-1}= \alpha^k \left [ - (L +D)^{-1} U + \frac{1- \alpha^{k-1}}{ \alpha^{k-1}} I \right] (\mathbf s^{k-1} - \mathbf s^{k-2}), \end{equation} \begin{equation}
\bm \beta^k - \bm \beta^{k-1}= - (L +D)^{-1} U (\mathbf s^{k-1} - \mathbf s^{k-2}). \end{equation}
When $k\ge 2$, we denote \begin{equation}
A^k = \alpha^k \left [ - (L +D)^{-1} U + \frac{1- \alpha^{k-1}}{ \alpha^{k-1}} I \right]. \end{equation} It can be shown that \begin{equation}
A^k = P \Sigma^k P^{-1}, \end{equation} where $ \Sigma^k=\mbox{diag}(\sigma_1^k, \sigma_2^k, \ldots, \sigma_p^k)$ is a diagonal matrix and \begin{equation}
\sigma_i^k = \alpha^k (\delta_i + \frac{1- \alpha^{k-1}}{ \alpha^{k-1}}). \end{equation} Therefore, we have \begin{equation}
\mathbf s^k - \mathbf s^{k-1}=\left(\Pi_{i=2}^k A^i \right) ( \mathbf s^1 - \mathbf s^0) = P \left(\Pi_{i=2}^k \Sigma^i\right) P^{-1} ( \mathbf s^1 - \mathbf s^0). \end{equation}
For discussion convenience, we let $\Sigma^1 = \Delta$ and \begin{equation}
T^k=\mbox{diag}(t_1^k, t_2^k, \ldots, t_p^k) = \Pi_{j=1}^k \Sigma^j, \end{equation} where \begin{equation}
t_i^k = \Pi_{j=1}^k \sigma_i^j. \end{equation} We have \begin{equation}
\bm \beta^k - \bm \beta^{k-1}= P T^{k-1} P^{-1} ( \mathbf s^1 - \mathbf s^0). \end{equation}
For the traditional coordinate descent in Algorithm~\ref{algorithm:traditional}, $\alpha^k =1, \forall k$. For the proposed CD+SRRC in Algorithm~\ref{algorithm:SGR:outer}, $\alpha^k$ optimizes Equation~\eqref{eq:one:d:search}.
For the data used in Table~\ref{table:iteration:traditiona:cd}, the eigenvalues of $ - (L +D)^{-1} U$ are $$\delta_1=0, \delta_2=0.00219338, \delta_3=0.12412229,$$ $$ \delta_4=0.62606165, \delta_5= 0.93956707.$$ For the traditional coordinate descent in Algorithm~\ref{algorithm:traditional}, when $k=30$, we have $$t_1^{29} =0, t_2^{29}<0.000001, t_3^{29} <0.000001,$$ $$ t_4^{29}=0.000002, t_5^{29}=0.164023.$$ For the coordinate descent with SRRC in Algorithm~\ref{algorithm:SGR:outer}, we have $$t_1^{29} =0, t_2^{29}<0.000001, t_3^{29} <0.000001,$$ $$ t_4^{29}<0.000001, t_5^{29}=0.000008.$$ This explains why the proposed SRRC can greatly accelerate the convergence of the coordinate descent method.
\subsection{An Eigenvalue Analysis on CD+SRRT}
For coordinate descent with SRRT in Algorithm~\ref{algorithm:orrt}, when $\lambda =0$ we have \begin{equation}
\bm \beta^k = (L+D)^{-1}[X^T \mathbf y - U \mathbf s^{k-1}] \end{equation} \begin{equation}
\mathbf s^k = (1- \alpha^k)\bm \beta^{k-1} + \alpha^k \bm \beta^k. \end{equation} When $k \ge 3$, it can be shown that \begin{equation}\label{eq:recursion:srrt}
\bm \beta^k - \bm \beta^{k-1} = G [ (1- \alpha^{k-2}) (\bm \beta^{k-2} - \bm \beta^{k-3}) + \alpha^{k-1} (\bm \beta^{k-1} - \bm \beta^{k-2})]. \end{equation} When $k = 2$, we have \begin{equation}
\bm \beta^2 - \bm \beta^1 = \alpha^1 G (\bm \beta^1 - \bm \beta^0). \end{equation} Using the recursion in Equation~\eqref{eq:recursion:srrt}, we can get \begin{equation}
\bm \beta^3 - \bm \beta^2 = [(1 -\alpha^1) G + \alpha^1 G \alpha^2 G ] (\bm \beta^1 - \bm \beta^0). \end{equation} \begin{equation} \begin{aligned}
\bm \beta^4 - \bm \beta^3 = & [\alpha^1 G (1 -\alpha^2) G + (1 -\alpha^1) G \alpha^3 G \\
& + \alpha^1 G \alpha^2 G \alpha^3 G]
(\bm \beta^1 - \bm \beta^0). \end{aligned} \end{equation} Generally speaking, we can write \begin{equation}
\bm \beta^k - \bm \beta^{k-1} = P T^{k-1} P^{-1} (\bm \beta^1 - \bm \beta^0), \end{equation} where $T^k=\mbox{diag}(t_1^k, t_2^k, \ldots, t_p^k)$ is a diagonal matrix. For $t_i^k$, it is a polynomial function of $\delta_i$; that is, $t_i^k = \phi_k(\delta_i)$, where \begin{equation}
\phi_k(t) = t^{\lceil k/2 \rceil} \sum_{i=0}^{k - \lceil k/2 \rceil} c_i\, t^i, \end{equation} and the coefficients $c_0, c_1, \ldots, c_{k - \lceil k/2 \rceil}$ depend on $\alpha^1, \alpha^2, \ldots, \alpha^{k-1}$. When $k=2$, we have \begin{equation}
c_0= \alpha^1. \end{equation} When $k=3$, we have \begin{equation} \begin{array}{lll}
c_0 &= & 1- \alpha^1, \\
c_1 &= & \alpha^1 \alpha^2.\\ \end{array} \end{equation} When $k=4$, we have \begin{equation} \begin{array}{lll}
c_0 &= & \alpha^1 (1-\alpha^2) + \alpha^3(1-\alpha^1), \\
c_1 &= & \alpha^1 \alpha^2 \alpha^3.\\ \end{array} \end{equation} When $k=5$, we have \begin{equation} \begin{array}{lll}
c_0 &=& (1-\alpha^1) (1-\alpha^3), \\
c_1 &=& \alpha^1 (1-\alpha^2) \alpha^4 + \alpha^3(1-\alpha^1) \alpha^4 + \alpha^1 \alpha^2 (1-\alpha^3),\\
c_2 &=& \alpha^1 \alpha^2 \alpha^3 \alpha^4. \end{array} \end{equation} For the coordinate descent with SRRT in Algorithm~\ref{algorithm:orrt}, we have $$t_1^{29} =0, t_2^{29}<0.000001, t_3^{29} <0.000001, $$ $$t_4^{29}=0.000002, t_5^{29}=0.000393,$$ which are smaller than the ones in the traditional coordinate descent shown in Section~\ref{ss:srrc:analysis}.
\section{Related Work}\label{s:related}
In this section, we compare our proposed SRR with successive over-relaxation~\cite{Yong_D:1950} and the accelerated gradient descent method~\cite{Nesterov:2004}.
\subsection{Relationship between SRRC and\\ Successive Over-Relaxation}
Successive over-relaxation (SOR) is a classical approach for accelerating the Gauss-Seidel approach. Our discussion in this section considers only $\lambda=0$, because SOR targets the acceleration of the Gauss-Seidel approach.
From Equation~\eqref{eq:gradient:zero}, we have \begin{equation}\label{eq:gradient:zero:sor}
(wL+D) \bm \beta = w X^T \mathbf y - w U \bm \beta - (w-1) D \bm \beta, \forall w >0 . \end{equation} The iteration used in successive over-relaxation is: \begin{equation}\label{eq:sor:formulation}
\bm \beta^k = (wL+D)^{-1} \left[w X^T \mathbf y - [wU + (w-1) D ] \bm \beta^{k-1}\right], \end{equation} which can be obtained by plugging $\bm \beta^k $ and $\bm \beta^{k-1}$ into Equation~\eqref{eq:gradient:zero:sor}. Equation~\eqref{eq:sor:formulation} can be rewritten as: \begin{equation}\label{eq:sor:formulation:2}
\bm \beta^k =\bm \beta^{k-1} - w (wL+D)^{-1} [X^TX \bm \beta^{k-1} - X^T \mathbf y]. \end{equation}
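For reference, a minimal sketch of the SOR iteration in Equation~\eqref{eq:sor:formulation:2} applied to the normal equations; the data and the relaxation parameter $w$ are chosen ad hoc here. For symmetric positive definite $A$ and $0 < w < 2$, SOR is known to converge.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 60, 5
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

A = X.T @ X                   # positive definite a.s. since n > p
b = X.T @ y
L = np.tril(A, -1)            # strictly lower triangular part
D = np.diag(np.diag(A))       # diagonal part

w = 1.5                       # relaxation parameter (ad hoc choice)
M = w * L + D                 # SOR preconditioning matrix
beta = np.zeros(p)
for _ in range(1000):
    # beta^k = beta^{k-1} - w (wL + D)^{-1} (A beta^{k-1} - b)
    beta = beta - w * np.linalg.solve(M, A @ beta - b)

beta_star = np.linalg.solve(A, b)   # least-squares solution
```

The iterate converges to the solution of $X^TX\bm\beta = X^T\mathbf y$.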
For the proposed CD+SRRC in Algorithm~\ref{algorithm:SGR:outer}, when $\lambda =0$ we have \begin{equation}\label{eq:srrc:iteration} \begin{aligned}
\mathbf s^k & = (1- \alpha^k)\mathbf s^{k-1} + \alpha^k (L +D)^{-1} \left[ X^T \mathbf y - U \mathbf s^{k-1} \right] \\
& = \mathbf s^{k-1} - \alpha^k (L +D)^{-1} \left[ X^T X \mathbf s^{k-1} - X^T \mathbf y \right]. \end{aligned} \end{equation}
When $w=1$ and $\alpha^k=1$, both SOR and CD+SRRC reduce to the traditional coordinate descent. Equation~\eqref{eq:sor:formulation} and Equation~\eqref{eq:srrc:iteration} share the following two similarities:
1) both make use of the gradient in the recursive iterations in that $X^TX \bm \beta^{k-1} - X^T \mathbf y$ is the gradient of $\frac{1}{2} \| X \bm \beta - \mathbf y\|_2^2$ at
$\bm \beta^{k-1}$ and $X^TX \mathbf s^{k-1} - X^T \mathbf y$ is the gradient of $\frac{1}{2} \| X \bm \beta - \mathbf y\|_2^2$ at $\mathbf s^{k-1}$, and 2) both use a preconditioning matrix in that SOR uses $(wL+D)$ whereas SRRC uses $(L +D)$. A key difference is that the preconditioning matrix used in SRRC is parameter-free whereas the one used in SOR has a parameter. As a result, we can perform an inexpensive univariate search to find the optimal $\alpha^k$ used in SRRC, whereas it is usually expensive for SOR to search for an optimal $w$ in the same way as Equation~\eqref{eq:one:d:search}.
When the design matrix has some special structures, it has been shown in~\cite{Yong_D:1950} that the optimal value of $w$ can be found for SOR. However, for the general design matrix $X$, it is hard to obtain the optimal $w$ used for SOR. This might be a major reason that SOR is not widely used in solving Lasso with coordinate descent. For our proposed SRRC, the criterion in Equation~\eqref{eq:one:d:search} enables us to adaptively set the refinement factor $\alpha^k$.
\subsection{Relationship between SRRT and\\ Nesterov's Method}
The SRRT scheme presented in Figure~\ref{fig:sort:diagram} is similar to Nesterov's method in that both make use of a search point in the iterations. In addition, both set the search point using \begin{equation}
\mathbf s^k = (1-\alpha^k) \bm \beta^{k-1} + \alpha^k \bm \beta^k. \end{equation} However, the key difference is that the $\alpha^k$ used in Nesterov's method is predefined according to a specified formula, whereas the $\alpha^k$ used in SRRT is set to optimize the objective function, as shown in Equation~\eqref{eq:one:d:search}. Note that if Nesterov's method were to set $\alpha^k$ to optimize the objective function, it would reduce to the traditional steepest descent method, and the acceleration property of Nesterov's method would be lost.
\section{Experiments}\label{s:experiment}
\begin{table}
\caption{Number of iterations for the synthetic data sets under CD, CD+SRRC, and CD+SRRT. The results are averaged over 10 runs.
The sparsity is defined as the number of zeros in the solution divided by the total number of variables $p$.}
\label{tab:SyntheticResults} \begin{center} \begin{small}
\begin{tabular}{l|lllll}
\hline
data size & $\lambda$
& CD & CD+SRRC & CD+SRRT & sparsity \\
\hline
\multirow{2}{*}{$n=500$} & 0.5 & 10.0 & 8.8 & 9.2 & 0.9395 \\
\cline{2-6}
& 0.1 & 151.7 & 74.7 & 59.5 & 0.6406 \\
\cline{2-6}
\multirow{2}{*}{$p=1000$} & 0.05 & 463.2 & 179.0 & 109.1 & 0.5763 \\
\cline{2-6}
& 0.01 & 4132.7 & 1419.4 & 326.1 & 0.5146\\
\hline
\hline
\multirow{2}{*}{$n=1000$} & 0.5 & 7.9 & 7.7 & 7.7 & 0.9397 \\
\cline{2-6}
& 0.1 & 54.9 & 31.7 & 29.0 & 0.473 \\
\cline{2-6}
\multirow{2}{*}{$p=1000$} & 0.05 & 125.4 & 59.9 & 47.7 & 0.3126\\
\cline{2-6}
& 0.01 & 748.0 & 293.4 & 128.9 & 0.118\\
\hline
\hline
\multirow{2}{*}{$n=1000$} & 0.5 & 7.9 & 7.7 & 7.8 & 0.8856 \\
\cline{2-6}
& 0.1 & 26.3 & 17.3 & 17.1 & 0.3102 \\
\cline{2-6}
\multirow{2}{*}{$p=500$} & 0.05 & 35.5 & 21.3 & 20.2 & 0.1678 \\
\cline{2-6}
& 0.01 & 47.9 & 26.3 & 25.1 & 0.0382 \\
\hline
\end{tabular} \end{small} \end{center} \end{table}
\begin{table}
\caption{Performance (number of iterations) for the real data sets.
The sparsity is defined as the number of zeros in the solution divided by the total number of variables $p$.}
\label{tab:RealResults} \begin{center} \begin{small}
\begin{tabular}{l|lllll}
\hline
data size & $\frac{\lambda}{\|X^T\mathbf y\|_{\infty}}$
& CD & CD+SRRC & CD+SRRT & sparsity \\
\hline
\multirow{2}{*}{$n=38$} & 0.5 & 122 & 68 & 84 & 0.9982\\
\cline{2-6}
& 0.1 & 155 & 90 & 103 & 0.9964 \\
\cline{2-6}
\multirow{2}{*}{$p=7129$}
& 0.05 & 254 & 119 & 127 & 0.9961 \\
\cline{2-6}
& 0.01 & 2053 & 424 & 343 & 0.9948 \\
\hline
\hline
\multirow{2}{*}{$n=62$} & 0.5 & 31 & 21 & 24 & 0.9975\\
\cline{2-6}
& 0.1 & 157 & 68 & 78 & 0.9840 \\
\cline{2-6}
\multirow{2}{*}{$p=2000$}
& 0.05 & 308 & 115 & 118 & 0.9775 \\
\cline{2-6}
& 0.01 & 2766 & 929 & 375 & 0.9715 \\
\hline
\hline
\multirow{2}{*}{$n=6000$} & 0.5 & 26 & 16 & 11 & 0.9994\\
\cline{2-6}
& 0.1 & 180 & 103 & 108 & 0.9932\\
\cline{2-6}
\multirow{2}{*}{$p=5000$}
& 0.05 & 823 & 337 & 432 & 0.9876\\
\cline{2-6}
& 0.01 & 4621 & 1387 & 1368 & 0.8340\\
\hline
\end{tabular} \end{small} \end{center} \end{table}
In this section, we report experimental results for synthetic and real data sets, studying the number of iterations of CD, CD+SRRC and CD+SRRT for solving Lasso. The consumed computational time is proportional to the number of iterations.
\noindent \textbf{Synthetic Data Sets } We generate the synthetic data as follows. The entries in the $n \times p$ design matrix $X$ and the $n \times 1$ response $\mathbf y$ are drawn from a Gaussian distribution. We try the following three settings of $n$ and $p$: 1) $n=500, p=1000$, 2) $n=1000,p=1000$, and 3) $n=1000, p=500$.
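The generation procedure above can be sketched as follows; this is a minimal reproduction of the description, and the seed handling is our own choice.

```python
import numpy as np

def make_synthetic(n, p, seed=0):
    """Design matrix X (n x p) and response y (length n) with i.i.d.
    standard Gaussian entries, as described above."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n, p)), rng.standard_normal(n)

# the three settings of n and p used in the experiments
settings = [(500, 1000), (1000, 1000), (1000, 500)]
```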
\noindent \textbf{Real Data Sets } We make use of the following three real data sets provided in~\cite{libsvmData}: leukemia, colon, and gisette. The leukemia data set has $n=38$ samples and $p=7129$ variables. The colon data set has $n=62$ samples and $p=2000$ variables. The gisette data set has $n=6000$ samples and $p=5000$ variables.
\begin{figure*}
\caption{Visualization of the results in Tables~\ref{tab:SyntheticResults} and~\ref{tab:RealResults}.}
\label{fig:result}
\end{figure*}
\noindent \textbf{Experimental Settings }
For the value of the regularization parameter, we try $\lambda =r \|X^T \mathbf y\|_{\infty}$, where $r=0.5,0.1,0.05,0.01$. For the synthetic data sets, the reported results are averaged over 10 runs. For a particular regularization parameter, we first run CD in Algorithm~\ref{algorithm:traditional} until
$\|\bm \beta^k -\bm \beta^{k-1}\|_2 \le 10^{-6}$, and then run CD+SRRC and CD+SRRT until the obtained objective function value is less than or equal to the one obtained by CD.
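The baseline CD run and its stopping rule can be sketched as follows. This is a minimal cyclic coordinate descent with soft-thresholding; Algorithm~\ref{algorithm:traditional} itself is not reproduced in this excerpt, so details such as the update order are our own assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of t*|.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(X, y, lam, tol=1e-6, max_iter=10000):
    """Cyclic coordinate descent for Lasso, stopped once
    ||beta^k - beta^{k-1}||_2 <= tol, as in the experimental protocol."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)      # precomputed squared column norms
    r = y - X @ beta                   # running residual
    for _ in range(max_iter):
        beta_old = beta.copy()
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            r += X[:, j] * beta[j]     # put coordinate j back into the residual
            beta[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * beta[j]     # remove the updated coordinate
        if np.linalg.norm(beta - beta_old) <= tol:
            break
    return beta
```

Each inner update solves the one-variable Lasso subproblem exactly, so the objective is monotonically non-increasing over sweeps.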
\noindent \textbf{Results } Table~\ref{tab:SyntheticResults} and Table~\ref{tab:RealResults} show the results for the synthetic and real data sets, respectively. The last column of each table shows the sparsity of the obtained Lasso solution, which is defined as the number of zero entries in the solution divided by the number of variables $p$. Figure~\ref{fig:result} visualizes the results in these two tables. We can see that when the solution is
very sparse (for example, $\lambda = 0.5 \|X^T \mathbf y\|_{\infty}$), the proposed CD+SRRC and CD+SRRT consume a comparable number of iterations to the traditional CD. The reason is that the optimal refinement factor computed by SRR in Equation~\eqref{eq:one:d:search} is equal to or close to 1, so that CD+SRRC and CD+SRRT are very close to the traditional CD. Note that a regularization parameter $\lambda = 0.5 \|X^T \mathbf y\|_{\infty}$ is usually too large for practical applications because it selects too few variables, and we usually need to try a smaller
$\lambda = r \|X^T \mathbf y\|_{\infty}$, for example with $r=0.01$. It can be observed that the proposed CD+SRRC and CD+SRRT require far fewer iterations than CD, especially for smaller regularization parameters.
\section{Conclusion}\label{s:conclusion}
In this paper, we propose a novel technique called successive ray refinement. Our proposed SRR is motivated by an interesting ray-continuation property on the coordinate descent iterations: for a particular coordinate, the value obtained in the next iteration almost always lies on a ray that starts at its previous iteration and passes through the current iteration. We propose two schemes for SRR and apply them to solving Lasso with coordinate descent. Empirical results for real and synthetic data sets show that the proposed SRR can significantly reduce the number of coordinate descent iterations, especially when the regularization parameter is small.
We have established the convergence of CD+SRR, and it would be interesting to study its convergence rate. In this paper we focused on the least squares loss function in~\eqref{eq:lasso:problem}; we plan to apply the SRR technique to solving generalized linear models. We computed the refinement factor as an optimal solution to Equation~\eqref{eq:one:d:search}; we plan to explore computing the refinement factor as an approximate solution instead, especially in the case of generalized linear models.
\end{document}
\begin{document}
\begin{abstract} We recall the definition of classical polar varieties, as well as those of affine and projective reciprocal polar varieties. The latter are defined with respect to a non-degenerate quadric, which gives us a notion of orthogonality. In particular we relate the reciprocal polar varieties to the ``Euclidean geometry'' in projective space. The Euclidean distance degree and the degree of the focal loci can be expressed in terms of the ranks, i.e., the degrees of the classical polar varieties, and hence these characters can be found also for singular varieties, when one can express the ranks in terms of the singularities. \keywords{Schubert varieties, polar varieties, reciprocal polar varieties, Euclidean normal bundle, Euclidean distance degree, focal locus.} \end{abstract}
\title{Polar varieties revisited}
\section{Introduction}
The theory of polars and polar varieties has played an important role in the quest for understanding and classifying projective varieties. Their use in the definition of projective invariants is the very basis for the geometric approach to the theory of characteristic classes, such as Todd classes and Chern classes. In particular this approach gives a way of defining Chern classes for \emph{singular} projective varieties (see e.g. \cite{Piene1,Piene3}). The \emph{local} polar varieties were used by L\^e--Teissier, Merle and others in the study of singularities.
More recently, polar varieties have been applied to study
the topology of real affine varieties and to find real solutions of polynomial equations
by Bank et al., Safey El Din--Schost, and Mork--Piene
\cite{Bank,Bank2,Bank3,Bank4,Safey,Mork,MorkPiene}, to complexity questions by B\"urgisser--Lotz \cite{BurgL}, to foliations by Soares \cite{Soares} and others, to
focal loci and caustics of reflection by Catanese and Trifogli \cite{C-T,T} and
Josse--P\`ene \cite{JP}, to
Euclidean distance degree by Draisma et al. \cite{ED}.
In this note I will explore the relation of polar and reciprocal polar varieties of possibly singular varieties to the Euclidean normal bundle, the Euclidean distance degree, and the focal loci. For simplicity, we work over an algebraically closed field of characteristic $0$.
\section{Classical polar varieties}
Let $\mathbb G(m,n)$ denote the Grassmann variety of $m$-dimensional linear subspaces of $\mathbb P^n$. Let $L_k\subset \mathbb P^n$ be a linear subspace of dimension $n-m+k-2$.
Consider the special Schubert variety
\[\Sigma(L_k):=\{W\in \mathbb G(m,n) \,|\, \dim W\cap L_{k}\ge k-1\}.\] It has a natural structure as a determinantal scheme and is of codimension $k$ in $\mathbb G(m,n)$.
As is well known, if $L_k'$ is another linear subspace of dimension $n-m+k-2$, then $\Sigma(L_k')$ and $\Sigma(L_k)$ are projectively equivalent (in particular, their rational equivalence classes are equal). So we will often just write $\Sigma_k$ for such a variety.
\begin{example} The case $m=1$, $n=3$. For $\mathbb G(1,3)=\{\text{lines in } \mathbb P^3\}$, the special Schubert varieties are \begin{align*} \Sigma_1&=\{\text{lines meeting a given line}\} \\ \Sigma_2&=\{\text{lines in a given plane}\}. \end{align*} \end{example} \begin{example} The case $m=2$, $n=5$. For $\mathbb G(2,5)=\{\text{planes in } \mathbb P^5\}$, the special Schubert varieties are \begin{align*} \Sigma_1&=\{\text{planes meeting a given plane}\} \\ \Sigma_2&=\{\text{planes intersecting a given 3-space in a line}\}\\ \Sigma_3&=\{\text{planes contained in a given hyperplane}\} \end{align*} \end{example}
More general Schubert varieties are defined similarly, by giving conditions with respect to flags of linear subspaces (see e.g. \cite{KL}). For example, in Example 1, we could consider \[\Sigma_{0,2}=\{\text{lines in a given plane through a given point in the plane}\}.\]
Let $X\subset \mathbb P^n$ be a (possibly singular) variety of dimension $m$. The \emph{Gauss map} $\gamma\colon X\dashrightarrow \mathbb G(m,n)$ is the rational map that sends a nonsingular point $P\in X_{\rm sm}$ to the (projective) tangent space $T_PX$, considered as a point in the Grassmann variety. More precisely, if $V=H^0(\mathbb P^n,\mathcal O_{\mathbb P^n}(1))$, then $\gamma$ is given on $X_{\rm sm}:=X\setminus \operatorname{Sing} X$ by the restriction of the quotient
\[V_X\to \mathcal P_X^1(\mathcal L),\] where $\mathcal P_X^1(\mathcal L)$ denotes the bundle of 1st principal parts of the line bundle $\mathcal L:=\mathcal O_{\mathbb P^n}(1)|_X$. Note that restricted to $X_{\rm sm}$ the sheaf of principal parts is locally free with rank $m+1$, and that the fibers (over $X_{\rm sm}$) of $\mathbb P(\mathcal P_X^1(\mathcal L)) \subset X\times \mathbb P^n$ over $X$ (with respect to projection on the first factor) define the (projective) tangent spaces of $X$.
The \emph{polar varieties} of $X\subset \mathbb P^n$ are the closures of the inverse images
\[M_k:=\overline{\gamma|_{X_{\rm sm}}^{-1}\Sigma_k}\] of the special Schubert varieties via the Gauss map \cite[p.~252]{Piene1}.
In the situation of Example 1,
let $X\subset \mathbb P^3$ be a curve. Then $M_1$ is the set of nonsingular points $P\in X$ such that the tangent line $T_PX$ meets a given line, i.e., it is the set of ramification points of the linear projection map $X\to \mathbb P^1$.
In Example 2, let $X\subset \mathbb P^5$ be a surface. Then $M_1$ is the ramification locus of the projection map $X\to \mathbb P^2$ with center a plane, and $M_2$ consists of points $P\in X_{\rm sm}$ such that the tangent plane $T_PX$ intersects a given $3$-space in a line. One could also consider more general polar varieties, corresponding to general Schubert varieties, like the set of points $P$ such that $T_PX$ meets a given line; this is the ramification locus of the linear projection of $X$ to $\mathbb P^3$ with the line as center. Note that the cardinality of this 0-dimensional variety is equal to the degree of the tangent developable of $X$.
\section{Polar classes and Chern classes} It was noted classically that the polar varieties are invariant under linear projections and sections. Therefore the \emph{polar classes}, i.e., their rational equivalence classes, are \emph{projective invariants} of the variety. Already Noether, Segre and Zeuthen observed that certain integral combinations of the polar classes of surfaces are \emph{intrinsic} invariants, i.e., they depend only on the surface, not on the given projective embedding. This was pursued by Todd and Severi, who used the polar classes to define what are now called the Chern classes. The formula is the following: \[c_k(X)=\sum_{i=0}^k (-1)^{k-i}\binom{m+1-k+i}{i}[M_i]h^i,\] where $h$ denotes the class of a hyperplane section. Since this expression makes sense also for singular varieties, it gives a definition of Chern classes for singular projective varieties, called the \emph{Chern--Mather} classes (see e.g. \cite{Piene3}). The formula can be inverted to express the polar classes in terms of Chern classes. When the variety $X$ is nonsingular, this is just the expression coming from the fact that in this case, \[[M_k]=c_k(\mathcal P_X^1(\mathcal L)).\] Using the canonical exact sequence \[0\to \Omega_X^1\otimes \mathcal L \to \mathcal P_X^1(\mathcal L) \to \mathcal L \to 0,\] we compute, with $c_i(X)=c_i({\Omega_X^1}^\vee)=(-1)^ic_i(\Omega_X^1)$, \[[M_k]=\sum_{i=0}^k (-1)^{k-i}\binom{m+1-k+i}ic_{k-i}(X)h^i.\] When $X$ is singular, we replace $X$ by its Nash transform $\overline X$, i.e., $\nu\colon\overline X\to X$ is the smallest proper modification of $X$ such that $\gamma$ extends to a morphism $\overline \gamma\colon \overline X \to \mathbb G(m,n)$. 
If we denote by $V_{\overline X} \to\mathcal P$ the locally free quotient corresponding to $\overline \gamma$, then it follows from the definition that we have \[ [M_k]=\overline \gamma_*c_k(\mathcal P).\] In many cases, this allows us to find formulas for the degrees of the polar classes in terms of the singularities of $X$, see \cite{Piene1,Piene2,Piene3}.
\section{Reciprocal polar varieties}
In \cite{Bank3,Bank4} Bank et al. introduced what they called \emph{dual} polar varieties. These varieties were further studied in \cite{Mork,MorkPiene} under the name of \emph{reciprocal} polar varieties. They are defined with respect to a non-degenerate quadric in the ambient projective space, and sometimes with respect to the choice of a hyperplane at infinity.
Let $Q\subset \mathbb P^n$ be a non-degenerate quadric. Then $Q$ induces a \emph{polarity}, classically called a \emph{reciprocation}, between points and hyperplanes in $\mathbb P^n$. The \emph{polar} hyperplane $P^\perp$ of $P$ is the linear span of the points on $Q$ such that the tangent hyperplane to $Q$ at that point contains $P$: \[P=(b_0:\cdots :b_n)\mapsto P^\perp \colon \sum_{i=0}^n b_i\frac{\partial q}{\partial X_i}=0.\] The map $P \mapsto P^\perp$ is the isomorphism $\mathbb P^n\to ( \mathbb P^n)^\vee$ given by the symmetric bilinear form associated with the quadratic polynomial $q$ defining $Q$. The point $P^\perp$ in the dual projective space represents a hyperplane in $\mathbb P^n$. If $L\subset \mathbb P^n$ is a linear space of dimension $r$, then $L^\perp \subset (\mathbb P^n)^\vee$ gives an $(n-r-1)$-dimensional linear subspace $(L^\perp)^\vee \subset \mathbb P^n$, which we will, by abuse of notation, also denote by $L^\perp$.
Note that if $H$ is a hyperplane, its \emph{pole} $H^\perp$ is the intersection of the tangent hyperplanes to $Q$ at the points of intersection with $H$. The map $H\mapsto H^\perp$ is the inverse of the above map $P\mapsto P^\perp$.
If the quadric is $q=\sum_{i=0}^n X_i^2$, then the polar of \[P=(b_0:\cdots :b_n)\] is the hyperplane \[P^\perp: b_0X_0+\ldots +b_nX_n=0.\]
Let $X\subset \mathbb P^n$ be a (possibly singular) variety of dimension $m$. Let $K_i\subset \mathbb P^n$ be a linear subspace of dimension $i+n-m-1$. Recall \cite[Def. 2.1, p.~104]{MorkPiene}, \cite[p.~527]{Bank3} the definition of the $i$-th reciprocal polar variety of $X$ (with respect to $K_i$):
$W_{K_i}^{\perp}(X)$, $1\leq i \leq m$, is the Zariski closure of the set \begin{displaymath}
\{ P\in X_{\rm sm}\setminus K_i^{\perp}\,|\, T_P X \not\pitchfork \langle P,K_i^{\perp}\rangle ^{\perp}\}, \end{displaymath} where $T_PX$ denotes the tangent space to $X$ at the point $P$ and
the notation $M\not\pitchfork N$ means that the two linear spaces $M$, $N$ are not transversal, i.e., that their linear span $\langle M,N\rangle$ is not equal to the whole ambient space $\mathbb P^n$. For general $K_i$, the $i$th reciprocal polar variety has codimension $i$.
The condition $T_P X \not\pitchfork \langle P,K_i^{\perp}\rangle ^{\perp}$ is equivalent to the condition $\dim (T_PX\cap \langle P,K_i^{\perp}\rangle ^{\perp})\ge i-1$, and $T_PX\cap \langle P,K_i^{\perp}\rangle ^{\perp}$ is equal to $T_PX\cap P^\perp\cap K_i$, or to $\langle T_PX^\perp,P\rangle ^\perp\cap K_i$, or to $\langle\langle T_PX^\perp,P\rangle,K_i^\perp \rangle ^\perp$. That the dimension of the latter space is greater than or equal to $i-1$ is equivalent to $\dim \langle \langle T_PX^\perp,P\rangle,K_i^\perp \rangle \le n-i$, or to $\dim (\langle T_PX^\perp,P\rangle \cap K_i^\perp)\ge 0$. Thus we obtain the following description of the $i$th reciprocal polar variety:
\[W_{K_i}^{\perp}(X)=\overline{\{P\in X_{\rm sm}\setminus K_i^{\perp}\,|\, \dim (\langle T_PX^\perp,P\rangle \cap K_i^\perp)\ge 0 \}}.\] In the case $i=m$ we have that $K_m$ is a hyperplane, so $K_m^\perp$ is a point. Assuming this is not a point on $X$, we see that the $m$th reciprocal polar variety of $X$ with respect to $K_m$ is the (finite) set of nonsingular points $P\in X$
\[W_{K_m}^\perp(X)=\{P\in X_{\rm sm} \, | \, K_m^\perp \in \langle T_PX^\perp,P\rangle \}.\]
Let $H_\infty =Z(x_0) \subset \mathbb P^n$ denote ``the hyperplane at infinity'', set $\mathbb A^n =\mathbb P^n \setminus H_\infty$, and consider the affine part $Y:=X\cap \mathbb A^n$ of $X$. Let $L=K_i\subseteq H_\infty$ be a linear space of dimension $i+n-m-1$. Define the \emph{affine reciprocal polar variety} to be the affine part of the reciprocal polar variety: \[W^\perp_L(Y):=W^\perp_L(X)\cap \mathbb A^n.\]
The linear variety $\langle P,L^\perp\rangle^\perp$ is contained in the hyperplane $H_\infty$, so we can consider the affine cone $I_{P,L^\perp}$ of $\langle P,L^\perp\rangle^\perp$ as a linear variety in the affine space $\mathbb A^n$. Then the affine reciprocal polar variety can be written as
\[W^\perp_L(Y)=\overline{\{P\in Y_{\rm sm}\setminus L^\perp \,|\, t_PY\not\pitchfork I_{P,L^\perp}\}},\] where $t_PY$ denotes the affine tangent space at $P$, translated to the origin.
Assume $X=Z(F_1,\ldots,F_r)$ (for some $r\ge n-m$). Consider the case $L=K_m=H_\infty$. Then we see that $W^\perp_{H_\infty}(Y)$ is the closure of the set of smooth points of $Y$ where the $(n-m+1)$-minors of the matrix \[ \left(\begin{array}{ccc} \frac{\partial q}{\partial x_1} &\cdots & \frac{\partial q}{\partial x_n}\\ \frac{\partial F_1}{\partial x_1} & \cdots & \frac{\partial F_1}{\partial x_n}\\ \vdots & \vdots &\vdots\\ \frac{\partial F_r}{\partial x_1} & \cdots & \frac{\partial F_r}{\partial x_n} \end{array}\right) \] vanish. This generalizes \cite[Prop. 3.2.5]{MorkPiene}.
\begin{example} Assume $q$ is given by $q(x_0,x_1,\ldots,x_n)=x_0^2+\sum_{i=1}^n(x_i-a_ix_0)^2$ for some $a_1,\ldots,a_n$ such that $\sum_{i=1}^n a_i\neq 1$. Then $q$ restricts to (essentially) the square of the Euclidean distance function on $\mathbb A^n=\mathbb P^n \setminus H_\infty$, namely to $\sum_{i=1}^n(x_i-a_i)^2+1$. The affine reciprocal polar variety is given (on smooth points) by the vanishing of the $(n-m+1)$-minors of the matrix \[ \left(\begin{array}{ccc} x_1-a_1&\cdots & x_n-a_n\\ \frac{\partial F_1}{\partial x_1} & \cdots & \frac{\partial F_1}{\partial x_n}\\ \vdots & \vdots &\vdots\\ \frac{\partial F_r}{\partial x_1} & \cdots & \frac{\partial F_r}{\partial x_n} \end{array}\right) \] just as in \cite[(2.1)]{ED}. \end{example}
\section{Euclidean normal bundles}
Consider a variety $X\subset \mathbb P^n$
and let $Q\subset \mathbb P^n$ be a non-degenerate quadric. As we saw in the previous section, the quadric induces a polarity on $\mathbb P^n$, which can be viewed as an orthogonality, like what one has in a Euclidean space. In \cite{C-T} this was used to define \emph{Euclidean normal spaces} at each point $P\in X_{\rm sm}$. Actually they considered a non-degenerate quadric in a hyperplane at $\infty$, essentially as we saw in the case of affine reciprocal polar varieties. Here we shall consider the orthogonality on all of $\mathbb P^n$, and we define the normal space at a smooth point $P$ as follows:
\[N_PX:=\langle {T_P X}^\perp,P \rangle.\]
We shall now see that, by passing to the Nash modification of $X$, the Euclidean normal spaces are the fibers of a projective bundle.
The Nash modification $\nu\colon\overline X\to X$ is the ``smallest'' proper, birational map such that the pullback of the cotangent sheaf $\Omega^1_X$ of $X$ admits a locally free quotient of rank $m$. This is equivalent to $\nu^*\mathcal P^1_X(1)$ admitting a locally free quotient of rank $m+1$. Denote this quotient by $\mathcal P$, and let $\mathcal K$ denote the kernel of the surjection $\mathcal O_{\overline X}^{n+1} \to \mathcal P$. Thus $\mathcal K$ is a modification of the conormal sheaf $\mathcal N_{X/\mathbb P^n}$ of $X$ in $\mathbb P^n$ twisted by $\mathcal O_X(1)$.
The quadric $Q$ gives an isomorphism $\mathcal O_X^{n+1} \cong (\mathcal O_X^{n+1})^\vee$, hence we get a quotient $\mathcal O_{\overline X}^{n+1} \to \mathcal K^\vee $, whose (projective) fibers are the spaces $ T_P X^\perp$. Adding
the point map $\mathcal O_X^{n+1} \to \mathcal O_X(1)$, we get a surjection
\[\mathcal O_{\overline X}^{n+1} \to \mathcal E:=\mathcal K^\vee \oplus \mathcal O_{\overline X}(1),\]
whose (projective) fibers are the Euclidean normal spaces $N_PX$. Indeed, $\mathbb P(\mathcal E)\subset {\overline X}\times \mathbb P^n$, and the fibers of the structure map $\mathbb P(\mathcal E)\to {\overline X}$ above smooth points of $X$ are the spaces $N_PX\subset \mathbb P^n$ defined above. We call $\mathcal E$ the \emph{Euclidean normal bundle} of $X$ with respect to $Q$ (cf. \cite{C-T} and \cite{ED}).
Let $p\colon \mathbb P(\mathcal E) \to {\overline X}$ denote the structure map, and let $q\colon \mathbb P(\mathcal E) \to \mathbb P^n$ denote the projection on the second factor. The map $q$ is called the \emph{endpoint map} (for an explanation of the name, see \cite{C-T}).
Let $B\in \mathbb P^n$ be a (general) point. Then
\[ p(q^{-1}(B)) = \{ P\in X \, |\, B\in \langle T_PX^\perp,P \rangle \}.\]
Letting $L:=B^\perp$, so that $B=L^\perp$, we see that
\[ p(q^{-1}(B)) = \{ P\in X \, |\, L^\perp\in \langle T_PX^\perp,P \rangle \} = W_L^\perp(X)\]
is a reciprocal polar variety. In particular,
the degree of $q$ is just the degree of the reciprocal polar variety.
\begin{example}
Assume $X\subset \mathbb P^2$ is a (general) plane curve of degree $d$.
The \emph{reciprocal} polar variety is the intersection of the curve with its reciprocal polar, which has degree $d$, so $q$ has degree $d^2$. \end{example}
In \cite{ED} the degree of the endpoint map $q\colon \mathbb P(\mathcal E) \to \mathbb P^n$ was called the \emph{Euclidean distance degree} of $X$:
\[ {\rm ED}\deg X=p_*c_1(\mathcal O_{\mathbb P(\mathcal E)}(1))^n=s_m(\mathcal E),\]
where $m=\dim X$ and $s_m$ denotes the $m$th Segre class. The reason for the name is the relationship to computing critical points for the distance function in the Euclidean setting. We refer to \cite{ED} for many more details. In the case of curves and surfaces (and in a slightly different setting), the degree of $q$ is called the normal class in \cite{JP2}.
Since $\mathcal E=\mathcal K^\vee \oplus \mathcal O_{\overline X}(1)$, we get \[s(\mathcal E)=s(\mathcal K^\vee)s(\mathcal O_{\overline X}(1))=c(\mathcal P)c(\mathcal O_{\overline X}(-1))^{-1}.\]
Therefore,
\[s_m(\mathcal E)=\sum_{i=0}^m c_{i}(\mathcal P)c_1(\mathcal O_{\overline X}(1))^{m-i}.\]
Since $c_{i}(\mathcal P)c_1(\mathcal O_{\overline X}(1))^{m-i}$ is the degree $\mu_i$ of the $i$th polar variety $[M_i]$ of $X$ \cite[p.~256]{Piene1}, we conclude (cf. \cite[Thm.~5.4]{ED}):
\[ {\rm ED}\deg X=\sum_{i=0}^m \mu_i.\]
The $\mu_i$ are called the \emph{ranks} (or classes) of $X$. Note that $\mu_0$ is the degree of $X$ and $\mu_{n-1}$ is the degree of the dual variety $X^\vee$ (provided the dimension of $X^\vee$ is $n-1$). It is known (see \cite[Prop. 3.6, p.~266]{Piene1}, \cite[3.3]{U}, and \cite[(4), p.~189]{K}) that the $i$th rank of $X$ is equal to the $(n-1-i)$th rank of the dual variety $X^\vee$ of $X$. As observed in \cite[Thm.~5.4]{ED}, it follows that the Euclidean distance degree of $X$ is equal to that of $X^\vee$.
Moreover, whenever we have formulas for the ranks $\mu_i$, we thus get formulas for ${\rm ED}\deg X$.
\begin{example}
If $X\subset \mathbb P^n$ is a smooth hypersurface of degree $\mu_0=d$, then $\mu_i=d(d-1)^i$, hence in this case (cf. \cite[(7.1)]{ED})
\[{\rm ED}\deg X=d\sum_{i=0}^{n-1} (d-1)^i= \frac{d((d-1)^n-1)}{d-2}.
\]
If $X$ has only isolated singularities, then only $\mu_{n-1}$ is affected, and we get (from Teissier's formula \cite[Cor. 1.5, p.~320]{Teissier}, and the Pl\"ucker formula for hypersurfaces with isolated singularities (\cite[II.3, p.~46]{Teissier2} and \cite[Cor. 4.2.1, p.~60]{Laumon})
\[{\rm ED}\deg X= \frac{d((d-1)^n-1)}{d-2}-\sum_{P\in {\rm Sing}(X)}(\mu_P^{(n)}+\mu_P^{(n-1)}),
\]
where $\mu_P^{(n)}$ is the Milnor number and $\mu_P^{(n-1)}$ is the sectional Milnor number of $X$ at $P$.
\end{example}
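As a quick consistency check of the smooth formula: for $n=2$ it recovers the degree $d^2$ found above for a general plane curve via the reciprocal polar variety, since
\[{\rm ED}\deg X=\frac{d((d-1)^2-1)}{d-2}=\frac{d(d^2-2d)}{d-2}=d^2 \quad (d\neq 2),\]
and the geometric series form $d\sum_{i=0}^{1}(d-1)^i=d^2$ gives the same value for all $d$.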
\begin{example}
Assume $X\subset \mathbb P^3$ is a generic projection of a smooth surface of degree $\mu_0=d$, so that $X$ has \emph{ordinary} singularities: a double curve of degree $\epsilon$, $t$ triple points, and $\nu_2$ pinch points. Then (using the formulas for $\mu_1$ and $\mu_2$ given in \cite[p.~18]{Piene2})
\[{\rm ED}\deg X=\mu_0+\mu_1+\mu_2=d^3-d^2+d-(3d -2)\epsilon - 3t-2\nu_2.\]
Further examples can be deduced from results in \cite{Piene1}, \cite{Piene2}, and \cite{Piene3}.
\end{example}
\section{Focal loci}
The \emph{focal locus} (see e.g. \cite{C-T} for an explanation of the name, or \cite{ED}, where it is denoted the \emph{ED discriminant}) is the branch locus $\Sigma_X$ of the endpoint map $q\colon \mathbb P(\mathcal E)\to \mathbb P^n$. More precisely, let $R_X$ denote the ramification locus of $q$; by definition, $R_X$ is the subscheme of $\mathbb P(\mathcal E)$ given on the smooth locus $\mathbb P(\mathcal E)_{\rm sm}$ by the $0$th Fitting ideal $F^0(\Omega^1_{\mathbb P(\mathcal E)/{\mathbb P^n}|_{\mathbb P(\mathcal E)_{\rm sm}}})$. The focal locus $\Sigma_X$ is the closure of the image $q(R_X)$.
Recall that we have, on the Nash modification $\nu\colon \overline X\to X$, the exact sequence
\[0\to \mathcal K\to \mathcal O_{\overline X}^{n+1} \to \mathcal P\to 0,\]
where $\mathcal K$ and $\mathcal P$ are the Nash bundles of the sheaves $\mathcal N_{X/\mathbb P^n}(1)$ and $\mathcal P_X^1(1)$ respectively, and that $\mathcal E=\mathcal K^\vee \oplus \mathcal O_{\overline X}(1)$.
Let $Z\to \overline X$ be a resolution of singularities, and, by abuse of notation, denote by $\mathcal K$, $\mathcal P$, $\mathcal E$ also their pullbacks to $Z$. The class $[R_X]$ of the closure of the ramification locus of $q\colon \mathbb P(\mathcal E) \to \mathbb P^n$ is given by
\[[R_X]=c_1(\Omega^1_{\mathbb P(\mathcal E)})-q^*c_1(\Omega^1_{\mathbb P^n})=c_1(\Omega^1_{\mathbb P(\mathcal E)})+(n+1)c_1(\mathcal O_{\mathbb P(\mathcal E)}(1)).\]
Using the exact sequences
\[0\to p^*\Omega^1_Z\to \Omega^1_{\mathbb P(\mathcal E)}\to \Omega^1_{\mathbb P(\mathcal E)/Z} \to 0\]
and
\[0\to \Omega^1_{\mathbb P(\mathcal E)/Z} \to p^*\mathcal E\otimes \mathcal O_{\mathbb P(\mathcal E)}(-1) \to \mathcal O_{\mathbb P(\mathcal E)} \to 0 \]
we find
\[[R_X] =p^*\bigl(c_1(\Omega_Z^1)+c_1(\mathcal P)+c_1(\mathcal O_Z(1))\bigr)+mc_1(\mathcal O_{\mathbb P(\mathcal E)}(1)).\]
Therefore the degree of $R_X$ with respect to the map $q$ is given by
\[\deg R_X=\bigl(c_1(\Omega_Z^1)+c_1(\mathcal P)+c_1(\mathcal O_Z(1))\bigr)p_* c_1(\mathcal O_{\mathbb P(\mathcal E)}(1))^{n-1}+mp_*c_1(\mathcal O_{\mathbb P(\mathcal E)}(1))^{n}.\] In terms of Segre classes of $\mathcal E$ this gives \[\deg R_X=\bigl(c_1(\Omega_Z^1)+c_1(\mathcal P)+c_1(\mathcal O_Z(1))\bigr)s_{m-1}(\mathcal E)+ms_m(\mathcal E),\] which gives, since $c_1(\Omega^1_Z)=c_1(\mathcal P^1_Z(1))-(m+1)c_1(\mathcal O_Z(1))$, \[\deg R_X=\bigl(c_1(\mathcal P^1_Z(1))+c_1(\mathcal P)-mc_1(\mathcal O_Z(1))\bigr)s_{m-1}(\mathcal E)+ms_m(\mathcal E).\] Now $s_m(\mathcal E)=\sum_{i=0}^m\mu_i$ and $s_{m-1}(\mathcal E)c_1(\mathcal O_Z(1))=\sum_{i=0}^{m-1}\mu_i$, hence \[\deg R_X=\bigl(c_1(\mathcal P^1_Z(1))+c_1(\mathcal P)\bigr)s_{m-1}(\mathcal E)-m\sum_{i=0}^{m-1}\mu_i+m\sum_{i=0}^m\mu_i,\] hence \[\deg R_X=\bigl(c_1(\mathcal P_Z^1(1))+c_1(\mathcal P)\bigr)s_{m-1}(\mathcal E)+m\mu_m.\]
In the special case when $X\subset \mathbb P^n$ is a hypersurface ($m=n-1$), we know by \cite[Cor.~3.4]{Piene1} that \[c_1(\mathcal P_Z^1(1))=c_1(\mathcal P)+c_1(\mathcal R_{Z/X}^{-1}),\] where $\mathcal R_{Z/X}=F^0(\Omega^1_{Z/X})$ is the (invertible) ramification ideal of $Z\to X$. Hence we get \[\deg R_X=\bigl(2c_1(\mathcal P)+c_1(\mathcal R_{Z/X}^{-1})\bigr)s_{n-2}(\mathcal E)+(n-1)\mu_{n-1},\] or \[\deg R_X=\bigl(2c_1(\mathcal P)+c_1(\mathcal R_{Z/X}^{-1})\bigr)\sum_{i=0}^{n-2}c_i(\mathcal P)c_1(\mathcal O_Z(1))^{n-2-i}+(n-1)\mu_{n-1}.\]
\begin{example} Let $X\subset \mathbb P^2$ be a plane curve of degree $\mu_0$ and class $\mu_1=c_1(\mathcal P)$. Then \[ \deg R_X=2c_1(\mathcal P)+\kappa +\mu_1=3\mu_1+\kappa,\] where $\kappa$ is the ``total number of cusps'' of $X$. Note that, again by \cite[Cor.~3.4]{Piene1}, $3\mu_1+\kappa = 3\mu_0+\iota$, where $\iota$ is the ``total number of inflection points'' of $X$. But the degree $\mu_0(X)$ of $X$ is equal to the class $\mu_1(X^\vee)$ of the dual curve $X^\vee$, and $\iota(X)$ of $X$ is $\kappa(X^\vee)$ of $X^\vee$. This shows that the degree of the focal locus, or ED discriminant, of the dual curve is equal to that of $X$: \[ \deg R_{X^\vee}=3\mu_1(X^\vee)+\kappa(X^\vee)=3\mu_0+\iota=\deg R_X.\]
The focal locus of a plane curve is also known as the \emph{evolute} or the \emph{caustic by reflection}. So, provided the maps $R_X\to \Sigma_X$ and $R_{X^\vee}\to \Sigma_{X^\vee}$ are birational, we have shown that the degree of the evolute of $X$ is equal to the degree of the evolute of the dual curve $X^\vee$. For more on evolutes, see \cite{JP}. If $X$ is a ``Pl\"ucker curve'' of degree $d=\mu_0$, with only $\delta$ nodes and $\kappa$ ordinary cusps as singularities, and with $\iota$ ordinary inflection points, then the classical formula, due to Salmon, is
\[\deg R_X = 3d(d-1)-6\delta -8\kappa.\]
Since in this case $\mu_1=d(d-1)-2\delta - 3\kappa$, this checks with our formula. Moreover, since the number of inflection points is $\iota=3d(d-2)-6\delta - 8\kappa$, $\deg R_{X^\vee}=3d+\iota=3d(d-1)-6\delta -8\kappa =\deg R_X$, as it should. \end{example}
If $X$ is smooth, we have $Z={\overline X}=X$, $\mathcal E=\mathcal N_{X/\mathbb P^n}(1)^\vee \oplus \mathcal O_X(1)$, and $s(\mathcal N_{X/\mathbb P^n}(1)^\vee)=c(\mathcal P^1_X(1))$. Hence we can compute the class of $R_X$ in terms of the Chern classes of $X$ and $\mathcal O_X(1)$. We get
\[\deg R_X=2\bigl(c_1(\Omega_X^1)\sum_{i=0}^{m-1}c_i(\mathcal P^1_X(1))c_1(\mathcal O_X(1))^{m-1-i}+(m+1)\sum_{i=0}^{m-1}\mu_i\bigr) +m\mu_m.\] Since $c_i(\mathcal P^1_X(1))=\sum_{j=0}^{i}\binom{m+1-i+j}{j}c_{i-j}(\Omega_X^1)c_1(\mathcal O_X(1))^j$, and since the $\mu_i$'s can be expressed in terms of the Chern numbers $c_{m-j}(\Omega_X^1)c_1(\mathcal O_X(1))^j$, we see that also $\deg R_X$ can be expressed in terms of these Chern numbers and the Chern numbers $c_1(\Omega_X^1)c_{m-1-j}(\Omega_X^1)c_1(\mathcal O_X(1))^j$.
\begin{example} Assume $X\subset \mathbb P^n$ is a smooth curve of degree $d$. Then \[\deg R_X= 2(2g-2)+4\mu_0+\mu_1=2(2g-2)+4d+2d+2g-2=6(d+g-1),\] as in \cite[Ex. 7.11]{ED}. \end{example}
\begin{example} Let $X\subset \mathbb P^n$ be a smooth surface of degree $\mu_0=d$. Then, as in \cite[Section 5]{C-T} we get: \[\deg R_X=2(15d+9c_1(\Omega_X^1)c_1(\mathcal O_X(1))+c_1(\Omega_X^1)^2+c_2(\Omega_X^1)).\] \end{example}
\begin{example}
Let $X\subset \mathbb P^n$ be a general hypersurface ($m=n-1$) of degree $\mu_0$. It is known that in this case $R_X\to \Sigma_X$ is birational \cite[Thm.~2]{T}.
Since $c_1(\Omega_X^1)=(\mu_0-n-1)c_1(\mathcal O_X(1))$ we get
\[\deg \Sigma_X=\deg R_X=(2\mu_0-n-1)s_{n-2}(\mathcal E)c_1(\mathcal O_X(1))+(n-1)s_{n-1}(\mathcal E).\]
Hence
\[ \deg \Sigma_X=(2\mu_0-n-1)\sum_{i=0}^{n-2} \mu_i +(n-1)\sum_{i=0}^{n-1} \mu_i=(n-1)\mu_{n-1}+2(\mu_0-1)\sum_{i=0}^{n-2}\mu_i.\]
For a smooth hypersurface of degree $d$ in $\mathbb P^n$, we have $\mu_i=d(d-1)^i$. Hence
\[ \deg \Sigma_X=(n-1)d(d-1)^{n-1}+2d(d-1)((d-1)^{n-1}-1)(d-2)^{-1},\]
which checks with the formula found in \cite[Thm.~2]{T}.
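As a further check (supplementary to the comparison with \cite[Thm.~2]{T}): for $n=2$, so that $X$ is a smooth plane curve of degree $d$, the formula gives
\[\deg \Sigma_X=d(d-1)+2d(d-1)\bigl((d-1)-1\bigr)(d-2)^{-1}=3d(d-1),\]
the classical degree of the evolute of a smooth plane curve.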
\end{example}
\end{document}
\begin{document}
\title{Commensurated Subgroups, Semistability and Simple Connectivity at Infinity} \maketitle \begin{abstract}
A subgroup $Q$ of a group $G$ is {\it commensurated} if the commensurator of $Q$ in $G$ is the entire group $G$. Our main result is that a finitely generated group $G$ containing an infinite, finitely generated, commensurated subgroup $Q$ of infinite index in $G$ is 1-ended and semistable at $\infty$. If additionally, $Q$ and $G$ are finitely presented and either $Q$ is 1-ended or the pair $(G,Q)$ has 1 filtered end, then $G$ is simply connected at
$\infty$. A normal subgroup of a group is commensurated, so this result is a generalization of M. Mihalik's result in \cite{M1} and of B. Jackson's result in \cite{J}. As a corollary, we give an alternate proof of V. M. Lew's theorem that a finitely generated group $G$ containing an infinite, finitely generated, subnormal subgroup of infinite index is semistable at $\infty$. So, many previously known semistability and simple connectivity at $\infty$ results for group extensions follow from the results in this paper. If $\phi:H\to H$ is a monomorphism of a finitely generated group and $\phi(H)$ has finite index in $H$, then $H$ is commensurated in the corresponding ascending HNN extension, which in turn is semistable at $\infty$. \end{abstract}
\section{Introduction} Given a group $G$ and a subgroup $H$ of $G$, the element $g$ of $G$ is in the {\it commensurator} of $H$ in $G$ (denoted $Comm(H,G)$) if $gHg^{-1}\cap H$ has finite index in both $H$ and $gHg^{-1}$. In the mid-1960's, A. Borel \cite{B} proved a series of results that highlight the critical nature of commensurators in the structure of semisimple Lie groups. These results were extended by G. A. Margulis \cite{Ma} in 1975. If $G$ is the commensurator of $Q$ in $G$, then $Q$ is {\it commensurated} in $G$. In particular, if $H$ is normal in $G$, then $H$ is commensurated in $G$. In \cite {CM} we develop the basic theory of commensurated subgroups and show this theory closely parallels the theory of normal subgroups of a group, but with subtle differences.
A locally-finite, connected CW-complex $X$ is {\it semistable at $\infty$} if any two proper maps $r,s:[0,\infty)\to X$ which converge to the same end are properly homotopic. The early ideas of R. Lee and F. Raymond \cite {LR} and F.E.A. Johnson \cite {FEA} on the `fundamental group of an end' were instrumental in extending the idea of semistability at $\infty$ of a space to the notion of the semistability at $\infty$ for a finitely presented group. The best reference for the fundamentals of the subject of semistability at $\infty$ is R. Geoghegan's book \cite {Ge}. Many classes of finitely generated groups are known to be semistable at $\infty$ (see \cite{M1}, \cite{M2}, \cite{M3}, \cite{M4} and \cite{M5} for instance). It is unknown if all finitely presented groups are semistable at $\infty$. If a finitely presented group $G$ is semistable at $\infty$, then one can define invariants for $G$, such as the fundamental group at an end of $G$ independent of choice of basepoint ray in some associated space. The idea of semistability at $\infty$ is also of interest in the study of cohomology of groups. R. Geoghegan and M. Mihalik have shown (\cite{GM}) that if the group $G$ is finitely presented and semistable at $\infty$, then $H^2(G, \mathbb Z G)$ is free abelian. It should be noted that a basic unsolved problem in the study of group cohomology is whether or not $H^2(G,\mathbb Z G)$ is free abelian for all finitely presented groups $G$.
The study of ends of groups was started by H. Freudenthal \cite{F} and H. Hopf \cite{H}. A finitely generated group $G$ has either $0$, $1$, $2$, or an infinite number of ends. It is elementary to see that finitely presented groups with either $0$ or $2$ ends are semistable at $\infty$. By \cite{M3} and Dunwoody's accessibility theorem \cite{D1}, the semistability question for finitely presented groups reduces to the question of whether or not all $1$-ended finitely presented groups are semistable at $\infty$.
The strongest result to date in this subject is the following combination result. \begin{theorem} \label{comb} {\bf (M. Mihalik, S. Tschantz \cite{comb})} If $G=A\ast_HB$ is an amalgamated product where $A$ and $B$ are finitely presented and semistable at $\infty$, and $H$ is finitely generated, then $G$ is semistable at $\infty$. If $G=A\ast_H$ is an HNN-extension where $A$ is finitely presented and semistable at $\infty$ and $H$ is finitely generated, then $G$ is semistable at $\infty$. \end{theorem}
This result generalizes to the obvious statement about graphs of groups and was used by Mihalik and Tschantz in \cite{oner}, to prove that all one relator groups are semistable at $\infty$. It should be noted this result is non-trivial when $A$ and $B$ are free groups.
All word hyperbolic groups are semistable at $\infty$ (see \cite{Sw}). R. Geoghegan has shown that a 1-ended CAT(0) group $G$ is semistable at $\infty$ if and only if some (equivalently any) visual boundary for $G$ has the shape of a locally connected continuum \cite{G}. It is elementary to construct a semistable at $\infty$, 1-ended CAT(0) group with non-locally connected boundary. E.g., the direct product of the integers with the free group of rank 2 has visual boundary the suspension of a Cantor set - a non-locally connected space, but with the same shape as the Hawaiian earring, which is a locally connected space. In \cite{M4}, a notion of semistability at $\infty$ for a finitely generated group is defined that generalizes the original definition (i.e., a finitely presented group is semistable at $\infty$ with respect to the alternative definition if and only if it is semistable at $\infty$ with respect to the original definition). With this more general definition, the finitely generated analogs to the main results obtained in \cite{M1} and \cite{M2} are quite apparent. In fact, this more general definition is used to show certain finitely presented groups are semistable at $\infty$ (see \cite{M4}). In his Ph.D. dissertation, Vee Ming Lew proved that if $G$ is a finitely generated group containing an infinite, finitely generated subnormal subgroup $H$ of infinite index in $G$, then $G$ is 1-ended and semistable at $\infty$.
Lew's proof of this theorem generalizes arguments used in the proofs in \cite{M1} and \cite{M2}. Our main theorem is used to produce an alternative proof of Lew's theorem.
\begin{theorem}\label{Main} {\bf (Main Theorem)} If a finitely generated group $G$ has an infinite finitely generated commensurated subgroup $Q$, and $Q$ has infinite index in $G$, then $G$ is one-ended and semistable at $\infty$. If additionally, $G$ and $Q$ are finitely presented and either $Q$ has one end or the pair $(G,Q)$ has one filtered end, then $G$ is simply connected at $\infty$. \end{theorem} As an example, the cyclic subgroup $\langle x\rangle$ of the Baumslag-Solitar group $B(m,n)\equiv \langle x,t : t^{-1}x^mt=x^n\rangle$ (for non-zero integers $m,n$), is commensurated in $B(m,n)$.
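Indeed (a routine verification, included here for convenience): the relation $t^{-1}x^mt=x^n$ gives $t^{-1}\langle x^m\rangle t=\langle x^n\rangle$, so
\[t^{-1}\langle x\rangle t\cap \langle x\rangle\supseteq \langle x^n\rangle,\]
and $\langle x^n\rangle$ has index $|n|$ in $\langle x\rangle$ and index $|m|$ in $t^{-1}\langle x\rangle t$. Hence $t\in Comm(\langle x\rangle, B(m,n))$; since $x\in Comm(\langle x\rangle, B(m,n))$ and the commensurator is a subgroup of $B(m,n)$, it follows that $Comm(\langle x\rangle, B(m,n))=B(m,n)$.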
A connected CW-complex $X$ is {\it simply connected at $\infty$} if for each compact set $C$ in $X$ there is a compact set $D$ in $X$ such that loops in $X-D$ are homotopically trivial in $X-C$. Simple connectivity at $\infty$ implies semistability at $\infty$. As with semistability at $\infty$, the idea of simple connectivity at $\infty$ can be extended from spaces to finitely presented groups and if $G$ is finitely presented and simply connected at $\infty$ then $H^2(G,\mathbb Z G)$ is trivial. In his thesis \cite {Si}, L. Siebenmann developed the idea of simple connectivity at $\infty$ to give an obstruction to finding a boundary for an open manifold. In \cite{LR}, R. Lee and F. Raymond used the idea of the simple connectivity at $\infty$ of a group in order to analyze manifolds covered by Euclidean space. In \cite{J}, B. Jackson proves:
\begin{theorem} \label{Ja} {\bf (B. Jackson)} Suppose $1\to H\to G\to K\to 1$ is a short exact sequence of infinite finitely presented groups and either $H$ or $K$ is 1-ended, then $G$ is simply connected at $\infty$. \end{theorem}
In \cite{Da}, M. Davis constructs examples of aspherical closed $n$-manifolds for $n\geq 4$, that are not covered by $\mathbb R^n$. In fact, Davis argues that the fundamental groups of his manifolds are semistable at $\infty$, but not simply connected at $\infty$ (and hence not covered by $\mathbb R^n$). All of Davis' groups are subgroups of finite index in finitely generated Coxeter groups. In \cite{M5}, Mihalik shows all Artin and Coxeter groups are semistable at $\infty$.
\section{Commensurable Preliminaries}
If $S$ is a finite generating set for a group $G$, $\Gamma(G,S)$ the Cayley graph of $G$ with respect to $S$, and $H$ a subgroup of $G$, then for any $g_1, g_2 \in G$, the {\it Hausdorff} distance between $g_1H$ and $g_2H$, denoted $D_S(g_1H,g_2H)$, is the smallest integer $K$ such that for each element $h$ of $H$ the edge path distance from $g_1h$ to $g_2H$ in $\Gamma$ is $\leq K$ and the edge path distance from $g_2h$ to $g_1H$ in $\Gamma$ is $\leq K$. If no such $K$ exists, then $D_S(g_1H,g_2H)=\infty$. In \cite{CM} we prove the following geometric characterization of commensurated subgroups of finitely generated groups. This characterization is the working definition of commensurated subgroup in this paper.
\begin{proposition} \label{C8}
Suppose $S$ is a finite generating set for a group $G$ and $H$ is a subgroup of $G$, then $g\in G$ is in $Comm(H,G)$ iff the Hausdorff distance $D_S(H,gH)<\infty$ iff $D_S(H,gHg^{-1})<\infty$.
In particular, a subgroup $Q$ of a finitely generated group $G$ is commensurated in $G$ iff the Hausdorff distance $D_S(Q,gQ)$ is finite for all $g\in G$ iff $D_S(Q,gQg^{-1})$ is finite for all $g\in G$. \end{proposition}
Suppose $G$ is a group with finite generating set $S$ and $H$ is a subgroup of $G$. Let $\Lambda (S,H,G)$ be the graph with vertices the left cosets $gH$ of $G$ and a directed edge (labeled $s$) from $gH$ to $fH$ if for some $s\in S$ and $h_1, h_2\in H$, we have $gh_1sh_2=f$. (Equivalently, in the Cayley graph $\Gamma(S,G)$, there is an edge labeled $s$ with initial point in $gH$ and end point in $fH$.) Basically, $\Lambda$ is a (left) {\it Schreier} coset graph. Note that $\Lambda$ may have several edges labeled $s$ at a vertex.
The following result appears in \cite{CM}. \begin{proposition}\label{locfin2}
Suppose $G$ is a group with finite generating set $S$ and $Q$ is commensurated in $G$. Then the graph $\Lambda(S,Q,G)$ is locally finite and $G$ acts (on the left) transitively on the vertices of $\Lambda$ and by isometries (using the edge path metric) on $\Lambda$. For $\Gamma(S,G)$ the Cayley graph of $G$, the projection map $p:\Gamma(S,G)\to \Lambda(S,Q,G)$ respects the action of $G$ and induces a bijection from the filtered ends of $\Gamma(S,G)$ to the ends of $\Lambda(S,Q,G)$. The graph $\Lambda(S,Q,G)$ has 0,1,2 or infinitely many ends. \end{proposition}
\section{Semistability Preliminaries} Much of the groundwork for studying the notion of semistability for a finitely presented group has appeared in \cite{J}, \cite{LR}, and \cite{M1} and is well organized in \cite {G}. We will recall some of the ideas presented in these papers to set the notation for future use.
A continuous function $f:X\to Y$ is {\it proper} if for each compact subset $C$ of $Y$, $f^{-1}(C)$ is compact in $X$. A proper map $r:[0,\infty)\to X$ is called a {\it ray} in $X$. If $K$ is a locally finite, connected CW-complex, then one can define an equivalence relation $\sim$ on the set $A$ of all rays in $ K$ by setting $r \sim s$ if and only if for each compact set $C \subset K$, there exists an integer $N(C)$ such that $r([N(C),\infty))$ and $s([N(C), \infty))$ are contained in the same unbounded path component of $K -C$ (a path component of $K-C$ is {\it unbounded} if it is not contained in any compact subset of $K$). An equivalence class of $A/\sim$ is called {\it an end of} $K$, the set of equivalence classes of $A/\sim$ is called {\it the set of ends of} $K$ and two rays in $K$, in the same equivalence class, are said to {\it converge to the same end}. The cardinality of $A/\sim$, denoted by $e(K)$, is the {\it number of ends of} $K$.
If $G$ is a finitely generated group with generating set $S$, then denote the {\it Cayley graph of $G$ with respect to $S$}, by $\Gamma(G,S)$. We define the {\it number of ends of $G$}, denoted by $e(G)$, to be the number of ends of the Cayley graph of $G$ with respect to a finite generating set (i.e., $e(G) = e(\Gamma(G,S))$). This definition is independent of the choice of finite generating set for $G$. If $G$ is finitely generated, then $e(G)$ is either 0, 1, 2, or is infinite (in which case it has the cardinality of the real numbers). We let $\ast$ denote the basepoint of $\Gamma(G,S)$, which corresponds to the identity of $G$.
If $f, g $ are rays in $K$, then one says that $f$ and $g$ are {\it properly homotopic} if there is a proper map $H : [0,1] \times [0,\infty) \to K$ such that $H\vert_{\{0\}\times[0,\infty)} = f$ and $H\vert_{\{1\}\times[0,\infty)} = g$. If $f(0)=g(0)=v$, one says $f$ and $g$ are {\it properly homotopic relative to $v$} (or $rel\{v\}$) if additionally, $H\vert _{[0,1]\times \{0\}}=v$.
\begin{definition} A locally finite, connected CW-complex $K$ is {\it semistable at $\infty$} if any two rays in $K$ converging to the same end are properly homotopic. \end{definition}
In \cite{M1} (Theorem 2.1) and \cite{M2} (Lemma 9) M. Mihalik proves several notions are equivalent to semistability. In \cite{M1} the space considered is simply connected, but simple connectivity is not important in that argument. Mihalik's proofs give the following result.
\begin{theorem}\label{ssequiv} Suppose $K$ is a locally finite, connected and 1-ended CW-complex. Then the following are equivalent: \begin{enumerate} \item $K$ is semistable at $\infty$. \item For any ray $r:[0,\infty )\to K$ and compact set $C$, there is a compact set $D$ such that for any third compact set $E$, and loop $\alpha$ based on $r$ and with image in $K-D$, $\alpha$ is homotopic $rel\{r\}$ to a loop in $K-E$, by a homotopy in $K-C$. \item For any compact set $C$ there is a compact set $D$ such that if $r$ and $s$ are rays based at $v$ and with image in $K-D$, then $r$ and $s$ are properly homotopic $rel\{v\}$, by a proper homotopy in $K-C$. \end{enumerate} If $K$ is simply connected then a fourth equivalent condition can be added to this list:
4. If $r$ and $s$ are rays based at $v$, then $r$ and $s$ are properly homotopic $rel\{v\}$. \end{theorem}
\begin{example} Note that the 1-ended CW-complex obtained by attaching a loop at $0$ to the interval $[0,\infty)$ is semistable at $\infty$. Consider a ray $r$ which maps $[0,\infty)$ homeomorphically to $[0,\infty) $ and a ray $s$ which maps $[0,1]$ once around the loop and then maps $[1,\infty)$ homeomorphically to $[0,\infty)$. Clearly $r$ and $s$ are properly homotopic, but not by a proper homotopy $rel\{0\}$. \end{example}
The following fact is proved by B. Jackson in \cite{J}: \begin{theorem} Suppose $X$ and $Y$ are locally finite, connected CW-complexes with $\pi_1(X)=\pi_1(Y)$. Then the universal cover of $X$ is semistable at $\infty$ iff the universal cover of $Y$ is semistable at $\infty$. \end{theorem}
\begin{definition} If $G$ is a 1-ended, finitely presented group, and $X$ is some (equivalently any) finite two dimensional CW-complex with fundamental group $G$, then we say $G$ {\it is semistable at $\infty$ if the universal cover of $X$ is semistable at $\infty$}. \end{definition}
We now define the notion of semistability for a finitely generated group as in \cite{M4}. We give the definition for 1-ended groups since this is what we are interested in. Suppose $G$ is a 1-ended finitely generated group with generating set $S\equiv \{g_1, g_2,\ldots , g_n\}$ and let $\Gamma (G,S)$ be the Cayley graph of $G$ with respect to this generating set. Suppose $\{\alpha_1, \alpha_2,\ldots , \alpha_m\}$ is a finite set of relations in $G$ written in the letters $\{g_1^\pm, g_2^\pm,\ldots , g_n^\pm\}$. For any vertex $v\in \Gamma(G,S)$, there is an edge path cycle labeled $\alpha_i$ at $v$. The two dimensional CW-complex $\Gamma_{(G,S)}(\alpha_1,\ldots , \alpha_m)$ is obtained by attaching to each vertex of $\Gamma(G,S)$, $2$-cells corresponding to the relations $\alpha_1,\ldots ,\alpha_m$.
In \cite{M4} it is shown that if $S$ and $T$ are finite generating sets for the group $G$, and there are finitely many $S$-relations $P$ such that $\Gamma_{(G,S)}(P)$ is semistable at $\infty$ then there are finitely many $T$-relations $Q$ such that $\Gamma_{(G,T)}(Q)$ is semistable at $\infty$. Hence the following definition:
\begin{definition} We say {\it $G$ is semistable at $\infty$} if for some finite generating set $S$ for $G$ and finite set of $S$-relations $P$, the complex $\Gamma_{(G,S)}(P)$ is semistable at $\infty$. \end{definition}
Note that if $G$ has finite presentation $\langle S:P\rangle$, then $G$ is semistable at $\infty$ with respect to definition 2 iff $G$ is semistable at $\infty$ with respect to definition 3 iff $\Gamma_{(G,S)}(P)$ is semistable at $\infty$.
Lemma 2 of \cite{M4} is as follows:
\begin{lemma}\label{rel} Suppose the finitely generated group $G$ is 1-ended and semistable at $\infty$. If $S$ is a finite generating set for $G$ and $P$ is a finite set of $S$-relations in $G$ such that $\Gamma _{(G,S)}(P)$ is semistable at $\infty$, then there is a finite set $Q$ of $S$-relations such that: if $r$ and $s$ are rays in $\Gamma_{(G,S)}(P\cup Q)$, with $r(0)=s(0)$ then $r$ is properly homotopic to $s$ $rel\{r(0)\}$. \end{lemma}
\begin{remark} Using the third equivalent notion of semistability in theorem \ref{ssequiv}, it can be shown that the set of relations $Q$ in the previous lemma is in fact unnecessary in order to draw the same conclusion. I.e., if $\Gamma_{(G,S)}(P)$ is semistable at $\infty$, and $r$ and $s$ are rays in $\Gamma_{(G,S)}(P)$ with $r(0)=s(0)$, then $r$ is properly homotopic to $s$ $rel\{r(0)\}$. \end{remark} By an {\it edge path ray} in $K$, we mean a proper map $r : [0, \infty) \to K$ such that for each positive integer $n, r\vert _{[n-1,n]}$ is a homeomorphism to an edge of $K$.
If $G$ is finitely generated with finite generating set $S$, then any edge path ray, $r : ([0,\infty),\{0\}) \to (\Gamma(G,S),*)$, can be represented as $(e_1, e_2,\ldots )$ at $*$ with $e_i \in S^\pm$, and $e_i$ the label of the $i^{th}$ edge of $r$. Any edge path $(e_1, e_2,\ldots ,e_k)$ of $\Gamma(G,S)$ corresponds to some group element $e_1'e_2'\ldots e_k'$ where $e_i' \in S^\pm$. But determining an edge path in $\Gamma(G,S)$ from some word $e_1'e_2'\ldots e_k'$ requires a specified basepoint, since the path $(e_1', e_2',\ldots ,e_k')$ at a vertex $v$ determines a different edge path than $(e_1',e_2',\ldots ,e_k')$ based at another vertex $w$. Note that these edge paths differ by a covering transformation taking $v$ to $w$. By the \textsl{Star} of a subcomplex $A$ contained in a locally finite, connected CW-complex $K$, denoted \textsl{St}($A$), we mean the subcomplex of $K$ consisting of the union of all 1-cells of $K$ that intersect $A$ along with any $n$-cell all of whose vertices lie in $St(A)$. Note then that $A \subseteq \textsl{St}(A)$ and if $A$ is a finite subcomplex, then \textsl{St}($A$) is a finite subcomplex by the local finiteness of $K$. We recursively define the \textsl{Nth Star of $A$} for $N = 1,2,3,\ldots $ by $St^N(A) = St(St^{N-1}(A))$ where $St^0(A) = A$. When it is not clear what the over-complex might be we use the notation $St(A,K)$ to denote the {\it Star of $A$ in $K$}.
Since any ray $r : [0, \infty) \to K$ is properly homotopic to an edge path ray, we may concentrate on edge path rays when dealing with the semistability of a complex.
If $e$ is an edge in $K$ and $(e_1, e_2, e_3,\ldots )$ is an edge path in $K$ based at the terminal point of $e$, then one denotes by $e * (e_1, e_2, e_3,\ldots )$ the edge path given by $e$ followed by $(e_1, e_2, e_3,\ldots )$.
\begin{definition} For a group $G$ with finite generating set $S$ and a subset $T$ of $S$, we say an edge path in $\Gamma(G, S)$ is a {\it $T$-path} if each edge of the path is labeled by an element of $T^{\pm}$. If the path is infinite and proper we call it a {\it $T$-ray}. \end{definition}
\section{Proof of Semistability in the Main Theorem}
The 1-ended part of our main theorem is straightforward:
\begin{proposition} Suppose $Q$ is an infinite finitely generated commensurated subgroup of infinite index in a finitely generated group $G$. Then $G$ is 1-ended. \end{proposition}
\begin{proof} Let $S$ be a finite generating set for $G$, containing a generating set for $Q$. Let $\Gamma\equiv \Gamma(G,S)$ and $\Gamma(Q)\equiv \Gamma(Q,S\cap Q)$. Suppose $C$ is a finite subcomplex of $\Gamma$. Only finitely many translates $g_1\Gamma(Q),\ldots , g_n\Gamma(Q)$ intersect $C$ non-trivially. Choose $D$ a finite subcomplex of $\Gamma$ such that $C\subset D$ and for each $i$, $D$ contains the bounded components of $g_i\Gamma(Q)-C$. Choose $g\in G$ such that $g\Gamma(Q)\cap C=\emptyset$. It suffices to show that for any vertex $v$ of $\Gamma-D$ there is an edge path in $\Gamma-C$ connecting $v$ to $g\Gamma(Q)$. Say $v\in h\Gamma(Q)$ and the Hausdorff distance from $hQ$ to $gQ$ in $\Gamma$ is $K$. Note that the vertices of $k\Gamma(Q)$ are $kQ$. By the choice of $D$, there is an edge path $\alpha$ in $h\Gamma (Q)-C$ from $v$ to $w\in h\Gamma(Q)-St^K(C)$. Choose a path $\beta$ of length $\leq K$ from $w$ to $g\Gamma(Q)$. Then $(\alpha, \beta)$ is a path from $v$ to $g\Gamma(Q)$ avoiding $C$. \end{proof}
For the remainder of the proof, $\mathcal{Q} = \{q_1, q_2,\ldots ,q_n\}$ is a finite generating set for $Q$ and $S\equiv \{q_1, q_2,\ldots ,q_n, k_1, k_2,\ldots ,k_t\}$ is a generating set for $G$ where $k_i\not \in Q$. Let $\mathcal K=\{k_1,\ldots ,k_t\}$. Our hypothesis states that for each $g\in G$, the Hausdorff distance between $Q$ and $gQ$ is finite in $\Gamma(G,S)$.
Consider the left (Schreier) coset graph $\Lambda (S,Q,G)$ with vertex set, the set of all cosets $gQ$ in $G$. A directed edge labeled $s$ will have initial vertex $g_1Q$ and terminal vertex $g_2Q$ if there is an edge labeled $s$ in $\Gamma(G,S)$ beginning in $g_1Q$ and ending in $g_2Q$. By proposition \ref{locfin2}, $\Lambda(S,Q,G)$ is locally finite. There is a quotient map $\rho:\Gamma(G,S)\to \Lambda(S,Q,G)$ respecting the left action of $G$ on these graphs, such that each edge labeled by an element of $\mathcal Q$ is mapped to a point.
\begin{lemma}\label{approx} Suppose $S$ is a finite generating set for the group $G$ and $Q$ is a finitely generated commensurated subgroup of $G$ (with generating set a subset of $S$). There is an integer $F$ such that if $gQ$ and $hQ$ are distinct cosets (vertices) of $\Lambda(S,Q,G)$ connected by an edge labeled $s\in S^{\pm1}$, then for each $v\in gQ\subset \Gamma(S,G)$ there is a $Q$-path $\alpha$ at $v$ in $\Gamma(S,G)$ of length $< F$ such that the path $(\alpha, s)$ ends in $hQ$.
In particular: Suppose $\alpha\equiv (e_1,e_2,\ldots)$ is an edge path (possibly infinite) at $v\in \Lambda(S,Q,G)$ (with $i^{th}$ edge labeled $e_i$)
and $v'$ is a vertex of $\Gamma(G,S)$ such that $\rho(v')=v$ (equivalently $v'Q=v$), then there is an edge path $\alpha'\equiv(\alpha_0',e_1,\alpha_1', e_2,\ldots )$ at $v'$ with $\alpha_i'$ a $Q$-edge path of length $< F$ such that the edge path (determined by) $\rho \alpha'$ is $\alpha$. I.e. there is $(Q,F)$-``approximate'' path lifting for $\rho$. \end{lemma}
\begin{proof} Suppose $v\in gQ$ and the edge labeled $s$ at $v$ ends in $hQ$. By translation, we assume $v=1\in G$, $g=1$ and $h=s$. As $Q$ is commensurated in $G$, $sQs^{-1}\cap Q$ has finite index in $Q$. Hence there is an integer $F_s$, such that for any vertex $w\in Q$, there is a $Q$-edge path in $\Gamma(S,G)$ of length $< F_s$ from $w$ to a vertex $w'$ of $Q\cap sQs^{-1}$. As $w'\in sQs^{-1}$, $w's\in sQ$. I.e. the edge labeled $s$ at $w'$ ends in $sQ$. Let $F=\max\{F_s\}_{s\in S^{\pm 1}}$. \end{proof}
\begin{remark} For $\alpha$ and $\alpha'$ as in Lemma \ref{approx}, we call $\alpha'$ a $(Q,F)$-{\it approximate lift of} $\alpha$. Note that lemma \ref{approx} does not imply that if $v$ and $w$ are vertices of the same coset $uQ$ then there are approximate lifts of a path $\alpha$ at $\rho (v)\in \Lambda(S,Q,G)$ to $v$ and $w$ that are $G$ translates of one another in $\Gamma(G,S)$. \end{remark}
The next lemma basically has the same proof as lemma 3 of \cite{M2}.
\begin{lemma}\label{3.7}
For each vertex $v$ of $\Lambda(S,Q,G)$, there is an edge path ray $s_v$ at $v$, such that for any finite subgraph $C$ of $\Lambda(S,Q,G)$ only finitely many $s_v$ intersect $C$. Furthermore, for each vertex $w$ of $\Gamma(G,S)$ (so $w\in \rho(w)\equiv wQ$), let $s_w$ be a $(Q,F)$-approximate lift of $s_{\rho (w)}$ to $w\in \Gamma(G,S)$. Then
i) for any finite subgraph $D$ of $\Gamma(G,S)$ there are only finitely many vertices $w\in \Gamma(G,S)$ such that $s_w$ intersects $D$ non-trivially, and
ii) for any $w\in G$, only finitely many vertices $z$ of $s_w$ are such that $zQ$ intersects $D$ non-trivially. \end{lemma} \begin{proof} If $\mathcal G$ is a locally finite infinite graph then for each vertex $v$ of $\mathcal G$ there is an edge path ray $s_v$ at $v$, such that for any finite subgraph $C$ of $\mathcal G$, only finitely many $v$ are such that $s_v$ intersects $C$. (The idea is this: Choose a base vertex $x$. For any integer $n>0$, $\mathcal G-St^n(x)$ has only finitely many components. For the finitely many vertices $v$ in $St(x)$ or a bounded component of $\mathcal G-St(x)$ choose $s_v$ to be an arbitrary edge path ray at $v$. If $v$ is a vertex of $St^2(x)$ or of a bounded component of $\mathcal G-St^2(x)$, and $s_v$ is not defined, then $v$ belongs to an unbounded component of $\mathcal G-St(x)$. Choose $s_v$ to be an edge path ray at $v$ in $\mathcal G-St(x)$. Continue.) Now pick such edge path rays for the vertices of $\Lambda(S,Q,G)$.
As $\rho(s_w)=s_{\rho(w)}$, $s_w$ intersects $D$ iff $s_{\rho(w)}$ intersects $\rho(D)$.
Hence, we may finish the proof of i), by showing at most finitely many vertices $v$ of a coset $gQ$ are such that $s_v$ intersects $D$. Otherwise, there are infinitely many distinct vertices $v_1,v_2,\ldots $ in $gQ\subset \Gamma(G,S)$ such that each edge path ray $s_{v_i}$ passes through the vertex $d$ of $D$. In $\Lambda(S,Q,G)$ write the edge path ray $s_{gQ}\equiv (e_1,e_2,\ldots )$. By lemma \ref{approx}, we may write $s_{v_i}=(\alpha_{i,1},e_1,\alpha_{i,2},e_2,\ldots )$ in $\Gamma(G,S)$, where $\alpha_{i,j}$ is a $Q$-edge path of length $< F$. Let $n(i)$ be such that some vertex of $\alpha _{i,n(i)}$ is $d$. Since the $v_i$ are distinct and the length of each $\alpha_{i,j}$ is $< F$, the sequence of integers $\{n(1),n(2),\ldots\}$ is unbounded. But then the initial vertex of $e_{n(i)}$ (on $s_{gQ}\equiv (e_1,e_2,\ldots )$) is $\rho(d)$. This is impossible since $s_{gQ}\equiv (e_1,e_2,\ldots )$ is proper, and $i)$ is proved.
Part ii) follows immediately from the fact that $\rho(s_w)=s_{\rho(w)}$, a proper map.
\end{proof}
By lemma \ref{approx}, if two distinct cosets $g_1Q$ and $g_2Q$ of $G$ are connected by an edge in $\Gamma (G,S)$ then they are of Hausdorff distance $\leq F$. Choose $M$ such that if two vertices of $Q$ in $\Gamma(G,S)$ are within $2F+1$ of one another, then their $\mathcal Q$-distance is $\leq M$. Let $P$ be the set of all $S$-relations in $G$ of length $\leq 2F+1+M$. Let $\tilde \Gamma$ be $\Gamma_{(G,S)}(P)$.
The next result is lemma 2 of \cite{M2}. \begin{lemma}\label{3.1} At each vertex $v$ of $\Gamma(G,S)$ there exists a $\mathcal{Q}$-ray $q_v$ such that for any finite subcomplex $C$ in $\Gamma(G,S)$, there are only finitely many vertices $v$ such that $q_v$ meets $C$. $\square$ \end{lemma}
For each $S$-relation $r$ of $G$, consider the $\mathcal K$-word $r_{\mathcal K}$ obtained by eliminating from $r$, the $\mathcal Q$-letters (and their inverses). If $v$ is a vertex of $\Gamma(G,S)$ and $\alpha$ the edge path loop corresponding to $r$ at $v$, then $\rho(\alpha)$ (in $\Lambda(S,Q,G))$ has labeling $r_{\mathcal K}$. Let $\tilde \Lambda(S,Q,G)$ be the 2-complex obtained from $\Lambda(S,Q,G)$ by attaching a 2-cell to each loop $\rho r$ (with label $r_{\mathcal K}$) where $r$ is a loop of $ \Gamma (G,S)$ of length $\leq 2F+M+1$ (only one 2-cell for a given such loop in $\Lambda(S,Q,G)$). Then $\tilde \Lambda(S,Q,G)$ is locally finite and there is a natural map $\tilde \rho:\tilde \Gamma(G,S)\to \tilde \Lambda(S,Q,G)$ extending $\rho$ and respecting the action of $G$.
\begin{lemma}\label{3.6} If $k \in {\mathcal K}^{\pm}$ labels an edge of $\tilde{\Gamma}$ from $v$ to $w$ and $r = (e_1, e_2, e_3,\ldots )$ is a $\mathcal{Q}$-ray at $v$, then $r$ is properly homotopic $rel\{v\}$ to $k*(f_1, f_2,\ldots )$, for $(f_1, f_2,\ldots )$ a $\mathcal{Q}$-ray at $w$, by a homotopy $H$ with image a subset of $St^{2F+M+1}(Im(r),\tilde \Gamma)$, and the image of $\tilde \rho \circ H$ is a subset of the finite complex $St(\tilde\rho(k))$. \end{lemma} \begin{proof} Let $v_i$ be the terminal vertex of $e_i$. Let $v_0=v$, $w_0=w$, $\alpha _0$ be the empty path. For each $i\geq 1$, lemma \ref{approx} implies there is a $\mathcal Q$-edge path $\alpha_i$ of length $< F$ at $v_i$ so that $(\alpha_i, k)$ ends at $w_i\in kQ$. Note that in $\tilde \Gamma$ the distance from $w_i$ to $w_{i+1}$ is $\leq 2F+1$. For $i\geq 1$, let $f_i$ be a $\mathcal Q$-edge path in $\tilde \Gamma$ of length $\leq M$ from $w_{i-1}$ to $w_{i}$. The loop $(\alpha_i, k, f_{i+1}, k^{-1}, \alpha_{i+1}^{-1}, e_{i+1}^{-1})$ has length $\leq 2F+1+M$ and so bounds a 2-cell of $\tilde \Gamma$. Hence $(e_1,e_2,\ldots )$ is properly homotopic to $k\ast (f_1,f_2,\ldots)$ by a homotopy $H$ with image in $St^{2F+1+M}(Im(r),\tilde \Gamma)$. As each $\alpha_i$ and each $f_i$ is a $\mathcal Q$-word, $\tilde\rho \circ H$ has image in $St(\tilde \rho (k))$. \end{proof}
Recall, for each vertex $v\in \tilde\Gamma$, $s_v$ is a $(Q,F)$-approximate lift of $s_{\rho(v)}$ (see lemma \ref{3.7}). \begin{lemma}\label{3.8}
Suppose $D$ is a finite subcomplex of $ \tilde{\Gamma}$. Then there exists a finite complex $E_1(D) \subseteq \tilde{\Gamma}$ such that if $b = (e_1, e_2, e_3,\ldots )$ is a $\mathcal{Q}$-ray at $v$ with image in $\tilde \Gamma-E_1(D)$, then $b$ is properly homotopic $rel\{v\}$ to $s_v$ by a homotopy in $\tilde \Gamma -D$. \end{lemma} \begin{proof} Let $L=2F+M+1$ (the constant of lemma \ref{3.6}). There are only finitely many vertices $w \in \tilde\Lambda$ such that the edge path rays $s_w$ of lemma \ref{3.7} intersect $St(\tilde\rho(D))$, non-trivially. Call these vertices $y_1, y_2,\ldots ,y_l.$ Since each $s_{y_i}$ is proper, there are integers $J_i$ such that each edge of the ray $s_{y_i}$ following the $J_i^{th}$-edge is in $\tilde \Lambda -St(\tilde \rho(D))$. Let $J$ be the maximum $J_i$ for $i = 1,2,\ldots ,l$. By lemma \ref{3.7}, if $w$ is any vertex of $\tilde \Gamma$, and $e$ is the $j^{th}$-edge of $s_w$ for $j>FJ$, then $\tilde \rho(e)= d$ (or a vertex of $d$) for $d$ the $k^{th}$-edge of $s_{\rho(w)}$ for some $k>J$. By the definition of $J$, $d$ does not intersect $St(\tilde \rho(D))$ and so $\tilde \rho (e)$ does not intersect $St(\tilde \rho(D))$. In particular,
$(\ast)$ If $w$ is any vertex of $\tilde \Gamma$, and $e$ is the $j^{th}$-edge of $s_w$ for $j>FJ$, then $\tilde \rho(e)\subset \tilde \Lambda-St(\tilde \rho D)$.
Let $E_1(D)$ be a compact subcomplex of $\tilde{\Gamma}$ such that $St^{FJL}(D) \subseteq E_1(D)$ and such that $E_1(D)$ contains the finite set of vertices $v$ in $\tilde{\Gamma}$ such that $s_v$ intersects $St^{FJL}(D)$. Assume $b$ and $v$ satisfy the hypothesis of the lemma. The edge path ray $s_v$ (in $\tilde \Gamma-St^{FJL}(D)$) has the form $(\alpha_0,c_1, \alpha_1, c_2,\ldots )$ where $\alpha_i$ is a $\mathcal Q$-path of length $< F$ and $c_i$ is a $\mathcal K$-edge. Here $s_v$ is a $(Q,F)$-approximate lift of $s_{\rho (v)}=(c'_1,c'_2,\ldots)$ (where $c'_i$ has the same label as $c_i$).
Let $v_i,w_i$ be the initial and terminal vertices of $c_i$, respectively. Let $b_0$ be the $\mathcal Q$-edge path ray $(\alpha_0^{-1}, b)$. By Lemma \ref{3.6}, $b_0$ is properly homotopic $rel\{v_1\}$ to $c_1 * b_1$, where $b_1$ is a $\mathcal{Q}$-ray at $w_1$, by a proper homotopy $H_1$ with image in $St^{L}(Im(b_0))$. In particular, $b_1$ has image in $\tilde \Gamma-St^{(FJ-1)L}(D)$. Again by Lemma \ref{3.6}, $(\alpha_1^{-1},b_1)$ is properly homotopic $rel\{v_2\}$ to $c_2*b_2$, where $b_2$ is a $\mathcal{Q}$-edge path ray, by a proper homotopy $H_2$ with image in $St^{L}(Im(b_1))\subset \tilde \Gamma-St^{(FJ-2)L}(D)$. Iterating the above process, the $\mathcal{Q}$-ray $(\alpha_j^{-1},b_j)$ is properly homotopic $rel\{v_{j+1}\}$ to $c_{j+1} * b_{j+1}$, where $b_{j+1}$ is a $\mathcal{Q}$-ray, by a proper homotopy $H_{j+1}$ with image in $St^{L}(Im(b_j))$. Let $H$ be the homotopy of $b$ to $s_v$ obtained by patching together these $H_i$. For $i\leq FJ$, $H_i$ has image in $\tilde \Gamma-D$. By lemma \ref{3.6}, $\tilde\rho \circ H_j$ has image in $St(\tilde\rho(c_{j}))$. By $(\ast)$, if $j> FJ$ then $\tilde \rho(c_j)$ misses $St(\tilde\rho(D))$. So $St(\tilde\rho(c_j))$ misses $\tilde\rho(D)$. Hence, for every positive integer $j$, $H_j$ misses $D$, and so $H$ misses $D$.
It remains to show that $H$ is proper. Let $C \subseteq \tilde{\Gamma}$ be a finite subcomplex. Since $\tilde\rho(s_v)$ is proper in $\tilde{\Lambda}$, there exists an integer $R$ such that if $j > R$, then $\tilde\rho (c_j)$ misses $St(\tilde\rho(C))$. As $\tilde\rho \circ H_j$ has image in $St(\tilde\rho(c_j))$, $H_j$ misses $C$ when $j>R$. Since only finitely many of the proper homotopies $H_j$ have image intersecting an arbitrary finite subcomplex $C$, $H$ is proper. \end{proof}
\begin{lemma}\label{3.9}
Suppose $D \subseteq \tilde{\Gamma}$ is compact. Then there exists a compact set $E_2(D) \subseteq \tilde{\Gamma}$ such that if $e$ is an edge in $\tilde{\Gamma} - E_2(D)$ from $v$ to $w$, then the $\mathcal Q$-ray $q_v$ is properly homotopic to $e*q_w$ $rel\{v\}$, by a proper homotopy in $\tilde \Gamma -D$. \end{lemma} \begin{proof} Again let $L=2F+M+1$ (the constant of lemma \ref{3.6}). Let $E_2(D)$ be a compact subcomplex of $ \tilde{\Gamma}$ containing $St^L(E_1(D))$ and the finite set of vertices $x$ such that $q_x$ intersects $St^L(E_1(D))$. If $e \in {\mathcal K}^{\pm1}$, then by lemma \ref{3.6}, $q_{v}$ is properly homotopic to $e*\beta$ $rel\{v\}$, where $\beta$ is a $\mathcal Q$-ray at $w$, and this homotopy has image in $St^{L}(Im(q_v))$. In particular, $\beta$ avoids $E_1(D)$. By Lemma \ref{3.8}, $\beta$ and $q_w$ are properly homotopic $rel\{w\}$ to $s_w$ by proper homotopies in $\tilde\Gamma-D$. Combining these homotopies gives the result.
If $e \in {\mathcal Q}^{\pm1}$ then lemma \ref{3.8} implies $q_v$ and $e\ast q_w$ are both properly homotopic $rel\{v\}$ to $s_{v}$ by proper homotopies in $\tilde \Gamma -D$. Combining homotopies gives the desired homotopy. \end{proof}
\begin{lemma}\label{3.10} Suppose $s = (s_1, s_2, s_3,\ldots )$ is an edge path ray at a vertex $v$ in $\tilde{\Gamma}$. Then $s$ is properly homotopic to $q_v$ $rel\{v\}$. \end{lemma} \begin{proof} Choose a sequence of compact subcomplexes $\{C_i\}_{i=1}^\infty$ such that $\bigcup_{i=1}^\infty C_i = \tilde{\Gamma}$, $C_i$ is contained in the interior of $C_{i+1}$, and such that $C_{i+1}$ contains $E_2(C_i)$. Let $v_i$ be the endpoint of $s_i$. Define $H : [0,\infty) \times [0,\infty) \to \tilde{\Gamma}$ as follows: If $R$ is the largest integer such that the edge $s_i$ misses $C_R$, then by definition of $C_R$, $q_{v_{i-1}}$ is properly homotopic $rel\{v_{i-1}\}$ to $s_i\ast q_{v_i}$ by a proper homotopy $H_i$, missing $C_{R-1}$. Define $H$ on $[i - 1,i] \times [0, \infty)$ to be $H_i$.
In order to check that $H$ is proper, it suffices to show that for any compact set $C \subseteq \tilde{\Gamma}$ only finitely many $H_j$ intersect $C$. This follows from the fact that $C \subseteq C_i$ for some index $i$. Since $s$ is proper, there is an integer $W(i)$ such that for all $j \geq W(i)$, $s_j$ lies in $\tilde{\Gamma} - C_{i+1}$. So $H_j$ avoids $C_i$, and hence $C$, for all $j \geq W(i)$. Therefore $H$ is proper. \end{proof}
This completes the semistability part of our main theorem.
If $H$ is a group and $\phi:H\to H$ is a monomorphism, the group with presentation $\langle t, H:t^{-1}ht=\phi(h) \hbox{ for all } h\in H\rangle$ is called the {\it ascending} HNN extension of $H$ by $\phi$ and is denoted $H\ast_{\phi}$. The main theorem of \cite{HNN} states that if $H$ is a finitely presented group and $\phi:H\to H$ a monomorphism, then the ascending HNN extension $H\ast_{\phi}$ is 1-ended and semistable at $\infty$.
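For a concrete illustration (an example added here, not taken from the sources cited above), take $H=\mathbb{Z}=\langle h\rangle$ and $\phi(h)=h^{2}$. Then
```latex
H\ast_{\phi} \;=\; \langle t, h : t^{-1}ht = h^{2}\rangle \;\cong\; BS(1,2),
```
the Baumslag--Solitar group $BS(1,2)$. Here $\phi(H)=2\mathbb{Z}$ has index $2$ in $H$, so this example also satisfies the finite index hypothesis discussed below.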
Consider a general finite presentation of the form $\langle t, h_1,\ldots,h_n: r_1, \ldots ,r_n, t^{-1}h_1t=w_1,\ldots, t^{-1}h_nt=w_n\rangle$ where $r_i$ and $w_i$ are words in $\{h_1^{\pm 1}, \ldots ,h_n^{\pm 1}\}$ for all $i$. The group $G$ of this presentation is the ascending HNN extension $H\ast_{\phi}$ where $H$ is generated by $\{h_1,\ldots, h_n\}$ and $\phi$ is the monomorphism $\phi:H\to H$, where $\phi(h_i)=w_i$ for all $i$. While $G$ is finitely presented it would seem rare that the finitely generated group $H$ would be finitely presented. It has long been suggested that ascending HNN extensions of this form may be a good place to search for non-semistable at $\infty$, finitely presented groups. In \cite{CM}, we show that if $H$ is finitely generated and the image of the monomorphism $\phi:H\to H$ has finite index in $H$, then $H$ is commensurated in $H\ast_{\phi}$. As a direct consequence of this result and our main theorem we have:
\begin{corollary} Suppose $H$ is a finitely generated group and $\phi:H\to H$ a monomorphism such that $\phi(H)$ has finite index in $H$. Then the ascending HNN extension $H\ast_{\phi}$ is semistable at $\infty$. \end{corollary}
\section{A Theorem of Lew} Our goal in this section is to give an alternate proof of a theorem of V. M. Lew. \begin{theorem}\label{Lew} {\bf (V. M. Lew)} Suppose $H$ is an infinite finitely generated subnormal subgroup of the finitely generated group $G$ and $H$ has infinite index in $G$. Then $G$ is 1-ended and semistable at $\infty$. \end{theorem}
\begin{proof} Suppose $k>0$ and $H = N_0 \lhd N_1 \lhd N_2 \lhd \ldots \lhd N_k = G$ is a subnormal series. For $k\in\{1,2\}$ and $G$ finitely presented, theorem \ref{Lew} is proved in \cite{M1} and \cite{M2}. Those proofs easily generalize to the finitely generated case. The result that $G$ is 1-ended can be concluded from results in \cite{C} or \cite{S}. A geometric proof of this fact appears in \cite{LR}. We may assume that $(G:N_{k-1})=\infty$, as $G$ is semistable at $\infty$ if and only if any of its finite index subgroups is semistable at $\infty$.
Let $\mathcal{H} = \{h_1, h_2,\ldots ,h_n\}$ be a finite generating set for $H$. Now, $G$ has generating set $S\equiv \{h_1, h_2,\ldots ,h_n, a_1, a_2,\ldots ,a_m, k_1, k_2,\ldots ,k_t\}$ where, under the projection map $\rho : G \to G/N_{k-1}$, $\rho(k_1),\ldots ,\rho(k_t)$ generate $G/N_{k-1}$ and the set $\{h_1,\ldots ,h_n,a_1,\ldots ,a_m\}$ is a subset of $ N_{k-1}.$ Let $\mathcal K=\{k_1,\ldots ,k_t\}$. We also assume that conjugates of the $h_i$'s by the $k_j$'s are among $a_1,\ldots ,a_m$ with the corresponding defining relations, say $k_ih_j{k_i}^{-1} \equiv a_{ij}$, and $k_i^{-1}h_jk_i \equiv b_{ij}$ for $i = 1,2,\ldots ,t$ and $j=1,2,\ldots ,n$ so that $a_{ij}, b_{ij} \in \{a_1, a_2,\ldots ,a_m\}$. Define $Q$ to be this set of conjugation relations. $$ Q=\{ k_ih_jk_i^{-1}a_{ij}^{-1}, k_i^{-1}h_jk_ib_{ij}^{-1}: i=1,\ldots, t \ \hbox{and} \ j=1,\ldots, n\}$$
Let $A$ be the subgroup of $N_{k-1}$ generated by $\mathcal{A} = \{h_1,\ldots ,h_n,a_1,\ldots ,a_m\}$. Let $A_i= N_i\cap A$ for $i\in \{1,\ldots ,k-2\}$. Then the subnormal sequence $$H=A_0\triangleleft A_1\triangleleft \cdots \triangleleft A_{k-2}\triangleleft A$$ has length $k-1$. The proof splits naturally into the two cases of whether or not $H$ has finite index in $A$. In the case $H$ has finite index in $A$ we give a straightforward argument showing that $H$ is commensurated in $G$ and by our main theorem $G$ is semistable at $\infty$. Note that if $k=1$ this is the only case (since $A\subset N_0=H$). So when the proof of the first case is concluded, we are in a position to apply an induction argument (with base case in hand) to the remaining case.
Suppose $H$ has finite index in $A$. Each point of $\Gamma(A,\mathcal A)$ is within a bounded distance of $aH$ for any $a\in A$. In particular the Hausdorff distance between $H$ and $aH$ is bounded.
If $k\in \mathcal K^{\pm 1}$ and $z\in kH$ then $z=kh$ for some $h\in H$. Note that $khk^{-1}\in A$ (it is a product of the $a_{ij}^{\pm 1}$ or $b_{ij}^{\pm 1}$). Since $H$ has finite index in $A$, this point is close to $H$. As each point of $kH$ is close to $H$, left multiplying by $k^{-1}$ shows that each point of $H$ is close to $k^{-1}H$ for all $k\in \mathcal K^{\pm 1}$. We have $H$ is commensurated in $G$. The conditions of our main theorem are satisfied and so in the case $H$ has finite index in $A$, $G$ is semistable at $\infty$.
Now suppose $H$ has infinite index in $A$. The subnormal sequence $H=N_0\triangleleft N_1\triangleleft \cdots \triangleleft N_{k-1}\triangleleft G$ has length $k$. Case 1 (or Mihalik's theorem \cite{M1}) shows that if $k=1$ then $G$ is semistable at $\infty$. Inductively, we assume that if $G'$ is finitely generated and has a subnormal sequence $H'=N_0'\triangleleft N_1'\triangleleft \cdots \triangleleft N_{k-2}'\triangleleft G'$ of length $k-1$ such that $H'$ is finitely generated and has infinite index in $G'$, then $G'$ is semistable at $\infty$.
In our case, $H$ has infinite index in $A$, and the $k-1$ length subnormal series $H=A_0\triangleleft A_1\triangleleft \cdots \triangleleft A_{k-2}\triangleleft A$ implies that $A$ is semistable at $\infty$. Hence we may choose a finite set of $\mathcal A$-relations $P$ so that $\Gamma_{(A,\mathcal A)} (P)$ is semistable. By using lemma \ref{rel} or remark 1, we may assume that if $r$ and $s$ are $\mathcal A$-rays at $v$ in $\Gamma _{(A,\mathcal A)}(P)$ then $r$ and $s$ are properly homotopic $rel\{v\}$ in $\Gamma _{(A,\mathcal A)}(P)$. In this case, let $\tilde\Gamma$ be $\Gamma_{(G,S)}(P\cup Q)$ (where $Q$ is the set of conjugation relations defined at the beginning of this proof).
If $v\in G$ (so $v$ is a vertex of $\tilde \Gamma$) and $C_v$ is a compact subcomplex of $v\Gamma_{(A,\mathcal A)}(P)\subset \tilde \Gamma$ there is a compact subcomplex $D_v$ of $v\Gamma_{(A,\mathcal A)}(P)$ such that if $r$ and $s$ are edge path rays at $w\in v\Gamma_{(A,\mathcal A)}(P)-D_v$ then $r$ and $s$ are properly homotopic $rel\{w\}$ by a proper homotopy whose image does not intersect $C_v$. Hence if $C$ is a compact subcomplex of $\tilde\Gamma$ and we let $C_v=C\cap v\Gamma_{(A,\mathcal A)}(P)$ (for the finite set of vertices $v$ such that $C\cap v\Gamma_{(A,\mathcal A)}(P)\ne\emptyset$) and let $D=\cup D_v$, then any two $\mathcal A$-rays $r$ and $s$ at $w\in v\Gamma_{(A,\mathcal A)}(P)-D$ are properly homotopic $rel\{w\}$ in $\tilde\Gamma-C$.
We use $\mathcal H$-rays $r_v$, as defined in lemma \ref{3.1}.
Choose a sequence of compact subcomplexes $\{C_i\}_{i=1}^\infty$ of $\tilde \Gamma$ satisfying the following conditions: \begin{enumerate} \item $\bigcup_{i=1}^\infty C_i= \tilde{\Gamma}$ \item $St(C_i)$ is contained in the interior of $C_{i+1}$, and the finite set of vertices $v$ such that $r_v$ intersects $C_i$, is a subset of $C_{i+1}$. \item If $r$ and $s$ are $\mathcal{A}$-rays both based at a vertex $v$ with images missing $C_i$, then $r$ and $s$ are properly homotopic $rel\{v\}$ by a proper homotopy missing $C_{i-1}$. \end{enumerate}
For convenience define $C_i=\emptyset$ for $i<1$ and observe that conditions $(1)$, $(2)$, and $(3)$ remain valid for all $C_i$. The next lemma concludes the proof of the second case and the theorem.
\begin{lemma}\label{4.3} If $v$ is a vertex of $\tilde{\Gamma}$, and $s = (s_1, s_2,\ldots )$ is an $S$-ray at $v$ then $s$ is properly homotopic to $r_v$, $rel\{v\}$.
\end{lemma} \begin{proof} Assume that $s$ has consecutive vertices $v=v_0, v_1,\ldots $. By construction, if $v_j \in C_i - C_{i-1}$ then $r_{v_j}$ avoids $C_{i-1}$. Assume $j$ is the largest integer such that $C_j$ avoids $s_i$. We will show that $r_{v_{i-1}}$ is properly homotopic to $s_i*r_{v_i}$ $rel\{v_{i-1}\}$ by a proper homotopy $H_i$ with image avoiding $C_{j-2}$.
If $s_i \in \mathcal{A} ^{\pm 1}$, this is clear by condition $(3)$ with $H_i$ avoiding $C_{j-1}$. If $s_i \in {\mathcal K}^{\pm 1}$, then $s_i * r_{v_i}$ is properly homotopic $rel\{v_{i-1}\}$ to an $\mathcal{A}$-ray, $t_{v_{i-1}}$ (using only 2-cells arising from $Q$) and this homotopy has image in $St(Im(s_i * r_{v_i})) \subset \tilde{\Gamma} - C_{j-1}$. Since $t_{v_{i-1}}$ and $r_{v_{i-1}}$ are $\mathcal{A}$-rays with images avoiding $C_{j-1}$, condition (3) on the sets $C_i$ gives a proper homotopy between them $rel\{v_{i-1}\}$ whose image avoids $C_{j-2}$. Patch these two proper homotopies together to obtain $H_i$.
Let $H$ be the homotopy $rel\{v\}$ of $s$ to $r_v$, obtained by patching together the homotopies $H_i$. We need to check that $H$ is proper. Let $C \subset \tilde \Gamma$ be compact. Choose an index $j$ such that $C \subseteq C_j$. Since $s$ is a proper edge path to infinity, choose an index $N$ such that all edges after the $N^{th}$ edge of $s$ avoid $C_{j+2}$. Then for all $i>N$, $H_i$ avoids $C_j$, so $H$ is proper. \end{proof} This concludes the proof of the theorem. \end{proof} \section{ Simple Connectivity at $\infty$}
Recall, a connected locally finite CW-complex $X$ is simply connected at $\infty$ if for each compact set $C$ in $X$ there is a compact set $D$ in $X$ such that loops in $X-D$ are homotopically trivial in $X-C$. A group $G$ is simply connected at $\infty$ if given some, equivalently any (see theorem 3 of \cite {LR}), finite complex $X$ with $ \pi_1(X)=G$, then the universal cover of $X$ is simply connected at $\infty$.
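A standard example, included here only for orientation (it is not part of the original argument): $\mathbb{R}^{n}$ is simply connected at $\infty$ exactly when $n\geq 3$,
```latex
\mathbb{R}^{n}\ (n \geq 3): \text{ simply connected at } \infty,
\qquad
\mathbb{R}^{2}: \text{ not simply connected at } \infty.
```
Indeed, for $n\geq 3$ a loop outside a large ball $D\supseteq C$ can be pushed to a sphere $S^{n-1}$, which is simply connected, and so the loop bounds in $\mathbb{R}^{n}-C$; in $\mathbb{R}^{2}$, a loop encircling $D$ remains essential in $\mathbb{R}^{2}-C$.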
If $G$ is a group and $H$ a subgroup of $G$ there are various notions for the number of ends of the pair $(G,H)$. Chapter 14 of Geoghegan's book \cite{G} gives a good account of these notions. In particular, the idea of the number of filtered ends of the pair $(G,H)$ is developed and compared to the standard number of ends of a pair. In any case, the number of filtered ends of the pair $(G,H)$ is greater than or equal to the number of standard ends of the pair. Proposition 14.5.9 of \cite{G} shows that if $H$ is a normal subgroup of $G$ then the number of ends of $G/H$, the standard number of ends of $(G,H)$ and the number of filtered ends of $(G,H)$ are all the same. In \cite{CM}, we show that if $G$ is a group with finite generating set $S$ and $Q$ is a finitely generated commensurated subgroup of $G$, then the number of filtered ends of $(G,Q)$ equals the number of ends of $\Lambda(S,Q,G)$.
\begin{theorem} Suppose $G$ is a finitely presented group with finite generating set $S$, and $Q$ is a finitely presented, infinite commensurated subgroup of infinite index in $G$. If $Q$ or $\Lambda (S,Q,G)$ is 1-ended, then $G$ is simply connected at $\infty$. \end{theorem}
\begin{proof} Suppose $\mathcal P=\langle q_1,\ldots ,q_a, k_1,\ldots ,k_b:R\rangle$ is a finite presentation of the group $G$ such that the $q_i$ generate the infinite commensurated subgroup $Q$, no $k_i$ is an element of $Q$, and $R$ contains relations $R'$ such that $\langle q_1,\ldots ,q_a:R'\rangle$ is a finite presentation of $Q$. Assume that $Q$ has infinite index in $G$. Let $X$ be the Cayley 2-complex of $\mathcal P$, $\tilde X$ the universal cover for $X$ and $\tilde X(Q,v)\subset \tilde X$ the copy of the universal cover of the Cayley 2-complex for $\langle q_1,\ldots ,q_a:R'\rangle$ containing $v$. Let $\mathcal K=\{k_1,\ldots ,k_b\}$ and $\mathcal Q=\{q_1,\ldots ,q_a\}$.
Let $N_1$ be an integer such that if cosets $gQ$ and $hQ$ of $G$ are connected by an edge in $\tilde X$, then the Hausdorff distance between $gQ$ and $hQ$ in $\tilde X$ is $\leq N_1$. For each relator $r\in R$, let $r'$ be the word obtained from $r$ by removing $\mathcal Q$ letters. For each such (non-trivial) $r'$ and edge loop in $\Lambda(S,Q,G)$ with edge label $r'$, attach a 2-cell and call the resulting locally finite 2-complex $\hat \Lambda(S,Q,G)$. Note that $\Gamma(S,G)$ is the 1-skeleton of $\tilde X$. Extend the map $\rho:\Gamma(S,G) \to \Lambda (S,Q,G)$ (see proposition \ref{locfin2}), to $\rho :\tilde X\to \hat \Lambda(S,Q,G)$. Let $C$ be a finite subcomplex of $\tilde X$. Let $d_1\geq 1$ be an integer such that for each vertex $v$ of $\rho(C)$, there is a $\mathcal K$-edge path in $\hat \Lambda (S,Q,G)$ of length $\leq d_1$ from $v$ to a vertex of $\hat \Lambda (S,Q,G)-\rho(C)$. In particular, for each vertex $v$ of $\tilde X$, there is an edge path at $v$ of length $\leq N_1d_1$ and with end point $w$ such that $\tilde X(Q,w)\cap C=\emptyset$. For each $k\in \{k_1,\ldots , k_b\}$ assume that $Q$ and $kQ$ are within Hausdorff distance $N_1$. Choose $N_2$ so that if $q_1$ and $q_2$ are two $Q$-vertices of $\tilde X$ with the edge path distance in $\tilde X$ between $q_1$ and $q_2$ less than or equal to $ 2N_1+1$ then the edge path distance between $q_1$ and $q_2$ in $\tilde X(Q,q_1)$ is $\leq N_2$. In particular, there is a $\mathcal Q$-edge path between $q_1$ and $q_2$ of length $\leq N_2$. Choose $N_3$ such that if $\alpha$ is an edge path loop at $\ast\in \tilde X$ of length $\leq 2N_1+N_2+1$ then $\alpha$ is homotopically trivial in $St^{N_3}(\ast)$.
\begin{lemma}\label{Qkill}
Suppose $G$ is a finitely presented group, $Q$ is a finitely presented, infinite commensurated subgroup of infinite index in $G$, $\mathcal P$ is a presentation of $G$ as above, and $X$ is the Cayley 2-complex of $\mathcal P$. If $\alpha$ is a $\mathcal Q$-loop in $\tilde X$ with image in $\tilde X-St^{d_1N_1N_3}(C)$, then $\alpha$ is homotopically trivial in $\tilde X-C$. \end{lemma} \begin{proof} Let $v$ be the initial vertex of $\alpha$. If $\tilde X(Q,v)\cap C=\emptyset$, then $\alpha$ is homotopically trivial in $\tilde X(Q,v)$ and we are finished. If $\tilde X(Q,v)\cap C\ne\emptyset$, there is an edge path $\beta=(b_1,\ldots, b_k)$ at $v$ with $k\leq N_1d_1$ and with end point $w$ such that $\tilde X(Q,w)\cap C=\emptyset$. Let $v\equiv v_0,\ldots ,v_k\equiv w$ be the consecutive vertices of $\beta$.
For each vertex $x$ of $\alpha$, there is an edge path of length $\leq N_1$ from $x$ to a vertex of $\tilde X(Q,v_1)$ (if $b_1$ is a $\mathcal Q$-edge, this path is trivial) and hence $\alpha$ is homotopic $rel\{0,1\}$ to a loop $(b_1,\alpha_1,b_1^{-1})$, where $\alpha_1$ is a $\mathcal Q$-loop in $\tilde X(Q,v_1)$, by a homotopy in $St^{N_3}(im(\alpha))$. Inductively, $\alpha$ is freely homotopic to a $\mathcal Q$-loop $\alpha_k$ at the end point of $\beta$, by a homotopy in $St^{kN_3}(im(\alpha))\subset \tilde X-C$. As $\tilde X(Q,w)\cap C=\emptyset$ and $im(\alpha_k)\subset \tilde X(Q,w)$, $\alpha_k$ (and hence $\alpha$) is homotopically trivial in $\tilde X-C$. \end{proof}
\noindent {\bf Case 1: $Q$ is 1-ended.} There are finitely many vertices $w_1,\ldots, w_n\in \tilde X$ such that $\tilde X(Q,w_i)\cap St^{(d_1N_1+1)N_3}(C)\ne \emptyset$. As $\tilde X(Q,w_i)$ is 1-ended, there is a compact subcomplex $D$ of $\tilde X$ such that $St^{(d_1N_1+1)N_3}(C)\subset D$ and for all $i\in \{1,\ldots ,n\}$ and vertices $x,y\in \tilde X(Q,w_i)-D$,
$x$ and $y$ can be joined by a $\mathcal Q$-edge path in $\tilde X(Q,w_i)-St^{(d_1N_1+1)N_3}(C)$. Now, suppose $\alpha$ is an arbitrary loop in $\tilde X-D$ with initial vertex $v$. Choose $L$ a positive integer such that if $q_1$ and $q_2$ are vertices of $\tilde X(Q,\ast)$ that are of distance $\leq N_1|\alpha|$ apart in $\tilde X$ then they are of distance $\leq L$ in $\tilde X(Q, \ast)$. Choose $E$ such that any edge path loop $\tau$ at a vertex $x$ of $\tilde X$, of length $\leq N_1|\alpha|+L$, is homotopically trivial in $St^E(x)$. Let $\beta_1$ be a $\mathcal Q$-path in $\tilde X(Q,v)-St^{(d_1N_1+1)N_3}(C)$ from $v$ to a point
$$w\in \tilde X-(St^E(C)\cup St^{d_1N_1N_3+L}(C)\cup St^{N_1|\alpha |}(D))$$
Write the edge path $\alpha$ as $(e_1,\ldots, e_m)$ with consecutive vertices $v=v_0,\ldots, v_m$. As $w\in \tilde X(Q,v)$ there is an edge path $\tau_1$ of length $\leq N_1$ from $w=w_1$ to $w_2\in \tilde X(Q, v_2)$. Let $\tau _2$ be an edge path of length $\leq N_1$ from $w_2$ to $w_3\in \tilde X(Q, v_3)$. Inductively, $\tau_m $ is an edge path of length $\leq N_1$ from $w_{m}$ to a vertex $w_{m+1}\in \tilde X(Q,v)$. (Note that $\tau_i$ may be taken as the trivial path if $e_i$ is a $\mathcal Q$-edge.) As the edge path $(\tau_1,\ldots, \tau_m)$ has length $\leq N_1|\alpha |$, there is a $\mathcal Q$-path $\lambda$, from $w_{m+1}$ to $ w$ of length $\leq L$. By the definition of $E$, the loop $\tau\equiv (\tau_1,\ldots ,\tau_m,\lambda)$ at $w$ is homotopically trivial in $\tilde X-C$. Hence, it suffices to show that $\alpha $ is freely homotopic to $\tau$ in $\tilde X-C$. (See figure 1.)
\hspace{1.2in}\includegraphics[scale=.79]{QNSSFig1}
\centerline{Figure 1}
First note that each vertex of $(\tau_1,\ldots, \tau_m)$ is in $\tilde X-D$, since the vertex $w\in \tilde X-St^{N_1|\alpha |}(D)$. Next, write $\beta _1$ as the edge path $(b_1,\ldots, b_s)$. Let $\phi_0=e_1$ and let $\phi_i$ be an edge path of length $\leq N_1$ from the end point of $b_i$ to a point of $\tilde X(Q,v_2)$. Let $\psi_i$ be a $\mathcal Q$-edge path of length $\leq N_2$ from the end point of $\phi_{i-1}$ to the end point of $\phi_i$. (Choose $\phi_s=\tau_1$.) Then the loop $(\phi_{i-1},\psi_i,\phi_i^{-1},b_i^{-1})$ has length $\leq 2N_1+N_2+1$ and is homotopically trivial by a homotopy in the $N_3$-star of the initial point of $b_i$. String together these homotopies and we have that the edge path $(e_1^{-1},\beta_1,\tau_1)$ is homotopic $rel\{0,1\}$ to the $\mathcal Q$-edge path $\beta_2'\equiv (\psi_1, \ldots, \psi_s)$ by a homotopy with image in $St^{N_3}(im(\beta_1))\subset \tilde X-St^{d_1N_1N_3}(C)$. By the definition of $D$, there is a $\mathcal Q$-edge path $\beta _2$ with the same end points as $\beta_2'$ and with image in $\tilde X-St^{(d_1N_1+1)N_3}(C)$. By lemma \ref{Qkill}, $\beta_2$ and $\beta_2'$ are homotopic $rel\{0,1\}$ by a homotopy in $\tilde X-C$. Continue inductively until $\beta_m$ and $\beta'_{m+1}$ are defined. Since $w\in \tilde X-St^{d_1N_1N_3+L}(C)$, the path $\lambda$ (of length $\leq L$) has image in $\tilde X-St^{d_1N_1N_3}(C)$. By lemma \ref{Qkill}, the $\mathcal Q$-loop $(\beta'_{m+1},\lambda, \beta_1^{-1})$ is homotopically trivial in $\tilde X-C$.
\noindent {\bf Case 2: $\Lambda(S,Q,G)$ is 1-ended.} The letters $N_1$, $N_2$ and $N_3$ remain as in case 1 and we recycle letters used for any other constant.
Given $C$ a finite subcomplex of $\tilde X$. Consider $\rho(C)\subset \hat \Lambda (S,Q,G)$. Choose $D$ a finite subgraph of $ \Lambda (S,Q,G)$ such that any two vertices of $\hat \Lambda (S,Q,G)-D$ can be connected by a path in $\hat \Lambda (S,Q,G)-\rho(St^{N_3}(C))$. For each vertex $v$ of $D$ choose a path $\bar \alpha_v$ from $v$ to a vertex of $\hat \Lambda (S,Q,G)-D$. If $v$ is a vertex of $\hat \Lambda (S,Q,G)-D$, let $\bar \alpha_v$ be the trivial path. Let $N$ be the length of the longest path $\bar \alpha_v$ for $v\in D$. If $v$ is a vertex of $\tilde X$ such that $\rho(v)\in D$ let $\alpha_v$ be an edge path of the form $(\beta_1,\ldots , \beta_m)$ where each $\beta_i$ has length $\leq N_1$ and $\rho(\beta_i)$ has the same end points as the $i^{th}$ edge of $\bar \alpha_{\rho(v)}$ (so $|\alpha_v|\leq N_1N$). In analogy with previous terminology, we call $\alpha_v$ an $N_1$-approximate lift of $\bar \alpha_{\rho(v)}$. If $\rho(v)\not \in D$, let $\alpha_v$ be the trivial path.
Choose an integer $M$ such that if $v$ and $w$ are adjacent vertices of $St(D)$, then there is an edge path $\bar \alpha_{v,w}$ in $\hat \Lambda (S,Q,G)-\rho(St^{N_3}(C))$ of length $\leq M$ from the end point of $\bar\alpha_v$ to the end point of $\bar \alpha_w$. Choose an integer $B$ such that if $\beta$ is a $\tilde X$-edge path of length $\leq (2N+M)N_1+1$ connecting $\ast$ (the vertex of $\tilde X$ corresponding to the identity element of $G$) to a vertex $q\in Q$ then there is a $\mathcal Q$-edge path of length $\leq B$ connecting $\ast$ to $q$. Choose an integer $A$ such that if $\beta$ is an edge path loop at $\ast$ of length $\leq (2N+M)N_1+B+1$ then $\beta$ is homotopically trivial in $St^{A}(\ast)$.
We next show: If $\beta$ is an edge path loop in $\tilde X-St^A(C)$, then $\beta$ is freely homotopic to a loop $\hat \beta$ by a homotopy in $\tilde X-C$ where $\hat \beta$ can be chosen so that for each vertex $v$ of $\hat \beta$, $\rho(v)\not \in \rho(St^{N_3}(C))$. If $e$ is a directed edge of $\tilde X$ or $ \Lambda (S,Q,G)$, with initial point $a$ and terminal point $b$, then let $[a,b]$ represent this edge. Suppose $\beta$ is the edge path $(d_1,d_2,\ldots ,d_n)$ with consecutive vertices $b_1,\ldots , b_{n+1}$. If (cyclically) neither $\rho(b_i)$ nor $\rho(b_{i+1})$ is in $D$, then let $\hat \beta_i$ be the single edge $d_i$. Otherwise, $\rho(b_i)$ and $\rho(b_{i+1})$ belong to $St(D)$. In this case, consider the edge path $\delta_{i}\equiv (\alpha_{b_i}^{-1},d_i, \alpha_{b_{i+1}})$ of $\tilde X$.
If $\rho(b_i)\ne \rho(b_{i+1})$, the edge path $\bar \alpha_{\rho(b_i),\rho(b_{i+1})}$ joins the end points of $\rho(\delta_{i})$ and has length $\leq M$. Let $\alpha_i$ be an $N_1$-approximate lift of $\bar \alpha_{\rho(b_i),\rho(b_{i+1})}$ to the initial point of $\delta _{i}$ (otherwise, let $\alpha_i$ be the trivial path at the initial point of $\delta_i$).
Note that the end point of $\alpha_i$ and the end point of $\delta_{i}$ belong to the same left $Q$-coset. As the length of $(\alpha_i^{-1},\delta_{i})$ is $\leq (2N+M)N_1+1$ there is a $\mathcal Q$-edge path $\gamma_{i}$ of length $\leq B$ from the initial point to the end point of $(\alpha_i^{-1},\delta_{i})$. The loop $(\gamma_i^{-1}, \alpha_i^{-1}, \delta_i)$ has length $\leq (2N+M)N_1+B+1$ and so is homotopically trivial in $\tilde X-C$ (by the definition of $A$). Let $\hat \beta_i=(\alpha_i,\gamma_i)$, for $i\in \{1,\ldots , n\}$. Let $\hat \beta$ be the loop $(\hat \beta_1,\ldots , \hat\beta_n)$. Combining homotopies shows that $\beta$ is freely homotopic to $\hat \beta$ by a homotopy in $\tilde X-C$. As $\rho(\alpha_i)$ avoids $\rho(St^{N_3}(C))$, $\rho(\hat \beta)$ avoids $\rho(St^{N_3}(C))$. (See figure 2.)
\vspace*{-1.5in}
\includegraphics[scale=.68]{QNSSFig2}
\vspace*{-3in}
\centerline{Figure 2}
We conclude the proof of case 2 by showing $\hat \beta$ is homotopically trivial in $\tilde X-C$. The proof is analogous to the closing argument of case 1. Let $v$ be the initial vertex of $\hat \beta$. Choose $L$ a positive integer such that if $q_1$ and $q_2$ are vertices of $\tilde X(Q,\ast)$ that are of distance $\leq N_1|\hat\beta|$ apart in $\tilde X$ then they are of distance $\leq L$ in $\tilde X(Q, \ast)$. Choose $E$ such that any edge path loop $\tau$ at a vertex $x$ of $\tilde X$ and of length $\leq N_1|\hat\beta|+L$, is homotopically trivial in $St^E(x)$. Let $\beta_1$ be a $\mathcal Q$-path from $v$ to a point $w\in \tilde X-St^E(C)$. Write the edge path $\hat\beta$ as $(e_1,\ldots, e_m)$ with consecutive vertices $v\equiv v_1,v_2,\ldots ,v_{m}$.
As $w\in \tilde X(Q,v)$ there is an edge path $\tau_1$ of length $\leq N_1$ from $w$ to $w_2\in \tilde X(Q, v_2)$. Let $\tau _2$ be an edge path of length $\leq N_1$ from $w_2$ to $w_3\in \tilde X(Q, v_3)$. Inductively, $\tau_m $ is an edge path of length $\leq N_1$ from $w_{m}$ to a vertex $w_{m+1}\in\tilde X(Q,v)$. (Note that $\tau_i$ may be taken as the trivial path if $e_i$ is a $\mathcal Q$-edge.) As the edge path $(\tau_1,\ldots ,\tau_m)$ begins and ends in $\tilde X(Q,v)$ and has length $\leq N_1|\hat\beta |$, there is a $\mathcal Q$-path $\lambda$ from $w_{m+1}$ to $w$ of length $\leq L$. By the definition of $E$, the loop $\tau\equiv (\tau_1,\ldots ,\tau_m,\lambda)$ at $w$ is homotopically trivial in $\tilde X-C$. Hence, it suffices to show that $\hat\beta$ is freely homotopic to $\tau$ in $\tilde X-C$.
Each vertex $b$ of $\beta_1$ is such that $\rho(v)=\rho(b)\in \hat\Lambda (S,Q,G)-\rho(St^{N_3}(C))$ and so the image of $\beta_1$ avoids $St^{N_3}(C)$. As in case 1, this implies that the path $(e_1^{-1}, \beta _1,\tau_1)$ is homotopic $rel\{0,1\}$ to a $\mathcal Q$-edge path $\beta_2$ by a homotopy with image in $St^{N_3}(im(\beta_1))\subset \tilde X-C$. Each vertex $b$ of $\beta_2$ is such that $\rho(b)=\rho(v_2)\in \hat\Lambda(S,Q,G)-\rho(St^{N_3}(C))$ and so the image of $\beta_2$ avoids $St^{N_3}(C)$. The path $(e_2^{-1}, \beta _2,\tau_2)$ is homotopic $rel\{0,1\}$ to a $\mathcal Q$-edge path $\beta_3$ by a homotopy with image in $St^{N_3}(im(\beta_2))\subset \tilde X-C$. Continue inductively until $\beta_{m+1}$ is defined (as a $\mathcal Q$-path from $v$ to $w_{m+1}$). As $\rho(v)\in \hat\Lambda (S,Q,G)-\rho(St^{N_3}(C))$, the $\mathcal Q$-loop $(\beta_1,\lambda^{-1},\beta_{m+1}^{-1})$ has image in $\tilde X(Q,v)\subset \tilde X-C$, and so is homotopically trivial in $\tilde X-C$. (See figure 3.)
\vspace*{-1.5in} \includegraphics[scale=.68]{QNSSFig3}
\vspace*{-3.5in}
\centerline{Figure 3}
Combining homotopies produces a null homotopy of $\hat \beta $ with image in $\tilde X-C$. \end{proof}
DEPARTMENT OF MATHEMATICS WHITE HALL CORNELL UNIVERSITY ITHACA, NY 14853 CURRENT ADDRESS: DEPARTMENT OF MATHEMATICS AND COMPUTER SCIENCE STARK LEARNING CENTER WILKES UNIVERSITY WILKES-BARRE, PA 18766
\end{document} |
\begin{document}
\title{
Inequalities for trace norms of $2\times 2$ block matrices}
\begin{abstract} This paper derives an inequality relating the $p$-norm of a positive $2 \times 2$ block matrix to the $p$-norm of the $2 \times 2$ matrix obtained by replacing each block by its $p$-norm. The inequality had been known for integer values of $p$, so the main contribution here is the extension to all values $p \geq 1$. In a special case the result reproduces Hanner's inequality. A weaker inequality which applies also to non-positive matrices is presented. As an application in quantum information theory, the inequality is used to obtain some results concerning maximal $p$-norms of product channels. \end{abstract}
\pagebreak
\section{Introduction and statement of results} Quantum information theory has raised some interesting mathematical questions about completely positive trace preserving maps. Such maps describe the evolution of open quantum systems, or quantum systems in the presence of noise \cite{BS}. Many of these questions are related to the quantum entropy of states, and the associated notion of the trace norm, or $p$-norm, of a state. In one case \cite{K1} the investigation of the additivity question for product channels (which will be explained in Section 5) led to an inequality for $p$-norms of positive $2 \times 2$ block matrices for integer values of $p$. The present paper is devoted to showing that this inequality extends to non-integer values of $p$. Some implications of this result for the additivity question are presented, as well as a somewhat weaker inequality which applies to all $2 \times 2$ block matrices.
The inequality for positive matrices turns out to be closely related to Hanner's inequality \cite{Ha}, which itself relates to the uniform convexity of the matrix spaces $C_p$ (these matrix spaces are the non-commutative versions of the function spaces $L_p$). The precise relation between these results will be described after the statements of Theorem \ref{thm1} and Theorem \ref{thm2} below. Hanner's inequality and uniform convexity for $C_p$ were first established by Tomczak-Jaegermann \cite{T-J} for special values of $p$, and later proved for all $p \geq 1$ by Ball, Carlen and Lieb \cite{BCL}. Many of the ideas and methods used in the proofs of Theorems \ref{thm1} and \ref{thm2} in this paper are taken from the paper by Ball, Carlen and Lieb. The heart of the proof of Theorem \ref{thm1} is the convexity result presented below in Lemma \ref{lemma3}, which extends a result used by Hanner \cite{Ha} in his original paper.
Let $M$ be a $2n \times 2n$ positive semi-definite matrix. It can be written in the block form \begin{eqnarray}\label{def:M} M = \pmatrix{ X & Y \cr Y^{*} & Z} \end{eqnarray} where $X,Y,Z$ are $n \times n$ matrices. The condition $M \geq 0$ requires that $X \geq 0$ and $Z \geq 0$, and also that $Y = X^{1/2} R Z^{1/2}$ where $R$ is a contraction.
Recall that the $p$-norm of a matrix $A$ is defined as \begin{eqnarray}
|| A ||_p = \bigg( {\rm Tr} ( A^{*} A)^{p/2} \bigg)^{1/p} \end{eqnarray} Define the $2 \times 2$ matrix \begin{eqnarray}\label{def:m}
m = \pmatrix{ || X ||_p & || Y ||_p \cr || Y ||_p & || Z ||_p } \end{eqnarray} From H\"older's inequality it follows that \begin{eqnarray}
|| Y ||_p = || X^{1/2} \, R Z^{1/2} ||_p \leq || X ||_p^{1/2} \,\, || Z ||_p^{1/2} \end{eqnarray} which implies that $m \geq 0$ also.
\begin{thm}\label{thm1} Let $M$ and $m$ be defined as in (\ref{def:M}) and (\ref{def:m}). The following inequalities hold:
\par\noindent {\bf a)} for $1 \leq p \leq 2$, \begin{eqnarray}\label{thm1.1}
|| M ||_p \geq || m ||_p \end{eqnarray} \par\noindent {\bf b)} for $2 \leq p \leq \infty$, \begin{eqnarray}\label{thm1.2}
|| M ||_p \leq || m ||_p \end{eqnarray} \end{thm}
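Though not part of the original argument, Theorem \ref{thm1} is easy to probe numerically. The following sketch (assuming Python with numpy; the helper pnorm computing the Schatten $p$-norm is our own, not from the paper) samples random positive block matrices and checks both directions of the inequality.

```python
import numpy as np

def pnorm(A, p):
    # Schatten p-norm: the l^p norm of the singular values of A
    s = np.linalg.svd(A, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))

rng = np.random.default_rng(0)
n = 3
for p in (1.3, 2.0, 3.7):
    for _ in range(200):
        G = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
        M = G @ G.conj().T                       # random positive 2n x 2n matrix
        X, Y, Z = M[:n, :n], M[:n, n:], M[n:, n:]
        m = np.array([[pnorm(X, p), pnorm(Y, p)],
                      [pnorm(Y, p), pnorm(Z, p)]])
        if p <= 2:
            assert pnorm(M, p) >= pnorm(m, p) - 1e-9   # part (a)
        else:
            assert pnorm(M, p) <= pnorm(m, p) + 1e-9   # part (b)
```

At $p = 2$ the two sides agree, consistent with the fact that $\|M\|_2^2 = \|X\|_2^2 + 2\|Y\|_2^2 + \|Z\|_2^2 = \|m\|_2^2$.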
Theorem \ref{thm1} is easily proved for integer values of $p$ using H\"older's inequality (see \cite{K1} for details). In the case where $X=Z$ and $Y=Y^{*}$, the norms of $M$ and $m$ simplify in the following way: \begin{eqnarray}
|| M ||_{p}^p & = & || X + Y ||_{p}^p + || X - Y ||_{p}^p \\
|| m ||_{p}^p & = & \bigg( || X ||_p + || Y ||_p \bigg)^p
+ \bigg| || X ||_p - || Y ||_p \bigg|^p \end{eqnarray} With these substitutions, the inequalities (\ref{thm1.1}) and (\ref{thm1.2}) are seen to be special cases of Hanner's inequality \cite{Ha} for the matrix spaces $C_p$. As mentioned above, Hanner's inequality for $C_p$ was proved by Tomczak-Jaegermann \cite{T-J} for special values of $p$, and later proved for all $p \geq 1$ by Ball, Carlen and Lieb \cite{BCL}.
The next Theorem presents a weaker pair of inequalities which hold for all $2 \times 2$ block matrices.
\begin{thm}\label{thm2} Let $X$, $Y$, $Z$, $W$ be complex $n \times n$ matrices. Define the $2 \times 2$ symmetric matrix \begin{eqnarray}\label{def:alpha}
\alpha = \pmatrix{||X||_p & \Big( {1 \over 2} ||Y||_p^{p} + {1 \over 2} ||W||_p^{p} \Big)^{1/p} \cr
\Big( {1 \over 2} ||Y||_p^{p} + {1 \over 2} ||W||_p^{p} \Big)^{1/p} & ||Z||_p} \end{eqnarray} The following inequalities hold:
\par\noindent {\bf a)} for $1 \leq p \leq 2$, \begin{eqnarray}\label{thm2.1}
\bigg|\bigg| \pmatrix{X & Y \cr W & Z} \bigg|\bigg|_p \geq 2^{1/p} \bigg[ {p-1 \over 2}\,\, {\rm Tr} (\alpha^2) + {2 - p \over 4}\,\, ({\rm Tr} \alpha )^2 \bigg]^{1/2} \end{eqnarray} \par\noindent {\bf b)} for $2 \leq p \leq \infty$, \begin{eqnarray}\label{thm2.2}
\bigg|\bigg| \pmatrix{X & Y \cr W & Z} \bigg|\bigg|_p \leq 2^{1/p} \bigg[ {p-1 \over 2}\,\, {\rm Tr} (\alpha^2) + {2 - p \over 4}\,\, ({\rm Tr} \alpha )^2 \bigg]^{1/2} \end{eqnarray} \end{thm}
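Theorem \ref{thm2} can be checked numerically in the same spirit (a Python/numpy sketch of ours, not from the paper; the helpers pnorm and rhs are assumptions of this snippet), now with arbitrary complex blocks rather than positive matrices.

```python
import numpy as np

def pnorm(A, p):
    # Schatten p-norm via singular values
    s = np.linalg.svd(A, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))

def rhs(X, Y, W, Z, p):
    # right-hand side of Theorem 2, built from the 2x2 matrix alpha
    y = (0.5 * pnorm(Y, p) ** p + 0.5 * pnorm(W, p) ** p) ** (1.0 / p)
    alpha = np.array([[pnorm(X, p), y], [y, pnorm(Z, p)]])
    val = (p - 1) / 2 * np.trace(alpha @ alpha) + (2 - p) / 4 * np.trace(alpha) ** 2
    return 2 ** (1.0 / p) * np.sqrt(val)

rng = np.random.default_rng(1)
n = 3
for p in (1.4, 2.0, 3.0):
    for _ in range(200):
        X, Y, W, Z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
                      for _ in range(4))
        J = np.block([[X, Y], [W, Z]])
        if p <= 2:
            assert pnorm(J, p) >= rhs(X, Y, W, Z, p) - 1e-9   # part (a)
        else:
            assert pnorm(J, p) <= rhs(X, Y, W, Z, p) + 1e-9   # part (b)
```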
Again considering the special case where $X = X^{*} = Z$ and $Y = Y^{*} = W$, the right side of (\ref{thm2.1}) and (\ref{thm2.2}) becomes
\begin{eqnarray} 2^{1/p} \bigg[ ||X||_p^2 + (p-1) \,\, ||Y||_p^2 \bigg]^{1/2} \end{eqnarray} The inequalities in this case were derived in \cite{BCL}, and used to establish the 2-uniform convexity (with best constant) of the space $C_p$. When the block matrix $M$ on the left side of (\ref{thm2.1}) is positive and defined as in (\ref{def:M}), the inequality can be easily derived from Theorem \ref{thm1}, as follows. Observe that in this case \begin{eqnarray}\label{new.m}
|| m ||_p = \bigg( (u + v)^p + (u - v)^p \bigg)^{1/p} \end{eqnarray} where
\begin{eqnarray} u & = & {||X||_p + ||Z||_p \over 2} \\
v & = & \Bigg[ \Bigg({||X||_p - ||Z||_p \over 2}\Bigg)^2 + ||Y||_p^2 \Bigg]^{1/2} \end{eqnarray} Gross's two-point inequality \cite{G} states that for all real numbers $a$ and $b$, and all $1 \leq p \leq 2$, \begin{eqnarray}\label{Gross}
\bigg( |a+b|^p + |a-b|^p \bigg)^{1/p} \geq 2^{1/p} \bigg( a^2 + (p-1) \,b^2 \bigg)^{1/2} \end{eqnarray} Applying Gross's inequality to the right side of (\ref{new.m}) and using (\ref{thm1.1}) immediately gives (\ref{thm2.1}). In section 3 we prove Theorem \ref{thm2} in the general case (where positivity is not assumed) by using some very non-trivial results from the paper \cite{BCL}.
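Gross's two-point inequality (\ref{Gross}) is elementary to verify by sampling; the following sketch (Python/numpy, added here as an illustration and not part of the original text) does so for random $a$, $b$ and $1 \leq p \leq 2$.

```python
import numpy as np

def two_point(a, b, p):
    # left side of Gross's two-point inequality
    return (abs(a + b) ** p + abs(a - b) ** p) ** (1.0 / p)

rng = np.random.default_rng(2)
for _ in range(5000):
    a, b = rng.standard_normal(2)
    p = rng.uniform(1.0, 2.0)
    lhs = two_point(a, b, p)
    rhs = 2 ** (1.0 / p) * np.sqrt(a * a + (p - 1.0) * b * b)
    assert lhs >= rhs - 1e-9
```

Equality holds at $p = 2$, and the constant $p - 1$ is optimal, which is why the same constant appears in Theorem \ref{thm2}.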
Most of the new work in this paper goes into the proof of Theorem \ref{thm1}, part (a). The proof has three main ingredients: for convenience we state them as separate lemmas here. The first ingredient is a slight modification of a convexity result from \cite{BCL}.
\begin{lemma}\label{lemma2} Let $M = \pmatrix{X & Y \cr Y^{*} & Z} \geq 0$ where $X,Y,Z$ are $n \times n$ matrices. For fixed $Y$, and for $1 \leq p \leq 2$, the function \begin{eqnarray} (X,Z) \longmapsto {\rm Tr} M^p - {\rm Tr} X^p - {\rm Tr} Z^p \end{eqnarray} is jointly convex in $X$ and $Z$. \end{lemma}
The second ingredient extends a convexity result of Hanner \cite{Ha} to the case of positive $2 \times 2$ matrices with positive coefficients.
\begin{lemma}\label{lemma3} Let $A = \pmatrix{a & c \cr c & b} > 0$ where $a,b,c \geq 0$. For $1 \leq p \leq 2$, the function \begin{eqnarray}\label{def:g} g(A) = {\rm Tr} \pmatrix{a^{1/p} & c^{1/p} \cr c^{1/p} & b^{1/p}}^p \end{eqnarray} is convex in $A$. \end{lemma}
The third ingredient is a monotonicity result for positive $2 \times 2$ matrices.
\begin{lemma}\label{lemma4} Let $A = \pmatrix{a & c \cr c & b} > 0$ where $a,b,c \geq 0$. For fixed $c$, and for $1 \leq p \leq 2$, the function \begin{eqnarray}\label{def:h} (a,b) \longmapsto {\rm Tr} A^{p} - a^{p} - b^{p} \end{eqnarray} is decreasing in $a$ and $b$. \end{lemma}
The paper is organised as follows. In Section 2 we present the proof of Theorem \ref{thm1} using Lemmas \ref{lemma2}, \ref{lemma3} and \ref{lemma4}. Section 3 contains the proof of Theorem \ref{thm2}, which is mostly a straightforward adaptation of the proof of the uniform convexity result in \cite{BCL}. Lemmas \ref{lemma2}, \ref{lemma3} and \ref{lemma4} are proved in Section 4, and Section 5 describes an application of Theorem \ref{thm1} in Quantum Information Theory.
\section{Proof of Theorem \ref{thm1}} Many of the ideas in this proof are taken from the proof of Hanner's inequality in \cite{BCL}. First, we borrow the duality argument from Section IV of that paper to show that part (b) follows from part (a). For $p \geq 2$ define $q \leq 2$ to be its conjugate index. Then there is a $2n \times 2n$ matrix $K$ satisfying
$ || K ||_q = 1$ such that \begin{eqnarray}
|| M ||_p = \sup_{L: || L ||_q =1} | \, {\rm Tr} ( L M ) \, | = {\rm Tr} ( K M ) \end{eqnarray} The positivity of $M$ means that $K$ can be assumed to be positive. Let \begin{eqnarray} K = \pmatrix{A & C \cr C^{*} & B} \geq 0 \end{eqnarray} then \begin{eqnarray}\label{a->b} {\rm Tr} ( K M ) & = & {\rm Tr} (A X) + {\rm Tr} ( C Y^{*} ) + {\rm Tr} (C^{*} Y ) + {\rm Tr} ( B Z ) \\ \nonumber
& \leq & || A ||_q \, || X||_p + 2 ||C||_q \, ||Y||_p + ||B||_q \, ||Z||_p \\ \nonumber
& = & {\rm Tr} \pmatrix{||A||_q & ||C||_q \cr ||C||_q & ||B||_q} m \\ \nonumber
& \leq & \bigg|\bigg| \pmatrix{||A||_q & ||C||_q \cr ||C||_q & ||B||_q} \bigg|\bigg|_q \, ||m||_p \\
\nonumber & \leq & ||K||_q \, ||m||_p \\ \nonumber
& = & ||m||_p \end{eqnarray} The first and second inequalities are applications of H\"older's inequality, and the last inequality uses part (a) of Theorem \ref{thm1}.
Next we turn to the proof of part (a) of Theorem \ref{thm1}. The inequality becomes an equality at the values $p=1,2$, so we will assume henceforth that $1 < p < 2$. Using the singular value decomposition we can write \begin{eqnarray} Y = U D V^{*} \end{eqnarray} where $U,V$ are unitary matrices and $D \geq 0$ is diagonal. Unitary invariance of the $p$ norm implies that \begin{eqnarray}
||M||_p = \bigg|\bigg| \pmatrix{U^{*} X U & D \cr D & V^{*} Z V} \bigg|\bigg|_p
\end{eqnarray} and also that $||X||_p = ||U^{*} X U||_p$,
$||Z||_p = ||V^{*} Z V||_p$ and $||Y||_p = ||D||_p$. So without loss of generality we will assume henceforth that $Y$ is diagonal and non-negative.
Next we use a diagonalization argument from Section III of \cite{BCL}. Let $U_1, \dots, U_{2^n}$ denote the $2^n$ diagonal $n \times n$ matrices with diagonal entries $\pm 1$. Then for any $n \times n$ matrix $A$ we have \begin{eqnarray} A_d = \sum_{i=1}^{2^n} 2^{-n} \,\, U_i A U_{i}^{*} \end{eqnarray} where $A_d$ is the diagonal part of $A$. Since $Y$ is diagonal this implies that \begin{eqnarray}\label{av1} \sum_{i=1}^{2^n} 2^{-n} \pmatrix{U_i & 0 \cr 0 & U_i} \pmatrix{X & Y \cr Y & Z} \pmatrix{U_{i}^{*} & 0 \cr 0 & U_{i}^{*}} = \pmatrix{X_d & Y \cr Y & Z_d} \end{eqnarray} and by the same reasoning \begin{eqnarray}\label{av2} \sum_{i=1}^{2^n} 2^{-n} \pmatrix{U_i & 0 \cr 0 & U_i} \pmatrix{X & 0 \cr 0 & Z} \pmatrix{U_{i}^{*} & 0 \cr 0 & U_{i}^{*}} = \pmatrix{X_d & 0 \cr 0 & Z_d} \end{eqnarray}
Now we combine (\ref{av1}) and (\ref{av2}) with the convexity result Lemma \ref{lemma2}, which gives \begin{eqnarray}\label{ineq1} {\rm Tr} \pmatrix{X & Y \cr Y & Z}^p - {\rm Tr} \pmatrix{X & 0 \cr 0 & Z}^p
\geq {\rm Tr} \pmatrix{X_d & Y \cr Y & Z_d}^p - {\rm Tr} \pmatrix{X_d & 0 \cr 0 & Z_d}^p \end{eqnarray}
The matrices $X_d,Y,Z_d$ are all diagonal with non-negative entries. Denote these entries by $(x_1, \dots, x_n)$, $(y_1, \dots, y_n)$ and $(z_1, \dots, z_n)$ respectively. Then \begin{eqnarray}\label{eqn1} {\rm Tr} \pmatrix{X_d & Y \cr Y & Z_d}^p = \sum_{i=1}^n {\rm Tr} \pmatrix{x_i & y_i \cr y_i & z_i}^p \end{eqnarray}
Now for $i=1,\dots,n$ define \begin{eqnarray} a_i = x_i^p, \quad b_i = z_i^p, \quad c_i = y_i^p \end{eqnarray} and introduce the $2 \times 2$ matrices \begin{eqnarray} A_i = \pmatrix{a_i & c_i \cr c_i & b_i} \end{eqnarray} It follows that \begin{eqnarray}\label{p-norms}
||X_d||_p & = & (a_1 + \cdots + a_n)^{1/p} \\ \nonumber
||Y||_p & = & (c_1 + \cdots + c_n)^{1/p} \\ \nonumber
||Z_d||_p & = & (b_1 + \cdots + b_n)^{1/p} \end{eqnarray} and the definition (\ref{def:g}) implies that \begin{eqnarray}\label{eqn3}
{\rm Tr} \pmatrix{||X_d||_p & ||Y||_p \cr ||Y||_p & ||Z_d||_p}^p = g(A_1 + \cdots + A_n) \end{eqnarray} Furthermore (\ref{eqn1}) implies that \begin{eqnarray}\label{eqn2} {\rm Tr} \pmatrix{X_d & Y \cr Y & Z_d}^p = g(A_1) + \cdots + g(A_n) \end{eqnarray} Also, for any positive number $k$ we have $g(k A) = k g(A)$. Combining this with the convexity result Lemma \ref{lemma3} gives \begin{eqnarray} g(A_1 + \cdots + A_n) \leq g(A_1) + \cdots + g(A_n), \end{eqnarray} which from (\ref{eqn2}) and (\ref{eqn3}) implies that \begin{eqnarray}\label{ineq2} {\rm Tr} \pmatrix{X_d & Y \cr Y & Z_d}^p \geq
{\rm Tr} \pmatrix{||X_d||_p & ||Y||_p \cr ||Y||_p & ||Z_d||_p}^p \end{eqnarray}
Combining (\ref{ineq1}) with (\ref{ineq2}) gives \begin{eqnarray}\label{ineq3} & {\rm Tr} & \pmatrix{X & Y \cr Y & Z}^p - {\rm Tr} \pmatrix{X & 0 \cr 0 & Z}^p \\ \nonumber \geq
& {\rm Tr} & \pmatrix{||X_d||_p & ||Y||_p \cr ||Y||_p & ||Z_d||_p}^p -
{\rm Tr} \pmatrix{||X_d||_p & 0 \cr 0 & ||Z_d||_p}^p \end{eqnarray} Furthermore \begin{eqnarray}
||X_d||_p \leq ||X||_p, \quad\quad
||Z_d||_p \leq ||Z||_p \end{eqnarray} Applying Lemma \ref{lemma4} to the right side of (\ref{ineq3}) shows that \begin{eqnarray}\label{ineq4}
& {\rm Tr} & \pmatrix{||X_d||_p & ||Y||_p \cr ||Y||_p & ||Z_d||_p}^p -
{\rm Tr} \pmatrix{||X_d||_p & 0 \cr 0 & ||Z_d||_p}^p \\ \nonumber
\geq & {\rm Tr} & \pmatrix{||X||_p & ||Y||_p \cr ||Y||_p & ||Z||_p}^p -
{\rm Tr} \pmatrix{||X||_p & 0 \cr 0 & ||Z||_p}^p \end{eqnarray} Furthermore \begin{eqnarray} {\rm Tr} \pmatrix{X & 0 \cr 0 & Z}^p =
{\rm Tr} \pmatrix{||X||_p & 0 \cr 0 & ||Z||_p}^p \end{eqnarray} and therefore (\ref{ineq3}) and (\ref{ineq4}) imply part (a) of Theorem \ref{thm1}.
\section{Proof of Theorem \ref{thm2}} This proof follows very closely the methods in Section III of \cite{BCL}. First we use a duality argument to deduce (\ref{thm2.2}) from (\ref{thm2.1}). Let $p \geq 2$ and let $q$ be the index conjugate to $p$. Then it follows as in (\ref{a->b}) that there is a matrix $K = \pmatrix{
A & C \cr D & B}$ such that $||K||_q =1$ and \begin{eqnarray}\label{3.1}
\bigg|\bigg| \pmatrix{X & Y \cr W & Z} \bigg|\bigg|_p & = & {\rm Tr} \, K \, \pmatrix{X & Y \cr W & Z} \\ \nonumber & = & {\rm Tr} \bigg( AX + CW + DY + BZ \bigg) \end{eqnarray} Define
\begin{eqnarray} a = ||A||_q, \quad b = ||B||_q, \quad c = \Big({1 \over 2}||C||_q^q +
{1 \over 2}||D||_q^q \Big)^{1/q} \end{eqnarray} and similarly \begin{eqnarray}\label{def:x}
x = ||X||_p, \quad z = ||Z||_p, \quad y = \Big({1 \over 2}||Y||_p^p +
{1 \over 2}||W||_p^p \Big)^{1/p} \end{eqnarray} Then applying H\"older's inequality to (\ref{3.1}) gives \begin{eqnarray}\label{3.2}
\bigg|\bigg| \pmatrix{X & Y \cr W & Z} \bigg|\bigg|_p \leq ax + bz + 2 c y \end{eqnarray} This is rewritten as \begin{eqnarray}\label{3.3} ax + bz + 2 c y & = & 2 \Big({a+b \over 2}\Big) \Big({x+z \over 2}\Big) + 2 \Big({a-b \over 2}\Big) \Big({x-z \over 2}\Big) + 2 c y \\ \nonumber & = & 2 \Big({a+b \over 2}\Big) \Big({x+z \over 2}\Big) \\ \nonumber & + & 2 \Big({ q-1}\Big)^{1/2} \Big({a-b \over 2}\Big) \Big({1 \over q-1}\Big)^{1/2} \Big({x-z \over 2}\Big) \\ \nonumber & + & 2 \Big({q-1}\Big)^{1/2} c \Big({1 \over q-1}\Big)^{1/2} y \end{eqnarray} Now we apply the Cauchy-Schwarz inequality to the right side of (\ref{3.3}); the result is \begin{eqnarray}\label{3.3a} ax + bz + 2 c y & \leq & 2 \, \bigg[ \Big({a+b \over 2}\Big)^2 + (q-1) \Big({a-b \over 2}\Big)^2 + (q-1) c^2 \bigg]^{1/2} \nonumber \\ & \times & \bigg[ \Big({x+z \over 2}\Big)^2 + {1 \over q-1} \Big({x-z \over 2}\Big)^2 + {1 \over q-1} y^2 \bigg]^{1/2} \end{eqnarray} Furthermore, \begin{eqnarray} \Big({a+b \over 2}\Big)^2 + (q-1) \Big({a-b \over 2}\Big)^2 + (q-1) c^2 = {q-1 \over 2}\,\, {\rm Tr} (k^2) + {2 - q \over 4}\,\, ({\rm Tr} k )^2 \end{eqnarray} where $k$ is the $2 \times 2$ matrix \begin{eqnarray} k = \pmatrix{a & c \cr c & b} \end{eqnarray} Since $q \leq 2$, (\ref{thm2.1}) implies that \begin{eqnarray}\label{3.3b} \bigg[ {q-1 \over 2}\,\, {\rm Tr} (k^2) + {2 - q \over 4}\,\, ({\rm Tr} k )^2 \bigg]^{1/2} & \leq &
2^{-1/q} \,\, \bigg|\bigg| \pmatrix{A & C \cr D & B} \bigg|\bigg|_q \nonumber \\
& = & 2^{-1/q} \, || K ||_q \nonumber \\ & = & 2^{-1/q} \end{eqnarray} Combining (\ref{3.2}), (\ref{3.3a}) and (\ref{3.3b}) gives \begin{eqnarray}
\bigg|\bigg| \pmatrix{X & Y \cr W & Z} \bigg|\bigg|_p & \leq & 2^{1-1/q} \,\, \bigg[ \Big({x+z \over 2}\Big)^2 + {1 \over q-1} \Big({x-z \over 2}\Big)^2 + {1 \over q-1} y^2 \bigg]^{1/2} \nonumber \\ & = & 2^{1/p} \,\, \bigg[ \Big({x+z \over 2}\Big)^2 + (p-1) \Big({x-z \over 2}\Big)^2 + (p-1) y^2 \bigg]^{1/2} \nonumber \\ & = & 2^{1/p} \bigg[ {p-1 \over 2}\,\, {\rm Tr} (\alpha^2) + {2 - p \over 4}\,\, ({\rm Tr} \alpha )^2 \bigg]^{1/2} \end{eqnarray} where $\alpha$ was defined in (\ref{def:alpha}), and this proves (\ref{thm2.2}).
Suppose now that $1 \leq p \leq 2$. The first step in the proof of (\ref{thm2.1}) is to reduce the result to the case where the matrix is self-adjoint. This is done by modifying an argument from section III of \cite{BCL}. Given $X$, $Y$, $W$ and $Z$ define the matrices \begin{eqnarray} J = \pmatrix{X & Y \cr W & Z} \end{eqnarray} and \begin{eqnarray} L = \pmatrix{0 & X & 0 & Y \cr X^{*} & 0 & W^{*} & 0 \cr 0 & W & 0 & Z \cr Y^{*} & 0 & Z^{*} & 0} \end{eqnarray} Then $L = L^{*}$ and furthermore \begin{eqnarray}\label{3.4}
{\rm Tr} | L |^p = {\rm Tr} (L^{*} L)^{p/2} = {\rm Tr} (J^{*} J)^{p/2} + {\rm Tr} (J J^{*})^{p/2} =
2 \, {\rm Tr} |J|^p \end{eqnarray} Assuming that (\ref{thm2.1}) holds for self-adjoint matrices, it implies that \begin{eqnarray}\label{3.5}
|| L ||_p \geq 2^{1/p} \,\, \bigg[ {p-1 \over 2}\,\, {\rm Tr} (\beta^2) + {2 - p \over 4}\,\, ({\rm Tr} \beta )^2 \bigg]^{1/2} \end{eqnarray} where $\beta$ is given by \begin{eqnarray}\label{3.6}
\beta = \pmatrix{2^{1/p} \,||X||_p & \Big( ||Y||_p^{p} + ||W||_p^{p} \Big)^{1/p} \cr
\Big( ||Y||_p^{p} + ||W||_p^{p} \Big)^{1/p} & 2^{1/p} \,||Z||_p} \end{eqnarray} Comparing with (\ref{def:alpha}) shows that $\beta = 2^{1/p} \alpha$, and hence (\ref{3.4}) and (\ref{3.5}) imply (\ref{thm2.1}).
The self-adjoint case will be handled by modifying slightly a very non-trivial proof in section III of the paper \cite{BCL}. For convenience we state the hard part of the proof in \cite{BCL} as a separate lemma here, and refer the reader to the original source for its proof.
\begin{lemma}\label{lemma5} {\rm [Ball, Carlen and Lieb]} Let $A$ and $B$ be self-adjoint $n \times n$ matrices, with $A$ non-singular, and suppose that $1 \leq p \leq 2$. Then \begin{eqnarray}
{d^2 \over d r^2} \bigg( {\rm Tr} |A + r B|^p \bigg)^{2/p} \bigg|_{r=0}
\geq 2 (p-1) \bigg( {\rm Tr} |B|^p \bigg)^{2/p} \end{eqnarray} \end{lemma}
Now suppose that $X$, $Y$ and $Z$ are $n \times n$ complex matrices with $X$ and $Z$ self-adjoint. Define \begin{eqnarray} F = \pmatrix{X & 0 \cr 0 & Z}, \quad G = \pmatrix{0 & Y \cr Y^{*} & 0} \end{eqnarray} Using the notation introduced in (\ref{def:x}), the goal is to show that \begin{eqnarray}\label{3.7}
\bigg( {\rm Tr} |F + r G|^{p} \bigg)^{2/p} \geq 2^{2/p} \, \bigg[ \Big({x+z \over 2}\Big)^2 + (p-1) \Big({x-z \over 2}\Big)^2 + (p-1) r^2 y^2 \bigg]
\end{eqnarray} at the value $r=1$, where now $y = ||Y||_p$. First, it is easy to show that (\ref{3.7}) holds at $r=0$: in this case the left side is $(x^p + z^p)^{2/p}$, and Gross's two-point inequality (\ref{Gross}) implies that \begin{eqnarray} (x^p + z^p)^{2/p} \geq 2^{2/p} \Big[ \Big({x+z \over 2}\Big)^2 + (p-1) \Big({x-z \over 2}\Big)^2 \Big] \end{eqnarray} Second, both sides of (\ref{3.7}) are even functions of $r$ (the left side because the matrices $F + r G$ and $F - r G$ have the same spectrum), hence the derivatives of both sides vanish at $r=0$. Therefore it is sufficient to prove that \begin{eqnarray}\label{3.8}
{d^2 \over d r^2} \bigg( {\rm Tr} |F + r G|^{p} \bigg)^{2/p} \geq 2^{2/p} \,
2 (p-1) y^2 = 2 (p-1) \bigg( {\rm Tr} |G|^p \bigg)^{2/p} \end{eqnarray} for all $0 \leq r \leq 1$. The inequality (\ref{3.8}) is established by the following argument (again borrowed from \cite{BCL}). By continuity, it can be assumed that the ranges of $F$ and $G$ span all of ${\bf C}^{2n}$ (recall that $X$, $Y$, $Z$ are $n \times n$ matrices) and therefore that $F + r G$ is non-singular at all but possibly $2n$ values of $r$ in the interval $0 \leq r \leq 1$. By continuity again it is sufficient to establish (\ref{3.8}) at these non-singular values. Let $r_0$ be such a non-singular value, and let $A = F + r_0 G$ and $B = G$. Then at $r=r_0$, (\ref{3.8}) becomes \begin{eqnarray}\label{3.9}
{d^2 \over d r^2} \bigg( {\rm Tr} |A + r B|^p \bigg)^{2/p} \bigg|_{r=0}
\geq 2 (p-1) \bigg( {\rm Tr} |B|^p \bigg)^{2/p} \end{eqnarray} But this is exactly the statement of Lemma \ref{lemma5}, hence (\ref{thm2.1}) is proved.
\section{Proofs of Lemmas} \subsection{Proof of Lemma \ref{lemma2}} This result is a slight modification of a convexity result proved in Section IV of \cite{BCL}. For a positive matrix $M = \pmatrix{X & Y \cr Y^{*} & Z} \geq 0$, define $M_d = \pmatrix{X & 0 \cr 0 & Z} \geq 0$ and $F = M - M_d$. Let \begin{eqnarray} D = \pmatrix{D_1 & 0 \cr 0 & D_2} = D^{*} \end{eqnarray} be a block diagonal self-adjoint matrix, and define \begin{eqnarray*} \phi(s) & = & {\rm Tr} (M + s D)^p - {\rm Tr} (M_d + s D)^p \\ & = & {\rm Tr} (M_d + F + s D)^p - {\rm Tr} (M_d + s D)^p \end{eqnarray*} Then for $1 \leq p \leq 2$ the second derivative of $\phi$ has the following integral representation (see \cite{BCL} for details): \begin{eqnarray}\label{phi-der} {\phi}''(0) & = & p {\gamma}_p \int_{0}^{\infty} t^{p-1} {\rm Tr} \bigg( {1 \over t + M_d + F} D {1 \over t + M_d + F} D - {1 \over t + M_d} D {1 \over t + M_d} D \bigg) d t \nonumber \\ && \end{eqnarray} for some constant $\gamma_p$. Furthermore, the matrices $M_d + F + s D$ and $M_d - F + s D$ have the same spectrum, hence (\ref{phi-der}) can be written \begin{eqnarray}\label{phi-der2} {\phi}''(0) = {p \over 2} {\gamma}_p \int_{0}^{\infty} t^{p-1} {\rm Tr} \bigg( {1 \over t + M_d + F} & D & {1 \over t + M_d + F} \,\, D \\ \nonumber + {1 \over t + M_d - F} & D & {1 \over t + M_d - F}\,\, D \\ \nonumber - 2 \,\,{1 \over t + M_d} & D & {1 \over t + M_d}\,\, D \bigg) d t \end{eqnarray} Ball, Carlen and Lieb \cite{BCL} proved that for $t \geq 0$, and for any self-adjoint matrix $A$, the map \begin{eqnarray} X \longmapsto {\rm Tr} {1 \over t + X} A {1 \over t + X} A \end{eqnarray} is convex on the set of positive matrices. Applying this to (\ref{phi-der2}) with $X = M_d$ and $A = D$ shows that ${\phi}''(0) \geq 0$, which is the convexity result in Lemma \ref{lemma2}.
\subsection{Proof of Lemma \ref{lemma3}} Since $g$ is homogeneous it is sufficient to prove that \begin{eqnarray} g(A + B) \leq g(A) + g(B) \end{eqnarray} for any $A,B$ of the specified form. To prove this, it is sufficient to show that \begin{eqnarray}
{d \over dt} g(A + t B) |_{t=0} \leq g(B) \end{eqnarray} for any $A,B$. Let \begin{eqnarray} A = \pmatrix{a & c \cr c & b},\quad B = \pmatrix{x & y \cr y & z} \end{eqnarray} Define \begin{eqnarray} M = \pmatrix{a^{1/p} & c^{1/p} \cr c^{1/p} & b^{1/p}},\quad L = \pmatrix{a^{(1-p)/p} x & c^{(1-p)/p} y \cr c^{(1-p)/p} y & b^{(1-p)/p} z} \end{eqnarray} Then \begin{eqnarray}\label{der1}
{d \over dt} g(A + t B) |_{t=0} = {\rm Tr} M^{p-1} \, L \end{eqnarray}
The idea of the proof is to maximise the right side of (\ref{der1}) as a function of $M$, and show that the maximum is achieved when $A$ and $B$ are proportional, in which case the bound is an equality. This will be done by explicitly finding the critical points of ${\rm Tr} M^{p-1} \, L$.
To this end write the spectral decomposition of $M$ in the form \begin{eqnarray} M = \pmatrix{a^{1/p} & c^{1/p} \cr c^{1/p} & b^{1/p}} = \lambda P_1 + \mu P_2 \end{eqnarray} where $P_i$ are projectors onto the normalised eigenvectors of $M$, and $\lambda, \mu$ are the eigenvalues (notice that the positivity of $A$ and $B$ implies that both $M$ and $L$ are also positive). If we assume that $\lambda \geq \mu$ then for some $0 \leq t \leq 1$ we have \begin{eqnarray} a^{1/p} & = & \lambda t + \mu (1-t) \\ c^{1/p} & = & \sqrt{t(1-t)}(\lambda - \mu) \\ b^{1/p} & = & \lambda (1-t) + \mu t \end{eqnarray} Furthermore it also follows that \begin{eqnarray} M^{p-1} = \pmatrix{k_{11} & k_{12} \cr k_{12} & k_{22}} = {\lambda}^{p-1} P_1 + {\mu}^{p-1} P_2 \end{eqnarray} where \begin{eqnarray} k_{11} & = & {\lambda}^{p-1} t + {\mu}^{p-1} (1-t) \\ k_{12} & = & \sqrt{t(1-t)}({\lambda}^{p-1} - {\mu}^{p-1}) \\ k_{22} & = & {\lambda}^{p-1} (1-t) + {\mu}^{p-1} t \end{eqnarray} Substituting into (\ref{der1}) gives \begin{eqnarray}\label{der2} {\rm Tr} M^{p-1} \, L = k_{11} a^{(1-p)/p} x + 2 k_{12} c^{(1-p)/p} y + k_{22} b^{(1-p)/p} z \end{eqnarray} Equation (\ref{der2}) is invariant under a rescaling of $M$. Define \begin{eqnarray} h = {\mu \over \lambda}, \quad\quad 0 \leq h \leq 1 \end{eqnarray} then (\ref{der2}) is a function of $t$ and $h$, and can be written as \begin{eqnarray} {\rm Tr} M^{p-1} \, L = F(t,h) = F_1 (t,h) x + F_2 (t,h) y + F_3 (t,h) z \end{eqnarray} where \begin{eqnarray} F_1(t,h) & = & {t + (1-t) h^{p-1} \over (t + (1-t) h)^{p-1}} \\ F_2(t,h) & = & 2 \bigg(t(1-t)\bigg)^{1 - p/2} {1 - h^{p-1} \over (1-h)^{p-1}} \\ F_3(t,h) & = & F_1(1-t,h) \end{eqnarray}
The goal is to maximise $F(t,h)$ over $t$ and $h$. Define \begin{eqnarray} G & = & \Big(t + (1-t) h\Big) \Big(1 - h^{p-1}\Big) - (p-1) (1-h) \Big(t + (1-t) h^{p-1}\Big) \\ H & = & \Big((1-t) + t h\Big) \Big(1 - h^{p-1}\Big) - (p-1) (1-h) \Big((1-t) + t h^{p-1}\Big) \end{eqnarray} and also let \begin{eqnarray} \xi & = & x \Big(t + (1-t) h\Big)^{-p} \\ \eta & = & y (1-h)^{-p} \, \Big(t(1-t)\Big)^{-p/2} \\ \zeta & = & z \Big(1-t + t h\Big)^{-p} \end{eqnarray}
Then explicit calculation shows that \begin{eqnarray} {\partial F \over \partial t} = G \xi - (G-H) \eta - H \zeta \end{eqnarray} and \begin{eqnarray} {\partial F \over \partial h} = - t(1-t) (p-1) (1 - h^{p-2}) (\xi - 2 \eta + \zeta) \end{eqnarray}
The critical equations are \begin{eqnarray}\label{crit} {\partial F \over \partial t} = {\partial F \over \partial h} = 0 \end{eqnarray} One obvious set of solutions is obtained when $t=0$ or $t=1$, or $h=1$. In all of these cases, the matrix $M$ must be diagonal, in which case (\ref{der1}) implies \begin{eqnarray} {\rm Tr} M^{p-1} \, L = {\rm Tr} B = {\rm Tr} \pmatrix{x^{1/p} & 0 \cr 0 & z^{1/p}}^p \leq g(B) \end{eqnarray} and this establishes the result. If $0 < t < 1$ and $h < 1$, the critical equations can be written \begin{eqnarray}\label{crit2} G (\xi - \eta) & = & H(\zeta - \eta) \nonumber \\
\xi - \eta & = & - (\zeta - \eta) \end{eqnarray} It is easy to show that $h < 1$ implies that $G >0$ and $H > 0$, hence the solution of (\ref{crit2}) satisfies $\xi = \eta = \zeta$. In this case $M$ must be proportional to the matrix \begin{eqnarray} \pmatrix{x^{1/p} & y^{1/p} \cr y^{1/p} & z^{1/p}} \end{eqnarray} and substituting into (\ref{der1}) then gives \begin{eqnarray} {\rm Tr} M^{p-1} \, L = g(B) \end{eqnarray} which proves the result.
\subsection{Proof of Lemma \ref{lemma4}} By the convexity result Lemma \ref{lemma2} (applied with $n=1$), it is sufficient to prove that the function $(a,b) \mapsto {\rm Tr} A^{p} - a^{p} - b^{p}$ is decreasing as $a, b \rightarrow \infty$. For $a \gg 1$, and for $1 < p < 2$, easy estimates show that \begin{eqnarray} {\rm Tr} A^{p} - a^{p} - b^{p} \simeq p c^2 a^{p-2} \end{eqnarray} which is indeed decreasing. Similarly for $b$.
\section{Application to qubit maps} Quantum information theory has generated an interesting conjecture concerning completely positive maps on matrix algebras. Let $\Phi$ be a completely positive trace-preserving (CPTP) map on the algebra of $n \times n$ matrices. The minimal entropy of $\Phi$ is defined by \begin{eqnarray}\label{def:Smin} S_{\rm min}(\Phi) = \inf_{\rho} S(\Phi(\rho)) \end{eqnarray} where $S$ is the von Neumann entropy and the $\inf$ runs over $n \times n$ density matrices (satisfying $\rho \geq 0$ and ${\rm Tr} \rho = 1$). Minimal entropy is conjectured to be additive for product maps, that is, it is conjectured that \begin{eqnarray}\label{conj1} S_{\rm min}(\Phi_1 \otimes \Phi_2) = S_{\rm min}(\Phi_1) + S_{\rm min}(\Phi_2) \end{eqnarray} for any pair of CPTP maps $\Phi_1$ and $\Phi_2$. The conjecture (\ref{conj1}) has been established in some special cases \cite{S}, \cite{K2} but a general proof remains elusive.
For related reasons, Amosov, Holevo and Werner \cite{AHW} defined the maximal $p$-norm for a CPTP map to be \begin{eqnarray}\label{def:nu}
{\nu}_{p}(\Phi) = \sup_{\rho} || \Phi(\rho) ||_p \end{eqnarray} where the $\sup$ runs again over density matrices. They conjectured that this quantity is multiplicative for product maps, that is \begin{eqnarray}\label{AHW} {\nu}_{p}(\Phi_1 \otimes \Phi_2) = {\nu}_{p}(\Phi_1) \,\, {\nu}_{p}(\Phi_2) \end{eqnarray} Holevo and Werner later discovered a family of counterexamples to this conjecture for $p \geq 4.79$, using maps which act on $3 \times 3$ or higher dimensional matrices \cite{WH}. The conjecture remains open if at least one of the pair is a qubit map (which acts on $2 \times 2$ matrices) or if $p \leq 4$.
As an application of Theorem \ref{thm1}, we now show that it implies the result (\ref{AHW}) in one special case, namely when $\Phi_1$ is the qubit depolarizing channel and $p \geq 2$. This result was derived previously using a lengthier argument \cite{K2}, and the purpose of this presentation is to explore an alternative method which may allow new approaches to the additivity problem. Indeed, the method shown below can be easily extended to cover all unital qubit channels and even some non-unital qubit maps, thus extending the results in \cite{K1} which were derived for integer values of $p$. Unfortunately, the restriction to $p \geq 2$ does not allow any conclusions to be drawn about additivity of minimal entropy.
The depolarizing channel $\Delta$ acts on a state $\rho = \pmatrix{a & c \cr \overline{c} & b}$ by \begin{eqnarray} \Delta(\rho) = \lambda \rho + {1 - \lambda \over 2} I = \pmatrix{{\lambda}_{+} a + {\lambda}_{-} b & \lambda c \cr \lambda \overline{c} & {\lambda}_{-} a + {\lambda}_{+} b} \end{eqnarray} where $\lambda$ is a real parameter and ${\lambda}_{\pm} = (1 \pm \lambda)/2$. We will suppose here that $0 \leq \lambda \leq 1$. The maximal $p$-norm of $\Delta$ is easily computed to be \begin{eqnarray} {\nu}_{p}(\Delta) = \Bigg( \Big({1 + \lambda \over 2}\Big)^p + \Big({1 - \lambda \over 2}\Big)^p \Bigg)^{1/p} \end{eqnarray} Now consider a positive $2n \times 2n$ matrix $M$: \begin{eqnarray} M = \pmatrix{A & C \cr C^{*} & B} \end{eqnarray} The map $\Delta \otimes I$ acts on $M$ via \begin{eqnarray} (\Delta \otimes I) (M) = \pmatrix{{\lambda}_{+} A + {\lambda}_{-} B & \lambda C \cr \lambda C^{*} & {\lambda}_{-} A + {\lambda}_{+} B} \end{eqnarray}
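The formula for ${\nu}_p(\Delta)$ can be sanity-checked numerically (a Python/numpy sketch of ours, not part of the original text): since $\Delta(U \rho U^{*}) = U \Delta(\rho) U^{*}$ for every unitary $U$ and the $p$-norm is unitarily invariant, every pure state should attain the same value, namely ${\nu}_p(\Delta)$.

```python
import numpy as np

def pnorm(A, p):
    # Schatten p-norm via singular values
    s = np.linalg.svd(A, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))

def depolarize(rho, lam):
    # qubit depolarizing channel
    return lam * rho + (1.0 - lam) / 2.0 * np.eye(2)

lam, p = 0.6, 3.0
nu = (((1 + lam) / 2) ** p + ((1 - lam) / 2) ** p) ** (1.0 / p)

rng = np.random.default_rng(3)
best = 0.0
for _ in range(2000):
    v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    v /= np.linalg.norm(v)
    rho = np.outer(v, v.conj())          # random pure state
    best = max(best, pnorm(depolarize(rho, lam), p))

assert abs(best - nu) < 1e-9             # every pure state attains nu_p(Delta)
```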
Let $p \geq 2$, and let $q \leq 2$ be the index conjugate to $p$. Then as explained at the start of section 2, there is a positive $2n \times 2n$ matrix $K$ satisfying
$|| K ||_q = 1$ such that \begin{eqnarray}\label{eqn5}
|| (\Delta \otimes I) (M) ||_p = {\rm Tr} \bigg(K (\Delta \otimes I) (M) \bigg) \end{eqnarray}
Following the methods used in (\ref{a->b}), this leads to \begin{eqnarray} {\rm Tr} \bigg(K (\Delta \otimes I) (M) \bigg) & \leq &
\bigg|\bigg| \pmatrix{
{\lambda}_{+} ||A||_p + {\lambda}_{-} ||B||_p & \lambda ||C||_p \cr
\lambda ||C||_p & {\lambda}_{-} ||A||_p + {\lambda}_{+} ||B||_p}
\bigg|\bigg|_{p} \nonumber \\
& = & || \Delta(m) ||_p \end{eqnarray} where $m$ is the $2 \times 2$ matrix
\begin{eqnarray} m = \pmatrix{||A||_p & ||C||_p \cr
||C||_p & ||B||_p} \end{eqnarray} By definition of the $p$-norm this implies \begin{eqnarray}\label{nextineq}
|| (\Delta \otimes I) (M) ||_p \leq {\nu}_{p}(\Delta) \,\, \Big( ||A||_p + ||B||_p \Big) \end{eqnarray}
Now let $\rho$ be a $2n \times 2n$ density matrix, \begin{eqnarray} \rho = \pmatrix{\rho_{11} & \rho_{12} \cr \rho_{21} & \rho_{22}} \end{eqnarray} and consider the case where $M = (I \otimes \Phi)(\rho)$ and $\Phi$ is some other channel, so that $(\Delta \otimes I) (M) = (\Delta \otimes \Phi)(\rho)$. Then \begin{eqnarray} A = \Phi(\rho_{11}), \quad B = \Phi(\rho_{22}) \end{eqnarray} and hence \begin{eqnarray}
||A||_p + ||B||_p \leq \nu_{p}(\Phi) \,\, {\rm Tr} (\rho_{11} + \rho_{22}) = \nu_{p}(\Phi) \end{eqnarray} Therefore (\ref{nextineq}) implies that \begin{eqnarray}\label{ineq5}
|| (\Delta \otimes \Phi) (\rho) ||_p \leq {\nu}_{p}(\Delta) \, \nu_{p}(\Phi) \end{eqnarray} Since (\ref{ineq5}) is valid for all $\rho$, we get \begin{eqnarray} {\nu}_{p}(\Delta \otimes \Phi) \leq {\nu}_{p}(\Delta) \, \nu_{p}(\Phi) \end{eqnarray} and this establishes the result (\ref{AHW}), since the inequality in the other direction follows by restricting to product states.
{\bf Acknowledgements} This work was supported in part by National Science Foundation Grant DMS--0101205.
\end{document}
\begin{document}
\title[Malliavin calculus for Lévy processes]{Malliavin calculus and Clark-Ocone formula for functionals of a square-integrable Lévy process}
\author{Jean-François Renaud} \thanks{Corresponding author: renaud@dms.umontreal.ca} \address{J.-F. Renaud: D{\'e}partement de math{\'e}matiques et de statistique, Universit{\'e} de Montr{\'e}al, C.P. 6128, Succ. Centre-Ville, Montr{\'e}al, Qu{\'e}bec, H3C 3J7, Canada} \email{renaud@dms.umontreal.ca}
\author{Bruno R{\'e}millard} \address{B. Rémillard: Service de l'enseignement des m{\'e}thodes quantitatives de gestion, HEC Montr{\'e}al, 3000 chemin de la C{\^o}te-Sainte-Catherine, Montr{\'e}al, Qu{\'e}bec, H3T 2A7, Canada} \email{bruno.remillard@hec.ca}
\keywords{Clark-Ocone formula; Malliavin derivative; Malliavin calculus; martingale representation; chaotic representation; Lévy process} \subjclass[2000]{60H07, 60G51}
\maketitle
\begin{abstract} In this paper, we construct a Malliavin derivative for functionals of square-integrable Lévy processes and derive a Clark-Ocone formula. The Malliavin derivative is defined via chaos expansions involving stochastic integrals with respect to Brownian motion and Poisson random measure. As an illustration, we compute the explicit martingale representation for the maximum of a Lévy process. \end{abstract}
\section{Introduction}
If $W = (W_t)_{t \in [0,T]}$ is a Brownian motion, then the Wiener-Itô chaos expansion of a square-integrable Brownian functional $F$ is given by \begin{equation}\label{E:wienerito} F = \mathbb{E}[F] + \sum_{n \geq 1} \int_0^T \dots \int_0^T f_n(t_1,\dots,t_n) \, W(dt_1) \dots W(dt_n) , \end{equation} where $(f_n)_{n \geq 1}$ is a sequence of deterministic functions. This chaotic representation can be obtained by iterating Itô's representation theorem and can then be used to define the classical Malliavin derivative in the following way: if the chaos expansion of $F$ satisfies an integrability condition, then $F$ is Malliavin-differentiable and its Malliavin derivative $DF$ is given by \begin{multline}\label{E:derMalliavin} D_t F = f_1(t) \\ + \sum_{n \geq 1} (n+1) \int_0^T \dots \int_0^T f_{n+1} (t_1,\dots,t_n,t) \, W(dt_1) \dots W(dt_n) , \end{multline} for $t \in [0,T]$. This derivative operator is equal to a weak derivative on the Wiener space; the close connection between Hermite polynomials and Brownian motion is at the heart of that equivalence. See for instance Nualart \cite{nualart1995}.
Quite recently, L{\o}kka \cite{lokka2004} developed similar results for a square-integrable pure-jump Lévy process $L = (L_t)_{t \in [0,T]}$ given by $$ L_t = \int_0^t \int_{\mathbb{R}} z (\mu - \pi)(ds,dz) , $$ where $\mu - \pi$ is the compensated Poisson random measure associated with $L$. In this setup, by mimicking the steps of the Wiener-Itô expansion, L{\o}kka obtained a chaos representation property for the pure-jump Lévy process $L$ just as in Equation~\eqref{E:wienerito} and then defined the corresponding Malliavin derivative as in Equation~\eqref{E:derMalliavin}. Later on, Benth et al. \cite{benthetal2003} introduced chaos expansions and a Malliavin derivative for more general Lévy processes, i.e. Lévy processes with a Brownian component. However, in the latter, neither proofs nor connections with the classical definitions were given.
Our first goal is to provide a detailed construction of a chaotic Malliavin derivative leading to a Clark-Ocone formula for Lévy processes. We extend the definitions of the Malliavin derivatives for Brownian motion and pure-jump Lévy processes to general square-integrable Lévy processes. Secondly, we derive additional results that are useful for computational purposes.
Our approach follows more or less the same steps as those leading to the Wiener-Itô chaos expansion and the chaotic Brownian Malliavin derivative, just as L{\o}kka \cite{lokka2004} did for pure-jump Lévy processes. The definition of the directional Malliavin derivatives is different from those of Benth et al. \cite{benthetal2003}. The main idea is to obtain a chaotic representation property (CRP) by iterating a well-chosen martingale representation property (MRP) and then defining directional Malliavin derivatives in the spirit of Ma et al. \cite{maetal1998}. However, in the context of a general square-integrable Lévy process, one has to deal with two integrators and therefore must be careful with the choice of derivative operators in order to extend the classical definitions. This choice will be made with the so-called commutativity relationships in mind and following Le{\'o}n et al. \cite{leonetal2002}. In the Brownian motion setup, the commutativity relationship between Malliavin derivative and Skorohod integral is given by \begin{equation}\label{E:commute} D_t \int_0^T u_s \, W(ds) = u_t + \int_t^T D_t u_s \, W(ds) , \end{equation} when $u$ is an adapted process. See Theorem 4.2 in Nualart and Vives \cite{nualartvives1990} for the corresponding formula in the Poisson process setup.
We will get the MRP using a denseness argument involving Doléans-Dade exponentials. Our path toward the CRP is different from that of Itô \cite{ito1956} and Kunita and Watanabe \cite{kunitawatanabe1967} who used random measures; see also the recent formulation of that approach given by Kunita \cite{kunita2004} and Solé et al. \cite{soleetal2007}. It is known that the CRP usually implies the MRP and that in general a Lévy process does not possess the MRP nor a predictable representation property. However, we show that the CRP and our well-chosen MRP are equivalent for square-integrable Lévy processes. Finally, just as in the Brownian and pure-jump Lévy setups, a Malliavin derivative and a Clark-Ocone formula are derived. As an illustration, we compute the explicit martingale representation for the maximum of a Lévy process.
This approach to Malliavin calculus for Lévy processes is different from the very interesting contributions of Nualart and Schoutens \cite{nualartschoutens2000}, Le{\'o}n et al. \cite{leonetal2002} and Davis and Johansson \cite{davisjohansson2006}. They developed in sequence a Malliavin calculus for Lévy processes using different chaotic decompositions based on orthogonal polynomials. Their construction also relies on the fact that all the moments of their Lévy process exist. Many other chaos decompositions related to Lévy processes have been considered through the years: see for example the papers of Dermoune \cite{dermoune1990}, Nualart and Vives \cite{nualartvives1990}, Aase et al. \cite{aaseetal2000} and Lytvynov \cite{lytvynov2003}.
On the other hand, Kulik \cite{kulik2006} developed a Malliavin calculus for Lévy processes in order to study the absolute continuity of solutions of stochastic differential equations with jumps, while Bally et al. \cite{ballyetal2007} established an integration by parts formula in order to give numerical algorithms for sensitivity computations in a model driven by a Lévy process; see also Bavouzet-Morel and Messaoud \cite{bavouzetmessaoud2006}. Finally, in a very interesting paper, Solé et al. \cite{soleetal2007} constructed a Malliavin calculus for Lévy processes through a suitable canonical space. While finishing this paper, the work of Petrou \cite{petrou2006} was brought to our attention. In that paper, the same methodology is applied to obtain a Malliavin derivative and a Clark-Ocone formula, but the focus is on financial applications. Since one of our goals is to give a thorough treatment of a Malliavin calculus for square-integrable Lévy processes, and considering the major differences between the two papers, we think that each paper has its own interest.
The rest of the paper is organized as follows. In Section 2, preliminary results on Lévy processes are recalled. In Sections 3 and 4, martingale and chaotic representations are successively obtained. Then, in Section 5, the corresponding Malliavin derivative is constructed in order to get a Clark-Ocone formula. Finally, in Section 6, we apply this Clark-Ocone formula to compute the martingale representation of the maximum of a Lévy process.
\section{Preliminary results on Lévy processes}
Let $T$ be a strictly positive real number and let $X = (X_t)_{t \in [0,T]}$ be a Lévy process defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, i.e. $X$ is a process with independent and stationary increments, is continuous in probability and starts from $0$ almost surely. We assume that $X$ is the \textit{càdlàg} modification and that the probability space is equipped with the completed filtration $(\mathcal{F}_t)_{t \in [0,T]}$ generated by $X$. We also assume that the $\sigma$-field $\mathcal{F}$ is equal to $\mathcal{F}_T$.
This filtration satisfies the usual conditions (\textit{les conditions habituelles}) and, for any fixed time $t$, $\mathcal{F}_{t-} = \mathcal{F}_t$. Consequently, the filtration is continuous. This fact is crucial in the statement of our Clark-Ocone formula.
The reader not familiar with Lévy processes is invited to have a look at the books of Schoutens \cite{schoutens2003}, Protter \cite{protter2004} and Bertoin \cite{bertoin1996}.
From the Lévy-Itô decomposition (see \cite{protter2004}, Theorem $42$), we know that $X$ can be expressed as \begin{equation}\label{E:formegen}
X_t = \alpha t + \sigma W_t + \int_0^t \int_{|z| \geq 1} z \, N(ds,dz) + \int_0^t \int_{|z| < 1} z \, \widetilde{N}(ds,dz) \end{equation} where $\alpha$ is a real number, $\sigma$ is a strictly positive real number, $W$ is a standard Brownian motion and $\widetilde{N}$ is the compensated Poisson random measure associated with the Poisson random measure $N$. The Poisson random measure $N$ is independent of the Brownian motion $W$. Its compensator measure is denoted by $\lambda \times \nu$, where $\lambda$ is Lebesgue measure on $[0,T]$ and $\nu$ is the Lévy measure of $X$, i.e. $\nu$ is a $\sigma$-finite measure on $\mathbb{R}$ such that $\nu(\{0\}) = 0$ and $$ \int_{\mathbb{R}} (1 \wedge z^2) \, \nu(dz) < \infty . $$ Therefore the compensated random measure $\widetilde{N}$ is defined by $$ \widetilde{N} ([0,t] \times A) = N ([0,t] \times A) - t \nu(A) . $$ This measure is equal to the measure $\mu - \pi$ mentioned in the introduction.
Finally, let $\mathcal{P}$ be the predictable $\sigma$-field on $[0,T] \times \Omega$ and $\mathcal{B}(\reels)$ the Borel $\sigma$-field on $\mathbb{R}$. We recall that a process $\psi(t,z,\omega)$ is Borel predictable if it is $(\mathcal{P} \times \mathcal{B}(\reels))$-measurable.
\subsection{Square-integrable Lévy processes}
When the Lévy process $X$ is square-integrable, it can also be expressed as \begin{equation}\label{E:Lgen} X_t = \mu t + \sigma W_t + \int_0^t \int_{\reels} z \, \widetilde{N}(ds,dz) , \end{equation} where $\mu = \mathbb{E} [X_1]$. Indeed, in Equation~\eqref{E:formegen} we have that $$
\alpha = \mathbb{E} \left[X_1 - \int_0^1 \int_{|z| \geq 1} z \, N(dt,dz) \right] , $$ so $\mathbb{E}[X_t^2]$ is finite if and only if $$
\int_{\mathbb{R}} z^2 \nu(dz) = \mathbb{E} \left[ \left( \int_0^1 \int_{|z| \geq 1} z \, N(dt,dz) \right)^2 \right] $$ is finite. Note that in general $\mu \neq \alpha$.
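The relation between the two drifts can be made concrete. The following sketch (hypothetical numerical values, not from the paper) treats a Lévy measure $\nu$ with finitely many atoms, for which $\mu = \alpha + \int_{|z| \geq 1} z \, \nu(dz)$ reduces to a finite sum:

```python
# For a Lévy measure nu with finitely many atoms, the drift alpha of the
# Lévy-Itô decomposition and the mean mu = E[X_1] of the square-integrable
# form differ by the mean of the large jumps:
#     mu = alpha + int_{|z| >= 1} z nu(dz).
atoms = {0.5: 2.0, 2.0: 0.3, -3.0: 0.1}    # jump size z -> intensity nu({z})
alpha = 1.0                                 # drift in the Lévy-Itô form

large_jump_mean = sum(z * rate for z, rate in atoms.items() if abs(z) >= 1.0)
mu = alpha + large_jump_mean                # drift of the square-integrable form

# X is square-integrable iff int z^2 nu(dz) < infinity; here a finite sum.
second_moment = sum(z**2 * rate for z, rate in atoms.items())
```

With these sample atoms, the large-jump mean is $0.3$, so $\mu = 1.3 \neq \alpha = 1$.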
Here is a consequence of Itô's formula. \begin{lem}\label{L:procZ} If $h$ belongs to $L^2([0,T], \lambda)$ and if $(t,z) \mapsto e^{g(t,z)} - 1$ belongs to $L^1([0,T] \times \mathbb{R}, \lambda \times \nu)$, define $Z = (Z_t)_{t \in [0,T]}$ by \begin{multline}\label{E:procZ} Z_t = \exp \left\lbrace \int_0^t h(s) \, W(ds) - \frac{1}{2} \int_0^t h^2(s) \, ds + \int_0^t \int_{\reels} g(s,z) \, N(ds,dz) \right. \\ - \left. \int_0^t \int_{\reels} \left( e^{g(s,z)} - 1 \right) \, \nu(dz) ds \right\rbrace . \end{multline} The process $Z$ is a square-integrable martingale if and only if $e^g - 1$ is an element of $L^2([0,T] \times \mathbb{R}, \lambda \times \nu)$. \end{lem} \begin{proof} From the assumptions, we have that $g$ belongs to $L^2([0,T] \times \mathbb{R}, \lambda \times \nu)$ and that $Z$ is a well-defined positive local martingale. Then, if $\mathbb{E}[Z_T] = 1$, it is a martingale. From Itô's formula, we also have that $Z$ is the solution of $$ dZ_t = Z_{t-} \, h(t) \, W(dt) + Z_{t-} \int_{\mathbb{R}} (e^{g(t,z)} - 1) \, \widetilde{N}(dt,dz) , \quad Z_0 = 1 . $$ Let $(\tau_n)_{n \geq 1}$ be the fundamental sequence of stopping times of $Z$. Since $W$ and $N$ are independent, $$ \mathbb{E} [Z_{t \wedge \tau_n}^2] = 1 + \mathbb{E} \left[ \int_0^{t \wedge \tau_n} Z_s^2 \, h^2(s) \, ds \right] + \mathbb{E}\left[ \int_0^{t \wedge \tau_n} Z_s^2 \int_{\mathbb{R}} (e^{g(s,z)} - 1)^2 \, \nu(dz) ds \right] , $$ for every $n \geq 1$. Taking the limit when $n$ goes to infinity yields \begin{equation}\label{E:egalitequadratique} \mathbb{E} [Z_t^2] = 1 + \int_0^t \mathbb{E}[Z_s^2] \, h^2(s) \, ds + \int_0^t \mathbb{E}[Z_s^2] \int_{\mathbb{R}} (e^{g(s,z)} - 1)^2 \, \nu(dz) ds . \end{equation} If we define $G(t) = h^2(t) + \int_{\mathbb{R}} (e^{g(t,z)} - 1)^2 \, \nu(dz)$, then the function $t \mapsto \mathbb{E} [Z_t^2]$ is the solution of $$ F^{\prime}(t) = G(t) F(t) , \quad F(0) = 1 . 
$$ Hence, \begin{equation}\label{E:lemprocZ} \mathbb{E} [Z_t^2] = \exp \left\lbrace \int_0^t h^2(s) \, ds + \int_0^t \int_{\reels} (e^{g(s,z)} - 1)^2 \, \nu(dz) ds \right\rbrace \end{equation} and the statement follows. \end{proof}
For $h \in L^2([0,T], \lambda)$ and $e^g - 1 \in L^2([0,T] \times \mathbb{R}, \lambda \times \nu)$, the process $Z$ is the Doléans-Dade exponential of the square-integrable martingale $(\overline{M}_t)_{t \in [0,T]}$ defined by $$ \overline{M}_t = \int_0^t h(s) \, W(ds) + \int_0^t \int_{\mathbb{R}} (e^{g(s,z)} - 1) \, \widetilde{N}(ds,dz) . $$ In the literature, this is often denoted by $Z = \mathcal{E}(\overline{M})$, the stochastic exponential of $\overline{M}$.
\subsection{A particular choice for $g$}
If $g$ is an element of $L^2([0,T] \times \mathbb{R}, \lambda \times \nu)$, then $e^{g} - 1$ is not necessarily square-integrable. One way to circumvent this problem is to introduce the bijection $\gamma \colon \mathbb{R} \to (-1,1)$ defined by \begin{equation}\label{E:gamma} \gamma(z) = \begin{cases} e^z - 1 \quad \text{if $z < 0$,}\\ 1 - e^{-z} \quad \text{if $z \geq 0$.} \end{cases} \end{equation} Note that $\gamma$ is bounded. Hence, if $h$ is square-integrable on $[0,T]$ and if $g$ is of the form $g(t,z) = \bar{g}(t) \gamma(z)$, where $\bar{g} \in C([0,T])$, i.e. $\bar{g}$ is a continuous function on $[0,T]$, then $Z$ is square-integrable by Lemma~\ref{L:procZ}.
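A direct implementation of the bijection $\gamma$ of Equation~\eqref{E:gamma} and of its inverse (a sketch, not part of the paper) makes its boundedness easy to verify:

```python
# The bijection gamma: R -> (-1, 1) of Equation (E:gamma) and its inverse;
# note that |gamma(z)| < 1 for every real z.
import math

def gamma(z):
    return math.exp(z) - 1.0 if z < 0 else 1.0 - math.exp(-z)

def gamma_inv(y):
    # inverse of gamma on (-1, 1)
    return math.log(1.0 + y) if y < 0 else -math.log(1.0 - y)
```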
The idea of introducing the function $\gamma$ is taken from L{\o}kka \cite{lokka2004}. In that paper, it is also proved that the process $(N_t)_{t \in [0,T]}$ defined by \begin{equation}\label{E:defN} N_t = \int_0^t \int_{\reels} z \, \widetilde{N}(ds,dz) \end{equation} and the process $(\widehat{N}_t)_{t \in [0,T]}$ defined by $$ \widehat{N}_t = \int_0^t \int_{\reels} \gamma(z) \, \widetilde{N}(ds,dz) $$ generate the same filtration. Since $$ \mathcal{F}_t^X = \mathcal{F}_t^W \vee \mathcal{F}_t^N $$ for every $t \in [0,T]$ (see Lemma $3.1$ in \cite{soleetal2007}), we have the following lemma. \begin{lem} For every $t \in [0,T]$, $$ \mathcal{F}_t^X = \mathcal{F}_t^W \vee \mathcal{F}_t^N = \mathcal{F}_t^W \vee \mathcal{F}_t^{\widehat{N}} . $$ As a consequence, $\mathcal{F} = \mathcal{F}_T^W \vee \mathcal{F}_T^{\widehat{N}}$. \end{lem}
This means that the processes $X_t = \mu t + \sigma W_t + N_t$ and $\widehat{X}_t = \mu t + \sigma W_t + \widehat{N}_t$ both generate the filtration $(\mathcal{F}_t)_{t \in [0,T]}$.
\section{Martingale representations}
\begin{assump} For the rest of the paper, we suppose that $X$ is a square-integrable Lévy process with a decomposition as in Equation~\eqref{E:Lgen}. \end{assump}
In general, a Lévy process does not possess the classical \textit{predictable representation property} (PRP), i.e. an integrable random variable $F$ (even with finite higher moments) cannot always be expressed as \begin{equation*} F = \mathbb{E} [F] + \int_0^T u_t \, dX_t , \end{equation*} where $u$ is a predictable process and where the stochastic integral is understood as an integral with respect to a semimartingale. However, a martingale representation property exists for square-integrable functionals of $X$. It is a representation with respect to $W(dt)$ and $\widetilde{N}(dt,dz)$ simultaneously. This result can be found as far back as the paper of Itô \cite{ito1956}. In this section, we will provide a different proof. But first, here is a preparatory lemma.
\begin{lem}\label{L:densite} The linear subspace of $L^2(\Omega, \mathcal{F}, \mathbb{P})$ generated by $$ \left\lbrace Y(h,g) \mid h \in L^2([0,T], \lambda) , \, g \in C([0,T]) \right\rbrace , $$ where the random variables $Y(h,g)$ are defined by \begin{equation}\label{E:dense} Y(h,g) = \exp \left\lbrace \int_0^T h(t) \, W(dt) + \int_0^T \int_{\reels} g(t) \gamma(z) \, \widetilde{N}(dt,dz) \right\rbrace , \end{equation} is dense. \end{lem} \begin{proof} We adapt the proof of Lemma $1.1.2$ in the book of Nualart \cite{nualart1995}. Let $X$ be a square-integrable random variable such that \begin{equation*} \mathbb{E} \left[ X Y(h,g) \right] = 0 \end{equation*} for every $h \in L^2([0,T], \lambda)$ and $g \in C([0,T])$. Let $W(h) = \int_0^T h(t) \, W(dt)$ and $\widetilde{N}(g) = \int_0^T \int_{\reels} g(t) \gamma(z) \, \widetilde{N}(dt,dz)$. Hence, \begin{equation*} \mathbb{E} \left[ X \exp \left\lbrace \sum_{i=1}^n \left( a_i W(h_i) + b_i \widetilde{N}(g_i) \right) \right\rbrace \right] = 0 \end{equation*} for any $n \geq 1$, any $\{a_1,\dots,a_n,b_1,\dots,b_n\} \subset \mathbb{R}$ and any (sufficiently integrable) functions $\{h_1,\dots,h_n,g_1,\dots,g_n\}$. Then, for a fixed $n$ and fixed functions $\{h_1,\dots,h_n,g_1,\dots,g_n\}$, the Laplace transform of the signed measure on $\mathcal{B}(\mathbb{R}^n) \times \mathcal{B}(\mathbb{R}^n)$ defined by \begin{equation*} (A,B) \mapsto \mathbb{E} \left[ X \mathbb{I}_A \bigl( W(h_1), \dots, W(h_n) \bigr) \mathbb{I}_B \bigl( \widetilde{N}(g_1), \dots, \widetilde{N}(g_n) \bigr) \right] , \end{equation*} is identically $0$. Consequently, the measure on $\mathcal{F} = \mathcal{F}_T$ defined by $E \mapsto \mathbb{E} \left[ X \mathbb{I}_E \right]$ vanishes on every pre-image of a rectangle $A \times B$ under the $2n$-dimensional random vector $$ \left(W(h_1), \dots, W(h_n), \widetilde{N}(g_1), \dots, \widetilde{N}(g_n) \right) .
$$ By linearity of the stochastic integrals, this is also true for random vectors of the form $$ \left(W(h_1), \dots, W(h_n), \widetilde{N}(g_1), \dots, \widetilde{N}(g_m) \right) , $$ when $m$ and $n$ are different. Since $\mathcal{F}$ is generated by those random vectors, the measure is identically zero and $X = 0$. \end{proof}
We now state and prove a Martingale Representation Theorem with respect to the Brownian motion and the Poisson random measure simultaneously. \begin{thm}\label{T:prp} Let $F \in L^2(\Omega, \mathcal{F}, \mathbb{P})$. There exist a unique Borel predictable process $\psi \in L^2(\lambda \times \nu \times \mathbb{P})$ and a unique predictable process $\phi \in L^2(\lambda \times \mathbb{P})$ such that \begin{equation}\label{E:repLevy} F = \mathbb{E}[F] + \int_0^T \phi(t) \, W(dt) + \int_0^T \int_{\reels} \psi(t,z) \, \widetilde{N}(dt,dz) . \end{equation} \end{thm} \begin{proof} For $h \in L^2([0,T], \lambda)$ and $g \in C([0,T])$, we know from the proof of Lemma~\ref{L:procZ} that \begin{multline*} Y_t = \exp \left\lbrace \int_0^t h(s) \, W(ds) - \frac{1}{2} \int_0^t h^2(s) \, ds + \int_0^t \int_{\reels} g(s) \gamma(z) \, \widetilde{N}(ds,dz) \right.\\ \left. - \int_0^t \int_{\reels} \left( e^{g(s) \gamma(z)} - 1 - g(s) \gamma(z) \right) \, \nu(dz) ds \right\rbrace \end{multline*} is a solution of \begin{equation} Y_t = 1 + \int_0^t Y_{s-} h(s) \, W(ds) + \int_0^t \int_{\mathbb{R}} Y_{s-} \left( e^{g(s) \gamma(z)} - 1 \right) \, \widetilde{N}(ds,dz) \end{equation} on $[0,T]$. Hence, $Y_T$ admits a martingale representation as in Equation~\eqref{E:repLevy} with $\phi(t) = Y_{t-} h(t)$ and $\psi(t,z) = Y_{t-} \left( e^{g(t) \gamma(z)} - 1 \right)$. These two processes are predictable. Note that $$ Y_T = Y(h,g) e^{- \theta_T (h,g)} $$ where $$ \theta_T (h,g) = \frac{1}{2} \int_0^T h^2(t) \, dt + \int_0^T \int_{\reels} \left( e^{g(t) \gamma(z)} - 1 - g(t) \gamma(z) \right) \, \nu(dz) dt . $$ Since $\theta_T (h,g)$ is deterministic, $Y(h,g)$ also admits a martingale representation as in Equation~\eqref{E:repLevy} but this time with $$ \phi(t) = Y_{t-} h(t) e^{ \theta_T (h,g)} \quad \text{and} \quad \psi(t,z) = Y_{t-} \left( e^{g(t) \gamma(z)} - 1 \right) e^{\theta_T (h,g)} . $$ Therefore, the first statement follows by a denseness argument. 
Indeed, from Lemma~\ref{L:densite}, since $F$ is square-integrable, there exists a sequence $(F_n)_{n \geq 1}$ of square-integrable random variables such that $F_n$ tends to $F$ in the $L^2(\Omega)$-norm when $n$ goes to infinity. Moreover, the $F_n$'s are linear combinations of some $Y(h,g)$'s. Then, for each term in this sequence there exist $\phi_n$ and $\psi_n$ such that $$ F_n = \mathbb{E}[F_n] + \int_0^T \phi_n (t) \, W(dt) + \int_0^T \int_{\reels} \psi_n (t,z) \, \widetilde{N}(dt,dz) . $$ Also, since \begin{align*} \mathbb{E} [F_n - F_m]^2 &= \mathbb{E} \biggr[ \mathbb{E}[F_n - F_m] + \int_0^T (\phi_n (t) - \phi_m (t)) \, W(dt) \\ & \qquad \qquad + \int_0^T \int_{\reels} (\psi_n (t,z) - \psi_m (t,z)) \, \widetilde{N}(dt,dz) \biggr]^2 \\ &= \left( \mathbb{E}[F_n - F_m]\right)^2 + \int_0^T \mathbb{E} [\phi_n (t) - \phi_m (t)]^2 \, dt \\ & \qquad \qquad + \int_0^T \int_{\reels} \mathbb{E} [\psi_n (t,z) - \psi_m (t,z)]^2 \, \nu(dz) dt , \end{align*} we get that $(\phi_n)_{n \geq 1}$ and $(\psi_n)_{n \geq 1}$ are Cauchy sequences. It follows that there exist predictable processes $\psi \in L^2(\lambda \times \nu \times \mathbb{P})$ and $\phi \in L^2(\lambda \times \mathbb{P})$ for which the representation of Equation~\eqref{E:repLevy} is verified.
We now prove the second statement. If $F$ admits two martingale representations with $\phi_1, \phi_2, \psi_1, \psi_2$ (these have nothing to do with the previous sequences), then by Itô's isometry
$$0 = \| \phi_1 - \phi_2 \|^2_{L^2(\lambda \times \mathbb{P})} + \| \psi_1 - \psi_2 \|^2_{L^2(\lambda \times \nu \times \mathbb{P})}$$ and then $\phi_1 = \phi_2$ in $L^2(\lambda \times \mathbb{P})$ and $\psi_1 = \psi_2$ in $L^2(\lambda \times \nu \times \mathbb{P})$. \end{proof}
\begin{rem} From now on, we will refer to this martingale representation property of the Lévy process $X$ as the MRP. \end{rem}
\section{Chaotic representations}
We now define multiple integrals with respect to $W(dt)$ and $\widetilde{N}(dt,dz)$ simultaneously and define Lévy chaos as an extension of Wiener-Itô chaos. Then, we show that any square-integrable Lévy functional can be represented by a chaos expansion. We refer the reader to the lecture notes of Meyer \cite{meyer1976} for more details on multiple stochastic integrals.
\subsection{Notation}
In the following, we unify the notation of the Poisson random measure and the Brownian motion. Thus, the superscript $(1)$ will refer to Brownian motion and the superscript $(2)$ to the Poisson random measure. This is also the notation in \cite{benthetal2003}.
Let $\mathcal{X} = [0,T] \times \mathbb{R}$. We introduce two (projection) operators $\Pi_1 \colon \mathcal{X} \to [0,T]$ and $\Pi_2 \colon \mathcal{X} \to \mathcal{X}$ defined by $\Pi_1(t,z) = t$ and $\Pi_2(t,z) = (t,z)$. Consequently, $\Pi_1 \left( [0,T] \times \mathbb{R} \right) = [0,T]$ and $\Pi_2 \left( [0,T] \times \mathbb{R} \right) = [0,T] \times \mathbb{R}$.
For $n \geq 1$, $t \in [0,T]$ and $(i_1,\dots,i_n) \in \{1,2\}^n$, we also introduce the following notations: \begin{equation}\label{E:defsigmant} \Sigma_n (t) = \left\lbrace (t_1,\dots,t_n) \in [0,T]^n \mid t_1 < \dots < t_n \leq t \right\rbrace ; \end{equation} and \begin{multline*} \Sigma_{(i_1,\dots,i_n)} ([0,t] \times \mathbb{R}) \\ = \left\lbrace (x_1,\dots,x_n) \in \Pi_{i_1}(\mathcal{X}) \times \dots \times \Pi_{i_n}(\mathcal{X}) \mid \Pi_1 (x_1) < \dots < \Pi_1 (x_n) \leq t \right\rbrace . \end{multline*} Consequently, $\Sigma_n (T) = \Sigma_{(i_1,\dots,i_n)} (\mathcal{X})$ when $i_k = 1$ for each $k = 1,2,\dots,n$. If $f$ is a function defined on $\Pi_{i_1}(\x) \times \dots \times \Pi_{i_n}(\x)$, we write $f(x_1,\dots,x_n)$, where $x_k \in \Pi_{i_k}(\mathcal{X})$ for each $k = 1,2,\dots,n$. If $\eta_1 = \lambda$ and $\eta_2 = \lambda \times \nu$, let $L^2 \left( \Sigma_{(i_1,\dots,i_n)} (\x) \right)$ be the space of square-integrable functions defined on $\Sigma_{(i_1,\dots,i_n)} (\x)$ and equipped with the product measure $\eta_{i_1} \times \dots \times \eta_{i_n}$ defined on $\Pi_{i_1}(\mathcal{X}) \times \dots \times \Pi_{i_n}(\mathcal{X})$.
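The ordered simplex $\Sigma_n(t)$ of Equation~\eqref{E:defsigmant} occupies a $1/n!$ fraction of the cube, so for a symmetric integrand the integral over the simplex is $1/n!$ times the integral over the cube. A Riemann-sum check of this fact for $n = 2$ (illustrative, not part of the paper):

```python
# For a symmetric function, the integral over the ordered simplex
# {t1 < t2 <= 1} of Equation (E:defsigmant) is 1/2! times the integral over
# the square [0, 1]^2.  Sample integrand: f(t1, t2) = t1 * t2.
import numpy as np
from math import factorial

n = 400
t = (np.arange(n) + 0.5) / n               # midpoint grid on [0, 1]
T1, T2 = np.meshgrid(t, t, indexing='ij')
f = T1 * T2                                # a symmetric sample integrand

cube = f.sum() / n**2                      # integral over the square: 1/4
simplex = f[T1 < T2].sum() / n**2          # integral over the ordered simplex
```

Up to the grid error, `simplex` equals `cube / 2!`; the same $1/n!$ factor reappears in the chaos computation of Section 4.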
\subsection{Multiple integrals and Lévy chaos}
Fix $n \geq 1$ and $(i_1,\dots,i_n) \in \{1,2\}^n$. We define the iterated integral $J_{(i_1,\dots,i_n)} (f)$, for $f$ in $L^2 \left( \Sigma_{(i_1,\dots,i_n)} (\x) \right)$, by \begin{multline*} J_{(i_1,\dots,i_n)} (f) \\ = \int_{\Pi_{i_n}([0,T] \times \mathbb{R})} \dots \int_{\Pi_{i_1}([0,t_2-] \times \mathbb{R})} f(x_1,\dots,x_n) \, M^{(i_1)}(dx_1) \dots M^{(i_n)}(dx_n) \end{multline*} where $M^{(j)}(dx)$ equals $W(dt)$ if $j = 1$ and equals $\widetilde{N}(dt,dz)$ if $j = 2$. The $i_1$ in $J_{(i_1,\dots,i_n)}$ stands for the innermost stochastic integral and the $i_n$ stands for the outermost stochastic integral. For example, if $n = 3$ and $(i_1,i_2,i_3) = (1,1,2)$, then \begin{multline*} J_{(1,1,2)} (f) \\ = \int_0^T \int_{\mathbb{R}} \left[ \int_0^{t_3-} \left( \int_0^{t_2-} f(t_1,t_2,(t_3,z_3)) W(dt_1) \right) W(dt_2) \right] \widetilde{N}(dt_3,dz_3) . \end{multline*}
As $n$ runs through $\mathbb{N}$ and $(i_1,\dots,i_n)$ runs through $\{1,2\}^n$, the iterated integrals generate orthogonal spaces in $L^2(\Omega)$ that we would like to call \textit{Lévy chaos}. Indeed, since $$ \int_{\Pi_{i}([0,T] \times \mathbb{R})} f (x) M^{(i)}(dx) $$ and $$ \int_{\Pi_{j}([0,T] \times \mathbb{R})} g (x) M^{(j)}(dx) $$ are independent if $i \neq j$ and both have mean zero, using Itô's isometry iteratively, we get the following proposition.
\begin{prop}\label{P:orthogonality} If $f \in L^2(\Sigma_{(i_1,\dots,i_n)} (\x))$ and if $g \in L^2(\Sigma_{(j_1,\dots,j_m)} (\x))$, then \begin{multline*} \mathbb{E} \left[ J_{(i_1,\dots,i_n)}(f) J_{(j_1,\dots,j_m)}(g)\right] \\ = \begin{cases} (f,g)_{L^2(\Sigma_{(i_1,\dots,i_n)} (\x))} & \text{if $(i_1,\dots,i_n) = (j_1,\dots,j_m)$;}\\ 0 & \text{if not.} \end{cases} \end{multline*} \end{prop}
We end this subsection with a definition.
\begin{defn}\label{D:tensorproduct} For $n \geq 1$ and $(i_1,\dots,i_n) \in \{1,2\}^n$, the $(i_1,\dots,i_n)$-tensor product of a function $h$ defined on $[0,T]$ with a function $g$ defined on $[0,T] \times \mathbb{R}$ is a function on $\Pi_{i_1}(\x) \times \dots \times \Pi_{i_n}(\x)$ defined by $$ \left( h \otimes_{(i_1,\dots,i_n)} g \right) (x_1,\dots,x_n) = \prod_{1 \leq k \leq n} h \left( \Pi_1(x_k) \right)^{2 - i_k} g \left( \Pi_2(x_k) \right)^{i_k - 1} . $$ \end{defn}
For example, $$ \left( h \otimes_{(1,1)} g \right) (s,t) = h(s)h(t) $$ is a function defined on $[0,T] \times [0,T]$ and $$ \left( h \otimes_{(1,2,1)} g \right) (r,(s,y),t) = h(r)h(t)g(s,y) $$ is a function defined on $[0,T] \times ([0,T] \times \mathbb{R}) \times [0,T]$.
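Since $2 - i_k \in \{0,1\}$ and $i_k - 1 \in \{0,1\}$, the $k$-th factor of the tensor product is $h(t_k)$ when $i_k = 1$ and $g(t_k,z_k)$ when $i_k = 2$. A hypothetical helper (not part of the paper) implementing Definition~\ref{D:tensorproduct} and reproducing the two examples above:

```python
# The (i1,...,in)-tensor product of Definition (D:tensorproduct): factor k
# contributes h(t_k) when i_k = 1 and g(t_k, z_k) when i_k = 2.
def tensor_product(h, g, indices):
    def product(*xs):
        out = 1.0
        for i, x in zip(indices, xs):
            out *= h(x) if i == 1 else g(*x)   # x is t, or a pair (t, z)
        return out
    return product
```

For instance, with sample functions $h(t) = t + 1$ and $g(t,z) = tz$, the $(1,2,1)$-product evaluates as $h(r)\,g(s,y)\,h(t)$.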
\subsection{Chaotic representation property}
For the rest of the paper, we will assume that $\sum_{(i_1,\dots,i_n)}$ means $\sum_{(i_1,\dots,i_n) \in \{1,2\}^n}$.
Recall that $Z = (Z_t)_{t \in [0,T]}$ was defined in Equation~\eqref{E:procZ} by \begin{multline*} Z_t = \exp \left\lbrace \int_0^t h(s) \, W(ds) - \frac{1}{2} \int_0^t h^2(s) \, ds + \int_0^t \int_{\reels} g(s,z) \, N(ds,dz) \right. \\ - \left. \int_0^t \int_{\reels} \left( e^{g(s,z)} - 1 \right) \, \nu(dz) ds \right\rbrace . \end{multline*}
\begin{lem}\label{L:crp} Let $h \in L^2([0,T])$ and $e^g - 1 \in L^2([0,T] \times \mathbb{R}, \lambda \times \nu)$. Then, $Z_T$ admits the following chaotic representation: \begin{equation} Z_T = 1 + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \left( h \otimes_{(i_1,\dots,i_n)} (e^{g} - 1) \right) . \end{equation} \end{lem} \begin{proof} We know from the proof of Lemma~\ref{L:procZ} that $Z_T$ is square-integrable and that \begin{equation}\label{E:temp} Z_T = 1 + \int_0^T Z_{t-} h(t) \, W(dt) + \int_0^T \int_{\reels} Z_{t-} (e^{g(t,z)} - 1) \, \widetilde{N}(dt,dz) . \end{equation} Let $\phi^{(1)} (t) = Z_{t-} h(t)$ and $\phi^{(2)} (t,z) = Z_{t-} (e^{g(t,z)} - 1)$. We now iterate Equation~\eqref{E:temp}. Consequently, \begin{align*} Z_T = 1 &+ \int_0^T f^{(1)}(t) \, W(dt) + \int_0^T \int_{\reels} f^{(2)}(t,z) \, \widetilde{N}(dt,dz) \\ &+ \int_0^T \int_0^{t-} Z_{s-} h(s) h(t) \, W(ds) \, W(dt) \\ &+ \int_0^T \int_0^{t-} \int_{\reels} Z_{s-} (e^{g(s,y)} - 1) h(t) \, \widetilde{N}(ds,dy) \, W(dt) \\ &+ \int_0^T \int_{\reels} \int_0^{t-} Z_{s-} h(s) (e^{g(t,z)} - 1) \, W(ds) \, \widetilde{N}(dt,dz) \\ &+ \int_0^T \int_{\reels} \int_0^{t-} \int_{\reels} Z_{s-} (e^{g(s,y)} - 1) (e^{g(t,z)} - 1) \, \widetilde{N}(ds,dy) \, \widetilde{N}(dt,dz) \end{align*} where $f^{(1)}(t) = h(t) = \left( h \otimes_{(1)} (e^g - 1) \right) (t)$ and $f^{(2)}(t,z) = e^{g(t,z)} - 1 = \left( h \otimes_{(2)} (e^g - 1) \right) (t,z)$. Then, after $n$ iterations, we get \begin{multline*} Z_T = 1 + \sum_{k = 1}^{n-1} \sum_{(i_1,\dots,i_k)} J_{(i_1,\dots,i_k)} (f^{(i_1,\dots,i_k)}) \\ + \sum_{(i_1,\dots,i_n)} \int_{\Pi_{i_n}([0,T] \times \mathbb{R})} \dots \int_{\Pi_{i_1}([0,t_2-] \times \mathbb{R})} \phi^{(i_1,\dots,i_n)} \left( x_1,\dots,x_n \right) \\ M^{(i_1)}(dx_1) \dots M^{(i_n)}(dx_n) \end{multline*} where $f^{(i_1,\dots,i_k)} = h \otimes_{(i_1,\dots,i_k)} (e^{g} - 1)$ and where $\phi^{(i_1,\dots,i_n)} = Z_{-} (h \otimes_{(i_1,\dots,i_n)} (e^{g} - 1))$. 
This means that we can define a sequence $(\psi_n)_{n \geq 2}$ in $L^2(\Omega)$ by \begin{multline*} \psi_n = \sum_{(i_1,\dots,i_n)} \int_{\Pi_{i_n}([0,T] \times \mathbb{R})} \dots \int_{\Pi_{i_1}([0,t_2-] \times \mathbb{R})} \phi^{(i_1,\dots,i_n)} \left( x_1,\dots,x_n \right) \\ M^{(i_1)}(dx_1) \dots M^{(i_n)}(dx_n) . \end{multline*}
From Proposition~\ref{P:orthogonality}, $$
\mathbb{E}[Z_T^2] = 1 + \sum_{k = 1}^{n-1} \sum_{(i_1,\dots,i_k)} \| f^{(i_1,\dots,i_k)} \|_{L^2(\Sigma_{(i_1,\dots,i_k)}(\mathcal{X}))}^2 + \mathbb{E}[\psi_n^2] $$ for each $n \geq 2$. Hence we get that $$ \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} (f^{(i_1,\dots,i_n)}) $$ is a square-integrable series and that there exists a square-integrable random variable $\psi$ such that $\psi_n$ tends to $\psi$ in the $L^2(\Omega)$-norm. Consequently, it is enough to show that $\psi = 0$. Since $f^{(i_1,\dots,i_n)} = h \otimes_{(i_1,\dots,i_n)} (e^{g} - 1)$, using Proposition~\ref{P:orthogonality} once again, we get that \begin{multline*} \sum_{(i_1,\dots,i_n)} \mathbb{E} \left[ \left( J_{(i_1,\dots,i_n)} (f^{(i_1,\dots,i_n)}) \right)^2 \right] \\
= \sum_{k = 0}^n \sum_{\stackrel{(i_1,\dots,i_n)}{|i|=k}} \| h \otimes_{(i_1,\dots,i_n)} (e^{g} - 1) \|^2_{L^2(\Sigma_{(i_1,\dots,i_n)} (\x))} , \end{multline*}
where $|i| = |(i_1,\dots,i_n)| = \sum_{j = 1}^n (2-i_j)$ stands for the number of times the function $h$ appears in the tensor product. Note that when $|i| = k$ there are $\binom{n}{k}$ terms in the innermost summation. Since $h^2 \otimes_{(i_1,\dots,i_n)} (e^{g} - 1)^2$ is an $(i_1,\dots,i_n)$-tensor product, the function given by $$
\sum_{\stackrel{(i_1,\dots,i_n)}{|i|=k}} h \otimes_{(i_1,\dots,i_n)} (e^{g} - 1) $$ is symmetric on $\Pi_{i_1}(\x) \times \dots \times \Pi_{i_n}(\x)$. Consequently, \begin{align*} & \sum_{(i_1,\dots,i_n)} \mathbb{E} \left[ \left( J_{(i_1,\dots,i_n)} (f^{(i_1,\dots,i_n)}) \right)^2 \right] \\
&= \sum_{k = 0}^n \int_{\Sigma_{(i_1,\dots,i_n)} (\x)} \biggr[ \sum_{\stackrel{(i_1,\dots,i_n)}{|i|=k}} h^2 \otimes_{(i_1,\dots,i_n)} (e^{g} - 1)^2 \biggr] \, d\eta_{i_1} \dots d\eta_{i_n}\\
&= \frac{1}{n!} \sum_{k = 0}^n \int_{\Pi_{i_1}(\x) \times \dots \times \Pi_{i_n}(\x)} \biggr[ \sum_{\stackrel{(i_1,\dots,i_n)}{|i|=k}} h^2 \otimes_{(i_1,\dots,i_n)} (e^{g} - 1)^2 \biggr] \, d\eta_{i_1} \dots d\eta_{i_n}\\
&= \frac{1}{n!} \sum_{k = 0}^n \binom{n}{k} \| h \|^{2k}_{L^2(\lambda)} \|e^{g} - 1\|^{2(n-k)}_{L^2(\lambda \times \nu)} \\
&= \frac{1}{n!} \left( \| h \|^2_{L^2(\lambda)} + \|e^{g} - 1\|^2_{L^2(\lambda \times \nu)} \right)^n . \end{align*}
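For concreteness, the case $n = 2$ of this computation reads as follows (an illustration added here, not part of the original argument): the multi-indices $(i_1,i_2) \in \{1,2\}^2$ are $(1,1)$, $(1,2)$, $(2,1)$ and $(2,2)$, with $|i| = 2, 1, 1, 0$ respectively, so the level $k = 1$ contributes $\binom{2}{1} = 2$ equal cross terms and $$ \sum_{(i_1,i_2)} \mathbb{E} \left[ \left( J_{(i_1,i_2)} (f^{(i_1,i_2)}) \right)^2 \right] = \frac{1}{2!} \left( \| h \|^4_{L^2(\lambda)} + 2 \, \| h \|^2_{L^2(\lambda)} \|e^{g} - 1\|^2_{L^2(\lambda \times \nu)} + \|e^{g} - 1\|^4_{L^2(\lambda \times \nu)} \right) . $$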
From Equation~\eqref{E:lemprocZ}, we know that $$
\mathbb{E} [Z_T^2] = \exp \left\lbrace \| h \|^2_{L^2(\lambda)} + \|e^{g} - 1\|^2_{L^2(\lambda \times \nu)} \right\rbrace . $$ This means that $\psi = 0$ and the statement follows. \end{proof}
We are now ready to state and prove the chaotic representation property of the Lévy process $X$. The previous lemma and the idea of its proof will be of great use.
\begin{thm}\label{T:crp} Let $F \in L^2(\Omega, \mathcal{F}, \mathbb{P})$. There exists a unique sequence $$ \left\lbrace f^{(i_1,\dots,i_n)}; n \geq 1, (i_1,\dots,i_n) \in \{1,2\}^n \right\rbrace , $$ whose elements are respectively in $L^2 \left( \Sigma_{(i_1,\dots,i_n)} (\x) \right)$, such that \begin{equation}\label{E:chaos} F = \mathbb{E}[F] + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \left( f^{(i_1,\dots,i_n)} \right) . \end{equation} Consequently, \begin{equation}\label{E:isometrie}
\mathbb{E}[F^2] = \mathbb{E}^2[F] + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} \| f^{(i_1,\dots,i_n)} \|_{L^2(\Sigma_{(i_1,\dots,i_n)} (\x))}^2 . \end{equation} \end{thm} \begin{proof} From Theorem~\ref{T:prp}, we know there exist a predictable process $\phi^{(1)} \in L^2(\lambda \times \mathbb{P})$ and a Borel predictable process $\phi^{(2)} \in L^2(\lambda \times \nu \times \mathbb{P})$ such that \begin{equation*} F = \mathbb{E}[F] + \int_0^T \phi^{(1)} (t) \, W(dt) + \int_0^T \int_{\reels} \phi^{(2)} (t,z) \, \widetilde{N}(dt,dz) . \end{equation*} Using Itô's isometry, it is clear that $$
\| \phi^{(1)} \|_{L^2(\lambda \times \mathbb{P})}^2 + \| \phi^{(2)} \|_{L^2(\lambda \times \nu \times \mathbb{P})}^2 \leq \mathbb{E}[F^2] . $$ For almost all $t \in [0,T]$, $\phi^{(1)}(t) \in L^2(\Omega, \mathcal{F}_t, \mathbb{P})$ and then from Theorem~\ref{T:prp} there exist processes $\phi^{(1,1)}$ and $\phi^{(1,2)}$ such that \begin{equation*} \phi^{(1)}(t) = \mathbb{E}[\phi^{(1)}(t)] + \int_0^t \phi^{(1,1)} (t,s) \, W(ds) + \int_0^t \int_{\reels} \phi^{(1,2)} (t,s,y) \, \widetilde{N}(ds,dy) . \end{equation*} Similarly, for almost all $(t,z) \in [0,T] \times \mathbb{R}$, $\phi^{(2)}(t,z) \in L^2(\Omega, \mathcal{F}_t, \mathbb{P})$ and \begin{multline*} \phi^{(2)}(t,z) = \mathbb{E}[\phi^{(2)}(t,z)] + \int_0^t \phi^{(2,1)} (t,z,s) \, W(ds) \\ + \int_0^t \int_{\reels} \phi^{(2,2)} (t,z,s,y) \, \widetilde{N}(ds,dy) . \end{multline*} Consequently, \begin{multline*} F = \mathbb{E}[F] + \int_0^T g^{(1)}(t) \, W(dt) + \int_0^T \int_{\reels} g^{(2)}(t,z) \, \widetilde{N}(dt,dz) \\ + \int_0^T \int_0^{t-} \phi^{(1,1)} (t,s) \, W(ds) \, W(dt) \\ + \int_0^T \int_0^{t-} \int_{\reels} \phi^{(1,2)} (t,s,y) \, \widetilde{N}(ds,dy) \, W(dt) \\ + \int_0^T \int_{\reels} \int_0^{t-} \phi^{(2,1)} (t,z,s) \, W(ds) \, \widetilde{N}(dt,dz) \\ + \int_0^T \int_{\reels} \int_0^{t-} \int_{\reels} \phi^{(2,2)} (t,z,s,y) \, \widetilde{N}(ds,dy) \, \widetilde{N}(dt,dz) . \end{multline*} where $g^{(1)}(t) = \mathbb{E}[\phi^{(1)}(t)]$ and $g^{(2)}(t,z) = \mathbb{E}[\phi^{(2)}(t,z)]$. After $n$ steps of this procedure, i.e. 
after $n$ iterations of Theorem~\ref{T:prp}, we get, as in the proof of Lemma~\ref{L:crp}, that $$ F = \mathbb{E}[F] + \sum_{k = 1}^{n-1} \sum_{(i_1,\dots,i_k)} J_{(i_1,\dots,i_k)} (f^{(i_1,\dots,i_k)}) + \psi_n $$ where $f^{(i_1,\dots,i_k)} \in L^2 \left( \Sigma_{(i_1,\dots,i_k)}(\mathcal{X}) \right)$ for each $1 \leq k \leq n-1$ and $(i_1,\dots,i_k) \in \{1,2\}^k$, and where \begin{multline*} \psi_n = \sum_{(i_1,\dots,i_n)} \int_{\Pi_{i_n}([0,T] \times \mathbb{R})} \dots \int_{\Pi_{i_1}([0,t_2-] \times \mathbb{R})} \phi^{(i_1,\dots,i_n)} \left( x_1,\dots,x_n \right) \\ M^{(i_1)}(dx_1) \dots M^{(i_n)}(dx_n) , \end{multline*} with $\phi^{(i_1,\dots,i_n)} \in L^2 \left( \eta_{i_1} \times \dots \times \eta_{i_n} \times \mathbb{P} \right)$ for each $(i_1,\dots,i_n) \in \{1,2\}^n$.
From Proposition~\ref{P:orthogonality}, $$
\mathbb{E}[F^2] = \mathbb{E}[F]^2 + \sum_{k = 1}^{n-1} \sum_{(i_1,\dots,i_k)} \| f^{(i_1,\dots,i_k)} \|_{L^2(\Sigma_{(i_1,\dots,i_k)}(\mathcal{X}))}^2 + \mathbb{E}[\psi_n^2] , $$ for each $n \geq 2$ and $$ \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} (f^{(i_1,\dots,i_n)}) $$ is a square-integrable series. Consequently, we know that there exists a square-integrable random variable $\psi$ such that $\psi_n$ tends to $\psi$ in the $L^2(\Omega)$-norm. It is enough to show that $\psi = 0$. Using the argument leading to Proposition~\ref{P:orthogonality}, i.e. the fact that two iterated stochastic integrals of different order are orthogonal, we get that for a fixed $n \geq 2$, $$ \left( J_{(i_1,\dots,i_k)} (f^{(i_1,\dots,i_k)}), \psi_n \right)_{L^2(\Omega)} = 0 $$ for every $1 \leq k \leq n-1$, $(i_1,\dots,i_k) \in \{1,2\}^k$ and $f^{(i_1,\dots,i_k)} \in L^2(\Sigma_{(i_1,\dots,i_k)}(\mathcal{X}))$. Thus, \begin{equation}\label{E:psiortho} \left( J_{(i_1,\dots,i_n)} (f^{(i_1,\dots,i_n)}), \psi \right)_{L^2(\Omega)} = 0 \end{equation} for every $n \geq 1$, $(i_1,\dots,i_n) \in \{1,2\}^n$ and $f^{(i_1,\dots,i_n)} \in L^2(\Sigma_{(i_1,\dots,i_n)} (\x))$.
We now assume that $g = \bar{g} \gamma$ where $\bar{g}$ belongs to $C([0,T])$. Using Equation~\eqref{E:psiortho}, we have that $\psi$ is orthogonal to each random variable $Y (h,g)$ defined in Equation~\eqref{E:dense}, since from Lemma~\ref{L:crp} they each possess a chaos decomposition. We also know from Lemma~\ref{L:densite} that these random variables are dense in $L^2(\Omega, \mathcal{F}, \mathbb{P})$, so $\psi = 0$. This means that every square-integrable Lévy functional can be expressed as a series of iterated integrals. The statement follows. \end{proof}
\begin{rem} From now on, we will refer to the chaotic representation property of Theorem~\ref{T:crp} as the CRP. \end{rem}
\begin{rem}\label{R:crpimpliesmrp} As mentioned before, in general the CRP implies the MRP. Indeed, if $F$ is a square-integrable Lévy functional with chaos decomposition $$ F = \mathbb{E}[F] + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \left( f^{(i_1,\dots,i_n)} \right) , $$ then $$ F = \mathbb{E}[F] + \int_0^T \phi(t) \, W(dt) + \int_0^T \int_{\reels} \psi(t,z) \, \widetilde{N}(dt,dz) , $$ with \begin{equation*} \begin{aligned} \phi(t) &= f^{(1)}(t) + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \left( f^{(i_1,\dots,i_n,1)} (\cdot,t) \mathbb{I}_{\Sigma_n (t)} \right) ,\\ \psi(t,z) &= f^{(2)}(t,z) + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \left( f^{(i_1,\dots,i_n,2)} (\cdot,(t,z)) \mathbb{I}_{\Sigma_n (t)} \right) . \end{aligned} \end{equation*} \end{rem}
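To see where the displayed integrands come from (a clarifying step we add), unpack the outermost integral of one iterated integral: when the last index is $1$, the outer integration is Brownian, so $$ J_{(i_1,\dots,i_n,1)} \left( f^{(i_1,\dots,i_n,1)} \right) = \int_0^T J_{(i_1,\dots,i_n)} \left( f^{(i_1,\dots,i_n,1)} (\cdot,t) \, \mathbb{I}_{\Sigma_n (t)} \right) W(dt) . $$ Summing these contributions over $n$ and $(i_1,\dots,i_n)$, together with the first-order term $f^{(1)}$, yields the formula for $\phi$; the formula for $\psi$ is obtained in the same way from the multi-indices ending in $2$.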
This last remark, together with our journey from the MRP of Theorem~\ref{T:prp} to the CRP of Theorem~\ref{T:crp}, yields the following interesting proposition.
\begin{prop} For a square-integrable Lévy process, the MRP and the CRP are equivalent. \end{prop}
\subsection{Explicit chaos representation}
In the next proposition, we compute the explicit chaos representation of a \textit{smooth} Lévy functional.
\begin{prop} Let $f$ be a smooth function with compact support in $\mathbb{R}^k$, i.e. let $f \in C^{\infty}_c (\mathbb{R}^k)$, and let $t_j$ belong to $[0,T]$ for each $j=1,\dots,k$. Then, $$ f(X_{t_1},\dots,X_{t_k}) = \mathbb{E}[f(X_{t_1},\dots,X_{t_k})] + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} (f^{(i_1,\dots,i_n)}) , $$ where \begin{multline*} f^{(i_1,\dots,i_n)}(\Pi_{i_1}(s_1,w_1),\dots,\Pi_{i_n}(s_n,w_n)) \\
= \int_{\mathbb{R}^k} \hat{f}(y) \widehat{\phi} (-y) \prod_{1 \leq j \leq n} (i \sigma \xi^{t,y}_{s_j})^{2 - i_j} (e^{i w_j \xi^{t,y}_{s_j}} - 1)^{i_j - 1} \, dy , \end{multline*} with $$ \phi(x) \, dx = \mathbb{P}\{X_t \in dx\} , $$ where $X_t = (X_{t_1},\dots,X_{t_k})$, and with $$ \xi_s^{t,y} = y_1 \mathbb{I}_{[0,t_1]}(s) + \dots + y_k \mathbb{I}_{[0,t_k]}(s) , $$ for $t = (t_1,\dots,t_k)$ and $y = (y_1,\dots,y_k)$. \end{prop} \begin{proof} We follow an idea in \cite{lokka2004} and use Fourier transforms. By the Fourier inversion formula \begin{equation}\label{E:fourierinvf} f(x) = (2 \pi)^{-k/2} \int_{\mathbb{R}^k} \hat{f}(y) e^{i \langle x,y \rangle} \, dy \end{equation} where $\hat{f}$ is the Fourier transform of $f$ and $\langle x,y \rangle$ denotes the scalar product in $\mathbb{R}^k$ of $x = (x_1,\dots,x_k)$ and $y = (y_1,\dots,y_k)$. Let $\phi^- (x) = \phi(-x)$. If we define $F(x) = \mathbb{E} \left[ f(X_{t_1} + x_1,\dots,X_{t_k} + x_k) \right]$, then $$ F(x) = (f \ast \phi^-)(x) $$ and also, since $\widehat{f \ast \phi^-} = (2 \pi)^{k/2} \hat{f} \, \widehat{\phi^-}$ in this normalization, $$ F(x) = \int_{\mathbb{R}^k} \hat{f}(y) \widehat{\phi^-}(y) e^{i \langle x,y \rangle} \, dy . $$ Therefore, taking $x = 0$, we have the following equality: $$ \mathbb{E}[f(X_{t_1},\dots,X_{t_k})] = \int_{\mathbb{R}^k} \hat{f}(y) \widehat{\phi^-}(y) \, dy . $$
From Equation~\eqref{E:fourierinvf} and Equation~\eqref{E:Lgen}, we have that \begin{align*} f(X_{t_1},\dots,X_{t_k}) &= (2 \pi)^{-k/2} \int_{\mathbb{R}^k} \hat{f}(y) e^{i \langle X_t,y \rangle } \, dy \\ &= (2 \pi)^{-k/2} \int_{\mathbb{R}^k} \hat{f}(y) e^{i \mu \langle t,y \rangle } Y^{t,y} \, dy \end{align*} where $$ Y^{t,y} = \exp \left\lbrace \int_0^T i \sigma \xi_s^{t,y} \, W(ds) + \int_0^T \int_{\reels} i z \xi_s^{t,y} \, \widetilde{N}(ds,dz) \right\rbrace . $$ Hence, \begin{equation}\label{E:fdeX} f(X_{t_1},\dots,X_{t_k}) = (2 \pi)^{-k/2} \int_{\mathbb{R}^k} \hat{f}(y) e^{i \mu \langle t,y \rangle } Z^{t,y} \mathbb{E} [Y^{t,y}] \, dy \end{equation} where \begin{multline*} Z^{t,y} = \exp \biggl\lbrace \int_0^T i \sigma \xi_s^{t,y} \, W(ds) + \frac{1}{2} \sigma^2 \int_0^T (\xi_s^{t,y})^2 \, ds \\ + \int_0^T \int_{\reels} i z \xi_s^{t,y} \, N(ds,dz) - \int_0^T \int_{\reels} (e^{i z \xi_s^{t,y}} - 1) \, \nu(dz) ds \biggr\rbrace . \end{multline*} From Lemma~\ref{L:crp}, we know that $$ Z^{t,y} = 1 + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \left( (i \sigma \xi^{t,y}) \otimes_{(i_1,\dots,i_n)} (e^{i z \xi^{t,y}} - 1) \right) . $$
On the other hand, \begin{align*} \mathbb{E} [Y^{t,y}] &= e^{-i \mu \langle t,y \rangle } \mathbb{E} \left[ e^{i \langle X_t,y \rangle } \right] \\ &= (2 \pi)^{k/2} e^{-i \mu \langle t,y \rangle } \widehat{\phi} (-y) . \end{align*} Then, using Equation~\eqref{E:fdeX} and by Lebesgue's dominated convergence theorem, \begin{align*} f(& X_{t_1},\dots,X_{t_k}) \\ &= \int_{\mathbb{R}^k} \hat{f}(y) \widehat{\phi} (-y) \, dy \\ & \quad \quad + \int_{\mathbb{R}^k} \hat{f}(y) \widehat{\phi} (-y) \\ & \qquad \qquad \qquad \times \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \left( (i \sigma \xi^{t,y}) \otimes_{(i_1,\dots,i_n)} (e^{i z \xi^{t,y}} - 1) \right) \, dy \\ &= \mathbb{E}[f(X_{t_1},\dots,X_{t_k})] + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} (f^{(i_1,\dots,i_n)}) , \end{align*} where \begin{multline*} f^{(i_1,\dots,i_n)}(\Pi_{i_1}(s_1,w_1),\dots,\Pi_{i_n}(s_n,w_n)) \\ = \int_{\mathbb{R}^k} \hat{f}(y) \widehat{\phi} (-y) \left( (i \sigma \xi^{t,y}) \otimes_{(i_1,\dots,i_n)} (e^{i z \xi^{t,y}} - 1) \right) \, dy . \end{multline*} The statement follows from Definition~\ref{D:tensorproduct}. \end{proof}
\section{Malliavin derivatives and Clark-Ocone formula}
Before defining the Malliavin derivatives, we introduce a last notation: for $n \geq 1$ and $1 \leq k \leq n+1$, define $$ \Sigma^k_n (t) = \left\lbrace (t_1,\dots,t_n) \in [0,T]^n \mid t_1 < \dots < t_{k-1} < t < t_k < \dots < t_n \right\rbrace , $$ i.e. $t$ is at the $k$-th position between the $t_j$'s, where $t_0 = 0$ and $t_{n+1} = T$. Note that $\Sigma^{n+1}_n (t) = \Sigma_n (t)$, where the latter was defined earlier in Equation~\eqref{E:defsigmant}. In a multi-index $(i_1,\dots,i_n)$, we will use $\widehat{i}_k$ to denote the omission of the $k$-th index.
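To fix ideas, for $n = 2$ the three possible positions of $t$ give $$ \Sigma^1_2 (t) = \left\lbrace (t_1,t_2) : t < t_1 < t_2 < T \right\rbrace , \quad \Sigma^2_2 (t) = \left\lbrace (t_1,t_2) : t_1 < t < t_2 < T \right\rbrace , \quad \Sigma^3_2 (t) = \left\lbrace (t_1,t_2) : t_1 < t_2 < t \right\rbrace = \Sigma_2 (t) . $$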
We want to define two directional derivative operators in the spirit of Le{\'o}n et al. \cite{leonetal2002}: one in the direction of the Brownian motion and one in the direction of the Poisson random measure. If $F = J_{(i_1,\dots,i_n)} (f)$, then we would like to define $D^{(1)}_t F$ and $D^{(2)}_{t,z} F$ as follows: $$ D^{(1)}_t F = \sum_{k=1}^n \mathbb{I}_{\{i_k = 1\}} J_{(i_1,\dots,\widehat{i}_k,\dots,i_n)} \bigr( f(\underbrace{\dots}_{k-1},t,\underbrace{\dots}_{n - k}) \mathbb{I}_{\Sigma^k_{n-1} (t)} \bigr) $$ and $$ D^{(2)}_{t,z} F = \sum_{k=1}^n \mathbb{I}_{\{i_k = 2\}} J_{(i_1,\dots,\widehat{i}_k,\dots,i_n)} \bigr( f(\underbrace{\dots}_{k-1},(t,z),\underbrace{\dots}_{n - k}) \mathbb{I}_{\Sigma^k_{n-1} (t)} \bigr) $$ where $J_{(\, \widehat{i} \,)} (f) = f$.
\begin{defn} Let $\mathbb{D}^{1,2} = \mathbb{D}^{(1)} \cap \mathbb{D}^{(2)}$, where if $j = 1$ or if $j = 2$, $\mathbb{D}^{(j)}$ is the subset of $L^2(\Omega, \mathcal{F}, \mathbb{P})$ consisting of the random variables $F$ with chaotic representation
$$ F = \mathbb{E}[F] + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \bigr( f^{(i_1,\dots,i_n)} \bigr) $$ such that $$
\sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} \sum_{k=1}^n \mathbb{I}_{\{i_k = j\}} \int_{\Pi_{j}(\mathcal{X})} \bigr\| f^{(i_1,\dots,i_n)} (\cdot,x,\cdot) \mathbb{I}_{\Sigma^k_{n-1} (t)} \bigr\|^2 \, \eta_{j}(dx) < \infty , $$ where the inside norm is the $L^2(\Sigma_{(i_1,\dots,\widehat{i_k},\dots,i_n)} (\x))$-norm. \end{defn}
From Theorem~\ref{T:crp}, it is clear that $\mathbb{D}^{1,2}$ is dense in $L^2(\Omega)$, since every random variable with a chaos representation given by a finite sum belongs to $\mathbb{D}^{1,2}$.
\begin{defn}\label{D:derivees} The Malliavin derivatives $D^{(1)} \colon \mathbb{D}^{(1)} \to L^2 \left( [0,T] \times \Omega \right)$ and $D^{(2)} \colon \mathbb{D}^{(2)} \to L^2 \left( [0,T] \times \mathbb{R} \times \Omega \right)$ are defined by \begin{multline*} D^{(1)}_t F = f^{(1)}(t) \\ + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} \sum_{k=1}^n \mathbb{I}_{\{i_k = 1\}} J_{(i_1,\dots,\widehat{i}_k,\dots,i_n)} \bigr( f^{(i_1,\dots,i_n)} (\underbrace{\dots}_{k-1},t,\underbrace{\dots}_{n - k}) \mathbb{I}_{\Sigma^k_{n-1} (t)} \bigr) \end{multline*} and \begin{multline*} D^{(2)}_{t,z} F = f^{(2)}(t,z) \\ + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} \sum_{k=1}^n \mathbb{I}_{\{i_k = 2\}} J_{(i_1,\dots,\widehat{i}_k,\dots,i_n)} \bigr( f^{(i_1,\dots,i_n)} (\underbrace{\dots}_{k-1},(t,z),\underbrace{\dots}_{n - k}) \mathbb{I}_{\Sigma^k_{n-1} (t)} \bigr) \end{multline*} if $F = \mathbb{E}[F] + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \left( f^{(i_1,\dots,i_n)} \right)$ is in $\mathbb{D}^{(1)}$ or $\mathbb{D}^{(2)}$. \end{defn}
\begin{rem}\label{R:commutativity} For an iterated integral, the Malliavin derivatives have a property similar to the classical commutativity relationship. Indeed, if $F = J_{(i_1,\dots,i_n)} (f)$, then \begin{equation*} D^{(2)}_{t,z} F = \int_t^T D^{(2)}_{t,z} J_{(i_1,\dots,i_{n-1})} \bigr( f(\cdot,s) \mathbb{I}_{\Sigma_{n-1} (s)} \bigr) \, W(ds) \end{equation*} if $i_n = 1$ and \begin{multline*} D^{(2)}_{t,z} F = J_{(i_1,\dots,i_{n-1})} \bigr( f(\cdot,(t,z)) \mathbb{I}_{\Sigma_{n-1} (t)} \bigr) \\ + \int_t^T \int_{\mathbb{R}} D^{(2)}_{t,z} J_{(i_1,\dots,i_{n-1})} \bigr( f(\cdot,(s,y)) \mathbb{I}_{\Sigma_{n-1} (s)} \bigr) \, \widetilde{N}(ds,dy) \end{multline*} if $i_n = 2$. A similar result holds for $D^{(1)} F$. \end{rem}
\begin{rem}\label{R:extension} If $F = \mathbb{E}[F] + \sum_{n = 1}^{\infty} J_n (f_n)$, where $J_n = J_{(1,\dots,1)}$ is the iterated Brownian stochastic integral of order $n$, then \begin{align*} D^{(1)}_t F &= f_1(t) + \sum_{n = 2}^{\infty} \sum_{k=1}^n J_{n-1} \bigr( f_n (\cdot,t,\cdot) \mathbb{I}_{\Sigma^k_{n-1} (t)} \bigr) \\ &= f_1(t) + \sum_{n = 2}^{\infty} J_{n-1} ( f_n (\cdot,t) ) , \end{align*} because $\sum_{k=1}^n \mathbb{I}_{\Sigma^k_{n-1} (t)} = \mathbb{I}_{[0,T]} (t)$. This is the classical Brownian Malliavin derivative of $F$. The same extension clearly holds for the pure-jump case if the $1$'s are replaced by $2$'s. \end{rem}
The definitions of $\mathbb{D}^{(1)}$ and $\mathbb{D}^{(2)}$ come from the fact that we want the codomains of $D^{(1)}$ and $D^{(2)}$ to be $L^2 \left( [0,T] \times \Omega \right)$ and $L^2 \left( [0,T] \times \mathbb{R} \times \Omega \right)$ respectively. We finally define a norm for $DF = (D^{(1)} F, D^{(2)} F)$ in the following way: $$
\| DF \|^2 = \| D^{(1)} F \|^2_{L^2(\lambda \times \mathbb{P})} + \| D^{(2)} F \|^2_{L^2(\lambda \times \nu \times \mathbb{P})} . $$ This is a norm on the product space $L^2(\lambda \times \mathbb{P}) \times L^2(\lambda \times \nu \times \mathbb{P})$.
\subsection{Properties and interpretation of the Malliavin derivatives}
We begin this section with a result concerned with the \textit{continuity} of $D$. It is an extension of Lemma 1.2.3 in Nualart \cite{nualart1995}. The proof is given in Appendix~\ref{A:lemtech2}.
\begin{lem}\label{L:lemtech2}
If $F$ belongs to $L^2(\Omega)$, if $(F_k)_{k \geq 1}$ is a sequence of elements in $\mathbb{D}^{1,2}$ converging to $F$ in the $L^2(\Omega)$-norm and if $\sup_{k \geq 1} \| D F_k \| < \infty$, then $F$ belongs to $\mathbb{D}^{1,2}$ and $(D F_k)_{k \geq 1}$ converges weakly to $D F$ in $L^2(\lambda \times \mathbb{P}) \times L^2(\lambda \times \nu \times \mathbb{P})$. \end{lem}
There is a similar and stronger result stated in \cite{lokka2004} (Lemma $6$); however, we are unable to fill a gap in its proof.
The choice for the definitions of the Malliavin derivative operators was made to extend the classical Brownian Malliavin derivative as well as the Poisson random measure Malliavin derivative in a wider sense than Remark~\ref{R:extension}. As mentioned in the introduction, the classical Brownian Malliavin derivative can be defined by chaos expansions and as a weak derivative. In Nualart and Vives \cite{nualartvives1990}, it is proven that for the Poisson process there is an equivalence between the Malliavin derivative defined with chaos decompositions and another one defined by \textit{adding a mass} with a translation operator. This last result was extended by L{\o}kka \cite{lokka2004} to Poisson random measures. But now we will follow an idea of Le{\'o}n et al. \cite{leonetal2002} to prove that our derivative operators are extensions of the classical ones. Their method relies on the commutativity relationships between stochastic derivatives and stochastic integrals and on quadratic covariation for semimartingales; consequently, it is easily adaptable to our more general context. The details are given in Appendix~\ref{A:interpretation}.
\begin{thm}\label{T:extension} On $\mathbb{D}^{(1)}$ the operator $D^{(1)}$ coincides with the Brownian Malliavin derivative and on $\mathbb{D}^{(2)}$ the operator $D^{(2)}$ coincides with the Poisson random measure Malliavin derivative. \end{thm}
Hence, if $F \in \mathbb{D}^{(1)}$, all the results about the classical Brownian Malliavin derivative, such as the chain rule for Lipschitz functions, can be applied to $D^{(1)} F$; see Nualart \cite{nualart1995} for details. But this is also true for the Poisson random measure Malliavin derivative. For example, an important result in L{\o}kka \cite{lokka2004} is that if $F = g(X_{t_1},\dots,X_{t_n}) \in \mathbb{D}^{(2)}$ and $$ (t,z) \mapsto g \left( X_{t_1} + z \mathbb{I}_{[0,t_1]}(t), \dots, X_{t_n} + z \mathbb{I}_{[0,t_n]}(t) \right) - g \left( X_{t_1}, \dots, X_{t_n} \right) $$ belongs to $L^2(\lambda \times \nu \times \mathbb{P})$, then $$ D^{(2)}_{t,z} F = g \left( X_{t_1} + z \mathbb{I}_{[0,t_1]}(t), \dots, X_{t_n} + z \mathbb{I}_{[0,t_n]}(t) \right) - g \left( X_{t_1}, \dots, X_{t_n} \right) . $$ This is the \textit{adding a mass} formula. Consequently, it also applies in the context of a square-integrable Lévy process.
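As a minimal sanity check of the \textit{adding a mass} formula (our example, not taken from \cite{lokka2004}), take $g(x) = x$ and a single time $t_1 = T$, so that $F = X_T$; then $$ D^{(2)}_{t,z} X_T = \left( X_T + z \, \mathbb{I}_{[0,T]}(t) \right) - X_T = z , \qquad t \in [0,T] , $$ which matches the chaos side: the expansion $X_T = \mathbb{E}[X_T] + \int_0^T \sigma \, W(dt) + \int_0^T \int_{\mathbb{R}} z \, \widetilde{N}(dt,dz)$ has $f^{(2)}(t,z) = z$, so Definition~\ref{D:derivees} also gives $D^{(2)}_{t,z} X_T = z$, and similarly $D^{(1)}_t X_T = \sigma$.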
\subsection{A Clark-Ocone formula}
We now state and prove a Clark-Ocone type formula. This formula explicitly identifies the integrands in the martingale representation of Theorem~\ref{T:prp} for a Malliavin-differentiable Lévy functional. It is interesting to note that no particular property of the directional derivatives is needed.
\begin{thm}\label{T:clarkoconeformula} If $F$ belongs to $\mathbb{D}^{1,2}$, then \begin{equation*}\label{E:clarkOconeLevy} F = \mathbb{E}[F] + \int_0^T \mathbb{E} \bigr[ D^{(1)}_t F \mid \mathcal{F}_t \bigr] \, W(dt) + \int_0^T \int_{\reels} \mathbb{E} \bigr[ D^{(2)}_{t,z} F \mid \mathcal{F}_t \bigr] \, \widetilde{N}(dt,dz) . \end{equation*} \end{thm} \begin{proof} Suppose that $F$ has a chaos expansion given by $$ F = \mathbb{E}[F] + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \bigr( f^{(i_1,\dots,i_n)} \bigr) . $$ If for example we consider the derivative operator $D^{(2)}$, then from Remark~\ref{R:crpimpliesmrp} we have to show that \begin{multline}\label{E:toprove} \mathbb{E} \bigr[ D^{(2)}_{t,z} F \mid \mathcal{F}_t \bigr] \\ = f^{(2)}(t,z) + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \bigr( f^{(i_1,\dots,i_n,2)} (\cdot,(t,z)) \mathbb{I}_{\Sigma_n (t)} \bigr) . \end{multline} If $i_k = 2$, then \begin{multline*}
\mathbb{E} \bigr[ J_{(i_1,\dots,\widehat{i}_k,\dots,i_n)} \bigr( f^{(i_1,\dots,i_n)} (\cdot,(t,z),\cdot) \mathbb{I}_{\Sigma^k_{n-1} (t)} \bigr) \bigr| \mathcal{F}_t \bigr] \\ = \begin{cases} 0 & \text{if $k = 1,2,\dots,n-1$;}\\ J_{(i_1,\dots,i_{n-1})} \bigr( f^{(i_1,\dots,i_{n-1},2)} (\cdot,(t,z)) \mathbb{I}_{\Sigma_{n-1} (t)} \bigr) & \text{if $k = n$,} \end{cases} \end{multline*} because when $k = 1,2,\dots,n-1$ the outermost stochastic integral in the iterated integral $J_{(i_1,\dots,\widehat{i}_k,\dots,i_n)}$ starts after time $t$. By the definition of $ D^{(2)}_{t,z} F$, this implies that Equation~\eqref{E:toprove} is satisfied. The same argument works for the derivative operator $D^{(1)}$ and thus the result follows. \end{proof}
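As an elementary sanity check of the formula in the purely Brownian case ($\mu = 0$, $\sigma = 1$, $\nu = 0$), take $F = W_T^2$: then $D^{(1)}_t F = 2 W_T$, $\mathbb{E}[2 W_T \mid \mathcal{F}_t] = 2 W_t$, and the Clark-Ocone formula reduces to the classical identity $W_T^2 = T + \int_0^T 2 W_t \, W(dt)$. The short script below (ours, not part of the paper) verifies this identity pathwise with a left-point Euler discretization of the stochastic integral.

```python
import numpy as np

# Pathwise check of W_T^2 = T + \int_0^T 2 W_t dW(t), the Clark-Ocone
# representation of F = W_T^2 in the purely Brownian case.
rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 20_000, 200
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # increments
W = np.cumsum(dW, axis=1)                                   # Brownian paths
W_prev = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])     # left endpoints

ito_integral = np.sum(2.0 * W_prev * dW, axis=1)  # \int_0^T 2 W_t dW(t)
lhs = W[:, -1] ** 2                               # F = W_T^2
rhs = T + ito_integral                            # E[F] + representation
max_err = np.max(np.abs(lhs - rhs))
print(f"max pathwise discretization error: {max_err:.4f}")
```

The residual on each path equals $\sum_i (\Delta W_i)^2 - T$, which vanishes as the mesh goes to zero.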
\section{Martingale representation of the maximum}
Our main goal was to provide a detailed construction of a chaotic Malliavin derivative and a Clark-Ocone formula. Now, to illustrate the results, we compute the explicit martingale representation of the maximum of the Lévy process $X$.
For $0 \leq s < t \leq T$, define $M_{s,t} = \sup_{s \leq r \leq t} X_r$ and $M_t = M_{0,t}$. If $\mathbb{E} [M_T] < \infty$, then one can show that \begin{equation}\label{E:shiryaev} \mathbb{E} [M_T \mid \mathcal{F}_t] = M_t + \int_{M_t - X_t}^{\infty} \bar{F}_{T-t}(z) \, dz , \end{equation} where $\bar{F}_{s}(z) = \mathbb{P} \{ M_s > z \}$; see Shiryaev and Yor \cite{shiryaevyor2004} and Graversen et al. \cite{graversenetal2001}. We will use this equality to prove the next proposition.
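For a concrete instance of this equality (an added illustration under the assumption that $X$ is a standard Brownian motion, so $\mu = 0$, $\sigma = 1$ and $\nu = 0$): at $t = 0$ it reads $\mathbb{E}[M_T] = \int_0^{\infty} \bar{F}_T(z) \, dz$, and the reflection principle gives $\bar{F}_T(z) = 2 \, \mathbb{P}\{W_T > z\}$, whence $\mathbb{E}[M_T] = \sqrt{2T/\pi}$. A small Monte Carlo experiment (ours, not part of the paper) agrees with this value up to discretization bias.

```python
import numpy as np

# Monte Carlo check that E[max_{0<=s<=T} W_s] = sqrt(2T/pi) for standard
# Brownian motion, i.e. the t = 0 case of the Shiryaev-Yor identity.
rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 2_000, 5_000
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(dW, axis=1)
M_T = np.maximum(paths.max(axis=1), 0.0)  # running maximum (W_0 = 0)

estimate = M_T.mean()
exact = np.sqrt(2.0 * T / np.pi)
print(f"MC estimate: {estimate:.4f}  exact: {exact:.4f}")
```

The grid maximum slightly underestimates the continuous-time maximum (bias of order $\sqrt{dt}$), so the agreement improves as the grid is refined.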
\begin{prop} If $X$ is a square-integrable Lévy process with Lévy-Itô decomposition $$ X_t = \mu t + \sigma W_t + \int_0^t \int_{\reels} z \, \widetilde{N}(ds,dz) , $$ then its running maximum admits the following martingale representation: $$ M_T = \mathbb{E} [M_T] + \int_0^T \phi(t) \, W(dt) + \int_0^T \int_{\mathbb{R}} \psi(t,z) \, \widetilde{N}(dt,dz) $$ with $\phi(t) = \sigma \bar{F}_{T-t}(a)$ and $\psi(t,z) = \mathbb{E} \left[ \left( M_{T-t} + z - a \right)^+ \right] - \int_a^{\infty} \bar{F}_{T-t}(x) \, dx$, where $a = M_t - X_t$. \end{prop} \begin{proof} Since $X$ is a square-integrable martingale with drift, from Doob's maximal inequality we have that $M_T$ is a square-integrable random variable; see Theorem $20$ in Protter \cite{protter2004}. Let $(t_k)_{k \geq 1}$ be a dense subset of $[0,T]$, let $F = M_T$ and, for each $n \geq 1$, define $F_n = \max \{ X_{t_1}, \dots, X_{t_n} \}$. Clearly, $(F_n)_{n \geq 1}$ is an increasing sequence bounded by $F$. Hence, $F_n$ converges to $F$ in the $L^2(\Omega)$-norm when $n$ goes to infinity.
We want to prove that each $F_n$ is Malliavin differentiable, i.e. that each $F_n$ belongs to $\mathbb{D}^{1,2} = \mathbb{D}^{(1)} \cap \mathbb{D}^{(2)}$. This follows from the following two facts. First, since $$ (x_1,\dots,x_n) \mapsto \max \{ x_1, \dots, x_n \} $$ is a Lipschitz function on $\mathbb{R}^n$ and since $D^{(1)}$ behaves like the classical Brownian Malliavin derivative on the Brownian part of $F_n$, we have that $$ 0 \leq D^{(1)}_t F_n = \sum_{k=1}^n \sigma \mathbb{I}_{\{t \leq t_k\}} \mathbb{I}_{A_k} \leq \sum_{k=1}^n \sigma \mathbb{I}_{A_k} = \sigma , $$
where $A_1 = \{F_n = X_{t_1}\}$ and $A_k = \{F_n \neq X_{t_1} , \dots, F_n \neq X_{t_{k-1}}, F_n = X_{t_k}\}$ for $2 \leq k \leq n$. This implies that $\sup_{n \geq 1} \| D^{(1)} F_n \|^2_{L^2([0,T] \times \Omega)} \leq \sigma^2 T$. Secondly, since $D^{(2)}$ behaves like the Poisson random measure Malliavin derivative on the Poisson part of $F_n$, we have that $$
0 \leq \bigr| D^{(2)}_{t,z} F_n \bigr| = \bigr| \max \left\lbrace X_{t_1} + z \mathbb{I}_{\{t < t_1\}}, \dots, X_{t_n} + z \mathbb{I}_{\{t < t_n\}} \right\rbrace - F_n \bigr| \leq |z| , $$ where the equality is justified by the following inequality: \begin{multline*}
\bigr\| \max \left\lbrace X_{t_1} + z \mathbb{I}_{\{t < t_1\}}, \dots, X_{t_n} + z \mathbb{I}_{\{t < t_n\}} \right\rbrace - F_n \bigr\|_{L^2([0,T] \times \mathbb{R} \times \Omega)}^2 \\ \leq T \int_{\mathbb{R}} z^2 \, \nu(dz) . \end{multline*} Indeed, if $z \geq 0$, then $$ 0 \leq \max \left\lbrace X_{t_1} + z \mathbb{I}_{\{t < t_1\}}, \dots, X_{t_n} + z \mathbb{I}_{\{t < t_n\}} \right\rbrace - F_n \leq z , $$ and, if $z < 0$, then \begin{align*} 0 & \leq F_n - \max \left\lbrace X_{t_1} + z \mathbb{I}_{\{t < t_1\}}, \dots, X_{t_n} + z \mathbb{I}_{\{t < t_n\}} \right\rbrace \\
& = F_n + \min \left\lbrace - X_{t_1} + |z| \mathbb{I}_{\{t < t_1\}}, \dots, - X_{t_n} + |z| \mathbb{I}_{\{t < t_n\}} \right\rbrace \\
& = \min \left\lbrace F_n - X_{t_1} + |z| \mathbb{I}_{\{t < t_1\}}, \dots, F_n - X_{t_n} + |z| \mathbb{I}_{\{t < t_n\}} \right\rbrace \\
& \leq |z| . \end{align*}
This implies that $\sup_{n \geq 1} \| D^{(2)} F_n \|^2_{L^2([0,T] \times \mathbb{R} \times \Omega)} \leq T \int_{\mathbb{R}} z^2 \, \nu(dz)$.
Consequently, $\sup_{n \geq 1} \| D F_n \|^2 \leq T (\sigma^2 + \int_{\mathbb{R}} z^2 \, \nu(dz))$ and by Lemma~\ref{L:lemtech2} we have that $F$ is Malliavin differentiable. By the uniqueness of a weak limit, this means that taking the limit of $D^{(1)}_t F_n$ when $n$ goes to infinity yields $$ D^{(1)}_t F = \sigma \mathbb{I}_{[0, \tau]}(t) , $$ where $\tau$ is the first random time when the Lévy process $X$ (not the Brownian motion $W$) reaches its supremum on $[0,T]$, and $$ D^{(2)}_{t,z} F = \sup_{0 \leq s \leq T} \left( X_s + z \mathbb{I}_{\{t < s \}} \right) - M_T . $$ Hence, \begin{align*} \mathbb{E} \left[ D^{(1)}_t F \mid \mathcal{F}_t \right] &= \sigma \mathbb{P} \left\lbrace M_t < M_{t,T} \mid \mathcal{F}_t \right\rbrace\\ &= \sigma \mathbb{P} \left\lbrace M_{T-t} > a \right\rbrace , \end{align*} where $a = M_t - X_t$. Since $M_{t,T} - X_t$ is independent of $\mathcal{F}_t$ and has the same law as $M_{T-t}$, using Equation~\eqref{E:shiryaev} we get that \begin{align*} \mathbb{E} \left[ D^{(2)}_{t,z} F \mid \mathcal{F}_t \right] &= \mathbb{E} \left[ \sup_{0 \leq s \leq T} \left( X_s + z \mathbb{I}_{\{t < s \}} \right) - M_T \mid \mathcal{F}_t \right] \\ &= \mathbb{E} \left[ \max \{ M_t , M_{t,T} + z \} \mid \mathcal{F}_t \right] - \mathbb{E} \left[ M_T \mid \mathcal{F}_t \right] \\ &= M_t + \mathbb{E} \left[ \left( M_{t,T} + z - M_t \right)^+ \mid \mathcal{F}_t \right] - \mathbb{E} \left[ M_T \mid \mathcal{F}_t \right] \\ &= \mathbb{E} \left[ \left( M_{T-t} + z - a \right)^+ \right] - \int_a^{\infty} \bar{F}_{T-t}(x) \, dx , \end{align*} where $a = M_t - X_t$. The martingale representation follows from the Clark-Ocone formula of Theorem~\ref{T:clarkoconeformula}. \end{proof}
This result extends the martingale representation of the running maximum of Brownian motion.
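Concretely, in the driftless Brownian case ($\mu = 0$, $\nu = 0$) the reflection principle gives $\bar{F}_s(z) = 2 \left( 1 - \Phi \left( z / (\sigma \sqrt{s}) \right) \right)$ for $z \geq 0$, where $\Phi$ is the standard normal distribution function, and the proposition then specializes (our computation) to the classical representation $$ M_T = \mathbb{E}[M_T] + \int_0^T 2 \sigma \left( 1 - \Phi \left( \frac{M_t - X_t}{\sigma \sqrt{T-t}} \right) \right) W(dt) , $$ the jump integrand $\psi$ being absent since $\nu = 0$.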
\section{Proof of Lemma~\ref{L:lemtech2}}\label{A:lemtech2}
We have that $$
\sup_{k \geq 1} \| D^{(1)} F_k \|_{L^2([0,T] \times \Omega)} < \infty $$ and $$
\sup_{k \geq 1} \| D^{(2)} F_k \|_{L^2([0,T] \times \mathbb{R} \times \Omega)} < \infty . $$ Since $L^2([0,T] \times \Omega)$ and $L^2([0,T] \times \mathbb{R} \times \Omega)$ are reflexive Hilbert spaces, there exist a subsequence $(k_j)_{j \geq 1}$, an element $\alpha$ in $L^2([0,T] \times \Omega)$ and an element $\beta$ in $L^2([0,T] \times \mathbb{R} \times \Omega)$ such that $D^{(1)} F_{k_j}$ converges to $\alpha$ in the weak topology of $L^2([0,T] \times \Omega)$ and $D^{(2)} F_{k_j}$ converges to $\beta$ in the weak topology of $L^2([0,T] \times \mathbb{R} \times \Omega)$. Consequently, for any $h \in L^2([0,T])$, $g \in L^2([0,T] \times \mathbb{R})$ and $f \in L^2(\Sigma_{(i_1,\dots,i_n)} (\x))$, we have that $$ \left\langle D^{(1)} F_{k_j}, h \otimes J_{(i_1,\dots,i_n)}(f) \right\rangle_{L^2([0,T] \times \Omega)} \longrightarrow \left\langle \alpha, h \otimes J_{(i_1,\dots,i_n)}(f) \right\rangle_{L^2([0,T] \times \Omega)} $$ and $$ \left\langle D^{(2)} F_{k_j}, g \otimes J_{(i_1,\dots,i_n)}(f) \right\rangle_{L^2([0,T] \times \mathbb{R} \times \Omega)} \longrightarrow \left\langle \beta, g \otimes J_{(i_1,\dots,i_n)}(f) \right\rangle_{L^2([0,T] \times \mathbb{R} \times \Omega)} $$ when $j$ goes to infinity.
Let $F = \mathbb{E}[F] + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} ( f^{(i_1,\dots,i_n)} )$ and $F_{k_j} = \mathbb{E}[F_{k_j}] + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} ( f_{k_j}^{(i_1,\dots,i_n)} )$ be the chaos representations of $F$ and $F_{k_j}$. By definition, we have that \begin{multline}\label{E:deriveeun} D^{(1)}_t F_{k_j} = f_{k_j}^{(1)}(t) \\ + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} \sum_{k=1}^n \mathbb{I}_{\{i_k = 1\}} J_{(i_1,\dots,\widehat{i}_k,\dots,i_n)} \bigr( f_{k_j}^{(i_1,\dots,i_n)} (\cdot,t,\cdot) \mathbb{I}_{\Sigma^k_{n-1} (t)} \bigr) . \end{multline}
By the linearity of the iterated integrals, the convergence of $F_{k_j}$ toward $F$ implies that \begin{multline*}
\left\| \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} J_{(i_1,\dots,i_n)} \left( f^{(i_1,\dots,i_n)} - f_{k_j}^{(i_1,\dots,i_n)} \right) \right\|^2_{L^2(\Omega)} \\
= \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} \left\| f^{(i_1,\dots,i_n)} - f_{k_j}^{(i_1,\dots,i_n)} \right\|^2_{L^2(\Sigma_{(i_1,\dots,i_n)} (\x))} \end{multline*} goes to $0$ when $j$ tends to infinity. Consequently, each $f_{k_j}^{(i_1,\dots,i_n)}$ converges to $f^{(i_1,\dots,i_n)}$ when $j$ goes to infinity. So, using Proposition~\ref{P:orthogonality} and the expression of the derivative in Equation~\eqref{E:deriveeun}, we get that \begin{align*} & \bigr\langle D^{(1)} F_{k_j}, h \otimes J_{(i_1,\dots,i_n)}(f) \bigr\rangle_{L^2([0,T] \times \Omega)} \\ &= \sum_{k=1}^{n+1} \int_0^T \mathbb{E} \bigr[ J_{(i_1,\dots,i_n)} \bigr( f_{k_j}^{(i_1,\dots,i_{k-1},1,i_k,\dots,i_n)} (\cdot,t,\cdot) \mathbb{I}_{\Sigma^k_n (t)} \bigr) J_{(i_1,\dots,i_n)}(f) \bigr] h(t) \, dt \\ &= \sum_{k=1}^{n+1} \int_0^T \bigr\langle f_{k_j}^{(i_1,\dots,i_{k-1},1,i_k,\dots,i_n)} (\cdot,t,\cdot) \mathbb{I}_{\Sigma^k_n (t)}, f \bigr\rangle_{L^2(\Sigma_{(i_1,\dots,i_n)} (\x))} h(t) \, dt \end{align*} and, as $j$ goes to infinity, this quantity tends to $$ \sum_{k=1}^{n+1} \int_0^T \bigr\langle f^{(i_1,\dots,i_{k-1},1,i_k,\dots,i_n)} (\cdot,t,\cdot) \mathbb{I}_{\Sigma^k_n (t)}, f \bigr\rangle_{L^2(\Sigma_{(i_1,\dots,i_n)} (\x))} h(t) \, dt . $$ This holds for any multi-index $(i_1,\dots,i_n)$ and functions $h$ and $f$. Consequently, \begin{multline*} \alpha(t) = f^{(1)}(t) \\ + \sum_{n = 1}^{\infty} \sum_{(i_1,\dots,i_n)} \sum_{k=1}^n \mathbb{I}_{\{i_k = 1\}} J_{(i_1,\dots,\widehat{i}_k,\dots,i_n)} \bigr( f^{(i_1,\dots,i_n)} (\cdot,t,\cdot) \mathbb{I}_{\Sigma^k_{n-1} (t)} \bigr) \end{multline*} and $F$ belongs to $\mathbb{D}^{(1)}$ with $D^{(1)} F = \alpha$ by the uniqueness of the weak limit. Moreover, for any weakly convergent subsequence the limit must be equal to $D^{(1)} F$ and this implies the weak convergence of the whole sequence.
The same argument works to prove that $F$ belongs to $\mathbb{D}^{(2)}$ and that $(D^{(2)} F_k)_{k \geq 1}$ converges weakly to $D^{(2)} F$ in $L^2(\lambda \times \nu \times \mathbb{P})$.
\section{Interpretation of the directional derivatives}\label{A:interpretation}
We consider the product probability space $$ \left( \Omega_W \times \Omega_N, \mathcal{F}_W \times \mathcal{F}_N, \mathbb{P}_W \times \mathbb{P}_N \right) $$ which is the product of the canonical space of the Brownian motion $W$ and the canonical space of the pure-jump Lévy process $$ N_t = \int_0^t \int_{\mathbb{R}} z \, \widetilde{N}(ds,dz) $$ previously defined in Equation~\eqref{E:defN}; see Solé et al. \cite{soleetal2007} for more details on this last canonical space. Since $L^2(\Omega_W \times \Omega_N)$ is isometric to $L^2(\Omega_W ; L^2(\Omega_N))$ and to $L^2(\Omega_N ; L^2(\Omega_W))$ as Hilbert spaces, we will use the theory of the Brownian Malliavin derivative and the Poisson random measure Malliavin derivative for Hilbert-valued random variables (see \cite{nualart1995} and \cite{nualartvives1990}). This is possible because both operators are closable.
The Brownian Malliavin derivative for Hilbert-valued random variables will be denoted by $D^W$ and the Poisson random measure Malliavin derivative for Hilbert-valued random variables by $D^N$. If we define $\widetilde{W} = (\widetilde{W}_t)_{t \in [0,T]}$ on $\Omega_W \times \Omega_N$ by $$ \widetilde{W}_t (\omega,\omega^{\prime}) = \omega(t) $$ and $\widetilde{N} = (\widetilde{N}_t)_{t \in [0,T]}$ by $$ \widetilde{N}_t (\omega,\omega^{\prime}) = \omega^{\prime}(t) , $$ then the process $\widetilde{X}_t = \mu t + \sigma \widetilde{W}_t + \widetilde{N}_t$ has the same distribution as our initial Lévy process $X_t = \mu t + \sigma W_t + N_t$. For notational simplicity, in what follows we will write $W_t(\omega)$ and $N_t(\omega^{\prime})$ instead of $\widetilde{W}_t (\omega,\omega^{\prime})$ and $\widetilde{N}_t (\omega,\omega^{\prime})$ respectively.
We will proceed by induction. If $F = \int_0^T f(t) \, W(dt)$, then clearly \begin{equation*} D^{(1)}_t F = D^W_t F = f(t) \quad \text{and} \quad D^{(2)}_{t,z} F = D^N_{t,z} F = 0 , \end{equation*} while if $G = \int_0^T \int_{\mathbb{R}} g(t,z) \, \widetilde{N}(dt,dz)$, then \begin{equation*} D^{(1)}_t G = D^W_t G = 0 \quad \text{and} \quad D^{(2)}_{t,z} G = D^N_{t,z} G = g(t,z) . \end{equation*}
Thus, for a fixed $n \geq 1$, we assume that $D^{(1)}$ and $D^W$ coincide for any random variable with chaos expansion of order $n$. First, let $F$ be of the form $$ F = J_{(i_1,\dots,i_n,1)} (f_1 \otimes \dots \otimes f_n \otimes f_{n+1}) = \int_0^T g(s) f_{n+1}(s) \, W(ds) , $$ where \begin{align}\label{E:g} g(s) = J_{(i_1,\dots,i_n)} \bigr( f_1 \otimes \dots \otimes f_n \, \mathbb{I}_{\Sigma_{n} (s)} \bigr) . \end{align} To ease the notation, $J_{(i_1,\dots,i_n)} ( f_1 \dots f_n )$ will mean $J_{(i_1,\dots,i_n)} ( f_1 \otimes \dots \otimes f_n )$. Using the commutativity relationship of Remark~\ref{R:commutativity} and the hypothesis of induction, we have that \begin{align*} D^{(1)}_t F &= f_{n+1}(t) g(t) + \int_t^T f_{n+1}(s) D^{(1)}_t g(s) \, W(ds) \\ &= f_{n+1}(t) g(t) + \int_t^T f_{n+1}(s) D^W_t g(s) \, W(ds) , \end{align*} which is exactly $D^W_t F$, by the classical commutativity relationship of Equation~\eqref{E:commute}.
Second, let $F$ be of the form $$ F = J_{(i_1,\dots,i_n,2)} (f_1 \otimes \dots \otimes f_n \otimes f_{n+1}) = \int_0^T \int_{\mathbb{R}} g(s-) f_{n+1}(s,z) \, \widetilde{N}(ds,dz) . $$ We will make use of the \textit{integration by parts formula} for semimartingales, that is $$ \bigl[ Y^{(1)},Y^{(2)} \bigr]_t = Y^{(1)}_t Y^{(2)}_t - \int_0^t Y^{(1)}_{s-} \, dY^{(2)}_s - \int_0^t Y^{(2)}_{s-} \, dY^{(1)}_s $$ if $Y^{(1)}$ and $Y^{(2)}$ are semimartingales; see Protter \cite{protter2004} for details. If $Y^{(1)}_t = g(t)$ and $Y^{(2)}_t = \int_0^t \int_{\mathbb{R}} f_{n+1}(s,z) \, \widetilde{N}(ds,dz)$, we get that \begin{multline*} F = g(T) \int_0^T \int_{\mathbb{R}} f_{n+1}(s,z) \, \widetilde{N}(ds,dz) \\ - \int_0^T \int_0^{t-} \int_{\mathbb{R}} f_{n+1}(s,z) \, \widetilde{N}(ds,dz) dg(t) \\ - \left[ g(\cdot) , \int_0^{\cdot} \int_{\mathbb{R}} f_{n+1}(s,z) \, \widetilde{N}(ds,dz) \right]_T . \end{multline*}
We now consider the two cases where $i_n = 1$ and $i_n = 2$ separately. We have that \begin{equation*} g(t) = \begin{cases} \int_0^t h(s) f_n(s) \, W(ds) & \text{if $i_n = 1$;} \\ \int_0^t \int_{\mathbb{R}} h(s-) f_n(s,z) \, \widetilde{N}(ds,dz) & \text{if $i_n = 2$,} \end{cases} \end{equation*} where $h(s) = J_{(i_1,\dots,i_n)} \bigr( f_1 \otimes \dots \otimes f_{n-1} \, \mathbb{I}_{\Sigma_{n-1} (s)} \bigr)$. If $i_n = 1$, then \begin{multline*} F = g(T) \int_0^T \int_{\mathbb{R}} f_{n+1}(t,z) \, \widetilde{N}(dt,dz) \\ - \int_0^T \left[ \int_0^t \int_{\mathbb{R}} f_{n+1}(s,y) \, \widetilde{N}(ds,dy) \right] h(t) f_n(t) \, W(dt) . \end{multline*} If $i_n = 2$, then \begin{multline*} F = g(T) \int_0^T \int_{\mathbb{R}} f_{n+1}(t,z) \, \widetilde{N}(dt,dz) \\ - \int_0^T \int_{\mathbb{R}} \left[ \int_0^{t-} \int_{\mathbb{R}} f_{n+1}(s,y) \, \widetilde{N}(ds,dy) \right] h(t-) f_n(t,z) \, \widetilde{N}(dt,dz) \\ - \int_0^T \int_{\mathbb{R}} h(t-) f_n(t,z) f_{n+1}(t,z) \, N(dt,dz) . \end{multline*} Note that the last term is an iterated integral of order $n$ (with respect to $N(dt,dz)$ for the outermost integral, not $\widetilde{N}(dt,dz)$) since $h$ is an iterated integral of order $n-1$. So, by the hypothesis of induction, $D^{(1)}$ and $D^W$ agree for this functional. This is also true for $g(T)$.
Consequently, we repeat the previous steps backward with $D^{(1)}$. If $i_n = 1$, then \begin{align*} D^W_t F &= \left( D^W_t g(T) \right) \int_0^T \int_{\mathbb{R}} f_{n+1}(s,y) \, \widetilde{N}(ds,dy) \\ & \qquad - h(t) f_n(t) \int_0^t \int_{\mathbb{R}} f_{n+1}(s,y) \, \widetilde{N}(ds,dy) \\ & \qquad \qquad - \int_t^T \left[ \int_0^s \int_{\mathbb{R}} f_{n+1}(r,y) \, \widetilde{N}(dr,dy) \right] \left( D^W_t h(s) \right) f_n(s) \, W(ds) \\ &= \left( D^{(1)}_t g(T) \right) \int_0^T \int_{\mathbb{R}} f_{n+1}(s,y) \, \widetilde{N}(ds,dy) \\ & \qquad - h(t) f_n(t) \int_0^t \int_{\mathbb{R}} f_{n+1}(s,y) \, \widetilde{N}(ds,dy) \\ & \qquad \qquad - \int_t^T \left[ \int_0^s \int_{\mathbb{R}} f_{n+1}(r,y) \, \widetilde{N}(dr,dy) \right] \left( D^{(1)}_t h(s) \right) f_n(s) \, W(ds) \\ &= D^{(1)}_t \left( g(T) \int_0^T \int_{\mathbb{R}} f_{n+1}(s,y) \, \widetilde{N}(ds,dy) \right) \\ & \qquad - D^{(1)}_t \int_0^T \left[ \int_0^s \int_{\mathbb{R}} f_{n+1}(r,y) \, \widetilde{N}(dr,dy) \right] h(s) f_n(s) \, W(ds) \\ &= D^{(1)}_t \biggr( g(T) \int_0^T \int_{\mathbb{R}} f_{n+1}(s,y) \, \widetilde{N}(ds,dy) \\ & \qquad \qquad \qquad - \int_0^T \left[ \int_0^s \int_{\mathbb{R}} f_{n+1}(r,y) \, \widetilde{N}(dr,dy) \right] \, dg(s) \biggr) \\ &= D^{(1)}_t F , \end{align*} and if $i_n = 2$, then the same steps are valid since $D^W$ and $D^{(1)}$ coincide on the extra term.
The equivalence between $D^{(1)}$ and $D^W$ follows from the following fact: for a fixed $n \geq 1$ and a fixed multi-index $(i_1,\dots,i_n)$, the linear subspace of $L^2(\Sigma_{(i_1,\dots,i_n)} (\x))$ generated by functions of the form \begin{equation}\label{E:tensor} f_1 \otimes \dots \otimes f_n \end{equation} is dense. Indeed, for $f \in L^2(\Sigma_{(i_1,\dots,i_n)} (\x))$, there exists a sequence $(g_k)_{k \geq 1}$, whose elements are finite sums of functions as in Equation~\eqref{E:tensor}, that converges to $f$. We know that $D^{(1)}$ and $D^W$ are equal for each $g_k$. Since $D^{(1)}$ and $D^W$ are continuous (see Lemma~\ref{L:lemtech2}), they also coincide for $f$.
We can apply the same machinery to show that $D^N$ and $D^{(2)}$ are the same.
\end{document}
\begin{document}
\title{Trading off Worst and Expected Cost in Decision Tree Problems and a Value Dependent Model} \author{Aline Saettler\inst{1} \and Eduardo Laber\inst{1} \and Ferdinando Cicalese\inst{2}} \institute{PUC-Rio, Brazil\\\email{\{asaettler,laber\}@inf.puc-rio.br} \and University of Salerno, Italy\\\email{cicalese@dia.unisa.it}} \maketitle
\begin{abstract} We study the problem of evaluating a discrete function by adaptively querying the values of its variables until the values read uniquely determine the value of the function. Reading the value of a variable incurs some cost, and the goal is to design a strategy (decision tree) for evaluating the function that incurs as little cost as possible in the worst case or in expectation (according to a prior distribution on the possible variable assignments). Apart from particular cases of the problem, the literature generally addresses the minimization of only one of these two measures. However, there are instances of the problem for which minimizing one measure leads to a strategy with a high cost with respect to the other measure (even exponentially bigger than the optimal). We provide a new construction which can guarantee a trade-off between the two criteria. More precisely, given a decision tree guaranteeing expected cost $E$ and a decision tree guaranteeing worst cost $W$, for any chosen trade-off value $\rho$ our method produces a decision tree whose worst cost is at most $(1 + \rho)W$ and whose expected cost is at most $(1 + \frac{1}{\rho})E.$ These bounds are improved for the relevant case of uniform testing costs.
Motivated by applications, we also study a variant of the problem where the cost of reading a variable depends on the variable’s value. We provide an $O(\log n)$ approximation algorithm for the minimization of the worst cost measure, which is best possible under the assumption $P \neq NP$. \end{abstract}
\newcommand{\remove}[1]{}
\section{Introduction}
Decision tree construction is a central problem in several areas of computer science, e.g., in database theory, in computational learning and in artificial intelligence in general. In a typical scenario there are several possible hypotheses which can explain some unknown phenomenon, and we want to decide which hypothesis provides the correct explanation. We have a prior distribution on the hypotheses and we can use tests to discriminate among them. Each test's outcome eliminates some of the hypotheses, and the set of tests is complete, in the sense that by using all the tests we can definitely find the correct hypothesis. Moreover, different tests may have different associated costs. The aim is to define the best testing strategy that allows one to reach the correct decision spending as little as possible. If the testing is adaptive, a strategy is representable by a tree (called a decision tree) with each internal node being a test and each leaf being a hypothesis. In a generalization of this scenario, one is only interested in identifying a class of possible hypotheses explaining the situation.
In an example of automatic diagnosis, the hypotheses are possible diseases and we look for the testing strategy (decision tree) which can always identify the disease by using a cheap sequence of tests. If we are interested in deciding which drug to administer to the patient rather than exactly identifying the disease, we have an instance of the more general variant of decision tree construction where we look for the class of hypotheses containing the correct explanation.
What is the right measure to optimize when constructing the decision tree? Usually, either the expected cost of the tests needed to reach the correct decision or the maximum total cost needed to reach the correct decision is used. However, these measures can lead to very different trees, and in particular it is possible that the decision tree minimizing one measure is very inefficient with respect to the other. A very skewed distribution can induce a tree, optimizing the expected cost, with a very skewed shape. As a consequence, in such a tree some decision might incur a very high cost, even exponentially bigger than the worst cost spent by a strategy that optimizes with respect to the worst case. Conversely, optimizing with respect to the worst case can lead to a very bad expected cost. The choice of measure is crucial, especially since in practical applications the real distribution might not be known but only estimated, and possibly be wrong. Therefore, it might be preferable to have decision trees which, while optimizing one criterion, are guaranteed to be efficient with respect to the other.
In this paper, we address the issue regarding the existence of a trade-off between the minimization of the worst testing cost and the expected testing cost of decision trees. Is it possible to construct decision trees that are efficient with respect to both measures? As mentioned before, these two goals can be incompatible.
The second issue on which we focus in this paper is the way the cost of the tests is defined. We refer the interested reader to \cite{Turney} and references quoted therein for a remarkable account of several types of costs to be taken into account in inference procedures. In most decision tree problems, the assumption is that the cost of the tests is fixed in advance and known to the algorithm. In particular, the cost is independent of the outcome of the test. However, there are also several scenarios in medical applications---one of the main fields motivating automatic diagnosis---where the assumption that a test has a fixed cost independent of the outcome does not apply. Many diagnostic tests actually consist of a multi-stage procedure, e.g., in a first stage the sample is tested against some reagent to check for the presence or absence of an antigen. If this appears to be present below a certain level, the test is considered to be negative and no further analysis is performed. Otherwise, the test is {\em necessarily} followed by a second stage where several new reagents are used, with significantly higher final costs. Notice that in such a situation there is no real decision left to the strategy between the first and the second stage, so it is reasonable to consider such a two-stage procedure as a single test whose cost depends on the outcome.
Value dependent test costs are also useful in applications where disruptive tests are used.
Consider the use of bacterial colonies or {\em caviae} to test for toxicity of a sample. If no toxicity is found, the testing colony can be reused, as opposed to the case where toxicity is verified, leading to the disruption of the colony or the death of the {\em cavia} (a similar model has been studied in \cite{Elser-Kleber}). Analogously, a chemical reagent might be used for performing a test, and the outcome of the test is either some chemical reaction changing the nature of the reagent and making it unusable, or the absence of the reaction, in which case the reagent can be (partially) reused. Again we have a test that, when positive, has a higher cost---the necessity of buying new reagents---than in the case of a negative outcome.
For this extended version, where the cost of a test may depend on its outcome, we present an algorithm for building a decision tree that aims to minimize the worst testing cost for identifying the class of the correct hypothesis.
\subsection{Problem Formalization} \noindent {\bf The Discrete Function Evaluation Problem} (DFEP). Our results are presented in terms of the problem of evaluating a discrete function. This problem generalizes most decision tree construction problems studied in the literature.
An instance of the problem is defined by a quintuple $(S, C, T, {\bf p}, {\bf c}),$ where $S = \{s_1, \dots, s_n\}$ is a set of objects, $C = \{C_1, \dots, C_m\}$ is a partition of $S$ into $m$ classes, $T$ is a set of tests, ${\bf p}$ is a probability distribution on $S,$ and ${\bf c}$ is a cost function assigning to each test $t$ a cost $c(t) \in \mathbb{Q^+}$.
A test $t \in T$, when applied to an object $s \in S$, outputs a number $t(s)$ in the set $\{1,\ldots,\ell\}$ and incurs a cost $c(t)$. It is assumed that the set of tests is complete, in the sense that for any distinct $s_1, s_2 \in S$ there exists a test $t$ such that $t(s_1) \neq t(s_2).$ The goal is to define a testing procedure which uses tests from $T$ and minimizes the testing cost (in expectation and/or in the worst case) for identifying the class of an unknown object $s^*$ chosen according to the distribution ${\bf p}.$ We also work with the extended version of the DFEP where the cost of a test is a function that assigns to each pair (test $t$, object $s$) a value $c^{t(s)}(t) \in \mathbb{Q^+}$.
The DFEP can be rephrased in terms of minimizing the cost of evaluating a discrete function that
maps points (corresponding to objects) from some finite subset of $\{1,\ldots,\ell\}^{|T|}$ into values from $\{1, \dots, m\}$ (corresponding to classes), where each object $s\in S$ corresponds to the point $(t_1(s),\ldots,t_{|T|}(s))$
which is obtained by applying each test of $T$ to $s$. This perspective motivates the name we chose for the problem. However, for the sake of uniformity with more recent work \cite{golovin,bellala} we employ the definition of the problem in terms of objects/tests/classes.
\noindent {\bf Decision Tree Optimization.} Any testing procedure can be represented by a \emph{decision tree}, which is a tree where every internal node is associated with a test and every leaf is associated with a set of objects that belong to the same class. More formally, a decision tree $D$ for $(S,C,T,\mathbf{p}, {\bf c})$ is a single leaf associated with class $i$ if every object of $S$ belongs to class $i$. Otherwise, the root $r$ of $D$ is associated with some test $t \in T$ and the children of $r$ are decision trees for the non-empty sets in $\{S_{t}^1,\ldots,S_{t}^\ell\}$, where $S_{t}^i$ is the subset of $S$ that outputs $i$ for test $t$.
Given a decision tree $D$, rooted at $r$, we can identify the class of an unknown object $s^*$ by following a path from $r$ to a leaf as follows: first, we ask for the result of the test associated with $r$ when performed on $s^*$; then, we follow the branch of $r$ associated with the result of the test to reach a child $r_i$ of $r$; next, we apply the same steps recursively for the decision tree rooted at $r_i$. The procedure ends when a leaf is reached, which determines the class of $s^*$.
We define $cost(D,s)$ as the sum of the tests' cost on the root-to-leaf path from the root of $D$ to the leaf associated with object $s$. Then, the \emph{worst testing cost} and the \emph{expected testing cost} of $D$ are, respectively, defined as
\begin{equation} cost_W(D) = \max_{s \in S}\{cost(D,s)\} \,\,\, \mbox{ and} \,\,\, cost_E(D) = \sum_{s \in S} cost(D,s) p(s) \end{equation}
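For concreteness, both measures can be computed directly from an explicit tree; a minimal Python sketch (the tuple-based tree encoding and all names are our own illustration, not notation from the paper):

```python
# A decision tree is ("leaf", class_label) or ("node", test, {outcome: subtree});
# an object is a dict mapping each test to its outcome on that object.

def cost_of_object(tree, obj, costs):
    """Sum of the tests' costs on the root-to-leaf path followed by `obj`."""
    if tree[0] == "leaf":
        return 0
    _, test, children = tree
    return costs[test] + cost_of_object(children[obj[test]], obj, costs)

def cost_worst(tree, objects, costs):
    # cost_W(D) = max over objects of the path cost
    return max(cost_of_object(tree, s, costs) for s in objects)

def cost_expected(tree, objects, probs, costs):
    # cost_E(D) = sum over objects of path cost times probability
    return sum(p * cost_of_object(tree, s, costs)
               for s, p in zip(objects, probs))
```

On a chain-shaped tree the two measures can differ sharply, which is exactly the tension the bicriteria construction addresses.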
\begin{comment}
An important aspect of this formulation is that here the costs are associated with both the tests and its outcomes. Previous works consider a particular case of this problem in which $c^1(t) = c^2(t) = \ldots = c^{\ell}(t)$ (i. e., the cost depends only on the test).
\end{comment}
\subsection{Our Results}
We present a polynomial time procedure
that given a parameter $\rho >0$ and two decision trees $D_W$ and $D_E$, the former with worst testing cost $W$ and the latter with expected testing cost $E$, produces a decision tree $D$ with worst testing cost at most $(1+\rho)W$ and expected testing cost at most $(1+1/\rho)E$. For the relevant case of uniform costs, the bound can be improved to $(1+\rho)W$ and $(1+2/(\rho^2 +2 \rho))E$ through a more involved analysis.
In addition, we present an algorithm for the minimization of the worst testing cost for the extended version of the DFEP where the cost of a test depends on its outcome. We prove that our algorithm is an $O(\ln n)$ approximation for the case of binary tests. This bound
is the best possible under the assumption that $\mathcal{P} \neq \mathcal{NP}$.
\subsection{Related work}
In a recent paper \cite{labericml}, the authors show that for any instance $I$ of the DFEP, with $n$ objects, it is possible to construct in polynomial time a decision tree $D$ such that $cost_E(D) $ is $ O(\log n \cdot OPT_E(I))$ and $cost_W(D)$ is $ O(\log n \cdot OPT_W(I))$, where $OPT_E(I)$ and $OPT_W(I)$ are, respectively, the minimum expected testing cost and the minimum worst testing cost for instance $I$.
Note that the questions we are studying here are different and possibly more fundamental than those studied in \cite{labericml}: is it possible, even allowing exponential construction time, to build a decision tree whose expected cost is very close to the best possible expected cost achievable and whose worst testing cost is very close to the best possible worst cost achievable? How close can we get, or better, what is the best trade-off we can simultaneously guarantee?
For the prefix code problem there are some studies related to the simultaneous minimization of the expected testing cost and the worst case testing cost \cite{garey,larmore,fastalg,laber}. The problem of constructing a prefix code is a particular case of the DFEP in which each object belongs to a distinct class, the testing costs are uniform and the set of tests is in one to one correspondence with the set of all binary strings of length $n$ so that the test corresponding to a binary string $b$ outputs 0 (1) for object $s_i$ if and only if the $i^{th}$ bit of $b$ is 0 (1).
A number of algorithms with different time complexities were proposed to construct decision trees with minimum expected path length (expected testing cost in DFEP terminology) among the decision trees with depth (worst testing cost) at most $L$, where $L$ is a given integer \cite{garey,larmore,fastalg}.
The results of Milidiu and Laber \cite{laber} imply that for any instance $I$ of the prefix code problem,
there is a decision tree $D$ such that for any integer $c$, with $ 0 < c \leq (n-1)- \lceil \log n \rceil $, $Cost_W(D) - OPT_W(I) = c$ and $Cost_E(D) - OPT_E(I) \leq 1/ \psi^{c-1}$, where $\psi$ is the golden ratio $(1+\sqrt{5})/2$.
When the goal is to minimize only one measure (worst or expected testing cost), there are several algorithms in the literature that solve the particular version of the DFEP in which each object belongs to a distinct class \cite{garey_id,Kos_id,pandit_id,adler_id,guillory_id,laber_id,guillory2,gupta}.
Approximation algorithms for the general version of the problem, where the number of classes can be smaller than the number of objects, were presented by
\cite{bellala}, \cite{golovin} and \cite{labericml}. For the minimization of the worst testing cost of the DFEP, Moshkov studied the problem in the general case of multiway tests and non-uniform costs and provided an $O(\log n)$-approximation in \cite{Moshkov2}. Our algorithm in Section 3 generalizes Moshkov's algorithm to the value-dependent-test-cost variant of the DFEP. Moshkov \cite{Moshkov2} also proved that no $o(\log n)$-approximation algorithm is possible under the standard complexity assumption $NP \not \subseteq DTIME(n^{O(\log \log n)}).$ The minimization of the worst testing cost is also investigated in \cite{conf/icml/GuilloryB11} under the framework of covering and learning.
Both \cite{bellala} and \cite{golovin} show
$O(\log (1/p_{min}))$ approximations for the expected testing cost (where $p_{min}$ is the minimum probability among the
objects in $S$)---the former for binary tests, and the latter for multiway
tests.
\section{Preliminaries and notation} \label{sec:prelim}
To state our results, we use $OPT_W(S,C,T,\mathbf{p},\mathbf{c})$ and $OPT_E(S,C,T,\mathbf{p},\mathbf{c})$, respectively, to denote the cost of the decision tree with minimum worst testing cost and minimum expected testing cost for the input $(S,C,T,\mathbf{p},\mathbf{c})$. Whenever the context permits (and it always will), we use the simpler notations $OPT_W(S)$ and $OPT_E(S)$.
Let $(S,C,T,\mathbf{p},\mathbf{c})$ be an instance of DFEP and let $S'$ be a subset of $S$. In addition, let $C'$ and $\mathbf{p}'$ be, respectively, the restrictions of $C$ and $\mathbf{p}$ to the set $S'$. Our first observation is that every decision tree $D$ for $(S,C,T,\mathbf{p},\mathbf{c})$ is also a decision tree for $(S',C',T,\mathbf{p}',\mathbf{c})$. The following proposition is a direct consequence of this observation.
\begin{proposition} \label{prop:Subadditivity} Let $(S,C,T,\mathbf{p},\mathbf{c})$ be an instance of the DFEP and let $S'$ be a subset of $S$. Then, $OPT_E(S') \leq OPT_E(S)$ and $OPT_W(S') \leq OPT_W(S)$. \end{proposition}
We say that a pair of objects $(s_i,s_j)$ from a set $S$ is {\em separable} if $s_i$ and $s_j$ belong to different classes. For a set of objects $G$ we use $P(G)$ to denote the number of separable pairs in $G$. In formulae, \begin{equation} P(G) = \sum_{i=1}^{k-1}\sum_{j=i+1}^k n_i n_j, \end{equation} where $k$ is the number of classes and $n_i$ is the number of objects in $G$ that belong to class $i$. We say that a test $t$ {\em separates} a pair of separable objects $(s,s')$ if $t(s) \neq t(s')$.
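Since $P(G)$ depends only on the class sizes, it can be computed without enumerating pairs; a small Python helper (the function name is our own):

```python
from collections import Counter

def separable_pairs(classes):
    """Number of separable pairs sum_{i<j} n_i n_j, given the class label
    of each object in G; it equals (n^2 - sum_i n_i^2) / 2."""
    counts = Counter(classes)
    n = sum(counts.values())
    return (n * n - sum(v * v for v in counts.values())) // 2
```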
\section{A logarithmic approximation for value dependent testing costs}\label{sec:binary}
We first consider the goal of approximating optimal decision trees with respect to the worst testing cost. Recall that if we apply a test $t$ on an object $s \in S$, getting an answer $t(s)$, we pay a cost $c^{t(s)}(t)$. Thus, each test can be associated with $\ell$ different costs since $t(s) \in \{1,\ldots,\ell\}$. Note that now each branch of a decision tree is associated with a cost, while in the classical version of the problem each internal node is associated with a cost.
Our algorithm, called \textsc{DividePairs}, chooses the test ${t}$ that minimizes:
\begin{equation}\label{criterio} \max_{1 \leq i \leq \ell} \left\{\frac{c^i(t)}{P(S) - P(S^i_t)}\right\} \end{equation} over all available tests for the root of the tree. Then the objects in $S$ are split according to the values of $t$ for each object, and \textsc{DividePairs} is recursively called for each (non-empty) new group of objects. When all objects in a group are from the same class, a leaf is created. We analyze the approximation ratio of the algorithm when $\ell = 2$. Recall that we use $S^i_{t}$ to denote
the subset of objects of $S$ for which test $t \in T$ outputs $i$.
In this case, each test $t \in T$ splits $S$ in two subsets: $S^1_t$ and $S^2_t$.
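A compact sketch of \textsc{DividePairs} for the binary case $\ell = 2$, implementing the greedy criterion above; the data layout and helper names are our own choices, not the paper's:

```python
from collections import Counter

def _pairs(classes):
    # separable pairs from class sizes: sum_{i<j} n_i n_j = (n^2 - sum n_i^2) / 2
    c = Counter(classes)
    n = sum(c.values())
    return (n * n - sum(v * v for v in c.values())) // 2

def divide_pairs(objects, tests, outcomes, costs):
    """Greedy DividePairs for binary tests (a sketch).

    objects  -- list of (object, class) pairs
    outcomes -- outcomes[t][object] in {1, 2}
    costs    -- costs[t] = (cost of outcome 1, cost of outcome 2)
    """
    if len({cls for _, cls in objects}) <= 1:
        return ("leaf", objects[0][1])          # all objects share one class

    def split(t):
        groups = {1: [], 2: []}
        for obj, cls in objects:
            groups[outcomes[t][obj]].append((obj, cls))
        return groups

    p_s = _pairs([cls for _, cls in objects])

    def score(t):
        # greedy criterion: max over outcomes i of c^i(t) / (P(S) - P(S^i_t))
        groups = split(t)
        worst = 0.0
        for i in (1, 2):
            drop = p_s - _pairs([cls for _, cls in groups[i]])
            if drop == 0:
                return float("inf")             # a branch separates nothing: never pick
            worst = max(worst, costs[t][i - 1] / drop)
        return worst

    best = min(tests, key=score)
    return ("node", best,
            {i: divide_pairs(g, tests, outcomes, costs)
             for i, g in split(best).items() if g})
```

Completeness of the test set guarantees that, while two classes remain, some test has a finite score, so the recursion always terminates.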
In order to analyze the algorithm, we use $Cost_W(S)$ to denote the cost of the decision tree that \textsc{DividePairs} constructs for a set of objects $S$. Let $\tau$ be the first test selected by \textsc{DividePairs}.
We can write
the ratio between the worst testing cost of the decision tree generated by
\textsc{DividePairs} and the cost of the decision tree with minimum worst testing cost as
\begin{equation}\label{eqratio} \frac{Cost_W(S)}{OPT_W(S)} = \frac{\max\{ c^1(\tau) + Cost_W(S^1_{\tau}),c^2(\tau) + Cost_W(S^2_{\tau}) \}}{OPT_W(S)} \end{equation}
Let $q$ be such that $c^{q}(\tau) + Cost_W(S^{q}_{\tau}) = \max \{c^{1}(\tau) + Cost_W(S^{1}_{\tau}),c^{2}(\tau) + Cost_W(S^{2}_{\tau}) \}$ in equation (\ref{eqratio}). We have that:
\begin{equation}\label{eqratio2} \frac{Cost_W(S)}{OPT_W(S)} = \frac{c^{q}(\tau) + Cost_W(S^{q}_{\tau})}{OPT_W(S)} \leq \frac{c^{q}(\tau)}{OPT_W(S)} + \frac{Cost_W(S^{q}_{\tau})}{OPT_W(S^{q}_{\tau})} \end{equation} where the inequality follows from Proposition \ref{prop:Subadditivity}. The following lemma shows that $OPT_W(S)$ is at least $c^{q}(\tau) P(S)/(P(S) - P(S_{\tau}^{q}))$.
\begin{lemma} $c^{q}(\tau) P(S)/(P(S) - P(S_{\tau}^{q}))$ is a lower bound on the worst testing cost of the optimal tree. \end{lemma}
\textit{\textbf{Proof:}} First, we note that in the set of decision trees with optimal worst testing cost, there is a tree $D^*$ in which every internal node has two children. Let $v$ be an arbitrarily chosen internal node in $D^*$, let $\gamma$ be the test associated with $v$ and let $R \subseteq S$ be the set of objects associated with the leaves of the subtree rooted at $v$. Let $i$ be such that $c^i(\tau)/(P(S) - P(S^i_{\tau}))$ is maximized and $j$ be such that $c^j(\gamma)/(P(S) - P(S^j_{\gamma}))$ is maximized. We have that:
\begin{align} \frac{c^{q}(\tau)}{P(S) - P(S_{\tau}^{q})} \leq \frac{c^i(\tau)}{P(S) - P(S_{{\tau}}^i)}
\leq \frac{c^j(\gamma)}{P(S) - P(S_{{\gamma}}^j)} \label{eqgreedy1}
\\ \leq \frac{c^j(\gamma)}{P(R) - P(R_{\gamma}^j)} \label{eqgreedy2} \end{align}
The last inequality in (\ref{eqgreedy1}) holds due to the greedy choice. To prove inequality (\ref{eqgreedy2}), we only have to show that $P(S) - P(S_{\gamma}^j) \geq P(R) - P(R_{\gamma}^j)$. Let $r_{\gamma}^R$ (resp. $r_{\gamma}^S$) be the number of pairs in $R$ (resp.\ $S$) separated by test $\gamma$. Since $R \subseteq S$ we have that $ r_{\gamma}^R \leq r_{\gamma}^S$ and $P(R_{\gamma}^{i}) \leq P(S_{\gamma}^{i})$ for $i=1,2$. Also, note that:
\begin{equation}\label{pairsS} P(S) = r_{\gamma}^S + P(S_{\gamma}^{1}) + P(S_{\gamma}^{2}) \end{equation}
\begin{equation}\label{pairsR} P(R) = r_{\gamma}^R + P(R_{\gamma}^{1}) + P(R_{\gamma}^{2}) \end{equation}
Hence, we have that $P(S) - P(S_{\gamma}^j) \geq P(R) - P(R_{\gamma}^j)$. Thus, we have concluded that inequality (\ref{eqgreedy2}) holds.
For a node $v$, let $S(v)$ be the set of objects associated with the leaves of the subtree rooted at $v$. Let $v_1, v_2, \dots, v_p$ be a root-to-leaf path on $D^*$ as follows: $v_1$ is the root of the tree, and for each $i=1, \dots, p-1$ the node $v_{i+1}$ is a child of
$v_i$ associated with the branch $j$ that maximizes $c^j(t_i)/(P(S) - P(S^j_{t_i}))$, where $t_i$ is the test associated with $v_i$.
We denote by $c^{*}_{t_i}$ the cost that we have to pay going from $v_i$ to $v_{i+1}$. It follows from inequality (\ref{eqgreedy2}) that \begin{equation}\label{eqaux}
\frac{ \left [ P(S(v_i)) - P(S(v_{i+1}))\right ]c^{q}(\tau)}{P(S) - P(S_{{\tau}}^{q})} \leq c^{*}_{t_i} \end{equation} for $i=1,\ldots,p-1$. Since the cost of the path from $v_1$ to $v_p$ is not larger than the worst testing cost of the optimal decision tree, we have that
$$OPT_W(S) \geq \sum\limits_{i=1}^{p-1} {c^{*}_{t_i}} \geq \frac{c^{q}(\tau)}{P(S) - P(S_{\tau}^{q})} \sum\limits_{i=1}^{p-1} \left ( P(S(v_i))-P(S(v_{i+1})) \right) = \frac{c^{q}(\tau)P(S)}{P(S) - P(S_{\tau}^{q})} ,$$ where the second inequality follows from (\ref{eqaux}) and the last identity holds because $S(v_1)=S$ and $P(S(v_p))=0$. $\qed$
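The counting identity $P(S) = r_{\gamma}^{S} + P(S_{\gamma}^{1}) + P(S_{\gamma}^{2})$ used in the proof is easy to sanity-check by brute force; the data below is hypothetical:

```python
from itertools import combinations

# objects as (class, outcome of a binary test gamma)
objs = [("A", 1), ("A", 2), ("B", 1), ("B", 2), ("C", 1)]

def sep(group):
    """Separable pairs: pairs of objects in different classes."""
    return sum(1 for (c1, _), (c2, _) in combinations(group, 2) if c1 != c2)

# pairs separated by the test itself (different class AND different outcome)
r = sum(1 for (c1, o1), (c2, o2) in combinations(objs, 2)
        if c1 != c2 and o1 != o2)

branch = lambda i: [x for x in objs if x[1] == i]
# every separable pair is either separated by the test
# or lies entirely inside one of the two branches
assert sep(objs) == r + sep(branch(1)) + sep(branch(2))
```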
Replacing the bound on $OPT_W(S)$ given by the previous lemma in equation (\ref{eqratio2}) we get that
\begin{align}\label{eq:ratio_lb} \frac{Cost_W(S)}{OPT_W(S)} \leq \frac{P(S)-P(S_{\tau}^{q})}{P(S)} + \frac{Cost_W(S_{\tau}^{q})}{OPT_W(S_{\tau}^{q})} \end{align}
Note that:
\begin{equation}\label{eq:firstterm} \frac{P(S)-P(S_{\tau}^{q})}{P(S)} = \sum\limits_{i=1}^{P(S)-P(S_{\tau}^{q})}\Bigg(\frac{1}{P(S)}\Bigg) \leq \sum\limits_{i=1}^{P(S)-P(S_{\tau}^{q})}\Bigg(\frac{1}{P(S_{\tau}^{q}) + i}\Bigg) \end{equation}
By induction on the number of pairs, we assume that for each $G \subset S$, $Cost_W(G)/OPT_W(G) \leq H(P(G))$, where $H(n) = \displaystyle\sum\limits_{i=1}^{n} 1/i$. From (\ref{eq:ratio_lb}) and (\ref{eq:firstterm}) we have that
$$\frac{Cost_W(S)}{OPT_W(S)} \leq \sum\limits_{i=1}^{P(S)-P(S_{\tau}^{q})}\Bigg(\frac{1}{P(S_{\tau}^{q}) + i}\Bigg) + H(P(S_{\tau}^{q})) = H(P(S)) \leq 2\ln(n). $$ Thus, we have the following theorem.
\begin{theorem} There is an $O(\log n)$ approximation for the version of the DFEP with binary tests and value dependent costs. \end{theorem}
\section{A bicriteria approximation} \label{sec:bicriteria} In this section, we present an algorithm that provides a simultaneous approximation for the minimization of expected testing cost and worst testing cost.
There are examples in which the minimization of the expected testing cost produces a decision tree with high worst testing cost, and the minimization of the worst testing cost produces a decision tree with high expected testing cost \cite{labericml}. Therefore, it makes sense to look for a trade-off between minimizing both measures.
Given a positive number $\rho$, two decision trees $D_E$ and $D_W$ for the instance $(S,C,T,\mathbf{p},\mathbf{c})$,
the former with expected testing cost $E$ and the latter with worst testing cost $W$, we devise a polynomial time procedure
to construct a new decision tree $D$, from $D_E$ and $D_W$, with
expected cost at most $(1 + 1/ \rho) E$ and
worst testing cost at most $(1+ \rho) W$. The procedure is very simple:
{\tt CombineTrees}($D_E$,$D_W$,$\rho$)
\begin{enumerate}
\item Define a node $v$ from $D_E$ as replaceable if the cost of the path from the root of $D_E$ to $v$ (including $v$) is at least $\rho W$ and the cost of the path from the root of $D_E$ to the parent of $v$ is smaller than $\rho W$. At this step we traverse $D_E$ to find the set $R$ of the replaceable nodes.
\item For every node $v \in R$ do
\begin{enumerate}
\item Let $S(v)$ be the set of objects associated with leaves located at the subtree rooted at $v$ in $D_E$. In addition, let $D^{S(v)}_W$ be a decision tree for $S(v)$ obtained by disassociating every object in $S-S(v)$ from $D_W$.
\item Replace the subtree of $D_E$ rooted at $v$ with the decision tree $D^{S(v)}_W$
\end{enumerate}
\item Return the tree $D$ obtained by the end of Step 2.
\end{enumerate}
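As an illustration, the procedure can be sketched as follows; the representation (a \texttt{Node} carrying a test cost, child subtrees, and the set of objects below it) and all names are ours, not from the paper.

```python
# A minimal sketch of CombineTrees; the representation is ours, not the
# paper's: a Node carries the cost of its test (0 at leaves), its child
# subtrees, and the set of objects at the leaves below it.

class Node:
    def __init__(self, cost, children, objects):
        self.cost = cost
        self.children = children
        self.objects = objects

def restrict(tree, keep):
    """D_W^{S(v)}: disassociate every object outside `keep` from the tree."""
    if not (tree.objects & keep):
        return None
    kids = [restrict(c, keep) for c in tree.children]
    return Node(tree.cost, [c for c in kids if c is not None],
                tree.objects & keep)

def combine_trees(d_e, d_w, rho, w):
    """Replace the subtree rooted at each replaceable node v (the cost of
    the root-to-v path, including v, first reaches rho*w at v) by the
    restriction of d_w to the objects below v."""
    def walk(node, cost_above):
        if cost_above + node.cost >= rho * w:      # v is replaceable
            return restrict(d_w, node.objects)
        return Node(node.cost,
                    [walk(c, cost_above + node.cost) for c in node.children],
                    node.objects)
    return walk(d_e, 0)

def worst_cost(node):
    return node.cost + (max(map(worst_cost, node.children))
                        if node.children else 0)

# Toy instance with unit test costs: W = worst_cost(d_w) = 1, rho = 2.
leaf = lambda o: Node(0, [], {o})
d_w = Node(1, [leaf('a'), leaf('b'), leaf('c')], {'a', 'b', 'c'})
d_e = Node(1, [leaf('a'),
               Node(1, [leaf('b'), Node(1, [leaf('c')], {'c'})], {'b', 'c'})],
           {'a', 'b', 'c'})
combined = combine_trees(d_e, d_w, rho=2, w=1)
assert worst_cost(combined) <= (1 + 2) * 1         # at most (1 + rho) * W
```

On this toy instance the worst testing cost drops from $3$ in $D_E$ to $2$ in the combined tree, within the $(1+\rho)W$ guarantee.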
\begin{theorem} The decision tree $D$ has expected testing cost at most $(1+1/ \rho)E$ and worst testing cost at most $(1+\rho)W$. \end{theorem} \begin{proof} First we argue that the worst testing cost of $D$ is at most $(1+\rho)W$. Let $s$ be an object in $S$. If $s$ is not a descendant of a replaceable node in $D_E$ then the cost of the path from the root of $D_E$ to $s$ is at most $\rho W$. Since this path remains the same in $D$, we have that the cost to reach $s$ in $D$ is at most $\rho W$. On the other hand, if $s$ is a descendant of a replaceable node $v$ in $D_E$, then the cost to reach $s$ in $D$ is at most $(1+\rho)W$ because the cost of the path from the root of $D$ to the parent of $v$ is at
most $\rho W$ and the cost to reach $s$ from the root of the tree $D_W^{S(v)}$ is at most $W$.
Now, we prove that the expected testing
cost of $D$ is at most $(1 + 1/ \rho) E$. For that it is enough to show that for every object $s \in S$, the cost to reach $s$ in $D$ is at most $(1 + 1/ \rho)$ times the cost of reaching $s$ in $D_E$. We split the analysis into two cases:
{\bf Case 1.}
$s$ is not a descendant of a replaceable node in $D_E$. In this case, the cost to reach $s$ in $D_E$ is equal to the cost of reaching $s$ in $D$.
{\bf Case 2.} $s$ is a descendant of a replaceable node $v$ in $D_E$. Let $K$ be the cost of the path from the root of $D_E$ to $v$. Then, the cost to reach $s$ in $D_E$ is at least $K$. In addition, since $v$ is replaceable we have that
$K \geq \rho W$. On the other hand, the cost to reach $s$ in $D$ is at most $\rho W +W$. Since $K \geq \rho W$ we have that the cost to reach $s$ in $D$ is at most $(1+1/\rho)$ times the cost of reaching $s$ in $D_E$. \end{proof}
We can improve the approximation for the case where the costs are uniform. In this case, we can assume unitary testing costs so that $W$ is the height of the decision tree $D_W$. Let $L$ and $M$, with $L<M$, be two positive integers whose values will be defined during our analysis.
To obtain a better approximation, we consider an algorithm that picks the decision tree, say $D$, with minimum expected testing cost among the decision trees $D^{L},D^{L+1},\ldots,D^{M}$, where $D^i$ is the decision tree returned by {\tt CombineTrees} when it is executed with parameters $(D_E,D_W, i/W)$. It follows from the previous theorem that $$Cost_W(D) \leq (1+M/W) W =M+ W.$$
The analysis of the expected testing cost of $D$ is more involved. First, we have that
\begin{equation} \label{eq:ub-uniformcosts} Cost_E(D) = \min_{i=L,L+1,\ldots,M} \{Cost_E(D^i)\} \leq \frac{ \sum_{i=L}^M Cost_E(D^i)}{M-L+1}. \end{equation}
Let $H$ be the height of the decision tree $D_E$. For $j=1,\ldots,H$, let
$C_j$ be the contribution of the leaves located at level $j$ to
the cost of $D_E$ so that $Cost_E(D_E)=\sum_{j=1}^H C_j$.
It follows that $$ Cost_E(D^i) \leq \sum_{j=1}^i C_j + \sum_{j=i+1}^H \frac{C_j (i+W)}{j},$$ because the objects associated with leaves that are located at levels smaller than or equal to $i$ are not modified from $D_E$ to $D^i$ while the remaining objects are located at levels smaller than or equal to $i+W$ in $D^i$. Note that $C_j/j$ in the previous inequality is the sum of the probabilities of the leaves at level $j$. By replacing the last expression in (\ref{eq:ub-uniformcosts}) and grouping the terms around the $C_j$'s we get that
$$\frac{Cost_E({D})}{Cost_E(D_E)} \leq \frac{ \sum_{j=1}^H \alpha_j C_j }{ \sum_{j=1}^H C_j} \leq \max_j\{ \alpha_j\},$$ where
\[ \alpha_j = \left\{ \begin{array}{ll}
1 & \mbox{if $j \leq L$};\\ \\
\frac{M-j +1 + \frac{(j-L)W+ (j-L)(j-1+L)/2 }{j}}{M-L+1} & \mbox{if $L < j \leq M$} \\ \\
\frac{W +(M+L)/2}{j} & \mbox{if $j \geq M+1$}
.\end{array}
\right. \]
First, note that the maximum of $\alpha_j$ over the range $ j \geq M+1$ is attained at $j=M+1$, where $\alpha_j=(W +(M+L)/2)/(M+1)$. Moreover, replacing $j=M+1$ in the formula of $\alpha_j$ for the range $L < j \leq M$ yields exactly the same value. Thus, it follows that $$ \frac{Cost_E({D})}{Cost_E(D_E)} \leq \max_{j \in (0,\infty) } \left \{ \frac{M-j +1 + \frac{(j-L)W+ (j-L)(j-1+L)/2 }{j}}{M-L+1} \right \}. $$ By simple calculus we can conclude that the expression attains its maximum at $j= \sqrt{L^2-L+2LW}$. Thus,
\begin{equation} \label{eq:finalbound}
\frac{Cost_E({D})}{Cost_E(D_E)} \leq 1+ \frac{L+W - \sqrt{L^2-L+2LW} -1/2 }{M-L+1} \leq 1+ \frac{L+W -\sqrt{L^2+2LW}}{M-L+1}.
\end{equation} The last inequality can be verified by squaring both sides and using the fact that $W,L\geq 1$.
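As a sanity check of the calculus (ours, with illustrative values $L=5$, $M=40$, $W=20$), a grid search over $j$ can be compared with the stated maximizer and with the final bound:

```python
import math

# Numerical sanity check (ours, not from the paper): for sample values of
# L, M and W, the bracketed expression alpha(j), taken over j in (0, infinity),
# is maximized near j* = sqrt(L^2 - L + 2LW), and the simplified final bound
# dominates the grid maximum.

def alpha(j, L, M, W):
    return (M - j + 1 + ((j - L) * W + (j - L) * (j - 1 + L) / 2) / j) / (M - L + 1)

L, M, W = 5, 40, 20                            # illustrative values with L < M
j_star = math.sqrt(L * L - L + 2 * L * W)
grid_max = max(alpha(j / 100.0, L, M, W) for j in range(100, 20000))
bound = 1 + (L + W - math.sqrt(L * L + 2 * L * W)) / (M - L + 1)
assert abs(grid_max - alpha(j_star, L, M, W)) < 1e-3
assert grid_max <= bound
```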
Let $r$ be a number in the interval $[0,1/W]$. We can verify that the right-hand side of (\ref{eq:finalbound}) is upper bounded by $ 1 + 2/(\rho^2 + 2\rho)$ whenever $M= \rho W$ and $L= W (t + r)$, where $t=\frac{\rho^2}{2\rho+2}$ (the proof is presented in the appendix).
Thus, by setting $M= \rho W$ and $L= \lceil W \rho^2 / (2 \rho+2) \rceil$, where $\rho$ is a positive number that can be written as $i/W$ for some integer $i$, we obtain the following theorem.
\begin{theorem} Let ${\cal I}= (S,C,T,\mathbf{p},\mathbf{c})$ be an instance of the DFEP where all the tests have unitary costs. Given two decision trees $D_E$ and $D_W$ for the instance ${\cal I}$,
the former with expected testing cost $E$ and the latter with height $W$, and a positive number $\rho$ that can be written as $i/W$ for some integer $i$, there exists
a polynomial time algorithm that constructs a decision tree $D$ with height at most $(1+ \rho) W $
and expected testing cost at most $\left (1 + \frac{2}{\rho^2 + 2\rho} \right) E.$
\end{theorem}
As an example, for $\rho=2$ this new algorithm guarantees that the expected testing cost is at most $(5/4)E$ while the initial algorithm guarantees a $1.5E$ upper bound.
\section{Conclusions}
We presented a polynomial time procedure that, given a parameter $\rho >0$, a decision tree $D_W$ with worst testing cost $W$ and a decision tree $D_E$ with expected testing cost $E$, produces a decision tree $D$ with worst testing cost at most $(1+\rho)W$ and expected testing cost at most $(1+1/\rho)E$. When the costs are uniform, the bounds can be improved to $(1+\rho)W$ and $(1+2/(\rho^2 +2 \rho))E$. The main question that remains open in this topic is whether for every $\epsilon >0$ there is some integer $n_0$ such that every instance $I$ of the DFEP with more than $n_0$ objects admits a tree $D$ such that $Cost_E(D) \leq (1+\epsilon) OPT_E(I)$ and $Cost_W(D) \leq (1+\epsilon) OPT_W(I)$. For the prefix code problem, a particular version of the DFEP explained in the introduction, this result holds \cite{laber}.
We also presented an approximation algorithm for the extended version of the DFEP where the costs of the tests depend also on the answers. For the particular case where the tests are binary, our algorithm provides a logarithmic approximation, which is the best possible unless ${\cal P}={\cal NP}$. An interesting question that deserves further investigation is whether there also exists a logarithmic approximation algorithm for the most general case, where the tests can output more than two values.
\appendix
\section{Calculation of Section \ref{sec:bicriteria}}
Let $r$ be a number in the interval $[0,1/W]$. We have to prove that:
$$\left [ (t+r)W+W -W\sqrt{(t+r)^2+2(t+r)} \right] \leq \frac{2(\rho W-(t+r)W+1)}{(\rho^2+2\rho)}$$
By simple algebraic manipulations we conclude that we have to prove that
$$(\rho^2+2\rho)\left [ (t+r+1) -\sqrt{(t+r)^2+2(t+r)} \right] \leq 2(\rho-(t+r)+1/W),$$
or equivalently,
$$(\rho^2+2\rho)(t+r+1) - 2(\rho-(t+r)+1/W) \leq (\rho^2+2\rho)\sqrt{(t+r)^2+2(t+r)} $$
Replacing $t=\frac{\rho^2}{2\rho+2}$ and using the fact that $r \leq 1/W$, it suffices to show
$$(\rho^2+2\rho)\left (\frac{\rho^2}{2\rho+2}+r \right ) +\rho^2 +2\frac{\rho^2}{2\rho+2} \leq (\rho^2+2\rho)\sqrt{(t+r)^2+2(t+r)} $$
that is, $$(\rho^2+2\rho)\left (\frac{\rho^2}{2\rho+2}+r \right) +\rho^2 +2\frac{\rho^2}{2\rho+2} \leq$$ $$ \frac{\rho^2+2\rho}{2\rho+2}\sqrt{\rho^4+2\rho^2(2\rho+2)r+(2\rho+2)^2r^2+ 2(2\rho+2)\rho^2+2(2\rho+2)^2r} $$
Multiplying both sides by $2\rho+2$, this becomes $$(\rho^2+2\rho)\rho^2 +(\rho^2+2\rho)r(2\rho+2)+\rho^2(2\rho+2)+2\rho^2 \leq $$ $$ (\rho^2+2\rho)\sqrt{\rho^4+2\rho^2(2\rho+2)r+(2\rho+2)^2r^2+ 2(2\rho+2)\rho^2+2(2\rho+2)^2r} $$
that is, $$\rho^4+4\rho^3+4\rho^2+(2\rho^3+6\rho^2+4\rho)r
\le $$
$$ (\rho^2+2\rho)\sqrt{\rho^4+4\rho^3+4\rho^2 +(2\rho+2)^2 r^2+ 4(\rho+1)(\rho^2+2\rho+2)r} $$
This can be shown by verifying that the following inequalities hold (the first one holds with equality, since $(\rho^2+2\rho)^2=\rho^4+4\rho^3+4\rho^2$): $$(\rho^4+4\rho^3+4\rho^2)^2 \leq (\rho^2+2\rho)^2(\rho^4+4\rho^3+4\rho^2) $$
$$(2\rho^3+6\rho^2+4\rho)^2r^2 \leq (\rho^2+2\rho)^2 (2\rho+2)^2 r^2 $$
and $$ 2 (\rho^4+4\rho^3+4\rho^2) (2\rho^3+6\rho^2+4\rho)r \leq (\rho^2+2\rho)^2 4(\rho+1)(\rho^2+2\rho+2)r$$
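As a numerical spot-check (ours; no substitute for the algebra), the sufficient inequality $(\rho^2+2\rho)(t+r)+\rho^2+2t \leq (\rho^2+2\rho)\sqrt{(t+r)^2+2(t+r)}$, with $t=\frac{\rho^2}{2\rho+2}$, can be tested on a grid of values of $\rho$ and $r$:

```python
# Numerical spot-check (ours, not a proof) of the sufficient inequality
# above, with t = rho^2 / (2*rho + 2), over a grid of rho and r values.
# A small relative tolerance is used because equality holds at r = 0.

def holds(rho, r):
    t = rho**2 / (2 * rho + 2)
    lhs = (rho**2 + 2 * rho) * (t + r) + rho**2 + 2 * t
    rhs = (rho**2 + 2 * rho) * ((t + r)**2 + 2 * (t + r)) ** 0.5
    return lhs <= rhs * (1 + 1e-12)

assert all(holds(i / 4.0, j / 100.0) for i in range(1, 41) for j in range(101))
```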
\end{document} |
\begin{document}
\title{THE TWO UNIFORM INFINITE QUADRANGULATIONS OF THE PLANE HAVE THE SAME LAW}
\selectlanguage{french}
\begin{abstract} On d\'emontre que les quadrangulations al\'eatoires infinies uniformes d\'efinies respectivement par Chassaing-Durhuus et par Krikun ont la m\^eme loi. \end{abstract}
\selectlanguage{english}
\begin{abstract} We prove that the uniform infinite random quadrangulations defined respectively by Chassaing-Durhuus and Krikun have the same distribution. \end{abstract}
\noindent {\bf AMS classification:} 60C05, 60J80, 05C30.
\section{Introduction}
Planar maps are proper embeddings of connected graphs in the two-dimensional sphere $\mathbb{S}^2$. Their combinatorial properties have been studied by Tutte \cite{Tutte} and many others. Planar maps have recently drawn much attention in the theoretical physics literature as models of random surfaces, especially in the setting of the theory of two-dimensional quantum gravity (see in particular the book \cite{2DQG}). A powerful tool to study these objects is the encoding of planar maps in terms of labelled trees, which was first introduced by Cori and Vauquelin in \cite{CV} and was much developed in Schaeffer's thesis \cite{Schaeffer} (see also Bouttier, Di Francesco and Guitter \cite{BDG} for a generalized version of this encoding). This correspondence between planar maps and trees makes it possible to derive certain asymptotics of large random planar maps in terms of continuous random trees (see the work of Chassaing and Schaeffer \cite{CS}) and to define a Brownian map (see Marckert and Mokkadem \cite{MaMo}) which is a continuous random metric space conjectured to be the scaling limit of various classes of planar maps (see the papers by Marckert and Miermont \cite{MaMi}, Le Gall \cite{LG}, Le Gall and Paulin \cite{LGP}). This approach has led to new asymptotic properties of large planar maps.
Another point of view is to study properties of random infinite planar maps, more precisely to study probability measures on certain classes of infinite planar maps, which are uniform in some sense. This has been done by Angel and Schramm \cite{AS} who introduced a uniform infinite triangulation of the plane, later studied by Angel \cite{A1,A2} and Krikun \cite{Krikun}.
In the present paper, we are interested in infinite random planar quadrangulations. Recall that Schaeffer's bijection (see e.g. \cite{CS}) yields a one-to-one correspondence between rooted planar quadrangulations with $n$ faces and well-labelled trees with $n$ edges. Then, there are two natural ways to define a uniform infinite quadrangulation of the sphere: one as the local limit of uniform finite quadrangulations as their size goes to infinity, and one going through Schaeffer's bijection and using local limits of uniform well-labelled trees. The first approach is developed in Krikun \cite{Kr} while the second one is developed in Chassaing and Durhuus \cite{CD}. The topologies in which the uniform finite quadrangulations converge to the infinite object differ in the two cases: in the Chassaing-Durhuus paper, the topology on quadrangulations is induced by Schaeffer's bijection and the natural topology of local convergence of rooted trees, while the topology used in Krikun's paper is the natural topology of local convergence of rooted planar maps. Therefore, the two uniform infinite random quadrangulations defined in these papers are a priori two different objects. The goal of the paper is to show that these two definitions coincide. This result is stated in Theorem \ref{equality} below. Note that our work also gives an alternative approach to Theorem 1 of Krikun \cite{Kr}: independently of the results of \cite{Kr}, Theorem \ref{equality} shows that the uniform probability measure on the space of all rooted planar quadrangulations with $n$ faces converges as $n\to\infty$ to a probability measure on the space of infinite quadrangulations, in the sense of the metric used in \cite{Kr}.
Let us briefly explain the main point of our argument. Consider a sequence of (deterministic or random) finite well-labelled trees $\theta_n$, that converges as $n \to \infty$ towards an infinite well-labelled tree $\theta_{\infty}$, in the sense that, for every $k \geqslant 1$, the restriction of $\theta_n$ to the first $k$ generations is equal to the same restriction of $\theta_{\infty}$, when $n$ is sufficiently large. Let $Q_n$ be the quadrangulation associated with $\theta_n$ via Schaeffer's bijection and let $Q_{\infty}$ be the infinite quadrangulation associated with $\theta_{\infty}$ via the extension of Schaeffer's bijection that is presented in Subsection \ref{subsec:schaeffer} below ($\theta_{\infty}$ needs to satisfy certain properties so that this makes sense). Then it is not always true that $Q_{\infty}$ is the local limit of $Q_n$ as $n \to \infty$. The problem comes from the fact that $\theta_n$ may have small labels at generations larger than $k(n)$ with $k(n) \to \infty$. Note that this problem may occur even if one knows that $\theta_{\infty}$ has finitely many labels smaller than $K$, for every integer $K$ (the latter property holds for the uniform infinite well-labelled tree thanks to the estimates of \cite{CD}, see Proposition \ref{etiquettes} below). Nonetheless, in the case when $\theta_n$ is uniformly distributed over all well-labelled trees with $n$ edges, the preceding phenomenon does not occur: for every fixed $R>0$, the probability that $\theta_n$ has a label less than $R$ above generation $S$ tends to $0$ as $S \to \infty$, uniformly in $n$. This uniform estimate is stated in Proposition \ref{nseuil} below.
We can combine this estimate with the following combinatorial argument. If two well-labelled trees coincide up to generation $S$, then the associated quadrangulations are also the same within distance $R$ from the root, where $R$ is essentially the minimum label above generation $S$ in either tree. See Proposition \ref{facegen} below for a precise statement.
The paper is organized as follows: Section 2 gives some notation and an extension of Schaeffer's bijection to the infinite case; Section 3 presents the two different definitions of the uniform infinite quadrangulation; and Section 4 contains the key estimates that allow us to prove that these definitions actually lead to the same object.
\section{Preliminaries} \label{sec:prel}
\subsection{Spatial trees} \label{subsec:spatial trees}
Throughout this work we will use the standard formalism on planar trees as found in \cite{Neveu}. Let \[ \mathcal{U} = \bigcup_{n=0}^{\infty} \mathbb{N}^n\] where by convention $\mathbb{N} = \{ 1,2, \ldots \}$ and $\mathbb{N}^0 = \left\{ \emptyset \right\}$. An element $u$ of $\mathcal{U}$ is thus a finite sequence of positive integers. If $u, v \in \mathcal{U}$, $uv$ denotes the concatenation of $u$ and $v$. If $v$ is of the form $uj$ with $j \in \mathbb{N}$, we say that $u$ is the \emph{parent} of $v$ or that $v$ is a \emph{child} of $u$. More generally, if $v$ is of the form $uw$ for $u,w \in \mathcal{U}$, we say that $u$ is an \emph{ancestor} of $v$ or that $v$ is a \emph{descendant} of $u$. A \emph{rooted planar tree} $\tau$ is a subset of $\mathcal{U}$ such that \begin{enumerate} \item $\emptyset \in \tau$ ( $\emptyset$ is called the \emph{root} of $\tau$), \item if $v \in \tau$ and $v \neq \emptyset$, the parent of $v$ belongs to $\tau$ \item for every $u \in \mathcal{U}$ there exists $k_u(\tau) \geqslant 0$ such that $uj \in \tau$ if and only if $j \leqslant k_u(\tau)$. \end{enumerate}
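The three conditions can be checked mechanically. A small sketch, with a finite tree encoded as a set of tuples of positive integers (our own encoding, usable only for finite trees):

```python
# Sketch (our encoding, finite trees only): a rooted planar tree is a set
# of Neveu words, i.e. tuples of positive integers, satisfying (1)-(3).

def is_planar_tree(tau):
    tau = set(tau)
    if () not in tau:                        # (1) the root belongs to tau
        return False
    if any(v and v[:-1] not in tau for v in tau):
        return False                         # (2) closed under taking parents
    for u in tau:                            # (3) children of u are u1, ..., u k_u(tau)
        kids = {v[-1] for v in tau if v and v[:-1] == u}
        if kids != set(range(1, len(kids) + 1)):
            return False
    return True

assert is_planar_tree({(), (1,), (2,), (1, 1), (1, 2)})
assert not is_planar_tree({(), (2,)})        # a child 2 without a child 1 violates (3)
```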
The edges of $\tau$ are the pairs $(u,v)$, where $u, v \in \tau$ and $u$ is the parent of $v$. $|\tau|$ denotes the number of edges of $\tau$ and is called the size of $\tau$. $h(\tau)$ denotes the maximal generation of a vertex in $\tau$ and is called the height of $\tau$. We denote by $\mathbf{T}_n$ the set of all rooted planar trees of size $n$ and by $\mathbf{T}_{\infty}$ the set of all infinite rooted planar trees. Then $\mathbf{T} = \cup_{n=0}^{\infty} \mathbf{T}_n$ is the set of all finite rooted planar trees and $\overline{\mathbf{T}} = \mathbf{T} \cup \mathbf{T}_{\infty}$ is the set of all rooted (finite or infinite) planar trees. A \emph{spine} of a tree $\tau$ is an infinite linear sub-tree of $\tau$ starting from its root.
A \emph{rooted labelled tree} (or spatial tree) is a pair $\theta = (\tau, (\ell(u))_{u \in \tau})$ that consists of a planar tree $\tau$ and a collection of integer labels assigned to the vertices of $\tau$ such that if $u,v \in \tau$ and $v$ is a child of $u$, then $|\ell(u) - \ell(v)| \leqslant 1$. For every $l \in \mathbb{Z}$, we denote by $\overline{\mathbf{T}}^{(l)}$ the set of all spatial trees for which $\ell(\emptyset) = l$, by $\mathbf{T}_{\infty}^{(l)}$ the set of all such trees with an infinite number of edges, by $\mathbf{T}_n^{(l)}$ the set of all such trees with $n$ edges and by $\mathbf{T}^{(l)}$ the set of all such trees with finitely many vertices. Similarly as before, $\mathbf{T}^{(l)} = \bigcup_{n=0}^{\infty} \mathbf{T}_n^{(l)}$.
If $\ell(\emptyset) = l$ and in addition $\ell(u) \geqslant 1$ for every vertex $u$ of $\tau$, we say that $\theta$ is an $l$-well-labelled tree. The corresponding sets of spatial trees are denoted by $\overline{\mathbb{T}}^{(l)}$, $\mathbb{T}^{(l)}$, $\mathbb{T}_{\infty}^{(l)}$ and $\mathbb{T}_n^{(l)}$. For $l= 1$ we will simply say well-labelled tree and denote the corresponding sets by $\overline{\mathbb{T}}$, $\mathbb{T}$, $\mathbb{T}_{\infty}$ and $\mathbb{T}_n$.
A finite spatial tree $\omega = (\tau, \ell)$ can be coded by a pair $(C,V)$, where $C = (C(t))_{0 \leqslant t \leqslant 2|\tau|}$ is the contour function of $\tau$ and $V = (V(t))_{0 \leqslant t \leqslant 2|\tau|}$ is the spatial contour function of $\omega$ (see Figure \ref{fig:contour}). To define these contour functions, let us consider a particle which, starting from the root, traverses the tree along its edges at speed one. When leaving a vertex, the particle visits the first non visited child of this vertex if there is such a child, or returns to the parent of this vertex. Since all edges will be crossed twice, the total time needed to explore the tree is $2 |\tau|$. For every $t \in [0, 2|\tau|]$, $C(t)$ denotes the distance from the root of the position of the particle. In addition if $t \in [0, 2 |\tau|]$ is an integer, $V(t)$ denotes the label of the vertex that is visited at time $t$. We then complete the definition of $V$ by interpolating linearly between successive integers. See Figure \ref{fig:contour} for an example. A spatial tree is uniquely determined by its pair of contour functions.
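As an illustration, with our own encoding of a finite spatial tree as a dictionary mapping vertices (tuples) to labels, the contour pair at integer times can be computed as follows:

```python
# Sketch (our encoding): a finite spatial tree as a dict {Neveu word: label}.
# The particle moves to the first unvisited child when possible, otherwise
# back to the parent, so each edge is crossed twice; C records heights and
# V records labels at the integer times 0, 1, ..., 2|tau|.

def contour(theta):
    C, V = [], []
    u, next_child, stack = (), 1, []
    while True:
        C.append(len(u))
        V.append(theta[u])
        if u + (next_child,) in theta:       # go down to the first unvisited child
            stack.append(next_child)
            u, next_child = u + (next_child,), 1
        elif stack:                          # all children visited: go back up
            next_child = stack.pop() + 1
            u = u[:-1]
        else:                                # back at the root, no child left
            return C, V

theta = {(): 1, (1,): 2, (1, 1): 1, (2,): 1}  # labels differ by at most 1 on edges
C, V = contour(theta)
assert C == [0, 1, 2, 1, 0, 1, 0]            # 2|tau| + 1 = 7 integer times
assert V == [1, 2, 1, 2, 1, 1, 1]
```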
\begin{figure}
\caption{A spatial tree and its pair of contour functions $(C,V)$.}
\label{fig:contour}
\end{figure}
To conclude this section, let us introduce some relevant notation. If $\omega = (\tau,\ell)$ is a labelled tree, $|\omega| = |\tau|$ is the size of $\omega$, $h(\omega) = h(\tau)$ is the height of $\omega$ and, for $S \geqslant 0$, $g_S(\omega)$ is the set of all vertices of $\omega$ at generation $S$. Finally, for every $l \in \mathbb{N}$, we let $N_l(\omega)$ denote the number of vertices of $\omega$ that have label $l$. We define $\mathscr{S}$ as the set of all trees of $\overline{\mathbb{T}}$ that have at most one spine, and for which the labels take each integer value a finite number of times: \begin{equation} \label{spine} \mathscr{S} = \left\{ \omega \in \mathbb{T}_{\infty}: \, \forall l \geqslant 1, \, N_l (\omega) < \infty \text{ and $\omega$ has a unique spine}\right\} \cup \mathbb{T}. \end{equation}
\subsection{Planar maps and quadrangulations} \label{subsec:quadrangulations}
Consider a proper embedding of a finite connected graph in the sphere $\mathbb{S}^2$ (loops and multiple edges are allowed). A (finite) \emph{planar map} is an equivalence class of such embedded graphs with respect to orientation preserving homeomorphisms of the sphere. A planar map is \emph{rooted} if it has a distinguished oriented edge, and the origin of the root is called the root vertex. In what follows, planar maps are always rooted even if this is not mentioned explicitly. The set of vertices will always be equipped with the graph distance. The faces of the map are the connected components of the complement of the union of its edges. A finite planar map is a \emph{quadrangulation} if all its faces have degree $4$.
For every integer $n \geqslant 1$ we let $\mathbf{Q}_n$ denote the set of all (rooted) quadrangulations with $n$ faces and $\mathbf{Q} = \bigcup_{n \geqslant 1} \mathbf{Q}_n$ denote the set of finite quadrangulations. Each set $\mathbf{Q}_n$ is in bijective correspondence with the set $\mathbb{T}_n$ by Schaeffer's bijection \cite{CV,Schaeffer}. There is no bijection between infinite well-labelled trees and infinite quadrangulations, but Schaeffer's correspondence has been extended to $\mathscr{S}$ in \cite{CD}. To discuss this extension, we first have to define precisely what we mean by an infinite quadrangulation. To this end we recall some definitions of \cite{AS,CD} in a slightly different form.
Throughout this work, we consider only infinite graphs such that the degree of every vertex is finite. Consider a proper embedding of an infinite graph in the plane $\mathbb{R}^2$. We say that this embedding is locally finite if every compact subset of $\mathbb{R}^2$ intersects only finitely many edges.
\begin{definition} \label{acceptable} An infinite planar map $\mathcal{M}$ is an equivalence class of locally finite embeddings of an infinite graph in $\mathbb{R}^2$, with respect to orientation preserving homeomorphisms of the plane. \end{definition}
The faces of an infinite planar map $\mathcal{M}$ are the bounded connected components of the complement of the union of its edges. With this definition, an edge of $\mathcal{M}$ is not necessarily adjacent to a face; for example, infinite trees have only one ``face'' of infinite degree, which is not a face in the sense of the previous definition. This motivates the next definition.
\begin{definition} A \emph{regular} infinite planar map is an infinite planar map such that every connected component of the complement of the union of its edges is bounded. \end{definition}
In a \emph{regular} infinite planar map, every edge is either shared by two faces or appears twice in the border of a face.
\begin{remark} With the previous definitions, an infinite tree can be embedded as an infinite planar map in $\mathbb{R}^2$, but not as a regular infinite planar map. \end{remark}
\begin{definition} \label{Qinf} An \emph{infinite planar quadrangulation} is a regular infinite planar map in which every face is bordered by a four-sided polygon. A rooted infinite quadrangulation is an infinite quadrangulation with a distinguished oriented edge $(v_0,v_1)$ called the root of the quadrangulation; $v_0$ is called the root vertex of the quadrangulation. We denote by $\mathbf{Q}_{\infty}$ the set of all rooted infinite planar quadrangulations and by $\overline{\mathbb{Q}}$ the set of all (finite or infinite) rooted planar quadrangulations, so that we have the self-evident decomposition $\overline{\mathbb{Q}} = \mathbf{Q}\cup \mathbf{Q}_{\infty}$. \end{definition}
\subsection{Schaeffer's correspondence} \label{subsec:schaeffer}
We are now going to describe the extension of Schaeffer's correspondence to the set $\mathscr{S}$. We refer to Section 6 of \cite{CD} for details and proofs.
With every infinite well-labelled tree $\omega \in \mathscr{S}$ we will associate an infinite planar quadrangulation $\Phi(\omega)$. We identify $\mathbb{S}^2$ with the set $\mathbb{R}^2 \cup \{\infty \}$, and we fix an infinite tree $\omega \in \mathscr{S}$. We can also fix an embedding of $\omega$ into $\mathbb{R}^2$ as in Definition \ref{acceptable} above. We root $\omega$ at the edge between vertices $\emptyset$ and $1$. Let $F_0$ denote the complement of the union of edges of $\omega$ in $\mathbb{S}^2$.
\begin{definition} A \emph{corner} of $F_0$ is a sector between two consecutive edges around a vertex. The label of a corner is the label of the corresponding vertex. \end{definition}
A vertex of degree $d$ defines $d$ corners and a tree $\omega \in \mathscr{S}$ has a finite number $C_k(\omega) \geqslant N_k(\omega)$ of corners with label $k$. The map $\Phi(\omega)$ is defined in three steps.
\begin{step}[see Figure \ref{figschaeffer}] A vertex $v_0$ with label $0$ is added in $F_0 \setminus \{\infty\}$ and one edge is added between this vertex and each of the $C_1(\omega)$ corners with label $1$. The new root is taken to be the edge that connects $v_0$ to the corner before the root edge of $\omega$. \end{step}
\begin{remark} Notice that the construction in step 1 is possible because $\omega$ has at most one spine. \end{remark}
After step 1, a uniquely defined rooted infinite planar map $\mathcal{M}_0$ with $C_1(\omega) -1$ faces is obtained (in the sense of the definitions of Section \ref{subsec:quadrangulations}, in particular, the faces are bounded subsets of $\mathbb{R}^2$). Notice that each face of $\mathcal{M}_0$ has a unique corner with label $0$ and two corners with label $1$. Such a face is bordered by the two edges joining $v_0$ to the two corners with label $1$ and, in the case where the two corners with label $1$ correspond to two different vertices, the unique injective path in the tree between these two vertices with label $1$.
It is natural to consider the complement in $\mathbb{R}^2$ of the union of $\mathcal{M}_0$ and its faces as an additional face of infinite degree. Let us denote this face by $F_{\infty}$. It possesses a unique corner with label $0$ and two corners with label $1$ lying on each side of the spine of $\omega$. In addition, these two corners are the last visited corners with label $1$ during a contour of the left side and right side of $\omega$. $F_{\infty}$ is thus delimited by the two edges joining these vertices and $v_0$, and the unique injective path in the tree joining these two vertices. The spine of $\omega$ lies in this face, except for finitely many vertices.
\begin{figure}
\caption{Left: step 1, edges are added between $v_0$ and corners with label $1$. Middle: step 2, numbering of a few corners in $F_{\infty}$. Right: step 2, a chord between the two sides of the spine.}
\label{figschaeffer}
\end{figure}
The second step takes place independently in each face of $\mathcal{M}_0$, including $F_{\infty}$. Let $F$ be a face of $\mathcal{M}_0$ and let $c_0$ be its corner with label $0$. If $F$ has finitely many vertices -- and therefore finitely many corners -- we number its corners from $0$ to $k-1$ in clockwise order along the border, starting with $c_0$. If $F$ is the infinite face, we number its corners on the right side of the spine with nonnegative integers in clockwise order, starting right after $c_0$. Similarly, we number its corners on the left side of the spine with negative integers in counterclockwise order, starting right after $c_0$. See e.g. Figure \ref{figschaeffer}. Let $\ell(i)$ denote the label of the $i$-th corner, so that $\ell(0) = 0$ and $\ell(1) = \ell(k - 1) = 1$ for a finite face whereas $\ell(1) = \ell (-1) = 1$ for $F_{\infty}$ (note that the function $\ell$ depends on the considered face).
In each face, let us define the successor function for all corners except the corners with label $0$ or $1$ by \[ s(i) = \begin{cases} \min \left\{ j > i : \, \ell(j) = \ell(i) - 1 \right\} & \text{if $i < 0$},\\ \min \left\{ j > i : \, \ell(j) = \ell(i) - 1 \right\} & \text{if $i>0$ and $ \left\{ j>i : \, \ell(j) = \ell(i) -1 \right\} \neq \emptyset $},\\ \min \left\{ j \leqslant 0 : \, \ell(j) = \ell(i) - 1 \right\} & \text{if
$i> 0$ and $ \left\{ j>i : \, \ell(j) = \ell(i) -1 \right\} = \emptyset $.} \end{cases} \] For a finite face, only the second case occurs, while for $F_{\infty}$ the second property of Definition \ref{spine} ensures that $\{ j \leqslant 0 : \, \ell(j) = \ell(i) - 1\}$ is finite.
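For a finite face, where only the second case occurs, the successor function can be sketched as follows (the corner labels below are illustrative):

```python
# Sketch of the successor function in a finite face, where only the second
# case of the definition occurs: s(i) is the first corner after i carrying
# label ell(i) - 1. Corners are numbered 0..k-1 in clockwise order.

def successor(ell, i):
    target = ell[i] - 1
    for j in range(i + 1, len(ell)):
        if ell[j] == target:
            return j
    raise ValueError("label %d does not reappear after corner %d" % (target, i))

ell = [0, 1, 2, 3, 2, 1]          # illustrative corner labels around a face
assert successor(ell, 2) == 5     # chord (2, 5) is added, since |5 - 2| != 1
assert successor(ell, 3) == 4     # |4 - 3| = 1: this edge is already present
```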
\begin{step}
In every face, for each corner $i$ with label $\ell(i) \geqslant 2$ and such that $|s(i) - i| \neq 1$ a chord $(i, s(i))$ is added inside the face. \end{step}
\begin{proposition}[\cite{CD}, Property 6.1] \label{intersect} Step 2 can be done in such a way that the various chords $(i,s(i))$ do not intersect. \end{proposition}
\begin{remark}
The condition $|s(i) - i| \neq 1$ means that the chord $(i,s(i))$ does not already exist in $\omega$. In $F_{\infty}$, a chord $(i, s(i))$ can connect two corners that lie on different sides of the spine (see e.g. Figure \ref{figschaeffer}). This happens in the third case occurring in the definition of $s(i)$. In that case, the corner $i$ is visited after the last occurrence of the label $\ell(i) -1$ during the contour of the right side of the spine. \end{remark}
Step 2 defines a uniquely determined regular planar map $\mathcal{M}_1$ whose faces are described by the following proposition: \begin{proposition}[\cite{CD}, Property 6.2] \label{faces} The faces of $\mathcal{M}_1$ are either triangular with labels $l$, $l+1$, $l+1$ or quadrangular with labels $l$, $l+1$, $l+2$, $l+1$. \end{proposition}
\begin{step} All edges of $\mathcal{M}_1$ with the same label on both ends are deleted. \end{step} After this last step, a unique infinite quadrangulation $\Phi(\omega)$ is obtained (see \cite{CD} for details). In addition, labels of vertices in the tree $\omega$ coincide with distances from the root of the corresponding vertices in $\Phi(\omega)$. Furthermore, the function $\Phi$ is one-to-one.
\section{Uniform infinite quadrangulations} \label{sec:quadrangulations}
This section presents two different ways to define a uniform infinite random quadrangulation of the plane.
\subsection{Direct approach}
In \cite{Kr}, the uniform infinite quadrangulation is defined as the law of the local limit of uniformly distributed finite random quadrangulations. This limit is taken with respect to the following topology: for $Q \in \mathbf{Q}$ and $R \geqslant 0$, we denote by $B_{\mathbf{Q},R}(Q)$ the union of the faces of $Q$ that have a vertex at distance strictly smaller than $R$ from the root vertex. We may view $B_{\mathbf{Q},R}(Q)$ as a finite rooted planar map. The set $\mathbf{Q}$ is equipped with the distance \[ d_{\mathbf{Q}} (Q_1,Q_2) = \left( 1 + \sup \left\{ R : \, B_{\mathbf{Q},R}(Q_1) = B_{\mathbf{Q},R}(Q_2) \right\} \right)^{-1} ,\] where the equality $B_{\mathbf{Q},R}(Q_1) = B_{\mathbf{Q},R}(Q_2)$ is in the sense of equality between two finite rooted planar maps.
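Schematically, and with the balls abstracted as a user-supplied encoding function (our names; the computation is truncated at a finite radius to remain computable):

```python
# Schematic rendering of the distance d_Q (our names): ball(r) is assumed
# to return a comparable encoding of B_{Q,R}(Q) for radius r; the encoding
# itself is not specified here. B_0 is empty, hence always equal.

def local_distance(ball1, ball2, r_max=100):
    sup = 0                              # largest R with equal balls (truncated)
    while sup < r_max and ball1(sup + 1) == ball2(sup + 1):
        sup += 1
    return 1.0 / (1 + sup)

# Toy stand-ins for balls: prefixes of integer sequences.
b1 = lambda r: tuple([1, 2, 3][:r])
b2 = lambda r: tuple([1, 2, 9][:r])
assert local_distance(b1, b2) == 1.0 / 3          # the "balls" agree up to radius 2
assert local_distance(b1, b1) == 1.0 / (1 + 100)  # identical up to the cutoff
```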
Let $(\overline{\mathbf{Q}},d_{\mathbf{Q}})$ be the completion of the metric space $(\mathbf{Q},d_{\mathbf{Q}})$. Elements of $\overline{\mathbf{Q}}$ that are not finite quadrangulations are called infinite rooted quadrangulations in the sense of Krikun.
Note that this definition is not equivalent to Definition \ref{Qinf}. For example, the quadrangulation $Q_n$ of Figure \ref{figexample} converges as $n$ goes to infinity in $(\overline{\mathbf{Q}},d_{\mathbf{Q}})$ to an infinite quadrangulation $Q$ in Krikun's sense that is not an infinite planar map in the sense of Definition \ref{acceptable}: any proper embedding of $Q$ in $\mathbb{R}^2$ is not locally finite.
\begin{figure}
\caption{A quadrangulation that converges in Krikun's sense to an infinite quadrangulation that is not an infinite planar map.}
\label{figexample}
\end{figure}
\begin{theorem}[\cite{Kr}, Theorem 1] \label{ThKr} For every $n \geqslant 1$ let $\nu_n$ be the uniform probability measure on $\mathbf{Q}_n$. The sequence $(\nu_n)_{n \in \mathbb{N}}$ converges to a probability measure $\nu$ in the sense of weak convergence in the space of all probability measures on $(\overline{\mathbf{Q}}, d_{\mathbf{Q}})$. Moreover, $\nu$ is supported on the set of infinite rooted quadrangulations (in the sense of Krikun). \end{theorem}
\begin{remark} One can extend the function $Q \in \mathbf{Q} \mapsto B_{\mathbf{Q},R}(Q)$ to a continuous function $B_{\overline{\mathbf{Q}},R}$ on $\overline{\mathbf{Q}}$. The ball $B_{\overline{\mathbf{Q}},R}(Q)$ is naturally interpreted as the union of the faces of $Q$ that have a vertex at distance strictly smaller than $R$ from the root. \end{remark}
\subsection{Indirect approach} \label{sec:indirect}
Another possible approach to define a uniform infinite random quadrangulation is to start from a uniform infinite well-labelled tree and to consider the image of its law under Schaeffer's correspondence. This method has been developed in \cite{CD}, to which we refer for details and for proofs of what follows in this section. Let us equip $\overline{\mathbb{T}}$ with the distance \[ d_{\mathbb{T}}(\omega,\omega') = \left( 1 + \sup \left\{ S : \,
B_{\mathbb{T},S}(\omega) = B_{\mathbb{T},S} (\omega') \right\} \right)^{-1} ,\] where $B_{\mathbb{T},S} (\omega)$ is the subtree of $\omega$ up to generation $S$. The metric space $(\overline{\mathbb{T}},d_{\mathbb{T}})$ is complete.
We have the following result: \begin{theorem}[\cite{CD}, Theorem 3.1] \label{cvarbres} Let $\mu_n$ be the uniform probability measure on the set of all well-labelled trees with $n$ edges. The sequence $(\mu_n)_{n \in \mathbb{N}}$ converges weakly to a probability measure $\mu$ supported on $\mathbb{T}_{\infty}$. This limit law is called the law of the uniform infinite well-labelled tree. \end{theorem}
One of the key steps to prove this result is to show the convergence \[ \mu_n \left( \omega \in \overline{\mathbb{T}} : B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \right) \underset{n \to \infty}{\longrightarrow} \mu \left( \omega \in \overline{\mathbb{T}} : B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \right) \] for every integer $S > 0$ and every well-labelled tree $\omega^{\star}$ of height $S$. This is done by explicit computations. Let $\omega^{\star}$ be a well-labelled tree of height $S$, and assume that $\omega^{\star}$ has exactly $k$ vertices at generation $S$, with respective labels $l_1,\ldots , l_k$. Then, \begin{align} \label{boulen} \mu_n \left( \omega \in \overline{\mathbb{T}} : B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \right) & = \frac{1}{D_n} \sum_{n_1 + \cdots + n_k
= n - |\omega ^{\star}|} \prod_{j=1}^k D_{n_j}^{(l_j)} ,\\ \label{boule} \mu \left( \omega \in \overline{\mathbb{T}} : B_{\overline{\mathbb{T}},S}
(\omega) = \omega^{\star} \right) & = \frac{1}{12^{|\omega^{\star}|}} \sum_{i = 1}^k d_{l_i} \prod_{j \neq i} w_{l_j}, \end{align} where, for every $l \geqslant 1$, $D_n^{(l)}$ is the cardinality of $\mathbb{T}_n^{(l)}$, $D_n^{(1)} = D_n$ and \begin{align} \label{defw} w_{l} & = 2 \frac{l (l +3)}{(l +1)(l +2)},\\ d_{l} & = \frac{2w_{l}}{560} (4 l^4 + 30 l^3 + 59 l^2 + 42 l + 4). \label{defd} \end{align}
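As a numerical sanity check (not part of the argument), the following sketch simply transcribes \eqref{defw} and \eqref{defd}; with the formulas as displayed, $w_l = 2\left(1 - \frac{2}{(l+1)(l+2)}\right)$ increases to $2$ and the leading term of $d_l$ is $\frac{2 \cdot 2 \cdot 4}{560} l^4 = \frac{l^4}{35}$.

```python
def w(l):
    # transcription of (defw): w_l = 2 l (l + 3) / ((l + 1)(l + 2))
    return 2.0 * l * (l + 3) / ((l + 1) * (l + 2))

def d(l):
    # transcription of (defd)
    poly = 4 * l**4 + 30 * l**3 + 59 * l**2 + 42 * l + 4
    return 2.0 * w(l) / 560 * poly

# w_l increases to 2, and d_l grows like l^4 / 35.
```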
\begin{proposition}[\cite{CD}, Theorem 4.3 and Theorem 5.9] \label{etiquettes} The measure $\mu$ is supported on $\mathscr{S}$. Furthermore, \[\mathbb{E}_{\mu} \left[ N_l \right] =
{\it O}(l^3) \quad \text{as $l \to \infty$}.\] \end{proposition}
A tree with law $\mu$ has almost surely a unique spine; \cite{CD} gives a precise description of the law of the labels of this spine and of the subtrees attached to each of its vertices. For every $l> 0$, let $\rho^{(l)}$ be the measure on $\mathbf{T}^{(l)}$ defined by $\rho^{(l)}(\omega) = 12^{-|\omega|}$ for every $\omega \in \mathbf{T}^{(l)}$. Then $\frac{1}{2} \rho^{(l)}$ is the law of the Galton-Watson tree with geometric offspring distribution with parameter $\frac{1}{2}$ and with random labels generated according to the following rules. The root has label $l$ and the label of every other vertex is chosen uniformly in $\{m-1,m,m+1\}$ where $m$ is the label of its parent. Furthermore, these choices are made independently for every vertex. Proposition 2.4 of \cite{CD} proves that $\rho^{(l)}(\mathbb{T}^{(l)}) = w_l$, therefore the measure $\widehat{\rho}^{(l)}$ defined on $\mathbb{T}^{(l)}$ by $\widehat{\rho}^{(l)}(\omega) = w_l^{-1} \rho^{(l)}(\omega) =w_l^{-1} 12^{-|\omega|}$ for every $\omega \in \mathbb{T}^{(l)}$ is a probability measure. The following result will be useful for our purposes. \begin{theorem}[\cite{CD}, Theorem 4.4] \label{descmu} Let $\omega$ be a random tree distributed according to $\mu$ and let $u_0, u_1, u_2, \ldots $ be the sequence of the vertices of its spine listed in genealogical order. For every $n \geqslant 0$, let $Y_n$ be the label of $u_n$. 
\begin{enumerate} \item The process $(Y_n)_{n \geqslant 0}$ is a Markov chain taking values in $\mathbb{N}$ with transition kernel $\Pi$ defined by: \begin{align*} \Pi (l,l-1) & = \frac{(w_l)^2}{12 d_l} d_{l-1} := q_l& \text{if $l \geqslant 2$,}\\ \Pi (l,l) & = \frac{(w_l)^2}{12} := r_l& \text{if $l \geqslant 1$,}\\ \Pi (l,l+1) & = \frac{(w_l)^2}{12 d_l} d_{l+1} := p_l& \text{if $l \geqslant 1$.} \end{align*} \item Conditionally given $(Y_n)_{n \geqslant 0} = (y_n)_{n \geqslant 0}$, the sequence $(L_n)_{n \geqslant 0}$ of subtrees of $\omega$ attached to the left side of the spine and the sequence $(R_n)_{n \geqslant 0}$ of subtrees attached to the right side of the spine form two independent sequences of independent labelled trees distributed according to the measures $\widehat{\rho}^{(y_n)}$. \end{enumerate} \end{theorem}
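The expansions $p_l = \frac{1}{3} + \frac{4}{3l} + O(l^{-2})$, $q_l = \frac{1}{3} - \frac{4}{3l} + O(l^{-2})$ and $\frac{q_l}{p_l} = 1 - \frac{8}{l} + O(l^{-2})$ quoted below from \cite{CD} can be checked numerically against this kernel. The sketch transcribes \eqref{defw}, \eqref{defd} and the kernel of Theorem \ref{descmu}; it is a sanity check only, not part of the proof.

```python
def w(l):
    return 2.0 * l * (l + 3) / ((l + 1) * (l + 2))

def d(l):
    return 2.0 * w(l) / 560 * (4*l**4 + 30*l**3 + 59*l**2 + 42*l + 4)

def p(l):   # Pi(l, l + 1)
    return w(l)**2 / (12 * d(l)) * d(l + 1)

def q(l):   # Pi(l, l - 1), for l >= 2
    return w(l)**2 / (12 * d(l)) * d(l - 1)

def r(l):   # Pi(l, l)
    return w(l)**2 / 12

l = 10**4
drift = l * (1 - q(l) / p(l))   # close to 8 for large l
```

At $l = 10^4$ one finds $l \left(1 - \frac{q_l}{p_l}\right) \approx 8$, the repulsion from the origin exploited in the proof of Lemma \ref{labspine}.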
We can now map the law of the uniform infinite random tree on the set of quadrangulations using Schaeffer's correspondence. Let us equip $\Phi(\mathscr{S})$ with the distance $d_{\Phi}$ so that $\Phi$ is an isometry from $\mathscr{S}$ onto $\Phi(\mathscr{S})$. We denote by $\mu_{\Phi,n}$ and $\mu_{\Phi}$ the respective image measures of $\mu_n$ and $\mu$ under $\Phi$. The measure $\mu_{\Phi}$ is well defined because $\mu$ is supported on $\mathscr{S}$.
Since $\Phi$ is a bijection between $\mathbb{T}_n$ and $\mathbf{Q}_n$, $\mu_{\Phi,n} = \nu_n$ is the uniform probability measure on the set of quadrangulations with $n$ faces. As a direct consequence of Theorem \ref{cvarbres}, the sequence $(\mu_{\Phi,n})_{n \in \mathbb{N}}$ converges weakly to $\mu_{\Phi}$ in the space of all probability measures on $(\Phi(\mathscr{S}),d_{\Phi})$. Thus, in some sense, $\mu_{\Phi}$ can also be viewed as a uniform probability measure on the space of infinite quadrangulations.
\begin{remark} The topology induced by $d_{\Phi}$ on the set $\Phi(\mathscr{S})$ is rather different from the one that would be induced by $d_{\mathbf{Q}}$. Indeed, it may happen that two trees $\omega$ and $\omega'$ are close for the metric $d_{\mathbb{T}}$, but the quadrangulations $\Phi(\omega)$ and $\Phi(\omega')$ are very different for $d_{\mathbf{Q}}$. For example, the linear tree $\omega_n$ with $2n-1$ vertices and with labels given by the sequence $1, 2, \ldots, n-1, n, n-1, \ldots, 2, 1$ converges as $n$ goes to infinity to the infinite linear tree $\omega$ with labels given by the sequence $1,2, \ldots$. As a consequence, the quadrangulation $\Phi(\omega_n)$ converges to the infinite quadrangulation $\Phi(\omega)$ in $(\Phi(\mathscr{S}),d_{\Phi})$ as $n$ goes to infinity. On the other hand, for every $n \geqslant 1$, the quadrangulation $\Phi(\omega_n)$ has two vertices at distance $1$ from its root whereas $\Phi(\omega)$ has only one vertex at distance $1$ from its root and therefore the sequence $(\Phi(\omega_n))_{n \in \mathbb{N}}$ does not converge to $\Phi(\omega)$ in $(\overline{\mathbf{Q}},d_{\mathbf{Q}})$.
It is then a natural question to ask whether the two notions of uniform infinite quadrangulation that we have introduced coincide. \end{remark}
\section{Equality of the two uniform infinite quadrangulations} \label{sec:egalite}
In this section, we will show that the two definitions of the uniform infinite quadrangulation coincide. The first problem comes from the fact that we have two different notions of infinite quadrangulations: elements of $\Phi(\mathscr{S})$, which are regular planar maps on one hand, and elements of the completion $\overline{\mathbf{Q}}$ of $\mathbf{Q}$ on the other hand. This problem can be solved by identifying $\Phi(\mathscr{S})$ with a subset of $\overline{\mathbf{Q}}$, allowing us to consider $\mu_{\Phi}$ as a measure on $\overline{\mathbf{Q}}$ supported on $\Phi(\mathscr{S})$.
More precisely, let $R>0$ and $\omega \in \mathscr{S}$. Define $B_R(\Phi(\omega))$ as the union of all faces of $\Phi(\omega)$ that have a vertex at distance strictly smaller than $R$ from the root. Since the tree $\omega$ has only finitely many vertices with label smaller than $R+1$, there are finitely many such faces and $B_R(\Phi(\omega))$ is a finite map. Therefore $\mathbb{S}^2 \setminus B_R(\Phi(\omega))$ has finitely many connected components, and the boundaries of these components are finite-length cycles of $\Phi(\omega)$.
Let $\gamma$ be such a cycle. Each edge of $\gamma$ is adjacent to two faces of $\Phi(\omega)$. One has a vertex at distance strictly smaller than $R$ from the root, and the other one has only vertices at distance at least $R$ from the root. The quadrangulation being bipartite, each edge of $\gamma$ connects a vertex at distance $R$ from the root with a vertex at distance $R+1$ from the root. Therefore, by adding to $B_R(\Phi(\omega))$ an extra vertex in the connected component of $\mathbb{S}^2 \setminus B_R(\Phi(\omega))$ bounded by $\gamma$ and an edge between this vertex and each vertex of $\gamma$ at distance $R+1$ from the root, and repeating this operation for every connected component of $\mathbb{S}^2 \setminus B_R(\Phi(\omega))$, we obtain a finite quadrangulation. The sequence of finite quadrangulations obtained in this way for every $R>0$ converges to $\Phi(\omega)$ as $R$ goes to infinity, in the sense of Krikun, showing that for every tree $\omega \in \mathscr{S}$, $\Phi(\omega)$ can be identified with an element of $\overline{\mathbf{Q}}$.
To be able to consider $\mu_{\Phi}$ as a measure on $\overline{\mathbf{Q}}$, we now need to verify that the mapping $\Phi : \mathscr{S} \to \overline{\mathbf{Q}}$ is measurable with respect to the Borel $\sigma$-field of $\left( \overline{\mathbf{Q}}, d_{\mathbf{Q}} \right)$. The following lemma is proved in Section \ref{propbij}: \begin{lemma} \label{measurable} Fix $R > 0$ and $\omega_0 \in \mathscr{S}$. The set $ A = \{ \omega \in \mathscr{S} : \, B_{\overline{\mathbf{Q}},R}(\Phi(\omega)) = B_{\overline{\mathbf{Q}},R}(\Phi(\omega_0)) \}$ is measurable with respect to the Borel $\sigma$-field of $\left( \mathscr{S}, d_{\overline{\mathbb{T}}} \right)$. \end{lemma}
Fix $Q^{\star} \in \overline{\mathbf{Q}}$. Lemma \ref{measurable} implies that \begin{equation*}
\Phi^{-1} \left( \left\{ Q \in \overline{\mathbf{Q}} : \, d_{\mathbf{Q}} (Q,Q^{\star}) \leqslant \frac{1}{R+1} \right\} \right) = \Phi^{-1} \left( \left\{ \vphantom{\frac{1}{R}} Q \in \overline{\mathbf{Q}} : \, B_{\overline{\mathbf{Q}},R}(Q) =
B_{\overline{\mathbf{Q}},R}(Q^{\star}) \right\} \right) \end{equation*} is measurable with respect to the Borel $\sigma$-field of $\left( \mathscr{S} , d_{\overline{\mathbb{T}}} \right)$, proving that $\Phi : \mathscr{S} \to \overline{\mathbf{Q}}$ is measurable. Therefore, we may and will see $\mu_{\Phi}$ as a probability measure on $\left( \overline{\mathbf{Q}} , d_{\mathbf{Q}} \right)$.
We are now ready to state our main result: \begin{theorem} \label{equality} The sequence $\left( \mu_{\Phi,n} \right)_{n \in \mathbb{N}}$ converges weakly to $\mu_{\Phi}$ in the space of all probability measures on $\left(\overline{\mathbf{Q}}, d_{\mathbf{Q}} \right)$. Therefore $\mu_{\Phi}$ viewed as a probability measure on $\left(\overline{\mathbf{Q}}, d_{\mathbf{Q}} \right)$ coincides with $\nu$. \end{theorem}
Since $\mu_{\Phi,n} = \nu_n$ and $\nu$ is defined as the limit of the sequence $(\nu_n)$ in the space of all probability measures on $\left(\overline{\mathbf{Q}}, d_{\mathbf{Q}} \right)$, the second assertion is a direct consequence of the first one.
To establish the first assertion, we have to show that for every $Q^{\star} \in \overline{\mathbf{Q}}$ and $R > 0$ one has \begin{equation*} \mu_{\Phi,n} \Big( Q \in \overline{\mathbf{Q}} : B_{\overline{\mathbf{Q}},R}(Q) = B_{\overline{\mathbf{Q}},R}(Q^{\star})\Big) \underset{n \to \infty}{\longrightarrow} \mu_{\Phi} \Big( Q \in \overline{\mathbf{Q}} : B_{\overline{\mathbf{Q}},R}(Q) = B_{\overline{\mathbf{Q}},R}(Q^{\star})\Big). \end{equation*} The remaining part of this work is devoted to the proof of this convergence.
\subsection{A property of Schaeffer's correspondence} \label{propbij}
For all integers $S>0$ and $R>0$ we let \begin{equation} \label{omegas} \Omega_S (R) = \{ \omega \in \overline{\mathbb{T}} : \, \omega \text{ has a vertex with label} \leqslant R+1 \text{ at a generation strictly greater than } S \}. \end{equation} In the first two statements of this section, $S$ and $R$ are two fixed positive integers.
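Membership in $\Omega_S(R)$ depends only on the labels appearing strictly above generation $S$. As a toy illustration, assuming a hypothetical encoding of a (truncated) tree as a mapping from generations to the lists of labels occurring there:

```python
def in_omega(labels_by_generation, S, R):
    """Does the tree have a label <= R + 1 strictly above generation S?

    labels_by_generation: dict {generation: [labels at that generation]},
    a finite truncation of the tree (hypothetical encoding).
    """
    return any(label <= R + 1
               for gen, labels in labels_by_generation.items() if gen > S
               for label in labels)
```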
\begin{proposition} \label{facegen} Let $\omega$ be a tree of $\mathscr{S}$ which does not belong to $\Omega_S(R)$ (i.e. $\omega$ is such that the label of every vertex at a generation strictly greater than $S$ is at least $R+2$). Then $B_{\overline{\mathbf{Q}},R}(\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(B_{\overline{\mathbb{T}},S}(\omega)))$. \end{proposition} \begin{proof} The proof follows step by step the construction of $\Phi(\omega)$ in Section \ref{subsec:schaeffer}. Fix an embedding of $\omega$ as an infinite planar map.
In the first step, an infinite planar map $\mathcal{M}_0(\omega)$ is obtained from $\omega$ by adding an extra vertex $v_0$ with label $0$ and edges between $v_0$ and corners with label $1$. Similarly, we can construct a planar map $\mathcal{M}_0(B_{\overline{\mathbb{T}},S}(\omega))$. The extra edges in these two maps are uniquely determined by corners with label $1$, and these corners are determined by $B_{\overline{\mathbb{T}},S}(\omega)$ (no vertex at a generation greater than $S$ has a label less than $2$). We consider the unique ``infinite face'' of $\mathcal{M}_0$ as an extra face. The maps $\mathcal{M}_0(\omega)$ and $\mathcal{M}_0(B_{\overline{\mathbb{T}},S}(\omega))$ then have the same number of faces, say $p$, which in addition have the same boundaries, composed of the two edges joining $v_0$ to the corners with label $1$ and, in the case when these corners with label $1$ belong to different vertices, the unique injective path in the tree between these two vertices. Let $F_1(\omega),\ldots, F_p(\omega)$ and $F_1(B_{\overline{\mathbb{T}},S}(\omega)),\ldots , F_p(B_{\overline{\mathbb{T}},S}(\omega))$ denote the faces of $\mathcal{M}_0(\omega)$ and $\mathcal{M}_0(B_{\overline{\mathbb{T}},S}(\omega))$ respectively, listed in such a way that, for every $i$, the faces $F_i(\omega)$ and $F_i(B_{\overline{\mathbb{T}},S}(\omega))$ have the same boundary.
In the second step, edges $(c,s(c))$ are added inside each face for every corner $c$, finally giving two regular planar maps $\mathcal{M}_1(\omega)$ and $\mathcal{M}_1(B_{\overline{\mathbb{T}},S}(\omega))$. Let us consider a face $F_i(\omega)$ of $\mathcal{M}_0(\omega)$ and the corresponding face $F_i(B_{\overline{\mathbb{T}},S}(\omega))$. The corners of these faces are numbered $(c_{i,j})_{j \in J_i}$ for $F_i(\omega)$ and $(c'_{i,j})_{j \in J'_i}$ for $F_i(B_{\overline{\mathbb{T}},S}(\omega))$, the numbering being in clockwise order for a finite face; in the case of the infinite face, corners on the left side of the spine of the tree are numbered in counterclockwise order and corners on the right side of the spine in clockwise order.
For every $i \in \{1, \ldots ,p \}$, let $v_{i,1}, \ldots , v_{i,k_i}$ be the vertices of $F_i(\omega)$ at generation $S$ that have at least one child. These vertices are also vertices of $F_i(B_{\overline{\mathbb{T}},S}(\omega))$ at generation $S$ and their labels are greater than $R+1$. For every $j \leqslant k_i$, let $e_{i,j}$ be the last corner with label $R+1$ before $v_{i,j}$ in $F_i(\omega)$. This corner is the same in $F_i(\omega)$ and $F_i(B_{\overline{\mathbb{T}},S}(\omega))$. The same edge $(e_{i,j},s(e_{i,j}))$ joining $e_{i,j}$ to the first corner following $e_{i,j}$ with label $R$ is thus added to $F_i(\omega)$ and $F_i(B_{\overline{\mathbb{T}},S}(\omega))$ (note that this corner is also the first corner with label $R$ following every corner of $v_{i,j}$, see Figure \ref{faceisole}).
\begin{figure}
\caption{A face $F_i$ and a cycle $\gamma_{i,j}$ associated with a vertex $v_{i,j}$ at generation $S$.}
\label{faceisole}
\end{figure}
Therefore, for every $j \in \{ 1, \ldots, k_i \}$ the same cycle $\gamma_{i,j}$, composed of the edge $(e_{i,j},s(e_{i,j}))$ and the genealogical path between $e_{i,j}$ and $s(e_{i,j})$, appears in both $F_i(\omega)$ and $F_i(B_{\overline{\mathbb{T}},S}(\omega))$ (see Figure \ref{faceisole}). For $j \neq j'$ the (strict) interiors of the cycles $\gamma_{i,j}$ and $\gamma_{i,j'}$ are either disjoint, or one of them is contained in the other one. Here we define the interior of a cycle as the connected component of the complement of this cycle which does not contain $v_0$.
Let us now show that if a face $f$ of $\Phi(\omega)$ intersects the interior of a cycle $\gamma_{i,j}$, the labels of vertices of $f$ are greater than or equal to $R$. We first deal with the case when $f$ has a vertex $u$ that belongs to the interior of the cycle $\gamma_{i,j}$. If the label of $u$ is greater than or equal to $R+2$ the conclusion is obvious. If not, the label of $u$ is $R+1$ and for $f$ to have a vertex with label $R-1$, $u$ must be connected to vertices with label $R$ by two edges: the only possible choice for a vertex with label $R$ is $s(e_{i,j})$ and the last vertex of $f$ would then belong to the domain bounded by the union of the two edges connecting $u$ to $s(e_{i,j})$ (see Figure \ref{faceisole}) so that its label could not be $R-1$. The case when no vertex of $f$ belongs to the interior of the cycle $\gamma_{i,j}$ is treated in a similar manner.
The previous discussion shows that faces of $\Phi(\omega)$, respectively of $\Phi(B_{\overline{\mathbb{T}},S}(\omega))$, that intersect the interior of a cycle $\gamma_{i,j}$, are not taken into account in the definition of $B_{\overline{\mathbf{Q}},R}(\Phi(\omega))$, respectively of $B_{\overline{\mathbf{Q}},R}(\Phi(B_{\overline{\mathbb{T}},S}(\omega)))$.
Let us denote by $\Phi_1(\omega)$ the planar map obtained after the second step of the construction of $\Phi(\omega)$ (this map is denoted by $\mathcal{M}_1$ in Section \ref{subsec:schaeffer}). Let us consider the map $\widetilde{\Phi}_1(\omega)$ obtained by removing every edge and vertex of $\Phi_1(\omega)$ lying in the interior of a cycle $\gamma_{i,j}$. By construction, every vertex of $\omega$ with generation strictly greater than $S$ belongs to the interior of a cycle $\gamma_{i,j}$. It follows that \begin{equation} \label{phi1} \widetilde{\Phi}_1(\omega) = \widetilde{\Phi}_1 \left( B_{\overline{\mathbb{T}},S} (\omega) \right). \end{equation}
Finally, let $\Phi_2(\omega)$ denote the map obtained by removing every edge of $\widetilde{\Phi}_1(\omega)$ connecting two vertices of $\widetilde{\Phi}_1(\omega)$ with the same label less than or equal to $R$. Every face of $\Phi(\omega)$ that is taken into account in the ball $B_{\overline{\mathbf{Q}},R}(\Phi(\omega))$ is also a quadrangular face of $\Phi_2(\omega)$. Conversely, every quadrangular face of $\Phi_2(\omega)$ having a vertex with label strictly smaller than $R$ is also a face of $B_{\overline{\mathbf{Q}},R}(\Phi(\omega))$. In other words, the ball $B_{\overline{\mathbf{Q}},R}(\Phi(\omega))$ is the union of the quadrangular faces of $\Phi_2(\omega)$ having a vertex with label strictly smaller than $R$. From \eqref{phi1} we have \[ \Phi_2(\omega) = \Phi_2 \left(B_{\overline{\mathbb{T}},S} (\omega) \right), \] and the previous observations allow us to conclude that \[ B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega) \right) = B_{\overline{\mathbf{Q}},R} \left( \Phi \left( B_{\overline{\mathbb{T}},S} (\omega) \right) \right) \] which completes the proof. \end{proof}
\begin{corollary} \label{pbcle} Let $\omega_0 \in \mathscr{S}$. There exists a countable collection $\left(\omega^{S,R}_i\right)_{i \in I}$ of trees in $\mathscr{S} \cap \Omega_S(R)^c$ verifying for all $i \in I$ \[ B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega_i^{S,R}) \right) = B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega_0) \right) \] and such that for every $\omega \in \mathscr{S} \cap \Omega_S(R)^c$ the following assertions are equivalent: \begin{enumerate}
\item $B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega) \right) = B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega_0) \right);$
\item there exists $i \in I$ such that $B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i^{S,R})$. \end{enumerate} \end{corollary} \begin{proof} The collection $\left(\omega_i^{S,R}\right)_{i \in I}$ that consists of all finite trees $\omega'$ having at most $S$ generations and such that $B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega') \right) = B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega_0) \right)$ is countable and has the desired properties. Indeed, if $\omega \in \mathscr{S} \cap \Omega_S(R)^c$ and if there exists $i \in I$ such that $B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i^{S,R})$, Proposition \ref{facegen} ensures that $B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega) \right) = B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega_i^{S,R}) \right) = B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega_0) \right)$. Conversely, if $B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega) \right) = B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega_0) \right)$ and $\omega \in \mathscr{S} \cap \Omega_S(R)^c$, then $\omega' = B_{\overline{\mathbb{T}},S} (\omega)$ verifies $B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega) \right) = B_{\overline{\mathbf{Q}},R} \left( \Phi (\omega') \right)$ by Proposition \ref{facegen} and $\omega'$ belongs to the collection $\left(\omega_i^{S,R}\right)_{i \in I}$. \end{proof}
We conclude this section with the proof of Lemma \ref{measurable}. \begin{proof}[Proof of Lemma \ref{measurable}] Fix $R>0$. For every $S>0$ the set $\Omega_S(R)$ is open and closed in $\overline{\mathbb{T}}$. In addition one has \begin{equation*} \begin{split} A & = \bigcup_{S > 0} \left( \left\{ \omega \in \mathscr{S} : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0)) \right\} \cap \Omega_S(R)^c \right)\\ & = \bigcup_{S > 0} \bigcup_{i \in I_{S,R}} \left( \left\{ \omega \in
\mathscr{S} : \, B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i^{S,R}) \right\} \cap \Omega_S(R)^c \right) \end{split} \end{equation*} where $\left(\omega_i^{S,R}\right)_{i \in I_{S,R}}$ is the collection given by Corollary \ref{pbcle}. This shows that the set $A$ is measurable. \end{proof}
\subsection{Asymptotic behavior of labels on the spine} \label{spinelabels}
Recall that the sequence $(Y_k)_{k \geqslant 0}$ of the successive labels of vertices of the spine is a Markov chain with transition matrix $\Pi$ given by Theorem \ref{descmu}. In this section we study the asymptotic behavior of this Markov chain. \begin{lemma} \label{labspine} The Markov chain $(Y_k)_{k \geqslant 0}$ is transient. In addition, for every $\varepsilon > 0$ there exists $\alpha > 0$ such that for $k$ large enough one has \[
\mathbb{P}\left[ Y_j \geqslant \alpha k, \, \forall j \geqslant 0 \, \middle|
\, Y_0 =k \right] \geqslant 1 - \varepsilon. \] \end{lemma}
\begin{proof} The Taylor expansion $\frac{q_k}{p_k} = 1 - \frac{8}{k} + {\it O} \left( \frac{1}{k^2} \right)$ (\cite{CD}, Lemma 5.5) implies that there exists $C >0$ such that \[ \prod_{i=2}^{k} \frac{q_i}{p_i} \underset{k \to \infty}{\sim} C k^{-8}. \] A standard argument for birth and death processes then ensures that $Y$ is transient. Furthermore, for every $k > j \geqslant 1$, \[ \mathbb{P}_k \left[T_j = \infty \right] =
\frac{\sum_{i=j}^{k-1}\frac{q_i}{p_i} \frac{q_{i-1}}{p_{i-1}} \cdots \frac{q_{j+1}}{p_{j+1}}}
{\sum_{i=j}^{\infty}\frac{q_i}{p_i} \frac{q_{i-1}}{p_{i-1}} \cdots \frac{q_{j+1}}{p_{j+1}}} \] where $T_j$ is the hitting time of $j$. Therefore one has, for $\alpha < 1$, \begin{align*} \mathbb{P}_k \left[T_{[\alpha k]} = \infty \right] & =
\frac{\frac{1}{k}\sum_{i=[\alpha k]}^{k-1}\frac{q_i}{p_i} \frac{q_{i-1}}{p_{i-1}} \cdots
\frac{q_{[\alpha k] +1}}{p_{[\alpha k] +1}}}
{\frac{1}{k}\sum_{i=[\alpha k]}^{\infty}\frac{q_i}{p_i} \frac{q_{i-1}}{p_{i-1}} \cdots
\frac{q_{[\alpha k]+1}}{p_{[\alpha k]+1}}}\\ & \underset{k \to \infty}{\longrightarrow} \frac{\int_{\alpha}^1 \left( \frac{\alpha}{t} \right)^8 dt}
{\int_{\alpha}^{\infty} \left( \frac{\alpha}{t} \right)^8 dt} =
1 - \alpha^7. \end{align*} The desired result follows. \end{proof}
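For the reader's convenience, the value of the last limit comes from the elementary primitives
\[ \int_{\alpha}^1 \left( \frac{\alpha}{t} \right)^8 \mathrm{d}t = \frac{\alpha}{7} \left( 1 - \alpha^{7} \right), \qquad \int_{\alpha}^{\infty} \left( \frac{\alpha}{t} \right)^8 \mathrm{d}t = \frac{\alpha}{7}, \]
so that the ratio of the two integrals is indeed $1 - \alpha^{7}$; taking $\alpha^{7} \leqslant \varepsilon$ then yields the bound in the statement of the lemma for $k$ large enough.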
\begin{proposition} \label{limlabspine} Let $Z$ be a nine-dimensional Bessel process started at $0$. Then \[
\left( \frac{1}{\sqrt{n}} Y_{[nt]} \right)_{t \geqslant 0} \underset{n \to
\infty}{\longrightarrow} \left( Z_{\frac{2}{3} t} \right)_{t \geqslant 0} \] in the sense of convergence in distribution in the space $D(\mathbb{R}_+,\mathbb{R}_+)$. \end{proposition} \begin{proof} The convergence in the proposition is a direct consequence of a more general result by Lamperti \cite{Lam} which we now recall. Let $(X_n)_{n \geqslant 0}$ be a time-homogeneous Markov chain on $\mathbb{R}_+$ verifying: \begin{enumerate} \item for every $K > 0$ one has uniformly in $x \in \mathbb{R}_+$ \[\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i = 0}^{n-1}
\mathbb{P} \left( X_i \leqslant K \, \middle| \, X_0 = x \right) = 0;\] \item for every $k \in \mathbb{N}$ the following moments exist and are bounded as functions of $x \in \mathbb{R}_+$
\[ m_k (x) = \mathbb{E} \left[ (X_{n+1} - X_n)^k \middle| X_n = x \right];\] \item there exist $\beta > 0$ and $\alpha > - \beta /2$ such that \begin{align*} \lim_{x \rightarrow \infty} m_2(x) & = \beta,\\ \lim_{x \rightarrow \infty} x \, m_1(x) & = \alpha. \end{align*} \end{enumerate} Let us define the process $\left(x_t^{(n)} \right)_{t \in \mathbb{R}_+}$ by $x_t^{(n)} = n^{-1/2} X_i$ if $t = \frac{i}{n}$, $i = 0,1,2, \ldots$, and linear interpolation on intervals of the form $\left[ \frac{i-1}{n}, \frac{i}{n} \right]$. Lamperti's theorem states that $\left(x_t^{(n)}\right)_{t \in \mathbb{R}_+}$ converges in distribution to the diffusion process $(x_t)_{t \in \mathbb{R}_+}$ with generator \[ L = \frac{\alpha}{x} \frac{\mathrm{d}}{\mathrm{d}x} + \frac{\beta}{2} \frac{\mathrm{d}^2}{\mathrm{d}x^2}.\]
In our case, we consider the Markov chain $\widetilde{Y}$ whose transition matrix is given by $\widetilde{\Pi} (x,y) = \Pi ([x],[y])$ if $y = x+1$, $x-1$ or $x$. Assertion 1 easily follows from Lemma \ref{labspine} and assertion 2 is trivial. In addition, one has $p_n = \frac{1}{3} + \frac{4}{3n} + {\it O} (n^{-2})$ and $q_n = \frac{1}{3} - \frac{4}{3n} + {\it O} (n^{-2})$ (\cite{CD}, Lemma 5.5), giving \begin{align*} \lim_{x \rightarrow \infty} m_2(x) & = \frac{2}{3},\\ \lim_{x \rightarrow \infty} x \, m_1(x) & = \frac{8}{3}, \end{align*} so that assertion 3 holds with $\alpha = 8/3$ and $\beta = 2/3$.
The rescaled chain $Y$ thus converges in law to the diffusion process with generator \[L = \frac{2}{3} \left(\frac{4}{x} \frac{\mathrm{d}}{\mathrm{d}x} + \frac{1}{2} \frac{\mathrm{d}^2}{\mathrm{d}x^2} \right),\] which is the desired result. \end{proof}
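As a cross-check of the identification of the limit, note that a Bessel process of dimension $\delta$ time-changed by a factor $\beta$ has generator $\beta \left( \frac{\delta - 1}{2x} \frac{\mathrm{d}}{\mathrm{d}x} + \frac{1}{2} \frac{\mathrm{d}^2}{\mathrm{d}x^2} \right)$, so that $\delta = \frac{2\alpha}{\beta} + 1 = 9$ here. The following sketch recovers $\alpha$, $\beta$ and $\delta$ numerically from the kernel of Theorem \ref{descmu}, transcribing \eqref{defw} and \eqref{defd} directly; it is a sanity check only.

```python
def w(l):
    return 2.0 * l * (l + 3) / ((l + 1) * (l + 2))

def d(l):
    return 2.0 * w(l) / 560 * (4*l**4 + 30*l**3 + 59*l**2 + 42*l + 4)

def step_moment(l, k):
    # k-th moment of Y_{n+1} - Y_n given Y_n = l (steps -1, 0, +1),
    # computed from the transition kernel of Theorem (descmu)
    c = w(l)**2 / (12 * d(l))
    p, q = c * d(l + 1), c * d(l - 1)
    return p * 1**k + q * (-1)**k

x = 10**4
alpha = x * step_moment(x, 1)   # limit 8/3
beta = step_moment(x, 2)        # limit 2/3
delta = 2 * alpha / beta + 1    # Bessel dimension, limit 9
```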
\subsection{Asymptotic properties of small labels} \label{smalllabels}
Thanks to Corollary \ref{pbcle}, the proof of Theorem \ref{equality} will reduce to showing that the $\mu_n$-measure of certain balls in the space of trees converges to the corresponding $\mu$-measure. We still need to show that the error made by disregarding trees that belong to $\Omega_S(R)$ is small when $S$ is large. In this section we fix $R,\varepsilon >0$ and write $\Omega_S = \Omega_S(R)$ to simplify notation.
\begin{lemma} \label{seuil} There exists an integer $S^{\star} > 0$ such that $\mu(\Omega_S) \leqslant \varepsilon$ for every $S > S^{\star}$. \end{lemma} \begin{proof} Let $\Omega = \bigcap_{S=1}^{\infty} \Omega_S$. If $\omega \in \Omega$ then $\omega$ has infinitely many vertices with label in $\{1, \ldots, R+1 \}$ and there exists $l \in\{1, \ldots, R+1 \}$ such that $N_l(\omega) = \infty$. Since $\mu$ is supported on $\mathscr{S}$, one has $\mu ( \Omega) = 0$ and therefore $\mu (\Omega_S) \rightarrow 0$ as $S \rightarrow \infty$. \end{proof}
The main ingredient of the proof of Theorem \ref{equality} is Proposition \ref{nseuil}, which gives an analog of Lemma \ref{seuil} when $\mu$ is replaced by $\mu_n$, with \emph{uniformity} in $n$. To establish this estimate, we will need an upper bound for the probability that there exists a vertex at generation $S$ with label smaller than $S^{\alpha}$, where $\alpha < 1/2$ is fixed (see Lemma \ref{estimlabels} below). Let us first give an easy preliminary lemma: \begin{lemma} \label{size} Fix $S>0$. There exist positive integers $N_1(S)$ and $K_{\varepsilon}(S)$ such that for every $n > N_1(S)$:
\[ \mu_n \left( \omega : \, \left|B_{\overline{\mathbb{T}},S}(\omega)\right|
> K_{\varepsilon}(S) \right) < \varepsilon .\] \end{lemma} \begin{proof}
This result is a direct consequence of the convergence of the measures $\mu_n$ to $\mu$. Indeed, as $\left|B_{\overline{\mathbb{T}},S}(\omega)\right|$ is finite for every tree $\omega$, one can choose $K_{\varepsilon}(S)$ large enough that
\[ \mu \left( \omega : \, \left|B_{\overline{\mathbb{T}},S}(\omega)\right| > K_{\varepsilon}(S) \right) < \varepsilon.\] The convergence of $\mu_n$ to $\mu$ then gives $N_1(S)$ such that the inequality of the lemma is true for $n > N_1(S)$. \end{proof}
There are finitely many well-labelled trees with height exactly $S$ and at most $K_{\varepsilon}(S)$ edges. Let us denote this number by $M_{\varepsilon}(S)$.
For every $S > 0$ and $\alpha \in \left[0, \frac{1}{2} \right[$ we let \[A_{\alpha}(S) = \left\{ \omega \in \overline{\mathbb{T}} : \, \omega \text{ has a vertex at generation $S$ with label } \leqslant S^{\alpha} \right\}.\]
\begin{lemma} \label{estimlabels} Fix $\alpha < \frac{1}{2}$. For every sufficiently large integer $S$, there exists $N_2(S)$ such that, for every $n > N_2(S)$, one has \[ \mu_n \left( A_{\alpha}(S) \right) < \varepsilon. \] \end{lemma} \begin{proof} We first observe that it is enough to prove the bound $\mu \left( A_{\alpha}(S) \right) < \varepsilon$ when $S$ is large. Indeed, the set $A_{\alpha}(S)$ is closed in $\overline{\mathbb{T}}$, and thus the portmanteau theorem gives $\limsup_n \mu_n(A_{\alpha}(S)) \leqslant \mu(A_{\alpha}(S))$.
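The proof below relies on the tail of the height of a critical Galton-Watson tree with geometric offspring distribution of parameter $\frac{1}{2}$. For this law the tail can be computed exactly: the offspring generating function is $f(s) = \frac{1}{2-s}$, so $v_n = \mathbb{P}_{GW(1/2)}\left[ h(\omega) < n \right]$ satisfies $v_1 = \frac{1}{2}$ and $v_{n+1} = f(v_n)$, whence $v_n = \frac{n}{n+1}$ and $H \, \mathbb{P}_{GW(1/2)}\left[ h(\omega) > H \right] = \frac{H}{H+2} \rightarrow 1$, in agreement with the estimate from \cite{AN} used below. A quick check of this recursion:

```python
def height_cdf(n):
    # v_n = P[h(omega) < n] for a Galton-Watson tree with
    # Geometric(1/2) offspring: v_1 = 1/2, v_{n+1} = 1/(2 - v_n),
    # which has the closed form v_n = n / (n + 1)
    v = 0.5
    for _ in range(n - 1):
        v = 1.0 / (2.0 - v)
    return v

H = 10**4
tail = 1.0 - height_cdf(H + 1)  # P[h > H] = 1/(H + 2)
```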
Recall the notation $\rho^{(l)}$ and $\widehat{\rho}^{(l)}$ introduced in Section \ref{sec:indirect}. For $H > 0$ and $l>0$ one has \begin{equation*} \widehat{\rho}^{(l)} \left( \vphantom{\rho^l} h(\omega) > H \right) =
\frac{1}{w_l} \sum_{\substack{\omega \in \mathbb{T}^{(l)}\\ h(\omega) > H}} 12^{-|\omega|} \leqslant
\frac{1}{w_l} \sum_{\substack{\omega \in \mathbf{T}^{(l)}\\ h(\omega) > H}} 12^{-|\omega|} = \frac{1}{w_l} \rho^{(l)} \left( \vphantom{\rho^l} h(\omega) > H \right). \end{equation*} Therefore \[ \widehat{\rho}^{(l)} \left( \vphantom{\rho^l} h(\omega) > H \right) \leqslant \frac{2}{w_l} \mathbb{P}_{GW(1/2)} \left[\vphantom{\rho^l} h(\omega) > H \right], \] where $\mathbb{P}_{GW(1/2)}$ is the law of a Galton-Watson tree whose offspring distribution is geometric with parameter $1/2$. Theorem 1 (page 19) of \cite{AN} gives \[ \lim_{H \rightarrow \infty} H \, \mathbb{P}_{GW(1/2)} \left[
\vphantom{\rho^k} h(\omega) > H \right] = 1. \] From the explicit formula for $w_l$ we have $\frac{2}{w_l} \leqslant \frac{3}{2}$ for every $l \geqslant 0$. Hence there exists $H_1 > 0$ such that for $H>H_1$ \[ \widehat{\rho}^{(l)} \left(\vphantom{\rho^l} h(\omega) > H \right) \leqslant \frac{2}{H}. \] Fix $\eta \in \left]0, \frac{1}{2}\right[$. Recall that $g_S(\omega)$ is the set of vertices of $\omega$ at generation $S$ and that, for every integer $k$, $L_k$ and $R_k$ are the subtrees of $\omega$ attached respectively to the left side and to the right side of the $k$-th vertex of the spine of $\omega$. For $S > (1- \eta)^{-1} H_1$, Theorem \ref{descmu} and the previous bound give \[ \mu \left[ g_S(\omega) \cap \bigcup_{0 \leqslant k \leqslant
[\eta S] - 1} (L_k \cup R_k) \neq \emptyset \right] \leqslant 2 \sum_{k=1} ^{[\eta S] - 1} \frac{2}{S-k} \leqslant 4 \frac{\eta}{1
- \eta} \leqslant 8 \eta, \] and therefore \[ \mu \left( A_{\alpha}(S) \right) \leqslant 8 \eta + \mu \left( \exists s \in g_S(\omega) \cap \bigcup_{k = [\eta S]}^{S} \left( L_k \cup R_k \right) \, : \, \ell(s) \leqslant S^{\alpha} \right). \] Applying the Markov property at time $[\eta S]$ to the Markov chain $Y$ and then using Proposition \ref{limlabspine} and Lemma \ref{labspine} we find $\delta > 0$ and $S_1$ such that for $S > S_1$ one has \[ \mu \left( Y_k \geqslant [\delta \sqrt{S}] , \, \forall k \geqslant
[\eta S] \right) \geqslant 1 - \eta. \] We now have \begin{equation} \label{Aalphaintermediaire} \mu \left( A_{\alpha}(S) \right) \leqslant 9 \eta + \mu \left( \left\{ \exists s \in g_S(\omega) \cap \bigcup_{k = [\eta S]}^{S} \left( L_k \cup R_k \right) : \ell(s) \leqslant S^{\alpha} \right\} \cap \left\{ \forall k \geqslant [\eta S], Y_k \geqslant [\delta \sqrt{S}] \right\} \right). \end{equation} Let us fix a collection $(y_k)_{[\eta S] \leqslant k \leqslant S}$ such that $y_k \geqslant [\delta \sqrt{S}]$ for every $k$. Theorem \ref{descmu} gives \begin{align} \label{conditionel} \mu & \left( \exists s \in g_S(\omega) \cap \bigcup_{k = [\eta S]}^{S}
\left( L_k \cup R_k \right) \, : \, \ell(s) \leqslant S^{\alpha} \middle| Y_k = y_k, \, [\eta S] \leqslant k \leqslant S \right) \notag \\ & \leqslant 2 \sum_{k = [\eta S]}^{S} \widehat{\rho}^{(y_k)} \left( \vphantom{\sqrt{S}} \exists s \in g_{S-k} (\omega) :
\, \ell(s) \leqslant S^{\alpha} \right)
= 2 \sum_{k=0}^{S - [\eta S]} \widehat{\rho}^{(y_{S-k})} \left(\vphantom{\sqrt{S}} \exists s \in g_{k} (\omega) :
\, \ell(s) \leqslant S^{\alpha} \right). \end{align} If $0 \leqslant k \leqslant S - [\eta S]$, one has: \begin{equation*} \begin{split} \widehat{\rho}^{(y_{S-k})} \left(\vphantom{\sqrt{S}} \exists s \in g_{k} (\omega) :
\, \ell(s) \leqslant S^{\alpha} \right) & \leqslant \widehat{\rho}^{(y_{S-k})} \left(\vphantom{\sqrt{S}} \inf_{s \in \omega} \ell(s) \leqslant S^{\alpha} \right) \\ & = \frac{1}{w_{y_{S-k}}} \sum_{\substack{\omega \in \mathbb{T}^{(y_{S-k})}\\ \inf_{s \in \omega}
\ell(s) \leqslant S^{\alpha}}} 12^{- |\omega|} \\ & = \frac{1}{w_{y_{S-k}}} \sum_{\substack{\omega \in
\mathbf{T}^{(y_{S-k})}\\ 0 < \inf_{s \in \omega}
\ell(s) \leqslant S^{\alpha} }} 12^{- |\omega|}\\ & = \frac{1}{w_{y_{S-k}}} \rho^{(y_{S-k})} \left(\vphantom{\sqrt{S}}
0 < \inf_{s \in \omega} \ell(s) \leqslant S^{\alpha} \right) .\\ \end{split} \end{equation*} But \[ \rho^{(y_{S-k})} \left(\vphantom{\sqrt{S}} \inf_{s \in \omega} \ell(s) > 0 \right) = w_{y_{S-k}} \] and \begin{equation*} \rho^{(y_{S-k})} \left(\vphantom{\sqrt{S}} \inf_{s \in \omega} \ell(s) > S^{\alpha} \right) = \rho^{(y_{S-k} - [S^{\alpha}])} \left(\vphantom{\sqrt{S}} \inf_{s
\in \omega} \ell(s) > 0 \right) = w_{y_{S-k} - [S^{\alpha}]}. \end{equation*} We thus have \[ \widehat{\rho}^{(y_{S-k})} \left(\vphantom{\sqrt{S}} \exists s \in g_{k} (\omega) :
\, \ell(s) \leqslant S^{\alpha} \right) \leqslant \frac{1}{w_{y_{S-k}}} \left( w_{y_{S-k}} - w_{y_{S-k} - [S^{\alpha}]} \right) = 1 - \frac{w_{y_{S-k} - [S^{\alpha}]}}{w_{y_{S-k}}}. \] Using our assumption $y_{S-k} \geqslant [ \delta \sqrt{S}]$, a Taylor expansion gives \[ 1 - \frac{w_{y_{S-k} - [S^{\alpha}]}}{w_{y_{S-k}}} = 4 S^{\alpha} y_{S-k}^{-3} + o\left( S^{\alpha} y_{S-k}^{-3} \right)
\leqslant \frac{4}{\delta^3} S^{\alpha - \frac{3}{2}} + o\left( S^{\alpha - \frac{3}{2}} \right),\] and the right-hand side of \eqref{conditionel} is smaller than $\frac{8}{\delta^3} S^{\alpha -1/2} + o \left(S^{\alpha - 1/2} \right)$. From \eqref{Aalphaintermediaire} we now get \[ \mu \left( A_{\alpha}(S) \right) \leqslant 9 \eta + \frac{8}{\delta^3} S^{\alpha - \frac{1}{2}} + o\left( S^{\alpha - \frac{1}{2}} \right). \] Hence $\mu(A_{\alpha}(S)) < 10 \eta$ as soon as $S$ is large enough. This completes the proof. \end{proof}
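As an aside, the tail estimate from \cite{AN} invoked in the proof above is easy to verify numerically. The sketch below (ours, not part of the argument) iterates the generating function $f(s)=1/(2-s)$ of the Geometric$(1/2)$ offspring law; the convention that $\{h(\omega)>H\}$ means survival for $H$ generations is an assumption made for the illustration.

```python
# Numerical check (an aside) of the tail estimate used above:
# H * P_{GW(1/2)}[h(omega) > H] -> 1 as H -> infinity.
# Offspring law: geometric with parameter 1/2, P(k) = 2^{-(k+1)}, k >= 0,
# with generating function f(s) = 1/(2 - s). Assuming {h > H} means the
# tree survives H generations, P[h > H] = 1 - f^{(H)}(0), where f^{(H)}
# denotes the H-fold iterate of f.

def gw_tail(H):
    """P[h(omega) > H] for a critical GW tree with Geometric(1/2) offspring."""
    s = 0.0                    # extinction probability by generation 0
    for _ in range(H):
        s = 1.0 / (2.0 - s)    # one more application of f
    return 1.0 - s             # survival beyond H generations

for H in (10, 100, 1000):
    print(H, H * gw_tail(H))   # tends to 1, in line with the cited theorem
```

In exact arithmetic $f^{(H)}(0)=H/(H+1)$, so $H\,\mathbb{P}[h>H]=H/(H+1)$, which makes the convergence explicit.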
\begin{proposition} \label{nseuil} For every sufficiently large integer $S$, there exists an integer $N(S)$ such that, for every $n>N(S)$, one has \[ \mu_n(\Omega_S) \leqslant \varepsilon. \] \end{proposition} \begin{proof}
In this proof $\alpha \in \left]\frac{1}{3}, \frac{1}{2} \right[$ is fixed. For $S > 0$, Lemma \ref{size} gives $K_{\varepsilon}(S) > 0$ and $N_1(S) > 0$ such that if $n > N_1(S)$ then $\mu_n \left( \omega : \, |B_{\overline{\mathbb{T}},S} (\omega)| > K_{\varepsilon}(S) \right) < \varepsilon $. Let us also recall that the number of well-labelled trees with height $S$ and size smaller than $K_{\varepsilon}(S)$ is denoted by $M_{\varepsilon}(S)$. Lemma \ref{estimlabels} shows that, for $S$ large enough, there exists $N_2(S)$ such that $\mu_n(A_{\alpha}(S)) < \varepsilon$ for every $n > N_2(S)$. Therefore, for $S$ large enough and for $n > N_1(S) \vee N_2(S)$ one has \begin{align} \mu_n (\Omega_S) & \leqslant \sum_{\substack{\omega^{\star} \notin A_{\alpha}(S)\\
|\omega^{\star}| \leqslant K_{\varepsilon}(S), \, h(\omega^{\star}) = S}} \mu_n \left( \{\omega
: \, B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \} \cap \Omega_S \right) \notag\\ & \qquad + \mu_n (A_{\alpha}(S)) + \mu_n
\left( \omega: \, |B_{\overline{\mathbb{T}},S} (\omega)| > K_{\varepsilon}(S) \right) \notag \\ & \leqslant 2 \varepsilon + \sum_{\substack{\omega^{\star} \notin A_{\alpha}(S)\\
|\omega^{\star}| \leqslant K_{\varepsilon}(S), \, h(\omega^{\star}) = S}} \mu_n \left( \{\omega
: \, B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \} \cap \Omega_S \right). \label{munosr} \end{align}
Fix a tree $\omega^{\star} \notin A_{\alpha}(S)$ with height $S$ and size smaller than $K_{\varepsilon}(S)$. We assume that $S$ is large enough so that $S^{\alpha} > R + 1$. We denote by $k$ the number of vertices of $\omega^{\star}$ at generation $S$ and by $l_1, \ldots , l_k$ the labels of these vertices. By considering the subtrees of $\omega$ originating from vertices at generation $S$, one obtains: \begin{equation} \label{omegastar} \mu_n \left( \{\omega : \, B_{\overline{\mathbb{T}},S}(\omega) = \omega^{\star} \} \cap \Omega_S \right) \leqslant
\frac{1}{D_n} \, \sum_{n_1 + \cdots + n_k = n - |\omega^{\star}|} \, \sum_{i = 1}^k D_{n_i}^{(l_i)}(R) \prod_{j \neq i} D_{n_j}^{(l_j)} \end{equation} where $D_n^{(l)}(R)$ is the number of trees in $\mathbb{T}_n^{(l)}$ with at least one vertex with label less than or equal to $R+1$ (compare \eqref{omegastar} with formula \eqref{boulen}). Since $\omega^{\star} \notin A_{\alpha}(S)$, we have $l_i > S^{\alpha} > R+1$ and thus $D_{n_i}^{(l_i)}(R) = D_{n_i}^{(l_i)} - D_{n_i}^{(l_i - R - 1)}$ for $i = 1, \ldots , k$. The bound \eqref{omegastar} then gives \begin{align*} \mu_n & \left( \{\omega : \, B_{\overline{\mathbb{T}},S}(\omega) = \omega^{\star} \} \cap \Omega_S \right) \\ & \leqslant
\frac{1}{D_n} \, \sum_{n_1 + \cdots + n_k = n - |\omega^{\star}|} \, \sum_{i = 1}^k (D_{n_i}^{(l_i)} - D_{n_i}^{(l_i - R - 1)}) \prod_{j \neq i} D_{n_j}^{(l_j)}\\ & =
\frac{k}{D_n} \, \sum_{n_1 + \cdots + n_k = n - |\omega^{\star}|} \prod_{j=1}^k D_{n_j}^{(l_j)} - \frac{1}{D_n} \, \sum_{i = 1}^k \, \sum_{n_1 + \cdots + n_k = n -
|\omega^{\star}|} \, D_{n_i}^{(l_i - R - 1)} \prod_{j \neq i} D_{n_j}^{(l_j)}. \end{align*}
By Theorem \ref{cvarbres}, $\mu_n \left(\omega : B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \right) \rightarrow \mu \left(\omega : B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \right)$ as $n \rightarrow \infty$. Using this convergence and identities \eqref{boulen} and \eqref{boule}, we get the existence of an integer $N(\omega^{\star},S)$ such that for $n > N(\omega^{\star},S)$ one has \begin{equation*}
\frac{1}{D_n} \, \sum_{n_1 + \cdots + n_k = n - |\omega^{\star}|} \prod_{j=1}^k D_{n_j}^{(l_j)}
\leqslant 12^{- |\omega^{\star}|} \, \sum_{t = 1}^k d_{l_t} \prod_{s \neq t} w_{l_s} \, + \, \frac{\varepsilon}{K_{\varepsilon}(S)M_{\varepsilon}(S)} \end{equation*} and for $i = 1 ,\ldots, k$ \begin{equation*} \begin{split}
\frac{1}{D_n}\, & \sum_{n_1 + \cdots + n_k = n - |\omega^{\star}|} \, D_{n_i}^{(l_i - R -1)} \prod_{j \neq i} D_{n_j}^{(l_j)} \\
& \geqslant 12^{- |\omega^{\star}|} \Big( d_{l_i - R -1} \prod_{j \neq i} w_{l_j} + \sum_{t \neq
i} d_{l_t} w_{l_i - R -1} \prod_{j \neq t, i} w_{l_j} \Big) - \frac{\varepsilon}{K_{\varepsilon}(S)M_{\varepsilon}(S)}. \end{split} \end{equation*} We now have for every $n > N(\omega^{\star},S)$: \begin{align} \mu_n & \left( \{\omega : \, B_{\overline{\mathbb{T}},S}(\omega) = \omega^{\star} \} \cap \Omega_S \right) \notag \\
& \leqslant \frac{2\varepsilon}{M_{\varepsilon}(S)}
+ k 12^{- |\omega^{\star}|} \, \sum_{t = 1}^k d_{l_t} \prod_{s \neq t} w_{l_s}
- 12^{- |\omega^{\star}|} \sum_{i=1}^k \Big( d_{l_i - R -1} \prod_{j \neq i} w_{l_j} + \sum_{t \neq
i} d_{l_t} w_{l_i - R -1} \prod_{j \neq t, i} w_{l_j} \Big) \notag \\
& = \frac{2\varepsilon}{M_{\varepsilon}(S)}
+ 12^{- |\omega^{\star}|} \, \sum_{t = 1}^k (d_{l_t} - d_{l_t -R-1}) \prod_{s \neq
t} w_{l_s}
+ 12^{- |\omega^{\star}|} \, \sum_{t = 1}^k d_{l_t}\left( \sum_{i \neq
t} (w_{l_i} - w_{l_i - R-1}) \prod_{s \neq t,i} w_{l_s} \right).\label{muncapomega} \end{align} Define \begin{align*} d(\omega^{\star}) & = \max_{i= 1 \ldots k} \left( 1 - \frac{d_{l_i - R-1}}{d_{l_i}} \right),\\ w(\omega^{\star}) & = \max_{i= 1 \ldots k} \left( 1 - \frac{w_{l_i - R-1}}{w_{l_i}} \right). \end{align*} From the bound \eqref{muncapomega}, we get \begin{align} \mu_n & \left( \{\omega : \, B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \} \cap \Omega_S \right) \notag \\
& \leqslant \frac{2\varepsilon}{M_{\varepsilon}(S)}
+ d(\omega^{\star}) \, 12^{- |\omega^{\star}|} \, \sum_{t = 1}^k d_{l_t} \prod_{s \neq
t} w_{l_s}
+ k w(\omega^{\star}) \, 12^{- |\omega^{\star}|} \, \sum_{t = 1}^k d_{l_t} \prod_{s \neq
t} w_{l_s} \notag \\ & = \frac{2\varepsilon}{M_{\varepsilon}(S)} + \left( d(\omega^{\star}) + k \cdot w(\omega^{\star}) \right) \mu \left( \omega : B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \right) \label{omegastarfinal} \end{align} where we used \eqref{boule} in the last equality.
Let us now define $N^{\star}(S) = \max_{|\omega^{\star}| \leqslant K_{\varepsilon}(S)} N(\omega^{\star},S) \vee N_1(S) \vee N_2(S)$. For $S$ large enough and for $n > N^{\star}(S)$ we obtain using (\ref{munosr}) and (\ref{omegastarfinal}): \begin{equation*} \mu_n(\Omega_S) \leqslant 4 \varepsilon
+ \sum_{\substack{\omega^{\star} \notin A_{\alpha}(S)\\ |\omega^{\star}| \leqslant
K_{\varepsilon}(S), \, h(\omega^{\star}) = S}}
\left( d(\omega^{\star}) + |g_S(\omega^{\star})| \cdot w(\omega^{\star}) \right) \mu \left(\omega : B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \right). \end{equation*}
A Taylor expansion gives $w(\omega^{\star}) \leqslant 4 (5R + 2) S^{-3 \alpha} + o\left(S^{-3 \alpha} \right)$ where the remainder is uniform over $\omega^{\star} \notin A_{\alpha}(S)$. In addition, $\sup_{\omega^{\star} \notin A_{\alpha}(S)} d(\omega^{\star}) \rightarrow 0$ as $S \rightarrow \infty$. This allows us to find $S^{\star}$ such that for $S > S^{\star}$ and $n> N^{\star}(S)$: \begin{align} \mu_n(\Omega_S) & \leqslant 4 \varepsilon +
\sum_{\substack{\omega^{\star} \notin A_{\alpha}(S)\\ |\omega^{\star}| \leqslant K_{\varepsilon}(S), \, h(\omega^{\star}) = S}}
\left( \varepsilon + |g_S(\omega^{\star})| \cdot 4(5R+2) S^{-3 \alpha} \right) \mu \left(\omega : B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \right) \notag \\ & \leqslant 5 \varepsilon +
4(5R+2) S^{-3 \alpha} \sum_{\substack{\omega^{\star} \notin A_{\alpha}(S)\\ |\omega^{\star}| \leqslant
K_{\varepsilon}(S), \, h(\omega^{\star})=S}} |g_S(\omega^{\star})| \, \mu \left(\omega : B_{\overline{\mathbb{T}},S} (\omega) = \omega^{\star} \right) \notag \\ & \leqslant 5 \varepsilon + 4(5R+2) S^{-3 \alpha} \mathbb{E}_{\mu}
\left[|g_S(\omega)| \right]. \label{munomegas} \end{align}
The description of $\mu$ given in Theorem \ref{descmu} allows us to estimate $\mathbb{E}_{\mu} \left[|g_S(\omega)| \right]$. Indeed we have for every integer $H > 0$ and $k \geqslant 1$ \[
\mathbb{E}_{\widehat{\rho}^{(k)}} \left[ |g_H(\omega)| \right] \leqslant
\frac{1}{w_k} \mathbb{E}_{\rho^{(k)}} \left[ |g_H(\omega)| \right] =
\frac{2}{w_k} \mathbb{E}_{GW(1/2)} \left[ |g_H(\omega)| \right] = \frac{2}{w_k} \leqslant 2. \] It follows that \[
\mathbb{E}_{\mu} \left[ |g_S(\omega)| \right] \leqslant 4S +1. \] Recalling that $\alpha > \frac{1}{3}$, we get that for every $S$ large enough and for $n > N^{\star}(S)$, \[ \mu_n(\Omega_S) \leqslant 6 \varepsilon. \] This completes the proof. \end{proof}
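The criticality fact behind the bound $\mathbb{E}_{\mu}[|g_S(\omega)|] \leqslant 4S+1$, namely $\mathbb{E}_{GW(1/2)}[|g_H(\omega)|] = 1$ for every $H$, can also be checked with a quick Monte Carlo experiment. The sketch below is ours and purely illustrative; the sample size, seed, and tolerance are arbitrary choices.

```python
import random

# Monte Carlo sanity check (an aside, not part of the argument) that a
# critical Galton-Watson tree with Geometric(1/2) offspring satisfies
# E[|g_H|] = 1 for every generation H.

def geom_half():
    """Offspring number with law P(k) = 2^{-(k+1)}, k = 0, 1, 2, ..."""
    k = 0
    while random.random() < 0.5:
        k += 1
    return k

def generation_size(H):
    """Size of generation H of a GW tree started from a single ancestor."""
    z = 1
    for _ in range(H):
        z = sum(geom_half() for _ in range(z))
    return z

random.seed(0)
H, runs = 5, 20000
est = sum(generation_size(H) for _ in range(runs)) / runs
print(est)  # close to 1, since the offspring mean is 1 (criticality)
```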
\subsection{Proof of the main result}
In this section we fix $Q^{\star} \in \overline{\mathbf{Q}}$ and $R >0$. As in the previous section, we write $\Omega_S = \Omega_S(R)$ to simplify notation. From the remarks following Theorem \ref{equality}, the proof reduces to verifying the convergence \begin{equation} \label{conv} \mu_n \Big( \omega : B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (Q^{\star}) \Big) \underset{n \to \infty}{\longrightarrow} \mu \Big( \omega : B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (Q^{\star}) \Big). \end{equation}
First of all, we need to reformulate the problem in terms of trees. Since $Q^{\star} \in \overline{\mathbf{Q}}$, we know that there exists a finite quadrangulation $Q_0 \in \mathbf{Q}$ such that $d_{\mathbf{Q}} (Q_0,Q^{\star}) < \frac{1}{R+1}$ and therefore $B_{\overline{\mathbf{Q}},R} (Q_0) = B_{\overline{\mathbf{Q}},R} (Q^{\star})$. Then there exists $\omega_0 \in \mathbb{T}$ such that $\Phi(\omega_0) = Q_0$. The convergence (\ref{conv}) can now be restated as \begin{equation} \label{convbis} \mu_n \Big( \omega : B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0)) \Big) \underset{n \to \infty}{\longrightarrow} \mu \Big( \omega : B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0)) \Big). \end{equation} We fix $\varepsilon > 0$ in the remaining part of this proof.
We need to characterize the trees $\omega$ for which $\Phi(\omega)$ has the same ball of radius $R$ as $\Phi(\omega_0)$. As we have already mentioned at the end of Section \ref{sec:quadrangulations}, the main difficulty comes from the fact that two trees that are very similar in $\overline{\mathbb{T}}$ can give very different quadrangulations if they have vertices with small labels in high generations. We can remedy this problem thanks to Proposition \ref{nseuil}.
Note that $\omega_0$ is a finite tree. Let $S_0$ denote the height of $\omega_0$. According to Lemma \ref{seuil} and Proposition \ref{nseuil} we can choose $S_1 > S_0$ such that if $S \geqslant S_1$ and $n \geqslant N(S)$ then $\mu (\Omega_S) < \varepsilon$ and $\mu_n (\Omega_S) < \varepsilon$.
Let $S > S_1$ and let $(\omega_i)_{i \in I}$ be the collection of trees given by Corollary \ref{pbcle}, such that, for every $\omega \in \mathscr{S} \cap \Omega_S^c$, the equality $B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0))$ holds if and only if there exists $i \in I$ such that $B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i)$. If $A \vartriangle B$ denotes the symmetric difference between two sets $A$ and $B$, we have \begin{equation*} \begin{split} \mu & \left( \left\{ \omega \in \overline{\mathbb{T}} : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0)) \right\} \vartriangle \bigcup_{i\in I} \left\{ \omega \in \overline{\mathbb{T}} : \, B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i) \right\} \right)\\ & \leqslant \mu \left( \Omega_S \right) < \varepsilon. \end{split} \end{equation*} We deduce from this last bound that \begin{equation*} \begin{split}
\Bigg| & \mu_n \Big( \omega : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0)) \Big)
- \mu \Big( \omega : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0))
\Big) \Bigg|\\
& \leqslant \Bigg| \mu \Big( \bigcup_{i\in I} \left\{ \omega : \, B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i) \right\} \Big)
- \mu_n \Big(\omega : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R}
(\Phi(\omega_0)) \Big) \Bigg|\\
& \qquad + \Bigg| \mu \Big( \bigcup_{i\in I} \left\{ \omega : \, B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i) \right\} \Big)
- \mu \Big( \omega : \, B_{\overline{\mathbf{Q}},R} ( \Phi(\omega)) = B_{\overline{\mathbf{Q}},R}
(\Phi(\omega_0)) \Big) \Bigg|\\
& \leqslant \Bigg| \mu \Big( \bigcup_{i\in I} \left\{ \omega : \, B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i) \right\} \Big)
- \mu_n \Big( \omega : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0))
\Big) \Bigg| + \varepsilon . \end{split} \end{equation*} The set $\bigcup_{i\in I} \left\{ \omega \in \overline{\mathbb{T}} : \, B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i) \right\}$ is both open and closed in $\overline{\mathbb{T}}$, and thus \begin{equation*} \mu_n \left( \bigcup_{i\in I} \left\{ \omega : \, B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S}(\omega_i) \right\} \right) \underset{n \to \infty}{\longrightarrow} \mu \left( \bigcup_{i\in I} \left\{ \omega : \, B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i) \right\} \right). \end{equation*} Therefore there exists $N'(S) > 0$ such that for $n > N'(S)$: \begin{equation*} \begin{split}
\Bigg| & \mu_n \Big( \omega : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0)) \Big)
- \mu \Big( \omega : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0)) \Big) \Bigg|\\
& \leqslant \Bigg| \mu_n \Big( \bigcup_{i\in I} \left\{ \omega : \, B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i) \right\} \Big)
- \mu_n \Big( \omega : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0)) \Big) \Bigg| + 2 \varepsilon \\
& = \Bigg| \mu_n \Big( \bigcup_{i\in I} \left\{ \omega : \, B_{\overline{\mathbb{T}},S} (\omega) = B_{\overline{\mathbb{T}},S} (\omega_i) \right\} \cap \Omega_S \Big)
- \mu_n \Big( \left\{ \omega : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0))
\right\} \cap \Omega_S \Big) \Bigg| \\ & \qquad + 2 \varepsilon \end{split} \end{equation*} by the choice of the collection $(\omega_i)_{i \in I}$.
We also know that $\mu_n(\Omega_S) < \varepsilon$ for $n > N(S)$, and it follows that, for $n > N(S) \vee N'(S)$, \begin{equation*}
\Bigg| \mu_n \Big( \omega : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0)) \Big) - \mu
\Big( \omega : \, B_{\overline{\mathbf{Q}},R} (\Phi(\omega)) = B_{\overline{\mathbf{Q}},R} (\Phi(\omega_0)) \Big) \Bigg| \leqslant 3 \varepsilon. \end{equation*} This completes the proof of Theorem \ref{equality}. \qed
\noindent {\bf Acknowledgements.} The author would like to thank Jean-Fran\c{c}ois Le Gall for many helpful discussions about this work.
\addcontentsline{toc}{section}{References}
\end{document}
\begin{document}
\subjclass{47D06; 82C05; 60J80; 34K30}
\keywords{Markov evolution, configuration space, stochastic semigroup, sun-dual semigroup, correlation function, scale of Banach spaces }
\begin{abstract}
The evolution of an infinite system of interacting point entities with traits $x\in \mathds{R}^d$ is studied. The elementary acts of the evolution are state-dependent death of an entity, occurring with a rate that includes a competition term, and independent fission, in the course of which an entity gives birth to two new entities and simultaneously disappears. The states of the system are probability measures on the corresponding configuration space, and the main result of the paper is the construction of the evolution $\mu_0\to \mu_t$, $t>0$, of states in the class of sub-Poissonian measures.
\end{abstract}
\maketitle
\section{Introduction} \subsection{Posing}
In recent years, there have been many studies of the stochastic dynamics of structured populations, see, e.g., \cite{Ba,BL,Chris,JK11,KK,KK2}. Typically, the structure is introduced by assigning to each entity a trait $x\in X$. The population dynamics then consists in changing the traits of its members, which also includes their appearance and disappearance. Usually, one endows the trait space $X$ with a locally compact topology and assumes that: (a) the populations are locally finite, i.e., compact subsets of $X$ may contain traits of finite sub-populations only; (b) the dynamics of a given entity is mostly affected by the interaction with entities whose traits belong to a compact neighborhood of its own trait. Then the local structure of the population is determined by the network of such interactions. Since the traits of a finite population lie in a compact subset of $X$, each of its members has a compact neighborhood containing the traits of the rest of the population. In view of this, in order to clearly distinguish between global and local effects one should deal with infinite populations and noncompact trait spaces. In the statistical mechanics of interacting physical particles, this conclusion led to the concept of the thermodynamic (infinite-volume) limit, see, e.g., \cite[pp. 5,6]{Simon}, and, thereby, to the description of the states of thermal equilibrium as probability measures on the space of particle configurations. Such states are constructed from local conditional states and are Gibbsian, i.e., they satisfy a specific consistency condition.
In this article, we study the Markov evolution of a possibly infinite system of point entities (particles) with trait space
$X=\mathds{R}^d$, $d\geq 1$. The pure states of the system are locally finite configurations $\gamma \subset \mathds{R}^d$, see, e.g., \cite{Berns,KK,KK1,KK2}, whereas the general states are probability measures on the space of all such configurations. The elementary acts of the evolution are: (a) state-dependent disappearance (death) with rate $m(x) + \sum_{y\in \gamma\setminus x} a(x-y)$; (b) independent fission with rate $b(x|y_1, y_2)$ in the course of which the particle with trait $x\in \gamma$ gives birth to two particles, with traits $y_1, y_2 \in \mathds{R}^d$, and simultaneously disappears from $\gamma$. The model with this kind of death and budding instead of fission, cf. \cite{Chris}, is known as the Bolker-Pacala model. Its recent study can be found in \cite{KK,KK1}, see also the literature quoted therein. A similar model with fission (fragmentation) in which each particle produces a (random) finite number of new particles was introduced and studied in \cite{tan}. The main result of the present work is the construction of the global in time evolution of states in a certain class of probability measures.
\subsection{The overview} As mentioned above, the state space of the model is the set $\Gamma$ of all subsets $\gamma \subset \mathds{R}^d$ such that the set $\gamma_\Lambda:=\gamma\cap\Lambda$ is finite whenever $\Lambda \subset \mathds{R}^d$ is compact. For compact $\Lambda$, we define the map $\Gamma\ni \gamma \mapsto N_\Lambda (\gamma) =
|\gamma_\Lambda|\in \mathds{N}_0$, where $|\cdot|$ denotes cardinality and $\mathds{N}_0$ stands for the set of nonnegative integers. Then $\mathcal{B}(\Gamma)$ will denote the smallest $\sigma$-field of subsets of $\Gamma$ with respect to which all these maps are measurable. That is, $\mathcal{B}(\Gamma)$ is generated by the family of sets \begin{equation}
\label{o1}
\Gamma^{\Lambda,n} :=\{ \gamma \in \Gamma: N_\Lambda (\gamma) =
n\}, \qquad n\in \mathds{N}_0, \ \ \Lambda \ {\rm compact}. \end{equation} It is known \cite{KK,KK2} that $(\Gamma,\mathcal{B}(\Gamma))$ is a standard Borel space. The set of $n$-point configurations $\Gamma^n$ and the set of all finite configurations $\Gamma_0$ then are \begin{equation*}
\Gamma^n = \{ \gamma\in \Gamma: |\gamma|=n\}, \qquad \Gamma_0 := \bigcup_{n=0}^\infty \Gamma^n \in \mathcal{B}(\Gamma). \end{equation*} For compact $\Lambda$, we let $\Gamma_\Lambda =\{ \gamma: \gamma\subset \Lambda\} \subset
\Gamma_0$ and define \begin{equation*}
\mathcal{B}(\Gamma_\Lambda) = \{ \mathbb{A}\cap \Gamma_\Lambda: \mathbb{A} \in
\mathcal{B}(\Gamma)\} \subset \mathcal{B}(\Gamma_0) = \{ \mathbb{A}\cap \Gamma_0: \mathbb{A} \in
\mathcal{B}(\Gamma)\} \subset \mathcal{B}(\Gamma). \end{equation*} Clearly, $(\Gamma_0, \mathcal{B}(\Gamma_0))$ and $(\Gamma_\Lambda, \mathcal{B}(\Gamma_\Lambda))$ are standard Borel spaces. By $\mathcal{P}(\Gamma)$, $\mathcal{P}(\Gamma_0)$, $\mathcal{P}(\Gamma_\Lambda)$ we denote the sets of all probability measures on $(\Gamma,\mathcal{B}(\Gamma))$, $(\Gamma_0,\mathcal{B}(\Gamma_0))$ and $(\Gamma_\Lambda,\mathcal{B}(\Gamma_\Lambda))$, respectively.
For a compact $\Lambda$ and $\mathbb{A}\in \mathcal{B}(\Gamma_\Lambda)$, we set $\mathbb{C}_{\mathbb{A}} = \{ \gamma \in \Gamma: \gamma_\Lambda \in \mathbb{A}\}$ and let $\mathcal{B}_\Lambda (\Gamma)$ be the sub-$\sigma$-field of $\mathcal{B}(\Gamma)$ generated by all such \emph{cylinder} sets $\mathbb{C}_{\mathbb{A}}$. A \emph{cylinder} function $F : \Gamma \to \mathds{R}$ is a $\mathcal{B}_\Lambda (\Gamma)/\mathcal{B}(\mathds{R})$-measurable function for some compact $\Lambda$. Here by $\mathcal{B}(\mathds{R})$ we denote the Borel $\sigma$-field of subsets of $\mathds{R}$. For a compact $\Lambda$ and a given $\mu \in \mathcal{P}(\Gamma)$, by setting \begin{equation} \label{Rel} \mu(\mathbb{C}_{\mathbb{A}}) = \mu^\Lambda (\mathbb{A}) \end{equation}
we determine $\mu^\Lambda \in \mathcal{P}(\Gamma_\Lambda)$ -- the \emph{projection} of $\mu$. Note that all such projections $\{\mu^\Lambda\}_{\Lambda}$ of a given $\mu \in \mathcal{P}(\Gamma)$ are consistent in the Kolmogorov sense.
Each $\mu\in \mathcal{P}(\Gamma)$ is characterized by its values on the sets (\ref{o1}); in particular, by the local moments \begin{equation}
\label{o2} \int_\Gamma N_\Lambda^m d\mu =: \mu(N_\Lambda^m) = \sum_{n=0}^\infty n^m \mu(\Gamma^{\Lambda, n}), \qquad m\in \mathds{N}. \end{equation} This characterization naturally includes the dependence of $\mu(\Gamma^{\Lambda, n})$ on $n$. A homogeneous Poisson measure $\pi_\varkappa \in \mathcal{P}(\Gamma)$ with density $\varkappa >0$ has the property $\pi_\varkappa(\Gamma_0) = 0$. For this measure, one has \begin{equation}
\label{o3}
\pi_\varkappa (\Gamma^{\Lambda, n}) = \frac{\left(\varkappa |\Lambda|\right)^n}{n!} \exp\left( - \varkappa |\Lambda| \right), \end{equation}
where $|\Lambda|$ stands for the volume of $\Lambda$. In our consideration, the set of sub-Poissonian measures $\mathcal{P}_{\rm exp}(\Gamma)$ plays an important role, see Definition \ref{0df} below and the corresponding discussion in \cite{KK,KK1}. For each $\mu \in \mathcal{P}_{\rm exp}(\Gamma)$, there exists $\varkappa >0$ such that \begin{equation}
\label{o3a}
\mu(N_\Lambda^m) \leq \pi_\varkappa (N_\Lambda^m), \end{equation} holding for all compact $\Lambda$ and $m\in \mathds{N}$.
The Markov evolution is described by the Kolmogorov equation \begin{equation}
\label{1}
\dot{F}_t = L F_t , \qquad F_t|_{t=0} = F_0, \end{equation} where $\dot{F}_t$ denotes the time derivative of an \emph{observable} $F_t:\Gamma\to \mathds{R}$. The operator $L$ determines the model, and in our case it is \begin{gather}
\label{2}
(LF)(\gamma) = \sum_{x\in \gamma}\left( m (x) + \sum_{y\in \gamma\setminus x} a (x-y)\right) \left[ F(\gamma\setminus x) - F(\gamma)
\right] \\[.2cm] \nonumber + \sum_{x\in \gamma}
\int_{(\mathds{R}^d)^2} b(x|y_1 , y_2)\left[ F(\gamma\setminus x \cup\{y_1, y_2\}) - F(\gamma)
\right] dy_1 d y_2. \end{gather} In expressions like $\gamma \cup x$, we treat $x$ as the singleton $\{x\}$. The first term in (\ref{2}) describes the death of the particle with trait $x$ occurring: (i) independently with rate $m(x) \geq 0$; (ii) under the influence (competition) of the rest of the particles in $\gamma$ occurring with rate \begin{equation}
\label{3} E^a (x, \gamma\setminus x) := \sum_{y\in \gamma\setminus x} a (x-y)\geq 0. \end{equation} The second term in (\ref{2}) describes independent fission with rate
$b(x|y_1 , y_2)\geq 0$.
The evolution of states $\mu_0 \to \mu_t$ is defined by the Fokker-Planck equation \begin{equation}
\label{4}
\dot{\mu}_t = L^* \mu_t, \qquad \mu_t|_{t=0} =\mu_0, \end{equation} where $L^*$ is related to (\ref{2}) according to the rule $(L^* \mu)(\mathbb{A})= \mu(L \mathds{1}_{\mathbb{A}} )$, $\mathbb{A}\in \mathcal{B}(\Gamma)$; $\mathds{1}_{\mathbb{A}}$ is the indicator function. The two evolutions are related by the duality $\mu_0 (F_t) = \mu_t(F_0)$. Here and in the sequel, we use the notation $\mu(F)= \int F d \mu$, cf. (\ref{o2}).
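The duality between (\ref{1}) and (\ref{4}) is transparent in the finite-dimensional case. The following toy sketch (ours; a two-state Markov chain, not the configuration-space setting of the paper) checks $\mu_0(F_t) = \mu_t(F_0)$ with $L$ a generator matrix and $L^*$ its transpose:

```python
# Toy finite-state illustration (not the paper's setting) of the duality
# mu_0(F_t) = mu_t(F_0): observables evolve by F_t = e^{tL} F_0 and
# measures by mu_t = e^{tL*} mu_0, where L* is the transpose of L.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=40):
    """Matrix exponential via its power series (fine for small matrices)."""
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    T = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        T = [[x / k for x in row] for row in mat_mul(T, A)]  # A^k / k!
        R = [[R[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return R

t = 0.7
L = [[-1.0, 1.0], [2.0, -2.0]]                      # generator: rows sum to 0
Lstar = [[L[j][i] for j in range(2)] for i in range(2)]

P = mat_exp([[t * x for x in row] for row in L])    # e^{tL}: acts on observables
Q = mat_exp([[t * x for x in row] for row in Lstar])  # e^{tL*}: acts on measures

F0 = [3.0, -1.0]        # an observable
mu0 = [0.25, 0.75]      # a probability measure

Ft = [sum(P[i][j] * F0[j] for j in range(2)) for i in range(2)]
mut = [sum(Q[i][j] * mu0[j] for j in range(2)) for i in range(2)]

lhs = sum(mu0[i] * Ft[i] for i in range(2))   # mu_0(F_t)
rhs = sum(mut[i] * F0[i] for i in range(2))   # mu_t(F_0)
print(lhs, rhs)  # the two numbers coincide
```

Note that $\mu_t$ remains a probability measure because the rows of $L$ sum to zero.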
The direct use of $L$ and/or $L^*$ as linear operators in appropriate Banach spaces is possible only if one restricts the consideration to states on $\Gamma_0$. Otherwise, the sums in (\ref{2}) and (\ref{3}) -- taken over infinite configurations -- may not exist. At the same time, constructing evolutions of finite sub-populations contained in compact sets followed by taking the `infinite-volume' limit -- as it is done in the theory of Gibbs fields \cite{Simon} -- can hardly be realized here, as the evolution usually destroys the consistency of the local states. Instead of trying to construct global states from local ones, we proceed as follows. Let $C_0 (\mathds{R}^d)$ stand for the set of continuous real-valued functions with compact support. Then the map \[ \Gamma \ni \gamma \mapsto F^\theta (\gamma) := \prod_{x\in \gamma} (1+ \theta (x)), \qquad \theta \in \varTheta:=\{\theta\in C_0 (\mathds{R}^d): \theta(x) \in (-1,0]\}, \] is clearly measurable and satisfies $0<F^\theta (\gamma) \leq 1$ for all $\gamma$. The set $\varTheta$ clearly has the following properties: (a) for each pair of distinct $\gamma, \gamma' \in \Gamma$, there exists $\theta \in \varTheta$ such that $F^\theta (\gamma)\neq F^\theta (\gamma')$; (b) for each pair $\theta , \theta'\in \varTheta$, the point-wise combination $\theta + \theta' +\theta \theta'$ is also in $\varTheta$; (c) the zero function belongs to $\varTheta$. From this it follows that $\{F^\theta: \theta \in \varTheta\}$ is a measure-defining class, i.e., $\mu(F^\theta)= \nu(F^\theta)$, holding for all $\theta \in \varTheta$, implies $\mu=\nu$ for each $\mu, \nu \in \mathcal{P}(\Gamma)$, see \cite[Proposition 1.3.28, page 113]{AKKR}. Note that, for each $\theta \in \varTheta$, $\mu(F^\theta) = \mu^{\Lambda_\theta} (F^\theta)$, where a compact $\Lambda_\theta$ is such that $\theta(x) = 0$ for $x\in \Lambda^c_{\theta}:=\mathds{R}^d \setminus \Lambda_\theta$.
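Property (b) of $\varTheta$ reflects the elementary identity $(1+\theta(x))(1+\theta'(x)) = 1 + (\theta+\theta'+\theta\theta')(x)$, so that $F^{\theta}F^{\theta'} = F^{\theta+\theta'+\theta\theta'}$ pointwise on $\Gamma$. A quick numerical check (ours; the configuration and the functions below are illustrative choices, and compact support is relaxed for simplicity):

```python
from math import exp

# Check of the identity F^theta * F^theta' = F^{theta + theta' + theta*theta'}
# on a finite configuration. The theta's below take values in (-1, 0] as
# required; compact support is relaxed for this illustration.

def F(theta, gamma):
    """F^theta(gamma) = prod_{x in gamma} (1 + theta(x)) for a finite gamma."""
    prod = 1.0
    for x in gamma:
        prod *= 1.0 + theta(x)
    return prod

gamma = [-0.7, 0.3, 1.2, 2.5]                  # a finite configuration in R^1
theta1 = lambda x: -0.5 * exp(-x * x)          # values in (-0.5, 0)
theta2 = lambda x: -0.3 * exp(-(x - 1.0)**2)   # values in (-0.3, 0)

combo = lambda x: theta1(x) + theta2(x) + theta1(x) * theta2(x)

print(F(theta1, gamma) * F(theta2, gamma), F(combo, gamma))  # equal
```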
Our results related to (\ref{2}), (\ref{4}) consist in the following: \begin{itemize}
\item[1.] Constructing the evolution $[0,+\infty)\ni t \mapsto \mu_t \in \mathcal{P}(\Gamma_0)$, $\mu_t|_{t=0}=\mu_0\in \mathcal{P}(\Gamma_0)$,
by proving the existence of a unique classical solution of (\ref{4}) in
the Banach space $\mathcal{M}$ of signed measures on $\Gamma_0$ with bounded
variation. \item[2.] Constructing the evolution $[0,+\infty) \ni t
\mapsto \mu_t \in \mathcal{P}_{\rm exp}(\Gamma)$, $\mu_t|_{t=0}=\mu_0\in \mathcal{P}_{\rm exp}(\Gamma)$, such that: \begin{itemize}
\item[2.1.] for each compact $\Lambda$ and $t\geq 0$, $\mu_t^{\Lambda}$ -- as a measure on $\Gamma_0$ -- lies in the
domain $\mathcal{D}(L^*) \subset \mathcal{M}$;
\item[2.2.] for
each $\theta \in \varTheta$, the map $(0,+\infty) \ni t \mapsto \mu_t (F^\theta)$ is continuously
differentiable and the following holds \begin{equation}
\label{Jan}
\frac{d}{dt}
\mu_t (F^\theta) = (L^* \mu^{\Lambda_\theta}_t)(F^\theta). \end{equation} \end{itemize} \end{itemize} Item 1 is realized in Theorem \ref{1ftm}. The main idea of how to construct the evolution $\mu_0 \to \mu_t$ stated in item 2 is to obtain it from the evolution $B_0 (\theta)\to B_t(\theta)$, $\theta \in \varTheta$, by solving the evolution equations related to those in (\ref{1}) and (\ref{4}). Here $B_0 (\theta) = \mu_0 (F^\theta)$ with $\mu_0 \in \mathcal{P}_{\rm exp}(\Gamma)$. This is realized in Theorem \ref{1tm} and Corollary \ref{Jaco}. One of the hardest points of this scheme is to prove that $B_t(\theta)= \mu_t (F^\theta)$ for a unique sub-Poissonian measure. At this stage, we deal with the evolution of local states constructed in realizing item 1.
\section{Preliminaries and the Model} We begin by briefly introducing the relevant aspects of the technique used in this work. Its more detailed description (including the notations) can be found in \cite{KK,KK2} and in the publications quoted therein.
\subsection{Measures and functions on configuration spaces} It is known that \begin{equation*}
B_{\pi_\varkappa} (\theta):= \pi_\varkappa (F^\theta) = \exp\left( \varkappa\int_{\mathds{R}^d} \theta (x)
dx\right). \end{equation*} Obviously, $B_{\pi_\varkappa}$ can be continued to an exponential type entire function of $\theta \in L^1(\mathds{R}^d)$. \begin{definition}
\label{0df} The set of sub-Poissonian measures $\mathcal{P}_{\rm exp} (\Gamma)$ consists of all those $\mu\in \mathcal{P}(\Gamma)$ for each of which
$\mu(F^\theta)$ can be continued to an exponential type entire function of $\theta \in L^1(\mathds{R}^d)$. \end{definition} It can be shown that $\mu\in \mathcal{P}_{\rm exp} (\Gamma)$ if and only if $\mu(F^\theta)$ can be written in the form \begin{equation}
\label{6b}
\mu(F^\theta) = 1 + \sum_{n=1}^\infty \frac{1}{n!} \int_{(\mathds{R}^d)^n} k^{(n)}_\mu (x_1 , \dots , x_n) \theta (x_1) \cdots \theta(x_n) d x_1 \cdots d x_n, \end{equation} where $k^{(n)}_\mu$ is the $n$-th order \emph{correlation function} of $\mu$. Each $k^{(n)}_\mu$ is a symmetric element of $L^\infty((\mathds{R}^d)^n)$, and the collection $\{k^{(n)}_\mu \}_{n\in \mathds{N}}$ satisfies \begin{equation}
\label{6c}
\|k^{(n)}_\mu\|_{L^\infty((\mathds{R}^d)^n)} \leq \varkappa^{n}, \qquad n \in \mathds{N}, \end{equation} holding with some $\varkappa >0$. Note that $k^{(n)}_\mu $ is positive and $k^{(n)}_{\pi_\varkappa} = \varkappa^n$; hence, (\ref{6c}) means that $k^{(n)}_\mu (x_1 , \dots , x_n)\leq \varkappa^n$, by which one gets (\ref{o3a}).
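For the Poisson measure these objects are explicit: $k^{(n)}_{\pi_\varkappa} = \varkappa^n$, and with $\theta \equiv c$ on a set of Lebesgue measure $V$ each integral in (\ref{6b}) equals $(cV)^n$, so the series resums to $\exp(\varkappa c V) = B_{\pi_\varkappa}(\theta)$. A quick numerical sanity check of this resummation (a sketch; the values of $\varkappa$, $c$, $V$ are arbitrary):

```python
import math

kappa = 0.7       # intensity varkappa of the Poisson measure
c, V = -0.4, 2.5  # theta = c on a set of measure V, so int theta dx = c*V
I = c * V

# Truncation of the series 1 + sum_n kappa^n I^n / n!  (here k^(n) = kappa^n).
series = 1.0 + sum(kappa**n * I**n / math.factorial(n) for n in range(1, 40))

closed_form = math.exp(kappa * I)  # B_{pi_kappa}(theta) = exp(kappa * int theta)
print(abs(series - closed_form) < 1e-12)
```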
Now we turn to functions $G:\Gamma_0 \to \mathds{R}$. It can be proved that such a function is $\mathcal{B}(\Gamma_0)/\mathcal{B}(\mathds{R})$-measurable if and only if there exists a collection of symmetric Borel functions $G^{(n)}:(\mathds{R}^d)^n \to \mathds{R}$, $n\in \mathds{N}$, such that \begin{equation}
\label{7}
G(\eta) = G^{(n)}(x_1 , \dots , x_n), \qquad {\rm for} \ \ \eta = \{x_1 , \dots , x_n\}. \end{equation} \begin{definition}
\label{1df}
A measurable function $G:\Gamma_0 \to \mathds{R}$ is said to have bounded support if: (a) there exists compact
$\Lambda \subset \mathds{R}^d$ such that $G(\eta) =0$ whenever $\eta\cap\Lambda \neq \eta$; (b) there exists
$N\in \mathds{N}$ such that $G(\eta) =0$ whenever $|\eta|>N$. By $B_{\rm bs}(\Gamma_0)$ we denote the set of all bounded functions with bounded support.
For each $G\in B_{\rm bs}(\Gamma_0)$, by $\Lambda_G$ and $N_G$ we denote the smallest $\Lambda$ and $N$ with the properties just mentioned,
and use the notation $C_G = \sup_{\eta\in \Gamma_0} |G(\eta)|$. \end{definition} The Lebesgue-Poisson measure $\lambda$ on $(\Gamma_0, \mathcal{B}(\Gamma_0))$ is defined by the integrals \begin{equation}
\label{8} \int_{\Gamma_0} G(\eta) \lambda (d \eta) = G(\varnothing) + \sum_{n=1}^\infty \frac{1}{n!} \int_{(\mathds{R}^d)^n} G^{(n)}(x_1 , \dots , x_n) d x_1 \cdots d x_n, \end{equation} for all $G\in B_{\rm bs}(\Gamma_0)$. For such $G$, we set \begin{equation}
\label{9}
(KG)(\gamma) = \sum_{\eta \Subset \gamma} G(\eta), \qquad \gamma \in \Gamma, \end{equation} where $\eta\Subset \gamma$ means that $\eta\subset \gamma$ and $\eta\in \Gamma_0$. Clearly, cf. Definition \ref{1df}, we have that \begin{equation}
\label{10}
|(KG)(\gamma)| \leq C_G \left(1 + |\gamma\cap \Lambda_G| \right)^{N_G}, \qquad G\in B_{\rm bs}(\Gamma_0). \end{equation} As in (\ref{7}), we introduce the function $k_\mu:\Gamma_0 \to \mathds{R}$ such that $k_\mu (\eta) = k^{(n)}_\mu (x_1 , \dots, x_n)$ for $\eta =\{x_1 , \dots, x_n\}$, $n\in \mathds{N}$, and $k_\mu(\varnothing) =1$. Then we rewrite (\ref{6b}) as follows \begin{equation}
\label{11} \mu (F^\theta) = \int_{\Gamma_0} k_\mu(\eta) e(\theta;\eta)
\lambda (d \eta), \qquad e(\theta;\eta):= \prod_{x\in
\eta}\theta(x). \end{equation} For $\mu\in \mathcal{P}_{\rm exp}(\Gamma)$ and a compact $\Lambda$, let $\mu^\Lambda$ be the corresponding projection. It is possible to show that $\mu^\Lambda$, as a measure on $(\Gamma_\Lambda, \mathcal{B}(\Gamma_\Lambda))$, is absolutely continuous with respect to the Lebesgue-Poisson measure $\lambda$. Hence, we may write \begin{equation}
\label{12} \mu^\Lambda ( d\eta) = R^\Lambda_\mu (\eta) \lambda (d\eta), \qquad \eta \in \Gamma_\Lambda. \end{equation} For each compact $\Lambda$, the Radon-Nikodym derivative $R^\Lambda_\mu$ and the correlation function $k_\mu$ satisfy \begin{equation}
\label{13} k_\mu(\eta) = \int_{\Gamma_\Lambda} R^\Lambda_\mu (\eta \cup \xi) \lambda ( d \xi), \qquad \eta \in \Gamma_\Lambda. \end{equation} For each $G\in B_{\rm bs}(\Gamma_0)$ and $k:\Gamma_0 \to \mathds{R}$ such that $k^{(n)}\in L^\infty ((\mathds{R}^d)^n)$ the integral \begin{equation}
\label{14}
\langle \! \langle G, k \rangle \! \rangle := \int_{\Gamma_0}
G(\eta) k(\eta) \lambda ( d \eta) \end{equation} is well-defined. By (\ref{6b}), (\ref{9}), (\ref{12}) and (\ref{14}) we then obtain \begin{equation}
\label{15}
\int_\Gamma \left(KG \right)(\gamma) \mu (d\gamma) = \langle \! \langle G, k_\mu \rangle \! \rangle \end{equation} holding for all $G\in B_{\rm bs}(\Gamma_0)$ and $\mu \in \mathcal{P}_{\rm exp}(\Gamma)$. Set \begin{equation}
\label{16} B_{\rm bs}^{\star}(\Gamma_0) = \{ G \in B_{\rm bs}(\Gamma_0): \left( KG \right) (\gamma) \geq 0 \ \ {\rm for} \ \ {\rm all} \ \ \gamma \in \Gamma\}. \end{equation} By \cite[Theorems 6.1, 6.2 and Remark 6.3]{Tobi} we know that the following is true. \begin{proposition}
\label{1pn} Let a measurable function $k: \Gamma_0 \to \mathds{R}$ have the following properties: \begin{eqnarray*} &(a) & \ \ \langle \! \langle G, k \rangle \! \rangle \geq 0, \qquad {\rm for} \ \ {\rm all} \ \ G \in B_{\rm bs}^{\star}(\Gamma_0); \\[.2cm]
&(b) & \ \ k(\varnothing) =1; \\ &(c) & \ \ k(\eta) \leq C^{|\eta|}, \qquad {\rm for} \ \ {\rm some} \ \ C>0. \end{eqnarray*} Then there exists a unique $\mu\in \mathcal{P}_{\rm exp}(\Gamma)$ such that $k$ is its correlation function. \end{proposition} Throughout the paper we use the following easy to check identities holding for appropriate functions $g:\mathds{R}^d \to \mathds{R}$ and $G:\Gamma_0 \to \mathds{R}$: \begin{equation}
\label{17} \forall x\in \gamma\qquad \sum_{\eta \Subset \gamma}\prod_{z\in \eta} g(z)= (1+ g(x))\sum_{\eta \Subset \gamma\setminus x}\prod_{z\in \eta} g(z), \end{equation} \begin{equation}
\label{18}
\int_{\Gamma_0} \sum_{\xi \subset \eta} G(\xi, \eta, \eta
\setminus \xi) \lambda ( d\eta)= \int_{\Gamma_0} \int_{\Gamma_0} G(\xi, \eta\cup \xi,
\eta) \lambda ( d\xi) \lambda ( d\eta). \end{equation}
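The identity (\ref{17}) can be checked directly on finite configurations, where it amounts to splitting the subsets $\eta \subset \gamma$ into those containing a given $x$ and those avoiding it; the full sum then equals $\prod_{x\in\gamma}(1+g(x))$. A brute-force numerical sketch over all subsets of a small configuration (the points and the test function $g$ are arbitrary choices):

```python
from itertools import combinations

gamma = [0.1, 0.8, 1.5, 2.2]   # a small finite configuration
g = lambda z: 0.5 * z - 0.2    # an arbitrary test function

def subset_sum(points):
    """Sum over all subsets eta of `points` of prod_{z in eta} g(z);
    the empty subset contributes 1."""
    total = 0.0
    for k in range(len(points) + 1):
        for eta in combinations(points, k):
            prod = 1.0
            for z in eta:
                prod *= g(z)
            total += prod
    return total

x = gamma[0]
rest = gamma[1:]
lhs = subset_sum(gamma)                # sum over eta contained in gamma
rhs = (1.0 + g(x)) * subset_sum(rest)  # right-hand side of identity (17)
print(abs(lhs - rhs) < 1e-12)
```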
\subsection{The model} As mentioned above, the model which we consider in this work is described by the generator given in (\ref{2}). Its entries are subject to the following \begin{assumption}
\label{ass1} The nonnegative measurable functions $a$, $b$ and $m$ satisfy: \begin{itemize} \item[(i)] $a$ is integrable and bounded; hence, we may set $$\sup_{x\in \mathds{R}^d}a(x) = a^*, \qquad \int_{ \mathds{R}^d}a(x) dx = \langle a \rangle.$$ \item[(ii)] There exist positive $r$ and $a_*$ such that $a(x) \geq a_*$
whenever $|x|\leq r$.
\item[(iii)] For each $x\in \mathds{R}^d$, $b(x|y_1, y_2) d y_1 d y_2$ is a symmetric finite measure on $(\mathds{R}^d)^2$; hence, we may set \[
\langle b \rangle = \int_{(\mathds{R}^d)^2} b(x|y_1, y_2) d y_1 d y_2, \]
where, for simplicity, we consider the translation invariant case. The mentioned symmetry means that $b(x|y_1, y_2) = b(x|y_2, y_1)$. \item[(iv)]
The function $$\beta (y_1 - y_2) = \int_{\mathds{R}^d} b(x|y_1 , y_2) d x $$ is supposed to be such that $\sup_{x\in \mathds{R}^d}\beta(x) =: \beta^*<\infty$. By the translation invariance it follows that $$\int_{ \mathds{R}^d}\beta(x) dx = \langle b \rangle.$$ \end{itemize} \end{assumption}
Notably, we do not exclude the case where $b$ is a distribution. For instance, by setting $$b(x|y_1, y_2)= \frac{1}{2} \left(\delta (x-y_1) + \delta (x-y_2) \right)\beta (y_1-y_2),$$ we obtain the Bolker-Pacala model \cite{KK1} as a particular case of our model. \begin{remark}
\label{1rk} The function $\beta$ describes the dispersal of siblings, which compete with each other. As in the Bolker-Pacala model, here the following situations may occur: \begin{itemize}
\item \emph{short dispersal:} there exists $\omega >0$
such that $a(x)
\geq \omega \beta(x)$ for all $x\in \mathds{R}^d$;
\item \emph{long dispersal:} for each $\omega >0$, there
exists $x\in \mathds{R}^d$ such that $ a(x)
< \omega \beta(x)$. \end{itemize} \end{remark} For $\eta \in \Gamma_0$, we set, cf. (\ref{3}), \begin{eqnarray}
\label{19} E^a(\eta) & = & \sum_{x\in \eta} E^a(x, \eta\setminus x) = \sum_{x\in \eta} \sum_{y\in \eta\setminus x} a(x-y), \\[.2cm] \nonumber E^b(\eta) & = & \sum_{x\in \eta} \sum_{y\in \eta\setminus x} \beta(x-y) = \sum_{x\in \eta} \sum_{y\in \eta\setminus x}
\int_{\mathds{R}^d} b(z|x, y) d z. \end{eqnarray} The properties mentioned in (ii) and (iv) of Assumption \ref{ass1} imply the following fact, proved in \cite[Lemma 3.1]{KKa}. For the reader's convenience, we repeat the proof in the Appendix below. \begin{proposition}
\label{2pn} There exist $\omega>0$ and $\upsilon\geq 0$ such that the following holds \begin{equation}
\label{2pnN} \upsilon |\eta| + E^a(\eta) \geq \omega E^b(\eta), \qquad \eta\in \Gamma_0. \end{equation} \end{proposition} The inequality in (\ref{2pnN}) can be rewritten in the form \begin{equation}
\Phi_{\omega}(\eta):= \sum_{x\in \eta} \sum_{y\in \eta\setminus x} \left[ a(x-y)-\omega \int_{\mathds{R}^d}b(z|x,y)dz \right] \ge -\upsilon |\eta|. \label{fi} \end{equation} \begin{proposition} \label{pfi} Assume that (\ref{fi}) holds for some $\omega_0>0$ and $\upsilon_0>0$. Then for each $\omega < \omega_0$, it also holds with $\upsilon=\upsilon_0\omega/\omega_0$. \end{proposition} \begin{proof} For $\omega \in (0, \omega_0]$, by adding and subtracting $\frac{\omega}{\omega_0}E^a(\eta)$ we obtain $$\Phi_\omega(\eta)=\frac{\omega}{\omega_0} \left[ \left(\frac{\omega_0}{\omega}-1 \right)
E^a(\eta)+\Phi_{\omega_0}(\eta) \right]\ge -\frac{\omega}{\omega_0}\upsilon_0|\eta|.$$ \end{proof}
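The decomposition used in this proof is purely algebraic: writing $\Phi_\omega(\eta) = A - \omega B$ with $A := E^a(\eta)$ and $B := \sum_{x}\sum_{y}\int b(z|x,y)\,dz$, one indeed has $\frac{\omega}{\omega_0}\big[(\frac{\omega_0}{\omega}-1)A + \Phi_{\omega_0}(\eta)\big] = A - \omega B$. A scalar numerical sketch of this identity (all values arbitrary):

```python
# Scalar stand-ins: A ~ E^a(eta), B ~ the b-integral sum in Phi_omega.
A, B = 3.7, 1.9
omega0, omega = 2.0, 0.8      # 0 < omega < omega0

Phi = lambda w: A - w * B     # Phi_w(eta) = E^a(eta) - w * (b-sum)

# The decomposition from the proof of the proposition:
decomposed = (omega / omega0) * ((omega0 / omega - 1.0) * A + Phi(omega0))
print(abs(decomposed - Phi(omega)) < 1e-12)
```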
\section{The Evolution of States of the Finite System}
Here we assume that the initial state in (\ref{4}) has the property $\mu_0(\Gamma_0)=1$, i.e., the system in $\mu_0$ is finite. Then the evolution will be constructed in the Banach space of signed measures with bounded variation, where the generator $L^*$ can be defined as an unbounded linear operator and $C_0$-semigroup techniques can be applied.
\subsection{The statement}
As just mentioned, we will solve (\ref{4}) in the Banach space $\mathcal{M}$ of all signed measures on $(\Gamma_0,\mathcal{B}(\Gamma_0))$ with bounded variation. Let $\mathcal{M}^{+}$ stand for the cone of positive elements of $\mathcal{M}$. By means of the Hahn-Jordan decomposition $\mu = \mu^{+} - \mu^{-}$, $\mu^{\pm}\in \mathcal{M}^{+}$, the norm of
$\mu\in \mathcal{M}$ is set to be $\|\mu\|_{\mathcal{M}}= \mu^{+} (\Gamma_0) + \mu^{-}(\Gamma_0)$. Then $\mathcal{P}(\Gamma_0)$ is a subset of $ \mathcal{M}^{+}$. The linear functional $\varphi_{\mathcal{M}}(\mu) := \mu(\Gamma_0)= \mu^{+}(\Gamma_0) - \mu^{-}(\Gamma_0)$ has the property $\varphi_{\mathcal{M}}(\mu) =
\|\mu\|_{\mathcal{M}}$ for each $\mu\in\mathcal{M}^{+}$. That is,
$\|\cdot \|_{\mathcal{M}}$ is additive on the cone $\mathcal{M}^{+}$ and hence $\mathcal{M}$ is an $AL$-space, cf. \cite{TV}.
For a strictly increasing function $\chi: \mathds{N}_0 \to [0, +\infty)$, we set \begin{equation}
\label{18a} \mathcal{M}_\chi =\left\{ \mu \in \mathcal{M}: \int_{\Gamma_0}\chi(
|\eta|) \mu^{\pm} (d \eta)<\infty\right\}, \qquad \mathcal{M}^{+}_\chi = \mathcal{M}_\chi \cap \mathcal{M}^{+}, \end{equation} and introduce \begin{eqnarray}
\label{18b}
\varphi_{\mathcal{M}_\chi} (\mu) = \int_{\Gamma_0} \chi(|\eta|) \mu^{+}
(d \eta)- \int_{\Gamma_0} \chi( |\eta|) \mu^{-} (d \eta), \qquad \mu \in \mathcal{M}_\chi. \end{eqnarray} Note that $\mathcal{M}_\chi$ is a proper subset of $\mathcal{M}$ and the corresponding embedding is continuous. Set, cf. Assumption \ref{ass1} and (\ref{19}), \begin{equation}
\label{22}
\Psi(\eta) = M(\eta) + E^a (\eta) + \langle b \rangle |\eta|, \qquad M(\eta):= \sum_{x\in \eta} m(x) \leq m^* |\eta|, \end{equation} and then \begin{equation}
\label{22a}
\mathcal{D}= \left\{ \mu \in \mathcal{M}: \int_{\Gamma_0} \Psi(\eta)
\mu^{\pm}(d\eta)<\infty\right\}. \end{equation}
By (\ref{19}) we have that $\Psi(\eta)\leq C|\eta|^2$ for an appropriate $C>0$; hence, $\mathcal{M}_{\chi_2} \subset \mathcal{D}$, where $\chi_m (n) = (1+n)^m$, $m\in \mathds{N}$. Then, for $\mu \in \mathcal{D}$, we define \begin{equation}
\label{22o} (A\mu)(d\eta) = - \Psi (\eta) \mu(d\eta), \qquad (B\mu)(d\eta) =
\int_{\Gamma_0} \Xi (d \eta|\xi) \mu( d \xi), \end{equation} where the measure kernel $\Xi$ is \begin{eqnarray}
\label{22b}
\Xi (\mathbb{A}|\xi) &=& \sum_{x\in \xi} \left( m(x) + E^a (x, \xi\setminus x) \right)\mathds{1}_{\mathbb{A}}(\xi \setminus x) \\[.2cm] \nonumber
& + & \sum_{x\in \xi}\int_{(\mathds{R}^d)^2} b(x|y_1, y_2) \mathds{1}_{\mathbb{A}} (\xi \setminus x\cup\{y_1 , y_2\}) d y_1 d y_2, \qquad \mathbb{A} \in \mathcal{B}(\Gamma_0), \end{eqnarray} and $\mathds{1}_{\mathbb{A}}$ is the indicator of $\mathbb{A}$. Then we set $L^* = A+B$. By direct inspection one checks that $L^*$ satisfies $\mu(LF) = (L^* \mu)(F)$ holding for all $\mu \in \mathcal{D}$ and appropriate $F:\Gamma_0 \to [0, +\infty)$, see (\ref{2}).
Along with $\chi_m$ defined above we also consider $\chi^\kappa (n) := e^{\kappa n}$, $\kappa >0$, and the space $\mathcal{M}_{\chi^\kappa}$. By a global solution of (\ref{4}) in $\mathcal{M}$ with $\mu_0 \in \mathcal{D}$ we understand a continuous map $[0,+\infty) \ni t \mapsto \mu_t \in \mathcal{D}\subset \mathcal{M}$, which is continuously differentiable in $\mathcal{M}$ on $(0,+\infty)$ and is such that both equalities in (\ref{4}) hold. \begin{theorem}
\label{1ftm} The problem in (\ref{4}) with $\mu_0 \in \mathcal{D}$ has a unique global solution $\mu_t\in\mathcal{M}$, which has the following properties: \begin{itemize}
\item[{\it(a)}] for each $m\in \mathds{N}$, $\mu_t \in
\mathcal{M}_{\chi_m}\cap\mathcal{P}(\Gamma_0)$ for all $t>0$
whenever $\mu_0 \in
\mathcal{M}_{\chi_m}\cap\mathcal{P}(\Gamma_0)$;
\item[{\it (b)}] for each $\kappa>0$ and $\kappa' \in (0,
\kappa)$, $\mu_t \in \mathcal{M}_{\chi^{\kappa'}}\cap\mathcal{P}(\Gamma_0)$ for all $t\in (0,T(\kappa, \kappa'))$
whenever $\mu_0 \in
\mathcal{M}_{\chi^{\kappa}}\cap\mathcal{P}(\Gamma_0)$, where
\begin{equation}
\label{22T}
T(\kappa, \kappa') = \frac{\kappa -
\kappa'}{\langle b \rangle}e^{-\kappa} ;
\end{equation} \item[{\it (c)}] for all $t>0$, $\mu_t (d\eta) = R_t (\eta) \lambda (d\eta)$ whenever $\mu_0 (d\eta) = R_0(\eta) \lambda (d\eta)$. \end{itemize} \end{theorem}
\subsection{The proof} To prove Theorem \ref{1ftm}, as well as to elaborate tools for studying the evolution of infinite systems, we use the Thieme-Voigt perturbation technique \cite{TV}, the basic elements of which we present here in a form adapted to the present context.
To prove claim (c), along with the space $\mathcal{M}$ we also consider its subspace consisting of measures absolutely continuous with respect to the Lebesgue-Poisson measure defined in (\ref{8}), identified with $\mathcal{R}:=L^1 (\Gamma_0 , d \lambda)$, on which we have the analogous functional $\varphi_{\mathcal{R}}(R) = \int_{\Gamma_0} R(\eta) \lambda (d\eta)$. Then we define $\mathcal{R}^{+}$ and $\mathcal{R}^{+}_1$ consisting of the positive elements and the probability densities, respectively. Note that $\varphi_{\mathcal{R}}(R) =
\|R\|_{\mathcal{R}}$ for $R\in \mathcal{R}^{+}$, and hence $\mathcal{R}$ is also an $AL$-space. For $\chi: \mathds{N}_0 \to [0,+\infty)$ as in (\ref{18a}), we set \begin{eqnarray}
\label{18c} & & \mathcal{R}_\chi = \left\{ R\in \mathcal{R}: \int_{\Gamma_0}
\chi(|\eta|) |R(\eta)|\lambda (d \eta ) < \infty\right\}, \\[.2cm] \nonumber & & \varphi_{\mathcal{R}_\chi}(R) = \int_{\Gamma_0}
\chi(|\eta|) R(\eta)\lambda (d \eta ), \qquad R\in \mathcal{R}_\chi, \\[.2cm] \nonumber & & \mathcal{R}_\chi^{+} = \mathcal{R}_\chi \cap \mathcal{R}^{+}, \qquad \mathcal{R}_{\chi,1}^{+} = \{ R\in \mathcal{R}_\chi^{+}: \varphi_{\mathcal{R}}(R) = 1\}. \end{eqnarray} Now let $\mathcal{E}$ be either $\mathcal{M}$ or $\mathcal{R}$, and
$\|\cdot \|_{\mathcal{E}}$ stand for the corresponding norm. The sets $\mathcal{E}^{+}$, $\mathcal{E}_1^{+}$, $\mathcal{E}_\chi$, $\mathcal{E}_\chi^{+}$, $\mathcal{E}_{\chi,1}^{+}$, and the functionals $\varphi_{\mathcal{E}}$, $\varphi_{\mathcal{E}_\chi}$ are defined analogously, i.e., they should coincide with the corresponding objects introduced above if $\mathcal{E}$ is replaced by $\mathcal{M}$ or $\mathcal{R}$ (by $\mathcal{M}_{1}^{+}$ we then understand $\mathcal{P}(\Gamma_0)$). Let $\mathcal{D}\subset \mathcal{E}$ be a linear subspace, $\mathcal{D}^{+} =\mathcal{D}\cap \mathcal{E}^{+}$ and $(A,\mathcal{D})$, $(B,\mathcal{D})$ be operators on $\mathcal{E}$. Set also $\mathcal{D}_\chi =\{ u\in \mathcal{D}\cap \mathcal{E}_\chi: A u \in \mathcal{E}_\chi\}$ and denote by $A_\chi$ the {\it trace} of $A$ in $\mathcal{E}_\chi$, i.e., the restriction of $A$ to $\mathcal{D}_\chi$. Recall that a $C_0$-semigroup of bounded linear operators $S=\{S(t)\}_{t\geq 0}$ in $\mathcal{E}$ is called \emph{positive} if $S(t):\mathcal{E}^{+}\to \mathcal{E}^{+}$ for each $t\geq 0$. A \emph{sub-stochastic} (resp. \emph{stochastic}) semigroup in $\mathcal{E}$ is a positive $C_0$-semigroup such that $\varphi_{\mathcal{E}} (S(t)u) \leq \varphi_{\mathcal{E}} (u)$ (resp. $\varphi_{\mathcal{E}} (S(t)u) = \varphi_{\mathcal{E}} (u)$) whenever $u\in \mathcal{E}^{+}$. \begin{proposition}\cite[Proposition 2.2]{TV}
\label{TV0pn} Let $(A,\mathcal{D})$ be the generator of a positive $C_0$-semigroup in $\mathcal{E}$, and $(B,\mathcal{D})$ be positive, i.e., $B:\mathcal{D}^{+}\to \mathcal{E}^{+}$. Suppose also that \begin{equation}
\label{40} \forall u\in\mathcal{D}^{+} \qquad \varphi_{\mathcal{E}}((A+B)u) \leq 0. \end{equation} Then, for each $r\in (0,1)$, the operator $(A+rB, \mathcal{D})$ is the generator of a sub-stochastic semigroup in $\mathcal{E}$. \end{proposition} \begin{proposition}\cite[Proposition 2.7]{TV}
\label{TVpn} Assume that: \begin{itemize}
\item[(i)] $-A:\mathcal{D}^{+} \to \mathcal{E}^{+}$ and $B:\mathcal{D}^{+} \to \mathcal{E}^{+}$; \item[(ii)] $(A,\mathcal{D})$ is the generator of a sub-stochastic semigroup $S=\{S(t)\}_{t\geq 0}$ on $\mathcal{E}$ such that $S (t):\mathcal{E}_\chi \to \mathcal{E}_\chi$ for all
$t\geq 0$ and the restrictions $S (t)|_{\mathcal{E}_\chi}$ constitute a $C_0$-semigroup on $\mathcal{E}_{\chi}$ generated by $(A_\chi, \mathcal{D}_\chi)$; \item[(iii)] $B:\mathcal{D}_\chi \to \mathcal{E}_\chi$ and $ \varphi_{\mathcal{E}} \left( (A+B) u\right) = 0$, for $u\in \mathcal{D}^{+}$; \item[(iv)] there exist $c>0$ and $\varepsilon >0$ such that \[ \varphi_{\mathcal{E}_\chi} \left( (A+B) u\right) \leq c
\varphi_{\mathcal{E}_\chi} (u) - \varepsilon \|A u\|_{\mathcal{E}}, \qquad {\rm for} \ \ u\in \mathcal{D}_\chi \cap \mathcal{E}^{+}. \] \end{itemize} Then the closure of $(A+B,\mathcal{D})$ in $\mathcal{E}$ is the generator of a stochastic semigroup $S_{\mathcal{E}}= \{S_{\mathcal{E}}(t)\}_{t\geq 0}$ on $\mathcal{E}$ which leaves $\mathcal{E}_\chi$ invariant. The restrictions
$S_{\mathcal{E}_\chi}(t):=S_{\mathcal{E}}(t)|_{\mathcal{E}_\chi}$, $t\geq 0$, constitute a $C_0$-semigroup $S_{\mathcal{E}_\chi}$ on $\mathcal{E}_\chi$ generated by the trace of the generator of $S_{\mathcal{E}}$ in $\mathcal{E}_\chi$. \end{proposition} {\it Proof of Theorem \ref{1ftm}.} Along with $L^*=A+B$ defined in (\ref{22a}) and (\ref{22b}) we consider the operator in $\mathcal{R}$ defined according to the rule $ (L^* \mu) (d\eta) = (L^\dagger R_\mu) (\eta) \lambda ( d\eta)$. Then $L^\dagger = A^\dagger + B^\dagger$ with \begin{eqnarray}
\label{L} (A^\dagger R)(\eta) & = & - \Psi (\eta) R(\eta), \\[.2cm] \nonumber (B^\dagger R)(\eta) & = & \int_{\mathds{R}^d} \left(m(x) + E^a(x,\eta) \right) R(\eta \cup x) d x \\[.2cm] \nonumber & + &
\int_{\mathds{R}^d} \sum_{y_1\in \eta}\sum_{y_2 \in \eta \setminus y_1} b(x|y_1 , y_2) R(\eta \cup x \setminus \{y_1 , y_2\}) d x, \end{eqnarray} the domain of which is, cf. (\ref{22a}), \begin{equation}
\label{L2} \mathcal{D}^\dagger = \left\{ R \in \mathcal{R}: \int_{\Gamma_0}
\Psi(\eta) |R(\eta)| \lambda ( d \eta) < \infty \right\}. \end{equation} For $R\in \mathcal{D}^\dagger \cap \mathcal{R}^{+}$, by (\ref{18}) and (\ref{22}) we obtain from (\ref{L}) \begin{eqnarray}
\label{L4} \varphi_{\mathcal{R}} ( B^\dagger R) & = & \int_{\Gamma_0} \left(\sum_{x\in \eta} [m(x) + E^a(x,\eta \setminus x)] \right) R(\eta)\lambda (d \eta) \\[.2cm] \nonumber & + & \int_{\Gamma_0}
\left(\sum_{x\in \eta} \int_{(\mathds{R}^d)^2} b(x|y_1 , y_2) d y_1 d y_2 \right) R(\eta)\lambda (d \eta) \\[.2cm] \nonumber & = & \int_{\Gamma_0} \Psi (\eta) R(\eta)\lambda (d \eta) = - \varphi_{\mathcal{R}}( A^\dagger R). \end{eqnarray} By (\ref{L2}) and (\ref{L4}) we then get that: (a) $B^\dagger : \mathcal{D}^\dagger \to \mathcal{R}$ and $B^\dagger : \mathcal{R}^{+} \cap \mathcal{D}^\dagger \to \mathcal{R}^{+}$; (b) $\varphi_{\mathcal{R}} ((A^\dagger + B^\dagger)R) =0$ for each $R\in \mathcal{R}^{+} \cap \mathcal{D}^\dagger$. In the same way, we prove that the operators defined in (\ref{22a}) and (\ref{22o}) satisfy: (a) $B : \mathcal{D} \to \mathcal{M}$ and $B : \mathcal{D}^{+} \to \mathcal{M}^{+}$; (b) $\varphi_{\mathcal{M}} ((A + B)\mu) =0$ for each $\mu \in \mathcal{D}^{+}$. Thus, both pairs $(A, \mathcal{D})$, $(B, \mathcal{D})$ and $(A^\dagger, \mathcal{D}^\dagger)$, $(B^\dagger, \mathcal{D}^\dagger)$ satisfy item (i) of Proposition \ref{TVpn}. We proceed further by setting \begin{eqnarray}
\label{L3} (S(t) \mu) (d \eta) & = & \exp\left(- t \Psi(\eta)\right) \mu( d\eta), \quad \mu \in \mathcal{M}, \quad t>0, \\[.2cm] \nonumber (S^\dagger(t) R) (\eta)& = & \exp\left(- t \Psi(\eta)\right) R( \eta), \quad R\in \mathcal{R}. \end{eqnarray} Obviously, $S=\{S(t)\}_{t\geq 0}$ and $S^\dagger=\{S^\dagger(t)\}_{t\geq 0}$ are sub-stochastic semigroups on $\mathcal{M}$ and $\mathcal{R}$, respectively. They are generated respectively by $(A, \mathcal{D})$ and $(A^\dagger, \mathcal{D}^\dagger)$. Clearly, the restrictions
$S(t)|_{\mathcal{M}_\chi}$ and $S^\dagger(t)|_{\mathcal{R}_\chi}$ constitute positive $C_0$-semigroups for $\chi_m$ and $\chi^\kappa$ as in Theorem \ref{1ftm}. Likewise, $B:\mathcal{D}_\chi\to \mathcal{M}_\chi$ and $B^\dagger:\mathcal{D}^\dagger_\chi\to \mathcal{R}_\chi$. Thus, the conditions in items (ii) and (iii) of Proposition \ref{TVpn} are satisfied in both cases.
Now we turn to item (iv) of Proposition \ref{TVpn}. By (\ref{18b}) we have \begin{eqnarray*}
\varphi_{\mathcal{M}_\chi} ((A+B)\mu) & = & \varphi_{\mathcal{M}_\chi}
(L^* \mu) = \int_{\Gamma_0} (LF_\chi)(\eta) \mu(d\eta), \quad F_\chi(\eta):= \chi(|\eta|), \qquad \\[.2cm] \nonumber \varphi_{\mathcal{R}_\chi} ((A^\dagger+B^\dagger)R) & = & \varphi_{\mathcal{R}_\chi} (L^\dagger R) = \int_{\Gamma_0} (L F_\chi)(\eta) R(\eta) \lambda (d\eta). \end{eqnarray*} Then the condition in item (iv) is satisfied if, for some positive $c$ and $\varepsilon$ and all $\eta$, the following holds \begin{equation}
\label{L5a}
(L F_\chi)(\eta) + \varepsilon \Psi (\eta) \leq c \chi(|\eta|). \end{equation} For $\chi_m (n) = (1+n)^m$, $m\in \mathds{N}$, by (\ref{2}) we have, cf. (\ref{22}), \begin{eqnarray}
\label{L6} (L F_{\chi_m})(\eta) & = & - \left( M(\eta) + E^a (\eta)\right)
\epsilon_m (|\eta|) + \langle b \rangle |\eta| \epsilon_{m}
(|\eta|+1), \\[.2cm] \nonumber \epsilon_m (n) & := & (n+1)^m - n^m = (n+1)^{m-1} + (n+1)^{m-2}n + \cdots + n^{m-1} \\[.2cm] \nonumber & \leq & m(n+1)^{m-1}. \end{eqnarray} For $\chi^\kappa(n) = e^{\kappa n}$, we have \begin{eqnarray*}
(L F_{\chi^\kappa})(\eta) = - \left( M(\eta) + E^a (\eta)\right)
e^{\kappa |\eta|} (1- e^{-\kappa}) + \langle b \rangle |\eta| e^{\kappa |\eta|} (e^{\kappa}-1). \end{eqnarray*} By (\ref{L6}) the condition in (\ref{L5a}) takes the form \begin{equation}
\label{L8}
- \left( M(\eta) + E^a (\eta)\right)\left(
\epsilon_m (|\eta|) -\varepsilon\right) + \langle b \rangle |\eta|
\left(\epsilon_{m} (|\eta|+1) + \varepsilon\right) \leq c
\left(|\eta|+1 \right)^m. \end{equation}
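The elementary bounds $1 \leq \epsilon_m(n) \leq m(n+1)^{m-1}$ from (\ref{L6}) and used below can be checked numerically over a range of $m$ and $n$; a small sketch (the ranges are arbitrary):

```python
def eps(m, n):
    """epsilon_m(n) = (n+1)^m - n^m, the increment of chi_m."""
    return (n + 1) ** m - n ** m

# Check 1 <= epsilon_m(n) <= m (n+1)^{m-1} on a sample range of m and n.
ok = all(
    1 <= eps(m, n) <= m * (n + 1) ** (m - 1)
    for m in range(1, 8)
    for n in range(0, 50)
)
print(ok)
```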
Since $\epsilon_m (|\eta|) \geq 1$, for $\varepsilon < 1$ the first summand in (\ref{L8}) is nonpositive, and hence the validity of (\ref{L8}) follows whenever $c$ satisfies \[ c \geq m \langle b \rangle \left( 2^{m-1} + 1\right). \] Hence, for $\chi=\chi_m$, all the conditions of Proposition \ref{TVpn} are met for both choices of $\mathcal{E}$ and the corresponding operators. Therefore, we have two semigroups: $S_{\mathcal{M}}$ and $S_{\mathcal{R}}$, with the properties described in the mentioned statement. Then $\mu_t = S_\mathcal{M}(t) \mu_0$ is the unique solution of the Fokker-Planck equation (\ref{4}) with $\mu_0 \in \mathcal{D}$, which proves claim (a) of Theorem \ref{1ftm}. At the same time, $R_t = S_{\mathcal{R}}(t) R_0$ is the unique solution of \begin{equation}
\label{L9}
\dot{R}_t = L^\dagger R_t, \qquad R_t|_{t=0} = R_{\mu_0}\in \mathcal{D}^\dagger. \end{equation} By (\ref{L2}) we have that $R_{\mu_0}\in \mathcal{D}^\dagger$ and $\mu_0\in \mathcal{D}$ are equivalent. By direct inspection one checks that $\mu_t(d \eta) = R_t (\eta) \lambda (d \eta)$ solves (\ref{4}) if $R_t$ solves (\ref{L9}). Then the unique solution $\mu_t = S_{\mathcal{M}}(t) \mu_0$ of (\ref{4}) has the mentioned form, which proves claim (c).
To complete the proof we fix $\kappa >0$ and consider the trace of $A$ in $\mathcal{M}_{\chi^\kappa}$, cf. (\ref{22o}), defined on the domain \begin{equation*}
\mathcal{D}_{\kappa}:=\left\{ \mu \in \mathcal{M}_{\chi^\kappa}:
\int_{\Gamma_0} \Psi (\eta) e^{\kappa |\eta|}\mu^{\pm }(d\eta ) < \infty\right\}. \end{equation*} First, we split $B$ into the sum $B_1 + B_2$, where for $\mathbb{A} \in
\mathcal{B}( \Gamma_0)$ we set, cf. (\ref{22b}), \begin{equation}
\label{L11}
(B_1 \mu)(\mathbb{A}) = \int_{\Gamma_0} \left( \sum_{x\in \eta}[m(x) + E^a(x, \eta \setminus
x)] \mathds{1}_{\mathbb{A}} (\eta\setminus x)\right) \mu(d\eta) , \end{equation} and \begin{equation}
\label{L12} (B_2 \mu)(\mathbb{A}) = \int_{\Gamma_0} \left(\sum_{x\in \eta}
\int_{(\mathds{R}^d)^2} b(x|y_1 , y_2) \mathds{1}_{\mathbb{A}} (\eta \setminus x \cup\{y_1 , y_2\}) d y_1 d y_ 2\right) \mu( d\eta). \end{equation} For $\mu\in \mathcal{D}_\kappa^{+} :=\mathcal{D}_\kappa \cap \mathcal{M}^{+}$, from (\ref{L11}) we have \begin{eqnarray}
\label{L13} \varphi_{\mathcal{M}_{\chi^\kappa}} (B_1 \mu) & = & \int_{\Gamma_0}
e^{\kappa |\xi|} \int_{\Gamma_0} \sum_{x\in \eta}[m(x) + E^a(x, \eta \setminus
x)]\delta_{\eta\setminus x}( d \xi) \mu(d\eta) \\[.2cm] \nonumber
& = & \int_{\Gamma_0} e^{\kappa (|\eta|-1)} \left(M(\eta) + E^a(\eta) \right) \mu(d\eta) \\[.2cm] \nonumber
& \leq & - e^{-\kappa} \varphi_{\mathcal{M}_{\chi^\kappa}} ( A \mu) . \end{eqnarray} For $r = e^{-\kappa}$, by (\ref{L13}) we have that $\varphi_{\mathcal{M}_{\chi^\kappa}} ((A+ r^{-1} B_1) \mu)\leq 0$ for each $\mu \in \mathcal{D}_\kappa^{+}$. Then by Proposition \ref{TV0pn} we obtain that $(A+ B_1, \mathcal{D}_\kappa)$ generates a sub-stochastic semigroup $S_\kappa$ on $\mathcal{M}_{\chi^\kappa}$. For $\kappa'\in (0,\kappa)$, let us now show that $B_2$ acts as a bounded linear operator from $\mathcal{M}_{\chi^\kappa}$ to $\mathcal{M}_{\chi^{\kappa'}}$. In view of the Hahn-Jordan decomposition, it is enough to consider the action of $B_2$ on positive elements of $\mathcal{M}_{\chi^\kappa}$. Since $B_2$ is positive, cf. (\ref{L12}), for $\mu\in \mathcal{M}^{+}_{\chi^\kappa}$, we have \begin{eqnarray}
\label{L14}
\|B_2 \mu\|_{\mathcal{M}_{\chi^{\kappa'}}} & = & \int_{\Gamma_0}
e^{\kappa'|\xi|} \int_{\Gamma_0} \sum_{x\in
\eta}\int_{(\mathds{R}^d)^2} b(x|y_1, y_2) \delta_{\eta\setminus x \cup\{y_1 , y_2\}} (d \xi) dy_1 dy_2 \mu ( d\eta) \qquad \\[.2cm]
\nonumber & = & e^{\kappa'} \int_{\Gamma_0} e^{\kappa'|\eta|}
\sum_{x\in \eta}\int_{(\mathds{R}^d)^2} b(x|y_1, y_2)
dy_1 dy_2 \mu ( d\eta) \\[.2cm]
\nonumber & = & e^{\kappa'} \langle b \rangle \int_{\Gamma_0} |\eta|
e^{- (\kappa-\kappa')|\eta|} e^{\kappa|\eta|} \mu ( d\eta) \\[.2cm] \nonumber & \leq & \frac{e^{\kappa'} \langle b \rangle}{e (\kappa -
\kappa')} \|\mu\|_{\mathcal{M}_{\chi^{\kappa}}}. \end{eqnarray} Let $(B_2)_{\kappa'\kappa}: \mathcal{M}^{+}_{\chi^\kappa} \to \mathcal{M}^{+}_{\chi^{\kappa'}}$ be the operator as just described. For $n\in \mathds{N}$, we set \begin{equation}
\label{L15} \kappa_l = \kappa - (\kappa - \kappa')l /n, \qquad l=0, 1, \dots , n. \end{equation} By means of (\ref{L14}) and (\ref{L15}) we then estimate the operator norm \begin{equation}
\label{L16}
\|(B_2)_{\kappa_{l+1}\kappa_l}\| \leq \frac{ e^{\kappa} n \langle b \rangle}{e (\kappa - \kappa')}. \end{equation} Next, for $t>0$ and $0\leq t_n \leq \cdots \leq t_0 = t$, we consider the following bounded linear operator acting from $\mathcal{M}_{\chi^\kappa}$ to $\mathcal{M}_{\chi^{\kappa'}}$ \begin{equation*}
T_{\kappa' \kappa}^{(n)} (t,t_1, t_2 , \dots , t_n) = S_{\kappa_n}(t-t_1) (B_2)_{\kappa_n \kappa_{n-1}} S_{\kappa_{n-1}}(t_1-t_2) \cdots (B_2)_{\kappa_1 \kappa} S_{\kappa}(t_n), \end{equation*} where $S_{\kappa_{l}}$ is the sub-stochastic semigroup in $\mathcal{M}_{\chi^{\kappa_l}}$ generated by $(A+B_1, \mathcal{D}_{\kappa_l})$. By the latter fact we have that $T_{\kappa' \kappa}^{(n)} (t,t_1, t_2 , \dots , t_n): \mathcal{M}_{\chi^\kappa}\to \mathcal{D}_{\kappa'}$ and \begin{eqnarray}
\label{L17a}
\frac{d}{dt} T_{\kappa' \kappa}^{(n)} (t,t_1, t_2 , \dots , t_n) & = & (A+ B_1)T_{\kappa' \kappa}^{(n)} (t,t_1, t_2 , \dots , t_n),\\[.2cm] \nonumber T_{\kappa' \kappa}^{(n)} (t,t, t_2 , \dots , t_n) & = & (B_2)_{\kappa' \kappa_{n-1}} T_{\kappa_{n-1} \kappa}^{(n-1)} (t, t_2 , \dots , t_n). \end{eqnarray} As $(B_2)_{\kappa' \kappa_{n-1}}$ is the restriction of $(B_2, \mathcal{D}_{\kappa'})$ to $\mathcal{M}_{\chi^{\kappa_{n-1}}} \subset \mathcal{D}_{\kappa'}$ and $T_{\kappa_{n-1} \kappa}^{(n-1)} (t, t_2 , \dots , t_n): \mathcal{M}_{\chi^\kappa}\to \mathcal{D}_{\kappa'}$, the second line in (\ref{L17a}) can be rewritten as \begin{equation}
\label{L17b} T_{\kappa' \kappa}^{(n)} (t,t, t_2 , \dots , t_n) = B_2 T_{\kappa' \kappa}^{(n-1)} (t, t_2 , \dots , t_n). \end{equation} On the other hand, since all the semigroups $S_{\kappa_{l}}$ are sub-stochastic and $(B_2)_{\kappa' \kappa}$ are positive, by (\ref{L16}) we get the following estimate of its operator norm \begin{equation}
\label{L18}
\|T^{(n)}_{\kappa' \kappa} (t,t_1, t_2 , \dots , t_n)\| \leq \left( \frac{ e^{\kappa} n \langle b \rangle}{e (\kappa - \kappa')}\right)^n. \end{equation} We also set $T^{(0)}_{\kappa' \kappa}(t)= S_{\kappa'}
(t)|_{\mathcal{M}_{\chi^\kappa}}$, and then consider \begin{equation}
\label{L19} Q_{\kappa'\kappa}(t) := \sum_{n=0}^\infty \int_0^t\int_0^{t_1} \cdots \int_0^{t_{n-1}} T^{(n)}_{\kappa'\kappa} (t,t_1, t_2 , \dots , t_n) d t_n d t_{n-1} \cdots d t_1. \end{equation} By (\ref{L18}) we conclude that the series in (\ref{L19}) converges uniformly on compact subsets of $[0, T(\kappa, \kappa'))$, see (\ref{22T}), to a continuously differentiable function \[ (0,T(\kappa, \kappa')) \ni t \mapsto Q_{\kappa'\kappa}(t) \in \mathcal{L}(\mathcal{M}_{\chi^\kappa}, \mathcal{M}_{\chi^{\kappa'}}), \] where the latter is the Banach space of all bounded linear operators acting from $\mathcal{M}_{\chi^\kappa}$ to $\mathcal{M}_{\chi^{\kappa'}}$. By (\ref{L17a}) and (\ref{L17b}) we obtain \begin{equation}
\label{L20} \frac{d}{dt} Q_{\kappa'\kappa}(t) = (A + B_1 + B_2) Q_{\kappa'\kappa}(t) = L^* Q_{\kappa'\kappa}(t). \end{equation} Thus, assuming that $\mu_0 \in \mathcal{M}_{\chi^\kappa}$ we get that $\tilde{\mu}_t := Q_{\kappa'\kappa}(t) \mu_0$, for $t \in [0,T(\kappa, \kappa'))$, lies in $\mathcal{M}_{\chi^{\kappa'}}$ and solves (\ref{4}). Therefore, $\tilde{\mu}_t$ coincides with $\mu_t = S_{\mathcal{M}}(t)\mu_0$, which completes the proof.
{$\square$}
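For the reader's convenience, let us indicate why the series in (\ref{L19}) converges on the stated interval. The integration over the simplex $\{0\leq t_n \leq \cdots \leq t_1 \leq t\}$ contributes a factor $t^n/n!$; hence, by (\ref{L18}) and the elementary bound $n^n \leq e^n\, n!$, the norm of the $n$-th summand does not exceed
\begin{equation*}
\left( \frac{ e^{\kappa} n \langle b \rangle}{e (\kappa - \kappa')}\right)^n \frac{t^n}{n!} \leq \left( \frac{t\, e^{\kappa} \langle b \rangle}{\kappa - \kappa'}\right)^n,
\end{equation*}
so that the series is dominated by a convergent geometric series whenever $t\, e^{\kappa}\langle b \rangle < \kappa - \kappa'$.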
\section{The Evolution of States of the Infinite System: Posing} \label{Sec3}
In this section, we begin to construct the evolution of states $\mu_0\to \mu_t$ assuming that the system in $\mu_0$ is infinite, so that the method developed in Sect. 3 no longer works. Instead, we will obtain $\mu_0\to \mu_t$ from the evolution $B_0 \to B_t$, where $B_0(\theta)=\mu_0 (F^\theta)$ and $\mu_0\in \mathcal{P}_{\rm exp} (\Gamma)$, see Definition \ref{0df}. In view of (\ref{11}), the evolution $B_0 \to B_t$ can be constructed as the evolution of correlation functions. This will be done in the following three steps: (a) constructing $k_0\to k_t$ for $t< T$ (for some $T<\infty$) (Sect. \ref{Sec4}); (b) proving that $k_t$ is the correlation function of a unique $\mu_t \in \mathcal{P}_{\rm exp}(\Gamma)$ (Sect. \ref{Sec5}); (c) continuing $k_t$ to all $t>0$ (Sect. \ref{Sec6}).
To make the first step, we derive from (\ref{1}) the corresponding evolution equation with the operator $L^\Delta$ obtained from (\ref{2}) by (\ref{17}), (\ref{18}) and the following rule \begin{equation}
\label{20} \mu(L F^\theta) = \int_{\Gamma_0} (L^\Delta k_\mu)(\eta) e(\theta; \eta) \lambda (d\eta). \end{equation} Then we prove that the equation $\dot{k}_t = L^\Delta k_t$ has a unique solution $k_t$, $t<T$, in a scale of Banach spaces such that $k_t^{(n)}$ satisfies (\ref{6c}) with $\varkappa$ dependent on $t$. The restriction $t<T$ arises from the proof as no direct semigroup method can be applied here. The proof just mentioned does not guarantee that the solution $k_t$ is a correlation function, and even its usual positivity is not certain. Step (b) is made by constructing suitable approximations $k_t^{\rm app}$ to the mentioned solution $k_t$. By this construction $k_t^{\rm app}$ satisfies condition (a) of Proposition \ref{1pn}. Then we prove that, for all $G\in B_{\rm bs}(\Gamma_0)$, $\langle \! \langle G, k_t^{\rm app} \rangle \! \rangle$ converges to $\langle \! \langle G, k_t \rangle \! \rangle$ as the approximations are eliminated. This yields that also $k_t$ satisfies condition (a) of Proposition \ref{1pn}. The remaining conditions (b) and (c) are checked directly. Then $k_t = k_{\mu_t}$ for a unique $\mu_t\in \mathcal{P}_{\rm exp}(\Gamma)$. This also implies the usual positivity of $k_t$ which is then used to obtain the continuation to all $t>0$.
\subsection{The operators} To make the first step mentioned above we calculate $L^\Delta$ according to (\ref{20}) and obtain it in the following form \begin{eqnarray}
\label{21} L^\Delta & = & A_1^\Delta + A_2^\Delta + B_1^\Delta + B_2^\Delta, \\[.2cm] \nonumber (A_1^\Delta k)(\eta) & = & - \Psi(\eta) k(\eta),\\[.2cm] \nonumber ( A_2^\Delta k)(\eta) & = & \int_{\mathds{R}^d} \sum_{y_1\in
\eta}\sum_{y_2\in \eta\setminus y_1} k(\eta \cup x \setminus \{y_1, y_2\}) b(x|y_1 , y_2) d x,\\[.2cm] \nonumber (B_1^\Delta k)(\eta) & = & - \int_{\mathds{R}^d} k(\eta \cup x) E^a(x, \eta) d x,\\[.2cm] \nonumber (B_2^\Delta k)(\eta) & = & 2 \int_{(\mathds{R}^d)^2} \sum_{y_1 \in
\eta}k(\eta \cup x \setminus y_1) b(x|y_1 , y_2) d y_2 d x, \end{eqnarray} where $\Psi$ is as in (\ref{22}). Since the correlation functions of measures from $\mathcal{P}_{\rm exp}(\Gamma)$ satisfy (\ref{6c}), we introduce \begin{equation}
\label{nk} \|k \|_{\alpha} = \esssup_{\eta \in \Gamma_0}e^{-\alpha
|\eta|} |k(\eta)|, \qquad \alpha \in \mathds{R}, \end{equation} and the corresponding $L^\infty$-like Banach spaces \begin{equation}
\label{23}
\mathcal{K}_\alpha = \{k:\Gamma_0 \to \mathds{R}: \|k\|_\alpha
<\infty\}. \end{equation}
For $\alpha' < \alpha$, we have that $\| k\|_{\alpha'} \ge \|
k\|_{\alpha}$. Therefore, $\mathcal{K}_{\alpha'} \hookrightarrow \mathcal{K}_{\alpha}$, where ``$\hookrightarrow$'' denotes continuous embedding. Thus, $\{\mathcal{K}_\alpha\}_{\alpha \in \mathds{R}}$ is an ascending scale of Banach spaces.
Our aim now is to define linear operators which act as in (\ref{21}), cf. (\ref{22}). First, for a given $\alpha \in \mathds{R}$, we define an unbounded operator $(L^\Delta_\alpha, \mathcal{D}_\alpha^\Delta)$, where \begin{equation}
\label{24}
\mathcal{D}_\alpha^\Delta = \{ k \in \mathcal{K}_\alpha: \Psi k \in \mathcal{K}_\alpha\}. \end{equation} Thus, $A_1^\Delta$ maps $\mathcal{D}_\alpha^\Delta$ to $\mathcal{K}_\alpha$. Furthermore, for each $k\in \mathcal{D}_\alpha^\Delta$, one finds $C>0$ such that
$(1+\Psi(\eta)) |k(\eta)| \leq e^{\alpha |\eta|} C$. We apply this fact and item (iv) of Assumption \ref{ass1} to get \begin{gather*} \left\vert(A^\Delta_2 k)(\eta) \right\vert \leq \frac{Ce^{-\alpha +
\alpha|\eta|}}{1 + \Psi(\eta)} \sum_{y_1\in \eta} \sum_{y_2 \in \eta\setminus y_1} \beta (y_1 - y_2) \leq C\beta^* e^{-\alpha +
\alpha|\eta|}, \end{gather*} which means that $A_2^\Delta:\mathcal{D}_\alpha^\Delta \to \mathcal{K}_\alpha$. In a similar way, we prove that $B_i^\Delta:\mathcal{D}_\alpha^\Delta \to \mathcal{K}_\alpha$, $i=1,2$. Thus, the expression in (\ref{21}) defines $(L^\Delta_\alpha, \mathcal{D}_\alpha^\Delta)$. By the inequality \begin{equation}
\label{25} n^p e^{-\sigma n} \le \left( \frac{p}{e\sigma}\right)^p , \qquad p\ge 1, \quad \sigma>0, \quad n\in \mathds{N}, \end{equation} one readily proves that \begin{equation}
\label{26}
\forall \alpha' < \alpha \qquad \mathcal{K}_{\alpha'} \subset \mathcal{D}^\Delta_\alpha. \end{equation} The next step is to introduce bounded operators $L_{\alpha
\alpha'}^{\Delta}: \mathcal{K}_{\alpha'} \to \mathcal{K}_\alpha$. To this end, by means of (\ref{25}) and the inequality $|k(\eta) | \le e^{\alpha |\eta|} \|k\|_{\alpha}$ (see (\ref{nk})), for $\alpha' < \alpha$ we obtain from (\ref{21}) the following estimate \begin{eqnarray}
\label{27}
\|A_1^\Delta k\|_\alpha & \leq & \esssup_{\eta \in \Gamma_0} e^{-\alpha
|\eta|}\Psi (\eta) |k(\eta)| \\[.2cm] \nonumber & \leq & \bigg{(} ( m^* + \langle b \rangle +
a^*)\esssup_{\eta\in \Gamma_0}\left[|\eta|^2 e^{-(\alpha-
\alpha')|\eta|} \right] \bigg{)} \|k\|_{\alpha'} \\[.2cm] \nonumber & = & \frac{4 ( m^* + \langle b \rangle + a^*) }{e^2(\alpha-\alpha')^2}
\|k\|_{\alpha'}. \end{eqnarray}
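Let us note where the constant in (\ref{27}) comes from: it is (\ref{25}) applied with $p=2$, $\sigma = \alpha - \alpha'$ and $n=|\eta|$, combined with $|k(\eta)|\leq e^{\alpha'|\eta|}\|k\|_{\alpha'}$, which yields
\begin{equation*}
\esssup_{\eta\in \Gamma_0}\left[|\eta|^2 e^{-(\alpha-\alpha')|\eta|}\right] \leq \left(\frac{2}{e(\alpha-\alpha')}\right)^2 = \frac{4}{e^2 (\alpha-\alpha')^2}.
\end{equation*}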
In a similar way, one estimates $\|A_2^\Delta k\|_\alpha$ and
$\|B_i^\Delta k\|_\alpha$, $i=1,2$, which then yields, cf. (\ref{21}), \begin{equation}
\label{28}
\|L^\Delta k\|_\alpha \leq \left(4\frac{ m^* + \langle b \rangle + a^* + \beta^* e^{-\alpha'} }{e^2(\alpha-\alpha')^2} + \frac{\langle a \rangle e^{\alpha'} + 2 \langle b \rangle}{e(\alpha-\alpha')}
\right)\|k\|_{\alpha'}. \end{equation} Then we define a bounded operator $L^\Delta_{\alpha \alpha'}: \mathcal{K}_{\alpha'} \to \mathcal{K}_\alpha$, the norm of which is estimated by means of (\ref{28}). In view of (\ref{26}), we have that each $k\in \mathcal{K}_{\alpha'}$ lies in $\mathcal{D}^\Delta_\alpha$, and \begin{equation}
\label{29} L^\Delta_{\alpha \alpha'} k = L^\Delta_{\alpha }k. \end{equation} In the sequel, we consider two types of operators with the action as in (\ref{21}): (a) unbounded operators $(L^\Delta_\alpha, \mathcal{D}(L^\Delta_\alpha))$, $\alpha\in \mathds{R}$, with the domains as in (\ref{24}); (b) the bounded operators $L^\Delta_{ \alpha \alpha'}$ just described. These operators are related to each other by (\ref{29}), i.e., $L^\Delta_{\alpha\alpha'}$ can be considered as the restriction of $L^\Delta_{\alpha }$ to $\mathcal{K}_{\alpha'}$.
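In the same spirit as in (\ref{27}), the terms of first order in $|\eta|$, which produce the second summand in (\ref{28}), are handled by (\ref{25}) with $p=1$ and $\sigma = \alpha - \alpha'$, that is, by
\begin{equation*}
\esssup_{\eta\in \Gamma_0}\left[|\eta| e^{-(\alpha-\alpha')|\eta|}\right] \leq \frac{1}{e(\alpha-\alpha')}.
\end{equation*}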
\subsection{The statements}
For $\alpha \in \mathds{R}$, we set, cf. (\ref{15}), (\ref{16}) and Proposition \ref{1pn}, \begin{equation}
\label{32} \mathcal{K}^\star_\alpha = \{ k \in \mathcal{K}_\alpha: k(\varnothing) =1 \ {\rm and} \ \langle \! \langle G, k \rangle \! \rangle \geq 0 \ {\rm for} \ {\rm all} \ G\in B^\star_{\rm bs} (\Gamma_0) \}. \end{equation} Note that \begin{equation}
\label{32a} \mathcal{K}^\star_\alpha \subset \mathcal{K}_{\alpha}^{+} :=\{ k\in \mathcal{K}_\alpha: k(\eta ) \geq 0\}. \end{equation} Since the spaces defined in (\ref{23}) form an ascending scale, we have that $k\in \mathcal{K}_{\alpha_0}$ lies in all $\mathcal{K}_\alpha$ with $\alpha>\alpha_0$. Recall that the model parameters satisfy Assumption \ref{ass1} which, in particular, imply the validity of Proposition \ref{2pn}. \begin{theorem}
\label{1tm} There exists $c\in \mathds{R}$, depending only on the model parameters, such that, for each $\mu_0\in \mathcal{P}_{\rm exp}(\Gamma)$, there exists a unique map $[0,+\infty) \ni t \mapsto k_t \in \mathcal{K}^{\star}_{\alpha_t}$ with $\alpha_t = \alpha_0 + ct$ and $\alpha_0> - \log \omega$ such that $k_0=k_{\mu_0}\in \mathcal{K}^\star_{\alpha_0}$, which has the following properties: \begin{itemize}
\item[(i)] For each $T>0$, the map $$ [0,T)\ni t \mapsto k_t \in \mathcal{K}_{\alpha_t} \subset
\mathcal{D}(L^\Delta_{\alpha_T}) \subset \mathcal{K}_{\alpha_T}$$ is continuous on $[0,T)$ and continuously differentiable on $(0,T)$ in $\mathcal{K}_{\alpha_T}$. \item[(ii)] For all $t\in (0,T)$ it satisfies \begin{equation*}
\dot{k}_t = L^\Delta_{\alpha_T} k_t. \end{equation*} \end{itemize} \end{theorem} \begin{corollary}
\label{Jaco} Let $k_t\in \mathcal{K}^\star_{\alpha_t}$, $t\geq 0$, be as in Theorem \ref{1tm}, and then $\mu_t\in\mathcal{P}_{\rm exp}(\Gamma)$ be the measure corresponding to this $k_t$ according to Proposition \ref{1pn}. Then the map $t \mapsto \mu_t$ is such that \begin{itemize} \item[1.] for each compact $\Lambda$ and $t\geq 0$, $\mu_t^{\Lambda}$ lies in the
domain $\mathcal{D}\subset \mathcal{M}$ defined in (\ref{22a});
\item[2.] for
each $\theta \in \varTheta$, the map $[0,+\infty) \ni t \mapsto \mu_t (F^\theta)$ is continuous and continuously differentiable on $(0,+\infty)$ and the following holds, cf. (\ref{Jan}), \begin{equation}
\label{Ja}
\frac{d}{dt} \mu_t (F^\theta) = (L^* \mu_t^{\Lambda_\theta}) (F^\theta) =
\langle\!\langle e(\theta, \cdot), L^\Delta_{ \alpha_T}
k_t\rangle\!\rangle, \end{equation} where the latter equality holds for all $T>t$, see (\ref{11}) and (\ref{14}). \end{itemize} \end{corollary} The proof of these statements is done in the remainder of the paper. Its main steps are: (a) constructing the evolution $k_{\mu_0}\to k_t$ for $t<T$, for some $T<\infty$; (b) proving that $k_t$ belongs to $\mathcal{K}^\star_\alpha$ with an appropriate $\alpha$, which by Proposition \ref{1pn} will allow us to associate $k_t$ with a unique $\mu_t\in \mathcal{P}_{\rm exp}(\Gamma)$; (c) proving that $k_t$ lies in $\mathcal{K}_{\alpha_t}$ on the mentioned time interval, which will be used to continue $k_t$ to all $t>0$.
\section{The solution on a bounded time interval}
\label{Sec4}
Here we make step (a) of the program formulated at the end of Sect. \ref{Sec3}.
\subsection{The statement}
Let us fix some $\alpha_1\in \mathds{R}$, take $\alpha_2 >\alpha_1$ and consider the following Cauchy problem in $\mathcal{K}_{\alpha_2}$ \begin{equation}
\label{33}
\dot{k}_t = L^\Delta_{\alpha_2} k_t , \qquad k_t|_{t=0} = k_0 \in \mathcal{K}_{\alpha_1}. \end{equation} By its solution on a time interval $[0, T)$ we mean a continuous (in $\mathcal{K}_{\alpha_2}$) map $[0, T)\ni t \mapsto k_t\in \mathcal{D}^\Delta_{\alpha_2}$, which is continuously differentiable on $(0, T)$ and satisfies both equalities in (\ref{33}). For $\alpha, \alpha'\in \mathds{R}$ such that $\alpha'< \alpha$ and for $\upsilon\geq 0$ as in Proposition \ref{2pn}, we set \begin{equation}
\label{34}
T(\alpha, \alpha') = \frac{\alpha - \alpha'}{2 \langle b \rangle + \upsilon + \langle a \rangle e^{\alpha}}. \end{equation} \begin{lemma}
\label{1lm} Let $\omega$ and $\upsilon$ be as in Proposition \ref{2pn}. Then for each $\alpha_1 > - \log \omega$ and an arbitrary $k_0 \in \mathcal{K}_{\alpha_1}$, the problem in (\ref{33}) has a unique solution $k_t\in \mathcal{D}^\Delta_{\alpha_2}$ on the time interval $[0, T(\alpha_2, \alpha_1))$. \end{lemma} In contrast to the case of finite configurations described in Theorem \ref{1ftm}, the construction of a $C_0$-semigroup that solves (\ref{33}) is rather hopeless. In view of this, the proof of Lemma \ref{1lm} will be done in the following steps: \begin{itemize}
\item[(i)] the operator $L^\Delta$ will be written in the form
$L^\Delta = A^\Delta_\upsilon + B^\Delta_\upsilon$, see
(\ref{45}), in such a way that
$A^\Delta_\upsilon:=A^\Delta_{1,\upsilon} + A^\Delta_{2}$ can be used to construct a certain (sun-dual) $C_0$-semigroup in $\mathcal{K}_{\alpha_2}$; \item[(ii)] this semigroup and $B^\Delta_{\upsilon} := B^\Delta_1 + B^\Delta_{2,\upsilon}$, see (\ref{46}), will be used to construct the family of operators $\{Q_{\alpha \alpha'} (t): t\in [0, T(\alpha, \alpha'))\}$, see (\ref{34}) and Lemma \ref{3lm}, such that $Q_{\alpha \alpha'} (t)\in \mathcal{L}(\mathcal{K}_{\alpha'}, \mathcal{K}_\alpha)$ and $k_t = Q_{\alpha_2 \alpha_1}(t)k_0$ is the solution in question. Here $\mathcal{L}(\mathcal{K}_{\alpha'}, \mathcal{K}_\alpha)$ stands for the Banach space of all bounded linear operators acting from $\mathcal{K}_{\alpha'}$ to $\mathcal{K}_{\alpha}$.
\end{itemize}
\subsection{The predual semigroup} Here we make the first step in constructing the semigroup mentioned in item (i) above. For $\alpha \in \mathbb{R}$, the space predual to $\mathcal{K}_\alpha$ is \begin{equation}
\label{35}
\mathcal{G}_\alpha := L^1(\Gamma_0, e^{\alpha|\cdot|}d \lambda), \end{equation} which for $\alpha>0$ coincides with $\mathcal{R}_\chi$ defined in (\ref{18c}) with $\chi(n) = e^{\alpha n}$. Here, however, we allow $\alpha$ to be any real number. The norm in $\mathcal{G}_\alpha$ is \begin{equation}
\label{36}
|G|_\alpha=\int_{\Gamma_0}|G(\eta)|e^{\alpha |\eta|}\lambda(d \eta).
\end{equation}
Clearly, $|G|_{\alpha'} \le |G|_{\alpha}$ whenever $\alpha'<\alpha$. Then $\mathcal{G}_{\alpha} \hookrightarrow \mathcal{G}_{\alpha'}$, and this embedding is also dense. In order to use Proposition \ref{2pn} we modify the operators introduced in
(\ref{21}) by adding and subtracting the term $\upsilon |\eta|$. This also leads to a corresponding modification of the predual operators. For an appropriate $G:\Gamma_0 \to \mathds{R}$, set, cf. (\ref{22}), \begin{eqnarray}
\label{37}
(A_{1,\upsilon}G)(\eta) & = & - \Psi_\upsilon (\eta) G(\eta) = - \left(\upsilon |\eta| + E^a(\eta) + M(\eta) + \langle b \rangle |\eta| \right) G(\eta), \\[.2cm] \nonumber (A_2 G)(\eta)& = &\sum_{x \in \eta} \int_{(\mathbb{R}^d)^2}G(\eta
\setminus x \cup y_1 \cup y_2)b(x|y_1,y_2)dy_1 dy_2,\\[.2cm] \nonumber \mathcal{D}_\alpha & = & \{ G\in \mathcal{G}_\alpha: \Psi_\upsilon G \in \mathcal{G}_\alpha\}. \end{eqnarray} By Proposition \ref{2pn} we have that \begin{equation}
\label{37J} \Psi_\upsilon (\eta) \geq \omega E^b(\eta). \end{equation} The operator $(A_{1,\upsilon}, \mathcal{D}_\alpha)$ is the generator of the semigroup $S_{0,\alpha} = \{S_{0,\alpha}(t)\}_{t\geq 0}$ of multiplication operators which act in $\mathcal{G}_\alpha$ as follows, cf. (\ref{L3}), \begin{equation}
\label{38} ( S_{0,\alpha}(t) G)(\eta) = \exp\left(- t \Psi_\upsilon (\eta) \right) G(\eta). \end{equation} Let $\mathcal{G}_\alpha^{+}$ be the cone of positive elements of $\mathcal{G}_\alpha$. The semigroup defined in (\ref{38}) is obviously \emph{sub-stochastic}. Set $\mathcal{D}_\alpha^{+} = \mathcal{D}_\alpha \cap \mathcal{G}_\alpha ^{+}$. By (\ref{18}), (\ref{36}) and (\ref{37}) we get \begin{eqnarray} \label{39}
|A_2G|_\alpha & = & \int_{\Gamma_0} e^{\alpha|\eta|}|(A_2G)(\eta)|\lambda(d \eta) \\[.2cm]
& \leq & \int_{\Gamma_0 } e^{\alpha|\eta|} \int_{(\mathbb{R}^d)^2}
\sum_{x\in \eta} |G(\eta \setminus x \cup y_1
\cup y_2)| b(x|y_1,y_2) dy_1dy_2 \lambda(d \eta)\nonumber\\[.2cm] & = & \int_{\Gamma_0} \int_{\mathbb{R}^d} \sum_{y_1 \in
\eta}\sum_{y_2 \in \eta \setminus y_1}e^{\alpha (|\eta|-1)}|G(\eta)|
b(x|y_1,y_2) dx \lambda(d \eta) \nonumber\\[.2cm] \nonumber
& = & e^{-\alpha} \int_{\Gamma_0} e^{\alpha|\eta|}E^b(\eta)
|G(\eta)|\lambda(d \eta) \leq (e^{-\alpha}/ \omega)
|A_{1,\upsilon}G|_{\alpha}. \end{eqnarray} The latter estimate follows by (\ref{37J}), see also (\ref{19}). \begin{lemma}
\label{2lm} Let $\upsilon$ and $\omega$ be as in Proposition \ref{2pn} and $A_{1,\upsilon}$, $A_2$ and $\mathcal{D}_\alpha$ be as in (\ref{37}). Then for each $\alpha> - \log \omega$, the operator $(A_{\upsilon} , \mathcal{D}_\alpha):=(A_{1,\upsilon} + A_2, \mathcal{D}_\alpha)$ is the generator of a sub-stochastic semigroup $S_\alpha=\{S_\alpha(t)\}_{t\geq 0}$ on $\mathcal{G}_\alpha$. \end{lemma} \begin{proof} We apply Proposition \ref{TV0pn} with $\mathcal{E}= \mathcal{G}_\alpha$, $\mathcal{D}=\mathcal{D}_\alpha$ and $A= A_{1,\upsilon}$. For some $r\in (0, 1)$, we set $B = r^{-1} A_2$, which is clearly positive. By (\ref{39}) $B$ is defined on $\mathcal{D}_\alpha$. To show that (\ref{40}) holds we take $G\in \mathcal{D}_\alpha^{+}$ and proceed as in (\ref{39}). That is, \begin{eqnarray*} & & \int_{\Gamma_0} \left( (A_{1,\upsilon} +r^{-1}
A_2)G\right)(\eta) e^{\alpha|\eta|}\lambda ( d \eta) = -
\int_{\Gamma_0} \Psi_\upsilon (\eta) G(\eta) e^{\alpha|\eta|} \lambda ( d\eta) \\[.2cm]\nonumber & &+ r^{-1} \int_{\Gamma_0} \sum_{x\in \eta} \int_{(\mathds{R}^d)^2} G(\eta
\setminus x\cup \{y_1, y_2\})b(x|y_1 , y_2) e^{\alpha|\eta|} d y_1 d y_2 \lambda (d\eta) \\[.2cm] \nonumber
& & \leq - \int_{\Gamma_0} \left(\upsilon |\eta| + E^a (\eta) -
r^{-1}e^{-\alpha}E^b (\eta) \right) G(\eta) e^{\alpha|\eta|} \lambda (d \eta). \end{eqnarray*} Now, for $\alpha > - \log \omega$, we pick $r\in (0,1)$ in such a way that $r^{-1}e^{-\alpha} \leq \omega$, which by Proposition \ref{2pn} implies that (\ref{40}) holds for this choice. Then the operator $A_{1,\upsilon} + r (r^{-1} A_2)$ satisfies Proposition \ref{TV0pn}, by which the proof follows. \end{proof}
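Let us add that the choice of $r$ in the proof above can be made explicit: since $\alpha > -\log \omega$, one has $e^{-\alpha}/\omega \in (0,1)$, and hence any $r \in [e^{-\alpha}/\omega, 1)$ satisfies the required inequality $r^{-1} e^{-\alpha} \leq \omega$.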
By the sub-stochasticity of $S_\alpha$ we have that $|S_\alpha(t) G|_\alpha \leq |G|_\alpha$ whenever $G\in \mathcal{G}_\alpha^{+}$. Let us now show that the same estimate holds for all $G\in \mathcal{G}_\alpha$. Each such $G$ can be uniquely decomposed as $G=G^{+} - G^{-}$ with $G^{\pm} \in \mathcal{G}_\alpha^{+}$. Moreover, by (\ref{36}) we have that \[
|G|_\alpha = \int_{\Gamma_0} e^{\alpha |\eta|} \left( G^{+}(\eta) +
G^{-}(\eta)\right) \lambda (d\eta)= |G^{+}|_\alpha + |G^{-}|_\alpha. \] Then \begin{eqnarray}
\label{39J}
|S_\alpha (t) G|_\alpha & = & |S_\alpha (t) (G^{+}-G^{-})|_\alpha \leq
|S_\alpha (t) G^{+}|_\alpha + |S_\alpha (t) G^{-}|_\alpha \\[.2cm]
\nonumber & \leq & |G^{+}|_\alpha + | G^{-}|_\alpha = |G|_\alpha . \end{eqnarray}
\subsection{The sun-dual semigroup}
Let $S_\alpha(t)$ be an element of the semigroup as in Lemma \ref{2lm}. Then its adjoint $S^*_\alpha(t)$ is a bounded linear operator in $\mathcal{K}_\alpha$. Clearly, $\{S^*_\alpha(t)\}_{t\geq 0}$ is a semigroup. However, it is not strongly continuous and hence cannot be directly used to construct (classical) solutions of differential equations. This obstacle is usually circumvented as follows, see \cite{P}. Set, cf. (\ref{14}), \begin{equation*}
\mathcal{D}_\alpha^* = \{ k\in \mathcal{K}_\alpha: \exists \hat{k} \in\mathcal{K}_\alpha \ \forall G \in \mathcal{D}_\alpha \ \langle \! \langle A_\upsilon G, k\rangle \! \rangle = \langle \! \langle G, \hat{k}\rangle \! \rangle\}. \end{equation*} Then the operator $(A^*_{\upsilon},\mathcal{D}_\alpha^*)$ is adjoint to $(A_{\upsilon},\mathcal{D}_\alpha)$. It acts as follows \begin{eqnarray*}
(A^*_{\upsilon} k)(\eta) & = & - \Psi_\upsilon (\eta) k(\eta) \\[.2cm] \nonumber & + & \int_{\mathds{R}^d} \sum_{y_1 \in \eta}\sum_{y_2 \in \eta\setminus y_1
} k(\eta\cup x\setminus \{y_1,y_2\}) b(x|y_1 , y_2) d x. \end{eqnarray*} By direct inspection one obtains that $\mathcal{K}_{\alpha'} \subset \mathcal{D}_\alpha^*$ whenever $\alpha'< \alpha$. Let $\mathcal{Q}_\alpha$ be the closure of $\mathcal{D}_\alpha^*$ in $\mathcal{K}_\alpha$. Then we have \begin{equation}
\label{43}
\mathcal{K}_{\alpha'}\subset \mathcal{D}_\alpha^* \subset
\mathcal{Q}_\alpha \subsetneq \mathcal{K}_\alpha, \qquad
\alpha'<\alpha. \end{equation} Now we set \begin{equation*}
\mathcal{D}_\alpha^\odot= \{ k\in \mathcal{D}_\alpha^*: A_\upsilon^* k \in \mathcal{Q}_\alpha\}, \end{equation*} and denote by $A^\odot_\upsilon$ the restriction of $A_\upsilon^*$ to $\mathcal{D}_\alpha^\odot$. Then $(A^\odot_\upsilon, \mathcal{D}_\alpha^\odot)$ is the generator of a $C_0$-semigroup, which we denote by $S^\odot_\alpha=\{S^\odot_\alpha (t)\}_{t \geq 0}$. This is the semigroup which we have aimed to construct. It has the following property, see \cite[Lemma 10.1]{P}. \begin{proposition}
\label{Papn} For each $k\in \mathcal{Q}_\alpha$ and $t\geq 0$, it follows that
$\|S^\odot_\alpha (t) k\|_\alpha = \|S^*_\alpha(t)k\|_\alpha \leq
\|k\|_\alpha$. Moreover, for each $\alpha'< \alpha$ and $k\in \mathcal{K}_{\alpha'}$, the map $[0,+\infty)\ni t \mapsto S^\odot_\alpha (t) k\in \mathcal{Q}_\alpha$ is continuous. \end{proposition}
The estimate $\|S^*_\alpha(t)k\|_\alpha \leq \|k\|_\alpha$ is obtained by means of (\ref{39J}). The continuity follows by (\ref{43}) and the fact that $S^\odot_\alpha$ is a $C_0$-semigroup. \subsection{The resolving operators: proof of Lemma \ref{1lm}} Now we construct the family of operators $\{Q_{\alpha \alpha'}(t)\}$ such that the solution of (\ref{33}) is obtained in the form $k_t = Q_{\alpha_2 \alpha_1}(t) k_0$. This construction, in which we employ $S^\odot$, resembles the one used to get (\ref{L19}). We begin by rearranging the operators in (\ref{21}) as follows \begin{equation}
\label{45} L^\Delta = A^\Delta + B^\Delta = A^\Delta_\upsilon + B^\Delta_\upsilon, \end{equation} where $A^\Delta_\upsilon = A^\Delta_{1,\upsilon} + A^\Delta_2$, see (\ref{37}), and \begin{eqnarray}
\label{46}
B^\Delta_\upsilon &= & B_1^\Delta+
B^\Delta_{2,\upsilon},\\[.2cm]\nonumber
(B^\Delta_{2,\upsilon}k)(\eta) &=& (B^\Delta_{2}k)(\eta) +
\upsilon |\eta|k(\eta)\\[.2cm]\nonumber & = & 2
\int_{(\mathds{R}^d)^2} \sum_{y_1\in \eta} b(x|y_1, y_2) k(\eta \cup
x \setminus y_1) d x d y_2 + \upsilon |\eta|k(\eta), \end{eqnarray} whereas $B_1^\Delta$ is as in (\ref{21}). By means of (\ref{46}), for $\alpha \in \mathds{R}$ and $\alpha' < \alpha$, we define $(B^\Delta_\upsilon)_{\alpha\alpha'}\in \mathcal{L}(\mathcal{K}_{\alpha'}, \mathcal{K}_{\alpha})$ the norm of which can be estimated similarly as in (\ref{27}), (\ref{28}), which yields \begin{equation}
\label{47}
\| (B^\Delta_\upsilon)_{\alpha \alpha'}\| \leq \frac{2 \langle b \rangle + \upsilon + \langle a \rangle e^{\alpha'}}{e(\alpha
-\alpha')}. \end{equation} Now let $\mathbf{B}$ be either $B^\Delta_\upsilon$ or $B^\Delta_{2,\upsilon}$, and $\mathbf{B}_{\alpha \alpha'}$ be the corresponding bounded operator. Then, cf. (\ref{47}), \begin{equation}
\label{48}
\|\mathbf{B}_{\alpha \alpha'}\| \leq \frac{\varpi(\alpha;\mathbf{B})}{e(\alpha-\alpha')}, \end{equation} where \begin{equation}
\label{49}
\varpi(\alpha;B^\Delta_{\upsilon}) = 2 \langle b \rangle + \upsilon + \langle a \rangle
e^{\alpha}, \quad \varpi(\alpha;B^\Delta_{2,\upsilon}) = 2 \langle b \rangle +
\upsilon. \end{equation} For some $\alpha_1, \alpha_2$ such that $\alpha_1 < \alpha_2$, we then set $\Sigma_{\alpha_2 \alpha_1}(t) =
S^{\odot}_{\alpha_2}(t)|_{\mathcal{K}_{\alpha_1}}$, $t>0$, where $S^{\odot}_{\alpha}$ is the sub-stochastic semigroup as in Proposition \ref{Papn}. Let also $\Sigma_{\alpha_2 \alpha_1}(0)$ be the embedding operator $\mathcal{K}_{\alpha_1}\to \mathcal{K}_{\alpha_2}$. Hence, see Proposition \ref{Papn}, the operator norm satisfies \begin{equation}
\label{50}
\| \Sigma_{\alpha_2 \alpha_1}(t)\|\leq 1, \qquad t\geq 0. \end{equation} We also have \begin{eqnarray}
\label{51} \Sigma_{\alpha_2 \alpha_1}(t) & = & \Sigma_{\alpha_2 \alpha_1}(0) S^{\odot}_{\alpha_1}(t), \\[.2cm] \nonumber \Sigma_{\alpha_3 \alpha_1}(t+s) & = & \Sigma_{\alpha_3 \alpha_2}(t)\Sigma_{\alpha_2 \alpha_1}(s), \quad \ \alpha_3 > \alpha_2, \end{eqnarray} holding for all $t, s\geq 0$. Moreover, \begin{equation*}
\frac{d}{dt} \Sigma_{\alpha_2 \alpha_1}(t) = A^\Delta_\upsilon \Sigma_{\alpha_2 \alpha_1}(t), \end{equation*} which follows by Lemma \ref{2lm} and the construction of the semigroup $S^{\odot}_\alpha$. Now we set \begin{equation}
\label{53}
T(\alpha_2 , \alpha_1;\mathbf{B}) = \frac{\alpha_2 -
\alpha_1}{\varpi(\alpha_2;\mathbf{B})}, \end{equation} see (\ref{48}), (\ref{49}), and also \begin{equation}
\label{54} \mathcal{A}(\mathbf{B})= \{ (\alpha_1 , \alpha_2, t): - \log \omega < \alpha_1 < \alpha_2 , \ t\in[0,T(\alpha_2, \alpha_1; \mathbf{B}))\}. \end{equation} Note that $T(\alpha_2 , \alpha_1;B^\Delta_\upsilon)$ coincides with $T(\alpha_2 , \alpha_1)$ defined in (\ref{34}). \begin{lemma}
\label{3lm} For both choices of $\mathbf{B}$, there exist the corresponding families $\lbrace Q_{\alpha_2 \alpha_1}(t;\mathbf{B}): (\alpha_1, \alpha_2,t) \in \mathcal{A}(\mathbf{B}) \rbrace$, each element of which has the following properties: \begin{itemize}
\item[{\it(a)}] $Q_{\alpha_2 \alpha_1}(t;\mathbf{B}) \in \mathcal{L}(\mathcal{K}_{\alpha_1},
\mathcal{K}_{\alpha_2})$; \item[{\it(b)}] the map $[0, T(\alpha_2, \alpha_1;\mathbf{B})) \ni t \mapsto Q_{\alpha_2 \alpha_1}(t;\mathbf{B})\in \mathcal{L}(\mathcal{K}_{\alpha_1}, \mathcal{K}_{\alpha_2})$ is continuous; \item[{\it(c)}] the operator norm of $Q_{\alpha_2 \alpha_1}(t;\mathbf{B}) \in \mathcal{L}(\mathcal{K}_{\alpha_1}, \mathcal{K}_{\alpha_2})$ satisfies
$$\|Q_{\alpha_2 \alpha_1}(t;\mathbf{B}) \| \le \frac{T(\alpha_2, \alpha_1;\mathbf{B})}{T(\alpha_2, \alpha_1;\mathbf{B})-t},$$ \item[{\it(d)}] for each $\alpha_3 \in (\alpha_1, \alpha_2)$ and $t <T(\alpha_3, \alpha_1;\mathbf{B})$, the following holds \begin{equation}
\label{54b} \frac{d}{dt} Q_{\alpha_2 \alpha_1}(t;\mathbf{B}) = ((A^{\Delta}_\upsilon)_{\alpha_2 \alpha_3} + \mathbf{B}_{\alpha_2 \alpha_3})Q_{\alpha_3 \alpha_1}(t;\mathbf{B}), \end{equation} which yields, in turn, that \begin{eqnarray}
\label{54a} \frac{d}{dt} Q_{\alpha_2 \alpha_1}(t;B^\Delta_\upsilon ) & = & L^\Delta_{\alpha_2}Q_{\alpha_2 \alpha_1}(t;B^\Delta_\upsilon ) \\[.2cm]\frac{d}{dt} Q_{\alpha_2 \alpha_1}(t;B^\Delta_{2,\upsilon} ) & = & ((A^{\Delta}_\upsilon)_{\alpha_2} + (B^\Delta_{2,\upsilon})_{\alpha_2} )Q_{\alpha_2 \alpha_1}(t;B^\Delta_{2,\upsilon} ), \nonumber \end{eqnarray} where $L^\Delta_{\alpha_2}$ is as in (\ref{33}), see also (\ref{45}), and $(B^\Delta_{2,\upsilon})_{\alpha_2} $ denotes $(B^\Delta_{2,\upsilon} , \mathcal{D}^\Delta_{\alpha_2})$, see (\ref{24}). \end{itemize} \end{lemma} \begin{proof} Fix some $T< T(\alpha_2, \alpha_1;\mathbf{B})$ and then take $\alpha \in (\alpha_1, \alpha_2]$ and positive $\delta < \alpha- \alpha_1$ such that $$T< T_\delta:= \frac{\alpha - \alpha_1 - \delta}{\varpi(\alpha_2;\mathbf{B})}.$$ Then take some $l\in \mathds{N}$ and divide $[\alpha_1, \alpha]$ into $2l+1$ subintervals in the following way: $\alpha_1=\alpha^0$, $\alpha = \alpha^{2l+1}$ and \begin{equation}
\label{55}
\alpha^{2s}=\alpha_1+\frac{s}{l+1}\delta + s \epsilon, \qquad \alpha^{2s+1}=\alpha_1+\frac{s+1}{l+1}\delta + s \epsilon, \end{equation} where $\epsilon = (\alpha - \alpha_1 - \delta)/l$ and $s=0, 1, \dots, l$. Now for $0\leq t_l \leq t_{l-1}\leq \cdots \leq t_1 \leq t_0 := t$, define \begin{eqnarray}
\label{56}
\Pi^{(l)}_{\alpha \alpha_1}(t, t_1, t_2, ... , t_l;\mathbf{B}) & = & \Sigma_{\alpha \alpha^{2l}}(t-t_1)\mathbf{B}_{\alpha^{2l}\alpha^{2l-1}}
\cdots \Sigma_{\alpha^{2s+1}
\alpha^{2s}}(t_{l-s}-t_{l-s+1})\mathbf{B}_{\alpha^{2s}\alpha^{2s-1}}
\qquad
\nonumber \\[.2cm] & \times & \Sigma_{\alpha^3 \alpha^{2}}(t_{l-1}-t_l)\mathbf{B}_{\alpha^{2}\alpha^{1}}\Sigma_{\alpha^1 \alpha_1}(t_l). \end{eqnarray} By the very construction we have that $ \Pi^{(l)}_{\alpha \alpha_1}(t, t_1, t_2, ... , t_l;\mathbf{B}) \in \mathcal{L}(\mathcal{K}_\alpha , \mathcal{K}_{\alpha_1})$, and the map $$(t,t_1,...,t_l) \mapsto \Pi^{(l)}_{\alpha \alpha_1}(t, t_1, t_2, ... , t_l;\mathbf{B})$$ is continuous (Proposition \ref{Papn} and the fact that each $\mathbf{B}_{\alpha^{2s}\alpha^{2s-1}}$ is bounded). Moreover, by (\ref{50}) and (\ref{48}) we have \begin{eqnarray}
\label{56a}
\| \Pi^{(l)}_{\alpha \alpha_1}(t, t_1, t_2, ... , t_l;\mathbf{B})\|
& \leq & \prod_{s=1}^l \|\mathbf{B}_{\alpha^{2s}\alpha^{2s-1}} \| \leq \prod_{s=1}^l \frac{\varpi(\alpha^{2s};\mathbf{B})}{e(\alpha^{2s}-\alpha^{2s-1})} \\[.2cm] \nonumber & \leq & \left(\frac{l\varpi(\alpha_2;\mathbf{B})}{e(\alpha - \alpha_1-\delta)} \right)^l \leq \left(\frac{l}{eT_\delta}\right)^l. \end{eqnarray} By (\ref{51}) we also have that \[ \Sigma_{\alpha^{2s+1} \alpha^{2s}}(t_{l-s}-t_{l-s+1})= \Sigma_{\alpha^{2s+1} \alpha^{2s}}(0)S^\odot_{\alpha^{2s}}(t_{l-s}-t_{l-s+1}). \] Taking the derivative of both sides of the latter we obtain \begin{eqnarray*} \frac{d}{dt}\Sigma_{\alpha^{2s+1} \alpha^{2s}}(t)=(A_\upsilon^{\Delta})_{\alpha^{2s+1} \alpha''} \Sigma_{\alpha'' \alpha^{2s}}(t) = (A_\upsilon^{\Delta})_{\alpha^{2s+1}} \Sigma_{\alpha^{2s+1} \alpha^{2s}}(t), \end{eqnarray*} holding for each $\alpha''\in (\alpha^{2s}, \alpha^{2s+1})$. Here $(A_\upsilon^{\Delta})_{\alpha}$ stands for the unbounded operator defined in (\ref{37}). Then we obtain from (\ref{56}) the following \begin{eqnarray}
\label{57} \frac{d}{dt} \Pi^{(l)}_{\alpha \alpha_1}(t, t_1, t_2, ... , t_l;\mathbf{B}) & = & (A^{\Delta}_\upsilon)_{\alpha \alpha'} \Pi^{(l)}_{\alpha' \alpha_1}(t, t_1, t_2, ... , t_l;\mathbf{B}) \\[.2cm]\nonumber & = & (A^{\Delta}_\upsilon)_{\alpha} \Pi^{(l)}_{\alpha \alpha_1}(t, t_1, t_2, ... , t_l;\mathbf{B}). \end{eqnarray} Now we set \begin{equation}
\label{58} Q_{\alpha \alpha_1}(t;\mathbf{B}) = \Sigma_{\alpha \alpha_1}(t)+ \sum_{l=1}^{\infty}\int_{0}^{t} \int_0^{t_1}\cdots\int_0^{t_{l-1}}\Pi^{(l)}_{\alpha \alpha_1}(t, t_1, t_2, \dots , t_l;\mathbf{B})dt_l \cdots dt_1. \end{equation} By (\ref{56a}) the series in (\ref{58}) converges uniformly on compact subsets of $[0,T_\delta)$, which proves claims (a) and (b). The estimate in (c) follows directly from (\ref{56a}). Finally, (\ref{54a}) follows by (\ref{57}), cf. (\ref{L20}).\end{proof}
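Let us spell out how (\ref{56a}) yields the estimate in claim (c). The $l$-fold simplex integration contributes a factor $t^l/l!$; hence, by (\ref{56a}) and the elementary bound $l^l \leq e^l\, l!$, the $l$-th summand in (\ref{58}) is bounded in norm by
\begin{equation*}
\left(\frac{l}{eT_\delta}\right)^l \frac{t^l}{l!} \leq \left(\frac{t}{T_\delta}\right)^l,
\end{equation*}
so that $\|Q_{\alpha \alpha_1}(t;\mathbf{B})\| \leq \sum_{l= 0}^\infty (t/T_\delta)^l = T_\delta/(T_\delta - t)$ for $t < T_\delta$; taking then $\alpha = \alpha_2$ and letting $\delta \downarrow 0$ gives the bound stated in (c).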
By solving (\ref{54b}) with the initial condition $Q_{\alpha_2
\alpha_1}(t+s;\mathbf{B})|_{t=0} = Q_{\alpha_2 \alpha_1}(s;\mathbf{B})$ we obtain the following `semigroup' property of the family $\lbrace Q_{\alpha_2 \alpha_1}(t;\mathbf{B}): (\alpha_1, \alpha_2,t) \in \mathcal{A}(\mathbf{B}) \rbrace$. \begin{corollary}
\label{JKco} For each $\alpha\in (\alpha_1, \alpha_2)$ and $t,s>0$ such that \[ s< T(\alpha, \alpha_1;\mathbf{B}), \quad t < T(\alpha_2, \alpha;\mathbf{B}), \quad t+s < T(\alpha_2, \alpha_1;\mathbf{B}), \] the following holds \[ Q_{\alpha_2 \alpha_1}(t+s;\mathbf{B}) = Q_{\alpha_2 \alpha}(t;\mathbf{B})Q_{\alpha\alpha_1}(s;\mathbf{B}). \] \end{corollary} \begin{remark}
\label{Jan10rk} Since $B^\Delta_{2,\upsilon}$ is positive, by (\ref{56}) we obtain that $Q_{\alpha_2 \alpha_1}(t;B^\Delta_{2,\upsilon}):\mathcal{K}_{\alpha_1}^+ \to\mathcal{K}_{\alpha_2}^+$. This positivity will be used to continue $k_t$ to all $t>0$. It is the only reason for us to use $Q_{\alpha_2 \alpha_1}(t;B^\Delta_{2,\upsilon})$ since $B^\Delta_{\upsilon}$ is not positive, and hence the positivity of $Q_{\alpha_2 \alpha_1}(t;B^\Delta_{\upsilon})$ cannot be secured. \end{remark}
\noindent {\it Proof of Lemma \ref{1lm}.} Set \begin{equation}
\label{Jan3} Q_{\alpha_2 \alpha_1}(t) = Q_{\alpha_2 \alpha_1}(t;B^\Delta_\upsilon), \qquad t < T(\alpha_2 , \alpha_1;B^\Delta_\upsilon) = T(\alpha_2 , \alpha_1). \end{equation} Then the solution in question is obtained by setting $k_t = Q_{\alpha_2 \alpha_1}(t) k_0$, which satisfies (\ref{33}) by (\ref{54a}) and (\ref{51}). Its uniqueness can be proved as in the proof of Lemma 4.8 in \cite{KK}.
{$\square$}
Before proceeding further, we prove a corollary of Lemma \ref{3lm} related to the predual evolution in $\mathcal{G}_\alpha$, see (\ref{35}). Let $S_\alpha$ be the semigroup as in Lemma \ref{2lm}. For $\alpha' >\alpha$, let $S_{\alpha \alpha'}(t)$ be the restriction of $S_\alpha(t)$ to $\mathcal{G}_{\alpha'} \hookrightarrow \mathcal{G}_\alpha$. Along with the operators defined in (\ref{37}) we consider the operators predual to $B^\Delta_\upsilon$, see (\ref{21}) and (\ref{46}). That is, they act \begin{eqnarray*}
(B_1 G) (\eta)& =& - \sum_{x\in \eta} G(\eta\setminus x) E^a (x, \eta\setminus x), \\[.2cm] \nonumber (B_{2,\upsilon} G) (\eta)& =& 2 \int_{(\mathds{R}^d)^2} \sum_{x\in
\eta} G(\eta\setminus x \cup y_1) b(x|y_1, y_2) d y_1 d y_2 +
\upsilon |\eta| G(\eta). \end{eqnarray*} By means of these expressions we can define bounded operators acting from $\mathcal{G}_{\alpha}$ to $\mathcal{G}_{\alpha'}$ for $\alpha'< \alpha$. It turns out that the estimate of the norm is exactly as in (\ref{47}), that is, \begin{equation*}
\|(B_\upsilon)_{\alpha' \alpha} \|= \frac{2 \langle b \rangle + \upsilon + \langle a \rangle e^{\alpha'}}{e(\alpha - \alpha')}. \end{equation*} Recall that $\mathcal{A}(B^\Delta_\upsilon)$ is defined in (\ref{54}). For $(\alpha_2, \alpha_1, t)\in \mathcal{A}(B^\Delta_\upsilon)$, let $T<T(\alpha_2, \alpha_1)$ be fixed. Pick $\alpha\in [\alpha_1,\alpha_2)$ and $\delta< \alpha_2-\alpha$ such that $T< T(\alpha_2, \alpha +\delta)$. Then, for some $l\in \mathds{N}$, set, cf. (\ref{55}), \begin{equation*}
\alpha^{2s} = \alpha_2 - \frac{s}{l+1}\delta - s \epsilon, \quad \alpha^{2s+1} = \alpha_2 - \frac{s+1}{l+1}\delta - s \epsilon,
\Omega^{(l)}_{\alpha\alpha_2} (t, t_1 , \dots , t_l) & = & S_{\alpha \alpha^{2l}} (t-t_1) (B_\upsilon)_{\alpha^{2l}\alpha^{2l-1}} S_{\alpha^{2l-1}\alpha^{2l-2}} (t_1-t_2) \times \cdots \\[.2cm] & \times & S_{\alpha^3\alpha^2}(t_{l-1}- t_l) (B_\upsilon)_{\alpha^{2}\alpha^{1}} S_{\alpha^1\alpha_2}(t_{l}) . \nonumber
\label{Du4} H_{\alpha \alpha_2} (t) = S_{\alpha\alpha_2}(t) + \sum_{l=1}^\infty \int_0^t \int_0^{t_1}\cdots \int_0^{t_{l-1}} \Omega^{(l)}_{\alpha\alpha_2} (t, t_1 , \dots , t_l) d t_l d t_{l-1} \cdots d t_1. \end{equation} Then exactly as in the case of Lemma \ref{3lm} we prove the following statement. \begin{proposition}
\label{Du1pn} Each member of the family of operators $\{H_{\alpha\alpha_2}(t): (\alpha_2 , \alpha, t)\in \mathcal{A}(B^\Delta_\upsilon)\}$ defined in (\ref{Du4}) has the following properties: \begin{itemize}
\item[(a)] $H_{\alpha\alpha_2}(t) \in \mathcal{L}(\mathcal{G}_{\alpha_2}, \mathcal{G}_{\alpha})$, the operator norm of which satisfies \[
\|H_{\alpha\alpha_2}(t)\| \leq \frac{T(\alpha_2,\alpha)}{T(\alpha_2,\alpha)-t}; \] \item[(b)] For each $k\in \mathcal{K}_\alpha$ and $G\in \mathcal{G}_{\alpha_2}$, it follows that \begin{equation}
\label{Du5} \langle \! \langle G, Q_{\alpha_2 \alpha}(t) k\rangle \!\rangle = \langle \! \langle H_{\alpha \alpha_2}(t) G, k\rangle \!\rangle. \end{equation} \end{itemize}
\end{proposition}
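The bound in claim (a) can be seen as follows: the $l$-th summand in (\ref{Du4}) is an integral over the simplex $\{0\leq t_l\leq \cdots \leq t_1\leq t\}$ of volume $t^l/l!$, while the integrand admits a product estimate of the type (\ref{56a}), i.e., it is bounded by $(l/eT)^l$ with $T=T(\alpha_2,\alpha)$. Hence
\begin{equation*}
\|H_{\alpha\alpha_2}(t)\| \ \leq \ \sum_{l=0}^{\infty} \frac{t^l}{l!} \left(\frac{l}{eT}\right)^l \ \leq \ \sum_{l=0}^{\infty} \left(\frac{t}{T}\right)^l \ = \ \frac{T}{T-t},
\end{equation*}
where the second inequality uses $l! \geq (l/e)^l$ and the last equality is the sum of the geometric series, valid for $t<T$.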
\section{The Identification Lemma}
\label{Sec5}
Our aim now is to prove that the solution obtained in Lemma \ref{1lm} has the property $k_t=k_{\mu_t}$ for a unique $\mu_t \in \mathcal{P}_{\rm exp}(\Gamma)$. We call this \emph{identification} since it allows us to identify the mentioned solutions as the correlation functions of sub-Poissonian states.
Recall that $\upsilon$ and $\omega$ appear in Proposition \ref{2pn} and $\mathcal{K}^\star_\alpha$ is defined in (\ref{32}). \begin{lemma}[Identification]
\label{ILlm} For each $\alpha_2>\alpha_1>-\log \omega$, it follows that $Q_{\alpha_2 \alpha_1}(t)=Q_{\alpha_2 \alpha_1}(t;B_\upsilon^\Delta): \mathcal{K}_{\alpha_1}^\star \to \mathcal{K}_{\alpha_2}^\star$ for all $t \in [0, \tau (\alpha_2, \alpha_1)]$ with $\tau(\alpha_2, \alpha_1) = T(\alpha_2, \alpha_1)/3$. \end{lemma} The proof consists of the following steps: \begin{itemize}
\item[(i)] constructing an approximation $k_t^{\rm app}$ of $k_t = Q_{\alpha_2 \alpha_1}(t)k_{0}$, $k_0\in \mathcal{K}_{\alpha_1}^\star$, such that
$\langle \! \langle G, k_t^{\rm app}\rangle \! \rangle \geq 0$ for
all $G\in B^\star_{\rm bs}(\Gamma_0)$;
\item[(ii)] proving that $\langle \! \langle G, k_t^{\rm app}\rangle \!
\rangle \to \langle \! \langle G, k_t\rangle \! \rangle$ as the
approximation is eliminated. \end{itemize} \begin{figure}
\caption{The evolution in spaces}
\end{figure}
Fig. 1 provides an illustration of the idea of how to realize step (i). The origin of the inequality in question is in (\ref{15}) and (\ref{16}). To relate $k_t$ with a positive measure one uses local approximations of $\mu_0$, the densities of which (not necessarily normalized) evolve $R^{\rm app}_0 \to R^{\rm app}_t$ in $L^1$-like spaces according to Theorem \ref{1tm}. These approximations are tailored in such a way that the corresponding correlation functions (\ref{13}) (that have the desired property by construction) also evolve $q_0^{\rm app}\to q_t^{\rm app}$ in $L^1$-like spaces $\mathcal{G}_\vartheta$. The technique developed in Sect. \ref{Sec4} allows for proving that $\langle \! \langle G, k_t^{\rm app}\rangle \! \rangle$ converges to $\langle \! \langle G, k_t\rangle \! \rangle$ only if $k_t^{\rm app}= Q_{\alpha \alpha_0}(t) q_0^{\rm app}$. That is, at this stage there is no connection between the evolutions $q_0^{\rm app}\to q_t^{\rm app}$ and $q_0^{\rm app}\to k_t^{\rm app}$ as they take place in (different) spaces, $\mathcal{G}_\vartheta$ and $\mathcal{K}_\alpha$, respectively. It turns out that these spaces have an intersection $\mathcal{U}_\alpha^\sigma$ constructed with the help of some objects dependent on a parameter $\sigma>0$. To employ this fact we use auxiliary models (indexed by $\sigma$), for which we prove that both evolutions $q_0^{\rm app}\to q_t^{\rm app}$ and $q_0^{\rm app}=k_0^{\rm app}\to k_t^{\rm app}$ take place in $\mathcal{U}_\alpha^\sigma$ and thus coincide. That is, $q_t^{\rm app}=k_t^{\rm app}$ for $t\leq \tau$ with some positive $\tau$, which yields the desired positivity of $k_t^{\rm app}$. Step (ii) then also includes taking the limit $\sigma \to 0^+$.
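In short, the scheme just described can be displayed as
\begin{equation*}
R^{\rm app}_0 \ \longrightarrow \ R^{\rm app}_t \ \ (\text{densities, Theorem \ref{1tm}}), \qquad
q^{\rm app}_0 \ \longrightarrow \ q^{\rm app}_t \ \ \text{in} \ \mathcal{G}_\vartheta, \qquad
k^{\rm app}_0 \ \longrightarrow \ k^{\rm app}_t \ \ \text{in} \ \mathcal{K}_\alpha,
\end{equation*}
with $q^{\rm app}_0 = k^{\rm app}_0$; for the auxiliary $\sigma$-models both of the latter evolutions take place in the intersection $\mathcal{U}_\alpha^\sigma$, whence $q^{\rm app}_t = k^{\rm app}_t$ for $t\leq \tau$, which yields the required positivity of $k^{\rm app}_t$.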
\subsection{Auxiliary evolutions} For $\sigma >0$ and $x\in \mathds{R}^d$, we set \begin{eqnarray}
\label{U1} & & \phi_\sigma (x) = \exp\left(- \sigma |x|^2 \right), \quad \langle \phi_\sigma \rangle = \int _{\mathds{R}^d}\phi_\sigma (x) d x.\\[.2cm] \nonumber
& & b_\sigma (x|y_1 , y_2) = b (x|y_1 , y_2) \phi_\sigma (y_1) \phi_\sigma (y_2). \end{eqnarray} Consider \begin{equation}
\label{U2} L^{\Delta, \sigma}=A^{\Delta, \sigma}+B^{\Delta, \sigma}=A_\upsilon^{\Delta, \sigma}+B_\upsilon^{\Delta, \sigma}, \end{equation} that is obtained from the corresponding operators in (\ref{21}) and (\ref{45}), (\ref{46}) by replacing $b$ with $b_\sigma$ given in (\ref{U1}). Since this substitution does not affect $\mathcal{D}^\Delta_\alpha$, see (\ref{24}), we will use the latter as the domain of the corresponding unbounded operators. Then we repeat the construction as in the proof of Lemma \ref{3lm} and obtain the family $\{Q^\sigma_{\alpha_2\alpha_1}(t): (\alpha_1 , \alpha_2 , t)\in \mathcal{A}(B^\Delta_\upsilon)\}$ corresponding to the choice $\mathbf{B}= B_\upsilon^{\Delta, \sigma}$. Along with the evolution $t \mapsto Q^\sigma_{\alpha_2\alpha_1}(t) k_0$ we will consider two more evolutions in $L^\infty$- and $L^1$-like spaces. The latter one will be positive in the sense of Proposition \ref{1pn} by the very construction. The auxiliary $L^\infty$-like space where we are going to construct $t\mapsto k_t^{\rm app}$ lies in the intersection of the just mentioned $L^1$-like space with the spaces $\mathcal{K}_\alpha$, see Fig. 1, and hence is also positive in the sense of Proposition \ref{1pn}. These arguments will allow us to realize item (i) of the program.
\subsubsection{$L^\infty$-like evolution} For $u: \Gamma_0 \to \mathbb{R}$, we define the norm \begin{equation}
\label{U3}
\| u \|_{\sigma, \alpha} = \esssup_{\eta \in
\Gamma_0}\frac{|u(\eta)|\exp(-\alpha|\eta|)}{e(\phi_\sigma; \eta)}, \end{equation} where $$e(\phi_\sigma; \eta)= \prod_{x\in \eta}\phi_\sigma
(x)=\exp\left( -\sigma\sum_{x\in \eta} |x|^2\right),$$ cf. (\ref{11}). Then we consider the Banach space $\mathcal{U}_{\sigma,
\alpha} = \{u: \Gamma_0 \to \mathbb{R}: \| u \|_{\sigma, \alpha} <\infty\}$. Clearly, \begin{equation}
\label{Jan4} \mathcal{U}_{\sigma, \alpha}\hookrightarrow \mathcal{K}_\alpha, \qquad \alpha \in \mathds{R}. \end{equation} The space predual to $\mathcal{U}_{\sigma, \alpha}$ is the $L^1$-space equipped with the norm, cf. (\ref{35}), (\ref{36}), \begin{equation}
\label{U4}
|G|_{\sigma, \alpha}=\int_{\Gamma_0} |G(\eta)|\exp (\alpha |\eta|)e(\phi_\sigma;\eta)\lambda(d
\eta). \end{equation} In this space, we define $A^\sigma_{1,\upsilon}$ which acts exactly as in (\ref{37}), and $A^\sigma_{2}$ which acts as in (\ref{37}) with $b$ replaced by $b_\sigma$. Their domain is the same $\mathcal{D}_\alpha$. Then, like in (\ref{39}), by means of (\ref{18}) and (\ref{U4}) we obtain \begin{eqnarray*}
|A_2^\sigma G|_{\sigma,\alpha} & = & \int_{\Gamma_0} \left(\sum_{x \in \eta} \int_{(\mathds{R}^d)^2}
|G(\eta \setminus x \cup \{y_1, y_2\})|b_\sigma(x|y_1,y_2)dy_1 dy_2 \right)\\[.2cm] & \times &\exp (\alpha |\eta|)e(\phi_\sigma;\eta)\lambda(d \eta) \nonumber\\[.2cm] & = & e^\alpha \int_{\Gamma_0} \left( \int_{(\mathds{R}^d)^3}
|G(\eta \cup \{y_1, y_2\})| b_\sigma(x|y_1,y_2) \phi_\sigma (x) d x d y_1 d y_2 \right)\nonumber\\[.2cm] & \times & \exp (\alpha |\eta|)e(\phi_\sigma;\eta) \lambda (d\eta) \nonumber\\[.2cm] & \leq & e^\alpha \int_{\Gamma_0} \left( \int_{(\mathds{R}^d)^2}
|G(\eta \cup \{y_1, y_2\})| \beta(y_2-y_1) e(\phi_\sigma;\eta\cup\{y_1,y_2\}) d y_1 d y_2 \right) \nonumber\\[.2cm]
& \times & \exp (\alpha |\eta|) \lambda (d\eta) \nonumber\\[.2cm]
& = & e^{-\alpha}\int_{\Gamma_0} E^b(\eta) |G(\eta)| e^{\alpha
|\eta|} e(\phi_\sigma;\eta) \lambda(d
\eta)\nonumber\\[.2cm] \nonumber & \leq & (e^{-\alpha}/\omega) \int_{\Gamma_0} e^{\alpha |\eta|} \Psi_\upsilon (\eta)|G(\eta)| e(\phi_\sigma;\eta) \lambda(d \eta)\nonumber\\[.2cm] \nonumber & = & (e^{-\alpha}/\omega)
|A_{1,\upsilon}^\sigma G|_{\sigma, \alpha}. \end{eqnarray*} This allows us to prove the following analog of Lemma \ref{2lm}. \begin{proposition}
\label{U1pn} Let $\upsilon$ and $\omega$ be as in Proposition \ref{2pn} and $A_{1,\upsilon}^\sigma$, $A_2^\sigma$ and $\mathcal{D}_\alpha$ be as just described. Then for each $\alpha> - \log \omega$, the operator $(A_{\upsilon}^\sigma , \mathcal{D}_\alpha):=(A^\sigma_{1,\upsilon} + A^\sigma_2, \mathcal{D}_\alpha)$ is the generator of a sub-stochastic semigroup $S_{\sigma,\alpha}=\{S_{\sigma,\alpha}(t)\}_{t\geq 0}$ on $\mathcal{G}_{\sigma,\alpha}$. \end{proposition} Let $S_{\sigma,\alpha}^{\odot}$ be the sun-dual semigroup, the definition of which is entirely analogous to that of $S_{\alpha}^{\odot}$, see Proposition \ref{Papn}. Then, for $\alpha'< \alpha$, we define $\Sigma_{\alpha \alpha'}^\sigma (t)=
S^{\odot}_{\sigma, \alpha}(t)|_{\mathcal{U}_{\sigma,\alpha'}}$. As in Proposition \ref{Papn} we then get that the map \begin{equation*}
[0, +\infty) \ni t \mapsto \Sigma_{\alpha \alpha'}^\sigma (t) \in \mathcal{L}(\mathcal{U}_{\sigma , \alpha'}, \mathcal{U}_{\sigma , \alpha}) \end{equation*} is continuous and \begin{equation*}
\|\Sigma_{\alpha \alpha'}^\sigma (t)\| \leq 1, \qquad {\rm for} \ {\rm all} \ t\geq 0. \end{equation*} The operators $B^{\Delta,\sigma}_{\upsilon}= B^{\Delta,\sigma}_{1}+ B^{\Delta,\sigma}_{2,\upsilon}$ act as in (\ref{46}) with $b$ replaced by $b_\sigma$. Then we define the corresponding bounded operators and obtain, cf. (\ref{47}), \begin{equation*}
\|(B^{\Delta,\sigma}_{\upsilon})_{\alpha \alpha'} \| \leq \frac{2 \langle b \rangle + \upsilon + \langle a \rangle e^{\alpha'}}{e(\alpha
-\alpha')}. \end{equation*} Thereafter, we take $\delta>0$ as in Lemma \ref{3lm} and the division as in (\ref{55}), and then define \begin{eqnarray*} \Pi_{\alpha \alpha'}^{l,\sigma}(t, t_1, t_2, ... , t_l) & = & \Sigma_{\alpha \alpha^{2l}}^{\sigma}(t-t_1)(B_\upsilon^{\Delta, \sigma})_{\alpha^{2l}\alpha^{2l-1}}\cdots \Sigma_{\alpha^{2s+1} \alpha^{2s}}^{\sigma}(t_{l-s}-t_{l-s+1})\\[.2cm] & \times &(B_\upsilon^{\Delta, \sigma})_{\alpha^{2s}\alpha^{2s-1}}\cdots \Sigma_{\alpha^3 \alpha^{2}}^{\sigma}(t_{l-1}-t_l)(B_\upsilon^{\Delta, \sigma})_{\alpha^{2}\alpha^{1}}\Sigma_{\alpha^1 \alpha'}^{\sigma}(t_l). \end{eqnarray*} As in the proof of Lemma \ref{3lm} we obtain the family $\{ U^\sigma_{\alpha_2 \alpha_1}(t):(\alpha_1 , \alpha_2, t)\in \mathcal{A}(B^{\Delta}_\upsilon)\}$, see (\ref{54}), with members defined by \[ U^\sigma_{\alpha_2 \alpha_1}(t) = \Sigma_{\alpha_2 \alpha_1}^\sigma (t) + \sum_{l=1}^\infty \int_{0}^{t} \int_0^{t_1}...\int_0^{t_{l-1}}\Pi^{l, \sigma}_{\alpha_2 \alpha_1}(t, t_1, t_2, ... , t_l)dt_l...dt_1, \] where the series converges for $t< T(\alpha_2, \alpha_1)$ defined in (\ref{34}), cf. (\ref{53}) and (\ref{Jan3}). For this family, the following holds, cf. (\ref{54a}), \begin{equation}
\label{U9}
\frac{d}{dt} U^\sigma_{\alpha_2 \alpha_1}(t) = L^{\Delta,
\sigma}_{\alpha_2,u} U^\sigma_{\alpha_2 \alpha_1}(t), \end{equation} where the action of $L^{\Delta,\sigma}_{\alpha_2,u}$ is as in (\ref{U2}) and the domain is \begin{equation}
\label{U9a} \mathcal{D}^{\Delta,\sigma}_{\alpha_2 , u} = \{ u\in \mathcal{U}_{\sigma, \alpha_2}: \Psi_\upsilon u \in \mathcal{U}_{\sigma, \alpha_2}\}\subset \mathcal{D}^\Delta_{\alpha_2}, \end{equation} where the latter inclusion follows by (\ref{Jan4}) and (\ref{24}). Then by (\ref{U9a}) we have that \begin{equation}
\label{Jan10} L^{\Delta,\sigma}_{\alpha, u} u = L^{\Delta,\sigma}_{\alpha} u , \qquad u\in \mathcal{D}^{\Delta,\sigma}_{\alpha ,
u} . \end{equation} Now by (\ref{U9}) we prove the following statement. \begin{proposition}
\label{U2pn} For each $\alpha_2>\alpha_1 > - \log \omega$, the problem
\begin{equation}
\label{U10}
\dot{u}_t = L^{\Delta,\sigma}_{\alpha_2,u} u_t, \qquad u_t|_{t=0}= u_0 \in \mathcal{U}_{\sigma,\alpha_1}
\end{equation} has a unique solution $u_t \in \mathcal{U}_{\sigma,\alpha_2}$ on the time interval $[0, T(\alpha_2, \alpha_1))$. This solution is given by $u_t = U^\sigma_{\alpha_2 \alpha_1}(t) u_0$. \end{proposition} \begin{corollary}
\label{U1co} Let $\alpha_2>\alpha_1 > - \log \omega$ be as in Proposition \ref{U2pn} and $Q^\sigma_{\alpha_2\alpha_1}(t)$ be as described at the beginning of this subsection. Then for each $t< T(\alpha_2 , \alpha_1)$ and $u_0 \in \mathcal{U}_{\sigma,\alpha_1}\subset \mathcal{K}_{\alpha_1}$, it follows that \begin{equation}
\label{U11} U^\sigma_{\alpha_2 \alpha_1}(t) u_0 = Q^\sigma_{\alpha_2 \alpha_1}(t) u_0. \end{equation} \end{corollary} \begin{proof} By (\ref{Jan10}) we get that the solution of (\ref{U10}) is also the unique solution of the following ``$\sigma$-analog" of (\ref{33})
$$\dot{u}_t = L^{\Delta,\sigma}_{\alpha_2} u_t, \quad u_t|_{t=0}= u_0, $$ and hence is given by the right-hand side of (\ref{U11}). Then the equality in (\ref{U11}) follows by the uniqueness just mentioned. \end{proof}
\subsubsection{$L^1$-like evolution} Now we take $L^{\Delta,\sigma}$ as given in (\ref{U2}) and define the corresponding operator $L^{\Delta,\sigma}_\vartheta$ in $\mathcal{G}_\vartheta$, $\vartheta \in \mathds{R}$, introduced in (\ref{35}), (\ref{36}), with domain $\mathcal{D}_\vartheta$ given in (\ref{37}). By (\ref{U2}) and (\ref{21}) we have that $A^\Delta_1 : \mathcal{D}_\vartheta \to \mathcal{G}_\vartheta$. Next, for $q\in \mathcal{D}_\vartheta$, we have \begin{eqnarray}
\label{Jan11}
|A^{\Delta, \sigma}_2 q |_\vartheta & \leq & \int_{\Gamma_0}
e^{\vartheta|\eta|}\left( \int_{\mathds{R}^d} \sum_{y_1\in
\eta}\sum_{y_2\in \eta\setminus y_1 } |q(\eta\cup x\setminus\{y_1, y_2\}) | b_\sigma (x|y_1, y_2) d x \right) \lambda ( d \eta) \qquad \quad \\[.2cm] \nonumber &\leq & \int_{\Gamma_0}
e^{\vartheta|\eta|+ 2 \vartheta} \int_{\mathds{R}^d} |q(\eta\cup x)|
\left( \int_{(\mathds{R}^d)^2} b(x|y_1 , y_2) d y_1 d y_2 \right) d x \lambda ( d \eta) \\[.2cm] \nonumber &=& \langle b \rangle e^\vartheta \int_{\Gamma_0} |\eta| e^{\vartheta |\eta|} |q(\eta)| \lambda ( d \eta) \leq e^\vartheta \int_{\Gamma_0} \Psi(\eta)
e^{\vartheta |\eta|} |q(\eta)| \lambda ( d \eta), \end{eqnarray} see item (iii) of Assumption \ref{ass1} and (\ref{22}). Hence, $A^{\Delta,\sigma}_2 : \mathcal{D}_\vartheta \to \mathcal{G}_\vartheta$. Next, for the same $q$, we have \begin{eqnarray}
\label{Jan12}
|B^{\Delta}_1 q |_\vartheta & \leq & \int_{\Gamma_0}
e^{\vartheta|\eta|}\left( \int_{\mathds{R}^d} |q(\eta\cup x)| E^a(x, \eta) d x \right) \lambda ( d \eta) \\[.2cm] \nonumber & = &
e^{-\vartheta} \int_{\Gamma_0} e^{\vartheta |\eta|} E^a (\eta)
|q(\eta)| \lambda ( d \eta) \leq e^{-\vartheta} \int_{\Gamma_0}
\Psi(\eta) e^{\vartheta |\eta|} |q(\eta)| \lambda ( d \eta). \end{eqnarray} Hence, $B^\Delta_1 : \mathcal{D}_\vartheta \to \mathcal{G}_\vartheta$. Finally, \begin{eqnarray}
\label{Jan9}
|B^{\Delta,\sigma}_2 q |_\vartheta & \leq & 2 \int_{\Gamma_0}
e^{\vartheta|\eta|} \left( \int_{(\mathds{R}^d)^2} \sum_{y_1\in
\eta} |q(\eta \cup x\setminus y_1)| b_\sigma (x|y_1, y_2) d y_2 d x \right) \lambda (d\eta) \qquad \\[.2cm] \nonumber & \leq & 2
\int_{\Gamma_0} e^{\vartheta|\eta|+\vartheta} \left(
\int_{(\mathds{R}^d)^3}|q(\eta \cup x)| b (x|y_1, y_2) d x d y_1 d y_2\right) \lambda (d\eta) \qquad \\[.2cm] \nonumber & = & 2
\langle b \rangle \int_{\Gamma_0} e^{\vartheta|\eta|} |\eta|
|q(\eta)| \lambda ( d \eta) \leq \int_{\Gamma_0} \Psi(\eta)
e^{\vartheta |\eta|} |q(\eta)| \lambda ( d \eta). \end{eqnarray} Then by (\ref{Jan11}), (\ref{Jan12}) and (\ref{Jan9}) we conclude that, for an arbitrary $\vartheta \in \mathds{R}$, $L^{\Delta,\sigma}= A^{\Delta}_1 + A^{\Delta,\sigma}_2 + B^{\Delta}_1 + B^{\Delta,\sigma}_2$ maps $\mathcal{D}_\vartheta$ to $\mathcal{G}_\vartheta$ and hence can be used to define the corresponding unbounded operator $(L^{\Delta,\sigma}_\vartheta, \mathcal{D}_\vartheta)$. Let us then consider the corresponding Cauchy problem \begin{equation}
\label{Jan8}
\dot{q}_t = L^{\Delta,\sigma}_\vartheta q_t , \qquad q_t|_{t=0} =
q_0 \in \mathcal{D}_\vartheta. \end{equation} Recall that $\mathcal{G}_{\vartheta'} \subset \mathcal{D}_\vartheta$ for each $\vartheta'> \vartheta$. \begin{lemma}
\label{Janln} For a given $\vartheta >0$ and $\vartheta'> \vartheta$, assume that the problem in (\ref{Jan8}) with $q_0 \in \mathcal{G}_{\vartheta'}$
has a solution $q_t\in\mathcal{G}_\vartheta$ on a time interval
$[0,\tau)$. Then this solution is unique. \end{lemma} \begin{proof} Set \[
w_t (\eta) = (-1)^{|\eta|}q_t (\eta). \]
Then $|w_t|_\vartheta= |q_t|_\vartheta$ and $q_t$ solves (\ref{Jan8}) if and only if $w_t$ solves the following equation \begin{equation}
\label{Jan13}
\dot{w}_t = \left( A_1^{\Delta} - A_2^{\Delta,\sigma} -
B_1^{\Delta} + B_2^{\Delta,\sigma} \right) w_t. \end{equation} By Proposition \ref{TV0pn} we prove that $(A_1^{\Delta} - B_1^{\Delta}, \mathcal{D}_\vartheta)$ generates a sub-stochastic semigroup on $\mathcal{G}_\vartheta$. Indeed, $(A_1^{\Delta} , \mathcal{D}_\vartheta)$ generates a sub-stochastic semigroup defined in (\ref{38}) with $\upsilon =0$, and $- B_1^{\Delta}$ is positive and defined on $\mathcal{D}_\vartheta$, see (\ref{Jan12}). Also by (\ref{Jan12}), for $w\in \mathcal{G}^{+}_\vartheta$ and $r\in (0,1)$, we get \begin{eqnarray*}
& & \int_{\Gamma_0} e^{\vartheta|\eta|}\left( \left(A_1^{\Delta} - r^{-1}B_1^{\Delta}\right) w\right)(\eta) \lambda(d\eta) = -
\int_{\Gamma_0} e^{\vartheta|\eta|} \Psi(\eta) w(\eta) \lambda ( d \eta) \\[.2cm] \nonumber & & + r^{-1} \int_{\Gamma_0}
e^{\vartheta|\eta|}\left( \int_{\mathds{R}^d} w(\eta\cup x) E^a(x, \eta) d x \right) \lambda ( d \eta) \\[.2cm] \nonumber & & = -
\int_{\Gamma_0} e^{\vartheta|\eta|} \Psi(\eta) w(\eta) \lambda ( d
\eta) + r^{-1} e^{-\vartheta} \int_{\Gamma_0} e^{\vartheta |\eta|} E^a (\eta) w(\eta) \lambda ( d \eta) \\[.2cm] \nonumber & & \leq - \left(1 - r^{-1} e^{-\vartheta} \right) \int_{\Gamma_0} \Psi(\eta) e^{\vartheta
|\eta|} w(\eta) \lambda ( d \eta) \leq 0, \end{eqnarray*} where the latter inequality holds for $r\in (e^{-\vartheta}, 1)$. Therefore, $(A_1^{\Delta} - B_1^{\Delta}, \mathcal{D}_\vartheta)= (A_1^{\Delta} - r r^{-1} B_1^{\Delta}, \mathcal{D}_\vartheta)$ generates a sub-stochastic semigroup $V_\vartheta =\{V_\vartheta (t)\}_{t\geq 0}$ on $\mathcal{G}_\vartheta$. For each $\vartheta'' \in (0, \vartheta)$, we have that $\mathcal{G}_\vartheta \hookrightarrow \mathcal{G}_{\vartheta''}$. By the estimates in (\ref{Jan11}) and (\ref{Jan9}), similarly as in (\ref{47}) we obtain that \begin{eqnarray*}
|A^{\Delta, \sigma}_2 w|_{\vartheta''} &\leq & \frac{\langle b
\rangle}{e(\vartheta - \vartheta'')} |w|_\vartheta, \\[.2cm]
|B^{\Delta, \sigma}_2 w|_{\vartheta''} & \leq & \frac{2\langle b
\rangle}{e(\vartheta - \vartheta'')} |w|_\vartheta, \end{eqnarray*} which we then use to define a bounded operator $C^{\Delta, \sigma}_{\vartheta''\vartheta} :\mathcal{G}_\vartheta \to \mathcal{G}_{\vartheta''}$. It acts as $- A^{\Delta,\sigma}_2 + B^{\Delta,\sigma}_2$ and its norm satisfies \begin{equation}
\label{Jan15}
\|C^{\Delta, \sigma}_{\vartheta''\vartheta}\| \leq \frac{3\langle b \rangle}{e(\vartheta - \vartheta'')}. \end{equation} Assume now that (\ref{Jan13}) has two solutions corresponding to the same initial condition $w_0$. Let $v_t$ be their difference. Then it solves (\ref{Jan13}) with the zero initial condition and hence satisfies \begin{equation}
\label{Jan16}
v_t = \int_0^t V_{\vartheta''} (t - s) C^{\Delta,
\sigma}_{\vartheta'' \vartheta}
v_s d s \end{equation} where $v_t$ in the left-hand side is considered as an element of $\mathcal{G}_{\vartheta''}$ and $t>0$ will be chosen later. Now for a given $n\in \mathds{N}$, we set $\epsilon = (\vartheta - \vartheta'')/n$ and $\vartheta^l = \vartheta - l \epsilon$, $l=0, \dots , n$. Next, we iterate (\ref{Jan16}) $n$ times and get \begin{eqnarray*} v_t & = & \int_0^t \int_0^{t_1} \cdots \int_{0}^{t_{n-1}} V_{\vartheta''} (t-t_1) C^{\Delta,
\sigma}_{\vartheta'' \vartheta^{n-1}} V_{\vartheta^{n-1}} (t_1-t_2) C^{\Delta,
\sigma}_{\vartheta^{n-1} \vartheta^{n-2}} \times \cdots \times
\\[.2cm] & \times & V_{\vartheta^{1}} (t_{n-1}-t_n) C^{\Delta,
\sigma}_{\vartheta^{1} \vartheta} v_{t_n} d t_n \cdots d t_1. \end{eqnarray*} Then we take into account that $V_\vartheta$ is sub-stochastic, $C^{\Delta,
\sigma}_{\vartheta^l \vartheta^{l-1}}$ are positive and satisfy (\ref{Jan15}), and thus obtain from the latter that $v_t$ satisfies \[
|v_t|_{\vartheta''} \leq \frac{1}{n!} \left(\frac{n}{e} \right)^n \left(\frac{3 t \langle b \rangle}{\vartheta - \vartheta''}\right)^n
\sup_{s\in [0,t]}|v_s|_{\vartheta}. \] Since $n!\geq (n/e)^n$, the first two factors on the right-hand side do not exceed $1$; hence, as $n$ is an arbitrary positive integer, it follows that $v_t =0$ for all $t< (\vartheta - \vartheta'')/ 3\langle b \rangle$. To prove that $v_t=0$ for all $t$ of interest one has to repeat the above procedure an appropriate number of times. \end{proof} Let us now take $u\in \mathcal{U}_{\sigma, \alpha}$ with some $\alpha \in \mathds{R}$, for which by (\ref{U3}) we have \[
|u(\eta)|\leq \|u\|_{\sigma,\alpha} e^{\alpha |\eta|} e(\phi_\sigma; \eta). \] Then the norm of this $u$ in $\mathcal{G}_\vartheta$ can be estimated as follows, see (\ref{U1}), \begin{equation} \label{x}
|u|_\vartheta\leq \|u\|_{\sigma,\alpha} \int_{\Gamma_0} \exp\left( (\alpha + \vartheta) |\eta|\right) e(\phi_\sigma; \eta)\lambda (d\eta) = \|u\|_{\sigma,\alpha} \exp\left( e^{\alpha + \vartheta} \langle \phi_\sigma \rangle\right). \end{equation} This means that $\mathcal{U}_{\sigma, \alpha}\hookrightarrow \mathcal{G}_{\vartheta}$ for each pair of real $\alpha$ and $\vartheta$. Moreover, for the operators discussed above this implies, cf. (\ref{Jan10}), \begin{equation}
\label{U12}
L^{\Delta,\sigma}_{\alpha,u}u = L_{\vartheta}^{\Delta, \sigma}u, \qquad u\in\mathcal{D}^{\Delta,\sigma}_{\alpha,u}. \end{equation} \begin{corollary}
\label{Jan2co} Let $\alpha_1$ and $\alpha_2$ be as in Proposition \ref{U2pn}. Then, for each $q_0 \in \mathcal{U}_{\sigma,\alpha_1}$, the problem in (\ref{Jan8}) has a unique solution $q_t \in \mathcal{U}_{\sigma,\alpha_2}$, $t<T(\alpha_2, \alpha_1)$, which coincides with the unique solution of (\ref{U10}). \end{corollary} \begin{proof} By (\ref{U12}), the unique solution $u_t$ of (\ref{U10}) also solves (\ref{Jan8}), and it is the unique solution of the latter in view of Lemma \ref{Janln}. \end{proof}
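For later use, Corollaries \ref{U1co} and \ref{Jan2co} can be combined into the single chain of equalities
\begin{equation*}
q_t \ = \ U^\sigma_{\alpha_2 \alpha_1}(t) q_0 \ = \ Q^\sigma_{\alpha_2 \alpha_1}(t) q_0 \ \in \ \mathcal{U}_{\sigma,\alpha_2}, \qquad q_0 \in \mathcal{U}_{\sigma,\alpha_1}, \quad t< T(\alpha_2, \alpha_1),
\end{equation*}
that is, the solution of (\ref{Jan8}) and the evolution generated by $Q^\sigma_{\alpha_2 \alpha_1}(t)$ coincide on $\mathcal{U}_{\sigma,\alpha_1}$.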
\subsection{Local approximations}
Our aim now is to prove that, cf. Proposition \ref{1pn}, the following holds \begin{equation}
\label{Jan18} \langle \! \langle G, Q^\sigma_{\alpha_2 \alpha_1}(t) k_0 \rangle \! \rangle \geq 0, \qquad G\in B_{\rm bs}^\star (\Gamma_0), \end{equation} for suitable $t>0$. To this end, by Corollaries \ref{U1co} and \ref{Jan2co}, it is enough to prove (\ref{Jan18}) with $Q^\sigma_{\alpha_2 \alpha_1}(t) k_0$ replaced by $q_t$.
For $\mu_0\in \mathcal{P}_{\rm exp}(\Gamma)$ and a compact $\Lambda$, let $\mu^\Lambda_0\in \mathcal{P}(\Gamma_\Lambda)$ be the corresponding projection to $\Gamma_\Lambda$ defined in (\ref{Rel}). Let $R^\Lambda_0$ be its Radon-Nikodym derivative, see (\ref{12}). For $N\in \mathds{N}$ and $\eta\in \Gamma_0$, we then set \begin{equation}
\label{Jan19}
R^{\Lambda, N}_0 (\eta) = \left\{ \begin{array}{ll}
R^{\Lambda}_0(\eta) , \qquad &{\rm if} \ \ \eta\in \Gamma_\Lambda \
\ {\rm and} \ \ |\eta|\leq N;\\[.2cm]
0, \qquad &{\rm otherwise}. \end{array} \right. \end{equation} Until the end of this subsection, $\Lambda$ and $N$ are fixed. Having in mind (\ref{13}) we introduce \begin{equation}
\label{Jan20}
q_0^{\Lambda,N}( \eta) = \int_{\Gamma_0} R^{\Lambda,N}_0 (\eta\cup
\xi) \lambda ( d \xi), \qquad \eta \in \Gamma_{0}. \end{equation} For $G\in B^{\star}_{\rm bs}(\Gamma_0)$, by (\ref{15}), (\ref{18}) and (\ref{Jan20}) we have \begin{equation}
\label{Jan21} \langle\!\langle G, q_0^{\Lambda,N} \rangle\!\rangle = \langle\!\langle K G, R_0^{\Lambda,N} \rangle\!\rangle \geq 0. \end{equation} By (\ref{Jan19}) it follows that $R^{\Lambda, N}_0 \in
\mathcal{R}^{+}$ and $\|R^{\Lambda, N}_0\|_\mathcal{R}\leq 1$. Moreover, for each $\kappa>0$, we have, see (\ref{8}), \begin{eqnarray}
\label{Jan22}
\|R^{\Lambda, N}_0\|_{\mathcal{R}_{\chi^\kappa}} =
\int_{\Gamma_\Lambda} e^{\kappa |\eta|} R^{\Lambda, N}_0 (\eta)
\lambda ( d\eta) \leq e^{\kappa N} \|R^{\Lambda,
N}_0\|_{\mathcal{R}} \leq e^{\kappa N}. \end{eqnarray} Let $S^\sigma_\mathcal{R}$ be the stochastic semigroup on $\mathcal{R}$ constructed in the proof of Theorem \ref{1ftm} with $b$ replaced by $b_\sigma$. Recall that $R_t = S_\mathcal{R}(t) R_0$ is the solution of (\ref{L9}). Set \begin{eqnarray}
\label{Jan23} R^{\Lambda, N}_t & = & S_\mathcal{R}^\sigma(t) R^{\Lambda, N}_0, \qquad t>0, \\[.2cm] q^{\Lambda, N}_t (\eta) & = & \int_{\Gamma_0} R^{\Lambda, N}_t (\eta\cup \xi) \lambda ( d \xi), \quad \eta \in \Gamma_0. \nonumber \end{eqnarray} \begin{proposition}
\label{JJ1pn} For each $\vartheta\in \mathds{R}$ and $t \in [0, \tau_\vartheta)$, $\tau_\vartheta:= [ e \langle b \rangle ( 1+ e^\vartheta)]^{-1}$, it follows that $q^{\Lambda, N}_t \in \mathcal{G}_\vartheta^{+}$. Moreover, \begin{equation}
\label{Jan24} \langle\!\langle G, q_t^{\Lambda,N} \rangle\!\rangle \geq 0 \end{equation} holding for each $G\in B^{\star}_{\rm bs}(\Gamma_0)$ and all $t>0$. \end{proposition} \begin{proof} Since $S_\mathcal{R}^\sigma$ is stochastic and $R_0^{\Lambda,N}$ is as in (\ref{Jan19}), we have $R_t^{\Lambda,N}\in \mathcal{R}^{+}$ for all $t>0$. Hence, $q^{\Lambda, N}_t(\eta) \geq 0$ for all those $t>0$ for which the integral in the second line in (\ref{Jan23}) makes sense. By (\ref{22T}) we have that $T(\kappa, \kappa')$, as a function of $\kappa$, attains its maximum value $T_{\kappa'} = e^{-\kappa'}/ e \langle b \rangle$ at $\kappa = \kappa'+1$. By (\ref{Jan22}) we have that $R^{\Lambda, N}_0 \in \mathcal{R}_{\chi^\kappa}$ for any $\kappa>0$. Then, for each $\kappa>0$, by Proposition \ref{TVpn} it follows that $R^{\Lambda, N}_t \in \mathcal{R}_{\chi^\kappa}$ for $t< T_\kappa$. Taking all these facts into account we then get \begin{eqnarray}
\label{Jan25}
|q_t^{\Lambda,N}|_\vartheta & = & \int_{\Gamma_0} e^{\vartheta|\eta|}
q_t^{\Lambda,N} (\eta) \lambda ( d\eta)\\ & = & \nonumber \int_{\Gamma_0}
\int_{\Gamma_0} e^{\vartheta|\eta|} R^{\Lambda, N}_t (\eta\cup \xi) \lambda (d\eta)\lambda ( d \xi) \\ & = &
\int_{\Gamma_0} \left( 1 + e^\vartheta\right)^{|\eta|} R^{\Lambda, N}_t (\eta) \lambda (d\eta) = \| R^{\Lambda, N}_t\|_{\mathcal{R}_{\chi^\kappa}} \nonumber \end{eqnarray} with $\kappa = \log (1 +e^\vartheta)$. For these $\kappa$ and $\vartheta$, we have that $T_\kappa = \tau_\vartheta$. Then $q_t^{\Lambda,N}\in \mathcal{G}_\vartheta$ for $t< \tau_\vartheta$, which holds by (\ref{Jan25}). The existence of the integral in (\ref{Jan24}) follows by the equality \[ \langle \! \langle G, q^{\Lambda,N}_t \rangle \! \rangle = \langle \! \langle K G, R^{\Lambda,N}_t \rangle \! \rangle, \] (\ref{10}) and the fact that $R^{\Lambda,N}_t \in \mathcal{R}^{+}_{\chi_m}$ for all $t>0$ and $m\in \mathds{N}$, see claims (a) and (c) of Theorem \ref{1ftm}. The validity of the inequality in (\ref{Jan24}) is straightforward, cf. (\ref{Jan21}). \end{proof} \begin{corollary}
\label{JJ1co} For each $\alpha \in \mathds{R}$, it follows that $q_0^{\Lambda,N}\in \mathcal{U}_{\sigma,\alpha}^{+}$. \end{corollary} \begin{proof}
Set $I_N(\eta)=1$ whenever $|\eta|\leq N$ and $I_N(\eta)=0$ otherwise. By (\ref{Jan19}), (\ref{Jan20}) and (\ref{13}) we have that \begin{eqnarray*} q_0^{\Lambda,N}(\eta) & = & I_N (\eta) \mathds{1}_{\Gamma_\Lambda} (\eta) \int_{\Gamma_\Lambda}R_0^\Lambda(\eta \cup\xi)\lambda (d \xi) \\[.2cm] & = & k_0(\eta) I_N (\eta) \mathds{1}_{\Gamma_\Lambda} (\eta)\leq \varkappa^N I_N (\eta) \mathds{1}_{\Gamma_\Lambda} (\eta). \end{eqnarray*} The latter estimate follows by the fact that $k_0=k_{\mu_0}$ for some $\mu_0\in \mathcal{P}_{\rm exp}(\Gamma)$, and thus $k_0(\eta) \leq
\varkappa^{ |\eta|}$ for some $\varkappa>0$, see Definition \ref{0df} and (\ref{6c}). Then $q_0^{\Lambda,N}\in \mathcal{U}_{\sigma,\alpha}$ by (\ref{U3}). The stated positivity is immediate. \end{proof} By (\ref{x}) and Corollary \ref{JJ1co} we obtain that $q_0^{\Lambda,N}\in \mathcal{G}_{\vartheta}^{+}$ for each $\vartheta\in \mathds{R}$. Now we relate $q_t^{\Lambda,N}$ with solutions of (\ref{Jan8}). \begin{lemma}
\label{JJ1lm} For each $\vartheta \in \mathds{R}$, the map $[0, \tau_\vartheta)\ni t\mapsto q_t^{\Lambda,N} \in \mathcal{G}_\vartheta$ is continuous and continuously differentiable on $(0, \tau_\vartheta)$. Moreover, $q_t^{\Lambda,N} \in \mathcal{D}_\vartheta$, see (\ref{37}), and solves the problem in (\ref{Jan8}) on the time interval $[0,\tau_\vartheta)$ with $q_0^{\Lambda,N}$ as the initial condition. \end{lemma} \begin{proof} Fix an arbitrary $\vartheta \in \mathds{R}$. The stated continuity of $t\mapsto q_t^{\Lambda,N}$ follows by (\ref{Jan23}). Let us prove that $q_t^{\Lambda,N}$ is differentiable in $\mathcal{G}_\vartheta$ on $(0,\tau_\vartheta)$ and that the following holds \begin{equation}
\label{Mar1}
\dot{q}^{\Lambda,N}_t (\eta) = \int_{\Gamma_0} \dot{R}^{\Lambda,N}_t (\eta\cup
\xi)\lambda(d\xi). \end{equation}
For small enough $|\tau|$, we have \begin{eqnarray}
\label{Mar2} & & \frac{1}{\tau} \left(q^{\Lambda,N}_{t+\tau} (\eta)- q^{\Lambda,N}_t (\eta)\right) - \int_{\Gamma_0} \dot{R}^{\Lambda,N}_t (\eta\cup
\xi)\lambda(d\xi) \\[.2cm] & & \qquad = \int_{\Gamma_0} \left[\frac{1}{\tau} \left(R^{\Lambda,N}_{t+\tau} (\eta\cup\xi)- R^{\Lambda,N}_t (\eta\cup\xi)\right) - \dot{R}^{\Lambda,N}_t (\eta\cup
\xi) \right]\lambda(d\xi). \nonumber \end{eqnarray} Then by (\ref{18}) we get \begin{eqnarray*}
\left\vert {\rm LHS}(\ref{Mar2})\right\vert_\vartheta \leq
\int_{\Gamma_0} \left(1+e^\vartheta \right)^{|\eta|} \left\vert\frac{1}{\tau} \left(R^{\Lambda,N}_{t+\tau} (\eta)- R^{\Lambda,N}_t (\eta)\right) - \dot{R}^{\Lambda,N}_t (\eta) \right\vert\lambda(d\eta), \end{eqnarray*} that proves (\ref{Mar1}), cf. (\ref{Jan25}). The continuity of $t\mapsto \dot{q}^{\Lambda,N}_t$ follows by (\ref{Mar1}) and the fact that $R^{\Lambda,N}_t = S^\sigma_{\mathcal{R}} (t)R^{\Lambda,N}_0$, which also yields that \begin{equation}
\label{Jan27} \dot{q}_t^{\Lambda,N} (\eta) = \int_{\Gamma_0} \left(L^{\dagger,\sigma}_\vartheta R_t^{\Lambda,N} \right) (\eta\cup \xi) \lambda ( d\xi), \end{equation} where $L^{\dagger,\sigma}_\vartheta$ is the trace of $L^{\dagger,\sigma}$ (the generator of $S^\sigma_{\mathcal{R}}$) in $\mathcal{R}_{\chi^\kappa}$ with $\kappa = \log(1+e^\vartheta)$. By
(\ref{37}) it follows that $\Psi_\upsilon (\eta) \leq C_\varepsilon e^{\varepsilon |\eta|}$ holding for an arbitrary $\varepsilon >0$ and the corresponding $C_\varepsilon>0$. For each $t<T_\kappa = \tau_\vartheta$, one can pick $\kappa'>\kappa$ such that $R_t^{\Lambda, N}\in \mathcal{R}_{\chi^{\kappa'}}$. For these $t$ and $\kappa'$, we thus pick $\varepsilon>0$ such that $1+ e^{\vartheta + \varepsilon} = e^{\kappa'}$, and then obtain, cf. (\ref{Jan25}), \begin{eqnarray}
\label{Mar4}
|\Psi_\upsilon q_t^{\Lambda,N}|_\vartheta \leq C_\varepsilon
\|R^{\Lambda,N}_t\|_{\mathcal{R}_{\chi^{\kappa'}}}. \end{eqnarray} Hence, $q_t^{\Lambda,N}\in \mathcal{D}_\vartheta$ for this $t$. Let us now prove that $q_t^{\Lambda,N}$ solves (\ref{Jan8}). In view of (\ref{Jan27}), (\ref{L}) and (\ref{Mar4}), to this end it is enough to prove that \begin{eqnarray}
\label{Mar5} \left(L^\Delta q_t^{\Lambda,N}\right)(\eta) & = & - \int_{\Gamma_0} \Psi(\eta\cup\xi) R^{\Lambda,N}_t (\eta \cup \xi) \lambda (d\xi) \\[.2cm] \nonumber & + &\int_{\mathds{R}^d} \int_{\Gamma_0} \left( m(x)+ E^a(x, \eta \cup \xi)\right)R^{\Lambda,N}_t (\eta \cup \xi\cup x) \lambda (d\xi) dx \\[.2cm] \nonumber & + &\int_{\mathds{R}^d} \int_{\Gamma_0}\sum_{y_1\in \eta \cup\xi} \sum_{y_2\in \eta
\cup\xi\setminus y_1} b(x|y_1 , y_2) R^{\Lambda,N}_t (\eta \cup \xi\cup x\setminus\{y_1,y_2\} ) \lambda (d\xi)d x, \end{eqnarray} holding point-wise in $\eta\in\Gamma_0$. By (\ref{22}) and (\ref{19}) we get \begin{equation}
\label{Mar6}
\Psi (\eta \cup \xi) = \Psi (\eta)+ \Psi (\xi) + 2 \sum_{x\in
\eta}\sum_{y\in \xi} a(x-y). \end{equation} Let $I_1 (\eta)$ denote the first summand in the right-hand side of (\ref{Mar5}). By (\ref{18}) and (\ref{Mar6}) we then write it as follows \begin{eqnarray}
\label{Mar7} I_1 (\eta) & = & - \Psi(\eta)q_t^{\Lambda,N} (\eta) - 2 \int_{\mathds{R}^d} E^a (x, \eta) q_t^{\Lambda,N} (\eta\cup x) dx \\[.2cm]\nonumber & - & \int_{\Gamma_0}\Psi (\xi)R^{\Lambda,N}_t (\eta \cup \xi) \lambda (d\xi). \end{eqnarray} To calculate the latter summand in (\ref{Mar7}) we again use (\ref{22}) and (\ref{18}) to obtain the following: \begin{eqnarray}
\label{Mar8}
\int_{\Gamma_0}\left( \sum_{x\in \xi} m(x) \right) R^{\Lambda,N}_t (\eta \cup
\xi) \lambda (d \xi)& = & \int_{\Gamma_0} \int_{\mathds{R}^d} m(x) R^{\Lambda,N}_t (\eta \cup
\xi\cup x) \lambda (d \xi) dx \qquad \quad \\[.2cm] \nonumber & = & \int_{\mathds{R}^d}
m(x) q_t^{\Lambda,N} (\eta\cup x) dx. \end{eqnarray} \begin{eqnarray}
\label{Mar9}
& & \int_{\Gamma_0}\left( \sum_{x\in \xi}\sum_{y\in \xi\setminus x} a(x-y) \right) R^{\Lambda,N}_t (\eta \cup
\xi) \lambda (d \xi)\\[.2cm] \nonumber & & \quad = \int_{\Gamma_0} \int_{(\mathds{R}^d)^2} a(x-y) R^{\Lambda,N}_t (\eta \cup
\xi\cup\{ x,y\}) \lambda (d \xi) dx d y \\[.2cm] \nonumber & & \quad = \int_{(\mathds{R}^d)^2}
a(x-y) q_t^{\Lambda,N} (\eta\cup \{x,y\}) dx d y . \end{eqnarray} \begin{eqnarray}
\label{Mar10}
\int_{\Gamma_0}\left(\langle b \rangle \sum_{x\in \xi} 1 \right) R^{\Lambda,N}_t (\eta \cup
\xi) \lambda (d \xi) = \langle b \rangle \int_{\mathds{R}^d} q^{\Lambda,N}_t (\eta
\cup x) dx. \end{eqnarray} In a similar way, we obtain the second summand $I_2$ (resp. the third summand $I_3$) of the right-hand side of (\ref{Mar5}) as follows \begin{eqnarray}
\label{Mar11} I_2(\eta) & = & \int_{\mathds{R}^d} \left(m(x) + E^a (x, \eta) \right)q^{\Lambda,N}_t (\eta
\cup x) dx \\[.2cm] \nonumber & + & \int_{(\mathds{R}^d)^2} a (x-y) q^{\Lambda,N}_t (\eta
\cup\{ x,y\}) dx d y. \end{eqnarray} \begin{eqnarray}
\label{Mar12} I_3(\eta) & = & \int_{\mathds{R}^d} \sum_{y_1\in \eta} \sum_{y_2\in
\eta\setminus y_1} b(x|y_1, y_2)q^{\Lambda,N}_t (\eta
\cup x\setminus \{y_1,y_2\}) dx \\[.2cm] & + & \nonumber 2
\int_{(\mathds{R}^d)^2} \sum_{y_1\in \eta} b(x|y_1,y_2) q^{\Lambda,N}_t (\eta \cup x\setminus y_1) dx d y_2 \\[.2cm] \nonumber & + & \langle b \rangle \int_{\mathds{R}^d} q^{\Lambda,N}_t (\eta
\cup x) dx . \end{eqnarray} Now we plug (\ref{Mar8}), (\ref{Mar9}) and (\ref{Mar10}) into (\ref{Mar7}), and then use it together with (\ref{Mar11}) and (\ref{Mar12}) in the right-hand side of (\ref{Mar5}) to get its equality with the left-hand side, see (\ref{21}). This completes the proof. \end{proof} \begin{corollary}
\label{JJ2co} Let $\alpha_1> -\log \omega$ and $\alpha_2 >\alpha_1$ be chosen. Then $k^{\Lambda,N}_t = Q^\sigma_{\alpha_2\alpha_1} (t) q_0^{\Lambda,N}$ has the property \begin{equation}
\label{Jan32} \langle\! \langle G, k_t^{\Lambda,N} \rangle\! \rangle \geq 0, \end{equation} holding for all $G\in B_{\rm bs}^\star (\Gamma_0)$ and $t<T(\alpha_2,\alpha_1)$. \end{corollary} \begin{proof} The proof of (\ref{Jan32}) will be done by showing that $k^{\Lambda,N}_t= q^{\Lambda,N}_t$ for $t< T(\alpha_2,\alpha_1)$, and then by employing (\ref{Jan24}), which holds for all $t>0$.
By Corollary \ref{JJ1co} it follows that $q_0^{\Lambda,N}\in \mathcal{U}_{\sigma,\alpha_1}$, and hence $u_t = U^\sigma_{\alpha_2\alpha_1}(t)q_0^{\Lambda,N}$ is the unique solution of (\ref{U10}), see Proposition \ref{U2pn}. By Lemma \ref{JJ1lm} $q_t^{\Lambda,N}$ solves (\ref{Jan8}) on $[0,\tau_\vartheta)$, which by Corollary \ref{Jan2co} yields $u_t=q_t^{\Lambda,N}$ for $t< \min\{\tau_\vartheta; T(\alpha_2,\alpha_1)\}$. If $\tau_\vartheta < T(\alpha_2,\alpha_1)$, we can continue $q_t^{\Lambda,N}$ beyond $\tau_\vartheta$ by means of the following arguments. Since $u_t=q_t^{\Lambda,N}$ lies in $\mathcal{U}_{\sigma,\alpha_2}$ for all $t< \min\{\tau_\vartheta; T(\alpha_2,\alpha_1)\}$, by (\ref{x}) we get that $q_t^{\Lambda,N}$ lies in the initial space $\mathcal{G}_{\vartheta'}$ and hence can further be continued. Thus, $u_t=q_t^{\Lambda,N}$ for all $t< T(\alpha_2,\alpha_1)$. Now by (\ref{U11}) we get $q_t^{\Lambda,N}=u_t= k^{\Lambda,N}_t$, which completes the proof. \end{proof}
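The interchanges of sums and integrals in (\ref{Mar8})--(\ref{Mar10}) rest on the standard Minlos-type identity for the Lebesgue--Poisson measure, $\int_{\Gamma_0}\sum_{x\in\xi}f(x)\,k(\xi)\,\lambda(d\xi)=\int_{\mathds{R}^d}\int_{\Gamma_0}f(x)\,k(\xi\cup x)\,\lambda(d\xi)\,dx$. A minimal discrete sanity check of this identity, with a finite site set and counting measure playing the role of the intensity measure (the site set, the weights $f$ and the function $g$ below are hypothetical stand-ins chosen for illustration only):

```python
from itertools import chain, combinations

def subsets(xs):
    """All subsets (configurations) of the finite site set xs."""
    return chain.from_iterable(combinations(xs, n) for n in range(len(xs) + 1))

# Hypothetical test data: sites, single-site weights f, configuration function g.
X = (0, 1, 2, 3)
f = {0: 1.5, 1: -0.5, 2: 2.0, 3: 0.25}
g = lambda xi: sum(xi) + len(xi) ** 2 + 1.0  # depends only on the configuration

# Left-hand side: integrate the sum over points of the configuration.
lhs = sum(sum(f[x] for x in xi) * g(tuple(sorted(xi))) for xi in subsets(X))

# Right-hand side: pull one point out of the configuration and integrate it out.
rhs = sum(
    f[x] * g(tuple(sorted(set(xi) | {x})))
    for x in X
    for xi in subsets(tuple(y for y in X if y != x))
)

assert abs(lhs - rhs) < 1e-12
```

In the continuum setting the role of "subsets not containing $x$" is played by the fact that $\lambda$-almost every configuration $\xi$ does not contain the integrated point $x$.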
\subsection{Taking the limits}
We prove that (\ref{Jan32}) holds when the approximation is removed. Recall that $k_t^{\Lambda,N}$ in (\ref{Jan32}) depends on $\sigma>0$, $\Lambda$ and $N$. We first take the limits $\Lambda\to \mathds{R}^d$ and $N\to +\infty$. Below, by an exhausting sequence $\{\Lambda_n\}_{n\in \mathds{N}}$ we mean a sequence of compact $\Lambda_n$ such that: (a) $\Lambda_n\subset \Lambda_{n+1}$ for all $n$; (b) for each $x\in \mathds{R}^d$, there exists $n$ such that $x\in \Lambda_n$. \begin{proposition}
\label{JJ10pn} Let $\alpha_1>-\log \omega$, $\alpha_2>\alpha_1$ and $k_0\in \mathcal{K}^\star_{\alpha_1}$ be fixed. For these $\alpha_1$, $\alpha_2$ and $t< T(\alpha_2 , \alpha_1)$, let $k_t^{\Lambda,N}$ and $Q^\sigma_{\alpha_2\alpha_1}(t)$ be the same as in Corollary \ref{JJ2co} and (\ref{Jan18}), respectively. Then, for each $G\in B_{\rm bs}(\Gamma_0)$ and any $t<T(\alpha_2,\alpha_1)$, the following holds \[ \lim_{n\to +\infty} \lim_{l\to +\infty} \langle \!\langle G, k_t^{\Lambda_n, N_l}\rangle \!\rangle = \langle \!\langle G, Q^\sigma_{\alpha_2\alpha_1} (t) k_0\rangle \!\rangle, \] for arbitrary exhausting $\{\Lambda_n\}_{n\in \mathds{N}}$ and increasing $\{N_l\}_{l\in \mathds{N}}$ sequences of sets and positive integers, respectively. \end{proposition} The proof of this statement can be performed by literally repeating the proof of a similar statement given in the Appendix of \cite{Berns}.
Recall that, for $\alpha_2 > \alpha_1$, $T(\alpha_2,\alpha_1)$ was defined in (\ref{34}). For these $\alpha_2$ and $\alpha_1$, we set \begin{equation}
\label{Jan41} \alpha= \frac{1}{3}\alpha_2 + \frac{2}{3}\alpha_1, \quad \ \alpha'= \frac{2}{3}\alpha_2 + \frac{1}{3}\alpha_1. \end{equation} Clearly, \begin{equation}
\label{Jan40} \tau(\alpha_2, \alpha_1):= \frac{1}{3} T(\alpha_2, \alpha_1) < \min\{ T(\alpha_2 , \alpha'); T(\alpha , \alpha_1)\}. \end{equation} \begin{lemma}
\label{JJ10lm} Let $\alpha_1$, $\alpha_2$ and $k_0$ be as in Proposition \ref{JJ10pn}, and let $k_t$ be the solution of (\ref{33}). Then for each $G\in B_{\rm bs}(\Gamma_0)$ and $t\in [0,\tau(\alpha_2,\alpha_1)]$, the following holds \begin{equation}
\label{Jan42} \lim_{\sigma \to 0^+} \langle \!\langle G, Q^\sigma_{\alpha_2\alpha_1} (t) k_0\rangle \!\rangle = \langle \!\langle G, k_t\rangle \!\rangle. \end{equation} \end{lemma} \begin{proof} We recall that the solution of (\ref{33}) is $k_t=Q_{\alpha_2\alpha_1}(t)k_0$ with $Q_{\alpha_2\alpha_1}(t)$ given in (\ref{Jan3}) and $t\leq T(\alpha_2, \alpha_1)$, see Lemma \ref{1lm}. For $\alpha$ and $\alpha'$ as in (\ref{Jan41}) and $t\leq \tau(\alpha_2, \alpha_1)$, write \begin{eqnarray}
\label{Jan43} Q_{\alpha_2\alpha_1}(t) k_0 & = & Q^\sigma_{\alpha_2\alpha_1}(t) k_0 + \Upsilon_{1}(t,\sigma) + \Upsilon_{2}(t,\sigma), \\[.2cm] \nonumber \Upsilon_{1}(t,\sigma)& = & \int_0^t Q_{\alpha_2\alpha'}(t-s) \left[(A^\Delta_2)_{\alpha'\alpha} - (A^{\Delta,\sigma}_2)_{\alpha'\alpha} \right]Q^\sigma_{\alpha\alpha_1}(s) k_0 d s, \\[.2cm] \nonumber \Upsilon_{2}(t,\sigma)& = & \int_0^t Q_{\alpha_2\alpha'}(t-s) \left[(B^\Delta_2)_{\alpha'\alpha} - (B^{\Delta,\sigma}_2)_{\alpha'\alpha} \right]Q^\sigma_{\alpha\alpha_1}(s) k_0 d s. \end{eqnarray} The validity of (\ref{Jan43}) is verified by taking the $t$-derivatives of both sides and then using, e.g., (\ref{54b}). Note that the norms of the operators $(A^\Delta_2)_{\alpha'\alpha}$, $(B^\Delta_2)_{\alpha'\alpha}$, $(A^{\Delta,\sigma}_2)_{\alpha'\alpha}$, $(B^{\Delta,\sigma}_2)_{\alpha'\alpha}$ can be estimated as in (\ref{48}). For $G$ as in (\ref{Jan42}), we then have \begin{equation}
\label{Jan44} \langle \!\langle G, Q_{\alpha_2\alpha_1}(t) k_0\rangle \!\rangle - \langle \!\langle G, Q^\sigma_{\alpha_2\alpha_1}(t) k_0\rangle \!\rangle = \langle \!\langle G, \Upsilon_1(t,\sigma) \rangle \!\rangle + \langle \!\langle G, \Upsilon_2(t,\sigma) \rangle \!\rangle. \end{equation} By (\ref{Du5}) and (\ref{Jan43}) it follows that \begin{eqnarray*}
\langle \!\langle G, \Upsilon_1(t,\sigma) \rangle \!\rangle & = & \int_0^t \langle \!\langle G, Q_{\alpha_2\alpha'}(t-s) \left[(A^\Delta_2)_{\alpha'\alpha} - (A^{\Delta,\sigma}_2)_{\alpha'\alpha} \right]Q^\sigma_{\alpha\alpha_1}(s) k_0 \rangle \!\rangle d s \qquad \\[.2cm] \nonumber &=& \int_0^t \langle \!\langle H_{\alpha'\alpha_2}(t-s) G, v_s^\sigma \rangle \!\rangle d s = \int_0^t \langle \!\langle G_{t-s}, v_s^\sigma \rangle \!\rangle d s, \end{eqnarray*} where \begin{equation*}
v^\sigma_s =\left[(A^\Delta_2)_{\alpha'\alpha} - (A^{\Delta,\sigma}_2)_{\alpha'\alpha} \right]k^\sigma_s := \left[(A^\Delta_2)_{\alpha'\alpha} - (A^{\Delta,\sigma}_2)_{\alpha'\alpha} \right]Q^\sigma_{\alpha\alpha_1}(s) k_0 \in \mathcal{K}_{\alpha'}, \end{equation*} and \begin{equation}
\label{Jan46a} G_{t-s} = H_{\alpha'\alpha_2}(t-s) G \in \mathcal{G}_{\alpha'}, \end{equation} which makes sense since obviously $G\in \mathcal{G}_{\alpha_2}$. In view of (\ref{21}) we then get \begin{eqnarray}
\label{Jan47} & & \int_0^t \langle \!\langle G_{t-s}, v_s^\sigma \rangle \!\rangle d s = \int_{\Gamma_0} G_{t-s} (\eta) \bigg{(}\int_{\mathds{R}^d} \sum_{y_1\in\eta} \sum_{y_2\in \eta\setminus y_1} k^\sigma_s (\eta
\cup x\setminus\{y_1,y_2\}) \\[.2cm] \nonumber & & \qquad \times \left[1-\phi_\sigma (y_1) \phi_\sigma (y_2)\right] b(x|y_1, y_2)dx \bigg{)} \lambda ( d\eta) \\[.2cm] \nonumber & &\qquad = \int_{\Gamma_0} \bigg{(} \int_{(\mathds{R}^d)^3}G_{t-s} (\eta\cup\{y_1,y_2\}) k^\sigma_s (\eta \cup x)\\[.2cm] \nonumber & & \qquad \times \left[1-\phi_\sigma (y_1) \phi_\sigma
(y_2)\right] b(x|y_1, y_2) d x dy_1 dy_2 \bigg{)} \lambda (d\eta). \end{eqnarray} Since $k^\sigma_s = Q^\sigma_{\alpha\alpha_1}(s) k_0$ is in $\mathcal{K}_\alpha$, we have that \begin{equation}
\label{Jan48}
|k^\sigma_s (\eta \cup x)| \leq \|k^\sigma_s\|_\alpha e^{\alpha
|\eta| +\alpha} \leq e^{\alpha |\eta| +\alpha} \frac{T(\alpha,
\alpha_1) \|k_0\|_{\alpha_1}}{ T(\alpha, \alpha_1) - \tau(\alpha_2,\alpha_1)}, \end{equation} where $\alpha $ is as in (\ref{Jan41}) and $s\leq t \leq \tau(\alpha_2 , \alpha_1)$. Now for $s\leq t$, we set \begin{equation}
\label{Jan49}
g_s (y_1, y_2) = \int_{\Gamma_0} e^{\alpha |\eta|} |G_{s}
(\eta\cup\{y_1,y_2\})| \lambda (d \eta). \end{equation} Let us show that $g_s \in L^1((\mathds{R}^d)^2)$. By (\ref{Jan46a}) we have \begin{eqnarray}
\label{Jan50} & & \int_{(\mathds{R}^d)^2} g_s (y_1, y_2) d y_1 d y_2 =
e^{-2\alpha} \int_{\Gamma_0} |\eta|(|\eta|-1)
e^{-(\alpha'-\alpha)|\eta|} |G_s(\eta)| e^{\alpha'|\eta|} \lambda (d\eta)\qquad \\[.2cm]\nonumber & & \quad \qquad \leq\frac{4 e^{-2\alpha -
2}}{(\alpha'- \alpha)^2} |G_s|_{\alpha'} \leq \frac{4 e^{-2\alpha -
2}T(\alpha_2 , \alpha')|G|_{\alpha_2}}{(\alpha'- \alpha)^2[T(\alpha_2, \alpha')-\tau(\alpha_2 ,\alpha_1)]}. \end{eqnarray} Turn now to (\ref{Jan47}). By means of item (iv) of Assumption \ref{ass1} and by (\ref{Jan48}) and (\ref{Jan49}) we get \begin{eqnarray*}
& & \int_0^t \left\vert\langle \!\langle G_{t-s}, v_s^\sigma \rangle \!\rangle\right\vert d s \\[.2cm] \nonumber & & \qquad \leq \beta^* C(\alpha_2,
\alpha_1)\|k_0\|_{\alpha_1}\int_0^t \int_{(\mathds{R}^d)^2} g_s (y_1, y_2)\left[1-\phi_\sigma (y_1) \phi_\sigma (y_2)\right] ds d y_1 d y_2,
\end{eqnarray*} where we have taken into account that $\alpha$ and $\alpha'$ are expressed through $\alpha_2$ and $\alpha_1$, see (\ref{Jan41}). Then the function under the latter integral is bounded from above by $g_s(y_1, y_2)$ which by (\ref{Jan50}) is integrable on $[0,t]\times (\mathds{R}^d)^2$. Since this function converges point-wise to $0$ as $\sigma \to 0^+$, by Lebesgue's dominated convergence theorem we get that \begin{equation*}
\langle \!\langle G, \Upsilon_1(t,\sigma) \rangle \!\rangle \to 0, \qquad {\rm as} \ \ \sigma\to 0^{+}. \end{equation*} The proof that the second summand on the right-hand side of (\ref{Jan44}) vanishes in the limit $\sigma\to 0^{+}$ is completely analogous. \end{proof} {\it Proof of Lemma \ref{ILlm}.} By (\ref{32}) and Proposition \ref{1pn} we have that each $k_0\in \mathcal{K}^\star_{\alpha_1}$ is the correlation function of some $\mu_0\in \mathcal{P}_{\rm exp}(\Gamma_0)$. By (\ref{21}) we readily conclude that \[ \dot{k}_t (\varnothing) = (L^\Delta_{\alpha_2} k_t)(\varnothing)=0. \] Hence, $k_t(\varnothing)=k_0(\varnothing)=1$. At the same time, for $t\leq\tau(\alpha_2, \alpha_1)$ given in (\ref{Jan40}), we have that $$\langle \!\langle G, k_t \rangle\!\rangle = \lim_{\sigma\to 0^+}\lim_{n\to +\infty} \lim_{l\to +\infty} \langle \!\langle G, k_t^{\Lambda_n, N_l} \rangle\!\rangle,$$ which follows by Lemma \ref{JJ10lm} and Proposition \ref{JJ10pn}. Then $\langle \!\langle G, k_t \rangle\!\rangle \geq 0$ by (\ref{Jan32}), which completes the proof.
{$\square$}
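The dominated-convergence step in the proof of Lemma \ref{JJ10lm} can be illustrated numerically: a cutoff $\phi_\sigma\to 1$ pointwise, multiplied against an integrable majorant, forces the integral of the difference to vanish as $\sigma\to 0^{+}$. In the one-dimensional sketch below, both the Gaussian cutoff and the exponential majorant are assumptions made for the illustration only; the paper's $\phi_\sigma$ is not reproduced here.

```python
import math

# Hypothetical stand-ins: an integrable majorant g and a cutoff phi(sigma, .) -> 1
# pointwise as sigma -> 0+ (only this pointwise property is used in the proof).
def g(y1, y2):
    return math.exp(-abs(y1) - abs(y2))

def phi(sigma, y):
    return math.exp(-sigma * y * y)

def integral(sigma, R=10.0, n=400):
    # Midpoint rule on [-R, R]^2 for the integrand  g * (1 - phi * phi).
    h = 2 * R / n
    ys = [-R + (i + 0.5) * h for i in range(n)]
    return sum(
        g(y1, y2) * (1.0 - phi(sigma, y1) * phi(sigma, y2)) for y1 in ys for y2 in ys
    ) * h * h

vals = [integral(s) for s in (1.0, 0.1, 0.01, 0.001)]
# The values decrease towards 0, as dominated convergence predicts.
assert all(a > b for a, b in zip(vals, vals[1:])) and vals[-1] < 0.1
```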
\section{The Global Solution}
\label{Sec6}
In this section, we continue the solution obtained in Lemma \ref{1lm} to all $t>0$ and thus prove that it satisfies the upper bound following from property (i) in Theorem \ref{1tm}.
\subsection{Comparison statements}
Note that the time bound $T(\alpha, \alpha_1)$ defined in (\ref{34}) is a bounded function of $\alpha >\alpha_1$. Hence the solution obtained in Lemma \ref{1lm} may leave the scale of spaces $\{\mathcal{K}_\alpha\}_{\alpha \in \mathds{R}}$ in finite time. To overcome this difficulty, we compare $k_t$ with some auxiliary functions. \begin{lemma}
\label{complm} Let $\alpha_2$, $\alpha_1$ and $\tau(\alpha_2,\alpha_1)$ be as in Lemma \ref{ILlm}. Then for each $t \in [0, \tau (\alpha_2,\alpha_1)]$ and arbitrary $k_0 \in \mathcal{K}_{\alpha_1}^\star$, the following holds \begin{equation} \label{59} 0 \le (Q_{\alpha_2 \alpha_1}(t; B_\upsilon^\Delta)k_0)(\eta) \le (Q_{\alpha_2 \alpha_1}(t; B_{2,\upsilon}^\Delta)k_0)(\eta), \qquad \eta \in \Gamma_0. \end{equation} \end{lemma} \begin{proof} The left-hand side inequality follows by Lemma \ref{ILlm} and (\ref{32a}). By the second line in (\ref{54a}) we conclude that $w_t = Q_{\alpha_2 \alpha_1}(t; B_{2,\upsilon}^\Delta) k_0$ is the unique solution of the equation \begin{equation*}
\dot{w}_t = ((A^\Delta_\upsilon)_{\alpha_2} +
(B^\Delta_{2,\upsilon})_{\alpha_2}) w_t, \qquad w_t|_{t=0}= k_0, \end{equation*} on the time interval $[0, T(\alpha_2,\alpha_1;B_{2,\upsilon}^\Delta))\supset [0, T(\alpha_2,\alpha_1;B_{\upsilon}^\Delta))$ since $T(\alpha_2,\alpha_1;B_\upsilon^\Delta)\le T(\alpha_2,\alpha_1;B_{2,\upsilon}^\Delta)$. Then we have that $w_t -k_t \in \mathcal{K}_{\alpha_2}$ for all $t\leq\tau(\alpha_2,\alpha_1)$. Now we choose $\alpha', \alpha \in [\alpha_1, \alpha_2]$ according to (\ref{Jan41}) so that (\ref{Jan40}) holds, and then write \begin{eqnarray}
\label{61}
w_t -k_t & = & (Q_{\alpha_2 \alpha_1}(t; B_{2,\upsilon}^\Delta)k_0)(\eta) - (Q_{\alpha_2 \alpha_1}(t; B_{\upsilon}^\Delta)k_0)(\eta) \\[.2cm] \nonumber & = & \int_0^t Q_{\alpha_2\alpha'}(t-s;B_{2,\upsilon}^\Delta)(-B_1^\Delta)_{\alpha'\alpha}k_s ds, \qquad t<\tau(\alpha_2,\alpha_1), \end{eqnarray} where the operator $(-B_1^\Delta)_{\alpha'\alpha}$ is positive with respect to the cone $\mathcal{K}_\alpha^+$ defined in (\ref{32a}). In the integral in (\ref{61}), for all $s\in [0, \tau(\alpha_2,\alpha_1)]$, we have that $k_s \in \mathcal{K}_{\alpha}$ and $Q_{\alpha_2\alpha'}(t-s;B_{2,\upsilon}^\Delta) \in \mathcal{L}(\mathcal{K}_{\alpha'},\mathcal{K}_{\alpha_2})$ is positive. We also have that $k_s \in \mathcal{K}_{\alpha}^\star \subset \mathcal{K}_{\alpha}^+$ (by Lemma \ref{ILlm}). Therefore $w_t -k_t \in \mathcal{K}_{\alpha_2}^+$ for $t\le \tau(\alpha_2,\alpha_1)$, which yields (\ref{59}). \end{proof} The next step is to compare $k_t$ with \begin{equation}
\label{63}
r_t(\eta) = \|k_0\|_{\alpha_1}\exp\left( (\alpha_1 + c t)|\eta|\right), \end{equation} where $\alpha_1$ is as in Lemma \ref{complm} and \begin{equation}
\label{64} c = \langle b \rangle + \upsilon - m_*, \qquad m_*= \inf_{x\in \mathds{R}^d} m(x). \end{equation} Let us show that $r_t\in \mathcal{K}_\alpha$ for $t\leq \tau(\alpha_2, \alpha_1)$, where $\alpha$ is given in (\ref{Jan41}). In view of (\ref{nk}), this is the case if the following holds \begin{equation}
\label{64a} \alpha_1 + c \tau(\alpha_2, \alpha_1) \leq \frac{1}{3}\alpha_2 + \frac{2}{3}\alpha_1, \end{equation} which amounts to $c \leq 2\langle b \rangle + \upsilon + \langle a \rangle e^{\alpha_2}$, see (\ref{Jan40}) and (\ref{34}). The latter obviously holds by (\ref{64}). \begin{lemma}
\label{comp1lm} Let $\alpha_1$, $\alpha_2$ and $k_t = Q_{\alpha_2\alpha_1}(t)k_0$ be as in Lemma \ref{complm}, and $r_t$ be as in (\ref{63}), (\ref{64}). Then $k_t (\eta) \leq r_t(\eta)$ for all $t\leq \tau(\alpha_2,\alpha_1)$ and $\eta\in \Gamma_0$. \end{lemma} \begin{proof} The idea is to show that $w_t (\eta) \leq r_t(\eta)$ and then to apply the estimate obtained in Lemma \ref{complm}. Set $\tilde{w}_t =Q_{\alpha_2 \alpha_1}(t; B_{2,\upsilon}^\Delta) r_0$. Since $k_0 \in \mathcal{K}_{\alpha_1}$, we have that $k_0 \leq r_0$. Then by the positivity discussed in Remark \ref{Jan10rk} we obtain $w_t \leq \tilde{w}_t$, and hence $k_t \leq \tilde{w}_t$, holding for all $t \leq \tau(\alpha_2, \alpha_1)$. Thus, it remains to prove that $\tilde{w}_t (\eta) \leq r_t(\eta)$. To this end we write, cf. (\ref{61}), \begin{equation}
\label{65} \tilde{w}_t - r_t = \int_0^t Q_{\alpha_2 \alpha'}(t-s; B_{2,\upsilon}^\Delta) D_{\alpha'\alpha} r_s d s, \end{equation} where $\alpha'$ and $\alpha$ are as in (\ref{Jan41}) and the bounded operator $D_{\alpha'\alpha}$ acts as follows: $D= A^\Delta_\upsilon
+ B^\Delta_{2,\upsilon} - J_{c}$, where $(J_{c}k)(\eta) = c |\eta| k(\eta)$ with $c$ as in (\ref{64}). The validity of (\ref{65}) can be established by taking the $t$-derivative of both sides and then taking into account (\ref{63}) and (\ref{54a}). Note that $r_s$ in (\ref{65}) lies in $\mathcal{K}_\alpha$, as it was shown above. By means of (\ref{21}) the action of $D$ on $r_s$ can be calculated explicitly yielding \begin{eqnarray}
\label{66} (D r_t)(\eta) & = & - \Psi_\upsilon (\eta) r_t(\eta) + \int_{\mathds{R}^d}\sum_{y_1\in \eta} \sum_{y_2\in \eta\setminus
y_1} r_t(\eta\cup x\setminus \{y_1,y_2\}) b(x|y_1,y_2) d x \\[.2cm] \nonumber&
+ & \upsilon |\eta| r_t(\eta) + 2\int_{(\mathds{R}^d)^2}\sum_{y_1\in
\eta} r_t(\eta\cup x\setminus y_1) b(x|y_1,y_2) d x d y_2 -
c|\eta| r_t(\eta) \\[.2cm]\nonumber & = & \bigg{(} - M(\eta) -
E^a(\eta) - \langle b \rangle |\eta| + e^{-\alpha_1 - c t}E^b
(\eta) + 2 \langle b \rangle |\eta| - c|\eta| \bigg{)} r_t (\eta). \end{eqnarray} Since $\alpha_1 >-\log \omega$, by Proposition \ref{2pn} we have that \[
- E^a(\eta) + e^{-\alpha_1 - c t}E^b (\eta) \leq \upsilon |\eta|, \] by which and (\ref{64}) we obtain from (\ref{66}) that $(D r_t)(\eta) \leq 0$. We apply this in (\ref{65}) and obtain $\tilde{w}_t \leq r_t$ which completes the proof. \end{proof} \begin{remark}
\label{JK10rk} By (\ref{64}) we obtain that $c\leq 0$ (and hence $k_t\in \mathcal{K}_{\alpha_1}$) whenever \[ m_* \geq \langle b \rangle + \upsilon. \] In the short dispersal case, see Remark \ref{1rk}, one can take $\upsilon =0$. In the long dispersal case, by Proposition \ref{pfi} one can make $\upsilon$ as small as one wants by taking small enough $\omega$ and hence big enough $\alpha_1$. Then, the evolution of $k_t$ leaves the initial space invariant if the following holds \begin{equation}
\label{67} m_* > \langle b \rangle. \end{equation} In the short dispersal case, one can allow equality in (\ref{67}). \end{remark}
\subsection{Completing the proof}
The choice of the initial space should satisfy the condition $\alpha_1 > -\log \omega$. At the same time, the parameter $\alpha_2>\alpha_1$ can be taken arbitrarily. In view of the dependence of $T(\alpha_2, \alpha_1)$ on $\alpha_2$, see (\ref{34}), the function $\alpha_2 \mapsto T(\alpha_2, \alpha_1)$ attains maximum at $\alpha_2 = \alpha_1 + \delta (\alpha_1)$, where \begin{equation}
\label{Jan26}
\delta (\alpha) = 1+ W\left(\frac{2\langle b \rangle + \upsilon}{\langle a \rangle} e^{-\alpha -1} \right). \end{equation} Here $W$ is Lambert's function, see \cite{W}. Then we have \begin{equation}
\label{68} T_{\max} (\alpha_1) = \max_{\alpha_2 >\alpha_1} T(\alpha_2, \alpha_1) = \exp\left( -\alpha_1 - \delta(\alpha_1) \right) /\langle a \rangle. \end{equation} {\it Proof of Theorem \ref{1tm}.} Fix $\upsilon$ and then find small $\omega$ (see Proposition \ref{pfi}) such that the inequality in Proposition \ref{2pn} holds true. Thereafter, take $\alpha_0 >-\log \omega$ such that $k_{\mu_0}\in \mathcal{K}_{\alpha_0}$. Then take $c$ as given in (\ref{64}) with this $\upsilon$. Next, set $T_1 = T_{\max} (\alpha_0)/3$, see (\ref{68}), and also $\alpha^*_1 = \alpha_0+ cT_1$, $\alpha_1 = \alpha_0 + \delta(\alpha_0)$, see (\ref{Jan26}). Clearly, $\alpha_1^* < \alpha_1$, which can be checked as in (\ref{64a}). By Lemma \ref{ILlm} it follows that, for $t\leq T_1$, $k_t = Q_{\alpha_1 \alpha_0} (t) k_{\mu_0}$ lies in $\mathcal{K}^\star_{\alpha_1}$, whereas by Lemma \ref{comp1lm} we have that $k_t\in \mathcal{K}^\star_{\alpha_t}$ with $\alpha_t = \alpha_0 +c t\leq \alpha_1^*$. Clearly, for $T<T_1$, the map $[0, T)\ni t \mapsto k_t\in \mathcal{K}_{\alpha_T}$ is continuous and continuously differentiable, and both claims (i) and (ii) are satisfied since (by construction) $\dot{k}_t = L^\Delta_{\alpha_1} k_t = L^\Delta_{\alpha_T} k_t$, see (\ref{29}). Now, for $n\geq 2$, we set \begin{gather}
\label{70} T_n = T_{\max} (\alpha^*_{n-1}) /3, \quad \alpha_n^* = \alpha^*_{n-1} + c T_n , \\[.2cm] \nonumber \alpha_n = \alpha^*_{n-1} + \delta(\alpha^*_{n-1}). \end{gather} As for $n=1$, we have that $\alpha_n^* < \alpha_n$ and $T_n< T(\alpha_n,\alpha_{n-1}^*)$ for all $n\geq 2$. Thereafter, set \begin{equation*}
k_t^{(n)} = Q_{\alpha_n \alpha_{n-1}^*} (t) k^{(n-1)}_{T_{n-1}},\qquad t \in [0,T (\alpha_n,\alpha_{n-1}^*)), \end{equation*} where $k^{(1)}_t = Q_{\alpha_1 \alpha_0} (t) k_{\mu_0}$. Then, for each $T<T_n$ both maps $[0,T)\ni t \mapsto k^{(n)}_t \in \mathcal{K}_{\bar{\alpha}_{n-1}(T)}$ and $[0,T)\ni t \mapsto L^\Delta_{\bar{\alpha}_{n-1}(T)} k^{(n)}_t \in \mathcal{K}_{\bar{\alpha}_{n-1}(T)}$ are continuous, where $\bar{\alpha}_{n-1}(T) := \alpha_{n-1}^* + cT$. The continuity of the latter map follows by the fact that $k^{(n)}_t \in \mathcal{K}_{\bar{\alpha}_{n-1}(t)}\hookrightarrow \mathcal{K}_{\bar{\alpha}_{n-1}(T)}$ and that
$L^\Delta_{\bar{\alpha}_{n-1}(T)}|_{\mathcal{K}_{\bar{\alpha}_{n-1}(t)}} = L^\Delta_{\bar{\alpha}_{n-1}(T)\bar{\alpha}_{n-1}(t)}$, see (\ref{29}). Moreover $k^{(n)}_0 = k^{(n-1)}_{T_{n-1}}$ and $L^\Delta_{\alpha^*_{n-1}+\varepsilon } k^{(n)}_0 = L^\Delta_{\alpha_{n-1}^*+\varepsilon } k^{(n-1)}_{T_{n-1}}$ holding for each $\varepsilon>0$. Then the map in question $t\mapsto k_t$ is \begin{equation*}
k_{t+T_1 \cdots + T_{n-1}} = k_t^{(n)}, \qquad t\in [0,T_n], \end{equation*} provided that the series $\sum_{n\geq 1}T_n$ is divergent. By (\ref{68}) we have \begin{equation}
\label{71} \sum_{n\geq 1}T_n = \frac{1}{3 \langle a \rangle} \sum_{n\geq 1}\exp\left( -\alpha_{n-1}^* - \delta(\alpha_{n-1}^* ) \right). \end{equation} For the series on the right-hand side to converge, it is necessary that $\alpha_{n-1}^* + \delta(\alpha_{n-1}^*)\to +\infty$, and hence $\alpha_{n-1}^* \to +\infty$ as $n\to +\infty$, since $\delta(\alpha)$ is decreasing. By (\ref{70}) we have $\alpha_n^*= \alpha_0 + c(T_1+\cdots +T_n)$. Then the convergence of $\sum_{n\geq 1}T_n$ would imply that $\alpha_n^*\leq \alpha^*$ for some $\alpha^*>0$, which contradicts the convergence of the right-hand side of (\ref{71}).
{$\square$} \vskip.1cm \noindent {\it Proof of Corollary \ref{Jaco}.} For a compact $\Lambda$, let us show that $\mu^\Lambda_t\in \mathcal{D}$, that is, $R_{\mu_t}^\Lambda \in \mathcal{D}^\dagger$, see (\ref{L2}). For $k_t =k_{\mu_t}$ described in Theorem \ref{1tm}, by (\ref{13}) we have \begin{equation*} R_{\mu_t}^{\Lambda}(\eta)=\int_{\Gamma_{\Lambda}}
(-1)^{|\xi|}k_t(\eta \cup \xi)\lambda(d\xi). \end{equation*} Let $\alpha >\alpha_0$ be such that $k_t\in \mathcal{K}_{\alpha}$. Then using (\ref{nk}), (\ref{8}), (\ref{22}) and (\ref{25}) we calculate \begin{eqnarray*} \int_{\Gamma_{\Lambda}}\Psi(\eta) R_{\mu_t}^{\Lambda}(\eta) \lambda(d\eta)&=& \int_{\Gamma_{\Lambda}}\Psi(\eta)
\int_{\Gamma_{\Lambda}} (-1)^{|\xi|}k_t(\eta \cup \xi)\lambda(d\xi)\lambda(d\eta)\\[.2cm]
&\le & \int_{\Gamma_{\Lambda}}\Psi(\eta) \|k_t\|_{\alpha}e^{\alpha|\eta|}\lambda(d\eta) \int_{\Gamma_{\Lambda}}e^{\alpha|\xi|} \lambda(d\xi)\\[.2cm]
&\le& \|k_t\|_{\alpha} (m^*+a^*+\langle b\rangle) \int_{\Gamma_{\Lambda}} |\eta|^2e^{\alpha|\eta|}\lambda(d\eta)
\exp\left(|\Lambda|e^{\alpha}\right)\\[.2cm]
&=& \|k_t\|_{\alpha} (m^*+a^*+\langle b\rangle) |\Lambda|e^\alpha \left( 2 +
|\Lambda|e^\alpha\right)\exp\left(2|\Lambda|e^{\alpha}\right), \end{eqnarray*}
where $|\Lambda|$ is the Euclidean volume of $\Lambda$. That yields $\mu_t^{\Lambda} \in \mathcal{D}$. The validity of (\ref{Ja}) follows by (\ref{11}).
{$\square$}
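The quantities in (\ref{Jan26}) and (\ref{68}) can be cross-checked numerically. The sketch below assumes the form $T(\alpha_2,\alpha_1)=(\alpha_2-\alpha_1)/(\langle a\rangle e^{\alpha_2}+2\langle b\rangle+\upsilon)$, reconstructed so as to be consistent with the maximizer (\ref{Jan26}) and the maximum (\ref{68}); this form, as well as the parameter values, are assumptions of the sketch, not data taken from the text.

```python
import math

def lambert_w(z, tol=1e-14):
    # Principal branch of Lambert's W via Newton iteration, valid for z >= 0.
    w = math.log1p(z)  # rough starting point
    for _ in range(100):
        e = math.exp(w)
        w_next = w - (w * e - z) / (e * (w + 1))
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    return w

# Hypothetical model parameters (any positive values work for the check).
a_avg, b_avg, upsilon, alpha1 = 1.0, 0.7, 0.2, 0.5

def delta(alpha):
    # (Jan26): delta(alpha) = 1 + W(((2<b> + upsilon)/<a>) e^{-alpha-1}).
    return 1.0 + lambert_w((2 * b_avg + upsilon) / a_avg * math.exp(-alpha - 1.0))

def T(alpha2):
    # Assumed form of T(alpha2, alpha1), consistent with (Jan26) and (68).
    return (alpha2 - alpha1) / (a_avg * math.exp(alpha2) + 2 * b_avg + upsilon)

# Grid maximum of T versus the closed-form maximum in (68).
grid = [alpha1 + 1e-4 * k for k in range(1, 60000)]
t_grid = max(T(a2) for a2 in grid)
t_max = math.exp(-alpha1 - delta(alpha1)) / a_avg
assert abs(t_grid - t_max) < 1e-6
```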
\section*{Acknowledgment} The authors are grateful to Krzysztof Pilorz for valuable assistance and discussions. In the period 2016-17, the research of both authors related to this paper was supported by the European Commission under the project STREVCOMS PIRSES-2013-612669. In March 2017, during his stay in Bucharest, Yuri Kozitsky was supported by the Research Institute of the University of Bucharest. In 2018, he was supported by National Science Centre, Poland, grant 2017/25/B/ST1/00051. All of this support is cordially acknowledged. \appendix \setcounter{secnumdepth}{1}
\section{The proof of Proposition \ref{2pn}} According to Assumption \ref{ass1}, $\beta$ is Riemann integrable; hence, for an arbitrary $\varepsilon >0$, one can divide $\mathbb{R}^d$ into equal cubic cells $E_l$, $l\in \mathbb{N}$, of side $h>0$ such that the following holds \begin{equation} \label{pz} h^d\sum_{l=1}^{+\infty} \beta_l \le \langle b \rangle+\varepsilon, \qquad \beta_l:=\sup_{x\in E_l}\beta(x). \end{equation}
For $r>0$, set $K_r(x)=\lbrace y\in \mathbb{R}^d:|x-y|<r \rbrace$, $x\in \mathbb{R}^d$, and \begin{equation} \label{ar} a_r = \inf_{x\in K_{2r}(0)}a(x). \end{equation} Then we fix $\varepsilon$ and pick $r>0$ such that $a_r>0$. For $r$, $h$ and $\varepsilon$ as above, we prove the statement by induction on the number of points in $\eta$. By (\ref{fi}) we rewrite (\ref{2pnN}) in the form \begin{equation}
\label{u} U_{\omega}(\eta):=\upsilon|\eta|+\Phi_{\omega}(\eta)\ge 0, \end{equation} and, for some $x\in \eta$, consider \begin{eqnarray*} U_{\omega}(x,\eta \setminus x)&:= & U_{\omega}(\eta)- U_{\omega}(\eta \setminus x)\\ &=& \upsilon +2 \left( \sum_{y\in \eta \setminus x} a(x-y)-\omega \sum_{y\in \eta \setminus x}\beta(x-y) \right). \end{eqnarray*}
Set $c_d=|K_1|$ and let $\Delta(d)$ be the packing constant for rigid balls in $\mathbb{R}^d$, cf. \cite{gro}. Then set \begin{equation} \label{del} \delta=\max \lbrace \beta^*; (\langle b \rangle+\varepsilon)g_d(h,r) \rbrace, \end{equation} where $$g_d(h,r)=\frac{\Delta(d)}{c_d}\left( \frac{h+2r}{hr} \right)^d.$$ Next, assume that $\upsilon$ and $\omega$ satisfy, cf. (\ref{ar}), \begin{equation} \label{ome} \omega \le \min \left\{ \frac{\upsilon}{2\delta}; \frac{a_r}{\delta} \right\}. \end{equation} Let us show that \begin{itemize} \item[(i)] for each $\eta=\lbrace x,y \rbrace$, (\ref{ome}) implies (\ref{u}); \item[(ii)] for each $\eta$, one finds $x\in \eta$ such that $U_{\omega}(x,\eta \setminus x)\ge 0$ whenever (\ref{ome}) holds. \end{itemize} To prove (i), by (\ref{ome}) and (\ref{del}) we get \begin{eqnarray*} U_{\omega}(\lbrace x,y \rbrace)&=&2 \upsilon +2a(x-y)-2\omega\beta(x-y)\\ &\geq & (\upsilon - 2\omega\beta^*)+2a(x-y)\ge 0. \end{eqnarray*} To prove (ii), for $y\in \eta$, we set \begin{equation} \label{s}
s=\max_{y\in \eta} |\eta \cap K_{2r}(y)|. \end{equation}
Let also $x\in \eta$ be such that $|\eta \cap K_{2r}(x)|=s$. For this $x$, by $E_l(x)$, $l\in \mathbb{N}$, we denote the corresponding translates of $E_l$ which appear in (\ref{pz}). Set $\eta_l=\eta \cap E_l(x)$ and let $l_* \in \mathbb{N}$ be such that $\eta \subset \bigcup_{l\le l_*}E_l(x)$ which is possible since $\eta$ is finite. For a given $l$, a subset $\zeta_l \subset \eta_l$
is called $r$-admissible if for each pair of distinct $y,z\in \zeta_l$ one has $K_r(y)\cap K_r(z)= \emptyset$. Such a subset $\zeta_l$ is called maximal $r$-admissible if $|\zeta_l|\ge |\zeta_l'|$ for any other $r$-admissible $\zeta_l'$. It is clear that \begin{equation} \label{etal} \eta_l \subset \bigcup_{z\in \zeta_l}K_{2r}(z). \end{equation}
Otherwise, one finds $y\in \eta_l$ such that $|y-z|\ge 2r$, for each $z\in \zeta_l$, which yields that $\zeta_l$ is not maximal. Since all the balls $K_r(z)$, $z\in \zeta_l$, are contained in the $h$-extended cell \begin{equation*}
E_l^h(x):=\lbrace y\in \mathbb{R}^d: \inf_{z\in E_l(x)}|y-z|\le h \rbrace, \end{equation*}
their maximum number, and hence $|\zeta_l|$, can be estimated as follows \begin{equation} \label{zetal}
|\zeta_l|\le \Delta(d)V(E_l^h(x))/(c_d r^d)=h^d\frac{\Delta(d)}{c_d}\left( \frac{h+2r}{hr} \right)^d=h^dg_d(h,r), \end{equation} where $c_d$ and $\Delta(d)$ are as in (\ref{del}). Then by (\ref{s}) and (\ref{etal}) we get \begin{equation*} \sum_{y\in \eta \setminus x}\beta(x-y)\le \sum_{l=1}^{l_*} \sum_{z\in \zeta_l} \sum_{y\in K_{2r}(z)\cap \eta_l}\beta_l. \end{equation*} The cardinality of $K_{2r}(z)\cap \eta_l$ does not exceed $s$, see (\ref{s}), whereas the cardinality of $\zeta_l$ satisfies (\ref{zetal}). Then \begin{equation} \label{ogr} \sum_{y\in\eta\setminus x}\beta(x-y)\le s g_d(h,r)\sum_{l=1}^{\infty}\beta_l h^d \le sg_d(h,r)(\langle b \rangle +\varepsilon)\le s\delta. \end{equation} On the other hand, by (\ref{ar}) and (\ref{s}) we get \begin{equation*} \sum_{y\in\eta\setminus x}a(x-y)\ge \sum_{y\in (\eta\setminus x)\cap K_{2r}(x)}a(x-y) \ge (s-1)a_r. \end{equation*} We use this estimate and (\ref{ogr}) in (\ref{u}) and obtain $$U_{\omega}(x, \eta \setminus x)\ge 2\delta \left[ \left( \frac{\upsilon}{2\delta}-\omega \right)+(s-1)\left(\frac{a_r}{\delta}-\omega \right) \right]\ge 0,$$
see (\ref{ome}). Thus, (ii) also holds and the proof follows by induction on $|\eta|$.
\end{document}
\begin{document}
\begin{abstract} We study the existence of stationary classical solutions of the incompressible Euler equation in the plane that approximate singular stationary solutions of this equation. The construction is performed by studying the asymptotics of equation $-\varepsilon^2 \Delta u^\varepsilon=(u^\varepsilon-q-\frac{\kappa}{2\pi} \log \frac{1}{\varepsilon})_+^p$ with Dirichlet boundary conditions and $q$ a given function. We also study the desingularization of pairs of vortices by minimal energy nodal solutions and the desingularization of rotating vortices. \end{abstract}
\title{Desingularization of vortices for the Euler equation}
\section{Introduction}
\subsection{Singular solutions to the Euler equation}
The incompressible Euler equations \[ \left\{
\begin{aligned}
\nabla \cdot \mathbf{v} &= 0, \\
\mathbf{v}_t + \mathbf{v}\cdot \nabla \mathbf{v}&=-\nabla p,
\end{aligned} \right. \] describe the evolution of the velocity $\mathbf{v}$ and the pressure $p$ in an incompressible flow. In $\mathbf{R}^2$, the vorticity $\omega = \nabla \times \mathbf{v}=\partial_1 \mathbf{v}_2-\partial_2 \mathbf{v}_1$ of a solution of the Euler equations obeys the transport equation \[ \omega_t + \mathbf{v} \cdot \nabla \omega = 0 \] and the velocity field $\mathbf{v}$ can be recovered from the vorticity function $\omega$ through the Biot--Savart law \[ \mathbf{v} = \omega * \frac{1}{2\pi} \frac{-x^\perp}{\abs{x}^2}, \] where $x^\perp=(x_2, -x_1)$. Special singular solutions of the Euler equations are given by\footnote{One needs to give a meaning to the equation in this case, since the velocity field generated by a vortex point is singular precisely on that vortex point. The convention is that each vortex point is transported only by the velocity field created by the other vortex points (see e.g. S.\thinspace Schochet \protect{\cite{Schochet_CPDE_95}} for details and further discussion). } \[ \omega = \sum_{i=1}^k \kappa_i \delta_{x_i(t)}, \] corresponding to \[
\mathbf{v}(x)=-\sum_{i=1}^k \frac{\kappa_i}{2\pi} \frac{(x-x_i(t))^\perp}{\abs{x-x_i(t)}^2}, \] and the positions of the vortices $x_i : \mathbf{R} \to \mathbf{R}^2$ satisfy \[
\dot{x}_i(t)=-\sum_{\substack{j=1 \\ j \ne i}}^k \frac{\kappa_j}{2\pi} \frac{(x_i(t)-x_j(t))^\perp}{\abs{x_i(t)-x_j(t)}^2}. \] In terms of the Kirchhoff--Routh function \[
\mathcal{W}(x_1, \dotsc, x_k)=\frac{1}{2} \sum_{i \ne j} \frac{\kappa_i\kappa_j}{2\pi} \log \frac{1}{\abs{x_i-x_j}}, \] the positions obey Kirchhoff's law \begin{equation} \label{equationKirchhoff}
\kappa_i \dot{x}_i=(\nabla_{x_i} \mathcal{W})^\perp, \end{equation} which is a Hamiltonian formulation of the dynamics of the vortices.
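As an aside, Kirchhoff's law is straightforward to integrate numerically. The sketch below (our illustration, not part of the original text; all function names are ours) identifies $\mathbf{R}^2$ with $\mathbf{C}$, so that $x^\perp$ corresponds to $-iz$, and checks that two identical vortices rotate at a constant mutual distance, in accordance with the conservation of $\mathcal{W}$.

```python
import math

def vortex_velocities(z, kappa):
    # Complex form of the point-vortex system: since x^perp = (x_2, -x_1)
    # corresponds to -i*z, the law dx_i/dt = -sum_j kappa_j/(2 pi)
    # (x_i-x_j)^perp/|x_i-x_j|^2 becomes
    # dz_i/dt = i * sum_{j != i} kappa_j/(2 pi) * (z_i - z_j)/|z_i - z_j|^2.
    out = []
    for i, zi in enumerate(z):
        s = 0j
        for j, zj in enumerate(z):
            if j != i:
                d = zi - zj
                s += kappa[j] / (2 * math.pi) * d / abs(d) ** 2
        out.append(1j * s)
    return out

def rk4_step(z, kappa, dt):
    # One classical fourth-order Runge-Kutta step for the vortex positions.
    f = lambda w: vortex_velocities(w, kappa)
    k1 = f(z)
    k2 = f([zi + 0.5 * dt * k for zi, k in zip(z, k1)])
    k3 = f([zi + 0.5 * dt * k for zi, k in zip(z, k2)])
    k4 = f([zi + dt * k for zi, k in zip(z, k3)])
    return [zi + dt / 6 * (a + 2 * b + 2 * c + d)
            for zi, a, b, c, d in zip(z, k1, k2, k3, k4)]

# Two identical vortices: the pair rotates about its centroid,
# and the mutual distance (hence the Kirchhoff-Routh energy) is conserved.
z = [0.5 + 0j, -0.5 + 0j]
kappa = [1.0, 1.0]
for _ in range(200):
    z = rk4_step(z, kappa, 0.05)
```

A pair of opposite-sign vortices would instead translate at constant speed; both behaviors follow directly from the Hamiltonian structure \eqref{equationKirchhoff}.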
In simply-connected bounded domains $\Omega \subset \mathbf{R}^2$, similar singular solutions exist. If one requires for example that the normal component of $\mathbf{v}$ vanishes on the boundary, the associated Kirchhoff--Routh function is then given by \begin{equation} \label{eqKRDomainsHomog}
\mathcal{W}(x_1, \dotsc, x_k)=\frac{1}{2} \sum_{i \ne j} \kappa_i\kappa_j G(x_i, x_j)+\sum_{i=1}^k \frac{\kappa_i^2}{2}H(x_i, x_i), \end{equation} where $G$ is the Green function of $-\Delta$ on $\Omega$ with Dirichlet boundary conditions and $H$ is its regular part.\footnote{The function $x \mapsto H(x, x)$ is called the \emph{Robin function} of $\Omega$.} One can also prescribe a condition $v_n$ on the outward component of the velocity on the boundary. Since we are dealing with an incompressible flow, the boundary data should satisfy $\int_{\partial \Omega} v_n=0$. Let $\mathbf{v}_0$ be the unique harmonic field whose normal component on the boundary is $v_n$; i.e., $\mathbf{v}_0$ satisfies \[ \left\{ \begin{aligned}
\nabla \cdot \mathbf{v}_0&=0, & & \text{in $\Omega$}, \\
\nabla \times \mathbf{v}_0&=0, & & \text{in $\Omega$}, \\
n \cdot \mathbf{v}_0&=v_n& & \text{on $\partial \Omega$}, \end{aligned} \right. \] where $\nabla \times (u, v)=\partial_1 v-\partial_2 u$ and $n$ is the outward normal. The positions of the vortices are then obtained from the modified law \[
\kappa_i \dot{x}_i=(\nabla_{x_i} \mathcal{W})^\perp +\kappa_i \mathbf{v}_0(x_i). \] Since $\Omega$ is simply-connected, $\mathbf{v}_0$ can be written $\mathbf{v}_0=(\nabla \psi_0)^\perp$, where the stream function $\psi_0$ is characterized up to a constant by \begin{equation}
\label{eqpsi0} \left\{ \begin{aligned} -\Delta \psi_0&=0& &\text{in $\Omega$}, \\ -\frac{\partial \psi_0}{\partial \tau}&=v_n & & \text{on $\partial \Omega$}, \end{aligned} \right. \end{equation} where $\frac{\partial \psi_0}{\partial \tau}$ denotes the tangential derivative on $\partial \Omega$. The Kirchhoff--Routh function associated to the vortex dynamics becomes then \begin{equation} \label{KRDomains}
\mathcal{W}(x_1, \dotsc, x_k)=\frac{1}{2} \sum_{i \ne j} \kappa_i\kappa_j G(x_i, x_j)+\sum_{i=1}^k \frac{\kappa_i^2}{2}H(x_i, x_i)+\sum_{i=1}^k \kappa_i \psi_0(x_i), \end{equation} see C.\thinspace C.\thinspace Lin \cite{Lin1941} (who uses opposite sign conventions).
\subsection{Desingularization of vortices} One way to justify the weak formulation for point vortex solutions of the Euler equations is to approximate these solutions by classical solutions. This can actually be done, on finite time intervals, by considering regularized initial data for the vorticity (see e.g.\ C.\thinspace Marchioro and M.\thinspace Pulvirenti \cite{MarchioroPulvirenti1983}).
Critical points of the Kirchhoff--Routh function $\mathcal{W}$ give rise to stationary vortex-point solutions of the Euler equations. As noted above, these weak stationary solutions can be approximated by classical solutions of the Euler equations. These need not be stationary solutions, though, and one may wish to approximate the stationary vortex-point solutions by stationary classical solutions. In the simplest case, corresponding to a single point vortex in a simply-connected domain, we obtain the following
\begin{theorem}\label{thm:resu}
Let $\Omega \subset \mathbf{R}^2$ be a bounded simply-connected smooth\footnote{Here and in the sequel, smooth means Lipschitz and is sufficient for our goals.} domain and let $v_n \in L^s(\partial \Omega)$ for some $s>1$ be such that $\int_{\partial \Omega} v_n = 0$. Let $\kappa >0$ be given. For every $\varepsilon>0$ small enough, there exists a smooth stationary solution $\mathbf{v}_\varepsilon$ of the Euler equation in $\Omega$ with outward boundary flux given by $v_n$, corresponding to a vorticity $\omega_\varepsilon$, such that ${\rm supp}(\omega_\varepsilon) \subset B(x_\varepsilon, C\varepsilon)$ for some $x_\varepsilon \in \Omega$ and $C>0$ not depending on $\varepsilon$. Moreover, as $\varepsilon \to 0$, \[ \int_\Omega \omega_\varepsilon \to \kappa, \] and \[
\mathcal{W}(x_\varepsilon) \to \sup_{x \in \Omega} \mathcal{W}(x). \] \end{theorem}
Other situations, corresponding to pairs of vortices of opposite signs, multiply-connected bounded domains or unbounded domains are discussed in Section~\ref{sect:resu}.
We are aware essentially of two methods to construct stationary solutions of the Euler equations that we call the vorticity method and the stream-function method.
The vorticity method was introduced by V.\thinspace Arnold (see \cite{ArnoldKhesin}*{Chapter II \S 2}), and was implemented successfully by G.\thinspace R.\thinspace Burton \cite{Burton1988} and B.\thinspace Turkington \cite{Turkington1983}. It roughly consists in maximizing the kinetic energy \[
\frac{1}{2}\int_{\Omega} \int_{\Omega} \omega(x)G(x, y)\omega(y)\, dx\, dy+\int_{\Omega} \psi_0(x)\omega(x)\, dx+\frac{1}{2} \int_{\Omega} \abs{\nabla \psi_0}^2, \] under some constraints on the sublevel sets of $\omega$. The function $\omega$ is the vorticity of the flow and a stream function $\psi$ is the solution to \[ \left\{
\begin{aligned}
-\Delta \psi &=\omega & & \text{in $\Omega$},\\
\psi&=\psi_0 & & \text{on $\partial \Omega$}.
\end{aligned} \right. \] Considering suitable families of constraints on the sublevel sets of $\omega$, one can obtain families of solutions converging to stationary vortex-point solutions. The differentiability of those solutions is not guaranteed (the solutions correspond to vortex patches of constant density).
The stream-function method starts from the observation that if $\psi$ satisfies \[
-\Delta \psi=f(\psi), \] for some arbitrary function $f \in C^1(\mathbf{R})$, then $\mathbf{v}=(\nabla \psi)^\perp$ and $p=-F(\psi)-\frac{1}{2}\abs{\nabla \psi}^2$, with $F(s)=\int_0^s f$, form a stationary solution to the Euler equations. Moreover, the velocity $\mathbf{v}$ is irrotational on the set where $f(\psi)=0$.
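For the reader's convenience, this observation can be checked componentwise: writing $\mathbf{v}=(\partial_2 \psi, -\partial_1 \psi)$, a direct computation gives
\[
(\mathbf{v}\cdot \nabla)\mathbf{v}
= \nabla\Bigl(\frac{1}{2}\abs{\nabla \psi}^2\Bigr)-(\Delta \psi)\nabla \psi
= \nabla\Bigl(\frac{1}{2}\abs{\nabla \psi}^2+F(\psi)\Bigr),
\]
so that the stationary momentum equation $(\mathbf{v}\cdot \nabla)\mathbf{v}=-\nabla p$ determines the pressure up to an additive constant, while $\nabla \cdot \mathbf{v}=\partial_1\partial_2 \psi-\partial_2\partial_1 \psi=0$ and $\nabla \times \mathbf{v}=-\Delta \psi=f(\psi)$, which also justifies the irrotationality claim.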
We now set $q=-\psi_0$ and $u=\psi-\psi_0$, so that $u=0$ on $\partial \Omega$ and $-\Delta u = f(u-q)$ in $\Omega$. If we assume that $\inf_{\Omega} q > 0$ and $f(t)=0$ when $t \le 0$, the vorticity set $\{ x \: :\: f(u(x)-q(x))> 0 \}$ is bounded away from the boundary. When $f$ also satisfies some monotonicity and growth conditions, $\Omega=\mathbf{R}^2_+$ is the half-plane and $q(x)=W x_1+d$ with $W > 0$ and $d>0$, J.\thinspace Norbury \cite{Norbury1975} has shown the existence of solutions to $-\Delta u = \nu f(u-q)$, where $\nu > 0$ is an a priori unknown Lagrange multiplier, by minimizing $\int_{\Omega} \abs{\nabla u}^2$ in $H^1_0(\Omega)$ under the constraint \[
\int_{\Omega} F(u-q)=\mu. \] M.\thinspace S.\thinspace Berger and L.\thinspace E.\thinspace Fraenkel \cite{BergerFraenkel1980} have obtained corresponding results for a bounded domain $\Omega \subset \mathbf{R}^2$, and they began studying the asymptotics for variable $\mu$ and $q$, but the lack of information on $\nu$ remained an obstacle.
The unknown $\nu$ can be avoided by minimizing $\int_{\Omega} \frac{1}{2}\abs{\nabla u}^2-\frac{1}{\varepsilon^2}F(u-q)$ under the natural constraint $\int_{\Omega} \abs{\nabla u}^2-\frac{1}{\varepsilon^2}u f(u-q)=0$. Yang Jianfu \cite{Yang1991} has used this approach in $\mathbf{R}^2_+$ with $q(x)=Wx_1+d$ and has studied the asymptotic behavior of the solution $u^\varepsilon$ when $\varepsilon \to 0$: If \begin{align*}
A^\varepsilon&=\{ x \in \mathbf{R}^2_+ \: :\: f(u^\varepsilon-q) > 0\}, & \kappa^\varepsilon&=\frac{1}{\varepsilon^2} \int_{\mathbf{R}^2_+} f(u^\varepsilon-q), \end{align*} and $x^\varepsilon \in A^\varepsilon$, then $\diam A^\varepsilon \to 0$, $\dist(x^\varepsilon, \partial \mathbf{R}^2_+)\to 0$, and \[
\frac{u^\varepsilon}{\kappa^\varepsilon}-G(\cdot, x^\varepsilon) \to 0 \] in $W^{1, r}_{\mathrm{loc}}(\mathbf{R}^2_+)$, for $r \in [1, 2)$. Li Gongbao, Yan Shusen and Yang Jianfu \cite{LiYanYang2005} obtained a similar result on bounded domains, with the additional information that $q(x^\varepsilon) \to \min_{\Omega} q$. These results are in striking contrast with the observation made at the beginning that the dynamics of the vortices is governed by the Kirchhoff--Routh function $\mathcal{W}$ defined by \eqref{KRDomains}, which implies that stationary vortices should be localized around a critical point of $x \mapsto \frac{\kappa^2}{2} H(x, x)-\kappa q(x)$.
In fact, the results in \cites{Yang1991, LiYanYang2005} do not answer the question about the desingularization of stationary vortex point solutions to the Euler equation. Indeed, in the case of bounded domains for example, their solutions satisfy $\Norm{\nabla u^\varepsilon}_{\mathrm{L}^2}^2=O\bigl(\logeps^{-1}\bigr)$, so that testing the equation against the function $\min(u^\varepsilon, q)$ and using the fact that $q$ is harmonic and nonnegative, we have \[
\kappa^\varepsilon \min_{\partial \Omega} q \le
\frac{1}{\varepsilon^2}\int_{\Omega} f(u^\varepsilon-q)\, q =\int_{\Omega \setminus A^\varepsilon} \nabla u^\varepsilon \cdot \nabla (u^\varepsilon-q)=o(1), \] i.e.\ $\kappa^\varepsilon \to 0$. In some sense, the family of solutions $u^\varepsilon$ provides a desingularization of point-vortex solutions with vanishing vorticity. The asymptotic position is consistent with the fact that when the vorticities tend to zero, the term $\sum_{i=1}^k \kappa_i \psi_0(x_i)$ becomes dominant in the Kirchhoff--Routh function \eqref{KRDomains}.
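To make this estimate explicit, we sketch the computation. Testing the equation $-\Delta u^\varepsilon=\frac{1}{\varepsilon^2}f(u^\varepsilon-q)$ against $\min(u^\varepsilon, q)\in H^1_0(\Omega)$ gives
\[
\int_{\Omega} \nabla u^\varepsilon \cdot \nabla \min(u^\varepsilon, q)
= \frac{1}{\varepsilon^2}\int_{\Omega} f(u^\varepsilon-q)\min(u^\varepsilon, q)
= \frac{1}{\varepsilon^2}\int_{\Omega} f(u^\varepsilon-q)\, q
\ge \kappa^\varepsilon \min_{\partial\Omega} q,
\]
since $f(u^\varepsilon-q)$ vanishes outside $A^\varepsilon$, where $\min(u^\varepsilon,q)=q$, and the harmonic function $q$ attains its minimum on $\partial\Omega$. On the other hand, $\nabla \min(u^\varepsilon,q)$ equals $\nabla u^\varepsilon$ on $\Omega\setminus A^\varepsilon$ and $\nabla q$ on $A^\varepsilon$, while $\int_\Omega \nabla u^\varepsilon\cdot\nabla q=0$ because $q$ is harmonic and $u^\varepsilon \in H^1_0(\Omega)$; hence the left-hand side equals $\int_{\Omega\setminus A^\varepsilon}\nabla u^\varepsilon\cdot\nabla(u^\varepsilon-q)$, which is bounded by $\Norm{\nabla u^\varepsilon}_{\mathrm{L}^2}\bigl(\Norm{\nabla u^\varepsilon}_{\mathrm{L}^2}+\Norm{\nabla q}_{\mathrm{L}^2}\bigr)$ and thus tends to zero.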
In order to desingularize point-vortex solutions with non-vanishing vorticity, M.\thinspace S. Berger and L.\thinspace E.\thinspace Fraenkel \cite{BergerFraenkel1980}*{Remark 2} suggest that $q$ should grow like $\log \frac{1}{\varepsilon}$. This brings us to the study of the problem \begin{equation} \label{problemPeps} \left\{ \begin{aligned} -\Delta u^\varepsilon &=\frac{1}{\varepsilon^2} f(u^\varepsilon - q^\varepsilon ) & &\text{in $\Omega$, }\\ u^\varepsilon &= 0 & &\text{on $\partial \Omega$}, \end{aligned} \right. \tag{\protect{$\mathcal{P}^\varepsilon$}} \end{equation} where $q^\varepsilon=q+\frac{\kappa}{2\pi} \log \frac{1}{\varepsilon}$.
In Section \ref{sectionSingleVortex}, we study $(\mathcal{P}^\varepsilon)$ in a bounded domain: we first construct solutions and then analyze their asymptotic behavior. Theorem~\ref{thm:resu} is an easy consequence of the results in Section~\ref{sectionSingleVortex}. In Section~\ref{sectionmultiply} we present an extension to multiply-connected domains, while in Section~\ref{sectUnbounded}, we present an extension to unbounded domains which are a perturbation of a half-plane. In Section~\ref{sectionVortexPair} we slightly modify $(\mathcal{P}^\varepsilon)$ in order to construct desingularized solutions for two point vortices of opposite signs.
As a final remark, our results seem connected with the work of M.\thinspace del Pino, M.\thinspace Kowalczyk, and M.\thinspace Musso \cite{delPinoKowalczykMusso2005} on the equation \[ -\Delta u=\varepsilon^2 K(x)e^u \] for which the energy concentrates in small balls around points $x_1^\varepsilon, \dotsc, x_k^\varepsilon$. These points tend to a critical point of the function $-\sum_{i=1}^k 2\log K(x_i)-8\pi H(x_i, x_i)-\sum_{i \ne j} 8\pi G(x_i, x_j)$. The connection is clear when one rewrites their equation as $-\Delta u=\frac{1}{\varepsilon^2}\exp(u+\log K-\frac{8\pi}{2\pi}\log \frac{1}{\varepsilon})$. Other related works include the study of the equation $-\Delta u = u^p$ as $p \to \infty$ by P.\thinspace Esposito, M.\thinspace Musso and A.\thinspace Pistoia \cites{EspositoMussoPistoia2006,EspositoMussoPistoia2007}, and the recent work of T.\thinspace Bartsch, A.\thinspace Pistoia and T.\thinspace Weth \cite{BartschPistoiaWeth} in which systems of three and four vortices are desingularized by studying the equation $-\Delta u= \varepsilon^2 \sinh u$. In all these references, whereas the vorticity concentrates at points, its support does not shrink as $\varepsilon \to 0$.
We also bring to the attention of the reader that there is a similar situation with similar results for three-dimensional axisymmetric incompressible inviscid flows by vorticity methods \cites{Burton1987, FridemannTurkington1981} and stream-function methods \cites{BergerFraenkel1974, AmbrosettiStruwe1989, Yang1995}. However we are not aware of a counterpart of the present work for three-dimensional axisymmetric incompressible inviscid flows.
\noindent{\bf Acknowledgements.} This work was initiated during a visit of the second author at Laboratoire Jacques-Louis Lions of Universit\'e Pierre \& Marie Curie. The authors wish to thank Franck Sueur for fruitful remarks following a first version of the manuscript.
\section{Single vortices in bounded domains} \label{sectionSingleVortex}
In this section, $\Omega\subset \mathbf{R}^2$ is a bounded simply-connected smooth domain, $f : \mathbf{R} \to \mathbf{R}$ is the real function defined by $f(s)=s_+^p$ for some $1<p<+\infty$, where $s_+=\max(s, 0)$, and $\kappa > 0$ is given, as well as $q\in \mathrm{W}^{1, r}(\Omega)$ for some $r > 2$.\footnote{Notice that for the proof of Theorem~\ref{thm:resu} we only require a harmonic function $q$ but the proofs of Theorems~\ref{thmLocalMinimum} and \ref{thmRotating} require more general $q$.} We will consider solutions of the boundary value problem \eqref{problemPeps} where $\varepsilon>0$ is a real parameter. The solutions we consider are the least energy solutions obtained by minimizing the energy functional \begin{equation} \label{energyFunctional}
\mathcal{E}^\varepsilon(u)= \int_{\Omega} \Bigl(\frac{|\nabla u|^2}{2} - \frac{1}{\varepsilon^2}F(u-q^\varepsilon)\Bigr) \end{equation} over the natural constraint given by the Nehari manifold \[ \mathcal{N}^\varepsilon = \left\{ u\in H^1_0(\Omega)\setminus \{0\} \ : \ \langle d\mathcal{E}^\varepsilon(u), u\rangle = 0\right\}, \] where $F(s)=\frac{1}{p+1}s_+^{p+1}$ is a primitive of $f$. It is standard to prove the following (see e.g.~\cite{Willem1996}*{Theorem 2.18}) \begin{proposition} \label{prop:2.1} Assume that $q^\varepsilon\geq 0$ on $\Omega$, so that $\mathcal{N}^\varepsilon \neq \emptyset$, and define \[ c^\varepsilon = \inf_{u\in \mathcal{N}^\varepsilon} \mathcal{E}^\varepsilon(u). \] Then, there exists $u^\varepsilon \in \mathcal{N}^\varepsilon$ such that $\mathcal{E}^\varepsilon(u^\varepsilon)=c^\varepsilon$, and $u^\varepsilon$ is a positive solution of $(\mathcal{P}^\varepsilon)$. \end{proposition}
Note that $q$ is bounded since $r>2$, and therefore $q^\varepsilon\geq 0$ provided $\varepsilon$ is sufficiently small.
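Explicitly, since $\langle d\mathcal{E}^\varepsilon(u), \varphi\rangle = \int_{\Omega} \nabla u\cdot\nabla \varphi - \frac{1}{\varepsilon^2}\int_{\Omega} f(u-q^\varepsilon)\varphi$, the constraint defining $\mathcal{N}^\varepsilon$ reads
\[
\int_{\Omega} \abs{\nabla u}^2 = \frac{1}{\varepsilon^2} \int_{\Omega} f(u-q^\varepsilon)\, u,
\]
and every nontrivial solution of \eqref{problemPeps} belongs to $\mathcal{N}^\varepsilon$.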
Our focus is the asymptotics of $u^\varepsilon$ when $\varepsilon \to 0$. In order to describe the asymptotic behavior of $u^\varepsilon$, we introduce the limiting profile $U_\kappa : \mathbf{R}^2 \to \mathbf{R}$ defined as the unique radially symmetric solution of the problem \[ \tag{\protect{$\mathcal{U}_\kappa$}} \label{Ukappa} \left\{ \begin{aligned}
&-\Delta U_\kappa = f(U_\kappa), \\
&\int_{\mathbf{R}^2} f(U_\kappa) =\kappa. \end{aligned} \right. \] For every $\kappa>0$, there exists $\rho_\kappa>0$ such that \[
U_\kappa(y)= \left\{ \begin{aligned}
&V_{\rho_\kappa}(y)& &\text{if $y \in B(0, \rho_\kappa)$}, \\
&\frac{\kappa}{2\pi} \log \frac{\rho_\kappa}{\abs{y}} & &\text{if $y \in \mathbf{R}^2 \setminus B(0, \rho_\kappa)$}, \end{aligned}\right. \] where $V_\rho : B(0,\rho) \to \mathbf{R}$ satisfies \[ \left\{ \begin{aligned} \displaystyle -\Delta V_\rho &= V_\rho^p & & \text{in $B(0, \rho)$}, \\ V_\rho &= 0 && \text{on $\partial B(0, \rho)$}. \end{aligned} \right. \] One can show that $\kappa=\gamma \rho_\kappa^{-\frac{2}{p-1}}$, for some constant $\gamma > 0$ depending only on $p$.
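The scaling relation can be derived as follows: if $V_1$ solves the problem in $B(0,1)$, then $V_\rho(y)=\rho^{-\frac{2}{p-1}}V_1(y/\rho)$ solves it in $B(0, \rho)$, since both $-\Delta V_\rho$ and $V_\rho^p$ then scale like $\rho^{-\frac{2p}{p-1}}$, and therefore
\[
\kappa=\int_{\mathbf{R}^2} f(U_\kappa)
=\int_{B(0, \rho_\kappa)} V_{\rho_\kappa}^p
=\rho_\kappa^{-\frac{2p}{p-1}+2}\int_{B(0, 1)} V_1^p
=\gamma\, \rho_\kappa^{-\frac{2}{p-1}},
\qquad \gamma=\int_{B(0,1)} V_1^p.
\]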
The Kirchhoff--Routh function $\mathcal{W}$ for one vortex of vorticity $\kappa$ is defined by \[
\mathcal{W}(x)=\frac{\kappa^2}{2} H(x, x)-\kappa q(x). \] Let us also define the quantity \[ \mathcal{C} = \frac{\kappa^2}{4\pi} \log \rho_\kappa +
\int_{B(0, \rho_\kappa)}\Bigl(\frac{|\nabla V_{\rho_\kappa}|^2}{2} - \frac{V_{\rho_\kappa}^{p+1}}{p+1}\Bigr). \] While the function $\mathcal{W}$ depends on $x \in \Omega$ and on $\kappa$, the quantity $\mathcal{C}$ only depends on $\kappa$ and on $p$.
We set \begin{equation} \begin{aligned}\label{defiq}
A^\varepsilon&=\Big\{ x \in \Omega \: :\: u^\varepsilon(x)> q^\varepsilon(x)\Big\}, \\
\omega^\varepsilon&=\frac{1}{\varepsilon^2} f(u^\varepsilon-q^\varepsilon), \\
\kappa^\varepsilon&=\int_{\Omega} \omega^\varepsilon, \\
x^\varepsilon&=\frac{1}{\kappa^\varepsilon}\int_{\Omega} x \, \omega^\varepsilon(x)\, dx, \\
\rho^\varepsilon&=\rho_{\kappa^\varepsilon}, \end{aligned} \end{equation} and respectively refer to these as the vorticity set, the vorticity, the total vorticity, the center of vorticity, and the vorticity radius.
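For concreteness, these diagnostics are easy to approximate from a sampled vorticity field. The following sketch (ours, purely illustrative; the function name is not from any library) evaluates the total vorticity and the center of vorticity by Riemann sums and recovers the parameters of a Gaussian test blob.

```python
import math

def vorticity_diagnostics(omega, xs, ys):
    # omega[i][j] ~ vorticity at (xs[i], ys[j]) on a uniform grid; returns
    # (kappa, (cx, cy)): Riemann-sum approximations of the total vorticity
    # kappa = int omega and of the center of vorticity (1/kappa) int x omega.
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    kappa = cx = cy = 0.0
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            w = omega[i][j] * dx * dy
            kappa += w
            cx += x * w
            cy += y * w
    return kappa, (cx / kappa, cy / kappa)

# Sanity check on a Gaussian vortex blob of unit total vorticity
# centered at (0.3, -0.2).
sigma = 0.1
xs = [-1.0 + 0.01 * k for k in range(201)]
ys = [-1.0 + 0.01 * k for k in range(201)]
omega = [[math.exp(-((x - 0.3) ** 2 + (y + 0.2) ** 2) / (2 * sigma ** 2))
          / (2 * math.pi * sigma ** 2) for y in ys] for x in xs]
kappa, (cx, cy) = vorticity_diagnostics(omega, xs, ys)
```

On a grid that resolves the blob, the sums reproduce the total vorticity and the center of vorticity to high accuracy; the same quantities are what Theorem~\ref{thm:K1} tracks for the concentrated vorticities $\omega^\varepsilon$.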
We will prove
\begin{theorem}\label{thm:K1} As $\varepsilon \to 0$, we have \[
u^\varepsilon=U_{\kappa^\varepsilon} \Big(\frac{\cdot-x^\varepsilon}{\varepsilon}\Big)+\kappa^\varepsilon\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon \rho^\varepsilon}+ H(x^\varepsilon, \cdot)\Bigr)+o(1), \] \text{in $\mathrm{W}^{2, 1}_\mathrm{loc}(\Omega)$, in $\mathrm{W}^{1, 2}_0(\Omega)$, and in $\mathrm{L}^\infty(\Omega)$}, where \[
\kappa^\varepsilon=\kappa+\frac{2\pi}{\log \frac{1}{\varepsilon}}\Bigl(q(x^\varepsilon)-\kappa H(x^\varepsilon, x^\varepsilon) -\frac{\kappa}{2\pi} \log \frac{1}{\rho_\kappa} \Bigr)+o(\logeps^{-1}), \] and \[
\mathcal{W}(x^\varepsilon) \to \sup_{x \in \Omega} \mathcal{W}(x). \] One also has \[
B(x^\varepsilon, \Bar{r}^\varepsilon) \subset A^\varepsilon \subset B(x^\varepsilon, \mathring{r}^\varepsilon), \] with $\Bar{r}^\varepsilon=\varepsilon \rho_\kappa+o(\varepsilon)$ and $\mathring{r}^\varepsilon=\varepsilon \rho_\kappa +o(\varepsilon)$. Finally, \[
\mathcal{E}^\varepsilon (u^\varepsilon)= \frac{\kappa^2}{4\pi}\log \frac{1}{\varepsilon}-\mathcal{W}(x^\varepsilon)+\mathcal{C}+o(1). \] \end{theorem}
Since $\mathcal{W}(x) \to -\infty$ as $x \to \partial \Omega$, by Theorem~\ref{thm:K1}, up to a subsequence, $x^\varepsilon \to x_*\in \Omega$. Combined with standard elliptic estimates, this yields the convergence $u^\varepsilon \to \kappa G(x_*, \cdot\, )$ in $\mathrm{W}^{1, p}_0(\Omega)$ for any $p<2$ and in $\mathcal{C}^k_{\mathrm{loc}}(\Omega\setminus \{x_*\})$ for any $k\in \mathbf{N}$. If $\partial \Omega$ is smooth enough, then one also has convergence in $\mathcal{C}^k_{\mathrm{loc}}(\Bar{\Omega}\setminus \{x_*\})$.
The proof of Theorem~\ref{thm:K1} is twofold. First, in Corollary~\ref{cor:upper}, we prove a sharp upper bound for the critical level $c^\varepsilon$. Then, in Proposition~\ref{prop:1mai} we show that any solution satisfying this upper bound needs to satisfy the asymptotic expansion.
\subsection{Upper bounds on the energy} \label{upperBounds}
We will derive upper bounds for $c^\varepsilon$ by constructing elements of $\mathcal{N}^\varepsilon$ similar to the asymptotic expression of Theorem~\ref{thm:K1}.
\begin{lemma} \label{lemmaHatuNehari} For every $\Hat{x} \in \Omega$, if $\varepsilon>0$ is small enough, there exists \[
\Hat{\kappa}^\varepsilon=\kappa+\frac{2\pi}{\log \tfrac{1}{\varepsilon}}\Bigl( q(\Hat{x})-\kappa H(\Hat{x}, \Hat{x})+\dfrac{\kappa}{2\pi} \log \rho_\kappa \Bigr)+O\bigl(\logeps^{-2}\bigr), \] such that, if \[
\Hat{u}^\varepsilon(x)=U_{\Hat{\kappa}^\varepsilon}\Bigl(\frac{x-\Hat{x}}{\varepsilon}\Bigr)+\Hat{\kappa}^\varepsilon
\Bigl( \frac{1}{2\pi} \log \frac{1}{\varepsilon\rho_{\Hat{\kappa}^\varepsilon}}+H(\Hat{x}, x) \Bigr), \] then \[
\Hat{u}^\varepsilon\in \mathcal{N}^\varepsilon. \] Moreover, we have \[
\Hat{A}^\varepsilon:=\Bigl\{ x \: :\: \Hat{u}^\varepsilon(x) > q(x)+\frac{\kappa}{2\pi} \log \frac{1}{\varepsilon} \Bigr\} \subset B(\Hat{x}, \Hat{r}^\varepsilon), \] with $\Hat{r}^\varepsilon=O(\varepsilon)$. \end{lemma} \begin{proof} For $\sigma \in \mathbf{R}$, define \begin{align*}
\Hat{\kappa}^{\varepsilon, \sigma}&=\frac{q^\varepsilon(\Hat{x})+\sigma}{\tfrac{1}{2\pi} \log \tfrac{1}{\varepsilon \rho_\kappa}+H(\Hat{x}, \Hat{x})}, \\
\Hat{\rho}^{\varepsilon, \sigma}&=\rho_{\Hat{\kappa}^{\varepsilon, \sigma}}, \\
\Hat{u}^{\varepsilon, \sigma}(x)&=U_{\Hat{\kappa}^{\varepsilon, \sigma}}\Bigl(\frac{x-\Hat{x}}{\varepsilon}\Bigr)+\Hat{\kappa}^{\varepsilon, \sigma} \Bigl( \frac{1}{2\pi} \log \frac{1}{\varepsilon\rho_{\Hat{\kappa}^{\varepsilon, \sigma}}}+H(\Hat{x}, x) \Bigr). \end{align*} First note that when $\varepsilon>0$ is sufficiently small, $\Hat{u}^{\varepsilon, \sigma}(x)=\Hat{\kappa}^{\varepsilon, \sigma} G(\Hat{x}, x)$ in a neighborhood of $\partial \Omega$, so that $\Hat{u}^{\varepsilon, \sigma} \in W^{1, 2}_0(\Omega)$ and we can define \[
g^\varepsilon(\sigma)=\langle d \mathcal{E}^\varepsilon (\Hat{u}^{\varepsilon, \sigma}), \Hat{u}^{\varepsilon, \sigma} \rangle. \] Among the terms involved in $g^\varepsilon(\sigma)$, we may already compute \[ \begin{split}
\int_{\Omega} \abs{\nabla \Hat{u}^{\varepsilon, \sigma}}^2
&=\int_{B(\Hat{x}, \varepsilon \rho_{\Hat{\kappa}^{\varepsilon, \sigma}})}\!\!\!\!\!\!\!\!\!\!\!\! \abs{\nabla (U_{\Hat{\kappa}^{\varepsilon, \sigma}}(\tfrac{\cdot-\Hat{x}}{\varepsilon}) +\Hat{\kappa}^{\varepsilon, \sigma} H(\Hat{x}, \cdot))}^2 \\ &\qquad\qquad+(\Hat{\kappa}^{\varepsilon, \sigma})^2\int_{\Omega \setminus B(\Hat{x}, \rho_{\Hat{\kappa}^{\varepsilon, \sigma}} \varepsilon)}\!\!\!\!\!\!\!\!\!\!\!\! \abs{\nabla G(\Hat{x}, \cdot)}^2 \\
&=\int_{B(0, \rho_{\Hat{\kappa}^{\varepsilon, \sigma}})}\!\!\!\!\!\!\!\!\!\!\!\! \abs{\nabla U_{\Hat{\kappa}^{\varepsilon, \sigma}}}^2+O(\varepsilon) \\ &\qquad\qquad+(\Hat{\kappa}^{\varepsilon, \sigma})^2\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon \rho_{\Hat{\kappa}^{\varepsilon, \sigma}}}+H(\Hat{x}, \Hat{x})+O(\varepsilon) \Bigr) \\
&=\int_{B(0, \rho_{\Hat{\kappa}^{\varepsilon, \sigma}})}\!\!\!\!\!\!\!\!\!\!\!\! \abs{\nabla U_{\Hat{\kappa}^{\varepsilon, \sigma}}}^2
+\Hat{\kappa}^{\varepsilon, \sigma} \bigl( q^\varepsilon(\Hat{x})+\sigma \bigr)+O(\varepsilon). \end{split} \]
In order to estimate the second term involved in $g^\varepsilon(\sigma)$, namely $\frac{1}{\varepsilon^2}\int_{\Omega} f(\Hat{u}^{\varepsilon, \sigma}-q^\varepsilon)\Hat{u}^{\varepsilon, \sigma}$, we first claim that \begin{equation} \label{HatAepssigma}
\Hat{A}^{\varepsilon, \sigma}:=\bigl\{x \in \Omega \: :\: \Hat{u}^{\varepsilon, \sigma}(x) >
q^\varepsilon(x)\bigr\} \subset B(\Hat{x}, r^\varepsilon), \end{equation} with $r^\varepsilon=O(\varepsilon)$. Indeed, let $x \in \Hat{A}^{\varepsilon, \sigma} \setminus B(\Hat{x}, \Hat{\rho}^{\varepsilon, \sigma}\varepsilon)$. One has, by definition of $\Hat{u}^{\varepsilon, \sigma}(x)$ and of $\Hat{\kappa}^{\varepsilon, \sigma}$, \[
\Hat{\kappa}^{\varepsilon, \sigma}\Bigl(\frac{1}{2\pi}\log
\frac{1}{\varepsilon}+\frac{1}{2\pi} \log \frac{\varepsilon}{\abs{x-\Hat{x}}}+H(\Hat{x}, x)\Bigr) > q(x)+\frac{\kappa}{2\pi} \log \frac{1}{\varepsilon},
\] so that \begin{equation} \label{ineqVorticitySetUpperFrac}
\frac{\dfrac{1}{2\pi} \log \dfrac{1}{\varepsilon}+\dfrac{1}{2\pi} \log
\dfrac{\varepsilon}{\abs{x-\Hat{x}}}+ H(\Hat{x}, x)}{\dfrac{\kappa}{2\pi} \log \dfrac{1}{\varepsilon}+q(x)} \ge \dfrac{\dfrac{1}{2\pi} \log \dfrac{1}{\varepsilon \rho_\kappa}+H(\Hat{x}, \Hat{x})}{\dfrac{\kappa}{2\pi} \log \dfrac{1}{\varepsilon}+q(\Hat{x})+\sigma}. \end{equation} Since $q$ and $H(\Hat{x}, \cdot)$ are bounded functions, one obtains that \[ \frac{1}{\kappa}+\frac{\log \frac{\varepsilon}{\abs{x-\Hat{x}}}}{\kappa \log \dfrac{1}{\varepsilon}}\ge \frac{1}{\kappa}+O\bigl(\logeps^{-1}\bigr), \] and the claim is proved. We deduce from~\eqref{HatAepssigma} that for every $x \in \Hat{A}^{\varepsilon, \sigma}$ \[
\Hat{u}^{\varepsilon, \sigma}(x)-q^\varepsilon(x)=U_{\Hat{\kappa}^{\varepsilon, \sigma}}\Bigl(\frac{x-\Hat{x}}{\varepsilon}\Bigr)+\sigma+O(\varepsilon). \]
We may now estimate \[ \begin{split}
\frac{1}{\varepsilon^2}\int_{\Omega}& f(\Hat{u}^{\varepsilon, \sigma}-q^\varepsilon)\Hat{u}^{\varepsilon, \sigma}
=\frac{1}{\varepsilon^2}\int_{\Hat{A}^{\varepsilon, \sigma}} f(\Hat{u}^{\varepsilon, \sigma}-q^\varepsilon)\Hat{u}^{\varepsilon, \sigma}\\
&=\frac{1}{\varepsilon^2}\int_{\Hat{A}^{\varepsilon, \sigma}} f(\Hat{u}^{\varepsilon, \sigma}-q^\varepsilon)U_{\Hat{\kappa}^{\varepsilon, \sigma}}(\tfrac{\cdot-\Hat{x}}{\varepsilon}) \\
&\qquad\qquad+\frac{\Hat{\kappa}^{\varepsilon, \sigma}}{\varepsilon^2}\int_{\Hat{A}^{\varepsilon, \sigma}} f(\Hat{u}^{\varepsilon, \sigma}-q^\varepsilon)\bigl(\tfrac{1}{2\pi}\log \tfrac{1}{\varepsilon \Hat{\rho}^{\varepsilon, \sigma}}+ H(\Hat{x}, \cdot)\bigr) \\
&=\int_{\mathbf{R}^2} f(U_{\Hat{\kappa}^{\varepsilon, \sigma}}+\sigma)U_{\Hat{\kappa}^{\varepsilon, \sigma}}+O(\varepsilon) \\ &\qquad\qquad+ \Hat{\kappa}^{\varepsilon, \sigma}\bigl(\tfrac{1}{2\pi} \log \tfrac{1}{\varepsilon \rho_\kappa}+ H(\Hat{x}, \Hat{x})+O(\varepsilon)\bigr)\Bigl(\int_{\mathbf{R}^2} f(U_\kappa+\sigma)+O(\varepsilon)\Bigr)\\
&=\int_{\mathbf{R}^2} f(U_{\Hat{\kappa}^{\varepsilon, \sigma}}+\sigma)U_{\Hat{\kappa}^{\varepsilon, \sigma}} \\ &\qquad\qquad+ \big(\tfrac{\kappa}{2\pi} \log \tfrac{1}{\varepsilon}+q(\Hat{x})+\sigma\big)\int_{\mathbf{R}^2}f(U_{\Hat{\kappa}^{\varepsilon, \sigma}}+\sigma)+O(\varepsilon \logeps). \end{split} \] Summarizing, we have \[ \begin{split} g^\varepsilon(\sigma)&=\frac{\kappa}{2\pi}\log \frac{1}{\varepsilon} \Bigl(\Hat{\kappa}^{\varepsilon, \sigma} - \int_{\mathbf{R}^2} f(U_{\Hat{\kappa}^{\varepsilon, \sigma}}+\sigma)\Bigr)+O(1)\\ &=\frac{\kappa}{2\pi} \log \frac{1}{\varepsilon} \int_{\mathbf{R}^2} \bigl(f(U_{\kappa})-f(U_{\kappa}+\sigma)\bigr)+O(1). \end{split} \] Since $g^\varepsilon$ is continuous and $\sigma \cdot \int_{\mathbf{R}^2} \bigl(f(U_\kappa)-f(U_\kappa+\sigma)\bigr)<0$ when $\sigma \ne 0$, there exists $\sigma^\varepsilon$ such that $g^\varepsilon(\sigma^\varepsilon)=0$ and $\sigma^\varepsilon \to 0$ as $\varepsilon\to 0$. One then sets $\Hat{\kappa}^\varepsilon=\Hat{\kappa}^{\varepsilon, \sigma^\varepsilon}$. \end{proof}
\begin{lemma} \label{lemmaEnergyHatu} For every $\Hat{x} \in \Omega$, we have \[
c^\varepsilon \le \frac{\kappa^2}{4\pi}\log \frac{1}{\varepsilon}
-\mathcal{W}(\Hat{x})+\mathcal{C}+o(1)\qquad\text{as }\varepsilon\to 0. \] \end{lemma} \begin{proof} By Lemma~\ref{lemmaHatuNehari}, $\Hat{u}^\varepsilon \in \mathcal{N}^\varepsilon$, so that $c^\varepsilon \leq \mathcal{E}^\varepsilon(\Hat{u}^\varepsilon)$. We compute the energy of $\Hat{u}^\varepsilon$ as follows. First, \[ \begin{split}
\int_{\Omega} \abs{\nabla \Hat{u}^\varepsilon}^2
&=-\int_{\Omega} \Hat{u}^\varepsilon \Delta \Hat{u}^\varepsilon\\
&= -\int_{\mathbf{R}^2} U_\kappa \Delta U_\kappa+(\Hat{\kappa}^\varepsilon)^2\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon \rho_\kappa}+H(\Hat{x}, \Hat{x})\Bigr)+o(1)\\
&=\int_{\mathbf{R}^2} \abs{\nabla (U_\kappa)_+}^2+\frac{\kappa^2}{2\pi} \log \frac{1}{\varepsilon} +2\kappa q(\Hat x)-\kappa^2 H(\Hat x, \Hat x) +\frac{\kappa^2}{2\pi} \log{\rho_\kappa} +o(1). \end{split} \] Next, \[ \begin{split}
\frac{1}{\varepsilon^2}\int_{\Omega} F(\Hat{u}^\varepsilon-q^\varepsilon)
&=\frac{1}{\varepsilon^2}\int_{\Hat{A}^\varepsilon} F(\Hat{u}^\varepsilon-q^\varepsilon)\\
&=\frac{1}{\varepsilon^2}\int_{\Hat{A}^\varepsilon} F(\Hat{u}^\varepsilon-q^\varepsilon(\Hat{x}))+o(1)\\
&=\int_{\mathbf{R}^2} F(U_\kappa)+o(1), \end{split} \] and the conclusion follows from the definitions of $\mathcal{W}$ and $\mathcal{C}$. \end{proof}
\begin{corollary}\label{cor:upper} We have \[ c^\varepsilon \leq \frac{\kappa^2}{4\pi} \log \frac{1}{\varepsilon} -\sup_{x\in \Omega} \mathcal{W}(x) +\mathcal{C} + o(1). \] \end{corollary}
\subsection{Asymptotic behavior of solutions}
The main goal of this section is to prove \begin{proposition}\label{prop:1mai}
Let $(v^\varepsilon)$ be a family of solutions to \eqref{problemPeps} such that $v^\varepsilon \ne 0$ and \begin{equation} \label{assumptEnergyUpperbound}
\mathcal{E}^\varepsilon(v^\varepsilon) \le \frac{\kappa^2}{4\pi} \log \frac{1}{\varepsilon}+O(1), \end{equation} as $\varepsilon \to 0$. Define the quantities $A^\varepsilon$, $\omega^\varepsilon$, $\kappa^\varepsilon$, $x^\varepsilon$ and $\rho^\varepsilon$ for $v^\varepsilon$ as in \eqref{defiq} for $u^\varepsilon$.
Then
\[
v^\varepsilon=U_{\kappa^\varepsilon} (\tfrac{\cdot-x^\varepsilon}{\varepsilon})+\kappa^\varepsilon\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon \rho^\varepsilon}+ H(x^\varepsilon, \cdot)\Bigr)+o(1), \] in $\mathrm{W}^{2, 1}_\mathrm{loc}(\Omega)$, in $\mathrm{W}^{1, 2}_0(\Omega)$, and in $\mathrm{L}^\infty(\Omega)$, where \[
\kappa^\varepsilon=\kappa+\frac{2\pi}{\log \frac{1}{\varepsilon}}\Bigl(q(x^\varepsilon)-\kappa H(x^\varepsilon, x^\varepsilon) -\frac{\kappa}{2\pi} \log \frac{1}{\rho_\kappa} \Bigr)+o(\logeps^{-1}). \] In particular, we have \[
\mathcal{E}^\varepsilon (v^\varepsilon)= \frac{\kappa^2}{4\pi}\log \frac{1}{\varepsilon}-\mathcal{W}(x^\varepsilon)+\mathcal{C}+o(1) \] and \[
B(x^\varepsilon, \Bar{r}^\varepsilon) \subset A^\varepsilon \subset B(x^\varepsilon, \mathring{r}^\varepsilon), \] with $\Bar{r}^\varepsilon=\varepsilon \rho_\kappa+o(\varepsilon)$ and $\mathring{r}^\varepsilon=\varepsilon \rho_\kappa +o(\varepsilon)$. \end{proposition} In other words, $v^\varepsilon$ satisfies the same asymptotics as the one stated in Theorem~\ref{thm:K1} for $u^\varepsilon$ except for the convergence of $x^\varepsilon$.
In the sequel, $v^\varepsilon$ denotes a family of nontrivial solutions to \eqref{problemPeps} verifying \eqref{assumptEnergyUpperbound}. We divide the proof of Proposition \ref{prop:1mai} into several steps.
\subsubsection{Step 1: First quantitative properties of the solutions}
In this section, we derive various types of estimates for $v^\varepsilon$. \begin{proposition}\label{propositionEstimatesueps} We have, as $\varepsilon \to 0$, \begin{gather} \label{ineqMuAeps}\muleb{2}(A^\varepsilon) = O\bigl(\logeps^{-1}\bigr), \\ \label{ineqVortexEnergy} \int_{A^\varepsilon} \abs{\nabla (v^\varepsilon-q^\varepsilon)}^2 =O(1), \\ \label{ineqVortexPotential}\frac{1}{\varepsilon^2}\int_{A^\varepsilon} F(v^\varepsilon-q^\varepsilon) =O(1), \\
\label{eq:2etoiles}\int_{\Omega\setminus A^\varepsilon} |\nabla v^\varepsilon|^2 \leq \frac{\kappa^2}{2\pi} \log\frac{1}{\varepsilon} + O(1), \\ \label{ineqTotalVorticity}\int_{\Omega} \omega^\varepsilon \leq \kappa + O\bigl(\logeps^{-1}\bigr). \end{gather} \end{proposition} \begin{proof} First note that for $\varepsilon>0$ sufficiently small, \begin{equation} \label{ineqEnergy}
\Bigl(\frac{1}{2}-\frac{1}{p+1}\Bigr)\int_\Omega |\nabla v^\varepsilon|^2 \leq \mathcal{E}^\varepsilon(v^\varepsilon). \end{equation} Indeed, \[
\mathcal{E}^\varepsilon(v^\varepsilon) = \frac{1}{2}\int_\Omega |\nabla v^\varepsilon|^2 - \frac{1}{p+1}\int_\Omega \frac{1}{\varepsilon^2}f(v^\varepsilon-q^\varepsilon)(v^\varepsilon-q^\varepsilon)_+, \] and, by testing $(\mathcal{P}^\varepsilon)$ against $v^\varepsilon$, \[
0 = \frac{1}{p+1}\int_\Omega |\nabla v^\varepsilon|^2 - \frac{1}{p+1}\int_\Omega \frac{1}{\varepsilon^2}f(v^\varepsilon-q^\varepsilon)v^\varepsilon. \] Since $(v^\varepsilon-q^\varepsilon)_+\leq v^\varepsilon$ when $q^\varepsilon\geq 0$, and hence when $\varepsilon$ is sufficiently small, \eqref{ineqEnergy} follows by subtraction.
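For completeness, let us record the subtraction explicitly; since $f \ge 0$ and $(v^\varepsilon-q^\varepsilon)_+\le v^\varepsilon$,
\[
\mathcal{E}^\varepsilon(v^\varepsilon) =\Bigl(\frac{1}{2}-\frac{1}{p+1}\Bigr)\int_\Omega \abs{\nabla v^\varepsilon}^2 +\frac{1}{p+1}\int_\Omega \frac{1}{\varepsilon^2} f(v^\varepsilon-q^\varepsilon)\bigl(v^\varepsilon-(v^\varepsilon-q^\varepsilon)_+\bigr) \ge \Bigl(\frac{1}{2}-\frac{1}{p+1}\Bigr)\int_\Omega \abs{\nabla v^\varepsilon}^2.
\]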
In order to obtain \eqref{ineqMuAeps}, first note that since $q$ is bounded from below, for $\varepsilon$ sufficiently small, $\inf_{\Omega} q^\varepsilon > \frac{\kappa}{4\pi} \log \frac{1}{\varepsilon}$. By the Chebyshev and Poincar\'e inequalities, it follows that \[
\muleb{2}(A^\varepsilon) \le \Bigl(\frac{1}{\inf_{\Omega} q^\varepsilon}\Bigr)^2 \int_{\Omega} \abs{v^\varepsilon}^2 \le \frac{C}{\logeps^2} \int_{\Omega} \abs{\nabla v^\varepsilon}^2\le \frac{C'}{\logeps}, \] where the last inequality is a consequence of \eqref{ineqEnergy} and \eqref{assumptEnergyUpperbound}.
We claim that \begin{equation} \label{ineqomegaepsL1}
\int_{\Omega} \omega^\varepsilon \leq C. \end{equation} By testing $(\mathcal{P}^\eps)$ against $\min(v^\varepsilon, q^\varepsilon)$ we obtain \begin{equation} \label{ineqVorticityEnergy} \begin{split}
\int_{\Omega} \omega^\varepsilon= \int_{A^\varepsilon} \frac{1}{\varepsilon^2}f(v^\varepsilon-q^\varepsilon)& \leq \frac{1}{\inf_\Omega q^\varepsilon}\int_{A^\varepsilon} \frac{q^\varepsilon}{\varepsilon^2}f(v^\varepsilon-q^\varepsilon)\\ &=\frac{1}{\inf_\Omega q^\varepsilon}\int_{\Omega\setminus A^\varepsilon} \abs{\nabla v^\varepsilon}^2 + \frac{1}{\inf_\Omega q^\varepsilon}\int_{A^\varepsilon} \nabla v^\varepsilon\nabla q. \end{split} \end{equation} In view of \eqref{ineqEnergy}, this yields \[ \kappa^\varepsilon \leq C \frac{\mathcal{E}^\varepsilon(v^\varepsilon) +o(1)}{\log \frac{1}{\varepsilon}}, \] and the estimate \eqref{ineqomegaepsL1} follows from assumption \eqref{assumptEnergyUpperbound}.
Testing now $(\mathcal{P}^\eps)$ against $(v^\varepsilon-q^\varepsilon)_+$, we obtain \begin{equation} \label{eqNehariVortex}
\int_{A^\varepsilon} |\nabla (v^\varepsilon-q^\varepsilon)|^2 = \int_{A^\varepsilon} \frac{1}{\varepsilon^2} (v^\varepsilon-q^\varepsilon)_+^{p+1} -\int_{A^\varepsilon}\nabla (v^\varepsilon-q^\varepsilon) \nabla q. \end{equation} The Gagliardo--Nirenberg inequality \cite{Nirenberg1959}*{p.\thinspace 125} yields \begin{equation} \label{ineqGN} \int_{A^\varepsilon} \frac{1}{\varepsilon^2} (v^\varepsilon-q^\varepsilon)_+^{p+1} \leq C \int_{A^\varepsilon}
\frac{1}{\varepsilon^2} (v^\varepsilon-q^\varepsilon)_+^{p} \left(\int_{A^\varepsilon} |\nabla
(v^\varepsilon-q^\varepsilon)|^2\right)^{\frac{1}{2}}, \end{equation} so that \[ \begin{split}
\int_{A^\varepsilon} |\nabla (v^\varepsilon-q^\varepsilon)|^2 &\leq C \bigl(\Norm{\omega^\varepsilon}_{\mathrm{L}^1}+\Norm{\nabla q^\varepsilon}_{L^2(A^\varepsilon)} \bigr) \Bigl(\int_{A^\varepsilon} \abs{\nabla (v^\varepsilon-q^\varepsilon)}^2\Bigr)^{\frac{1}{2}} \\ &\leq C'\Bigl(\int_{A^\varepsilon} \abs{\nabla (v^\varepsilon-q^\varepsilon)}^2\Bigr)^{\frac{1}{2}}. \end{split} \] Inequality \eqref{ineqVortexEnergy} can therefore be deduced from \eqref{ineqomegaepsL1}, and \eqref{ineqVortexPotential} follows from \eqref{eqNehariVortex}. Finally, \[ \begin{split}
\frac{1}{2}\int_{\Omega \setminus A^\varepsilon} \abs{\nabla v^\varepsilon}^2&=\mathcal{E}^\varepsilon(v^\varepsilon)+\frac{1}{\varepsilon^2}\int_{A^\varepsilon} F(v^\varepsilon-q^\varepsilon)-\frac{1}{2} \int_{A^\varepsilon} \abs{\nabla v^\varepsilon}^2 \\
&\le \frac{\kappa^2}{4\pi} \log \frac{1}{\varepsilon}+O(1), \end{split} \] so that \eqref{eq:2etoiles} holds, and inequality \eqref{ineqTotalVorticity} then follows from \eqref{ineqVorticityEnergy}. \end{proof}
\begin{remark} The use of the Gagliardo--Nirenberg inequality to obtain \eqref{ineqGN} is the only step in our proof that requires $f$ to be a power-like nonlinearity. \end{remark}
\subsubsection{Step 2: Structure of the vorticity set}
We now examine the vorticity set $A^\varepsilon$ further. Since $A^\varepsilon$ is open, it contains at most countably many connected components, which we label $A^\varepsilon_i$, $i \in I^\varepsilon$. If $q$ were a harmonic function (e.g.\ if the only goal were to prove Theorem \ref{thm:resu}), one would deduce from the fact that $u^\varepsilon$ is a minimal energy solution that $A^\varepsilon$ is connected whenever $q^\varepsilon \ge 0$ \cite{BergerFraenkel1974}*{Theorem 3F}, \cite{Norbury1975}*{Theorem 3.4}, \cite{AmbrosettiMancini1981}*{Theorem 4}, \cite{Yang1991}*{Theorem 1}, \cite{LiYanYang2005}*{Proposition 3.1}; this would considerably simplify the analysis that we perform below.
First we have a control on the total area and on the diameter of each connected component.
\begin{lemma} \label{lemmaAreaDiameter} If $\varepsilon > 0$ is sufficiently small, we have \begin{equation} \label{ineqVorticityAreaStrong}
\muleb{2}(A^\varepsilon) \le C \varepsilon^2 \end{equation} and, for every $i \in I^\varepsilon$, \begin{equation} \label{ineqVorticityDiameter}
\diam(A^\varepsilon_i) \le C \varepsilon. \end{equation} \end{lemma} \begin{proof} Set \[
w^\varepsilon=\frac{v^\varepsilon}{\min_{\partial A^\varepsilon}q^\varepsilon}. \] Since $v^\varepsilon =q^\varepsilon $ on $\partial A^\varepsilon$, we have, by \eqref{eq:2etoiles}, \begin{equation} \label{ineqCapacity}
\frac{2\pi}{\capa(A^\varepsilon, \Omega)}
\ge \frac{2\pi}{\displaystyle \int_{\Omega\setminus A^\varepsilon} \abs{\nabla w^\varepsilon}^2}
\ge 2 \pi \frac{\frac{\kappa^2}{4\pi} \bigl(\log \frac{1}{\varepsilon}\bigr)^2+O(\logeps)}{\displaystyle \int_{\Omega\setminus A^\varepsilon} \abs{\nabla v^\varepsilon}^2}=\log \frac{1}{\varepsilon}+O(1). \end{equation} By Proposition~\ref{propositionCapacityArea}, it follows that \[
\log \frac{\muleb{2}(\Omega)}{\muleb{2}(A^\varepsilon)} \ge 2\log \frac{1}{\varepsilon}+O(1), \] from which \eqref{ineqVorticityAreaStrong} follows.
Similarly, we have \[
\frac{2\pi}{\capa(A^\varepsilon_i, \Omega)} \geq \frac{2\pi}{\capa(A^\varepsilon, \Omega)} \geq \log \frac{1}{\varepsilon}+O(1). \] It hence follows from Proposition~\ref{propositionBoundDiameter} and the boundedness of $\Omega$ that \[
\log C\Bigl(1+\frac{1}{\diam (A_i^\varepsilon)}\Bigr) \ge \log \frac{1}{\varepsilon}+O(1), \] which implies \eqref{ineqVorticityDiameter}. \end{proof}
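Explicitly, exponentiating the last inequality gives $C\bigl(1+\frac{1}{\diam(A^\varepsilon_i)}\bigr)\ge \frac{c}{\varepsilon}$, whence
\[
\diam(A^\varepsilon_i)\le \frac{C\varepsilon}{c-C\varepsilon}\le C'\varepsilon
\]
for $\varepsilon$ sufficiently small.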
\begin{lemma} \label{lemmaVortexSplit} There exist positive constants $\gamma$ and $c$ such that when $\varepsilon$ is small enough, for every $i \in I_\varepsilon$, if \begin{equation} \label{eqSplitVortices}
\int_{A^\varepsilon_i} \abs{\nabla (v^\varepsilon-q^\varepsilon)}^2 > \gamma^2, \end{equation} then \begin{gather} \label{ineqLowerBoundArea} \muleb{2}(A_i^\varepsilon)\ge c\varepsilon^2, \\ \label{ineqLowerBoundDiam} \diam(A^\varepsilon_i)\ge c\varepsilon, \\ \label{ineqLowerBoundDistance} \dist(A^\varepsilon_i, \partial \Omega)\ge c, \\ \label{ineqLowerBoundVortex} \int_{A^\varepsilon_i} \omega_\varepsilon \ge c, \end{gather} while if \eqref{eqSplitVortices} does not hold, then for every $s \ge 1$, \begin{equation} \label{ineqfsVanishing} \int_{A^\varepsilon_i} f(v^\varepsilon-q^\varepsilon)^s \le C \Norm{\nabla q}_{\mathrm{L}^r(A^\varepsilon_i)}^{sp} \muleb{2}(A^\varepsilon_i)^{1+\frac{sp}{2}(1-\frac{2}{r})}, \end{equation} where $C>0$ only depends on $s \ge 1$. \end{lemma}
\begin{proof} Starting from \eqref{eqNehariVortex}, and applying the Sobolev and Cauchy--Schwarz inequalities we obtain \begin{multline} \label{ineqGradientVortices}
\int_{A^\varepsilon_i} \abs{\nabla (v^\varepsilon-q^\varepsilon)_+}^2
= \int_{A^\varepsilon_i} \frac{f(v^\varepsilon-q^\varepsilon)}{\varepsilon^2}(v^\varepsilon-q^\varepsilon)_+-\int_{A^\varepsilon_i} \nabla q \cdot \nabla (v^\varepsilon-q^\varepsilon)\\
\le C\frac{\muleb{2}(A^\varepsilon_i)}{\varepsilon^2} \Bigl(\int_{A^\varepsilon_i} \abs{\nabla (v^\varepsilon-q^\varepsilon)_+}^2\Bigr)^{\frac{p+1}{2}}\\ + \Norm{\nabla q}_{\mathrm{L}^2(A^\varepsilon_i)}\Norm{\nabla (v^\varepsilon-q^\varepsilon)_+}_{\mathrm{L}^2(A^\varepsilon_i)}. \end{multline} By Lemma~\ref{lemmaAreaDiameter}, we may choose $\gamma$ sufficiently small so that \[
\gamma^{p-1}\le \frac{\varepsilon^2}{2C\muleb{2}(A^\varepsilon_i)}, \] independently of $\varepsilon$, and therefore if \eqref{eqSplitVortices} does not hold we obtain \begin{equation} \label{ineqVanVorticesuq}
\frac{1}{2}\int_{A^\varepsilon_i} \abs{\nabla (v^\varepsilon-q^\varepsilon)_+}^2\le \int_{A^\varepsilon_i} \abs{\nabla q}^2. \end{equation} Applying successively Sobolev inequality, \eqref{ineqVanVorticesuq} and Lemma~\ref{lemmaAreaDiameter}, we conclude \[
\begin{split}
\int_{A^\varepsilon_i} f(v^\varepsilon-q^\varepsilon)^s
&\le C \Bigl( \int_{A^\varepsilon_i} \abs{\nabla (v^\varepsilon-q^\varepsilon)_+}^2 \Bigr)^\frac{sp}{2} \muleb{2}(A^\varepsilon_i)\\
&\le C'\Bigl( \int_{A^\varepsilon_i} \abs{\nabla q}^2 \Bigr)^\frac{sp}{2} \muleb{2}(A^\varepsilon_i)\\
&\le C'' \Norm{\nabla q}_{\mathrm{L}^r(A^\varepsilon_i)}^{sp}
\muleb{2}(A^\varepsilon_i)^{1+\frac{sp}{2}(1-\frac{2}{r})}.
\end{split} \]
Assume now that \eqref{eqSplitVortices} holds. Combined with \eqref{ineqGradientVortices} and \eqref{ineqVortexEnergy}, this yields \[
\gamma^2 \le C \frac{\muleb{2}(A^\varepsilon_i)}{\varepsilon^2}+C \Norm{\nabla q}_{\mathrm{L}^2(A^\varepsilon_i)}. \] Since $\Norm{\nabla q}_{\mathrm{L}^2(A^\varepsilon_i)} \to 0$ as $\varepsilon \to 0$, one must have $\muleb{2}(A^\varepsilon_i) \ge c \varepsilon^2$, which is \eqref{ineqLowerBoundArea}. The isodiametric inequality then yields \eqref{ineqLowerBoundDiam}.
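Indeed, the planar isodiametric inequality gives
\[
c\varepsilon^2 \le \muleb{2}(A^\varepsilon_i) \le \frac{\pi}{4}\diam(A^\varepsilon_i)^2, \qquad\text{so that}\qquad \diam(A^\varepsilon_i)\ge \Bigl(\frac{4c}{\pi}\Bigr)^{1/2}\varepsilon.
\]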
Turning back to \eqref{ineqCapacity}, and using Proposition~\ref{propositionBoundDiameter}, we obtain \[
\log C\Bigl(1+\frac{\dist(A_i^\varepsilon, \partial \Omega)}{\varepsilon}\Bigr) \ge \log \frac{1}{\varepsilon}+O(1), \] from which \eqref{ineqLowerBoundDistance} follows.
Testing $(\mathcal{P}^\eps)$ against $(v^\varepsilon-q^\varepsilon)_+ \chi_{A^\varepsilon_i}$, applying the Gagliardo--Nirenberg inequality and then using \eqref{ineqVortexEnergy}, we have \[ \int_{A^\varepsilon_i} \abs{\nabla (v^\varepsilon-q^\varepsilon)}^2 \leq C\Bigl(\int_{A^\varepsilon_i} \omega_\varepsilon+\Norm{\nabla q^\varepsilon}_{L^2(A^\varepsilon_i)}\Bigr)\Bigl(\int_{A^\varepsilon_i} \abs{\nabla (v^\varepsilon-q^\varepsilon)}^2\Bigr)^{\frac{1}{2}} \le C'\int_{A^\varepsilon_i} \omega_\varepsilon, \] (cf.\ the proof of Proposition~\ref{propositionEstimatesueps}) and the inequality \eqref{ineqLowerBoundVortex} follows. \end{proof}
In view of Lemma~\ref{lemmaVortexSplit}, we can split the vortices in two classes: the vanishing vortices \begin{align} \label{eqDefVeps} V^\varepsilon&=\bigcup \Bigl\{A_i^\varepsilon \: :\: \int_{A_i^\varepsilon} \abs{\nabla (v^\varepsilon-q^\varepsilon)}^2 \le \gamma^2\Bigr\}, \\ \intertext{and the essential vortices} \label{eqDefEeps} E^\varepsilon&=\bigcup \Bigl\{A_i^\varepsilon \: :\: \int_{A_i^\varepsilon} \abs{\nabla (v^\varepsilon-q^\varepsilon)}^2 > \gamma^2\Bigr\}. \end{align} In view of \eqref{ineqVortexEnergy}, $E^\varepsilon$ contains finitely many connected components. We can thus split $E^\varepsilon=\bigcup_{j=1}^{k^\varepsilon} E^\varepsilon_j$, where $E^\varepsilon_j$ are nonempty open sets which are not necessarily connected such that, up to a subsequence, \begin{equation} \label{eqDistEepsi}
\frac{\dist(E^\varepsilon_i, E^\varepsilon_j)}{\varepsilon} \to \infty \end{equation} as $\varepsilon \to 0$, and \begin{equation} \label{ineqDiamEepsi} \Tilde{\rho}= \limsup_{\varepsilon \to 0} \frac{\diam (E^\varepsilon_i)}{\varepsilon} < \infty. \end{equation} By definition of $E^\varepsilon$ and by \eqref{ineqVortexEnergy}, $k^\varepsilon$ is bounded as $\varepsilon \to 0$. Finally, \begin{equation} \label{eqDistbord}
\liminf_{\varepsilon \to 0} \dist(E^\varepsilon_i, \partial \Omega)>0. \end{equation}
We set \begin{align*}
\omega^\varepsilon_v&=\omega^\varepsilon \charfun{V^\varepsilon}, &
\omega^\varepsilon_i&=\omega^\varepsilon \charfun{E^\varepsilon_i}, &
\kappa^\varepsilon_i&=\int_{\Omega} \omega^\varepsilon_i. \end{align*} By \eqref{ineqTotalVorticity}, we have \begin{equation} \label{ineqSumVortices}
\sum_{i=1}^{k^\varepsilon} \kappa^\varepsilon_i \le \kappa+O\bigl(\logeps^{-1}\bigr). \end{equation}
\begin{lemma} \label{lemmaVanishingVorticity} For every $s \ge 1$, we have \[
\Norm{\omega^\varepsilon_v}_{\mathrm{L}^s} = o\bigl(\varepsilon^{p(1-\frac{2}{r})-2(1-\frac{1}{s})}\bigr). \] In particular, if $\frac{1}{s} \ge 1-p(\frac{1}{2}-\frac{1}{r})$, then $\omega^\varepsilon_v \to 0$ in $\mathrm{L}^s(\Omega)$. \end{lemma}
\begin{proof} Set \[
I_v^\varepsilon=\Bigl\{ i \in I^\varepsilon\: :\: \int_{A_i^\varepsilon} \abs{\nabla (v^\varepsilon-q^\varepsilon)}^2 \le \gamma^2\Bigr\}. \] We have, by Lemma~\ref{lemmaVortexSplit} and by \eqref{ineqVorticityAreaStrong}, \[ \begin{split}
\int_{\Omega} \abs {\omega^\varepsilon_v}^s
&=\sum_{i \in I^\varepsilon_v}\int_{A^\varepsilon_i} \abs{\omega^\varepsilon_v}^s \\
&\le C\frac{1}{\varepsilon^{2s}} \sum_{i \in I^\varepsilon_v} \Norm{\nabla q}_{\mathrm{L}^r(A^\varepsilon_i)}^{sp} \muleb{2}(A^\varepsilon_i)^{1+\frac{sp}{2}(1-\frac{2}{r})}\\
&\le C\muleb{2}(V^\varepsilon) \max_{i \in I^\varepsilon_v} \Norm{\nabla q}_{\mathrm{L}^r(A^\varepsilon_i)}^{sp} \frac{\muleb{2}(A^\varepsilon_i)^{sp(\frac{1}{2}-\frac{1}{r})}}{\varepsilon^{2s}}\\
&\le C' \Norm{\nabla q}_{\mathrm{L}^r(V^\varepsilon)}^{sp} \varepsilon^{sp(1-\frac{2}{r})-2(s-1)}. \qedhere \end{split} \] \end{proof}
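Taking $s$-th roots in the final estimate and recalling that $\Norm{\nabla q}_{\mathrm{L}^r(V^\varepsilon)} \to 0$ as $\varepsilon \to 0$ yields the rate announced in the lemma; moreover, the exponent is nonnegative precisely under the stated condition on $s$:
\[
p\Bigl(1-\frac{2}{r}\Bigr)-2\Bigl(1-\frac{1}{s}\Bigr)\ge 0 \iff \frac{1}{s}\ge 1-p\Bigl(\frac{1}{2}-\frac{1}{r}\Bigr).
\]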
\begin{lemma} \label{lemmaNonVanisingVortex} For $\varepsilon > 0$ sufficiently small, $k_\varepsilon \ge 1$. \end{lemma} \begin{proof} Assume by contradiction that there is a sequence $(\varepsilon_n)$ such that $\varepsilon_n \to 0$ and $k_{\varepsilon_n} =0$. Take $s > 1$ such that $\frac{1}{s} \ge 1-p(\frac{1}{2}-\frac{1}{r})$. Since $\omega^{\varepsilon_n}=\omega^{\varepsilon_n}_v \to 0$ in $\mathrm{L}^s(\Omega)$ by Lemma~\ref{lemmaVanishingVorticity}, the classical estimates of \cite{GilbargTrudinger2001}*{Theorem 8.15} give $v^{\varepsilon_n} \to 0$ in $L^\infty(\Omega)$. Therefore, when $n$ is large enough, one would have $\omega^{\varepsilon_n}=0$ and thus $v^{\varepsilon_n} = 0$, contradicting the assumption that the solutions are nontrivial. \end{proof}
\subsubsection{Step 3: Small scale asymptotics}
We define \[
x^\varepsilon_i=\frac{1}{\kappa^\varepsilon_i}\int_{\Omega}\omega^\varepsilon_i(x)x\, dx. \] By \eqref{eqDistEepsi} and \eqref{eqDistbord}, $x^\varepsilon_i \in \Omega$ and $x^\varepsilon_i\ne x^\varepsilon_j$ when $i \ne j$ and $\varepsilon$ is small. We also define \[
v^\varepsilon_i(y)=v^\varepsilon(x^\varepsilon_i+\varepsilon y)-q^\varepsilon(x^\varepsilon_i), \] and \[
q^\varepsilon_i(y)=q(x^\varepsilon_i+\varepsilon y)-q(x^\varepsilon_i). \] By \eqref{eqDistbord}, for every $R>0$, $v^\varepsilon_i$ is well-defined in $B(0, R)$ when $\varepsilon$ is sufficiently small, and it satisfies there the equation \begin{equation} \label{eqLimit}
-\Delta v^\varepsilon_i=f(v^\varepsilon_i-q^\varepsilon_i). \end{equation}
\begin{lemma} \label{lemmaSmallScaleLocalEstimates} For every $R > 0$ and $s\ge 1$, there exist $\varepsilon(R)>0$ and $C>0$ such that for $0<\varepsilon\leq \varepsilon(R)$ we have \begin{equation} \label{ineqRenormEstimate}
\Norm{f(v^\varepsilon_i-q^\varepsilon_i)}_{\mathrm{L}^s(B(0, R))}\le C. \end{equation} Moreover, for $2\Tilde{\rho} < \abs{y} < R$, we have
\begin{equation} \label{ineqvepsiDecay}
\Bigl\lvert v^\varepsilon_i(y)-\frac{\kappa^\varepsilon_i}{2\pi}\log \frac{1}{\varepsilon\abs{y}}+q^\varepsilon(x^\varepsilon_i)-\kappa^\varepsilon_i H(x^\varepsilon_i, x^\varepsilon_i)
-\sum_{j \ne i} \kappa_j^\varepsilon G(x^\varepsilon_i, x^\varepsilon_j)\Bigr\rvert
\le \frac{\kappa}{2\pi} \log \frac{\abs{y}}{\abs{y}-\Tilde{\rho}}+o(1), \end{equation}
and \begin{equation} \label{ineqNablavepsiDecay} \Bigl\lvert \nabla v^\varepsilon_i(y)-\frac{\kappa}{2\pi} \frac{y}{\abs{y}^2}\Bigr\rvert \le \frac{\Bar{C}}{\abs{y}^3}+o(1). \end{equation} as $\varepsilon \to 0$, where $\Bar{C}$ does not depend on $R$. \end{lemma} \begin{proof} Consider $D^{\varepsilon, R}_i=\bigcup \bigl\{A^\varepsilon_j \: :\: A^\varepsilon_j \cap B(x^\varepsilon_i, \varepsilon R)\ne \emptyset \bigr\}$. By \eqref{ineqVorticityDiameter}, $\muleb{2}(D_i^{\varepsilon, R})=O(\varepsilon^2)$ as $\varepsilon \to 0$, so that one obtains, by Sobolev's inequality, \begin{multline*}
\int_{B(0, R)} f(v^\varepsilon_i-q^\varepsilon_i)^s \le
\frac{1}{\varepsilon^2}\int_{D_i^{\varepsilon, R}} f(v^\varepsilon-q^\varepsilon)^s \\
\le C\frac{1}{\varepsilon^2} \Norm{\nabla(v^\varepsilon-q^\varepsilon)_+}_{\mathrm{L}^2(A^\varepsilon)}^{sp} \muleb{2}(D_i^{\varepsilon, R}) = O(1), \end{multline*} which proves \eqref{ineqRenormEstimate}.
We have \begin{equation} \label{eqvepsiGomega}
v^\varepsilon_i(y)=\int_{\Omega} G(x^\varepsilon_i+\varepsilon y, z) \omega^\varepsilon(z)\, dz-q^\varepsilon(x^\varepsilon_i). \end{equation} We first prove \eqref{ineqvepsiDecay}. By a classical estimate \cite{GilbargTrudinger2001}*{Theorem 8.15}, \begin{equation} \label{ineqGomegav}
\Bigl\lvert\int_{\Omega} G(x, z) \omega^\varepsilon_v(z)\, dz\Bigr\rvert
\le C \Norm{\omega^\varepsilon_v}_{\mathrm{L}^s}. \end{equation} Since by Lemma~\ref{lemmaVanishingVorticity}, $\omega^\varepsilon_v \to 0$ in $\mathrm{L}^s(\Omega)$ for some $s > 1$, we have \[
\int_{\Omega} G(x^\varepsilon_i+\varepsilon y, z) \omega^\varepsilon_v(z)\, dz
\to 0 \] uniformly in $y$. We also have, since $\diam E^\varepsilon_j=O(\varepsilon)$, $\abs{x^\varepsilon_i-x^\varepsilon_j}/\varepsilon \to \infty$, for $j \ne i$, and $\abs{y} \le R$, \[
\int_{\Omega} G(x^\varepsilon_i+\varepsilon y, z) \omega^\varepsilon_j(z)\, dz
=\kappa_j^\varepsilon G(x^\varepsilon_i, x^\varepsilon_j)+o(1), \] and \[
\int_{\Omega} H(x^\varepsilon_i+\varepsilon y, z) \omega^\varepsilon_i(z)\, dz
=\kappa^\varepsilon_i H(x^\varepsilon_i, x^\varepsilon_i)+O(\varepsilon). \] Finally, we have \[ \begin{split}
\int_{\Omega} \frac{1}{2\pi}\log \frac{1}{\abs{x^\varepsilon_i+\varepsilon y-z}} \omega^\varepsilon_i(z)\, dz
&=\int_{E^\varepsilon_i} \frac{1}{2\pi} \log \frac{1}{\abs{x^\varepsilon_i+\varepsilon y-z}} \omega^\varepsilon_i(z)\, dz\\
&=\frac{\kappa^\varepsilon_i}{2\pi}\log \frac{1}{\varepsilon \abs{y}}+\frac{1}{2\pi} \int_{E^\varepsilon_i} \log \frac{\varepsilon \abs{y}}{\abs{x^\varepsilon_i+\varepsilon y-z}} \omega^\varepsilon_i(z)\, dz. \end{split} \] In view of \eqref{ineqDiamEepsi}, $\abs{x^\varepsilon_i-z} \le (1+o(1))\Tilde{\rho}\varepsilon$ when $z\in {\rm supp}(\omega^\varepsilon_i)$ so that for sufficiently small $\varepsilon$ \[ \left\lvert\int_{E^\varepsilon_i} \log \frac{\abs{\varepsilon y}}{\abs{\varepsilon y+x^\varepsilon_i-z}} \omega^\varepsilon_i(z)\, dz\right\rvert\le \kappa^\varepsilon_i \log\frac{\abs{y}}{\abs{y}-\Tilde{\rho}} + o(1). \]
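Combining the four estimates above in \eqref{eqvepsiGomega}, through the decomposition $G(x, z)=\frac{1}{2\pi}\log\frac{1}{\abs{x-z}}+H(x, z)$ of the Green function, we obtain
\[
v^\varepsilon_i(y) =\frac{\kappa^\varepsilon_i}{2\pi}\log\frac{1}{\varepsilon\abs{y}} -q^\varepsilon(x^\varepsilon_i) +\kappa^\varepsilon_i H(x^\varepsilon_i, x^\varepsilon_i) +\sum_{j\ne i}\kappa^\varepsilon_j G(x^\varepsilon_i, x^\varepsilon_j) +O\Bigl(\log\frac{\abs{y}}{\abs{y}-\Tilde{\rho}}\Bigr)+o(1),
\]
which is precisely \eqref{ineqvepsiDecay}.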
We now prove \eqref{ineqNablavepsiDecay}. By Lemma~\ref{lemmaVanishingVorticity}, $\varepsilon \omega^\varepsilon_v \to 0$ in $\mathrm{L}^s(\Omega)$ for $\frac{1}{s} \ge \frac{1}{2}-p(\frac{1}{2}-\frac{1}{r})$. Choosing $s > 2$, by \eqref{eqvepsiGomega} and classical elliptic estimates, one obtains that \[
\int_{\Omega} \varepsilon G(x, z) \omega^\varepsilon_v(z)\, dz \to 0 \] as a function of $x$ in $\mathrm{W}^{2, s}_{\mathrm{loc}}(\Omega)$ and thus in $C^1_{\mathrm{loc}}(\Omega)$. Therefore, \[
\int_{\Omega} \varepsilon \nabla G(x_i^\varepsilon+\varepsilon y, z) \omega^\varepsilon_v(z)\, dz \to 0 \] uniformly in $y$ on compact subsets. One also has \[
\int_{\Omega} \varepsilon \nabla G(x^\varepsilon_i+\varepsilon y, z) \omega^\varepsilon_j(z)\, dz
=\varepsilon \kappa_j^\varepsilon \nabla G(x^\varepsilon_i, x^\varepsilon_j)+o(1) \] and \[
\int_{\Omega} \varepsilon \nabla H(x^\varepsilon_i+\varepsilon y, z) \omega^\varepsilon_i(z)\, dz
=\varepsilon \kappa^\varepsilon_i \nabla H(x^\varepsilon_i, x^\varepsilon_i)+O(\varepsilon^2). \] Finally, recall that $\int_{\Omega} \omega^\varepsilon_i=\kappa^\varepsilon_i$ and $\int_{\Omega} (x^\varepsilon_i-z)\omega^\varepsilon_i(z)\, dz=0$, so that \begin{multline*}
\int_{\Omega} \varepsilon \frac{x^\varepsilon_i+\varepsilon y -z}{\abs{x^\varepsilon_i+\varepsilon y-z}^2} \omega^\varepsilon_i(z)\, dz-\kappa^\varepsilon_i\frac{y}{\abs{y}^2}= \\
\varepsilon \int_{E^\varepsilon_i} \Bigl(\frac{x^\varepsilon_i+\varepsilon y -z}{\abs{x^\varepsilon_i+\varepsilon y-z}^2}- \frac{\varepsilon y}{\abs{\varepsilon y}^2}-L(\varepsilon y) (x^\varepsilon_i-z) \Bigr)\omega^\varepsilon_i(z)\, dz, \end{multline*} where \[
L(a)h=\frac{\abs{a}^2h-2(a \cdot h)a}{\abs{a}^4}. \] On the other hand, for $2 \abs{h} \le \abs{a}$, \[
\Bigl \lvert \frac{a+h}{\abs{a+h}^2}-\frac{a}{\abs{a}^2} - L(a)h \Bigr \rvert
\le C \frac{\abs{h}^2}{\abs{a}^3}, \] so that, by \eqref{ineqDiamEepsi}, \begin{multline*}
\Bigl\lvert \int_{\Omega} \varepsilon \frac{x^\varepsilon_i+\varepsilon y -z}{\abs{x^\varepsilon_i+\varepsilon y-z}^2} \omega^\varepsilon_i(z) \, dz-\kappa^\varepsilon_i\frac{y}{\abs{y}^2}\Bigr\rvert \\ \le \int_\Omega \varepsilon \frac{\abs{x^\varepsilon_i-z}^2}{\abs{\varepsilon y}^3}\omega_i^\varepsilon (z)\, dz \le C \varepsilon \frac{(\diam E^\varepsilon_i)^2}{{\abs{\varepsilon y}^3}}\le \frac{\Bar{C}}{\abs{y}^3}, \end{multline*} and the lemma is proved. \end{proof}
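We note that the operator $L(a)$ used in the proof above is the differential at $a$ of the map $a\mapsto a/\abs{a}^2$: a direct differentiation gives
\[
D\Bigl(\frac{a}{\abs{a}^2}\Bigr)[h] =\frac{h}{\abs{a}^2}-\frac{2(a\cdot h)\,a}{\abs{a}^4} =\frac{\abs{a}^2 h-2(a\cdot h)a}{\abs{a}^4}=L(a)h,
\]
and Taylor's theorem then yields the quadratic remainder estimate.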
\begin{lemma} \label{lemmaLocalAsymptotics} When $\varepsilon$ is small, we have $k^\varepsilon=1$. Moreover, \[
\kappa^\varepsilon_1=\kappa+\frac{2\pi}{\log \frac{1}{\varepsilon}}\Bigl(q(x^\varepsilon_1)-\kappa H(x^\varepsilon_1, x^\varepsilon_1)-\frac{\kappa}{2\pi} \log \frac{1}{\rho_\kappa} \Bigr)+o(\logeps^{-1})
\] and $v^\varepsilon_1 \to U_\kappa$ in $\mathrm{W}^{3, r}_{\mathrm{loc}}(\mathbf{R}^2)$ as $\varepsilon\to 0$. \end{lemma} \begin{proof} Set \[w^\varepsilon_i(y)= v^\varepsilon_i(y)-\frac{\kappa^\varepsilon_i}{2\pi}\log \frac{1}{\varepsilon}+q^\varepsilon(x^\varepsilon_i)-\kappa^\varepsilon_i H(x^\varepsilon_i, x^\varepsilon_i)
-\sum_{j \ne i} \kappa_j^\varepsilon G(x^\varepsilon_i, x^\varepsilon_j), \] so that in particular \[
-\Delta w^\varepsilon_i=f(v^\varepsilon_i-q^\varepsilon_i). \] By \eqref{ineqRenormEstimate}, \eqref{ineqvepsiDecay} and classical elliptic estimates \cite{GilbargTrudinger2001}*{Theorem 9.11}, the sequence $(w^\varepsilon_i)$ is bounded in $\mathrm{W}^{2, s}_{\mathrm{loc}}(\mathbf{R}^2)$ for every $s \ge 1$. By Rellich's compactness theorem, it is compact in $\mathrm{W}^{1, t}_{\mathrm{loc}}(\mathbf{R}^2)$ for every $1 \le t < \infty$, and therefore bounded on compact subsets. On the other hand, by construction, all the $v^\varepsilon_i+q^\varepsilon_i(x^\varepsilon_i)-q^\varepsilon_i$ take positive and negative values at a uniformly bounded distance from the origin, so that there exists a bounded sequence $\check{x}_i^\varepsilon$ such that $v^\varepsilon_i(\check{x}_i^\varepsilon)=q^\varepsilon_i(\check{x}_i^\varepsilon)-q^\varepsilon_i(x_i^\varepsilon)$. Therefore, $v^\varepsilon_i(\check{x}_i^\varepsilon)$ and $w^\varepsilon_i(\check{x}_i^\varepsilon)$ remain bounded and we obtain that for each $i \in \{1, \dotsc, k^\varepsilon\}$ \[ q^\varepsilon(x^\varepsilon_i)-\frac{\kappa^\varepsilon_i}{2\pi}\log \frac{1}{\varepsilon}-\kappa^\varepsilon_i H(x^\varepsilon_i, x^\varepsilon_i)
-\sum_{j \ne i} \kappa_j^\varepsilon G(x^\varepsilon_i, x^\varepsilon_j)=O(1). \] This implies that \begin{equation} \label{eqVorticitiesGreen}
\frac{\kappa^\varepsilon_i}{2\pi}\log \frac{1}{\varepsilon} + \sum_{\substack{ j \ne i}} \kappa_j^\varepsilon \log \frac{1}{\abs{x^\varepsilon_i-x^\varepsilon_j}} = \frac{\kappa}{2\pi}\log \frac{1}{\varepsilon} +O(1), \end{equation} and, in view of \eqref{ineqSumVortices}, that \[
k_\varepsilon \frac{\kappa}{2\pi} \log \frac{1}{\varepsilon}\ge \sum_{1 \le i, j \le k_\varepsilon } \frac{\kappa^\varepsilon_i }{2\pi}\log \frac{1}{\varepsilon} +O(1)=k_\varepsilon \frac{\kappa}{2\pi}\log \frac{1}{\varepsilon}+\sum_{\substack{1 \le i, j \le k^\varepsilon \\ j \ne i}} \kappa^\varepsilon_j\log \frac{\abs{x^\varepsilon_i-x^\varepsilon_j}}{\varepsilon}+O(1). \] Therefore, \[
\sum_{\substack{1 \le i, j \le k^\varepsilon \\ j \ne i}} \kappa^\varepsilon_j \log \frac{\abs{x^\varepsilon_i-x^\varepsilon_j}}{\varepsilon} \le O(1), \] and since $\abs{x^\varepsilon_i-x^\varepsilon_j}/\varepsilon \to \infty $ as $\varepsilon \to 0$, we deduce by \eqref{ineqLowerBoundVortex} that $k^\varepsilon\le 1$ for $\varepsilon$ sufficiently small. By Lemma~\ref{lemmaNonVanisingVortex}, $k^\varepsilon=1$. Going back to \eqref{eqVorticitiesGreen}, we get \[
\kappa^\varepsilon_1=\kappa+O\bigl(\logeps^{-1}\bigr). \] Since $v_1^\varepsilon-q^\varepsilon_1$ is compact in $\mathrm{W}^{1, r}_{\mathrm{loc}}(\mathbf{R}^2)$ and $f \in C^1(\mathbf{R})$, the sequence $f(v_1^\varepsilon-q^\varepsilon_1)$ is compact in $\mathrm{W}^{1, r}_{\mathrm{loc}}(\mathbf{R}^2)$. In view of \eqref{eqLimit}, $v^\varepsilon_1$ is compact in $\mathrm{W}^{3, r}_{\mathrm{loc}}$. Let $v$ be one of its accumulation points. It satisfies \[
-\Delta v=f(v) \] and \[
\int_{\mathbf{R}^2} f(v)=\kappa. \] Moreover, letting $\varepsilon$ go to zero, by \eqref{ineqvepsiDecay} we obtain \[
v(y)=\frac{\kappa}{2\pi} \log \frac{\Tilde{\rho}}{\abs{y}}+O\Bigl(\log\bigl(1+\frac{1}{\abs{y}} \bigr)\Bigr) \] for some $\Tilde{\rho} \in \mathbf{R}$, and \[
\nabla v(y)=\frac{\kappa}{2\pi}\frac{y}{\abs{y}^2}+O\Bigl(\frac{1}{\abs{y}^3}\Bigr). \] By a symmetry result of L.\thinspace A.\thinspace Caffarelli and A.\thinspace Friedman \cite[Theorem 1]{CaffarelliFriedman1980} (see also \cite[Theorem 4.2]{Fraenkel2000}), $v$ is radial, and therefore \[
v(y)=\frac{\kappa}{2\pi}\log \frac{\rho_\kappa}{\abs{y}} \] when $\abs{y} \ge \rho_\kappa$. Hence, $v=U_\kappa$. In view of \eqref{ineqvepsiDecay}, this yields \[
\Bigl\lvert \frac{\kappa}{2\pi}\log \frac{\rho_\kappa}{\abs{y}}+q^\varepsilon(x^\varepsilon_1)-\frac{\kappa^\varepsilon_1}{2\pi}\log \frac{1}{\varepsilon\abs{y}}-\kappa^\varepsilon_1 H(x^\varepsilon_1, x^\varepsilon_1)
\Bigr\rvert \le \kappa \log \frac{\abs{y}}{\abs{y}-\Tilde{\rho}}+o(1). \] First fixing $y$, this implies that \[
\frac{\kappa-\kappa^\varepsilon_1}{2\pi} \log \frac{1}{\varepsilon}=O(1), \] and next we deduce that for every $2\Tilde{\rho}<\abs{y}<R$, \[
\Bigl\lvert \frac{\kappa}{2\pi}\log \frac{\rho_\kappa}{\varepsilon}+q(x^\varepsilon_1)-\frac{\kappa^\varepsilon_1}{2\pi}\log \frac{1}{\varepsilon}-\kappa^\varepsilon_1 H(x^\varepsilon_1, x^\varepsilon_1)
\Bigr\rvert \le \kappa \log \frac{\abs{y}}{\abs{y}-\Tilde{\rho}}+o(1), \] as $\varepsilon \to 0$. We obtain the required asymptotic development of $\kappa^\varepsilon_1$ by letting $R\to +\infty$ and choosing sufficiently large $\abs{y}$. \end{proof}
\subsubsection{Step 4: Global asymptotics}
We are now going to prove that $v^\varepsilon$ is well approximated by \[
\Tilde{v}^\varepsilon=U_{\kappa^\varepsilon_1}\Bigl(\frac{\cdot-x^\varepsilon_1}{\varepsilon}\Bigr)+\kappa^\varepsilon_1\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon \rho_{\kappa_1^\varepsilon}}+H(x^\varepsilon_1, \cdot)\Bigr). \]
\begin{proposition} \label{propositionAsymptoticsW21} We have \[
v^\varepsilon=\Tilde{v}^\varepsilon+o(1) \] in $\mathrm{W}^{2, 1}_{\mathrm{loc}}(\Omega)$, in $\mathrm{W}^{1, 2}_0(\Omega)$, and in $\mathrm{L}^\infty(\Omega)$. \end{proposition} \begin{proof}
Choose $r>\Tilde{\rho}$ so that $E^\varepsilon_1 \subset B(x^\varepsilon_1, \varepsilon r)$ when $\varepsilon$ is small.
By Lemma~\ref{lemmaLocalAsymptotics}, and the invariance of the $\dot{\mathrm{W}}^{2, 1}$ semi-norm by scaling, we have \[
\int_{B(x^\varepsilon_1, 2\varepsilon r)} \abs{D^2 v^\varepsilon-D^2 \Tilde{v}^\varepsilon} \to 0 \] as $\varepsilon \to 0$. Define \begin{align*}
\Tilde{\omega}^\varepsilon_1(x)&=\frac{1}{\varepsilon^2}f(\Tilde{v}^\varepsilon-q^\varepsilon), \\
w_v^\varepsilon(x)&=\int_{\Omega} G(x, y) \omega^\varepsilon_v(y)\, dy, \\
w_r^\varepsilon(x)&=\int_{\Omega} H(x, y) \bigl(\omega^\varepsilon_1(y)-\Tilde{\omega}^\varepsilon_1(y)\bigr)\, dy, \\
w_s^\varepsilon(x)&=\int_{\Omega} \Gamma(x-y) \bigl(\omega^\varepsilon_1(y)-\Tilde{\omega}^\varepsilon_1(y)\bigr)\, dy, \end{align*} where $\Gamma (x)=\frac{1}{2\pi} \log \frac{1}{\abs{x}}$, so that $v^\varepsilon - \Tilde{v}^\varepsilon=w_v^\varepsilon+w_r^\varepsilon+w_s^\varepsilon$. Since by Lemma~\ref{lemmaVanishingVorticity}, $\omega^\varepsilon_v \to 0$ in $\mathrm{L}^s(\Omega)$ for some $s > 1$, we have, by elliptic estimates, $w^\varepsilon_v \to 0$ in $\mathrm{W}^{2, s}_{\mathrm{loc}}(\Omega)$. Next, since by \eqref{eqDistbord} $x^\varepsilon_1$ stays away from $\partial \Omega$ and $\omega^\varepsilon_1-\Tilde{\omega}^\varepsilon_1 \to 0$ in $\mathrm{L}^1(\Omega)$ by Lemma~\ref{lemmaLocalAsymptotics}, we have $w_r^\varepsilon \to 0$ in $C^\infty_{\mathrm{loc}}(\Omega)$. Finally, we have \[
D^2 w_s^\varepsilon (x)=\int_{\Omega} D^2\Gamma(x-y) \bigl(\omega^\varepsilon_1(y)-\Tilde{\omega}^\varepsilon_1(y)\bigr)\, dy. \] Since $\int_{\Omega} \omega^\varepsilon_1=\int_{\Omega} \Tilde{\omega}^\varepsilon_1=\kappa^\varepsilon_1$, one also has \[
D^2 w_s^\varepsilon (x)=\int_{B(x^\varepsilon_1, \varepsilon r)} \bigl(D^2\Gamma(x-y)-D^2\Gamma(x-x^\varepsilon_1)\bigr) \bigl(\omega^\varepsilon_1(y)-\Tilde{\omega}^\varepsilon_1(y)\bigr)\, dy. \] For every $y \in B(x^\varepsilon_1, \varepsilon r)$ and $x \in \Omega \setminus B(x^\varepsilon_1, \varepsilon 2r)$ \[
\abs{D^2\Gamma(x-y)-D^2\Gamma(x-x^\varepsilon_1)} \le C \frac{\abs{y-x^\varepsilon_1}}{\abs{x-x^\varepsilon_1}^3}, \] so that \[ \abs{D^2 w_s^\varepsilon (x)} \le \frac{C \varepsilon}{\abs{x^\varepsilon_1-x}^3}\Norm{\omega^\varepsilon_1-\Tilde{\omega}^\varepsilon_1}_{\mathrm{L}^1}. \] Integrating the previous inequality we conclude \begin{multline*}
\int_{\Omega \setminus B(x^\varepsilon_1, 2\varepsilon r)} \abs{D^2 w_s^\varepsilon (x)} \le C\varepsilon \Norm{\omega^\varepsilon_1-\Tilde{\omega}^\varepsilon_1}_{\mathrm{L}^1}\int_{\mathbf{R}^2 \setminus B(x^\varepsilon_1, 2\varepsilon r)}\frac{1}{\abs{x^\varepsilon_1-x}^3}\, dx \\
=C\Norm{\omega^\varepsilon_1-\Tilde{\omega}^\varepsilon_1}_{\mathrm{L}^1}\frac{2\pi \varepsilon}{2\varepsilon r}=o(1). \end{multline*}
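For the reader's convenience, the last integral can be computed explicitly in polar coordinates centred at $x^\varepsilon_1$:
\[
\int_{\mathbf{R}^2 \setminus B(x^\varepsilon_1, 2\varepsilon r)} \frac{dx}{\abs{x^\varepsilon_1-x}^3}=2\pi \int_{2\varepsilon r}^{+\infty} \frac{ds}{s^2}=\frac{\pi}{\varepsilon r},
\]
so that the left-hand side above is $O\bigl(\Norm{\omega^\varepsilon_1-\Tilde{\omega}^\varepsilon_1}_{\mathrm{L}^1}\bigr)$, which tends to $0$ by Lemma~\ref{lemmaLocalAsymptotics}.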
The $\mathrm{W}^{2, 1}_\mathrm{loc}(\Omega)$ convergence implies the $\mathrm{W}^{1, 2}_\mathrm{loc}(\Omega)$ and the $\mathrm{L}^\infty_{\mathrm{loc}}(\Omega)$ convergences. One needs then to prove the convergence in a neighbourhood of the boundary. Consider open bounded sets $U \subset V \subset \Omega$ such that $\partial \Omega \subset \Bar{U}$, $\Bar{U} \subset V$ and $\supp \omega^\varepsilon \cap V = \emptyset$. One has \[ \left\{
\begin{aligned}
-\Delta (v^\varepsilon-\Tilde{v}^\varepsilon)&=\omega^\varepsilon_v && \text{in $U$},\\
v^\varepsilon-\Tilde{v}^\varepsilon&=0 && \text{on $\partial \Omega$}.
\end{aligned} \right. \] Since $v^\varepsilon-\Tilde{v}^\varepsilon \to 0$ in $\mathrm{W}^{1,2}(V \setminus U)$ and in $\mathrm{L}^\infty (V \setminus U)$ and $\omega^\varepsilon_v \to 0$ in $\mathrm{L}^s(\Omega)$ for some $s > 1$, one obtains by classical regularity estimates that $v^\varepsilon-\Tilde{v}^\varepsilon \to 0$ in $\mathrm{W}^{1,2}(U)$ and in $\mathrm{L}^\infty(U)$. \end{proof}
\begin{corollary} \label{corollaryAsymptotic} When $\varepsilon$ is small enough, $A^\varepsilon$ is connected, $x^\varepsilon_1=x^\varepsilon$, $\kappa^\varepsilon_1=\kappa^\varepsilon$, and $\partial\bigl((A^\varepsilon_1-x^\varepsilon_1)/\varepsilon\bigr)$ tends to $\partial B(0, \rho_\kappa)$ as a $C^2$ manifold. In particular, $-\Delta v^\varepsilon=0$ in $\Omega \setminus B(x^\varepsilon_1, 2\varepsilon \rho_\kappa)$ and
\[
\omega^\varepsilon=\Tilde{\omega}^\varepsilon+o(1) \]
in $\mathrm{L}^1(\Omega)$. \end{corollary} \begin{proof} Assume that $y\in A^\varepsilon\setminus B(x_1^\varepsilon, \varepsilon \rho_{\kappa^\varepsilon})$. We have \begin{equation} \label{ineqAepsBall}
q(y)+ \frac{\kappa}{2\pi} \log \frac{1}{\varepsilon} < v^\varepsilon(y) \le \frac{\kappa_1^\varepsilon}{2\pi} \log \frac{1}{\abs{y-x^\varepsilon_1}}+o(1), \end{equation} uniformly in $y$, so that $\abs{y-x^\varepsilon_1}=O(\varepsilon)$. One obtains then in view of Proposition~\ref{propositionAsymptoticsW21} that $(A^\varepsilon_1-x^\varepsilon_1)/\varepsilon$ is connected when $\varepsilon$ is small and the required convergence of the boundary. \end{proof}
\begin{corollary} \label{corEnergy} We have \[ \begin{split} \mathcal{E}^\varepsilon(v^\varepsilon) =\frac{\kappa^2}{4\pi} \log \frac{1}{\varepsilon}-\mathcal{W}(x^\varepsilon)+\mathcal{C}+o(1). \end{split} \] \end{corollary} \begin{proof} First we have in view of Proposition~\ref{propositionAsymptoticsW21} and Corollary~\ref{corollaryAsymptotic}, \[ \begin{split}
\int_{\Omega} \abs{\nabla v^\varepsilon}^2
&=\int_{\Omega} v^\varepsilon \omega^\varepsilon\\
&=\int_{\Omega} \Tilde{v}^\varepsilon \omega^\varepsilon+o(1). \end{split} \] Since $\Norm{\Tilde{v}^\varepsilon-q^\varepsilon}_{\mathrm{L}^\infty}$ remains bounded as $\varepsilon \to 0$, we obtain, by Proposition~\ref{propositionAsymptoticsW21} \[
\int_{\Omega} \abs{\nabla v^\varepsilon}^2=\frac{1}{\varepsilon^2}\int_{\Omega} \Tilde{v}^\varepsilon f\bigl(\Tilde{v}^\varepsilon-q^\varepsilon(x^\varepsilon)\bigr)+o(1). \] Similarly, by Proposition~\ref{propositionAsymptoticsW21}, \[ \frac{1}{\varepsilon^2}\int_{\Omega} F(v^\varepsilon-q^\varepsilon)=\frac{1}{\varepsilon^2}\int_{\Omega} F\bigl(\Tilde{v}^\varepsilon-q^\varepsilon(x^\varepsilon)\bigr)+o(1). \] It suffices then to compute $\mathcal{E}^\varepsilon(\Tilde{v}^\varepsilon)$ as in the proof of Lemma~\ref{lemmaEnergyHatu}. \end{proof}
\subsubsection{Conclusion}We are now in a position to present the
\begin{proof}[Proof of Proposition~\ref{prop:1mai} completed] It is a direct consequence of Lemma~\ref{lemmaLocalAsymptotics}, Proposition~\ref{propositionAsymptoticsW21}, Corollary~\ref{corollaryAsymptotic} and Corollary~\ref{corEnergy}. \end{proof}
\noindent and the
\begin{proof}[Proof of Theorem~\ref{thm:K1}] It is a direct consequence of the upper estimate of Corollary~\ref{cor:upper} and the asymptotic properties obtained in Proposition~\ref{prop:1mai}. \end{proof}
\section{Single vortices in multiply connected domains} \label{sectionmultiply}
In this section we assume that $\Omega \subset \mathbf{R}^2$ is a bounded smooth multiply-connected domain; it can be written as \[
\Omega = \Omega_0 \setminus \bigcup_{h=1}^m \Omega_h, \] where $\Omega_0, \dotsc, \Omega_m$ are bounded simply-connected domains such that the closures $\Bar{\Omega}_h$, $h \in \{1, \dotsc, m\}$, are pairwise disjoint and contained in $\Omega_0$. In place of problem \eqref{problemPeps}, we consider the problem of finding $u^\varepsilon$ and $\lambda^\varepsilon_1, \dotsc, \lambda^\varepsilon_m$ such that \begin{equation} \label{problemPepsstar} \left\{ \begin{aligned} -\Delta u^\varepsilon &=\frac{1}{\varepsilon^2} f(u^\varepsilon - q^\varepsilon ) & &\text{in $\Omega$, }\\ u^\varepsilon &= 0 & &\text{on $\partial \Omega_0$},\\ u^\varepsilon &= \lambda^\varepsilon_h & &\text{on $\partial \Omega_h$},\\ \int_{\partial \Omega_h} \frac{\partial u^\varepsilon}{\partial n}&=0 & & \text{for $h \in \{1, \dotsc, m\}$}. \end{aligned} \right. \tag{\protect{$\mathcal{P}^\varepsilon_*$}} \end{equation}
The natural space to deal with this problem is the space of functions that are constant on the components of the complement of $\Omega$: \[ H^1_*(\Omega)=\Bigl\{u \in H^1_0(\Omega_0) \: :\: \nabla u=0 \text{ in $\bigcup_{h=1}^m \Omega_h$}\Bigr\}. \] It is standard to show that solutions of \eqref{problemPepsstar} are critical points of the functional $\mathcal{E}^\varepsilon$ defined on $H^1_*(\Omega)$ by \eqref{energyFunctional}. We consider least energy solutions obtained by minimization of the functional on the Nehari manifold.
In order to state our result we also need the corresponding (appropriate) Green functions. Following C.\thinspace C.\thinspace Lin \cites{Lin1941, Lin1943}, we define $G_*$ as the solution of \[
\left\{
\begin{aligned}
-\Delta G_*(\cdot, y)&=\delta_y & & \text{in $\Omega$,}\\
G_*(\cdot, y)&=0 & & \text{on $\partial \Omega_0$},\\
G_*(\cdot, y)&=\lambda_h(y) & & \text{on $\partial \Omega_h$},\\
\int_{\partial \Omega_h} \frac{\partial G_*(\cdot, y)}{\partial n}&=0 & & \text{for $h \in \{1, \dotsc, m\}$},\\
\end{aligned}\right. \] where the constants $\lambda_h(y)$ are part of the unknowns. Its regular part $H_*$ is defined by \[
H_*(x,y)=G_*(x,y)-\frac{1}{2\pi} \log \frac{1}{\abs{x-y}}. \] P.\thinspace Koebe \cite{Koebe1918}*{\S 6} (see also \cite{Lin1943}*{\S 9}), defined $G_*$ in terms of the Green function for the Dirichlet problem $G$ and the unique solutions $Z_k$ of \[
\left\{
\begin{aligned}
-\Delta Z_k&=0 & & \text{in $\Omega$,}\\
Z_k&=0 & & \text{on $\partial \Omega_0$},\\
Z_k&=\delta_{kh} & & \text{on $\partial \Omega_h$ with $h \in \{1, \dotsc, m\}$.}\\
\end{aligned}\right. \] Since the $Z_k$ are linearly independent, the matrix $(\omega_{kh})_{1 \le k, h \le m}$ defined by \[
\omega_{kh}=\int_{\Omega} \nabla Z_k \cdot \nabla Z_h \] is the Gram matrix of $Z_1, \dotsc, Z_m$ with respect to the inner product $(u, v) \mapsto \int_{\Omega} \nabla u \cdot \nabla v$, hence symmetric and positive definite; in particular, it is invertible. Let $(\omega^{kh})_{1 \le k, h \le m}$ denote its inverse. We have \begin{equation} \label{eqGreenRelationship}
G_*(x,y)=G(x,y)+\sum_{k, h=1}^m Z_k(x)\omega^{kh}Z_h(y). \end{equation}
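The identity \eqref{eqGreenRelationship} can be checked directly. The right-hand side satisfies the first three defining conditions of $G_*$, since each $Z_k$ is harmonic and its trace on $\partial \Omega_h$ is the constant $\delta_{kh}$; moreover, by Green's formula, with $n$ the outward normal to $\Omega$,
\[
\int_{\partial \Omega_h} \frac{\partial G(\cdot, y)}{\partial n}=-Z_h(y) \qquad \text{and} \qquad \int_{\partial \Omega_h} \frac{\partial Z_k}{\partial n}=\omega_{kh},
\]
so that the flux of the right-hand side through $\partial \Omega_h$ equals $-Z_h(y)+\sum_{k, j=1}^m \omega_{kh}\,\omega^{kj} Z_j(y)=0$, by the symmetry of $(\omega_{kh})$.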
The Kirchhoff--Routh function in this context is defined by \[
\mathcal{W}_*(x)=\frac{\kappa^2}{2}H_*(x, x)-\kappa q(x), \] and the various quantities $A^\varepsilon, \omega^\varepsilon, \kappa^\varepsilon, x^\varepsilon, \rho^\varepsilon$ are still defined by \eqref{defiq}.
Theorem~\ref{thm:K1} generalizes then to \begin{theorem}\label{thm:K1m} As $\varepsilon \to 0$, we have \[
u^\varepsilon=U_{\kappa^\varepsilon} \Big(\frac{\cdot-x^\varepsilon}{\varepsilon}\Big)+\kappa^\varepsilon\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon \rho^\varepsilon}+ H_*(x^\varepsilon, \cdot)\Bigr)+o(1), \] \text{in $\mathrm{W}^{2, 1}_\mathrm{loc}(\Omega)$, in $\mathrm{W}^{1, 2}_0(\Omega)$, and in $\mathrm{L}^\infty(\Omega)$}, where \[
\kappa^\varepsilon=\kappa+\frac{2\pi}{\log \frac{1}{\varepsilon}}\Bigl(q(x^\varepsilon)-\kappa H_*(x^\varepsilon, x^\varepsilon) -\frac{\kappa}{2\pi} \log \frac{1}{\rho_\kappa} \Bigr)+o(\logeps^{-1}), \] and \[
\mathcal{W}_*(x^\varepsilon) \to \sup_{x \in \Omega} \mathcal{W}_*(x). \] One also has \[
B(x^\varepsilon, \Bar{r}^\varepsilon) \subset A^\varepsilon \subset B(x^\varepsilon, \mathring{r}^\varepsilon), \] with $\Bar{r}^\varepsilon=\varepsilon \rho_\kappa+o(\varepsilon)$ and $\mathring{r}^\varepsilon=\varepsilon \rho_\kappa +o(\varepsilon)$. Finally, \[
\mathcal{E}^\varepsilon (u^\varepsilon)= \frac{\kappa^2}{4\pi}\log \frac{1}{\varepsilon}-\mathcal{W}_*(x^\varepsilon)+\mathcal{C}+o(1). \] \end{theorem} \begin{proof} The proof of Theorem~\ref{thm:K1m} follows along almost the same lines as that of Theorem~\ref{thm:K1}, so that we only mention the few adaptations. First, the functions $G$ and $H$ should be replaced by $G_*$ and $H_*$. In view of the regularity of $\Theta_h$ and of \eqref{eqGreenRelationship}, this causes no trouble in the upper estimate, nor in the small-scale and global asymptotics.
Next, the proof of Theorem~\ref{thm:K1} relies on the Dirichlet boundary condition to estimate $\capa(A^\varepsilon, \Omega)$ in \eqref{ineqCapacity}. Here, we define instead \[
w^\varepsilon=\frac{v^\varepsilon-\max_{\partial \Omega} v^\varepsilon}{\min_{\partial A^\varepsilon} q^\varepsilon - \max_{\partial \Omega}v^\varepsilon}. \] For every $h \in \{1, \dotsc, m\}$, let $\Theta_h \in \mathrm{H}^1_*(\Omega)$ be the unique solution of \[
\left\{
\begin{aligned}
-\Delta \Theta_h&=0 & & \text{in $\Omega$,}\\
\Theta_h&=0 & & \text{on $\partial \Omega_0$},\\
\Theta_h&=\mu_{hk} & & \text{on $\partial \Omega_k$ for $k \in \{1, \dotsc, m\}$},\\
\int_{\partial \Omega_k} \frac{\partial \Theta_h}{\partial n}&=\delta_{hk} & & \text{for $k \in \{1, \dotsc, m\}$},
\end{aligned}\right. \] where the $\mu_{hk}$ are unknown constants that are part of the problem\footnote{ This solution can be found by minimizing the functional $u \mapsto \frac{1}{2}\int_{\Omega} \abs{\nabla u}^2+u\vert_{\partial \Omega_h}$ over $\mathrm{H}^1_*(\Omega)$. (A similar problem appears in \cite[Chapter I, (3)]{BBH}.)}. By construction of $\Theta_h$, one has \[
v^\varepsilon \vert_{\Omega_h}=\sum_{k=1}^m\int_{\partial \Omega_k} v^\varepsilon\frac{\partial \Theta_h}{\partial n} =\int_{\Omega} \nabla v^\varepsilon \cdot \nabla \Theta_h=\int_{\Omega} \omega^\varepsilon \Theta_h, \] and hence, in view of \eqref{ineqTotalVorticity}, \[
\Norm{v^\varepsilon}_{\mathrm{L}^\infty(\partial \Omega)} \le \max_{h \in \{1, \dotsc, m\}} \Norm{\Theta_h}_{\mathrm{L}^\infty(\Omega)}\bigl(\kappa+O(\logeps^{-1})\bigr). \] Therefore, \[
\frac{2\pi}{\capa (A^\varepsilon, \Omega)} \ge \frac{2\pi}{\int_\Omega \abs{\nabla w^\varepsilon}^2} \ge \log \frac{1}{\varepsilon}+O(1), \] and one can continue as in the proof of Lemma~\ref{lemmaAreaDiameter}. \end{proof}
\section{Single vortices in unbounded domains} \label{sectUnbounded}
In this section, we assume that $\Omega \subset \mathbf{R}^2$ is an unbounded simply-connected domain whose boundary is bounded in one direction; to fix ideas, \[
]a_0, +\infty[\times \mathbf{R} \subset \Omega \subset ]a_1, +\infty[ \times \mathbf{R}. \] Our goal is to carry out an analysis similar to that of the previous section.
We assume that $q \in \mathrm{W}^{1, 1}_\textrm{loc}(\Omega)$, \[
\sup_{x \in \Omega} \int_{B(x, 1)} \abs{\nabla q}^r < \infty \] for some $r > 2$, and that \[
q(x) \ge W(x_1-a_0)+d, \] for some $W > 0$ and $d > 0$, where $x=(x_1, x_2)$. Since $\partial \Omega$ is bounded in the $x_1$ direction, this is equivalent to requiring that \[
q(x) \ge W \dist(x, \partial \Omega)+d'. \] The natural space for solutions is \[
\mathrm{D}^{1, 2}_0(\Omega)= \{ u \in \mathrm{W}^{1, 1}_{\mathrm{loc}}(\Omega) \: :\: \int_{\Omega} \abs{\nabla u}^2 < \infty\}. \]
The Nehari manifold $\mathcal{N}^\varepsilon$ and the infimum value $c^\varepsilon$ are defined as in Proposition~\ref{prop:2.1}. The existence of a minimizer $u^\varepsilon \in \mathcal{N}^\varepsilon$ as in Proposition~\ref{prop:2.1} such that $\mathcal{E}^\varepsilon(u^\varepsilon)=c^\varepsilon$ is no longer straightforward, and may even fail, because of compactness issues.
In a first step, we derive upper bounds on $c^\varepsilon$. Next, we perform the a priori asymptotic analysis of solutions of $(\mathcal{P}^\eps)$ satisfying similar upper bounds. Finally, we prove existence results for appropriate classes of $\Omega$ and $q$.
\subsection{Upper bound on the energy}
\begin{proposition} \label{propUnboundedUpper} We have \[ c^\varepsilon \leq \frac{\kappa^2}{4\pi} \log \frac{1}{\varepsilon} - \sup_{x\in \Omega} \mathcal{W}(x) +\mathcal{C} + o(1). \] \end{proposition} \begin{proof} The proof goes as the proof of Corollary~\ref{cor:upper}. The main difference is that $q$ and $H(\Hat{x}, \cdot)$ are not bounded as in the proof of Lemma~\ref{lemmaHatuNehari}. However, since $\lim_{x \to \infty} \frac{1}{2\pi} \log \frac{1}{\abs{x-\Hat{x}}} +H(x, \Hat{x})=0$, one still has, for every $x \in \Omega$, \[
H(\Hat{x}, x) \le \frac{q(x)}{\kappa}+C, \] whence, starting from \eqref{ineqVorticitySetUpperFrac}, one obtains \[
\frac{\dfrac{1}{2\pi} \log \dfrac{1}{\varepsilon}+\dfrac{1}{2\pi} \log \dfrac{\varepsilon}{\abs{x-\Hat{x}}}+ q(x)+C}{\dfrac{\kappa}{2\pi} \log \dfrac{1}{\varepsilon}+\frac{q(x)}{\kappa}} \ge \dfrac{\log \dfrac{1}{\varepsilon}+H(\Hat{x}, \Hat{x})}{\dfrac{\kappa}{2\pi} \log \dfrac{1}{\varepsilon}+q(\Hat{x})+\sigma}. \] Since $q \ge 0$, it follows that \[
\frac{1}{\kappa}+\frac{\dfrac{1}{2\pi} \log \dfrac{\varepsilon}{\abs{x-\Hat{x}}}}{\kappa \log \dfrac{1}{\varepsilon}}\ge \frac{1}{\kappa}+O\bigl(\logeps^{-1}\bigr), \] and it suffices to continue as in Lemmas~\ref{lemmaHatuNehari}~and~\ref{lemmaEnergyHatu}. \end{proof}
\subsection{Functional inequalities on the half-plane}
In order to perform the asymptotic analysis of the solutions and to study their existence, we first provide some useful functional inequalities and convergence results on the half-plane $\mathbf{R}^2_+$ that will be used in the next section.
\begin{proposition} \label{ineqUnboundedIneqW} We have for $u\in \mathrm{D}^{1, 2}_0(\mathbf{R}^2_+)$, \[
\muleb{2}\bigl(\{ x \in \mathbf{R}^2_+ \: :\: u(x) \ge Wx_1\}\bigr)\le C \int_{\mathbf{R}^2_+} \abs{\nabla u}^2, \] and, for every $p>0$, \[
\int_{\mathbf{R}^2_+} \bigl(u(x)-Wx_1\bigr)^p_+\, dx \le \frac{C}{W^2} \Bigl( \int_{\mathbf{R}^2_+} \abs{\nabla u}^2 \Bigr)^{1+\frac{p}{2}}. \] \end{proposition}
A similar statement is proved by Yang Jianfu \cite[Lemma 4]{Yang1991} with a different proof relying on an isometry between $\mathcal{D}^{1,2}_0(\mathbf{R}^2_+)$ and the space of cylindrically symmetric elements of $\mathcal{D}^{1,2}_0(\mathbf{R}^4)$ \cite[Lemma 1]{Yang1991}.
\begin{proof} Define $A_u=\{ x \in \mathbf{R}^2_+ \: :\: u(x) \ge Wx_1\}$. First we have, by the Chebyshev and Hardy inequalities \[
\muleb{2}(A_u ) \le \frac{1}{W^2} \int_{\mathbf{R}^2_+} \frac{\abs{u(x)}^2}{\abs{x_1}^2}\, dx \le \frac{4}{W^2} \int_{\mathbf{R}^2_+} \abs{\nabla u}^2. \] By Sobolev's inequality, it follows \[ \begin{split}
\int_{\mathbf{R}^2_+} (u(x)-Wx_1)_+^p\, dx &=\int_{A_u} (u-Wx_1)^p\, dx \\
&\le C \Norm{\nabla(u-Wx_1)}^p_{2} \muleb{2}(A_u) \\
&\le \frac{C'}{W^2} \Norm{\nabla u}_{2}^2(\Norm{\nabla u}_2+W \muleb{2}(A_u)^\frac{1}{2})^p \\
&\le \frac{C''}{W^2} \Bigl( \int_{\mathbf{R}^2_+} \abs{\nabla u}^2\Bigr)^{1+\frac{p}{2}}. \qedhere \end{split} \] \end{proof}
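Let us recall that the Hardy inequality invoked above is the one-dimensional inequality
\[
\int_0^{+\infty} \frac{\abs{v(s)}^2}{s^2}\, ds \le 4 \int_0^{+\infty} \abs{v'(s)}^2\, ds,
\]
valid for $v$ vanishing at $0$, applied here on every line $\{x_2=\text{const}\}$ and integrated with respect to $x_2$.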
As a consequence
\begin{lemma} \label{lemmaUnboundedInequalityq} We have for $u\in \mathrm{D}^{1, 2}_0(\mathbf{R}^2_+)$, \[
\muleb{2}\bigl(\{ x \in \mathbf{R}^2_+ \: :\: u(x) \ge q(x)\}\bigr)\le C \int_{\mathbf{R}^2_+} \abs{\nabla u}^2, \] and for every $p>0$ \[
\int_{\mathbf{R}^2_+} (u-q)^p_+ \le C \Bigl( \int_{\mathbf{R}^2_+} \abs{\nabla u}^2 \Bigr)^{1+\frac{p}{2}}. \] \end{lemma}
We also have a compactness theorem
\begin{lemma} \label{lemmaUnboundedCompactness} For every $p < \infty$ and $L>0$, the map $\Phi : \mathrm{D}^{1, 2}_0(\mathbf{R}^2_+) \to \mathrm{L}^p(\mathbf{R}_+\times ]-L, L[) : u \mapsto (u-Wx_1)_+$ is completely continuous. \end{lemma} \begin{proof} By Rellich's Theorem, $u \mapsto \Phi(u)\chi_{]0, \lambda[\times ]-L, L[}$ is completely continuous for every $\lambda > 0$. On the other hand, \[ \int_{]\lambda, +\infty[\times ]-L, L[ } \hspace{-2em}(u(x)-Wx_1)_+^p\, dx \le \frac{C}{\lambda}\int_{]\lambda, +\infty[\times ]-L, L[} \hspace{-2em}(u(x)-\tfrac{W}{2}x_1)_+^{p+1}\, dx \le \frac{C}{\lambda} \Norm{\nabla u}_2^{p+3}, \] therefore, on every bounded subset of $\mathrm{D}^{1, 2}_0(\mathbf{R}^2_+)$, $\Phi$ is a uniform limit of completely continuous maps. The conclusion follows. \end{proof}
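The first inequality in the tail estimate above follows from a pointwise bound: if $x_1 \ge \lambda$ and $u(x) > Wx_1$, then $u(x)-\tfrac{W}{2}x_1 \ge \tfrac{W}{2}x_1 \ge \tfrac{W\lambda}{2}$, whence
\[
\bigl(u(x)-Wx_1\bigr)_+^p \le \frac{2}{W\lambda}\bigl(u(x)-\tfrac{W}{2}x_1\bigr)_+^{p+1}.
\]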
\begin{lemma}\label{campeones} Let $(u_n) \subset \mathrm{D}^{1, 2}_0(\mathbf{R}^2_+)$. If $(u_n)$ is bounded in $\mathrm{D}^{1, 2}_0(\mathbf{R}^2_+)$ and \[ \sup_{y\in \mathbf{R}} \int_{\mathbf{R}_+ \times ]y-1, y+1[} (u_n-Wx_1)_+^p \to 0, \] then \[
\int_{\mathbf{R}^2_+} (u_n-Wx_1)_+^s \to 0, \] for every $s>0$. \end{lemma}
This kind of result was first obtained by P.-L.\thinspace Lions \cite[Lemma I.1]{Lions1984}. The idea of our proof comes from V.\thinspace Coti Zelati and P.\thinspace Rabinowitz \cite{CotiZelatiRabinowitz1992}.
\begin{proof} By the Gagliardo--Nirenberg inequality \cite[p.\thinspace 125]{Nirenberg1959}, \[ \begin{split}
\int_{\mathbf{R}_+\times ]y-1, y+1[}(u_n-Wx_1)_+^{p+2}
\le C\int_{\mathbf{R}_+\times ]y-1, y+1[}\hspace{-4em}& (u_n-Wx_1)_+^p \\ &\times \int_{\mathbf{R}_+ \times ]y-1, y+1[}\hspace{-4em} (\abs{\nabla (u_n-Wx_1)_+}^2+\abs{(u_n-Wx_1)_+}^2). \end{split} \] Integrating with respect to $y\in \mathbf{R}$, one obtains \[ \begin{split}
\int_{\mathbf{R}^2_+}(u_n-Wx_1)_+^{p+2} \le C\biggl(\sup_{y \in \mathbf{R}} \int_{\mathbf{R}_+\times ]y-1, y+1[}\hspace{-4em}& (u_n-Wx_1)_+^p\biggr)\\ &\times\int_{\mathbf{R}_+^2} \bigl(\abs{\nabla (u_n-Wx_1)_+}^2+\abs{(u_n-Wx_1)_+}^2\bigr). \end{split} \] Since by Proposition~\ref{ineqUnboundedIneqW} \[ \int_{\mathbf{R}_+^2} \abs{\nabla (u_n-Wx_1)_+}^2+\abs{(u_n-Wx_1)_+}^2 \le C \bigl(\Norm{\nabla u_n}_{2}^2+\Norm{\nabla u_n}_{2}^4\bigr), \] $(u_n-Wx_1)_+ \to 0$ in $\mathrm{L}^{p+2}(\mathbf{R}^2_+)$. By Proposition~\ref{ineqUnboundedIneqW}, the general case $s \ne p+2$ follows by interpolation. \end{proof}
\subsection{Asymptotic behavior of solutions} \label{sectionUnboundedAsymptotics} In this section, we assume that $(v^\varepsilon)$ is a sequence of solutions to $(\mathcal{P}^\eps)$ satisfying \eqref{assumptEnergyUpperbound}. We shall prove \begin{proposition} \label{prop:1maiUnbounded} Proposition~\ref{prop:1mai} holds under the assumptions on $\Omega$ and $q$ of this section. \end{proposition}
\subsubsection{Step 1: First quantitative properties of the solutions}
We first have the counterpart of Proposition~\ref{propositionEstimatesueps} \begin{proposition} \label{propositionUnboundedEstimatesueps} The estimates \eqref{ineqMuAeps}, \eqref{ineqVortexEnergy}, \eqref{ineqVortexPotential}, \eqref{eq:2etoiles} and \eqref{ineqTotalVorticity} hold for some constant $C$ independent of $\varepsilon$. \end{proposition} \begin{proof} The proof of Proposition~\ref{propositionEstimatesueps} provides the estimates \eqref{ineqVortexEnergy}, \eqref{ineqVortexPotential}, \eqref{eq:2etoiles} and \eqref{ineqTotalVorticity} without any modification. The inequality \eqref{ineqMuAeps} needs a little more work, since its proof in Proposition~\ref{propositionEstimatesueps} relies on the Poincar\'e inequality. In the present setting, we replace it by the Chebyshev inequality and Lemma~\ref{lemmaUnboundedInequalityq} \[
\begin{split} \muleb{2}(\{ x \in \Omega \: :\: v^\varepsilon(x) \ge q(x)+\tfrac{\kappa}{2\pi} \log \tfrac{1}{\varepsilon}\})
&\le \frac{1}{(\frac{\kappa}{2\pi} \log \frac{1}{\varepsilon})^4} \int_{\Omega} (v^\varepsilon-q)_+^4 \\
&\le \frac{C}{\logeps^4} \logeps^3=C \logeps^{-1}. \qedhere
\end{split} \] \end{proof}
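In the last step, Lemma~\ref{lemmaUnboundedInequalityq} with $p=4$, combined with the bound $\int_{\Omega} \abs{\nabla v^\varepsilon}^2=O\bigl(\log\tfrac{1}{\varepsilon}\bigr)$ contained in the estimates of Proposition~\ref{propositionUnboundedEstimatesueps}, yields
\[
\int_{\Omega} (v^\varepsilon-q)_+^4 \le C \Bigl(\log \frac{1}{\varepsilon}\Bigr)^3.
\]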
\subsubsection{Step 2: Structure of the vorticity set} As previously, we consider the connected components $(A^\varepsilon_i)_{i \in I^\varepsilon}$ of $A^\varepsilon$.
\begin{lemma} \label{lemmaUnboundedAreaDiameter} If $\varepsilon > 0$ is sufficiently small, we have for every $i \in I^\varepsilon$, \begin{equation} \label{ineqUnboundedVorticityDiameter}
\diam(A^\varepsilon_i) \le C \varepsilon \frac{\dist(A^\varepsilon_i, \partial \Omega)}{e^{2W \dist(A^\varepsilon_i, \partial\Omega)}}. \end{equation} Moreover, if for every $x \in \Omega$, one defines \[
A^\varepsilon_x=\bigcup \Bigl\{ A^\varepsilon_i \: :\: B(x, \tfrac{1}{2}\dist(x, \partial \Omega) +1) \cap A^\varepsilon_i \ne \emptyset \Bigr\}, \] then
\[
\muleb{2}(A^\varepsilon_x) \le C \varepsilon^2e^{-\mu \dist(x, \partial \Omega)}. \]
\end{lemma} \begin{proof} Let \[
w=\frac{v^\varepsilon}{\min_{\partial A^\varepsilon_i}q^\varepsilon}. \] Proceeding as in \eqref{ineqCapacity}, we obtain, using once more Proposition~\ref{propositionCapacityArea} \[
\frac{2\pi (\tfrac{\kappa}{2\pi}\log \tfrac{1}{\varepsilon}+W\dist(A^\varepsilon_i, \partial\Omega)+d')^2}{\frac{\kappa^2}{2\pi} \log \frac{1}{\varepsilon}}\le \log \biggl(C\bigl(1+\dist(A^\varepsilon_i, \partial \Omega)\bigr)\Bigl(1+\frac{\dist(A^\varepsilon_i, \partial \Omega)}{\diam A^\varepsilon_i}\Bigr)\biggr). \] Therefore, \[
\frac{1}{\varepsilon}\le C\frac{1+\dist(A^\varepsilon_i, \partial \Omega)}{e^{2W(\dist(A^\varepsilon_i, \partial \Omega)-1)}} \Bigl(1+\frac{\dist(A^\varepsilon_i, \partial \Omega)}{\diam A^\varepsilon_i}\Bigr), \] from which \eqref{ineqUnboundedVorticityDiameter} follows.
Consider now $A^\varepsilon_x$. By \eqref{ineqUnboundedVorticityDiameter}, $A^\varepsilon_x \subset B(x, \frac{2}{3} \dist(x, \partial \Omega)+1)$ when $\varepsilon$ is small enough, so that \[
\frac{2\pi}{\capa_\Omega (A^\varepsilon_x)} \ge \frac{\bigl(\tfrac{\kappa}{2\pi}\log \tfrac{1}{\varepsilon} + \frac{W}{3}\dist(x, \partial \Omega)+d'\bigr)^2}{\frac{\kappa^2}{4\pi} \log \frac{1}{\varepsilon}}. \] By Proposition~\ref{propositionCapacityLocalArea}, we obtain \[
\muleb{2}(A^\varepsilon_x) \le C\bigl(\dist(x, \partial \Omega)+1\bigr)^2 \varepsilon^2 e^{-\frac{4W}{3}\dist(x, \partial \Omega)} \le C \varepsilon^2 e^{-\mu\dist(x, \partial \Omega)}. \qedhere \] \end{proof} \begin{remark} A slightly more careful proof shows that one can take any $\mu < W/2$, provided $C$ is large enough. \end{remark}
The next lemma, a counterpart of Lemma~\ref{lemmaVortexSplit}, ensures that essential vortices are not too far from the boundary.
\begin{lemma} \label{lemmaUnboundedVortexSplit} There exist constants $\gamma, C, c>0$ such that, when $\varepsilon$ is small enough: If \eqref{eqSplitVortices} holds, we have \eqref{ineqLowerBoundArea}, \eqref{ineqLowerBoundDiam}, \eqref{ineqLowerBoundDistance}, \eqref{ineqLowerBoundVortex} and \[
\dist(A^\varepsilon_i, \partial \Omega)\le C, \] while if \eqref{eqSplitVortices} does not hold, then \eqref{ineqfsVanishing} holds. \end{lemma} \begin{proof} The proof follows essentially the one of Lemma~\ref{lemmaVortexSplit}. The inequality \eqref{ineqLowerBoundDiam} follows immediately from \eqref{ineqLowerBoundArea} and \eqref{ineqUnboundedVorticityDiameter}. \end{proof}
As in the case of a bounded domain, the vorticity set can be split into a vanishing vorticity set $V^\varepsilon$ and an essential one $E^\varepsilon$, defined by \eqref{eqDefVeps} and \eqref{eqDefEeps}. Since the gradient of $q$ is only locally integrable, Lemma~\ref{lemmaVanishingVorticity} only gives local information.
\begin{lemma} \label{lemmaUnboundedVanishing} For every $s \ge 1$, we have \[
\sup_{x \in \Omega} \Norm{\omega^\varepsilon_v}_{\mathrm{L}^s(B(x, 1))} = o(\varepsilon^{p(1-\frac{2}{r})-2(1-\frac{1}{s})}). \] In particular, if $\frac{1}{s} \ge 1-p(\frac{1}{2}-\frac{1}{r})$, then $\omega^\varepsilon_v \to 0$ in $\mathrm{L}^s_{\mathrm{loc}}(\Omega)$. \end{lemma}
\subsubsection{Step 3: Small scale asymptotics} For the small scale asymptotics, one first notes that Lemma~\ref{lemmaSmallScaleLocalEstimates} still holds. Indeed, the only step that relied on the boundedness of $\Omega$ was \eqref{ineqGomegav}. For every $\rho>0$, regularity estimates still yield, for $x \in B(x^\varepsilon_i, \frac{1}{2})$, \[
\Bigl\lvert\int_{B(x^\varepsilon_i, \rho)} G(x, y)\omega^\varepsilon_v(y)\, dy \Bigr\rvert\le C \Norm{\omega^\varepsilon_v}_{\mathrm{L}^s(B(x^\varepsilon_i, 2\rho))}, \] and the conclusion follows from Lemma~\ref{lemmaUnboundedVanishing}. On the other hand, since $\Omega$ is contained in a half-plane, by comparing its Green function with that of the half-plane, we have \[
G(x, y) \le \frac{1}{2\pi} \log \Bigl(1+\frac{C\bigl(1+ \dist(x, \partial \Omega)\bigr)}{\abs{x-y}}\Bigr). \] Since $\dist(x^\varepsilon_i, \partial \Omega)$ is bounded, we have, for every $x \in B(x^\varepsilon_i, 1)$, \[
\int_{\Omega \setminus B(x^\varepsilon_i, \rho)} G(x, y)\omega^\varepsilon_v(y)\, dy \le \frac{\kappa^\varepsilon}{2\pi} \log \Bigl(1+\frac{C}{\rho}\Bigr) \to 0, \] as $\rho \to \infty$, uniformly in $\varepsilon > 0$.
Lemma~\ref{lemmaSmallScaleLocalEstimates} being established, the proof of Lemma~\ref{lemmaLocalAsymptotics} also adapts straightforwardly.
\subsubsection{Step 4: Global asymptotics} For Proposition~\ref{propositionAsymptoticsW21}, one obtains a little more than the $\mathrm{W}^{2, 1}_{\mathrm{loc}}(\Omega)$ convergence. Setting $\Omega_\delta=\{ x \in \Omega \: :\: \dist(x, \partial \Omega)> \delta\}$, one has
\begin{proposition} We have \[
v^\varepsilon=\Tilde{v}^\varepsilon+o(1) \] in $\mathrm{W}^{2, 1}_{\mathrm{loc}}(\Omega_\delta)$ for every $\delta > 0$, in $\mathrm{W}^{1, 2}_0(\Omega)$, and in $\mathrm{L}^\infty(\Omega)$. \end{proposition} \begin{proof} One defines $\Tilde{\omega}^\varepsilon_1$, $w^\varepsilon_v$ and $w^\varepsilon_s$ as in the proof of Proposition~\ref{propositionAsymptoticsW21}, and sets \begin{align*}
w^\varepsilon_r(x)&=\int_{\Omega} \Bigl(H(x,y)-\frac{1}{4\pi}\log (\abs{x-y}^2+4x_1y_1)\Bigr) \bigl(\omega^\varepsilon_1(y)-\Tilde{\omega}^\varepsilon_1(y)\bigr)\, dy,\\
w^\varepsilon_h(x)&=\int_{\Omega} \frac{1}{4\pi}\log (\abs{x-y}^2+4x_1y_1) \bigl(\omega^\varepsilon_1(y)-\Tilde{\omega}^\varepsilon_1(y)\bigr)\, dy. \end{align*} Recalling that $0 < c \le \dist(x^\varepsilon_1, \partial \Omega) \le C$, one treats the terms $w^\varepsilon_v$ and $w^\varepsilon_s$ as in the proof of Proposition~\ref{propositionAsymptoticsW21}; the terms $w^\varepsilon_r$ and $w^\varepsilon_h$ are treated similarly to the term $w^\varepsilon_r$ there. The proof of the convergences up to the boundary follows then as in the proof of Proposition~\ref{propositionAsymptoticsW21}. \end{proof}
For Corollary~\ref{corollaryAsymptotic}, we have, instead of \eqref{ineqAepsBall}, \[
q(y)+ \frac{\kappa}{2\pi} \log \frac{1}{\varepsilon} < v^\varepsilon(y) \le \frac{\kappa_1^\varepsilon}{2\pi} \log \Bigl(1+\frac{C\dist(x^\varepsilon_1, \partial \Omega)}{\abs{y-x^\varepsilon_1}}\Bigr)+O(1). \] The remaining part of the proof carries over identically since $\dist(x^\varepsilon_1, \partial \Omega)$ remains bounded as $\varepsilon \to 0$. Corollary~\ref{corEnergy} also follows without any modification.
\subsection{Existence of solutions}
In this section we present sufficient conditions for the existence of a minimizer for $c^\varepsilon$.
Assume that $\Omega \subset ]a_0, +\infty[ \times \mathbf{R}$ is a Lipschitz domain, and that \begin{equation} \label{condPerturbation}
\lim_{t \to +\infty} \inf \{ x_1 \in \mathbf{R} \: :\: \exists x_2 \in \mathbf{R}, (x_1, x_2) \in \Omega \text{ and } \abs{x_2} \ge t\}=0. \end{equation} Assume also that there exist $\Hat{W}, \Hat{d}>0$ such that \[
\lim_{t \to +\infty} \inf_{\abs{x_2} > t} \frac{q(x)-\Hat{W}x_1-\Hat{d}}{1+\abs{x_1}} \ge 0. \] We define \[ \Hat{\mathcal{E}}^\varepsilon (u)= \frac{1}{2} \int_{\mathbf{R}^2_+} \abs{\nabla u}^2-\frac{1}{\varepsilon^2}\int_{\mathbf{R}^2_+} F(u-\Hat{W}x_1-\Hat{d}) \] and the minimax level \[ \Hat{c}^\varepsilon=\inf_{u \in \mathrm{D}^{1, 2}_0(\mathbf{R}^2_+)}\max_{t>0} \Hat{\mathcal{E}}^\varepsilon(tu). \]
We first recall and investigate the case where $q$ is affine and $\Omega$ is the half-plane. In this case, by definition, $c^\varepsilon=\Hat{c}^\varepsilon$.
\begin{theorem}[Yang \cite{Yang1991}] \label{thmYang} If $\Omega=\mathbf{R}^2_+$ and $q(x)=Wx_1+d$, then problem \eqref{problemPeps} admits a solution $u \in \mathrm{D}^{1, 2}_0(\Omega)$. \end{theorem}
The proof in \cite{Yang1991} allows one to state that
\begin{proposition}\label{prop:al} The critical level $c^\varepsilon=\Hat{c}^\varepsilon$ depends continuously on $W$ and $d$. \end{proposition} \begin{proof}[Sketch of the proof] We can assume without loss of generality that $\varepsilon=1$ and skip any reference to it. Given converging sequences $W_n \to W$ and $d_n \to d$, we set \[
\mathcal{E}_n(u)=\frac{1}{2} \int_{\mathbf{R}^2_+} \abs{\nabla u}^2-\int_{\mathbf{R}^2_+} F(u-W_nx_1-d_n). \] By Theorem~\ref{thmYang}, $\mathcal{E}$ and $\mathcal{E}_n$ possess (some) ground-states $u$ and $u_n$, for which we set $c_n=\mathcal{E}_n(u_n)$. There exist $\tau_n \to 1$ such that $\dualprod{d\mathcal{E}_n(\tau_n u)}{\tau_n u}=0$. Therefore, \[
c_n \le \mathcal{E}_n(\tau_n u) \to \mathcal{E} (u)=c. \] This implies that $c$ is upper semi-continuous. In particular, since \[
\Bigl(\frac{1}{2}-\frac{1}{p+1}\Bigr)\Norm{\nabla u_n}^2\le \mathcal{E}_n(u_n) \] the sequence $(u_n)$ is bounded in $\mathrm{D}^{1, 2}_0(\mathbf{R}^2_+)$. Choosing $\check{W}=\inf W_n>0$, we obtain by Proposition~\ref{ineqUnboundedIneqW} \begin{multline*}
\Bigl(\int_{\mathbf{R}^2_+} (u_n-\tfrac{1}{2}\check{W}x_1)^{p+1}_+\Bigr)^\frac{2}{p+3}
\le \int_{\mathbf{R}^2_+} \abs{\nabla u_n}^2
\le \int_{\mathbf{R}^2_+} u_n f(u_n-W_nx_1-d_n) \\
\le \int_{\mathbf{R}^2_+} u_n f(u_n-\check{W}x_1)
\le C \int_{\mathbf{R}^2_+} (u_n-\tfrac{1}{2}\check{W}x_1)^{p+1}_+, \end{multline*} so that $(u_n-\frac{1}{2} \check{W} x_1)_+ \not \to 0$ in $L^{p+1}(\mathbf{R}^2_+)$. By Lemma~\ref{campeones}, up to translation in the $x_2$ direction, we have $(u_n-\frac{1}{2} \check{W} x_1)_+ \not \to 0$ in $L^{p+1}(\mathbf{R}_+\times ]-1, 1[)$. Hence, there exists $0\neq v\in \mathrm{D}^{1, 2}_0(\mathbf{R}^2_+)$ such that $u_n \rightharpoonup v$ in $\mathrm{D}^{1, 2}_0(\mathbf{R}^2_+)$ and $u_n \to v$ almost everywhere and in $L^r_{\mathrm{loc}}(\mathbf{R}^2_+)$ for $r \ge 1$. In particular, $d\mathcal{E}(v)=0$ and by Fatou's Lemma, we have \[ \begin{split}
c &\le \mathcal{E}(v)=\int_{\mathbf{R}^2_+} \tfrac{1}{2}(W x_1+d) (v-Wx_1-d)_+^{p}+(\tfrac{1}{2}-\tfrac{1}{p+1})(v-Wx_1-d)_+^{p+1}\\
&\le \liminf_{n \to \infty}\int_{\mathbf{R}^2_+} \tfrac{1}{2}(W_n x_1+d_n) (u_n-W_nx_1-d_n)_+^{p}+(\tfrac{1}{2}-\tfrac{1}{p+1})(u_n-W_nx_1-d_n)_+^{p+1}\\
&= \liminf_{n \to \infty} \mathcal{E}_n(u_n)=\liminf_{n \to \infty} c_n. \qedhere \end{split} \] \end{proof}
\begin{proposition} \label{propositionPS} If \[
c^\varepsilon < \Hat{c}^\varepsilon \] then there exists $u_\varepsilon \in \mathrm{D}^{1, 2}_0(\Omega)$ such that $d\mathcal{E}^\varepsilon(u_\varepsilon)=0$ and $\mathcal{E}^\varepsilon(u_\varepsilon)=c^\varepsilon$. \end{proposition} \begin{proof} We use the same strategy as P.\thinspace Rabinowitz \cite{Rabinowitz1992} for the nonlinear Schr\"odinger equation on $\mathbf{R}^N$.
The minimization problem can be reformulated as a mountain-pass problem (see, e.g.\ \cite[Chapter 4]{Willem1996}). By Ekeland's variational principle, there exists a sequence $(u_n) \subset \mathrm{D}^{1, 2}_0(\Omega)$ such that $d\mathcal{E}^\varepsilon(u_n) \to 0$ and $\mathcal{E}^\varepsilon(u_n) \to c^\varepsilon$, see \cite[Theorem 4.3]{MawhinWillem} or \cite[Theorem 1.15]{Willem1996}. We have \[
\Bigl(\frac{1}{2}-\frac{1}{p+1}\Bigr)\Norm{\nabla u_n}_{\mathrm{L}^2}^2\le \mathcal{E}^\varepsilon(u_n)-\frac{1}{p+1}\dualprod{d\mathcal{E}^\varepsilon(u_n)}{u_n} \le c^\varepsilon+o(1)+o\bigl(\Norm{\nabla u_n}_{\mathrm{L}^2}\bigr), \] so that $(u_n)$ is bounded in $\mathrm{D}^{1, 2}_0(\Omega)$. There exists $u \in \mathrm{D}^{1, 2}_0(\Omega)$ such that, up to a subsequence, $u_n \rightharpoonup u$ in $\mathrm{D}^{1, 2}_0(\Omega)$. By Rellich's Theorem, for every $\varphi \in \mathrm{D}^{1, 2}_0(\Omega)$, $\dualprod{d\mathcal{E}^\varepsilon(u_n)}{\varphi} \to \dualprod{d\mathcal{E}^\varepsilon(u)}{\varphi}$, so that $d\mathcal{E}^\varepsilon(u)=0$. If $u \ne 0$, then $u \in \mathcal{N}^\varepsilon$ and by Fatou's Lemma \[
\begin{split}
\mathcal{E}^\varepsilon(u)&=\frac{1}{\varepsilon^2}\int_{\Omega} \frac{f(u-q^\varepsilon)u}{2}-F(u-q^\varepsilon)\\
&=\frac{1}{2\varepsilon^2}\int_{\Omega} q^\varepsilon(u-q^\varepsilon)_+^{p}+(1-\tfrac{2}{p+1}) (u-q^\varepsilon)_+^{p+1} \\
&\le \liminf_{n \to \infty} \frac{1}{2\varepsilon^2}\int_{\Omega} q^\varepsilon(u_n-q^\varepsilon)_+^{p}+(1-\tfrac{2}{p+1})(u_n-q^\varepsilon)_+^{p+1}\\
&= \liminf_{n \to \infty} \mathcal{E}^\varepsilon(u_n)-\tfrac{1}{2}\dualprod{d\mathcal{E}^\varepsilon(u_n)}{u_n}=c^\varepsilon,
\end{split} \] so that $u$ satisfies the claim.
Otherwise, for any $\delta < \min( \Hat{W}, \Hat{d})$, let $R > 0$ be such that \[
-\delta \le \inf \bigl\{ s \in \mathbf{R} \: :\: \exists r \in \mathbf{R}, (s, r) \in \Omega \text{ and } \abs{r}\ge R\bigr\}, \] and \begin{equation} \label{ineqqdelta}
q(x)\ge \Hat{q}_\delta(x):=(\Hat{W}-\delta)x_1+\Hat{d}-\delta \qquad \text{if $\abs{x_2} \ge R$}. \end{equation}
We have, for $\Omega_R = \{ x \in \Omega \: :\: |x_2|\geq R\}$, and in view of Lemma~\ref{lemmaUnboundedCompactness}, \begin{equation} \label{ineqOmegaR} \begin{split}
c^\varepsilon&=\lim_{n \to \infty}\mathcal{E}^\varepsilon(u_n)-\tfrac{1}{2}\dualprod{d\mathcal{E}^\varepsilon(u_n)}{u_n}\\
&\le \liminf_{n \to \infty} \frac{1}{\varepsilon^2}\int_{\Omega} u_n (u_n-q)_+^p
= \liminf_{n \to \infty} \frac{1}{\varepsilon^2}\int_{\Omega_R} u_n (u_n-q)_+^p \\
&\le C \liminf_{n \to \infty} \frac{1}{\varepsilon^2}\int_{\Omega_R} \Bigl(u_n-\frac{q}{1+\delta}\Bigr)_+^{p+1}
\le C \liminf_{n \to \infty} \frac{1}{\varepsilon^2}\int_{\Omega_R} (u_n-\Hat{q}_\delta)_+^{p+1}. \end{split} \end{equation}
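The passage to $\bigl(u_n-\frac{q}{1+\delta}\bigr)_+^{p+1}$ in \eqref{ineqOmegaR} rests on an elementary pointwise bound: for nonnegative $q$, on the set $\{u_n > q\}$ one has $(u_n-q)_+ \le \bigl(u_n-\frac{q}{1+\delta}\bigr)_+$ and $q \le \frac{1+\delta}{\delta}\bigl(u_n-\frac{q}{1+\delta}\bigr)$, so that \[ u_n (u_n-q)_+^p \le \Bigl(1+\frac{1+\delta}{\delta}\Bigr)\Bigl(u_n-\frac{q}{1+\delta}\Bigr)_+^{p+1}, \] which explains the constant $C=C(\delta)$.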
Let $\psi \in C^\infty(\mathbf{R})$ be such that $\supp \psi' \subset [-2\delta, -\delta]$, $\psi (t)=0$ for $t \le -2 \delta$ and $\psi(t)=1$ for $t \ge -\delta$. We set $\varphi(x_1, x_2)=\psi(x_1)$. Note that $\supp \nabla \varphi \cap \Bar{\Omega}$ is compact, so that by Rellich's Theorem, \[
\int_{\Omega} \abs{\nabla \varphi}^2\abs{u_n}^2 \to 0, \] and therefore, defining $v_n=\varphi u_n$, \[
\int_{\Omega} \abs{\nabla v_n}^2=\int_{\Omega} \abs{\nabla u_n}^2+o(1). \] For every $\tau > 0$ \begin{multline*}
\max_{\theta > 0} \mathcal{E}(\theta u_n)
\ge \mathcal{E}(\tau u_n)
=\Hat{\mathcal{E}}_{\delta}(\tau v_n)+\frac{\tau^2}{2}\int_{\Omega} \abs{\nabla u_n}^2-\abs{\nabla v_n}^2 \\ +\frac{1}{\varepsilon^2}\int_{\Omega} F(\tau v_n-\Hat{q}_\delta)-F(\tau u_n-q). \end{multline*} Choose now $\tau_n$ such that $\Hat{\mathcal{E}}_\delta(\tau_n v_n)=\sup_{\tau > 0}\Hat{\mathcal{E}}_\delta(\tau v_n)$. If $\tau_n \ge 1$, we have, \begin{multline*}
\tau_n^2 \int_{\Omega} \abs{\nabla v_n}^2 = \frac{1}{\varepsilon^2}\int_{\Omega} \tau_n v_n f(\tau_n v_n-\Hat{q}_\delta) \ge \tau_n^{p+1}\frac{1}{\varepsilon^2}\int_{\Omega} (v_n-\Hat{q}_\delta)_+^{p+1} \\
\ge \tau_n^{p+1}\frac{1}{\varepsilon^2}\int_{\Omega_R} (v_n-\Hat{q}_\delta)^{p+1}_+
= \tau_n^{p+1} \frac{1}{\varepsilon^2}\int_{\Omega_R} (u_n-\Hat{q}_\delta)^{p+1}_+, \end{multline*} so that by \eqref{ineqOmegaR} we obtain \[ \tau_n \le \max\Biggl(1, \biggl( \frac{\int_{\Omega} \abs{\nabla v_n}^2}{\frac{1}{\varepsilon^2}\int_{\Omega_R} (u_n-\Hat{q}_\delta)^{p+1}_+} \biggr)^\frac{1}{p-1}\Biggr), \] and the quantity on the right-hand side is bounded in view of \eqref{ineqOmegaR}. This implies that $(\tau_n)$ is bounded, hence $\tau_n v_n \rightharpoonup 0$ and $\tau_n u_n \rightharpoonup 0$ in $D^{1, 2}(\Omega)$, and by Lemma~\ref{lemmaUnboundedCompactness}, that \[ \int_{\Omega \setminus \Omega_R} F(\tau_n v_n-\Hat{q}_\delta)-F(\tau_n u_n-q) \to 0, \qquad\text{as }n\to +\infty. \] On the other hand, by \eqref{ineqqdelta}, $\Hat{q}_\delta \le q$ in $\Omega_R$, and \[
\int_{\Omega_R} F(\tau_n v_n-\Hat{q}_\delta)-F(\tau_n u_n-q) = \int_{\Omega_R} F(\tau_n u_n-\Hat{q}_\delta)-F(\tau_n u_n-q)\ge 0. \]
Hence, \[
\liminf_{n \to \infty} \mathcal{E}(u_n) \ge \liminf_{n \to \infty} \Hat{\mathcal{E}}_\delta(\tau_n v_n) \ge \Hat{c}_\delta := \inf_{v \in \mathrm{D}^{1, 2}_0(]-2\delta, +\infty[\times \mathbf{R})} \Hat{\mathcal{E}}_\delta(v), \] and the conclusion follows from Proposition~\ref{prop:al}, sending $\delta$ to zero. \end{proof}
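For the reader's convenience, we record the identity behind the coercivity estimate in the proof above; with the model nonlinearity $f(t)=t_+^p$, $F(t)=\frac{1}{p+1}t_+^{p+1}$, a direct computation gives, for every $u \in \mathrm{D}^{1, 2}_0(\Omega)$, \[ \mathcal{E}^\varepsilon(u)-\frac{1}{p+1}\dualprod{d\mathcal{E}^\varepsilon(u)}{u} =\Bigl(\frac{1}{2}-\frac{1}{p+1}\Bigr)\int_{\Omega} \abs{\nabla u}^2+\frac{1}{(p+1)\varepsilon^2}\int_{\Omega} q^\varepsilon (u-q^\varepsilon)_+^{p}, \] in which both terms on the right-hand side are nonnegative as soon as $q^\varepsilon \ge 0$.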
From Proposition~\ref{propositionPS}, we derive
\begin{theorem} \label{theoremExistenceLevels} If \[
\sup_{x \in \Omega}\Bigl(\frac{\kappa^2}{2} H(x, x)-\kappa q(x)\Bigr) > \frac{\kappa^2}{4\pi} \Bigl(\log \frac{\kappa}{2\pi \Hat{W}}-1\Bigr)-\kappa \Hat{d}, \] then, if $\varepsilon$ is sufficiently small, there exists $u^\varepsilon \in \mathrm{D}^{1, 2}_0(\Omega)$ such that $d\mathcal{E}^\varepsilon(u^\varepsilon)=0$ and $\mathcal{E}^\varepsilon(u^\varepsilon)=c^\varepsilon$. \end{theorem} \begin{proof} By Proposition~\ref{propUnboundedUpper}, we have \[
c^\varepsilon \le \frac{\kappa^2}{4\pi} \log \frac{1}{\varepsilon}-\sup_{x\in \Omega}\Bigl(\frac{\kappa^2}{2} H(x, x)-\kappa q(x) \Bigr) +\mathcal{C} + o(1). \] On the other hand, in view of Theorem~\ref{thmYang}, $\Hat{\mathcal{E}}^\varepsilon$ possesses a ground-state whose energy is bounded by $\frac{\kappa^2}{4\pi}\log \frac{1}{\varepsilon}+O(1)$. It follows from Proposition~\ref{prop:1maiUnbounded} applied to these ground-states that \[ \begin{split}
\Hat{c}^\varepsilon&=\frac{\kappa^2}{4\pi} \log \frac{1}{\varepsilon}-\sup_{x\in \mathbf{R}^2_+} \Bigl(\frac{\kappa^2}{4\pi} \log 2x_1-\kappa (\Hat{W}x_1+\Hat{d})\Bigr) +\mathcal{C} + o(1) \\
&=\frac{\kappa^2}{4\pi} \log \frac{1}{\varepsilon}- \Bigl( \frac{\kappa^2}{4\pi} \Bigl(\log \frac{\kappa}{2\pi \Hat{W}}-1\Bigr)-\kappa \Hat{d} \Bigr) +\mathcal{C} + o(1). \end{split} \] Therefore, when $\varepsilon$ is small enough, $c^\varepsilon < \Hat{c}^\varepsilon$, and the conclusion follows from Proposition~\ref{propositionPS}. \end{proof}
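The value appearing in the last display arises from an elementary maximization over the half-plane: the function $x_1 \mapsto \frac{\kappa^2}{4\pi}\log 2x_1-\kappa(\Hat{W}x_1+\Hat{d})$ is concave on $(0, \infty)$ and its derivative $\frac{\kappa^2}{4\pi x_1}-\kappa \Hat{W}$ vanishes at $x_1=\frac{\kappa}{4\pi \Hat{W}}$, whence \[ \sup_{x \in \mathbf{R}^2_+} \Bigl(\frac{\kappa^2}{4\pi} \log 2x_1-\kappa (\Hat{W}x_1+\Hat{d})\Bigr) = \frac{\kappa^2}{4\pi}\Bigl(\log \frac{\kappa}{2\pi \Hat{W}}-1\Bigr)-\kappa \Hat{d}. \]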
\section{Pair of vortices in bounded domains} \label{sectionVortexPair}
In this section, $\Omega\subset \mathbf{R}^2$, $f : \mathbf{R} \to \mathbf{R}$ and $q: \Omega \to \mathbf{R}$ are as in Section~\ref{sectionSingleVortex}. Given ${\boldsymbol{\varepsilon}}=(\varepsilon_+, \varepsilon_-)$ with $\varepsilon_\pm > 0$, $\kappa_+>0$ and $\kappa_-<0$, we consider solutions of the boundary value problem \[ \left\{ \begin{aligned} -\Delta u^{\boldsymbol{\varepsilon}} &= \frac{1}{\varepsilon_+{}^2} f(u^{\boldsymbol{\varepsilon}} - q^\varepsilon_+) - \frac{1}{\varepsilon_-{}^2} f(q^\varepsilon_- -u^{\boldsymbol{\varepsilon}}) &
& \text{in $\Omega$}, \\ u^{\boldsymbol{\varepsilon}} &= 0 & &\text{on $\partial \Omega$}, \end{aligned} \right. \tag{\protect{$\mathcal{Q}^{\boldsymbol{\varepsilon}}$}} \label{Qeps} \] where $q^\varepsilon_\pm = q+ \frac{\kappa_\pm}{2\pi} \log \frac{1}{\varepsilon_\pm}$.
We consider the least-energy nodal solutions of \eqref{Qeps} obtained by minimizing the energy functional \[
\mathcal{E}^{\boldsymbol{\varepsilon}}(u)= \int_{\Omega} \Bigl(\frac{|\nabla u|^2}{2} - \frac{1}{\varepsilon_+{}^2}F(u-q^\varepsilon_+) -\frac{1}{\varepsilon_-{}^2}F(q^\varepsilon_- -u) \Bigr)
\] over the natural constraint given by the nodal Nehari set \[ \mathcal{M}^{\boldsymbol{\varepsilon}} = \left\{ u\in H^1_0(\Omega) \ : \ u_+\neq 0, u_- \ne 0, \ \langle d\mathcal{E}^{\boldsymbol{\varepsilon}}(u), u_+\rangle = \langle d\mathcal{E}^{\boldsymbol{\varepsilon}}(u), u_-\rangle =0\right\}. \] It is standard \cites{CastroCossioNeuberger,BartschWethWillem,BartschWeth2003,BartschWeth2005} to prove the following. \begin{proposition} Assume that $q^\varepsilon_+$ is positive on $\Omega$ and $q^\varepsilon_-$ is negative on $\Omega$, so that $\mathcal{M}^{\boldsymbol{\varepsilon}} \neq \emptyset$, and define \[ d^{\boldsymbol{\varepsilon}} = \inf_{u\in \mathcal{M}^{\boldsymbol{\varepsilon}}} \mathcal{E}^{\boldsymbol{\varepsilon}}(u). \] There exists $u^{\boldsymbol{\varepsilon}} \in \mathcal{M}^{\boldsymbol{\varepsilon}}$ such that $\mathcal{E}^{\boldsymbol{\varepsilon}}(u^{\boldsymbol{\varepsilon}})=d^{\boldsymbol{\varepsilon}}$, and $u^{\boldsymbol{\varepsilon}}$ is a sign-changing solution of \eqref{Qeps}. \end{proposition}
Our focus is the asymptotics of $u^{\boldsymbol{\varepsilon}}$ for a sequence ${\boldsymbol{\varepsilon}} \to (0, 0)$. We assume that $0 < c < \frac{\log \varepsilon_+}{\log \varepsilon_-} < C < \infty$, and we will write $\logEps$ instead of $\log \varepsilon_+$ or $\log \varepsilon_-$ in asymptotic expansions.
We extend the definition of $U_\kappa$ given by \eqref{Ukappa} for $\kappa < 0$ by $U_\kappa=-U_{-\kappa}$ and $\rho_{\kappa}=\rho_{-\kappa}$. One still has, when $\abs{x}$ is large enough, $U_\kappa(x)=\frac{\kappa}{2\pi}\log \frac{\rho_\kappa}{\abs{x}}$. We also set \[
\mathcal{C}_\pm= \frac{\kappa_\pm^2}{4\pi} \log \rho_{\kappa_\pm} +
\int_{B(0, \rho_{\kappa_\pm})}\Bigl(\frac{|\nabla U_{\kappa_\pm}|^2}{2} - \frac{U_{\kappa_\pm}^{p+1}}{p+1}\Bigr). \] The Kirchhoff--Routh function $\mathcal{W}$ is defined for $(x_+, x_-)\in \Omega^2_*=\{ (y_+, y_-) \in \Omega^2 \: :\: y_+ \ne y_-\}$ by \[ \begin{split}
\mathcal{W}(x_+,x_-)= \, &\frac{\kappa_+^2}{2}H(x_+, x_+) + \frac{\kappa_-^2}{2}H(x_-, x_-) + \kappa_+\kappa_- G(x_+, x_-)\\ &
-\kappa_+ q(x_+) -\kappa_- q(x_-). \end{split} \] We set \begin{equation} \begin{aligned}\label{defiqNodal}
A^{\boldsymbol{\varepsilon}}_\pm &=\Big\{ x \in \Omega \: :\: \pm u^{\boldsymbol{\varepsilon}}(x)> \pm q^{\boldsymbol{\varepsilon}}_\pm(x)\Big\}, \\
\omega_\pm^{\boldsymbol{\varepsilon}}&=\pm \frac{1}{\varepsilon_\pm{}^2} f(\pm(u^{\boldsymbol{\varepsilon}}-q^{\boldsymbol{\varepsilon}}_\pm)), \\
\kappa^{\boldsymbol{\varepsilon}}_\pm&=\int_{\Omega} \omega_\pm^{\boldsymbol{\varepsilon}}, \\
x^{\boldsymbol{\varepsilon}}_\pm&=\frac{1}{\kappa^{\boldsymbol{\varepsilon}}_\pm}\int_{\Omega} x \, \omega^{\boldsymbol{\varepsilon}}_\pm(x)\, dx, \\
\rho^{\boldsymbol{\varepsilon}}_\pm&=\rho_{\kappa^{\boldsymbol{\varepsilon}}_\pm}. \end{aligned} \end{equation}
We will prove
\begin{theorem}\label{thm:K3} As ${\boldsymbol{\varepsilon}} \to 0$, we have \[ \begin{split}
u^{\boldsymbol{\varepsilon}}= \, & U_{\kappa_+^{\boldsymbol{\varepsilon}}} \Big(\frac{\cdot-x^{\boldsymbol{\varepsilon}}_+}{\varepsilon_+}\Big)+\kappa_+^{\boldsymbol{\varepsilon}}\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon_+ \rho_{+}^{\boldsymbol{\varepsilon}}}+ H(x^{\boldsymbol{\varepsilon}}_+, \cdot)\Bigr)\\ &+U_{\kappa_-^{\boldsymbol{\varepsilon}}} \Big(\frac{\cdot-x^{\boldsymbol{\varepsilon}}_-}{\varepsilon_-}\Big)+\kappa_-^{\boldsymbol{\varepsilon}}\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon_- \rho_-^{\boldsymbol{\varepsilon}}}+ H(x^{\boldsymbol{\varepsilon}}_-, \cdot)\Bigr)+o(1), \end{split} \] in $\mathrm{W}^{2, 1}_\mathrm{loc}(\Omega)$, in $\mathrm{W}^{1, 2}_0(\Omega)$, and in $\mathrm{L}^\infty(\Omega)$, where \[
\kappa^{\boldsymbol{\varepsilon}}_\pm=\kappa_\pm+\frac{2\pi}{\log \frac{1}{\varepsilon_\pm}}\Bigl(q(x^{\boldsymbol{\varepsilon}}_\pm)-\kappa_\pm H(x^{\boldsymbol{\varepsilon}}_\pm, x^{\boldsymbol{\varepsilon}}_\pm)-\kappa_\mp G(x^{\boldsymbol{\varepsilon}}_\pm, x^{\boldsymbol{\varepsilon}}_\mp) -\frac{\kappa_\pm}{2\pi} \log \frac{1}{\rho_{\kappa_\pm}} \Bigr)+o(\logEps^{-1}), \] and \[
\mathcal{W}(x^{\boldsymbol{\varepsilon}}_+,x^{\boldsymbol{\varepsilon}}_-) \to \sup_{(x_+,x_-) \in \Omega^2_*} \mathcal{W}(x_+,x_-). \] One also has \[
B(x^{\boldsymbol{\varepsilon}}_\pm, \Bar{r}_\pm^{\boldsymbol{\varepsilon}}) \subset A_\pm^{\boldsymbol{\varepsilon}} \subset B(x_\pm^{\boldsymbol{\varepsilon}}, \mathring{r}_\pm^{\boldsymbol{\varepsilon}}), \] with $\Bar{r}^{\boldsymbol{\varepsilon}}_\pm=\varepsilon_\pm \rho_{\kappa_\pm}+o(\varepsilon_\pm)$ and $\mathring{r}_\pm^{\boldsymbol{\varepsilon}}=\varepsilon_\pm \rho_{\kappa_\pm} +o(\varepsilon_\pm)$. Finally, \[
\mathcal{E}^{\boldsymbol{\varepsilon}} (u^{\boldsymbol{\varepsilon}})= \frac{\kappa^2_+}{4\pi} \log \frac{1}{\varepsilon_+}+\frac{\kappa^2_-}{4\pi} \log \frac{1}{\varepsilon_-}-\mathcal{W}(x^{\boldsymbol{\varepsilon}}_+,x^{\boldsymbol{\varepsilon}}_-)+\mathcal{C}_++\mathcal{C}_-+o(1). \] \end{theorem}
\subsection{Upper bounds on the energy}
We compute upper bounds on $d^{\boldsymbol{\varepsilon}}$ by constructing suitable elements in $\mathcal{M}^{\boldsymbol{\varepsilon}}$.
\begin{lemma} \label{lemNodalUpperBound} For every $\Hat{x}_+, \Hat{x}_- \in \Omega$ such that $\Hat{x}_+\ne \Hat{x}_-$, there exists \[
\Hat{\kappa}^{{\boldsymbol{\varepsilon}}}_{\pm}=\kappa_\pm+\frac{2\pi}{\log \frac{1}{\varepsilon_\pm}}\Bigl( q(\Hat{x}_\pm)-\kappa_\pm H(\Hat{x}_\pm, \Hat{x}_\pm)-\kappa_{\mp} G(\Hat{x}_\pm, \Hat{x}_\mp)+\frac{\kappa_\pm}{2\pi} \log \rho_{\kappa_\pm} \Bigr)+O\bigl(\logEps^{-2}\bigr), \] such that, if \[ \begin{split}
\Hat{u}^{\boldsymbol{\varepsilon}}(x) =&U_{\Hat{\kappa}_+^{\boldsymbol{\varepsilon}}}\Bigl(\frac{x-\Hat{x}_+}{\varepsilon_+}\Bigr)+\Hat{\kappa}^{\boldsymbol{\varepsilon}}_+
\Bigl( \frac{1}{2\pi} \log \frac{1}{\varepsilon_+\Hat{\rho}_+^{\boldsymbol{\varepsilon}}}+H(\Hat{x}_+, x) \Bigr)\\ &+U_{\Hat{\kappa}_-^{\boldsymbol{\varepsilon}}}\Bigl(\frac{x-\Hat{x}_-}{\varepsilon_-}\Bigr)+\Hat{\kappa}^{\boldsymbol{\varepsilon}}_-
\Bigl( \frac{1}{2\pi} \log \frac{1}{\varepsilon_-\Hat{\rho}_-^{\boldsymbol{\varepsilon}}}+H(\Hat{x}_-, x) \Bigr), \end{split} \] then \[
\Hat{u}^{\boldsymbol{\varepsilon}}\in \mathcal{M}^{\boldsymbol{\varepsilon}}. \] Moreover, \[
\Hat{A}_\pm^{\boldsymbol{\varepsilon}}:=\bigl\{ x \: :\: \pm \Hat{u}^{\boldsymbol{\varepsilon}}(x) > \pm q^{\boldsymbol{\varepsilon}}_\pm(x) \bigr\} \subset B(\Hat{x}_\pm, \Hat{r}_\pm^{\boldsymbol{\varepsilon}}),
with $\Hat{r}_\pm^{\boldsymbol{\varepsilon}}=\varepsilon_\pm \rho_{\kappa_\pm}+o(\varepsilon_\pm)$. \end{lemma}
\begin{proof} For every $\boldsymbol{\sigma}=(\sigma_+, \sigma_-) \in \mathbf{R}^2$, we define \begin{gather*}
\Hat{\kappa}^\pm_{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}=\frac{q^{\boldsymbol{\varepsilon}}_\pm(\Hat{x}_\pm)-\kappa_\mp G(\Hat{x}_\pm, \Hat{x}_\mp)+\sigma_\pm}{\frac{1}{2\pi} \log \frac{1}{\varepsilon_\pm \rho_{\kappa_\pm}}+H(\Hat{x}_\pm, \Hat{x}_\pm)},\\ \begin{split}
\Hat{u}_{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}(x) =&U_{\Hat{\kappa}^+_{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}}\Bigl(\frac{x-\Hat{x}_+}{\varepsilon_+}\Bigr)+\Hat{\kappa}^+_{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}
\Bigl( \frac{1}{2\pi} \log \frac{1}{\varepsilon_+\Hat{\rho}_+^{\boldsymbol{\varepsilon}}}+H(\Hat{x}_+, x) \Bigr)\\ &+U_{\Hat{\kappa}^-_{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}}\Bigl(\frac{x-\Hat{x}_-}{\varepsilon_-}\Bigr)+\Hat{\kappa}^-_{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}
\Bigl( \frac{1}{2\pi} \log \frac{1}{\varepsilon_-\Hat{\rho}_-^{\boldsymbol{\varepsilon}}}+H(\Hat{x}_-, x) \Bigr), \end{split} \end{gather*} and we set \[
g^{{\boldsymbol{\varepsilon}}}_\pm(\boldsymbol{\sigma})=\langle d \mathcal{E}^{{\boldsymbol{\varepsilon}}} (\Hat{u}^{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}), (\Hat{u}^{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}})_\pm \rangle. \]
We compute as in the proof of Lemma~\ref{lemmaHatuNehari}, \begin{equation}\label{eq:badaboum1}
\int_{\Omega} \abs{\nabla (\Hat{u}^{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}})_\pm}^2
=\int_{B(0, \rho_{\Hat{\kappa}^\pm_{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}})} \abs{\nabla U_{\Hat{\kappa}^\pm_{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}}}^2+\Hat{\kappa}^{\pm}_{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}} \Bigl(\frac{\kappa_\pm}{2\pi} \log \frac{1}{\varepsilon_\pm}+q(\Hat{x}_\pm)+\sigma_\pm\Bigr)+O(\abs{{\boldsymbol{\varepsilon}}}). \end{equation} We also set \[ \Hat{\omega}^{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}=\frac{1}{\varepsilon_+^2} f(\Hat{u}^{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}} - q^{\boldsymbol{\varepsilon}}_+) - \frac{1}{\varepsilon_-^2} f(q^{\boldsymbol{\varepsilon}}_- -\Hat{u}^{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}), \] and we compute as in the proof of Lemma~\ref{lemmaHatuNehari} \begin{multline}\label{eq:badaboum2}
\int_{\Omega} \Hat{\omega}^{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}\, \Hat{u}^{{\boldsymbol{\varepsilon}}, \boldsymbol{\sigma}}_\pm
=\int_{\mathbf{R}^2} F(U_{\kappa_\pm}+\sigma_\pm) \\+ \Bigl(\tfrac{\kappa_\pm}{2\pi} \log \tfrac{1}{\varepsilon_\pm}+q(\Hat{x}_\pm)+\sigma_\pm\Bigr)\int_{\mathbf{R}^2}f(U_{\kappa_\pm}+\sigma_\pm)+o(1). \end{multline} Combining \eqref{eq:badaboum1} and \eqref{eq:badaboum2} we obtain \[
g^{\boldsymbol{\varepsilon}}_\pm(\boldsymbol{\sigma})=\frac{\kappa_\pm}{2\pi} \log \frac{1}{\varepsilon_\pm} \Bigl( \int_{\mathbf{R}^2} f(U_{\kappa_\pm})-f(U_{\kappa_\pm}+\sigma_\pm)\Bigr)+O(1). \] By the Poincar\'e--Miranda Theorem (see e.g.\ \cite{Kulpa1997}), when $\abs{{\boldsymbol{\varepsilon}}}$ is small, there exists $\boldsymbol{\sigma}_{\boldsymbol{\varepsilon}}$ such that $g^{\boldsymbol{\varepsilon}}(\boldsymbol{\sigma}_{\boldsymbol{\varepsilon}})=0$ and $\boldsymbol{\sigma}_{\boldsymbol{\varepsilon}}=o(1)$ as ${\boldsymbol{\varepsilon}} \to 0$. \end{proof}
Evaluating $\mathcal{E}^{\boldsymbol{\varepsilon}}(\Hat{u}^{\boldsymbol{\varepsilon}})$ yields
\begin{corollary}\label{cor:Nodalupper} As $\abs{{\boldsymbol{\varepsilon}}} \to 0$, we have \begin{equation*}\begin{split} d^{\boldsymbol{\varepsilon}} \leq\ &\frac{\kappa^2_+}{4\pi} \log \frac{1}{\varepsilon_+}+\frac{\kappa^2_-}{4\pi} \log \frac{1}{\varepsilon_-}-\sup_{(x_+, x_-) \in \Omega^2_*}\mathcal{W}(x_+,x_-)+\mathcal{C}_++\mathcal{C}_-+o(1). \end{split}\end{equation*} \end{corollary}
\subsection{Asymptotic behavior of solutions}
We shall prove the counterpart of Proposition~\ref{prop:1mai}.
\begin{proposition}\label{prop:1maiNodal} Let $(v^{\boldsymbol{\varepsilon}})$ be a family of solutions to \eqref{Qeps} such that $v^{\boldsymbol{\varepsilon}}_\pm \ne 0$ and \begin{equation} \label{assumptEnergyUpperboundNodal}
\mathcal{E}^{\boldsymbol{\varepsilon}}(v^{\boldsymbol{\varepsilon}}) \le \frac{\kappa^2_+}{4\pi} \log \frac{1}{\varepsilon_+}+\frac{\kappa^2_-}{4\pi} \log \frac{1}{\varepsilon_-}+O(1), \end{equation} as ${\boldsymbol{\varepsilon}} \to 0$. Define the quantities $A_\pm^{\boldsymbol{\varepsilon}}$, $\omega_\pm^{\boldsymbol{\varepsilon}}$, $\kappa_\pm^{\boldsymbol{\varepsilon}}$, $x_\pm^{\boldsymbol{\varepsilon}}$ and $\rho_\pm^{\boldsymbol{\varepsilon}}$ for $v^{\boldsymbol{\varepsilon}}$ as in \eqref{defiqNodal} for $u^{\boldsymbol{\varepsilon}}$. Then \[ \begin{split}
v^{\boldsymbol{\varepsilon}}= \, & U_{\kappa_+^{\boldsymbol{\varepsilon}}} \Big(\frac{\cdot-x^{\boldsymbol{\varepsilon}}_+}{\varepsilon_+}\Big)+\kappa_+^{\boldsymbol{\varepsilon}}\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon_+ \rho_{+}^{\boldsymbol{\varepsilon}}}+ H(x^{\boldsymbol{\varepsilon}}_+, \cdot)\Bigr)\\ &+U_{\kappa_-^{\boldsymbol{\varepsilon}}} \Big(\frac{\cdot-x^{\boldsymbol{\varepsilon}}_-}{\varepsilon_-}\Big)+\kappa_-^{\boldsymbol{\varepsilon}}\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon_- \rho_-^{\boldsymbol{\varepsilon}}}+ H(x^{\boldsymbol{\varepsilon}}_-, \cdot)\Bigr)+o(1), \end{split} \] in $\mathrm{W}^{2, 1}_\mathrm{loc}(\Omega)$, in $\mathrm{W}^{1, 2}_0(\Omega)$, and in $\mathrm{L}^\infty(\Omega)$, where \[
\kappa^{\boldsymbol{\varepsilon}}_\pm=\kappa_\pm+\frac{2\pi}{\log \frac{1}{\varepsilon_\pm}}\Bigl(q(x^{\boldsymbol{\varepsilon}}_\pm)-\kappa_\pm H(x^{\boldsymbol{\varepsilon}}_\pm, x^{\boldsymbol{\varepsilon}}_\pm)-\kappa_\mp G(x^{\boldsymbol{\varepsilon}}_\pm, x^{\boldsymbol{\varepsilon}}_\mp) -\frac{\kappa_\pm}{2\pi} \log \frac{1}{\rho_{\kappa_\pm}} \Bigr)+o(\logEps^{-1}). \] In particular, we have \[
\mathcal{E}^{\boldsymbol{\varepsilon}} (v^{\boldsymbol{\varepsilon}})= \frac{\kappa^2_+}{4\pi} \log \frac{1}{\varepsilon_+}+\frac{\kappa^2_-}{4\pi} \log \frac{1}{\varepsilon_-}-\mathcal{W}(x^{\boldsymbol{\varepsilon}}_+,x^{\boldsymbol{\varepsilon}}_-)+\mathcal{C}_++\mathcal{C}_-+o(1), \] and \[
B(x^{\boldsymbol{\varepsilon}}_\pm, \Bar{r}_\pm^{\boldsymbol{\varepsilon}}) \subset A_\pm^{\boldsymbol{\varepsilon}} \subset B(x_\pm^{\boldsymbol{\varepsilon}}, \mathring{r}_\pm^{\boldsymbol{\varepsilon}}), \] with $\Bar{r}^{\boldsymbol{\varepsilon}}_\pm=\varepsilon_\pm \rho_{\kappa_\pm}+o(\varepsilon_\pm)$ and $\mathring{r}_\pm^{\boldsymbol{\varepsilon}}=\varepsilon_\pm \rho_{\kappa_\pm} +o(\varepsilon_\pm)$. \end{proposition}
In other words, $v^{\boldsymbol{\varepsilon}}$ satisfies the same asymptotics as those stated in Theorem~\ref{thm:K3} for $u^{\boldsymbol{\varepsilon}}$, except for the convergence of $x^{\boldsymbol{\varepsilon}}_\pm$.
\subsubsection{Step 1: First quantitative properties of the solutions}
\begin{proposition} \label{propositionNodalEstimatesueps} We have, as $\abs{{\boldsymbol{\varepsilon}}} \to 0$, \begin{gather*}
\muleb{2}(A^{\boldsymbol{\varepsilon}}_\pm) =O\bigl(\logEps^{-1}\bigr), \\
\int_{A^{\boldsymbol{\varepsilon}}_\pm} \abs{\nabla (v^{\boldsymbol{\varepsilon}}-q^{\boldsymbol{\varepsilon}}_\pm)}^2 =O(1), \\
\frac{1}{\varepsilon_{\pm}^2}\int_{A^{\boldsymbol{\varepsilon}}_\pm} F(\pm(v^{\boldsymbol{\varepsilon}}-q^{\boldsymbol{\varepsilon}}_\pm)) =O(1),\\
\int_{\Omega\setminus A^{\boldsymbol{\varepsilon}}_\pm} \abs{\nabla v^{\boldsymbol{\varepsilon}}_\pm}^2 \leq \frac{\kappa^2_\pm}{2\pi} \log\frac{1}{\varepsilon_\pm} + O(1), \\
\pm \int_{\Omega} \omega^{\boldsymbol{\varepsilon}}_\pm \leq \pm \kappa_\pm + O\bigl(\logEps^{-1}\bigr). \end{gather*} \end{proposition} \begin{proof} First note that by Theorem~\ref{thm:K1}, \[
\mathcal{E}^{\boldsymbol{\varepsilon}}(v^{\boldsymbol{\varepsilon}}_\pm) \ge \frac{\kappa_\pm^2}{4\pi} \log \frac{1}{\varepsilon_\pm}+O(1). \] By \eqref{assumptEnergyUpperboundNodal}, this implies that \[
\mathcal{E}^{\boldsymbol{\varepsilon}}(v^{\boldsymbol{\varepsilon}}_\pm) =\frac{\kappa_\pm^2}{4\pi} \log \frac{1}{\varepsilon_\pm}+O(1). \] We are now in position to proceed as in the proof of Proposition~\ref{propositionEstimatesueps}, testing $(\mathcal{Q}^{\boldsymbol{\varepsilon}})$ against $v^{\boldsymbol{\varepsilon}}_+$ and $v^{\boldsymbol{\varepsilon}}_-$ instead of $v^{\boldsymbol{\varepsilon}}$, then against $\min(v^{\boldsymbol{\varepsilon}}, q^{\boldsymbol{\varepsilon}}_+)$ and $\max(v^{\boldsymbol{\varepsilon}}, q^{\boldsymbol{\varepsilon}}_-)$ instead of $\min(v^{\boldsymbol{\varepsilon}}, q^{\boldsymbol{\varepsilon}})$, and finally against $(v^{\boldsymbol{\varepsilon}}-q^{\boldsymbol{\varepsilon}}_+)_+$ and $(q^{\boldsymbol{\varepsilon}}_--v^{\boldsymbol{\varepsilon}})_+$ instead of $(v^{\boldsymbol{\varepsilon}}-q^{\boldsymbol{\varepsilon}})_+$. We skip the details. \end{proof}
\subsubsection{Step 2: Structure of the vorticity set} In this subsection we further describe the vorticity set $A^{\boldsymbol{\varepsilon}}=A^{\boldsymbol{\varepsilon}}_+\cup A^{\boldsymbol{\varepsilon}}_-$. Since it is an open set, it contains at most countably many connected components that we label $A^{\boldsymbol{\varepsilon}}_{\pm, i}$, $i \in I^{\boldsymbol{\varepsilon}}_\pm$. First we have a control on the total area and on the diameter of each connected component.
\begin{lemma} \label{lemmaNodalAreaDiameter} If $\abs{{\boldsymbol{\varepsilon}}}$ is sufficiently small, we have
\[
\muleb{2}(A^{\boldsymbol{\varepsilon}}_\pm) \le C \varepsilon_\pm^2 \]
and, for every $i \in I^{\boldsymbol{\varepsilon}}_\pm$, \begin{equation} \label{ineqNodalVorticityDiameter}
\diam(A^{\boldsymbol{\varepsilon}}_{\pm, i}) \le C \varepsilon_\pm. \end{equation} \end{lemma} \begin{proof} It suffices to repeat the arguments of the proof of the corresponding lemma in the single-vortex case. \end{proof}
\begin{lemma} \label{lemmaNodalVortexSplit} There exist constants $\gamma, C, c>0$ such that, when $\abs{{\boldsymbol{\varepsilon}}}$ is small enough, if \begin{equation} \label{eqNodalSplitVortices}
\int_{A^{\boldsymbol{\varepsilon}}_{\pm, i}} \abs{\nabla (v^{\boldsymbol{\varepsilon}} - q^{\boldsymbol{\varepsilon}}_\pm)}^2 > \gamma^2, \end{equation} then for every $j \in I^{\boldsymbol{\varepsilon}}_\mp$, \begin{gather} \label{ineqNodalLowerBoundMeas} \muleb{2}(A_{\pm, i}^{\boldsymbol{\varepsilon}})\ge c\varepsilon^2_\pm, \\ \label{ineqNodalLowerBoundDiam} \diam(A^{\boldsymbol{\varepsilon}}_{\pm, i})\ge c\varepsilon_\pm, \\ \label{ineqNodalLowerBoundBoundary} \dist(A^{\boldsymbol{\varepsilon}}_{\pm, i}, \partial \Omega)\ge c, \\ \label{ineqNodalSignDistance} \dist(A^{\boldsymbol{\varepsilon}}_{\pm, i}, A^{\boldsymbol{\varepsilon}}_{\mp, j})\ge c, \end{gather} while if \eqref{eqNodalSplitVortices} does not hold, then \[
\int_{A^{\boldsymbol{\varepsilon}}_{\pm, i}} \abs{\omega^{\boldsymbol{\varepsilon}}}^s \le C \Norm{\nabla q}_{\mathrm{L}^r(A^{\boldsymbol{\varepsilon}}_{\pm, i})}^{sp} \muleb{2}(A^{\boldsymbol{\varepsilon}}_{\pm,i})^{1+sp(\frac{1}{2}-\frac{1}{r})}, \] where $C$ only depends on $s \ge 1$. \end{lemma} \begin{proof} The proof is very similar to the one of Lemma~\ref{lemmaVortexSplit} except for \eqref{ineqNodalSignDistance} which remains to be proved. To that purpose, we consider the function \[
\eta_{\boldsymbol{\varepsilon}}=\frac{\frac{v^{\boldsymbol{\varepsilon}}_+}{\kappa_+}+\frac{v^{\boldsymbol{\varepsilon}}_-}{\kappa_-}}{\frac{1}{2\pi}\log \frac{1}{\varepsilon_+\varepsilon_-}}. \] We have \[ \eta_{\boldsymbol{\varepsilon}} \restrictedto{A^{\boldsymbol{\varepsilon}}_{\pm, i}}=\frac{\log \frac{1}{\varepsilon_+}}{\log \frac{1}{\varepsilon_+\varepsilon_-}}+O\bigl(\logEps^{-1}\bigr), \] and \[ \eta_{\boldsymbol{\varepsilon}} \restrictedto{A^{\boldsymbol{\varepsilon}}_{\mp, j}}=\frac{- \log \frac{1}{\varepsilon_-}}{\log \frac{1}{\varepsilon_+\varepsilon_-}}+O\bigl(\logEps^{-1}\bigr). \] Therefore, \[
\frac{2\pi}{\capa(A^{\boldsymbol{\varepsilon}}_{\mp, j}, \mathbf{R}^2\setminus A^{\boldsymbol{\varepsilon}}_{\pm, i})}\ge \log \frac{1}{\varepsilon_+ \varepsilon_-}+O(1). \] Using Proposition~\ref{propositionCapacityBoundDistance} with $\Omega=\mathbf{R}^2 \setminus \overline{A^{\boldsymbol{\varepsilon}}_{\pm, i}}$ and $K=\overline{A^{\boldsymbol{\varepsilon}}_{\mp, j}}$, and applying \eqref{ineqNodalVorticityDiameter} to $A^{\boldsymbol{\varepsilon}}_{\mp, j}$ and \eqref{ineqNodalLowerBoundMeas} to $A^{\boldsymbol{\varepsilon}}_{\pm, i}$, we are led to \[
\log \frac{1}{\varepsilon_+\varepsilon_-} \le \log \biggl( C \Bigl( 1+\frac{\dist(A^{\boldsymbol{\varepsilon}}_{\pm, i}, A^{\boldsymbol{\varepsilon}}_{\mp, j})}{\varepsilon_\mp}\Bigr)\Bigl( 1+\frac{\dist(A^{\boldsymbol{\varepsilon}}_{\pm, i}, A^{\boldsymbol{\varepsilon}}_{\mp, j})}{\varepsilon_\pm}\Bigr)\biggr)+O(1), \] which cannot hold if $\dist(A^{\boldsymbol{\varepsilon}}_{\pm, i}, A^{\boldsymbol{\varepsilon}}_{\mp, j}) \to 0$. \end{proof}
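For the reader's convenience, we record the computation behind the capacity estimate in the proof above. Write $w=\frac{v^{\boldsymbol{\varepsilon}}_+}{\kappa_+}+\frac{v^{\boldsymbol{\varepsilon}}_-}{\kappa_-}$; since $v^{\boldsymbol{\varepsilon}}_+$ and $v^{\boldsymbol{\varepsilon}}_-$ have disjoint supports, the estimates of Proposition~\ref{propositionNodalEstimatesueps} give \[ \int_{\Omega} \abs{\nabla w}^2 = \frac{1}{\kappa_+^2}\int_{\Omega}\abs{\nabla v^{\boldsymbol{\varepsilon}}_+}^2+\frac{1}{\kappa_-^2}\int_{\Omega}\abs{\nabla v^{\boldsymbol{\varepsilon}}_-}^2 \le \frac{1}{2\pi} \log\frac{1}{\varepsilon_+\varepsilon_-}+O(1), \] while $w$ differs by $\frac{1}{2\pi}\log\frac{1}{\varepsilon_+\varepsilon_-}+O(1)$ between $A^{\boldsymbol{\varepsilon}}_{\pm, i}$ and $A^{\boldsymbol{\varepsilon}}_{\mp, j}$. Comparing with the variational characterization of the capacity of the condenser formed by these two sets then yields \[ \capa \le \frac{\int_{\Omega} \abs{\nabla w}^2}{\bigl(\frac{1}{2\pi}\log\frac{1}{\varepsilon_+\varepsilon_-}+O(1)\bigr)^2} \le \frac{2\pi}{\log \frac{1}{\varepsilon_+\varepsilon_-}}+O\biggl(\Bigl(\log\frac{1}{\varepsilon_+\varepsilon_-}\Bigr)^{-2}\biggr), \] which is the bound used above.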
The vorticity set is split into four subsets: \begin{align*}
V^{\boldsymbol{\varepsilon}}_\pm&=\bigcup \Bigl\{A_{\pm, i}^{\boldsymbol{\varepsilon}} \: :\: \int_{A_{\pm, i}^{\boldsymbol{\varepsilon}}} \abs{\nabla (v^{\boldsymbol{\varepsilon}} - q^{\boldsymbol{\varepsilon}}_\pm)}^2 \le \gamma^2\Bigr\}, \\
E^{\boldsymbol{\varepsilon}}_\pm&=\bigcup \Bigl\{A_{\pm, i}^{\boldsymbol{\varepsilon}} \: :\: \int_{A_{\pm, i}^{\boldsymbol{\varepsilon}}} \abs{\nabla (v^{\boldsymbol{\varepsilon}} - q^{\boldsymbol{\varepsilon}}_\pm)}^2 > \gamma^2\Bigr\}. \end{align*} By Proposition~\ref{propositionNodalEstimatesueps}, the sets $E^{\boldsymbol{\varepsilon}}_+$ and $E^{\boldsymbol{\varepsilon}}_-$ contain finitely many connected components, and by \eqref{ineqNodalLowerBoundMeas}, \eqref{ineqNodalLowerBoundDiam}, \eqref{ineqNodalLowerBoundBoundary} and \eqref{ineqNodalSignDistance}, they can thus be split as $E^{\boldsymbol{\varepsilon}}_\pm=\bigcup_{j=1}^{k^{\boldsymbol{\varepsilon}}_\pm} E^{\boldsymbol{\varepsilon}}_{\pm, j}$, where $E^{\boldsymbol{\varepsilon}}_{\pm, j}$ are nonempty open sets such that \begin{gather*}
\frac{\dist(E^{\boldsymbol{\varepsilon}}_{\pm, i}, E^{\boldsymbol{\varepsilon}}_{\pm, j})}{\varepsilon_\pm} \to \infty,\\
\liminf_{{\boldsymbol{\varepsilon}} \to 0} \dist(E^{\boldsymbol{\varepsilon}}_{\pm, i}, E^{\boldsymbol{\varepsilon}}_{\mp, j}) >0,\\
\liminf_{{\boldsymbol{\varepsilon}} \to 0} \dist(E^{\boldsymbol{\varepsilon}}_{\pm, i}, \partial \Omega) >0, \\
\limsup_{{\boldsymbol{\varepsilon}} \to 0} \frac{\diam (E^{\boldsymbol{\varepsilon}}_{\pm, i})}{\varepsilon_\pm} < \infty, \end{gather*} as ${\boldsymbol{\varepsilon}} \to 0$. By definition of $E^{\boldsymbol{\varepsilon}}$ and by \eqref{ineqVortexEnergy}, $k^{\boldsymbol{\varepsilon}}_+$ and $k^{\boldsymbol{\varepsilon}}_-$ remain bounded as ${\boldsymbol{\varepsilon}} \to 0$.
\subsubsection{Step 3: Small scale asymptotics} We set \begin{align*}
\omega^{\boldsymbol{\varepsilon}}_{\pm, v}&=\omega^{\boldsymbol{\varepsilon}} \charfun{V_{\pm}^{\boldsymbol{\varepsilon}}}, &
\omega^{\boldsymbol{\varepsilon}}_{\pm, i}&=\omega^{\boldsymbol{\varepsilon}} \charfun{E^{\boldsymbol{\varepsilon}}_{\pm, i}}, \\
\kappa^{\boldsymbol{\varepsilon}}_{\pm, i}&=\int_{\Omega} \omega^{\boldsymbol{\varepsilon}}_{\pm, i}, &
x^{\boldsymbol{\varepsilon}}_{\pm, i}&=\frac{1}{\kappa^{\boldsymbol{\varepsilon}}_{\pm, i}}\displaystyle\int_{\Omega} x\omega^{\boldsymbol{\varepsilon}}_{\pm, i}(x)\, dx. \end{align*}
Using the analogues of Lemma~\ref{lemmaVanishingVorticity} and Lemma~\ref{lemmaSmallScaleLocalEstimates}, one obtains the analogue of Lemma~\ref{lemmaLocalAsymptotics}.
\begin{lemma} \label{lemmaNodalLocalAsymptotics} When ${\boldsymbol{\varepsilon}}$ is small, we have $k^{\boldsymbol{\varepsilon}}_+=k^{\boldsymbol{\varepsilon}}_-=1$, and \begin{multline*}
\kappa^{\boldsymbol{\varepsilon}}_{\pm, 1}=\kappa_\pm +\frac{2\pi}{\log \frac{1}{\varepsilon_\pm}}\Bigl(q(x^{\boldsymbol{\varepsilon}}_\pm)-\kappa_\pm H(x^{\boldsymbol{\varepsilon}}_\pm, x^{\boldsymbol{\varepsilon}}_\pm)-\kappa_{\mp} G(x^{\boldsymbol{\varepsilon}}_{\pm}, x^{\boldsymbol{\varepsilon}}_{\mp})-\frac{\kappa_\pm}{2\pi} \log \frac{1}{\rho_{\kappa_\pm}} \Bigr) \\ +o({\logEps}^{-1}) \end{multline*} and $v^{\boldsymbol{\varepsilon}}_{\pm} \to U_{\kappa_\pm}$ in $\mathrm{W}^{1, r}_{\mathrm{loc}}(\mathbf{R}^2)$. \end{lemma}
\subsubsection{Step 4: Global asymptotics}
The counterpart of Proposition~\ref{propositionAsymptoticsW21} is now
\begin{proposition} \label{propositionAsymptoticsW21Nodal} We have \[ \begin{split}
v^{\boldsymbol{\varepsilon}}= & \ U_{\kappa_{+,1}^{\boldsymbol{\varepsilon}}}\Bigl(\frac{\cdot-x^{\boldsymbol{\varepsilon}}_{+, 1}}{\varepsilon_+}\Bigr)+\kappa^{\boldsymbol{\varepsilon}}_{+, 1}\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon_+ \rho_{\kappa_{+, 1}^{\boldsymbol{\varepsilon}}}}+H(x^{\boldsymbol{\varepsilon}}_{+, 1}, \cdot)\Bigr) \\ &+U_{\kappa_{-,1}^{\boldsymbol{\varepsilon}}}\Bigl(\frac{\cdot-x^{\boldsymbol{\varepsilon}}_{-, 1}}{\varepsilon_-}\Bigr)+\kappa^{\boldsymbol{\varepsilon}}_{-, 1}\Bigl(\frac{1}{2\pi} \log \frac{1}{\varepsilon_- \rho_{\kappa_{-, 1}^{\boldsymbol{\varepsilon}}}}+H(x^{\boldsymbol{\varepsilon}}_{-, 1}, \cdot)\Bigr)+o(1) \end{split} \] in $\mathrm{W}^{2, 1}_{\mathrm{loc}}(\Omega)$, in $\mathrm{W}^{1, 2}_0(\Omega)$, and in $\mathrm{L}^\infty(\Omega)$. \end{proposition}
We have now all the ingredients to complete the \begin{proof}[Proof of Proposition~\ref{prop:1maiNodal}] It follows from the combination of Lemma~\ref{lemmaNodalLocalAsymptotics}, Proposition~\ref{propositionAsymptoticsW21Nodal} and the counterparts of Corollaries~\ref{corollaryAsymptotic} and \ref{corEnergy}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:K3}] Since the solutions satisfy the upper bound of Corollary~\ref{cor:Nodalupper}, one can conclude from Proposition~\ref{prop:1maiNodal}. \end{proof}
\section{Desingularized solutions of the Euler equation} \label{sect:resu}
\subsection{Bounded domains} In bounded domains we shall successively consider stationary vortices, rotating vortices and stationary pairs of vortices.
\subsubsection{Stationary vortices in simply-connected bounded domains} Let us first deduce Theorem~\ref{thm:resu} from Theorem~\ref{thm:K1}.
\begin{proof}[Proof of Theorem~\ref{thm:resu}] Take $q=-\psi_0$, where $\psi_0$ satisfies \eqref{eqpsi0}. One checks that $\psi_0 \in W^{1+\frac{1}{s}, s}(\Omega)$ so that $q \in W^{1, r}(\Omega)$ for every $r < \infty$. Define $\mathbf{v}_\varepsilon=(\nabla u_\varepsilon)^\perp$ where $u_\varepsilon$ is given by Proposition~\ref{prop:2.1}. The conclusion then follows from Theorem~\ref{thm:K1}. \end{proof}
We have constructed in Theorem~\ref{thm:resu} a family of solutions that concentrates around a global maximum of the Kirchhoff--Routh function $\mathcal{W}$; it is also possible to construct a family of solutions that concentrates around a \emph{local} maximum of $\mathcal{W}$:
\begin{theorem}\label{thmLocalMinimum}
Let $\Omega \subset \mathbf{R}^2$ be a bounded simply-connected smooth domain and let $v_n \in L^s(\partial \Omega)$ for some $s>1$ be such that $\int_{\partial \Omega} v_n = 0.$ Let $\kappa >0$ be given and let $\Hat{x} \in \Omega$ be a strict local maximizer of $\mathcal{W}$. For $\varepsilon>0$ there exist smooth stationary solutions $\mathbf{v}_\varepsilon$ of the Euler equation in $\Omega$ with outward boundary flux given by $v_n$, corresponding to vorticities $\omega_\varepsilon$, such that ${\rm supp}(\omega_\varepsilon) \subset B(x_\varepsilon, C\varepsilon)$ for some $x_\varepsilon \in \Omega$ and $C>0$ not depending on $\varepsilon$. Moreover, as $\varepsilon \to 0$, \[
\int_\Omega \omega_\varepsilon \to \kappa \] and $x_\varepsilon \to \Hat{x}$. \end{theorem} \begin{proof} Assume that $\Hat{x}$ is the unique maximizer of $\mathcal{W}$ in $B(\Hat{x}, \rho)$. Define $q \in C^\infty(\Bar{\Omega})$ so that $q=-\psi_0$ in $B(\Hat{x}, \rho/2)$, where $\psi_0$ satisfies \eqref{eqpsi0}, and so that for every $x \in \Omega \setminus \{\Hat{x}\}$, \[
\kappa q(x)-\frac{\kappa^2}{2} H(x, x) > \kappa q(\Hat{x})-\frac{\kappa^2}{2} H(\Hat{x}, \Hat{x}). \] We now apply Theorem~\ref{thm:K1} with $q$. By construction of $q$, we have $x_\varepsilon \to \Hat{x}$.
But then, still by Theorem~\ref{thm:K1}, one has \[
u_\varepsilon(x) \ge \frac{\kappa}{2\pi} \log \frac{1}{\abs{x_\varepsilon-x}}+O(1). \] Therefore, when $\varepsilon$ is small enough, $u_\varepsilon \le -\psi_0+\frac{\kappa}{2\pi}\log \frac{1}{\varepsilon}$ and $u_\varepsilon \le q_\varepsilon$ in $\Omega \setminus B(x_\varepsilon, \rho/2)$. Therefore, for such $\varepsilon$, $u_\varepsilon$ solves $-\varepsilon^2\Delta u_\varepsilon=f(u_\varepsilon+\psi_0- \frac{\kappa}{2\pi} \log \frac{1}{\varepsilon})$ in $\Omega$. One can now take $\mathbf{v}_\varepsilon= (\nabla (u_\varepsilon+\psi_0))^\perp$ and show that this is a stationary solution to the Euler equation. \end{proof}
\subsubsection{Stationary vortices in multiply-connected bounded domains}
If $\Omega$ is not simply connected, then $\Omega = \Omega_0 \setminus \bigcup_{h=1}^m \Omega_h$, where $\Omega_0, \dotsc, \Omega_m$ are bounded simply-connected domains, and one can prescribe, for $h \in \{1, \dotsc, m\}$, the circulations $\int_{\partial \Omega_h} \mathbf{v}\cdot \tau=\gamma_h$. In that case, $\mathbf{v}_0$ is the unique harmonic field whose normal component on the boundary is $v_n$ and whose circulations are $\gamma_h$; i.e., $\mathbf{v}_0$ satisfies \[ \left\{ \begin{aligned}
\nabla \cdot \mathbf{v}_0&=0, & & \text{in $\Omega$}, \\
\nabla \times \mathbf{v}_0&=0, & & \text{in $\Omega$}, \\
n \cdot \mathbf{v}_0&=v_n& & \text{on $\partial \Omega$},\\ \int_{\partial \Omega_h} \mathbf{v}_0 \cdot \tau &=\gamma_h & &\text{for $h \in \{1, \dotsc, m\}$}. \end{aligned} \right. \] If $\int_{\partial \Omega_h} v_n=0$ for every $h \in \{1, \dotsc, m\}$, $\mathbf{v}_0=(\nabla \psi_0)^\perp$ where \begin{equation}
\label{eqpsi0NotConnected} \left\{ \begin{aligned} -\Delta \psi_0&=0& &\text{in $\Omega$}, \\ -\frac{\partial \psi_0}{\partial \tau}&=v_n & & \text{on $\partial \Omega$},\\ \int_{\partial \Omega_h} \frac{\partial \psi_0}{\partial n} & = \gamma_h & &\text{for $h \in \{1, \dotsc, m\}$}.\\ \end{aligned} \right. \end{equation} The Kirchhoff--Routh function associated to the vortex dynamics is then given by \[
\mathcal{W}_*(x)=\frac{\kappa^2}{2}H_*(x, x)+\kappa \psi_0(x), \] where one should recall that $\psi_0$ depends on $v_n$ and $\gamma_h$ for $h \in \{1, \dotsc, m\}$.
We have \begin{theorem}\label{thm:MultiplyConnected} Let $\Omega \subset \mathbf{R}^2$ be a bounded smooth domain and $v_n:\partial \Omega \to \mathbf{R}\in L^s(\partial \Omega)$ for some $s>1$ be such that $\int_{\partial \Omega_h} v_n = 0$ for every $h \in \{0, \dotsc, m\}$. Let $\gamma_h \in \mathbf{R}$ for $h \in \{1, \dotsc, m\}$ and let $\kappa >0$ be given. For $\varepsilon>0$ there exist smooth stationary solutions $\mathbf{v}_\varepsilon$ of the Euler equation in $\Omega$ with outward boundary flux given by $v_n$ and circulations given by $\gamma_h$, corresponding to vorticities $\omega_\varepsilon$, such that ${\rm supp}(\omega_\varepsilon) \subset B(x_\varepsilon, C\varepsilon)$ for some $x_\varepsilon \in \Omega$ and $C>0$ not depending on $\varepsilon$. Moreover, as $\varepsilon \to 0$, \[
\int_\Omega \omega_\varepsilon \to \kappa, \] and \[
\mathcal{W}_*(x_\varepsilon) \to \sup_{x \in \Omega} \mathcal{W}_*(x). \] \end{theorem} \begin{proof}
The proof is almost identical to that of Theorem~\ref{thm:resu}; it relies on Theorem~\ref{thm:K1m} instead of Theorem~\ref{thm:K1}. \end{proof}
\begin{remark} One could similarly prove a counterpart of Theorem~\ref{thmLocalMinimum} for multiply connected domains. \end{remark}
\subsubsection{Rotating vortices in a disc} If $\Omega$ is invariant under rotations, one can consider the Euler equation in a reference frame rotating with angular velocity $\alpha$: \[ \left\{
\begin{aligned}
\nabla \cdot \mathbf{v} &= 0, \\
\mathbf{v}_t + \mathbf{v}\cdot \nabla \mathbf{v}&=-\nabla p+2\alpha \mathbf{v}^\perp-\alpha^2x.
\end{aligned} \right. \] The vorticity of $\mathbf{v}$ with respect to an inertial frame is $\nabla \times \mathbf{v}+2\alpha$. The motion of singular vortices is governed by Kirchhoff's law \eqref{equationKirchhoff}, where $\mathcal{W}$ is replaced by $\mathcal{W}_\alpha(x)=\mathcal{W}(x)+\sum_i \kappa_i \alpha \frac{\abs{x_i}^2}{2}$.
The stream-function method used to construct stationary solutions can be adapted to a rotating reference frame. If $-\Delta \psi=f(\psi)-2\alpha$, setting $\mathbf{v}=(\nabla \psi)^\perp$ and $p= F(\psi)-\frac{\alpha^2}{2}\abs{x}^2-\frac{1}{2}\abs{\nabla \psi}^2$ yields a solution\footnote{With the same velocity field, choosing as pressure $p=F(\psi)-2\alpha \psi-\frac{1}{2}\abs{\nabla \psi}^2$ would of course give a solution to the Euler equation in a Galilean frame.}. In particular, the flow is irrotational outside the set where $\psi > 0$.
\begin{theorem} \label{thmRotating} Let $\rho > 0$, $\kappa >0$ and $\alpha > 0$ with $\kappa < 2\pi \alpha \rho^2$. For every $\varepsilon>0$ there exist smooth rotating solutions $\mathbf{v}_\varepsilon$ of the Euler equation in $B(0, \rho)$ with angular velocity $\alpha$, corresponding to vorticities $\omega_\varepsilon$, such that ${\rm supp}(\omega_\varepsilon)$ is contained in a disc of radius $O(\varepsilon)$ around a point rotating on the circle of radius $\sqrt{\rho^2-\frac{\kappa}{2\pi\alpha}}$. Moreover, as $\varepsilon \to 0$, \[ \int_{B(0, \rho)} \omega_\varepsilon \to \kappa. \] \end{theorem} \begin{proof} Take \[
q(x)=-\alpha \frac{\abs{x}^2}{2} \] and apply Theorem~\ref{thm:K1}. One checks that \[
\mathbf{v}_\varepsilon(x, t)=(\nabla u_\varepsilon)^\perp (R(\alpha t) x), \] where $R(\alpha t)$ denotes the rotation of angle $\alpha t$, satisfies the Euler equation. Since \[
\mathcal{W}_\alpha(x)=\frac{\kappa^2}{4\pi}\log \frac{\rho^2-\abs{x}^2}{\rho}+\frac{\kappa \alpha}{2} \abs{x}^2, \] attains its maximum on the circle of radius $\sqrt{\rho^2-\frac{\kappa}{2\pi\alpha}}$, one has the desired concentration result. \end{proof}
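For the reader's convenience, let us record the elementary computation locating this maximum, which also explains the threshold $\kappa < 2\pi\alpha\rho^2$. Writing $s=\abs{x}^2$, \[
\frac{d}{ds}\Bigl(\frac{\kappa^2}{4\pi} \log \frac{\rho^2-s}{\rho}+\frac{\kappa \alpha}{2} s\Bigr)=-\frac{\kappa^2}{4\pi(\rho^2-s)}+\frac{\kappa \alpha}{2}, \] which vanishes if and only if $s=\rho^2-\frac{\kappa}{2\pi\alpha}$; this value lies in $(0, \rho^2)$ precisely when $\kappa < 2\pi\alpha\rho^2$, and since the derivative changes sign from positive to negative there, the critical circle is indeed the maximum.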
\begin{remark} When $\kappa > 2\pi \alpha \rho^2$, the vortex concentrates around $0$; one thus recovers stationary solutions as in Theorem~\ref{thm:resu}. \end{remark}
\subsubsection{Stationary pairs of vortices in bounded domains}
\begin{theorem} Let $\Omega \subset \mathbf{R}^2$ be a bounded simply-connected smooth domain and $v_n:\partial \Omega \to \mathbf{R}\in L^s(\partial \Omega)$ for some $s>2$ be such that $\int_C v_n = 0$ over each connected component $C$ of $\partial \Omega$. Let $\kappa_+ >0$ and $\kappa_- < 0$ be given. For $\varepsilon>0$ there exist smooth stationary solutions $\mathbf{v}_\varepsilon$ of the Euler equation in $\Omega$ with outward boundary flux given by $v_n$, corresponding to vorticities $\omega_\varepsilon$, such that ${\rm supp}(\omega_\varepsilon^\pm) \subset B(x_\varepsilon^\pm, C\varepsilon)$ for some $x^{\pm}_\varepsilon \in \Omega$ and $C>0$ not depending on $\varepsilon$. Moreover, as $\varepsilon \to 0$, \[ \int_\Omega \omega_\varepsilon^\pm \to \kappa_\pm \]
and \[
\mathcal{W}(x_\varepsilon^+, x_\varepsilon^-) \to \sup_{x^+, x^- \in \Omega} \mathcal{W}(x^+, x^-). \]
\end{theorem} \begin{proof} This follows from Theorem~\ref{thm:K3} along the same lines as Theorem~\ref{thm:resu}. \end{proof}
\begin{remark} There is also a counterpart of Theorem~\ref{thmLocalMinimum} for vortex pairs, concerning the existence of solutions near local maxima of the Kirchhoff--Routh function, and a counterpart of Theorem~\ref{thm:MultiplyConnected} for domains which are not simply connected. \end{remark}
\begin{remark} One can also address the question of rotating vortex pairs. Combining the ingredients of the proof of Theorem~\ref{thmRotating} with those of the previous theorem, one can prove the existence of rotating vortex pairs of strengths $\kappa_+ > 0$ and $\kappa_- < 0$ that concentrate around two antipodal rotating points at distances $\rho _+$ and $\rho _-$ from the center, which maximize the function \[
\frac{\alpha \kappa_+}{2} \rho _+^2 +\frac{\alpha \kappa_-}{2}\rho _-^2+\frac{\kappa_+^2}{4\pi}\log (1-\rho _+^{2})+\frac{\kappa_-^2}{4\pi}\log (1-\rho _-^{2})+\frac{\kappa_+\kappa_-}{2\pi} \log \frac{1+\rho _+\rho _-}{\rho _++\rho _-}. \] In contrast with Theorem~\ref{thmRotating}, the pair of vortices obtained is always a nontrivial pair of rotating vortices for any $\alpha \ne 0$, $\kappa_+ > 0$ and $\kappa_- < 0$. \end{remark}
\subsection{Unbounded domains} We now consider the application of the results of Section~\ref{sectUnbounded} to the desingularization of vortices in unbounded domains.
\subsubsection{Translating vortex pair in the plane} We first consider the construction of a pair of vortices in $\mathbf{R}^2$. First recall that a pair of vortices translating at velocity $\mathbf{W}$ in a flow with vanishing velocity at infinity is, up to a Galilean change of variables, a pair of stationary vortices in a flow with velocity $-\mathbf{W}$ at infinity. The stream-function of the corresponding irrotational flow is $\psi_0(x)=\mathbf{W}^\perp \cdot x$. Therefore, the positions of two vortices of opposite intensities $\kappa$ and $-\kappa$ in the moving reference frame form a critical point of the Kirchhoff--Routh function $\mathcal{W}$ defined by \[
\frac{-\kappa^2}{2\pi} \log \frac{1}{\abs{x-y}} + \mathbf{W}^\perp \cdot x. \]
\begin{theorem} Let $W > 0$ and $\kappa > 0$. For every $\varepsilon > 0$ there exist smooth stationary solutions $\mathbf{v}_\varepsilon$ of the Euler equation in $\mathbf{R}^2$ symmetric with respect to the $x_2$ axis and such that $\lim_{x_1 \to \infty} \mathbf{v}_\varepsilon(x)=(0, W)$, corresponding to vorticities $\omega_\varepsilon$, such that ${\rm supp}(\omega_\varepsilon) \cap \mathbf{R}^2_+ \subset B(\Bar{x}, C\varepsilon)$, where $\Bar{x}= (\frac{\kappa}{4\pi W}, 0)$. \end{theorem} \begin{proof} The problem can be reduced to finding a solution in $\mathbf{R}^2_+$ with vanishing flux on the boundary. The corresponding Kirchhoff--Routh function is \[
\mathcal{W}(x)=\frac{\kappa^2}{4\pi} \log (2x_1)-\kappa Wx_1. \] The conclusion follows from the existence result of Theorem~\ref{thmYang}, the asymptotics of Proposition~\ref{propUnboundedUpper} and Proposition~\ref{prop:1maiUnbounded}. \end{proof}
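The location of $\Bar{x}$ can be read off directly from this Kirchhoff--Routh function: \[
\frac{\partial}{\partial x_1}\Bigl(\frac{\kappa^2}{4\pi} \log (2x_1)-\kappa W x_1\Bigr)=\frac{\kappa^2}{4\pi x_1}-\kappa W=0 \quad\Longleftrightarrow\quad x_1=\frac{\kappa}{4\pi W}. \] Since $\mathcal{W}$ is concave in $x_1$ and does not depend on $x_2$, the symmetric solutions concentrate at $\Bar{x}=(\frac{\kappa}{4\pi W}, 0)$.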
\subsubsection{Stationary vortex in the half-plane with non-vanishing flux} The method just used extends to non-vanishing flux boundary conditions:
\begin{theorem} Let $v_n \in L^1 (\mathbf{R})\cap L^s_{\mathrm{loc}}(\mathbf{R})$ for some $s > 1$ be such that $\int_{-\infty}^0 v_n=-\int_0^\infty v_n>0$. For every $W > 0$ and $\kappa > 0$, if $\kappa/W$ is small enough and if $\varepsilon > 0$ is sufficiently small, there exist smooth stationary solutions $\mathbf{v}_\varepsilon$ of the Euler equation in $\mathbf{R}^2_+$ with outward boundary flux given by $v_n$ and $\lim_{x_1 \to \infty} \mathbf{v}_\varepsilon(x)=(0, W)$, corresponding to vorticities $\omega_\varepsilon$, such that ${\rm supp}(\omega_\varepsilon) \subset B(x_\varepsilon, C\varepsilon)$ for some $x_\varepsilon \in \mathbf{R}^2_+$ and $C>0$ not depending on $\varepsilon$, and $\int_{\mathbf{R}^2_+} \omega_\varepsilon \to \kappa$. \end{theorem} \begin{proof} Define $\psi_0$ by \[ \left\{ \begin{aligned}
-\Delta \psi_0 & = 0 & & \text{in $\mathbf{R}^2_+$}, \\
\partial_2 \psi_0 &=v_n& & \text{on $\partial \mathbf{R}^2_+$}, \\
\psi_0(0, x_2) &\to 0 & &\text{as $\abs{x_2} \to \infty$},\\
\frac{\psi_0(x)}{x_1} &\to -W & & \text{as $x_1 \to \infty$}. \end{aligned} \right. \] One checks that by our assumptions, \[
\psi_0(0) > 0. \] In order to apply Theorem~\ref{theoremExistenceLevels}, we need to find $\Hat{x} \in \Omega$ such that \begin{equation} \label{eqTrenchStrict}
\kappa \psi_0 (\Hat{x})+\frac{\kappa^2}{4\pi} \log 2 \Hat{x}_1 > \frac{\kappa^2}{4\pi} \Bigl(\log \frac{\kappa}{2\pi W}-1\Bigr). \end{equation} One takes $\Hat{x}=(\frac{\kappa}{4\pi W}, 0)$. If $\kappa/W$ is small enough, one has \[
\kappa \psi_0 (\Hat{x}) > 0, \] and one checks that \[
\kappa \psi_0 (\Hat{x})+\frac{\kappa^2}{4 \pi} \log 2 \Hat{x}_1 > \frac{\kappa^2}{4\pi} \log \frac{\kappa}{2\pi W} > \frac{\kappa^2}{4\pi} \Bigl(\log \frac{\kappa}{2\pi W}-1\Bigr). \] The conclusion follows then from Theorem~\ref{theoremExistenceLevels}. \end{proof}
\subsubsection{Stationary vortex in a perturbed half-plane} Instead of perturbing the boundary condition on the half-plane, one can perturb the geometry itself. The first situation is that in which the half-plane has, for example, been slightly enlarged around $0$:
\begin{theorem} Assume that $\Omega$ is a simply-connected perturbation of $\mathbf{R}^2_+$ in the sense of \eqref{condPerturbation}. Let $\Bar{x} \in \partial \Omega$ be such that $x_1 > \Bar{x}_1$ for every $x \in \Omega$ and assume that $\partial \Omega$ is of class $C^2$ in a neighborhood of $\Bar{x}$. Then for every $W > 0$, if $\kappa > 0$ and $\varepsilon > 0$ are sufficiently small, there exist smooth stationary solutions $\mathbf{v}_\varepsilon$ of the Euler equation in $\Omega$ with vanishing boundary flux and $\lim_{x_1 \to \infty} \mathbf{v}_\varepsilon(x)=(0, W)$, corresponding to vorticities $\omega_\varepsilon$, such that ${\rm supp}(\omega_\varepsilon) \subset B(x_\varepsilon, C\varepsilon)$ for some $x_\varepsilon \in \Omega$ and $C>0$ not depending on $\varepsilon$ and $\int_{\Omega} \omega_\varepsilon \to \kappa$. \end{theorem}
\begin{proof} We are going to obtain the solutions by applying Theorem~\ref{theoremExistenceLevels} with $q=-\psi_0$. Let $\mathbf{v}_0$ be the irrotational stationary solution to the Euler equation with vanishing flux on $\partial \Omega$ and $\lim_{x_1 \to \infty} \mathbf{v}_0(x)=(0, W)$, i.e.\ $\mathbf{v}_0=(\nabla \psi_0)^\perp$ with \begin{equation} \label{eqPsi0} \left\{
\begin{aligned}
-\Delta \psi_0 &= 0, & &\text{in $\Omega$}, \\
\psi_0 &= 0 & &\text{on $\partial \Omega$}, \\
\tfrac{\psi_0(x)}{x_1} &\to -W &&\text{as $x_1 \to \infty$}.
\end{aligned} \right. \end{equation} In order to apply Theorem~\ref{theoremExistenceLevels}, we need to find $\Hat{x} \in \Omega$ such that condition \eqref{eqTrenchStrict} holds. First, by the strong maximum principle, one has $\partial_1 \psi_0(\Bar{x})>-W$, so that there exists $\gamma \in (0, W)$ such that, in a neighborhood of $\Bar{x}$, \[
\psi_0(x) > -\gamma (x_1-\Bar{x}_1). \] If we consider the point $\Hat{x}=(\Bar{x}_1+\frac{\kappa}{4\pi W}, \Bar{x}_2)$, one has \[
\kappa \psi_0(\Hat{x}) > -\gamma \frac{\kappa^2}{4\pi W}. \] On the other hand, if $K$ denotes the curvature of $\partial \Omega$ at $\Bar{x}$, one has by Proposition~\ref{propositionAsymptotH}, \[
\frac{\kappa^2}{2} H (\Hat{x}, \Hat{x})=\frac{\kappa^2}{4\pi} \log \frac{\kappa}{2\pi W}+O(\kappa^3). \] Therefore, if $\kappa$ is small enough, one has \eqref{eqTrenchStrict}, and one can then apply Theorem~\ref{theoremExistenceLevels} to obtain the conclusion. \end{proof}
\subsubsection{Translating vortex pair near a translating axisymmetric obstacle} We can also treat a situation in some sense opposite to that of the previous section: we desingularize vortices in a set obtained by removing some part of the half-plane. By a Galilean change of variables and extension of the flow by symmetry, this also corresponds physically to a rigid body in translation together with a pair of vortices. A similar problem was studied through the vorticity method by B.\thinspace Turkington \cite{Turkington1983}.
\begin{theorem} Let $D \subset \mathbf{R}^2$ be a compact simply-connected set with non-empty interior and symmetric with respect to the $x_1$ variable. Then for every $\kappa> 0$ and $W > 0$, if $\varepsilon > 0$ is sufficiently small there exist smooth stationary solutions $\mathbf{v}_\varepsilon$ of the Euler equation in $\mathbf{R}^2 \setminus D$ symmetric with respect to the $x_2$ axis, with vanishing boundary flux and such that $\lim_{x_1 \to \infty} \mathbf{v}_\varepsilon(x)=(0, W)$, corresponding to vorticities $\omega_\varepsilon$, such that ${\rm supp}(\omega_\varepsilon) \cap \mathbf{R}^2_+ \subset B(x_\varepsilon, C\varepsilon)$ for some $x_\varepsilon \in \Omega \cap \mathbf{R}^2_+$ and $C>0$ not depending on $\varepsilon$ and $\int_{\mathbf{R}^2_+ \setminus D} \omega_\varepsilon \to \kappa$. \end{theorem}
\begin{proof} Set $\Omega=\mathbf{R}^2_+ \setminus D$. We shall consider the case $W > 0$ and $\kappa > 0$, and we shall assume that $B(0, \rho) \subset D \subset B(0, R)$. We use again Theorem~\ref{theoremExistenceLevels} and therefore we shall prove that \eqref{eqTrenchStrict} holds for some $\Hat{x} \in \mathbf{R}^2$ where $\psi_0$ solves \eqref{eqPsi0}. We shall take $\Hat{x}^\lambda=(\frac{\kappa}{4\pi W}, \lambda \frac{\kappa}{4\pi W})$ where $\lambda \in \mathbf{R}$. By the maximum principle on $\Omega$, one has for $x \in \Omega$, \[
\psi_0(x)>-Wx_1+W \frac{x_1 \rho^2}{\abs{x}^2}. \] Hence, we have \[
\kappa \psi_0(\Hat{x}^\lambda) \ge - \frac{\kappa^2 }{4\pi} + \frac{4\pi \rho^2 W^2}{1+\lambda^2}. \] We also use the formula for the Green function $\Tilde{G}$ of $\mathbf{R}^2_+ \setminus B(0, R)$ used by B.\thinspace Turkington \cite[p.\thinspace 1047]{Turkington1983}: \[
\Tilde{G}(x, y)=\frac{1}{4\pi} \log \frac{1+\dfrac{4x_1y_1}{\abs{x-y}^2}}{1+\dfrac{4R^2 x_1y_1 } {(x_1y_1+x_2 y_2-R^2)^2+(x_2y_1-x_1y_2)^2}}. \] Since $\Tilde{G}(x, y) \le G(x, y)$, one has therefore \[
H(x,x)\ge \frac{1}{2\pi} \log 2 x_1 - \frac{1}{2\pi} \log \Bigl(1+ \frac{4R^2x_1^2}{(\abs{x}^2-R^2)^2}\Bigr), \]
whence \[
\frac{\kappa^2}{2} H(\Hat{x}^\lambda, \Hat{x}^\lambda) \ge \frac{\kappa^2}{4\pi} \log \frac{\kappa}{2\pi W}+O(\lambda^{-4}). \] One checks thus that for $\lambda$ sufficiently large, \[
\kappa \psi_0 (\Hat{x}^\lambda)+\frac{\kappa^2}{4\pi} \log 2 \Hat{x}^\lambda_1 > \frac{\kappa^2}{4\pi} \Bigl(\log \frac{\kappa}{2\pi W}-1\Bigr), \] and the conclusion thus follows from Theorem~\ref{theoremExistenceLevels}. \end{proof}
\appendix
\section{Capacity estimates}
Let $\Omega \subset \mathbf{R}^2$ be open. The electrostatic capacity of a compact set $K \subset \Omega$ is \[
\capa(K, \Omega)=\inf \Bigl\{ \int_{\Omega} \abs{\nabla \varphi}^2 \: :\: \varphi \in C^\infty_c(\Omega)\text{ and } \varphi = 1 \text{ on $K$} \Bigr\}. \] Let us first recall the following standard capacity estimate which was discovered by H.\thinspace Poincar\'e \cite[p.\thinspace 17--22]{Poincare} and whose first complete proof was given by G.\thinspace Szeg\H o~\cite{Szego1930}.
\begin{proposition} \label{propositionCapacityArea} Let $\Omega \subset \mathbf{R}^2$ have finite measure. For every compact $K \subset \Omega$, \[
\frac{4\pi}{\capa(K, \Omega)} \le \log \frac{\muleb{2}(\Omega)}{\muleb{2}(K)}. \] \end{proposition} \begin{proof} One shows by the P\'olya--Szeg\H o inequality (for a modern treatment, see e.g. \cite{Kawohl1985}, \cite{LiebLoss2001} or \cite{BrockSolynin2000}) that \[
\capa(K, \Omega) \ge \capa (\overline{B(0, \rho)}, B(0,R)) \] if $\rho$ and $R$ are chosen so that $\muleb{2}(B(0,\rho))=\muleb{2}(K)$ and $\muleb{2}(B(0,R))=\muleb{2}(\Omega)$. One can then compute the right-hand side explicitly to reach the conclusion. \end{proof}
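For the reader's convenience, the explicit computation is the classical capacity of a concentric annulus: the extremal potential is \[
\varphi(x)=\frac{\log (R/\abs{x})}{\log (R/\rho)}, \qquad \text{so that} \qquad \capa\bigl(\overline{B(0, \rho)}, B(0,R)\bigr)=\int_\rho^R \frac{2\pi r \,dr}{r^2 \log^2 (R/\rho)}=\frac{2\pi}{\log (R/\rho)}, \] whence $4\pi/\capa\bigl(\overline{B(0, \rho)}, B(0,R)\bigr)=\log (R^2/\rho^2)=\log \bigl(\muleb{2}(B(0, R))/\muleb{2}(B(0, \rho))\bigr)$.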
When $\mathcal{L}^2(\Omega)=+\infty$, Proposition~\ref{propositionCapacityArea} loses its interest. However, one still has:
\begin{proposition} \label{propositionCapacityLocalArea} Let $K \subset \mathbf{R}^2_+$ be compact. We have \[
\frac{4\pi}{\capa(K, \mathbf{R}^2_+)} \le \log \frac{8\pi \sup_{x \in K} \abs{x}^2}{\muleb{2}(K)}. \] \end{proposition} \begin{proof} Set $a=\sup_{x \in K} \abs{x}$ and define the conformal transformation \[
\psi(z)= \frac{z-a}{z+a}. \] We have $\psi(\mathbf{R}^2_+)=B(0, 1)$. By the previous proposition, we have \[
\frac{4\pi}{\capa(\psi(K), B(0, 1))} \le \log \frac{2\pi}{\muleb{2}(\psi(K))}. \] The conclusion comes from \[
\muleb{2}(\psi(K))=\int_K \abs{\psi'}^2\ge \frac{\muleb{2}(K)}{4a^2}. \qedhere \] \end{proof}
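The last inequality follows from an explicit bound on the conformal factor: with $a=\sup_{x \in K} \abs{x}$, one has $\psi'(z)=2a/(z+a)^2$, and for $z \in K$, \[
\abs{\psi'(z)}^2=\frac{4a^2}{\abs{z+a}^4} \ge \frac{4a^2}{(\abs{z}+a)^4} \ge \frac{4a^2}{(2a)^4}=\frac{1}{4a^2}, \] since $\abs{z} \le a$ on $K$.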
Another question about estimates of the capacity is whether one can estimate the diameter of $K$, instead of its area, by its capacity. This is possible if one assumes moreover that $K$ is connected. L.\thinspace E.\thinspace Fraenkel \cite{Fraenkel1981} has obtained in this direction the inequality \[
\frac{2\pi}{\capa(K, \Omega)}\le \log \Bigl(C \frac{\sqrt{\mathcal{L}^2(\Omega)}}{\diam K}\Bigr). \] We improve this estimate so that it holds on unbounded sets and takes into account the distance from the boundary.
\begin{proposition} \label{propositionBoundDiameter} Let $\Omega$ be such that $\mathbf{R}^2\setminus \Omega$ is connected and contains a ball of radius $\rho$ and $K \subset \Omega$ be compact. Then, \[
\frac{2\pi}{\capa(K, \Omega)}\le \log 16\Bigl(1+ \frac{\dist(K, \partial \Omega)}{2\rho}\Bigr)\Bigl(1+ \frac{2\dist(K, \partial \Omega)}{\diam(K)}\Bigr). \] \end{proposition} \begin{proof} Since $K$ is compact, up to a translation and a rotation we can assume that $0 \in K$ and $\dist(K, \partial \Omega)=\dist(0, \partial \Omega)$. Let $K^*$ and $\Omega^*$ be the sets obtained by the circular symmetrization around $0$ introduced by V.\thinspace Wolontis \cite[III.1]{Wolontis1952} (see also J.\thinspace Sarvas \cite{Sarvas1972}). We have \begin{gather*}
\capa(K^*, \Omega^*) \le \capa(K, \Omega),\\
[-\diam(K)/2, 0] \subset K^*, \end{gather*} and, since $\mathbf{R}^2 \setminus \Omega^*$ contains a ball of radius $\rho$, \[
[\dist(K, \partial \Omega), \dist(K, \partial \Omega)+2\rho] \subset \mathbf{R}^2\setminus \Omega^*. \] We thus have \[
\capa(K, \Omega) \ge \capa\bigl([-\diam(K)/2, 0], \mathbf{R}^2 \setminus [\dist(K, \partial \Omega), \dist(K, \partial \Omega)+2\rho]\bigr). \] Now, identifying $\mathbf{R}^2$ with $\mathbf{C}$, there exists a M\"obius transformation that brings the points $-\diam(K)/2$, $0$, $\dist(K, \partial \Omega)$ and $\dist(K, \partial \Omega)+2\rho$ to $-1$, $0$, $s$ and $\infty$ with \[
s=\frac{(2\rho+\dist(K, \partial \Omega)+\frac{1}{2}\diam(K))\dist(K, \partial \Omega)}{\rho \diam(K)}, \] from which we deduce that \[
\capa(K, \Omega) \ge \capa ([-1, 0], \mathbf{C} \setminus [s, \infty)). \] The conclusion comes from the next lemma. \end{proof}
As in L.\thinspace E.\thinspace Fraenkel's proof \cite{Fraenkel1981}, we use
\begin{lemma} Let $s>0$. We have \[
\frac{2\pi}{\capa([-1, 0], \mathbf{R}^2 \setminus [s, \infty))}\le \log 16(1+s). \] \end{lemma} \begin{proof} We have the formula \cite[5.60 (1)]{Vuorinen1988} \[
\capa([-1, 0], \mathbf{R}^2 \setminus [s, \infty))=2 \frac {\mathcal{K}(\sqrt{1/(1+s)})}{\mathcal{K}(\sqrt{s/(1+s)})}, \] where $\mathcal{K}$ is the complete elliptic integral of the first kind \[
\mathcal{K}(\gamma)=\int_0^{\frac{\pi}{2}} \frac{1}{\sqrt{1-\gamma^2 (\sin \theta)^2}}\,d\theta. \] Since (see \cite{AndersonVamanamurthyVuorinen1997}) \[
\frac{\mathcal{K}(\gamma)}{\mathcal{K}(\sqrt{1-\gamma^2})} > \frac{\pi}{2\log\Bigl(2 \dfrac{1+\sqrt{1-\gamma^2}}{\gamma}\Bigr) }, \] one has \[
\capa([-1, 0], \mathbf{R}^2 \setminus [s, \infty))> \frac{\pi}{\log 2(\sqrt{s}+\sqrt{1+s})}> \frac{\pi}{\log 4\sqrt{1+s}}=\frac{2\pi}{\log 16(1+s)}.\qedhere \] \end{proof}
We also have an estimate in the case where the assumption on the inner radius $\rho$ of $\mathbf{R}^2 \setminus \Omega$ is replaced by assumptions on the connectedness and the measure of $\mathbf{R}^2 \setminus \Omega$.
\begin{proposition} \label{propositionCapacityBoundDistance} Let $\Omega$ be such that $\mathbf{R}^2\setminus \Omega$ is connected and has finite measure and $K \subset \Omega$ be compact. We have \[
\frac{2\pi}{\capa(K, \Omega)}\le \log 16\Bigl(1+ \frac{\pi \dist(K, \partial \Omega)\diam(\mathbf{R}^2 \setminus \Omega)}{2\muleb{2}(\mathbf{R}^2 \setminus \Omega)}\Bigr)\Bigl(1+ \frac{2\dist(K, \partial \Omega)}{\diam(K)}\Bigr). \] \end{proposition} \begin{proof} One begins as in the proof of the previous proposition. We then have \[
[\dist(K, \partial \Omega), \dist(K, \partial \Omega)+\frac{2\muleb{2}(\mathbf{R}^2 \setminus \Omega)}{\pi \diam (\mathbf{R}^2 \setminus \Omega)}] \subset \mathbf{R}^2\setminus \Omega^*. \] One then continues as previously. \end{proof}
\section{Green function asymptotics}
This appendix is devoted to the study of the asymptotic expansion of Green's function near a point of the boundary:
\begin{proposition} \label{propositionAsymptotH} Let $\Omega \subset \mathbf{R}^2$ be open, and assume that $\Omega$ is of class $C^2$ around $0$ and that the tangent to $\partial \Omega$ at $0$ is perpendicular to the $x_1$-axis. One then has, as $\varepsilon \to 0$, \[
G (\varepsilon x, \varepsilon y)=\frac{1}{4\pi} \log \frac{\abs{x-y}^2+4x_1y_1}{\abs{x-y}^2}- \varepsilon \frac{K}{2\pi} \frac{x_1 \abs{y}^2+y_1 \abs{x}^2}{\abs{x-y}^2+4x_1y_1}+o(\varepsilon) \] uniformly on compact subsets of $\mathbf{R}^2_+ \times \mathbf{R}^2_+$, where $K$ is the curvature of $\partial \Omega$ at $0$. In particular, \[
H (\varepsilon x, \varepsilon x)=\frac{1}{2\pi} \log 2\varepsilon x_1-\varepsilon \frac{K\abs{x}^2}{4\pi x_1} +o(\varepsilon). \] \end{proposition}
\begin{proof} Define \[
w_{\varepsilon, y}(x) = \frac{1}{\varepsilon} \Bigl(\frac{1}{4\pi} \log \frac{\abs{x-y}^2+4x_1y_1}{\abs{x-y}^2}-G(\varepsilon x, \varepsilon y)\Bigr). \] This function is defined for every $x, y \in \Omega_\varepsilon=\{ z \in \mathbf{R}^2 \: :\: \varepsilon z \in \Omega \}$. Moreover, $w_{\varepsilon, y}$ satisfies \[ \left\{\begin{aligned}
-\Delta w_{\varepsilon, y} &= 0 &&\text{in $\Omega_\varepsilon$},\\
w_{\varepsilon, y}(x) &= \frac{1}{4\pi\varepsilon} \log \frac{\abs{x-y}^2+4x_1y_1}{\abs{x-y}^2} && \text{on $\partial \Omega_\varepsilon$}. \end{aligned}\right. \]
By construction, $w_{\varepsilon, y}$ is a bounded function. We first claim that $w_{\varepsilon, y}$ is bounded uniformly in $L^\infty(\Omega_\varepsilon)$ as $\varepsilon \to 0$ and $y$ stays in a compact subset of $\mathbf{R}^2$. Indeed, since $\Omega$ is $C^2$ around $0$, there exists $r > 0$ such that if $z \in \partial \Omega \cap B(0, r)$, $\abs{z_1} \le C \abs{z_2}^2$. One thus has, for $x \in \partial \Omega_\varepsilon \cap B(0, \frac{r}{\varepsilon})$, $\abs{x_1} \le C \varepsilon \abs{x_2}^2$, and therefore, when $\varepsilon$ is small enough, \[
\abs{w_{\varepsilon, y}(x)} \le \frac{C'}{\varepsilon} \frac{\varepsilon y_1 \abs{x_2}^2}{\abs{x-y}^2}. \] On the other hand, if $x \in \partial \Omega_\varepsilon \setminus B(0, \frac{r}{\varepsilon})$, then, if $\varepsilon$ is small enough, $y \in B(0, \frac{r}{2\varepsilon})$, so that $x_1 \le 2 \abs{x-y}$ and $\abs{x-y} \ge \frac{r}{2\varepsilon}$, and \[
\abs{w_{\varepsilon, y}(x)} \le C \varepsilon. \]
Since $\Omega$ is of class $C^2$, there exists a function $f : I \subset \mathbf{R} \to \mathbf{R}$ such that $\partial \Omega \cap B(0, r')= \{(f(t), t) \: :\: t \in I \}$. One thus has, using the Taylor expansion of $f$ and recalling that $f(0)=0$ and $f'(0)=0$, \[
w_{\varepsilon, y}(x)=\frac{1}{4\pi \varepsilon} \log \Bigl( 1+ \frac{4 y_1 \varepsilon^{-1}f(\varepsilon x_2)}{( \varepsilon^{-1}f(\varepsilon x_2)-y_1)^2+(x_2-y_2)^2}\Bigr). \]
Therefore, by classical regularity estimates, $w_{\varepsilon, y}$ converges, uniformly on compact subsets of $\mathbf{R}^2_+ \times \mathbf{R}^2_+$, to the unique bounded solution of \[
\left\{ \begin{aligned}
-\Delta w_y &= 0 &&\text{in $\mathbf{R}^2_+$}, \\
w_y&=\frac{f''(0)}{2\pi}\frac{y_1 x_2^2}{y_1^2+(x_2-y_2)^2}&&\text{on $\partial \mathbf{R}^2_+$}. \end{aligned} \right. \] One can check that \[
w_y(x)=\frac{f''(0)}{2\pi}\frac{y_1 (x_1^2+x_2^2)+x_1(y_1^2+y_2^2)}{(x_1+y_1)^2+(x_2-y_2)^2}. \] The announced expressions for $G(\varepsilon x, \varepsilon y)$ and $H(\varepsilon x, \varepsilon x)$ follow. \end{proof}
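The verification is straightforward: at $x_1=0$ the expression reduces to the prescribed boundary value $\frac{f''(0)}{2\pi} y_1 x_2^2/(y_1^2+(x_2-y_2)^2)$, while, setting $z^*=(-y_1, y_2)$, \[
w_y(x)=\frac{f''(0)}{2\pi}\Bigl(y_1+\frac{(2 y_1 z^*+\abs{y}^2 e_1)\cdot (x-z^*)}{\abs{x-z^*}^2}\Bigr), \] and each function $(x_i-z^*_i)/\abs{x-z^*}^2$ is harmonic and bounded on $\mathbf{R}^2_+$, the pole $z^*$ lying in the closed left half-plane.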
\begin{bibdiv} \begin{biblist}
\bib{AmbrosettiStruwe1989}{article}{
author={Ambrosetti, A.},
author={Struwe, M.},
title={Existence of steady vortex rings in an ideal fluid},
date={1989},
ISSN={0003-9527},
journal={Arch. Rational Mech. Anal.},
volume={108},
number={2},
pages={97\ndash 109},
}
\bib{AmbrosettiMancini1981}{incollection}{
author={Ambrosetti, Antonio},
author={Mancini, Giovanni},
title={On some free boundary problems},
date={1981},
booktitle={Recent contributions to nonlinear partial differential
equations},
series={Res. Notes in Math.},
volume={50},
publisher={Pitman},
address={Boston, Mass.},
pages={24\ndash 36},
}
\bib{AndersonVamanamurthyVuorinen1997}{book}{
author={Anderson, Glen~D.},
author={Vamanamurthy, Mavina~K.},
author={Vuorinen, Matti~K.},
title={Conformal invariants, inequalities, and quasiconformal maps},
series={Canadian Mathematical Society Series of Monographs and Advanced
Texts},
publisher={John Wiley \& Sons Inc.},
address={New York},
date={1997},
ISBN={0-471-59486-5}, }
\bib{ArnoldKhesin}{book}{
author={Arnold, Vladimir~I.},
author={Khesin, Boris~A.},
title={Topological methods in hydrodynamics},
series={Applied Mathematical Sciences},
publisher={Springer-Verlag},
address={New York},
date={1998},
volume={125},
ISBN={0-387-94947-X},
}
\bib{BartschPistoiaWeth}{unpublished}{
author={Bartsch, Thomas},
author={Pistoia, Angela},
author={Weth, Tobias},
title={$n$-vortex equilibria for ideal fluids in bounded planar domains
and new nodal solutions of the $\sinh$-Poisson and the Lane--Emden--Fowler
equations},
note={preprint}, }
\bib{BartschWeth2003}{article}{
author={Bartsch, Thomas},
author={Weth, Tobias},
title={A note on additional properties of sign changing solutions to
superlinear elliptic equations},
date={2003},
ISSN={1230-3429},
journal={Topol. Methods Nonlinear Anal.},
volume={22},
number={1},
pages={1\ndash 14},
}
\bib{BartschWeth2005}{article}{
author={Bartsch, Thomas},
author={Weth, Tobias},
title={Three nodal solutions of singularly perturbed elliptic equations
on domains without topology},
date={2005},
ISSN={0294-1449},
journal={Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire},
volume={22},
number={3},
pages={259\ndash 281},
}
\end{biblist} \end{bibdiv}
\end{document}
\begin{document}
\title{Treewidth of grid subsets}
\begin{abstract} Let $Q_n$ be the graph of the $n\times n\times n$ cube with all non-decreasing diagonals (including the facial ones) in its constituent $1\times1\times1$ cubes. Suppose that a set $S\subseteq V(Q_n)$ separates the left side of the cube from the right side. We show that $S$ induces a subgraph of tree-width at least $\frac{n}{\sqrt{18}}-1$. We use a generalization of this claim to prove that the vertex set of $Q_n$ cannot be partitioned into two parts, each of which induces a subgraph of bounded tree-width. \end{abstract}
Let $G_n$ be the plane triangulated $n\times n$ grid, and consider any (non-proper) coloring of the vertices of $G_n$ by two colors. The well-known HEX lemma implies that $G_n$ contains a monochromatic path with at least $n$ vertices. We consider a $3$-dimensional analogue of this claim.
Let $Q_n$ be the $n\times n\times n$ grid with all non-decreasing diagonals in its constituent unit cubes; more precisely, the vertex set of $Q_n$ is $\{(x,y,z)\in\bb{Z}^3:0\le x,y,z\le n-1\}$ and two distinct vertices $(x,y,z)$ and $(x',y',z')$ are adjacent if $x\le x'\le x+1$, $y\le y'\le y+1$ and $z\le z'\le z+1$. A result of Matou\v{s}ek and P\v{r}\'{\i}v\v{e}tiv\'{y}~\cite{matopriv} applied in this special case shows that any coloring of vertices of $Q_n$ by two colors contains a connected monochromatic subgraph with $\Omega(n^2)$ vertices. We aim to show that it actually contains a large monochromatic subgraph which is ``2-dimensional'' in nature, i.e., with a large grid minor. It is well-known that a graph contains a large grid as a minor if and only if it has a large tree-width~\cite{twchu1,twchu2,RSey,quickly}, and thus we can state our main result in the following equivalent form.
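The adjacency rule defining $Q_n$ is easy to test mechanically. The following Python sketch (our own illustration; the function names are not from the paper) builds the vertex set and checks which diagonals are present:

```python
from itertools import product

def q_vertices(n):
    # vertices of Q_n: integer triples with coordinates in {0, ..., n-1}
    return list(product(range(n), repeat=3))

def adjacent(u, v):
    # distinct vertices are adjacent iff, after possibly swapping them,
    # every coordinate grows by 0 or 1 (the "non-decreasing" condition)
    if u == v:
        return False
    d = [b - a for a, b in zip(u, v)]
    return all(0 <= x <= 1 for x in d) or all(0 <= -x <= 1 for x in d)

n = 2
vs = q_vertices(n)
print(len(vs))                      # 8 vertices in the 2x2x2 grid
print(adjacent((0, 0, 0), (1, 1, 1)))  # True: a non-decreasing diagonal
print(adjacent((0, 0, 1), (1, 1, 0)))  # False: decreasing diagonals are absent
```

Note that a pair such as $(0,0,1)$ and $(1,1,0)$ is non-adjacent: the coordinates do not all change in the same direction, which is exactly what the condition $x\le x'\le x+1$, $y\le y'\le y+1$, $z\le z'\le z+1$ excludes.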
\begin{theorem}\label{thm-main2} For every $t\ge 0$, there exists $n\ge 1$ such that for any partition $A_1,A_2$ of the vertex set of $Q_n$, either $A_1$ or $A_2$ induces a subgraph of $Q_n$ of tree-width at least $t$. \end{theorem}
Recall that a \emph{tree decomposition} $(T,\beta)$ of a graph $G$ is a tree $T$ together with a function $\beta:V(T)\to 2^{V(G)}$ assigning a \emph{bag} $\beta(u)$ to each vertex $u\in V(T)$, such that every vertex of $G$ is contained in some bag, both ends of every edge of $G$ are contained in a common bag, and $\{u:v\in\beta(u)\}$ induces a connected subtree of $T$ for every $v\in V(G)$. The \emph{width} of the decomposition is the maximum of the sizes of its bags minus one, and the \emph{tree-width} $\brm{tw}(G)$ of $G$ is the minimum of the widths of its tree decompositions.
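The three conditions in the definition of a tree decomposition can be verified directly on small examples. Below is a small Python checker (our own sketch, not part of the paper), applied to the natural width-$1$ decomposition of a four-vertex path:

```python
def is_valid_tree_decomposition(vertices, edges, tree_edges, bags):
    # bags: dict mapping tree node -> set of graph vertices
    # (1) every vertex lies in some bag
    if not all(any(v in b for b in bags.values()) for v in vertices):
        return False
    # (2) both ends of every edge lie in a common bag
    if not all(any(u in b and v in b for b in bags.values()) for u, v in edges):
        return False
    # (3) for each vertex, the tree nodes whose bags contain it induce a subtree
    for v in vertices:
        nodes = {t for t, b in bags.items() if v in b}
        if not nodes:
            return False
        comp, frontier = set(), [next(iter(nodes))]
        while frontier:  # grow one component inside `nodes` along tree edges
            t = frontier.pop()
            if t in comp:
                continue
            comp.add(t)
            for a, b in tree_edges:
                if a == t and b in nodes: frontier.append(b)
                if b == t and a in nodes: frontier.append(a)
        if comp != nodes:
            return False
    return True

# a path 0-1-2-3 with its natural decomposition along a path-shaped tree
verts = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3)]
bags = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}}
tree_edges = [(0, 1), (1, 2)]
print(is_valid_tree_decomposition(verts, edges, tree_edges, bags))  # True
print(max(len(b) for b in bags.values()) - 1)                       # width 1
```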
Let us remark that the presence of diagonals is important for the validity of Theorem~\ref{thm-main2}; if the diagonals of $Q_n$ are omitted, the graph becomes bipartite, and thus it can be partitioned to two independent sets (of tree-width $0$).
Theorem~\ref{thm-main2} is motivated by a notion from algorithmic graph theory. We say that a class ${\cal G}$ of graphs is \emph{tree-width fragile} if for every $k\ge 1$, there exists $t_k\ge 0$ such that for every graph $G\in{\cal G}$, there exist pairwise disjoint sets $A_1,\ldots,A_k\subseteq V(G)$ satisfying $\brm{tw}(G-A_i)\le t_k$ for $1\le i\le k$. For example, planar graphs are known to have this property~\cite{baker1994approximation,rs3}. Many interesting graph problems have efficient algorithms when restricted to graphs with bounded tree-width, enabling Baker~\cite{baker1994approximation} to exploit this property in the design of approximation algorithms for planar graphs.
It is natural to ask which more general graph classes are tree-width fragile, as the same approximation algorithms can be used for such graph classes. Eppstein~\cite{eppstein00} proved that this is the case for graphs avoiding some apex graph as a minor (a graph $H$ is \emph{apex} if $H-v$ is planar for some $v\in V(H)$), and in particular for graphs embedded in any fixed surface. DeVos et al.~\cite{devospart} generalized the argument to all proper minor-closed classes of graphs.
Of course, not all graph classes are tree-width fragile. It is easy to see that any graph class with this property must be sparse and must have sublinear separators~\cite{twd}. Theorem~\ref{thm-main2} gives another class of obstructions. \begin{corollary} The graph class $\{Q_n:n\ge 1\}$ is not tree-width fragile. \end{corollary}
Indeed, Theorem~\ref{thm-main2} shows that the condition of tree-width fragility fails already for $k=2$; unlike the previously known graph classes that are not tree-width fragile, the graphs $Q_n$ have bounded maximum degree and balanced separators of order $O\bigl(|V(Q_n)|^{2/3}\bigr)$. Interestingly, the class $\{Q_n:n\ge 1\}$ is \emph{fractionally} tree-width fragile, where the fractional version of tree-width fragility is defined in the standard way~\cite{twd}; it is the first known class of graphs showing that tree-width fragility and fractional tree-width fragility do not coincide.
Another motivation for Theorem~\ref{thm-main2} comes from graph coloring theory. Many results are known on the variants of the coloring where the color classes are not required to be independent sets, but rather satisfy some other constraints, such as inducing subgraphs of bounded maximum degree~\cite{cowen1997defective,edwards2014relative}, with bounded component size~\cite{alon2003partitioning,liusmall}, or, as in our case, bounded tree-width~\cite{devospart,ding2000surfaces}. For this notion of \emph{low tree-width coloring}, the previous results mostly focus on positive results, showing that graphs from some class have a low tree-width coloring using a constant number of colors. On the other hand, Theorem~\ref{thm-main2} gives a lower bound, presenting an example of a natural class of graphs that do not have low tree-width coloring by two colors.
Let us now give a brief idea of the proof of Theorem~\ref{thm-main2}. Recall that every graph containing a $t\times t$ grid as a minor has tree-width at least $t$. Hence, we aim to construct a monochromatic grid in $Q_n$ by connecting appropriately chosen connected subgraphs by disjoint paths. Of course, we need to deal with the situation that such paths are blocked by the vertices of the other color. Such blocking subgraphs must be in a sense larger than the subgraphs we are trying to join. We now switch to the other color class, considering these blocking subgraphs to be the nodes of a grid we are constructing and trying to find paths between these nodes. We may switch back and forth between the color classes several times, repeatedly enlarging the node subgraphs. For this procedure to end, we need to argue that eventually, the node subgraphs cannot be separated by a set inducing a subgraph of small tree-width. Abstracting the problem further, we reach the following claim of independent interest.
Let $X$ be a subset of vertices of $Q_n$ that separates the left side of the grid from the right side. Supposing that $X$ is minimal with this property, a geometric intuition tells us that $X$ should correspond to a surface in the $3$-dimensional Euclidean space separating the left side of $Q_n$ from the right side, and that such a surface should contain a subdivision of a large $2$-dimensional grid in $Q_n$. Although this geometric intuition is somewhat misleading and difficult to make precise, the overall conclusion that $Q_n[X]$ should have large tree-width is true.
\begin{theorem}\label{thm-main1a} Let $n\ge 1$ be an integer, let $S_1$ be the set of vertices of the left side of $Q_n$ and let $S_2$ be the set of vertices of the right side of $Q_n$. If a set $X\subseteq V(Q_n)\setminus (S_1\cup S_2)$ intersects every path in $Q_n$ from $S_1$ to $S_2$, then the subgraph of $Q_n$ induced by $X$ has tree-width at least $\frac{n}{\sqrt{18}}-1$. \end{theorem}
Again, note that the presence of diagonals is necessary for the validity of Theorem~\ref{thm-main1a}. To prove Theorem~\ref{thm-main2}, we need a generalization of Theorem~\ref{thm-main1a}. A plane graph $H$ is a \emph{near-triangulation} if every face of $H$ except for the outer one has length three. A triple $(G,S_1,S_2)$, where $G$ is a graph and $S_1$ and $S_2$ are vertex-disjoint connected subgraphs of $G$, is an \emph{$(n\times n)$-slab (with sides $S_1$ and $S_2$)} if there exist pairwise vertex-disjoint near-triangulations $R_1, \ldots, R_n\subseteq G$ (the \emph{rows} of the slab) and pairwise vertex-disjoint near-triangulations $C_1, \ldots, C_n\subseteq G$ (the \emph{columns} of the slab) such that for $1\le i,j\le n$, the intersection $R_i\cap C_j$ is a path with one end in $S_1$ and the other end in $S_2$, and such that for $s\in\{1,2\}$ and for every row or column $H$, the intersection of $H$ with $S_s$ is a subpath of the boundary of the outer face of $H$.
\begin{theorem}\label{thm-main1} Let $(G,S_1,S_2)$ be an $(n\times n)$-slab with rows and columns of maximum degree at most $\Delta\ge 3$ and let $X$ be a subset of $V(G)\setminus (V(S_1)\cup V(S_2))$. If every path in $G$ from $S_1$ to $S_2$ intersects $X$, then $G[X]$ has tree-width at least $\frac{n}{\sqrt{3\Delta}}-1$. \end{theorem}
Clearly, Theorem~\ref{thm-main1} implies Theorem~\ref{thm-main1a}. The proof of Theorem~\ref{thm-main1} is topological in nature and we give it in the following two sections. Section~\ref{sec-main2} is devoted to the proof of Theorem~\ref{thm-main2}.
\section{$\{0,\pm1,\star\}$-valued functions on graphs and homotopy}
In this section, we develop a discrete variant of the basic tools of homotopy theory.
Let $\mc{L}=\{0,\pm1, \star\}$. We will consider functions $f: V(G)\to \mc{L}$ for a graph $G$. Recall that a \emph{separation} of $G$ is a pair $(A,B)$ of subsets of vertices such that $V(G)=A\cup B$ and $G$ has no edge with one end in $A\setminus B$ and the other end in $B\setminus A$. Without explicitly assuming it, we will typically consider the vertices with function values $-1$ and $1$ as corresponding to the sets $A\setminus B$ and $B\setminus A$, respectively, for some separation $(A,B)$ of $G$, and the label $0$ as roughly corresponding to a union of some components of $G[A \cap B]$. This motivates the following definition. We say that $f:V(G) \to \mc{L}$ is \emph{continuous} if vertices $u,v\in V(G)$ are not adjacent whenever $f(v)=1$ and $f(u)=-1$. We say that $f:V(G) \to \mc{L}$ is \emph{holomorphic} if $f$ is continuous and additionally vertices $u,v\in V(G)$ are not adjacent whenever $f(v)=0$ and $f(u)=\star$. We say that $f:V(G) \to \mc{L}$ is \emph{entire} if it is continuous and $\star \not \in \brm{Image}(f)$.
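The three properties above are forbidden-edge conditions, so they can be phrased as simple predicates on the edge set. The following Python sketch (our own, with an ad-hoc string encoding of $\star$) makes this concrete:

```python
STAR = 'star'  # ad-hoc encoding of the label "star"

def continuous(f, edges):
    # no edge joins a vertex labelled -1 to a vertex labelled 1
    return not any({f[u], f[v]} == {1, -1} for u, v in edges)

def holomorphic(f, edges):
    # continuous, and additionally no edge joins 0 to star
    return continuous(f, edges) and \
        not any({f[u], f[v]} == {0, STAR} for u, v in edges)

def entire(f, edges):
    # continuous, and star is not used at all
    return continuous(f, edges) and STAR not in f.values()

edges = [(0, 1), (1, 2), (2, 3)]
f = {0: -1, 1: 0, 2: STAR, 3: 1}
print(continuous(f, edges))   # True: no edge joins -1 and 1
print(holomorphic(f, edges))  # False: the edge (1,2) joins 0 and star
print(entire(f, edges))       # False: star appears in the image
```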
Let $C^0(G,\mc{L})$ denote the set of functions $f: V(G) \to \mc{L}$. Fix an arbitrary orientation of $E(G)$; that is, for every $e \in E(G)$ we distinguish its beginning vertex, denoted by $e^-$, and its end vertex, denoted by $e^+$. Let $C^1(G,\bb{Z})$ denote the lattice of all functions $f: E(G) \to \bb{Z}$. The operator $d: C^0(G,\mc{L}) \to C^1(G,\bb{Z})$ is defined by $df(e)=f(e^+)-f(e^-)$ if $\star \not \in \{f(e^+),f(e^-)\}$ and $df(e)=0$ otherwise.
For a directed walk $W=(v_0,e_1,v_1,\ldots,e_n,v_n)$ on $G$, not necessarily respecting the orientation we fixed, and for $1\le i\le n$, let $\varepsilon_i\in C^1(G,\bb{Z})$ be defined by $\varepsilon_i(e_i)=1$ if $v_i=e^+_i$, $\varepsilon_i(e_i)=-1$ if $v_i=e_i^-$ and $\varepsilon_i(e)=0$ for $e\in E(G)\setminus \{e_i\}$. Let $I_W=\sum_{i=1}^n\varepsilon_i$. For a walk $W$ and $h \in C^1(G,\bb{Z})$ we define $\int_W h \colonequals \sum_{e \in E(G)}I_W(e)h(e)$. Let $W^{-1}$ denote the reversal of $W$, that is, the walk $(v_n,e_n,v_{n-1}, e_{n-1},\ldots, e_1,v_0)$. For two walks $W_1$ from $u$ to $v$ and $W_2$ from $v$ to $w$, let $W_1W_2$ denote the concatenation of $W_1$ and $W_2$. \begin{lemma}\label{lemma-entire} If $W$ is a walk from $u$ to $v$ on a graph $G$ and $f\in C^0(G,\mc{L})$ is entire on $V(W)$ then $$\int_W df = f(v)-f(u).$$ \end{lemma} \begin{proof} Let $W=(v_0,e_1,v_1,\ldots,e_n,v_n)$, where $v_0=u$ and $v_n=v$, and for $1\le i\le n$, let $\varepsilon_i$ be as in the definition of $I_W$. We have \begin{align*} \int_W df&=\sum_{e \in E(G)} I_W(e)df(e)=\sum_{i=1}^n \varepsilon_i(e_i)df(e_i)\\ &=\sum_{i=1}^n (f(v_i)-f(v_{i-1}))=f(v_n)-f(v_0)=f(v)-f(u). \end{align*} \end{proof}
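The lemma above can be sanity-checked numerically. The Python sketch below (our own illustration, assuming every consecutive pair of walk vertices traverses an oriented edge in one of the two directions) implements $df$ and $\int_W df$ and confirms the telescoping identity on a short path:

```python
STAR = 'star'  # ad-hoc encoding of the label "star"

def df(f, e):
    # df on an oriented edge e = (e_minus, e_plus); edges touching star give 0
    a, b = f[e[0]], f[e[1]]
    return 0 if STAR in (a, b) else b - a

def integrate(walk, f, oriented_edges):
    # walk: list of vertices; sign of each step depends on the fixed orientation
    total = 0
    for u, v in zip(walk, walk[1:]):
        if (u, v) in oriented_edges:   # traversed along the orientation
            total += df(f, (u, v))
        else:                          # traversed against it
            total -= df(f, (v, u))
    return total

# an entire f on the path 0-1-2-3, with an arbitrary fixed orientation
oriented = {(0, 1), (2, 1), (2, 3)}
f = {0: -1, 1: 0, 2: 0, 3: 1}
print(integrate([0, 1, 2, 3], f, oriented))  # 2 == f(3) - f(0)
```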
We refer to closed walks of length three on a graph $G$ as \emph{triangles}. We say that a triangle $T$ is \emph{$f$-contractible} for $f \in C^0(G,\mc{L})$ if $f$ is holomorphic on $V(T)$. The next observation is the first step towards obtaining an extension of Lemma~\ref{lemma-entire} to non-entire functions.
\begin{lemma}\label{lem:triangle}
Let $T$ be a triangle in $G$ and let $f \in C^0(G,\mc{L})$. If $f$ is continuous then $|\int_T df|\leq 1$, and if $T$ is $f$-contractible then $\int_T df=0$. \end{lemma} \begin{proof} Let $V(T)=\{u,v,w\}$. If $f$ is entire on $V(T)$, then the claim follows from Lemma~\ref{lemma-entire}. Hence, we can assume that $f(w)=\star$. If $f(u)=\star$ or $f(v)=\star$, then $df(e)=0$ for every $e\in E(T)$ and $\int_T df=0$.
Otherwise, $|\int_T df|=|f(u)-f(v)|$. Since $f$ is continuous, we have $|f(u)-f(v)|\le 1$. Furthermore, if $T$ is $f$-contractible, then $f(u)\neq 0\neq f(v)$, and thus $f(u)=f(v)$ and $\int_T df=0$. \end{proof}
Let $f \in C^0(G,\mc{L})$ be continuous. From Lemma~\ref{lem:triangle} it follows that $\int_W df=0$ for every $W$ that is a sum of $f$-contractible triangles. We need a slight generalization of this fact. We will say that a closed directed walk $W$ on $G$ is \emph{$(f,k)$-almost contractible} if there exist triangles $T_1,T_2,\ldots,T_n$ of $G$ such that $I_W=\sum_{i=1}^n I_{T_i}$ and $T_{k+1},T_{k+2},\ldots,T_n$ are $f$-contractible.
Let $W_i$ be a walk from $u_i$ to $v_i$ on $G$ for $i=1,2$. We say that $W_1$ and $W_2$ are \emph{$(f,k)$-almost homotopic} if there exists a walk $Q$ from $u_1$ to $u_2$ and a walk $R$ from $v_1$ to $v_2$ such that the walk $QW_2R^{-1}W_1^{-1}$ is $(f,k)$-almost contractible, and furthermore $f$ is constant and integer on each of $V(Q)$ and $V(R)$. We say that $W_1$ and $W_2$ are \emph{$f$-homotopic} if they are $(f,0)$-almost homotopic. From Lemma~\ref{lem:triangle} we deduce the following corollary.
\begin{lemma}\label{lem:homotopic}
Let $f \in C^0(G,\mc{L})$ be continuous and let $W_1$ and $W_2$ be walks on $G$. If $W_1$ and $W_2$ are $(f,k)$-almost homotopic then $$\left|\int_{W_1} df-\int_{W_2}df\right| \leq k.$$ In particular, if $W_1$ and $W_2$ are $f$-homotopic then $\int_{W_1} df =\int_{W_2}df$. \end{lemma} \begin{proof} Let $Q$ and $R$ be the walks and $T_1,T_2,\ldots,T_n$ be the triangles showing that $W_1$ and $W_2$ are $(f,k)$-almost homotopic. Let $W=QW_2R^{-1}W_1^{-1}$. By Lemma~\ref{lem:triangle}, we have \begin{align*}
\left|\int_W df\right|&=\left|\sum_{e\in E(G)} I_W(e)df(e)\right|=\left|\sum_{e\in E(G)}\sum_{i=1}^n I_{T_i}(e)df(e)\right|\\
&=\left|\sum_{i=1}^n\int_{T_i} df\right|=\left|\sum_{i=1}^k\int_{T_i} df\right|\le k. \end{align*} On the other hand, since $f$ is constant on $Q$ and $R$, Lemma~\ref{lemma-entire} implies that \begin{align*}
\left|\int_W df\right|&=\left|\int_Q df+\int_{W_1} df-\int_R df-\int_{W_2} df\right|=\left|\int_{W_1} df-\int_{W_2} df\right|, \end{align*} and the claim of the lemma follows. \end{proof}
For a function $h:Y\to \bb{R}$ and a set $X\subseteq Y$, let $h(X)=\sum_{x\in X} h(x)$. Let $P=v_0v_1\ldots v_n$ be a path in a graph $G$. For a function $f\in C^0(G,\mc{L})$ that is entire on $P$ and for $1\le i\le n-1$, let $\lambda_{P,f}(v_i)=\frac{1}{2}(f(v_{i+1})-f(v_{i-1}))$.
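As a quick numeric illustration of $\lambda_{P,f}$ (our own example, not from the paper): for an entire $f$ with nonzero endpoint values, taking $g=f$ in the lemma that follows, the values of $\lambda_{P,f}$ over the zero set of $f$ sum to $\frac{1}{2}\int_P df$.

```python
def lam(f_vals, i):
    # lambda_{P,f}(v_i): half the difference of f at the two neighbours along P
    return (f_vals[i + 1] - f_vals[i - 1]) / 2

f_vals = [-1, -1, 0, 0, 1, 1]   # entire along the path, nonzero at the ends
zero_set = [i for i, x in enumerate(f_vals) if x == 0]
total = sum(lam(f_vals, i) for i in zero_set)
print(total)  # 1.0 == (1/2) * (f(v_n) - f(v_0)) == (1/2) * int_P df
```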
\begin{lemma}\label{lemma-path} Let $f\in C^0(G,\mc{L})$ be a function that is entire on a path $P=v_0v_1\ldots v_n$ in a graph $G$, such that $f(v_0)\neq 0$ and $f(v_n)\neq 0$. Suppose that $g\in C^0(G,\mc{L})$ satisfies $g(v)=f(v)$ for every $v\in V(P)$ such that $g(v)\neq \star$, and $f(v)=0$ for every $v\in V(P)$ such that $g(v)=\star$. Let $X=\{v\in V(P):g(v)=0\}$. Then $$\frac{1}{2}\int_P dg=\lambda_{P,f}(X).$$ Furthermore, if $g$ is holomorphic on $P$, then $\lambda_{P,f}(X)$ is an integer. \end{lemma} \begin{proof} Let $P_1$, \ldots, $P_k$ be the maximal subpaths of $P$ such that $g$ is entire on $P_i$ for $i=1,\ldots,k$. Note that $$\int_P dg=\sum_{j=1}^k \int_{P_j} dg,$$ and thus it suffices to prove that $\frac{1}{2}\int_{P_j} dg=\lambda_{P,f}(X\cap V(P_j))$ for $1\le j\le k$. Let $P_j=v_av_{a+1}\ldots v_b$. By Lemma~\ref{lemma-entire}, we have $\frac{1}{2}\int_{P_j} dg=\frac{1}{2}(g(v_b)-g(v_a))=\frac{1}{2}(f(v_b)-f(v_a))$. Let $V(P_j)\cap X=\{v_{c_1},v_{c_2},\ldots, v_{c_p}\}$, where $c_1<c_2<\ldots<c_p$. Since $f$ is continuous on $P$, for $1\le i\le p-1$ we have $f(v_{c_i+1})=f(v_{c_{i+1}-1})$. Furthermore, $f(v_{c_1-1})=f(v_a)$ and $f(v_{c_p+1})=f(v_b)$; this is the case even if, say, $c_1=a$, since then $g(v_{a-1})=\star$ and $f(v_{a-1})=f(v_a)=0$. Therefore, $\lambda_{P,f}(X\cap V(P_j))=\frac{1}{2}\sum_{i=1}^p (f(v_{c_i+1})-f(v_{c_i-1}))=\frac{1}{2}(f(v_b)-f(v_a))=\frac{1}{2}\int_{P_j}dg$, as required. Moreover, if $g$ is holomorphic on $P$, then $f(v_a)=\pm 1$, since either $a=0$ or $g(v_{a-1})=\star$, and similarly $f(v_b)=\pm 1$. Hence, $\int_{P_j} dg\in\{-2,0,2\}$ and $\lambda_{P,f}(X\cap V(P_j))=\frac{1}{2} \int_{P_j} dg$ is an integer. \end{proof}
\section{Treewidth of slab separators}
To give a lower bound on tree-width, we use the following claim (which is well-known, although usually stated with only non-negative weights).
\begin{lemma}\label{lem:weights}
Let $t\ge 0$ be an integer. Let $H$ be a graph with $\brm{tw}(H) \leq t$. Let $\lambda:V(H) \to \bb{R}$ be such that $|\lambda(v)|\le 1$ for every $v\in V(H)$. If $\lambda(V(H)) \geq 3t+3$, then there exists a separation $(K,L)$ of $H$ such that $$\frac{1}{3}\lambda(V(H)) \leq \lambda(K\setminus L) \leq \frac{2}{3}\lambda(V(H))$$ and $|K \cap L| \leq t+1$. \end{lemma} \begin{proof} Let $(T,\beta)$ be a tree decomposition of $H$ with bags of size at most $t+1$. For an edge $uv$ of $T$, let $T_{u,v}$ denote the component of $T-uv$ containing $v$, and let $S_{u,v}=\bigcup_{w\in V(T_{u,v})} \beta(w)\setminus \beta(u)$.
We first show that there exists $u\in V(T)$ such that every neighbor $v$ of $u$ in $T$ satisfies $\lambda(S_{u,v})\le \frac{2}{3}\lambda(V(H))$. If not, then for each $u\in V(T)$, let $\pi(u)$ denote a neighbor of $u$ in $T$ such that $\lambda(S_{u,\pi(u)})>\frac{2}{3}\lambda(V(H))$. Since $T$ is a tree, there exists an edge $uv\in E(T)$ such that $\pi(u)=v$ and $\pi(v)=u$. However, then $$\lambda(V(H))=\lambda(S_{u,v})+\lambda(S_{v,u})+\lambda(\beta(u)\cap \beta(v))>\frac{4}{3}\lambda(V(H))-(t+1),$$ which contradicts the assumption $\lambda(V(H)) \geq 3t+3$.
Let $u\in V(T)$ be a vertex such that every neighbor $v$ of $u$ in $T$ satisfies $\lambda(S_{u,v})\le \frac{2}{3}\lambda(V(H))$. Let $v_1$, $v_2$, \ldots, $v_m$ be the neighbors of $u$ in $T$ ordered so that $\lambda(S_{u,v_1})\ge \lambda(S_{u,v_2})\ge\ldots\ge \lambda(S_{u,v_m})$. Note that $\sum_{i=1}^m \lambda(S_{u,v_i})=\lambda(V(H))-\lambda(\beta(u))\ge \lambda(V(H))-(t+1)\ge\frac{1}{3}\lambda(V(H))$. Let $m'$ be the smallest index such that $$\sum_{i=1}^{m'} \lambda(S_{u,v_i})\ge \frac{1}{3}\lambda(V(H)).$$ Let $K=\beta(u)\cup \bigcup_{i=1}^{m'} S_{u,v_i}$ and $L=\beta(u)\cup \bigcup_{i=m'+1}^{m} S_{u,v_i}$. By the choice of $m'$, we have $\lambda(K\setminus L)\ge \frac{1}{3}\lambda(V(H))$. If $m'=1$, then $\lambda(K\setminus L)=\lambda(S_{u,v_1})\le \frac{2}{3}\lambda(V(H))$. Hence, we can assume that $m'\ge 2$, and thus $\lambda(S_{u,v_i})<\frac{1}{3}\lambda(V(H))$ for $1\le i\le m$. By the choice of $m'$, we have $\sum_{i=1}^{m'-1} \lambda(S_{u,v_i})<\frac{1}{3}\lambda(V(H))$, and thus $\lambda(K\setminus L)<\frac{1}{3}\lambda(V(H))+\lambda(S_{u,v_{m'}})<\frac{2}{3}\lambda(V(H))$. \end{proof}
We are now ready to prove our first main result.
\begin{proof}[Proof of Theorem~\ref{thm-main1}.] Let $R_1$, \ldots, $R_n$ and $C_1$, \ldots, $C_n$ be rows and columns of $G$ of maximum degree at most $\Delta$. For $1\le i,j\le n$, let $G\uparrow(i,j)$ denote the path $R_i\cap C_j$, directed from $S_1$ to $S_2$.
Let $(A,B)$ be a separation of $G$ such that $V(S_1)\subseteq A\setminus B$, $V(S_2)\subseteq B\setminus A$ and $X=A\cap B$. Let $f:V(G) \to \mc{L}$ be defined by $$f(v)=\begin{cases} -1, & \text{if } v \in A\setminus B, \\
0, & \text{if } v \in X,\\
1, & \text{if } v \in B \setminus A. \end{cases}$$ Note that $f$ is entire. By Lemma~\ref{lemma-entire} we have \begin{equation}\label{eq:1} \int_{G\uparrow(i,j)}df=2 \end{equation} for $1\le i,j\le n$.
For every $v\in V(G\uparrow(i,j))\cap X$, let $\lambda(v)=\lambda_{G\uparrow(i,j),f}(v)$; this is well-defined, since the paths $G\uparrow(i,j)$ are pairwise vertex-disjoint. For every $v\in X\setminus\bigcup_{1\le i,j\le n} V(G\uparrow(i,j))$, let $\lambda(v)=0$. By Lemma~\ref{lemma-path}, \begin{equation}\label{eq:2} \int_{G\uparrow(i,j)}df=2\lambda(V(G \uparrow (i,j)) \cap X). \end{equation} It follows from (\ref{eq:1}) and (\ref{eq:2}) that $\lambda(X)= n^2$.
Let $H\colonequals G[X]$ and let $t\colonequals \brm{tw}(H)$. We aim to prove that $t\ge \frac{n}{\sqrt{3\Delta}}-1$. If $t>n^2/3-1$ or $t\ge n-1$, then this claim holds, since $\Delta\ge 3$. Hence, we can assume that $n^2\ge 3t+3$ and $t<n-1$. By Lemma~\ref{lem:weights} there exists a separation $(K,L)$ of $H$ such that \begin{equation}\label{eq:2.5} \frac{n^2}{3} \leq \lambda(K\setminus L) \leq \frac{2n^2}{3} \end{equation}
and $|K \cap L| \leq t+1<n$. Let $R\colonequals \{i \in [n] : V(R_i)\cap (K\cap L)=\emptyset\}$ and $C\colonequals \{j \in [n] : V(C_j)\cap (K\cap L)=\emptyset\}$ be the sets of indices of rows and columns, respectively, of $(G,S_1,S_2)$ disjoint from $K\cap L$. Since the rows are pairwise vertex-disjoint and $|K\cap L|<n$, fewer than $n$ rows intersect $K\cap L$, and thus $R$ is non-empty; similarly, $C$ is non-empty. Let $S\colonequals (R \times [n]) \cup ([n] \times C)$.
Let $g \in C^0(G,\mc{L})$ be defined by $g(v)=f(v)$ for $v \in V(G)\setminus L$ and $g(v)=\star$ for $v \in L$. Note that $g$ is continuous and holomorphic on $V(G) - (K\cap L)$. Let $h(p)\colonequals \frac{1}{2}\int_{G\uparrow p}dg$ for $p \in [n]^2$. By Lemma~\ref{lemma-path}, we have \begin{equation}\label{eq:4} h(p) = \lambda(V(G \uparrow p) \cap (K\setminus L)), \end{equation} for every $p\in [n]^2$, and $h(p)$ is an integer for every $p\in [n]^2$ such that $V(G \uparrow p)\cap (K\cap L)=\emptyset$. Note that for $i_1\in R$, $i_2\in C$ and $1\le j_1,j_2\le n$, the paths $G \uparrow(i_1,j_1)$ and $G \uparrow(i_1,j_2)$ are $g$-homotopic, and the paths $G \uparrow(j_1,i_2)$ and $G \uparrow(j_2,i_2)$ are $g$-homotopic, since $R_{i_1}$ and $C_{i_2}$ are near-triangulations and $g$ is holomorphic on them. It follows that $G \uparrow p_1$ and $G \uparrow p_2$ are $g$-homotopic for all $p_1,p_2 \in S$. Thus $h$ is constant on $S$ by Lemma~\ref{lem:homotopic}. Let $h_0\in\bb{Z}$ denote this common value.

Note that $|[n]\setminus C|\le t+1$. Let us fix an element $c\in C$ and consider any $i\in [n]$. Let $t_i=|V(R_i)\cap (K\cap L)|$. For any $j\in [n]\setminus C$, the path $G \uparrow (i,j)$ is $(g,\Delta t_i)$-almost homotopic to the path $G \uparrow (i,c)$, and by Lemma~\ref{lem:homotopic} we have $|h(i,j) - h_0| \leq \Delta t_i$. Thus
\begin{equation}\label{eq:5}
\left|\left(\sum_{j \in [n]} h(i,j)\right) - h_0n\right|\le \sum_{j \in [n]} |h(i,j) - h_0| \leq \Delta t_i(t+1). \end{equation} Summing (\ref{eq:5}) over $i \in [n]$ and applying (\ref{eq:4}), we obtain
$$|\lambda(K\setminus L) - h_0n^2| = \left|\sum_{p \in [n]^2} h(p) - h_0n^2\right| \leq \Delta (t+1)^2,$$ where we use that the paths $G\uparrow p$ are pairwise vertex-disjoint and $\lambda$ vanishes outside them, so that $\sum_{p\in[n]^2} h(p)=\lambda(K\setminus L)$, and that $\sum_{i\in[n]} t_i\le |K\cap L|\le t+1$. However, by (\ref{eq:2.5}), $\lambda(K\setminus L)$ differs from every integer multiple of $n^2$ by at least $n^2/3$. Hence $\Delta(t+1)^2\ge n^2/3$, and it follows that $t \geq \frac{n}{\sqrt{3\Delta}}-1$, as desired. \end{proof}
\section{Partitions of a cube}\label{sec-main2}
Consider the $N\times N\times N$ grid $Q_N$ with non-decreasing diagonals. A path in $Q_N$ with vertices $v_1=(x_1,y_1,z_1),v_2=(x_2,y_2,z_2),\ldots,v_k=(x_k,y_k,z_k)$ in order is \emph{a staircase from $v_1$ to $v_k$} if $x_i+1 = x_{i+1}$, $y_i\le y_{i+1}$ and $z_i\le z_{i+1}$ for $1\le i\le k-1$. The \emph{$b$-square $\square(v,b)$} around a vertex $v=(x,y,z)\in V(Q_N)$ is defined by $$\square(v,b) = \{(x,y+d_y,z+d_z):0\le d_y,d_z\le b\}.$$ The \emph{$b$-enlargement} $B(b,P)$ of a staircase $P$ is the subgraph of $Q_N$ induced by $\bigcup_{v \in V(P)}\square(v,b)$. The \emph{left side} of the $b$-enlargement of a staircase from $u$ to $v$ is $\square(u,b)$ and the \emph{right side} is $\square(v,b)$. For each edge $yz$ of the $b$-enlargement, let $W_{yz}$ be the closed walk consisting of the path parallel to the staircase from the left side to $y$, the edge $yz$, the path parallel to the staircase from $z$ to the left side, and possibly an edge in the left side.
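The definitions of staircases and $b$-squares are purely coordinatewise, so they translate directly into code. The following Python sketch (our own, with hypothetical helper names) checks the staircase condition and builds the vertex set of a $b$-enlargement; note that $|\square(v,b)|=(b+1)^2$ and that squares around distinct staircase vertices are disjoint, since their $x$-coordinates differ:

```python
def b_square(v, b):
    # the b-square around v = (x, y, z): shift y and z by 0..b
    x, y, z = v
    return {(x, y + dy, z + dz) for dy in range(b + 1) for dz in range(b + 1)}

def staircase_ok(path):
    # x increases by exactly 1 at each step; y and z never decrease
    return all(x2 == x1 + 1 and y2 >= y1 and z2 >= z1
               for (x1, y1, z1), (x2, y2, z2) in zip(path, path[1:]))

def enlargement_vertices(path, b):
    # vertex set of the b-enlargement: union of b-squares along the staircase
    verts = set()
    for v in path:
        verts |= b_square(v, b)
    return verts

P = [(0, 0, 0), (1, 0, 1), (2, 1, 1)]
print(staircase_ok(P))                  # True
print(len(b_square((0, 0, 0), 2)))      # 9 == (b+1)^2
print(len(enlargement_vertices(P, 1)))  # 12: three disjoint 1-squares
```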
We need the following property (another proof of a similar statement is implicit in~\cite{matopriv}, Proposition 3.1).
\begin{lemma}\label{lemma-connect} Let $b \ge 0$ be an integer, and let $G$ be the $b$-enlargement of a staircase in $Q_N$. If $X\subseteq V(G)$ is a minimal set that intersects every path from the left side of $G$ to the right side, then $G[X]$ is connected. \end{lemma} \begin{proof} Let $S_1$ and $S_2$ be the left and the right side of $G$, respectively. Since we can extend the staircase if necessary, we can assume that $X$ is disjoint from $S_1\cup S_2$. Let $(A,B)$ be a separation of $G$ such that $S_1\subseteq A$, $S_2\subseteq B$ and $X=A\cap B$. Let $l\in S_1$ and $r\in S_2$ be arbitrary vertices of the sides of $G$.
Consider any two vertices $x,y\in X$, and let $C$ be the vertex set of the component of $G[X]$ containing $x$. Let $f:V(G)\to \mc{L}$ be defined by $$f(v)=\begin{cases} -1, & \text{if } v \in A\setminus B, \\
0, & \text{if } v \in C,\\
\star, & \text{if } v \in X\setminus C,\\
1, & \text{if } v \in B \setminus A. \end{cases}$$ By the minimality of $X$, there exist paths $P_x$ and $P_y$ from $l$ to $r$ in $G$ such that $V(P_x)\cap X=\{x\}$ and $V(P_y)\cap X=\{y\}$. Let $W$ be the closed walk consisting of $P_x$ and the reverse of $P_y$. Note that $I_W=\sum_{i=1}^n I_{T_i}$ for some triangles $T_1$, \ldots, $T_n$ in $G$ (since $W$ is the sum of walks $W_{yz}$ for edges $yz$ of $W$ together with a closed walk in the left side of $G$, it suffices to observe that this claim holds for the walks $W_{yz}$ and for closed walks in the left side).
Since $f$ is holomorphic, $W$ is $(f,0)$-almost contractible, and thus $P_x$ and $P_y$ are $f$-homotopic. By Lemma~\ref{lem:homotopic}, we have $\int_{P_x} df=\int_{P_y} df$. Since $f$ is entire on $P_x$, we have $\int_{P_x} df=2$, and thus $f(y)\neq\star$ (as otherwise we would have $\int_{P_y} df=0$). Therefore, $y$ is in the same component of $G[X]$ as $x$.
We conclude that $G[X]$ is connected. \end{proof}
Let $A_1,A_2$ be a partition of vertices of $Q_N$. Let $b\ge 0$ and $i\in\{1,2\}$ be integers, and let $P$ be a staircase. We say that $P$ is \emph{$(b,i)$-blocked} if every path in the $b$-enlargement of $P$ joining its left side $S_1$ with its right side $S_2$ intersects $A_i\setminus (S_1\cup S_2)$.
\begin{lemma}\label{lemma-permpair} Let $A_1,A_2$ be a partition of vertices of $Q_N$. Let $b\ge 0$ and $i\in\{1,2\}$ be integers. Let $P$ be a $(b,i)$-blocked staircase, let $M$ be the $(b+1)$-enlargement of $P$, and let $M_i$ be the subgraph of $M$ induced by $A_i\cap V(M)$. There exists a connected component of $M_i$ containing all paths in $M_i$ which join the left side of $M$ with the right side. \end{lemma} \begin{proof} Let $M_0$ be the $b$-enlargement of $P$. Let $X$ be a minimal subset of $A_i\cap V(M_0)$ such that every path in $M_0$ from the left side to the right side intersects $X$. By Lemma~\ref{lemma-connect}, $M_0[X]$ is connected.
Let $P'$ be a path in $M_i$ joining the left side of $M$ with the right side. For a vertex $u=(x,y,z) \in M$, let $v=(x,y_0,z_0) \in V(P)$ be the unique vertex of $P$ such that $u \in \square(v,b+1)$. Let $\pi(u)$ be the point of $\square(v,b)$ closest to $u$ in the Euclidean distance. That is, $\pi(u)=(x,\min(y,y_0+b),\min(z,z_0+b))$. Clearly $u$ is adjacent to $\pi(u)$. Moreover, it is easy to check that if $u$ and $u'$ are adjacent then $\pi(u)$ and $\pi(u')$ are adjacent or equal.
It follows that the subgraph $\pi(P')$ of $M_0$ induced by $\{\pi(u): u \in V(P')\}$ is connected and contains vertices both in the left and in the right side of $M_0$. Therefore, $\pi(P')$ intersects $X$, and thus $P'$ belongs to the same component of $M_i$ as $X$, as desired. \end{proof}
The \emph{$t \times t$-grid} is the graph whose vertex set consists of all pairs $\{(x,y) \in \bb{Z}^2 : 0 \leq x,y \leq t-1\}$
and two vertices $(x_1,y_1)$ and $(x_2, y_2)$ are adjacent iff $|x_1-x_2|+|y_1-y_2|=1$. Let $P_1$ and $P_2$ be two staircases. A staircase $P$ \emph{joins} $P_1$ with $P_2$ if $P_1$ is the initial segment of $P$ and $P_2$ is the final segment of $P$, or vice versa.
Consider the $N\times N\times N$ grid $Q_N$. For an integer $n\ge 1$ and a point $v=(x,y,z) \in V(Q_N)$ with $x,y,z\le N-n$, let $Q_n(v)$ denote the subgrid of $Q_N$ induced by vertices $(x',y',z') \in V(Q_N)$ such that $x \leq x' \leq x + n-1, y \leq y' \leq y + n-1,$ and $z \leq z' \leq z + n-1$. For a positive integer $d$, let $p_d(j,k)=(4dj+4dk,2dj+dk,dj+2dk)$. This definition is motivated by the following fact.
\begin{observation}\label{lemma-obsjoin} Let $n\ge 1$, $b\ge 0$ and $d\ge n+b$ be integers. For every vertex $z=(j,k)$ of the $(2t+1)\times (2t+1)$ grid, let $p_z=p_d(j,k)$ and let $P_z$ be a staircase in $Q_n(p_z)$. For every edge $uz$ of the $(2t+1)\times (2t+1)$ grid, there exists a staircase $P_{uz}$ joining $P_u$ with $P_z$. Furthermore, the staircases can be chosen so that for any edges $uz$ and $u'z'$ with $\{u,z\}\cap \{u',z'\}=\emptyset$, the $b$-enlargements of $P_{uz}$ and $P_{u'z'}$ are vertex-disjoint. \end{observation}
In the following proof of Theorem~\ref{thm-main2} we construct a subgraph of $Q_n[A_1]$ or $Q_n[A_2]$ which closely resembles a subdivision of the $t \times t$ grid. Unfortunately, obtaining an actual subdivision seems to require a lot more work, and therefore we content ourselves with obtaining the following less structured certificate of large tree-width. A collection of non-empty subsets $\mc{B}$ of the vertex set of a graph $G$ is called a \emph{bramble} if for all $B,B' \in \mc{B}$ the subgraph $G[B \cup B']$ of $G$ induced by $B \cup B'$ is connected (and in particular, $G[B]$ is connected for every $B \in \mc{B}$). The \emph{order} of $\mc{B}$ is the minimum size of a set $S \subseteq V(G)$ such that $S \cap B \neq \emptyset$ for every $B \in \mc{B}$. It is shown in~\cite{bramble} that if $G$ contains a bramble of order $t$ then $G$ has tree-width at least $t-1$.
We are now ready to complete the proof of Theorem~\ref{thm-main2}.
\begin{proof}[Proof of Theorem~\ref{thm-main2}] We will prove by induction on $b$ that there exists $N=N(b)$ such that for every partition $(A_1,A_2)$ of the vertex set of $Q_N$ and each $i\in \{1,2\}$, if the tree-width of both $Q_N[A_1]$ and $Q_N[A_2]$ is less than $t$, then $Q_N$ contains a $(b,i)$-blocked staircase. Note that the theorem will follow by choosing $b\colonequals\lceil\sqrt{18}(t+1)\rceil-1$ and $n\colonequals N(b)$. Indeed, the $b$-enlargement $B$ of a staircase with sides $S_1$ and $S_2$ is a $(b+1,b+1)$-slab, and since every path in $B$ from $S_1$ to $S_2$ intersects $A_i\setminus (S_1 \cup S_2)$, Theorem~\ref{thm-main1} implies that the tree-width of $Q_n[A_i]$ is at least $\frac{b+1}{\sqrt{18}}-1 \geq t$.
The base case $b=0$ is trivial with $N(0)=t+2$: we have $V(Q_t(1,1,1))\cap A_i\neq\emptyset$, since $A_{3-i}$ induces a subgraph of tree-width less than $t$. We move on to the induction step. Let $n_0=N(b-1)$ and let $N=(8t+5)(n_0+b)$. By the induction hypothesis, for every $v=(x,y,z)$ with $0\le x,y,z< N-n_0$, the subgrid $Q_{n_0}(v)$ contains a $(b-1,3-i)$-blocked staircase which we denote by $P_v$.
Let $d=n_0+b$, and for every vertex $z=(j,k)$ of the $(2t+1)\times (2t+1)$ grid, let $p(j,k)=p_d(j,k)$. Note that $Q_{n_0}(p(j,k))\subseteq Q_N$. Let $P_z=P_{p(j,k)}$; by Lemma~\ref{lemma-permpair}, there exists a connected subgraph $M_z$ of the $b$-enlargement $B_z$ of $P_z$ such that $V(M_z)\subseteq A_{3-i}$ and $M_z$ contains all paths in $B_z[A_{3-i}]$ joining the left side of $B_z$ with the right side.
For each edge $yz$ of the $(2t+1)\times (2t+1)$ grid, let $P_{yz}$ be the path as in Observation~\ref{lemma-obsjoin}. If $P_{yz}$ is $(b,i)$-blocked then the proof of the induction step is finished. Hence, we can assume that for every edge $yz$, there exists a path $R_{yz}$ with $V(R_{yz})\subseteq A_{3-i}$ in the $b$-enlargement of $P_{yz}$ joining the left side of this enlargement to the right side. Note that $R_{yz}$ intersects $M_y$ and $M_z$.
For $0 \leq j \leq 2t$, let $S'_j$ be the path forming the $j$-th column of the $(2t+1)\times (2t+1)$ grid, and let $$S_j=V\left(\bigcup_{y\in V(S'_j)} M_y\cup \bigcup_{yz\in E(S'_j)} R_{yz}\right).$$ Let the set $T_j$ be similarly defined for the $j$-th row of the grid. As observed in the previous paragraph, $Q_N[S_j]$ and $Q_N[T_j]$ are connected for all $0 \leq j\leq 2t$, and $M_{(j,k)} \subseteq S_j \cap T_k$. Let $\mc{B}=\{S_j \cup T_j\}_{0 \leq j \leq 2t}$. Clearly $\mc{B}$ is a bramble in $Q_N[A_{3-i}]$, and no vertex of $Q_N$ belongs to more than two elements of $\mc{B}$ by Observation~\ref{lemma-obsjoin}. It follows that the order of $\mc{B}$ is at least $t+1$, and thus $Q_N[A_{3-i}]$ has tree-width at least $t$, yielding the desired contradiction. \end{proof}
\noindent {\bf Acknowledgement.} This research was partially completed at a workshop held at the Bellairs Research Institute in Barbados in April 2014. We thank the participants of the workshop, and especially Paul Seymour, for helpful discussions.
\end{document} |
\begin{document}
\title{Elliptic problems with mixed nonlinearities and potentials singular at the origin and at the boundary of the domain}
\author[B. Bieganowski]{Bartosz Bieganowski}
\address[B. Bieganowski]{\newline\indent
Faculty of Mathematics, Informatics and Mechanics, \newline\indent
University of Warsaw, \newline\indent
ul. Banacha 2, 02-097 Warsaw, Poland}
\email{\href{mailto:bartoszb@mimuw.edu.pl}{bartoszb@mimuw.edu.pl}}
\author[A. Konysz]{Adam Konysz} \address[A. Konysz]{\newline\indent Faculty of Mathematics and Computer Science, \newline\indent Nicolaus Copernicus University, \newline\indent ul. Chopina 12/18, 87-100 Toru\'n, Poland} \email{\href{mailto:adamkon@mat.umk.pl}{adamkon@mat.umk.pl}}
\date{\today}
\begin{abstract} We are interested in the following Dirichlet problem $$ \left\{ \begin{array}{ll}
-\Delta u + \lambda u - \mu \frac{u}{|x|^2} - \nu \frac{u}{\mathrm{dist}(x,\mathbb{R}^N \setminus \Omega)^2} = f(x,u) & \quad \mbox{in } \Omega \\ u = 0 & \quad \mbox{on } \partial \Omega, \end{array} \right. $$ on a bounded domain $\Omega \subset \mathbb{R}^N$ with $0 \in \Omega$. We assume that the nonlinear part is superlinear on some closed subset $K \subset \Omega$ and asymptotically linear on $\Omega \setminus K$. We find a solution with the energy bounded by a certain min-max level, and infinitely many solutions provided that $f$ is odd in $u$. Moreover, we also study the multiplicity of solutions to the associated normalized problem.
\noindent \textbf{Keywords:} variational methods, singular potential, nonlinear Schr\"odinger equation, multiplicity of solutions
\noindent \textbf{AMS Subject Classification:} 35Q55, 35A15, 35J20, 58E05 \end{abstract}
\maketitle
\pagestyle{myheadings} \markboth{\underline{B. Bieganowski, A. Konysz}}{
\underline{Elliptic problems with singularities and mixed nonlinearities}}
\section{Introduction}
We are interested in the problem \begin{equation}\label{eq} \left\{ \begin{array}{ll}
-\Delta u + \lambda u - \mu \frac{u}{|x|^2} - \nu \frac{u}{\mathrm{dist}(x,\mathbb{R}^N \setminus \Omega)^2} = f(x,u) & \quad \mbox{in } \Omega \\ u = 0 & \quad \mbox{on } \partial \Omega, \end{array} \right. \end{equation}
where $\lambda, \mu, \nu \in \mathbb{R}$ are real parameters, $f : \Omega \times \mathbb{R} \rightarrow \mathbb{R}$, $\Omega \subset \mathbb{R}^N$ is a bounded domain with $0 \in \Omega$, and $K \subset \Omega$ is a closed set with $| \mathrm{int}\, K | > 0$.
Semilinear problems of the general form $$ -\Delta u = h(x,u) $$ appear when one looks for stationary states of time-dependent problems, including the \textit{heat equation} $\frac{\partial u}{\partial t} - \Delta u = h(x,u)$ or the \textit{wave equation} $\frac{\partial^2 u}{ \partial t^2} - \Delta u = h(x,u)$. In nonlinear optics one studies the \textit{nonlinear Schr\"odinger equation} \begin{equation}\label{eq:schroed}
\mathbf{i} \frac{\partial \Psi}{\partial t} + \Delta \Psi = h(x, |\Psi|) \Psi, \quad (t,x) \in \mathbb{R} \times \Omega, \end{equation} and looking for standing waves $\Psi(t,x) = e^{\mathbf{i}\lambda t} u(x)$ then leads to a semilinear problem.
The time-dependent equation \eqref{eq:schroed} appears in physical models in the case of bounded domains $\Omega$ (\cite{Fibich, Fibich2, Zuazua}), as well as in the case $\Omega = \mathbb{R}^N$ (\cite{Dorfler, Nie}). Two points of view on solutions to \eqref{eq} are possible: either $\lambda$ may be prescribed, or it may be considered as a part of the unknown. In the latter case a natural additional condition is the prescribed mass $\int_\Omega u^2 \, dx$. In this paper we will consider both cases: we will look for solutions of the unconstrained problem \eqref{eq} as well as of the constrained one, see \eqref{eq:normalized} below.
The equation \eqref{eq} (and systems of such equations) on bounded domains has been studied in the presence of bounded potentials \cite{B2} and of potentials singular at the origin \cite{Gao}; see also \cite{Felli, GuoMed, Kostenko} for the case of unbounded $\Omega$. Its constrained counterpart without the potential has been studied e.g. in \cite{Noris,PV}, where \eqref{eq:normalized} was considered with $f(x,u)=|u|^{p-2}u$, $\nu=\mu=0$, in the mass-subcritical, mass-critical and mass-supercritical cases. In this paper we are interested in the presence of a potential $$
V(x) = -\frac{\mu}{|x|^2} - \frac{\nu}{\mathrm{dist}\,(x, \mathbb{R}^N \setminus \Omega)^2} $$ which is singular in $\Omega$ as well as on the whole boundary $\partial \Omega$. We mention here that Schr\"odinger operators were studied with potentials being singular at a point on the boundary \cite{Chen}, as well as with potentials being singular on the whole boundary \cite{Tai, Tai2}. We assume that $\Omega$ is a domain satisfying the following condition: \begin{enumerate} \item[(C)] $-\Delta d \geq 0$ in $\Omega$, in the sense of distributions, where $d(x) := \mathrm{dist} (x, \mathbb{R}^N \setminus \Omega)$. \end{enumerate} This condition allows us to study the singular potential by means of Hardy-type inequalities (Section \ref{sect:2}). As we will see in Section \ref{sect:2} (see Proposition \ref{prop:convex}), any convex domain $\Omega$ satisfies (C).
We impose the following condition on the parameters appearing in the problem: \begin{enumerate} \item[(N)] $\mu,\nu \geq 0$, $\frac{\mu}{(N-2)^2} + \nu < \frac{1}{4}$, $N \geq 3$. \end{enumerate}
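For instance (an illustrative choice of parameters, not one singled out in the paper), for $N \geq 3$ one may take $\mu = \frac{(N-2)^2}{8}$ and $\nu = \frac{1}{16}$, since then
$$
\frac{\mu}{(N-2)^2} + \nu = \frac{1}{8} + \frac{1}{16} = \frac{3}{16} < \frac{1}{4}.
$$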
On the nonlinear part of \eqref{eq} we propose the following assumptions.
\begin{enumerate} \item[(F1)] $f : \Omega \times \mathbb{R} \rightarrow \mathbb{R}$ is a Carath\'eodory function and there is $2 < p < 2^*$ such that $$
|f(x,u)| \lesssim 1+|u|^{p-1}, \quad u \in \mathbb{R}, \ x \in \Omega; $$ \item[(F2)] $f(x,u) = o(u)$ uniformly in $x \in \Omega$ as $u \to 0$;
\item[(F3)] $F(x,u)/|u|^2 \to +\infty$ as $|u| \to +\infty$ for $x \in K$, where $F(x,u) := \int_0^u f(x,s) \, ds$;
\item[(F4)] $f(x,u)/|u|$ is nondecreasing on $(-\infty, 0)$ and on $(0,\infty)$;
\item[(F5)] $f(x,u)=\Theta(x)u$ for sufficiently large $|u|$ and $x \in \Omega \setminus K$, where $\Theta \in L^\infty(\Omega \setminus K)$. \end{enumerate}
A simple example satisfying all foregoing conditions is the following $$ f(x,u) = \left\{ \begin{array}{ll}
\Gamma(x) |u|^{p-2}u, & \quad x \in K, \\
\frac{|u|^2}{1+|u|^2} u \chi_{|u| \leq 1} + \frac12 u \chi_{|u| > 1}, & \quad x \in \Omega \setminus K, \end{array} \right. $$ where $\Gamma\in L^\infty(K)$ is a nonnegative function.
To show the boundedness of minimizing sequences for the problem \eqref{eq} we impose the following abstract condition: \begin{enumerate}
\item[(A)] $-\lambda$ is not an eigenvalue of $-\Delta - \frac{\mu}{|x|^2} - \frac{\nu}{d(x)^2} - \Theta(x)$ with Dirichlet boundary conditions on $L^2(\Omega \setminus K)$. \end{enumerate}
As we will see in Section \ref{sect:2}, (A) is satisfied if e.g. $\lambda \geq |\Theta|_\infty$ (cf. Theorem \ref{th:spectr}).
\begin{Th}\label{th:main1} Suppose that (C), (N), (F1)--(F5), (A) are satisfied and $\lambda \geq 0$. Then there is a nontrivial weak solution to \eqref{eq} with the energy level $c$ satisfying \eqref{gsl}. \end{Th}
\begin{Th}\label{th:main2} Suppose that (C), (N), (F1)--(F5), (A) hold, $\lambda \geq 0$, and $f$ is odd in $u \in \mathbb{R}$. Then there are infinitely many weak solutions to \eqref{eq}. \end{Th}
In the last section we also study the normalized problem \begin{equation}\label{eq:normalized} \left\{ \begin{array}{ll}
-\Delta u + \lambda u - \mu \frac{u}{|x|^2} - \nu \frac{u}{\mathrm{dist}(x, \mathbb{R}^N\setminus\Omega)^2} = f(x,u) & \quad \mbox{in } \Omega \\ u = 0 & \quad \mbox{on } \partial \Omega, \\ \int_\Omega u^2 \, dx = \rho > 0, \end{array} \right. \end{equation} where $\rho$ is fixed and $(\lambda, u) \in \mathbb{R} \times H^1_0(\Omega)$ is an unknown. Then we obtain the following multiplicity result in the so-called \textit{mass-subcritical case}.
\begin{Th}\label{main:3} Suppose that (C), (N), (F1) hold with $p < 2_* := 2+\frac{4}{N}$, and $f$ is odd in $u \in \mathbb{R}$. Then there are infinitely many weak solutions to \eqref{eq:normalized}. \end{Th}
In what follows, $\lesssim$ denotes the inequality up to a multiplicative constant. Moreover $C$ denotes a generic constant which may vary from one line to another.
\section{The domain \texorpdfstring{$\Omega$}{Ω} and the singular Schr\"odinger operator}\label{sect:2}
We recall that if $A \subset \mathbb{R}^N$ is a closed, nonempty set, we can define the distance function $\mathrm{dist} (\cdot, A) : \mathbb{R}^N \rightarrow [0,\infty)$ by $$
\mathrm{dist}(x, A) := \inf_{y \in A} |x-y|, \quad x \in \mathbb{R}^N. $$ We collect the following properties of the distance function: \begin{itemize}
\item[(i)] $|\mathrm{dist}(x, A) - \mathrm{dist}(x',A)| \leq |x-x'|$ for all $x,x' \in \mathbb{R}^N$,
\item[(ii)] if $x \in \mathbb{R}^N \setminus A$, then $\mathrm{dist}(\cdot, A)$ is differentiable at $x$ if and only if there is a unique $y \in A$ such that $\mathrm{dist}(x,A) = |x-y|$, and then $\nabla \mathrm{dist}(x, A) = \frac{x-y}{|x-y|}$. \end{itemize}
Now we consider $A := \mathbb{R}^N \setminus \Omega$. It is clear that $A$ is a closed subset of $\mathbb{R}^N$. We recall that we denote $d(x) = \mathrm{dist} (x, \mathbb{R}^N \setminus \Omega)$. Observe that, due to Rademacher's theorem (\cite[Theorem 2.2.1]{Ziemer}) and (i), $d$ is differentiable almost everywhere and, from (ii), $|\nabla d| = 1$ almost everywhere. We recall that the assumption (C) says that $$ -\Delta d \geq 0 \mbox{ in } \Omega $$ holds in the sense of distributions. We note the following fact.
\begin{Prop}\label{prop:convex}
If $\Omega \subset \mathbb{R}^N$ is a convex domain, then $d \big|_\Omega$ is concave and $\Omega$ satisfies (C). \end{Prop}
\begin{proof}
First note that $d \big|_\Omega : \Omega \rightarrow [0,\infty)$ is a concave function. Indeed, fix any $x,y \in \Omega$ and $\alpha \in [0,1]$. Let $z = \alpha x + (1-\alpha)y$. Choose $z_0 \in \partial \Omega$ such that $d(z) = |z-z_0|$. Let $T_{z_0} := z_0 + \mathrm{span} \{ z-z_0 \}^\perp$ be an affine subspace of $\mathbb{R}^N$ orthogonal to $z-z_0$ containing $z_0$. Define $x_0, y_0$ as orthogonal projections of $x, y$ respectively onto $T_{z_0}$. Then $$
d(z) = |z-z_0| = \alpha |x-x_0| + (1-\alpha)|y-y_0| \geq \alpha d(x) + (1-\alpha) d(y), $$ which completes the proof of concavity. Moreover, since $d$ is concave on $\Omega$, from \cite[Theorem 6.8]{Ev} there is a nonnegative Radon measure $\sigma$ on $\Omega$ satisfying $$ -\Delta d = \sigma \quad \mbox{in the sense of distributions, } $$ namely $$ \int_{\Omega} \nabla d \cdot \nabla \varphi \, dx = \int_\Omega \varphi \, d\sigma \quad \mbox{for } \varphi \in {\mathcal C}_0^\infty (\Omega). $$ Clearly, for $\varphi \geq 0$ we get $$ \int_{\Omega} \nabla d \cdot \nabla \varphi \, dx \geq 0 $$ and condition (C) holds. \end{proof} To study singular terms in \eqref{eq} we recall the following Hardy-type inequalities. If $u \in H^1_0 (\Omega)$, where $\Omega$ is a domain in $\mathbb{R}^N$ with finite Lebesgue measure and $0 \in \Omega$, then (see \cite{BV}) \begin{equation}\label{hardy1}
\frac{(N-2)^2}{4} \int_\Omega \frac{u^2}{|x|^2} \, dx \leq \int_\Omega |\nabla u|^2 \, dx. \end{equation} Now let $\Omega \subset \mathbb{R}^N$ be a bounded domain satisfying (C). Then, for $u \in H^1_0(\Omega)$, the following Hardy inequality involving the distance function holds (see \cite{BFT}) \begin{equation}\label{hardy2}
\frac14 \int_\Omega \frac{u^2}{d(x)^2} \, dx \leq \int_{\Omega} |\nabla u|^2 \, dx. \end{equation}
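Combining \eqref{hardy1} and \eqref{hardy2} with assumption (N), we obtain the estimate used repeatedly below: for every $u \in H^1_0(\Omega)$,
$$
\int_\Omega \mu \frac{u^2}{|x|^2} + \nu \frac{u^2}{d(x)^2} \, dx \leq \left( \frac{4\mu}{(N-2)^2} + 4\nu \right) \int_\Omega |\nabla u|^2 \, dx \leq \int_\Omega |\nabla u|^2 \, dx,
$$
where the second inequality is strict whenever $u \neq 0$.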
We consider the operator $\mathcal{A} := -\Delta - \frac{\mu}{|x|^2} - \frac{\nu}{d(x)^2} - \Theta(x)$ on $L^2 (\Omega \setminus K)$ with Dirichlet boundary conditions. Then the domain is ${\mathcal D}({\mathcal A}) := H^2 (\Omega \setminus K) \cap H^1_0 (\Omega \setminus K)$.
\begin{Th}\label{th:spectr}
The operator ${\mathcal A} : {\mathcal D}({\mathcal A}) \subset L^2 (\Omega \setminus K) \rightarrow L^2 (\Omega \setminus K)$ is elliptic, self-adjoint on $L^2 (\Omega \setminus K)$ and has compact resolvents. Moreover the spectrum satisfies $\sigma({\mathcal A}) \subset (-|\Theta|_\infty, +\infty)$ and consists of eigenvalues $-|\Theta|_\infty < \lambda_1 \leq \lambda_2 \leq \ldots$ with $\lambda_n \to +\infty$ as $n \to +\infty$. \end{Th}
\begin{proof}
It is well-known that ${\mathcal A}$ is densely defined and closed in $L^2(\Omega \setminus K)$. It is easy to check that ${\mathcal A}$ is self-adjoint. The compactness of its resolvents easily follows from the Rellich-Kondrachov theorem. Hence its spectrum consists of eigenvalues $\lambda_n$ with $\lambda_n \to +\infty$. To see that $\sigma({\mathcal A}) \subset (-|\Theta|_\infty, +\infty)$, suppose that $\lambda$ is an eigenvalue of ${\mathcal A}$ with an associated eigenfunction $u \in H^1_0(\Omega \setminus K)$. We can treat $u$ as a function in $H^1_0(\Omega)$ which vanishes on $K$. Then, using \eqref{hardy1} and \eqref{hardy2}, \begin{align*}
\lambda \int_{\Omega \setminus K} u^2 \, dx &\geq - \int_\Omega \left( |\nabla u|^2 - \mu \frac{u^2}{|x|^2} - \nu \frac{u^2}{d(x)^2} \right) dx + \lambda \int_{\Omega \setminus K} u^2 \, dx \\ &= -\int_{\Omega \setminus K} \Theta(x) u^2 \, dx \geq - |\Theta|_\infty \int_{\Omega \setminus K} u^2 \, dx. \end{align*}
Hence $\lambda \geq -|\Theta|_\infty$. To show that $\lambda \neq -|\Theta|_\infty$, suppose by contradiction that there is $u \in H^1_0 (\Omega \setminus K)$ with $$
\int_{\Omega \setminus K} |\nabla u|^2 - \mu \frac{u^2}{|x|^2} - \nu \frac{u^2}{d(x)^2} - \Theta(x) u^2 \, dx = -|\Theta|_\infty \int_{\Omega \setminus K} u^2 \, dx. $$ Thus $$
\int_{\Omega \setminus K} |\nabla u|^2 - \mu \frac{u^2}{|x|^2} - \nu \frac{u^2}{d(x)^2} \, dx = \int_{\Omega \setminus K} (\Theta(x) - |\Theta|_\infty) u^2 \, dx \leq 0. $$ However, using \eqref{hardy1} and \eqref{hardy2} we get $$
\int_{\Omega \setminus K} |\nabla u|^2 - \mu \frac{u^2}{|x|^2} - \nu \frac{u^2}{d(x)^2} \, dx \geq \left(1 - \frac{4\mu}{(N-2)^2} - 4 \nu \right) \int_{\Omega \setminus K} |\nabla u|^2 \, dx. $$
Hence $\int_{\Omega \setminus K} |\nabla u|^2 \, dx = 0$ and $u = 0$, which is a contradiction. Hence $\sigma({\mathcal A}) \subset (-|\Theta|_\infty, +\infty)$. \end{proof}
\section{Variational setting and critical point theory}
Suppose that $(E, \| \cdot \|)$ is a Hilbert space and ${\mathcal J} : E \rightarrow \mathbb{R}$ is a nonlinear functional of the general form $$
{\mathcal J}(u) = \frac12 \|u\|^2 - {\mathcal I}(u), $$ where ${\mathcal I}$ is of ${\mathcal C}^1$ class and ${\mathcal I}(0)=0$. We introduce the so-called \textit{Nehari manifold} $$ {\mathcal N} := \{ u \in E \setminus \{ 0 \} \ : \ {\mathcal J}'(u)(u) = 0 \}. $$ Observe that ${\mathcal I}'(u)(u) > 0$ on ${\mathcal N}$. Indeed, $$
0 = {\mathcal J}'(u)(u) = \|u\|^2 - {\mathcal I}'(u)(u), \quad u \in {\mathcal N}. $$ To utilize the mountain pass approach, we consider the following space of paths $$
\Gamma := \{ \gamma \in {\mathcal C} ([0,1], E) \ : \ \gamma(0) = 0, \ \|\gamma(1)\| > r, \ {\mathcal J}(\gamma(1)) < 0 \} $$ and the following mountain pass level $$ c := \inf_{\gamma \in \Gamma} \sup_{t \in [0,1]} {\mathcal J}(\gamma(t)). $$ Moreover, for a closed vector subspace $Q \subset E$ (see (J2) below), we set $$ \Gamma_Q := \Gamma \cap {\mathcal C}([0,1],Q). $$ We propose an abstract theorem which is a combination of \cite[Theorem 5.1]{B} and \cite[Theorem 2.1]{BM-Indiana}. The proof is a straightforward modification of the proofs of the aforementioned theorems; however, we include it here for the reader's convenience.
\begin{Th}\label{abstract} Suppose that \begin{itemize} \item[(J1)] there is $r > 0$ such that $$
a := \inf_{\|u\|=r} {\mathcal J}(u) > 0; $$ \item[(J2)] there is a closed vector subspace $Q \subset E$ such that $\frac{{\mathcal I} (t_n u_n)}{t_n^2} \to +\infty$ for $t_n \to +\infty$, $u_n \in Q$ and $u_n \to u \neq 0$; \item[(J3)] for all $t > 0$ and $u \in {\mathcal N}$ there holds $$ \frac{t^2-1}{2} {\mathcal I}'(u)(u) - {\mathcal I}(tu) + {\mathcal I}(u) \leq 0. $$ \end{itemize} Then $\Gamma_Q \neq \emptyset$, ${\mathcal N} \cap Q \neq \emptyset$ and \begin{equation}\label{gsl}
0 < \inf_{\|u\|=r} {\mathcal J}(u) \leq c \leq \inf_{\gamma \in \Gamma_Q} \sup_{t \in [0,1]} {\mathcal J}(\gamma(t)) = \inf_{{\mathcal N} \cap Q} {\mathcal J} = \inf_{u \in Q \setminus \{0\}} \sup_{t \geq 0} {\mathcal J}(tu). \end{equation} Moreover, there is a Cerami sequence for ${\mathcal J}$ on the level $c$, i.e. a sequence $\{ u_n \}_n \subset E$ such that $$
{\mathcal J}(u_n) \to c, \quad (1+\|u_n\|) {\mathcal J}'(u_n) \to 0. $$ \end{Th}
\begin{proof}
Observe that there exists $v \in Q \setminus \{0\}$ with $\|v\| > r$ such that ${\mathcal J}(v) < 0$. Indeed, fix $u \in Q \setminus \{0\}$; from (J2) it follows that \begin{equation}\label{infty}
\frac{{\mathcal J}(tu)}{t^2} = \frac12 \|u\|^2 - \frac{{\mathcal I}(tu)}{t^2} \to - \infty \quad \mbox{as } t \to +\infty \end{equation}
and we may take $v := t u$ for sufficiently large $t > 0$. In particular, the family of paths $\Gamma_Q$ is nonempty. Moreover, ${\mathcal J}(tu) \to 0$ as $t \to 0^+$ and for $t = \frac{r}{\|u\|} > 0$ we get ${\mathcal J}(tu) > 0$. Hence, taking \eqref{infty} into account, $(0,+\infty) \ni t \mapsto {\mathcal J}(tu) \in \mathbb{R}$ has a local maximum at some $t$, which is a critical point of this map, and then $tu \in {\mathcal N}$. Hence ${\mathcal N} \cap Q \neq \emptyset$. Suppose that $u \in {\mathcal N} \cap Q$. Then, from (J3), \begin{equation*} {\mathcal J}(tu) = {\mathcal J}(tu) - \frac{t^2-1}{2} {\mathcal J}'(u)(u) \leq {\mathcal J}(u) \end{equation*} and therefore $u$ is a maximizer (not necessarily unique) of ${\mathcal J}$ on $\mathbb{R}_+ u := \{ su \ : \ s > 0 \}$. Hence, for any $u \in {\mathcal N}\cap Q$ there are $0 < t_{\min} (u) \leq 1 \leq t_{\max}(u)$ such that $t u \in {\mathcal N}\cap Q$ for any $t \in [t_{\min}(u), t_{\max} (u)]$ and $$ [t_{\min}(u), t_{\max} (u)] \ni t \mapsto {\mathcal J}(tu) \in \mathbb{R} $$ is constant. Moreover ${\mathcal J}'(tu)(u) > 0$ for $t \in (0, t_{\min}(u))$ and ${\mathcal J}'(tu)(u) < 0$ for $t \in (t_{\max} (u), +\infty)$. Consequently, $Q \setminus {\mathcal N}$ consists of two connected components, and any path $\gamma \in \Gamma_Q$ intersects ${\mathcal N}\cap Q$. Thus $$ \inf_{\gamma \in \Gamma_Q} \sup_{t \in [0,1]} {\mathcal J}(\gamma(t)) \geq \inf_{{\mathcal N}\cap Q} {\mathcal J}. $$ Since $$ \inf_{{\mathcal N}\cap Q} {\mathcal J} = \inf_{u \in Q \setminus \{0\}} \sup_{t > 0} {\mathcal J}(tu), $$ it follows, under (J1), that $$ c = \inf_{\gamma \in \Gamma} \sup_{t \in [0,1]} {\mathcal J}(\gamma(t)) \leq \inf_{\gamma \in \Gamma_Q} \sup_{t \in [0,1]} {\mathcal J}(\gamma(t)) = \inf_{{\mathcal N} \cap Q} {\mathcal J} = \inf_{u \in Q \setminus \{0\}} \sup_{t > 0} {\mathcal J}(tu). $$ The existence of a Cerami sequence follows from the mountain pass theorem. \end{proof}
To study the multiplicity of solutions we will recall the symmetric mountain pass theorem. We consider the following condition \begin{enumerate}
\item[(J4)] there exists a sequence of subspaces $\widetilde{E}_1 \subset \widetilde{E}_2 \subset \ldots \subset E$ such that $\dim \widetilde{E}_k=k$ for every $k \geq 1$ and there is a radius $R_k$ such that $\sup_{u \in \widetilde{E}_k,\ \|u\|\geq R_k} {\mathcal J}(u) \leq 0$. \end{enumerate} Then, the following theorem holds.
\begin{Th}[{\cite[Corollary 2.9]{AmbrRab}}, {\cite[Theorem 9.12]{Rabinowitz}}]\label{multipl} Suppose that ${\mathcal J}$, as above, is even and satisfies (J1), (J4) and the Palais-Smale condition (namely, any Palais-Smale sequence for ${\mathcal J}$ contains a convergent subsequence). Then ${\mathcal J}$ has an unbounded sequence of critical values. \end{Th} We work in the usual Sobolev space $H^1_0(\Omega)$, the completion of ${\mathcal C}_0^\infty(\Omega)$ with respect to the norm $$
\| u \|_{H^1} := \left( \int_{\Omega} |\nabla u|^2 + u^2 \, dx \right)^{1/2}. $$ Define the bilinear form $B : H^1_0(\Omega) \times H^1_0 (\Omega) \rightarrow \mathbb{R}$ by $$
B(u,v) := \int_{\Omega} \nabla u \cdot \nabla v + \lambda uv \, dx - \mu \int_{\Omega} \frac{uv}{|x|^2} \, dx - \nu \int_\Omega \frac{uv}{d(x)^2} \, dx, \quad u,v \in H^1_0(\Omega). $$
\begin{Lem} $B$ defines an inner product on $H^1_0 (\Omega)$. Moreover, the associated norm is equivalent to the usual one. \end{Lem} \begin{proof} To check that $B$ is positive-definite we utilize \eqref{hardy1}, \eqref{hardy2}, and (N) to get \begin{align*}
B(u,u)& = \int_{\Omega} |\nabla u |^2 + \lambda u^2 \, dx - \mu \int_{\Omega} \frac{u^2}{|x|^2} \, dx - \nu \int_\Omega \frac{u^2}{d(x)^2} \, dx\\
&\geq \left(1 - \frac{4 \mu}{(N-2)^2} - 4\nu \right) \int_\Omega |\nabla u|^2 \, dx + \lambda \int_\Omega u^2 \, dx \geq \left(1 - \frac{4 \mu}{(N-2)^2} - 4\nu \right) \int_\Omega |\nabla u|^2 \, dx, \end{align*} where we used that $\lambda \geq 0$, and the statement follows from the Poincar\'e inequality. Moreover, from $$
\int_\Omega |\nabla u|^2 + \lambda u^2 \, dx \geq B(u,u) \geq \left(1 - \frac{4 \mu}{(N-2)^2} - 4\nu \right) \int_\Omega |\nabla u|^2 \, dx $$ it follows that $B$ generates a norm on $H^1_0(\Omega)$ equivalent to the standard one. \end{proof}
Let $\| \cdot \|$ denote the norm generated by $B$, namely $$
\|u\| := \sqrt{B(u,u)}, \quad u \in H^1_0(\Omega). $$ Then we can define the energy functional ${\mathcal J} : H^1_0 (\Omega) \rightarrow \mathbb{R}$ by \begin{equation}\label{eq:J}
{\mathcal J}(u) := \frac12 \|u\|^2 - \int_\Omega F(x,u) \, dx, \end{equation} where $F(x,u) := \int_0^u f(x,s) \, ds$ is given in (F3). It is well-known that under (F1), (F2) the functional is of ${\mathcal C}^1$ class and $$ {\mathcal J}'(u)(v) = B(u,v) - \int_\Omega f(x,u)v \, dx, \quad u,v \in H^1_0(\Omega). $$ Hence, its critical points are weak solutions to \eqref{eq}.
\section{Verification of (J1)--(J4)}
Observe that (F1), (F2) imply that for every $\varepsilon > 0$ one can find $C_\varepsilon > 0$ such that $$
|f(x,u)| \leq \varepsilon |u| + C_{\varepsilon}|u|^{p-1}. $$ There follows also a similar inequality for $F$, namely \begin{equation}\label{eq:F-eps}
F(x,u) \leq \varepsilon u^2 + C_\varepsilon |u|^p. \end{equation} We note also that if in addition (F4) holds, then $F(x,u) \geq 0$. Moreover we recall that the functional ${\mathcal J}$ is defined by \eqref{eq:J}.
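For the record, \eqref{eq:F-eps} follows by integrating the pointwise bound on $f$ and using $p > 2$:
$$
F(x,u) = \int_0^u f(x,s) \, ds \leq \int_0^{|u|} \left( \varepsilon s + C_\varepsilon s^{p-1} \right) ds = \frac{\varepsilon}{2} u^2 + \frac{C_\varepsilon}{p} |u|^p \leq \varepsilon u^2 + C_\varepsilon |u|^p.
$$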
\begin{Lem}\label{lem:assump} Suppose that (C), (N), (F1)--(F5) hold. Then ${\mathcal J}$ satisfies (J1)--(J3) in Theorem \ref{abstract} and (J4) in Theorem \ref{multipl}. \end{Lem} \begin{enumerate} \item[(J1)] Using \eqref{eq:F-eps} and Sobolev embeddings we obtain $$
\int_\Omega F(x,u)\,dx\leq \varepsilon|u|_2^2+C_\varepsilon |u|_p^p\lesssim \varepsilon\|u\|^2+C_\varepsilon\|u\|^p. $$ Hence we can choose $\varepsilon>0$ and $r>0$ such that $$
\int_\Omega F(x,u)\,dx\leq \frac{1}{4}\|u\|^2 $$
for all $\|u\|\leq r$. Then we get \begin{align*}
\mathcal{J}(u) &= \frac{1}{2}\|u\|^2-\int_\Omega F(x,u)\,dx \geq \frac{1}{4}\|u\|^2=\frac{r^2}{4}>0 \end{align*}
for all $\|u\|=r.$ \item[(J2)] Let $Q := H^1_0 (\mathrm{int}\, K)$. Let $t_n\to+\infty$, $u_n \in Q$ and $u_n\to u\neq 0$. Then from Fatou's lemma and (F3) $$ \frac{\mathcal{I}(t_nu_n)}{t_n^2}=\frac{\int_K F(x,t_nu_n)\,dx}{t_n^2} \to+\infty \quad \mbox{as } n \to +\infty. $$
\item[(J3)] Fix $u \in {\mathcal N}$. Define $$ (0,\infty) \ni t \mapsto \varphi(t):=\frac{t^2-1}{2}\mathcal{I}'(u)(u)-\mathcal{I}(tu)+\mathcal{I}(u)\in\mathbb{R}. $$ Note that $\varphi(1)=0$. Moreover \begin{align*} \varphi'(t)&=t \mathcal{I}'(u)(u)-\mathcal{I}'(tu)(u)=\int_\Omega f(x,u)tu\,dx-\int_\Omega f(x,tu)u\,dx. \end{align*} Suppose that $t\in(0,1)$. Then (F3) implies that for a.e. $x \in \Omega$, $f(x,tu)u \leq tf(x,u)u$ and therefore $\varphi'(t) \geq 0$. Similarly $\varphi'(t)\leq 0$ for $t>1$. Hence $\varphi$ attains its maximum at $t=1$, which implies that $\varphi(t)\leq\varphi(1)=0$ for all $t>0$.
\item[(J4)] Let $\widetilde{E} \subset H^1_0(\mathrm{int}\,K) \subset H^1_0(\Omega)$ be a finite dimensional subspace. Note that on $\widetilde{E}$ all norms are equivalent. Suppose, by contradiction, that there is a sequence $(u_n) \subset \widetilde{E}$ such that $\|u_n\| \to +\infty$ and ${\mathcal J}(u_n) > 0$. Let $w_n(x) := u_n(x) / \|u_n\|$. It is clear that $\|w_n\|=1$ and, since $\widetilde{E}$ is finite-dimensional, there is $w \in \widetilde{E} \setminus \{0\}$ such that $\|w_n-w\| \to 0$. In particular $|\mathrm{supp}\, (w) \cap K| > 0$. Then, for a.e. $x \in \mathrm{supp}\, (w) \cap K$ we have that
$$
u_n(x)^2 = \|u_n\|^2 w_n(x)^2 \to +\infty.
$$
Hence, by Fatou's lemma and (F3)
$$
0 < \frac{{\mathcal J}(u_n)}{\|u_n\|^2} = \frac12 - \int_{\Omega} \frac{F(x,u_n)}{\|u_n\|^2} \, dx \leq \frac12 - \int_{\mathrm{supp}\,(w) \cap K} \frac{F(x,u_n)}{u_n^2} w_n^2 \, dx \to -\infty,
$$
which is a contradiction. \end{enumerate}
\section{Cerami sequences and proofs of main theorems}
\begin{Lem}\label{lem:bdd} Any Cerami sequence for ${\mathcal J}$ is bounded. \end{Lem}
\begin{proof}
Suppose that $\|u_n\|\to+\infty$ up to a subsequence. We define $v_n = \frac{u_n}{\|u_n\|}$. Then $\|v_n\|=1$ and $v_n \rightharpoonup v_0$ in $H^1_0(\Omega)$. From compact Sobolev embeddings, $v_n \to v_0$ in $L^2(\Omega)$, in $L^p (\Omega)$ and almost everywhere.
We consider three cases.
\begin{itemize} \item Suppose that $v_0 = 0$. Condition (J3) implies that $$ {\mathcal J}(u) \geq {\mathcal J}(tu) - \frac{t^2-1}{2} {\mathcal J}'(u)(u). $$
Replacing $t$ by $\frac{t}{\|u_n\|}$ we obtain that $$
{\mathcal J}(u_n) \geq {\mathcal J}\left(\frac{t}{\|u_n\|}u_n\right) - \frac{\frac{t^2}{\|u_n\|^2}-1}{2} {\mathcal J}'(u_n)(u_n) = {\mathcal J}(t v_n) + o(1). $$ Hence $$ {\mathcal J}(u_n) \geq \frac{t^2}{2} - \int_\Omega F(x,tv_n) \, dx + o(1). $$ Moreover $$
\left| \int_\Omega F(x,tv_n) \, dx \right| \leq \varepsilon t^2 \int_\Omega |v_n|^2 \, dx + C_\varepsilon t^p \int_\Omega |v_n|^p \, dx \to 0 \quad \mbox{as } n \to \infty. $$ Thus $$ c + o(1) = {\mathcal J}(u_n) \geq \frac{t^2}{2} + o(1) $$ for any $t > 0$, a contradiction.
\item Now we suppose that $v_0 \neq 0$ and $|\mathrm{supp}\, v_0 \cap K | > 0$. Then $$
o(1)=\frac{{\mathcal J}(u_n)}{\|u_n\|^2} = \frac12 - \int_\Omega \frac{F(x,u_n)}{\|u_n\|^2} \, dx = \frac12 - \int_\Omega \frac{F(x,u_n)}{u_n^2} v_n^2 \, dx \leq \frac12 - \int_{\mathrm{supp}\, v_0 \cap K} \frac{F(x,u_n)}{u_n^2} v_n^2 \, dx \to -\infty, $$ a contradiction.
\item Suppose that $v_0 \neq 0$ and $|\mathrm{supp}\, v_0 \cap K| = 0$. Then $\mathrm{supp}\, v_0 \subset \Omega \setminus K$. Fix $\varphi \in {\mathcal C}_0^\infty (\Omega \setminus K)$ and note that $$ o(1) = {\mathcal J}'(u_n)(\varphi) = \langle u_n, \varphi \rangle - \int_{\Omega} f(x,u_n) \varphi \, dx. $$ Observe that \begin{align*}
\int_{\Omega} f(x,u_n)\varphi \, dx = \|u_n\| \int_{\Omega} \frac{f(x,u_n)}{u_n} v_n \varphi \, dx = \|u_n\| \left( \int_{\mathrm{supp}\, \varphi \cap \mathrm{supp}\, v_0} \frac{f(x,u_n)}{u_n} v_n \varphi \, dx + o(1) \right). \end{align*}
Observe that for a.e. $x \in \mathrm{supp}\, \varphi \cap \mathrm{supp}\, v_0$ we get that $|u_n(x)| = |v_n(x)| \|u_n\| \to +\infty$. Fix $x \in \mathrm{supp}\, \varphi \cap \mathrm{supp}\, v_0$; then from (F5), for sufficiently large $n$, $f(x,u_n(x)) = \Theta(x) u_n(x)$. Thus $$ \frac{f(x,u_n(x))}{u_n(x)} v_n(x) \varphi(x) = \Theta(x) v_n(x) \varphi(x) $$ and therefore $$ \frac{f(x,u_n(x))}{u_n(x)} v_n(x) \varphi(x) \to \Theta(x) v_0(x) \varphi(x) $$ pointwise, a.e. on $\mathrm{supp}\, \varphi \cap \mathrm{supp}\, v_0$. Combining (F4) and (F5) we also get that $$
\left| \frac{f(x,u_n(x))}{u_n(x)} \right|^2 \leq |\Theta|_\infty^2 $$ and $\frac{f(\cdot ,u_n)}{u_n} \to \Theta$ in $L^2(\mathrm{supp}\, \varphi \cap \mathrm{supp}\, v_0)$. Thus, from the Lebesgue dominated convergence theorem and the H\"older inequality we get $$ \int_{\mathrm{supp}\, \varphi \cap \mathrm{supp}\, v_0} \frac{f(x,u_n)}{u_n} v_n \varphi \, dx \to \int_{\Omega} \Theta(x) v_0 \varphi \, dx. $$ Hence $$ \langle v_0, \varphi \rangle = \int_\Omega \Theta(x) v_0 \varphi \, dx. $$
In particular, $0$ is an eigenvalue of the operator $-\Delta + \lambda - \frac{\mu}{|x|^2} - \frac{\nu}{d^2} - \Theta(x)$ with Dirichlet boundary conditions on $\Omega \setminus K$, which contradicts (A). \end{itemize}
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:main1}] Since the Cerami sequence $(u_n)$ is bounded by Lemma \ref{lem:bdd}, we have the following convergences (up to a subsequence): \begin{align*} u_n\rightharpoonup u_0 \quad & \mbox{in } H^1_0(\Omega),\\ u_n\to u_0 \quad & \mbox{in } L^2(\Omega) \mbox{ and in } L^p(\Omega),\\ u_n\to u_0\quad & \mbox{a.e. on }\Omega. \end{align*} Hence, for any $\varphi\in {\mathcal C}^\infty_0(\Omega)$, $$ {\mathcal J}'(u_n)(\varphi)-{\mathcal J}'(u_0)(\varphi)=\langle u_n-u_0,\varphi\rangle-\int_\Omega \left(f(x,u_n)-f(x,u_0)\right)\varphi\,dx\to 0, $$ because the weak convergence of $u_n$ obviously implies that $$ \langle u_n-u_0,\varphi\rangle\to 0, $$ and we will use the Vitali convergence theorem to prove that $$ \int_\Omega \left(f(x,u_n)-f(x,u_0)\right)\varphi\,dx\to 0. $$ Hence we need to check the uniform integrability of the family $\left\{ \left(f(x,u_n)-f(x,u_0)\right)\varphi \right\}_n$. Using (F1) and Lemma \ref{lem:bdd} we obtain that for any measurable set $E \subset \Omega$ \begin{align*}
\int_E|f(x,u_n)-f(x,u_0)||\varphi|\,dx &\leq \int_E|f(x,u_n)\varphi|\,dx+\int_E|f(x,u_0)\varphi|\,dx\\
&\lesssim \int_E|\varphi|\,dx+\int_E|u_n|^{p-1}|\varphi|\,dx+\int_E|\varphi|\,dx+\int_E|u_0|^{p-1}|\varphi|\,dx\\
&\lesssim |\varphi \chi_E|_1+ |u_n|_p^{p-1} |\varphi \chi_E|_p +|u_0|_p^{p-1} |\varphi \chi_E|_p \\
&\lesssim |\varphi \chi_E|_1+|\varphi \chi_E|_p. \end{align*} Then, for any $\varepsilon > 0$, we can choose $\delta > 0$ small enough that $$
\int_E|f(x,u_n)-f(x,u_0)||\varphi|\,dx < \varepsilon $$
for $|E|<\delta$. Hence ${\mathcal J}'(u_n)(\varphi)\to{\mathcal J}'(u_0)(\varphi)$, and ${\mathcal J}'(u_0)=0$. \end{proof}
\begin{proof}[Proof of Theorem \ref{th:main2}] The statement follows directly from Theorem \ref{multipl} and Lemma \ref{lem:assump}. \end{proof}
\section{Multiple solutions to the mass-subcritical normalized problem}
In what follows we are interested in the normalized problem \eqref{eq:normalized}, where $\lambda$ is no longer prescribed and becomes part of the unknown pair $(\lambda, u) \in \mathbb{R} \times H^1_0(\Omega)$. Then, solutions are critical points of the energy functional $$
{\mathcal J}_0 (u) := \frac12 |\nabla u|_2^2 - \frac{\mu}{2} \int_\Omega \frac{u^2}{|x|^2} \, dx - \frac{\nu}{2} \int_\Omega \frac{u^2}{d(x)^2} \, dx - \int_\Omega F(x,u) \, dx $$ restricted to the $L^2$-sphere in $H^1_0(\Omega)$ $$ {\mathcal S} := \left\{ u \in H^1_0(\Omega) \ : \ \int_\Omega u^2 \, dx = \rho \right\} $$ and $\lambda$ arises as a Lagrange multiplier.
We recall the well-known Gagliardo-Nirenberg inequality \begin{equation}\label{gn-ineq}
|u|_p \leq C_{p,N} |\nabla u|_2^{\delta_p} |u|_2^{1-\delta_p}, \quad u \in H^1_0(\Omega), \end{equation} where $\delta_p := N \left(\frac12 - \frac{1}{p} \right)$ and $C_{p,N} > 0$ is the optimal constant.
\begin{Lem}\label{coercive} ${\mathcal J}_0$ is coercive and bounded from below on ${\mathcal S}$. \end{Lem}
\begin{proof} Using (F1), \eqref{hardy1}, \eqref{hardy2} and \eqref{gn-ineq}, we obtain \begin{align*}
{\mathcal J}_0(u) &= \frac12 |\nabla u|_2^2 - \frac{\mu}{2} \int_\Omega \frac{u^2}{|x|^2} \, dx - \frac{\nu}{2} \int_\Omega \frac{u^2}{d(x)^2} \, dx - \int_\Omega F(x,u) \, dx \\
&\geq \frac12 \left(1 - \frac{4\mu}{(N-2)^2} - 4\nu \right) |\nabla u|_2^2 -C_1 |\Omega| - C_1|u|_p^p \\
&\geq \frac12 \left(1 - \frac{4\mu}{(N-2)^2} - 4\nu \right) |\nabla u|_2^2 - C_1|\Omega|- C\left(|\nabla u|_2^{\delta_p} |u|_2^{1-\delta_p} \right)^p\\
&\geq \frac12 \left(1 - \frac{4\mu}{(N-2)^2} - 4\nu \right) |\nabla u|_2^2 -C_1|\Omega| - C|\nabla u|_2^{\delta_p p}, \end{align*} where we used that $|u|_2^2 = \rho$ on ${\mathcal S}$, so that the factor $\left(|u|_2^{1-\delta_p}\right)^p$ is a constant, and where, by the mass-subcritical assumption $p < 2 + \frac{4}{N}$, $$ \delta_p p = N \left( \frac12 - \frac1p \right) p = N \left( \frac{p}{2}-1 \right) < N \cdot \frac{2}{N} = 2. $$ Thus ${\mathcal J}_0$ is coercive and bounded from below on ${\mathcal S}$. \end{proof}
\begin{Lem}
${\mathcal J}_0$ satisfies the Palais-Smale condition on ${\mathcal S}$, i.e. any Palais-Smale sequence for ${\mathcal J}_0 |_{\mathcal S}$ has a convergent subsequence. \end{Lem}
\begin{proof}
Let $(u_n) \subset {\mathcal S}$ be a Palais-Smale sequence for ${\mathcal J}_0 |_{\mathcal S}$. Then Lemma \ref{coercive} implies that $(u_n)$ is bounded in $H_0^1 (\Omega)$. Hence we may assume that (up to a subsequence) \begin{align*} u_n \rightharpoonup u \quad & \mbox{in } H^1_0(\Omega), \\ u_n \to u \quad & \mbox{in } L^2(\Omega) \mbox{ and in } L^p (\Omega), \\ u_n \to u \quad & \mbox{a.e. on } \Omega. \end{align*} Moreover $$ {\mathcal J}_0'(u_n) + \lambda_n u_n \to 0 \quad \mbox{in } H^{-1}(\Omega) := (H^1_0 (\Omega))^* $$ for some $\lambda_n \in \mathbb{R}$. In particular $$
{\mathcal J}_0'(u_n)(u_n) + \lambda_n |u_n|_2^2 \to 0. $$ Note that $$
\lambda_n = - \frac{{\mathcal J}_0'(u_n)(u_n)}{|u_n|_2^2} + o(1)= - \frac{\|u_n\|^2 - \int_\Omega f(x,u_n)u_n \, dx}{|u_n|_2^2} + o(1). $$ Observe that, from (F1) $$
\left| \int_\Omega f(x,u_n)u_n \, dx \right| \lesssim 1 + |u_n|_p^p \lesssim 1. $$ Therefore $(\lambda_n) \subset \mathbb{R}$ is bounded, and (up to a subsequence) $\lambda_n \to \lambda_0$. Therefore, up to a subsequence, \begin{align*}
o(1) &= {\mathcal J}_0'(u_n)(u_n) + \lambda_n |u_n|_2^2 - {\mathcal J}_0'(u_n)(u) - \lambda_n \int_{\Omega} u_n u \, dx \\
&= {\mathcal J}_0'(u_n)(u_n) - {\mathcal J}_0'(u_n)(u) + \lambda_n \int_{\Omega} u_n (u_n-u) \, dx \\
&= \|u_n\|^2 - \langle u_n, u \rangle - \int_{\Omega} f(x,u_n) (u_n-u) \, dx + \lambda_n \int_\Omega u_n (u_n-u) \, dx. \end{align*}
It is clear that $\langle u_n, u\rangle \to \|u\|^2$ and that $$
\left| \lambda_n \int_\Omega u_n (u_n-u) \, dx \right| \lesssim |u_n|_2 |u_n-u|_2 \to 0. $$ Moreover, from (F1) \begin{align*}
\left| \int_\Omega f(x,u_n)(u_n-u) \, dx \right| \lesssim |u_n-u|_2 + |u_n|_p^{p-1} |u_n-u|_p \to 0. \end{align*}
Hence $\|u_n\| \to \|u\|$ and therefore $u_n \to u$ in $H^1_0(\Omega)$. \end{proof}
\begin{proof}[Proof of Theorem \ref{main:3}] From \cite[Theorem II.5.7]{Str} we obtain that ${\mathcal J}_0$ has at least $\hat{\gamma}({\mathcal S})$ critical points, where $$ \hat{\gamma} ({\mathcal S}) := \sup \{ \gamma(K) \ : \ K \subset {\mathcal S} \mbox{ - symmetric and compact} \} $$ and $\gamma$ denotes the Krasnoselskii's genus for symmetric and compact sets. We will show that $\hat{\gamma}({\mathcal S}) = +\infty$. Indeed, fix $k \in \mathbb{N}$. It is sufficient to construct a symmetric and compact set $K \subset {\mathcal S}$ with $\gamma(K) = k$. Choose functions $w_1, w_2, \ldots, w_k \in {\mathcal C}_0^\infty (\Omega) \cap {\mathcal S}$ with pairwise disjoint supports, namely $w_i w_j = 0$ for $i \neq j$. Now we set $$ K := \left\{ \sum_{i=1}^k \alpha_i w_i \in {\mathcal S} \ : \ \sum_{i=1}^k \alpha_i^2 = 1 \right\}. $$ It is clear that $K \subset {\mathcal S}$ is symmetric and compact. We will show that $\gamma(K) = k$. In what follows $S^{m-1}$ denotes the $(m-1)$-dimensional sphere in $\mathbb{R}^m$ of radius $1$ centered at the origin. Note that $h : K \rightarrow S^{k-1}$ given by $$ K \ni \sum_{i=1}^k \alpha_i w_i \mapsto h \left( \sum_{i=1}^k \alpha_i w_i \right) := (\alpha_1, \ldots, \alpha_k) \in S^{k-1} $$ is a homeomorphism, which is odd. Hence $\gamma(K) \leq k$. Suppose by contradiction that $\gamma(K) < k$. Then there is a continuous and odd function $\widetilde{h} : K \rightarrow S^{\gamma(K) -1}$. However, $\widetilde{h} \circ h^{-1} : S^{k-1} \rightarrow S^{\gamma(K)-1}$ is an odd, continuous map, which contradicts the Borsuk-Ulam theorem \cite[Proposition II.5.2]{Str}, \cite[Theorem D.17]{Willem}. Hence $\gamma(K) = k$. \end{proof}
\end{document} |
\begin{document}
\begin{abstract} We prove that every \{finitely generated residually finite\}-by-sofic group satisfies Kaplansky's direct and stable finiteness conjectures with respect to all noetherian rings.
We use this result to provide countably many new examples of finitely presented non-LEA groups, for which soficity is still undecided, satisfying these two conjectures. Deligne's famous example $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ of a non residually finite group is among our examples, along with the families of amalgamated free products $\text{SL}_n(\mathbb Z[1/p])\ast_{\mathbb F_r}\text{SL}_n(\mathbb Z[1/p])$ and HNN extensions $\text{SL}_n(\mathbb Z[1/p])\ast_{\mathbb F_r}$, where $p>2$ is a prime, $n\geq 3$ and $\mathbb F_r$ is a free group of rank $r$, for all $r\geq 2$. \end{abstract} \title{Groups satisfying Kaplansky's stable finiteness conjecture}
\section{Introduction} \noindent In \cite[pp. 122-123]{Kap} Irving Kaplansky posed what nowadays is known as \emph{Kaplansky's direct finiteness conjecture}:
\begin{KCfields} Given a field $\mathbb{K}$ and a group $G$, the group ring $\mathbb{K}[G]$ is directly finite. That is to say, if $x,y\in \mathbb{K}[G]$ are such that $xy=1$, then $yx=1$. \end{KCfields} One could look at the matrix rings $\mathrm{Mat}_{n\times n}(\mathbb{K}[G])$ and ask whether or not these rings are directly finite for all $n\in\mathbb N$. This is known as \emph{Kaplansky's stable finiteness conjecture}. Although it might look stronger than the direct finiteness conjecture, they are in fact equivalent \cite{Vir}. Hence in this paper, for the sake of simplicity, we restrict our arguments to direct finiteness.
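To see why direct finiteness is a genuine restriction, recall the standard example of one-sided shifts (folklore, included here only as an illustration): an arbitrary ring of endomorphisms of an infinite-dimensional space fails to be directly finite, and Kaplansky's conjecture asserts precisely that this phenomenon cannot occur inside a group ring over a field. With $V := \bigoplus_{n\in\mathbb N} \mathbb{K} e_n$, one takes

```latex
\begin{gather*}
x, y \in \operatorname{End}_{\mathbb{K}}(V), \qquad
y(e_n) := e_{n+1}, \qquad x(e_0) := 0, \quad x(e_{n+1}) := e_n, \\
xy = \operatorname{id}_V, \qquad
yx(e_0) = 0 \neq e_0, \quad\text{so } yx \neq \operatorname{id}_V.
\end{gather*}
```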
Kaplansky himself proved that, given a field $\mathbb{K}$ of characteristic zero and a group $G$, the group ring $\mathbb{K}[G]$ is directly finite. Since then progress has been made, but the conjecture remains unresolved. In \cite{AOmP}, Ara, O'Meara and Perera proved that $D[G]$ is directly finite whenever $G$ is a residually amenable group and $D$ is a division ring. Later Elek and Szab\'{o} generalized this result to the wider class of all sofic groups \cite[Corollary 4.7]{ElSz04}.
Sofic groups were introduced in 1999 by Gromov, in an attempt to solve Gottschalk's conjecture in topological dynamics \cite{Gro}. The class of sofic groups is far from being completely understood, and it still puzzles the experts. In particular, it is not yet known if all groups are sofic.
Soficity is stable under many group-theoretic operations \cite{CaLu,CSC,Pes08}. At the same time, it is still unclear how this notion behaves under taking group extensions: While it is known that a sofic-by-amenable group is again sofic \cite[Proposition 7.5.14]{CSC}, it is still an open problem whether or not finite-by-sofic, free-by-sofic or sofic-by-sofic groups, among others, are again sofic groups.
Here we use the standard notation of an $\mathcal{A}$-by-$\mathcal{B}$ group to denote a group $G$ with a normal subgroup $N\trianglelefteq G$ such that $N\in\mathcal{A}$ and $G/N\in\mathcal{B}$, where $\mathcal{A}$ and $\mathcal{B}$ are given classes of groups (e.g. $\mathcal{A}$ being the free groups and $\mathcal{B}$ being the sofic groups, in the case of a free-by-sofic group).
Since there is yet no known example of a group which fails to be sofic, the original Kaplansky's conjecture and the following variant are still open problems: \begin{KCdrings} Given a division ring $D$ and a group $G$, the group ring $D[G]$ is directly finite. \end{KCdrings}
A major recent breakthrough has been made by Virili \cite{Vir}. He proved that any crossed product $N\ast G$ is directly finite whenever $N$ is a left-noetherian ring (respectively: right-noetherian ring) and $G$ is a sofic group (see Section \ref{prof} for the definition and \cite{Pas,Vir} for more details about crossed products).
Group rings are basic examples of crossed products, hence we state another generalization of the original conjecture: \begin{KCnrings} Given a noetherian\footnote{The main concern of this work is groups; hence in what follows we focus on noetherian rings, that is, rings that are both left- and right-noetherian, rather than specifying if the ring is left- or right-noetherian. All our statements remain true when one restricts to left-noetherian, or right-noetherian, rings.} ring $N$ and a group $G$, the group ring $N[G]$ is directly finite. \end{KCnrings}
As a consequence of his general result on crossed products, Virili deduced that the group ring $N[G]$ of a \{polycyclic-by-finite\}-by-sofic group $G$ is directly finite with respect to all noetherian rings $N$ \cite[Corollary 5.4]{Vir}, and that the group ring $D[G]$ of a free-by-sofic group $G$ is directly finite with respect to all division rings $D$ \cite[Corollary 5.5]{Vir}. As mentioned above, the interest in these classes of groups arises because they are not known to be sofic.
The aim of this paper is to prove the following Theorem, which establishes Kaplansky's direct and stable finiteness conjectures for the group ring of many groups that are not known to be sofic.
\begin{theor}\label{Kap RF-by-sofic} Let $N$ be a noetherian ring and $G$ be a \{finitely generated residually finite\}-by-sofic group. Then $N[G]$ is directly finite and, equivalently, stably finite. \end{theor}
As a corollary, we partially extend \cite[Corollary 5.5]{Vir} from division rings to noetherian rings. \begin{corol} Let $N$ be a noetherian ring and $G$ be a \{finitely generated free\}-by-sofic group. Then $N[G]$ is directly finite and, equivalently, stably finite. \end{corol} Moreover, we construct countably many pairwise non-isomorphic finitely presented groups that satisfy both of these conjectures, and that are not known to be sofic. In particular, these groups are not locally embeddable into amenable groups, also called LEA groups (see Section \ref{sec.4} for definitions).
\begin{corol2} There exists an infinite family $\{G_i\}_{i\in\mathbb N}$ of pairwise non-isomorphic finitely presented non-LEA groups. These groups are not known to be sofic, and $D[G_i]$ is directly finite and, equivalently, stably finite, with respect to all division rings $D$. \end{corol2} The groups described in Corollary B are given by the HNN extensions $\text{SL}_n(\mathbb Z[1/p])\ast_{\mathbb F_r}$ and by the amalgamated free products $\text{SL}_n(\mathbb Z[1/p])\ast_{\mathbb F_r}\text{SL}_n(\mathbb Z[1/p])$. Here $n\geq 3$ is an integer, $p>2$ is a prime number and $\mathbb F_r$ is a free subgroup of $\text{SL}_n(\mathbb Z[1/p])$ of rank $r$, for all $r\geq2$. See Corollary~\ref{corKaNi} and Corollary~\ref{cor2} for the precise statements and the proofs.
The paper is organized as follows: In Section~\ref{sec.spacemarked} we define the space of marked groups, we recall some useful properties and we prove preliminary results that will lead to the proof of the Main Theorem. In Section~\ref{prof} we prove the Main Theorem and we give some corollaries. In Section~\ref{sec.4} we apply our result to countably many pairwise non-isomorphic groups, which are not yet known to be sofic, and for which we establish Kaplansky's direct finiteness and stable finiteness conjectures. Deligne's famous example $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ of a non residually finite group is among our examples.
\subsubsection*{Acknowledgments} The author is very grateful to his advisor, Goulnara Arzhantseva, for inspiring questions and suggestions. He wants to thank Simone Virili for sharing the text of his PhD Thesis, his work \cite{Vir} and for many constructive discussions over the topic. Thanks are also due to Nikolay Nikolov, for explaining the proof of \cite[Theorem 1]{KaNi}.
\section{The space of marked groups}\label{sec.spacemarked}
In this section, we briefly discuss the space of marked groups. For more details and properties, see~\cite{Ch,ChGu,Gri84}. We prove the following theorem:
\begin{theorem}\label{RF-by-sofic} Let $Q$ be a group and $G$ be a finitely generated \{finitely generated residually finite\}-by-$Q$ group. Then $G$ is the limit, in the space of marked groups, of finite-by-$Q$ groups. \end{theorem} We stress the following particular case:
\begin{corollary}\label{stressed.cor} Let $G$ be a finitely generated \{finitely generated residually finite\}-by-sofic group. Then $G$ is the limit, in the space of marked groups, of finite-by-sofic groups. \end{corollary}
A \emph{marked group} is a pair $(G,S)$, where $G$ is a finitely generated group and $S$ is a finite sequence of elements that generate $G$. If $\lvert S\rvert=n$ then $(G,S)$ is called an $n$-marked group. Two $n$-marked groups $(G,(s_1,\dots,s_n))$ and $(G',(s'_1,\dots,s'_n))$ are isomorphic if the bijection $\varphi(s_i):=s_i'$ extends to an isomorphism of groups $\varphi\colon G\to G'$. In particular, two marked groups with given generating sets of different size are never isomorphic as marked groups, although they might be isomorphic as abstract groups.
Let $\mathcal{G}_n$ denote the set of $n$-marked groups, up to isomorphism of marked groups. Then $\mathcal{G}_n$ corresponds bijectively to the set of normal subgroups of the free group $\mathbb F_n$ on $n$ free generators.
Let $(G,S),(G',S')\in\mathcal{G}_n$ and let $N$, $N'$ be the normal subgroups of $\mathbb F_n$ such that $\mathbb F_n/N\cong G$, $\mathbb F_n/N'\cong G'$. Let $B_{\mathbb F_n}(r)$ denote the ball of radius $r$ in $\mathbb F_n$ centered at the identity element. The function \begin{equation} v(N,N'):=\sup\{r\in\mathbb N\mid N\cap B_{\mathbb F_n}(r)=N'\cap B_{\mathbb F_n}(r)\} \end{equation} defines on $\mathcal{G}_n$ the ultrametric \begin{equation}\label{ultrametric} d\bigl((G,S),(G',S')\bigr):=2^{-v(N,N')}. \end{equation} An $n$-marked group $(G,S)$ can be viewed as an $(n+1)$-marked group by adding the trivial element $e_G$ to $S$. This defines an isometric embedding of $\mathcal{G}_n$ into $\mathcal{G}_{n+1}$.
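As a concrete illustration of this ultrametric (a classical example, not needed in the sequel), the cyclic marked groups $(\mathbb Z/n\mathbb Z,(\bar 1))$ converge to $(\mathbb Z,(1))$ in $\mathcal{G}_1$: identifying $\mathbb F_1\cong\mathbb Z$, the corresponding kernels are $N=\{0\}$ and $N_n=n\mathbb Z$, and the balls agree exactly up to radius $n-1$:

```latex
\begin{gather*}
N \cap B_{\mathbb F_1}(r) = N_n \cap B_{\mathbb F_1}(r) = \{0\}
  \quad\text{if and only if}\quad r < n, \\
\text{hence}\quad v(N,N_n) = n-1
  \quad\text{and}\quad
d\bigl((\mathbb Z,(1)),(\mathbb Z/n\mathbb Z,(\bar 1))\bigr)
  = 2^{-(n-1)} \longrightarrow 0.
\end{gather*}
```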
Let $\mathcal{G}:=\bigcup_{n\in\mathbb N}\mathcal{G}_n$ be the \emph{space of marked groups}. The ultrametrics of \eqref{ultrametric} can be extended to an ultrametric $d\colon \mathcal{G}\times\mathcal{G}\to \mathbb R_{\geq0}$, and $\bigl(\mathcal{G},d\bigr)$ is a compact totally disconnected ultrametric space~\cite{ChGu}. The following fact is well known.
\begin{lemma}\label{char.conv} Let $(G,S)$ and $\{(G_r,S_r)\}_{r\in\mathbb N}$ be marked groups in $\mathcal{G}_n$ and let $N, \{N_r\}_{r\in\mathbb N}$ be normal subgroups of $\mathbb F_n$ such that $\mathbb F_n/N\cong G$ and $\mathbb F_n/N_r\cong G_r$. The following are equivalent: \begin{enumerate}
\item the sequence $\{(G_r,S_r)\}_{r\in\mathbb N}$ converges to the point $(G,S)$ in the space $\bigl(\mathcal{G}_n,d\bigr)$;
\item for all $x\in N$ (respectively: for all $y\notin N$) there exists $\bar{r}$ such that $x\in N_r$ (respectively: $y\notin N_r$) for $r\geq\bar{r}$;
\item for all $R\in\mathbb N$ there exists $\bar{r}$ such that the Cayley graphs $\mathrm{Cay}(G,S)$ and $\mathrm{Cay}(G_r,S_r)$ have balls of radius $R$ isomorphic as labeled directed graphs, for all $r\geq\bar{r}$. \end{enumerate} \end{lemma}
The following easy lemma is used in the proof of Theorem~\ref{RF-by-sofic}.
\begin{lemma}\label{piccololemma} Let $K\unlhd N\unlhd G$ be a subnormal chain of groups, where $N$ is finitely generated, and $K$ has finite index in $N$.
Then there exists a normal subgroup $\widetilde{K}\unlhd G$, contained in $K$, such that $N/\widetilde{K}$ is finite. \begin{proof} Since $N$ is finitely generated, for any given positive integer $d\in\mathbb N$, $N$ has only finitely many subgroups of index $d$. Let $\widetilde{K}$ be the intersection of all subgroups $H\unlhd N$ with $[N:H]=[N:K]$; this is a finite intersection of finite index subgroups, and $K$ is one of them, so $\widetilde{K}\subseteq K$ and $\widetilde{K}$ has finite index in $N$. Moreover $\widetilde{K}$ is characteristic in $N$, since every automorphism of $N$ permutes the normal subgroups of index $[N:K]$, and hence it is normal in $G$.
\end{proof} \end{lemma} We are now ready to prove Theorem \ref{RF-by-sofic}.
\begin{proof}[\bf Proof of Theorem \ref{RF-by-sofic}] Let $G$ be a \{finitely generated residually finite\}-by-$Q$ group and suppose $G$ is generated by a finite set $S$. Let $N\unlhd G$ be a finitely generated residually finite subgroup such that $G/N\cong Q$.
As $N$ is residually finite, there exists a family $\{K_r\}_{r\in\mathbb N}$ of finite index normal subgroups of $N$ such that $\bigcap_{r\in\mathbb N}K_r=\{e\}$ and $K_{r+1}\subseteq K_r$ for all $r\in\mathbb N$. As $N$ is finitely generated, we can assume that $K_r\unlhd G$ for all $r\in\mathbb N$ by Lemma~\ref{piccololemma}.
Consider the family of finite-by-$Q$ groups $\{G/K_r\}_{r\in\mathbb N}$. For every $r$, let $S_r$ be the image of $S$ in $G/K_r$ under the canonical projection $\lambda_r\colon G\twoheadrightarrow G/K_r$. We prove that the sequence $\{(G/K_r,S_r)\}_{r\in\mathbb N}$ converges to $(G,S)$ in $\mathcal{G}_{\lvert S\rvert}$ using the second condition of Lemma~\ref{char.conv}.
Let $\pi\colon\mathbb F\twoheadrightarrow G$ be the canonical surjective homomorphism from the finitely generated free group $\mathbb F$ on $\lvert S\rvert$ free generators, and $\pi_r\colon\mathbb F\twoheadrightarrow G/K_r$. Then $\pi_r=\lambda_r\circ\pi$: \begin{equation*} \xymatrix{\mathbb F\ar[rr]^{\pi}\ar[rrd]_{\pi_r}&&G\ar[d]^{\lambda_r}\\ &&G/K_r } \end{equation*} Set $\Lambda:=\ker\pi$ and $\Lambda_r:=\ker\pi_r$. By definition, $\Lambda\leq\Lambda_r$ for every $r$, so, to check that the second condition of Lemma~\ref{char.conv} is satisfied, we have only to prove that for all $y\notin\Lambda$ there exists $\bar{r}$ such that $y\notin\Lambda_r$, for $r\geq \bar{r}$. Let $y\notin \Lambda$ and $\pi(y)=:g\in G\setminus\{e_G\}$. Note that if $g\notin N$ then $g\notin K_r$ for all $r$, as $K_r\subseteq N$. Thus, for such $g\notin N$ we have \begin{equation} \pi_r(y)=\lambda_r(\pi(y))=\lambda_r(g)\neq e_{G/K_r}\qquad \forall r\in\mathbb N, \end{equation} that is to say, $y\notin \Lambda_r$ for all $r\in\mathbb N$.
If $g\in N\setminus\{e_G\}$, then there exists $\bar{r}$ such that $g\notin K_r$ for all $r\geq \bar{r}$, because $\bigcap_{r\in\mathbb N}K_r=\{e\}$ and because this family of normal subgroups is totally ordered by inclusion. In particular, \begin{equation} \pi_r(y)=\lambda_r(g)\neq e_{G/K_r}\qquad \forall r\geq \bar{r}. \end{equation} This implies that $y\notin\Lambda_r$ for $r\geq\bar{r}$, and the proof is completed.
\end{proof}
\section{Proof of the Main Theorem}\label{prof} We first show that the assertions of Kaplansky's direct finiteness and stable finiteness conjectures are preserved under taking limits in the space of marked groups. \begin{proposition}\label{limit} Let $N$ be a noetherian ring and $(G,S)$ be the limit, in the space of marked groups, of the sequence $\{(G_r,S_r)\}_{r\in\mathbb N}$. If $N[G_r]$ is directly finite (respectively: stably finite) for all $r\in\mathbb N$ then so is $N[G]$.
\begin{proof} Suppose first that $N[G_r]$ is directly finite for all $r\in\mathbb N$, and consider two non-trivial elements $x=\sum_{g\in G}k_gg$ and $y=\sum_{g\in G}h_gg\in N[G]$ such that $xy=1$. We want to prove that $yx=1$.
Let $yx=\sum_{g\in G}l_gg$ and consider $$m:=\max\{\lVert g\rVert_S\mid k_g\neq 0\text{ or }h_g\neq 0\text{ or }l_g\neq 0\},$$ where $\lVert -\rVert_S$ denotes the norm induced by the word metric on $G$ given by the finite generating set $S$. That is to say: $$\lVert g\rVert_S:=\min\{k\mid g=s_1\dots s_k,\quad s_i\in S\cup S^{-1}\}.$$ Since the sequence $\{(G_r,S_r)\}_{r\in\mathbb N}$ converges to $(G,S)$, by Lemma~\ref{char.conv} there exists $\bar{r}$ such that, for all $r\geq \bar{r}$, $\mathrm{Cay}(G,S)$ and $\mathrm{Cay}(G_r,S_r)$ have balls of radius $2m$ isomorphic as labeled directed graphs. Within these balls, the products of the group elements appearing in the supports of $x$ and $y$ are the same in $G$ as in $G_r$, so $x$ and $y$ may be read as elements of $N[G_r]$ satisfying $xy=1$. By the direct finiteness of $N[G_r]$ we get $yx=1$ in $N[G_r]$ and, reading back through the same ball isomorphism, $yx=1$ in $N[G]$.
The same arguments work when $N[G_r]$ is stably finite for all $r\in\mathbb N$.
\end{proof} \end{proposition}
\begin{proof}[\bf Proof of the Main Theorem] First we suppose that the group in question is finitely generated, so let $G$ be a finitely generated \{finitely generated residually finite\}-by-sofic group. By Corollary~\ref{stressed.cor}, $G$ is the limit of finite-by-sofic groups, which satisfy Kaplansky's direct finiteness conjecture with respect to all noetherian rings \cite[Corollary 5.4]{Vir}. Hence, by Proposition~\ref{limit}, $G$ satisfies the conjecture with respect to all noetherian rings.
If the group $G$ is not finitely generated, consider a finitely generated residually finite normal subgroup $K$ such that $G/K$ is sofic. Then $G$ is the directed union of its finitely generated subgroups containing such $K$: \begin{equation}\label{dir.union} G=\bigcup\{H\mid K\leq H\leq G\text{ and }H\text{ is finitely generated}\}. \end{equation} These are finitely generated \{finitely generated residually finite\}-by-sofic groups, and hence satisfy Kaplansky's direct finiteness conjecture by the first part of the proof.
Fix a noetherian ring $N$ and consider two elements $x,y\in N[G]$ such that $xy=1$. The supports of $x$, $y$ and $yx$ are finite, hence they sit in some finitely generated subgroup $H\leq G$ appearing in the directed union in \eqref{dir.union}.
The group ring $N[H]$ is directly finite by the first part of this proof, so $yx=1$ in $N[H]$. This implies that $yx=1$ in $N[G]$. Thus, $G$ satisfies the conjecture. \end{proof}
There is a variant of Lemma~\ref{piccololemma} in the case when the group $N/K$ is solvable: if $K\unlhd N\unlhd G$ is a subnormal chain and $N/K$ is solvable, then there exists a normal subgroup $\widetilde{K}\unlhd G$, contained in $K$, such that $N/\widetilde{K}$ is solvable as well.
To the author's knowledge, the following are open questions: \begin{question}\label{question1} Let $N$ be a noetherian ring and $G$ be a solvable-by-sofic group. Is $N[G]$ directly finite? \end{question} \begin{question} Let $N$ be a noetherian ring and $G$ be a solvable group. Does there exist a noetherian ring $N'$ such that $N[G]$ embeds into $N'$? \end{question} An affirmative answer to the latter question implies an affirmative answer to Question \ref{question1}. If Question \ref{question1} has an affirmative answer, then the argument in the proof of Theorem~\ref{RF-by-sofic} implies that a finitely generated \{residually solvable\}-by-sofic group $G$ is the limit in $\mathcal{G}$ of solvable-by-sofic groups, and hence that the group ring of a \{residually solvable\}-by-sofic group over a noetherian ring is directly finite.
We recall now the definition and basic facts on crossed products. They are useful in the applications of our Main Theorem.
Given a ring $R$ and a group $G$, a \emph{crossed product} $R\ast G$ of $G$ over $R$ is a ring constructed as follows. Assign uniquely to every $g\in G$ a symbol $\bar{g}$, and let $\bar{G}$ be the collection of these symbols. As a set, \begin{equation*}
R\ast G:=\Bigl\{\sum_{g\in G}r_g \bar{g}\mid r_g\in R,\ r_g=0\text{ for all but finitely many }g\in G \Bigr\}. \end{equation*} The sum is defined component-wise, \begin{equation*} \Bigl(\sum_{g\in G}r_g\bar{g}\Bigr)+ \Bigl(\sum_{g\in G}s_g\bar{g}\Bigr):=\sum_{g\in G}(r_g+s_g)\bar{g}. \end{equation*} The product in $R\ast G$ is specified in terms of two maps \begin{equation*}
\tau\colon G\times G\to U(R),\qquad \sigma\colon G\to \mathrm{Aut}(R), \end{equation*} where $U(R)$ is the group of units of $R$ and $\mathrm{Aut}(R)$ is the group of ring automorphisms of $R$. Let $r^{\sigma(g)}$ denote the result of the action of $\sigma(g)$ on $r$. Then, for all $r\in R$ and $g,g_1,g_2,g_3\in G$, the maps $\sigma$ and $\tau$ satisfy \begin{equation*}
\sigma(e)=1,\qquad\tau(e,g)=\tau(g,e)=1, \end{equation*} and \begin{equation*}
\tau(g_1,g_2)\tau(g_1g_2,g_3)=\tau(g_2,g_3)^{\sigma(g_1)}\tau(g_1,g_2g_3),\qquad
r^{\sigma(g_2)\sigma(g_1)}=\tau(g_1,g_2)r^{\sigma(g_1g_2)}\tau(g_1,g_2)^{-1}. \end{equation*} These conditions guarantee that the product \begin{equation*} \Bigl(\sum_{g\in G}r_g\bar{g}\Bigr)\cdot \Bigl(\sum_{g\in G}s_g\bar{g}\Bigr):=\sum_{g\in G}\Bigl(\sum_{h_1h_2=g} r_{h_1}s_{h_2}^{\sigma(h_1)}\tau(h_1,h_2)\Bigr)\bar{g} \end{equation*} is associative.
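As a concrete sanity check of the product formula in its simplest specialisation (trivial $\sigma$ and $\tau$, i.e.\ the group ring), the following Python sketch implements the convolution product in $\mathbb{Z}[\mathbb{Z}/3\mathbb{Z}]$ and verifies associativity on a few elements; the dictionary encoding of elements is purely illustrative.

```python
# Sanity check of the crossed-product multiplication in the simplest case:
# sigma and tau trivial, i.e. the group ring Z[Z/3Z].  Elements are dicts
# mapping group elements 0,1,2 (written additively) to integer coefficients;
# the product is the convolution (r*s)_g = sum over h1+h2=g of r_{h1} s_{h2}.

def gr_mul(r, s, n=3):
    """Multiply two elements of Z[Z/nZ] (trivial sigma and tau)."""
    out = {}
    for h1, a in r.items():
        for h2, b in s.items():
            g = (h1 + h2) % n
            out[g] = out.get(g, 0) + a * b
    return {g: c for g, c in out.items() if c != 0}

r = {0: 1, 1: 2}        # 1 + 2*g
s = {1: 3, 2: -1}       # 3*g - g^2
t = {0: -2, 2: 5}       # -2 + 5*g^2

# associativity (r*s)*t == r*(s*t), and {0: 1} acts as the identity
assert gr_mul(gr_mul(r, s), t) == gr_mul(r, gr_mul(s, t))
assert gr_mul(r, {0: 1}) == r
```

The same dictionary representation extends to a nontrivial $\sigma$ or $\tau$ by twisting the coefficient $b$ and inserting the unit $\tau(h_1,h_2)$ in the inner loop.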
Certain crossed products have their own specific name. If the maps $\sigma$ and $\tau$ are trivial, the crossed product $R\ast G$ is the group ring $R[G]$. If $\sigma$ is trivial, then $R\ast G=R^t[G]$ is a \emph{twisted group ring}, while if $\tau$ is trivial then $R\ast G=RG$ is a \emph{skew group ring}.
Given a normal subgroup $N\unlhd G$ and a fixed crossed product $R\ast G$, we have \begin{equation}\label{crpro1} R\ast G=\bigl( R\ast N\bigr)\ast G/N, \end{equation} where the latter is some crossed product of the group $G/N$ over the ring $R\ast N$, and $R\ast N$ is the subring of $R\ast G$ induced by the subgroup $N$ \cite[Lemma 1.3]{Pas} (that is, the maps $\sigma$ and $\tau$ associated to the crossed product $R\ast N$ are the restrictions of the ones associated to $R\ast G$). In particular, \begin{equation}\label{crpro2} R[G]=R[N]\ast G/N. \end{equation}
\noindent We now state some interesting corollaries of our main result.
There exist one-relator groups which are not residually finite~\cite{BaSo62}, or not even residually solvable~\cite{Ba69}. Whether or not all one-relator groups are sofic is a well-known open problem.
Wise recently proved that one-relator groups with torsion are residually finite~\cite{Wis}, answering a longstanding conjecture of Baumslag~\cite{Ba67}. Combining this deep result of Wise with our Main Theorem, we obtain:
\begin{corollary}\label{1-by-sofic} Let $D$ be a division ring and $G$ be a \{finitely generated one-relator\}-by-sofic group. Then $D[G]$ is directly finite and, equivalently, stably finite. \begin{proof} Let $N\unlhd G$ be a normal subgroup of $G$ such that $G/N$ is sofic and $N$ is a finitely generated one-relator group. If $N$ is torsion-free, then its group ring $D[N]$ embeds into a division ring $D'$ \cite{LeLe}. By \eqref{crpro2} we have that $D[G]=D[N]\ast G/N$. Hence $D[G]$ embeds into $D'\ast G/N$, which is directly finite \cite[Corollary 5.4]{Vir}. Thus also $D[G]$ is directly finite.
If $N$ has torsion, then it is residually finite \cite{Wis}. Thus $G$ satisfies the hypotheses of the Main Theorem, and $D[G]$ is directly finite. \end{proof} \end{corollary}
From Corollary \ref{1-by-sofic}, in the particular case when the sofic group is trivial, we recover the fact that the group ring of a finitely generated one-relator group is directly and stably finite.
\begin{corollary} Let $D$ be a division ring and $G$ be a finitely generated one-relator group, then $D[G]$ is directly finite and, equivalently, stably finite. \end{corollary}
Finitely generated right-angled Artin groups are known to be residually finite. We deduce the following: \begin{corollary} Let $N$ be a noetherian ring and $G$ be a \{finitely generated right-angled Artin group\}-by-sofic group. Then $N[G]$ is directly finite and, equivalently, stably finite. \end{corollary}
\begin{remark} Here is an alternative way of proving the Main Theorem, which was suggested by Simone Virili after the first version of this paper was written. In~\cite{Mon} it is observed that, given a noetherian ring $N$ and $$\mathcal{C}:=\{\text{groups }G\text{ such that }N[G]\text{ is directly finite}\},$$ if a group $G$ is fully residually $\mathcal{C}$ then $G\in\mathcal{C}$. One then argues that a \{finitely generated residually finite\}-by-sofic group is residually \{finite-by-sofic\}, which is equivalent to being fully residually \{finite-by-sofic\}. Hence the Main Theorem follows.
\end{remark}
\section{Examples}\label{sec.4}
We now apply our Main Theorem to some concrete groups, whose (non-)soficity still intrigues the experts. We conclude that they satisfy Kaplansky's direct and stable finiteness conjectures. Moreover, we provide countably many new explicit presentations of groups satisfying these two conjectures. These groups are not Locally Embeddable into Amenable groups (LEA for short), and it is not known whether or not they are sofic.
A finitely generated LEA group is the limit in $\mathcal{G}$ of amenable groups. In particular, LEA groups are sofic. There exist examples of sofic groups which are not LEA, and the class of LEA groups is the biggest known class of groups strictly contained in the class of sofic groups.
In what follows, we use the fact that a finitely presented LEA group is residually amenable~\cite[Proposition 7.3.8]{CSC}. We refer to \cite[\S 7.3]{CSC} for more information about LEA groups.
\subsection{Deligne's group} A famous example considered by Deligne~\cite{De78} is the following. Let $n\geq 2$ be an integer and $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ be the preimage of the symplectic group $\text{Sp}_{2n}(\mathbb Z)$ in the universal cover $\widetilde{\text{Sp}_{2n}(\mathbb R)}$ of $\text{Sp}_{2n}(\mathbb R)$. It is known~\cite{De78} that $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ is given by the following central extension \begin{equation}\label{central.ext} \{e\}\longrightarrow \mathbb Z\longrightarrow \widetilde{\text{Sp}_{2n}(\mathbb Z)}\longrightarrow \text{Sp}_{2n}(\mathbb Z)\longrightarrow\{e\}. \end{equation} The group $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ is finitely presented as it is an extension of two finitely presented groups. Moreover, the group is not residually finite~\cite{De78} and it satisfies Kazhdan's property (T)~\cite[Example 1.7.13 (iii)]{BeHaVa}. This immediately implies that it is not an LEA group.
Our Main Theorem implies that $N[\widetilde{\text{Sp}_{2n}(\mathbb Z)}]$ is directly finite for all noetherian rings $N$. Indeed, $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ is \{finitely generated free\}-by-sofic, as shown by~\eqref{central.ext}.
\subsection{Finitely presented amalgamated products and HNN extensions} From now on, if $\Gamma$ is a group then $\bar{\Gamma}$ denotes an isomorphic copy of $\Gamma$. If $\Gamma=\langle X\mid R\rangle$ is a presentation of the group $\Gamma$, let $\bar{X}$ and $\bar{R}$ denote the same generators and relators in the isomorphic copy $\bar\Gamma$.
Let $p>2$ be a prime number and $n\geq 3$. The group $\text{SL}_n(\mathbb Z[1/p])$ is finitely presented~\cite[Theorem 4.3.21]{HO} and has Kazhdan's property (T)~\cite{BeHaVa}. Moreover, it satisfies the \emph{congruence subgroup property}~\cite{BaMiSe}. This means that every finite index subgroup $H\leq \text{SL}_n(\mathbb Z[1/p])$ contains the kernel of the natural projection \begin{equation}\label{projection}
\pi_q\colon\text{SL}_n(\mathbb Z[1/p]) \twoheadrightarrow \text{SL}_n(\mathbb Z/q\mathbb Z), \end{equation} for some $q$ coprime with $p$. In particular, if the finite index subgroup $H$ is normal in $\text{SL}_n(\mathbb Z[1/p])$, it follows that \begin{equation}\label{eq.quotients} \frac{\text{SL}_n(\mathbb Z[1/p])}{H}\cong\text{SL}_n(\mathbb Z/q\mathbb Z) \qquad\text{or}\qquad\frac{\text{SL}_n(\mathbb Z[1/p])}{H}\cong \text{PSL}_n(\mathbb Z/q\mathbb Z), \end{equation} for exactly one $q$ coprime with $p$.
In what follows, given an element $x\in\text{SL}_n(\mathbb Z[1/p])$ and a projection $\pi$ from $\text{SL}_n(\mathbb Z[1/p])$ onto $\text{SL}_n(\mathbb Z/q\mathbb Z)$ or $\text{PSL}_n(\mathbb Z/q\mathbb Z)$, we denote the order of $\pi(x)$ by $o_x$.
The proof of the following theorem is adapted from \cite[Theorem 1]{KaNi}, where an analogous fact is proved, but in the case when the amalgamated subgroup is infinite cyclic. In that case, the resulting group is known to be sofic. In contrast to \cite{KaNi}, our aim is to produce non-LEA groups that are not known to be sofic and that satisfy Kaplansky's direct and stable finiteness conjectures. \begin{theorem}\label{theoKaNi} Let $p>2$ be a prime number, $n\geq 3$, let $\Gamma:=\text{SL}_n(\mathbb Z[1/p])=\langle X\mid R\rangle$. Let $\langle a,b\rangle =F\leq\Gamma$ be the subgroup generated by the matrices \begin{equation*} a=\begin{pmatrix}
1&2&0\\ 0&1&0\\ 0&0& \mathrm{I}_{n-2}
\end{pmatrix},
\qquad b=\begin{pmatrix}
1&0&0\\ 2&1&0\\ 0&0& \mathrm{I}_{n-2}
\end{pmatrix}, \end{equation*} where $\mathrm{I}_{n-2}$ is the identity matrix of dimension $n-2$. Then the group $$G:=\Gamma\ast_F \Gamma=\langle X,\bar{X}\mid R,\bar{R},a=\bar{a},b=\bar{b}\rangle$$ is not LEA. \begin{proof} The group $G$ is finitely presented. Hence it is sufficient to prove that it is not residually amenable. Let $$ x=\begin{pmatrix}
1&\frac{2}{p}&0\\ 0&1&0\\ 0&0&\mathrm{I}_{n-2}
\end{pmatrix} $$ and consider the element $g=[x,\bar{x}]\in G$. Since $x\notin F$, using normal forms for the elements of the amalgamated free product \cite[I.11]{LS}, it follows that $g\neq e_G$.
Let $\pi\colon G\twoheadrightarrow A$ be a surjective homomorphism with $A$ amenable. We claim that $\pi(g)=e_A$. Indeed, consider the restriction $\pi\restriction_\Gamma\colon\Gamma\to \pi(\Gamma)$. The group $\pi(\Gamma)\leq A$ is amenable and moreover it is a quotient of $\Gamma$, which is a group with Kazhdan's property $(T)$. Hence $\pi(\Gamma)$ is finite and, in particular, $\pi(x)$ has finite order $o_x$.
The element $x$ is unipotent, so $\pi(x)$ is unipotent too. As the group $\Gamma$ satisfies the congruence subgroup property, it follows that $\pi(x)$ is an element of some $\text{SL}_n(\mathbb Z/q\mathbb Z)$ or $\text{PSL}_n(\mathbb Z/q\mathbb Z)$, for $q$ coprime with $p$. As $\pi(x)$ is unipotent, the order $o_x$ divides a power of $q$. Moreover $\gcd(p,q)=1$, so $\gcd(p,o_x)=1$.
As $x^p=a$, we have that $\langle \pi(a)\rangle\leq\langle \pi(x)\rangle$ and that $$o_a=o_{x^p}=\frac{o_x}{\gcd(p, o_x)}=o_x.$$ This implies that the two finite groups $\langle \pi(a)\rangle$ and $\langle \pi(x)\rangle$ have the same cardinality, and so $\langle \pi(a)\rangle=\langle \pi(x)\rangle$.
The same argument applies to the elements $\bar{x}$ and $\bar{a}$, so $\langle \pi(\bar{a})\rangle=\langle \pi(\bar{x})\rangle$. As in the group $G$ we have $a=\bar{a}$, it follows that $\langle\pi(x)\rangle=\langle\pi(\bar{x})\rangle$, and so $\pi(g)=[\pi(x),\pi(\bar{x})]=e_A$. That is, the element $g$ is mapped to the trivial element in all amenable quotients of $G$. Thus, $G$ is not residually amenable. \end{proof} \end{theorem}
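The order computation in the proof above can be illustrated numerically. The following Python sketch (with illustrative values $p=5$, $q=9$, not taken from the paper) reduces $x$ and $a=x^p$ modulo $q$ and checks that they have the same order, in line with $o_a=o_x/\gcd(p,o_x)=o_x$.

```python
# Reduce x and a = x^p modulo q = 9 (coprime to p = 5) inside SL_3(Z/9Z).
# The entry 2/p of x becomes 2 * 5^{-1} = 4 (mod 9), since 5 * 2 = 1 (mod 9).
from math import gcd

q, p = 9, 5

def mat_mul(A, B, m):
    """Product of two 3x3 matrices with entries reduced modulo m."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % m
                       for j in range(3)) for i in range(3))

def order(A, m):
    """Order of A in GL_3(Z/mZ), by repeated multiplication."""
    I = tuple(tuple(int(i == j) for j in range(3)) for i in range(3))
    B, k = A, 1
    while B != I:
        B, k = mat_mul(B, A, m), k + 1
    return k

inv_p = pow(p, -1, q)                       # 5^{-1} = 2 mod 9
x = ((1, 2 * inv_p % q, 0), (0, 1, 0), (0, 0, 1))
a = x
for _ in range(p - 1):                      # a = x^p
    a = mat_mul(a, x, q)

o_x, o_a = order(x, q), order(a, q)
assert gcd(p, o_x) == 1 and o_a == o_x == 9
```

Since $\gcd(p,o_x)=1$, raising to the $p$-th power does not drop the order, exactly as used in the proof.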
In the next corollary we construct countably many pairwise non-isomorphic groups, not known to be sofic, satisfying Kaplansky's direct and stable finiteness conjectures.
\begin{corollary}\label{corKaNi} With the notation of the previous theorem, for $r\geq 2$ let $F_r\leq\Gamma$ be generated by $\{b^iab^{-i}\mid i=0,\dots,r-1\}$. Then the groups \begin{equation*}
G_r:=\Gamma\ast_{F_r}\Gamma=\langle X,\bar{X}\mid R,\bar R,a=\bar{a},\dots ,b^{r-1}ab^{-r+1}=\bar b^{r-1}\bar a\bar b^{-r+1}\rangle \end{equation*} are pairwise non-isomorphic and are not LEA. Moreover they are free-by-sofic, and hence $D[G_r]$ is directly finite (equivalently: stably finite) with respect to all division rings $D$, for all $r\geq2$. \begin{proof} The subgroup $F_r$ is a free group of rank $r$. The argument of the proof of Theorem~\ref{theoKaNi} shows that the element $g=[x,\bar{x}]$ is mapped to the trivial element in all amenable quotients of $G_r$. Hence $G_r$ is not LEA.
By the universal property of amalgamated free products, we have the following commuting diagram $$\xymatrix{ \Gamma \,\,\ar@{^{(}->}[r]\ar[ddr]_{\text{id}} &G_r\ar@{.>}^{\exists ! \varphi}[dd] & \,\,\bar\Gamma\ar@{_{(}->}[l]\ar[ddl]^{\text{id}}\\ &&\\ &\Gamma& }$$ where $\varphi\colon G_r\twoheadrightarrow \Gamma$ is a surjective homomorphism. Let $K=\ker\varphi$, then $K\cap \Gamma=K\cap \bar{\Gamma}=\{e\}$. This implies that $K$ acts freely on the Bass-Serre tree associated to the amalgamated free product $G_r$, that is to say, $K$ is a free group.
Hence $G_r$ is free-by-sofic and, given a division ring $D$, the group ring $D[G_r]$ is directly finite by our Main Theorem. Note that $K$ is not finitely generated, so we cannot conclude that $N[G_r]$ is directly finite for noetherian rings~$N$.
It remains to prove that the family $\{G_r\}_{r\geq 2}$ consists of pairwise non-isomorphic finitely presented groups. To this aim, we recall the notion of deficiency of a finitely presented group.
The \emph{deficiency} $\mathrm{def}(G)$ of a finitely presented group $G$ is defined as $\max\{\lvert X\rvert-\lvert R\rvert \}$ over all the finite presentations $G=\langle X\mid R\rangle$. It is invariant under isomorphism, and we have $$\mathrm{def}(G_r)=2\cdot \mathrm{def}(\text{SL}_n(\mathbb Z[1/p]))-r.$$ Since these values are pairwise distinct, the groups $\{G_r\}_{r\geq 2}$ are pairwise non-isomorphic.
\end{proof} \end{corollary}
Note that the groups $G_r$ do not have property (T). This follows, for instance, from \cite[Remark 2.3.5 and Theorem 2.3.6]{BeHaVa}.
Our result extends further to HNN extensions. In \cite{Ber}, we have characterized the residual amenability of particular HNN extensions $A\ast_H$ and amalgamated free products $A\ast_H A$, in terms of the amalgamated subgroup $H$ being closed in the proamenable topology of $A$ \cite[Corollaries 1.8 and 1.10]{Ber}. Using these results and Corollary~\ref{corKaNi}, we obtain:
\begin{corollary}\label{cor2} Let $p>2$ be a prime number, $n\geq 3$ and $\text{SL}_n(\mathbb Z[1/p])=\langle X\mid R\rangle$. For $r\geq 2$ the groups \begin{equation*} \Gamma_r:=\langle X, t\mid R,\, tat^{-1}=a,\, t(bab^{-1})t^{-1}=bab^{-1},\dots,\,t(b^{r-1}ab^{-(r-1)})t^{-1}=b^{r-1}ab^{-(r-1)}\rangle \end{equation*} are pairwise non-isomorphic and are not LEA. Moreover they are free-by-sofic, and hence $D[\Gamma_r]$ is directly finite (equivalently: stably finite) with respect to all division rings $D$, for all $r\geq 2$. \end{corollary}
We end with the following question: \begin{question} Are the groups $G_r$ and $\Gamma_r$ sofic/hyperlinear? \end{question}
\end{document}
\begin{document}
\title[On torsion in linearized Legendrian contact homology]{On torsion in linearized Legendrian contact homology}
\author{Roman Golovko}
\begin{abstract} In this short note we discuss certain examples of Legendrian submanifolds whose linearized Legendrian contact (co)homology groups over the integers have non-vanishing algebraic torsion. More precisely, for a given arbitrary finitely generated abelian group $G$ and a positive integer $i$ we construct examples of Legendrian submanifolds of the standard contact vector space whose $i$-th linearized Legendrian contact (co)homology over $\mathbb Z$, computed with respect to a certain augmentation, is isomorphic to $G$. \end{abstract}
\address{Faculty of Mathematics and Physics, Charles University, Sokolovsk\'{a} 49/83, 18675 Praha 8, Czech Republic} \email{golovko@karlin.mff.cuni.cz} \date{\today} \thanks{} \subjclass[2010]{Primary 53D12; Secondary 53D42}
\keywords{torsion, Legendrian contact homology}
\maketitle
\section{Introduction and main result} The Legendrian contact homology of a closed Legendrian submanifold $\Lambda$ of the standard contact vector space $(\mathbb{R}^{2n+1},\xi_{st}=\ker \alpha_{st})$, where $\alpha_{st}=dz-ydx$, is a modern Legendrian invariant defined by Eliashberg--Givental--Hofer \cite{EGHSFT} and Chekanov \cite{ChekanovDGAL}, and developed by Ekholm--Etnyre--Sullivan \cite{EkhEtnSul2005}. It is the homology of the Legendrian contact homology (LCH) differential graded algebra, often called the Chekanov--Eliashberg differential graded algebra. The Chekanov--Eliashberg DGA is a unital noncommutative differential graded algebra freely generated by the generically finite set of integral curves of the Reeb vector field $\partial_z$ that start and end on $\Lambda$, called Reeb chords. Legendrian contact homology is often defined over $\mathbb{Z}_2$, but if $\Lambda$ is spin it can also be defined over other fields, over $\mathbb{Z}$ \cite{OrientLegContHom, KarlssonLCHor}, and even over more general coefficient rings whose structure involves certain topological information about $\Lambda$, such as $\mathbb{Z}_2[H_1(\Lambda;\mathbb{Z})]$ or $\mathbb{Z}[H_1(\Lambda;\mathbb{Z})]$ \cite{EkhEtnSul2005, KarlssonLCHor}.
The Legendrian contact homology DGA is not of finite rank, even in a fixed degree; the same holds in homology: the graded pieces of Legendrian contact homology are often infinite dimensional and difficult to compute. In order to deal with this issue, Chekanov \cite{ChekanovDGAL} proposed to use an augmentation of the DGA to produce a generically finite-dimensional linear complex, whose homology is called linearized Legendrian contact homology.
An exact Lagrangian filling $L$ of $\Lambda$ in the symplectization $(\mathbb{R}\times \mathbb{R}^{2n+1}, d(e^t \alpha_{st}))$ with vanishing Maslov number induces an augmentation of the Chekanov--Eliashberg algebra, i.e.\ a unital DGA homomorphism $\varepsilon: \mathcal A(\Lambda)\to (\mathbb{Z}_2,0)$, see \cite{EkhomHonaKalmancobordisms}. If, in addition, $L$ is equipped with a spin structure extending the given spin structure on $\Lambda$, then one also has an augmentation $\varepsilon: \mathcal A(\Lambda)\to (\mathbb{Z},0)$, see \cite{EkhomHonaKalmancobordisms, KarlssonOrc}.
Most of the computations of linearized Legendrian contact homology groups have been done for the Chekanov-Eliashberg algebras with $\mathbb{Z}_2$-coefficients. One can ask whether for integral coefficients one can get some elements of finite order in the linearized Legendrian contact (co)homology, i.e. if one can get a non-trivial algebraic torsion in linearized Legendrian contact (co)homology. In particular, one can ask whether an arbitrary finitely generated abelian group can be realized as a linearized Legendrian contact (co)homology of some Legendrian.
We provide the following answer to this question in high dimensions: \begin{theorem} \label{topresnongaug} Given a finitely generated abelian group $G$ and $i\in \mathbb{N}$, there is a Legendrian submanifold $\Lambda$ in $\mathbb R^{2i+7}$ of Maslov number $0$ such that the Chekanov--Eliashberg algebra of $\Lambda$ admits an augmentation $\varepsilon: \mathcal A(\Lambda)\to (\mathbb{Z}, 0)$ with $LCH_{\varepsilon}^i(\Lambda;\mathbb{Z})$ isomorphic to $G$. \end{theorem}
\section{Proof of Theorem \ref{topresnongaug}}
We start with the following construction of a spin manifold whose first homology with $\mathbb{Z}$-coefficients is isomorphic to $G$. \subsection{Construction of a spin manifold} \label{spincontrolledh1} Given a finitely generated abelian group $G$, we can write it as $$G\simeq \mathbb{Z}^k\times \mathbb{Z}_{p_1}\times\dots \times \mathbb{Z}_{p_s},$$ where $k$ is a non-negative integer and $p_1,\dots, p_s$ are powers of (not necessarily distinct) prime numbers. Then we construct a closed, oriented, connected $3$-manifold $N$ such that $H_1(N;\mathbb Z)\simeq G$. We obtain $N$ from $k$ copies of $S^1\times S^2$ and the collection of lens spaces $L(p_1,q_1),\dots,L(p_s,q_s)$. More precisely, since \begin{align} H_1(S^1\times S^2;\mathbb Z)\simeq \mathbb{Z}\quad \mbox{and}\quad H_1(L(p_i,q_i);\mathbb{Z})\simeq \mathbb{Z}_{p_i}, \end{align} we define $$N= \#^k (S^1\times S^2)\# L(p_1,q_1)\# \dots \#L(p_s,q_s).$$
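For instance (a worked example; the lens-space parameters $q_i$ may be chosen arbitrarily, say $q_i=1$), for $G\simeq\mathbb{Z}^2\times\mathbb{Z}_4\times\mathbb{Z}_3$ the construction gives

```latex
N=(S^1\times S^2)\,\#\,(S^1\times S^2)\,\#\,L(4,1)\,\#\,L(3,1),
\qquad
H_1(N;\mathbb{Z})\simeq\mathbb{Z}^2\times\mathbb{Z}_4\times\mathbb{Z}_3\simeq G.
```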
Since every closed orientable $3$-manifold is spin, $N$ is spin. We observe that \begin{align*} &H_1(N;\mathbb{Z})=H_1(\#^k (S^1\times S^2) \# L(p_1,q_1)\# \dots \#L(p_s,q_s);\mathbb{Z})\\ &\simeq H_1(S^1\times S^2;\mathbb{Z})^k\times H_1(L(p_1,q_1);\mathbb{Z})\times \dots \times H_1(L(p_s,q_s);\mathbb{Z}) \\ &\simeq\mathbb{Z}^k\times \mathbb{Z}_{p_1}\times\dots \times \mathbb{Z}_{p_s}\simeq G. \end{align*} Since $S^n$ is spin for all $n\geq 0$ and a product of spin manifolds is spin, $N\times S^{i-2}$ is a spin manifold. Finally, when $i\neq 3$, the K\"unneth formula implies that \begin{align} \label{compihom} H_{1}(N\times S^{i-2}; \mathbb{Z})\simeq H_{1}(N; \mathbb{Z})\simeq G. \end{align} From now on we will denote $N\times S^{i-2}$ by $\Lambda$. \subsection{Construction of a Legendrian submanifold and its filling} Now we recall the following statement, which can be seen as an implication of the h-principle of Murphy (\cite[Remark A.5]{LLEIHDCM}). \begin{proposition} \label{PorpGeoRealization} Every closed spin $n$-dimensional manifold admits a Legendrian embedding $i:\Lambda \to (\mathbb{R}^{2n+1}, \xi_{st})$ with vanishing Maslov number. \end{proposition} \begin{remark}
Str\"angberg in \cite{LegendrianAprroximation} described a $C^0$-approximation procedure for $2$-dimensional submanifolds of the standard contact $\mathbb{R}^5$ by Legendrian submanifolds, generalizing the standard approximation result for closed Legendrians in the standard contact $\mathbb{R}^3$. As mentioned in \cite{LegendrianAprroximation}, the construction of Str\"angberg admits an extension to certain higher dimensions, which would also potentially lead to Proposition \ref{PorpGeoRealization}. \end{remark} We apply Proposition \ref{PorpGeoRealization} to $\Lambda$ from Section \ref{spincontrolledh1} and get a Legendrian embedding $i:\Lambda \to (\mathbb{R}^{2n+1}, \xi_{st})$ with vanishing Maslov number; this Legendrian embedding will still be denoted by $\Lambda$. We push $\Lambda$ slightly in the $\partial_z$-direction (Reeb direction) and get a $2$-copy of $\Lambda$ that we call $\overline{\Lambda}$. Following the construction of Mohnke from \cite{Mohnketorusfilliingpair}, we note that $\overline{\Lambda}$ admits an exact Lagrangian filling of Maslov number $0$ diffeomorphic to $\mathbb{R}\times \Lambda$. Following \cite{EkhomHonaKalmancobordisms,KarlssonOrc}, we observe that this spin Maslov number $0$ exact Lagrangian filling induces an augmentation $\varepsilon$ over $\mathbb Z$. We would then like to study the linearized Legendrian contact cohomology of $\mathcal A(\overline{\Lambda})$, linearized with respect to $\varepsilon$.
\subsection{Application of the isomorphism of Seidel-Ekholm-Dimitroglou Rizell}
Recall that there is an isomorphism due to Seidel that was described by Ekholm in \cite{Seidelsisowfc} and completely proven by Dimitroglou Rizell in \cite{DimitroglouRizellLiftingPSH}. \begin{theorem}[Seidel--Ekholm--Dimitroglou Rizell] \label{SEDRI} Let $\Lambda$ be a Legendrian submanifold of Maslov number $0$ of $\mathbb{R}^{2n+1}_{st}$ which admits an exact Lagrangian filling $L$ of Maslov number $0$. Then \begin{align} LCH_{\varepsilon}^{i}(\Lambda; \mathbb{Z}_{2})\simeq H_{n-i}(L; \mathbb{Z}_{2}), \end{align} where $\varepsilon$ is the augmentation induced by $L$. \end{theorem}
The homology and cohomology groups in the above result are defined over $\mathbb{Z}_2$. Recall that after the proof of Dimitroglou Rizell \cite{DimitroglouRizellLiftingPSH}, signs of the cobordism maps between Chekanov--Eliashberg algebras were studied by Karlsson in \cite{KarlssonOrc}. In addition, the work of Ekholm--Lekili \cite{EkholmLekiliduality} implies an enhancement of Theorem \ref{SEDRI}, which compares not just the corresponding homology and cohomology groups, but the corresponding $A_{\infty}$-structures; this holds with signs and works over an arbitrary field. Therefore, we can say that the isomorphism of Seidel--Ekholm--Dimitroglou Rizell holds over $\mathbb{Z}$.
We then take the linearized Legendrian contact cohomology of $\overline{\Lambda}$ and apply the isomorphism of Seidel--Ekholm--Dimitroglou Rizell over $\mathbb{Z}$ to get \begin{align} \label{Seidelsisofor2copy} LCH^{i}_{\varepsilon}(\overline{\Lambda}; \mathbb Z) \simeq H_{1}(\mathbb R\times \Lambda; \mathbb{Z})\simeq H_{1}(\Lambda;\mathbb{Z}). \end{align} Combining Formula (\ref{Seidelsisofor2copy}) with Formula (\ref{compihom}), we get \begin{align} \label{thefinform} LCH^{i}_{\varepsilon}(\overline{\Lambda}; \mathbb Z)\simeq G. \end{align}
\begin{remark} We can also take the $1$-jet space $J^1(\Lambda)\simeq \mathbb{R}\times T^{\ast} \Lambda$ equipped with the standard contact $1$-form $\alpha=dz+\theta$, where $\theta$ is a primitive of the standard symplectic form on $T^{\ast} \Lambda$. Then we take the zero section $0_{\Lambda}$, and denote by $\overline{\Lambda}$ the $2$-copy of $0_{\Lambda}$. In this case one does not need to apply the isomorphism of Seidel--Ekholm--Dimitroglou Rizell; one can simply rely on the analysis of pseudoholomorphic curves in the duality paper of Ekholm--Etnyre--Sabloff \cite{EESDuality} to get Formula (\ref{thefinform}), and hence the analogue of Theorem \ref{topresnongaug} will hold. \end{remark}
\begin{remark} Note that the examples we construct have ``complicated'' topology that allows us to realize an arbitrary finitely generated abelian group as a Legendrian contact cohomology group. One could ask whether an arbitrary finitely generated abelian group can be realized as a linearized Legendrian contact (co)homology of a Legendrian with ``simple'' topology, for example Legendrian knots and links. This question is due to Bourgeois \cite{BourgeoisTorsion} and it remains wide open. \end{remark}
\end{document}
\begin{document}
\begin{frontmatter}
\title{A binary embedding \\ of the stable line-breaking construction}
\runtitle{Binary embedding of the stable line-breaking construction}
\begin{aug}
\author{\fnms{Franz} \snm{Rembart}\thanksref{T1}\ead[label=e1]{rembart@stats.ox.ac.uk}} \and \author{\fnms{Matthias} \snm{Winkel}\thanksref{T2}\ead[label=e2]{winkel@stats.ox.ac.uk}} \address{Department of Statistics, University of Oxford, 24--29 St Giles, Oxford OX1 3LB \\ \printead{e1,e2}} \runauthor{F. Rembart and M. Winkel}
\affiliation{University of Oxford} \thankstext{T1}{Supported by EPSRC grant EP/P505666/1} \thankstext{T2}{Supported by EPSRC grant EP/K029797/1}
\end{aug}
\begin{abstract} We embed Duquesne and Le Gall's stable tree into a binary compact continuum random tree (CRT) in a way that solves an open problem posed by Goldschmidt and Haas. This CRT can be obtained by applying a recursive construction method of compact CRTs as presented in earlier work to a specific distribution of a random string of beads, i.e.\ a random interval equipped with a random discrete measure. We also express this CRT as a tree built by replacing all branch points of a stable tree by rescaled i.i.d. copies of a Ford CRT. Some of these developments are carried out in a space of $\infty$-marked metric spaces generalising Miermont's notion of a $k$-marked metric space.
\end{abstract}
\begin{keyword} \kwd{stable tree} \kwd{line-breaking construction} \kwd{string of beads} \kwd{continuum random tree} \kwd{marked metric space} \kwd{recursive distribution equation}
\end{keyword}
\begin{keyword}[class=MSC] \kwd{60J80}
\end{keyword} \end{frontmatter}
\section{Introduction}\label{fig1} \label{Intro}
Stable trees were introduced by Duquesne and Le Gall \cite{17} as a family of continuum random trees (CRTs) parametrised by a self-similarity parameter $\alpha \in (1,2]$ to describe the genealogical structure of continuous-state branching processes with branching mechanism $\lambda \mapsto \lambda^{\alpha}$. As such they form a subclass of L\'evy trees \cite{23} and contain Aldous's Brownian CRT \cite{6,7,8} as a special case ($\alpha=2$). They were studied by Miermont and others \cite{2, 16, 17, 23,33,35,36,38} in the context of self-similar fragmentations and by several authors to establish invariance principles \cite{24,Kor12,CP09,HM10,Die13} and other properties \cite{11,CK14}. Furthermore, they have deeper connections to random maps and Liouville quantum gravity \cite{Subord,DMS14,MS15}.
We represent trees as \textit{$\mathbb R$-trees}, i.e.\ compact metric spaces $(\mathcal T, d)$ such that any two points $x,y \in \mathcal T$ are connected by a unique path $[[x,y]]$ in $\mathcal{T}$, which is furthermore required to have length $d(x,y)$. All our $\mathbb{R}$-trees are \em rooted \em at a distinguished $\rho\in\mathcal{T}$. We refer to a rooted $\mathbb R$-tree $(\mathcal T, d, \rho)$ equipped with a probability measure $\mu$ as a \textit{weighted} $\mathbb R$-tree $(\mathcal T, d, \rho, \mu)$, and equip sets of isometry classes of $\mathbb{R}$-trees and weighted $\mathbb R$-trees with the Gromov-Hausdorff and the Gromov-Hausdorff-Prokhorov topology, respectively.
Ever since Aldous \cite{6}, such trees have been built sequentially from a single branch $[[\rho,\Sigma_0]]$, grafting further branches (line segments) $]]J_{k-1},\Sigma_k]]$ to build trees $\mathcal T_k$ spanned by a growing finite number of points $\rho,\Sigma_0,\ldots,\Sigma_k$, $k\ge 1$, finally passing to the closure/completion $\mathcal T$ of $\bigcup_{k\ge 0}\mathcal T_k$. In a given weighted $\mathbb R$-tree $(\mathcal T,d,\rho,\mu)$, a natural sequence $(\Sigma_k,k\ge 0)$ may be obtained as an independent sample from $\mu$. For the Brownian CRT, Aldous \cite{6} gave an autonomous description of the resulting tree-growth process $(\mathcal T_k,k\ge 0)$ by breaking the half-line $[0,\infty)$ at the points $(S_k,k\ge 0)$ of an inhomogeneous Poisson process with linearly growing intensity $tdt$ on $[0,\infty)$, each segment $]S_k,S_{k+1}]$ grafted in a point $J_k$ chosen uniformly from the length measure on the structure $\mathcal T_k$ already built, with $\mathcal T_0=[0,S_0]$.
In Aldous's construction, the \em branch points \em $J_k$, $k\ge 0$, are distinct, the trees \em binary\em. This construction reveals some of the local complexity of the limiting tree, since elementary thinning of Poisson processes shows that every branch receives a dense set of branch points. Goldschmidt and Haas \cite{12} generalised this line-breaking construction to all stable trees $(\mathcal T,d,\rho,\mu)$, which are not binary for $\alpha\in(1,2)$. They describe \begin{equation} \mathcal T_k=\bigcup_{i=0}^k[[\rho,\Sigma_i]],\quad k \geq 0,\qquad\mbox{ for a sample }\Sigma_i\sim\mu,i\ge 0, \label{findimmarg1} \end{equation} not quite autonomously, as Aldous does in the special case $\alpha=2$, but by assigning weights \begin{equation} W^{(i)}_{k}, \quad i \in [b_k], \quad k \geq 0, \label{weights} \end{equation} to each branch point $v_i$ of $\mathcal T_k$, where $(v_i, i \geq 1)$ is the sequence of \em distinct branch points \em in their order of appearance in $(\mathcal T_k, k \geq 0)$, and $b_k\ge 0$ is the number of branch points of $\mathcal T_k$. Here, $[b]:=\{1,\ldots,b\}$.
Specifically, it will be convenient to change the usual parametrisation of the stable trees from a parameter $\alpha \in (1,2]$ to an \em index \em $\beta =1-1/\alpha \in (0,1/2]$. For $k \geq 0$, the sum of the branch point weights $(W_k^{(i)}, i \in[b_k])$ and the total length $L_k={\rm Leb}(\mathcal T_k)$ of $\mathcal T_k$ is given by $S_k$, where $(S_k, k \geq 0)$ is the \em Mittag-Leffler Markov chain \em \cite{2,12,39} with parameter $\beta$, starting from $S_0 \sim \text{ML}(\beta, \beta)$ with transition density
\begin{equation*}
f_{S_{k+1}\lvert S_k=z} \left( y \right)=f(z,y)=\frac{1-\beta}{\Gamma(1/\beta)}(y-z)^{1/\beta-2}\frac{yg_{\beta}(y)}{g_\beta(z)}, \quad 0 < z <y, \quad k \geq 0. \end{equation*} Here, $\text{ML}(\alpha, \theta)$ denotes the Mittag-Leffler distribution with parameters $ 0 < \alpha < 1$ and $\theta > -\alpha$ (cf. Section \ref{secml}), and $g_\beta(\cdot)$ is the density of ML$(\beta, 0)$. Then $S_k=W_k^{(1)}+\cdots+W_k^{(b_k)} + L_k \sim {\rm{ML}}(\beta, \beta+k)$.
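As a plausibility check on this transition density, the following Python sketch (an illustration, not part of the paper's development) verifies numerically that $f(z,\cdot)$ integrates to $1$ in the case $\beta=1/2$, where $1/\beta-2=0$ and $\Gamma(1/\beta)=1$, using the fact that the ${\rm ML}(1/2,0)$ density is the half-normal density $g_{1/2}(x)=e^{-x^2/4}/\sqrt{\pi}$ (an identity the reader may check via the moments $n!/\Gamma(1+n/2)$).

```python
# Numerically check that the Mittag-Leffler chain transition density
# integrates to 1 for beta = 1/2, where it simplifies to
#   f(z, y) = (1/2) * y * g(y) / g(z)  for y > z,
# with g(x) = exp(-x^2/4) / sqrt(pi) the ML(1/2, 0) (half-normal) density.
import math

def g(x):
    return math.exp(-x * x / 4.0) / math.sqrt(math.pi)

def f(z, y):
    return 0.5 * y * g(y) / g(z) if y > z else 0.0

z, h = 1.0, 1e-3
ys = [z + k * h for k in range(1, 40001)]   # Riemann sum up to y = z + 40
total = h * sum(f(z, y) for y in ys)
assert abs(total - 1.0) < 1e-3
```

Indeed, for $\beta=1/2$ one has $f(z,y)=\tfrac12\,y\,e^{-(y^2-z^2)/4}$, whose integral over $(z,\infty)$ is exactly $1$.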
\begin{algorithm}[Goldschmidt-Haas \cite{12}] \label{GH} Let $\beta\in(0,1/2]$. We grow discrete $\mathbb R$-trees $\mathcal T_k$ with weights $W_k^{(i)}$ in the branch points $v_i$, $i\in[b_k]$, of $\mathcal T_k$, and edge lengths between vertices, as follows.
\begin{enumerate}\item[0.] Let $(\mathcal T_0,\rho)$ be isometric to $([0,S_0],0)$, where $S_0\sim{\rm ML}(\beta,\beta)$; let $b_0=0$ and $W_0^{(i)}=0$, $i\ge 1$.
\end{enumerate}
Given $(\mathcal T_j,(v_i,i\in[b_j]),(W_j^{(i)},i\in[b_j]))$, $0\le j\le k$, and $S_k=L_k+W_k^{(1)}+\cdots+W_k^{(b_k)}$, where $L_k={\rm Leb}(\mathcal T_k)$,
\begin{enumerate}\item select $I_k=i$ for each branch point $v_i$ of $\mathcal T_k$ with probability proportional to $W_k^{(i)}$, $i\in[b_k]$;
or select an edge $E_k\subset\mathcal T_k$ with probability proportional to its length and let $b_{k+1}=b_k+1$, $I_k=b_{k+1}$;
\item if an edge $E_k$ is selected, sample $v_{b_{k+1}}$ from the normalised length measure on $E_k$;
\item sample $S_{k+1}$ with density $f(S_k,\cdot)$ and an independent $B_k\sim{\rm Beta}(1,1/\beta-2)$; attach to $\mathcal T_k$ at $J_k:=v_{I_k}$ a new branch of
length $(S_{k+1}-S_k)B_k$ to form $\mathcal T_{k+1}$; increase the weight of $J_k=v_{I_k}$ to
$$W_{k+1}^{(I_k)}=W_k^{(I_k)}+(S_{k+1}-S_k)(1-B_k),\qquad \text{and set}\quad W_{k+1}^{(j)}=W_k^{(j)},\quad j \neq I_k.$$
\end{enumerate} \end{algorithm}
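The growth step of Algorithm \ref{GH} can be sketched in code. The following is an illustrative simulation, not part of the paper: the true increment $S_{k+1}-S_k$ has the Mittag-Leffler transition density $f(S_k,\cdot)$, which we leave as a user-supplied callable \texttt{sample\_increment}; the remaining bookkeeping follows steps 1--3 for $\beta<1/2$.

```python
import random

def gh_step(weights, edges, beta, sample_increment):
    """One growth step of the Goldschmidt-Haas scheme (illustrative sketch).

    weights: branch point weights W_k^(i) of the current tree T_k
    edges:   edge lengths of T_k
    sample_increment: callable returning S_{k+1} - S_k; its true law,
        given by the transition density f(S_k, .), is assumed supplied.
    Requires beta < 1/2 (for beta = 1/2 one sets B_k = 1 instead).
    """
    total = sum(weights) + sum(edges)
    u = random.uniform(0.0, total)
    if u < sum(weights):
        # step 1: select branch point i with probability prop. to W_k^(i)
        i, acc = 0, 0.0
        while acc + weights[i] <= u:
            acc += weights[i]
            i += 1
    else:
        # steps 1-2: length-biased edge choice, then a uniform point on it
        u -= sum(weights)
        j, acc = 0, 0.0
        while acc + edges[j] <= u:
            acc += edges[j]
            j += 1
        v = u - acc                         # uniform position on edge j
        edges[j:j + 1] = [v, edges[j] - v]  # new branch point splits the edge
        weights.append(0.0)
        i = len(weights) - 1
    # step 3: split the increment S_{k+1}-S_k into new length and new weight
    delta = sample_increment()
    b = random.betavariate(1.0, 1.0 / beta - 2.0)  # B_k
    edges.append(delta * b)             # new branch of length (S_{k+1}-S_k)B_k
    weights[i] += delta * (1.0 - b)     # remainder increases the weight of J_k
    return weights, edges
```

By construction the invariant $S_{k+1}=L_{k+1}+W_{k+1}^{(1)}+\cdots+W_{k+1}^{(b_{k+1})}$ is preserved: the increment is split between the new edge and the weight of $J_k$.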
When $\beta=1/2$, we understand $B_k=1$, so $W_k^{(i)}=0$ for all $i \geq 1$, $k \geq 0$, and $L_k=S_k$ for all $k \geq 0$. We obtain a sequence of compact binary $\mathbb R$-trees whose evolution is determined by attachment points chosen uniformly at random according to the length measure, and whose total length is given by the Mittag-Leffler Markov chain of parameter $\beta=1/2$, which can be seen to correspond to an inhomogeneous Poisson process with intensity measure $\frac{1}{2}t\,{\rm d}t$. Hence, this reduces to Aldous's line-breaking construction of the Brownian CRT \cite{6}.
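The correspondence with an inhomogeneous Poisson process can be made concrete by the standard time-change argument: with cumulative intensity $\Lambda(t)=t^2/4$, the cut times are $\Lambda^{-1}$ applied to the arrival times of a rate-$1$ Poisson process. A minimal sketch (our illustration, not from the paper):

```python
import math
import random

def cut_times(n):
    """First n arrival times of a Poisson process with intensity (1/2) t dt.

    Time change: if G_1 < G_2 < ... are the arrival times of a rate-1
    Poisson process, the arrivals here are T_k = Lambda^{-1}(G_k) with
    Lambda(t) = t^2/4, i.e. T_k = 2*sqrt(G_k).
    """
    g, times = 0.0, []
    for _ in range(n):
        g += random.expovariate(1.0)  # standard exponential inter-arrival
        times.append(2.0 * math.sqrt(g))
    return times
```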
It was shown in \cite{12} that the sequence of trees $(\mathcal T_k, k \geq 0)$ in Algorithm \ref{GH} has the same distribution as the sequence of trees from \eqref{findimmarg1}, i.e. we can formally define the \em stable tree of index $\beta \in (0,1/2]$ \em as the (Gromov-Hausdorff) limit $\mathcal T$ of $\mathcal T_k$, as $k\rightarrow\infty$. See also \cite{12} for an alternative line-breaking construction of the sequence $(\mathcal T_k, k \geq 0)$, where branch point selection is based on vertex degrees instead of weights.
Goldschmidt and Haas \cite{12} asked if there was a sensible way to associate a notion of length with the branch point weights in Algorithm \ref{GH}. We answer this question by using the branch point weights to build rescaled Ford trees whose lengths correspond to these weights. Ford trees arise in the scaling limit of Ford's alpha model studied in \cite{9,2} and in the context of the alpha-gamma-model \cite{10} for $\gamma=\alpha$, which is also related to the stable tree in the case when $\gamma=1-\alpha$. Ford trees are examples of binary self-similar CRTs and have also been constructed via line-breaking:
\begin{algorithm}[Haas-Miermont-Pitman-Winkel \cite{1,2}] \label{Ford} Let $\beta^\prime\in(0,1)$. We grow $\mathbb{R}$-trees $\mathcal{F}_m$, $m\ge 1$:
\begin{enumerate}\setcounter{enumi}{-1}\item Let $(\mathcal{F}_1,\rho)$ be isometric to $([0,S_1^\prime],0)$, where $S_1^\prime\sim{\rm ML}(\beta^\prime,1-\beta^\prime)$.
\end{enumerate}
Given $\mathcal{F}_j$, $1\le j\le m$, let $S_m^\prime={\rm Leb}(\mathcal{F}_m)$ denote the length of $\mathcal{F}_m$;
\begin{enumerate}\item select an edge $E_m\subset\mathcal{F}_m$ with probability proportional to its length;
\item if $E_m$ is external, sample $D_m\sim{\rm Beta}(1,1/\beta^\prime-1)$ and place $J_m\in E_m$ to split $E_m$ into length proportions $D_m$ and $1-D_m$;
otherwise, sample $J_m$ from the normalised length measure on $E_m$;
\item sample $S_{m+1}^\prime$ with density $f(S_m^\prime,\cdot)$; attach to $\mathcal{F}_m$ at $J_m$ an edge of length $S_{m+1}^\prime\!-\!S_m^\prime$ to form
$\mathcal{F}_{m+1}$.
\end{enumerate} \end{algorithm}
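The distinctive feature of Algorithm \ref{Ford} is step 2, where external and internal edges are split differently. A small sketch (our illustration; the function name is hypothetical) of the position of $J_m$ on the selected edge:

```python
import random

def ford_attach_point(edge_len, is_external, beta_prime):
    """Step 2 of the Ford line-breaking construction (sketch): position
    of J_m on the selected edge of length edge_len.

    External edges are split in proportion D_m ~ Beta(1, 1/beta' - 1);
    internal edges are split uniformly. Requires beta' in (0, 1).
    """
    if is_external:
        d = random.betavariate(1.0, 1.0 / beta_prime - 1.0)
        return d * edge_len
    return random.uniform(0.0, edge_len)
```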
The sequence of trees $(\mathcal F_m, m \geq 1)$ has as its (Gromov-Hausdorff) limit a CRT $\mathcal F$ as $m \rightarrow \infty$, a so-called \textit{Ford CRT} of index $\beta' \in (0,1)$, see \cite{1,2}. We refer to the trees $\mathcal F_m$, $m \geq 1$, as \em Ford trees\em. In the case when $\beta'=1/2$, Algorithm \ref{Ford} corresponds to Aldous's construction of the Brownian CRT.
We combine the line-breaking constructions given in Algorithms \ref{GH} and \ref{Ford} in the framework of $\infty$-marked $\mathbb R$-trees, which we introduce in Section \ref{IMRT} as a natural extension of Miermont's notion of $k$-marked trees \cite{22}. An $\infty$-marked $\mathbb R$-tree $(\mathcal T, (\mathcal R^{(i)}, i \geq 1))$ is an $\mathbb R$-tree $(\mathcal T, d, \rho)$ with non-empty closed connected subsets $\mathcal R^{(i)} \subset \mathcal T$, $i \geq 1$. We will refer to this setting as a \textit{two-colour} framework, meaning that the \em marked \em set $\bigcup_{i \geq 1} \mathcal R^{(i)}$ and the \em unmarked \em remainder $\mathcal T \setminus \bigcup_{i \geq 1} \mathcal R^{(i)}$ are associated with two different colours. The marked components in the line-breaking construction below correspond to rescaled Ford trees with lengths equal to the branch point weights in Algorithm \ref{GH} and the unmarked remainder gives rise to a stable tree. Selection of a branch point in Algorithm \ref{GH} corresponds to an insertion into the respective marked component in the enhanced line-breaking construction given by Algorithm \ref{twocolour}. \pagebreak
\begin{algorithm}[Two-colour line-breaking construction] \label{twocolour} Let $\beta\in(0,1/2]$. We grow $\infty$-marked $\mathbb{R}$-trees $(\mathcal{T}_k^*,(\mathcal{R}_k^{(i)},i\ge 1))$, $k\ge 0$, as follows.
\begin{enumerate}\setcounter{enumi}{-1}\item Let $(\mathcal{T}_0^*,\rho)$ be isometric to $([0,S_0],0)$, where $S_0\sim{\rm ML}(\beta,\beta)$; let $r_0=0$ and $\mathcal{R}_0^{(i)}=\{\rho\}$, $i\ge 1$.
\end{enumerate}
Given $(\mathcal{T}_j^*,(\mathcal{R}_j^{(i)},i\ge 1))$, $0\le j\le k$, let $S_k={\rm Leb}(\mathcal T_k^*)$ be the length of $\mathcal{T}_k^*$ and $r_k=\#\{i\ge 1\colon\mathcal{R}_k^{(i)}\neq\{\rho\}\}$;
\begin{enumerate}\item select an edge $E_k^*\subset\mathcal{T}_k^*$ with probability proportional to its length; if $E_k^*\subset\mathcal{R}_k^{(i)}$ for some $i\in[r_k]$, let $I_k=i$; otherwise, i.e.\ if $E_k^*\subset\mathcal{T}_k^*\setminus\bigcup_{i\in[r_k]}\mathcal{R}_k^{(i)}$, let $r_{k+1}\!=\!r_k\!+\!1$, $I_k\!=\!r_{k+1}$;
\item if $E_k^*$ is an
external edge of $\mathcal{R}_k^{(i)}$, sample $D_k\sim{\rm Beta}(1,1/\beta-2)$ and place $J_k^*\in E_k^*$ to split $E_k^*$ into length proportions $D_k$ and
$1-D_k$;
otherwise, i.e.\ if $E_k^*\subset\mathcal{T}_k^*\setminus\bigcup_{i\in[r_k]}\mathcal{R}_k^{(i)}$ or if $E_k^*$ is an internal edge of $\mathcal{R}_k^{(i)}$, sample $J_k^*$ from the normalised length measure on $E_k^*$;
\item sample $S_{k+1}$ with density $f(S_k,\cdot)$ and an independent $B_k\sim{\rm Beta}(1,1/\beta-2)$; attach to $\mathcal{T}_k^*$ at $J_k^*$ a new
branch of length $S_{k+1}-S_k$ to form $\mathcal{T}_{k+1}^*$, and add to $\mathcal{R}_k^{(I_k)}$ the part of length $(S_{k+1}-S_k)(1-B_k)$ closest to the root
to form $\mathcal{R}_{k+1}^{(I_k)}$; set $\mathcal{R}_{k+1}^{(j)}=\mathcal{R}_k^{(j)}$, $j\neq I_k$.
\end{enumerate} \end{algorithm}
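The selection step 1 of Algorithm \ref{twocolour} can be sketched as follows; this is an illustrative bookkeeping (data structures are our own, hypothetical choices) in which each edge carries the label of the marked component containing it, with label $0$ for unmarked edges.

```python
import random

def select_marked_component(edges, r):
    """Step 1 of the two-colour construction (illustrative sketch).

    edges: list of (length, comp) pairs; comp = 0 marks an unmarked
           edge, comp = i >= 1 an edge of the marked component R^(i).
    r:     number r_k of non-trivial marked components.
    Returns (index of the selected edge, I_k, updated r).
    """
    total = sum(length for length, _ in edges)
    u = random.uniform(0.0, total)
    acc = 0.0
    for j, (length, comp) in enumerate(edges):
        acc += length
        if acc > u:
            if comp >= 1:
                return j, comp, r       # insert into existing R^(comp)
            return j, r + 1, r + 1      # unmarked edge opens R^(r_{k+1})
    # guard against floating-point rounding: fall back to the last edge
    j, (length, comp) = len(edges) - 1, edges[-1]
    return (j, comp, r) if comp >= 1 else (j, r + 1, r + 1)
```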
Indeed, we obtain a correspondence between the branch point weights in Algorithm \ref{GH} and the lengths of the marked subtrees in Algorithm \ref{twocolour}; moreover, up to scaling, the marked subtrees grow as the Ford trees of Algorithm \ref{Ford}:
\begin{theorem}[Weight-length representation]\label{Mainresult1} Let $({\mathcal T}_k,({W}_k^{(i)}, i \geq 1), k \geq 0)$ be as in Algorithm \ref{GH}. Let $(\mathcal T_k^*,(\mathcal R_k^{(i)}, i \geq 1), k \geq 0)$ be the sequence of $\infty$-marked $\mathbb R$-trees constructed in Algorithm \ref{twocolour}, and let $\widetilde{W}_k^{(i)}={\rm Leb}(\mathcal R_k^{(i)})$ denote the length of $\mathcal R_k^{(i)}$, $i \geq 1$, respectively. For $k \geq 1$, contract each component $\mathcal R_k^{(i)}$ to a single branch point $\widetilde{v}_i$ by using an equivalence relation, and denote the resulting tree by $\widetilde{\mathcal T}_k$. Then \begin{equation}\label{weighteq} \left(\widetilde{\mathcal T}_k, \left(\widetilde{W}_k^{(i)}, i \geq 1 \right), k \geq 0\right) \,{\buildrel d \over =}\, \left({\mathcal T}_k, \left({W}_k^{(i)}, i \geq 1 \right), k \geq 0\right). \end{equation} See Figure \ref{fig1}. Furthermore, there exist positive random variables $C^{(i)}$ and subsequences $(k_m^{(i)}, m \geq 1)$, $i \geq 1$, such that the rescaled marked subtrees grow like Ford trees of index $\beta'=\beta/(1-\beta)$, i.e. \begin{equation}\label{subfordgrowth} \left(C^{(i)} \mathcal R^{(i)}_{k_m^{(i)}}, m \geq 1\right)\,{\buildrel d \over =}\, \left(\mathcal F_m, m \geq 1 \right), \end{equation} for all $i \geq 1$ where $(C^{(i)} \mathcal R^{(i)}_{k_m^{(i)}}, m \geq 1)$, $i \geq 1$, are independent of each other. \end{theorem}
To obtain limiting $\infty$-marked CRTs, we introduce a suitable metric $d_{\rm GH}^\infty$ in Section \ref{MMS}.
\begin{theorem}[Convergence of two-colour trees] \label{Mainresult2} Let $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1), k \geq 0)$ be as above. Then \begin{equation} \lim \limits_{k \rightarrow \infty} \left(\mathcal T_k^*, \left(\mathcal R_k^{(i)}, i \geq 1 \right)\right) = \left(\mathcal T^*, \left( \mathcal R^{(i)}, i \geq 1 \right) \right) \quad \text{a.s.} \end{equation} with respect to $d_{\rm GH}^\infty$, where $(\mathcal T^*, (\mathcal R^{(i)}, i \geq 1))$ is a compact $\infty$-marked $\mathbb R$-tree. Furthermore, \begin{itemize} \item the tree $\widetilde{\mathcal T}$, obtained from $\mathcal T^*$ by contracting each component $\mathcal R^{(i)}$ to a single branch point $\widetilde{v}_i$, is a stable tree of parameter $\beta$; \item there exist scaling factors $(C^{(i)}, i \geq 1)$ such that the trees $C^{(i)} \mathcal R^{(i)}$, $i \geq 1$, are i.i.d. copies of a Ford CRT $\mathcal F$ of index $\beta'=\beta/(1-\beta)$, and the trees $C^{(i)} \mathcal R^{(i)}$, $i \geq 1$, are independent of $\widetilde{\mathcal T}$. \end{itemize} \pagebreak \end{theorem}
The scaling factors $C^{(i)}$ can be given explicitly in terms of the masses of the subtrees of the stable tree $\widetilde{\mathcal T}$ above the branch point $\widetilde{v}_i$.
We can in fact use this, with the ingredients listed in Theorem \ref{Mainresult2}, to construct the two-colour tree $(\mathcal T^*,( \mathcal R^{(i)}, i \geq 1 ))$ from a stable tree $(\mathcal T, \mu)$ by replacing each branch point by a rescaled independent copy of a Ford CRT:
\begin{theorem}[Branch point replacement in a stable tree] \label{branchrepl} Let $(\mathcal T, d, \rho, \mu)$ be a stable tree of index $\beta \in (0,1/2]$ equipped with an i.i.d. sequence of labelled leaves $(\Sigma_k, k \geq 0)$ sampled from $\mu$. Consider the reduced trees $(\mathcal T_k, k \geq 0)$ as in \eqref{findimmarg1} with branch points $(v_i, i \geq 1)$ in order of appearance. For each $i \geq 1$, consider the path from the root to the leaf with the smallest label above $v_i$ and the following variables: \begin{itemize} \item the total mass ${P}^{(i)}=\sum_{j \geq 1} {P}_j^{(i)}$ of the subtrees rooted at $v_i$ on this path with masses $({P}_j^{(i)}, j \geq 1)$, in the order of their smallest labels; \item the random variable ${D}^{(i)}=\lim_{n \rightarrow \infty}\big(1-\sum_{j\in[n]}P_j^{(i)}/P^{(i)}\big)^{1-\beta}(1-\beta)^{\beta-1}n^\beta$ derived from $({P}^{(i)}_j, j \geq 1)$. \end{itemize} For $i \geq 1$, replace $v_i$ by an independent Ford tree $\mathcal F^{(i)}$ of index $\beta'=\beta/(1-\beta)$ with distances rescaled by $\left(C^{(i)}\right)^{-1}=\left({P}^{(i)}\right)^{\beta} \cdot \left({D}^{(i)}\right)^{\beta/(1-\beta)}
=\lim_{n \rightarrow \infty}\big(P^{(i)}-\sum_{j\in[n]}P_j^{(i)}\big)^{\beta}(1-\beta)^{-\beta}n^{\beta^2/(1-\beta)}$. Specifically, the root of $\mathcal F^{(i)}$ is identified with $v_i$ and the subtrees rooted at $v_i$ are attached to leaves of $\mathcal F^{(i)}$ in the order of their appearance in Algorithm \ref{Ford}. Then the tree $\mathcal T^*$ obtained here in the limit after all replacements has the same distribution as the tree $\mathcal T^*$ in Theorem \ref{Mainresult2}. \end{theorem}
We will formalise this construction in Section \ref{bprepl}. The random variable ${D}^{(i)}$ is the so-called \em $(1-\beta)$-diversity \em of the mass partition $({P}^{(i)}_j/{P}^{(i)}, j \geq 1) \sim {\rm GEM}(1-\beta, -\beta)$, where GEM$(\alpha, \theta)$ denotes the Griffiths-Engen-McCloskey distribution with parameters $\alpha\in[0,1)$, $\theta > -\alpha$, whose ranked version is the Poisson-Dirichlet distribution ${\rm PD}(\alpha,\theta)$. Note that, when $\beta=1/3$, we have $\beta'=1/2$, which means that we replace the branch points of the stable tree by rescaled i.i.d.\ Brownian CRTs. This should be compared with Le Gall \cite{Subord}, who effectively contracts subtrees in the middle of a Brownian CRT to obtain a stable tree of parameter 3/2. Neither his subtrees nor our $\mathcal{T}^*$ appear to be rescaled Brownian CRTs.
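The mass partition $({P}^{(i)}_j/{P}^{(i)}, j \geq 1) \sim {\rm GEM}(1-\beta, -\beta)$ can be simulated by the standard stick-breaking representation: a GEM$(\alpha,\theta)$ sequence arises from independent $B_i \sim {\rm Beta}(1-\alpha, \theta+i\alpha)$. A small sketch (our illustration, not from the paper), with $\alpha=1-\beta$, $\theta=-\beta$, so $B_i \sim {\rm Beta}(\beta, i(1-\beta)-\beta)$:

```python
import random

def gem_sticks(beta, n):
    """First n sticks of GEM(1 - beta, -beta) via stick-breaking.

    Uses independent B_i ~ Beta(beta, i*(1-beta) - beta); requires
    beta < 1/2 so that the second Beta parameter is positive for i = 1.
    """
    sticks, remaining = [], 1.0
    for i in range(1, n + 1):
        b = random.betavariate(beta, i * (1.0 - beta) - beta)
        sticks.append(remaining * b)
        remaining *= 1.0 - b
    return sticks
```

For $\beta=1/3$ this simulates the partition GEM$(2/3,-1/3)$ whose $(1-\beta)$-diversity appears in the scaling factors $C^{(i)}$.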
The proofs of Theorems \ref{Mainresult2} and \ref{branchrepl}, in particular the compactness of $\mathcal T^*$, are based on an embedding of $(\mathcal T_k^*, k \geq 0)$ into a compact CRT whose existence follows from earlier work \cite{RW} where we constructed CRTs via i.i.d.\ copies of a random string of beads, i.e.\ any random interval equipped with a random discrete probability measure, see Section \ref{Embedding} here for details. The distribution $\nu$ of the string of beads needed to obtain this compact CRT combines two $(\beta, \theta)$-strings of beads (for $\theta=\beta$ and $\theta=1-2\beta$), which arise in the framework of ordered $(\beta, \theta)$-Chinese restaurant processes as introduced in \cite{1}. A \em $(\beta, \theta)$-string of beads \em is an interval of length $K \sim {\rm{ML}}(\beta, \theta)$ equipped with a discrete probability measure whose atom sizes are PD$(\beta, \theta)$, arranged in a random order that yields a regenerative property.
It is crucial for our argument to equip each reduced tree with a mass measure which effectively captures projected subtree masses. This naturally leads to a new line-breaking construction of the stable tree where the selection of the attachment point $J_k$ is based on masses rather than lengths, and where a proportion of the mass in $J_k$ is spread over the new branch, depending on the degree ${\rm deg}(J_k,\mathcal{T}_k)$ of $J_k$ in $\mathcal{T}_k$.
\begin{algorithm}[Line-breaking construction of the stable tree with masses] \label{masssel} Let $\beta\in(0,1/2]$. We grow weighted $\mathbb{R}$-trees $(\mathcal{T}_k,\mu_k)$, $k\ge 0$, as follows.
\begin{enumerate}\setcounter{enumi}{-1}\item Let $(\mathcal{T}_0,\mu_0)$ be isometric to a $(\beta,\beta)$-string of beads.
\end{enumerate}
Given $(\mathcal{T}_j,\mu_j)$ with $\mu_j=\sum_{x\in\mathcal{T}_j}\mu_j(x)\delta_x$, $0\le j\le k$,
\begin{enumerate}\stepcounter{enumi}
\item[1.-2.] sample $J_k$ from $\mu_k$;
\item[3.] given ${\rm deg}(J_k,\mathcal{T}_k)=d\ge 2$, let $Q_k\sim{\rm Beta}(\beta,(d-2)(1-\beta)+1-2\beta)$, and let $\xi_k$ be an
independent $(\beta,\beta)$-string of beads; to form $(\mathcal{T}_{k+1},\mu_{k+1})$, remove $Q_k\mu_k(J_k)\delta_{J_k}$ from $\mu_k$ and attach
to $\mathcal{T}_k$ at $J_k$ an isometric copy of $\xi_k$ with measure rescaled by $Q_k\mu_k(J_k)$ and metric rescaled by $(Q_k\mu_k(J_k))^\beta$.
\end{enumerate} \end{algorithm}
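One step of Algorithm \ref{masssel} can be sketched as pure mass bookkeeping; this is an illustrative simplification (our own, hypothetical data structures) in which the attached string of beads is represented only by the total mass it carries.

```python
import random

def mass_step(atoms, beta):
    """One attachment step of the mass-based construction (sketch).

    atoms: dict atom id -> (mass, degree), with masses summing to 1;
           the degree deg(J_k, T_k) sets the Beta parameters of Q_k.
    Returns (updated atoms, mass moved onto the new branch).
    Requires beta < 1/2.
    """
    u, acc = random.random(), 0.0
    for x, (m, d) in atoms.items():  # sample J_k from mu_k
        acc += m
        if acc > u:
            break
    # Q_k ~ Beta(beta, (d-2)(1-beta) + 1 - 2*beta), given deg(J_k) = d
    q = random.betavariate(beta, (d - 2) * (1.0 - beta) + 1.0 - 2.0 * beta)
    moved = q * m                    # Q_k * mu_k(J_k), spread on new branch
    atoms[x] = (m - moved, d + 1)    # J_k keeps the rest; its degree grows
    return atoms, moved
```

Total mass is conserved at every step, consistent with $\mu_k$ being a probability measure for all $k$.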
\begin{theorem} \label{samestable} In Algorithm \ref{masssel}, $(\mathcal T_k, k \geq 0)$ has the same distribution as the sequence of trees in \eqref{findimmarg1} (and as in Algorithm \ref{GH}). In particular, $\lim_{k \rightarrow \infty} \mathcal T_k = \mathcal T$ a.s. in the Gromov-Hausdorff topology for a stable tree $\mathcal T$. Furthermore, $\lim_{k \rightarrow \infty} (\mathcal T_k, \mu_k)= (\mathcal T, \mu)$ a.s. in the Gromov-Hausdorff-Prokhorov topology. \end{theorem} The proof of Theorem \ref{samestable} is based on the following well-known property of the stable tree, phrased in different terminology in \cite[Corollary 10(3)]{35}, and \cite[discussion after Corollary 8]{1}, where the link between $(\beta, \beta)$-strings of beads and a Bessel bridge of dimension $2\beta$ was established.
\begin{prop} \label{betastring} Let $(\mathcal T, \mu)$ be a stable tree of parameter $\beta \in (0,1/2]$, and let $\Sigma_0 \sim \mu$. Consider the \em spine \em
$\mathcal T_0=[[\rho,\Sigma_0]]$, and equip $\mathcal T_0$ with the mass measure $\mu_0$, capturing the masses of the connected components of
$\mathcal T \setminus \mathcal T_0$ projected onto $\mathcal T_0$. Then $(\mathcal T_0, \mu_0)$ is a $(\beta, \beta)$-string of beads.
\end{prop}
This paper is structured as follows. We introduce the framework of $\infty$-marked $\mathbb R$-trees in Section \ref{MMS}, and collect some preliminary results in Section \ref{prelim}. Section \ref{sec4} is devoted to the study of the two-colour line-breaking construction, while Section \ref{sec5} deals with its convergence to compact CRTs, as well as the branch point replacement. In Section \ref{Sec6}, we study a discrete two-colour tree-growth process whose two-step scaling limit is the two-colour CRT. An appendix includes some proofs postponed from earlier sections.
\section{$\mathbb R$-trees and marked metric spaces} \label{MMS}
\subsection{$\mathbb R$-trees and the Gromov-Hausdorff topology} \label{RT}
A compact metric space $(\mathcal T, d)$ is called an \textit{$\mathbb R$-tree} \cite{20,26} if for each $x,y \in \mathcal T$ the following holds. \begin{enumerate} \item[(i)] There is an isometry $f_{x,y} \colon [0, d(x,y)] \rightarrow \mathcal T$ such that $f_{x,y}(0)=x$ and $f_{x,y}(d(x,y))=y$. \item[(ii)] For all injective paths $g \colon [0,1] \rightarrow \mathcal T$ with $g(0)=x$ and $g(1)=y$, we have $g([0,1])=f_{x,y}([0,d(x,y)])$. \end{enumerate} We denote the range of $f_{x,y}$ by $[[x,y]]:=f_{x,y}\left([0,d(x,y)]\right)$. All our $\mathbb R$-trees will be \em rooted \em at a distinguished element $\rho$, the \textit{root} of $\mathcal T$. We call two $\mathbb R$-trees $(\mathcal T, d, \rho)$ and $(\mathcal T', d', \rho')$ \textit{equivalent} if there is an isometry from $\mathcal T$ to $\mathcal T'$ that maps $\rho$ onto $\rho'$. We denote by $\mathbb T$ the set of equivalence classes of rooted $\mathbb R$-trees, which we equip with the {Gromov-Hausdorff distance} $d_{\rm GH}$ \cite{21} to obtain the Polish space $(\mathbb T, d_{\rm GH})$. The \textit{Gromov-Hausdorff distance} between two $\mathbb{R}$-trees $(\mathcal T, d, \rho)$ and $(\mathcal T',d', \rho')$ is defined as \begin{equation} d_{\rm GH} \left(\left(\mathcal T, d,\rho\right), \left(\mathcal T', d',\rho'\right)\right) := \inf \limits_{\varphi, \varphi'} \left\{\max \left\{\delta \left(\varphi\left(\rho\right), \varphi'\left(\rho'\right)\right), \delta_{\rm H}\left(\varphi \left(\mathcal T\right), \varphi'\left(\mathcal T'\right)\right)\right\}\right\}, \label{GHdist} \end{equation} where the infimum is taken over all metric spaces $(\mathcal M, \delta)$ and all isometric embeddings $\varphi\colon \mathcal T \rightarrow \mathcal M$, $\varphi' \colon \mathcal T' \rightarrow \mathcal M$ into the common metric space $(\mathcal M, \delta)$, and $\delta_{\rm H}$ is the Hausdorff distance between compact subsets of $(\mathcal M, \delta)$. 
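For finite subsets of a common metric space, the Hausdorff distance $\delta_{\rm H}$ appearing in \eqref{GHdist} is straightforward to compute; the following sketch (our illustration, with the real line as ambient space) evaluates it for one fixed pair of embeddings, which gives an upper bound on $d_{\rm GH}$, since $d_{\rm GH}$ takes the infimum over all embeddings.

```python
def hausdorff(A, B, dist):
    """Hausdorff distance between finite sets A, B in a common metric
    space with metric dist; for one fixed embedding this upper-bounds
    the Gromov-Hausdorff distance."""
    d_ab = max(min(dist(a, b) for b in B) for a in A)
    d_ba = max(min(dist(a, b) for a in A) for b in B)
    return max(d_ab, d_ba)

# two finite "trees" embedded in the real line with the usual metric
A, B = [0.0, 1.0, 3.0], [0.0, 2.0]
print(hausdorff(A, B, lambda x, y: abs(x - y)))  # -> 1.0
```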
It is well-known that the Gromov-Hausdorff distance only depends on equivalence classes of rooted $\mathbb R$-trees, and we equip $\mathbb T$ with the Borel $\sigma$-algebra $\mathcal B(\mathbb T)$ induced by $d_{\rm GH}$.
We can enhance a rooted $\mathbb R$-tree by considering a probability measure $\mu$ on its Borel sets $\mathcal B(\mathcal T)$, and call $(\mathcal T, d, \rho, \mu)$ a \textit{weighted} $\mathbb R$-tree. We call $(\mathcal T, d, \rho, \mu)$ and $(\mathcal T', d', \rho', \mu')$ \textit{equivalent} if there is an isometry from $\mathcal T$ to $\mathcal T'$ such that $\rho$ is mapped onto $\rho'$ and $\mu'$ is the push-forward of $\mu$ under this isometry. We let $\mathbb{T}_{\rm w}$ denote the set of equivalence classes of compact weighted $\mathbb R$-trees. Then $\mathbb{T}_{\rm w}$ is Polish when equipped with the \textit{Gromov-Hausdorff-Prokhorov distance} $d_{\rm GHP}$ induced by \begin{equation} d_{\rm GHP} \left(\left(\mathcal T, d,\rho, \mu\right), \left(\mathcal T', d',\rho', \mu'\right)\right) := \inf \limits_{\varphi, \varphi'} \left\{\max\left\{\delta\left(\varphi\left(\rho\right), \varphi'\left(\rho'\right)\right), \delta_{\rm H}\left(\varphi\left(\mathcal T\right), \varphi'\left(\mathcal T'\right)\right), \delta_{\text P}\left(\varphi_*\mu, \varphi'_*\mu'\right) \right\} \right\} \label{GHPdist} \end{equation} for weighted $\mathbb{R}$-trees $(\mathcal T, d, \rho, \mu)$ and $(\mathcal T',d', \rho', \mu')$, where $\varphi, \varphi', \delta_{\text H}$ are as in \eqref{GHdist}, $\varphi_*\mu$, $\varphi'_*\mu$ are the push-forwards of $\mu$, $\mu'$ via $\varphi, \varphi'$, respectively, and $\delta_{\text P}$ is the Prokhorov distance on the space of Borel probability measures on $(\mathcal M, \delta)$ given by $\delta_{\rm P}\left(\mu, \mu' \right)=\inf \left\{ \epsilon >0\colon \mu(D) \leq \mu'( D^\epsilon)+\epsilon \quad \forall D \subset \mathcal M \text{ closed}\right\}$, where $D^\epsilon:=\{x \in \mathcal M\colon \inf_{y \in D} \delta(x,y) \leq\epsilon\}$ denotes the \textit{$\epsilon$-thickening} of $D$.
While some of our developments are more easily stated in $(\mathbb T,d_{\rm GH})$ or $(\mathbb T_{\rm w},d_{\rm GHP})$, others benefit from more explicit embeddings into a particular metric space $(\mathcal{M},\delta)$, which we will mostly choose as $$\mathcal{M}=l^1(\mathbb{N}_0^2):=\left\{(s_{i,j})_{i,j\in\mathbb N_0}\in[0,\infty)^{\mathbb N_0^2}\colon\sum_{i,j\in\mathbb N_0}s_{i,j}<\infty\right\}$$ with the metric induced by the $l^1$-norm. This is a variation of Aldous's \cite{8,7,6} choice $\mathcal{M}=l^1(\mathbb{N})$. We denote by $\mathbb{T}^{\rm emb}$ the space of all compact $\mathbb{R}$-trees $\mathcal T\subset l^1(\mathbb{N}_0^2)$ with root $0\in\mathcal T$, which we equip with the Hausdorff metric $\delta_{\rm H}$, and by $\mathbb{T}^{\rm emb}_{\rm w}$ the space of all weighted compact $\mathbb{R}$-trees $(\mathcal T,\mu)$ with $\mathcal T\in\mathbb{T}^{\rm emb}$, which we equip with the metric $\delta_{\rm HP}((\mathcal T,\mu),(\mathcal T^\prime,\mu^\prime))=\max\{\delta_{\rm H}(\mathcal T,\mathcal T^\prime),\delta_{\rm P}(\mu,\mu^\prime)\}$.
\begin{prop}[e.g. \cite{RW}]\label{embprop}\begin{enumerate}\item[\rm(i)] $(\mathbb{T}^{\rm emb},\delta_{\rm H})$ and $(\mathbb{T}^{\rm emb}_{\rm w},\delta_{\rm HP})$ are separable and complete.
\item[\rm(ii)] For all $\mathcal T,\mathcal T^\prime\in\mathbb{T}^{\rm emb}$ we have $d_{\rm GH}(\mathcal T,\mathcal T^\prime)\le\delta_{\rm H}(\mathcal T,\mathcal T^\prime)$, and for all $(\mathcal T,\mu),(\mathcal T^\prime,\mu^\prime)\in\mathbb{T}^{\rm emb}_{\rm w}$, we have $d_{\rm GHP}((\mathcal T,\mu),(\mathcal T^\prime,\mu^\prime))\le\delta_{\rm HP}((\mathcal T,\mu),(\mathcal T^\prime,\mu^\prime))$.
\item[\rm(iii)] Every rooted compact $\mathbb{R}$-tree is equivalent to an element of $\mathbb{T}^{\rm emb}$, and every rooted weighted compact $\mathbb{R}$-tree is
equivalent to an element of $\mathbb{T}^{\rm emb}_{\rm w}$.
\item[\rm(iv)] For $\mathcal T_n\in\mathbb{T}^{\rm emb}$ with $\mathcal T_n\subseteq \mathcal T_{n+1}$, $n\ge 1$, and the closure $\mathcal T:=\overline{\bigcup_{n\ge 1}\mathcal T_n}$, we
have $(\mathcal T_n,n\ge 1)$ convergent in $(\mathbb{T},d_{\rm GH})$ if and only if $\lim_{n\rightarrow\infty}\delta_{\rm H}(\mathcal{T}_n,\mathcal{T})=0$. In particular, in this
case $\mathcal{T}$ is compact.
\end{enumerate} \end{prop}
For $\mathcal T\in\mathbb T^{\rm emb}$ and $c>0$, we define $c\mathcal T:=\{cx\colon x\in\mathcal T\}$. More generally for any $\mathbb{R}$-tree $(\mathcal T,d)$, we slightly abuse notation and denote by $c\mathcal T$ the metric space $(\mathcal T, cd)$ obtained when all distances are multiplied by $c$. We consider random $\mathbb R$-trees whose equivalence class in $\mathbb T$ has the distribution of a stable or Ford tree, and also refer to these trees as stable or Ford trees, and to the associated law on $\mathbb T$ as their \em distribution\em.
If $x \in \mathcal T\setminus \{\rho\}$ is such that $\mathcal T \setminus \{x\}$ is connected, we call $x$ a \textit{leaf} of $\mathcal T$. A \textit{branch point} is an element $x \in \mathcal T$ such that $\mathcal T \setminus \{x\}$ has at least three connected components. We refer to the number of these components as the \textit{degree} ${\rm deg}(x, \mathcal T)$ of $x$. We denote the sets of all leaves and branch points by ${\rm Lf}(\mathcal T)$ and ${\rm Br}(\mathcal T)$. If $\mathcal T \setminus {\rm Br}(\mathcal T)$ has only finitely many connected components, we call $\mathcal T$ a \em discrete \em $\mathbb{R}$-tree and these components (with or without one or both endpoints) \textit{edges}. We denote the set of edges by Edg($\mathcal T$), and call
$\#{\rm Lf}(\mathcal T)$ the \em size \em of $\mathcal T$. Also, $|\mathcal T|:=\#{\rm Edg}(\mathcal T)$. We call the discrete graph with edge set Edg($\mathcal T$) the \textit{shape} of $\mathcal T$.
In the case of discrete weighted $\mathbb R$-trees it will often be of interest how the total mass of $1$ is distributed between the edges, with possibly some mass in branch points, which for convenience we will also write in the form $E=\{v\}$. For any weighted $\mathbb R$-tree $(\mathcal T, \mu)$ with $n$ edges/branch points $E_1, \ldots, E_n$, the vector $(X_1, \ldots, X_n)$ with $X_i:=\mu(E_i)$, $i \in [n]$, is called the \textit{mass split} in $\mathcal T$. We will also consider mass splits in subtrees $\mathcal R\subset \mathcal T$, i.e.\ mass splits in $(\mathcal R, \mu(\mathcal R)^{-1} \mu \restriction_{\mathcal R})$. To distinguish mass splits in the \enquote{big} tree $\mathcal T$ and in \enquote{small} subtrees, we will speak of the \textit{total} and \textit{internal} (or \textit{relative}) mass splits, respectively.
The limiting trees of the weighted $\mathbb R$-trees in our constructions will be \textit{continuum trees}, i.e.\ weighted $\mathbb R$-trees $(\mathcal T, d, \mu)$ such that the probability measure $\mu$ on $\mathcal T$ satisfies the following three properties. (i) $\mu$ is supported by the set Lf($\mathcal T)$ of leaves of $\mathcal T$. (ii) $\mu$ is non-atomic, i.e. for any $x \in {\rm Lf}(\mathcal T)$, $\mu(x)=0$. (iii) For any $x \in \mathcal T \setminus {\rm Lf}(\mathcal T)$ and $\mathcal T_x := \{ \sigma \in \mathcal T\colon x \in [[\rho, \sigma]]\}$, we have $\mu(\mathcal T_x)>0$.
It is an immediate consequence of (i)-(iii) that, for any continuum tree $(\mathcal T, d)$, the set of leaves ${\rm Lf} (\mathcal T)$ is uncountable and that it has no isolated points. Finally, we introduce the notion of a \textit{reduced} subtree \begin{equation} \mathcal R(\mathcal T, x_1,\ldots,x_n):= \bigcup \limits_{i\in[n]} [[\rho, x_i]] \end{equation} of an $\mathbb R$-tree $\mathcal T$ spanned by the root and $x_1,x_2,\ldots,x_n \in {\rm Lf}(\mathcal T)$. Note that $\mathcal R(\mathcal T, x_1,\ldots,x_n)$ is a discrete $\mathbb R$-tree with root $\rho$ and leaves $x_1,\ldots,x_n$. We further consider the projection map \begin{equation} \pi_k\colon \mathcal T \rightarrow \mathcal R(\mathcal T, x_1, \ldots, x_k) , \quad y \mapsto f_{\rho, y}\left(\sup\{t \geq 0\colon f_{\rho, y}(t) \in \mathcal R(\mathcal T,x_1,\ldots,x_k) \}\right),\label{pm} \end{equation} where $f_{\rho,y} \colon [0, d(\rho,y)] \rightarrow \mathcal T$ is the unique isometry with $f_{\rho,y}(0)=\rho$ and $f_{\rho,y}(d(\rho,y))=y$ from the definition of an $\mathbb R$-tree. The push-forward of a probability measure $\mu$ on $\mathcal T$ via this projection map is denoted by $(\pi_k)_*\mu$, i.e. \begin{equation} (\pi_k)_*\mu\left(D\right)=\mu\left(\pi_k^{-1}\left(D\right)\right), \qquad D \subset \mathcal R(\mathcal T, x_1, \ldots, x_k) \text{ Borel measurable}. \label{pfpm} \end{equation} More details on $\mathbb R$-trees and proofs for the statements made in this section can be found in \cite{20,25,26}.
\subsection{$\infty$-marked $\mathbb R$-trees} \label{IMRT}
We introduce $\infty$-marked $\mathbb R$-trees to capture the framework of an $\mathbb R$-tree with infinitely many marked components. This is a generalisation of Miermont's concept of a $k$-marked metric space, \cite[Section 6.4]{22}. In the context of the two-colour line-breaking construction, the marked components correspond to the rescaled Ford trees by which we replace the branch points in the stable line-breaking construction. Each Ford tree, i.e.\ each connected marked component, corresponds to one marked subset $\mathcal R^{(i)}$ of the $\infty$-marked $\mathbb R$-tree.
A \textit{k-marked} $\mathbb R$-tree $(\mathcal T,d, \rho, (\mathcal R^{(1)}, \ldots, \mathcal R^{(k)}))$, $k \geq 1$, is a rooted $\mathbb R$-tree $(\mathcal T, d, \rho)$ with non-empty closed connected subsets $\mathcal R^{(1)}, \ldots, \mathcal R^{(k)} \subset \mathcal T$.
We call two $k$-marked $\mathbb R$-trees $(\mathcal T,d, \rho, (\mathcal R^{(1)}, \ldots, \mathcal R^{(k)}))$ and $(\mathcal T^\prime,d^\prime, \rho, (\mathcal R'^{(1)}, \ldots, \mathcal R^{\prime(k)}) )$ \textit{equivalent} if there exists an isometry from $\mathcal T$ to $\mathcal T^\prime$ such that each $\mathcal R^{(i)}$ is mapped onto $\mathcal R^{\prime(i)}$, $i \in [k]$, respectively, and $\rho$ is mapped onto $\rho'$. If $\mathcal T$ and $\mathcal T^\prime$ are equipped with mass measures $\mu$ and $\mu^\prime$, we speak of \textit{weighted} $k$-marked $\mathbb R$-trees, and we call them \textit{equivalent} if there is an isometry from $\mathcal T$ to $\mathcal T^\prime$ such that each $\mathcal R^{(i)}$ is mapped onto $\mathcal R^{\prime(i)}$, $i \in [k]$, $\rho$ is mapped to $\rho'$ and $\mu^\prime$ is the push-forward of $\mu$ under this isometry. The set of equivalence classes of $k$-marked $\mathbb R$-trees is denoted by $\mathbb T^{[k]}$, and $\mathbb T_{\rm w}^{[k]}$ is the set of equivalence classes of weighted $k$-marked $\mathbb R$-trees.
For $k$-marked $\mathbb{R}$-trees $(\mathcal T,d, \rho, (\mathcal R^{(1)}, \ldots, \mathcal R^{(k)}))$, $(\mathcal T^\prime,d^\prime, \rho^\prime, (\mathcal R^{\prime(1)}, \ldots, \mathcal R^{\prime(k)})) \in \mathbb T^{[k]}$, define \begin{align} d_{\rm{GH}}^{[k]} &\left( \left(\mathcal T,d, \rho, \left(\mathcal R^{(1)}, \ldots, \mathcal R^{(k)} \right)\right), \left(\mathcal T^\prime, d^\prime, \rho^\prime, \left(\mathcal R^{\prime(1)}, \ldots, \mathcal R^{\prime(k)}\right) \right) \right) \nonumber \\ &:=\inf_{\varphi, \varphi^\prime} \left\{ \max \left\{ \delta_{\rm{H}} \left( \varphi\left(\mathcal T\right), \varphi^\prime\left(\mathcal T^\prime\right) \right), \max \limits_{1 \leq i \leq k} \delta_{\rm H} \left( \varphi \left(\mathcal R^{(i)}\right), \varphi'\left( \mathcal R^{\prime(i)} \right)\right), \delta \left( \varphi\left(\rho\right), \varphi^\prime \left(\rho^\prime\right) \right) \right\} \right\} \label{GHK} \end{align} where the infimum is taken over all isometric embeddings $\varphi$, $\varphi^\prime$ of $\mathcal T$, $\mathcal T^\prime$ into a common metric space $(\mathcal M, \delta)$, and $\delta_{\text H}$ is the Hausdorff distance on $(\mathcal M, \delta)$. It was shown in \cite{22} that $d^{[k]}_{\rm{GH}}$ is a metric on $\mathbb T^{[k]}$.
\begin{lemma}[{\cite[Proposition 9(ii)]{22}}] \label{sepcomp} The space $(\mathbb T^{[k]}, d_{\rm GH}^{[k]})$ is separable and complete. \end{lemma}\pagebreak
We extend the notion of a $k$-marked $\mathbb R$-tree to an \textit{$\infty$-marked} $\mathbb R$-tree $(\mathcal T, d, \rho, (\mathcal R^{(i)}, i \geq 1))$. The marked components $\mathcal R^{(i)}, i \geq 1$, of an $\infty$-marked $\mathbb R$-tree $(\mathcal T, ( \mathcal R^{(i)}, i \geq 1))$ are themselves $\mathbb R$-trees when equipped with the metric restricted to $\mathcal R^{(i)}$, and rooted at the point of $\mathcal R^{(i)}$ closest to the root of $\mathcal T$, $i\geq 1$.
We will consider $\infty$-marked $\mathbb R$-trees $(\mathcal T, d, \rho, (\mathcal R^{(i)}, i \geq 1))$ with a discrete branching structure, and distinguish between \textit{internal} and \textit{external} edges of $\mathcal R^{(i)}$. External edges of $\mathcal R^{(i)}$ are edges connecting a branch point/root and a leaf of $\mathcal R^{(i)}$, internal edges connect two branch points or the root and a branch point.
As in the $k$-marked case, \textit{$\infty$-marked} $\mathbb R$-trees $(\mathcal T, d, \rho, (\mathcal R^{(i)}, i \geq 1))$, $(\mathcal T^\prime, d^\prime, \rho^\prime, (\mathcal R^{\prime(i)}, i \geq 1))$ are \textit{equivalent} if there is an isometry from $\mathcal T$ to $\mathcal T^\prime$ such that $\rho$ is mapped onto $\rho^\prime$, and each $\mathcal R^{(i)}$ is mapped onto $\mathcal R^{\prime(i)}$, $i \geq 1$, respectively. We write $\mathbb T^{\infty}$ for the set of equivalence classes of compact $\infty$-marked $\mathbb R$-trees, and equip it with the metric $d^{\infty}_{\rm GH}:=\sum_{k \geq 1} 2^{-k}d_{\rm GH}^{[k]}$, i.e. for $(\mathcal T, d, \rho, (\mathcal R^{(i)}, i \geq 1))$, $(\mathcal T^\prime, d^\prime, \rho^\prime, (\mathcal R^{\prime(i)}, i \geq 1)) \in \mathbb T^{\infty}$, \begin{align} d^{\infty}_{\rm GH}&\left(\left(\mathcal T, d, \rho,\left(\mathcal R^{(i)},i \geq 1\right)\right),\left(\mathcal T', d^\prime, \rho^\prime, \left(\mathcal R'^{(i)}, i \geq 1 \right)\right) \right)\nonumber \\ & \hspace{3cm}:= \sum_{k \geq 1} 2^{-k} d_{\rm GH}^{[k]} \left(\left(\mathcal T, \left(\mathcal R^{(1)}, \ldots, \mathcal R^{(k)} \right)\right), \left(\mathcal T', \left(\mathcal R'^{(1)}, \ldots, \mathcal R'^{(k)}\right)\right)\right). \label{GHKmarked} \end{align}
\begin{corollary} \label{33} The space $(\mathbb T^{\infty}, d_{\rm GH}^{\infty})$ is separable and complete. \end{corollary}
\begin{proof} By Lemma \ref{sepcomp}, for any $k \geq 1$, there exists a countable dense subset $\mathbb C_k \subset \mathbb T^{[k]}$ such that for any $\epsilon >0$ and any $(\mathcal T, d, \rho, (\mathcal R^{(1)}, \ldots, \mathcal R^{(k)})) \in \mathbb T^{[k]}$, there is $(\mathcal T', d', \rho', (\mathcal R'^{(1)}, \ldots, \mathcal R'^{(k)}))\in \mathbb C_k$ with \begin{equation} d_{\rm GH}^{[k]}\left(\left(\mathcal T, \left(\mathcal R^{(1)}, \ldots, \mathcal R^{(k)}\right)\right), \left(\mathcal T', \left(\mathcal R'^{(1)}, \ldots, \mathcal R'^{(k)}\right)\right)\right) < \epsilon. \label{sepink} \end{equation} For any $k \geq 1$, the set $\mathbb C_k':=\left\{ \left(\mathcal T', \left(\mathcal R'^{(1)}, \ldots, \mathcal R'^{(k)}, \{\rho'\}, \{\rho'\}, \ldots \right)\right)\colon \left(\mathcal T', \left(\mathcal R'^{(1)}, \ldots, \mathcal R'^{(k)}\right)\right) \in \mathbb C_k\right\}$ is countable. For any $(\mathcal T, (\mathcal R^{(1)}, \mathcal R^{(2)}, \ldots)) \in \mathbb T^{\infty}$ there is $(\mathcal T', (\mathcal R'^{(1)}, \ldots, \mathcal R'^{(k)}, \{\rho'\}, \{\rho'\}, \ldots)) \in \mathbb C_k'$ such that \eqref{sepink} holds when we restrict to the first $k$ marked components, i.e. for ${\rm diam}(\mathcal T)=\sup\{d(x,y)\colon x,y\in\mathcal T\}$, \begin{equation*} d_{\rm GH}^{\infty}\left(\left(\mathcal T, \left(\mathcal R^{(i)}, i \geq 1 \right)\right), \left(\mathcal T', \left(\mathcal R'^{(i)}, i \geq 1 \right)\right)\right) \leq 2^{-k} \epsilon \left( 1+2+ \cdots +2^{k-1}\right)+ \sum_{n \geq k+1} 2^{-n} {\rm diam}\left(\mathcal T\right) < \epsilon \end{equation*} for $k$ large enough. Therefore, $\mathbb C_\infty:={\bigcup_{k \geq 1} \mathbb C_k'}$ is countable and dense in $\mathbb T^\infty$, i.e. $(\mathbb T^\infty,d_{\rm GH}^{\infty})$ is separable.
To see that $(\mathbb T^\infty,d_{\rm GH}^{\infty})$ is complete, consider a Cauchy sequence $(\mathcal T_n, (\mathcal R_n^{(i)}, i \geq 1), n \geq 1)$ in $\mathbb T^\infty$. By definition of $d_{\rm GH}^{\infty}$, for any $k \geq 1$, the sequence $(\mathcal T_n, (\mathcal R_n^{(1)}, \ldots, \mathcal R_n^{(k)}), n \geq 1)$ is Cauchy in $\mathbb T^{[k]}$. By Lemma \ref{sepcomp}, $(\mathbb T^{[k]}, d_{\rm GH}^{[k]})$ is complete, and we conclude that there is $(\mathcal T, (\mathcal R^{(1)}, \ldots, \mathcal R^{(k)})) \in \mathbb T^{[k]}$ such that \begin{equation} \lim_{n \rightarrow \infty} \left(\mathcal T_n, \left(\mathcal R_n^{(1)}, \ldots, \mathcal R_n^{(k)}\right)\right) = \left(\mathcal T, \left(\mathcal R^{(1)}, \ldots, \mathcal R^{(k)}\right)\right). \label{kvaries} \end{equation} Furthermore, the limits \eqref{kvaries} are easily seen to be consistent as $k$ varies, i.e. given \eqref{kvaries} for some $k \geq 2$, \begin{equation} \lim_{n \rightarrow \infty} \left(\mathcal T_n, \left(\mathcal R_n^{(1)}, \ldots, \mathcal R_n^{(k-1)}\right)\right) = \left(\mathcal T, \left(\mathcal R^{(1)}, \ldots, \mathcal R^{(k-1)}\right)\right) \end{equation} in $(\mathbb T^{[k-1]}, d_{\rm GH}^{[k-1]})$. We conclude that there is $(\mathcal T, (\mathcal R^{(1)}, \mathcal R^{(2)}, \ldots))$ such that \begin{equation*} \lim \limits_{n \rightarrow \infty} d_{\rm GH}^{\infty}\left(\left(\mathcal T_n, \left(\mathcal R_n^{(i)}, i \geq 1\right)\right),\left(\mathcal T, \left(\mathcal R^{(i)}, i \geq 1\right)\right)\right)=0.
\end{equation*} \end{proof}
We can extend $d_{\rm GH}^{[k]}$ to a metric on $\mathbb T_{\rm w}^{[k]}$ by adding a Prokhorov component to $d_{\rm GH}^{[k]}$. For any $k \in \{1,2, \ldots\}$ and $(\mathcal T, (\mathcal R^{(1)}, \ldots, \mathcal R^{(k)}), \mu)$, $(\mathcal T', (\mathcal R'^{(1)}, \ldots, \mathcal R'^{(k)}), \mu') \in \mathbb T_{\rm w}^{[k]}$, we define \begin{align*} d_{\rm GHP}^{[k]}&\left( \left(\mathcal T, \left(\mathcal R^{(1)}, \ldots, \mathcal R^{(k)}\right), \mu\right), \left(\mathcal T', \left(\mathcal R'^{(1)}, \ldots, \mathcal R'^{(k)}\right), \mu'\right)\right) \\ & \hspace{-0.8cm} :=\inf_{\varphi, \varphi'} \left\{ \max \left\{ \delta_{\rm{H}} \left( \varphi\left(\mathcal T\right), \varphi'\left(\mathcal T'\right) \right), \max \limits_{1 \leq i \leq k} \delta_{\rm H} \left( \varphi \left(\mathcal R^{(i)}\right), \varphi'\left( \mathcal R'^{(i)} \right)\right), \delta \left( \varphi\left(\rho\right), \varphi' \left(\rho'\right) \right), \delta_{\rm P} \left( \varphi_*\mu, \varphi'_*\mu'\right) \right\} \right\} \end{align*} where $\varphi, \varphi'$ and $\varphi_*\mu$, $\varphi'_*\mu'$ are as in \eqref{GHPdist} and \eqref{GHK}. In the spirit of \eqref{GHKmarked}, we define \begin{align} d_{\rm GHP}^\infty & \left( \left( \mathcal T, \left( \mathcal R^{(i)},i \geq 1 \right), \mu \right), \left( \mathcal T', \left(\mathcal R'^{(i)}, i \geq 1 \right), \mu' \right) \right) \nonumber \\ &\hspace{2cm}:= \sum_{k \geq 1} 2^{-k} d_{\rm GHP}^{[k]} \left( \left(\mathcal T, \left(\mathcal R^{(1)}, \ldots, \mathcal R^{(k)}\right), \mu\right), \left(\mathcal T', \left(\mathcal R'^{(1)}, \ldots, \mathcal R'^{(k)}\right), \mu'\right) \right) \end{align} for two weighted $\infty$-marked $\mathbb R$-trees $(\mathcal T, (\mathcal R^{(i)},i \geq 1), \mu)$ and $( \mathcal T', (\mathcal R'^{(i)}, i \geq 1), \mu')$.
\begin{lemma} The function $d_{\rm GHP}^{[k]}$ defines a distance on $\mathbb T_{\rm w}^{[k]}$, and the space $(\mathbb T_{\rm w}^{[k]}, d_{\rm GHP}^{[k]})$ is separable and complete, for any $k \in \{0,1,2,\ldots\} \cup \{\infty\}$. \end{lemma}
\begin{proof} For $k\in\{0,1,2,\ldots\}$, the proof is a direct generalisation of the proof of Lemma \ref{sepcomp}. In particular, it is straightforward to generalise the results about $d_{\rm GHP}$ in \cite[Section 6.2/6.3]{22} to $d_{\rm GHP}^{[k]}$. For $k=\infty$, the claim can then be deduced as in the proof of Corollary \ref{33}. \end{proof}
\begin{remark} Miermont \cite{22} introduced the more general concept of a $k$-marked metric space, and studied the space $\mathbb M^{[k]}$ of equivalence classes of $k$-marked metric spaces. $\mathbb T^{[k]}$ is a closed subset of $\mathbb M^{[k+1]}$ (\cite[Lemma 2.1]{21}), i.e. the results on $(\mathbb T^{[k]}, d_{\rm GH}^{[k]})$ presented here follow from his study of $(\mathbb M^{[k]}, d_{\rm GH}^{[k]})$, $k\ge 0$. \end{remark}
\section{Mittag-Leffler distributions, strings of beads and stable trees} \label{prelim}
\subsection{Dirichlet and Mittag-Leffler distributions} \label{secml}
In this section, we present the distributional relationships that are key for our constructions. A random variable $L$ follows a (generalised) Mittag-Leffler distribution with parameters $(\alpha, \theta)$ for $\alpha >0$ and $\theta > -\alpha$ if its $p$th moment is given by
\begin{equation} \mathbb E\left[ L^p\right]=\frac{\Gamma(\theta+1)\Gamma(\theta/\alpha+1+p)}{\Gamma(\theta/\alpha+1)\Gamma(\theta+p\alpha+1)}, \quad p\geq 1, \label{mlmoms} \end{equation} for short $L \sim {\rm{ML}}(\alpha, \theta)$. The moments \eqref{mlmoms} uniquely characterise ${\rm ML}(\alpha, \theta)$, cf. \cite{5}.
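As an illustrative sanity check (not part of the original argument), the moment formula \eqref{mlmoms} can be evaluated numerically; for $\theta=0$ it reduces to the classical Mittag-Leffler moments $p!/\Gamma(p\alpha+1)$. A minimal Python sketch (the function name is ours):

```python
import math

def ml_moment(alpha, theta, p):
    # p-th moment of ML(alpha, theta), as in the displayed moment formula
    return (math.gamma(theta + 1) * math.gamma(theta / alpha + 1 + p)
            / (math.gamma(theta / alpha + 1) * math.gamma(theta + p * alpha + 1)))

# for theta = 0 the formula reduces to the classical Mittag-Leffler
# moments p! / Gamma(p * alpha + 1)
alpha = 0.5
for p in range(1, 6):
    assert abs(ml_moment(alpha, 0.0, p)
               - math.factorial(p) / math.gamma(p * alpha + 1)) < 1e-9
```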
The Mittag-Leffler distribution naturally appears when we study lengths in the trees considered in this paper. To analyse mass and length splits across the branches of these trees we have to consider Dirichlet distributions. We will be able to relate mass and length splits on the edges using the following result.
\begin{prop}[\cite{12} Proposition 4.2] \label{DIRML} Let $\beta \in (0,1)$. For $n \geq 2$, let $\theta_1, \ldots, \theta_n >0$ and let $\theta:=\sum_{i\in[n]} \theta_i$. Consider $S \sim {\rm{ML}}(\beta, \theta)$ and an independent vector $(Y_1, \ldots, Y_n) \sim {\rm{Dirichlet}}(\theta_1/\beta, \ldots, \theta_n/\beta)$. Then, \begin{equation} S \cdot \left(Y_1, \ldots, Y_n\right)\,{\buildrel d \over =}\, \left(X_1^{\beta}S^{(1)}, \ldots, X_n^{\beta}S^{(n)}\right) \end{equation} where $(X_1, \ldots, X_n) \sim {\rm{Dirichlet}}(\theta_1, \ldots, \theta_n)$ and $S^{(i)} \sim {\rm{ML}}(\beta, \theta_i)$, $i \in [n]$, are independent. \end{prop}
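The identity in Proposition \ref{DIRML} can be probed numerically by comparing first moments coordinatewise: $\mathbb E[S]\,\mathbb E[Y_i]$ against $\mathbb E[X_i^\beta]\,\mathbb E[S^{(i)}]$, using the ML moment formula above and the Beta$(\theta_i, \theta-\theta_i)$ marginal of the Dirichlet distribution. The following Python sketch is illustrative only (parameter values and function names are our own choices):

```python
import math

def ml_mean(beta, theta):
    # first moment of ML(beta, theta), from the ML moment formula
    return (math.gamma(theta + 1) * math.gamma(theta / beta + 2)
            / (math.gamma(theta / beta + 1) * math.gamma(theta + beta + 1)))

def check_first_moments(beta, thetas):
    # compare E[S * Y_i] = E[S] E[Y_i] with E[X_i^beta * S^(i)] = E[X_i^beta] E[S^(i)]
    theta = sum(thetas)
    for th_i in thetas:
        lhs = ml_mean(beta, theta) * (th_i / theta)   # E[Y_i] = theta_i / theta
        # E[X_i^beta] for the Beta(theta_i, theta - theta_i) marginal
        ex_b = (math.gamma(th_i + beta) * math.gamma(theta)
                / (math.gamma(th_i) * math.gamma(theta + beta)))
        if abs(lhs - ex_b * ml_mean(beta, th_i)) > 1e-12:
            return False
    return True

assert check_first_moments(0.4, [0.3, 0.7, 1.1])
assert check_first_moments(0.5, [0.5, 0.5])
```

Of course, matching first moments is only a consistency check, not a proof of the distributional identity.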
We will also need some standard properties of the Dirichlet distribution.
\begin{prop} \label{Diri} Let $n \in \mathbb N$, $\theta_1, \ldots, \theta_n>0$ and $X:=(X_1, \ldots, X_n) \sim {\rm{Dirichlet}}(\theta_1, \ldots, \theta_n)$. \begin{enumerate}
\item[{\rm (i)}] \textit{Symmetry}. For any permutation $\sigma\colon [n] \rightarrow [n]$, $\left(X_{\sigma(1)}, \ldots, X_{\sigma(n)}\right) \sim {\rm{Dirichlet}}\left(\theta_{\sigma(1)}, \ldots, \theta_{\sigma(n)}\right)$.
\item[{\rm (ii)}] \textit{Aggregation and deletion}. Let $X':=(\sum_{i\in[m]}X_i, X_{m+1},\ldots, X_n)$ for some $m \in [n-1]$. Then the vectors $ X' \sim {\rm{Dirichlet}}\big(\sum_{i\in[m]}\theta_i, \theta_{m+1}, \ldots, \theta_n \big) $ and $X^*:=( {X_1}/{\sum_{i\in[m]} X_i}, \ldots, {X_m}/{\sum_{i\in[m]} X_i}) \sim {\rm Dirichlet}\left(\theta_1, \ldots, \theta_m\right)$ are independent.
\item[{\rm (iii)}] \textit{Decimation}. Let $i \in [n]$, $m \in \mathbb N$, and let $\theta_{i,1}, \ldots, \theta_{i,m}>0$ be such that $\sum_{j\in[m]} \theta_{i,j}=\theta_i$. Consider an independent random vector $ \left(P_1, \ldots, P_m\right) \sim {\rm{Dirichlet}}\left(\theta_{i,1}, \ldots, \theta_{i,m}\right). $ Then we have $X'':=(X_1, \ldots, X_{i-1}, P_1X_i, \ldots, P_mX_i, X_{i+1}, \ldots, X_n)\sim {\rm{Dirichlet}}\left(\theta_1, \ldots, \theta_{i-1}, \theta_{i,1}, \ldots, \theta_{i,m}, \theta_{i+1}, \ldots, \theta_n\right). $
\item[{\rm (iv)}] \textit{Size-bias}. Let $I \in [n]$ be a random index such that $\mathbb P(I=i|X_1, \ldots, X_n)=X_i$ a.s.\ for $i \in [n]$. Then, conditionally given $I=i$, we have $X \sim {\rm{Dirichlet}}\left(\theta_1, \ldots, \theta_{i-1}, \theta_{i}+1, \theta_{i+1},\ldots, \theta_n\right)$ for any $i \in [n]$. Furthermore, we have $\mathbb P(I=i)=\theta_i\big/\sum_{j\in[n]} \theta_j$.
\end{enumerate} \end{prop} \begin{proof} We refer to \cite[Propositions 13-14, Remark 15]{29}, and the Gamma variable representation for the Dirichlet distribution. \end{proof}
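To complement the proof, properties (ii) and (iv) can be checked empirically via the Gamma representation just mentioned. The Python sketch below is purely illustrative (parameter values, sample size and names are our own choices, and the tolerances are loose Monte Carlo bounds):

```python
import random

def dirichlet(thetas, rng):
    # Gamma representation: normalise independent Gamma(theta_i, 1) variables
    g = [rng.gammavariate(th, 1.0) for th in thetas]
    s = sum(g)
    return [x / s for x in g]

rng = random.Random(2024)
thetas = [0.5, 1.0, 1.5]
n = 20000
agg_mean = 0.0
size_bias_count = 0
for _ in range(n):
    x = dirichlet(thetas, rng)
    agg_mean += (x[0] + x[1]) / n
    # draw the size-biased index I with P(I = i | X) = X_i
    u, c = rng.random(), 0.0
    for i, xi in enumerate(x):
        c += xi
        if u < c:
            break
    if i == 0:
        size_bias_count += 1

# (ii) aggregation: X_1 + X_2 ~ Beta(0.5 + 1.0, 1.5), with mean 0.5
assert abs(agg_mean - 0.5) < 0.02
# (iv) size-bias: P(I = 1) = theta_1 / (theta_1 + theta_2 + theta_3) = 1/6
assert abs(size_bias_count / n - 1.0 / 6.0) < 0.02
```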
\subsection{Chinese restaurant processes and strings of beads} \label{Sec22}
We consider $(\alpha, \theta)$-strings of beads for $\alpha \in (0,1), \theta >0$, arising in the scaling limit of ordered $(\alpha, \theta)$-Chinese restaurant processes (CRPs), cf. \cite{james2006,1,5}. Consider customers labelled by $[n]:=\{1,\ldots,n\}$ sitting at a random number of tables as follows. Let customer $1$ sit at the first table. At step $n+1$, conditionally given that we have $k$ tables with $n_1, \ldots, n_k$ customers, the next customer labelled by $n+1$ \begin{itemize} \item sits at the $i$th occupied table with probability $(n_i-\alpha)/(n+\theta)$, $i \in [k]$; \item opens a new table with probability $(k\alpha+\theta)/(n+\theta)$; conditionally on this event, the new table is placed to the left of the first table or between any two adjacent tables, each such position with probability $\alpha/(k\alpha+\theta)$, or to the right of the last table with probability $\theta/(k\alpha+\theta)$. \end{itemize}
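The table dynamics above translate directly into a simulation. The following Python sketch of the ordered $(\alpha,\theta)$-CRP is illustrative only (the function and variable names are ours, and no claim is made about efficiency):

```python
import random

def ordered_crp(n, alpha, theta, rng):
    # one run of the ordered (alpha, theta)-CRP; returns the left-to-right
    # table sizes after n customers (illustrative sketch)
    tables = [1]                       # customer 1 opens the first table
    for m in range(1, n):              # m customers are already seated
        k = len(tables)
        u = rng.random() * (m + theta)
        acc = 0.0
        for i, size in enumerate(tables):
            acc += size - alpha
            if u < acc:
                tables[i] += 1         # join the i-th occupied table
                break
        else:
            # new table, probability (k*alpha + theta)/(m + theta); position:
            # each of the k left/between slots w.p. alpha/(k*alpha + theta),
            # the rightmost slot w.p. theta/(k*alpha + theta)
            v = rng.random() * (k * alpha + theta)
            tables.insert(min(int(v / alpha), k), 1)
    return tables

rng = random.Random(7)
tables = ordered_crp(500, 0.5, 0.5, rng)
assert sum(tables) == 500 and min(tables) >= 1
```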
This induces the \textit{ordered} $(\alpha, \theta)$-CRP $(\widetilde{\Pi}_n, n\geq 1)$. The classical \textit{unordered $(\alpha,\theta)$-CRP} $(\Pi_n, n \geq 1)$ is obtained from $(\widetilde{\Pi}_n, n\geq 1)$ by ordering the blocks by least labels. For $n \in \mathbb N$, we write $\Pi_n=(\Pi_{n,1}, \ldots, \Pi_{n,K_n})$ and $\widetilde{\Pi}_n=(\widetilde{\Pi}_{n,1}, \ldots, \widetilde{\Pi}_{n,K_n})$ for the blocks of the two partitions of $[n]$, where $K_n$ denotes the number of tables at step $n$. The block sizes at step $n$ form random \textit{compositions} of $n$, $n \geq 1$, i.e.\ sequences of positive integers $(n_1,\ldots,n_k)$ with sum $n=\sum_{j\in[k]} n_j$. The composition related to $\widetilde{\Pi}_n$, $n \geq 1$, can be shown to be \textit{regenerative} in the sense of Gnedin and Pitman \cite{3}.
The number of tables $K_n$ at step $n$, rescaled by $n^\alpha$, converges a.s., i.e. there is $L_{\alpha,\theta} > 0$ a.s. such that \begin{equation} L_{\alpha,\theta} = \lim \limits_{n \rightarrow \infty}{n^{-\alpha}}{K_n} \quad \text{a.s.} \label{tables} \end{equation} The distribution of $L_{\alpha, \theta}$ can be identified as ${\rm ML}(\alpha, \theta)$. Furthermore, there are limiting proportions $(P_1, P_2, \ldots)$ of the relative table sizes $n^{-1}\#\Pi_{n,i}$, $i \in [K_n]$, as $n \rightarrow \infty$ in order of least labels, i.e. \begin{equation} \lim_{n \rightarrow \infty}\left( n^{-1} \#\Pi_{n,1}, \ldots, n^{-1} \#\Pi_{n,K_n} \right) =\left(P_1, P_2, P_3, \ldots \right) = \left(V_1, \overline{V}_1 V_2, \overline{V}_1 \overline{V}_2 V_3, \ldots\right) \quad \text{a.s.} \label{v1v2} \end{equation} where $(V_i, i \geq 1)$ are independent with $V_i \sim \text{Beta}(1-\alpha, \theta+i\alpha)$, and $\overline{V}_i:=1-V_i$. The distribution of the vector $(P_1, P_2, \ldots)$ is a \textit{Griffiths-Engen-McCloskey distribution} GEM$(\alpha, \theta)$. Ranking $(P_i, i \geq 1)$ in decreasing order we obtain a Poisson-Dirichlet distribution $(P_i^{\downarrow}, i \geq 1):=(P_i, i \geq 1)^{\downarrow} \sim \text{PD}(\alpha, \theta)$. Each $P_i, i \geq 1,$ is further associated with a position on the limiting interval $[0,L_{\alpha,\theta}]$ induced by the table order.
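The stick-breaking representation \eqref{v1v2} is straightforward to simulate. The sketch below (illustrative parameters, names our own) generates the first sticks of a GEM$(\alpha,\theta)$ vector and checks the mean of $V_1 \sim {\rm Beta}(1-\alpha, \theta+\alpha)$ by a crude Monte Carlo estimate:

```python
import random

def gem(alpha, theta, n_sticks, rng):
    # first n_sticks entries of GEM(alpha, theta) via stick breaking:
    # P_i = V_i * prod_{j<i} (1 - V_j), V_i ~ Beta(1 - alpha, theta + i*alpha)
    p, rest = [], 1.0
    for i in range(1, n_sticks + 1):
        v = rng.betavariate(1 - alpha, theta + i * alpha)
        p.append(rest * v)
        rest *= 1 - v
    return p

rng = random.Random(11)
p = gem(0.5, 0.5, 50, rng)
assert all(x > 0 for x in p) and sum(p) < 1

# the first stick V_1 ~ Beta(1 - alpha, theta + alpha) has mean
# (1 - alpha)/(1 + theta) = 1/3 here; a loose Monte Carlo check
m = sum(rng.betavariate(0.5, 1.0) for _ in range(20000)) / 20000
assert abs(m - 1.0 / 3.0) < 0.02
```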
\begin{lemma}[{\cite[Proposition 6]{1}}]\label{cvgcs} Consider an \textit{ordered} $(\alpha, \theta)$-CRP $(\widetilde{\Pi}_n=(\widetilde{\Pi}_{n,1},\ldots, \widetilde{\Pi}_{n,K_n}), n\geq 1)$ for $\alpha \in (0,1)$, $\theta >0$. Let $N_{n,j}:=\sum _{i\in[j]} \# \widetilde{\Pi}_{n,i}$, $j \in [n]$, be the number of customers at the first $j$ tables from the left. Then, \begin{equation} \lim \limits_{n \rightarrow \infty} \left\{n^{-1}{N_{n,j}}, j \geq 0\right\}= \mathcal N_{\alpha, \theta}:=\left\{1-e^{-G_t}, t \geq 0\right\}^{\text{\rm cl}} \quad \text{a.s.} \label{lefttables} \end{equation} with respect to the Hausdorff metric on closed subsets of $[0,1]$, where ${\rm cl}$ denotes the closure in $[0,1]$, and $(G_t, t\geq 0)$ is a subordinator with Laplace exponent $\Phi_{\alpha, \theta}(s)={s\Gamma(s+\theta)\Gamma(1-\alpha)}/{\Gamma(s+\theta+1-\alpha)}.$
There is a continuous local time process $\mathcal L=(\mathcal L(u), u \in [0,1])$ for $\mathcal L_n(u):=\#\{j \in [K_n]\colon n^{-1} N_{n,j} \leq u\}$, $u \in [0,1]$, such that \begin{equation*} \lim \limits_{n \rightarrow \infty} \sup \limits_{u \in [0,1]} \lvert n^{-\alpha} \mathcal L_n(u)- \mathcal L(u)\rvert =0 \qquad \text{a.s.}\end{equation*} where $\mathcal N_{\alpha, \theta}$ is the set of points at which $\mathcal L$ increases a.s. \end{lemma}
We refer to the collection of open intervals in $[0,1]\setminus \mathcal N_{\alpha, \theta}$ as the \textit{($\alpha,\theta$)-regenerative interval partition} associated with the local time process $\mathcal L$, where $\mathcal L(1)=L_{\alpha,\theta}$ a.s. Note that the joint law of ranked lengths of components of this interval partition is PD$(\alpha,\theta)$.
The inverse local time $\mathcal L^{-1}$ defined by \begin{equation} \mathcal L^{-1}:[0,L_{\alpha, \theta}) \rightarrow [0,1), \qquad \mathcal L^{-1}(x):=\inf\{u \in [0,1]\colon \mathcal L(u) > x\}, \end{equation} is right-continuous increasing. We equip the random interval $[0, L_{\alpha, \theta}]$ with the Stieltjes measure $d \mathcal L^{-1}$.
\begin{definition}[String of beads]\rm A \textit{string of beads} $(I, \lambda)$ is an interval $I$ equipped with a discrete mass measure $\lambda$. A measure-preserving isometric copy of $([0,L_{\alpha, \theta}], d \mathcal L^{-1})$ associated as above with an $(\alpha, \theta)$-regenerative interval partition $[0,1] \setminus \mathcal N_{\alpha, \theta}$ is called an \textit{$(\alpha, \theta)$-string of beads}, for $\alpha \in (0,1), \theta > 0$. \end{definition}
We can view a string of beads $([0,K], \lambda)$ as a weighted $\mathbb R$-tree consisting of one single branch connecting the root $0$ with a leaf at distance $K$.
Since the lengths of the interval components of an $(\alpha, \theta)$-regenerative interval partition $[0,1] \setminus \mathcal N_{\alpha, \theta}$ are the masses of the atoms of the associated $(\alpha, \theta)$-string of beads, we conclude that the joint law of the masses $(P_i^\downarrow, i \geq 1)$ of the atoms of an $(\alpha, \theta)$-string of beads ranked in decreasing order is PD$(\alpha, \theta)$. It is well-known that the length $L_{\alpha, \theta}\sim \text{ML}(\alpha, \theta)$ of an $(\alpha, \theta)$-string of beads can be recovered from the ranked atom masses $(P_i^{\downarrow}, i \geq 1)$ or the vector $(P_i, i \geq 1)$ of the stick-breaking representation \eqref{v1v2} via \begin{equation} L_{\alpha, \theta}=\lim \limits_{i \rightarrow \infty} i \Gamma(1-\alpha) (P_i^{\downarrow})^{\alpha}=\lim_{k \rightarrow \infty} \left(1-\sum_{i\in[k]} P_i\right)^{\alpha} \alpha^{-\alpha}k^{1-\alpha}, \label{alphadiv} \end{equation} which is the so-called \textit{$\alpha$-diversity} of $(P_i^\downarrow, i \geq 1) \sim {\rm{PD}}(\alpha, \theta)$, cf. \cite[Lemma 3.11]{5}.
One of the key properties of $(\alpha, \theta)$-strings of beads is the regenerative nature inherited from the underlying regenerative interval partition, cf. \cite{3}. Pitman and Winkel \cite{1} developed a method (``$(\alpha,\theta)$-coin-tossing sampling'') to sample an atom of an $(\alpha, \theta)$-string of beads such that the two strings of beads obtained in this way are rescaled independent $(\alpha, \alpha)$- and $(\alpha, \theta)$-strings of beads (the first one being the one closer to the origin). The mass split between the two induced interval components and the selected atom is Dirichlet$(\alpha, 1-\alpha, \theta)$, with parameters assigned in their order on the interval $[0, L_{\alpha, \theta}]$. When $\theta=\alpha$, the special sampling reduces to uniform sampling from the mass measure $d\mathcal L^{-1}$.
\begin{prop}[{\cite[Proposition 10/14(b), Corollary 15]{1}}] \label{cointoss} Let $(I, \lambda):=([0, L_{\alpha, \theta}], d \mathcal L^{-1})$ be an $(\alpha, \theta)$-string of beads for some $\alpha \in (0,1), \theta >0$.
Then there is a random variable $J\in(0,L_{\alpha,\theta})$ on a suitably enlarged probability space such that the following are independent. \begin{itemize} \item The mass split $( \lambda([0, J)), \lambda(J), \lambda((J, L_{\alpha, \theta}])) \sim \text{\rm Dirichlet}(\alpha, 1- \alpha, \theta)$; \item (the isometry class of) the $(\alpha, \alpha)$-string of beads $( \lambda([0,J))^{-\alpha} [0,J), \lambda([0,J))^{-1} \lambda \restriction_{[0,J)} )$; \item (the isometry class of) the $(\alpha, \theta)$-string of beads $( \lambda((J, L_{\alpha, \theta}])^{-\alpha} (J, L_{\alpha, \theta}], \lambda((J, L_{\alpha, \theta}])^{-1} \lambda \restriction_{(J, L_{\alpha, \theta}]} ).$ \end{itemize} \end{prop}
In Section \ref{sec4} we will formulate the algorithms of the introduction based on masses rather than lengths. In particular, the attachment points in the update step will be mass-sampled, not length-sampled. The following lemma will imply that the algorithms based on masses induce the length versions.
\begin{lemma} \label{equivalence} Let $(X_1, \ldots, X_n) \sim {\rm Dirichlet}(\theta_1, \ldots, \theta_n)$ for some $\theta_1, \ldots, \theta_n >0$ and $n \in \mathbb N$, and let $([0,L_i], \lambda_i)$ be independent $(\alpha, \theta_i)$-strings of beads, respectively, $i\in[n]$.
\begin{itemize} \item Select $I'=j \in [n]$ with probability $X_j$ and, conditionally given $I'=j$, select $L' \in [0, L_j]$ via $(\alpha, \theta_j)$-coin tossing sampling on $([0,L_j], \lambda_j)$. \item Select $I''=j \in [n]$ with probability proportional to $X_j^\alpha L_j$ and, conditionally given $I''=j$, select $L''=B L_j$ where $B \sim {\rm Beta}(1, \theta_j/\alpha)$ is independent. \end{itemize} Then $ \left(I',L_1, \ldots, L_{I'-1}, L', L_{I'}-L', L_{I'+1}, \ldots, L_n\right) \,{\buildrel d \over =}\, \left(I'',L_1, \ldots, L_{I''-1}, L'', L_{I''}-L'', L_{I''+1}, \ldots, L_n\right). $ \end{lemma}
\begin{proof} We need to show that, for any bounded and continuous function $f\colon \mathbb R^{n+2} \rightarrow \mathbb R$ \begin{equation} \mathbb E \left[ f \left(I',L_1, \ldots, L_{I'-1}, L', L_{I'}\!-\!L', L_{I'+1}, \ldots, L_n \right)\! \right] = \mathbb E \left[ f \left(I'',L_1, \ldots, L_{I''-1}, L'', L_{I''}\!-\!L'', L_{I''+1}, \ldots, L_n \right)\! \right] . \label{massselislengthsel}\end{equation}
Conditioning on $I'=j$, and using Proposition \ref{Diri}(iv), the LHS of \eqref{massselislengthsel} is \begin{align*} \sum_{j\in[n]} \mathbb E \left[ f \left(I',L_1, \ldots, L_{I'-1}, L', L_{I'}-L', L_{I'+1}, \ldots, L_n\right) \mid I'=j \right] \left({\theta_j}\bigg/{\sum_{i\in[n]} \theta_i}\right).\end{align*} Conditionally given $I'=j$, we select an atom of the $(\alpha, \theta_j)$-string of beads via $(\alpha, \theta_j)$-coin tossing sampling. By Proposition \ref{cointoss} and Proposition \ref{Diri}(ii), the mass split $(1-\lambda_j(\{L'\}))^{-1}\left(\lambda_j\left(\left[0, L'\right)\right), \lambda_j\left(\left(L', L_j\right]\right)\right) \sim {\rm Dirichlet}\left(\alpha, \theta_j\right)$ and the $(\alpha, \alpha)$- and the $(\alpha, \theta_j)$-strings of beads given by \begin{equation*} \left( \lambda_j\left(\left[0, L'\right)\right)^{-\alpha} [0, L'), \lambda_j\left(\left[0, L'\right)\right)^{-1}\lambda_j\restriction_{\left[0, L'\right)} \right), \quad \left( \lambda_j\left(\left( L', L_j\right]\right)^{-\alpha} \left(L', L_j\right], \lambda_j\left(\left( L',L_j\right]\right)^{-1}\lambda_j\restriction_{\left(L',L_j\right]} \right), \end{equation*} respectively, are independent. By Proposition \ref{DIRML}, we conclude that the relative length split on $[0,L_j]$ is $L'/L_j\sim {\rm Beta}(1, \theta_j/\alpha)$. To see \eqref{massselislengthsel}, proceed likewise with the RHS of \eqref{massselislengthsel}, using that, by Proposition \ref{DIRML}, $(L_1, \ldots, L_n)\sim {\rm Dirichlet}(\theta_1/\alpha, \ldots, \theta_n/\alpha)$. More precisely, note that $\mathbb P(I''=j)=(\theta_j/\alpha)/(\sum_{i\in[n]} \theta_i/\alpha)=\theta_j/\sum_{i\in[n]} \theta_i$, and that, conditionally given $I''=j$, we have $L''/L_j\sim {\rm Beta}(1,\theta_j/\alpha)$, as before. \end{proof}
We will also need the following statement about sampling from Poisson-Dirichlet distributions.
\begin{prop}[Sampling from PD$(\alpha, \theta)$, {\cite[Proposition 34]{30}}] \label{usefulPD} Let $(P_i, i \geq 1) \sim {\rm PD}(\alpha, \theta)$ for some $0 \leq \alpha < 1$ and $\theta > -\alpha$, and let $N$ be an index such that \begin{equation*} \mathbb P\left(N=i \mid P_i, i \geq 1 \right)=P_i, \quad i \geq 1. \end{equation*} Let $(P'_i, i \geq 1)$ be obtained from $(P_i, i \geq 1)$ by deleting $P_N$, and set $P''_i:=P_i'/(1-P_N)$ for $i \geq 1$. Then, $P_N \sim {\rm Beta}(1-\alpha, \alpha + \theta)$, and $(P_i'', i \geq 1) \sim {\rm PD}(\alpha, \alpha + \theta)$ is independent of $P_N$. \end{prop}
\subsection{Line-breaking constructions of the stable tree, and the proof of Theorem \ref{samestable}} \label{lconst}
In this section, we collect some preliminary results on stable trees and prove Theorem \ref{samestable}. Recall the line-breaking construction of the stable tree given by Algorithm \ref{GH} yielding the sequence of compact $\mathbb R$-trees $(\mathcal T_k, k \geq 0)$. Leaves and branch points have a natural order induced by the time of appearance in the sequence $(\mathcal T_k, k \geq 0)$, i.e. we can write $(v_i, i \geq 1)$ for the branch points, and $W_k^{(i)}$ for the branch point weight of $v_i$ in $\mathcal T_k$ (if $v_i \notin {\rm Br}(\mathcal T_k)$ or $i > b_k$, set $W_k^{(i)}=0$). We will list the edges $E_k^{(1)},\ldots,E_k^{(|\mathcal T_k|)}$ of $\mathcal T_k$ and their
lengths $L^{(i)}_k={\rm Leb}(E^{(i)}_k)$, $i\in[|\mathcal T_k|]$, in the order encountered on a depth-first search directed by least labels.
\begin{lemma}[{\cite[Proposition 3.2]{12}}] \label{GH1} For $k \geq 1$, given the shapes of $\mathcal T_0, \ldots, \mathcal T_k$, and $\lvert\mathcal T_k\rvert -(k+1)=\ell$, i.e. conditionally given that the tree $\mathcal T_k$ has $k+1+\ell$ edges and $\ell$ branch points $(v_i, i \in [\ell])$, \begin{equation} \left( L_k^{(1)}, \ldots, L_k^{(k+1+\ell)}, W_k^{(1)}, \ldots, W_k^{(\ell)}\right)=S_k \cdot \left(Z_k^{(1)}, \ldots, Z_k^{(k+\ell+1)}, Z_k^{(k+\ell+2)}, \ldots, Z_{k}^{(k+1+2\ell)}\right) \end{equation} where $(Z_k^{(1)}, \ldots, Z_k^{(k+\ell+1)}, Z_k^{(k+\ell+2)}, \ldots, Z_{k}^{(k+1+2\ell)}) \sim {\rm{Dirichlet}}\left( 1, \ldots, 1, w(d_1)/{\beta}, \ldots, w(d_\ell)/{\beta}\right)$ and $S_k \sim {\rm{ML}}(\beta, \beta+k)$ are independent, $w(d_i)=(d_i-3)(1-\beta)+1-2\beta$ and $d_i={\rm deg}(v_i, \mathcal T_k)$ is the degree of $v_i$. \end{lemma}
\begin{corollary}[Masses as lengths] \label{Massesaslengths} For $k \geq 1$, given the shapes of $\mathcal T_0, \ldots, \mathcal T_k$, and $\lvert\mathcal T_k\rvert -(k+1)=\ell$, \begin{equation} \left( L_k^{(1)}, \ldots, L_k^{(k+1+\ell)}, W_k^{(1)}, \ldots, W_k^{(\ell)}\right)= \left(X_{1}^\beta M_k^{(1)}, \ldots, X_{k+1+2\ell}^\beta M_{k}^{(k+1+2\ell)}\right) \label{masl} \end{equation} where the random variables $M_k^{(i)} \sim {\rm ML}(\beta, \beta)$, $i \in [k+1+\ell]$, $M_k^{(k+1+\ell+i)} \sim {\rm ML}(\beta, w(d_i))$, $i \in [\ell]$, and $X=\left(X_1, \ldots, X_{k+1+\ell}, X_{k+2+\ell}, \ldots, X_{k+1+2\ell}\right)\sim {\rm{Dirichlet}}\left( \beta, \ldots, \beta, w(d_1), \ldots,w(d_\ell)\right)$ are independent, and $w(d_i)=(d_i-3)(1-\beta)+1-2\beta$ with $d_i={\rm deg}(v_i, \mathcal T_k)$. \end{corollary}
\begin{proof} We apply Lemma \ref{GH1}, and Proposition \ref{DIRML} with $n=k+1+2\ell$, $\theta_i=\beta, i \in [k+1+\ell]$, and $\theta_{k+1+\ell+i}=w(d_i)$,
$i \in [\ell]$. It remains to check that $\theta=\sum_{i\in[n]} \theta_i=\beta+k$, i.e. that
\begin{equation*} (\beta+k)/\beta= k+\ell+1+\sum_{i\in[\ell]} \left({(d_i-3)(1/\beta-1)+(1/\beta-2)} \right).
\end{equation*}
This follows from the fact that the sum of the vertex degrees in a tree with $m$ edges is $2m$, i.e.
$\sum_{i\in[\ell]} d_i=2(k+1+\ell)-(k+1)-1$, since $\mathcal T_k$ has $k+1+\ell$ edges and $(k+1)+1$ degree-1 vertices. \end{proof}
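The bookkeeping in this proof can also be verified mechanically: for any admissible degree sequence with $\sum_{i\in[\ell]} d_i = k+2\ell$, the Dirichlet parameters sum to $\beta + k$. A small Python check with hypothetical values of $\beta$, $k$ and the degrees (chosen by us for illustration):

```python
def weight(d, beta):
    # w(d) = (d - 3)(1 - beta) + 1 - 2*beta, the branch-point weight of Lemma GH1
    return (d - 3) * (1 - beta) + 1 - 2 * beta

# hypothetical values of beta, k and branch-point degrees; admissibility
# requires sum(d_i) = k + 2*ell for ell branch points
for beta in (0.3, 0.45):
    for k, degrees in ((4, [5, 3]), (6, [3, 4, 5])):
        ell = len(degrees)
        assert sum(degrees) == k + 2 * ell
        total = (k + 1 + ell) * beta + sum(weight(d, beta) for d in degrees)
        assert abs(total - (beta + k)) < 1e-12
```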
Haas et al. \cite{35} analysed the stable tree as an example of a self-similar CRT. Let $(\mathcal T, d, \rho)$ with mass measure $\mu$ be the stable tree of parameter $\beta \in (0,1/2]$, and let $\Sigma \sim \mu$ be a leaf sampled from $\mu$. Consider the \textit{spine}, i.e.\ the path $[[\rho, \Sigma]]$ from the root to this leaf. Remove all vertices of degree one or two from this path. This yields a sequence of connected components that can a.s. be ranked in decreasing order of mass, and which we denote by $(\overline{\mathcal S}^{(i)}, i \ge 1)$, rooted at vertices $\rho_i \in [[\rho, \Sigma]]$ of a.s. infinite degree, $i\ge 1$, respectively. Each $\overline{\mathcal S}^{(i)}$ further separates into a sequence $(\overline{\mathcal S}^{(i)\downarrow}_j,j\ge 1)$ when removing $\rho_i$. \begin{itemize} \item The \textit{coarse spinal mass partition} is $\big(\overline{P}^{(i)}, i \geq 1\big):=\big(\mu(\overline{\mathcal S}^{(i)}), i \geq 1\big)$,
\item The \textit{fine spinal mass partition} is the sequence $\big(\overline{P}_{j}^{(i)\downarrow}, j\geq 1,i\geq 1\big)^{\downarrow}:=\big(\mu \big(\overline{\mathcal S}_{j}^{(i)\downarrow}\big), j \ge 1, i \ge 1\big)^{\downarrow}$, i.e.\ the ranked sequence of masses of connected components obtained after removal of the whole spine. \end{itemize}
\begin{theorem}[Mass partition in the stable tree, {\cite[Corollary 10]{35}}] \label{masspart}
Let $\beta \in (0,1/2]$, and let $\mathcal T$ be the stable tree of parameter $\beta$. Then the following statements hold. \begin{enumerate} \item[{\rm (i)}] The coarse spinal mass partition has a Poisson-Dirichlet distribution with parameters $(\beta, \beta)$, i.e. \begin{equation*} \left(\overline{P}^{(i)}, i \geq 1\right) = \left(\mu\left(\overline{\mathcal S}^{(i)}\right), i \ge 1\right) \sim {\rm{PD}}\left(\beta, \beta\right). \end{equation*} \item[{\rm (ii)}] The fine spinal mass partition is a $(1-\beta,-\beta)$-fragmentation of the coarse spinal mass partition, i.e. for each block
$\mu(\overline{\mathcal S}^{(i)})$ of the coarse partition, the relative part sizes
$(\mu(\overline{\mathcal S}^{(i)\downarrow}_j)/\mu(\overline{\mathcal S}^{(i)}),j\ge 1)$ are independent with distribution {\rm PD}$(1-\beta, -\beta)$, $i\ge 1$.
\item[{\rm (iii)}]Conditionally given the fine spinal mass partition $(\mu(\overline{\mathcal S}_{j}^{(i)\downarrow}), j \ge 1, i\ge 1)^{\downarrow}$, the rescaled trees equipped with restricted mass measures \begin{equation} \left(\mu\left(\overline{\mathcal S}_{j}^{(i)\downarrow}\right)^{-\beta} \overline{\mathcal S}_{j}^{(i)\downarrow}, \mu(\overline{\mathcal S}_{j}^{(i)\downarrow})^{-1} \mu \restriction_{\overline{\mathcal S}_{j}^{(i)\downarrow}} \right), \quad j \ge 1, i \ge 1, \end{equation}
are i.i.d. copies of $(\mathcal T, \mu)$. \end{enumerate} \end{theorem}
The $\alpha$-diversities of PD$(\alpha, \theta)$ partitions can naturally be interpreted as lengths in trees. In particular the $\beta$-diversity of the coarse spinal mass partition has distribution $S_0 \sim \text{ML}(\beta, \beta)$, which is the starting point of Goldschmidt-Haas' line-breaking constructions. The fragmenting PD$(1-\beta, -\beta)$ random partitions for each block of the coarse spinal mass partition capture important information about the branch points that we relate to sizes of the Ford CRTs by which we replace them in Theorem \ref{branchrepl}. Specifically, the independence of these PD$(1-\beta, -\beta)$ vectors relates to the independence of the Ford trees. Sampling i.i.d. leaves $(\Sigma_k, k \geq 0)$ from the measure $\mu$ of the stable tree yields a natural random order of $(\overline{\mathcal S}_j^{(i)\downarrow}, j \ge 1)$, in terms of smallest leaf labels of the subtrees, which we write as $(\overline{\mathcal S}_j^{(i)}, j \geq 1)$, for each $i \geq 1$.
\begin{corollary} \label{GEM} Let $(\mathcal T, \mu)$ be a stable tree of index $\beta \in (0,1/2]$ with associated reduced tree sequence $(\mathcal T_k, k \geq 0)$. Let
$\overline{\mathcal S}^{(i)}$ be the subtree rooted at $\rho_i \in [[\rho, \Sigma_0]]$, $i \ge 1$, related to the coarse spinal mass partition $(\mu(\overline{\mathcal {S}}^{(i)}), i \ge 1)$.
For each $i \ge 1$, let $(\overline{\mathcal S}_j^{(i)}, j \geq 1)$ denote the connected components of $\overline{\mathcal S}^{(i)} \setminus \rho_i$, ordered in increasing order of least
leaf labels. Then $ ( \mu(\overline{\mathcal S}^{(i)})^{-1} \mu(\overline{\mathcal S}_j^{(i)}), j \geq 1) \sim {\rm GEM}\left(1-\beta, -\beta\right).$ \end{corollary}
\begin{proof} This is a direct consequence of Theorem \ref{masspart}(ii) in combination with results on sampling from PD$(\alpha, \theta)$, cf. Proposition \ref{usefulPD}, and the construction \eqref{v1v2} of GEM$(\alpha, \theta)$. \end{proof}
We now show that the line-breaking construction of the stable tree based on masses (Algorithm \ref{masssel}) yields trees $(\mathcal T_k, k \geq 0)$ as in \eqref{findimmarg1} and Algorithm \ref{GH}. The following result will prove Theorem \ref{samestable}.
\begin{prop} \label{masssel2} The sequence of weighted $\mathbb R$-trees $(\mathcal T_k, \mu_k, k \geq 0)$ from Algorithm \ref{masssel} has the same distribution as the sequence of trees in \eqref{findimmarg1} equipped with projected subtree masses, i.e. with the mass measures $(\pi_k)_*\mu, k \geq 1,$ as in \eqref{pm}-\eqref{pfpm}. Furthermore, conditionally given $\lvert \mathcal T_k \rvert =k+1+\ell$, the edges of $\mathcal T_k$ equipped with the mass measure $\mu_k$ restricted to each edge, are rescaled independent $(\beta, \beta)$-strings of beads given via \begin{equation} \left( \mu_k\left(E_k^{(i)}\right)^{-\beta} E_k^{(i)}, \mu_k\left(E_k^{(i)}\right)^{-1} \mu_k \restriction_{E_k^{(i)}}\right), \quad i \in [k+1+\ell], \end{equation} and the total mass distribution \begin{equation*} \left(\mu_k\left(E_k^{(1)}\right), \ldots, \mu_{k}\left(E_k^{(k+1+\ell)}\right), \mu_k\left(v_1\right), \ldots, \mu_k \left(v_\ell\right)\right) \sim {\rm{Dirichlet}}\left(\beta, \ldots, \beta, w\left(d_1\right), \ldots, w\left(d_\ell\right)\right) \end{equation*} where $v_i, i \in [\ell],$ are the branch points of $\mathcal T_k$ of degrees $d_i={\rm deg}(v_i, \mathcal T_k)$, $i \in [\ell]$, respectively, and $w(d_i)=(d_i-3)(1-\beta)+(1-2\beta)$, $i \in [\ell]$, and where we number the edges $E_k^{(i)}, i \in [k+1+\ell]$ by depth-first search. \end{prop}
The proof of Proposition \ref{masssel2} is part of Appendix \ref{appen1}, where we collect several similar proofs. We also record the following consequence of Algorithm \ref{masssel} and Proposition \ref{masssel2}.
\begin{corollary} \label{GEMS} Let $(\mathcal T, \mu)$ be a stable tree of index $\beta \in (0, 1/2]$, and let $(\mathcal T_k, k \geq 0)$ be as in \eqref{findimmarg1} with branch points $(v_i, i \geq 1)$ in order of appearance. Let $k_i:=\inf\left\{k \geq 0\colon [[\rho, \Sigma_k]] \cap [[\rho, v_i]]=[[\rho, v_i]] \right\}$ and let $(\mathcal S_j^{(i)}, j \geq 1)$ be the subtrees of $\mathcal T \setminus [[\rho, \Sigma_{k_i}]]$ rooted at $v_i$ in increasing order of smallest leaf labels, $i\ge 1$. Set $P_j^{(i)}:=\mu(\mathcal S_j^{(i)})$ and $P^{(i)}:=\sum_{j \geq 1} \mu(\mathcal S_j^{(i)})$, $i \geq 1$. Then the sequences $(P_j^{(i)}/P^{(i)}, j \geq 1), i \geq 1$, are i.i.d. with distribution ${\rm GEM}(1-\beta, -\beta)$. \end{corollary}
\begin{proof} This is a direct consequence of the stick-breaking representation \eqref{v1v2} of GEM$(1-\beta, -\beta)$ and the random variables $(Q_k, k \geq 0)$ splitting branch point mass into subtrees from Algorithm \ref{masssel}. Specifically, conditionally given the branch point degrees in the sequence $(\mathcal T_k, k \geq 0)$, for each branch point $v_i$, we can find a sequence of random variables $(Q_{m}^{(i)}, m \geq 1)$ such that $$P_j^{(i)}= \mu_{k_1^{(i)}-1}\left(v_i\right) Q_j^{(i)} \prod_{m\in[j-1]} \left(1-Q_m^{(i)} \right), \quad j \geq 1,$$ where $Q_m^{(i)}:=Q_{k_m^{(i)}} \sim {\rm Beta}(\beta, m(1-\beta)-\beta)$ and $k_m^{(i)}=\inf\{k \geq 1\colon {\rm deg}(v_i, \mathcal T_k) =m+1\}$. Note that, for $m_1, \ldots, m_i \geq 1$, the random variables $Q_j^{(i)}, j \in [m_i]$, $i \geq 1$, have conditional distributions given $k_j^{(i)}$, $j \in [m_i]$, $i \geq 1$, that do not depend on $k_j^{(i)}$, $j \in [m_i]$, $i \geq 1$, and are hence unconditionally independent. \end{proof}
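The stick-breaking mechanism used in this proof is easy to simulate. The following minimal Python sketch (the function name and truncation level are our own choices) generates the first weights of GEM$(\alpha,\theta)$ via break factors $W_j \sim {\rm Beta}(1-\alpha, \theta+j\alpha)$; for GEM$(1-\beta,-\beta)$ these are exactly the ${\rm Beta}(\beta, j(1-\beta)-\beta)$ variables appearing above.

```python
import random

def sample_gem(alpha, theta, n_sticks=50, rng=random):
    """First n_sticks weights of GEM(alpha, theta) via stick-breaking:
    W_j ~ Beta(1 - alpha, theta + j * alpha), P_j = W_j * prod_{i<j}(1 - W_i).
    Requires theta + alpha > 0, e.g. alpha = 1 - beta, theta = -beta, beta < 1/2."""
    weights, remaining = [], 1.0
    for j in range(1, n_sticks + 1):
        w = rng.betavariate(1.0 - alpha, theta + j * alpha)
        weights.append(remaining * w)
        remaining *= 1.0 - w
    return weights

beta = 0.4
# GEM(1-beta, -beta): break factors are Beta(beta, j*(1-beta) - beta),
# matching the Q_m^{(i)} in the proof above.
p = sample_gem(1.0 - beta, -beta)
```

Truncating at a finite number of sticks leaves a positive remaining mass; note also that the weights appear in size-biased order, not in ranked order.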
\section{The binary two-colour line-breaking construction with masses} \label{sec4}
We present an enhanced version of Algorithm \ref{twocolour}, which is based on sampling from the mass measure. We use this enhanced version to prove Theorem \ref{Mainresult1}.
The following (1-marked) string of beads will be at the centre of our construction. For $\beta \in (0,1/2]$, consider
two independent strings of beads $([0,K_1], \lambda_1)$ and $([0,K_2], \lambda_2)$, a $(\beta, 1-2\beta)$- and a $(\beta, \beta)$-string of beads respectively, and an independent $B \sim {\rm Beta}(1-2\beta, \beta)$. Then scale the two strings by $B$ and $1-B$, as follows: set
\begin{equation} K:=B^\beta K_1+(1-B)^\beta K_2, \qquad K':=B^\beta K_1 \label{twocolstr1}\end{equation} and consider the mass measure $\lambda$ on $[0,K]$ given by \begin{equation}\label{twocolstr2} \lambda\left(\left[0,x\right]\right)=\begin{cases} B \lambda_1\left(\left[0,B^{-\beta}x\right]\right) &\text{ if } x \in \left[0,K'\right], \\ B+ \left(1-B\right) \lambda_2\left(\left[0,\left(1-B\right)^{-\beta}\left(x-K'\right)\right]\right) &\text{ if } x \in \left[K', K\right]. \end{cases} \end{equation} The string of beads $([0,K],\lambda)$ is called a \textit{$\beta$-mixed} string of beads \cite{RW}. We denote the distributions of $([0,K], \lambda)$ and $([0,K],[0,K'], \lambda)$ on $\mathbb{T}_{\rm w}$ and $\mathbb{T}_{\rm w}^{[1]}$ by $\nu_\beta$ and $\nu_\beta^{[1]}$, respectively.
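Once $B$ is drawn, the two-step rescaling \eqref{twocolstr1}-\eqref{twocolstr2} is a deterministic operation on the two input strings. As an illustration, here is a minimal Python sketch (the representation and function name are ours: a string of beads is a pair of a length and a list of (location, mass) atoms) that forms the $\beta$-mixed string from two given strings:

```python
import random

def beta_mixed_string(beta, string1, string2, B=None, rng=random):
    """Form a beta-mixed string of beads from string1, a (beta, 1-2*beta)-string,
    and string2, a (beta, beta)-string, following (twocolstr1)-(twocolstr2).
    A string of beads is represented as (K, atoms), atoms = [(location, mass), ...].
    Sampling B requires beta in (0, 1/2); pass B explicitly otherwise."""
    if B is None:
        B = rng.betavariate(1.0 - 2.0 * beta, beta)
    K1, atoms1 = string1
    K2, atoms2 = string2
    K_mark = B ** beta * K1                      # K' = B^beta K1, the marked part
    K = K_mark + (1.0 - B) ** beta * K2          # total length K
    # atoms of string1 land on [0, K'] with mass scaled by B,
    # atoms of string2 on [K', K] with mass scaled by 1 - B
    atoms = [(B ** beta * x, B * p) for x, p in atoms1]
    atoms += [(K_mark + (1.0 - B) ** beta * x, (1.0 - B) * p) for x, p in atoms2]
    return K, K_mark, atoms
```

If each input string carries total mass $1$, the output again carries total mass $B + (1-B) = 1$, consistent with \eqref{twocolstr2}.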
\begin{remark} \label{betamixedlength}
By Proposition \ref{DIRML} with $\theta_1=1-2\beta, \theta_2=\beta$, noting that $(B, 1-B) \sim \text{Dirichlet}(1-2\beta, \beta)$, we have \begin{equation} \left(B^\beta K_1, (1-B)^\beta K_2\right)\,{\buildrel d \over =}\, L \left({B}', 1-{B}'\right)\label{twoclb} \end{equation} where ${B}' \sim \text{Beta}(1/\beta-2, 1)$ is independent of $L$, and $L \sim \text{ML}(\beta, 1-\beta)$. We conclude that for each $\beta$-mixed string of beads $\xi=([0,K], \lambda)$ we have $ (\lambda(x)\colon x \in [0,K], \lambda(x) > 0)^{\downarrow} \sim \text{PD}(\beta, 1-\beta ), $ cf. e.g. \cite[Corollary 1.2]{31}. Although the length of a $\beta$-mixed string of beads $\xi$ is ML$(\beta, 1-\beta)$ and the atom sizes are PD$(\beta,1-\beta)$, we cannot expect that $\xi$ is a $(\beta, 1-\beta)$-string of beads when $\beta \in (0,1/2)$. Specifically, at the junction point in a $(\beta, 1-\beta)$-string of beads, we would expect a Beta$(\beta, 1-2\beta)$ mass split into a rescaled $(\beta, \beta)$- and a rescaled $(\beta, 1-2\beta)$-string of beads in this order (and not vice versa). \end{remark}
We will use the notation
$ \xi = \left( [0,K], \sum_{i \geq 1}P_i \delta_{X_i}\right)$ for any $(\alpha, \theta)$- or $\beta$-mixed string of beads where $K$ is the length of the string of beads with ranked atomic masses of sizes $1 > P_1 > P_2 > \cdots > 0$, a.s., in the points $X_i \in [0,K]$, $i \geq 1$, respectively.
Let us now explain how to attach a weighted $\mathbb R$-tree onto another weighted $\mathbb R$-tree. This clarifies in particular how to construct weighted $\mathbb R$-trees by attaching strings of beads as a string of beads can be interpreted as a weighted $\mathbb R$-tree consisting of a single branch. For any weighted $\mathbb R$-tree $(\mathcal T, d, \rho, \mu)$, a parameter $\beta \in (0,1/2]$, an element $J \in \mathcal T$ and another weighted $\mathbb R$-tree $(\mathcal T^+, d^+, \rho^+, \mu^+)$ with $\mathcal T \cap \mathcal T^+ = \emptyset$, the tree $(\mathcal T', d', \mu')$ created from $(\mathcal T, d, \mu)$ by \textit{attaching} to $J$ the tree $(\mathcal T^+, d^+, \rho^+, \mu^+)$ with mass measure $\mu^+$ rescaled by $\mu(J)$ and metric $d^+$ rescaled by $\mu(J)^\beta$ is defined as follows. Specifically, set \begin{equation} \mathcal T':= \mathcal T \setminus \{J\} \sqcup \mathcal T^+, \label{attach1}\quad \begin{aligned} d'(x,y):=\begin{cases} d(x,y) & \text{ if } x,y \in \mathcal T, \\ d(x,J)+(\mu(J))^{\beta} d^+(\rho^+,y) & \text{ if } x \in \mathcal T, y \in \mathcal T^+, \\ (\mu(J))^{\beta} d^+(x, y) & \text{ if } x, y \in \mathcal T^+,\end{cases}\qquad\rho'=\rho,\quad \end{aligned} \end{equation} and equip $(\mathcal T',d',\rho')$ with the mass measure $\mu'$ given by $\mu' \restriction_{\mathcal T \setminus \{J\}}=\mu\restriction_{\mathcal T \setminus \{J\}}, \ \mu'\left(J\right)=0, \ \mu' \restriction_{\mathcal T^+}=\mu\left(J\right) \mu^+.$
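Specialised to the case where both trees are single branches (strings of beads), the attachment operation \eqref{attach1} takes a simple form. The sketch below uses our own encoding (the host is a list of atoms on $[0,K]$; attached points are encoded as pairs of attachment height on the host branch and distance along the new branch): the metric is rescaled by $\mu(J)^\beta$, the mass by $\mu(J)$, and the atom at $J$ is removed.

```python
def attach_string(beta, host, j_index, new_string):
    """Attach new_string = (K_plus, atoms_plus) at the j_index-th atom J of
    host = (K, atoms), both viewed as single-branch weighted R-trees, following
    the rescaling in (attach1): distances scaled by mu(J)^beta, masses by mu(J),
    and the atom at J removed. Attached points are encoded as pairs
    (height of J on the host branch, distance along the new branch)."""
    K, atoms = host
    x_j, m = atoms[j_index]                 # location and mass mu(J) of the atom J
    _, atoms_plus = new_string
    kept = [a for i, a in enumerate(atoms) if i != j_index]
    branch = [((x_j, m ** beta * y), m * p) for y, p in atoms_plus]
    return kept, branch
```

Note that total mass is conserved: the removed atom mass $\mu(J)$ reappears as $\mu(J)\mu^+$ on the attached branch, exactly as in the definition of $\mu'$.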
We are now ready to present the two-colour line-breaking construction with masses.
\begin{algorithm}[Two-colour line-breaking construction with masses] \label{twocolourmass} \rm Let $\beta\in(0,1/2]$. We grow weighted $\infty$-marked $\mathbb{R}$-trees $(\mathcal{T}_k^*,(\mathcal{R}_k^{(i)},i\ge 1),\mu_k^*)$, $k\ge 0$, as follows.
\begin{enumerate}\setcounter{enumi}{-1}\item Let $(\mathcal{T}_0^*,\mu_0^*)$ be isometric to a $(\beta,\beta)$-string of beads; let $r_0=0$ and $\mathcal{R}_0^{(i)}=\{\rho\}$, $i\ge 1$.
\end{enumerate}
Given $(\mathcal{T}_j^*,(\mathcal{R}_j^{(i)},i\ge 1),\mu_j^*)$ with $\mu_j^*=\sum_{x\in\mathcal{T}_j^*}\mu_j^*(x)\delta_x$, $0\le j\le k$, let $r_k=\#\{i\ge 1\colon\mathcal{R}_k^{(i)}\neq\{\rho\}\}$;
\begin{enumerate}\item select an edge $E_k^*\subset\mathcal{T}_k^*$ with probability proportional to its mass $\mu_k^*(E_k^*)$; if
$E_k^*\subset\mathcal{R}_k^{(i)}$ for some $i\in[r_k]$, let $I_k=i$; otherwise, i.e.\ if $E_k^*\subset\mathcal{T}_k^*\setminus\bigcup_{i\in[r_k]}\mathcal{R}_k^{(i)}$, let $r_{k+1}\!=\!r_k\!+\!1$, $I_k\!=\!r_{k+1}$;
\item if $E_k^*$ is an external edge of $\mathcal{R}_k^{(i)}$, perform $(\beta,1-2\beta)$-coin tossing sampling on $E_k^*$ to determine $J_k^*\in E_k^*$ (cf. Proposition \ref{cointoss}); otherwise, i.e.\ if $E_k^*\subset\mathcal{T}_k^*\setminus\bigcup_{i\in[r_k]}\mathcal{R}_k^{(i)}$ or if $E_k^*$ is an internal edge of $\mathcal{R}_k^{(i)}$, sample $J_k^*$ from the normalised mass measure on $E_k^*$;
\item let $(E_k^+,R_k^+,\mu_k^+)$ be an independent $\beta$-mixed string of beads; to form $(\mathcal{T}_{k+1}^*,\mu_{k+1}^*)$ remove $\mu_k^*(J_k^*)\delta_{J_k^*}$ from $\mu_k^*$ and attach to $\mathcal{T}_k^*$ at $J_k^*$ an isometric copy of $(E_k^+,\mu_k^+)$ with measure rescaled by $\mu_k^*(J_k^*)$ and metric rescaled by $(\mu_k^*(J_k^*))^\beta$; add to $\mathcal{R}_k^{(I_k)}$ the (image under the isometry of) $R_k^+$ to form $\mathcal{R}_{k+1}^{(I_k)}$; set $\mathcal{R}_{k+1}^{(i)}=\mathcal{R}_k^{(i)}$, $i \neq I_k$.
\end{enumerate} \end{algorithm}
\subsection{The distribution of two-colour trees}
To analyse Algorithm \ref{twocolourmass}, we will need some more notation, in particular with regard to the marked subtree growth processes $(\mathcal R_k^{(i)}, k \geq 0)$, $i \geq 1$. Define the random subsequences $(k_m^{(i)}, m \geq 1)$, $i \geq 1$, by \begin{equation} k_1^{(i)}:=\inf \left\{n \geq 1\colon \mathcal R_{n}^{(i)} \neq \mathcal R_{0}^{(i)} \right\}=\inf \left\{n \geq 1\colon \mathcal R_{n}^{(i)} \neq \{\rho\}\right\}, \end{equation} and, for $m \geq 1$,
\begin{equation} k_{m+1}^{(i)}:=\inf \left\{n \geq k_m^{(i)}\colon \mathcal R_{n}^{(i)} \neq \mathcal R_{k_m^{(i)}}^{(i)}\right\}, \label{subsequences} \end{equation} i.e. there is a change in $(\mathcal R_k^{(i)}, k \geq 1)$ when $k=k_{m}^{(i)}$ for some $m \geq 1$. Note that $\bigcup_{i \geq 1} \{ k_m^{(i)}, m \geq 1\}=\{1,2, \ldots\}$ is a disjoint union, and that, for any $i \geq 1$, $\mathcal R_k^{(i)}$ is a binary tree for any $k \geq 1$. We will also use the convention that $\rho\notin \mathcal R_k^{(i)}$ for $k \geq k_1^{(i)}$. For $k=k_m^{(i)}-1$, we write
\begin{equation*} R_k^+ = [[J_k^*, \Omega_{m}^{(i)}]] \subset E_k^+ =[[J_k^*, \Sigma_{k+1}]], \qquad \text{i.e.} \quad ]]J_k^*, \Omega_{m}^{(i)}]]= \mathcal R_{k+1}^{(i)} \setminus \mathcal R_{k}^{(i)}. \end{equation*} In other words, at step $k=k_m^{(i)}-1$, $\Omega_{m}^{(i)}$ and $\Sigma_{k+1}$ denote the leaves added to $\mathcal R_{k}^{(i)}$ and $\mathcal T_k^*$, respectively.
We write $\xi_k^{(1)}, \xi_k^{(2)}$ and $\gamma_k$ for the random variables inducing the $\beta$-mixed string of beads $(E_k^+, R_k^+, \mu_k^+)$, i.e. $\left(E_k^+, R_k^+, \mu_k^+\right)$ is built from independent $\xi_k^{(1)}, \xi_k^{(2)}$ and $\gamma_k$ in the same way as $([0,K],[0,K^\prime],\lambda)$ is built from independent $([0,K_1],\lambda_1)$, $([0,K_2],\lambda_2)$ and $B$ in \eqref{twocolstr1}-\eqref{twocolstr2}.
Furthermore, we use an equivalence relation $\sim$ on $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1))$ to contract each marked component $\mathcal R_k^{(i)}$, $i \geq 1$, of $\mathcal T_k^*$ to a single point, i.e. \begin{equation} x \sim y\qquad :\Leftrightarrow \qquad x,y \in \mathcal R_k^{(i)} \quad \text{ for some } i \geq 1. \label{equrel} \end{equation} Note that, for all $i$ with $\mathcal R_k^{(i)} \neq \{\rho\}$, $x,y \in \mathcal R_k^{(i)}$ implies $x,y\in \mathcal R_{k'}^{(i)}$ for all $k' \geq k$, and hence the equivalence relation $\sim$ is consistent as $k$ varies. Denote the equivalence class related to $\mathcal R_k^{(i)}$ by $\widetilde{v}_i:=[\mathcal R_k^{(i)}]_{\sim}$, and let \begin{equation} \widetilde{\mathcal T}_k:=\mathcal T_k^*/\sim \label{ttilde} \end{equation} denote the quotient space of $\mathcal T_k^*, k \geq 0$, with the canonical quotient metric. Furthermore, for $k \geq 0$, let $\widetilde{\mu}_k$ be the push-forward of $\mu_k^*$ under the projection map from $\mathcal T_k^*$ onto $\widetilde{\mathcal T}_k$.
The following characterisation of Ford trees will be useful to obtain the distribution of $\mathcal T_k^*$.
\begin{prop}[{\cite[Proposition 18]{2}}] \label{Forddis} Consider the tree growth process $(\mathcal F_m, m\geq 1)$ from Algorithm \ref{Ford} for some $\beta' \in (0,1)$. The distribution of $\mathcal F_m$ is given in terms of three independent random variables: its shape, the total length $S_m' \sim {\rm ML}(\beta', m-\beta')$ and the length split between the edges of $\mathcal F_m$ which has a ${\rm Dirichlet}\left(1, \ldots, 1, (1-\beta')/\beta', \ldots, (1-\beta')/\beta'\right)$ distribution, where a parameter of $1$ is assigned to each of the $m-1$ internal edges, and a parameter of $(1-\beta')/\beta'$ to each of the $m$ external edges of $\mathcal F_m$. \end{prop}
We can describe the distribution of the tree $\mathcal T_k^*$ as follows.
\begin{prop}[Distribution of $\mathcal T_k^*$] \label{starstrings} Let $(\mathcal{T}_k^*,(\mathcal{R}_k^{(i)},i\ge 1),\mu_k^*, k \geq 0)$ be as in Algorithm \ref{twocolourmass} for some
$\beta \in (0,1/2]$. The distribution of $\mathcal T_k^*$ is characterised by the following independent random variables:
\begin{itemize}
\item the shape $T_k^*$ of $\mathcal T_k^*$ obtained from the shape $\widetilde{T}_k$ of $\widetilde{\mathcal T}_k$ and the shapes $R_k^{(i)}$ of
$\mathcal R_k^{(i)}$, $i \geq 1$, as follows;
\begin{itemize}
\item $\widetilde{T}_k$ has the distribution of the shape of a stable tree $\mathcal T_k$ reduced to the first $k$ leaves, and
\item conditionally given that $\widetilde{T}_k$ has $\ell$ branch points of degrees $d_1, \ldots, d_\ell$, the shapes $R_k^{(1)}, \ldots, R_k^{(\ell)}$ are the shapes
of Ford trees with $m_1:=d_1-2, \ldots, m_\ell:=d_\ell-2$ leaves, respectively;
\end{itemize}
\item the total mass split between the $3k+1$ edges of $\mathcal T_k^*$ has a
\begin{equation}
{\rm Dirichlet}\left(\beta, \ldots, \beta, 1-2\beta, \ldots, 1-2\beta\right) \label{Es0}
\end{equation}
distribution, with parameter $\beta$ for each internal marked and each unmarked edge, and parameter $1-2\beta$ for each external marked edge with edges ordered
according to depth-first search (first run for unmarked and internal marked edges, then for external marked edges);
\item the $3k+1$ independent $(\beta, \theta)$-strings of beads isometric to
\begin{equation}
\left(\mu_k^*\left(E\right)^{-\beta}E, \mu_k^*\left(E\right)^{-1} \mu_k^* \restriction_{E}\right), \quad E \in {\rm Edg}\left(\mathcal T_k^*\right), \label{Es}
\end{equation}
where $\theta=1-2\beta$ if $E$ is an external marked edge of $\mathcal R_k^{(i)}$ for some $i \in [\ell]$, and $\theta=\beta$ otherwise, again listed according to
depth-first search.
\end{itemize} \end{prop}
\begin{proof} This proof is mainly an application of the properties of the Dirichlet distribution, Proposition \ref{Diri}, and of coin tossing sampling, Proposition \ref{cointoss}. We give a brief sketch of the proof via an induction on $k$.
For $k=0$, the claim is trivial as $(\mathcal T_0^*, \mu_0^*)$ is a $(\beta, \beta)$-string of beads by definition. For the induction step, suppose that the claim holds for some $k \geq 0$.
We first consider the shape transition from $T_k^*$ to $T_{k+1}^*$. Observe that, given $\widetilde{T}_k$ has $\ell$ branch points of degrees $d_1, \ldots, d_\ell$, we have a $ {\rm{Dirichlet}}\left(\beta, \ldots, \beta, w(d_1), \ldots, w(d_\ell)\right)$ mass split in $\widetilde{\mathcal T}_k$ with weight $\beta$ for each edge and weight $w(d)=(d-2)(1-\beta)-\beta$ for each branch point of degree $d \geq 3$. Hence, by Proposition \ref{masssel2}, the overall edge selection is as in Algorithm \ref{masssel}.
Conditionally given that the $i$th branch point of $\widetilde{T}_k$ is selected, an edge of $\mathcal R_k^{(i)}$ is chosen proportionally to the weights assigned by the relative $\rm{Dirichlet}\left(\beta, \ldots, \beta, 1-2\beta, \ldots, 1-2\beta\right)$ mass split in $\mathcal R_k^{(i)}$, so each internal edge is chosen with probability $\beta/((d_i-2)(1-2\beta)+(d_i-3)\beta)$, each external edge with probability $(1-2\beta)/((d_i-2)(1-2\beta)+(d_i-3)\beta)$. This corresponds to the shape growth rule in a Ford tree growth process of index $\beta/(1-\beta)$, using obvious cancellations, cf. Algorithm \ref{Ford} and Proposition \ref{Forddis}.
In the update step from $\mathcal T_k^*$ to $\mathcal T_{k+1}^*$, we first select an edge of $\mathcal T_k^*$ proportionally to mass. By Proposition \ref{Diri}(iv), the parameter for this edge in the Dirichlet split \eqref{Es0}, conditionally given that it has been selected, is then increased by $1$. We select an atom $J_k^*$ on this edge via $(\beta, \theta)$-coin tossing, where $\theta=1-2\beta$ for external marked edges, and $\theta=\beta$ otherwise, and, by Proposition \ref{cointoss}, the selected edge is split by $J_k^*$ into a rescaled independent $(\beta, \beta)$- and a rescaled independent $(\beta, \theta)$-string of beads where the relative mass split on this edge is ${\rm Dirichlet}(\beta, 1-\beta, \theta)$, which is conditionally independent of the total mass split. Furthermore, the mass $\mu_k^*(J_k^*)$ is split by the independent random variable $\gamma_k \sim {\rm Beta}(1-2\beta, \beta)$ into a marked $(\beta,1-2\beta)$-string of beads, and an unmarked $(\beta, \beta)$-string of beads, which are independent, i.e., by Proposition \ref{Diri}(iii), the claims \eqref{Es0} and \eqref{Es} follow, as statements conditionally given tree shapes.
Finally, these conditional distributions of the Dirichlet mass split \eqref{Es0} and the independent $(\beta, \theta)$-strings of beads \eqref{Es} do not depend on the shape $T_{k+1}^*$, and are hence unconditionally independent. \end{proof}
\begin{remark}\label{rem45}
By Proposition \ref{starstrings} and Lemma \ref{equivalence} we see that Algorithm \ref{twocolourmass} reduces to Algorithm \ref{twocolour}. \end{remark}
\subsection{Identification of the stable line-breaking constructions} \label{stgp}
We now turn to the trees $(\widetilde{\mathcal T}_k, k \geq 0)$ obtained from $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1), k \geq 0)$ by contracting all marked components to single branch points as in \eqref{equrel}-\eqref{ttilde}. This description yields another formulation of the atom selection procedure on $\mathcal T_k^*$ in Algorithm \ref{twocolourmass}.
Given $(\mathcal T_j^*, (\mathcal R_j^{(i)}, i \geq 1), \mu_j^*)$, $0 \leq j \leq k$, and $r_k = \#\{i \geq 1\colon \mathcal R_k^{(i)} \neq \{\rho\}\} =\#\{i \geq 1\colon \widetilde{v}_i \neq \{\rho\}\}$,
\begin{enumerate}\item[1.-2.] select $\widetilde{J}_k$ from $\widetilde{\mu}_k$; if $\widetilde{J}_k \neq \widetilde{v}_i$ for all $i \in [r_k]$, set $J_k^*=\widetilde{J}_k$; otherwise, if $\widetilde{J}_k=\widetilde{v}_i$ for some $i \in [r_k]$, sample an edge $E_k^*$ of $\mathcal R_k^{(i)}$ proportionally to its mass $\mu_k^*(E_k^*)$; if $E_k^*$ is an internal edge of $\mathcal R_k^{(i)}$, sample $J_k^*$ from the normalised mass measure on $E_k^*$; if $E_k^*$ is an external edge of $\mathcal R_k^{(i)}$, perform $(\beta, 1-2\beta)$-coin tossing sampling on $E_k^*$ to determine $J_k^* \in E_k^*$.
\end{enumerate}
It is this view on Algorithm \ref{twocolourmass} that we pursue further now. The following theorem contains the desired weight-length transformation, i.e. the branch point weights in Goldschmidt-Haas' stable line-breaking construction (Algorithm \ref{GH}) are indeed as the lengths of the marked subtrees in the two-colour line-breaking construction (Algorithm \ref{twocolourmass}). Its proof is given with other similar proofs in the appendix.
\begin{theorem} \label{mainresult} Let the sequence $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1 ), \mu_k^*, k \geq 0)$ be as in Algorithm \ref{twocolourmass}, and
associate $( \widetilde{\mathcal T}_k, (\widetilde{v}_i=[\mathcal R_k^{(i)}]_{\sim}, i \geq 1 ), \widetilde{\mu}_k, k \geq 0)$ as in \eqref{ttilde}. Then the following hold. \begin{itemize} \item[\rm(i)] The sequence of trees with mass measures from Algorithm \ref{twocolourmass} and \eqref{ttilde} has the same distribution as the sequence in Algorithm \ref{masssel}, i.e. \begin{equation} \left(\widetilde{\mathcal T}_k, \widetilde{\mu}_k, k \geq 0\right) \,{\buildrel d \over =}\, \left({\mathcal T}_k, \mu_k , k \geq 0\right). \label{emstlb1} \end{equation} \item[\rm(ii)] The sequence of trees with marked component lengths from Algorithm \ref{twocolourmass} and \eqref{ttilde} has the same distribution as the sequence of trees with weights from Algorithm \ref{GH}, i.e. \begin{equation} \left(\widetilde{\mathcal T}_k, \left(\widetilde{W}_k^{(i)}, i \geq 1\right), k \geq 0\right) \,{\buildrel d \over =}\, \left({\mathcal T}_k, \left({W}_k^{(i)}, i \geq 1\right), k \geq 0\right), \label{emstlb2} \end{equation} where $\widetilde{W}_k^{(i)}={\rm Leb}(\mathcal R_k^{(i)})$ is the length of $\mathcal R_k^{(i)}, i \geq 1$, respectively. In particular, letting ${S}^*_k={\rm Leb}(\mathcal T_k^*)$ denote the length of $\mathcal T_k^*$, the sequence $({S}^*_k, k\geq 0)$ is a Mittag-Leffler Markov chain starting from ${\rm ML}(\beta,\beta)$ , i.e. $ \left({S}^*_k, k \geq 0\right) \,{\buildrel d \over =}\, \left(S_k, k \geq 0\right). $ \end{itemize} \end{theorem}
Let us pull some threads together and deduce the first assertion of Theorem \ref{Mainresult1} and the limit of $\widetilde{\mathcal T}_k$.
\begin{proof}[Proof of \eqref{weighteq} in Theorem \ref{Mainresult1}] We noted in Remark \ref{rem45} that the sequence of two-colour trees of Algorithm
\ref{twocolourmass} without mass measures has the same joint distribution as the sequence of two-colour trees of Algorithm \ref{twocolour}. Hence, \eqref{emstlb2} is
precisely \eqref{weighteq}. \end{proof}
\begin{corollary} \label{convstable} In the setting of Theorem \ref{mainresult}, $ \lim_{k \rightarrow \infty} (\widetilde{\mathcal T}_k, \widetilde{\mu}_k)=(\mathcal T, \mu)$ a.s.\ with respect to the Gromov-Hausdorff-Prokhorov distance, where $(\mathcal T, \mu)$ is a stable tree of index $\beta$. \end{corollary} \begin{proof} Goldschmidt and Haas \cite{12} showed this for the RHS of \eqref{emstlb2}, so it also holds for the LHS. \end{proof}
\subsection{Identification of marked subtree growth processes, and the proof of Theorem \ref{Mainresult1}} \label{sec42}
The main aim of this section is to identify the marked tree growth processes $(\mathcal R_k^{(i)}, k \geq 1)$, $i \geq 1$, as rescaled i.i.d. Ford tree growth processes of index $\beta'=\beta/(1-\beta)$. We will show the following.
\begin{theorem} \label{fordembb} Let $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1), \mu_k^*, k \geq 0)$ be the weighted $\infty$-marked tree growth process of Algorithm \ref{twocolourmass} for some $\beta \in (0,1/2]$. Then there exists a sequence of scaling factors $(C^{(i)}, i \geq 1)$ such that for all $i \geq 1$ $$ \lim_{k \rightarrow \infty} \mathcal R_k^{(i)}=\mathcal R^{(i)} \quad \text{a.s.} $$ in the Gromov-Hausdorff topology where
$(C^{(i)} \mathcal R^{(i)}, i \geq 1)$ is a sequence of i.i.d. Ford CRTs of index $\beta'=\beta/(1-\beta)$. Furthermore, the sequence $(C^{(i)} \mathcal R^{(i)}, i \geq 1)$ is independent of the stable tree $(\widetilde{\mathcal T}, \widetilde{\mu})=\lim_{k \rightarrow \infty} (\widetilde{\mathcal T}_k, \widetilde{\mu}_k)$ obtained from $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1), \mu_k^*, k \geq 0)$ as in Corollary \ref{convstable}. \end{theorem}
We will prove this by carrying out the two-colour line-breaking construction using a given stable tree $(\mathcal T, \mu)$ equipped with a sequence of i.i.d. leaves $(\Sigma_k, k \geq 0)$ sampled from $\mu$, and i.i.d.\ sequences of i.i.d.\ ordered $(\beta', 1-\beta')$-Chinese restaurant processes $(\widetilde{\Pi}_n^{(i,m)}, n \geq 1)$, $i \geq 1$, $m \geq 1$, cf. Section \ref{Sec22}.
\begin{definition}[Labelled bead tree/string of beads] A pair $(x, \Lambda)$ is called a \textit{labelled bead} if $\Lambda \subset \mathbb N$ is an infinite label set. A weighted $\mathbb R$-tree $(\mathcal R, \mu_{\mathcal R})$ equipped with a point process $\mathcal P_{\mathcal R}=\sum_{i \geq 1} \delta_{(x_i, \Lambda_{i})}$ on some countable subset $\{x_i, i \geq 1\} \subset \mathcal R$, $x_i \neq x_j, i \neq j$, is called a \textit{labelled bead tree} if $(x_i, \Lambda_i)$ is a labelled bead for every $ i \geq 1$. If $(\mathcal R, \mu_{\mathcal R})$ is a string of beads we call $(\mathcal R, \mu_{\mathcal R}, \mathcal P_{\mathcal R})$ a \textit{labelled string of beads}. \end{definition}
We will also speak of \textit{labelled $(\alpha, \theta)$-strings of beads} for $\alpha \in (0,1)$, $\theta >0$, as induced by an ordered $(\alpha, \theta)$-Chinese restaurant process. Specifically, the label sets are the blocks $\Pi_{\infty,i}$, $i\ge 1$, of the limiting partition of $\mathbb N$, which we relabel by $\mathbb N\setminus\{1\}$ using the increasing bijection $\mathbb{N}\rightarrow\mathbb{N}\setminus\{1\}$. The locations $X_i$ are the locations of the corresponding atom of size $P_i$ on the string, $i\ge 1$. A Ford tree growth process of index $\beta' \in (0,1)$ as in Algorithm \ref{Ford} can be represented in terms of labelled $(\beta', 1-\beta')$-strings of beads $\widehat{\xi}_{m}$, $m \geq 1$, as follows \cite[Corollary 16]{1}.
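The ordered $(\alpha,\theta)$-Chinese restaurant process inducing these labelled strings of beads can be simulated directly from its seating rule. A minimal Python sketch (the function name is ours, and tables are listed in order of creation rather than in the regenerative order on the string), valid for $\alpha\in(0,1)$ and $\theta>0$ as in the $(\beta', 1-\beta')$ case used here:

```python
import random

def crp_blocks(alpha, theta, n, rng=random):
    """Partition of {1,...,n} from an (alpha, theta)-Chinese restaurant process:
    customer m+1 joins table i with probability (n_i - alpha)/(m + theta) and
    opens a new table with probability (theta + k*alpha)/(m + theta), where
    k is the current number of tables. Valid for alpha in (0,1), theta > 0."""
    tables = [[1]]
    for cust in range(2, n + 1):
        m = cust - 1                      # customers already seated
        u = rng.random() * (m + theta)
        acc = 0.0
        for block in tables:
            acc += len(block) - alpha
            if u < acc:
                block.append(cust)
                break
        else:
            tables.append([cust])         # leftover mass theta + k*alpha: new table
    return tables
```

The blocks returned here play the role of the label sets $\Lambda_i$; relabelling by the increasing bijection $\mathbb N \to \mathbb N\setminus\{1\}$ is then a separate bookkeeping step.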
\begin{prop}[Ford tree growth via labelled strings of beads] \label{labelledbeadford} For $\beta' \in (0,1)$, construct a sequence of labelled bead trees $(\mathcal F_m, \nu_m, \mathcal P_{m}, m \geq 1)$ as follows.
\begin{itemize} \item[\rm 0.] Let $\widehat{\xi}_0=(\mathcal F_1, \nu_1, \mathcal P_1)$ be a labelled $(\beta', 1-\beta')$-string of beads with label set $\mathbb{N}\setminus\{1\}$. \end{itemize} Given $(\mathcal F_j, \nu_j, \mathcal P_{j})$, $1 \leq j \leq m$, with $\mathcal P_{m}=\sum_{i\ge 1}\delta_{X_{m,i},\Lambda_{m,i}}$, to construct $(\mathcal F_{m+1}, \nu_{m+1}, \mathcal P_{m+1})$, \begin{itemize} \item[\rm 1.-2.] select the unique $X_{m,i} \in \mathcal F_m$ such that $m+1 \in \Lambda_{m,i}$;
\item[\rm 3.] to obtain $(\mathcal F_{m+1},\nu_{m+1},\mathcal{P}_{m+1})$, remove $\nu_m(X_{m,i})\delta_{X_{m,i}}$ from $\nu_m$ and $\delta_{(X_{m,i},\Lambda_{m,i})}$
from $\mathcal P_m$; attach to $\mathcal F_m$ at ${X_{m,i}}$ an independent copy $\widehat{\xi}_{m}$ of $\widehat{\xi}_0$ with metric rescaled by
$\nu_m(X_{m,i})^{\beta'}$, mass measure by $\nu_m(X_{m,i})$, and label sets in $\widehat{\xi}_m$ relabelled by the increasing bijection
$\mathbb N\setminus\{1\} \rightarrow \Lambda_{m,i}\setminus\{m+1\}$. \end{itemize} Then the tree growth process $(\mathcal F_m, m \geq 1)$ is a Ford tree growth process of index $\beta' \in (0,1)$. \end{prop}
It will be useful to represent two-colour trees in the space $l^1(\mathbb N_0^2)$ as follows. We denote by $e_{a,b}$, $a,b\geq 0$, the unit coordinate vectors. We will use $e_{k,0}$, $k\geq 0$, to embed a given stable tree $(\mathcal T,d,\rho,\mu)$, with $e_{k,0}$ embedding the leaf $\Sigma_k$, $k\geq 0$. Indeed, from now on we assume that $(\mathcal T,d,\rho,\mu)=(\mathcal T,d,0,\mu)\in\mathbb T^{\rm emb}_{\rm w}$ is this \emph{embedded stable tree}, with embedded leaves $\Sigma_k$, $k\geq 0$. We will use $e_{m,i}$, $i\ge 1$, $m\ge 1$, to embed the $m$th branch of the $i$th red component, so the last step of Algorithm \ref{twocolourmass} becomes: \begin{enumerate}\item[3.] let $([0,L_k],[0,L_kB_k^\prime],\mu_k^+)$ be an independent $\beta$-mixed string of beads in the notation of \eqref{twoclb}; denote by $M_k$
the size (number of leaves) of $\mathcal R_k^{(I_k)}$; define the scale factor $c=\mu_k^*(J_k^*)$ and set
\begin{align*}&\mathcal{T}_{k+1}^*:=\mathcal{T}_k^*\cup\left(J_k^*+]0,L_kB_k^\prime c^\beta]e_{M_k+1,I_k}\right)\cup\left(J_k^*+L_kB_k^\prime c^\beta e_{M_k+1,I_k}+]0,L_k(1-B_k^\prime)c^\beta]e_{k+1,0}\right)\\
&\mathcal{R}_{k+1}^{(I_k)}:=\mathcal{R}_k^{(I_k)}\!\cup\!\left(J_k^*+]0,L_kB_k^\prime c^\beta]e_{M_k+1,I_k}\right),\quad \mathcal{R}_{k+1}^{(i)}\!:=\!\mathcal{R}_k^{(i)}, i \!\neq\! I_k,\qquad \mu_{k+1}^*:=\mu_k^*-c\delta_{J_k^*}+\lambda_k^+\\
&\mbox{where }\begin{array}{rll}\lambda_k^+(J_k^*+]c^\beta s,c^\beta t]e_{M_k+1,I_k})=&\!\!\!c\mu_k^+(]s,t]),\qquad &0\le s<t\le L_kB_k^\prime,\\[0.2cm]
\lambda_k^+(J_k^*+L_kB_k^\prime c^\beta e_{M_k+1,I_k}+]c^\beta s,c^\beta t]e_{k+1,0})=&\!\!\!c\mu_k^+(L_kB_k^\prime+]s,t]),\qquad &0\le s<t\le L_k(1-B_k^\prime).\end{array}
\end{align*} \end{enumerate}
We will now formulate a modification of Algorithm \ref{twocolourmass} starting from a given stable tree.
Let $(\mathcal T, \mu)$ be a stable tree of index $\beta \in (0,1/2]$ and $(\Sigma_k,k \geq 0)$ an i.i.d. sequence of leaves sampled from $\mu$. Consider the sequence of reduced weighted $\mathbb R$-trees $(\mathcal T_k, \mu_k, k \geq 0)$ where $\mu_k$ captures the masses of the connected components of $\mathcal T \setminus \mathcal T_k$ projected onto $\mathcal T_k$ as in \eqref{findimmarg1}. Let $(v_i, i \geq 1)$ be the sequence of branch points of $\mathcal T$ in order of appearance in $(\mathcal T_k, k \geq 0)$, and denote by $(\mathcal S_j^{(i)}, j \geq 1)$ the subtrees of $\mathcal T \setminus \mathcal T_{k^{(i)}}$ rooted at $v_i$, $i \geq 1$, where $k^{(i)}=\inf\{k \geq 0\colon v_i \in \mathcal T_k\}$ and where indices are assigned in increasing order of least leaf labels $\min\{\ell\ge k^{(i)}\colon\Sigma_\ell\in\mathcal S_j^{(i)}\}$, $j\ge 1$. For $i,j \geq 1$, set $P_j^{(i)}:=\mu(\mathcal S_j^{(i)})$,
\begin{equation}\label{pididef}
D^{(i)} :=\lim_{n \rightarrow \infty} \left(1-\sum_{j\in[n]} P_j^{(i)}/P^{(i)}\right)^{1-\beta} (1-\beta)^{\beta-1}n^{\beta},\qquad \mbox{where }P^{(i)}:=\sum_{j \geq 1} P_j^{(i)}.
\end{equation} This yields an i.i.d. sequence of $(1-\beta)$-diversities $(D^{(i)}, i \geq 1)$ with $D^{(i)} \sim {\rm ML}(1-\beta, -\beta)$, cf. Theorem \ref{masspart} and \eqref{alphadiv}.
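The limit \eqref{pididef} can be checked numerically: normalising masses so that $P^{(i)}=1$, the remainder $1-\sum_{j\in[n]}P_j^{(i)}$ after $n$ size-biased picks from GEM$(1-\beta,-\beta)$ is the stick-breaking product $\prod_{j\le n}(1-W_j)$. The following Python sketch (function name and truncation level ours) evaluates the finite-$n$ approximant of $D^{(i)}$:

```python
import random

def diversity_estimate(beta, n_sticks=2000, rng=random):
    """Finite-n approximant of the (1-beta)-diversity D in (pididef), with
    masses normalised so that P = 1: after n size-biased picks from
    GEM(1-beta, -beta), the remaining mass is prod_{j<=n}(1 - W_j) with
    W_j ~ Beta(beta, j*(1-beta) - beta); requires beta in (0, 1/2)."""
    remaining = 1.0
    for j in range(1, n_sticks + 1):
        w = rng.betavariate(beta, j * (1.0 - beta) - beta)
        remaining *= 1.0 - w
    # D ~= (1 - sum_{j<=n} P_j)^(1-beta) * (1-beta)^(beta-1) * n^beta
    return remaining ** (1.0 - beta) * (1.0 - beta) ** (beta - 1.0) * n_sticks ** beta
```

For large $n$ the returned value stabilises around a realisation of the ${\rm ML}(1-\beta,-\beta)$ limit; convergence is slow, in line with the polynomial rates in the limit formula.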
In the following algorithm, we build i.i.d. Ford trees in the branch points of the stable tree $(\mathcal T, \mu)$ from i.i.d. labelled $(\beta', 1-\beta')$-strings of beads $\widehat{\xi}_k, k\geq 0$, for $\beta'=\beta/(1-\beta)$. To do so, we consider two separate mass measures: the measures $(\widehat{\mu}_k,k\ge 0)$, that equal $\mu$ on (shifted) subtrees of the stable tree, and the measures $\widehat{\nu}_k$ on the Ford trees, which, restricted to each Ford tree separately, play the role of the mass measures $\nu_{m}$, $m \geq 1$, in the construction in Proposition \ref{labelledbeadford}.
\begin{algorithm}[Algorithm \ref{twocolourmass} with subtrees from a given stable tree] \label{depstruc1} We construct a sequence of weighted $\infty$-marked $\mathbb R$-trees $\big(\widehat{\mathcal T}_k,\big(\widehat{\mathcal R}_k^{(i)}, i \geq 1\big), \widehat{\mu}_k, \widehat{\nu}_k,
\big(\widehat{\Sigma}_n^{(k)}, n \geq 0 \big), k \geq 0\big)$ embedded in $l^1(\mathbb N_0^2)$, each equipped with an infinite leaf sequence $(\widehat{\Sigma}_n^{(k)}, n \geq 0)$ and an additional finite measure $\widehat{\nu}_k$ as follows.
\begin{itemize}
\item[0.] Let $(\widehat{\mathcal T}_0, (\widehat{\mathcal R}_0^{(i)}, i \geq 1), \widehat{\mu}_0, \widehat{\nu}_0, (\widehat{\Sigma}_n^{(0)}, n \geq 0 )) = (\mathcal T, (\{\rho\}, i \geq 1), \mu, 0, (\Sigma_n, n \geq 0))$ be a stable tree. \end{itemize} Given $(\widehat{\mathcal T}_j,(\widehat{\mathcal R}_j^{(i)}, i \geq 1), \widehat{\mu}_j, \widehat{\nu}_j, (\widehat{\Sigma}_n^{(j)}, n \geq 0))$, $0 \leq j \leq k$, let $\widehat{r}_k=\#\{i\ge 1\colon\widehat{\mathcal R}_k^{(i)}\neq\{\rho\}\}$; \begin{itemize} \item[1.-2.] let $\widehat{J}_k \in \widehat{\mathcal T}_k$ be the closest point to the leaf $\widehat{\Sigma}_{k+1}^{(k)}$ in
$\mathcal R(\widehat{\mathcal T}_k, \widehat{\Sigma}_1^{(k)}, \ldots, \widehat{\Sigma}_{k}^{(k)})$; if $\widehat{J}_k\in\widehat{\mathcal{R}}_k^{(i)}$ for some $i\in[\widehat{r}_k]$,
set $I_k=i$, otherwise let $I_k=\widehat{r}_k+1$; denote by $M_k\ge 0$ the size of $\widehat{\mathcal{R}}_k^{(I_k)}$; \item[3.] let $\widehat{\xi}_k$ be an independent labelled $(\beta', 1-\beta')$-string of beads; if $M_k\ge 1$, define the scale factor $\widehat{c}=\widehat{\nu}_k(\widehat{J}_k)$, otherwise set $\widehat{c}=1$; write as $([0,K_k],\nu_k,\sum_{j\ge 1}\delta_{(X_{k,j},\Lambda_{k,j})})$
the string of beads $\widehat{\xi}_k$ with metric rescaled by $\widehat{c}^{\beta^\prime}(P^{(I_k)})^{\beta} (D^{(I_k)})^{\beta'}$ and mass measure rescaled by $\widehat{c}$, where $P^{(I_k)}$ and $D^{(I_k)}$ are as in (\ref{pididef});
denote by $\mathcal S_{k,j}$, $j\in\{0,1,2,\ldots\}\cup\{\infty\}$, the
connected components of $\widehat{\mathcal T}_k\setminus\{\widehat{J}_k\}$, where $\mathcal S_{k,\infty}$ contains the root and the other components are ordered by least
label; let $X_{k,0}:=K_k$ and set
$$\widehat{\mathcal T}_{k+1}:=\mathcal S_{k,\infty}\cup\mathcal S_{k,0}\cup\left(\widehat{J}_k+[0,K_k]e_{M_{k+1},I_k}\right)\cup\bigcup_{j\ge 0}\left(X_{k,j}e_{M_{k+1},I_k}+\mathcal S_{k,j+1}\right).$$
If $M_k=0$, let $\widehat{\mathcal R}_{k+1}^{(I_k)}=\widehat{J}_k+[0,K_k]e_{M_{k+1},I_k}$, otherwise add this shifted string to $\widehat{\mathcal R}_{k}^{(I_k)}$
to form $\widehat{\mathcal R}_{k+1}^{(I_k)}$; retain the other
marked components, just shifted by the appropriate $X_{k,j}e_{M_{k+1},I_k}$ if $\widehat{\mathcal R}_k^{(i)}\subset\mathcal S_{k,j+1}$. Finally, let $\widehat{\mu}_{k+1}$
denote the mass measure obtained from $\widehat{\mu}_k$ by appropriate shifting, and similarly for $\widehat{\nu}_{k+1}$, just with $\nu_k$ shifted onto
$\widehat{J}_k+[0,K_k]e_{M_{k+1},I_k}$ replacing $\widehat{\nu}_k(\widehat{J}_k)\delta_{\widehat{J}_k}$.
\end{itemize}
\end{algorithm}
\begin{remark} \label{pidi} Note that the scaling factor $(C^{(i)})^{-1}:=(P^{(i)})^{\beta} (D^{(i)})^{\beta/(1-\beta)}$ can be rewritten as $$(C^{(i)})^{-1}=\lim_{n \rightarrow \infty} \left( P^{(i)}- \sum_{j\in[n]} P_j^{(i)}\right)^{\beta}(1-\beta)^{-\beta}n^{\beta^2/(1-\beta)} =\lim_{n \rightarrow \infty} \left( \sum_{j\geq n+1} P_j^{(i)}\right)^{\beta}(1-\beta)^{-\beta}n^{\beta^2/(1-\beta)},$$ or, alternatively, using \eqref{alphadiv}, as $ (C^{(i)})^{-1} = \lim \limits_{j \rightarrow \infty} \left(j \Gamma(\beta)\right)^{\beta/(1-\beta)} \left(P_j^{(i)\downarrow}\right)^{\beta}.$ \end{remark}
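The rewriting of the scaling factor in the last remark already holds before the limit is taken, term by term in $n$: substituting the finite-$n$ approximation of $D^{(i)}$ into $(P^{(i)})^{\beta} (D^{(i)})^{\beta/(1-\beta)}$ gives exactly the displayed pre-limit expression. The snippet below checks this finite-$n$ identity numerically; the masses, truncation level and value of $\beta$ are arbitrary illustrative choices, and the total mass is taken to be $1$.

```python
beta = 0.35                                  # any beta in (0, 1/2)
bprime = beta / (1 - beta)
P_masses = [0.3, 0.2, 0.12, 0.08, 0.05]      # arbitrary masses P_j^{(i)}, j in [n]
n = len(P_masses)
P = 1.0                                      # total mass P^{(i)} (assumed)
tail = P - sum(P_masses)                     # P^{(i)} - sum_{j in [n]} P_j^{(i)}

# finite-n version of the (1-beta)-diversity D^{(i)}
D_n = (tail / P) ** (1 - beta) * (1 - beta) ** (beta - 1) * n ** beta

lhs = P ** beta * D_n ** bprime              # (C^{(i)})^{-1} at truncation level n
rhs = tail ** beta * (1 - beta) ** (-beta) * n ** (beta ** 2 / (1 - beta))
```

The exponents cancel algebraically, so `lhs` and `rhs` agree up to floating-point error for any choice of masses.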
The following result follows directly from the construction in Algorithm \ref{depstruc1} and Proposition \ref{labelledbeadford}.
\begin{prop} In the setting of Algorithm \ref{depstruc1}, there exists a sequence of i.i.d. Ford CRTs $(\widehat{\mathcal F}_i, i \geq 1)$ of index $\beta'=\beta/(1-\beta)$ which is independent of the stable tree $(\mathcal T, \mu)$ such that, for all $i \geq 1$, $$\lim_{k \rightarrow \infty} \widehat{\mathcal R}_k^{(i)}=:\widehat{\mathcal R}^{(i)}= \left(C^{(i)}\right)^{-1} \widehat{\mathcal F}_i \quad \text{ a.s. w.r.t. the Gromov-Hausdorff topology}.$$ \end{prop}
We will now prove that the sequence of reduced $\infty$-marked $\mathbb R$-trees constructed in Algorithm \ref{depstruc1} and the sequence of trees constructed in Algorithm \ref{twocolourmass} are equal in distribution.
\begin{prop} \label{algsame} Let $(\widehat{\mathcal T}_k, (\widehat{\mathcal R}_k^{(i)}, i \geq 1), \widehat{\mu}_k, (\widehat{\Sigma}_n^{(k)},n\ge 0), k \geq 0)$ and
$(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1), \mu_k^*, k \geq 0)$ be as in
Algorithms \ref{depstruc1} and \ref{twocolourmass}, respectively,
$\widehat{\pi}_k\colon\widehat{\mathcal T}_k\rightarrow \mathcal R(\widehat{\mathcal T}_k, \widehat{\Sigma}_0^{(k)}, \ldots, \widehat{\Sigma}_k^{(k)})$ the
projection as in \eqref{pm}. Then,
\begin{equation}
\left(\mathcal R\left(\widehat{\mathcal T}_k, \widehat{\Sigma}_0^{(k)}, \ldots, \widehat{\Sigma}_k^{(k)}\right),\left(\widehat{\mathcal R}_k^{(i)}, i \geq 1\right),
(\widehat{\pi}_k)_*\widehat{\mu}_k, k \geq 0 \right)
\,{\buildrel d\over =}\, \left({\mathcal T}_k^*,\left({\mathcal R}_k^{(i)}, i \geq 1\right), \mu_k^*, k \geq 0 \right). \label{algsameq}
\end{equation}
Furthermore, $(P_j^{(x)}, j \geq 1)$ with $P_j^{(x)}:=\widehat{\mu}_k\left(\mathcal S_j^{(x)}\right)/\sum_{\ell \geq 1} \widehat{\mu}_k\left(\mathcal S_\ell^{(x)}\right)$, $j\ge 1$, are i.i.d.
${\rm GEM}(1-\beta, -\beta)$ for all $x \in \mathcal R(\widehat{\mathcal T}_k, \widehat{\Sigma}_0^{(k)}, \ldots, \widehat{\Sigma}_k^{(k)})$ with
$(\widehat{\pi}_k)_*\widehat{\mu}_k(x) >0$, where $(\mathcal S_j^{(x)}, j \geq 1)$ are the connected components of
$\widehat{\mathcal T}_k \setminus \mathcal R(\widehat{\mathcal T}_k, \widehat{\Sigma}_0^{(k)}, \ldots, \widehat{\Sigma}_k^{(k)})$ rooted at
$x \in \mathcal R(\widehat{\mathcal T}_k, \widehat{\Sigma}_0^{(k)}, \ldots, \widehat{\Sigma}_k^{(k)})$, ranked in increasing order of least leaf labels. \end{prop}
The following is a direct consequence of Proposition \ref{algsame}.
\begin{corollary} \label{algsamecor} In Algorithm \ref{twocolourmass}, the tree growth processes
$ \left(C^{(i)} \mathcal R_{k_m^{(i)}}^{(i)}, m \geq 1 \right)$, $i \geq 1$, are i.i.d.\ Ford tree growth processes of index $\beta'=\beta/(1-\beta)$ independent of
the stable tree $({\mathcal T}, {\mu})=\lim_{k \rightarrow \infty} (\widetilde{\mathcal T}_k, \widetilde{\mu}_k)$ of Corollary \ref{convstable}, where the
scaling factors $(C^{(i)})^{-1}=(P^{(i)})^{\beta} (D^{(i)})^{\beta/(1-\beta)}$, $i \geq 1$, are as in Remark \ref{pidi}. \end{corollary}
To prove Proposition \ref{algsame}, we will need a strong form of coagulation-fragmentation duality.
\begin{lemma} \label{coag1} Let $P=(P_i,i\ge1)\sim{\rm GEM}(\alpha,\theta)$ with $\alpha$-diversity $S$, and
$\widehat{\xi}=([0,\widehat{K}],\widehat{\mu},\widehat{\mathcal{P}}\!=\!\sum_{j\ge 1}\!\delta_{(X_j,\widehat{\Lambda}_j)})$ an
independent labelled $(\beta',\theta/\alpha)$-string of beads. Use $([0,\widehat{K}],\widehat{\mu},\widehat{\mathcal{P}})$ to coagulate $(P_i,i\ge1)$ into
$\mu(\{X_j\}):=\sum_{i\in\widehat{\Lambda}_{j}}P_i$, with relative part sizes $Q_m^{(j)}:=P_{\pi_j(m)}/\mu(\{X_j\})$, $m\ge 1$, labelled by the increasing bijection
$\pi_j\colon\mathbb{N}\rightarrow\widehat{\Lambda}_j$, $j\ge 1$. Then
\begin{itemize}\item the string of beads $([0,S^{\beta'}\widehat{K}],\mu)$ is an $(\alpha\beta',\theta)$-string of beads,
\item the sequence of fragments $(Q_m^{(j)},m\ge 1)$ has a ${\rm GEM}(\alpha,-\alpha\beta')$ distribution, for each $j\ge 1$,
\item the string $([0,S^{\beta'}\widehat{K}],\mu)$ and the fragments $(Q_m^{(j)},m\ge 1)$ of $\mu(\{X_j\})$, $j\ge 1$, are independent.
\end{itemize} \end{lemma}
\begin{proof} This is an enriched instance of coagulation-fragmentation duality; see e.g. \cite[Section 5.5]{5}. We use a combinatorial approach, with the notation
$(x)_{n\uparrow\gamma}:=x(x+\gamma)\cdots(x+(n-1)\gamma)$, and use known distributions of (ordered and unordered) Chinese restaurant partitions \cite{5,1}.
Fix $n\ge 1$.
What is the probability that an ordered $(\beta',\theta/\alpha)$-coagulation groups the tables of an unordered $(\alpha,\theta)$-Chinese restaurant partition of $[n]$
into $m$ groups $(n_{1,1},\ldots,n_{1,k_1}),\ldots,(n_{m,1},\ldots,n_{m,k_m})$? If we denote by $\ell$ the number of new right-most groups opened, then it is
$$\frac{(\theta+\alpha)_{k_1+\cdots+k_m-1\uparrow\alpha}\prod_{i\in[m]}\prod_{j\in[k_i]}(1-\alpha)_{n_{ij}-1\uparrow 1}}
{(1+\theta)_{n-1\uparrow 1}}
\frac{(\beta')^{m-\ell-1}(\theta/\alpha)^\ell\prod_{i\in[m]}(1-\beta')_{k_i-1\uparrow 1}}
{(1+\theta/\alpha)_{k_1+\cdots+k_m-1\uparrow1}}.$$
What is the probability that an unordered $(\alpha,-\alpha\beta')$-fragmentation of an ordered $(\alpha\beta',\theta)$-Chinese restaurant partition of $[n]$ yields
$m$ tables further split into $(n_{1,1},\ldots,n_{1,k_1}),\ldots,(n_{m,1},\ldots,n_{m,k_m})$? If we denote by $\ell$ the number of new right-most tables, then it is
$$\frac{(\alpha\beta')^{m-\ell-1}\theta^\ell\prod_{i\in[m]}(1-\alpha\beta')_{n_{i,1}+\cdots+n_{i,k_i}-1\uparrow 1}}
{(1+\theta)_{n-1\uparrow 1}}
\prod_{i\in[m]}\frac{(\alpha-\alpha\beta')_{k_i-1\uparrow\alpha}\prod_{j\in[k_i]}(1-\alpha)_{n_{ij}-1\uparrow 1}}
{(1-\alpha\beta')_{n_{i,1}+\cdots+n_{i,k_i}-1\uparrow 1}}.$$
Elementary cancellations show that these two expressions are equal for all $n\ge 1$. Since these structured partitions can be constructed in a consistent way, as
$n\ge 1$ varies, the statement of the lemma merely records different aspects of the limiting arrangement: either asymptotic frequencies in size-biased order of least
labels coagulated by a labelled string of beads, or, respectively, a string of beads with blocks further fragmented, with fragments in size-biased order of least labels. \end{proof}
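Since both displayed probabilities are finite products of rising factorials, the claimed equality can be spot-checked numerically. The sketch below is illustrative only: the parameter values and group patterns are arbitrary, the function names are ours, and the table-size factors use the standard EPPF convention $(1-\alpha)_{n_{ij}-1\uparrow 1}$.

```python
from math import prod

def rf(x, n, g=1.0):
    """Rising factorial (x)_{n up g} = x(x+g)...(x+(n-1)g); empty product for n = 0."""
    return prod(x + j * g for j in range(n))

def coag_side(groups, l, a, bp, th):
    """Unordered (a,th)-CRP of [n], tables grouped by an ordered (bp, th/a)-coagulation."""
    K = sum(len(g) for g in groups)          # total number of unmarked tables
    n = sum(sum(g) for g in groups)          # total number of customers
    m = len(groups)
    crp = rf(th + a, K - 1, a) * prod(rf(1 - a, c - 1) for g in groups for c in g) / rf(1 + th, n - 1)
    co = bp ** (m - l - 1) * (th / a) ** l * prod(rf(1 - bp, len(g) - 1) for g in groups) / rf(1 + th / a, K - 1)
    return crp * co

def frag_side(groups, l, a, bp, th):
    """Ordered (a*bp, th)-CRP of [n], tables split by unordered (a, -a*bp)-fragmentations."""
    n = sum(sum(g) for g in groups)
    m = len(groups)
    crp = (a * bp) ** (m - l - 1) * th ** l * prod(rf(1 - a * bp, sum(g) - 1) for g in groups) / rf(1 + th, n - 1)
    fr = prod(rf(a - a * bp, len(g) - 1, a) * prod(rf(1 - a, c - 1) for c in g) / rf(1 - a * bp, sum(g) - 1)
              for g in groups)
    return crp * fr

# spot-check the equality for a few group patterns and values of l
for groups, l in [([[2, 1], [3]], 0), ([[2, 1], [3]], 1), ([[1], [2, 2], [3, 1]], 1)]:
    c, f = coag_side(groups, l, 0.6, 0.4, 1.5), frag_side(groups, l, 0.6, 0.4, 1.5)
    assert abs(c - f) <= 1e-9 * max(c, f)
```

The cancellation is a bookkeeping of powers of $\alpha$: $(\theta+\alpha)_{K-1\uparrow\alpha}=\alpha^{K-1}(1+\theta/\alpha)_{K-1\uparrow 1}$ and $(\alpha-\alpha\beta')_{k_i-1\uparrow\alpha}=\alpha^{k_i-1}(1-\beta')_{k_i-1\uparrow 1}$, and the exponents of $\alpha$ sum to zero.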
The following result can be proved using the same method. \begin{lemma} \label{coag2}
Let $P=(P_i,i\ge1)\sim{\rm GEM}(\alpha,\theta)$ and, for $\alpha \in (0,1), \theta >0$, let $\widehat{\Lambda}=(\widehat{\Lambda}_1,\ldots,\widehat{\Lambda}_r)$ be an
independent ${\rm Dirichlet}(\theta_1/\alpha,\ldots,\theta_r/\alpha)$ partition of $\mathbb{N}$ with $\sum_{i\in[r]}\theta_i=\theta$. Use
$(\widehat{\Lambda}_1,\ldots,\widehat{\Lambda}_r)$ to coagulate $(P_i,i\ge1)$ into $R_j:=\sum_{i\in\widehat{\Lambda}_{j}}P_i$, with
relative part sizes $Q_m^{(j)}:=P_{\pi_j(m)}/R_j$, $m\ge 1$, labelled by the increasing bijection $\pi_j\colon\mathbb{N}\rightarrow\widehat{\Lambda}_j$, $j\in[r]$.
Then
\begin{itemize}\item the vector $(R_1,\ldots,R_r)$ of aggregate masses has a $ {\rm Dirichlet}(\theta_1,\ldots,\theta_r)$ distribution,
\item the sequence of fragments $(Q_m^{(j)},m\ge 1)$ has a ${\rm GEM}(\alpha,\theta_j)$ distribution, for each $j\in[r]$,
\item the vector $(R_1,\ldots,R_r)$ and the fragments $(Q_m^{(j)},m\ge 1)$ of $R_j$, $j \in [r]$, are independent.
\end{itemize} \end{lemma}
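The first assertion of Lemma \ref{coag2} lends itself to a quick Monte Carlo sanity check: realising the Dirichlet partition as a paintbox with ${\rm Dirichlet}(\theta_1/\alpha,\ldots,\theta_r/\alpha)$ frequencies and i.i.d. assignments, the aggregate mass $R_1$ should have mean $\theta_1/\theta$. The sketch below is illustrative only; the truncation level, parameter values, seed and tolerance are arbitrary choices of ours.

```python
import random

random.seed(7)
alpha, thetas = 0.5, [0.6, 0.9]          # GEM(alpha, theta) with theta = theta_1 + theta_2
theta = sum(thetas)
nsim, nstick = 4000, 200
mean_R1 = 0.0
for _ in range(nsim):
    # stick breaking: P_i = W_i * prod_{j<i} (1 - W_j), W_i ~ Beta(1-alpha, theta + i*alpha)
    rem, P = 1.0, []
    for i in range(1, nstick + 1):
        w = random.betavariate(1 - alpha, theta + i * alpha)
        P.append(rem * w)
        rem *= 1.0 - w
    total = sum(P)                        # renormalise the truncated sequence
    # paintbox version of the Dirichlet(theta_1/alpha, theta_2/alpha) partition of N
    g = [random.gammavariate(t / alpha, 1.0) for t in thetas]
    W1 = g[0] / sum(g)                    # frequency of block 1
    R1 = sum(p for p in P if random.random() < W1) / total
    mean_R1 += R1 / nsim
# Lemma: (R_1, R_2) ~ Dirichlet(theta_1, theta_2), so E[R_1] = theta_1/theta
```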
In the context of Algorithm \ref{depstruc1}, it is useful to adopt the following terminology. Consider a branch point of the reduced stable tree and the associated Ford tree. The (unordered) $(\alpha,\theta)$-Chinese restaurant behind $P$ partitions the total branch point mass into subtrees (unmarked tables) which carry leaf labels of the stable tree (unmarked customers). A transition $k\rightarrow k+1$ of the algorithm spreads the subtrees over a new string of beads of the Ford tree. The ordered structures $\widehat{\xi}$ and $\widehat{\Lambda}$, respectively, partition the leaf labels of the Ford tree (marked customers) into marked tables (whose sizes are captured by $\widehat{\nu}$ for each marked component separately). The coagulation takes subtrees as marked customers and so coagulates those unmarked tables that are listed in the same marked table to form a partition of unmarked customers (leaves of the stable tree) into marked tables. The further partition into unmarked tables within each marked table is then a fragmentation of the unmarked customers (leaf labels of the stable tree).
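In this Chinese-restaurant picture, the basic $(\alpha,\theta)$ seating rule can be illustrated by a short simulation: the table of customer $1$, currently of size $c_1$ among $n$ customers, gains the $(n+1)$st customer with probability $(c_1-\alpha)/(n+\theta)$, and its limiting frequency is ${\rm Beta}(1-\alpha,\theta+\alpha)$ with mean $(1-\alpha)/(1+\theta)$. The sketch below is illustrative only; parameter values, seed and tolerance are arbitrary choices of ours.

```python
import random

random.seed(3)
alpha, theta = 0.5, 1.0                      # an (alpha, theta)-Chinese restaurant
nsim, ncust = 2000, 1000
mean_frac = 0.0
for _ in range(nsim):
    c1 = 1                                   # size of the table of customer 1
    for n in range(1, ncust):
        # customer n+1 joins that table with probability (c1 - alpha)/(n + theta)
        if random.random() < (c1 - alpha) / (n + theta):
            c1 += 1
    mean_frac += c1 / ncust / nsim
# limiting frequency of customer 1's table: Beta(1-alpha, theta+alpha), mean (1-alpha)/(1+theta)
```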
\begin{proof}[Proof of Proposition \ref{algsame}] As the families of weighted discrete $\infty$-marked $\mathbb R$-trees in \eqref{algsameq}, suitably represented, are consistent and at step $k$ uniquely determine the trees at steps $0, \ldots, k-1$, it suffices to show that for fixed $k \geq 0$ \begin{equation} \left(\mathcal R\left(\widehat{\mathcal T}_k, \widehat{\Sigma}_0^{(k)}, \ldots, \widehat{\Sigma}_k^{(k)}\right),\left(\widehat{\mathcal R}_k^{(i)}, i \geq 1\right), (\widehat{\pi}_k)_*\widehat{\mu}_k \right) \,{\buildrel d\over =}\, \left({\mathcal T}_k^*,\left({\mathcal R}_k^{(i)}, i \geq 1\right), \mu_k^* \right). \label{algsameqq} \end{equation} We will prove \eqref{algsameqq} by induction on $k$, showing that the LHS follows the characterisation of the distribution of the two-colour tree on the RHS given in Proposition \ref{starstrings}. The case $k=0$ follows from Proposition \ref{betastring} in combination with Corollary \ref{GEMS}.
For general $k \geq 0$, we obtain the shape $T_k$ of a stable tree $\mathcal T_k$ reduced to the first $k+1$ leaves from the stable tree growth processes with masses naturally embedded in Algorithm \ref{depstruc1}, and conditionally given its shape with $\ell$ branch points $v_1, \ldots, v_\ell$ of degrees $d_1, \ldots, d_\ell$, a Dirichlet$(\beta, \ldots, \beta, m_1+(1-2\beta), \ldots, m_\ell+(1-2\beta))$ mass split between edges and branch points as in Proposition \ref{masssel2} where $m_i:=d_i-2$, $i \in [\ell]$. We further obtain rescaled independent $(\beta, \beta)$-strings of beads on the branches of the stable tree, i.e.\ the unmarked branches of $\mathcal R(\widehat{\mathcal T}_k, \widehat{\Sigma}_0^{(k)}, \ldots, \widehat{\Sigma}_k^{(k)})$, cf. Proposition \ref{masssel2} and Proposition \ref{betastring}.
From the stick-breaking representation \eqref{v1v2} of GEM($\cdot, \cdot$) and Algorithm \ref{masssel}, the relative masses of the subtrees of $\mathcal T \setminus \mathcal T_k$ rooted at $v_i$ indexed in increasing order of smallest leaf labels form a vector with distribution GEM$(1-\beta, m_i(1-\beta)+(1-2\beta))$, independently for each branch point, $i\in[\ell]$.
From the independent Ford tree growth processes via labelled strings of beads built from the $(\widehat{\xi}_k,k\ge 0)$ in Algorithm \ref{depstruc1}, we have the shapes of conditionally independent Ford trees with $m_1, \ldots, m_\ell$ leaves, and for each Ford tree conditionally given the shape, independently a ${\rm Dirichlet}(\beta^\prime,\ldots,\beta^\prime,1-\beta^\prime,\ldots,1-\beta^\prime)$ partition of $\mathbb{N}$ obtained by relabelling the edge-partition of labels $\mathbb{N}\setminus[m_i]$ by the increasing bijection $\mathbb{N}\setminus[m_i]\rightarrow\mathbb{N}$. These partitions are further split on each internal edge by a labelled $(\beta^\prime,\beta^\prime)$-string of beads, and on each external edge by a labelled $(\beta^\prime,1-\beta^\prime)$-string of beads, again all labelled by $\mathbb{N}$ and obtained by increasing bijections from $\mathbb N$ to the label sets of the edges.
We apply Lemma \ref{coag2} with $P$ as the ${\rm GEM}(1-\beta,m_i(1-\beta)+(1-2\beta))$ split into further subtree masses of the $i$th marked component
and $\widehat{\Lambda}$ as the ${\rm Dirichlet}(\beta^\prime,\ldots,\beta^\prime,1-\beta^\prime,\ldots,1-\beta^\prime)$ partition of marked Ford labels in the $i$th component.
We note that we eventually place subtrees in their size-biased order in $P$ into the further Ford leaves of the $i$th component. Therefore, the coagulation of Lemma
\ref{coag2} produces a ${\rm Dirichlet}(\beta,\ldots,\beta,1-2\beta,\ldots,1-2\beta)$ mass split onto the edges and independent ${\rm GEM}(1-\beta,\beta)$ and ${\rm GEM}(1-\beta,1-2\beta)$ sequences of fragments of these edge masses.
We apply Lemma \ref{coag1} for each edge, with $P$ as the ${\rm GEM}(1-\beta,\beta)$ or ${\rm GEM}(1-\beta,1-2\beta)$ sequence of fragments and with the labelled
$(\beta^\prime,\beta^\prime)$- or $(\beta^\prime,1-\beta^\prime)$-string of beads as $\widehat{\xi}$, independent. Again, we note that we eventually place subtrees in
their size-biased order in $P$ according to the positions of the labels in the labelled string of beads. Therefore, the coagulation of Lemma \ref{coag1} produces a mass
split according to a $(\beta,\beta)$- or $(\beta,1-2\beta)$-string of beads, respectively.
We obtain two-colour shapes as needed for the distribution of the RHS of \eqref{algsameqq} characterised in Proposition \ref{starstrings}. Conditionally given the two-colour shape, we obtain independent Dirichlet splits onto edges that combine to a
${\rm Dirichlet}(\beta,\ldots,\beta,1-2\beta,\ldots,1-2\beta)$ split, with parameters $\beta$ for unmarked and marked internal edges and $1-2\beta$ for marked external edges.
Again conditionally given the two-colour shape, we obtain, independently of the Dirichlet splits, for each unmarked and marked internal edge an independent
$(\beta,\beta)$-string of beads, and for each marked external edge a $(\beta,1-2\beta)$-string of beads. If we arrange the edges in the tree shape suitably by depth first
search and sort the Dirichlet vectors and the vectors of strings accordingly, their joint conditional distribution does not depend on the two-colour shape, so the
two-colour shape, the overall Dirichlet split and the strings of beads are jointly independent.
Finally, Algorithm \ref{depstruc1} scales the strings of beads. We can write $(P^{(i)})^\beta(D^{(i)})^{\beta'}=(D_{m_i}^{(i)})^{\beta'}(P^{(i)}_{(m_i)})^\beta$,
where $D_{m_i}^{(i)}$ is the $(1-\beta)$-diversity of $P$ in the application of Lemma \ref{coag2} above, independent of the total mass
$P^{(i)}_{(m_i)}=\sum_{j\ge m_i+1}P^{(i)}_j$ on the $i$th component, which is further split according to the Dirichlet distribution found above, as required.
Altogether, the distribution is the same as in Proposition \ref{starstrings}. \end{proof}
\begin{proof}[Proof of Theorem \ref{fordembb} and \eqref{subfordgrowth} in Theorem \ref{Mainresult1}] This is a direct consequence of Proposition \ref{algsame} and Corollary \ref{algsamecor}.\end{proof}
In Theorem \ref{fordembb}, we identified the tree growth processes $(\mathcal R_k^{(i)}, k \geq 1)$, $i \geq 1$, as consistent families of tree growth processes which obey the growth rules of a Ford tree growth process of index $\beta'=\beta/(1-\beta)$. Rescaling these processes to obtain i.i.d. sequences of Ford trees requires knowledge of the scaling factor, which is incorporated in the limiting stable tree. It is, however, possible to approximate this scaling factor using only the tree constructed up to step $k$. We are further able to obtain i.i.d. marked subtree growth processes obeying the Ford growth rules (but with different starting lengths) by applying suitable scaling.
\begin{theorem}[Embedded Ford trees]\label{embford} Let $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1), \mu_k^*, k \geq 0)$ be as in Algorithm \ref{twocolourmass}.
\begin{itemize} \item[{\rm (i)}] The normalised tree growth processes in the components, with projected $\mu$-masses, are i.i.d.: \begin{equation} \left(\mathcal G_m^{(i)}, \mu_m^{(i)}, m\geq 1\right)= \left( \mu_{k_1^{(i)}}^* \left(\mathcal R_{k_1^{(i)}}^{(i)}\right)^{-\beta}\mathcal R_{k_m^{(i)}}^{(i)}, \mu_{k_1^{(i)}}^* \left(\mathcal R_{k_1^{(i)}}^{(i)}\right)^{-1} \mu_{k}^* \restriction_{\mathcal R_k^{(i)}}, m \geq 1\right), \quad i \geq 1. \label{embford1} \end{equation}
\item[{\rm (ii)}] The processes $\big( \mu_{k_1^{(i)}}^* \big(\mathcal R_{k_1^{(i)}}^{(i)}\big)^{-\beta}\mathcal R_{k_m^{(i)}}^{(i)}, m \geq 1\big)$, without
$\mu$-masses are i.i.d. Ford tree growth processes of index $\beta'=\beta/(1-\beta)$ as in Algorithm \ref{Ford}, $i\geq 1$, but starting from
{\rm ML}$(\beta, 1-2\beta)$, not {\rm ML}$(\beta', 1-\beta')$.
\item[{\rm (iii)}] For $i \geq 1$, define
$C_m^{(i)}:=\left(1-\beta\right)^{\beta} m^{-\beta^2/(1-\beta)} \mu_{k_m^{(i)}}^*\big(\mathcal R^{(i)}_{k_m^{(i)}}\big)^{-\beta}$. The processes
$(C_m^{(i)}\mathcal R_{k_m^{(i)}}^{(i)}, m \geq 1)$ with scaling constant depending on $m$, $i \geq 1$, are i.i.d., $\lim_{m \rightarrow \infty} C_m^{(i)}= \left(H^{(i)}\right)^{-\beta/(1-\beta)}\mu_{k_1^{(i)}}^*\big(\mathcal R_{k_1^{(i)}}^{(i)}\big)^{-\beta}$ a.s., where $H^{(i)} \sim {\rm ML}(1-\beta, 1-2\beta)$, and
$\lim_{m \rightarrow \infty} C_m^{(i)} \mathcal R_{k_m^{(i)}}^{(i)}=\mathcal F^{(i)}$ a.s.
in the Gromov-Hausdorff topology where $(\mathcal F^{(i)}, i \geq 1)$ are i.i.d. Ford CRTs of index $\beta'$. \end{itemize} \end{theorem}
\begin{proof} See Section \ref{sec71} in the appendix. \end{proof}
\section{Continuum tree asymptotics} \label{sec5}
In this section, we use embedding to show the convergence of the constructions in Theorems \ref{Mainresult2} and \ref{branchrepl}.
\subsection{Embedding of the two-colour line-breaking construction into a binary compact CRT} \label{Embedding}
In \cite{RW}, we constructed CRTs recursively based on recursive distributional equations as reviewed by Aldous and Bandyopadhyay \cite{14}. This method, applied to a $\beta$-mixed string of beads, yields a compact CRT $(\mathcal T^*, \mu^*)$ in which we can embed the two-colour line-breaking construction. Let us briefly recall the recursive construction of $(\mathcal T^*, \mu^*)$ from {\cite[Proposition 4.12]{RW}}, including some useful notation. We only outline the construction without going into the mathematical details, for which we refer to \cite{RW}.
For $\beta \in (0,1/2]$, consider a sequence of independent strings of beads $({\xi}_{\mathbf i}$, $\mathbf{i} \in \mathbb{U})$, $${\xi}_{\mathbf i}=\left([0,{L}_{\mathbf i}], \sum_{j \geq 1} {P}_{\mathbf{i}j}\delta_{{X}_{\mathbf{i}j}}\right), \quad {\mathbf i} \in \mathbb{U},$$ where ${\xi}_{\varnothing}$ is a $(\beta, \beta)$-string of beads independent of the $\beta$-mixed strings of beads ${\xi}_{\mathbf i}, \mathbf{i} \in \mathbb{U}\setminus \{\varnothing\}$, and $\mathbb U:=\bigcup_{n \geq 0} \mathbb N^n$ is the infinite Ulam-Harris tree. Let $(\check{\mathcal T}_0, \check{\mu}_0)= {\xi}_{\varnothing}$, and for $n \geq 0$, conditionally given $(\check{\mathcal T}_n, \check{\mu}_n)$ with $\check{\mu}_n=\sum_{\mathbf{i}j\in\mathbb{N}^{n+1}}\check{P}_{\mathbf{i}j}\delta_{\check{X}_{\mathbf{i}j}}$, attach to each $\check{X}_{\mathbf{i}j}$ an isometric copy of the string of beads $\xi_{\mathbf{i}j}$ \begin{itemize}
\item with metric rescaled by $\check{\mu}_n(\check{X}_{\mathbf{i}j})^\beta$, and mass measure rescaled by $\check{\mu}_n(\check{X}_{\mathbf{i}j})$,
\item so that the atom ${P}_{\mathbf{i}jk}\delta_{{X}_{\mathbf{i}jk}}$ of $\xi_{\mathbf{i}j}$ is scaled to become an atom of $\check{\mathcal T}_{n+1}$ denoted by
$\check{P}_{\mathbf{i}jk}\delta_{\check{X}_{\mathbf{i}jk}}$, $k\geq 1$, \end{itemize} for all ${\mathbf i}j \in \mathbb N^{n+1}$ respectively. Denote the resulting tree by $(\check{\mathcal T}_{n+1}, \check{\mu}_{n+1})$.
By construction, $(\check{\mathcal T}_{n}, \check{\mu}_{n})$ only carries mass in the points $\check{X}_{{\mathbf i}j}, {\mathbf{i}j} \in \mathbb N^{n+1}$, i.e. $\check{\mu}_{n}(\check{\mathcal T}_{n} \setminus \check{\mathcal T}_{n-1})=0$ for $n \geq 0$. Note that, for any $\check{X}_{i_1 i_2 \cdots i_{n+1}} \in \check{\mathcal T}_{n}$, $n \geq 0$, $$ \check{\mu}_{n} \left(\check{X}_{i_1 i_2 \cdots i_{n+1}}\right)=\check{P}_{i_1i_2\cdots i_{n+1}}={P}_{i_1} {P}_{i_1 i_2} \cdots {P}_{i_1 i_2 \cdots i_{n+1}}.$$ This induces a recursive description of the trees $(\check{\mathcal T}_{n}, \check{\mu}_{n}, n \geq 0)$ via the strings of beads $({\xi}_{\mathbf i}, \mathbf{i} \in \mathbb{U})$.
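The attachment rule above conserves total mass at every level: an atom of mass $\check{\mu}_n(\check{X}_{\mathbf{i}j})$ is replaced by atoms of total mass $\check{\mu}_n(\check{X}_{\mathbf{i}j})$, while lengths contract by the factor $\check{\mu}_n(\check{X}_{\mathbf{i}j})^\beta$. The following toy sketch illustrates only this bookkeeping; the ``strings of beads'' are fake (normalised exponential masses on a unit interval, not genuine $(\beta,\beta)$- or $\beta$-mixed strings of beads), and positions track root distances only.

```python
import random

random.seed(5)

def toy_string_of_beads(k=5):
    """A fake string of beads: an interval of length 1 carrying k atoms of total mass 1."""
    g = [random.expovariate(1.0) for _ in range(k)]
    masses = [x / sum(g) for x in g]
    positions = sorted(random.uniform(0.0, 1.0) for _ in range(k))
    return list(zip(positions, masses))      # [(distance from left end, mass)]

beta = 0.4
level = toy_string_of_beads()                # level 0: atoms as (root distance, mass)
for _ in range(3):                           # three recursive attachment steps
    nxt = []
    for (x, p) in level:
        # attach a fresh string with metric scaled by p**beta and mass measure by p
        nxt += [(x + p ** beta * y, p * q) for (y, q) in toy_string_of_beads()]
    level = nxt
# each atom of mass p is replaced by atoms of total mass p, so total mass is conserved
```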
\begin{theorem}[{\cite[Proposition 4.12]{RW}}] \label{reccon} Let $\beta \in (0,1/2]$ and let $(\check{\mathcal T}_{n}, \check{\mu}_n, n \geq 0)$ be as above. Then there exists a compact CRT $(\mathcal T^*, \mu^*)$ such that $$ \lim \limits_{n \rightarrow \infty}\left(\check{\mathcal T}_n, \check{\mu}_n\right)=\left(\mathcal T^*, \mu^*\right) \text{ a.s. } $$ with respect to the Gromov-Hausdorff-Prokhorov topology. \end{theorem}
We will show that the increasing sequence $(\mathcal T_k^*, k \geq 0)$ of compact $\mathbb R$-trees from Algorithm \ref{twocolourmass} converges a.s. to a tree with the same distribution as $\mathcal T^*$. To do this and handle the marked components, we will embed the sequence of weighted $\infty$-marked $\mathbb R$-trees $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1), \mu_k^*, k \geq 0)$ into a given $(\mathcal T^*, \mu^*)$.
Note that the strings of beads ${\xi}_{\mathbf i}$, $\mathbf{i} \in \mathbb U \setminus \{\varnothing\}$, are $\beta$-mixed strings of beads as used in Algorithm \ref{twocolourmass} but are not elements of the space of (equivalence classes of) weighted $1$-marked $\mathbb R$-trees $\mathbb T_{\rm w}^{[1]}$, as there is no marked component. As we would like to embed into $(\mathcal T^*, \mu^*)$ the two-colour line breaking construction which carries colour marks on $\beta$-mixed strings of beads, we need to determine $I_1 =[0,K_1]\subset I=[0,K]$ such that $(I, I_1, \lambda) \sim \nu_{\beta}^{[1]}$ given some ${\xi}=(I=[0,K], \lambda) \sim \nu_{\beta}$, where $\nu_{\beta}$ and $\nu_{\beta}^{[1]}$ were introduced at the beginning of Section \ref{sec4} as distributions on one-branch trees in $\mathbb{T}_{\rm w}$ and $\mathbb{T}_{\rm w}^{[1]}$, respectively. The existence of the conditional distribution of the point of the colour change $K_1$ given ${\xi}$ is stated in the following lemma.
\begin{lemma}\label{condis} Let ${\xi} \sim \nu_\beta$. Then there exists a unique probability kernel $\kappa$ from $\mathbb T_{\rm w}$ to $\mathbb R$ such that \begin{equation} \mathbb P\left(K_1 \in \cdot \lvert {\xi} \right)=\kappa \left({\xi}, \cdot\right) \quad \text{a.s.}. \label{conddistr} \end{equation} \end{lemma}
\begin{proof} This is a special case of Theorem 6.3 in \cite{32}, since $\mathbb R$ is a Borel space. \end{proof}
Given the weighted $\mathbb R$-tree $(\mathcal T^*, \mu^*)$, we will obtain a sequence of weighted $\infty$-marked $\mathbb R$-trees \begin{equation*} \left(\overline{\mathcal T}_k^*, \left(\overline{\mathcal R}_k^{(i)}, i \geq 1\right), \overline{\mu}_k^*, k \geq 0\right) \end{equation*} with the same distribution as $({\mathcal T}_k^*, ({\mathcal R}_k^{(i)}, i \geq 1), {\mu}_k^*, k \geq 0)$ as an increasing sequence of subsets $\overline{\mathcal T}_k^* \subset \mathcal T^*$, $k \geq 0$, where the mass measure $\overline{\mu}_k^*$ captures the masses of the connected components of $\mathcal T^* \setminus \overline{\mathcal T}_k^*$ projected onto $\overline{\mathcal T}_k^*$, $k \geq 0$. The recursive structure ${\xi}_{\mathbf i}, {\mathbf i} \in \mathbb U$, provides the i.i.d.\ strings of beads needed in Algorithm \ref{twocolourmass}, which the colour change kernel \eqref{conddistr} turns into i.i.d. $1$-marked strings of beads.
\begin{algorithm}[Two-colour embedding] \label {twocemb} \rm Let $\beta\in(0,1/2]$. We embed into the tree $({\mathcal{T}}^*,{\mu}^*)$ of Theorem \ref{reccon} weighted $\infty$-marked $\mathbb{R}$-trees $(\overline{\mathcal{T}}_k^*,(\overline{\mathcal{R}}_k^{(i)},i\ge 1),\overline{\mu}_k^*)$, $k\ge 0$, as follows.
\begin{enumerate}\setcounter{enumi}{-1}\item Let $(\overline{\mathcal{T}}_0^*,\overline{\mu}_0^*)={\xi}_\varnothing$ be the initial $(\beta,\beta)$-string of beads; let $\overline{r}_0=0$ and $\overline{\mathcal{R}}_0^{(i)}=\{\rho\}$, $i\ge 1$.
\end{enumerate}
Given $(\overline{\mathcal{T}}_j^*,(\overline{\mathcal{R}}_j^{(i)},i\ge 1),\overline{\mu}_j^*)$ with $\overline{\mu}_j^*=\sum_{x\in\overline{\mathcal{T}}_j^*}\overline{\mu}_j^*(x)\delta_x$, $0\le j\le k$, let $\overline{r}_k=\#\{i\ge 1\colon\overline{\mathcal{R}}_k^{(i)}\neq\{\rho\}\}$;
\begin{enumerate}\item select an edge $\overline{E}_k^*\subset\overline{\mathcal{T}}_k^*$ with probability proportional to its mass $\overline{\mu}_k^*(\overline{E}_k^*)$; if
$\overline{E}_k^*\subset\overline{\mathcal{R}}_k^{(i)}$ for some $i\in[\overline{r}_k]$, let $\overline{I}_k=i$; otherwise, i.e.\ if $\overline{E}_k^*\subset\overline{\mathcal{T}}_k^*\setminus\bigcup_{i\in[\overline{r}_k]}\overline{\mathcal{R}}_k^{(i)}$, let $\overline{r}_{k+1}\!=\!\overline{r}_k\!+\!1$, $\overline{I}_k\!=\!\overline{r}_{k+1}$;
\item if $\overline{E}_k^*$ is an external edge of $\overline{\mathcal{R}}_k^{(i)}$, perform $(\beta,1-2\beta)$-coin tossing sampling on $\overline{E}_k^*$ to determine $\overline{J}_k^*\in \overline{E}_k^*$; otherwise, i.e.\ if $\overline{E}_k^*\subset\overline{\mathcal{T}}_k^*\setminus\bigcup_{i\in[\overline{r}_k]}\overline{\mathcal{R}}_k^{(i)}$ or if $\overline{E}_k^*$ is an internal edge of $\overline{\mathcal{R}}_k^{(i)}$, sample $\overline{J}_k^*$ from the normalised mass measure on $\overline{E}_k^*$;
\item let $\mathbf{j}\in\mathbb U$ such that $\overline{J}_k^*=\check{X}_{\mathbf{j}}$ and
$\overline{\mu}_k^*(\overline{J}_k^*)=\check{P}_{\mathbf{j}}$; sample
a point $\overline{\Omega}_k$ from $\kappa(\xi_{\mathbf{j}},\cdot)$; to form $(\overline{\mathcal{T}}_{k+1}^*,\overline{\mu}_{k+1}^*)$,
remove $\overline{\mu}_k^*(\overline{J}_k^*)\delta_{\overline{J}_k^*}$ from $\overline{\mu}_k^*$ and add to $\overline{\mathcal{T}}_k^*$ the scaled copy of the string
of beads $\xi_{\mathbf{j}}$ with $\overline{\Omega}_k$ embedded in $\mathcal{T}^*$; set $\overline{\mathcal{R}}_{k+1}^{(\overline{I}_k)}=\overline{\mathcal{R}}_k^{(\overline{I}_k)}\cup[[\overline{J}_k^*,\overline{\Omega}_k]]$ and
$\overline{\mathcal{R}}_{k+1}^{(i)}=\overline{\mathcal{R}}_k^{(i)}$, $i\neq \overline{I}_k$.
\end{enumerate} \end{algorithm}
The proof of the following statement can be found in Appendix \ref{appen1}, together with similar proofs.
\begin{prop} \label{oversamedist} The sequences of trees constructed in Algorithm \ref{twocolourmass} and Algorithm \ref{twocemb} have the same distribution, i.e. $({\mathcal T}_k^*,({\mathcal R}_k^{(i)}, i \geq 1), {\mu}_k^*, k \geq 0 ) \,{\buildrel d \over =}\, (\overline{\mathcal T}_k^*, (\overline{\mathcal R}_k^{(i)}, i \geq 1), \overline{\mu}_k^*, k \geq 0).$ \end{prop}
\subsection{Convergence of two-colour trees, and the proof of Theorem \ref{Mainresult2}}
Theorem \ref{mainresult} and Corollary \ref{algsamecor} demonstrate that the two-colour line-breaking construction naturally combines the stable tree growth process and infinitely many rescaled subtree growth processes that build rescaled independent Ford CRTs. We can show that the tree growth process $(\mathcal T_k^*, k \geq 0)$ converges to a compact CRT with the same distribution as the CRT $(\mathcal T^*, \mu^*)$ constructed at the beginning of Section \ref{Embedding}, using the embedding of Algorithm \ref{twocemb} and Proposition \ref{oversamedist}.
\begin{prop}[Convergence of $(\mathcal T_k^*, \mu_k^*, k \geq 0)$] \label{propo2} Let $({\mathcal T}_k^*, {\mu}_k^*, k \geq 0)$ be the sequence of weighted $\mathbb R$-trees from Algorithm \ref{twocolourmass}. Then, there is a compact CRT $(\mathcal T^*, \mu^*)$ such that \begin{equation}
\lim \limits_{k \rightarrow \infty} d_{\rm GHP} \left( \left({\mathcal T}_k^*, {\mu}_k^*\right), \left({\mathcal T}^*, \mu^*\right) \right) =0 \quad \text{a.s.} \label{alld1} \end{equation}
\end{prop}
\begin{proof} We prove the claim for the sequence of weighted $\mathbb R$-trees $(\overline{\mathcal T}_k^*, \overline{\mu}_k^*, k \geq 0)$ embedded in a given $(\mathcal T^*, \mu^*)$ as in Section \ref{Embedding}. Then \eqref{alld1} will follow from Proposition \ref{oversamedist}.
By Theorem \ref{mainresult} and Corollary \ref{convstable}, we can couple a stable tree growth process $(\widetilde{\mathcal{T}}_k,\widetilde{\mu}_k)\rightarrow(\mathcal{T},\mu)$ with $(\overline{\mathcal{T}}_k^*,\overline{\mu}_k^*,k\geq 0)$ in such a way that $\widetilde{\mu}_k$ is a push-forward of $\overline{\mu}_k^*$. In particular, we have \begin{equation}\max\{\overline{\mu}_k^*(x),x\in\overline{\mathcal{T}}_k^*\}\le\max\{\widetilde{\mu}_k(x),x\in\widetilde{\mathcal{T}}_k\}\rightarrow 0\qquad\mbox{a.s..}
\label{noatoms} \end{equation} On the other hand, $\overline{\mu}_k^*$ is the pushforward of $\mu^*$ under the projection map $\overline{\pi}_k^*\colon\mathcal{T}^*\rightarrow\overline{\mathcal{T}}_k^*$. Now assume, for contradiction, that $\overline{\bigcup_{k\ge 0}\overline{\mathcal{T}}_k^*}\neq\mathcal{T}^*$. Since $\mathcal{T}^*$ is a CRT by Theorem \ref{reccon} and all leaves are limit points of $\mathcal{T}^*\setminus{\rm Lf}(\mathcal{T}^*)$, there is $x\in\mathcal{T}^*\setminus\overline{\bigcup_{k\ge 0}\overline{\mathcal{T}}_k^*}$ such that the subtree of $\mathcal{T}^*$ above $x$ has positive mass $c:=\mu^*(\mathcal{T}_x^*)>0$. Since $\overline{\bigcup_{k\ge 0}\overline{\mathcal{T}}_k^*}$ is path-connected, $\mathcal{T}_x^*\cap\overline{\bigcup_{k\ge 0}\overline{\mathcal{T}}_k^*}=\varnothing$, and hence every $\overline{\mu}_k^*$ must have an atom of mass at least $c$, which contradicts \eqref{noatoms}.
We conclude that $\overline{\bigcup_{k\ge 0}\overline{\mathcal{T}}_k^*}=\mathcal{T}^*$. Since $\mathcal{T}^*$ is compact and the union is increasing in $k\ge 0$, this implies GH-convergence.
The convergence in the GHP sense follows since the mass measure $\overline{\mu}_k^*$ is the projection of $\mu^*$ onto $\overline{\mathcal T}_k^*$, see the proof of \cite[Corollary 23]{1} for details of this argument. \end{proof}
\begin{corollary}[Convergence of two-colour trees] \label{contwoc} Let $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1 ), \mu_k^*, k \geq 0)$ be the two-colour tree growth process from Algorithm \ref{twocolourmass} for some $\beta \in (0,1/2]$. Then there exist a compact CRT $(\mathcal T^*, \mu^*)$, an i.i.d. sequence $(\mathcal F^{(i)}, i \geq 1)$ of Ford CRTs of index $\beta'=\beta/(1-\beta)$ and scaling factors $(C^{(i)}, i \geq 1)$ as in Corollary \ref{algsamecor} with
$ \lim_{k \rightarrow \infty} d_{\rm GHP}^\infty \big( \big( \mathcal T_k^*, \big(\mathcal R_k^{(i)}, i \geq 1\big), \mu_k^* \big) , \big(\mathcal T^*, \big( \big(C^{(i)}\big)^{-1} \mathcal F^{(i)}, i \geq 1 \big), \mu^* \big) \big) = 0$ a.s..
\end{corollary}
\begin{proof}This is a direct consequence of Proposition \ref{propo2} and Corollary \ref{algsamecor}. \end{proof}
It will be convenient to use the representation of Algorithm \ref{depstruc1}. We note the following consequences of the construction, in the light of Proposition \ref{propo2}.
\begin{corollary}\label{corproj} In the setting of Algorithm \ref{depstruc1},
\begin{enumerate}
\item[(i)] the closure $\widehat{\mathcal{T}}$ in $l^1(\mathbb N_0^2)$ of the increasing union
$\bigcup_{k\ge 0}\mathcal{R}(\widehat{\mathcal{T}}_k,\widehat{\Sigma}_0^{(k)},\ldots,\widehat{\Sigma}_k^{(k)})$ is compact;
\item[(ii)] the natural projection of $\widehat{\mathcal T}$ onto the subspace spanned by $e_{k,0}$, $k\ge 0$, is the stable tree $\mathcal T$;
\item[(iii)] the natural projection of $\widehat{\mathcal T}$ onto the subspace spanned by $e_{m,i}$, $m\ge 1$, scaled by the scaling factor $C^{(i)}$ of
Remark \ref{pidi}, is a Ford CRT for each $i\ge 1$.
\end{enumerate} \end{corollary} \begin{proof} (i) It follows from Propositions \ref{algsame} and \ref{propo2} that the closure $\widehat{\mathcal{T}}$ in $l^1(\mathbb N_0^2)$ of the increasing union
is compact. (ii) holds by construction since all steps of Algorithm \ref{depstruc1} preserve this projection property for the trees $\widehat{\mathcal T}_k$, $k\ge 0$.
(iii) holds by Corollary \ref{algsamecor} since the scaled projections of
$\mathcal{R}(\widehat{\mathcal{T}}_k,\widehat{\Sigma}_0^{(k)},\ldots,\widehat{\Sigma}_k^{(k)})$ are Ford tree growth processes whose $m$th growth step is for
$k=k_m^{(i)}$, $m\ge 1$, $i\ge 1$. \end{proof}
These two corollaries imply Theorem \ref{Mainresult2}.
\subsection{Branch point replacement in a stable tree, and the proof of Theorem \ref{branchrepl}} \label{bprepl}
The aim of this section is to replace branch points of the stable tree by rescaled independent Ford CRTs. Let us denote the independent Ford tree growth processes underlying Corollary \ref{corproj}(iii) by $({\mathcal F}^{(i)}_m,m\ge 1)$, and the Ford CRTs with leaf labels by $(\mathcal{F}^{(i)},\Omega^{(i)}_m,m\ge 1)$, $i\ge 1$, all embedded in the appropriate coordinates. Now fix $i\ge 1$, and focus on the $m$th subtree of the $i$th branch point of $\mathcal{T}$, and suppose that $\Sigma_n$ is its smallest label. In Algorithm \ref{depstruc1}, each insertion into the $i$th marked component shifts some subtrees of the $i$th branch point, and the subtree we consider stops being shifted at the $m$th insertion.
The branch point replacement algorithm can be viewed as a change of order of the insertions of Algorithm \ref{depstruc1}. The $k$th step of Algorithm \ref{depstruc1} gets $\Sigma_k$ into its final position $\widehat{\Sigma}_k^{(k)}$ by inserting one branch of a marked component. The $i$th step of the branch point replacement algorithm gets the smallest labelled leaves of all subtrees of the $i$th branch point into their final positions by making all insertions into the $i$th component. This amounts to shifting the $m$th subtree of the $i$th branch point by $\Omega^{(i)}_m$, $m\ge 1$.
\begin{algorithm}[Branch point replacement in the stable tree] \label{algbprepl} We construct a sequence of weighted $i$-marked $\mathbb R$-trees $(\mathcal B^{(i)},(\mathcal{R}^{(1)},\ldots,\mathcal{R}^{(i)}),\mu^{(i)})$. Let $(\mathcal B^{(0)}, \mu^{(0)})=(\mathcal T, \mu)$ be the embedded stable tree with leaves $\Sigma_n^{(0)}\!=\!\Sigma_n, n\!\ge\! 0$. For $i \!\geq\! 1$, conditionally given $(\mathcal B^{(i-1)},(\mathcal{R}^{(1)},\ldots,\mathcal{R}^{(i-1)}),\mu^{(i-1)},(\Sigma_n^{(i-1)},n\ge 0))$, shift the connected components $\mathcal S_m^{(i)}$, $m\in\{0,1,2,\ldots\}\cup\{\infty\}$, of $\mathcal B^{(i-1)}\setminus\{v_i^{(i-1)}\}$, where $v_i^{(i-1)}$ denotes the $i$th branch point: \begin{equation*} \mathcal B^{(i)}:= \mathcal S_\infty^{(i)}\cup\mathcal S_0^{(i)}\cup\left(v_i^{(i-1)}+\left(C^{(i)}\right)^{-1}\mathcal{F}^{(i)}\right)\cup\bigcup_{m\ge 1}\left(\left(C^{(i)}\right)^{-1}\Omega_m^{(i)}+\mathcal S_m^{(i)}\right) \end{equation*} where $\mathcal{F}^{(i)}$ is the independent Ford CRT with labelled Ford leaves $(\Omega_m^{(i)},m\ge 1)$. Take as $\mu^{(i)}$ the measure $\mu^{(i-1)}$ shifted with each of the connected components and set $\mathcal R^{(i)}:=\left(v_i^{(i-1)}+\left(C^{(i)}\right)^{-1}\mathcal{F}^{(i)}\right)$. \end{algorithm}
\begin{theorem}[Branch point replacement] The $\mathbb R$-trees
$(\mathcal B^{(i)},(\mathcal{R}^{(1)},\ldots,\mathcal{R}^{(i)},\{0\},\{0\},\ldots),\mu^{(i)})$ of Algorithm \ref{algbprepl}
converge in $(\mathbb{T}_{\rm w}^{\infty},d_{\rm GHP}^{\infty})$ to a limit with the same distribution as in Corollary \ref{contwoc}, i.e.
\begin{equation*} \lim_{i \rightarrow \infty} d_{\rm GHP}^\infty \left( \left( \mathcal B^{(i)}, \left(\mathcal R^{(1)},\ldots,\mathcal R^{(i)},\{0\},\ldots\right), \mu^{(i)}\right) , \left(\mathcal T^*, \left( \left(C^{(i)}\right)^{-1} \mathcal F^{(i)}, i \geq 1 \right), \mu^* \right) \right) = 0 \quad \text{a.s.}. \end{equation*} \end{theorem}
\begin{proof} By construction, the trees spanned by the first $k$ leaves are the same in Algorithms \ref{depstruc1} and \ref{algbprepl}:
\begin{equation} \left( \mathcal R\left(\widehat{\mathcal T}_k, \Sigma^{(k)}_0, \ldots, \Sigma_k^{(k)} \right), \left(\widehat{\mathcal R}_k^{(i)}, i \geq 1\right) , \widehat{\mu}_k^*, k \geq 0\right) \,=\, \left(\mathcal B^{(k)}_k, \left(\mathcal U_k^{(i)}, i \geq 1 \right), \lambda_k, k \geq 0\right) \label{brepldis} \end{equation} where $\mathcal B_k^{(k)}:=\mathcal R(\mathcal B^{(k)}, \Sigma_0^{(k)}, \ldots, \Sigma_k^{(k)} )$, $\mathcal U_k^{(i)}:=\mathcal R^{(i)} \cap \mathcal B_k^{(k)}$, and $\lambda_k=(\pi_k^{\mathcal B})_*\mu^{(k)}$ denotes the projected mass measure.
By Proposition \ref{algsame} and Corollary \ref{contwoc}, we have convergence of reduced trees to the claimed limit. In particular, for all $\varepsilon>0$, there is $k_0\ge 0$ such that for all $k\ge k_0$, $$d_{\rm GHP}^\infty \left( \left(\mathcal B^{(k)}_k, \left(\mathcal U_k^{(i)}, i \geq 1 \right), \lambda_k\right), \left(\widehat{\mathcal T}, \left( \left(C^{(i)}\right)^{-1} \mathcal F^{(i)}, i \geq 1 \right), \widehat{\mu} \right) \right)<\varepsilon/3.$$ But this is only possible if all connected components of $\widehat{\mathcal{T}}\setminus\mathcal B^{(k)}_k$ have height less than $2\varepsilon/3$. By construction, the components of $\mathcal B^{(k)}\setminus\mathcal B^{(k)}_k$ are bounded in height by the corresponding components of $\widehat{\mathcal{T}}\setminus\mathcal B^{(k)}_k$, and hence also have height less than $2\varepsilon/3$. Since $\widehat{\mu}$ and $\mu^{(k)}$ have the same projection onto $\widehat{\mathcal T}_k=\mathcal B^{(k)}_k$, we conclude that also $$d_{\rm GHP}^\infty \left( \left(\mathcal B^{(k)}_k, \left(\mathcal U_k^{(i)}, i \geq 1 \right), \lambda_k\right), \left(\mathcal B^{(k)}, \left(\mathcal R^{(1)}, \ldots,\mathcal R^{(k)},\{0\},\ldots \right), \mu^{(k)} \right) \right)<2\varepsilon/3.$$ By the triangle inequality, this completes the proof. \end{proof}
This formalises and proves Theorem \ref{branchrepl}.
\section{Discrete two-colour tree growth processes} \label{Sec6}
Marchal \cite{18} introduced a tree growth model related to the stable tree. Specifically, he built a sequence of discrete trees $(T_n, n \geq 0)$, which we view as rooted $\mathbb R$-trees with unit edge lengths, equipped with the graph distance, i.e. the distance between two vertices $x,y \in T_n$ is the number of edges between $x$ and $y$.
\begin{algorithm}[Marchal's algorithm] \label{Marchal} Let $\beta \in (0,1/2]$. We grow discrete trees $ T_n$, $n \geq 0$, as follows.
\begin{itemize} \item[0.]Let $T_0$ consist of a root $\rho$ and a leaf $\Sigma_0$, connected by an edge. \end{itemize} Given $T_n$, with leaves $\Sigma_0,\ldots,\Sigma_n$, \begin{itemize} \item[1.] distribute a total weight of $n+\beta$ by assigning $(d-3)(1-\beta)+1-2\beta$ to each vertex of degree $d \geq 3$ and $\beta$ to each edge of $T_n$; select a vertex or an edge in $T_n$ at random according to these weights; \item[2.] if an edge is selected, insert a new vertex, i.e. replace the selected edge by two edges connecting the new vertex to the vertices of the selected edge; proceed with the new vertex as the selected vertex; \item[3.] in all cases, add a new edge from the selected vertex to a new leaf $\Sigma_{n+1}$ to form $T_{n+1}$. \end{itemize} \end{algorithm} \noindent Strengthening a result by Marchal \cite{18}, Curien and Haas \cite{11} showed that the sequence of trees $(T_n, n \geq 0)$ has the stable tree $\mathcal T$ of index $\beta$ as its a.s.\ scaling limit, in the following strong sense: \begin{equation*} \lim \limits_{n \rightarrow \infty} n^{-\beta} T_n=\mathcal T \quad \text{a.s. in the Gromov-Hausdorff topology}. \end{equation*}
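As a purely illustrative sketch (not part of the formal development; identifiers such as \texttt{grow\_marchal} are ours), the weight bookkeeping of steps 0--3 of Algorithm \ref{Marchal} can be simulated directly. A useful sanity check is that the total selection weight of $T_n$ equals $n+\beta$, as stated in step 1.

```python
import random

def marchal_weights(edges, degree, beta):
    """Step 1 of Marchal's algorithm: weight beta per edge and
    (d - 3)(1 - beta) + 1 - 2*beta per vertex of degree d >= 3."""
    items = [("edge", e) for e in edges]
    weights = [beta] * len(edges)
    for v, d in degree.items():
        if d >= 3:
            items.append(("vertex", v))
            weights.append((d - 3) * (1 - beta) + 1 - 2 * beta)
    return items, weights

def grow_marchal(n, beta, seed=None):
    """Grow T_0, ..., T_n following steps 0-3; returns (edges, degree)."""
    rng = random.Random(seed)
    root, leaf0 = ("root", 0), ("leaf", 0)
    edges = [(root, leaf0)]                 # step 0: single edge rho -- Sigma_0
    degree = {root: 1, leaf0: 1}
    for k in range(1, n + 1):
        items, weights = marchal_weights(edges, degree, beta)
        kind, obj = rng.choices(items, weights=weights)[0]
        if kind == "edge":                  # step 2: split the selected edge
            u, v = obj
            mid = ("internal", k)
            edges.remove(obj)
            edges += [(u, mid), (mid, v)]
            degree[mid] = 2
            obj = mid
        new_leaf = ("leaf", k)              # step 3: attach new leaf Sigma_k
        edges.append((obj, new_leaf))
        degree[obj] += 1
        degree[new_leaf] = 1
    return edges, degree
```

Each step increases the total weight by exactly $1$: splitting an edge creates a degree-3 vertex of weight $1-2\beta$ and two edges in place of one, while reusing a branch point of degree $d$ raises its weight by $1-\beta$; in both cases the new leaf edge contributes $\beta$.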
The trees $(\mathcal F_m, m \geq 1)$ of a Ford tree growth process can also be obtained as scaling limits of a discrete tree growth process, the so-called Ford alpha-model. Both Marchal's model related to the stable tree and Ford's alpha-model are contained as special cases in the alpha-gamma-model studied in \cite{10}. \begin{definition}[The alpha-gamma-model] Let $\alpha \in [0,1]$ and $\gamma \in (0,\alpha]$. We grow discrete trees $T_n, n \geq 1$: \begin{itemize} \item[0.] Let $T_1$ consist of a root $\rho$ and a leaf $\Sigma_1$, connected by an edge. \end{itemize} Given $T_n$, with leaves $\Sigma_1,\ldots,\Sigma_n$, \begin{itemize} \item[1.] distribute a total weight of $n-\alpha$ by assigning $(d-2)\alpha -\gamma$ to each vertex of $T_n$ of degree $d \geq 3$, $1-\alpha$ to each external edge of $T_n$, and $\gamma$ to each internal edge of $T_n$; select a vertex or an edge in $T_n$ at random according to these weights; \item[2.] if an edge is selected, insert a new vertex, i.e. replace the selected edge by two edges connecting the new vertex to the vertices of the selected edge; proceed with the new vertex as the selected vertex; \item[3.] in all cases, add a new edge from the selected vertex to a new leaf $\Sigma_{n+1}$ to form $T_{n+1}$. \end{itemize} \end{definition}
Note that the case $\gamma=1-\alpha=\beta$ gives Marchal's model, Algorithm \ref{Marchal}, while the case $\gamma=\alpha=\beta'$ was introduced by Daniel Ford in his thesis \cite{9} and is referred to as Ford's alpha-model. In the latter, branch points get assigned weight zero after their creation, i.e. the trees in Ford's alpha-model are binary.
\begin{lemma}[Convergence of reduced trees] Let $(T_n, n \geq 1)$ be an alpha-gamma tree-growth process for some $\alpha \in (0,1)$ and $\gamma \in (0, \alpha]$. For $k \geq 1$, consider the reduced tree $\mathcal R \left( T_n, \Sigma_1, \ldots, \Sigma_k \right)$ spanned by the root and the first $k$ leaves, equipped with the graph distance on $T_n$, i.e.\ each edge $a \rightarrow b$ of $\mathcal R \left( T_n, \Sigma_1, \ldots, \Sigma_k \right)$ has length equal to the number of edges between $a$ and $b$ in $T_n$. Then there exists an $\mathbb R$-tree $\mathcal R_k$ such that \begin{equation*} \lim_{n \rightarrow \infty} n^{-\gamma} \mathcal R \left( T_n, \Sigma_1, \ldots, \Sigma_k\right)= \mathcal R_k \quad \text{a.s.} \end{equation*} in the Gromov-Hausdorff topology. Furthermore, conditionally given that $T_k$ has a total of $k+\ell$ edges, i.e. that $T_k$ has $\ell$ branch points, the edge lengths of $\mathcal R_k$ are given by $L_k V_k^\gamma D_k$ where \begin{equation*} D_k \sim {\rm{Dirichlet}}\left( ({1-\alpha})/{\gamma}, \ldots, ({1-\alpha})/{\gamma}, 1, \ldots, 1\right) \end{equation*} with a weight of $(1-\alpha)/\gamma$ for each external edge, and weight $1$ for each internal edge, and \begin{equation*} L_k \sim {\rm{ML}}(\gamma,\ell\gamma+k(1-\alpha)), \quad V_k \sim {\rm{Beta}}\left(k(1-\alpha)+\ell\gamma, (k-1)\alpha-\ell\gamma\right) \end{equation*} are conditionally independent. \end{lemma}
Note that in the stable case, the total length is a $V_k \sim {\rm{Beta}}((k+\ell)(1-\alpha), (k-1-\ell)\alpha-\ell)$ proportion of $L_k \sim {\rm{ML}}(1-\alpha,(k+\ell)(1-\alpha))$, and is uniformly distributed amongst the $k+\ell$ edges. In Ford's model, we have $\ell=k-1$, and we distribute the \enquote{full} length $L_k \sim {\rm{ML}}(\alpha, k-\alpha)$ according to a Dirichlet variable $D_k$ with a parameter of $1/\alpha-1$ for each external edge and parameter $1$ for each internal edge.
In a similar manner, we can obtain the two-colour trees $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1))$, $k \geq 0$, as a.s. scaling limits of the following discrete tree growth process in the space of $\infty$-marked $\mathbb R$-trees with unit edge lengths.
\begin{definition}[The discrete two-colour model] \label{deftwoc} Let $\beta \in (0,1/2]$. We grow discrete two-colour trees $(T^*_n, (R_n^{(i)}, i \geq 1))$, $n \geq 0$, as follows. \begin{itemize} \item[0.] Let $T^*_0$ consist of a root $\rho$ and a leaf $\Sigma_0$ connected by an edge, let $R_0^{(i)}=\{\rho\}$, $i \geq 1$, and $r_0=0$. \end{itemize} Given $(T^*_n, (R_n^{(i)}, i \geq 1))$, with leaves $\Sigma_0,\ldots,\Sigma_n$ and $r_n=\#\{i\geq 1\colon R_n^{(i)}\neq \{\rho\}\}$, \begin{itemize} \item[1.] distribute a total weight of $n+\beta$ by assigning $\beta$ to each unmarked and each internal marked edge of $T^*_n$, and $1-2\beta$ to each external marked edge of $T^*_n$; select an edge in $T^*_n$ at random according to these weights; \item[2.] if the selected edge is unmarked, replace it by two unmarked edges connecting the new vertex to the vertices of the selected edge and set $I_n=r_n+1$; if the selected edge is a marked edge of $R_n^{(i)}$ for some $i\geq 1$, replace it by two marked edges and set $I_n=i$; proceed with the new vertex as the selected vertex; \item[3.] add a new degree-2 vertex, connect it to the selected vertex by a marked edge, and to a new leaf $\Sigma_{n+1}$ by an unmarked edge; add the marked edge to $R_n^{(I_n)}$ to form $R_{n+1}^{(I_n)}$; set $R_{n+1}^{(i)}=R_n^{(i)}$ for $i\neq I_n$. \end{itemize} \end{definition}
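Again as an illustrative sketch under our own (hypothetical) naming conventions, the edge-colouring bookkeeping of Definition \ref{deftwoc} can be simulated: we identify each edge with its child vertex, record its colour ($0$ for unmarked, $i\ge 1$ for the $i$th marked component), and call a marked edge external when no child edge lies in the same component. As in Marchal's model, the total selection weight of $T^*_n$ is $n+\beta$.

```python
import random

def external(v, colour, children):
    """A marked edge (parent(v), v) is external iff no child edge of v
    belongs to the same marked component."""
    return all(colour[w] != colour[v] for w in children[v])

def grow_two_colour(n, beta, seed=None):
    """Grow the discrete two-colour trees following steps 0-3.
    colour[v] labels the edge (parent(v), v): 0 = unmarked, i >= 1 = R^(i)."""
    rng = random.Random(seed)
    parent, colour = {1: 0}, {1: 0}             # step 0: rho = 0, Sigma_0 = 1
    children = {0: [1], 1: []}
    r, nxt = 0, 2                               # r = number of components
    for _ in range(n):
        def weight(v):                          # step 1: edge weights
            if colour[v] == 0 or not external(v, colour, children):
                return beta                     # unmarked or internal marked
            return 1 - 2 * beta                 # external marked
        verts = list(colour)
        sel = rng.choices(verts, weights=[weight(v) for v in verts])[0]
        comp = colour[sel]                      # colour of the selected edge
        if comp == 0:
            r += 1
            i = r                               # step 2: new marked component
        else:
            i = comp                            # step 2: grow component i
        mid = nxt; nxt += 1                     # split the selected edge
        parent[mid], colour[mid] = parent[sel], comp
        children[parent[sel]].remove(sel)
        children[parent[sel]].append(mid)
        parent[sel] = mid
        children[mid] = [sel]
        joint, leaf = nxt, nxt + 1; nxt += 2    # step 3: marked + leaf edge
        parent[joint], colour[joint] = mid, i
        parent[leaf], colour[leaf] = joint, 0
        children[mid].append(joint)
        children[joint], children[leaf] = [leaf], []
    return parent, colour, children
```

Splitting an external marked edge turns its root-side half internal while leaving the far half external, so each step again adds total weight exactly $1$, consistent with step 1 of the definition.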
\begin{prop}[Convergence of the discrete two-colour model] \label{distwoccon} Consider the discrete two-colour tree growth process $(T_n^*, (R_n^{(i)}, i \geq 1), n \geq 0)$ from Definition \ref{deftwoc}, which we view as a sequence of $\infty$-marked $\mathbb R$-trees with unit edge lengths. For all $k \geq 0$, let $\mathcal R( T_n^*, (R_n^{(i)},i \geq 1) , \Sigma_0, \ldots, \Sigma_k)$ denote the reduced tree spanned by the root $\rho$ and the leaves $\Sigma_0, \ldots, \Sigma_k$. Then \begin{equation*} \lim_{n \rightarrow \infty} n^{-\beta}\mathcal R \left(T_n^*, \left(R_n^{(i)}, i \geq 1 \right) , \Sigma_0, \ldots, \Sigma_k \right) = \left(\mathcal T_k^*, \left(\mathcal R_k^{(i)}, i \geq 1 \right) \right) \quad \text{a.s.} \end{equation*} with respect to the distance $d_{\rm GH}^\infty$ defined in \eqref{GHK}, where $(\mathcal T_k^*,(\mathcal R_k^{(i)}, i \geq 1),k\ge 0)$ is as in Algorithm \ref{twocolour}.
Conditionally given that $T_k^*$ has $r_k$ marked components $R_k^{(i)} \neq \{\rho\}$ with $d_1-2, \ldots, d_{r_k}-2$ leaves, the distribution of the edge lengths of $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1 ) ) $ is given by $S_k^* D_k$, where $S_k^* \sim {\rm{ML}}(\beta, \beta+k)$ and \begin{equation*} D_k \sim {\rm{Dirichlet}}\left( 1, \ldots, 1, 1/\beta-2, \ldots, 1/\beta-2\right) \end{equation*} are conditionally independent, with weight $1$ for each unmarked edge and each internal marked edge, and weight $1/\beta-2$ for each external marked edge. \end{prop}
The proof of Proposition \ref{distwoccon} is based on exactly the same techniques as the proof of the corresponding result for the alpha-gamma model, cf. \cite[Propositions 21 and 22]{10}, and the result for $(\alpha, \theta)$-tree growth processes, cf. \cite[Proposition 14]{1}. We omit the details.
\begin{remark} One can obtain the mass measures $\mu_k^*$, $k \geq 0$, as the scaling limits of the empirical measures on the leaves of $T_n$, projected onto the reduced trees, using the same methods as in \cite{1}. In particular, each edge equipped with limiting relative projected subtree masses is a rescaled $(\beta, \theta)$-string of beads where $\theta=\beta$ for internal marked and unmarked edges, and $\theta=1-2\beta$ for external marked edges. It can be shown directly that these strings of beads are independent of each other and of the mass split on $\mathcal T_k^*$, which has distribution ${\rm{Dirichlet}}( \beta, \ldots, \beta, 1-2\beta, \ldots, 1-2\beta)$, with parameter $\beta$ for each internal marked and unmarked edge, and parameter $1-2\beta$ for each external marked edge of $\mathcal T_k^*$, as in Proposition \ref{starstrings}. \end{remark}
\appendix
\section{Appendix} \label{appen}
We present the proofs postponed from earlier parts of this paper.
\subsection{Coupling proofs of Proposition \ref{masssel2}, Theorem \ref{mainresult} and Proposition \ref{oversamedist}}\label{appen1}
The proofs of Proposition \ref{masssel2}, Theorem \ref{mainresult} (i) and (ii), and of Proposition \ref{oversamedist} are based on coupling arguments and are quite similar to one another. We present the proof of Theorem \ref{mainresult}(i) first.
\begin{proof}[Proof of Theorem \ref{mainresult}(i)] Recall the constructions of $(\mathcal T_k^*, \mu_k^*)$ in Algorithm \ref{twocolourmass} and $(\widetilde{\mathcal T}_k, \widetilde{\mu}_k)$ in \eqref{ttilde}. We couple $(\mathcal T_k, \mu_k, k \geq 0)$ to $(\mathcal T_k^*, \mu_k^*, k \geq 0)$ and identify the distribution as required for Algorithm \ref{masssel}:
\begin{itemize}
\item We couple the initial $(\beta, \beta)$-strings of beads to be equal $(\mathcal T_0, \mu_0)=(\widetilde{\mathcal T}_0, \widetilde{\mu}_0)=(\mathcal T_0^*, \mu_0^*)$.
\item Supposing that $(\mathcal T_k, \mu_k)=(\widetilde{\mathcal T}_k, \widetilde{\mu}_k)$ for some $k \geq 0$, set $J_k:=\widetilde{J}_k=[J_k^*]_{\sim}$, $\xi_k=\xi_k^{(2)}$, and \begin{equation*} Q_k:=\left(1-\gamma_k\right) {\mu_k^*({J_k^*})}/{\widetilde{\mu}_k(\widetilde{J}_k)}, \end{equation*} where we recall that $(1-\gamma_k) \sim \rm{Beta}(\beta, 1-2\beta)$ is the independent scaling factor for $\xi_k^{(2)}$ in the construction of a $\beta$-mixed string of beads from $\xi_k^{(1)}$, $\xi_k^{(2)}$ and $\gamma_k$, as at the beginning of Section \ref{sec4}. If the selected atom $J_k^*$ is an element of a marked component, $Q_k$ is the proportion of the mass of $J_k^*$ added to this marked component in the form of a rescaled independent $(\beta,\beta)$-string of beads $\xi_k^{(2)}$, while a proportion of $1-Q_k$ is split into an unmarked rescaled $(\beta, 1-2\beta)$-string of beads $\xi_k^{(1)}$. \end{itemize} Since $\widetilde{J}_k$ was sampled from $\widetilde{\mu}_k$, $J_k$ is sampled from $\mu_k$, as required for Algorithm \ref{masssel}. It remains to check that the scaling factor $Q_k {\widetilde{\mu}_k}({\widetilde{J}_k})$ induced by Algorithm \ref{twocolourmass}, applied to the $(\beta, \beta)$-string of beads $\xi_k=\xi^{(2)}_k$ that is used in the attachment procedure, is as needed for Algorithm \ref{masssel}. We work conditionally given the event that $\widetilde{\mathcal T}_k$ has $\ell$ branch points $\widetilde{v}_j$ of sizes $d_j={\rm deg}(\widetilde{v}_j, \widetilde{\mathcal T}_k)$, $j \in [\ell]$, respectively.
\begin{itemize} \item If $\widetilde{J}_k \neq \widetilde{v}_i$ for $i \in [\ell]$, then $J_k=\widetilde{J}_k=J_k^*$, and a new branch point $\widetilde{J}_k$ of degree deg$(\widetilde{J}_{k}, \widetilde{\mathcal T}_{k+1})=3=1+\text{deg}(\widetilde{J}_{k}, \widetilde{\mathcal T}_{k})$ is created. The mass ${\mu}_k^*(J_k^*)=\widetilde{\mu}_k(\widetilde{J}_k)$ is split by the independent random variable $\gamma_k \sim \text{Beta}(1-2\beta, \beta)$ into a branch point weight $\widetilde{\mu}_{k+1}(\widetilde{J}_{k})=\gamma_k \widetilde{\mu}_k(\widetilde{J}_k)$ and the isometric copy of the $(\beta, \beta)$-string of beads $\xi_k^{(2)}=\xi_k$, scaled by $\widetilde{\mu}_k(\widetilde{J}_k)(1-\gamma_k)=\widetilde{\mu}_k(\widetilde{J}_k)Q_k$ where $Q_k\sim \text{Beta}(\beta, 1-2\beta)$ is conditionally independent of $\xi_k$ and $(\mathcal T_k, \mu_k, J_k)$ given ${\rm deg}(J_k, \mathcal T_k)=2$, as required.
\item If $\widetilde{J}_k = \widetilde{v}_i$ of degree deg$(\widetilde{v}_i,\widetilde{\mathcal T}_k)=d_i$ for some $i \in [\ell]$, we first select an edge $E_k^*$ of $\mathcal R_k^{(i)}$ from $\mu_k^*$ restricted to ${\mathcal R_k^{(i)}}$. Conditionally given that $E_k^*$ has been selected, we choose $J_k^* \in E_k^*$ according to $(\beta, \theta)$-coin tossing sampling, where $\theta=\beta$ if $E_k^*$ is an internal edge of $\mathcal R_k^{(i)}$, and $\theta=1-2\beta$ otherwise. By Proposition \ref{cointoss} and Proposition \ref{Diri}(iii)-(iv), conditionally given $J_k^* \in E_k^*$, the relative mass split in $\mathcal R_k^{(i)}$ is \begin{equation*} {\rm Dirichlet}\left(\beta, \ldots, \beta, 1-2\beta, \ldots, 1-2\beta, \beta, 1-\beta, \theta\right) \end{equation*} with parameter $\beta$ for each non-selected internal edge of $\mathcal R_k^{(i)}$, $1-2\beta$ for each non-selected external edge of $\mathcal R_k^{(i)}$, $\beta$ for the part of $E_k^*$ closer to the root, $\theta$ for the other part of $E_k^*$, and $1-\beta$ for the atom $J_k^*$. In any case (i.e. no matter if $E_k^*$ is internal or external), we get by Proposition \ref{Diri}(i)-(ii) that, conditionally given $\widetilde{J}_k=\widetilde{v}_i$, \begin{equation*} \mu_k^*(J_k^*)/\widetilde{\mu}_k(\widetilde{J}_k) \sim \text{Beta} \left(1-\beta, (d_i-2)(1-\beta)\right) \end{equation*} is independent of $\widetilde{\mu}_k(\widetilde{J}_k)$, as the internal relative mass split in $\mathcal R_k^{(i)}$ is independent of its total mass, see Proposition \ref{starstrings} and Proposition \ref{Diri}(ii). 
Overall, still conditionally given $\widetilde{J}_k = \widetilde{v}_i$, we have that \begin{equation*} \mu_k^*\left(J_k^*\right)\left(1-\gamma_k\right)= \left(1-\gamma_k\right) \left( \mu_k^*\left(J_k^*\right) \widetilde{\mu}_k \left(\widetilde{J}_k\right)^{-1}\right) \widetilde{\mu}_k \left(\widetilde{J}_k\right) = Q_k\widetilde{\mu}_k\left({\widetilde{J}_k}\right) \end{equation*} where $Q_k \sim \text{Beta}(\beta, d_i(1-\beta)-1)$, as is easily checked using Proposition \ref{Diri}(i)-(iii). Note that $Q_k$ is also conditionally independent of $\widetilde{\mu}_k(\widetilde{J}_k)$ given $\widetilde{J}_k=\widetilde{v}_i$ and ${\rm deg}(\widetilde{v}_j,\widetilde{\mathcal T}_k)=d_i$. This is due to the fact that the mass split within $\mathcal R_k^{(i)}$, and the mass split between the edges of $\widetilde{\mathcal T}_k$ and its branch points are conditionally independent given there are $\ell$ branch points $\widetilde{v}_j$ with ${\rm deg}(\widetilde{v}_j, \widetilde{\mathcal T}_k)=d_j$, $j \in [\ell]$. \qedhere \end{itemize} \end{proof}
\begin{proof}[Proof of Proposition \ref{oversamedist}] Similarly to the proof of Theorem \ref{mainresult}(i), let us couple so that the initial weighted $\infty$-marked $\mathbb R$-trees coincide, i.e. let $(\mathcal T_0^*, (\mathcal R_0^{(i)}, i \geq 1), \mu^*_0):=(\overline{\mathcal T}^*_0, (\overline{\mathcal R}_0^{(i)}, i\geq 1), \overline{\mu}^*_0)$. Then, $(\mathcal T_0^*, \mu_0^*)$ is a $(\beta, \beta)$-string of beads, and $\mathcal R_0^{(i)}=\{\rho\}$ for all $i \geq 1$, as required for Algorithm \ref{twocolourmass}.
Supposing that $(\mathcal T^*_k, (\mathcal R_k^{(i)}, i \geq 1), \mu_k^*)=(\overline{\mathcal T}^*_k, (\overline{\mathcal R}_k^{(i)}, i\geq 1), \overline{\mu}^*_k)$ for some $k\geq 0$, set $J_k^*:=\overline{J}_k^*$, $I_k: =\overline{I}_k$, and if $\overline{J}_k^*=\check{X}_{\mathbf{i}j}$, take as $(E_k^+, \mu_k^+)$ the scaled copy of $\xi_{\mathbf{i}j}$ embedded in $\mathcal T^*$ and $R_k^+=[[\overline{J}_k^*, \overline{\Omega}_k]]$. We need to check that the induced update step from $(\mathcal T_k^*,(\mathcal R_k^{(i)}, i \geq 1), \mu_k^*)$ to $(\mathcal T_{k+1}^*,(\mathcal R_{k+1}^{(i)}, i \geq 1), \mu_{k+1}^*)$ is as required in Algorithm \ref{twocolourmass}. Selecting $\overline{J}_k^*$ in Algorithm \ref{twocemb}, we first select an edge $\overline{E}_k^*$ of $\mathcal T_k^*$ proportionally to $\overline{\mu}_k^*(\overline{E}_k^*)$, and perform $(\beta, 1-2\beta)$-coin tossing if $\overline{E}_k^*$ is an external marked edge, and uniform sampling from $\overline{\mu}_k \restriction_{\overline{E}_k^*}$ otherwise, and since $\mu_k^*=\overline{\mu}_k^*$, this means that $J_k^*$ is sampled precisely as required for Algorithm \ref{twocolourmass}, and in particular we have $\mu_k^*(J_k^*)=\overline{\mu}_k^*(\overline{J}_k^*)$. Furthermore, $(E_k^+, R_k^+,\mu_k^+)$ is an independent $\beta$-mixed string of beads, as it is obtained from ${\xi}_{\mathbf{i}j}$ and the transition kernel $\kappa({\xi}_{\mathbf{i}j}, \cdot)$ of Lemma \ref{condis}. 
Therefore, \begin{equation*} \left(\left(\mathcal T^*_k, \left(\mathcal R_k^{(i)}, i \geq 1\right), \mu_k^*\right), \left(\mathcal T^*_{k+1}, \left(\mathcal R_{k+1}^{(i)}, i \geq 1\right), \mu_{k+1}^*\right)\right) \end{equation*} has the same distribution as $\big((\overline{\mathcal T}^*_k, (\overline{\mathcal R}_k^{(i)}, i \geq 1), \mu_k^*), (\overline{\mathcal T}^*_{k+1}, (\overline{\mathcal R}_{k+1}^{(i)}, i\geq 1), \overline{\mu}^*_{k+1})\big)$, which proves Proposition \ref{oversamedist}, as both Algorithm \ref{twocolourmass} and Algorithm \ref{twocemb} specify Markov chains. \end{proof}
\begin{proof}[Proof of Proposition \ref{masssel2}] Construction \eqref{findimmarg1} and Algorithm \ref{masssel} use the same notation. To avoid confusion in this proof, we denote the sequence of trees of \eqref{findimmarg1} by $({\mathcal T}'_k, {\mu}'_k, k \geq 0)$. We will couple the construction of $(\mathcal T_k, \mu_k, k \geq 0)$ of Algorithm \ref{masssel} to the given sequence $({\mathcal T}'_k, {\mu}'_k, k \geq 0)$, specifically identifying the sequences $(J_k, k \geq 0)$ of attachment points, and $(Q_k, k \geq 0)$ of update random variables.
The coupling is as follows. Set $(\mathcal T_0, \mu_0)=({\mathcal T}'_0, {\mu}'_0)$, and, given $(\mathcal T_k, \mu_k)=({\mathcal T}'_k, {\mu}'_k)$ for some $k \geq 0$, set $J_k:={J}'_k$ where \begin{equation*} {J}'_k:=\text{arg}\inf\left\{d\left(\rho, x\right)\colon x \in {\mathcal T}'_{k+1} \setminus {\mathcal T}'_k \right\}, \end{equation*} let $Q_k=1-{\mu}'_{k+1}({J}'_k)/{\mu}'_{k}({J}'_k)$, and $\xi_k:=\big( {\mu}'_{k+1}\big({\mathcal T}'_{k+1} \setminus {\mathcal T}'_{k}\big)^{-\beta} \big({\mathcal T}'_{k+1} \setminus {\mathcal T}'_{k}\big),\ {\mu}'_{k+1}\big({\mathcal T}'_{k+1} \setminus {\mathcal T}'_{k}\big)^{-1} {\mu}'_{k+1}\restriction_{{\mathcal T}'_{k+1} \setminus {\mathcal T}'_{k}}\big)$.
By Proposition \ref{betastring}, $({\mathcal T}'_0, {\mu}'_0)$ is a $(\beta, \beta)$-string of beads, as required in Algorithm \ref{masssel}. Now assume that $(\mathcal T_k, \mu_k)=({\mathcal T}'_k, {\mu}'_k)$ for some $k \geq 0$ with the distribution claimed in Proposition \ref{masssel2}. Denote the connected components of $\mathcal T \setminus {\mathcal T}'_k$ by $\overline{\mathcal S}_j^{(i)\downarrow}$, $j \ge 1$, $i \ge 1$, completed by their root vertices $\rho_i \in {\mathcal T}'_k$, $i \ge 1$, respectively. Note that ${\mu}'_k(\rho_i)=\sum_{j \ge 1} {\mu}(\overline{\mathcal S}_j^{(i)\downarrow})$.
Since we sample $\Sigma_{k+1}$ from the mass measure $\mu$ on $\mathcal T$, the conditional probability that $\Sigma_{k+1} \in \overline{\mathcal S}_j^{(i)\downarrow}$, given $(\mathcal T, \mu)$, $({\mathcal T}'_k, {\mu}'_k)$ and $(\overline{\mathcal S}_j^{(i)\downarrow}, j \ge 1, i \ge 1)$, is $\mu( \overline{\mathcal S}_j^{(i)\downarrow})={\mu}_k'(\rho_i)({\mu}(\overline{\mathcal S}_j^{(i)\downarrow})/{\mu}'_k(\rho_i))$, i.e.\ we can sample ${J}'_k$ in two steps: first, select one of the atoms $\rho_i$ of ${\mathcal T}'_k$ proportionally to ${\mu}'_k(\rho_i)$, and second, select one of the components $\overline{\mathcal S}_j^{(i)\downarrow}$ with root $\rho_i$ proportionally to relative mass $\mu(\overline{\mathcal S}_j^{(i)\downarrow})/{\mu}'_k(\rho_i)$. By Theorem \ref{masspart}(ii) and Proposition \ref{usefulPD}, we further note that conditionally given $(\mathcal{T}_k^\prime,\mu_k^\prime)$ with $\mu_k^\prime=\sum_{i\ge 1}\mu_k^{\prime}(\rho_i)\delta_{\rho_i}$, we have $(\mu(\mathcal S_j^{(i)})/{\mu}'_k(\rho_i), j \geq 1)^{\downarrow} \sim {\rm PD}(1-\beta,(d_i-3)(1-\beta)+1-2\beta)$ with $d_i={\rm deg}(\rho_i,\mathcal{T}_k^\prime)$, $i\ge 1$, independently.
We have ${J}'_k=\rho_i$ with probability ${\mu}'_k(\rho_i)$, and hence $J_k$ is sampled from $\mu_k$, as required in Algorithm \ref{masssel}. By Theorem \ref{masspart}(iii), the weighted $\mathbb R$-trees $$\left(\mu\left(\overline{\mathcal S}_j^{(i)\downarrow}\right)^{-\beta}\overline{\mathcal S}_j^{(i)\downarrow},
\mu\left(\overline{\mathcal S}_j^{(i)\downarrow}\right)^{-1}\mu \restriction_{\overline{\mathcal S}_j^{(i)\downarrow}}\right),\qquad j\ge 1,\ i\ge 1,$$ are independent copies of $(\mathcal T, \mu)$, i.e. conditionally given $\Sigma_{k+1} \in \overline{\mathcal S}_j^{(i)\downarrow}$, the sampling procedure of $\Sigma_{k+1} \in \overline{\mathcal S}_j^{(i)\downarrow}$ from $\mu(\overline{\mathcal S}_j^{(i)\downarrow})^{-1}\mu \restriction_{\overline{\mathcal S}_j^{(i)\downarrow}}$ is like sampling $\Sigma_0 \in \mathcal T$ from $\mu$. Hence, $\xi_k$ is an independent $(\beta, \beta)$-string of beads, as required in Algorithm \ref{masssel}.
Let us consider the distribution of $Q_k$. Conditionally given ${\rm deg}({J}'_k, {\mathcal T}'_k)=2$, $\Sigma_{k+1}$ is a leaf of a connected component $\overline{\mathcal S}_j^{(i)\downarrow}$ of $\mathcal T \setminus {\mathcal T}'_k$ with root $\rho_i={J}'_k$, which is chosen independently and proportionally to relative mass $\mu(\overline{\mathcal S}_j^{(i)\downarrow})/{\mu}'_k(\rho_i)$. As noted above, the relative mass partition above ${J}'_k$ is PD$(1-\beta,-\beta)$, i.e. by Proposition \ref{usefulPD}, $Q_k \sim {\rm Beta}(\beta, 1-2\beta)$, as required in Algorithm \ref{masssel}.
Conditionally given ${\rm deg}({J}'_k, {\mathcal T}'_k)=d$ for some $d \geq 3$, $\Sigma_{k+1}$ is a leaf of a connected component $\overline{\mathcal S}_j^{(i)\downarrow}$ of $\mathcal T \setminus {\mathcal T}'_k$ with root $\rho_i={J}'_k$. Then the relative mass partition of the connected components of $\mathcal T \setminus {\mathcal T}'_k$ with root $\rho_i$ is PD$(1-\beta,(d-3)(1-\beta)+1-2\beta)$, where we note that ${J}'_k$ must have been selected $d-2$ times up to step $k$ in order to obtain ${\rm deg}({J}'_k, {\mathcal T}'_k)=d$. Therefore, by Proposition \ref{usefulPD}, conditionally given ${\rm deg}({J}'_k, {\mathcal T}'_k)=d$, $Q_k \sim {\rm Beta}(\beta, (d-3)(1-\beta)+1-2\beta)$, as required in Algorithm \ref{masssel}. Also, by Proposition \ref{usefulPD}, $Q_k$ is conditionally independent of ${\mu}'_k({J}'_k)$ given ${\rm deg}({J}'_k, {\mathcal T}'_k)=d$. The mass split in $({\mathcal T}'_{k+1}, {\mu}'_{k+1})$ is easily found from Proposition \ref{Diri}, cf. the proof of Proposition \ref{starstrings} for a similar elementary Dirichlet argument.\end{proof}
\begin{proof}[Proof of Theorem \ref{mainresult}(ii)]
Recall that the ingredients in Algorithm \ref{GH} to construct the sequence on the RHS of \eqref{emstlb2} are the Mittag-Leffler Markov chain $(S_k, k \geq 0)$, attachment points $(J_k, k \geq 0)$, and i.i.d.\ random variables $B_k$, $k \geq 0$, with $B_1 \sim {\rm Beta}(1, 1/\beta-2)$. We recover these ingredients from the random variables incorporated in the construction of the LHS of \eqref{emstlb2} via the following coupling. \begin{itemize} \item Set $S_0=S_0^*$, i.e. $S_0 \sim {\rm ML}(\beta, \beta)$ is the length of the initial $(\beta, \beta)$-string of beads $(\mathcal T_0^*, \mu_0^*)=(\widetilde{\mathcal T}_0, \widetilde{\mu}_0)$. For $k \geq 0$, set $S_k$ equal to the total length of $\mathcal T_k^*$, i.e. $S_k=S_k^*$.
\item Set $(J_k, k \geq 0)=(\widetilde{J}_k, k \geq 0)$.
\item Set $(B_k, k \geq 0)=(B_k^*, k \geq 0)$, where $B_k^*$ denotes the length split between the unmarked and the marked part of the independent $\beta$-mixed string of beads $(E_k^+, R_k^+, \mu_k^+)$ built from $\xi_k^{(1)}$, $\xi_k^{(2)}$ and $\gamma_k$. By Remark \ref{betamixedlength}, $(B_k, k \geq 0)$ is an i.i.d. sequence with $B_1 \sim {\rm Beta}(1, 1/\beta-2)$, as required. \end{itemize}
We will show that \begin{align} \left( \widetilde{\mathcal T}_{k}, \left( \widetilde{W}_{j}^{(i)}, 0 \leq j \leq k, i \geq 1\right) \right) \,{\buildrel d \over =}\, \left({\mathcal T}_{k}, \left( {W}_j^{(i)}, 0 \leq j \leq k, i \geq 1\right) \right) \label{toshow} \end{align} for all $k \geq 0$, which implies \eqref{emstlb2} as the families of trees $(\widetilde{\mathcal T}_k, k \geq 0)$ and $(\mathcal T_k, k \geq 0)$ are consistent, i.e.\ given the tree $\mathcal T_k$ at step $k$, we can recover the previous steps $\mathcal T_{k-1}, \ldots, \mathcal T_0$ of the tree sequence.
We prove \eqref{toshow} by induction on $k$. For $k=0$ the claim is trivial. Suppose that \eqref{toshow} holds up to $k$.
In the tree growth process $(\widetilde{\mathcal T}_k, k \geq 0)$ edge and branch point selection is based on masses, whereas in $(\mathcal T_k, k \geq 0)$ edges are selected based on length and branch points based on weights. We first prove the correspondence of the selection rules, where we work conditionally given the shape of the tree $\widetilde{\mathcal T}_k=\mathcal T_k$, in particular conditionally given that $\mathcal T_k^*$ has $\ell$ marked components $\mathcal R_k^{(i)} \neq \{\rho\}$, $i \in [\ell]$, of sizes $d_i-2$, $i \in [\ell]$, respectively, or, in other words, that $\widetilde{\mathcal T}_k$ has $\ell$ branch points $\widetilde{v}_i, i \in [\ell]$, of degrees $d_i, i \in [\ell],$ respectively, and a total of $k+\ell+1$ edges. By (i) and Proposition \ref{masssel2}, the total mass split in $\widetilde{\mathcal T}_k$ is \begin{equation} \left(\widetilde{\mu}_k\left(E_k^{(1)}\right), \ldots, \widetilde{\mu}_k\left(E_k^{(k+\ell+1)}\right), \widetilde{\mu}_k\left(\widetilde{v}_1\right), \ldots, \widetilde{\mu}_k\left(\widetilde{v}_\ell\right)\right) \sim \text{Dirichlet}\left(\beta, \ldots, \beta, w(d_1), \ldots, w(d_\ell)\right) \label{massintikay} \end{equation} where $w(d_i)=(d_i-3)(1-\beta)+1-2\beta$ for $i \in [\ell]$. We denote the edge lengths and the branch point weights in $\widetilde{\mathcal T}_k$ by \begin{equation} \widetilde{L}_k=\left(\widetilde{ L}_k^{(1)}, \ldots, \widetilde{ L}_k^{(k+\ell+1)} \right), \quad \widetilde{W}_k=\left(\widetilde{W}_k^{(1)}, \ldots, \widetilde{W}_k^{(\ell)}\right), \end{equation} and use corresponding notation in $\mathcal T_k$. We will show that the joint distributions of edge lengths, weights and the selected attachment points $\widetilde{J}_k$ and $J_k$ in $\widetilde{\mathcal T}_k$ and $\mathcal T_k$, respectively, are the same in Algorithm \ref{twocolourmass} and Algorithm \ref{GH}, i.e. 
for any $k \geq 0$, and any continuous and bounded function $f\colon \mathbb R^{k+2\ell+1} \rightarrow \mathbb R$, \begin{align} \mathbb E \left [ f\left( \widetilde{L}_k, \widetilde{W}_k\right) \mathbbm{1}_{\{\widetilde{J} _k\in E_k^{(j)}\}} \right] &= \mathbb E \left [ f\left( {L}_k, {W}_k\right) \mathbbm{1}_{\{{J} _k\in E_k^{(j)}\}} \right]\qquad\mbox{for any $j \in [k+\ell+1]$,} \label{equalexp1}\\ \mbox{and}\qquad\mathbb E \left [ f\left( \widetilde{L}_k, \widetilde{W}_k\right) \mathbbm{1}_{\{\widetilde{J}_k = v_j \}} \right] &= \mathbb E \left [ f\left( {L}_k, {W}_k\right) \mathbbm{1}_{\{{J}_k=v_j\}} \right]\qquad\ \mbox{for any $j \in [\ell]$.} \label{equalexp2} \end{align}
Then, together with the coupling, this completes the induction step. It remains to prove \eqref{equalexp1} and \eqref{equalexp2}.
\begin{itemize}
\item \textit{Proof of \eqref{equalexp1}}. Fix some $j \in [k+\ell+1]$, and consider the LHS of \eqref{equalexp1} first. Conditioning on $\widetilde{J}_k \in E_k^{(j)}$, and using the mass split \eqref{massintikay} and Proposition \ref{Diri}(iv), we obtain \begin{equation*}
\mathbb E \left [ f\left( \widetilde{L}_k, \widetilde{W}_k \right) \mathbbm{1}_{\{\widetilde{J} _k\in E_k^{(j)}\}} \right] = \frac{\beta}{k+\beta} \mathbb E \left [ f\left( \widetilde{L}_k, \widetilde{W}_k \right) \middle| {\widetilde{J} _k\in E_k^{(j)}} \right]. \end{equation*} By Proposition \ref{Diri}(iv) and \eqref{massintikay}, conditionally given ${\widetilde{J} _k\in E_k^{(j)}}$, the distribution of the mass split \begin{equation} \left(X_k^{(1)}, \ldots, X_k^{(j-1)}, X_k^{(j)}, X_k^{(j+1)}, \ldots, X_k^{(k+\ell+1)}, X_k^{(k+\ell+2)}, \ldots, X_k^{(k+2\ell+1)}\right) \label{xmasses} \end{equation} with $X_k^{(i)}=\widetilde{\mu}_k(E_k^{(i)})$ for $i \in [k+\ell+1]$ and $X_k^{(i)}=\widetilde{\mu}_k(\widetilde{v}_{i-(k+\ell+1)})$ for $i \in [k+2\ell+1]\setminus [k+\ell+1]$ is \begin{equation} \text{Dirichlet}\left(\beta, \ldots, \beta, 1+\beta, \beta, \ldots, \beta, w(d_1), \ldots, w(d_\ell)\right). \label{xnotation} \end{equation}
Furthermore, still conditionally given ${\widetilde{J} _k\in E_k^{(j)}}$, $\widetilde{J}_k$ is an atom of mass $\widetilde{\mu}_k(\widetilde{J}_k)=:U_k^{(j)} X_k^{(j)}$ sampled from the rescaled independent $(\beta, \beta)$-string of beads related to $E_k^{(j)}$, splitting $E_k^{(j)}$ into two edges $E_{k+1}^{(j)}$ and $E_{k+1}^{(k+\ell+3)}$ of masses $\widetilde{\mu}_k(E_{k+1}^{(j)})=:U_k^{(-)} X_k^{(j)}$ and $\widetilde{\mu}_{k}(E_{k+1}^{(k+\ell+3)})=:U_k^{(+)} X_k^{(j)}$, respectively. By Proposition \ref{cointoss}, the relative mass split on $E_k^{(j)}$ is given by \begin{align*} \left(U_k^{(-)}, U_k^{(j)}, U_k^{(+)} \right) \sim \text{Dirichlet}\left(\beta, 1-\beta, \beta\right), \end{align*} and is independent of $X_k^{(j)}=\widetilde{\mu}_k(E_k^{(j)})$, since, by (i) and Proposition \ref{masssel2}, the $(\beta,\beta)$-string of beads \begin{equation*} \left(\left(X_k^{(j)} \right)^{-\beta} E_k^{(j)}, \left( X_k^{(j)}\right)^{-1} \widetilde{\mu}_k \restriction_{E_k^{(j)}}\right) \end{equation*} is independent of the scaling factor $X_k^{(j)}$. We obtain the refined mass split \begin{equation} \left(\overline{X}_k^{(1)}, \ldots, \overline{X}_k^{(j-1)}, \overline{X}_k^{(-)}, \overline{X}_k^{(j)},\overline{X}_k^{(+)}, \overline{X}_k^{(j+1)}, \ldots, \overline{X}_k^{(k+2\ell+1)}\right) \label{xbar} \end{equation} where $\overline{X}_k^{(i)}=X_k^{(i)}$, $i \in [k+2\ell+1] \setminus \{j, +, -\}$ and $\overline{X}_k^{(-)}= U_k^{(-)}X_k^{(j)}$, $\overline{X}_k^{(j)}= U_k^{(j)}X_k^{(j)}$ and $\overline{X}_k^{(+)}= U_k^{(+)}X_k^{(j)}$. By Proposition \ref{Diri}(iii), the distribution of \eqref{xbar} is \begin{equation*}\text{Dirichlet}\left(\beta, \ldots, \beta, \beta,1-\beta, \beta, \beta, \ldots, \beta, w(d_1), \ldots, w(d_\ell)\right). 
\end{equation*} Furthermore, the atom $\widetilde{J}_k$ induces the two rescaled independent $(\beta, \beta)$-strings of beads \begin{equation*} \left(\left(X_k^{(-)}\right)^{-\beta} E_{k+1}^{(j)},\left(X_k^{(-)}\right)^{-1} \widetilde{\mu}_k \restriction_{E_{k+1}^{(j)}}\right), \quad \left(\left(X_k^{(+)} \right)^{-\beta} E_{k+1}^{(k+\ell+3)},\left(X_k^{(+)}\right)^{-1} \widetilde{\mu}_k \restriction_{E_{k+1}^{(k+\ell+3)}}\right) \end{equation*} where $X_k^{(i)}=U_k^{(i)}X_k^{(j)}, i \in \{-, +\}$, i.e. the lengths of the edges $E_{k+1}^{(j)}$ and $E_{k+1}^{(k+\ell+3)}$ are given by \begin{equation*} \widetilde{L}_{k+1}^{(j)}=\left(U_k^{(-)}X_k^{(j)}\right)^{\beta} M_k^{(-)}, \quad \widetilde{L}_{k+1}^{(k+\ell+3)}=\left(U_k^{(+)}X_k^{(j)}\right)^{\beta} M_k^{(+)}, \end{equation*} respectively, where $M_k^{(i)} \sim \text{ML}(\beta,\beta)$, $i \in \{-,+\}$, are independent, see Proposition \ref{cointoss}. Conditionally given $\widetilde{J}_k \in E_k^{(j)}$, by \eqref{toshow} and Corollary \ref{Massesaslengths}, the weights $\widetilde{W}_k^{(i)}$ of $\widetilde{\mathcal T}_k$ are therefore \begin{equation*} \widetilde{W}_k^{(i-(k+\ell+1))}=\left(X_k^{(i)}\right)^\beta M_k^{(i)}, \quad i \in [k+2\ell+1]\setminus [k+\ell+1], \end{equation*} where the lengths $\widetilde{L}_k^{(i)}$ are \begin{equation*} \widetilde{L}_k^{(i)}= \begin{cases} \left(X_k^{(i)}\right)^\beta M_k^{(i)}, & i \in [k+\ell+1] \setminus [j], \\ \left(U_k^{(-)}X_k^{(j)}\right)^\beta M_k^{(-)}+\left(U_k^{(+)}X_k^{(j)}\right)^\beta M_k^{(+)}, & i=j, \end{cases} \end{equation*} for independent random variables \begin{equation*} M_k^{(i)} \sim \begin{cases} \text{ML}(\beta, \beta), & i \in [k+\ell+1] \setminus \{j\} \cup \{-, +\},\\ \text{ML}(\beta, w(d_{i-(k+\ell+1)})), & i \in [k+2\ell+1] \setminus [k+\ell+1]. 
\end{cases} \end{equation*} Also note that, by the definition of $(S_k^*, k \geq 0)$ and the attachment procedure, \begin{equation*} S_{k+1}^*-S_k^*= \widetilde{\mu}_k\left(\widetilde{J}_k\right)^\beta M_k^* = \left(U_k^{(j)}X_k^{(j)}\right)^{\beta} M_k^* \end{equation*} where $M_k^*\sim {\rm ML}(\beta,1-\beta)$ is the length of the attached, independent $\beta$-mixed string of beads. We conclude by Proposition \ref{DIRML} and Proposition \ref{Diri}(i)-(ii) that $S_k^*=A_k^*S_{k+1}^* \sim {\rm ML}(\beta, k+\beta)$ where $S_{k+1}^* \sim {\rm ML}\left(\beta, k+1+\beta\right)$ and $A_k^* \sim {\rm Beta}(k/\beta+2, 1/\beta-1)$ are independent, and that, conditionally given $\widetilde{J}_k \in E_k^{(j)}$, we have $(\widetilde{L}_k, \widetilde{W}_k) = S_{k+1}^* A_k^* {\widetilde{Z}}_k$ where \begin{align*} \widetilde{Z}_k=\left({\widetilde{Z}}_k^{(1)}, \ldots, {\widetilde{Z}}_k^{(j-1)},{\widetilde{Z}}_k^{(j)},{\widetilde{Z}}_k^{(j+1)}, \ldots, {\widetilde{Z}}_k^{(k+\ell+1)}, {\widetilde{Z}}_k^{(k+\ell+2)}, \ldots, {\widetilde{Z}}_k^{(k+2\ell+1)}\right) \end{align*} is independent of $S_{k+1}^*$ and $A_k^*$, and has a ${\rm Dirichlet}(1, \ldots, 1,2,1,\ldots,1, w(d_1)/\beta, \ldots, w(d_\ell)/\beta)$ distribution. Hence, \begin{align*}
\mathbb E \left [ f\left( \widetilde{L}_k, \widetilde{W}_k \right) \mathbbm{1}_{\{\widetilde{J} _k\in E_k^{(j)}\}} \right] \nonumber =\frac{\beta}{k+\beta} \mathbb E \left [ f\left(S_k^* {\widetilde{Z}}_k\right) \middle| \widetilde{J}_k \in E_k^{(j)} \right]. \end{align*}
We now consider the RHS of \eqref{equalexp1}. We condition on $J_k \in E_k^{(j)}$, and apply Lemma \ref{GH1} and Proposition \ref{Diri}(iv) to obtain \begin{align*}
\mathbb E &\left[ f\left( L_k^{(1)}, \ldots, L_k^{(k+\ell+1)}, W_k^{(1)}, \ldots, W_k^{(\ell)}\right) \mathbbm{1}_{\{J_k \in E_k^{(j)}\}}\right] =\frac{\beta}{k+\beta} \mathbb E \left[ f\left( S_k Z_k\right) \middle| {J}_k \in E_k^{(j)} \right] \end{align*} where $S_k \sim \text{ML}(\beta, k+\beta)$ is independent of \begin{align*} Z_k=\left(Z_k^{(1)}, \ldots, Z_k^{(j-1)}, Z_k^{(j)}, Z_k^{(j+1)}, \ldots, Z_k^{(k+\ell+1)}, Z_k^{(k+\ell+2)}, \ldots, Z_k^{(k+2\ell+1)} \right) \end{align*} and $Z_k \sim \text{Dirichlet}(1, \ldots,1,2,1,\ldots, 1,w(d_1)/\beta, \ldots, w(d_\ell)/\beta)$. Hence, we conclude \eqref{equalexp1}.
\item \textit{Proof of \eqref{equalexp2}}. Consider now the LHS of \eqref{equalexp2}. We follow the lines of the proof of \eqref{equalexp1}. Conditionally given $\widetilde{J}_k = \widetilde{v}_j$, the mass split \eqref{xmasses} has distribution \begin{align} \text{Dirichlet}\left(\beta, \ldots, \beta, w(d_1), \ldots, w(d_{j-1}),1+w(d_j), w(d_{j+1}),\ldots, w(d_\ell)\right). \label{massintikay2} \end{align} By (i), the mass $\widetilde{\mu}_k(\widetilde{v}_j)$ is split by an independent vector $\left(Q_k, 1-Q_k\right) \sim {\rm Dirichlet}\left(\beta, w(d_j)+1-\beta\right)$ into a new unmarked $(\beta, \beta)$-string of beads \begin{equation*} \left( \widetilde{\mu}_{k+1}\left(E_{k+1}^{(k+\ell+2)}\right)^{-\beta} E_{k+1}^{(k+\ell+2)}, \widetilde{\mu}_{k+1}\left(E_{k+1}^{(k+\ell+2)}\right)^{-1} \widetilde{\mu}_{k+1}\restriction_{E_{k+1}^{(k+\ell+2)}} \right) \end{equation*} attached to $\widetilde{v}_j$ and the mass $\widetilde{\mu}_{k+1}(\widetilde{v}_j)$, i.e. $\widetilde{\mu}_{k+1}(E_{k+1}^{(k+\ell+2)})=Q_k\widetilde{\mu}_{k}(\widetilde{v}_j)$, $\widetilde{\mu}_{k+1}(\widetilde{v}_j) = (1-Q_k) \widetilde{\mu}_{k}(\widetilde{v}_j)$.
By (i), Proposition \ref{starstrings} and the induction hypothesis, we get $\widetilde{W}_{k+1}^{(i-(k+\ell+2))}=\widetilde{\mu}_{k+1}(\widetilde{v}_{i-(k+\ell+2)})^{\beta}M_{k+1}^{(i)}$ for $i \in [k+2\ell+2]\setminus [k+\ell+2], i \neq j+(k+\ell+2)$ and $\widetilde{L}_{k+1}^{(i)}=\widetilde{\mu}_{k+1}\left(E_{k+1}^{(i)}\right)^{\beta} M_{k+1}^{(i)}$ for $i \in [k+\ell+2]$ where the random variables $M_{k+1}^{(i)} \sim {\rm ML}(\beta, \beta)$ for $i \in [k+\ell+2]$, $M_{k+1}^{(i)} \sim {\rm ML}(\beta, w(d_{i-(k+\ell+2)}))$ for $i \in [k+2\ell+2] \setminus [k+\ell+2], i \neq j+(k+\ell+2)$ and $M_{k+1}^{(i)} \sim {\rm ML}(\beta, w(d_{i-(k+\ell+2)})+(1-\beta))$ for $i=j+(k+\ell+2)$, are independent of the mass split \eqref{massintikay2}. By Proposition \ref{starstrings}, $\widetilde{W}_{k}^{(j)} = B_k^* \widetilde{W}_{k+1}^{(j)}$, where $B_k^* \sim {\rm Beta}((d_j-2)(1/\beta-1) , 1/\beta-2)$ is independent of $\widetilde{W}_{k+1}^{(j)}$. Note that $\widetilde{W}_{k+1}^{(i)}=\widetilde{W}_{k}^{(i)}$ for $i \neq j$, $\widetilde{L}_{k+1}^{(i)}=\widetilde{L}_{k}^{(i)}$ for $i \in [k+\ell+1]$, and hence, $S_{k+1}^*-S_k^*=\widetilde{L}_{k+1}^{(k+\ell+2)}+ \left(\widetilde{W}_{k+1}^{(j)}-\widetilde{W}_{k}^{(j)}\right)$.
By Proposition \ref{DIRML}, $S_k^*=A_k^*S_{k+1}^*$ where $S_{k+1}^* \sim {\rm ML}\left(\beta, k+1+\beta\right)$ and $A_k^* \sim {\rm Beta}(k/\beta+2, 1/\beta-1)$ are independent. The rest of the proof of \eqref{equalexp2} is analogous to the proof of \eqref{equalexp1}.
\end{itemize}
\end{proof}
\subsection{Proof of Theorem \ref{embford}} \label{sec71}
We first consider the evolution of marked subtrees $(\mathcal R_k^{(i)},k \geq 1)$, $i \geq 1$. Recall the notation in Algorithm \ref{twocolourmass}. Given that $\mathcal R_k^{(i)}$ has size $m$, i.e. $k_m^{(i)} \leq k \leq k_{m+1}^{(i)}-1$, we denote the edges and the edge lengths of $\mathcal R_k^{(i)}$ by \begin{equation} E_{m,i}=\left(E_{m,i}^{(1)}, \ldots, E_{m,i}^{(2m-1)}\right), \quad L_{m,i}=\left(L_{m,i}^{(1)}, \ldots, L_{m,i}^{(2m-1)}\right), \end{equation} respectively, where we note that $\mathcal R_{k}^{(i)}$ is a binary tree, i.e. it has $2m-1$ edges for $k_m^{(i)} \leq k \leq k_{m+1}^{(i)}-1$. Recall that $E_{m,i}^{(j)}$ is an internal edge of $\mathcal R_k^{(i)}$ if $1 \leq j \leq m-1$, and an external edge if $m \leq j \leq 2m-1$.
\begin{lemma}[Mass split in marked subtrees]\label{mms} Let $(\mathcal T_k^*, (\mathcal R_k^{(i)}, i \geq 1), \mu_k^*, k \geq 0)$ be as in Algorithm \ref{twocolourmass}, and fix some $i \geq 1$. Then, for $m \geq 1$, conditionally given $k_m^{(i)}=k$, the relative mass split in $\mathcal R_k^{(i)}$ given by \begin{equation} \mu_k^{*}\left( \mathcal R_{k}^{(i)}\right)^{-1} \left(\mu_k^{*}\left(E_{m,i}^{(1)}\right), \ldots, \mu_k^{*}\left(E_{m,i}^{(m-1)}\right),\mu_k^{*}\left(E_{m,i}^{(m)}\right), \ldots, \mu_k^{*}\left(E_{m,i}^{(2m-1)}\right) \right) \label{msms} \end{equation} has a ${\rm Dirichlet}\left(\beta, \ldots, \beta, 1-2\beta, \ldots, 1-2\beta \right)$ distribution and is independent of $\mu_k^*(\mathcal R_k^{(i)})$ and of the mass split in $\mathcal T_{k}^* \setminus \mathcal R_k^{(i)}$. Furthermore, for $j \in [2m-1]$, \begin{equation} \left( \mu_k^*\left( E_{m,i}^{(j)} \right)^{-\beta} E_{m,i}^{(j)}, \mu_k^*\left( E_{m,i}^{(j)} \right)^{-1} \mu_k^* \restriction_{E_{m,i}^{(j)}} \right) \label{mssob1} \end{equation} is a $(\beta, \theta)$-string of beads, where $\theta=\beta$ for $j \in [m-1]$ and $\theta=1-2\beta$ for $j \in [2m-1] \setminus [m-1]$. The strings of beads \eqref{mssob1} are independent of each other and of the mass split in $\mathcal R_k^{(i)}$ given by \eqref{msms}. Conditionally given that $k_{m+1}^{(i)}=k'$, \begin{equation} \mu_{k'}^*\left(\mathcal R_{k'}^{(i)}\right)=\left(1-Q_m^{(i)}\right)\mu_{k}^*\left(\mathcal R_{k}^{(i)}\right) \label{mssob3} \end{equation} where $Q_m^{(i)} \sim {\rm Beta}(\beta, m(1-\beta)+1-2\beta)$ is independent of $\mathcal R_{k'}^{(i)}$ normalised to unit mass. \end{lemma}
\begin{proof} This is a direct consequence of Proposition \ref{starstrings}, and Proposition \ref{Diri}(ii). To see \eqref{mssob3}, note that $\mathcal R_{k'}^{(i)} \setminus \mathcal R_k^{(i)}=E_{m+1, i}^{(2m)}$ and that $\mu_{k'}^*(E_{m+1,i}^{(2m)})=\gamma_k\mu_k^*(J_k^*)$ where $\gamma_k \sim {\rm Beta}(1-2\beta, \beta)$ is independent, and apply Proposition \ref{Diri}(i)-(ii). \end{proof}
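The Dirichlet aggregation rule from Proposition \ref{Diri}(i)-(ii), which drives this proof via the gamma representation, can be checked by simulation. The sketch below uses illustrative parameters (a generic $\beta$ and a three-coordinate split, not the lemma's exact setting): merging two coordinates of a ${\rm Dirichlet}(\beta,\beta,1-2\beta)$ vector should produce a ${\rm Beta}(2\beta, 1-2\beta)$ variable.

```python
import random

random.seed(2)
beta = 0.3                          # illustrative value in (0, 1/2), not from the paper
thetas = [beta, beta, 1 - 2 * beta]
n = 100_000
merged = []
for _ in range(n):
    # gamma representation: (G_1, G_2, G_3) / sum ~ Dirichlet(thetas)
    g = [random.gammavariate(t, 1.0) for t in thetas]
    s = sum(g)
    # aggregation: X_1 + X_2 should be Beta(2*beta, 1 - 2*beta)
    merged.append((g[0] + g[1]) / s)

emp_mean = sum(merged) / n
theo_mean = 2 * beta                # mean of Beta(2*beta, 1-2*beta); parameters sum to 1
print(emp_mean, theo_mean)
```

The empirical mean of the merged coordinate matches the Beta mean predicted by parameter addition.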
\begin{corollary}[Length split in marked subtrees] \label{lms} In the setting of Lemma \ref{mms}, let $\widetilde{S}_{m,i}\!=\!\sum_{j\in[2m-1]} L_{m,i}^{(j)}$ denote the total length of $\mathcal R_{k_m^{(i)}}^{(i)}$, $m \geq 1$. Then, conditionally given $k_m^{(i)}=k$, \begin{equation} \left( L_{m,i}^{(1)}, \ldots, L_{m,i}^{(m-1)}, L_{m,i}^{(m)}, \ldots, L_{m,i}^{(2m-1)}\right)= \mu^*_k\left(\mathcal R_{k}^{(i)} \right)^{\beta} S_{m,i} \cdot \left(Z_{m,i}^{(1)}, \ldots, Z_{m,i}^{(m-1)},Z_{m,i}^{(m)}, \ldots, Z_{m,i}^{(2m-1)} \right)\label{nl1} \end{equation} where $\mu^*_{k}(\mathcal R_{k}^{(i)} )$, $S_{m,i}\sim {\rm{ML}}(\beta, (m-1)(1-\beta)+1-2\beta)$ and \begin{equation*} \left(Z_{m,i}^{(1)}, \ldots, Z_{m,i}^{(m-1)},Z_{m,i}^{(m)}, \ldots, Z_{m,i}^{(2m-1)} \right) \sim {\rm{Dirichlet}}\left( 1, \ldots, 1, {1}/{\beta}-2, \ldots, {1}/{\beta}-2\right)\end{equation*} are independent. In particular, $\widetilde{S}_{m,i}=\mu_{k}^*(\mathcal R_k^{(i)})^\beta S_{m,i}$. Furthermore, for $m \geq 1$, \begin{equation} \widetilde{S}_{m,i}=B_{m,i} \widetilde{S}_{m+1,i} \label{claimed} \end{equation} where $B_{m,i} \sim {\rm Beta}(m(1/\beta-1), 1/\beta-2)$ and $\widetilde{S}_{m+1,i}$ are independent, i.e. the sequence of lengths of each marked subtree is a Markov chain with the same transition rule as the Mittag-Leffler Markov chain with parameter $\beta/(1-\beta)$ starting from ${\rm ML}(\beta/(1-\beta), (1-2\beta)/(1-\beta))$. \end{corollary}
\begin{proof} Fix $i \geq 1$, and set $X_j= \mu_{k_m^{(i)}}^*(E_{m,i}^{(j)})$, $j \in [2m-1]$, so that $\sum_{j\in[2m-1]}X_j=\mu_{k_m^{(i)}}^*(\mathcal R_{k_m^{(i)}}^{(i)})$. By Lemma \ref{mms}, the edge lengths $L_{m,i}^{(j)}$, $j \in [2m-1]$, are given by $L_{m,i}^{(j)}=X_j^\beta M_m^{(j)}$, where $M_m^{(j)} \sim \text{ML}(\beta, \beta)$ for $j \in [m-1]$ and $M_m^{(j)} \sim \text{ML}(\beta, 1-2\beta)$ for $j \in [2m-1] \setminus [m-1]$, and where $\sum_{j\in[2m-1]}X_j$ and \begin{equation*} \left(\sum_{j\in[2m-1]} X_j\right)^{-1} \left(X_1, \ldots, X_{m-1}, X_m, \ldots, X_{2m-1}\right) \sim {\text{Dirichlet}}\left(\beta, \ldots, \beta, 1-2\beta, \ldots, 1-2\beta\right) \end{equation*} are independent. We apply Proposition \ref{DIRML} with $n=2m-1$, $\theta_j = \beta$ for $j \in [m-1]$ and $\theta_j=1-2\beta$ for $j \in [2m-1] \setminus [m-1]$ to the vector \begin{align} \left( L_{m,i}^{(1)}, \ldots, L_{m, i}^{(2m-1)}\right) =\left(\sum_{j\in[2m-1]}X_j\right)^\beta \left( \left(\frac{X_1}{\sum_{j\in[2m-1]}X_j}\right)^\beta M_m^{(1)}, \ldots,\left(\frac{X_{2m-1}}{\sum_{j\in[2m-1]}X_j}\right) ^\beta M_m^{(2m-1)}\right). \label{lab1} \end{align} Then $\theta=(m-1)(1-\beta)+1-2\beta$, and hence \eqref{nl1} follows.
To see \eqref{claimed}, recall that $E_{m+1,i}^{(2m)}=\mathcal R_{k_{m+1}^{(i)}}^{(i)} \setminus \mathcal R_{k_{m}^{(i)}}^{(i)}$. By \eqref{lab1} for $m+1$, and Proposition \ref{Diri}(i)-(ii), $\mu_{k_m^{(i)}}^*\big(\mathcal R_{k_m^{(i)}}^{(i)}\big)^{\beta}S_{m,i}=B_{m,i} \mu_{k_{m+1}^{(i)}}^*\big(\mathcal R_{k_{m+1}^{(i)}}^{(i)}\big)^{\beta} S_{m+1,i}$ where the variables $S_{m+1,i}\sim {\rm ML}(\beta, m(1-\beta)+1-2\beta)$, $B_{m,i} \sim \text{Beta}(m(1/\beta-1), 1/\beta -2 )$ and $\mu_{k_{m+1}^{(i)}}^*(\mathcal R_{k_{m+1}^{(i)}}^{(i)})$, are independent, i.e. $\widetilde{S}_{m,i}=B_{m,i}\widetilde{S}_{m+1,i}$. \end{proof}
\begin{proof}[Proof of Theorem \ref{embford}]
(i) Consider a space $\mathbb T_{[m]}$ of weighted discrete $\mathbb R$-trees $(\mathcal T, \mu)$ with $m$ leaves labelled by $[m]$ and mass measure $\mu$ of total mass $\mu(\mathcal T) \in (0,1]$, $m\geq 1$, see e.g. \cite[Section 3.3]{1} for a formal introduction. We define transition kernels $\kappa_m$ from $\mathbb T_{[m]}$ to $\mathbb T_{[m+1]}$, $m \geq 1$: given any $(\mathcal T, \mu) \in \mathbb T_{[m]}$, \begin{itemize} \item select an edge $E$ of $\mathcal T$ according to the normalised mass measure $\mu(\mathcal T)^{-1} \mu$; given $E$, select an atom $J$ of $\mu \restriction_E$ according to $(\beta, \theta)$-coin tossing sampling where $\theta=\beta$ if $E$ is internal, and $\theta=1-2\beta$ if $E$ is external; this determines a selection probability $p_m(x)$ for each atom $x \in \mathcal T$; \item given $J$, let $\gamma \sim {\rm Beta}(1-2\beta, \beta)$ be independent, and attach to $J$ an independent $(\beta, 1-2\beta)$-string of beads with mass measure rescaled by $\gamma\mu(J)$ and metric rescaled by $(\gamma\mu(J))^{\beta}$, and label the new leaf by $m+1$. \end{itemize}
We use the convention that if no atom is selected, we apply a scaling factor of $0$. Note that, in our setting with $(\beta, \beta)$-strings of beads on internal edges and $(\beta,1-2\beta)$-strings of beads on external edges, this almost surely does not happen. Denote by $\kappa_m((\mathcal T, \mu), \cdot)$ the distribution of the resulting tree. We further consider the kernel $\kappa_{0}(\cdot)=\kappa_0((\{\rho\}, \delta_\rho), \cdot)$ taking the singleton tree $\{\rho\}$ of mass $1$, and associating a $(\beta, 1-2\beta)$-string of beads with $\{\rho\}$. We will show that each process in \eqref{embford1} evolves according to the transition kernels $\kappa_m, m \geq 1$, starting from an independent $(\beta,1-2\beta)$-string of beads whose distribution is given by $\kappa_0(\cdot)$.
More formally, for $\ell \geq 1$ and some $m_i \geq 1$, $i \in [\ell]$, we will show that \begin{align} \mathbb E &\left[\prod_{i \in [\ell]} f_i \left( \left( \mathcal G_m^{(i)}, \mu_m^{(i)}\right), m \in [m_i] \right) \right] \nonumber \\ & \hspace{-0.2cm} = \prod_{i \in [\ell]} \int \int \cdots \int f_i \left( R_1, \ldots, R_{m_i}\right) \kappa_{m_i-1}\left( R_{m_i-1},d R_{m_i}\right) \cdots \kappa_1\left( R_1, d R_2\right) \kappa_0 \left(d R_1\right) \label{product} \end{align} for any bounded continuous functions $f_i \colon \mathbb T_{[1]} \times \cdots \times \mathbb T_{[m_i]} \rightarrow \mathbb R$, $i \in [\ell]$.
We first show the equation \eqref{product} for $\ell=1$. For notational convenience, we write $( \mathcal G_m, \mu_m)=( \mathcal G_m^{(1)}, \mu_m^{(1)})$ and $f=f_1$. We further use the notation $\xi_{\beta, \beta}$ and $\xi_{\beta, 1-2\beta}$ for $(\beta, \beta)$- and $(\beta, 1-2\beta)$-strings of beads, respectively, and recall that we denote by $p_m(x)$ the selection probability of $x \in \mathcal T$ for $\mathcal T \in \mathbb T_{[m]}$ using the edge selection rule in combination with coin tossing sampling, as described above. $B_{\beta, 1-2\beta}(\cdot)$ denotes the density of Beta$(\beta, 1-2\beta)$. We obtain, \begin{align*} \mathbb E \left[ f \left( \mathcal G_1, \ldots, \mathcal G_{m_1} \right) \right]
=& \sum \limits_{k_1^{(1)}=1, k_2^{(1)}, \ldots, k_{m_1}^{(1)}} \int \limits_{\xi_0} \sum_{v \in \xi_0} \mu_0 \left(v\right) \int \limits_{x_1} {B}_{\beta,1-2\beta}(x_1) \\
&\int \limits_{\xi_1} \left(1-\mu_0\left(v\right)\left(1-\overline{x}_1\right)\right)^{k_2^{(1)}-k_1^{(1)}-1} \mu_0\left(v\right)\left(1-\overline{x}_1\right) \sum \limits_{w_1 \in R_1} p_1\left(w_1\right) \int \limits_{x_2} {B}_{\beta,1-2\beta}(x_2) \\
& \int \limits_{\xi_2} \cdots \left(1-\mu_0\left(v\right)\prod \limits_{i \in [m_1-1]}\left(1-\overline{x}_i\right)\right)^{k_{m_1}^{(1)}-k_{m_1-1}^{(1)}-1} \mu_0\left(v\right)\prod \limits_{i \in [m_1-1]}\left(1-\overline{x}_i\right) \\
& \qquad\qquad\sum \limits_{w_{m_1-1} \in R_{m_1-1}} p_{m_1-1}\left(w_{{m_1}-1}\right) \int \limits_{x_{m_1}} {B}_{\beta,1-2\beta}(x_{m_1}) \int \limits_{\xi_{m_1}} f\left( R_1, \ldots, R_{m_1} \right) \\
& \mathbb P\left(\xi_{\beta, 1-2\beta} \in d\xi_{m_1}\right)dx_{m_1} \cdots \mathbb P\left(\xi_{\beta, 1-2\beta} \in d\xi_2\right) dx_2 \mathbb P\left(\xi_{\beta, 1-2\beta} \in d\xi_1\right)dx_1 \mathbb P\left(\xi_{\beta, \beta} \in d\xi_0\right)\end{align*} where \begin{itemize} \item $\mu_0$ is the mass measure of $\xi_0$; \item $R_1=\xi_1$ with mass measure $\mu_1^{(1)}$ is the initial string of beads, and, for $m \geq 2$, $R_m$ with mass measure $\mu_m^{(1)}$ is created by attaching to $w_{m-1}\in R_{m-1}$ the string of beads $\xi_m$ rescaled by the proportion $x_{m-1}$ of the mass of $w_{m-1}$; \item the sequence $(\overline{x}_i, i \geq 1)$ is defined by $\overline{x}_1=x_1$, $\overline{x}_i = 1-\frac{\mu_{i-1}^{(1)}(w_{i-1})}{\mu^{(1)}_{i-1}(R_{i-1})}(1-x_{i})$, $i=2, \ldots, m_1$; \item the integrals are taken over the whole ranges of $x_i \in [0,1]$ and the subspaces of $\xi_i\in\mathbb T_{\rm w}$ that correspond to strings of beads. \end{itemize}
Note that $\mu_0\left(v\right)\prod_{i \in [m-1]} (1-\overline{x}_i)$ is the relative remaining mass of the first marked component after $m$ transition steps have been carried out in this component.
We can move the sum over $k_1^{(1)}, \ldots, k_{m_1}^{(1)}$ inside the integrals, and note that there is only one term which depends on $k_{m_1}^{(1)}$. Moving the sum over $k_{m_1}^{(1)}$ in front of this factor, we obtain \begin{equation*} \sum \limits_{k_{m_1}^{(1)} \geq k_{m_1-1}^{(1)}+1 } \left(1-\mu_0\left(v\right) \prod \limits_{i \in [m_1-1]} \left(1-\overline{x}_i\right)\right)^{k_{m_1}^{(1)} - k_{m_1-1}^{(1)}-1}\mu_0\left(v\right) \prod \limits_{i \in [m_1-1]} \left(1-\overline{x}_i\right) =1 \end{equation*} as this is the sum over the probability mass function of a geometric random variable (there are infinitely many insertions into the first marked component almost surely). We can proceed inductively and sum the corresponding geometric probabilities over $k_1^{(1)}, \ldots, k_{m_1-1}^{(1)}$ to obtain \begin{align*} \mathbb E \left[ f \left( \mathcal G_1, \ldots, \mathcal G_{m_1} \right) \right] =
&\int \limits_{\xi_0} \sum_{v \in \xi_0} \mu_0\left(v\right) \int \limits_{x_1} {B}_{\beta,1-2\beta}(x_1) \int \limits_{\xi_1} \sum \limits_{w_1 \in R_1} p_1\left(w_1\right) \int \limits_{x_2} {B}_{\beta,1-2\beta}(x_2) \\
& \int \limits_{\xi_2} \cdots \sum \limits_{w_{m_1-1} \in R_{m_1-1}} p_{m_1-1}\left(w_{{m_1}-1}\right) \int \limits_{x_{m_1}} {B}_{\beta,1-2\beta}(x_{m_1}) \int \limits_{\xi_{m_1}} f\left(R_1, \ldots, R_{m_1}\right) \\
& \mathbb P\left(\xi_{\beta, 1-2\beta} \in d\xi_{m_1}\right)dx_{m_1} \cdots \mathbb P\left(\xi_{\beta, 1-2\beta} \in d\xi_2\right) dx_2 \mathbb P\left(\xi_{\beta, 1-2\beta} \in d\xi_1\right)dx_1 \mathbb P\left(\xi_{\beta, \beta} \in d\xi_0\right). \end{align*}
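The step that eliminated the sums over $k_1^{(1)}, \ldots, k_{m_1}^{(1)}$ rests on the elementary fact that a geometric probability mass function sums to $1$. As a numeric sanity check (the value of $p$ below is an arbitrary stand-in for the relative remaining mass $\mu_0(v)\prod_{i}(1-\overline{x}_i)$, not a quantity from the paper):

```python
# total mass of a geometric distribution: sum_{k >= 1} (1-p)^{k-1} p = 1
p = 0.37    # arbitrary stand-in value in (0, 1]
partial = sum((1 - p) ** (k - 1) * p for k in range(1, 200))
print(partial)   # equals 1 up to the truncated tail (1-p)^199
```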
We can now evaluate the sum $\sum_{v \in \xi_0} \mu_0(v)=1$ and the outer integral, as the inner terms do not depend on $\mu_0(v)$ or $\xi_0$. This results in \begin{align*} \mathbb E \left[ f \left( \mathcal G_1, \ldots, \mathcal G_{m_1} \right) \right] =
& \int \limits_{x_1} {B}_{\beta,1-2\beta}(x_1) \int \limits_{\xi_1} \sum \limits_{w_1 \in R_1} p_1\left(w_1\right) \int \limits_{x_2} {B}_{\beta,1-2\beta}(x_2) \\
& \int \limits_{\xi_2} \cdots\sum \limits_{w_{m_1-1} \in R_{m_1-1}} p_{m_1-1}\left(w_{{m_1}-1}\right) \int \limits_{x_{m_1}} {B}_{\beta,1-2\beta}(x_{m_1}) \int \limits_{\xi_{m_1}} f\left(R_1, \ldots, R_{m_1}\right) \\
& \mathbb P\left(\xi_{\beta, 1-2\beta} \in d\xi_{m_1}\right)dx_{m_1} \cdots \mathbb P\left(\xi_{\beta, 1-2\beta} \in d\xi_2\right) dx_2 \mathbb P\left(\xi_{\beta, 1-2\beta} \in d\xi_1\right)dx_1. \end{align*} We recognise the definition of the transition kernels $\kappa_m, m \geq 1$, and rewrite this integral in the form \begin{align*} \mathbb E &\left[f \left( \mathcal G_1, \ldots, \mathcal G_{m_1} \right) \right] = \int \limits \int \limits \cdots \int \limits f\left(R_1, \ldots, R_{m_1}\right) \kappa_{m_1-1}\left(R_{m_1-1}, dR_{m_1}\right) \cdots \kappa_1\left(R_1, dR_2\right) \kappa_0\left(dR_1\right). \end{align*}
To see \eqref{product} in the general setting, we express the left-hand side in terms of the distribution of $(\mathcal T_0^*, \mu_0^*)$ and the two-colour transition kernels, which can be described via Algorithm \ref{twocolourmass}, as a sum over $k_j^{(i)}, j\in [m_i], i \in [\ell]$. Then we can proceed as follows. \begin{itemize} \item First integrate out irrelevant transitions which affect components $i \geq \ell+1$ and parts of earlier transitions such as unmarked strings of beads after the creation of the $\ell$th component. These transitions do not affect the marked components $i \in [\ell]$.
\item Move the sums over $k_{m_\ell}^{(\ell)}, \ldots, k_2^{(\ell)}$ inside the integrals. Notice that there is only one term depending on $k_{m_\ell}^{(\ell)}$, i.e. we obtain the sum over $k_{m_\ell}^{(\ell)} \geq k_{m_\ell-1}^{(\ell)}+1$, $k_{m_\ell}^{(\ell)} \neq k_j^{(i)}, j \in [m_i], i \in [\ell-1]$, of the probabilities of selecting the $\ell$th marked component at step $k_{m_\ell}^{(\ell)}$, skipping indices $k_j^{(i)}$ of insertions into other marked components $i \in [\ell-1]$, i.e. \begin{align*} \sum \limits_{k_{m_\ell}^{(\ell)} \geq k_{m_\ell-1}^{(\ell)}+1, k_{m_\ell}^{(\ell)} \neq k_j^{(i)}, j \in [m_i], i \in [\ell-1]} &\left( 1- \mu_{k_1^{(\ell)}-1}\left(v_\ell\right) \prod_{r \in [m_\ell-1]}\left(1-\overline{x}_r^{(\ell)}\right)\right)^{ k(m,\ell)} \\&\mu_{k_1^{(\ell)}-1}\left(v_\ell\right) \prod_{r \in [m_\ell-1]}\left(1-\overline{x}_r^{(\ell)}\right), \end{align*} where $k(m,\ell):=k_{m_\ell}^{(\ell)}-k_{m_\ell-1}^{(\ell)}-\#\{k_{m_\ell-1}^{(\ell)} < k < k_{m_\ell}^{(\ell)}\colon k=k_j^{(i)}, j \in [m_i], i \in [\ell-1]\}$, and where the sequences $({x}_{i}^{(\ell)}, i \geq 1)$ and $(\overline{x}_{i}^{(\ell)}, i \geq 1)$ are defined as $(x_i, i \geq 1)$ and $(\overline{x}_i, i \geq 1)$, respectively. Note that \begin{equation*} \mu_{k_1^{(\ell)}-1}\left(v_\ell\right) \prod_{r \in [m-1]}\left(1-\overline{x}_r^{(\ell)}\right) \end{equation*} is the relative remaining mass of the $\ell$th marked component after $m$ transition steps have been carried out in this component. As we have a sum over the probability mass function of a geometric random variable, no matter when insertions into components $i \in [\ell-1]$ happen, this sum is $1$. We can proceed inductively down to $k_2^{(\ell)}$.
\item The sum over the insertion point $v_\ell $ is just a sum over the bead selection probabilities $$\mu_{k_1^{(\ell )}-1}(v_\ell ), \quad k_1^{(\ell )} \geq k_1^{(\ell -1)}+1,$$ which sum to the probability of creating the $\ell $th component (no matter what the sizes of the other components are at this step). The sum over $k_1^{(\ell )}$ is not geometric but it is a sum over the probabilities of success in a Bernoulli sequence with increasing success probability. This sum is again $1$ (as we will open the $\ell $th marked component with probability one).
\item We can put the integrals over the ingredients for the $\ell $th subtree growth process in front of the other integrals, as they do not depend on anything else.
\item Inductively, for $j=\ell -1, \ldots, 1$, repeat these steps to lose all sums over insertion times $k_i^{(j)}$ and first insertion points $v_i$, $i \in [\ell ]$.
\item Finally, the integrand of the outer integral over the distribution of $\xi_0$ is constant, so the integral can be dropped. We obtain precisely the product form of the right-hand side \eqref{product}. \qedhere \end{itemize}
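As a side check (not part of the proof), the geometric-series step used above — that the probability mass function of a geometric random variable sums to one, no matter which steps are skipped — can be verified numerically; the helper below is purely illustrative.

```python
# Numerical check that a geometric pmf sums to 1: sum_{k>=1} (1-p)^(k-1) * p = 1.
# Skipping indices does not change the total, since skipping merely relabels terms.
def geometric_mass(p, n_terms=10_000):
    """Partial sum of the geometric pmf with success probability p."""
    return sum((1 - p) ** (k - 1) * p for k in range(1, n_terms + 1))

for p in (0.05, 0.3, 0.7):
    assert abs(geometric_mass(p) - 1.0) < 1e-9
```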
(ii) Note that, by Lemma \ref{mms} (and Proposition \ref{starstrings}), for each $i$ and $k=k_m^{(i)}-1$ for some $m \geq 1$, we are in the situation of Lemma \ref{equivalence} with $n=2m-1$, $\theta_1=\cdots=\theta_{m-1}=\beta$, $\theta_m =\cdots=\theta_{2m-1}=1-2\beta$, $\alpha=\beta$. We recover Algorithm \ref{Ford} with index $\beta'=\beta/(1-\beta)$ and the \enquote{wrong} starting length ${\rm ML}(\beta,1-2\beta)$, cf. Corollary \ref{lms}.
(iii) First, note that, by Corollary \ref{lms}, the lengths of the trees $C_m^{(i)} \mathcal R_{k_m^{(i)}}^{(i)}$ do not depend on $\mu_{k_m^{(i)}}^*(\mathcal R_{k_m^{(i)}}^{(i)})$. Fix some $i \geq 1$ and recall from Lemma \ref{mms} that there are independent random variables $Q_m^{(i)}\sim {\rm Beta}(\beta, m(1-\beta)+1-2\beta)$ such that \begin{equation*} \mu_{k_{m+1}^{(i)}}^*\left(\mathcal R_{k_{m+1}^{(i)}}^{(i)}\right)=\left(1-Q_m^{(i)}\right)\mu_{k_{m}^{(i)}}^*\left(\mathcal R_{k_{m}^{(i)}}^{(i)}\right), \quad m \geq 1. \end{equation*} Define $P_1^{(i)}:=Q_1^{(i)}\sim {\rm Beta}(\beta, 2-3\beta)$, and, for $m \geq 1$, define $P_m^{(i)}:= \overline{Q}_1^{(i)} \overline{Q}_2^{(i)} \cdots \overline{Q}_{m-1}^{(i)} Q_m^{(i)}$, where $\overline{Q}=1-Q$ for any random variable $Q$. Note that $P_m^{(i)}$ is the proportion of the mass of $\mu_{k_1^{(i)}}^{*}(\mathcal R_{k_1^{(i)}}^{(i)})$ attached to the $(m+1)$st leaf of the $i$th marked component.
We recognise the stick-breaking construction \eqref{v1v2} of a PD$(1-\beta, 1-2\beta)$ vector $(P_m^{(i)}, m \geq 1)^{\downarrow}$, and obtain the corresponding $(1-\beta)$-diversity $H^{(i)}$ by \begin{equation} H^{(i)}=\lim_{m \rightarrow \infty} \left(1-\sum_{j\in[m]} P_j^{(i)}\right)^{1-\beta} \left(1-\beta\right)^{-(1-\beta)}m^{\beta} \sim {\rm ML}\left(1-\beta, 1-2\beta \right), \end{equation} as in \eqref{alphadiv}. Now fix some $m_0 \geq 1$ and let $k \geq k_{m_0}^{(i)}$. We consider the reduced tree
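The stick-breaking step can be illustrated by a short simulation; this is a sketch only, assuming the Beta parameters ${\rm Beta}(\beta, m(1-\beta)+1-2\beta)$ from Lemma \ref{mms}, and it merely checks numerically that the pieces $P_m$ are positive and that $1-\sum_{j\in[m]}P_j$ equals the residual mass $\prod_{j\in[m]}(1-Q_j)$.

```python
import random

def stick_breaking(beta, n, rng):
    """Simulate P_m = (1-Q_1)...(1-Q_{m-1}) Q_m with
    Q_m ~ Beta(beta, m*(1-beta) + 1 - 2*beta), as in the text."""
    pieces, residual = [], 1.0
    for m in range(1, n + 1):
        q = rng.betavariate(beta, m * (1 - beta) + 1 - 2 * beta)
        pieces.append(residual * q)
        residual *= 1 - q
    return pieces, residual

rng = random.Random(42)
P, residual = stick_breaking(beta=0.4, n=1000, rng=rng)
assert all(0 < p < 1 for p in P)          # every stick piece is a proper fraction
assert abs(1 - sum(P) - residual) < 1e-9  # residual mass = product of (1-Q_m)
```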
\begin{equation} \mathcal R\left(C_m^{(i)} \mathcal R_{k_m^{(i)}}^{(i)}, \Omega_1^{(i)}, \ldots, \Omega_{m_0}^{(i)} \right) \label{red516} \end{equation} spanned by the root $v_i$ and the leaves $\Omega_1^{(i)}, \ldots, \Omega_{m_0}^{(i)}$ of $\mathcal R_k^{(i)}$. Recall from (i), Corollary \ref{lms} and Proposition \ref{Forddis} that the shape and the Dirichlet$(1, \ldots, 1, 1/\beta-2, \ldots, 1/\beta-2)$ length split between the edges $E_{m_0,i}^{(i)}, \ldots, E_{m_0,i}^{(2m_0-1)}$ of $\mathcal R_{k_{m_0}^{(i)}}^{(i)}$ are as required for the reduced tree associated with a Ford CRT of index $\beta'$. Scaling by $C_m^{(i)}$ only affects the total length of the reduced tree \eqref{red516}. We will show that the total length of \eqref{red516} scaled by $C_m^{(i)}$ converges a.s. to some $S_{m_0}' \sim {\rm ML}(\beta', m_0-\beta')$, which is the total length of the reduced tree spanned by the root and the first $m_0$ leaves of a Ford CRT of index $\beta'$, i.e. that \begin{equation*} \lim_{m \rightarrow \infty} {\rm Leb}\left(\mathcal R\left(C_m^{(i)} \mathcal R_{k_m^{(i)}}^{(i)}, \Omega_1^{(i)}, \ldots, \Omega_{m_0}^{(i)} \right) \right) = S_{m_0}' \sim {\rm ML}\left(\beta', m_0-\beta'\right) \end{equation*} where we will use that $$C_m^{(i)}:= \left(1-\beta\right)^{\beta}m^{-\beta^2/(1-\beta)} \mu_{k_m^{(i)}}^*\left(\mathcal R_{k_m^{(i)}}^{(i)}\right)^{-\beta}=\left(1-\sum_{j\in[m]} P_j^{(i)}\right)^{-\beta} \left(1-\beta\right)^{\beta}m^{-\beta^2/(1-\beta)} \mu_{k_1^{(i)}}^*\left(\mathcal R_{k_1^{(i)}}^{(i)}\right)^{-\beta} \label{uhusd},$$ since $1-\sum_{j\in[m]} P_j^{(i)}=\mu_{k_m^{(i)}}^*(\mathcal R^{(i)}_{k_m^{(i)}})/\mu_{k_1^{(i)}}^*(\mathcal R^{(i)}_{k_1^{(i)}})$. Hence $\lim \limits_{m \rightarrow \infty} C_m^{(i)}=(H^{(i)})^{-\beta/(1-\beta)}\mu_{k_1^{(i)}}^*(\mathcal R_{k_1^{(i)}}^{(i)})^{-\beta}$ a.s. 
Note that $H^{(i)}$ is independent of $\mu_{k_1^{(i)}}^*(\mathcal R_{k_1^{(i)}}^{(i)})$ as it only depends on the sequence of independent random variables $(Q_m^{(i)}, m \geq 1)$, which is independent of $\mu_{k_1^{(i)}}^*(\mathcal R_{k_1^{(i)}}^{(i)})$.
The shape of $\mathcal R^{(i)}_{k_{m}^{(i)}}$ has the same distribution as the shape of $\mathcal F_m$ where $(\mathcal F_m, m \geq 1)$ is a Ford tree growth process of index $\beta'$. In particular, we already know that the number of edges $N_m+2m_0-1, m \geq m_0$, of the reduced trees \eqref{red516} as a subset of $\mathcal R_{k_m^{(i)}}^{(i)}$ behaves like the number of tables in a $(\beta', m_0-\beta')$-CRP, started at $m_0$, i.e.\ by \eqref{tables}, \begin{equation} \lim_{m \rightarrow \infty} \left(m-m_0\right)^{-\beta/(1-\beta)} N_m =\lim_{m \rightarrow \infty} m^{-\beta/(1-\beta)} N_m =S'_{m_0} \quad\text{ a.s. } \label{scaling1} \end{equation} where $S'_{m_0} \sim {\rm ML}(\beta', m_0-\beta')$. By Lemma \ref{mms}, we conclude that, in the limit, the length of $\mathcal R_{k_{m_0}^{(i)}}^{(i)}$ is given by \begin{equation} \lim_{m \rightarrow \infty} \mu^*_{k_m^{(i)}}\left(\mathcal R_{k_m^{(i)}}^{(i)}\right)^{\beta} \sum_{j\in[N_m]} \left(X'_j\right)^{\beta} M^{(j)} = \mu_{k_{m_0}^{(i)}}^*\left(\mathcal R_{k_{m_0}^{(i)}}^{(i)} \right)^\beta S_{m_0,i} =\widetilde{S}_{m_0,i}\label{scaling2} \end{equation} where $X':=(X_1', \ldots, X_{m-1}', X_m', \ldots, X_{2m-1}') \sim {\rm Dirichlet}(\beta, \ldots, \beta, 1-2\beta, \ldots, 1-2\beta)$, $\mu^*_{k_m^{(i)}}(\mathcal R_{k_m^{(i)}}^{(i)})$, $(N_m, m \geq 1)$ and the i.i.d. random variables $M^{(j)} \sim {\rm ML}(\beta, \beta), j \geq 1$, are independent. Note that we do not consider the lengths of the $m_0$ external edges leading to the leaves of the reduced tree \eqref{red516} and the initial $m_0-1$ internal edges, which does not affect the asymptotics. We will use the representation of a Dirichlet vector $X'\sim {\rm Dirichlet}(\beta, \ldots, \beta, 1-2\beta, \ldots, 1-2\beta)$ in terms of independent Gamma variables, i.e. \begin{equation*} X'\,{\buildrel d \over=}\,Y^{-1}\left(Y_1, \ldots,Y_{m-1}, Y_1', \ldots, Y'_{m}\right) \end{equation*} for independent i.i.d. 
sequences $(Y_j, j \geq 1)$, $(Y_j', j \geq 1)$ with $Y_1 \sim {\rm Gamma}(\beta, 1)$, $Y_1' \sim {\rm Gamma}(1-2\beta,1)$, and $Y=\sum_{j\in[m-1]} Y_j + \sum_{j\in[m]}Y_j' \sim {\rm Gamma}((m-1)(1-\beta)+ 1-2\beta,1)$. By \eqref{scaling2}, \begin{align*} C_m^{(i)} \widetilde{S}_{m_0,i} &= C_m^{(i)} \mu^*_{k_m^{(i)}}\left(\mathcal R_{k_m^{(i)}}^{(i)}\right)^{\beta} \left(\sum_{j\in[N_m+(m_0-1)]} \left(X'_j\right)^{\beta} M^{(j)} + \sum_{j\in[m_0]} \left(X'_{j+m-1}\right)^{\beta} \overline{M}^{(j)} \right)\end{align*} where $\overline{M}^{(j)}$, $j \geq 1$, are i.i.d. with $\overline{M}^{(1)} \sim {\rm ML}(\beta, 1-2\beta)$ and independent of $X'$ and $N_m$, $m \geq 1$, and hence $C_m^{(i)} \widetilde{S}_{m_0,i}$ has the same distribution as \begin{align} \frac{N_m\left(1-\beta\right)^{\beta}}{m^{\beta/(1-\beta)}} \left(m^{-1} \left(\sum_{j\in[m-1]} Y_j + \sum_{j\in[m]}Y_j' \right)\right)^{-\beta} \left(N_m^{-1} \left(\sum_{j\in[N_m+(m_0-1)]} Y_j^{\beta}M^{(j)}+\sum_{j\in[m_0]}Y_j'^{\beta}\overline{M}^{(j)} \right)\right). \label{cim} \end{align}
By the strong law of large numbers, we have $\lim_{m \rightarrow \infty} N_m^{-1} \sum_{j\in[N_m]} Y_j^{\beta}M^{(j)} = \mathbb E[Y_1^{\beta} M^{(1)}]=1$ a.s. since $N_m \rightarrow \infty$ a.s., $\mathbb E[Y_1^{\beta}]=\Gamma(2\beta)/\Gamma(\beta)$, and where we use the first moment of the Mittag-Leffler distribution \eqref{mlmoms}. Furthermore, note that $Y_j'':=Y_j+Y_j' \sim {\rm Gamma}(1-\beta, 1)$, $j \in [m-1]$, are i.i.d., and hence $m^{-1} (\sum_{j\in[m-1]}Y_j + \sum_{j\in[m]}Y_j') \rightarrow \mathbb{E}[Y_1'']= 1-\beta$ a.s. By \eqref{scaling1}, we conclude that the expression in \eqref{cim} converges to $S_{m_0}'$ a.s. where $S_{m_0}' \sim {\rm ML}(\beta', m_0-\beta')$. We already know that $\mathcal R^{(i)}_{k_m^{(i)}}$ and the scaling factor $C_m^{(i)}$ converge almost surely, and hence, by Proposition \ref{Forddis}, \begin{equation*} \lim_{m \rightarrow \infty} C_m^{(i)} \mathcal R_{k_{m_0}^{(i)}}^{(i)}=\mathcal F_{m_0}^{(i)} \quad \text{ a.s. } \end{equation*} where $(\mathcal F_m^{(i)}, m\geq 1)$, $i \geq 1$, are i.i.d. Ford tree growth processes of index $\beta'$, i.e. (iii) follows as $m_0 \rightarrow \infty$. \qedhere \end{proof}
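The law-of-large-numbers normalisation used in the final step of the proof above can be checked numerically; the following sketch (with illustrative parameter choices $\beta=0.3$ and $m=20000$, which are assumptions, not values from the text) verifies that $m^{-1}(\sum_{j\in[m-1]}Y_j+\sum_{j\in[m]}Y_j')$ is close to $1-\beta$.

```python
import random

# With Y_j ~ Gamma(beta, 1) and Y_j' ~ Gamma(1-2*beta, 1) independent, the mean of
# Y_j + Y_j' is (beta) + (1-2*beta) = 1-beta, so the normalised sum tends to 1-beta a.s.
beta = 0.3
m = 20000
rng = random.Random(1)
total = sum(rng.gammavariate(beta, 1.0) for _ in range(m - 1))
total += sum(rng.gammavariate(1 - 2 * beta, 1.0) for _ in range(m))
assert abs(total / m - (1 - beta)) < 0.05  # close to 0.7 for this sample size
```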
\nocite{*}
\end{document}
\begin{document}
\title{A note on the Poisson boundary of lamplighter random walks}
\begin{abstract} The main goal of this paper is to determine the Poisson boundary of lamplighter random walks over a general class of discrete groups $\Gamma$ endowed with a ``rich'' boundary. The starting point is the Strip Criterion for the identification of the Poisson boundary for random walks on discrete groups due to Kaimanovich \cite{Kaimanovich2000}. A geometrical method for constructing the strip as a subset of the lamplighter group $\mathbb{Z}_{2}\wr\Gamma$ starting with a ``smaller'' strip in the group $\Gamma$ is developed. Then, this method is applied to several classes of base groups $\Gamma$: groups with infinitely many ends, hyperbolic groups in the sense of Gromov, and Euclidean lattices. We show that under suitable hypotheses the Poisson boundary for a class of random walks on lamplighter groups is the space of infinite limit configurations. \end{abstract} \noindent \textbf{Keywords}: Random walk, wreath product, lamplighter group, Poisson boundary.\\
\noindent \textbf{Mathematics Subject Classification (2000)}: 60J50, 60B15, 05C05, 20E08\\
\section{Introduction} Let $\Gamma$ be a finitely generated group, and imagine a lamp sitting at each group element. For simplicity, we consider that the lamps have only two states: $0$ (the lamp is switched off) or $1$ (the lamp is switched on), and initially all lamps are off. We think of a lamplighter person moving randomly in $\Gamma$ and randomly switching lamps on or off. We investigate the following model: at each step the lamplighter may walk to some random neighbouring vertex, and may change the state of some lamps in a bounded neighbourhood of his position. This model can be interpreted as a random walk on the wreath product $(\mathbb{Z}/2\mathbb{Z})\wr\Gamma$ governed by a probability measure $\mu$. The random walk is described by a transient Markov chain $Z_{n}$, which represents the random position of the lamplighter and the random configuration of the lamps at time $n$. We assume that the lamplighter random walk's projection on the base group $\Gamma$ is transient. Write $\mathbb{Z}_{2}:=\mathbb{Z}/2\mathbb{Z}$ and $G:=\mathbb{Z}_{2}\wr\Gamma$.
Transience of the projected random walk on $\Gamma$ implies that almost every path of the original random walk $Z_{n}$ on $G$ will leave behind a certain (infinitely supported) limit configuration on $\Gamma$. It is then natural to ask whether these limit configurations describe completely the behaviour of the random walk $Z_{n}$ at infinity.
From a more topological viewpoint, we attach to $G=\mathbb{Z}_{2}\wr\Gamma$ a natural boundary $\Omega$ at infinity, such that $G\cup\Omega$ is a metrizable space (not necessarily compact or complete) on which $G$ acts by homeomorphisms and every point in $\Omega$ is an accumulation point of a sequence in $G$. We then show that, in this topology, the random walk $Z_{n}$ converges almost surely to an $\Omega$-valued random variable, under the assumption that the projected random walk on $\Gamma$ converges to the boundary. If we denote by $\mu_{\infty}$ the limit distribution of $Z_{n}$ on $\Omega$, then the measure space $(\Omega,\mu_{\infty})$ provides a model for the behaviour at infinity of the random walk $Z_{n}$. We are interested in whether this space is maximal, i.e. whether there is no way (up to sets of measure $0$) of further refining this space. This maximal space is called the \textit{Poisson boundary} of the random walk.
The Poisson boundary of a random walk on a group is a measure-theoretical space, which describes completely the significant behaviour of the random walk at infinity. Another way of defining the Poisson boundary is to say that it is the space of ergodic components of the time shift in the trajectory space.
In order to prove that the measure space $(\Omega,\mu_{\infty})$ is indeed the Poisson boundary of the random walk $Z_{n}$, we shall use the very useful Strip Criterion of identification of the Poisson boundary due to Kaimanovich, which we state here in the most general form. For details see Kaimanovich \cite[Thm. $6.5$ on p. 677]{Kaimanovich2000} and \cite[Thm. $5.19$]{KaimanovichWoess2002}. \begin{proposition}[\textbf{Strip Criterion}]\label{StripCriterion} Let $\mu$ be a probability measure with finite first moment on $G$, and let $(B_{+},\lambda_{+})$ and $(B_{-},\lambda_{-})$ be $\mu$- and $\check{\mu}$-boundaries, respectively. If there exists a measurable $G$-equivariant map $S$ assigning to almost every pair of points $(b_{-},b_{+})\in B_{-} \times B_{+}$ a non-empty ``strip'' $ S(b_{-},b_{+})\subset G$, such that, for the ball $B(id,n)$ of radius $n$ in the metric of $G$, \begin{equation*}
\frac{1}{n}\log| S(b_{-},b_{+})\cap B(id,n) | \to 0 ,\ \mbox{as}\ n\to\infty, \end{equation*} for $(\lambda_{-} \times \lambda_{+})$-almost every $(b_{-},b_{+})\in B_{-} \times B_{+}$, then $(B_{+},\lambda_{+})$ and $(B_{-},\lambda_{-})$ are the Poisson boundaries of the random walks $(G,\mu)$ and $(G,\check{\mu})$, respectively. \end{proposition}
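For intuition, here is a toy illustration (not from the paper) of the subexponential growth condition: a geodesic line in the lattice $\mathbb{Z}^2$ with the $\ell^{1}$ word metric meets the ball of radius $n$ in only $2n+1$ points, so the logarithmic rate in the criterion vanishes.

```python
import math

# Toy strip S = Z x {0} inside Z^2 with the l^1 word metric.
# |S ∩ B(0,n)| = 2n+1 grows linearly, hence (1/n) log|S ∩ B(0,n)| -> 0.
def strip_ball_count(n):
    return sum(1 for x in range(-n, n + 1) if abs(x) <= n)  # equals 2n + 1

rates = [math.log(strip_ball_count(n)) / n for n in (10, 100, 1000)]
assert strip_ball_count(1000) == 2001
assert rates == sorted(rates, reverse=True)  # the rate decreases ...
assert rates[-1] < 0.01                      # ... towards zero
```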
This criterion was applied by Kaimanovich to groups with sufficiently rich geometric boundaries, for which such strips have a natural geometric interpretation.
We shall give a general method for constructing the strip $S$ as a subset of the lamplighter group $\mathbb{Z}_{2}\wr \Gamma$, with the properties required in Proposition \ref{StripCriterion}. This method requires that the Strip Criterion can be applied to the random walk on $\Gamma$. Also, some additional assumptions are required. The method can be applied to a large class of base groups $\Gamma$, which are endowed with a sufficiently rich boundary, so that the random walk on $\Gamma$ converges to this boundary. The important fact here is that the basic geometry for the lamplighter group $\mathbb{Z}_{2}\wr \Gamma$ is provided by the underlying structure $\Gamma$. We shall explain how this method works when $\Gamma$ is a group with infinitely many ends, a hyperbolic group, or a Euclidean lattice.
The paper is organized as follows. In Section \ref{sec:Lamplighter} we recall some definitions and basic properties of the main objects of study (lamplighter groups and random walks, wreath products). In Section \ref{sec:TheNaturalBoundary}, we attach both to the group $\Gamma$ and $\mathbb{Z}_{2}\wr\Gamma$ certain boundaries, which satisfy some required assumptions. Under the condition that the random walk on $\Gamma$ converges to the boundary almost surely, we prove that the random walk $Z_{n}$ on $\mathbb{Z}_{2}\wr\Gamma$ converges also to the boundary. In Section \ref{sec:The Poisson boundary}, we shall apply the Strip Criterion \ref{StripCriterion} in order to determine the Poisson boundary of random walks over a general class of groups $\mathbb{Z}_{2}\wr\Gamma$. We shall explain here the \textit{half-space method} for constructing a strip as a subset of $\mathbb{Z}_{2}\wr\Gamma$. The general procedure is based on the fact that the strip in the base group $\Gamma$ has additional ``nice'' properties, which help us to lift it to a bigger strip. We shall prove that the strip satisfies the required properties in Proposition \ref{StripCriterion}. Finally, we shall consider some typical examples of groups $\Gamma$, which are endowed with nice geometric boundaries, so that random walks on $\Gamma$ converge to this boundary. For these specific examples, we shall apply the \textit{half-space method}.
Concluding the introduction, let us remark that the first to show that lamplighter groups are fascinating objects in the study of random walks were Kaimanovich and Vershik \cite{KaimanovichVershik1983}. By now, there is a considerable amount of literature on this topic, and their paper may serve as a major source for the earlier literature. See also Lyons, Pemantle and Peres \cite{Lyons1996}, Erschler \cite{Erschler2001,Erschler2003}, Revelle \cite{Revelle,Revelle2003}, Pittet and Saloff-Coste \cite{Pittet&Saloff-Coste1996,Pittet&Saloff-Coste2002}, Grigorchuk and Zuk \cite{GrigorchukZuk2001}, Dicks and Schick \cite{DicksSchick2002}, Bartholdi and Woess \cite{BartholdiWoess2005}, Brofferio and Woess \cite{Brofferio2006}.
\section{Lamplighter groups and random walks}\label{sec:Lamplighter} \paragraph{Lamplighter groups.}Consider an infinite group $\Gamma$, generated by a finite set $S_{\Gamma}$. Denote by $e$ the identity element, and by $d(\cdot,\cdot)$ the word metric on $\Gamma$, that is, the length of the shortest path between two elements in the Cayley graph of $\Gamma$ (with respect to $S_{\Gamma}$).
Imagine a lamp sitting at each element of $\Gamma$, which can be switched off or on (encoded by 0 and 1). We think of a lamplighter person moving randomly in $\Gamma$ and randomly switching lamps on or off. At every moment of time the lamplighter will leave behind a certain configuration of lamps. The configurations of lamps are encoded by functions $\eta:\Gamma\rightarrow \mathbb{Z}_{2}$. We write $\hat{\mathcal{C}}=\{\eta : \Gamma\rightarrow\mathbb{Z}_{2}\}$ for the set of all configurations, and let $\mathcal{C} \subset \hat{\mathcal{C}}\ $ be the set of all finitely supported configurations, where a configuration is said to have finite support if the set $ supp(\eta)=\{x \in \Gamma : \eta(x)\ne 0 \} $ is finite. Denote by $\mathbf{0}$ the zero configuration, i.e. the configuration which corresponds to all lamps switched off, and by $\delta_{x}$ the configuration where only the lamp at $x\in\Gamma$ is on and all other lamps are off.
Recall that the \textit{wreath product} of the groups $\mathbb{Z}_{2}$ and $\Gamma$ is a semidirect product of $\Gamma$ and the direct sum of copies of $\mathbb{Z}_{2}$ indexed by $\Gamma$, where every $x\in\Gamma$ acts on $\sum_{y\in\Gamma}\mathbb{Z}_{2}$ by the translation $T_{x}$ defined as \begin{equation*} (T_{x}\eta)(y)=\eta(x^{-1}y),\forall y\in \Gamma. \end{equation*} Let $G:=\mathbb{Z}_{2}\wr\Gamma$ denote the \textit{wreath product}. The elements of $G$ are pairs of the form $(\eta,x)\in\mathcal{C}\times \Gamma$, where $\eta$ represents a (finitely supported!) configuration of the lamps and $x$ the position of the lamplighter. A group operation on $G$ is given by \begin{equation*} (\eta,x)(\eta^{'},x^{'})=(\eta\oplus T_{x}\eta^{'},xx^{'}), \end{equation*} where $x,x^{'}\in\Gamma$, $\eta,\eta^{'}\in\mathcal{C}$, and $\oplus$ is componentwise addition modulo $2$. The group identity is $(\textbf{0},e)$. We shall call $G$ together with this operation the \textit{lamplighter group} over $\Gamma$. \paragraph{Lamplighter distance.}When $S_{\Gamma}$ is a generating set for $\Gamma$, then a natural set of generators for $G=\mathbb{Z}_{2}\wr\Gamma$ is given by \begin{equation*} S_{G}=\{(\delta_{e},e),(\textbf{0},s):s\in S_{\Gamma}\}. \end{equation*} Consider the Cayley graph of $G$ with respect to the generating set $S_{G}$. We lift the word metric $d(\cdot,\cdot)$ on $\Gamma$ to a metric $d_{G}(\cdot,\cdot)$ on $G$ by assigning the following distances (lengths) to the elements of $S_{G}$: $d_{G}((\textbf{0},e),(\textbf{0},s)):=1$ for $s\in S_{\Gamma}$ and $d_{G}((\textbf{0},e),(\delta_{e},e)):=c> 0$, where $c$ is some arbitrary, but fixed positive constant. Then the distance $d_{G}((\eta,x),(\eta^{'},x^{'}))$ between $(\eta,x)$ and $(\eta^{'},x^{'})$ is the length of the shortest path in the Cayley graph of $G$ joining these two vertices. 
More precisely, if we denote by $l(x,x^{'})$ the smallest length of a ``travelling salesman'' tour from $x$ to $x^{'}$ that visits each element of the set $\eta \bigtriangleup \eta ^{'}$ (the set of points where the two configurations differ), then \begin{equation}\label{GraphGmetric}
d_{G}((\eta,x),(\eta^{'},x^{'}))=l(x,x^{'})+c\cdot|\eta^{'}\bigtriangleup\eta| \end{equation} defines a metric on $G$. \paragraph{Lamplighter random walks.}Let $\mu$ be a probability measure on $G$, such that $supp(\mu)$ generates $G$ as a group. Consider the random walk $Z_{n}$ on $G$ with one-step transition probabilities given by $p((\eta,x),(\eta^{'},x^{'}))=\mu((\eta,x)^{-1}(\eta^{'},x^{'}))$, starting at the identity $(\begin{bf}0\end{bf},e)$. We shall call $Z_{n}$ the \textit{lamplighter random walk} over the base group $\Gamma$ and with law $\mu$. The lamplighter random walk starting at $(\begin{bf}0\end{bf},e)$ can also be described by a sequence of $G$-valued random variables $Z_{n}$ in the following way: \begin{equation}\label{lamplighter random variables}
Z_{0}:=(\begin{bf}0\end{bf},e),\ Z_{n}=Z_{n-1}i_{n},\mbox{ for all } n\geq1, \end{equation} where $i_{n}$, with $i_{n}=(f_{n},z_{n})$, is a sequence of i.i.d. $G$-valued random variables governed by the probability measure $\mu$.
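For the base group $\Gamma=\mathbb{Z}$, the group operation and the metric \eqref{GraphGmetric} can be made concrete; the sketch below is illustrative only, and the brute-force travelling-salesman tour is an assumption suitable only for small symmetric differences.

```python
from itertools import permutations

# Elements of Z_2 wr Z represented as (frozenset of lit lamps, walker position).
def op(g, h):
    """Group operation: (eta, x)(eta', x') = (eta xor T_x eta', x + x')."""
    (eta, x), (eta2, x2) = g, h
    return (eta ^ frozenset(v + x for v in eta2), x + x2)

def dist(g, h, c=1.0):
    """d_G as in the text: shortest tour through eta /\\ eta' plus c per differing lamp.
    Brute force over tour orders -- fine only for small symmetric differences."""
    (eta, x), (eta2, x2) = g, h
    diff = tuple(eta ^ eta2)
    if not diff:
        return float(abs(x2 - x))
    best = min(
        abs(p[0] - x) + sum(abs(b - a) for a, b in zip(p, p[1:])) + abs(x2 - p[-1])
        for p in permutations(diff))
    return best + c * len(diff)

e = (frozenset(), 0)
g = (frozenset({0, 2}), 1)
h = (frozenset({-1}), 2)
assert op(e, g) == g and op(g, e) == g        # identity element
assert op(op(g, h), g) == op(g, op(h, g))     # an instance of associativity
assert dist(e, (frozenset({-1, 2}), 0)) == 8.0  # tour length 6 plus 2 lamp flips
```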
We write $Z_{n}=(\eta_{n},X_{n})$, where $\eta_{n}$ is the random configuration of lamps at time $n$ and $X_{n}$ is the random element of $\Gamma$ at which the lamplighter stands at time $n$. Therefore, the projection of $Z_{n}=(\eta_{n},X_{n})$ on $\Gamma$ is the random walk $X_{n}$ starting at the identity $e$ and with law \begin{equation*} \nu(x)=\sum_{\eta\in\mathcal{C}}\mu(\eta,x). \end{equation*} The law $\nu$ of a random walk on $\Gamma$ is said to have \textit{finite first moment} if \begin{equation*} \sum_{x\in \Gamma}d(e,x)\nu(x)<\infty. \end{equation*} As a \textit{general assumption}, we assume the transience of $X_{n}$. By transience of $X_{n}$, every finite subset of $\Gamma$ is left with probability one after a finite time. Therefore, the sequence $(\eta_{n})_{n\in\mathbb{N}_{0}}$ of configurations converges pointwise to a random limit configuration $\eta_{\infty}$, which is not necessarily finitely supported. Now a natural question is whether the behaviour of the random walk at infinity is completely described by these limit configurations. For this purpose, a notion of ``infinity'' for the lamplighter group is needed.
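A quick simulation illustrates this freezing of the configuration; the right-biased walk on $\mathbb{Z}$ below (with bias $0.85$, an assumption standing in for any transient projection) leaves the lamp states behind the walker unchanged from some time on.

```python
import random

# Lamplighter over Z with a right-biased (hence transient) base walk:
# at each step, flip the lamp at the current position with probability 1/2,
# then move right w.p. 0.85 and left w.p. 0.15.
rng = random.Random(7)
lamps, x, snapshots = set(), 0, []
for n in range(4000):
    if rng.random() < 0.5:
        lamps.symmetric_difference_update({x})
    x += 1 if rng.random() < 0.85 else -1
    if n == 1999:
        # record the lamp states on the non-positive sites at the halfway point
        snapshots.append((frozenset(v for v in lamps if v <= 0), x))

# Transience: the walker drifts to +infinity, and the lamp states on sites <= 0
# have (with overwhelming probability) stabilised long before the end of the run.
assert x > 500
assert frozenset(v for v in lamps if v <= 0) == snapshots[0][0]
```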
\section{Convergence to the boundary}\label{sec:TheNaturalBoundary}
\paragraph{The boundary of $\Gamma$.}Consider the base group $\Gamma$ as above, with the word metric $d(\cdot,\cdot)$ on it. Let $\widehat{\Gamma}=\Gamma\cup\partial\Gamma$ be an extended space (not necessarily compact) with ideal \textit{boundary} $\partial\Gamma$ (the set of points at infinity), such that $\widehat{\Gamma}$ is compatible with the group structure on $\Gamma$ in the sense that the action of $\Gamma$ on itself extends to an action on $\widehat{\Gamma}$ by homeomorphisms.
\paragraph{3.1} \label{sec:BasicAssumptions}\textbf{Basic assumptions.} Returning to the random walk $X_{n}$ on $\Gamma$, assume that: \begin{description} \item(a) The law $\nu$ of the random walk $X_{n}$ has finite first moment on $\Gamma$. \item(b) The random walk $X_{n}$ converges almost surely to a random element of $\partial\Gamma$: there is a $\partial\Gamma$-valued random variable $X_{\infty}$ such that in the topology of $\widehat{\Gamma}$, \begin{equation*}
\lim_{n\to\infty}X_{n}=X_{\infty},\mbox{ almost surely for every starting point } x\in\Gamma. \end{equation*} \item(c) The boundary $\partial\Gamma$ is such that the following \textbf{convergence property} holds: whenever $(x_{n}),(y_{n})$ are sequences in $\Gamma$ such that $(x_{n})$ accumulates at $\xi\in \partial\Gamma$ and \begin{equation}\label{Convergence} \tag{\textbf{CP}} d(x_{n},y_{n})/d(x_{n},e)\to 0\,\mbox{ as }n\to\infty, \end{equation} then $(y_{n})$ accumulates also at $\xi$. \end{description} \paragraph{The boundary of $G=\mathbb{Z}_{2}\wr\Gamma$.}Remark that the natural compactification of $\mathcal{C}$ in the topology of pointwise convergence is the set $\widehat{\mathcal{C}}$ of all, finitely or infinitely supported configurations. Since the vertex set of the Cayley graph of $G$ is $\mathcal{C}\times\Gamma$, the space $\partial G=(\widehat{\mathcal{C}}\times \widehat{\Gamma})\setminus (\mathcal{C} \times\Gamma)$ is a \textit{natural boundary} at infinity for $G$. Let us write $\widehat{G}=\widehat{\mathcal{C}} \times \widehat{\Gamma}$.
The boundary $\partial G$ contains many points towards which the lamplighter random walk $Z_{n}$ does not converge, as we shall see later. For this reason, we define a ``smaller'' boundary $\Omega$ for the lamplighter group (which is still dense in $\partial G$), and we shall prove that the random walk $Z_{n}$ converges to a random variable with values in $\Omega$. Define \begin{equation}\label{omega} \Omega =\bigcup_{\mathfrak{u} \in \partial\Gamma}\mathcal{C}_{\mathfrak{u}}\times \{\mathfrak{u}\}, \end{equation} where a configuration $\zeta$ is in $\mathcal{C}_{\mathfrak{u}}$ if and only if $\mathfrak{u}$ is its only accumulation point (i.e., there may be infinitely many lamps switched on only in a neighbourhood of $\mathfrak{u}$) or if $\zeta$ is finitely supported. The set $\mathcal{C}_{\mathfrak{u}}$ is dense in $\widehat{\mathcal{C}}$ because $\mathcal{C}\subset\mathcal{C}_{\mathfrak{u}}$ and $\mathcal{C}$ is dense in $\widehat{\mathcal{C}}$. Hence, $\Omega$ is also dense in $\partial G$.
The action of $G=\mathbb{Z}_{2}\wr\Gamma$ on itself extends to an action on $\widehat{G}$ by homeomorphisms and leaves the Borel subset $\Omega\subset\partial G$ invariant. If we take $(\eta,x)\in G$ and $(\zeta,\mathfrak{u}) \in \Omega$, then \begin{equation*} \label{action} (\eta,x)(\zeta,\mathfrak{u})=(\eta \oplus T_{x}\zeta,x \mathfrak{u}). \end{equation*} If $\mathfrak{u}\in\partial\Gamma$ and $\zeta$ is finitely supported or accumulates only at $\mathfrak{u}$, then $T_{x}\zeta$ can at most accumulate at $x\mathfrak{u}$. Also the configuration $\eta\oplus T_{x}\zeta$ accumulates again at most at $x\mathfrak{u}$ because $\eta$ is finitely supported, so that adding $\eta$ modifies $T_{x}\zeta$ only in finitely many points.
When the \textbf{Basic assumptions $(3.1)$} hold, we are able to state the first result on convergence of the lamplighter random walk $Z_{n}$ to $\Omega$. \begin{theo}\label{ConvergenceTheorem} Let $Z_{n}=(\eta_{n},X_{n})$ be a random walk with law $\mu$ on the group $G=\mathbb{Z}_{2}\wr\Gamma$ such that $supp(\mu)$ generates $G$. If $\Omega$ is defined as in \eqref{omega} and $\mu$ has finite first moment, then there exists an $\Omega$-valued random variable $Z_{\infty}=(\eta_{\infty},X_{\infty})$ such that $Z_{n}\to Z_{\infty}$ almost surely, for every starting point. Moreover, the distribution of $Z_{\infty}$ is a continuous measure on $\Omega$. \end{theo} \begin{proof} Without loss of generality, we may suppose that the starting point for the lamplighter random walk $Z_{n}$ is $id=(\begin{bf}0\end{bf},e)$, that is, we start a random walk in the identity element $e$ of $\Gamma$ with all the lamps switched off.
The support of $\mu$ generates $G=\mathbb{Z}_{2}\wr\Gamma$ as a group; therefore the law $\nu$ of the projected random walk $X_{n}$ on $\Gamma$ is also such that its support generates $\Gamma$. By assumption, the random walk $X_{n}$ on $\Gamma$ is transient, and converges almost surely to a random variable $X_{\infty}\in\partial\Gamma$.
Now, assume that the lamplighter random walk $Z_{n}$ has finite first moment. Let $(y_{n})$ be an unbounded sequence of elements in $\Gamma$, such that $y_{n}\in supp(T_{X_{n-1}}f_{n})$ ($f_{n}$ is a finitely supported configuration), for each $n$, that is, $y_{n}$ is a group element where the lamp is switched on. Since, by assumption, the law $\nu$ of the random walk $X_{n}$ on $\Gamma$ has finite first moment, the following holds with probability $1$: \begin{equation*} d(y_{n},X_{n-1})/{n}\to 0,\mbox{ as }n\to\infty. \end{equation*} Moreover, it follows from \textit{Kingman's subadditive ergodic theorem} (see Kingman \cite{Kingman1968}) that there exists a finite constant $m>0$ such that \begin{equation*} d(X_{n},e)/n\to m,\ \mbox{as}\ n\to\infty,\ \mbox{almost surely}. \end{equation*} Using the last two equations and the triangle inequality, we have $d(X_{n},y_{n})/d(X_{n},e)\to 0$, as $n\to\infty$. Recall now that the boundary $\partial\Gamma$ satisfies the convergence property \eqref{Convergence} and $X_{n}\to X_{\infty}\in\partial\Gamma$. Therefore, the sequence $(y_{n})$ accumulates at $X_{\infty}$. Now, from the definition of the group operation on $G$ and the equation \eqref{lamplighter random variables}, one observes that the configuration $\eta_{i}$ of lamps at every moment of time $i$ is obtained by adding (componentwise addition modulo 2) to the configuration $\eta_{i-1}$ the configuration $T_{X_{i-1}}f_{i}$ (where $f_{i}$ is finitely supported). Hence \begin{equation*} supp(\eta_{n})\subset \bigcup_{i=1}^{n}supp(T_{X_{i-1}}f_{i}), \end{equation*} which is a union of finite sets. From the above, the sequence $y_{n}\in supp(T_{X_{n-1}}f_{n})$ accumulates at $X_{\infty}$, therefore the sequence $supp(\eta_{n})$ must accumulate at $X_{\infty}$. 
That is, the random configuration $\eta_{n}$ converges pointwise to a configuration $\eta_{\infty}$, which accumulates at $X_{\infty}$ and $Z_{n}=(\eta_{n},X_{n})$ converges to a random element $Z_{\infty}=(\eta_{\infty},X_{\infty})\in\Omega$.
When the limit distribution of $X_{n}$ is a continuous measure on $\partial\Gamma$ (i.e., it carries no point mass), then the same is true for the limit distribution of $Z_{n}=(\eta_{n},X_{n})$ on $\Omega$. Otherwise, if there existed a single point of $\Omega$ with non-zero measure, then one would arrive at a contradiction by finding a single point in $\partial\Gamma$ with non-zero measure, which is impossible by the continuity of the limit distribution on $\partial\Gamma$.
Even when the limit distribution $\nu_{\infty}$ of $X_{n}$ is not a continuous measure on $\partial\Gamma$, the limit distribution of $Z_{n}$ is still continuous. When the measure $\nu_{\infty}$ is not continuous, there exists $\mathfrak{u}\in\partial\Gamma$, with $\nu_{\infty}(\{\mathfrak{u}\})=\mathbb{P}[X_{\infty}=\mathfrak{u}\,\vert\, X_{0}=e]>0$. Assume that the limit distribution $\mu_{\infty}$ of $Z_{n}$ is not continuous. Then there is a configuration $\phi$ which occurs as the limit configuration of the lamplighter random walk $Z_{n}$ with positive probability. Then, for every $x\in\Gamma$, all trajectories of the random walk $X_{n}$ starting at $x$ and converging to the deterministic boundary element $\mathfrak{u}$ will have the same limit configuration $\phi_{x}$, accumulating only at $\mathfrak{u}$. Note that the group $\Gamma$ acts also on the space of limit configurations by translations, and for every $y\in supp(\nu)$, $T_{y}\phi_{x}=\phi_{xy}$. Since the support of $\nu$ generates $\Gamma$, this cannot happen. Therefore, the limit distribution of $Z_{n}$ is a continuous measure. One can also prove the continuity of the limit distribution using the Borel--Cantelli lemma.
\end{proof}
\section{The half-space method and the Poisson boundary of the lamplighter random walk}\label{sec:The Poisson boundary}
Under the assumptions of Theorem \ref{ConvergenceTheorem}, let $\mu_{\infty}$ be the distribution of $Z_{\infty}$ on $\Omega$, given that the position of the random walk $Z_{n}$ at time $n=0$ is $id=(\begin{bf}0\end{bf},e)$. This is a probability measure on $\Omega$ defined for Borel sets $U\subset\Omega$ by \begin{equation*} \mu_{\infty}(U)=\mathbb{P}[Z_{\infty}\in U\vert Z_{0}=(\begin{bf}0\end{bf},e)]. \end{equation*} The measure $\mu_{\infty}$ is a \textit{harmonic measure} for the random walk $Z_{n}$ with law $\mu$, that is, it satisfies the convolution equation $\mu \ast\mu_{\infty}=\mu_{\infty}$. Since $G$ acts on $\Omega$ by measurable bijections and the measure $\mu_{\infty}$ is stationary with respect to $\mu$, it follows that $(\Omega,\mu_{\infty})$ is a \textit{$\mu$-boundary} (or a \textit{Furstenberg boundary}) for the random walk $Z_{n}$ with law $\mu$, in the sense of Furstenberg \cite{Furstenberg}. There exists a maximal $\mu$-boundary, which is called the \textit{Poisson boundary} for the random walk $Z_{n}$.
The typical situation in which a $\mu$-boundary $(B,\lambda)$ arises is when $B$ is a certain topological or combinatorial boundary of a group $G$, and almost all paths of the random walk $Z_{n}$ with law $\mu$ on $G$ converge (in a certain sense, which needs to be specified in each particular case) to a limit point $Z_{\infty}\in B$. Then the space $B$, considered as a measure space with the resulting hitting distribution $\lambda$ (the \textit{harmonic measure} on $B$), is a $\mu$-boundary of the random walk $Z_{n}$.
We want to know if the measure space $(\Omega,\mu_{\infty})$ is indeed maximal. In order to check the maximality of the $\mu$-boundary, we use the Strip Criterion \ref{StripCriterion} for the identification of the Poisson boundary, due to Kaimanovich \cite[Thm. $6.5$ p. 677]{Kaimanovich2000} and \cite[Thm. $5.19$]{KaimanovichWoess2002}. This criterion is symmetric with respect to time reversal and leads to a simultaneous identification of the Poisson boundaries of the random walk and of the reflected random walk, respectively. Consider now the \textit{reflected random walk} $\check{Z}_{n}=(\check{\eta}_{n},\check{X}_{n})$ on $G$ with law $\check{\mu}(g)=\mu(g^{-1})$ for all $g\in G$, starting at $(\begin{bf}0\end{bf},e)$. The \textit{reflected random walk} $\check{X}_{n}$ on $\Gamma$ is the random walk on $\Gamma$ with law $\check{\nu}(x)=\nu(x^{-1})$, for all $x\in\Gamma$, starting at $e$.
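As a concrete illustration of the group operation underlying $Z_{n}$ and its reflected walk, the following is a minimal computational sketch (not part of the paper's argument; the choice of base group $\mathbb{Z}$ is an assumption made here for simplicity), with a configuration encoded as the finite set of positions where the lamp is on:

```python
# Toy model of G = Z_2 wr Z.  An element is a pair (eta, x): eta is
# the finite set of lit lamps, x the lamplighter's position.  For
# g = (eta, x) and h = (eta', x') the product is
# (eta XOR T_x eta', x + x'), where T_x shifts configurations by x.

def shift(eta, x):
    """Translate a configuration (set of lit lamps) by x."""
    return {p + x for p in eta}

def mul(g, h):
    """Wreath-product multiplication in Z_2 wr Z."""
    (eta, x), (eta2, y) = g, h
    return (eta ^ shift(eta2, x), x + y)   # ^ is symmetric difference

def inv(g):
    """Inverse: (eta, x)^{-1} = (T_{-x} eta, -x)."""
    eta, x = g
    return (shift(eta, -x), -x)

identity = (set(), 0)
# The reflected walk has law mu_check(g) = mu(inv(g)).
```

This makes the formula $\check{\mu}(g)=\mu(g^{-1})$ directly computable in the toy model.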
\paragraph*{\begin{center}The half-space method\end{center}}\label{sec:Method for constructing the strip} Assume that: \begin{enumerate} \item The \textbf{Basic assumptions $(3.1)$} hold for $X_{n}$ and $\check{X}_{n}$. Let $\nu_{\infty}$ and $\check{\nu}_{\infty}$ be the respective hitting distributions on $\partial\Gamma$. \item For $\nu_{\infty}\times\check{\nu}_{\infty}$-almost every pair $(\mathfrak{u},\mathfrak{v})\in\partial \Gamma\times\partial\Gamma$, one has a strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ which satisfies the conditions of Proposition \ref{StripCriterion}. That is, it is a subset of $\Gamma$, the assignment $(\mathfrak{u},\mathfrak{v})\mapsto\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is $\Gamma$-equivariant, and the strip has subexponential growth, i.e., \begin{equation}\label{SubexponentialGrowthBaseStrip}
\dfrac{1}{n}\log|\mathfrak{s}(\mathfrak{u},\mathfrak{v})\cap B(e,n)|\to 0,\mbox{ as }n\to\infty, \end{equation} where $B(e,n)=\{x\in\Gamma:\ d(e,x)\leq n\}$ is the ball with center $e$ and radius $n$ in $\Gamma$. \item For every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, one can assign to the triple $(\mathfrak{u},\mathfrak{v},x)$ a partition of $\Gamma$ into \textit{half-spaces} $\Gamma_{\pm}$ such that $\Gamma_{+}$ (respectively, $\Gamma_{-}$) contains a neighbourhood of $\mathfrak{u}$ (respectively, $\mathfrak{v}$), and the assignments $(\mathfrak{u},\mathfrak{v},x)\mapsto\Gamma_{\pm}$ are $\Gamma$-equivariant. \end{enumerate} In the last item, one can partition $\Gamma$ into more than two subsets and the method can still be applied. However, the important subsets are the ones containing a neighbourhood of $\mathfrak{u}$ (respectively, $\mathfrak{v}$), since only there may infinitely many lamps be switched on (because $\mathfrak{u}$ and $\mathfrak{v}$ are the respective boundary points toward which the random walks $X_{n}$ and $\check{X}_{n}$ converge). What we want is to build a finitely supported configuration associated to pairs $(\phi_{+},\phi_{-})$ of limit configurations (of the lamplighter random walk and of the reflected random walk) accumulating at $\mathfrak{u}$ and $\mathfrak{v}$, respectively. In order to do this, we restrict $\phi_{+}$ and $\phi_{-}$ to $\Gamma_{-}$ and $\Gamma_{+}$, respectively, and then ``glue together'' the restrictions. Since the new configuration depends on the partition of $\Gamma$, we cannot choose the same partition for all $x$, because we would obtain a constant configuration, which is not equivariant. Therefore, the partition of $\Gamma$ should depend on $x$.
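To make the gluing step concrete, here is a small illustrative sketch (a hypothetical toy setup, not the paper's construction): the base group is taken to be $\mathbb{Z}$, the boundary points are $\mathfrak{u}=+\infty$ and $\mathfrak{v}=-\infty$, the half-spaces split at the strip point $x$, and configurations are finite sets of lit lamps.

```python
# Toy illustration of gluing two limit configurations.  Assumed
# (hypothetical) setup: base group Z, half-spaces
# Gamma_+ = [x, +infinity) and Gamma_- = (-infinity, x).

def glue(phi_plus, phi_minus, x):
    """Glue phi_- restricted to Gamma_+ with phi_+ restricted to
    Gamma_-; the result is a finitely supported configuration."""
    on_gamma_plus = {p for p in phi_minus if p >= x}    # phi_- on Gamma_+
    on_gamma_minus = {p for p in phi_plus if p < x}     # phi_+ on Gamma_-
    return on_gamma_plus | on_gamma_minus
```

Note that the glued configuration genuinely depends on the splitting point `x`; a partition chosen independently of `x` would produce a constant, non-equivariant assignment, as remarked above.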
We state here the main result of this paper. \begin{theo}\label{PoissonTheorem} Let $Z_{n}=(\eta_{n},X_{n})$ be a random walk with law $\mu$ on $G=\mathbb{Z}_{2}\wr\Gamma$, such that $supp(\mu)$ generates $G$. Suppose that $\mu$ has finite first moment and $\Omega$ is defined as in \eqref{omega}. If the above assumptions are satisfied, then the measure space $(\Omega,\mu_{\infty})$ is the Poisson boundary of $Z_{n}$, where $\mu_{\infty}$ is the limit distribution on $\Omega$ of $Z_{n}$ starting at $id=(\begin{bf}0\end{bf},e)$. \end{theo} \begin{proof} In order to apply the Strip Criterion \ref{StripCriterion}, we need to find $\mu$- and $\check{\mu}$-boundaries for the lamplighter random walk $Z_{n}$ and the reflected lamplighter random walk $\check{Z}_{n}$, respectively. By Theorem \ref{ConvergenceTheorem}, each of the random walks $Z_{n}$ and $\check{Z}_{n}$ starting at $id$ converges almost surely to an $\Omega$-valued random variable. If $\mu_{\infty}$ and $\check{\mu}_{\infty}$ are their respective limit distributions on $\Omega$, then the spaces $(\Omega,\mu_{\infty})$ and $(\Omega,\check{\mu}_{\infty})$ are $\mu$- and $\check{\mu}$-boundaries of the respective random walks.
Let us take $b_{+}=(\phi_{+},\mathfrak{u})$, $b_{-}=(\phi_{-},\mathfrak{v})\in\Omega$, where $\phi_{+}$ and $\phi_{-}$ are the limit configurations of $Z_{n}$ and $\check{Z}_{n}$, respectively, and $\mathfrak{u},\mathfrak{v}\in\partial\Gamma$ are their only respective accumulation points. By the continuity of $\mu_{\infty}$ and $\check{\mu}_{\infty}$, the set $\{(b_{+},b_{-})\in\Omega\times\Omega:\mathfrak{u}=\mathfrak{v}\}$ has $(\mu_{\infty}\times\check{\mu}_{\infty})$-measure $0$, so that, in constructing the strip $S(b_{+},b_{-})$ we shall consider only the case $\mathfrak{u}\neq\mathfrak{v}$.
Using the third item in the above assumptions, let us consider a partition of $\Gamma$ into $\Gamma_{+}$, $\Gamma_{-}$, and possibly $\Gamma\setminus(\Gamma_{+}\cup\Gamma_{-})$, where $\Gamma_{+}$ (respectively, $\Gamma_{-}$) contains a neighbourhood of $\mathfrak{u}$ (respectively, $\mathfrak{v}$), and $\Gamma\setminus(\Gamma_{+}\cup\Gamma_{-})$ is the remaining subset (which may be empty). The set $\Gamma\setminus(\Gamma_{+}\cup\Gamma_{-})$ contains neither $\mathfrak{u}$ nor $\mathfrak{v}$. The restriction of $\phi_{+}$ to $\Gamma_{-}$ (respectively, of $\phi_{-}$ to $\Gamma_{+}$) is finitely supported, since its only accumulation point is $\mathfrak{u}$ (respectively, $\mathfrak{v}$), which does not lie in the closure of $\Gamma_{-}$ (respectively, $\Gamma_{+}$). Now ``glue together'' the restriction of $\phi_{+}$ to $\Gamma_{-}$ and that of $\phi_{-}$ to $\Gamma_{+}$ in order to get the new configuration \begin{equation}\label{StripConfiguration} \Phi(b_{+},b_{-},x)= \begin{cases} \phi_{-}, & \mbox{on}\ \Gamma_{+}\\ \phi_{+}, & \mbox{on}\ \Gamma_{-}\\ 0, & \mbox{on}\ \Gamma\setminus(\Gamma_{+}\cup\Gamma_{-}) \end{cases} \end{equation} on $\Gamma$, which is, by construction, finitely supported. Now, the sought-after ``bigger'' strip $S(b_{+},b_{-})\subset G$ is the set \begin{equation}\label{LamplighterStrip} S(b_{+},b_{-})=\{\left(\Phi,x\right) :\ x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})\} \end{equation} of all pairs $(\Phi,x)$, where $\Phi=\Phi(b_{+},b_{-},x)$ is the configuration defined above and $x$ runs through the strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ in $\Gamma$. This is a subset of $G=\mathbb{Z}_{2}\wr\Gamma$. We prove that the map $(b_{+},b_{-})\mapsto S(b_{+},b_{-})$ is $G$-equivariant, i.e., for $g=(\eta,\gamma)\in G$: \begin{equation*} gS(b_{+},b_{-})=S(gb_{+},gb_{-}). 
\end{equation*} Indeed, \begin{equation*} gS(b_{+},b_{-})=(\eta,\gamma)\cdot\{\left(\Phi,x\right) :\ x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})\}=\left\lbrace (\eta\oplus T_{\gamma}\Phi,\gamma x),\ x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})\right\rbrace . \end{equation*} If $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, then $\gamma x\in\mathfrak{s}(\gamma \mathfrak{u},\gamma\mathfrak{v})$, since the assignment $(\mathfrak{u},\mathfrak{v})\mapsto\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is $\Gamma$-equivariant. Also, \begin{equation*} \eta\oplus T_{\gamma}\Phi= \begin{cases} \eta\oplus T_{\gamma}\phi_{-}, & \mbox{on}\ \gamma\Gamma_{+} \\ \eta\oplus T_{\gamma}\phi_{+}, & \mbox{on}\ \gamma\Gamma_{-} \\
0, & \mbox{on}\ \Gamma\setminus(\gamma\Gamma_{+}\cup \gamma\Gamma_{-}). \end{cases} \end{equation*} This means that $\eta\oplus T_{\gamma}\Phi(b_{+},b_{-},x)=\Phi(gb_{+},gb_{-},\gamma x)$ for all $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$. On the other hand, \begin{equation*} S(gb_{+},gb_{-})=S((\eta\oplus T_{\gamma}\phi_{+},\gamma\mathfrak{u}),(\eta\oplus T_{\gamma}\phi_{-},\gamma\mathfrak{v}))=\{(\Phi(gb_{+},gb_{-},\gamma x),\gamma x),\ x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})\}, \end{equation*} that is, $gS(b_{+},b_{-})=S(gb_{+},gb_{-})$, and this proves the $G$-equivariance of the strip $S(b_{+},b_{-})$.
Finally, let us prove that the strip $S(b_{+},b_{-})$ has subexponential growth. For this, let $(\eta,x)\in S(b_{+},b_{-})$ be such that $d_{G}((\begin{bf}0\end{bf},e),(\eta,x))\leq n$. From the definition of the metrics $d_{G}(\cdot,\cdot)$ and $d(\cdot,\cdot)$ on $G$ and $\Gamma$, respectively, it follows that $d(e,x)\leq n$. Therefore, if \begin{equation}\label{small_strip} (\eta,x)\in S(b_{+},b_{-})\cap B((\begin{bf}0\end{bf},e),n),\mbox{ then }x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})\cap B(e,n), \end{equation} where $B((\begin{bf}0\end{bf},e),n)$ (respectively, $B(e,n)$) is the ball with center $id=(\begin{bf}0\end{bf},e)$ (respectively, $e$) and radius $n$ in $G$ (respectively, $\Gamma$). Since to every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ we associate exactly one configuration $\Phi$ in $S(b_{+},b_{-})$, equation \eqref{small_strip} implies that \begin{equation*}
|S(b_{+},b_{-})\cap B((\begin{bf}0\end{bf},e),n)|\leq |\mathfrak{s}(\mathfrak{u},\mathfrak{v})\cap B(e,n)|. \end{equation*} Now, the assumption \eqref{SubexponentialGrowthBaseStrip} that $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ has subexponential growth leads to \begin{equation*}
\dfrac{\log|S(b_{+},b_{-})\cap B((\begin{bf}0\end{bf},e),n)|}{n}\to 0,\mbox{ as }n\to\infty, \end{equation*} and this proves the subexponential growth of the strip. Since for almost every pair of points $(b_{+},b_{-})\in\Omega\times\Omega$ we have assigned a strip $S(b_{+},b_{-})$ which satisfies the conditions from Proposition \ref{StripCriterion}, it follows that the measure space $(\Omega,\mu_{\infty})$ is the Poisson boundary of the lamplighter random walk $Z_{n}$. \end{proof} As an application of the half-space method, we consider several classes of base groups $\Gamma$: groups with infinitely many ends, hyperbolic groups and Euclidean lattices.
\subsection{Groups with infinitely many ends}\label{GraphsWithInfEnds}
The concept of \textit{ends} in the discrete setting goes back to Freudenthal \cite{Freudenthal1944}. The \textit{space of ends} $\partial\Gamma$ of a finitely generated group $\Gamma$ is defined as the space of ends of its Cayley graph with respect to a certain finite generating set. Recall that an end in a graph is an equivalence class of one-sided infinite paths, where two such paths are equivalent if there is a third one which meets each of the two infinitely often. We omit the description of the topology of $\widehat{\Gamma}=\Gamma\cup\partial\Gamma$, which can be found in Woess \cite{woess}. The space $\widehat{\Gamma}=\Gamma\cup\partial\Gamma$ is called the \textit{end compactification} of $\Gamma$.
A finitely generated group $\Gamma$ has one, two or infinitely many ends. If it has one end, then the end compactification is not suitable for a good description of the structure of $\Gamma$ at infinity. If it has two ends, then it is quasi-isometric with the two-way-infinite path and the Poisson boundary of any random walk with finite first moment on $\Gamma$ is trivial (see Woess \cite[Thm $25.4$]{woess}). Thus, we shall consider here the case when the underlying group $\Gamma$ for the lamplighter random walk $Z_{n}$ is a \textit{group with infinitely many ends}. The natural geometric boundary of $\Gamma$ is the space of ends $\partial\Gamma$.
We shall also use the powerful theory of cuts and structure trees developed by Dunwoody; see the book by Dicks and Dunwoody \cite{Dunwoody&Dicks}, or for another detailed description see Woess \cite{woess} and Thomassen and Woess \cite{Thomassen&Woess}. A detailed study of structure theory may be very fruitful for obtaining information on the behaviour of the random walks. We shall again omit the description of the structure tree $\mathcal{T}$ of the Cayley graph of a finitely generated group $\Gamma$ and of the structure map $\varphi$ between the Cayley graph of $\Gamma$ and its structure tree. The structure tree $\mathcal{T}$ is countable, but not necessarily locally finite, and $\Gamma$ acts on $\mathcal{T}$ by automorphisms.
The following result is a particular case of Theorem \ref{PoissonTheorem}. This is the first example where the half-space method can be applied in order to find the Poisson boundary of lamplighter random walks over groups with infinitely many ends. \begin{theo}\label{PoissonInfEnds} Let $\Gamma$ be a group with infinitely many ends and $Z_{n}=(\eta_{n},X_{n})$ be a random walk with law $\mu$ on $G=\mathbb{Z}_{2}\wr \Gamma$, such that $supp(\mu)$ generates $G$. Suppose that $\mu$ has finite first moment and $\Omega$ is defined as in \eqref{omega}. If $\mu_{\infty}$ is the limit distribution on $\Omega$ of the random walk $Z_{n}$ starting at $id=(\begin{bf}0\end{bf},e)$, then $(\Omega,\mu_{\infty})$ is the Poisson boundary of the random walk $Z_{n}$. \end{theo} \begin{proof} In order to apply Theorem \ref{PoissonTheorem}, we show that the conditions required in the half-space method are satisfied for the base group $\Gamma$ and random walks $X_{n}$ on $\Gamma$. First of all, one can check that the space of ends $\partial\Gamma$ satisfies the convergence property (\ref{Convergence}). When $\Gamma$ is a group with infinitely many ends and the law $\nu$ of the random walk $X_{n}$ has finite first moment, by Woess \cite{WoessAmenable1989}, $X_{n}$ converges in the end topology to a random end from $\partial\Gamma$. The same is true for the random walk $\check{X}_{n}$ with law $\check{\nu}$ on $\Gamma$. Moreover, the limit distributions are continuous measures on $\partial\Gamma$. Let $\nu_{\infty}$ and $\check{\nu}_{\infty}$ be the respective limit distributions on $\partial\Gamma$. The \textbf{Basic assumptions $(3.1)$} hold for $X_{n}$ and $\check{X}_{n}$, and the first item in the half-space method is fulfilled.
Next, one of the main points of the method is to assign a strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})\subset\Gamma$ to almost every pair of ends $(\mathfrak{u},\mathfrak{v})\in\partial\Gamma\times\partial\Gamma$, and to prove that it satisfies the conditions from Proposition \ref{StripCriterion}. By the continuity of $\nu_{\infty}$ and $\check{\nu}_{\infty}$, the set $\{(\mathfrak{u},\mathfrak{v})\in\partial\Gamma\times\partial\Gamma:\mathfrak{u}=\mathfrak{v}\}$ has $(\nu_{\infty}\times\check{\nu}_{\infty})$-measure $0$, so that, in constructing the strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ we shall consider only the case $\mathfrak{u}\neq\mathfrak{v}$. For this, let $F$ be a $D$-cut, i.e. a finite subset of the Cayley graph of $\Gamma$, whose deletion disconnects $\Gamma$ into precisely two connected components. This cut is used in defining the structure tree of the graph. For details, see Dicks and Dunwoody \cite{Dunwoody&Dicks}. Denote by $F^{0}$ the set of all end vertices (in the Cayley graph of $\Gamma$) of the edges of $F$. For every pair of ends $(\mathfrak{u},\mathfrak{v})\in\partial \Gamma\times\partial\Gamma$, let us define the strip \begin{equation*} \mathfrak{s}(\mathfrak{u},\mathfrak{v})=\bigcup\{\gamma F^{0}:\ \gamma\in\Gamma:\ \widehat{\mathcal{U}}(\mathfrak{u},\gamma F)\neq \widehat{\mathcal{U}}(\mathfrak{v},\gamma F)\}. \end{equation*} The set $\mathcal{U}(\mathfrak{u},F)$ is the connected component which represents the end $\mathfrak{u}$ when we remove the finite set $F$ from $\Gamma$, and $\widehat{\mathcal{U}}(\mathfrak{u},F)$ is its completion (which contains $\mathfrak{u}$) in $\widehat{\Gamma}$. It is clear that $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is a subset of $\Gamma$, and moreover $\gamma \mathfrak{s}(\mathfrak{u},\mathfrak{v})=\mathfrak{s}(\gamma\mathfrak{u},\gamma\mathfrak{v}) $, for every $\gamma\in\Gamma$. 
The strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is the union of all $\gamma F^{0}$ such that the connected components $\widehat{\mathcal{U}}(\mathfrak{u},\gamma F)$ and $\widehat{\mathcal{U}}(\mathfrak{v},\gamma F)$, which contain the ends $\mathfrak{u}$ and $\mathfrak{v}$, respectively, when we remove the set $\gamma F$ from $\Gamma$, are not the same. In other words, $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is the union of all $\gamma F^{0}$ such that the sides (one connected component of $\Gamma\setminus\gamma F$ and its complement in $\Gamma$) of $\gamma F$, seen as edges of the structure tree $\mathcal{T}$, lie on the geodesic between $\varphi\mathfrak{u}$ and $\varphi\mathfrak{v}$ ($\varphi$ is the structure map between $\partial\Gamma$ and its structure tree $\mathcal{T}$). This geodesic can be empty (when $\varphi(\mathfrak{u})=\varphi(\mathfrak{v})$), finite (when $\varphi(\mathfrak{u})$ and $\varphi(\mathfrak{v})$ are vertices in the structure tree $\mathcal{T}$), one-way infinite or two-way infinite. The latter holds when $\mathfrak{u},\mathfrak{v}$ are distinct thin ends (i.e., ends with finite diameter), and we have to check the subexponential growth of the strip only in this case. Using the properties of the structure tree of the Cayley graph of $\Gamma$, there is an integer $k>0$ such that the following holds: if $A_{0},A_{1},\ldots ,A_{k}$ are oriented edges in the structure tree $\mathcal{T}$ and connected components in $\Gamma$ such that $A_{0} \supset A_{1} \supset \cdots \supset A_{k}$ properly, then $d(A_{k},\Gamma\setminus A_{0})\geq 2$. Finiteness of $F^{0}$ implies that there is a constant $c>0$ such that \begin{equation*} \label{subexp}
|\mathfrak{s}(\mathfrak{u},\mathfrak{v})\cap B(e,n) |\leq cn, \end{equation*} for all $n$, and for all distinct thin ends $\mathfrak{u},\mathfrak{v}$. This proves the subexponential growth of $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$.
Next, let us go to the partition of $\Gamma$ into half-spaces. By the definition of $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is contained in some cut $\gamma F$, for some $\gamma\in\Gamma$. Since a $D$-cut $F$ in a graph has the property that its translates $\gamma F$, $\gamma\in\Gamma$, are pairwise disjoint, it follows that every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is contained in exactly one cut $\gamma F$. We partition $\Gamma$ in this way: for every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, we look at the $D$-cut $\gamma F$ containing $x$, and remove it from $\Gamma$. Then the set $(\Gamma\setminus\gamma F)$ has precisely two connected components. This follows from the definition of a $D$-cut, and from the finiteness of the removed set $F$. Moreover, the connected components containing $\mathfrak{u}$ and $\mathfrak{v}$ are different, by the definition of the strip. Let $\Gamma_{+}$ be the connected component of $(\Gamma\setminus\gamma F)$ which contains $\mathfrak{u}$, and $\Gamma_{-}$ be its complement in $\Gamma$, which contains $\mathfrak{v}$. One can see here that the partition of $\Gamma$ into the half-spaces $\Gamma_{+}$ and $\Gamma_{-}$ depends on the cut $\gamma F$ containing $x$, that is, it depends on $x$. The assignments $(\mathfrak{u},\mathfrak{v},x)\mapsto\Gamma_{\pm}$ are $\Gamma$-equivariant. From the above, it follows that all the assumptions needed in the half-space method hold in the case of a group with infinitely many ends. Now, we apply Theorem \ref{PoissonTheorem}.
By Theorem \ref{ConvergenceTheorem}, each of the random walks $Z_{n}$ and $\check{Z}_{n}$ starting at $id$ converges almost surely to an $\Omega$-valued random variable. If $\mu_{\infty}$ and $\check{\mu}_{\infty}$ are their respective limit distributions on $\Omega$, then the spaces $(\Omega,\mu_{\infty})$ and $(\Omega,\check{\mu}_{\infty})$ are $\mu$- and $\check{\mu}$-boundaries of the respective random walks.
Take $b_{+}=(\phi_{+},\mathfrak{u})$, $b_{-}=(\phi_{-},\mathfrak{v})\in\Omega$, where $\phi_{+}$ and $\phi_{-}$ are the limit configurations of $Z_{n}$ and $\check{Z}_{n}$, respectively, and $\mathfrak{u},\mathfrak{v}\in\partial\Gamma$ are their only respective accumulation points. Define the configuration $\Phi(b_{+},b_{-},x)$ as in \eqref{StripConfiguration}, where $\Gamma\setminus(\Gamma_{+}\cup\Gamma_{-})$ is the empty set, and the strip $S(b_{+},b_{-})$ exactly as in \eqref{LamplighterStrip}. From Theorem \ref{PoissonTheorem}, $S(b_{+},b_{-})$ satisfies the conditions from Proposition \ref{StripCriterion}, and it follows that the space $(\Omega,\mu_{\infty})$ is the Poisson boundary of the lamplighter random walk $Z_{n}$ over $G=\mathbb{Z}_{2}\wr \Gamma$. \end{proof}
\subsection{Hyperbolic groups}
Consider the group $\Gamma$ with the word metric $d(\cdot,\cdot)$ on it. The group $\Gamma$ is called (word) \textit{hyperbolic} if its Cayley graph corresponding to a finite generating set $S$ is hyperbolic. A graph is called \textit{hyperbolic} (in the sense of Gromov) if there is a $\delta \geq 0$ such that every geodesic triangle in the graph is $\delta$-thin. We shall not lay out the basic features of hyperbolic graphs and their hyperbolic boundary and compactification, which we denote again by $\partial_{h}\Gamma$ and $\widehat{\Gamma}$. For details, the reader is invited to consult the texts by Gromov \cite{Gromov1987}, Ghys and de la Harpe \cite{GhysHarpe1990}, Coornaert, Delzant and Papadopoulos \cite{CoornaertDelzantPapadopoulos1990}, or, for a presentation in the context of random walks on graphs, Woess \cite[Section 22]{woess}.
If $\Gamma$ is a hyperbolic group, then one can understand how its hyperbolic compactification is related to the end compactification: the former is finer, that is, the identity on $\Gamma$ extends to a continuous surjection from the hyperbolic to the end compactification which maps $\partial_{h}\Gamma$ onto $\partial\Gamma$. For every two distinct points $\mathfrak{u},\mathfrak{v}$ on the hyperbolic boundary $\partial_{h}\Gamma$, there is a geodesic $\overline{\mathfrak{u}\mathfrak{v}}$, which may not be unique. The boundary of a finitely generated hyperbolic group $\Gamma$ is either infinite or has cardinality $2$. In the latter case, it is a group with two ends that is quasi-isometric with the two-way infinite path, and the Poisson boundary of any random walk with finite first moment is trivial. Thus, we assume that $\partial_{h}\Gamma$ is infinite, and consider lamplighter random walks $Z_{n}$ on $G=\mathbb{Z}_{2}\wr\Gamma$, when $\Gamma$ is a hyperbolic group. The natural geometric boundary of $\Gamma$ is the hyperbolic boundary $\partial_{h}\Gamma$. For the Poisson boundary of the lamplighter random walk $Z_{n}$ on $G=\mathbb{Z}_{2}\wr\Gamma$ we have to distinguish two cases: \textbf{(a)} infinite hyperbolic boundary and infinitely many ends and \textbf{(b)} infinite hyperbolic boundary and only one end.
\paragraph{(a) Infinite hyperbolic boundary and infinitely many ends.} From the fact that the identity on $\Gamma$ extends to a continuous surjection from the hyperbolic to the end compactification, which maps $\partial_{h}\Gamma$ onto $\partial\Gamma$, it follows that this is exactly the case treated in Section \ref{GraphsWithInfEnds}, i.e., of a group with infinitely many ends, where the ends are the connected components of the hyperbolic boundary. The Poisson boundary of the lamplighter random walk $Z_{n}$ is given by Theorem \ref{PoissonInfEnds}, with the hyperbolic boundary $\partial_{h}\Gamma$ instead of the space of ends $\partial\Gamma$.
\paragraph{(b) Infinite hyperbolic boundary and only one end.} What we want is to determine the Poisson boundary of lamplighter random walks $Z_{n}$ on $G=\mathbb{Z}_{2}\wr\Gamma$, when $\Gamma$ is a finitely generated hyperbolic group with infinite boundary and only one end. In order to use the half-space method defined before, we shall need some additional definitions.
Consider the hyperbolic boundary $\partial_{h}\Gamma$ as being described by equivalence of geodesic rays. For $y\in\Gamma$ and $\mathfrak{u}\in\partial_{h}\Gamma$, let $\pi=[y=y_{0},y_{1},\ldots,\mathfrak{u}]$ be a geodesic ray joining $y$ with $\mathfrak{u}$. For every $x\in\Gamma$, let $\beta_{\mathfrak{u}}(x,\pi)=\limsup_{i\to\infty}(d(x,y_{i})-i)$, and define the \textit{Busemann function} $\beta_{\mathfrak{u}}:\Gamma\times\Gamma\rightarrow \mathbb{R}$ of the point $\mathfrak{u}\in\partial_{h}\Gamma$ as follows: \begin{equation*} \beta_{\mathfrak{u}}(x,y)=\sup\{\beta_{\mathfrak{u}}(x,\pi'):\pi' \mbox{ is a geodesic ray from }y\mbox{ to } \mathfrak{u}\}. \end{equation*} The \textit{horosphere} with center $\mathfrak{u}$ passing through $x\in\Gamma$, denoted by $H_{x}(\mathfrak{u})$, is the set \begin{equation*} H_{x}(\mathfrak{u})=\{ y\in\Gamma:\ \beta_{\mathfrak{u}}(x,y)=0\}. \end{equation*} The following result is another special case of Theorem \ref{PoissonTheorem}. \begin{theo}\label{PoissonHyperbolicGraphs} Let $\Gamma$ be a finitely generated hyperbolic group with infinite hyperbolic boundary and only one end, and $Z_{n}=(\eta_{n},X_{n})$ be a random walk with law $\mu$ on $G=\mathbb{Z}_{2}\wr\Gamma$, such that $supp(\mu)$ generates $G$. Suppose that $\mu$ has finite first moment and $\Omega$ is defined as in \eqref{omega}, with the hyperbolic boundary $\partial_{h}\Gamma$ instead of $\partial\Gamma$. If $\mu_{\infty}$ is the limit distribution on $\Omega$ of the random walk $Z_{n}$ starting at $id=(\begin{bf}0\end{bf},e)$, then $(\Omega,\mu_{\infty})$ is the Poisson boundary of the random walk. \end{theo} \begin{proof} The proof is as in the preceding example. First of all, we show that the conditions required in the half-space method are satisfied for $\Gamma$ and for random walks $X_{n}$ on $\Gamma$. One can check that the hyperbolic boundary $\partial_{h}\Gamma$ satisfies the convergence property (\ref{Convergence}). 
When $\Gamma$ is a hyperbolic group and the law $\nu$ of the random walk $X_{n}$ on $\Gamma$ has finite first moment, Woess \cite{WoessFixedSets1993} (see also Woess \cite{woess}) proved that $X_{n}$ converges almost surely in the hyperbolic topology to a random element from $\partial_{h}\Gamma$, and the limit distribution is a continuous measure. The same is true for the random walk $\check{X}_{n}$ with law $\check{\nu}$ on $\Gamma$. Let $\nu_{\infty}$ and $\check{\nu}_{\infty}$ be the respective limit distributions on $\partial_{h}\Gamma$. The \textbf{Basic assumptions $(3.1)$} hold for $X_{n}$ and $\check{X}_{n}$, and the first item in the half-space method is fulfilled.
In order to prove that the second item in the half-space method holds, we assign to almost every pair of boundary points $(\mathfrak{u},\mathfrak{v})\in\partial_{h}\Gamma\times\partial_{h}\Gamma$ a strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})\subset\Gamma$ and we prove that it satisfies the conditions from Proposition \ref{StripCriterion}. By the continuity of $\nu_{\infty}$ and $\check{\nu}_{\infty}$ on $\partial_{h}\Gamma$, the set $\{(\mathfrak{u},\mathfrak{v})\in\partial_{h}\Gamma\times\partial_{h}\Gamma:\mathfrak{u}=\mathfrak{v}\}$ has $(\nu_{\infty}\times\check{\nu}_{\infty})$-measure $0$, so that, in constructing $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, we consider only the case $\mathfrak{u}\neq\mathfrak{v}$. Let \begin{equation*} \mathfrak{s}(\mathfrak{u},\mathfrak{v})=\{x\in\Gamma:\ x\mbox{ lies on a two-way infinite geodesic between }\mathfrak{u}\mbox{ and }\mathfrak{v}\}. \end{equation*} The strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is thus the union of all geodesics in $\Gamma$ joining $\mathfrak{u}$ and $\mathfrak{v}$. This is a subset of $\Gamma$, and $\gamma \mathfrak{s}(\mathfrak{u},\mathfrak{v})=\mathfrak{s}(\gamma\mathfrak{u},\gamma\mathfrak{v}) $, for every $\gamma\in\Gamma$. Since in a hyperbolic space any two geodesics with the same endpoints are within uniformly bounded distance from each other (see \cite{GhysHarpe1990} for details), and geodesics have linear growth, it follows that there exists a constant $c>0$ such that \begin{equation*}
|\mathfrak{s}(\mathfrak{u},\mathfrak{v})\cap B(e,n)|\leq cn, \end{equation*} for all $n$ and distinct $\mathfrak{u},\mathfrak{v}\in\partial_{h}\Gamma$, and this proves the subexponential growth of $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$.
Finally, let us partition $\Gamma$ into half-spaces. Actually, this is one of the examples where the partition is made into two half-spaces and a third, ``uninteresting'' set on which the configuration will be $0$. For every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, let $H_{x}(\mathfrak{u})$ (respectively, $H_{x}(\mathfrak{v})$) be the horosphere with center $\mathfrak{u}$ (respectively, $\mathfrak{v}$) passing through $x$. Remark that the two horospheres may have non-empty intersection. Consider the partition of $\Gamma$ into the subsets $\Gamma_{+}$, $\Gamma_{-}$, and $\Gamma\setminus(\Gamma_{+}\cup\Gamma_{-})$, where $\Gamma_{+}=H_{x}(\mathfrak{u})$ (that is, it contains a neighbourhood of $\mathfrak{u}$) and $\Gamma_{-}=H_{x}(\mathfrak{v})\setminus H_{x}(\mathfrak{u})$. This partition is $\Gamma$-equivariant. From the above, it follows that all the assumptions needed in the half-space method hold in the case of a finitely generated hyperbolic group $\Gamma$. Now, we apply Theorem \ref{PoissonTheorem}.
By Theorem \ref{ConvergenceTheorem}, each of the random walks $Z_{n}$ and $\check{Z}_{n}$ starting at $id$ converges almost surely to an $\Omega$-valued random variable, where $\Omega$ is defined as in \eqref{omega}, with the hyperbolic boundary $\partial_{h}\Gamma$ instead of $\partial\Gamma$. If $\mu_{\infty}$ and $\check{\mu}_{\infty}$ are their respective limit distributions on $\Omega$, then the spaces $(\Omega,\mu_{\infty})$ and $(\Omega,\check{\mu}_{\infty})$ are $\mu$- and $\check{\mu}$-boundaries of the respective random walks.
Take $b_{+}=(\phi_{+},\mathfrak{u})$, $b_{-}=(\phi_{-},\mathfrak{v})\in\Omega$, where $\phi_{+}$ and $\phi_{-}$ are the limit configurations of $Z_{n}$ and $\check{Z}_{n}$, respectively, and $\mathfrak{u},\mathfrak{v}\in\partial_{h}\Gamma$ are their only respective accumulation points. Define the configuration $\Phi(b_{+},b_{-},x)$ as in \eqref{StripConfiguration}, and the strip $S(b_{+},b_{-})$ exactly as in \eqref{LamplighterStrip}. From Theorem \ref{PoissonTheorem}, $S(b_{+},b_{-})$ satisfies the conditions from Proposition \ref{StripCriterion}, and it follows that the space $(\Omega,\mu_{\infty})$ (with the hyperbolic boundary $\partial_{h}\Gamma$ instead of $\partial\Gamma$ in the definition \eqref{omega} of $\Omega$) is the Poisson boundary of the lamplighter random walk $Z_{n}$ over $G=\mathbb{Z}_{2}\wr \Gamma$. \end{proof}
\subsection{Euclidean lattices}
Let now $\Gamma= \mathbb{Z}^{d}$, $d\geq 3$, be the $d$-dimensional lattice, with the Euclidean metric $|\cdot|$ on it. For $\mathbb{Z}^{d}$, there are also natural boundaries and compactifications. A nice example of compactification is obtained by embedding $\mathbb{Z}^{d}$ into the $d$-dimensional unit disc via the map $x\mapsto x/(1+|x|)$, and taking the closure. In this compactification, the boundary $\partial\mathbb{Z}^{d}$ is the unit sphere $S_{d-1}$ in $\mathbb{R}^d$, and a sequence $x_{n}$ in $\mathbb{Z}^{d}$ converges to $\mathfrak{u}\in S_{d-1}$ if and only if $|x_{n}|\to\infty$ and $x_{n}/|x_{n}|\to\mathfrak{u}$, as $n\to\infty$.
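The embedding and the mode of convergence can be illustrated numerically. The sketch below is hypothetical (the step distribution `steps` is an arbitrary drifted choice made here for illustration, not taken from the paper): it maps lattice points into the unit ball via $x\mapsto x/(1+|x|)$ and checks empirically that the direction $X_{n}/|X_{n}|$ of a drifted walk approaches $m/|m|$.

```python
# Sketch: the disc compactification of Z^d and the direction of a
# drifted random walk.  The step law used in examples is a
# hypothetical one with drift m = (0.6, 0.2, 0.2).
import math
import random

def embed(x):
    """Embed a lattice point into the open unit ball: x -> x/(1+|x|)."""
    norm = math.sqrt(sum(c * c for c in x))
    return tuple(c / (1.0 + norm) for c in x)

def direction(x):
    """Unit vector x/|x| (assumes x != 0)."""
    norm = math.sqrt(sum(c * c for c in x))
    return tuple(c / norm for c in x)

def walk(steps, n, rng):
    """Position in Z^d after n i.i.d. uniform choices from `steps`."""
    pos = [0] * len(steps[0])
    for _ in range(n):
        s = rng.choice(steps)
        pos = [a + b for a, b in zip(pos, s)]
    return tuple(pos)
```

By the law of large numbers, `direction(walk(steps, n, rng))` stabilizes near `direction(m)` for large `n`, which matches the deterministic boundary limit $m/|m|$ discussed below.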
Let us check that the property \eqref{Convergence} holds for the boundary $S_{d-1}$. For this, let $x_{n}$ be a sequence converging to $\mathfrak{u}\in S_{d-1}$, and $y_{n}$ another sequence in $\mathbb{Z}^{d}$, such that $x_{n}/|x_{n}|\to\mathfrak{u}$ and $|x_{n}-y_{n}|/|x_{n}|\to 0$ as $n\to\infty$. Since \begin{equation*}
\Big{|}\dfrac{y_{n}}{|x_{n}|}-\dfrac{x_{n}}{|x_{n}|}\Big{|}\leq\dfrac{|x_{n}-y_{n}|}{|x_{n}|}\to 0, \end{equation*}
it follows that $y_{n}/|x_{n}|\to\mathfrak{u}$. Now, $y_{n}/|x_{n}|=(y_{n}/|y_{n}|)\cdot(|y_{n}|/|x_{n}|)$, and the sequence $|y_{n}|/|x_{n}|$ of real numbers converges to $1$: by the triangle inequality, $1-\frac{|x_{n}-y_{n}|}{|x_{n}|}\leq\frac{|y_{n}|}{|x_{n}|}\leq 1+\frac{|x_{n}-y_{n}|}{|x_{n}|}$, and both bounds converge to $1$. Therefore $y_{n}/|y_{n}|\to\mathfrak{u}$, and the property \eqref{Convergence} holds. Next, if the law $\nu$ of the random walk $X_{n}$ on $\mathbb{Z}^{d}$ has non-zero first moment (drift) \begin{equation*} m=\sum_{x}x\nu(x)\in\mathbb{R}^{d}, \end{equation*}
then the law of large numbers implies that $X_{n}$ converges to the boundary $S_{d-1}$ in this compactification with deterministic limit $m/|m|$. In particular, the limit distribution $\nu_{\infty}$ is the Dirac mass at this point. Next, we state the result on the Poisson boundary of lamplighter random walks over $\mathbb{Z}^{d}$ in the case of non-zero drift, using the half-space method. Remark that in this case the description of the Poisson boundary was earlier obtained by Kaimanovich \cite{KaimanovichPreprint}. \begin{theo}\label{PoissonEuclideanLattices} Let $\Gamma=\mathbb{Z}^{d}$, $d\geq 3$, be a Euclidean lattice and $Z_{n}=(\eta_{n},X_{n})$ be a random walk with law $\mu$ on $G=\mathbb{Z}_{2}\wr \mathbb{Z}^{d}$, such that $\mathrm{supp}(\mu)$ generates $G$, and the projected random walk $X_{n}$ on $\mathbb{Z}^{d}$ has non-zero drift. Suppose that $\mu$ has finite first moment and $\Omega$ is defined as in \eqref{omega}, with the unit sphere $S_{d-1}$ instead of $\partial\Gamma$. If $\mu_{\infty}$ is the limit distribution on $\Omega$ of the random walk $Z_{n}$ starting at $id=(\mathbf{0},e)$, then $(\Omega,\mu_{\infty})$ is the Poisson boundary of the random walk. \end{theo}
\begin{proof} Let us show that the conditions required in the half-space method are satisfied for $\Gamma=\mathbb{Z}^{d}$ and for random walks $X_{n}$ on $\Gamma$. The random walk $X_{n}$ (respectively, $\check{X}_{n}$) converges to the boundary $S_{d-1}$ with deterministic limit $\mathfrak{u}=m/|m|$ (respectively, $\mathfrak{v}=-m/|m|$) in the case of non-zero mean $m$, and the convergence property \eqref{Convergence} holds. The \textbf{Basic assumptions $(3.1)$} hold for $X_{n}$ and $\check{X}_{n}$, and the first item in the half-space method is satisfied. The limit distributions $\nu_{\infty}$ and $\check{\nu}_{\infty}$ are the Dirac masses at these limit points.
Now, defining a strip in $\mathbb{Z}^{d}$ is an easy task, because of the growth of $\mathbb{Z}^{d}$. For the two limit points $\mathfrak{u}$ and $\mathfrak{v}$ of $X_{n}$ and $\check{X}_{n}$, respectively, define the strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})=\mathbb{Z}^{d}$. This strip does not depend on the limit points, it is $\mathbb{Z}^{d}$-equivariant, and it has polynomial growth of order $d$, hence subexponential growth. Next, let us partition $\mathbb{Z}^{d}$ into half-spaces. Denote by $\overline{\mathfrak{u}\mathfrak{v}}$ the geodesic joining the two deterministic boundary points $\mathfrak{u},\mathfrak{v}\in S_{d-1}$. In this case, this is exactly a diameter of the ball, since the points $\mathfrak{u}$ and $\mathfrak{v}$ are antipodal, i.e., opposite through the centre. For every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})=\mathbb{Z}^{d}$, consider the hyperplane which passes through $x$ and is orthogonal to $\overline{\mathfrak{u}\mathfrak{v}}$. This hyperplane cuts $\mathbb{Z}^{d}$ into two disjoint spaces $\Gamma_{+}$ and $\Gamma_{-}$, containing $\mathfrak{u}$ and $\mathfrak{v}$, respectively. Hence, $\Gamma=\mathbb{Z}^{d}$ is partitioned into half-spaces $\Gamma_{+}$ and $\Gamma_{-}$, which are $\mathbb{Z}^{d}$-equivariant. From the above, it follows that all the assumptions needed in the half-space method hold in the case of a Euclidean lattice $\mathbb{Z}^{d}$. Now, we apply Theorem \ref{PoissonTheorem}.
By Theorem \ref{ConvergenceTheorem} each of the random walks $Z_{n}$ and $\check{Z}_{n}$ starting at $id$ converges almost surely to an $\Omega$-valued random variable, where $\Omega$ is defined as in \eqref{omega}, with $S_{d-1}$ instead of $\partial\Gamma$. Nevertheless, the only ``active'' points of non-zero $\nu_{\infty}$- and $\check{\nu}_{\infty}$-measure on $S_{d-1}$ are $\mathfrak{u}=m/|m|$ and $\mathfrak{v}=-m/|m|$, respectively. More precisely, $\Omega$ can be written as \begin{equation}\label{EuclideanOmega} \Omega =\Big{(}\mathcal{C}_{\mathfrak{u}}\times\{\mathfrak{u}\}\Big{)}\cup\Big{(}\mathcal{C}_{\mathfrak{v}}\times\{\mathfrak{v}\}\Big{)}, \end{equation} where $\mathcal{C}_{\mathfrak{u}}$ (respectively, $\mathcal{C}_{\mathfrak{v}}$) is the set of all configurations accumulating only at $\mathfrak{u}$ (respectively, $\mathfrak{v}$).
If $\mu_{\infty}$ and $\check{\mu}_{\infty}$ are the limit distributions of $Z_{n}$ and $\check{Z}_{n}$ on $\Omega$, then the spaces $(\Omega,\mu_{\infty})$ and $(\Omega,\check{\mu}_{\infty})$ are $\mu$- and $\check{\mu}$- boundaries of the respective random walks. Take $b_{+}=(\phi_{+},\mathfrak{u})$, $b_{-}=(\phi_{-},\mathfrak{v})\in\Omega$, where $\phi_{+}$ and $\phi_{-}$ are the limit configurations of $Z_{n}$ and $\check{Z}_{n}$, respectively, and $\mathfrak{u},\mathfrak{v}\in S_{d-1}$ are their only respective accumulation points. Define the configuration $\Phi(b_{+},b_{-},x)$ like in \eqref{StripConfiguration}, and the strip $S(b_{+},b_{-})$ exactly like in \eqref{LamplighterStrip}. From Theorem \ref{PoissonTheorem}, $S(b_{+},b_{-})$ satisfies the conditions from Proposition \ref{StripCriterion}, and it follows that the space $(\Omega,\mu_{\infty})$, with $\Omega$ as in \eqref{EuclideanOmega}, is the Poisson boundary of the lamplighter random walk $Z_{n}$ over $G=\mathbb{Z}_{2}\wr \Gamma$. \end{proof} \paragraph*{Final remarks.} One can also apply the method in order to find the Poisson boundary of lamplighter random walks over polycyclic groups, nilpotent groups, and discrete subgroups of semi-simple Lie groups. Another application of the method is the determination of the Poisson boundary of random walks over ``iterated'' lamplighter groups. That is, we consider our base group as being $\mathbb{Z}_{r}\wr\Gamma$ and we construct a new lamplighter group $\mathbb{Z}_{r}\wr(\mathbb{Z}_{r}\wr\Gamma)$ over $\mathbb{Z}_{r}\wr\Gamma$. The interesting fact here is that the geometry of the group $\mathbb{Z}_{r}\wr\Gamma$ is completely different from that of $\Gamma$. For instance, when $\Gamma$ is a group with infinitely many ends, $\mathbb{Z}_{r}\wr\Gamma$ has only one end. 
Our method still works: we start with a strip in the lamplighter graph $\mathbb{Z}_{r}\wr X$ and lift it to a ``bigger'' one in $\mathbb{Z}_{r}\wr (\mathbb{Z}_{r}\wr X)$, following the steps described above.
\paragraph*{Acknowledgements} I am grateful to Wolfgang Woess for numerous fruitful discussions and for his help during the writing of this manuscript, and also to Vadim Kaimanovich for several hints and useful remarks regarding the content and exposition. I would also like to thank the referee for careful reading, suggestions and corrections that helped to improve the paper.
\begin{small}
\end{small}
\end{document}
\begin{document}
\title{Quantized VCG Mechanisms for Polymatroid Environments } \titlenote{This work was supported in part by the National Science Foundation grant TWC-1314620.}
\author{Hao Ge and Randall A. Berry} \affiliation{
\institution{Department of Electrical and Computer Engineering, Northwestern University, Evanston, Illinois} }
\begin{abstract} Many network resource allocation problems can be viewed as allocating a divisible resource, where the allocations are constrained to lie in a polymatroid. We consider market-based mechanisms for such problems. Though the Vickrey-Clarke-Groves (VCG) mechanism can provide the efficient allocation with strong incentive properties (namely dominant strategy incentive compatibility), its well-known high communication requirements can prevent it from being used. There have been a number of approaches for reducing the communication costs of VCG by weakening its incentive properties. Here, we instead take a different approach of reducing communication costs via quantization while maintaining VCG's dominant strategy incentive properties. The cost for this approach is a loss in efficiency which we characterize. We first consider quantizing the resource allocations so that agents need only submit a finite number of bids instead of a full utility function. We subsequently consider quantizing the agents' bids. \end{abstract}
\begin{CCSXML}
<ccs2012>
<concept>
<concept_id>10003033.10003068.10003078</concept_id>
<concept_desc>Networks~Network economics</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10003033.10003079.10011672</concept_id>
<concept_desc>Networks~Network performance analysis</concept_desc>
<concept_significance>300</concept_significance>
</concept>
<concept>
<concept_id>10003456.10003457.10003490.10003514</concept_id>
<concept_desc>Social and professional topics~Information system economics</concept_desc>
<concept_significance>100</concept_significance>
</concept>
</ccs2012> \end{CCSXML}
\ccsdesc[500]{Networks~Network economics} \ccsdesc[300]{Networks~Network performance analysis} \ccsdesc[100]{Social and professional topics~Information system economics}
\keywords{Mechanism design, Quantization, Worst-case efficiency}
\maketitle
\section{Introduction} Efficient allocation of limited resources is a fundamental problem in networked systems. Economic-based models are a common way of addressing this problem. In such models, agents are endowed with utility functions and the goal is to maximize the social welfare, given by the sum of the agents' utilities. Hence, given the agents' utility functions, efficient resource allocation reduces to solving an optimization problem. In networked systems, two challenges to such an approach are (1) the agents are distributed and only limited communication may be available for communicating user utilities and resource allocation decisions, and (2) self-interested users may not have an incentive to correctly report their utilities.
The incentive issue has been studied through the lens of mechanism design. One of the most celebrated results in this literature is the Vickrey-Clarke-Groves (VCG) mechanism \cite{vickrey1961counterspeculation,clarke1971multipart,groves1973incentives}, which provides an elegant solution to the incentive issue. In the VCG mechanism, each agent is required to report her utility function and in turn receives an allocation and makes a payment based on the reports of every agent. Through a carefully designed payment, the VCG mechanism achieves the efficient outcome under a strong incentive guarantee. Namely, it is dominant strategy incentive compatible (DSIC) for agents to truthfully report their utilities, and if they do so, then the mechanism simply maximizes the sum of the reported utilities, yielding the efficient outcome. However, in the context of divisible network resources, such as power, bandwidth or storage space, each agent's utility may be infinite-dimensional and hence VCG mechanisms do not address the issue of limited communication.
The limited communication issue has also been well studied. In particular, we highlight the seminal work of Kelly, \cite{kelly1997charging,kelly1998rate}, in which agents submit one-dimensional bids and receive an allocation determined by the ratio of their bid and a congestion price. For price-taking agents, who do not anticipate the impact of their actions on the price, this mechanism can lead to the socially optimal allocation, but as shown in \cite{johari2004efficiency}, when buyers can anticipate the impact of their actions on congestion price, this mechanism can lead to an efficiency loss.
In \cite{yang2007vcg,johari2009efficiency}, a ``VCG-Kelly" mechanism was proposed that combined the one-dimensional bids in Kelly's mechanism and the VCG payment rule. This mechanism is shown to obtain the socially optimal outcome as with VCG. However, the incentive guarantee is relaxed to the weaker notion of a Nash equilibrium instead of a dominant strategy equilibrium as in VCG.\footnote{Justifying a Nash equilibrium requires that agents are aware of each others' pay-offs (and rationality) or apply some type of dynamic process to reach this point.}
In our previous work, \cite{haoge2018,ge2018quantized}, we introduced an alternative approach for reducing communication costs while maintaining VCG's dominant strategy incentive properties when allocating a single divisible resource. This approach was to quantize the resource and then run VCG on the quantized resource. This incurs some efficiency loss, which was bounded. We build on this approach in this paper, but instead of a single divisible resource, we consider resource allocations that are constrained to lie within a polymatroid in $\mathbb R_+^N$, where $N$ is the number of agents. Such a polymatroid environment is a natural and non-trivial generalization of a single divisible resource. For example, if the single divisible resource represents the rate on a given link in a network, then a polymatroid can be viewed as giving the region of rates obtainable by different source-destination pairs across a network with multiple links (see e.g. \cite{federgruen1986optimal}). As shown in \cite{Yang2005RevenueAS,berry2006kelly}, mechanisms for single links may not work when applied in a more general network setting. Polymatroid environments can also model many other network settings such as multiclass queueing systems \cite{shanthikumar1992multiclass}, various network flow problems \cite{lawler1982computing,imai1983network,lawler1982flow}, spectrum sharing \cite{kang2014sum,berry2013market} and problems in network information theory \cite{tse1998multiaccess,ge2015secure,tse2004diversity}. Given a polymatroid feasible region, we study quantized VCG mechanisms and the resulting efficiency losses for strategic agents.
The contributions of this paper are the following: \begin{itemize}
\item{Given a polymatroid environment, we determine a way to quantize the resource into partitions so that a feasible allocation of partitions lies in an {\it integral polymatroid}.}
\item{Based on the integral polymatroid, we propose a \textit{quantized VCG mechanism} for allocating partitions. This mechanism inherits VCG's DSIC property and further, due to the polymatroid structure, can be implemented using a greedy algorithm. We also characterize the worst-case efficiency loss of this mechanism due to the quantization.}
\item{In addition to the quantization of the resource, we further study a {\it rounded VCG mechanism} in which the agents' bids are also quantized. The bid quantization results in the mechanism no longer being DSIC, but we show that $\epsilon$-dominant strategies exist. We characterize the worst-case efficiency for three different types of such strategies.}
\item{We analyze the trade-off between communication cost and overall efficiency in our proposed mechanisms. We further discuss the impact of the parameters in each mechanism.} \end{itemize}
The rest of the paper is organized as follows: in Sect. 2, we review some properties of polymatroids and the VCG mechanism. Our two classes of quantized mechanisms are introduced and analyzed in Sect. 3-4. We conclude the paper in Sect. 5.
\emph{Notations:} $\mathbb{R}_{+}^N$ and $\mathbb{N}^N$ denote the set of nonnegative real-valued and integer-valued vectors of dimension $N$, respectively. We use $\lfloor x \rfloor$ for the floor, $\lceil x \rceil$ for the ceiling, and $\{x\}$ for the fractional part of $x$. We denote by $\sigma_{-i}$ the components of $\sigma$ for all agents other than agent $i$, and $(z,\sigma_{-i})$ denotes $\sigma$ with the $i^{th}$ component replaced by $z$. We write $[ \cdot]^+$ for the projection onto the nonnegative orthant. The indicator function $\mathds{1}_w$ equals 1 if the event $w$ happens, and 0 otherwise.
\section{Model and Background} \subsection{Polymatroids} We begin by reviewing some basic definitions and properties related to polymatroids.
\begin{definition}\cite{edmonds1970submodular}
Given a finite ground set $\mathcal{N} = \{1,\cdots,N\}$, a set function $f$: $2^{\mathcal{N}} \rightarrow \mathbb{R}$ is \textit{submodular} if:
\begin{equation*}
f(S) + f(T) \geq f(S \cap T) + f(S \cup T),
\end{equation*}
for any subsets $S$ and $T$ of $\mathcal N$. If the inequality holds strictly, the set function $f$ is strictly submodular. \end{definition}
An equivalent definition for submodularity is: \begin{equation*}
f(S\cup \{i\})+f(S\cup\{j\}) \geq f(S) + f(S\cup\{i, j\} ), \end{equation*} for any $S \subseteq \mathcal{N}$ and distinct $i,j \in \mathcal{N} \setminus S$.
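For small ground sets, submodularity can be checked directly from the definition by enumeration. The following Python sketch is our own illustration (not part of the paper); the example set functions are hypothetical:

```python
from itertools import combinations

def is_submodular(f, ground):
    """Brute-force check of f(S) + f(T) >= f(S intersect T) + f(S union T)
    over all pairs of subsets of `ground` (exponential cost: toy sizes only)."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(sorted(ground), r)]
    return all(f(S) + f(T) >= f(S & T) + f(S | T) - 1e-12
               for S in subsets for T in subsets)

# f(S) = min(|S|, 2) is submodular; f(S) = |S|^2 is not.
print(is_submodular(lambda S: min(len(S), 2), {1, 2, 3}))   # True
print(is_submodular(lambda S: len(S) ** 2, {1, 2, 3}))      # False
```

The small tolerance guards against floating-point noise when the rank values are not exactly representable.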
A set function $f$: $2^{\mathcal{N}} \rightarrow \mathbb{R}$ is normalized if $f(\emptyset) = 0$; monotone (non-decreasing) if $f(S) \leq f(T), \forall S \subseteq T$; and integer-valued if $f(S) \in \mathbb{Z}, \forall S \subseteq \mathcal{N}$.
\begin{definition}
Let
\begin{equation*}
P_f = \{{\bf x} \in \mathbb{R}_{+}^N|\sum_{n \in S}x_n \leq f(S), \forall S \subseteq \mathcal{N}\}.
\end{equation*}
If the set function $f$ is normalized, monotone and submodular, then $P_f$ is a {\it polymatroid} with rank function $f$. \end{definition}
If the rank function of a polymatroid is integer-valued, then all of its corner points are integer-valued as well. Further, it is sometimes convenient to regard an \textit{integral polymatroid}, i.e., a polymatroid with an integer-valued rank function, as consisting only of the points in $\mathbb N^N$ instead of all points in $\mathbb R_+^N$ \cite{edmonds1970submodular}. We adopt this view here when discussing integral polymatroids.
The following are two well known properties of polymatroids. \begin{lemma}\label{lemma:1}
Given a polymatroid $P_f$, if ${\bf x^*} \in P_f$, then the polyhedron
$$
P_f -{\bf x}^* := \{{\bf x} \in \mathbb{R}_{+}^N | \sum_{n\in S} x_n \leq f(S)-\sum_{n \in S} x^*_n, \forall S \subseteq \mathcal{N}\}
$$
is also a polymatroid with rank function $\tilde{f}$, where
$$
\tilde{f}(S) = \min_{T:S\subseteq T}(f(T)-\sum_{n \in T} x^*_n).
$$ \end{lemma}
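Lemma \ref{lemma:1} can be checked numerically on a toy instance. The sketch below (a hypothetical two-agent rank function chosen purely for illustration) evaluates $\tilde{f}(S) = \min_{T \supseteq S}\big(f(T) - \sum_{n \in T} x^*_n\big)$ by enumerating supersets:

```python
from itertools import combinations

def contracted_rank(f, x_star, ground):
    """Rank function of P_f - x* from Lemma 1: minimize
    f(T) - sum_{n in T} x*_n over all supersets T of S."""
    def f_tilde(S):
        rest = sorted(ground - S)
        return min(f(S | frozenset(extra)) - sum(x_star[n] for n in (S | frozenset(extra)))
                   for r in range(len(rest) + 1)
                   for extra in combinations(rest, r))
    return f_tilde

# Hypothetical instance: f({1}) = f({2}) = 0.6, f({1,2}) = 1, contracted by x* = (0.5, 0).
f = lambda S: [0.0, 0.6, 1.0][len(S)]
ft = contracted_rank(f, {1: 0.5, 2: 0.0}, frozenset({1, 2}))
print(ft(frozenset({1})), ft(frozenset({2})))   # ~0.1 and 0.5
```

Note that $\tilde{f}(\{2\})$ is $0.5$ rather than $0.6$: the minimum is attained at the superset $T=\{1,2\}$, illustrating why the minimization over supersets is needed.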
\begin{lemma}\label{lemma:2.1}
For any polymatroid $P_f$, there exists an ${\bf x} \in P_f$ such that $\sum_{n=1}^N x_n =f(\mathcal{N})$. \end{lemma}
The second lemma shows that the sum constraint over all $\mathcal N$ is always tight for some point within the polymatroid. The set of all such points is referred to as the polymatroid's dominant face, which is formally defined next. \begin{definition}
The \textit{dominant face} of a polymatroid $P_f$, denoted by $\mathcal{D}(P_f)$, is defined as:
$$
\mathcal{D}(P_f) = \{{\bf x} \in \mathbb{R}_{+}^N| {\bf x} \in P_f, \sum_{n=1}^N x_n = f(\mathcal{N}) \}.
$$ \end{definition}
As defined in \cite{salimi2015representability}, the minimum distance of a set function $f$ over a ground set $\mathcal{N}$ is: \begin{equation}
\bigtriangleup f := \min_{S\subseteq \mathcal{N}} \ \min_{\substack{i,j \in \mathcal{N} \setminus S \\ i \neq j}} |D_f(S \cup \{i\}, S\cup\{j\})|, \end{equation} where the distance $D_f(S,T)$ between two arbitrary subsets $S$ and $T$ is defined as: \begin{equation}
D_f(S,T) = f(S) + f(T) -f(S \cap T) -f(S\cup T). \end{equation} This can be thought of as an indication of how strictly submodular the set function is. For a submodular function $f$, the distance between two arbitrary subsets is always non-negative. Furthermore, if the set function $f$ is strictly submodular, then the minimum distance $\bigtriangleup f$ is strictly greater than 0.
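The minimum distance is easy to compute by enumeration on small ground sets. As an illustration (a sketch of ours, using the rank function of Example 2 later in the paper), $\bigtriangleup f$ can be evaluated as:

```python
from itertools import combinations

def min_distance(f, ground):
    """Minimum distance of a set function f: the smallest |D_f(S+{i}, S+{j})|
    over all S in the ground set and distinct i, j outside S."""
    elems = sorted(ground)
    best = float("inf")
    for r in range(len(elems) - 1):          # S must leave two elements free
        for S in combinations(elems, r):
            S = frozenset(S)
            rest = [e for e in elems if e not in S]
            for i, j in combinations(rest, 2):
                d = abs(f(S | {i}) + f(S | {j}) - f(S) - f(S | {i, j}))
                best = min(best, d)
    return best

# Rank function of Example 2: singletons 0.7, pairs 0.9, full set 1.0.
f2 = lambda S: [0.0, 0.7, 0.9, 1.0][len(S)]
print(min_distance(f2, {1, 2, 3}))   # ~0.1
```

For this rank function the binding pair is $S=\{k\}$ with the two remaining agents, giving $|0.9+0.9-0.7-1.0| = 0.1$.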
Following \cite{salimi2015representability}, we have the following theorem: \begin{theorem}
Let $P_f$ be a polymatroid with $\bigtriangleup f = 0$. Then for any $\eta > 0$, there exists another polymatroid $P_{\tilde{f}}$ such that $\bigtriangleup \tilde{f} > 0$ and $f(S) -\tilde{f}(S) \leq \eta, \forall S \subseteq \mathcal{N}$. \end{theorem} \begin{proof}
Let $g$ be a normalized set function defined as $g(S) = |S^c|^2$ for any nonempty set $S$. Then let $\tilde{f}(\emptyset) = 0$ and $\tilde{f}(S) = f(S) - \gamma g(S)$ for any nonempty set $S$. With a sufficiently small value of $\gamma$, we can ensure that the difference between $f$ and $\tilde{f}$ is at most $\eta$. Moreover, since $-g(S)$ is submodular with a nonnegative minimum distance, one can show that $\tilde{f}$ is a rank function with $\bigtriangleup \tilde{f} > 0$. The detailed proof is omitted due to space considerations. \end{proof}
The above theorem shows that given any polymatroid, we can find another polymatroid with positive minimum distance that lies inside the given one with an arbitrarily small discrepancy. For this reason, in the remainder of the paper we only consider polymatroids whose set functions have positive minimum distance.\footnote{If this is not the case, then we can use the above construction, and as will be evident it would have negligible impact on the resulting efficiency of our mechanisms.}
\subsection{Social Welfare Maximization Problem} We consider a model where a set of $N$ agents are competing for a resource constrained by a polymatroid environment. Denote the set of agents as $\mathcal{N}= \{1,\cdots, N\}$. A feasible allocation ${\bf x} =\{x_n\}_{n=1}^N$ is then any point that lies in a polymatroid associated with a set function $f$ over $\mathcal{N}$. In particular, the agents in any subset $S$ can obtain a total allocation of at most $f(S)$, i.e., $\sum_{n \in S}x_n \leq f(S)$. Without loss of generality, we assume that $f(\mathcal{N}) = 1$. Moreover, each individual's allocation must be non-negative and hence ${\bf x} \in \mathbb{R}_{+}^N$.
Each agent is endowed with a utility function $u_n(x_n)$, which only depends on the agent's final allocation. Throughout this paper, these utility functions satisfy the following assumptions:
\begin{assumption} Each agent's utility function is only known by that agent, i.e., it is private information. \end{assumption}
\begin{assumption} The utility functions $u_n$ are nonnegative, concave, and strictly increasing. \end{assumption}
\begin{assumption} For all agents, the marginal utility per unit resource is bounded, that is, there exist positive constants $\alpha$ and $\beta$ such that for any agent $n$, $\beta < u_n'< \alpha$. These bounds are public information. \end{assumption}
\begin{assumption} The utility functions are in monetary units and each agent's quasilinear payoff is defined as $u_n(x_n)-p_n$, where $p_n$ is the payment for the allocated resource. \end{assumption}
Assumption 1 gives rise to the incentive issue: since utilities are private information, each agent may lie about them. Assumptions 2 and 3 state that agents have ``elastic" resource requirements and that the unit value of the resource lies within a known range, which is reasonable in many practical settings. The last assumption is widely adopted in pricing mechanisms; the quasilinear structure makes each agent's strategy and the corresponding final allocation depend on the payment rule of the game as well as on the utility functions.
Under this setting, the goal of the allocator is to make the best use of the resource, i.e., to maximize the social welfare. This is given by solving the following optimization problem: \begin{align}\label{main:1} &\max \ \ \ \ \sum_{n=1}^N u_{n}(x_n)\\
\nonumber &\text{subject to} \ \ {\bf x}\in P_f= \{{\bf x} \in \mathbb{R}_{+}^N | \sum_{n\in S} x_n \leq f(S), \forall S \subseteq \mathcal{N}\}. \end{align}
Since the total sum constraint can always be made tight within a polymatroid (Lemma \ref{lemma:2.1}), we have the following result from \cite{groenevelt1991two}. We give a proof of this for completeness. \begin{proposition}\label{prop:1} The optimal solution to optimization problem (\ref{main:1}) lies on the dominant face of the polymatroid. \end{proposition} \begin{proof} We show this by means of contradiction. Suppose $\bf x^*$ is the optimal solution, but $\sum_{n=1}^N x^*_n < f(\mathcal{N})$. Then we can construct a new polymatroid $P_g$ by subtracting the optimal solution $\bf x^*$: \begin{equation}
P_g = \{{\bf x} \in \mathbb{R}_{+}^N | \sum_{n\in S} x_n \leq f(S)-\sum_{n \in S} x^*_n, \forall S \subseteq \mathcal{N}\}. \end{equation} The new region is defined by $g(S) = f(S)-\sum_{n \in S} x^*_n$, and by Lemma \ref{lemma:1} it is a polymatroid. Since $f(\mathcal{N}) -\sum_{n=1}^N x^*_n > 0 $, the polymatroid $P_g$ is not empty and so we can find a nonzero vector $\bf x'$ inside it. Moreover, each utility function is strictly increasing, hence $\sum_{n=1}^N u_n(x^*_n+x_n') >\sum_{n=1}^N u_n(x^*_n)$, which is a contradiction. This finishes the proof. \end{proof}
\subsection{Vickrey-Clarke-Groves (VCG) Mechanism} Next we review the VCG mechanism. In this mechanism, the allocator asks agents to report their utility functions and determines the allocation which maximizes the social welfare under the reported utilities. The VCG payment for a given agent is then given by the difference between the total utility the other agents would have received if the given agent were not present and the total utility that they received, based on the reported utilities.\footnote{This is the most common form of VCG payments, known as the Clarke pivot rule; more generally, changing agent $n$'s payment by any function that does not depend on the reported utility of agent $n$ is also valid.} In other words, each agent's payment equals the increase in the others' total utility if she were absent.
\begin{definition}\cite{nisan2007algorithmic} A mechanism where players have private information is said to be \textit{dominant-strategy incentive-compatible} (DSIC) if it is a weakly-dominant strategy for every player to reveal his/her private information. \end{definition}
The VCG mechanism is DSIC, and given that agents follow the dominant strategy of reporting their true utility, the mechanism will be efficient, i.e., it will maximize the social welfare.
\subsection{Quantized VCG Mechanism } Given a divisible resource, the utility function an agent submits in the VCG mechanism can be an arbitrary real-valued function of the amount of resource obtained. As we have discussed, this can result in excessive communication overhead. Next we discuss the approach in \cite{haoge2018} for limiting this overhead via quantization, when the resource is a single divisible resource, i.e., the resource constraint is given by $\sum_{n\in\mathcal N} x_n \leq 1$. In this approach, the allocator partitions the whole resource equally into $M$ divisions and restricts each agent to receiving an integer number of divisions. Essentially, this mechanism then runs a VCG mechanism for the quantized resource. Under this setting, each agent $n$ only needs to submit $M$ bids: $v_{n1}, \cdots, v_{nM}$, where $v_{nm}$ indicates agent $n$'s marginal utility from getting an $m$th additional unit of resource. This reduces the information an agent needs to submit from specifying an infinite-dimensional function to specifying an $M$-dimensional vector. For small values of $M$, this quantization also simplifies the optimization faced by the resource allocator, as it can now determine the socially optimal allocation via a greedy algorithm instead of needing to solve a convex optimization problem.\footnote{Also note that using a greedy algorithm gives the exact social optimum for the quantized problem, while when solving the convex optimization, the solution will typically only be accurate to within some $\epsilon$ specified by the algorithm used. For large $\epsilon$, this in turn could impact the incentive guarantee of VCG.} Finally, since this is still a VCG mechanism, it is still DSIC and implements the socially optimal outcome with the quantized resource. However, there is a loss in efficiency due to the quantization of the resource. In \cite{haoge2018}, tight bounds on this loss are given. 
Specifically, the efficiency (defined as the ratio of the welfare with $M$ divisions to the optimal welfare without quantization) is shown to always be no less than \begin{equation}\label{eq:old} \frac{M}{M+N-1} \end{equation} where $N$ is the number of agents. Here we extend this to polymatroid environments.
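The single-resource quantized mechanism described above can be sketched as follows. This is an illustrative Python implementation of ours, not the authors' code; bids are assumed non-increasing in $m$, as implied by concavity, and the bid values are hypothetical:

```python
import heapq

def greedy_allocate(bids, M):
    """Assign M equal divisions greedily by highest marginal bid.
    bids[n] is agent n's non-increasing list of M marginal values."""
    counts = [0] * len(bids)
    heap = [(-bids[n][0], n) for n in range(len(bids))]
    heapq.heapify(heap)
    welfare = 0.0
    for _ in range(M):
        v, n = heapq.heappop(heap)
        welfare -= v                      # v is the negated marginal value
        counts[n] += 1
        if counts[n] < M:
            heapq.heappush(heap, (-bids[n][counts[n]], n))
    return counts, welfare

def clarke_payments(bids, M):
    """Clarke pivot rule: others' welfare without agent n minus with agent n."""
    counts, _ = greedy_allocate(bids, M)
    pays = []
    for n in range(len(bids)):
        _, w_without = greedy_allocate(bids[:n] + bids[n + 1:], M)
        w_with = sum(sum(bids[m][:counts[m]]) for m in range(len(bids)) if m != n)
        pays.append(w_without - w_with)
    return pays

bids = [[4.0, 3.0, 2.0, 1.0], [3.5, 2.5, 1.5, 0.5]]   # hypothetical reported marginals
print(greedy_allocate(bids, 4))    # ([2, 2], 13.0)
print(clarke_payments(bids, 4))    # [2.0, 3.0]
```

Since the marginal bids are non-increasing, the greedy heap always pops the globally largest remaining marginal, which is what makes the greedy allocation exactly optimal for the quantized problem.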
\section{Quantizing VCG for Polymatroids } In this section, we first give two toy examples showing that blindly partitioning the resource into divisions may result in a large efficiency loss or in the failure of the greedy algorithm for determining the assignment.
\begin{example}
Consider two agents competing for the resource, where the feasible region is a polymatroid $P_f = \{(x_1,x_2) \in \mathbb{R}^2_+| x_1 \leq 0.6 , x_2 \leq 0.6, x_1 + x_2 \leq 1\}$, their utilities are $u_1(x_1) = 2x_1$ and $u_2(x_2) = x_2$, respectively. \end{example}
For the setting in Example 1, suppose we partition the total resource into 3 divisions equally ($M = 3$), so that the amount of each division is $\frac{1}{3}$. The feasible region for the number of divisions each agent can get then becomes:\footnote{Here we are assuming that any quantized allocation has to respect the original constraints in $P_f$; hence, for example, agent 1 cannot receive two divisions since $2/3 > 0.6$.} $$
P_{\tilde{f}} = \{(y_1,y_2) \in \mathbb{N}^2| y_1 \leq 1 , y_2 \leq 1, y_1 + y_2 \leq 3\}. $$ Observe that here the constraint function is no longer submodular and the sum constraint in $P_{\tilde{f}}$ is never tight. In particular, note that the maximum number of divisions allocated is now 2 ($y_1 +y_2 \leq 2$), which means that $1/3$ of the total resource is never allocated. For the given utilities this results in an efficiency of $0.625$, which is lower than the value of $0.75$ given by the bound in (\ref{eq:old}).\footnote{Moreover, this poor efficiency is obtained with linear utilities, which lead to no loss in the setting in \cite{haoge2018}.} By changing the parameters in this example, even lower efficiencies are possible.
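Example 1 can be verified by brute force. The sketch below (our own check) enumerates the quantized allocations for $M = 3$ and compares the best quantized welfare with the unquantized optimum $u(0.6, 0.4) = 1.6$:

```python
from itertools import product

# Quantized allocations must respect the original constraints of P_f:
# x1 <= 0.6, x2 <= 0.6, x1 + x2 <= 1, with x_i = y_i / 3 for integers y_i.
u = lambda x1, x2: 2 * x1 + x2
best_quantized = max(
    u(y1 / 3, y2 / 3)
    for y1, y2 in product(range(4), repeat=2)
    if y1 / 3 <= 0.6 and y2 / 3 <= 0.6 and (y1 + y2) / 3 <= 1)
best = u(0.6, 0.4)
print(best_quantized / best)   # efficiency ~0.625
```

The best quantized point is $(y_1, y_2) = (1, 1)$ with welfare $1$, against the unquantized optimum of $1.6$.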
\begin{example}
Consider the following polymatroid as the feasible region: $P_f =\{ {\bf x} \in \mathbb{R}^3_+| x_1 \leq 0.7, x_2\leq 0.7, x_3\leq 0.7, x_1 +x_2 \leq 0.9, x_2+x_3 \leq 0.9, x_1+x_3 \leq 0.9, x_1+x_2+x_3 \leq 1 \}$, the utility functions for the agents are $u_1(x_1) = 1.2 x_1$, $u_2(x_2) = 1.1x_2$, $u_3(x_3) = x_3$. \end{example}
Similarly, if we partition the resource in Example 2 into 3 divisions, then the feasible region for the number of divisions each agent can get is: \begin{align*}
P_{\tilde{f}} = & \{(y_1,y_2, y_3) \in \mathbb{N}^3| y_1 \leq 2 , y_2 \leq 2,y_3 \leq 2, y_1 + y_2 \leq 2,\\
& y_1+y_3 \leq 2, y_2+y_3 \leq 2, y_1+y_2+y_3 \leq 3\}. \end{align*} In this case, ${\bf y} = (1,1,1)$ is a feasible solution and the corresponding allocation is ${\bf x} = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$; all of the resource is allocated and the social welfare is $1.1$. However, if we use the greedy algorithm to find the optimal allocation given the true marginal valuations, the resulting solution and corresponding allocation are ${\bf y}^* = (2,0,0)$ and ${\bf x}^* = (\frac{2}{3},0,0)$, respectively. This results in a social welfare of $0.8$, which is less than the value of $1.1$ obtained by the feasible solution $(1,1,1)$. In other words, the greedy algorithm no longer gives the optimal quantized allocation. The issue again is that $P_{\tilde{f}}$ is not a polymatroid.
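The failure of the greedy algorithm in Example 2 can be reproduced directly. In this sketch of ours, `feasible` encodes the quantized region $P_{\tilde f}$ above:

```python
from itertools import product

rates = (1.2, 1.1, 1.0)   # marginal utilities per unit of resource in Example 2
def feasible(y):
    return (all(c <= 2 for c in y) and y[0] + y[1] <= 2 and
            y[0] + y[2] <= 2 and y[1] + y[2] <= 2 and sum(y) <= 3)
def welfare(y):
    return sum(r * c / 3 for r, c in zip(rates, y))

# Greedy: repeatedly give one division to the highest-rate agent that stays feasible.
y = [0, 0, 0]
while True:
    cands = [n for n in range(3) if feasible(y[:n] + [y[n] + 1] + y[n + 1:])]
    if not cands:
        break
    y[max(cands, key=lambda n: rates[n])] += 1

best = max((p for p in product(range(3), repeat=3) if feasible(p)), key=welfare)
print(y, round(welfare(y), 3))        # [2, 0, 0] 0.8  -- greedy gets stuck
print(best, round(welfare(best), 3))  # (1, 1, 1) 1.1  -- the true quantized optimum
```

After two divisions go to agent 1, every pairwise constraint involving agent 1 is tight, so no further division is feasible and the greedy run terminates early.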
These two examples illustrate that in a polymatroid environment more care is needed when quantizing the resource. In both cases, the issue is that the region resulting from quantization is no longer a polymatroid. Next we discuss an approach that avoids this by ensuring that the region $P_{\tilde{f}}$ obtained after quantization is always an integral polymatroid.
\subsection{Integral Polymatroid Construction} Given a real-valued polymatroid $P_f$, we next show that if a large enough number of partitions is used, then the issues in the previous two examples do not arise. The construction for doing this is given in Algorithm \ref{integer-valued}. This algorithm specifies an integral set $P_{\tilde{f}}$, which constrains the number of quantized partitions each agent can receive.
\begin{algorithm}[h]
\caption{Resource Partition}
\begin{algorithmic}\label{integer-valued}
\STATE \textbf{Inputs:} Polymatroid $P_f$ associated with set function $f$, which has minimum distance $\bigtriangleup f$.
\STATE Choose an integer $M$ such that $M \geq \frac{2}{\bigtriangleup f}$.
\FOR {$S \subseteq \mathcal{N}$ }
\STATE {$\tilde{f}(S) = \lfloor f(S)M \rfloor$;}
\ENDFOR
\end{algorithmic} \end{algorithm}
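To make the construction concrete, the following Python sketch (the function names and the brute-force subset enumeration are ours, purely for illustration) applies Algorithm \ref{integer-valued} to the rank function of Example 2, whose value depends only on the cardinality of $S$:

```python
from itertools import combinations
from math import floor, ceil

def subsets(n_agents):
    """All subsets of {0, ..., n_agents - 1} as frozensets."""
    items = range(n_agents)
    return [frozenset(c) for r in range(n_agents + 1)
            for c in combinations(items, r)]

def quantize(f, n_agents, min_dist):
    """Algorithm 1 (sketch): choose M >= 2 / min_dist and set
    f_tilde(S) = floor(f(S) * M) for every subset S."""
    M = ceil(2.0 / min_dist)
    return M, {S: floor(f(S) * M) for S in subsets(n_agents)}

# rank function read off the constraints of Example 2 in the text
def f(S):
    return [0.0, 0.7, 0.9, 1.0][len(S)]

# minimum distance here: f({i}) + f({j}) - f({i,j}) - f({}) = 0.5
M, f_tilde = quantize(f, 3, 0.5)
assert M == 4
assert f_tilde[frozenset({0})] == 2        # floor(0.7 * 4), and >= 2 (Prop. 2)
assert f_tilde[frozenset({0, 1})] == 3     # floor(0.9 * 4)
assert f_tilde[frozenset({0, 1, 2})] == M  # f_tilde(N) = M
```

With $\bigtriangleup f = 0.5$ this yields $M = 4$, so each singleton constraint becomes $2$ and the total constraint becomes $M = 4$, consistent with Lemma 1 and Proposition \ref{prop:2}.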
We next give several properties of the set $P_{\tilde{f}}$ given by Algorithm 1. First, it is straightforward to see the following lemma, which states that any constraint in $P_{\tilde{f}}$ is within $\frac{1}{M}$ of the corresponding constraint in $P_f$ and that the sum constraints are the same. \begin{lemma}
For any $S \subseteq \mathcal{N}$, $0 \leq f(S) - \frac{\tilde{f}(S)}{M} < \frac{1}{M}$; further, $\frac{\tilde{f}(\mathcal N)}{M} = f(\mathcal N)=1$. \end{lemma}
Next we show that the constraints in $P_{\tilde{f}}$ always allow any set of users to have at least two partitions. (Note this is not the case for Example 1 in the previous section when $M=3$).
\begin{proposition}\label{prop:2}
For any non-empty subset $S \subseteq \mathcal{N}$, $\tilde{f}(S) \geq 2$. \end{proposition} \begin{proof}
First, by the definition of the minimum distance, for any two agents $i$ and $j$ we have:
$$
\bigtriangleup f \leq f(\{i\}) + f(\{j\}) -f(\emptyset) -f(\{i,j\})\leq f(\{i\}).
$$
Hence, for any nonempty subset $S$ we can find $i\in S$, and since $M \geq \frac{2}{\bigtriangleup f}$,
\begin{equation*}
\tilde{f}(S) = \lfloor f(S)M\rfloor \geq \lfloor f(\{i\})M\rfloor \geq \Big\lfloor \bigtriangleup f\cdot\frac{2}{\bigtriangleup f}\Big\rfloor = 2.
\end{equation*} \end{proof}
Finally we show that $P_{\tilde{f}}$ is an integral polymatroid.
\begin{theorem}
The polyhedron, $P_{\tilde{f}}$, associated with $\tilde{f}$ given by Algorithm 1 is an integral polymatroid. \end{theorem} \begin{proof}
First, we know $\tilde{f}(\emptyset) = \lfloor f(\emptyset )M\rfloor = 0$, so $\tilde{f}$ is normalized.
For monotonicity, consider two sets $S$ and $T$ such that $S \subseteq T$, since $f$ is monotonically increasing, then
\begin{equation}
\tilde{f}(S) = \lfloor f(S )M\rfloor \leq \lfloor f(T )M\rfloor = \tilde{f}(T).
\end{equation}
Moreover, we have
\begin{align}
\nonumber &\tilde{f}(S+ \{e_1\})+\tilde{f}(S+\{e_2\}) \\
\nonumber\geq &Mf(S+ \{e_1\})-1 + Mf(S+\{e_2\}) -1\\
\nonumber\geq & M(\bigtriangleup f +f(S) +f(S+\{e_1\} +\{e_2\} )) -2\\
\nonumber\geq & M\bigtriangleup f -2 +\tilde{f}(S) + \tilde{f}(S+\{e_1\} +\{e_2\} )\\
\geq & \tilde{f}(S) + \tilde{f}(S+\{e_1\} +\{e_2\} ),
\end{align}
where we used the definition of minimum distance and the fact that $M\geq \frac{2}{\bigtriangleup f} $, respectively, in the last two steps. Therefore, $\tilde{f}$ is submodular and $P_{\tilde{f}}$ is an integral polymatroid. \end{proof}
Since $P_{\tilde{f}}$ is a polymatroid, there always exists an allocation on its dominant face, and by construction $\tilde{f}(\mathcal N) = M$, so any such allocation will utilize all of the available resource.
\subsection{Quantized VCG Mechanism} Given the feasible region $P_f$, we define a {\it quantized VCG mechanism} as a mechanism in which the agents are only allocated an integer number of divisions, where the set of allocations must lie in the integral polymatroid $P_{\tilde{f}}$ constructed in Algorithm 1.
Under this setting, the optimal allocation is ${\bf x}^* = \frac{1}{M} {\bf y}^* $, where ${\bf y}^*$ is the optimal allocation of divisions given by the following integer optimization problem: \begin{align}\label{5}
&\text{maximize} \ \ \ \ \sum_{n=1}^N u_{n}(\frac{y_n}{M})\\
\nonumber &\text{subject to} \ \ \ \ {\bf y} \in P_{\tilde{f}}. \end{align} \begin{corollary}\label{lemma:2}
The maximizer ${\bf y^*}$ lies on the dominant face of $P_{\tilde{f}}$. \end{corollary}
This is a straightforward generalization of Proposition 1 to the discrete setting and can be proven in a similar manner. Note also that, by the discussion above, any such solution utilizes all of the resource.
To determine the allocation, the resource allocator essentially runs a VCG mechanism for the possible allocations in $P_{\tilde{f}}$. This requires each agent to submit its valuation for each possible bundle of units it can obtain. Since each agent $n$ can get at most $\tilde{f}(\{n\})$ units, agent $n$ only needs to report $\tilde{f}(\{n\})$ values: $u_n(\frac{y_n}{M})$, $1\leq y_n \leq \tilde{f}(\{n\})$. Equivalently, agent $n$ can submit the marginal utility instead. Note that the range of the marginal values will be smaller than the range of the actual utility values and so in some sense reporting the marginal values also reduces the communication cost. This will be made more precise in the next section, when we also consider quantizing the bids. The marginal utility is denoted by $V_n = \{v_{n1},\cdots, v_{n\tilde{f}(\{n\})} \}$, where \begin{equation}
v_{nm} = u_n(\frac{m}{M}) - u_n(\frac{m-1}{M}), \ \ \ \ 1 \leq m \leq \tilde{f}(\{n\}). \end{equation} Furthermore, under Assumption 3, all utility functions have bounded marginal valuations and hence each bid $v_{nm}$ is lower bounded by $\frac{\beta}{M}$. To further reduce the range of bids, we can require agent $n$ to submit the surrogate marginal utility $\hat{v}_{nm} = v_{nm} - \frac{\beta}{M}$ so that: \begin{equation}
u_n(\frac{y_n}{M}) = \sum_{m=1}^{y_n} \hat{v}_{nm} + \frac{\beta y_n}{M}. \end{equation} Notice that both $v_{nm} $ and $\hat{v}_{nm} $ are non-increasing in $m$ due to the concavity of $u_n$.
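As a small numerical illustration of these bid vectors (our example: the utility $u(x)=\sqrt{x}+\beta x$ and the values of $M$ and $\beta$ are assumptions for the sketch, not from the text), the code below builds $v_{nm}$ and $\hat{v}_{nm}$ for one agent and checks the two properties just noted:

```python
from math import sqrt, isclose

# illustrative values (our assumption): M divisions, marginal lower bound beta
M, beta = 10, 0.1
u = lambda x: sqrt(x) + beta * x          # concave, with u'(x) >= beta

# marginal bids v_{nm} and surrogate bids v_hat_{nm} = v_{nm} - beta/M
v = [u(m / M) - u((m - 1) / M) for m in range(1, M + 1)]
v_hat = [vm - beta / M for vm in v]

# concavity makes the bid sequence non-increasing ...
assert all(v[m] >= v[m + 1] for m in range(M - 1))
# ... and the utility is recovered exactly from the surrogate bids:
for y in range(M + 1):
    assert isclose(u(y / M), sum(v_hat[:y]) + beta * y / M)
```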
The detailed mechanism is as follows: \begin{itemize}
\item{Determine the number of partitions and partition the resource into $M$ divisions equally using Algorithm \ref{integer-valued}. The feasible region for number of units is $P_{\tilde{f}}$}.
\item{Solicit and accept sealed surrogate marginal utility vectors $\hat{{V}}_n = \{\hat{v}_{n1}, \cdots, \hat{v}_{n\tilde{f}(\{n\})} \}$}.
\item{Determine the allocation of divisions that optimizes (\ref{5}), where the objective is the utility given by the submitted bids $\hat{V}_n$. The final allocation agent $n$ receives is $y^*_n$. The sum of the accepted bids is denoted by
\begin{equation}
\text{OPT}( \hat{V}) = \sum_{n=1}^N \sum_{m=1}^{y_n^*} \hat{v}_{nm}.
\end{equation}
}
\item{Set price $ p_n$ for agent $n$ as:
\begin{align*}
p_n=\text{OPT}(0,\hat{V}_{-n})-(\text{OPT}({\hat{V}})-\sum_{m=1}^{y^*_n}\hat{v}_{nm}) + \frac{y^*_n\beta}{M}.
\end{align*}} \end{itemize} We next give a greedy algorithm in Algorithm \ref{greedy} to determine the allocation in (\ref{5}). \begin{algorithm}[h]
\caption{Greedy allocation algorithm}
\begin{algorithmic}\label{greedy}
\STATE \textbf{Inputs:} Integer-valued polymatroid $P_{\tilde{f}}$; $N$ descending lists $\hat{{V}}_1, \cdots,\hat{{V}}_N$, where the first element of $\hat{{ V}}_n$ is denoted by $\hat{{ V}}_n[0]$.
\STATE \textbf{Initialization:} Set ${\bf y} = {\bf 0} $, $\mathcal{N} = \{1,\cdots,N\}, $$J({\bf y}) = \mathcal{N}$.
\WHILE {$J({\bf y})$ is not empty}
\STATE {$n = \arg\max_{i \in \mathcal{N}}\{\hat{{ V}}_i[0]\}$ (break ties arbitrarily);}
\STATE {$y_n \leftarrow y_n + 1$;}
\STATE {Remove the first element from $\hat{{ V}}_n$;}
\IF {$\hat{V}_n$ is empty}
\STATE {Remove $n$ from $\mathcal{N}$;}
\ENDIF
\STATE {$J({\bf y}) = \{i| (y_i+1, y_{-i}) \in P_{\tilde{f}}, i\in \mathcal{N} \}$;}
\ENDWHILE
\end{algorithmic} \end{algorithm} With this algorithm, the resource is allocated unit by unit. On each pass, the algorithm maintains a list $J(\mathbf{y})$ of feasible agents who can still receive an additional unit of the resource. It then assigns the next unit to the agent in this set with the largest surrogate marginal utility.
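This greedy allocation can be sketched in a few lines of Python (ours; the feasibility test simply checks every constraint of $P_{\tilde{f}}$ by brute force, which is only sensible for small $N$):

```python
def greedy_allocate(f_tilde, bids):
    """Sketch of Algorithm 2: hand out units one at a time, always to the
    feasible agent whose next (largest remaining) surrogate bid is highest."""
    N = len(bids)
    y = [0] * N
    queues = [list(b) for b in bids]       # each agent's descending bid list
    def can_take(i):
        # (y_i + 1, y_{-i}) must satisfy every constraint of P_f_tilde
        return all(sum(y[j] for j in S) + (1 if i in S else 0) <= cap
                   for S, cap in f_tilde.items())
    while True:
        J = [i for i in range(N) if queues[i] and can_take(i)]
        if not J:
            break
        n = max(J, key=lambda i: queues[i][0])  # ties broken arbitrarily
        y[n] += 1
        queues[n].pop(0)
    return y

# Example 2 quantized with M = 4 (singletons = 2, pairs = 3, total = 4);
# bids are the per-unit marginals of u_1 = 1.2x, u_2 = 1.1x, u_3 = x
f_tilde = {frozenset({i}): 2 for i in range(3)}
f_tilde.update({frozenset({0, 1}): 3, frozenset({0, 2}): 3,
                frozenset({1, 2}): 3, frozenset({0, 1, 2}): 4})
y = greedy_allocate(f_tilde, [[0.3, 0.3], [0.275, 0.275], [0.25, 0.25]])
assert y == [2, 1, 1]      # all M = 4 units placed, i.e. on the dominant face
```

On this quantized version of Example 2, the greedy algorithm returns ${\bf y}=(2,1,1)$, i.e., ${\bf x}=(\frac{1}{2},\frac{1}{4},\frac{1}{4})$, and all four units are placed.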
Based on \cite{edmonds1971matroids,rado1957note,edmonds2003submodular}, \cite{glebov1973one} establishes the following theorem: \begin{theorem} \label{thm:3}
Suppose $\mathcal{J} \subseteq \mathbb{N}^N$ is a down-monotone and finite family of integer nonnegative vectors. The greedy algorithm optimizes a separable concave function over $\mathcal J$ if and only if $\mathcal J$ is an integral polymatroid. \end{theorem}
Using this result, we can show the following theorem: \begin{theorem}\label{thm:4}
Given the surrogate marginal utility vectors $\hat{V}$, consider any set of concave and non-decreasing utility functions $\bf {u}$ such that for each agent $n$, $u_n(\frac{y_n}{M}) = \sum_{m=1}^{y_n} \hat{v}_{nm} + \frac{\beta y_n}{M}$. Then ${\bf y}^*({\hat{ V}})$ given by Algorithm \ref{greedy} is the optimizer of problem (\ref{5}) and the optimal value is $\text{OPT}( \hat{V})+ \beta$, i.e.,
\begin{equation}\label{eq.17}
{\bf y}^*({\hat{V}}) =\arg\max_{{\bf y} \in P_{\tilde{f}}}\sum_{n=1}^N {u}_{n}(\frac{y_n}{M}),
\end{equation}
\begin{equation}\label{eq:14}
\max_{{\bf y} \in P_{\tilde{f}}}\sum_{n=1}^N {u}_{n}(\frac{y_n}{M}) =\sum_{n=1}^N {u}_{n}(\frac{y^*_n({\hat{V}})}{M}) =\text{OPT}( \hat{V})+ \beta .
\end{equation} \end{theorem} \begin{proof}
Combining Theorem \ref{thm:3} and Corollary \ref{lemma:2} yields (\ref{eq.17}). We can show (\ref{eq:14}) according to the definition of $\text{OPT}( \hat{V})$. The detailed proof is omitted here.
\iffalse
Notice that we run the greedy algorithm for the surrogate marginal utility. From Theorem \ref{thm:3}, we have
\begin{equation}\label{eq.19}
{\bf y}^*({\hat{V}}) = \arg\max_{{\bf y} \in \mathcal{D}(P_{\tilde{f}})}\sum_{n=1}^N {u}_{n}(\frac{y_n}{M}) -\frac{y_n \beta}{M}.
\end{equation}
Since $\sum_{n=1}^N y_i =M$ for any ${\bf y} \in \mathcal{D}(P_{\tilde{f}})$, therefore
\begin{equation}\label{eq.20}
\max_{{\bf y} \in \mathcal{D}(P_{\tilde{f}})}\sum_{n=1}^N {u}_{n}(\frac{y_n}{M}) -\frac{y_n \beta}{M} \equiv\max_{{\bf y} \in \mathcal{D}(P_{\tilde{f}})}\sum_{n=1}^N {u}_{n}(\frac{y_n}{M}) -\beta.
\end{equation}
Furthermore, since $P_f$ is a polymatroid and from Corollary \ref{lemma:2} we have
\begin{equation}\label{eq.21}
\max_{{\bf y} \in \mathcal{D}(P_{\tilde{f}})}\sum_{n=1}^N {u}_{n}(\frac{y_n}{M}) \equiv \max_{{\bf y} \in P_{\tilde{f}}}\sum_{n=1}^N {u}_{n}(\frac{y_n}{M}).
\end{equation}
Combining (\ref{eq.19})-(\ref{eq.21}) yields (\ref{eq.17}).
According to the definition of $\text{OPT}( \hat{V})$, the social welfare is:
\begin{equation}
\sum_{n=1}^N {u}_{n}(\frac{y^*_n({\hat{V}})}{M}) =\text{OPT}( \hat{V})+ \sum_{n=1}^N\frac{\beta y^*_n}{M} =\text{OPT}( \hat{V})+ \beta,
\end{equation}
which finishes the proof.
\fi \end{proof}
This theorem shows that the allocation determined by the greedy algorithm is the optimal solution to problem (\ref{5}), assuming the submitted surrogate marginal utilities are truthful. Next we show that truthful reporting is a (weakly) dominant strategy for each agent.
\begin{corollary}
The quantized VCG mechanism is DSIC. \end{corollary} \begin{proof}
The difference between the marginal utility and the surrogate utility is mainly an implementation issue. There is a one-to-one mapping between them, so having agents send one or the other is equivalent. The last term $\frac{y_n^*\beta}{M}$ in the payment compensates for the difference, ensuring that the quantized VCG mechanism is still DSIC.
\iffalse
Also, we can see this from the view of payoff. For agent $n$, by reporting the true surrogate marginal utility $\hat{V}_n$, the payoff is:
\begin{align}
\nonumber &u_n(\frac{y_n^*}{M})-p_n\\
\nonumber =&u_n(\frac{y_n^*}{M}) -\text{OPT}(0,\hat{V}_{-n})+(\text{OPT}({\hat{V}})-\sum_{m=1}^{y^*_n}\hat{v}_{nm}) - \frac{y^*_n\beta}{M}\\
=&u_n(\frac{y_n^*}{M}) +\sum_{j \neq n}u_j(\frac{y_j^*}{M}) -\text{OPT}(0,\hat{V}_{-n}) -\beta.
\end{align}
The last two terms are constants for agent $n$ and we can see reporting the the true surrogate marginal utility makes each agent's goal consistent with the allocator's, hence this mechanism is DSIC.
\fi \end{proof}
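As a sanity check of the payment rule, the sketch below runs the mechanism on a toy two-agent instance of our own construction, computing the optimum by brute force rather than by Algorithm \ref{greedy}:

```python
from itertools import product

# tiny assumed instance (ours, for illustration): two agents, M = 4, beta = 0.2
M, beta, N = 4, 0.2, 2
f_tilde = {frozenset({0}): 3, frozenset({1}): 3, frozenset({0, 1}): 4}

def feasible(y):
    return all(sum(y[i] for i in S) <= cap for S, cap in f_tilde.items())

def opt(bids):
    """OPT(bids) and its maximizer, by brute force over feasible allocations."""
    cands = [y for y in product(*(range(len(b) + 1) for b in bids))
             if feasible(y)]
    val = lambda y: sum(sum(bids[n][:y[n]]) for n in range(N))
    y_star = max(cands, key=val)
    return val(y_star), y_star

v_hat = [[0.25, 0.20, 0.10], [0.15, 0.05, 0.05]]   # truthful surrogate bids
opt_all, y_star = opt(v_hat)
opt_wo_0, _ = opt([[], v_hat[1]])                   # OPT(0, v_hat_{-0})
# payment rule from the text, including the beta-compensation term
p_0 = opt_wo_0 - (opt_all - sum(v_hat[0][:y_star[0]])) + y_star[0] * beta / M
assert y_star == (3, 1)
assert abs(p_0 - 0.25) < 1e-9
```

Here agent 0 wins three of the four units and pays the welfare the other agent loses by her presence, plus the $\frac{y^*_0\beta}{M}$ compensation term.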
\subsection{Examples of Quantized VCG Performance} In this section we examine the performance of the quantized VCG mechanism for the two examples introduced in the previous section. In the following section, we turn to analyzing the performance in more general settings.
\textbf{Example 1 }(continued): Recall that for the original polymatroid $P_f$, the optimal allocation is ${\bf x}^* = (0.6,0.4)$. For $P_f$, the minimum distance is $\bigtriangleup f = 0.2$. Hence, following Algorithm \ref{integer-valued}, we have $M \geq \frac{2}{0.2}=10$. Fig.~\ref{ex:1} shows the allocation $(x_1,x_2)$ given by the quantized VCG mechanism as $M$ varies. Also shown are the optimal allocations in $P_f$. It can be seen that with 10 partitions, the quantized VCG allocation is close to the optimal and approaches it further as $M$ increases. \begin{figure}
\caption{Performance of the quantized VCG mechanism for Example 1.}
\label{ex:1}
\end{figure}
\textbf{Example 2 }(continued): For this example, the optimal allocation is ${\bf x}^* = (0.7,0.2,0.1)$ and the minimum distance is $\bigtriangleup f = 0.5$. Hence, from Algorithm \ref{integer-valued} we have $M \geq \frac{2}{0.5}=4$. Fig.~\ref{ex:2} shows the allocation $(x_1,x_2,x_3)$ as a function of the number of divisions. Again, the allocation given by the quantized VCG mechanism is close to the optimal allocation for $M = 4$ and becomes closer as $M$ increases. \begin{figure}
\caption{Performance of the quantized VCG mechanism for Example 2.}
\label{ex:2}
\end{figure}
These figures also illustrate the trade-off between the efficiency of the quantized VCG mechanism and the communication cost. As we increase the number of partitions, the difference between the allocation given by the quantized VCG mechanism and the optimal solution shrinks, giving better efficiency. On the other hand, a larger number of partitions means a larger communication cost (and a larger computational cost).
\subsection{Efficiency Analysis} As in much of the literature, we evaluate the performance of the quantized VCG mechanism via the worst-case efficiency. This is defined as the ratio between the social welfare given by the quantized VCG mechanism and the optimal social welfare without quantization in the worst case (over the class of admissible utilities). Specifically, the worst-case efficiency for our mechanism is: \begin{equation}
\inf_{u_1,\cdots,u_N} \frac{\max_{{\bf y} \in P_{\tilde{f}}} \sum _{n=1}^N{u}_n(\frac{y_n}{M})}{\max_{{\bf x} \in P_f} \sum _{n=1}^N{u}_n(x_n)}, \end{equation} where $u_i$ is any utility function satisfying Assumptions 2 and 3. \begin{theorem}\label{th:6}
For a system with $N$ agents, suppose we partition the resource into $M$ divisions equally and run the quantized VCG mechanism as proposed, where $M$ satisfies the condition in Algorithm 2. In this case, the efficiency is at least $$\frac{ M\beta + 2(\alpha-\beta)}{M\alpha +2(\alpha - \beta) -[M-N+1]^+(\alpha-\beta)}.$$ \end{theorem} \begin{proof}
If $\bf x ^*$ is the optimal solution without quantization, then ${\bf y} = \lfloor {\bf x}^* M\rfloor$ must lie in $P_{\tilde{f}}$. In addition, we can always find $\bf \mathring y$ on the dominant face of $P_{\tilde{f}} - {\bf y}$, which is also a polymatroid by Lemma \ref{lemma:1} (and is also integral since {$\mathbf y$} is integer-valued).
Hence,
$\sum_{n=1}^N \mathring{y}_n = M -\sum_{n=1}^N \lfloor x^*_n M\rfloor= \sum_{n=1}^N \{ x^*_n M\}$
and so $\bf y + \mathring y$ is a feasible allocation of divisions on the dominant face of $P_{\tilde{f}}$.
Thus, the efficiency is lower bounded by
\begin{equation}\label{eq:26}
\frac{ \sum _{n=1}^N{u}_n(\frac{y_n+\mathring{y}_n}{M})}{ \sum _{n=1}^N{u}_n(x_n^*)}\geq \frac{ \sum _{n=1}^N{u}_n(\frac{y_n}{M})+ \frac{\beta}{M}\sum_{n=1}^N \mathring{y}_n}{ \sum _{n=1}^N{u}_n(x_n^*)} .
\end{equation}
Recall that the utility functions are concave and for agent $n$ we have:
\begin{align}
\nonumber u_n(x_n^*) \leq& u_n'(\frac{y_n}{M})(x_n^*-\frac{y_n}{M})+ u_n(\frac{y_n}{M}) \\
=&(u_n(\frac{y_n}{M}) - \frac{y_n}{M} u_n'(\frac{y_n}{M})) + x_n^* u_n'(\frac{y_n}{M}).
\end{align}
Therefore,
\begin{align}
\nonumber &\frac{ \sum _{n=1}^N{u}_n(\frac{y_n}{M})+ \frac{\beta}{M}\sum_{n=1}^N \mathring{y}_n}{ \sum _{n=1}^N{u}_n(x_n^*)} \\
\nonumber \geq &\frac{\sum_{n=1}^N \frac{y_n}{M} u_n'(\frac{y_n}{M})+\frac{\beta}{M} \sum_{n=1}^N \mathring{y}_n }{\sum_{n=1}^N x_n^* u_n'(\frac{y_n}{M})}\\
\label{eq:1818}= &\frac{\sum_{n=1}^N \lfloor x^*_n M\rfloor u_n'(\frac{y_n}{M}) +\beta \sum_{n=1}^N \{ x^*_n M\}}{\sum_{n=1}^N \lfloor x^*_n M\rfloor u_n'(\frac{y_n}{M})+\sum_{n=1}^N \{ x^*_n M\} u_n'(\frac{y_n}{M})}.
\end{align}
This indicates that, to find the lower bound on the efficiency, we need only consider utility functions $u_n$ that are linear for $x \leq \frac{y_n}{M}$.
Proposition~\ref{prop:2} shows that $\tilde{f}(S) \geq 2$ for any nonempty subset $S$. Hence, the greedy algorithm must pick the two greatest bids. Let $\alpha^* = \max_n u_n'(\frac{y_n}{M})$; we then have:
\begin{equation}
\sum_{n=1}^N \lfloor x^*_n M\rfloor u_n'(\frac{y_n}{M}) \geq 2\alpha^* + \beta (\sum_{n=1}^N \lfloor x^*_n M\rfloor-2),
\end{equation}
\begin{equation}
\sum_{n=1}^N \{ x^*_n M\} u_n'(\frac{y_n}{M}) \leq \alpha^* \sum_{n=1}^N \{ x^*_n M\}.
\end{equation}
Therefore,
\begin{equation}\label{eq:28}
(\ref{eq:1818})\geq \frac{(M-2)\beta + 2\alpha^*}{(M+2)\alpha^* -2\beta -\sum\limits_{n=1}^N \lfloor x^*_n M\rfloor(\alpha^*-\beta)},
\end{equation}
which is increasing in $\sum_{n=1}^N \lfloor x^*_n M\rfloor$ because $\alpha^* -\beta $ is always nonnegative. Since $\sum_{n =1}^N x_n^*M = M $, we have
$
\sum\limits_{n=1}^N \{ x^*_n M\} \leq N-1,
$
and hence
\begin{equation}\label{eq:29}
\sum_{n=1}^N \lfloor x^*_n M\rfloor \geq \max\{ 0,M - N + 1\} .
\end{equation}
Substituting (\ref{eq:29}) into (\ref{eq:28}) yields:
$$ (\ref{eq:1818}) \geq\left\{
\begin{aligned}
&\frac{2\alpha^* +(M-2)\beta}{(M+2)\alpha^* -2\beta}, \quad \quad \quad \quad \quad \quad M < N-1, \\
&\frac{2\alpha^* + (M-2)\beta}{(N+1)\alpha^* + (M-N-1)\beta}, \ \ \ M \geq N-1.
\end{aligned}
\right.$$
In either case, one can check that the lower bound is a decreasing function of $\alpha^*$, and since $\alpha^* \leq \alpha$, the proof is completed by setting $\alpha^* = \alpha$. \end{proof}
This lower bound is tight. Consider the example with two agents: $P_f = \{{\bf x}| x_1\leq 1-\eta, x_2 \leq \frac{2}{3} +\eta, x_1+x_2 \leq 1\}$ and $u_1(x) = \alpha x$, $u_2(x) =\beta x$. In this case $\bigtriangleup f = \frac{2}{3}$ and $M$ must be at least 3. If we pick $M =3$, then the resulting allocation is $(\frac{2}{3}, \frac{1}{3})$, while the optimal allocation is $(1-\eta,\eta)$. Hence, the efficiency is $\frac{2\alpha +\beta}{3(1-\eta)\alpha + 3\eta\beta}$. As $\eta$ goes to 0, this achieves the lower bound, which is $\frac{2\alpha+\beta}{3\alpha}$.
\textit{Remark 2}: When $\beta \rightarrow \alpha $, the lower bound approaches 1 and is tight, which makes sense because in this case, all agents have the same valuations of the resource and so the allocation is always efficient.
The lower bound is an increasing function of the number of partitions, which again reflects the trade-off between efficiency and communication cost. As $M$ goes to infinity, the lower bound goes to 1, since the mechanism approaches the VCG mechanism. Moreover, for a fixed number of partitions, the lower bound decreases as the number of agents grows; when $N \geq M + 1$, the lower bound becomes independent of the number of agents and is a constant for a given $M$.
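The closed-form bound in Theorem \ref{th:6} is easy to evaluate numerically; the sketch below (ours) checks it against the tight two-agent example above and its limiting behavior as $M$ grows:

```python
def efficiency_bound(M, N, alpha, beta):
    """Worst-case efficiency bound stated in Theorem 6, with
    [M - N + 1]^+ = max(0, M - N + 1)."""
    pos = max(0, M - N + 1)
    return (M * beta + 2 * (alpha - beta)) / (
        M * alpha + 2 * (alpha - beta) - pos * (alpha - beta))

alpha, beta = 1.2, 0.4   # illustrative slope bounds (our choice)
# tight example from the text: M = 3, N = 2 gives (2*alpha + beta) / (3*alpha)
assert abs(efficiency_bound(3, 2, alpha, beta)
           - (2 * alpha + beta) / (3 * alpha)) < 1e-12
# the bound improves with finer partitions ...
assert efficiency_bound(100, 2, alpha, beta) > efficiency_bound(4, 2, alpha, beta)
# ... and approaches 1 as M goes to infinity
assert abs(efficiency_bound(10**7, 2, alpha, beta) - 1) < 1e-5
```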
\section{Rounded Quantized VCG Mechanism} In the last section, we specified the quantized VCG mechanism for allocating a divisible resource in a polymatroid environment and provided a lower bound for the worst-case efficiency. In the quantized VCG mechanism, agent $n$ only needs to send $\tilde{f}(\{n\})$ bids in total to indicate her utility for the possible allocations. This is a reduction of the dimensionality of the needed communication - a common measure used in both the engineering literature (e.g., \cite{yang2007vcg,haoge2018}) and in economics (e.g., this is the notion of informational efficiency used by Hurwicz and Reiter \cite{hurwicz2006designing}). However, from an information-theoretic point of view, conveying a real number still requires an infinite number of bits. In a small networked system, agents can send very long bids to approximate the real values, but as the number of agents in the network grows, the total amount of communication may become unacceptable. Hence, in this section we further consider quantizing the bids sent by each agent, in addition to the quantization of the resource, in a large networked system (e.g., $N \geq 10$).\footnote{Again this builds on work in \cite{haoge2018}, which considered a similar approach for a single divisible resource and only one of the bidding strategies we discuss below.}
Concretely, we determine a monetary unit $\delta$. Each agent is restricted to giving valuations that are integer multiples of $\delta$. Thereby, to indicate the surrogate marginal utility, agent $n$ needs to send an integer vector $\hat{W}_n = \{\hat{w}_{n1},\cdots,\hat{w}_{n\tilde{f}(\{n\})} \}$ to approximate her true utility. We call the resulting mechanism the \textit{rounded quantized VCG mechanism}.
More precisely this mechanism is defined as follows: \begin{itemize}
\item{Determine the number of partitions and partition the resource into $M$ divisions equally using Algorithm \ref{integer-valued}. Again, denote the feasible region for number of units by $P_{\tilde{f}}$}.
\item{Determine and broadcast the monetary unit $\delta$ }.
\item{Solicit and accept sealed value vectors $\hat{W}_n $}.
\item{Run the quantized VCG mechanism with marginal utility vectors $\hat{V}_n= \hat{W}_n \delta$. Break ties randomly and determine the allocation and payment.} \end{itemize}
In other words, following this mechanism, agent $n$ equivalently reports another function $\tilde{u}_n(x)$ to approximate $u_n$, where \begin{equation}
\tilde{u}_n(\frac{y_n}{M}) = \sum _{m=1} ^{y_n} \hat{w}_{nm}\delta + \beta \frac{y_n}{M}. \end{equation} As shown in Theorem \ref{thm:4}, the resource allocator aims to find the allocation $\bf \tilde{y}^*$ that maximizes the sum of the $\tilde{u}_n$, i.e., \begin{equation}\label{eq:33}
{\bf \tilde{y}}^* = \arg\max_{{\bf y } \in P_{\tilde{f}}} \sum_{n=1}^N \tilde{u}_n(\frac{y_n}{M}). \end{equation}
Obviously, the social welfare given by the rounded quantized VCG mechanism is at most that given by the quantized VCG, since each agent cannot report her utility exactly. We will analyze the loss due to this restriction later in this section. Also, here we require that agents submit a non-increasing sequence of bids (as we will see in the following, due to bid quantization agents may not have an incentive to do this without the restriction). The main reason for this restriction is that it is needed for us to use the greedy algorithm to correctly determine the outcome and payments. \subsection{Equilibrium Analysis} It is shown in \cite{haoge2018} that there is no dominant strategy when using such a mechanism over a single-link network. Hence, this is also true for our more general polymatroid setting.\footnote{If we set all constraints to the same value, e.g., $f(S) = 1$, for all $S$, then our problem is reduced to a single-link problem.} We instead adopt the following relaxed solution concept, which allows agents to tolerate some loss:
\begin{definition}
Given any $\epsilon \geq 0$, a strategy $\sigma_n^*$ for agent $n$ is called \textit{$\epsilon$-dominant} if for all $\sigma_n$ and $\sigma_{-n}$, $u_n(\sigma_n,\sigma_{-n}) -u_n(\sigma_n^*,\sigma_{-n}) \leq\epsilon.$ \end{definition}
In other words, for agent $n$, any unilateral deviation from strategy $\sigma_n^*$ leads to at most an $\epsilon$ gain. A dominant strategy is a special case of an $\epsilon$-dominant strategy with $\epsilon = 0$.
\begin{definition}
A strategy profile $\sigma^*= (\sigma_1^*,\cdots,\sigma_N^*)$ forms an $\epsilon$-equilibrium if for every agent $n$, $\sigma_n^*$ is $\epsilon$-dominant. \end{definition}
\begin{figure*}
\caption{The FLOOR strategy.}
\caption{The CEILING strategy.}
\caption{The CEILOOR strategy.}
\caption{Equivalent utility functions $\tilde{u}(x)$ under the three example equilibrium strategies, where the true utility function is $\hat{u}(x) = \sqrt{x}$ and the monetary unit is $\delta = 0.05$.}
\label{fig:floor}
\label{fig:animals}
\end{figure*}
For this section, we assume agents can tolerate a loss $\epsilon$ and adopt the above solution concept. Note that this solution concept shares the key properties of dominant strategies. For example, once again, agents do not need any knowledge of the actions or rationality of other players to determine their action.
One result of this relaxation in the solution concept is that agents no longer have a unique strategy. We illustrate this in the next three theorems, which each show a different equilibrium profile (see also Figure \ref{fig:animals}). \begin{theorem}\label{th:666}
Under the rounded quantized VCG mechanism with a given $\epsilon$, if we set $\delta = \frac{\epsilon}{M}$, then for any agent $n$ it is an $\epsilon$-dominant strategy to report $\hat{w}_{nm}=\lfloor \frac{\hat{v}_{nm}}{\delta}\rfloor$ for the $m^{th}$ partition. Moreover, if all agents play this $\epsilon$-dominant strategy, the maximum difference between the social welfare given by the quantized VCG and the rounded quantized VCG is $\eta = \epsilon = M\delta$. \end{theorem} \begin{proof}
For agent $n$, as shown in the last section, reporting the true surrogate marginal utility $\hat{V}_n$ (equivalently reporting ${u}_n$) is optimal without the quantized bid constraint and the corresponding maximum payoff is:
\begin{align}
\nonumber&\max\limits_{{\bf y} \in P_{\tilde{f}}} u_n(\frac{y_n}{M}) +\sum_{j \neq n}\tilde{u}_j(\frac{y_j}{M}) -\text{OPT}(0,\hat{W}_{-n}\delta)-\beta \\
\label{17}=& R -\text{OPT}(0,\hat{W}_{-n}\delta)-\beta.
\end{align}
Here, $\hat{W}_{-n}$ indicates the other agents' bids, which correspond to $\tilde{u}_{-n}$, and
$
R =\max\limits_{{\bf y} \in P_{\tilde{f}}} u_n(\frac{y_n}{M}) +\sum_{j \neq n}\tilde{u}_j(\frac{y_j}{M}).
$
Suppose agent $n$ reports $\hat{W}_n = \lfloor \frac{\hat{V}_n}{\delta} \rfloor$ and the corresponding utility function is $\tilde{u}_n$. Given the final allocation $\bf \tilde{y}^*$, agent $n$'s payoff is:
\begin{align}
\nonumber u_n(\frac{\tilde{y}_n^*}{M}) - p_n =& u_n(\frac{\tilde{y}_n^*}{M}) +\sum_{j \neq n}\tilde{u}_j(\frac{\tilde{y}_j^*}{M}) -\text{OPT}(0,\tilde{W}_{-n}\delta)-\beta\\
\label{16}= & L - \text{OPT}(0,\tilde{W}_{-n}\delta) -\beta,
\end{align}
where
$
\label{eq:L}L= \max \limits_{{\bf y} \in P_{\tilde{f}}} \sum\limits_{n=1}^N \tilde{u}_{n}(\frac{y_n}{M})+u_n(\frac{\tilde{y}_n^*}{M}) -\tilde{u}_n(\frac{\tilde{y}_n^*}{M}).
$
To show the strategy is $\epsilon$-dominant, it is sufficient to show $L \geq R -\epsilon$, which implies that agent $n$'s loss under this strategy is at most $\epsilon$ compared with the maximum payoff she can obtain.
As shown in Fig.~\ref{fig:animals}(a), by reporting the floor value, for $y_n =0,1,\cdots, M$,
\begin{equation}\label{eq:36}
\tilde{u}_n(\frac{y_n}{M}) = \sum _{m=1} ^{y_n} \hat{w}_{nm}\delta +\beta \frac{y_n}{M}= \sum _{m=1} ^{y_n} \lfloor \frac{\hat{v}_{nm}}{\delta}\rfloor\delta +\beta \frac{y_n}{M}.
\end{equation}
Hence, we have:
\begin{equation}\label{eq:37}
u_n(\frac{y_n}{M}) -y_n\delta \leq \tilde{u}_n(\frac{y_n}{M}) \leq u_n(\frac{y_n}{M})
\end{equation}
and so
\begin{align}
\nonumber L \geq &\max \limits_{{\bf y} \in P_{\tilde{f}}} \sum_{n=1}^N \tilde{u}_{n}(\frac{y_n}{M})\\
\nonumber\geq & \max\limits_{{\bf y} \in P_{\tilde{f}}}u_n(\frac{y_n}{M}) +\sum_{j \neq n}\tilde{u}_j(\frac{y_j}{M}) - y_n\delta\\
\label{20}\geq & R- M\delta.
\end{align}
Therefore, it is an $\epsilon$-dominant strategy to report the floor value.
Next, we show that if all agents follow this strategy, the difference between the social welfare given by the quantized VCG and the rounded quantized VCG mechanism is upper bounded by $M\delta $. In other words, the loss in social welfare due to the quantized bid constraint is no greater than $M\delta$. According to the definition, the difference is:
\begin{align*}
\sum_{n=1}^N u_{n}(\frac{y_n^*}{M}) - \sum_{n=1}^N u_{n}(\frac{\tilde{y}^*_n}{M}) \leq & \sum_{n=1}^N \left(\tilde{u}_{n}(\frac{y_n^*}{M})+ y_n^*\delta\right)- \sum_{n=1}^N u_{n}(\frac{\tilde{y}^*_n}{M})\\
\leq & \sum_{n=1}^N \tilde{u}_{n}(\frac{\tilde{y}^*_n}{M})- \sum_{n=1}^N u_{n}(\frac{\tilde{y}^*_n}{M}) + M\delta \\
\leq & M\delta .
\end{align*}
The inequality (\ref{eq:37}) is used in the last two steps. \end{proof}
For simplicity, we call this the FLOOR strategy. Similarly, we next define a CEILING strategy and again bound the loss in social welfare. \begin{theorem}
Under the rounded quantized VCG mechanism with a given $\epsilon$, if we set $\delta = \frac{\epsilon}{M}$, then for any agent $n$ it is an $\epsilon$-dominant strategy to report $\hat{w}_{nm}=\lceil \frac{\hat{v}_{nm}}{\delta}\rceil$ for the $m^{th}$ partition. If all agents play this $\epsilon$-dominant strategy, the maximum difference between the social welfare given by the quantized VCG and the rounded quantized VCG is $\eta =\epsilon= M\delta$. \end{theorem}
The proof is similar to that for the FLOOR strategy.
By reporting the floor or ceiling value, each agent approximates her true utility function and keeps her bids non-increasing, as we require. However, the bid is always smaller (greater) than the true value under the FLOOR (CEILING) strategy. As we can observe in Fig.~\ref{fig:animals}(a)-(b), the difference between the bids and the true values accumulates, so the approximation is less accurate when $x$ is large.
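This accumulation effect is easy to reproduce numerically. The sketch below (ours, taking the figure's $u(x)=\sqrt{x}$ and $\delta=0.05$, with $M=10$ assumed and $\beta$ set to 0 purely to shorten the code) verifies that FLOOR under-reports and CEILING over-reports at every allocation level:

```python
from math import floor, ceil, sqrt

# illustrative setup from the figure: u(x) = sqrt(x), delta = 0.05; M and
# beta = 0 are our simplifying assumptions
M, delta = 10, 0.05
u = lambda x: sqrt(x)
v = [u(m / M) - u((m - 1) / M) for m in range(1, M + 1)]  # true marginal bids

w_floor = [floor(vm / delta) for vm in v]    # FLOOR strategy
w_ceil = [ceil(vm / delta) for vm in v]      # CEILING strategy

def reported_u(w, y):
    """Utility implied by integer bids w at allocation level y (beta = 0)."""
    return sum(w[:y]) * delta

# FLOOR under-reports and CEILING over-reports at every allocation level,
# and both bid sequences stay non-increasing as required
assert all(reported_u(w_floor, y) <= u(y / M) <= reported_u(w_ceil, y)
           for y in range(M + 1))
assert all(a >= b for a, b in zip(w_floor, w_floor[1:]))
```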
Inspired by the above two strategies, a better approximation of the true utility function can be obtained by combining them instead of reporting only floor values or only ceiling values. This yields our third strategy, which we refer to as the CEILOOR strategy. \begin{theorem}
Under the rounded quantized VCG mechanism with a given $\epsilon$, if the number of partitions $M$ is even and we set $\delta = \frac{\epsilon}{M}$, then it is an $\frac{\epsilon}{2}$-dominant strategy for agent $n$ to report $$ \hat{w}_{nm}=\left\{
\begin{aligned}
\lceil \frac{\hat{v}_{nm}}{\delta}\rceil \ \ \ \ \ \ \ &0 < m \leq \frac{M}{2} \\
\lfloor \frac{\hat{v}_{nm}}{\delta}\rfloor\ \ \ \ \ \ \ &\frac{M}{2} < m \leq M.
\end{aligned}
\right.$$
If all agents play this strategy, the maximum difference between the social welfare given by the quantized VCG and the rounded quantized VCG is $\eta =\epsilon= M\delta $. \end{theorem} The detailed proof is omitted here. \iffalse \begin{proof}
See appendix for a detailed proof. \end{proof}
\begin{proof}
By choosing this strategy,
$$ \tilde{u}_n(\frac{y_n}{M}) =\left\{
\begin{aligned}
&\delta \sum _{m=1} ^{y_n} \lceil \frac{\hat{v}_{nm}}{\delta}\rceil +\beta\frac{y_n}{M}, \quad \quad \quad \quad \quad \quad \quad y_n \leq \frac{M}{2} \\
&\delta(\sum _{i=1} ^{\frac{M}{2}} \lceil \frac{\hat{v}_{ni}}{\delta}\rceil + \sum _{i=\frac{M}{2}+1} ^{y_n} \lfloor \frac{\hat{v}_{ni}}{\delta}\rfloor)+\beta\frac{y_n}{M}, \ \ y_n>\frac{M}{2}.
\end{aligned}
\right.$$
If we assume $\bf \bar{y}$ is the maximizer of $R$, then according to the definition of $L$ and $R$, we have
\begin{align}
\nonumber L \geq & \sum_{n=1}^N \tilde{u}_{n}(\frac{\bar{y}_n}{M})+u_n(\frac{\tilde{y}_n^*}{M}) -\tilde{u}_n(\frac{\tilde{y}_n^*}{M})\\
\label{wtm}= & R + \tilde{u}_n(\frac{\bar{y}_n}{M})-\tilde{u}_n(\frac{\tilde{y}_n^*}{M})-(u_n(\frac{\bar{y}_n}{M})-u_n(\frac{\tilde{y}_n^*}{M})).
\end{align}
Consider three different cases:
(i) If $\bar{y}_n = \tilde{y}_n^*$, then $L \geq R$, which further implies $L = R$ since $L$ is always no greater than $R$.
(ii) If $\bar{y}_n > \tilde{y}_n^*$, which means agent $n$ gets fewer divisions by using the CEILOOR strategy, then $\bar{y}_n >\tilde{y}_n^* \geq \frac{M}{2}$ since agent $n$ only decreases her bids for the second half of the partitions. Therefore,
\begin{align*}
\nonumber&\tilde{u}_n(\frac{\bar{y}_n}{M})-\tilde{u}_n(\frac{\tilde{y}_n^*}{M})-(u_n(\frac{\bar{y}_n}{M})-u_n(\frac{\tilde{y}_n^*}{M}))\\
\nonumber=& \sum_{m = \tilde{y}_n^*+ 1}^{\bar{y}_n} (\lfloor \frac{\hat{v}_{nm}}{\delta}\rfloor\delta - \hat{v}_{nm})\\
\nonumber\geq & -(\bar{y}_n -\tilde{y}_n^*) \delta\\
\geq & -\frac{M\delta}{2}.
\end{align*}
Plugging the above inequality into (\ref{wtm}), we get $L \geq R -\frac{\epsilon}{2}$.
(iii) If $\bar{y}_n < \tilde{y}_n^*$, which means agent $n$ gets more divisions by using the CEILOOR strategy, then similarly we can conclude $ \bar{y}_n <\tilde{y}_n^* \leq \frac{M}{2}$. Hence,
\begin{align*}
\nonumber&\tilde{u}_n(\frac{\bar{y}_n}{M})-\tilde{u}_n(\frac{\tilde{y}_n^*}{M})-(u_n(\frac{\bar{y}_n}{M})-u_n(\frac{\tilde{y}_n^*}{M}))\\
\nonumber=& -\sum^{\tilde{y}_n^*}_{m = \bar{y}_n+1} (\lceil \frac{\hat{v}_{nm}}{\delta}\rceil\delta - \hat{v}_{nm})\\
\nonumber\geq & -(\tilde{y}_n^*-\bar{y}_n) \delta\\
\geq & -\frac{M\delta}{2}.
\end{align*}
Again, we have $L \geq R -\frac{M\delta}{2} = R -\frac{\epsilon}{2}$.
Therefore, for all possible cases, $L \geq R -\frac{\epsilon}{2}$, which indicates the CEILOOR strategy is $\frac{\epsilon}{2}$-dominant.
Next, we bound the loss in social welfare resulting from the bid quantization. Note that in the quantized VCG mechanism, if the optimal allocation is ${\bf y}^*$, at most one agent can get more than $\frac{M}{2 }$ divisions. If no agent gets more than $\frac{M}{2}$ divisions, then it must be that ${y}_n^* \leq \frac{M}{2}$, for $1\leq n\leq N$. Hence, the relationship (\ref{22}) still holds for any ${y}_n^*$ and via the same approach, it follows that the loss is no greater than $M\delta$.
If one agent gets more than $\frac{M}{2}$ divisions, then without loss of generality assume $y^*_1 > \frac{M}{2}$. There are two subcases:
(a) $\tilde{y}^*_1 \geq \frac{M}{2}$: for this case, according to the definition of $\tilde{\bf y}^*$ in (\ref{eq:33}),
\begin{equation}
\sum_{n=1}^N \tilde{u}_{n}(\frac{\tilde{y}^*_n}{M}) \geq \sum_{n=1}^N \tilde{u}_{n}(\frac{y^*_n}{M}),
\end{equation}
which means:
\begin{align}
\nonumber &\sum_{n=2}^{N}\sum_{m=1}^{y^*_n}\lceil \frac{v_{nm}}{\delta} \rceil \delta + \sum_{m=1}^{\frac{M}{2}}\lceil \frac{v_{1m}}{\delta} \rceil \delta+ \sum_{m=\frac{M}{2}+1}^{y_1^*}\lfloor\frac{v_{1m}}{\delta} \rfloor\delta\\
\leq & \sum_{n=2}^{N}\sum_{m=1}^{\tilde{y}^*_n}\lceil \frac{v_{nm}}{\delta} \rceil \delta + \sum_{m=1}^{\frac{M}{2}}\lceil \frac{v_{1m}}{\delta} \rceil \delta+ \sum_{m=\frac{M}{2}+1}^{\tilde{y}_1^*}\lfloor \frac{v_{1m}}{\delta} \rfloor \delta.
\end{align}
Simplifying the above inequality, we have:
\begin{align*}
&\delta\sum_{n=2}^N (\sum_{m=1}^{y^*_n}\lceil \frac{v_{nm}}{\delta} \rceil-\sum_{m=1}^{\tilde{y}^*_n}\lceil \frac{v_{nm}}{\delta} \rceil) \\
\leq & -\delta(\sum_{m=\frac{M}{2}+1}^{y_1^*}\lfloor \frac{v_{1m}}{\delta} \rfloor
-\sum_{m=\frac{M}{2}+1}^{\tilde{y}_1^*}\lfloor\frac{v_{1m}}{\delta} \rfloor).
\end{align*}
For simplicity, let
\begin{equation}
A = \delta\sum_{n=2}^N (\sum_{m=1}^{y^*_n}\lceil \frac{v_{nm}}{\delta} \rceil-\sum_{m=1}^{\tilde{y}^*_n}\lceil \frac{v_{nm}}{\delta} \rceil),
\end{equation}
and
\begin{equation}
B = \delta(\sum_{m=\frac{M}{2}+1}^{y_1^*}\lfloor \frac{v_{1m}}{\delta} \rfloor
-\sum_{m=\frac{M}{2}+1}^{\tilde{y}_1^*}\lfloor\frac{v_{1m}}{\delta} \rfloor).
\end{equation}
Hence,
\begin{equation}\label{eq:48}
A + B \leq 0.
\end{equation}
Moreover,
\begin{align}
\nonumber A =&\delta \sum_{n=2}^N (\mathds{1}_{\{y^*_n \geq \tilde{y}^*_n\}} \sum_{i=\tilde{y}^*_n +1}^{y^*_n}\lceil \frac{v_{ni}}{\delta} \rceil - \mathds{1}_{\{y^*_n < \tilde{y}^*_n\}} \sum_{i=y^*_n +1}^{\tilde{y}^*_n}\lceil \frac{v_{ni}}{\delta} \rceil)\\
\nonumber \geq&\sum_{n=2}^N (\mathds{1}_{\{y^*_n \geq \tilde{y}^*_n\}} \sum_{i=\tilde{y}^*_n +1}^{y^*_n} v_{ni} - \mathds{1}_{\{y^*_n < \tilde{y}^*_n\}} \sum_{i=y^*_n +1}^{\tilde{y}^*_n}( v_{ni}+\delta))\\
\nonumber \geq & \sum_{n=2}^N (\mathds{1}_{\{y^*_n \geq \tilde{y}^*_n\}}\sum_{i=\tilde{y}^*_n +1}^{y^*_n} v_{ni}- \mathds{1}_{\{y^*_n < \tilde{y}^*_n\}} (\tilde{y}^*_n\delta+\!\sum_{i=y^*_n +1}^{\tilde{y}^*_n}v_{ni}))\\
\nonumber = & \sum_{n=2}^N u_{n}(\frac{y_n^*}{M}) - \sum_{n=2}^Nu_{n}(\frac{\tilde{y}^*_n}{M}) - \sum_{n=2}^N \mathds{1}_{\{y^*_n < \tilde{y}^*_n\}}\tilde{y}^*_n\delta\\
\nonumber \geq &\sum_{n=2}^N u_{n}(\frac{y_n^*}{M}) - \sum_{n=2}^Nu_{n}(\frac{\tilde{y}^*_n}{M})- \sum_{n=2}^N \tilde{y}^*_n\delta\\
\label{yyoo27}= &\sum_{n=2}^N u_{n}(\frac{y_n^*}{M}) - \sum_{n=2}^Nu_{n}(\frac{\tilde{y}^*_n}{M}) -(M - \tilde{y}^*_1)\delta.
\end{align}
This implies:
\begin{equation}\label{eq:50}
\sum_{n=2}^N u_{n}(\frac{y_n^*}{M}) - \sum_{n=2}^Nu_{n}(\frac{\tilde{y}^*_n}{M}) \leq A +(M - \tilde{y}^*_1)\delta.
\end{equation}
Furthermore,
\begin{align}
\nonumber B = &\mathds{1}_{\{y^*_1 \geq \tilde{y}^*_1\}} \sum_{i=\tilde{y}^*_1 +1}^{y^*_1}\lfloor \frac{v_{1i}}{\delta} \rfloor \delta - \mathds{1}_{\{y^*_1 < \tilde{y}^*_1\}} \sum_{i=y^*_1 +1}^{\tilde{y}^*_1}\lfloor \frac{v_{1i}}{\delta} \rfloor \delta\\
\nonumber \geq & \mathds{1}_{\{y^*_1 \geq \tilde{y}^*_1\}} \sum_{i=\tilde{y}^*_1 +1}^{y^*_1}(v_{1i} -\delta) - \mathds{1}_{\{y^*_1 < \tilde{y}^*_1\}} \sum_{i=y^*_1 +1}^{\tilde{y}^*_1}v_{1i} \\
\nonumber = & u_1(\frac{y_1^*}{M}) -u_1(\frac{\tilde{y}_1^*}{M}) - \mathds{1}_{\{y^*_1 \geq \tilde{y}^*_1\}}(y_1^*-\tilde{y}_1^*)\delta\\
\geq & u_1(\frac{y_1^*}{M}) -u_1(\frac{\tilde{y}_1^*}{M}) - \frac{M\delta}{2}.
\end{align}
This implies:
\begin{equation}\label{eq:52}
u_1(\frac{y_1^*}{M}) -u_1(\frac{\tilde{y}_1^*}{M}) \leq B +\frac{M\delta}{2}.
\end{equation}
Combining (\ref{eq:48}), (\ref{eq:50}) and (\ref{eq:52}), we have:
\begin{equation}
\sum_{n=1}^N u_{n}(\frac{y_n^*}{M}) - \sum_{n=1}^Nu_{n}(\frac{\tilde{y}^*_n}{M})\leq (\frac{3M}{2} - \tilde{y}^*_1) \delta \leq M\delta.
\end{equation}
(b) $\tilde{y}^*_1 < \frac{M}{2}$: in this case, the bid $\lceil \frac{v_{1\frac{M}{2}}}{\delta}\rceil$ is not picked. Hence even if agent 1 submits $\lceil \frac{v_{1\frac{M}{2}}}{\delta}\rceil$ for the second half of the divisions (e.g., $w_{1i} = \lceil \frac{v_{1\frac{M}{2}}}{\delta}\rceil$ for $i > \frac{M}{2}$), the optimal allocation $\bf \tilde{y}^*$ will remain the same since the allocator ignores agent 1's bids after $w_{1\tilde{y}_1^*}$.
Therefore, we have:
\begin{align}
\nonumber
&\sum_{n=2}^{N}\sum_{m=1}^{y^*_n}\lceil \frac{v_{nm}}{\delta} \rceil \delta + \sum_{m=1}^{\frac{M}{2}}\lceil \frac{v_{1m}}{\delta} \rceil \delta+ (y_1^* - \frac{M}{2})\lceil\frac{v_{1\frac{M}{2}}}{\delta} \rceil\delta\\
\label{121630}\leq & \sum_{n=2}^{N}\sum_{m=1}^{\tilde{y}^*_n}\lceil \frac{v_{nm}}{\delta} \rceil \delta + \sum_{m=1}^{\tilde{y}_1^*}\lceil \frac{v_{1m}}{\delta} \rceil \delta.
\end{align}
Note also that
\begin{equation}\label{121731}
(y_1^* - \frac{M}{2})\lceil\frac{v_{1\frac{M}{2}}}{\delta} \rceil\delta- \sum_{m=\frac{M}{2}+1}^{y_1^*}\lfloor\frac{v_{1m}}{\delta} \rfloor\delta \geq y_1^* - \frac{M}{2}.
\end{equation}
Combining (\ref{121630}) and (\ref{121731}) and simplifying yields
\begin{align}
\nonumber &\delta\sum_{n=2}^N (\sum_{m=1}^{y^*_n}\lceil \frac{v_{nm}}{\delta} \rceil-\sum_{m=1}^{\tilde{y}^*_n}\lceil \frac{v_{nm}}{\delta} \rceil) \\
\label{eq:56}\leq & -\delta(\sum_{m=\tilde{y}_1^*+1}^{\frac{M}{2}}\lfloor \frac{v_{1m}}{\delta} \rfloor
+\sum_{m=\frac{M}{2}+1}^{y_1^*}\lfloor\frac{v_{1m}}{\delta} \rfloor)+ \frac{M}{2} - y_1^*.
\end{align}
Keep the definition of $A$ as before, now define $B$ as
\begin{equation}\label{newb}
B =\delta(\sum_{m=\tilde{y}_1^*+1}^{\frac{M}{2}}\lfloor \frac{v_{1m}}{\delta} \rfloor
+\sum_{m=\frac{M}{2}+1}^{y_1^*}\lfloor\frac{v_{1m}}{\delta} \rfloor).
\end{equation}
The inequality (\ref{eq:56}) can then be rewritten as:
\begin{equation}\label{eq:58}
A + B \leq \frac{M}{2} -y_1^*.
\end{equation}
Inequality (\ref{eq:50}) still holds here and for $B$:
\begin{align}
\nonumber B \geq &\sum_{m=\tilde{y}_1^*+1}^{\frac{M}{2}}v_{1m} + \sum_{m=\frac{M}{2}+1}^{y_1^*} (v_{1m} -\delta)\\
\label{eq:59}\geq & u_1(\frac{y_1^*}{M}) -u_1(\frac{\tilde{y}_1^*}{M}) - (y_1^*-\frac{M}{2})\delta,
\end{align}
which means:
\begin{equation}\label{eq:60}
u_1(\frac{y_1^*}{M}) -u_1(\frac{\tilde{y}_1^*}{M}) \leq B + (y_1^*-\frac{M}{2})\delta.
\end{equation}
Combining (\ref{eq:50}), (\ref{eq:58}) and (\ref{eq:60}), we have:
\begin{equation}
\sum_{n=1}^N u_{n}(\frac{y_n^*}{M}) - \sum_{n=1}^Nu_{n}(\frac{\tilde{y}^*_n}{M})\leq (M - \tilde{y}^*_1) \delta \leq M\delta.
\end{equation}
Hence, in both cases, the loss in social welfare is upper bounded by $M\delta$. \end{proof} \fi
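As an illustration, the CEILOOR reporting rule of the theorem above can be sketched in a few lines. This is a hedged sketch: the function name and example values are ours, with `v[m-1]` playing the role of $\hat{v}_{nm}$ and `delta` $= \epsilon/M$.

```python
import math

def ceiloor_bids(v, delta):
    """CEILOOR strategy: round the quantized marginal values up for the
    first M/2 partitions and down for the remaining ones."""
    M = len(v)
    assert M % 2 == 0, "the theorem assumes an even number of partitions M"
    return ([math.ceil(v[m] / delta) for m in range(M // 2)]
            + [math.floor(v[m] / delta) for m in range(M // 2, M)])

# Example with M = 4 partitions and delta = 0.25:
print(ceiloor_bids([0.9, 0.7, 0.4, 0.2], 0.25))  # -> [4, 3, 1, 0]
```

Each reported bid differs from the true quantized value by at most one quantization step $\delta$, which is the per-term bound used repeatedly in the proof.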
\textit{Remark 3:} Though all these strategies lead to no more than $\epsilon$ regret, note that only the FLOOR strategy is individually rational, as in the other cases an agent's pay-off may be negative. However, if we also relax individual rationality to allow for a loss of at most $\epsilon$, then all strategies are $\epsilon$-individually rational.
In the previous results, we assumed that all agents choose the same type of $\epsilon$-dominant strategy. Next, we show that if agents choose different types of strategies, then this can lead to larger welfare losses.
\begin{theorem}
Under the rounded quantized VCG mechanism with given $\epsilon$, if $\delta = \frac{\epsilon}{M} $ and there is no agreement among agents on the type of $\epsilon$-dominant strategy, then the maximum difference between social welfare given by the quantized VCG and the rounded quantized VCG is $\eta = 2\epsilon $. \end{theorem}
We omit the proof here for space considerations.
The preceding analysis showed that inconsistency in the choice of strategy may result in a loss of efficiency. This suggests that a resource allocator may want to encourage agents to follow the same type of strategy. Also note that the CEILOOR strategy is more appealing because it offers a smaller bound on the loss for each agent (since it is $\frac{\epsilon}{2}$-dominant instead of $\epsilon$-dominant) and thus might be preferable. On the other hand, as noted above, the FLOOR strategy is individually rational and so might be preferred from that point of view.
Under the assumption that all agents follow the same strategy, we next analyze the lower bound on efficiency in each case. Before diving in, one important fact should be considered: in practical settings, the loss each agent can tolerate should be quite small compared with her pay-off. Hence, we make the following assumption to ensure that $\epsilon$ is small compared to the possible utility an agent may obtain. \begin{assumption}
For each agent, the maximum loss $\epsilon$ is smaller than half of the minimum utility if she gets all the resource, i.e., $\epsilon \leq \frac{\beta}{2}$. \end{assumption}
This assumption provides a rather loose upper bound for $\epsilon$; in practice, $\epsilon$ could be much smaller than this bound. Based on the previous results, we next derive the overall worst-case efficiency bounds in different scenarios. \begin{theorem}\label{thm:11}
For a large networked system with more than 3 agents, under the rounded quantized VCG mechanism with given $\epsilon$, if we set $\delta = \frac{\epsilon}{M}$ and all agents simultaneously choose the same strategy, the worst-case efficiency is no less than $$\frac{ M\beta + 2(\alpha-\beta)-M\epsilon}{M\alpha +2(\alpha - \beta) -[M-N+1]^+(\alpha-\beta)}.$$ \end{theorem} \begin{proof}
The main idea is to combine the losses from quantizing the resource and the bids. The details are omitted here.
\iffalse
instead of (\ref{eq:26}), the efficiency is lower bounded by:
\begin{equation}\label{eq:63}
\frac{ \sum _{n=1}^N{u}_n(\frac{y_n}{M})+ \frac{\beta}{M}\sum_{n=1}^N \mathring{y}_n - \eta}{ \sum _{n=1}^N{u}_n(x_n^*)},
\end{equation}
where $\eta = \epsilon$ if all agents choose the same strategy. Following a similar derivation as in Theorem \ref{th:6}, we can show the lower bound is an increasing function of $\sum_{n=1}^N \lfloor x^*_n M\rfloor$ and hence by the inequality (\ref{eq:29}) we have:
$$
(\ref{eq:63}) \geq \frac{2\alpha^*+(M-2)\beta - M\epsilon}{(M+2)\alpha^* -2\beta -[M-N+1]^+(\alpha^*-\beta)},
$$
or equivalently,
$$ (\ref{eq:63}) \geq\left\{
\begin{aligned}
&\frac{2\alpha^* +(M-2)\beta-M\epsilon}{(M+2)\alpha^* -2\beta}, \quad \quad \quad \ \ \ M < N-1, \\
&\frac{2\alpha^* + (M-2)\beta-M\epsilon}{(N+1)\alpha^* + (M-N-1)\beta}, \ \ \ \ \ M \geq N-1.
\end{aligned}
\right.$$
For the first case, the derivative over $\alpha^*$ of the lower bound is:
$$
\frac{-M^2\beta +(M^2+2M)\epsilon}{((M+2)\alpha^* -2\beta)^2} \leq \frac{-\frac{M^2\beta}{2} +M\beta}{((M+2)\alpha^* -2\beta)^2}\leq 0,
$$
where we use the fact that $\epsilon \leq \frac{\beta}{2}$ and $M \geq 2$ as shown in Proposition \ref{prop:2}. Therefore, this bound is decreasing in $\alpha^*$. Similarly, if $M \geq N-1$, the lower bound can be rewritten as:
$$
\frac{2\alpha^* + (M-2-\frac{M\epsilon}{\beta})\beta}{(N+1)\alpha^* + (M-N-1)\beta},
$$
and we can show in a large networked system ($N \geq 3$),
$$
\frac{2}{N+1} \leq \frac{M-2-\frac{M\epsilon}{\beta}}{M-N-1},
$$
hence, this bound is also decreasing in $\alpha^*$. By letting $\alpha^* =\alpha$, we get the lower bound and finish the proof.
\fi \end{proof}
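To get a feel for this guarantee, the closed-form bound of Theorem \ref{thm:11} can be evaluated numerically. This is an illustrative sketch; the function name and the sample parameter values are ours.

```python
def worst_case_efficiency(M, N, alpha, beta, eps):
    """Worst-case efficiency bound of Theorem 11:
    (M*beta + 2(alpha-beta) - M*eps) /
    (M*alpha + 2(alpha-beta) - [M-N+1]^+ (alpha-beta))."""
    gap = alpha - beta
    num = M * beta + 2 * gap - M * eps
    den = M * alpha + 2 * gap - max(M - N + 1, 0) * gap
    return num / den

# Tightening the agents' loss tolerance eps improves the guarantee:
print(worst_case_efficiency(10, 5, 2.0, 1.0, 0.10))  # -> 0.6875
print(worst_case_efficiency(10, 5, 2.0, 1.0, 0.01))  # larger bound
```

Note that the numerator decreases linearly in $\epsilon$, so the bound degrades gracefully as agents tolerate larger losses.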
\subsection{Discussion} For a specific scenario, given a requirement on the efficiency and $\epsilon$, these results can be used to determine the minimum number of partitions that guarantees this requirement. Let $M^*$ denote the value for a given bound which meets this target. The number of partitions a planner should choose to minimize the communication cost is then $\max(M^*, \frac{2}{\triangle f})$, where the other terms follow from Algorithm 1.
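One way a planner might determine $M^*$ is a direct scan over even values of $M$, checking the worst-case bound of Theorem \ref{thm:11} against the target efficiency. This is a sketch under our own naming; the paper does not prescribe a search procedure, and the bound is not monotone in $M$, which is why we scan rather than bisect.

```python
def min_partitions(target, N, alpha, beta, eps, M_max=10_000):
    """Smallest even M whose worst-case efficiency bound
    (M*beta + 2(alpha-beta) - M*eps) /
    (M*alpha + 2(alpha-beta) - [M-N+1]^+ (alpha-beta))
    meets the target; None if no M up to M_max does."""
    gap = alpha - beta
    for M in range(2, M_max + 1, 2):
        num = M * beta + 2 * gap - M * eps
        den = M * alpha + 2 * gap - max(M - N + 1, 0) * gap
        if num / den >= target:
            return M
    return None

print(min_partitions(0.75, 5, 2.0, 1.0, 0.1))  # -> 18
```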
Likewise, given the number of partitions, we can study the impact of the monetary unit $\delta$. Under the three $\epsilon$-dominant strategies discussed, the maximum bid one may submit is $\lceil \frac{\alpha-\beta}{M\delta}\rceil = \lceil \frac{\alpha-\beta}{\epsilon}\rceil$, and one agent needs at most $M\log_2 \lceil \frac{\alpha-\beta}{\epsilon}\rceil$ bits in total to convey its bids to the resource allocator. We can see the trade-off between communication cost and allocation efficiency here. If agents have a lower tolerance of loss, which means $\epsilon$ decreases, then from Theorem \ref{thm:11}, we know the worst-case efficiency will improve but the communication cost will increase. As $\epsilon$ goes to 0, the rounded quantized VCG mechanism approaches the quantized VCG mechanism. On the other hand, a large $\epsilon$ indicates the agents will only roughly approximate their utility function; subsequently, the worst-case efficiency will be low but so will the required communication cost.
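The bit count in the preceding paragraph can be tabulated directly; the following sketch simply evaluates $M\log_2 \lceil \frac{\alpha-\beta}{\epsilon}\rceil$ for a few tolerances (function name and sample values are ours).

```python
import math

def total_bits(M, alpha, beta, eps):
    """Bits needed for one agent's M bids when each bid is at most
    ceil((alpha - beta) / eps), as in the text."""
    return M * math.log2(math.ceil((alpha - beta) / eps))

# Halving eps (less tolerance for loss) raises the communication cost:
print(total_bits(8, 2.0, 1.0, 0.25))   # -> 16.0
print(total_bits(8, 2.0, 1.0, 0.125))  # -> 24.0
```

This makes the efficiency/communication trade-off concrete: shrinking $\epsilon$ grows the bid alphabet and hence the per-agent bit budget, while Theorem \ref{thm:11} shows it improves the worst-case efficiency.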
\section{Conclusion} We considered two mechanisms for allocating a resource constrained to lie in a polymatroid capacity region with limited communication exchanged: the quantized VCG mechanism and the rounded VCG mechanism. These two mechanisms utilize quantization to reduce the communication cost between the resource allocator and the agents. Using the properties of polymatroids, we showed that these mechanisms preserve the dominant strategy incentive properties of VCG to varying degrees. We also bounded the worst-case efficiency for each mechanism in different scenarios. There are many ways that this work could be extended, including allowing collusion among agents or considering revenue maximization.
\appendix
\end{document}